The History of Ethernet

Z34107 tips an article at Ars about the history of ethernet, from its humble beginnings at Xerox PARC in the mid-'70s, to its standardization and broad adoption, to the never-ending quest for higher throughput. Quoting: "It's hard to believe now, but in the early 1980s, 10Mbps Ethernet was very fast. Think about it: is there any other 30-year-old technology still present in current computers? 300 baud modems? 500 ns memory? Daisy wheel printers? But even today, 10Mbps is not an entirely unusable speed, and it's still part of the 10/100/1000Mbps Ethernet interfaces in our computers. Still, by the early 1990s, Ethernet didn't feel as fast as it did a decade earlier. Consider the VAX-11/780, a machine released in 1977 by Digital Equipment Corporation. The 780 comes with some 2MB RAM and runs at 5MHz. Its speed is almost exactly one MIPS and it executes 1757 dhrystones per second. (Dhrystone is a CPU benchmark developed in 1984; the name is a play on the even older Whetstone benchmark.) A current Intel i7 machine may run at 3GHz and have 3GB RAM, executing nearly 17 million dhrystones per second. If network speeds had increased as fast as processor speeds, the i7 would today at least have a 10Gbps network interface, and perhaps a 100Gbps one."
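The summary's scaling claim is easy to sanity-check with a back-of-envelope script; all figures below are the summary's own:

```python
# Back-of-envelope check of the summary's scaling argument.
vax_dhrystones = 1757         # VAX-11/780 (~1 MIPS)
i7_dhrystones = 17_000_000    # "nearly 17 million dhrystones per second"

speedup = i7_dhrystones / vax_dhrystones   # ~9,676x CPU speedup
scaled_mbps = 10 * speedup                 # 10Mbps Ethernet scaled the same way

print(f"CPU speedup: {speedup:,.0f}x")
print(f"Scaled Ethernet: {scaled_mbps / 1000:.1f} Gbps")   # ~96.8 Gbps
```

which lands right between the summary's 10Gbps and 100Gbps figures.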
  • by aaaaaaargh! ( 1150173 ) on Friday July 15, 2011 @10:55AM (#36775296)

    ...does not feel much faster than my MacPlus, because operating system and software makers managed to slow everything down again using "advanced software engineering techniques."

    • Odd. My Athlon II x4 feels much, much faster than my old 486 DX2. I was running SuSE on the 486 and running Debian 6.0 on my current machine. Running things like GIMP or Povray was way more painful on the 486.

    • Re:Yet my i7... (Score:5, Insightful)

      by fuzzyfuzzyfungus ( 1223518 ) on Friday July 15, 2011 @11:13AM (#36775504) Journal
      I'm guessing that cool toys like "Actual memory protection so that the stability of the system doesn't depend on every last scrap of code behaving itself", "Not having to use a 512x324 display", and "Not costing $2600" probably help dull the pain a bit...
      • and "Not costing $2600" probably help dull the pain a bit...

        You forgot to account for inflation. $2600 in the 80s is probably worth about $10,000 now, or more.

        Can you imagine spending even $5k for a computer now? Or $2k?

        • I can imagine it, but I'm not sure what I would do with it beyond multiscreen gaming, which is why I spend big pieces of money like that on things like cars instead of computers. I probably will put a Phenom II X6 into my desktop when they come down to a hundred bucks, though. (From a reputable reseller...)

        • Depending on which inflation numbers you use, $2600 in 1980 would be ~$7100 now (according to the BLS).
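The parent's figure checks out; a minimal sketch of the CPI arithmetic, assuming approximate BLS CPI-U annual averages of 82.4 for 1980 and 224.9 for mid-2011:

```python
# Rough CPI adjustment of a 1980 price into 2011 dollars.
# The index values are assumptions (approximate BLS CPI-U figures).
cpi_1980 = 82.4
cpi_2011 = 224.9

price_1980 = 2600
price_2011 = price_1980 * cpi_2011 / cpi_1980

print(f"${price_1980:,} in 1980 ~= ${price_2011:,.0f} in 2011")   # ~$7,096
```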

        • by Lumpy ( 12016 )

          "Can you imagine spending even $5k for a computer now? Or $2k?"

          Yes I can as well as a lot of others. In the pro world we do that a lot.

          My last video editor cost me $5000.00 for just the computer.
          The laptop I just bought for my field tech cost $3500.00 and it's a dual-core i3 (Panasonic Toughbook).

          If you look outside the really low-end consumer world, most of us who use computers for a living actually pay those prices.

        • Yes, I can imagine spending over $5k for a computer. So far, my next system budget has reached $3k in just hardware. Add in software and it's easily pushing north of $8k.

          The big question is why you seem so surprised that a high end system can easily pass $2k. Hell, when the IBM PC came out, it would easily cost $5k with all the options, and I know damn well what our TRS80 workstation setup cost. That easily pushed $3k between software and hardware. We even had a massive 15M external Winchester Disk for it that

    • Bullshit. Either you had one blazing fast MacPlus or something is seriously wrong with your current computer. While it may consume an inordinate amount of RAM, my current Core2 Duo based MacBook Pro is snappier than any computer I've ever used in the past (I installed an SSD, so that helps). I recall launching games and then going off to get lunch instead of waiting for them to load when I was a kid. Now I complain when there's a 5 second pause between levels in a game. It wasn't really until the P5 processor (1

    • Seriously? I had a Mac Plus - whoever purchased it opted not to have a hard drive or more than 512 kilobytes of RAM, and the machine was a serious PITA to use. My Dell Core i5 is a dream :).

      Swapping floppies reminds me of those videos of a switchboard operator trying to connect a call.

    • I used to play a game called "Harpoon 2" (a naval warfare sim) on the original Pentium. Once more than a few contacts were present, it was dog slow. Slow to the point where you'd make some moves, enter, then go get a cup of coffee while the machine thought for a few minutes. Game time would advance by 30 seconds. Repeat. Later I loaded it up on a P4, and was pleasantly surprised to find the game was actually playable.

      And that experience was not unique to that one application - just about any application tha

    • by Hadlock ( 143607 )

      Also, the Mac Plus (along with any mac running system 1-6) was running an OS coded entirely in assembly. I suspect Win7 would run dramatically faster in assembly, as well!

  • by PhilHibbs ( 4537 ) on Friday July 15, 2011 @10:56AM (#36775308) Homepage Journal

    10Mbps was huge at the time. It was much faster (proportional to need) than any of the other components in a computer system. So it's not really surprising that it hasn't quite kept pace. Many home networks are still 10Mbps, and that's plenty for two or three computers.

    • How is 10Mbps plenty? 10Mbps internet connections are becoming more common; I currently pay for 15 but average about 30Mbps. That's only going to keep increasing.

      But that's off topic. 10Mbps was in fact really fast, and yes, most likely over-specced. 1Gbps is really nice; there's not much need for anything faster to the end machine right now, at least not in the home or small business area. My switch is 1Gbps but has 10Gbps switching fabric, it seems completely sufficient for anything I need now and goin
      • by Guspaz ( 556486 )

        1Gbps is pretty decent, but for a lot of modern file sharing, it's a bottleneck. My disk-based home fileserver can transfer at several times that, and even my desktop's SSD (which can likely be had for $200-300 today) can double that.

        Do I *need* to be able to copy files at 250MB/s or 500MB/s instead of 125MB/s? No, but it's just as hard to argue that I need 1 Gbps instead of 100 Mbps. Point is, 1 Gbps is a bottleneck for an increasingly large number of consumers now, not 4 years from now.
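The MB/s figures in the parent follow from dividing nominal link speed in megabits by 8; a tiny sketch that ignores framing and protocol overhead:

```python
# Convert nominal link speeds (Mbps) into best-case MB/s by dividing
# by 8; real throughput is lower once framing/TCP overhead counts.
def mbps_to_mbytes(mbps: float) -> float:
    return mbps / 8

for mbps in (100, 1000, 2000, 4000):
    print(f"{mbps:>5} Mbps -> {mbps_to_mbytes(mbps):6.1f} MB/s")
```

So gigabit tops out at 125 MB/s, which is why a fast SSD or striped disks can double or quadruple it.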

        • Most common home NAS devices don't come anywhere close to saturating a gigabit link, and filesharing off of a standard desktop will hit about 1Gbps on top-end mechanicals.

    • 10mbps is about what most people get on wireless-G (real world throughput) and they don't even really notice. Typical home use involves internet usage and the occasional large file transfer. 10mbps is entirely usable for most people.

      I always also thought 650megs for a CD was unusually large at the time. Especially considering that at the time data CDROMs came out we were all still using floppies and the occasional pricey 100meg zip drive you had to carry around because no one else owned one.

      • by Guspaz ( 556486 )

        650 meg CDs were unusually large. Heck, my first computer with a 2x CD-ROM drive had a 160MB hard disk. But one of the wonderful things about such an enormous capacity media is that it suddenly enabled a huge range of things that weren't possible before.

        Myst? The Seventh Guest? Computer encyclopedias? None of this was really feasible before. Sure, a game like Myst looks bloated today because modern multimedia compression could offer significantly better quality in far less space (Cinepak, how I loathed and

      • No, 650MB was as much as they could cram in there with the technology of the time (remember, CDs were developed in the '70s/early '80s), and all that space was absolutely necessary.

        The goal of a CD was to achieve high-fidelity sound reproduction in a digital format. This demanded a 44.1kHz sampling rate at 16bits per sample, in stereo, so you could capture the entire human hearing range (up to 20kHz). Back then, there was no data compression technology, so it had to be uncompressed. And finally, they wanted
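The Red Book arithmetic behind that claim, as a quick sketch (74 minutes was the target playing time):

```python
# Red Book CD audio: 44.1 kHz, 16-bit, stereo, uncompressed.
sample_rate = 44_100      # Hz; > 2 x 20 kHz, per the Nyquist criterion
bytes_per_sample = 2      # 16 bits
channels = 2              # stereo

bytes_per_sec = sample_rate * bytes_per_sample * channels
audio_bytes = bytes_per_sec * 74 * 60     # 74 minutes of audio

print(f"Audio data rate: {bytes_per_sec:,} bytes/s")          # 176,400
print(f"74 minutes: {audio_bytes / 1e6:.0f} million bytes")   # 783
```

Data-mode CDs expose roughly 650MB of that raw capacity because each 2352-byte sector gives up space to an extra error-correction layer, keeping 2048 data bytes.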

    • It is plenty only because the internet connection is the bottleneck. Start doing any LAN transfers and you'll feel the hurt of 10Mbps.

      And who still uses 10Mbps??? 100Mbps has been standard for well over a decade. You'd have to be using one seriously old hub.

  • by rbrausse ( 1319883 ) on Friday July 15, 2011 @10:56AM (#36775312)

    Q: is there any other 30-year-old technology still present in current computers?

    What about SCSI? or RS-232? not as omnipresent as Ethernet but still more or less common. Happy birthday Ethernet, but you are not the only remaining dinosaur...

    • by jo_ham ( 604554 )

      I still use an RS-232 interface daily for the UV spectrometer in the instrument room. I don't even think it goes through a serial/USB converter, unlike the one on the lab microwave reactor.

      There's a lot to be said for simple, well tested interfaces if you don't need massive throughput.

      • Of course, 9600 baud was really fast back then, and some of them today use 115200 instead. You could crank a Unibus up to 9600 or maybe even 19200 if you had the I/O processor card (KMC?).

      • by Guspaz ( 556486 )

        It can still be hard or expensive to use such things.

        I recently needed an RS-232 connection to talk to the serial console of my PandaBoard. I saw that the local The Source store claimed to have four USB adapters for $20 a pop, but when I showed up they had none. The only other store within a few blocks I could find wanted $70 for theirs, which was a bit silly.

        In the end, I had to borrow a friend's ancient AMD K6 laptop (K6, no bloody 2 or 3), running Mandrake 8, which had a serial port. Luckily, the laptop

      • RS-232 is still common in time and frequency devices.
    • 300 baud modems? 500 ns memory? Daisy wheel printers?

      All of these (modems, memory, printers) exist in modern computing in faster updated form, just like ethernet.

      • I think British Telecom might still be using them. We've been on a 20Kb/s connection for the last week. It's called "broadband" apparently. The guy on the phone said we could speed things up by taking out the ethernet cable altogether, but then the computer stopped talking to the telephone and he sounded surprised. Seriously, I'd LOVE to have a 1Mb/s connection today, let alone a 1Gb/s.
      • Not really, except maybe the memory. The one that does (modems) is rarely used.

        I haven't used a telephone modem in ages. I suppose some laptops still come with them, but no one uses them, as landlines have pretty much gone the way of the dinosaur. I can't even remember the last time I've seen a real landline phone or outlet. Even at places with wired phones (hotels, offices), they're all digital now, connected to some sort of PBX or similar, not POTS. You can't unplug the phone and plug in your modem a

      • by uncanny ( 954868 )
        Funny thing is, we still use dot matrix printers where I work for log keeping purposes. When I first started this job I couldn't believe it. I thought, didn't these things die out in the '90s? But they still make them too, and slow as ever!
    • The TRS connectors we use for our basic audio output have been around for about 100 years (the patent on the first design was 1907). The three-plug (red/white/yellow) RCA connector has been around since the 1940s (although that's normally only found on specialised kit).

      S-Video and VGA were 1987, so they don't quite hit the 30 years but they're still pretty old.

    • The fan in your desktop PC hasn't changed much since then. (It's a lot different from your laptop fan, or the fans in a VAX 11/780.) And the VAX didn't "come with 2MB" of RAM or have a speed of about 1 MIPS. The canonical definition of 1 MIPS was "as fast as a VAX 11/780", and you could get different amounts of RAM; mine had 4MB in two cabinets. Princeton University's Massive Memory Machine Project later had a VAX 11/785 with 128 MB of RAM, so they could experiment with what you could do if you had "e

    • by pz ( 113803 )

      Mice are essentially the same age as Ethernet, and invented in the same place.

      I use a CRT almost daily. That technology (raster scanning CRTs, that is) is approximately as old as television (vector-based are somewhat older).

      Keyboards hark back to teletypes, pre-dating Ethernet and all that networking jazz.

      Heck, ASCII. According to Wikipedia, it was first standardized in 1963, with work starting on the standard in 1960. That would be 50 years ago. Anyone reading this

      • Anyone reading this is using ASCII.

        unfortunately. The single most stupid thing on Slashdot* is the missing UTF-8 support.

        kreuzdämlich & scheiße if you often use non-ASCII-characters...

        *) excluding content problems like TFS, TFA, TFC, ... :)

    • Q: is there any other 30-year-old technology still present in current computers?

      Sigh. Kids these days.

      32-bit IP addresses, TCP/IP, the whole Berkeley stack
      The x86 Instruction set*
      Winchester hard-disc drives
      Switching power supplies
      Dynamic RAM
      Getting bored now

      You get the idea. More things are the same (with minor evolutionary improvements) than are different.

      * OK, yes, they've been updated/added to, but the basic technology is unchanged.

    • I'd be very interested to see where you can get a current (by which I mean less than about a year old) computer with RS-232 or SCSI interfaces. SCSI was never all that widespread, at least in the PC (vs. Mac) world - I used to own a lot of SCSI gear, and always had to buy interface card. And I haven't seen a new computer with a serial port in many, many years.
    • The power network, 60Hz at 120VAC, has been around much longer.
  • Keyboards? The plug on the end changed...the keys stayed the same.

    • And some people (like me) use mechanically actuated keyboards, which still click just like the ones from 30 years ago.
      • by f8l_0e ( 775982 )
        Let me save the rest of you a bunch of time. "Nothing compares to my Model M..." "I spent 300 dollars on my Cherry and it is still worth every penny..." Ad nauseam, ad infinitum. btw: I miss my PS/2 Selectric touch keyboard.
      • Those keyboards remain popular in Japan, and annoy the ever loving shit out of me. Having to listen to an office full of people banging away on those things all day just drives me batshit insane.
    • And you'd be surprised how well the older flavors have hung on: AT is a simple mechanical adapter away from working with PS/2, and there are plenty of desktops on the shelves today with PS/2 ports, and laptops on the shelves that, while the internal wiring is purely proprietary, still have PS/2 mice and keyboards at a protocol level.(even ADB made a surprisingly late last stand in laptops, only dying for good in 2005...)
    • by skids ( 119237 )

      the keys stayed the same

      Unless you're old enough to remember back when they had 83 or 84 keys.

  • It's hard to believe, but how many of you fuckers were even born yet in the early 1980s?!
    • by es330td ( 964170 )
      Not only had I been born, I had written a Basic program to generate D&D character stats by 1980. Yes, my PUBLIC elementary school had some forward thinking administrators.
    • by quenda ( 644621 )

      Sorry, gotta ask: I've seen that "get off my lawn" so many times on slashdot, but never anywhere else - until recently I saw Clint do it in Gran Torino.
      Is that where the expression was popularised?

      And weren't those early 80's ethernets all running on coaxial 50-ohm loops with BNC connectors? hardly something you can easily use today.
      No, wait it was worse: "thick ethernet" where you punched a hole in the cable to join a node on.
      If we are going to count non-physical standards that are that old, the list is l

  • if the situation needs it. I got pissed off at our IT dude trying to bounce a wifi signal over 5 repeaters through real 3-hour fire walls and steel beams, so I swiped a box of cat3 out of the storage closet, and even though it's 10Mbps, that's 10x faster than our internet and I don't have to hear "my email doesn't work" 50 fucking times a day

    • This is why it has stuck around: for a lot of applications it is robust and provides plenty of bandwidth. Do most people really need higher speed for most things? I have a 6/3 internet connection and it provides the bandwidth necessary for Netflix HD movies. I know they aren't the best quality HD, but they're good enough on a 32-inch class (it might be a 34 or 36, I forget) TV from across the living room. This is probably the most bandwidth-intensive thing I consistently do and I don't have problems. There are t
  • .... was the loading of a still image.

    • by OzPeter ( 195038 )

      .... was the loading of a still image.

      I disagree. I definitely remember seeing an animated line drawn porn movie being rendered on an EGA display in either '87 or '88. Granted this was the late '80s, but it was still the '80s. The scary thing is how well I can remember the images, including the blue colour palette.

      • by Thud457 ( 234763 )
        There was porn on the Apple II in the late '70s.
        And definitely lineprinter cheesecake if not actual porn (60's? 50's? teletype?)
      • And ten years before that, there was the ASCII "Bambi VS Godzilla" that ran on 1200 baud "VDT"s. (That's "Video Display Terminal", as opposed to a teletype/DecWriter (paper) terminal)
    • .... was the loading of a still image.

      After performing a manual uudecode from the Usenet download.

  • If you have a bad cable/connector, 10Mbps can be much more reliable than 100Mbps.

    • True. And if 10 MBit/full-duplex isn't working for you, you can always chop that down to 10 MBit/half-duplex if need be.
  • by milgr ( 726027 ) on Friday July 15, 2011 @11:11AM (#36775488)

    In the 1980s, ethernet tended to be over Thinnet or Thicknet. I seem to recall speeds of 1-3Mbps over those technologies. Twisted pair came out somewhere around 1990 at 10Mbps.

    Today I mostly use 1Gbps, but deal with servers that are 10G. 40G and 100G will be standard in datacenters in a few years.

    The blurb indicates that Ethernet is the only technology that we are using from 30 years ago. Back then all the machines I used had memory, CPUs, displays, and keyboards. The particular technology changed - just as Ethernet technology changed.

    • by 0123456 ( 636235 )

      The blurb indicates that Ethernet is the only technology that we are using from 30 years ago. Back then all the machines I used had memory, CPUs, displays, and keyboards. The particular technology changed - just as Ethernet technology changed.

      To be fair, you can still plug a modern Ethernet card into a 10Mbps Ethernet network and it will work; the ancient technology is still built into the hardware. You can't plug a Z80 CPU or a 500ns DRAM or a Sinclair rubber keyboard into a modern PC... heck, you can't even plug a PS/2 keyboard into my new server, it has to be USB.

    • Your point is valid, but to be generous, I think the point of this post is that with appropriate cabling & server setup, a machine from 30 years ago that understood Ethernet could talk to one today. CPUs, memory etc on the other hand didn't survive the years intact - the concept might remain, but the technology behind it is not compatible.

    • In the 1980s, ethernet tended to be over Thinnet or Thicknet. I seem to recall speeds of 1-3Mbps over those technologies.

      Thinnet and thicknet were both 10 megabit.

      The big advantage of 10BASE-T was not speed; it was the fact that the cheap cable made it feasible to star-wire it. That led to higher reliability and the ability to use switches to increase the effective bandwidth by only sending data where it needed to be sent.

      The particular technology changed - just as Ethernet technology changed.

      The great thing about Ethernet is that while the technology has changed, there is a high degree of compatibility between equipment of different ages. You can take an old piece of test gear with an AUI port,

  • Supply and demand addresses this. We simply do not need a great deal of network speed at this time. For years network bandwidth stagnated simply because no one had a burning need to do more with it. Then our workstations became capable of processing more data faster, thus we moved from 100mbit to 1gig over a very short period of time.

    I'd be willing to bet that there is a correlation between HD sizes and network bandwidth, now that I think about it.

  • That 10Mbps Ethernet was hella fast at the time.

    Remember, everything was text. No fancy graphics or sounds (except for the single beep tone) ... so a terminal wired right into the mainframe at 9600 baud on a serial line was an absolutely screaming connection. Most people couldn't read at the scroll rate of 9600 baud anyway.

    Hell, in 1988 when I started university, we still used line editors ... oddly enough, I think it actually was on a VAX 11/780. With a line editor, a 300 baud modem was a usable speed.
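The scroll-rate claims above can be sketched numerically, assuming 8N1 framing (10 bits per character on the wire) and 80-column lines:

```python
# Terminal scroll rate at classic serial speeds, assuming 8N1 framing
# (1 start bit + 8 data bits + 1 stop bit = 10 bits per character).
def chars_per_sec(baud: int, bits_per_char: int = 10) -> float:
    return baud / bits_per_char

for baud in (300, 9600):
    cps = chars_per_sec(baud)
    print(f"{baud:>5} baud -> {cps:4.0f} chars/s, ~{cps / 80:.1f} lines/s")
```

At roughly 0.4 lines per second, repainting a full 24-line screen at 300 baud would take the better part of a minute, which is exactly why line editors stayed usable at that speed.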

  • If network speeds had increased as fast as processor speeds, the i7 would today at least have a 10Gbps network interface, and perhaps a 100Gbps one.

    This sounds similar to the "If cars improved like computers" joke.

  • If network speeds had increased as fast as processor speeds, the i7 would today at least have a 10Gbps network interface, and perhaps a 100Gbps one."

    If ISPs were not trying to screw their customers out of every penny, 10G networks WOULD be commonplace.

  • I didn't know the i7 CPU even had an Ethernet interface; I thought it was the motherboard or the add-on card that gave me my network connection. Huh, learn something new every day.
  • by Animats ( 122034 ) on Friday July 15, 2011 @12:45PM (#36776666) Homepage

    Now this really dates me. But in 1975, I got a tour of Xerox PARC when I was taking a summer course in computer architecture at UC Santa Cruz. Alan Kay showed us some of the early Alto machines. They were still having trouble getting a smooth phosphor coating on the custom-made page-sized CRTs. We saw the PARC 3mb/s Ethernet, which Kay described as "an Alohanet with a captive ether," the first networked file server, and the first networked laser printer. It was clear this was the future, if the price could come down by about a factor of 10. Kay was hoping that some day a workstation might cost as little as a grand piano.

    At Ford Aerospace, I was responsible for putting in the first Ethernet, around 1981. It was mostly "thick Ethernet" at 10mb/s. Ethernet cables weren't standard items, but Ford Aerospace routinely built cables for satellite ground stations, so we had the appropriate cables made up and pulled through the telephone ducts run through the building's concrete floors. I checked out a time-domain reflectometer from the measurement equipment pool and took a look at the cable. Cables ended in PL-259 coax connectors, and sections were joined with a barrel. The Ethernet transceivers had SO-239 connectors on both ends, so the cable went through them. We used a vampire tap once or twice, but it didn't work out as well. The TDR showed a transceiver as generating almost no reflections. But bending the cable tighter than a 1' radius caused a noticeable impedance mismatch.

    We were bothered that coax Ethernet wasn't a balanced system. There's a DC component to the signal, which means you can't use decoupling capacitors between sections to get rid of hum. We spent time on grounding issues and looked at the cable signal with scopes a lot. Repeaters were very expensive then, and we were trying to avoid them.

    The network interfaces were mostly 3Com boards. Our original network consisted of a PDP 11/70, a PDP 11/45, a VAX 11/780, and a PDP 11/34 used as a gateway to a 9600 baud leased line "backbone link" to Ford HQ in Dearborn MI. We later added four Sun 2 workstations and a Sun server. Everything ran TCP/IP. Ford HQ had a similar link to Ford Aerospace in Colorado Springs, which had an ARPANET IMP. So we could get to the ARPANET over a 9600 baud shared backbone. We could FTP files instead of mailing tapes! I used to Telnet into Stanford's machines over that link.

    I did a lot of work on 3COM's TCP/IP implementation, which originally was totally incapable of coping with a mix of speeds in the network. That's why I have those RFCs on network congestion with my name on them. This was before telephone de-regulation, and that 9600 baud leased line was expensive.

    The article mentions that "There used to be a lot of fear, uncertainty, and doubt surrounding the performance impact of collisions." There was a period around 1984-1990 when coax Ethernet performance in practice was much worse than theory predicted. The problem was finally figured out by Wes Irish at Xerox PARC. It turns out that the defective design of a SEEQ Ethernet interface chip was causing the problem. As the state machine of the chip transitioned at the end of receiving a packet, there was a period of a few nanoseconds when the chip momentarily turned on the transmitter power, jamming the coax. This reset the "quiet time" timer on all the other stations on the cable, causing them to ignore any following packet for several microseconds, after which they dropped back to the proper "look for sync" state. Back-to-back packets thus lost the second packet, which caused retransmissions and killed performance, but didn't show up as a "collision" to the controller.

  • They were essentially similar to early USB:
    "The basic design of the transputer included serial links that allowed it to communicate with up to four other transputers, each at 5, 10 or 20 Mbit/s -- which was very fast for the 1980s. Any number of transputers could be connected together over even longish links (tens of metres) to form a single computing "farm"."

    For a time in the 1980s, with five transputers (four borrowed), using a link endpoint to drive a robot, I had the fastest (or maybe second fastest) computer (cluster) on Princeton University's campus (in a robotics lab I managed). But it was awkward to program it in Occam. And eventually I had to return the borrowed transputers.

    What the transputers could have become... Sad they ended up in the dustbin of history...
