Happy 40th Birthday, Internet RFCs
WayHomer was one of several readers to point out the 40th birthday of an important tool in the formation of the Internet, along with a look back by the author of the first of many. "Stephen Crocker writes in the New York Times, 'Today is an important date in the history of the Internet: the 40th anniversary of what is known as the Request for Comments (RFC).' 'RFC 1 — Host Software' was published 40 years ago today, establishing a framework for documenting how networking technologies, and the Internet itself, work. Distribution of this memo is unlimited."
Jon Postel (Score:5, Interesting)
I hadn't heard much about him before, but now he is a personal hero of mine.
It is a testament to his work that the structure he established for documentation has lasted so long and remains pertinent a decade after his passing.
Great example of why patents don't work. (Score:3, Interesting)
Do newer apps even follow RFCs anymore? (Score:1, Interesting)
With everyone trying to create the newest and greatest thing to make money from, do people even follow or refer to RFCs for compliance?
Try proxying and recreating most protocols or data sessions: many will break on the other side of a proxy built according to the RFC specifications. HTTP versus out-of-band garbage tunneled over port 80 is one of the better examples of how developers never seem to follow RFCs anymore.
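To put that concretely, here's a quick sketch (Python; the pattern is my simplification of the RFC 2616 Request-Line grammar, and the function name is my own) of the first check an RFC-strict proxy effectively makes. Traffic on port 80 that isn't a real HTTP request line fails immediately:

    import re

    # Request-Line per RFC 2616: Method SP Request-URI SP HTTP-Version CRLF.
    # (Simplified: real methods are arbitrary tokens; this accepts the common ones.)
    REQUEST_LINE = re.compile(rb'^[A-Z]+ \S+ HTTP/\d\.\d\r\n')

    def looks_like_http(first_bytes: bytes) -> bool:
        """True if the opening bytes form a plausible HTTP Request-Line."""
        return REQUEST_LINE.match(first_bytes) is not None

    # A compliant client passes; custom binary tunneled over port 80 does not.
    assert looks_like_http(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n")
    assert not looks_like_http(b"\x16\x03\x01 some custom binary protocol")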
Re:Great article (Score:5, Interesting)
Honestly, you would think these don't exist when you look at the state of things and how little anyone seems to regard them... This is not flamebait: how many of you sysadmins out there have had difficulty with people not following RFCs, with e-mail rejecting or being rejected, piss-poor networks built, or just flat-out disregard for the standards? The creators did a wonderful thing that makes my life easier, but it is almost like an idealistic goal that will never be reached, because there are too many fake admins out there. Hell, I'm lucky when I walk in the door at a job if anyone has even heard of the term RFC.
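To make the e-mail complaint concrete: one classic cause of rejection is line endings. RFC 5321 requires SMTP lines to terminate in CRLF, and strict servers reject senders that emit bare LF. A minimal sketch of that check (the function name is mine):

    def smtp_line_ok(line: bytes) -> bool:
        """RFC 5321 requires SMTP lines to end in CRLF, with no stray
        CR or LF inside the line. Strict servers reject anything else."""
        body = line[:-2]
        return line.endswith(b"\r\n") and b"\r" not in body and b"\n" not in body

    assert smtp_line_ok(b"EHLO mail.example.com\r\n")      # compliant
    assert not smtp_line_ok(b"EHLO mail.example.com\n")    # bare LF: rejected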
Re:Great article (Score:5, Interesting)
I particularly liked the description of his visit to Bangalore -- it goes to the heart of why we do open source.
For those who didn't read TFA, this refers to '... as part of the visit I was introduced to a student who had built a fairly complex software system. Impressed, I asked where he had learned to do so much. He simply said, "I downloaded the R.F.C.'s and read them."'
There are a lot of stories like this. The one I like to tell is about a number of projects that I worked on, where part of my job was making our software work over the OSI protocols. What happened repeatedly was that the ISO specs weren't available for downloading, so we had to buy a printed copy. This inevitably entailed making out a purchase order, getting it approved by the Right People, sending it off, and waiting for the arrival of the package.
In the meantime, we'd work on what we could, which was the IP-based part of the code. This meant going to an online archive and downloading the relevant RFCs, typically a matter of a few minutes, with no signatures required from anyone. By the time the ISO docs arrived a few weeks later, we'd have the IP version written, debugged, and stuck into the libraries for the use of other developers or customers. Then we could start working on the ISO code.
The result, of course, was that everyone would end up going with the IP-based stuff, since it appeared first and was the code that was thoroughly tested. It also helped a lot that the Internet had lots of forums (mostly email at first) where one could ask dumb questions and get actual answers from others who had already stumbled around and found the answers (and wanted to show off their superior knowledge). Such forums never developed for ISO, at least not anywhere we could readily find.
In this case, the proper term isn't really "open source"; it's "open publication". This is what has made modern science the success that it is, and it's much of what put the Internet ahead of its competitors. Many people argued that several other networking schemes were better technically. This claim has been made for both DECnet and ISO, and they may have been right. But it doesn't matter; IP/UDP/TCP/... was good enough, and its specs were published openly. This meant that anyone could quickly grab them and start coding; you didn't need permission from anyone to read and use them.
Of course, "open source" is based on the same idea. If you make your information easily available to everyone, they can build on your ideas. This gives your ideas dominance over other "for sale" or "by permission only" ideas, even if someone else's hidden ideas happen to be slightly better.
I've always wondered whether DECnet was as good as its proponents claimed. But even when I worked as a contractor at DEC, I wasn't allowed access to the DECnet specs, so I guess I'll never know. I'm of mixed mind over ISO, which I learned a little about. Some parts are probably better than IP, and others aren't, but without widespread deployment we'll probably never really know how ISO would work with a billion users.
Re:RFC0? HELO computer, NE1 127.0.0.1? (Score:3, Interesting)
Check out RFC 208 to see how addressing was actually done in the old days.
6 bits of IMP (essentially the network address)
2 bits of host
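In other words, the whole address fit in one byte. A toy sketch of that layout (helper names are mine; bit placement follows the early convention, where host 1 on IMP 1 was address 65):

    # Toy encoding of the old 8-bit ARPANET address described above:
    # low 6 bits select the IMP, high 2 bits the host port on that IMP.

    def pack_address(host: int, imp: int) -> int:
        assert 0 <= host < 4, "only 2 bits of host"
        assert 0 <= imp < 64, "only 6 bits of IMP"
        return (host << 6) | imp

    def unpack_address(addr: int) -> tuple[int, int]:
        return addr >> 6, addr & 0x3F   # (host, imp)

    assert pack_address(1, 1) == 65
    assert unpack_address(65) == (1, 1)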
Heh. I remember reading several versions of the debates leading up to an expansion of packet fields some years later. The stories generally describe it as a debate between the "conservatives" who thought a small host field would suffice, and the "radicals" who advocated a larger size for when the Net would be a lot bigger than the conservatives expected. Finally, the story goes, the radicals won out - and they went with a full 8-bit host number.
That's not the end of the story, of course, because it hasn't ended yet. For years now we've been debating the wisdom of going to IPv6, with a 128-bit host address. But so far it's the conservatives who have won, arguing that we're doing just fine with a 32-bit address, switching over would be a huge expense, the larger addresses just mean larger packets and thus slower data throughput, and all the other reasons we've read here and in other tech forums.
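For a sense of the scale gap in that debate (my arithmetic, not from the article):

    # Rough scale of the two address spaces in the IPv4-vs-IPv6 debate.
    ipv4_space = 2 ** 32    # 4,294,967,296 addresses
    ipv6_space = 2 ** 128   # about 3.4e38 addresses

    print(f"IPv4: {ipv4_space:,}")
    print(f"IPv6: {ipv6_space:.3e}")
    print(f"IPv6 is 2^96 (~{ipv6_space // ipv4_space:.2e}) times larger")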
People do have a way of putting off upgrades until the old system is falling apart from the overload. Even then, they prefer all sorts of kludgy ad-hoc patches to the current system, rather than moving to a cleanly-designed higher-capacity system.