What Vint Cerf Would Do Differently (computerworld.com)
An anonymous Slashdot reader quotes ComputerWorld:
Vint Cerf is considered a father of the internet, but that doesn't mean there aren't things he would do differently if given a fresh chance to create it all over again. "If I could have justified it, putting in a 128-bit address space would have been nice so we wouldn't have to go through this painful, 20-year process of going from IPv4 to IPv6," Cerf told an audience of journalists Thursday... For security, public key cryptography is another thing Cerf would like to have added, had it been feasible.
Trouble is, neither idea is likely to have made it into the final result at the time. "I doubt I could have gotten away with either one," said Cerf, who won a Turing Award in 2004 and is now vice president and chief internet evangelist at Google. "So today we have to retrofit... If I could go back and put in public key crypto, I probably would try."
Vint Cerf answered questions from Slashdot users back in 2011.
IoA (Score:2, Insightful)
Is 128 bits enough? We really need to have enough bits to assign every atom in the universe an IP address.
Re: (Score:2)
Are you sure that one IP address per atom is sufficient? Just to be safe, throw in another address byte. So, 272 bit addresses it will be.
Re: (Score:2)
we should really round that up to a power of 2, so make it 512 bits.
Then we can assign addresses to the universes next door too, once we can get there.
Re: (Score:2)
Why not just make it 640k bits? 640k ought to be enough for anybody.
Re:IoA (Score:4, Interesting)
At the time 32 bits seemed like a lot of data to send.
On a 300bps modem it would take a noticeable fraction of a second. 64bit or 128 bit would take much longer, and slowdown nearly everything. Also, RAM was small (think kilobytes); having to store that much data would mean sacrificing it somewhere else in the code.
In short, if it were implemented back then, it would never have caught on, and we would be using a different networking protocol now. Perhaps one with much more problematic limitations.
Today, 128-bit addresses give us the ability to hand out more IP addresses than could ever be used in the universe, which really makes sure that just randomly picking an address will probably not create a duplicate.
Re: (Score:2)
Today, 128-bit addresses give us the ability to hand out more IP addresses than could ever be used in the universe, which really makes sure that just randomly picking an address will probably not create a duplicate.
That would be well and fine if most IPv6 addresses didn't have a 64-bit or even 80-bit prefix, identical for everything routable at the endpoint. Then there are DHCP addressing schemes that use the MAC as part of the address, further reducing it.
Sure, it can be planned better, but we're doing our damndest already to use up parts of the IPv6 address space through thoughtless assignments and rules.
Re:IoA (Score:4, Informative)
That would be well and fine if most IPv6 addresses didn't have a 64-bit or even 80-bit prefix, identical for everything routable at the endpoint.
That 64-bit network prefix is the equivalent of 4 billion entire IPv4 internets—and each "host" in each of those internets contains its very own set of 2**32 IPv4 internets in the 64-bit suffix. Quadrupling the number of bits from 32 to 128 means raising the number of addresses to the fourth power (2**32 vs. 2**128 = (2**32)**4). We can afford to spare a few bits for the sake of a more hierarchical and yet automated allocation policy that addresses some of the more glaring issues with IPv4, like the address conflicts which inevitably occur when merging two existing private networks.
Think of it this way: If we manage to be just half as efficient in our use of address bits compared to IPv4, it will still be enough to give every public IPv4 address its own private 32-bit IPv4 internet. Right now the vast majority of IPv6 unicast space is still classified as "reserved", so we have plenty of time to adjust our policies if it turns out that we need to be more frugal.
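The arithmetic here is easy to sanity-check. A quick sketch (plain Python; nothing assumed beyond the bit widths named above):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4 = 2 ** 32
ipv6 = 2 ** 128

# Quadrupling the bit count raises the address count to the fourth power.
assert ipv6 == ipv4 ** 4

# A 64-bit network prefix leaves a 64-bit host part: 2**32 entire
# IPv4 internets' worth of hosts on every single /64 network.
assert 2 ** 64 == ipv4 * ipv4

print("checks out")
```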
Then there are DHCP addressing schemes that use the MAC as part of the address, further reducing it.
Automatic address assignment (based on MAC or random addresses or whatever) comes out of the host-specific suffix, not the network prefix, so it doesn't reduce the number of usable addresses any more than the prefix alone. It does imply that you need at least a 64-bit host part in order to ensure global uniqueness without manual assignment, but the recommended 64-bit split between network and host was already part of the standard.
Re: (Score:2)
1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent
Uhhh....
2^32 = ~4.3 billion
2^64 = ~18 billion billion
So they are only "roughly equivalent" if by that you mean "within 10 orders of magnitude of each other".
I think you meant "one IPv4 Internet (4.3 billion hosts) where each host NATs an entire IPv4 internet vs. one IPv6 /64 prefix (4.3 billion IPv4 Internets) are exactly equivalent".
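To put numbers on the "orders of magnitude" quibble (quick sketch; the figure is just log10 of the ratio):

```python
import math

ipv4_hosts = 2 ** 32      # ~4.3 billion
slash64_hosts = 2 ** 64   # ~18 billion billion

ratio = slash64_hosts // ipv4_hosts
orders = math.log10(ratio)
print(f"ratio: {ratio:,}")                    # 4,294,967,296
print(f"~{orders:.1f} orders of magnitude")   # ~9.6
```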
In practice you can't assign anything smaller than a /64
-- snip --
It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance.
While true, you will get that /64 assigned to you within a /64 prefix. In other words, there are 18 billion billion prefixes each with 18 billion
Re: (Score:2)
When people talk about an IPv6 address being 128 bits they're technically true but they miss the bigger picture. In practice you can't assign anything smaller than a /64 so really there are only 64 bits of address space as we think of it today. 1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent. It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance.
No worries, ISPs will soon figure out how to issue single IPv6 addresses or at least block all but one unless you pay for the feature of having a complete allocation.
Re: (Score:2)
NAT is still possible with IPv6. It would really just force people to "break" the internet if they tried something like that.
Re: (Score:2)
NAT is still possible with IPv6. It would really just force people to "break" the internet if they tried something like that.
Sure, NAT is still possible, although I would not put it past them to eventually try getting people charged with violations of the CFAA or some other law for using it without permission.
And why wouldn't they want to break the internet? We are after all talking about companies like AT&T who once told me that they were blocking IPv6 tunneling because otherwise customers could get free static IPv6 IPs without paying for them.
Re: (Score:2)
64bit or 128 bit would take much longer, and slow down nearly everything.
FTFY.
Slowdown is a noun. Slow down is the verb you wanted.
I wouldn't have (Score:2)
I personally wouldn't have changed much. Today, in retrospect, yes, these things are more important. But in the early days, adoption was the most important goal of the internet. IPv4 is a relatively easy spec to implement. This goes for routing tables, system administration, hardware support, etc. Yes, the migration to IPv6 is a pain in the ass, but 20-30 years ago the learning curve for admins and hobbyists, as well as the implementation burden on manufacturers, simply would have made anything more ambitious infeasible.
Re: (Score:2)
96 bytes was a lot of data in the mid-80s. On a 1200 bps connection, that's almost an entire second per packet. When I was a college student in the early 90s, we had 2400 bps modems in the dialup pool, and the entire university (~3000 students) lived on a 56k leased line. Nowadays, that's trivial. In 1984, not so much.
OTOH we didn't run SLIP (PPP wasn't invented yet) over our 1200BPS/2400BPS modems, at least I don't know of anyone who did, except as a test. We ran terminal software to login to the university computers remotely. So address space wouldn't have impacted that much. In fact, where I was, TCP/IP didn't become a thing until we were connected to the internet proper. (1988), but of course YMMV.
That said, 16 bytes for an address would probably not have flown. But 6 I can sort of see.
Re: (Score:1)
That is a common mistake from people not in the field. Packet header size is a concern, but it has never been the worst, or the key concern.
IPv6 requires at least *double* the TCAM size (it actually requires 4x the TCAM of IPv4, but most routers will only hardware-route/forward based on the top 64 bits; anything else degrades performance to a crawl by being bumped to the RE CPU for software-based routing). That would be the reason most hardware-based routing hardware has half the route capacity in IPv6 when
Re: (Score:2)
ipv6 doubles the packet header size.. back then, that would've had significant impacts on performance.
Yeah, particularly since what we had then were 8 and 16 bit CPUs, and 32 bit was just entering the market. So that would have had a significant hit on an already slow internet at the time. But for those sort of applications, one would already have implemented them in FPGAs and ASICs.
I agree w/ darkain - we would not have had the opportunity to learn everything we did w/ IPv4. Also, another thing worth considering - at the time IP was conceived in DARPA, the idea was that it would be used only by the
Re: I wouldn't have (Score:2)
Especially given how we shed two digits from the year to save memory, I kinda doubt having quadruple the number of bytes in an IP address would fly.
Still though, it occurs to me that the first internet protocol implementation could have had an even longer address space than IPv6 without mandating (at the time) unfathomable memory needs. Why didn't they do that?
32 bits address (Score:4, Insightful)
Vint Cerf did an awesome technical and visionary job*
*and still does! (at Google)
Re: (Score:3)
Yeah, at the time, the thought of needing more than four billion internet addresses probably seemed a bit ludicrous when it was still mostly just government and university mainframes connecting. That number must have seemed like a nearly never-ending well, especially seeing how generously it was initially carved up into massive blocks. Some of the earliest corporations and universities to receive allocations still hold a relatively ridiculous number of Class A blocks (16+ million addresses each).
We'd
public routing table vs connection tuple (Score:2)
There could have been an optional 32-bit client sub-address ignored by the public routing backbone.
Then, for most purposes, non-backbone routers need two routing tables: a routing table for the public network (if more complex than a few simple gateways), and an organization-local internal routing table (with 32-bit addresses, just like the public table).
The actual problem is that each TCP/IP con
Re:public routing table vs connection tuple (Score:4, Interesting)
I always thought the NetWare IPX/SPX network numbering system was quite clever -- 32 bits of network addressing and a 48-bit node address, usually based on MAC addresses.
I always think of how much simpler IP would have been with a similar structure -- subnets could have scaled easily without renumbering or routing when common /24 limits were hit. The use of MAC addresses for node addresses would have eliminated DHCP for the most part or essentially automated it as clients would have only had to query for a network number, not a node address.
Re: (Score:2)
There are 3 ways of assigning IPv6 addresses. One is SLAAC, which is the usage of MAC addresses in EUI-64. Another is privacy extensions. The third is the use of DHCP v6, which would be analogous to DHCP v4 in IPv4. The third would be the ideal way to do it - manage the networks logically, and avoid needing >1 /64.
'NAT' on IPv6 is a Prefix translator, which avoids the pitfalls of NAT on IPv4, while retaining its advantages. It is a 1:1 relationship b/w a global unicast address and a site local add
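The SLAAC/EUI-64 derivation mentioned above can be sketched like this (a simplified illustration of the Modified EUI-64 construction: split the MAC, insert fffe, flip the universal/local bit; the function name and example MAC are mine):

```python
def mac_to_eui64_interface_id(mac: str) -> str:
    """Derive a Modified EUI-64 interface ID from a 48-bit MAC.

    Steps: split the MAC in half, insert 0xFFFE in the middle,
    and flip bit 1 (the universal/local bit) of the first octet.
    """
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group into four 16-bit hextets, IPv6 style.
    hextets = [f"{eui64[i] << 8 | eui64[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(hextets)

# Example with an arbitrary MAC:
print(mac_to_eui64_interface_id("00:25:96:12:34:56"))
# -> 0225:96ff:fe12:3456
```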
Re: (Score:2)
I don't agree that 4 billion was ludicrous. Just theoretically. I mean, the world had more than a billion people then, which could easily be imagined as growing to 4 billion. If one imagined that each person would have something in his hand that would require an IP address, there was the case right there that 4 billion wouldn't suffice.
64 bits would have - if we had that, then we could have had Class B sized subnets for everything, and everything beyond the first 16 bits would have been the network address.
Re: (Score:2)
You're welcome to pay for the memory upgrades to the world's routers to hold the routing table full of those /32s (16M per /8 that you've "freed"); the current full routing table is ~620k entries.
And as we all remember, 640k should be enough for everyone!
Still leaves 20k though, as long as we're mixing our units...
Encryption was expensive (Score:1)
Computationally, that is; I don't think it would have flown in the early 90s, and the adoption rate would have been the same as it was with SSL (and TLS). It wasn't so long ago that I actually had to provide resource impact reports on servers where everything would be encrypted. Nowadays (unless you deal with extremely large volumes), encrypting (using a symmetric key, that is) doesn't have a significant impact. Web servers, load-balancers, etc. can support it without breaking a sweat.
How dare you! (Score:1)
I really hate these people who take credit for my brilliant invention. Something you peons refer to as "the internets."
Oh well, at least I can still take full credit for my hockey stick.
Sincerely,
Al Gore
Re: (Score:2)
First, Al Gore never said that.
That's okay, because GP isn't actually Al Gore.
UTF-8 style would have been better (Score:4, Insightful)
So the 1992 UTF-8 specification didn't exist when the 1983 IP specification was created, but they could have done:
First 2^31: 0(31)
Next 2^59: 110(29) 10(30)
Next 2^88: 1110(28) 10(30) 10(30)
Next 2^117: 11110(27) 10(30) 10(30) 10(30)
And just declared that for now it's 0(31) - still 2 billion addresses but the sky is the limit. Heck, they might even have used shorts (16 bit) that way and declared that hardware/software should update as the need approached:
First 2^15: 0(15)
Next 2^27: 110(13) 10(14)
Next 2^40: 1110(12) 10(14) 10(14)
Next 2^53: 11110(11) 10(14) 10(14) 10(14)
(...)
Next 2^126: 1111111111111111(0) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14)
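The 32-bit variant of this scheme is straightforward to sketch as an encoder (a hypothetical illustration of the proposal as described above, not any real protocol; the function name is mine):

```python
def encode_addr(n: int) -> list[int]:
    """Encode an address as one or more 32-bit words, UTF-8 style.

    Tier 1: 0(31)                          -> addresses < 2**31
    Tier 2: 110(29) 10(30)                 -> up to 2**59
    Tier 3: 1110(28) 10(30) 10(30)         -> up to 2**88
    Tier 4: 11110(27) 10(30) 10(30) 10(30) -> up to 2**117
    """
    if n < 2 ** 31:
        return [n]
    for extra, lead_bits, prefix in ((1, 29, 0b110), (2, 28, 0b1110), (3, 27, 0b11110)):
        if n < 2 ** (lead_bits + 30 * extra):
            words = []
            for _ in range(extra):
                # Continuation words carry 30 payload bits under a '10' marker.
                words.append((0b10 << 30) | (n & (2 ** 30 - 1)))
                n >>= 30
            lead = (prefix << lead_bits) | n
            return [lead] + words[::-1]
    raise ValueError("address too large for 4 words")

print([hex(w) for w in encode_addr(42)])   # ['0x2a']  (one word, like today)
print(len(encode_addr(2 ** 58)))           # 2 words
```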
As for PKI, that couldn't possibly have happened. US export regulations wouldn't have allowed it at the time; this was long before Zimmermann and PGP.
Re:Encode as ASCII (Score:4, Insightful)
The major problem your concept would cause is the massive increase in CPU load required to process text instead of simple bit masks. It may not matter for processing a couple of requests a second, but a core router handles trillions of packets, and the text comparison process would require massive CPU capacity.
IP address space was designed for very rapid and low processor load bit masking to do route matching. To decide whether a route applies to an address, the netmask is applied to get rid of the more specific parts of the address and reduce the comparison to a simple equality operation.
We see IP addresses as a string of period-separated numbers, but the address is really the whole 4-byte number.
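The mask-and-compare route matching described above can be sketched in a few lines (using Python's standard ipaddress module purely for parsing; the match itself is one AND and one equality test):

```python
import ipaddress

def matches(addr: str, network: str) -> bool:
    """Apply the netmask to strip the host-specific bits, then compare."""
    net = ipaddress.ip_network(network)
    a = int(ipaddress.ip_address(addr))
    mask = int(net.netmask)
    return (a & mask) == int(net.network_address)

print(matches("192.168.1.7", "192.168.1.0/24"))   # True
print(matches("192.168.2.7", "192.168.1.0/24"))   # False
```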
Additionally, your concept prevents the multiple path topology of the internet that results in the high resilience to damage we all know and love. Your system results in a single path into any domain space and that domain space is an invisible blob to the rest of the world.
Re: (Score:2)
Variable length addresses aren't a new idea, and there were some advocates for them during the IPv6 design process. However, a lot of the IPv6 changes were made to streamline the IP header, to make it faster for intermediate systems to process. Using a variable-length field would add complexity. More general isn't necessarily better, else we would be using BigInts everywhere.
Re: (Score:1)
I thought it might have been possible to do some sort of nested packet idea. Something like 0-7FFFFFFF are global addresses, 80000000-FFFFFFFF are lo
48 bit IPs would have been nice (Score:3)
It would be enough to support even the internet of things, possibly with some very minor NATing, and it would have spared us the 128-bit monsters. 48 bits is what early Ethernet used, and it seemed like a good number.
But I can understand that when IP was developed there were only a few thousand computers in the world likely to be connected, so 16 bits would seem adequate. Using 32 bits would have been a bit of a stretch at that time. Memory and bandwidth were expensive back then.
Re: (Score:2)
1976...! You were only 40 years old!
Re: (Score:2)
Ummmmmmm... Have you ever seen an IP (v4) address?
Re: (Score:2)
48 or 64 bit addresses would have been enough, IPv6 only used 128 bits because its designers wanted to be really, really, really sure that we wouldn't run out, this time, for sure. The initial classful address allocations didn't help, but we eventually reached a point where a single wasted class A only puts off exhaustion by a few months. What broke everything in the end was the sheer enormous number of addresses used by mobile networks. Now we have enough bits that we can use Ethernet MACs as part of routa
Re: (Score:2)
In a classful IPv4 setup, the max you have is a Class A, or 2^24 addresses. I've never heard of a Class A being supernetted to support a larger network.
Therefore, I think that in IPv6, having a whole 64 bits dedicated to the interface ID is a total waste, even allowing for autoconfiguration. Even if they had assigned the bottom 32 bits to the interface ID, they'd have a subnet equal to the entire IPv4 network (sans NAT'ed ones). Then the entire top half of the address would have been the Global Prefix, and the sub
Bit fields (Score:3)
Remember boys and girls, always make bit fields extensible. He could have reserved the last part of the upper range (e.g. the ones presently wasted on multicast) for an extended range to be defined in the future. I.e. an address in the upper 224.0.0.0 range is followed by another 32 bits whose meaning would be IPv6-like.
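A hypothetical sketch of a parser for such an escape scheme (the escape range and the combining rule are illustrative, per the comment above; nothing here is a real protocol):

```python
import struct

ESCAPE_BASE = 0xE0000000  # 224.0.0.0: treat the top of the range as an escape

def parse_addr(buf: bytes) -> tuple[int, int]:
    """Return (address, bytes consumed).

    Addresses below the escape range are plain 32-bit. At or above it,
    a second 32-bit word follows, giving an extended address.
    """
    (word,) = struct.unpack_from("!I", buf, 0)
    if word < ESCAPE_BASE:
        return word, 4
    (ext,) = struct.unpack_from("!I", buf, 4)
    # Extended address: leftover bits of the escape word plus the extension.
    return ((word - ESCAPE_BASE) << 32) | ext, 8

print(parse_addr(struct.pack("!I", 0x0A000001)))         # plain, 4 bytes
print(parse_addr(struct.pack("!II", 0xE0000002, 0x01)))  # extended, 8 bytes
```

Old routers that only understood 32-bit addresses would simply never see the extension used, which is exactly the objection raised in the replies below: you can't tell which hops understand the long form.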
Re: (Score:3)
That probably would not have made much of a difference. People would have assumed that this would never happen and would have made practical implementations assuming a fixed 32-bit space. By the time it became a practical problem, we would have had a creep of devices that do not follow the norm, and managing that would be a nightmare.
Just see the sorry state of UTF-8. We still have so many code bases that are not UTF-8 compliant, even though we saw the need for it over 15 years ago.
Re: (Score:2)
Not any more of a nightmare than the IPv4 to IPv6 conversion which has been in the making for twenty years now.
Re: (Score:2)
Actually, I'd say it would be more of a nightmare. Here we have the devils we know. In that scenario, it would be hellish even *knowing* what can't handle things.
Every hop in the network not being certain whether the next hop could or could not handle the conversation would be a nightmare in the making.
Re: (Score:2)
That probably would not have made much of a difference. People would have assumed that this would never happen and would have made practical implementations assuming a fixed 32-bit space. By the time it became a practical problem, we would have had a creep of devices that do not follow the norm, and managing that would be a nightmare.
Yes, but it would have put more pressure on the existing user base like Y2K compliance to follow the "full" standard. Right now it's like we're on IPv4, tagged WORKS4ME so why bother with IPv6. But I know I've made many more "shortcuts" than limiting something to 4 billion...
Re: (Score:2)
I think you could solve the old, incompatible device issue with something like NAT. It would be horrible of course, but so is IPv6. And most stuff would be updated - IPv6 has been supported since the 90s.
Re: (Score:2)
It seems funny, because that *ultimately* is the reality of IPv6.
The difference is that in the above scheme, *knowing* whether a hop in the network could or could not do 'big addresses' would be more difficult. With IPv6, it allows things to be very clearly delineated whether the communication is IPv6 capable or not.
The biggest obstacle to IPv6 was the stubborn resistance to any form of NAT. That IPv6 should be all or nothing. The piece needed to make IPv6-only *clients* possible was carrier grade NAT64.
Re: Bit fields (Score:2)
From the linked article: "There was debate about the possibility of variable-length addresses, but proponents of the idea were ultimately defeated because of the extra processing power"
Re: (Score:2)
I know that was the excuse, and it is bogus, since you can simply ignore the extensible field in the first hardware implementation. It's not until the extension is used, many years into the future, that you have to have the processing power to handle it.
who cares (Score:2)
That's like asking Henry Ford how he would make Tesla Autopilot problems go away.
Crypto? They *removed* that from IPv6... (Score:2, Interesting)
You know, IPv6 AH (authentication -- forged packets are "impossible"), and IPv6 ESP (encryption, for privacy) through IPSEC were a non-optional part of the protocol until a few years ago.
But industry-wide incompetence, the interference from the usual suspects, as well as the usual shit pulled by the embedded space crowd killed it down to "optional". Yes, IPSEC can be nasty (mostly because of IKE), but it would be *something*.
For one, it would have killed DNS poisoning much better, and much earlier, than DN
Re: (Score:2)
I personally would rather *not* have crypto at the IP or TCP layer. The reason is that, in practical terms, updates *must* be delivered through kernel updates. Given the nature of crypto updates, I'd much rather have libraries in userspace be the channel for updates.
I don't think I need a big conspiracy about AH/ESP. They were really awkward approaches, and largely redundant with higher layer strategies.
The issues with DNS/DNSSEC are more reasonably addressed in the DNS layer. There is a lack of will
Re: (Score:2)
I don't think I need a big conspiracy about AH/ESP. They were really awkward approaches, and largely redundant with higher layer strategies.
They were made really awkward by the conspiracy to prevent usage.
Re: (Score:2)
It is not like the usual suspects had to kill AH to protect their need for mass-spying; making ESP optional would be enough (and outlawing it where required). But no, they need to actually be able to inject false traffic (which *is* against the !@#$!@#$ law everywhere, even for governments)...
Better to kill AH now to prevent people from thinking they have a right to any unapproved and secure cryptography. Plus there is the whole wanting to inject false traffic issue.
I manage Internet connections in 148 locations... (Score:1)
and none support IPv6. I think the move from 4 to 6 has already failed.
Re: (Score:2)
Assuming you meant Comcast, they should have v6 deployed over their entire network. Qwest are apparently doing 6rd so you should be able to get v6 with them too, albeit over a tunnel.
Can't comment on dialup though. I suspect most ISPs would rather just let their dialup platforms die rather than change anything about them.
Re: (Score:2)
Qwest are apparently doing 6rd so you should be able to get v6 with them too, albeit over a tunnel.
I have this set up, and can attest that it works reasonably well. The only real problem is that (presumably unlike native IPv6) you aren't assigned a static IPv6 prefix; it's tied to your dynamic IPv4 address. Consequently, I also have a Hurricane Electric tunnel configured with a static IPv6 prefix for use in DNS. This required some complicated source-based routing rules, though, so it's not for everyone. (You can't route HE packets out over the 6rd tunnel or vice-versa, and normal routing only looks at th
Re: (Score:2)
When I pull up my cell phone ip information, it's IPv6.
Now that there's carrier-grade nat to allow ipv6-only endpoints to speak to ipv4-only hosts, it *finally* is plausible to offer most mobile/residential ipv6-only. So a lot of the people who are ipv6-only are precisely the ones that would never realize it.
For enterprise networks and internal networks, those are ipv4 and likely to stay ipv4-only (which is a bummer for software development, because IPv4/IPv6 agnostic code is still relatively rare, since t
Who? (Score:3, Funny)
I don't care what this Vince dude thinks. I want to hear what Bennet Haselton has to say.
CIDR, large memory, and bps(and baud) (Score:2)
Going from 32-bit addresses to 128-bit addresses at the outset would have meant the IMPs and then the NSFnet routers (Cisco AGS+s) would have had the routing tables take up 4x the space (well 257/65s but who's counting). That would have meant a more severe and quicker router memory exhaustion with backbone routes, and a quicker move to CIDR (where we specify NETWORK/NUMBER-OF-CONSECUTIVE-1-BITS instead of NETWORK/MASK).
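The NETWORK/NUMBER-OF-CONSECUTIVE-1-BITS shorthand is just a compression of the full mask; a quick sketch of the expansion (function name is mine):

```python
def prefix_to_mask(prefix_len: int, width: int = 32) -> int:
    """Expand a CIDR prefix length into the full bit mask it replaces."""
    return ((1 << prefix_len) - 1) << (width - prefix_len)

print(f"{prefix_to_mask(24):08x}")              # ffffff00 (255.255.255.0)
print(f"{prefix_to_mask(65, width=128):032x}")  # an IPv6 /65 mask
```

Storing one small integer per route instead of a full-width mask is part of why CIDR eased the memory pressure described above.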
For every choice we ever make (routers, IP design, freeway offramp selection, etc.) if
There *was* a proposal simpler than IPv6.. IPxl (Score:3, Interesting)
http://bill.herrin.us/network/ipxl.html
A one-page solution, too simple of course for a huge committee to accept.
If only someone could have convinced a few key router manufacturers (Cisco) and Linux to adopt this, perhaps we could get critical mass and make IPv6 irrelevant. I guess it wouldn't have been enough of a make-work project though.
Re: (Score:2)
IPv6 is a one-page solution too, if you ignore all of the actual details.
As a small example, consider these two paragraphs that the page says for DNS:
Re: (Score:2)
What hardware concerns does IPv6 actually address? As far as I can tell, it was created without too much concern for hardware. 128 bits, for example: most CPUs are going to have to use multiple cycles, due to registers being 32/64-bit (not including SIMD extensions). These aren't really a concern considering how fast our computers are and that networking gear has special processors.
IPv6 fails in a few areas that some people refuse to even acknowledge. If they wanted IPv6 to be successful they would have ke
Yeah Right (Score:2)
At the same time he was doing IPv4, CLNS/CLNP was using 20-byte addresses (variable length). And Xerox (which Cerf himself credits with inspiration for his project) was using 12-byte addresses. So that excuse of "could not have done it" is bollocks.
As for encryption, that's what optional headers are for! Should have defined two or three of those from the start: weak encryption (to handle export/munitions restrictions of the time), strong encryption (either for countries which are not under US influence, or whe
Re: (Score:2)
But were those long address protocols designed to be routable in a worldwide network? Sure, Ethernet had a 48-bit address too, but it was only intended to be a unique hardware ID. There is no way to contact an arbitrary Ethernet MAC address outside of your LAN, even if you already know that it exists. Were they designed to work with the low-speed serial links that were common back in the day? Sure, you can spare a few extra bits when you've got over a million per second, but not when you've got a mere thous
Re: (Score:2)
At the time, I saw some people claiming that 300bps was all people really needed, since as text it was about 300 wpm, and most people couldn't read that fast.
Re: (Score:2)
1) First I would have done only countries and no other TLD.
Personally, I would have done the opposite, and demoted country-specific sites to a second-level domain like .us.gov. The Internet is an international network; forcing every domain to be classified first and foremost according to its national origin would cause needless discord. Only a small minority of sites are truly country-specific.
it could have been debian.cc or debian.de or any other that they wanted
In which case the country code would communicate zero information about the site—so why have it at all?
What might make more sense would be using registrars as TLDs (e.g
Why not a scalable address space? (Score:2)
Why hard-code the size of addresses? For just a single bit of information, the IP headers could contain either the standard address or an extensible one.
That would have been really clever on his part.
what else? (Score:1)
OK, so what was a viable competitor to IPv.4 or v.anything, actually.....