The Internet

What Vint Cerf Would Do Differently (computerworld.com)

An anonymous Slashdot reader quotes ComputerWorld: Vint Cerf is considered a father of the internet, but that doesn't mean there aren't things he would do differently if given a fresh chance to create it all over again. "If I could have justified it, putting in a 128-bit address space would have been nice so we wouldn't have to go through this painful, 20-year process of going from IPv4 to IPv6," Cerf told an audience of journalists Thursday... For security, public key cryptography is another thing Cerf would like to have added, had it been feasible.

Trouble is, neither idea is likely to have made it into the final result at the time. "I doubt I could have gotten away with either one," said Cerf, who won a Turing Award in 2004 and is now vice president and chief internet evangelist at Google. "So today we have to retrofit... If I could go back and put in public key crypto, I probably would try."

Vint Cerf answered questions from Slashdot users back in 2011.

What Vint Cerf Would Do Differently

  • IoA (Score:2, Insightful)

    by Anonymous Coward

    Is 128 bits enough? We really need to have enough bits to assign every atom in the universe an IP address.

    • There are 10^80 atoms in the Universe. So we only need an address of 266 bits to ensure all atoms will safely have an IP at their disposal!
      • Are you sure that one IP address per atom is sufficient? Just to be safe, throw in another address byte. So, 272 bit addresses it will be.

        • by suutar ( 1860506 )

          we should really round that up to a power of 2, so make it 512 bits.

          Then we can assign addresses to the universes next door too, once we can get there.
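
      A quick sanity check of the bit arithmetic in this sub-thread (this just redoes the numbers above in Python, assuming the rough 10^80 atom estimate):

        import math

        atoms = 10**80                        # rough estimate of atoms in the observable universe
        bits_needed = math.ceil(math.log2(atoms))
        print(bits_needed)                    # 266 -> matches the figure above
        print(2**512 >= atoms)                # True: rounding up to 512 bits leaves enormous headroom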

    • Re:IoA (Score:4, Interesting)

      by jellomizer ( 103300 ) on Monday September 26, 2016 @09:14AM (#52961887)

      At the time, 32 bits seemed like a lot of data to send.
      On a 300bps modem it would take a noticeable fraction of a second. 64bit or 128 bit would take much longer, and slowdown nearly everything. Also, RAM was small (think kilobytes); having to store that much data would mean sacrificing something else in the code.

      In short, if it had been implemented back then, it would never have caught on, and we would be using a different networking protocol now. Perhaps one with much more problematic limitations.

      Today, using 128-bit addresses and having the ability to give out more IP addresses than could ever be used really makes sure that just randomly picking an address will probably not create a duplicate address.
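
      A rough back-of-the-envelope illustration of the cost being described (a sketch only; 300 bps is the modem speed from the comment above, and this counts just the bare source and destination address fields, not whole headers):

        # Line time spent on the two address fields of a single packet at 300 bps.
        for bits in (32, 64, 128):
            address_bits = 2 * bits                  # source + destination address
            seconds = address_bits / 300.0
            print(f"{bits:>3}-bit addresses: {seconds * 1000:.0f} ms per packet")

        # 32-bit: ~213 ms, 64-bit: ~427 ms, 128-bit: ~853 ms of line time
        # just for addressing -- noticeable when the whole budget is 300 bits/s.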

      • by arth1 ( 260657 )

        Today, using 128-bit addresses and having the ability to give out more IP addresses than could ever be used really makes sure that just randomly picking an address will probably not create a duplicate address.

        That would be well and fine if most IPv6 addresses didn't have a 64-bit or even 80-bit prefix, identical for everything routable at the endpoint. Then there are DHCP addressing schemes that use the MAC as part of the address, further reducing it.
        Sure, it can be planned better, but we're doing our damnedest already to use up parts of the IPv6 address space through thoughtless assignments and rules.

        • Re:IoA (Score:4, Informative)

          by JesseMcDonald ( 536341 ) on Monday September 26, 2016 @12:43PM (#52963291) Homepage

          That would be well and fine if most IPv6 addresses didn't have a 64-bit or even 80-bit prefix, identical for everything routable at the endpoint.

          That 64-bit network prefix is the equivalent of 4 billion entire IPv4 internets—and each "host" in each of those internets contains its very own set of 2**32 IPv4 internets in the 64-bit suffix. Quadrupling the number of bits from 32 to 128 means raising the number of addresses to the fourth power (2**32 vs. 2**128 = (2**32)**4). We can afford to spare a few bits for the sake of a more hierarchical and yet automated allocation policy that addresses some of the more glaring issues with IPv4, like the address conflicts which inevitably occur when merging two existing private networks.

          Think of it this way: If we manage to be just half as efficient in our use of address bits compared to IPv4, it will still be enough to give every public IPv4 address its own private 32-bit IPv4 internet. Right now the vast majority of IPv6 unicast space is still classified as "reserved", so we have plenty of time to adjust our policies if it turns out that we need to be more frugal.

          Then there are DHCP addressing schemes that use the MAC as part of the address, further reducing it.

          Automatic address assignment (based on MAC or random addresses or whatever) comes out of the host-specific suffix, not the network prefix, so it doesn't reduce the number of usable addresses any more than the prefix alone. It does imply that you need at least a 64-bit host part in order to ensure global uniqueness without manual assignment, but the recommended 64-bit split between network and host was already part of the standard.
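
          A quick check of the "fourth power" arithmetic above and of what the recommended /64 split leaves on each side (nothing here beyond the numbers already stated):

            # IPv6 address space relative to IPv4
            assert 2**128 == (2**32)**4     # quadrupling the bits raises the count to the 4th power

            prefixes_64 = 2**64             # distinct /64 networks in the whole space
            hosts_per_64 = 2**64            # interface IDs inside a single /64
            ipv4_internet = 2**32           # size of one full IPv4 internet

            print(prefixes_64 // ipv4_internet)   # 4294967296 -> 2**32 "IPv4 internets" worth of /64 prefixes
            print(hosts_per_64 // ipv4_internet)  # 4294967296 -> each /64 holds 2**32 IPv4 internets of hosts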

        • by jandrese ( 485 )
          When people talk about an IPv6 address being 128 bits they're technically true but they miss the bigger picture. In practice you can't assign anything smaller than a /64 so really there are only 64 bits of address space as we think of it today. 1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent. It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance. This also means that 64 bit machines (most of them
          • 1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent

            Uhhh....
            2^32 = ~4.3 billion
            2^64 = ~18 billion billion

            So they are only "roughly equivalent" if by that you mean "within 10 orders of magnitude of each other".

            I think you meant "one IPv4 Internet (4.3 billion hosts) where each host NATs an entire IPv4 internet vs. one IPv6 /64 prefix (4.3 billion IPv4 Internets) are exactly equivalent".

            In practice you can't assign anything smaller than a /64
            -- snip --
            It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance.

            While true, the /64 you get assigned is itself drawn from a 64-bit prefix space. In other words, there are 18 billion billion prefixes each with 18 billion
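
            For the sizes being compared here, a one-line check of the "10 orders of magnitude" figure above (plain arithmetic):

              import math

              ratio = 2**64 / 2**32        # how much bigger a /64 is than all of IPv4
              print(f"{ratio:.3g}")        # ~4.29e+09
              print(math.log10(ratio))     # ~9.63 -> "within 10 orders of magnitude"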

            • by jandrese ( 485 )
            They're roughly equivalent in the sense that that's what you'll be assigned by your ISP. And in practice it doesn't matter if you have 32 bits for your local network or 64; they're effectively unlimited.
          • by Agripa ( 139780 )

            When people talk about an IPv6 address being 128 bits they're technically true but they miss the bigger picture. In practice you can't assign anything smaller than a /64 so really there are only 64 bits of address space as we think of it today. 1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent. It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance.

            No worries, ISPs will soon figure out how to issue single IPv6 addresses or at least block all but one unless you pay for the feature of having a complete allocation.

            • NAT is still possible with IPv6. It would really just force people to "break" the internet if they tried something like that.

              • by Agripa ( 139780 )

                NAT is still possible with IPv6. It would really just force people to "break" the internet if they tried something like that.

                Sure NAT is still possible although I would not put it beyond them to eventually try getting people charged with violations of the CFAA or some other law for using it without permission.

                And why wouldn't they want to break the internet? We are after all talking about companies like AT&T who once told me that they were blocking IPv6 tunneling because otherwise customers could get free static IPv6 IPs without paying for them.

      • 64bit or 128 bit would take much longer, and slow down nearly everything.

        FTFY.

        Slowdown is a noun. Slow down is the verb you wanted.

  • ...it works! [slashdot.org].
  • I personally wouldn't have really changed much. Today, in retrospect, yes, these things are more important. But in the earlier days, adoption was the most important goal of the internet. IPv4 is a relatively easy spec to implement. This goes for routing tables, system administration, hardware support, etc. Yes, the migration to IPv6 is a pain in the ass, but the learning curve for admins and hobbyists, as well as the implementation burden on manufacturers some 20-30 years ago, simply wouldn't have been feasible.

    • Especially given how we shed two digits from the year to save memory, I kinda doubt having quadruple the number of bytes in an IP address would fly.

      Still though, it occurs to me that the first internet protocol implementation could have had an even longer address space than IPv6 without mandating (at the time) unfathomable memory needs. Why did they change that?

    • Comment removed based on user account deletion
  • 32 bits address (Score:4, Insightful)

    by hcs_$reboot ( 1536101 ) on Sunday September 25, 2016 @08:11PM (#52959705)
    It seems Vint engages in self-flagellation each time someone brings up the limited number of IPv4 addresses available (like "here" [slashdot.org]). At the time (40 years ago! The "640k is enough" meme is 'only' 35 y.o.!), who would have anticipated the success of the Internet? (And for starters, everyone would have reserved the juicy .com domains in the early 90's!) Vint Cerf did an awesome technical and visionary job and deserves a lot of credit for that.
    • Vint Cerf did an awesome technical and visionary job*

      *and still does! (at Google)

    • Yeah, at the time, the thought of needing more than four billion internet addresses probably seemed a bit ludicrous when it was still mostly just government and university mainframes connecting. That number must have seemed like a nearly never-ending well, especially seeing how generously it was initially carved up into massive blocks. Some of the earliest corporations and universities to receive allocations still hold a relatively ridiculous number of addresses in Class A blocks (16+ million addresses each).

      We'd

      • Even a 64-bit address would have been seen as doubling memory requirements of routing hardware for no good reason.

        There could have been an optional 32-bit client sub-address ignored by the public routing backbone.

        Then, for most purposes, non-backbone routers need two routing tables: a routing table for the public network (if more complex than a few simple gateways), and an organization-local internal routing table (with 32-bit addresses, just like the public table).

        The actual problem is that each TCP/IP con

        • by Sique ( 173459 )
          As soon as you have regular data transfers from any point to a publicly accessible network, you can do fingerprinting. It's not HTTP per se that's the culprit; any other protocol becoming popular could have been used. HTTP is just the most popular one.
        • by swb ( 14022 ) on Monday September 26, 2016 @07:12AM (#52961413)

          I always thought the Netware IPX/SPX network numbering system was quite clever -- 32 bits of network addressing and a 48 bit node address, usually based on MAC addresses.

          I always think of how much simpler IP would have been with a similar structure -- subnets could have scaled easily without renumbering or routing when common /24 limits were hit. The use of MAC addresses for node addresses would have eliminated DHCP for the most part or essentially automated it as clients would have only had to query for a network number, not a node address.
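
          A minimal sketch of the IPX-style split described above (hypothetical type and field names; 32-bit network number plus a 48-bit node ID lifted straight from the MAC):

            from dataclasses import dataclass

            @dataclass(frozen=True)
            class IpxStyleAddress:
                network: int   # 32-bit network number, assigned per segment
                node: int      # 48-bit node ID, taken directly from the interface MAC

                def __str__(self) -> str:
                    return f"{self.network:08X}:{self.node:012X}"

            def autoconfigure(network_number: int, mac: bytes) -> IpxStyleAddress:
                """Build a full address from a learned network number and the local MAC.
                The host only needs to ask for the network number; the node part
                requires no assignment at all."""
                return IpxStyleAddress(network_number, int.from_bytes(mac, "big"))

            # Example: network 0x00000A01, MAC 00:1B:44:11:3A:B7
            print(autoconfigure(0x00000A01, bytes.fromhex("001B44113AB7")))
            # -> 00000A01:001B44113AB7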

      • I don't agree that 4 billion was ludicrous, even just theoretically. I mean, the world had more than a billion people then, which could easily be imagined as growing to 4 billion. If one imagined that each person would have something in his hand that would require an IP address, there was the case right there that 4 billion wouldn't suffice.

        64 bits would have been enough - if we had that, then we could have had Class B sized subnets for everything, and everything beyond the low 16 bits would have been the network address.

    • The domain name system should be name dot country dot com/edu/gov/org/home, with each country having master control over their domains. Giving the US complete and total control over the entire COM domain is insane.
  • Computationally, that is. I don't think it would have flown in the early 90s, and the adoption rate would have been the same as it was with SSL (and TLS). It wasn't so long ago that I actually had to provide resource impact reports for servers where everything would be encrypted. Nowadays (unless you deal with extremely large volumes), encrypting (using a symmetric key, that is) doesn't have a significant impact. Web servers, load-balancers, etc. can support it without breaking a sweat.

  • I really hate these people who take credit for my brilliant invention. Something you peons refer to as "the internets."

    Oh well, at least I can still take full credit for my hockey stick.

    Sincerely,
    Al Gore

  • by Kjella ( 173770 ) on Sunday September 25, 2016 @08:30PM (#52959759) Homepage

    So the 1992 UTF-8 specification didn't exist when the 1983 IP specification was created, but they could have done:

    First 2^31: 0(31)
    Next 2^59: 110(29) 10(30)
    Next 2^88: 1110(28) 10(30) 10(30)
    Next 2^117: 11110(27) 10(30) 10(30) 10(30)

    And just declared that for now it's 0(31) - still 2 billion addresses but the sky is the limit. Heck, they might even have used shorts (16 bit) that way and declared that hardware/software should update as the need approached:

    First 2^15: 0(15)
    Next 2^27: 110(13) 10(14)
    Next 2^40: 1110(12) 10(14) 10(14)
    Next 2^53: 11110(11) 10(14) 10(14) 10(14)
    (...)
    Next 2^140: 1111111111111111(0) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14) 10(14)
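
    A rough sketch of how a receiver might decode the 32-bit-group scheme above (hypothetical code: the count of leading 1-bits in the first word says how many continuation words follow, UTF-8 style):

      def decode_address(words):
          """Decode one variable-length address from a list of 32-bit words.
          Returns (value, words_consumed)."""
          first = words[0]
          if first >> 31 == 0:                       # 0(31): single-word legacy form
              return first & 0x7FFFFFFF, 1

          ones = 0                                   # count leading 1-bits: 110 -> 2, 1110 -> 3, ...
          while ones < 32 and (first >> (31 - ones)) & 1:
              ones += 1
          continuations = ones - 1
          payload_bits = 32 - (ones + 1)             # bits left in the first word after the marker
          value = first & ((1 << payload_bits) - 1)

          for i in range(1, continuations + 1):
              word = words[i]
              if word >> 30 != 0b10:                 # every continuation word must start with "10"
                  raise ValueError("malformed continuation word")
              value = (value << 30) | (word & 0x3FFFFFFF)
          return value, continuations + 1

      print(decode_address([0x12345678]))                       # (305419896, 1): plain 31-bit address
      print(decode_address([0b110 << 29 | 1, 0b10 << 30 | 5]))  # (2**30 + 5, 2): 59-bit extended form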

    As for PKI, that couldn't possibly have happened. US export regulations wouldn't have allowed it at the time; this was long before Zimmermann and PGP.

    • by Shimbo ( 100005 )

      Variable length addresses aren't a new idea, and there were some advocates for them during the IPv6 design. However, a lot of the IPv6 changes were made to streamline the IP header, to make it faster for intermediate systems to process. Using a variable length field would add complexity. More general isn't necessarily better, else we would be using BigInts everywhere.

    • I can see a justification for fixed size packets though. Still, nice to have the flexibility. My other thought is that you should have a gateway with a short address and anything that uses it a longer address. E.g. find some way to have 123.45 as your gateway, and 123.45.1 through 123.45.256 as machines on the subnet. Maybe just zero terminate the addresses.

      I thought it might have been possible to do some sort of nested packet idea. Something like 0-7FFFFFFF are global addresses, 80000000-FFFFFFFF are lo
  • by aberglas ( 991072 ) on Sunday September 25, 2016 @08:40PM (#52959813)

    48 bits would be enough to support even the internet of things, possibly with some very minor NATing, and it would have spared us from the 128-bit monsters. 48 bits is what early Ethernet used, and it seemed like a good number.

    But I can understand that when IP was developed there were only a few thousand computers in the world likely to be connected, so 16 bits would seem adequate. Using 32 bits would have been a bit of a stretch at that time. Memory and bandwidth were expensive back then.

    • 1976...! You were only 40 years old!

    • But I can understand that when IP was developed there were only a few thousand computers in the world likely to be connected, so 16 bits would seem adequate. Using 32 bits would have been a bit of a stretch at that time.

      Ummmmmmm... Have you ever seen an IP (v4) address?

    • by Megane ( 129182 )

      48 or 64 bit addresses would have been enough; IPv6 only used 128 bits because its designers wanted to be really, really, really sure that we wouldn't run out, this time, for sure. The initial classful address allocations didn't help, but we eventually reached a point where a single wasted Class A only puts off exhaustion by a few months. What broke everything in the end was the sheer enormous number of addresses used by mobile networks. Now we have enough bits that we can use Ethernet MACs as part of routa

      • In a classful IPv4 setup, the max you have is a Class A, or 2^24 addresses. I've never heard of a Class A being supernetted to support a larger network.

        Therefore, I think that in IPv6, having a whole 64 bits dedicated to the interface ID is a total waste, even allowing for autoconfiguration. Even if they had assigned the bottom 32 bits to the interface ID, they'd have a subnet equal to the entire IPv4 network (sans NAT'ed ones). Then the entire top half of the address would have been the Global Prefix, and the sub

  • by Alomex ( 148003 ) on Sunday September 25, 2016 @09:09PM (#52959897) Homepage

    Remember boys and girls, always make bit fields extensible. He could have reserved the last part of the upper range (e.g. the addresses presently wasted on multicast) for an extended range to be defined in the future. I.e., an address in the upper 224.0.0.0 range is followed by another 32 bits whose meaning would be IPv6-like.
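
    A sketch of what that escape could look like on the wire (hypothetical constant and layout: an address at or above the reserved 224.0.0.0 boundary signals that a second 32-bit word follows):

      ESCAPE_BOUNDARY = 0xE0000000          # 224.0.0.0, the reserved upper range from the comment above

      def parse_destination(words):
          """Return (address, words_consumed)."""
          first = words[0]
          if first < ESCAPE_BOUNDARY:
              return first, 1                       # ordinary 32-bit address, old hardware unaffected
          return (first << 32) | words[1], 2        # extended: a second 32-bit word follows

      print(parse_destination([0x0A000001]))                # (167772161, 1)  -> 10.0.0.1, legacy form
      print(parse_destination([0xE0000001, 0xDEADBEEF]))    # extended address, two words consumed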

    • by godrik ( 1287354 )

      That probably would not have made much of a difference. People would have assumed that this would never happen and would have built practical implementations assuming a fixed 32-bit space. By the time it became a practical problem, we would have had a creep of devices that don't follow the norm, and managing that would be a nightmare.

      Just see the sorry state of UTF-8. We still have so many code bases that are not UTF-8 compliant, despite the need having been clear for over 15 years.

      • by Alomex ( 148003 )

        Not any more of a nightmare than the IPv4 to IPv6 conversion which has been in the making for twenty years now.

        • by Junta ( 36770 )

          Actually, I'd say it would be more of a nightmare. Here we have the devils we know. In that scenario, it would be hellish even *knowing* what can't handle things.

          Every hop in the network not being certain whether the next hop could or could not handle the conversation would be a nightmare in the making.

      • by Kjella ( 173770 )

        That probably would not have made much of a difference. People would have assumed that this would never happen and would have built practical implementations assuming a fixed 32-bit space. By the time it became a practical problem, we would have had a creep of devices that don't follow the norm, and managing that would be a nightmare.

        Yes, but it would have put more pressure on the existing user base, like Y2K compliance did, to follow the "full" standard. Right now it's like we're on IPv4, tagged WORKS4ME, so why bother with IPv6. But I know I've made many more "shortcuts" than limiting something to 4 billion...

      • by AmiMoJo ( 196126 )

        I think you could solve the old, incompatible device issue with something like NAT. It would be horrible of course, but so is IPv6. And most stuff would be updated - IPv6 has been supported since the 90s.

        • by Junta ( 36770 )

          It seems funny, because that *ultimately* is the reality of IPv6.

          The difference is that in the above scheme, *knowing* whether a hop in the network could or could not do 'big addresses' would be more difficult. With IPv6, it allows things to be very clearly delineated whether the communication is IPv6 capable or not.

          The biggest obstacle to IPv6 was the stubborn resistance to any form of NAT. That IPv6 should be all or nothing. The piece needed to make IPv6-only *clients* possible was carrier grade NAT64.

    • From the linked article: "There was debate about the possibility of variable-length addresses, but proponents of the idea were ultimately defeated because of the extra processing power "

      • by Alomex ( 148003 )

        I know that was the excuse, and it is bogus, since you can simply ignore the extensible field in the first hardware implementation. It's not until the extension is used, many years into the future, that you have to have the processing power to handle it.

    • Some ranges were reserved! But due to the Internet's explosive growth they ended up being used at some point anyway.
  • that's like asking Henry Ford how he would make Tesla Autopilot problems go away

  • by Anonymous Coward

    You know, IPv6 AH (authentication -- forged packets are "impossible"), and IPv6 ESP (encryption, for privacy) through IPSEC were a non-optional part of the protocol until a few years ago.

    But industry-wide incompetence, the interference from the usual suspects, as well as the usual shit pulled by the embedded space crowd killed it down to "optional". Yes, IPSEC can be nasty (mostly because of IKE), but it would be *something*.

    For one, it would have killed DNS poisoning much better, and much earlier, than DN

    • by Junta ( 36770 )

      I personally would rather *not* have crypto at the IP or TCP layer. The reason is that, in practical terms, updates *must* be delivered through kernel updates. Given the nature of crypto updates, I'd much rather have libraries in userspace be the channel for updates.

      I don't think I need a big conspiracy about AH/ESP. They were really awkward approaches, and largely redundant with higher layer strategies.

      The issues with DNS/DNSSEC are more reasonably addressed in the DNS layer. There is a lack of will

      • by Agripa ( 139780 )

        I don't think I need a big conspiracy about AH/ESP. They were really awkward approaches, and largely redundant with higher layer strategies.

        They were made really awkward by the conspiracy to prevent usage.

    • by Agripa ( 139780 )

      It is not like the usual suspects had to kill AH to protect their need for mass-spying, making ESP optional would be enough (and outlaw it where required). But no, they need to actually be able to inject false traffic (which *is* against the !@#$!@#$ law everywhere, even for governments)...

      Better to kill AH now to prevent people from thinking they have a right to any unapproved and secure cryptography. Plus there is the whole wanting to inject false traffic issue.

  • and none support IPv6. I think the move from 4 to 6 has already failed.

    • by Junta ( 36770 )

      When I pull up my cell phone ip information, it's IPv6.

      Now that there's carrier-grade nat to allow ipv6-only endpoints to speak to ipv4-only hosts, it *finally* is plausible to offer most mobile/residential ipv6-only. So a lot of the people who are ipv6-only are precisely the ones that would never realize it.

      For enterprise networks and internal networks, those are ipv4 and likely to stay ipv4-only (which is a bummer for software development, because IPv4/IPv6 agnostic code is still relatively rare, since t

  • Who? (Score:3, Funny)

    by Hognoxious ( 631665 ) on Sunday September 25, 2016 @10:51PM (#52960183) Homepage Journal

    I don't care what this Vince dude thinks. I want to hear what Bennet Haselton has to say.

  • Going from 32-bit addresses to 128-bit addresses at the outset would have meant the IMPs and then the NSFnet routers (Cisco AGS+s) would have had the routing tables take up 4x the space (well 257/65s but who's counting). That would have meant a more severe and quicker router memory exhaustion with backbone routes, and a quicker move to CIDR (where we specify NETWORK/NUMBER-OF-CONSECUTIVE-1-BITS instead of NETWORK/MASK).

    For every choice we ever make (routers, IP design, freeway offramp selection, etc.) if
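
    For readers unfamiliar with the notation mentioned above, a small illustration of prefix-length versus explicit mask (standard CIDR arithmetic, nothing specific to the history in this comment):

      import ipaddress

      # CIDR writes "network/prefix-length" instead of "network/mask":
      net = ipaddress.ip_network("192.168.8.0/22")
      print(net.netmask)         # 255.255.252.0 -- the 22 consecutive 1-bits spelled out
      print(net.num_addresses)   # 1024

      # The same mask derived by hand from the prefix length:
      prefix = 22
      mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
      print(".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0)))   # 255.255.252.0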

  • by Anonymous Coward on Monday September 26, 2016 @12:14AM (#52960401)

    http://bill.herrin.us/network/ipxl.html

    A one-page solution, too simple of course for a huge committee to accept.

    If only someone could have convinced a few key router manufacturers (Cisco) and Linux to adopt this, perhaps we could get critical mass and make IPv6 irrelevant. I guess it wouldn't have been enough of a make-work project though.

    • IPv6 is a one-page solution too, if you ignore all of the actual details.

      As a small example, consider these two paragraphs the page gives for DNS:

      Name to address mapping is performed with the AX record. AX returns a 64-bit IP address instead of a 32-bit IP address. A DNS server which supports IPxl should automatically translate any known A records for a given name to AX records by prepending 0.0.0.1 to the address. It should likewise automatically translate any AX records with the prefix 0.0.0.1 into A

  • At the same time he was doing IPv4, CLNS/CLNP was using 20-byte addresses (variable length). And Xerox (which Cerf himself credits with inspiring his project) was using 12-byte addresses. So the excuse of "could not have done it" is bollocks.

    As for encryption, that's what optional headers are for! They should have defined two or three of those from the start: weak encryption (to handle the export/munitions restrictions of the time), strong encryption (either for countries which are not under US influence, or whe

    • by Megane ( 129182 )

      But were those long address protocols designed to be routable in a worldwide network? Sure, Ethernet had a 48-bit address too, but it was only intended to be a unique hardware ID. There is no way to contact an arbitrary Ethernet MAC address outside of your LAN, even if you already know that it exists. Were they designed to work with the low-speed serial links that were common back in the day? Sure, you can spare a few extra bits when you've got over a million per second, but not when you've got a mere thous

  • Comment removed based on user account deletion
    • 1) First I would have done only countries and no other TLD.

      Personally, I would have done the opposite, and demoted country-specific sites to a second-level domain like .us.gov. The Internet is an international network; forcing every domain to be classified first and foremost according to its national origin would cause needless discord. Only a small minority of sites are truly country-specific.

      it could have been debian.cc or debian.de or any other that they wanted

      In which case the country code would communicate zero information about the site—so why have it at all?

      What might make more sense would be using registrars as TLDs (e.g

  • Why hard-code the size of addresses? For just a single bit of information, the IP headers could contain either the standard address or an extensible one.

    That would have been really clever on his part.

  • OK, so what was a viable competitor to IPv4 or v-anything, actually.....
