Vint Cerf Keeps Blaming Himself For IPv4 Limit 309

netbuzz writes "Everyone knows that IPv4 addresses are nearly gone and the ongoing move to IPv6 is inevitable if not exactly welcomed by all. If you've ever wondered why the IT world finds itself in this situation, Vint Cerf, known far and wide as one of the fathers of the Internet, wants you to know that it's OK to blame him. He certainly does so himself. In fact, he does so time and time and time again."
This discussion has been archived. No new comments can be posted.

  • by Anonymatt ( 1272506 ) on Friday October 22, 2010 @01:34PM (#33987556)

    Is this a backwards opportunity taken for asserting that he is one of the Fathers of the Internet?

    • by Hognoxious ( 631665 ) on Friday October 22, 2010 @02:12PM (#33988110) Homepage Journal

      We all know it wasn't him. Seriously - is there anyone here who doesn't know who algoreithms are named after?

    • Re: (Score:3, Informative)

      by mcgrew ( 92797 ) *

      No need to assert; it's common knowledge.

      Vinton Gray "Vint" Cerf ( /sɜːrf/; born June 23, 1943) is an American computer scientist who is recognized as one of "the fathers of the Internet", sharing this title with American computer scientist Bob Kahn. His contributions have been acknowledged and lauded, repeatedly, with honorary degrees and awards that include the National Medal of Technology, the Turing Award, the Presidential Medal of Freedom, and membership in the National Academy of

    • Is this a backwards opportunity taken for asserting that he is one of the Fathers of the Internet?

      It's an opportunity to get attention. Perhaps that brings consulting dollars, who knows.

    • My thought exactly.

      It's like "Sue me, and make me famous, again!".

    • by Daniel Phillips ( 238627 ) on Friday October 22, 2010 @04:00PM (#33989706)

      Is this a backwards opportunity taken for asserting that he is one of the Fathers of the Internet?

      I would say so. Below is the references section of RFC 791. Cerf shows up only on the "Catenet" article, while the bulk of the heavy lifting was apparently done by Jon Postel, a rather more humble person, it would appear. And Bob Kahn, for some reason, does not appear in these references at all. On the whole, Cerf seems to have mainly acted as a PM and money man.

      [1] Cerf, V., "The Catenet Model for Internetworking," Information
                Processing Techniques Office, Defense Advanced Research Projects
                Agency, IEN 48, July 1978.

      [2] Bolt Beranek and Newman, "Specification for the Interconnection of
                a Host and an IMP," BBN Technical Report 1822, Revised May 1978.

      [3] Postel, J., "Internet Control Message Protocol - DARPA Internet
                Program Protocol Specification," RFC 792, USC/Information Sciences
                Institute, September 1981.

      [4] Shoch, J., "Inter-Network Naming, Addressing, and Routing,"
                COMPCON, IEEE Computer Society, Fall 1978.

      [5] Postel, J., "Address Mappings," RFC 796, USC/Information Sciences
                Institute, September 1981.

      [6] Shoch, J., "Packet Fragmentation in Inter-Network Protocols,"
                Computer Networks, v. 3, n. 1, February 1979.

      [7] Strazisar, V., "How to Build a Gateway", IEN 109, Bolt Beranek and
                Newman, August 1979.

      [8] Postel, J., "Service Mappings," RFC 795, USC/Information Sciences
                Institute, September 1981.

      [9] Postel, J., "Assigned Numbers," RFC 790, USC/Information Sciences
                Institute, September 1981.

  • by powerlord ( 28156 ) on Friday October 22, 2010 @01:36PM (#33987564) Journal

    Cool. Now that we've assigned blame, hopefully we can move forward with FIXING the problem.

    Since there is already a fix available (IPv6), if/when this DOES become a problem, THAT problem should be laid squarely on the shoulders of the people who failed to implement the FIX in a timely enough manner.

    • What happened to IPv5?

    • Re: (Score:3, Informative)

      by powerlord ( 28156 )

      Since I actually bothered to read the article:

      But Cerf, chief Internet evangelist at Google, has long known a good laugh line when he has one. In an Aug. 17 talk at NASA, he said:

      This is the amount of IP version 4 address space, about 5% left -- my fault actually. In 1977 I was running the Internet program for the defense department, I had to decide how much address space this Internet thing needs. ... After a year of arguing among the engineers, no one knowing, 32 bits, 3.4 billion terminations, has to be enough for an experiment. The problem is the experiment never ended.

      So, since the internet is just an experiment that never ended, can we name this "Endless October"? :)
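As a side note on the quoted figure: 32 bits gives 2^32, about 4.3 billion addresses, so the "3.4 billion terminations" in the transcript is presumably a transcription slip for 4.3 billion. A one-liner to check:

```python
# 32 bits of address space: 2**32 distinct values.
total_ipv4 = 2 ** 32
print(f"{total_ipv4:,}")  # 4,294,967,296 -- about 4.3 billion, not 3.4
```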

    • Re: (Score:3, Insightful)

      by CAIMLAS ( 41445 )

      IPv6 is a good example of a fix to an existing problem which adds more problems in the meantime.

      It's like an application bug/security fix which adds a new user interface, which is entirely different than the original, exports different functionality, and has a massive learning curve. If a vendor were to release something like this, they'd be laughed at and ridiculed until they released a proper 'fix' which didn't break functionality and usability.

      Whatever the fix may be, it needs to be backward compatible -

  • by Rene S. Hollan ( 1943 ) on Friday October 22, 2010 @01:36PM (#33987568)

    ... to quote that hilarious line from Idiocracy.

  • Frankly... (Score:4, Insightful)

    by Daniel Phillips ( 238627 ) on Friday October 22, 2010 @01:37PM (#33987590)

    Vint Cerf should blame himself for the IPv6 mess instead.

    • Re: (Score:3, Interesting)

      by thasmudyan ( 460603 )

      Vint Cerf should blame himself for the IPv6 mess instead.

      Exactly. I assert that the migration would already have happened (and seamlessly) if we had just extended the address space and left everything else the way it was. To be fair, I believe this is a marketing problem. At the time when IPv6 became serious, all sorts of ideas were floated and sensationalized. A bunch of journalists said stuff like "in the future, a device will have just one static IP wherever it goes" and "we'll do away with firewalls". W

  • by Matt Perry ( 793115 ) <[moc.oohay] [ta] [45ttam.yrrep]> on Friday October 22, 2010 @01:37PM (#33987598)

    It's a good thing IPv4's address space is 32-bit. Without that limitation we'd never move to IPv6 and get all of the other benefits that it offers.

    • by Yvan256 ( 722131 ) on Friday October 22, 2010 @01:50PM (#33987800) Homepage Journal

      We should have put Gillette in charge of the solution. I'm pretty sure it would have been "fuck everything, we're doing 256-bit". IPv6 won't last long once we start assigning an IP address to everything* such as light bulbs, toasters, etc.

      * no, we won't stop to think if we should. We'll only see that we can.

      • Re: (Score:2, Funny)

        by operagost ( 62405 )
        I think that the IPv6 space is big enough to give an address to every molecule in the solar system.
        • Re: (Score:3, Funny)

          by 0123456 ( 636235 )

          I think that the IPv6 space is big enough to give an address to every molecule in the solar system.

          Yeah, but there are a lot of other solar systems. That's why I'm switching to IPV7 with 256-bit addresses.

          Of course the cross-galaxy ping time is a bit of a problem.

      • by abigor ( 540274 ) on Friday October 22, 2010 @02:09PM (#33988084)

        Eh, that's a lot of toasters to use up 3.4*10^38 addresses. If a toaster takes up a square metre (big toaster), you'd have to stack them ten billion high over every single metre of the Earth to use them up.

        • by toastar ( 573882 )

          Eh, that's a lot of toasters to use up 3.4*10^38 addresses. If a toaster takes up a square metre (big toaster), you'd have to stack them ten billion high over every single metre of the Earth to use them up.

          You can never have enough toasters ;)

      • Current estimates are that IPv6 has sufficient address space to assign every living human approximately 4 billion IPs. I could assign an IP to every single item I own down to the spare buttons for my shirts, and the unused sandwich bags in my pantry, and not even get to the first percent of my "allocation". The population of earth could increase by an order of magnitude and we'd all *still* have a few million addresses for our very own... we won't have anywhere to stand, but we'll have plenty of IP addres
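The arithmetic behind these per-person estimates is easy to check; if anything, the parent's "4 billion per person" figure is a vast understatement (assuming a rough 2010 population of 7 billion):

```python
# Back-of-the-envelope IPv6 math: total address space divided by population.
total_ipv6 = 2 ** 128          # size of the IPv6 address space
population = 7 * 10 ** 9       # rough 2010 world population (assumption)

per_person = total_ipv6 // population
print(f"total addresses: {float(total_ipv6):.2e}")   # ~3.40e+38
print(f"per person:      {float(per_person):.2e}")   # ~4.86e+28
```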

  • Bogus shortage (Score:2, Interesting)

    by Anonymous Coward

    There isn't a true shortage while companies are hoarding large blocks of IP addresses. For example, HP has two class A address blocks, among others, which gives them over 32 million IPs. With all the mergers that have happened, why isn't there a process to recover address blocks so they can be reused properly?

    Part of the problem is that no one thought about recovering address blocks when companies merge. You can't tell me that HP needs 32 million-plus IPs.

    There is also the fact that both companies and ISP's can use th

    • by div_2n ( 525075 )

      By the time companies expend the time and resources necessary to validate that all of their "unused" IP blocks aren't actually being used by something, engineering migration plans for those that are being used by non-critical systems, etc. they could just go ahead and move to IPv6.

      Apply a cure, not a band-aid.

    • Re:Bogus shortage (Score:5, Insightful)

      by jandrese ( 485 ) on Friday October 22, 2010 @02:01PM (#33987958) Homepage Journal
      The scary thing is that for every Class A returned to the pool, you only buy like a month of life for IPv4. It's just growing too fast now and we're going to start seeing a lot of stories about people not getting their IP addresses in a year or two. Luckily it won't affect existing customers too badly, but it will be a real limit on growth.
      • by blair1q ( 305137 )

        The question is: why is it growing at all?

        Every new device should be IPv6 compatible.

        Who's making IPv4 crap? And why aren't we charging them $100 a number?

        • Re: (Score:2, Informative)

          by hardburn ( 141468 )

          Mostly home gateways and some VoIP phones. Host OSen and business routers have had the necessary support for ages. Even most smartphones sold now probably do. But if you want an IPv6-capable Wireless N router, you're either going to have to look very carefully, or buy one that can load a custom firmware.

        • Re: (Score:3, Interesting)

          by jandrese ( 485 )
          You would drop your ISP so fast if they gave you IPv6-only service today. It's just not ready yet. You can get some services, but a great many would be broken, and you can forget about hooking up a ton of your existing hardware, because it will never support IPv6.

          Hell, do the Wii/360/PS3 support IPv6? I'm pretty sure the Wii doesn't, but I don't know about the other two. Not to mention Tivos, Slingboxes, Rokus, etc...

          That's not to say however that I'm letting ISPs off of the hook. We sho
          • Re: (Score:3, Interesting)

            by gmack ( 197796 )

            You can hardly blame the ISPs, since the most popular OS on the planet (XP) does a very poor job of supporting IPv6. More annoying is the fact that MS refuses to support the TLS extensions that would allow servers to virtual-host SSL-based sites, meaning that we can't do proper SSL-based virtual hosts until people stop using IE and Chrome on XP. If I could have done that, my last job would have had 5 IPs instead of over 100.

        • Re: (Score:3, Interesting)

          by Firethorn ( 177587 )

          I know my 802.11N router at home is IPv6 compatible, but then, it's also a dual radio gigabit port beast.

          Honestly, I figure that the USA and Europe will be among the last to switch over: we're more mature, our growth rate is slower than China's and other developing countries', and our investment is still proportionally larger.

          Still, last time IPv6 came around I double checked, and my computers/router have IPv6 addresses. Hard to tell if they're getting used, but that's life.

    • Re:Bogus shortage (Score:4, Insightful)

      by Hatta ( 162192 ) on Friday October 22, 2010 @02:07PM (#33988052) Journal

      There are more people on Earth than there are IPv4 addresses. There is a true shortage, whether companies are sitting on address blocks or not.

    • Re: (Score:3, Informative)

      by compro01 ( 777531 )

      1. The legacy address space is a special case. They were issued directly from IANA before ARIN and the other RIRs were formed and were given out without many rules attached, so reclaiming those is legally difficult at best. Typical blocks issued today can be and are reclaimed when they're not being used and you currently have to go to significant lengths to show you need the address space, especially with RIPE's policies.

      2. We've been fucking doing that. NAT is why we are running out of addresses now rat

  • After hearing this story and the '640k ought to be enough' story, the lesson learned is that whenever you are planning on building something technical, be sure to go wayyyy overboard on the size and scope of the projected requirements in order to future-proof the technology.

    By the way, is Vint short for 'Vincent' or 'Voila... Internet'?
    • by frank_adrian314159 ( 469671 ) on Friday October 22, 2010 @02:06PM (#33988022) Homepage

      ... the lesson learned is that whenever you are planning on building something technical, be sure to go wayyyy overboard on the size and scope of the projected requirements in order to future-proof the technology.

      Yeah! That's why we should be building CPUs with 1024-bit addresses!

      • by blair1q ( 305137 )

        I was going to ask why that's modded Funny, then I realized that yeah, it is.

        We should be building network protocols with variable-length addressing, and getting rid of fixed constraints entirely.

        Though you should have said "2048". Like the letters 'k' and 'q', it just sounds funnier when used in a joke.

    Another example is Sony when they decided, "One hour of tape is enough." That decision eventually killed the Betamax VCR. The competition, JVC, also thought it was enough time, but RCA, which was used to dealing with consumer expectations, insisted it had to be 4 hours minimum so Americans could tape football games. JVC complied and VHS won.

      I wonder if we'll ever run out of phone numbers? The current US limit is 9,999,999,999 or about 10 billion. That's enough for 30 phones per citizen, so I suppo

    • by gmack ( 197796 )

      And then you risk a bloated mess that will probably still need to be extended in some way you didn't think about.

  • by ciaran_o_riordan ( 662132 ) on Friday October 22, 2010 @01:49PM (#33987788) Homepage

    In a speech around 2004, I remember Alan Cox saying that the reason IPv6 wasn't advancing was that big software players were afraid to adopt it before it turned 20, in case there were submarine patents / a patent ambush.

    Anyone got links to confirm / disprove this theory?

    • by ciaran_o_riordan ( 662132 ) on Friday October 22, 2010 @01:54PM (#33987858) Homepage

      Here's an interview where he says it: []

      """Alan Cox: The same has happened with IP version 6. You notice that everyone
      is saying IP version 6 is this, is that, and there's all this research
      software up there. No one at Cisco is releasing big IPv6 routers.
      Not because there's no market demand, but because they want 20
      years to have elapsed from the publication of the standard before
      the product comes out -- because they know that there will be
      hundreds of people who've had guesses at where the standard
      would go and filed patents around it. And it's easier to let things
      lapse for 20 years than fight the system."""

      (More info would be good - any other prominent techs saying this?)

      • Actually, since this problem is sure to boom in the coming months, I've started a wiki page for it: []

        How many years is it from the start of alleged infringement to the rebuttable presumption that the patent holder has snoozed and lost?
          • by Overzeetop ( 214511 ) on Friday October 22, 2010 @02:42PM (#33988584) Journal

            Never, or in more practical terms, less than 6 years after the expiration of the patent. Patents need not be defended like trademarks, and you can "back sue" for up to 6 years of infringement. There was a recent story on /. about a company that bought a little known patent right before it expired, then went about suing everybody and anybody for infringement *after* the expiration, but going back 6 years for damages.

          • For patents, not likely ever if you go to trial, and definitely never when the defense simply settles out of court (which is practically always). Trademarks you keep for as long as you're defending them, but patents go until the official expiration date. Unisys was able to sit around and wait for GIFs to become the standard lossless format on the Internet, then spring patent claims on everyone, and got away with it until the patent officially expired.

      • Re: (Score:3, Informative)

        More info would be good - any other prominent techs saying this?

        This is not exactly new, but I read a pretty reasonable article about the effect of James Watt's steam-engine patents on the industrial revolution: basically, how it was delayed by a few decades.

        That was the 18th century; things moved slower then. Nowadays, within our 5-year obsolescence cycle, that kind of delay would throw things completely out of whack, of course.

        • by blair1q ( 305137 )

          One more reason that raising the length of patent protection, rather than reducing it, was a crime against the people.

      • by blair1q ( 305137 )

        That's fucking stupid.

        It's way cheaper to set your patent lawyers on a search for related patents and prior art than it is to fight them (in fact, that's a primary part of the application process).

        And by waiting you're just giving your competitors all the time they need to eat your lunch before you dare put out your first product. They'll be filing all sorts of patents on the thing you wanted to make, and resetting your 20-year grousing clock every time they click "send to USPTO".

        Either Cox is misquoted, o

      • Re: (Score:3, Insightful)

        by Jeremi ( 14640 )

        No one at Cisco is releasing big IPv6 routers.
        Not because there's no market demand, but because they want 20
        years to have elapsed from the publication of the standard before
        the product comes out -- because they know that there will be
        hundreds of people who've had guesses at where the standard
        would go and filed patents around it. And it's easier to let things
        lapse for 20 years than fight the system.

        I'm glad to see our patent system is still "promoting the progress of science and the useful arts". :^P

    • Re: (Score:3, Insightful)

      If this is true, then wouldn't it mean that IPv6 won't get adopted until 2018, 20 years after the original RFC was published?

      I personally think the problem is that compatibility with IPv4 seems like it was an afterthought. The designers of IPv6 should have designed the system so that individual computers/routers/networks could be upgraded independently of each other, in much the same way you can easily upgrade your network from 100Mb to GigE.

    • Re: (Score:3, Informative)

      by Grond ( 15515 )

      Anyone got links to confirm / disprove this theory?

      Short version: Cox was just wrong. Cisco wasn't shipping big IPv6 routers in 2004 (although they were shipping other IPv6 hardware and software), but it wasn't because of patents. It was because there was no demand from the telecommunications companies, who knew they had several years before IPv4 ran out. Furthermore, Cisco's current largest routers (the carrier grade CRS series) support IPv6 (example []), yet 20 years from the publication of the main IPv6

  • The examples of him putting the blame on himself for IPv4 running out of address space are just a modest way of saying "Hey, I invented the Internet" in a real way, not in an Al Gore kind of way.

    I can only wish that I would have such a failure in my career!

    Nick Powers

  • How we got here. (Score:5, Informative)

    by Animats ( 122034 ) on Friday October 22, 2010 @02:12PM (#33988116) Homepage

    At the time, XNS, the Xerox protocol for Ethernet networks, was in use. It had 24 bits for the network number and 24 bits for the device ID. Thinking at the time was that each network would be a local LAN, and "internetworking" would interconnect LANs. Xerox was thinking of this as a business system, with multiple machines on each LAN. So XNS had a 48-bit address space. That's what we call a "MAC address" today.

    The telephony people were pushing X.25 and TP4, which used phone numbers for addressing. Back then, phone numbers were very hierarchical; the area code and exchange parts of the number determined the routing to the final switch. "Number portability", where all the players have huge tables, was a long way off.

    The problem with a big address space is that memory was too expensive in those days to deal with huge address tables. A big issue was locative vs non-locative address spaces. In a locative address space, there's a hierarchy - you can take some part of the address and make a local decision about what direction to go, even if you don't have enough detailed information to get to the final destination. IP was originally organized like that - routers looked up class A, B, and C networks. A huge, flat address space implemented using multi-level caches was way beyond what you could do in a router back then. Routers used to be dinky machines, with less than one MIPS and maybe 256K of RAM.
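The classful lookup described above is just a test on the leading bits of the first octet; a minimal sketch, purely illustrative:

```python
def ipv4_class(first_octet: int) -> str:
    """Classify an IPv4 address the way old classful routers did,
    by inspecting only the leading bits of the first octet."""
    if first_octet < 128:      # leading bit 0:    class A (8-bit network)
        return "A"
    if first_octet < 192:      # leading bits 10:  class B (16-bit network)
        return "B"
    if first_octet < 224:      # leading bits 110: class C (24-bit network)
        return "C"
    return "D/E"               # multicast and reserved space

for ip in ("10.1.2.3", "172.16.0.1", "192.0.2.1"):
    print(ip, "-> class", ipv4_class(int(ip.split(".")[0])))
```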

    There was a lot of worry about packet overhead. Each key press on a terminal sends 41 bytes over a TCP/IP network. That was a big deal when companies had long-haul links in the 9600 to 56Kb/s range. Adding another 24 bytes to each packet to allow for future expansion seemed grossly excessive. Especially since the X.25 people had far less overhead.
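The 41-byte figure is just the minimal IPv4 and TCP headers plus one byte of payload, and at the link speeds mentioned the overhead was very real:

```python
# Where "41 bytes per key press" comes from: minimal headers + 1 byte.
ip_header, tcp_header, payload = 20, 20, 1
per_keypress = ip_header + tcp_header + payload
print(per_keypress)  # 41

# Time that one keystroke's packet occupies a 9600 bit/s link:
seconds = per_keypress * 8 / 9600
print(f"{seconds * 1000:.1f} ms")  # ~34.2 ms
```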

    So there were good reasons not to overdesign the system. I don't blame Cerf for that.

    The foot-dragging on IPv6 is excessive. The big deployment problem was getting it into everyone's Windows desktop. That's been done.

  • I feel a bit guilty myself now; I got a block of 16 IPv4 addresses last week when I changed ISPs. Although they also give me real, honest, non-tunnelled IPv6 too.

    C'mon Slashdot, start supporting IPv6! Even YouTube's on there now!

  • Here's a question for the day: Why did they pick a class A network to place the local machine address ( in? Why not

    • by Nightwraith ( 180411 ) on Friday October 22, 2010 @02:32PM (#33988452)

      I don't know about you, but I'm extremely satisfied that my interface's home is in a Class A network.

      I mean, who wants to live in a sub-class neighborhood?

    • It's a test *network* that RFC 790 defined. Normally it's used for loopback, but it could be used for other testing, including socket-like things for a machine to talk to itself.

      And it's not just a single address: you'll get a response from any address in that network, but those packets will never appear on a real network outside your machine.
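This is easy to demonstrate with Python's standard ipaddress module: the whole 127.0.0.0/8 network is treated as loopback, not just one address.

```python
import ipaddress

# The loopback allocation is an entire class A network (127.0.0.0/8);
# every address inside it counts as loopback.
loopback_net = ipaddress.ip_network("127.0.0.0/8")
for addr in ("127.0.0.1", "127.42.42.42", "127.255.255.254"):
    ip = ipaddress.ip_address(addr)
    print(addr, "loopback?", ip.is_loopback, "in 127/8?", ip in loopback_net)
```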

    • by Sloppy ( 14984 ) on Friday October 22, 2010 @07:15PM (#33991948) Homepage Journal

      I could explain this to you, but I would have to write a science fiction novel to do it. Well ok, I'll summarize the novel. Just remember this is a selective summary; pretend that all sorts of really cool things are happening and my characters are totally interesting and the plot is fucking fantastic. Can you do that for me, Wowbagger? Ok.

      In an alternate universe, the IP4 designers did just as you suggest, and the loopback network was Class C. In this alternate universe, other things went in a different direction too. By 2010 we all have CPUs with thousands of cores, but they all run at 1 MHz and programmers discuss ways to improve the linearization of their code.

      And we all have a weird crippled piece of shit operating system, which got popular despite all its limitations. (This may seem hard to believe to us, but remember I'm talking about an alternate reality.) One of its limitations is that its networking code doesn't deal with port numbers, because the designers thought that was a waste of 16 bits. (Computers in this reality have about as much memory as what we're used to, but there are more addresses and the words are 4 bits wide, so working with 16 bit data is kind of a pain in the ass.) Another of its limitations is that it has no IPC as we currently know it. Fortunately in the 1990s some programmers "invented" IPC by having each process use the loopback network, but since there are no port numbers, each process has to have its own address on the loopback network so that the OS can sort out what process gets what message. This inevitably led to mocking jokes:

      "255 loopback addresses ought to be enough for anyone." -- Vint Cert

      There were terrible hacks for running hundreds of processes and having them all be able to talk to one another, where a proxy process would emulate a sub-loopback network for 254 other processes and present a single loopback address to the OS. It was such a broken, terrible system, that it delayed the popularization of personal computer networking, so there was no "mainstream" use of the internet and the supply of IP4 addresses lasted much longer. In 2010, there was no non-loopback address shortage; it wasn't expected for another decade.

      Then one day a poster named whoasacker got on Hyphencolon and asked, "Why didn't they just use a Class A network for the loopback?" And a poster named Slippery answered, explaining, "In an alternate universe, they did..."

  • by Anonymous Coward on Friday October 22, 2010 @02:27PM (#33988358)

    Choosing 32 bits for IPv4 was reasonable at a time when 56kbps was considered a fast link. The real problem is that when IPv6 was designed, it did not allow IPv4 to be included as a subspace, so you cannot have an IPv4 address that is a valid IPv6 address. That means that there is no soft migration path from IPv4 to IPv6. The people who designed IPv6 did not consider the problems of real-world users; they designed in a vacuum. A properly designed IPv6 would be in widespread use by now, and the problem would be under control.

    • by Anonymous Coward on Friday October 22, 2010 @05:13PM (#33990664)

      IPv4 was created decades before 56kbps was considered a fast link.

      I've heard this complaint before about IPv6 not being backwards compatible, but, and no offence, I've never heard a constructive argument about how it should have been designed. I have my doubts that people who make this complaint have actually sat down and worked through the details of how they would have made IPv6 backwards compatible.

      Consider a hypothetical IPvA (short for IPvAwesome) which obsolesces IPv4 and is backwards compatible. We have to imagine that the IPvA address space is bigger than 32 bits, either a fixed larger address space or a variable-length "extension" address stuck in the optional parts of the IP header or something like that. The problem is that no matter what mechanism you choose, every packet you send across the Internet is going to hit a 10 year-old router that's never even heard of IPvA. There's a 100% chance this router will have no idea whatsoever what to do with the parts of the IP header it's never seen before. If you're lucky the router will just drop the packet as being malformed. If you're unlucky maybe it'll do something silly like truncate the packet down to the RFC-specified 32-bit IPv4 address and your reply packets will end up getting routed to China somewhere.

      The problem is this: whatever protocol you put in to replace IPv4, most of the infrastructure on the Internet will have no idea what to do with it. That means it's virtually impossible that you'll ever be able to seamlessly bridge between stupid old ignorant IPv4 routers and the more aware routers.

      What you could do is have routers that nicely bridge between IPvA and IPv4. So you send out an IPvA packet and it magically finds its way to a router that speaks both IPvA and IPv4 and can nicely bridge between them. That would be cool, and in fact, I've just described to you how 6to4 works.

      Truth be told, even if you sat down and came up with a new protocol that was designed for nothing else but bridging between codgy old IPv4 routers and some kind (any kind!) of new Internet protocol, I doubt you could do better than IPv6 and its cohorts (6to4, 6over4, 6in4, 4in6, etc.)

      Maybe I'm missing something, but if you're going to make this complaint, you're going to have to come up with something better than "they didn't think about backwards compatibility". They did think about backwards compatibility and they did it in the best way possible from what I can tell.
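The 6to4 mechanism mentioned above is mechanical: a site's /48 prefix is the well-known 2002::/16 prefix with the site's 32-bit IPv4 address embedded directly after it. A sketch using the standard ipaddress module (192.0.2.1 is just a documentation example address):

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive a site's 6to4 /48: the 16-bit 2002 prefix, then the IPv4 bits."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    prefix = (0x2002 << 112) | (v4 << 80)   # position the bits in a 128-bit value
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```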

  • by harald ( 29216 ) on Friday October 22, 2010 @02:38PM (#33988518) Homepage

    $ host -t AAAA has no AAAA record

    'nuff said. Our organisation (that's me) is already 96% dual-stack. We treat non-ipv6 connectivity as fatal. When are you gonna do it?

  • ...the ongoing move to IPv6 is impossible.

    There's zero economic incentive to stand up an IPv6 service, and there won't be until a critical mass of clients have only IPv6 connectivity (no IPv4). There's no economic incentive for an ISP to provide IPv6 unless its customers demand it, and they don't care because there aren't any services or content exclusively on IPv6.

    It's sad to us geeks, but the future is an internet of many-layered NAT where connections can only be routed from end-user to well
