The Internet

The Internet Is 'Built Wrong'

An anonymous reader writes "API Lead at Twitter, Alex Payne, writes today that the Internet was 'built wrong,' and continues to be accepted as an inferior system, due to a software engineering philosophy called Worse Is Better. 'We now know, for example, that IPv4 won't scale to the projected size of the future Internet. We know too that near-universal deployment of technologies with inadequate security and trust models, like SMTP, can mean millions if not billions lost to electronic crime, defensive measures, and reduced productivity,' says Payne, who calls for a 'content-centric approach to networking.' Payne doesn't mention, however, that his own system, Twitter, was built wrong and is consistently down."
  • by boxless ( 35756 ) on Tuesday October 28, 2008 @04:32PM (#25546821)

    On the shoulders of giants we stand.

    None of these ideas for improvement is new. But neither are they working. And the internet as we know it is working quite well, far beyond what anyone would have predicted.

    Are there things to be fixed? Sure, around every corner. But I'm not going to listen to some guy from some wicked kewl startup in SFO tell me how to do it.

  • by ergo98 ( 9391 ) on Tuesday October 28, 2008 @04:39PM (#25546937) Homepage Journal

    It's hardly going out on a limb criticizing IPv4 -- it has proven an easy target for going on two decades now, its weaknesses apparent to all.

    And the switch to IPv6 is happening. Many backbone providers are rolling it out, and it is gaining wider support among mainstream operating systems and applications. The only reason the migration hasn't been hastier is that NAT really did remove the urgency.

  • Re:n00b (Score:3, Informative)

    by Paralizer ( 792155 ) on Tuesday October 28, 2008 @04:45PM (#25547049) Homepage
    Interesting idea and I see why that would be a desirable feature. You give it google.com and you get routed directly to google.com without a potential MITM DNS attack. However, it seems to me that DNS and routing should be separated as they perform entirely different functions.

    Routing is how to get there.
    DNS is where you want to go.

    If there were an efficient way to combine them that would be a cool feature, but routing really should only be how to get from point A to point B. What would you do about things like load balancing, failover, and anycast [wikipedia.org]?

    The biggest problem with your idea is that DNS updates take way too long to propagate, whereas routing updates are much faster, especially if BGP is not involved. What happens if foobar.com's main servers go down and you need to reroute to the backups? You'd have to update the DNS, and that could take a long time to propagate to all the DNS servers around the world. A routing update would be much faster.

    Sounds good on paper, but I don't think it would work.
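
    A rough way to see the propagation point, sketched in Python with the third-party dnspython package (example.com standing in for foobar.com; an illustration only, not anyone's actual setup):

    import dns.resolver  # pip install dnspython

    # Every resolver that has cached this answer may keep serving the old
    # record for up to TTL seconds after the zone changes -- that lag is the
    # slow "propagation" above. Routing updates carry no comparable cache.
    answer = dns.resolver.resolve("example.com", "A")
    for record in answer:
        print(record.address)
    print(f"caches may serve this answer for up to {answer.rrset.ttl} seconds")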
  • Re:*Brain Asplodes* (Score:3, Informative)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Tuesday October 28, 2008 @05:05PM (#25547367) Homepage

    It's very easy to sit back today and say, "Wow it could have been so much better!" But that is armchair crap at the best of times...

    Sure, but you don't have to be some kind of genius to see that protocols like FTP and SMTP have some problems. Although I'm not qualified to do anything about it, I run into real trouble because of those problems and limitations, so I'd like to reserve the right to complain even if it is "armchair crap".

    I *do* find it frustrating how common and necessary FTP can be in spite of it being really awful, particularly because I know that FTP is used largely out of inertia and familiarity, and out of ignorance of its flaws.

    I'm not arguing that there's anything very new or interesting in the article, though.

  • small hint (Score:3, Informative)

    by circletimessquare ( 444983 ) <circletimessquar ... m minus language> on Tuesday October 28, 2008 @05:16PM (#25547531) Homepage Journal

    security and trust models are philosophically and technologically joined at the hip with command and control models

    you build a supersafe, 100% trustworthy internet, and you build the internet that beijing and tehran love

    the internet is a wild and wacky and dangerous place. and it is also free. sure, there is the tragedy of the commons, but i'd much rather wade through GNAA and /b/tard comments than deal with the Internet Security Office. which is what security and trust models empower

    don't knock what you got until you lose it, and forever more lament the loss of the golden years

  • by Anonymous Coward on Tuesday October 28, 2008 @05:20PM (#25547569)

    I don't think Akamai is 20%. They are 12.5% of the traffic on my network. Google (youtube/doubleclick) is pretty close to the same volume, as is Limelight.

  • by Dr_Barnowl ( 709838 ) on Tuesday October 28, 2008 @06:17PM (#25548225)

    IPv6 ... still a limit, who cares if it's 10 or 10,000 years in the future

    2^128 addresses, or roughly 2^52 addresses for every star in the observable universe, compared to 2^32 for IPv4.

    IPv6 may well not be the last protocol on the web, but it won't be for lack of addresses.
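
    For anyone checking the arithmetic, a quick sketch (assuming roughly 10^23 stars in the observable universe; estimates vary by an order of magnitude):

    import math

    total_v6 = 2 ** 128          # IPv6 address space
    stars = 10 ** 23             # rough star count, observable universe
    per_star = total_v6 / stars
    print(f"2^128 ~= {total_v6:.2e}")                   # ~3.40e+38
    print(f"per star ~= 2^{math.log2(per_star):.0f}")   # about 2^52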

  • by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Tuesday October 28, 2008 @06:26PM (#25548317) Homepage Journal

    Not to address these specific issues, but that is a fallacy.

    Someone can point out something that is wrong without needing to create something better.

  • by caluml ( 551744 ) <slashdot@spamgoe ... minus herbivore> on Tuesday October 28, 2008 @08:32PM (#25549517) Homepage
    I never understand this argument. Why does it have to be backward compatible?
    You run the two protocols simultaneously for years, add AAAA records to DNS (which get looked up and tried before A records), and when you notice that no one is connecting to your services over v4 any more, you have a v6-only network.

    Look:

    $ telnet www.kame.net 80
    Trying 2001:200:0:8002:203:47ff:fea5:3085...
    Connected to www.kame.net.
    Escape character is '^]'.

    Try it yourself. Your box will look up the v6 address, try to connect, and if not, use the v4 address.
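
    A minimal sketch of that fallback in Python (getaddrinfo lists IPv6 results ahead of IPv4 on dual-stack hosts, so trying the results in order gives the v6-first behaviour described above):

    import socket

    def connect_prefer_v6(host, port):
        # getaddrinfo returns IPv6 results first on dual-stack systems,
        # so trying each result in order means "v6 first, fall back to v4".
        last_err = None
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(addr)
                return sock
            except OSError as err:
                last_err = err
        raise last_err if last_err else OSError("no usable address")

    sock = connect_prefer_v6("www.kame.net", 80)
    print("IPv6" if sock.family == socket.AF_INET6 else "IPv4")
    sock.close()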

    It's quite depressing really how people (mainly American, it seems) on Slashdot are so anti-IPv6. They bleat on about NAT, and how there are loads of addresses, and ask why on earth you would want your fridge to have an IP address. It's not just about the extra addresses. There. Did you get that?

  • Re:Satisficing (Score:3, Informative)

    by c_g_hills ( 110430 ) <chaz AT chaz6 DOT com> on Tuesday October 28, 2008 @08:58PM (#25549733) Homepage Journal

    Then there are the privacy issues -- DHCP IPv4 provides some masking, while IPv6 provides none whatsoever and likely gets archived.

    This is FUD. IPv6 has privacy extensions for stateless autoconfiguration that specifically address this problem. Please read RFC 3041 [rfc-archive.org]. It has been around since 2001.
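
    The core idea of those privacy extensions, sketched in Python (an illustration only; real implementations also manage address lifetimes and periodic regeneration, and the prefix here is just the documentation range):

    import ipaddress
    import secrets

    def temporary_address(prefix: str) -> ipaddress.IPv6Address:
        # Instead of deriving the 64-bit interface identifier from the MAC
        # address (EUI-64), use a random one, so a host can't be tracked
        # across networks by the bottom half of its address.
        net = ipaddress.IPv6Network(prefix)   # a /64 from the router advertisement
        iid = secrets.randbits(64)            # random interface identifier
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    print(temporary_address("2001:db8:1:2::/64"))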

  • Re:*Brain Asplodes* (Score:3, Informative)

    by Raenex ( 947668 ) on Tuesday October 28, 2008 @10:54PM (#25550521)

    I don't believe Rails is more crash-prone or scaling-challenged than other platforms.

    You can read the "Rails is a Ghetto" [zedshaw.com] rant and do a find for "restart". Apparently Rails had a buggy garbage collector.
