The Internet Is 'Built Wrong' 452

An anonymous reader writes "API Lead at Twitter, Alex Payne, writes today that the Internet was 'built wrong,' and continues to be accepted as an inferior system, due to a software engineering philosophy called Worse Is Better. 'We now know, for example, that IPv4 won't scale to the projected size of the future Internet. We know too that near-universal deployment of technologies with inadequate security and trust models, like SMTP, can mean millions if not billions lost to electronic crime, defensive measures, and reduced productivity,' says Payne, who calls for a 'content-centric approach to networking.' Payne doesn't mention, however, that his own system, Twitter, was built wrong and is consistently down."
This discussion has been archived. No new comments can be posted.

The Internet Is 'Built Wrong'

Comments Filter:
  • "Content centric"? (Score:5, Insightful)

    by KDR_11k ( 778916 ) on Tuesday October 28, 2008 @03:13PM (#25546511)

    Does that translate to "owned by the big media cartels"?

    • by OrangeTide ( 124937 ) on Tuesday October 28, 2008 @03:18PM (#25546605) Homepage Journal

      My buzzword filter prevented that term from reaching my conscious mind.

    • Re: (Score:3, Interesting)

      I got the impression that he was talking about divorcing the content from the presentation, which sounds fine in theory, but a lot of people want more control over the presentation... That was kinda the point of HTML in the first place; we'd have stuck with Gopher if all we wanted was pure content with a static presentation.

      Even in a modern context, we could have switched to XML to divorce the information from the presentation, and there hasn't really been a charge in that direction.

      It's hard to say wha

      • by Firehed ( 942385 ) on Tuesday October 28, 2008 @03:55PM (#25547237) Homepage

        Well, given that Twitter really only took off because of its API (which is XML-based), you could say that it really is taking off, especially with how many other user-content-driven sites have APIs. Beats the hell out of page scraping, anyways.

        The problem is that serving straight-up XML with an XSLT is rather flaky cross-browser (especially on mobile devices), and adds a level of confusion that not only isn't necessary in 99% of websites but is best piped through a semi-regulated system. Twitter is an awful example as they still don't have a business model (or even a revenue stream at all, AFAIK), but providing premium access to certain sections of an API or an increased request limit is certainly a valid way to monetize a service like Twitter, and that will quickly fall apart if they were to serve straight-up XML.

        Other than cross-browser standards support and a couple of quirky CSS attributes, there's really nothing wrong with separating the content and presentation with the systems that are widely in use today. They also allow users to override the presentation with their own stylesheet. Sure, you'd generally have to do it on a site-by-site basis, as there's neither a <content> nor a <menu> tag (but rather divs and lists with IDs set, with no cross-site consistency at all), but implementing that kind of system effectively would be beyond a nightmare. I suppose you could link out to a semantic XML version of a page via a meta tag, like how we currently handle RSS feeds (it could just be another xmlns attribute for this kind of thing, though you could get most of the info off of a full RSS feed anyways), but there are so few people who would want to override the default presentation of a site (and even fewer who would be bothered to do so) that it just doesn't make any sense, especially as there's currently no monetary incentive to do so.
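
        To make the RSS analogy concrete, here's a rough sketch (Python, standard library only; the markup and MIME types are made up for illustration) of that autodiscovery pattern: scan a page's <head> for <link rel="alternate"> entries pointing at machine-readable versions, the same way feed readers find RSS today:

            from html.parser import HTMLParser

            # Collect <link rel="alternate"> entries from a page's markup,
            # the same discovery trick feed readers use for RSS feeds.
            class AlternateFinder(HTMLParser):
                def __init__(self):
                    super().__init__()
                    self.alternates = []

                def handle_starttag(self, tag, attrs):
                    a = dict(attrs)
                    if tag == "link" and a.get("rel") == "alternate":
                        self.alternates.append((a.get("type"), a.get("href")))

            finder = AlternateFinder()
            finder.feed('<head>'
                        '<link rel="alternate" type="application/rss+xml" href="/feed.rss">'
                        '<link rel="alternate" type="application/xml" href="/page.xml">'
                        '</head>')
            print(finder.alternates)
            # [('application/rss+xml', '/feed.rss'), ('application/xml', '/page.xml')]

        A user's alternate renderer could then fetch the XML version instead of the styled page, for the handful of people who'd bother.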

        • by i.of.the.storm ( 907783 ) on Tuesday October 28, 2008 @06:21PM (#25548831) Homepage

          "How are sites slashdotted when nobody reads TFAs?"

          That really is one of the great mysteries of slashdot.

    • Does that translate to "owned by the big media cartels"?

      I think it means, roughly, identifier vs. locator oriented, so that you ask some (presumably, nearby in network topology) server to get you a particular identified piece of content, and it does so efficiently without having to always go to the origin server and get it directly from there, unless the need to do that is inherent in the request.

      HTTP/1.1 supports that quite well (including with negotiation of content types); it's just a matter of having the
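
      For what it's worth, the revalidation machinery is already in HTTP/1.1. A minimal sketch (Python, standard library; the host, path, and ETag value are hypothetical) of a nearby cache checking whether its copy is still good without pulling the whole body from the origin:

          import http.client

          # Conditional GET: the cache presents the validator it stored
          # earlier; a 304 means "serve your copy" and no body is re-sent.
          conn = http.client.HTTPConnection("example.com")
          conn.request("GET", "/article", headers={
              "If-None-Match": '"abc123"',        # ETag from the cached copy
              "Accept": "application/xhtml+xml",  # content-type negotiation
          })
          resp = conn.getresponse()
          if resp.status == 304:
              pass                 # Not Modified: use the cached copy
          else:
              body = resp.read()   # fresh content; update the cache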

    • by Inda ( 580031 ) <slash.20.inda@spamgourmet.com> on Tuesday October 28, 2008 @03:56PM (#25547257) Journal
      Ah, I thought it was the little hole in the middle of a DVD. Thanks for clearing that up.
    • by Arancaytar ( 966377 ) <arancaytar.ilyaran@gmail.com> on Tuesday October 28, 2008 @03:57PM (#25547261) Homepage

      That's what I guessed, yeah. It's the same philosophy that causes right-click-disabling Javascript.

      --

      The danger of letting the "content-centric" people take over the internet is of course that web browsers will be mandatory closed-source clients that decode the heavy-duty encryption while a camera on your computer checks to make sure nobody else is looking at your screen for free.

  • *Brain Asplodes* (Score:5, Insightful)

    by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Tuesday October 28, 2008 @03:14PM (#25546537) Journal

    Okay, so a guy who works for Twitter, a crash-prone, non-scaling application, says that the internet is "built wrong", where one of the examples of wrong is scaling. He goes on to list a few specific technologies that he thinks are good examples of "wrong", like IPv4 and SMTP, which won out against better-designed (but strangely unmentioned) alternatives because of wacky market stuff, which, again, he doesn't describe.

    No one who knows anything about the Internet would say that it was perfect. It's not even close. There are a lot of places where unholy kludges exist and are perpetuated because it's a lot easier to live with them than it is to try and change everything that depends on them. Things like, for example, Twitter.

    Sure, there were alternatives, but they were all either patent-encumbered, or hard to deploy, or too complex to easily develop for. They died. It's called competition. TCP/IP and SMTP came out the other side and grew into cornerstones of the largest network this world has ever known, in a shockingly short period of time. No, not perfect, but pretty damn good nonetheless.

    It's very easy to sit back today and say, "Wow, it could have been so much better!" But that is armchair crap at the best of times... I'd sneer if Vint Cerf said it. Coming from someone who demonstrably can't do better, and can't even be bothered to champion a specific alternative... That's as pointless and lacking in content as most of the crap that comes through his crappily coded service.

    • by OrangeTide ( 124937 ) on Tuesday October 28, 2008 @03:16PM (#25546567) Homepage Journal

      It's not Twitter's fault, it's the Internet's!

    • by rob1980 ( 941751 )
      It takes one to know one, so even though his system is down frequently because of "too many Tweets" at least he knows a borked system when he sees one!
    • Re: (Score:3, Insightful)

      by AceJohnny ( 253840 )

      Sure there were alternatives, but they were all either patent-encumbered, or hard to deploy, or too complex to easily develop for.

      Or they came too late or didn't survive the competitor's marketing onslaught. Remember the power of inertia.

      Speaking of Twitter, there are alternatives [identi.ca], and there are better architectures [metajack.im]

      • by Firehed ( 942385 )

        Yes, but Twitter has the audience already. Being the first is often far more important than being the best, unfortunately.

        • Re: (Score:3, Insightful)

          Tell it to Netscape. ;)

          These things are fad driven; if Twitter doesn't get its act together, someone else will do it better.

          And since Twitter is basically non-revenue generating, it's not like they're getting anything out of their early dominance except user goodwill.

          • Re: (Score:3, Funny)

            by ozphx ( 1061292 )

            This is the new revolutionary* economy* of Web 2.0*. You need to understand that it's the synergy* with the community* that drives these content-driven* revolutions*.

            More seriously, their exit strategy is to get bought out by some moronic yahoo (pun intended).

            IPOs are kinda out these days; the trick is to be bought out and make turning a bunch of poor uni-student subscribers into a revenue stream someone else's problem.

            * Bullshit.

    • by JonTurner ( 178845 ) on Tuesday October 28, 2008 @03:27PM (#25546739) Journal

      Thank you for that. It's exactly right.

      What he fails to realize is that everything is an incremental, transitional technology. Nobody planned out this current hideous jumble of technologies we call teh intertubez; it started with a simple message protocol on top of a network protocol and evolved, and evolved, and evolved further from its inferior predecessors; at each stage, incremental improvements happened as necessary.

      Web 1.0 was "good enough" for some tasks. But when it wasn't, the technology adapted. It remains as good as the need requires and the market demands at this moment. Mistakes are culled, successes survive. A giant, electronic petri dish, if you will.

      • by RiotingPacifist ( 1228016 ) on Tuesday October 28, 2008 @04:56PM (#25548009)

        shhh, we all know evolution is a lie! It was all designed by some inventor, and the tubes are in exactly the same form they were 3 years ago (any good source cites 3 years as the age of the internet, and who am I to go questioning this when I find some of these "older" sites)

      • by Target Practice ( 79470 ) on Tuesday October 28, 2008 @05:35PM (#25548415)

        Web 1.0

        NO! Dammit! I refuse to let you retroactively coin a phrase for an era in which all of the damned rabid PHP weasels had no part!

        You can have your blogosphere, twitter, all those lame-ass social networking sites that do nothing for the good of mankind; but I have to draw the line when you reach into the past and blaspheme the good old days of gopher, FTP, and just reading the web page for the content and not the blinking god damned gnome game!

        It was NOT web 1.0. It was an era of purity of information and good porn the likes of which will never grace your browser again!

        Now, take your PHP for weasels book and get off my lawn!

        • Actually... (Score:4, Insightful)

          by Moraelin ( 679338 ) on Wednesday October 29, 2008 @02:24AM (#25551813) Journal

          Actually, the funny thing is that Web 2.0 vs. Web 1.0 wasn't even supposed to be about technology as such. And the inventor of that buzzword still insists that it isn't, long after the Grinch... err... the marketing bullshitters stole it and ran away with it.

          Web 2.0 -- and by contrast Web 1.0 -- wasn't about techno-fetishism, but about techno-utopianism. It has nothing to do with PHP or any other particular technology.

          The basic idea of Web 2.0 was that if you put a million monkeys on a million keyboards, they're still monkeys. But if you interconnect them and let them write and edit each other's content, now that's teh nirvana and age of enlightenment. Give the users wikis instead of writing your own content. (I'm sure you'll be thrilled to discover that your product was made from baby seals and your CEO blows goats, but, hey, if the users wrote it, it must be true. 'Cause emergent collective intelligence is never wrong;) Have forums. Let the users tag your content instead of categorizing it or any other automated way of finding it. (I'm sure the tags on Slashdot would be sooo much more useful to find an article than full-text search;) Etc.

          At a basic level, none of those _really_ needs PHP or JavaScript or anything. You could make a primitive almost-wiki back in the day, by just giving the users FTP access to the site and letting them edit and re-upload the HTML files.

          Anyway, in true zealot fashion, where no price is too high for his utopia if someone else pays it, this was wrapped in a further lie: that, see, that's also the path to making the big bucks, and verily everyone will beg to give you their money if you only have a wiki. I guess you can't really preach stuff like "why you should blow your money to give us our free, collaborative online utopia", so it had to be repackaged as "you could be the next Google if you do!"

          No, seriously. If you listen to him, Tim O'Reilly looked at what companies survived the dot-com bubble and what their defining characteristics were. And somehow he managed to completely miss the fact that it's those who had a business plan, silly. E.g., the reason Google thrived was that it became the damn best ad provider. Nah, what he saw is that it was those with wikis and BitTorrent and other collaborative stuff. That's the way to the big bucks.

          So he envisioned and preached a DotCom Bubble 2.0... er... Web 2.0 golden age, where everyone has those, and someone gives them money for nothing for doing it.

    • by j_166 ( 1178463 ) on Tuesday October 28, 2008 @03:44PM (#25547017)

      "There are a lot of places where unholy cludges exist and are perpetuated because it's a lot easier to live with them than it is to try and change everything that depends on them."

      You're telling me. I personally witnessed a critical point that 75% of all internet data passes through, at an unnamed very large university, which is powered by a goddamned lobster on a treadmill! If Pinchy ever gives up the ghost, we are all well and truly FCKed.

    • Re: (Score:3, Informative)

      by nine-times ( 778537 )

      It's very easy to sit back today and say, "Wow it could have been so much better!" But that is armchair crap at the best of times...

      Sure, but you don't have to be some kind of genius to see that protocols like FTP and SMTP have some problems. Although I'm not qualified to do anything about it, I suffer due to some of those problems and limitations, so I'd like to reserve the right to complain even if it is "armchair crap".

      I *do* find it frustrating how common and necessary FTP can be in spite of it being really awful, particularly because I know that FTP is used largely out of inertia and familiarity, and out of ignoranc

  • Van Jacobson, an award-winning specialist in networking to whom the Internet owes its continued existence, gave a talk at Google in 2006 outlining a content-centric approach to networking. Jacobson's approach leverages wide distribution of data across a variety of devices and media, while baking in security and simplifying the development model for networked applications.

    If the majority of Internet usage continues to be about content, an approach like Jacobson's would be not just prudent, but necessary. You needn't do more than attempt to watch a streaming video on a busy office LAN or oversubscribed DSL circuit to understand that even the best-served markets for Internet connectivity are struggling to keep up with demand for networked content. Add to this that providing adequate security models for such content is a virtual impossibility on today's Internet, and the need for a better approach is even clearer.

    When Jacobson says things should be focused on content, I think all he means is that you should ask for content and the internet should be able to find it using many different ways (IP, VPN, zeroconf, proxies, you name it). That's what he means by that stupid buzzword "content-centric." And that's not going to solve anything! Everything else he preaches sounds like disseminating content once from New York to Seattle so that when an Oregon resident wants to read the Wall Street Journal they don't make 8 hops across the country for every article. You move the data once closer to the consumer and then you have less network usage.
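
    In code, the model amounts to something like this toy sketch (Python; the two-node "network" and the article are obviously hypothetical): name data by its hash, let whatever node holds it answer, and everything touched along the way becomes cacheable:

        import hashlib

        # Content-centric in miniature: requests name *what* you want
        # (a hash), not *where* it lives, so the nearest holder answers.
        def name_of(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        origin = {}   # New York
        cache = {}    # Seattle

        article = b"WSJ front page"
        origin[name_of(article)] = article

        def fetch(content_name: str) -> bytes:
            for node in (cache, origin):                  # nearest node first
                if content_name in node:
                    data = node[content_name]
                    assert name_of(data) == content_name  # integrity comes free
                    cache[content_name] = data            # data moves closer
                    return data
            raise KeyError(content_name)

        fetch(name_of(article))   # first call hits New York
        fetch(name_of(article))   # second call is served from Seattle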

    I may be misinformed, but how is this any different from a Content Delivery Network (CDN) [wikipedia.org]? I believe these were all the rage years ago (look at the commercial list at the bottom of the article). They are nothing new. So are you proposing that the internet have these built into it to increase efficiency and network usage? Wouldn't it just be easier to let people pay for these services like we've been doing? Oh no, my bandwidth is being eaten up and people on the other side of the country are experiencing huge latency! Time to fork out a monthly fee to a CDN, I guess. It'll be more expensive to host a large site but nothing some ads couldn't take care of--free market to the rescue.

    I'm sick of people who get up on a soapbox and rip apart a good idea because it's not perfect. Bitch bitch bitch, IPv4 has been broken from the start. Well, duh, do you think IPv6 is any less flawed? There's still a limit; who cares if it's 10 or 10,000 years in the future, because it's going to have to be dealt with at some point!

    This article really is a piece of work. A man who works on the API of something that thrives on "a broken internet" bashing said internet and pointing at others to dream up ideas to fix what he thinks is wrong. All I see is griping, not a single original solution to these problems. Yeah, I'm sorry consumers don't have the same priorities and requirements that you do but, well, that's why you're going to see a technology like Windows 98 triumph over Linux. Align yourself with your user or consumer and you'll start to understand things.

    • Re: (Score:3, Interesting)

      by Pollardito ( 781263 )

      I may be misinformed but how is this any different than a Content Delivery Network (CDN) [wikipedia.org]? I believe these were all the rage years ago (look at the commercial list at the bottom of the article).

      Akamai claims that 20% of the internet's traffic flows through their network, so I'd say they're all the rage now

    • by Dr_Barnowl ( 709838 ) on Tuesday October 28, 2008 @05:17PM (#25548225)

      IPv6 ... still a limit, who cares if it's 10 or 10,000 years in the future

      2^128 addresses, or 2^52 addresses for every observable star in the known universe. Compared to 2^32 for IPv4.

      IPv6 may well not be the last protocol on the web, but it won't be for lack of addresses.
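
      The arithmetic checks out, give or take a star count (quick sketch in Python; the 2^52 figure implies roughly 7.6 x 10^22 stars, which is within the usual estimates for the observable universe):

          # Sanity-checking the address math above.
          ipv6_space = 2 ** 128
          ipv4_space = 2 ** 32
          stars_implied = ipv6_space // 2 ** 52   # 2^76, about 7.6e22 stars
          print(ipv6_space // ipv4_space)         # 2^96 IPv4 internets fit in IPv6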

  • by mcgrew ( 92797 ) * on Tuesday October 28, 2008 @03:16PM (#25546573) Homepage Journal

    So was a 1932 Ford. So were the highways in 1932. So was an analog computer in 1959.

    The only thing wrong about the internet is that it has become obsessed with money rather than information. Technical issues will be worked out over time.

  • by hansraj ( 458504 ) on Tuesday October 28, 2008 @03:16PM (#25546575)

    Film at 11!

    The internet wasn't designed to be used the way it is being used today anyway. So, you keep finding shortcomings and work your way around them. SMTP has problems? Well, here, use some PGP and *some* of the problems are taken care of. Most things work in an evolutionary way anyway.
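
    A minimal sketch of that "layer PGP on top" fix (Python calling the GnuPG command line; assumes gpg is installed and that a key for the made-up recipient address is in the local keyring):

        import subprocess

        def pgp_protect(body: str, recipient: str) -> str:
            """Sign and encrypt body, returning ASCII-armored output."""
            result = subprocess.run(
                ["gpg", "--armor", "--sign", "--encrypt",
                 "--recipient", recipient],
                input=body.encode(), capture_output=True, check=True)
            return result.stdout.decode()

        protected = pgp_protect("Meet at noon.", "alice@example.com")
        # `protected` can now travel over plain SMTP. Note that only the
        # body is shielded; headers (From, To, Subject) stay visible,
        # which is why this fixes only *some* of SMTP's problems.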

  • by jeffmeden ( 135043 ) on Tuesday October 28, 2008 @03:16PM (#25546579) Homepage Journal

    This is very ironic coming from a web-2.0 junkie who captains a site that is *constantly* having outages.
     
    I think this may be semantics, but the Internet was not built wrong, it was *used* wrong. The original design perfectly met the needs of the time. Expectations change, and all we are seeing is that under our *present* needs the system can bend in some areas, and break in others. If we could go back and "fix" it we would do it a lot differently, of course. Hindsight is 20/20 after all.
     
    I, for one, think it was put together pretty well. It's up to us to keep it working; the internet is always ready for re-invention.

  • by $RANDOMLUSER ( 804576 ) on Tuesday October 28, 2008 @03:17PM (#25546585)
    HTTP and JavaScript on TCP/IP over IPV4 is "not the best it could be"?

    Wow, I'm fascinated by your ideas and would like to subscribe to your newsletter.
  • by TimHunter ( 174406 ) on Tuesday October 28, 2008 @03:18PM (#25546597)

    The only thing wrong with Twitter is that it has too many users. The way to fix it is to stop using it.

  • I have found nothing useful in Twitter. This is not the revelation that will change my mind.
  • I work for Twitter and now somebody other than my mom may listen to me! Twitter is important damnit!
  • Let me get this straight:

    Twitter?! Twitter redefined Fail when it comes to running large sites/services.

    They should be the last people to listen to on this subject.

  • Because at 2^32-1 addresses it simply stops. We are running out of IPv4 addresses and there is only one real solution: adopt IPv6. Unfortunately there is some extreme prejudice against IPv6, especially here at Slashdot. And if a contingent of techies is against it, the spread of IPv6 will be slowed by non-adoption.

    Wake up, people: we are running out of addresses and time. Don't settle for half-baked NAT; adopt IPv6. Whine to your ISP and to your boss that it is absolutely necessary to move to IPv6.

    That or pray

    • by mcgrew ( 92797 ) * on Tuesday October 28, 2008 @03:30PM (#25546789) Homepage Journal

      And stack some canned soup and shotguns

      I've found that a simple crank-operated can opener works far better than a shotgun.

      And soup? Screw the soup, stockpile beer!

    • Sounds great, you can pay for it.

      I haven't seen any real estimates of the cost of moving to IPv6, but it's going to be substantial. How much do you have in your wallet?

    • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Tuesday October 28, 2008 @03:35PM (#25546857) Journal

      Meh. I've got access to a block of addresses that is so hilariously larger than anything I'll ever need that I NAT some of my home servers through proxies at work for the static IP. If we reclaimed all the unused addresses, we could string out IPv4 for another decade or so.

      Moving to IPv6 is one of those things that sounds like it's going to be soooooo easy, and has the potential to be hell on earth. Adoption is happening, slowly but surely. I see no reason to panic and try to force a quick transition when the only thing that will get us is chaos.

    • Kapor is in his element now, fluent, thoroughly in command in his material. "You go tell a hardware Internet hacker that everyone should have a node on the Net," he says, "and the first thing they're going to say is, 'IP doesn't scale!'" ("IP" is the interface protocol for the Internet. As it currently exists, the IP software is simply not capable of indefinite expansion; it will run out of usable addresses, it will saturate.) "The answer," Kapor says, "is: evolve the protocol! Get the smart people together

    • Because at 2^32-1 addresses it simply stops. We are running out of ipv4 and there is only one real solution.

      The "Real Solution" is to stop running out, whether it be due to more practical usage or due to changing over to a system which has a larger address range. Who says you *need* to adopt a completely different system? While there are plenty of advantages to IPv6, don't think you will win anyone over with the "but our 4,294,967,294 addresses are almost gone!" argument. You will not.

    • It's hardly going out on a limb criticizing IPv4 -- it has proven an easy target for going on two decades now, with its weakness apparent to all.

      And the switch to IPv6 is happening. Many backbone providers are rolling it out, and it is gaining wider support among mainstream operating systems and applications. The only reason the migration hasn't been hastier is that NAT really did undermine the need for expediency.

    • Re: (Score:3, Insightful)

      IPv4 was defined by RFC 791, which was published in 1981. It allows for 4.295 billion addresses. In 1981, the entire population of the world was about 4.5 billion. Sure, that means that from the start, there weren't enough IP addresses to go around, but back then, it was unreasonable to expect that even 1% of the population would have use for an IP address.

      The limitation on IPv4 addresses isn't a design flaw. It's just a symptom of IPv4 being old. It has just about outlived its usefulness.

      (In fact, using mo
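
      The parent's numbers, worked out (trivial Python, using the figures quoted above):

          # RFC 791's 32-bit address space vs. the 1981 world population.
          addresses = 2 ** 32                  # 4,294,967,296
          population_1981 = 4.5e9
          print(addresses / population_1981)   # ~0.95: short of one address
                                               # per person even on day one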

  • by CheeseburgerBrown ( 553703 ) on Tuesday October 28, 2008 @03:21PM (#25546661) Homepage Journal
    Many systems that have grown in an organic or semi-organic fashion are non-optimal (like, for example, most people you know and every decision ever rendered by a committee).

    With something as complex and "live" as the Internet, process is more important than paradigm: the real question is how to optimize from the current live state, rather than mumbling pointlessly about how it should've had better roots.

    Shoulda but didna. So, let's move on.

    Also, I tried to send this guy a tweet, but all I got was a message saying, "I'm sorry, a problem has occurred; please reload the page."

    Wanker.
  • X Windows?? (Score:4, Insightful)

    by kisrael ( 134664 ) on Tuesday October 28, 2008 @03:22PM (#25546665) Homepage

    He quotes Alan Kay:
    "HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that understands its formats... You don't need a browser, if you followed what this Staff Sergeant in the Air Force knew how to do in 1961. You just read [data] in. It should travel with all the things that it needs, and you don't need anything more complex than something like X Windows."

    Whoa.
    I'm not sure which is worse: the idea of every screen being rendered on a server and then piped over to the user, or every interaction being an object sent with its data, which seems like a security nightmare.

    Besides, don't most of us download, say, the browser anyway? Kind of a bootstrap thing.

    It's kind of like those "enhanced" DVDs, then: put one in a PC, and it offers to install some weird-ass player...

    • Re: (Score:2, Insightful)

      by dedazo ( 737510 )

      X was built as a graphical client-server protocol, and it scales OK for a few dozen users (with caveats w/r/t your bandwidth, etc.), although it's not used that way very much anymore.

      But it would never scale to the level that a just-serving-html-ma'am BSD/Apache box with a gig of RAM does.

    • by jjohnson ( 62583 ) on Tuesday October 28, 2008 @03:53PM (#25547183) Homepage

      Sorry, I couldn't read the rest of your post. My brain short-circuited at this line:

      don't need anything more complex than something like X Windows.

      • Re: (Score:3, Interesting)

        by kisrael ( 134664 )

        Heh, yeah I noticed that.
        Reminds me of that cyberpunk parody http://www.netfunny.com/rhf/jokes/91q1/ozpunk.html [netfunny.com] :
        I needed to read the X Windows/Motif 1.1 manual, so I came to the bar and asked Ratz to fix the documentation data in liquid form for me. It made a bitter, painful drink, but it was better than spending days turning pages in realspace.

        Ratz put a bucket of liquid in front of me.

        "I wanted a glass of docs, Ratz. What the hell is this?" I barked.

        "Motif don't fit in a glass anymore," he barked back.

        I

  • by ghmh ( 73679 ) on Tuesday October 28, 2008 @03:22PM (#25546667)

    This can basically be summarised as "Hindsight is a wonderful thing... if only we knew then what we know now..."

    This spurious argument also equally applies to:

    • Human evolution
    • Town planning
    • How you should have described the haircut you wanted, instead of the one you got

    amongst countless other things...

    (Oh noes, someone is wrong on the internet [xkcd.com])

  • This article reads like a Ron Paul supporter griping about the evils of government given the efficacy of the invisible hand.
  • Satisficing (Score:3, Interesting)

    by redelm ( 54142 ) on Tuesday October 28, 2008 @03:23PM (#25546695) Homepage
    "Better is the enemy of the good". Sure, there are apparent (theoretical?) flaws in the Intarwebs. As there are in all things. The bigger question is whether these flaws are fatal in practice.

    IPv6 is an interesting case study. Theoretically better, but largely unadopted. The net benefits cannot be large.

    Too many projects have been killed by over-optimizing. And people who say something is impossible should get out of the way of those actually doing it!

  • Oh, yeah, because the internet has absolutely nothing to do with cost and the technological limitations of decades ago, or with the fact that when the internet was born it was a military network, later a tool used between schools, then businesses, and much later just anyone who wanted to use it.

    Nothing at all.

  • If I'm to understand you properly, you want the internet to be more trucklike, because the tubes are too long.

  • n00b (Score:4, Interesting)

    by Zebra_X ( 13249 ) on Tuesday October 28, 2008 @03:30PM (#25546781)

    SMTP is a terrible example. Ultimately the users are the ones opening e-mails, getting browser-jacked, and having their passwords stolen because they don't know what is in front of them. Sure, clients were the problem for a while, but that "phase" has passed; developers have learned how to mitigate most attacks.

    The only thing that is "wrong" fundamentally with the internet is the separation of DNS and the routing protocol.

    For all intents and purposes, a DNS failure causes a network outage. It also dramatically increases client latency when it is not configured correctly, which looks like a network issue but is not.

    I'm sure that when IPv4 was created, the notion of mixing both services was unthinkable due to the additional amount of data needed to move names around at layer 2/3. This is no longer the case, and we should really try to move away from a central naming system.
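
    To the point about DNS failures masquerading as outages, a small sketch (Python, standard library; the hostname is just an example) showing that resolution and reachability are separate steps that fail differently:

        import socket

        def reach(host: str, port: int = 80) -> str:
            try:
                socket.getaddrinfo(host, port)   # step 1: naming (DNS)
            except socket.gaierror:
                return "DNS failure: looks like an outage, routing never tried"
            try:
                # step 2: routing/transport, a different failure mode
                socket.create_connection((host, port), timeout=5).close()
            except OSError:
                return "connectivity failure: DNS was fine"
            return "reachable"

        print(reach("www.example.com"))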

    • Re: (Score:3, Informative)

      by Paralizer ( 792155 )
      Interesting idea and I see why that would be a desirable feature. You give it google.com and you get routed directly to google.com without a potential MITM DNS attack. However, it seems to me that DNS and routing should be separated as they perform entirely different functions.

      Routing is how to get there.
      DNS is where you want to go.

      If there were an efficient way to combine them that would be a cool feature, but routing really should only be how to get from point A to point B. What would you do about
    • Re: (Score:3, Insightful)

      by mcrbids ( 148650 )

      The only thing that is "wrong" fundamentally with the internet is the separation of DNS and the routing protocol.

      This has to be one of the DUMBEST ideas I think I've ever heard of....

      As an application hosting provider, we provide a very strong level of redundancy, including hot disaster recovery hosting. If anything serious were to happen to our primary hosting facility, we'd update DNS and within a few hours, our secondary hosting would become active.

      By definition, the secondary hosting is in another city,

  • by boxless ( 35756 ) on Tuesday October 28, 2008 @03:32PM (#25546821)

    On the shoulders of giants we stand.

    None of these ideas for improvement are new. But neither are they working. And the internet as we know it is working quite well, far beyond what anyone would have predicted.

    Are there things to be fixed? Sure, around every corner. But I'm not going to listen to some guy from some wicked kewl startup in SFO tell me how to do it.

    • Worse is better (Score:3, Interesting)

      by AlpineR ( 32307 )

      Okay, I"ve never heard of "Worse is better". The author uses it disparagingly, saying "an inferiorly designed system or piece of software may be more successful than its better-designed competitor".

      But the Wikipedia article says it means Simplicity > Correctness > Consistency > Completeness, as opposed to an alternate valuing of Correctness > Consistency > Completeness > Simplicity. In other words, doing a few things right and easy is better than doing everything consistently.

      I challenge

  • by Mr. Slippery ( 47854 ) <tms@infamous.n3.14et minus pi> on Tuesday October 28, 2008 @03:34PM (#25546847) Homepage

    "We reject: kings, presidents and voting. We believe in: rough consensus and running code." - David D. Clark [wikipedia.org], former chair of the IAB

    You get to say the internet was "built wrong" as soon as we see your "better" idea run.

  • by inotocracy ( 762166 ) on Tuesday October 28, 2008 @03:38PM (#25546915) Homepage
    ...who could barely keep Twitter up and running for 24 hours straight without it going down?

    http://www.istwitterdown.com

  • by the_duke_of_hazzard ( 603473 ) on Tuesday October 28, 2008 @03:46PM (#25547067)
    This guy sounds like the kind of twat who joins our company, bitches about how badly everything's been written, then leaves behind a load of shitty unmaintainable code that's "really clever". And somehow he's in charge at Twitter? Christ.
  • by wickerprints ( 1094741 ) on Tuesday October 28, 2008 @03:52PM (#25547163)

    Internet protocols and standards were originally implemented for academic use. Decades ago, TCP/IP, SMTP, DNS, and HTTP were created with an implicit assumption of trust between client and server--indeed, between all nodes in the network. The Internet was an exercise in efficient data transfer across a network. It was not designed for spam, or DDoS, or phishing; nor was it designed for shopping, bank account management, or YouTube. That we can do these things now is a reflection of the workarounds that have been developed in the meantime.

    Furthermore, hardware at the time of the development of these protocols was not what it is today.

    And then, over the course of several years, the monetizing and commercialization of this academic project occurred. ISPs, in order to reach the masses, established an inherently unequal system of access that encouraged consumption of content but discouraged users from hosting it. The solution that has come about in more recent years, with blogs, social networks, and so forth, was to have users submit content and have it hosted by large, ad-revenue based corporations. This has led to serious problems concerning the nature of ownership of information.

    And now, we have one of the people running such a site, complaining that the underlying model on which their company relies is "built wrong" because it doesn't suit their needs. Well, isn't that rich? It smacks of willful ignorance of not only what the Internet is, but more importantly, the original design principles (egalitarian, neutral) that the Internet embodied.

    The pace of technology is rapid. History, however, is long, and the danger I see here is not that you have one idiot who hasn't learned his history lesson, but that as time goes by, more and more people and corporations and politicians will forget why the Internet was originally built. That's why we have companies against Net neutrality. They have forgotten or ignored history. They took something free and made billions off of it, and they want to milk it for all it's worth. And therein lies the real problem, because when you forget where something came from, you become disconnected from the past and blind to the future.

  • by russotto ( 537200 ) on Tuesday October 28, 2008 @03:52PM (#25547175) Journal

    Of course the Internet doesn't scale to its projected size, and of course SMTP is insufficiently secure. This has nothing to do with the worse-is-better design, though. It's just that the Internet existed before any of those requirements were even conceived.

    Nobody thought, "Hmm, you know, we have a requirement for electronic mail to be secure, but that's too hard, so we'll just skip it." Certainly no one thought, "We're going to need more than 2^32 Internet nodes, but that's too hard, so we won't do it." Instead, the uses to which IPv4 and SMTP have been put have resulted in newly discovered requirements which simply were not there originally.

  • small hint (Score:3, Informative)

    by circletimessquare ( 444983 ) <circletimessquare@@@gmail...com> on Tuesday October 28, 2008 @04:16PM (#25547531) Homepage Journal

    security and trust models are philosophically and technologically joined at the hip with command and control models

    you build a supersafe, 100% trustworthy internet, and you build the internet that beijing and tehran love

    the internet is a wild and wacky and dangerous place. and it is also free. sure, there is the tragedy of the commons, but i'd much rather wade through GNA and /b/tard comments than deal with the Internet Security Office. which is what security and trust models empower

    don't knock what you got until you lose it, and forever more lament the loss of the golden years

  • by John Sokol ( 109591 ) on Tuesday October 28, 2008 @04:52PM (#25547953) Homepage Journal

    > API Lead at Twitter, Alex Payne
    Yes, a newbie. He clearly has no appreciation of the history, or of how things came to be.
    It's not like TCP/IP was the only choice; many other network technologies tried to become the Internet and failed. TCP/IP won out because it was the first to work and was open enough to work across all platforms. Novell, IBM, and Microsoft all had networking technologies that they tried to push out TCP/IP with, and for a while on LANs they almost completely did. But that all fell apart when companies wanted to move to WANs and scale up; TCP/IP was the only thing that worked for both LAN and WAN. People forget the large push for ATM to the desktop, where they tried to replace TCP/IP with ATM. ATM's strength came from the preexisting telecom switches for large voice WANs, which for a long time were the only things that supported high-bandwidth fiber. But TCP/IP just tunneled right over ATM, whereas ATM was too sensitive to tunnel and was limited in what medium it could operate over. (No ATM over 2400-baud modems, for example!)

    In the end it's about evolution; it's not some engineer or any group of humans that gets to make the final decision. Call it the market, but people will choose whatever gets their job done best for them. This includes many factors that engineers never consider: legacy gear, awareness of terminology, software support, reliability, cost, platform support, openness of the standard, multi-vendor support, cost of HW/SW, maturity of the technology, and what the TOP and BOTTOM ends are.
    By this I mean TCP/IP can run on a PIC microcontroller and on a billion-dollar supercomputer. It can run over radio, fiber, satellite, even carrier pigeon (RFC 1149).

    Whatever takes its place will have to be that flexible, where every light bulb can have its own network address and terabit networks are still supported.

    This is no small task. The reality is that IP is flexible, so much so that you can run other protocols through it, or it through other protocols. As such, it will most likely be around forever and just have stuff layered over and under it, like VPNs, PPPoE, and RTP/RTSP.

    Anyone is free to start creating their own IPv6 or whatever other kind of network and selling it, running it in parallel with the internet, or even running it over the IPv4/6 Internet.

    So at this point, to think you're going to convince everyone to drop IPv4/6 and try something immature, untried, and untested is just unrealistic and ignorant.

  • Ad-hominem (Score:5, Insightful)

    by Peaker ( 72084 ) <gnupeaker.yahoo@com> on Tuesday October 28, 2008 @06:07PM (#25548693) Homepage

    Why does everyone try to divert attention from his claim that the internet does not scale well, to attacks on his own work?

    If his claims have no merit, refute the claims. Do not attempt to instead discredit the source.
