The Internet Is 'Built Wrong'

An anonymous reader writes "API Lead at Twitter, Alex Payne, writes today that the Internet was 'built wrong,' and continues to be accepted as an inferior system, due to a software engineering philosophy called Worse Is Better. 'We now know, for example, that IPv4 won't scale to the projected size of the future Internet. We know too that near-universal deployment of technologies with inadequate security and trust models, like SMTP, can mean millions if not billions lost to electronic crime, defensive measures, and reduced productivity,' says Payne, who calls for a 'content-centric approach to networking.' Payne doesn't mention, however, that his own system, Twitter, was built wrong and is consistently down."
  • "Content centric"? (Score:5, Insightful)

    by KDR_11k ( 778916 ) on Tuesday October 28, 2008 @04:13PM (#25546511)

    Does that translate to "owned by the big media cartels"?

  • *Brain Asplodes* (Score:5, Insightful)

    by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Tuesday October 28, 2008 @04:14PM (#25546537) Journal

    Okay, so a guy who works for Twitter, a crash-prone, non-scaling application, says that the internet is "built wrong", where one of his examples of wrong is scaling. He goes on to list a few specific technologies he thinks are good examples of "wrong", like IPv4 and SMTP, which won out against better-designed (but strangely unmentioned) alternatives because of wacky market stuff, which, again, he doesn't describe.

    No one who knows anything about the Internet would say that it was perfect. It's not even close. There are a lot of places where unholy kludges exist and are perpetuated because it's a lot easier to live with them than it is to try to change everything that depends on them. Things like, for example, Twitter.

    Sure, there were alternatives, but they were all either patent-encumbered, or hard to deploy, or too complex to easily develop for. They died. It's called competition. TCP/IP and SMTP came out the other side and grew into cornerstones of the largest network this world has ever known, in a shockingly short period of time. No, not perfect, but pretty damn good nonetheless.

    It's very easy to sit back today and say, "Wow, it could have been so much better!" But that is armchair crap at the best of times... I'd sneer if Vint Cerf said it. Coming from someone who demonstrably can't do better, and can't even be bothered to champion a specific alternative... that's as pointless and lacking in content as most of the crap that comes through his crappily coded service.

  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Tuesday October 28, 2008 @04:15PM (#25546557) Journal

    Van Jacobson, an award-winning specialist in networking to whom the Internet owes its continued existence, gave a talk at Google in 2006 outlining a content-centric approach to networking. Jacobson's approach leverages wide distribution of data across a variety of devices and media, while baking in security and simplifying the development model for networked applications.

    If the majority of Internet usage continues to be about content, an approach like Jacobson's would be not just prudent, but necessary. You needn't do more than attempt to watch a streaming video on a busy office LAN or oversubscribed DSL circuit to understand that even the best-served markets for Internet connectivity are struggling to keep up with demand for networked content. Add to this that providing adequate security models for such content is a virtual impossibility on today's Internet, and the need for a better approach is even clearer.

    When Jacobson says things should be focused on content, I think all he means is that you should ask for content and the internet should be able to find it using many different ways (IP, VPN, zeroconf, proxies, you name it). That's what he means by that stupid buzzword "content-centric." And that's not going to solve anything! Everything else he preaches sounds like disseminating content once from New York to Seattle so that when an Oregon resident wants to read the Wall Street Journal they don't make 8 hops across the country for every article. You move the data once closer to the consumer and then you have less network usage.
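
    If you want it in code, the whole "content-centric" pitch pretty much boils down to a cache keyed by name. Here's a toy sketch (the names, the origin table, and the hashing are my own stand-ins, not anything from Jacobson's talk):

    ```python
    # Toy name-based cache: ask for content by name; the "network" serves it
    # from the nearest copy and only ever fetches it from the origin once.
    import hashlib

    ORIGIN = {"/wsj/front-page": b"today's headlines..."}  # stand-in for the NY servers

    class EdgeCache:
        def __init__(self):
            self.store = {}  # name -> (digest, data)

        def get(self, name):
            if name in self.store:
                return self.store[name]  # served locally: no cross-country hops
            data = ORIGIN[name]          # one trip to the origin, then cached
            digest = hashlib.sha256(data).hexdigest()  # the content, not the host, gets verified
            self.store[name] = (digest, data)
            return self.store[name]

    oregon_edge = EdgeCache()
    oregon_edge.get("/wsj/front-page")  # first reader pays for the cross-country fetch
    oregon_edge.get("/wsj/front-page")  # every later reader is served from Oregon
    ```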

    I may be misinformed, but how is this any different from a Content Delivery Network (CDN) [wikipedia.org]? I believe these were all the rage years ago (look at the commercial list at the bottom of the article). They are nothing new. So are you proposing that the internet have these built in, to increase efficiency and cut network usage? Wouldn't it just be easier to let people pay for these services like we've been doing? Oh no, my bandwidth is being eaten up and people on the other side of the country are experiencing huge latency! Time to fork out a monthly fee to a CDN, I guess. It'll be more expensive to host a large site, but nothing some ads couldn't take care of--free market to the rescue.

    I'm sick of people who get up on a soapbox and rip apart a good idea because it's not perfect. Bitch, bitch, bitch: IPv4 has been broken from the start. Well, duh, do you think IPv6 is any less flawed? There's still a limit; who cares whether it's 10 or 10,000 years in the future? It's going to have to be dealt with at some point!

    This article really is a piece of work. A man who works on the API of something that thrives on "a broken internet" bashes said internet and points at others to dream up ideas to fix what he thinks is wrong. All I see is griping, not a single original solution to these problems. Yeah, I'm sorry consumers don't have the same priorities and requirements that you do, but, well, that's why you're going to see a technology like Windows 98 triumph over Linux. Align yourself with your user or consumer and you'll start to understand things.

  • by hansraj ( 458504 ) on Tuesday October 28, 2008 @04:16PM (#25546575)

    Film at 11!

    The internet wasn't designed to be used the way it is being used today anyway. So you keep finding shortcomings and work your way around them. SMTP has problems? Well, here, use some PGP and *some* of the problems are taken care of. Most things work in an evolutionary way anyway.
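
    For instance, a rough sketch of bolting PGP onto plain old mail from a script (assumes GnuPG is installed and that the recipient address, a made-up stand-in here, has a key in your keyring):

    ```python
    # Sign and encrypt a message body with GnuPG before it ever touches SMTP.
    # "alice@example.org" is a stand-in recipient; gpg must be on the PATH.
    import subprocess

    body = b"Meet at noon."
    protected = subprocess.run(
        ["gpg", "--armor", "--sign", "--encrypt", "--recipient", "alice@example.org"],
        input=body, capture_output=True, check=True,
    ).stdout
    # Hand `protected` to smtplib as the message body: SMTP itself stays dumb,
    # but forgery and snooping of the *content* are taken care of.
    print(protected.decode())
    ```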

  • by jeffmeden ( 135043 ) on Tuesday October 28, 2008 @04:16PM (#25546579) Homepage Journal

    This is very ironic coming from a web-2.0 junkie who captains a site that is *constantly* having outages.
     
    I think this may be semantics, but the Internet was not built wrong, it was *used* wrong. The original design perfectly met the needs of the time. Expectations change, and all we are seeing is that under our *present* needs the system can bend in some areas, and break in others. If we could go back and "fix" it we would do it a lot differently, of course. Hindsight is 20/20 after all.
     
    I, for one, think it was put together pretty well. It's up to us to keep it working, the internet is always ready for re-invention.

  • X Windows?? (Score:4, Insightful)

    by kisrael ( 134664 ) on Tuesday October 28, 2008 @04:22PM (#25546665) Homepage

    He quotes Alan Kay:
    "HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that understands its formats... You don't need a browser, if you followed what this Staff Sergeant in the Air Force knew how to do in 1961. You just read [data] in. It should travel with all the things that it needs, and you don't need anything more complex than something like X Windows."

    Whoa.
    I'm not sure which is worse: the idea of every screen being rendered on a server and then piped over to the user, or the idea of every interaction being an object sent along with its data, which seems like a security nightmare.

    Besides, don't most of us download, say, the browser anyway? Kind of a bootstrap thing.

    It's kind of like those "enhanced" DVDs, then: put one in a PC, and it offers to install some weird-ass player...

  • by ghmh ( 73679 ) on Tuesday October 28, 2008 @04:22PM (#25546667)

    This can basically be summarised as "Hindsight is a wonderful thing.....if only we knew then what we know now..."

    This spurious argument also equally applies to:

    • Human evolution
    • Town planning
    • How you should have described the haircut you wanted, instead of the one you got

    amongst countless other things...

    (Oh noes, someone is wrong on the internet [xkcd.com])

  • by br00tus ( 528477 ) on Tuesday October 28, 2008 @04:24PM (#25546703)

    Twitter a crash prone, non-scaling application

    Well, I can't blame them that much, aside from an initial fatal architectural decision, namely building Twitter on Ruby on Rails. It's clear which one fell by the wayside in the Fast, Cheap, Good equation with that.

  • by AceJohnny ( 253840 ) <jlargentaye&gmail,com> on Tuesday October 28, 2008 @04:26PM (#25546725) Journal

    Sure there were alternatives, but they were all either patent-encumbered, or hard to deploy, or too complex to easily develop for.

    Or they came too late or didn't survive the competitor's marketing onslaught. Remember the power of inertia.

    Speaking of Twitter, there are alternatives [identi.ca], and there are better architectures [metajack.im]

  • by JonTurner ( 178845 ) on Tuesday October 28, 2008 @04:27PM (#25546739) Journal

    Thank you for that. It's exactly right.

    What he fails to realize is that everything is an incremental, transitional technology. Nobody planned out this current hideous jumble of technologies we call teh intertubez; it started with a simple message protocol on top of a network protocol and evolved, and evolved, and evolved further from its inferior predecessors; at each stage, incremental improvements happened as necessary.

    Web 1.0 was "good enough" for some tasks. But when it wasn't, the technology adapted. It remains as good as the need requires and the market demands at this moment. Mistakes are culled, successes survive. A giant electronic petri dish, if you will.

  • by Ralph Spoilsport ( 673134 ) on Tuesday October 28, 2008 @04:27PM (#25546753) Journal
    Besides the above noted and obvious problems, there are sub-problems that are just as nasty.

    Flash

    We all know Flash sucks. But alternatives to it require hiring an engineer.

    invisibility

    You can draw a picture in PostScript by typing to the interpreter. Then Fontographer came along, and that was followed by FreeHand and Illustrator, and then Quark and InDesign. The code became invisible. Where is the Quark or InDesign tool for the web? Cuz Dreamweaver sure ain't it, especially with how CSS dominates graphic discussion.

    Proprietary Browsing.

    Every browser is different and they all suck in different ways. MS has been especially egregious with IE.

    TLD

    is US-centric. Is insufficient. Is a mess.

    Squatting

    Personally, I would cheerfully put a bullet in the head of every sitename squatter on the planet.

    Code

    It's code centric. It shouldn't be. It should be design centric. Then we could dump all these expensive programmers and get some work done.

    Scalability

    covered in the article, still true.

    Argh. with the advent of CSS, AJAX, and Web2.0 everything is getting this creepy sameness. It's getting boring. Something's gotta give. Soon.

    RS

  • by bersl2 ( 689221 ) on Tuesday October 28, 2008 @04:32PM (#25546817) Journal

    Code

    It's code centric. It shouldn't be. It should be design centric. Then we could dump all these expensive programmers and get some work done.

    Computers are code-centric. If you can't handle it, GTFO.

  • by Mr. Slippery ( 47854 ) <tms&infamous,net> on Tuesday October 28, 2008 @04:34PM (#25546847) Homepage

    "We reject: kings, presidents and voting. We believe in: rough consensus and running code." - David D. Clark [wikipedia.org], former chair of the IAB

    You get to say the internet was "built wrong" as soon as we see your "better" idea run.

  • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Tuesday October 28, 2008 @04:35PM (#25546857) Journal

    Meh. I've got access to a block of addresses that is so hilariously larger than anything I'll ever need that I NAT some of my home servers through proxies at work for the static IP. If we reclaimed all the unused addresses, we could string out IPv4 for another decade or so.

    Moving to IPv6 is one of those things that sounds like it's going to be soooooo easy, and has the potential to be hell on earth. Adoption is happening, slowly and surely, but it's still happening. I see no reason to panic and try and force a quick transition when the only thing that that will get us is chaos.

  • Re:X Windows?? (Score:2, Insightful)

    by dedazo ( 737510 ) on Tuesday October 28, 2008 @04:41PM (#25546971) Journal

    X was built as a graphical client-server protocol, and it scales OK for a few dozen users (with caveats w/r/t your bandwidth, etc.), although it's not used that way very much anymore.

    But it would never scale to the level that a just-serving-html-ma'am BSD/Apache box with a gig of RAM does.

  • by the_duke_of_hazzard ( 603473 ) on Tuesday October 28, 2008 @04:46PM (#25547067)
    This guy sounds like the kind of twat who joins our company, bitches about how badly everything's been written, then leaves behind a load of shitty unmaintainable code that's "really clever". And somehow he's in charge at Twitter? Christ.
  • by wickerprints ( 1094741 ) on Tuesday October 28, 2008 @04:52PM (#25547163)

    Internet protocols and standards were originally implemented for academic use. Decades ago, TCP/IP, SMTP, DNS, and HTTP were created with an implicit assumption of trust between client and server--indeed, between all nodes in the network. The Internet was an exercise in efficient data transfer across a network. It was not designed for spam, or DDoS, or phishing; nor was it designed for shopping, bank account management, or YouTube. That we can do these things now is a reflection of the workarounds that have been developed in the meantime.

    Furthermore, hardware at the time of the development of these protocols was not what it is today.

    And then, over the course of several years, the monetizing and commercialization of this academic project occurred. ISPs, in order to reach the masses, established an inherently unequal system of access that encouraged consumption of content but discouraged users from hosting it. The solution that has come about in more recent years, with blogs, social networks, and so forth, was to have users submit content and have it hosted by large, ad-revenue based corporations. This has led to serious problems concerning the nature of ownership of information.

    And now, we have one of the people running such a site, complaining that the underlying model on which their company relies is "built wrong" because it doesn't suit their needs. Well, isn't that rich? It smacks of willful ignorance of not only what the Internet is, but more importantly, the original design principles (egalitarian, neutral) that the Internet embodied.

    The pace of technology is rapid. History, however, is long, and the danger I see here is not that you have one idiot who hasn't learned his history lesson, but that as time goes by, more and more people and corporations and politicians will forget why the Internet was originally built. That's why we have companies against Net neutrality. They have forgotten or ignored history. They took something free and made billions off of it, and they want to milk it for all it's worth. And therein lies the real problem, because when you forget where something came from, you become disconnected from the past and blind to the future.

  • by russotto ( 537200 ) on Tuesday October 28, 2008 @04:52PM (#25547175) Journal

    Of course the Internet doesn't scale to its projected size, and of course SMTP is insufficiently secure. This has nothing to do with the worse-is-better design, though. It's just that the Internet existed before any of those requirements were even conceived.

    Nobody thought, "Hmm, you know, we have a requirement for electronic mail to be secure, but that's too hard, so we'll just skip it." Certainly no one thought, "We're going to need more than 2^32 Internet nodes, but that's too hard, so we won't do it." Instead, the uses to which IPv4 and SMTP have been put have resulted in newly discovered requirements that simply were not there originally.

  • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Tuesday October 28, 2008 @04:53PM (#25547205) Journal

    Me and the Twitter guy have something in common: if we were great minds, we'd be out doing great things, not sitting around with the belief that our opinions matter.

    I don't have his hubris, thinking that his laughable Twitter credentials put him in some sort of position where he is qualified to pontificate on the sad state of the internets, but I'm not so deluded as to think my sniping at his idiocy is in any way deep or meaningful.

  • by Firehed ( 942385 ) on Tuesday October 28, 2008 @04:55PM (#25547237) Homepage

    Well, given that Twitter really only took off because of its API (which is XML-based), you could say that it really is taking off, especially with how many other user-content-driven sites have APIs. Beats the hell out of page scraping, anyway.

    The problem is that serving straight-up XML with an XSLT is rather flaky cross-browser (especially on mobile devices), and adds a level of confusion that not only isn't necessary in 99% of websites but is best piped through a semi-regulated system. Twitter is an awful example, as they still don't have a business model (or even a revenue stream at all, AFAIK), but providing premium access to certain sections of an API, or an increased request limit, is certainly a valid way to monetize a service like Twitter, and that would quickly fall apart if they were to serve straight-up XML.

    Other than cross-browser standards support and a couple of quirky CSS attributes, there's really nothing wrong with separating the content and presentation with the systems that are widely in use today. They also allow users to override the presentation with their own stylesheet. Sure, you'd generally have to do it on a site-by-site basis, as there's neither a <content> nor a <menu> tag (but rather divs and lists with IDs set, with no cross-site consistency at all), but implementing that kind of system effectively would be beyond a nightmare. I suppose you could link out to a semantic XML version of a page via a meta tag, like how we currently handle RSS feeds (could just be another xmlns attribute for this kind of thing, though you could get most of the info off of a full RSS feed anyway), but there are so few people who would want to override the default presentation of a site (and even fewer who would be bothered to do so) that it just doesn't make any sense, especially as there's currently no monetary incentive to do so.

  • by Arancaytar ( 966377 ) <arancaytar.ilyaran@gmail.com> on Tuesday October 28, 2008 @04:57PM (#25547261) Homepage

    That's what I guessed, yeah. It's the same philosophy that causes right-click-disabling Javascript.

    --

    The danger of letting the "content-centric" people take over the internet is of course that web browsers will be mandatory closed-source clients that decode the heavy-duty encryption while a camera on your computer checks to make sure nobody else is looking at your screen for free.

  • by Anonymous Coward on Tuesday October 28, 2008 @05:02PM (#25547333)

    Don't worry, the advertisers are not interested in your conscious mind. :-P

  • by 644bd346996 ( 1012333 ) on Tuesday October 28, 2008 @05:07PM (#25547397)

    IPv4 was defined by RFC 791, which was published in 1981. It allows for 4.295 billion addresses. In 1981, the entire population of the world was about 4.5 billion. Sure, that means that from the start, there weren't enough IP addresses to go around, but back then, it was unreasonable to expect that even 1% of the population would have use for an IP address.

    The limitation on IPv4 addresses isn't a design flaw. It's just a symptom of IPv4 being old. It has just about outlived its usefulness.

    (In fact, using more than 4 bytes for addresses in a time when 32-bit minicomputers were still fairly new and consumer machines had just gotten to 16-bits would have been absurdly wasteful of resources, and nobody would have bothered to implement such a protocol.)
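
    The back-of-the-envelope version:

    ```python
    # The 1981 address space vs. the 1981 population (both figures from above).
    addresses = 2 ** 32                  # IPv4: 32-bit addresses (RFC 791)
    population_1981 = 4.5e9              # roughly 4.5 billion people
    print(addresses)                     # 4294967296, i.e. ~4.295 billion
    print(addresses / population_1981)   # ~0.95 addresses per person, before
                                         # subtracting reserved and multicast blocks
    ```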

  • by Anonymous Coward on Tuesday October 28, 2008 @05:14PM (#25547503)

    wow, and here i've conscientiously avoided those trite phrases. i know i'd be modded insightful for this, but unfortunately, ac...

  • by Glendale2x ( 210533 ) <[su.yeknomajnin] [ta] [todhsals]> on Tuesday October 28, 2008 @05:17PM (#25547537) Homepage

    Re: spam; unlikely. Those people do the damnedest things and spend ungodly amounts of time to ensure their spew gets out. They will always find a way.

  • by ArsonSmith ( 13997 ) on Tuesday October 28, 2008 @05:24PM (#25547611) Journal

    Of course it is. Microsoft spent billions building an aura of acceptably broken software. People will even claim that it's all OK, then just accept the hang of their entire system while they launch another application. Apple broke this expectation by making a quality OS that doesn't require an in-flight missile repairman to maintain.

  • by SatanicPuppy ( 611928 ) * <Satanicpuppy.gmail@com> on Tuesday October 28, 2008 @05:28PM (#25547663) Journal

    Tell it to Netscape. ;)

    These things are fad driven; if Twitter doesn't get its act together, someone else will do it better.

    And since Twitter is basically non-revenue generating, it's not like they're getting anything out of their early dominance except user goodwill.

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Tuesday October 28, 2008 @06:25PM (#25548297) Homepage Journal

    "A good designer and a bad coder creates better output then a poor designer and a good coder."

    hahahaha, no. If either side of the equation is bad, then it all stinks. It just stinks differently.

    The good news is that if both sides are bad, then it's a funny train wreck... like two trains carrying clowns colliding... either that, or you get twitter.

    Zing.

  • by Dun Malg ( 230075 ) on Tuesday October 28, 2008 @06:43PM (#25548491) Homepage

    You miss the point. A good designer and a bad coder creates better output than a poor designer and a good coder.

    Are you really suggesting that a clean, efficient design that crashes constantly because it is rife with coding errors is better than a kludgy mess of extensions and exceptions that somehow works anyway? I think you're nuts. Or are you suggesting that a good, clean, open design makes a bad coder irrelevant because you can always fire him and start over with a good coder? That's even MORE nuts.

  • Re:n00b (Score:1, Insightful)

    by Anonymous Coward on Tuesday October 28, 2008 @06:45PM (#25548517)

    The only thing that is "wrong" fundamentally with the internet is the separation of DNS and the routing protocol.

    Whoa... hold yer horses. Repeat after me: layer violation.

    For all intents and purposes a DNS failure causes a network outage.

    No.

    I'm sure when IPv4 was created the notion of mixing both services was unthinkable due to the additional amount of data needed to move names around at layer 2/3. This is no longer the case and we should really try to move away from a central naming system.

    It's very much still the case. The poor BGP routers struggle at this very moment with the ever-increasing routing table. Imagine them keeping a name of up to 255 bytes for each address. Heck, if they're going to keep names as well, why not skip IP addresses altogether, which would grow the global routing table even more, because prefix aggregation would become quite impossible.
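
    Rough numbers, taking ~280,000 prefixes as my ballpark for a full table circa 2008 (an estimate, not a measured figure):

    ```python
    # What swapping 4-byte prefixes for 255-byte names does to a full BGP table.
    prefixes = 280_000          # ballpark full-table size, circa 2008 (assumption)
    ipv4_key = 4                # bytes per IPv4 prefix
    name_key = 255              # bytes per maximum-length DNS name
    print(prefixes * ipv4_key)  # ~1.1 MB of keys today
    print(prefixes * name_key)  # ~71 MB of keys, per router, before you even
                                # account for losing prefix aggregation
    ```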

    Your idea of a decentralized naming system is quite exciting. Decentralized means no authority; imagine the fun of mDNS on a global scale, where every kiddie in the world would fight over cnn.com.

    DNS _IS_ scalable (who controls the root is a whole different story), and today there exist a couple hundred root servers distributed globally with anycast (not the popular figure of 13 that tends to pop up).
    If you want your own zones to be resilient, make sure to have servers in at least two different ASes (or one big fat AS with multiple entry points).

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday October 28, 2008 @07:06PM (#25548687) Journal

    I don't believe Rails is more crash-prone or scaling-challenged than other platforms. Twitter is certainly not the only site to fall over frequently (look at Myspace). And Rails has been used for much more reliable apps than Twitter.

    Anyone can build a shitty app on a good architecture.

  • Ad-hominem (Score:5, Insightful)

    by Peaker ( 72084 ) <gnupeaker@nOSPAM.yahoo.com> on Tuesday October 28, 2008 @07:07PM (#25548693) Homepage

    Why does everyone try to divert attention from his claim that the internet does not scale well, to attacks on his own works?

    If his claims have no merit, refute the claims. Do not attempt to instead discredit the source.

  • by FooBarWidget ( 556006 ) on Tuesday October 28, 2008 @07:07PM (#25548695)

    The old "Rails can't scale" myth again. Yellow Pages, MTV, New York Times, Reuters and many other high-profile companies managed to scale Rails. Twitter's scaling problems are Twitter-specific, not inherent to Rails.

  • by Kent Recal ( 714863 ) on Tuesday October 28, 2008 @07:36PM (#25548989)

    Maybe someone can finally explain that twitter "phenomenon" to me?
    I just don't get it. They basically reinvented IRC and instant messaging, poorly, and put them on the web. Hm, okay, but why do the unwashed masses flock to it like that?

    Back on topic: So the guy that couldn't even get the trivial use-case of a large scale pub/sub right is now complaining about the internet architecture? Too much cocaine?

  • by mcpkaaos ( 449561 ) on Tuesday October 28, 2008 @07:48PM (#25549113)

    Trick is to make it better without the idiots migrating.

    We need our own by-invitation only internet.

    Unfortunately, whenever you create an exclusive club and invite everyone but the idiots, you just end up with an exclusive group of idiots.

  • by JSBiff ( 87824 ) on Tuesday October 28, 2008 @09:22PM (#25549901) Journal

    I'll start by confessing I'm no network engineer, but as a user, some things I'd like to see (all of which, I think, IPv6 implements?):

    Trivial encryption for any type of data / all network apps required to support encryption:

    It bugs me just how many network apps, from instant messaging clients to VoIP, that arguably should have encryption, don't. Recently, I was looking into online telephony providers, and I like the idea of using a standards-based provider that uses SIP (something like Vonage, Gizmo, Fonosip, etc.), but as far as I can tell, right now *none* of the SIP telephony providers support encryption (Gizmo5 does for Internet-only calls, but not Net-to-POTS), which is pretty mind-boggling to me; so I'll probably just go with Skype, even though I'd prefer an open standard.

    Granted, not every application *needs* encryption, and in some cases, the performance overhead could be bad for the intended traffic (things like video-games, live broadcast-streamed video or audio [things like TV shows, web-seminars, etc, which maybe the person streaming the data doesn't need encrypted because it's for a general audience and is not private], etc), but crypto should be much more pervasive, so that if I *want* to turn it on in any app, I can (maybe I want to run a secure Quake server and can live with the performance degradation). I think putting it into the protocol stack could make this possible?

    I think IPv6 does this with the IPsec concept, doesn't it? All the implementation of encryption is done in the protocol stack, so applications don't have to individually link in crypto libraries; instead, the app basically sets a flag for whether the connection should be encrypted.
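
    In my head the programming model looks something like this application-layer analogy (example.org is a stand-in, and real IPsec sits below the sockets API, so this is just the shape of the idea, not how IPv6 actually exposes it):

    ```python
    # "Encryption as a flag": the same connect code, with one flag deciding
    # whether the channel gets wrapped. The stack, not the app, does the crypto.
    import socket, ssl

    def connect(host, port, encrypted=True):
        sock = socket.create_connection((host, port))
        if encrypted:  # the app just flips a flag
            ctx = ssl.create_default_context()
            sock = ctx.wrap_socket(sock, server_hostname=host)
        return sock

    conn = connect("example.org", 443, encrypted=True)  # stand-in host
    print(conn.version())  # e.g. "TLSv1.2", negotiated by the stack, not the app
    conn.close()
    ```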

    The end of price-gouging for multiple public addresses:

    I really think it's *stupid* to have to pay $5 or $10 per month, or whatever, for a *number*. Numbers should be free. There's an infinite supply, so the law of supply and demand should make them free. I'm already paying for Internet service, so I shouldn't have to pay more for addresses. Of course, right now, because there is a limited supply of IP addresses, you do end up paying for them (after the first), because there *aren't* an infinite (or effectively infinite) number of addresses.
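
    For comparison, under IPv6 the supply really would be effectively infinite:

    ```python
    # IPv6's 128-bit space, shared among roughly 6.7 billion people (2008-ish).
    ipv6_addresses = 2 ** 128
    print(ipv6_addresses)                    # ~3.4e38 addresses
    print(ipv6_addresses // 6_700_000_000)   # ~5.1e28 addresses per person
    ```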

    Having a public static IP address makes things like direct connections from one person to another for things like VoIP, file transfers, VNC/RDP, games, etc much easier. Yes, there are schemes to work around NAT nowadays, but almost all of them require the use of some third-party node which *does* have a public IP address.

    I sometimes hear people raise the would-be counterargument that NAT increases security, but not really any more than a simple firewall on your cable/DSL modem or WAP would. The problem with NAT is that it's a bit more difficult, if you have multiple users behind the NAT, for all of them to receive inbound traffic on the same port (which might happen with certain applications; e.g., if you are hosting a LAN party or you just have multiple gamers living in the same house).

    Secure DNS:

    This, I don't think, actually requires IPv6 (though it might be made easier with IPsec?); I think it can, and eventually will, be done with IPv4, but it's still an issue with the current Internet. I'd like to always use a *trusted* DNS server no matter what network I'm roaming on. That is, always use my ISP's DNS server, or my own DNS server, instead of the DNS server of whatever WiFi network I'm currently on. I could try that without secure DNS, but there's not much guarantee that a man-in-the-middle isn't intercepting the DNS requests en route to my "trusted" DNS server, so I can't really trust the replies.
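
    What I'd want looks something like resolving through my trusted server over an authenticated channel. A sketch (the resolver endpoint and its JSON shape are assumptions on my part; treat it as purely illustrative):

    ```python
    # Ask a resolver you trust, over HTTPS: the WiFi network can see that you
    # asked *something*, but can't forge the answer without breaking TLS.
    import json, urllib.request

    url = "https://dns.google/resolve?name=example.org&type=A"  # assumed endpoint
    with urllib.request.urlopen(url, timeout=5) as resp:
        reply = json.load(resp)
    for answer in reply.get("Answer", []):
        print(answer["name"], answer["data"])  # answers arrived authenticated
    ```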

    Email origin forging:

    It's entirely too easy to forge the "From:" address on emails on the current Internet. Yes, you can use signing/encryption software to get around this (PGP/GPG, or the SSL certificat

  • by Anonymous Coward on Tuesday October 28, 2008 @10:32PM (#25550381)

    I think the scale problems for the internet are more related to monopolies and anti-competitive behavior from cable + phone providers than technology issues.

    Sure, we can look at changing how content is distributed, to try to be more efficient. However, this is also a problem that fiber to the residence would solve. As other countries push forward to get faster access to homes, the US relies on the goodwill of phone and cable companies to retrofit fiber.

    The problem is that they have no good reason to do so. They make money by controlling a constrained resource (your physical connection) as well as having a strong lobbying machine. They have no good incentive to compete.

    It is good to see Verizon trying to make progress on this with their FiOS service, but unless some of these operators get more serious about competing with each other, much of the country will continue living in the dark ages with regard to network access.

  • by Anonymous Coward on Tuesday October 28, 2008 @10:35PM (#25550389)

    Yellow Pages, MTV, New York Times, Reuters and many other high-profile companies managed to scale Rails.

    That's interesting, the New York Times [nytimes.com] runs on rails? Oh no. No it doesn't. Not even close.

    Yellow Pages [yellowpages.com]?

    A site that is so useless that I defy you to find a single person linking to it in the history of Slashdot [google.com]. Zero hits.

    Despite the fact that the Yellow Pages (paper edition) is incredibly well known. Everybody knows the Yellow Pages!

    But yellowpages.com is so utterly useless it only gets linked a total of 2600 times in all of the Internet!!! [google.com]. That's even more pathetic than Slashdot, which no one knows about, but is linked 50000 times [google.com]. How well known are the two? "yellow pages" (quotes) = 244,000,000 hits. [google.com] "slashdot.org" = 6,550,000 hits. [google.com]

    What were you saying about Rails again?

  • Re:n00b (Score:3, Insightful)

    by mcrbids ( 148650 ) on Wednesday October 29, 2008 @12:01AM (#25550919) Journal

    The only thing that is "wrong" fundamentally with the internet is the separation of DNS and the routing protocol.

    This has to be one of the DUMBEST ideas I think I've ever heard of....

    As an application hosting provider, we provide a very strong level of redundancy, including hot disaster recovery hosting. If anything serious were to happen to our primary hosting facility, we'd update DNS and within a few hours, our secondary hosting would become active.

    By definition, the secondary hosting is in another city, on another network, through a different power company, to provide as much differentiation and minimization of downtime as possible. If you combined DNS and routing, how would this switchover happen?

    Sorry. Bad idea. Bad, bad bad idea.

    I could see an argument for combining PKI and DNS, and indeed, it's not only been suggested elsewhere, it's been implemented [wikipedia.org]. Obviously, there's a good case to make, here.

    But mixing routing and DNS? WTF?

  • by ronabop ( 520121 ) on Wednesday October 29, 2008 @01:45AM (#25551421)
    Simple UI for creating and sharing content.

    What do: Geocities, Blogger, Myspace, *chan, slashdot, and MediaWiki all have in common? None of them do anything that can't actually be done with a text editor and hosting space.... and a fair bit of clue.

    What they did was remove the needed "clue".

    This, of course, has had some side effects.

  • Actually... (Score:4, Insightful)

    by Moraelin ( 679338 ) on Wednesday October 29, 2008 @03:24AM (#25551813) Journal

    Actually, the funny thing is that Web 2.0 vs. Web 1.0 wasn't even supposed to be about technology as such. And the inventor of that buzzword still insists that it isn't, long after the Grinch... err... the marketing bullshitters stole it and ran away with it.

    Web 2.0 -- and by contrast Web 1.0 -- wasn't about techno-fetishism, but about techno-utopianism. It has nothing to do with PHP or any other particular technology.

    The basic idea of Web 2.0 was that if you put a million monkeys on a million keyboards, they're still monkeys. But if you interconnect them and let them write and edit each other's content, now that's teh nirvana and age of enlightenment. Give the users wikis instead of writing your own content. (I'm sure you'll be thrilled to discover that your product was made from baby seals and your CEO blows goats, but, hey, if the users wrote it, it must be true. 'Cause emergent collective intelligence is never wrong. ;) Have forums. Let the users tag your content instead of categorizing it or providing any other automated way of finding it. (I'm sure the tags on Slashdot would be sooo much more useful for finding an article than full-text search. ;) Etc.

    At a basic level, none of those _really_ needs PHP or JavaScript or anything. You could make a primitive almost-wiki back in the day, by just giving the users FTP access to the site and letting them edit and re-upload the HTML files.

    Anyway, in true zealot fashion, where no price is too high for his utopia if someone else pays it, this was wrapped in a further lie: that, see, this is also the path to making the big bucks, and verily everyone will beg to give you their money if only you have a wiki. I guess you can't really preach stuff like "why you should blow your money to give us our free, collaborative online utopia", so it had to be repackaged as "you could be the next Google if you do!"

    No, seriously. If you listen to him, Tim O'Reilly looked at which companies survived the dot-com bubble and what their defining characteristics were. And somehow he managed to completely miss the fact that it was the ones with a business plan, silly. E.g., the reason Google thrived was that it became the damn best ad provider. Nah, what he saw was that it was those with wikis and BitTorrent and other collaborative stuff. That's the way to the big bucks.

    So he envisioned and preached a DotCom Bubble 2.0... er... Web 2.0 golden age, where everyone has those, and someone gives them money for nothing for doing it.

  • by Kent Recal ( 714863 ) on Thursday October 30, 2008 @01:01AM (#25565443)

    Replying to my own post, to sum up what I learned: Nothing...

    Maybe I'm an anti-social fag but I *still* don't get it. And the feeling gets stronger that I never will.

    Yes, I do have friends in the real world, but for some reason I really don't care what they are doing at every given point in time. If they do something interesting, I'm sure they'll tell me anyway, probably in more than 140 characters. Moreover, I'm already using two messenger programs (Skype, and Pidgin for all the rest), so I'm already getting more "thinking bubbles", away messages, birthday reminders, etc. than I would ever care to read.

    From all your arguments, only "persistence" and "zero barrier to entry (no clue required)" stick with me.
    I guess those two are what make twitter such a big deal. People just love to leave their piss-marks everywhere - and stupid people even more so.

    How this got *that* big is still beyond me, though. I guess we, as a species, still have a long way to go...
