Operating Systems | The Internet | Technology

The State of the Internet Operating System (74 comments)

macslocum writes "Tim O'Reilly: 'I've been talking for years about "the internet operating system," but I realized I've never written an extended post to define what I think it is, where it is going, and the choices we face. This is that missing post. Here you will see the underlying beliefs about the future that are guiding my publishing program as well as the rationale behind conferences I organize.'"
This discussion has been archived. No new comments can be posted.

The State of the Internet Operating System

Comments Filter:
  • Meh (Score:3, Informative)

    by WrongSizeGlass ( 838941 ) on Tuesday March 30, 2010 @08:28AM (#31669492)
    That article isn't exactly cromulent. Is there a daily prize for obviousness?
    • Re: (Score:1, Funny)

      by Anonymous Coward

      Is there a daily prize for obviousness?

      No, there isn't.

    • O'Reilly is pointing out the same dangers of the Cloud as Stallman, but in a reasonable voice. The question is how to preserve the DIY environment when hardware is sealed (see iPad) and software is run on corporate computers. Will innovation be constrained, or will the cloud be open enough to let people change vendors easily without total reworks?
      • Re: (Score:3, Informative)

        by tadghin ( 2229 )

        You've got to be kidding that I'm channelling Stallman. He's finally waking up to an issue that I put in front of him all the way back in 1999. At the time, he said "It didn't matter." See for yourself in the transcript of our interchange at the 1999 Wizards of OS conference in Berlin. The exchange is a fair way into the PDF, so read on down: http://tim.oreilly.com/archives/mikro_discussion.pdf [oreilly.com]

        At the time I was talking about "infoware" rather than "Web 2.0" but the concepts I was working

    • by FyRE666 ( 263011 ) *

      The guy also seems to have a problem differentiating between an operating system and a network infrastructure.

      • The internet has an operating system just as much as a colony of ants has a hive mind. They don't, but they sure act like they do.

        Metaphors. Learn them. Use them. Love them. But don't anthropomorphise them. They hate it when you do that.

    • by AP31R0N ( 723649 )

      How cromulent is it? We know it's not exactly cromulent, but you've left its actual cromulence vague.

  • by elrous0 ( 869638 ) * on Tuesday March 30, 2010 @08:29AM (#31669508)

    This whole "Internet OS" thing reminds me of the periodic resurgences of the dumb terminal/thin client idea that goes back to the mainframe days. It seems like every ten years or so, everyone is talking about thin clients in every office, with the OS and apps running on some offsite server somewhere (now with the added twist of multiple servers over the internet). Ostensibly this is seen as a good way to save IT money and overhead. But in every actual deployment I've seen, it only causes hassles, additional expense, and headaches.

    Back in the '90s we tried this at my old university. We networked all our computers and put all our apps on a central server. Even though this was all done on a local network (much more reliable in those days than the internet), it was still a complete disaster. Every time there was a glitch in the network, every student, professor, and staff member at the university lost the ability to do anything on their computer--they couldn't so much as type a Word document. Now, with little network downtime, you would think this wouldn't be so much of a problem--but when you're talking about thousands of people who live and die by the written word, and who are often working on class deadlines, you can imagine that even 30 minutes of downtime was a nightmare. I was skeptical of this system from the get-go, but got overruled by some "visionaries" who had bought into the whole thin client argument with a religious fervor. Of course, long story short, we ended up scrapping the system after a year and going back to the old system (with a significant cost to the state and university for our folly).

    • Re: (Score:3, Insightful)

      by Em Emalb ( 452530 )

      Actually, it's all just one big cycle. When I first broke into the IT world, PCs were a bit of a novelty in most businesses. Then the PC explosion caused things to move towards a "client-side" setup, with faster desktops and laptops and not as much horsepower required on the server side. Then, in an effort to save money, tied in with servers/CPUs/memory becoming cheaper, and security concerns, companies started (or have started) to slowly pull things back from the client side and put more emphasis on the

      • I don't think we are going to go back to the old days of client-side apps anymore. There's a big difference today: the growing ubiquity of network access. Decades ago we didn't have the internet (or it was crappy, slow, and too expensive), so every once in a while a new computer generation focused on client-side software because networks didn't really matter that much. With the ubiquity of the internet I don't think we'll see that again. We are starting to see MB/s of internet bandwidth, and it won't be too long until

        • I like a hybrid approach.

          Our Enterprise accounting system is on the server, but office apps are local. Daily workflow seems to produce a lot of "debris", which conveniently forms little digital compost heaps on people's local machines. With a little nudging, once a document is usefully finalized, that version gets posted to the server folder.

          MS Office Basic is "essentially almost-free" for OEM hardware purchases, so why put Word and Excel on a server?

      • Corporate networks, in my experience, typically have much more solid uptime than the internet.

        Does the internet go down a lot? I haven't noticed.

        • Re: (Score:1, Funny)

          by Anonymous Coward

          "Does the internet go down a lot? I haven't noticed."
          Then you are watching the wrong videos. They almost always start with going down.

        • I get phone calls from my mother all the time telling me that the internet has gone down once again. Her home phone, which she uses when she calls me about this, is VoIP.

      • "Actually, it's all just one big cycle."

        Not since the internet. The problem with thin clients is that they create a single point of failure. The great thing about the net is redundancy; even if that comes at a cost, it gives you extreme amounts of flexibility. Is a site down? Find it in the net's caches.

    • Re: (Score:1, Funny)

      by Anonymous Coward

      Dumb peeple and dumb terminalz dont mix either.

    • It's also dumb. Even if you bought a low-end Intel Atom machine, why would you want to waste that CPU letting it be a dumb terminal? Put that CPU to work by enabling it to do tasks independently even if the network connection fails.

      • Re: (Score:2, Funny)

        by oldspewey ( 1303305 )
        Exactly - keep your CPU busy running SETI@Home while all your apps sit on a server somewhere.
      • It's also dumb. Even if you bought a low-end Intel Atom machine, why would you want to waste that CPU letting it be a dumb terminal? Put that CPU to work by enabling it to do tasks independently even if the network connection fails.

        I weep for OpenMOSIX. I was hoping that the project would continue and ere long we'd be motivated to buy all one architecture in our house simply because all the machines would form a cluster almost without our involvement and just accelerate each others' tasks. A terminal cluster where the terminals also make the entire system faster is kind of an ideal dream.

        • You're not suggesting... A world wide beowulf cluster?!
          • You're not suggesting... A world wide beowulf cluster?!

            That would be nice too, but there are many issues to be worked out first. Let Amazon &c work them out before we start building intentional cloud botnets. This would only provide you a single system image cluster in your house, and because Unix works on a process model, MOSIX works on process relocation. But when combined with LTSP and a bunch of machines of the same architecture (you could treat anything from Pentium on up, in x86 land, as i586 for example) then it would eliminate the need for local sto

        • by david.given ( 6740 ) <dg@cowlark.com> on Tuesday March 30, 2010 @09:38AM (#31670472) Homepage Journal

          I weep for OpenMOSIX. I was hoping that the project would continue and ere long we'd be motivated to buy all one architecture in our house simply because all the machines would form a cluster almost without our involvement and just accelerate each others' tasks. A terminal cluster where the terminals also make the entire system faster is kind of an ideal dream.

          What happened to OpenMOSIX, anyway? I used it very successfully to turn groups of workstations into build servers; they all ran OpenMOSIX, and then make -j8 on any of the workstations would farm out the build to all the workstations. And it all Just Worked, and there was bugger all maintenance involved, etc. I was really looking forward to it getting mainlined into the kernel and then it just all kind of vanished.

          There's no indication of what happened on the mailing list --- it just stops. There's a new project called LinuxPMI [linuxpmi.org] that claims to be a continuation but there's no mailing list traffic...

        • Yeah, in fact I built just that for my school many years ago. 10 computers (PIII's), set up as an openmosix terminal cluster. It worked really well. If all terminals were in use people had the power of one PIII just like normal, and if fewer people used it, then there would be more power for everyone. This was far more efficient, especially as the computers would be on anyway, and scaled really well, as we didn't need to invest in really beefy servers to host all the apps on. It really was cost effective, a

      • Because having someone else do the crunching decentralizes the storage of the data you are using? With it decentralized you no longer have to upgrade your hard drive capacity, you can have a power outage and the data will still be processed, and multiple people can process it at the same time without interference?

        This assumes a perfect system: the server has to upgrade appropriately and have proper data, power, and network backups to prevent the same issues. But how often does slashdot go down these days?
        • Well, coming from an era when we had dumb terminals, I have no desire to go back to that. I like being able to use my computer even when it's not connected to the net. Like last night, I was watching videos without a connection. I couldn't do that with one of those so-called "cloud" computers, because neither the movie nor the player software would be on my machine.

          And if you're really concerned about backing up your data, there are services you can use NOW to upload your HDD to the net, so if your house

          • This isn't really something that works perfectly now, but it is something that could work in the near future (if everything goes perfectly).

            You can already connect your computer to the cell phone network for internet access; improve that for bandwidth and reliability and there is no reason a computer cannot always be connected to the internet.

            Additionally, having the OS on the internet instead of your device allows you to be working on a document on your desktop, then move it over to your iPhone to continue work on
        • Re: (Score:3, Insightful)

          by jc42 ( 318812 )

          ... but how often does slashdot go down these days?

          Actually, that's a good way to phrase it. That is, it may be true that slashdot itself is almost always up and running. But from my viewpoint, out here on an internet "leaf" node, slashdot quite often seems to be "down". It's fairly common that when I do a refresh, it can take a minute or more to complete. Sometimes when the "Done" appears at the bottom left of the window, the window is mostly blank, and it takes another refresh to get the summaries bac

          • Re: (Score:3, Interesting)

            by Drethon ( 1445051 )
            Where I am, on the other hand, I almost never have a delay loading a slashdot page. Look at where the internet is now compared to where it was ten years ago. We have an explosion of broadband access compared to back then. If the internet continues growing, that may no longer be a problem in ten years.

            On the other hand an internet OS will use a lot of that bandwidth, likely leading to increased lag even as bandwidth increases (see hardware requirements of Win 95 vs Win 7...).

            Unfortunately the only sure w
            • by jc42 ( 318812 )

              Unfortunately the only sure way to know how well it will work or not is to try it and see what happens.

              Probably, and of course the open nature of the Internet means that people are free to experiment with a network OS. Actually, I've done that myself. Some 25 years ago, I demoed a "distributed POSIX" library that allowed me to do things like type "make" on one machine, and watch as it spun off subprocesses that compiled source on N other machines, linked them with libraries on other machines, and installe

              • My preference on this would be to only use sites that allow encrypted files. I've noticed a lot of clouds don't allow that, wonder why :)
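                A minimal sketch of what "encrypted files" can mean in practice: encrypt on the client with the Web Crypto API so the cloud provider only ever stores ciphertext. This is illustrative TypeScript for a browser, not anything from the comment; key management is deliberately left out.

                // Hypothetical client-side encryption before upload: the provider only sees
                // ciphertext. In practice the key must stay on the client (or be derived
                // from a passphrase); here it is just generated and returned.
                async function encryptForUpload(plaintext: Uint8Array) {
                  const key = await crypto.subtle.generateKey(
                    { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);
                  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per file
                  const ciphertext = await crypto.subtle.encrypt(
                    { name: "AES-GCM", iv }, key, plaintext);
                  return { iv, ciphertext, key }; // upload only iv + ciphertext
                }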
      • by cynyr ( 703126 )

        Running things locally would work great if >90% of tasks these days didn't need files from some sort of network drive/server/export/etc., requiring network access anyway. Lots of commercial software won't run if it can't get a license from the network, and Outlook is just about worthless without a network connection. So really you need that connection anyway. Why do you seem to think that the loss of network access would need to immediately kill anything you were doing at the time? Wait for the network to come back up

    • Dumb terminals have the capability to eliminate nearly all hardware requirements for the client except the ability to process the connection. On the other hand, they require extreme levels of backup on the server side, which has the potential to be cost-prohibitive.

      We may be at the point where things are stable enough (how often do you lose your gmail? Yes, it went down for me the other day, but it's the first time in at least a couple of years). The risks are much higher than the gains but they can be overcom
    • by Trails ( 629752 )

      So your implementation didn't handle faults well, therefore we should throw out the idea?

      There are certainly criticisms to be made of the centralized model, but your anecdote isn't one of them. If the product you bought and/or the stuff you built wasn't fault tolerant, then you bought and/or built the wrong solution.

    • by tlhIngan ( 30335 )

      This whole "Internet OS" thing reminds me of the periodic resurgences of the dumb terminal/thin client idea that goes back to the mainframe days. It seems like every ten years or so, everyone is talking about thin clients in every office, with the OS and apps running on some offsite server somewhere (now with the added twist of multiple servers over the internet). Ostensibly this is seen as a good way to save IT money and overhead. But in every actual deployment I've seen, it only causes hassles, additional

    • Yeah, I feel like there are a few problems with the vision of running a terminal/mainframe model, first and most obvious being, as you said, it introduces a central point of failure for everyone. If the server goes down, everyone on that server is suddenly unable to work. People will counter by saying, "well you just distribute it across a bunch of servers so there's no more single point of failure." It's harder than it sounds. If you distribute across servers, how do you manage that distribution? What

    • I was skeptical of this system from the get-go, but got overruled by some "visionaries" who had bought into the whole thin client argument with a religious fervor.

      Or, alternatively, those "visionaries" were expecting to profit personally from the thin client manufacturer.

    • There's no reason why we can't have both - data backed up/synchronized to the "cloud", and applications that can continue to run on locally cached data when the network is unavailable for whatever reason. There are still some cases where this is problematic - e.g. my iPhone Google Maps application really doesn't work in the hinterlands, as the phone won't have the maps locally stored - but this is really just a problem of caches not being big enough or smart enough to do what we need. The problem will be pa
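      A minimal sketch of that hybrid, assuming a browser app: read from the network when it's reachable, fall back to the last locally cached copy when it isn't. The endpoint and cache key below are illustrative assumptions (TypeScript), not anything from the article or the comment.

      // Hypothetical cache-backed read: prefer the network, fall back to the
      // locally cached copy when offline. Endpoint and key names are made up.
      async function loadDocument(id: string): Promise<string | null> {
        const cacheKey = `doc:${id}`;
        try {
          const res = await fetch(`/api/documents/${id}`); // assumed endpoint
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          const body = await res.text();
          localStorage.setItem(cacheKey, body); // refresh the local cache
          return body;
        } catch {
          // Network unavailable (or server error): serve the cached copy, if any.
          return localStorage.getItem(cacheKey);
        }
      }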

    • Re: (Score:2, Insightful)

      by Tubal-Cain ( 1289912 )

      Every time there was a glitch in the network; every student, professor, and staff member at the university lost the ability to do anything on their computer--they couldn't so much as type a Word document.

      Meh. That's true for my workplace despite our thick clients. Network folders, Internet connection, Active Directory... if anything goes down, the office just sort of grinds to a halt.

    • by GWBasic ( 900357 )

      Back in the 90's we tried this at my old university. We networked all our computers and put all our apps on a central server.

      That's the point of local storage in HTML 5. Applications that make good use of it can run without a network connection, or when the server suffers a 30-minute "glitch."
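      As a rough sketch of how that could look (TypeScript, with an assumed endpoint and key scheme, not the poster's actual code): drafts are written to HTML5 localStorage on every save so editing keeps working offline, and they are pushed to the server once the browser reports connectivity again.

      // Hypothetical offline-capable save path using HTML5 localStorage.
      function saveDraft(docId: string, text: string): void {
        localStorage.setItem(`draft:${docId}`, text); // always works, even offline
        if (navigator.onLine) {
          void pushToServer(docId, text); // best-effort sync while online
        }
      }

      async function pushToServer(docId: string, text: string): Promise<void> {
        await fetch(`/api/documents/${docId}`, { method: "PUT", body: text }); // assumed endpoint
        localStorage.removeItem(`draft:${docId}`); // synced, so drop the local draft
      }

      // When the browser regains connectivity, flush any drafts left behind.
      window.addEventListener("online", () => {
        for (let i = 0; i < localStorage.length; i++) {
          const key = localStorage.key(i);
          if (key !== null && key.startsWith("draft:")) {
            void pushToServer(key.slice("draft:".length), localStorage.getItem(key) ?? "");
          }
        }
      });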

  • P or NP (Score:4, Insightful)

    by daveime ( 1253762 ) on Tuesday March 30, 2010 @08:37AM (#31669596)

    It seems the hardest and most time-consuming problem with Internet operating systems is figuring out how to work offline.

    And the easiest solution, which seems to escape almost everybody, is "don't work online in the first place".

    • Re:P or NP (Score:4, Interesting)

      by starfishsystems ( 834319 ) on Tuesday March 30, 2010 @09:18AM (#31670116) Homepage
      Not really. Your situation of working offline is a particular case of working online. It just happens to have high latency. So the easiest solution, for the user, is one which generalizes to encompass high latency.

      The converse is not true. Of course you can retain the capabilities of an offline environment even after you add a wire to it, but those capabilities do not generalize to managing the resources on the other end of the wire.

      The easiest solution to implement is a pencil and a piece of paper. Oh, you want capabilities too? Well, that's different.
    • Re: (Score:3, Interesting)

      And the easiest solution, which seems to escape almost everybody, is "don't work offline in the first place".

      FTFY. Having my data available on any online computer or device that I happen to be at *increases* its availability to me, even in the presence of occasional outages. There are downsides, such as privacy, but availability isn't one of them: it's a net positive.

  • by gmuslera ( 3436 ) on Tuesday March 30, 2010 @08:42AM (#31669640) Homepage Journal
    If the internet were a living thing, it would have cancer, several kinds of it, spread all around the body. Botnets, zombie armies, spam, malware sites... a good percentage of it is just badly sick. It has several brains too, some of them playing against the health of the whole body by not letting the "blood" flow freely, as some governments censor it for political or lobbying reasons.

    It has its strengths too: it is maturing (hopefully), it has a good enough defense system that the sickness spreading around doesn't infect everything, and it evolves fast (even if limited by laws, patents, trolls, etc.), getting more personal and localized.

    With a bit of luck, people, institutions, and governments will start to worry about its health and the ecosystem that it is, and start working on preserving it as much as the planet we live on.
  • by Anonymous Coward

    It does sound like everything Plan 9 was trying to solve, and did solve to a certain extent. The trouble is Plan 9 was ahead of its time, and it still is.
    There is a larger problem too: ownership. It is clear who owns, and is responsible for, individual machines. But who owns the mystical "between the machines" space? Google? Government? United Nations? Can't pick which is worse.

  • That would be IOS, right? :-)

    PS: for non-networking types, IOS is Cisco's OS
  • I think a better version of the future is to secure the PC using sandboxing and capabilities to limit the side effects of applications. This then allows you to download and run apps on your PC, without the need to trust them. You could then have redundant copies of your stuff spread across your various devices. Your stuff includes photos, videos, documents, and the code to manipulate them.

    The focus on services is a result of the distortions caused by the lack of a good security model on the PC. Once that ge

    • Agreed. The idea of cloud computing is a power play to make users feel more secure given the inherent problems of (primarily) Microsoft Windows usage on the Internet.

      The pitch is: "We'll do everything for you in the cloud and then it won't matter what you are running on your internet access device."

      The problem with that model is that everything gets controlled by someone else. But the majority of non-technical customers do not understand how much they are giving away with that service model. They feel safer

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...