Technology

Operating Systems of the Future

An anonymous reader writes: "'Imagine computers in a group providing disk storage for their users, transparently swapping files and optimizing their collective performance, all with no central administration.' Computerworld is predicting that over the next 10 years, operating systems will become highly distributed and 'self-healing,' and they'll collaborate with applications, making application programmers' jobs easier."
  • by joebp ( 528430 )
    Imagine a Beowulf cluster of these 'future' computers!

    Oh, wait...

  • by satterth ( 464480 ) on Monday February 11, 2002 @02:34PM (#2988093) Homepage Journal
    What happens when some user clicks on a VBS script?

    I imagine great horrors as the whole cluster goes down in a mass emailing.

    /satterth
  • Amoeba (Score:5, Informative)

    by oyving ( 115582 ) on Monday February 11, 2002 @02:34PM (#2988100) Homepage
    Tanenbaum's Amoeba [cs.vu.nl] is way ahead of the game, then.
  • by SirSlud ( 67381 ) on Monday February 11, 2002 @02:35PM (#2988102) Homepage
    I'm so sick and tired of what the next 10 years will bring us. How about OSes that don't crash? How about hardware that won't lock up your computer? How about open standards, a generally more cautious approach to computing that will allow us to stabilize the developments that occur? Nah... of course not. Let's take this overly complicated, not-so-reliable thing and throw a transparent layer of 'self-healing' autonomy on top of it. I know that's what I've been looking for... yet another reason why I have to explain to my boss that computers ain't perfect. I can hear him now: "But they're supposed to heal themselves! Why didn't the OS dial up our energy provider and ask why the power went out?!"
    • Most unfortunately, postulating what we could do is much more exciting than perfecting something we already do.

      Apparently, the public has a certain tolerance for defects and bugs. A fine example is the automobile, with its near-certain breakdowns, despite Tucker proving otherwise [protsman-antiques.com].
      • I think this is because we've been told that this is the best it will get (or, in MS speak, it doesn't break in the first place.)

        Cell phones rarely crash (granted, they're much simpler in terms of the complexity of their input), but I think this is because, since there is no marketing focus on their 'stability', makers really do have to make them stable. As long as 'stability' is a marketable selling point, computers will have to be unstable.
        • There's never been a focus on stability because until recently PCs weren't on 24/7.

          Of course, that's changing, what with web servers and such.

          On the other hand, are you willing to pay the price premium for a Unix desktop PC? À la Apple: OS X, Darwin, BSD, etc.?
    • I'm still waiting for those computers that will program themselves, a prediction that was being made in the early 80's. But it's just as well, as a lot of us would be out of a job...
      • >computers that will program themselves

        It's called a compiler. You use C/C++, or whatever, to 'tell' the computer what the program it should make will do.

        Computers that can 'program themselves' are simply an extension of that concept to the point where (presumably) you can 'code' in your natural spoken language. A computer shouldn't do anything until you've told it what to do. Currently, we use C, but there really isn't a functional difference between English and C except for the granularity of the specification of the problem and the desired implementation of its solution. For instance, with PHP, I no longer need to tell the computer that the $foobar variable will be an unsigned long... of course, you'll always give up speed, just as when you tell someone else to do something. The more granularly you describe the solution you want, the less time the other person/computer has to spend figuring it out themselves.
        • Currently, we use C, but there really isn't a functional difference between English and C except for the granularity of the specification of the problem and the desired implentation of its solution.

          Really? Please tell me how to break down:

          "Why are we here?", or "I think I love her", or "He died last week"

          into a sufficient granularity to be implemented in C, of course with the full semantic connotations involved. There's a huge difference between a formally defined language and a natural language. That's why NLP is so damn hard.

          As far as computers programming themselves go, well... a C/C++ compiler translating C/C++ code into machine code isn't the same thing. Translation *is* a necessary step, but you also have to add the ability to change the running program. For that you need a language that blurs the distinction between data and instructions.
          • by SirSlud ( 67381 ) on Monday February 11, 2002 @04:26PM (#2988943) Homepage
            >"For that you need a language that blurs the distinction between data and instructions"

            My point was that instructions are data. But I challenge you to illustrate that, in order to solve a problem, you can provide data that does not encompass the instructions. "My house is on fire" is data that will instruct people to run out of it, but only because they were previously programmed with a 'fire' trigger: escape when that data is inputted into your system.

            So neither English nor C can go outside of its own contextual setting. English is just so much more complicated, with so many more possible branches of execution based on data, that it's difficult to compare the two without either belittling humanity or getting 1984-ish about technology. C /can/ change itself via function pointers and, let's say, random data thrown on the execution stack, as sketched below. But brute force only works when you can test a result within the programmatic bounds of the inputted data, including instructions. I mean, really, humans are just wildly complex computers, which is why our data-exchange set is so much more advanced. :)
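
            To make the 'fire trigger' idea concrete, here is a minimal C sketch (the trigger strings and handlers are invented purely for illustration): incoming data only acts as an instruction because a dispatch table was programmed in beforehand.

            #include <stdio.h>
            #include <string.h>

            static void run_outside(void) { puts("running out of the house"); }
            static void stay_put(void)    { puts("carrying on as usual"); }

            struct trigger {
                const char *input;    /* the "data" we were programmed to react to */
                void (*react)(void);  /* the "instruction" that data maps to */
            };

            static const struct trigger table[] = {
                { "my house is on fire", run_outside },
                { "dinner is ready",     stay_put    },
            };

            int main(void)
            {
                const char *data = "my house is on fire";   /* incoming data */
                for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
                    if (strcmp(data, table[i].input) == 0)
                        table[i].react();  /* data "instructs" only via a prior trigger */
                return 0;
            }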

            "Why are we here?" has multiple answers, so you can really only validate successful self-programming if you already think you know what the answer is. And for that, you depend on previous data entry ... etc, etc, etc ..
      • Check out genetic programming. [genetic-programming.org] Automatic programming via genetic algorithms.
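
        For the curious, here is a toy genetic algorithm in C, in the spirit of that link: it evolves random bitstrings toward all ones ("OneMax"). The population size, mutation rate, and fitness function are arbitrary illustrative choices, not anything from the genetic-programming site itself.

        #include <stdio.h>
        #include <stdlib.h>

        #define POP  20    /* population size */
        #define LEN  32    /* genome length in bits */
        #define GENS 200   /* generations to run */

        static int fitness(const int *g) {        /* count of 1 bits */
            int f = 0;
            for (int i = 0; i < LEN; i++) f += g[i];
            return f;
        }

        int main(void)
        {
            int pop[POP][LEN], child[POP][LEN];
            srand(42);
            for (int i = 0; i < POP; i++)
                for (int j = 0; j < LEN; j++) pop[i][j] = rand() % 2;

            for (int gen = 0; gen < GENS; gen++) {
                for (int i = 0; i < POP; i++) {
                    /* tournament selection: best of two random candidates */
                    int a = rand() % POP, b = rand() % POP;
                    int *p1 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];
                    a = rand() % POP; b = rand() % POP;
                    int *p2 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];
                    int cut = rand() % LEN;            /* one-point crossover */
                    for (int j = 0; j < LEN; j++) {
                        child[i][j] = j < cut ? p1[j] : p2[j];
                        if (rand() % 100 == 0) child[i][j] ^= 1;  /* 1% mutation */
                    }
                }
                for (int i = 0; i < POP; i++)
                    for (int j = 0; j < LEN; j++) pop[i][j] = child[i][j];
            }
            int best = 0;
            for (int i = 1; i < POP; i++)
                if (fitness(pop[i]) > fitness(pop[best])) best = i;
            printf("best fitness after %d generations: %d/%d\n",
                   GENS, fitness(pop[best]), LEN);
            return 0;
        }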
    • Oh great. It's already impossible to find a job with my measly bachelor's degree, and now I have self-healing computers to look forward to. I should have studied accounting...
    • Sadly, reliability doesn't sell. The average computer user wants fast and cheap. Even on slashdot, you see endless dicksize wars over who has the most 'leet, overclocked system running last night's kernel release on the latest CPU, chipset and motherboard. It doesn't have to work reliably if it looks cool doing it.
      • Yeah, I know, but that doesn't prevent me from hoping that some day futurists will be led to conclude:

        "In the next 10 years, humans will be able to make sensible decisions that do not give them excuses and scape-goats to feel unhappy about their experiences in this society."

        Honestly, I think there is an entrenchment in the 'bitterness' and 'stress' social industry that we're loath to give up. The day computers actually start working, we'd have to start focusing on our own problems again - the very antithesis of what the market desires.
      • by CoreyG ( 208821 )
        Since when doesn't reliability sell? That's exactly why the Honda Accord and Toyota Camry are consistently the most popular sedans sold in the U.S., and not the Geo Metro or Daewoo Anything. It's exactly why Consumer Reports is so popular: people read it to find out which things work well and which things don't break. It's exactly why people buy computers from manufacturers (they're supposedly pieces of electronics that work). That's exactly why Apple's iMac sold so well, and why it continues to do so.
      • In honor of my brother (from whom I first heard this, though he could have ripped it off) I present John's Ominutile Justification: "Sure it's stupid, but chicks dig it."
    • >>Howabout OSes that dont crash? How about hardware that won't lock up your computer

      One of the key laws of nature is : Shit Happens.

      This is as true for code in your PC as it is for crawlies in nature.

      We want to fool ourselves that the PC is a clean and closed environment which we have full control of but it just isn't true. That storage device that was there a picosecond ago may have just failed or been removed, the network connection may have just been severed, another program may be running amok and draining system resources just as another needs it.

      Nature mostly gets around unexpected problems; we need OSes and languages that can do the same.

      Your goals of OSes that don't crash and hardware that doesn't "lock up" aren't incompatible with that.
      • Well, let's ask why nature gets around unexpected problems. I suspect it is because nature doesn't 'invest functionality' in a natural thing that requires excluding certain types of input in order to survive or function.

        > Nature mostly gets around unexpected problems

        The dinos would agree with 'mostly'. I want better than mostly. I want computers that are built to work regardless of input, unless said input is likely to occur on a frequency of, say, once every decade or some crap.

        Companies are notorious for turning this around. Witness warranties: "This product will work unless you do X." Sometimes X is why people buy it in the first place!

        In the realm of computers and hardware, there is nothing to say that we can't make the PCI bus X times slower in order to build complete down-to-the-electron-level fault tolerance into it. Obviously, I'm unaware of the actual feasibility of this, but I think the people above, in blaming the market, were far more on point than saying, "Well, it happens in nature, so it happens in PCs." Sure, but I didn't see species dropping off the face of the earth like flies until the 1970s, when we started making impossible-to-fulfill demands of our ecosystem.

        Same with computers. The vision, the story, the 'sales pitch' is really light-years ahead of the design. It could only happen in an economy whose goal is to get shit out as fast and cheaply as possible to everyone, instead of considering the social and unquantifiable costs of certain technologies. Until manufacturers are really allowed to say, "We made it X times slower, but you can't crash it short of exercising your physical superiority on it, so I dare you to even try to feel stress or mistreatment in using it" - and I think that might be never under current circumstances - the posters above were more on point than you were.

        Which isn't to say that I don't agree... I think it's just more about the demands you place on the technology than acknowledging the unpredictability of its operating environment.
    • I'm so sick and tired of what the next 10 years will bring us.

      Right. I think the point is, though, to quote from the article:

      The target environment for Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.

      Managing data and applications on that scale with today's PCs sucks. Data synchronization is a HUGE issue already. The question futurists ask is: what must we change for that to be manageable?

    • No one can predict what will happen in 10 years. Anyone who claims that this "is" what will happen is selling something. Anyone who says "maybe" is admitting that they are engaging in pointless masturbation. In 10 years, events run far outside of anyone's ability to predict cause-and-effect.

      That's just in general. Apply it to the technology sector, and it becomes even more true. About the best you can do is say "wouldn't it be cool if...?" But basically these guys just take an interesting research paper (out of the thousands out there) and act like that's what's actually going to happen.

      But I'm better than them! I really can predict the future! I predict that in 10 years, there'll be a bunch of people predicting what will happen 10 years from then, and nearly all of them will end up being wrong. That's right, you heard it here first.

    • Futurists are full of crap. They've been predicting a techno-utopia where technology actually gets ahead of itself and solves the problems that it created.

      Instead, what we end up with is a dystopia that looks more like "Blade Runner" and less like "The Jetsons".

      • That's because technology can absolutely never solve more problems than its design, implementation, production, and use cause. Ever.

        It's in the laws of thermodynamics, but we have to ignore that because we all depend on technology to offload those problems (and sometimes the original problem, if the technology 'transports' it rather than solves it) to other parts of the world.
  • by chrysalis ( 50680 ) on Monday February 11, 2002 @02:35PM (#2988115) Homepage
    IMHO, future operating systems will tend toward something like the EROS operating system [eros-os.org]. This OS is based on multiple tiny, extremely reliable components within a strong capability model, to provide a high level of security.
    It's definitely a good approach, although EROS is still quite experimental.


    • For the purposes of mind expansion you could do much worse [slashdot.org] :-) than lurking on the EROS [eros-os.org] and E language [erights.org] mailing lists [eros-os.org]. Decentralization [yahoo.com] is another good one, though much less focused.
    • Gosh, how about Assembly? All the opcodes used by a microprocessor are extremely reliable components. The problem with any language, and any program, is when everything starts to interact. Components begin to be used in conditions the original author didn't intend, people try to hack the system, it all gets more complex...

      So while it is certainly a good approach to have very stable base components, it isn't an all-solving approach.

    • Lots of small utilities, each with only one function, which it does very well, and each able to have its output piped to other such utilities or vice versa. Sounds like Unix to me.
    • by AJWM ( 19027 ) on Monday February 11, 2002 @03:18PM (#2988489) Homepage
      This OS is based on multiple tiny extremely reliable components

      Unfortunately that doesn't necessarily make the OS itself reliable. The emergent behaviour of a system is different from the behaviours of its components.

      After all, all software is based on multiple tiny, extremely reliable components (F00F and FDIV bugs aside) -- the processor's op-codes -- and look how flaky most software is.

      Sure, you've got to start with reliable components, but you have to combine them in just the right way, too.
      • EROS components aren't just small and working exactly as documented, like your assembly example - that would be enough only if every programmer were an anal-retentive computer scientist, maybe.
        In EROS, everything is orthogonally persistent, meaning that every object, without doing anything on its own, has its state saved by the system.
        The other neat feature, which makes it more reliable even in the face of bad application-level code, is that instead of access-list-based security à la Unix, there are fine-grained permissions called capabilities that govern what any object may do to any other.
        These features, coupled with transparent distribution, could guarantee that even if the terminal in front of you is struck by lightning, you'll be able to move to the nearest working one and pick up *exactly* where you left off!

        Check it out - there are a lot of kewl OS-level ideas there that could make life better if adopted by more mainstream OSes.
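
        A single-process C sketch of the capability idea (EROS's real capabilities are kernel-protected and unforgeable; this toy only shows the access model): whoever holds a capability gets exactly the rights baked into it, regardless of who they are.

        #include <stdio.h>

        enum { CAP_READ = 1, CAP_WRITE = 2 };

        struct object { int value; };

        struct capability {            /* pairs an object with permitted rights */
            struct object *obj;
            unsigned rights;
        };

        static int cap_read(struct capability c, int *out) {
            if (!(c.rights & CAP_READ)) return -1;   /* no right, no access */
            *out = c.obj->value;
            return 0;
        }

        static int cap_write(struct capability c, int v) {
            if (!(c.rights & CAP_WRITE)) return -1;
            c.obj->value = v;
            return 0;
        }

        int main(void)
        {
            struct object o = { 7 };
            struct capability ro = { &o, CAP_READ };   /* read-only handle */
            int v;
            printf("read: %d\n", cap_read(ro, &v) == 0 ? v : -1);
            printf("write allowed? %s\n", cap_write(ro, 9) == 0 ? "yes" : "no");
            return 0;
        }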
      • Sure, you've got to start with reliable components, but you have to combine them in just the right way, too.

        First off, we should learn a lesson from biology. The bee, for example, has about a million interconnected neurons. Yet the bee's highly sophisticated behavior is extremely robust and efficient. How does nature do it? The answer has to do with parallelism and expectations.

        1. Parallel processing ensures that signals are not delayed, i.e., their relative arrival times are guaranteed to be consistent.

        2. Expectations are assumptions that neurons make about the relative order of signal arrival times.

        We can emulate the robustness of nature by first realizing that computing is really a species of the genus known as signal processing. We can obtain very high reliability by emulating the parallelism of nature and enforcing a program's expectations about the temporal order of messages: no signal/message should arrive before its time. The use of stringent timing constraints will ensure that interactions between multiple tiny modules remain consistently robust. Enforcement should be fully automated and an integral part of the OS.

        Of course, this is only part of it. The other constraints (e.g., the use of plug-compatible links, strong typing, etc.) are known already. No message should be sent between objects without first establishing that plugs are connected to compatible sockets, i.e., that they are of the same type.

        The most problematic aspect of computing, IMO, is that it is currently based on the algorithm. The problem is that algorithms wreak havoc on process timing, and the end result is unreliability. The algorithm should not be the basis of computing. To ensure reliability, computing should be based on signal processing. Algorithms should only be part of application design, not process design. Just one man's opinion.
  • I thought they were talking about home users before I read the article. The Microsoft Farsite thing seems like a good idea, as long as it is for large corporations only. Using something like that at home is not good. What happened to Microsoft not working on anything new this month? Speaking of them not working on anything new this month, I still haven't seen any patches on Windows Update. In 11 days of February they haven't solved one single problem with Windows 2000 or IE? Give me a break.
    • Re:Whew (Score:4, Funny)

      by scott1853 ( 194884 ) on Monday February 11, 2002 @03:36PM (#2988611)
      They're learning this month, not identifying bugs. It takes about 30 days to teach an MS programmer with a CS degree that makes $60,000+ a year that you can't fit 2048 bytes of uncompressed data into a 256 byte buffer.
  • Imagine Microsoft trying to build something that obviously requires such a high degree of integration. If they can't build a standalone system securely, how on earth are they going to build this gigantic interwoven network without creating a hundred gaping holes? Oh wait, I must have forgotten about that Trustworthy Computing initiative....

    I doubt this will mean the death of the sysadmin... someone still has to orchestrate this thing from some sort of central position.

  • by HisMother ( 413313 ) on Monday February 11, 2002 @02:38PM (#2988144)
    Reading the Microsoft part of the story, I can't help but laugh out loud at what sounds like inspired self-parody on their part. All computers on Earth connected in one vast, organic, interdependent web, with biomechanical implants, cube-shaped ships, and active "healing" capability, to boot.

    I imagine my Linux boxen surrounded by a couple of stiff-legged, lumbering, wire-encrusted Borg machines, finally proving that resistance is, indeed, futile, as they make my boxen over in their own image.

    And Bill's head, with that little shiny snake-like tail, being clamped onto his body as he assumes command.

  • Linux? (Score:2, Funny)

    by squant0 ( 553256 )
    I don't ever see M$ doing something good in the future. I mean, hey, they have been around since the 70s, and look what you get: Winblows. Linux, on the other hand, may not be moving toward self-healing, but it is definitely moving in a better direction than M$. 10 years to do the same thing it took M$ 30 years to do? And with the average salary going to developers of Linux software being about a dollar a year... 10 years is a long time; I think something cool needs to happen within a year for it to make any kind of impact. I mean, the Macintosh was revolutionary in the early 80s, but 10 years from then, GUIs were standard. The OS sector needs something revolutionary like the GUI to spur it on into the next few years, not the next 10.
  • I'm looking forward to a time when the computer can be addressed from any point in the ship... um, building, knows every bit of data ever publicly recorded, remembers my music tastes, keeps a log for me, can run a Holodeck, make me a perfect dinner, and control warp fields... uh... house heating - with perfection.

    Until then, this all sounds like cute window dressing built on top of the next NT kernel.
  • As long as... (Score:5, Informative)

    by BoarderPhreak ( 234086 ) on Monday February 11, 2002 @02:42PM (#2988183)
    ...this software isn't like Office X from Microsoft on the Mac, which scans your network for anti-piracy measures but in the process opens your machine wide to the Internet by opening several ports... Worse yet, without telling anyone about it!

    Grumble, grumble...

    • Comment removed based on user account deletion
      • by Pfhor ( 40220 )
        MS Notice:
        http://www.microsoft.com/technet/security/bulletin/MS02-002.asp

        And a thread talking about it on macintouch:
        http://www.macintouch.com/officevx3.html#feb08
  • Beware this "distributed storage" push. As the intellectual "property" "industries" gain more and more control of the world's governments, storage will be in the hands of a few large companies, and not under the control of individual users.

    Your digital "rights" managed TrustedPCs will connect to a giant virtual disk array via the network, where what you store will be subject to government and corporate monitoring and removal.

    Think this is nuts? Where are the 200GB drives? Why is Intuit pushing us to store tax and financial information on their site? Why does Microsoft want to give us an authentication token that's good for retrieving our information "anywhere, anytime"?

    Why would anyone (other than a legitimate large corporation) have a need for local storage, once the Internet storage product is fast and cheap? I can only imagine one use for local storage--copyright infringement.

    • Where are the 200GB drives?

      Here [pricewatch.com].

      Why is Intuit pushing us to store tax and financial information on their site? Why does Microsoft want to give us an authentication token that's good for retrieving our information "anywhere, anytime"?

      For now, they're giving you the option more for your convenience than anything. If you multiboot, or even if you lose your Quicken data in a hard drive crash (this has happened to me before), there will be an offsite backup of it that you can access.

      Not to say that it won't turn into something bad, though. As most of us here probably do, I prefer backing up my own data instead of letting the software company do it for me. I am a big proponent of privacy, and I see a definite potential for abuse of these "convenient" features later on. But that doesn't mean they're doing anything bad with it just yet.
  • by mblase ( 200735 ) on Monday February 11, 2002 @02:46PM (#2988231)
    The target environment for [Microsoft's] Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.

    Surely there will be major scalability problems with something like this, a la Gnutella [slashdot.org]?

    The potential pitfalls of 100,000 computers trying to access each other across the same network give me headaches just thinking about it.
      The potential pitfalls of 100,000 computers trying to access each other across the same network give me headaches just thinking about it.
      The number of machines on the network isn't the issue--an AC's tongue-in-cheek response to this comment pointed out that by that logic the Internet shouldn't work--but the bandwidth requirements and network architecture do matter. Gnutella's problem is that it requires a LOT of bandwidth and is easily bogged down by slow (i.e. modem) connections. A well-designed protocol and architecture (i.e. not a pre-alpha binary posted on the web for less than 24 hours ;-) ) would probably be up to the task. Of course, knowing Microsoft, they'd probably ship a protocol and architecture that scales worse than Gnutella... :-p
      • The latest versions of LimeWire are much better, since they use so-called super-peers, which makes Gnutella very similar to the FastTrack protocol used in Morpheus and Kazaa. It seems that the Gnutella protocol is evolving in the right direction; the early versions in particular were rather stupid and naive.

        Right now Gnutella's main problem is that nobody knows this. The network can easily handle the number of users it has; it's just competing with the much larger FastTrack network, which simply has more to offer.
    • by Salamander ( 33735 ) <jeff@ p l . a t y p.us> on Monday February 11, 2002 @04:00PM (#2988764) Homepage Journal
      Surely there will be major scalability problems with something like this, a la Gnutella

      That's why it's research. I've met and talked to Bill Bolosky (Farsite project lead); he's very clueful wrt scalability in general, and well aware of the problems that networks like Gnutella (an unusually naive protocol, BTW) have run into. However, like the folks working on OceanStore [berkeley.edu] or CFS [mit.edu] or many other projects, the Farsite folks have a fairly formidable arsenal of innovative techniques they can apply to the problem. The details are still being worked out, of course, because that's what research is all about, but the people working in this area do seem to be making real progress toward solutions that could scale to such levels.
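
      As a flavor of the kind of technique involved (this is not Farsite's published design, just a common decentralization trick), replica locations can be computed from a deterministic hash of the file name, so every client agrees on where a file lives without asking a central directory server. A C sketch; the file name and constants are invented:

      #include <stdio.h>

      #define NODES    100000   /* machines in the target organization */
      #define REPLICAS 3        /* copies kept of each file */

      static unsigned long djb2(const char *s) {   /* classic string hash */
          unsigned long h = 5381;
          while (*s) h = h * 33 + (unsigned char)*s++;
          return h;
      }

      int main(void)
      {
          const char *file = "/users/jdoe/report.doc";  /* hypothetical file */
          printf("replicas of %s live on nodes:", file);
          for (int i = 0; i < REPLICAS; i++) {
              char key[256];
              snprintf(key, sizeof key, "%s#%d", file, i);  /* name + replica # */
              /* a real system would also skip duplicate and dead nodes here */
              printf(" %lu", djb2(key) % NODES);
          }
          printf("\n");
          return 0;
      }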

  • Scary (Score:2, Interesting)

    by amaprotu ( 527512 )
    'Self-healing' scares me. I'm not entirely sure why, but I want to be in control of my computer. I'm afraid that with 'self-healing' my computer can install things I don't want installed, uninstall things I do want, and send all my information to Big Brother.

    Now, if it were an open-source, distributed OS with self-healing, I might be OK. I guess I just object to giving that much control to a large corporation whose main concern is profits, not my privacy.
      I'm afraid that with 'self-healing' my computer can install things I don't want installed, uninstall things I do want, and send all my information to Big Brother.

      When I worked at BigCorp everyone was networked, and you couldn't log on until you'd installed the latest gimcrack they had pushed to your desktop - never mind if it mucked up your other programs. And they would interrupt whatever you were doing to "push" news broadcasts onto your screen every time they made a sale -- at least, that was back when they were actually making sales. It seemed kind of Big Brotherish. (Of course, it was their gear.)

      I don't even like it when someone comes into my cube and looks over my shoulder, much less sharing all my files.

      As far as my own gear goes, I'd rather sit in a cave alone and scratch images into the sand with a sharp stick than be connected to the kind of all-encompassing network you describe.
  • by 8string ( 316088 ) on Monday February 11, 2002 @02:47PM (#2988244)
    Farsite is just one of several projects at Microsoft Research and other labs around the world that will render operating systems all but unrecognizable in 10 years. Farsite embodies several characteristics--such as fault tolerance, self-tuning and robust security--that will distinguish operating systems of the future.

    So, Bill is finally going to release a version of windows that will automatically simulate pressing ctrl-alt-delete when it blue screens.

    Many people would say it's MS's customers that have been fault tolerant.
    <rimshot!>
    • So, Bill is finally going to release a version of windows that will automatically simulate pressing ctrl-alt-delete when it blue screens.

      Actually, they already invented that with W2K... if you happen to be on a coffee break while it crashes and don't pay attention to whether you are doing a login or an unlock, you might be surprised by a fresh desktop just when you were thinking there were too many apps open anyway...

  • by gorilla ( 36491 ) on Monday February 11, 2002 @02:47PM (#2988245)
    VAX Clusters [uni-ulm.de].
  • Hmmm... (Score:5, Interesting)

    by dghcasp ( 459766 ) on Monday February 11, 2002 @02:48PM (#2988253)
    Oh, you mean something like Plan 9 [fywss.com] from Bell Labs?

    I predict that there will never be a revolutionary new operating system until we break free of the chains imposed by POSIX compliance. Until then, we're stuck with files that have to be streams of bytes, ugo-style permissions, non-wandering processes, incompatible RPC calls, &c.

    And the real pain is that there have been OSes with simple & elegant solutions to problems that are hard under Unix (Aegis, Multics, VMS, TOPS, ...) that were pushed aside by the steamroller that is Unix.

    But to be fair, many of the forgotten O/S's are now forgotten because they weren't as general purpose as Unix. Unix is the great compromise. But it's hard to strive for the best when you've already accepted compromise.

    • But to be fair, many of the forgotten O/S's are now forgotten because they weren't as general purpose as Unix. Unix is the great compromise. But it's hard to strive for the best when you've already accepted compromise.

      OK, you tell the CIO of [mid-sized corp] that he has to junk his $5m worth of Sun boxes because his O/S is a 'compromise'. The enterprise game is a one-shot deal. This isn't "ok, that pc is broken, ship it back to Dell" it's "you spent $500k on a machine that wasn't good enough? go find a new job".

      The people that make technology decisions don't care about elegance.
  • And get all these ideas implemented in the Linux kernel! Now that we know the future, we can be the first ones there!

    But seriously, somehow I don't see this in 10 years.
  • by dfenstrate ( 202098 ) <dfenstrate@gmaiEULERl.com minus math_god> on Monday February 11, 2002 @02:50PM (#2988269)
    Farsite, while ingenious, looks more like a fantastic file-storage system than anything else. Is it possible that they've tweaked the UI most of us are accustomed to to the point where any further upgrades are aesthetic-, feature- or reliability-driven, and aren't fundamental improvements on the current desktop analogy?
    Will the majority of the computer using populace still be double clicking, dragging and dropping, and 'opening' folders and hard drives 10, 15 years from now?

    Could be. Could be.

    • Will the majority of the computer using populace still be double clicking, dragging and dropping, and 'opening' folders and hard drives 10, 15 years from now?
      No. The majority of the computer-using populace will be having one of the following conversations with their computers:
      "Bob?" "Yes, John?" "How long has it been since I emailed Mom?" "It's been three weeks, John. You should really send her another." "Right. Send her this: Dear Mom..."

      "Bob?" "Can I help you, Sally?" "How long has it been since I got a letter from John?" "Three weeks, but his computer tells me it looks like he's trying to write a new one now." "Splendid... let me know when it arrives."

      "Bob?" "What?" "Leave the toaster oven alone." "But it doesn't have the latest..." "I don't CARE! I do not want to upgrade it." "All right, Steve. I'll remind you in 30 days."

      The lucky ones will be those who remember how to use the desktop metaphor or the CLI.
  • Freenet (Score:2, Informative)

    by commonchaos ( 309500 )
    Looking at the diagram at the bottom of the article, I was reminded of how Freenet works... so at least in that area it looks a bit redundant. The article seems to describe a grouping of many ideas which have been around for a while, with a bit of marketing hype added. Nothing too impressive, but interesting nonetheless.
  • Druthers (Score:5, Funny)

    by r_j_prahad ( 309298 ) <r_j_prahad AT hotmail DOT com> on Monday February 11, 2002 @02:52PM (#2988282)
    I don't need a self-healing computer nearly as much as I need a self-painting house and a self-mowing lawn. And my wife could sure as heck use a self-fueling car.
    • I don't need a self-healing computer nearly as much as I need a self-painting house and a self-mowing lawn. And my wife could sure as heck use a self-fueling car.
      Fsck all that, I just want self-washing dishes...some self-laundering clothes (and/or money) might be nice too... ;-)
      There ARE self-painting houses; the marvelous new technology that allows this feat of engineering is known as "vinyl siding."
  • by kfg ( 145172 ) on Monday February 11, 2002 @02:52PM (#2988288)
    Don'cha just love it when people "predict" what's already nearly available? And without even mentioning its existence in the article.

    And don'cha just love it when MS "predicts" that they'll "innovate" by duplicating it under the MS banner?

    Anybody care to "predict" the havoc that might ensue when such OSes gain wide public use? I'd be leery of using one even on my isolated-from-the-Internet home network until it was proven to be absolutely secure - something today's less interactive computer nets can't even manage.

    I'm happy that people are looking forward to, and researching, the future.

    Would it hurt if a few people spent a bit more time making the present work worth a shit?

    KFG
  • by d5w ( 513456 ) on Monday February 11, 2002 @02:56PM (#2988310)
    There's a good side and a bad side to this, considering the companies working on it. The good news is that whenever the researchers are talking about Byzantine fault tolerance you can translate that as "assume the machines on the network are unsecured Windows PCs". In that sense it's great to hear of Microsoft feeding a reporter that phrase, since it suggests a from-the-ground-up specification that doesn't inherit the security holes of the past and is robust against insecure machines.

    The bad side, which is closer to reality, is that a computer company working in an "extend our existing market" mode will find it irresistible to tie new things tightly to the innards of what has already been deployed. That's a great way to ensure that you inherit security flaws from whatever old model you had, however good the theory of your new system is.
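
    For readers wondering what "Byzantine fault tolerance" buys you, here is the core arithmetic as a toy C read: with 3f+1 replicas, a value backed by at least 2f+1 matching replies cannot have been forged by f lying or broken machines. The reply values below are invented for illustration.

    #include <stdio.h>

    #define F        1            /* faults tolerated */
    #define REPLICAS (3 * F + 1)  /* replicas queried */
    #define QUORUM   (2 * F + 1)  /* matching replies required */

    int main(void)
    {
        int replies[REPLICAS] = { 42, 42, 99, 42 };  /* one replica lies */
        for (int i = 0; i < REPLICAS; i++) {
            int votes = 0;
            for (int j = 0; j < REPLICAS; j++)
                if (replies[j] == replies[i]) votes++;
            if (votes >= QUORUM) {
                printf("accepted value %d with %d/%d votes\n",
                       replies[i], votes, REPLICAS);
                return 0;
            }
        }
        puts("no quorum; treat the read as failed");
        return 1;
    }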

  • by PHAEDRU5 ( 213667 ) <instascreed.gmail@com> on Monday February 11, 2002 @03:00PM (#2988344) Homepage
    Let me see if I've got this straight:

    1. /. story about Microsoft getting legal permission to take over your computer, as part of a EULA.

    2. Computerworld story that includes a line about how Microsoft sees the computer of the future as one giant logical system with many small partitions.

    Is anyone else joining the dots like I am?
  • by Arcanix ( 140337 ) on Monday February 11, 2002 @03:04PM (#2988373)
    I assume Microsoft will be releasing the source code and freely distributing Farsite so I support this project.
  • by GSloop ( 165220 ) <networkguru@sloo ... minus physicist> on Monday February 11, 2002 @03:04PM (#2988375) Homepage
    How about getting rid of IRQs on the PC platform!

    How about getting rid of drive letters in Windows/Dos and having mount points!

    How about a better drive interface than the stupid IDE interface. (Macs did it right with SCSI, but now to be "cheap" they do it too [sigh])

    And for self-healing? If Windows is still around and the predominant OS, I'll pass on the "self-healing" - it'll be more like "death without dignity." Remember NT 4 SP6? [Shiver] I don't want MS "self-healing" my machine!

    In fact, I don't think I want anyone self healing my machine until software is lots more robust than it is now. At least when I apply patches to my machine and notice that something isn't working right, I know I _just_ patched it, so it might be the patch. With someone else applying patches without my knowing, I would be screwed!

    Yeah, all those "wonderful things are just around the corner" articles are neat, but I would truly be happy with some "incremental" changes.

    Let's forget "visionary" for a while and just fix the crap that's broken right now! Pleeeeease!

    Cheers!
    • by mike_g ( 24445 )
      How about getting rid of IRQs on the PC platform!

      Perhaps I am misunderstanding, but you want to get rid of interrupts? Interrupts are a good thing; what we need to do is increase their number, not remove them. If I remember correctly, the PowerPC architecture has 64 hardware interrupts instead of the measly 16 on the x86 platform. We want more interrupts, not fewer.

      How about getting rid of drive letters in Windows/Dos and having mount points!

      I agree with this, although in the short term it would be a pain migrating existing users over. Everyone would have to learn to use /mnt/floppy (or its equivalent) instead of a:. Some sort of symlinking could get around this, though. Mount points for hard drives are a great improvement; see the sketch below.
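
      For comparison, this is what a mount point looks like at the system-call level on Linux. The device, target directory, and filesystem type here are hypothetical, and the program needs root to actually succeed:

      #include <stdio.h>
      #include <sys/mount.h>

      int main(void)
      {
          /* graft the floppy's filesystem onto /mnt/floppy in the one tree */
          if (mount("/dev/fd0", "/mnt/floppy", "vfat", MS_RDONLY, NULL) != 0) {
              perror("mount");
              return 1;
          }
          puts("a:\\ is now just /mnt/floppy");
          return 0;
      }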

      How about a better drive interface than the stupid IDE interface. (Macs did it right with SCSI, but now to be "cheap" they do it too [sigh])

      Oh, the great IDE vs. SCSI debate. I don't think that Macs support IDE to be "cheap"; I think they do it to be relatively competitive/affordable. For some reason unknown to me, SCSI drives are much more expensive than IDE drives. Looking at today's Pricewatch listings, I found that the cheapest $ per GB for SCSI was $3.85/GB for a 36.4GB drive, while on the IDE side you could get a 60GB drive for $1.37/GB. The cheapest SCSI is about 2.81 times the price of IDE per GB - never mind that some SCSI drives ran over $10/GB. While I do realize that SCSI is superior to IDE (higher performance, less CPU utilization, more devices per controller), and I would never use anything but SCSI in a server or high-end workstation, is it really worth almost 3 times the price for the desktop? Most desktop uses (browsing the internet, email, word processing, solitaire) would not even be noticeably improved by the increase in performance. For tasks such as these, IDE is more than adequate.

      What I would find interesting is a size/performance comparison between a $x SCSI drive and a $x IDE hardware RAID array.
  • Links provided (Score:3, Informative)

    by NearlyHeadless ( 110901 ) on Monday February 11, 2002 @03:05PM (#2988381)
    Farsite [microsoft.com]

    Butler Lampson [microsoft.com], for papers on Byzantine reliability, mostly based on the work of

    Leslie Lamport [microsoft.com]
  • The market for alternative operating systems has completely dried up, so you should really be asking what will be in future versions of Windows and Linux because, unless there is a huge surge in OS research, those are going to be all that's left in ten years.
  • Brrr... (Score:5, Funny)

    by sharkey ( 16670 ) on Monday February 11, 2002 @03:12PM (#2988436)
    ...operating systems...and they'll collaborate with applications...

    Windows Inheritance: "Psst. You crouch behind j.user's legs and I'll give him a push."
    Clippy 5000: "OK"
    *SHOVE*-splat!
    Software: "Have a nice trip? See you next Fall! Muahaha!"
  • by Guppy06 ( 410832 ) on Monday February 11, 2002 @03:14PM (#2988457)
    "Imagine computers in a group providing disk storage for their users, transparently swapping files and optimizing their collective performance, all with no central administration."

    Whoever thought up this pipe dream apparently doesn't understand the Zeroth Law of Network Security: If you want information to be secure, DON'T PUT IT ON THE FUCKING NETWORK!

    Seriously! As if most business OSes don't default to the least-secure settings already! Why would you want to run important apps on a system where the default is to share anything and everything with any computer in listening distance?
  • Weren't there predictions just like this ten years ago?
  • by Whatsthiswhatsthis ( 466781 ) on Monday February 11, 2002 @03:19PM (#2988499)
    ...there won't be much drastic change between now and 18 years from now. For evidence of this, look at the Apple Lisa. The Lisa had windows, icons, a menu bar, a WYSIWYG interface, and a mouse. Today's computers are little more than a glorified Lisa interface, whether they are running Mac OS X or Windows XP (I know because I run both). Like the Lisa, today's computers still crash and still corrupt themselves. I doubt that this could be easily changed in the next five, ten, or even fifteen years.

    I'll believe the distributed file-storage myth when I see it. To me, it sounds as if it would hog bandwidth, just like Gnutella does. I don't see any change coming in the way I store files on my computer. It's fast, efficient, and hasn't needed a change.

    SysAdmins need not quit their day-jobs. As long as Microsoft is providing this technology, you can be sure that it will run into snags and security vulnerabilities. Increased complexity = increased vulnerability.

    ...and that's all I've got to say about that
  • The target environment for Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.

    Hmmm... my first thought: "ScanDisk is checking hard drive C..."

    Farsite is a serverless, distributed system that doesn't assume mutual trust among its client computers. Although there's no central server machine, the system as a whole looks to users like a single file server.

    Cool...Microsoft invents the cluster. I'm sure the folks who created Beowulf clusters stole the idea from them...come to think of it, those Gnutella folks blatantly ripped them off too...

    ...the Farsite project at Microsoft Corp...embodies several characteristics--such as...robust security...

    I'd say something mean, but I assume this was meant as a joke...
  • Farsite is a serverless, distributed system that doesn't assume mutual trust among its client computers. Although there's no central server machine, the system as a whole looks to users like a single file server. High reliability and security are ensured because each file has one or more encrypted and digitally signed replicas elsewhere in the cluster.

    It sounds to me like MS is worried about the future of the file-server market. Perhaps they see the writing on the wall... it says LINUX. Who's likely to implement Linux servers? Those who can't afford to pay for a Win2K Server license. "But wait, if you upgrade to the new Farsite OS, you don't need a server! So you don't need to use Linux at all! Think of the cost savings when you don't need to buy or maintain a separate server! Think of the savings in administration costs!" Or some hype along those lines. With large corporations, with all that spare hard drive space and all those idle processors, how many servers could they replace? Have they done the math and come up with figures that spell doom for the file-server market?

  • Maybe it's just me, but the Farsite diagram at the bottom of the article really reminded me of how I understand Freenet to work...Is MS attempting to create a DRM-enabled variation of this same idea?

    I don't imagine that Farsite has the same goals as the Freenet project, but there is enough similarity in the underlying technology that I was struck by it. Maybe MS is recognizing the value of the architecture, if not some of its potential uses?

  • I find that part about a "self-healing" OS in a fantastically complicated distributed system rather unbelievable. Microsoft has actually been attempting to edge Windows towards self-healing. But that depends on the OS actually being able to identify problems and find the fixes. So far, "self-damaging" seems to be a more accurate assessment of the results -- and this is for an OS residing on a single box. In a distributed system...
  • That's fine, but what does it look like?

    More than anything else, the user cares about the OS interface. How does it work?

    The user doesn't give a damn about where a file is stored. He just wants to launch his programs quickly and locate his files fast. Why can't we do some thinking on this basic issue (and not have the end result be some bulky goofy 3-D environment)?
  • UNIX (or Linux) can be as transparent as you want it to be, if you're willing to put lots of intelligence in a storage driver. It wouldn't matter in principle where the data was, on tape or disk; you could have just one monster file-storage device. In practice, large applications want some control, to increase efficiency.

    Mainframes got very sophisticated at automating this. It was also somewhat difficult to program commands in IBM's or DEC's data-definition languages. Much of this was lost in the downsizing to personal workstations and is being rediscovered.
  • The article is poorly researched. IBM's autonomic computing != Farsite. IBM's autonomic computing [ibm.com] is a very ambitious project. Here's the opening paragraph from the autonomic site:

    IBM believes that we are at just such a threshold right now in computing. The millions of businesses, billions of humans that compose them, and trillions of devices that they will depend upon all require the services of the I/T industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled I/T workers to manage all of the systems. It's a problem that's not going away, but will grow exponentially, just as our dependence on technology has.

    From my understanding, autonomic computing and other projects like it are going for something much bigger than "let's make our OS smarter." I seriously doubt this is targeted at the consumer, since there are too many privacy issues. The real benefit of "self-healing" is in the corporate environment, where uptime is critical. Autonomic computing's goal, as I read it, is about making systems work together seamlessly to improve reliability and scalability. Say a server has some hardware problem, or a switch is dying. Things like these could cause real financial losses, so having smart systems that reconfigure/heal themselves could reduce the cost of hardware and software failures. How many times have admins had to get up at 3 a.m. to fix the web server because some log ran amok and ate up all the HD space? Having a standard system for handling these problems would help make systems more reliable.
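
    One tiny flavor of what "self-healing" could mean in that 3 a.m. scenario, as a POSIX C sketch (the paths and the 5% threshold are invented): notice the disk filling up and truncate the runaway log before it takes the service down.

    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    int main(void)
    {
        struct statvfs fs;
        if (statvfs("/var", &fs) != 0) { perror("statvfs"); return 1; }

        /* fraction of blocks still available to unprivileged users */
        double free_frac = (double)fs.f_bavail / (double)fs.f_blocks;

        if (free_frac < 0.05) {   /* under 5% left: "heal" by dropping the log */
            if (truncate("/var/log/app.log", 0) == 0)
                puts("healed: runaway log truncated");
            else
                perror("truncate");
        } else {
            printf("ok: %.0f%% of /var free\n", free_frac * 100.0);
        }
        return 0;
    }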

    Too many reporters are getting way too lazy.

  • The target environment for Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.
    Now, the problem as I see it is that only Fortune-1000-size companies have 100,000 computers, and a good whack of those are currently pretty old and will be in 2006. While it is likely that there will be 100GB per machine, does anyone really believe that there will be an average of 100,000 files per machine (10 billion files over 100,000 machines, assuming American billions)? Remember, this is a distributed OS, so the OS files only get counted once, averaging roughly 1 file per machine. That means that every machine will have roughly 99,999 unique files. That's a lot of Pr0n!

    Plus, as these are Fortune 1000 companies, what is the bet that they won't even look at this technology for another 10+ years?

    Maybe, just maybe, it will be possible (well, it already is, but...) - what is the chance of it being really deployed?

    Plus, where are the offsite backups going to be done? Does this mean that every workstation has to be left on at all times? How much retraining does that require? Yes, we know that you used to get fired for leaving your machine on, but if you don't from now on, you will be fired!

    Methinks that the dream will not match the reality....

      does anyone really believe that there will be an average of 100,000 files per machine?

      There are 80,000 files on my machine. 42,000 are in or under the Windows folder. That is 38,000 non-OS files. (Actually many more than that, because lots of non-OS stuff gets into the Windows folder -- e.g., every internet bookmark is a separate file in Windows\Favorites.) And that's on a 10G hard drive, with less than 7G used!
  • Farsite is just one of several projects at Microsoft Research and other labs around the world that will render operating systems all but unrecognizable in 10 years.
    Ahem ... ahem ... I feel like I'm karma-whoring here, but ...

    How long has it taken for Microsoft to make an OS that simply DOES NOT CRASH?!

    With around 15 years of work and refinement, they may just about have gotten to that point with Win2000 and WinXP. How much effort did it take them to do long file names, for heaven's sake? Let's not even get into issues about the quality of multitasking.

    I simply can't take a prediction seriously that a (real) Borg Operating System will be a reality in 10 years. Especially coming from Microsoft. Heck, I wouldn't believe such a prediction from an OS company I respect. But from Microsoft??? Consider the source.

    Sig: What Happened To The Censorware Project (censorware.org) [sethf.com]

  • Computers will become easier to use.

    And as they get easier to use, the number of people who really understand computers will also decrease.

    As fewer and fewer people need to understand how a computer ticks in order to use it, the current class of knowledgeable computer users will become a smaller and smaller subgroup of computer users.

    This elite class of computer 'brains' will be increasingly in demand for those cases where VB Programming 101 is not sufficient.

    This elite class will be paid vast sums to keep the rest of the computer-using world happy (I can dream can't I? :-) )

    Cheers,

    Toby Haynes

  • by pmz ( 462998 ) on Monday February 11, 2002 @04:04PM (#2988801) Homepage
    What was the hype ten years ago? Twenty?? Then why am I still using UNIX??? And why is UNIX still the most powerful OS in common use????

    I think there hasn't been a new idea widely used in computing since the '70s! What gives?
  • Personally, I see it in several of the applications I use regularly. ACDSee Classic? Eudora Mail? Forte Agent? Opera? mIRC? WinAmp? They're almost never updated; the application layer is getting "done". OK, you can add the latest whiz-bang features, and I'd upgrade too if it's free, but they're not providing any real added value.

    The only thing left to compete on, when the consumer doesn't need any new features, is cost. Windows apps are getting there; Windows itself isn't there yet, nor are Linux and its apps, but they're getting there, and there's no competing with something that's free (BSD-free or GNU-free doesn't matter much to the end user). Look at Win2K (Pro) vs. WinXP Pro. What *good* corporate features are there? Damn close to none, and a whole lot of crap and eye candy from the Home edition that doesn't provide any business value whatsoever.

    Kjella
  • I'd like to see it (Score:2, Informative)

    by On Lawn ( 1073 )
    It may be relatively recent technology, but I wonder if this will happen or not.

    Mosix does a pretty good job of balancing processing time, but it won't split tasks that require shared memory or sockets, and it is not fine-grained enough to put threads on different machines. It also requires a similar kernel to run on all of the machines. But I run it now because it is the closest thing we have. I think it may catch on.

    For distributed disk sharing, the closest we could find was Coda, although it has a few disadvantages too: you can't have very large volumes, it's difficult to configure, and it takes painfully earned experience to use efficiently.

    Mosix has its MFS, which gives everyone a shot at everyone else's disk drive. This is an interesting possibility too; however, it is not configurable - you can't lay the volumes down where you want them to be. It could be used.

    But then, we could partition available disk space into large network RAIDs with network devices; see the sketch below. GFS, I believe, works along this principle - lower-layered than Coda, but without the caching that I think lets such a system work efficiently over the network.
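
    A toy C sketch of the network-RAID principle (not GFS's actual layout): stripe data across several nodes plus an XOR parity node, so any single lost node can be rebuilt from the survivors. The block contents are invented.

    #include <stdio.h>

    #define DATA_NODES 4
    #define BLOCK      8

    int main(void)
    {
        unsigned char stripe[DATA_NODES][BLOCK] = {
            "node-0.", "node-1.", "node-2.", "node-3."
        };
        unsigned char parity[BLOCK] = { 0 };

        for (int n = 0; n < DATA_NODES; n++)          /* compute parity */
            for (int b = 0; b < BLOCK; b++)
                parity[b] ^= stripe[n][b];

        /* pretend node 2 died; rebuild it from the survivors + parity */
        unsigned char rebuilt[BLOCK];
        for (int b = 0; b < BLOCK; b++) {
            rebuilt[b] = parity[b];
            for (int n = 0; n < DATA_NODES; n++)
                if (n != 2) rebuilt[b] ^= stripe[n][b];
        }
        printf("recovered node 2: %.8s\n", rebuilt);
        return 0;
    }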

    I guess the funny thing is that I use and consider them in spite of the challenges. Kind of like Linux in the 1.2.13 days. Ahh, the good ol' days, when "Hey, we finally got X working" would bring a round of congratulations from the lab, and "Oh no, the mouse doesn't work" would only mean we'd be happy to fumble around for another few hours, with faith that it would eventually work if we changed something somewhere.

    Hey, wait a minute. You know, maybe Linux isn't dead like some have said. Maybe there is still software frontier to cover, and being covered, that we can download/compile and enjoy....

    (Although I have yet to get a workable EROS kernel doing anything useful...)
  • This kind of thing was being done in 1968 - check out the UC Irvine "Distributed Computing System". If I remember right, it went well beyond things like file sharing among relatively autonomous machines; it even had the memory allocator running on different machines than those holding the memory being allocated.

    I believe that it also used an interesting mechanism in which resource requests were allocated through an auction-like process: if one of the boxes needed to spawn a process, it would put out an RFP, and machines willing to undertake the job would offer bids with costs. A second commitment phase then bound the offer to the bid.

    All this in the late 1960's.
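
    The bid/award cycle described above is easy to picture in C; the machine names and costs below are invented: collect the bids that answer the RFP, award the job to the cheapest, then commit.

    #include <stdio.h>

    struct bid { const char *machine; double cost; };

    int main(void)
    {
        /* replies to our request-for-proposals */
        struct bid bids[] = {
            { "machine-a", 3.2 }, { "machine-b", 1.7 }, { "machine-c", 2.4 },
        };
        int n = sizeof bids / sizeof bids[0], winner = 0;

        for (int i = 1; i < n; i++)           /* award phase: lowest cost wins */
            if (bids[i].cost < bids[winner].cost) winner = i;

        /* a second, commitment phase would bind the offer to this bid */
        printf("spawning process on %s at cost %.1f\n",
               bids[winner].machine, bids[winner].cost);
        return 0;
    }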
  • VMS (Score:2, Interesting)

    by D.Throttle ( 432930 )
    VMS has been doing all of those things for years. Now can anyone tell me where it is right now?
  • by nathanh ( 1214 ) on Monday February 11, 2002 @04:41PM (#2989029) Homepage

    My strong belief is that the best "predictions" occur when you find something in use today - only too expensive for the home user - and "predict" it will be ubiquitous within a few years. So here are my completely predictable predictions.

    1. Stereo equipment will start to offer Ethernet ports and "integration with your home computer". Initially this will be limited to song selections via Windows-only software.
    2. Affordable SANs will become popular. Initially this will occur within school/university labs but the gear will spread into "tech homes" as well.
    3. The word processor will become "that thing you get for free with your computer" thanks to efforts from Sun and OpenOffice, similar to what currently occurs with web browsers and media players.
    4. People will get sick of managing hundreds of incompatible devices; stereo, computer, MP3 player, discman, mobile phone, PDA, etc. Vendors will form large alliances to offer an integrated system.

    Notice how all of my predictions sort-of exist already. This is what makes predictions so easy.

  • by crovira ( 10242 ) on Monday February 11, 2002 @04:49PM (#2989078) Homepage
    And if you believe this piece of dross, read their predictions from ten years ago.

    'Nuff said.
