Technology

Operating Systems of the Future

An anonymous reader writes: "'Imagine computers in a group providing disk storage for their users, transparently swapping files and optimizing their collective performance, all with no central administration.' Computerworld is predicting that over the next 10 years, operating systems will become highly distributed and 'self-healing,' and they'll collaborate with applications, making application programmers' jobs easier."
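The article doesn't spell out how "no central administration" would actually work. As an illustration only -- nothing below comes from Farsite or the Computerworld piece -- one common way to let every machine agree on where a file's copies live, without asking a central server, is to derive the replica locations from a hash of the file's path. A minimal C sketch, with the hash function, machine count and replica count all chosen arbitrarily:

```c
#include <stdio.h>
#include <stdint.h>

#define MACHINES 100000   /* size of the hypothetical cluster   */
#define REPLICAS 3        /* assumed copies kept of each file   */

/* FNV-1a: a simple, well-known string hash (not Farsite's). */
static uint64_t fnv1a(const char *s)
{
    uint64_t h = 14695981039346656037ULL;
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Every node runs the same function, so no central server is
 * needed to agree on which machines hold a given file.
 * (Collisions between the chosen hosts are ignored here.)   */
static void replica_hosts(const char *path, int hosts[REPLICAS])
{
    uint64_t h = fnv1a(path);
    for (int i = 0; i < REPLICAS; i++)
        hosts[i] = (int)((h + (uint64_t)i * 40503ULL) % MACHINES);
}

int main(void)
{
    int hosts[REPLICAS];
    replica_hosts("/projects/report.doc", hosts);
    printf("replicas on machines %d, %d, %d\n",
           hosts[0], hosts[1], hosts[2]);
    return 0;
}
```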
  • by SirSlud ( 67381 ) on Monday February 11, 2002 @02:35PM (#2988102) Homepage
    I'm so sick and tired of what the next 10 years will bring us. How about OSes that don't crash? How about hardware that won't lock up your computer? How about open standards, and a generally more cautious approach to computing that will allow us to stabilize the developments that occur? Nah .. of course not. Let's take this overly complicated, not-so-reliable thing and throw a transparent layer of 'self-healing' autonomy on top of it. I know that's what I've been looking for ... yet another reason why I have to explain to my boss that computers ain't perfect. I can hear him now: "But they're supposed to heal themselves! Why didn't the OS dial up our energy provider and ask why the power went out?!"
  • by Peridriga ( 308995 ) on Monday February 11, 2002 @02:42PM (#2988186)
    Is it just me, or does "such as fault tolerance, self-tuning and robust security" just not sound like a Microsoft product?

    And...

    http://www.computerworld.com/computerworld/records/images/story/Farsite.gif

    Is it just me, or does the notion of a "centralized file server" NOT sound like distributed computing to you?

    Leave it to Microsoft to translate distributed into centralized
  • by Anonymous Coward on Monday February 11, 2002 @02:44PM (#2988208)
    ...this won't fly well at all in systems that are required to be audited and validated, for things like pharmaceuticals, where a certain degree of determinism is required...
  • by PowerTroll 5000 ( 524563 ) on Monday February 11, 2002 @02:44PM (#2988214)
    Most unfortunately, postulating what we could do is much more exciting than perfecting something we already do.

    Apparently, the public has a certain tolerance for defects and bugs. A fine example is the automobile, with its near-certain breakdowns, despite Tucker proving otherwise [protsman-antiques.com].
  • by mblase ( 200735 ) on Monday February 11, 2002 @02:46PM (#2988231)
    The target environment for [Microsoft's] Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.

    Surely there will be major scalability problems with something like this, a la Gnutella [slashdot.org]?

    The potential pitfalls of 100,000 computers trying to access each other across the same network give me headaches just thinking about it. (A back-of-the-envelope breakdown of the article's numbers follows the comments below.)
  • by dfenstrate ( 202098 ) <dfenstrate@gmaiEULERl.com minus math_god> on Monday February 11, 2002 @02:50PM (#2988269)
    Farsite, while ingenious, looks more like a fantastic file storage system than anything else. Is it possible that they've tweaked the UI most of us are accustomed to, to the point where any further upgrades are aesthetic-, feature- or reliability-driven, and aren't fundamental improvements on the current desktop analogy?
    Will the majority of the computer using populace still be double clicking, dragging and dropping, and 'opening' folders and hard drives 10, 15 years from now?

    Could be. Could be.

  • by kfg ( 145172 ) on Monday February 11, 2002 @02:52PM (#2988288)
    Don'cha just love it when people "predict" what's already nearly available? And without even mentioning its existence in the article.

    And don'cha just love it when MS "predicts" that they'll "innovate" by duplicating it under the MS banner?

    Anybody care to "predict" the havoc that might ensue when such OSes gain wide public use? I'd be leery of using one even on my isolated-from-the-Internet home network until it was proven to be absolutely secure, something today's less interactive computer nets can't even manage.

    I'm happy that people are looking forward to, and researching, the future.

    Would it hurt if a few people spent a bit more time making the present work worth a shit?

    KFG
  • by Detritus ( 11846 ) on Monday February 11, 2002 @03:00PM (#2988346) Homepage
    Sadly, reliability doesn't sell. The average computer user wants fast and cheap. Even on slashdot, you see endless dicksize wars over who has the most 'leet, overclocked system running last night's kernel release on the latest CPU, chipset and motherboard. It doesn't have to work reliably if it looks cool doing it.
  • by GSloop ( 165220 ) <networkguru@sloo ... minus physicist> on Monday February 11, 2002 @03:04PM (#2988375) Homepage
    How about getting rid of IRQ's on the PC platform!

    How about getting rid of drive letters in Windows/Dos and having mount points!

    How about a better drive interface than the stupid IDE interface! (Macs did it right with SCSI, but now, to be "cheap", they use IDE too [sigh].)

    And for self-healing? If Windows is still around and the predominant OS, I'll pass on the "self-healing" - it'll be more like "death-without-dignity." Remember NT 4 SP 6? [Shiver] I don't want MS "self-healing" my machine!

    In fact, I don't think I want anyone self healing my machine until software is lots more robust than it is now. At least when I apply patches to my machine and notice that something isn't working right, I know I _just_ patched it, so it might be the patch. With someone else applying patches without my knowing, I would be screwed!

    Yeah, all those "wonderful things are just around the corner" articles are neat, but I would truly be happy with some "incremental" changes.

    Let's forget "visionary" for a while and just fix the crap that's broken right now! Pleeeeease!

    Cheers!
  • by SparkyUK ( 10333 ) on Monday February 11, 2002 @03:17PM (#2988473)
    >>How about OSes that don't crash? How about hardware that won't lock up your computer?

    One of the key laws of nature is : Shit Happens.

    This is as true for code in your PC as it is for crawlies in nature.

    We want to fool ourselves that the PC is a clean and closed environment which we have full control of, but it just isn't true. That storage device that was there a picosecond ago may have just failed or been removed, the network connection may have just been severed, another program may be running amok and draining system resources just as another needs them.

    Nature mostly gets around unexpected problems; we need OSes and languages that can do the same.

    Your goals of OSes that don't crash and hardware that doesn't "lock up" aren't incompatible with that.
  • by AJWM ( 19027 ) on Monday February 11, 2002 @03:18PM (#2988489) Homepage
    This OS is based on multiple tiny extremely reliable components

    Unfortunately that doesn't necessarily make the OS itself reliable. The emergent behaviour of a system is different from the behaviours of its components.

    After all, all software is based on multiple tiny extremely reliable components (F00F and FDIV bugs aside) -- the processor's op-codes -- and look how flaky most software is.

    Sure, you've got to start with reliable components, but you have to combine them in just the right way, too.
  • by russellh ( 547685 ) on Monday February 11, 2002 @03:26PM (#2988541) Homepage

    I'm so sick and tired of what the next 10 years will bring us.

    Right. I think the point is, though, to quote from the article:

    The target environment for Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.

    Managing data and applications on that scale with PCs today sucks. Data synchronization is a HUGE issue already. The question futurists ask is what must we change for that to be manageable?

  • by Hal-9001 ( 43188 ) on Monday February 11, 2002 @03:53PM (#2988704) Homepage Journal
    The potential pitfalls of 100,000 computers trying to access each other across the same network give me headaches just thinking about it.
    The number of machines on the network isn't the issue--an AC's tongue-in-cheek response to this comment pointed out that by that logic the Internet shouldn't work--but the bandwidth requirements and network architecture do matter. Gnutella's problem is that it requires a LOT of bandwidth and is easily bogged down by slow (i.e. modem) connections. A well-designed protocol and architecture (i.e. not a pre-alpha binary posted on the web for less than 24 hours ;-) ) would probably be up to the task. (A rough flooding-cost calculation follows the comments below.) Of course, knowing Microsoft, they'd probably ship a protocol and architecture that scales worse than Gnutella... :-p
  • by pmz ( 462998 ) on Monday February 11, 2002 @04:04PM (#2988801) Homepage
    What was the hype ten years ago? Twenty?? Then why am I still using UNIX??? And why is UNIX still the most powerful OS commonly used????

    I think there hasn't been a new idea widely used in computing since the '70s! What gives?
  • by mike_g ( 24445 ) on Monday February 11, 2002 @04:09PM (#2988828) Homepage
    How about getting rid of IRQ's on the PC platform!

    Perhaps I am misunderstanding, but you want to get rid of interrupts? Interrupts are a good thing; what we need to do is increase the number of them, not remove them. If I remember correctly, the PowerPC architecture has 64 hardware interrupts instead of the measly 16 on the x86 platform. We want more interrupts, not fewer.

    How about getting rid of drive letters in Windows/Dos and having mount points!

    I agree with this, though in the short term it would be a pain migrating existing users over: everyone would have to learn to use /mnt/floppy (or its equivalent) instead of a:. Some sort of symlinking could get around this, though. Mount points for hard drives are a great improvement.

    How about a better drive interface than the stupid IDE interface! (Macs did it right with SCSI, but now, to be "cheap", they use IDE too [sigh].)

    Oh, the great IDE vs. SCSI debate. I don't think that Macs support IDE to be "cheap"; I think they do it to be relatively competitive/affordable. For some reason unknown to me, SCSI drives are much more expensive than IDE drives. Looking at today's Pricewatch listings, I found that the cheapest $ per GB for SCSI was $3.85/GB for a 36.4GB drive, while on the IDE side you could get a 60GB drive for $1.37/GB. The cheapest SCSI is over 2.81 times the price of IDE per GB (the ratio is worked out after the comments below). Never mind that some SCSI drives ran over $10/GB. While I do realize that SCSI is superior to IDE (higher performance, less CPU utilization, more devices per controller), and I wouldn't use anything else in a server or workstation, is it really worth almost three times the price for the desktop? Most desktop uses (browsing the Internet, email, word processing, solitaire) would not even be noticeably improved by the increase in performance. For tasks such as these, IDE is more than adequate.

    What I would find interesting is a size/performance comparison between a $x SCSI drive and a $x IDE hardware RAID array.
  • by markmoss ( 301064 ) on Monday February 11, 2002 @04:13PM (#2988865)
    Because Unix wasn't designed as an OS that came with all the functions you would ever need (until next year, when you had to replace it), but rather as a framework for adding in whatever turned out to be needed.
  • by SirSlud ( 67381 ) on Monday February 11, 2002 @04:26PM (#2988943) Homepage
    >"For that you need a language that blurs the distinction between data and instructions"

    My point was that instructions are data. But I challenge you to illustrate that, in order to solve a problem, you can provide data that does not encompass the instructions. "My house is on fire" is data that will instruct people to run out of it, but only because they were previously programmed with a 'fire' trigger: escape when it's input into your system.

    So neither English nor C can go outside of its own contextual setting. English is just so much more complicated, with so many more possible branches of execution based on data, that it's difficult to compare the two without either belittling humanity or getting 1984ish about technology. C /can/ change itself via function pointers and, let's say, random data to throw on the execution stack (a small illustration follows the comments below). But brute force only works when you can test a result within the programmatic bounds of the inputted data, including instructions. I mean, really, humans are just wildly complex computers, which is why our data-exchange set is so much more advanced. :)

    "Why are we here?" has multiple answers, so you can really only validate successful self-programming if you already think you know what the answer is. And for that, you depend on previous data entry ... etc, etc, etc ..
  • by crovira ( 10242 ) on Monday February 11, 2002 @04:49PM (#2989078) Homepage
    And if you believe this piece of dross, read their predictions from ten years ago.

    'Nuff said.
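mblase's scalability worry above can at least be sized from the article's own figures (100,000 machines, 10 billion files, 10 PB of data); the per-machine numbers below just divide those totals, and the replica count is an assumption, not something the article states:

```c
#include <stdio.h>

int main(void)
{
    double machines = 100000;    /* figures quoted in the article */
    double files    = 10e9;      /* 10 billion files              */
    double bytes    = 10e15;     /* 10 PB (10,000 TB)             */
    int    replicas = 3;         /* assumed copies of each file   */

    printf("files per machine: %.0f\n", files / machines);           /* 100,000 */
    printf("data per machine:  %.0f GB\n", bytes / machines / 1e9);  /* 100 GB  */
    printf("with %d replicas:   %.0f GB\n",
           replicas, replicas * bytes / machines / 1e9);             /* 300 GB  */
    return 0;
}
```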
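Hal-9001's point that bandwidth, not node count, is the real problem can be quantified for Gnutella-style flooding: a query forwarded to d neighbours for t hops generates on the order of d^t messages. The neighbour count below is illustrative; the TTL of 7 was Gnutella's usual default:

```c
#include <stdio.h>

int main(void)
{
    int degree = 5;   /* neighbours per node (illustrative)   */
    int ttl    = 7;   /* hop limit; Gnutella's common default */

    long long reached = 0, frontier = 1;
    for (int hop = 1; hop <= ttl; hop++) {
        frontier *= degree;   /* every node re-broadcasts the query */
        reached  += frontier;
    }
    printf("messages per query: %lld\n", reached);   /* ~97,655 at d=5, t=7 */
    return 0;
}
```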
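The price ratio in mike_g's comment checks out; the per-GB figures below are just the ones quoted there (2002 Pricewatch prices), not independently verified:

```c
#include <stdio.h>

int main(void)
{
    double scsi_per_gb = 3.85;   /* cheapest SCSI quoted: 36.4 GB drive */
    double ide_per_gb  = 1.37;   /* cheapest IDE quoted: 60 GB drive    */

    printf("SCSI/IDE price ratio: %.2f\n", scsi_per_gb / ide_per_gb);  /* ~2.81 */
    printf("60 GB of IDE:  $%.2f\n", 60.0 * ide_per_gb);               /* ~$82  */
    printf("60 GB of SCSI: $%.2f\n", 60.0 * scsi_per_gb);              /* ~$231 */
    return 0;
}
```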
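SirSlud's aside that C /can/ change itself via function pointers is really about data selecting which code runs; here is a minimal C sketch of that idea, with the 'fire' trigger table invented purely for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Reactions the program was "previously programmed" with. */
static void run_outside(void) { puts("running out of the house"); }
static void carry_on(void)    { puts("carrying on as before");    }

struct trigger {
    const char *event;
    void (*react)(void);   /* the data picks which code runs */
};

static const struct trigger table[] = {
    { "my house is on fire", run_outside },
    { "it is raining",       carry_on    },
};

int main(void)
{
    const char *input = "my house is on fire";   /* incoming "data" */

    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(input, table[i].event) == 0)
            table[i].react();                    /* indirect call   */
    return 0;
}
```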
