
Europe Funds Secure Operating System Research

narramissic writes "A Dutch university has received a $3.3 million grant from the European Research Council to fund 5 more years of work on a Unix-type operating system, called Minix, that aims to be more reliable and secure than either Linux or Windows. The latest grant will enable the three researchers and two programmers on the project to further their research into making Minix capable of fixing itself when a bug is detected, said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit. 'It irritates me to no end when software doesn't work,' Tanenbaum said. 'Having to reboot your computer is just a pain. The question is, can you make a system that actually works very well?'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Tuesday April 28, 2009 @04:30AM (#27743003)

    I thought Windows was secure. Why not use that? *cough* *cough*

    • by 4D6963 ( 933028 ) on Tuesday April 28, 2009 @04:32AM (#27743011)

      I thought Windows was secure. Why not use that? *cough* *cough*

      I thought OpenBSD was secure. Why not use that?

      • Re: (Score:3, Funny)

        by Anonymous Coward

        I thought Minix was dead for some 15 years....

        • Re: (Score:2, Insightful)

          by Anonymous Coward
          It is. This is just some researchers grabbing grant money. Nothing will come from this.
        • I thought Minix was dead for some 15 years....

          Did netcraft confirm it?

        • Re: (Score:3, Informative)

          by Z00L00K ( 682162 )

          Minix did get a reputation for being unstable some 20 years ago, but of course much has happened since then.

          The more interesting thing is that Minix has a different architecture from Linux: it uses a microkernel. This is in some ways a good idea, even if it also has disadvantages.

          • Why don't we all just use HURD, then?

            • by entgod ( 998805 )
              You mean like how we all use linux on the desktop? ;)

              Kidding, of course. Almost everyone DOES use Linux on the desktop, don't they? :)
              • Re: (Score:3, Interesting)

                by V!NCENT ( 1105021 )

                That would take a loooooong time. First Minix needs to reach a 'gold/stable' release. Then there are X11, the Gallium/Nouveau and open-source ATI drivers. Then we are going to need sound support, a port of GNOME and/or KDE 4.8 :'), and sound-card and network drivers.

                By that time DNF will probably have been released for Windows NT 7.0, and Wine will have kept up with Windows 7 to run it...

          • by Fred_A ( 10934 ) <fred@ f r e dshome.org> on Tuesday April 28, 2009 @07:05AM (#27743949) Homepage

            Minix did get a reputation for being unstable some 20 years ago, but of course much has happened since then.

            The one thing that hasn't changed, though, is that Minix is still just a toy system that's meant to be poked at in schools and that nobody actually uses (yes, I know about the 3 rabid Minix users; they probably run AmigaOS too).
            Oh, wait, now it finally supports X11 (woohoo!). Wait, has it got a mouse driver too?

            However, Minix3 *does* feature support for "Over 650 UNIX programs [minix3.org]" (such as man, mkdir and ps). *650*! It's like 130 × 5! Think about it!

            Granted, starting from a small-scale system such as Minix is certainly simpler than with a much more mainstream OS such as one of the BSDs or Linux, but even if anything comes out of the project, it won't ever gain even "niche" status. More people must be running Plan9 or Inferno.
            The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.

            • Re: (Score:3, Interesting)

              by xaxa ( 988988 )

              The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.

              After reading the summary, I expect the whole idea is that the concepts will be reused in another system later on.

            • Re: (Score:3, Informative)

              by gnapster ( 1401889 )

              The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.

              That is exactly the point of academic research. Toy systems that introduce new concepts are rarely used widely, but the concepts are borrowed for use in other systems later on.

          • by Antique Geekmeister ( 740220 ) on Tuesday April 28, 2009 @07:39AM (#27744187)

            Yes, most developers moved to Linux and stopped writing that pesky, unstable software that anyone actually uses.

            Keeping a kernel that is 10 years behind the leading edge in file systems or communications, especially by kicking it all out of the kernel and saying "Naah-naah-naah! Not my problem!!!!", is like having a very secure car that doesn't have a reverse gear, seats, or door handles. It certainly helps contribute to stability. But the associated software to handle USB, FireWire, packet filtering, or network file systems just isn't up to speed.

            • by AVee ( 557523 ) <slashdot AT avee DOT org> on Tuesday April 28, 2009 @11:15AM (#27746843) Homepage
              That kind of car is actually built regularly by most car manufacturers. The amount of money spent on those cars is often in the same ballpark, or even more. They call them concept cars, and they generally also only explore certain aspects of cars while happily ignoring others.

              That is not going to be your car for daily use. Minix probably isn't going to be your daily OS anytime soon either, but that's no reason not to spend research money on it. The IT industry could do with some more proper research instead of just reinventing the same wheels (but this time using XML and HTTP!) all the time.
        • Re: (Score:3, Insightful)

          by Burnhard ( 1031106 )
          I hacked a new memory manager into Minix in a Systems Programming class at university back in 1996. I'm quite literally apathetic with incredulity that the EU are funding further development. Why not get undergrads to do it for free?
        • Re: (Score:3, Funny)

          by DickeyWayne ( 581479 )

          I thought Minix was dead for some 15 years....

          No, *Linux* is dead. Those monolithic kernels are just "one big mess!"

      • Re:Wait a second... (Score:5, Interesting)

        by xouumalperxe ( 815707 ) on Tuesday April 28, 2009 @05:05AM (#27743239)

        I guess the idea is less about creating an all-around well-built system that's pretty secure in practice, and more about creating something that, even if it might have implementation bugs today, is fundamentally, conceptually more secure.

        • Re: (Score:2, Insightful)

          by Jurily ( 900488 )

          more about creating something that, even if it might have implementation bugs today, is fundamentally, conceptually more secure.

          So they're dropping C?

          • Re:Wait a second... (Score:5, Interesting)

            by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Tuesday April 28, 2009 @06:15AM (#27743599)

            That was my thought too. If you want to do it right, why not program it in Haskell in the first place? Sure, it might be a little bit slower (not even by much, actually). But if you are going for security, that's not that important anyway.

            Now, how they will solve the PEBKAC problem, whether they end up with a TCPA-like system (in the originally intended sense of protecting the user, not protecting against the user), and what they will do about tricks like remotely reading computer input, the inevitability of programming errors, and BIOS viruses, is a completely different question.

          • Re:Wait a second... (Score:5, Interesting)

            by mustafap ( 452510 ) on Tuesday April 28, 2009 @06:55AM (#27743875) Homepage

            If you don't understand security, it won't matter what language you write in; it will still be crap.

          • by xouumalperxe ( 815707 ) on Tuesday April 28, 2009 @07:20AM (#27744039)

            Dropping C... for what exactly? We're not talking application level security. We're talking kernel level. That means talking to the bare metal. Even if you implement a microkernel with userspace modules for everything, and with those modules written in something more robust than C, that last crucial bit of code that is the microkernel itself is probably going to end up being written in C with ASM snippets, simply because at some point you need to explicitly state what the hardware is doing.
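
            (To make that concrete, here is the sort of raw hardware poke a microkernel cannot avoid. This is a generic sketch assuming GCC-style inline assembly on x86, not code from any particular kernel; the outb wrapper below is illustrative only.)

                #include <stdint.h>

                /* Write one byte to an x86 I/O port. Hardware access like this
                   sits below whatever language-level safety the rest of the
                   system enjoys. */
                static inline void outb(uint16_t port, uint8_t value)
                {
                    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
                }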

            • Re:Wait a second... (Score:5, Interesting)

              by Cyberax ( 705495 ) on Tuesday April 28, 2009 @09:01AM (#27745057)

              Dropping C is possible.

              For example, Coyotos (http://www.coyotos.org/) uses BitC and aims for a completely proven kernel, i.e. it will be formally proven that its microkernel CAN'T crash or do something wrong.

              Or look at QNX: their microkernel used to be something like 12 KB of hand-written assembly code (and so stable that QNX systems literally run for decades now without reboots). The rest can be done using tools other than plain C.

              • Re: (Score:3, Interesting)

                How is hand-coded assembly a move to a "more secure language" (whatever that means) than C (which is what I was replying to)? Is that not precisely the job for which compiled languages were created?

                Regarding CoyotOS and BitC, those are quite interesting references, thank you. It might be a stillbirth, though, since one of the lead guys is leaving the BitCC team. Either way, one could argue that coming up with your own low-level language to develop your own secure operating system is pretty much the only way

                • Re: (Score:3, Interesting)

                  by Cyberax ( 705495 )

                  Assembly can be more secure because it doesn't depend on a compiler :)

                  In any case, 12 KB of asm/C code is a vanishingly small quantity for modern operating systems. For most purposes, 12 KB is the same as 'none'.

                  "How intrinsically secure is the languange, in and of itself? What does it have that makes it special?"

                  It allows you to maintain _invariants_, checking them automatically, including very complex invariants expressed as theorems.

                  Formal correctness checking is not feasible for large programs, but a forma
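
                  (For a very small flavour of a machine-checked invariant in plain C, far weaker than the theorem-level invariants BitC aims at, consider a C11 compile-time assertion; the struct below is purely illustrative.)

                      #include <stdint.h>

                      /* Compile-time invariant: the message header must stay exactly
                         16 bytes. BitC-style systems aim to prove far richer invariants
                         about runtime behaviour, not just data layout. */
                      struct header { uint32_t type, length, src, dst; };
                      _Static_assert(sizeof(struct header) == 16, "header layout changed");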

                  • Re: (Score:3, Funny)

                    Well, I think the key point here is what we understand as secure. "Secure" is "easy" to define in terms of a system, but, to me, seems a remarkably nebulous concept when applied to a language. While it's very easy to screw up in C, that isn't a matter of "barbed wire and armed security guards", but rather "flying trapeze and safety nets".

      • by c0p0n ( 770852 )

        I don't see how the parent is funny. OpenBSD is quite possibly the most secure OS around. At least for an OS that you can use for both server and desktop.

        • Re: (Score:3, Informative)

          by Anonymous Coward

          Try OpenVMS, a considerably more secure operating system than any Unix variant.

          OpenBSD is relatively bug free, but that only makes it superficially more secure than more popular, usable, operating systems. As a basic example, virtually every application not audited by the OpenBSD team themselves opens a potential attack vector. That's true of most operating systems. But VMS at least had the advantage of a locked down privilege system that made it much harder for a hole in an application to create a space wh

    • Re:Wait a second... (Score:5, Interesting)

      by Jacques Chester ( 151652 ) on Tuesday April 28, 2009 @05:32AM (#27743403)

      The sad thing about Windows NT is that the design was pretty good, the implementation was OK, but the default security policy is totally useless. Hooray for backwards compatibility.

  • by oneirophrenos ( 1500619 ) on Tuesday April 28, 2009 @04:32AM (#27743009)

    The question is, can you make a system that actually works very well?

    I'm glad someone finally got to asking this question.

    • Re: (Score:3, Interesting)

      by u38cg ( 607297 )
      You can. It just requires well-defined inputs and outputs and running on certified hardware. Software, heal thyself? There's a reason self-modifying code is frowned upon. Besides, is kernel reliability really an issue these days? Even the Windows kernel only really crashes when you feed it bad memory.
      • by Chrisq ( 894406 ) on Tuesday April 28, 2009 @04:54AM (#27743161)

        Software, heal thyself? There's a reason self-modifying code is frowned upon. Besides, is kernel reliability really an issue these days? Even the Windows kernel only really crashes when you feed it bad memory.

        They are actually talking about things like driver isolation with monitoring and restarts. The answer to whether kernels are stable enough depends on your requirements. I find that I am much less forgiving when my DVD player crashes and doesn't record the film I have set than when my computer crashes, though both are now very rare events. Monitoring, isolation and restarting are used in things like engine management systems, where failures are even less welcome, and a full OS with this level of reliability is bound to have applications in medicine, industry, "defence", etc.
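
        As a minimal sketch of that monitor-and-restart idea, using ordinary POSIX processes rather than Minix's actual reincarnation server (the "./driver" path is a hypothetical placeholder binary):

            /* Supervisor loop: relaunch the driver process whenever it dies. */
            #include <stdio.h>
            #include <sys/wait.h>
            #include <unistd.h>

            int main(void)
            {
                for (;;) {
                    pid_t pid = fork();
                    if (pid == 0) {
                        execl("./driver", "driver", (char *)NULL);  /* hypothetical binary */
                        _exit(1);                                   /* exec failed */
                    }
                    int status;
                    waitpid(pid, &status, 0);   /* block until it exits or crashes */
                    fprintf(stderr, "driver died (status %d), restarting\n", status);
                    sleep(1);                   /* avoid a tight restart loop */
                }
            }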

        • by Jurily ( 900488 )

          The answer to whether kernels are stable enough depends on your requirements.

          If the Linux kernel is not stable enough, you'd better roll your own because you obviously know better.

          Monitoring, isolation and restarting is used in things like engine management systems, where failures are even less welcome and a full OS with this level of reliability is bound to have applications in medicine, industry, "defence", etc.

          Linux does just the opposite. They test driver reliability before they release it. Seems to be working so far.

          And if you need something that goes down less often than the power grid, I suggest multiple computers in multiple locations.

    • Sometimes (Score:2, Interesting)

      I have been trying to answer that question for more than 40 years, and I can say the answer is: sometimes. The trouble is you need lots of money (i.e. man hours + very good kit + a very well defined problem + lots of testing); unfortunately, experience shows that when you get all of that, the system is obsolete by the time you hand it over to the user. It's better to aim for good enough.
  • by Viol8 ( 599362 ) on Tuesday April 28, 2009 @04:33AM (#27743017) Homepage

    .. they want their funding back.

    Seriously, I thought Minix had been put out to pasture years ago.

    Also, what are 5 people going to manage that entire corporations and thousands of OSS developers have failed to do in the last few decades? OK, one of them might be the next Alan Turing and surprise us all, but I won't hold my breath.

    • by FourthAge ( 1377519 ) on Tuesday April 28, 2009 @04:41AM (#27743069) Journal

      The aim is not to produce a better operating system, the aim is to secure funding. This is what academics actually do; good research is (at best) a byproduct. This is business as usual for a research group. The real research will be a low priority, because the group will need to satisfy the EU bureaucracy that they are doing something worthwhile. Consequently, most of their time will be spent writing reports.

      Bear in mind that ideas like "self-healing software" are buzzwords that you put on research proposals in order to get them accepted. See also: "cyber-physical systems", "multicore paradigms" and "sensor networks".

      • HEY ... "sensor networks" is cool,
        the rest ... is just used mouthwash
        • I second that. There are actual sensor networks out there, made of many, many little nodes, so robust that you can scatter them from an airplane and leave them there for months or more. They self-network and send you their data back when you fly over them again. If this does not impress you, then I don't know what will.

      • EU Bureaucracy... (Score:5, Informative)

        by js_sebastian ( 946118 ) on Tuesday April 28, 2009 @05:56AM (#27743511)

        The aim is not to produce a better operating system, the aim is to secure funding. This is what academics actually do; good research is (at best) a byproduct. This is business as usual for a research group.

        Not really. The purpose is doing the research you are interested in doing (even if it's just for your career ambitions). For that you need funding, so of course you have to do some marketing to sell the research you want to do to the people deciding whom to fund. You think this guy has been doing MINIX for 20 years just to get funding? It's the other way around: you get funding to be independent and have people work for you, so you can get some interesting stuff done. Or, if you are more cynical, he's working on MINIX because it generated enough interest that he could get a ton of publications out of it.

        The real research will be a low priority, because the group will need to satisfy the EU bureaucracy that they are doing something worthwhile. Consequently, most of their time will be spent writing reports.

        In my experience this is a bit of an exaggeration. It's true that EU-funded projects have more strings attached than those from many other funding sources, but running the bureaucracy/reports/financials for an EU project that is funding 3 full-time people at our university still only takes a rather small percentage of my time.

        And that's a lot more freedom to do real research than in any company environment I've seen or heard of so far. Big companies (even the good ones) have, IMHO, more bureaucracy, not less, and a short-term horizon (they want returns in 3, or at most 5, years), which means very little of what is called "research and development" has anything to do with research.

    • by Zumbs ( 1241138 ) on Tuesday April 28, 2009 @04:42AM (#27743077) Homepage
      The point may not be to build the next big $SUPER_DUPER_OS, but to try out some new ideas and concepts for better and more robust OSs in a very controlled environment. If they get good results, the ideas may be integrated into the kernels of those other OSs, hopefully improving their quality.
    • by VoidCrow ( 836595 ) on Tuesday April 28, 2009 @04:57AM (#27743185)
      That tendency of unimaginative geeks to piss all over ideas that aren't actually in front of them and in use at that point... It's loathsome and saddening.
      • What ideas? (Score:2, Insightful)

        by Viol8 ( 599362 )

          All I can see is some buzzwords and waffling about microkernels - a 1970s/80s concept if ever there was one, which so far has proved less than impressive in the real world.

    • by PhotoGuy ( 189467 ) on Tuesday April 28, 2009 @05:46AM (#27743461) Homepage

      I remember Minix. Before there was Linux, Minix was around. It was my first exposure to a Unix-like operating system on a PC. It was surprisingly lean and elegant and Unix-like. I still have the box of floppies. I remember recompiling and modifying the operating system. It was indeed quite a powerful tool, and I dare say an important precursor to Linux.

      (When I first heard about Linux, I had incorrectly assumed it was an evolution of Minix.)

      I see a lot of people bashing Minix here; I don't think it will replace Linux by any means, but it is an important historical OS, IMHO.

      Wiki notes (about Linux):

      In 1991 while attending the University of Helsinki, Torvalds began to work on a non-commercial replacement for MINIX,[13] which would eventually become the Linux kernel.

      Linux was dependent on the MINIX user space at first. With code from the GNU system freely available, it was advantageous if this could be used with the fledgling OS.

  • MINIX guy (Score:5, Informative)

    by 4D6963 ( 933028 ) on Tuesday April 28, 2009 @04:34AM (#27743031)

    said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit

    It sounds intentionally misleading to present him as "a computer science professor" when he's the MINIX guy.

  • What's the point? (Score:3, Informative)

    by seeker_1us ( 1203072 ) on Tuesday April 28, 2009 @04:34AM (#27743033)

    All respect to Andrew Tanenbaum, I'm not trying to troll. It's a sincere question.

    He has said Minix was to be a teaching tool.

    Now they want to turn it into a super reliable OS?

    I don't think it's to make it into another production OS. Could it be in order to develop new OS concepts and ideas which can be spread out to the world?

    • by MrMr ( 219533 ) on Tuesday April 28, 2009 @04:56AM (#27743179)
      Yes, imagine that: A professor trying to teach students how to implement something new and potentially useful rather than clicking ok in the 'solve my problem' wizard.
    • Re:What's the point? (Score:5, Interesting)

      by MichaelSmith ( 789609 ) on Tuesday April 28, 2009 @05:32AM (#27743405) Homepage Journal
      Back when Linus started to write his kernel, the debate between monolithic and micro kernels still made some sense. But now more features and drivers are being written for Linux and it is getting bigger and more bloated. Functions are being put into modules, but that only solves half of the problem, because a module can still bring down the kernel.

      I think AST was right. Linux can't continue to use a monolithic architecture.
    • Re: (Score:3, Informative)

      by slabbe ( 736852 )
      From www.minix3.org "MINIX 1 and 2 were intended as teaching tools; MINIX 3 adds the new goal of being usable as a serious system on resource-limited and embedded computers and for applications requiring high reliability"
    • Re:What's the point? (Score:5, Informative)

      by EMN13 ( 11493 ) on Tuesday April 28, 2009 @05:53AM (#27743491) Homepage

      It's also a research OS - the aim isn't to make Minix the next big thing, the aim is to research self-healing OS software by using Minix as a test platform.

      Most good production software takes a good look at similar software to imitate the best features of each - this isn't a competition between Minix and Linux, it's testing a feature in a simpler (and thus cheaper) fashion.

    • Re:What's the point? (Score:5, Informative)

      by irexe ( 567524 ) on Tuesday April 28, 2009 @07:03AM (#27743937)
      I asked Tanenbaum this question at a lecture he gave on Minix 3 earlier this year. He responded that he changed his mind somewhat about the education-only issue because he felt that, to prove a point about the superiority of the microkernel design, you need to get it out of the lab and into the real world. He also felt that he could do this without hurting the simplicity of the system as a teaching tool. Incidentally, his intention is not to compete with Linux or Windows on the desktop, but rather to make a robust OS for embedded applications.
  • I don't see this taking off to be honest. Minix was always a research toy. There is too much momentum in Linux. But what it might do is spur some ideas that get incorporated into the likes of Linux or BSD etc.
  • A self-repairing OS? (Score:3, Interesting)

    by cpghost ( 719344 ) on Tuesday April 28, 2009 @04:50AM (#27743121) Homepage
    Actually, it's not such a bad idea. The concept of putting important components in user-space has been around for a while, and it still has potential w.r.t. reliability. But the real question is: are only microkernel architectures capable of self-healing?
    • by Jacques Chester ( 151652 ) on Tuesday April 28, 2009 @05:31AM (#27743397)

      No, but dividing things into smaller pieces makes it easier to fix those pieces in isolation. It's easier for a microkernel system to be self-healing because of that isolation.

      This is not an amazing revelation. We've known about the idea of isolating changes since the invention of the sub-routine. The reason microkernels have always been relegated to second-best is that they require more context switching than a regular monolithic kernel. The tradeoff between "fast enough" and "reliable enough" has for some time now favoured "fast enough".

      But that's changing -- people's computers are getting plenty fast. The 10-15% slowdown Tanenbaum claims for Minix3 is less of a drag than, say, an anti-virus program and could serve to more effectively prevent viruses into the bargain.

      People who say microkernels are passé forget that our industry is not set in stone. Priorities change and technologies change with them. In the last 10 years, performance has become progressively less important than reducing bugs or speed of development. Microkernels have lots to offer in such a world.
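
      To make the context-switch point concrete, here is a toy user-space model (plain POSIX, not Minix code) of what a microkernel turns a simple request into: a message to a driver process and a reply back, each hop crossing a process boundary:

          #include <stdio.h>
          #include <sys/socket.h>
          #include <sys/wait.h>
          #include <unistd.h>

          struct msg { int op; char payload[64]; };

          int main(void)
          {
              int sv[2];
              socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   /* client <-> "driver" channel */

              if (fork() == 0) {                         /* "driver" server process */
                  struct msg m;
                  read(sv[1], &m, sizeof m);             /* receive request */
                  snprintf(m.payload, sizeof m.payload, "handled op %d", m.op);
                  write(sv[1], &m, sizeof m);            /* send reply */
                  _exit(0);
              }

              struct msg m = { .op = 42 };               /* client side */
              write(sv[0], &m, sizeof m);                /* request: one switch */
              read(sv[0], &m, sizeof m);                 /* reply: another switch */
              printf("%s\n", m.payload);
              wait(NULL);
              return 0;
          }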

  • by fishexe ( 168879 ) on Tuesday April 28, 2009 @04:50AM (#27743129) Homepage

    Now that Minix 3 is here, Linus can take his monolithic kernel and stuff it! Microkernels are the wave of the future, man!

  • According to the professor, it should soon make Linux obsolete [dina.kvl.dk].

    Phillip.

    • by fishexe ( 168879 ) on Tuesday April 28, 2009 @05:08AM (#27743257) Homepage

      "Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."

      Man, remember back in '96 when we all got SPARCstations? Those were the days.

    • Hahaha. I'm completely new to this debate (yeah, I know - what a n00b!). Has Tanenbaum ever withdrawn his arguments in the light of experience? Has he ever thrown up his hands and said "You know, I was just plain wrong. Mea culpa."?

      Anyone who remembers the climate in microcomputers at that time can kind of appreciate where he was coming from but the landscape has changed so much (if you'll allow me a little metaphor-mixing) since then that most of his points have either been soundly refuted or shown to b

      • Hahaha. I'm completely new to this debate (yeah, I know - what a n00b!). Has Tanenbaum ever withdrawn his arguments in the light of experience? Has he ever thrown up his hands and said "You know, I was just plain wrong. Mea culpa."?

        Anyone who remembers the climate in microcomputers at that time can kind of appreciate where he was coming from but the landscape has changed so much (if you'll allow me a little metaphor-mixing) since then that most of his points have either been soundly refuted or shown to be overly cautious/conservative.

        Since the landscape has changed, AST can hardly be said to have been wrong at the time. But anyway, the landscape is changing towards lightweight embedded systems. Linux is a better fit in that environment than Vista, but a smaller, more modular kernel would be an even better fit.

      • Re: (Score:3, Insightful)

        by AVee ( 557523 )

        Hahaha. I'm completely new to this debate (yeah, I know - what a n00b !). Has Tanenbaum ever withdrawn his arguments in the light of experience ? Has he ever thrown up his hands and said "You know, I was just plain wrong. Mea culpa." ?

        No, why should he? Because Linux is more popular than Minix? I'd guess most people here should start sending mea culpas to Microsoft...

  • Minix 3 source code (Score:4, Informative)

    by Jacques Chester ( 151652 ) on Tuesday April 28, 2009 @05:01AM (#27743217)

    I'd recommend people take a look at the source code for Minix 3. It's actually pretty easy to wrap your head around, even for a C-phobic person like me.

  • by Opportunist ( 166417 ) on Tuesday April 28, 2009 @05:06AM (#27743243)

    The other is user security. And you cannot solve that problem with technology.

    The circle you have to square here is that the user/admin should be allowed and able to run his software, but at the same time he must not run harmful software. Now, how do you plan to implement that? Either he can run arbitrary software, in which case you cannot identify security risks before it is too late, or he cannot run software that is a potential security risk, and then he is no longer the master, owner and root of his own machine.

    Oh, you want a system where the user can generally do his work but has to ask for special privileges when he wants to install new software or change security critical settings? Where have I heard 'bout that before... hmmm...

    • Re: (Score:3, Informative)

      The Singularity project at MSR looked at this problem in a different way. What if each piece of software carries a protocol specification? What services it will require, in what order?

      Then you can do various clever things involving proving that the system won't do anything malicious. If the software tries to do something outside of its specified protocol, then zappo, it's gone. This has the nice side effect that you don't need to rely on hardware memory protection and therefore you don't have to pay context
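
      A toy illustration of the manifest idea, not Singularity's actual mechanism (the operation names below are made up): the program declares up front which operations it may perform, and anything outside that list is rejected.

          #include <stdbool.h>
          #include <stdio.h>
          #include <string.h>

          /* Operations this program declares up front that it may perform. */
          static const char *manifest[] = { "open", "read", "close" };

          static bool permitted(const char *op)
          {
              for (size_t i = 0; i < sizeof manifest / sizeof *manifest; i++)
                  if (strcmp(manifest[i], op) == 0)
                      return true;
              return false;
          }

          int main(void)
          {
              const char *attempts[] = { "open", "read", "unlink" };
              for (size_t i = 0; i < 3; i++)
                  printf("%s: %s\n", attempts[i],
                         permitted(attempts[i]) ? "allowed" : "rejected");
              return 0;
          }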

  • by account_deleted ( 4530225 ) * on Tuesday April 28, 2009 @05:12AM (#27743297)
    Comment removed based on user account deletion
  • by udippel ( 562132 )

    This is what I thought when I read the post. It really smells as if the poster, narramissic, had not been around when microkernels and Minix were fashionable. And neither was the person who allowed it to show up on Slashdot.

    Let's call the Minix discussion flogging a dead horse until these chaps have come up with something real, something that comes close to the beauty that the idea of microkernels has on paper.

  • by ei4anb ( 625481 ) on Tuesday April 28, 2009 @05:18AM (#27743323)
    I remember submitting some patches to them many years ago, when I got Minix working in less than one megabyte of RAM (at the time Minix required 1 MB and up), and thinking that it would be nice if it were GPL and if I had the time...
    As I recall, some guy in Finland did have the time
  • I was just thinking recently about Microsoft's Singularity research operating system written in C#, which is cute, but somewhat useless in the real world. One big advantage of statically verifiable byte-code languages like C# in operating systems, though, is security, because you can verify that a block of code is safe once and then run it at full speed without further access checks. That's almost impossible with generic C or assembler, but tractable with bytecode-based languages like Java or C#.

    While a *

    • How about not (Score:3, Informative)

      by Viol8 ( 599362 )

      A number of issues I can see:

      - A bug in the VM could affect EVERY driver on the system
      - Drivers generally need to respond to hardware interrupts and send out data to hardware in real time. That's unlikely to happen if it's managed code.
      - A VM/JIT system would only catch memory issues. It wouldn't catch bad logic or instructions that make the hardware go nuts and crash the machine anyway.

    • Re: (Score:3, Interesting)

      by dido ( 9125 )

      The folks at Bell Labs who invented Unix and Plan 9 have been doing all that and more since the mid-1990s with Inferno [vitanuova.com]. The core kernel is pure C and includes a bytecode interpreter for the Dis virtual machine, which almost all userspace code runs on, allowing it to run code safely even on CPUs that don't have hardware memory protection. Add to that a neat C-like programming language called Limbo that natively supports primitives inspired by C.A.R. Hoare's Communicating Sequential Processes, full support fo

  • by master_p ( 608214 ) on Tuesday April 28, 2009 @06:11AM (#27743581)

    The real reason there is no security, and that we have the monolithic vs. microkernel debate at all, is that CPUs provide process isolation and not component isolation. Within a process, CPUs do not provide any sort of component isolation. If they did, then we would not have this discussion.

    I once asked Tanenbaum (via email, he was kind enough to reply) why CPUs do not have in-process module isolation. He replied:

    From: Andy Tanenbaum [ast@cs.vu.nl]
    Sent: Friday, 1 February 2008, 4:00 PM
    To:
    Subject: Re: The debate monolithic vs micro kernels would not exist if CPUs
    supported in-process modules.

    I think redesigning CPUs is going to be a pretty tough sell.

    Andy Tanenbaum

    But why? I disagree with that for two reasons:

    1) the flat address space need not be sacrificed. All that is required is a paging system extension that defines the component a page belongs to. The CPU can check inter-component access in the background. No change in the current software will be required. The only extra step would be to isolate components within a process, by setting the appropriate paging system extensions.

    2) The extension would require minimal CPU space, and CPU designers already have great experience with such designs (TLBs, etc.). Money has been invested in less important problems (hardware sound, for example), so why not in in-process components? It would be very cheap, actually.

    Of course, the lack of security is not only due to the lack of in-process component isolation, but this would be a big step in the right direction...
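
    A hypothetical C model of point 1 above (the field names and the check are invented purely for illustration; in the proposal the check would be done by the MMU in hardware, not in software):

        #include <stdbool.h>
        #include <stdint.h>

        struct pte {
            uint64_t frame;          /* physical frame number                 */
            uint16_t component_id;   /* proposed extension: owning component  */
            bool     writable;
        };

        /* Modelled in C here; the proposal is that hardware performs this
           check in the background on every access. */
        static bool access_allowed(const struct pte *pte,
                                   uint16_t current_component, bool write)
        {
            if (pte->component_id != current_component)
                return false;        /* cross-component access would trap */
            return !write || pte->writable;
        }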

  • It doesn't seem a lot to me...
