Operating Systems Software Security

Europe Funds Secure Operating System Research 376

narramissic writes "A Dutch university has received a $3.3 million grant from the European Research Council to fund 5 more years of work on a Unix-type operating system, called Minix, that aims to be more reliable and secure than either Linux or Windows. The latest grant will enable the three researchers and two programmers on the project to further their research into making Minix capable of fixing itself when a bug is detected, said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit. 'It irritates me to no end when software doesn't work,' Tanenbaum said. 'Having to reboot your computer is just a pain. The question is, can you make a system that actually works very well?'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • MINIX guy (Score:5, Informative)

    by 4D6963 ( 933028 ) on Tuesday April 28, 2009 @05:34AM (#27743031)

    said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit

    It sounds intentionally misleading to present him as just "a computer science professor" when he's the one MINIX guy.

  • What's the point? (Score:3, Informative)

    by seeker_1us ( 1203072 ) on Tuesday April 28, 2009 @05:34AM (#27743033)

    All respect to Andrew Tanenbaum, I'm not trying to troll. It's a sincere question.

    He has said Minix was to be a teaching tool.

    Now they want to turn it into a super reliable OS?

    I don't think it's to make it into another production OS. Could it be in order to develop new OS concepts and ideas which can be spread out to the world?

  • Tanenbaum? (Score:1, Informative)

    by Norsefire ( 1494323 ) * on Tuesday April 28, 2009 @05:49AM (#27743117) Journal
    He's the guy that argued with Torvalds back in 1992, right? The one who claimed that "Linux is obsolete" and Torvalds should "[b]e thankful you are not my student. You would not get a high grade for such a design." (link) [google.com]

    Therefore, I'm not inclined to listen to anything he has to say about kernels/operating systems.
  • by Chrisq ( 894406 ) on Tuesday April 28, 2009 @05:54AM (#27743161)

    Software, heal thyself? There's a reason self-modifying code is frowned upon. Besides, is kernel reliability really an issue these days? Even the Windows kernel only really crashes when you feed it bad memory.

    They are actually talking about things like driver isolation with monitoring and restarts. The answer to whether kernels are stable enough depends on your requirements. I find that I am much less forgiving when my DVD player crashes and doesn't record the film I have set it to record than when my computer crashes, though both are now very rare events. Monitoring, isolation and restarting are used in things like engine management systems, where failures are even less welcome, and a full OS with this level of reliability is bound to have applications in medicine, industry, "defence", etc.
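
    The monitor-and-restart idea described above can be sketched in a few lines: a supervisor process (playing roughly the role of Minix 3's reincarnation server) runs a "driver" as a separate process and respawns it when it dies. This is an illustrative sketch, not Minix code; `driver_main` and `supervise` are invented names.

```python
# Sketch of driver supervision: run the driver in its own process so a
# crash is contained, then spawn a fresh copy. Purely illustrative.
import multiprocessing as mp

def driver_main(attempt):
    """Stand-in for a buggy driver: crashes on its first run, then behaves."""
    if attempt == 0:
        raise SystemExit(1)   # simulate a driver fault
    raise SystemExit(0)       # clean shutdown

def supervise(max_restarts=5):
    restarts = 0
    for attempt in range(max_restarts):
        p = mp.Process(target=driver_main, args=(attempt,))
        p.start()
        p.join()
        if p.exitcode == 0:
            break             # driver finished normally
        restarts += 1         # crashed: log it and start a fresh instance
    return restarts

if __name__ == "__main__":
    print(supervise())        # -> 1: one crash, one successful restart
```

    The point of the design is that the rest of the system never shares an address space with the driver, so the supervisor only ever sees a dead child process, never corrupted state.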

  • Re:What's the point? (Score:1, Informative)

    by Anonymous Coward on Tuesday April 28, 2009 @06:00AM (#27743209)

    He said that about the original Minix and somewhat less so v2. The current version, Minix 3, is a different beast.

  • Minix 3 source code (Score:4, Informative)

    by Jacques Chester ( 151652 ) on Tuesday April 28, 2009 @06:01AM (#27743217)

    I'd recommend people take a look at the source code for Minix 3. It's actually pretty easy to wrap your head around, even for a C-phobic person like me.

  • by Vanders ( 110092 ) on Tuesday April 28, 2009 @06:14AM (#27743303) Homepage

    The problem with driver isolation is that it's a layering violation on most of today's PC hardware.

    That depends on how you've designed things, I guess. "Today's PC hardware" (and yesterday's, for that matter) has always provided four protection ring levels, but very few OSes have ever made use of more than two (one for the kernel, one for userspace). You could certainly put drivers in a higher ring than the kernel and allow them only limited access to memory, just as you do with a user-space application.

  • Even more misleading (Score:5, Informative)

    by EmTeedee ( 948267 ) on Tuesday April 28, 2009 @06:21AM (#27743337) Journal
    ...is to call this news. The grant was received in November 2008! (see http://www.minix3.org/news/ [minix3.org])
  • by Jacques Chester ( 151652 ) on Tuesday April 28, 2009 @06:23AM (#27743347)

    The Singularity project at MSR looked at this problem in a different way. What if each piece of software carries a protocol specification? What services it will require, in what order?

    Then you can do various clever things involving proving that the system won't do anything malicious. If the software tries to do something outside of its specified protocol, then zappo, it's gone. This has the nice side effect that you don't need to rely on hardware memory protection, and therefore you don't have to pay for context switches. Singularity's process startup and kill times leave everyone else for dead.

    But Singularity only works because of language features and requires you to do everything in a conforming language (Spec#). Probably the most meaningful predecessor was Oberon.

    Minix has a better chance of working in the "real world" because it takes a less all-or-nothing approach to the problem. For instance, Minix3 is coded in C, which is fast but unsafe. But Minix supports a lot of POSIX and could conceivably add Linux emulation as a module, whereas Singularity requires you to rewrite everything to enjoy the guarantees.

    Tanenbaum makes the further point that no matter what you prove, software has bugs. If you isolate the bugs you reduce their cost. If you simplify recovery from failure you reduce their cost still further. A microkernel approach does just these things and so would presumably be more reliable on a per-line-of-code basis than a monolithic kernel.
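
    The isolation argument above can be made concrete with a toy model: if servers only interact through messages routed by a tiny kernel, a fault in one server is contained and the router can swap in a fresh instance. All names here (`Kernel`, `make_fs_server`) are invented for illustration and do not correspond to Minix source.

```python
# Toy message-passing "microkernel": servers are isolated handlers, and
# the router contains any server fault instead of crashing the system.

class Kernel:
    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        self.servers[name] = handler

    def send(self, name, msg):
        try:
            return self.servers[name](msg)
        except Exception:
            # Fault contained: replace the failed server with a fresh one
            self.servers[name] = make_fs_server()
            return ("EAGAIN", None)

def make_fs_server():
    files = {"/etc/motd": "hello"}
    def handle(msg):
        op, path = msg
        if op == "read":
            return ("OK", files[path])   # KeyError on a bad path = server bug
        raise ValueError("unknown op")
    return handle

kernel = Kernel()
kernel.register("fs", make_fs_server())
print(kernel.send("fs", ("read", "/no/such/file")))  # -> ('EAGAIN', None)
print(kernel.send("fs", ("read", "/etc/motd")))      # -> ('OK', 'hello')
```

    A monolithic design has no equivalent of the `except` branch: the same bug in an in-kernel filesystem would have corrupted or panicked the whole kernel.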

  • Re:What's the point? (Score:3, Informative)

    by slabbe ( 736852 ) on Tuesday April 28, 2009 @06:39AM (#27743441) Journal
    From www.minix3.org "MINIX 1 and 2 were intended as teaching tools; MINIX 3 adds the new goal of being usable as a serious system on resource-limited and embedded computers and for applications requiring high reliability"
  • Re:Wait a second... (Score:3, Informative)

    by Z00L00K ( 682162 ) on Tuesday April 28, 2009 @06:41AM (#27743447) Homepage Journal

    Minix did get a reputation for being unstable some 20 years ago, but of course much has happened since then.

    The more interesting thing is that Minix has a different architecture from Linux, using a microkernel. This is in some ways a good idea, even if it also has disadvantages.

  • by PhotoGuy ( 189467 ) on Tuesday April 28, 2009 @06:46AM (#27743461) Homepage

    I remember Minix. Before there was Linux, Minix was around. It was my first exposure to a Unix-like operating system on a PC. It was surprisingly lean and elegant and Unix-like. I still have the box of floppies. I remember recompiling and modifying the operating system. It was indeed quite a powerful tool, and I dare say an important precursor to Linux.

    (When I first heard about Linux, I had incorrectly assumed it was an evolution of Minix.)

    I see a lot of people bashing Minix here; I don't think it will replace Linux by any means, but it is an important historical OS, IMHO.

    Wiki notes (about Linux):

    In 1991 while attending the University of Helsinki, Torvalds began to work on a non-commercial replacement for MINIX,[13] which would eventually become the Linux kernel.

    Linux was dependent on the MINIX user space at first. With code from the GNU system freely available, it was advantageous if this could be used with the fledgling OS.

  • Re:What's the point? (Score:5, Informative)

    by EMN13 ( 11493 ) on Tuesday April 28, 2009 @06:53AM (#27743491) Homepage

    It's also a research OS - the aim isn't to make Minix the next big thing, the aim is to research self-healing OS software using Minix as a test platform.

    Most good production software takes a good look at similar software to imitate the best features of each - this isn't a competition between Minix and Linux, it's testing a feature in a simpler (and thus cheaper) fashion.

  • EU Bureaucracy... (Score:5, Informative)

    by js_sebastian ( 946118 ) on Tuesday April 28, 2009 @06:56AM (#27743511)

    The aim is not to produce a better operating system, the aim is to secure funding. This is what academics actually do; good research is (at best) a byproduct. This is business as usual for a research group.

    Not really. The purpose is doing the research you are interested in doing (even if it's just for your career ambitions). For that you need funding. So of course you have to do some marketing to sell the research you want to do to the people deciding whom to fund. You think this guy has been doing MINIX for 20 years just to get funding? It's the other way around: you get funding to be independent and have people work for you, so you can get some interesting stuff done. Or, if you are more cynical, he's working on MINIX because it generated enough interest that he could get a ton of publications out of it.

    The real research will be a low priority, because the group will need to satisfy the EU bureaucracy that they are doing something worthwhile. Consequently, most of their time will be spent writing reports.

    From my experience this is a bit of an exaggeration. It's true that EU-funded projects have more strings attached than those from many other funding sources, but running the bureaucracy/reports/financials for an EU project that is funding 3 full-time people at our university still takes only a rather small percentage of my time.

    And that's a lot more freedom to do real research than in any company environment I've seen or heard of so far. Big companies (even the good ones) have, IMHO, more bureaucracy, not less, and a short-term horizon (they want returns in 3, or at most 5, years), which means very little of what is called "research and development" has anything to do with research.

  • How about not (Score:3, Informative)

    by Viol8 ( 599362 ) on Tuesday April 28, 2009 @07:22AM (#27743627) Homepage

    A number of issues I can see:

    - A bug in the VM could affect EVERY driver on the system
    - Drivers generally need to respond to hardware interrupts and send out data to hardware in real time. That's unlikely to happen if it's managed code.
    - A VM/JIT system would only catch memory issues. It wouldn't catch bad logic or instructions that make the hardware go nuts and crash the machine anyway.

  • Re:What's the point? (Score:1, Informative)

    by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Tuesday April 28, 2009 @07:27AM (#27743665)

    This is no troll. Linus himself said that his biggest error with Linux was making it monolithic.
    I agree with that. Modularity (in multiple dimensions too; think "aspects") is nearly always a good thing.
    Sure, it takes a bit of the speed out, but it is well worth it.

  • Re:Wait a second... (Score:3, Informative)

    by Anonymous Coward on Tuesday April 28, 2009 @07:42AM (#27743781)

    Try OpenVMS, a considerably more secure operating system than any Unix variant.

    OpenBSD is relatively bug free, but that only makes it superficially more secure than more popular, usable, operating systems. As a basic example, virtually every application not audited by the OpenBSD team themselves opens a potential attack vector. That's true of most operating systems. But VMS at least had the advantage of a locked down privilege system that made it much harder for a hole in an application to create a space where user files, let alone system files, were suddenly attackable.

    And, yeah, I'm aware you mentioned the possibility of running OpenVMS on the desktop. DEC made a few "desktop" VAXes and Alphas in their time, and DECWindows was the user interface.

  • Re:So? (Score:2, Informative)

    by zevans ( 101778 ) <zacktesting.googlemail@com> on Tuesday April 28, 2009 @07:52AM (#27743849)

    It's interesting to a good number of people here, especially those with six-figure or shorter UIDs, for historical reasons. Pity the summary doesn't mention those reasons AT ALL.

    Minix came Before Linux (yes, there is such an era) and the Minix and Gnu communities encouraged one another in the same way that Linux and FOSS cross-fertilise now.

  • Re:What's the point? (Score:5, Informative)

    by irexe ( 567524 ) on Tuesday April 28, 2009 @08:03AM (#27743937)
    I asked Tanenbaum this question at a lecture he gave on Minix 3 earlier this year. He responded that he changed his mind somewhat about the education-only issue because he felt that, to prove a point about the superiority of the microkernel design, you need to get it out of the lab and into the real world. He also felt that he could do this without hurting the simplicity of the system as a teaching tool. Incidentally, his intention is not to compete with Linux or Windows on the desktop, but rather to make a robust OS for embedded applications.
  • by Chrisq ( 894406 ) on Tuesday April 28, 2009 @08:40AM (#27744195)
    Basically, a microkernel architecture splits subsystems such as file systems, device drivers and security out of the kernel and into separate modules. This leads to an overhead of context switching between processes on a single processor. A user process requesting access to a file may need a context switch to the kernel, another to security, another to the filesystem and then another to the disk device driver. With multiple processors this overhead can be removed.
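
    The chain of switches in that file-access example can be tallied with a toy model. The hop lists below just mirror the comment's example; they are illustrative counts, not measurements of Minix.

```python
# Counting protection-domain crossings for one read(): a monolithic kernel
# traps in and returns, while a microkernel forwards the request between
# isolated servers, each hop costing a context switch.

MONOLITHIC = ["user -> kernel", "kernel -> user"]

MICROKERNEL = [
    "user -> kernel",            # syscall trap
    "kernel -> security",        # permission check server
    "security -> filesystem",    # filesystem server handles the request
    "filesystem -> driver",      # disk device driver does the I/O
    "driver -> user",            # reply delivered with the data
]

def overhead(path):
    return len(path)  # one context switch per domain crossing

print(overhead(MICROKERNEL) - overhead(MONOLITHIC))  # -> 3 extra switches
```

    On a multiprocessor the servers can run on their own cores with requests queued between them, which is why the comment argues the per-request switching cost can be hidden.
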
  • Re:Wait a second... (Score:5, Informative)

    by pasamio ( 737659 ) on Tuesday April 28, 2009 @08:41AM (#27744207) Homepage

    Andy said at LCA2007 it was a 30% hit; I don't see a 30% performance hit as being 'slightly' slower.

  • Re:Wait a second... (Score:3, Informative)

    by gnapster ( 1401889 ) on Tuesday April 28, 2009 @09:04AM (#27744443)

    The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.

    That is exactly the point of academic research. Toy systems that introduce new concepts are rarely used widely, but the concepts are borrowed for use in other systems later on.

  • Re:Wait a second... (Score:2, Informative)

    by gnapster ( 1401889 ) on Tuesday April 28, 2009 @09:33AM (#27744715)

    It may well be that this group is "starting with Minix" because that's what they know best. I have not looked into how much of the code for Minix 3 is shared with prior versions. But Tanenbaum et al. know it inside out, so for them it is probably the best sandbox for these new ideas. They may already have done some work, and that may have been part of their argument in the funding proposal.

    My hero is G. H. Hardy, the number theorist who loved his field because it had no practical application. He would never have guessed that his concepts would be vital for public-key encryption and other things which are used by millions of people every day.

  • Re:Wait a second... (Score:3, Informative)

    by V!NCENT ( 1105021 ) on Tuesday April 28, 2009 @09:33AM (#27744717)

    30% hit compared to what? Compared to itself if it wasn't a Microkernel?

    Remember that the microkernel has only 4000 lines of code. Remember that on Linux the graphics drivers are also in userspace, in X11, on top of the shell that is on top of the Linux kernel.

    It sure as hell shouldn't be any slower than Linux...
