Europe Funds Secure Operating System Research
narramissic writes "A Dutch university has received a $3.3 million grant from the European Research Council to fund 5 more years of work on a Unix-type operating system, called Minix, that aims to be more reliable and secure than either Linux or Windows. The latest grant will enable the three researchers and two programmers on the project to further their research into making Minix capable of fixing itself when a bug is detected, said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit. 'It irritates me to no end when software doesn't work,' Tanenbaum said. 'Having to reboot your computer is just a pain. The question is, can you make a system that actually works very well?'"
Wait a second... (Score:5, Funny)
I thought Windows was secure. Why not use that? *cough* *cough*
Re:Wait a second... (Score:4, Insightful)
I thought Windows was secure. Why not use that? *cough* *cough*
I thought OpenBSD was secure. Why not use that?
Re: (Score:3, Funny)
I thought Minix was dead for some 15 years....
Re: (Score:2, Insightful)
Re:Wait a second... (Score:5, Insightful)
Why would you think Minix was dead? (Score:5, Funny)
I thought Minix was dead for some 15 years....
Did netcraft confirm it?
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
Minix did get a reputation for being unstable some 20 years ago, but of course much has happened since then.
The more interesting thing is that Minix has a different architecture than Linux: it uses a microkernel. This is in some ways a good idea, even if it also has disadvantages.
Re: (Score:2)
Why don't we all just use HURD, then?
Re: (Score:2)
Kidding, of course. Just about everyone DOES use Linux on the desktop, don't they?
Re: (Score:3, Interesting)
That would take a loooooong time. First Minix needs to reach a 'gold/stable' release. Then there are X11, the Gallium Nouveau and open-source ATI drivers. Then we are going to need sound support, a port of GNOME and/or KDE 4.8 :') and sound card and network drivers.
By that time DNF will probably have been released for Windows NT 7.0, and Wine will have kept up with Windows 7 to run it...
Re:Wait a second... (Score:5, Insightful)
Minix did get a reputation for being unstable some 20 years ago, but of course much has happened since then.
The one thing that hasn't changed though is that Minix is still just a toy system that's meant to be poked at in schools and that nobody actually uses (yes I know about the 3 rabid Minix users, they probably run AmigaOS too).
Oh, wait, now it finally supports X11 (woohoo!). Wait, has it got a mouse driver too?
However Minix3 *does* feature support for "Over 650 UNIX programs [minix3.org]" (such as man, mkdir and ps). *650*! It's like 130 × 5! Think about it!
Granted, starting from a small scale system such as Minix is certainly simpler than with a much more mainstream OS such as one of the BSDs or Linux but even if anything comes out of the project, it won't ever gain even "niche" status. More people must be running Plan9 or Inferno.
The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.
Re: (Score:3, Interesting)
The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.
After reading the summary, I expect the whole idea is that the concepts will be reused in another system later on.
Re: (Score:3, Informative)
The whole idea is utterly futile, except possibly if the code or the concepts can be reused with another system later on.
That is exactly the point of academic research. Toy systems that introduce new concepts are rarely used widely, but the concepts are borrowed for use in other systems later on.
Re: (Score:3, Insightful)
Of course, Tanenbaum is also partly responsible for the creation of Linux. Torvalds would regularly engage in heated debate regarding Minix's non-monolithic architecture.
I read those as they unfolded.
It's true that Tanenbaum is in part responsible for the creation of Linux, but only because at the time (I think it was available then) Minix was the only option on a PC, and nobody wanted to run that. Tanenbaum failed at creating something decent, so a better system was called for. He may have whined for all he was worth later on, but his system is still ignored (and although I, like many others, read and appreciated his book, nobody cares about Minix; it's a toy).
I ran Linux on my own
Re:Wait a second... (Score:5, Insightful)
Yes, most developers moved to Linux and stopped writing that pesky, unstable software that anyone actually uses.
Keeping a kernel that is 10 years behind the leading edge in file systems or communications, especially by kicking it all out of the kernel and saying "Naah-naah-naah! Not my problem!!!!" is like having a very secure car that doesn't have a reverse gear, seats, or door handles. It certainly helps contribute to stability. But the associated software to handle USB, firewire, packet filtering, or network file systems just isn't up to speed.
Re:Wait a second... (Score:5, Insightful)
That's not going to be your car for daily use. Minix probably isn't going to be your daily OS anytime soon either, but that's no reason not to spend research money on it. The IT industry could do with some more proper research instead of just reinventing the same wheels (but this time using XML and HTTP!) all the time.
Re: (Score:3, Insightful)
Re: (Score:3, Funny)
I thought Minix was dead for some 15 years....
No, *Linux* is dead. Those monolithic kernels are just "one big mess!"
Re:Wait a second... (Score:5, Interesting)
I guess the idea is less about creating an all-around well-built system that's pretty secure in practice, and more about creating something that, even if it might have implementation bugs today, is fundamentally, conceptually more secure.
Re: (Score:2, Insightful)
more about creating something that, even if it might have implementation bugs today, is fundamentally, conceptually more secure.
So they're dropping C?
Re:Wait a second... (Score:5, Interesting)
That was my thought too. If you want to do it right, why not program it in Haskell in the first place? Sure, it might be a little bit slower (not even by much, actually). But if you go for security, that's not that important anyway.
Now how they will solve the PEBKAC problem, if they end up with a TCPA-like system (in the originally intended sense of protecting the user, not protecting against the user), and what they will do against tricks like remotely reading computer input, the inevitability of programming errors, and BIOS viruses, is a completely different question.
Re:Wait a second... (Score:5, Interesting)
If you don't understand security, it won't matter what language you write in; it will still be crap.
Re:Wait a second... (Score:4, Interesting)
I'll say this, like I always say it: there is no magic bullet when it comes to security. Even an operating system written from the ground up around security like OpenBSD can be configured incorrectly. Even an operating system written from the ground up around security can have security bugs.
OpenBSD was not written securely from the ground up. It was secured from an inherited codebase over a long, long time. And they have witnessed, time after time, how they combed over the source code for a specific class of bugs, cleaned it, and two versions later the same bug appeared from upstream because the programmer did not fully grok the API he was using.
Just google for strlcpy().
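To make the bug class concrete, here's a minimal sketch of my own (not OpenBSD code; strlcpy is in the BSD libc's <string.h>, and on glibc you'd need libbsd or your own copy):

/* strcpy() trusts the caller about buffer sizes; strlcpy() does not. */
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *src = "a string longer than the destination buffer";
    char dst[16];

    /* strcpy(dst, src) would write past the end of dst: the classic overrun */

    /* strlcpy never writes more than sizeof(dst) bytes and always
       NUL-terminates; it returns the length it *tried* to copy,
       so truncation is detectable */
    if (strlcpy(dst, src, sizeof(dst)) >= sizeof(dst))
        fprintf(stderr, "truncated to: %s\n", dst);
    return 0;
}

The point of the comment above is that even with a safe API like this available, upstream code written against the old API keeps reintroducing the bug.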
Re:Wait a second... (Score:5, Insightful)
Dropping C... for what exactly? We're not talking application level security. We're talking kernel level. That means talking to the bare metal. Even if you implement a microkernel with userspace modules for everything, and with those modules written in something more robust than C, that last crucial bit of code that is the microkernel itself is probably going to end up being written in C with ASM snippets, simply because at some point you need to explicitly state what the hardware is doing.
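To illustrate what that "last crucial bit" looks like, here's a toy GCC-style x86 sketch (mine, not taken from Minix or any real kernel): x86 port I/O has no C construct, so even a kernel mostly written in something safer needs stubs like these somewhere, plus ring-0 privileges to execute them.

/* Toy example: talking to hardware eventually means emitting specific
   instructions, here x86 `outb`/`inb`, which plain C cannot express. */
static inline void outb(unsigned short port, unsigned char val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline unsigned char inb(unsigned short port)
{
    unsigned char val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}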
Re:Wait a second... (Score:5, Interesting)
Dropping C is possible.
For example, Coyotos (http://www.coyotos.org/) uses BitC and aims for a completely proved kernel, i.e. it will be formally proven that its microkernel CAN'T crash or do something wrong.
Or look at QNX: their microkernel used to be something like 12 KB of hand-written assembly code (and so stable that QNX systems literally run for decades now without reboots). The rest can be done using tools other than plain C.
Re: (Score:3, Interesting)
How is hand-coded assembly a move to a "more secure language" (whatever that means) than C (which is what I was replying to)? Is that not precisely the job for which compiled languages were created?
Regarding Coyotos and BitC, those are quite interesting references, thank you. It might be a stillbirth, though, since one of the lead guys is leaving the BitCC team. Either way, one could argue that coming up with your own low-level language to develop your own secure operating system is pretty much the only way
Re: (Score:3, Interesting)
Assembly can be more secure because it doesn't depend on a compiler :)
In any case, 12 KB of asm/C code is a vanishingly small quantity for modern operating systems. For most purposes 12 KB is the same as 'none'.
"How intrinsically secure is the languange, in and of itself? What does it have that makes it special?"
It allows you to maintain _invariants_, checking them automatically. Including very complex invariants expressed as theorems.
Formal correctness checking is not feasible for large programs, but a forma
Re: (Score:3, Funny)
Well, I think the key point here is what we understand as secure. "Secure" is "easy" to define in terms of a system, but, to me, seems a remarkably nebulous concept when applied to a language. While it's very easy to screw up in C, that isn't a matter of "barbed wire and armed security guards", but rather "flying trapeze and safety nets".
Re:Wait a second... (Score:5, Insightful)
And with almost everything going to interpreter environments today (Python, Ruby, Java, .Net), there's a better argument for building a JIT as a kernel component, and the message-passing overhead is less of an issue.
Let me get this right: after stating that the advantage of a microkernel lies in its much smaller size in LOCs, you just argued that adding a JIT compiler to the microkernel itself is a good idea?
Part of the idea behind a microkernel is that you only need to prove correctness for a small amount of code. The other part is that, when you want to add features, you only need to prove the features you want work correctly. So, instead of proving that each driver works correctly (which, for most environments where this stuff really matters, only needs to be done for a "handful" of drivers), you just upped the ante and said "prove the whole JIT compiler works correctly". And the "message passing overhead" pales in comparison with a poorly-optimized JITC, which is what you get if you want to keep TLOC count low.
Re: (Score:3, Interesting)
You don't need a JIT compiler or an interpreted language to have a secure kernel - you just need a well-designed, type-safe language (which C is not). You can start, for example, from Haskell, as these guys [pdx.edu] are doing. Haskell is a compiled language, with minimal boxing and, thus, gives all the speed you want without the idiocy of buffer overruns and invalid pointer references. Its performance is within a couple of percent of C.
Re: (Score:3, Insightful)
Anything else that compiles to native opcodes? It's not like C is the only magical language capable of talking to hardware.
C is obviously not magically endowed with some special abilities. But since that was an answer to someone who wanted to replace C with something more secure, the question is: "what language that is naturally more secure than C would you suggest, then?"
Besides the obvious practical question of "give me an actual language that's actually more secure than C", there's the more theoretical question of "what the hell does it mean for a language to be secure?" A programming language is only an abstraction on top o
Re: (Score:3)
Anyway, are there real-world kernels that don't use C ?
Yes. [utah.edu]
Re: (Score:2)
I don't see how the parent is funny. OpenBSD is quite possibly the most secure OS around. At least for an OS that you can use for both server and desktop.
Re: (Score:3, Informative)
Try OpenVMS, a considerably more secure operating system than any Unix variant.
OpenBSD is relatively bug-free, but that only makes it superficially more secure than more popular, usable operating systems. As a basic example, virtually every application not audited by the OpenBSD team themselves opens a potential attack vector. That's true of most operating systems. But VMS at least had the advantage of a locked-down privilege system that made it much harder for a hole in an application to create a space wh
Re: (Score:3, Funny)
I thought Windows was secure. Why not use that? *cough* *cough*
I thought OpenBSD was secure. Why not use that?
I thought DOS was secure. Why not use that?
I thought stone tablets were secure. Why not use them?
Re: (Score:3, Funny)
Re:Wait a second... (Score:5, Interesting)
The sad thing about Windows NT is that the design was pretty good, the implementation was OK, but the default security policy is totally useless. Hooray for backwards compatibility.
Re:Wait a second... (Score:5, Informative)
Andy said at LCA2007 it was a 30% hit; I don't see a 30% performance hit being 'slightly' slower.
Re: (Score:3, Informative)
30% hit compared to what? Compared to itself if it wasn't a Microkernel?
Remember that the microkernel has only 4000 lines of code. Remember that on Linux the graphics drivers are also in userspace, in X11, on top of the shell that is on top of the Linux kernel.
It sure as hell shouldn't be any slower than Linux...
Re: (Score:3, Funny)
Andy said at LCA2007 it was a 30% hit; I don't see a 30% performance hit being 'slightly' slower.
Yeah. Moore says [1] you'd have to wait an extra six months for hardware to catch up.
[1] Don't get all pedantic on me. I know what he really said.
Re: (Score:3, Insightful)
The reason Minix is supposedly better than Windows or Linux is that it has a microkernel, so it is harder for anything to kill or confuse the kernel
What runs on a microkernel? Services. And if you exploit a highly privileged service, you've exploited the whole system. Or what am I missing?
A very good question (Score:4, Insightful)
The question is, can you make a system that actually works very well?
I'm glad someone finally got to asking this question.
Re: (Score:3, Interesting)
Re:A very good question (Score:5, Informative)
Software, heal thyself? There's a reason self-modifying code is frowned upon. Besides, is kernel reliability really an issue these days? Even the Windows kernel only really crashes when you feed it bad memory.
They are actually talking about things like driver isolation with monitoring and restarts. The answer to whether kernels are stable enough depends on your requirements. I find that I am much less forgiving when my DVD player crashes and doesn't record the film I have set than when my computer crashes, though both are now very rare events. Monitoring, isolation and restarting is used in things like engine management systems, where failures are even less welcome and a full OS with this level of reliability is bound to have applications in medicine, industry, "defence", etc.
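For the flavor of it, here's a minimal userspace sketch of the monitor-and-restart idea (my own illustration; Minix 3's real mechanism is its reincarnation server, and "./fictional-driver" is a made-up name):

/* Fork a "driver" process, wait for it to die, restart it. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("./fictional-driver", "fictional-driver", (char *)NULL);
            _exit(127);                /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);      /* block until the driver dies */
        fprintf(stderr, "driver exited (status %d), restarting\n", status);
        sleep(1);                      /* crude back-off before restarting */
    }
}

The real thing also has to rebuild the driver's state and re-deliver pending requests, which is where the research effort goes; the restart loop itself is the easy part.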
Re: (Score:2)
The answer to whether kernels are stable enough depends on your requirements.
If the Linux kernel is not stable enough, you'd better roll your own because you obviously know better.
Monitoring, isolation and restarting is used in things like engine management systems, where failures are even less welcome and a full OS with this level of reliability is bound to have applications in medicine, industry, "defence", etc.
Linux does just the opposite. They test driver reliability before they release it. Seems to be working so far.
And if you need something that goes down less than the power grid, I suggest multiple computers on multiple locations.
Re:A very good question (Score:5, Informative)
That depends on how you've designed things, I guess. "Today's PC hardware" (and yesterday's, for that matter) has always provided 4 protection ring levels, but very few OSes have ever made use of more than 2 (one for the kernel, one for userspace). You could certainly put drivers in a higher ring than the kernel and allow them only limited access to memory, just as you do with a user-space application.
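A userspace analogue of that "limited access to memory" point (a hedged POSIX sketch; real ring-based isolation happens in the CPU, not via these calls):

/* Revoke write permission on a page; any write then faults, the same
   effect a kernel could impose on a fenced-off driver. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    strcpy(page, "driver state");

    /* fence the page off: read-only from here on */
    mprotect(page, 4096, PROT_READ);

    printf("still readable: %s\n", page);
    /* page[0] = 'X';   <- uncommenting this faults immediately */
    return 0;
}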
Sometimes (Score:2, Interesting)
Re: (Score:2, Funny)
10 print "no"
20 goto 10
The 1980s called... (Score:5, Insightful)
.. they want their funding back.
Seriously, I thought Minix had been put out to pasture years ago.
Also, what are 5 people going to manage that entire corporations and thousands of OSS developers failed to do in the last few decades? OK, one of them might be the next Alan Turing and surprise us all, but I won't hold my breath.
Re:The 1980s called... (Score:5, Insightful)
The aim is not to produce a better operating system, the aim is to secure funding. This is what academics actually do; good research is (at best) a byproduct. This is business as usual for a research group. The real research will be a low priority, because the group will need to satisfy the EU bureaucracy that they are doing something worthwhile. Consequently, most of their time will be spent writing reports.
Bear in mind that ideas like "self healing software" are buzzwords that you put on research proposals in order to get them accepted. See also: "cyber-physical systems", "multicore paradigms" and "sensor networks".
Re: (Score:2)
the rest
Re: (Score:2)
I second that. There are actual sensor networks out there, made of many, many little nodes, so robust that you can spread them from an airplane and leave them there for months or more. They self-network and send their data back when you fly over them again. If this does not impress you, then I don't know what will.
EU Bureaucracy... (Score:5, Informative)
The aim is not to produce a better operating system, the aim is to secure funding. This is what academics actually do; good research is (at best) a byproduct. This is business as usual for a research group.
Not really. The purpose is doing the research you are interested in doing (even if it's just for your career ambitions). For that you need funding. So of course you have to do some marketing to sell the research you want to do to the people deciding whom to fund. You think this guy has been doing MINIX for 20 years just to get funding? It's the other way around: you get funding to be independent and have people work for you, so you can get some interesting stuff done. Or, if you are more cynical, he's working on MINIX because it generated enough interest that he could get a ton of publications out of it.
The real research will be a low priority, because the group will need to satisfy the EU bureaucracy that they are doing something worthwhile. Consequently, most of their time will be spent writing reports.
From my experience this is a bit of an exaggeration. It's true that EU-funded projects have more strings attached than those from many other funding sources, but running the bureaucracy/reports/financials for an EU project that is funding 3 full-time people at our university still takes only a rather small percentage of my time.
And that's a lot more freedom to do real research than in any company environment I've seen or heard of so far. Big companies (even the good ones) have IMHO more bureaucracy, not less, and a short-term horizon (they want returns in 3, 5 years at the most), which means very little of what is called "research and development" has anything to do with research.
Re:The 1980s called... (Score:5, Insightful)
Re:The 1980s called... (Score:5, Insightful)
What ideas? (Score:2, Insightful)
All I can see is some buzzwords and waffling about microkernels, a 1970s/80s concept if ever there was one, which so far has proved less than impressive in the real world.
Re:The 1980s called... (Score:5, Informative)
I remember Minix. Before there was Linux, Minix was around. It was my first exposure to a Unix-like operating system on a PC. It was surprisingly lean and elegant and Unix-like. I still have the box of floppies. I remember recompiling and modifying the operating system. It was indeed quite a powerful tool, and I dare say an important precursor to Linux.
(When I first heard about Linux, I had incorrectly assumed it was an evolution of Minix.)
I see a lot of people bashing Minix here; I don't think it will replace Linux by any means, but it is an important historical OS, IMHO.
Wiki notes (about Linux):
Re:The 1980s called... (Score:5, Funny)
No no no, your assumption was correct!
Re:The 1980s called... (Score:5, Insightful)
Along the same lines as the above post... What a waste of my taxes. I am getting fed up with hearing about cash going to dubious research projects. There are some big problems to be solved out there, for example reducing man's dependence on fossil fuels and reducing the damage they cause our planet. Why are we wasting cash on this dubious project?
Many PhD students will feed what they learned back into industry on graduation. It's called education, and it is not a waste of money even if Minix 3 is not the next best OS. Some things that come out of it will almost certainly be used.
MINIX guy (Score:5, Informative)
said Andrew S. Tanenbaum, a computer science professor at Vrije Universiteit
It sounds intentionally misleading to present him as "a computer science professor" when he's the one MINIX guy.
Even more misleading (Score:5, Informative)
Re: (Score:2)
I agree.
Just to put things in the right context here is a link to the famous Tanenbaum-Torvalds debate.
http://oreilly.com/catalog/opensources/book/appa.html [oreilly.com]
Eh, come on!
Every self-respecting geek already read it 10 years ago, and it's not like Tanenbaum never did anything else but that flame war.
Minix 3 is a very interesting open-source OS, and I can only be happy it has received some funding and wish the project the best of luck.
What's the point? (Score:3, Informative)
All respect to Andrew Tanenbaum, I'm not trying to troll. It's a sincere question.
He has said Minix was to be a teaching tool.
Now they want to turn it into a super reliable OS?
I don't think it's to make it into another production OS. Could it be in order to develop new OS concepts and ideas which can be spread out to the world?
Re:What's the point? (Score:5, Insightful)
Re:What's the point? (Score:5, Interesting)
I think AST was right. Linux can't continue to use a monolithic architecture.
Re:What's the point? (Score:5, Insightful)
[citation needed]
All these years after the Tanenbaum-Torvalds debate, Linus admitted his prof was right? You'd think that would have been in the news somewhere.
Re: (Score:3, Informative)
Re:What's the point? (Score:5, Informative)
It's also a research OS - the aim isn't to make Minix the next best thing, the aim is to research self-healing OS software by using Minix as a test platform.
Most good production software takes a good look at similar software to imitate the best features of each - this isn't a competition between Minix and Linux, it's testing a feature in a simpler (and thus cheaper) fashion.
Re:What's the point? (Score:5, Informative)
Sounds like an idealist (Score:2)
A self-repairing OS? (Score:3, Interesting)
Re:A self-repairing OS? (Score:5, Insightful)
No, but dividing things into smaller pieces makes it easier to fix those pieces in isolation. It's easier for a microkernel system to be self-healing because of that isolation.
This is not an amazing revelation. We've known about the idea of isolating changes since the invention of the subroutine. The reason microkernels have always been relegated to second-best is that they require more context switching than a regular monolithic kernel. The tradeoff between "fast enough" and "reliable enough" has for some time now favoured "fast enough".
But that's changing -- people's computers are getting plenty fast. The 10-15% slowdown Tanenbaum claims for Minix3 is less of a drag than, say, an anti-virus program and could serve to more effectively prevent viruses into the bargain.
People who say microkernels are passé forget our industry is not set in stone. Priorities change and technologies change with them. In the last 10 years performance has become progressively less important than reducing bugs or speed of development. Microkernels have lots to offer in such a world.
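To get a feel for the overhead in question (a rough userspace analogue I cooked up, not a real microkernel benchmark): a parent and child ping-ponging one byte over pipes pay per round trip roughly the kind of context-switch cost a microkernel pays on every server call.

/* Every round trip forces at least two context switches.  The absolute
   number varies wildly by machine; only the shape of the cost matters. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void) {
    int p2c[2], c2p[2];
    char b = 'x';
    pipe(p2c); pipe(c2p);

    if (fork() == 0) {                       /* child: echo server */
        for (;;) {
            if (read(p2c[0], &b, 1) != 1) _exit(0);
            write(c2p[1], &b, 1);
        }
    }

    const int rounds = 100000;
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < rounds; i++) {       /* parent: client */
        write(p2c[1], &b, 1);
        read(c2p[0], &b, 1);
    }
    gettimeofday(&t1, NULL);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.2f us per round trip\n", us / rounds);
    return 0;
}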
Linux is Obsolete! (Score:5, Funny)
Now that Minix 3 is here, Linus can take his monolithic kernel and stuff it! Microkernels are the wave of the future, man!
Linux is obsolete (Score:2)
According to the professor, it should soon make Linux obsolete [dina.kvl.dk].
Phillip.
Re:Linux is obsolete (Score:4, Funny)
"Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."
Man, remember back in '96 when we all got SPARCstations? Those were the days.
Re: (Score:2)
Hahaha. I'm completely new to this debate (yeah, I know - what a n00b!). Has Tanenbaum ever withdrawn his arguments in the light of experience? Has he ever thrown up his hands and said "You know, I was just plain wrong. Mea culpa."?
Anyone who remembers the climate in microcomputers at that time can kind of appreciate where he was coming from, but the landscape has changed so much (if you'll allow me a little metaphor-mixing) since then that most of his points have either been soundly refuted or shown to b
Re: (Score:2)
Hahaha. I'm completely new to this debate (yeah, I know - what a n00b!). Has Tanenbaum ever withdrawn his arguments in the light of experience? Has he ever thrown up his hands and said "You know, I was just plain wrong. Mea culpa."?
Anyone who remembers the climate in microcomputers at that time can kind of appreciate where he was coming from, but the landscape has changed so much (if you'll allow me a little metaphor-mixing) since then that most of his points have either been soundly refuted or shown to be overly cautious/conservative.
Since the landscape has changed, AST can hardly be said to have been wrong at the time. But anyway, the landscape is changing towards lightweight embedded systems. Linux is a better fit in that environment than Vista, but a smaller, more modular kernel would be an even better fit.
Re: (Score:3, Insightful)
Hahaha. I'm completely new to this debate (yeah, I know - what a n00b!). Has Tanenbaum ever withdrawn his arguments in the light of experience? Has he ever thrown up his hands and said "You know, I was just plain wrong. Mea culpa."?
No, why should he? Because Linux is more popular than Minix? I'd guess most people here should start sending mea culpas to Microsoft...
Minix 3 source code (Score:4, Informative)
I'd recommend people take a look at the source code for Minix 3. It's actually pretty easy to wrap your head around, even for a C-phobic person like me.
System security is only half the battle (Score:3, Insightful)
The other half is user security. And you cannot solve that problem with technology.
The circle you have to square here is that the user/admin should be allowed and able to run his software, but at the same time he must not run harmful software. Now, how do you plan to implement that? Either he can run arbitrary software, in which case you cannot identify security risks before it is too late, or he cannot run software that is a potential security risk, in which case he is no longer the master, owner and root of his own machine.
Oh, you want a system where the user can generally do his work but has to ask for special privileges when he wants to install new software or change security critical settings? Where have I heard 'bout that before... hmmm...
Re: (Score:3, Informative)
The Singularity project at MSR looked at this problem in a different way. What if each piece of software carries a protocol specification? What services it will require, in what order?
Then you can do various clever things involving proving that the system won't do anything malicious. If the software tries to do something outside of its specified protocol, then zappo, it's gone. This has the nice side effect that you don't need to rely on hardware memory protection and therefore you don't have to pay context
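A toy rendering of that idea (Singularity actually checks channel contracts statically in Sing#; this sketch of mine just checks a declared protocol at run time, which loses most of the benefit but shows the shape):

/* The declared protocol: OPEN, then any number of READs, then CLOSE.
   Anything outside the protocol -> zapped. */
#include <stdio.h>
#include <stdlib.h>

enum op { OPEN, READ, CLOSE };
enum state { START, OPENED, DONE };

static enum state step(enum state s, enum op o) {
    if (s == START  && o == OPEN)  return OPENED;
    if (s == OPENED && o == READ)  return OPENED;
    if (s == OPENED && o == CLOSE) return DONE;
    fprintf(stderr, "out of protocol: zapped\n");
    exit(1);
}

int main(void) {
    enum state s = START;
    enum op trace[] = { OPEN, READ, READ, CLOSE };   /* conforming trace */
    for (size_t i = 0; i < sizeof trace / sizeof *trace; i++)
        s = step(s, trace[i]);
    puts("trace conformed to its declared protocol");
    return 0;
}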
Comment removed (Score:5, Funny)
I'm really getting old (Score:2, Interesting)
This is what I thought when I read the post. It really smells as if the poster, narramissic, had not been around when microkernels and Minix were fashionable. And neither had the editor who allowed it to show up on Slashdot.
Let's call the Minix discussion flogging a dead horse until these chaps have come up with something real, something close to the beauty the idea of microkernels has on paper.
perhaps their work will inspire (Score:5, Interesting)
As I recall some guy in Finland did have the time
How about JIT in the Kernel? (Score:2)
I was just thinking recently about Microsoft's Singularity research operating system written in C#, which is cute but somewhat useless in the real world. One big advantage of statically verifiable bytecode languages like C# in operating systems, though, is security, because you can ensure a block of code is secure once and then run it at full speed without further access checks. That's almost impossible with generic C or assembler, but tractable with bytecode-based languages like Java or C#.
While a *
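Shrunk down to a toy (my own sketch, nowhere near what the JVM or CLR verifiers actually do), "verify once, then run at full speed" looks something like this:

/* A stack-machine verifier: walk the bytecode before execution and
   reject anything that could underflow the operand stack or uses an
   unknown opcode.  Real verifiers also track types and branches. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static int verify(const unsigned char *code, int len) {
    int depth = 0;
    for (int pc = 0; pc < len; pc++) {
        switch (code[pc]) {
        case OP_PUSH:  pc++; depth++; break;      /* operand byte follows */
        case OP_ADD:   if (depth < 2) return 0; depth--; break;
        case OP_PRINT: if (depth < 1) return 0; depth--; break;
        case OP_HALT:  return 1;
        default:       return 0;                  /* unknown opcode */
        }
    }
    return 0;                                     /* fell off the end */
}

int main(void) {
    unsigned char good[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    unsigned char bad[]  = { OP_ADD, OP_HALT };   /* would underflow */
    printf("good verifies: %d\n", verify(good, (int)sizeof good));
    printf("bad verifies:  %d\n", verify(bad, (int)sizeof bad));
    return 0;
}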
How about not (Score:3, Informative)
A number of issues I can see:
- A bug in the VM could affect EVERY driver on the system
- Drivers generally need to respond to hardware interrupts and send out data to hardware in real time. That's unlikely to happen if it's managed code.
- A VM/JIT system would only catch memory issues. It wouldn't catch bad logic or instructions that make the hardware go nuts and crash the machine anyway.
Re: (Score:3, Interesting)
The folks at Bell Labs who invented Unix and Plan 9 have been doing all that and more since the mid-1990s with Inferno [vitanuova.com]. The core kernel is pure C and contains a bytecode interpreter for the Dis virtual machine, on which almost all userspace code runs, allowing it to run code safely even on CPUs that don't have hardware memory protection. Add to that a neat C-like programming language called Limbo that natively supports primitives inspired by C.A.R. Hoare's Communicating Sequential Processes, full support fo
Doesn't anybody think the hardware is the problem? (Score:5, Interesting)
The real reason there is no security, and that we have the monolithic vs. microkernel debate, is that CPUs provide process isolation and not component isolation. Within a process, CPUs do not provide any sort of component isolation. If they did, then we would not have this discussion.
I once asked Tanenbaum (via email, he was kind enough to reply) why CPUs do not have in-process module isolation. He replied:
From: Andy Tanenbaum [ast@cs.vu.nl]
Sent: Friday, 1 February 2008, 4:00 PM
To:
Subject: Re: The debate monolithic vs micro kernels would not exist if CPUs supported in-process modules.
I think redesigning CPUs is going to be a pretty tough sell.
Andy Tanenbaum
But why? I disagree with that for two reasons:
1) The flat address space need not be sacrificed. All that is required is a paging system extension that defines the component a page belongs to. The CPU can check inter-component access in the background. No change in the current software will be required. The only extra step would be to isolate components within a process by setting the appropriate paging system extensions. (See the sketch below.)
2) The extension will require minimal CPU space, and CPU designers already have great experience with such designs (TLBs, etc). Money has been invested for less important problems (hardware sound, for example), so why not for in-process components? It will be very cheap, actually.
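Here's a software model of what I mean (purely illustrative; no real CPU has component-tagged pages, and the page size and table here are arbitrary):

/* Every page in a flat address space carries the ID of the component
   that owns it; the "hardware" faults on any access where the running
   component's ID doesn't match the page's tag. */
#include <stdio.h>

#define PAGES 8

static int page_owner[PAGES];   /* the proposed per-page component tag */

static int access_ok(int running_component, unsigned addr) {
    unsigned page = addr / 4096;
    if (page >= PAGES) return 0;
    return page_owner[page] == running_component;
}

int main(void) {
    page_owner[0] = 1;          /* pages 0-1 belong to component 1 */
    page_owner[1] = 1;
    page_owner[2] = 2;          /* page 2 belongs to component 2 */

    printf("comp 1 -> own page:    %s\n", access_ok(1, 0x0100) ? "ok" : "fault");
    printf("comp 1 -> comp 2 page: %s\n", access_ok(1, 0x2100) ? "ok" : "fault");
    return 0;
}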
Of course, security is not only due to the lack of in-process component isolation, but it's a big step in the right direction...
$3.3 million for 5 years? (Score:2)
It doesn't seem a lot to me...
Re:Oh gawd , not microkernels again *yawn* (Score:5, Insightful)
How many times is this old chestnut going to be tossed around?
MS tried a microkernel with NT and its HAL. It didn't really work very well. Most Unix variants don't even bother to try.
I think you are right at the moment. I am not sure that you will still be right when processors are 256-core or greater. I think that at some point the overhead of microkernels will be made up for by utilisation of greater parallelisation.
Re: (Score:3, Informative)
In related news, Linux missed the desktop chance (Score:2)
I think it's very interesting that if you go RTFA (yeah, I'm new here), you can read the related headline[1]:
"Desktop Linux: Why it may have lost its chance"
I think the dear AST is up to no good...
[1]: http://www.itworld.com/open-source/67022/desktop-linux-why-it-may-have-lost-its-chance [itworld.com]
Re: (Score:2)
Given that one of the main reasons for microkernels is to separate dodgy drivers from the kernel and hence improve stability, it doesn't say much for the implementation of AmigaDOS if it kept crashing!
Re: (Score:2)
Re:Tanenbaum? (Score:5, Funny)
Re:Tanenbaum? (Score:5, Insightful)
Re:Tanenbaum? (Score:5, Insightful)
That's a rather ignorant viewpoint.
Tanenbaum argued for greater modularity, and really that's no bad thing; his arguments were pretty solid theoretically. But as we all know, just as the most beautiful, maintainable, stable software designs are sacrificed in business for something that works now even if it has its flaws, Linux was available, easy to use, and just worked the way people wanted. That doesn't mean it was inherently better in theory, or that Tanenbaum is wrong, any more than it means Windows is a vastly superior OS to Linux and Mac OS X simply because it has such a massively larger user base.
Basing your view on Tanenbaum's one comment towards Torvalds is also rather ignorant. Throughout the discussion you're referring to, Tanenbaum was well composed and formed coherent arguments, whilst Torvalds at times acted like your average troll.
You see, the very fact that Windows is far and away the most popular OS, followed by Mac OS X, followed by Linux, is evidence enough that popularity means nothing in terms of the actual quality of an OS; it merely shows who played the business game better.
Tanenbaum is worth listening to; his ideas and justifications in that 17-year-old discussion you mention aren't wrong even if his predictions on the future of computing were. This is a man who understands the theory of how to make a better OS more than most people do, and yes, possibly even more than Torvalds. The problem is that he's a theoretical guy, so whilst his proposals may be better, they may not be practical at the time they're announced, or he simply may not have the time to dedicate to proving their practicality. If they're not practical at the time he proposes them, though, that doesn't mean they'll never be practical, as changes in computing architecture or even raw computing power may make them so.
Hopefully he'll put this funding to good use, and it'll help provide him the time and resources he needs to take his ideas beyond mere theory, so he'll be able to back up his theories with actual working demonstrations rather than just arguments. You can be a Torvalds fanboy all you want, but Tanenbaum and Torvalds are two different people: Tanenbaum is someone who comes up with theoretical new concepts, Torvalds is someone who takes existing concepts and implements them well. Both have their strengths, but writing one or the other off is foolish when both have a lot to offer.
Re: (Score:3, Insightful)
I agree. I suppose the kinds of quality factors that Windows lacks vs., say, Linux are security and stability, but Windows is also historically much stronger in terms of usability, which is a measure of quality that matters more than any other to most end users - they just want to be able to use it, even if it's not perhaps all that secure.
I would argue, though, that from a more objective perspective, security, stability and modularity are more important factors when measuring overall
Re: (Score:3, Insightful)
30 seconds when you're sat on your ass in front of your PC.
Try power-cycling a weather satellite in 30 seconds.