Andy Tanenbaum Releases Minix 3

Guillaume Pierre writes "Andy Tanenbaum announced the availability of the next version of the Minix operating system. 'MINIX 3 is a new open-source operating system designed to be highly reliable and secure. This new OS is extremely small, with the part that runs in kernel mode under 4000 lines of executable code. The parts that run in user mode are divided into small modules, well insulated from one another. For example, each device driver runs as a separate user-mode process, so a bug in a driver (by far the biggest source of bugs in any operating system) cannot bring down the entire OS. In fact, most of the time when a driver crashes it is automatically replaced without requiring any user intervention, without requiring rebooting, and without affecting running programs. These features, the tiny amount of kernel code, and other aspects greatly enhance system reliability.' In case anyone wonders: yes, he still thinks that micro-kernels are more reliable than monolithic kernels ;-) Disclaimer: I am the chief architect of Globule, the experimental content-distribution network used to host www.minix3.org."
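To make the restart idea concrete, here is a minimal sketch of a driver-respawning supervisor, in the spirit of (but not identical to) Minix 3's reincarnation server; the driver path /sbin/net-driver is hypothetical, and the real Minix mechanism is far more involved:

```c
/* Minimal sketch of a driver-restart supervisor. The driver binary
 * path is hypothetical; Minix 3's reincarnation server does much more. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Child: run the (hypothetical) user-mode driver. */
            execl("/sbin/net-driver", "net-driver", (char *)NULL);
            perror("execl");
            _exit(127);
        }
        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return 1;
        }
        /* Driver exited or crashed: start a fresh copy, without
         * rebooting and without disturbing other processes. */
        fprintf(stderr, "driver died (status %d), restarting\n", status);
        sleep(1); /* crude back-off to avoid a tight crash loop */
    }
}
```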
  • live-CD (Score:5, Informative)

    by DreamerFi ( 78710 ) <john.sinteur@com> on Monday October 24, 2005 @07:03AM (#13862753) Homepage
    And you can try it out on your current PC - the download [minix3.org] is a live-cd!
    • Re:live-CD (Score:2, Funny)

      by xgadflyx ( 828530 ) *
      Well, before I burn the bandwidth, does it have KDE or GNOME?
    • by JanMark ( 547992 ) on Monday October 24, 2005 @07:57AM (#13863065) Homepage
      Not only that, there is also a VMware image [minix3.org]!
  • Phew (Score:5, Funny)

    by Anonymous Coward on Monday October 24, 2005 @07:03AM (#13862757)
    Now we can all switch over from Linux, at least until Hurd [gnu.org] ships.

    *pummeling ensues*

    GNU/Hurd!! I meant GNU/Hurd!!!
  • Love this quote (Score:5, Interesting)

    by strider44 ( 650833 ) on Monday October 24, 2005 @07:05AM (#13862763)
    While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won.

    In retrospect that might have been a bit overconfident.
    • Re:Love this quote (Score:5, Interesting)

      by strider44 ( 650833 ) on Monday October 24, 2005 @07:07AM (#13862776)
      In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.

      Another one. Reading predictions from fifteen years ago is really quite entertaining, showing that even the smartest people can get things slightly wrong.
      • Re:Love this quote (Score:2, Insightful)

        by Anonymous Coward
        ...which is exactly what everything post-Pentium Pro does... So yes, he got it exactly right...
      • Comment removed (Score:5, Interesting)

        by account_deleted ( 4530225 ) on Monday October 24, 2005 @07:17AM (#13862824)
        Comment removed based on user account deletion
        • Re:Love this quote (Score:3, Interesting)

          by iabervon ( 1971 )
          It turns out that, in order to get good RISC performance, you need to design a new architecture every few revisions. The RISC instruction set that the P4 uses is different from the P3, which is different from the P2, and so forth. The benefit of the way Intel does things is that, since they're emulating a CISC instruction set on a RISC core, they can switch to emulating the same CISC instruction set on a better RISC core, and nobody has to be told about the new architecture. The translation layer can optimi
        • Re:Love this quote (Score:3, Insightful)

          by johansalk ( 818687 )
          Coca-Cola is far more popular than much healthier drinks. The marketplace is governed by economic and political considerations; losing in the marketplace does not mean a design is inferior.
      • Re:Love this quote (Score:2, Informative)

        by mklencke ( 780322 )
        > They will run old MS-DOS programs by interpreting the 80386 in software.

        Well, this is partly true :-) I use DOSBox [sf.net] quite often for playing old games, tinkering with TASM, etc.
    • Re:Love this quote (Score:5, Insightful)

      by MikeFM ( 12491 ) on Monday October 24, 2005 @08:04AM (#13863117) Homepage Journal
      That'd be more convincing if I could see a microkernel OS that didn't suck. The theory is great.. sort of like object-oriented programming.. but it doesn't always work out. The biggest problem seems to be that the extra layer of abstraction slows things down (which makes sense, really). Then you have to weigh running faster and leaner against easier programming. From a programmer's point of view, most will like the abstracted, easier option, because you can spend more time writing code and less time debugging and fixing, but real-world usage doesn't always work well with that.

      Still.. as fast as modern computers are I think we may be reaching a point where raw speed is less important and well designed microkernels can probably run almost as fast as monolithic kernels. If heavy usage servers can be run as virtual machines in Xen then why not use a microkernel too?

      So. Any examples of microkernel OSes that handle heavy server load, function well as a desktop, and can handle multimedia tasks like gaming? OS X uses BSD on top of a microkernel, I think, but my experience is that it is slow, and the tests I've seen have shown that Linux performs a lot better than OS X on the same hardware (no idea if that was due to the microkernel). I'd find it hard to believe that, given solid numbers showing a microkernel is just as fast and without additional overhead, someone like Linus wouldn't use one, since it's an easier programming model (better for security, stability, etc.).
      • Re:Love this quote (Score:3, Informative)

        by ChrisDolan ( 24101 )
        How about Mac OS X and Windows NT/2000/XP? Those are microkernel-based architectures. OS X uses Mach under the hood. Some BSD variants also support running on top of Mach.

        • Re:Love this quote (Score:3, Informative)

          by wilsone8 ( 471353 )
          The original Windows NT architecture might have been a microkernel, but ever since version 4.0, when Microsoft pulled the video subsystem into kernel space, there is no way you can still call it a microkernel. And they have only been pulling more and more into kernel space (for example, large portions of the IIS 6.0 HTTP processor actually run in kernel space on Windows Server 2003).
          • Re:Love this quote (Score:3, Informative)

            by ChrisDolan ( 24101 )
            Good point. Both OS X and NT+ do violate the microkernel philosophy in the name of performance (Wikipedia calls them Hybrid kernels [wikipedia.org]). However, they differ significantly from monolithic kernels like Linux in that third party drivers are by default outside of the kernel instead of inside.

            So perhaps they're millikernels? :-)
      • Re:Love this quote (Score:4, Informative)

        by DrXym ( 126579 ) on Monday October 24, 2005 @09:00AM (#13863461)
        QNX Neutrino [qnx.com] is an example of a microkernel which doesn't suck. In fact QNX sees heavy use in realtime environments, where both space and performance matter a great deal. Some applications of QNX put considerable importance on the thing not collapsing in a heap after a failure of some part.
      • Re:Love this quote (Score:4, Informative)

        by naasking ( 94116 ) <naasking@gm[ ].com ['ail' in gap]> on Monday October 24, 2005 @10:31AM (#13864118) Homepage
        The biggest problem seems to be that that extra layer of abstraction slows things down (which makes sense really). [...] If heavy usage servers can be run as virtual machines in Xen then why not use a microkernel too?

        Funny you should mention Xen, because it's essentially a microkernel running other kernels as protected processes.

        So. Any examples of microkernel OS's that handle heavy server load, function well as a desktop, and can handle multimedia tasks like gaming?

        Other posts mention QNX, so I won't bother.

        I'd find it hard to believe that with solid numbers showing that microkernel is just as fast and without additional overhead that someone like Linus wouldn't use it since it's an easier programming model (better for security, stability, etc).

        You'd be surprised. There's a lot of vested interest in the current programming paradigms and existing codebase. A principled microkernel architecture [sourceforge.net] might just be incompatible with POSIX, which eliminates a large swath of portable and useful software.

        If you want performance, you need look no further than L4 [l4ka.org], EROS [l4ka.org] (and its successor CapROS [sourceforge.net]). For a principled design, I'd go with EROS/CapROS or the next-generation capability system Coyotos [coyotos.org] (whose designers are trying very hard to implement POSIX while maintaining capability security).

        Something useful right now, doesn't exist as far as I know.
      • Re:Love this quote (Score:5, Informative)

        by Sentry21 ( 8183 ) on Monday October 24, 2005 @10:40AM (#13864202) Journal
        OS X uses BSD under a microkernel I think but my experience is that it is slow and the tests I've seen have shown that Linux performs a lot better on it than OS X (no idea if that was due to microkernel use).

        The OS X kernel is a different situation. Darwin is a mixture of microkernel and monolithic, as is (for example) Linux. In Linux, a lot of things (like device configuration, etc.) get done in userspace by daemons using a kernel interface, which means the kernel need only contain the code necessary to initialize the device. Darwin's kernel (xnu), however, is more complex in terms of overall design (though the internals may be less complex - I'm not a kernel developer), and is derived from Mach 3.0 and FreeBSD 5.0.

        Mach provides xnu with kernel threads, message-passing (for IPC), memory management (including protected memory and VM), kernel debugging, real-time support, and console I/O. It also enables the use of the Mach-O binary format, which allows one binary to contain code for multiple architectures (e.g. x86 and PPC). In fact, when I installed OpenDarwin quite a while ago, all the binaries that came with it were dual-architecture, meaning I could mount the same drive on PPC or x86 and execute them (which is kind of neat).
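        As an aside on those multi-architecture ("fat") binaries: the on-disk format starts with a simple big-endian header counting per-architecture slices. A minimal sketch that detects one, with the two header fields redeclared from Apple's <mach-o/fat.h> so it compiles anywhere (note Java class files share the same 0xcafebabe magic, so this check is only a heuristic):

```c
/* Minimal sketch: report how many architectures a Mach-O "fat"
 * binary contains. The on-disk header is big-endian; field layout
 * per Apple's <mach-o/fat.h>, redeclared here for portability. */
#include <stdint.h>
#include <stdio.h>

#define FAT_MAGIC 0xcafebabeU

static uint32_t be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char hdr[8];               /* magic + nfat_arch */
    if (fread(hdr, 1, sizeof hdr, f) != sizeof hdr) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    if (be32(hdr) == FAT_MAGIC)
        printf("fat binary with %u architecture(s)\n", be32(hdr + 4));
    else
        printf("not a fat binary (magic 0x%08x)\n", be32(hdr));
    return 0;
}
```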

        The BSD kernel provides (obviously) the BSD layer, as well as POSIX, the process model, security policies, UIDs, networking, VFS (with filesystem-independent journalling), permissions, SysV IPC, the crypto framework, and some primitives.

        On top of all that is IOKit, the driver framework. It uses a subset of C++, and the OO design allows faster development with less code, and easier debugging as well. It is multi-threaded, SMP-safe, and allows for hot-plugging and dynamic configuration, and most interestingly of all, some drivers can be written to run in userspace, providing stability in the case of a crash.

        Now, as to your comment about performance, it is possible you are referring to the tests done using MySQL a while back, which showed MySQL performance as being (as I recall) abysmal compared to Linux on the same hardware. The problem with that test is that MySQL uses functions that tell the kernel to flush writes to disk. These functions are supposed to block so that the program can't continue until the writes are done and the data is stored on the platter. On OS X, this is exactly what happens, and every time MySQL requests that data be flushed, the thread doing the flushing has to wait until the data is on the platter (or at the very least, in the drive's write cache). On Linux, this function returns instantly, as Linux (apparently) assumes that hard drives and power supplies are infallible, and obviously if you're that concerned about your data, get a UPS.

        It should be noted that the MySQL online manual strongly recommends turning that feature off for production systems, forcing Linux to block until the write is completed and lowering performance. I would be interested to see a benchmark comparing the two in that configuration.

        This discrepancy between the way Linux handles flush requests and the way OS X handles them gives a noticeable drop in performance in a standard MySQL setup. I am told that the version that ships with OS X Server 10.4 is modified so as to increase performance while keeping reliability. Unfortunately, I cannot confirm this at this point.
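        The flush primitive at issue is fsync(2); on OS X there is also the stronger fcntl(F_FULLFSYNC), which additionally asks the drive itself to empty its write cache. A minimal sketch of the write-then-flush pattern a database relies on (the file name is made up):

```c
/* Minimal sketch of the "flush to disk" pattern discussed above.
 * fsync() blocks until the kernel has pushed the data toward the
 * disk; on OS X, fcntl(F_FULLFSYNC) also drains the drive's cache. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char rec[] = "commit record\n";
    if (write(fd, rec, sizeof rec - 1) < 0) { perror("write"); return 1; }

    if (fsync(fd) < 0)                   /* block until the write is durable */
        perror("fsync");
#ifdef F_FULLFSYNC
    if (fcntl(fd, F_FULLFSYNC) < 0)      /* OS X: also drain the drive cache */
        perror("fcntl(F_FULLFSYNC)");
#endif
    close(fd);
    return 0;
}
```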
      • by RevMike ( 632002 ) <revMike@gmail. c o m> on Monday October 24, 2005 @11:02AM (#13864401) Journal
        That'd be more convincing if I could see a microkernel OS that didn't suck. The theory is great.. sort of like object oriented programming.. but doesn't always work out. The biggest problem seems to be that that extra layer of abstraction slows things down (which makes sense really).

        Actually, the bigger problem with microkernels is debugging. When passing messages around inside an OS, there is potential for lots of race conditions and the like. The trick to a microkernel is getting the messages to run around as fast as possible without adding synchronization points. Every synchronization point slows the system a little, but makes the system a little more stable. Once you've optimized the system for performance, any small change to a module the kernel talks to can throw the whole thing out of balance, and you need to go back, debug the race conditions, and retune the code.

        In short, a kernel can be fast, flexible, or reliable. You can have two, but it is really difficult to have three. Macro-kernels are generally fast and reliable. Micro-kernels can be fast and flexible, flexible and reliable, but rarely are they fast and reliable.
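        To illustrate what such a synchronization point looks like in code, here is a toy one-slot mailbox guarded by a mutex and condition variable (not any real kernel's IPC path): each lock acquisition costs a little speed, and removing it would reintroduce exactly the races described above.

```c
/* Sketch of one IPC synchronization point: a one-slot mailbox.
 * The mutex/condvar pair buys stability at the cost of speed;
 * drop it and two senders can race on `slot`. Build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int slot, full;            /* the message and its state */

void mbox_send(int msg)
{
    pthread_mutex_lock(&lock);
    while (full)                  /* wait for the receiver to drain */
        pthread_cond_wait(&cond, &lock);
    slot = msg;
    full = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

int mbox_recv(void)
{
    pthread_mutex_lock(&lock);
    while (!full)                 /* wait for a sender */
        pthread_cond_wait(&cond, &lock);
    int msg = slot;
    full = 0;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return msg;
}

static void *receiver(void *arg)
{
    (void)arg;
    printf("got %d\n", mbox_recv());
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, receiver, NULL);
    mbox_send(42);
    pthread_join(t, NULL);
    return 0;
}
```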

      • Re:Love this quote (Score:3, Informative)

        by po8 ( 187055 )

        Cheriton's V System [wikipedia.org] didn't suck. I used it in a commercial project in the late 1980s and loved it; it met all of your criteria. If Cheriton had open-sourced it, I think it would have had a huge impact. But he didn't, and for whatever reason it hasn't.

      • Re:Love this quote (Score:3, Interesting)

        by Samrobb ( 12731 )

        Still.. as fast as modern computers are I think we may be reaching a point where raw speed is less important and well designed microkernels can probably run almost as fast as monolithic kernels.

        I'm surprised nobody's mentioned this yet... there was an article in the latest C/C++ Users Journal titled Interprocess Communication & the L4 Microkernel [cuj.com]. Made for interesting reading. The main idea seems to be that traditional microkernel designs spend too much time and effort having the kernel validate

    • Re:Love this quote (Score:3, Insightful)

      by RAMMS+EIN ( 578166 )
      ``While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won.

      In retrospect that might have been a bit overconfident.''

      Perhaps, but it's true as stated. The consensus among OS designers really was that microkernels were superior. Linus opted for a monolithic kernel because he didn't believe in microkernels, but he was the odd one out. Linux's success
  • Honest question (Score:5, Interesting)

    by Frogbert ( 589961 ) <frogbert@gma[ ]com ['il.' in gap]> on Monday October 24, 2005 @07:05AM (#13862765)
    Honest question: is Minix compatible with Linux or something? Or do they just sound the same by coincidence? Or is it more like your BSDs in comparison to Linux?
    • It's hardly a coincidence they sound the same! Linus made Linux as a free version of Minix! Dunno how compatible they are anymore though...
      • It's hardly a coincidence, as they both branch off the Unix standard.
        • by Anonymous Coward
          Could you please forward the contact information for this Andy Tanenbaum person to me at the address below. I would like to have a word with him. Thanks.

          355 South 520 West
          Suite 100
          Lindon, UT 84042
    • Re:Honest question (Score:5, Interesting)

      by TheMMaster ( 527904 ) <hp@tmm.TWAINcx minus author> on Monday October 24, 2005 @07:13AM (#13862803)
      more like your BSDs in comparison to Linux :)

      All three are more or less POSIX-compliant operating systems, which means that most software should run on all of them.

      Some software that requires functionality not found in POSIX, like X.org for instance, will need to be ported separately. Most of the GNU tools will probably run (probably), and then there is the question of which C library it uses: if it uses its own, there are going to be a whole myriad of other interesting problems; if it uses glibc or BSD's libc, then it's easier.

      In other words: "more like your BSDs in comparison to Linux"
    • Re:Honest question (Score:5, Informative)

      by shadowknot ( 853491 ) * on Monday October 24, 2005 @07:15AM (#13862815) Homepage Journal
      Linus Torvalds was partly inspired by Minix to create a more usable and extensible open-source OS, and the original source for the Linux kernel was written on a Minix install. Check out the DVD of RevolutionOS [revolution-os.com] for a detailed history.
    • Re:Honest question (Score:3, Insightful)

      by jtshaw ( 398319 ) *
      If you read Linus' book, he basically says he started writing Linux because he thought Minix was terrible. That is why Linux originally used ext2 (same fs as Minix). So in reality, Minix and Linux have similar-sounding names because they are both trying to be like Unix.
      • Re:Honest question (Score:3, Informative)

        by arkane1234 ( 457605 )
        That is why Linux original used ext2 (same fs as minix)

        No, Minix used (uses? not sure what the "new" Minix uses) the Minix filesystem, which was only able to address, I think it was, 32MB of space.
        Linux got the ext2 filesystem at a later date, migrating away from the Minix filesystem. If you compile your kernel, you'll still see the option to have Minix filesystem functionality compiled in (or modularized).

        I still remember having to decide if I wanted to go with the "new" ext2 filesystem, which will not
    • by metamatic ( 202216 ) on Monday October 24, 2005 @10:13AM (#13863984) Homepage Journal
      Well, since nobody else has posted a very informative answer...

      Linux is based on MINIX. It was built on MINIX, using MINIX. It started off as Linus's weekend hack to build a 386-specific replacement kernel, so he could have MINIX with pre-emptive multi-tasking and memory protection. Andy Tanenbaum didn't want to make MINIX 386-specific because, like the NetBSD and Debian folks, he was trying to make something that would be portable to lots of different hardware. (Like the Atari ST I was running it on.)

      Then there was the big flamewar over monolithic kernels vs modular microkernels. Linus went off in a huff and turned Linux into a complete OS by ripping out all the MINIX and adding all the GNU stuff instead. Then over the years he introduced a modular kernel and made it portable to multiple architectures, basically admitting he was wrong but never saying so.

      At that point, Linux started to become usable as an OS. And in the meantime, MINIX had been killed by the toxic licensing policies of the copyright owner (not Andy Tanenbaum). That, and the x86 architecture had expanded to 90% of the market. So we arrived at the situation we have today, where MINIX is largely forgotten, and we have a MINIX-like Linux with all the mindshare.

      And now, ironically, Andy Tanenbaum has made MINIX 3 only run on the x86. So perhaps he and Linus can now both admit they were wrong in major respects, and make friends?
      • by kl76 ( 445787 ) on Monday October 24, 2005 @10:43AM (#13864222)

          It started off as Linus's weekend hack to build a 386-specific replacement kernel


        There was already a 386-specific 32-bit version of the MINIX kernel around at the time; it was called MINIX-386, unsurprisingly enough, and was widely used in the MINIX hacker community.


        Linus went off in a huff and turned Linux into a complete OS by ripping out all the MINIX and adding all the GNU stuff instead.


        There wasn't ever any MINIX code in Linux - there couldn't have been, as MINIX was a commercial product at the time. What there was, was plenty of minor MINIX influences on the design (lack of raw disk devices; "kernel", "fs" and "mm" subdirectories in the kernel source; a Minix-compatible on-disk filesystem format; major/minor device numbers; etc.) but no major ones (i.e. the microkernel paradigm).


          And in the mean time, MINIX had been killed by toxic licensing policies of the copyright owner (not Andy Tanenbaum). That, and the x86 architecture had expanded to 90% of the market.


        Well, yes, you had to pay for MINIX, but there were no free OSs to speak of in those days. The reason MINIX seemed to disappear was that most of the MINIX hacker types were using MINIX because it was the closest thing to real UNIX they could afford. Once Linux appeared, as open source, with its simple goal of being a UNIX clone (rather than a model OS for teaching purposes, as MINIX was meant to be), it was inevitable that most of the MINIX hacker community would migrate en masse.
        • Linus went off in a huff and turned Linux into a complete OS by ripping out all the MINIX and adding all the GNU stuff instead.

          There wasn't ever any MINIX code in Linux

          Linux-the-kernel never contained MINIX code, but Linux-the-OS was Linux-the-kernel running inside an OS made of MINIX code. A new Linux-the-OS was made by ripping the MINIX bits out and replacing them with the OS bits from GNU and newly written stuff as necessary.

          And I don't remember ever hearing about MINIX-386--but then again, as

  • About Tanenbaum. (Score:3, Informative)

    by Anonymous Coward on Monday October 24, 2005 @07:05AM (#13862766)
    Tanenbaum's home page:

    http://www.cs.vu.nl/~ast/ [cs.vu.nl]

    Yes, it's the same guy who wrote the book for your networking course.
  • by CyricZ ( 887944 ) on Monday October 24, 2005 @07:07AM (#13862772)
    I just want to thank you, Andy, for your decades of effort towards advancing the field of computing. Your contributions have been much appreciated. After all, if it were not for Minix we would not have Linux today. Thanks, Andy!

    • by TrappedByMyself ( 861094 ) on Monday October 24, 2005 @07:17AM (#13862829)
      Your contributions have been much appreciated.

      Yes, thank you. My sadistic operating systems professor used your textbook. Your name still gives me nightmares to this day.
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Monday October 24, 2005 @07:09AM (#13862783)
    Comment removed based on user account deletion
  • by Anonymous Coward on Monday October 24, 2005 @07:10AM (#13862786)
    In case you don't know, Andy was the professor who originally suggested to Linus that he create a kernel, and then provided all the support and positive encouragement that would obviously be needed to successfully complete such an undertaking. He knew from the outset that Linux was going to be a massive hit. He is truly one of Computer Science's great visionaries.
  • by nmoog ( 701216 ) on Monday October 24, 2005 @07:15AM (#13862812) Homepage Journal
    That'd be cool to give this a bash with our shiny new VMware Players!
  • But... (Score:2, Funny)

    by orzetto ( 545509 )
    ... does it run on Linux?
    (points the right pinky finger to the lip and laughs hysterically)
  • Than being posted under the Linux section of Slashdot, either.

    Just think of the irony of the father, so to speak, being listed as a child.

    While not a direct descendant, Minix was CERTAINLY the original inspiration for Linus (I'll take that as gospel, since Linus himself said so).

    But now, what, nearly 15-18 years later, and we see no Minix section of Slashdot....
  • Deja vu (Score:2, Informative)

    by ladybugfi ( 110420 )
    Ahh, I used to run Minix 1.X on an Atari ST ages ago, maybe it's time to take a nostalgia trip.

    However, while I agree that microkernels are conceptually smarter, Linux has clearly won the "Unix on PC hardware" contest. But then again, as far as I could tell, that contest was never on AST's agenda anyway. For him the Minix system was a teaching tool.
  • Wow, I was just reading "Just for Fun" for the second time, mainly because of a lack of other books, and today I was wondering what happened to Minix. In his book, Linus describes what is wrong with Minix and microkernels in general.

    The Tanenbaum-Torvalds Debate [oreilly.com]
  • .. so does it finally have a multithreaded filesystem?
  • More reliable (Score:3, Interesting)

    by samjam ( 256347 ) on Monday October 24, 2005 @07:34AM (#13862919) Homepage Journal
    "yes, he still thinks that micro-kernels are more reliable than monolithic kernels ;"

    Does anybody dispute this?

    AFAIK the main pressing benefit of a monolithic kernel design is not reliability so much as being able to scramble all over the internal structs of other kernel modules without needing a context switch, which can be very helpful and quick.

    Sam
  • Old laptops (Score:2, Interesting)

    by Kim0 ( 106623 )
    I have a number of old laptops lying around. It would be nice to use them for embedded systems. Linux is too big to fit. I guess Minix has too few drivers. What to do? Any recommendations?

    Kim0
    • Re:Old laptops (Score:5, Informative)

      by Vo0k ( 760020 ) on Monday October 24, 2005 @07:48AM (#13863002) Journal
      NetBSD.
      I found myself in a similar situation once: Linux or Solaris wouldn't fit, with a reasonable amount of useful stuff, on the 200MB hard drive of an old Sun. Then I managed to fit most of the NetBSD distro, with 2 desktop managers, Netscape Navigator (pre-Moz times), and a bunch of servers for running a remote diskless workstation, and still managed to carve out 40MB of disk space as swap for that remote workstation :)
    • Re:Old laptops (Score:3, Interesting)

      by Tx ( 96709 )
      Windows 98 can be shrunk to ~4MB [chalmers.se], and has plenty of drivers. And I kid thee not, I have seen Windows 95 used as an embedded OS in some very expensive products. Scary.
  • X11 port? (Score:3, Interesting)

    by CyricZ ( 887944 ) on Monday October 24, 2005 @07:36AM (#13862928)
    On the news page it states that 'The port of X Windows is coming along well.'

    Which implementation of X is being ported? I would hope that it is X.org, and at least the 6.8.2 release.

  • by idlake ( 850372 ) on Monday October 24, 2005 @07:43AM (#13862969)
    Tanenbaum rightly criticized Linus for creating a big monolithic operating system kernel, but at least Linus was copying something that was successful and he made it a success himself.

    But, geez, how often do microkernels have to fail before Tanenbaum will admit that there must be something fundamentally wrong with his approach, too? Microkernels attempt to address the right problem (kernel fault isolation), just in such an idiotic way that they keep failing in the real world. But instead of a detailed critical analysis of previous failures, Tanenbaum and Herder just go on merrily implementing Minix 3, apparently on the assumption that all previous failures of microkernels were just due to programmer incompetence, an incompetence that they themselves naturally don't suffer from.

    Both Linux-style monolithic kernels and Tanenbaum-style microkernels are dead ends. But at least Linux gets the job done more or less in the short term. In the long term, we'll probably have to wait for dinosaurs like Tanenbaum to die out before a new generation of computer science students can approach the problem of operating system design with a fresh perspective.
    • by 0xABADC0DA ( 867955 ) on Monday October 24, 2005 @08:58AM (#13863444)
      It's because traditional microkernels solve the wrong problem. The goal is reliability and flexibility (user-space drivers and whatnot). The wrong part is using separate memory spaces to achieve those goals. They are just too clumsy: they are ridiculously slow, they are coarse-grained (a 4K page is the smallest unit), and you cannot apply a filter to memory accesses.

      If the microkernel were combined with a safe language, like Java or C#, the problems would go away. You wouldn't need to change the page table, so that massive penalty is gone. Accessing memory through a memory object would allow any arbitrary range (down to single bits). You could also apply a filter, so the driver could implement the commands to the disk but the hardware-access object would only allow valid use of the bus; this wouldn't be perfect, but it would greatly increase reliability over microkernels, which are already much more reliable than monolithic kernels.

      And speed? It could be faster than C-based code for various reasons (using the dirty bit to accelerate garbage collection, no context switches, etc). It's not like there isn't precedent: the Berkeley Packet Filter is actually an interpreted bytecode that is run inside the kernel. It has a number of restrictions to ensure safety (like only branching forwards), but basically in all Unix operating systems it is a giant switch statement that interprets the bytecode. This is plenty fast enough to handle the packets, orders of magnitude faster than sending them up to user space.

      If Tanenbaum really cared about reliability or safety or simplicity he would make a managed microkernel, not more of this C/asm based crap.
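      For the curious, that "giant switch statement" style of in-kernel interpreter looks roughly like the sketch below. The four-opcode machine is invented for illustration, not real BPF, but the safety idea is the same: validate first, allow only forward branches so every program terminates, then execute.

```c
/* Toy sketch of a BPF-style in-kernel interpreter: a validator that
 * only admits forward branches (so every program terminates), then a
 * giant switch that executes it. The instruction set is made up. */
#include <stdio.h>

enum { OP_LDI, OP_ADD, OP_JMP, OP_RET };    /* hypothetical opcodes */

struct insn { int op, arg; };

/* Reject out-of-range opcodes and non-forward jumps, BPF-style. */
static int validate(const struct insn *p, int len)
{
    for (int pc = 0; pc < len; pc++) {
        if (p[pc].op < OP_LDI || p[pc].op > OP_RET)
            return 0;
        if (p[pc].op == OP_JMP &&
            (p[pc].arg < 1 || pc + p[pc].arg >= len))
            return 0;                       /* only jump forward, in bounds */
    }
    return len > 0 && p[len - 1].op == OP_RET;  /* must end in RET */
}

static int run(const struct insn *p, int len)
{
    int acc = 0;
    for (int pc = 0; pc < len; ) {
        switch (p[pc].op) {                 /* the "giant switch" */
        case OP_LDI: acc = p[pc].arg;  pc++; break;
        case OP_ADD: acc += p[pc].arg; pc++; break;
        case OP_JMP: pc += p[pc].arg;        break;
        case OP_RET: return acc;
        }
    }
    return -1;                              /* unreachable if validated */
}

int main(void)
{
    struct insn prog[] = {
        { OP_LDI, 40 }, { OP_JMP, 2 }, { OP_ADD, 99 },  /* ADD is skipped */
        { OP_ADD, 2 },  { OP_RET, 0 },
    };
    int n = sizeof prog / sizeof prog[0];
    if (validate(prog, n))
        printf("result: %d\n", run(prog, n));   /* prints 42 */
    return 0;
}
```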
      • agreed 100% (Score:4, Interesting)

        by idlake ( 850372 ) on Monday October 24, 2005 @09:45AM (#13863767)
        I agree 100%. And there has been excellent prior work in that area, with fault isolation in single-address space kernels; experiments suggest that single-address space approaches are significantly faster. And it doesn't even have to be Java or C#; languages like Modula-3 or Object Pascal are far safer than C and can get by with a tiny runtime. Heck, even consistent use of C++ for writing kernels would be better than what people are doing now, despite the numerous problems that C++ has.

        It is just astounding to me that while anybody else would be laughed at if they tried to write a modern, complex application in ANSI C, operating system designers are somehow considered special, as if concepts like "abstraction", "error handling", and "runtime safety" didn't matter for kernels that are millions of lines big.
  • Recollections (Score:5, Interesting)

    by awol ( 98751 ) on Monday October 24, 2005 @07:45AM (#13862982) Journal
    I was "there" when Andy and Linus had their first "ding dong". I was doing an OS/Design undergraduate (300 level) course at the time using the AT book and MINIX as the tool through which we had to implement changes to the scheduler. The book was excellent, MINIX was pretty cool but more importantly it was an educational tool to allow us to delve into the guts of an operating system and play around with it. It was so accessible and relatively easy to do, certainly compared to anything else available at the time.

    Cruising the newsgroups was pretty much the done thing at the time, and comp.os.minix was pretty high on my list for obvious reasons. I saw this stuff happening at the time and, knowing that AST was always pretty direct, was entertained by the whole flame war thing. Anyway, my point is that AST saw MINIX as an OS theory educational tool, and Linus saw it as too defective to be even that, and as such Linux was better. Funny, I agree with them both, kinda. I could never have kernel-hacked Linux like I did MINIX at the time, and MINIX could never have become my primary desktop at home, as Linux is now. I guess they were just talking at crossed purposes even then. Pretty much a standard flamewar ;-)
  • by Vo0k ( 760020 ) on Monday October 24, 2005 @07:52AM (#13863027) Journal
    See? It's Minix 3 already, while Linux is still in 2.x! ;)
  • by david.given ( 6740 ) <dg@cowlark.com> on Monday October 24, 2005 @08:01AM (#13863100) Homepage Journal
    It's worth pointing out that one of Minix's great selling-points is that it's all BSD licensed --- including the tool chain. It doesn't use gcc by default; its native compiler is the BSD licensed Amsterdam Compiler Kit [sf.net].

    This makes it, as far as I know, the only completely BSD licensed Unix-like operating system in the world. Even the big BSDs can't claim that, as they all rely on gcc.

    I was in on the Minix beta testing. It's actually extremely impressive. It's quite minimalist; most of the shell commands are pared down to their bare minimum --- for example, tar doesn't support the j or z flags --- and it tends towards SysV rather than BSD with things like options to ps. It runs happily on a 4MB 486 with 1GB of hard drive, with no virtual memory, and will contentedly churn through a complete rebuild without any trouble whatsoever. Slackware users will probably like it.

    Driver support isn't particularly great; apart from the usual communications port drivers, there's a small selection of supported network cards, an FDD driver, an IDE/ATA driver that supports CDROMs, and a BIOS hard disk driver for when you're using SCSI or USB or some other exotic storage. The VFS only supports a single filesystem, MinixFS (surprise, surprise!), but allows you multiple mountpoints. In order to read CDs or DOS floppies you need external commands.

    There's no GUI, of course.

    As a test, as part of the beta program, I did manage to get ipkg working on it. This required a fair bit of hacking, mostly due to ipkg assuming it was running on a gcc/Linux system, but it did work, and I found myself able to construct and install .ipk packages --- rather impressive. Now the real thing's been released, I need to revisit it.

    Oh, yeah, it has one of the nicest boot loaders I've ever seen --- it's programmable!

    • It runs happily on a 4MB 486 with 1GB of hard drive, with no virtual memory, and will contentedly churn through a complete rebuild without any trouble whatsoever.

      Must be a lot of added bloat in there. Minix 1.5 used to run very happily on a PC XT with 640K RAM and a 40MB disk. It would run on a minimal machine with as little as 256K RAM and two 360K floppies. I haven't booted it in a century or so, but I still have an XT with Minix installed on it and a box of 20 or so 360K floppies with binaries and sour
  • Software? (Score:3, Insightful)

    by Jacek Poplawski ( 223457 ) on Monday October 24, 2005 @08:09AM (#13863132)
    But how mature and how usable is this OS?
    What about software for Minix?
    On the website there is info about packages - gcc, vim/emacs, an old Python, no ncurses, no X... What can I install (by compiling) on Minix, what is not possible, and why?
  • by MROD ( 101561 ) on Monday October 24, 2005 @08:12AM (#13863155) Homepage
    For example, each device driver runs as a separate user-mode process so a bug in a driver (by far the biggest source of bugs in any operating system), cannot bring down the entire OS. In fact, most of the time when a driver crashes it is automatically replaced without requiring any user intervention, without requiring rebooting, and without affecting running programs.

    This is all well and good until the crashing device driver locks the system bus or grabs an NMI, etc. And what if the device driver in question is the one accessing the disk? How does the microkernel recover from that one when it can't access the drive the device driver is sitting on?

    I can see where his thought processes are coming from, but I still think he lives in Computer Science Heaven, I'm afraid, where all hardware is mathematically perfect and I/O never happens (as it's not mathematically provable).

    In the real world, device drivers hardly ever crash the system 'cos they're kernel mode; they crash it because they hard-hang the system or deny the kernel the resources to dig itself out of the hole. Neither of these changes by moving the code into user space.
    • I can see where his thought processes are coming from, but I still think he lives in Computer Science Heaven, I'm afraid, where all hardware is mathematically perfect and I/O never happens (as it's not mathematically provable).

      If by "him" you mean Andy Tanenbaum you probably ought to give him the benefit of the doubt, as his position is being represented by some random slashdot person. Maybe just email him.
  • by soldack ( 48581 ) <soldacker@yLIONahoo.com minus cat> on Monday October 24, 2005 @08:37AM (#13863298) Homepage
    I have done my share of kernel programming, and I have always thought it pretty horrible that simple device driver bugs can take down the whole system. Almost all of Windows' blue screens are caused by bad third-party drivers. Almost all of the oopses I have seen on Linux are from device drivers for extra hardware (I mean drivers not for core, common OS features). On Linux, device driver debugging still seems to be horrible; on Windows it is considerably better, but still not as good as application debugging.
    With common user systems as cheap and fast as they are now, do user-mode device drivers make sense? Is the performance worth giving up for the stability? Check out Microsoft's User-Mode Driver Framework [microsoft.com] approach. Here is an old Linux Journal article [linuxjournal.com] on the subject. Does anyone know of other interesting examples of user-mode device drivers on any operating system?
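    One small-scale example on Linux: a normal process can act as a crude user-mode driver for legacy I/O ports via ioperm(2). A minimal x86-only, root-only sketch; 0x378 is the traditional LPT1 base address, and real user-mode driver frameworks involve much more than this.

```c
/* Minimal sketch of a user-mode "driver" on x86 Linux: gain access
 * to the three legacy parallel-port I/O ports with ioperm(2), then
 * write a byte to the data register. Needs root. */
#include <stdio.h>
#include <sys/io.h>     /* ioperm(), outb() -- x86 glibc only */

#define LPT1_BASE 0x378

int main(void)
{
    if (ioperm(LPT1_BASE, 3, 1) < 0) {   /* data, status, control ports */
        perror("ioperm (are you root?)");
        return 1;
    }
    outb(0xFF, LPT1_BASE);               /* drive all data pins high */
    printf("wrote 0xFF to port 0x%x from user space\n", LPT1_BASE);
    ioperm(LPT1_BASE, 3, 0);             /* drop the access again */
    return 0;
}
```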
  • No source? (Score:3, Interesting)

    by Markus Registrada ( 642224 ) on Monday October 24, 2005 @09:29AM (#13863649)
    I didn't find a tarball of source code, just the ISO image. When I loopback-mount the ISO image, I don't find anywhere near 80M of stuff. Is the source on the ISO image?
    • Re:No source? (Score:3, Informative)

      by david.given ( 6740 )
      I didn't find a tarball of source code, just the ISO image. When I loopback-mount the ISO image, I don't find anywhere near 80M of stuff. Is the source on the ISO image?

      Yes, it is --- it's on a Minix filesystem tucked away at the top of the ISO filesystem. If you boot the CD, you'll get a complete Minix LiveCD based system, with all the source on it.

      If you want to access it from Linux you'll need to persuade Linux to parse the partition table on the CD, which it normally won't do --- the easiest way to

  • Microkernels... (Score:3, Insightful)

    by Dwonis ( 52652 ) * on Monday October 24, 2005 @10:02AM (#13863894)
    yes, he still thinks that micro-kernels are more reliable than monolithic kernels

    Of course he does. Everyone does. The old argument between Linus and Andy was never about reliability. It was about *practicality* and *efficiency*. Microkernels usually incur a lot of overhead. Andy thought the overhead was worth it; Linus didn't.

  • System Requirements (Score:4, Interesting)

    by mnmn ( 145599 ) on Monday October 24, 2005 @10:54AM (#13864327) Homepage
    16MB of RAM in the requirements... all I can say is WOW.

    This is supposed to be a simple OS, much simpler than the first version of Linux.

    uClinux can run in 1MB. Older versions can be trimmed enough to run in 200KB even, but that's pushing it. Minix now requires 16MB!!! That's more than ANY BSD out there.

    I was interested in running it on MCUs with small RAM and flash. Trimming down uClinux to the extreme gets the kernel and one shell into 200KB of RAM. eCos requires under 64KB for simple compilations. eCos is POSIX for the most part, but there's hardly any scheduler in there, and no real filesystem drivers or calls.

    Minix is a full OS, but being that simple, I expected the kernel to fit in 64KB of RAM. I guess I'll use NetBSD as a simpler OS to study before graduating to Minix 3.
  • Globule? (Score:5, Funny)

    by Thuktun ( 221615 ) on Monday October 24, 2005 @12:52PM (#13865279) Journal
    Disclaimer: I am the chief architect of Globule, the experimental content-distribution network used to host www.minix3.org.

    Translation: "Please load-test my network."
