
Get To Know Mach, the Kernel of Mac OS X

An anonymous reader writes "Linux is a kernel, not an operating system. So what is Mac OS X's kernel? The Mach microkernel. The debate between monolithic (Linux) and micro (Mach) kernels continues, and there is a great chapter online about the Mach system from the very good book 'Operating System Concepts'. Which design is better? I report, you decide." Warning: link is to a PDF.
This discussion has been archived. No new comments can be posted.

  • by iggymanz ( 596061 ) on Monday May 16, 2005 @10:44AM (#12543372)
    The modular design of microkernels makes for easier design and debugging, and some designs allow user-space services that, in a monolithic design, could only live in privileged space. But does one want to pay the overhead of all that message passing? Now that we are getting parallel processing at the consumer level with multicore and hyperthreaded chips, maybe the answer is yes.
  • by dumbnose ( 190140 ) on Monday May 16, 2005 @10:46AM (#12543402)
    Mac OS X uses Mach, but it also takes a FreeBSD kernel and compiles the two together. This eliminates the runtime characteristics of a microkernel. This is actually quite common.

    So, even though it uses Mach, you can't call it a microkernel.

  • Re:Monolithic (Score:1, Interesting)

    by Anonymous Coward on Monday May 16, 2005 @10:48AM (#12543421)
    So if Linux adds a new capability, you should be able to just drop that subroutine into a directory and have it work. Well, nope. You have to recompile the entire kernel to get new features.
  • Re:Monolithic (Score:1, Interesting)

    by Anonymous Coward on Monday May 16, 2005 @10:52AM (#12543465)
    I could be wrong, and I'm sure I'll be told if I am... but the issue is what happens at compile time versus runtime. In most instances the Linux kernel, with the exception of loadable modules (KLMs), is built as a single monolithic image: all drivers and kernel-related items are in one binary. There is no dynamic loading and unloading under this model. But in the case of Mach, and even the rest of the OS X system, everything that can be put off until later usually is, by loading only what is immediately necessary. The kernel is a small core which loads everything else as it needs it, instead of being built with everything it may need at compile time.

    Mach pushes decisions about what it needs off until runtime instead of compile time, and this translates into smaller footprints and quicker startup in most cases. Linux, by contrast, makes only limited use of KLMs, which means most decisions have to be made at build time and cannot be put off.
  • Re:Monolithic (Score:1, Interesting)

    by Anonymous Coward on Monday May 16, 2005 @10:52AM (#12543469)
    Linux is not 100% monolithic; it does use modules, so it's more like 99% monolithic. The main difference between a microkernel and a monolithic kernel is that the microkernel runs most things in userspace, which is safer and more interchangeable, but generally thought to be less efficient.

    Basically the microkernel is the most beautiful design; I don't think anyone could disagree with that. But a monolithic kernel gets the job done, so it's not like it's bad either.

    The Apple design is, however, what I'd call bad. They've taken a microkernel (Mach) and implemented a monolithic kernel beneath it, to run their legacy apps! It's ugly!
  • Mac != Mach (Score:4, Interesting)

    by frankie ( 91710 ) on Monday May 16, 2005 @10:55AM (#12543500) Journal
    Although Darwin does use Mach at its heart, it also has large chunks of the BSD kernel bolted on to avoid Mach's typical performance hit. Consequently, OS X really isn't a microkernel, and you can't do all the cool microkernel tricks (load or unload almost anything dynamically, drivers that can't crash the OS, etc.).

    This approach doesn't make much logical sense to me, but it's what Steve and Avie wanted, and somehow, amazingly, it still just plain works.
  • I suppose it's possible I'm underinformed, but I believe the "BSD subsystem" of OS X is not compiled "into the kernel" and is entirely a compatibility layer on top of it.

    I suspect this is exactly how you keep the microkernel design intact and still have BSD compatibility.
  • by RickHunter ( 103108 ) on Monday May 16, 2005 @10:59AM (#12543545)
    1. QNX has been multi-platform for quite a while.
    2. QNX uses shared memory to pass messages. Its message passing is very lightweight, and the resulting performance is far better than Linux.

    In this day and age, there is no reason to use a macrokernel unless your hardware lacks the features needed for a microkernel. QNX has proved this quite nicely.

  • It is based on Mach with a BSD compatibility layer

    It's not just a "compatibility layer". A Mach system consists of multiple servers providing services to each other and to applications. The BSD server in XNU is an essential part of the system... it's the ringleader, and calls the shots from boot onwards.
  • Massive microkernel (Score:1, Interesting)

    by Anonymous Coward on Monday May 16, 2005 @11:01AM (#12543561)
    Mach must be one of the largest microkernels around. It's also a bit of a cheat: it keeps all the drivers and such inside it, which means it's easier to write, and perhaps a smidgen quicker, but you lose most of the advantages of doing things this way.

    I would have thought that you could implement much of Linux in userspace. Certainly file systems and the IP stack could be done easily, leaving just the hardware drivers in there. At that point, you get something that's not a great deal different from the way Mach does it.
  • History? (Score:1, Interesting)

    by Anonymous Coward on Monday May 16, 2005 @11:15AM (#12543682)
    Does anyone know the technical reasons that Apple (or NeXT) chose to go with Mach rather than BSD kernel?
  • by Lemming Mark ( 849014 ) on Monday May 16, 2005 @11:21AM (#12543737) Homepage
    Monolithic kernels are dominant in practice (so far). Windows started off microkernel-y but has ended up rather monolithic (at least partly for performance reasons). Xnu (Darwin / MacOS kernel) also has strong monolithic leanings, despite being based on Mach.

    The microkernel design still appeals, though. For some things (not all) it is beneficial to move stuff out into less-privileged units. (Small) examples of this in Linux include: FUSE (for implementing non-performance-critical filesystems in Linux userspace), udev instead of devfs, moving initialisation code to the initramfs instead of being in the kernel itself...

    Other systems (e.g. Dragonfly BSD) are also seeking to move functionality to userspace where possible without undue complexity and / or performance cost.

    Some argue that virtual machine monitors are a useful modern equivalent to microkernels. They perform a similar function (partitioning system software into multiple less privileged entities), although they do it in a more "pragmatic", less architecturally "pure" way.

    Virtual machine monitors allow multiple virtual machines to use the same hardware. They have also been used for running Linux drivers in fault-resistant sandboxed virtual machines, with performance within a few percent of a traditional monolithic design (fully privileged drivers).

    The L4 microkernel is being used as a virtual machine monitor for this work by one research group; Xen has these capabilities as well.
  • by Anonymous Coward on Monday May 16, 2005 @11:23AM (#12543772)
    I'm sick of all these stupid "which is better?" religious wars that geeks are always so interested in having.

    It isn't just geeks that have religious wars. Ask people that care about cars, sports, women, etc. And more choices doesn't necessarily mean that some of the choices aren't clear cut better than others.
  • Re:Monolithic (Score:5, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Monday May 16, 2005 @11:23AM (#12543778) Homepage Journal
    The modules may be loaded into the kernel proper, but that does not make Linux necessarily monolithic, as the bindings are made on the fly and the failure of a given module does not automatically mean the failure of the kernel as a whole.


    MkLinux is not the only microkernel Linux; L4Linux is still maintained and is much more advanced. Nor are these the only Linux kernels to run in userspace; User-mode Linux (UML), for example, does just fine. It is not clear where Xen fits into the picture.


    All in all, though, the situation with Linux is actually a highly complex one, and should not be regarded as being definitely anything.

  • Re:XNU vs Linux. (Score:2, Interesting)

    by Ezdaloth ( 675945 ) on Monday May 16, 2005 @11:24AM (#12543788) Homepage
    More accurately, L4 is pretty much totally different. L4 is probably the smallest microkernel you can make, while Mach is the biggest. Which is better is highly subjective.
  • by Maury Markowitz ( 452832 ) on Monday May 16, 2005 @11:31AM (#12543859) Homepage
    Although interesting, Mach was developed at a university and shows a huge number of problems as a result. Notably performance is terrible, due largely to the IPC performance. When people actually tried the "collection of servers" operating system in Mach 3, it was clear it was simply not a workable solution. Workplace OS, Star Trek and any number of other OS's died as a result.

    What's sad about this is that the failure of Mach tainted ALL ukernels. By the mid-1990s the idea was basically dead. But what an idea! Don't have your machine on a network? Simply don't run the network program. Using a diskless system? Don't run the disk server. Don't want _VM_... no problem. You can use the exact same OS image to build anything from a minimal OS for a handheld to a full-blown multi-machine cluster, without even compiling. No pluggable kernels, no shared libraries, no stackable file systems, nothing but top and ls.

    But it just didn't work. IIRC performance of a Unix app on a truly collection-of-servers Mach was 56% slower than BSD. Unusable. Of course you can compile the entire thing into a single app, the "co-located servers" idea, but then all the advantages of Mach go away, every single one.

    Now, given this, the question has to be asked: why would anyone still use it? Don't get me wrong, there are real advantages to Mach, notably for Apple, who ship a number of multiprocessor machines. But the same support can be added to monolithic kernels. Likewise Apple's version has support for soft realtime, which has also been added to monolithic kernels. So in the end the Mac runs slower than it could, and I am hard pressed to find an upside.

    Of course it didn't have to be this way. The problems in Mach stemmed from the development process, not the concepts within. As L4 shows, it is possible to make a cross-platform IPC system that is not a serious drag on performance. And Sun's Spring went further than anyone, really rewriting the entire OS into something I find really interesting, while still providing fast Unix at the same time. I'd love to see someone build Mac OS X on Spring...
  • Re:As always... (Score:3, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Monday May 16, 2005 @11:36AM (#12543904) Homepage Journal
    That is very true. Arguably, though, what you have is a continuum, with monolithic kernels at one extreme and exokernels (virtually everything in userspace) at the other extreme. Different requirements would need different designs, somewhere on that continuum, but there would be no "overall best" for all circumstances.


    Actually, as kernels have started adding parallelism (such as SMP, clustering, etc), it becomes harder to really say exactly what sort of design a kernel really has. (Design is intrinsic and should not depend on surroundings, so does not depend on whether you are actually in a cluster, but merely whether the kernel recognizes the concept.)

  • Exokernels (Score:3, Interesting)

    by Jotham ( 89116 ) on Monday May 16, 2005 @11:39AM (#12543939)
    Mono vs Macro... what about Exo [mit.edu]
  • L4 performance? (Score:3, Interesting)

    by emil ( 695 ) on Monday May 16, 2005 @11:41AM (#12543956)

    HURD abandoned Mach because of performance issues and is being reimplemented on L4 [l4ka.org].

    If Apple had chosen L4, would it have been necessary from a performance perspective to include BSD at a peer level with the microkernel?

    Is it now far too late for Apple to dump Mach?

  • Re:Monolithic (Score:3, Interesting)

    by AT-SkyWalker ( 610033 ) on Monday May 16, 2005 @11:46AM (#12544013)
    The Apple design is, however, what I'd call bad. They've taken a microkernel (Mach) and implemented a monolithic kernel beneath it, to run their legacy apps! It's ugly!

    I would disagree with you there. Apple's design may not be beautiful, but it certainly has the best of both worlds.

    The BSD layer, memory management, etc. are all built inside XNU (the OS X kernel), but at the same time it still functions as a microkernel, allowing things such as kernel extensions (kexts).

    The problem with a pure microkernel is that it's very slow because of all the context switching that has to go on between userland and kernel land to do what is essentially kernel functionality. Apple solved this by making XNU not act as a microkernel for things such as the BSD layer.
    The result is a kernel that is less prone to panics. In Linux a bad KLM would certainly panic the kernel, because it runs in the same address space as the kernel. In OS X a bad kext would just die like any other user-space program.

    As you said, it may be ugly in your opinion, but it gets the job done, it has the best of both worlds, and it's less prone to panics. Now that's what I call a step in the right direction.

  • by jacksonj04 ( 800021 ) <nick@nickjackson.me> on Monday May 16, 2005 @12:34PM (#12544441) Homepage
    I'm glad I'm not the only person to think that. Makes bugger all difference to my shave though.

    In other news: anybody else being persistently bugged to moderate or M2? Every single day I come back to find they want me to M2, and almost daily I find another five mod points.
  • by TheRaven64 ( 641858 ) on Monday May 16, 2005 @12:41PM (#12544519) Journal
    The benchmarks for Mach 4.0 showed it within 20% of the speed of a monolithic kernel of the same era. Check the site for more details, although I seem to recall that the project is no longer in active development.

    There is one very easy way to kill a microkernel's performance: force it to use a synchronous system call API (e.g. POSIX). With a synchronous system call API, a context switch is required for every system call. With an asynchronous API, the process simply writes messages into a buffer (or a set of buffers for different kernel services) until it either needs to wait for a response or its quantum expires. At that point, you switch to the next context (perhaps a kernel server) and process the incoming messages. This reduces the total number of context switches (and, more importantly, the number of mode switches). If you want to see good performance from QNX, use the native system call API, not the POSIX wrapper.

  • by Animats ( 122034 ) on Monday May 16, 2005 @12:42PM (#12544530) Homepage
    Although interesting, Mach was developed at a university and shows a huge number of problems as a result.

    Sad, but true. The developers of Mach chose to start with BSD and tried to hack it into a microkernel, one section at a time. This was a flop. Mach 2.5, which Apple uses, is basically BSD with some Mach features. Mach 3 is more of a microkernel, but is so awful that nobody uses it.

    There are really only two microkernels that work - VM, for IBM mainframes, and QNX. In both cases, incredible care was put into getting the key primitives - interprocess communication and scheduling - right. If those are botched, the system never recovers.

    Mach suffered from too much "cool idea" syndrome. There's too much generality in key primitives that need to work fast. Message passing has too many options. The ability to build heterogeneous multiprocessor clusters out of whatever you have lying around complicates the simpler cases. And sharing memory across the network isn't worth the trouble.

    It's clear from VM and QNX how a microkernel should work. Interprocess communication and scheduling need to play well together. Interprocess communication primitives should be like subroutine calls, not I/O operations. Try for an overhead of about 20%, and don't get carried away with the "zero copy" mania. Organize the I/O system so that the channel drivers that manage memory access are separate from the device drivers that manage the device functions.

    This is how you get uptime measured in years.

  • by AaronLawrence ( 600990 ) on Monday May 16, 2005 @12:48PM (#12544577)
    Perhaps this means that not many people are moderating or meta-moderating. I do find it tedious myself, mostly because moderation isn't immediate: you have to remember to click moderate after reading the page instead of just closing it. I know that sounds silly, but it's an easy mistake to make.
  • by 0xABADC0DA ( 867955 ) on Monday May 16, 2005 @12:51PM (#12544604)
    Actually, monolithic kernels will always be faster... in fact, why not make all software monolithic? What I'm talking about is running all programs in the kernel address space, with simple function calls to kernel services. That would make the computer much faster, and it can be done.

    If the entire operating system were written in a safe language such as Java or C# ("managed" code only) then the performance impact from syscalls, virtual memory (TLB flush/lookup), complicated task switching, and extra copies of data from/to the kernel would be almost entirely eliminated. A safe language is one that does not allow arbitrary pointers.

    FYI, in a Linux 2.6 kernel on a 512 MB machine, 4 MB is taken up just by page tables -- not even counting the overhead when processes actually add pages to their address spaces; that's just to have support for VM in the first place. Syscalls take ~1000x longer than normal function calls, so they are always going to be a bottleneck. And when you call a syscall that takes a data parameter (a string, for instance), the data is in the best case copied (in the worst case the kernel records the address of the reading instruction in a table, a page fault happens, and the fault handler checks the table to see if the access was okay). I/O using read/write is always copied at least twice, and even mmap suffers a lot of overhead from the kernel managing the pages.

    Basically, kernels written in C or other archaic systems programming languages are needlessly slowing down the computer a LOT. With a safe language, instead of using virtual memory to force programs not to mess with each other (they simply can't), the VM hardware can be used for other things. One nice performance enhancement is to allocate all memory (objects) in a "new" zone and use VM to track which pages have been written to; when the "new" zone fills up, only pages that have been written to are checked for references during garbage collection. So you could do a billion memory allocations of arbitrary sizes and it would take only about a billion instructions (each allocation increments an integer, and that's all). Also, "system" calls are then just normal method calls and can even be inlined, so instead of getpid() taking the time of 1000 instructions it could easily take only 1 (direct inlined access to the variable).

    So lots of people will mod this down, since they assume that low-level details like cache lines are more important than, oh, say, free memory management. But I've got some news: with a few minor tweaks you can do all that same low-level crap in Java or managed C# and get all the benefits of a safe kernel.
  • Re:Monolithic (Score:4, Interesting)

    by EvilTwinSkippy ( 112490 ) <yoda AT etoyoc DOT com> on Monday May 16, 2005 @01:40PM (#12545120) Homepage Journal
    Actually, the Mac OS X kernel is the Mac OS X kernel, and the Linux kernel is the Linux kernel.

    To couch them in terms of monolithic versus micro would be like trying to classify an economy as capitalist or communist.

    Neither has ever existed in its pure form, and both descriptions carry political overtones that have precious little to do with their actual meaning.

  • by Anonymous Coward on Monday May 16, 2005 @02:05PM (#12545423)
    What's better? PHP or Python?

    Python

    What's better Pepsi or Coke?

    Coke

    What's better? C++ or Java?

    C++

    What's better? IE or Mozilla?

    Mozilla

    In all seriousness, though, I personally believe that the microkernel architecture is better. Whatever yields the most functionality is what's best to me, and even if performance is lower in some areas, that's okay; I'm used to making sacrifices for operating systems. After all, if all I cared about was speed, I'd still be using DOS! That is obviously not the criterion we use to choose OSes.

  • by Phong ( 38038 ) on Monday May 16, 2005 @02:14PM (#12545528)
    In this post, you see that Linus was effectively trying to rename GNU

    That's certainly one cynical viewpoint, but is not what really happened. Linus started his own OS project and he named it as he pleased (or really those around him named it and he accepted the name). There's nothing wrong with naming your own project and then cherry picking the items you want to be in your project from the available choices. Keep in mind that the GNU folks were working on HURD at the time, and were not all that keen on the Linux kernel. So, this was not a case of someone coming along and completing the GNU project (at least, not at that time) -- this was a different OS project that shared a lot of the same code. In some ways it could be considered to be a fork, but even that is not right conceptually because the project didn't start out to be a GNU system. If the BSD utilities hadn't been under a cloud of a potential lawsuit, it may well have been that more BSD code would have made its way into the early versions of Linux (IIRC, the GNU tools were slightly buggier but more feature rich than the BSD tools at the time).

    Stallman tells us to call a GNU system running on Linux "GNU/Linux".

    Stallman has every right to advocate that based on the perspective of someone close to the GNU project, and I have every right to ignore him based on my historical experiences with Linux from as far back as version 0.11 (I switched over from Minix to Linux, and helped Remy Card with some of the early work on ext2, so I've been using Linux for a long time).

  • by freedom_india ( 780002 ) on Monday May 16, 2005 @02:17PM (#12545559) Homepage Journal
    C++ or Java? This is a dumb question. Of course Java is better, unless of course you want a headache, heartache, stomach ache, etc., in which case C++ is probably the right way to go. :-)

    For the rest of us, Java is much, much simpler and easier to use.

  • by titusjan ( 219930 ) on Monday May 16, 2005 @03:20PM (#12546318)
    I've got the opposite. I haven't had mod points in a year. And yes, I've got the 'willing to moderate' box checked and positive karma.

    Slashdot works in mysterious ways.
  • by callipygian-showsyst ( 631222 ) on Monday May 16, 2005 @04:24PM (#12547079) Homepage
    ...as anyone with XP device-driver experience could tell you. Unlike the 60s-era Unix technology at the core of any Unix-based architecture, Windows XP was designed from the ground up to be modular, portable, and extensible.

    Cutler wrote a book on it, which is still worth reading, though out of print. Microsoft has a current "XP Internals" book available from Microsoft press.

    Also, Microsoft has an XP-based embeddable operating system, which eliminates many of XP's "desktop" enhancements. And of course there are the excellent handheld operating systems that are the heart of Windows Mobile.

  • by just-a-stone ( 766843 ) on Monday May 16, 2005 @05:11PM (#12547660)
    Still interesting, and in some ways very funny: the Tanenbaum vs. Torvalds debates [oreilly.com] about microkernel vs. monolithic architecture.
