Multi-Server Microkernel OS Genode 12.11 Can Build Itself

An anonymous reader wrote in with a story on OS News about the latest release of the Genode Microkernel OS Framework. Brought to you by the research labs at TU Dresden, Genode is based on the L4 microkernel and aims to provide a framework for writing multi-server operating systems (think the Hurd, but with even device drivers as userspace tasks). Until recently, the primary use of L4 seems to have been as a glorified hypervisor for Linux, but that's changing: the Genode example OS can now build itself on itself: "Even though there is a large track record of individual programs and libraries ported to the environment, those programs used to be self-sustaining applications that require only little interaction with other programs. In contrast, the build system relies on many utilities working together using mechanisms such as files, pipes, output redirection, and execve. The Genode base system does not come with any of those mechanisms, let alone the subtle semantics of the POSIX interface as expected by those utilities. Being true to microkernel principles, Genode's API has a far lower abstraction level and is much more rigid in scope." The detailed changelog has information on the huge architectural overhaul of this release. One thing this release features that Hurd still doesn't have: working sound support. For those unfamiliar with multi-server systems, the project has a brief conceptual overview document.
  • Re:No plans for LLVM (Score:2, Interesting)

    by loufoque ( 1400831 ) on Sunday December 02, 2012 @01:07PM (#42161721)

    In particular, because it is very rigid in the tools it needs to work with, making it more complicated to have a full working toolchain on exotic platforms.

    clang/llvm can actually cross-compile to several different architectures with the same binary. That would be impossible with GCC, which has to be built separately for each target.

  • by phantomfive ( 622387 ) on Sunday December 02, 2012 @01:59PM (#42162003) Journal
    I believe it's because you need to verify a lot of things that come from user space into kernel space. This makes things like DMA and port communication somewhat more difficult.
  • Re:No plans for LLVM (Score:3, Interesting)

    by Entrope ( 68843 ) on Sunday December 02, 2012 @01:59PM (#42162009) Homepage

    Microkernels are long on the "security and accountability" hype and somewhat short on reality. Sure, the services provided by the microkernel are less likely to have bugs or holes than a monolithic kernel -- but that's because the microkernel doesn't provide most of the monolithic kernel's functionality. Once you roll in all the device drivers, network stack, and the rest, the microkernel-based system is generally at least as bloated and typically less performant.

  • by Bomazi ( 1875554 ) on Sunday December 02, 2012 @02:12PM (#42162093)

    It depends. Hurd itself is an implementation of the unix api as servers running on top of a microkernel. Drivers are not its concern.

    The way drivers are handled on a Hurd system depends on the choice of microkernel. Mach includes drivers, so they run in kernel space. L4 doesn't have drivers, so they will have to be written separately and run in user space.

  • Re:No plans for LLVM (Score:3, Interesting)

    by HornWumpus ( 783565 ) on Sunday December 02, 2012 @03:21PM (#42162501)

    Come back when you get the point. Kernel space is shared memory, a kernel mode component can crash the system and leave no trace of what did it. Like pre X MacOS or DOS.

    And never say or type 'performant' again. It makes you look like a douche. 'less performant' == 'slower'.

    Everybody knows microkernels are slower. They are also more stable: misbehaving drivers are identified quickly, usually have fewer issues, and the issues they do have don't take the whole system down.

    That said, count the context switches needed to draw a single pixel.

  • Re:No plans for LLVM (Score:4, Interesting)

    by Entrope ( 68843 ) on Sunday December 02, 2012 @07:06PM (#42163953) Homepage

    I would say that you're the one who needs to get the point. Major components that crash will still generally leave the system in a state that is difficult or impractical to diagnose or recover from. If your disk driver or filesystem daemon crashes, you don't have many ways to log in or start a replacement instance. If your network card driver or TCP/IP stack crashes, you still need a remote management console to fix that web server. In the meantime, people with modern kernels have figured out how to make those monolithic kernels still fairly usable in spite of panics or other corruption. The only reason that microkernels look better on the metrics you claim is that they support less hardware and use less of the hardware's complex (high-performance) features.

  • Re:No plans for LLVM (Score:4, Interesting)

    by drinkypoo ( 153816 ) on Monday December 03, 2012 @12:16PM (#42169747) Homepage Journal

    "If a disk or network driver crashes on a production server, how much do you care that the rest of the system is still working? These things must not crash, period -- if they do crash, the state of the rest of the system is usually irrelevant."

    That's not really true. The storage driver can ask the disk driver which blocks (or whatever you call them) have been successfully written, and not retire them from the cache until they have been recorded. And hopefully one day we will get MRAM, and then we'll have recoverable ramdisks even better than the ones we had on the Amiga, where they could persist through a warm boot, simply getting mapped again. So you could load your OS from floppy into RAM, but you'd only have to do it once per cold boot -- which was nice, because the Amiga crashed a lot, having no memory protection...

    This conversation is especially interesting because the Amiga was a microkernel-based system with user-mode drivers, which is largely how it solved hardware autoconfiguration: you could include a config ROM and the OS would load (in fact, run) your driver process from it. This was enough at least for booting, and then you could load any updated drivers, which could kick the old driver out of memory. And now we have reached the limits of what I know about it :)

    If the network card driver crashes, the same thing is true. The network server knows which packets have been ACKed and which ones haven't, and it knows the sequence number of the last packet it received. The driver is restarted, some retransmits are requested, and everything proceeds as normal. The only case in which the user even has to notice is when the driver is crashing so fast that it can't do any useful work before it does so.
