Multi-Server Microkernel OS Genode 12.11 Can Build Itself
An anonymous reader wrote in with a story on OS News about the latest release of the Genode Microkernel OS Framework. Brought to you by the research labs at TU Dresden, Genode is based on the L4 microkernel and aims to provide a framework for writing multi-server operating systems (think the Hurd, but with even device drivers as userspace tasks). Until recently, the primary use of L4 seems to have been as a glorified hypervisor for Linux, but now that's changing: the Genode example OS can build itself on itself: "Even though there is a large track record of individual programs and libraries ported to the environment, those programs used to be self-sustaining applications that require only little interaction with other programs. In contrast, the build system relies on many utilities working together using mechanisms such as files, pipes, output redirection, and execve. The Genode base system does not come with any of those mechanisms let alone the subtle semantics of the POSIX interface as expected by those utilities. Being true to microkernel principles, Genode's API has a far lower abstraction level and is much more rigid in scope." The detailed changelog has information on the huge architectural overhaul of this release. One thing this release features that the Hurd still doesn't have: working sound support. For those unfamiliar with multi-server systems, the project has a brief conceptual overview document.
No plans for LLVM (Score:5, Informative)
For anybody wondering [osnews.com]:
Re:No plans for LLVM (Score:4, Informative)
That's just not correct.
A Phenom II X6, especially the lower-clocking models, is certainly not high end any more.
A dual-core system is now certainly low end, given that even netbooks have dual-core processors.
Plenty of ultrabooks come with quad-core processors these days, and they are not especially fast machines, trading speed for power consumption and size.
Very Simple (Score:4, Informative)
All interrupts in processors are handled in a single context, the 'ring 0' or 'kernel state'. Device drivers (actual drivers, that is) handle interrupts; that's their PURPOSE. When the user types a keystroke, the keyboard controller raises an interrupt to the CPU, which FORCES a context switch to kernel state and the context established for handling interrupts (the exact details depend on the CPU and possibly other parts of the specific architecture; in some systems there is just a general interrupt handling context and software does a bunch of the work, in others the hardware will set up the context and vector directly to the handler).
So, just HAVING an interrupt means you've had one context switch. In a monolithic kernel that could be the only one: the interrupt is handled and normal processing resumes with a switch back to the previous context or something similar. In a microkernel the initial dispatching mechanism has to determine which user-space context will handle things and do ANOTHER context switch into that user state, doubling the number of switches required. Not only that, but in many cases something like I/O will also require access to other services or drivers. For instance, a USB bus will have a USB driver, but layered on top of that are HID drivers, disk drivers, etc., sometimes 2-3 levels deep (e.g. a USB storage subsystem will emulate SCSI, so there is an abstract SCSI driver on top of the USB driver and then logical disk storage subsystems on top of that). In a microkernel it is QUITE likely that as data and commands move up and down through these layers each one will force a context switch, and they may well also force some data to be moved from one address space to another, etc.
Microkernels will always be a tempting concept; they have a certain level of architectural elegance. OTOH in practical terms they're simply inefficient, and most of the benefits remain largely theoretical. While it is true that dependencies and couplings COULD be reduced and security and stability COULD improve, the added complexity generally results in less reliability and less provable security. Interactions between the various subsystems remain, they just become harder to trace. So far at least, monolithic kernels have proven to be more practical in most applications. Some people of course maintain that the structure of OSes running on systems with large numbers of (homogeneous or heterogeneous) cores will more closely resemble microkernels than standard monolithic ones. Of course work on this sort of software is still in its infancy, so it is hard to say if this will turn out to be true or not.
Re:Very Simple (Score:4, Informative)
Most operating systems these days don't run device driver interrupt handling code directly in the interrupt handler --- it's considered bad practice, as not only do you not know what state the OS is in (because it's just been interrupted!), which means you have an incredibly limited set of functionality available to you, but also while the interrupt handler's running some, if not all, of your interrupts are disabled.
So instead what happens is that you get out of the interrupt handler as quickly as possible and delegate the actual work to a lightweight thread of some description. This will usually run in user mode, although it's part of the kernel and still not considered a user process. This thread is then allowed to do things like wait on mutexes, allocate memory, etc. The exact details all vary according to operating system, of course.
This means that you nearly always have an extra couple of context switches anyway. The extra overhead in a well designed microkernel is negligible. Note that most microkernels are not well designed.
L4 is well designed. It is frigging awesome. One of its key design goals was to reduce context switch time --- we're talking around 1/30th the cost of a Linux context switch here. I've seen reports that Linux running on top of L4 is actually faster than Linux running on bare metal! L4 is a totally different beast to microkernels like Mach or Minix, and a lot of microkernel folklore simply doesn't apply to L4.
L4 is ubiquitous in the mobile phone world; most featurephones have it, and at least some smartphones have it (e.g. the radio processor on the G1 runs an L4-based operating system). But they're mostly using it because it's small (the kernel is ~32kB), and because it provides excellent task and memory management abstraction. A common setup for featurephones is to run the UI stack in one task and the real-time radio stack in another, with the UI stack's code dynamically paged from a cheap compressed NAND flash setup --- L4 can do this pretty much trivially.
This is particularly exciting because it looks like the first genuinely practical L4-based desktop operating system around. There have been research OSes using this kind of security architecture for decades, but this is the first one I've seen that actually looks useful. If you haven't watched the LiveCD demo video [youtube.com], do so --- and bear in mind that this is from a couple of years ago. It looks like they're approaching the holy grail of desktop operating systems, which is to be able to run any arbitrary untrusted machine code safely. (And bear in mind that Genode can be run on top of Linux as well as on bare metal. I don't know if you still get the security features without L4 underneath, though.)
This is, basically, the most interesting operating system development I have seen in years.