World's First Formally-Proven OS Kernel
An anonymous reader writes "Operating systems usually have bugs — the 'blue screen of death,' the Amiga Hand, and so forth are known by almost everyone. NICTA's team of researchers has managed to prove that a particular OS kernel is guaranteed to meet its specification. It is fully, formally verified, and as such it exceeds the Common Criteria's highest level of assurance. The researchers used an executable specification written in Haskell, C code that mapped to Haskell, and the Isabelle theorem prover to generate a machine-checked proof that the C code in the kernel matches the executable and the formal specification of the system."
Does it run Linux? "We're pleased to say that it does. Presently, we have a para-virtualized version of Linux running on top of the (unverified) x86 port of seL4. There are plans to port Linux to the verified ARM version of seL4 as well." Further technical details are available from NICTA's website.
Amiga Hand? (Score:5, Informative)
Or do you mean the "Guru Meditation Error"?
Re:Apps running on top will crash... so (Score:5, Informative)
> Even if we have a perfect kernel, it won't insulate us from bugs in the
> software running on top of that kernel, so do we really gain much?
Since a kernel crash kills all your applications and background processes, kills your network connection, requires you to reboot, and can scribble anywhere on the disk, yes.
Idiot poster (Score:1, Informative)
Every respectable geek knows the Amiga hand is not an indication that the system crashed, but the hardware asking for its OS.
Nope, Amiga crashes are known for the Guru Meditation, where the screen goes black and displays BLINKING RED text.
Most Amiga users have jumped out of their seat at least once or twice out of sheer surprise...
Re:Apps running on top will crash... so (Score:2, Informative)
Well, a kernel bug brings down the whole OS and all the applications running on top of it. But the kernel is usually the most secure and stable part; most errors come from drivers.
http://www.usenix.org/publications/login/2006-04/openpdfs/herder.pdf [usenix.org]
And that is the problem with monolithic OSes: one bug in a driver or any other part of the OS brings the whole thing down. But even though an OS is monolithic, it can be as secure as microkernel-based OSes like Hurd, Minix or NT.
And I believe that the star of the next 10 years will be MS Singularity.
Re:The Amiga Hand? (Score:5, Informative)
The missing word is formal.
They use a formal specification [wikipedia.org], which is then formally verified [wikipedia.org].
The overhead? It took something like 5 years for a 10,000-line program. The benefit is that if the specification is right, the program should be right.
Other questions are answered in the FAQ linked in the summary and this page [nicta.com.au].
World's First Formally-Proven OS Kernel - NOT (Score:4, Informative)
Honeywell SCOMP, early 1980s. It was intended to be a secure front-end to Multics.
Verified by the NSA et al. as part of the first Orange Book A1-level certification.
For the time it was a magnificent bit of work.
Re:spec? (Score:5, Informative)
It only means it meets the spec, not that the spec is correct ...
It does not mean that faulty or erratic hardware cannot crash the system ...
It does not mean that other programs cannot crash and lose your data ...
It does not mean that buggy device drivers cannot make your system unusable ...
It does not mean that the system is perfect, only that it will always do what it is supposed to ... which may not be what you want ...
Re:Provable? (Score:2, Informative)
Only if you want to prove it automatically. This proof is constructed by hand, with some automatic steps in between, and then *verified* automatically. To manage the complexity of the proof, it is done in a more abstract formal specification language, then refined twice (Haskell, C). The correctness of the refinements is formally proven with the Isabelle system.
Re:Sounds like automated unit test generation to m (Score:3, Informative)
Formal proofs are not unit tests. Unit tests test that certain values work correctly. Formal proofs show that the code works to the specification in all cases (modulo errors in the proof, of course). They of course cannot find bugs in the specification (which unit tests might, if they test what you thought the specification said, instead of what the specification really said).
Great work, but... (Score:3, Informative)
...there are two caveats. One is that "executable specification in Haskell" is a bit of a dodge. It means that they have effectively verified the translation from Haskell to C, but if there are bugs in the Haskell then there will be bugs in the C as well (in fact this is part of what they've proven). They admit as much at the very top of http://ertos.nicta.com.au/research/l4.verified/proof.pml [nicta.com.au], but it still leaves the question of why this is any better than simply rewriting the microkernel in Haskell. If the benefit of a concise and abstract specification is lost, what's left?
The second major caveat is that they had to leave out some features that they couldn't verify. Most significant is that they didn't verify multi-core - by which it seems they mean multi-processor, but "multi-core" is the much hipper word. Since even laptops are multi-processor now, this limits applicability until they take the next step. With all of these shortcuts, though, if you look at the list of what they did manage to verify - mostly pointer errors and overflows - it's not clear that it couldn't have been done with a more real-code-oriented tool such as the Stanford Checker or sparse.
Re:spec? (Score:3, Informative)
The study of the halting problem taught us that certain categories of algorithms cannot be formally proved to do certain things. Work on incompleteness tells us that algorithms in that category must exist.
The question, really, is whether the algorithms in and design of operating systems can be done in such a way that we don't run into incompleteness and halting-problem issues within the constraints of what we are trying to achieve -- after all, we aren't trying to achieve the impossible, just trying to make some specific, absolute guarantees.
I'd recommend a solid study of the Mizar Journal of Formalized Mathematics, the implications of the standard axioms of set theory (maybe including reduced forms of the axiom of choice, but that's not necessarily highly relevant), and careful analyses of what Gödel and Turing actually proved. After that, emergent behavior from cellular automata may be a good next investigation.
[IANA professional set-theory mathematician, logician, nor computer scientist, so take my post as a starting point for finding answers, please, not as anything definitive!]
Re:slow peice of crap (Score:1, Informative)
You really think C has *no* abstraction? So when you define a variable, you define the memory address for it then? You choose which registers to use in calculations? Look, even ASSEMBLY uses some abstraction; only machine language does not.
Re:OK, you had me going there for a while... (Score:3, Informative)
Microkernels provide:
L4 checks these boxes. Nanokernel and picokernel are poorly defined, emotive terms. Not reasonable criticism.
Re:Thank goodness (Score:3, Informative)
You're conflating two different concepts. Common Criteria Evaluation Assurance Levels focus on security, while this work focuses on complete mathematical provability. IOW, can it be mathematically proven that the kernel meets all of its specifications and that the compiled kernel is exactly what was specified in the source code? CC EAL focuses only on security aspects.
Furthermore, a system that was specified as being completely secure[1] would, in theory, be equivalent to an EAL 7, not merely "6+".
[1] I mean this only in a hypothetical sense, since I don't believe it is even possible to specify a system that is completely impenetrable, let alone implement one. But then, that's because I subscribe to the theory of information security that says a completely secure system is impossible, therefore we must use multiple compensating controls that get us to a 'virtually impenetrable' state, tuned to prevent the most likely types of attacks (cheap) vs. the possibility of someone building a multi-billion-dollar supercluster to break the security.
Re:Thank goodness (Score:2, Informative)
It doesn't return anything about success or failure.
Re:Who proved the proof-checker? (Score:2, Informative)
Although in C it can look very ugly, for functional languages such as Haskell or Erlang it is required, as you have to express loops through recursion because you can only assign to variables once. In these languages it looks a lot better, since you can pattern match on the values of arguments, letting you concisely deal with obvious error or base cases and handle the generic case without worrying about them. Scheme implementations are required to implement tail call optimization.