Google Announces KataOS (phoronix.com) 69
Last Friday, Google announced the release of KataOS, a security-minded operating system focused on embedded devices running ambient machine learning workloads. As Phoronix notes, it uses the Rust programming language and is "built atop the seL4 microkernel as its foundation." From Google's Open-Source Blog: As the foundation for this new operating system, we chose seL4 as the microkernel because it puts security front and center; it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability. Through the seL4 CAmkES framework, we're also able to provide statically-defined and analyzable system components. KataOS provides a verifiably-secure platform that protects the user's privacy because it is logically impossible for applications to breach the kernel's hardware security protections and the system components are verifiably secure. KataOS is also implemented almost entirely in Rust, which provides a strong starting point for software security, since it eliminates entire classes of bugs, such as off-by-one errors and buffer overflows.
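The bug classes the blog post mentions are the ones Rust's bounds checking rules out. A minimal standalone sketch (not KataOS code) of how an out-of-range access is caught rather than silently corrupting adjacent memory:

```rust
fn main() {
    let buf = vec![10u8, 20, 30, 40];
    let i = buf.len(); // one past the end, computed at runtime

    // Checked access returns Option instead of reading past the end.
    assert_eq!(buf.get(i - 1), Some(&40));
    assert_eq!(buf.get(i), None);

    // Direct out-of-bounds indexing panics deterministically instead of
    // reading or writing adjacent memory the way a C buffer overflow would.
    let oob = std::panic::catch_unwind(|| buf[i]);
    assert!(oob.is_err());
    println!("out-of-bounds access was caught, not exploited");
}
```

The off-by-one at `i` becomes either a `None` or a panic, never an exploitable memory read or write.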
The current GitHub release includes most of the KataOS core pieces, including the frameworks we use for Rust (such as the sel4-sys crate, which provides seL4 syscall APIs), an alternate rootserver written in Rust (needed for dynamic system-wide memory management), and the kernel modifications to seL4 that can reclaim the memory used by the rootserver. KataOS code is being worked on via GitHub under the AmbiML umbrella.
Fuchsia OS (Score:3)
What happened to Fuchsia OS?
Here is what happened (Score:4, Interesting)
Fuchsia is meant to allow better ad revenue (TL;DR; search for "prevailed" in the article): https://www.bnnbloomberg.ca/pr... [bnnbloomberg.ca]
There are also security issues expected in a new OS: https://blog.quarkslab.com/pla... [quarkslab.com]
Re: (Score:2)
Send it to the graveyard.
Re: Here is what happened (Score:5, Insightful)
IMO Stadia was Google's final mistake. Cancelling something like that isn't like cancelling another one of their chat apps. I honestly can't see any developers ever trusting Google again with anything that hasn't yet turned a big profit for Google. It's going to be really, really hard for Google to launch any big products in the future, which means their future growth prospects are crap.
Re: (Score:2)
Re: (Score:2)
Re: Here is what happened (Score:2)
Android has already delivered Google a big profit, and continues to do so, so I don't think it's at risk. But for new things going forward, no matter how ambitious, Google has signaled that if it doesn't offer a quick ROI, it's DOA. They gave that stupid Google Plus thing a much bigger chance, and it likely saw even less revenue than Stadia.
I personally didn't ever use Stadia, but you don't launch something like that and promote it as much as they did and then kill it as soon as they did. Even if they fou
Re: (Score:1)
I cannot take seriously somebody who is so pretentious as to put the names C++, Rust and Go into typewriter font but doesn't do that for "USB".
Re: (Score:2)
I cannot take seriously somebody who is so pretentious as to put the names C++, Rust and Go into typewriter font but doesn't do that for "USB".
The USB sig logo isn't in typewriter font. So that's fine.
Re: (Score:2)
Don't forget ChromeOS which seems to be slowly being shunted to one side in favour of Android.
What a ridiculous duplication of effort. Only a company with money to burn but little direction could develop multiple OSes side by side and be quite happy to bin one or more of them.
Re: (Score:2)
Originally it was supposed to be a core C++ kernel and a lot of Go in the system/userland (think of iOS and Objective-C), along with a legacy Java runtime. They made the mistake of listening to Eric Raymond. Early on they found Go was rather useless for any systems work in Fuchsia (big hint: no, Go is NOT going to replace C, though Rust is a different story). They eventually banished Go entirely from Fuchsia development outside of pure application work, and have been moving more of it to Rust over the last few years.
Re: (Score:2)
Go is a fine language, it's designed to replace server software. That's why it's popular in devops.
Re: Fuchsia OS (Score:2)
Eh... IMO its attempt at being simple lures the programmer into too many anti-patterns. Try asynchronous programming in it. Try mimicking join-all-like behavior from Rust or C#. In those languages, it's dead simple. In Go it's a hot mess that reminds me of cable TV: 500 channels and nothing's on.
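For what it's worth, the "join all" pattern really is short in Rust; a hypothetical sketch using only std threads (no async runtime assumed):

```rust
use std::thread;

fn main() {
    // Spawn a batch of workers and collect their handles.
    let handles: Vec<_> = (0..4)
        .map(|i| thread::spawn(move || i * i))
        .collect();

    // "Join all": wait for every worker and gather results in spawn order.
    let results: Vec<i32> = handles
        .into_iter()
        .map(|h| h.join().expect("worker panicked"))
        .collect();

    assert_eq!(results, vec![0, 1, 4, 9]);
    println!("{:?}", results);
}
```

The same shape works with `futures::future::join_all` in async Rust; in Go the equivalent typically means hand-wiring a `sync.WaitGroup` plus a channel to collect results.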
Re: Fuchsia OS (Score:2)
I find Go great for tedious work like processing files and network requests and producing output files and responses.
Re: (Score:2)
>I find Go great for tedious work like processing files and network requests and producing output files and responses.
Tedious? That's the fun stuff.
Re: (Score:2)
Sounds like you are trying to force your preferred paradigm onto the language, rather than using the paradigm the language was designed with. That always leads to unhappiness.
Re: Fuchsia OS (Score:2)
No, just had an application already written, and needed to add a feature to it that needed asynchronous access. That's not a matter of design paradigm.
Re: (Score:2)
Indeed, I do find it great for building middleware-ish integration servers and quick integration tools; things I would have used python or ruby for, and before that, maybe perl. For that simplicity, a large ecosystem, and fast build times do help a lot.
Re: (Score:2)
I will say I miss Perl's parsing capabilities.
Re: (Score:2)
But you sure as hell wouldn't want this in systems programming, and especially not in embedded, where its yo-yo-like memory consumption and the device's physical-only memory would cause all kinds of headaches.
Re: (Score:3)
Google chose to use a SafeStack [llvm.org] scheme for C, C++ and Rust for protecting against stack smashing.
However, Go has its own ABI with lots of small stacks, and is not interoperable with SafeStack. The safety of SafeStack relies on there not being any other ABI in the program -- because that could leak the safe stack pointer.
Re: (Score:2)
What happened to Fuchsia OS?
It rusted.
Re: (Score:2)
Google: securely spies on you and delivers ads (Score:5, Interesting)
Google does 2 things. They spy on you. They deliver ads. Everything else is secondary or unintended consequence.
Which of those 2 things is this OS designed to do?
And how long until they cancel this project, anyway?
I'd love to see a headline, "Google project *not* canceled! First ever!"
Re: (Score:2)
Without their ad business generating a gazillion dollars a year, Google would have gone out of business years ago.
You mean a business can't survive without their main source of income? What a revelation!
Re:Google: securely spies on you and delivers ads (Score:5, Insightful)
Google does 2 things. They spy on you. They deliver ads. Everything else is secondary or unintended consequence.
Which of those 2 things is this OS designed to do?
This "secure" OS is designed to prevent the device "owner" from tampering with the system and hardware, so the spying and ads features cannot be disabled or removed.
Another week, another Google experiment... (Score:4, Insightful)
... gets tossed over the fence. They'll play with it for a year or 2 then it'll be quietly dumped along with any 3rd parties who spent time and effort working with it.
No one's falling for this shit anymore, Google.
Re: (Score:2)
Indeed. Any effort invested into this is wasted. It is like they only do "innovation" for the press releases now. If that was ever different with Google.
Re: (Score:2)
They should just get it over with and release a product called "April's Fool".
Black box burps out code (Score:3, Insightful)
Secure? (Score:5, Insightful)
it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability.
lol *grabs popcorn*
Re: (Score:2)
They really made _that_ statement? Well. Talk about the Big Lie approach...
Re: (Score:3)
L4 is fantastic. I used it quite a bit back in the day when I was doing embedded work. Beats the snot out of competitor microkernels.
Re: (Score:2)
Well, yes. But this does not mean what you apparently think it means.
The claim, however, was "mathematically proven secure, with guaranteed confidentiality, integrity, and availability", and that is simply not possible, as "guaranteed confidentiality, integrity, and availability" includes a lot of aspects that are not accessible to a mathematical proof at all; hence any such proof does very much not assure that things are secure in the real world.
Re: (Score:2)
Their silly marketing claim is just that: A silly marketing claim.
However, what it does mean, is that it is formally proven that there are no bugs in the kernel.
This of course says nothing for Google's own code which implements the microkernel servers.
I was just pointing to the kernel of truth that the marketing drivel was built upon.
Re: (Score:2)
However, what it does mean, is that it is formally proven that there are no bugs in the kernel
And that is actually not what it means. It means there are no violations of the spec in the kernel. If the spec has bugs (about as likely as code bugs, hence quite likely) or is incomplete (basically assured as a complete spec is pretty much impossible for anything of some complexity), then the kernel is free to have a lot of bugs and the mathematical proof will still show that the kernel satisfies the spec. Incidentally, the proof and proof-assist system used to verify the proof can also have bugs. That is less likely though.
Re: (Score:2)
And that is actually not what it means. It means there are no violations of the spec in the kernel. If the spec has bugs (about as likely as code bugs, hence quite likely) or is incomplete (basically assured as a complete spec is pretty much impossible for anything of some complexity), then the kernel is free to have a lot of bugs and the mathematical proof will still show that the kernel satisfies the spec. Incidentally, the proof and proof-assist system used to verify the proof can also have bugs. That is less likely though.
Bugs may have been a bit broad for me to have said.
What it actually means is that it is free of any programming errors, not just that it is free of violations of the spec (though that's certainly implicit).
As to the "likeliness" of "spec bugs", or incompleteness, those are a vanishingly small percentage of all bugs located (because they're the easiest to locate: lowest fruit gets picked first).
L4 being mature and open means there are highly unlikely to be bugs of that class.
No Really. (Score:1)
It's possible to formally prove some software the same way any other math is proven. It's a horrible headache, and I actually had nightmares in school that I'd graduate and find myself doing it for the rest of my life.
Of course they can still make mistakes and security problems can exist outside the kernel but this + Rust is an excellent first step as much as I like Google less and less these days.
Re: (Score:2)
No, it is actually not. It is possible (but usually not feasible) to do it for an idealized model of compiler, machine and using the formal semantics of the language (if it has one). It also requires a formal model of what happens on the other side of all interfaces and there we will always see gross simplifications.
I have some background in the formal verification space. It is nice in some contexts and can serve as a form of diversification. But it cannot prove any real world thing is secure. As any testin
Re:Secure? (Score:5, Informative)
The seL4 microkernel isn't new, and isn't from Google. It is one of several in the L4 family, and pretty well known in the OS research community for being the first to have been formally verified by a proof-checker to work correctly ... under certain assumptions.
You can read more about the proof [www.sel4.systems]. A bunch of research papers [trustworthy.systems] have also been published about it.
Re: (Score:3)
The assumptions include a huge caveat where they explicitly state that their assumptions about side-channels are not true in the real world. Being that side-channel attacks have been some of the biggest headline-making ones in the past few years I think that's a pretty huge gaping hole right there.
Making headline claims about "mathematically proven secure, with guaranteed confidentiality, integrity, and availability" when you know that is only true under certain not-real-world conditions is just a recipe for trouble.
Re:Secure? (Score:5, Informative)
The assumptions include a huge caveat where they explicitly state that their assumptions about side-channels are not true in the real world. Being that side-channel attacks have been some of the biggest headline-making ones in the past few years I think that's a pretty huge gaping hole right there.
Making headline claims about "mathematically proven secure, with guaranteed confidentiality, integrity, and availability" when you know that is only true under certain not-real-world conditions is just a recipe for trouble.
There is a problem with formally showing resistance to side channel and fault injection attacks - It's hard and no one has done it.
My job has me working on side channel and fault injection mitigation methods in silicon. It's the opposite of cryptography, it's well engineered obfuscation.
To get to higher levels of security assurance, you need SC and FI mitigation stuff both in the hardware and software, along with formally showing the rest of the code works as intended. It's not an either/or, you need all three. You need to get the crypto right too, so that's four things.
This is why higher level security certification schemes like FIPS-140 L3 and L4 focus so much on the box. It's hard to sidechannel the HW when the box will detect your attempts to get inside.
My problem space is keeping a long term secret within the silicon, which is a much harder problem when you assume the adversary has possession of the silicon.
I wouldn't diss someone for achieving formal correctness for code. It's fixing what they can fix in the domain they work in and it is not trivial work.
Re: (Score:2)
It might be a gaping hole, but if you manage to close that gaping hole - the most difficult hole to close - you can't just have left the other holes open, nor does it make claiming the other holes are far more closed than in other software useless, less valuable, misleading, etc. The consumers of this work are people who are well aware of the constraints in the claims being made. There's no "recipe for trouble here", just posters who seem to think they know something technical experts working in the domain
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Formally proving software is an actual thing. There are theorem provers (Agda, Coq etc) which can assist in this work.
Google is not making a statement that they will formally prove their own software, only that they are building on a microkernel which has been formally proven. Which is still a big deal. Formally proving functional and security requirements of a kernel is a lot of work.
https://en.wikipedia.org/wiki/... [wikipedia.org].
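As a toy illustration of what "machine-checked" means (a trivial Lean 4 lemma, nothing to do with seL4's actual proofs, which cover functional correctness of the C implementation):

```lean
-- A machine-checked proof: the checker rejects it unless every step is valid.
theorem double_nonneg (n : Nat) : n ≤ n + n := by
  exact Nat.le_add_left n n
```

seL4's proof is the same idea scaled up enormously: the theorem is "the C code refines the abstract spec", and the checker (Isabelle/HOL in seL4's case) verifies every step.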
Re: (Score:2)
The point is that the statement 'proven correct' inspires a confidence in a layman that isn't quite deserved. You are allowed to stipulate assumptions (that don't hold in the real world) and you may have a sound proof or unsound proof, still counts. It's also trying to imply that the entire ecosystem is 'proven correct', when in reality only a small chunk of it is (mathematical proof of 'correct' code is impractical for a full scale operating environment. Incidentally, 'correct' doesn't mean it implements f
Re: (Score:2)
laymen are not consumers of the claims nor the work, so why would you care what they think?
Re: (Score:2)
>The point is that the statement 'proven correct' inspires a confidence in a layman that isn't quite deserved.
Perhaps it is. One of the biggest caveats for the proof are that there are no side channel attacks. For a normal person, a layman, this is likely the case. No one is scanning their EMF signature.
>You are allowed to stipulate assumptions (that don't hold in the real world)
What other assumption did you see that does not hold up in the real world, because this one does hold up for most people.
Re: (Score:2)
Will be abandoned within 5 years... (Score:2, Redundant)
The usual mode Google operates in. So better not invest any effort here.
Re: (Score:2)
To the person modding this "redundant": I completely agree. Still merits being pointed out.
Google definition of security (Score:2, Redundant)
Where "security" is keeping your information safe from anyone but its rightful owner - that owner being Google.
Google and security!?! (Score:4, Insightful)
Never mind a head of lettuce... (Score:4, Funny)
... let's bet on the lifespan of Liz Truss's career vs. Google caring about KataOS.
Place yer bets! (Score:3)
So let's start placing bets - how long before it shows up here?
https://killedbygoogle.com/ [killedbygoogle.com]
From another angle (Score:2)
Google being Google, is the plan to publish this as Open Source in the hope that they get free labor to improve their next generation of human monitoring devices? For some reason I don't think the intent of adding ML to a smart device is to improve wifi reception.
People have the wrong idea about this. (Score:2)
Re: (Score:2)
Note that 'security' in this case is as often defined as 'operates as the manufacturer intends even if the user doesn't want it to'.
For example, TEE environments are most commonly in service trying to keep media in an unreachable place to harden DRM restrictions.
There are use cases that actually can protect the user, but generally that's a tiny subset of TEE capabilities.
Re: (Score:2)
The encryption co-processor is a couple bucks and they dropped the requirement anyway; the Apple tax for MFI is almost certainly the biggest hurdle.
Re: (Score:2)
For those who don't understand what this is really about: https://opentitan.org/ [opentitan.org] - this is stuff that generally runs baseband and security processors in your phone.
This is entirely for stuff where security is paramount to anything else. It's not for regular consumer devices. Your average IoT embedded shit probably need not apply, they struggle to run the super basic webservers embedded in themselves. Look at how many devices struggle or don't even bother supporting HomeKit - they do that because it has always had higher requirements. (They'd love to support it, but chips bloat BOMs.)
This: https://opentitan.org/ [opentitan.org]
and in two presentations I saw at a security conference last week...
had me furrowing my eyebrows over the use of RoT (Root of Trust). Because I design things that are RoTs, and in all three instances they were referring to much bigger, more complex things as being RoTs. My understanding for many years is that the root of trust is some information secured away that can be used, like a root cert in a CA or a key in a PUF, etc. They underlie a system that does attestation or signing or
Hard to have confidence in a google project (Score:3, Informative)
Google drops such projects all the time. Why invest in a google OS, or language, or social platform, or whatever; when google is so likely to pull the plug?
security comes with a cost (Score:2)
- You're spending your time doing a lot of MFAs on different levels for different things
- Everything JS is treated as malicious software
- UX is broken or non existent
I'll pass thanks
KakaOS or CacaOS (Score:2)
A better, more fitting name for anything Google dumps out and then later flushes but only after letting it sit around for a few months to a couple of years.
If it's yellow,
Let it mellow.
If it's brown,
Flush it down.