Google Announces KataOS (phoronix.com)

Last Friday, Google announced the release of KataOS, a security-minded operating system focused on embedded devices running ambient machine learning workloads. As Phoronix notes, it uses the Rust programming language and is "built atop the seL4 microkernel as its foundation." From Google's Open-Source Blog: As the foundation for this new operating system, we chose seL4 as the microkernel because it puts security front and center; it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability. Through the seL4 CAmkES framework, we're also able to provide statically-defined and analyzable system components. KataOS provides a verifiably-secure platform that protects the user's privacy because it is logically impossible for applications to breach the kernel's hardware security protections and the system components are verifiably secure. KataOS is also implemented almost entirely in Rust, which provides a strong starting point for software security, since it eliminates entire classes of bugs, such as off-by-one errors and buffer overflows.

The current GitHub release includes most of the KataOS core pieces, including the frameworks we use for Rust (such as the sel4-sys crate, which provides seL4 syscall APIs), an alternate rootserver written in Rust (needed for dynamic system-wide memory management), and the kernel modifications to seL4 that can reclaim the memory used by the rootserver.
KataOS code is being worked on via GitHub under the AmbiML umbrella.
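
The summary's claim that Rust "eliminates entire classes of bugs, such as off-by-one errors and buffer overflows" comes down to the language's built-in bounds checking. A minimal sketch of what that means in practice (plain Rust, not KataOS or sel4-sys code):

```rust
// Out-of-bounds access in safe Rust is caught, rather than silently reading
// or writing adjacent memory as it can in C.
fn main() {
    let buf = [0u8; 4];

    // Checked access: `get` returns None past the end instead of overflowing.
    assert_eq!(buf.get(4), None);

    // Direct indexing past the end panics deterministically instead of
    // corrupting memory:
    // let oops = buf[4]; // panics with "index out of bounds"

    println!("len = {}", buf.len());
}
```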

  • by phantomfive ( 622387 ) on Tuesday October 18, 2022 @05:04AM (#62976349) Journal

    What happened to Fuchsia OS?

    • by Laxator2 ( 973549 ) on Tuesday October 18, 2022 @05:23AM (#62976387)

      Fuchsia is meant to allow better ad revenue (TL;DR: search for "prevailed" in the article): https://www.bnnbloomberg.ca/pr... [bnnbloomberg.ca]
      There are also security issues expected in a new OS: https://blog.quarkslab.com/pla... [quarkslab.com]

      • Send it to the graveyard.

        • by ArmoredDragon ( 3450605 ) on Tuesday October 18, 2022 @07:17AM (#62976671)

          IMO Stadia was Google's final mistake. Cancelling something like that isn't like cancelling another one of their chat apps. I honestly can't see any developers ever trusting Google again with anything that hasn't already turned a big profit for Google. It's going to be really, really hard for Google to launch any big products in the future, which means their future growth prospects are crap.

          • This is why I have not bothered, as a mobile developer, to get involved in the Android market. Google is too much "Here today, gone tomorrow."
            • You should try developing for Meta then.
            • Android has already delivered Google a big profit, and continues to do so, so I don't think it's at risk. It's just that for new things going forward, no matter how ambitious, Google has signaled that if something doesn't offer a quick ROI, it's DOA. They gave that stupid Google Plus thing a much bigger chance, and it likely saw even less revenue than Stadia.

              I personally didn't ever use Stadia, but you don't launch something like that and promote it as much as they did and then kill it as soon as they did. Even if they fou

      • by Anonymous Coward

        I cannot take seriously somebody who is so pretentious as to put the names C++, Rust and Go into typewriter font but doesn't do that for "USB".

        • I cannot take seriously somebody who is so pretentious as to put the names C++, Rust and Go into typewriter font but doesn't do that for "USB".

          The USB sig logo isn't in typewriter font. So that's fine.

    • by Viol8 ( 599362 )

      Don't forget ChromeOS which seems to be slowly being shunted to one side in favour of Android.

      What a ridiculous duplication of effort. Only a company with money to burn but little direction could develop multiple OSes side by side and be quite happy to bin one or more of them.

    • by dyfet ( 154716 )

      Originally it was supposed to be a core C++ kernel with a lot of Go in the system/userland (think of iOS and Objective-C), along with a legacy Java runtime. They made the mistake of listening to Eric Raymond. Early on they found Go was rather useless for any systems work in Fuchsia (big hint: no, Go is NOT going to replace C, though Rust is a different story). They eventually banished Go from Fuchsia development entirely outside of pure application work, and have been moving more of it to Rust over the last few years.

      • Go is a fine language; it's designed for server software. That's why it's popular in devops.

        • Eh... IMO its attempt at being simple pushes the programmer into too many anti-patterns. Try asynchronous programming in it. Try mimicking join_all-like behavior from Rust or C#. In those languages, it's dead simple. In Go it's a hot mess that reminds me of cable TV: 500 channels and nothing's on.
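
          For reference, the join_all pattern being compared against really is a one-liner on the Rust side. A minimal sketch, assuming the futures crate as a dependency (ordinary Rust, nothing KataOS-specific):

          ```rust
          // Run several async tasks and wait for all of them with a single call.
          use futures::executor::block_on;
          use futures::future::join_all;

          async fn work(id: u32) -> u32 {
              // Stand-in for real async work (a network call, a file read, etc.).
              id * 2
          }

          fn main() {
              // join_all collects every future's result, preserving order.
              let results = block_on(join_all((0..4).map(work)));
              println!("{:?}", results); // [0, 2, 4, 6]
          }
          ```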

          • I find Go great for tedious work like processing files and network requests and producing output files and responses.

            • >I find Go great for tedious work like processing files and network requests and producing output files and responses.

              Tedious? That's the fun stuff.

          • Sounds like you are trying to force your preferred paradigm onto the language, rather than using the paradigm the language was designed with. That always leads to unhappiness.

        • by dyfet ( 154716 )

          Indeed, I do find it great for building middleware-ish integration servers and quick integration tools; things I would have used Python or Ruby for, and before that, maybe Perl. For that kind of work, its simplicity, large ecosystem, and fast build times help a lot.

        • by DrXym ( 126579 )
          It's popular in devops because it is a swiss army knife. The standard library is stupidly expansive so it's easy to knock lots of things together. Performance is quite good too - more than acceptable.

          But you sure as hell wouldn't want this in systems programming, and especially not in embedded, where its yo-yo-like memory consumption and the device's physical-only memory would cause all kinds of headaches.

      • by Misagon ( 1135 )

        Google chose to use a SafeStack [llvm.org] scheme for C, C++ and Rust for protecting against stack smashing.
        However, Go has its own ABI with lots of small stacks, and is not interoperable with SafeStack. The safety of SafeStack relies on there not being any other ABI in the program -- because that could leak the safe stack pointer.

    • What happened to Fuchsia OS?

      It rusted.

    • It's gone to plaid.
  • by iAmWaySmarterThanYou ( 10095012 ) on Tuesday October 18, 2022 @05:49AM (#62976441)

    Google does 2 things. They spy on you. They deliver ads. Everything else is secondary or an unintended consequence.

    Which of those 2 things is this OS designed to do?

    And how long until they cancel this project, anyway?

    I'd love to see a headline, "Google project *not* canceled! First ever!"

  • by Viol8 ( 599362 ) on Tuesday October 18, 2022 @05:49AM (#62976443) Homepage

    ... gets tossed over the fence. They'll play with it for a year or 2 then it'll be quietly dumped along with any 3rd parties who spent time and effort working with it.

    No one's falling for this shit anymore, Google.

    • by gweihir ( 88907 )

      Indeed. Any effort invested into this is wasted. It is like they only do "innovation" for the press releases now. If it was ever any different with Google.

    • They should just get it over with and release a product called "April's Fool".

  • by Anachronomicon ( 10196039 ) on Tuesday October 18, 2022 @06:09AM (#62976483)
    My perception of Google as a software provider is of a black box from which code gets emitted. I never see discussion or engagement with community engineers over what they want, bugs they report, or features they'd like to see done a little differently. Google Groups are always just a graveyard of unhappy users, locked threads, and tone-deaf "Thanks for the report, we'll take it into consideration!" community leaders. So, pass. Google thinks it's too smart and capable to need outside input, and I think that's just hubris. I'll wait for someone to take Google's experiment and make it market ready.
  • Secure? (Score:5, Insightful)

    by bazmail ( 764941 ) on Tuesday October 18, 2022 @06:19AM (#62976503)

    it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability.

    lol *grabs popcorn*

    • by gweihir ( 88907 )

      They really made _that_ statement? Well. Talk about the Big Lie approach...

      • seL4 was formally proven functionally correct [sigops.org] (free of common defects)

        L4 is fantastic. I used it quite a bit back in the day when I was doing embedded work. Beats the snot out of competitor microkernels.
        • by gweihir ( 88907 )

          Well, yes. But this does not mean what you apparently think it means.

          The claim, however, was "mathematically proven secure, with guaranteed confidentiality, integrity, and availability", and that is simply not possible, as "guaranteed confidentiality, integrity, and availability" includes a lot of aspects that are not accessible to a mathematical proof at all, and hence any such proof very much does not assure that things are secure in the real world.

          • No, it does mean what I think it means.

            Their silly marketing claim is just that: A silly marketing claim.
            However, what it does mean, is that it is formally proven that there are no bugs in the kernel.
            This of course says nothing about Google's own code, which implements the microkernel servers.

            I was just pointing to the kernel of truth that the marketing drivel was built upon.
            • by gweihir ( 88907 )

              However, what it does mean, is that it is formally proven that there are no bugs in the kernel

              And that is actually not what it means. It means there are no violations of the spec in the kernel. If the spec has bugs (about as likely as code bugs, hence quite likely) or is incomplete (basically assured, as a complete spec is pretty much impossible for anything of some complexity), then the kernel is free to have a lot of bugs and the mathematical proof will still show that the kernel satisfies the spec. Incidentally, the proof and proof-assist system used to verify the proof can also have bugs. That is less likely though.

              • And that is actually not what it means. It means there are no violations of the spec in the kernel. If the spec has bugs (about as likely as code bugs, hence quite likely) or is incomplete (basically assured as a complete spec is pretty much impossible for anything of some complexity), then the kernel is free to have a lot of bugs and the mathematical proof will still show that the kernel satisfies the spec. Incidentally, the proof and proof-assist system used to verify the proof can also have bugs. That is less likely though.

                "Bugs" may have been a bit broad a term for me to have used.
                What it actually means is that it is free of any programming errors, not just that it is free of violations of the spec (though that's certainly implicit).

                As to the "likeliness" of spec bugs, or incompleteness, those are a vanishingly small percentage of all bugs found (because they're the easiest to locate: the lowest fruit gets picked first).
                L4 being mature and open means bugs of that class are highly unlikely.

      • It's possible to formally prove some software the same way any other math is proven. It's a horrible headache, and I actually had nightmares in school that I'd graduate and find myself doing it for the rest of my life.

        Of course they can still make mistakes, and security problems can exist outside the kernel, but this plus Rust is an excellent first step, as much as I like Google less and less these days.

        • by gweihir ( 88907 )

          No, it is actually not. It is possible (but usually not feasible) to do it for an idealized model of the compiler and machine, using the formal semantics of the language (if it has one). It also requires a formal model of what happens on the other side of all interfaces, and there we will always see gross simplifications.

          I have some background in the formal verification space. It is nice in some contexts and can serve as a form of diversification. But it cannot prove any real world thing is secure. As any testin

    • Re:Secure? (Score:5, Informative)

      by Misagon ( 1135 ) on Tuesday October 18, 2022 @06:51AM (#62976591)

      The seL4 microkernel isn't new, and isn't from Google. It is one of several in the L4 family, and pretty well known in the OS research community for being the first to have been formally verified by a proof-checker to work correctly ... under certain assumptions.

      You can read more about the proof [www.sel4.systems]. A bunch of research papers [trustworthy.systems] have also been published about it.

      • The assumptions include a huge caveat where they explicitly state that their assumptions about side-channels are not true in the real world. Being that side-channel attacks have been some of the biggest headline-making ones in the past few years I think that's a pretty huge gaping hole right there.

        Making headline claims about "mathematically proven secure, with guaranteed confidentiality, integrity, and availability" when you know that is only true under certain not-real-world conditions is just a recipe for trouble.

        • Re:Secure? (Score:5, Informative)

          by TechyImmigrant ( 175943 ) on Tuesday October 18, 2022 @10:51AM (#62977227) Homepage Journal

          The assumptions include a huge caveat where they explicitly state that their assumptions about side-channels are not true in the real world. Being that side-channel attacks have been some of the biggest headline-making ones in the past few years I think that's a pretty huge gaping hole right there.

          Making headline claims about "mathematically proven secure, with guaranteed confidentiality, integrity, and availability" when you know that is only true under certain not-real-world conditions is just a recipe for trouble.

          There is a problem with formally showing resistance to side channel and fault injection attacks - It's hard and no one has done it.
          My job has me working on side channel and fault injection mitigation methods in silicon. It's the opposite of cryptography, it's well engineered obfuscation.

          To get to higher levels of security assurance, you need SC and FI mitigation stuff both in the hardware and software, along with formally showing the rest of the code works as intended. It's not an either/or, you need all three. You need to get the crypto right too, so that's four things.

          This is why higher level security certification schemes like FIPS-140 L3 and L4 focus so much on the box. It's hard to sidechannel the HW when the box will detect your attempts to get inside.

          My problem space is keeping a long term secret within the silicon, which is a much harder problem when you assume the adversary has possession of the silicon.

          I wouldn't diss someone for achieving formal correctness for code. It's fixing what they can fix in the domain they work in and it is not trivial work.

        • by SirSlud ( 67381 )

          It might be a gaping hole, but if you manage to close that gaping hole - the most difficult hole to close - you can't just have left the other holes open, nor does it make claiming the other holes are far more closed than in other software useless, less valuable, misleading, etc. The consumers of this work are people who are well aware of the constraints in the claims being made. There's no "recipe for trouble" here, just posters who seem to think they know something technical experts working in the domain

      • by dargaud ( 518470 )
        Let me guess what those assumptions are: when it's sunk into concrete with its power cord cut off?
      • by keltor ( 99721 ) *
        Security processors in all Apple phones since the A7 (the first in-house iPhone chip) have been L4-based. There was a whole period of time when Apple developers were trying to migrate from their Mach 2.5-based microkernel to running XNU as an L4 process. Someone else ported their entire Mach microkernel to run on L4, and then this ended up getting ported to the Secure Enclave Processor by Apple. (They did it by literally hiring the guy: https://www.linkedin.com/in/ch... [linkedin.com])
    • Formally proving software is an actual thing. There are theorem provers (Agda, Coq, etc.) which can assist in this work; a toy example is shown below.

      Google is not making a statement that they will formally prove their own software, only that they are building on a microkernel which has been formally proven. Which is still a big deal. Formally proving functional and security requirements of a kernel is a lot of work.

      https://en.wikipedia.org/wiki/... [wikipedia.org].
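
      As a toy illustration of what "formally proven" with a proof assistant means (a Lean 4 sketch for illustration only; the actual seL4 proof was done in Isabelle/HOL): the property is written as a type, the proof is a term, and the checker's kernel verifies it rather than relying on testing.

      ```lean
      -- A trivial machine-checked theorem: commutativity of natural-number
      -- addition. The proof term is checked against the stated proposition.
      theorem add_comm_toy (a b : Nat) : a + b = b + a :=
        Nat.add_comm a b
      ```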

      • by Junta ( 36770 )

        The point is that the statement 'proven correct' inspires a confidence in a layman that isn't quite deserved. You are allowed to stipulate assumptions (that don't hold in the real world) and you may have a sound proof or an unsound proof, it still counts. It's also trying to imply that the entire ecosystem is 'proven correct', when in reality only a small chunk of it is (mathematical proof of 'correct' code is impractical for a full scale operating environment. Incidentally, 'correct' doesn't mean it implements f

        • by SirSlud ( 67381 )

          laymen are not consumers of the claims nor the work, so why would you care what they think?

        • by jvkjvk ( 102057 )

          >The point is that the statement 'proven correct' inspires a confidence in a layman that isn't quite deserved.

          Perhaps it is. One of the biggest caveats of the proof is that there are no side-channel attacks. For a normal person, a layman, this is likely the case. No one is scanning their EMF signature.

          >You are allowed to stipulate assumptions (that don't hold in the real world)

          What other assumption did you see that does not hold up in the real world? Because this one does hold up for most people.

    • Computing 101: an operating system is configurable, and privileged calls are secured with granular permissions. Why is it that crazy people think take-it-all-as-is-or-nothing is normal nowadays? I concur: Google has a terrible habit of baking in undesirable things, removing granularity, and dropping support long term. The Chinese figure a bootloader with BusyBox is pretty minimal. The surface area of the web and a browser is too huge to ever declare secure.
  • The usual mode Google operates in. So better not invest any effort here.

    • by gweihir ( 88907 )

      To the person modding this "redundant": I completely agree. Still merits being pointed out.

  • Where "security" is keeping your information safe from anyone but its rightful owner - that owner being Google.

  • by MacColossus ( 932054 ) on Tuesday October 18, 2022 @07:02AM (#62976629) Journal
    Going to Google for security is like going to a hooker for a hug or McDonald's for a salad. Not really their thing...
  • by dskoll ( 99328 ) on Tuesday October 18, 2022 @07:42AM (#62976725) Homepage

    ... let's bet on the lifespan of Liz Truss's career vs. Google caring about KataOS.

  • by nbvb ( 32836 ) on Tuesday October 18, 2022 @08:18AM (#62976795) Journal

    So let's start placing bets - how long before it shows up here?

    https://killedbygoogle.com/ [killedbygoogle.com]

  • "KataOS is intended for use with the ever-growing number of smart devices with a particular emphasis on embedded hardware running machine learning applications"

    Google being Google, is the plan to publish this as Open Source in the hope that they get free labor to improve their next generation of human monitoring devices? For some reason I don't think the intent of adding ML to a smart device is to improve wifi reception.
  • For those who don't understand what this is really about: https://opentitan.org/ [opentitan.org] - this is stuff that generally runs baseband and security processors in your phone. This is entirely for stuff where security is paramount to anything else. It's not for regular consumer devices. Your average IoT embedded shit probably need not apply, they struggle to run the super basic webservers embedded in themselves. Look at how many devices struggle or don't even bother supporting HomeKit - they do that because it has always had higher requirements. (They'd love to support it, but chips bloat BOMs.)
    • by Junta ( 36770 )

      Note that 'security' in this case is as often defined as 'operates as the manufacturer intends even if the user doesn't want it to'.

      For example, TEE environments are most commonly in the service of trying to keep media in an unreachable place to harden DRM restrictions.

      There are use cases that actually can protect the user, but generally that's a tiny subset of TEE capabilities.

    • The encryption co-processor is a couple of bucks, and they dropped the requirement anyway; the Apple tax for MFi is almost certainly the biggest hurdle.

    • For those who don't understand what this is really about: https://opentitan.org/ [opentitan.org] - this is stuff that generally runs baseband and security processors in your phone.

      This is entirely for stuff where security is paramount to anything else. It's not for regular consumer devices. Your average IoT embedded shit probably need not apply, they struggle to run the super basic webservers embedded in themselves. Look at how many devices struggle or don't even bother supporting HomeKit - they do that because it has always had higher requirements. (They'd love to support it, but chips bloat BOMs.)

      This: https://opentitan.org/ [opentitan.org]
      and two presentations I saw at a security conference last week
      had me furrowing my eyebrows over the use of RoT (Root of Trust), because I design things that are RoTs, and in all three instances they were referring to much bigger, more complex things as being RoTs. My understanding for many years is that the root of trust is some information secured away that can be used, like a root cert in a CA or a key in a PUF, etc. They underlay a system that does attestation or signing or

  • by walterbyrd ( 182728 ) on Tuesday October 18, 2022 @10:35AM (#62977177)

    Google drops such projects all the time. Why invest in a google OS, or language, or social platform, or whatever; when google is so likely to pull the plug?

  • High security almost always seems to imply:

    - You're spending your time doing a lot of MFAs on different levels for different things
    - Everything JS is treated as malicious software
    - UX is broken or non-existent

    I'll pass thanks
  • A better, more fitting name for anything Google dumps out and then later flushes but only after letting it sit around for a few months to a couple of years.

    If it's yellow,
    Let it mellow.
    If it's brown,
    Flush it down.
