Chrome: 70% of All Security Bugs Are Memory Safety Issues (zdnet.com)

Roughly 70% of all serious security bugs in the Chrome codebase are memory management and safety bugs, Google engineers said. From a report: Half of the 70% are use-after-free vulnerabilities, a type of security issue that arises from incorrect management of memory pointers (addresses), leaving an opening for attackers to attack Chrome's internal components. The percentage was compiled after Google engineers analyzed 912 security bugs fixed in the Chrome stable branch since 2015, bugs that had a "high" or "critical" severity rating. The figure matches statistics shared by Microsoft: speaking at a security conference in February 2019, Microsoft engineers said that for the past 12 years, around 70% of all security updates for Microsoft products addressed memory safety vulnerabilities.
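
For readers unfamiliar with the bug class, a minimal, contrived C++ sketch of a use-after-free (illustrative only; nothing here is from the Chrome codebase):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        char *name = static_cast<char *>(std::malloc(16));
        std::strcpy(name, "chrome");
        std::free(name);        // the block is returned to the heap here...

        // ...but the dangling pointer is still used. The allocator may have
        // handed the block to someone else, so this write touches memory the
        // program no longer owns -- the bug class described above. Tools like
        // Valgrind and AddressSanitizer flag exactly this access.
        name[0] = 'X';          // use-after-free: undefined behavior
        std::printf("%s\n", name);
        return 0;
    }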
  • by gweihir ( 88907 ) on Monday May 25, 2020 @03:09PM (#60103330)

    And look, it was here:
    https://developers.slashdot.or... [slashdot.org]

  • by fahrbot-bot ( 874524 ) on Monday May 25, 2020 @03:13PM (#60103346)

    Half of the 70% are use-after-free vulnerabilities, ...

    Never free() anything -- problem solved. :-)

    • by SuperKendall ( 25149 ) on Monday May 25, 2020 @03:39PM (#60103434)

      Never free() anything -- problem solved. :-)

      Ahh, you must work on the Chrome codebase...

      • I've literally wondered if Chrome did this deliberately as a temporary stopgap between finding a vulnerability and deciding what to do about it.

      • by dfghjk ( 711126 )

        Do you know what free() is, SuperKendall?

        We know you are an "iOS programmer" who knows emacs is cool and all, but considering you're barely through high school, if that, it has to be asked if you've ever even dealt with memory management yourself. Did they teach that in 10th grade?

    • by lgw ( 121541 ) on Monday May 25, 2020 @04:14PM (#60103530) Journal

      Never free() anything -- problem solved. :-)

You joke, but for the first 5 years of my career I worked in an environment with no dynamic memory allocation. It does indeed solve the problem. Very safe and very fast. Not the easiest thing to code, however.

      • by fahrbot-bot ( 874524 ) on Monday May 25, 2020 @05:20PM (#60103686)

        Never free() anything -- problem solved. :-)

        You joke, but for the first 5 years of my career I worked in an environment with no dynamic memory allocation. It does indeed solve the problem. Very safe and very fast. Not the easiest thing to code, however.

        Probably better/easier than working on AIX with the "SIGDANGER" signal... At least way back; I don't know if this is still true.

        As I understand it... IBM (or someone) did a study and noted that applications that allocate large amounts of memory often end up not using all that memory, so, in an effort to alleviate the need to pre-allocate large amounts of system swap, AIX would simply return successfully from malloc() for the requested amount of memory w/o actually allocating it to the process. AIX would then allocate the memory on-the-fly if/when the process tried to access it. However... if memory/swap wasn't actually available at that time, AIX would send the process the SIGDANGER signal to indicate that the OS had lied and the memory the process thought it had wasn't actually available. As a result, AIX administrators had to allocate even *more* system swap to prevent this situation from ever occurring.

        I hope this isn't the behavior anymore.
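
        A sketch of the overcommit behavior described above, Linux-flavored (the AIX/SIGDANGER details differ, but the malloc-now-pay-later shape is the same):

          #include <cstdio>
          #include <cstdlib>
          #include <cstring>

          int main() {
              // On an overcommitting kernel this malloc usually "succeeds" even
              // if nothing close to 64 GiB of RAM plus swap is actually free...
              const std::size_t huge = 64ULL * 1024 * 1024 * 1024;
              char *p = static_cast<char *>(std::malloc(huge));
              std::printf("malloc(64 GiB) returned %p\n", static_cast<void *>(p));

              // ...because pages are only committed when touched. If the system
              // cannot back them at that point, the process dies (Linux OOM
              // killer) or gets signaled (AIX's SIGDANGER) -- long after malloc
              // reported success.
              if (p) {
                  std::memset(p, 1, huge);  // this line can die, not the malloc
                  std::free(p);
              }
              return 0;
          }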

        • by Waccoon ( 1186667 ) on Monday May 25, 2020 @09:02PM (#60104154)

          Reminds me of "gimme.dll", a memory allocation process used in some Electronic Arts games. Back in the days of Win98, this DLL used a number of aggressive techniques to push the OS into a high-pressure memory situation, forcing it to give up as much memory as possible to the game (kernel memory paging and all that). It worked great, and ensured the game got every damn byte of physical memory possible. The problem was that the game would take ALL the memory in the machine, whether the game could use it or not. The games only needed a couple hundred megs, but would suck up the entire 2GB memory pool if that's what you had installed in your machine.

          Naturally, all these games stopped working when Win2000 came out, because the OS would shut them down for being greedy assholes. Also naturally, EA never patched the games. You had to use the Application Compatibility Toolkit (ACT) or Win98 compatibility mode in XP to get these games working.

          • This is the kind of example I was looking for, to push back at people who act like a browser (or any program, for that matter) eating up memory is fine and never something to question, as opposed to predicting what it needs and taking only that.
        • by _merlin ( 160982 )

          I've got bad news for you. All major operating systems overcommit virtual memory now. You get pages that read as zero and get no backing store (RAM or swap) until you write to them. Linux, macOS, Windows and iOS all do this. Windows CE and Symbian were the last things (besides VxWorks and other RTOSes) where you really knew what you were working with when you allocated memory.

          • Windows famously doesn't overcommit virtual memory. Unless you have evidence this has changed recently? (A thread on Windows can run out of memory by hitting its stack's guard page but that's not really the same thing.)

      • Never free() anything -- problem solved. :-)

        You joke, but for the first 5 years of my career i worked in an environment with no dynamic memory allocation. It does indeed solve the problem. Very safe and very fast. Not the easiest thing to code, however.

        That's a core principle in the embedded world. No allocation means no leaks and no memory shortages. You try to avoid recursion, too, to avoid stack-depth issues.
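
        A minimal sketch of that embedded pattern (names illustrative, not from any particular RTOS): all storage is reserved at build time in fixed pools, so there is nothing to free and nothing to leak.

          #include <cstddef>
          #include <cstdint>

          // Every message buffer the system will ever need, sized at build time.
          // No malloc, no free, no fragmentation, no use-after-free.
          struct Message {
              std::uint32_t id;
              std::uint8_t  payload[60];
          };

          constexpr std::size_t kPoolSize = 32;
          static Message g_pool[kPoolSize];    // lives in .bss
          static bool    g_in_use[kPoolSize];

          // "Allocate" by claiming a free slot. Exhaustion is a design error you
          // find in testing, not a runtime heap failure.
          Message *pool_acquire() {
              for (std::size_t i = 0; i < kPoolSize; ++i)
                  if (!g_in_use[i]) { g_in_use[i] = true; return &g_pool[i]; }
              return nullptr;
          }

          void pool_release(Message *m) {
              g_in_use[m - g_pool] = false;    // recover the slot index
          }

        (A real version would add locking or interrupt-safe slot claiming, but the fixed-budget idea is the point.)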

      • by dfghjk ( 711126 )

        Embedded systems frequently have no dynamic memory allocation. If you think that's hard, or unusual, you aren't much of a programmer. Furthermore, embedded systems far outnumber general purpose computers, so arguably the vast majority of all programming lacks that feature.

        Nearly 100% of the projects I have worked on in my career have had no dynamic memory allocation.

    • This is in essence why most systems have a sandbox environment.

      But this is what I consider one of the biggest drawbacks of using lower-level languages like C/C++. While you are coding, you need to focus on getting the business-requirement logic to work *and* make sure every memory variable is properly allocated and destroyed. You can let your program just use more and more RAM; with personal computers averaging 16 gigs of RAM, and servers with hundreds of gigs, you can probably get away with ju

  • This can be fixed (Score:5, Interesting)

    by lgw ( 121541 ) on Monday May 25, 2020 @03:15PM (#60103354) Journal

    I've worked on projects where we took dangling pointers seriously. We wrote code in such a way that we'd quickly discover any use-after-free (or, far worse, use-after-re-allocation). It's amazing how often that error was found. Even scarier, though, was how often we'd detect a pointer to just some random location. Think about how often any code bumps into null pointers/references due to bugs, and then realize that's only checking for one possible wrong value.

    Anyhow, in a context where either security is paramount, or preventing data corruption is paramount, it's vital to use a coding style where you do all the tedious work to check every pointer passed into every function. It's a real eye-opener.
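
    One way to make that per-function checking concrete (a sketch of the general technique, not this poster's actual codebase; names are hypothetical): stamp every live allocation with a magic value and destroy the stamp on free.

      #include <cassert>
      #include <cstdint>
      #include <cstdlib>

      constexpr std::uint32_t kAlive = 0xA11C0DE5;  // stamped on allocation
      constexpr std::uint32_t kDead  = 0xDEADBEEF;  // stamped on free

      struct Header { std::uint32_t magic; };

      void *checked_alloc(std::size_t n) {
          auto *h = static_cast<Header *>(std::malloc(sizeof(Header) + n));
          h->magic = kAlive;
          return h + 1;                   // caller sees the bytes after the header
      }

      // Every function that accepts a pointer validates it before use. A dangling
      // or random pointer fails here instead of corrupting memory quietly.
      void check(const void *p) {
          assert(p && (static_cast<const Header *>(p) - 1)->magic == kAlive);
      }

      void checked_free(void *p) {
          check(p);
          auto *h = static_cast<Header *>(p) - 1;
          h->magic = kDead;               // so a later check() trips the assert
          std::free(h);
      }

    Best-effort, of course: once the allocator reuses the block, the stamp can be overwritten -- which is why this only works paired with checking at every function boundary, as described above.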

    • If you run your code in Valgrind (and you should), it will check that for you. Helps you build habits that avoid these problems.
      • by lgw ( 121541 )

        Tools help a bit. Serious projects use every available tool, yet all combined are inadequate. You simply must program in a very different way to take this stuff seriously. Which, of course, is the norm in stuff like life-safety code. You won't find people doing malloc and free in anti-lock brake software, for example, so use-after-free just doesn't come up. Other bugs still do, of course.

        • Some tools produce monstrous numbers of warnings. QAC, for example, can produce hundreds of thousands in a large project, and good luck finding the 1 or 2 that are real issues.

          The problem is these are mostly standards violations, which are not bugs, but might be if the programmer didn't consider the odd case the standard is worrying about.

          What you end up with is code that "fixes" the issue by introducing casts and extra parentheses, making the code much harder to understand but satisfying to QAC.

          You h

        • by dfghjk ( 711126 )

          "You won't find people doing malloc and free in anti-lock brake software, for example..."

          Curious you'd use a very ordinary example of embedded/realtime software that doesn't use dynamic memory after posting above that such a thing is somehow unusual. It's as though you like to talk more than think.

          • by lgw ( 121541 )

            Chrome was not written this way, is the point. Perhaps it should have been.

  • tl;dr version (Score:5, Insightful)

    by 93 Escort Wagon ( 326346 ) on Monday May 25, 2020 @03:17PM (#60103362)

    It's another Rust article.

    • by Tailhook ( 98486 )

      Get used to them. When an industry finds itself plagued by a severe problem and a solution for that problem emerges you're going to see a lot of articles.

      That's the thing about the memory safety discussion; one side offers happy talk (C++2037.5 fixes that) and bad advice (program better herp derp) and the other side has a straightforward solution and is also equipped with limitless evidence delivered fresh each day at no cost.

      Intelligent people have no trouble selecting the correct side of that debate.

    • It's another Rust article.

      No, it's the same Rust article again.

  • by Anonymous Coward

    Paging Gerald Butler. Will Gerald Butler please pick up the white courtesy phone?

  • by Anonymous Coward

    Dinosaur here.

    One thing I've noticed over the decades is that the more tightly integrated the development environment is, the more likely there are memory issues. Why? Because the programming monkey assumes that the IDE is always right and always takes care of it.

    I looked at a piece of C code written completely in the Eclipse framework. Yes, Eclipse would compile it, but I had to really hunt to get Eclipse to run lint or show me the info/warning-level messages from the compiler. 2000 lines of C code, 4500 lines of messages from lint, and almost as many warnings from gcc. Back when all I did was grind out C/C++ code, numbers like that would have gotten me fired, or at a minimum prevented me from checking the code back in.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Monday May 25, 2020 @06:18PM (#60103796)
      Comment removed based on user account deletion
      • The people I know who write in C typically maintain relatively small codebases. For the rest of us, the client wants a team of 10 to build a Facebook in three months using NodeJS. ... and mistakes happen...

        • by dfghjk ( 711126 )

          ...and yet those small C codebases aren't inferior to the large NodeJS ones, especially when you factor in that one of them is a rush job. Added to that is the likelihood that the NodeJS codebase is built on giant piles of code from unknown sources and interacts with similar code of dubious quality.

          The problem isn't limitations of the language, it's the attitude and approach to the development problem. Rather than look down at small C codebases, consider that they benefit not from having a trivial goal bu

    • by mr.morbo ( 6346556 ) on Tuesday May 26, 2020 @03:17AM (#60104754)

      One thing I've noticed over the decades is that the more tightly integrated the development environment is, the more likely there are memory issues. Why? Because the programming monkey assumes that the IDE is always right and always takes care of it.

      That's your dinosaur opinion. I've been doing this gig 30 years and I rely heavily on an IDE at times. There's just no way to get around a complex multi-million line codebase and remain sane without the tooling to help you.

      I still cut the majority of my code in Vim because, unlike the IDE, it gets out of my way and lets me code. I spend more of my capacity writing code rather than fighting the IDE's opinion of what code I should write. There is a time and place for both tools.

      I looked at a piece of C code written completely in the Eclipse framework. Yes, Eclipse would compile it, but I had to really hunt to get Eclipse to run lint or show me the info/warning-level messages from the compiler. 2000 lines of C code, 4500 lines of messages from lint, and almost as many warnings from gcc. Back when all I did was grind out C/C++ code, numbers like that would have gotten me fired, or at a minimum prevented me from checking the code back in.

      Mistake 1: Eclipse is an IDE, not a framework.
      Mistake 2: Eclipse doesn't compile it. It calls a compiler and you're free to adjust the compiler options. It certainly shows you the compiler output in a tab.
      Mistake 3: Using Eclipse!

      The biggest mistake was that whoever wrote the code was allowed to commit it in such a state in the first place. I have instituted a no-warnings policy on all our C/C++ code. We build with -Wall and -Werror and the code *must* compile without warnings. Some modules even get -Wextra.

      Over the top of that, we run Clang's static analyzer as a periodic post-commit task and expect that the original committer will fix any analysis bugs that come out of it. We're more lax about that because it generates a lot of false positives still.

      It's not too difficult to write code which doesn't emit warnings. My experience has shown me a few things:

      - A lot of warnings are emitted for dumb shit like inadvertent truncating casts or inadvertent assignment in conditionals and other such crap.
      - Programmers will just slap in an explicit C cast to shut up the compiler and hide all the potential bugs. This is why the C++ core guidelines recommend gsl::narrow -- to turn silent truncation with a forced cast into a runtime error if it actually happens (see the sketch after this list).
      - Warnings are emitted when the programmer fancies themselves a rockstar and writes typical rockstar dumbfuckery with hacks rather than writing simple code that can be maintained. Consider the intentional assignment in a conditional: while (nbytes = fread()) {} (ugh!), type punning, etc.
      - The number of warnings is directly proportional to the difficulty of working out what the hell the code actually does, but the absence of warnings is not inversely proportional to the same.
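
      For reference, a hand-rolled sketch of the gsl::narrow idea (the real gsl::narrow lives in the Microsoft GSL library and also rejects signed/unsigned mismatches that this simple round-trip check lets through):

        #include <stdexcept>

        // Checked narrowing: like static_cast, but loud when the value doesn't
        // survive the conversion.
        template <typename To, typename From>
        To narrow(From v) {
            To out = static_cast<To>(v);
            if (static_cast<From>(out) != v)
                throw std::runtime_error("narrowing changed the value");
            return out;
        }

        // Instead of silencing the warning and hiding the bug:
        //   std::uint8_t b = static_cast<std::uint8_t>(count);  // 300 silently becomes 44
        // make the truncation observable:
        //   std::uint8_t b = narrow<std::uint8_t>(count);       // throws if count > 255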

      There's a time and a place for all tools, but generally the time for enabling compiler warnings is always and the place is everywhere. Lint is more of a syntax checker than anything. It can pick up a few classes of bugs in simple code but it's not great. Whole-program static analysis is great at finding subtle and well hidden bugs in the interactions between modules.

      And there is absolutely no substitute for writing simple, well documented code and using all of the language-provided safety tools. There isn't much in C, but C++ gives you vector and array for blocks of memory. You get unique_ptr and shared_ptr to confer ownership of memory rather than needing to work it out from raw pointers. Etc.
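
      A small sketch of that ownership point (the types here are placeholders): with smart pointers, lifetime becomes part of the function signature instead of tribal knowledge recovered from raw pointers.

        #include <memory>
        #include <utility>
        #include <vector>

        struct Texture { /* pixels, handles, ... */ };

        void draw(const Texture &) { /* borrow: render without owning */ }

        // The signature says it all: the caller hands over ownership, and the
        // cache is now responsible for the Texture's lifetime.
        void cache_texture(std::vector<std::unique_ptr<Texture>> &cache,
                           std::unique_ptr<Texture> t) {
            cache.push_back(std::move(t));
        }

        int main() {
            std::vector<std::unique_ptr<Texture>> cache;
            auto tex = std::make_unique<Texture>();
            draw(*tex);                           // borrowed, still ours
            cache_texture(cache, std::move(tex)); // ownership transferred
            // tex is now null; a stale use here is at least detectable
            return 0;
        }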

      • I've never in my career worked on a project where anyone paid any attention whatsoever to compiler warnings. Sad but true.

        And now that we're all doing server side JavaScript, we can even ignore errors!

        • by dfghjk ( 711126 )

          I've worked in one environment where full compiler warnings were mandatory. The programmers there were consistently worse than elsewhere and plenty of workarounds were hacked into code to achieve checkmarks rather than code quality.

          With better programmers, such things are left to the judgement of the individual to use the tools in whatever ways are appropriate. Like you, I've seen plenty of cases where warnings are ignored (disabled) but that's because those programmers aren't good. Accepting compiler wa

      • by dfghjk ( 711126 )

        "while (nbytes = fread()) {} (ugh!)"

        Nothing wrong with this. The complaint here is that you, as the code reader, don't want to be burdened with understanding context. In my experience, programmers who reject stuff like this aren't all that good to begin with. There are plenty of things to consider; this is not one of them.

        I recently worked with a relatively inexperienced programmer who knew things like the above were "unacceptable" but couldn't write a proper macro because he didn't know the difference between a

  • Dupe (Score:4, Funny)

    by bloodhawk ( 813939 ) on Monday May 25, 2020 @07:23PM (#60103962)
    70% of all security bugs are Dupes.
  • In straight C, when you're done with a pointer, set it to NULL. Then, if you dereference it later, it just accesses address zero, causing a (relatively) harmless segfault. No use-after-free there. In C++, really learn to use unique_ptr. Myself, I still use "new" as the argument to unique_ptr's constructor; I don't see the harm. Then you can pass that around by reference. Though, I'm currently learning Rust. I don't think C and C++ can or should be replaced entirely, but Rust is interesting and I think we're definitely gonna se
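
    A sketch of that C discipline, with the caveat the reply below raises -- it only protects the one pointer you null out, not any copies of it:

      #include <cstdlib>

      // Free and null in one step, so a stray late use hits address zero (a
      // crash you notice) instead of recycled heap memory (a bug you don't).
      #define FREE_AND_NULL(p) do { std::free(p); (p) = nullptr; } while (0)

      void example() {
          int *data = static_cast<int *>(std::malloc(100 * sizeof(int)));
          // ... use data ...
          FREE_AND_NULL(data);

          // data[0] = 42;   // would now be a null dereference: loud, debuggable
      }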
    • How do you know there is only one pointer?

      Wrap free() so that it destroys what it freed -- set it all to 0x80 0x80 0x80...

      The first time you do that, your program will fail. Pretty much guaranteed.

      Why not leave it like that at run time, in production? "Oh no!" you cry -- inefficient. Negligible for most applications.
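
      A sketch of that poison-on-free wrapper (a size header is prepended here because free() doesn't report block sizes; a real implementation would use allocator hooks instead):

        #include <cstdlib>
        #include <cstring>

        // Scribble 0x80 over every freed block so a use-after-free reads an
        // obviously bogus pattern instead of plausible stale data -- and a
        // poisoned pointer value like 0x8080808080808080 crashes on dereference
        // on typical 64-bit platforms.
        void *poison_alloc(std::size_t n) {
            auto *base = static_cast<std::size_t *>(
                std::malloc(sizeof(std::size_t) + n));
            *base = n;                       // remember the size for poison_free
            return base + 1;
        }

        void poison_free(void *p) {
            if (!p) return;
            auto *base = static_cast<std::size_t *>(p) - 1;
            std::memset(p, 0x80, *base);     // destroy the contents first...
            std::free(base);                 // ...then release the block
        }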

    • > I still use "new" as the argument to unique_ptr's constructor, I don't see the harm

      You should consider std::make_unique<Type>(args...)

      There were good reasons to always use make_unique in C++14: https://stackoverflow.com/ques... [stackoverflow.com]

      Even with C++17 you absolutely should always use std::make_shared, and using std::make_unique allows consistency with that.

      It avoids duplication of the type information and hides raw pointers (the return value of new) from your code.

      Consider

      std::unique_ptr<Type> p_type{new Type(args)};
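
      A minimal side-by-side of the two styles (Type and its constructor arguments are placeholders):

        #include <memory>

        struct Type { Type(int, const char *) {} };

        int main() {
            // Type spelled twice, and a raw pointer briefly exposed:
            std::unique_ptr<Type> a{new Type(42, "x")};

            // Type spelled once, no raw pointer ever visible:
            auto b = std::make_unique<Type>(42, "x");
            return 0;
        }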

    • I tried using unique_ptr with a library someone else had written. The compiler generated a 4500-character error message, and I had no clue what it meant. So I used a raw pointer, allocated it once, and never deallocated it.
  • Ask Einstein (Score:4, Interesting)

    by ytene ( 4376651 ) on Tuesday May 26, 2020 @01:15AM (#60104580)
    Pretty sure that it's Einstein who is generally credited (without substantive proof) with the quotation,

    "Insanity is doing the same thing over and over again and expecting different results"

    To put this a slightly different way: if you ask your search engine of choice, "what language is the chrome browser written in?" then you will learn that the core is written in C++, with the Mac/iOS UI written in Objective-C and the Android UI written in Java. Once you know that the core of {insert software identifier of your choice} is written in C/C++, you can predict with reasonable certainty that if it contains errors, they will include memory/pointer errors. So why is this discovery newsworthy?

    The "Google Chrome" part is far, far, less relevant than the programming language - less relevant, perhaps, than the project discipline and the experience of the developers and testers who worked on that project. The part that interests me in this story is the part relevant to the Einstein quote. We know that C/C++ are not "memory safe" languages, yet, somehow, this finding is considered newsworthy - or at least slashdot-newsworthy.

    A much more interesting dimension to this story would have been to explore the reasons for writing the browser in C/C++ in the first place: the developers would have known at that time of plans to port Chrome to Android (Android shipped in September 2008, Chrome in December the same year, so these projects would have been developed in parallel). Yet, despite knowing that they would be porting Chrome to at least 4 major platforms (Windows, Linux, Mac/iOS and Android), despite knowing that memory errors in a browser could lead to exploits that could allow malware to compromise web site security, despite knowing how much of the world now runs via a browser (including, most likely, your finances, some of your most private interactions, etc.) a decision was made to base the project on a language that has these design flaws.

    Armed with that background and context, it would have been interesting to ask the project team how they managed bugs, how they did their testing, what their "release threshold" for known vulnerabilities was. This would be interesting because, arguably, Chrome is one of Google's flagship products. It bears their name. You would rather hope, therefore, that their risk appetite reflected that.

    Perhaps most insightful of all would be to ask the developers to look back over the 12-year history of Chrome and its vulnerabilities and bugs, and to say whether what has been discovered since publication aligns with their initial risk tolerance for the product. While it is the developer who gets to set their "risk appetite" [in terms of defects and vulnerabilities], it is the end user who gets to take the risk. That's where the disconnect lies. That's where the liability [the financial loss and fallout of a vulnerability leading to a malware exploit, for example] transfers from the developer to the user.

    That's the bit that 99.999% of users don't understand.
    • by fintux ( 798480 )

      I do wonder, however, to what extent the use of smart pointers in C++11 and newer would solve these issues. It does of course require using them, and not passing raw pointers or dangling references around. So it's not a silver bullet, but it could help avoid a significant portion of these bugs.

      And this would _not_ be doing the same thing and expecting different results, as C++11 is quite a different beast than C++98.
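
      As one concrete sketch of how they help: observers can hold a weak_ptr instead of a raw pointer, turning staleness into a checkable condition rather than undefined behavior.

        #include <cstdio>
        #include <memory>

        struct Frame { int id = 7; };

        int main() {
            auto frame = std::make_shared<Frame>();
            std::weak_ptr<Frame> observer = frame;  // non-owning reference

            frame.reset();                          // the Frame dies here

            // A raw pointer here would be a use-after-free waiting to happen.
            if (auto live = observer.lock()) {
                std::printf("frame %d still alive\n", live->id);
            } else {
                std::puts("frame is gone; skipping"); // this branch runs
            }
            return 0;
        }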
