
Researchers Beat Google's Bouncer

An anonymous reader writes "When Google introduced Bouncer earlier this year — an automated app-scanning service that analyzes apps by running them on Google's cloud infrastructure and simulating how they will run on an Android device — it shared practically nothing about how it operates, in the hopes of making malicious app developers scramble for a while to discover how to bypass it. As it turned out, several months later security researchers Jon Oberheide and Charlie Miller discovered — among other things — exactly what kind of virtual environment Bouncer uses (the QEMU processor emulator) and that all requests coming from Google originated from a specific IP block, so they built an app that behaved like a legitimate one every time it detected that environment. Now two more researchers have effectively proved that Bouncer can be rather easily fooled into considering a malicious app harmless."
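
The bypass described above boils down to fingerprinting the analysis environment and staying benign whenever it is detected. As a rough sketch of the idea (not the researchers' actual code; the property values below are well-known tells of the stock QEMU-based Android emulator):

    import android.os.Build;

    public final class EnvironmentCheck {
        // Heuristic: the stock Android emulator runs on QEMU's "goldfish"
        // board and leaves telltale values in its build properties.
        public static boolean looksLikeAnalysisEnvironment() {
            return Build.HARDWARE.contains("goldfish")
                    || Build.FINGERPRINT.startsWith("generic")
                    || Build.MODEL.contains("google_sdk");
        }
    }

A malicious app simply gates its payload on a check like this, which is why the real fix is making the emulator indistinguishable from hardware rather than patching any single tell.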

Comments:
  • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Friday July 27, 2012 @11:18AM (#40791867)

    It seems like they just found that the sandbox Google simulates the apps in is a little sloppy in its simulation (IP addresses are predictable), so it's easy to tell you're inside the sandbox. But they could fix that part pretty easily.

    Was hoping for something more halting-problem-esque, since it's really difficult to "scan an app for malware" in general.

    • by TheLink ( 130905 )
      Yeah, it's harder than solving the halting problem. The halting problem is provably unsolvable in general, but at least there you are handed a full, accurate description of the program (the code and its complete inputs).

      Whereas with the "is this malware" problem you're not.

      One workaround is sandboxing. From the "halting problem" perspective, sandboxing would be like setting a time limit so that all programs halt by a certain time.
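
      To make that concrete, here is a minimal sketch of a time-boxed run using standard java.util.concurrent; the single-thread executor and the fixed deadline are illustrative choices, nothing more:

        import java.util.concurrent.*;

        public final class TimeBoxedRun {
            // Run an untrusted task but force a verdict within a fixed budget.
            // From the halting-problem angle: with a deadline, every run "halts".
            static <T> T runWithDeadline(Callable<T> task, long seconds) throws Exception {
                ExecutorService pool = Executors.newSingleThreadExecutor();
                Future<T> result = pool.submit(task);
                try {
                    return result.get(seconds, TimeUnit.SECONDS);
                } catch (TimeoutException timedOut) {
                    result.cancel(true); // best effort; a real sandbox kills the whole process
                    throw timedOut;
                } finally {
                    pool.shutdownNow();
                }
            }
        }

      Of course a time limit only bounds the observation: malware that sleeps past the deadline, or waits for a trigger, still slips through, which is exactly the kind of trick used against Bouncer.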
    • I don't think searching for malware is equivalent to solving the halting problem. For example, for a game it's enough to check where it wants to write; if it wants to write outside of its own directory, then that raises red flags. Basically, it's enough to analyse what kind of APIs it uses. (The OS sandbox should provide an API that jails your writes to a certain directory.)
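
      A minimal sketch of such a write jail, assuming the policy is simply "stay inside your own directory" (canonicalizing defeats "../" tricks and symlinks):

        import java.io.File;
        import java.io.IOException;

        public final class WriteJail {
            // Allow a write only if the target resolves inside the app's own directory.
            static boolean isAllowedWrite(File appDir, File target) throws IOException {
                String jail = appDir.getCanonicalPath() + File.separator;
                return target.getCanonicalPath().startsWith(jail);
            }
        }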

  • I thought a bunch of nerds gave a drubbing to a bouncer at a Google-sponsored party. Must be the bad coffee.
    • Re: (Score:3, Funny)

      by localman57 ( 1340533 )

      I thought a bunch of nerds gave a drubbing to a bouncer at a Google-sponsored party.

      Just out of curiosity, when have a bunch of nerds -- ever -- given a drubbing to a bouncer? (Physical drubbings only please, chicken-shit revenge tactics don't count...)

      • Re: (Score:3, Informative)

        by somersault ( 912633 )

        Not all nerds are weak. A guy in my CompSci course actually worked as a bouncer. Really nice guy too - not just someone who was out to beat people up. A bunch of drunken nerds could take a single bouncer if they actually had the motivation. Bouncers tend to have backup though.

      • That's why it would be news, dumb-dumb.
      • As the global telecom guru for a Fortune 500, I max my bench at 350 while weighing 170, and I've just recently got my squat to 505. Some people geek out over numbers in D&D and some geek out over what an extra 2g of glutamine will do in a post-workout drink.

        Let's grow up, shall we? /rant
  • by schitso ( 2541028 ) on Friday July 27, 2012 @11:21AM (#40791925)
    "Google was aware of and blessed the research, and has been apprised of its results so that it can make changes and better secure Google Play against malicious individuals."

    "A renowned security researcher who claims he discovered a flaw in iOS was kicked out of Apple's iOS Developers program."

    Just sayin'.
    • by Desler ( 1608317 )

      Yes, because he didn't apprise Apple of the research beforehand. That's a pretty big difference from having the company aware that you're doing the research and giving it its blessing.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Actually, he did (assuming we're talking about Charlie Miller). He did, several times, and was promptly ignored. I'm sure if you google it, you'll find that out real quick.

        Then he made an application that abused said bug silently to prove a point, since nobody was listening.

    • by crmarvin42 ( 652893 ) on Friday July 27, 2012 @11:33AM (#40792097)
      My impression was that they kicked him out for submitting the app to the store (for customers to purchase), not for finding the vulnerability. I know it's a bit of splitting hairs, but I suspect no penalty would have occurred had he limited his actions to telling Apple about the problem. Still think it was a bad response, though.

      If Apple wants to seriously engage the security community there ought to be a way for the researchers to submit proof of concept apps to the app store to see if their current review process can catch them (obviously the reviewers would need to be blinded as to the identity of the submitter). They could improve their review process, catch security issues, AND avoid the negative press of booting a developer like this.
      • My impression was that they kicked him out for submitting the app to the store (for customers to purchase), not for finding the vulnerability.

        As did the guys who were testing Bouncer. They put an SMS blocker app on the Google Play Store and repeatedly updated it, adding more malicious behavior each time.

    • He was kicked out for knowingly creating and releasing malware that downloaded malicious code to take control of the user's device, not because he discovered a flaw in iOS. Just sayin'.
      • by Anonymous Coward

        He was kicked out for making Apple look bad by allowing any security flaw ever to become public.

  • Actually any malware that's "smart" enough to fool Bouncer is left alone while the NSA, FBI, and MPAA are alerted. Black helicopters full of hot women in black latex arrive...

  • News Flash: Any automated security system can be beaten.

    In further news, using technology to secure against technology is only as effective as the minds behind it.

    Tune in at 11.

  • Running unsigned apps on a smartphone is just plain stupid. Why not just require Android apps to be signed with a revocable certificate? Charge at least $100 to get the certificate, and then reward the malware-free app developers with a credit of at least $100 to cover the certificate cost.
    • Re: (Score:2, Troll)

      by h4rr4r ( 612664 )

      They already do that, unless the user decides to turn it off.

      Any other ideas you want to share that are already in use?

    • How exactly will running signed apps protect you from a corrupt government? Remember MS and the "safe" certificates used by some "safe" viruses?
      • by bhlowe ( 1803290 )
        Yes, the gubmint will be able to sign code and spy on you. And your point? Required signing would certainly help with malware: producing and maintaining a fresh supply of certs would be costly, and a revoked cert would allow instant removal from the phone. And with a certificate, you can learn more about an app vendor--see if they appear legit or shady.
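
        For concreteness, the revocation check being proposed is about this much code with stock Java security classes (the file names are placeholders):

          import java.io.FileInputStream;
          import java.security.cert.*;

          public final class CertCheck {
              public static void main(String[] args) throws Exception {
                  CertificateFactory cf = CertificateFactory.getInstance("X.509");
                  // Placeholder inputs: the developer's signing certificate and
                  // the store's certificate revocation list (CRL).
                  X509Certificate dev = (X509Certificate)
                          cf.generateCertificate(new FileInputStream("developer.crt"));
                  X509CRL crl = (X509CRL) cf.generateCRL(new FileInputStream("store.crl"));
                  dev.checkValidity(); // throws if expired or not yet valid
                  if (crl.isRevoked(dev)) {
                      System.out.println("Revoked: pull the app from devices.");
                  } else {
                      System.out.println("Certificate still good.");
                  }
              }
          }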
  • Let's see what we have:
    1. Inside Google - A bunch of college boys (no girls, as they are not smart enough for Google), very, extremely good at solving entry interview quizzes and questions, but extremely poor and incompetent at actually doing what they were hired to do: DEVELOPING.
    2. Outside Google - A bunch of software developers, usually old, with a lot of experience, some of them even PhDs, who are actually DEVELOPING software that Google buys, because its own bunch is so incompetent...
    So to summarize...
  • by endus ( 698588 ) on Friday July 27, 2012 @01:53PM (#40794153)

    It's almost as though they're trying to achieve security by making information about their service very obscure. Has anyone ever tried this before?

    • As long as you know what you're doing, obscurity can work just fine as another layer of protection.

      The problem is that most people choosing obscurity aren't secure to start with, so it's the *only* layer of protection.


  • Two obvious mistakes on Google's part:
    1 - not using random proxies
    2 - not going out of their way to make the VMs look like real machines. This is already a problem with PC viruses: many are designed not to run inside a VM, precisely to slow down analysis.
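
    On mistake 1: once an app has learned its own external address (say, by phoning home), deciding whether that address falls inside a known analysis block is plain bit math. A sketch (IPv4 only; the actual block would be whatever the researchers observed, nothing is hard-coded here):

      import java.net.InetAddress;

      public final class IpBlockCheck {
          // True if addr lies inside block/prefixLen, e.g. inBlock(x, y, 24).
          static boolean inBlock(InetAddress addr, InetAddress block, int prefixLen) {
              int a = toInt(addr.getAddress());
              int b = toInt(block.getAddress());
              int mask = (prefixLen == 0) ? 0 : -1 << (32 - prefixLen);
              return (a & mask) == (b & mask);
          }

          private static int toInt(byte[] quad) { // 4-byte IPv4 address
              return ((quad[0] & 0xff) << 24) | ((quad[1] & 0xff) << 16)
                   | ((quad[2] & 0xff) << 8) | (quad[3] & 0xff);
          }
      }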

  • The problem that Bouncer is trying to solve (telling whether an app is malicious or not) is a case of program verification. Rice's theorem states that this is undecidable. It may even be highly undecidable, so even if Google had a hypercomputer at their disposal, Bouncer would still lose.

    So if Google wants to keep malware out, Bouncer is fundamentally the wrong approach.
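
    For reference, the theorem being invoked, in its standard form (nothing Bouncer-specific about it):

      \textbf{Rice's theorem.} Let $P$ be a non-trivial semantic property of
      programs, i.e.\ a property of the partial function a program computes,
      satisfied by at least one program and violated by at least one other. Then
      \[
        \{\, e \mid \text{program } e \text{ has property } P \,\}
      \]
      is undecidable. ``This app, when run, does something malicious'' is such a
      semantic property, so no algorithm can decide it for every app.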
