Security Software

Imparting Malware Resistance With a Randomizing Compiler

First time accepted submitter wheelbarrio (1784594) writes with this news from the Economist: "Inspired by the natural resistance offered to pathogens by genetically diverse host populations, Dr Michael Franz at UCI suggests that common software be similarly hardened against attack by generating a unique executable for each install. It sounds like a cute idea, although the article doesn't provide examples of what kinds of diversity are possible whilst maintaining the program logic, nor what kind of attacks would be prevented with this approach." This might reduce the value of MD5 sums, though.
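
To make the idea concrete (and the checksum caveat with it), here is a toy sketch in Python. It is not Dr Franz's actual technique, just an invented stand-in: a pretend build step that reorders independent statements and varies padding based on a per-install seed, so two functionally identical installs no longer hash the same.

    import hashlib
    import random

    def diversified_build(seed: int) -> bytes:
        """Pretend 'compiler' that emits functionally equivalent source whose
        layout depends on a per-install seed (a stand-in for instruction
        scheduling, NOP padding, stack layout choices, and so on)."""
        rng = random.Random(seed)
        # Three independent statements: any ordering computes the same result.
        stmts = ["a = 1", "b = 2", "c = 3"]
        rng.shuffle(stmts)
        # A random amount of dead padding, standing in for layout gaps.
        padding = ["pass"] * rng.randint(1, 8)
        return "\n".join(stmts + padding + ["result = a + b + c"]).encode()

    build_a = diversified_build(seed=1)
    build_b = diversified_build(seed=2)

    # Same observable behaviour from both "installs"...
    for build in (build_a, build_b):
        env = {}
        exec(build.decode(), env)
        assert env["result"] == 6

    # ...but they no longer share a checksum, which is the summary's point
    # about MD5 sums: a single published digest can't cover every install.
    print(hashlib.md5(build_a).hexdigest())
    print(hashlib.md5(build_b).hexdigest())
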
This discussion has been archived. No new comments can be posted.

  • Cute but dumb (Score:5, Insightful)

    by oldhack ( 1037484 ) on Thursday May 29, 2014 @05:38PM (#47123743)
    You think you have buggy software now? This idea will multiply a single bug into a dozen.
  • by cant_get_a_good_nick ( 172131 ) on Thursday May 29, 2014 @05:39PM (#47123753)

    Can you imagine parsing a stack trace or equivalent from one of these? Each stack is different.

    Ignoring the fact that Heisenbugs would be much more prevalent.

    Part of programming is paring down states. The computer is an (effectively) infinite-state machine. When you add bounds and checks you're reducing the number of states. This would add a great many states back, making bugs more prevalent. Since a lot of attacks are based on bugs, this may increase the likelihood of some attacks.

  • ....why? (Score:5, Insightful)

    by Anonymous Coward on Thursday May 29, 2014 @05:47PM (#47123821)

    ..would a professor of CompSci think this is a good idea, despite the hundreds of problems it *causes* with existing practices and procedures?

    Oh, wait... maybe because the idea is patented and he'll get paid a lot.
    http://www.google.com/patents/US8239836

  • by NotInHere ( 3654617 ) on Thursday May 29, 2014 @05:47PM (#47123827)

    So we should use something like ABS with that randomisation enabled? Or should we just trust downloading distinct blobs for every download? For the latter, nice try NSA, but I don't want you to be able to incorporate spyware into my download without being noticed.
    It's already a pity that software gets signed by only so few entities (usually one at a time, at least for deb). Perhaps I know that the blob came from Debian, but I can't verify whether it is the version the public gets, or a special version with some ... extra features. The blobs should be signed by more entities, so that all of them would have to be NSLed. (A toy sketch of such a multi-party check appears after the comments.)

  • Re:Cute but dumb (Score:5, Insightful)

    by tepples ( 727027 ) <tepples.gmail@com> on Thursday May 29, 2014 @05:56PM (#47123897) Homepage Journal
    If bugs are detected earlier, they can be fixed earlier. Randomizing can turn a latent bug into an incredibly obvious bug [orain.org].
  • Re:Cute but dumb (Score:2, Insightful)

    by Anonymous Coward on Thursday May 29, 2014 @06:02PM (#47123971)

    And would make that buggy software nearly impossible to patch.
    Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

    Knock on wood, but I've not had enough bad experiences with malware to think the tradeoff is worth it.

  • by vux984 ( 928602 ) on Thursday May 29, 2014 @06:07PM (#47124007)

    The problem with this in "Explain like I'm Five" terms:

    You can have no idea what the program you are running does.

    You cannot trust it. You cannot know it hasn't been tampered with. You cannot know a given copy works the same as another copy. You cannot know your executable has no back doors.

    On the security-minded front we have a trend towards striving for deterministic (reproducible) builds, so that we have some confidence, and a method of validating, that the source-to-executable transformation hasn't been tampered with and that the binaries you just downloaded were actually generated from the published source code in a verifiable way.

    Another technique I'm seeing in security-conscious areas is executable whitelisting, where IT hashes and whitelists executables, and anything not on the whitelist is flagged and/or rejected.

    Now this guy comes along and runs headlong in the other direction, suggesting every executable should be different. And I'm not sure I see any real benefit, never mind a benefit that offsets the losses outlined above.

  • Re:Cute but dumb (Score:5, Insightful)

    by tepples ( 727027 ) <tepples.gmail@com> on Thursday May 29, 2014 @06:31PM (#47124225) Homepage Journal
    Each bug report would include the key used to randomize a particular build. (A sketch of how that might work appears after the comments.)
  • by Zeek40 ( 1017978 ) on Thursday May 29, 2014 @06:50PM (#47124365)
    You respectfully disagree with his points without actually providing any reason why, and while nick's post makes complete sense, your statements seem to have a ton of unexplained assumptions built in.
    1. What kinds of bugs do you think would manifest earlier using this technique, and why do you think that earlier manifestation of that class of bugs will outweigh the tremendous burden of chasing down all the heisenbugs that only occur on some small percentage of randomized builds?
    2. How does such an environment reward programmers who invest more time in validation? More time spent in validation will result in better code regardless of whether you're using a randomized or non-randomized build. More time spent in validation is a cost you're paying, not some free thing provided by the randomized build process.
    3. I don't know what this sentence means: "Debugging suck, if instigated soon enough to matter, returns 100x ROI as compared to debugging code." If what instigated soon enough?
    4. "Determinism should not be reduced to a crutch for failing to code correctly" - What does this even mean? An algorithm is either deterministic or non-deterministic. If your build system is changing a deterministic algorithm into a non-deterministic algorithm, your build system is broken. If your algorithm was non-deterministic to begin with, a randomized build is not going to make it any easier to track down why the algorithm is not behaving as desired.

    All in all, your post reads like a smug "Code better, noob!" while completely ignoring the tremendous extra costs that are going to be necessary to properly test hundreds of thousands of randomized builds for consistency.
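
Several comments above touch on the same tension: vux984's points about deterministic builds and hash whitelisting, and tepples' suggestion that each bug report include the randomization key. If the only source of non-determinism is a recorded per-build seed, verification does not have to be abandoned; it just has to be keyed by that seed. Below is a minimal sketch under that assumption, reusing the toy diversified_build from the snippet under the summary (all names are invented for illustration, not a real compiler interface).

    import hashlib
    import random

    def diversified_build(seed: int) -> bytes:
        # Same toy "diversifying compiler" as in the sketch under the summary.
        rng = random.Random(seed)
        stmts = ["a = 1", "b = 2", "c = 3"]
        rng.shuffle(stmts)
        padding = ["pass"] * rng.randint(1, 8)
        return "\n".join(stmts + padding + ["result = a + b + c"]).encode()

    def expected_digest(seed: int) -> str:
        # The vendor (or an IT department) can regenerate the exact artifact for
        # a given seed, so a whitelist keyed by (program, version, seed) still works.
        return hashlib.sha256(diversified_build(seed)).hexdigest()

    def verify_install(installed: bytes, reported_seed: int) -> bool:
        return hashlib.sha256(installed).hexdigest() == expected_digest(reported_seed)

    # A user's install, plus the seed recorded at install time.
    seed = 42
    installed_copy = diversified_build(seed)
    assert verify_install(installed_copy, seed)                          # untampered copy passes
    assert not verify_install(installed_copy + b"\n# backdoor", seed)    # tampering is detected

    # A bug report that carries the seed lets the maintainer rebuild the exact
    # layout behind the user's stack trace instead of guessing at it.
    crash_report = {"seed": seed, "stack": ["frame info from the user's build"]}
    reference_build = diversified_build(crash_report["seed"])
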
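On NotInHere's point that blobs should be signed by more entities, so that all of them would have to be NSLed: a real deployment would use detached cryptographic signatures from independent builders, but the shape of the check is simple. The sketch below uses bare SHA-256 attestations as a stand-in for signatures; the attestor names and threshold are invented.

    import hashlib

    # A blob that several hypothetical parties claim to have rebuilt from the
    # same source; in reality each would publish a detached signature, not a
    # bare hash.
    blob = b"pretend package contents"
    digest = hashlib.sha256(blob).hexdigest()

    # Independent attestations (invented names). For an attacker, or an NSL,
    # to slip in a special version unnoticed, every one of these parties would
    # have to be compromised at once.
    attestations = {
        "debian-buildd": digest,
        "independent-rebuilder": digest,
        "third-party-auditor": digest,
    }

    def trusted(candidate: bytes, attestations: dict, required: int) -> bool:
        d = hashlib.sha256(candidate).hexdigest()
        return sum(1 for a in attestations.values() if a == d) >= required

    assert trusted(blob, attestations, required=3)
    assert not trusted(b"blob with ... extra features", attestations, required=3)
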
