Cramming Software With Thousands of Fake Bugs Could Make It More Secure, Researchers Say (vice.com) 179

It sounds like a joke, but the idea actually makes sense: More bugs, not less, could theoretically make a system safer. From a report: Carefully scatter non-exploitable decoy bugs in software, and attackers will waste time and resources on trying to exploit them. The hope is that attackers will get bored, overwhelmed, or run out of time and patience before finding an actual vulnerability. Computer science researchers at NYU suggested this strategy in a study published August 2, and call these fake vulnerabilities "chaff bugs." Brendan Dolan-Gavitt, assistant professor at NYU Tandon and one of the researchers on this study, told me in an email that they've been working on techniques to automatically put bugs into programs for the past few years as a way to test and evaluate different bug-finding systems. Once they had a way to fill a program with bugs, they started to wonder what else they could do with it. "I also have a lot of friends who write exploits for a living, so I know how much work there is in between finding a bug and coming up with a reliable exploit -- and it occurred to me that this was something we might be able to take advantage of," he said. "People who can write exploits are rare, and their time is expensive, so if you can figure out how to waste it you can potentially have a great deterrent effect." Brendan has previously suggested that adding bugs to experimental software code could help with ultimately winding up with programs that have fewer vulnerabilities.
  • Are you serious? (Score:5, Insightful)

    by mhkohne ( 3854 ) on Monday August 06, 2018 @02:28PM (#57080236) Homepage

    We can't manage to build the smallest bit of software without bugs, and now we're supposed to introduce large numbers of fake bugs that are, in fact, provably not exploitable.

    Sure...that'll work.

    Right up until the 'fake' bugs turn out to actually be exploitable (because, you know, we were wrong about them), and the bad guys win again.

    • Re: (Score:3, Insightful)

      by Xest ( 935314 )

      Most seriously secure software goes through a number of procedures to ensure the highest level of security possible; static code analysis is an imperfect but still helpful step in such a process. Doing this would likely kill any point in trying to decipher the output of an SCA tool.

      As such, in software that's truly developed under a secure software development lifecycle, with its multitude of techniques including things like SCA, this approach would be near impossible to apply.

      Thus, even if in theory it mi

      • In most cases a C compiler would already remove the unused bugs, and it wouldn't really be that hard to alter an SCA tool to know about a few of the tricks that could be used to prevent them getting optimized out.

      • And they generally get through our vigorous internal security scans just fine (i.e. the code generally compiles). And none of these fake bugs; my decoy bugs are real ones. Genuine bugs. Lots of them.

        So many that even the most determined hacker is unlikely to figure out how to make the software work correctly, let alone hack it.

        • by rtb61 ( 674572 )

          Want more reliable, more secure software? Simply take all the flexibility out of it, so it can only do what it is programmed to do. That, and make real software warranties regulated and compulsory. Treat purposefully bad software like fraud; hand out custodial sentences for code coming out the door that they knew was bad.

          Want better software? Start throwing bad coders in prison, simply the truth. You'd watch the quality of code improve overnight; you all know it is true.

          Software is as bad as its warranties

          • by dcw3 ( 649211 )

            And watch the cost of code shoot through the roof, along with the time it takes to bring things to market... overnight, my ass.

      • by HiThere ( 15173 )

        Well, my feeling is that it would cause tremendous software bloat. That said, if the addition of the bugs is automatic, they could be added AFTER the secure development step. And it would mean that those trying to decipher what actual bugs are there would find many of their tools nearly useless.

        So...maybe. It sounds like a long shot, and not terribly good even if it works properly, but it might work. And the code bloat would explode.

        • People don't care about bloat anymore. You can notice this by looking at the node_modules directory of any node project. It's huge.
        • Adding complexity always increases bug count. Adding software after you screen it for bugs always increases bug count. Automated bug screening should filter out code that does nothing, so your fake bugs will be ignored.
      • What seriously secure software are you thinking of here?
      • by Hentes ( 2461350 )

        On the contrary, that's exactly the environment where introducing fake bugs could be useful. Not for misleading attackers, but for testing whether the QA department can find them. I agree that deliberately including bugs in the final product is bad design, but putting them into an internal version to check whether you can find them might give you useful information.

      • by dcw3 ( 649211 )

        While I'll agree that the idea is flawed, I'll disagree with you on the reason. SCA could easily be done because you know where you put the bugs. That info should get passed along to any/all reviewers, just not outside the company/dev group. That said, I'd agree that these fake bugs could and likely would introduce actual flaws, and are most likely just a waste of valuable resources that should be busy finding/fixing actual issues.

    • Moreover, how is this supposed to provide any extra protection unless it's done manually? They talk about having a system for automatically inserting fake bugs (presumably during compilation, minification, or some other similar process that happens after the developer does their work, that way the developer never sees the buggy code and gets distracted trying to fix it), but such additions would be deterministic in nature and easily noticed/removed. In much the same way that we can de-compile code and can d

    • Re:Are you serious? (Score:5, Informative)

      by jellomizer ( 103300 ) on Monday August 06, 2018 @03:00PM (#57080478)

      This idea is like the Honey Pot idea for network protection.

      This was popular 20 years ago, when most hacking was targeted at a network, by individuals. You set up a system to simulate an insecure PC, have the hackers break in thinking they are getting away with murder, while it collects information on who is hacking it and how. Then you use that information to protect your real network.

      Hacking rarely works like that now. Either it is fully automated so a honey pot server would just be a statistical issue while the other servers are getting hit, or if it is more targeted it will often go in via stupid users on the inside of the firewall.

      Most security glitches are not bugs in the traditional sense; for the most part, buffer overflows have been fixed. They come from lazy software development, from developers not thinking about security at the time.

      • Re:Are you serious? (Score:5, Informative)

        by bluefoxlucid ( 723572 ) on Monday August 06, 2018 @03:03PM (#57080500) Homepage Journal

        Honeypots alert you to activity. A network scan hits them. There is nothing useful on this web server, yet someone tried to browse it. Someone tried to connect to the server's file share. You're able to identify malicious traffic and hosts.

        Software bugs don't tell people anything. You put it on a non-networked machine, you probe at it, you take it apart, you crash it, you tell no one.

    • It's brilliant, don't you see it? The entire application should be so confusing, obfuscated, and aggravating in how it works that anyone trying to do anything with it kills themselves just to get rid of the PTSD nightmares of trying to understand how any of it works. It's the perfect defense.
    • At least for once I'm ahead of the game.

    • Holy sheep shit... I was like "Is this a late April 1st entry?"
  • by jimmifett ( 2434568 ) on Monday August 06, 2018 @02:29PM (#57080242)

    Before you know it, those fake bugs can easily turn into actual bugs. During porting, during updating, during review, etc.

    Instead of laying a minefield of false positives to hide sloppy mistakes, why not just fix the exploitable mistakes in the first place and learn from the experience about what NOT to do?

  • Once a critical number of bugs is introduced the software becomes unusable, hence making it unattractive for hacz0rz to exploit.
    • Re:Brilliant idea (Score:5, Interesting)

      by ShanghaiBill ( 739463 ) on Monday August 06, 2018 @02:57PM (#57080442)

      Once a critical number of bugs is introduced the software becomes unusable, hence making it unattractive for hacz0rz to exploit.

      The bugs are not at the UI level. The "bugs" are exploitable code that is never actually executed. Black hats would find them when scanning the code, and try to exploit them, but the exploits wouldn't work because the code never executes.

      The "bugs" would add a trivial amount of bloat, but will otherwise not affect performance or behavior.

      int main(void)
      {
          // ...
          if (somethingThatWillNeverHappen()) {
              char buffer[20];
              gets(buffer);          // buffer overflow target
              mysql_query(buffer);   // SQL injection target
              // ...
          }
          // ...
      }

      Disclaimer: I think this is a dumb idea. I am just explaining it, not endorsing it.

      • Re:Brilliant idea (Score:5, Interesting)

        by DarkOx ( 621550 ) on Monday August 06, 2018 @03:50PM (#57080776) Journal

        Black hats would find them when scanning the code

        Unless we are talking about open source that someone might feed to a static analysis tool, that is pretty unlikely.

        Blackhats don't "scan" binaries, they fuzz them -- in other words, they feed data to "UI level" inputs and then watch where those patterns land in memory, at least if you are talking about a buffer overflow type bug where they can run the software locally.

        Logic bugs are probably much more common today than overflows, given the languages being used and the fact that everything is on the web; these again get exercised by things like crawlers and maybe direct parameter tampering. The key here, again, is that the process STARTS by identifying the inputs the application takes.

        So if you stuff a bunch of unreachable code into the application, odds are the blackhats won't ever see it and therefore won't spend any time on it. If someone does scan a binary, tools like IDA are pretty good at identifying unreachable code too, so it's unlikely to be anything but a short-lived arms race.

          Now if you put in 'fake bugs' that are actually reachable -- say, change all the strncpy calls back to old strcpy calls and then somehow move the bounds check to some far-away part of the program while you write into a buffer inside a larger data structure you don't then use for anything -- you 1) risk screwing it up and creating a bug that is still exploitable, 2) leave a useful gadget (code someone who has exploited an actual bug can now leverage to help them gain even more control of execution), 3) leave a nice hunk of process that can safely be replaced with larger shellcode, and 4) fail to fool anyone, because you are dealing with people who will be able to spot the tells and won't be taken in.
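          (For what it's worth, here is a minimal sketch of the kind of reachable planted overflow the parent describes, with entirely made-up struct and field names: the unbounded copy spills only into padding that nothing else reads, which is roughly how chaff overwrites are supposed to be steered into unused memory.)

          #include <stdio.h>
          #include <string.h>

          /* Hypothetical "chaff" record: name[] can be overrun by an unbounded
             copy, but the overflow lands in unused_padding[], which nothing reads. */
          struct record {
              char name[16];
              char unused_padding[64];   /* absorbs the overrun; never used */
              int  id;
          };

          static void set_name(struct record *r, const char *input)
          {
              strcpy(r->name, input);    /* looks like a classic overflow target */
              /* The "far away" bounds check the parent mentions would live elsewhere,
                 keeping input shorter than sizeof name plus the padding. */
          }

          int main(void)
          {
              struct record r = { .id = 42 };
              set_name(&r, "this input is longer than sixteen bytes");
              printf("id is still %d\n", r.id);   /* 42: the overrun never reached it */
              return 0;
          }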

      • by dgatwood ( 11270 )

        The bugs are not at the UI level. The "bugs" are exploitable code that is never actually executed. Black hats would find them when scanning the code, and try to exploit them, but the exploits wouldn't work because the code never executes.

        Or, for Internet-based services, the bugs can be non-exploitable code that will fool fuzzing attacks into thinking that it is wreaking havoc, when in fact it is doing exactly what it was programmed to do.

        For example, if your service can detect an inappropriate number of fai

      • A good compiler for a sane language would simply remove the dead code. If it's not dead code, then it's exploitable.

        • A good compiler for a sane language would simply remove the dead code. If it's not dead code, then it's exploitable.

          That's why you don't say if (0) { ...

          Instead you do something like:

          if (findCounterExampleForGoldbachsConjecture()) { ...

          Good luck deciding if that code is dead.

          Or just call something like foo() { return FALSE; } in a separately compiled module.
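          (A minimal sketch of that separately-compiled-guard trick, with made-up file and function names: the branch is never taken at run time, but without link-time optimization the compiler can't prove the guard always returns zero, so the planted "bug" survives dead-code elimination.)

          /* chaff_guard.c -- built as its own translation unit, so the optimizer
             never sees that the guard always returns 0 */
          int chaff_guard(void) { return 0; }

          /* main.c */
          #include <stdio.h>
          #include <string.h>

          int chaff_guard(void);             /* opaque to the compiler without LTO */

          int main(int argc, char **argv)
          {
              (void)argc;
              char name[16];
              if (chaff_guard()) {           /* never true when the program runs */
                  strcpy(name, argv[1]);     /* looks like an overflow target */
                  printf("%s\n", name);
              }
              return 0;
          }

          Build it with something like gcc -O2 main.c chaff_guard.c and the branch stays in the binary; turn on -flto and it typically disappears again, which is the short-lived arms race mentioned above.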

    • by Tablizer ( 95088 )

      Once a critical number of bugs is introduced the software becomes unusable, hence making it unattractive for hacz0rz to exploit.

      I intraduce tuns of spailing and grammer errers so that grammer notzees get to flustured too bothur too complane. Werks ulmost evry time.

      • Werks is an obsolete spelling, not an error. Dumb-ass.

  • by Anonymous Coward on Monday August 06, 2018 @02:30PM (#57080248)

    ... literally.

    The bugs _are_ still there. They're just harder to find.

  • What could POSSIBLY go wrong?
  • by 93 Escort Wagon ( 326346 ) on Monday August 06, 2018 @02:34PM (#57080272)

    This guy is an idiot. An increase in complexity - which this most certainly would entail - will always lead to an increase in genuine bugs. And, as was said by someone further up... when programmers already can’t write bug-free code, how the heck are they going to make up 100% guaranteed non-exploitable false bugs which - at the same time - are indistinguishable from the real thing to a skilled hacker?

    • by mykepredko ( 40154 ) on Monday August 06, 2018 @02:51PM (#57080404) Homepage

      I don't like to call anybody an idiot, but I don't know any other way to end this sentence.

    • Segregating complexity reduces complexity. The monolith vs microkernel argument is a familiar model for this: a monolith can connect any part of its code to any other part, as any operation can affect any internal state for any other operation; whereas a microkernel has defined communication channels and segregates blocks of code so they can only affect their own state and the state of information sent to them.

      In object-oriented programming, objects abstract and encapsulate state. Polymorphism provide

      • whereas a microkernel has defined communication channels and segregates blocks of code so they can only affect their own state and the state of information sent to them.

        If that was actually true, GNU Hurd would be something useful that people work on, instead of something useless that was abandoned!

        • Not really. Popularity and utility are different things. Score voting advocates try to use that argument because they think voting is about marginal utility (finding the candidate most-useful to the electorate), rather than social choice (finding the greatest consensus among the greatest mutual majority).

          Think about it. People actually use FreeBSD as an everyday desktop, yet Linux is the most popular. Why does anyone use FreeBSD? For that matter, why don't BSD users switch to Dragonfly BSD? Why aren

    • when programmers already can't write bug-free code, how the heck are they going to make up 100% guaranteed non-exploitable false bug

      If you RTFA, you would see that the bugs are introduced by an automated source to source transformation tool at build time, not by the original programmer. So the question is, can the writer of the tool guarantee that the bugs it inserts are not exploitable. While I wouldn't sign off on it per-se, that does seem a whole lot easier than having each developer do so -- yo

    • when programmers already can’t write bug-free code, how the heck are they going to make up 100% guaranteed non-exploitable false bugs

      Machine learning.

      I'd put a /s, but no, I'm seriously predicting that they'll try this.

    • The chaff bugs would be inserted by an automated tool. [arxiv.org] No need for programmers to make up anything.

      They would not be indistinguishable from the real thing to a skilled hacker; the idea is that they would be so numerous as to make the effort of finding the actual exploitable bugs uneconomical.

  • by Anonymous Coward

    We have enough issues with trying to make bug free code. So let's add bugs that we "think" aren't exploitable. So we now have the following categories of bugs.
    1. Organic, non-exploitable (we already have these)
    2. Organic, exploitable (we already have these)
    3. Deliberate, non-exploitable (we "want" to introduce these)
    4. Deliberate, exploitable (we hope to not introduce any of these... Just like we already hope to not introduce any of type 1 and 2 bugs above... Yea, right.)

  • But might be tough since it might only be a matter of time before someone makes a mechanism by which to filter out auto-generated "bugs."
    • I agree with this. That which a man can make a man can break.
  • The real advantage is: when your boss comes up to you and asks about a bug... "No, no, that's a chaff bug... honest."
  • This is an application of an age old conflict strategy. Tricking your enemy into thinking they have found a weakness is a well proven tactic if effort is put in to correctly prepare for the situation.
  • by aaronb1138 ( 2035478 ) on Monday August 06, 2018 @02:42PM (#57080310)
    I guess Agile isn't getting the job done, but this is just another in a long list of make-work methodologies programmers are deeply entrenched in doing in order to make their jobs perpetual. Also, it's a great hedge for the CPU industry, which doesn't have any killer apps to further justify increased computational capacity for the average user. In smartphone land, the approach to forced obsolescence is security / OS updates being withheld from consumers quite maliciously. For desktops and laptops, the update genie is long out of the bottle, so you have to come up with some other way to make people's computers slow. Honeypot code is definitely a new way to do exactly that.

    The downfall of Waterfall is that eventually, you can have a complete and working product. But nobody wants to make a Windows XP or Office 2003 that works forever and gets the job done.
    • by Anonymous Coward

      I'll respond to this post, as it best reflects the current "state of the art" and the falsely perpetuated culture of fear/survival. This is what you get.

      As if your "needle in the haystack" isn't someone else's rather fun and innovative disruptive ML side project.

      Btw, waterfall was meant as a joke by the original author, who thought nobody would be so ignorant as to follow it to the letter.
      Also, the Agile consultancy industry follows "Agile" to the letter with a straight face, while the Agile Manifesto states the exact op

  • by Anonymous Coward

    1) if you have 500 fake bugs then how are the white hat hackers going to tell you about the real bugs that lurk in your system?

    2) how do you prevent the fake bugs from being miscoded so that they accidentally become real bugs? Developers think it's a dummy fake bug and ignore all bug reports about it, but it's truly a real bug.

  • by shayd2 ( 1689926 ) on Monday August 06, 2018 @02:44PM (#57080324)
    This might work. Right up until day 2 when someone leaks the list of "chaff bugs"
  • by fahrbot-bot ( 874524 ) on Monday August 06, 2018 @02:44PM (#57080328)

    More bugs, not less, could theoretically make a system safer.

    Just like how adding actual bugs to food makes it tastier.

  • I can't see how to do this without spending a lot of time ensuring that the bugs you are adding are non-exploitable.

    So, during product definition phase, you're going to have to convince the Product Manager that you want to take more time on the application development by adding exploit bugs and then testing said bugs to ensure that they can't be exploited?

    Great non-intuitive idea that clearly came out of academia. Clearly Dolan-Gavitt has never worked for a living.

    • I can't see how to do this without spending a lot of time ensuring that the bugs you are adding are non-exploitable.

      Oddly enough, the authors of the paper explain this in section IV-B Ensuring Non-Exploitability [arxiv.org]

      Great non-intuitive idea that clearly came out of academia. Clearly Dolan-Gavitt has never worked for a living.

      What's with the hostility toward academia? Lots of great ideas have come out of academia. No academic myself, just decades of writing real-world software, during which I've learned not to judge other people, especially without reading their work.

  • If cramming code with bugs increases security, the code I write is incredibly secure.

  • by account_deleted ( 4530225 ) on Monday August 06, 2018 @02:51PM (#57080408)
    Comment removed based on user account deletion
  • by Dallas May ( 4891515 ) on Monday August 06, 2018 @02:52PM (#57080410)

    "Brendan has previously suggested that adding bugs to experimental software code could help with ultimately winding up with programs that have fewer vulnerabilities."

    This is not correct. His theory seems to be that you will get fewer exploits. The number of vulnerabilities will remain constant.

  • This idea comes under the heading of "Well, I don't have any useful research, so why don't I publish a paper taking a contrary tack and get noticed."

    Clearly the author has never written a line of code in his life that somebody has to maintain, never dealt with product managers that want more for less, never worked with QA and security, or documented anything that he has written (how do you document that you've put deliberate bugs in the code that should be ignored down the road?).

    So, write up a paper extolling the virt

  • "... techniques to automatically put bugs into programs..."
    and
    "...once they had a way to fill a program with bugs..."

    I think I've found my natural calling
    • by jm007 ( 746228 )
      finally!! now there's a good reason to include some code samples on my resume
  • Yay, I can finally get a job as a programmer!

    Hire me! Hire me!

  • I can honestly say that this idea is so dumb that it hasn't been parodied in the comics (yet).

  • putting bugs into software.... pffft.... been doing it for years now, as have most others in the biz

    determining that they're benign.... now that's the hard part; who signs off on that?
  • Let's suppose (big if) that we can cram a program with bugs that are either highly unlikely to be exploitable, or provably not exploitable. How long would it take an attacker to learn to recognize a real bug from a manufactured bug, and filter out the probably-manufactured ones?

    How long would it take to build a classifier to recognize and flag probably-manufactured bugs?

    This might make more sense if we combined this technique with better means for closing exploitable, or probably exploitable, bugs. But man, color me skeptical.

    • How long would it take to build a classifier to recognize and flag probably-manufactured bugs?

      In an arms race, "how long does it take to (reach the next level playing field)" is a relevant factor. A temporary measure is temporary, but it protects for as long as it lasts.

  • The real vulnerabilities will still be there. This is an age-old concept called "security through obscurity" and is only minimally effective.
  • This would only make things even more difficult for researchers, who would have to navigate their way through decoy code designed to thwart finding real bugs in the first place.

    Not to mention, bloated code becomes even MORE bloated code. :|

    So the entire idea is " If you can't fix the bugs in the code, hide it amidst a bunch of fake bugs in code ? "

    In the history of anything, has security through obscurity EVER worked in the long term ?

  • Most attackers use fuzzing for finding bugs to analyse further. Hence these need to be bugs that can be found by fuzzing. Fuzzing needs either a crash or crass bad behavior to detect a bug. So all these "non exploitable" bugs can actually be exploited for DoS, or will break your application for the right input data. That is the first reason this is unworkable. The second one is that this makes software hard to maintain and hard to test. It is already hard to maintain and test software, and this will make it

    • They did in fact test their chaff bugs against a fuzzer, which found them. They took steps to ensure that the bad behavior is actually harmless (for example, overwrites go to areas of memory that aren't actually used).

      How do you see this making software harder to maintain? The chaff bugs are inserted by an automated tool, maintainers looking at the source code will never see them.

      For testing, run the test cases against chaffed and unchaffed builds. Test cases that fail in both are probably real bugs. Test c

      • by gweihir ( 88907 )

        You cannot have much experience with writing software in an enterprise environment. "Probably real bugs" does not cut it. Maintainers "will never see them" does not cut it. Anything that can be hit by a fuzzer can be hit by some other application doing something stupid. You will find arbitrarily misbehaving code in any enterprise environment. If you then have this crap in your way when trying to find out what is wrong, a problem turns into a disaster.

  • I do that one better. My programs are full of REAL bugs.

  • This one needs the foot icon. The first rule of application security is that obscurity is not security. Assuming a hacker who's compulsively raiding your software will ever run out of energy is also an incredibly stupid idea. I like the way this one plays out. It's hilarious, but it won't work.

  • "fake" bugs? No, these are real bugs, just non exploitable ones.

    I don't want software to crash or do funny things if it can be avoided. Security is nice but it comes after functionality. If it is not functional, there is no point using the software at all. It is almost like the security guys want to make everything unusable, just because things that are unusable can't be exploited.

  • and clean up your code.
    Code that's full of bugs is not a feature. Secure your code and have real experts help with that.
  • I was taught the bebugging technique during my CS degree over 20 years ago.

    https://en.wikipedia.org/wiki/... [wikipedia.org]

  • Others have pointed out the problems of ensuring a) that the bugs really aren't exploitable b) that the bugs are preserved by the compiler and not corrected. I'd like to add a few other sources of problems:

    First is this is going to contribute to code bloat and make it harder to maintain the code. I've seen a few programmers here on slashdot admit that they've had serious "WTF?" moments when reviewing their own code years or even just months later. Having deliberately non-functional code is going to make su

  • No. The idea is so stupid, it's not worth even discussing. It's like we haven't had 50 years of KISS concept.

  • I did a double-take and went, "like, what? did I see what I thought I saw? " And then I go and look more closely and lo, the absurdity is sincere.
  • This would be pretty stupid. Bugs are bad, mmm-kay? It would be better to just make the code convoluted and redundant but without bugs if you want to wear out attackers.

    On the plus side, you'll know when to stop testing! Since you know all the bugs you planted, if QA finds all of them, you've got a great QA team doing their job, and it's likely they found the ones you did not plant as well.

  • Obviously, this researcher knows nothing about how coding works. More complexity means more bugs, and even intentional "harmless" bugs (if such a thing even exists) can have unintentional side effects that could be leveraged by an attacker. And now the team of coders working on a project should all agree that "oh, these bugs are there on purpose, so don't fix them, just test them properly to make sure they can't be exploited"... All because of the idea that we want to waste the time of attackers who, by def
  • pointy-haired boss had two choices:

    let's hire more programmers to add fake bugs to our programs
    or
    let's hire more security people to help audit and remove security issues from our programs.

    his advisor tried to make it simple and steer him in the right direction; it worked so many times before.
    except for this time.

  • I know this is Slashdot and nowadays people don't even read the summary, let alone the article, much less the actual paper [arxiv.org] to which the article links, but there's an awful lot of straw littering the floor here.

    The chaff bugs are inserted by an automated tool, they will not have to be written or worked around by people writing the actual code.

    The authors are aware that the chaff bugs must not be exploitable.

    The authors are aware that they will face automated tools for finding bugs and determining their explo
