Over Half of Software Fails First Security Tests

An anonymous reader writes "Even with all of the emphasis on writing software with security in mind, most software applications remain riddled with security holes, according to a new report released today about the actual security quality of all types of software. Close to 60 percent of the applications tested by application security company Veracode in the past year-and-a-half failed to achieve a successful rating in their first round of testing. And this data is based on software developers who took the time and effort to have their code tested — who knows about the others." Reader sgtrock pointed out another interesting snippet from the article: "'The conventional wisdom is that open source is risky. But open source was no worse than commercial software upon first submission. That's encouraging,' Oberg says. And it was the quickest to remediate any flaws: 'It took about 30 days to remediate open-source software, and much longer for commercial and internal projects,' he says."
  • That's great. (Score:4, Insightful)

    by cbiltcliffe ( 186293 ) on Tuesday March 02, 2010 @11:05AM (#31330758) Homepage Journal

    Now they need to test the users.....

    • by ka9dgx ( 72702 )

      Testing the users might make sense if the Operating System had a reasonable security model. If you can't easily restrict a program to a small subset of your machine, you're forced to trust code you didn't write to get anything done.

      Nobody should blame the users, if the OS sucks.

      • Re: (Score:3, Interesting)

        by TheLink ( 130905 )
        > If you can't easily restrict a program to a small subset of your machine, you're forced to trust code you didn't write to get anything done.
        > Nobody should blame the users, if the OS sucks.

        Agreed. And most OSes out there suck in this respect (OSX, Linux, Windows).

        FWIW Windows Vista and Windows 7 kinda suck less - since they actually have some sandboxing with IE8.

        Ubuntu has apparmor sandboxing of firefox as an option that's turned off by default, and even if you turn it on it's not sandboxed enough I
  • by Jurily ( 900488 ) <jurily@nOSPAM.gmail.com> on Tuesday March 02, 2010 @11:05AM (#31330778)

    I thought the only measure of a project was whether it makes the deadline.

  • Bolting On (Score:4, Insightful)

    by Chris Lawrence ( 1733598 ) on Tuesday March 02, 2010 @11:06AM (#31330788) Homepage

    As Bruce Schneier has said, trying to bolt on security to an existing product or application can be very difficult and time consuming. Sometimes you even have to redesign things. Designing for security and using secure coding practices from the beginning, however, makes it much, much easier.

    • by Jurily ( 900488 )

      And nearly 90 percent of internally developed applications contained vulnerabilities in the SANS Top 25 and OWASP Top 10 lists of most common programming errors and flaws in the first round of tests, Oberg says.

      It doesn't matter how much you redesign things, if you fuck up the routine stuff.

      • If the design is good, you can fix the bugs. If the design is fundamentally flawed, you
        need to throw it out and start again. There is a difference.

        • Re: (Score:3, Interesting)

          by hardburn ( 141468 )

          When it comes to security, not necessarily. A good design of classes for the purposes of readability and maintainability does not necessarily mean it's easy to fix a security bug. These are often competing design choices.

          The two biggest errors cited by TFA were cross-site scripting and SQL injection. Generally, XSS can be fixed if you have a decent design--just filter your inputs better. SQL injection even more so. In my experience, good languages have database libraries that make it easy to use placeholder
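
          As a rough sketch of the placeholder idea (Python's sqlite3 here; the table and the hostile input are made up purely for illustration), the driver passes the user's string as data, never as SQL:

            import sqlite3

            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
            conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

            user_input = "alice'; DROP TABLE users; --"

            # Unsafe alternative: concatenation splices the input into the SQL text.
            # query = "SELECT email FROM users WHERE name = '" + user_input + "'"

            # With a ? placeholder, the input is bound as a value, so no SQL runs from it.
            rows = conn.execute("SELECT email FROM users WHERE name = ?",
                                (user_input,)).fetchall()
            print(rows)  # [] -- the hostile string simply matches no row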

          • Re:Bolting On (Score:5, Interesting)

            by Bert64 ( 520050 ) <bert@NOSpaM.slashdot.firenzee.com> on Tuesday March 02, 2010 @12:51PM (#31332370) Homepage

            For another encryption example, look at how windows and linux implement user password hashing...

            Linux takes the plaintext password via an input channel (ssh, telnet, gdm, local console etc), passes it to PAM, which loads the corresponding entry from the shadow file, hashes the user input with the same algorithm and salt, and compares the output. The backend (PAM module, hashing algorithm) can be changed without affecting the frontend, making it easy to switch to a different algorithm as increases in computing power, or the discovery of cryptographic flaws, render the old ones insecure.

            Windows, in a somewhat misguided attempt to prevent plaintext passwords being sent over the network, effectively uses the hash itself (yes, it's more complicated than that, but the general idea is that only the hash ever gets used and the password isn't sent in the clear - Unix solves this at a different layer by encrypting the plaintext password in transit, e.g. with ssh)... Because of this, the hashing algorithm is difficult to change. Older Windows used LanMan, which is laughably weak, while modern Windows uses NTLM by default, which is somewhat stronger but not great... However, modern Windows still has LanMan support for compatibility reasons, and until Vista/2008 it was still enabled by default. If they change the hashing algorithm, they will still have to retain the old ones for quite some time for compatibility, and also change the protocols to handle a third possible algorithm.
            The fact that you can use the hash without cracking it first is also a design flaw; this isn't possible on Unix or anything else I'm aware of.
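
            As a minimal sketch of that "hash the input with the stored salt and compare" flow (Python's hashlib here; the real /etc/shadow formats and PAM plumbing are considerably more involved):

              import hashlib, hmac, os

              def make_record(password):
                  # Store a random salt plus the salted hash -- never the password itself.
                  salt = os.urandom(16)
                  digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
                  return salt, digest

              def check(password, salt, digest):
                  # Re-hash the submitted password with the stored salt and compare.
                  candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
                  return hmac.compare_digest(candidate, digest)

              salt, digest = make_record("correct horse")
              print(check("correct horse", salt, digest))   # True
              print(check("wrong guess", salt, digest))     # False

            Swapping the hash function only touches these two helpers, which is exactly the backend/frontend separation described above.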

          • by growse ( 928427 )
            Sorry to nitpick, but XSS is not about how you accept input; it's about how you output data. There's nothing wrong with being very liberal with what you accept as input (javascript, tag soup, whatever) as long as it's properly encoded on output. If you only filter input, that model breaks as soon as a new interface that doesn't do input filtering starts putting data into your database.
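
            A tiny sketch of "encode at output time" (Python's html.escape, assuming an HTML body context; attributes, URLs and JavaScript contexts need their own encoders):

              from html import escape

              stored = '<script>alert("xss")</script>'   # accepted and stored as-is

              # Encoding happens when the value is written into an HTML page.
              fragment = "<p>{}</p>".format(escape(stored))
              print(fragment)
              # <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
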
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Designing for security and using secure coding practices from the beginning, however, makes it much, much easier.

      Sure it does... but that sort of design takes money and expertise. More often software is dreamed up and planned in ad hoc meetings. For example, a person in marketing decides it would be a great idea if their customers can get updates on their phones and Nitwitter accounts. In a 4PM meeting the marketer proposes it to his boss as a necessary value-add function without which the competition woul

  • by Pojut ( 1027544 ) on Tuesday March 02, 2010 @11:08AM (#31330802) Homepage

    I know of at least one rather large and well-known company that doesn't use OSS because of "security", yet voluntarily continues to use IE6.

    That sort of thing really pisses me off.

    • by Opportunist ( 166417 ) on Tuesday March 02, 2010 @11:21AM (#31330982)

      Quite the opposite. OSS is often far more secure than its "commercial" counterpart, for the obvious reasons.

      1) No deadline. OSS is usually "done when it's done". Most OSS software I know is in perpetual beta, never reaching what its maker would call a "release state", but it offers at least the same level of security and stability (if not better) as its commercial counterpart. Simply because there is no date by which we have to push something out the door, secure or not, ready or not, to make it for Christmas (or for the new Windows version).

      2) No need to "sell" the software. You needn't dumb down and strip out security so potential customers will accept the burden it adds. Security is never free. It always comes at the price of overhead. When you have two software tools available, customers will pick the one that is more "accessible", which usually also is the less secure one, because security often adds layers of additional overhead (either to you, the user, slowing you down and requiring you to enter passwords or access things in a certain way, maybe even with additional tools instead of from "inside" the tool you're mainly using, or to the system, meaning your software will run slower).

      3) Peer review. Your code can easily be reviewed by thousands of "hackers" trying to find an easy way into your system, instead of having to poke at decompiled code. If people can read the source, far more of them are able to poke and prod at it, resulting in more secure software rather than less, because security holes get found faster and, in turn, fixed faster. By the time you start using the product, a few months after its release, you may rest assured that all the easy-to-find security holes have been found by hobbyists. With closed-source software you often need experienced ASM crackers to dig those holes up, resulting in fewer people able to look for them and thus a slower patching cycle.

      • by El Lobo ( 994537 )
        This is only one side of the picture.

        a) While all you say is more or less true, that applies to big, well-known OSS projects only. Obscure little one/two-man projects don't get that much peer review. If those projects have few users, you can live with critical security holes for years without them even being known.

        b) Extra large OSS projects have the chaos factor against them. When a security hole is patched in some Linux distro: where does this apply? On Ubuntu? Is it present on Kubuntu as well? Wh

        • by Ltap ( 1572175 )
          Most distros leave the kernel alone; it's Red Hat that does a lot of work that gets ported upstream.
        • Re: (Score:2, Insightful)

          Comment removed based on user account deletion
        • by Xtifr ( 1323 )

          ...that applies to big, well-known OSS projects only. Obscure little one/two-man projects don't get that much peer review. If those projects have few users, you can live with critical security holes for years without them even being known.

          How secure do you imagine obscure little one/two-man proprietary, closed-source projects are? The comparison doesn't just apply to big, popular OSS projects. Other things being equal, open source projects will tend to be more secure--if for no other reason, simply because people tend to be more careful when they know others will review their code. However, you're quite correct in pointing out that there are other factors than just open vs. closed, and it's important to factor those in as well.

    • by Lumpy ( 12016 )

      It's typically because whoever is in charge is incredibly under-educated. Probably their CTO or CIO really knows nothing at all, and has filled the ranks below him with yes-men who know as little as he does.

      At the bottom you have the guys who want to get things done and done securely; they pound their heads against the wall.

    • And let me guess... their IT department would claim that open-source software is too difficult to test and administer patches remotely and keep updated?

  • There is no requirements document for security that you can follow and guarantee that your application is secure. You're really trying to anticipate all the ideas other people may have about compromising your code. In general, this is impossible to achieve, so you do the best you can.

    • There is no document because such a document would be outdated the moment you wrote it.

      I write security guides and tips for a local computer magazine, based on developments in the malware industry. It happens, rarely but it does, that I have to retract my articles at the last minute and rewrite them because what I wrote simply is not true anymore. What I wrote a year ago is certainly no longer true. Tips I gave 6 months ago are not offering security anymore. And what I wrote 3 months ago might still hold a hint

      • by Lumpy ( 12016 )

        There is no document because such a document would be outdated the moment you wrote it.

        That's why you put it on a Wiki!

        • You want to base your security guidelines on a wiki? And be held responsible for its implementation? Are you nuts?

      • "There is no document because such a document would be outdated the moment you wrote it."

        I agree. My point was that we shouldn't be surprised that many applications are not secure because it's an open-ended problem.

      • It happens, rarely but it does, that I have to retract my articles at the last minute and rewrite them because what I wrote simply is not true anymore.

        Then you're approaching security as if it were a technology. It's not; it's a science. If you write instead about the application of security principles, you won't find yourself having to retract anything.

        Sure, a particular use case might become less relevant over time, but it can't become wrong unless you misunderstood the underlying principle to begin w
  • by maxume ( 22995 )

    And the other half isn't even tested.

  • Well now (Score:5, Informative)

    by Monkeedude1212 ( 1560403 ) on Tuesday March 02, 2010 @11:12AM (#31330842) Journal

    That's extrapolating a bit much, isn't it? And scanning through the article, they don't even name the sample size, just percentages.

    And yes, they mention that it's only the stuff that they test, "so imagine what the rest is like". Well - that's it, though: if someone is professionally developing with security in mind, they probably know how to test it in-house or know somebody who can. Thus - no need to pay this corporation to test something you can do yourself.
    If you are developing with security in mind - but aren't sure exactly what you're looking to protect against - THAT'S when you go to companies like these.

    This is a pretty skewed data source (probably a slashvertisement for them, too), and it's the only study of its type. Take it with a week's worth of salt.

    • Well - that's it, though: if someone is professionally developing with security in mind, they probably know how to test it in-house or know somebody who can.

      Independent security validation is the only way to verify that your approach to security works in practice.

    • by julesh ( 229690 )

      That's extrapolating a bit much, isn't it? And scanning through the article, they don't even name the sample size, just percentages.

      I was wondering about selection bias, and, yes, investigating the company that did the research, they appear to specialise in analysing native code (e.g. C or C++ applications) running under Windows. My guess is that a lot of the more security-conscious developers have moved to other environments (interpreted or JIT-compiled code and/or Linux), so they're left analysing the dre

    • by sgtrock ( 191182 )

      I made my submission after first seeing a story in El Reg. While I saw it in several other places, I thought the Dark Reading story was a bit better in highlighting the findings than most. You're right, though. It's very light on the methodology. The press release [veracode.com] on VeraCode's site has a bit more information:

      ...1,600 Internally Developed, Open Source, Outsourced, and Commercial applications analyzed when first submitted to Veracode...

      ...the first report of its kind to provide security intelligence de

  • by Opportunist ( 166417 ) on Tuesday March 02, 2010 @11:13AM (#31330864)

    It just is not. Actually, quite the opposite: The better your security, the more your potential customer will be put off by it.

    Users do not care about security until it is too late (i.e. until after they got infected), and only then will they bitch and rant and complain about how insecure your piece of junk is. If you, otoh, take security seriously and implement it sensibly, they will bitch and rant already at install time because they hate the hoops to jump through and the obstacles to dodge to make your software "just work".

    Security is the antagonist to comfort. By its very definition. No matter where you look, security always means "additional work". Either to the user, which means overhead to his work, or to the program, which means it will invariably be slower than its competing products.

    Thus security is not only an "unnecessary evil" when selling your product. It is actually hurting you when you try to convince someone to buy your stuff. Your software will be slower due to its security "burden", and it will be less comfortable to the user. The user does not see the glaring security holes when he buys the product. Only after, when the product bites him in the ass because it opened him up to an attack. But by then, he will already have paid for your product. And he will have bought your product instead of the more secure product your competitor offered, because yours was faster and easier to use.

    • by characterZer0 ( 138196 ) on Tuesday March 02, 2010 @11:36AM (#31331168)

      Protecting against SQL injection attacks, XSS, buffer overflows, and validating user input does not put off users.

    • by ka9dgx ( 72702 ) on Tuesday March 02, 2010 @11:48AM (#31331326) Homepage Journal

      Actually, good security would be a GREAT selling point, if someone actually implemented it.

      Security is the ability to run code without unwanted side effects. Windows, Mac, Linux do not offer a simple way to do this. The closest you can get is either Sandboxie on Windows, AppArmor on Linux, or setting up a VM per program.

      If you offered a way to specify the limits of side effects on an application before and while it runs, you could make a ton of people very happy. I suspect there is some money to be made there as well.
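
      A crude, partial approximation of "declare the limits before the program runs" that exists today is per-process resource limits (a minimal POSIX sketch in Python; it only caps resource use and does not confine file or network access the way AppArmor or Sandboxie do):

        import resource, subprocess

        def limit_child():
            # Applied in the child process just before exec (POSIX only).
            resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                       # 5 s of CPU
            resource.setrlimit(resource.RLIMIT_FSIZE, (10_000_000, 10_000_000))   # 10 MB files
            resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))                  # 32 open fds

        # Run a not-fully-trusted command under those limits.
        subprocess.run(["du", "-sh", "."], preexec_fn=limit_child)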

    • It's not an either/or thing. Secure software is often the *easiest* to configure. It's when configuration is difficult and prone to error that people make mistakes or start using default configurations.

      For example, when a service is installed on a system many installers do not have procedures for configuring the firewall. It may be a range of ports that's needed, or some access to a particular IP address. So people install the software and it doesn't work. They read something on the Internet that it's a f

    • It depends on the product, but there are indeed corporate customers who have policies disallowing them from purchasing / deploying software that does not pass independent security audit.

      It's a mixed bag, and it depends on the market you're in. For some types of software, security is a non-issue. Security is like usability. You can always improve things, but at some point you have to say "up to here, and no further".

    • The better your security, the more your potential customer will be put off by it.

      If, by “better”, you mean more intrusive, controlling, cumbersome, slow, and restrictive... then yes. Of course they will be.

      But if, by “better”, you mean less intrusive, controlling, cumbersome, slow and restrictive...

    • Re: (Score:3, Informative)

      by Lord Ender ( 156273 )

      Security is the antagonist to comfort. [etc. etc. etc.]

      Your entire rant is based on a false premise. In most cases, security actually increases "comfort" or "convenience." It's damn inconvenient to use a system which crashes, misbehaves, and needs to be frequently rebuilt due to security problems. Removing buffer overflow vulnerabilities from your software in no way inconveniences your users.

      Authentication is perhaps the only piece that sometimes is inconvenient. Just typing your username to log in is more

      • The problem is, all those nuisances will not surface until you have the product in a "production" environment. Most products look quite nice under laboratory conditions (i.e. at demonstrations), even if they're crash-prone, insecure and incompatible.

        By the time those crashes, reinstalls and "wtf did it do now" moments appear, management will already have signed the payment for the piece of junk you'll be tied to for the next decade.

  • by SlappyBastard ( 961143 ) on Tuesday March 02, 2010 @11:14AM (#31330880) Homepage

    Hmmm . . . there's a word for that . . . XKCD, can you help me?

    http://www.xkcd.com/703/ [xkcd.com]

  • by weeble ( 50918 ) on Tuesday March 02, 2010 @11:16AM (#31330914) Homepage

    So lots of comparisons between open source and commercial software; however, there is a lot of open source software that is sold, i.e. commercial. In addition, it has been shown that most of the code for the Linux kernel was developed by people who were paid to do it by Red Hat, IBM, Intel and others. Does that mean that the Linux kernel is commercial software?

    Maybe the article should refer to closed-source proprietary and open-source software.

    The article reads as if the author does not fully understand how open source software is developed, and it is just a large advert (a.k.a. press release) for the auditing software.

    • The comparison should really be application to application regardless of the open/closed commercial/non-commercial categories. There's no inherent relationship between these categories and security.

  • That 50-50 split is good news if the sample was broad enough. It could be interesting to match those numbers against the number of users... a lot of those programs may have a user base that coincides with (or is even smaller than) the number of developers, so it would be interesting to see how insecure the programs with more than 100, 1,000 or even more users are (i.e. whether the top 20% safest applications account for 80% or more of the users, or whether the distribution is better than that).
  • Will a security firm ever certify that a solution is perfect on the first pass? Not if they want to be invited back for a second.
  • Code has bugs, it always will. You need to reduce the attack surface, so why not reduce it all the way down to the kernel of the OS? If you don't need to trust any of the user's programs with the security of the whole system, you've solved a lot of problems.

    Don't trust the users? Not a good idea. The users are the administrators these days.

    Don't trust the internet? Well... it's a communications medium, just a set of tubes.

    Don't trust the programs? Great idea!

    • by Xtifr ( 1323 )

      Code has bugs, it always will.

      Really? I defy you to find a bug in my implementation of /bin/false. :)

      What is true is that the chance of a bug appearing grows exponentially as the code increases in complexity, so that for any program of moderate or greater complexity, the chance that one or more bugs exist is near certainty, but I wouldn't be posting on slashdot if I didn't enjoy the occasional moment of nitpicking pedantry... :)

  • Sometimes security doesn't matter, esp. with regard to the "internal project" stuff mentioned.

    Of course, this is the area where basic utility scripting is used: you and perhaps one or two others are the only ones using it, you already have access to any other system you could reach via a cross-site scripting technique, access to any DBs you'd get at with a SQL injection, etc.

  • In other news... (Score:4, Insightful)

    by GuruBuckaroo ( 833982 ) on Tuesday March 02, 2010 @11:49AM (#31331340) Homepage
    More than 90% of all software tested fails to compile the first time. Seriously, that's what security testing is for - finding holes so they can be filled.
  • Not a shocker (Score:3, Interesting)

    by ErichTheRed ( 39327 ) on Tuesday March 02, 2010 @11:54AM (#31331424)

    Coming from the systems integration side of things, I don't view this as a surprise. Developers are great at writing software, but in my experience they have no idea how the platform they're deploying on actually works beyond the API calls they make. This leads to internal applications that I have to throw back because part of the requirements is, "User must be a member of the Administrators or Power Users group." Most dev guys just don't get that it's very dangerous to give the end user full rights to an Internet-connected Windows box. There are just too many holes in Windows to safely allow it.

    To be fair, there are a lot of reasons for stuff like this...not the least of which is deadlines for deploying "something that works." I've been there on the systems side too...scrambling at the last second to get hardware and operating systems deployed because of a deployment date. There are also a lot of apps coded in C++ and other unmanaged languages that open the system up for all sorts of buffer overrun attacks. Not much you can do there except vigilant code checking.

    I think a little education on both sides of the fence would be useful. Developers should get some kind of training in "systems administration and internals for developers" and systems guys should definitely be educated in what holes are safe to open up on their systems. (That's a big cause of this too -- there's a lot of low-skilled systems admins out there who take the developer's instructions at face value without checking to see if full access is really needed.)

    • by ka9dgx ( 72702 )

      Why force the developers to worry so much about security? Why not instead provide a way to have a contract with the OS, which limits side effects to a known set of limitations? That would save a lot of grief, and let the developers get on with it.

    • Mod parent up. I come from the same SI background and am now running a programming shop. What dumbfounded me were the folks with CS degrees who really had no idea how the networking/systems side worked. I can remember a few times they'd call me over after working half a day or more trying to figure out why something in their code wasn't working, only to have me take a look and in less than 30 seconds figure out that something on the server wasn't running or there was a network configuration problem.

  • "remediate"? (Score:3, Insightful)

    by Voline ( 207517 ) on Tuesday March 02, 2010 @11:56AM (#31331450)
    Try "remedy", or does that not sound pseudo-technical enough?
  • Obsolete? (Score:3, Informative)

    by vlm ( 69642 ) on Tuesday March 02, 2010 @11:57AM (#31331472)

    The conventional wisdom is that open source is risky.

    Does anyone believe that anymore, other than journalists quoting other journalists and PR people?

    I did some google searching, trying to find when that old FUD campaign started. It seems to not show up much until 1998.

    The 12 year old advertising/FUD campaign is getting kind of tired.

  • Obviously, Veracode's tests aren't thorough enough. But it raises the question, "who tests the testing software?"
  • The conventional wisdom is that open source is risky. But open source was no worse than commercial software upon first submission. That's encouraging

    ...um, I’d have started with the opposite premise, that open-source is safer. In light of that premise, I think their findings are somewhat discouraging... except,

    It took about 30 days to remediate open-source software, and much longer for commercial and internal projects

    Now that’s encouraging.

  • ...and I don't give a **** for security. I am working as a freelancer. As such there are two possibilities: either I calculate correctly, including all costs for proper design and tests, or I get the contract. Customers pay ****, customers want ****, customers get ****.
  • Back when I was in charge of hiring new programmers for a web development shop, the very first thing I'd do when I got a resume would be to load up the applicant's personal website, if he had one.

    No, I didn't look at the aesthetics of the site. I didn't care about the cleanliness of the HTML. The implementation language and web framework didn't matter. I had more important things on my mind: I would find a form, and type hello world' -- ; SHOW TABLES. If the site misbehaved, I'd toss the resume in the trash
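
    For context, here is roughly why that probe works when a site is built on naive string concatenation (a Python-flavoured sketch; the table and form field are hypothetical):

      user_input = "hello world' -- ; SHOW TABLES"

      # Naive concatenation turns the input into part of the SQL text itself:
      query = "SELECT * FROM posts WHERE title = '" + user_input + "'"
      print(query)
      # SELECT * FROM posts WHERE title = 'hello world' -- ; SHOW TABLES'
      # The stray quote closes the string early and "-- " comments out the rest,
      # which is exactly the misbehaviour the probe is looking for.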

    • Attention to correctness is the bedrock trait of a good developer. Everything else comes second; security is just one property of correct code.

      I disagree. This is pure lip service. Everybody wants secure code, but nobody wants to pay for it. So I infer that security and quality are not important. Whenever I submit a realistically estimated cost/time plan, I can be sure that I won't get the bid. So: cheap and fast; everything else comes second, or third, or fourth.

      A situation I experienced myself:
      a company that developed medical devices. Project manager: make sure that the GUI looks nice. Me: can't do it, somewhere there is a problem which caus

  • "Conventional wisdom" depends on who you ask. The convention wisdom I've heard is that OSS is actually more secure. More eyes, etc. The flip side of his analysis is that while OSS was no more vulnerable than closed source it was also no less vulnerable, which would suggest the closed source model is equally capable of producing secure code.
  • It seems to me that a thorough test would initially flunk pretty close to 100% of all software. So this means Veracode is too lenient by about 40%.
