
Over Half of Software Fails First Security Tests

An anonymous reader writes "Even with all of the emphasis on writing software with security in mind, most software applications remain riddled with security holes, according to a new report released today about the actual security quality of all types of software. Close to 60 percent of the applications tested by application security company Veracode in the past year-and-a-half failed to achieve a successful rating in their first round of testing. And this data is based on software developers who took the time and effort to have their code tested — who knows about the others." Reader sgtrock pointed out another interesting snippet from the article: "'The conventional wisdom is that open source is risky. But open source was no worse than commercial software upon first submission. That's encouraging,' Oberg says. And it was the quickest to remediate any flaws: 'It took about 30 days to remediate open-source software, and much longer for commercial and internal projects,' he says."
  • by Opportunist ( 166417 ) on Tuesday March 02, 2010 @12:13PM (#31330864)

    It just is not. Actually, quite the opposite: The better your security, the more your potential customer will be put off by it.

    Users do not care about security until it is too late (i.e. until after they have been infected), and only then will they bitch and rant and complain about how insecure your piece of junk is. If you, otoh, take security seriously and implement it sensibly, they will bitch and rant at install time because they hate the hoops to jump through and the obstacles to dodge to make your software "just work".

    Security is the antagonist of comfort, by its very definition. No matter where you look, security always means additional work: either for the user, which means overhead in his work, or for the program, which means it will invariably be slower than competing products.

    Thus security is not only an "unnecessary evil" when selling your product; it actively hurts you when you try to convince someone to buy your stuff. Your software will be slower due to its security "burden", and it will be less comfortable for the user. The user does not see the glaring security holes when he buys the product, only afterwards, when the product bites him in the ass because it opened him up to an attack. But by then, he will already have paid for your product. And he will have bought it instead of the more secure product your competitor offered, because yours was faster and easier to use.

  • by dcraid ( 1021423 ) on Tuesday March 02, 2010 @12:28PM (#31331084)
    Will a security firm ever certify that a solution is perfect on the first pass? Not if they want to be invited back for a second.
  • Re:That's great. (Score:3, Interesting)

    by TrisexualPuppy ( 976893 ) on Tuesday March 02, 2010 @12:35PM (#31331166)
    Why is this such a shock to you?

    For secure software, isn't it just a bit subjective? These tests were submitted by people who NEEDED to have their software tested. Much of the software out there doesn't deal with sensitive data, much of it is too simple to pose a system security risk, and it isn't submitted. So this 60% figure doesn't really mean much. Most software isn't submitted for security checks and never needs to be.

    This article is FUD, and the necessary details are not explained. Methinks that Veracode was just trying to get a little publicity. Thanks again, Soulskill!
  • Re:Bolting On (Score:3, Interesting)

    by hardburn ( 141468 ) <hardburn.wumpus-cave@net> on Tuesday March 02, 2010 @12:44PM (#31331266)

    When it comes to security, not necessarily. A class design that is good for readability and maintainability does not necessarily make it easy to fix a security bug. These are often competing design choices.

    The two biggest errors cited by TFA were cross-site scripting and SQL injection. Generally, XSS can be fixed if you had a decent design--just filter your inputs better. SQL injection even more so. In my experience, good languages have database libraries that make it easy to use placeholders in SQL statements (if you're using some idiot RDBMS that can't handle placeholders, the library can transparently handle placeholders for you in a secure way). If your design started off with a proper database abstraction layer, and you let an SQL injection attack slip through, it should be easy enough to fix.
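    To make the placeholder point concrete, here is a minimal sketch in Python using the standard sqlite3 module (the table and values are made up for illustration; any DBI-style library works the same way):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
        conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.org"))

        def find_email(conn, name):
            # The driver sends `name` to the database separately from the SQL
            # text, so a hostile value like "x' OR '1'='1" stays inert data.
            return conn.execute(
                "SELECT email FROM users WHERE name = ?", (name,)
            ).fetchall()

        # Vulnerable pattern for contrast; never build SQL by string interpolation:
        #   conn.execute("SELECT email FROM users WHERE name = '%s'" % name)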

    However, the third one mentioned is cryptographic implementations. This is much, much harder to solve, and fixes will often break backwards compatibility. For instance, the RC4 algorithm is considered reasonably secure on its own, but it's also very fragile. If you later decide to use something else, moving your data away from it can cause huge backwards compatibility problems; this was exactly the situation faced by WEP. It can still happen with other algorithms, even ones that are sturdier than RC4.
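    One common mitigation (an illustrative sketch, not something from TFA) is algorithm agility: tag every ciphertext with a format version so the cipher can be swapped later without breaking old data. This sketch assumes the third-party Python "cryptography" package, and the version byte is made up:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        V1 = b"\x01"  # hypothetical format version meaning AES-256-GCM

        def encrypt(key: bytes, plaintext: bytes) -> bytes:
            nonce = os.urandom(12)
            return V1 + nonce + AESGCM(key).encrypt(nonce, plaintext, None)

        def decrypt(key: bytes, blob: bytes) -> bytes:
            version, nonce, ct = blob[:1], blob[1:13], blob[13:]
            if version != V1:
                # An old version would be routed to its legacy cipher here,
                # which is what lets you migrate without a flag day.
                raise ValueError("unsupported format version")
            return AESGCM(key).decrypt(nonce, ct, None)

        key = AESGCM.generate_key(bit_length=256)
        assert decrypt(key, encrypt(key, b"secret")) == b"secret"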

    Making practically unbreakable algorithms was hard, but it's largely a solved problem. Using those algorithms in practice is much, much harder, and it's a problem that has to be re-solved with each new system.

  • Not a shocker (Score:3, Interesting)

    by ErichTheRed ( 39327 ) on Tuesday March 02, 2010 @12:54PM (#31331424)

    Coming from the systems integration side of things, I don't view this as a surprise. Developers are great at writing software, but in my experience they have no idea how the platform they're deploying on actually works beyond the API calls they make. This leads to internal applications that I have to throw back because part of the requirements is "User must be a member of the Administrators or Power Users group." Most dev guys just don't get that it's very dangerous to give the end user full rights to an Internet-connected Windows box. There are just too many holes in Windows to safely allow it.

    To be fair, there are a lot of reasons for stuff like this...not the least of which is the deadline to deploy "something that works." I've been there on the systems side too...scrambling at the last second to get hardware and operating systems deployed because of a deployment date. There are also a lot of apps coded in C++ and other unmanaged languages that open the system up to all sorts of buffer-overrun attacks. Not much you can do there except vigilant code checking.

    I think a little education on both sides of the fence would be useful. Developers should get some kind of training in "systems administration and internals for developers," and systems guys should definitely be educated in which holes are safe to open up on their systems. (That's a big cause of this too -- there are a lot of low-skilled systems admins out there who take the developer's instructions at face value without checking whether full access is really needed.)

  • Re:That's great. (Score:3, Interesting)

    by ka9dgx ( 72702 ) on Tuesday March 02, 2010 @01:03PM (#31331576) Homepage Journal

    Yes, the registry sucks, for many reasons.

    Yes, better defaults could have been chosen 2 decades ago.

    Now things have changed, and any system that doesn't let limits be set per task is insufficient. The current choices are ensuring two more decades of pain. I'm trying to educate people on the better options available, so that a better choice gets made.

    It's now necessary to think of security with a much finer grain. The user is no longer the natural dividing line. It needs to be per task instance.

  • Re:That's great. (Score:3, Interesting)

    by jsebrech ( 525647 ) on Tuesday March 02, 2010 @01:16PM (#31331770)

    These tests were submitted by people who NEEDED to have their software tested.

    I think the software submitted for testing is actually more secure than the average software, because it's made by people who actually know about the problem.

    Much of the software out there doesn't deal with sensitive data, and much of it is too simple to serve as a system security risk

    All web sites need to have good security. Without good security, you can get all sorts of hijacking attacks, where systems that seem harmless are abused to mount attacks on more sensitive systems.

    The biggest problem with security is the degree to which it is underestimated. Everyone thinks it's somebody else's problem. Collectively, though, the web is one huge gaping security hole, and it's because of this attitude.

    Most of the books on web development I've opened up contain security holes in the code samples. Even something as basic as SQL injection is still very prevalent in the code samples you find online and in print. Things get much worse when you start talking about subtler flaws like XSS or CSRF. And don't even get me started on the programming forums...

    This article is most definitely not FUD.

  • Re:Bolting On (Score:5, Interesting)

    by Bert64 ( 520050 ) on Tuesday March 02, 2010 @01:51PM (#31332370) Homepage

    For another encryption example, look at how windows and linux implement user password hashing...

    Linux takes the plaintext password via an input channel (ssh, telnet, gdm, local console, etc.), passes it to PAM, which loads the corresponding hash from the shadow file, hashes the user input with the same algorithm and salt, and compares the output. The backend (PAM, hashing algorithm) can be changed without affecting the frontend, making it easy to move to a different algorithm as increases in computing power, or the discovery of cryptographic flaws, render the old ones insecure.

    Windows, in a somewhat misguided attempt to prevent plaintext passwords being sent over the network, effectively uses the hash itself as the credential (yes, it's more complicated than that, but the general idea is that only the hash ever gets used and the password isn't sent in the clear; Unix solves this at a different layer, by encrypting the plaintext password in transit, such as with ssh)... Because of this, the hashing algorithm is difficult to change. Older Windows used LanMan, which is laughably weak, while modern Windows uses NTLM by default, which is somewhat stronger but not great... However, modern Windows still has LanMan support for compatibility reasons, and until Vista/2008 it was still enabled by default. If they change the hashing algorithm, they will still have to retain the old ones for quite some time for compatibility, and also change the protocols to handle a third possible algorithm.
    The fact that you can use the hash without cracking it first is also a design flaw; this isn't possible on Unix or anything else I'm aware of.
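    A minimal sketch of the Unix-side check described above, using Python's standard crypt module (Unix-only, and deprecated in recent Python releases). The stored hash's $id$salt$ prefix tells crypt which algorithm and salt to reuse, which is exactly what makes the backend swappable:

        import crypt
        import hmac

        def check_password(plaintext: str, stored_hash: str) -> bool:
            # Re-hash the supplied password with the algorithm and salt that
            # are encoded in the stored hash, then compare in constant time.
            candidate = crypt.crypt(plaintext, stored_hash)
            return hmac.compare_digest(candidate, stored_hash)

        # Example with a SHA-512 ($6$) shadow-style entry:
        stored = crypt.crypt("hunter2", crypt.mksalt(crypt.METHOD_SHA512))
        assert check_password("hunter2", stored)
        assert not check_password("wrong", stored)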

  • Re:That's great. (Score:3, Interesting)

    by TheLink ( 130905 ) on Tuesday March 02, 2010 @02:08PM (#31332712) Journal
    > If you can't easily restrict a program to a small subset of your machine, you're forced to trust code you didn't write to get anything done.
    > Nobody should blame the users, if the OS sucks.

    Agreed. And most OSes out there suck in this respect (OSX, Linux, Windows).

    FWIW Windows Vista and Windows 7 kinda suck less - since they actually have some sandboxing with IE8.

    Ubuntu has AppArmor sandboxing of Firefox as an option that's turned off by default, and even if you turn it on it's not sandboxed enough IMO (Firefox can read and write almost anything in the user's home directory, with the exception of just a few directories).

    As it is, most users are either forced to:

    1) Solve a version of the Halting Problem where they don't and can't know all the inputs and are unable to read the source code (or even know if that's really the source code of the executable they are about to run ;) ).

    2) Use only software from a Trusted Vendor's repository. Not a good strategy for Microsoft given their Monopoly Status, and this approach/philosophy doesn't actually help the OSS cause that much either.

    You can say "download the source and compile it yourself", but when even experts have difficulty finding flaws in the software, how would users find them? (See also 1.)

    Users will just skip the pointless steps and go to "make install" (which often requires root permissions).

    As it is, I have proposed that applications request the sandbox they want to be run in. Then the O/S enforces the sandbox.

    It's easier to figure out the danger the application poses, if you require applications to state up front the limits of what they want. If they say "No Limits" you can assume you don't want to run it.

    The sandboxes can be from a shortlist of template sandboxes, or custom sandboxes which are signed by trusted parties.

    Organizations could have Trusted 3rd Parties audit the application's proposed sandbox and sign it if they believe it's OK.

    It is much easier to audit a sandbox than audit thousands of lines of code.
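    A hypothetical illustration of such an audit, sketched in Python (the manifest keys and policy are invented for this example; nothing here is a real O/S interface):

        # An application declares up front what it wants; the O/S or an
        # auditor rejects anything too broad before the program ever runs.
        REQUESTED = {
            "network": ["https://updates.example.org"],  # hypothetical update host
            "filesystem_read": ["~/.config/myapp"],
            "filesystem_write": ["~/.config/myapp"],
            "devices": [],  # no webcam, no microphone
        }

        def audit(manifest):
            # A "no limits" request is an immediate red flag, as argued above.
            if "*" in manifest.get("network", []) or manifest.get("devices"):
                return False
            # Writes must stay confined to the app's own config directory.
            return all(p.startswith("~/.config/") for p in manifest["filesystem_write"])

        print("acceptable" if audit(REQUESTED) else "rejected")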

    Furthermore, the code audit results will be invalidated if the program can update itself online, or can possibly fetch new instructions from the Internet, whereas the sandbox audit would still be valid.

    For example, without sandboxing, a code audited program might fetch new instructions and decide to turn on your webcam without your permission. In contrast if the sandbox doesn't allow the program to access the webcam, the program isn't going to be able to access the webcam even if it fetched new instructions.

    Unless of course there's a bug in the sandboxing. But at least this means you can concentrate more resources on getting the sandbox and O/S bugs fixed, rather than trying to get dozens or hundreds of programs security-audited and re-audited every time there's a new update.
  • Re:That's great. (Score:3, Interesting)

    by david_thornley ( 598059 ) on Tuesday March 02, 2010 @02:57PM (#31333462)

    The referenced defense of the registry is an article that mostly discusses the weaknesses of Microsoft implementations of ".ini" files, and many of those weaknesses are due to Microsoft design features I consider distinctly suboptimal. I'm perfectly willing to agree that the registry might be better than a botched implementation of rc files, but that is hardly a convincing defense.

    Moreover, even if the registry was the right idea in the 16-bit days, that doesn't mean it's not a problem currently. The biggest strength and biggest weakness of MS Windows is the tremendous number of MS Windows-compatible apps out there, making the operating system very useful and very hard to change.
