Zlib Security Flaw Could Cause Widespread Trouble

BlueSharpieOfDoom writes "Whitedust has an interesting article posted about the new zlib buffer overflow. It affects countless software applications, even on Microsoft Windows. Some of the most affected applications are those that handle the PNG graphic format, as zlib is widely used for compression of PNG images. Zlib was also in the news in 2002 because of a flaw found in the way it handled memory allocation. The new hole could allow remote attackers to crash a vulnerable program or even execute arbitrary code."
  • by Ckwop ( 707653 ) * on Sunday July 10, 2005 @08:44AM (#13025933) Homepage

    Why are we still having buffer overflows? There's a compile option in Visual C++ that allows automatic buffer overflow protection. Does GCC have this switch? If not, why not? And why are people not using it? We have enough processing power on a typical PC to spend on security measures such as this. Performance is not an excuse.

    Looking further, this is an interesting example of the problems with monoculture. The BSD TCP/IP stack was copied for Windows and Mac OS X - this is great, it saves a tonne of time, but it also means you inherit the exact same bugs as the BSD stack. This gives you an impression of how difficult it is to design a secure operating system. If you borrow code like this, you have to make sure it's secure. You can't really do that without line-by-line analysis, which is unrealistic. In libraries the problem is especially acute. If you make a mistake in a well-used library it can affect hundreds of pieces of software, as we've seen here.

    We can't modularise security the way we can modularise functionality, because you can take two secure components, put them together, and get insecurity. Despite the grand claims people make about formal verification, even that isn't enough. The problem with formal verification is that the abstraction of the language you're using to obtain your proof may not adequately represent the way the compiler actually compiles the program. Besides, it's possible to engineer a compiler that deliberately miscompiles itself such that it compiles programs with security flaws in them.

    What I'm trying to say is that, despite what the zealots say, achieving security in software is impossible. The best we can do is mitigate the risk as well as we can. The lesson to learn from security flaws such as this is that while code reuse is good for maintainability and productivity, for security it's not great. As always, security is a trade-off, and the trade-off here is whether we want to develop easy-to-maintain software quickly or whether we want to run the risk of flaws like this being exploited. Personally, I fall in the code-reuse camp.

    Simon.

  • by mistersooreams ( 811324 ) on Sunday July 10, 2005 @08:48AM (#13025947) Homepage
    There's a compile option in Visual C++ that allows automatic buffer overflow protection

    Is there? I haven't seen it. Even if there is (and I'm inclined to trust you), the reason no one uses it is that it slows programs down so much. The pointer semantics of languages like C and C++ are fundamentally dangerous, and the only way to make them safe (checking every dereference individually) is painfully slow. I think a slowdown factor of three or four was the general consensus on /. last time this debate came up.

    I guess it's about time for the Slashdot trolls to start calling for the end of C and C++. Strangely, I think I'm starting to agree with them, at least above the kernel level. Is speed really so critical in zlib?

  • by Ckwop ( 707653 ) * on Sunday July 10, 2005 @08:55AM (#13025969) Homepage

    See here [developer.com]

    On the broader issue of whether we should be using other languages, I think that saying "the programmer should just be careful" is a bit misguided. Humans make mistakes, and catching them is something that computers can do very well. Besides, if code in such languages is slow, we can use a profiler to find the hot spots and optimise the slow sections in a lower-level language.

    For that reason, I don't really buy the "but it's too slow" argument - I think it's a good trade-off to use a language that doesn't allow buffer overflows.

    Simon.

  • by inflex ( 123318 ) on Sunday July 10, 2005 @09:18AM (#13026067) Homepage Journal
    I wonder if it'd be possible to create a binary patch for prebuilt binaries?

    Anyone got some suggestions?
  • by n0-0p ( 325773 ) on Sunday July 10, 2005 @09:55AM (#13026204)
    Yes, both Visual C++ and the GCC ProPolice extensions provide stack and heap protection. And in general these techniques have a minimal impact on execution speed. Unfortunately, this does not solve the problem. There are still viable attacks that can be performed by avoiding the stack canaries or heap allocation headers and overwriting other vulnerable data. The probability of developing a successful exploit is lower, but it's still there.

    I don't disagree that building secure applications is hard, but it's certainly not impossible. Modularized code just adds another layer of complication and, potentially, confusion. Most of this can be addressed by documenting the design and interface constraints and ensuring that they're followed. At that point even most security vulnerabilities are primarily implementation defects. Defects will of course still occur, but the trick is to build systems that fail gracefully.

    Developers must account for defects and expect that every form of security protection will fail given enough time and effort. This is why the concept of "Defense in Depth" is so important. By layering protective measures you provide a level of security such that multiple layers have to fail before a compromise becomes truly serious. Combine that with logging and monitoring, and a successful attack will usually be identified before damage is done.

    Take the above vulnerability and assume it exists in an exploitable form in a web app running on Apache with a Postgres backend. If the server had been configured from a "Defense in Depth" perspective, it would be running in a chroot jail under a low-privilege account. Any database access required would be performed through a set of stored procedures or a middleware component that validates the user session and restricts data access. SELinux or GRSecurity would be used for fine-grained access control on all running processes. All executables would also be compiled with stack and buffer protection.

    In the above scenario, you see that this single exploit wouldn't get you much. However, most systems are deployed with one layer of security, and that's the problem.
  • by doctormetal ( 62102 ) on Sunday July 10, 2005 @10:08AM (#13026248)
    I also doubt your argument that achieving security in software is impossible. People have been doing it for years and years. Unfortunately, we are seeing more and more security breaches because the percentage of careless programmers out there has been steadily rising.

    This is a problem of education. When I was at school you learned to program in C and assembler. If you made a stupid programming error you would notice it real soon. Now they mostly teach languages like Java, which hide most of the defensive programming from the programmer. If you don't know about such things, you should not be programming in languages where you need to handle them yourself.

    You can write 100% bug-free code if you take your time, are careful and methodical, and do thorough unit and system tests. Those with the "Hey, all warnings but no errors--Ship It!" mentality give the software writing skill a bad name.

    Treating all compiler warnings as errors is what I always do. Every warning can be a potential runtime error.

    Thinking before you code can also help a lot.
  • very complex code (Score:5, Interesting)

    by ep385 ( 119521 ) on Sunday July 10, 2005 @10:16AM (#13026283)
    Has anyone read the zlib code? While the author clearly tried to make it readable, it's still very complex, and it's very hard to see at a glance - or even after many glances - where potential buffer overflow problems may exist (or even where it might fail to implement the deflate algorithm). C is a great language for writing an operating system, where all you care about is setting bits in a machine register, but this algorithm really taxes its abilities.

    For comparison, here [franz.com] is the deflate algorithm written in Common Lisp. It all fits neatly into a few pages. This is a far better language-comparison example than the oft-cited "hello world".
  • by aws4y ( 648874 ) on Sunday July 10, 2005 @10:16AM (#13026286) Homepage Journal
    Why are we still having buffer overflows? There's a compile option in Visual C++ that allows automatic buffer overflow protection. Does GCC have this switch? If not, why not? And why are people not using it? We have enough processing power on a typical PC to spend on security measures such as this. Performance is not an excuse.

    The problem I have with this statement is that any checks Visual C++ may have are at best a fig leaf. Buffer overflow protection is something that has dogged not just programmers but hardware manufacturers for decades now. If security is of such great concern, why not make the assembler do buffer checks? Why not the operating system? Why not the processor? Why not create a RAM infrastructure called SDDR in which the RAM itself does not allow anything to be accessed without a secure hash? The answer to all of these questions is that for every solution, even the stupid one at the bottom, the buffer overflow might take on a new form, or the security measures themselves may backfire.

    Ultimately the parent is, IMHO, overreacting; we are always going to have buffer overflows. This is not necessarily a problem so long as people are willing to disclose the vulnerability and work hard to get it patched before an exploit is out in the wild. This is the main argument for why Microsoft software is insecure: known vulnerabilities often go months without being patched. They are getting better, but they are nowhere near the transparency displayed here. The zlib developers made a mistake in coding and are attempting to fix it, but until all the vulnerable applications are patched we need to be on guard for signs of malicious behavior from programs relying on zlib. In other words, this is just a part of life in the world of computing.

  • by Tyler Durden ( 136036 ) on Sunday July 10, 2005 @10:26AM (#13026321)
    Why have hardware support that simply helps prevent buffer overflows when we can use hardware features that solve them? I believe that can be done with the NX bit in many modern processors. For more information, look at the Wikipedia entry for "buffer overflow". Getting all new machines to run on chips with this feature, and operating systems to take advantage of it, is the key to stopping the overflows - not new languages to generate low-level code.

    The problem I have with the argument "sure, the software checks in higher-level languages will slow things down significantly, but computers are so much faster now" is simple. Ever notice how, even as memory, video card frame rates, and hard drive space increase exponentially, the newest applications still tend to max them out? Well, the same thing applies to speed. It's tough to explain to your manager that you are going to purposefully use a language that cripples the efficiency of your newest application to anticipate your own carelessness. (I'm not saying I'm any better than anyone else on this point; I've had my share of careless programming moments myself.)

    Does anyone know of any disadvantages to the NX bit that I don't know about? (Like a significant slowdown worse than software checks, or possible overflows that it would miss.)
  • by macemoneta ( 154740 ) on Sunday July 10, 2005 @10:55AM (#13026411) Homepage
    If the argument were that simple, static linking would never occur.

    The flip side of the argument is that installing a broken zlib will break all applications that are dynamically linked against it, but has no effect on those that are statically linked.

    Remember too that an upgrade to a dynamically linked library means that proper testing must cover all software that uses that library. A statically linked application can be tested as a standalone unit.

    The resulting isolation of points of failure and lower MTTR is often seen as an advantage in production environments.

    I remember this specific situation occurring in a production environment I worked in. A common library was updated, causing the failure of multiple critical applications. The ones not impacted? Statically linked.

    Both sides of the discussion clearly have advantages and disadvantages; they have to be weighed to determine the proper risk/benefit.
  • No... (Score:3, Interesting)

    by Junta ( 36770 ) on Sunday July 10, 2005 @11:10AM (#13026472)
    Not in the least. Observe: I just verified with ldd that Xorg and Firefox have libz dynamically linked on my system, which means that on program restart they will pick up the code from the shared library at runtime. That's the whole point of a dynamically linked library.

    Now, once upon a time a lot of distributions (and open source projects out of the box, even) would just statically link in libz for some reason or another, but after past security issues caused massive headaches for package maintainers, that practice has largely ceased.
  • by eldacan ( 726222 ) on Sunday July 10, 2005 @11:49AM (#13026611)
    The flip side of the argument is that installing a broken zlib will break all application that are dynamically linked, but have no effect on those that are statically linked.

    That's why some people like to use (say) debian stable in production environments: security fixes are backported to the well-tested version of the lib, making a breakage quite unlikely.
  • by ringm000 ( 878375 ) on Sunday July 10, 2005 @01:30PM (#13027127)
    Anyway, your language without buffer overflows would not use pointer arithmetic, so it would create a zlib a lot slower than the one we have now, even if you optimise your high-level language to the max.
    Really? I don't see any reasonable way for a compiler to implement arrays internally without pointer arithmetic, be it C or any high-level language.

    As for mandatory array boundary checks, they really can be optimized away whenever they are not needed. The most trivial example: if the compiler has verified that the loop counter is within the boundaries of the array, no check on access is necessary. Provided the optimizer is good, there won't be any significant difference between the optimized code with mandatory checks and the code where all necessary checks are added manually.

    It is even possible to allow real pointers and pointer arithmetic, keep a degree of source-level compatibility with C, and still have code that carries all necessary checks and is not prone to buffer overflows. E.g. check out Cyclone [att.com], which was discussed on Slashdot some time ago.

  • by po8 ( 187055 ) on Sunday July 10, 2005 @03:23PM (#13027708)

    We're pretty badly off-topic here, but what the hey...

    C was first designed and implemented in the time period from 1969-1973 [bell-labs.com]. It is hardly a critique of its original designers and implementors that we have learned a lot about programming language design and implementation in the succeeding 30+ years, and that many of the constraints of the computing environment have been weakened or removed during that time. Indeed, some of the original designers [rdg.ac.uk] of C and UNIX spent a lot of time 10+ years ago developing an alternative [vitanuova.com] language and runtime for writing operating system and application code that fixes the problems with C that I described.

    "In fact, when you are coding things like process and memory management routines and libraries, it is very handy to be able to do arithmetic with and compare variables that are not "exactly" the same type, if the comparison or operation otherwise makes sense. Hence, things like the boolean FALSE and integer 0 being equal (which Java will complain about) are handy."

    If by "handy", parent meant "tempting" but "error-prone" and "potentially insecure", I think there's about 30 years' experience to back up this claim. Things as fundamental or important as my operating system's process or memory management routines are occasionally broken in particularly dangerous ways because their programmer did something that seemed to "make sense" at the time, even though a "safe" programming language wouldn't allow it. Go look at the changelogs of a recent UNIX kernel [kernel.org] for plenty of examples.

    "The lack of dynamic type checking, operand checking and bounds checking allows the programmer to write low level or system code that gets out of the way of higher level code." I'm sorry, but I don't know what this means.

    "Imagine the performance degradation at the kernel if every comparison was dynamically checked for type, operand and bounds." One would prefer that operations be checked statically whenever possible. This is not so much for performance as because failed runtime checks in low-level code are difficult to handle gracefully. That said, as I mentioned in my previous post on this topic, we have known for a long time how to build programming languages so that a combination of static and runtime type and operand checking will provide some correctness guarantees without significantly impacting execution performance. IMHO, it's way past time to start using that knowledge.
