Zlib Security Flaw Could Cause Widespread Trouble

BlueSharpieOfDoom writes "Whitedust has an interesting article posted about the new zlib buffer overflow. It affects countless software applications, even on Microsoft Windows. Some of the most affected applications are those that can use the PNG graphics format, as zlib is widely used to compress PNG images. Zlib was also in the news in 2002 because of a flaw in the way it handled memory allocation. The new hole could allow remote attackers to crash a vulnerable program or even execute arbitrary code."
  • by Ckwop ( 707653 ) * on Sunday July 10, 2005 @07:44AM (#13025933) Homepage

    Why are we still having buffer overflows? There's a compile option in Visual C++ that allows automatic buffer overflow protection. Does GCC have this switch? If so, why not? And why are people not using this? We have enough processing power on a typical PC to spend on security such as this. Performance is not an excuse.
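
    (For concreteness, a minimal sketch of the class of bug such a switch guards against; the program is made up. Compiled with GCC's -fstack-protector (the ProPolice work) or Visual C++'s /GS, the runtime plants a canary between the buffer and the return address and aborts on overflow instead of returning through a clobbered address.)

        /* overflow.c - a deliberately unsafe toy program (illustrative only) */
        #include <string.h>

        static void copy_name(const char *src)
        {
            char name[8];           /* fixed-size stack buffer */
            strcpy(name, src);      /* no length check: writes past 'name'
                                       whenever src holds more than 7 chars */
        }

        int main(int argc, char **argv)
        {
            if (argc > 1)
                copy_name(argv[1]); /* attacker-controlled input */
            return 0;
        }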

    Looking further, this is an interesting example of the problems with monoculture. The BSD TCP/IP stack was copied for Windows and Mac OS X - this is great, it saves a tonne of time, but it also means you inherit the exact same bugs as the BSD stack. This gives you an impression of how difficult it is to design a secure operating system. If you borrow code like this, you have to make sure it's secure. You can't really do that without line-by-line analysis, which is unrealistic. In libraries the problem is especially acute. If you make a mistake in a well-used library it could affect hundreds of pieces of software, as we've seen here.

    We can't modularise security either, like we can modularise functionality, because you can take two secure components, put them together, and get insecurity. Despite the grand claims people make about formal verification, even this isn't enough. The problem with formal verification is that the abstraction of the language you're using to obtain your proof may not adequately represent the way the compiler actually compiles the program. Besides, it's possible to engineer a compiler that deliberately miscompiles itself such that it compiles programs with security flaws in them.

    What I'm trying to say is that despite what the zealots say, achieving security in software is impossible. The best we can do is mitigate the risk as well as we can. The lesson to learn from security flaws such as this is that while code reuse is good for maintainability and productivity, for security it's not great. As always, security is a trade-off, and the trade-off here is whether we want to develop easy-to-maintain software quickly or whether we want to run the risk of flaws like this being exploited. Personally, I fall in the code-reuse camp.

    Simon.

    • by mistersooreams ( 811324 ) on Sunday July 10, 2005 @07:48AM (#13025947) Homepage
      There's a compile option in Visual C++ that allows automatic buffer overflow protection

      Is there? I haven't seen it. Even if there is (and I'm inclined to trust you), the reason that no one uses it is that it slows programs down so much. The pointer semantics of languages like C and C++ are fundamentally dangerous, and the only way to make them safe (checking every dereference individually) is painfully slow. I think a slowdown factor of three or four was the general consensus on /. last time this debate came up.

      I guess it's about time for the Slashdot trolls to start calling for the end of C and C++. Strangely, I think I'm starting to agree with them, at least above the kernel level. Is speed really so critical in zlib?

      • by Ckwop ( 707653 ) * on Sunday July 10, 2005 @07:55AM (#13025969) Homepage

        See here [developer.com]

        On the broader issue of whether we should be using other languages, I think that saying "the programmer should just be careful" is a bit misguided. Humans make mistakes, and checking for them is something that computers can do very well. Besides, if coding in such languages is slow, we can use a profiler to find the hot spots and optimise the slow sections in a lower-level language.

        For that reason, I don't really buy the "but it's too slow" argument - I think it's a good trade-off to use a language that doesn't allow buffer overflows.

        Simon.

        • Hardware support would also help. Even 25 years ago the ICL 2900 series systems had a native hardware 'pointer' type (they called it a descriptor). This included the size of the object pointed to, and the hardware would check that any dereferences were not out of bounds.
          • That would be neat indeed; but how could such a hardware pointer be included in the modern, convoluted, ubiquitous x86 instruction set? Yeah, not all the world is an x86, but most of it is.

            Could some similar feature be supported by the operating system, or even the libc, by keeping track of every malloc, calloc, realloc, and whatever, reserving some memory space to store information about malloc'ed objects and their sizes?
            • ummm, most C libraries do that (how do you expect it to deallocate if it doesn't know the size?)

              The problem is the dereference beyond the range of the block. Either all dereferences have to go through the library (slow) or you need hardware support.

              And even hardware support doesn't completely solve the problem. Nothing prevents you from hitting another block that was allocated, just not the one you're looking at.
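
              (A minimal sketch of that bookkeeping, with made-up names: the allocator hides a size header in front of each block, and every access then has to funnel through a range check - the "slow" path described above.)

                  #include <stdio.h>
                  #include <stdlib.h>

                  typedef struct { size_t size; } hdr_t;  /* hidden per-block header */

                  void *checked_malloc(size_t n)
                  {
                      hdr_t *h = malloc(sizeof *h + n);
                      if (h == NULL) return NULL;
                      h->size = n;
                      return h + 1;           /* caller sees memory after the header */
                  }

                  unsigned char checked_byte(const void *p, size_t i)
                  {
                      const hdr_t *h = (const hdr_t *)p - 1;
                      if (i >= h->size) {     /* the per-dereference check */
                          fprintf(stderr, "out-of-bounds access\n");
                          abort();
                      }
                      return ((const unsigned char *)p)[i];
                  }
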
            • by Anonymous Coward on Sunday July 10, 2005 @09:20AM (#13026298)
              Actually, x86 already does, but nobody uses these features. When they fixed segmentation with the 386, segments were now accessed through selectors and offsets. The selectors pointed to one of two tables (GDT - global descriptor table or LDT - local descriptor table). Whenever a memory access was made using a selector, the CPU would look up the descriptor corresponding to the selector. It would check whether the current program had necessary access rights and privilege. If not, then a GPF would be thrown. Segments can be marked as read-only, read-write, executable and maybe a few more combos. Although the GDT and LDT each have only room for 8192 entries, that's still probably more than most programs would need. Each segment could correspond to a single object or array of primitive objects. There would be no buffer overflows because the CPU catches attempts to go beyond the limit of a segment. Stack data couldn't be executed inadvertently because the stack segment would properly be marked as non-executable.

              There are a few reasons, though, why we don't use this system. One is that loading descriptors is slow because it was never optimized in the CPU with the equivalent of a TLB as for paging. The other is that using segmentation requires 48-bit pointers rather than 32-bit pointers, or it requires loading segmentation registers and doing a dance with those. I suppose using longer pointers was a problem back in the days when memory was scarce, but it's hardly a problem now (check out 64-bit). Intel *could have* made segment descriptor access checks and loading fast, but I guess there wasn't a demand for it once paging was available.
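
              (For reference, a sketch of that 8-byte descriptor layout in C, per the Intel manuals; the base and limit bits are scattered across the structure, which hints at why loading one costs more than using a flat pointer.)

                  #include <stdint.h>

                  /* 80386 protected-mode segment descriptor (8 bytes) */
                  struct seg_descriptor {
                      uint16_t limit_0_15;    /* limit bits 0-15                  */
                      uint16_t base_0_15;     /* base bits 0-15                   */
                      uint8_t  base_16_23;    /* base bits 16-23                  */
                      uint8_t  access;        /* present bit, DPL, type (RO/RW/X) */
                      uint8_t  limit_flags;   /* limit bits 16-19 + granularity   */
                      uint8_t  base_24_31;    /* base bits 24-31                  */
                  };
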
          • by Tyler Durden ( 136036 ) on Sunday July 10, 2005 @09:26AM (#13026321)
            Why have hardware support that simply helps prevent buffer overflows when we can use hardware features that solve it? I believe that can be done with the NX bit in many modern processors. For more information, look in the Wikipedia entry for "buffer overflow". Getting all new machines to run with chips with this feature and operating systems to take advantage of it is the key to stopping the overflows, not new languages to generate low-level code.

            The problem I have with the argument, "Sure the software checks in higher-level languages will slow things down significantly, but computers are so much faster now," is simple. Ever notice how, even as memory/video card frame-rates/hard-drive space increase exponentially, the newest applications still tend to max them out? Well, the same thing applies to speed. It's tough to explain to your manager that you are going to purposefully use a language that cripples the efficiency of your newest application to anticipate your own carelessness. (I'm not saying I'm any better than anyone else on this point. I've had my share of careless programming moments myself.)

            Does anyone know of any disadvantages to the NX bit that I don't know about? (Like significant slow-down worse than software checks or possible overflows that it would miss).
        • by ookaze ( 227977 ) on Sunday July 10, 2005 @08:50AM (#13026185) Homepage
          You don't buy the "but it's too slow" argument, OK, that's your right, and it is not surprising when your vision of programming is so narrow.
          We are talking about a low-level LIBRARY here. Which means reentrant, position-independent code, efficiency, and an API easily accessible from any language and compiler.
          I think all of this is still very difficult to do right in languages other than C. It is already very difficult to do in C++.
          Anyway, your language without buffer overflows would not use pointer arithmetic, so it would produce a zlib a lot slower than the one we have now, even if you optimise your high-level language to the max.
          When the basic statements of your language are what take the time, you are toast.
      • by Anonymous Coward
        Speed is and will always be a feature. Let's say you have a program that takes 1 second to open a zip/gzip file and another program that takes 8 seconds to open a zip/gzip file.
        In the first case most users will just open the file to check the contents; fast access means you'll use the files in a different way. An 8-second access time slows you down, so you'll be less inclined to check the contents.

        This goes for everything in computing. Previewing a filter change in your image program before
      • I like programming in C, but I recognize that it is completely inappropriate for many of the applications that it is used for. When modern computers are thousands of times faster than those used for the development of C, we can afford to spend some CPU cycles on reliability and security.
      • Is there? I haven't seen it.

        The updated compiler is included in the latest version of the Platform SDK or the free command-line compiler that Microsoft is giving away. I haven't actually used it, but I would suspect that most people would only activate it while testing.

    • by Da Fokka ( 94074 ) on Sunday July 10, 2005 @07:52AM (#13025958) Homepage

      For some reason your comment is moderated 'troll', probably because you had the filthy guts to utter the Forbidden Word 'Visual C++'.

      However, your question is perfectly valid. Automatic buffer overflow protection only covers the straightforward buffer overflow problems, i.e. array index overflows. In the case of more complex pointer arithmetic, where most of these problems occur, automatic protection is not possible (at least not without losing the option of pointer arithmetic).
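
      (A contrived sketch of the distinction: in the first function the compiler can see buf's declared extent and insert an index check mechanically; in the second, the same off-by-one hides behind pointer arithmetic, with no bound left in view to check against.)

          #include <stddef.h>

          void zero_indexed(char buf[], size_t n)
          {
              for (size_t i = 0; i <= n; i++) /* off-by-one: i == n is out of range */
                  buf[i] = 0;                 /* checkable: index vs. known extent  */
          }

          void zero_pointer(char *p, char *end) /* end: one past the last byte */
          {
              while (p <= end)                /* same off-by-one, now in pointers */
                  *p++ = 0;                   /* no object bound visible to check */
          }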

      • For some reason your comment is moderated 'troll', probably because you had the filthy guts to utter the Forbidden Word 'Visual C++'.

        Actually, 'forbidden term' would be more appropriate. My bad.

      • Automatic buffer overflow protection only covers the straightforward buffer overflow problems, i.e. array index overflows. In the case of more complex pointer arithmetic, where most of these problems occur, automatic protection is not possible (at least not without losing the option of pointer arithmetic).

        Actually, automatic checking is very much possible, and has been for years. For example, Bounds checking gcc [ic.ac.uk] (that website is down right now, so try my page on the subject [annexia.org]). That was written in 1994, and there are newer systems available now which don't have such a serious performance penalty.

        The real solution is to stop writing critical code in C. Other languages provide bounds checking, and are faster and safer than C: for example OCaml [cocan.org] which I prefer nowadays.

        Rich.

        • > For example, Bounds checking gcc (that website is down right now

          Archive.org has a mirror:

          http://web.archive.org/web/20040611220045/http://www-ala.doc.ic.ac.uk/~phjk/BoundsChecking.html [archive.org]
        • by perrin ( 891 ) on Sunday July 10, 2005 @12:59PM (#13027298)
          > The real solution is to stop writing critical code
          > in C.

          Yeah. Right. It pisses me off that whenever security gets spoken of here, someone comes up with this pathetic magic fix.

          Look, for commonly used libraries like zlib, you can't code them in anything but C. That is because only a C library can be called from every other language out there. Code it in, say, Python, and suddenly you've lost all but Python programmers. The same goes for almost everything else. Yay for reuse of code!

          Case in point - you write "for example OCaml which I prefer nowadays". Keyword: 'nowadays'. What happens when you move on to the next big thing that comes along? Can you call libraries written in OCaml from that language? Extremely unlikely. But I bet it supports calling C libraries, because no serious language can avoid that.

          Lots can be done to make C code safer. For example using safe versions of all non-safe functions, and integrating with a proper security model in the OS to drop non-needed permissions. But dropping the only language that can share code with everything else out there is just silly.
          • by Geoffreyerffoeg ( 729040 ) on Sunday July 10, 2005 @02:25PM (#13027715)
            That is because only a C library can be called from every other language out there.

            This is wrong. What you meant to say is that only a library using the C calling convention can be called from every other language out there. Heck, I can write libraries in Visual Basic, of all possible languages, and still have a C program link against them (on Windows, and I'm sure it would work on other platforms if someone ported vbc). If a language can call C functions, then it is likely (with a little effort) that it can have its functions called by C - or by any language that can call "C".

            I haven't used Python or OCaml, but if either of those languages can produce C-style .lib's, .dll's, .so's, .obj's, whatever, then they'll work as well from C. I've seen libraries fully usable in C coded in Delphi, because Delphi designed its object file format to interface with C.

            What's more, you say that C can work with (at least as a library) every other language out there. So what's the problem with a small C-language interface that just calls the Python function and returns the result?
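
            (A sketch of such a shim using CPython's documented embedding API; the module and function names are hypothetical, and a real version would keep the interpreter initialized rather than starting and stopping it per call.)

                #include <Python.h>

                /* C-callable wrapper around a hypothetical Python function
                   mymod.checksum(data) that returns an integer */
                long checksum_via_python(const char *data)
                {
                    long result = -1;
                    Py_Initialize();
                    PyObject *mod = PyImport_ImportModule("mymod");
                    if (mod != NULL) {
                        PyObject *val = PyObject_CallMethod(mod, "checksum", "s", data);
                        if (val != NULL) {
                            result = PyLong_AsLong(val);
                            Py_DECREF(val);
                        }
                        Py_DECREF(mod);
                    }
                    Py_Finalize();
                    return result;
                }
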
            • I haven't used Python or OCaml, but if either of those languages can produce C-style .lib's, .dll's, .so's, .obj's, whatever, then they'll work as well from C.

              Python can't, at least not without embedding the Python interpreter in the DLL.

              What's more, you say that C can work with (at least as a library) every other language out there. So what's the problem with a small C-language interface that just calls the Python function and returns the result?

              Any language supports calling C libraries.

    • by CaptainFork ( 865941 ) on Sunday July 10, 2005 @08:13AM (#13026041)
      Why are we still having buffer overflows? There's a compile option in Visual C++ that allows automatic buffer overflow protection. Does GCC have this switch? If so, why not?

      If so why not? - and if not, why so?

      Why why not but not if not? Why not not?

      • Foam at the mouth and fall over backwards. Is he foaming at the mouth to fall over backwards or falling over backwards to foam at the mouth. Tonight 'Spectrum' examines the whole question of frothing and falling, coughing and calling, screaming and bawling, walling and stalling, galling and mauling, palling and hauling, trawling and squalling and zalling. Zalling? Is there a word zalling? If there is what does it mean...if there isn't what does it mean? Perhaps both. Maybe neither. What do I mean by the word mean? What do I mean by the word word, what do I mean by what do I mean, what do I mean by do, and what do I do by mean? What do I do by do by do and what do I do by wasting your time like this? Goodnight
        -- Monty Python [ibras.dk]
    • There also exist modifications to gcc that perform the same function. A little checking on your part was all that was necessary to not fall into the publicization trap.

      However, all such methods introduce a very noticeable performance penalty.

      Furthermore, there are documented ways of bypassing all such stack protection mechanisms.

      Stop bitching. Audit your goddamn code already. Or would you rather all the bugs be found by the bad guys (this one was found by the Gentoo security team)?
      • publicization trap

        What does this mean? I tried to lookup "publicization" in a few dictionaries, but it doesn't seem to be a word. A little checking on your part was all that was necessary to not fall into the using nonsense words trap ;-) Are you a Bush speech writer? ;-)
      • Stop bitching. Audit your goddamn code already.

        Oh please. When are we going to get past, "I know! Let's just write perfect software all the time!"

        It's well past time to start using typesafe languages for most software - at least in theory. Unfortunately I don't see how this will come about in the OSS world in the foreseeable future, because no such virtual runtime exists.

        The OSS JVMs are closest I guess, but without an OSS CLASSPATH they don't really count. Besides, there's no mainstream support

        • Whereas if there's a previously undetected bug in the runtime, all of a sudden everything's vulnerable. And now you again have to patch every single machine in existence. How is that any different than in this case?
        • by Dun Malg ( 230075 ) on Sunday July 10, 2005 @12:29PM (#13027122) Homepage
          Stop bitching. Audit your goddamn code already.

          Oh please. When are we going to get past, "I know! Let's just write perfect software all the time!"

          There will always be some subset of people who refuse to accept the impossibility of absolute perfection. I believe their thinking goes like this:

          "Anyone can easily write a single line of bug-free code. If you can write twenty of those lines, you can write a bug free function. Write a dozen such bug-free functions and you've got a bug-free class. Write a half dozen or so bug-free classes and you have a bug free library. Using a collection of such bug free libraries you can write a few more bug-free classes held together by some bug-free lines of code and you've got a bug-free application. You're not so stupid that you can't write a single line of bug-free code, are you? There's no excuse for bugs. Just don't make mistakes. It's a choice, really."

          (I never had to work for anyone who said the above, but my brother-in-law, a coder for a large trucking company, had to put up with a "quality consultant" whose entire theory was essentially the above, punctuated with shouts of "attention to detail, people!" in between such lectures. A similar consultant is documented in an email in "The Dilbert Principle". Sadly, it's probably not the same guy.)

      • Stop bitching. Audit your goddamn code already. Or would you rather all the bugs be found by the bad guys (this one was found by the Gentoo security team)?

        While we're at it we should stop doing automobile crash tests too, and just design goddamn safe cars already.
        • Um... are you implying that I'm implying that we should stop testing our programs? Because I don't think that's what I'm saying.

          Shit gets through initial testing; the recalls and lawsuits over defects in cars attest to that, and so do these frantic patchfests over defects in software. That's why security teams exist: to go through existing code and find the critical bugs that slipped through.

    • Your argument assumes that buffer overflows are a natural and unavoidable aspect of C programming. I can show you plenty of examples of C modules without buffer overflows. Writing a complex system without buffer overflows is only a matter of using these modules together with a carefully constructed interface.

      I also doubt your argument that achieving security in software is impossible. People have been doing it for years and years. Unfortunately we are seeing more and more security breaks because the percentage of careless programmers out there has been steadily rising.
      • I also doubt your argument that achieving security in software is impossible. People have been doing it for years and years. Unfortuately we are seeing more and more security breaks because the percentage of careless programmers out there has been steadily rising.

        This is a problem of education. When I was at school you learned to program in C and assembler. If you made a stupid programming error you would notice it real soon. Now they mostly teach languages like Java, which hide most of the defensive programming

    • by n0-0p ( 325773 ) on Sunday July 10, 2005 @08:55AM (#13026204)
      Yes, both Visual C++ and the GCC ProPolice extensions provide stack and heap protection. And in general these techniques have a minimal impact on execution speed. Unfortunately, this does not solve the problem. There are still viable attacks that can be performed by avoiding the stack canaries or heap allocation headers and overwriting other vulnerable data. The probability of developing a successful exploit is lower, but it's still there.

      I don't disagree that building secure applications is hard, but it's certainly not impossible. Modularized code just adds another layer of complication and, potentially, confusion. Most of this can be addressed by documenting the design and interface constraints, and ensuring that they're followed. At that point even most security vulnerabilities are primarily implementation defects. Defects will of course still occur, but the trick is to build systems that fail gracefully.

      Developers must account for defects and expect that every form of security protection will fail given enough time and effort. This is why the concept of "Defense in Depth" is so important. By layering protective measures you provide a level of security such that multiple layers have to fail before a compromise becomes truly serious. Combine that with logging and monitoring, and a successful attack will usually be identified before damage is done.

      Take the above vulnerability and assume it exists in an exploitable form in a web app running on Apache with a Postgres backend. If the server had been configured from a "Defense in Depth" perspective it would be running in a chroot jail as a low-privilege account. Any database access required would be performed through a set of stored procedures or a middleware component that validates the user session and restricts data access. SELinux or GRSecurity would be used for fine-grained control of all running processes. All executables would also be compiled with stack and buffer protection.

      In the above scenario, you see that this single exploit wouldn't get you much. However, most systems are deployed with one layer of security, and that's the problem.
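
      (The chroot-and-drop-privileges layer is only a few lines of C; a sketch with placeholder jail path and ids. The ordering is the important part: chroot() while still root, then setgid() before setuid().)

          #include <stdlib.h>
          #include <sys/types.h>
          #include <unistd.h>

          static void enter_jail(const char *jail, uid_t uid, gid_t gid)
          {
              if (chroot(jail) != 0 || chdir("/") != 0)
                  abort();           /* confine the filesystem view (needs root) */
              if (setgid(gid) != 0)  /* drop the group first, while still root   */
                  abort();
              if (setuid(uid) != 0)  /* then the uid: the point of no return     */
                  abort();
          }
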
    • by aws4y ( 648874 ) on Sunday July 10, 2005 @09:16AM (#13026286) Homepage Journal
      Why are we still having buffer overflows? There's a compile option in Visual C++ that allows automatic buffer overflow protection. Does GCC have this switch? If so, why not? And why are people not using this? We have enough processing power on a typical PC to spend on security such as this. Performance is not an excuse.

      The problem I have with this statement is that any checks Visual C++ may have are at best a fig leaf. Buffer overflow protection is something that has dogged not just programmers but hardware manufacturers for decades now. If security is of such great concern, why not make the assembler do buffer checks? Why not the operating system? Why not the processor? Why not create a RAM infrastructure called SDDR in which the RAM itself does not allow anything to be accessed without a secure hash? The answer to all of these questions is that for every solution, even the stupid one at the bottom, the buffer overflow might take on a new form or the security measures themselves may backfire.

      Ultimately the parent is, IMHO, overreacting: we are always going to have buffer overflows. This is not necessarily a problem so long as people are willing to disclose the vulnerability and work hard to get it patched before an exploit is out in the wild. This is the main argument for why Microsoft software is insecure: known vulnerabilities often go months without being patched. They are getting better, but they are nowhere near the transparency displayed here. They made a mistake in coding, they are attempting to fix it, but until all the vulnerable applications are patched we need to be on guard for signs of malicious behavior from programs relying on zlib. In other words, this is just a part of life in the world of computing.

    • Does GCC have this switch?

      Not officially, but back when I used Gentoo there was -fstack-protector and IIRC one other switch, which implemented stack-smash protection and bounds checking. Despite it being a lot slower in theory, I never noticed a difference in practice (except that when my programs crashed it'd be "Pointer error detected, aborting" rather than "Segmentation fault").

      Despite the grand claims people make about formal verification, even this isn't enough. The problem with formal verification is that the abstraction of the language you're using to obtain your proof may not adequately represent the way the compiler actually compiles the program. Besides, it's possible to engineer a compiler that deliberately miscompiles itself such that it compiles programs with security flaws in them.

      Does formal specification and verification eliminate all bugs and security issues? No. Do
  • by alanw ( 1822 ) * <alan@wylie.me.uk> on Sunday July 10, 2005 @07:45AM (#13025936) Homepage
    Here's the patch to inftrees.c (found on Debian.org). Note the argument order to diff: the first file is the patched version, so the '-' line is the fix and the '+' line is the vulnerable original:
    $ diff -Naur inftrees.c ../zlib-1.2.2.orig/
    --- inftrees.c 2005-07-10 13:38:37.000000000 +0100
    +++ ../zlib-1.2.2.orig/inftrees.c 2004-09-15 15:30:06.000000000 +0100
    @@ -134,7 +134,7 @@
             left -= count[len];
             if (left < 0) return -1;        /* over-subscribed */
         }
    -    if (left > 0 && (type == CODES || max != 1))
    +    if (left > 0 && (type == CODES || (codes - count[0] != 1)))
             return -1;                      /* incomplete set */

         /* generate offsets into symbol table for each length for sorting */
    And here's the E-Week article [eweek.com] with the quote
    However, Ormandy said, "Zlib is very mature and stable, so development is sporadic, but it's certainly not dead. Mark Adler [a Zlib co-author] responded to my report with a patch and an in-depth investigation and explanation within 24 hours, and I believe he expects to release a new version of Zlib very soon."
    • I wonder if it'd be possible to create a binary patch for prebuilt binaries?

      Anyone got some suggestions?
      • by Haeleth ( 414428 ) on Sunday July 10, 2005 @08:37AM (#13026136) Journal
        I wonder if it'd be possible to create a binary patch for prebuilt binaries ?

        For specific builds of individual programs, trivial. There are dozens of good, fast, and robust binary patching systems available - xdelta, bsdiff, and jojodiff are three F/OSS options. Of course, bandwidth is cheap enough these days that most people who use binaries can just download the new version in its entirety.

        A general-purpose fix that could be applied to any application using a statically-linked zlib would be much harder, possibly even impossible. This is one of the major advantages of dynamic linking - that a security update to the library in question can automatically benefit any application that uses the library.
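
        (For the per-build case the tools really are one-liners - bsdiff's documented usage, with made-up file names:)

            $ bsdiff app-1.0.bin app-1.0.1.bin fix.patch   # emit a compact delta
            $ bspatch app-1.0.bin app-1.0.1.bin fix.patch  # rebuild the new binary
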
        • Absolutely agreed on your points - there is one situation where a small binary patch does become useful: a legacy system which simply cannot be swapped out for the 'latest version'. None of us likes those situations, but they do exist :-(

          Thanks for the pointers - amazing, /. being useful after all :-)
  • by aussie_a ( 778472 ) on Sunday July 10, 2005 @07:46AM (#13025943) Journal
    Because Firefox renders PNG completely, it is prone to these sorts of errors. However, there is one browser that won't need a patch issued to be safe from this bug, and that is Internet Explorer. While IE can render PNG a little, it hasn't implemented the full technology. By using IE, you ensure that you will be safe from any bugs that arise from new technologies, such as PNG.

    So next time someone recommends a browser, stop and wonder about what technology the latest browser has implemented properly, without regard to any security issues, and remember that it will be decades before IE implements the technology (if it ever does), so it will be safe for quite some time, by being a stable browser that rarely changes.


    Mods: This is not an attempt at trolling, but a parody of the typical "This is why you should switch to Firefox" posts that appear whenever a vulnerability involving IE comes up. It should be painfully obvious, but then again most of you are on crack.
  • Already patched (Score:4, Informative)

    by Anonymous Coward on Sunday July 10, 2005 @08:03AM (#13025996)
    Both Debian and Ubuntu released the patch for this problem 2 days ago. I assume the other big names in the Linux world have or will follow suit shortly.
    • Re:Already patched (Score:2, Informative)

      by i_like_spam ( 874080 )
      What took 'em so long?

      Gentoo announced the bug July 5th [gentoo.org] and had the patch a day later.
    • Re:Already patched (Score:2, Informative)

      by udippel ( 562132 )
      004: SECURITY FIX: July 6, 2005

      On OpenBSD
  • by Turn-X Alphonse ( 789240 ) on Sunday July 10, 2005 @08:10AM (#13026031) Journal
    even on Microsoft Windows

    NOT WINDOWS! I was just about to move to it from this Linux thing!
  • by yog ( 19073 ) on Sunday July 10, 2005 @08:13AM (#13026044) Homepage Journal
    I'm running RHN alert notification on Fedora Core 3, and my version of zlib has already been updated with a patch for CAN-2005-2096 [mitre.org], the zlib overflow bug.

    It's interesting to read about these as they occur, but it's a nice feeling that my operating system is so well taken care of. Too bad that all personal computers aren't set up for this kind of timely response. I wonder about those millions of library computers, home PCs, small business computers, and other institutional setups where no one even understands the concept of an update, let alone regularly runs the Windows "security" update program.

    Another reason to use Linux!
  • by eldacan ( 726222 ) on Sunday July 10, 2005 @08:31AM (#13026115)
    We've seen many posts on slashdot recently explaining why the packaging systems are no longer desirable (if they ever were), that dependencies are a PITA (even with systems à la APT), etc.
    But when you have a flaw in a very popular library like this, you'll be happy to know that all 354 programs using this library on your system will be safe once the shared library is upgraded... Windows users must upgrade every piece of software manually, and they often won't be able to tell precisely what software may be affected...
    • You're assuming every program dynamically links to libz. That isn't the case at all: statically linked or private copies of this library are common.

      What's more, Windows not having a package manager doesn't mean they're necessarily worse off. Think about it - if the zlib code was in Windows, they could just update the affected DLLs using Windows Update. If it isn't in Windows, then by definition it wouldn't have been packaged anyway.

  • by Ed Avis ( 5917 ) <ed@membled.com> on Sunday July 10, 2005 @08:36AM (#13026133) Homepage
    And this, my friends, is why 'dependency hell' is a good thing. A flaw is found in zlib - no trouble, just run the normal update program that comes with your distribution, 'yum update' or whatever, the centrally installed zlib library will be updated, and all applications will start using it.

    The trouble comes with those software authors that wanted to be clever and to 'cut down on dependencies' and included a whole copy of zlib statically linked into their application. Now you have to replace the whole app to remove the zlib security hole. The dependency on zlib is still there, just better hidden, and in a way that makes upgrading a lot harder.

    If Microsoft had any sense, a zlib.dll would be bundled with Windows and then Office (and many other apps) could use it. But they wouldn't want to do that, partly because it would involve admitting that they use such third-party libraries.
    • by rve ( 4436 )
      because it would involve admitting that they use such third-party libraries.

      A company of that size doesn't sneakily use 3rd-party software. They pay $$$ for the 3rd-party software they include, and they would only use 3rd-party software if it was patented by a 3rd party or prohibitively expensive to develop themselves. I'm pretty sure that does not include zlib.
    • by macemoneta ( 154740 ) on Sunday July 10, 2005 @09:55AM (#13026411) Homepage
      If the argument were that simple, static linking would never occur.

      The flip side of the argument is that installing a broken zlib will break all applications that are dynamically linked, but have no effect on those that are statically linked.

      Remember too that an upgrade to a dynamically linked function means that proper testing must include all software that uses that function. A statically linked application can be tested as a standalone unit.

      The resulting isolation of points of failure and lower MTTR is often seen as an advantage in production environments.

      I remember this specific situation occurring in a production environment I worked in. A common library was updated, causing the failure of multiple critical applications. The ones not impacted? Statically linked.

      Both sides of the discussion clearly have advantages and disadvantages; they have to be weighed to determine the proper risk/benefit.
      • You don't have to test every piece of software that uses a dynamic library. Proper unit tests for the library itself will be enough, as long as they're comprehensive and cover the entire API; the only apps that might still break are those that use the library in ways it isn't intended to be used, but that's really the app developers' own fault then.
      • The flip side of the argument is that installing a broken zlib will break all applications that are dynamically linked, but have no effect on those that are statically linked.

        That's why some people like to use (say) debian stable in production environments: security fixes are backported to the well-tested version of the lib, making a breakage quite unlikely.
      • This is really an argument for versioning dynamic libraries very carefully. The Linux dynamic linker has perfectly good support for avoiding problems like this. Each shared library has a "SONAME" field. Programs linked against the library should be able to use any later version of the library with the same SONAME. If the library changes in some way that breaks desirable behavior, it is supposed to get a new SONAME. The system keeps two sets of symlinks, in addition to the object files: libfoo.so is the latest version, used at link time, while the SONAME links are what programs resolve at run time.
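
        (Concretely, with GNU ld: the SONAME is baked in at link time, and the two symlink sets are what separate link-time from run-time resolution. Version numbers here are illustrative.)

            $ gcc -shared -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2.3 foo.o
            $ ln -s libfoo.so.1.2.3 libfoo.so.1   # run-time name, matches the SONAME
            $ ln -s libfoo.so.1 libfoo.so         # link-time name, found by -lfoo
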
    • But what if you are running it in an embedded system that has been deployed on a spacecraft? It's NOT easy at all to make that change! Zlib is in a LOT of places where data compression is needed, as the algorithm and code are free. We found this zlib bug close to 3 years ago at NASA IV&V during code analysis for the Swift mission, using an automated tool (CodeSurfer). The tool told us we had a possible buffer/memory overflow problem in the zlib code at a certain spot. We had to figure out what was causing
  • It affects countless software applications, even on Microsoft Windows.

    I thought Microsoft was proprietary and didn't use open source like zlib? Snicker. I guess Microsoft is being assimilated.

  • by putko ( 753330 ) on Sunday July 10, 2005 @08:38AM (#13026138) Homepage Journal
    It is really something that this flaw impacts so many applications.

    This situation is unnecessary; the problem is that C is not a type-safe language, like ML, CAML, Haskell, Common Lisp, Scheme, Java, etc.

    You could write that code in SML/CAML/Common Lisp and likely get it to run as fast as or faster than the original (particularly if you did some space/time trade-offs à la partial evaluation). Integration with the applications in the form of a library would be the tough part.

    Here's a provocative bit from Paul Graham (Lisp expert entrepreneur) on buffer overflows [paulgraham.com].
    • The problem with writing code in a typesafe language is, as you have noted, integration with libraries. The main reason that so much code is written in C is that C needs pretty close to zero runtime support.

      Not only do languages like Lisp need a fairly extensive runtime, they need dynamic memory allocation and garbage collection, and when you share garbage-collected objects between languages (potentially between multiple languages each with their own allocation models) you're asking for a whole new kind of
    • by Florian Weimer ( 88405 ) <fw@deneb.enyo.de> on Sunday July 10, 2005 @09:04AM (#13026230) Homepage
      Common Lisp (the language) is not completely safe; it permits unsafe constructs which can even lead to classic buffer overflows. Most implementations, when optimizing, omit bounds checks that are not mandated by the standard, so these problems can occur in practice.
    • by Anonymous Coward on Sunday July 10, 2005 @09:08AM (#13026251)
      the problem is that C is not a type-safe language

      Please. This is a very boring misconception about types. It's not a type error. It's a pointer arithmetic error. Nothing a type system à la ML, Java, CL, whatever would have corrected.

      However, mandatory bound checking on arrays, at runtime, in those languages would have caught the problem.

      There exist type systems that can catch these kinds of errors, but they are very cumbersome, and not very practical.
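
      (A few lines make the distinction concrete: everything below is perfectly well-typed, so no ML-style type checker objects, yet the store is out of bounds; only a mandatory runtime bounds check catches it.)

          int main(void)
          {
              int buf[4];
              int i = 10;    /* a well-typed int index                     */
              buf[i] = 0;    /* type-correct but out of bounds: undefined
                                behavior in C, a runtime error in Java     */
              return 0;
          }
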
      • However, mandatory bound checking on arrays, at runtime, in those languages would have caught the problem.

        There exist type systems that can catch these kinds of errors, but they are very cumbersome, and not very practical.


        Obviously the right thing to do would have been to use Ada or SPARK, which have runtime checks, and in the case of SPARK extended static checking and formal verification. The real question is: how secure do you want to be? For a random desktop application such measures might be going a bit
      • the problem is that C is not a type-safe language

        Please. This is a very boring misconception about types. It's not a type error. It's a pointer arithmetic error. Nothing a type system à la ML, Java, CL, whatever would have corrected.


        No, you are wrong. The "pointer arithmetic error" that you mention is called (by experts) a "runtime type error".

        A language like C that allows you to make type errors is "not type safe", which is why I wrote what I wrote.

        Furthermore, not all programs that use ar
        • by po8 ( 187055 ) on Sunday July 10, 2005 @11:34AM (#13026847)

          Why does this topic bring the, uh, technically challenged out of the woodwork?

          I'm a Ph.D. computer science professor with 20 years of experience in design and implementation of programming languages, and the co-author of a C-like programming language [nickle.org] featuring static and dynamic typing and runtime operand checking. The parent poster is confused.

          Static type checking involves automatically recognizing type-unsafe operations at compile time. In many programming languages, including C, if you write "s" - 1, the - operation is ill-defined, because the left-hand operand is of the wrong type: i.e., there is no string that is a legal operand to the - operator. The compiler can detect this at compile time and refuse to compile the program.

          Dynamic type checking involves automatically recognizing type-unsafe operations at run time. In many programming languages, such as Python, if you write "s" - 1 inside a function definition, the compiler will not detect the problem, because the general problem of detecting this kind of error is unsolvable unless one restricts what programmers can write. Instead, the execution engine can detect the problem at runtime when the - is evaluated and refuse to continue executing the program.

          Runtime operand checking involves automatically recognizing at runtime that an operation has an operand that, while of a type that might be legal for the operation being performed, is nonetheless illegal for that operation. In many languages, including Python, if you write 1 / 0 no error will be reported at compile time, because detecting such errors is in general impossible. Instead, the execution engine can detect the problem at runtime, and prevent execution from continuing.

          (Of course, there also is such a thing as static operand checking, which bears the same relation to runtime operand checking that static type checking does to runtime type checking. This is a hot research topic right now.)

          C's problem is that it (a) does not have a "safe" static type system, (b) does not have any dynamic type checking, and (c) has no operand checking. This combination of misfeatures is incredibly and obviously error-prone; offhand, I can think of no other popular language (not derived from C) that is broken in this fashion. Fixing (a) and/or (b) is not sufficient---(c) also needs to be fixed. Java, for example, has shown that this can be done in a C-like compiled language without noticeable loss of efficiency. (This was shown for PL/I more than 30 years ago, so it's no surprise.)

          The parent post gives an example in which the index argument to an array dereference is of the correct type and has correct operands. If x[2] was evaluated, this would be an operand error, since the combination of arguments x and 2 is not a legal operand combination for the array dereference operator. With the statement as given in the parent post, I'm not sure what principle it was trying to illustrate. I think, though, that it doesn't much matter.

  • very complex code (Score:5, Interesting)

    by ep385 ( 119521 ) on Sunday July 10, 2005 @09:16AM (#13026283)
    Has anyone read the zlib code? While the author clearly tried to make it readable, it's still very complex, and it's very hard to see at a glance - or even after many glances - where potential buffer overflow problems may exist (or even where it might fail to implement the deflate algorithm). C is a great language for writing an operating system, where all you care about is setting bits in a machine register, but this algorithm really taxes its abilities.

    For comparison, here [franz.com] is the deflate algorithm written in Common Lisp. It all fits neatly into a few pages. This is a far better language-comparison example than the oft-cited hello world.
  • by MSDos-486 ( 779223 ) on Sunday July 10, 2005 @09:25AM (#13026316)
    Is it just me, or does it seem the end of the world will be caused by a buffer overflow?
  • BSD Status (Score:5, Informative)

    by emidln ( 806452 ) <adam4300@kettering.edu> on Sunday July 10, 2005 @09:31AM (#13026329) Homepage
    For the undead crowd out there:

    OpenBSD is affected, and was patched [openbsd.org] on the 6th of July
    FreeBSD is affected, and was patched [freebsd.org] on the 6th of July
    NetBSD's base system is not affected, but the zlib in pkgsrc is, and was patched [netbsd.org] on the 8th of July
  • by timbrown ( 578202 ) <slashdot@machine.org.uk> on Sunday July 10, 2005 @09:39AM (#13026359) Homepage
    It's all very well advocating the use of languages that mitigate heap and stack overflows et al. But as the recent XML-RPC vulnerability in PHP applications should reinforce, the bad guys are migrating to higher-level language attacks too.

    The fundamental problem is that, all too often, different components of a system are implemented with different input validation. For example, a web browser component running a web application may accept all text input, whereas the backend database is only expecting a subset of text inputs.

    Developers should establish what input the lowest-level and highest-level components require to get the job done, and validate input into all components against these requirements. Where the lowest- and highest-level components have different requirements, the developer needs to define some method of encoding values that would otherwise be considered invalid, and ensure that all components enforce this encoding.
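
    (A sketch of that advice at its simplest: validate against an explicit whitelist that both the front end and the database layer agree on. The permitted character set here is only an example.)

        #include <string.h>

        /* accept only characters that every layer of the system can handle */
        static int input_is_valid(const char *s)
        {
            static const char ok[] =
                "abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "0123456789_-";
            size_t len = strlen(s);
            return len > 0 && len <= 64 && strspn(s, ok) == len;
        }
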
    • Input is just one interface... every internal interface in a system also needs to be designed so that it only accepts known safe or encapsulated data, and if there's limits on safe input they need to be something that the upper layer can reasonably check without having to effectively replicate the component it's passing the data to.

      Let's say, for example, that this overflow involves a pathological, artificially created compressed stream. To test for it, you may have to implement most of the algorithms in zlib
  • If anything, this story is about how good modularization is. Simply updating one shared library (libz.so or zlib.dll) will fix the problem for all of your installed applications. No?
    • If anything, this story is about how good modularization is

      Yes.

      Simply updating one shared library (libz.so or zlib.dll) will fix the problem for all of your installed applications. No?

      No. Some applications ship their own zlib and/or statically link to it, circumventing the benefits of modularity.

  • by yajacuk ( 303678 ) on Sunday July 10, 2005 @11:44AM (#13026903) Homepage
    I ran the AOL Spyware protection twice this week and both times I found spyware in the Zlib library.
    Here is a sample of the Scan log.
    ASP Version: 1.0.77 Definition Date: 01-05-05 Date: 7/6/2005 5:02:02 PM
    Action: Found: c:\Program Files\daimonin\client\zlib.dll
    Spyware Name: Diablo Keys
  • Yawn (Score:4, Insightful)

    by ChiralSoftware ( 743411 ) <info@chiralsoftware.net> on Sunday July 10, 2005 @12:01PM (#13027000) Homepage
    As long as we're using unsafe languages to handle untrusted data, we will keep having these problems.

    Zlib itself contains 8000 lines of code. Not very big, is it? It's been around for years and is widely used, so in theory a lot of people have been able to look at it. And yet, after all these years, they are still finding buffer overflows in these 8000 lines of code.

    Zlib was not written by monkeys. It was written by very smart, experienced coders. And yet somehow they are not able to write 8000 lines of code without multiple serious buffer overflows.

    As long as code like this is written in C we're going to have these problems.

    Saying, "there's a critical buffer overflow in a library written in C" is as newsworthy as saying "when I bang my head against the wall I get a headache."
