KDE Heap Overflow Vulnerability Found 233
sayanchak writes "An incorrect bounds check has been discovered in kjs, the JavaScript interpreter engine used by Konqueror and other parts of KDE, that allows a heap-based buffer overflow when decoding specially crafted UTF-8 encoded URI sequences. It might allow malicious JavaScript code to perform a heap overflow and crash Konqueror or even execute arbitrary code. Source diff patches for KDE 3.2.0 - 3.3.2 and KDE 3.4.0 - 3.5.0 are available."
Variable names? (Score:4, Insightful)
Re:KJS is also used by Apple in Safari (Score:5, Insightful)
Re:This is why I use Windows (Score:5, Insightful)
The complaint about MS is the running of said things in or at the kernel.
The only people who make that complaint are people who don't have a clue what they are talking about. Internet Explorer doesn't run "in or at" the kernel. It runs with the user's privileges, just like any other application.
The problem with "Internet Explorer" is that its rendering engine, Trident, is embedded by a great many applications, so any vulnerability in Trident is also a vulnerability in those applications. The same is true of KDE/KHTML/KJS. If a vulnerability is found in, say, KHTML, it also means KMail and Amarok are vulnerable.
Unfortunately, this is the downside to modern component-based strategies - it's not a Microsoft-specific problem. However, the benefits of these strategies vastly outweigh the downsides.
And now the obligatory... (Score:4, Insightful)
There are patches already available. Fix it. Move on. Mind you, this is not like what happens with "some other operating systems," where they have to be berated by users into issuing patches...
Arbitrary code with what privileges? (Score:2, Insightful)
Re:KJS is also used by Apple in Safari (Score:3, Insightful)
And the question on everybody's lips... (Score:3, Insightful)
Re:This is why I use Windows (Score:5, Insightful)
Of course software has bugs. Given that, the key thing is how the software authors treat such bugs. Open Source authors tend to be very honest about security holes and provide fixes immediately, while Microsoft tends to soft-pedal and delay.
The problem is not the bugs, it is how they are handled.
--
Evan
Re:Arbitrary code with what privileges? (Score:4, Insightful)
Not directly, unless you run as root. On the other hand, local root kernel vulnerabilities may be exploited, and the Linux kernel has new ones discovered frequently.
Re:And now the obligatory... (Score:5, Insightful)
There are source patches available. That's fine for you and me, but it's no good for the increasing number of "normal" users who are moving to Linux, who wouldn't be able to apply them if you showed them how. They still have to wait on binary patches from their vendors.
Mind you, this is not like what happens with "some other operating systems," where they have to be berated by users into issuing patches...
That's mostly because the self-same users berated them into only releasing patches once a month at most; they can't have it both ways. I'd also be willing to bet that patches from commercial OS vendors go through rather more rigorous QA processes than this; support contracts and the like make that essential.
Re:This is why I use Windows (Score:1, Insightful)
The Mozilla projects are far from "finished", yet they keep adding features instead of fixing bugs. We wind up with a somewhat slow UI, a huge memory footprint, and random crashes. The OpenOffice people are too busy needlessly throwing in features and coding Java into a C++ program instead of finishing the current version. We wind up with a slow UI, very slow startup time, a huge memory footprint, and reliance on C libraries, C++ libraries, *and* the Java runtime.
I still use things like Firefox and OpenOffice, because they're still the best ones out there, but I have no devotion to them. When the devs stop playing happy little games throwing in a bunch of code from their favorite language of the year, building an IRC client into the browser, or messing about with plugin interfaces well after version 1.0, and just finish what they have first, then I'll be very happy. Instead we have tons of software that is *almost* done.
The Linux kernel doing exactly what you mentioned has a lot of people unhappy. On the few Linux servers I keep around, I use Slackware and 2.4.x kernels. I don't want the machines to crash, so I don't trust 2.6.x. They don't need new features; they need stable code, and the constant feature-add game doesn't get me stable code.
What these games *have* done is get a lot of people, such as myself, to use more stable platforms. Some people choose a Linux distro like Debian. Many others just jumped over to BSD or Solaris.
So, in the end, I'm willing to run Linux on my workstation, but I try to avoid running it on my servers. I want to minimize the potential for that server to crash, and Linux isn't giving me that anymore. About two years ago, I waved goodbye to Linux, after having used it heavily since 1993.
Re:This is why I use Windows (Score:2, Insightful)
Is it? It also means just one place to fix the bug, because there are fewer people reimplementing functionality. The real problem with Microsoft is their sloppy bug fixing.
Just goes to show... (Score:3, Insightful)
Re:This is why I use Windows (Score:2, Insightful)
I do admit they are doing this less now since NT has taken over, but the sole reason for the instability of early versions of Windows was that everything ran in the kernel, and one app could violate another app's memory and cause a GP fault. Windows 3.1 was atrocious.
Re:Bullet-proof JS (Score:1, Insightful)
Second : I hate wasting my life patching the OS on my computer.
Re:This is why I use Windows (Score:2, Insightful)
If it's available within hours, someone has failed to test it properly; there are thousands of combinations of hardware and software, and God knows what a quick-and-dirty patch can break.
Re:Variable names? (Score:1, Insightful)
unsigned long xx = (octets[2] >> 4) & 0x03;
unsigned long yyyy = octets[2] & 0x0F;
unsigned long zzzzzz = octets[3] & 0x3F;
xx is two bits, yyyy four bits and zzzzzz six bits.
So it is not totally stupid...
Meaningful names (Score:4, Insightful)
If you study the code a little, you'll see there is some logic to those names: the length of the variable name also reveals the number of bits stored by that variable. "xx" stores a two-bit value, "zzzzzz" stores a six-bit value.
That's not obfuscated, since if you know the scheme, it improves readability.
(The code doesn't really look obfuscated to me, but OTOH I have been programming C++ for over 10 years.)
Re:This is why I use Windows (Score:2, Insightful)
Many (slow) eyeballs do what now? (Score:3, Insightful)
By the time you and I heard about it, there was already a fix. On the other hand, if it's existed since 3.2 onward, that means this flaw has been in place since at least February, 2004. The fact that it's public now and there's a patch now doesn't mean that there wasn't some sharp-eyed and black-hearted soul who spotted this hole years ago, and has been quietly taking advantage of it ever since.
Re:This is why I use Windows (Score:4, Insightful)
Yes, Explorer.exe will normally load mshtml.dll to render the info pane for folder contents. Yes, you can still turn that off and use classic folder view. In that case, Explorer.exe doesn't use the rendering engine of IE (unless you use HTML-based Active Desktop, but NOT web folders, a somewhat surprising combination). It's as simple as that. As another comment noted, the common controls were updated along with IE, with installing IE being a recommended way to redistribute the new DLL.
Also, if an administrative user logs in, it will be with the admin profile. There is no immediate reason that someone only using Explorer.exe to browse the HD, even with web folders active, will somehow pick up a known exploit for Trident/MSHTML.
Autorendering of HTML mail has historically been a much worse decision than the use of HTML in the user interface of some local apps. Still, that is a decision that makes some sense, at least if one accepts the idea of people wanting formatted mail at all.
Checking malloc() is an obsolete practice (Score:5, Insightful)
Yes, but on an embedded system you almost always have an init phase where you allocate all the memory you need at startup, so you have an init() function or similar, containing checked mallocs, that either fails or succeeds at startup. Then you have *one* cleanup path. You only guarantee that your application handles up to N resources of each type at runtime (100 connections, 30 open files, whatever).
Checking malloc in the middle of your code is essentially an obsolete practice for real-world systems -- it's essentially impossible to cleanly back out of all failures, and nobody is going to test all possible failure conditions. The fact that Linux uses an OOM killer and overcommits by default is just a recognition of this fact.
I know this goes against what some people learn, but let me ask those people who carefully check every failure:
* Do you actually test each bit of cleanup and error-recovery code? I mean, are you using a malloc()/free() wrapper that causes *every* path to be invoked? Otherwise, you're just bloating your application with masses of untested code.
* Are you certain that you can't run out of stack space, not just heap space? Particularly if you're using C++ and local objects, I'm pretty dubious that you're so sure. Do you really know, for certain, how much space a random STL object uses?
Systems these days have so much memory and virtual memory that running out of memory is almost *always* a bug. It's a pretty safe bet that the allocation that causes your app to run out of memory is the culprit. Even if Linux didn't have an OOM killer, I'd feel safe in almost all circumstances just wrapping malloc() with an abort() on failure.
Some applications might be fed huge workloads inadvertently. Those are better off adding checks specifically for those workloads. For example, if you load a huge image in the GIMP, you'll get a warning based on the size before the GIMP attempts to do memory allocation, not after the failure happens.
ummm.. they already do (Score:5, Insightful)
Mod down (Score:3, Insightful)
b) For one, KDE never uses STL, because when KDE was written STL was not available on all the platforms it needed, and for two, Qt's containers are generally just better and more efficient than STL containers anyway.
Re:This is why I use Windows (Score:4, Insightful)
Please for the love of God tell me you were kidding?
The HTML rendering Engine is NOT Explorer, nor is it even Internet Explorer. And Explorer is NOT Internet Explorer. Understand?
Sure, Explorer can call features from the HTML rendering engine, just like it can call features from the BMP and font rendering engines. But this does NOT mean Explorer itself is a PART of the HTML rendering engine.
Additionally, the Taskbar, Start Menu, etc. are not rendered using the HTML engine, and the only time the desktop is rendered alongside the HTML rendering engine is when Active Desktop, HTML apps, or pages are running on the desktop. Just because Explorer can use the HTML engine does not make it the HTML engine. You could set up the system so that Explorer NEVER calls any functions out of the HTML engine if you really wanted...
Just remember this, the HTML rendering engine in Windows is what everyone hates. It is not Explorer nor even Internet Explorer. Internet Explorer is a fairly small application that wraps around the HTML engine technologies to give the engine an application interface.
As for HTML rendering being allowed in the OS and other applications, I think the whole argument is a place where people are misled or try to mislead.
Almost ALL modern OSes do this or use this to one extent or another. Additionally, even if the OS interface doesn't provide a 'common' HTML rendering technology for third party applications, many third party applications either tap into or strap on HTML engines for everything from part of their UI to their help systems.
So before people get on their anti-MS soapboxes, remember: ALL OSes do this, or allow it, in one way or another. From OS X to Solaris. PERIOD.
Re:Checking malloc() is an obsolete practice (Score:3, Insightful)
Checking malloc in the middle of your code is essentially an obsolete practice for real-world systems -- it's essentially impossible to cleanly back out of all failures, and nobody is going to test all possible failure conditions. The fact that Linux uses an OOM killer and overcommits by default is just a recognition of this fact.
It is not impossible to cleanly back out of failure states. It is difficult. Don't confuse the two. Your job as a programmer is not to make your life easy, it's to write programs that function correctly. Handling memory errors correctly (say, not losing the user's state or corrupting their current working file due to an allocation error) is part of the program's job. Advice like this leads to the type of apps that die mysteriously for no apparent reason, leaving no trace or debug message of what went wrong.
* Do you actually test each bit of cleanup and error-recovery code? I mean, are you using a malloc()/free() wrapper that causes *every* path to be invoked? Otherwise, you're just bloating your application with masses of untested code.
No. I test most. I have good confidence in my coding skills for the rest. Yes, there are bugs; that is inevitable. But handling most error cases correctly (obviously I try for total coverage) is better than handling none of them.
* Are you certain that you can't run out of stack space, not just heap space? Particularly if you're using C++ and local objects, I'm pretty dubious that you're so sure. Do you really know, for certain, how much space a random STL object uses?
Well, in my embedded systems, I do. On more conventional machines, not really. But I do know that any well-written code will not allocate excessively large objects on the stack and will avoid recursion. My code will never use more than a few kilobytes of stack space (or hundreds of bytes, in embedded situations), and if that much is not available, the program never had a chance of working correctly.
Systems these days have so much memory and virtual memory that running out of memory is almost *always* a bug. It's a pretty safe bet that the allocation that causes your app to run out of memory is the culprit. Even if Linux didn't have an OOM killer, I'd feel safe in almost all circumstances just wrapping malloc() with an abort() on failure.
This is an incorrect assertion and a terrible programming practice. Imagine your Linux box is running 1000 processes: 1 sshd and 999 apache instances. If traffic spikes and the apaches start consuming more memory, the system may not have enough to give them, and they may gracefully handle it. But if sshd was coded to your standards, it would crash if someone tried to initiate a connection, because it did not check its malloc return. I think we can all agree that under no circumstances should increased memory use by apache (not a bug) cause sshd to crash.
Some applications might be fed huge workloads inadvertently. Those are better off adding checks specifically for those workloads. For example, if you load a huge image in the GIMP, you'll get a warning based on the size before the GIMP attempts to do memory allocation, not after the failure happens.
And how exactly does the GIMP know how much memory is available to it at any time? It may be running on a box with 16 megs of RAM, or 16 gigs. The system may have memory allocated to other things right now that was available five seconds ago. I'd like to see portable code that can predict when nontrivial resource allocation will fail.
All in all, this kind of programming leads to the worst kind of software. Code with no regard for memory utilization. Code that does not know if realloc worked and goes on silently corrupting memory until it fails later down the line. Code that leads to memory leaks, buffer overflows, and denial of service attacks due to its disregard for error conditions. I hope that I shall never encounter your code in my lifetime.
--
Any (nontrivial) program will be buggy.