Linux Worm Spreading, Many Systems Vulnerable
sverrehu writes "A GNU/Linux worm exploiting a bug in OpenSSL spreads through vulnerable Apache web servers, according to Symantec. The worm, which was first reported in Europe, targets several popular Linux distributions. See also the SecurityFocus vulnerability listing for the OpenSSL bug." sionide also writes: "Netcraft recently published a report which explains that a large portion of Apache systems are still unpatched (halfway down). To protect yourself please upgrade to OpenSSL 0.9.6g."
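Before upgrading, it's worth checking what your system actually reports. A minimal check sketch (assumes only the standard `openssl` command-line tool; note that a vendor build with the fix backported may still print an older version string, as several comments below point out):

```shell
# Print the OpenSSL version string. 0.9.6e or later includes the fix;
# vendor-patched packages may report an older number despite being safe.
check_openssl_version() {
    if command -v openssl >/dev/null 2>&1; then
        openssl version
    else
        echo "openssl binary not found"
    fi
}
```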
Finally (Score:5, Funny)
Re:Linux is No Match For Microsoft ! (Score:3, Insightful)
I know this is Slashdot, but some evidence for Symantec's anti-Linux bias might be useful and relevant.
And in reference to some other posts about GNU/Linux not being Apache and Microsoft Windows not being IIS, remember that IIS and Windows are ostensibly developed by the same company, whereas GNU/Linux and Apache are separate open source projects. Blame can be distributed much more broadly in the GNU/Linux world.
Open Source Vulnerable Too (Score:3, Insightful)
Re:Open Source Vulnerable Too (Score:3, Informative)
That's what makes open source software overall more secure -- the turnaround time with patches is a lot faster.
Re:Open Source Vulnerable Too (Score:2, Interesting)
This power costs money. The administrator would have to download the sources, apply the patch, and - most importantly - configure the build so that the proper things get built and other bits get left out. Getting a live server back takes more than just typing ./configure. IOW, you need a smarter and therefore more expensive administrator to actually enjoy this power.
That's what makes open source software overall more secure -- the turnaround time with patches is a lot faster.
I am very grateful for all the open source software I've ever used, but I must point out that this turnaround time usually doesn't include what a responsible commercial outfit would call QA.
Re:Open Source Vulnerable Too (Score:4, Insightful)
Airline pilots are highly trained and constantly upgrade their skills, and are highly paid.
Likewise, programmers who run enterprise-strength systems have heavy responsibilities. This is not something one ought to go into for the money, but rather, for the love and dedication to the craft. (not aircraft)...
As far as QA goes, I'll tell you what: if the system is designed correctly, it will need very little QA. I know this because some systems can never get it right, no matter how much QA goes into them, because of fundamental design flaws.
And yes, designing computer software is hard. Like heart surgery. One slip of the old wrist and it's flatline.
Re:Open Source Vulnerable Too (Score:2)
A good QA process doesn't just test completed code. A good QA process gets involved at all levels of development, and would have at least a fighting chance of catching those fundamental design flaws.
Re:Open Source Vulnerable Too (Score:2, Interesting)
How many webserver administrators have the skills to look at the Apache sourcecode (or in this case, the OpenSSL sourcecode), find the bug, and fix it? If they had such skills they probably wouldn't be working as webserver administrators to begin with. The often touted ability to "go in and fix things" or even to simply "contribute" is highly overrated. Who found and fixed this bug? Was it some random user, or one of the original developers?
Not overrated. (Score:5, Insightful)
All the skill it should take is to apt-get upgrade or up2date, or whatever the distro in question uses for updates. Debian woody had the patch posted immediately. So the skills needed to update your Apache system are no different from those needed to patch Code Red (which, a year after its creation, is still roaming around)
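The "no special skills" path above boils down to two one-liners. A sketch, defined as functions rather than run directly (both assume root and a correctly configured package source; the woody security archive line in sources.list is the usual prerequisite on Debian):

```shell
# Distro-provided update paths: this is the entire skill requirement.
update_debian() {
    apt-get update && apt-get upgrade   # pulls the patched packages from woody
}
update_redhat() {
    up2date -u                          # Red Hat Network equivalent
}
```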
The often touted ability to "go in and fix things" or even to simply "contribute" is highly overrated. Who found and fixed this bug? Was it some random user, or one of the original developers?
Well, judging by the advisory from the OpenSSL team [openssl.org] (dated July 30, btw, this is hardly a new issue) and a cursory glance over the developer list [openssl.org], the issue in the advisory was not found by anyone on the development team. So, I'm going to have to go ahead and disagree with you. I consider the ability of users to find and patch security vulnerabilities to be a benefit of free software that simply cannot be overstated.
Having said that, I'll concede the obvious. Most end users are not skilled in the ways of finding or fixing bugs. However, there are zero end users of proprietary tools who even have the option of patching security holes in the software upon which they depend.
So, while some may say "But any user can find/fix security holes when it's free software!" I'll simply say "But any user has the freedom to find/fix security holes when it's free software!" Whether or not the user has the skills is irrelevant, what's important is that the option is there.
Re:Open Source Vulnerable Too (Score:2, Interesting)
I believe that the real advantage to open-source is that programmers (like me) can't get away with crap designs. When I design open source software, I know I can't get away with hard-coded keys or fixed-length buffers. With closed source, that kind of sloppiness stays hidden and is unfortunately accepted practice.
Of course, my open-source mindset has helped make my closed-source designs much more secure. I can't speak for anyone else.
Ozwald
Re:Open Source Vulnerable Too (Score:2)
Sorry, that's BS! (Score:2)
No, really.
Ok, a poll: how many of you went into the source code today and fixed the vulnerability on your own? Come on, raise your hands...
That's what I thought. People just have to wait for either the distribution to release an updated package, or at the least the package maintainer to release a patch or updated release. NOBODY (ok, not many people) goes in and hacks the source themselves to fix it. It's better than closed source, but not as much as you make it appear to be.
Yes, you can coordinate with others to make a fix, but you can't sit there and tell me Joe Sysadmin will sit there and craft his own patch to close a hole. It doesn't work that way.
Go ahead, take my karma...
Re:Open Source Vulnerable Too (Score:4, Insightful)
Re:Open Source Vulnerable Too (Score:2)
Maybe the stats aren't as bad as they think... (Score:5, Informative)
"Almost half of the 22 million Apache HTTP sites found by the survey are running Apache/1.3.26, whilst only around a quarter of the Apache SSL sites are running this version, which fixes the chunked encoding vulnerability."
Does this statistic take into account that some Linux distros (for example, Red Hat) backport the bugfixes to earlier versions of Apache/OpenSSL/etc.?
All of our servers are running Apache 1.3.23, but it's 1.3.23 release 14 which DOES include the fixes for the bugs mentioned on that page. If they are simply going by the Apache version number reported, then they may be over-estimating the number of vulnerable web servers by several million...
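The version-plus-release distinction above is easy to check on an RPM-based box. A sketch (the package name `apache` matches Red Hat 7.x; newer distributions call the package `httpd`):

```shell
# The release number after the second dash carries the backported fixes:
# e.g. apache-1.3.23-14 is version 1.3.23, release 14.
query_apache_release() {
    if command -v rpm >/dev/null 2>&1; then
        rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' apache
    else
        echo "rpm not available"
    fi
}
```

A banner survey like Netcraft's only ever sees "Apache/1.3.23", never the "-14" that actually tells you whether the box is patched.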
But you all know what they say about statistics anyway...
Re:Maybe the stats aren't as bad as they think... (Score:2)
Hmm, hands up: who installs a compiler on a WEB SERVER?!
Re:Maybe the stats aren't as bad as they think... (Score:2)
Re:Maybe the stats aren't as bad as they think... (Score:3, Interesting)
Ahem, did you READ what you're replying to?
Many Linux boxes though run more than just Apache and many people need gcc
Again, try READING the post, then attempt to understand what he's saying.
Here, I'll summarize for you:
PROPERLY CONFIGURED PRODUCTION MACHINES SHOULD NEVER HAVE COMPILERS ON THEM
YOU COMPILE STUFF ON NON-PRODUCTION MACHINES, AND INSTALL WITH A PACKAGE MANAGER
many people need gcc
Not on production boxes they don't.
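The build-elsewhere workflow described above can be sketched roughly as follows. All host names and the package file name are hypothetical placeholders:

```shell
# Hypothetical build-then-deploy flow: compile on a staging box, ship
# only the binary package to production. No compiler ever touches prod.
build_and_deploy() {
    # On the build machine: compile and produce a binary package.
    ssh buildhost 'rpmbuild -bb apache.spec'
    # Copy only the resulting package to the production box...
    scp buildhost:/usr/src/redhat/RPMS/i386/apache-1.3.26-1.i386.rpm prodhost:/tmp/
    # ...and install it with the package manager.
    ssh prodhost 'rpm -Fvh /tmp/apache-1.3.26-1.i386.rpm'
}
```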
if they've gained access to your box, what stops them from pulling down a GCC package for your architecture
This is a good question; simply put, because it would be lots and lots of work that can be undone very easily.
It's not a big deal for a hacker to root a box and do something like that, but it's a HUGE deal for a worm to do it - according to the bugtraq discussion, this current version of the worm frequently gets the attack wrong, because it misidentifies the Apache version and platform, and gets the injection vector wrong. Now imagine if it had to identify not just the Apache version and the architecture, but the whole machine environment, so that it can come up with a working build environment?
Imagine coming up with a way to identify every possible platform out there, and then obtaining or compiling a version of GCC for each one, and then storing it, so that the worm can automatically retrieve it. (GCC - with all of the includes, libraries, etc. is quite large.)
Then you have to make the correct version of GCC available for the worm to download - which means that you either have to identify yourself (you put it on your own server), or you have to put it on a compromised server, and hope that the admin doesn't notice the gigabytes of tarballs now being served by his machine.
And regardless of which way you choose, you've just made it ridiculously simple to negate all the hard work you've just done: once the white hats find out where the data is coming from, they just notify that server's upstream connection, and your work is for naught.
Re:Open Source Vulnerable Too (Score:2)
Re:Open Source Vulnerable Too (Score:2)
Repeat after me: Apache is the most popular web server. Microsoft does not make the most popular web server. Apache is the most popular web server. Microsoft does not make the most popular web server. Apache is the most popular web server. Microsoft does not make the most popular web server. Apache is the most popular web server. Microsoft does not make the most popular web server.
There. Hopefully we can put an end to the "hackers just target microsoft because they're most popular" argument right now.
Re:Open Source Vulnerable Too (Score:2)
Umm no, the argument still stands. You may not be aware of this, but MS does more than just make webservers. That's why they earned their reputation.
It's the sort of detail that gets flushed out when you take a moment to understand what people are saying.
Re:Open Source Vulnerable Too (Score:2)
Re:Open Source Vulnerable Too (Score:2)
Re:Open Source Vulnerable Too (Score:2)
Re:Open Source Vulnerable Too (Score:5, Informative)
A couple of important things mentioned though;
1) It must find /bin/sh to spawn a shell
2) It needs gcc to compile the source
Personally I always turn off anything on a production system that doesn't need to run. Also, hardening of the installation to make sure that as few amenities as possible are present for a potential intruder. A compiler is a good example of that. No need to have one on a production system.
Perhaps it is time to have vendors such as Red Hat and others consider using chroot() jails as much as possible, especially for Apache, BIND and other services known to be exploitable.
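A chroot() jail for a daemon can be sketched roughly as below. This is illustrative only: the jail path, directory layout, and daemon path are assumptions, and a real jail also needs every shared library the binary links against (check with `ldd`), device nodes, and config files copied in:

```shell
# Minimal, incomplete sketch of chroot-jailing a web server.
# /var/jail/apache and /usr/sbin/httpd are placeholder paths.
jail_httpd() {
    jail=/var/jail/apache
    mkdir -p "$jail"/bin "$jail"/lib "$jail"/conf "$jail"/logs
    # copy the daemon plus every shared library 'ldd' reports for it
    cp /usr/sbin/httpd "$jail"/bin/
    # after the jail is populated, start the daemon inside it
    chroot "$jail" /bin/httpd
}
```

Even if the worm gets in, it then sees only the jail's filesystem, not /bin/sh or gcc on the real system.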
For those of you who are really serious about running Linux in a production environment and require security features such as compartmentalized shared memory, have a look at HP Linux [hp.com]. I think this is a derivative work of SE Linux, by the NSA.
What I haven't seen on the Symantec website though was exactly what the code does. The fact that it can spread is alarming, and the parent post has a good point that Linux users aren't immune. But what kind of access does this exploit allow? At most, it would be the user that Apache runs as, no? Does it try to further exploit local systems once it has made its way in or is it basically setting itself up to contribute to a DDoS attack?
This should serve as a wakeup call to Linux vendors, especially if they want to make headway in the desktop market. Linux distributions need to be more secure "out-of-the-box" than they currently are. Too much emphasis on trying to copy functionality from Windows and too little emphasis on what Linux is really good at.
Re:Open Source Vulnerable Too (Score:2)
Or backport UserMode Linux and run any exploitable service (which is really all of them, when you think about it... there's gotta be some kind of exploit for qmail, to choose an example which has been comparatively unexploited) in its own VM.
compiler access... (Score:3, Insightful)
1) It must find /bin/sh to spawn a shell 2) It needs gcc to compile the source
...Also, hardening of the installation to make sure that as few amenities as possible are present for a potential intruder. A compiler is a good example of that. No need to have one on a production system.
This is a good point about the compilers. Every major Linux/Unix exploit I've ever seen/heard of involves sending some kind of source code to the victim machine and compiling it there, and then running it, which gives it a root shell. Too many differences between the environments to make static binaries that work I guess.
HOWEVER, every once in a while, it IS useful/necessary to compile something on your web server, but you can easily prevent random exploit worms from having access to your compiler by just restricting which users have access to run it. Make sure that only a very tight group of 'real' users who actually need the ability to compile something are in it, and your common worms aren't going to be able to compile themselves. (ie. think "root". Anyone who needs to compile something to be run on a webserver should probably have root access anyway. And if the worm is already running as root, stopping it from compiling is a moot point.)
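The group restriction described above looks roughly like this. The `compilers` group name and the user `alice` are made up; adjust the binary paths to your installation:

```shell
# Restrict execution of the compiler to a trusted group, so a worm
# running as the unprivileged apache user can't invoke it.
lock_down_gcc() {
    groupadd compilers                   # one-time setup
    chgrp compilers /usr/bin/gcc /usr/bin/cc
    chmod 0750 /usr/bin/gcc /usr/bin/cc  # root: rwx, group: r-x, others: none
    usermod -a -G compilers alice        # add each trusted builder
}
```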
no it's not (Score:4, Insightful)
The fact is that UNIX and Linux security is simply better than Microsoft's and that mechanisms for fixing problems, which do inevitably occur, are better and faster as well.
In fact, as far as I can tell, this particular problem is already fixed in recent packages, so you aren't vulnerable if you keep your machine up to date. Also, once a vulnerability is known, having the source, many people can and do fix it almost instantly.
Nice boiler-plate advisory (Score:3, Insightful)
Re:Nice boiler-plate advisory (Score:3, Informative)
hardly seems to mean his post is in the least insightful.
...so? (Score:5, Insightful)
Besides which, the impact is a lot less than, say, Code Red which affected a much larger number of machines -- it hit all unpatched IIS servers versus unpatched SSL-enabled Apache servers.
Again, I ask, how is this news? What has changed that made this story worth reporting again [slashdot.org]?
Yeah, So...? (Score:5, Insightful)
Welcome to the world of mainstream.
Re:Yeah, So...? (Score:3, Interesting)
Anyway, it's going to bite them, in a big way. Recently some "combination attacks" have formed, i.e. a series of non-critical security flaws that can be combined to gain total system access.
This is combined with their aggressive end-of-life program which EOLs software that is still in widespread use, completely dropping even critical security bugfix support for said software. As Windows 2000 nears EOL in a couple years, that is when we will really see the shit hit the fan. Hell, my girlfriend got a contract job to migrate systems from NT4 to 2000 last week. With no compelling reasons to upgrade, a lot of people are going to be running unpatchable systems in a couple years. Of course this is MS's whole strategy, to force people to upgrade their software just to get critical bugfixes.
Re:...so? (Score:2)
0.9.6e is good (Score:5, Informative)
Contrary to the slashdot post, you only need to be up to 0.9.6e to be safe. If you happen to just now be upgrading past this bug, 0.9.6g is even better, but if you're already running "e" you are safe. The article kinda alarmed me at first when I saw the "g", thinking there was a new exploit in "e" and I needed to upgrade again.
Re:0.9.6e is good (Score:2)
some earlier are ok too -- vendors have backported (Score:5, Informative)
Also, as mentioned by another poster, the netcraft report about the number of unpatched apache servers is complete nonsense. This is an OpenSSL bug, which has nothing to do with the Apache version number, which is what they measure and use to conclude people haven't updated.
(presumably older apache versions don't work with the newer openSSL libraries. Guess what... that's why the fixes were backported!)
Re:openssl 0.9.6g(latest) is broken (Score:2)
Try compiling Apache 2.0.X with a dynamic loadable module of SSL. It will break on 'make', at least on Red Hat 7.2. I had to go back to 0.9.6f.
You don't have to manually install the new versions of Apache/OpenSSL/etc from the project authors on Red Hat servers. RedHat backports all the security bugfixes to the older versions of the software, so the "version" number that you are running is always older than the "latest" version available from the actual project's site. RedHat (supposedly) does more compatibility testing to make sure all the different packages play nice with each other, so they don't actually release new packages to be 'up2dated' unless there are significant features in the new version. This delay (often weeks or months) doesn't usually matter because you don't NEED the latest versions to be secure, the bugfixes get updated pretty much immediately every time. It's worked pretty good for us so far...
Apache/BSD/Linux not GNU/Linux (Score:2, Funny)
such a system Apache/BSD/GNU/Linux, not just GNU/Linux. for obvious reasons.
RedHat 7.3 fix already in openssl-0.9.6b-24? (Score:4, Informative)
-28 already available? (Score:2)
Re:RedHat 7.3 fix already in openssl-0.9.6b-24? (Score:2)
Maybe the 'g' build from openssl.org is necessary, but Red Hat seems to think they've already fixed it in their "b-24" release.
In general, the updated patched version is NEVER necessary from [openssh|apache|whatever].org on a RedHat system. They are always extremely fast in backporting the security-related bugfixes to the older versions of the packages, so while the "version number" is usually behind what the "latest" version is, it will have a higher "release number" than what you were running previously. Right now it looks like "openssl-0.9.6b-28" is the latest version for RedHat 7.3, and the bug is fixed in that release.
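Since the version string alone can't tell you whether a backported fix is present, the package changelog can. A sketch (CAN-2002-0656 and CAN-2002-0659 are the candidate IDs cited in the advisories around this worm; verify the exact IDs against your vendor's erratum):

```shell
# Scan the installed openssl package's changelog for the advisory IDs.
# A hit means the fix was backported even though the version looks old.
check_backported_fix() {
    if command -v rpm >/dev/null 2>&1; then
        rpm -q --changelog openssl | grep -E 'CAN-2002-06(56|59)' \
            || echo "no matching changelog entry"
    else
        echo "rpm not available"
    fi
}
```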
Re:RedHat 7.3 fix already in openssl-0.9.6b-24? (Score:3, Informative)
> necessary, but RedHat seems to think they've
> already fixed in in their "b-24" release
Red Hat typically backports security fixes from later releases to the version they shipped with the distribution release to avoid introducing unrelated changes.
Note that RHSA-2002-155 [redhat.com] is now superseded by RHSA-2002-160 [redhat.com], which additionally addresses CAN-2002-0659 [mitre.org].
Matt
Linux is losing an important edge (Score:5, Insightful)
And that is where Linux is starting to lose its edge on Windows: the quality of the sysadmins. With the risk of being accused of making a crass generalisation, I'd say that many, many Windows sysadmins are of the point-and-click Mickey Mouse variety. Worse, not just the admins, but the infrastructure architects as well. After all, all you need to set up a domain is to complete one easy wizard, right? I have seen the result in all its ugly glory. Linux on the other hand required an admin who knows what he is doing, since there were no easy wizards. Much configuration was by editing files, with the how-to printouts in hand.
I say "required" in the past tense, since Linux is becoming easier and easier to set up. Some distros are close to the point where I'd be happy to give the CD to my mom and have her set up her own desktop. That is not a bad thing. Yet, I already have seen a few (very few, thankfully) "sysadmins" setting up Linux boxes for database or web services, without really knowing what they are doing. When we get to the point where managers themselves can set up Linux, they will be tempted to hire less and less qualified staff, as has already happened to a large degree with Windows NT.
My fear is that Linux servers will be run by less qualified people in the future, and that it will cause the proliferation of aggressive and effective Linux viruses.
Re:Linux is losing an important edge (Score:5, Funny)
hackers? interested in linux?! no way!
Re:Linux is losing an important edge (Score:2)
Re:Linux is losing an important edge (Score:2)
It's scary to think that nothing more than the default router settings protects most of these people from script kiddies.
Wrong Answer for Red Hat Linux (Score:5, Informative)
The correct solution is to run:
up2date -u
OR, if you don't use the free Red Hat Network, run:
rpm -Fvh ftp://updates.redhat.com/X.Y/en/os/i386/mod*
rpm -Fvh ftp://updates.redhat.com/X.Y/en/os/i386/apache*
Of course, replace X.Y with your version such as 7.0, 7.1, 7.2, 7.3, etc.
PEOPLE! Package management is GOOD. You should get and apply the updated packages from your vendor/distro. Slashdot editors/submitters should get a clue instead of recommending solutions that ultimately fsck stuff up.
Re:Wrong Answer for Red Hat Linux (Score:2)
# rpm -Fvh ftp://updates.redhat.com/7.3/en/os/i386/openssl*.9.6b-28.i386.rpm
Retrieving ftp://updates.redhat.com/7.3/en/os/i386/openssl-0
So RedHat doesn't have the latest version on the ftp site?
I have a Red Hat system, but I have already upgraded to 'e'. I just tried this out of curiosity.
Re:Wrong Answer for Red Hat Linux (Score:4, Informative)
Don't worry. Red Hat has an irritating policy of backporting fixes into previously released versions of each package. It's the revision number that counts. Check the date on that file.
OT: Anyone care to elaborate on why apache 2.0.40 requires at least openssl 0.9.6e? I modified the configure script to accept 0.9.6c and it was happy enough...
Re:Wrong Answer for Red Hat Linux (Score:3, Informative)
Debian and FreeBSD among many others do the same thing.
Wrong Answer for Linux per se (Score:2)
And if your Linux distribution can't reliably install RPMs, then it's not a Linux distribution but an OS which uses the Linux kernel. There is a difference, and it's called the LSB.
Re:Wrong Answer for Red Hat Linux (Score:3, Informative)
Well, I've been keeping my RedHat 7.3 up2date and I got hit. I didn't know it until I read this post, but last night TicketMaster Brasil (of all places) pinged my server one minute before the characteristic /tmp/.uubugtraq file appeared. The only thing that saved me was that the link phase of the worm compilation failed due to missing libraries (specifically, RC4 and MD5).
I agree that package management is good, but it looks like RedHat is running behind on this one. I'll be closing down the SSL port on my firewall for now :-(
Although I never saw it actually operating, you can probably clear the worm from your system via the following command (though you'll have to take measures to ensure it doesn't come right back):
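The command itself didn't survive the quoting, so here is a hypothetical cleanup along those lines. The file and process names are guesses based on the /tmp/.uubugtraq artifact mentioned above, and as the poster warns, the hole itself must still be patched or the worm just comes back:

```shell
# Hypothetical cleanup sketch; names are assumptions, not confirmed
# worm artifacts. Patch OpenSSL first, or reinfection is immediate.
remove_worm_droppings() {
    pkill -9 -f '\.bugtraq' 2>/dev/null
    rm -f /tmp/.uubugtraq /tmp/.bugtraq /tmp/.bugtraq.c
}
```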
The worm itself is nicely commented; it even has a disclaimer that the author isn't responsible for any harm:
Doubt the disclaimer will keep him out of jail for life [slashdot.org], though
I hate to say it (Score:5, Insightful)
I hope I never see another post stating that again, ok? Especially not a god damned +5 one.
What about... (Score:4, Interesting)
I realize the binary may not run on FreeBSD/OSX/etc., but the vulnerability itself is not Linux-specific, right? Could the virus be ported?
Sorry, I'd RTFA but it's slashdotted.
Sure (Score:2)
Re:What about... (Score:2)
This does not necessarily imply that the worm can be ported. Perhaps it depends on Linux-like behaviors in the underlying OS.
Competence closes this hole too... (Score:3, Insightful)
-jag
Re:Competence closes this hole too... (Score:2)
What is gcc doing on a production webserver in the first place.
My usual practice is to remove the compiler and linker after building the system. I never install anything extra. It's all one package at a time. It's a PITA, but that's the way it goes. If I need to patch, or install from source later on, gcc gets put back, and taken away again after.
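On an RPM-based system that remove-and-restore cycle looks roughly like the sketch below (package names vary by distro and version; the `--test` dry run is an extra sanity check, not part of the poster's description):

```shell
# Remove the toolchain after the build; put it back only when needed.
strip_toolchain() {
    rpm -e --test gcc binutils make   # dry run: shows dependency fallout
    rpm -e gcc binutils make
}
reinstall_toolchain() {
    rpm -Uvh gcc-*.rpm binutils-*.rpm make-*.rpm   # from trusted media
}
```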
...and opens another one! (Score:5, Insightful)
Thank you, try again.
While you are correct in saying that a limited subset of users should be permitted to run the compiler, that subset should never be the superuser. Compilers have security holes too, and gcc has been no exception. (was it 2.7 or 2.8? don't recall, too tired)
Never do your compiling as root.
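In practice that means doing the build under an unprivileged account and escalating only for the install step. A sketch (the `builder` account is hypothetical):

```shell
# Compile as an unprivileged user; only the install step runs as root.
build_unprivileged() {
    # the compiler never executes with superuser privileges
    su builder -c './configure && make'
    # run only this step as root (installs into system paths)
    make install
}
```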
Re:...and opens another one! (Score:2, Interesting)
Sadly, many people are still bogged down in the concepts of 70's era Time Sharing systems.
Re:...and opens another one! (Score:2)
Your suggestion only works if it is not a policy. Once it becomes a policy, then developers will not have access to compilers, because it's much easier to find a new job than to convince IT that you properly belong to that subset.
Re:...and opens another one! (Score:2)
Got a link to more info on that? I don't see how a non-suid-root program like gcc can allow anyone to get root by itself. I did find references to a "put symlinks in /tmp" style vulnerability in gcc 2.7.x, but that requires root to run gcc for anything really interesting to happen.
How Do We Solve The Lazy Admin Problem? (Score:5, Insightful)
Unfortunately given human nature, we can't rely on sys admins and end users to patch their boxen. Almost every mechanism I can think of to automate this process either calls for automatically updating machines (which sucks if a patch breaks an untested scenario and also may need some legal exemptions) or some similar mechanisms to enable computers to help themselves [slashdot.org].
Any Slashdotters have any thoughts about this?
Benevolent worms! (Score:2, Informative)
In fact, Microsoft has already pre-infected their own new OS, Windows XP. Maybe those draconian EULAs (you hereby agree that "M$ 0wnz j00") aren't such a dumb idea after all...
Not that I like it, but the fact is that MS is targeting the sort of people we're worrying about, giving them what it thinks they need, whether they ask for it or want it, or not. We hate this because we're tech-savvy and want to control our machines, but for the average user, having someone else "0wn" their machine is probably, ultimately, a necessity. The question is just who's going to do the owning - virus writers and crackers, or Microsoft/Symantec etc.
Re:Benevolent worms! (Score:2)
Do I think that there should be viruses that go around and infect/fix your box for you? No; varying distributions would make this a nightmare, and the liability issues involved would make anyone (with even the best intentions and highest technical skill) insane to even try such an approach.
Rather, there needs to be, at a minimum, an accepted method of notifying the admin/primary user of "box X" that their system has been rooted; this notification could include some sort of pointer to (distro-specific? security-vendor-implementation-specific?) registered info about the virus.
This appears to me to be the best role for a "benevolent virus" (in this case, more of a network scanner/meta-virus, as actual infection is not necessary) - by detecting possible routes of infection/actual infection on a system, and warning that system of possible/actual infections.
A distro could (based on this warning notification) wrap some nice end-user warning/auto-update functionality around the registered virus info.
In other words, the newbie user doesn't necessarily have to actively check for updates; rather, others on their network will intermittently scan for "open" boxes, and notify those machines/users of their status (this isn't much different from what a sysadmin on a LAN does, but in a more decentralized manner). Think of it as a sort of semi-automated neighborhood watch.
Are there holes to this approach? Is this politically/technically complicated? Certainly.
However, this is definitely a case where I see that the "mediocre user" needs to be accommodated - and educated/hand-held, even if just a little bit at a time - into keeping their boxes maintained correctly. Otherwise they simply won't be able to keep up.
Besides, I would assume that a community in which network effects are so well exploited with regard to generating code should have some excellent ideas with regard to automating notification throughout local networks.
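A minimal sketch of such a neighborhood-watch check, using nothing but the advertised Server banner (the host argument is a placeholder, and banner matching is crude: as the backporting comments elsewhere in this thread point out, version strings can lie in both directions):

```shell
# Fetch a host's Server: header and flag Apache builds older than
# 1.3.26. Purely illustrative; a banner proves nothing definitive.
check_server_banner() {
    host="$1"
    banner=$(curl -sI "http://$host/" | tr -d '\r' | grep -i '^Server:')
    echo "$banner"
    case "$banner" in
        *Apache/1.3.2[0-5]*) echo "possibly unpatched" ;;
        *)                   echo "no obvious match" ;;
    esac
}
```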
Re:How Do We Solve The Lazy Admin Problem? (Score:3, Insightful)
And then: Don't forget to get a new admin if your old admin leaves the job.
Those machines that have an admin are usually taken care of. But most security issues I see are with clients who have a server that some guy did the setup for some two or three years ago, then left a year later and since then nobody looked after the machine.
As one ad's catchphrase put it correctly you never talk about the server until it fails.
Being the guy in my little company who's responsible for updating the clients' servers, I often experience how clients have a hard time understanding that software support, updates and log checks are necessary -- because from their perspective this is work without "results".
They can't check if I really did something when I give them a month's bill with x hours for security updates on their machine.
I often explain to them that this server care is a bit like toothbrushing... (Which, btw, is the actual name of that task we use in my company.)
Re:How Do We Solve The Lazy Admin Problem? (Score:4, Insightful)
Security is about risk management. It's about process, procedure, and diligence. Security is not a technology problem, and it is not solved by geeks.
You can have a secure server farm running virtually any kind of software out there (including M$ products). How? By having a tight, auditable system. You carefully install the systems, documenting your procedure and following best practices (even if you develop them -- the important thing is to have a process). You maintain them on a schedule, leaving nothing to chance. You document the configuration thoroughly, and you enforce rigorous change control.
You might not even have OpenSSL upgraded even though it's vulnerable! You have to decide how much risk is acceptable and worthwhile, but the trick is to consciously and deliberately evaluate the risk, and decide how you're going to deal with it.
This applies to everything. You don't leave it up to your sysadmins to decide whether or not they should upgrade -- it's a part of a checklist that must be done, and can be independently verified at any time. It's part of a procedure that will allow new upgrades to be thoroughly tested and carefully rolled out to avoid downtimes due to unexpected incompatibilities between new and old versions. Imagine someone unwittingly upgrading apache from 1.3 to 2.0, without full testing on a major production system or even realizing that there may be configuration differences.... Nightmarish.
The only way to truly run a secure system is to realize that it has to be extremely carefully planned and managed. It's a hell of a lot of work, and it costs a lot of money. So it quickly becomes an exercise in traditional risk management. This is where the suits and the high-priced consultants often come in. You have to find out how much everything is worth, and what kind of risk you're willing to tolerate (or conversely, how much security you can afford given your environment). You will never be 100% mathematically impenetrable, but you can reduce your risk to a level that you're comfortable with.
Obviously, this kind of thing scales. If you have a simple system, your plans and procedures can be fairly simple as well. As long as you have a solid verifiable plan, and you stick to it, you'll be fine. If you have a complicated system, your security management is going to be complicated as well.
That's a bit arrogant, dontcha think? (Score:3, Insightful)
There've been too many admins who've been burned by a "security patch" that broke the system in some other way. When your computers need to be up 24-7, and you can have, at most, about 4 hours of down time, you're going to be VERY selective about what patches get added to the system. Or from another viewpoint, I just got burned by an XP "security patch" that for some reason broke my autodial functionality so that my routing table went straight into my local network. I had to reinstall Windows XP to get the functionality back... I'm not about to start putting those security patches back on. I don't like it, but my system works. (I run firewall and antivirus software as well, so it's not like my butt is completely uncovered, either)
Admins are not only responsible for the computers and OSes themselves, but for the network communications layer, hard drive resources, ALL of the apps on those boxes (and their associated patches), plus help desk support, new computer setups, old computer shutdowns, and let us not forget software licensing management issues.
IT Admins also painfully understand the one part of Software Engineering that Software Engineers don't. Any change to the program WILL have functional differences.
Automating updates can work because it takes the load off of the admin. But as you point out, there are legal issues, plus there's the above issue where you don't necessarily want to install all of these patches because your system works "as is". On the flip side, Norton's LiveUpdate for their anti-virus software runs pretty well. But NAV is a very distinct application and purpose, and doesn't have ripple effects throughout the rest of the computer system.
Also there's an apples-and-oranges comparison between Microsoft and Linux problems here. Microsoft got its bad press not from legitimate security issues, but because Outlook made the very ACT of receiving an email a vector for running a virus/trojan horse through the preview pane, and because Word allowed any document to take control of the user's hard drive, begin deleting files, grab the email address book, and replicate itself. That's a whole different ballgame from exploiting IIS through stack overflow issues, or exploiting this loophole in OpenSSL. There's a difference between "defeating/exploiting security" and "leaving the doors wide open." But now, thanks to Microsoft PR spinning their problems and Linux PR making Microsoft look bad, ALL exploits are treated as equal, so that the least exploit is just as important as a truly critical one. THAT adds to the Admin's workload, and leads back down the road of not getting these patches installed.
In the end, the power and the responsibility lie with the Sys Admin. Which is where it should be.
Re:How Do We Solve The Lazy Admin Problem? (Score:3, Insightful)
I could imagine a ip-up.d script (for dialups) or cron job (for dedicated lines) that connects to a distribution mirror site, then asks for a current status of available security upgrades (using signed communication to avoid man-in-the-middle attacks).
If the system is found to run outdated packages, it could warn the user. If it runs dangerously insecure packages, it could even stop the insecure services, maybe even disconnect the machine.
In today's case, after dial-up the upgrade status check would stop any https-related services and tell the user how to update. If no update was available, it would allow the user to reactivate the service, but only after a stern warning that he'd be better off waiting for the updated packages.
Just a thought...
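The comparison step of such a check could be sketched in plain shell. This is only a sketch: fetching the advisory list from the mirror and verifying its signature are omitted, and the package names and version numbers below are made-up illustrative data, not a real advisory feed.

```shell
#!/bin/sh
# Sketch: compare installed package versions against an advisory list.
# Real fetching and signature verification are omitted; the two lists
# below are inlined for illustration only.
installed="openssl 0.9.6d
apache 1.3.26"
advisories="openssl 0.9.6g"

echo "$advisories" | while read -r pkg fixed; do
    have=$(echo "$installed" | awk -v p="$pkg" '$1 == p { print $2 }')
    [ -z "$have" ] && continue
    # sort -V orders version strings; if the installed version sorts
    # first and differs from the fixed one, the package is outdated
    oldest=$(printf '%s\n%s\n' "$have" "$fixed" | sort -V | head -n 1)
    if [ "$oldest" = "$have" ] && [ "$have" != "$fixed" ]; then
        echo "WARNING: $pkg $have is older than fixed version $fixed"
    fi
done
```

A real version would feed the WARNING lines into the stop-the-service or disconnect logic rather than just printing them.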
Security Lists (Score:3, Insightful)
-Serp
only effects https (Score:2)
Re:only effects https (Score:2, Informative)
Or rather, if your server isn't listening on port 443 there's no point in opening this port up in your firewall. Default deny, people. Default deny. Portmap may not be vulnerable today, but someone may discover a bug in it at 3am tomorrow while you're happily sleeping in bed and use it to exploit your box. Just block everything and open up only the services you need. And for those services, think about whether you really need them open at all, and whether you could be using a more secure program to do the same thing -- perhaps DJB's tools like publicfile and djbdns to replace these huge monolithic apps on a simple home box with a couple dozen web pages.
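As a sketch of that default-deny stance, a minimal inbound ruleset for a simple home web box might look like the configuration fragment below. The open ports are assumptions for illustration -- open only what you actually serve (and note 443 stays closed unless you really run https):

```shell
# Firewall configuration fragment (must be run as root); illustrative only.
iptables -P INPUT DROP                                        # default deny
iptables -A INPUT -i lo -j ACCEPT                             # loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                 # plain http only
```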
Incidents.org just released an advisory as well... (Score:4, Informative)
Here is the alert [incidents.org]:
published: 2002-09-13
OpenSSL, the collection of libraries and programs used by many popular programs, has had a number of security problems recently. It looks like the problems are not over yet.

It has been discussed on several mailing lists that, aside from the exploit known for openssl 0.9.6d, there are exploits available for even the most recent version (0.9.6g).

As a precaution, we recommend disabling programs that use openssl as much as possible. The exploits available so far focus on apache, which is probably the most common exposed service that is using openssl.
As a precaution, we recommend disabling SSLv2, if you have to run an Apache server with mod_ssl enabled. The magic configuration lines are:
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!NULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:-L
One of the openssl apache exploits was found to install a DDOS agent called 'bugtraq.c'. It uses UDP packets on port 2002 to communicate (not necessarily to attack) and can be used to launch a variety of DDOS attacks.
-
cow's go muu~
Debian Package (Score:2)
S
Signature? (Score:2)
What to look for in your logs (Score:5, Informative)
[27/Aug/2002 20:02:19 23525] [error] OpenSSL: error:1406B458:SSL routines:GET_CLIENT_MASTER_KEY:key arg too long
[27/Aug/2002 20:02:22 24087] [error] OpenSSL: error:1406B458:SSL routines:GET_CLIENT_MASTER_KEY:key arg too long
Thing is though, that "key arg too long" error is part of the July patch to OpenSSL, so you won't see it if you aren't patched. Hopefully this log signature doesn't become as familiar as nimda scans.
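A quick way to count those entries, assuming a Red Hat-style log location; the path below is an assumption, so pass your own error log as the first argument if it lives elsewhere:

```shell
# Count exploit attempts recorded in the Apache error log.
# /var/log/httpd/error_log is an assumed default path.
LOG=${1:-/var/log/httpd/error_log}
if [ -r "$LOG" ]; then
    grep -c 'GET_CLIENT_MASTER_KEY:key arg too long' "$LOG"
else
    echo "log not readable: $LOG" >&2
fi
```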
They only hack the ones they LOVE!! (Score:2, Funny)
To be truly loved is to get hacked! Someone out there must really love Microsoft, but I am glad they are starting to share the love with the Open Source community more and more. It is a sign that the love for Microsoft may be starting to fade, or maybe hackers are just plain sick of "shooting fish" in the idiomatic barrel.
Either way, I am going to go block UDP on port 2002 on the fw/router and mumble to myself about buffer overflows.
Nobody is Answering (Score:4, Insightful)
The submission states "A GNU/Linux worm" and "a bug in OpenSSL". But OpenSSL runs on a heck of a lot of systems that aren't Linux. Does this exploit only affect Linux systems running OpenSSL, or does it affect any system running OpenSSL?
Re:Nobody is Answering (Score:2, Informative)
good arguement for alternative architectures :) (Score:2)
My Box Got Hit This Morning (Score:2, Informative)
Showed up in the form of hundreds of defunct child processes from
The header of
Basically it is installed and compiled via mod_ssl, and then once it starts, it connects to another host (the IP address is passed as an argument) via UDP 2002. From that host it learns of other infected hosts, connects to those, and so on. The header claims that up to 16 million simultaneous connections are possible.
The processes run as the web server user.
Harmless, once you get rid of it.
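A few quick local checks for the agent described above. The web server user name and the /tmp filename are assumptions (the name is taken from the 'bugtraq.c' mention earlier in this thread), not a definitive signature:

```shell
# 1. Anything listening on UDP port 2002?
netstat -anu 2>/dev/null | grep -w 2002
# 2. Defunct (zombie) children owned by the web server user?
WEBUSER=apache    # assumption: substitute your httpd's user here
ps -u "$WEBUSER" -o pid=,stat=,comm= 2>/dev/null | awk '$2 ~ /Z/'
# 3. Dropped source left lying around in /tmp?
ls -l /tmp/.bugtraq* 2>/dev/null
```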
Re:My Box Got Hit This Morning (Score:3, Informative)
# Create the logging chain first (it's referenced below)
iptables -N outlog
# Allow the UDP traffic we expect: DNS and NTP
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
# Everything else UDP goes through the logging chain
iptables -A OUTPUT -p udp -j outlog
#The output logging rule
iptables -A outlog -j LOG -m limit \
--limit 3 --limit-burst 5 \
--log-prefix "catch-all:(out)"
iptables -A outlog -j DROP
That should implement a fairly simple rate-limiter on outgoing UDP, with lots of nice easy-to-track logs and stuff.
Sorry, it has to be said... (Score:2)
GNU/Linux worm (Score:2)
Re:only Intel systems? (Score:2, Informative)
Re:only Intel systems? (Score:3, Informative)
Actually, the stacks are usually pretty similar. (On most Linux boxes, stacks grow towards lower addresses, except on Alpha, IIRC. Heaps depend on the libc implementation, not the CPU.) As a result, the structure of a buffer overflow vulnerability doesn't change much from machine to machine.
The big difference that keeps this 'sploit tied to x86 is the instruction set. You can't run x86 instructions on other CPUs by default. (Ignoring FX!32 on Alpha, since it's not likely to step up to bat on your shellcode anyway.)
--Joe
Re:Debian unstable (Score:2)
I hope it's got the patch backported!
Goes to check changes.log....
Stable safe, and probably unstable (Score:2)
Unstable is at 0.9.6g and thus shouldn't be vulnerable.
Re:How do I know? (Score:2, Informative)
Re: (Score:2, Flamebait)
Re:Glad to see Redhat helping out...themselves (Score:3, Funny)
Funny that, I thought I paid Microsoft $135 for Windows 98. Perhaps I'm just imagining it. Oh well, I look forward to receiving the free versions of Windows that you seem to think are out there.
Oh wait. Then I realise that you're just full of BS. Hell, even Office 2000 SP2 disables installations of Office 2000 that are using known "pirated" installation keys. So much for "free."
Jesus, I just drank half a bottle of wine, fucked my girlfriend, fired up the Thinkpad and noticed your BS, and I still make more sense than you.
Re:Glad to see Redhat helping out...themselves (Score:2)
You're paying for the convenience of having it automagically installed for you by Red Hat with little need for input on your end.
Re:Glad to see Redhat helping out...themselves (Score:2)
get the tree up to date:
emerge rsync
update your package:
emerge -u openssl
or just update the whole world at once:
emerge -u world
FUD alert! (Score:2)
Please try a little research before making silly statements...
Blocking UDP 2002 isn't the answer. (Score:2, Interesting)
You might save yourself from *this* worm, but how long until someone 0wn3z you with some other 37331 worm that uses port 2003? or 2004? or 37331? or some other number? Hmmmmm?
While you could nuke GCC from your machine (ouch!) why not just patch the hole and get on with life?
--Joe
Re:Update Apache too; c'mon... you know you want t (Score:2)
Re:Update Apache too; c'mon... you know you want t (Score:2)
Rubbish (was: Mac Os X goes down in flames...) (Score:2, Informative)
OpenSSL 0.9.6e is perfectly safe. And that was available via Software Update on 30 Jul 2002.
Andreas
mod this one up (Score:2)