Security Software Apache

Linux Worm Spreading, Many Systems Vulnerable (617 comments)

sverrehu writes "A GNU/Linux worm exploiting a bug in OpenSSL spreads through vulnerable Apache web servers, according to Symantec. The worm, which was first reported in Europe, targets several popular Linux distributions. See also the SecurityFocus vulnerability listing for the OpenSSL bug." sionide also writes: "Netcraft recently published a report which explains that a large portion of Apache systems are still unpatched (halfway down). To protect yourself, please upgrade to OpenSSL 0.9.6g."
  • Finally (Score:5, Funny)

    by SpanishInquisition ( 127269 ) on Friday September 13, 2002 @07:01PM (#4254692) Homepage Journal
    Linux can compete with Microsoft.
  • by P!erCer ( 578708 ) on Friday September 13, 2002 @07:03PM (#4254701)
    People need to know that Open Source is just as vulnerable to viruses and worms as proprietary software is... The hackers target the most widespread software, which is more often than not Windows software. Apache is one of the most widespread Linux programs, and its infection is a sign of things to come as more people leave Windows.
    • Just as vulnerable, perhaps. However, with open source software one has the ability to go in and fix the problem rather than waiting for some vendor to do it for you. That's where the power lies -- often, when a vulnerability is discovered, a report is sent out including exploit code and a patch to correct the issue.

      That's what makes open source software overall more secure -- the turnaround time with patches is a lot faster.
      • That's where the power lies -- often, when a vulnerability is discovered, a report is sent out including exploit code and a patch to correct the issue.

        This power costs money. The administrator would have to download the sources, apply the patch, and - most importantly - configure the build so that the proper things get built and other bits get left out. Getting a live server back takes more than just typing ./configure. IOW, you need a smarter and therefore more expensive administrator to actually enjoy this power.
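        A minimal sketch of that cycle for a source-built OpenSSL (the URL, prefix and options are assumptions; on a packaged system you would take the vendor's updated RPM or deb instead):

        # fetch and rebuild the fixed release (hypothetical paths)
        wget http://www.openssl.org/source/openssl-0.9.6g.tar.gz
        tar xzf openssl-0.9.6g.tar.gz && cd openssl-0.9.6g
        ./config --prefix=/usr/local/ssl shared    # mirror whatever options the old build used
        make && make test && make install
        # anything linked against libssl/libcrypto (Apache+mod_ssl, stunnel, ...) must be rebuilt or restarted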

        That's what makes open source software overall more secure -- the turnaround time with patches is a lot faster.

        I am very grateful for all the open source software I've ever used, but I must point out that this turnaround time usually doesn't include what a responsible commercial outfit would call QA.

        • by chris_mahan ( 256577 ) <chris.mahan@gmail.com> on Friday September 13, 2002 @07:53PM (#4254953) Homepage
          Nobody ever said computer programming was easy. It's a difficult job, full of arcane knowledge and fraught with pitfalls. This is why not everybody can be one, and this is why the good ones ought to be paid well.

          Airline pilots are highly trained and constantly upgrade their skills, and are highly paid.

          Likewise, programmers who run enterprise-strength systems have heavy responsibilities. This is not something one ought to go into for the money, but rather, for the love and dedication to the craft. (not aircraft)...

          As far as QA goes, I'll tell you what: if the system is designed correctly, it will need very little QA. I know this because some systems can never get it right, no matter how much QA goes into them, because of fundamental design flaws.

          And yes, designing computer software is hard. Like heart surgery. One slip of the old wrist and it's flatline.
          • As far as QA goes, I'll tell you what: if the system is designed correctly, it will need very little QA. I know this because some systems can never get it right, no matter how much QA goes into them, because of fundamental design flaws.

            A good QA process doesn't just test completed code. A good QA process gets involved at all levels of development, and would have at least a fighting chance of catching those fundamental design flaws.

      • with open source software one has the ability to go in and fix the problem rather than waiting for some vendor to do it for you. That's where the power lies

        How many webserver administrators have the skills to look at the Apache sourcecode (or in this case, the OpenSSL sourcecode), find the bug, and fix it? If they had such skills they probably wouldn't be working as webserver administrators to begin with. The often touted ability to "go in and fix things" or even to simply "contribute" is highly overrated. Who found and fixed this bug? Was it some random user, or one of the original developers?

        • Not overrated. (Score:5, Insightful)

          by Cardinal ( 311 ) on Friday September 13, 2002 @08:42PM (#4255128)
          How many webserver administrators have the skills to look at the Apache sourcecode (or in this case, the OpenSSL sourcecode), find the bug, and fix it?

          All the skill it should take is to apt-get upgrade or up2date, or whatever the distro in question uses for updates. Debian woody had the patch posted immediately. So the skills needed to update your Apache system are no different from those needed to patch Code Red (which, a year after its creation, is still roaming around).
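          For example (assuming the vendor's update channel is already configured):

          apt-get update && apt-get upgrade     # Debian
          up2date -u                            # Red Hat Network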

          The often touted ability to "go in and fix things" or even to simply "contribute" is highly overrated. Who found and fixed this bug? Was it some random user, or one of the original developers?

          Well, judging by the advisory from the OpenSSL team [openssl.org] (dated July 30, btw, so this is hardly a new issue) and a cursory glance over the developer list [openssl.org], the issue was not found by anyone on the development team. So, I'm going to have to go ahead and disagree with you. I consider the ability of users to find and patch security vulnerabilities to be a benefit of free software that simply cannot be overstated.

          Having said that, I'll concede the obvious. Most end users are not skilled in the ways of finding or fixing bugs. However, there are zero end users of proprietary tools who even have the option of patching security holes in the software upon which they depend.

          So, while some may say "But any user can find/fix security holes when it's free software!" I'll simply say "But any user has the freedom to find/fix security holes when it's free software!" Whether or not the user has the skills is irrelevant, what's important is that the option is there.
      • I've found that most software, open and closed, has become so complicated that fixing problems has become a task better left to the writers. Sure you can fix it yourself, but open source stuff tends to get fixed pretty fast anyway.

        I believe that the real advantage to open source is that programmers (like me) can't get away with crap designs. When I design open source software, I know I can't get away with hard-coded keys or fixed-length buffers. Closed source tends to be shielded from this kind of scrutiny, so that sort of sloppiness is unfortunately still acceptable practice.

        Of course, my open-source mindset has helped make my closed-source designs much more secure. I can't speak for anyone else.

        Ozwald
      • Right - and how many small businesses have the time to do this? And how many large businesses can risk a patch that has not been fully regression tested? Just because OSS can release an unstable patch in 12 hours doesn't mean that OSS is faster than closed-source software when it comes to stable patches.
      • Ya know. To that, I have to say BULLSHIT.

        No, really.

        Ok, a poll: how many of you went into the source code today and fixed the vulnerability on your own? Come on, raise your hands...

        That's what I thought. People just have to wait for either the distribution to release an updated package, or at the least the package maintainer to release a patch or updated release. NOBODY (ok, hardly anybody) goes in and hacks the source themselves to fix it. It's better than closed source, but not as much as you make it appear to be.

        Yes, you can coordinate with others to make a fix, but you can't sit there and tell me Joe Sysadmin will sit there and craft his own patch to close a hole. It doesn't work that way.

        Go ahead, take my karma...

    • Repeat after me: Apache is the most popular web server. Microsoft does not make the most popular web server. Apache is the most popular web server. Microsoft does not make the most popular web server. Apache is the most popular web server. Microsoft does not make the most popular web server. Apache is the most popular web server. Microsoft does not make the most popular web server.

      There. Hopefully we can put an end to the "hackers just target microsoft because they're most popular" argument right now.

      • "There. Hopefully we can put an end to the "hackers just target microsoft because they're most popular" argument right now."

        Umm no, the argument still stands. You may not be aware of this, but MS does more than just make webservers. That's why they earned their reputation.

        It's the sort of detail that gets flushed out when you take a moment to understand what people are saying.
        • The argument stands ONLY with respect to Outlook worms, Word macro viruses, etc. It does NOT stand when applied to web servers, which do happen to be the entire focus of this article. If the parent post was indeed talking about things other than webservers, the poster should have made that clear instead of mis-using a cliched excuse for Microsoft's rampant bug problem.
      • Actually, this is becoming less true. Sure, there may be millions of servers running Apache, but how much web traffic is handled by Apache? Of the most-hit sites on the Internet (AOL, Microsoft, Dell, etc.) there seem to be just as many IIS sites as Apache sites.
    • by alsta ( 9424 ) on Friday September 13, 2002 @07:24PM (#4254809)
      I briefly read the article but I haven't seen the code itself. Would be nice to have had a honeypot at this point, so that one could snarf the source to see what it tries to do.

      A couple of important things are mentioned, though:

      1) It must find /bin/sh to spawn a shell
      2) It needs gcc to compile the source

      Personally, I always turn off anything on a production system that doesn't need to run. I also harden the installation so that as few amenities as possible are present for a potential intruder. A compiler is a good example: there is no need to have one on a production system.
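      A rough sketch of that kind of hardening on an RPM-based box (the package names are the usual ones, but check your own distribution before removing anything):

      rpm -qa | grep -E '^(gcc|cpp|binutils|glibc-devel)'   # see what toolchain is installed
      rpm -e gcc cpp glibc-devel binutils                   # remove it from the production image
      chkconfig --list | grep ':on'                         # audit which services start at boot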

      Perhaps it is time to have vendors such as Red Hat and others consider using chroot() jails as much as possible, especially for Apache, BIND and other services known to be exploitable.

      For those of you who are really serious about running Linux in a production environment and require security features such as compartmentalized shared memory, have a look at HP Linux [hp.com]. I think this is a derivative work of SE Linux, by the NSA.

      What I haven't seen on the Symantec website though was exactly what the code does. The fact that it can spread is alarming, and the parent post has a good point that Linux users aren't immune. But what kind of access does this exploit allow? At most, it would be the user that Apache runs as, no? Does it try to further exploit local systems once it has made its way in or is it basically setting itself up to contribute to a DDoS attack?

      This should serve as a wakeup call to Linux vendors, especially if they want to make headway in the desktop market. Linux distributions need to be more secure "out-of-the-box" than they currently are. Too much emphasis on trying to copy functionality from Windows and too little emphasis on what Linux is really good at.

      • Perhaps it is time to have vendors such as Red Hat and others consider using chroot() jails as much as possible, especially for Apache, BIND and other services known to be exploitable.

        Or backport User-Mode Linux and run any exploitable service (which is really all of them, when you think about it... there's gotta be some kind of exploit for qmail, to choose an example which has been comparatively unexploited) in its own VM.

      • compiler access... (Score:3, Insightful)

        by orius_khan ( 416293 )

        1) It must find /bin/sh to spawn a shell
        2) It needs gcc to compile the source

        ...I also harden the installation so that as few amenities as possible are present for a potential intruder. A compiler is a good example: there is no need to have one on a production system.

        This is a good point about the compilers. Every major Linux/Unix exploit I've ever seen or heard of involves sending some kind of source code to the victim machine, compiling it there, and then running it, which gives it a root shell. Too many differences between environments to make static binaries that work, I guess.

        HOWEVER, every once in a while it IS useful/necessary to compile something on your web server, but you can easily prevent random exploit worms from having access to your compiler by just restricting which users are allowed to run it. Make sure that only a very tight group of 'real' users who actually need the ability to compile something are in it, and your common worms aren't going to be able to compile themselves. (i.e., think "root". Anyone who needs to compile something to be run on a webserver should probably have root access anyway. And if the worm is already running as root, stopping it from compiling is a moot point.)
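        One way to sketch that restriction (the group name and binary paths are assumptions; adjust for your toolchain):

        groupadd devel
        gpasswd -a alice devel                              # only 'real' developers go in this group
        chgrp devel /usr/bin/gcc /usr/bin/cc /usr/bin/ld
        chmod 750 /usr/bin/gcc /usr/bin/cc /usr/bin/ld      # nobody outside 'devel' can run them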

    • no it's not (Score:4, Insightful)

      by g4dget ( 579145 ) on Friday September 13, 2002 @10:28PM (#4255468)
      Apache has been the most popular web server for years, yet occurrences like this are very rare. So, it's not "just as vulnerable". On the other hand, these things aren't new for UNIX/Linux either: the original worm was written for BSD sendmail. So it's not the case that people just thought of this yesterday.

      The fact is that UNIX and Linux security is simply better than Microsoft's and that mechanisms for fixing problems, which do inevitably occur, are better and faster as well.

      In fact, as far as I can tell, this particular problem is already fixed in recent packages, so you aren't vulnerable if you keep your machine up to date. Also, once a vulnerability is known, having the source, many people can and do fix it almost instantly.

  • by Jeffrey Baker ( 6191 ) on Friday September 13, 2002 @07:05PM (#4254711)
    The advisory at Symantec advises the reader to update their virus definitions and run a full system scan. Presumably they are talking about Symantec anti-virus products, but if they make such a product for Linux/x86, I could not detect it on their website.
  • ...so? (Score:5, Insightful)

    by delta407 ( 518868 ) <.moc.xahjfrel. .ta. .todhsals.> on Friday September 13, 2002 @07:07PM (#4254726) Homepage
    Okay, so this vulnerability was published and corrected over a month ago [securityfocus.com]. Of course it's still growing; a lot of people still haven't patched their servers. How is that newsworthy? It's been out for quite a while now, anyway, and nothing is different today from yesterday. Nothing horrible has happened, it's just continuing to do what it was designed to do.

    Besides which, the impact is a lot less than, say, Code Red which affected a much larger number of machines -- it hit all unpatched IIS servers versus unpatched SSL-enabled Apache servers.

    Again, I ask, how is this news? What has changed that made this story worth reporting again [slashdot.org]?
    • Yeah, So...? (Score:5, Insightful)

      by NetJunkie ( 56134 ) <`jason.nash' `at' `gmail.com'> on Friday September 13, 2002 @07:10PM (#4254741)
      Most MS exploits that hit Slashdot are the SAME WAY. MS releases a fix 6 weeks before, most admins don't patch, and then the big exploit hits.

      Welcome to the world of mainstream. :)
      • Re:Yeah, So...? (Score:3, Interesting)

        by GigsVT ( 208848 )
        You are correct, but it's just a matter of time: MS's glacial turnaround time, its outright refusal to fix certain bugs [pivx.com], combined with a "Windows Update" that often doesn't apply all the needed fixes, or installs patches that undo other patches.... I could go on...

        Anyway, it's going to bite them, in a big way. Recently some "combination attacks" have formed, i.e. a series of non-critical security flaws that can be combined to gain total system access.

        This is combined with their aggressive end-of-life program which EOLs software that is still in widespread use, completely dropping even critical security bugfix support for said software. As Windows 2000 nears EOL in a couple years, that is when we will really see the shit hit the fan. Hell, my girlfriend got a contract job to migrate systems from NT4 to 2000 last week. With no compelling reasons to upgrade, a lot of people are going to be running unpatchable systems in a couple years. Of course this is MS's whole strategy, to force people to upgrade their software just to get critical bugfixes.
    • You said it yourself. It's newsworthy because "it's still growing; a lot of people still haven't patched their servers."
  • 0.9.6e is good (Score:5, Informative)

    by photon317 ( 208409 ) on Friday September 13, 2002 @07:08PM (#4254730)

    Contrary to the slashdot post, you only need to be up to 0.9.6e to be safe. If you happen to just now be upgrading past this bug, 0.9.6g is even better, but if you're already running "e" you are safe. The article kinda alarmed me at first when I saw the "g", thinking there was a new exploit in "e" and I needed to upgrade again.
    • I already started compiling 0.9.6g when I saw your post. Of course, I remembered hearing about an OpenSSL bug which had prompted me to go to e, so I suspected that I was safe. I don't even run an SSL-enabled apache server (though I do run sshd on port 443).
    • by Xylantiel ( 177496 ) on Friday September 13, 2002 @10:40PM (#4255498)
      In Debian, at least, the fixes were backported to 0.9.6c. Updated packages fixing this problem were released almost a month and a half ago for all major distributions. (July 30 for Debian [debian.org], packages numbered 0.9.6c-2.woody.0.)

      Also, as mentioned by another poster, the Netcraft report about the number of unpatched Apache servers is complete nonsense. This is an OpenSSL bug, which has nothing to do with the Apache version number, which is what they measure and use to conclude people haven't updated.

      (Presumably older Apache versions don't work with the newer OpenSSL libraries. Guess what... that's why the fixes were backported!)

  • In this case at the very least, you should call such a system Apache/BSD/GNU/Linux, not just GNU/Linux, for obvious reasons.
  • by leighklotz ( 192300 ) on Friday September 13, 2002 @07:19PM (#4254789) Homepage
    According to the Symantec report [symantec.com] cited in the story, the bug in OpenSSL is this [mitre.org], which is reported as RHSA-2002-155 [redhat.com], for which the fix is openssl-0.9.6b-24.i386.rpm [redhat.com] for Red Hat 7.3 i386 (plus other RPMs for other versions of Red Hat). Maybe the 'g' build from openssl.org is necessary, but Red Hat seems to think they've already fixed it in their "b-24" release.
    • A couple of days ago, I went on a standard errata gathering run, and downloaded openssl-0.9.6b-28.i386.rpm & etc. for 7.2. I don't see -24 in either the 7.2 or 7.3 directory, even though the page you linked to lists it. I would presume, however, that -28 is not vulnerable.
    • Maybe the 'g' build from openssl.org is necessary, but Red Hat seems to think they've already fixed it in their "b-24" release.

      In general, the updated patched version is NEVER necessary from [openssh|apache|whatever].org on a RedHat system. They are always extremely fast in backporting the security-related bugfixes to the older versions of the packages, so while the "version number" is usually behind what the "latest" version is, it will have a higher "release number" than what you were running previously. Right now it looks like "openssl-0.9.6b-28" is the latest version for RedHat 7.3, and the bug is fixed in that release.
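      So checking whether a Red Hat box is covered is just a matter of comparing the installed release suffix against the advisory (a sketch; exact package names vary by release):

      rpm -q openssl mod_ssl apache     # the release number (e.g. -28) is what matters, not the 0.9.6b version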

    • > Maybe the 'g' build from openssl.org is
      > necessary, but Red Hat seems to think they've
      > already fixed it in their "b-24" release

      Red Hat typically backports security fixes from later releases to the version they shipped with the distribution release to avoid introducing unrelated changes.

      Note that RHSA-2002-155 [redhat.com] is now superseded by RHSA-2002-160 [redhat.com], which additionally addresses CAN-2002-0659 [mitre.org].

      Matt

  • by JaredOfEuropa ( 526365 ) on Friday September 13, 2002 @07:22PM (#4254799) Journal
    Of course, it was only a matter of time before hackers showed an interest in this OS. With most parts being open source, perhaps holes in the OS or applications are easier to find, but that goes for both the hackers and for people on the up-and-up. I'm surprised it took so long, and it will certainly happen again. The real question is: how will the admins of the affected or vulnerable servers act, and how many are aware of the issue?

    And that is where Linux is starting to lose its edge on Windows: the quality of the sysadmins. At the risk of being accused of making a crass generalisation, I'd say that many, many Windows sysadmins are of the point-and-click Mickey Mouse variety. Worse, not just the admins, but the infrastructure architects as well. After all, all you need to set up a domain is to complete one easy wizard, right? I have seen the result in all its ugly glory. Linux, on the other hand, required an admin who knew what he was doing, since there were no easy wizards. Much configuration was done by editing files, with the how-to printouts in hand.

    I say "required" in the past tense, since Linux is becoming easier and easier to set up. Some distros are close to the point where I'd be happy to give the CD to my mom and have her set up her own desktop. That is not a bad thing. Yet, I already have seen a few (very few, thankfully) "sysadmins" setting up Linux boxes for database or web services, without really knowing what they are doing. When we get to the point where managers themselves can set up Linux, they will be tempted to hire less and less qualified staff, as has already happened to a large degree with Windows NT.

    My fear is that Linux servers will be run by less qualified people in the future, and that it will cause the proliferation of aggressive and effective Linux virii.
    • by madenosine ( 199677 ) on Friday September 13, 2002 @07:27PM (#4254819)
      it was only a matter of time before hackers showed an interest in this OS

      hackers? interested in linux?! no way!

      ...it had to be said
    • Right, it will probably happen again. How many hands will I need to count the number of popular IIS/Outlook worms before then? By the way, I closed this hole on my server a while ago.
      I have to agree. I've seen a LOT of pretty big companies put the "guy who knows the most about Windows" in charge of the network. These networks tend to be a horrifically unpatched, badly set-up mess. I honestly wouldn't trust most of these people with any version of Linux unless I'd installed everything they need and bolted it down before letting them have it. "Pick'n'click" really is the extent of their abilities, and even then I'm betting most of the clicking is pretty much guesswork!

      It's scary to think that nothing more than the default router settings protects most of these people from script kiddies.
  • by Anonymous Coward on Friday September 13, 2002 @07:27PM (#4254822)
    If you follow the stoopid /. suggestion, and compile/install the new OpenSSL you are going to leave RPM nirvana and enter "random untracked apps linked against random untracked libraries" hell.

    The correct solution is to run:

    up2date -u

    OR, if you don't use the free Red Hat Network., run:

    rpm -Fvh ftp://updates.redhat.com/X.Y/en/os/i386/mod*
    rpm -Fvh ftp://updates.redhat.com/X.Y/en/os/i386/apache*
    rpm -Fvh ftp://updates.redhat.com/X.Y/en/os/i386/openssl*
    rpm -Fvh ftp://updates.redhat.com/X.Y/en/os/i686/openssl*

    Of course, replace X.Y with your version such as 7.0, 7.1, 7.2, 7.3, etc.

    PEOPLE! Package management is GOOD. You should get and apply the updated packages from your vendor/distro. Slashdot editors/submitters should get a clue instead of recommending solutions that ultimately fsck stuff up.
    • Maybe not...

      # rpm -Fvh ftp://updates.redhat.com/7.3/en/os/i386/openssl*
      Retrieving ftp://updates.redhat.com/7.3/en/os/i386/openssl-0.9.6b-28.i386.rpm

      So RedHat doesn't have the latest version on the ftp site?

      I have a Red Hat system, but I have already upgraded to e. I just tried this out of curiosity.

      • by yem ( 170316 ) on Friday September 13, 2002 @10:15PM (#4255440) Homepage
        Retrieving ftp://updates.redhat.com/7.3/en/os/i386/openssl-0.9.6b-28.i386.rpm

        So RedHat doesn't have the latest version on the ftp site?

        Don't worry. Red Hat has an irritating policy of backporting fixes into previously released versions of each package. It's the revision number that counts. Check the date on that file.

        OT: Anyone care to elaborate on why apache 2.0.40 requires at least openssl 0.9.6e? I modified the configure script to accept 0.9.6c and it was happy enough...

    • Every Linux user should be using the packaging system to install this - otherwise, as the author above said, you'll have an application with a nonstandard install, no file querying or verification, a nonstandard uninstall, and further breakage of your system for apps which subsequently rely on openssl and apache.

      And if your Linux distribution can't reliably install RPMs, then it's not a Linux distribution but an OS which uses the Linux kernel. There is a difference, and it's called the LSB.
    • Well, I've been keeping my RedHat 7.3 up2date and I got hit. I didn't know it until I read this post, but last night TicketMaster Brasil (of all places) pinged my server one minute before the characteristic /tmp/.uubugtraq file appeared. The only thing that saved me was that the link phase of the worm compilation failed due to missing libraries (specifically, RC4 and MD5).

      I agree that package management is good, but it looks like RedHat is running behind on this one. I'll be closing down the SSL port on my firewall for now :-(

      Although I never saw it actually operating, you can probably clear the worm from your system via the following command (though you'll have to take measures to ensure it doesn't come right back):

      killall -9 .bugtraq
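      A slightly fuller cleanup, based on the file names reported elsewhere in this thread (treat the paths as assumptions, and patch OpenSSL before bringing Apache back up):

      killall -9 .bugtraq                                     # the running DDoS agent
      rm -f /tmp/.bugtraq /tmp/.bugtraq.c /tmp/.uubugtraq     # dropped source and uuencoded payload
      apachectl stop                                          # keep SSL down until openssl/mod_ssl are updated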

      The worm itself is nicely commented; it even has a disclaimer that the author isn't responsible for any harm:

      Peer-to-peer UDP Distributed Denial of Service (PUD)
      by contem@efnet

      <snip>

      I am not responsible for any harm caused by this program!
      I made this program to demonstrate peer-to-peer communication and
      should not be used in real life. It is an education program that
      should never even be ran at all, nor used in any way, shape or
      form. It is not the authors fault if it was used for any purposes
      other than educational.

      Doubt the disclaimer will keep him out of jail for life [slashdot.org], though.

  • I hate to say it (Score:5, Insightful)

    by ealar dlanvuli ( 523604 ) <froggie6@mchsi.com> on Friday September 13, 2002 @07:28PM (#4254827) Homepage
    But don't a decent number of the readers here make statements like "At least us Linux admins patch our boxes regularly" and "There was a patch available that night, and most Linux admins patch ASAP, whereas MCSEs never patch"?

    I hope I never see another post stating that again, ok? Especially not a god damned +5 one.
  • What about... (Score:4, Interesting)

    by seanadams.com ( 463190 ) on Friday September 13, 2002 @07:30PM (#4254832) Homepage
    ...non-Linux systems running Apache/OpenSSL?

    I realize the binary may not run on FreeBSD/OSX/etc., but the vulnerability itself is not Linux-specific, right? Could the virus be ported?

    Sorry, I'd RTFA but it's slashdotted.
    • You'd just need PPC/whatever shell code. Fortunately, as of 08-23 any OSX users running Software Update (enabled by default) have been prompted to download the update that fixes this. It may have been perhaps a bit later than 08-23 if they're not checking daily (I think weekly is the default). Anyhow, Apple made and distributed an update [apple.com] shortly after the vulnerability was made public.
    • Take a look at the SecurityFocus article. I believe it states that the OpenSSL bug occurs on Windows systems as well.

      This does not necessarily imply that the worm can be ported. Perhaps it depends on Linux-like behaviors in the underlying OS.
  • by rainmanjag ( 455094 ) <joshg AT myrealbox DOT com> on Friday September 13, 2002 @07:32PM (#4254846) Homepage
    It seems to me that some basic precautions close this hole before you are even vulnerable... first, only root should be able to run gcc... and second, the webserver daemon should not be running as root anyway... I've never administered an Apache server, only AOLserver, and it won't even *let* you run it as root... so if you can't get the server to run code as root and only root can run gcc, then you've got no problems...

    -jag
    • ... only root should run gcc ...


      What is gcc doing on a production webserver in the first place?

      My usual practice is to remove the compiler and linker after building the system. I never install anything extra. It's all one package at a time. It's a PITA, but that's the way it goes. If I need to patch, or install from source later on, gcc gets put back, and taken away again after.

    • by devphil ( 51341 ) on Friday September 13, 2002 @08:21PM (#4255053) Homepage
      first, only root should be able to run gcc...

      Thank you, try again.

      While you are correct in saying that a limited subset of users should be permitted to run the compiler, that subset should never be the superuser. Compilers have security holes too, and gcc has been no exception. (was it 2.7 or 2.8? don't recall, too tired)

      Never do your compiling as root.

      • The whole concept of 'root' is dangerous and a major security flaw. There should be ACL restrictions on any modern secure operating system. Security should be segmented. There's no reason for an antiquated 'god account' concept on a modern server.

        Sadly, many people are still bogged down in the concepts of 70's era Time Sharing systems.
      • you are correct in saying that a limited subset of users should be permitted to run the compiler

        Your suggestion only works if it is not a policy. Once it becomes a policy, developers will not have access to compilers, because it's much easier to find a new job than to convince IT that you properly belong to that subset.
  • by Carnage4Life ( 106069 ) on Friday September 13, 2002 @07:33PM (#4254848) Homepage Journal
    The primary thing that has concerned me the most about most web based worms is the fact that they usually infect systems using exploits that have long since been patched. This is true for both *nix and Windows worms.

    Unfortunately given human nature, we can't rely on sys admins and end users to patch their boxen. Almost every mechanism I can think of to automate this process either calls for automatically updating machines (which sucks if a patch breaks an untested scenario and also may need some legal exemptions) or some similar mechanisms to enable computers to help themselves [slashdot.org].

    Any Slashdotters have any thoughts about this?
    • Benevolent worms! (Score:2, Informative)

      by alienmole ( 15522 )
      The only solution that'll work in the real world, today, is to write worms that infect vulnerable machines and fix them.

      In fact, Microsoft has already pre-infected their own new OS, Windows XP. Maybe those draconian EULAs (you hereby agree that "M$ 0wnz j00") aren't such a dumb idea after all...

      Not that I like it, but the fact is that MS is targeting the sort of people we're worrying about, giving them what it thinks they need, whether they ask for it or want it, or not. We hate this because we're tech-savvy and want to control our machines, but for the average user, having someone else "0wn" their machine is probably, ultimately, a necessity. The question is just who's going to do the owning - virus writers and crackers, or Microsoft/Symantec etc.

      • Please mod the parent up; while there are a host of unaddressed issues with regard to how this sort of effort could be undertaken, I think that an accepted infrastructure (not just technical, but extending to communications with the admins of infected machines) needs to be established that will enable semi-automated fixing of infected/infectable boxes on networks.

        Do I think that there should be viruses that go around and infect/fix your box for you? No; varying distributions would make this a nightmare, and the liability issues involved would make anyone (with even the best intentions and highest technical skill) insane to even try such an approach.

        Rather, there needs to be, at a minimum, an accepted method of notifying the admin/primary user of "box X" that their system has been rooted; this notification could include some sort of pointer to (distro-specific? security-vendor-implementation-specific?) registered info about the virus.

        This appears to me to be the best role for a "benevolent virus" (in this case, more of a network scanner/meta-virus, as actual infection is not necessary): detecting possible routes of infection or actual infection on a system, and warning that system of possible/actual infections.

        A distro could (based on this warning notification) wrap some nice end-user warning/auto-update functionality around the registered virus info.

        In other words, the newbie user doesn't necessarily have to actively check for updates; rather, others on their network will intermittently scan for "open" boxes, and notify those machines/users of their status (this isn't much different from what a sysadmin on a LAN does, but in a more decentralized manner). Think of it as a sort of semi-automated neighborhood watch.

        Are there holes to this approach? Is this politically/technically complicated? Certainly.

        However, this is definitely a case where I see that the "mediocre user" needs to be accommodated - and educated/hand-held, even if just a little bit at a time - into keeping their boxes maintained correctly. Otherwise they simply won't be able to keep up.

        Besides, I would assume that a community in which network effects are so well exploited for generating code should have some excellent ideas about automating notification throughout local networks.

    • Easy. Hire an admin. Pay him to do it.

      And then: Don't forget to get a new admin if your old admin leaves the job.

      Those machines that have an admin are usually taken care of. But most security issues I see are with clients who have a server that some guy did the setup for some two or three years ago, then left a year later and since then nobody looked after the machine.

      As one ad's catchphrase correctly put it, you never talk about the server until it fails.

      Being the guy in my little company who's responsible for updating the clients' servers, I often experience how clients have a hard time understanding that software support, updates and log checks are necessary -- because from their perspective this is work without "results".

      They can't check if I really did something when I give them a month's bill with x hours for security updates on their machine.

      I often explain to them that this server care is a bit like toothbrushing... (Which, btw, is the actual name of that task we use in my company.)
    • by Garin ( 26873 ) on Friday September 13, 2002 @11:31PM (#4255630)
      It isn't lazy admins. It's lazy management. There is one exception -- home servers. In that case, it's a lazy (or ignorant) user-turned-admin.

      Security is about risk management. It's about process, procedure, and diligence. Security is not a technology problem, and it is not solved by geeks.

      You can have a secure server farm running virtually any kind of software out there (including M$ products). How? By having a tight, auditable system. You carefully install the systems, documenting your procedure and following best practices (even if you develop them -- the important thing is to have a process). You maintain them on a schedule, leaving nothing to chance. You document the configuration thoroughly, and you enforce rigorous change control.

      You might not even have OpenSSL upgraded even though it's vulnerable! You have to decide how much risk is acceptable and worthwhile, but the trick is to consciously and deliberately evaluate the risk, and decide how you're going to deal with it.

      This applies to everything. You don't leave it up to your sysadmins to decide whether or not they should upgrade -- it's a part of a checklist that must be done, and can be independently verified at any time. It's part of a procedure that will allow new upgrades to be thoroughly tested and carefully rolled out to avoid downtimes due to unexpected incompatibilities between new and old versions. Imagine someone unwittingly upgrading apache from 1.3 to 2.0, without full testing on a major production system or even realizing that there may be configuration differences.... Nightmarish.

      The only way to truly run a secure system is to realize that it has to be extremely carefully planned and managed. It's a hell of a lot of work, and it costs a lot of money. So it quickly becomes an exercise in traditional risk management. This is where the suits and the high-priced consultants often come in. You have to find out how much everything is worth, and what kind of risk you're willing to tolerate (or conversely, how much security you can afford given your environment). You will never be 100% mathematically impenetrable, but you can reduce your risk to a level that you're comfortable with.

      Obviously, this kind of thing scales. If you have a simple system, your plans and procedures can be fairly simple as well. As long as you have a solid verifiable plan, and you stick to it, you'll be fine. If you have a complicated system, your security management is going to be complicated as well.
    • It's not a "lazy" admin problem.

      There've been too many admins who've been burned by a "security patch" that broke the system in some other way. When your computers need to be up 24-7, and you can have, at most, about 4 hours of down time, you're going to be VERY selective about what patches get added to the system. Or from another viewpoint, I just got burned by an XP "security patch" that for some reason broke my autodial functionality so that my routing table went straight into my local network. I had to reinstall Windows XP to get the functionality back... I'm not about to start putting those security patches back on. I don't like it, but my system works. (I run firewall and antivirus software as well, so its not like my butt is completely uncovered, either)

      Admins are not only responsible for the computers and OSes themselves, but for the network communications layer, hard drive resources, ALL of the apps on those boxes (and their associated patches), plus help desk support, new computer setups, old computer shutdowns, and let us not forget software licensing management issues.

      IT Admins also painfully understand the one part of Software Engineering that Software Engineers don't. Any change to the program WILL have functional differences.

      Automating updates can work because it takes the load off of the admin. But as you point out, there are legal issues, plus there's the above issue where you don't necessarily want to install all of these patches because your system works "as is". On the flip side, Norton's LiveUpdate for their anti-virus software runs pretty well. But NAV is a very distinct application and purpose, and doesn't have ripple effects throughout the rest of the computer system.

      Also, there's an apples-and-oranges comparison between Microsoft and Linux problems here. Microsoft got its bad press not from legitimate security issues, but because Outlook made the very ACT of receiving an email a vector for running a virus/trojan horse through the preview pane. Because Word allowed any document to take control of the user's hard drive and begin deleting files, grab the email address book and replicate itself. That's a whole different ballgame from exploiting IIS through stack overflow issues, or exploiting this loophole in OpenSSL. There's a difference between "defeating/exploiting security" and "leaving the doors wide open." But now, thanks to Microsoft PR spinning their problems and Linux PR making Microsoft look bad, ALL exploits are equal, so that the least exploit is just as important as a truly critical one, and THAT adds to the admin's workload, and leads back down the road of not getting these patches installed.

      In the end, the power and the responsibility lie with the Sys Admin. Which is where it should be.
  • Security Lists (Score:3, Insightful)

    by Q2Serpent ( 216415 ) on Friday September 13, 2002 @07:42PM (#4254894)
    This is why I subscribe to the Mandrake Security mailing list. I got an e-mail about this a little while back, did a "urpmi --auto-select", saw ssl in there, and bang. No more problem for me.

    -Serp
  • Most of us home users don't run https servers so -- correct me if I'm wrong -- this doesn't really affect us. Putting my neck out further, would it be safe to say that if you firewall port 443 (https) then you should be safe from this bug?
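    For reference, blocking inbound HTTPS with iptables looks roughly like this (a sketch; it assumes the box serves no legitimate SSL traffic, since the worm spreads over the SSL handshake on port 443):

    iptables -A INPUT -p tcp --dport 443 -j DROP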

  • by McCow ( 323710 ) on Friday September 13, 2002 @07:49PM (#4254941)
    Seems a bit more detailed.

    Here is the alert [incidents.org]:

    published: 2002-09-13
    OpenSSL, the collection of libraries and programs used by many popular
    programs, has had a number of security problems recently. It looks like
    the problems are not over yet.

    It has been discussed on several mailing lists, that aside from the
    exploit known for openssl 0.9.6d, there are exploits available for
    even the most recent version (0.9.6g).

    As a precaution, we recommend to disable programs that use openssl as
    much as possible. The exploits available so far focus on apache, which
    is probably the most common exposed service that is using openssl.
    As a precaution, we recommend disabling SSLv2, if you have to run an
    Apache server with mod_ssl enabled. The magic configuration lines
    are:

    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!NULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:-LOW:+SSLv3:+TLSv1:-SSLv2:+EXP:+eNULL

    One of the openssl apache exploits was found to install a DDOS agent
    called 'bugtraq.c'. It uses port 2002 to communicate and can be used
    to launch a variety of DDOS attacks. This program uses UDP packets on
    port 2002 to communicate, not necessarily to attack.

    - //cow
    cow's go muu~
  • What does an attempt to infect a webserver look like in the access logs? This will allow those who have already fixed the problem to remind those who have not...
  • by GT_Alias ( 551463 ) on Friday September 13, 2002 @08:31PM (#4255086)
    I noticed some strange stuff a week or two ago in my Apache logs, watch out for this stuff in your ssl_engine_log file:

    [27/Aug/2002 20:02:19 23525] [error] OpenSSL: error:1406B458:SSL routines:GET_CLIENT_MASTER_KEY:key arg too long
    [27/Aug/2002 20:02:22 24087] [error] OpenSSL: error:1406B458:SSL routines:GET_CLIENT_MASTER_KEY:key arg too long

    Thing is though, that "key arg too long" error is part of the July patch to OpenSSL, so you won't see it if you aren't patched. Hopefully this log signature doesn't become as familiar as nimda scans.
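    A quick way to check for it (the log path is an assumption; /var/log/httpd/ssl_engine_log is a common Red Hat location, but yours may differ):

    grep -c 'key arg too long' /var/log/httpd/ssl_engine_log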

  • When hackers stop bothering to hack your software, it is a sign that their love for you has grown cold and you are now irrelevant. Has anyone hacked Novell lately? :)

    To be truly loved is to get hacked! Someone out there must really love Microsoft, but I am glad they are starting to share the love with the Open Source community more and more. It is a sign that the love for Microsoft may be starting to fade, or maybe hackers are just plain sick of "shooting fish" in the proverbial barrel.

    Either way, I am going to go block UDP on port 2002 on the fw/router and mumble to myself about buffer overflows.
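    A minimal pair of rules for that (interface details omitted; the worm's peer-to-peer channel uses UDP port 2002, per the advisory quoted above):

    iptables -A INPUT -p udp --dport 2002 -j DROP
    iptables -A OUTPUT -p udp --dport 2002 -j DROP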
  • by Arandir ( 19206 ) on Friday September 13, 2002 @09:19PM (#4255271) Homepage Journal
    Okay, no one is answering the obvious question: Is this an OpenSSL bug, a Linux bug, or a GNU bug?

    The submission states "A GNU/Linux worm" and "a bug in OpenSSL". But OpenSSL runs on a heck of a lot of systems that aren't Linux. Does this exploit only affect Linux systems running OpenSSL, or does it affect any system running OpenSSL?
    • by Anonymous Coward
      It's an OpenSSL bug. This worm happens to use Apache and mod_ssl to get to OpenSSL in order to exploit OpenSSL, and it happens to use shellcode that only works on Linux on x86 platforms.
  • Seriously - most script kiddies/virus writers are not going to have a sparcstation/axp/whatever at home, and if they do they probably don't have the patience to make it run.
  • by Anonymous Coward
    Basically it's a DoS attack. Easy to squash and prevent.

    Showed up in the form of hundreds of defunct child processes from /tmp/.bugtraq.c. Very light signature in the apache logs, only a '408' request timeout as an SSL handshake terminated.

    The header of /tmp/.bugtraq.c says it is "Peer-to-Peer UDP Denial of Service"

    Basically it is installed and compiled via mod_ssl, and then once it starts, it connects to another host (the IP address is passed as an argument) via UDP 2002. From that host, it learns of other infected hosts, and connects to those, etc. The header claims that up to 16 million simultaneous connections are possible.

    The processes run as the web server user.

    Harmless, once you get rid of it.
    • Here's a hint as well, then:

      iptables -N outlog                                  # create the logging chain first
      iptables -A OUTPUT -p udp --dport 53 -j ACCEPT      # allow DNS out
      iptables -A OUTPUT -p udp --dport 123 -j ACCEPT     # allow NTP out
      iptables -A OUTPUT -p udp -j outlog                 # everything else goes to the logging chain

      # The output logging rule (the limit match rate-limits the logging itself)
      iptables -A outlog -j LOG -m limit \
        --limit 3 --limit-burst 5 \
        --log-prefix "catch-all:(out)"
      iptables -A outlog -j DROP

      That drops all other outgoing UDP (so the worm's port 2002 chatter never leaves the box), with nice, rate-limited, easy-to-track logs.
  • Since it's a GNU/Linux worm, I do hope that it comes with a clear statement of where we can get the source. It would simply be unacceptable to distribute this software in binary form, and the FSF should sue them for violation of the GPL :-)
