Unix Operating Systems Software

Major Unix flaw emerges?? 138

ZDNN has published an article titled "Major Unix flaw emerges". It talks about "a new denial-of-service attack". They're also running a news special about the Linux world (cute logo) and a poll: will MS squash Linux? (Guess the results...)
This discussion has been archived. No new comments can be posted.

Major Unix flaw emerges??

Comments Filter:
  • Garfinkel knows security... no doubt. But, IMHO, he has a habit of over-hyping things.
  • NT drops connections randomly, so it doesn't have this problem.
  • Is that all Unix can do at once? I'd guess it would be at least 32k. Not that I blame ZD, they do what they can with no knowledge, just pass these "I can shut down the Internet in 2 seconds!" yahoos along.

    Couldn't this just be handled by tcpwrappers (or similar) to close a session after 5 minutes of no traffic? Aside from all the usual firewalling, security tightening, etc. that any half-witted system admin would do.
  • I just said 32k because that's the highest I've noticed my Linux box get in PIDs before starting back at the beginning. But still, the point of ZD saying 600 processes is all it takes to kill an ISP is rather amusing. I can't imagine how Slashdot keeps going; surely there are times when there are 600+ users at once.
  • ...but Garfield discovered that the assault can come from the outside.

    To run your fork program you'd have to be a user on that machine, and we all know that's old news. But the ground-breaking discovery here is that outside connections use the same process table. All I can say is, WOW. I may just change my ISP to this guy's company if he's so security conscious...

    Heh, another cute quote I caught while re-reading it: I can shut down any one of their servers on the Net. A mighty bold statement. We need to immediately restructure the entire Internet to resolve this issue, lest we all die a slow and painful death.

    And what's with the bit about it taking 10 hours to accomplish this task? Do one connection a minute for 10 hours to get the 600 entries in a process table? I can already see this guy is an experienced sys admin with programming prowess like this. :)
  • If I'll bang my forehead at it for two days continuously I can make a hole in it.
  • Posted by Ominous the Foreboding:

    My friends and I were using this one on each other back in '93! There are 2 nice ways to fix this one. (a) timeout fingers-n-such after a specified amount of time, and (b) limit the number of each process that inetd will fork. Either one alone can be a pain in the neck if done too tightly, but both done lightly can keep a system up for quite a while!
  • It is rather obvious that the person writing this knows nothing about Unix, and is biased against Unix. Look at the way they dismiss the few vendor quotes they get, and the ignorance of what is really going on. This is journalism you'd expect to see in something one step above the National Enquirer, not something you expect from a credible news organization.

    Of course this problem was dismissed by those involved. It is not as serious as you are claiming. As far as I can tell you are saying this is caused by the finger program, which most people consider a security hole anyway; certainly the administrators who care about security have disabled it. Can it be caused by others? Maybe, but it is no worse than the SYN flood attack that is inherent in the design of TCP.

  • When you compile your kernel you can configure how many to allow. The more you allow, the more memory the kernel needs. (real physical memory, and swap both)

    For a non-server home machine 600 is plenty of processes. What I want to know is why someone would run a server with a process table that small?

  • Simson did bring it up in the Saturday technical conference - you must not have been in that session. Also, Simson is coauthor of the O'Reilly book, Practical Unix Security. It's a shame that almost no one seems to have bothered to really look into this story before screaming "FUD."
  • He coauthored the O'Reilly book Unix Network Security. Also, he didn't badmouth the vendors; he was just covering himself by letting everyone know that he tried (for a year) to warn them before exposing the hole on a public forum. He was at the SANS network security conference in February, and was genuinely concerned with doing the right thing with regards to announcing it.
  • had info on this last week. Bugtraq had a number of posts about it; check out the archives [geek-girl.com]. And the article on LWN [lwn.net], about the guy posting the 'sploit. Lots of different info on bugtraq.
  • by mackga ( 990 )
    Perfect!!!
  • Yeah, a lot of boxes don't accept finger queries these days anyhow, and we've got the friggin' proprietary chat programs instead (which run on everything, including Windows) and I'm sure they're just as bad...

    It's not hard to set the size of the process table, anyhow, whenever something forks too much you'll get errors, but it won't usually crash a box.

    Even NT doesn't always crash in low memory conditions, but good luck getting that memory back! :)
  • ...is that it isn't already running on everything! :)
  • It's called a firewall. ;>
  • Hehehe. Good one. :)
  • would cut off a country pretty quick, wouldn't it?
  • I will never take ZDNet seriously. A year ago they were running unresearched garbage that was almost always at odds with what I've seen from experience (my favorite was an article talking about what a bad OS Linux is, written by an author who had never used it, using the poor writing on www.linux.org as his primary reference); these days they are printing unresearched garbage that often supports what I've seen, but that doesn't mean they can be taken seriously.

    If they run a front page article on how great Linux is, it will only show that the popularity of Linux has made it easy to write trash favoring it. Asking a three year old for strategic IT information is more likely to get you useful information than reading anything published by ZDNet.
  • It's "hypocrisy", buttfuck.
    If you guys can't bleeding SPELL, how can we count on you to CODE?
    Geez!
  • I tried numerous times to vote on whether M$ would squish li'l ol' linux, but I kept getting "document contains no data." ZD runs NT, don't they?
    Hmmmm....
  • The point is NT spawns new threads, not new processes. And NT doesn't have a fingerd (or a whole lot of other daemons).

    Not that I think NT won't suffer from other similar DoS's - I'm sure it will.
    --
  • Ok - but most Linux installations as they are installed by default have this flaw. That's the point. You could argue that all the IIS bugs have been server software problems not OS problems too. The point is - the problem exists (but at least we'll get a fix quicker than an NT service pack <g>)
    --
  • The attack sounds much like SYN flood DoS attack that was plugged in 2.0.thirtysomething (35, I guess).

    This, and what someone said about ZD having 2.2.0 *now*, make me think we're running time machines on our desks (we're predicting the future, it seems)...

    ^D
  • Look, just because it's easy to trace doesn't mean this isn't a problem. What if there's a cyberwar - don't laugh, the U.S. government has a team getting ready for it.
    DoS attacks like this could cause serious headaches. It's basically impossible to "cut off" a country from the Internet. That means it would be next to impossible to prevent a mass attack of this sort, along with many, many other types of attacks. Net result? It might take 8 hours, but it sure would inconvenience a whole lot of people. This doesn't look like that big a problem, though: just set some limits, based on the anticipated demands, on how much resource a daemon can take up. We do it for Apache. We should do it for all of these.
    My 2c.
    --
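    For Apache 1.3, the limits the parent is talking about are a couple of config lines (values illustrative, not recommendations):

```
# httpd.conf -- cap simultaneous child processes and idle time
MaxClients    150
Timeout       300
```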
  • Shouldn't their poll have been, "Will Linux squash Microsoft like a bug?" I'll bet MS would fare worse than Linux has...
  • Linux 2.2.0 Kernel
    The latest Linux kernel is now available for download from ZDNet's Software Library.
  • Jump right in.

    PS : Flaw ?
  • Jump right in.

    Flaw ?

    Let's see. You finger a server and don't close the connection, then you finger it again. You keep doing this until the server's process table overloads and it goes under.

    There is a simple workaround which is to limit how much resources a daemon can eat up.

    It takes 10 hours to bring down a server.

    When you do it you can easily be traced by the server admin.

    Unix vendors have known about this for years and don't even consider it a problem "[...It's like saying the gas in my car could explode]".

    RedHat's Market-Droid had never heard of it but dismissed it in the usual manner "[If there is really a problem it will be fixed quickly]".

    There are no real exploits, just an app that went haywire on a client PC and brought down one server at an ISP.

    Does anybody else smell FUD aimed at dampening the "problem" called LinuxWorld?

    PS : "[...]" means it was paraphrased.

  • It took them 2 hours to recover from a DoS finger attack?!?! killall -9.... :)
  • Ah, blissful ignorance, where hast thou gone?
  • by arielb ( 5604 )
    would this flaw be a problem for openbsd?
  • Apparently there's a major problem in the Unix operating system (guffaw) that allows complete and total morons to make public announcements (particularly reporters already known for their utter cluelessness) about new "denial of service attacks" in Unix which are given wide acceptance for reasons which we have not yet been able to discern. Hopefully sniffing the air for the aroma of fertilizer will give us more clues soon.

    This in turn causes hundreds of thousands of people around the globe who are just as clueless to announce that they agree this is a problem and are horrified by it and something must be done immediately.

    ...which in turn causes the several thousands or so who actually *do* know what the hell they're doing to have to spend the next few days answering email and phone calls, attending meetings, and sending out faxes to people setting them straight and telling them to calm down, instead of doing their normally useful job tasks. This denies many companies the services of their properly employed security administrators.

    Remember, folks, just because it's on a web site, doesn't mean it's not complete and utter bulls**t.
  • Most Unixes have a 16-bit PID (which gives you 32767 PIDs if they're signed quantities (leaving negative numbers for error values) or 65535 PIDs if they're unsigned). In any case, 0 is not a valid PID, and 1 is left for the special init process (either by convention or by design, I'm sure it depends on which OS you're dealing with).

    This does not mean you can have 32K simultaneous processes. Linux's default process table has been 512 or 1024 processes, from what I recall, and it's configurable with a #define if you want more.

    Also, in response to another person's comment -- # of processes does not translate into # of users. In fact, typical implementations of a number of common daemons service multiple users with a single process. I believe most MUDs are implemented this way.



    --
  • Sure, Unix has work to do to be stable for the next 100 years. I'll buy that. Heck, we'll all complain about the time_t wraparound that occurs in 2038. "Unix... best if used before: Tue Jan 19 03:14:08 2038 UTC" . Nonetheless, Unix has had 30 years to mature, and most of its major problems have been solved (and new problems, as discovered, are addressed fairly quickly). And we all have another 39 years to brace for Unix's flavor of Y2K problem. ;-)

    In contrast, Windows NT is still fairly immature, and is growing new code faster than the old code can be fixed. It already has tons of problems, with new problems being added every day -- even faster than old problems are being fixed. When new problems are found in it, we have to wait for Micros~1 to decide when to fix them -- and usually the decision is a marketing based decision, and nothing more.

    So which do you choose? 21st Century Problems, or Problems Today (And Forever)?

    snicker

    PS. I stole someone's 'Micros~1' joke... I like it.



    --
  • So, this guy is supposed to be some sort of Sysadmin? That's just part of running a server, this guy sounds like he has no idea what he's talking about. I'll bet it's a 50/50 Unix/NT shop, and this guy's really the NT admin. It just seems odd to me that he'd badmouth all the vendors because he doesn't know how to run his server.
  • I ran it on my ol' p60/64MB NT 4.0 machine for kicks. To its credit, it didn't die, but it opened about 300 processes and slowed to a crawl before the OS realized that something was going on and refused to allocate any more resources.

    Hehe. The main use of this NT box is for running WhatsUp and Visio. Scotty is neat but the pitchers it makes ain't as pretty.
  • PIDs are just sequential numbers, the number of *simultaneous* processes is another thing. This is actually limited by the process table size (which is adjustable at compile time in Linux).

    To quote include/linux/tasks.h:

    #define NR_TASKS 512 /* On x86 Max 4092, or 4090 w/APM configured. */

    Another thing: number of /. "users" != number of processes.

  • Isn't this just a syn flood? Doesn't linux have syn cookies to avoid filling up the connection table? Yeah, I thought so. I guess this affects /other/ unices.

    Andrew
    --
    ...Linux!
  • inetd is a mess in that it never checks the process table. i was hit last week with a DoS attack that failed. why? because xinetd was set to deny the IP anyhow and never forked. it just flooded my logs with failures, but hey, my machine survived.

    linux inetd is, of course, subject to this issue. so just move to xinetd, already. other inetd replacements can also be used to control the number of daemon processes spawned... and you should use them. unfortunately, not everyone has the luxury of implementing firewalls.

    i grabbed xinetd from ftp://coast.cs.purdue.edu/pub/tool/unix/xinetd/ [purdue.edu] and it works like a champ.
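    The per-service limit mentioned above looks roughly like this in xinetd.conf (path and number illustrative):

```
service finger
{
        socket_type  = stream
        wait         = no
        user         = nobody
        server       = /usr/sbin/in.fingerd
        instances    = 20    # refuse to spawn a 21st fingerd
}
```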

  • Any server could be put out of service as long as one does not care about getting traced and has a superior net connection. That is not a security problem. Every server has a maximum number of simultaneous jobs it can process (no computer exists with unlimited bandwidth and unlimited CPU power). If you have access to "several" well-connected sites you can always use up these resources for any site. There is no way to stop this. The important thing is that after the "attack" stops the server resumes work as usual...

  • hahahahahahahahahahahahahaha

    ow... my stomach hurts! not that I've ever had this problem on my NEW machine -- it usually lasts about 2 weeks or so without rebooting. well it did. now it's gone into the bit bucket with no regrets...

    well, not none -- anyone know where you can get a killer flatscreen that works with Linux? and how about drivers for the Creative Labs PC-DVD Encore?
  • ... whereas for microsoft, you can bring it down by doing things you're SUPPOSED to do. like opening MS Word.

    you'd think they'd make sure their OWN products don't crash their OWN products...
  • leave it. it crashes.
  • Take a look at this link:

    http://www.zdnet.com/talkback/22 _28402_123746.html [zdnet.com]

    What exactly is the 'Microsoft Standard'? Doesn't this person know that Linux has surpassed Windows in almost every category? And I personally don't see anything wrong with the Average User giving Linux a whirl. I think it would be good for them.

    Accipiter

    (P.S.: Pain in using a computer? If you can't take the heat....)

  • I eagerly await ZDNet's coverage of this new-fangled ``horseless carriage'' contraption.

    New DOS attack discovered: parking your car in the middle of the road will clog up traffic.
  • ZDNet was able to confirm today that a major design flaw exists in all versions of Windows released to date.
    Johnny Doe, a well-known nerd and computer user, has told us that any version of Windows can be brought down by a denial-of-service attack known as "Normal Use".
    "It is too simple", said Doe, "a user would go and start using the machine. It can be done even remotely. In some hours, bingo, there goes Windows south".
    Doe said he tried to call Microsoft's attention to the problem but failed to get it past the third-shift phone-support supervisor, who told him "And what are the news?".

  • Although you can't really use it remotely, you can certainly bring it down remotely... :))
  • ...deserves it. 8)

    Seriously, though, if he's running an internet-accessible system without proc limits or reasonable timeout values, it's his own damn fault.

    It takes all of a minute or two to write a process-reaper, fer Gad's sake.

    -- Cerebus
  • To pass this test? huh?

    I mean... does every troll post have to have some inanity about that F*cking Bill Gates ramming Sh*t down every two bit computer user's throat and a hard whatchamacallit up their *ss's?

    Probably not.

    The obviously superior OS, and I do mean OS as in "Outdated Software", Win-whatever, runs circles around unix stuff. Yeah, sure.

    I mean.. we are talking about windows which runs on today's desktop PC's compared to the unixes which require those large and expensive computing units, right? Can't WE SEE that unix is outdated?

    Probably not.. not with all that sh*t all over the place.

    Not only that, but just imagine the benefits of running a truly superior OS like win-something-or-other. You'd never need a UPS(because it wouldn't help you), you wouldn't need an administrator(because anyone can crash it just as easily), you wouldn't need powerful hardware(You'd need EXPENSIVE and VERY VERY Powerful hardware to squeeze ANY performance out of it), and best of all, it's like an STD. Once you've got it, you've pretty much got it for life... and probably where it really hurts.

    Now.. compare this to those crufty old useless unix systems. You'll need lots of backups and UPS's because you'll always have important stuff on those machines. You'd need an administrator who's trained and knowledgeable, because unix is software that requires someone who can think behind that keyboard. And you'd better save those pennies, because a fraction of the money will buy a unix-capable piece of hardware which can outperform win-blows any day of the week (can't say month, since Win-dump won't last that long).

    But hey.. go with that unix or linux. Sure as hell won't piss off Billy "boy" gates. But you'll probably make your customers happy. And as any "good"(as defined by MS) knows, that's the LAST thing you should do.

    [maybe.. just maybe.. this will get a passing grade. :p]

    Disclaimer:

    Btw, the above rant does not represent mine or anyone's (as far as I know) perspective on computers, OS's, and the industry. The characters are fictional and any resemblance to real-life individuals would be sad.


    - Wing
    - Reap the fires of the soul.
    - Harvest the passion of life.

  • Yep, that got done (by students) to my University's main server back in 1994. It took about 15000 connections before the thing froze up, and the auto-kill daemon had it cleared up about 10 minutes later. (Probably would have been faster, but the connections were being respawned when they got killed.) Wasn't even enough to get the admins to yell at us. Anyhow, with 15000 connections to a machine being needed to shut it down, how exactly is this a huge, major scary problem? How many connections does it take to nail NT to the floor? All it means is "When you write a new/replacement daemon, don't forget to drop idle connections" - which many of them do already.
  • hehe, Apple ProDOS? isn't that what we call the
    genuine single-user operating system? Eh, and I'll
    have to stop using the top of my system as a
    footrest.
  • I can't see why this would affect Unix more than MacOS, WinNT, AmigaOS, BeOS, OS/2, or any other operating system at all.

    It affects any poorly implemented daemon. Imagine if Apache forked a new process for every HTTP request. Heh. Now that would suck. All daemons designed to be run by inetd should check how many of the same are already running, or, better, we could hack GNU's inetd to let the user set a maximum number of forked processes per service.

    AFC.
  • Only it was a batch file that spawned a new command interpreter, to run the same batch file.

    Could not click the close buttons fast enough.

    Control-Alt-Delete, then selecting shutdown killed them all - eventually.

    -josh
  • Guess what? If you're root and type rm -rf / you could erase files. There's no warning and no "Are you REALLY REALLY sure?" message!!! Wow!

    THIS JUST IN --- Ginsu issued a recall of its best-selling kitchen knife. Apparently, you could cut yourself with it.

    Leave it to ZDanything to start a panic.
    Hoard food and ammo, Y2K is nigh!
  • one of the canonical texts on Internet/Unix security

    In the beginning, there was nothing, and then God said "let there be light" and there was light..

    While speaking of canonical texts, let's remember to take them for what they're worth, question our sources, and never let our heroes rest too long on their laurels. And if they stick their foot in their mouth, let's make sure they know we know.

    I wonder how M$NT would stand up to this 'flaw'.
  • Anyone else notice the intelligence of the Poster reflected in the grammar and spelling?
  • They just had a post on bugtraq about inetd doing this. It's easy to fix: just put an alarm() in the source code so it bails if the client doesn't answer in 60 seconds.

    bet you I can fix this faster on my linux box with sources than on my NT box without..
  • Linux defines in (2.2.x):

    #define NR_TASKS 512 /* On x86 Max 4092, or 4090 w/APM configured. */

    #define MAX_TASKS_PER_USER (NR_TASKS/2)
    #define MIN_TASKS_LEFT_FOR_ROOT 4

    The real danger is if you are running stuff as root out of inetd. finger seems to have this problem on RedHat. qmail takes the correct approach: Use timeouts on all I/O, run as a user process only, and use tcpserver instead of inetd to limit number of connections. If you are running stuff out of inetd, make sure it at least uses an inactivity timeout.

    In any case, finding the attacker is real easy. It's a suicide crack.

    Some individual services may be somewhat lame in this respect, but that hardly makes it a UNIX flaw. It's an application problem.
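    For reference, the tcpserver connection cap mentioned above is a single flag in ucspi-tcp (service path illustrative):

```
# at most 40 simultaneous fingerd processes; further connections
# wait in the listen queue instead of forking new servers
tcpserver -c 40 0 finger /usr/sbin/in.fingerd
```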
  • Here's the relevant bit from the inetd man page on my RedHat 5.2 server.

    "The optional ``max'' suffix (separated from ``wait'' or ``nowait'' by a dot) specifies the maximum number of server instances that may be spawned from inetd within an interval of 60 seconds. When omitted, ``max'' defaults to 40."

    With a max of 4096 processes (I am using a 2.2.x kernel), as mentioned in a prior reply, it would take well over an hour to complete this kind of attack, and less if multiple servers are attacked. But is 4096 really the max number? I've seen process ID numbers well over 20000.

    ps. why would a "security expert" leave the finger service enabled????????? Heck even I know better..
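    Using the syntax from the man page above, a capped finger entry would look something like this (server path illustrative):

```
# /etc/inetd.conf -- at most 20 fingerd spawns per 60 seconds
finger  stream  tcp  nowait.20  nobody  /usr/sbin/in.fingerd  in.fingerd
```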
  • Personally, I've loathed Ziff-Davis ever since they bought and subsequently folded Creative Computing (remember it?) way back when. It's no wonder they're in bed with Microsoft-- greed is the only language they understand. This kind of FUD doesn't surprise me one bit.
  • At the end of the article the author says that the Internet isn't robust enough to handle 21st-century problems. WTF is /THAT/ supposed to mean? Just because people can't configure their daemons, the Internet isn't working up to spec? Somebody should shake this guy's head for him...
  • It appears that this is another article looking for an excuse to exist from the Micros~1 media mouthpiece, Ziff-Davis.

    This'll be hot news next month in Dvorak's Inside Scoop (or whatever it's called) article.

    Jason Dufair
    "Those who know don't have the words to tell
  • Actually, I suspect it was sarcastic because it wasn't an AC... but still... Not very good sarcasm either..

    AS
    AS
  • There is a revolutionary new bug that has just been discovered, and it affects all M$ Windows systems: it's called time. Come Jan 1, 2000, this bug is going to cause rampant errors in almost all Windows systems. M$ is not currently working on a fix, because they plan to have a public beta of Windows 2000 out in December of that year, at the earliest. Comparing bugs is not even fair to Microsoft: you have to actually pay for their product and you expect it to be bug-free; Linux you can get for free, and it *IS* bug free (well, close enough when you compare the two). OK, I'm done now.

  • There is always a good way to solve that kind of problem. If a cyberwar, as you said, breaks out and we need to take down a country's net links, it's not that hard: if we want to cut off Australia or something we just go and blow up the fiber lines or satellite uplinks. That's why we have real commandos and not just geeks protecting our country :)
  • Who waaants..to liiivvveee..foreverrrrrrrrr, Who waaants..to livvveee..foreverrrrrrrr


    people, it's connor macleod of the clan macleod!!
    there can be only one!!



    ahh, shit. i've fallen to the level of total nerd!
  • I'm a diehard IRIX sys admin.

    The first thing I do when I walk into work in the morning is pat my ORIGIN on its little blue head, then ask it how its night was. (That article mentioned IRIX first -- so I'm sticking up for sgi.)

    The second thing I do is get a coffee and reboot all the 'NT servers'. It's a pathetic OS that is totally closed... unfortunately the software we need only runs on IRIX or NT. Joyous day when I can move the System32 folder to the Recycle_bin!
  • to run the God of OSes, MS Windows 2000.
    All bow down to the mighty Bill Gates, for
    his operating systems are not vulnerable to
    these kinds of shenanigans. They are so
    obviously superior to that crufty old Unix
    stuff, why doesn't everyone run them?

    --C
  • Exactly. Bill is your friend. He wants to make sure you don't get zapped by any of those mealy-fingered little "linux-hackers" that will be the ruination of the entire 'net. It stands to reason that NT should use all your resources, anyway. I mean, didn't you pay for an OS that would _use_ that machine?

    Praise Bill.

    --C
  • One doesn't even need a firewall; one just needs to install xinetd with the 'instances' argument, which will limit how many daemons xinetd will spawn per service. I don't know of many ISPs that use inetd these days; almost all have switched to xinetd for the virtual machine capabilities and the superior logging.
  • EVER SEEN such a stupid essay. this is truly showing how PATHETICALLY stupid ZDNET is.
    lol, "MAJOR UNIX FLAW" LOL, HAHAHAHA god this is hilarious.
    Will Microsoft squash Linux like a bug -- or can it stand up to the
    big boys from Redmond?
    LOL BIG BOYS FROM REDMOND
    lol zdnet is so pathetic hhahaahaha
    of course they have to be scared too... when microsoft falls over all the way, they will be out of business... sure as hell no one wants them endorsing linux.
