History's Worst Software Bugs 645

bharatm writes "Wired has an article on the 10 worst software bugs. From the article: 'Coding errors have sparked explosions, crippled interplanetary probes -- even killed people. Here's our pick for the 10 worst bugs ever, but the judging wasn't easy.'"
  • Predictions are hard (Score:4, Interesting)

    by Teppy ( 105859 ) * on Tuesday November 08, 2005 @10:31AM (#13978817) Homepage
    Wonderful article. Twenty years ago I believed that writing software would soon become a licensed profession. (Need a license to own a compiler, for instance.) I thought that the event that would inevitably trigger this would be a software bug causing a human death.

    I still believe that programming will eventually require a license, but I now think that lobbying by the big media
    companies will be the cause. Depressing, huh?
    • Consider how much software is written by people with five years or less of professional experience, on short schedules, with no time allocated for continuing education. If software projects weren't always rush jobs on relative shoestring budgets, the quality would be better. If continuing education for programmers were a priority, quality would be better. If a couple of decades of experience were properly appreciated, quality would be better.

      • If continuing education for programmers was a priority, quality would be better.

        This also requires more than the current courses, which are pretty much starter-level. It is sad that after only a few days of working with a language before a course, you will already find mistakes/bugs in the material, or simply better ways to do things than what the course promotes.

        For example, after a 3-day crash course (I missed day 1, else it would have been 4 days), I became a certified Stellent developer. So a "real" test at the end t
      • by mumblestheclown ( 569987 ) on Tuesday November 08, 2005 @12:16PM (#13979880)
        Some of you marked the parent comment "insightful" because it doubtless seemed like common-sense, reasonable analysis.

        However, you have been fooled. The parent comment is completely at odds with the article.

        The article largely shows a series of examples where you DID have HIGHLY PAID and HIGHLY trained professionals with plenty of experience and oversight, but nevertheless very significant bugs occurred. So the real lesson from this article is not "you get what you pay for," but rather that "software development is very hard" and perhaps that "by the nature of its hardness, we can expect critical flaws to pop up from time to time, even when highly trained, experienced, and monitored programmers are involved."

        • Management problems (Score:5, Insightful)

          by gr8_phk ( 621180 ) on Tuesday November 08, 2005 @01:02PM (#13980303)
          "...trained professionals with plenty of experience and oversight, but nevertheless very significant bugs occurred."

          Some of the bugs reported in the story were not so much the fault of programmers as of management. The phone network bug was a misplaced { character in a nested if-else construct. The code had already been through extensive testing, and then a small change was needed. Because it was a "minor" change, someone decided it didn't need to go through the extensive (expensive) testing again. It's always easy to point at the code or the guy who wrote it -- especially when the boss is the one tasked with finding out what went wrong.

          • by TFloore ( 27278 )
            The phone network bug was a misplaced { character in a nested if-else construct.

            Is that what it was? I thought I'd heard that the AT&T outage was from a missing break; in a switch-case statement.

            I found that more believable, because a missing { would cause a compiler error, where a missing break; is a valid way to purposely fall into the next case.

            Though, really, I suspect both of us are just repeating rumors we heard.
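
            Whichever rumor is right, the fall-through version is easy to demonstrate. A minimal C sketch -- hypothetical code, not the actual 4ESS source: a missing break compiles cleanly and silently executes the next case, whereas a stray brace usually fails to compile at all.

            #include <stdio.h>

            /* Hypothetical message handler, not the actual 4ESS source. */
            static void handle_message(int type)
            {
                switch (type) {
                case 1:
                    printf("processing recovery message\n");
                    /* BUG: missing break -- control falls through into
                       case 2, yet this compiles without complaint. */
                case 2:
                    printf("resetting switch state\n");
                    break;
                default:
                    printf("unknown message\n");
                    break;
                }
            }

            int main(void)
            {
                handle_message(1); /* prints both lines, not just the first */
                return 0;
            }
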
        • by tyen ( 17399 ) on Tuesday November 08, 2005 @01:12PM (#13980361) Journal

          ...you DID have HIGHLY PAID and HIGHLY trained professionals with plenty of experience and oversight, but nevertheless very significant bugs occurred...So, the real lesson from this article is not "you get what you pay for," but rather that "software development is very hard"...

          It doesn't matter how highly paid and trained your professionals are if the environment that produces the software is not conducive to eliminating these types of flaws: if they are not given enough resources to test and QA the projects they are assigned, if there is no organizational commitment to take the time and expense to document properly, if leadership overrides technical objections to project timeframes, etc. Most of the cited projects could probably be classified as failures of project management rather than failures of the end product (the software) that these flawed projects produced. Yes, software is hard and the software profession should continue its efforts to improve quality, but that doesn't let the organizational culture, leadership and processes that produced the software in these cases off the hook.

          Why is it that when the accounting profession makes spectacular mistakes that take down entire Fortune 500 class organizations, there is a critical analysis of the processes that led to these failures, and remedies often comprise prescriptive measures for those processes, but a similar analysis of software failures focuses upon the software flaw and not the environment that allowed the flaw to emerge? Sometimes the remedy in the accounting case might not make complete sense (like SOX), but the point is that people don't look at just the end result (the accounting system transactions) of the accounting process.

        • by loose_cannon_gamer ( 857933 ) on Tuesday November 08, 2005 @01:31PM (#13980543)
          I think both the parent and grandparent have some validity. I'm a master's student in CS who has managed never to take a software engineering class before this semester (and I graduate in December). This has been an eye-opening experience. Let me point out some of the well-known, highly advocated techniques that would help avoid these bugs, which, as far as I can tell, most graduates and many 'out in the field' software engineering professionals don't use.

          1. Design reviews, by peers and independents

          2. Code reviews, by peers and independents

          3. Regular, organized unit testing

          4. Correctness proving

          5. Documentation in about a bazillion forms

          6. Defect tracking

          7. Effective software process metrics measurement and improvement

          8. Continuing education

          9. Humility / egoless programming

          This list was assembled in about a minute off the top of my head. I work in a CMM3/4 type organization, and although there are processes for these things, most people don't use them, or consider them a hassle.

          So my point is, the parent is right -- creating good software, even when done by properly trained experts with great experience, is hard. But the grandparent is right too -- doing all of the above to 'do it right' takes time and money, and many organizations, and by this I mean software process management as well as the actual engineers, don't understand the value, aren't willing to pay for it, or aren't willing to do all that work. And occasionally, as the article shows, the piper comes and takes his payment.

          • You missed one...

            Item 0: Requirements reviews by peers and independents. If you don't have good requirements, you obviously don't know things well enough to be building them. Sure, you can catch some requirements issues in 1 and 2, but the longer you wait the costlier it is to fix.

            An MSCS is NOT a software engineering degree, so why WOULD you take courses in SE? I'd say that CS and SE are two different professions. There are places to get an MS SE (Texas Tech comes to mind) if you are interested.
    • by ZiakII ( 829432 ) on Tuesday November 08, 2005 @10:43AM (#13978939)
      Wonderful article. Twenty years ago I believed that writing software would soon become a licensed profession. (Need a license to own a compiler, for instance.) I thought that the event that would inevitably trigger this would be a software bug causing a human death.

      This is like saying you need a license to operate a soda vending machine because some idiot decided tipping it over to get a free soda was a smart idea. You might have to put warnings on compilers like "do not code if you have no clue what you are doing," etc., but requiring a license won't ever happen. I am sure there will be lawsuits in the future regarding software bugs, but any software being used where an error could cause a human death is going to have a corporation behind it that can be held responsible.
      • by bunratty ( 545641 ) on Tuesday November 08, 2005 @10:55AM (#13979027)
        You might have to put warnings on compilers like "do not code if you have no clue what you are doing"
        Unfortunately, in my experience the ones who have no clue about what they are doing seem to be the most confident that they are top experts.
        • by Jason Ford ( 635431 ) on Tuesday November 08, 2005 @11:31AM (#13979402)
          Several recent studies lend support to this observation. From an article [apa.org] at the American Psychological Association:

          We've all seen it: the employee who's convinced she's doing a great job and gets a mediocre performance appraisal, or the student who's sure he's aced an exam and winds up with a D.

          The tendency that people have to overrate their abilities fascinates Cornell University social psychologist David Dunning, PhD. "People overestimate themselves," he says, "but more than that, they really seem to believe it. I've been trying to figure out where that certainty of belief comes from."

          Dunning is doing that through a series of manipulated studies, mostly with students at Cornell. He's finding that the least competent performers inflate their abilities the most; that the reason for the overinflation seems to be ignorance, not arrogance; and that chronic self-beliefs, however inaccurate, underlie both people's over- and underestimations of how well they're doing.

          • by idontgno ( 624372 ) on Tuesday November 08, 2005 @12:33PM (#13980038) Journal
            I can't cite any documentation, but I recall seeing studies which show that the number one critical attribute of persistently optimistic personalities is a chronic inability to clearly see reality. Is this the same phenomenon?

            In the words of the old chestnut, "If you're calm and confident when everyone around you is running around in blind panic, you clearly don't understand the situation."

          • by BenEnglishAtHome ( 449670 ) on Tuesday November 08, 2005 @12:48PM (#13980159)
            The tendency that people have to overrate their abilities fascinates Cornell University social psychologist David Dunning, PhD.

            I'll bet the guy just LOVES the first few installments each season of American Idol.

    • What constitutes programming is so blurry, though.
      Does it count when someone puts some HTML in a blog? What about JavaScript? A DIY PHP site? A batch file or shell script? An Excel function/macro?
      Do you only want to license compilers? How do I install my OSS? What about the power of interpreted or JIT languages? So much can be done with uncompiled code.
    • I recently completed an "Ethics in the Information Age" class for grad school (my earlier M.S. and undergrad predated such focused classes). As part of the discussions, we talked quite a bit about the Software Engineering Code of Ethics [acm.org] created by the ACM and IEEE and how such a code was a precursor to making software engineering a licensed, certified profession (akin to a CPA). So I figured it'd be neat to link to ACM's page advocating licensing.

      Guess what: They don't, [acm.org] although they appear to be hedgi [acm.org]

  • only 10? (Score:4, Insightful)

    by Lucan_UK ( 745708 ) <.nick.oram. .at. .ndr.co.uk.> on Tuesday November 08, 2005 @10:34AM (#13978848) Journal
    I wouldn't say they are the 10 worst bugs ever... more like the 10 most widely known, media-announced bugs. Okay, I have no examples of any others, but I'm sure there must be worse bugs out there...

    anyone think of any others?
    • Besides that, there really were only 8 software bugs mentioned. One, the Pentium floating-point error, is a hardware error. The other, the CIA messing with the software for the Soviet pipeline, was intentional.
      • Not to mention the fact that neither the CIA nor the Soviets ever admitted that there was such a bug. Sounds like Tom Clancy-ish wishful thinking to me.
    • Re:only 10? (Score:5, Insightful)

      by plover ( 150551 ) * on Tuesday November 08, 2005 @10:41AM (#13978919) Homepage Journal
      I don't think they should count the "pipeline bug."

      That was a trojan. It was a deliberate attack on their system by an enemy. It didn't even arrive via the now classical "worm" or "virus" route, which would have implied that a "bug let it in the door." No, this one was deliberately planted carefully at the root. It's not a bug, it was an attack.

      • Probably BS (Score:3, Informative)

        by hughk ( 248126 )
        I looked at this a while back because, many millennia ago, I worked at the company that produced the telemetry/control system for the Trans-Siberian pipeline. It was a specialised outfit based in Warwickshire, UK. It is very doubtful that their systems could have been nobbled by anyone. The network was closed, based on an X.25-ish HDLC, and the software was blown onto UV-erasable EPROMs. The CIA may have modified the s/w at the pump stations, but again it is doubtful.
    • Re:only 10? (Score:4, Insightful)

      by c_fel ( 927677 ) on Tuesday November 08, 2005 @10:45AM (#13978948) Homepage
      I remember the Mars Polar Lander crash in 1999 [http://www.space.com/missionlaunches/mars_polar_lander_031222.html] [space.com]. At the time there was a rumor that it was a human error: somebody had mixed up feet and meters. Now we know that it was a software bug contained in a single line of code.
    • Re:only 10? (Score:5, Insightful)

      by arivanov ( 12034 ) on Tuesday November 08, 2005 @10:46AM (#13978949) Homepage
      Seconded.

      The radiation bugs in both cases killed fewer people than the shiteware used in the Patriot missile system. Ariane and Mariner get an honorable mention, Raytheon doesn't. Why?

      There's also no mention of power grid bugs. The recent US blackout was a good example.
    • Re:only 10? (Score:4, Insightful)

      by ChodeMonkey ( 65149 ) on Tuesday November 08, 2005 @11:02AM (#13979096) Homepage
      The worst bug ever is the one that's there but you don't know about. Yet.

  • by PIPBoy3000 ( 619296 ) on Tuesday November 08, 2005 @10:35AM (#13978854)
    Bringing down the company's intranet countless times over the years almost seems like an amusing little distraction. No one died, nothing blew up, and I've even managed to keep my job. It must be that people are getting used to these "software bug" excuses for the various problems that pop up with computers. I'll have to remember that for next time.

    Caller: "My computer exploded and I'm bleeding profusely!"
    911 Operator: "Must be a software bug."
  • by Colin Smith ( 2679 ) on Tuesday November 08, 2005 @10:36AM (#13978863)
    So nobody's hit on the really big one yet.

     
    • So nobody's hit on the really big one yet.

      Um, what really big bug?

      FROM: it_director@norad.mil
      TO: president@whitehouse.gov
      SUBJECT: New systems

      Mr President, I'm pleased to report that the new national radar systems are fully tested and operational.

      FROM: r.q.hacker@norad.mil
      TO: it_director@norad.mil
      SUBJECT: possible bug in calendar module
      UNREAD

      Hey, we may have a problem in the calendar system. I suspect there's a memory allocation issue here, we've been seeing occasional bugs in testing. Might

  • Moth. (Score:5, Interesting)

    by Poromenos1 ( 830658 ) on Tuesday November 08, 2005 @10:36AM (#13978870) Homepage
    The moth was trapped, removed and taped into the computer's logbook with the words: "first actual case of a bug being found."

    Why would they say that, if the term "bug" didn't exist? I mean, you wouldn't find a rat in your car and say "First actual case of a car 'rat' being found" if you didn't use it as a term to indicate something. You'd just say "this bug caused computing errors". I smell a car rat.
  • Bug or User error? (Score:5, Insightful)

    by Lucan_UK ( 745708 ) <.nick.oram. .at. .ndr.co.uk.> on Tuesday November 08, 2005 @10:38AM (#13978888) Journal
    The last one on the list is this

    "Multidata's software allows a radiation therapist to draw on a computer screen the placement of metal shields called "blocks" designed to protect healthy tissue from the radiation. But the software will only allow technicians to use four shielding blocks, and the Panamanian doctors wish to use five.

    The doctors discover that they can trick the software by drawing all five blocks as a single large block with a hole in the middle. What the doctors don't realize is that the Multidata software gives different answers in this configuration depending on how the hole is drawn: draw it in one direction and the correct dose is calculated, draw in another direction and the software recommends twice the necessary exposure.

    At least eight patients die, while another 20 receive overdoses likely to cause significant health problems. The physicians, who were legally required to double-check the computer's calculations by hand, are indicted for murder." ... To me that sounds like a user not using the software correctly.
    • Why not both? (Score:5, Insightful)

      by brouski ( 827510 ) on Tuesday November 08, 2005 @10:48AM (#13978969)
      I've read about this instance before, and I think it's attributable to ignorance on the part of both the user and the developer. The software developer in this case knows the life of a human being is resting on his code, so it should have been nigh impossible to "trick" the software into allowing anything other than what the specs said it could do.
  • The doctors wanted to trick the software. But then the software didn't work as intended. A really unexpected outcome, really :P
  • I didn't know that the Pentium FP bug was a software bug.



    Oh wait, it wasn't ...

  • Engineers so good they had to steal their pipeline control software [msn.com]. And, apparently, a ton of other Western engineering too.
  • Why do they have the Intel Pentium floating point divide error listed as a bug? That was a hardware design error in the circuit, it was not a software bug. Of course it caused software to behave unexpectedly, but still I'm surprised that Wired put that one in there.
    • by gorim ( 700913 ) on Tuesday November 08, 2005 @10:57AM (#13979049)
      Because it was actually implemented as microcode stored in the CPU, whether as mask ROM or some other means of storage -- but it was indeed software either way you look at it.
  • by cytoman ( 792326 ) on Tuesday November 08, 2005 @10:41AM (#13978914)

    July 28, 1962 -- Mariner I space probe. A bug in the flight software for the Mariner 1 [wikipedia.org] causes the rocket to divert from its intended path on launch. Mission control destroys the rocket over the Atlantic Ocean. The investigation into the accident discovers that a formula written on paper in pencil was improperly transcribed into computer code, causing the computer to miscalculate the rocket's trajectory.

    1982 -- Soviet gas pipeline. Operatives working for the U.S. Central Intelligence Agency allegedly [loyola.edu] (.pdf) plant a bug in a Canadian computer system purchased to control the trans-Siberian gas pipeline. The Soviets had obtained the system as part of a wide-ranging effort to covertly purchase or steal sensitive U.S. technology. The CIA reportedly found out about the program and decided to make it backfire [msn.com] with equipment that would pass Soviet inspection and then fail once in operation. The resulting event is reportedly the largest non-nuclear explosion in the planet's history.

    1985-1987 -- Therac-25 medical accelerator. A radiation therapy device malfunctions and delivers lethal radiation doses at several medical facilities. Based upon a previous design, the Therac-25 [wikipedia.org] was an "improved" therapy system that could deliver two different kinds of radiation: either a low-power electron beam (beta particles) or X-rays. The Therac-25's X-rays were generated by smashing high-power electrons into a metal target positioned between the electron gun and the patient. A second "improvement" was the replacement of the older Therac-20's electromechanical safety interlocks with software control, a decision made because software was perceived to be more reliable.

    What engineers didn't know was that both the 20 and the 25 were built upon an operating system that had been kludged together by a programmer with no formal training. Because of a subtle bug called a "race condition [wikipedia.org]," a quick-fingered typist could accidentally configure the Therac-25 so the electron beam would fire in high-power mode but with the metal X-ray target out of position. At least five patients die; others are seriously injured.

    1988 -- Buffer overflow in Berkeley Unix finger daemon. The first internet worm (the so-called Morris Worm [eweek.com]) infects between 2,000 and 6,000 computers in less than a day by taking advantage of a buffer overflow. The specific code is a function in the standard input/output library routine called gets() [apple.com] designed to get a line of text over the network. Unfortunately, gets() has no provision to limit its input, and an overly large input allows the worm to take over any machine to which it can connect.

    Programmers respond by attempting to stamp out the gets() function in working code, but they refuse to remove it from the C programming language's standard input/output library, where it remains to this day.

    1988-1996 -- Kerberos Random Number Generator. The authors of the Kerberos security system neglect to properly "seed" the program's random number generator with a truly random seed. As a result [psu.edu], for eight years it is possible to trivially break into any computer that relies on Kerberos for authentication. It is unknown if this bug was ever actually exploited.

    January 15, 1990 -- ATT Network Outage. A bug in a new release of the software that controls ATT's #4ESS long distance switches causes these mammoth computers to crash when they receive a specif
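
    The gets() flaw quoted above is easy to demonstrate. A minimal C sketch, illustrative only and not the fingerd source: gets() has no way to learn the buffer's size, while fgets() takes it as a parameter.

    #include <stdio.h>

    int main(void)
    {
        char buf[16];

        /* Unsafe: gets(buf) cannot be told how big buf is; a long
           input line overruns the stack -- the fingerd flaw.
           (Commented out; gets() was finally removed in C11.)
        gets(buf);
        */

        /* Safe replacement: a length-limited read. */
        if (fgets(buf, sizeof buf, stdin) != NULL)
            printf("read: %s", buf);
        return 0;
    }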

  • by Lead Butthead ( 321013 ) on Tuesday November 08, 2005 @10:42AM (#13978923) Journal
    Something about their latest toy... ahem, ship... that had to be towed back to port because the Windows NT system they used to run everything on the ship kept blue-screening.
  • by technoextreme ( 885694 ) on Tuesday November 08, 2005 @10:46AM (#13978956)
    1988-1996 -- Kerberos Random Number Generator. The authors of the Kerberos security system neglect to properly "seed" the program's random number generator with a truly random seed. As a result, for eight years it is possible to trivially break into any computer that relies on Kerberos for authentication. It is unknown if this bug was ever actually exploited.

    Hehehe.... This reminds me of a Dilbert cartoon. Here is what I can remember:
    Some guy: And here is our random number generator.
    Another guy: 2 2 2 2 2 2 2 2 2 2 2 2.
    Dilbert: That isn't very random though.
    Some guy: He is randomly getting the same number.
    Anyone actually know which comic I am thinking of?
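
    A minimal C sketch of the seeding mistake described above -- illustrative only, not the Kerberos code, and rand() itself is not cryptographic in any case. The point is that a generator seeded from a guessable value produces guessable output.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* Weak: the current second is guessable, so the whole
           sequence of "random" values is guessable too. */
        srand((unsigned)time(NULL));
        printf("guessable: %d\n", rand());

        /* Better on POSIX systems: seed from the kernel entropy pool. */
        unsigned seed;
        FILE *f = fopen("/dev/urandom", "rb");
        if (f != NULL && fread(&seed, sizeof seed, 1, f) == 1) {
            srand(seed);
            printf("harder to guess: %d\n", rand());
        }
        if (f != NULL)
            fclose(f);
        return 0;
    }
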
  • Airbus Crash (Score:5, Informative)

    by CruddyBuddy ( 918901 ) on Tuesday November 08, 2005 @10:56AM (#13979043)
    Here is video of an Airbus crashing into the trees because the autopilot didn't like the landing conditions. IIRC, the pilot's pull-up was ignored because the flight conditions weren't optimal, despite an obvious life-threatening situation. If this isn't a software bug, what would you call it? (Maybe the software considered crash modes and this configuration allowed the black box to survive intact.)

    http://www.alexisparkinn.com/photogallery/Videos/A irbus320_trees.mpg/ [alexisparkinn.com]

    (Let the slashdotting begin! (poor servers))

    All things considered, I don't know if the pilots survived.

    • Re:Airbus Crash (Score:5, Informative)

      by be-fan ( 61476 ) on Tuesday November 08, 2005 @11:20AM (#13979255)
      I actually know why this happened. We learned about it in our flight dynamics class. The problem was the result of a mismatch between what the pilot thought the airplane was doing and what it was actually doing. The A320 had software that prevented the pilot from stalling the airplane during flight. However, the protection only kicked in above 90', because the software assumed that if you were below that, you wanted to land (which involves a stall right at touchdown). The pilot was trying to do a flyby and was supposed to be above 100', but for whatever reason he came in at around 30'. Now, the reasons he didn't pull up and ramp up the engines are debatable, but the charitable explanations suggest that he assumed the airplane's stall protection would kick in, while the airplane had disabled it because it thought it was about to land.
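
      A hypothetical C sketch of that mode logic -- the threshold name and numbers are assumptions taken from the comment above, not actual A320 code. It shows how a single altitude test can make the pilot's mental model and the airplane's mode disagree.

      #include <stdbool.h>
      #include <stdio.h>

      /* Assumed threshold from the comment above -- not a real A320 value. */
      #define PROTECTION_FLOOR_FT 90.0

      static bool stall_protection_active(double radar_alt_ft)
      {
          /* Below the floor the software assumes a landing flare, which
             legitimately approaches a stall, so it stands down. */
          return radar_alt_ft > PROTECTION_FLOOR_FT;
      }

      int main(void)
      {
          printf("at 100 ft: %s\n", stall_protection_active(100.0) ? "on" : "off");
          printf("at  30 ft: %s\n", stall_protection_active(30.0) ? "on" : "off");
          return 0;
      }
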
  • by SysKoll ( 48967 ) on Tuesday November 08, 2005 @10:57AM (#13979050)
    The Wired article perpetuates the same old tiresome mistake, that is, that the term "bug" originated from a moth found in a 1947 computer.

    That is wrong. This is a myth that has been disproved several times. See for example the "IEEE Annals of the History of Computing," where Adm. Grace Hopper said that the term "bug" was used at least since the '30s, and maybe earlier, to describe an electrical problem in a system. See also here. [faqs.org]

    In interviews, Hopper confirmed that the notebook moth's caption, "First actual case of bug being found," clearly shows that it was a joke referring to a term that was already in use at the time.

    Any idiot researching this anecdote for five minutes could have found this out. I guess Wired couldn't be bothered. At this level of laziness and incompetence, one wonders why they don't just start publishing printouts of Slashdot laced with ads. At least this place contains occasional nuggets of truth.

    Once again, Wired blew it. Nice job, guys.

    • I hate to interrupt your rant, but... the Wired article doesn't say that the term "bug" originated in 1947. It merely notes that the first widely known "buggy computer" was the Harvard Mark I:

      With that recall, the Prius joined the ranks of the buggy computer -- a club that began in 1947 when engineers found a moth in Panel F, Relay #70 of the Harvard Mark 1 system. The computer was running a test of its multiplier and adder when the engineers noticed something was wrong. The moth was trapped, removed an

  • Medical Systems (Score:5, Interesting)

    by koehn ( 575405 ) * on Tuesday November 08, 2005 @11:10AM (#13979162)
    I designed and built a diagnostic radiology workstation (in 1997, in Java 1.1, 4x5 megapixel monitors, still in use today). During the development effort we were regaled with stories of software glitches in medical systems resulting in disaster. It really keeps you focused.

    In one case, a radiation treatment system had a bug where if you used the backspace key when entering the dose a patient received, the display would show the last digit deleted, but internally it hadn't been. So the patient would receive 10^(number of backspaces) times the intended dose of radiation. Not a big deal normally, since the techs would typically shut the machine off between treatments -- until one day when they had two patients needing treatment back to back. The tech knew something was wrong when the machine ran for an unusually long time. The patient knew something was wrong when he died.

    On our team a defect that crashed the system was considered severity 2. Severity 1 was reserved for defects that could result in a mis-diagnosis, which most patients agree is worse than a crash.
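
    A hypothetical C sketch of that class of bug -- not the vendor's code. The display buffer and the internal dose diverge because the backspace handler edits only one of them:

    #include <stdio.h>
    #include <string.h>

    static char display[16] = "100"; /* what the tech sees on screen */
    static long dose = 100;          /* what the machine actually uses */

    static void on_backspace(void)
    {
        size_t n = strlen(display);
        if (n > 0)
            display[n - 1] = '\0';   /* screen now shows "10"... */
        /* BUG: the internal value is never re-parsed from the display,
           so the machine still believes the dose is 100. */
    }

    int main(void)
    {
        on_backspace();
        printf("display: %s   internal dose: %ld\n", display, dose);
        return 0;
    }
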
  • by Phanatic1a ( 413374 ) on Tuesday November 08, 2005 @11:17AM (#13979215)
    For potential severity, this one [wired.com]'s worse than a few they listed.

    Basically, the Navy was running critical ship systems on a Windows NT platform, and a divide-by-zero in a database caused a buffer overrun that resulted in a shutdown of the engines, leaving the ship dead in the water for 2.5 hours.

    Fortunately, it was on maneuvers off of Cape Charles, and not at war off the coast of Yemen or something. Scratch a billion-dollar destroyer and most of her crew because of an NT bug, in that case.
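
    A minimal C sketch of the failure class -- illustrative only; the actual system was an NT database application. An unchecked zero in one record can take a whole process down, so the denominator has to be validated before dividing:

    #include <stdio.h>

    static int safe_rate(int fuel_used, int hours)
    {
        if (hours == 0) {            /* validate before dividing */
            fprintf(stderr, "bad record: hours == 0\n");
            return 0;
        }
        return fuel_used / hours;
    }

    int main(void)
    {
        /* An unchecked "fuel_used / hours" with hours == 0 is undefined
           behavior in C -- on most hardware, a fatal trap (SIGFPE). */
        printf("rate: %d\n", safe_rate(100, 0));
        return 0;
    }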

  • by randomErr ( 172078 ) <ervin,kosch&gmail,com> on Tuesday November 08, 2005 @11:29AM (#13979371) Journal
    2004 Luxembourg blackout
    Patriot Missile - Missiles had to be shut down once a day because the targeting system would cycle every minute and shift the internal coordinate system by a fraction of a degree. Over the course of a few days the targeting system would become completely useless.
    PS/2 shutdown bug - Analog copiers of the era had fuser components that operated at the same frequency as the processor's shutdown signal.
    Minus World - Super Mario Bros. - a hidden water-level glitch
    Ermac - Mortal Kombat
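
    The Patriot drift is a classic accumulation error. A minimal C sketch in the same spirit (illustrative numbers; the real system used 24-bit fixed point rather than float): 0.1 has no exact binary representation, so repeatedly adding a rounded tenth of a second drifts further the longer the system stays up.

    #include <stdio.h>

    int main(void)
    {
        /* 100 hours of uptime at 10 clock ticks per second. */
        long ticks = 100L * 3600L * 10L;
        float t = 0.0f;

        for (long i = 0; i < ticks; i++)
            t += 0.1f;                      /* each add rounds a little */

        double exact = (double)ticks * 0.1; /* 360000 seconds */
        printf("accumulated: %.3f s\n", (double)t);
        printf("exact:       %.3f s\n", exact);
        printf("drift:       %.3f s\n", exact - (double)t);
        return 0;
    }
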
  • by edunbar93 ( 141167 ) on Tuesday November 08, 2005 @11:33AM (#13979426)
    Why isn't Outlook Express in here? Early versions basically changed unopened e-mail viruses from a hoax to reality, when Microsoft decided it was a *good* idea to automatically run any VBScript that was received. That's cluelessness like trusting everyone to be good and decent human beings while you walk through a prison shower with "Please rape me" painted on your back.

    Later versions tried to fix the problem while keeping the functionality, as if somehow the bad guys would intentionally include the Evil Bit in their code.
    • Later versions tried to fix the problem while keeping the functionality, as if somehow the bad guys would intentionally include the Evil Bit in their code.

      If the newer build didn't contain the same functionality, then nobody would upgrade their software. Outlook Express has also served to reinforce the idea that this functionality should exist and be activated by default in all modern e-mail clients. If you were to install a different e-mail client -- Thunderbird, for instance -- on a computer belonging to
  • Not a bug (Score:5, Informative)

    by stlhawkeye ( 868951 ) on Tuesday November 08, 2005 @11:44AM (#13979557) Homepage Journal
    In a series of accidents, therapy planning software created by Multidata Systems International, a U.S. firm, miscalculates the proper dosage of radiation for patients undergoing radiation therapy.

    I used to work with the lead programmer on this software package from Multidata. We worked together at two different companies for a total of about four years.

    Multidata's software allows a radiation therapist to draw on a computer screen the placement of metal shields called "blocks" designed to protect healthy tissue from the radiation. But the software will only allow technicians to use four shielding blocks, and the Panamanian doctors wish to use five.

    This is also made very clear in the documentation. This isn't a bug at all; the dosimetrists misused the software.

    The doctors discover that they can trick the software by drawing all five blocks as a single large block with a hole in the middle. What the doctors don't realize is that the Multidata software gives different answers in this configuration depending on how the hole is drawn: draw it in one direction and the correct dose is calculated, draw in another direction and the software recommends twice the necessary exposure.

    Exactly. They tried to create a feature that the software did not support, and they did so in a manner that broke the software.

    At least eight patients die, while another 20 receive overdoses likely to cause significant health problems. The physicians, who were legally required to double-check the computer's calculations by hand, are indicted for murder.

    It's not a software bug, it's a user error. This isn't a bug any more than it's a "bug" that your Linux box stops working properly if you do sudo rm -rf /. The users of the product knew better.

    To be fair, Multidata was not a great shop from a procedural standpoint -- the guy who ran it was insane -- but the software was rock solid. I actually worked with a number of former Multidata employees who jumped ship and went to a rival shop that builds similar software, and they were all fairly competent and intelligent.

    • Re:Not a bug (Score:3, Insightful)

      by JMan1 ( 200342 )
      Exactly. They tried to create a feature that the software did not support, and they did so in a manner that broke the software.

      Except that the software didn't break well. It should have either reported that the action wasn't allowed or calculated correctly. It shouldn't look like it's working but give erroneous results. If a single block with a hole isn't supported, why are you allowed to select it?

    • Re:Not a bug (Score:5, Insightful)

      by bit01 ( 644603 ) on Tuesday November 08, 2005 @12:42PM (#13980100)

      It's not a software bug, it's a user error.

      It's both. The program should not have accepted easily recognised invalid input and the user should not have entered it.

      I don't care if it's not in the spec; it's commonly accepted programming practice that all input should be bounds-checked, and any program that doesn't do that is crap.

      Your rm example is not equivalent, as command-line programs are flexible by design; in unusual circumstances that may be exactly what the operator wants to do.

      ---

      Keep your options open!
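
      A minimal C sketch of the bounds checking being argued for here (hypothetical interface, not Multidata's code): reject out-of-range input loudly instead of computing a dose from it.

      #include <stdio.h>

      #define MAX_BLOCKS 4 /* the documented limit from the article */

      static int set_block_count(int n)
      {
          if (n < 0 || n > MAX_BLOCKS) {
              fprintf(stderr, "error: %d blocks unsupported (max %d)\n",
                      n, MAX_BLOCKS);
              return -1;   /* refuse loudly rather than miscalculate */
          }
          printf("using %d blocks\n", n);
          return 0;
      }

      int main(void)
      {
          set_block_count(5); /* the Panama case: must fail, not guess */
          return 0;
      }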

    • Re:Not a bug (Score:3, Insightful)

      by frankie ( 91710 )
      It's not a software bug, it's a user error.

      No. When you design software that is explicitly intended to perform potentially lethal actions on human beings, you absolutely make sure it's foolproof. You do input validation at every freaking step, then double-check the result before you pull the trigger.

      If I go in for LASIK and get my retina burned off because some technician turned the wrong dial up to 11, you bet your ass I'm suing the manufacturer right along side the clinic. It should not be possible fo

    • Re:Not a bug (Score:4, Insightful)

      by barole ( 35839 ) on Tuesday November 08, 2005 @01:36PM (#13980598)
      I develop medical software for a living and this is the scariest thing I have ever read.

      Your example is incomplete. Imagine that you type "rm -rf / junk" and the system responds "Delete /junk?", so you answer "Y" and it then deletes the whole filesystem.

      It is most certainly a bug. First, there is a mismatch between what is shown on the screen and what the system is doing. That is a bug by any definition. Second, the system obviously had gaps in its validation of input. This makes it no less of a bug than many of the others listed (e.g., the fingerd bug).

      Furthermore, it is the responsibility of designers and developers of medical software to ensure that potential hazards are identified and mitigated. A hazard of "calculated dose does not match image shown on screen" is not some obscure hazard that no one would have thought of - it is the first that comes to mind!

      Please tell me that these people are not involved in medical software anymore.

  • by darthnoodles ( 831210 ) on Tuesday November 08, 2005 @11:45AM (#13979565)
    en.wikipedia.org/wiki/2003_North_America_blackout [wikipedia.org]

    From Wiki page:

    It also found that FirstEnergy did not take remedial action or warn other control centers until it was too late because of a bug in the Unix-based General Electric Energy's XA/21 system that prevented alarms from showing on their control system, and they had inadequate staff to detect and correct the software bug. The cascading effect that resulted ultimately forced the shutdown of more than 100 power plants.

  • by Iphtashu Fitz ( 263795 ) on Tuesday November 08, 2005 @11:54AM (#13979653)
    My dad tells this story from time to time. I don't know if it's true, but it makes a good story. Back in the early days of computers, when only big corporations had them, most software was written in-house by staff programmers. One of the major soda manufacturers had a new mainframe and had one of their top programmers write an accounting package for them. It so happens that the manufacturer was a major competitor of 7-Up. Well, for whatever reason, the programmer left the company on not-too-good terms. The very next time the manufacturer went to print a report from the accounting package, every 7th page contained the phrase "Drink 7-Up" in big block letters. They had their remaining programmers go back through the code and try to remove this new "feature," but they were unable to. This guy was so good that he'd embedded the logic for this nastygram right into the actual logic of the accounting package. Supposedly there was code that would dynamically generate other instructions that, when executed, would generate other instructions, and so on. They were supposedly unable to get rid of the 7-Up message without breaking other parts of the program, so they ended up having to go back to square one and write a whole new accounting package.

    So the story goes...
  • by douthat ( 568842 ) on Tuesday November 08, 2005 @12:02PM (#13979729)
    I think the two worst computer bugs of all time are the two that quite possibly could have wiped us all out. More information here. [wikipedia.org]

    (Copied from the article:)
            * November 9, 1979, when the US made emergency retaliation preparations after NORAD saw on-screen indications that a full-scale Soviet attack had been launched. No attempt was made to use the "red telephone" hotline to clarify the situation with the USSR and it was not until early-warning radar systems confirmed no such launch had taken place that NORAD realised that a computer system test had caused the display errors. A Senator at NORAD at the time described an atmosphere of absolute panic. A GAO investigation led to the construction of an off-site test facility, to prevent similar mistakes subsequently. A fictionalized version of this incident was filmed as the movie WarGames, in which the test system is inadvertently triggered by a teenage hacker believing himself to be playing a video game.

            * September 26, 1983, when Soviet military officer Stanislav Petrov refused to launch ICBMs, despite computer indications that the US had already launched.

            If it weren't for two humans who said "fuck what the computer says!", we might be in a very different place right now.
    • If it weren't for two humans who said "fuck what the computer says!", we might be in a very different place right now.

      I guess that is why they were there.

      Computers are excellent at performing according to the logic that is programmed into them. For the most part, they cannot "think" or take a step back and say, "I'm sure I did everything right, but something still looks wrong". I used to put on my math tests something like, "I know this is not the right answer, but here is my work". To me, that is much mo
  • Y2K (Score:3, Insightful)

    by Dausha ( 546002 ) on Tuesday November 08, 2005 @12:10PM (#13979814) Homepage
    What about the Y2K bug? I believe that had a greater economic impact than many of the other "worst."
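
    A minimal C sketch of the Y2K failure mode (illustrative only): with two-digit years, 2000 ("00") sorts before 1999 ("99"), so interval arithmetic goes negative. Pivot-year windowing was a common remediation.

    #include <stdio.h>

    int main(void)
    {
        int opened = 99; /* account opened in 1999, stored as "99" */
        int now    = 0;  /* the year 2000, stored as "00" */

        printf("naive account age: %d years\n", now - opened); /* -99 */

        /* Common remediation: a pivot-year window. */
        int full = (now < 50) ? 2000 + now : 1900 + now;
        printf("windowed year: %d\n", full);
        return 0;
    }
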
  • by AviLazar ( 741826 ) on Tuesday November 08, 2005 @01:55PM (#13980805) Journal
    Some of the bugs they listed are not truly bugs.
    Soviet Gas Pipeline... This was a desired feature working just as intended (unless the CIA didn't want to blow up the pipeline).

    Buffer overflow in Berkeley - a worm is not a bug. It is a program designed to infiltrate a system and do something. While the people using the program may not have intended this to happen (duh), the makers of the worm did.

    A bug is an unwanted aspect of the code as implemented by the people who wrote (or edited) the code, but this does not include something affected by a virus/worm. A program that crashes every six minutes for no apparent or intended reason has a bug; a program that gets infected by a virus which causes it to crash every six minutes does not. Also, a piece of code that is intentionally inserted in the hopes of crashing a system is not a bug... it is a feature. It may be undesirable, but it is a feature.
  • by Khopesh ( 112447 ) on Tuesday November 08, 2005 @02:09PM (#13980942) Homepage Journal
    My favorite bug involved a computer chip in the US surveillance system watching Soviet Russia's missile silos. Basically, some early-warning system stated that Russia had launched something like 2222222 missiles from every source they had. (I'm not sure of the actual number, but it only contained 2's.)

    Some person down the line noticed that the Russians didn't have that many missiles, couldn't have launched them all with such synchronization, and that there were an awful lot of twos in the report... actually, every digit of every number was a two. It turned out to be a fried chip somewhere, always pumping out the same bit regardless of input (I have no understanding of the technical side of the issue; maybe it hit the 32-bit limit and the int-to-string function reacted with 2's).

    Good thing we were not too automated, and that we employed somebody smart enough to critically examine his printouts.

    Disclaimer: this is a favorite tidbit of one of my professors... I have no real source to refer to.
