History's Worst Software Bugs
bharatm writes "Wired has an article on the 10 worst software bugs. From the article: 'Coding errors have sparked explosions, crippled interplanetary probes -- even killed people. Here's our pick for the 10 worst bugs ever, but the judging wasn't easy.'"
Predictions are hard (Score:4, Interesting)
license to own a compiler, for instance.) I thought that the event that would inevitably trigger this is when a software
bug caused a human death.
I still believe that programming will eventually require a license, but I now think that lobbying by the big media
companies will be the cause. Depressing, huh?
You get what you pay for (Score:2, Insightful)
Re:You get what you pay for (Score:3, Interesting)
This also requires more than the current courses, which are pretty much starter-level. It is sad that after just a few days of working with a language before a course, you will already find mistakes/bugs, or simply better ways of doing things than what the course promotes.
For example, after a 3-day crash course (I missed day 1, else it would have been 4 days), I became a certified Stellent developer. So a "real" test at the end t
Re:You get what you pay for NONSENSE (Score:5, Insightful)
However, you have been fooled. The parent comment is completely at odds with the article.
The article shows largely a series of examples where you DID have HIGHLY PAID and HIGHLY trained professionals with plenty of experience and oversight, but nevertheless very significant bugs occurred. So, the real lesson from this article is not "you get what you pay for," but rather that "software development is very hard" and perhaps that "by nature of its hardness, we can expect critical flaws to pop up from time to time, even when highly trained, experienced, and monitored programmers are involved."
Management problems (Score:5, Insightful)
Some of the bugs reported in the story were not so much the fault of programmers, but of management. The phone network bug was a misplaced { character in a nested if-else construct. The code had already been through extensive testing, and then a small change was needed. Because it was a "minor" change, someone said it didn't need to go through the extensive (expensive) testing again. It's always easy to point at the code or the guy who wrote it. Especially when the boss is the one tasked with finding out what went wrong.
Re:Management problems (Score:3, Informative)
Is that what it was? I thought I'd heard that the AT&T outage was from a missing break; in a switch-case statement.
I found that more believable, because a missing { would cause a compiler error, where a missing break; is a valid way to purposely fall into the next case.
Though, really, I suspect both of us are just repeating rumors we heard.
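The grandparent's reasoning about which rumor is more believable can be made concrete. A stray { in a nested if-else generally won't compile, whereas a missing break; in a switch is perfectly legal C and silently changes behavior. A minimal sketch (the event codes and handler are made up, not the actual #4ESS code):

```c
/* Hypothetical message handler. The break after case 1 has been
 * omitted -- legal C, and case 1 silently falls through into case 2,
 * so event 1 gets event 2's action. No compiler error results. */
int classify(int event) {
    int action = 0;
    switch (event) {
    case 1:
        action = 10;
        /* break;   <-- the missing statement: fall-through is valid C */
    case 2:
        action = 20;
        break;
    default:
        action = -1;
        break;
    }
    return action;
}
```

Here classify(1) returns 20, not 10, and nothing but testing (or a sharp-eyed reviewer) would catch it.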
Re:You get what you pay for NONSENSE (Score:5, Insightful)
It doesn't matter how highly paid and trained your professionals are if the environment that produces the software is not conducive to eliminating these types of flaws: if they are not given enough resources to test and QA the projects they are assigned, if there is no organizational commitment to take the time and expense to document properly, or if leadership overrides technical objections to project timeframes, etc. Most of the cited projects could probably be classified as failures of project management rather than failures of the end product (the software) that these flawed projects produced. Yes, software is hard and the software profession should continue its efforts to improve quality, but that doesn't let the organizational culture, leadership and processes that produced the software in these cases off the hook.
Why is it when the accounting profession makes spectacular mistakes that take down entire Fortune 500 class organizations, there is a critical analysis of the processes that led to these failures, and remedies often comprise prescriptive measures for these processes, but similar analysis for software failures focus upon the software flaw but not the environment that allowed the flaw to emerge? Now sometimes the remedy in the accounting case might not make complete sense (like SOX), but the point here is people don't look at just the end result (the accounting system transactions) of the accounting process.
Re:You get what you pay for NONSENSE (Score:4, Insightful)
1. Design reviews, by peers and independents
2. Code reviews, by peers and independents
3. Regular, organized unit testing
4. Correctness proving
5. Documentation in about a bazillion forms
6. Defect tracking
7. Effective software process metrics measurement and improvement
8. Continuing education
9. Humility / egoless programming
This list was assembled in about a minute off the top of my head. I work in a CMM3/4 type organization, and although there are processes for these things, most people don't use them, or consider them a hassle.
So my point is, the parent is right -- creating good software, even when done by properly trained experts with great experience -- is hard. But the grandparent is right too -- doing all of the above to 'do it right' takes time and money, and many organizations, and by this I mean software process management as well as the actual engineers, don't understand the value / aren't willing to pay for or aren't willing to do all that work. And occasionally, as the article shows, the piper comes and takes his payment.
Re:You get what you pay for NONSENSE (Score:3, Informative)
Item 0: Requirements reviews by peers and independents. If you don't have good requirements, you obviously don't know things well enough to be building them. Sure, you can catch some requirements issues in 1 and 2, but the longer you wait, the costlier it is to fix.
An MSCS is NOT a Software Engineering degree, so why WOULD you take courses in SE? I'd say that CS and SE are two different professions. There are places to get an MS SE (Texas Tech comes to mind) if you are interested.
Re:Predictions are hard (Score:5, Insightful)
This is like saying you need a license to operate a soda vending machine because some idiot decided tipping it over trying to get a free soda was a smart idea. You might have to put warnings on compilers like "do not code if you have no clue what you are doing," etc., but requiring a license won't ever happen. I am sure there will be lawsuits in the future regarding software bugs, but any software being used where an error could cause a human death is going to have a corporation behind it that can be held responsible.
Re:Predictions are hard (Score:5, Insightful)
Re:Predictions are hard (Score:5, Informative)
We've all seen it: the employee who's convinced she's doing a great job and gets a mediocre performance appraisal, or the student who's sure he's aced an exam and winds up with a D.
The tendency that people have to overrate their abilities fascinates Cornell University social psychologist David Dunning, PhD. "People overestimate themselves," he says, "but more than that, they really seem to believe it. I've been trying to figure out where that certainty of belief comes from."
Dunning is doing that through a series of manipulated studies, mostly with students at Cornell. He's finding that the least competent performers inflate their abilities the most; that the reason for the overinflation seems to be ignorance, not arrogance; and that chronic self-beliefs, however inaccurate, underlie both people's over and underestimations of how well they're doing.
Re:Predictions are hard (Score:5, Interesting)
In the words of the old chestnut, "If you're calm and confident when everyone around you is running around in blind panic, you clearly don't understand the situation."
Re:Predictions are hard (Score:4, Funny)
I'll bet the guy just LOVES the first few installments each season of American Idol.
Re:The more I learn, the less I know (Score:3, Insightful)
1. Knowledge you have that you are aware of
2. Knowledge you have that you are ignorant of
3. Knowledge you are aware you are ignorant of
4. Knowledge you are not aware you are ignorant of
So, as you move knowledge from the other areas into area 1, you tend to pull things "up" if you will. Knowledge moves from 4 to 3 as well.
2 isn't a contradiction, just that you might not be aware that some
Re:Whoops forgot to hit preview (Score:4, Informative)
The problem is that most companies producing software do not want to pay for an engineer to oversee their project. Also, the way most software operations are run, you wouldn't see an engineer signing off on the projects. The engineer would force things to be much more thoroughly tested in order to ensure they were actually worthy of being signed off on. There is lots of this kind of software being built for planes and other situations where it really matters if there are bugs. I don't think this kind of situation will ever happen with off-the-shelf software. For one thing, software would cost too much, and most people aren't willing to pay $2000 to run an operating system on their home computer; also, most engineers wouldn't sign off on a system in which they didn't know the computer their software would run under. There are too many variables on a home computer to be able to guarantee, at that level, that your software will operate completely as expected.
Re:Whoops forgot to hit preview (Score:5, Insightful)
Back to what you were saying, if you have a system that could cause damage or whatever, then you start by writing your output routines, and you create rules to govern the machine (i.e. outputs A and B can't come on at the same time, or output C can't exceed this value). Then you write another module that monitors the inputs AND outputs looking for fault conditions that shuts down the machine if you do anything dangerous. Only this part of the code needs to be signed off by an engineer. Typically it's simple code, and easy to prove correct, with peer review. Then you write other modules that essentially make requests through the safety checks to do anything. You don't have to review the complex other code so much, because your output stage should catch any mistakes.
That's how you make a machine safe. Unfortunately, most engineers I know just go out and write the software figuring there's no difference, and that's how bad things happen. It comes from believing you won't make a mistake, or believing that testing will catch all problems. If you plan from the start that you're going to be making mistakes, you can catch them before damage is done. It's too bad this isn't taught, even in the software engineering classes I took at a Canadian university.
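The pattern the parent describes (a small, independently reviewable safety layer that every output request passes through) might look roughly like this. The rules, limits, and names are illustrative, not from any real machine:

```c
#include <stdbool.h>

/* Illustrative interlock rules: outputs A and B are mutually
 * exclusive, and output C is bounded. This module is deliberately
 * tiny so it can be peer-reviewed and proven correct in isolation;
 * the complex control code elsewhere can only act through it. */
#define C_MAX 100

typedef struct {
    bool a_on, b_on;  /* requested states of outputs A and B */
    int  c_level;     /* requested level of output C */
} outputs_t;

/* Returns true only if the requested output state is safe to apply;
 * callers must refuse (or shut down) on a false result. */
bool check_request(const outputs_t *req) {
    if (req->a_on && req->b_on) return false;  /* A and B can't both be on */
    if (req->c_level > C_MAX)   return false;  /* C exceeds its limit */
    if (req->c_level < 0)       return false;  /* malformed request */
    return true;
}
```

The point of the design is that a bug in the (unreviewed, complicated) request-generating code produces a refused request rather than a dangerous output.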
Re:Whoops forgot to hit preview (Score:3, Insightful)
Re:Whoops forgot to hit preview (Score:3, Insightful)
True, it's just that in my line of work, off is usually the safe state, but what should be done is to go to some kind of safe state, whatever that may be. Sometimes you revert to a manual operation, for instance.
Also, even good inputs can result in the machine not doing what it needs to do.
Which is why you need to also hire a mechanical and an electrical engineer to design those aspe
Re:Whoops forgot to hit preview (Score:4, Informative)
Actually, you're confusing the title "P.E." (professional engineer) with the generally accepted term "engineer." One (the P.E.) is a licensed engineer, and others are used traditionally and arbitrarily with no legal recourse. For example, I and my co-workers are bona fide engineers, and most of us have engineering or engineering technology degrees. None of what we do requires a P.E. to sign off on anything, although there are other aspects of our business (and many other businesses) that do require a P.E.
Of course, there are all kinds of "engineers" that have that title but don't truly merit it -- customer service engineer; field service engineer; applications engineer; and so on. Most of these don't hold engineering degrees. For many of them, I don't begrudge them their title, either. But we also know that they're not P.E.'s.
Re:And don't forget your roots... (Score:5, Interesting)
Re:And don't forget your roots... (Score:3, Informative)
Re:If Engineers built buildings (Score:5, Insightful)
If engineers built buildings the way computer programmers wrote programs, an average engineer would be able to build an array of radio telescopes by himself in one evening. A team of 30 engineers would be able to build a ringworld in 3 months.
i.e. it would be nice if software were like designing an office, where there were 3 architects, 5 engineers, a building inspector, and 50 professional workmen to examine a system containing just a few hundred variables, and almost identical to the last 20 buildings they'd constructed.
And in case that didn't start a flamewar, how about...
"Just one unexpected input (of an aeroplane) caused the failure of two of New York's biggest civil engineering projects -- imagine how they'd cope with being attacked every 3 seconds like some internet software"
Re:Predictions are hard (Score:3, Interesting)
Does it count when someone puts some HTML in a blog? What about Javascript? a DIY PHP site? a batchfile or shell script? Excel function/macro?
Do you only want to license compilers? How do I install my OSS? What about the power of interpreted or JIT languages? So much can be done with uncompiled code.
Licensing - ACM Position (Score:3, Interesting)
Guess what: They don't, [acm.org] although they appear to be hedgi [acm.org]
only 10? (Score:4, Insightful)
anyone think of any others?
Re:only 10? (Score:2)
Re:only 10? (Score:2)
Re:only 10? (Score:5, Insightful)
That was a trojan. It was a deliberate attack on their system by an enemy. It didn't even arrive via the now classical "worm" or "virus" route, which would have implied that a "bug let it in the door." No, this one was deliberately planted carefully at the root. It's not a bug, it was an attack.
Probably BS (Score:3, Informative)
Re:only 10? (Score:5, Insightful)
Actually, they didn't steal it--they bought it. From the Canadians. After we refused to sell it ourselves.
These days, the Soviets could probably just have filed an unfair restraint of trade complaint with the WTO!
Seriously, though, culpability here is convoluted. The Soviets had a legitimate need for this technology, and we said, "No, you can't have it." So they went to someone else to buy it, and we sabotaged it. And the only justification is that the Soviet Union was the "evil empire," which had to be destroyed no matter what.
Yeah, yeah, it was a "tense time," and "they wanted to bury us, too." But every time we talk about how capitalism beat communism because it is inherently better, we should remember all of these incidents which were expressly designed to choke out the Soviet state. Did it wither away because it was inefficient and inferior? Or because we had the strength at the time to hound it into oblivion?
Re:Goose and gander (Score:4, Informative)
Colin Powell's statement: "With respect to your earlier comments about Chile in the 1970s and what happened with Mr. Allende, it is not a part of American history that we're proud of."
Iran [wikipedia.org]
Guatemala [wikipedia.org]
Greece [wikipedia.org]
There's lots more where those came from -- all democratically elected too. I hope you survive the cognitive dissonance.
Re:only 10? (Score:3, Insightful)
Look at the article you just read (or probably didn't ;-). Absorb it. Consider that systems engineered by humans can contain flaws. A human won't shoot the postman or a paramedic arriving unannounce
Re:only 10? (Score:4, Insightful)
Re:only 10? (Score:5, Insightful)
The radiation bugs in both cases have killed fewer people than the shiteware used in the Patriot missile system. Ariane and Mariner get an honorable mention; Raytheon doesn't. Why?
There's also no mention of power grid bugs. The recent US blackout was a good example.
Re:only 10? (Score:4, Insightful)
Well, that makes me feel better (Score:4, Funny)
Caller: "My computer exploded and I'm bleeding profusely!"
911 Operator: "Must be a software bug."
hey, we're all still here (Score:4, Funny)
Re:hey, we're all still here (Score:2, Funny)
Um, what really big bug?
FROM: it_director@norad.mil
TO: president@whitehouse.gov
SUBJECT: New systems
Mr President, I'm pleased to report that the new national radar systems are fully tested and operational.
FROM: r.q.hacker@norad.mil
TO: it_director@norad.mil
SUBJECT: possible bug in calendar module
UNREAD
Hey, we may have a problem in the calendar system. I suspect there's a memory allocation issue here, we've been seeing occasional bugs in testing. Might
Moth. (Score:5, Interesting)
Why would they say that, if the term "bug" didn't exist? I mean, you wouldn't find a rat in your car and say "First actual case of a car 'rat' being found" if you didn't use it as a term to indicate something. You'd just say "this bug caused computing errors". I smell a car rat.
Re:Moth. (Score:2)
Re:Moth. (Score:5, Informative)
http://www.silicon.com/software/webservices/0,390
Bug or User error? (Score:5, Insightful)
"Multidata's software allows a radiation therapist to draw on a computer screen the placement of metal shields called "blocks" designed to protect healthy tissue from the radiation. But the software will only allow technicians to use four shielding blocks, and the Panamanian doctors wish to use five.
The doctors discover that they can trick the software by drawing all five blocks as a single large block with a hole in the middle. What the doctors don't realize is that the Multidata software gives different answers in this configuration depending on how the hole is drawn: draw it in one direction and the correct dose is calculated, draw in another direction and the software recommends twice the necessary exposure.
At least eight patients die, while another 20 receive overdoses likely to cause significant health problems. The physicians, who were legally required to double-check the computer's calculations by hand, are indicted for murder. "
Why not both? (Score:5, Insightful)
I liked the last one (Score:2)
Worst _software_ bugs, huh ? (Score:2, Redundant)
Oh wait, it wasn't
Re: (Score:3, Informative)
All hail the mighty Soviet engineering (Score:2)
Intel FP divide is -not- a software bug (Score:2, Insightful)
MOD PARENT BACK DOWN IT WAS A SOFTWARE BUG (Score:5, Informative)
Re:Intel FP divide is -not- a software bug (Score:5, Informative)
This table produces an estimate with 12 or so good bits of precision. Iterative refinement (typically microcoded) then produces the rest of the bits. After that the reciprocal is multiplied in, and you get the result.
More recently this has been somewhat exposed, as almost all modern processors have a reciprocal estimate instruction which executes in a single cycle. This is very useful if, for example, you want to normalize a bunch of normal vectors before passing them into the graphics pipeline. 12 bits is almost always enough for this purpose, and the reciprocal sqrt instruction is very much your friend here. So something that was dominated by the ~60 cycles of 1.0f/sqrt(sum_of_squares) becomes 1 cycle. Total speedup is about 10x -- and it's vectorizable -- the SSE unit will do a vector rsqrte.
My understanding of the pentium fdiv bug is that a section of the reciprocal estimate table had bad data in it.
This, in my opinion, counts as software, as would the microcode. If the bug had been in the multiplier, adder, or logic circuitry of the lookup table, then it would count as hardware.
Many, if not all the complex ciscy instructions are implemented in microcode -- so I believe that a bug in them would count as a software bug.
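The "iterative refinement" step mentioned above is typically Newton-Raphson iteration: starting from the table's ~12-bit estimate of 1/d, each step x' = x*(2 - d*x) roughly doubles the number of correct bits. A sketch (the starting estimate here is just a plausible number, standing in for a table lookup):

```c
/* Newton-Raphson refinement of a reciprocal estimate. Given a rough
 * guess x0 for 1/d, each iteration x = x*(2 - d*x) approximately
 * squares the relative error, so a 12-bit table entry reaches full
 * double precision in a handful of steps. */
double refine_recip(double d, double estimate, int steps) {
    double x = estimate;
    for (int i = 0; i < steps; i++)
        x = x * (2.0 - d * x);  /* error ~ squares each pass */
    return x;
}
```

This is also why one bad table entry is so damaging: the iteration converges quadratically to 1/d only when the seed is already close, so a corrupted seed yields a confidently wrong quotient.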
The meat of the article... (Score:5, Informative)
July 28, 1962 -- Mariner I space probe. A bug in the flight software for the Mariner 1 [wikipedia.org] causes the rocket to divert from its intended path on launch. Mission control destroys the rocket over the Atlantic Ocean. The investigation into the accident discovers that a formula written on paper in pencil was improperly transcribed into computer code, causing the computer to miscalculate the rocket's trajectory.
1982 -- Soviet gas pipeline. Operatives working for the U.S. Central Intelligence Agency allegedly [loyola.edu] (.pdf) plant a bug in a Canadian computer system purchased to control the trans-Siberian gas pipeline. The Soviets had obtained the system as part of a wide-ranging effort to covertly purchase or steal sensitive U.S. technology. The CIA reportedly found out about the program and decided to make it backfire [msn.com] with equipment that would pass Soviet inspection and then fail once in operation. The resulting event is reportedly the largest non-nuclear explosion in the planet's history.
1985-1987 -- Therac-25 medical accelerator. A radiation therapy device malfunctions and delivers lethal radiation doses at several medical facilities. Based upon a previous design, the Therac-25 [wikipedia.org] was an "improved" therapy system that could deliver two different kinds of radiation: either a low-power electron beam (beta particles) or X-rays. The Therac-25's X-rays were generated by smashing high-power electrons into a metal target positioned between the electron gun and the patient. A second "improvement" was the replacement of the older Therac-20's electromechanical safety interlocks with software control, a decision made because software was perceived to be more reliable.
What engineers didn't know was that both the 20 and the 25 were built upon an operating system that had been kludged together by a programmer with no formal training. Because of a subtle bug called a "race condition [wikipedia.org]," a quick-fingered typist could accidentally configure the Therac-25 so the electron beam would fire in high-power mode but with the metal X-ray target out of position. At least five patients die; others are seriously injured.
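The check-then-act gap behind such a race can be sketched deterministically. This is illustrative only (made-up names; the real Therac-25 logic was far more involved): setup positions the target from a value sampled at the start, a fast operator edit lands before firing, and the beam power then follows the new value while the target still reflects the old one.

```c
#include <stdbool.h>

typedef enum { MODE_ELECTRON, MODE_XRAY } beam_mode;

typedef struct {
    beam_mode requested;   /* what the operator last typed */
    bool target_in_place;  /* X-ray target position, set during setup */
} machine_t;

/* Returns true if the resulting shot is safe: high power is only
 * acceptable with the metal target in the beam path. */
bool fire(machine_t *m, beam_mode edited_during_setup) {
    beam_mode sampled = m->requested;              /* 1. check the mode */
    m->target_in_place = (sampled == MODE_XRAY);   /* 2. slow hardware setup */
    m->requested = edited_during_setup;            /* 3. quick edit sneaks in */
    bool high_power = (m->requested == MODE_XRAY); /* 4. act on the NEW value */
    return !high_power || m->target_in_place;
}
```

Starting in electron mode and editing to X-ray mid-setup yields high power with the target out of position: the unsafe state.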
1988 -- Buffer overflow in Berkeley Unix finger daemon. The first internet worm (the so-called Morris Worm [eweek.com]) infects between 2,000 and 6,000 computers in less than a day by taking advantage of a buffer overflow. The specific code is a function in the standard input/output library routine called gets() [apple.com] designed to get a line of text over the network. Unfortunately, gets() has no provision to limit its input, and an overly large input allows the worm to take over any machine to which it can connect.
Programmers respond by attempting to stamp out the gets() function in working code, but they refuse to remove it from the C programming language's standard input/output library, where it remains to this day.
1988-1996 -- Kerberos Random Number Generator. The authors of the Kerberos security system neglect to properly "seed" the program's random number generator with a truly random seed. As a result [psu.edu], for eight years it is possible to trivially break into any computer that relies on Kerberos for authentication. It is unknown if this bug was ever actually exploited.
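The failure mode is easy to demonstrate: a generator seeded from a guessable quantity produces a fully predictable "random" value, so an attacker who tries nearby seeds regenerates the same key. An illustrative sketch using the C library's rand() (not the actual Kerberos code):

```c
#include <stdlib.h>

/* If the seed comes from something guessable -- e.g. the time of day
 * truncated to seconds -- the "random" session key is a deterministic
 * function of that seed, and an attacker can enumerate candidates. */
unsigned weak_key(unsigned guessable_seed) {
    srand(guessable_seed);   /* predictable seed = predictable stream */
    return (unsigned)rand(); /* the entire "key" follows from the seed */
}
```

Calling weak_key twice with the same seed returns the identical key, which is the whole problem: cryptographic seeding needs entropy the attacker cannot reconstruct.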
January 15, 1990 -- AT&T Network Outage. A bug in a new release of the software that controls AT&T's #4ESS long distance switches causes these mammoth computers to crash when they receive a specif
Re:The meat of the article... (Score:5, Funny)
The former dinosaurian population of the Yucatan Peninsula might disagree...
Re:The meat of the article... (Score:5, Interesting)
I actually did a research report on the Therac-25 incident while I was in Software Engineering class a few semesters ago (I was also in Technical Writing at the time, so I could kill two assignments with one report!) ;-) The details of the incident(s) are actually quite fascinating and sometimes spine-chilling.
Here's the report in PDF if anyone's interested: reportfinal.pdf [freearrow.com]
And in HTML for those of you who prefer it: link [freearrow.com]
Whatever happened to the US Navy? (Score:4, Informative)
Re:Whatever happened to the US Navy? (Score:5, Informative)
The Aegis system can be found on Ticonderoga-class cruisers and Arleigh Burke-class destroyers in the USN, Kongo-class destroyers in the JMSDF, and some Spanish frigate whose designation I forget.
The ship we're talking about is the USS Yorktown, CG-48, and the problem was pretty much as you describe. A user input an erroneous zero value for some quantity (fuel pressure, I think), and the system ate itself and took the engines offline.
The Yorktown was decommissioned last year. Shame that the practice of using Windows in ship-critical systems wasn't.
This bug reminds me of a Dilbert comic (Score:5, Funny)
Hehehe.... This reminds me of a Dilbert cartoon. Here is what I can remember:
Some guy: And here is our random number generator.
Another guy: 2 2 2 2 2 2 2 2 2 2 2 2.
Dilbert: That isn't very random though.
Some guy: He is randomly getting the same number.
Anyone actually know which comic I am thinking of?
Re:This bug reminds me of a Dilbert comic (Score:4, Informative)
I think the last line is actually something like
Dilbert: That isn't very random though.
Some guy: That's the trouble with randomness - you can never tell.
Re:This bug reminds me of a Dilbert comic (Score:5, Informative)
http://www.geocities.com/raptorred42/Dilbert0001.
Airbus Crash (Score:5, Informative)
http://www.alexisparkinn.com/photogallery/Videos/Airbus320_trees.mpg [alexisparkinn.com]
(Let the slashdotting begin! (poor servers))
All things considered, I don't know if the pilots survived.
Re:Airbus Crash (Score:5, Informative)
Re:Airbus Crash (Score:4, Informative)
However, the actual problem was that airliner engines aren't like some awesome fighter jet with an afterburner. They take time to spin up -- from examining the black box, they determined that at the point the pilot wanted to ascend, even if the engines spun up at the maximum rate, it was still nowhere near enough to pull the plane out of the descent. Hence, pilot error.
Same old tiresome error: "BUG" was old then (Score:5, Insightful)
That is wrong. This is a myth that has been disproved several times. See for example the IEEE Annals of the History of Computing, where Adm. Grace Hopper said that the term "bug" was used at least since the '30s, and maybe earlier, to describe an electrical problem in a system. See also here. [faqs.org]
In interview, Hopper confirmed that the notebook moth's caption, "First actual case of bug being found", clearly shows that it was a joke referring to a term that was already in use at the time.
Any idiot researching this anecdote for five minutes could have found out about it. I guess Wired couldn't be bothered. At this level of laziness and incompetence, one wonders why they don't just start publishing printouts of slashdot laced with ads. At least this place contains occasional nuggets of truth.
Once again, Wired blew it. Nice job, guys.
Re:Same old tiresome error: "BUG" was old then (Score:3, Informative)
Medical Systems (Score:5, Interesting)
In one case, a radiation treatment system had a bug where, if you used the backspace key when entering the dose a patient received, the display would show you had deleted the last digit, but internally you hadn't. So the patient would receive 10^backspace times the intended dose of radiation. Not a big deal normally, since the techs would typically shut the machine off between treatments. Until one day when they had two patients needing treatment back to back. The tech knew something was wrong when the machine was running for an unusually long time. The patient knew something was wrong when he died.
On our team a defect that crashed the system was considered severity 2. Severity 1 was reserved for defects that could result in a mis-diagnosis, which most patients agree is worse than a crash.
Re:A radiology system written in Java 1.1????! (Score:5, Insightful)
As it turns out, JDK1.1 (along with a native-C library for quick image processing, and a custom PCI card for doing 30MB/sec image transfers) was just fine for the task. We had a team of seven testers working on the project full time for a year, and were able to ship with zero severity 1-2 defects.
We set a new record for lowest defects/KLOC at the customer (a major player in the medical systems industry), despite running JDK 1.1 on Windows NT 4. Our product was several times faster than the C-based product it replaced, had more functionality, and provided more accurate diagnosis for the patient.
Good design is the most important thing in developing good software. The language/runtime/OS can provide crutches to save you if you screw up, but bad design will result in defects no matter how sturdy the crutches are.
I think this one should have made the list (Score:5, Insightful)
Basically, the Navy was running critical ship systems on a Windows NT platform, and a divide-by-zero in a database caused a buffer overrun that resulted in a shutdown of the engines, leaving the ship dead in the water for 2.5 hours.
Fortunately, it was on maneuvers off of Cape Charles, and not at war off the coast of Yemen or something. Scratch a billion-dollar destroyer and most of her crew because of an NT bug, in that case.
Re:I think this one should have made the list (Score:4, Insightful)
"NT played no role in the Yorktown's LAN crash, Baker said." [gcn.com]
"The Yorktown is unique because it was a proof-of-concept [ship] put out to sea without formal testing and software certification, which our products normally go through," Baker said.
Couple of Bugs I thought of (Score:3, Funny)
Patriot Missile - Missiles had to be shut down once a day because the targeting system would cycle every minute and shift the internal coordinate system a fraction of a degree. Over the course of a few days the targeting system would become completely useless.
PS/2 shutdown bug - Analog copiers at the time had fuser components that worked at the same frequency as the processor's shutdown signal.
Minus World - Super Mario Brothers - A hidden water glitch
Ermac - Mortal Kombat
What? No Outlook Express? (Score:5, Insightful)
Later versions tried to fix the problem while keeping the functionality, as if somehow the bad guys would intentionally include the Evil Bit in their code.
Re:What? No Outlook Express? (Score:3, Insightful)
If the newer build didn't contain the same functionality, then nobody would upgrade their software. Outlook Express has also served to reinforce the idea that this functionality should exist and be activated by default in all modern e-mail clients. If you were to install a different e-mail client -- Thunderbird, for instance -- on a computer belonging to
Not a bug (Score:5, Informative)
I used to work with the lead programmer on this software package from Multidata. We worked together at two different companies for a total of about four years.
Multidata's software allows a radiation therapist to draw on a computer screen the placement of metal shields called "blocks" designed to protect healthy tissue from the radiation. But the software will only allow technicians to use four shielding blocks, and the Panamanian doctors wish to use five.
This is also made very clear in the documentation. This isn't a bug at all; the dosimetrists misused the software.
The doctors discover that they can trick the software by drawing all five blocks as a single large block with a hole in the middle. What the doctors don't realize is that the Multidata software gives different answers in this configuration depending on how the hole is drawn: draw it in one direction and the correct dose is calculated, draw in another direction and the software recommends twice the necessary exposure.
Exactly. They tried to create a feature that the software did not support, and they did so in a manner that broke the software.
At least eight patients die, while another 20 receive overdoses likely to cause significant health problems. The physicians, who were legally required to double-check the computer's calculations by hand, are indicted for murder.
It's not a software bug, it's a user error. This isn't a bug any more than it's a "bug" that your Linux box stops working properly if you do sudo rm -rf /. The users of the product knew better.
To be fair, Multidata was not a great shop from a procedural standpoint - the guy who ran it was insane, but the software was rock solid. I actually worked with a number of former Multidata employees who jumped ship and went to a rival shop that builds similar software, and they were all fairly competent and intelligent.
Re:Not a bug (Score:3, Insightful)
Except that the software didn't break well. It should have either reported that the action wasn't allowed or calculated correctly. It shouldn't look like it's working but give erroneous results. If a single block with a hole isn't supported, why are you allowed to select it?
Re:Not a bug (Score:5, Insightful)
It's not a software bug, it's a user error.
It's both. The program should not have accepted easily recognised invalid input and the user should not have entered it.
I don't care if it's not in the spec, it's commonly accepted programming practice that all input should be bounds checked and any program that doesn't do that is crap.
Your rm example is not equivalent as command line programs are by design flexible; in unusual circumstances it may be exactly what the operator wants to do.
---
Keep your options open!
Re:Not a bug (Score:3, Insightful)
Changing the order of the vertices of a geometric figure should not affect which region counts as the "inside" of the figure, since the order of the points is geometrically irrelevant (as in mathematics).
The software should probably have prompted the user, in all cases, for which area is the inside, rather than assuming one.
Re:Not a bug (Score:3, Insightful)
No. When you design software that is explicitly intended to perform potentially lethal actions on human beings, you absolutely make sure it's foolproof. You do input validation at every freaking step, then double-check the result before you pull the trigger.
If I go in for LASIK and get my retina burned off because some technician turned the wrong dial up to 11, you bet your ass I'm suing the manufacturer right along side the clinic. It should not be possible fo
Re:Not a bug (Score:4, Insightful)
Your example is incomplete. Imagine that you type "rm -rf / junk" and the system responds "Delete /junk?", so you answer "Y" and it then deletes the whole filesystem.
It is most certainly a bug. First, there is a mismatch between what is shown on the screen and what the system is doing. That is a bug by any definition. Second, the system obviously had gaps in its validation of input. This makes it no less of a bug than many of the others listed (e.g. the fingerd bug).
Furthermore, it is the responsibility of designers and developers of medical software to ensure that potential hazards are identified and mitigated. A hazard of "calculated dose does not match image shown on screen" is not some obscure hazard that no one would have thought of - it is the first that comes to mind!
Please tell me that these people are not involved in medical software anymore.
2003 North American Power Outage???? (Score:5, Interesting)
From Wiki page:
It also found that FirstEnergy did not take remedial action or warn other control centers until it was too late because of a bug in the Unix-based General Electric Energy's XA/21 system that prevented alarms from showing on their control system, and they had inadequate staff to detect and correct the software bug. The cascading effect that resulted ultimately forced the shutdown of more than 100 power plants.
Not a bug but I think this is appropriate (Score:5, Interesting)
So the story goes...
Well, they're ok, but not quite the worst (Score:5, Informative)
(Copied from the article:)
* November 9, 1979, when the US made emergency retaliation preparations after NORAD saw on-screen indications that a full-scale Soviet attack had been launched. No attempt was made to use the "red telephone" hotline to clarify the situation with the USSR, and it was not until early-warning radar systems confirmed no such launch had taken place that NORAD realised that a computer system test had caused the display errors. A Senator at NORAD at the time described an atmosphere of absolute panic. A GAO investigation led to the construction of an off-site test facility, to prevent similar mistakes subsequently. A fictionalized version of this incident was filmed as the movie WarGames, in which the test system is inadvertently triggered by a teenage hacker believing himself to be playing a video game.
* September 26, 1983, when Soviet military officer Stanislav Petrov refused to launch ICBMs, despite computer indications that the US had already launched.
If it weren't for two humans who said "fuck what the computer says!", we might be in a very different place right now.
Re:Well, they're ok, but not quite the worst (Score:3, Insightful)
I guess that is why they were there.
Computers are excellent at performing according to the logic that is programmed into them. For the most part, they cannot "think" or take a step back and say, "I'm sure I did everything right, but something still looks wrong". I used to put on my math tests something like, "I know this is not the right answer, but here is my work". To me, that is much mo
Y2K (Score:3, Insightful)
Disagree with them on some bugs (Score:3, Insightful)
Soviet Gas Pipeline... This was a desired feature working just as intended (unless the CIA didn't want to blow up the pipeline).
Buffer Overflow in Berkeley - a worm is not a bug. It is a program designed to infiltrate a system and do something. While the people utilizing the program may not have intended this to happen (duh), the makers of the worm did.
A bug is an unwanted aspect of the code as implemented by the people who wrote (or edited) the code, but this does not include something affected by a virus/worm. A program that crashes every six minutes for no apparent or intended reason has a bug; a program that gets infected by a virus which causes it to crash every six minutes does not. Also, a piece of code that is intentionally inserted in the hopes of crashing a system is not a bug... it is a feature. It may be undesirable, but it is a feature.
22222222 missiles ... almost launched WWIII (Score:3, Interesting)
Some person down the line noticed that the Russians didn't have that many missiles, couldn't have launched them all with such synchronization, and that there were an awful lot of two's in the report ... actually, every digit of every number was a two. It turned out to be a fried chip somewhere, always pumping out the same bit regardless of input (I have no understanding of the technical side of the issue; maybe it hit the 32-bit limit and the int->string function reacted with 2's).
Good thing we were not too automated, and that we employed somebody smart enough to critically examine his printouts.
Disclaimer, this is a favorite tidbit of one of my professors ... I have no real source to refer to.
Please try to pay attention (Score:5, Insightful)
===
WinNuke made it...
1995/1996 -- The Ping of Death (Score:2)
This is my favourite.
Re:Microsoft's striking absence (Score:3, Interesting)
I couldn't find the description right now, but I'm sure others know the bug. The one where you can basically display a special text file using the type command or similar and it will BSOD the machine. The file consists of tabs, spaces and newline/carriage-return pairs and nothing else. MS never fixed the bug.
Re:Microsoft's striking absence (Score:3, Informative)
How on earth was such a basic and low-level bug ignored for so long? It doesn't seem like rocket-science to fix it with a small bounds-checking if statement!
Re:Microsoft's striking absence (Score:3, Insightful)
I would also - but it's probably pretty impractical to tell if just the operator interface is running Windows, or if the low level controller of the electromechanical sensors, switches and actuators is also running Windows. I wouldn't worry about the former, I'd worry a lot about the latter. (Well, to be honest, I probably would accept the treatments anywa
Re:Microsoft's striking absence (Score:4, Informative)
Re:Microsoft's striking absence (Score:5, Informative)
Look everyone, someone didn't RTFA! (Score:2)
Did the original post actually quote correctly? (Score:3, Interesting)
From the post:
The resulting event is reportedly the largest non-nuclear explosion in the planet's history.
The actual quote from a hyperlink [msn.com] in the article ment
Re:omg (Score:5, Informative)
Re:omg (Score:3, Insightful)
You keep using that word, 'terror'. Are you sure you know what it means?
The fact that there was an explosion of such magnitude doesn't bother me a bit. And I bet the majority of the citizens of the USSR weren't shaken a bit by this explosion, because (drum roll) they never knew such an accident had happened (and that's, for me, the scary part). And nothing spells success better than an act of terror no one finds out about, now does it?
Suuuuure (Score:3, Insightful)
Terrorism is an act of mayhem designed to terrorize. This did not.
Sabotage? Yes.
Act of war? Probably.
Terrorism? Not even close.
Your statement is just a display of anti-American rhetoric with no basis in reality.
Abuse of language (Score:3, Insightful)
I might as well say: "Idiots like you that corrupt the language are worse than terrorists."
Both are absurd exaggerations that have nothing to do with reality, and only degrade the ability of our language to carry meaning.
Get Real. Terrorism is the deliberate use of violence against civilians in order to induce a state of terror in the general population, as a method intended to achieve
Re:They are just very, VERY careful. (Score:4, Interesting)
When you are writing software for life-critical systems, there are methods you can follow that give you much greater assurance of correct code and drastically reduce the testing burden (by being able to prove that certain classes of errors don't exist in the code). It's akin to static types, which let you catch a lot of type errors statically, reducing the need to spend time testing for them.
The languages and methods used are things like SPARK [praxis-his.com] and B-method [b-core.com]. The beauty of systems like SPARK is that they provide a degree of flexibility in how much work you put in depending on how much extra assurance you want. It is quite possible to simply specify critical portions of code with a little extra formality (basically extended static checks beyond what type checking alone can give you), through to fully specifying everything and doing formal proofs for the whole system. You can tailor the effort and assurance to the needs of the project.
(This time without that dangling link - that'll teach me not to preview)
Jedidiah
Re:Peer review for software? (Score:3, Interesting)
regards,
treefrog
Re:gets() (Score:3, Funny)
#define gets() DONT_USE_GETS_YOU_MORON()