
More Than Coding Errors Behind Bad Software

An anonymous reader writes "SANS' just-released list of the Top 25 most dangerous programming errors obscures the real problem with software development today, argues InfoWeek's Alex Wolfe. In More Than Coding Mistakes At Fault In Bad Software, he lays the blame on PC developers (read: Microsoft) who kicked the time-honored waterfall model to the curb and replaced it not with object-oriented or agile development but with a 'modus operandi of cramming in as many features as possible, and then fixing problems in beta.' He argues that youthful programmers don't know about error-catching and lack a sense of history, suggesting they read Fred Brooks' 'The Mythical Man-Month' and Gerald Weinberg's 'The Psychology of Computer Programming.'"
  • cheap shot (Score:5, Informative)

    by Anonymous Coward on Monday January 12, 2009 @03:41PM (#26421607)

    I work at Microsoft. We use agile development and almost everybody I know here has read the Mythical Man Month. Get your facts straight before taking cheap shots in story submissions. Thanks.

  • Microsoft? (Score:5, Informative)

    by dedazo ( 737510 ) on Monday January 12, 2009 @03:46PM (#26421705) Journal

    Most of the teams I've had contact with inside the tools group at MS (in the last four years or so) use SCRUM.

    I don't know how widespread that is in other divisions (say the MSN/Live folks or the Microsoft.com teams) but that clever comment in the submission is nothing more than an ignorant cheap shot.

    Don't be so twitterish and make up crap about Microsoft. Get your facts straight or you just come across as an idiot.

  • by hopopee ( 859193 ) on Monday January 12, 2009 @03:51PM (#26421789)
    I find it more than a little amusing that the summary mentions the waterfall method as a time-honored example of how all software should be made. Here's what the history of the nonexistent waterfall method has to say about it: Waterfall Method does not exist! [idinews.com]
  • by networkBoy ( 774728 ) on Monday January 12, 2009 @03:57PM (#26421941) Journal

    they cost a shit ton of money is what happened.

    A project I was on in 2000ish went as follows:
    Steppings A0, A1, A2, and A3 were halted in-fab because someone found a critical bug in simulations.
    A4-A7 did not work.
    B0-B4 did not work. B6 did not work.
    C0-C4 did not work
    B5, B7, C5 sorta worked.
    The company folded.
    That's what a software mentality working on hardware will get you.

    Steppings in CPUs are a little different. Often an earlier stepping was functional enough to start the design cycle for Dell, HP, et al., but not ideal. The later steppings start by fixing the deficiencies; beyond that, they're likely cost-cutting.
    -nB

  • by Enter the Shoggoth ( 1362079 ) on Monday January 12, 2009 @03:59PM (#26421973)

    The most common errors: SQL injection, command injection, cleartext transmission of sensitive information, etc.
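
    To make the first of those concrete, here is a minimal sketch (mine, not from the list or the poster) of the classic injection mistake and the prepared-statement fix, using SQLite's C API; the table and function names are invented for illustration:

        #include <stdio.h>
        #include <sqlite3.h>

        /* BAD: splicing user input into SQL text. A name like
           "x' OR '1'='1" changes the meaning of the query. */
        void find_user_unsafe(sqlite3 *db, const char *name)
        {
            char sql[256];
            snprintf(sql, sizeof sql,
                     "SELECT id FROM users WHERE name = '%s';", name);
            sqlite3_exec(db, sql, 0, 0, 0);
        }

        /* GOOD: the parameter is bound, never parsed as SQL. */
        void find_user_safe(sqlite3 *db, const char *name)
        {
            sqlite3_stmt *stmt;
            sqlite3_prepare_v2(db,
                "SELECT id FROM users WHERE name = ?;", -1, &stmt, 0);
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                ;  /* consume rows */
            sqlite3_finalize(stmt);
        }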

    People make mistakes. Software needs to ship, preferably yesterday.

    How much would it cost to have perfect software? I happen to have worked in an industry that requires perfect coding. So I can imagine what it would look like if Microsoft tried it.

    The debugger would cost half a million dollars per seat (gdb is free). There would be an entire industry dedicated to analyzing your source code and doing all kinds of proofs, coverage checks, what-if analysis, and other stuff that requires PhDs to understand the results.

    The industry I'm referring to is the chip industry. Hardware designers code pretty much like software developers (the languages they use are massively parallel, but apart from that, they rely on the same basic constructs). Hardware companies can't afford a single mistake, because once the chip goes to fab, that's it. No patches like software, no version 1.0.1.

    It's just not practical. Let the NSA order special versions of Office that cost 10 times the price and ship three years after the consumer version.

    But for me, "good enough" is indeed good enough.

    -- FairSoftware.net [fairsoftware.net] -- work where geeks are their own boss

    I worked in the same space about 10 years ago, as a sysadmin for a group of ASIC design jockeys as well as the firmware and device driver guys, and I'm gonna call you on this...

    The hardware designers were under the same sorts of pressures as the software guys, if not more so, and I saw many bugs end up in the shipping silicon. The general attitude was always "oh! a bug: well, the software guys will just have to work around it."

    And as for "no patching", well, that's also BS: you can patch silicon, it's just rather messy having to have the factory do it post-fab by cutting traces on the die.

    So much for perfection!

  • by Anonymous Coward on Monday January 12, 2009 @04:19PM (#26422297)
    Yup. Eclipse is definitely an example of this. Except that it does a lot without any plugins installed, and there are literally hundreds of plugins available to fill practically every need.
  • by jandrese ( 485 ) <kensama@vt.edu> on Monday January 12, 2009 @04:19PM (#26422303) Homepage Journal
    Anybody who has worked on driver code can tell you that most hardware vendors ship errata with their chips. Some hardware has rather extensive errata, like the SiI 3112 SATA controller that got installed in pretty much every SATA-compatible motherboard back before SATA support was properly integrated into southbridges. The CMD 640 is another example of a chip with extensive hardware errata.
  • by jellomizer ( 103300 ) on Monday January 12, 2009 @04:26PM (#26422415)

    I'm old enough to remember when it wasn't like that. You'd run your program and it was ready in a second; you'd exit and it left no trace. Crashes were virtually unheard of. We have people where I work who only do data entry, and they still use WordPerfect 4.2 on 386 hardware. I've seen their workflow and how fast it works for them, and I can see that if they "modernized", it would cripple their productivity.

    You remember wrong.
    The old stuff was always failing and had a bunch of problems...
    You are probably thinking of your old DOS PC. If the floppy disk wasn't corrupted, things in general worked, and worked well. Why? Because the programmer could fairly easily test all the cases, and could let security bugs slide: a buffer overflow could be "prevented" simply because it would take too long for someone to physically type in that much data... And it was one application with essentially the full PC at its whim.
    If you tried multitasking with apps like DESQview, your stability was greatly reduced. It would make you wish for Windows 95. Or remember Windows 3.1 stability...

    Downloading data via the modem was a hit-or-miss activity. I remember getting disconnected trying to download Slackware because there was a byte combination in the zmodem stream that passed the hangup code to the modem, forcing me to grab the zipped file all over again.

    Just because things seemed fine when you were doing simple things back then doesn't mean they were better.

  • Time-Honored WHAT?! (Score:3, Informative)

    by mihalis ( 28146 ) on Monday January 12, 2009 @04:33PM (#26422531) Homepage

    he lays the blame on PC developers (read: Microsoft) who kicked the time-honored waterfall model to the curb

    The waterfall model is long-discredited, and its downfall has nothing to do with Microsoft. Ian Sommerville was discussing this in his Software Engineering book in the early 90s. I recall a diagram of the spiral model, which is very much like agile development, although we didn't call it that then.

  • by Anonymous Coward on Monday January 12, 2009 @04:34PM (#26422545)

    er....what you have described is code written by an idiot. Any idiot can be a self-proclaimed genius. That doesn't actually make him a genius, however.

  • by 77Punker ( 673758 ) <spencr04 @ h i g h p o i n t.edu> on Monday January 12, 2009 @04:41PM (#26422641)

    That's a very good answer, but any good answer would be one that didn't involve declaring an array of all the numbers between 0 and 100 and then iterating over the array. Yes, that's a typical solution.

  • by Anonymous Coward on Monday January 12, 2009 @04:42PM (#26422657)
    "When did you code your first C application?" If it was any older than 12 (twelve), I'd reject them. *I* did this, and I don't even consider myself to be a programmer.

    That is a bit elitist. Although C existed when I was 12, it wasn't many years old, and the K&R book had not yet been published. So I learned first on BASIC (on a mainframe!), then, when it became possible to build (!) home computers, on some strange game language (CHIP-8?), and then machine code. By the age of 16 or so I had written my own assembler, and sold my first programs, written directly in hexadecimal machine code.

    I agree that there is a lot to be said for us self-taught programmers. But I must admit that I have seen some good programmers who started late and learned it all in school.

  • by Anonymous Coward on Monday January 12, 2009 @04:59PM (#26422919)

    return (n*n+1)/2

    You mean n*(n+1)/2
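
    For anyone following along, a minimal sketch of the two approaches discussed in this thread: the loop a typical candidate writes, and the closed form (the function names are mine, not any poster's):

        #include <stdio.h>

        /* O(n) loop: what the "declare an array and iterate" crowd
           effectively computes. */
        static unsigned sum_loop(unsigned n)
        {
            unsigned total = 0;
            for (unsigned i = 1; i <= n; i++)
                total += i;
            return total;
        }

        /* Gauss's closed form. n*(n+1) is always even, so the
           division is exact. Note the parentheses: (n*n+1)/2 is a
           different, wrong formula. */
        static unsigned sum_closed_form(unsigned n)
        {
            return n * (n + 1) / 2;
        }

        int main(void)
        {
            printf("%u %u\n", sum_loop(100), sum_closed_form(100));
            /* prints: 5050 5050 */
            return 0;
        }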

  • by Anonymous Coward on Monday January 12, 2009 @05:07PM (#26423025)

    "When did you code your first C application?"

    If they were any older than 12 (twelve), I'd reject them.

    Those of us who were twelve before the Internet pervaded every home would not have had access to a C compiler (or a compiler for any language). QBasic came with MS-DOS at the time (early 1990s), but how many parents would lay out hundreds of dollars for a piece of software they didn't understand? I personally was lucky to have a programmer for a parent, so I had a modem and BBS access (with all the pirated Borland products you could ask for), but most kids didn't, and few public schools had CS curricula (mine sure didn't). Believe it or not, there was a time when all software wasn't free and instantly available.

    Something tells me you're likely no older than 20, and hence unlikely to be interviewing anyone for development positions, which is a good thing as you are clearly unfit for that role.

  • by chaim79 ( 898507 ) on Monday January 12, 2009 @05:20PM (#26423209) Homepage

    I work at a company that tests aircraft instruments to FAA regs (DO-178B), and I know exactly what you are talking about, though for different reasons. What we work on basically amounts to full computers (one project even used a PowerPC G5 processor, an alphanumeric keypad, and a 3-color LCD screen), but because of where they are used, failures and bugs are not an option (for software that qualifies as DO-178B Level A, a failure means a very high probability that people will die).

    The testing that we do for hardware and software is way beyond that of any other industry I had heard of (before reading your post), and it took me a while to get used to.

    For those of you interested: we do requirements-based testing. All the functionality of a product is covered by requirements detailing exactly what it will and will not do. Once development is underway, we are brought in as testers, and we write and execute tests that cover all the possibilities for each requirement. An example is a requirement that states: "Function A will accept a value between 5 and 20". This simple statement results in a ton of testing:

    In-range and out-of-range values are tested:

    • Test #1: value = 11
    • Test #2: value = 45

    Boundary conditions are tested:

    • Test #3: value = 4
    • Test #4: value = 5
    • Test #5: value = 6
    • Test #6: value = 19
    • Test #7: value = 20
    • Test #8: value = 21

    Limits of the value type are tested (assuming an unsigned 8-bit value):

    • Test #9: value = 0
    • Test #10: value = 255

    So 10 tests from one simple statement. Once all the tests are written, we do "structural coverage" testing: we instrument the code, run all the tests we've written, and make sure that every line of code is exercised by a test.

    Only once all that passes is the instrument considered ready to go. That takes a LOT of man-hours, even in situations where we can script the running of the tests (some situations are 'black box' testing, where we just get the instrument itself and have to manually push the buttons to run any tests... takes a LONG time!)

    This level of testing is making me take a second look at some of the software I wrote before starting this job, and wince... spending a few quick minutes playing around with a few different values really doesn't cut it for testing anymore in my eyes...
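
    A minimal sketch of what those ten tests might look like in C, under the assumption (mine, not the poster's) that the value is an unsigned 8-bit input and that function_a() reports whether it accepts the value; the names and return convention are invented:

        #include <assert.h>

        enum { REJECT = 0, ACCEPT = 1 };

        /* Hypothetical implementation of the quoted requirement:
           "Function A will accept a value between 5 and 20". */
        static int function_a(unsigned char value)
        {
            return (value >= 5 && value <= 20) ? ACCEPT : REJECT;
        }

        int main(void)
        {
            assert(function_a(11)  == ACCEPT);  /* #1: in range          */
            assert(function_a(45)  == REJECT);  /* #2: out of range      */
            assert(function_a(4)   == REJECT);  /* #3: below lower bound */
            assert(function_a(5)   == ACCEPT);  /* #4: lower bound       */
            assert(function_a(6)   == ACCEPT);  /* #5: just above lower  */
            assert(function_a(19)  == ACCEPT);  /* #6: just below upper  */
            assert(function_a(20)  == ACCEPT);  /* #7: upper bound       */
            assert(function_a(21)  == REJECT);  /* #8: above upper bound */
            assert(function_a(0)   == REJECT);  /* #9: type minimum      */
            assert(function_a(255) == REJECT);  /* #10: type maximum     */
            return 0;
        }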

  • Feature, not bug ... (Score:3, Informative)

    by trolltalk.com ( 1108067 ) on Monday January 12, 2009 @05:28PM (#26423309) Homepage Journal

    The program he left worked wonderfully and was incredibly complicated, right up until he wasn't there; poor coding practice and missing documentation/comments left nobody else able to maintain the tool.

    It's called "coding for job security . . ."

    It also comes with a companion use case "coding for future consultancy gigs . . ."

  • Re:Question (Score:5, Informative)

    by DoofusOfDeath ( 636671 ) on Monday January 12, 2009 @05:38PM (#26423449)

    Sorry to be harsh, but get with the times. Computing these days is vastly more complex than back in the "good old days".

    You. are. fucking. kidding. me., right?

    The sources of complexity have changed, but not significantly increased.

    When's the last time your code had to:

    • Employ overlays to make your code fit into memory?
    • Write a large, complex algorithm in assembly, or even (in the 50's) straight machine language?
    • Consider writing self-modifying code, just to make the program require less memory?
    • Make "clever" use of obscure, corner-case behavior of certain machine instructions, not because you like to screw the people who will inherit the code, but because you have only a time amount memory to work with?
    • Intentionally mis-DIMENSION an array in fortran 77 code, knowing that it will work out okay at runtime, just to avoid (by necessity) the runtime cost of using array-slicing functions?
  • by DoofusOfDeath ( 636671 ) on Monday January 12, 2009 @06:08PM (#26423933)

    I'm afraid you've entirely missed the point. You may want to re-read your original post and my response.

    Any further elaboration would come across as trolling, so I'll simply say good day to you.

  • by Surt ( 22457 ) on Monday January 12, 2009 @08:17PM (#26425303) Homepage Journal

    Yes, I mean (n*(n+1))/2.
    Tens of AC's are all trying to correct me, and I assume none can see each other's posts.

  • by nonsequitor ( 893813 ) on Monday January 12, 2009 @09:06PM (#26425859)

    The horror you described is for Level C certification. Level B adds the requirement of testing each implied else: for every standalone if statement, you have one test that makes sure it does the right thing when the conditions are true, and a second test that makes sure that thing does not happen when the conditions are false.

    Level A certification requires Modified Condition/Decision Coverage (MC/DC): for every condition in every decision, there must be test cases demonstrating that the condition independently affects the outcome of the decision. These are extensions of the structural coverage requirements for Level C, which is only statement coverage: every line has to be exercised at least once by a test case.
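
    To illustrate the difference (my example, not the poster's), take a made-up guard with three conditions. Statement coverage needs only one test that reaches the line; one possible MC/DC set needs four of the eight input combinations:

        #include <assert.h>
        #include <stdbool.h>

        /* Invented decision with three conditions. */
        static bool guard(bool a, bool b, bool c)
        {
            return a && (b || c);
        }

        int main(void)
        {
            /* Each pair below toggles exactly one condition and
               flips the decision, showing that condition matters. */
            assert(guard(true,  true,  false) == true);   /* baseline      */
            assert(guard(false, true,  false) == false);  /* a independent */
            assert(guard(true,  false, false) == false);  /* b independent */
            assert(guard(true,  false, true)  == true);   /* c independent */
            return 0;
        }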

    This sort of exhaustive testing is done because the operating environment is very different from desktop software. Desktop software doesn't have to plan for single-bit events: a gamma ray strikes a 0 in RAM and turns it into a 1. As a result, some places forbid the use of pointers in their code, since a single-bit event there could have catastrophic consequences. Every function has to be tested to withstand random garbage input and fail gracefully, since you cannot rely on RAM storing values correctly.

    However, the medical industry has similar guidelines for things like X-ray machines and ventilators, which I believe have a very similar set of best practices codified into a certification process. Depending on the certification level, the cost of testing varies. It is VERY expensive to develop software this way, since Test Driven Development doesn't cut it: for these industries the testers need to be independent of the developers, meaning developers don't write their own unit tests, which in turn leads to more cumbersome software processes.

    I still work in safety-critical embedded software, but I am eternally thankful that I no longer have to worry about certification with a mandated software process. I now write software for boats instead of planes, and I have started doing model-based controls design and test-driven development, and in general have tailored the software process to best suit the company. The goal is to produce better software faster, in order to respond to changing business needs, not to fill out checklists to pass audits. The FAA process is the most reliable way to the highest-quality software possible; it is not efficient, cheap, or even modern.

  • by jellomizer ( 103300 ) on Monday January 12, 2009 @09:17PM (#26425997)

    Very true. I have worked on maintaining these old systems, and your statement is quite right.
    The cost to replace legacy apps is massive: millions of dollars.
    Why?
    Analysis of all the features of the old app: before you buy a replacement you should know what your current one does. A form that fits on an 80x25 display could be doing a bunch of complex calculations.
    Analysis of the current usage of the program: some parts are used all the time, other code hasn't been touched in decades and will never be used again, and some hasn't been used in a while but is needed once in a blue moon. You don't want to write unneeded code, but you need to make sure nothing is missing after the upgrade.
    Analysis of the data: how is the data laid out, and how can you migrate it from one system to the other? Are the values easy-to-read text numbers, or in a binary form with one number packed next to the other, with unknown endianness and undocumented data-type sizes? (A 1-bit bool or an 8-bit bool?)
    Then, after all that analysis, there is the coding of the new app.
    Importing all the data from the old app to the new one, and ensuring data integrity.
    Finding a group of people willing to test the new software.
    Training people to use the new software.
    Getting people to STOP using the old software (and there will be resistance, as there will be that one stupid feature that was nicer on the old system than on the new one).

    This is a HUGE process, and I am not even factoring in bugs. So the cost of upgrading is definitely a big problem.
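
    As a small illustration of the "unknown endianness" problem above (my sketch; the field layout is invented), the usual move is to read a field both ways and see which byte order produces a value in the plausible range:

        #include <stdint.h>
        #include <stdio.h>

        /* Read a 32-bit field from a legacy record in each byte order. */
        static uint32_t read_u32_le(const uint8_t *p)
        {
            return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
                   (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
        }

        static uint32_t read_u32_be(const uint8_t *p)
        {
            return (uint32_t)p[3] | (uint32_t)p[2] << 8 |
                   (uint32_t)p[1] << 16 | (uint32_t)p[0] << 24;
        }

        int main(void)
        {
            /* Suppose the field should be a year in 1950..2009. */
            const uint8_t record[4] = { 0xD0, 0x07, 0x00, 0x00 };
            printf("LE: %u  BE: %u\n",
                   (unsigned)read_u32_le(record),   /* 2000: plausible   */
                   (unsigned)read_u32_be(record));  /* huge: wrong order */
            return 0;
        }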

  • Re:Waterfall (Score:3, Informative)

    by turbidostato ( 878842 ) on Monday January 12, 2009 @11:38PM (#26427473)

    "One of the main commandments in agile development is that the requirements cannot change."

    Maybe you read the wrong book. In my book requirements don't change *within a sprint*.

    "Also, you are supposed to plan and code for a feature to be shippable when it is done."

    And you plan it so it can be done within a sprint. So before the sprint you get the client to accept feature "A" (you show him a list of features, so it is he, not you, who sorts them). After the sprint you deliver the promised feature "A" (and, ideally, you bill your client for the feature).

    "But in the office, requirements keep changing as much as they did with waterfall"

    And that's exactly why agile exists: because it recognizes that requirements *will* change. Since they'll change, why spend a lot of time producing a very long list of intermingled requirements, then a long (and costly) development cycle, only to present the whole list of requested requirements after much time and expenditure and find that 25% of the original requirements were not in fact required because the client didn't properly understand them, 25% were no longer wanted because the situation changed over such a long time, and 50% are new requirements that arose because of, again, the long wait?

    You'd better agree on a sketchy, *cheap*, and *fast* rough list of requirements, sort them by the client's perceived value, and start producing functionally complete code ASAP. The client is glad because he gets to see the project advancing; your company is glad because the client is glad, it can collect money faster, and thanks to the constant exchange of ideas it is easy to line up new projects. When agile is possible, it's a win-win proposition.

    "This messes up the schedule."

    That clearly means you are not doing agile at all. If you have a long-run schedule, you are not doing agile. If you are doing agile, then there's no long-run schedule to mess up. You have a rough estimate at best, and the continuous feedback with your client will make him easily understand and accept any delays, because he will have strong feedback showing that he, not you, is mainly responsible for the slippage.

    "And features only get to the "done enough for now" stage which means that they are good enough to sort of work and build on, but they are not complete or shippable."

    Again, you are clearly not doing "agile". Of course bugs can arise, but if you do it the proper way there's no way for a feature to be so long-running and complex that it can slip too much (it should fit within a single sprint). And since you work tests -> autodoc -> prototype -> implementation, there's no chance of shipping "done enough" features, as there is when you develop easy cases -> corner cases -> documentation (that all-too-common ordering is the one that always gets cut short at step one, "easy cases", because by then it "seems" to work; the first ordering only seems to work when it indeed works).

    "not everyone is going to say no to the boss"

    Again, "agile" works on the assumption that in fact nobody can say "no" to the boss producing a framework where you won't need to say "no" to the boss.

    "The seminar example was a web app [...] In my office, the software is much more complex"

    This seems to be the only sensible assertion you've made so far. Yes, it's true: agile is not the solution for every project. While proper practice and experience can help you split more and more problems until they become "agile-tractable", there will definitely be situations where, due to the inherent complexity and internal coupling of the problem, "agile" is not the way to go. If the seminar guy explicitly said, or gave the impression, that "agile" is a silver bullet for any software project, then that was just a "snake oil vendor seminar" (not that there is any shortage of those).
