Programming Bug IT Technology

More Than Coding Errors Behind Bad Software 726

An anonymous reader writes "SANS' just-released list of the Top 15 most dangerous programming errors obscures the real problem with software development today, argues InfoWeek's Alex Wolfe. In More Than Coding Mistakes At Fault In Bad Software, he lays the blame on PC developers (read: Microsoft) who kicked the time-honored waterfall model to the curb and replaced it not with object-oriented or agile development but with a 'modus operandi of cramming in as many features as possible, and then fixing problems in beta.' He argues that youthful programmers don't know about error-catching and lack a sense of history, suggesting they read Fred Brooks' 'The Mythical Man-Month,' and Gerald Weinberg's 'The Psychology of Computer Programming.'"
This discussion has been archived. No new comments can be posted.


  • by alain94040 ( 785132 ) * on Monday January 12, 2009 @03:19PM (#26421159) Homepage

    The most common errors: SQL injection, command injection, cleartext transmission of Sensitive Information, etc.

    People make mistakes. Software needs to ship, preferably yesterday.

    How much would it cost to have perfect software? I happen to have worked in an industry that requires perfect coding. So I can imagine what it would look like if Microsoft tried it.

    The debugger would cost half a million dollars per seat (gdb is free). There would be an entire industry dedicated to analyzing your source code and doing all kinds of proofs, coverage, what-if analysis and other stuff that requires Ph.D.s to understand the results.

    The industry I'm referring to is the chip industry. Hardware designers code pretty much like software developers (except the languages they use are massively parallel, but apart from that, they use the same basic constructs). Hardware companies can't afford a single mistake because once the chip goes to fab, that's it. No patches like software, no version 1.0.1.

    It's just not practical. Let the NSA order special versions of Office that cost 10 times the price and ship three years after the consumer version.

    But for me, "good enough" is indeed good enough.

    --
    FairSoftware.net [fairsoftware.net] -- work where geeks are their own boss

    • by Opportunist ( 166417 ) on Monday January 12, 2009 @03:32PM (#26421443)

      The problem is that software doesn't even ship as "good enough" anymore. It's more like "it compiles, ship it".

      Your example of hardware, and how it's impossible to patch it, was true to a point for software, too, in the past. Before it became easy to distribute software patches via the internet, companies actually invested a lot more time in testing. Why? Because yes, you could technically patch software, but doing so came at sometimes horrible cost.

      You can actually see a similar trend with the parts of hardware (i.e. BIOSes) that are patchable. Have you ever seen hardware shipped with all BIOS options fully enabled and working? I haven't in the past 2-3 years. More often than not you get a "new" board or controller with the predecessor's BIOS flashed in, and the promise for an update "really soon now".

      The easier it is to patch something, the sloppier the original implementation is. You'd see exactly the same with hardware if it wasn't so terribly hard (read: impossible) to rewire that damn printed circuit. I dread the day when they find some way to actually do it. Then hardware will be in the same state some OSes are in today: it's not done until it reads "SP1" on the cover.

      • by Anonymous Coward on Monday January 12, 2009 @04:01PM (#26422005)

        The same is true in a way of software development.

        Back when I was in high school, I could write a program (on punch cards) and put them in the tray to be submitted. Every week the intra-school courier came around and picked up the tray, returning the tray and output from the previous week. When every typo adds 2 weeks to your development time, you check your code *very* carefully, and examine every error message or warning that comes back from the compiler, to try to fix as many errors as possible in each submission.

        With interactive compilers/interpreters, it is not worth spending that much time verifying the mechanical coding. Just fix the first obvious problem and re-submit: it is faster to let the compiler (1) confirm the parts you managed to type correctly, and (2) clear out all of the messages that were cascaded from the mistake you just fixed, than it is to waste your time scanning for typos, or reading the subsequent error messages in case some of them are not cascades.

      • Re: (Score:3, Interesting)

        by truthsearch ( 249536 )

        This is especially true of web development. To patch a web application you don't even need to transmit a patch to clients; just update the web server. It's so easy to patch that many sites let the public use the code before any testing is done at all.

        I spent my first 10 years programming clients and servers for the financial industry. Now, as a web developer, I'm shocked at how hard it is to find programmers who strictly follow best practices [docforge.com].

        • by Opportunist ( 166417 ) on Monday January 12, 2009 @07:41PM (#26424809)

          Best practices? Following any sensible practice would already be a blessing!

          Now, I know I'm jaded and cynical, but having to deal with people you get from temp agencies does that to you. It's the latest fad, and it has arrived at coding jobs: watch your productivity plummet, your code quality leap off the cliff, and your sanity follow close behind.

          First of all, what do you get from a temp agency? Hell, it's not like programmers have a really, really hard time finding a job around here. If they can code, they have a normal job. So what the hell do you think you get from a TA? Right. The sludge of the trade. The ones that cling to "doing something with computers" despite all odds and the fact that they simply can't.

          The few gems that you might get, because they're young and need some sort of work experience, are gone before you've got them up to par. And that time is mostly spent trying to find some better (read: permanent) job. You'd be amazed how often people get "sick" just before they jump ship and are miraculously hired by some other company.

          The average turnover is 3 months. Now, I'm sure anyone who spent more than a week in programming knows what sort of productive work you get out of someone in his first three months (remember: You get either sludge or fresh meat, nobody with experience would willingly waste his time in a TA employment situation).

          After a year or two you end up with code written by about 20-30 different people, every single one of them caring more about their next lunch than about the project, with 20-30 different coding styles, and near-zero documentation.

          I left the company, too. I still have a tiny bit of sanity that I treasure.

          • Re: (Score:3, Interesting)

            by mollymoo ( 202721 )

            First of all, what do you get from a temp agency? Hell, it's not like programmers have a really, really hard time finding a job around here. If they can code, they have a normal job. So what the hell do you think you get from a TA? Right. The sludge of the trade. The ones that cling to "doing something with computers" despite all odds and the fact that they simply can't.

            I used to do contract work (if that's what you mean by "temp agency"), not because I couldn't get a permanent job - I was offered a permanent position at every single place I did contract work - but because being paid twice as much as you'd get in a permanent position, getting to work on a variety of projects with a variety of companies and being able to take several months a year off is just a nicer way of working.

      • by Junior J. Junior III ( 192702 ) on Monday January 12, 2009 @05:08PM (#26423033) Homepage

        The problem is that software doesn't even ship as "good enough" anymore. It's more like "it compiles, ship it".

        I don't know, if you ask me, software is about a million times better than it used to be. I've never been a user of a mainframe system, and I understand they were coded to be a lot more reliable than desktop class microcomputers. But having started using computers in the early 80's as a small child, and seeing where we are now, there's just no comparison.

         It used to be that computers were slow, crashed all the time on the slightest problem, and encountered problems very frequently. Today we have many of these same problems, but they occur much less frequently and are much less severe. An application crash no longer brings down the entire system. When it does crash, usually there is no data loss, and the application is able to recover most if not all of your recent progress, even if you didn't save. Crashes happen far less frequently, and the systems run faster than they used to, even with a great many more features and all the bloat we typically complain about.

        • Re: (Score:3, Insightful)

          by plover ( 150551 ) *

          Yes, on the whole applications have become more stable, while growing an order of magnitude more complex. But TFA is not about stability as much as it is about security -- people leaving inadvertent holes in software that a hacker can exploit. You can have a perfectly functioning program, one that passes every test 100% of the time, but it may still have a SQL injection flaw or fail to validate input.
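
           To make that concrete, here is a minimal sketch of the distinction (Python with the standard sqlite3 module; the table and function names are made up for illustration): a lookup that passes every functional test yet is still injectable, next to the parameterized form.

           import sqlite3

           conn = sqlite3.connect(":memory:")
           conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
           conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

           def find_user_injectable(name):
               # Passes every "normal" functional test, but the input is spliced
               # straight into the SQL text, so name = "x' OR '1'='1" dumps the table.
               return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()

           def find_user_parameterized(name):
               # Same behavior for legitimate input; the value is passed as data,
               # never as SQL text, so the injection string simply matches no rows.
               return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

           print(find_user_injectable("x' OR '1'='1"))    # every row comes back
           print(find_user_parameterized("x' OR '1'='1")) # []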

          • Re: (Score:3, Insightful)

            by Opportunist ( 166417 )

            Now, I don't think that code got less secure. I rather think there are simply a hell of a lot more ways a system's insecurities can be exploited, and the propagation of such malicious code is orders of magnitude faster.

            Think back to the 80s. Whether you take an Amiga or a PC, both had horrible systems from a security point of view. It was trivial on either system to place code into ram so it could infect every disk inserted, every program executed, much more than it is today, with home users having systems that have at l

    • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Monday January 12, 2009 @03:34PM (#26421483) Homepage Journal

      I happen to have worked in an industry that requires perfect coding. [...] The industry I'm referring to is the chip industry. [...] Hardware companies can't afford a single mistake because once the chip goes to fab, that's it. No patches like software, no version 1.0.1.

      What does "stepping: 9" in /proc/cpuinfo on my home computer mean? What is a f00f, and what happened with the TLB on the early Phenom processors?

      • by networkBoy ( 774728 ) on Monday January 12, 2009 @03:57PM (#26421941) Journal

        they cost a shit ton of money is what happened.

        A project I was on in 2000ish went as follows:
        Steppings A0, A1, A2, and A3 were halted in-fab because someone found a critical bug in simulations.
        A4-A7 did not work.
        B0-B4 did not work. B6 did not work.
        C0-C4 did not work.
        B5, B7, C5 sorta worked.
        The company folded.
        That's what a software mentality working on hardware will get you.

        Steppings in CPUs are a little different. Often an earlier stepping was functional enough to start the design cycle for Dell, HP, et al., but not ideal. The later steppings start by fixing the deficiencies; beyond that they are likely cost-cutting.
        -nB

        • by CarpetShark ( 865376 ) on Monday January 12, 2009 @04:37PM (#26422593)

          A project I was on in 2000ish went as follows:
          Steppings A0, A1, A2, and A3 were halted in-fab because someone found a critical bug in simulations.
          A4-A7 did not work.
           B0-B4 did not work. B6 did not work.
           C0-C4 did not work.
          B5, B7, C5 sorta worked.
          The company folded.
          That's what a software mentality working on hardware will get you.

          Luckily Microsoft stepped in and bought the company, and now market the product as X-Box.

    • Re: (Score:3, Insightful)

      by Jason1729 ( 561790 )
      People make mistakes. Software needs to ship, preferably yesterday.

      This attitude is the number one problem with software development. When all you care about is getting it out the door, you send garbage out the door.

      Software today is so damn buggy. I spend hours a week just doing the work of getting my computer to work. And even then it has random slowdowns and crashes.

      I'm old enough to remember when it wasn't like that. You'd run your program and it was ready in a second, you'd exit and it left
      • by bb5ch39t ( 786551 ) on Monday January 12, 2009 @03:49PM (#26421753)
        I'm an old timer, still working on IBM zSeries mainframes, mainly. We just got a new system, which runs on a combination of Linux and Windows servers, to replace an application which used to run on the mainframe. Nobody likes it. We are apparently a beta test site (though we were told it was production ready). It has a web administration interface. For some reason, for some users, the only PC which can update those users is my Linux PC running Firefox. Nobody can say why. Until early last week, it would take a person a full 5 minutes to log in to the product. They made some change to the software and now it takes about 10 seconds. This is production quality? No, I won't say the product. Sorry. Against policy.
        • Re: (Score:3, Interesting)

          by ColdWetDog ( 752185 ) *
          Oh it's obviously Notes. Nothing else needs a zSeries to run and 5 minutes to login.

          Gotcha.
        • Re: (Score:3, Interesting)

          by aztracker1 ( 702135 )
          I feel your pain. I've come across about 3 major instances where race conditions were a problem under load in a heavy traffic website. I'm now on my third web application refactoring. The first, I cut minutes out of a reporting process, and changed the way data was pulled in from various dbms resources. The second I did some changes to the way data settings were looked up and persisted. At first glance it wasn't so bad, because each setting call was taking only 0.11 seconds. The problem was there were
      • by Schuthrax ( 682718 ) on Monday January 12, 2009 @03:56PM (#26421903)

        You'd run your program and it was ready in a second, you'd exit and it left no trace. Crashes were virtually unheard of.

        And all without managed memory, automatic garbage collection, etc., imagine that! Seriously, I see so many devs (and M$, who has a product to sell) insisting that all that junk is what will save us. What they're doing is attempting to create a Fisher Price dev environment where you don't have to think anymore because they've done it all for you. What's going to happen to this world when GenC# programmers replace the old guard and they don't have the least clue about what is going on inside the computer that makes the magic happen?

        • by samkass ( 174571 ) on Monday January 12, 2009 @04:15PM (#26422217) Homepage Journal

          Even in C# and Java there are experts who do know what's going on. They will replace the old guard. The rest will be people for whom contributing to software development was completely impossible before, but who can now do the basics (e.g. designers, information visualization experts).

          And of the top programming errors, many of them still apply to Java and C#. But some don't, and I see that as a positive step. I do Java in my day job, and iPhone development on the side. And while nothing in the industry beats Interface Builder, Objective-C is pretty horrible to develop in when you're used to a modern language and IDE...

        • Re: (Score:3, Insightful)

          by billcopc ( 196330 )

          What they're doing is attempting to create a Fisher Price dev environment where you don't have to think anymore because they've done it all for you.

          They don't have much of a choice. Since about (rand[100]) 97% of all computer schools are absolute garbage, we end up with a bunch of Fisher Price developers who can type out Java bullshit at a steady 120wpm, but the resultant app makes zero sense as they fall into the trap of "checklist development". The app does "this, and that, and that too" but does them all sloppily and unreliably. Then we have managers who review the checklist line by line then sign off because it "passes spec". They completely mis

      • Question (Score:4, Insightful)

        by coryking ( 104614 ) * on Monday January 12, 2009 @04:23PM (#26422355) Homepage Journal

        When you were working on those punch cards, using your green screen console (kids these days with color monitors and mice), what were you doing?

        Did you ever transcode video and then upload it to some website called Youtube on "the internet"? Did you then play it back in a "web browser" that reads a document format that your grandma could probably learn? Did your mainframe even have "ethernet"? Or is that some modern fad that us kids use but will probably pass and we'll all go back to "real" computers with punch cards.

        Did you ever have to contend with botnets, spyware or any of that? And don't say "if we used The Right Way Like When I Was Your Age, we wouldn't have those things because software would be Designed Properly", because if we used "The Right Way" like you, software would take so long to develop and cost so much that we wouldn't even have the fancy systems that make malware possible in the first place.

        Old timers crack me up. Ones that are skeptical of object oriented programming. Ones who think you can waterfall a huge project. I'd like just one of them to run a startup and waterfall Version 1.0 of their web-app (which, they wouldn't because the web is a fad, unlike their punch cards).

        Sorry to be harsh, but get with the times. Computing these days is vastly more complex than back in the "good old days". Your 386 couldn't even play an mp3 without pegging the CPU, let alone a flash video containing one.

        I've seen their workflow and how fast it works for them and I can see if they "modernized" it would cripple their productivity.

        Until they try to bring in new-hires. How long does it take to train somebody who is used to modern office programs to use a DOS program like wordperfect? You think they'll ever get as proficient when what they see isn't what they get (a fad, I bet, right?)

        Again, sorry to sound so harsh. You guys crack me up. Dont worry though, soon enough we'll see the errors in our ways and go back to time honored methods like waterfall. We'll abandon "scripting languages" like Ruby or C and use assembler like god intends.

        Sheesh.

        • Re:Question (Score:4, Interesting)

          by ClosedSource ( 238333 ) on Monday January 12, 2009 @05:35PM (#26423389)

          Not all of us "old timers" think everything was great "back in the day" and shit now.

          As you say, what is being built now is much more ambitious than what we used to make. There were challenges then too, they were just different.

          Newer technologies can be a two-edged sword. Way back when a serious bug in an embedded system would require a new PROM or EPROM to be made and installed by a technician.

           Today you can download an update over the Internet in a few minutes. That convenience weakens a company's motivation to get it right the first time.

          Of course today your product probably relies on software that you didn't write and aren't familiar with. We used to write every byte we delivered and couldn't blame an error on libraries because we didn't use any. But you couldn't compete in the market that way now.

          The constant in this business is there will always be those who try to push the limits whatever they are.

        • Re:Question (Score:5, Informative)

          by DoofusOfDeath ( 636671 ) on Monday January 12, 2009 @05:38PM (#26423449)

           Sorry to be harsh, but get with the times. Computing these days is vastly more complex than back in the "good old days".

          You. are. fucking. kidding. me., right?

          The sources of complexity have changed, but not significantly increased.

          When's the last time your code had to:

          • Employ overlays to make your code fit into memory?
          • Write a large, complex algorithm in assembly, or even (in the 50's) straight machine language?
          • Consider writing self-modifying code, just to make the program require less memory?
          • Make "clever" use of obscure, corner-case behavior of certain machine instructions, not because you like to screw the people who will inherit the code, but because you have only a time amount memory to work with?
          • Intentionally mis-DIMENSION an array in fortran 77 code, knowing that it will work out okay at runtime, just to avoid (by necessity) the runtime cost of using array-slicing functions?
      • by jellomizer ( 103300 ) on Monday January 12, 2009 @04:26PM (#26422415)

        I'm old enough to remember when it wasn't like that. You'd run your program and it was ready in a second, you'd exit and it left no trace. Crashes were virtually unheard of. We have people where I work who only do data entry, and they still use wordperfect 4.2 on 386 hardware. I've seen their workflow and how fast it works for them and I can see if they "modernized" it would cripple their productivity.

        You remember wrong.
        The old stuff was always failing and had a bunch of problems...
        You are probably thinking about your old DOS PC. If the floppy disk wasn't corrupted, things in general worked and worked well. Why? Because the programmer could fairly easily test all the cases, and could get away with security bugs: a buffer overflow wasn't a practical worry when it would take too long for someone to physically type in that much data... And it was one application with essentially the full PC at its whim.
        If you tried to do multi-tasking with apps like Desqview, your stability was greatly reduced. It would make you wish for Windows 95. Or remember Windows 3.1 stability...

        Downloading data via the modem was a hit or miss activity. I remember getting disconnected trying to download Slackware because there was a byte combination in the zmodem transfer that passed the hangup code to the modem, forcing me to re-gzip the file and start over to get it.

        Just because you were doing simple things back then doesn't mean the tools were better.

      • Re: (Score:3, Interesting)

        by ozphx ( 1061292 )

        I pull a 6-digit salary (even when you convert our shitty AUD to USD ;)) - and it's worse.

        If anything I am more rushed than I was when I was a junior dev. Managers flat out don't care what you are doing when you are on 50k. You could whack off all day if you like.

        As soon as you start earning more than your manager it all goes to shit. Possibly an ego thing - I don't really care about the reasons - but at every place I've had managers over my shoulder monitoring my times, trying to cut schedules, etc. Where I'm

      • Re: (Score:3, Insightful)

        by Eskarel ( 565631 )

        The software of yester-year ran largely on single threaded operating systems, didn't have to interact with the internet or defend against attacks originating from it, had to manage miniscule feature and data sets, and was still buggy.

        There was no magical era of bug free computing, there was an era when systems were orders of magnitude less complex, where about a tenth of the software was running, and where features which most people depend on didn't exist. The software was still buggy and most of it was so

    • by Enter the Shoggoth ( 1362079 ) on Monday January 12, 2009 @03:59PM (#26421973)

      The most common errors: SQL injection, command injection, cleartext transmission of Sensitive Information, etc.

      People make mistakes. Software needs to ship, preferably yesterday.

      How much would it cost to have perfect software? I happen to have worked in an industry that requires perfect coding. So I can imagine what it would look like if Microsoft tried it.

      The debugger would cost half a million dollars per seat (gdb is free). There would be an entire industry dedicated to analyzing your source code and doing all kinds of proofs, coverage, what-if analysis and other stuff that requires Ph.D.s to understand the results.

      The industry I'm referring to is the chip industry. Hardware designers code pretty much like software developers (except the languages they use are massively parallel, but apart from that, they use the same basic constructs). Hardware companies can't afford a single mistake because once the chip goes to fab, that's it. No patches like software, no version 1.0.1.

      It's just not practical. Let the NSA order special versions of Office that cost 10 times the price and ship three years after the consumer version.

      But for me, "good enough" is indeed good enough.

      -- FairSoftware.net [fairsoftware.net] -- work where geeks are their own boss

      I worked within the same space about 10 years ago - I was a sysadmin for a group of ASIC design jockeys as well as the firmware and device driver guys, and I'm gonna call you on this...

      The hardware designers were under the same sorts of pressures, if not more so, than the software guys and I saw many bugs that would end up in the shipping silicon. The general attitude was always "oh! a bug: well the software guys will just have to work around it."

      And as for "no patching", well that's also BS, you can patch silicon, it's just rather messy having to have the factory do it post-fab by cutting traces on die.

      So much for perfection!

      • by jandrese ( 485 ) <kensama@vt.edu> on Monday January 12, 2009 @04:19PM (#26422303) Homepage Journal
        Anybody who has worked on driver code can tell you that most hardware vendors ship an errata sheet with their chip. Some hardware has rather significant errata, like the SiI 3112 SATA controller that got installed in pretty much every SATA compatible motherboard back before SATA support was properly integrated into southbridges. The CMD 640 is another example of a chip with extensive hardware errata.
      • by Grishnakh ( 216268 ) on Monday January 12, 2009 @09:15PM (#26425965)

        The hardware designers were under the same sorts of pressures, if not more so, than the software guys and I saw many bugs that would end up in the shipping silicon. The general attitude was always "oh! a bug: well the software guys will just have to work around it."

        Most of my work involves software workarounds for hardware bugs. The reason is a little different, though. My company, in its infinite wisdom, completed the very first revision of this particular chip, did a preliminary test and the core seemed to work (it's a system-on-chip), so they laid off the hardware design team! Then they started finding the many bugs in the hardware, and ran around looking for another design team to fix the bugs, but it took several years, and they had to ship millions of the buggy chips to customers, which are deployed in the field.

    • by curunir ( 98273 ) * on Monday January 12, 2009 @04:16PM (#26422247) Homepage Journal

      But for me, "good enough" is indeed good enough.

      It might be of interest to you that Voltaire came up with your conclusion 245 years ago, and a bit more eloquently as well:

      The perfect is the enemy of the good.

    • by chaim79 ( 898507 ) on Monday January 12, 2009 @05:20PM (#26423209) Homepage

      I work at a company that tests aircraft instruments to FAA regs (DO-178B) and I know exactly what you are talking about, but for different reasons. What we are working on basically amounts to full computers (one project even used a PowerPC G5 processor, an alphanumeric keypad, and a 3-color LCD screen), but because of where they are used, failures and bugs are not an option (for software that qualifies as DO-178B Level A, a failure means a very high probability that people will die).

      The testing that we do for hardware and software is way beyond any other industry I've heard of (before reading your post) and took me a bit to get used to.

      For those of you interested, we do requirements-based testing: all the functionality of a product is covered by requirements detailing exactly what it will and will not do. Once development is underway we are brought in as testers, and we write and execute tests that cover all the possibilities for each requirement. An example is a requirement that states: "Function A will accept a value between 5 and 20". This simple statement results in a ton of testing:

      Values in range and out of range are tested:

      • Test #1: value = 11
      • Test #2: value = 45

      Boundary conditions are tested:

      • Test #3: value = 4
      • Test #4: value = 5
      • Test #5: value = 6
      • Test #6: value = 19
      • Test #7: value = 20
      • Test #8: value = 21

      Limits of the value type are tested (assuming an unsigned byte):

      • Test #9: value = 0
      • Test #10: value = 255

      So 10 tests from a simple statement. Once all tests are written we do "Structural Coverage" tests: we instrument the code, run all the tests we've written, and make sure that every line of code is covered by a test.

      Only once all that passes is that instrument considered ready to go. That takes a LOT of man-hours, even in situations where we can script the running of the tests (some situations are 'black box' testing, where we just get the instrument itself and have to manually push the buttons to run any tests... takes a long time!)
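
      As a rough sketch only (plain Python unittest, with a made-up function standing in for "Function A"; obviously not the real instrument code or the qualified DO-178B tooling), those ten cases might be scripted like this:

      import unittest

      def function_a(value):
          # Hypothetical stand-in for the requirement "Function A will accept
          # a value between 5 and 20": returns True if the value is accepted.
          return 5 <= value <= 20

      class TestFunctionA(unittest.TestCase):
          def test_in_and_out_of_range(self):
              self.assertTrue(function_a(11))    # Test #1: in range
              self.assertFalse(function_a(45))   # Test #2: out of range

          def test_boundaries(self):
              self.assertFalse(function_a(4))    # Test #3
              self.assertTrue(function_a(5))     # Test #4
              self.assertTrue(function_a(6))     # Test #5
              self.assertTrue(function_a(19))    # Test #6
              self.assertTrue(function_a(20))    # Test #7
              self.assertFalse(function_a(21))   # Test #8

          def test_type_limits(self):
              self.assertFalse(function_a(0))    # Test #9: minimum of the type
              self.assertFalse(function_a(255))  # Test #10: maximum of the type

      if __name__ == "__main__":
          unittest.main()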

      This level of testing is making me take a second look at some of the software I wrote before starting this job, and wince... spending a few quick minutes playing around with a few different values really doesn't cut it for testing anymore in my eyes...

      • by nonsequitor ( 893813 ) on Monday January 12, 2009 @09:06PM (#26425859)

        The horror you described is for Level C certification. Level B adds the requirement of testing each implied else: for every standalone if statement, you have one test that makes sure it does the right thing when the conditions are true, and a second test which makes sure that thing does not happen when the conditions are not true.

        Level A certification requires Modified Condition/Decision Coverage (MC/DC). That means test cases must show that each individual condition in a decision can independently affect the decision's outcome. These are extensions of the structural coverage requirements for Level C, which is only statement coverage: every line has to be exercised at least once by a test case.
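
        A tiny illustration of the difference (hypothetical two-condition decision in Python, not taken from any real certification package): MC/DC wants each condition shown, by a pair of tests differing only in that condition, to flip the decision on its own, so a two-term AND needs three cases rather than the full truth table.

        def interlock_ok(door_closed, speed_ok):
            # Hypothetical decision with two conditions.
            return door_closed and speed_ok

        # MC/DC for "door_closed and speed_ok": each pair below differs in exactly
        # one condition and the outcome flips, showing that each condition
        # independently affects the decision.
        assert interlock_ok(True, True) is True    # baseline
        assert interlock_ok(False, True) is False  # door_closed alone flips the result
        assert interlock_ok(True, False) is False  # speed_ok alone flips the result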

        This sort of exhaustive testing is done because of a very different operating environment from desktop software. Desktops don't have single bit events. A single bit event occurs when a gamma ray strikes a 0 in RAM and turns it into a 1. As a result some places forbid the use of pointers in their code, since a single bit event could have catastrophic consequences. Every function has to be tested to be able to best withstand random garbage input and fail gracefully since you cannot rely on RAM storing values correctly.

        However, the medical industry has similar guidelines for things like X-ray machines and ventilators, which I believe have a very similar set of best practices codified into a certification process. Depending on the certification level, the cost of testing varies. It is VERY expensive to develop software this way, since Test Driven Development doesn't cut it: for these industries the testers need to be independent of the developers, meaning developers don't write their own unit tests, which in turn leads to more cumbersome software processes.

        I still work in safety critical embedded software, but am eternally thankful I no longer have to worry about certification with a mandated software process. I now write software for boats instead of planes and have started doing model based controls design, test driven development, and in general have tailored the software process to best suit the company. The goal is to produce better software faster, in order to respond to changing business needs, not to fill out checklists to pass audits. The FAA process is the most reliable way to the highest quality software possible; it is not efficient, cheap, or even modern.

  • by jollyreaper ( 513215 ) on Monday January 12, 2009 @03:23PM (#26421229)

    Fred Brooks's 'The Mythical Man-Month',

    I read that as "the Mythical Man-Moth." I bet that would be a great book.

  • by PingXao ( 153057 ) on Monday January 12, 2009 @03:24PM (#26421261)

    In the early '80s there were no "older" programmers unless you were talking mainframe data processing. On microprocessor CPU systems the average age was low, as I recall. Back then we didn't blame poor software on "youthful programmers". We blamed it on idiots who didn't know what they were doing. I think it's safe to say that much hasn't changed.

    • by Skapare ( 16644 ) on Monday January 12, 2009 @03:35PM (#26421489) Homepage

      This is true of any group. There are geniuses and idiots in all groups. The problems exist because once the supply of geniuses has been exhausted, businesses tap into the idiots. And this is made worse when employers want to limit pay across the board based on what the idiots were accepting. Now they are going overseas to tap into cheaper geniuses, who are now running out, and in the meantime lots of local geniuses have moved on to some other career path because they didn't want to live at the economic level of an idiot.

      • by fishbowl ( 7759 ) on Monday January 12, 2009 @03:48PM (#26421725)

        >There are geniuses and idiots in all groups.

        Most of both groups are within two standard deviations of a norm. Your idiots are probably smarter than you think and your geniuses are probably not as smart as you'd like to believe.

    • by 77Punker ( 673758 ) <(ude.tniophgih) (ta) (40rcneps)> on Monday January 12, 2009 @03:35PM (#26421495)

      Three months into my first programming job, the other two developers quit, which left just me. I felt inadequate until I started interviewing other programmers to fill the gap. Apparently lead developers with 10 years of experience can't solve the simplest programming problems, explain how databases work, or explain OOP. I'm convinced that most software sucks because most people writing it have no idea what they're doing and shouldn't be allowed to touch a computer. I'm currently in my 5th month at the same job and we've got someone good who will start soon, but it took a long time to find even one competent developer.

      • I'm convinced that most people presenting themselves as lead developers in interviews are far from it. There's a reason why thedailywtf.com [thedailywtf.com] has a "Tales from the Interview" section.

      • by jeko ( 179919 ) on Monday January 12, 2009 @04:52PM (#26422821)

        I bought three Mercedes. Two of them got repossessed. Now, the dealers won't finance me when I go to buy another. Clearly, there is a shortage of Mercedes.

        Look at your story. You had three programmers. Two quit (Yeah, I know, it wasn't because they were unhappy. Look, no one wants to be known as a malcontent or difficult. They lied to you.) Now, you can't get anyone in to interview who knows what they're doing.

        You think maybe it's possible that your company's reputation precedes it? I know of half a dozen places in my town that nobody in their right mind would agree to work for.

        Show me a man who says he can't find anyone to hire, and I'll show you a man nobody wants to work for.

        Take that same man, triple the wages he's offering and wire a pacifier into his mouth and the ghosts of Ada Lovelace and Alan Turing will fight for the interview.

    • by Opportunist ( 166417 ) on Monday January 12, 2009 @03:44PM (#26421671)

      Quite dead on. But the difference is that today, with all the RAD tools around, you have a fair lot of "programmers" who don't even know what they're doing and get away with it. They took some course, maybe at their school (and those are already the better ones of the lot), maybe as some sort of attempt to get them back into a job via their unemployment services, and of course tha intarwebz is where da money is, so everyone and their dog learned how to write a few lines in VB (or C#, the difference for this kind of code molester is insignificant) and now they're let loose on the market.

      Then you get hired by some company that signed up those "impressively cheap programmers" ("see, programmers needn't be horribly expensive") when the project goes south because deadlines led to dead ends but no product, and you're greeted with code that makes you just yell out WTF? You get conditional branches that do exactly the same thing in every branch. You get loops that do nothing for 90% of the loop time, and when asked you just get a blank stare and a "well, how do you think we could count from 80 to 90 besides counting from 0 to 90 and 'continue;' for 0-70?", because that's how they learned it and they never for a nanosecond pondered just WHAT those numbers in the 'for' block meant. And so on.
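
      For the record, a minimal sketch of the loop pattern being described (Python here purely for illustration; the exact bounds in the anecdote are fuzzy, so treat the numbers as approximate):

      def process(i):
          print(i)  # hypothetical stand-in for whatever the real loop body did

      # The pattern described: iterate the whole range and skip most of it.
      for i in range(0, 91):
          if i < 80:
              continue   # the first 80 iterations do nothing at all
          process(i)     # only runs for 80..90

      # What actually expresses the intent: start the loop where the work starts.
      for i in range(80, 91):
          process(i)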

      Granted, those are the extreme examples of people who learned programming like a poem. By heart. They have a hammer as a tool, so every problem has to be turned into a nail. But you'd be amazed at the blank stares and "what do I need that for?" when you ask some of those "programmers" about hash tables (include snide joke about Lady Mary Jane here...) or Big-O notation. And we're talking people who are supposedly able to write a database application here.

      This is the problem here. It's not youthful programmers. It's simply people who know a minimum about programming and managed to trick some HR goon into believing they could actually do it.

  • Waterfall (Score:5, Insightful)

    by Brandybuck ( 704397 ) on Monday January 12, 2009 @03:25PM (#26421273) Homepage Journal

    The waterfall method is still the best development model. You have to analyze, then plan, then code, then test, then maintain. The steps need to be in order and you can't skip any of them. Unfortunately waterfall doesn't fit into the real world of software development because you can't freeze your requirements for so long a time. But cyclic models are a good second place, because they are essentially iterated waterfall models. When you boil all the trendy stuff out of Agile, you're basically left with a generic iterated waterfall, which is why it works. The trendy crap is just so you can sell the idea to management.

    • Re:Waterfall (Score:5, Insightful)

      by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Monday January 12, 2009 @03:29PM (#26421385) Homepage Journal

      The waterfall method is still the best development model. [...] Unfortunately waterfall doesn't fit into the real world

      WTF? Not working in the real world makes it a crap model.

      When you boil all the trendy stuff out of Agile, you're basically left with a generic iterated waterfall, which is[...]

      ...not a waterfall.

      • Re:Waterfall (Score:4, Insightful)

        by Brandybuck ( 704397 ) on Monday January 12, 2009 @04:00PM (#26421987) Homepage Journal

        You can't create quality software without planning before coding. Ditto for not testing after coding. This isn't rocket science, yet too many "professionals" think all they need to do is code.

        The waterfall model isn't a management process, it's basic common sense. It's not about attending meetings and getting signatures, it's about knowing what to code before you code it, then verifying that that is what you coded. The classic waterfall took too long because you had to plan Z before you started coding A, but with an iterated waterfall (which is still a waterfall, duh) you only need to plan A before you code A.

    • Re:Waterfall (Score:5, Insightful)

      by radish ( 98371 ) on Monday January 12, 2009 @03:52PM (#26421813) Homepage

      The waterfall is broken, seriously. I'm paraphrasing from an excellent talk I attended a while back, but here goes.

       For a typical waterfall you're doing roughly these steps: Requirements Analysis, Design, Implementation, Testing, Maintenance. So let's start at the beginning... requirements. So off you go, notebook in hand, to get some requirements. When are you done? How do you know you got them all? Hint: you will never have them all, and they will keep changing. But you have to stop at some point so you can move on to design, so when do we stop? Typically it's when we get to the end of the week/month/year allocated on the project plan. Awesome. Maybe we've got 50% of the reqs done, maybe not. It'll be a long time until we find out for sure...

      Next up - Design! Woot, this bit is fun. So we crank up Rose or whatever and get to work. But when do we stop? Well again, that's tough. Because I don't know about you but I can design forever, getting better and better, more and more modular, more and more generic, until the whole thing has flipped itself inside out. So we stop when it's "good enough" - according to who? Or more likely, it's now 1 week to delivery and no-one's written any code yet so we better stop designing!

       Implementation time. Well, at least this time we know when we're done! We're up against it time-wise though, because we did such a good job on Reqs & Design. Let's pull some all-nighters and get stuff churned out pronto; who cares how good it is, no time for that now. That lovely, expensive design gets pushed aside.

      No time to test...gotta hit the release date.

      Sure this isn't the waterfall model as published in the text books, but it's how it works (fails) in real life. And the text books specifically don't say how to fix the problems inherent in the first two stages. What to do instead? Small, incremental feature based development. Gather requirements, assign costs to them, ask the sponsors to pick a few, implement & test the chosen features, repeat until out of time or money.

      • Re: (Score:3, Insightful)

        by jstott ( 212041 )

        Hint: you will never have them all, and they will keep changing. But you have to stop at some point so you can move onto design, so when do we stop?

        This is a solved problem in engineering. You write the contract so that if the customer [internal or external] changes the requirements it carries a financial penalty and allows you [the developer] to push back the release date. On the other side, if you discover a problem after you've shifted out of the design phase, then you are SOL and your company eats a

      • Re: (Score:3, Interesting)

        by recharged95 ( 782975 )
        Obviously you were working on the consumer-commercial side of the waterfall practice.


        On the DoD Space Systems side, we had stuff like:

        • A Requirements Traceability Matrix
        • PDR: Preliminary Design Review with the Customer
        • CDR: Critical Design Review with the Customer
        • IOC: Initial Op capability
        • FAT: Preliminary Acceptance Testing
        • FOC: Final Op capability
        • FAT: Final Acceptance Testing

        And never had a launch delay due to writing software last minute. And as long as the h/w flew fine, we had birds that ran in space fo

  • by pete-wilko ( 628329 ) on Monday January 12, 2009 @03:30PM (#26421401)
    Yeah, those good-for-nothing programmers cramming in features all over the place and not adhering to time-honored development practices like waterfall!

    And requirement changes? WTF are those? Using waterfall you specify your requirements at the beginning, and these are then set in stone, IN STONE! Nothing will ever change in 6-12 months.

    It's not like they're working 60-80 hour weeks, being forced to implement features, having new requirements added and not being listened to! That would be like marketing driving engineering! Insanity!

    As an aside - why is he dragging OO into this? Pretty sure you can use waterfall with OO - you even get pretty diagrams.
  • Extensible Framework (Score:3, Interesting)

    by should_be_linear ( 779431 ) on Monday January 12, 2009 @03:32PM (#26421433)

    Most horrible projects I've seen were "extensible frameworks" that can do just about anything with appropriate extensions (plugins or whatever). But currently, without any existing extensions, each is a bloated pile of crap. Also, there is nobody in sight willing to make even one extension for it (except the sample, done by the author himself, on how to easily create an extension).

  • by overshoot ( 39700 ) on Monday January 12, 2009 @03:33PM (#26421459)
    ... are destined for greatness, because their bullshit is not burdened by reality.

    I've heard from several ex-Softies that the company inculcates its recruits with a serious dose of übermensch mentality: "those rumors about history and 'best practices' are for lesser beings who don't have the talent that we require of our programmers." "We don't need no steenking documentation," in witness whereof their network wireline protocols had to be reverse-engineered from code by what Brad Smith called 300 of their best people working for half a year.

    However, I'll note that they were right: anyone who wants to say that they did it wrong should prove it by making more money.

    • Re: (Score:3, Insightful)

      by jd ( 1658 )

      The Brink's-Mat robbery, in which thieves brazenly stole truck-loads of gold bullion, and the two gigantic heists at Heathrow Airport, were all 100% correct - from the perspective of the thieves. And given that next to nothing from any of these three gigantic thefts was ever recovered, I think most people would agree that the implementation was damn-near flawless. As economic models go, though, they suck. They are so utterly defective in any context other than the very narrow one for which they were designe

  • Users are to blame (Score:5, Insightful)

    by Chris_Jefferson ( 581445 ) on Monday January 12, 2009 @03:36PM (#26421499) Homepage
    The sad truth is, given the choice between a well-written, stable and fast application with a tiny set of features and a giant slow buggy program with every feature under the sun, too many users choose the second.

    If people refused to use and pay for buggy applications, they would either get fixed or die off.

    • by curunir ( 98273 ) * on Monday January 12, 2009 @04:09PM (#26422119) Homepage Journal

      The even sadder truth is that when faced with the choice of the two apps you describe and a third buggy application with a tiny set of features, users will choose the most visually appealing app, regardless of lack of features or the app being buggy.

      The under-the-covers stuff is important, but finding a good designer to make it pretty is the single most important thing you can do to make people choose your product. If it's pretty, people will put up with a lot of hassle and give you the time necessary to make it work reliably and have all the necessary features.

  • by sheldon ( 2322 ) on Monday January 12, 2009 @03:41PM (#26421595)

    Oh great a rant by someone who knows nothing, providing no insight into a problem.

    Must be an Op-Ed from a media pundit.

    And they wonder why blogs are replacing them?

  • cheap shot (Score:5, Informative)

    by Anonymous Coward on Monday January 12, 2009 @03:41PM (#26421607)

    I work at Microsoft. We use agile development and almost everybody I know here has read the Mythical Man Month. Get your facts straight before taking cheap shots in story submissions. Thanks.

  • by BarryNorton ( 778694 ) on Monday January 12, 2009 @03:44PM (#26421659)
    A waterfall process and object-oriented design and programming are orthogonal issues. The summary, at least, is nonsense.
  • Complete BS? (Score:5, Insightful)

    by DoofusOfDeath ( 636671 ) on Monday January 12, 2009 @03:45PM (#26421687)

    For the life of me, I can't figure out what the choice of {waterfall vs. cyclic} has to do with {writing code that checks for error return codes vs. not}.

    Waterfall vs. cyclic development is mostly about how you discover requirements, including what features you want to include. It also lets you pipeline the writing of software tests, rather than waiting until the end and doing it all in one big-bang approach. Whether or not you're sloppy about checking return codes, etc., is a completely separate issue.

    Despite the author's protests to the contrary, he really is mostly complaining incoherently about the way whipper-snappers approach software development these days.

  • Microsoft? (Score:5, Informative)

    by dedazo ( 737510 ) on Monday January 12, 2009 @03:46PM (#26421705) Journal

    Most of the teams I've had contact with inside the tools group at MS (in the last four years or so) use SCRUM.

    I don't know how widespread that is in other divisions (say the MSN/Live folks or the Microsoft.com teams) but that clever comment in the submission is nothing more than an ignorant cheap shot.

    Don't be so twitterish and make up crap about Microsoft. Get your facts straight or you just come across as an idiot.

  • by SparkleMotion88 ( 1013083 ) on Monday January 12, 2009 @03:48PM (#26421739)
    It is very common for people to blame all of the problems of software engineering on some particular methodology. We've been shifting blame from one method to another for decades, and the result is that we just get new processes that aren't necessarily better than the old ones.

    The fact is that software development is very difficult. I think there are several reasons why it is more difficult to develop robust software now than it was 20 years ago. Some of these reasons are:
    • The customer expects more from software: more features, flashier interfaces, bigger datasets -- all of these things make the software much more complicated. The mainframe systems of a few decades ago can't even compare to the level of complexity of today's systems (ok, maybe OS/360).
    • There is just more software out there. So any new software that we create is not only supposed to do its job, but also interconnect with various other systems, using the standards and interfaces of various governing bodies. This effect has been growing unchecked for years, but we are starting to counter it with things like SOA, etc.
    • The software engineering talent has been diluted. There, I said it. The programmers of 20-30 years ago were, on average, better than the programmers of today. The reason is we have more need for software developers, so more mediocre developers slip in there. Aggravating this issue is the fact that the skilled folks, those who develop successful architectures and programming languages, don't tend to consider the ability level of the people who will be working with the systems they develop. This leads to chronic "cargo cult programming" (i.e. code, test, repeat) in many organizations.
    • Software has become a part of business in nearly every industry. This means that people who make high-level decisions about software don't necessarily know anything about software. In the old days, only nerds at IBM, Cray, EDS, etc got involved with software development. These days, the VP of technology at Kleenex (who has an MBA) will decide how the new inventory tracking software will be built.

    I'm sure there are more causes and other folks will chime in.

  • by gillbates ( 106458 ) on Monday January 12, 2009 @03:49PM (#26421757) Homepage Journal

    As long as:

    • Consumers buy software based on flashy graphics and bullet lists of features, without regard for quality...
    • Companies insist on paying the lowest wages possible to programmers...
    • Programmers are rewarded for shipping code, rather than its quality...

    You will have buggy, insecure software.

    Fast. Cheap. Good. Pick any two.

    The market has spoken, and said that it would rather have the familiar and flashy than the secure and stable. Microsoft fills this niche. There are other niches, such as the stable and secure computer market, and they're owned by the mainframe and UNIX vendors. But these aren't as visible as the PC market, because they need not advertise as much; their reputation precedes them. But they are just as important as, if not more so than, the consumer market.

  • by hopopee ( 859193 ) on Monday January 12, 2009 @03:51PM (#26421789)
    I find it more than a little amusing that the summary mentions the waterfall method as a time-honored, excellent example of how all software should be made. Here's what the history of the non-existent waterfall method has to say about it... Waterfall Method does not exist! [idinews.com]
  • grumpy old coder (Score:4, Insightful)

    by girlintraining ( 1395911 ) on Monday January 12, 2009 @04:01PM (#26422011)

    It's just another "I have 40 years of experience doing X... Damn kids these days. Get off my lawn." Hey, here's something to chew on -- I bet he screwed up his pointers and data structures just as much when he was at the same experience level. Move along, slashdot, nothing to see here. I will never understand the compulsion to compare people with five years experience to those with twenty and then try to use age as the relevant factor. Age is a number... Unless you're over the age of 65, or under the age of about 14, your experience level is going to mean more in any industry. This isn't about new technology versus old, or people knowing their history, or blah blah blah -- it's all frosting on the poison cake of age discrimination.

    P.S. Old man -- reading a book won't make you an expert. Doubly so for programming books. I'd have thought you'd know that by now. Why not get off your high horse and side-saddle with the younger generation and try to impart some of that knowledge with a little face time instead?

    • Re: (Score:3, Insightful)

      by benj_e ( 614605 )

      Meh, it goes both ways. How many younger coders feel they are god's gift to the industry?

      Personally, I welcome anyone who wants to be a programmer. Show me you want to learn, and I will mentor you. I will also listen to your ideas and will likely learn something from your fresh insight.

      But show me you are an asshat, and I'll treat you accordingly.

  • by rickb928 ( 945187 ) on Monday January 12, 2009 @04:18PM (#26422275) Homepage Journal

    Several posters have alluded to this, but I blame the Internet. Just follow me here:

    - Back in the 'Old Days', productivity software at the time was dominated by WordPerfect, VisiCalc and then 1-2-3, and what else? MS-DOS as the operating system. Everything shipped on diskettes. There was no Internet.

    - Fixing a major bug in WordPerfect required shipping diskettes to users, usually under 'warranty'. Expensive, time-consuming, and fraught with uncertainty.

    - Fixing bugs in MS-DOS really wasn't done. It was a minor release. Again, diskettes everywhere, and more costs.

    - Patch systems were important. Holy wars erupted on development teams over conflicting patch methods, etc. Breaking someone else's code was punished. Features that weren't ready either waited for the next release or cost someone their job.

    Today, patches can be delivered 'automatically'. It takes how long, seconds, to patch something minor? Internet access is assumed. The 'ship-it-now' mentality is aided by this 'ease' of patching.

    If it weren't for Internet distribution, we would see real quality control. It would be a matter of financial survival.

    No, Internet distribution is not free. But it's both cheaper, I suspect, than shipping any media, and also less frustrating to a user than waiting at least overnight (more likely 5-7 days) for shipment.

    And it leads to the second distortion - Bug fixes as superior service. The BIG LIE.

    It is not superior service to post a patch overnight. It is not superior service to respond immediately to an exploit. It is a lie. Having to respond to another buffer overflow exploit after years (YEARS) of this is incompetence, either incompetence by design or incompetence of execution. This afflicts operating systems, software, utilities, nothing is innocent of this.

    The next time you marvel just a little bit when Windows or Ubuntu tells you that you were automatically updated, or that updates are awaiting your mere consent to be installed, remember - they just admitted your software was imperfect, and are asking you to take time to let their process, designed with INEVITABLE errors expected, perform its task and fix what should never have been broken to begin with.

    ps- I love Ubuntu. I cut it some slack 'cause you get what you pay for, and many who work on Ubuntu are unpaid, and any rapid response to problems is above expectations. Microsoft, Symantec, Adobe, Red Hat, to name a few, are not in that business. They purport to actually *sell* their products, and make claims to make money. When they fail to deliver excellent products, the lie of superior service is still a lie.

    Just the voice of one who remembers when it was different.

    pps - EULAs tell it all. I wish I had an alternative. Oh, wait, I do... nevermind, lemme get that Ubuntu DVD back out here and... Except at work...

    • Re: (Score:3, Insightful)

      by PitaBred ( 632671 )
      Don't forget that Ubuntu patches EVERYTHING on your system, versus the monthly Microsoft patches which are just the core OS and possibly Office. And since most of the developers of those other programs that Ubuntu patches are not on Ubuntu's payroll... well, they don't really have much control on whether something is patched or not. They'll patch some egregious things themselves and send them upstream, but... Anyway, you probably know all that. Just wanted to make sure it was clear to everyone else who may
  • by cylcyl ( 144755 ) on Monday January 12, 2009 @04:20PM (#26422307)

    The article link says top 25 errors....

  • Time-Honored WHAT?! (Score:3, Informative)

    by mihalis ( 28146 ) on Monday January 12, 2009 @04:33PM (#26422531) Homepage

    he lays the blame on PC developers (read: Microsoft) who kicked the time-honored waterfall model to the curb

    The waterfall model is long-discredited and its downfall has nothing to do with Microsoft. Ian Sommerville, in his Software Engineering book, was discussing this in the early 90s. I recall a diagram of the spiral model, which is very much like agile development, although we didn't call it that then.

  • by Stormie ( 708 ) on Monday January 12, 2009 @08:47PM (#26425623) Homepage

    I love the way Alex Wolfe blames shoddy programming on the PC industry, which apparently replaced the waterfall development model with "cramming in as many features as possible before the shipping cut-off date, and then fixing the problems in beta". He then goes on to reference two books, from 1971 and 1975 respectively, which provide wisdom regarding this problem.

    They're excellent books - but I wasn't aware that the PC industry was around in 1971. Could it be.. that bad programming practices have always been with us? That the PC isn't to blame?

"If it ain't broke, don't fix it." - Bert Lantz

Working...