Programming Bug IT Technology

More Than Coding Errors Behind Bad Software 726

An anonymous reader writes "SANS' just-released list of the Top 25 most dangerous programming errors obscures the real problem with software development today, argues InfoWeek's Alex Wolfe. In More Than Coding Mistakes At Fault In Bad Software, he lays the blame on PC developers (read: Microsoft) who kicked the time-honored waterfall model to the curb and replaced it not with object-oriented or agile development but with a 'modus operandi of cramming in as many features as possible, and then fixing problems in beta.' He argues that youthful programmers don't know about error-catching and lack a sense of history, suggesting they read Fred Brooks' 'The Mythical Man-Month' and Gerald Weinberg's 'The Psychology of Computer Programming.'"
This discussion has been archived. No new comments can be posted.

  • by pete-wilko ( 628329 ) on Monday January 12, 2009 @03:30PM (#26421401)
    Yeah, those good-for-nothing programmers cramming in features all over the place and not adhering to time-honored development practices like waterfall!

    And requirement changes? WTF are those? Using waterfall you specify your requirements at the beginning, and these are then set in stone, IN STONE! Nothing will ever change in 6-12 months.

    It's not like they're working 60-80 hour weeks, being forced to implement features, having new requirements added, and not being listened to! That would be like marketing driving engineering! Insanity!

    As an aside - why is he dragging OO into this? Pretty sure you can use waterfall with OO - you even get pretty diagrams.
  • Extensible Framework (Score:3, Interesting)

    by should_be_linear ( 779431 ) on Monday January 12, 2009 @03:32PM (#26421433)

    Most horrible projects I've seen were "extensible frameworks" that can do just about anything with the appropriate extensions (plugins or whatever). But currently, without any existing extensions, it is a bloated pile of crap. Also, there is nobody in sight willing to make even one extension for it (except for a sample, done by the author himself, showing how easy it is to create an extension).

  • by 77Punker ( 673758 ) <(ude.tniophgih) (ta) (40rcneps)> on Monday January 12, 2009 @03:35PM (#26421495)

    Three months into my first programming job, the other two developers quit, which left just me. I felt inadequate until I started interviewing other programmers to fill the gap. Apparently lead developers with 10 years of experience can't solve the simplest programming problems, explain how databases work, or explain OOP. I'm convinced that most software sucks because most people writing it have no idea what they're doing and shouldn't be allowed to touch a computer. I'm currently in my 5th month at the same job and we've got someone good who will start soon, but it took a long time to find even one competent developer.

  • by Opportunist ( 166417 ) on Monday January 12, 2009 @03:44PM (#26421671)

    Quite dead on. But the difference is that today, with all the RAD tools around, you have a fair lot of "programmers" who don't even know what they're doing and get away with it. They got some course, maybe at their school (and those are already the better ones of the lot), maybe as some sort of attempt by their unemployment services to get them back into a job, and of course tha intarwebz is where da money is, so everyone and their dog learned how to write a few lines in VB (or C#, the difference for this kind of code molester is insignificant) and now they're let loose on the market.

    Then you get hired by some company that signed up those "impressively cheap programmers, see, programmers needn't be horribly expensive" after the project goes south because deadlines lead to dead ends but no product, and you're greeted with code that makes you just yell out WTF? You get conditional branches that do exactly the same thing in every branch. You get loops that do nothing for 90% of their iterations, and when you ask about it you just get a blank stare and a "well, how else do you think we could count from 80 to 90 besides counting from 0 to 90 and 'continue;' for 0-79?", because that's how they learned it and they never for a nanosecond pondered just WHAT those numbers in the 'for' block meant. And so on.
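
    (A minimal C sketch of the loop anti-pattern described above, next to the obvious fix; the numbers are just the ones from the anecdote and the function names are made up.)

    /* The anti-pattern: iterate the whole range and skip most of it. */
    int sum_80_to_90_wasteful(void) {
        int sum = 0;
        for (int i = 0; i <= 90; i++) {
            if (i < 80)
                continue;       /* 80 of the 91 iterations do nothing */
            sum += i;
        }
        return sum;             /* 935 */
    }

    /* The fix: start the loop where the work actually starts. */
    int sum_80_to_90(void) {
        int sum = 0;
        for (int i = 80; i <= 90; i++)
            sum += i;
        return sum;             /* 935 */
    }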

    Granted, those are the extreme examples of people who learned programming like a poem. By heart. They have a hammer as a tool, so every problem has to be turned into a nail. But you'd be amazed at the blank stares and "what do I need that for?" when you ask some of those "programmers" about hash tables (include snide joke about Lady Mary Jane here...) or Big-O notation. And we're talking people who are supposedly able to write a database application here.

    This is the problem here. It's not youthful programmers. It's simply people who know a minimum about programming and managed to trick some HR goon into believing they could actually do it.

  • by SparkleMotion88 ( 1013083 ) on Monday January 12, 2009 @03:48PM (#26421739)
    It is very common for people to blame all of the problems of software engineering on some particular methodology. We've been shifting blame from one method to another for decades, and the result is that we just get new processes that aren't necessarily better than the old ones.

    The fact is that software development is very difficult. I think there are several reasons why it is more difficult to develop robust software now than it was 20 years ago. Some of these reasons are:
    • The customer expects more from software: more features, flashier interfaces, bigger datasets -- all of these things make the software much more complicated. The mainframe systems of a few decades ago can't even compare to the level of complexity of today's systems (ok, maybe OS/360).
    • There is just more software out there. So any new software that we create is not only supposed to do its job, but also interconnect with various other systems, using the standards and interfaces of various governing bodies. This effect has been growing unchecked for years, but we are starting to counter it with things like SOA, etc.
    • The software engineering talent has been diluted. There, I said it. The programmers of 20-30 years ago were, on average, better than the programmers of today. The reason is we have more need for software developers, so more mediocre developers slip in there. Aggravating this issue is the fact that the skilled folks, those who develop successful architectures and programming languages, don't tend to consider the ability level of the people who will be working with the systems they develop. This leads to chronic "cargo cult programming" (i.e. code, test, repeat) in many organizations.
    • Software has become a part of business in nearly every industry. This means that people who make high-level decisions about software don't necessarily know anything about software. In the old days, only nerds at IBM, Cray, EDS, etc got involved with software development. These days, the VP of technology at Kleenex (who has an MBA) will decide how the new inventory tracking software will be built.

    I'm sure there are more causes and other folks will chime in.

  • by bb5ch39t ( 786551 ) on Monday January 12, 2009 @03:49PM (#26421753)
    I'm an old timer, still working on IBM zSeries mainframes, mainly. We just got a new system, which runs on a combination of Linux and Windows servers, to replace an application which used to run on the mainframe. Nobody likes it. We are apparently a beta test site (though we were told it was production ready). It has a web administration interface. For some reason, for some users, the only PC that can update them is my Linux PC running Firefox. Nobody can say why. Until early last week, it would take a person a full 5 minutes to log in to the product. They made some change to the software and now it takes about 10 seconds. This is production quality? No, I won't say the product. Sorry. Against policy.
  • by Anonymous Coward on Monday January 12, 2009 @04:01PM (#26422005)

    The same is true in a way of software development.

    Back when I was in high school, I could write a program (on punch cards) and put them in the tray to be submitted. Every week the intra-school courier came around and picked up the tray, returning the tray and output from the previous week. When every typo adds 2 weeks to your development time, you check your code *very* carefully, and examine every error message or warning that comes back from the compiler, to try to fix as many errors as possible in each submission.

    With interactive compilers/interpreters, it is not worth spending that much time verifying the mechanical coding - just fix the first obvious problem and re-submit, because it is faster to let the compiler (1) confirm the parts you managed to type correctly and (2) clear out all of the messages that were cascaded issues from the mistake you just fixed, than it is to waste your time scanning for typos, or reading the subsequent error messages in case some of them are not cascades.

  • by curunir ( 98273 ) * on Monday January 12, 2009 @04:09PM (#26422119) Homepage Journal

    The even sadder truth is that when faced with a choice between the two apps you describe and a third, prettier application that is buggy and has a tiny feature set, users will choose the most visually appealing app, regardless of its missing features or its bugs.

    The under-the-covers stuff is important, but finding a good designer to make it pretty is the single most important thing you can do to make people choose your product. If it's pretty, people will put up with a lot of hassle and give you the time necessary to make it work reliably and have all the necessary features.

  • by 77Punker ( 673758 ) <(ude.tniophgih) (ta) (40rcneps)> on Monday January 12, 2009 @04:10PM (#26422149)

    "Write a function to sum all the numbers from 0 to 100"

    Every code question I ask is about that simple. The solutions I get to EASY questions are almost always really stupid or incorrect, or I just get "I don't know how to do that."
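
    (A minimal C sketch of the kind of straightforward answer such a question is presumably looking for; the function name is made up.)

    /* Sum all the integers from 0 to n inclusive, the obvious way. */
    int sum_to(int n) {
        int sum = 0;
        for (int i = 0; i <= n; i++)
            sum += i;
        return sum;             /* sum_to(100) == 5050 */
    }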

  • by curunir ( 98273 ) * on Monday January 12, 2009 @04:16PM (#26422247) Homepage Journal

    But for me, "good enough" is indeed good enough.

    It might be of interest to you that Voltaire came up with your conclusion 245 years ago, and a bit more eloquently as well:

    The perfect is the enemy of the good.

  • by rickb928 ( 945187 ) on Monday January 12, 2009 @04:18PM (#26422275) Homepage Journal

    Several posters have alluded to this, but I blame the Internet. Just follow me here:

    - Back in the 'Old Days', productivity software was dominated by WordPerfect, VisiCalc and then 1-2-3, and what else? MS-DOS as the operating system. Everything shipped on diskettes. There was no Internet.

    - Fixing a major bug in WordPerfect required shipping diskettes to users, usually under 'warranty'. Expensive, time-consuming, and fraught with uncertainty.

    - Fixing bugs in MS-DOS really wasn't done; fixes waited for a minor release. Again, diskettes everywhere, and more costs.

    - Patch systems were important. Holy wars erupted on development teams over conflicting patch methods, etc. Breaking someone else's code was punished. Features that weren't ready either waited for the next release or cost someone their job.

    Today, patches can be delivered 'automatically'. It takes how long, seconds, to patch something minor? Internet access is assumed. The 'ship-it-now' mentality is aided by this 'ease' of patching.

    If it weren't for Internet distribution, we would see real quality control. It would be a matter of financial survival.

    No, Internet distribution is not free. But it's both cheaper, I suspect, than shipping any media, and also less frustrating to a user than waiting at least overnight (more likely 5-7 days) for shipment.

    And it leads to the second distortion - Bug fixes as superior service. The BIG LIE.

    It is not superior service to post a patch overnight. It is not superior service to respond immediately to an exploit. It is a lie. Having to respond to another buffer overflow exploit after years (YEARS) of this is incompetence, either incompetence by design or incompetence of execution. This afflicts operating systems, software, utilities, nothing is innocent of this.

    The next time you marvel just a little bit when Windows or Ubuntu tells you that you were automatically updated, or that updates are awaiting your mere consent to be installed, remember - they just admitted your software was imperfect, and are asking you to take time to let their process, designed with INEVITABLE errors expected, perform its task and fix what should never have been broken to begin with.

    ps- I love Ubuntu. I cut it some slack 'cause you get what you pay for, and many who work on Ubuntu are unpaid, and any rapid response to problems is above expectations. Microsoft, Symantec, Adobe, Red Hat, to name a few, are not in that business. They purport to actually *sell* their products, and make claims to make money. When they fail to deliver excellent products, the lie of superior service is still a lie.

    Just the voice of one who remembers when it was different.

    pps - EULAs tell it all. I wish I had an alternative. Oh, wait, I do... nevermind, lemme get that Ubuntu DVD back out here and... Except at work...

  • by ColdWetDog ( 752185 ) * on Monday January 12, 2009 @04:34PM (#26422543) Homepage
    Oh it's obviously Notes. Nothing else needs a zSeries to run and 5 minutes to login.

    Gotcha.
  • by wgaryhas ( 872268 ) on Monday January 12, 2009 @04:35PM (#26422557)

    That should have been:

    /* Closed form: 0 + 1 + ... + x == x * (x + 1) / 2 (Gauss's formula),
       so SumNumbers(100) == 100 * 101 / 2 == 5050. */
    int SumNumbers(int x) {
        return x * (x + 1) / 2;
    }

  • by 77Punker ( 673758 ) <(ude.tniophgih) (ta) (40rcneps)> on Monday January 12, 2009 @04:45PM (#26422701)

    I'm just looking for a guy who can solve the problem at all and explain how he solved it. You know, just trace it by hand and tell me why he decided to approach the problem the way he did. I tend to go for "no wrong answer" types of questions. I count this question in that group, since the only wrong answers are "I can't do it" and writing an obviously wrong solution that he can't trace well enough to see is wrong, even after having it explained to him.

  • by truthsearch ( 249536 ) on Monday January 12, 2009 @04:56PM (#26422879) Homepage Journal

    This is especially true of web development. To patch a web application you don't even need to transmit a patch to clients; just update the web server. It's so easy to patch that many sites let the public use the code before any testing is done at all.

    I spent my first 10 years programming clients and servers for the financial industry. Now, as a web developer, I'm shocked at how hard it is to find programmers who strictly follow best practices [docforge.com].

  • by Anonymous Coward on Monday January 12, 2009 @05:00PM (#26422927)
    My first real "CS" course in college used cards for assembly programming (well, using a pseudo assembly language) for this very reason - it was supposed to be expensive/time-consuming to compile/load/run, so students were motivated to check their work before initiating that process. It was actually quite expensive to run, because the card punches took a lot of physical space and were costly to maintain compared to the VT100s which were all over the place in large terminal rooms and smaller satellite terminal rooms.

    It was a good idea -- but somewhere along the line the process had been "optimized" to save operator resources, which sort of weakened the concept. So, instead of pushing the card deck through the window (along with a good word or a little gift to the surly operator to reduce the risk of the deck getting dropped on the floor "accidentally"), and coming back three hours later to a heart-sinking single page of output (many of you old folks out there know what I'm talking about!), we just tossed the deck into the high-speed self-service card reader, listened for the self-service line printer six feet away to fire up, and pulled our listing off. The cost-saving optimization just made the whole process painful rather than instructive.

    Not to worry though, I was later "fortunate" enough to experience the real (or closer to it) world of batch processing when I worked one summer (in the very late 70's) for a big aerospace company. There, I got to experience the RJE station operator from central casting. Fortunately, the first hour on the job I was advised to be especially friendly to Joe the RJE Station Operator - advice I initially ignored but within a week began to follow after learning the hard way to listen more seriously to those with more experience.
  • by Anonymous Coward on Monday January 12, 2009 @05:19PM (#26423183)
    Ah, that explains the Motorola DVR box Comcast uses -- you know, the one that can just absorb remote clicks for several minutes doing nothing -- and then zip through the whole list of queued commands at breakneck speed.

    The cost of the number of hours wasted by customers across the country using that piece of crap must, collectively, be huge and way more than it would cost to fix (or, at least, mitigate) the problem. However, the customers individually absorb the cost like good little sheep (and sometimes have few alternatives if they want other than OTA programming).

    Although, it's likely the minimum wage fresh-out moronic programmers who wrote the code wouldn't have a clue how to fix or mitigate the problem even if they were asked to.
  • by aztracker1 ( 702135 ) on Monday January 12, 2009 @05:33PM (#26423373) Homepage
    I feel your pain. I've come across about 3 major instances where race conditions were a problem under load in a heavy-traffic website. I'm now on my third web application refactoring. In the first, I cut minutes out of a reporting process and changed the way data was pulled in from various DBMS resources. In the second, I changed the way settings data was looked up and persisted. At first glance it wasn't so bad, because each settings call was taking only 0.11 seconds. The problem was there were about 50 calls on each page request, and that added up to a major page that took about 8 seconds to load and was spiking the CPUs on the farm to 90% under our heaviest load. After the changes, the page load is well under 2 seconds, and CPU load under the heaviest user load is under 32%.

    I don't think a lot of people understand where resources get used, and how they get used, nearly as much as they should. Let alone security risks. In enterprise web applications I see people using static resources in their classes without understanding how to use them safely, or often even whether they should use them at all.
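
    (A rough C sketch of the kind of fix being described: cache the looked-up setting behind a lock, so 50 identical calls per page don't each pay the 0.11-second round trip and the shared static state is at least guarded. The fetch_setting_from_db call and all names here are made up, and a single-entry cache keeps the sketch short.)

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical expensive lookup, ~0.11 s per call in the anecdote above. */
    extern void fetch_setting_from_db(const char *key, char *out, size_t outlen);

    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
    static char cached_value[256];
    static int cache_valid = 0;

    /* Return the setting, hitting the database only on the first call. */
    const char *get_setting_cached(const char *key) {
        pthread_mutex_lock(&cache_lock);   /* guard the shared static state */
        if (!cache_valid) {
            fetch_setting_from_db(key, cached_value, sizeof cached_value);
            cache_valid = 1;
        }
        pthread_mutex_unlock(&cache_lock);
        return cached_value;
    }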
  • Re:Question (Score:4, Interesting)

    by ClosedSource ( 238333 ) on Monday January 12, 2009 @05:35PM (#26423389)

    Not all of us "old timers" think everything was great "back in the day" and shit now.

    As you say, what is being built now is much more ambitious than what we used to make. There were challenges then too, they were just different.

    Newer technologies can be a two-edged sword. Way back when a serious bug in an embedded system would require a new PROM or EPROM to be made and installed by a technician.

    Today you can download an update over the Internet in a few minutes. That convenience weakens a company's motivation to get it right the first time.

    Of course today your product probably relies on software that you didn't write and aren't familiar with. We used to write every byte we delivered and couldn't blame an error on libraries because we didn't use any. But you couldn't compete in the market that way now.

    The constant in this business is there will always be those who try to push the limits whatever they are.

  • Re:Its all true (Score:3, Interesting)

    by Xtifr ( 1323 ) on Monday January 12, 2009 @05:44PM (#26423527) Homepage

    Not necessarily. I'm well over 30, and I agree with him. Programming is a talent that people have or they don't, and age doesn't seem to be a big factor either way. Skill (as opposed to talent) can be acquired with age, but I'd rather have a 22 year old with great talent and instinct than a 40 year old stick-in-the-mud who's been dragging down his teams for the last 18 years. I'd even more rather have a 40 year old with great talent and instinct, but you take what you can get.

    Or to put it another way--just because the schools often seem to miss out on teaching certain useful skills, that doesn't mean that someone fresh out of school can't have acquired them by other means. For example, a kid might know what a linker is because she's been hacking her dad's Linux box since she was in grade school. (I have strong hopes that this will end up being a valid description of at least one of my nieces.) The 22 year old just might have more real-world practical experience hacking Debian than the 40 year old who spent years doing nothing but generating reports with RPG, Access and VB "programming".

  • by HeronBlademaster ( 1079477 ) <heron@xnapid.com> on Monday January 12, 2009 @06:26PM (#26424241) Homepage

    At the company I work for, I used to be on one of the software teams (now I'm in charge of IT infrastructure). When I was given a task, I would ask for details, and would receive vague comments that would only suffice for a vague implementation. Whenever I would ask for more details (knowing that the manager had them), my manager would say "just go implement what you can, then we'll change it later for what we need." No amount of begging would get more out of him.

    One particular piece of the software was reimplemented a grand total of five times (twice by me, three times by someone else); all of the information that would have prevented the rewrites was in the manager's possession before the first implementation was written (as he later admitted).

    Granted, the early versions were never released to the public (beyond a few development partners), but you can see how this mentality might cause poor code to end up in the "finished" product.

    I finally transferred to IT because that manager refused to allow me, or anyone else, to clean up poor code as I went about implementing new features. Incidentally, he did put some code cleanup tasks on the to-do list, but he assigned them to a guy who didn't want to do them, despite both me and the manager's boss asking that those tasks be assigned to me.

    Before anyone asks - no, I have no idea why I wanted to clean up old code. Maybe I'm insane. But I figure, if a guy has an itch to clean up old code, and you know it needs to be done, why on earth would you give it to a guy who doesn't want to do it instead of the guy who wants to do it (assuming equal skill sets)?

  • by Opportunist ( 166417 ) on Monday January 12, 2009 @07:41PM (#26424809)

    Best practices? Following any sensible practice would already be a blessing!

    Now, I know I'm jaded and cynical, but having to deal with people you get from temp agencies does that to you. It's the latest fad, and it has arrived at "coding jobs". Watch your productivity plummet, your code quality leap off the cliff, and your sanity follow close behind.

    First of all, what do you get from a temp agency? Hell, it's not like programmers have a really, really hard time finding a job around here. If they can code, they have a normal job. So what the hell do you think you get from a TA? Right. The sludge of the trade. The ones that cling to "doing something with computers" despite all odds and the fact that they simply can't.

    The few gems that you might get, because they're young and need some sort of work experience, are gone before you got them up to par. And that time is spent mostly trying to find some better (read: permanent) job. You'd be amazed how often people get "sick" just before they jump ship and are miraculously hired by some other company.

    The average turnover is 3 months. Now, I'm sure anyone who spent more than a week in programming knows what sort of productive work you get out of someone in his first three months (remember: You get either sludge or fresh meat, nobody with experience would willingly waste his time in a TA employment situation).

    After a year or two you end up with code written by about 20-30 different people, every single one of them caring more about their next lunch than about the project, with 20-30 different coding styles, and near-zero documentation.

    I left the company, too. I still have a tiny bit of sanity that I treasure.

  • by ozphx ( 1061292 ) on Monday January 12, 2009 @08:36PM (#26425511) Homepage

    I pull a 6-digit salary (even when you convert our shitty AUD to USD ;)) - and it's worse.

    If anything I am more rushed than I was when I was a junior dev. Managers flat out don't care what you are doing when you are on 50k. You could whack off all day if you like.

    As soon as you start earning more than your manager it all goes to shit. Possibly an ego thing - I don't really care about the reasons - but at every place I've had managers over my shoulder monitoring my hours, trying to cut schedules, etc. Where I'm consulting at the moment is a good case in point - every 3 weeks there's a new "final deadline". I think it's to try and motivate us to finish up (there's a good several months of work left) - all they are getting is band-aid fixes.

  • by Grishnakh ( 216268 ) on Monday January 12, 2009 @09:15PM (#26425965)

    The hardware designers were under the same sorts of pressures as the software guys, if not more so, and I saw many bugs that would end up in the shipping silicon. The general attitude was always "oh! a bug: well, the software guys will just have to work around it."

    Most of my work involves software workarounds for hardware bugs. The reason is a little different, though. My company, in its infinite wisdom, completed the very first revision of this particular chip, did a preliminary test and the core seemed to work (it's a system-on-chip), so they laid off the hardware design team! Then they started finding the many bugs in the hardware, and ran around looking for another design team to fix the bugs, but it took several years, and they had to ship millions of the buggy chips to customers, which are deployed in the field.

  • by mollymoo ( 202721 ) on Monday January 12, 2009 @09:32PM (#26426179) Journal

    First of all, what do you get from a temp agency? Hell, it's not like programmers have a really, really hard time finding a job around here. If they can code, they have a normal job. So what the hell do you think you get from a TA? Right. The sludge of the trade. The ones that cling to "doing something with computers" despite all odds and the fact that they simply can't.

    I used to do contract work (if that's what you mean by "temp agency"), not because I couldn't get a permanent job - I was offered a permanent position at every single place I did contract work - but because being paid twice as much as you'd get in a permanent position, getting to work on a variety of projects with a variety of companies and being able to take several months a year off is just a nicer way of working.

  • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday January 12, 2009 @10:34PM (#26426825) Homepage

    xor ax,ax
    xor bx,bx
    inc bx ;_loop (offset 0104 when assembled at 0100)
    add ax,bx
    cmp bx,64 ;64 hex = 100 decimal
    jb 0104 ;_loop

    You damn wasteful kids, just using a CMP like it doesn't cost anything. For the love of God, can't you just load bx with the upper boundary and use dec/jnz in that loop? Been faster since day one, and both instructions execute in one cycle on the Pentium and later. Makes it easier to turn into a subroutine, too--make setting bx the first instruction, then you can just put the number you want to sum up to into bx and jump to the second instruction.

    Don't even get me started on how you just throw in a div there like they're giving them away for free at the store or something.
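
    (In C terms - just an analogue of the dec/jnz suggestion, not the poster's code - counting down to zero lets the loop test fall out of the decrement itself.)

    /* Sum 1..n by counting down; the termination test is simply "did we hit
     * zero?", which is what dec/jnz exploits at the instruction level. */
    unsigned sum_down(unsigned n) {
        unsigned sum = 0;
        while (n != 0) {
            sum += n;
            n--;                /* dec */
        }                       /* jnz: keep looping while the result is nonzero */
        return sum;
    }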

  • Re:Waterfall (Score:3, Interesting)

    by recharged95 ( 782975 ) on Monday January 12, 2009 @11:29PM (#26427395) Journal
    Obviously you were working on the consumer-commercial side of waterfall practice.


    On the DoD Space Systems side, we had stuff like:

    • A Requirements Traceability Matrix
    • PDR: Preliminary Design Review with the Customer
    • CDR: Critical Design Review with the Customer
    • IOC: Initial Op capability
    • FAT: Preliminary Acceptance Testing
    • FOC: Final Op capability
    • FAT: Final Acceptance Testing

    And never had a launch delay due to writing software last minute. And as long as the h/w flew fine, we had birds that ran in space for years. Years.


    The Waterfall model "failed" because (a) it got too expensive due to competition (mainly smaller firms that cut corners, resulting in more failures) and it wasn't "faster, cheaper"; and (b) all the OO and Agile guys were better salesmen.


    I bet DeMarco is enjoying the responses.
