
More Than Coding Errors Behind Bad Software

An anonymous reader writes "SANS' just-released list of the Top 15 most dangerous programming errors obscures the real problem with software development today, argues InfoWeek's Alex Wolfe. In More Than Coding Mistakes At Fault In Bad Software, he lays the blame on PC developers (read: Microsoft) who kicked the time-honored waterfall model to the curb and replaced it not with object-oriented or agile development but with a 'modus operandi of cramming in as many features as possible, and then fixing problems in beta.' He argues that youthful programmers don't know about error-catching and lack a sense of history, suggesting they read Fred Brooks' 'The Mythical Man-Month,' and Gerald Weinberg's 'The Psychology of Computer Programming.'"
  • by alain94040 ( 785132 ) * on Monday January 12, 2009 @03:19PM (#26421159) Homepage

    The most common errors: SQL injection, command injection, cleartext transmission of Sensitive Information, etc.

    People make mistakes. Software needs to ship, preferably yesterday.

    How much would it cost to have perfect software? I happen to have worked in an industry that requires perfect coding. So I can imagine what it would look like if Microsoft tried it.

    The debugger would cost half a million dollars per seat (gdb is free). There would be an entire industry dedicated to analyzing your source code and doing all kinds of proofs, coverage, what-if analysis, and other stuff that requires Ph.D.s to understand the results.

    The industry I'm referring to is the chip industry. Hardware designers code pretty much like software developers (except the languages they use are massively parallel, but apart from that, they use the same basic constructs). Hardware companies can't afford a single mistake because once the chip goes to fab, that's it. No patches like software, no version 1.0.1.

    It's just not practical. Let the NSA order special versions of Office that cost 10 times the price and ship three years after the consumer version.

    But for me, "good enough" is indeed good enough.

    --
    FairSoftware.net [fairsoftware.net] -- work where geeks are their own boss

  • by PingXao ( 153057 ) on Monday January 12, 2009 @03:24PM (#26421261)

    In the early '80s there were no "older" programmers unless you were talking mainframe data processing. On microprocessor CPU systems the average age was low, as I recall. Back then we didn't blame poor software on "youthful programmers". We blamed it on idiots who didn't know what they were doing. I think it's safe to say that much hasn't changed.

  • Waterfall (Score:5, Insightful)

    by Brandybuck ( 704397 ) on Monday January 12, 2009 @03:25PM (#26421273) Homepage Journal

    The waterfall method is still the best development model. You have to analyze, then plan, then code, then test, then maintain. The steps need to be in order and you can't skip any of them. Unfortunately waterfall doesn't fit into the real world of software development because you can't freeze your requirements for so long a time. But cyclic models are a good second place, because they are essentially iterated waterfall models. When you boil all the trendy stuff out of Agile, you're basically left with a generic iterated waterfall, which is why it works. The trendy crap is just so you can sell the idea to management.

  • Re:Waterfall (Score:5, Insightful)

    by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Monday January 12, 2009 @03:29PM (#26421385) Homepage Journal

    The waterfall method is still the best development model. [...] Unfortunately waterfall doesn't fit into the real world

    WTF? Not working in the real world makes it a crap model.

    When you boil all the trendy stuff out of Agile, you're basically left with a generic iterated waterfall, which is[...]

    ...not a waterfall.

  • Re:Waterfall (Score:1, Insightful)

    by Anonymous Coward on Monday January 12, 2009 @03:30PM (#26421397)

    The waterfall method is still the best development model.

    I agree and that's why I painted over the windshield on my car and drive everywhere by dead reckoning. Pre-planning is the way to go for everything, I say!

    (CAPTCHA: unaware. How appropriate.)

  • Re:Modus Operandi (Score:3, Insightful)

    by John Hasler ( 414242 ) on Monday January 12, 2009 @03:30PM (#26421403) Homepage

    Except that what you actually do is promise to paint it red even though you know that you do not have and cannot get any red paint. Then you deliver it green and try to tell the customer he is colorblind and besides the next model really will be red.

  • by Opportunist ( 166417 ) on Monday January 12, 2009 @03:32PM (#26421443)

    The problem is that software doesn't even ship as "good enough" anymore. It's more like "it compiles, ship it".

    Your example of hardware, and how it's impossible to patch it, was once true for software too, to a point. Before it became easy to distribute software patches via the internet, companies actually invested a lot more time in testing. Why? Because yes, you could technically patch software, but doing so sometimes carried horrible costs.

    You can actually see a similar trend with the parts of hardware (i.e. BIOSes) that are patchable. Have you ever seen hardware shipped with all BIOS options fully enabled and working? I haven't in the past 2-3 years. More often than not you get a "new" board or controller with the predecessor's BIOS flashed in, and the promise of an update "really soon now".

    The easier it is to patch something, the sloppier the original implementation gets. You'd see exactly the same thing with hardware if it wasn't so terribly hard (read: impossible) to rewire that damn printed circuit. I dread the day they find some way to actually do it. Then the same thing will apply to hardware that already applies today to some OSs: it's not done until it reads "SP1" on the cover.

  • by overshoot ( 39700 ) on Monday January 12, 2009 @03:33PM (#26421459)
    ... are destined for greatness, because their bullshit is not burdened by reality.

    I've heard from several ex-Softies that the company inculcates its recruits with a serious dose of übermensch mentality: "those rumors about history and 'best practices' are for lesser beings who don't have the talent that we require of our programmers." "We don't need no steenking documentation," in witness whereof their network wireline protocols had to be reverse-engineered from code by what Brad Smith called 300 of their best people working for half a year.

    However, I'll note that they were right: anyone who wants to say that they did it wrong should prove it by making more money.

  • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Monday January 12, 2009 @03:34PM (#26421483) Homepage Journal

    I happen to have worked in an industry that requires perfect coding. [...] The industry I'm referring to is the chip industry. [...] Hardware companies can't afford a single mistake because once the chip goes to fab, that's it. No patches like software, no version 1.0.1.

    What does "stepping: 9" in /proc/cpuinfo on my home computer mean? What was the f00f bug, and what happened with the TLB on the early Phenom processors?

  • by Skapare ( 16644 ) on Monday January 12, 2009 @03:35PM (#26421489) Homepage

    This is true of any group. There are geniuses and idiots in all groups. The problems exist because once the supply of geniuses has been exhausted, businesses tap into the idiots. And this is made worse when employers want to limit pay across the board based on what the idiots were accepting. Now they are going overseas to tap into cheaper geniuses, who are also running out, and in the meantime lots of local geniuses have moved on to some other career path because they didn't want to live at the economic level of an idiot.

  • Users are to blame (Score:5, Insightful)

    by Chris_Jefferson ( 581445 ) on Monday January 12, 2009 @03:36PM (#26421499) Homepage
    The sad truth is, given the choice between a well-written, stable and fast application with a tiny set of features and a giant slow buggy program with every feature under the sun, too many users choose the second.

    If people refused to use and pay for buggy applications, they would either get fixed or die off.

  • by Jason1729 ( 561790 ) on Monday January 12, 2009 @03:38PM (#26421535)
    People make mistakes. Software needs to ship, preferably yesterday.

    This attitude is the number one problem with software development. When all you care about is getting it out the door, you send garbage out the door.

    Software today is so damn buggy. I spend hours a week just doing the work of getting my computer to work. And even then it has random slowdowns and crashes.

    I'm old enough to remember when it wasn't like that. You'd run your program and it was ready in a second, you'd exit and it left no trace. Crashes were virtually unheard of. We have people where I work who only do data entry, and they still use WordPerfect 4.2 on 386 hardware. I've seen their workflow and how fast it works for them and I can see if they "modernized" it would cripple their productivity.

    And for the money at stake, what's so wrong with hiring a few Ph.D.s to analyze code? Amortized over a few million copies, a few six-digit salaries aren't so bad. And the goodwill software shops lose when their products fail costs them even more.
  • by sheldon ( 2322 ) on Monday January 12, 2009 @03:41PM (#26421595)

    Oh great a rant by someone who knows nothing, providing no insight into a problem.

    Must be an Op-Ed from a media pundit.

    And they wonder why blogs are replacing them?

  • by Anonymous Coward on Monday January 12, 2009 @03:43PM (#26421645)

    Most companies simply refuse to spend the money to get it right. The reason that early programmers didn't have as many bugs is that their development efforts had virtually unlimited funding to resolve errors, because a bug in the system was far more expensive relative to the cost of development (compared with today, where you can reboot the machine and try again in 5 minutes "for free").

  • by BarryNorton ( 778694 ) on Monday January 12, 2009 @03:44PM (#26421659)
    A waterfall process and object-oriented design and programming are orthogonal issues. The summary, at least, is nonsense.
  • Complete BS? (Score:5, Insightful)

    by DoofusOfDeath ( 636671 ) on Monday January 12, 2009 @03:45PM (#26421687)

    For the life of me, I can't figure out what the choice of {waterfall vs. cyclic} has to do with {writing code that checks for error return codes vs. not}.

    Waterfall vs. cyclic development is mostly about how you discover requirements, including what features you want to include. It also lets you pipeline the writing of software tests, rather than waiting until the end and doing it in one big-bang push. Whether or not you're sloppy about checking return codes, etc., is a completely separate issue.

    Despite the author's protests to the contrary, he really is mostly complaining incoherently about the way whipper-snappers approach software development these days.

  • by gillbates ( 106458 ) on Monday January 12, 2009 @03:49PM (#26421757) Homepage Journal

    As long as:

    • Consumers buy software based on flashy graphics and bullet lists of features, without regard for quality...
    • Companies insist on paying the lowest wages possible to programmers...
    • Programmers are rewarded for shipping code, rather than its quality...

    You will have buggy, insecure software.

    Fast. Cheap. Good. Pick any two.

    The market has spoken, and said that it would rather have the familiar and flashy than the secure and stable. Microsoft fills this niche. There are other niches, such as the stable-and-secure computer market, and they're owned by the mainframe and UNIX vendors. But these aren't as visible as the PC market, because they need not advertise as much; their reputation precedes them. Still, they are just as important as, if not more important than, the consumer market.

  • by Anonymous Coward on Monday January 12, 2009 @03:51PM (#26421787)

    The biggest impact on software quality is putting the release schedule in the hands of businessmen. Speaking as a former (long ago) MS SDE, the coders I worked with there were at least as good as a random developer (frequently /much/ better). However, their job is to code things in triaged order, not to make release schedule decisions. When the execs tell everyone to stop typing and RTM, then that's it. The state of the software is generally known prior to ship because of their full-time /real/ QA teams, with ad-hoc testing, automation, and metrics that are all much better than on other teams I've been on before or since. Don't rag on the MS devs for their suits' decisions to release with known bugs.

  • Re:Waterfall (Score:5, Insightful)

    by radish ( 98371 ) on Monday January 12, 2009 @03:52PM (#26421813) Homepage

    The waterfall is broken, seriously. I'm paraphrasing from an excellent talk I attended a while back, but here goes.

    For a typical waterfall you're doing roughly these steps: Requirements Analysis, Design, Implementation, Testing, Maintenance. So let's start at the beginning...requirements. So off you go notebook in hand to get some requirements. When are you done? How do you know you got them all? Hint: you will never have them all, and they will keep changing. But you have to stop at some point so you can move onto design, so when do we stop? Typically it's when we get to the end of the week/month/year allocated on the project plan. Awesome. Maybe we've got 50% of the reqs done, maybe not. It'll be a long time until we find out for sure...

    Next up - Design! Woot, this bit is fun. So we crank up Rose or whatever and get to work. But when do we stop? Well again, that's tough. Because I don't know about you but I can design forever, getting better and better, more and more modular, more and more generic, until the whole thing has flipped itself inside out. So we stop when it's "good enough" - according to who? Or more likely, it's now 1 week to delivery and no-one's written any code yet so we better stop designing!

    Implementation time. Well at least this time we know when we're done! We're up against it time wise though, because we did such a good job on Reqs & Design. Let's pull some all nighters and get stuff churned out pronto, who cares how good it is, no time for that now. That lovely, expensive design gets pushed aside.

    No time to test...gotta hit the release date.

    Sure this isn't the waterfall model as published in the text books, but it's how it works (fails) in real life. And the text books specifically don't say how to fix the problems inherent in the first two stages. What to do instead? Small, incremental feature based development. Gather requirements, assign costs to them, ask the sponsors to pick a few, implement & test the chosen features, repeat until out of time or money.

  • by Schuthrax ( 682718 ) on Monday January 12, 2009 @03:56PM (#26421903)

    You'd run your program and it was ready in a second, you'd exit and it left no trace. Crashes were virtually unheard of.

    And all without managed memory, automatic garbage collection, etc., imagine that! Seriously, I see so many devs (and M$, who has a product to sell) insisting that all that junk is what will save us. What they're doing is attempting to create a Fisher Price dev environment where you don't have to think anymore because they've done it all for you. What's going to happen to this world when GenC# programmers replace the old guard and they don't have the least clue about what is going on inside the computer that makes the magic happen?

  • Re:Waterfall (Score:4, Insightful)

    by Brandybuck ( 704397 ) on Monday January 12, 2009 @04:00PM (#26421987) Homepage Journal

    You can't create quality software without planning before coding. Ditto for not testing after coding. This isn't rocket science, yet too many "professionals" think all they need to do is code.

    The waterfall model isn't a management process, it's basic common sense. It's not about attending meetings and getting signatures, it's about knowing what to code before you code it, then verifying that that is what you coded. The classic waterfall took too long because you had to plan Z before you started coding A, but with an iterated waterfall (which is still a waterfall, duh) you only need to plan A before you code A.

  • grumpy old coder (Score:4, Insightful)

    by girlintraining ( 1395911 ) on Monday January 12, 2009 @04:01PM (#26422011)

    It's just another "I have 40 years of experience doing X... Damn kids these days. Get off my lawn." Hey, here's something to chew on -- I bet he screwed up his pointers and data structures just as much when he was at the same experience level. Move along, slashdot, nothing to see here. I will never understand the compulsion to compare people with five years experience to those with twenty and then try to use age as the relevant factor. Age is a number... Unless you're over the age of 65, or under the age of about 14, your experience level is going to mean more in any industry. This isn't about new technology versus old, or people knowing their history, or blah blah blah -- it's all frosting on the poison cake of age discrimination.

    P.S. Old man -- reading a book won't make you an expert. Doubly so for programming books. I'd have thought you'd know that by now. Why not get off your high horse and side-saddle with the younger generation and try to impart some of that knowledge with a little face time instead?

  • by erroneus ( 253617 ) on Monday January 12, 2009 @04:02PM (#26422025) Homepage

    I have to concur with the other "old timers." I am 40 years old and have been in this world since I was around 10 or so. It has been a rather long time since I did any serious programming, but I find myself hacking and tweaking from time to time, and I vividly recall the kind of thinking I had to engage in to write software that worked. VALIDATE INPUT. VALIDATE INPUT. VALIDATE INPUT. There is little more to writing good code than that. Actually, there is plenty more, but where security is concerned, that should be task #1. The move to object-oriented code should not have changed this practice. In theory, validating input should now be handled by the object, but that isn't always the case, and good programmers should know better than to trust "black boxes" to do what they are supposed to do. So the other side of it is "VALIDATE OUTPUT" as well.

    I find this remarkable disregard for fundamentals a bit unsettling... it is as unsettling as doctors who prescribe drugs without first doing a diagnosis.
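
    As a concrete illustration of the "validate input" mantra, here is a minimal C sketch of my own (not something from the comment above; the function name and the port range are assumptions) showing defensive parsing of a numeric field instead of trusting atoi():

        #include <errno.h>
        #include <stdlib.h>

        /* Parse a decimal port number. Returns 0 and stores the value in *out,
           or -1 if the input is empty, has trailing junk, overflows, or is out
           of range. */
        int parse_port(const char *s, long *out)
        {
            char *end = NULL;
            errno = 0;
            long v = strtol(s, &end, 10);
            if (errno == ERANGE || end == s || *end != '\0')
                return -1;              /* overflow, empty string, or garbage */
            if (v < 1 || v > 65535)
                return -1;              /* outside the valid port range */
            *out = v;
            return 0;
        }

    The same shape works for "validate output": check the result of every call that can fail before handing its data to the next layer.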

  • Re:Its all true (Score:5, Insightful)

    by Cornflake917 ( 515940 ) on Monday January 12, 2009 @04:06PM (#26422073) Homepage

    I think refusing to hire someone solely because of their age is naive. Is there some magical event at the age of 30 that bestows knowledge of linkers to the aging programmer? Give me a break. You are making bad assumptions. Your first bad assumption is that just because of your anecdotal experience dealing with one individual, that all schools no longer teach anything about linkers. Your second bad assumption is that even if that was true, no programmer would learn that information on their own, as if no one is generally interested in learning comp sci any more outside of the classroom.

  • by samkass ( 174571 ) on Monday January 12, 2009 @04:15PM (#26422217) Homepage Journal

    Even in C# and Java there are experts who do know what's going on. They will replace the old guard. The rest will be people for whom contributing to software development was completely impossible before, but who can now do the basics (i.e., designers, information visualization experts, etc.).

    And of the top programming errors, many of them still apply to Java and C#. But some don't, and I see that as a positive step. I do Java in my day job, and iPhone development on the side. And while nothing in the industry beats Interface Builder, Objective-C is pretty horrible to develop in when you're used to a modern language and IDE...

  • by jandrese ( 485 ) <kensama@vt.edu> on Monday January 12, 2009 @04:17PM (#26422255) Homepage Journal
    It doesn't help that most programming languages are lousy at validating input. C is especially bad, as it has poor pattern matching capabilities by default and no dynamically sized structures. Worse, it offers absolutely no heap checking capabilities, meaning that at the end of the day your function is forced to trust the input instead of verifying it for itself. You can use fgets() with a buffer size listed, but if that passed buffer size is wrong for whatever reason, there is no way in the language for fgets() or anything else to safely handle or even detect it.

    That said, it's not impossible to write safe C code. In fact it's not even all that difficult, but it does mean that even small, otherwise unrelated errors in your code can turn into surprising security problems.
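
    To make the fgets() point concrete, here is a minimal sketch (my own illustration, assuming nothing beyond standard C): the closest the language gets to safety is keeping the buffer and its length physically tied together with sizeof on a real array, because once the array decays to a pointer in a callee, that size information is gone -- which is exactly the complaint above.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char line[128];
            /* sizeof works here only because "line" is a real array in this
               scope; fgets() itself has no way to verify the size we pass. */
            if (fgets(line, sizeof line, stdin) == NULL)
                return 1;
            line[strcspn(line, "\n")] = '\0';   /* strip the newline, if any */
            printf("read %zu bytes: %s\n", strlen(line), line);
            return 0;
        }
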
  • Question (Score:4, Insightful)

    by coryking ( 104614 ) * on Monday January 12, 2009 @04:23PM (#26422355) Homepage Journal

    When you were working on those punch cards, using your green screen console (kids these days with color monitors and mice), what were you doing?

    Did you ever transcode video and then upload it to some website called Youtube on "the internet"? Did you then play it back in a "web browser" that reads a document format that your grandma could probably learn? Did your mainframe even have "ethernet"? Or is that some modern fad that us kids use but will probably pass and we'll all go back to "real" computers with punch cards.

    Did you ever have to contend with botnets, spyware or any of that? And don't say "if we used The Right Way Like When I Was Your Age, we wouldn't have those things because software would be Designed Properly", because if we used "The Right Way" like you, software would take so long to develop and cost so much that we wouldn't even have the fancy systems that make malware possible in the first place.

    Old timers crack me up. Ones that are skeptical of object oriented programming. Ones who think you can waterfall a huge project. I'd like just one of them to run a startup and waterfall Version 1.0 of their web-app (which, they wouldn't because the web is a fad, unlike their punch cards).

    Sorry to be harsh, but get with the times. Computing these days is vastly more complex than back in the "good old days". Your 386 couldn't even play an mp3 without pegging the CPU, let alone a flash video containing one.

    I've seen their workflow and how fast it works for them and I can see if they "modernized" it would cripple their productivity.

    Until they try to bring in new-hires. How long does it take to train somebody who is used to modern office programs to use a DOS program like wordperfect? You think they'll ever get as proficient when what they see isn't what they get (a fad, I bet, right?)

    Again, sorry to sound so harsh. You guys crack me up. Don't worry though, soon enough we'll see the error of our ways and go back to time-honored methods like waterfall. We'll abandon "scripting languages" like Ruby or C and use assembler like god intends.

    Sheesh.

  • by Surt ( 22457 ) on Monday January 12, 2009 @04:39PM (#26422623) Homepage Journal

    Do you care whether they write a loop or return n*(n+1)/2? (where n=100 in this case?)

    (curious whether you are looking for the person who knows the clever solution, or the guy who can write a basic loop).
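
    Concretely, the two candidate answers look like this -- a trivial C sketch, purely for illustration:

        int sum_by_loop(void)
        {
            int total = 0;
            for (int i = 0; i <= 100; i++)
                total += i;
            return total;
        }

        int sum_by_formula(void)
        {
            int n = 100;
            return n * (n + 1) / 2;   /* Gauss: 5050 */
        }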

  • by TheLink ( 130905 ) on Monday January 12, 2009 @04:40PM (#26422635) Journal
    So how does full warranty work for OSS software?
  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Monday January 12, 2009 @04:44PM (#26422689) Homepage

    What they're doing is attempting to create a Fisher Price dev environment where you don't have to think anymore because they've done it all for you.

    They don't have much of a choice. Since about (rand[100]) 97% of all computer schools are absolute garbage, we end up with a bunch of Fisher Price developers who can type out Java bullshit at a steady 120wpm, but the resultant app makes zero sense as they fall into the trap of "checklist development". The app does "this, and that, and that too" but does them all sloppily and unreliably. Then we have managers who review the checklist line by line and sign off because it "passes spec". They completely miss the true purpose of the app in the first place: to fulfill a human need.

    If an app is unusable because of bugs or a nonsensical interface, then I don't care what the checklist says, it is a failure!

  • The Brink's-Mat robbery, in which thieves brazenly stole truck-loads of gold bullion, and the two gigantic heists at Heathrow Airport, were all 100% correct - from the perspective of the thieves. And given that next to nothing from any of these three gigantic thefts was ever recovered, I think most people would agree that the implementation was damn-near flawless. As economic models go, though, they suck. They are so utterly defective in any context other than the very narrow one for which they were designed, and even then only from the very narrow, short-sighted perspective of the thief, that they have zero reusability.

    Just because something makes a person rich doesn't mean that something is worth copying, would even work for someone else, or could ever be to the advantage of anyone but the original person. It is equally flawed logic to assume that "correct" behaviour could be equally profitable to one-off flawed behaviour. Flaws can make some damn good income, short-term, even if they must degenerate in the long-term.

    You will find socially-irresponsible companies have the fastest-rising stocks in the stock-market. Well, until they crash and burn. Cults are the most profitable of all societies. Until they self-destruct. Extremists have the ultimate in work ethics, until they die from the strain. In the Dark Ages, you could build far and away more wooden palisades than you could build stone fortifications. Well, until the builders all died horribly from Greek Fire (early napalm).

    And nobody will ever make more money than Microsoft by writing responsibly and sensibly. In the long term, Microsoft's code is unsupportable and its methodology is unscalable. Vista showed that they are approaching (but have not yet reached) the limits. They might not reach those limits for another 10-20 years, but they will someday reach them in such a way that they have blocked off all avenues of escape.

    A software company producing a good, solid product won't necessarily make much money in any given year, now or ever, but if the design is truly as good as all that, it will sell longer and require less maintenance. Instead of a 50-year lifespan for the company, you might well be looking at a 250-year lifespan or more. (There are companies today that are much older than that.) I'd say that such a company can legitimately say that it is doing things right, because they meet the ultimate requirement: will you adapt or will you die?

    It is also possible that a company that lasts that long will actually end up making more money than Microsoft (as a sum of gross revenue, not net), even if they never matched the sorts of sales Microsoft have achieved and even if they never end up as rich because they spent that resource on improving their product over and above improving their bank balance. Failure to invest is not an admirable quality in my book, nor is cutting corners. In terms of social evolution and being able to adapt to new environmental pressures, Microsoft is an evolutionary dead-end. Massively successful in the short-term, but incapable of survival on a meaningful timeframe.

  • by jeko ( 179919 ) on Monday January 12, 2009 @04:52PM (#26422821)

    I bought three Mercedes. Two of them got repossessed. Now, the dealers won't finance me when I go to buy another. Clearly, there is a shortage of Mercedes.

    Look at your story. You had three programmers. Two quit (Yeah, I know, it wasn't because they were unhappy. Look, no one wants to be known as a malcontent or difficult. They lied to you.) Now, you can't get anyone in to interview who knows what they're doing.

    You think maybe it's possible that your company's reputation precedes it? I know of half a dozen places in my town that nobody in their right mind would agree to work for.

    Show me a man who says he can't find anyone to hire, and I'll show you a man nobody wants to work for.

    Take that same man, triple the wages he's offering and wire a pacifier into his mouth and the ghosts of Ada Lovelace and Alan Turing will fight for the interview.

  • by Opportunist ( 166417 ) on Monday January 12, 2009 @04:53PM (#26422835)

    What you describe is a wannabe-genius. A showoff. Not a genius.

    The code that makes me mutter "that's pure genius" is usually not the kind of code you can't understand, or that is long-winded, twisted, and a worthy entry for the Obfuscated C Code Contest. It's usually code that is brilliantly simple yet very functional, fast, and easy to understand.

    Genius code isn't the code that takes a year to read and a decade to understand. It's the code that makes you wonder why you didn't come up with it, because it, well, looks so simple that it's pure genius. A good example of "pure genius" code is the square root function that relies on the quirks of the IEEE 754 format. It only works with 32-bit IEEE 754 floats, but there it works and is fast.

    Pure genius code doesn't mean it has to be complicated, quite the opposite. "Pure genius" is mostly attributed to finding a new, better way to do something, not to making it overly complicated so it looks like you did something great.
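
    Presumably this refers to the well-known fast inverse-square-root trick (popularized by Quake III). A sketch, assuming 32-bit IEEE 754 floats and nothing else; the memcpy bit-pun keeps it free of strict-aliasing trouble:

        #include <stdint.h>
        #include <string.h>

        float fast_rsqrt(float x)
        {
            uint32_t i;
            float y = x;
            memcpy(&i, &y, sizeof i);           /* reinterpret the float's bits */
            i = 0x5f3759df - (i >> 1);          /* magic constant: initial guess */
            memcpy(&y, &i, sizeof y);
            y = y * (1.5f - 0.5f * x * y * y);  /* one Newton-Raphson refinement */
            return y;                           /* approx. 1/sqrt(x) */
        }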

  • by Chabo ( 880571 ) on Monday January 12, 2009 @04:53PM (#26422839) Homepage Journal
    Well that's unfair...

    First off, I started off as self-taught, then moved on to get a B.S. in Computer Science (is there a school that offers a B.A.?). Would you fault me for getting an education?

    Second, I started teaching myself at age 18. You'd reject me simply because I started six years later?

    Just because it's harder to learn something later in life doesn't mean you can't learn it, whether it be French, Italian, or C.
  • by Gordonjcp ( 186804 ) on Monday January 12, 2009 @05:02PM (#26422947) Homepage

    Sometimes the right answer is "do it the stupid way and buy a faster computer"...

    Look at it this way: if I cost £50 an hour, and a new fast-as-blazes CPU costs £100, will two hours of my work make as much improvement as putting a faster chip in? If not, I can go and do some useful coding and you get new shiny to play with.

  • Don't forget that Ubuntu patches EVERYTHING on your system, versus the monthly Microsoft patches, which cover just the core OS and possibly Office. And since most of the developers of those other programs that Ubuntu patches are not on Ubuntu's payroll... well, Ubuntu doesn't really have much control over whether something is patched or not. They'll patch some egregious things themselves and send the fixes upstream, but... Anyway, you probably know all that. Just wanted to make sure it was clear to everyone else who may not use it ;)
  • by Junior J. Junior III ( 192702 ) on Monday January 12, 2009 @05:08PM (#26423033) Homepage

    The problem is that software doesn't even ship as "good enough" anymore. It's more like "it compiles, ship it".

    I don't know, if you ask me, software is about a million times better than it used to be. I've never been a user of a mainframe system, and I understand they were coded to be a lot more reliable than desktop class microcomputers. But having started using computers in the early 80's as a small child, and seeing where we are now, there's just no comparison.

    It used to be that computers were slow, crashed all the time on the slightest problem, and ran into trouble very frequently. Today we have many of these same problems, but much less frequently and much less severely. An application crash no longer brings down the entire system. When an application does crash, there is usually no data loss, and it is able to recover most if not all of your recent progress, even if you didn't save. Crashes happen far less frequently, and the systems run faster than they used to, even with a great many more features and all the bloat we typically complain about.

  • Re:Question (Score:1, Insightful)

    by C0vardeAn0nim0 ( 232451 ) on Monday January 12, 2009 @05:08PM (#26423037) Journal

    serious troll is trolling...

    take this argument for example:

    Sorry to be harsh, but get with the times. Computing these days is vastly more complex than back in the "good old days". Your 386 couldn't even play an mp3 without pegging the CPU, let alone a flash video containing one.

    do you know what "data entry" is? it's pretty much reading something from a sheet of paper (like a form filled in by hand) and typing it in. i did that on my first job ever, on a PC-XT running MUMPS. after some time it gets so mechanical you don't even look at the screen anymore. and who gives a f**k if the CPU can't play MP3s? if you want to listen to music on the job, buy an iPod. data entry can be done on a dumb terminal, for crying out loud. and it's usually quicker and less distracting to use a keyboard-only, text-only interface than to mess with GUIs. i see this every time i stop at a gas station: when it's time to pay, service is much quicker at the ones where the register has a text-only interface.

    Until they try to bring in new-hires. How long does it take to train somebody who is used to modern office programs to use a DOS program like wordperfect? You think they'll ever get as proficient when what they see isn't what they get (a fad, I bet, right?)

    let me guess, less than what it takes to train them on GUIs? the learning curve has nothing to do with the nature of the interface (text or graphical), only with how well the particular interface of the application was written. and "what you see is what you get" is irrelevant in most cases. not all data entered is supposed to be printed, and in this age of e-mail, even less of it is. so leave the WYSIWYG stuff for those who actually need it.

    oh, and out of order, but still relevant,

    Did you ever have to contend with botnets, spyware or any of that? And don't say "if we used The Right Way Like When I Was Your Age, we wouldn't have those things because software would be Designed Properly", because if we used "The Right Way" like you, software would take so long to develop and cost so much that we wouldn't even have the fancy systems that make malware possible in the first place.

    MacOS, Linux, FreeBSD, OpenSolaris: they are as complex as Windows but much more resistant than what MS sells. and with the exception of MacOS, they're all free. the difference is in the methodology used to develop them. which is to say, the people who work on them actually _DO_ have methodologies, while microsoft seems to have none.

    so, well fed already, mr. troll?

  • by benj_e ( 614605 ) <walt@eis.gmail@com> on Monday January 12, 2009 @05:08PM (#26423045) Journal

    Meh, it goes both ways. How many younger coders feel they are god's gift to the industry?

    Personally, I welcome anyone who wants to be a programmer. Show me you want to learn, and I will mentor you. I will also listen to your ideas and will likely learn something from your fresh insight.

    But show me you are an asshat, and I'll treat you accordingly.

  • If you perform certain types of validation on a routine basis, write a set of common routines to do the work, and reuse them over and over again. Standardize your code. Define standard buffer names, sizes and buffer attributes, and make sure that anyone working on that code is acutely aware of the standards which are already in place.

    Reject code that doesn't follow the standard. Even if it works otherwise.

    Modular coding isn't rocket science, and one can be very structured and modular in any language. No OO needed. We had an extensive library of common routines, common buffers, etc. back when I wrote Fortran 66 and 77 code at my last major place of employment, and we have the exact same thing here on both the C and Fortran sides of life.
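
    A minimal sketch of that kind of shared routine (the name and the fixed size are invented for illustration, not taken from the library described above): one bounded-copy helper that every module calls instead of hand-rolling strcpy().

        #include <stddef.h>
        #include <string.h>

        #define STD_NAME_LEN 64   /* project-wide standard field size */

        /* Copy src into a fixed-size field, always NUL-terminating.
           Returns 0 on success, -1 if the input would not fit. */
        int std_copy_field(char *dst, size_t dstlen, const char *src)
        {
            if (dst == NULL || src == NULL || dstlen == 0)
                return -1;
            size_t n = strlen(src);
            if (n >= dstlen)
                return -1;          /* would truncate: reject, don't guess */
            memcpy(dst, src, n + 1);
            return 0;
        }

    A typical call would be: declare char name[STD_NAME_LEN]; then std_copy_field(name, sizeof name, input); and reject code that copies any other way, even if it works otherwise.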

  • by TheLink ( 130905 ) on Monday January 12, 2009 @05:31PM (#26423335) Journal
    Is there really a thing as "perfect software"?

    Often a bug is a matter of opinion/taste, and opinions/tastes change.

    For instance some might say a program should stop if it discovers it suddenly can't log anymore. Others might say it should continue running and ignore the logging problem. And others might say it should continue running and try to log elsewhere. What would a perfect program do? I really don't know.

    And the other reason why you don't get perfect software:

    If shopping malls cost the same as now to design, but cost the equivalent of "make mall" and "coffee break" to build, people would be shopping in "buggy" shopping malls as soon as the blueprint is good enough to "compile".

    Because the bosses won't bother waiting for stuff to be "perfect". They'd want to start collecting rent/$$$ ASAP.

    And most shoppers and shopkeepers won't care either - as long as nobody gets killed or maimed, just stick a few warning signs over stuff that isn't working properly yet.
  • by againjj ( 1132651 ) on Monday January 12, 2009 @05:31PM (#26423345)
    "Write a function to return the sum of all the numbers from 0 to 100" is even more fun. Because many people, while getting the "return n(n+1)/2" solution, miss "return 5050".
  • by jeko ( 179919 ) on Monday January 12, 2009 @05:38PM (#26423455)

    "I'm beginning to think that our HR person just does a terrible job at finding resumes for me"

    Well, there's your problem...

    You sound like a great, amiable guy who'd be a great coworker. HR is screwing that up for you.

    Why are you letting HR dictate who you're going to get saddled with? HR doesn't bring you resumes, you should be taking resumes to them. Talk to your friends and acquaintances, the guys in the users' group. People that you know can do the job.

    "Hey, we need a coder for Project X. Ya want it, or know anybody?"

    "Whatcha offering?"

    "120K, 30 days vacation, free milk and cookies..."

    "See you Monday morning."

    .
    .

    "Hey, we need a coder for Project X. Ya want it, or know anybody?"

    "Whatcha offering?"

    "$4.25 an hour plus all the stress and scapegoating you can handle..."

    "Gee, not really looking myself. Let me see if I can find anyone for you...."

    Basically, with unemployment penciled in to hit nine percent next month, you WILL find someone competent to hire. You just have to be offering market rates.

  • by Chabil Ha' ( 875116 ) on Monday January 12, 2009 @05:39PM (#26423467)

    The main reason why your DVD never required updates is because Blu-Ray is a trainwreck of a spec that is still in heavy flux, while DVD was finalized and stable by the time players started shipping.

    It didn't require updates because in 1995 [wikipedia.org] only 0.4% of the world's population [internetworldstats.com] had access to the Internet, and the spec didn't allow for such things as 'upgrades'. It hadn't even entered their minds that people would actively crack the thing. Fast forward ten years (digitally, mind you; we had passed up analog tapes!) and one of the requirements was the ability to update the code used to decrypt the disc in order to combat piracy [wikipedia.org]. The fact that many updates to the specification have already been made is only ancillary to this motivation.

  • by clone53421 ( 1310749 ) on Monday January 12, 2009 @06:01PM (#26423825) Journal

    All loops are coded as GOTOs in assembly [slashdot.org], you insensitive clod!
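
    The joke is more or less literally true: a compiler lowers a loop to a conditional branch. A small C sketch of the equivalence, purely for illustration:

        int sum_loop(int n)
        {
            int total = 0;
            for (int i = 0; i <= n; i++)
                total += i;
            return total;
        }

        int sum_goto(int n)     /* the same logic, spelled the way the
                                   generated assembly sees it */
        {
            int total = 0;
            int i = 0;
        top:
            if (i > n)
                goto done;
            total += i;
            i++;
            goto top;
        done:
            return total;
        }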

  • Re:Waterfall (Score:3, Insightful)

    by radish ( 98371 ) on Monday January 12, 2009 @06:09PM (#26423955) Homepage

    The phase is over when the stakeholders agree and sign off on the requirements.

    Hahahahahahahahahahahahahaha...breathe....Hahahahahahahahahahahahahaha!!

    I have no idea where you're coming from (government?) but in _my_ real world stakeholders change requirements all the time. Like daily. And saying "no you can't have this because you signed off on something last week" is a great way to dramatically shorten your career. The real world that my stakeholders live in changes all the time, so their requirements change all the time, so you have to be able to react.

    According to whom? According to the stakeholders who mutually agree on the design

    I've had stakeholders who understand requirements, and even some who understand specs. Never one who understood a design in any useful way. This ties into my first point though, without knowing the requirements are right how can you validate a design?

    Says you who very clearly have not been on a large waterfall project with people who know what they're doing?

    As far as I'm concerned, anyone who tries to do anything useful with a rigid waterfall doesn't know what they're doing by definition, so I guess not :)

  • by tsm_sf ( 545316 ) on Monday January 12, 2009 @06:12PM (#26424013) Journal

    Software is bad because it's not designed by MBAs, or even marketing departments. It's designed by engineers and developers with no practical experience.

    (( I agree with your statement as well, btw ))

  • by plover ( 150551 ) * on Monday January 12, 2009 @06:22PM (#26424169) Homepage Journal

    Yes, on the whole applications have become more stable, while growing an order of magnitude more complex. But TFA is not about stability as much as it is about security -- people leaving inadvertent holes in software that a hacker can exploit. You can have a perfectly functioning program, one that passes every test 100% of the time, but it may still have a SQL injection flaw or fail to validate input.
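
    To make the injection point concrete, here is a hedged sketch using SQLite's C API as an example backend (my choice of library, and made-up table and column names, not anything from the comment): bind the user-supplied value instead of splicing it into the SQL text, so input like "x' OR '1'='1" stays data and cannot change the query's structure.

        #include <sqlite3.h>

        /* Look up a user id by name; returns the id, or -1 on error/not found. */
        int find_user_id(sqlite3 *db, const char *name)
        {
            sqlite3_stmt *stmt = NULL;
            const char *sql = "SELECT id FROM users WHERE name = ?";
            if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
                return -1;
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            int id = (sqlite3_step(stmt) == SQLITE_ROW)
                         ? sqlite3_column_int(stmt, 0)
                         : -1;
            sqlite3_finalize(stmt);
            return id;
        }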

  • by Opportunist ( 166417 ) on Monday January 12, 2009 @07:51PM (#26424967)

    Now, I don't think that code got less secure. I rather think there are simply a hell of a lot more ways a system's insecurities can be exploited, and the propagation of such malicious code is orders of magnitude faster.

    Think back to the 80s. Whether you take an Amiga or a PC, both had horrible systems from a security point of view. It was trivial on either system to place code into RAM so it could infect every disk inserted and every program executed, much more so than today, when home users have systems with at least nominal levels of security. Yet the propagation of malware was way slower, because most people rebooted between using programs (they sometimes had to, because programs were not kind to the system and simply kicked it out of RAM so they could use more of it... and a program being able to do that just shows how secure the system was in the first place). Even if the malicious program could somehow survive as a TSR program, spreading from one machine to another more often than not required that a disk be carried over to the next machine and put into it...

    This is all a matter of the past today. Dozens of programs running at the same time, machines getting rebooted maybe once a day and spreading malware through the internet is a matter of seconds.

    The same applies to databases. How many databases were actually "online" in the old days, with people having (potential) access to them who had no business there? Most databases were inside a company's local network. Maybe accessible by other company outlets somewhere, but it was usually a quite well shielded network, not accessible by "mundane" means. The number of hackers (or people who would like to pose as one) who could possibly access the database and inject something was way smaller.

    So I doubt systems got more secure. They're just exposed to more insecure input these days.

  • by Anonymous Coward on Monday January 12, 2009 @08:01PM (#26425077)

    Even worse, for at least the last 10 years, consumer grade hardware has dropped out of its support cycle before all the firmware/driver bugs and features are worked out.

    You just end up buying one piece of crap after another, and that's just the way companies like it: buy more stuff, don't make better quality. Quality means you don't replace it as often.

  • by Stormie ( 708 ) on Monday January 12, 2009 @08:47PM (#26425623) Homepage

    I love the way Alex Wolfe blames shoddy programming on the PC industry, which apparently replaced the waterfall development model with "cramming in as many features as possible before the shipping cut-off date, and then fixing the problems in beta". He then goes on to reference two books, from 1971 and 1975 respectively, which provide wisdom regarding this problem.

    They're excellent books - but I wasn't aware that the PC industry was around in 1971. Could it be... that bad programming practices have always been with us? That the PC isn't to blame?

  • by Anonymous Coward on Monday January 12, 2009 @09:15PM (#26425963)

    Development culture, that is. If you do not plan your development, document your system, and expand the system incrementally, all with discipline to boot, then you are part of the problem. And I'm sure we could all think of more to add to the above. The point is that we can talk about the Waterfall Model, the Spiral Model, etc., but the real problem here is the "Cowboy Model". You do not need to staff only "genius" programmers to get a good result. What you need is the discipline to plan accordingly and then force the programmers to adhere to the standard. And if a programmer finds an issue while implementing said system, then the problem is dealt with intelligently.

    I work in the embedded systems arena. You would think that aiming to produce a system with uptime measured in years would provide the motivation necessary to do the above. Well, it doesn't. I have just finished reviving a product where millions were lost due to absolutely horrible development practices. This particular system has multiple micros that need to communicate. I asked for documentation... and people started to inform me verbally and draw diagrams on my whiteboard. No written documentation at all. In fact, in the past 9 months I have not seen a shred of written documentation. Everything from incorrect use of watchdog timers to busy-waiting all over. I could go on all night.

    But the reality is (at least at the place I am currently at) this is what you get when you have some guy who has 7 yrs of "experience" and bestows upon himself the title of "Senior Software Engineer", then goes to an interview and nobody asks him about asymptotic analysis. Or about computer architecture. Or about databases. Or about network protocols. Or about anything else relevant to designing software to run on a computer in today's world. I have helped conduct interviews for some of the Mechanical and Electrical engineers (since we all work closely together) and I know for a fact that they get grilled after the initial formalities are over with. And you know what? 9 times out of 10, if there is a problem with one of our systems, it's the software that's screwed up. I don't believe that is a coincidence.

    The simple fact is that the development woes could be alleviated to a large extent if the culture of Software Engineering were better. If the culture of Software Engineering (and I hesitate to call it that at this point in time) became more like that of other engineering fields, then I think things would improve.

  • Re:Waterfall (Score:3, Insightful)

    by jstott ( 212041 ) on Monday January 12, 2009 @10:41PM (#26426901)

    Hint: you will never have them all, and they will keep changing. But you have to stop at some point so you can move onto design, so when do we stop?

    This is a solved problem in engineering. You write the contract so that if the customer [internal or external] changes the requirements it carries a financial penalty and allows you [the developer] to push back the release date. On the other side, if you discover a problem after you've shifted out of the design phase, then you are SOL and your company eats any associated cost. Contracts like these are motivation for both of you to get it right the first time.

    -JS

  • by Eskarel ( 565631 ) on Monday January 12, 2009 @11:17PM (#26427271)

    The software of yesteryear ran largely on single-threaded operating systems, didn't have to interact with the internet or defend against attacks originating from it, had to manage minuscule feature and data sets, and was still buggy.

    There was no magical era of bug free computing, there was an era when systems were orders of magnitude less complex, where about a tenth of the software was running, and where features which most people depend on didn't exist. The software was still buggy and most of it was so full of security holes that it may as well have been a sieve.

    Complex systems have more bugs, and modern systems are more complex. The workflow of your data entry people might be fast, and perfectly adequate, but what does it cost to support those 386s, and what does it cost to train someone to actually use WordPerfect 4.2 (the old DOS versions of WordPerfect had a pretty high learning curve)? Data entry folks are basically doing monkey work, but training them up on an older system could take days.
