Software Technology

Taking On Software Liability - Again

An anonymous reader writes "You may remember the article in which a BBC correspondent criticised current software licenses. In answer to the huge discussion it provoked, he has written another article defending his views. From the article: 'It is possible to make error-free code, or at least to get a lot closer to it than we do at the moment, but it takes time and effort. Doing it will probably mean that commercially-available code is more expensive and cause major problems for free and open source software developers. But I still believe that the current situation is unsustainable, and that we should be working harder to improve the quality of the code out there.'"
  • by MaskedSlacker ( 911878 ) on Sunday October 09, 2005 @08:45PM (#13753457)
    Perfect software is possible, with due diligence. I submit TeX into evidence.
  • by Anonymous Coward on Sunday October 09, 2005 @08:49PM (#13753476)
    I said this years ago: software liability should apply to programs you pay for but for which you don't get the source. If the money you pay goes toward something you don't have source-level control over, that implies the vendor thinks it's of sufficient quality that you, the end user, should not have to fix it. If you get the source, then there is no guarantee and the distributor should have no liability. This doesn't mean you have to have the right to redistribute the source -- but you do have to have the right to rebuild it using commonly available tools, so liability can't be limited to one "magic" library.
  • by Captain Perspicuous ( 899892 ) on Sunday October 09, 2005 @08:54PM (#13753496)
    [ ] vendor guarantees that software works as advertised
    could be another checkbox that all software companies strive to tick.

    "What? You don't guarantee works-as-advertised? Well, then I'm looking for a different product."

    If computing magazines updated their testing methods and added this one checkbox, Microsoft just might say, "Oh, hey, we haven't covered that checkbox yet. We need to have every checkbox. Let's quickly drop by the legal department and get this in order..."
  • free software (Score:1, Interesting)

    by Anonymous Coward on Sunday October 09, 2005 @08:59PM (#13753515)
    Pundits seem to make two frequent assumptions/assertions:
    1) Free software is less tested than commercial software.
    2) Free software programmers are programming for free. (The article claims this without the slightest proof.)
    But I know from personal experience that neither of these is universally true, and I don't believe either to be particularly true. I wish there were a great deal of real data on these topics, but there does not seem to be.

    1) I know of proprietary products, sold to customers by a commercial enterprise, that were written by one person and never code-reviewed or even read by anyone else. (OK, not a big seller, but who knows the statistics of code review in proprietary software? Proprietary software is well hidden...)

    2) I know one open source product on which all the key contributors are paid to do the work, because it benefits the companies they work for. They're just not paid by the FSF that owns the product.

    My comments don't prove anything; they just advise caution about pundits who seem to make questionable assumptions.
  • by Anonymous Coward on Sunday October 09, 2005 @09:15PM (#13753590)
    But that is not the issue. He is pointing out that companies' EULAs exclude liability even when the fault is their own. You also seem to be getting hung up on who's to blame instead of who is liable.

    As most commercial software is shipped precompiled, it isn't the end user's problem whether the compiler buggered it up or not. Standard contract law means you sue the company you bought the faulty product from, and they in turn sue whoever created the fault and exposed them to the liability. This is because, legally, by selling something you are asserting that what you are selling will do what it says.

    If there is a clear flaw in a product that you buy and it causes you harm you can sue the retailer. If it's software they will claim that EULA terms exempt them from liability.
  • Not entirely new... (Score:5, Interesting)

    by cperciva ( 102828 ) on Sunday October 09, 2005 @09:24PM (#13753629) Homepage
    Dan Bernstein has offered a guarantee for many years that djbdns and qmail are secure. Now, this is a rather vague guarantee, since the task of deciding if a reported problem is a security flaw lies with Dan Bernstein himself; but it's a start.

    I'm currently writing some cryptographic code, and I intend to go considerably further: I intend to offer a guarantee not only that my code operates as specified, but also that it is not vulnerable to any side channel attacks within certain classes.

    As the time-to-exploit of security flaws continually decreases, I see only one solution: writing code which is correct in the first place. If you can do that, you can offer a guarantee. And hopefully, once security becomes a larger issue for consumers, people will start looking for guarantees.
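    A minimal sketch of the kind of side-channel defence being discussed here (a generic Java illustration with invented names, not cperciva's actual code): comparing two MACs in constant time, so the comparison itself leaks no timing information.

        // Hedged illustration: constant-time equality for secret data.
        public final class ConstantTime {
            // Returns true iff a and b hold identical bytes. Unlike an
            // early-exit loop, running time depends only on the length,
            // not on where the first mismatch occurs.
            public static boolean ctEquals(byte[] a, byte[] b) {
                if (a.length != b.length) return false;
                int diff = 0;
                for (int i = 0; i < a.length; i++) {
                    diff |= a[i] ^ b[i]; // accumulate differences without branching
                }
                return diff == 0;
            }

            public static void main(String[] args) {
                System.out.println(ctEquals(new byte[]{1, 2, 3}, new byte[]{1, 2, 3})); // true
                System.out.println(ctEquals(new byte[]{1, 2, 3}, new byte[]{1, 2, 4})); // false
            }
        }

    An early-exit comparison returns faster the more leading bytes match, which is exactly the signal a timing attacker measures.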
  • by autopr0n ( 534291 ) on Sunday October 09, 2005 @09:24PM (#13753633) Homepage Journal
    When I ran Autopr0n, hooo... that code was awful. But there was really never any kind of economic incentive to fix it; I could just keep restarting my JVM (the thing was coded in Java).

    Or, look at metafilter.com. That site goes down like a $2 hooker, yet it's so successful that the maintainer was able to quit his day job and support himself based on the site. People don't care.

    Even with a desktop OS back in the '90s, quality just wasn't that important. Would you rather pay $10,000 for an OS, or $90 and lose work once in a while?

    If the cost of the work lost to software errors is less than the cost of writing the code so that it works perfectly, then it's not worth doing. Sure, for some programmers there's no tradeoff, but those programmers probably cost a lot more to pay than 90% of the coders out there (who are idiots, IMO; just look at the existence and popularity of Visual Basic).

    When the cost of the error increases, you'll find much more stable software (like on medical equipment, airplanes, and so on).

    The secretary's spreadsheet just ain't mission critical.

    Of course, now that all computers are connected together, they need to be at least secure and not targets for worms, trojans, etc. I predict that as we move towards web services, software quality will get worse and worse, but people will just pay a sysadmin to sit there and reboot the machine whenever it goes down, so nobody will notice...
  • by Concerned Onlooker ( 473481 ) on Sunday October 09, 2005 @09:27PM (#13753643) Homepage Journal
    A couple of quarters ago I was taking a software engineering course. Our instructor told the story of a debugging competition which used, as the test case, a mature piece of software believed to be error-free. A fixed number of bugs were then introduced into the code and the teams all had a crack at it. At least one of the teams found bugs that were not the ones intentionally introduced. I'm paraphrasing, but in other words: they took a piece of software believed to be bug-free because it had been intensely examined by many programmers, yet another bug or two was found.

    Truly error free is not a likely state for software.

  • Good software costs (Score:5, Interesting)

    by Angst Badger ( 8636 ) on Sunday October 09, 2005 @09:56PM (#13753752)
    First off, I should issue a disclaimer that I'm an oldbie. I started programming in assembly language on punch cards, but no, this isn't going to be a rant about youngsters and their newfangled languages. (At least it better not be; my current job has me living, breathing, and eating PHP.)

    The problem with bad software today -- just like it was thirty years ago -- is bad engineering. It's not because of the methodology du jour (or its absence), licensing, choice of language, or toolsets. You can write brilliant, bug-free, efficient software in COBOL using the basic procedural structured programming paradigm. You can write awful, buggy, resource-hungry software in object-oriented Java using XP. None of that shit matters.

    Good engineering requires, among other things, a detailed understanding of the problem, thorough planning, the sheer experience required to distinguish between the clever and overcomplicated on one hand, and the lucid and elegant on the other, excellent communication between developers, foresight (also borne of experience), and rigorous debugging. All of these things, including the many other prerequisites not mentioned, require lots of time and effort. Too much time and effort, in fact, for most commercial software outfits to invest and still turn a profit.

    That's the rub, really. All the methodology and language fads aside, the basic principles of good software engineering were worked out decades ago, some of them even earlier -- good generic engineering practice in the abstract was worked out long before we harnessed electricity. It all comes down to this: the more time, effort, and care you put into a product, all other things being equal, the better the product will be. It's easy (and well-deserved) to mock Microsoft for the shoddiness of their major products, but that very shoddiness is why you can buy MS Word for less than ten grand. If MS built word processors the way engineers built the Golden Gate Bridge, the prices would be comparable.

    The market does not reward that kind of quality. In the first place, no one is willing to pay thousands of dollars for a supremely excellent product when one that is good enough can be had for a couple hundred. Most folks couldn't afford that kind of software engineering even if they wanted it. In the second place, once you have the perfect all-in-one software package, why would you ever buy another one? Microsoft is in this position already with its good-enough products. No one needs an upgrade, so remaining profitable requires MS to churn out new versions of its increasingly resource-intensive operating system so that you at least have to buy new copies as you replace your older machines.

    FOSS is at least theoretically invulnerable to these pressures. In theory, there will eventually be all-singing all-dancing FOSS packages covering all of the major software categories, and the age of commercial mass-market software will be at an end. I've been waiting for this day to come since well before the first release of Linux. I'm surprised that it hasn't come yet. I'm surprised that the majority of FOSS software is still as buggy, poorly designed, and -- almost without exception -- undocumented as its commercial equivalents.

    I suppose I shouldn't be surprised. Excellence in software engineering is like excellence in any other field: it's really fucking hard. It's even harder when you have a day job; time constraints aside, after 8-12 hours coding at work, the last thing many developers want to look at when they get home is compiler output. Many of the remainder are either amateurs or students -- not to diss either category, but often the necessary experience is lacking, and the lone hacker often lacks the knowledge or the inclination to produce code that's easy for other developers to work with. I remain confident that we'll get there, though. (I am less confident that I will still care by then, but it will still be a boon to those who live to see that day.) I am equally certain, for the reasons
  • by fbjon ( 692006 ) on Sunday October 09, 2005 @09:58PM (#13753762) Homepage Journal
    There was an analogy with a bridge earlier. Bridges are designed with redundant safety margins: you can (usually) put a lot more weight on them than what they are rated for.

    In the same vein, instead of trying to make every part of the code perfect, how about designing some redundancy into the code?

    I leave it as an exercise for the reader to figure out what the hell that means.
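    One possible reading of that exercise (a sketch of my own, not the poster's design): compute a critical result by two independent methods and refuse to proceed if they disagree, much as a bridge carries more load than its rating requires. In Java:

        import java.util.Arrays;

        // Hedged illustration of "redundancy in code": two independent
        // implementations of the same computation, cross-checked at runtime.
        public final class RedundantSum {
            static long sumForward(int[] xs) {
                long total = 0;
                for (int x : xs) total += x;
                return total;
            }

            static long sumViaStream(int[] xs) {
                return Arrays.stream(xs).asLongStream().sum();
            }

            public static void main(String[] args) {
                int[] data = {3, 1, 4, 1, 5, 9};
                long a = sumForward(data);
                long b = sumViaStream(data);
                if (a != b) throw new IllegalStateException("redundant check failed");
                System.out.println("sum = " + a);
            }
        }

    If the two methods share a bug this proves nothing, of course -- which is the standard objection to naive N-version programming.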

  • Sure, let's have liability. The software must perform substantially as advertised -- counting all advertisements, press releases, interviews given by the publisher's officers, etc. But make the amount of damages simply equal to the price paid.

    This would keep free-as-in-beer software in the clear. It would also have the side benefit of forcing Microsoft to reveal its OEM prices. :D

    I like the source-code-as-condition-of-immunity suggestion above too, but it would be futile without a licence like those the FSF approves -- one that would actually allow you to fix problems without violating copyrights and patents.

  • by Midnight Thunder ( 17205 ) on Sunday October 09, 2005 @10:34PM (#13753908) Homepage Journal
    I thought about this the other day, asking myself why we can't take the same approach in software development as in bridge building or other engineering disciplines. The difference seems to be the prototype. When you build a bridge you create a prototype, test it as much as possible, tweak it where necessary, and let the cycle continue until there is a working solution. Once that is done you are ready to build the bridge, based on specifications that are, in a certain sense, easier to follow than software specifications.

    Look at software and ask yourself where that prototype is -- the one that can be tweaked and reworked until all obvious and not-so-obvious issues have been tested for. You will end up noticing that the prototype and the final product are the same thing. While a bridge can be tested against a number of complex mathematical formulae, I am not so sure that software can be tested in the same way. Software is designed and developed according to a number of philosophies, and sometimes it even has to interface with other programs based on other philosophies. Over time the complexity grows to the point where testing it 100% is like trying to predict what the stock market will do next week. I would like to put a figure on what we are able to predict, but I will leave that to someone else, since I am not sure I am qualified to do so.

    At the same time, I will say that there are a good number of things for which you can create unit tests, and these help avoid the most obvious issues. The non-obvious issues, based on hard-to-reproduce scenarios and variable dependencies, are a little trickier.
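    For instance, a trivial self-contained test along these lines (the leap-year example and all names are invented for illustration) pins down both the obvious cases and the corner cases people forget:

        // Hedged sketch of a unit test, written as plain Java so it runs anywhere.
        public final class LeapYearTest {
            static boolean isLeapYear(int year) {
                return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
            }

            static void check(boolean condition, String message) {
                if (!condition) throw new AssertionError(message);
            }

            public static void main(String[] args) {
                check(isLeapYear(2004), "2004 is a leap year");
                check(!isLeapYear(2005), "2005 is not");
                check(!isLeapYear(1900), "1900 is not (divisible by 100)");
                check(isLeapYear(2000), "2000 is (divisible by 400)");
                System.out.println("all checks passed");
            }
        }

    As the comment above says, though, the hard failures are the ones no short test like this will ever catch.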

    Things are also improving thanks to libraries that provide much in the way of reusable code, but here too there is an issue. Imagine that you designed your program to depend on libraries x, y, and z, and then the user installs libraries that affect the libraries you depend on; how can you predict what is going to happen?

    You will notice that most mission-critical systems are designed to have only the most essential features (as compared to desktop software), are often coded with very precise memory management, and sometimes even avoid pointer types in favour of primitives. Trying to develop most applications this way would be long and laborious, and your users would complain that their complex office software doesn't do what they want (remember, they can't agree on what they want), even if it is 99.999% stable.

    I am not saying it is impossible; it's just that I have yet to see an approach that is 100% effective in 100% of cases. Yes, I am a software developer, so I do have a certain bias.
  • by Fastolfe ( 1470 ) on Sunday October 09, 2005 @10:36PM (#13753917)
    I agree, to an extent. It makes no economic sense to shoot for as-perfect-as-possible in all software. The reason we have minimum standards for other industries, such as automobiles, is that a defect in an automobile can kill people.

    But what we have today is practically anarchy. There's no way of telling if a product will work properly, or will work at all, and software vendors are allowed to get away with that.

    A middle ground here might be forced labeling. Require software vendors to apply a label that, in a standard fashion, describes how safe the software is, whether it is guaranteed to work as labeled and advertised, and maybe something about its known defects or estimated failure rate. Don't let the vendor hide this in the fine print. And then hold them to it with legal measures.

    That way, if a piece of software is targeted for home use, the labeling should make it clear that it's going to have significant defects, and will fail at a high rate. You might have a more expensive variant for office use, with fewer defects. And then you might have a stripped down, very expensive version intended for critical applications, in hospitals or infrastructure. The end user can then choose which one they want to buy, and instead of feeding a market where the customer buys the cheapest product because they think all products are buggy, they can buy the product that meets their needs, with the assurance that they will have legal recourse if the product fails to meet the expectations indicated by labeling.
  • Software insurance (Score:2, Interesting)

    by click2005 ( 921437 ) on Sunday October 09, 2005 @10:48PM (#13753961)
    It makes me wonder why no insurance companies offer insurance against loss via bad software. House insurance is dependent on suitable locks and security; software insurance could be made available on the condition that suitable AV/spyware/firewall software was installed and patched.
  • by Totally_Lost ( 177765 ) on Sunday October 09, 2005 @10:49PM (#13753968)
    There is something of a chicken-and-egg problem here. Good software can be produced in volume, but good software cannot compete once cheap trash software has consumed all the dollars (or available labor) in the market by being first to ship. This holds in both commercial and open source markets.
  • by kannibal_klown ( 531544 ) on Sunday October 09, 2005 @11:01PM (#13754006)
    If one program causes something "devastating" to happen, who is to decide that it's not the user's fault, the compiler's fault, the programmer's fault, the OS creator's fault (and if it's OSS, whose package, etc.?), or the hardware's fault?

    Let's not forget "another piece of software's fault." Installing software package B might overwrite a registry setting or DLL needed by software package A. On top of that, package B might leave something running in memory as a service that conflicts with something package A does.

    You are correct: there are WAY too many variables when dealing with software failures. And if this guy were actually a software developer, he'd know that it's pretty much impossible to make something completely bug-free. The most you can hope for is something that rarely has a bug, or that recovers without losing its place/data when it encounters one.

  • by MOBE2001 ( 263700 ) on Sunday October 09, 2005 @11:12PM (#13754038) Homepage Journal
    Bug free software is possible, so long as it is done right and people are prepared to pay for it.

    It is impossible to guarantee the reliability of complex algorithmic software. This is something that Frederick P. Brooks argued in his famous "No Silver Bullet" paper. However, Brooks' argument falls apart in one important area. Although Brooks' conclusion is correct as far as the unreliability of complex algorithmic software is concerned, it is correct for the wrong reason. Software programs are unreliable not because they are complex (Brooks' conclusion), but because they are algorithmic in nature.

    Last week, an article in the Wall Street Journal's online edition gave a vivid description of the costly software reliability problems that Microsoft has had to endure in its effort to develop the next version of its Windows operating system. It drove home a point that I have repeatedly made in the past: the biggest problem with software is communication. I am not talking about the lack of communication between programmers (nothing can really be done about that, since programmers come and go) but about communication between the various parts of the software. Microsoft is suffering from a classic case of the "right hand not knowing what the left hand is doing" syndrome.

    The problem has to do with what I call blind code and it is not just Microsoft's problem. It is an old problem that has plagued the entire software development industry from the beginning. It is proportional to complexity but it does not have to be. In fact, it can be completely eliminated. The solution requires a rethinking of software construction, not only at the single program level but also at the operating system level. It calls for the reinvention of computing at the fundamental level. We must abandon the algorithmic model of software construction and embrace a signal-based, synchronous model. Eventually, even basic microprocessor architecture will have to be overhauled. For more on this important subject, see the link below.
  • by Anonymous Brave Guy ( 457657 ) on Sunday October 09, 2005 @11:50PM (#13754184)

    What you say may be true, but I don't think it's the use of prototypes and up-front planning that separate true engineering fields from software "engineering". Those are merely the processes that have been found to work effectively in other disciplines, and we know many processes that work and many that don't for software development, too.

    I think what really separates engineering from most of today's software development is that in real engineering, you have an engineer. This is a highly trained, experienced, skilled and independently assessed professional, whose sign-off is required before a project can continue regardless of what the bean-counters say, and whose personal reputation is on the line if they sign something off inappropriately. In other words, it gives a veto to the guy who actually knows whether something's going to be crap or not, and that guy has a very strong motivation to use the veto when it's appropriate.

    Now, suppose I were a software engineer in the real engineering sense. Let's say my signature was required before shipping a release from the software project I was supervising, to confirm that reasonable care had been taken to keep the bugs as few and as minor as possible. In all the professional projects I've ever worked on, I can't think of more than a couple I would have approved. How about you?

    The bottom line is that in today's software development world, the business guys can come in and trump the development guys, and frequently do. That's cutting corners in the interests of making more money, pure and simple. It may be the way to run a more successful business in a competitive marketplace, but it's nothing to do with real engineering.

  • Re:Bullshit (Score:5, Interesting)

    by Maxo-Texas ( 864189 ) on Monday October 10, 2005 @12:08AM (#13754247)
    You are a civil engineer.

    I want you to build a bridge.

    I won't say where- or what the end conditions are on each end- because this bridge needs to work in about 2 million different places.

    Now- as to what will cross the bridge. I won't tell you that either. It might be a car- it might be a convoy of tanks.

    Now... as to the basic laws of the universe (the operating system). I can't tell you much about them either. For example, gravity may change at any time to be higher or lower. The tensile strength of various materials may change unpredictably with various patches to reality.

    Your work force will be available to work 2- to 16-hour days and may or may not comprehend instructions written in English.

    The bridge needs to be built from scratch, from materials produced by new refining methods, so you cannot use any reference materials to analyze how strong they have been historically.

    Finally, this bridge must be made of at least 9 million different pieces (opcodes). The subunits will be assembled by a robot of some kind (the compiler), so you will not know the details of how the units work -- only how they are supposed to work as units.

    ---

    I'm sorry but you really do not understand what you are talking about.

  • by man_of_mr_e ( 217855 ) on Monday October 10, 2005 @02:06AM (#13754642)
    You raise an interesting point. However, let's look at how a bridge is built versus how software is built.

    When you build a bridge, an architect designs every detail of that bridge. An engineer ensures that the bridge is structurally sound and develops the methods used to build it.

    The people who actually BUILD the bridge are, for all intents and purposes, monkeys. Skilled monkeys, to be sure, but monkeys no less. They do what they're told, and have no "creative input" into the building of the bridge.

    In software, typically everyone working on it has creative input of some kind or another. There are no standardized ways to do the jobs they're told to do, so they often have to engineer their own solutions, and depending on their experience and skill they can choose some pretty poor ways to do it.

    Software engineers ARE engineers in every sense of the word, because they're DOING engineering tasks. That doesn't mean they're qualified to BE engineers; they just are by default.

    Until such time as the software can be 100% specified by a qualified engineer, and no creative input is required by the workers, you won't get a well engineered product. In fact, if that were possible, you wouldn't even NEED programmers. The software could be specified, and then other software could build it based on the specifications.

    So, until programmers are no longer needed, you're not going to have well engineered software.
  • by stretch0611 ( 603238 ) on Monday October 10, 2005 @02:59AM (#13754781) Journal
    Excellent; I agree with you. I also consider myself an oldbie (20 years of programming, 12 years being paid for it). Fortunately, in the early years I had a teacher who actually emphasized design and comments.

    Unfortunately the environment in the business world today prevents truly bug-free programming. A lot needs to change:

    1 - Fire all the programmers and developers who can't program. We all know which ones in the group fit into this category; unfortunately, our bosses don't. They're the ones who cause the majority of the bugs. They came into the industry just for the money (pre-2000 bust) and have no real feel for programming, yet they know how to email the boss. Keep the ones who are naturals, the real code warriors. The good ones know when to code new source, when to copy old source, and how to clean up old source when they copy it into their new modules.

    2 - Get rid of the bosses who don't know their tech people (i.e. the ones who can't tell the difference described in #1 above). The boss doesn't need to know tech (though it helps), but they do need to know their people. They also need to know how to keep office politics and bureaucracy away from their people.

    3 - Get rid of separate New Development and Maintenance groups. People will code better when they know they will have to fix their own code when it goes into production. They will care more about stability instead of features. Also, a programmer learns the difference between good and bad coding techniques when they are forced to maintain both.

    4 - After the requirements are gathered and the specs/design are created, don't let users change them. I can't change everything just because a user changes their mind. If I have to change, the release date gets pushed back as if I had started the design today. I can't complete a program until you are done deciding what you want it to do.

    5 - Procedural vs. object-oriented programming: the huge development debate. I admit I am biased toward procedural programming; however, you should use whatever works better for your project. A GUI works better when you design using OOP, but when you need to crunch numbers on 10 million records, procedural will work a lot better. I know a lot has been said about the poor code quality of OOP in particular, but if you get rid of the idiots from #1, the logic should be easy to follow.

    6 - KISS (Keep It Simple, Stupid). I used to work with someone very intelligent, but his code was terrible. He would program elaborate functions just to add two numbers together; my honest belief is that he was trying to impress us with his "coding ability." If someone needs a simple program, give them a simple program; don't reinvent the wheel. (See the sketch after this list.)

    7 - Shoot and KILL everyone who sponsors or participates in an unreadable source code competition. (Sorry, personal peeve.) We need to promote legible code with indenting and good, clear, relevant variable naming.

    8 - Quality: CMM, ISO, TQI. These are nothing more than BULLSH!T. While there are occasional insights in these "quality" initiatives, I disagree with most of the methods. Commenting and documenting your code is a good thing; unfortunately, most of these initiatives are nothing more than feel-good BS for clueless management.

    9 - Admins and tech writers: hire all the good ones back. They improve our ability to code by handling the less technical aspects of our jobs. Their hourly cost is less than ours, and by offloading some of our work to them we have more time to develop the system that management wants done yesterday. This makes development more cost-effective even though it raises headcount.

    10 - Pay. Simple answer: you get what you pay for. If you offer good pay for good programmers, you will get good code in return, provided your managers know their programmers (see #2 above).

    11 - Overtime: don't do it. An overworked, stressed developer is a poor-quality developer. A little OT before a release isn't terrible, but 50+ hour weeks for months on end will produce poor code. And when there is a little OT before a release, compensate the developer with pay or comp time to keep them happy.

    12 - TEST, TEST, TEST. Then test some more. Make sure your users test too. This is the most important step.
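    As an aside on #6, an invented Java illustration (my sketch, not the colleague's actual code) of the difference between "clever" and simple:

        import java.util.function.BinaryOperator;

        public final class Kiss {
            // The elaborate way: a factory that returns a strategy object
            // whose entire job is to add two ints.
            static BinaryOperator<Integer> makeAdditionStrategy() {
                return (a, b) -> a + b;
            }

            public static void main(String[] args) {
                int clever = makeAdditionStrategy().apply(2, 3);
                int simple = 2 + 3; // KISS: just add the numbers
                System.out.println(clever + " == " + simple); // both are 5
            }
        }

    Both versions are "correct"; only one can be read at a glance by whoever inherits the code.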
  • Re:Bullshit (Score:2, Interesting)

    by jtev ( 133871 ) on Monday October 10, 2005 @03:04AM (#13754797) Journal
    I might not be able to design a bridge that works under these conditions, but Buckminster Fuller designed an arena that could be built under them. He used color-coded steel beams, shipped as a kit. A handful of engineers and an indigenous workforce could build a large dome in approximately 3 days, even if they didn't speak a single word of a common language. The engineers would be remarkably like the compiler that condenses your code into actual machine code. Also, it is quite possible to write in those opcodes, and it can be rather enjoyable. Please stop talking out of your ass.
  • by Anonymous Brave Guy ( 457657 ) on Monday October 10, 2005 @11:52AM (#13757087)

    I agree with much of what you say as things stand today, but I think you're making an unstated assumption that this is the only way things can work.

    A lot of programming is donkey work, and requires little more than joining the relevant library code together in the appropriate pattern. IME, the key to getting this right is that you usually need:

    • a small number of very good people at the top of this process, co-ordinating the design;
    • a small number of very good people at the bottom of this process, writing the tools and library code everyone else will use;
    • a large number of code monkeys in between, joining the libraries according to the design.

    It is possible to get much better (faster, safer, whatever) results out of code monkeys by giving them monkey-friendly tools: look at the success of Java, which is less powerful than many other languages but provides an effective tool for many average programmers to produce decent-quality work, while helping them avoid average-programmer mistakes. However, such a tool is almost certainly the wrong choice for the specialist guys at either end of the plan, who will feel the lack of power and would be less likely to make those classes of error anyway.
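    A small Java illustration of the monkey-friendly-tools point (my own example, not the poster's): the language refuses an out-of-bounds write at runtime instead of silently corrupting memory the way C would.

        public final class SafeByDefault {
            public static void main(String[] args) {
                int[] buf = new int[4];
                try {
                    buf[4] = 42; // off-by-one: index 4 is past the end
                } catch (ArrayIndexOutOfBoundsException e) {
                    System.out.println("caught: " + e.getMessage());
                }
            }
        }

    The cost is that the specialist writing a tight inner loop pays for the bounds check whether or not they need it -- the "lack of power" mentioned above.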

    I think the next wave of robust software development will come from realising that these three levels require very different skills and skill levels; very different tools with different balances of power, flexibility, safety, etc.; and in particular, very different proportions of the work force. Not all programmers are equal, and not all developers fall onto a simple scale from "crapware newbie" to "L337 hax0r".

  • by man_of_mr_e ( 217855 ) on Monday October 10, 2005 @12:48PM (#13757546)
    While I agree with you that things will not always be this way (I did lay out the criteria I believe will solve the problem), I don't agree that it's possible today.

    When you build a bridge, you need a human to make decisions about various things, but those decisions are about how to build the bridge, not about how the bridge will operate once built. Programmers make decisions every day that affect how the software runs even after it is built.

    A bridge builder might have to decide whether to use a shovel or a backhoe for a given job, but once the thing is done, it's the engineer's choices that determine how well the bridge works, not the builder's.

    As an example: as a programmer, I have to make decisions about how to build the product to meet the specifications. This is equivalent to the bridge builder having to decide how to make the steel, or the composition of the concrete. SOMEONE has to make those decisions, but not the grunt in the trenches. Yet programmers make those kinds of decisions every day, such as choosing an algorithm with O(1) performance, or O(N) performance, or even worse. Maybe they don't even understand big-O notation and what it signifies.
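    To make the Big-O point concrete, a hedged Java example (mine, not the poster's): a membership test against a list scans every element, while a hash set answers in roughly constant time. Nothing in a functional spec reveals which one the programmer picked, but at scale the choice dominates the runtime.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public final class LookupCost {
            public static void main(String[] args) {
                int n = 1_000_000;
                List<Integer> list = new ArrayList<>();
                Set<Integer> set = new HashSet<>();
                for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

                long t0 = System.nanoTime();
                boolean inList = list.contains(n - 1); // O(N): scans the whole list
                long t1 = System.nanoTime();
                boolean inSet = set.contains(n - 1);   // O(1) average: one hash probe
                long t2 = System.nanoTime();

                System.out.printf("list: %b in %d us; set: %b in %d us%n",
                        inList, (t1 - t0) / 1000, inSet, (t2 - t1) / 1000);
            }
        }

    On a million elements the list scan is thousands of times slower, yet both versions meet the same specification.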

    Until the basic "pieces" of software are standardized, an engineer cannot fully control how the finished product will function. And once those pieces are standardized, there will be no need for programmers anymore, since the computer can just join the pieces together based on the specifications.
