
Slashdot Asks: Are You Ashamed of Your Code? (businessinsider.com) 280

Programmer and teacher Bill Sourour wrote a post last week called "Code I'm Still Ashamed Of," in which he recounts being hired to write code for a pharmaceutical company. Little did he know at the time, he was being "duped into helping the company skirt drug advertising laws in order to persuade young women to take a particular drug," recaps Business Insider. "He later found out the drug was known to worsen depression and at least one young woman committed suicide while taking it." Sourour was inspired to write the post after viewing a talk by Robert Martin called "The Future of Programming," in which Martin argues that software developers need to figure out how to self-regulate quickly as software becomes increasingly prevalent in people's lives. Business Insider reports: "Let's decide what it means to be a programmer," Martin says in the video. "Civilization depends on us. Civilization doesn't understand this yet." His point is that in today's world, everything we do, from buying things and making phone calls to driving cars and flying in planes, involves software. And dozens of people have already been killed by faulty software in cars, while hundreds have been killed by faulty software during air travel. "We are killing people," Martin says. "We did not get into this business to kill people. And this is only getting worse." Martin finished with a fire-and-brimstone call to action, warning that one day some software developer will do something that causes a disaster killing tens of thousands of people. But Sourour points out that it's not just about accidentally killing people or deliberately polluting the air. Software has already been used by Wall Street firms to manipulate stock quotes. "This could not happen without some shady code that creates fake orders," Sourour says. We'd like to ask for your thoughts on Sourour's post and whether you've ever had a similar experience. Have you ever felt ashamed of your code?
  • Redmond? (Score:4, Funny)

    by gti_guy ( 875684 ) on Tuesday November 22, 2016 @08:06AM (#53338251)
    We're waiting for a response.
  • by Parker Lewis ( 999165 ) on Tuesday November 22, 2016 @08:13AM (#53338277)

    In civil engineering, when any project is bigger than a certain size, it's required to have a civil engineer sign off on the project and take responsibility for all of it. Sometimes I wonder if we need a similar regulation in software. For example: if the software covers something critical to life, like medical or airport systems, the law should require a software engineer to sign off on the project and be responsible for it.

    As in civil engineering, this would probably force software people to really invest in QA (to this day, QA in software is really, really bad).

    • by Cassini2 ( 956052 ) on Tuesday November 22, 2016 @08:31AM (#53338361)

      Civil engineers design with a safety margin such that their buildings don't fall down. I work with a bunch of them. Civil engineers dread the thought of their building falling down.

      What does this mean in terms of software? Software crashes all the time.

      Software systems tend to have really complex side effects. Suppose I design a blood pressure monitoring machine for a hospital. It and a hundred other devices let the hospital run much more efficiently. The hospital only needs half the number of nurses. Now, someone discovers a bug in a security camera, penetrates the network, discovers hundreds of Windows XP Embedded devices, and turns the hospital into a malware farm. (Incidents like this have happened.)

      The hospital is screwed. It can't suddenly double the number of nurses, and even if it did, the nurses are used to the automated equipment. They don't know how to fall back to the non-networked way of doing things instantly. They are out of practice.

      How could an engineer sign off on a system like this?

      On one hand, it is running standard and recommended software (like Windows), and the software has gone through the FDA approval process. On the other hand, the hospital is a sitting duck. These embedded devices are hopelessly insecure, and there is no way to secure them against modern network threats.

      I don't think we have proper methods of describing and solving modern safety issues in embedded systems. We have no proper method of understanding safety with machines built in one country, running software written in two different countries, and then deployed somewhere else. The safety interactions even in a relatively stand-alone machine can be very tough to understand. Network-enabled threats make things really hard.

      • by Anonymous Coward on Tuesday November 22, 2016 @09:22AM (#53338603)

        What does this mean in terms of software? Software crashes all the time.

        Not in safety-critical applications. Writing software for them is a different beast.

        How could an engineer sign off on a system like this?

        With the proper documentation.

        I don't think we have proper methods of describing and solving modern safety issues in embedded systems.

        Google for machine safety standards. IEC 60601-1 seems to be a good starting point for medical devices.
        I've only written code for industrial machinery so I can't say for sure if it contains the necessary information. You typically have to go through quite a lot of standards to figure out the full requirements.

        You have to document not only how the software will handle all plausible input cases but also how the device won't endanger anyone in the case of common hardware failures.
        Some electromechanical devices can be assumed not to fail if you never approach half the rated current.
        Some components are designed to have a defined failure state. You can use capacitors that always fail open and never short.
        For transistors, you have to document how the device will operate in each of the possible ways the transistor can break.
        For complex circuits like a CPU, you are not allowed to assume that it will remain functional, so you need at least two CPUs plus software or hardware that detects when one of them doesn't act as it should.
        Depending on what safety class you are aiming for, you might have to use CPUs of different architectures and have different programmers writing the software, to minimize the risk of them failing in the same way.

        As you might have figured out, you can't just throw in a Raspberry Pi or anything running Windows CE and hope to write life-critical applications.
        If you need an OS it will be something like SafeRTOS, but most of the time you will skip it.
        You typically have to use window watchdogs to make sure that the code executes within the right time, and you need to add checkpoints to make sure that the code executes in the right order (see the sketch below).
        You should try to avoid using pointers and dynamic allocation. Yep, that rules out high-level languages no matter how safe some people believe they are.
        Exceptions are a big no. You avoid code that doesn't have a deterministic path through it.
        If you actually use pointers, you will have to document every usage to make sure that a pointer can never be used uninitialized or trash other parts of memory.
        If you allocate things dynamically, you will have to show that allocation failure doesn't lead to safety issues.
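
        To make the watchdog-plus-checkpoints idea concrete, here is a minimal sketch in C. None of it comes from any standard; the register address, names, and kick value are hypothetical placeholders for whatever your MCU's datasheet actually specifies.

            /* Window watchdog + checkpoint pattern (sketch, hypothetical MCU). */
            #include <stdint.h>

            /* Hypothetical memory-mapped watchdog kick register. */
            #define WWDG_KICK (*(volatile uint32_t *)0x40002C00u)

            /* Each mandatory step sets one bit; the cycle is valid only if
               every bit was set, in order, before the watchdog is kicked. */
            enum {
                CP_READ_SENSOR = 1 << 0,
                CP_VALIDATE    = 1 << 1,
                CP_ACTUATE     = 1 << 2,
                CP_ALL         = (1 << 3) - 1
            };

            static uint32_t checkpoints;

            static void checkpoint(uint32_t expected_so_far, uint32_t cp)
            {
                /* Order check: every earlier step must already have run. */
                if (checkpoints != expected_so_far)
                    for (;;) ; /* stop kicking; the watchdog resets us */
                checkpoints |= cp;
            }

            void control_cycle(void)
            {
                checkpoints = 0u;

                checkpoint(0u, CP_READ_SENSOR);
                /* ... read the sensor ... */

                checkpoint(CP_READ_SENSOR, CP_VALIDATE);
                /* ... range-check the reading ... */

                checkpoint(CP_READ_SENSOR | CP_VALIDATE, CP_ACTUATE);
                /* ... drive the output ... */

                /* Kick only if every checkpoint fired in order. A *window*
                   watchdog also faults if the kick comes too early, which
                   catches runaway loops that skip the real work. */
                if (checkpoints == CP_ALL)
                    WWDG_KICK = 0xA5u;
            }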

        TL;DR:
        We have the methods to write safe software. It's not easy, and it is very time-consuming.
        If you are interested in doing it, I recommend going for an EE degree rather than CS. Reading the standards will be hard otherwise, and understanding the possible failure modes even more so.

        • by ranton ( 36917 ) on Tuesday November 22, 2016 @11:02AM (#53339147)

          Google for machine safety standards. IEC 60601-1 seems to be a good starting point for medical devices.

          We have the methods to write safe software. It's not easy and it is very time consuming.

          There is still a fairly large difference between quality control in civil engineering and software development, even for safety-critical devices. In college I worked with a professor whose area of research was requirements engineering, specifically requirements traceability. I did some work on a research project involving Siemens and the FDA where the goal was to improve the specifications given to the FDA so they could better monitor safety-critical devices. It was very eye-opening just how difficult it is for the FDA to perform approval on devices which include a software component.

          Right now their approach is basically to look for what they call "bad smells". It is impossible to thoroughly go over every software system with the same rigor they would apply to electronics or mechanical systems without an astronomically higher cost. So the best they can do is use their experience of where problems are most likely to be and focus on areas where documentation is light, just like an experienced software QA engineer would. My professor's research focused on AI and information retrieval to build tools which assist in investigation, because a thorough review would be impossibly costly.

          People often pontificate about whether civil engineering, electronics, or software engineering products are inherently more or less complex than each other. They look at the number of bolts in a bridge and the lines of code in a software program and pretend they can compare the two. But according to the actual mechatronics engineers and FDA officials whose job it is to oversee the safety of these products, software systems simply have too many external and internal inputs for software quality control to reach the rigor of other engineering disciplines. Human beings are simply not capable of handling the level of complexity it would take for software engineers to have the same confidence in their products as a civil engineer has in his.

          Engineer experience and good QA practices really do make a big difference, but no software engineer will ever be capable of taking on the same level of responsibility for his products as a civil engineer does when he signs off on a bridge. This doesn't mean the industry cannot improve (it certainly can); it just means comparing the results of software QA with the results of civil engineering QA will always be faulty. Comparing each other's procedures to find ways to improve is still a good exercise, though.

      • by sinij ( 911942 )
        Software shouldn't crash all the time; it's just that we tolerate software crashes a lot more than collapsed buildings. Software used to be "stuff with computers," but now it is everywhere, so the approach must change.
        • It won't until the folks running the show are willing to wait a quarter or two to recognize revenue in order to make sure stuff doesn't break. Which is to say, it won't happen.
      • by eegeerg ( 673636 )

        On one hand, it is running standard and recommended software (like Windows), and the software has gone through the FDA approval process. On the other hand, the hospital is a sitting duck. These embedded devices are hopelessly insecure, and there is no way to secure them against modern network threats.

        I work at a hospital. Yes, there are still problems. But things have gotten better in the last 10 years. I disagree that there is no way to secure these devices. Your hypothetical blood pressure monitoring device, if it requires network access, would generally be firewalled at time of install (either by the vendor or the hospital). If the malware operates through the one or two open ports required for communication, a more sophisticated packet-filtering rule can be applied.

        Also, you may not be aware, but FDA w
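
        (To illustrate the firewalling point above: on a Linux gateway, locking a device down to its one or two required ports can be as small as a couple of rules. This is only a sketch; the device address, subnet, and port here are hypothetical, and the real rules would come from the vendor's install guide.)

            # Allow only the monitor's HL7/MLLP port from the nurse-station
            # subnet; drop everything else destined for the device.
            iptables -A FORWARD -p tcp -s 10.20.0.0/24 -d 10.20.5.17 --dport 2575 -j ACCEPT
            iptables -A FORWARD -d 10.20.5.17 -j DROP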

      • Re: (Score:3, Interesting)

        by Culture ( 575650 )
        As a practicing structural engineer for 30 years who also writes structural engineering design software, let me answer this for you. When an engineer signs/stamps a design, they are not certifying that it is perfect. In fact, it is generally recognized that no set of plans is ever error free. What you are asserting by your signature is that 1) You were in charge of and supervised all the work going into the design and 2) The design was performed in accordance with the standard of care for the work being per
        • Standard of care is NOT a standard of perfection, but rather "that degree of care and skill ordinarily exercised under similar conditions by reputable members of our profession practicing in the same or similar locality."

          This is basically true for all professions, like lawyers or doctors. No person or system is perfect, but an operation by a highly trained and qualified surgeon is many orders of magnitude more likely to succeed than following Baz's DIY appendectomy guide on YouTube.

      • by Kjella ( 173770 )

        Civil engineers design with a safety margin such that their buildings don't fall down. I work with a bunch of them. Civil engineers dread the thought of their building falling down. (...) Suppose I design a blood pressure monitoring machine for a hospital. It and a hundred other devices let the hospital run much more efficiently. The hospital only needs half the number of nurses. Now, someone discovers a bug in a security camera, penetrates the network, discovers hundreds of Windows XP Embedded devices, and turns the hospital into a malware farm.

        Except the latter is more akin to evildoers running trucks in complex resonance patterns or planting C4 charges to bring the bridge down. In complex software, sure, there are accidentally created and accidentally triggered bugs, but those are mostly contained by testing and rollback procedures, so what you have today is not worse than what you had yesterday. If it's broken, you can fairly easily put a business value on it and prioritize accordingly. And there aren't many insider civil engineers who intentionally

      • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday November 22, 2016 @10:22AM (#53338923) Journal

        One difference between the civil and software engineering examples that strikes me is that the civil engineer only has to ensure that the building never falls down due to natural and expected forces. If someone sets off a large truck bomb in the basement, the building *will* fall down, and everyone understands that's not the civil engineer's fault because that attack was outside of the normal and accepted design parameters.

        That's not to say that we don't defend buildings against truck bombs. We do, but we do it with other mechanisms. We have regulations that attempt to restrict the availability of explosives. We have law enforcement and court systems that attempt to deter people from blowing up buildings by threatening them with punishment if they do. In some cases, for buildings that seem to be at particularly high risk, we apply various other security measures to control what vehicles can be driven into the basement, and by whom. We also have infrastructure in place that attempts to monitor whether or not some individuals or groups might be interested in trying to blow up a specific building, and devise and implement countermeasures dynamically as needed.

        In the case of software, the responsibility of software engineers is not nearly as clear as it is for civil engineers. Largely this is because software engineering is still a very young profession as compared to civil engineering, and it's still evolving rapidly. In some cases, the tools and techniques used by attackers didn't even exist when the software was written. In most cases, the tools and techniques did exist and were well-known to attackers and security engineers, but not to the people who wrote the software. This indicates a failure of the profession to educate its members... but given the pace at which attack techniques develop and the pace at which the software industry is and has been expanding, it's a failure without obvious solution. Simply applying the same sort of regulation and procedures applied to civil engineering would be massive overkill that would dramatically decrease the ability of the industry to produce software and probably wouldn't solve the problem.

        Clearly, we need to create more secure software. The status quo is generally terrible. There are exceptions; there are organizations that do excellent security engineering and we have a good collection of tools and practices for making software that is much better than the norm. On the other hand, no matter what we do during development there will always exist the potential for a truck bomb, an attack which was simply outside the parameters that it made sense to defend against. That means we'll always need additional, "active" defenses.

        In the case of the hospital equipment, that means that processes developed for medical equipment not based on software simply don't work. FDA approvals hinder security because they make patching far more expensive and difficult than it should be. We can attempt to build security perimeters around all of the equipment, but experience proves that that's a fool's errand. There's always some way in and once inside the perimeter attackers can run amok.

        Our current (but rapidly evolving!) best understanding of how to make software reliable in the face of active attack is a multi-layered strategy. It starts with good software engineering practices that attempt to minimize well-understood risks (buffer overflows, SQL injection, XSS, etc.). Then we try to add firebreaks wherever possible and reasonable, so that compromise of one component doesn't compromise the system as a whole. Such firebreaks mostly consist of locking down any communication channels between components that aren't actually necessary: within processes, between processes, and between devices on networks. We also try to authenticate users and keep them restricted to the functions they can legitimately perform. Then at every level we do regular penetration testing and work to identify and patch vulnerabilities before they can be exploited -- because there will
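
        (To make the first layer concrete: a parameterized query is the standard way to shut off SQL injection, because untrusted input never becomes part of the SQL text. A minimal sketch using SQLite's C API; the table and column names are made up for illustration.)

            #include <stdio.h>
            #include <sqlite3.h>

            int find_user(sqlite3 *db, const char *untrusted_name)
            {
                sqlite3_stmt *stmt;

                /* The '?' placeholder keeps untrusted_name out of the SQL
                   text, so input like "x' OR '1'='1" is matched literally,
                   never parsed as SQL. */
                if (sqlite3_prepare_v2(db,
                        "SELECT id FROM users WHERE name = ?;",
                        -1, &stmt, NULL) != SQLITE_OK)
                    return -1;

                sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);

                while (sqlite3_step(stmt) == SQLITE_ROW)
                    printf("matched id %d\n", sqlite3_column_int(stmt, 0));

                sqlite3_finalize(stmt);
                return 0;
            }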

    • Well, from past experience, the FDA covers most "medical device" software. But this is just governance of the design, construction, and maintenance. I'm not sure they are concerned with some of the logic inside the software.

      So does the FDA cover most areas? Yes. Does it cover all areas? No. This is the liability the company must assume, and creating software that is NOT life-threatening is a benefit to both the producer and the consumer.

    • I think the problem with comparing software to civil engineering is that a civil engineer (or a team of them) can design, let's say, a bridge with full specifications and expect that it's built according to that spec. Such-and-such grade of steel will be used. There will be support structures here, and here. It will be expected to carry X tons of cars. Then you get skilled workers to assemble the bridge.

      Compare that to software engineering. It's really hard to explain how software should actually be constructed

      • Compare that to software engineering. It's really hard to explain how software should actually be constructed without actually doing all the coding yourself. You can set guidelines for people to follow, but writing code isn't really as close to following instructions as following plans for assembling a bridge. There aren't really any low level jobs when it comes to building software. Each and every person writing code on the software project must be basically a software engineer. At best you could have a software engineer review the code written and send it back if it doesn't comply with the specification. But by the time you read the code and verify that it actually fits the spec and executes properly you probably could have written the code yourself. There isn't really any software equivalent of welding the beams together or driving a steamroller.

        The mistake you're making is comparing coding to constructing the building. Coding is more like drawing blueprints. The compiler is the construction crew. The early specs/design are more like drawing pictures and building models of the building beforehand.

    • by mtaht ( 603670 )
      "Ubiquity, like great power, requires of us great responsibility. It changes our duties, and it changes the kind of people we have to be to meet those duties. It is no longer enough for hackers to think like explorers and artists and revolutionaries; now we have to be civil engineers as well, and identify with the people who keep the sewers unclogged and the electrical grid humming and the roads mended. Creativity was never enough by itself, it always had to be backed up with craftsmanship and care â"
    • QA catches the low-hanging fruit. Formal verification catches more (but even then, bugs in your specification can be a problem). seL4 currently has the record for the most efficient formal verification workflow, and it cost around 30 times as much as doing best-of-breed QA. Oh, and the spec didn't cover everything, so it was about 6 hours between open-sourcing the code and someone finding a security hole in the system call handler.

      We can do this with civil engineering projects because they're fairly simple

  • by PopeRatzo ( 965947 ) on Tuesday November 22, 2016 @08:15AM (#53338283) Journal

    Hell, no. I ain't ashamed of my code, and if a man says something bad about my code, there's gonna be some blood spilt.

  • by codeButcher ( 223668 ) on Tuesday November 22, 2016 @08:21AM (#53338303)

    I thought "ashamed of code" would be e.g. using a for loop on a Java Collection rather than an Iterator; or using EJB methods to simply wrap database layer calls, instead of encapsulating business rules (because you see our project uses a 3-tier architecture because someone somewhere read it is a good thing to do); or not doing unit tests.

    Writing code to put bread on the table for employers whose business ethics are questionable (or cut corners when it comes to generally accepted good software engineering practices) is to be expected. It's not as if these things are discussed at the hiring interview. And jumping ship at the drop of a hat when these things crop up is seldom practicable - a new round of interviews takes time, so does induction into a new workplace.

    We are all prostitutes, either from the neck up or the neck down.

  • by DdJ ( 10790 ) on Tuesday November 22, 2016 @08:22AM (#53338313) Homepage Journal

    I've at times had to code up things I haven't been happy with, but rather than refuse to do it, I tried to modularize stuff so it could be fixed later when management changed.

    This is, I think, better than refusing, and having someone else code it up. To quote Mordin Solus, "someone else might have gotten it wrong".

    (And on at least one occasion, that worked -- for one product I worked on, we managed to safely and quickly kill the "phone home" DRM before it got out into the wild. Felt filthy working on it, felt good to bury it.)

  • The airline example, tragic as it is, is an example of a mistake. Preventable perhaps, but still a mistake.

    The VW example is malice.

    Cannot compare the two.

  • No but (Score:4, Interesting)

    by Big Hairy Ian ( 1155547 ) on Tuesday November 22, 2016 @08:25AM (#53338331)
    I am ashamed of some of the code that's been written in programming languages I've written
  • Until we're out of the limelight, the idiocracy will just fuck it up.
  • Not so much (Score:4, Insightful)

    by MrBoring ( 256282 ) on Tuesday November 22, 2016 @08:30AM (#53338359)

    The last thing programmers need is a power-grubbing QA task manager on top of the idiot scrum manager, in addition to whoever else wants to run things. Quality starts with planning and thinking before coding, and with not rushing code out the door. A better approach would be to not allow non-software people to make statements of quality, cost, and capability about software, via legal fiat. Let software engineers as individuals sign off on it.

    Folks, there's a lot less science, predictability, and consensus in the legal profession. People need a license to cut hair. If software as a profession isn't to be regulated, then neither should those professions be.

  • Which would also make an interesting story.
  • Realization... (Score:5, Interesting)

    by west ( 39918 ) on Tuesday November 22, 2016 @08:39AM (#53338391)

    If we're talking about how our code was used: I remember in high school (many moons ago) writing Turbo Pascal programs and Lotus 1-2-3 macros for the shipping department of a sizable company that hadn't yet computerized. I was brought in by the manager of the shipping department because he could hire a high-schooler when he couldn't get authorization to computerize from the internal IT department (which was busy sinking the company with some massively expensive software controlling the manufacturing).

    Anyway, I was very proud of allowing my boss to get all the data that he wanted, and he was very, very pleased that his department now had some means of seeing what was going on.

    I distinctly remember when he called me in and thanked me. Due to my program, he'd had enough data to improve efficiency by 25%!

    I glowed.

    Now he'd been able to let go 2 out of the 8 drivers they had.

    I stood there speechless.

    There were real people underneath those numbers.

    • Re:Realization... (Score:4, Insightful)

      by 110010001000 ( 697113 ) on Tuesday November 22, 2016 @08:50AM (#53338447) Homepage Journal
      That doesn't matter. Maybe getting rid of 2 of the 8 kept the company afloat so the other 6 didn't get laid off.
  • Yeah, been there, done that. Rewrote a system that needed a lot of manual intervention (checks and such) because it was so... crap. The look on some of the people's faces when we were giving a demo, when they realized that their jobs had been fully automated, was really painful. But that's unfortunately a big part of what we do: we automate stuff, and at the rate AI is growing it's not going to be long before we automate ourselves out of a job, which is like a big-ass karma thing I'm sure.
  • by jenningsthecat ( 1525947 ) on Tuesday November 22, 2016 @08:43AM (#53338401)

    It points out a real problem, and I'm glad the author has the conscience to care and to promote change. But I wonder what can effectively be done. In the average corporate environment, senior programmers are typically read in on the business details only on a need-to-know basis. Low-level code slingers usually don't get told jack. It's pretty difficult to act according to one's conscience given such a dearth of information. And if you demand more information - well, there's always someone waiting to take your job who'll just shut up and code.

    The video linked in TFS points out that civilization depends on programmers. For a century or more, it has also depended on engineers; yet we still have Volkswagen-like scandals, not to mention all the mostly-unnoticed little day-to-day ethical compromises made by engineering staff in the name of business. It seems to me that the only solution is for designers and implementers to have a say equal to that of bean-counters, PHBs, and investors. And in our current world-wide corporatocracy, that simply isn't going to happen - at least not in the absence of bloody revolution.

    • by SirSlud ( 67381 )

      Uh, of course you "still have" bad things that engineers do, but it'd be a lot lot worse without professional engineering bodies that regulate certification, and the social science courses that most (all?) BEng programs mandate. Don't kid yourself.

  • by generic_screenname ( 2927777 ) on Tuesday November 22, 2016 @08:43AM (#53338405)
    I've seen some shady things, and it was ALWAYS in a setting full of people too junior to ask questions. Junior people are sometimes naive, and will believe management when told that certain shady things are normal. Junior people may have no resume to speak of and are basically forced to look good at their first real job. Junior people may not be able to afford to quit without having something else lined up, and don't want to be marked as job-hoppers. Senior people have the marketability to leave, and the experience to see through BS. They may also have enough savings to quit out of principle and take a sabbatical, or the ability to shift gears to their side business. I don't really know how to solve the problem, given that young adults need to eat regardless of their ethics. I do know that the problem is hardly contained to computing. Maybe we gravitate to this field because we love logic, but the rest of the world isn't logical. We still have to deal with human nature in this field too.
  • Any bad code I write was because of bad management: lack of specifications, deadlines, lack of QA, etc. ;)
  • by darkitecture ( 627408 ) on Tuesday November 22, 2016 @08:49AM (#53338443)
    Doesn't this describe almost every job?

    I mean, I generally agree with the article. But the article seems a little... self-aggrandising, doesn't it? As if to say "hey, we're just as important as doctors and engineers!"
    The thing is... I kinda agree - programmers are very important and their actions can have serious consequences if done poorly or incorrectly. But like... plenty of other jobs are just like that too.

    If the person stocking the shelves at your local grocery store doesn't clear out the expired stock, or maintain proper hygiene around fresh food, they could easily contribute to someone getting sick or spreading bacteria or a virus.

    If the person selling gear at a bicycle store doesn't realize the wheel or frame is broken, or that a frame has been recalled due to a defect, they could easily contribute to someone being seriously injured.

    If a school teacher ignores serious bullying or doesn't fact check the information they're teaching or doesn't make sure their students properly know how to do proper calculations, they could easily contribute to a serious mistake made by the student some time in the future.

    If a salesperson helps someone get a loan approved when they've very much shown that they probably can't afford the monthly payments, or when the loan is predatory in nature, they could easily contribute to that person's life taking a serious financial turn for the worse - and we all know how stressed and desperate people can get when they can't make ends meet.

    Yes, programmers need to be aware of their moral compass - but so does everybody else to varying levels, pretty much. Generally speaking, just - don't be a dick, don't be apathetic and use some common sense. That'd go a long way for pretty much anybody in any situation.
    • The big difference between the everyday examples you quote, and contributing to software systems, is what I call the invisibility of the software. No bystanders can instantly judge you to be criminally incompetent when you write code that is so buggy, so fragile and so unsuited to control any aspect of society. Whereas any bystanders witnessing your example actors (shelf stocker, bicycle shop seller, etc..) can instantly judge your example actions as being immoral. That is why we constantly get away with m
      • Agree completely. The fundamental problem is that software tends to have so much more combinatorial complexity than just about everything else (except maybe medicine or law), and even the strategies used to reduce the complexity (modularization and encapsulation) that work in fields like engineering are, in software, often broken or ineffective due to poor design. (Imagine if the person designing the plumbing system in a skyscraper couldn't rely on the walls and floors staying in the same place. He'd have to inv

      • That's silly. Should the assembly line workers who made the Pinto be ashamed? Nobody sees them.
    • Your response strikes me as typical of programmers in that they don't recognize how their work can affect a great deal more people than almost all of the examples you cite. With the possible exception of mishandling food, none of the other examples come close to affecting the same order of magnitude of people as programmers can.

      The recent VW emissions scandal is a perfect example: VW's proprietary software was used in around 11 million VW cars worldwide (that VW admits to) from model years 2009-2015. Compar

  • by symes ( 835608 ) on Tuesday November 22, 2016 @08:50AM (#53338445) Journal

    I avoid shame by making sure my code does not work and is not seen by anyone else.

  • I also saw Robert Martin's talk, and generally agreed with it. I am generally surprised (and disappointed) at how insensitive and irresponsible colleagues are. At the end of the day, most just want their salaries in their bank accounts and to keep their jobs. Software engineers appear to be just like everyone else. So I agree with Martin that laws are badly needed... to root out the cowboy/hacker attitude for this coming century. On the subject of Martin's talk, I actually bought his book "Clean Code" as a re
  • And even back in 1985, software was killing people, gruesomely [wikipedia.org]. Probably before.

    And it was testing that failed back then. Nothing much changes. Agile process has given project managers a way to avoid testing as a function, and so there is no real testing. Hilarity ensues.

    • by Ihlosi ( 895663 )
      And it was testing that failed back then.

      In that particular case, it wasn't just testing.

      The bug could have been caught by someone reviewing the code, if the manufacturer had spent the money.

      The bug could have been caught during testing ... maybe (there was some randomness involved).

      The bug could have been caught after the first reports from users about erratic behavior of the device came in.

  • I code in COBOL.

    I'm so sorry. Please forgive me.

  • by in10se ( 472253 ) on Tuesday November 22, 2016 @09:11AM (#53338549) Homepage

    That escalated quickly. I assumed they were talking about how ashamed I am about that spaghetti code I wrote for my client last week, not killing people.

  • by petes_PoV ( 912422 ) on Tuesday November 22, 2016 @09:15AM (#53338567)
    This piece isn't about the code per se. It is about the use it is put to.

    Some people might set out to write software that is ONLY usable for malevolent purposes - and they could be fully aware of this when they do the job and deliver the result. Just like some people will work in cigarette factories. Or design more "efficient" land mines.

    However, the vast majority of software that is used for evil can also be used for good. Take GPS for example. It can be used to guide ambulances to accident victims and it can be used to guide missiles to their targets (it can also be used to make those missiles more accurate thereby reducing collateral damage - go figure).

    Is the person who invented the for() loop responsible for all the unknown uses it is put to? Is the team that fixes a bug in a car's firmware responsible for saving lives? These are unknowable points. The best that programmers (and testers and designers) can do is to produce high-quality work that fits within their ethical framework. Then sleep easy at night.

  • Being ashamed of your code is a completely different thing than being misled into writing code for something unethical.

    But the other side of the coin is there had to be enough clues to make you aware there was something fishy.

    Are/were you desperate? Or are your blinders that big?

    So yes, you should be ashamed...

  • I recently started reading A Deepness in the Sky (https://en.wikipedia.org/wiki/A_Deepness_in_the_Sky ) by Vernor Vinge, who briefly touches on this issue.

    He speculates that eventually hardware will stabilize, allowing code written over a period of centuries to still be used. He says that bugs in old code (the original designer, coder, and maintainer long dead) eventually cause more deaths than hazardous activities like space travel.

    I can see how code that hasn't needed TLC for years but is still used extensiv

  • by ledow ( 319597 )

    Not ashamed of anything I do professionally, as if I'm made to do something I don't like, I air my objections and go full "I told you so" if it fails, and make sure the responsibility lies in the direction of the people who overruled me.

    Anything more shameful, I wouldn't be doing it. The people who code malicious junk just because their company wants it? Those people might well have something to be ashamed of. Even if they "moved on soon after" or whatever. You should have just not done it if you were t

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Tuesday November 22, 2016 @09:32AM (#53338657)

    The very nature and absolutely empowering thing about software is that it is flexible. It is purpose abstracted from the machine that fulfills the purpose. It doesn't get any more awesome than that. That's why we are gods in our own little worlds of the systems we work on, and that's why we love tinkering with and building software.
    That's the whole point of it.

    So when I dig up the old EDI connector/serial-processing ERP software I wrote 14 years ago in Python, I don't think "OMG! What was I thinking? This abstraction is non-existent, and where it does exist it's abysmal!". I think about the other things: How I wrote the filters for Amazon Marketplace before they even had an open API. How chains of regexes filtered the competitors' pricing and how another script adjusted ours one cent cheaper than the cheapest offering of our competitors. How orders went from 15 to 120 the day we started using those scripts. How we were adjusting to changes in Amazon's website on a daily basis, and how we built a billing system with Python, RTF templates, and a CLI-automated OpenOffice.

    If you looked at that code today without context, you'd think it's some bizarre experiment or something. But it pushed our revenue back then from 20,000 Euros to 480,000 Euros.

    The same goes for the very first Flash video and multimedia streaming client that I built. The code looks a mess and the player works very strangely. But we had to find out the hard way that you have to render your objects off-screen in order to force the Flash player of that era to actually load them. Likewise, if you called an XML object in Flash 4 / ActionScript 2 without instancing it, Internet Explorer would reference the file on the remote server, reloading it every time and causing traffic to skyrocket from 40 KB to 3 MB. (Note: This was before DSL, when 64kbit ISDN was avant-garde for internet access.) ... That's why that piece of code makes an 'unnecessary' copy of that XML object and deletes the old one.

    Likewise today, using WordPress, I see abysmal software architecture in the whole way WP is built. But not for a moment do I delude myself into thinking it could've been done right from the very beginning. WP has grown historically, and design decisions made back then might have had very good reasons, even from a software architect's perspective.

    So, long story short, I'm not really ashamed of my code. At all.
    I know where it came from and what it did and why.

  • However, my very first commercial project ended up an ugly glob of assembly with a little bit of C mixed in, which I hopefully will never have to look at again in my life.

    The product sold almost a million times and didn't cause any deaths, to my knowledge.

    My ability to produce readable, maintainable, and debuggable code has improved significantly since then.

    • Readable and maintainable code is somewhat overemphasized these days. Half the time, so-called maintainable projects should probably be rewritten, and the other half of the time nobody wants you to change a thing about the code because of the scary "risk" monster.

      I get a lot of not invented here sort of nonsense where a perfectly fine bit of software is discarded so that some new team can build it from the ground up. It makes me not want to go to the effort to design a reasonable and extensible API.

      PS - as

      • by Ihlosi ( 895663 )
        readable and maintainable code is somewhat overemphasized these days.

        Well, in my line of work I've realized there is a fair chance of products coming back to bite me for a couple of years after they are released. Making the code maintainable is a sanity issue, since I'm the one who will be doing the maintenance if necessary.

        PS - assembler is quite "debuggable"

        Yes, if you stick to a few simple rules (like avoiding overly long functions and such), and if you use an architecture with an assembly dialec

          I don't really impose any particular rules; much of the code I go through was written by dozens if not hundreds of other C developers. Sometimes functions in the kernel can be very long, several kilobytes in some cases, which is unfortunate but not impossible to debug.
          As for the dialect of assembler, I'm not sure what you mean. I've done everything from MIPS, to PPC, to ARM to x86-64. The only real confusion is gas or intel order for operands when dealing with the disassembler. Some of the branch registers

  • by Chris Katko ( 2923353 ) on Tuesday November 22, 2016 @09:41AM (#53338705)

    If they wanted quality code, they should have paid for the time to create it correctly in the first place.

    Since nobody wants to pay for quality code, then they get exactly what they paid for. Something that gets assembled as quickly as possible to maximize the return on investment.

    • I've had several companies tell me they were very interested in improving the security of their software. Then, when I found some security issues and suggested we fix them, I was told to stop wasting their time with irrelevant stuff. The most recent time this happened was 2 weeks ago.

      One company, a few years ago, was particularly bad. One of the vulnerabilities I'd pointed out was exploited a couple months after, resulting in the compromise of a server. That server had full access to a database full of HIPAA-p

  • by Pascoea ( 968200 ) on Tuesday November 22, 2016 @09:42AM (#53338717)

    If you don't have at least one project you've worked on that you aren't proud of, you probably aren't a programmer.

    Mine is a steaming turd of a purchase order request "system". Written in VBS inside of Excel, talking to a MySQL database back end.

    It's not pretty. It's not fast. It's not sustainable. But it met the purpose of the request using the tools I had available to me. And it was better than what they were using at the time: 5 project managers all writing purchase orders, or not, off of their local computers. Most of the time the PO wouldn't get to the Accounts Payable person, and if it did, it was usually a hard copy. There were no backups. Nobody knew how the database worked. Nobody knew how to re-install it, or where the source for it was. Many, many dollars lost.

    I would never have imagined they would still be using that system; this was 8 or 9 years ago. I have a feeling that's how most of these embarrassments work: a fast and messy fix to an immediate problem that will be "replaced eventually" but never gets touched again.

  • It's great when as individuals we have the luxury of choosing where we work. I'm at a point in my career where I have that luxury and I use it. I'd leave a place that frustrates me enough, either in terms of mission, management, or coworkers. As a group though, a lot of us lack that choice, and even for those who do, when they step away the employer will just find someone else to deal with the crap they left behind, because funds are sustenance, we've all got to eat, and if there are spare funds to hire peo

  • "We are killing people," Martin says. "We did not get into this business to kill people.

    Speak for yourself buddy! I have a counter for my kills. [digitaldisplay.com] ;)

  • I am ashamed of my code!

    My code is an abomination that must be ejected from the visible universe!

    We must send a message that will pass through the Big Crunch that will alert future universes that my code should not exist in any possible future!

    Etc.

    I am so ashamed.

  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Tuesday November 22, 2016 @10:41AM (#53339015) Journal

    ... we are not paid to agree with our employer about everything we have to develop, we are just paid to deliver the requested work on time.

    If one has an insurmountable ethical issue with what they are developing for an employer, the only viable recourses are to find an alternative employer or to become self-employed.

  • If you are going to say "programmers should refuse to program unethical things", remember that this will be taken to heart by programmers who have ethics different from yours. It's not only going to apply to drug websites.

    We don't *want* programmers who are working on web sites for abortion clinics to do all they can to end the project because they have an "obligation" to avoid the "unethical behavior". (If you happen to be against abortion, imagine an example using a gun-rights website instead.)

  • Right now, the majority of people writing code are writing code because they're being paid to do so, either by individuals or more likely, a business.

    Those people, in turn, are not hiring programmers purely for altruism, they're doing it to achieve some goal, usually profit, increase in efficiency, and so on. They have to cope with estimating the curve for diminishing returns. They can figure out having a product that works 95% of the time, or one that works 96% of the time but costs 2x as much and takes

  • But that's because I learned a lot in the last 12 months, going from Fortran 77 to Java. That process is called "learning".
  • by fahrbot-bot ( 874524 ) on Tuesday November 22, 2016 @11:28AM (#53339335)

    And dozens of people have already been killed by faulty software in cars, while hundreds of people have been killed from faulty software during air travel.

    ... the number of suicides by those early users of /. Beta may never be known ...

  • That ship sailed decades ago. See WWII + IBM + Germany.
  • by Virtucon ( 127420 )

    No, I write perfect code. It's always perfect because I wrote it. Don't bother reviewing it, it's perfect and works as designed. I designed it.
    Go away, you're preventing me from writing more code.

  • Sure, it goes wrong now and then, but overall I'm quite positive code saves lives. For every aircraft that crashes, there are probably hundreds if not thousands of crashes that did not happen because of all the pilot aids. For every car crash that did happen, there are hundreds if not thousands that did not happen thanks to all those driver aids - including Tesla's autopilot. Software makes hospitals more efficient, helps to diagnose diseases and develop medication faster, may help doing a quick cross check o

  • Whenever I find myself spending a lot of time coding compatibility so a closed source OS will work, I do feel the spring being sucked out of my step for prolonging the lifetime of said closed source product by improving its usefulness.
