Programming Technology

Alan Cox on Writing Better Software 391

Andy Gimblett writes "Alan Cox recently gave a talk in which he discussed some current and emerging techniques for producing higher quality software. Some of these will be familiar to Slashdot readers, such as validation tools, type checking, etc., but others seem heavily influenced by his recent MBA. In particular, he has a lot to say about Quality Assurance in the software world, and the kinds of things we should be doing (and some people are doing) to make better software. Story and lots of quotes at Ping Wales, and video at IT Wales."
This discussion has been archived. No new comments can be posted.
  • Quality (Score:5, Funny)

    by mfh ( 56 ) on Friday October 08, 2004 @07:53AM (#10468773) Homepage Journal
    he has a lot to say about Quality Assurance in the software world

    Quality Assurance in 4 easy steps!

    Dear Managers,

    1. Listen
    2. Close your mouth
    3. Plan everything around #1
    4. Profit!!! (notice there is no line with ??? because you listened!)
    • Re:Quality (Score:4, Interesting)

      by jc42 ( 318812 ) on Friday October 08, 2004 @12:43PM (#10471847) Homepage Journal
      1. Listen
      2. Close your mouth ...


      One problem with this: By listening with your mouth closed, you run a very real risk of misunderstanding what you're hearing. Without feedback to verify that you're understanding, you are highly likely to do things completely wrong.

      Funny example: A few years ago, I was implementing an interactive web site, and we had a nice test server machine in the lab. In one discussion with The Mgt, I casually mentioned that for testing, I thought we needed to run another server.

      I didn't hear any response, so I went ahead and cloned the apache server that we were using, and fired it up on a different port. 10 minutes work, and we proceeded to test.

      A while later, I discovered just in time that the managers had heard me say that we needed a second server machine, and were ordering another one. After a bit of discussion, I realized that to them, a server is a piece of hardware, while to us software guys, it is a piece of software.

      I managed to explain to them that, no, we didn't need a new machine. I'd already set up a second web server on the old machine, and it was working fine. There was still time to cancel the hardware order.

      But still, they had done a bunch of unnecessary work, and had almost spent a good sum of money unnecessarily, all because they had listened carefully without asking questions. If they had asked what hardware the new server needed, I'd have realized quickly that there was a misunderstanding, and we could all have saved a bit of time.

      Listening without interacting, and acting on your understanding of what you heard, can lead to a lot of serious problems.

    • Re:Quality (Score:5, Interesting)

      by fyngyrz ( 762201 ) on Friday October 08, 2004 @01:15PM (#10472317) Homepage Journal
      Quality Assurance in 6 Easy Steps:

      1. Fire the managers
      2. Fire the beancounters
        (steps 1 and 2 may be reversed if required)
      3. Fire anyone who even mentions the word "committee"
      4. Keep labs open 24/365. Apply metered, but 100% flexible, work-hour rules for programmers (i.e., "You have to work 40 hours/week, but it doesn't matter which 40 hours.")
      5. Provide superb comfort and amenities to programmers
      6. Create significant bonus program tied directly to number of problems found in each programmer's code per release cycle. Zero problems, max bonus, more problems, less or no bonus. Stir to taste.

      Enjoy good results.

  • Unit testing? (Score:5, Interesting)

    by JohnGrahamCumming ( 684871 ) * <slashdot.jgc@org> on Friday October 08, 2004 @07:54AM (#10468785) Homepage Journal
    It's quite surprising to me that there's no mention here of unit testing, or any sort of automated testing where the engineers who wrote the code actually write tests.

    It should, by now, be clear that "code that doesn't have tests, doesn't work", and that if XP has done anything for us, it's to focus on writing tests. I've seen this in action and it works.

    John.
    • Re:Unit testing? (Score:5, Insightful)

      by bloggins02 ( 468782 ) on Friday October 08, 2004 @08:05AM (#10468858)
      Unit testing is essential, but it's not a panacea. In particular, beware of two pitfalls:

      1) "The unit tests passed, so it works." This assumption is flawed on several levels. First, and most fundamentally, even if all unit tests pass, there is still the issue of whether your software works as a whole. Software often has "emergent logic" and UI scenarios that are difficult or impossible to test (after all, that's not what unit testing is for, but some people seem to think it is).

      Second, just because a test passes, doesn't always mean the API works. This is especially important if you didn't write the tests yourself. Just because a unit test CLAIMS it tests X, doesn't mean it does. Is the test complete? Any false positives? Is the test just a skeleton that was intended to be implemented later but never was? I've had all these bite me in the past.

      2) "That particular test has NEVER passed, so there's something wrong with the test. We just ignore it now." Bzzzt! Wrong! There's a REASON it never passed. Either the test isn't implemented properly, it's just a stub that fails while waiting for someone to write an implementation, or the feature it tests doesn't actually work, even though you think it does. Look closer. The test might be trying to tell you something.

      If you are careful with unit tests, they can be very rewarding and useful (especially for regression testing, where they are invaluable), but put too much confidence in them or depend on them to do the kind of overall testing they were never designed to do, and you will fail long before your first test does.
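A minimal sketch of the first pitfall in Python (the function and test names are invented for illustration): a "skeleton" test passes while verifying nothing, which is exactly how "the unit tests passed, so it works" goes wrong.

```python
import unittest

def normalize_username(name):
    # Hypothetical unit under test: trim whitespace and lowercase.
    return name.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    def test_skeleton(self):
        # A skeleton left "to be implemented later": it always passes,
        # but it verifies nothing about normalize_username().
        pass

    def test_real(self):
        # A real assertion actually exercises the code.
        self.assertEqual(normalize_username("  Alice "), "alice")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeUsername)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Both tests come back green, but only one of them tells you anything; a passing bar alone doesn't say what was actually checked.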
      • You do not notice major differences until you write unit tests first, as part of the design process, before writing the OO code...

        Because it short-cuts the design phase by taking you down fewer unworkable roads, it improves the design while decreasing the time to complete the code.

        That is why the whole complete process is called agile!

        Unit testing after the design is not very effective, except to catch where things break after a change
        • Hye, cal lyour pear pro gramign budy! You rtypign skils a ren t up to scrtach wehny our alon e!

          Man, if only there were unit tests for slashdot postings... Ah, wait, we have those (lameness filter!) and they don't help at all!

      • Re:Unit testing? (Score:2, Interesting)

        by Anonymous Coward
        Anyone ever notice DO-254 (That pesky avionics software development guideline) never mentions unit testing? There's a reason for that.

        A solid requirements-based development process and requirements-based testing/verification process is the key to large, high-quality software. In my opinion, formalized unit testing tends to hide errant system-level behavior. Sure, it aids an individual developer in understanding that their code works as they intended, and should stay a vital part of low-level development. But
        • Re:Unit testing? (Score:4, Insightful)

          by richieb ( 3277 ) <richieb@@@gmail...com> on Friday October 08, 2004 @10:41AM (#10470091) Homepage Journal
          A solid requirements-based development process and requirements-based testing/verification process is the key to large, high-quality software.

          You tacitly assume that it is possible to get solid requirements. When writing avionics software, I'm sure you can, because the problem is well understood and we have good science/math to back it up (e.g. a GPS nav system).

          But in most software projects it is impossible to create requirements ahead of time, mostly because the problem you are trying to solve is new and we don't understand it well enough yet.

          Are there requirements for the web browser? Were they created before the code was written? What about requirements for MS Word?

        • Re:Unit testing? (Score:3, Insightful)

          by dubl-u ( 51156 ) *
          But I've come to realize, it [unit testing] just doesn't help to reduce big errors in system level design.

          That's like saying that street maps don't tell you what the continent looks like. It's technically true, but it seems to miss the point.

          Unit tests are for testing relatively small chunks of work. If you want to be sure the pieces work together, you do testing at higher levels. I think both are necessary for a solid system.

          Personally, I think of my high-level tests as executable requirements. Every t
    • Re:Unit testing? (Score:4, Insightful)

      by cerberusss ( 660701 ) on Friday October 08, 2004 @08:07AM (#10468872) Journal
      What people should realize is that it can cost quite a bit of effort to build a unit test. E.g., if you write some code that reads several rows from a database and, as a result, writes a bunch of rows into another table, you need to run scripts before and after your unit test. It's of course all doable; it's just that it's sometimes more work than writing the method itself.
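The before/after scripts described above map naturally onto test fixtures. A sketch using Python's unittest with an in-memory SQLite database (the table layout and the function under test are invented for illustration):

```python
import sqlite3
import unittest

def copy_paid_orders(conn):
    # Hypothetical unit under test: reads rows from one table and,
    # as a result, writes derived rows into another.
    rows = conn.execute("SELECT id, total FROM orders WHERE paid = 1").fetchall()
    conn.executemany("INSERT INTO invoices (order_id, amount) VALUES (?, ?)", rows)

class TestCopyPaidOrders(unittest.TestCase):
    def setUp(self):
        # The "script before": build the schema and seed data.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER, total REAL, paid INTEGER)")
        self.conn.execute("CREATE TABLE invoices (order_id INTEGER, amount REAL)")
        self.conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                              [(1, 9.99, 1), (2, 5.00, 0)])

    def tearDown(self):
        # The "script after": throw the fixture away.
        self.conn.close()

    def test_only_paid_orders_are_invoiced(self):
        copy_paid_orders(self.conn)
        rows = self.conn.execute("SELECT order_id, amount FROM invoices").fetchall()
        self.assertEqual(rows, [(1, 9.99)])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCopyPaidOrders)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

It is indeed more code than the method itself, which is the point being made; the fixture only pays for itself once the method starts changing.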
      • Re:Unit testing? (Score:5, Insightful)

        by Unkle ( 586324 ) on Friday October 08, 2004 @08:15AM (#10468936)
        But that's really the reason it's not done all that often. Developers think that the unit test will be a waste of time. The problem is, nobody codes perfectly. Finding a bug during unit testing is much better than finding it in design validation testing. Indeed, on many projects I have worked on, issues that could have been found with adequate unit testing were not found until after release.

        Unfortunately, when schedules get tight, it's things like unit testing (and testing in general) that get cut. The more emphasis we get on the importance of QA the better our industry will be.

      • Re:Unit testing? (Score:3, Insightful)

        by Anonymous Coward
        Study Mock objects [mockobjects.com]
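The idea behind mock objects, sketched with Python's unittest.mock (the payment-gateway function is invented for illustration): the unit talks to a stand-in collaborator, so the test needs no real database or network.

```python
from unittest import mock

def charge_customer(gateway, customer_id, amount):
    # Hypothetical unit under test: delegates to an external gateway.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(customer_id, amount)

# The mock stands in for the real payment gateway.
gateway = mock.Mock()
gateway.charge.return_value = "receipt-1"

receipt = charge_customer(gateway, 42, 9.99)

# Assert both the result and how the collaborator was used.
assert receipt == "receipt-1"
gateway.charge.assert_called_once_with(42, 9.99)
```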
      • You only need to validate that the functionality in the method itself is working. From your sig it seems you're using Java. JUnit with an embedded Axion database allows for extremely easy testing of a method that uses a database connection for its work. Of course, you have to set up your test by creating a table and inserting any base data. Getting that basic framework is as simple as writing the method itself. You'll probably double your time in writing the actual method. I'd think the return on that inv
    • by wowbagger ( 69688 ) on Friday October 08, 2004 @08:24AM (#10469004) Homepage Journal
      More importantly, write the tests FIRST. Then write the code, updating the tests for anything that is identified during the coding.

      This has several important benefits:
      1. You have to DEFINE what the module is to do so that you can write the tests. Granted, the first pass for the "tests" may simply return "failed" until something is written for the module, but at least you will have a chance to think about what you should be testing.
      2. You actually DO write the tests, rather than blowing them off. If your manager says "Why aren't you working on $blarg?" you can say "I *am* working on $blarg" since the first step is writing the tests.
      3. As you get functionality working (as demonstrated by the tests passing) you can quickly determine if a later addition to the code breaks the working feature - and fix it while the change is still fresh in your mind (and hopefully BEFORE you commit your changes to the mainline code path (you ARE using a source code control system, aren't you?))
      4. You can automate the testing of the system.
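The write-the-tests-FIRST loop described above, in miniature (Python sketch; the function is invented for illustration):

```python
import unittest

# Step 1: define what the module must do by writing the test first.
class TestParseVersion(unittest.TestCase):
    def test_parses_dotted_version(self):
        self.assertEqual(parse_version("2.4.27"), (2, 4, 27))

# Running the suite now fails: parse_version doesn't exist yet.
# Step 2: write just enough code to make the test pass.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

# Step 3: re-run the whole suite after every later change, so a
# regression is caught while the change is still fresh in your mind.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParseVersion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```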

      • What? Projecting the test cases before the software is written? If you plan to do some black-box testing this is acceptable, since you can derive the test cases from the software's formal specification. But I guess you know there are software testing approaches other than functional (black-box) testing. If you intend to do some structural testing (white box), it is impossible to write test cases before writing the software, since the testing requirements are defined by analyzing the source code (that's why it is called white box testing).
        • If you intend to do some structural testing (white box) it is impossible to write test cases before writing the software ...
          Three words: test-driven development. [google.com]
        • What? Projecting the test cases before the software is written?

          It's an iterative process. You don't necessarily write a complete suite of tests for your interfaces before you start writing a single implementation. Someone in QA might think of a unit test as white box, but they tend to be black box from the perspective of the developer. You should be able to write a unit test before writing the unit.

          The point of most unit tests is to verify an implementation's conformance to its interface. When you late


        • If you intend to do some structural testing (white box) it is impossible to write test cases before writing the software, since the testing requirements are defined by analyzing the source code (that's why it is called white box testing).

          This is only partly correct. If you design your software via Use Cases and Scenarios or User Stories, then the possible paths are to a great degree predefined. That means you can do black-box tests without any need of white-box tests.
          To check whether your black box test c
      • "More importantly, write the tests FIRST. Then write the code"

        But write which tests first? There are so many possible. Usually there are more ways for something to malfunction than for it to function correctly.

        How do you write a test to check that a banking application does not allow a customer to cancel other customers' cheques? How do you write a test to check that someone didn't allow SQL injection? How do you write a test to ensure that a user account cannot do what it is not authorized to?

        Since ther
        • First, you confuse "unit testing" with "application testing". You are writing the tests for a *part* of the app, not the whole. So, to use your example, you are writing the tests for parsing a customer's auto-pay entries - not the whole app.

          SO:

          Your first unit test is a simple "return pass"

          You run your test framework and verify everything works.

          Commit the changes into the main line branch.

          You then change your unit test to "return fail". Run test framework and verify it fails. (NOTE: You do all of this I
        • You don't catch SQL injection issues with unit testing; you catch them with type checking. A SQL injection occurs when someone uses a string plus quotes as a partial SQL statement. There should be a different type for partial SQL statements, and the operation to make one for a string constant out of a string should escape the string appropriately. Alternatively, you could just use prepared statements exclusively and make your database happy as well as making SQL injections totally impossible (as the databas
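The prepared-statement alternative mentioned above can be sketched with parameterized queries in Python's sqlite3 (the table is invented for illustration). The driver binds the hostile string as plain data instead of letting it rewrite the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "x' OR '1'='1"  # classic injection payload

# Unsafe: string concatenation lets the payload alter the WHERE clause,
# so the query matches every row.
unsafe_sql = "SELECT name FROM users WHERE name = '" + hostile + "'"
leaked = conn.execute(unsafe_sql).fetchall()

# Safe: the placeholder binds the payload as a value, not as SQL text.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()

assert leaked == [("alice",)]  # injection succeeded
assert safe == []              # nobody is literally named "x' OR '1'='1"
```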
      • There are some more reasons:

        1. You can be more confident when you begin refactoring mature code-bases. This, for me, is the clincher: code never stands still, but the tests can be a constant - a permanent measure.

        2. If it's an API, you have working examples to show people.

        3. for years, I used informal, undocumented tests. Handing-over was always going to be bad. Now, hand-overs are a, uh, doddle.

        4. Progress. Nothing like some concrete test results to show people. Most test suites show results as HTML. Put
    • Re:Unit testing? (Score:3, Insightful)

      by OP_Boot ( 714046 )
      the engineers who wrote the code actually write tests

      Do NOT get the engineer who wrote the code to also write the test.

      It's fairly fundamental - the engineer who wrote it will have a prejudiced view of what should/will work.

      Get someone else to do it and get a valuable fresh insight.
    • Unit testing doesn't find bugs; it just ensures that you didn't regress the obvious or break something that you previously fixed.

      With end user applications, on average, a couple hours of active ad-hoc testing and test case development per week finds 95% more bugs than hundreds to thousands of automated unit tests will.

      For an API, unit testing might be more effective, but APIs are much simpler to test than full end user applications.
    • Unit tests are no more than glorified sanity checks.

      The advantage of unit tests is that, once written, they can be used again and again, which is useful for verifying that a change to the unit did not break its previously tested behavior.

      The disadvantage of unit tests is that they can themselves contain bugs. It's good to know your program passed the tests, but were the tests good? Did you really test what you thought you were testing? Was what you thought you tested actually the right thing? Did you not
      • I disagree.

        First, it is not necessary for unit tests to be absolutely complete to be useful. Anything that finds a hitherto-undiscovered flaw is valuable - extremely valuable.

        Second, a bug in a unit test is not, in practise, that big of a deal. At worst it means you haven't tested a unit as thoroughly as you think you have. Unfortunate, but not a disaster. Perhaps two people should write their own unit tests for a single module, and then compare the bugs they found in the module. An in

  • 2 words (Score:3, Insightful)

    by otisg ( 92803 ) on Friday October 08, 2004 @07:56AM (#10468796) Homepage Journal
    unit tests

    not a panacea, but it does go far.
    • Re:2 words (Score:3, Insightful)

      by ceeam ( 39911 )
      That's a nice thing when you do your project bottom-to-top, i.e. when you have some engine tossing around your data and stuff, and then write the UI on top of it. (The UI is not very unit-testing-friendly, is it?)

      Unfortunately, IMHO, most software is done the other way - let's put pretty gadgets around, cool nice icons, and then we'll do the job in event handlers somehow. Here it all breaks loose.

      Why it is done that way is easy to see - managers/supervisors are not interested in you doing something behind-the-scenes for we
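One common remedy for the do-the-job-in-event-handlers problem described above: keep the engine out of the handlers, so the logic stays unit-testable without any UI (Python sketch; all names are invented for illustration):

```python
# Engine: pure logic, trivially unit-testable.
def apply_discount(total, code):
    # Hypothetical business rule, free of any UI dependency.
    return round(total * 0.9, 2) if code == "SAVE10" else total

# UI layer: the event handler only wires widgets to the engine.
def on_checkout_clicked(get_total, get_code, show):
    show("Total: %.2f" % apply_discount(get_total(), get_code()))

# The engine can be tested directly...
assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(100.0, "") == 100.0

# ...and even the handler, by passing fakes instead of real widgets.
shown = []
on_checkout_clicked(lambda: 50.0, lambda: "SAVE10", shown.append)
assert shown == ["Total: 45.00"]
```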

      • Re:2 words (Score:5, Funny)

        by Unkle ( 586324 ) on Friday October 08, 2004 @08:20AM (#10468970)
        I actually had a coworker marked down on his yearly review last year for wanting to write re-usable code. Our manager's (very VERY flawed) opinion was that, though it might be nice to have the re-usable code, he should just write it for this specific task because that's easier and faster.

        The kicker is, this year that same manager wants to re-use the code that my coworker was originally going to write.

        • Re:2 words (Score:5, Funny)

          by johannesg ( 664142 ) on Friday October 08, 2004 @08:29AM (#10469040)
          Ha, that's nothing. Long ago I used to work for a company that thought it was a good idea to NEVER reuse code because then you would reuse all the bugs as well. OTOH, they reasoned, if you wrote it from scratch you wouldn't be copying old bugs, thus this was a safer thing to do.

          I'll leave the results as an exercise for the reader...

          • Re:2 words (Score:5, Informative)

            by sootman ( 158191 ) on Friday October 08, 2004 @12:14PM (#10471425) Homepage Journal
            And for those who don't like exercise, I suggest Joel [joelonsoftware.com]. Samples:

            "The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. There's nothing wrong with it. It doesn't acquire bugs just by sitting around on your hard drive. Au contraire, baby!... it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes.

            One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

            Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it's like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters. When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."
  • "free" ? (Score:2, Insightful)

    by mirko ( 198274 )
    I see that the word "Free" doesn't appear in this story's synopsis.
    How would Alan apply his quality methods to projects whose members might never meet due to geographical contingencies?
  • Good code... (Score:5, Insightful)

    by TrollBridge ( 550878 ) on Friday October 08, 2004 @07:59AM (#10468809) Homepage Journal
    ...isn't just for end-users! If you anticipate that others will be working on future versions/releases of your software, good commenting can make the job SOOOOO much easier for the next codemonkey.

    I'd say commenting is doubly important in OSS projects, as it involves many sets of eyes trying to comprehend what you coded.
    • Re:Good code... (Score:5, Insightful)

      by Dr. Evil ( 3501 ) on Friday October 08, 2004 @08:05AM (#10468857)

      Commenting must be 100% accurate else it is detrimental to understanding the code.

      Sometimes code changes don't result in updated comments...

      Once I find an inaccurate comment in somebody's code, I have to start rewriting or deleting all the comments because I can't trust them anymore.

      • Mod parent comment (pun intended) up ... comments are evil. Code should first and foremost be self-documenting, by the choice of proper variable names and subroutine names. Only comment things that are not obvious, like tricks that are employed. The point is that comments are not read nearly as much as names, and get stale more quickly, rendering them useless.
        • In the Fortran variant I was working in until a few years ago, local variable names were limited to five (5) characters, and many otherwise obvious functions were performed by a series of external subroutines with six-character names usually beginning with SY (for "System").

          That meant that the code itself could be quite hard to follow without some sort of logical commentary accompanying it.

          I tended to comment my code quite heavily in that context, since I'd already had experience with uncommented code in
        • I hear this a lot, but what's really the problem here? Is it because the comments aged poorly, or because a programmer didn't do his job? What's the cure? Stop commenting, or make sure your coders are properly doing their job?
      • Re:Good code... (Score:3, Interesting)

        by iabervon ( 1971 )
        Comments are valuable at a relatively high level, where you describe what the code is supposed to do. The code itself documents what it actually does, but there's no way for a person reading it later to determine if it is doing what it is supposed to do without a comment that explains the intent. For example: "// Long divide in place, junking the data, but leaving the remainder" is very useful to the person who comes along later and doesn't know that the code actually wants this odd result, and wouldn't have
    • Re:Good code... (Score:5, Insightful)

      by johannesg ( 664142 ) on Friday October 08, 2004 @08:44AM (#10469120)
      That, and good naming of variables / functions / classes. To clear up any possible confusion: that means that the name has some bearing on what the thing in question is doing for you...

      Recently I had the misfortune to wade through a few hundred kilobytes of Java that was written by someone who thought he should 'abstract' everything as much as was humanly possible. Sounds good, right? Well... It turns out you can do a lot of harm that way, too. I don't think he had a single function in there that _wasn't_ called something like SetProperty(), GetValue(), DoFunction(), etc. There was absolutely no way to guess what it was doing based on the names of the functions. The naming of classes and variables wasn't much better. After looking at it for a couple of hours, I don't think I could have guessed what it was trying to do if I hadn't already known that beforehand.

      So, next time you are writing software, feel free to get in touch with reality and name stuff after what it is supposed to be doing. Nice long names please, no abbreviations unless you go over 30 or 40 characters. Down with CmtPmt2Db(), down with SCUPD(), and down with GetPropertyValueInterfaceCaller()!

      Because, be honest: those mean nothing, while CommitPaymentToDatabase(), ScreenUpdate(), and GetXLocation(), have intuitive meanings we all understand...

      • Re:Good code... (Score:3, Insightful)

        by ajs318 ( 655362 )
        You see, that's the point. There's no point being too clever -- you can use the programming language itself as a sort of "generality-of-purpose abstraction layer". After all, if a language is computationally complete then as a matter of definition, any task you might want to perform can be implemented using that language. There is no shame in writing a program to do just the thing it has to do. It's far better to be able to play one song all the way through from start to finish with no mistakes, than an
    • WHY, not what (Score:5, Informative)

      by wowbagger ( 69688 ) on Friday October 08, 2004 @09:32AM (#10469396) Homepage Journal
      The single best rule of commenting is "Comment upon the WHY, not the WHAT".

      Don't say:

      double sum = 0.0;               // zero sum
      for (i = 0; i < len; ++i)       // all samples
      {
          double signal = buf[i];     // get value
          sum += signal*signal;       // add signal^2 to sum
      }
      return sqrt(sum/len);


      say:

      // Compute the RMS average of the signal:
      // sum the squares of all values, compute the
      // mean (sum/len), then take the root of the
      // mean - hence Root-Mean-Squared.

      double sum = 0.0;
      for (i = 0; i < len; ++i)
      {
          double signal = buf[i];
          sum += signal*signal;
      }
      return sqrt(sum/len);

      In other words, tell me WHY this code added the square of the signal, not THAT it added the square of the signal.

      Moreover:
      1. Every directory of the project should have a purpose, and should contain a short README describing the purpose of the code in the directory.
      2. Every file should have a header that describes the purpose of the file.
      3. Every function should have a header describing the purpose of the function, as well as any inputs and outputs (including global, static, and class variables), any exceptions thrown, and any assumptions made.
      4. Blocks of code within a (large) subroutine should have a descriptive block comment describing the overall goal of that block (e.g. "Now we walk the list of conversations and update the call stats").
      5. Comments on a line of code shouldn't be needed normally - the code should speak for itself. Only pretty tricky things ("use a shift and add rather than a multiply as this saves 3 clock cycles") should need a comment at end of line.

      If more folks would follow these rules it would be a HELL of a lot easier to follow their code.

      NOTE: If you can say it in the code, do so - if you can specify the exceptions to your function via a "throw(int code)" type statement, then do so rather than using a comment.

      Remember - the code tells the COMPILER and the programmer what is going on, the comments tell the programmer WHY it is going on.
  • Alan Cox (Score:5, Funny)

    by Anonymous Coward on Friday October 08, 2004 @07:59AM (#10468812)
    Who cares about what Alan Cox has to say? He's soooo 2.4
  • by Frans Faase ( 648933 ) on Friday October 08, 2004 @08:04AM (#10468850) Homepage
    The most effective technique for finding defects is still code review. It seems that pair programming is a very good way of doing continuous code review.

    However, the greatest problem with writing good software is still in the marketing. In order to sell/license software it needs to have features, and the lack of defects often does not count as a "feature".

    • by ZorbaTHut ( 126196 ) on Friday October 08, 2004 @08:14AM (#10468930) Homepage
      Funny, that. I recently started working at a company with mandatory code reviews. Here's a list of my recent experiences with it:

      1) Checked in code. Spent fifteen minutes justifying design decisions. No changes made.

      2) Checked in code. Code contained horrible horrible bug. Code reviewer didn't see it.

      3) Checked in code. Defended my design against several more computationally expensive suggestions that were also more complicated. No changes made.

      4) Listened to a friend gripe about having to spend a DAY AND A HALF repeating design reasons and fixing bugs introduced by his code reviewer "cleaning up" his code.

      5) Received company-wide email about a build that flat-out didn't compile - apparently someone hadn't bothered compiling a patch, and had sent it to a code reviewer, who likewise hadn't bothered compiling it before authorizing it.

      Now I'll admit that there are also a whole lot of "well, it only took five minutes, so it wasn't much of a waste" cases. But so far I haven't heard one person talking about how useful the mandatory code reviews are.

      Maybe it's just an artifact of the kind of programmers working at this company, or the kind of code being worked on, but so far code reviews have been a net loss in my experience. I've taken to doing major changes in my own personal branch of the repository (which doesn't enforce the code-review requirement) and in a month or two I'll have 3000 lines of code for someone to look at - but at least I won't have nickel-and-dimed them to death with 120 100-line code reviews, 3/4 of which I will inevitably end up deleting entirely.

      Note that I'm not saying "code reviews are bad" - what I am saying is that there's a time and a place for just about every technique, and there's also a time and a place where each technique is worse than useless. Pick your battles and pick your tools.
      • We used to implement code reviews at my former workplace, but they were more "sanity checking" peer reviews than anything else. They didn't take very long, and it gave one or two other people a chance to see how understandable or generally logical the changes were.

        That type of thing doesn't work as well for large changes, but we found that for small ones it sometimes can be a useful thing.
        • This type of sanity checking is especially useful for training junior programmers. It can be very instructive for a senior programmer to sit down with a junior progammer and go through their code together. The primary purpose of a review should be to have a second set of eyes on the code but it is very valuable for training and communication as well.

          • Yes, you're quite right.

            That isn't why we implemented it at first (it was initially put in place because we had a few offshore contractors imposed on us and we wanted to do sanity checking of their code before we allowed it on the production system we supported), but we found that such code reviews could also be an excellent teaching tool when someone new came on board.
      • Sounds like your company has a bad process. Having a formal code review after each check in is crazy. Reviewing code when a task is declared completed makes more sense, or even doing regularly scheduled reviews.

        Our company has some loose rules (we're working on strengthening them) that state that checked in code must be unit tested. This is to prevent things like your #5. But we haven't gotten to code reviews yet. Being on the team that's working on our process, I'll remember your experiences when we

      • by Alan Cox ( 27532 ) on Friday October 08, 2004 @09:17AM (#10469320) Homepage
        If you are bug hunting then the code review needs to be an explanation of what the code does. "We get a foo, we have to decide if it's in US or UK format so we compute both checksums and .. oh umm.. I'll come back later"

        It's very different to things like design reviews. They have their place too. A lot of things Linus rejects are really design review things. It's not uncommon to get a "Yes this needs fixing but do it this way instead". It works well providing the person saying that has good judgement.

        Bad code reviews, bad tools, bad compilers and bad managers are _all_ useless.
      • [... Examples of how code reviews slowed down this guy's work without doing any good ...]

        If this happens to you on a regular basis then you are probably a better-than-average developer. Which is just fine as long as we find a way to make your work more average. And code reviews seem to do the job.

        Seriously, the productivity spread between developers (20 times as effective... adding more people... Thank you Mr. DeMarco) is what a lot of very strict process models and practices (such as code reviews) rea

      • Sounds like you've had a rough deal -- or there are some real idiots in your workplace; even code reviews can't solve everything!

        But I doubt they've been a total waste. For example, knowing their code is going to be reviewed soon makes some people write better code. And in explaining your design decisions, maybe your listener(s) learned something about design. (Or maybe you did!)

        My own feeling is that code reviews can be a great way of sharing knowledge about the system, and about development general

  • Testing is good (Score:4, Insightful)

    by TreadOnUS ( 796297 ) on Friday October 08, 2004 @08:09AM (#10468887) Homepage
    But the real key is reducing the number of defects introduced into software. Testing only finds existing errors. If the number of errors are low from using good requirements, design and development practices then testing becomes less expensive and time consuming.
  • by plcurechax ( 247883 ) on Friday October 08, 2004 @08:10AM (#10468892) Homepage
    My employer produces a large-ish software package with 10 years of history and a small (2-3 person) development team. Since I joined we have made massive strides in automating the build process, including some unit tests and a few smoke tests in an automated process.

    Well, the effort paid off. Before, we supported one version of HP/UX and one release of Linux; now we support HP/UX (still a pain), and 4 looking at going to 6 Linux version/distributions, and it is less work to produce a release now than it was just a year ago.

    Tools like automake [redhat.com], autoconf, libtool, cvstrac [cvstrac.org] and of course cvs have made my life bearable.
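    For anyone heading down the same road, the moving parts are small. A minimal autotools setup is roughly the following sketch (the project name, version, and source file are placeholders, not this poster's actual configuration):

    ```
    # configure.ac -- minimal sketch
    AC_INIT([myapp], [1.0])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CC
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am -- one program built from one source file
    bin_PROGRAMS = myapp
    myapp_SOURCES = main.c
    ```

    Running autoreconf --install then ./configure && make gives you a portable build out of the box, which is most of the cross-distribution win described above.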
    • now we support HP/UX (still a pain), and 4 looking at going to 6 Linux version/distributions

      Automated building and testing really pay off when adding new platforms. We recently added IA64 Linux. We already supported IA32 Linux and IA64 HP-UX, so we had most of the C and assembler we needed. Great, we should have a new platform by Friday. Well, the automated tests we've accumulated over the last five years found bugs in places I never would have thought to look if I were testing by hand. Now we're adding O

  • by Viol8 ( 599362 ) on Friday October 08, 2004 @08:10AM (#10468898) Homepage
    Is he back on full time linux development now?

    "Alax Cox gave a talk"

    Was it in Welsh? :)
  • Text Of Article (Score:2, Informative)

    by Anonymous Coward
    Launch of IT Wales Technical Computing Group- Thursday 23rd September 2004

    IT Wales, working in partnership with Know How Wales, Knowledge Exploitation Centre and Cygnus Online, has unveiled plans for an exciting new programme of events specifically targeted at computing professionals from both business and academia.

    During the launch breakfast, held in Sketty Hall Swansea, on Thursday 23rd September, Beti Williams, Director of IT Wales, outlined the aims and objectives for the group.

    "The IT Wales Advanced
  • Well, they've had the idea of requiring authorization, so the effect is the same.

    Anyone got a mirror?
  • Paths to quality (Score:3, Insightful)

    by Anonymous Coward on Friday October 08, 2004 @08:12AM (#10468920)
    The links seem dead, so I'll add my $.02. The programmer's view of the software should not be confined to the source code control system. Programmers should know how to install and implement the software they design and code. Programmers should have some personal experience supporting the software they code. NOTHING highlights the weaknesses of your software faster than being forced to work a support line, forum, etc. Establish personal relationships with employees in the sales force and management; you will be more influential in the company if you do. Establish personal relationships with your customers; you can use them to push your agenda. Paying customers have some of the most pull with your company. Use this to your advantage.
  • Slashdotted (Score:2, Funny)

    by Anonymous Coward
    It's slashdotted..

    Btw He didn't write it in Welsh did he? Coz Wales officially doesn't exist http://news.bbc.co.uk/1/hi/wales/3715512.stm
  • by tcopeland ( 32225 ) * <tom@NoSPaM.thomasleecopeland.com> on Friday October 08, 2004 @08:16AM (#10468940) Homepage
    ...are nifty. They can catch all sorts of stuff and produce lovely reports [cougaar.org] - or, well, at least functional reports. And running them nightly - or hourly - helps to ensure the code won't get out of sorts.

    PLUG: Need to check Java code? Try PMD [sf.net]!
  • by mark1348 ( 812035 ) on Friday October 08, 2004 @08:19AM (#10468961)
    You can't help but be frustrated when dealing with projects that were obviously "proof of concept" which then evolved directly into the actual "product". Hard to resist but just plain stupid. I wish more open source projects had strict standards.
  • by LeonardShelby ( 576365 ) on Friday October 08, 2004 @08:20AM (#10468963)
    Reusable, reliable, quality software begins with the interface. If someone cannot tell what a routine or module does with just a quick read, then all is lost. They'll cease to believe what the documentation (if any) says, and go on to write it themselves.

    The solution is better interface design. Clear, concise naming without ambiguity. And including the specification is absolutely necessary. With the contract included as part of the interface, there is even less chance for error and/or any ambiguity. Testing is aided because the rules for calling a routine are right there with the routine interface and comment.

    Unfortunately, most programming languages refuse to support contracting in any form, and most developers don't see the benefits. Until this hurdle is cleared, quality software will not be achieved.

    Steve

    --
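    Even without language support, the contract idea can be approximated with asserts at the interface boundary. A sketch in C (my illustration, not from the parent; count_spaces is a made-up routine):

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* The "contract" is stated in the interface comment and enforced
     * by asserts at the boundary:
     *   requires: buf != NULL and len > 0
     *   ensures:  result <= len
     */
    size_t count_spaces(const char *buf, size_t len)
    {
        assert(buf != NULL);   /* precondition */
        assert(len > 0);       /* precondition */

        size_t n = 0;
        for (size_t i = 0; i < len; i++)
            if (buf[i] == ' ')
                n++;

        assert(n <= len);      /* postcondition */
        return n;
    }
    ```

    The caller's obligations and the routine's promises sit right next to the declaration, which is the point being made above: the rules for calling a routine travel with its interface.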
  • As always, mirrors of the pages and the movies are here: MirrorDot [mirrordot.com]. Enjoy.
  • by AnuradhaRatnaweera ( 757812 ) on Friday October 08, 2004 @08:25AM (#10469011) Homepage
    Wonder if he also talked about being on a chip [alancoxonachip.com]. ;-)
  • by jaymzter ( 452402 ) on Friday October 08, 2004 @08:31AM (#10469051) Homepage
    "regression testing, what's that? If it compiles it works and if it boots it's perfect!" - Linus

    'nuff said
  • Good practices (Score:5, Insightful)

    by vinukr ( 796210 ) on Friday October 08, 2004 @08:31AM (#10469056)
    These are some things I think are necessary for good software:
    1) Reviews at all stages (requirements/design/development)
    2) During development, you must know the most efficient way to implement a design (which libraries are more suitable, etc.)
    3) Unit testing and integration testing (when the project is huge)

    Some practices that managers can really use to take the pressure off the team:
    1) Try giving buffers to the team (seriously, it works)
    2) Proper code management (a lot of rework and pressure comes from the lack of it)
    3) Proper tracking and status updates to the customers
  • by cs02rm0 ( 654673 ) on Friday October 08, 2004 @08:35AM (#10469072)
    emerging techniques for producing higher quality software
  • MBA? (Score:5, Funny)

    by Anonymous Coward on Friday October 08, 2004 @08:37AM (#10469079)
    So we're taking software-writing advice from PHBs now? :)
  • MDA? (Score:3, Interesting)

    by little1973 ( 467075 ) on Friday October 08, 2004 @08:48AM (#10469146)
    At my company the management has started an MDA (Model Driven Architecture) project, because some developers presented it to them as the holy grail in software development. They use a GUI to make associations between classes and ASL for the code. They say that you are less likely to make a mistake because you code in a platform independent way (so, you do not have to pay attention to all the little details) and the translator is responsible to make the actual code.

    Some of us more experienced developers do not think it is the holy grail. It looks like you can make as many mistakes as in conventional languages. Also, development with a GUI (see www.kc.com) is much more cumbersome.

    Is there anyone who used MDA and ASL and has some experience about it?
  • by nuggz ( 69912 ) on Friday October 08, 2004 @09:46AM (#10469501) Homepage
    Oh no, someone with an education in MANAGEMENT suggesting ways to MANAGE a production process.

    Yes your average programmer/engineer might be able to manage a project. But why not take some of the expertise of a manager to make it a bit better?

    If someone like Alan Cox should now be ignored as "some MBA-toting PHB", how open-minded are you?
    I think Alan might have a bit of an idea how the software development process works.

    If you're not even willing to consider their ideas, you're doing yourself quite a disservice.
  • For DOS/Windows/OS2 etc:

    Gimpel Software's:
    PC Lint (cheapish)
    Flexelint (pricier)

    Freeware (checks GDI leaks)
    bear.exe (http://www.geocities.com/the_real_sz/misc/bear_.htm)
    gdiobj.exe (http://www.fengyuan.com)

    Linux:
    Electric Fence (free)
    Valgrind (free)
    Splint (free)

    Books:
    John Robbins' books on debugging. They concentrate on Win32 but are useful, ideas-wise, for any x86 platform.

    And now the gags...

    Tools I've not found helpful.....
    Rational Rose!
    Microsoft's beloved COM!
  • The Ping Wales article's link to Page 2 loops back to Page 1.
  • QA != testing !!! (Score:3, Informative)

    by gosand ( 234100 ) on Friday October 08, 2004 @09:47AM (#10469518)
    I have said it for years, and I'll keep saying it.

    Quality Assurance is not testing.

    Testing is testing, and can run the gamut from unit to use case, from integration to system, from acceptance to beta. But QA is not testing. A lot of people call testing QA, but it is not the same thing.

    Testing is what you do when you get the code. QA is everything that you do throughout the software development cycle to ensure that you have quality software. This can include code reviews, process audits, statistical analysis, etc.

    I have been doing QA and testing for 11 years now. I have a degree in computer science, and I CHOSE to do this career. You may be able to get away with ignoring QA professionals and still produce high-quality software. But not all software projects are equal. QA is probably the most overlooked part of software development, testing the second.

  • Good interfaces are why UNIX originally took off. Back in the '70s, when you had to actually call the OS rather than just doing (say) Fortran formatted I/O, you had to do stuff like:

    Allocate a file control block. In assembly, this was usually done statically at compile time. In Fortran it was usually a global common.

    Call something like SYS$INIT%FCB

    Fill in half a dozen random fields in the FCB with magic numbers. Like, file mode, file type, access mode, device type, file name, file extension, file owner, file password, read key, write key, tape policy, ...

    Call something like SYS$OPEN%FILE, which may involve two or more calls. If it was a device, you might need to call SYS$ATTACH%DEVICE, for example, or SYS$LOCK%FILE...

    Repeat the above for an I/O control block. If the I/O control block doesn't contain a buffer, repeat the same for the buffer. You may need to allocate an event flag, or just pick one and hope that the same one wasn't used by another library. You may need to allocate a "mailbox" or other asynchronous I/O endpoint.

    Call the variant of SYS$READ%BLOCK for the file type or device type you're using. You could be calling SYS$READ%BUFFERED or SYS$QUEUE%IO followed by SYS$WAIT%IO.

    Do something with the buffer.

    Manually increment the block offset in the I/O control block.

    Repeat the reads until done.

    Release the I/O control block, and any event flags or mailboxes.

    Call SYS$CLOSE%FILE or SYS$RELEASE%DEVICE or whatever.

    On UNIX you did this instead:
    int fd;
    int nr;
    char buffer[BUFSIZ];

    fd = open("file name", mode);
    if (fd != -1) {
        while ((nr = read(fd, buffer, BUFSIZ)) > 0) {
            do_something_with(buffer, nr);
        }
        close(fd);
    }
    This was revolutionary. Really. It's not perfect, but it's so much better than what we were using at the time that it's no wonder people couldn't wait for AT&T and re-implemented UNIX from scratch several times.

    This is the same kind of improvement we need in our interfaces for GUIs, databases, network services, and so on. Even the Berkeley socket interface is too complex... all those details of address formats and address structures? Those shouldn't be there... you should be able to do this:
    fd = open("/tcp/host.domain/http/options", mode);
    // or at least: fd = tcpopen("host.domain", "HTTP", sockopts);
    if (fd != -1) {
        ...
        close(fd);
    }
    Yes, there are libraries that do this, but they all have the same socket()...gethostbyname()...connect() stuff under the hood. This should be handled at the system call level.
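    For the record, here is a sketch of the "stuff under the hood" such a wrapper hides. This is my illustration, using the modern getaddrinfo() API rather than gethostbyname(); the function name tcpopen matches the hypothetical above, not any real library:

    ```c
    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Connect a TCP stream to host:service; returns a fd, or -1 on failure. */
    int tcpopen(const char *host, const char *service)
    {
        struct addrinfo hints, *res, *p;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM; /* TCP */

        if (getaddrinfo(host, service, &hints, &res) != 0)
            return -1;

        /* Try each resolved address until one connects. */
        for (p = res; p != NULL; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                break;                   /* connected */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }
    ```

    Every networking library re-implements some variant of this loop, which is exactly the complaint: the resolve-socket-connect dance belongs below the interface, not above it.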

    As for GUIs... Plan 9 is about the only one I've seen that looks like it's even trying to reach the level of clarity, safety, and consistency we need.
  • QA (Score:3, Funny)

    by Brandybuck ( 704397 ) on Friday October 08, 2004 @12:16PM (#10471455) Homepage Journal
    ...but others seem heavily influenced by his recent MBA. In particular, he has a lot to say about Quality Assurance in the software world...

    Software QA by normal people: Test the product.

    Software QA by MBAs: Assure that twenty thousand meaningless documents are signed, perform audits to ensure that these documents are signed, provide mandatory training so engineers know how to sign these documents, award bonuses to those who sign the most documents, define productivity to be the number of signed documents in an engineer's cabinet.
  • by Pathetic Coward ( 33033 ) on Friday October 08, 2004 @12:30PM (#10471646)
    Any programmer that tries to "convince" management that quality assurance checking is desirable will be first on the outsource-to-India list.

    Management doesn't want any backtalk about "quality". They expect the programmers to do things right the first time; that's what they're paid to do. When management has compliant workers in India who work cheap, follow instructions, don't talk back, and aren't around the manager's office geeking up the place, they're not going to bother with "quality assurance" insubordination. In particular, programming methodologies such as Extreme Programming that require greater management involvement in the coding process will be treated with scorn; management wants less involvement, not more. As to whether or not the code actually works - once it's out the door and bonuses are in hand, who cares?

    • In particular, programming methodologies such as Extreme Programming that require greater management involvement in the coding process will be treated with scorn; management wants less involvement, not more. As to whether or not the code actually works - once it's out the door and bonuses are in hand, who cares?

      Well, for one, companies that want to make money. A company may be able to get away with shipping crap for a little while, but it's rare that somebody can do it for long. Why? It's not just that cu
  • by renehollan ( 138013 ) <{ten.eriwraelc} {ta} {nallohr}> on Friday October 08, 2004 @03:12PM (#10473778) Homepage Journal
    ... which surprises me because he is one smart dude.

    All his points are valid, but (a) dangerous when taken as gospel, and (b) miss the forest for the trees.

    What kills software is complexity. I've been writing code professionally (i.e. getting paid for it) since I was 15. It's been 28 years. Starting with Dijkstra's "GOTOs Considered Harmful", I've seen fads to improve software reliability come and go: structured programming, object-oriented programming, garbage collection to handle memory leaks, etc., as well as programming languages providing the syntactic and semantic sugar to support the fad du jour. Hasn't helped, has it?

    The general problem is one of managing complexity: if you can "come from" anywhere to a snippet of code, how can you ensure that all your assumptions about the context you're in are valid? Similarly, if you can access a global variable, well, globally, how can you be sure it contains what you expect? How do you know all the ways your data can be accessed? How do you control when and where an object is destructed, or that all resources (memory being just one) are freed?

    Either restrict who can do what, or restrict the assumptions you make about the things you are operating on.

    Using a garbage collector restricts who can allocate and deallocate memory. Object-oriented programming restricts who can muck with private members. Structured programming restricts how you can get somewhere. O.K. wise guy, how are you going to write a garbage collector without dealing with raw memory? Gee, you have to get your hands dirty. How are you going to deal with private members inside private member functions? Gee, looks a lot like functional programming with globals, doesn't it? How the heck are you going to compile that loop without a jump at the end? Gee, what was that about GOTOs?
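    The "restrict who can do what" idea is the same one C programmers reach with the opaque-pointer idiom. A sketch (my example, not the poster's; in a real project the declarations would live in a header and the definitions in a separate .c file):

    ```c
    /* Interface: callers see only an opaque handle, so they cannot
     * reach the representation directly. */
    struct counter;
    struct counter *counter_new(void);
    void counter_inc(struct counter *c);
    int  counter_value(const struct counter *c);
    void counter_free(struct counter *c);

    /* Implementation: only this translation unit can touch the fields. */
    #include <stdlib.h>

    struct counter { int value; };

    struct counter *counter_new(void)
    {
        struct counter *c = malloc(sizeof *c);
        if (c)
            c->value = 0;
        return c;
    }

    void counter_inc(struct counter *c)   { c->value++; }
    int  counter_value(const struct counter *c) { return c->value; }
    void counter_free(struct counter *c)  { free(c); }
    ```

    Every assumption about the counter's state is now checkable in one file, which is exactly the complexity-management move being described: shrink the set of places that can violate an invariant.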

    The trend appears to be to try to find the "next great technique" which will provide the best bang for the buck when improving quality. All of them help. None of them enough. Frankly, this incremental approach is not going to solve the problem of defects that are a result of the product of human error and code complexity. We need to learn how to manage complexity on its own.

    Are there any examples of success in managing complexity, and can we learn from them?

    I think so.

    The best example I can think of is a compiler. Look at how many different inputs it can handle and still produce correct machine code with a fairly low defect rate. I attribute this to the fact that its input is highly structured: it has to follow strict syntactic rules. Thus, when compiling something, one has a great deal of knowledge about the context in which this is occurring. Yes, you have to deal with semantic issues (types, declarations, etc.), but an unambiguous language should make this clear.

    One can point out that programming languages are well-specified, so implementing a compiler is relatively easy. If only all requirements were so detailed. I don't think that's it, however: one can come up with very detailed specifications for a complicated system that's difficult to implement. A programming language, by contrast, can be syntactically expressed in a few pages of BNF.

    Contrast this with a user interface, where various elements need to be enabled or disabled depending on what other elements have been previously activated, or used to gather data. How easy or difficult is it to "forget" to enable or disable a control? If you see a parallel with this problem and excessive use of GOTOs in a program, you're starting to get a feel for what I'm talking about.

    What has happened in the past 25 years or so of programming language evolution is that the complexities of the past have been pushed and morphed into the complexities of the present. And tackling one in isolation (memory management) does nothing for a transformation of the same problem (resource management in general -- memory isn't the only thing that leaks), though it may provide the best bang defect-rate-reduction-wise

  • Where to begin... (Score:3, Interesting)

    by andymac ( 82298 ) on Friday October 08, 2004 @09:37PM (#10476687) Homepage
    OK, first off let me fess up: I am a QA manager, have been for near on 10 years now. I've worked on projects with MS, Adobe, IBM, Sun, yadda yadda yadda. All sorts of projects and languages, products and programmers.

    Cox really does highlight some of the best practices out there, but he also skims over some key ones. The biggest problem I've seen out there (and I've done QA Management consulting for a good chunk of those 10 yrs, so I've seen a lot of organizations) is commitment by management. Most QA folk know that there will always be a challenge to find the right balance between schedule/budget, quality of product, and feature set of the product. Do you want it good, now, or stripped down? And most QA folk are willing to work within that mindset. But when management 1) does not appropriately staff QA activities; 2) does not appropriately fund QA activities and annual budgets; and 3) does not make it damn clear to all staff that QA is a requirement for project (and thus ultimately product) success, and that if you don't like someone testing your code and logging bugs against it you'd better move along, pardner, then the organization pays lip service to QA. Heck, even when QA finds a horrible problem prior to release, so there's time to fix it, you'd think most folks would be happy (OK, not thrilled, because that balance is skewed towards a schedule slip). But no, 99 times out of 100, QA is slammed for holding up the process.

    Independent testing is a must for an organization to have any real understanding of the quality of the product. Engineers cannot be the only folks testing their outputs. For one, it's a very expensive way to test: I want geers designing, coding and unit testing, not doing integration or system or release testing. QA folks cost between 30-70% of a geer's salary, so why would you want the most expensive resource doing the (in most cases) less technically demanding work? And work they usually don't enjoy doing anyway (grumble grumble, I didn't sign up for this!)

    OK, so this is a bit of a rant. I'm just dealing with my current senior management, who say they want QA to manage and execute the independent testing, but then find that 95% of Engineering refuses to participate in the process.

    I'll just go have a beer and forget about my problems...mmm... beer... drool drool.
