Alan Cox on Writing Better Software 391
Andy Gimblett writes "Alan Cox recently gave a talk in which he discussed some current and emerging techniques for producing higher quality software. Some of these will be familiar to Slashdot readers, such as validation tools, type checking, etc, but others seem heavily influenced by his recent MBA. In particular, he has a lot to say about Quality Assurance in the software world, and the kinds of things we should be doing (and some people are doing) to make better software. Story and lots of quotes at Ping Wales, and video at IT Wales."
Quality (Score:5, Funny)
Quality Assurance in 4 easy steps!
Dear Managers,
1. Listen
2. Close your mouth
3. Plan everything around #1
4. Profit!!! (notice there is no line with ??? because you listened!)
Re:Quality (Score:4, Interesting)
2. Close your mouth
One problem with this: By listening with your mouth closed, you run a very real risk of misunderstanding what you're hearing. Without feedback to verify that you're understanding, you are highly likely to do things completely wrong.
Funny example: A few years ago, I was implementing an interactive web site, and we had a nice test server machine in the lab. In one discussion with The Mgt, I casually mentioned that for testing, I thought we needed to run another server.
I didn't hear any response, so I went ahead and cloned the apache server that we were using, and fired it up on a different port. 10 minutes work, and we proceeded to test.
A while later, I discovered just in time that the managers had heard me say that we needed a second server machine, and were ordering another one. After a bit of discussion, I realized that to them, a server is a piece of hardware, while to us software guys, it is a piece of software.
I managed to explain to them that, no, we didn't need a new machine. I'd already set up a second web server on the old machine, and it was working fine. There was still time to cancel the hardware order.
But still, they had done a bunch of unnecessary work, and had almost spent a good sum of money unnecessarily, all because they had listened carefully without asking questions. If they had asked what hardware the new server needed, I'd have realized quickly that there was a misunderstanding, and we could all have saved a bit of time.
Listening without interacting, and acting on your understanding of what you heard, can lead to a lot of serious problems.
Re:Quality (Score:5, Interesting)
(steps 1 and 2 may be reversed if required)
Enjoy good results.
Re:Quality (Score:5, Insightful)
A good manager does all the detail work and keeps track of everything. My post was about what they haven't been doing much of lately.
I don't hate managers, and to suspect that I might by reading my previous post means you failed to understand it.
Quality in business comes from many different people in any given project. Software quality can be adversely affected by morale, and you guessed it -- programmers have some of the lowest morale in the business world today; they are often not respected by management, they have the highest workloads, and they have no time for social lives.
If managers listened, and remained silent or inquisitive instead of arbitrary or antagonistic, software quality would be up.
Re:Quality (Score:5, Informative)
but seriously, good managers manage. Bad managers threaten, cajole, bribe or whine. software is either a product, which means that management is monitoring to be sure the product is what they can sell; or software is a tool, in which case the manager must ensure that the tool works.
either way, successful software is a combination of good programming and good management.
Re:Quality (Score:2)
Somewhere long ago I read that there is about an 85% overlap in the skill set required for good management and good programming. Both have to make do with conflicting goals and limited resources. Bad management and bad programming is a bad combination. Bad management and bad anything is bad, but it seems that the situation is much worse with programming because programmers can instinctively pass sound judge
Re:Quality (Score:3, Insightful)
Trust me. All problems in software engineering are human issues.
It helps nothing if a guy on the team is a hero or super hero or the brightest expert if the human factors are a mess.
All problems in software projects are simple:
1) lack of understanding -- people do not understand the problem domain
2) lack of insight -- people do not realize they suffer from (1)
3) lack of problem awareness -- people do not realize that there ARE problems an
Are you a manager? (Score:2)
I don't hate managers, but I gotta tell you, mlh hit the nail *right* on the head.
From the listening will come many many concrete steps.
Re:Quality (Score:5, Insightful)
My direct manager right now is actually one of the best I have seen. The other 3 or 4 layers above are lost and clueless most of the time. Their buzzword at the moment is ISO 9000.
I think your comments bring up another issue. F**k up, move up is often not just a saying. I have made friends and lost friends in the past for my opinions, but if one of my fellow programmers is just plain incompetent, lazy, or an asshole then they need to go. I don't encourage moving them to get them out of my hair. I have done this before and somehow I always ended up having to deal with them again.
There is also the type that is just biding their engineering time till they can become a manager. I have generally found these guys to be less than worthless. They are usually guaranteed to make the worst managers.
These are the criteria I usually find good in a manager and tried to use as one:
1. Have a good broad software background. If working at the kernel level (including drivers), have a basic understanding of hardware.
2. Understand how software impacts the potential customers and their needs. Books have been written on this, most of them wrong.
3. Know your employees and their strengths and weaknesses. Use this to build a team that can and will perform.
4. Trust the team but keep track of everything anyways.
5. Process is a must but it should be pretty lightweight. Make it enforceable and do not waver in ensuring the necessary pieces occur.
6. Insulate your engineers from most of the stuff above. Not all, just most of it. Usually controlling the flow will suffice (often this involves rumor squashing).
OK enough for now.
Unit testing? (Score:5, Interesting)
It should, by now, be clear that "code that doesn't have tests, doesn't work", and that if XP has done anything for us, it's to focus on writing tests. I've seen this in action and it works.
John.
Re:Unit testing? (Score:5, Insightful)
1) "The unit tests passed, so it works." This assumption is flawed on several levels. First, and most fundamentally, even if all unit tests pass, there is still the issue of whether your software works as a whole. Software often has "emergent logic" and UI scenarios that are difficult or impossible to test (after all, that's not what unit testing is for, but some people seem to think it is).
Second, just because a test passes, doesn't always mean the API works. This is especially important if you didn't write the tests yourself. Just because a unit test CLAIMS it tests X, doesn't mean it does. Is the test complete? Any false positives? Is the test just a skeleton that was intended to be implemented later but never was? I've had all these bite me in the past.
2) "That particular test has NEVER passed, so there's something wrong with the test. We just ignore it now." Bzzzt! Wrong! There's a REASON it never passed. It's either not implemented properly, just a stub that fails waiting for someone to write an implementation, or maybe you just think the feature it tests actually works. Look closer. The test might be trying to tell you something.
If you are careful with unit tests, they can be very rewarding and useful (especially for regression testing, where they are invaluable), but put too much confidence in them or depend on them to do the kind of overall testing they were never designed to do, and you will fail long before your first test does.
Unit testing first (Score:2)
Because it shortcuts the design phase by taking you down fewer unworkable roads, it improves the design while decreasing the time to complete the code.
That is why the whole process is called agile!!!
Unit testing after the design is not very effective except to catch where things break after a change
Re:Unit testing first (Score:3, Insightful)
Man, if only there were unit tests for slashdot postings... Ah, wait, we have those (lameness filter!) and they don't help at all!
Re:Unit testing? (Score:2, Interesting)
A solid requirements-based development process and requirements-based testing/verification process is the key to large, high-quality software. In my opinion, formalized unit testing tends to hide errant system-level behavior. Sure, it aids an individual developer in understanding that their code works as they intended, and should stay a vital part of low-level development. But
Re:Unit testing? (Score:4, Insightful)
You tacitly assume that it is possible to get solid requirements. When writing avionics software, I'm sure you can, because the problem is well understood and we have good science/math to back it up (eg. GPS nav system).
But in most software projects it is impossible to create requirements ahead of time, mostly because the problem you are trying to solve is new and we don't understand it well enough yet.
Are there requirements for the web browser? Were they created before the code was written? What about requirements for MS Word?
Re:Unit testing? (Score:3, Insightful)
That's like saying that street maps don't tell you what the continent looks like. It's technically true, but it seems to miss the point.
Unit tests are for testing relatively small chunks of work. If you want to be sure the pieces work together, you do testing at higher levels. I think both are necessary for a solid system.
Personally, I think of my high-level tests as executable requirements. Every t
Re:Unit testing? (Score:4, Insightful)
Re:Unit testing? (Score:5, Insightful)
Unfortunately, when schedules get tight, it's things like unit testing (and testing in general) that get cut. The more emphasis we get on the importance of QA the better our industry will be.
Re:Unit testing? (Score:3, Insightful)
Re:Unit testing? (Score:2, Insightful)
Write the tests *first* (Score:5, Insightful)
This has several important benefits:
Re:Write the tests *first* (Score:3, Informative)
Re:Write the tests *first* (Score:3, Informative)
Re:Write the tests *first* (Score:3, Informative)
It's an iterative process. You don't necessarily write a complete suite of tests for your interfaces before you start writing a single implementation. Someone in QA might think of a unit test as white box, but they tend to be black box from the perspective of the developer. You should be able to write a unit test before writing the unit.
The point of most unit tests is to verify an implementation's conformance to its interface. When you late
Re:Write the tests *first* (Score:3, Interesting)
If you intend to do some structural testing (white box) it is impossible to write test cases before writing the software, since the testing requirements are defined by analyzing the source code (that's why it is called white box testing).
This is only partly correct. If you design your software via Use Cases and Scenarios or User Stories, then the possible paths are to a great degree predefined. That means you can do black box tests without any need of white box tests.
To check whether your black box test c
Re:Write the tests *first* (Score:3, Informative)
But write which tests first? There are so many possible. Usually there are more ways for something to malfunction than for it to function correctly.
How do you write a test to check that a banking application does not allow a customer to cancel other customer's cheques? How do you write a test to check that someone didn't allow sql injection? How do you write a test to ensure that a user account cannot do what it is not authorized to?
Since ther
Re:Write the tests *first* (Score:3, Informative)
SO:
Your first unit test is a simple "return pass"
You run your test framework and verify everything works.
Commit the changes into the main line branch.
You then change your unit test to "return fail". Run test framework and verify it fails. (NOTE: You do all of this I
Re:Write the tests *first* (Score:3, Insightful)
Re:Write the tests *first* (Score:3, Insightful)
The process I like best for 'test-first' is actually 'almost-test-first'. Instead of just sitting down and writing tests, the process is:
1) Design a system that solves the problem.
2) Figure out what entry points are to the system.
3) Implement the interfaces for the entry points so that the public contract is defined.
4) Now you start writing tests that verify the fulfillment of the public contract.
5) Impl
Re:Write the tests *first* (Score:3)
1. You can be more confident when you begin refactoring mature code-bases. This for me is the clincher as code never stands still but the tests can be a constant. A permanent measure.
2. If it's an API, you have working examples to show people.
3. for years, I used informal, undocumented tests. Handing-over was always going to be bad. Now, hand-overs are a, uh, doddle.
4. Progress. Nothing like some concrete test results to show people. Most test suits show results as HTML. Put
Re:Unit testing? (Score:3, Insightful)
Do NOT get the engineer who wrote the code to also write the test.
It's fairly fundamental - the engineer who wrote it will have a prejudiced view of what should/will work.
Get someone else to do it and get a valuable fresh insight.
Re:Unit testing? (Score:2, Insightful)
With end user applications, on average, a couple hours of active ad-hoc testing and test case development per week finds 95% more bugs than hundreds to thousands of automated unit tests will.
For an API, unit testing might be more effective, but APIs are much simpler to test than full end user applications.
Re:Unit testing? (Score:2)
The advantage of unit tests is that, once written, they can be used again and again, which is useful for verifying that a change to the unit did not break its previously tested behavior.
The disadvantage of unit tests is that they can themselves contain bugs. It's good to know your program passed the tests, but were the tests good? Did you really test what you thought you were testing? Was what you thought you tested actually the right thing? Did you not
Re:Unit testing? (Score:3, Insightful)
First, it is not necessary for unit tests to be absolutely complete to be useful. Anything that finds a hitherto-undiscovered flaw is valuable - extremely valuable.
Second, that there are bugs in a unit test is not, in practice, that big of a deal. At worst it means you haven't tested a unit as thoroughly as you think you have. Unfortunate, but not a disaster. Perhaps two people should write their own unit tests for a single module, and then compare the bugs they found in the module. An in
2 words (Score:3, Insightful)
not a panacea, but it does go far.
Re:2 words (Score:3, Insightful)
Unfortunately, IMHO, most software is done like this - let's put pretty gadgets around, cool nice icons, and then we'll do the job in event handlers somehow. Here it all breaks loose.
Why it is done that way is easy to see - managers/supervisors are not interested in you doing something behind-the-scenes for we
Re:2 words (Score:5, Funny)
The kicker is, this year that same manager wants to re-use the code that my coworker was originally going to write.
Re:2 words (Score:5, Funny)
I'll leave the results as an exercise for the reader...
Re:2 words (Score:5, Informative)
"The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. There's nothing wrong with it. It doesn't acquire bugs just by sitting around on your hard drive. Au contraire, baby!... it has grown little hairs and stuff on it and nobody knows why. Well, I'll tell you why: those are bug fixes.
One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn't have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it's like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters. When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."
"free" ? (Score:2, Insightful)
How would Alan apply his quality methods to projects whose members might never meet due to geographical contingencies?
Good code... (Score:5, Insightful)
I'd say commenting is doubly important in OSS projects, as it involves many sets of eyes trying to comprehend what you coded.
Re:Good code... (Score:5, Insightful)
Commenting must be 100% accurate else it is detrimental to understanding the code.
Sometimes code changes don't result in updated comments...
Once I find an inaccurate comment in somebody's code, I have to start rewriting or deleting all the comments because I can't trust them anymore.
Re:Good code... (Score:2, Interesting)
first and foremost be self-documenting, by the
choice of proper variable names and subroutine
names. Only comment things that are not obvious,
like tricks that are employed. The point is that
comments are not read nearly as much as names,
and get stale more quickly, rendering them useless.
Depends on the language and context... (Score:3, Interesting)
That meant that the code itself could be quite hard to follow without some sort of logical commentary accompanying it.
I tended to comment my code quite heavily in that context, since I'd already had experience with uncommented code in
Re:Good code... (Score:2)
Re:Good code... (Score:3, Interesting)
Re:Good code... (Score:5, Insightful)
This is a reason to write better comments, not to avoid them completely!
Of course comments that are flippin' obvious, or wrong, do no good to anyone. But some sorts of comments can be incredibly helpful:
My rule of thumb is: Try to make the code itself obvious. And comment everything else!
Re:Good code... (Score:5, Insightful)
Recently I had the misfortune to wade through a few hundred kilobytes of Java that was written by someone who thought he should 'abstract' everything as much as was humanly possible. Sounds good, right? Well... It turns out you can do a lot of harm that way, too. I don't think he had a single function in there that _wasn't_ called something like SetProperty(), GetValue(), DoFunction(), etc. There was absolutely no way to guess what it was doing based on the name of the functions. Naming of classes and variables wasn't much better. After looking at it for a couple of hours I don't think I could have guessed what it was trying to do if I hadn't already known that beforehand.
So, next time you are writing software, feel free to get in touch with reality and name stuff after what it is supposed to be doing. Nice long names please, no abbreviations unless you go over 30 or 40 characters. Down with CmtPmt2Db(), down with SCUPD(), and down with GetPropertyValueInterfaceCaller()!
Because, be honest: those mean nothing, while CommitPaymentToDatabase(), ScreenUpdate(), and GetXLocation(), have intuitive meanings we all understand...
Re:Good code... (Score:3, Insightful)
WHY, not what (Score:5, Informative)
Don't say:
/* loop over the buffer, adding the square of each sample to sum */
double sum = 0.0;
for (i = 0; i < len; ++i)
{
double signal = buf[i];
sum += signal*signal;
}
return sqrt(sum/len); /* return the root of sum over len */
say:
/* Compute the RMS (root mean square) level of the buffer, which
the caller compares against the squelch threshold. */
double sum = 0.0;
for (i = 0; i < len; ++i)
{
double signal = buf[i];
sum += signal*signal;
}
return sqrt(sum/len);
In other words, tell me WHY this code added the square of the signal, not THAT it added the square of the signal.
Moreover:
If more folks would follow these rules it would be a HELL of a lot easier to follow their code.
NOTE: If you can say it in the code, do so - if you can specify the exceptions to your function via a "throw(int code)" type statement, then do so rather than using a comment.
Remember - the code tells the COMPILER and the programmer what is going on, the comments tell the programmer WHY it is going on.
Alan Cox (Score:5, Funny)
Code review and pair programming (Score:4, Funny)
However the greatest problem with writing good software is still in the marketing. In order to sell/license software it needs to have features, and the lack of defects often does not count as a "feature".
Re:Code review and pair programming (Score:5, Interesting)
1) Checked in code. Spent fifteen minutes justifying design decisions. No changes made.
2) Checked in code. Code contained horrible horrible bug. Code reviewer didn't see it.
3) Checked in code. Defended my design against several more computationally expensive suggestions that were also more complicated. No changes made.
4) Listened to a friend gripe about having to spend a DAY AND A HALF repeating design reasons and fixing bugs introduced by his code reviewer "cleaning up" his code.
5) Received company-wide email about a build that flat-out didn't compile - apparently someone hadn't bothered compiling a patch, and had sent it to a code reviewer, who likewise hadn't bothered compiling it before authorizing it.
Now I'll admit that there are also a whole lot of "well, it only took five minutes, so it wasn't much of a waste" cases. But so far I haven't heard one person talking about how useful the mandatory code reviews are.
Maybe it's just an artifact of the kind of programmers working at this company, or the kind of code being worked on, but so far code reviews have been a net loss in my experience. I've taken to doing major changes in my own personal branch of the repository (which doesn't enforce the code-review requirement) and in a month or two I'll have 3000 lines of code for someone to look at - but at least I won't have nickel-and-dimed them to death with 120 100-line code reviews, 3/4 of which I will inevitably end up deleting entirely.
Note that I'm not saying "code reviews are bad" - what I am saying is that there's a time and a place for just about every technique, and there's also a time and a place where each technique is worse than useless. Pick your battles and pick your tools.
There are different types of code reviews... (Score:5, Insightful)
That type of thing doesn't work as well for large changes, but we found that for small ones it sometimes can be a useful thing.
Re:There are different types of code reviews... (Score:3, Interesting)
This type of sanity checking is especially useful for training junior programmers. It can be very instructive for a senior programmer to sit down with a junior programmer and go through their code together. The primary purpose of a review should be to have a second set of eyes on the code but it is very valuable for training and communication as well.
Re:There are different types of code reviews... (Score:2)
That isn't why we implemented it at first (it was initially put in place because we had a few offshore contractors imposed on us and we wanted to do sanity checking of their code before we allowed it on the production system we supported), but we found that such code reviews could also be an excellent teaching tool when someone new came on board.
Re:Code review and pair programming (Score:3, Informative)
Our company has some loose rules (we're working on strengthening them) that state that checked in code must be unit tested. This is to prevent things like your #5. But we haven't gotten to code reviews yet. Being on the team that's working on our process, I'll remember your experiences when we
Re:Code review and pair programming (Score:4, Insightful)
It's very different to things like design reviews. They have their place too. A lot of things Linus rejects are really design review things. It's not uncommon to get a "Yes this needs fixing but do it this way instead". It works well providing the person saying that has good judgement.
Bad code review, bad tools, bad compilers and bad managers are _all_ useless
Re:Code review and pair programming (Score:3, Insightful)
If this happens to you on a regular basis then you are probably a better-than-average developer. Which is just fine as long as we find a way to make your work more average. And code reviews seem to do the job.
Seriously, the productivity spread between developers (20 times as effective... adding more people... Thank you Mr. DeMarco) is what a lot of very strict process models and practices (such as code reviews) rea
Re: Code review and pair programming (Score:2)
But I doubt they've been a total waste. For example, knowing their code is going to be reviewed soon makes some people write better code. And in explaining your design decisions, maybe your listener(s) learned something about design. (Or maybe you did!)
My own feeling is that code reviews can be a great way of sharing knowledge about the system, and about development general
Testing is good (Score:4, Insightful)
Re:Testing is good (Score:4, Insightful)
No, testing finds new errors. That's what test suites are for: to let you know when your change to X caused Y to break.
Testing and releasing software (Score:5, Interesting)
Well, the effort paid off. Before, we supported one version of HP/UX and one release of Linux; now we support HP/UX (still a pain) and 4, looking at going to 6, Linux versions/distributions, and it is less work to produce a release now than it was just a year ago.
Tools like automake [redhat.com], autoconf, libtool, cvstrac [cvstrac.org] and of course cvs have made my life bearable.
Re:Testing and releasing software (Score:2)
Automated building and testing really pay off when adding new platforms. We recently added IA64 Linux. We already supported IA32 Linux and IA64 HP-UX, so we had most of the C and assembler we needed. Great, we should have a new platform by Friday. Well, the automated tests we've accumulated over the last five years found bugs in places I never would have thought to look if I were testing by hand. Now we're adding O
Thought he was having a sabbatical? (Score:3, Funny)
"Alan Cox gave a talk"
Was it in Welsh?
Text Of Article (Score:2, Informative)
IT Wales, working in partnership with Know How Wales, Knowledge Exploitation Centre and Cygnus Online, has unveiled plans for an exciting new programme of events specifically targeted at computing professionals from both business and academia.
During the launch breakfast, held in Sketty Hall Swansea, on Thursday 23rd September, Beti Williams, Director of IT Wales, outlined the aims and objectives for the group.
"The IT Wales Advanced
Ping Wales slashdotted already. (Score:2)
Anyone got a mirror?
Re:Ping Wales slashdotted already. (Score:3, Informative)
my first slashdotting, fun.
Gareth
Paths to quality (Score:3, Insightful)
Slashdotted (Score:2, Funny)
Btw, he didn't give it in Welsh, did he? Coz Wales officially doesn't exist http://news.bbc.co.uk/1/hi/wales/3715512.stm
Code validation tools... (Score:5, Interesting)
PLUG: Need to check Java code? Try PMD [sf.net]!
Keeping prototypes just that (Score:4, Insightful)
Interfaces and contracting... (Score:3, Interesting)
The solution is better interface design. Clear, concise naming without ambiguity. And including the specification is absolutely necessary. With the contract included as part of the interface, there is even less chance for error and/or any ambiguity. Testing is aided because the rules for calling a routine are right there with the routine interface and comment.
Unfortunately, most programming languages refuse to support contracting in any form, and most developers don't see the benefits. Until this hurdle is breached, quality software will not be achieved.
Steve
--
Re:Interfaces and contracting... (Score:2, Insightful)
(Also note these are not tools, but techniques.)
Mirrors Here - Pages and Videos (Score:2, Informative)
Does page 2 (with the actual advice) exist? (Score:2)
Is this irony? An article about quality, with only half of the article available.
And it increases the Slashdot effect as everybody thinks they made a mistake and keeps clicking the "Next" link until finally realizing that the website is broken.
Being on a chip (Score:4, Funny)
Linus already covered this (Score:4, Funny)
'nuff said
Good practices (Score:5, Insightful)
1) Reviews at all stages.(Reqs/design/dev)
2) While at development, you must know what's the most efficient way to code a design (which libraries are more suitable, etc.)
3) Unit testing and Integration testing (when the project is huge)
Some practices that managers can really use to take the pressure off the team
1) Try giving buffers to the team (seriously, it works)
2) Proper Code management (Lotsa rework and pressure come due to lack of this)
3) Proper tracking and status updates to the customers
Didn't know he used Gentoo... (Score:3, Funny)
MBA? (Score:5, Funny)
MDA? (Score:3, Interesting)
Some of us more experienced developers do not think it is the holy grail. It looks like you can make as many mistakes as in conventional languages. Also, development with a GUI (see www.kc.com) is much more cumbersome.
Is there anyone who used MDA and ASL and has some experience about it?
MBA- Management (Score:3, Funny)
Yes your average programmer/engineer might be able to manage a project. But why not take some of the expertise of a manager to make it a bit better?
If someone like Alan Cox should now be ignored as "some MBA toting PHB" how open minded are you?
I think Alan might have a bit of an idea how the software development process works.
If you're not even willing to consider their ideas, you're doing yourself quite a disservice.
Some tools I've found handy .... (Score:2, Informative)
Gimpel Software's:
PC Lint (cheapish)
Flexelint (pricier)
Freeware (checks GDI leaks)
bear.exe (http://www.geocities.com/the_real_sz/misc/bear_.htm)
gdiobj.exe (http://www.fengyuan.com)
Linux:
Electric Fence (free)
Valgrind (free)
Splint (free)
Books:
John Robbins books on debugging. Concentrates on Win 32 but useful ideas wise for any x86 platform.
And now the gags...
Tools I've not found helpful.....
Rational Rose!
Microsoft's beloved COM!
Ironically... (Score:2)
QA != testing !!! (Score:3, Informative)
Quality Assurance is not testing.
Testing is testing, and can run the gamut from unit to use case, from integration to system, from acceptance to beta. But QA is not testing. A lot of people call testing QA, but it is not the same thing.
Testing is what you do when you get the code. QA is everything that you do throughout the software development cycle to ensure that you have quality software. This can include code reviews, process audits, statistical analysis, etc.
I have been doing QA and testing for 11 years now. I have a degree in computer science, and I CHOSE to do this career. You may be able to get away with ignoring QA professionals and still produce high-quality software. But not all software projects are equal. QA is probably the most overlooked part of software development, testing the second.
INTERFACES INTERFACES INTERFACES! (Score:3, Insightful)
On UNIX you did this instead: This was revolutionary. Really. It's not perfect, but it's so much better than what we were using at the time that it's no wonder people couldn't wait for AT&T and re-implemented UNIX from scratch several times.
This is the same kind of improvement we need in our interfaces for GUIs, databases, network services, and so on. Even the Berkeley socket interface is too complex... all those details of address formats and address structures? Those shouldn't be there... you should be able to do this: Yes, there are libraries that do this, but they all have the same socket()...gethostbyname()...connect() stuff under the hood. This should be handled at the system call level.
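For illustration, here is roughly the kind of one-call connect interface being asked for, sketched as a library wrapper over the usual getaddrinfo()/socket()/connect() dance (the name dial() is borrowed from Plan 9; this is a sketch at library level, not the system-call-level fix the post wants):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* dial: connect a TCP stream to host:service in one call, hiding
   the address-structure bookkeeping the post complains about. */
static int dial(const char *host, const char *service) {
    struct addrinfo hints, *res, *p;
    int fd = -1;
    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, service, &hints, &res) != 0)
        return -1;
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;              /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                  /* connected socket, or -1 */
}

int main(void) {
    /* Self-contained demo: listen on an ephemeral loopback port,
       then dial it. */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    socklen_t alen = sizeof addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);
    getsockname(srv, (struct sockaddr *)&addr, &alen);

    char port[16];
    snprintf(port, sizeof port, "%u", (unsigned)ntohs(addr.sin_port));
    int fd = dial("127.0.0.1", port);
    printf("%s\n", fd >= 0 ? "connected" : "failed");
    if (fd >= 0)
        close(fd);
    close(srv);
    return fd >= 0 ? 0 : 1;
}
```

The caller never sees a sockaddr; whether that belongs in a library or, as argued above, in the kernel interface itself is the open question.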
As for GUIs... Plan 9 is about the only one I've seen that looks like it's even trying to reach the level of clarity, safety, and consistency we need.
QA (Score:3, Funny)
Software QA by normal people: Test the product.
Software QA by MBAs: Assure that twenty thousand meaningless documents are signed, perform audits to ensure that these documents are signed, provide mandatory training so engineers know how to sign these documents, award bonuses to those who sign the most documents, define productivity to be the number of signed documents in an engineer's cabinet.
Quality assurance is dead. (Score:3, Interesting)
Management doesn't want any backtalk about "quality". They expect the programmers to do things right the first time; that's what they're paid to do. When management has compliant workers in India who work cheap, follow instructions, don't talk back, and aren't around the manager's office geeking up the place, they're not going to bother with "quality assurance" insubordination. In particular, programming methodologies such as Extreme Programming that require greater management involvement in the coding process will be treated with scorn; management wants less involvement, not more. As for whether or not the code actually works: once it's out the door and bonuses are in hand, who cares?
Re:Quality assurance is dead. (Score:3, Insightful)
Well, for one, companies that want to make money. A company may be able to get away with shipping crap for a little while, but it's rare that somebody can do it for long. Why? It's not just that cu
Alan Cox doesn't get it... (Score:3, Insightful)
All his points are valid, but (a) dangerous when taken as gospel, and (b) miss the forest for the trees.
What kills software is complexity. I've been writing code professionally (i.e. getting paid for it) since I was 15. It's been 28 years. Starting with Dijkstra's "GOTOs Considered Harmful", I've seen fads to improve software reliability come and go: structured programming, object-oriented programming, garbage collection to handle memory leaks, etc., as well as programming languages providing the syntactic and semantic sugar to support the fad du jour. Hasn't helped, has it?
The general problem is one of managing complexity: if you can "come from" anywhere to a snippet of code, how can you ensure that all your assumptions about the context you're in are valid? Similarly, if you can access a global variable, well, globally, how can you be sure it contains what you expect? How do you know all the ways your data can be accessed? How do you control when and where an object is destructed, or that all resources (memory being just one) are freed?
Either restrict who can do what, or restrict the assumptions you make about the things you are operating on.
Using a garbage collector restricts who can allocate and deallocate memory. Object-oriented programming restricts who can muck with private members. Structured programming restricts how you can get somewhere. OK, wise guy, how are you going to write a garbage collector without dealing with raw memory? Gee, you have to get your hands dirty. How are you going to deal with private members inside private member functions? Gee, looks a lot like functional programming with globals, doesn't it? How the heck are you going to compile that loop without a jump at the end? Gee, what was that about GOTOs?
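The "restrict who can do what" principle above can be sketched concretely. The example and its names (BoundedCounter and its methods) are illustrative, not from the talk or the comment: instead of a global counter that any code can mutate, an object exposes only the operations we intend, so every assumption about the value is checked in one place.

```python
# A sketch of restricting who can do what: only increment() can change the
# count, and only within bounds, so the invariant 0 <= count <= limit is
# enforced in one place instead of everywhere a global could be touched.

class BoundedCounter:
    def __init__(self, limit):
        self._count = 0    # private by convention; callers go through methods
        self._limit = limit

    def increment(self):
        if self._count >= self._limit:
            raise OverflowError("limit reached")
        self._count += 1

    @property
    def count(self):
        return self._count

c = BoundedCounter(limit=2)
c.increment()
c.increment()
```

With a bare global, the same invariant would have to be re-verified at every one of the places that can "come from" anywhere to modify it, which is exactly the complexity problem described above.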
The trend appears to be a search for the "next great technique" that will provide the best bang for the buck in improving quality. All of them help. None of them enough. Frankly, this incremental approach is not going to solve the problem of defects that are the product of human error and code complexity. We need to learn how to manage complexity on its own.
Are there any examples of success in managing complexity, and can we learn from them?
I think so.
The best example I can think of is a compiler. Look at how many different inputs it can handle and still produce correct machine code with a fairly low defect rate. I attribute this to the fact that its input is highly structured: it has to follow strict syntactic rules. Thus, when compiling something, one has a great deal of knowledge about the context in which it is occurring. Yes, you have to deal with semantic issues (types, declarations, etc.), but an unambiguous language should make these clear.
One can point out that programming languages are well-specified, so implementing a compiler is relatively easy. If only all requirements were so detailed. I don't think that's it, however: one can come up with very detailed specifications for a complicated system that's difficult to implement. A programming language, by contrast, can be syntactically expressed in a few pages of BNF.
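The point about a few pages of BNF can be made concrete with a toy grammar and a recursive-descent evaluator (a sketch of mine, not anything from the comment). Because the input must follow strict syntactic rules, each function knows exactly what context it is executing in, which is the property being credited for compilers' low defect rates.

```python
# Toy recursive-descent parser/evaluator for the grammar:
#   expr   := term (('+'|'-') term)*
#   term   := factor (('*'|'/') factor)*
#   factor := NUMBER | '(' expr ')'
# Uses integer division for '/' to keep the sketch simple.
import re

def tokenize(s):
    return re.findall(r"\d+|[()+\-*/]", s)

def evaluate(s):
    tokens = tokenize(s)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        if pos >= len(tokens):
            raise SyntaxError("unexpected end of input")
        t = tokens[pos]
        if expected is not None and t != expected:
            raise SyntaxError(f"expected {expected!r}, got {t!r}")
        pos += 1
        return t

    def expr():
        val = term()
        while peek() in ("+", "-"):
            op, rhs = eat(), term()
            val = val + rhs if op == "+" else val - rhs
        return val

    def term():
        val = factor()
        while peek() in ("*", "/"):
            op, rhs = eat(), factor()
            val = val * rhs if op == "*" else val // rhs
        return val

    def factor():
        if peek() == "(":
            eat("(")
            val = expr()
            eat(")")
            return val
        return int(eat())

    result = expr()
    if peek() is not None:
        raise SyntaxError("trailing input")
    return result
```

Note how every illegal input is rejected at a known point with a known reason; contrast that with code whose inputs can arrive in any shape from anywhere.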
Contrast this with a user interface, where various elements need to be enabled or disabled depending on what other elements have been previously activated, or used to gather data. How easy or difficult is it to "forget" to enable or disable a control? If you see a parallel with this problem and excessive use of GOTOs in a program, you're starting to get a feel for what I'm talking about.
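One common remedy for the "forgot to enable a control" problem is worth sketching (my generalization, with made-up control and state names, not something from the comment): instead of scattering imperative enable/disable calls through the code, the GUI analogue of GOTO, derive every control's enabled state from the application state in one pure function.

```python
# Compute the full set of enabled controls from the current state in one
# place. Any state change just recomputes the set; no toggle can be
# "forgotten" somewhere in the middle of the control flow.

def enabled_controls(state):
    """Pure function: application state -> set of enabled control names."""
    enabled = {"open"}
    if state["document_loaded"]:
        enabled |= {"save", "close", "print"}
    if state["document_loaded"] and state["modified"]:
        enabled |= {"revert"}
    return enabled

state = {"document_loaded": True, "modified": False}
```

The rules become as inspectable as a grammar: every reachable combination of state is handled by the same few lines, instead of by whichever sequence of event handlers happened to run.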
What has happened in the past 25 years or so of programming language evolution is that the complexities of the past have been pushed and morphed into the complexities of the present. And tackling one in isolation (memory management) does nothing for a transformation of the same problem (resource management in general; memory isn't the only thing that leaks), though it may provide the best bang defect-rate-reduction-wise.
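The "memory isn't the only thing that leaks" point can be illustrated with scope-bound cleanup, a sketch of my own using Python context managers (RAII in C++ plays the same role): tying release to control flow handles files, locks, and connections the same way a garbage collector handles memory, even when an exception unwinds the stack.

```python
# Scope-bound cleanup generalizes "don't leak" beyond memory: the finally
# block releases the resource no matter how the body exits.
import contextlib

log = []

@contextlib.contextmanager
def acquired(name):
    log.append(f"acquire {name}")
    try:
        yield name
    finally:
        log.append(f"release {name}")   # runs even if the body raises

try:
    with acquired("lock"), acquired("file"):
        raise RuntimeError("something failed mid-operation")
except RuntimeError:
    pass

# Both resources were released, in reverse order of acquisition.
```

This is the same "restrict who can do what" move as garbage collection, applied to the transformed problem rather than the original one.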
Where to begin... (Score:3, Interesting)
Cox really does highlight some of the best practices out there, but he also skims over some key ones. The biggest problem I've seen out there (and I've done QA management consulting for a good chunk of those 10 years, so I've seen a lot of organizations) is commitment by management. Most QA folks know there will always be a challenge in finding the right balance between schedule/budget, quality of product, and feature set. Do you want it good? Now? Or stripped down? And most QA folks are willing to work within that mindset. But when management 1) does not appropriately staff QA activities; 2) does not appropriately fund QA activities and annual budgets; and 3) does not make it damn clear to all staff that QA is a requirement for project, and thus ultimately product, success ("if you don't like someone testing your code and logging bugs against it, you'd better move along, pardner"), then the organization is paying lip service to QA. Heck, even when QA finds a horrible problem prior to release, while there's still time to fix it, you'd think most folks would be happy (OK, not thrilled, because the balance skews towards a schedule slip) that there was time to get a fix in. But no, 99 times out of 100, QA is slammed for holding up the process.
Independent testing is a must for an organization to have any real understanding of the quality of its product. Engineers cannot be the only people testing their outputs. For one, it's a very expensive way to test: I want engineers designing, coding, and unit testing, not doing integration, system, or release testing. QA staff cost 30-70% of an engineer's salary, so why would you want the most expensive resource doing the (in most cases) less technically demanding work? And work they usually don't enjoy doing anyway (grumble grumble, I didn't sign up for this!).
OK, so this is a bit of a rant. I'm just dealing with my current senior management, who say they want QA to manage and execute the independent testing, but then turn around and let 95% of Engineering refuse to participate in the process.
I'll just go have a beer and forget about my problems...mmm... beer... drool drool.
That's no mirror!!! (Score:2)
Re:slashdotted (Score:2)
Re:software dev (Score:2, Insightful)
The top 5 things are:
1) Meeting the requirements
2) Stability
3) Maintainability
4) Expandability
5) Efficiency
Re:software dev (Score:3, Insightful)
I am also a big fan of profiling code, an
Re:software dev (Score:2)
Re:gmail invites (6) (Score:2)
Or am I the only IT boss who insists on disabling JavaScript by default except for the office Intranet {our in-house LAMP [and we use at least two of the possible P's in that acronym] apps use JavaScript a lot for flinging data around between forms and highlighting stuff in tables
Re:Type Checking = false promises (Score:3, Insightful)
The earlier you catch a bug, the easier it is to fix; that has been demonstrated over and over. Heavy type-checking catches bugs at compile time. Dynamic type-checking may leave bugs until the user runs over them.
Is the array row-first or column-first? If rows and columns are different types, then the compiler will tell me when I got it wrong. I don't find that chasing that bug later, when it could be anywhere in a thousand lines of
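The row/column example above can be sketched with distinct wrapper types; the Grid/Row/Col names are illustrative, not from the comment. Python here only catches the swap at runtime, at the call site; a statically typed language (or a Python type checker such as mypy) would catch the same mistake before the code ever ran, which is the point being made.

```python
# Give rows and columns distinct types so swapping them fails immediately
# at the call site, instead of silently reading the wrong cell far away.
from dataclasses import dataclass

@dataclass(frozen=True)
class Row:
    index: int

@dataclass(frozen=True)
class Col:
    index: int

class Grid:
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]

    def get(self, row, col):
        if not (isinstance(row, Row) and isinstance(col, Col)):
            raise TypeError("expected get(Row, Col)")
        return self.cells[row.index][col.index]

g = Grid(rows=2, cols=3)
g.cells[1][2] = 42
value = g.get(Row(1), Col(2))    # g.get(Col(2), Row(1)) raises TypeError here
```

Plain integers would let the swapped call through, and the wrong value would surface somewhere in those thousand lines instead of at the one line that caused it.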
Re:Original ideas are highly overrated (Score:3, Interesting)
I'm glad you don't think the content is original - that means you are one of the people who actually has some idea of how to write good code and automate/enhance the testing side. Unfortunately to a lot of people out there actually writing code in business the concepts are new.
This was Alan's mornin