Automated Software QA/Testing?
nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes 'kills' the fun is testing or QA.
I don't mind standalone testing of components, since you usually create a separate program for that purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't).
My question is how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"
nothing to see here, move along. (Score:3, Insightful)
I'll stick with The Practice of Programming [bell-labs.com]. At the very least, I trust the people who wrote it to have better judgment.
You shouldn't be doing it (Score:5, Insightful)
As a former professional software tester ... (Score:5, Insightful)
Testing goes far beyond what any automated system can test, if you have a user in there somewhere. You also need to check things like "How easy is it to use?" and "Does this feature make sense?" We also suggested features that the program did not have but that, from our experience using it, we thought it should have.
Testing is fun too. It is MEETINGS that suck. (Score:2, Insightful)
QA is a separate function (Score:5, Insightful)
Automated process (Score:2, Insightful)
"We don't have time for that. Just get the QA testing complete so we can start the layoffs."
This basically makes the entire question of automating processes academic. Now, if automating processes can lead to massive job loss, salary savings and bonuses, it might actually be approved.
Long-term value is never EVER approved instead of short-term pocket-stuffing, EVEN IF a business case can be made for it. I've seen near-perfect business cases (complete financials, charts, graphs, blow-dried corporate phone-flipping management prick with a light pointer hosting Wheel of Buzzwords in the conference room) made for automating very expensive work schedules, and they were a) ignored or b) shouted down.
Based on this, it's very possible that even if an automated tool could be built, and worked, it would still be ignored because it was non-standard. Yes, I've seen this happen too. Five people assigned to a project that a Perl script could do in a half hour. The Perl script completes the job, but management refuses to believe the results are accurate, so they keep the five people working on the same project for four days... and produce the exact same results.
Now let's all sing the company song...
Six years' experience in QA (Score:5, Insightful)
TDD (Score:4, Insightful)
I came across this when I recently read the book by Erich Gamma and Kent Beck, Contributing to Eclipse. They do TDD in this book all the time, and it sounds like it's actually fun.
Not that I have done it myself yet! It sounds like a case where you have to go through some initial inconvenience just to get into the habit, but I imagine that once you've done that, development and testing can be much more fun altogether.
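From what I gather, the rhythm is: write a small failing test first, then write just enough code to make it pass. Purely as an illustration, a minimal JUnit 3-style sketch (the IntStack class here is made up, just to show the shape of the cycle):

    import junit.framework.TestCase;

    // Written BEFORE the production code: these tests define
    // what a (hypothetical) IntStack is supposed to do.
    public class IntStackTest extends TestCase {
        public void testNewStackIsEmpty() {
            assertTrue(new IntStack().isEmpty());
        }

        public void testPushThenPopReturnsSameValue() {
            IntStack stack = new IntStack();
            stack.push(42);
            assertEquals(42, stack.pop());
            assertTrue(stack.isEmpty());
        }
    }

    // The simplest implementation that makes the tests pass.
    class IntStack {
        private final java.util.ArrayList<Integer> values = new java.util.ArrayList<Integer>();

        public boolean isEmpty() { return values.isEmpty(); }
        public void push(int value) { values.add(value); }
        public int pop() { return values.remove(values.size() - 1); }
    }

The tests fail first (IntStack doesn't even exist yet), and the class underneath only ever does as much as the tests demand. I can see how that turns testing into part of the game instead of a chore.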
Automation versus Manual Testing (Score:5, Insightful)
- Segue's SilkTest [segue.com]
- WinRunner [wilsonmar.com]
- WebLoad [radview.com]
- Tcl/Expect [nist.gov]
There are *many many* problems with large-scale automation, because once you develop scripts around a particular user interface, you've essentially tied that script to that version of your application. So this becomes a maintenance problem as you go forward.
One very useful paradigm we've employed in automation is to use it to *prep* the system under test. Many times it's absolutely impossible to create 50,000 users, or 1,000 data elements, without using automation in some form. We automate the creation of users, we automate the API calls that put the user into a particular state, and then we use our brains to do the more "exotic" manual testing that stems from the more complex system states we've created. If you are going to embark on automating your software, this is a great place to start.
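Just to make the "prep" idea concrete, here's a rough Java sketch; the ProvisioningClient class is a made-up stand-in for whatever API your system actually exposes:

    // Bulk-create test users and push some of them into a known
    // state, so the "exotic" manual testing starts from rich data.
    public class TestDataPrep {
        public static void main(String[] args) {
            ProvisioningClient client = new ProvisioningClient();

            for (int i = 0; i < 50000; i++) {
                String username = "loaduser" + i;
                client.createUser(username);
                // Every tenth user gets driven into a half-finished
                // transaction, a state that's tedious to reach by hand.
                if (i % 10 == 0) {
                    client.beginTransaction(username);
                }
            }
            System.out.println("System under test is prepped; manual testing can begin.");
        }
    }

    // Stand-in only; the real version would make HTTP/RMI calls
    // into the system under test.
    class ProvisioningClient {
        void createUser(String username) { /* call your user-creation API */ }
        void beginTransaction(String username) { /* drive the user into the desired state */ }
    }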
Hope this helps.
Re:Automation is difficult (Score:5, Insightful)
Even in the best case, automation scripts go out of date very quickly, and running old scripts over and over again seldom finds any bugs. Usually nobody is touching the old functions anyway, so regression testing is largely pointless (take a 'lude, of course there are exceptions).
I think the most promising idea on improving reliability I've seen in recent years is the XP approach. At least there are usually four eyes on the code, and at least some effort is being put into writing unit test routines up front.
I think the least promising approach to reliability is taken by OO types who build so many accessors that you can't understand what the hell is really going on. It's "correctness through obfuscation." Reminds me of the idiots who rename all the registers when programming in assembly.
Re:QA is a separate function (Score:2, Insightful)
A developer has some responsibility to ensure their code at least functions in the context of the overall application before making it my problem. Just because it compiles does not mean it is done.
Integrated Testing (Score:2, Insightful)
I would like to know how people in other systems companies divide up testing work between Dev and QA. I would also be interested in learning more about the kind of tools people use to develop and track QA.
Re:Manual Testing (Score:4, Insightful)
However, if you go into beta testing too early, then major basic features will be broken from time to time, and you'll only irritate your testers and make them stop helping you. This is where automated tests shine, because they help you ensure that major changes to the code have not broken anything.
Put another way, automated tests can verify compliance with a specification or design. User testing can verify compliance with actual needs. Neither can replace the other.
The Mythical Man-Month (Score:5, Insightful)
Of course, nobody wants to do that, because it's expensive and/or boring. Thus we have the state of software today. Just like we had the state of software back in 1975, when he wrote the book.
It never ceases to amaze me that we're still making the exact same mistakes, 30 years later. If you work in software engineering and you haven't read The Mythical Man-Month, you *need* to. Period. Go do it right now, before you write another line of code.
Re:You're not alone. (Score:1, Insightful)
Finding bugs != writing buggy code. When a tester finds a bug, it means the code was already broken. As a former tester, I always marveled at the hostility and abuse heaped on our profession by the very people whose code we were trying to help improve. Who would you rather have finding your bugs: testers who work for the same company and have signed the same NDA as you, or some end user who will tell all their friends and blog about how lame your software is?
Re:The Mythical Man-Month (Score:2, Insightful)
Just as there is a creative rush in building a working software system out of the ether, there is an equal rush and creative element in software testing.
Testers and developers think differently but have the same purpose in mind. At the end of the day, both want the best possible product to be delivered.
I suggest signing up to StickyMinds [stickyminds.com] as a good place to start.
Re:Testing (Score:2, Insightful)
You do provide complete and accurate TDS (Technical Design Specifications) for architectural details and FDS (Functional Design Specifications) for system operation, don't you?
Re:You shouldn't be doing it (Score:3, Insightful)
Sometimes I find it amusing when the testers say that they can't imagine how someone could enjoy programming; that they find it tedious, painful, and boring; and that they are glad that I do it so they don't have to.
Testers are Professionals Too (Score:2, Insightful)
Re:Automation is difficult (Score:2, Insightful)
This is one part of extreme programming I like very much. The idea is to write the test cases before you write the software. That way, you're testing to specification, not implementation.
To this I must respectfully disagree. In small(er) projects it might be closer to the truth, but in my experience regression testing is vital. Regression testing is mainly useful when requirement specifications change and features creep in. Someone will be hacking at code somewhere to add a feature, without thinking about the implications. In my experience, people are always touching old functions! I always mandate automated regression testing on every project I've worked on with more than four people on it.
On a side note, I think regression testing in open source projects is even more important! Open source projects are, by their very nature, hackish. People are constantly rewriting functions to do what they were never intended to. I'd love to see a good automated regression testing framework for new open source projects; even something as simple as the sketch below would catch a lot.
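The bare-bones version is just to pin down outputs that already work, so a "harmless" rewrite of an old function can't silently change them. A toy JUnit example (DateParser is a made-up stand-in for some crusty old function people keep "improving"):

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import junit.framework.TestCase;

    // Regression suite: known-good input/output pairs captured
    // from the current release.
    public class DateParserRegressionTest extends TestCase {
        public void testFormatsThatUsedToWorkStillWork() {
            assertEquals("2004-06-01", DateParser.normalize("06/01/2004"));
            assertEquals("2004-06-01", DateParser.normalize("1 Jun 2004"));
        }

        public void testRejectsGarbageLikeItAlwaysHas() {
            assertNull(DateParser.normalize("not a date"));
        }
    }

    // Toy stand-in for the old function under regression.
    class DateParser {
        private static final String[] FORMATS = { "MM/dd/yyyy", "d MMM yyyy", "yyyy-MM-dd" };

        static String normalize(String input) {
            for (int i = 0; i < FORMATS.length; i++) {
                try {
                    SimpleDateFormat in = new SimpleDateFormat(FORMATS[i], java.util.Locale.US);
                    in.setLenient(false);
                    return new SimpleDateFormat("yyyy-MM-dd").format(in.parse(input));
                } catch (ParseException ignored) {
                    // fall through and try the next format
                }
            }
            return null; // unrecognized input
        }
    }

Run something like that after every commit, and the "I just tweaked one function" crowd gets caught the same day instead of three releases later.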
Re:As a former professional software tester ... (Score:2, Insightful)
Automated testing tools are best suited for regression testing. Regression testing is the set of test cases that are performed over and over again with each release. Its main function is to make sure that the new features did not break anything that was not supposed to change.
Our test group uses a product called Hammer (sorry, but I don't know where they got it or how much they paid for it) for their regression testing. Hammer has its own scripting language (may be VB based) and its own database that is used to hold test case data. For example, the test case may require that 1000 (different) customer accounts can be created within 60 seconds.
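I have no idea what a Hammer script actually looks like, but in plain Java the shape of that test case would be roughly this (AccountApi is a made-up stand-in, not Hammer's API):

    // Timed load check: create 1000 distinct customer accounts
    // and fail if the batch takes longer than 60 seconds.
    public class AccountCreationLoadTest {
        public static void main(String[] args) {
            AccountApi api = new AccountApi();
            long start = System.currentTimeMillis();

            for (int i = 0; i < 1000; i++) {
                api.createAccount("customer" + i); // each account is distinct
            }

            long elapsed = System.currentTimeMillis() - start;
            System.out.println((elapsed <= 60000 ? "PASS" : "FAIL")
                    + ": 1000 accounts in " + elapsed + " ms (limit 60000 ms)");
        }
    }

    // Stand-in; the real version would call the application under test.
    class AccountApi {
        void createAccount(String name) { /* invoke account creation here */ }
    }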
That's about the extent of my Hammer knowledge; I just design, write, and unit test the software. The test group does the feature, system, and regression testing, plus they coordinate the beta testing that is done by the trainers and a small group of actual users. The users must "sign off" on new software before it can go into Production.
And I agree that there is nothing like having "actual" users doing the testing for finding bugs and for providing feedback.
Re:The Mythical Man-Month (Score:2, Insightful)
Of course, this usually isn't on the schedule. Management's view is that if you spent 6 weeks testing and few bugs were found, then the time was wasted and the product could have shipped out earlier.
But regardless of the schedule, the test time that Brooks describes will get spent one way or another. Often it is spent on repeated testing as a result of bug fixes. Last-minute bugs end up causing schedule slips, and those slips are basically all testing, retesting, and debugging. The sooner tests are done, the sooner and cheaper bugs can be found and fixed.
But as you said we still ignore this (or rather we cross our fingers and hope it won't happen to us this time). Just a couple months ago on my current project, my boss told me that it had been decided that "we didn't have time for integration testing".