Automated Software QA/Testing? 248
nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing or QA.
I don't mind standalone testing of components, since usually you create a separate program for this purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't).
My question is how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"
Testing (Score:4, Informative)
bhj
Automation is difficult (Score:5, Informative)
From my personal experience at one of the world's largest software companies, automation maintenance for even a small suite (200-300 tests, 10 dedicated machines) is a full-time job. That means one person's entire responsibility is making sure the machines are running, making sure the tests aren't returning passes and fails for reasons other than actually exercising what they're meant to test, and updating the automation whenever either the hardware or the software changes. This person must know both the automation suite and the tests it performs intimately, and must also be willing to spend his days being a lab jockey. It's usually pretty difficult to find these people.
My point here is that even after spending many dev or test hours developing automation, it is in no way suddenly complete. There is no magic bullet to replace a human tester; the only thing you can do is try to improve his productivity by giving him better tools.
-tsf
Re:You shouldn't be doing it (Score:5, Informative)
I once worked with a dyslexic drafter. He generally did very good work, except that his drawings often had spelling errors. When he looked at his own drawings, the errors were not obvious to him like they were to everyone else. Most people don't have such a pronounced systematic tendency to make some particular error, but we all occasionally make mistakes where we were simply thinking about something the wrong way. And those mistakes are invisible to us because we're still thinking the same wrong way when we review the work. So having work checked by someone other than the one who created it is a good practice for just about any endeavor, not just software.
Manual work and automated testing tools (Score:3, Informative)
We evaluated a bunch of tools for testing the front-end systems, and after a year-long study of what's in the marketplace, we settled on the Mercury Interactive [note: I do not work for them] tools: QuickTest Professional for regression testing, and LoadRunner for stress testing.
No one product will test both the front and back ends, so you will need to use a mixture of the tools, open source tools, system logs, manual work, and some black magik.
Our applications are global in scope. We test against Windows clients, mainframes, UNIX boxes and mid-range machines.
The automated tools are definitely a blessing, but are not an end-all-be-all. Many of the testing tool companies just do not understand "global" enterprises, and working to integrate all the tools amongst all your developers, testers, operations staff, etc. can be difficult.
Getting people on board with the whole idea of automated tools is yet another big challenge. Once you have determined which toolset to use, you have to do a very serious "sell" job on your developers, and most notably, your operations staff, who have "always done their testing using Excel spreadsheets".
Another big problem, as noted above, is that no single product will test both the front and the back end; you will have to purchase tools from more than one vendor to do that. A tool for the back-end iSeries, for example, is Original Software's TestBench/400. Again, this does not integrate with Mercury's tools, so manual correlation between the two products will be needed.
You can only go so far with these tools; they will not help you to determine the "look and feel" across various browsers - that you will have to do yourself.
QuickTest Professional does have a Terminal Emulator add-in (additional charge) that allows you to automate mouse/keystrokes via Client Access 5250, and other protocols.
The best way to determine your needs is to call up the big companies (CA, IBM, Mercury) and have them do demos for your staff. Determine your requirements. Set up a global team to evaluate the products. Get demo copies and a technical sales rep to help you evaluate in your environment. Compare all the products, looking at the global capability of each product as well as 24/7 support.
No tool is perfect, but even an imperfect tool is better than purely manual testing. Once everybody is on board, and has been "sold" on the idea of the tools, you won't know how you lived without them.
Also, make sure that you have a global tool to help in test management to manage all your requirements and defects and test cases. Make sure it is web-based.
Expect to spend over a year on this sort of project for a global company. Make sure you document everything, and come up with best practice documentation and workflows.
Good luck!
SilkTest (Score:2, Informative)
Downside? It's Windows stuff AND it's hellaciously expensive...
follow up (Score:2, Informative)
automated testing (Score:2, Informative)
Seems to me that there are really two levels of automated testing: there's automatically generating and running test cases, and there's having the ability to automatically repeat test cases you've manually created and run in the past.
Just being able to repeat manually created test cases is a big help. It sounds really simple -- create a harness that you can plug test cases into and start writing test cases -- but scheduling and coordinating that sort of thing starts to get really difficult on large projects.
Where I work (a certain large software company) we have about as many full-time testers as we have developers, and it takes work from all of us to keep the test framework up to date and running. Our testers actually write a lot of code as part of their job; their code isn't shipped as part of the product -- it's used to test it. They write test cases and create infrastructure for running test cases. The developers also create test cases that can plug into the test harnesses. It actually took a lot of work to get all of that running smoothly for a project that is large and has a lot of people checking in every day.
Before every checkin we can submit our change to a server farm that runs a smoke build (to verify that things build) and also runs a suite of basic functionality tests (written by developers) to make sure nothing is outright broken by the checkin. It really goes a long way toward ensuring that you have a good build almost every day.
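The gating logic itself is simple; here's a minimal sketch (the step names and the stubbed runner are hypothetical, standing in for the real server farm): run each step in order and reject the change at the first failure.

```python
import subprocess

def gate_checkin(steps, run=lambda cmd: subprocess.run(cmd).returncode):
    # Run each gating step (smoke build, then basic functionality
    # tests) and reject the change on the first non-zero exit code.
    for cmd in steps:
        if run(cmd) != 0:
            return "REJECTED"
    return "ACCEPTED"

# Example with a stubbed runner standing in for the server farm:
outcomes = {("build",): 0, ("smoke-tests",): 1}
verdict = gate_checkin([("build",), ("smoke-tests",)],
                       run=lambda cmd: outcomes[cmd])
# verdict == "REJECTED" because the smoke tests "failed"
```

The hard part in practice isn't this loop, it's keeping the farm healthy and the smoke suite fast enough that people actually use it before every checkin.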
We also run separate automated test passes that take a lot longer but are much more thorough. They include a lot of manually developed cases with some written according to a plan and others arising as specific regression tests. (Having a big regression suite is very important when you're supporting/maintaining previous versions of your software with service packs and hotfixes!)
Automated deployment is also a biggie! Since the software product we work on is intended to be deployed as a distributed system, we also do a lot of testing with multiple nodes deployed. My group spent a lot of man-hours preparing a system that can automatically wipe and install multi-server topologies and then run test cases across them. So basically you get two things: hands-free setup of all installation and config for a large number of machines, and test case coordination between nodes (e.g. something runs on node A, then waits for node B to do something, then checks a result on node C). If you need to verify a fix that only affects scenarios with at least 5 servers involved, it can really take a lot of time if you have to set up and test the scenario manually! (Also it is a pity that some software can get complicated enough that new problems will only appear once you have at least a certain number of nodes, but that is another story.)
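The A-waits-for-B-check-on-C coordination pattern can be sketched like this (a simulation with threads and events standing in for real machines; an actual deployment would drive remote servers over the network):

```python
import threading

class FakeNode:
    # Stand-in for a test machine in the topology.
    def __init__(self):
        self.state = {}
        self.done = threading.Event()

def run_scenario(a, b, c):
    def step_a():                        # runs on node A
        a.state["msg"] = "written-by-A"
        a.done.set()
    def step_b():                        # node B waits for A, then acts
        assert a.done.wait(timeout=5)
        b.state["copy"] = a.state["msg"]
        b.done.set()
    ta = threading.Thread(target=step_a)
    tb = threading.Thread(target=step_b)
    ta.start(); tb.start()
    ta.join(); tb.join()
    # Final verification happens on node C:
    c.state["ok"] = (b.state.get("copy") == "written-by-A")
    return c.state["ok"]

# run_scenario(FakeNode(), FakeNode(), FakeNode()) -> True
```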
As far as automatic test case generation goes, we don't have a lot of that. It has its place, but I have to agree with something another poster stated which is that as long as there is a human user involved you need a human tester at the other end. (There's no one-size-fits-all tool that you can run against your software that will magically return "true" iff it is flawless.)
One thing that automatic tools are good at is checking fundamental things at code level, and we do use some tools that do that. One of our tools instruments our binaries by marking every code block so that when you run it you can get "code coverage" output. This helps you see a couple of things: you can get a feel for the complexity of your code (you'll see the number of blocks and arcs for each function or group of functions), and you can see which code gets hit and how many times when you run a particular test case. This allows you to do a few things -- target complex areas of code to try and make simplifications, and target potential dead code areas for removal. This recognizes a couple of fundamental rules of software: complicated code is generally more bug-prone, and dead code is an accident waiting to happen. In the spirit of using automatic tools, this information by itself doesn't mean anything -- it's only good if it's used as feedback to the development process.
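To give a feel for what coverage instrumentation does, here is a poor man's version in Python using `sys.settrace` (our real tools instrument basic blocks and arcs in the compiled binary, not source lines, but the feedback loop is the same: run a test case, see which code it touched):

```python
import sys

def line_coverage(fn, *args):
    # Record which source lines of fn execute, as offsets from its
    # "def" line.  A crude stand-in for block/arc instrumentation.
    hits = set()
    target = fn.__code__
    def tracer(frame, event, arg):
        if frame.f_code is target and event == "line":
            hits.add(frame.f_lineno - target.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return hits

def classify(x):
    if x > 0:
        return "positive"      # line offset 2
    return "non-positive"      # line offset 3

# line_coverage(classify, 1) hits offsets {1, 2}; running it with
# -1 instead reveals the "positive" branch as unexecuted.
```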
D
Re:You shouldn't be doing it (Score:2, Informative)
Autoexpect (Score:3, Informative)
Expect is a Tcl tool that is also available as a Perl module.
Autoexpect is a script generator of sorts for Expect.
Using that combination, you can create a test harness that can also test things that were not designed to be tested, such as GUIs, browsers, and SSH/password sessions.
However, it is not easy, and yes, it works on Windoze too...
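For flavor, here is a very crude Python sketch of the pattern (this is NOT Expect; real Expect, and the third-party pexpect module, allocate a pty and pattern-match prompts with timeouts, which is what lets them drive programs like ssh that read passwords from the terminal):

```python
import subprocess
import sys

def crude_expect(cmd, responses):
    # Feed canned responses to a command's stdin and capture stdout.
    # Unlike real Expect, this sends everything up front instead of
    # waiting for each prompt, so it only works for simple dialogs.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    out, _ = proc.communicate("\n".join(responses) + "\n")
    return out

# Drive a tiny interactive program:
out = crude_expect(
    [sys.executable, "-c", "name = input('? '); print('hello', name)"],
    ["world"])
# out contains "hello world"
```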
Re:You shouldn't be doing it (Score:1, Informative)
Re:You shouldn't be doing it (Score:2, Informative)
Components that are still being developed, or that may be further developed, will negate any benefits achievable from test automation.
The "Industry Standard" tools such as Mercury's, Compuware's or IBM Rational's test automation suites require a significant amount of coding effort themselves.
"Simple" design changes can blow away weeks of scripting.
Automating stable components is the best way to go. As each component is completed and stable it can be automated for regression testing. Adding the completed component scripts to your test harness will flesh out your test suite.
The above mentioned tools are very expensive but very effective.
I'm looking forward to the subproject from The Eclipse Foundation [eclipse.org]; the number of big-name contributors is quite encouraging.
If you like to code and have reliable and creative testers available, join up.
Test and quality (Score:1, Informative)
The first thing to keep in mind is that test is not QA. I am puzzled that so many conflate the two, and I suspect the reason is that the QA chief is not visible in the company. If anyone follows up to this posting of mine, I would appreciate it if you would say whether or not you have seen/talked to the QA crew much.
OK, so let's start with test: I recommend a single-sheet set of testing guidelines and a set of test process requirements. Testing is done at several levels: unit, integration, system, and acceptance.
Note that the test manager should review unit test documents and review the possibility of automation. Resist automating GUI testing until the system is getting stable; otherwise the setup overhead will eat up all the benefits of running it.
A test manager of course needs thoroughness and an eye for detail, but also intuition. The latter is more important than most people realise. I can tell you from personal experience that it is no fun to be there with a customer at an acceptance test that has tanked.
Next is Quality Assurance. This is not about testing, it is about process, the process that includes testing, verification and validation.
Do consider CMM (Capability Maturity Model). Many Indian companies are top rated, and this is used as a lever to get outsourcing contracts. Sure, many cheat, and we all know there are consultants who can brief you on the "right answers" for an audit. No matter what, do try to live up to the intentions, at least within the company. It does have a useful function, including the unofficial but very real ombudsman function. This job requires tact and diplomacy, as there is some authority associated with it. Suggestions:
I'd be interested in hearing from others with CMM/SQA experiences.