Software

Automated Software QA/Testing?

nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing or QA. I don't mind standalone testing of components, since you usually create a separate program for this purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't). My question is: how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"
This discussion has been archived. No new comments can be posted.

  • Testing (Score:4, Informative)

    by BigHungryJoe ( 737554 ) on Saturday July 31, 2004 @02:29PM (#9853423) Homepage
    We use certification analysts. They handle the testing. Unfortunately, the functional analysts that are supposed to write the tests rarely know the software well enough to do that, so some of my time is spent helping to write the black box tests. Writing good tests has become especially important since most of the cert analyst jobs are being sent to India and aren't on site anymore.

    bhj
  • by TheSacrificialFly ( 124096 ) on Saturday July 31, 2004 @02:31PM (#9853433)
It's extremely difficult to develop and maintain automation on any enterprise-size system. One of the big problems management has with automation, I've found, is that once they've put the money into initially developing the automation, they think it will run completely automatically forevermore.

From my personal experience at one of the world's largest software companies, automation maintenance for even a small suite (200-300 tests, 10 dedicated machines) is a full-time job. That means one person's entire responsibility is making sure the machines are running, making sure the tests aren't passing or failing for reasons unrelated to what they are actually testing, and changing the automation whenever the hardware or software changes. This person must know both the automation suite and the tests it is meant to perform intimately, and must also be willing to spend his days being a lab jockey. It's usually pretty difficult to find these people.

My point here is that even after spending many dev or test hours developing automation, it is in no way suddenly complete. There is no magic bullet to replace a human tester; the only thing you can do is try to improve his productivity by giving him better tools.

    -tsf
  • by wkitchen ( 581276 ) on Saturday July 31, 2004 @02:45PM (#9853506)
    Testing your own work is a huge no-no, as you are much more likely to let small things slide than an independent tester is.
    Yes. And also because you can make the same mental mistakes when testing that you did when developing.

I once worked with a dyslexic drafter. He generally did very good work, except that his drawings often had spelling errors. When he looked at his own drawings, the errors were not obvious to him like they were to everyone else. Most people don't have such a pronounced systematic tendency to make some particular error. But we all occasionally make mistakes in which we were just thinking about something wrong. And those mistakes are invisible to us because we're still thinking the same wrong way when we review it. So having work checked by someone other than the one who created it is a good practice for just about any endeavor, not just software.
  • by Anonymous Coward on Saturday July 31, 2004 @02:47PM (#9853517)
    I was part of a team for a major global company that looked into automated testing tools for testing the GUI front end web applications. These were both staff and customer facing applications.

    We evaluated a bunch of tools for testing the front end systems, and after a year long study of what's in the marketplace, we settled on the Mercury Interactive [note: I do not work for them] tools: QuickTest Professional for regression testing, and LoadRunner for stress testing.

No one product will test both the front and back ends, so you will need to use a mixture of these tools, open source tools, system logs, manual work, and some black magic.

    Our applications are global in scope. We test against Windows clients, mainframes, UNIX boxes and mid-range machines.

The automated tools are definitely a blessing, but they are not the end-all-be-all. Many of the testing tool companies just do not understand "global" enterprises, and working to integrate all the tools among all your developers, testers, operations staff, etc. can be difficult.

    Getting people on board with the whole idea of automated tools is yet another big challenge. Once you have determined which toolset to use, you have to do a very serious "sell" job on your developers, and most notably, your operations staff, who have "always done their testing using Excel spreadsheets".

Another big problem, as noted, is that no product out there will test both the front and the back end. You will have to purchase tools from more than one vendor to do that. A tool for the backend iSeries, for example, is Original Software's TestBench/400. Again, this does not integrate with Mercury's tools, so manual correlation between the two products will be needed.

    You can only go so far with these tools; they will not help you to determine the "look and feel" across various browsers - that you will have to do yourself.

QuickTest Professional does have a Terminal Emulator add-in (additional charge) that allows you to automate mouse/keystrokes via Client Access 5250 and other protocols.

The best way to determine your needs is to call up the big companies (CA, IBM, Mercury) and have them do demos for your staff. Determine your requirements. Set up a global team to evaluate the products. Get demo copies and a technical sales rep to help you evaluate in your environment. Compare all the products, looking at each product's global capability as well as 24/7 support.

    No tool is perfect, but it is better than manual testing. Once everybody is on board, and has been "sold" on the idea of the tools, you won't know how you lived without them.

    Also, make sure that you have a global tool to help in test management to manage all your requirements and defects and test cases. Make sure it is web-based.

    Expect to spend over a year on this sort of project for a global company. Make sure you document everything, and come up with best practice documentation and workflows.

    Good luck!

  • SilkTest (Score:2, Informative)

    by siberian ( 14177 ) on Saturday July 31, 2004 @03:09PM (#9853630)
We use SilkTest for automated testing as well as monitoring. Our QA guys love it, and it frees them up from routine system testing to focus on devising new and devious tests to confound the engineering teams!

Downside? It's Windows stuff AND it's hellaciously expensive...

  • follow up (Score:2, Informative)

    by nailbite ( 801870 ) on Saturday July 31, 2004 @03:50PM (#9853849)
To clarify: The dev team only tests at the code level (memory leaks, code fault tolerance, etc.). We have a separate QA team that performs customer-level testing. Some of them are technically inclined, some are not. All of them, however, understand the mechanisms of the system at a user level. Currently, they manually test each operation of each component in the system using test procedure lists and guidelines. I was wondering if there was any way to automate their processes, even partially, using software. I believe that in any job/work there must be a fun aspect, and lately I've seen that they have lost that spark. Perhaps introducing a new toy (software) would both help their productivity and reignite their spirits. Thus the reason for the post. I would like to thank everyone who replied.
  • automated testing (Score:2, Informative)

    by dannannan ( 470647 ) on Saturday July 31, 2004 @03:52PM (#9853864)

Seems to me that there are really two levels of automated testing: there's automatically generating and running test cases, and there's having the ability to automatically repeat test cases you've manually created and run in the past.

    Just being able to repeat manually created test cases is a big help. It sounds really simple -- create a harness that you can plug test cases into and start writing test cases -- but scheduling and coordinating that sort of thing starts to get really difficult on large projects.
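
    As a rough illustration of that pattern, here is a minimal sketch of such a pluggable harness in Python; the registry, decorator, and test names are all made up for illustration, not taken from any real product:

        # Minimal pluggable test harness: register test cases, run them
        # all, and report results. All names here are hypothetical.
        import traceback

        REGISTRY = []

        def testcase(func):
            """Decorator that plugs a test case into the harness."""
            REGISTRY.append(func)
            return func

        @testcase
        def test_login_accepts_valid_user():
            assert "admin".isalnum()

        @testcase
        def test_login_rejects_empty_password():
            assert len("") == 0

        def run_all():
            passed = failed = 0
            for case in REGISTRY:
                try:
                    case()
                    passed += 1
                    print(f"PASS {case.__name__}")
                except Exception:
                    failed += 1
                    print(f"FAIL {case.__name__}")
                    traceback.print_exc()
            print(f"{passed} passed, {failed} failed")
            return failed == 0

        if __name__ == "__main__":
            raise SystemExit(0 if run_all() else 1)

    The hard part, as the parent says, is not the harness itself but scheduling and coordinating it across a large project.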

    Where I work (a certain large software company) we have about as many full-time testers as we have developers, and it takes work from all of us to keep the test framework up to date and running. Our testers actually write a lot of code as part of their job; their code isn't shipped as part of the product -- it's used to test it. They write test cases and create infrastructure for running test cases. The developers also create test cases that can plug into the test harnesses. It actually took a lot of work to get all of that running smoothly for a project that is large and has a lot of people checking in every day.

    Before every checkin we can submit our change to a server farm that runs a smoke build (to verify that things build) and also runs a suite of basic functionality tests (written by developers) to make sure nothing is outright broken by the checkin. It really goes a long way toward ensuring that you have a good build almost every day.
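
    A stripped-down sketch of that kind of pre-checkin gate might look like the following; the build and test commands are placeholders (assuming a make-based build and a quick test runner), not any particular company's tooling:

        # Sketch of a pre-checkin "smoke gate": build the tree, then run
        # a fast test subset; a nonzero exit blocks the checkin.
        import subprocess
        import sys

        STEPS = [
            ["make", "-j4"],                       # smoke build: does it compile?
            ["python", "run_tests.py", "--quick"]  # basic functionality tests
        ]

        def main():
            for cmd in STEPS:
                print("running:", " ".join(cmd))
                result = subprocess.run(cmd)
                if result.returncode != 0:
                    print("smoke gate FAILED at:", " ".join(cmd))
                    return result.returncode
            print("smoke gate passed; OK to check in")
            return 0

        if __name__ == "__main__":
            sys.exit(main())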

    We also run separate automated test passes that take a lot longer but are much more thorough. They include a lot of manually developed cases with some written according to a plan and others arising as specific regression tests. (Having a big regression suite is very important when you're supporting/maintaining previous versions of your software with service packs and hotfixes!)

Automated deployment is also a biggie! Since the software product we work on is intended to be deployed as a distributed system, we also do a lot of testing with multiple nodes deployed. My group spent a lot of man-hours preparing a system that can automatically wipe and install multi-server topologies and then run test cases across them. So basically you get two things: hands-free setup of all installation and config for a large number of machines, and test case coordination between nodes (e.g. something runs on node A, then waits for node B to do something, then checks a result on node C). If you need to verify a fix that only affects scenarios with at least 5 servers involved, it can really take a lot of time if you have to set up and test the scenario manually! (Also, it is a pity that some software can get complicated enough that new problems only appear after you have at least a certain number of nodes, but that is another story.)
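
    To make the coordination idea concrete, here is a toy, single-machine stand-in for the node A/B/C scenario using Python processes as "nodes"; a real harness would coordinate over the network, and everything here is invented for illustration:

        # Toy stand-in for cross-node test coordination: "node A" produces
        # a value, "node B" waits for it and transforms it, "node C"
        # verifies the final result.
        from multiprocessing import Process, Queue

        def node_a(a_out):
            a_out.put(21)                # step 1: A produces a result

        def node_b(a_out, b_out):
            value = a_out.get()          # step 2: B blocks until A is done
            b_out.put(value * 2)

        def node_c(b_out):
            result = b_out.get()         # step 3: C checks the final result
            assert result == 42, f"expected 42, got {result}"
            print("PASS: 3-node scenario")

        if __name__ == "__main__":
            a_out, b_out = Queue(), Queue()
            procs = [Process(target=node_a, args=(a_out,)),
                     Process(target=node_b, args=(a_out, b_out)),
                     Process(target=node_c, args=(b_out,))]
            for p in procs:
                p.start()
            for p in procs:
                p.join()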

    As far as automatic test case generation goes, we don't have a lot of that. It has its place, but I have to agree with something another poster stated which is that as long as there is a human user involved you need a human tester at the other end. (There's no one-size-fits-all tool that you can run against your software that will magically return "true" iff it is flawless.)

    One thing that automatic tools are good at is checking fundamental things at code level, and we do use some tools that do that. One of our tools instruments our binaries by marking every code block so that when you run it you can get "code coverage" output. This helps you see a couple of things: you can get a feel for the complexity of your code (you'll see the number of blocks and arcs for each function or group of functions), and you can see which code gets hit and how many times when you run a particular test case. This allows you to do a few things -- target complex areas of code to try and make simplifications, and target potential dead code areas for removal. This recognizes a couple of fundamental rules of software: complicated code is generally more bug-prone, and dead code is an accident waiting to happen. In the spirit of using automatic tools, this information by itself doesn't mean anything -- it's only good if it's used as feedback to the development process.
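
    The same coverage feedback loop can be sketched with an off-the-shelf tool such as coverage.py; this is not the binary instrumentation tool the parent describes, just an analogous, minimal example with a made-up function:

        # Rough analogue of the block-coverage idea using coverage.py
        # (pip install coverage); the function under test is invented.
        import coverage

        def classify(n):
            if n < 0:
                return "negative"   # dead branch if tests never pass n < 0
            return "non-negative"

        cov = coverage.Coverage()
        cov.start()
        classify(5)                 # a "test case" that hits only one branch
        cov.stop()
        cov.report(show_missing=True)  # missing lines hint at untested code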

    D

  • by nailbite ( 801870 ) on Saturday July 31, 2004 @04:01PM (#9853925)
To clarify: We do have a separate QA team that performs large-scale testing. I was hoping to help them by providing software to at least partially automate their processes. Testing is not a job that exercises a lot of creativity, unlike development. There should always be an element of fun in any kind of job/work; perhaps automating their menial tasks would lift their spirits a bit. At the very least, they get a new toy to play with. I agree that the dev team should not perform QA, mainly because you cannot expect them to see the user's POV, and because of bias, of course.
  • Autoexpect (Score:3, Informative)

    by HermanAB ( 661181 ) on Saturday July 31, 2004 @04:08PM (#9853963)
    Expect and Perl.

Expect is a Tcl tool that is also available as a module for Perl.

Autoexpect is a script generator of sorts for Expect.

Using that combination, you can create a test harness that can also test things that were not designed to be tested, such as GUIs, browsers, SSH and password sessions.

However, it is not easy, and yes, it works on Windoze too...
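
    For a Python flavour of the same expect/send pattern, the pexpect module (a Python port of Expect, Unix only) can drive interactive sessions; the host, credentials, and prompts below are made up:

        # Driving an interactive SSH session the Expect way, via pexpect
        # (pip install pexpect). Host and password are placeholders.
        import pexpect

        child = pexpect.spawn("ssh testuser@testhost.example.com")
        child.expect("password:")        # wait for the password prompt
        child.sendline("s3cret")         # type it like a human would
        child.expect(r"\$ ")             # wait for the shell prompt
        child.sendline("uptime")
        child.expect(r"\$ ")
        print(child.before.decode())     # output of the command
        child.sendline("exit")
        child.close()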

  • by Anonymous Coward on Saturday July 31, 2004 @05:06PM (#9854257)
Formal safety-critical software processes, as specified by the aerospace industry for example, will insist that your software is tested by a separate team as a minimum. For the higher levels of safety criticality, e.g. jet engine control, a separate company doing the testing is preferred.
  • by SoSueMe ( 263478 ) on Saturday July 31, 2004 @05:09PM (#9854273)
Automation is only good for stable applications.
Elements that will be, or may be, further developed will negate any benefits achievable from test automation.
    The "Industry Standard" tools such as Mercury's, Compuware's or IBM Rational's test automation suites require a significant amount of coding effort themselves.
    "Simple" design changes can blow away weeks of scripting.

    Automating stable components is the best way to go. As each component is completed and stable it can be automated for regression testing. Adding the completed component scripts to your test harness will flesh out your test suite.

The above-mentioned tools are very expensive but very effective.
I'm looking forward to the subproject from the Eclipse Foundation [eclipse.org]; the number of big-name contributors is quite encouraging.

    If you like to code and have reliable and creative testers available, join up.
  • Test and quality (Score:1, Informative)

    by Anonymous Coward on Saturday July 31, 2004 @05:12PM (#9854300)
OK, first some disclosure: I work as a test manager and also as the software quality assurance lead in an ISO-certified and CMM-rated company working on critical software (think medical, space, etc.).

The first thing to keep in mind is that test is not QA. I am puzzled that so many conflate the two, and I suspect the reason is that the QA chief is not visible in the company. If anyone follows up to this posting of mine, I would appreciate it if you say whether or not you have seen/talked to the QA crew much.

OK, so let's start with test: I recommend a single-sheet set of testing guidelines and a test process requirement set. Testing is done at several levels:

• Unit test: Keep the test manager out of this one; here is where the developer tests. Keep it short and simple, but do record whether it has been completed. Those responsible for integration should refuse to integrate untested units. Just a single rotten apple in the works can cause week-long integration nightmares; I guess we have all been there, right? Avoid it, and track down what went wrong. Fix problems, not people. Suggestions (see the sketch after this list):
      • Test interfaces:
• Legal values (this is what most people do; nice, safe values, basically testing to see that it works)
      • Out of range values (here is where many developers skip testing)
      • Borderline/fencepost cases
      • User interfaces:
      • View every new tab, window, button etc.
      • Check with GUI expert, if there is one
• Integration test (into components, CSCIs or the finished product; many levels possible): The test manager rules here. Test from scripts (written or automated), but do make variations. Explore the unusual cases. Unlike developers, the test manager is supposed to provoke the system, try to make it crash, and see that it is robust. Suggestions:
      • Check use cases (if applicable)
      • Check that all units communicate, try stimulating all interfaces
• Use multiple testers; everyone has different ways of doing things, which very often shakes out different bugs. The alternative is that your end customers find the bugs for you and you fail acceptance testing...
• Acceptance test: Typically the project leader runs this, assisted by the test manager, who this time has the schizophrenic job of avoiding known bugs.
      • Test contract items
• Refuse interruptions or other variations if your contract can support this stance
• Record actions if the customer runs free-form tests as part of the acceptance testing
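
    As a sketch of the unit-test suggestions above (legal values, out-of-range values, fencepost cases), here is a minimal Python unittest example; the function under test is invented for illustration:

        # Minimal unittest sketch of the three interface checks listed
        # above. percent_to_fraction is a made-up function under test.
        import unittest

        def percent_to_fraction(p):
            if not 0 <= p <= 100:
                raise ValueError(f"percent out of range: {p}")
            return p / 100.0

        class TestPercentToFraction(unittest.TestCase):
            def test_legal_values(self):       # the nice, safe values
                self.assertAlmostEqual(percent_to_fraction(50), 0.5)

            def test_out_of_range(self):       # what many developers skip
                with self.assertRaises(ValueError):
                    percent_to_fraction(101)
                with self.assertRaises(ValueError):
                    percent_to_fraction(-1)

            def test_fencepost_cases(self):    # the borderline values
                self.assertEqual(percent_to_fraction(0), 0.0)
                self.assertEqual(percent_to_fraction(100), 1.0)

        if __name__ == "__main__":
            unittest.main()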

Note that the test manager should review unit test documents and review the possibility of automation. Resist automating GUI testing until the system is getting stable; otherwise the setup overhead will eat up all the benefits of running it.

A test manager of course needs thoroughness and an eye for detail, but also intuition. The latter is more important than most people realise. I can tell you from personal experience that it is no fun to be there with a customer at an acceptance test that has tanked.

    Next is Quality Assurance. This is not about testing, it is about process, the process that includes testing, verification and validation.

Do consider CMM (the Capability Maturity Model). Many Indian companies are top-rated, and this is used as a lever to get outsourcing contracts. Sure, many cheat, and we all know that there are consultants who can brief you on the "right answers" for an audit. No matter what, do try to live up to the intentions, at least within the company. It does have a useful function, including the unofficial but very real ombudsman function. This job thus requires tact and diplomacy, as there is some authority associated with it. Suggestions:

    • Analyse defects for a pattern
    • Do make sure experiences are acted upon
• Respect confidentiality; people can lose their jobs over what you say
• In analysing defects you will find not just what is wrong but also who makes the most mistakes. Fix the problem, not the person; the underlying issues can often be elsewhere
    • Be prepared to leave before caving in to management pressure

    I'd be interested in hearing from others with CMM/SQA experiences.

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...