Software

Automated Software QA/Testing?

nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing or QA. I don't mind standalone testing of components, since usually you create a separate program for this purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't). My question is: how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"
This discussion has been archived. No new comments can be posted.

  • by F2F ( 11474 ) on Saturday July 31, 2004 @02:26PM (#9853405)
    how about we go back to basics and read the proper books on computer science? no need for your shmancy-fancy-'voice debugged'-automagically-'quality assured' offerings, thanks.

    i'll stick with The Practice of Programming [bell-labs.com]. at the very least i trust the people who wrote it to have a better judgement.
  • by dirk ( 87083 ) <dirk@one.net> on Saturday July 31, 2004 @02:26PM (#9853406) Homepage
    The first thing you need to learn is that you shouldn't be doing large-scale testing on your own systems. That is just setting yourself up for failure, since the only real testing is independent testing. Preferably you should have full-time testers who not only design what needs to be tested, but how the testing will be done and who will do the actual testing. Where I work, we have 2 testers who write up the test plans and then recruit actual users to do the testing (because they can then not only get some exposure to the system but also suggest enhancements for the next version). Testing your own work is a huge no-no, as you are much more likely to let small things slide than an independent tester is.
  • by eddison_carter ( 165441 ) on Saturday July 31, 2004 @02:28PM (#9853418)
    Nothing can compare to having a dedicated test staff. At the last software place I worked (part-time, in school, while getting my engineering degree), we had 3-6 college students working on testing most of the time (we would also be given some small projects to work on).

    Testing goes far beyond what any automated system can test, if you have a user in there somewhere. You also need to check things like "How easy is it to use?" and "Does this feature make sense?". We also suggested features that the program did not have but, from our experience using it, thought that it should have.
  • by Wargames ( 91725 ) on Saturday July 31, 2004 @02:29PM (#9853421) Journal
    I agree about programming. I prefer the design phase. I like to design a system to the point that programming it is a cinch. What really sucks about software development is not testing; it is meetings. Meetings suck the fun out of programming. Stupid, senseless, timewasting meetings. Scott Adams hits the nail on the head about meetings every time.
  • by drgroove ( 631550 ) on Saturday July 31, 2004 @02:29PM (#9853422)
    Outside of unit testing and limited functional testing, developers shouldn't be doing QA on their own code. That's a bit like a farmer certifying his own produce as organic, or a college student awarding themselves a diploma. It misses the point. The QA function (automated, regression, et al. testing) is the responsibility of a QA department. If your employer is forcing you to perform QA's functions, then they obviously don't "get it".
  • Automated process (Score:2, Insightful)

    by cubicledrone ( 681598 ) on Saturday July 31, 2004 @02:38PM (#9853479)
    Here is the standard management response to automating anything:

    "We don't have time for that. Just get the QA testing complete so we can start the layoffs."

    This basically makes the entire question of automating processes academic. Now, if automating processes can lead to massive job loss, salary savings and bonuses, it might actually be approved.

    Long-term value is never EVER approved instead of short-term pocket-stuffing, EVEN IF a business case can be made for it. I've seen near-perfect business cases (complete financials, charts, graphs, blow-dried corporate phone-flipping management prick with a light pointer hosting Wheel of Buzzwords in the conference room) made for automating very expensive work schedules, and they were a) ignored or b) shouted down.

    Based on this, it's very possible that even if an automated tool could be built, and worked, it would still be ignored because it was non-standard. Yes, I've seen this happen too. Five people assigned to a project that a Perl script could do in a half hour. The Perl script completes the job, but management refuses to believe the results are accurate, so they keep the five people working on the same project for four days... and produce the exact same results.

    Now let's all sing the company song...
  • by cemaco ( 665884 ) on Saturday July 31, 2004 @02:39PM (#9853485)
    I worked 6 years as a Quality Assurance Specialist. You cannot avoid manual testing of a product. Standard practice is to manually test any new software and automate as you go along, to avoid having to go over the same territory each time there is a new build. You also automate specific tests for bugs found after they are fixed, to make sure they don't get broken again.

    My shop used Rational Robot from IBM. There are a number of others; Silk is one I have heard of, but never used.

    Developers often have an attitude that Q.A. is only a necessary evil. I think part of it is because it means admitting that they can't write perfect code. The only people I have seen treated worse are the help desk crowd (another job I have done in the past). The workload was terrible, and when layoff time came, who do you think got the axe first?

    As for developers doing their own testing? That would help some, but not all that much. You need people with a different perspective.
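
    For illustration, here is a minimal JUnit 3-style sketch of the "automate a test for every fixed bug" practice described above; the DateParser class and the bug number are hypothetical, not from the post:

        import junit.framework.TestCase;

        // A regression test pinned to a previously fixed defect, so the fix
        // can't silently break again in a later build. DateParser and the
        // bug number are hypothetical stand-ins.
        public class Bug1042RegressionTest extends TestCase {

            // Bug 1042: parse("") used to throw NullPointerException instead
            // of the documented IllegalArgumentException.
            public void testEmptyStringRejectedCleanly() {
                try {
                    new DateParser().parse("");
                    fail("Expected IllegalArgumentException for empty input");
                } catch (IllegalArgumentException expected) {
                    // The fixed behavior: a clean, documented exception.
                }
            }
        }

    Once such a test exists, it runs with every build, which is exactly what keeps the old territory from needing manual re-testing.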
  • TDD (Score:4, Insightful)

    by Renegade Lisp ( 315687 ) * on Saturday July 31, 2004 @02:41PM (#9853490)
    I think one answer may be Test Driven Development (TDD). This means developers are supposed to create tests as they code -- prior to coding a new feature, a test is written that exercises the feature. Initially, the test is supposed to fail. Add the feature, and the test passes. This can be done on any level, given appropriate tools: GUI, End-to-End, Unit Testing, etc. Oh, did I mention JUnit? The tiniest piece of code with the most impact in recent years.

    I came across this when I recently read the book by Erich Gamma and Kent Beck, Contributing to Eclipse. They do TDD in this book all the time, and it sounds like it's actually fun.

    Not that I have done it myself yet! It sounds like a case where you have to go through some initial inconvenience just to get into the habit, but I imagine that once you've done that, development and testing can be much more fun altogether.
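
    To make the red-green cycle concrete, here is a minimal JUnit 3-style sketch under one assumption: BoundedStack is a hypothetical class that does not exist yet. The tests are written first, fail (red), and pass (green) only once the feature is implemented.

        import junit.framework.TestCase;

        // Written *before* BoundedStack exists: each test fails until the
        // corresponding feature is implemented.
        public class BoundedStackTest extends TestCase {

            public void testNewStackIsEmpty() {
                BoundedStack stack = new BoundedStack(10);
                assertTrue(stack.isEmpty());
            }

            public void testPushThenPopReturnsSameElement() {
                BoundedStack stack = new BoundedStack(10);
                stack.push("element");
                assertEquals("element", stack.pop());
                assertTrue(stack.isEmpty());
            }
        }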

  • by kafka47 ( 801886 ) on Saturday July 31, 2004 @02:46PM (#9853511) Homepage
    At my company, we have a small QA group that tests several enterprise client-server applications, including consumer-level applications on multiple platforms. To exhaustively test all of the permutations and platforms is literally impossible, so we turn to automation for many of the trivial tasks. We've developed several of our own automation harnesses for UI testing and for API and data verification testing. The technologies that we've used:
    - Segue's SilkTest [segue.com]
    - WinRunner [wilsonmar.com]
    - WebLoad [radview.com]
    - Tcl/Expect [nist.gov]

    There are *many many* problems with large-scale automation, because once you develop scripts around a particular user interface, you've essentially tied that script to that version of your application. So this becomes a maintenance problem as you go forward.

    One very useful paradigm we've employed in automation is to use it to *prep* the system under test. Many times it's absolutely impossible to create 50,000 users, or 1,000 data elements, without using automation in some form. We automate the creation of users, we automate the API calls that put the user into a particular state, then we use our brains to do the more "exotic" manual testing that stems from the more complex system states that we've created. If you are to embark on automating your software, this is a great place to start.

    Hope this helps.
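
    As a sketch of that prep-then-explore idea (the AdminApi class and its methods are hypothetical placeholders for whatever interface the system under test actually exposes):

        // Bulk test-data setup: script the 50,000-user grunt work, then do
        // the "exotic" manual testing against the resulting system state.
        public class PrepTestUsers {
            public static void main(String[] args) throws Exception {
                AdminApi api = AdminApi.connect("http://test-server:8080");
                for (int i = 0; i < 50000; i++) {
                    String name = "loaduser" + i;
                    api.createUser(name, "defaultPassword");
                    api.activateAccount(name); // drive each user into the state the manual tests need
                }
                System.out.println("Created and activated 50,000 test users.");
            }
        }

    The point of the design is that the script does only the repetitive setup; the judgment-heavy testing that follows stays manual.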
  • by Bozdune ( 68800 ) on Saturday July 31, 2004 @02:54PM (#9853546)
    Parent has it right. Most automation efforts fail because the test writers can't keep up with the code changes, and not many organizations can pay QA people what you need to pay them if you expect them to be programmers (which is what they need to be to use a decent automation tool). Plus, one refactoring of the code base, or redesign of the UI without any refactoring of the underlying code, and the testers have to throw out weeks of work. Very frustrating.

    Even in the best case, automation scripts go out of date very quickly. And, running old scripts over and over again seldom finds any bugs. Usually nobody is touching the old functions anyway, so regression testing is largely pointless (take a lude, of course there are exceptions).

    I think the most promising idea on improving reliability I've seen in recent years is the XP approach. At least there are usually four eyes on the code, and at least some effort is being put into writing unit test routines up front.

    I think the least promising approach to reliability is taken by OO types who build so many accessors that you can't understand what the hell is really going on. It's "correctness through obfuscation." Reminds me of the idiots who rename all the registers when programming in assembly.
  • by the_weasel ( 323320 ) on Saturday July 31, 2004 @03:03PM (#9853599) Homepage
    I do a fair bit of QA and testing of our software product, and if I had a nickel for every time it's been apparent that a developer checked in code they clearly NEVER tested, I'd be a rich man.

    A developer has some responsibility to ensure their code at least functions in the context of the overall application before making it my problem. Just because it compiles does not mean it is done.
  • Integrated Testing (Score:2, Insightful)

    by vingufta ( 548409 ) on Saturday July 31, 2004 @03:08PM (#9853621)
    I work for a mid sized systems company with about 2K employees and we've struggled with testing our systems. There are a few reasons for this:
    • The system has grown overly complicated over the years with tens of subsystems. There is some notion of ownership of the individual subsystems in both dev and qa but no such thing exists when it comes to interoperability of these subsystems. Dev engineers are free to make changes to any subsystem they want to while QA does not feel that they should test anything outside their subsystem. What ends up happening is that we do rigorous subsystem testing but very little inter-operability testing which leads to a lot of issues in the field since those represent more realistic customer scenarios.
    • As a consequence, management teams have been pushing the dev engineers to write test plans and automated tests for the functionality they're working on, since they believe that dev can do a better job at testing. This not only overloads the dev engineer but also decreases the testing quality, since I believe that developers cannot be solely held responsible for testing their own code. They are more likely to work under a lot of assumptions and would overlook a lot of bugs. Secondly, they would not think of the interactions between various subsystems, since they'll be concentrating on their own code.
    • Finally, it is very important that there are standard QA practices in a company. We've been lacking this, since each subsystem started their QA efforts individually and ended up developing tools and methods that did not fit with each other. We do have a common reporting method on number of tests conducted vs. planned, but the quality of tests varies so significantly that those numbers make no sense.

    I would like to know how people in other systems companies divide up testing work between Dev and QA. I would also be interested in learning more about the kind of tools people use to develop and track QA.

  • Re:Manual Testing (Score:4, Insightful)

    by GlassHeart ( 579618 ) on Saturday July 31, 2004 @03:20PM (#9853675) Journal
    What you're referring to is called "beta testing", where a feature-complete product is released to a selected group of real users. This is a highly effective technique, because it's simply impossible to think of everything.

    However, if you go into beta testing too early, then major basic features will be broken from time to time, and you'll only irritate your testers and make them stop helping you. This is where automated tests shine, because they help you ensure that major changes to the code have not broken anything.

    Put another way, automated test can verify compliance to a specification or design. User testing can verify compliance to actual needs. Neither can replace the other.

  • by DragonHawk ( 21256 ) on Saturday July 31, 2004 @03:54PM (#9853871) Homepage Journal
    When Fred Brooks published his book, The Mythical Man-Month, one of the things he noted was that testing should account for *more than half* of the budget of a software project. Actual design and coding should be the minority. This is because software is complex, inter-related, easy to do wrong, and not obvious when it is done wrong.

    Of course, nobody wants to do that, because it's expensive and/or boring. Thus we have the state of software today. Just like we had the state of software back in 1975 when he wrote the book.

    It never ceases to amaze me that we're still making the same exact mistakes, 30 years later. If you work in software engineering and you haven't read The Mythical Man-Month, you *need* to. Period. Go do it right now, before you write another line of code.
  • by Anonymous Coward on Saturday July 31, 2004 @04:44PM (#9854160)
    [that] get their fun from breaking what others have created.
    Finding bugs != writing buggy code. When a tester finds a bug, it means the code was already broken. As a former tester, I always marveled at the hostility and abuse heaped on our profession by the people whose code we were trying to help improve. Who would you rather have finding your bugs, testers who work for the same company as you, and have signed the same NDA as you, or some end-user, who will tell all their friends about it and post in their blog about how lame your software is?
  • by SoSueMe ( 263478 ) on Saturday July 31, 2004 @05:22PM (#9854353)
    I agree with most of what you say, except for the "boring" part. The Mythical Man-Month is still relevant today.
    Just as there is a creative rush in building a working software system out of the ether, there is an equal rush and creative element in software testing.

    Testers and developers think differently but have the same purpose in mind. At the end of the day, both want the best possible product to be delivered.

    I suggest signing up to StickyMinds [stickyminds.com] as a good place to start.
  • Re:Testing (Score:2, Insightful)

    by SoSueMe ( 263478 ) on Saturday July 31, 2004 @05:36PM (#9854433)
    Certified analysts should be able to work from well-written specifications.
    You do provide complete and accurate TDS (Technical Design Specifications) for architectural details and FDS (Functional Design Specifications) for system operation, don't you?
  • by Profane MuthaFucka ( 574406 ) <busheatskok@gmail.com> on Saturday July 31, 2004 @08:05PM (#9855136) Homepage Journal
    I often say that our testing people are very special people. That's because I'm a programmer. I write code because it's fun, and I consider testing to be a completely boring, mind-numbing, horrible way to spend a day. I don't say that they are special people to denigrate them, however. They are special people because through whatever quirk of environment and genetics, they actually love that kind of work, and they truly excel at it. It's a thing that completely mystifies me, but I won't question it. Because they do such a good job, I don't have to test. They are therefore at least partly responsible for my own enjoyment of my job, and I owe them big time.

    Sometimes I find it amusing when the testers say that they can't imagine how someone could enjoy programming; that they find it tedious, painful, and boring; and that they are glad that I do it so they don't have to. :-)

  • by gwiner ( 685297 ) on Saturday July 31, 2004 @08:28PM (#9855236)
    I have had many years of experience in QA departments over my career. My observation is that it is difficult to attract good talent to a QA department. Many developers and technically inclined folks see QA as menial labor. This mentality misses the value-add and complexities of a true QA department/function. Ideally, you would hire dedicated and technically experienced individuals who:
    • Can analyze requirements into test plans (not by following the programming logic, but by following the business logic)
    • Understand the application architecture and environments, so they can design tests to get at those components and risks
    • Develop automation tools, test harnesses, and test data loaders

    You really want the technical expertise in your QA department to think about certifying, or trying to break, your application from a different perspective. When developers guide test plan development too closely, QA can never really be sure they are getting the best test coverage. Do these skills sound like a "college entry level job"? I think not. Companies that hire inexperienced QA analysts are missing the real benefits of an objective QA department.

    Depending on the size of the organization, it is helpful to have the QA department report into a centralized org structure, like the PMO (Project Management Office), or have policies requiring "hard signoff" on quality from QA. This allows them a level of objectivity and the ability to ensure the quality of the product. I challenge developers to think of QA differently than in the past. Look for talented, independent technical professionals in your QA department, and you will truly assure quality.
  • by mewphobia ( 630153 ) on Saturday July 31, 2004 @10:42PM (#9855742) Homepage

    This is one part of extreme programming I like very much. The idea is to write the test cases before you write the software. That way, you're testing to specification, not implementation.

    Even in the best case, automation scripts go out of date very quickly. And, running old scripts over and over again seldom finds any bugs.

    To this I must respectfully disagree. In small(er) projects it might be closer to the truth, but from my experience regression testing is vital. Regression testing is mainly useful when requirement specifications change and features creep in. Someone will be hacking at code somewhere to add a feature, without thinking about the implications. From my experience, people are always touching old functions! I always mandate automated regression testing on every project I've worked on with more than 4 people on it.

    On a side note, I think regression testing in open source projects is even more important! Open source projects are, by their very nature, hackish. People are constantly rewriting functions to do what they were never intended to. I'd love to see a good automated regression testing framework for new open source projects.
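
    One way such a mandate might look in practice, sketched in JUnit 3 style (the member test classes are hypothetical examples, like the ones earlier in this thread): bundle every pinned regression test into a single suite that can run unattended on each build.

        import junit.framework.Test;
        import junit.framework.TestSuite;

        // Single entry point for the whole regression suite, so every build
        // can run it unattended. The member classes are hypothetical.
        public class RegressionSuite {
            public static Test suite() {
                TestSuite suite = new TestSuite("Nightly regression suite");
                suite.addTestSuite(Bug1042RegressionTest.class);
                suite.addTestSuite(BoundedStackTest.class);
                return suite;
            }
        }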

  • by 2Paranoid ( 713951 ) on Sunday August 01, 2004 @02:04AM (#9856484)

    Automated testing tools are best suited for regression testing. Regression testing is the set of test cases that are performed over and over again with each release. Its main function is to make sure that the new features did not break anything that was not supposed to change.

    Our test group uses a product called Hammer (sorry, but I don't know where they got it or how much they paid for it) for their regression testing. Hammer has its own scripting language (may be VB based) and its own database that is used to hold test case data. For example, the test case may require that 1000 (different) customer accounts can be created within 60 seconds.

    I don't know much more than that about Hammer; I just design, write, and unit test the software. The test group feature, system, and regression tests it, plus they coordinate the beta testing that is done by the trainers and a small group of actual users. The users must "sign off" on new software before it can go into Production.

    And I agree that there is nothing like having "actual" users doing the testing for finding bugs and for providing feedback.
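
    Hammer's own scripting language isn't shown in the post, but the timed test case described (1000 accounts within 60 seconds) might look roughly like this in plain JUnit 3-style Java; AccountService is a hypothetical client for the system under test.

        import junit.framework.TestCase;

        // Timed bulk-creation check: 1000 accounts must be created within
        // 60 seconds, or the test fails with the measured elapsed time.
        public class AccountCreationLoadTest extends TestCase {

            public void testCreate1000AccountsWithin60Seconds() {
                AccountService service = AccountService.connect("test-server");
                long start = System.currentTimeMillis();
                for (int i = 0; i < 1000; i++) {
                    service.createAccount("customer" + i);
                }
                long elapsed = System.currentTimeMillis() - start;
                assertTrue("took " + elapsed + " ms", elapsed <= 60000);
            }
        }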

  • by cetialphav ( 246516 ) on Sunday August 01, 2004 @03:29AM (#9856681)
    Based on my experience, I have to agree with this. I think the proportion has declined a bit with time, but it is still close to half the time.

    Of course, this usually isn't on the schedule. Management's view is that if you spent 6 weeks testing and few bugs were found, then the time was wasted and the product could have shipped out earlier.

    But regardless of the schedule, the test time that Brooks describes will get spent anyway. Often that time is spent on repeated testing as a result of bug fixes. Last-minute bugs end up causing schedule slips, and those slips are basically all testing, retesting, and debugging. The sooner tests are done, the sooner and cheaper bugs can be found and fixed.

    But as you said we still ignore this (or rather we cross our fingers and hope it won't happen to us this time). Just a couple months ago on my current project, my boss told me that it had been decided that "we didn't have time for integration testing".

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...