Automated Software QA/Testing?

nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing or QA. I don't mind standalone testing of components, since usually you create a separate program for this purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't). My question is how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"
  • Manual Testing (Score:4, Interesting)

    by Egonis ( 155154 ) on Saturday July 31, 2004 @02:26PM (#9853404)
    Although I haven't personally used many testing tools:

    I created an enterprise application composed of client/server apps -- the best test was a small deployment of the application to users who were apt to help conduct the test. Over a few weeks, I found bugs I never caught with my own manual tests.

    Applications that test your code, etc. are great from what I have heard, but will not catch human-interface-related issues, e.g. GUI mess-ups, invisible buttons, etc.
  • Test Matrix (Score:5, Interesting)

    by Pedro Picasso ( 1727 ) on Saturday July 31, 2004 @02:30PM (#9853428) Homepage Journal
    I've used auto-test thingies, ones that I've written, and packaged ones. Some situations call for them. Most of the time, though, it's just a matter of doing it by hand. Here's what I do.

    Create a list of inputs that includes two or three normal cases as well as the least input and the most input (boundaries). Then make a list of states the application can be in when you put these values into it. Then create a grid with inputs as X and states as Y. Print your grid and have it handy as you run through the tests. As each test works, pencil in a check mark in the appropriate place.
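    (A minimal sketch of building such a grid, in Python purely for illustration; the inputs and states below are made-up placeholders, not from any real application:)

    from itertools import product

    # Hypothetical axes: a few normal cases plus the boundaries, and the
    # states the application can be in when each input arrives.
    inputs = ["empty", "typical A", "typical B", "smallest allowed", "largest allowed"]
    states = ["logged out", "logged in", "mid-transaction"]

    # Print a checklist grid: inputs down the side, states across the top.
    print(f"{'':22}" + "".join(f"{s:>18}" for s in states))
    for i in inputs:
        print(f"{i:22}" + "".join(f"{'[ ]':>18}" for _ in states))

    # Every (input, state) pair is one cell to pencil off as you test.
    print(len(list(product(inputs, states))), "cells to verify")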

    Now that you've automated the system to the point where you don't need higher brain functions for it, get an account on http://www.audible.com, buy an audio book, and listen to it while you run through your grid. It still takes a long time, but your brain doesn't have to be around for it.

    This is going to sound incredibly elementary to people who already have test methodologies in place, but when you need to be thorough, nothing beats an old fashioned test matrix. And audiobooks are a gift from God.

    (I'm not affiliated with Audible, I just like their service. I'm currently listening to _Stranger in a Strange Land_ unabridged. Fun stuff.)
  • by Anonymous Coward on Saturday July 31, 2004 @02:32PM (#9853442)
    How about we read books on the subject written by software engineering researchers and not programming language researchers? See the Dynamic Analysis lecture notes. [mit.edu]
  • by William Tanksley ( 1752 ) on Saturday July 31, 2004 @03:09PM (#9853629)
    You need two things: first, people who are dedicated to testing and aren't concerned merely with upholding their pride in the code they wrote (this is a long way of saying that you need a dedicated testing team that doesn't report to the coding team); and second, testable code. The best way to get the second item, in my experience, is to have your developers write their automated unit tests BEFORE they write the unit they're developing.

    This is generally called "test-first" development. If you follow it, you'll find some nice characteristics:

    1. Each unit will be easily testable.
    2. Each unit will be coherent, since it's easier to test something that only does one thing.
    3. Units will have light coupling, since it's easier to express a test for something that depends only lightly on everything else.
    4. User interface layers will be thin, since it's hard to automatically test a UI.
    5. Programmers will tend to enjoy writing tests a bit more, since the tests now tell them when they're done with their code, rather than merely telling them that their code is still wrong.

    You can go a step further than this, and in addition to writing your tests before you write your code, you can even write your tests as you write your design. If you do this, your design will mutate to meet the needs of testing, which means all of the above advantages will apply to your large-scale design as well as your small-scale units. But in order to do this you have to be willing and able to code while you're designing, and many developers seem unwilling to combine the two activities in spite of the advantages.
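    (As a rough illustration of the test-first rhythm, in Python rather than any particular shop's stack, and with invented names: the tests below are written first and fail until shipping_cost() exists and behaves.)

    import unittest

    # The unit under test. In test-first style this function is written only
    # after the tests below exist and fail; the pricing rule is invented for
    # the example.
    def shipping_cost(weight_kg):
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.00 if weight_kg <= 1 else 5.00 + 2.50 * (weight_kg - 1)

    class ShippingCostTest(unittest.TestCase):
        def test_minimum_charge(self):
            self.assertEqual(shipping_cost(0.5), 5.00)

        def test_per_kg_surcharge(self):
            self.assertAlmostEqual(shipping_cost(3), 10.00)

        def test_rejects_nonpositive_weight(self):
            with self.assertRaises(ValueError):
                shipping_cost(0)

    if __name__ == "__main__":
        unittest.main()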

    -Billy
  • Testing (Score:5, Interesting)

    by Jerf ( 17166 ) on Saturday July 31, 2004 @03:16PM (#9853661) Journal
    Lemma One: It is impossible to comprehensively test the entire system manually.

    Proof: For any significantly sized system, take a look at all the independent axes it has. For instance: the set of actions the user can take, the types of nouns the user can manipulate, the types of permissions the user can have, the number of environments the user may be in, etc. Even for a really simple program, that is typically at least 5 actions, 20 nouns, (let's estimate a minimal) 3 permission sets (no permission for the data, read only, read & write), and well in excess of 5 different environments (you need only count relevant differences, but this includes missing library A, missing library B, etc.). Even for this simple, simple program, that's 5*20*3*5, which is 1,500 scenarios, and you can never be sure that one of those won't fail in a bad way.

    Even at one minute a test, that's 25 hours, which is most of a person-week.
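    (The arithmetic is trivial to reproduce; a throwaway Python sketch with the dimension sizes estimated above:)

    from itertools import product

    # Placeholder axes sized as in the estimate: 5 actions, 20 nouns,
    # 3 permission sets, 5 environments.
    actions      = [f"action{i}" for i in range(5)]
    nouns        = [f"noun{i}" for i in range(20)]
    permissions  = ["none", "read-only", "read-write"]
    environments = [f"env{i}" for i in range(5)]

    scenarios = list(product(actions, nouns, permissions, environments))
    print(len(scenarios))             # 1500 scenarios
    print(len(scenarios) / 60, "hours at one minute per manual test")  # 25.0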

    Thus, if you tested an enterprise-class system for three days, you did little more than scratch the surface. Now, the "light at the end of the tunnel" is that most systems are not used equally across all of their theoretical capabilities, so you may well have covered 90%+ of the actual use, but only a vanishing fraction of the system's possible use cases. Nevertheless, as the system grows, it rapidly becomes harder to test even 90%.

    (The most common error here is probably missing an environment change, since almost by definition you tested with only one environment.)

    Bear in mind that such testing is still useful, as a final sanity check, but it is not sufficient. (I've seen a couple of comments that say part of the value of such testing is getting usability feedback; that really ought to be a separate case, both because the tests you ought to design for usability are separate, and because once someone has functionally tested the system they have become spoiled with pre-conceived notions, but that is better than nothing.)

    How do you attack this problem? (Bear in mind almost nobody is doing this right today.)
    1. Design for testability, generally via Unit Tests. There are architectures that make such testing easy, and some that make it impossible. It takes experience to know which is which. Generally, nearly everything can be tested, with the possible exception of GUIs, which actually provide a great example of my "architecture" point.

      Why can't you test GUIs? In my experience, it boils down to two major failings shared by nearly all toolkits:

      1. You cannot insert an event, such as "pressing the letter X", into the toolkit programmatically and have it behave exactly as it would if the user did it. (Of the two, this is the more important.)
      2. You cannot drive the GUI programmatically without its event loop running, which is what you really want for testing. (You need to call the event engine as needed to process your inserted events, but you want your code to be in control, not the GUI framework.) One notable counterexample here is Tk, which, at least in the guise of Python/Tk, I've gotten to work for testing without the event loop running, which has been great. (This could be hacked around with some clever threading work, but without the first condition it isn't worth much.)

      The GUIs have chosen an architecture that is not conducive to testing; they require their own loop to be running, they don't allow you to drive them programmatically, they are designed for use, not testing. When you find a GUI that has an architecture at least partially conducive to testing, suddenly, lo, you can do some automated tests.
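      (For what it's worth, the Python/Tk case looks roughly like the sketch below; it assumes Tk is installed and a display is available, and exact event delivery can vary by platform:)

      import tkinter as tk

      pressed = []

      root = tk.Tk()
      entry = tk.Entry(root)
      entry.pack()
      entry.bind("<Key-x>", lambda e: pressed.append(e.keysym))

      entry.focus_force()
      root.update()                    # process pending events once, under our control
      entry.event_generate("<Key-x>")  # inject the keypress programmatically
      root.update()                    # let the binding fire -- no mainloop() needed

      assert pressed == ["x"], pressed
      root.destroy()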

      And in my case, I am talking serious testing that concerns things central to the use of my program. I counted 112 distinct programmatic paths that can be taken when the user presses the "down" key in my outliner. I was able to write a relatively concise test to cover all cases. Yes, code I thought was pretty good turned out to fail two specific cases (

  • by Kristoffer Lunden ( 800757 ) on Saturday July 31, 2004 @04:03PM (#9853936) Homepage
    More importantly, you tend to excuse and work around your own quirks and bugs. If you know the application will crash if you do certain things, you tend to avoid that - often using clunky workarounds instead.

    This may sound very strange to some; why not fix the problem instead of working around it all the time? But the reality is that this behaviour is really, really common. I've seen it so many times when someone is sitting down to test something and the developer is standing over their shoulder saying "no, not like that, you have to go *there* first and then click *that* button" when it was obvious from the GUI layout that that other button should be clicked and just work.

    And yes, guilty as charged myself - it is something of a subconscious thing, mostly. Having someone without emotional attachments to the application in question is imperative - as is not allowing people who do have attachments to be present when tests are conducted. It will almost inevitably end in a big excuse- and workaround fest where the developers take over the show.

    This is of course for stuff that needs to be tested and evaluated manually. Automated tests such as unit tests and the like are another story altogether (and great, great tools when used seriously). Anything you do more than twice should be automated, and test suites are great not so much for ensuring quality (though they do that too) as they are for *maintaining* quality.
  • Tools (Score:1, Interesting)

    by Anonymous Coward on Saturday July 31, 2004 @04:55PM (#9854217)
    I have used a few tools, and like anything else it comes down to your proficiency with the language.

    WinRunner http://www.mercury.com/ [mercury.com] = Gotta know C++, and you'd better not be looking at modern tech (.NET, Java, etc.); plus it requires add-ins

    Quick Test Pro http://www.mercury.com/ [mercury.com] = Expensive, but deals with modern tech if you buy the extension and know VB; needs add-ins

    OpenSource = OK for light stuff

    Rational http://www.rational.com/ [rational.com] = Very confusing

    SmarteScript http://www.accordsqa.com/ [accordsqa.com] = easy to use and doesn't require programming experience; new to market; no need for add-ins

    compuware http://www.compuware.com/ [compuware.com] = have to hire them as consultants

    just a few views.
  • by alanp ( 179536 ) on Saturday July 31, 2004 @07:23PM (#9854979) Homepage
    I've been in QA for a number of companies and have found one consistent problem... re-invention of the wheel...
    I attribute this to four things

    1) Commercial tools are over complicated and not very good
    2) There's no tool that exists to do X Y Z
    3) Our software can't be automatically tested !
    4) A lot of time is taken up with reporting !

    1).. Totally agree... have you ever seen Test Director? It's a nightmare to use, takes ages to import testcases, and its automation interface sucks... plus it costs a FORTUNE.
    Consequently people end up storing testcases in Excel sheets, Word docs or spreadsheets...

    2) In some cases that's true, but in most it's not... I've seen companies write their own test environments that address problems that have been addressed (often much better) millions of times before... I'm talking automating starting/stopping processes and test scripts, reporting, logging, seeding with test data, etc... ALL COMMON problems...
    The answer to this is SIMPLE
    Take a look at STAF (Software Testing Automation Framework) by IBM. It's free and GPL. It gives a common framework for writing test scripts and applications in Java/Perl/Python/C and a few other languages. It consists of a running STAF process, a set of APIs, and a command-line interface. Services exist to start/stop processes on remote boxes, transfer files, global variables, semaphores, logging... in fact EVERYTHING you would want to do in automation.
    Take a look at
    http://staf.sourceforge.net/ [sourceforge.net]
    This tool saves many man-months of coding in a test environment and gives a consistent way of doing things. No more tester X has this cool script, tester Y has this, dept Z does this....
    If you are a tester you need to look at this !

    3) This argument normally doesn't hold water anymore... most things can be automatically tested. The question is IS IT PRACTICAL ? If you are only going to test a product once (think end customer doing acceptance testing on your product) then often it's not.
    However if you are a software house, then there is no excuse for not doing automated testing, especially when you've got access to source code. You can also buy lots of expensive analysers and script generators for stimulating systems under test.

    4) Reporting... yes, management wants to know how you are getting on... A lot of companies do this by email... Each QA person has to send an email every day detailing what they've tested, problems they've had, progress, etc.
    This is UNNECESSARY. This is the perfect candidate for automation.
    An analyst's time should not be taken up with this crap. They should enter this info into a centralised test management system, and management should be able to query and manage the test cycle using this tool. Again, Test Director tries to do this, but it's TOO complicated.

    Now for the shameless Plug....
    I have written a GPL testcase management system called Testmaster which does most of the above and integrates with STAF, allowing test scripts access to the testcase database via STAF.
    It provides a web front end which holds all testcases and procedures for multiple projects and departments, imports testcases from Word docs, CSV files or directly into the DB, and is the primary interface for everyone involved in the QA process.
    So now you have your test scripts running, automatically marking tests as passed/failed etc., automated emails going off to management on progress, a web-based system for testers and managers to use, and the ability with STAF to stop/start/pause testing at the click of a button on multiple remote systems.
    It's easy to use, free, under the GPL, everything is held in a database, and it runs on almost any Unix system...

    Take a look at
    http://testmaster.sourceforge.net/ [sourceforge.net]

    and see if it will make your job easier.
    End shameless plug
  • Re:You're not alone. (Score:3, Interesting)

    by anonymous cowherd (m ( 783253 ) on Saturday July 31, 2004 @07:52PM (#9855090) Homepage
    Heh. I'm a software tester, and my boss, who actually is a Native American, put it to us this way: You guys are the cavalry at Little Bighorn, and the whole rest of IT is the Indians.

    Basically, what he was trying to get across to us was that as long as we can keep the BS somewhat at bay, we can do our jobs, but the second it gets out of hand, 2000 Indians are gonna be on your ass.

    To be fair, not all the development guys are like that, but some of them are.

  • by cetialphav ( 246516 ) on Sunday August 01, 2004 @03:04AM (#9856613)
    Testing is not a job that exercises a lot of creativity, unlike development.

    I have to take exception to this statement. I am a full-time tester, but I am also a very good programmer. Testing of large complex systems absolutely requires creativity. It may be that my problem domain is different from that of most other testers. I work with complex telecomm equipment (e.g. systems with 1200+ separate CPUs with full redundancy carrying terabytes of traffic). Most of our developers don't even understand how the system works; they only know bits and pieces.

    But obviously, I don't test GUIs. I'm testing traffic and the automated interfaces that these boxes provide. Maybe testing GUIs is boring, but that doesn't extend to all testing jobs.

    But you are right about automation. It is essential. I maintain the test automation tool that we use internally and it is worth its weight in gold. If I were testing GUIs, I'd be looking for a way to test the back-end logic separately from the GUI. That means automation hooks in the guts of the application. That way when testing the GUI, you know that you can focus on the presentation of the data because the accuracy of the data itself was already verified. Of course, there are always factors that make things more complex than this.
  • by faster ( 21765 ) on Sunday August 01, 2004 @04:29AM (#9856783)
    Testing is not a job that exercises a lot of creativity, unlike development.

    Thanks for the laugh.

    I guess it depends on what you're testing, and whether you hire QA people or testers.

    Testers generally run other people's test plans, but who wrote that test plan, and did they have a spec, or just a few conversations with the developers? I've only had one contract where the test plans I wrote were based on a complete specification. I've never had a job where we got even 80% of a spec before the test plans were due. And I have worked for two software companies that are in the top 10 in the industry.

    So you, the arrogant developer who knows SO much more than anyone in the QA department could possibly know, create features out of your ass (or your team lead's ass), and we lowly uncreative QA folks have to try to keep up. Oh, and when you're late (note that I didn't say 'if you're late'; more often than not, there's no 'if'), whose schedule gets compressed to make the ship date so the product manager can get his bonus for shipping on time: development's, or QA's?

    Thank you soooo much for stooping to build testability into your code, or to provide test tools so that we can make good use of the two weeks we get to test your million lines of code.

    But I'm not bitter. Oh no.
  • 10 years of QA... (Score:2, Interesting)

    by drdreff ( 715277 ) on Sunday August 01, 2004 @04:54AM (#9856823) Homepage Journal
    ...has shown me that most of the commercial tools available are of extremely limited use. If you're working in any cutting-edge environment or pushing existing technology to its limits, the tools will not be able to help you.

    Home-grown test suites built to suit the job at hand have always turned out to be the best testing solution. Automation is the only way to keep up with the test effort required for a large project, especially when QA is seen as an afterthought or a barrier to release. Automation will keep the code from regressing and serves as a great check for problems during large merges of patches.

    Load test tools are invaluable for verifying the stability of a system. The quality of these tools varies greatly, but things like Apache Flood and httpload have done wonders for stressing web services.
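    (For the simplest cases the core idea fits in a few lines; a hypothetical Python sketch using only the standard library -- the URL and numbers are made up, and real tools like the ones above do far more:)

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL, WORKERS, REQUESTS = "http://localhost:8080/health", 20, 200  # made-up target

    def one_request(_):
        start = time.monotonic()
        try:
            with urlopen(URL, timeout=10) as resp:
                resp.read()
            return time.monotonic() - start, None
        except Exception as exc:       # urlopen raises on HTTP errors and timeouts
            return time.monotonic() - start, exc

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(one_request, range(REQUESTS)))

    latencies = sorted(t for t, err in results if err is None)
    errors = sum(1 for _, err in results if err is not None)
    if latencies:
        print(f"median {latencies[len(latencies) // 2] * 1000:.1f} ms, "
              f"p95 {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
    print(f"{errors} failed requests")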

    FWIW YMMV
  • by SKarg ( 517167 ) on Sunday August 01, 2004 @05:07AM (#9856847) Homepage
    Software Development Magazine [sdmagazine.com] had an article about test-first programming [sdmagazine.com] of a GUI application that describes a method of incorporating automated testing into the application.

    The example GUI application is a simple game, but the methodology could be used for any GUI application.

    From the article:
    The big question was how to handle all the stuff that would normally be considered GUI code -- specifically, the hit-testing: Where do users click, and what should happen when they click there?


    I suspected that hit-testing would be the most complex part of the application, so I wanted to ensure that I had tests for it. It also didn't belong in the model, since that should contain only basic game concepts. I needed an intermediary class somewhere: The Swing code would get a mouse event and delegate it to this intermediary class. The intermediary would interpret the mouse coordinates and determine if something had to happen. It would possibly do this by interacting with the model. If anything did happen, an event would be broadcast back to the Swing class.

    The author goes on to discover, after searching with Google [google.com], that he had developed "a pattern known as Model-View-Presenter [object-arts.com] (MVP), a variant of Model-View-Controller (MVC)."

    In MVP, the View actually serves as both the view and controller (presenting output and managing input). The Model is the Model, as in MVC. The extra middle layer is considered a bridge between the View and the Model. It's specific to the application and is often considered throwaway if the View needs to change (for example) from Swing to HTML. In this situation, the Model would remain unchanged.
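    (A bare-bones sketch of that shape, in Python rather than the article's Java/Swing, with invented class names: the presenter owns the hit-testing and can be unit-tested with no GUI toolkit loaded at all.)

    # Hypothetical MVP sketch: the Presenter does the coordinate-to-cell
    # hit-testing against the Model; the View only forwards raw clicks and redraws.
    class Model:
        def __init__(self):
            self.cells = set()            # occupied (col, row) cells

        def toggle(self, cell):
            self.cells ^= {cell}

    class Presenter:
        CELL = 20                         # pixels per board cell (made up)

        def __init__(self, model, view):
            self.model, self.view = model, view

        def mouse_clicked(self, x, y):    # called by the View's mouse handler
            cell = (x // self.CELL, y // self.CELL)
            self.model.toggle(cell)
            self.view.redraw(self.model.cells)

    class FakeView:                       # stands in for the Swing/Tk view in tests
        def __init__(self):
            self.drawn = None

        def redraw(self, cells):
            self.drawn = set(cells)

    # A GUI-free test of the hit-testing:
    view = FakeView()
    presenter = Presenter(Model(), view)
    presenter.mouse_clicked(45, 5)        # lands in cell (2, 0)
    assert view.drawn == {(2, 0)}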

    Martin Fowler [martinfowler.com] has a good description of how the Model View Presenter [martinfowler.com] works.

    The example source code [sdmagazine.com] is available on the site. It uses JUnit [junit.org], an automated unit-testing framework for Java.

    Software Development Magazine is a magazine targeted at software developers, and there is no charge to subscribe to it [sdmagazine.com].
  • by DragonHawk ( 21256 ) on Sunday August 01, 2004 @09:36AM (#9857390) Homepage Journal
    Me: "Just like we had the state of software back in 1956 when he wrote the book."

    AC: "It was actually 1975. But your point still holds true."

    That was a deliberate over-simplification. I should have known somebody here would call me on it. Full version:

    The original edition of TMMM was copyright in 1975, yes. However, Brooks based his writings on his professional experience. He was working on projects relating to IBM's mainframe computer products in 1956. He became manager of the OS/360 project in 1964. He actually wrote most of the essays in TMMM from 1965 to 1974. I don't have exact dates for original publication, but I know some or all of the essays (including the "The Mythical Man-Month") were published separately, albeit in much less widely distributed media, prior to the book.

    Then, of course, in the "20th Anniversary" Edition, we have a renewed copyright of 1995, an essay "No Silver Bullet" first published in 1986, and a retrospective chapter written in 1994.

    I, of course, used the 1956 date because it sounds the most impressive. However, I honestly believe it is a valid "starting date" for the field of personal experience on which Brooks bases his essays.
