Automated Software QA/Testing?
nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing, or QA.
I don't mind standalone testing of components, since you usually create a separate program for that purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't).
My question is: how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way (manually operating the UI, going through the data to check every transaction, etc.)?"
You're not alone. (Score:4, Funny)
Laugh, it's good for you.
Re:You're not alone. (Score:3, Interesting)
Basically, what he was trying to get across to us was that as long as we can keep the BS somewhat at bay, we can do our jobs, but the second it gets out of hand, 2000 Indians are gonna be on your ass.
To be fair, not all the development guys are like that, but some of them are.
Manual Testing (Score:4, Interesting)
I created an Enterprise Application composed of Client/Server Apps. The best test was a small deployment of the Application to users who are apt to help you conduct the test; over a few weeks, I found bugs I never caught with my own manual tests.
Applications that test your code and so on are great from what I have heard, but they will not catch human-interface issues: GUI mess-ups, invisible buttons, etc.
Re:Manual Testing (Score:4, Insightful)
However, if you go into beta testing too early, then major basic features will be broken from time to time, and you'll only irritate your testers and make them stop helping you. This is where automated tests shine, because they help you ensure that major changes to the code have not broken anything.
Put another way, automated tests can verify compliance with a specification or design. User testing can verify compliance with actual needs. Neither can replace the other.
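To make the distinction concrete, here is a minimal sketch of a spec-compliance check using Python's unittest; the order_total rule is invented for illustration, not taken from any poster's system. An automated test like this proves the code matches the written rule on every build, but only a user can tell you whether the rule itself is what they need.

    import unittest

    # Hypothetical spec rule: totals are the sum of the line items, with a
    # 5% discount at or above $1000. Both the rule and the names are invented.
    def order_total(line_items):
        subtotal = sum(line_items)
        return subtotal * 0.95 if subtotal >= 1000 else subtotal

    class OrderTotalSpec(unittest.TestCase):
        def test_plain_sum_below_discount_threshold(self):
            self.assertEqual(order_total([200, 300]), 500)

        def test_discount_applied_at_threshold(self):
            self.assertEqual(order_total([600, 400]), 950.0)

    if __name__ == "__main__":
        unittest.main()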
Touchscreen voting (Score:2)
Since in practice you won't be able to review
Re:Touchscreen voting (Score:2)
First, you have to come up with a general plan.
- What exactly do you want to test - and to what degree of certainty
- What are your priorities - here I would say that the error rate must be less than 1%
- What are your resources (i.e. how many testers, their qualifications)
- What is the timeline
- What is the estimated cost (and ultimately, what is the real profit from this stuff?)
Then you start to develop a small test plan. First, you need a precise behavioral m
Re:Touchscreen voting (Score:2)
Set up small-scale voting stations in department stores, youth centres, community centres, restaurants, and libraries for such things as:
- New services: which are most popular?
- Materials for libraries: what would be used most frequently?
- Most popular items on a menu
When done across the board, this is a nice way to get free testing, and to please many different agencies and companies by providing this free service, thus improving the image of the voting system.
nothing to see here, move along. (Score:3, Insightful)
I'll stick with The Practice of Programming [bell-labs.com]. At the very least I trust the people who wrote it to have better judgement.
Re:nothing to see here, move along. (Score:3, Interesting)
You shouldn't be doing it (Score:5, Insightful)
Re:You shouldn't be doing it (Score:5, Informative)
I once worked with a dyslexic drafer. He generally did very good work, except that his drawings often had spelling errors. When he looked at his own drawings, the errors were not obvious to him like they were to everyone else. Most people don't have such a pronounced systematic tendency to make some particular error. But we all occasionally make mistakes in which we were just thinking about something wrong. And those mistakes are invisible to us because we're still thinking the same wrong way when we review it. So having work checked by someone other than the one who created it is a good practice for just about any endeavor, not just software.
Re:You shouldn't be doing it (Score:3, Funny)
Imagine that. A "drafer" who makes spelling errors.
Re:You shouldn't be doing it (Score:2)
Re:You shouldn't be doing it (Score:2, Interesting)
This may sound very strange to some; why not fix the problem instead of working around it all the time? But the reality is that this behaviour is really, really common. I've seen it so many times when someone is sitting down to test something and the developer is standing over the shoulder saying "no,
Re:You shouldn't be doing it (Score:2)
Interesting point. That's one of my favorite arguments in the CLI vs. GUI debates. Writing instructions on how to do something in a console environment takes about the same work as writing a script that performs the job, and once you've written the script, nobody needs to do the work by hand anymore.
Re:You shouldn't be doing it (Score:2)
I would qualify that statement by saying that you shouldn't be the only one to test your own system. Development should do unit testing. QA should do functional testing.
The sooner you test, the sooner you find bugs. The more experience you have (with testing and with the product), the more productive you can be. End user testing is great, but it's only one part of the process.
Re:You shouldn't be doing it (Score:2, Informative)
Re:You shouldn't be doing it (Score:2, Informative)
Elements that are going to be, or may be, further developed will negate any benefits achievable from test automation.
The "Industry Standard" tools such as Mercury's, Compuware's or IBM Rational's test automation suites require a significant amount of coding effort themselves.
"Simple" design changes can blow away weeks of scripting.
Automating stable components is the best way to go. As each component is completed and stable it can be automated for regression testin
Re:You shouldn't be doing it (Score:2)
What black-box automation needs (black box: where the testers are not contaminated by the code, and cannot contaminate the code) is a way to inject events into the device under test (DUT) and a way to observe and record repeatable results.
For complex embedded devices that communicate with other things (example cellphones, or
Re:You shouldn't be doing it (Score:2, Interesting)
I have to take exception to this statement. I am a full-time tester, but I am also a very good programmer. Testing of large complex systems absolutely requires creativity. It may be that my problem domain is different than most other testers. I work with complex telecomm equipment (e.g. systems with 1200+ separate CPUs with full redundancy carrying terabytes of traffic). Most of our developers don't even understand how the sy
Re:You shouldn't be doing it (Score:3, Interesting)
Thanks for the laugh.
I guess it depends on what you're testing, and whether you hire QA people or testers.
Testers generally run other people's test plans, but who wrote that test plan, and did they have a spec, or just a few conversations with the developers? I've only had one contract where the test plans I wrote were based on a complete specification. I've never had a job where we got even 80% of a spec before the test plan
Re:You shouldn't be doing it (Score:2)
The only company I've ever run across that ever did testing well was Data General. I was on their team that was doing an audit of the entire C standard library
Re:You shouldn't be doing it (Score:3, Insightful)
As a former professional software tester ... (Score:5, Insightful)
Testing goes far beyond what any automated system can test, if you have a user in there somewhere. You also need to check things like "How easy is it to use?" and "Does this feature make sense?". We also suggested features that the program did not have but, from our experience using it, we thought it should have.
The Mythical Man-Month (Score:5, Insightful)
Of course, nobody wants to do that, because it's expensive and/or boring. Thus we have the state of software today. Just like we had the state of software back in 1956 when he wrote the book.
It never ceases to amaze me that we're still making the same exact mistakes, 50 years later. If you work in software engineering, and you haven't read The Mythical Man-Month, you *need* to. Period. Go do it right now, before you write another line of code.
Re:The Mythical Man-Month (Score:2, Insightful)
Just as there is a creative rush in building a working software system out of the ether, there is an equal rush and creative element in software testing.
Testers and developers think differently but have the same purpose in mind. At the end of the day, both want the best possible product to be delivered.
I suggest signing up to StickyMinds [stickyminds.com] as a good place to start.
Re:The Mythical Man-Month (Score:2, Insightful)
Of course, this usually isn't on the schedule. Management's view is that if you spent 6 weeks testing and few bugs were found, then the time was wasted and the product could have shipped out earlier.
But regardless of the schedule, the test time that Brooks states will get spent. Often that time is spent on repeated testing as a result of bug fixes. Last minute
Mythical Man-Month dates (Score:2, Interesting)
AC: "It was actually 1975. But your point still holds true."
That was a deliberate over-simplification. I should have known somebody here would call me on it. Full version:
The original edition of TMMM was copyright in 1975, yes. However, Brooks based his writings on his professional experience. He was working on projects relating to IBM's mainframe computer products in 1956. He became manager of the OS/360 project in 19
Re:As a former professional software tester ... (Score:2, Insightful)
Automated testing tools are best suited for regression testing. Regression testing is the set of test cases that are performed over and over again with each release. Its main function is to make sure that the new features did not break anything that was not supposed to change.
Our test group uses a product called Hammer (sorry, but I don't know where they got it or how much they paid for it) for their regression testing. Hammer has its own scripting language (maybe VB-based) and its own database that is used
Testing is fun too. It is MEETINGS that suck. (Score:2, Insightful)
QA is a separate function (Score:5, Insightful)
Re:QA is a separate function (Score:2)
Re:QA is a separate function (Score:2)
In actual practice, he got it right by forgetting the "not".
I usually present the QA folks with lots of test code. This makes them my friends, since they are usually under pressure to get the product out the door and don't have time to think up all the tests themselves. I don't either, of course, and I can't ever give them something complete. And I have the usual developer's problem of not being permitted any contact with actual user
Re:QA is a separate function (Score:2)
Your management has their hearts in the right place. The problem with developers providing the QC of their own code is that they may miss the same problems in their test code as they did in development.
I think of the system testing at my company as comprising two main activities: integration testing and feature testing. I can write
Re:QA is a separate function (Score:2, Insightful)
A developer has some responsibility to ensure their code at least functions in the context of the overall application before making it my problem. Just because it compiles does not mean it is done.
Testing (Score:4, Informative)
bhj
Re:Testing (Score:2, Insightful)
You do provide complete and accurate TDS (Technical Design Specifications) for architectural details and FDS (Functional Design Specifications) for system operation, don't you?
Test Matrix (Score:5, Interesting)
Create a list of inputs that includes two or three normal cases as well as the least input and the most input (the boundaries). Then make a list of states the application can be in when you feed it these values. Then create a grid with inputs as X and states as Y. Print your grid and have it handy as you run through the tests. As each test passes, pencil a check mark into the appropriate cell.
Now that you've automated the system to the point where you don't need higher brain functions for it, get an account on http://www.audible.com, buy an audio book, and listen to it while you run through your grid. It still takes a long time, but your brain doesn't have to be around for it.
This is going to sound incredibly elementary to people who already have test methodologies in place, but when you need to be thorough, nothing beats an old fashioned test matrix. And audiobooks are a gift from God.
(I'm not affiliated with Audible, I just like their service. I'm currently listening to _Stranger in a Strange Land_ unabridged. Fun stuff.)
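For those who prefer code to graph paper, the same grid can be generated in a few lines of Python; the inputs and states below are placeholders, not from the post.

    from itertools import product

    # Placeholder axes -- substitute your own boundary inputs and app states.
    inputs = ["empty", "typical-1", "typical-2", "minimum", "maximum"]
    states = ["logged-out", "logged-in", "admin", "read-only"]

    def print_matrix(inputs, states):
        """Print an inputs-by-states checklist to work through by hand."""
        width = max(len(i) for i in inputs)
        print(" " * width, *states)
        for inp in inputs:
            print(inp.ljust(width), *("[ ]".center(len(s)) for s in states))

    print_matrix(inputs, states)
    print(len(list(product(inputs, states))), "cells to check off")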
Re:Test Matrix (Score:2)
Automation is difficult (Score:5, Informative)
From my personal experience at one of the world's largest software companies, automation maintenance for even a small suite (200-300 tests, 10 dedicated machines) is a full-time job. That means one person's entire responsibility is making sure the machines are running, making sure the tests aren't returning passes and fails for reasons other than actually running the tests, and changing the automation whenever either the hardware or the software changes. This person must know the automation suite, as well as the tests being performed, intimately, and must also be willing to spend his days being a lab jockey. It's usually pretty difficult to find these people.
My point here is that even after spending many dev or test hours developing automation, it is in no way suddenly complete. There is no magic bullet to replace a human tester; the only thing you can do is try to improve his productivity by giving him better tools.
-tsf
Re:Automation is difficult (Score:5, Insightful)
Even in the best case, automation scripts go out of date very quickly. And, running old scripts over and over again seldom finds any bugs. Usually nobody is touching the old functions anyway, so regression testing is largely pointless (take a lude, of course there are exceptions).
I think the most promising idea on improving reliability I've seen in recent years is the XP approach. At least there are usually four eyes on the code, and at least some effort is being put into writing unit test routines up front.
I think the least promising approach to reliability is taken by OO types who build so many accessors that you can't understand what the hell is really going on. It's "correctness through obfuscation." Reminds me of the idiots who rename all the registers when programming in assembly.
Re:Automation is difficult (Score:2, Insightful)
This is one part of extreme programming I like very much. The idea is to write the test cases before you write the software. That way, you're testing to specification, not implementation.
To this I must respectfully disagree. In small(er) projects it might be closer to the truth, but in my experience regression testing is vital. Regression testing is mainly useful
Re:Automation is difficult (Score:2)
Excellent point. I am fortunate to be able to work on test automation full time, and to have at least one other person to do most of the "lab jockey" chores. My day is divided between improving the tools, writing new tests, and analyzing results (which often requires revising a test).
There are times when I resist adding new tests (usually a big pile of stored procedures that a developer or another tester wrote), because I don't have the
Manually performing a complete test (Score:2)
Difficult if there is no logic and no interactions.
Scripts will be of some help.
Probably best will be long complicated sequences of operations whose net effect must logically be precisely nothing.
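A hedged sketch of that idea in Python: apply an operation and its inverse, then assert the observable state is unchanged. The toy account below stands in for a real system, where real inverse pairs (create/delete, book/cancel) are where the bugs hide.

    import random

    def inverse_pair_test(iterations=1000, seed=7):
        rng = random.Random(seed)            # seeded, so failures reproduce
        account = {"balance": 100, "holds": 0}
        for _ in range(iterations):
            before = dict(account)
            amount = rng.randint(1, 50)
            account["balance"] += amount     # deposit ...
            account["balance"] -= amount     # ... then the inverse withdrawal
            assert account == before, f"net effect was not nothing: {account}"

    inverse_pair_test()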
If you're lazy like me, integrate first, and make every module responsible for not going berserk in the presence of buggy neighbors. Too much software seems designed to assume that everything else is perfect. The old game of "gossip" shows that even if everything is perfect, your idea of perfect and m
Re:Manually performing a complete test (Score:2)
You haven't seen many bugs, have you? Sorry, I'm sure you have, but bugs are extremely adept at hiding in places where you "dont have places".
I've run into too many doubles and even a triple where the interactions were subtle enough that a bug was discernible only in the combination. Fix either (any, in the case of the triple) and the problem goes away. Fix both,
I would class as a bug, any behavior which departs from what the specs should
Automated process (Score:2, Insightful)
"We don't have time for that. Just get the QA testing complete so we can start the layoffs."
This basically makes the entire question of automating processes academic. Now, if automating processes can lead to massive job loss, salary savings and bonuses, it might actually be approved.
Long-term value is never EVER approved instead of short-term pocket-stuffing, EVEN IF a business case can be made for it. I've seen near-perfect business cas
That's a bunch of crap (Score:2)
The thing is, once you automate something, your automation will walk the same code path that was fixed after you logged your bugs the first time you ran this test case. It is very likely that the code paths you're testing will never be touched again and therefor
6 year experience in QA (Score:5, Insightful)
TDD (Score:4, Insightful)
I came across this when I recently read the book by Erich Gamma and Kent Beck, Contributing to Eclipse. They do TDD in this book all the time, and it sounds like it's actually fun.
Not that I have done it myself yet! It sounds like a case where you have to go through some initial inconvenience just to get into the habit, but I imagine that once you've done that, development and testing can be much more fun altogether.
Thanks for the tip. (Score:3, Funny)
Drop "it." from the URL to get rid of
Thanks for the tip. Whoever designed the new scheme probably has to have his wife pick out his clothes for him, so the colors will match.
Automation versus Manual Testing (Score:5, Insightful)
- Segue's SilkTest [segue.com]
- WinRunner [wilsonmar.com]
- WebLoad [radview.com]
- Tcl/Expect [nist.gov]
There are *many many* problems with large-scale automation, because once you develop scripts around a particular user interface, you've essentially tied that script to that version of your application. So this becomes a maintenance problem as you go forward.
One very useful paradigm we've employed in automation is to use it to *prep* the system under test. Many times it's absolutely impossible to create 50,000 users, or 1,000 data elements, without using automation in some form. We automate the creation of users, we automate the API calls that put the user into a particular state, then we use our brains to do the more "exotic" manual testing that stems from the more complex system states that we've created. If you are about to embark on automating your software, this is a great place to start.
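A minimal sketch of that prep step using Python's standard library; the endpoint, payload shape, and count are stand-ins for whatever your system's provisioning API actually looks like.

    import json
    import urllib.request

    BASE = "http://testhost:8080/api/users"   # hypothetical admin endpoint

    def create_users(count):
        for n in range(count):
            payload = json.dumps({"login": "loaduser%05d" % n,
                                  "role": "member"}).encode()
            req = urllib.request.Request(
                BASE, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                assert resp.status in (200, 201), "user %d failed" % n

    create_users(50_000)   # then do the "exotic" manual testing by hand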
Hope this helps.
Design it for testability (Score:2)
As a bonus you'll end up with a scriptable app that advanced users will love.
Manual work and automated testing tools (Score:3, Informative)
We evaluated a bunch of tools for testing the front end systems, and after a year long study of what's in the marketplace, we settled on the Mercury Interactive [note: I do not work for them] tools: QuickTest Professional for regression testing, and LoadRunner for stress testing.
No one product will test both the front and back ends, so you will need to use a mixture of the tools, open source tools, system logs, manual work, and some black magik.
Our applications are global in scope. We test against Windows clients, mainframes, UNIX boxes and mid-range machines.
The automated tools are definitely a blessing, but are not an end-all-be-all. Many of the testing tool companies just do not understand "global" enterprises, and working to integrate all the tools amongst all your developers, testers, operation staff, etc can be difficult.
Getting people on board with the whole idea of automated tools is yet another big challenge. Once you have determined which toolset to use, you have to do a very serious "sell" job on your developers, and most notably, your operations staff, who have "always done their testing using Excel spreadsheets".
Another big problem is no product out there will test the front and the back end. You will have to purchase tools from more than one vendor to do that. A tool for the backend iSeries, for example, is Original Software's TestBench/400. Again, this does not integrate with Mercury's tools, so manual correlation between the two products will be needed.
You can only go so far with these tools; they will not help you to determine the "look and feel" across various browsers - that you will have to do yourself.
QuickTest Professional does have a Terminal Emulator add-in (additional charge) that allows you to automate mouse/keystrokes via Client Access 5250 and other protocols.
The best way to determine your needs is to call up the big companies (CA, IBM, Mercury) and have them do demos for your staff. Determine your requirements. Set up a global team to evaluate the products. Get demo copies and a technical sales rep to help you evaluate in your environment. Compare all the products, looking at the global capability of each product as well as 24/7 support.
No tool is perfect, but any of them beats purely manual testing. Once everybody is on board, and has been "sold" on the idea of the tools, you won't know how you lived without them.
Also, make sure that you have a global test management tool to manage all your requirements, defects, and test cases. Make sure it is web-based.
Expect to spend over a year on this sort of project for a global company. Make sure you document everything, and come up with best practice documentation and workflows.
Good luck!
The Microsoft way (Score:2)
2. Post to windowsupdate.com
3.
4. Profit
Software Testing (Score:2)
I suggest you beta test it with the end user and see what they have to say. Remember that THEY will be the ones that will have to use it on a day-to-day basis.
It seems to be a fait accompli that most businesses will accept IT bullshit, but the engineering companies (umm, people that build bridges and refineries, not engineering wannabees) are a lot less malleable when some computer specialist comes calling.
Re: (Score:2, Funny)
Mercury Test Director and Winrunner (Score:2)
To be honest, many of the things they do could easily be done by something else, but QA/testing may not seem to be the most interesting area for open source developers.
Re:Mercury Test Director and Winrunner (Score:2)
What annoyed me was that TD is good but it has many failings. It would be nice to have an opensource equivalent. I find their database model somewhat restrictive (even if you do your own selects).
Testing? (Score:2)
OpenSTA (Score:2)
It's designed to stress test web pages, and analyse the load on web servers, database servers and operating systems.
There is also a new company - Cyrano [cyrano.com] that has risen from the ashes of the old
I Wonder if Anybody Does It (Score:2)
Re:I Wonder if Anybody Does It (Score:2)
Re:I Wonder if Anybody Does It (Score:2)
Customers pay for maintenance, support, and upgrades, but they are not forced to upgrade and can still get support if they don't upgrade. Thus, small changes are all that most customers will accept, not anything that requires a huge data conversion or re-training project. Furthermore, because of the limited availability of expertise for such customer support, if a re-written version requir
Re:I Wonder if Anybody Does It (Score:2)
Well, it's just a thought. Good luck.
It's simple (Score:2)
Your UI/network code should not contain spaghetti wiring logic! Separate out validation and similar logic so that it, too, can be unit-tested. If a button handler has 50 lines of code, you have much bigger problems in the first place -- solve them first.
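A minimal sketch of that separation in Python; the widget objects and the quantity rule are invented for illustration.

    def validate_order(quantity_text, max_quantity=100):
        """Pure logic: returns (ok, message) and never touches a widget."""
        if not quantity_text.strip().isdigit():
            return False, "quantity must be a whole number"
        quantity = int(quantity_text)
        if not 1 <= quantity <= max_quantity:
            return False, "quantity must be between 1 and %d" % max_quantity
        return True, ""

    def on_submit_clicked(quantity_field, status_label):
        # The handler shrinks to wiring: read widget, call logic, show result.
        ok, message = validate_order(quantity_field.value)
        status_label.text = "order accepted" if ok else message

    # The logic is now trivially unit-testable, no GUI loop required:
    assert validate_order("5") == (True, "")
    assert validate_order("0")[0] is False
    assert validate_order("abc")[0] is False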
Integrated Testing (Score:2, Insightful)
Use a QA team and test-driven development (Score:5, Interesting)
This is generally called "test-first" development. If you follow it, you'll find some nice characteristics:
1. Each unit will be easily testable.
2. Each unit will be coherent, since it's easier to test something that only does one thing.
3. Units will have light coupling, since it's easier to express a test for something that depends only lightly on everything else.
4. User interface layers will be thin, since it's hard to automatically test a UI.
5. Programmers will tend to enjoy writing tests a bit more, since the tests now tell them when they're done with their code, rather than merely telling them that their code is still wrong.
You can go a step further than this, and in addition to writing your tests before you write your code, you can even write your tests as you write your design. If you do this, your design will mutate to meet the needs of testing, which means all of the above advantages will apply to your large-scale design as well as your small-scale units. But in order to do this you have to be willing and able to code while you're designing, and many developers seem unwilling to combine the two activities in spite of the advantages.
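A minimal test-first sketch in Python's unittest; slugify is an invented example, not from the post. The tests are written first and fail, then the function is written to make them pass.

    import unittest

    class SlugifyTest(unittest.TestCase):      # written first
        def test_lowercases_and_joins_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slugify("  trim me  "), "trim-me")

    def slugify(title):                        # written second, minimally
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        unittest.main()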
-Billy
SilkTest (Score:2, Informative)
Downside? It's Windows stuff AND it's hellaciously expensive..
Re:SilkTest (Score:2)
Good:
Mixed:
Bad:
jameleon (Score:2)
Pragmatic Programmer series (Score:2)
The third book in the series is about project automation, where they teach you how to repeat in a controlled manner all the stuff you learned in the first two books. The basics:
1) Write unit tests before writing your methods
2) Your unit tests should follow the inheritance tree of the classes under test to avoid duplicate test code (see the sketch after this list).
3) Write a vertical "slice" of your application first (all the layers, none of the functionality). This wi
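A minimal sketch of point 2 with Python's unittest; Stack and CountingStack are invented stand-ins for your real class hierarchy.

    import unittest

    class Stack:
        def __init__(self):
            self._items = []
        def push(self, x):
            self._items.append(x)
        def pop(self):
            return self._items.pop()

    class CountingStack(Stack):
        def __init__(self):
            super().__init__()
            self.pushes = 0
        def push(self, x):
            self.pushes += 1
            super().push(x)

    class StackTest(unittest.TestCase):
        make = Stack                     # subclasses override the factory

        def test_push_then_pop_round_trips(self):
            s = self.make()
            s.push(42)
            self.assertEqual(s.pop(), 42)

    class CountingStackTest(StackTest):  # inherits the round-trip test
        make = CountingStack

        def test_push_counter(self):
            s = self.make()
            s.push(1)
            self.assertEqual(s.pushes, 1)

    if __name__ == "__main__":
        unittest.main()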
Last few companies I worked for (Score:2)
Testing (Score:5, Interesting)
Proof: For any significantly sized system, take a look at all the independent axes it has. For instance: the set of actions the user can take, the types of nouns the user can manipulate, the types of permissions the user can have, the number of environments the user may be in, etc. Even for a really simple program, that is typically at least 5 actions, 20 nouns, (let's estimate a minimal) 3 permission sets (no permissions on the data, read-only, read & write), and well in excess of 5 different environments (you need only count relevant differences, but this includes missing library A, missing library B, etc.). Even for this simple, simple program, that's 5*20*3*5, which is 1,500 scenarios, and no, you can never be sure that not even one of those will fail in a bad way.
Even at one minute a test, that's 25 hours, which is most of a person-week.
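The arithmetic is easy to reproduce; a sketch with placeholder axis values matching the counts above.

    from itertools import product
    from math import prod

    actions      = ["action%d" % i for i in range(5)]
    nouns        = ["noun%d" % i for i in range(20)]
    permissions  = ["none", "read-only", "read-write"]
    environments = ["env%d" % i for i in range(5)]

    scenarios = list(product(actions, nouns, permissions, environments))
    assert len(scenarios) == prod([5, 20, 3, 5]) == 1500
    print("%d scenarios, about %d hours at one minute each"
          % (len(scenarios), len(scenarios) // 60))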
Thus, if you tested an enterprise-class system for three days, you did little more than scratch the surface. Now, the "light at the end of the tunnel" is that most systems are not used equally across all of their theoretical capabilities, so you may well have covered 90%+ of the real-world use, but for the system itself, a vanishing fraction of the use cases. Nevertheless, as the system grows, it rapidly becomes harder to test even that 90%.
(The most common error here is probably missing an environment change, since almost by definition you tested with only one environment.)
Bear in mind that such testing is still useful, as a final sanity check, but it is not sufficient. (I've seen a couple of comments that say part of the value of such testing is getting usability feedback; that really ought to be a separate case, both because the tests you ought to design for usability are separate, and because once someone has functionally tested the system they have become spoiled with preconceived notions. But it is better than nothing.)
How do you attack this problem? (Bear in mind almost nobody is doing this right today.)
Why can't you test GUI's? In my experience, it boils down to two major failings shared by nearly all toolkits:
The GUIs have chosen an architecture that is not conducive to testing; they require their own loop to be running, they don't allow you to drive them programmatically, they are designed for use, not testing. When you find a GUI that has an architecture at least partially conducive to testing, suddenly, lo, you can do some automated tests.
And in my case, I am talking serious testing that concerns things central to the use of my program. I counted 112 distinct programmatic paths that can be taken when the user presses the "down" key in my outliner. I was able to write a relatively concise test to cover all cases. Yes, code I thought was pretty good turned out to fail two specific cases (
get only really smart and really dumb testers.. (Score:2)
They will let you know about flaws that you never imagined were possible! All in all, nothing beats manual testing. We don't have any dedicated testing staff, so we just gather up a few end users.
the UI is a pain (Score:2)
I have a pet project (jenny [burtleburtle.net]) that generates testcases for cross-functional testing, where the number of testcases grows with the log of the number of interacting features rather than exponentially. It often allows you to do a couple dozen testcases instead of thousands, and it
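jenny itself is a small C program, but the pairwise idea fits in a page of Python. Here is a hedged greedy approximation (not jenny's actual algorithm): cover every pair of feature levels at least once, rather than every full combination.

    from itertools import combinations

    def pair_key(i, a, j, b):
        """Canonical key for a (factor, level) pair, smaller index first."""
        return (i, a, j, b) if i < j else (j, b, i, a)

    def pairwise_cases(factors):
        n = len(factors)
        uncovered = {pair_key(i, a, j, b)
                     for i, j in combinations(range(n), 2)
                     for a in factors[i] for b in factors[j]}
        cases = []
        while uncovered:
            case = [None] * n
            i, a, j, b = next(iter(uncovered))   # seed from any uncovered pair
            case[i], case[j] = a, b
            for k in range(n):
                if case[k] is None:              # greedily pick the best level
                    case[k] = max(factors[k], key=lambda v: sum(
                        pair_key(k, v, m, case[m]) in uncovered
                        for m in range(n) if case[m] is not None))
            for i, j in combinations(range(n), 2):
                uncovered.discard(pair_key(i, case[i], j, case[j]))
            cases.append(tuple(case))
        return cases

    # Four features with three settings each: 81 exhaustive combinations.
    factors = [["a1", "a2", "a3"], ["b1", "b2", "b3"],
               ["c1", "c2", "c3"], ["d1", "d2", "d3"]]
    print(len(pairwise_cases(factors)), "cases instead of", 3 ** 4)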
Re:the UI is a pain (Score:2)
Design your code to be automatically tested (Score:2)
It's an investment (Score:2)
Don't let developers check in code until it has passed the entire regression suite. Otherwise, SQA (the test maintainers) will be in a constant catch-up mode and ultimately the disconnect will grow until the whole test suite is abandoned.
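A minimal sketch of such a gate as a version-control pre-commit hook in Python; the tests directory name is a placeholder.

    #!/usr/bin/env python
    import subprocess
    import sys

    # Run the regression suite; a nonzero exit blocks the commit in most
    # VCS hook setups.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"])
    sys.exit(result.returncode)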
Make tests easy to create. This is harder than it sounds. Tools that simp
Our recipe (Score:2)
Trac correct URL (Score:2)
MetaTest (Score:2)
follow up (Score:2, Informative)
automated testing (Score:2, Informative)
Seems to me that there are really two levels of automated testing: automatically generating and running test cases, and being able to automatically repeat test cases you've manually created and run in the past.
Just being able to repeat manually created test cases is a big help. It sounds really simple -- create a harness that you can plug test cases into and start writing test cases -- but scheduling and coordinating that sort of thing starts to get really difficult on large proj
Autoexpect (Score:3, Informative)
Expect is a Tcl tool that is also available as a Perl module.
Autoexpect is a script generator of sorts for Expect.
Using that combination, you can create a test harness that can also test things that were not designed to be tested, such as GUIs, browsers, SSH and password sessions.
However, it is not easy and yes it works on Windoze too...
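For instance, a hedged sketch with pexpect, the Python port (the host, user, and password are placeholders; a real harness would read them from config, and note that pexpect's spawn, unlike Tcl Expect, does not run on Windows).

    import pexpect

    # Drive an interactive ssh login the way an autoexpect-recorded
    # script would.
    child = pexpect.spawn("ssh testuser@testhost uptime", timeout=30)
    index = child.expect(["password:", r"\(yes/no\)"])
    if index == 1:                      # first connection: accept the host key
        child.sendline("yes")
        child.expect("password:")
    child.sendline("s3cret")            # never hardcode real credentials
    child.expect(pexpect.EOF)
    print(child.before.decode())        # whatever uptime printed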
Trained monkeys. (Score:2, Funny)
The solution to your problem is trained monkeys. It has been shown in trained monkeys that wide-dynamic-range neurons in the cerebral cortex participate in the encoding process by which monkeys perceive (Kenshalo et al., 1988). There has not been so large a demand for the little guys since mustachioed Italians wandered the streets of New York with them almost a century ago.
I find that ring-tailed monkeys (from South America, principally Brazil) are the best for software testin
from the trenches.. (Score:2)
In my experience, automated functional testing is good in one situation: functional GUI regression testing on systems using a classic "waterfall model" setup, where the GUI doesn't change much and there are more than about 3-4 releases per year.
In any other situation, you usually don't get the payback. The software is expensive (I use Mercury Interactive's WinRunner), often running into the $100k range. The skill set required is quite spec
My experiences with testing.... (Score:2)
During the dotBoom, I coughed up $500 out of my own pocket for Visual Test. I thought it would help me with the hard parts of testing my code: the NT services and the database c
Code to test (Score:2)
Don't re-invent the wheel ! (Score:2, Interesting)
I attribute this to a few things:
1) Commercial tools are over complicated and not very good
2) There's no tool that exists to do X Y Z
3) Our software can't be automatically tested !
4) A lot of time is taken up with reporting !
1).. Totally agree... have you ever seen Test Director? It's a nightmare to use, takes ages to import test cases, and its automation interface sucks... plus it costs a FORTUNE.
We do both (Score:2)
At the same time, we also utilize full-time testers when we hit milestones, to make sure that items related to look, feel and IA are caught and dealt with.
So, to sum up:
Automated testing to sanity-check builds
Manual testing to ensure milestone quality
What does QA mean in software (Score:2)
Re:What does QA mean in software (Score:2)
Okay, I did that and here's what I got:
"Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'"
That's not QA, that's project management. Or process management. Or lifecycle management. What's it got to do with quality? At what point are you validating the deliv
Don't forget Valgrind (Score:2)
I run a small open/commercial hybrid development company here - valgrind has been one of the most impressive bug prevention tools we've ever added to our arsenal.
One very important area of testing/QA is regression testing. Always make sure you have a record of what worked in the
Testers are Professionals Too (Score:2, Insightful)
Moo (Score:2)
If properly specced, bounds checking and other tests are very simple to write, and either QA can do it, or a day can be set aside where you listen to your favorite music, book on tape, or movie and test each module. Testing is simple, and many suites make it even easier.
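For instance, a minimal sketch of such a spec-driven bounds test; clamp and its 0..100 contract are invented for illustration.

    import unittest

    def clamp(value, low=0, high=100):
        """Spec: values below low return low, values above high return high."""
        return max(low, min(high, value))

    class ClampBounds(unittest.TestCase):
        def test_at_and_around_the_boundaries(self):
            for given, expected in [(-1, 0), (0, 0), (1, 1),
                                    (99, 99), (100, 100), (101, 100)]:
                self.assertEqual(clamp(given), expected)

    if __name__ == "__main__":
        unittest.main()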
Ultimately, the problem with
10 years of QA... (Score:2, Interesting)
Home-grown test suites built to suit the job at hand have always turned out to be the best testing solution. Automation is the only way to keep up with the test effort required for a large project, especially when QA is seen as an afterthought or a barrier to release. Automation will ke
Test First Programming (Score:2, Interesting)
The example GUI application is a simple game, but the methodology could be used for any GUI application.
From the article:
Testing's Many Layers (Score:2)
The best defense against a client-side blowup is to have as many layers as possible helping out, because they all have their various strengths and vulnerabilities.
In development:
Some useful things (Score:2)
(o) Just like coding, testing is engineering: you have limited resources, and you have to allocate them to get the max. benefit.
(o) Some small enterprises cannot afford full-time software testers. Some have huge existing code bases, and small or non-existent QA teams. And that just has to do.
(o) Black-box testing for regressions is often where the max. bang for your buck is. Consider: your unit tests may cover 99% of the code, and 100% of the unit tests may pass. But when one
Re:i hate to be obvious, but (Score:2)
I ran into this problem in my (small) OSS project. My solution was to form a small testing group (seven members at present) from dedicated users - I send the release out to them, they do pretty thorough testing and I get back detailed bug reports. Seems to work very wel