Sunday, April 29, 2007

Some Common Mistakes in Test Automation


As QA engineers, test automation is a major area in which we need to keep improving our skills. Automating a project's test cases saves a great deal of time and manual effort, especially when a sudden release has to be tested immediately. I have found a set of common mistakes in test automation that we should take into account before attempting it on live projects. Here are only a few of them; for more information please visit: http://www.stickyminds.com/getfile.asp?ot=XML&id=2901&fn=XDD2901filelistfilename1.pdf

1) Confusing automation with testing
Testing is a skill. While this may come as a surprise to some people, it is a simple fact. For any system there is an astronomical number of possible test cases, yet in practice we have time to run only a very small number of them. That small number of test cases is still expected to find most of the bugs in the software, so the job of selecting which test cases to build and run is an important one. Both experiment and experience have told us that selecting test cases at random is not an effective approach to testing. A more thoughtful approach is required if good test cases are to be developed.
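To make the point concrete, here is a minimal Python sketch of one such thoughtful technique, boundary-value analysis. The accepts_age function and its 18-to-65 range are invented purely for illustration:

def boundary_values(lo, hi):
    """Return the classic boundary-value candidates for an integer range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    # Hypothetical system under test: accepts ages 18..65 inclusive.
    return 18 <= age <= 65

if __name__ == "__main__":
    for age in boundary_values(18, 65):
        expected = 18 <= age <= 65
        actual = accepts_age(age)
        status = "PASS" if actual == expected else "FAIL"
        print(f"age={age}: expected {expected}, got {actual} -> {status}")

Six targeted cases around each boundary will typically catch off-by-one defects that dozens of randomly chosen values would miss.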
2) Believing capture/replay = automation
Capture/replay technology is indeed a useful part of test automation, but it is only a very small part of it. The ability to capture all the keystrokes and mouse movements a tester makes is an enticing proposition, particularly when those exact keystrokes and mouse movements can be replayed by the tool time and time again. The test tool records the information in a file called a script. When it is replayed, the tool reads the script and passes the same inputs and mouse movements on to the software under test, which usually has no idea that it is a tool controlling it rather than a real person sitting at the keyboard. In addition, the test tool generates a log file, recording precise information on when the replay was performed and perhaps some details of the machine. (A toy sketch of this record/replay idea follows mistake 3 below.)

3) Verifying only screen-based information
Testers are usually seen sitting in front of a computer screen, so it is perhaps natural to assume that it is only the information output to the screen by the software under test that gets checked. This view is further strengthened by many of the testing tools, which make it particularly easy to check information that appears on the screen, both during a test and after it has been executed. However, this assumes that a correct screen display means all is OK, when it is often the output that ends up elsewhere that is far more important. Just because information appears on the screen correctly does not guarantee that it was recorded elsewhere correctly. (See the second sketch below.)
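As promised under mistake 2, here is a toy record/replay harness. Real capture/replay tools hook GUI events rather than plain function calls, so treat this purely as an illustration of the script-and-log idea; record, replay, and demo.script are all invented names:

import json
import time

def record(inputs, script_path):
    """Save a sequence of recorded inputs (the 'script') to a file."""
    with open(script_path, "w") as f:
        json.dump(inputs, f)

def replay(script_path, target, log_path):
    """Feed the recorded inputs back to the software under test, logging the run."""
    with open(script_path) as f:
        inputs = json.load(f)
    with open(log_path, "a") as log:
        log.write(f"replay started {time.ctime()}\n")
        for item in inputs:
            target(item)  # the SUT cannot tell a tool from a person at the keyboard
            log.write(f"sent: {item}\n")

if __name__ == "__main__":
    record(["open file", "type hello", "save"], "demo.script")
    replay("demo.script", target=print, log_path="demo.log")

Notice how little this actually verifies: it proves only that the inputs can be resent, which is exactly why capture/replay alone is not automation.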
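And for mistake 3, a sketch of a test that checks both the on-screen confirmation and what was actually recorded, using an in-memory SQLite database. The place_order function and its schema are hypothetical stand-ins for the software under test:

import sqlite3
import unittest

def place_order(conn, item, qty):
    """Hypothetical SUT: stores the order and returns the on-screen confirmation."""
    conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
    return f"Order confirmed: {qty} x {item}"

class OrderTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")

    def test_screen_and_database_agree(self):
        message = place_order(self.conn, "widget", 3)
        # Screen-based check: what the tester would see.
        self.assertEqual(message, "Order confirmed: 3 x widget")
        # The more important check: what was actually recorded elsewhere.
        row = self.conn.execute("SELECT item, qty FROM orders").fetchone()
        self.assertEqual(row, ("widget", 3))

if __name__ == "__main__":
    unittest.main()

A test that stopped at the first assertion would pass even if the insert silently failed.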
4) Trying to automate too much
There are two aspects to this:
automating too much too soon; and automating too much, full stop. Automating too much early on leaves you with a lot of poorly automated tests which are difficult (and therefore costly) to maintain and susceptible to software changes. It is much better to start small. Identify a few good, but diverse, tests (say 10 or 20 tests, or 2 to 3 hours' worth of interactive testing) and automate them on an old (stable) version of the software, perhaps a number of times, exploring different techniques and approaches.
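One way to keep such a small starter suite cheap to maintain is to separate the test data from the test logic, so adding a case costs one line rather than one method. A sketch, with a hypothetical discount function standing in for the software under test:

import unittest

def discount(total):
    """Hypothetical SUT: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# A handful of diverse cases kept as data; new cases cost one line each.
CASES = [
    (0, 0),
    (99.99, 99.99),
    (100, 90.0),
    (250, 225.0),
]

class DiscountTest(unittest.TestCase):
    def test_cases(self):
        for total, expected in CASES:
            with self.subTest(total=total):
                self.assertAlmostEqual(discount(total), expected)

if __name__ == "__main__":
    unittest.main()

When the software changes, only the data table needs revisiting, which is what makes a suite like this survive beyond its first release.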
