Monday, June 19, 2017

Basic Test Case Concepts

A test case is simply a test with formal steps and instructions; test cases are valuable because they are repeatable, reproducible under the same environments, and easy to improve upon with feedback. A test case is the difference between saying that something seems to be working okay and proving that a set of specific tasks are known to be working correctly.

Some tests are more straightforward than others. For example, say you need to verify that all the links in your web site work. There are several different approaches to checking this:
  • you can read your HTML code to see that all the link code is correct
  • you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would imply that your links are correct
  • you can use your browser (or even multiple browsers) to check every link manually
  • you can use a link-checking program to check every link automatically
  • you can use a site maintenance program that will display graphically the relationships between pages on your site, including links good and bad
  • you could use all of these approaches to test for any possible failures or inconsistencies in the tests themselves
Verifying that your site's links are not broken is relatively unambiguous. You simply need to decide which one or more of these tests best suits your site structure, your test resources, and your need for granularity of results. You run the test, and you get your results showing any broken links.
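The automated approaches in the list above can be sketched in a few lines. Here is a minimal, illustrative link check in Python; the sample page markup and the set of known pages are invented for the example, and a real checker would issue HTTP requests instead of consulting a set.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> element on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def find_broken_links(page_html, known_pages):
    """Return the links that do not resolve to a known page.

    known_pages stands in for the checks a real tool would make
    with HTTP requests against the live site.
    """
    collector = LinkCollector()
    collector.feed(page_html)
    return [href for href in collector.links if href not in known_pages]

# Invented sample page and site map for illustration.
page = '<a href="/home">Home</a> <a href="/abuot">About</a>'
site = {"/home", "/about"}
print(find_broken_links(page, site))  # the misspelled /abuot is reported
```

Note that, exactly as discussed below, this only finds broken links: the misspelled /abuot is caught because no such page exists, but a syntactically valid link that points at the wrong page would pass.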

Notice that you now have a list of broken links, not of incorrect links. If a link is valid syntactically, but points at the incorrect page, your link test won't catch the problem. My general point here is that you must understand what you are testing. A test case is a series of explicit actions and examinations that identifies the "what".

A test case for checking links might specify that each link is tested for functionality, appropriateness, usability, style, consistency, etc. For example, a test case for checking links on a typical page of a site might include these steps:

Link Test: for each link on the page, verify that

  • the link works (i.e., it is not broken)
  • the link points at the correct page
  • the link text effectively and unambiguously describes the target page
  • the link follows the approved style guide for this web site (for example, closing punctuation is or is not included in the link text, as per the style guide specification)
  • every instance of a link to the same target page is coded the same way
As you can see, this is a detailed testing of many aspects of the link, with the result that on completion of the test, you can say definitively what you know works. However, this is a simple example: test cases can run to hundreds of instructions, depending on the types of functionality being tested and the need for iterations of steps.
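One of the checks in the list -- that every instance of a link to the same target page is coded the same way -- lends itself to automation. A small sketch; the (href, text) pairs are invented for illustration:

```python
from collections import defaultdict

def inconsistent_links(links):
    """Given (href, link_text) pairs from one page, return the targets
    that are linked with more than one distinct link text."""
    texts_by_target = defaultdict(set)
    for href, text in links:
        texts_by_target[href].add(text)
    return {href: texts for href, texts in texts_by_target.items()
            if len(texts) > 1}

# Invented example: /pricing is linked two different ways.
links = [("/pricing", "Pricing"), ("/docs", "Docs"), ("/pricing", "Plans")]
print(inconsistent_links(links))  # flags /pricing
```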

Defining Test and Test Case Parameters

A test case should set up any special environment requirements the test may have, such as clearing the browser cache, enabling JavaScript support, or turning on warnings for cookies being set.
In addition to specific configuration instructions, test cases should also record browser types and versions, operating system, machine platform, connection speed -- in short, any parameter that would affect the reproducibility of the results or could aid in troubleshooting any defects found by testing. To state this a little differently: specify which platforms this test case should be run against, record which platforms it is run against, and, in the case of defects, report the exact environment in which the defect was found.

The required fields of a test case are as follows:
  • Test Case ID: a unique number given to the test case so that it can be identified.
  • Test Description: a description of what the test case is going to test.
  • Revision History: each test case has to have a revision history, so that you know when and by whom it was created or modified.
  • Function to be Tested: the name of the function under test.
  • Environment: the environment in which the test is run.
  • Test Setup: anything you need to set up outside of your application, for example printers, the network, and so on.
  • Test Execution: a detailed description of every step of the execution.
  • Expected Results: a description of what you expect the function to do.
  • Actual Results: pass / fail. If pass, what actually happened when you ran the test; if fail, a description of what you observed.
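The required fields above map naturally onto a simple record type. A sketch in Python, with an invented link-test example filling in the fields:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """The required fields listed above, as a simple record.
    Field names follow the text; the sample values are illustrative."""
    case_id: str
    description: str
    revision_history: list
    function_under_test: str
    environment: str
    setup: str
    execution_steps: list
    expected_result: str
    actual_result: str = ""
    status: str = "not run"

    def record_result(self, actual):
        """Record the actual result and derive pass/fail from it."""
        self.actual_result = actual
        self.status = "pass" if actual == self.expected_result else "fail"

# Invented example test case.
tc = TestCase(
    case_id="LNK-001",
    description="Verify that each link on the page points at the correct target",
    revision_history=["1.0 - created"],
    function_under_test="Page links",
    environment="Any supported browser",
    setup="N/A",
    execution_steps=["Open the page", "Follow each link",
                     "Compare the target page with the intended page"],
    expected_result="Every link reaches its intended page",
)
tc.record_result("Every link reaches its intended page")
print(tc.status)  # pass
```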
Sample Test Case

Here is a simple test case for applying bold formatting to text.
  1. Test Case ID: B001
  2. Test Description: verify that B applies bold formatting to text
  3. Revision History: 3/23/00 1.0 - Valerie - Created
  4. Function to be Tested: B - bold formatting of text
  5. Environment: Win 98
  6. Test Setup: N/A
  7. Test Execution:
     a. Open the program
     b. Open a new document
     c. Type any text
     d. Select the text to be made bold
     e. Click Bold
  8. Expected Result: bold formatting is applied to the selected text
  9. Actual Result: pass
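Steps a through e of this test case can even be exercised against a toy model of the application. The ToyEditor class below is invented purely for illustration -- it only mimics the behaviour being tested, and is not how a real word processor works.

```python
class ToyEditor:
    """A minimal stand-in for the application under test:
    stores text and a list of bold character ranges."""
    def __init__(self):
        self.text = ""
        self.bold_ranges = []
        self.selection = None

    def type_text(self, s):
        self.text += s

    def select(self, start, end):
        self.selection = (start, end)

    def click_bold(self):
        self.bold_ranges.append(self.selection)

    def is_bold(self, start, end):
        return (start, end) in self.bold_ranges

# Steps a-e of the test case, executed against the toy model.
editor = ToyEditor()                 # a. open program / b. new document
editor.type_text("any text")         # c. type any text
editor.select(0, len(editor.text))   # d. select the text to be made bold
editor.click_bold()                  # e. click Bold

# Compare the actual result against the expected result.
actual = "pass" if editor.is_bold(0, len(editor.text)) else "fail"
print(actual)  # pass
```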

Test Case 1

Test Case ID : Test Case Title

The test case ID may be any convenient identifier, as decided upon by the tester. Identifiers should follow a consistent pattern within test cases, and a similar consistency should apply across Test Modules written for the same project.

Test Case ID
Purpose
Owner
Expected Results
Test Data
Test Tools
Dependencies
Initialization
Description


Purpose:
The purpose of the Test case, usually to verify a specific requirement.

Owner:
The persons or department responsible for keeping the Test cases accurate.

Expected Results:
Describe the expected results and outputs from this Test Case. It is also desirable to include some method of recording whether or not the expected results actually occurred, i.e., whether the test case, or even individual steps of the test case, passed.

Test Data:
Any required data input for the Test Case.

Test Tools:
Any specific or unusual tools or utilities required for the execution of this Test Case.

Dependencies:
If correct execution of this Test Case depends on being preceded by any other Test Cases, that fact should be mentioned here. Similarly, any dependency on factors outside the immediate test environment should also be mentioned.

Initialization :
If the system software or hardware has to be initialized in a particular manner in order for this Test case to succeed, such initialization should be mentioned here.

Description:
Describe what will take place during the Test Case. The description should take the form of a narrative description of the Test Case, along with a test procedure, which in turn can be specified by test case steps, tables of values or configurations, further narrative, or whatever is most appropriate to the type of testing taking place.
Test Case 2

Test ID
Description
Expected Results 
Actual Results

Test Case 3

Project Name
Project ID
Version
Date
Test Purpose
Pre – Test Conditions 

Step
Test Description
Test Data
Test Actions
Expected Result
Actual Result

Test Case 4

Test Case Description: identify the items or features to be tested by this test case.
Pre and Post Conditions: description of changes (if any) to the standard environment; any modifications should be made automatically.

Case
Component
Author
Date
Version
Test Case Description
Pre and Post Conditions
Input / Output Specification 
Test Procedure
Expected Results
Failure Recovery
Comments


Test Case 4 - Description

Case: Test Case Name
Component: Component Name
Author: Developer Name
Date: MM-DD-YY
Version: Version Number
Input / Output Specifications:

Identify all inputs and outputs required to execute the test case. Be sure to identify all required inputs and outputs, not just data elements and values:

  • Data (values, ranges, sets)
  • Conditions (states: initial, intermediate, final)
  • Files (database, control files)

Test Procedure
Identify any special constraints on the test case. Focus on key elements such as special setup.

Expected Results
Fill this row with a description of the expected test results.

Failure Recovery
Explanations regarding which actions should be performed in case of test failure.

Comments
Suggestions, description of possible improvements, etc.

Test Case 5

Test Case ID 
Test Case Name
Test Case Description
Test Steps
Test Case Status
Test Status (P/F)
Test Priority
Defect Severity
Step
Expected
Actual

Test Case ID
Test Case Title
Purpose
Pre Requisite
Test Data
Steps
Expected Result 
Actual Result
Status

Writing Test Cases for Web Browsers


This is a guide to making test cases for Web browsers -- for example, test cases that demonstrate HTML, CSS, SVG, DOM, or JS bugs. There are always exceptions to the rules when making test cases; the most important thing is to show the bug without distractions. This isn't something that can be done just by following a fixed list of steps -- you have to be intelligent about it.

STEP ONE: FINDING A BUG

The first step to making a testcase is finding a bug in the first place. There are four ways of doing this:

1. Letting someone else do it for you: Most of the time, the testcases you write will be for bugs that other people have filed. In those cases, you will typically have a Web page which renders incorrectly, either a demo page or an actual Web site. However, it is also possible that the bug report will have no problem page listed, just a problem description.

2. Alternatively, you can find a bug yourself while browsing the Web. In such cases, you will have a Web site that renders incorrectly.

3. You could also find the bug because one of the existing testcases fails. In this case, you have a Web page that renders incorrectly.

4. Finally, the bug may be hypothetical: you might be writing a test suite for a feature without knowing if the feature is broken or not, with the intention of finding bugs in the implementation of that feature. In this case you do not have a Web page, just an idea of what a problem could be.

If you have a Web page showing a problem, move to the next step. Otherwise, you will have to create an initial test case yourself. This is covered in the section on "Creating test cases from scratch" later.


STEP TWO: REMOVING DEPENDENCIES

You have a page that renders incorrectly.

Make a copy of this page and all the files it uses, and update the links so they all point to the copies you made of the files. Make sure that it still renders incorrectly in the same way -- if it doesn't, find out why not. Make your copy of the original files as close as possible to the original environment, as close as needed to reproduce the bug. For example, instead of loading the files locally, put the files on a remote server and try it from there. Make sure the MIME types are the same if they need to be, etc.
Once you have your page and its dependencies all set up and still showing the same problem, embed the dependencies one by one.

For example, to inline an external stylesheet, change markup like this:

    <link rel="stylesheet" href="style.css">

...to this:

    <style>
    /* the contents of style.css, copied inline */
    </style>

Each time you do this, check that you haven't broken any relative URIs and that the page still shows the problem. If the page stops showing the problem, you either made a mistake when embedding the external files, or you found a bug specifically related to the way that particular file was linked. Move on to the next file.


STEP THREE: MAKING THE TEST FILE SMALLER

Once you have put as many of the external dependencies into the test file as you can, start cutting the file down.
Go to the middle of the file. Delete everything from the middle of the file to the end. (Don't pay attention to whether the file is still valid or not.) Check that the error still occurs. If it doesn't, put that part back, and remove the top half instead, or a smaller part.
Continue in this vein until you have removed almost all the file and are left with 20 or fewer lines of markup, or at least, the smallest amount that you need to reproduce the problem.
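The halving procedure described above is essentially a bisection over the file's lines. A sketch of the idea in Python, where the shows_bug predicate stands in for manually reloading the reduced page in the browser:

```python
def reduce_lines(lines, shows_bug):
    """Repeatedly drop half of the file while the bug still reproduces.

    shows_bug(lines) -> bool stands in for the manual check of
    reloading the reduced page and looking for the problem.
    """
    changed = True
    while changed and len(lines) > 1:
        changed = False
        mid = len(lines) // 2
        # Try keeping only the first half, then only the second half.
        for candidate in (lines[:mid], lines[mid:]):
            if shows_bug(candidate):
                lines = candidate
                changed = True
                break
    return lines

# Invented example: the "bug" is triggered by one particular line.
page = [f"line {i}" for i in range(100)]
page[37] = "the line that triggers the bug"
reduced = reduce_lines(page, lambda ls: any("triggers" in l for l in ls))
print(len(reduced))  # a single line remains
```

In practice you perform this check by hand after each cut; the invented predicate here just searches for a marker line, and a real reduction sometimes needs smaller cuts than halves, as the text notes.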
Now, start being intelligent. Look at the file. Remove bits that clearly will have no effect on the bug. For example if the bug is that the text "investments are good" is red but should be green, replace the text with just "test" and check it is still the wrong colour.
Remove any scripts. If the scripts are needed, try doing what the scripts do and then removing them -- for example, replace this:

    <script>document.write("test")</script>

...with:

    test

...and check that the bug still occurs.


Merge any