Monday, April 30, 2007
Importance of Prioritisation
Priority describes the importance and the order (urgency) in which a bug should be fixed. It is not set by the reporter when a bug is created;
the engineering project leader is responsible for setting the priority.
The priority levels are Immediate, High, Normal and Low.
Prioritisation is essential because, by prioritising tests, whenever you stop testing you have done the best possible tests within the available time.
QA Project RISKs management
It was found that RISKs identified in QA projects (unavailability of test data, test phones, software, skills, etc.) are not:
1) Communicated to all stakeholders
2) Tracked and current status is not known
3) Assigned to someone to take action
4) Prioritized
5) Documented (actions taken, etc.)
RISK management is done on an ad hoc basis, and we cannot continue this trend.
QA will use Bugzilla to manage project RISKs by introducing a "RISK" component in every Bugzilla project.
Note: We considered using a new system, but Bugzilla already has all the features required to manage RISKs.
Orthodox shot
In cricket you might have seen batsmen playing orthodox shots.
The aim is to score as many runs as possible.
Similarly, not every technique or process for finding bugs and ensuring quality is documented or taught.
Here are a few ways to find them:
1) Review the database structure and the data in the databases
2) Review the design documents to prevent defects penetrating from upstream phases
3) Evaluate similar products (e.g. Roshni is doing a comparison of Reward+ applications. You can ask for a copy)
4) Attend stand-up meetings to understand the high-RISK areas, and test the critical areas
5) Check the logs to see whether sensitive data (e.g. PINs, passwords, credit card numbers) is being saved
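For point 5, a quick way to scan a log is a case-insensitive grep. This is only a sketch: the log file name, its contents and the keyword list are illustrative, not from any real project.

```shell
# Create a sample log file (name and contents are illustrative)
cat > app.log <<'EOF'
2007-04-30 10:01 user=alice action=login result=ok
2007-04-30 10:02 pin=1234 stored in session
EOF
# Case-insensitive search, with line numbers, for sensitive keywords
# that should never appear in logs
grep -inE 'pin|password|credit' app.log
```

Any hit is worth raising as a potential security bug.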
A good tester finds many critical bugs early
A list of traits of a software tester
1. Software testers are explorers.
2. They are troubleshooters.
3. They are relentless.
4. They are creative.
5. They are (mellowed) perfectionists.
6. They exercise good judgment.
7. They are tactful and diplomatic.
8. They are persuasive.
"A fundamental trait of software testers is that they simply like to break things. They live to find those elusive system crashes. They take great satisfaction in laying to waste the most complex programs. They are often seen jumping up and down in glee, giving each other high-fives, and doing a little dance when they bring a system to its knees. It's the simple joys of life that matter the most." (Extract from Software Testing by Ron Patton)
Let's try to cultivate these habits in ourselves and become exceptional testers.
Information to improve yourself
Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of and adherence to software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition life cycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done in relation to the applicable standards.
Software Quality Assurance Activities
Product evaluation and process monitoring are the SQA activities that assure the software development and control processes described in the project's Management Plan are correctly carried out and that the project's procedures and standards are followed. Products are monitored for conformance to standards and processes are monitored for conformance to procedures. Audits are a key technique used to perform product evaluation and process monitoring.
Review of the Management Plan should ensure that appropriate SQA approval points are built into these processes.
Product evaluation is an SQA activity that assures standards are being followed. Ideally, the first products monitored by SQA should be the project's standards and procedures. SQA assures that clear and achievable standards exist and then evaluates compliance of the software product to the established standards. Product evaluation assures that the software product reflects the requirements of the applicable standard(s) as identified in the Management Plan.
Process monitoring is an SQA activity that ensures that appropriate steps to carry out the process are being followed. SQA monitors processes by comparing the actual steps carried out with those in the documented procedures.
The Assurance section of the Management Plan specifies the methods to be used by the SQA process monitoring activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product in depth, comparing them to established procedures and standards. Audits are used to review management, technical, and assurance processes to provide an indication of the quality and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being followed, that required documentation is maintained, and that the developer's status reports accurately reflect the status of the activity. The SQA product is an audit report to management consisting of findings and recommendations to bring the development into conformance with standards and/or procedures.
Some Definitions
What is Automated Testing?
Testing employing software tools which execute tests without manual intervention; it can be applied to GUI, performance, API, etc. testing. The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
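As a toy illustration of that definition (not from the original source), an automated check runs the program under test, compares the actual outcome to the predicted outcome, and reports the result without manual intervention. The "program under test" here is just a stand-in command:

```shell
# Program under test: a hypothetical stand-in command
actual=$(echo "hello" | tr 'a-z' 'A-Z')
expected="HELLO"
# Automated comparison of actual vs. predicted outcome
if [ "$actual" = "$expected" ]; then
  echo "PASS"
else
  echo "FAIL: got '$actual'"
fi
```

Real automation frameworks do the same comparison and reporting, just at a much larger scale.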
What is Baseline?
The point at which some deliverable produced during the software engineering process is put under formal change control.
What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
What is Test Bed?
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
What are Test Tools?
Computer programs used in the testing of a system, a component of the system, or its documentation.
Organisational structures for testing
1. Developer responsibility (only):
Subjective assessment of their own work. They can find and fix faults cheaply, but they have a tendency to 'see' expected results, not actual results.
2. Development team responsibility (buddy system):
Has some independence, on friendly terms with the "buddy", but may lack testing skills.
3. Tester(s) on the development team:
Dedicated to testing, with no development responsibility, and part of the team working towards the same goal: quality. But they work from a single view/opinion.
4. Dedicated team of testers (Independent test team):
A dedicated team that only does testing, with specialist testing expertise; testing is more objective and more consistent.
5. Internal test consultants (advice, review, support):
Highly specialist testing expertise, providing support and helping to improve the testing done by all, plus better planning, estimation and control from a broad view of testing in the organisation. But they do not perform the testing, so someone still has to do it.
6. Outside organisation (3rd party testers):
Highly specialist testing expertise, independent of internal politics. Associated disadvantages are: lack of company and product knowledge, expertise gained leaves the company, and higher cost.
A Reminder When Testing Reports
We should at least check the top-level data values in the DB by writing a simple query. In some past projects there have been significant differences between the actual DB contents and the values given in the reports.
A simple example is:
SELECT REASON_CODE, SUM(AMOUNT) FROM CC_CREDIT_ADUSTMENTS WHERE CREDIT_DIRECTION='DECR' GROUP BY REASON_CODE;
+-------------+-------------+
| REASON_CODE | SUM(AMOUNT) |
+-------------+-------------+
|        2000 |    2746.928 |
|        2001 |    3312.494 |
|        2002 |    3427.635 |
+-------------+-------------+
If the top-level query results do not match the report contents, there is likely a serious issue and we should take the necessary actions.
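One lightweight way to do that cross-check is to export the table data and re-compute the totals independently, then compare them against the report. This is only a sketch: the CSV file, its column layout and its values are assumptions standing in for a real export from the project database.

```shell
# Hypothetical CSV export of the adjustments table: reason_code,direction,amount
cat > adjustments.csv <<'EOF'
2000,DECR,1000.500
2000,DECR,1746.428
2001,DECR,3312.494
2002,DECR,3427.635
EOF
# Recompute SUM(AMOUNT) per REASON_CODE for the DECR direction,
# mirroring the GROUP BY query above
awk -F, '$2=="DECR" {sum[$1]+=$3}
         END {for (r in sum) printf "%s %.3f\n", r, sum[r]}' adjustments.csv | sort
```

If the recomputed totals differ from the report's figures, either the report logic or the export is wrong, and the discrepancy should be investigated before release.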
RPM - Redhat Package Manager.
RPM stands for Redhat Package Manager.
It's a powerful package manager available on Linux.
It can install, query, remove and verify software on your system.
Here are a few useful commands:
1) Install an RPM
$ rpm -ivh <package-file.rpm>
2) Upgrade an RPM
$ rpm -Uvh <package-file.rpm>
3) Remove an RPM installation
$ rpm -e <package-name>
4) Query an RPM installation
$ rpm -qi <package-name>
Try $ man rpm and documents on the web for more information.
About FTP
Here are some useful and common commands that you should be familiar with:
#To connect your local machine to a remote machine:
ftp <hostname>
#To copy multiple files / all files from the remote to the local machine:
mget <file1> <file2> ... / mget *
#To copy multiple files from the local machine to the remote machine:
mput <file1> <file2> ...
#To copy a single file from the remote machine:
get <filename>
#lcd - change the working directory on the local machine
#rmdir - remove a directory on the remote machine
#bye - same as quit (exit from the FTP environment)
#close - terminate the connection with the remote computer
A few GUI standards
Login Screen
- User ID and Password should be captured
- Password field should be masked
- Errors on invalid credentials should be descriptive
- Field Captions should be same across all the modules
Copyright Statement
- There should be a copyright statement in the GUI
- It shall contain
- Project started year
- Current year
- e.g.
Copyright © 2005 - 2007 ABC (Pvt.) Ltd.
Change Password Screen
- Old, New and Confirm New passwords should be captured
- All password fields should be masked
Use of the auto-complete feature
- Auto-completion can be turned on for frequently used fields
- It SHOULD NOT be used for fields with sensitive data, e.g. Password, PIN, Credit Card Number
Font Usage
Grammar and Spelling
- Text appearing in GUI pages should be checked for spelling and grammar
- MS Word can be used for simple checking
Logout
- There should be a link to logout from current session
- Sessions should expire after a period of inactivity
Error messages
- Error messages should be descriptive
- Should be consistent
- Should be short
- It would be nice if they are configurable
Acknowledgements
- Users should be acknowledged on events/status changes (e.g. on Update, Delete, Add, etc.)
- Messages should be consistent
- Messages should be short
Consistency
- Look and feel should be same in all supported browsers
- Look and feel should be same across all the modules
Title Text
- Title should be appropriate to the current screen
- Should be short
Use of controls
- Appropriate controls should be used in the HTML pages
Length of text inputs
- The length of fields should be sufficient for users to enter the longest possible value
Use of Check boxes vs Radio buttons
- Radio buttons are used when there is a list of two or more options that are mutually exclusive and the user must select exactly one choice. (e.g. primary notification method SMS, Email, Voice Mail)
- Checkboxes are used when there are lists of options and the user may select any number of choices, including zero, one, or several. (e.g. Assign permissions)
- A stand-alone checkbox is used for a single option that the user can turn on or off. (e.g Enable /Disable a user)
Paging
- Paging should be introduced if the user has to scroll down to see the list of records
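Paging is normally implemented in the query layer (e.g. SQL LIMIT/OFFSET), but the idea can be sketched in shell with head and tail. The record file, page number and page size below are purely illustrative:

```shell
seq 1 35 > records.txt   # 35 sample records, one per line
page=2 size=10           # request page 2, 10 records per page
# Skip the first (page-1)*size records, then take one page
tail -n +"$(( (page - 1) * size + 1 ))" records.txt | head -n "$size"
```

This prints records 11 through 20, exactly what the second page of the list should show.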
Default Values
- The default value(s) should be those most frequently used by users
Test Cases Should Be Reviewed By Team Members
The test case report should be reviewed by the appropriate team members (project manager, programmers, and other testers).
Reviewers should identify missing test cases and test cases that are unclear.
The review meeting should take only about 30 minutes; each reviewer should come to the meeting prepared, simply to identify missing or unclear test cases.
The tester will then make the changes and mark that requirement as Test Ready.
Sunday, April 29, 2007
How to learn from experience
When you're working on a project for the first time there will be a lot of things to learn, and as you proceed to other projects you will acquire more knowledge during each project.
e.g.:
* How to identify important test cases and which shortcuts to use to save time
* How to cover all the areas of a system without spending a large amount of time testing every possibility available, etc.
How to make use of the knowledge you acquire during a project:
1. Make sure you note down all the important things that you learn during a project.
2. Learn from your mistakes and make sure they do not happen again.
3. Share the good practices and important lessons learned with others.
4. Identify your weak areas and improve in them.
These are some basic things, but it's very important for you to keep them in mind.
SQA Relationships to Other Assurance Activities
Do you know the SQA relationships to other assurance activities? Here are some of the more important relationships of SQA to other management and assurance activities:
1. Configuration Management Monitoring
SQA assures that software Configuration Management (CM) activities are performed in accordance with the CM plans, standards, and procedures. SQA reviews the CM plans for compliance with software CM policies and requirements and provides follow-up for nonconformances. SQA audits the CM functions for adherence to standards and procedures and prepares reports of its findings. SQA also monitors and audits the software library.
2. Verification and Validation Monitoring
SQA assures Verification and Validation (V&V) activities by monitoring technical reviews, inspections, and walkthroughs. The SQA role in reviews, inspections, and walkthroughs is to observe, participate as needed, and verify that they were properly conducted and documented. SQA also ensures that any actions required are assigned, documented, scheduled, and updated.
3. Formal Test Monitoring
SQA assures that formal software testing, such as acceptance testing, is done in accordance with plans and procedures. SQA reviews testing documentation for completeness and adherence to standards. The documentation review includes test plans, test specifications, test procedures, and test reports. SQA monitors testing and provides follow-up on nonconformances. By test monitoring, SQA assures software completeness and readiness for delivery.
Software testing verifies that the software meets its requirements. The quality of testing is assured by verifying that project requirements are satisfied and that the testing process is in accordance with the test plans and procedures.
(Ref:http://satc.gsfc.nasa.gov/assure)
The art of reporting bugs
"How well you report a bug directly affects how likely the programmer is to fix it."
*1*. Test early and test often.
*2*. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.
*3*. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
*4*. Develop a comprehensive test plan; it forms the basis for the testing methodology.
*5*. Use both static and dynamic testing.
*6*. Define your expected results.
*7*. Understand the business reason behind the application. You'll write a better application and better testing scripts.
*8*. Use multiple levels and types of testing (regression, systems, integration, stress and load).
*9*. Review and inspect the work; it will lower costs.
*10*. Don't let your programmers check their own work; they'll miss their own errors.
Some Common Mistakes in Test Automation
As QA Engineers, Test Automation is a major area where we need to improve our skills further. Automating the test cases for a project will save a lot of the time and manual intervention needed to test a project when we get a sudden release that must be tested immediately. I have found a set of common mistakes in Test Automation that we should take into account when performing it in live projects. Here are only a few of them; for more information please visit:
http://www.stickyminds.com/getfile.asp?ot=XML&id=2901&fn=XDD2901filelistfilename1.pdf
1) Confusing automation and testing: Testing is a skill. While this may come as a surprise to some people, it is a simple fact. For any system there are an astronomical number of possible test cases, yet in practice we have time to run only a very small number of them. This small number of test cases is expected to find most of the bugs in the software, so the job of selecting which test cases to build and run is an important one. Both experiment and experience have told us that selecting test cases at random is not an effective approach to testing. A more thoughtful approach is required if good test cases are to be developed.
2) Believing capture/replay = automation: Capture/replay technology is indeed a useful part of test automation, but it is only a very small part of it. The ability to capture all the keystrokes and mouse movements a tester makes is an enticing proposition, particularly when these exact keystrokes and mouse movements can be replayed by the tool time and time again. The test tool records the information in a file called a script. When it is replayed, the tool reads the script and passes the same inputs and mouse movements on to the software under test, which usually has no idea that it is a tool controlling it rather than a real person sitting at the keyboard. In addition, the test tool generates a log file, recording precise information on when the replay was performed and perhaps some details of the machine.
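The record-then-replay idea can be sketched in shell. This is a deliberately tiny illustration: the file name and "actions" are made up, and real capture/replay tools record GUI events rather than echo commands.

```shell
# "Capture": record a session of actions into a script file
cat > session.script <<'EOF'
echo "open login page"
echo "type user=qa"
echo "click submit"
EOF
# "Replay": run the recorded script; the same inputs are sent every time
sh session.script
```

The replay is faithful but blind: it repeats the inputs without judging the outputs, which is exactly why capture/replay alone is not test automation.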
3) Verifying only screen-based information: Testers are usually seen sitting in front of a computer screen, so it is perhaps natural to assume that only the information output to the screen by the software under test is checked. This view is further strengthened by many of the testing tools that make it particularly easy to check information that appears on the screen, both during a test and after it has been executed. However, this assumes that a correct screen display means that all is OK, whereas it is often the output that ends up elsewhere that is far more important. Just because information appears on the screen correctly does not guarantee that it has been recorded elsewhere correctly.
4) Trying to automate too much: There are two aspects to this: automating too much too soon, and automating too much, full stop. Automating too much early on leaves you with a lot of poorly automated tests which are difficult (and therefore costly) to maintain and susceptible to software changes. It is much better to start small. Identify a few good, but diverse, tests (say 10 or 20 tests, or 2 to 3 hours' worth of interactive testing) and automate them on an old (stable) version of the software, perhaps a number of times, exploring different techniques and approaches.
SQA, Become a good Tester
SQA consists of the software engineering processes and methods used to ensure quality. It encompasses the entire software development process, which may include processes such as reviewing requirements documents, source code control, code reviews, change management, configuration management, release management and, of course, software testing.
Software quality assurance is related to the practice of quality assurance in product manufacturing. There are, however, some notable differences between software and a manufactured product. These differences all stem from the fact that the manufactured product is physical and can be seen whereas the software product is not visible. Therefore its function, benefit and costs are not as easily measured.
(Source www.wikipedia.org)
How to become a good Tester
Sometimes we tend to play the role of being the Quality Police. We enforce quality standards, identify programmers who are not following procedures, and sometimes go to a personal level to accuse programmers whom we feel are producing inferior work. This role traps us into being less effective as testers. And it’s more likely to undermine the project by discouraging communication, reducing trust, and causing delays.
A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions.
However, many testers take on additional “responsibilities.” They criticise programmers for shoddy work or for not following proper procedures. Or they try to mandate how programmers should operate. Or they snipe at the design instead of finding bugs. These testers may refuse to test builds that don’t have sufficient documentation or refuse to research bugs that shouldn’t have been there in the first place. They think that programmers require discipline and are determined to give it to them.
Some testers adopt the attitudes of the quality police on their own initiative. Others do so at the prompting of their managers or the advice of authors and consultants. Let’s look at some of the beliefs that can lead to trouble.
So let's not try to get personal but do our job in the most effective way.
(Extracts from www.stickyminds.com)
How is software fully tested for high quality before moving to production?
Quality Assurance begins during Specification Review. The QA team should have the opportunity to review each specification, find issues and make suggestions. The QA team should look for unclear requirements, requirements that are not defined well enough to create test cases around, and requirements that might affect other areas of the software. Time spent in requirements review and correction can pay big dividends.
How to file a great bug report
Given below are the steps to file a great bug report.
When writing bug reports, include the exact cause of the bug. If you can't find the exact cause, isolate the bug to a certain area so that the developer can focus his attention on that area to fix it. The quicker the engineer can isolate the bug to a specific area, the more likely he/she will fix it expediently.
1) Use bug titles that fully capture the essence of the bug and provide enough information to differentiate it from other bugs.
2) Use clear and minimal reproducible steps.
3) Avoid jargon or terms that may be difficult for others to understand. Make sure developers can easily understand what the bug report is, so they can save time resolving it.
4) Use attachments; they are extremely helpful, because a picture is worth more than a thousand words.
Tips for writing effective bug reports:
We should read the latest documentation thoroughly. The documentation describes what we can do with the program. If it turns out that the program does something different from what the documentation says, that is a bug. But be careful to distinguish actual bugs from misbehaviour that happens due to a fault in your testing environment. For example, if the program fails because the disk is full, that should not be reported as a bug, because that should be fixed by yourself.
Important QA Testing Levels
Here are some important QA testing levels that you should know.
(Ref: http://en.wikipedia.org/wiki/Software_testing)
1. System testing: the software is integrated into the overall product and tested to show that all requirements are met.
2. Functional testing: tests the product's functions against its specified behaviour.
3. Black box testing: takes an external perspective of the test object to derive test cases. These tests can be functional or non-functional, though usually functional.
4. White box testing (clear box testing, glass box testing or structural testing): uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software.
5. Usability testing: a means of measuring how well people can use some human-made object (such as a web page or interface); it measures the usability of the object.
6. Compatibility testing: part of software non-functional testing, conducted on the application to evaluate its compatibility with the computing environment.
7. Performance testing: from one perspective, determines how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability and reliability, and can demonstrate that the system meets performance criteria. In performance testing it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use.
8. Sanity check: a sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or calculation.
9. Regression testing: regression bugs occur whenever software functionality that previously worked as desired stops working or no longer works in the way that was previously planned, typically as an unintended consequence of program changes. The term also refers to the repetition of earlier successful tests to ensure that changes made to the software have not introduced new bugs or side effects.
10. Unit testing: each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. It tests the minimal software components and sub-components or modules, and is done by the programmers.
11. Integration testing: progressively larger groups of tested software components, corresponding to elements of the architectural design, are integrated and tested until the software works as a whole.
12. Acceptance testing: testing that can be conducted by the client. It allows the end-user, customer or client to decide whether or not to accept the product. Acceptance testing may be performed after system testing and before the implementation phase.
Alpha testing: simulated or actual operational testing by potential users/customers or an independent test team at the developers' site.
Beta testing: comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside the company.
13. Static testing (dry run testing): syntax checking and manually reading the code to find errors are methods of static testing. This type of testing is mostly used by the developer who designed or coded the module. Static testing is usually the first type of testing done on any system; it assesses whether the program is ready for more detailed testing, using methods such as code review, inspection and walkthrough. Static testing also applies to white box testing.
Some Less Talked Best Practices of QA and Testing
I have found the following best practices for QA and Testing. They mainly highlight better inclusion of the QA group in projects by the other project members. I have added some extracts from the source document as well.
1) Involve testers early in the development cycle:
One director of quality management says his quality assurance workers meet with business analysts and business users before developers even start writing code, while requirements are being written, to determine what requirements they ought to test and to develop test cases for each requirement.
2) Establish quality checkpoints or milestones throughout the entire development cycle. A model company is eBay:
eBay's first milestone occurs when the QA and product development groups review requirements. The second milestone occurs before development ends, when eBay's product development and project management groups review the QA team's test plan to make sure it is adequate. Just before QA begins testing, the third checkpoint occurs as the development group shows QA that their code meets all functional and business requirements.
3) Write a tech guide:
A lot of the problems that come up when you're testing software are a result of people not knowing the right way to do certain things. If anyone has a question about the best way to approach a specific task, they can refer to the tech guide without having to request continuous knowledge transfers.
4) Centralize your test groups:
Centralizing testers into one group, as opposed to staffing testers by application area, ensures that testers share best practices and lessons learned when they come off a project.
5) Raise testers' awareness of their value:
Highlighting the importance of their work and the impact it has on the company improves their morale and makes them approach their job with even more diligence.
6) Cross-train developers and testers in each other's roles:
Cross-training is an excellent way to foster understanding between testers and developers and thus improve relations between the two groups.
Important QA Documents
I. PRAD
The Product Requirement Analysis Document is the document prepared/reviewed by marketing, sales, and technical product managers. This document defines the requirements for the product, the "What". It is used by the developer to build his/her functional specification and used by QA as a reference for the first draft of the Test Strategy.
II. Functional Specification
The functional specification is the "How" of the product. The functional specification identifies how new features will be implemented. This document includes items such as what database tables a particular search will query. This document is critical to QA because it is used to build the Test Plan.
QA is often involved in reviewing the functional specification for clarity and helping to define the business rules.
III. Test Strategy
The Test Strategy is the first document QA should prepare for any project. This is a living document that should be maintained/updated throughout the project. The first draft should be completed upon approval of the PRAD and sent to the developer and technical product manager for review.
The Test Strategy is a high-level document that details the approach QA will follow in testing the given product. This document can vary based on the project, but all strategies should include the following criteria:
· Project Overview - What the project is.
· Project Scope - The core components of the product to be tested.
· Testing - This section defines the test methodology to be used, the types of testing to be executed (GUI, Functional, etc.), how testing will be prioritized, the testing that will and will not be done, and the associated risks. This section should also outline the system configurations that will be tested and the tester assignments for the project.
· Completion Criteria - The objective criteria upon which the team will decide the product is ready for release.
· Schedule - This should define the schedule for the project and include completion dates for the PRAD, Functional Spec, Test Strategy, etc. The schedule section should include build delivery dates, release dates, and the dates for the Readiness Review, QA Process Review, and Release Board Meetings.
· Materials Consulted - The documents used to prepare the test strategy.
· Test Setup - This section should identify all hardware/software and personnel prerequisites for testing. It should also identify any areas that will not be tested (such as 3rd-party application compatibility).
IV. Test Matrix (Test Plan)
The Test Matrix is the Excel template that identifies the test types (GUI, Functional etc.), the test suites within each type, and the test categories to be tested. This matrix also prioritizes test categories and provides reporting on test coverage.
· Test Summary report
· Test Suite Risk Coverage report
Upon completion of the functional specification and test strategy, QA begins building the master test matrix. This is a living document and can change over the course of the project as testers create new test categories or remove non-relevant areas. Ideally, a master matrix need only be adjusted to include new feature areas or enhancements from release to release on a given product line.
V. Test Cases
As testers build the Master Matrix, they also build their individual test cases. These are the specific functions testers must verify within each test category to qualify the feature. A test case is identified by ID number and prioritized. Each test case has the following criteria:
· Purpose - Reason for the test case
· Steps - A logical sequence of steps the tester must follow to execute the test case
· Expected Results - The expected result of the test case
· Actual Result - What actually happened when the test case was executed
· Status - Identifies whether the test case passed, failed, was blocked, or was skipped:
· Pass - Actual result matched the expected result.
· Failed - A bug was discovered that represents a failure of the feature.
· Blocked - The tester could not execute the test case because of a bug.
· Skipped - The test case was not executed this round.
· Bug ID - If the test case was failed, identify the bug number of the resulting bug.
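As a sketch, the test case criteria above can be captured in a simple record structure. The field and class names here are illustrative, not taken from any specific test management tool:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Status(Enum):
    PASS = "Pass"        # actual result matched expected result
    FAILED = "Failed"    # bug discovered; bug_id should be set
    BLOCKED = "Blocked"  # test case could not be executed because of a bug
    SKIPPED = "Skipped"  # test case was not executed this round

@dataclass
class TestCase:
    case_id: str                  # identifying ID number, e.g. "TC-0042"
    priority: int                 # 1 = highest
    purpose: str                  # reason for the test case
    steps: list[str] = field(default_factory=list)  # logical sequence to follow
    expected_result: str = ""
    actual_result: str = ""
    status: Optional[Status] = None
    bug_id: Optional[str] = None  # filled in only when the case fails

# Example: recording the outcome of one executed case.
tc = TestCase("TC-0001", 1, "Verify login with valid credentials",
              ["Open login page", "Enter valid user/password", "Click Login"],
              expected_result="User lands on dashboard")
tc.actual_result = "User lands on dashboard"
tc.status = Status.PASS
```

A failed case would additionally carry the resulting bug number in `bug_id`, which keeps the test matrix and the bug tracker linked.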
Listed below are a few more important QA documents.
VI. Test Results by Build
VII. Release Package
Preparing for Quality Assurance (QA)
Test cases should be developed while coding is progressing. If you wait until the QA phase begins to create your test suite, test cases will be rushed and your team will not have time to fully review each suite of test cases for each requirement. Test case development should begin the day coding starts, and testers should be assigned to create test cases for specific requirements.
Testers should remember to create test cases for:
**Positive Testing -
These test cases ensure the software will work exactly as specified in the requirement.
**Negative Testing -
These test cases are used to try to "trick" the software. For example, try entering an invalid date in a date field, character data in a numeric field, or a date range where the "from" date is later than the "to" date. This also includes entering data that contains apostrophes, as this tends to trip up SQL-based systems.
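A minimal sketch of negative tests against a hypothetical date-range validator. The `validate_range` function and its rules are assumptions for illustration; the point is that each bad input should be rejected cleanly, not crash the application:

```python
from datetime import date

def validate_range(from_str: str, to_str: str) -> bool:
    """Hypothetical validator: accepts only ISO dates with from <= to."""
    try:
        start = date.fromisoformat(from_str)
        end = date.fromisoformat(to_str)
    except ValueError:
        return False      # malformed or non-date input is rejected
    return start <= end   # reversed ranges are rejected

# Negative tests: each of these inputs should be rejected.
assert not validate_range("2007-13-45", "2007-05-01")  # invalid date
assert not validate_range("abc", "2007-05-01")         # character data in a date field
assert not validate_range("2007-06-01", "2007-05-01")  # "from" later than "to"
```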
**Bounds Testing -
These test cases test the bounds of each field. For example, if a field is defined as 50 characters, try entering 60 characters.
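The 50-character example above can be sketched as a pair of bounds tests against a hypothetical save routine (the function and its limit are assumptions for illustration):

```python
MAX_LEN = 50  # field defined as 50 characters (assumed limit)

def save_name(name: str) -> str:
    """Hypothetical save routine that enforces the field's bound."""
    if len(name) > MAX_LEN:
        raise ValueError("name exceeds 50 characters")
    return name

# Bounds tests: exactly at the limit passes; one notch over must fail.
assert save_name("x" * 50) == "x" * 50
try:
    save_name("y" * 60)
    raise AssertionError("60-character input should have been rejected")
except ValueError:
    pass  # rejection is the expected behaviour
```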
**Relational Testing -
These test cases test parent-child relationships. For example, if you are testing a parent-child feature (e.g. an invoice may have one or more invoice line items), try deleting the parent (the invoice in this example), then ensure that all the child items (the invoice line items) were deleted as well.
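The invoice example can be sketched with an in-memory stand-in for the two tables. The dictionaries and the cascade logic here are illustrative assumptions, not a real data layer; the relational test is the check at the end that no orphaned children remain:

```python
# In-memory stand-in for an invoice table and its line items (illustrative only).
invoices = {101: {"total": 250.0}}
line_items = {1: {"invoice_id": 101}, 2: {"invoice_id": 101}}

def delete_invoice(invoice_id: int) -> None:
    """Delete the parent invoice and cascade to its child line items."""
    invoices.pop(invoice_id, None)
    orphans = [item_id for item_id, li in line_items.items()
               if li["invoice_id"] == invoice_id]
    for item_id in orphans:
        del line_items[item_id]

delete_invoice(101)

# Relational test: parent is gone, and no child rows still point at it.
assert 101 not in invoices
assert all(li["invoice_id"] != 101 for li in line_items.values())
```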
**Performance Testing -
These test cases ensure that the new release will perform as quickly as (or quicker than) the past release. To test this, prior to the new release, try different actions (add a record, search for a record, update a record, delete a record, etc.) and record your timings in a spreadsheet. Once the new release is in the QA environment, run those same tests and record the new timings; this will tell you whether performance has improved or degraded.
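The record-and-compare approach above can be sketched as follows. The baseline numbers and the `search_records` stand-in are hypothetical; in practice you would time the real actions against each release:

```python
import time

def timed(action) -> float:
    """Return how long an action takes, in seconds (wall clock)."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

# Baseline timings recorded against the previous release (hypothetical numbers,
# playing the role of the spreadsheet mentioned above).
baseline = {"search": 0.80, "update": 0.45}

def search_records():
    time.sleep(0.01)  # stand-in for the real search action

# Re-run the same action against the new release and compare.
new_timing = timed(search_records)
verdict = "improved" if new_timing <= baseline["search"] else "degraded"
print(f"search performance {verdict}")
```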
**Regression Testing -
Upon a successful test cycle, some of the test cases above should be marked as regression test cases so that they are run in future releases to ensure that existing features continue to work as designed.
**Smoke Tests -
Once all the test cases are designed for all the requirements, a small set (10 to 30 test cases) of the positive test cases should be identified as Smoke Tests. These will be run prior to beginning the major testing effort. If any of these fail, the underlying bugs should be fixed before full testing begins so that the testing team's time is well spent.
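Selecting the smoke set can be sketched as a filter over the test case list: keep only positive cases and take the highest-priority ones. The case records and the set size here are illustrative assumptions (a real project would pick 10 to 30):

```python
# Each test case carries a type and a priority; smoke tests are a small
# high-priority subset of the positive cases.
cases = [
    {"id": "TC-001", "type": "positive", "priority": 1},
    {"id": "TC-002", "type": "negative", "priority": 1},
    {"id": "TC-003", "type": "positive", "priority": 2},
    {"id": "TC-004", "type": "positive", "priority": 1},
]

SMOKE_SET_SIZE = 2  # illustrative; real projects would use 10 to 30

smoke = sorted(
    (c for c in cases if c["type"] == "positive"),
    key=lambda c: c["priority"],
)[:SMOKE_SET_SIZE]

print([c["id"] for c in smoke])  # highest-priority positive cases first
```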