Tuesday, February 05, 2008

Estimating Defects by Function Point

There is a strong relationship between the number of test cases and the number of function points. As expected, there is also a strong relationship between the number of defects and both the number of test cases and the number of function points.

The number of acceptance test cases can be estimated by multiplying the number of function points by 1.2. Like function points, acceptance test cases should be independent of technology and implementation techniques.
For example, if a software project is 100 function points, the estimated number of acceptance test cases is 120. Estimating the number of potential defects is more involved.
Estimating Defects
Intuitively, the maximum number of potential defects is equal to the number of acceptance test cases, which is 1.2 x the number of function points.

Preventing, Discovering and Removing Defects
To reduce the number of defects delivered with a software project, an organization can engage in a variety of activities. While defect prevention is much more effective and efficient at reducing the number of defects, most organizations rely on defect discovery and removal, which is an expensive and inefficient process.
Defect Removal Efficiency
If an organization has no defect prevention methods in place, then it is totally reliant on defect removal efficiency.

1. Requirements Reviews: up to 15% removal of potential defects
2. Design Reviews: up to 30% removal of potential defects
3. Code Reviews: up to 20% removal of potential defects
4. Formal Testing: up to 25% removal of potential defects


In other words, if your organization is great at defect removal, the maximum percentage of defects you can expect to remove is 90%. If a software project is 100 function points, the total number of maximum (or potential) defects could be 120. Even if you were perfect at defect removal, your project would still have up to 12 defects after all your defect discovery and removal efforts. The vast majority of organizations would receive a B (medium) or even a D (poor) at defect removal efficiency.
Activity                    Perfect   Medium   Poor
Requirements Reviews          15%       5%      0%
Design Reviews                30%      15%      0%
Code Reviews                  20%      10%      0%
Formal Testing                25%      15%     15%
Total Percentage Removed      90%      45%     15%
Defect Discovery and Removal (Total Defects Remaining)

Size in Function Points   Max Defects   Perfect   Medium     Poor
100                             120          12       66      102
200                             240          24      132      204
500                             600          60      330      510
1,000                         1,200         120      660    1,020
2,500                         3,000         300    1,650    2,550
5,000                         6,000         600    3,300    5,100
10,000                       12,000       1,200    6,600   10,200
20,000                       24,000       2,400   13,200   20,400
An organization with a project of 2,500 function points that is about medium at defect discovery and removal would have 1,650 defects remaining after all removal and discovery activities. The calculation is 2,500 x 1.2 = 3,000 potential defects. The organization would be able to remove about 45% of the defects, or 1,350 defects. The total potential defects (3,000) less the removed defects (1,350) equals the remaining 1,650 defects.
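A minimal sketch of this arithmetic in code (assuming the 1.2 multiplier and the removal-efficiency totals from the table above; the function names and grade labels are just illustrative):

    # Rough sketch: 1.2 potential defects per function point, reduced by the
    # overall removal-efficiency total for each grade of organization.
    REMOVAL_EFFICIENCY = {"perfect": 0.90, "medium": 0.45, "poor": 0.15}

    def potential_defects(function_points):
        """Maximum potential defects, estimated as 1.2 x function points."""
        return 1.2 * function_points

    def remaining_defects(function_points, grade):
        """Defects expected to remain after discovery and removal."""
        return potential_defects(function_points) * (1 - REMOVAL_EFFICIENCY[grade])

    print(remaining_defects(2500, "medium"))   # 1650.0
    print(remaining_defects(100, "perfect"))   # 12.0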
Defect Prevention

If an organization concentrates on defect prevention (instead of defect detection), then the number of defects inserted or created is much smaller. The amount of time and effort required to discover and remove these defects is also much smaller.
1. Roles and Responsibilities Clearly Defined: up to 15% reduction in number of defects created
2. Formalized Procedures: up to 25% reduction in number of defects created
3. Repeatable Processes: up to 35% reduction in number of defects created
4. Controls and Measures in Place: up to 30% reduction in number of defects created

Imagine an organization with items 1 and 2 in place. A project with 100 function points would have a potential of 120 defects, but since preventative measures are in place, the number of potential defects is reduced by 48 (40% = 15% + 25%). That leaves 72 potential defects, compared to 120 with no preventative efforts. Assuming the organization is medium at defect discovery and removal, it could remove 45% of the remaining defects, leaving about 40 when the project rolls to production.
Defect Removal

Size in Function Points   Max Defects   Prevention   Medium
100                             120           72         40
200                             240          144         79
500                             600          360        198
1,000                         1,200          720        396
2,500                         3,000        1,800        990
5,000                         6,000        3,600      1,980
10,000                       12,000        7,200      3,960
20,000                       24,000       14,400      7,920
The above table shows the number of defects remaining for an organization that has items 1 and 2 above in place and is medium at defect discovery and removal.
The problem of estimating defects is multidimensional. First, the total number of defects must be estimated. Second, the impact of defect prevention needs to be understood and the estimated number of defects adjusted. Third, an assessment needs to be done to understand how many defects can be discovered and removed by the organization.
Clearly, the fewer defects an organization must discover and remove, the better. This is accomplished through better, more repeatable processes and a more stable organization. The focus of software organizations needs to be on defect prevention instead of defect detection.
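A minimal sketch of the full three-step estimate (using the multiplier, prevention percentages, and removal percentages from this article; the function and key names are illustrative):

    # Three-step defect estimate: 1) estimate potential defects,
    # 2) adjust for the prevention practices in place, 3) subtract what
    # discovery and removal is expected to find.
    PREVENTION = {                       # reduction in defects created
        "roles_defined": 0.15,
        "formal_procedures": 0.25,
        "repeatable_processes": 0.35,
        "controls_and_measures": 0.30,
    }

    def estimate_remaining(function_points, practices=(), removal_efficiency=0.45):
        potential = 1.2 * function_points                          # step 1
        prevented = min(sum(PREVENTION[p] for p in practices), 1.0)
        after_prevention = potential * (1 - prevented)             # step 2
        return after_prevention * (1 - removal_efficiency)         # step 3

    # 100 function points, items 1 and 2 in place, medium (45%) removal
    print(round(estimate_remaining(100, ("roles_defined", "formal_procedures"))))  # 40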

Checklist for Conducting Unit Testing

  • Is the number of input parameters equal to number of arguments?
  • Do parameter and argument attributes match?
  • Do parameter and argument units system match?
  • Is the number of arguments transmitted to called modules equal to number of parameters?
  • Are the attributes of arguments transmitted to called modules equal to attributes of parameters?
  • Is the units system of arguments transmitted to called modules equal to units system of parameters?
  • Are the number of attributes and the order of arguments to built-in functions correct?
  • Are any references to parameters not associated with current point of entry?
  • Have input-only arguments been altered?
  • Are global variable definitions consistent across modules?
  • Are constraints passed as arguments?
  • When a module performs external I/O, additional interface tests must be conducted.
  • File attributes correct?
  • OPEN/CLOSE statements correct?
  • Format specification matches I/O statement?
  • Buffer size matches record size?
  • Files opened before use?
  • End-of-file conditions handled?
  • Any textual errors in output information?
  • Improper or inconsistent typing
  • Erroneous initialization or default values
  • Incorrect (misspelled or truncated) variable names
  • Inconsistent data types
  • Underflow, overflow, and addressing exceptions
  • Has the component interface been fully tested?
  • Have local data structures been exercised at their boundaries?
  • Has the cyclomatic complexity of the module been determined? (see the sketch after this list)
  • Have all independent basis paths been tested?
  • Have all loops been tested appropriately?
  • Have data flow paths been tested?
  • Have all error handling paths been tested?
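For the cyclomatic complexity item above, here is a minimal sketch of one way to approximate it for Python code with the standard library ast module (counting decision points; the sample function is illustrative):

    import ast

    def cyclomatic_complexity(source):
        """Approximate McCabe complexity: 1 + number of decision points."""
        decisions = 0
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                                 ast.ExceptHandler)):
                decisions += 1
            elif isinstance(node, ast.BoolOp):    # 'and' / 'or' chains
                decisions += len(node.values) - 1
        return decisions + 1

    sample = "def f(x):\n    if x > 0 and x < 10:\n        return x\n    return 0"
    print(cyclomatic_complexity(sample))  # 3 -> at least 3 basis paths to cover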

Web Testing Checklist Part 3

General
  • Pages fit within the resolution (800x600).
  • Design works with liquid tables to fill the user's window size.
  • Separate print versions provided for long documents (liquid tables may negate this necessity). Accommodates A4 size paper.
  • Site doesn't use frames.
  • Complex tables are minimized.
  • Newer technologies are generally avoided for 1-2 years from release, or if used alternative traditional forms of content are easily available.
Home vs. Subsequent Pages & Sections
  • Home page logo is larger and more centrally placed than on other pages.
  • Home page includes navigation, summary of news/promotions, and a search feature.
  • Home page answers: Where am I; What does this site do; How do I find what I want?
  • Larger navigation space on home page, smaller on subsequent pages.
  • Logo is present and consistently placed on all subsequent pages (towards upper left hand corner).
  • "Home" link is present on all subsequent pages (but not home page).
  • If subsites are present, each has a home page, and includes a link back to the global home page.

Navigation
  • Navigation supports user scenarios gathered in the User Task Assessment phase (prior to design).
  • Users can see all levels of navigation leading to any page.
  • Breadcrumb navigation is present (for larger and some smaller sites).
  • Site uses DHTML pop-up to show alternative destinations for that navigation level.
  • Navigation can be easily learned.
  • Navigation is consistently placed and changes in response to rollover or selection.
  • Navigation is available when needed (especially when the user is finished doing something).
  • Supplemental navigation is offered appropriately (links on each page, a site map/index, a search engine).
  • Navigation uses visual hierarchies like movement, color, position, size, etc., to differentiate it from other page elements.
  • Navigation uses precise, descriptive labels in the user's language. Icon navigation is accompanied by text descriptors.
  • Navigation answers: Where am I (relative to site structure); Where have I been (obvious visited links); Where can I go (embedded, structural, and associative links)?
  • Redundant navigation is avoided.

Functional Items
  • Terms like "previous/back" and "next" are replaced by more descriptive labels indicating the information to be found.
  • Pull-down menus include a go button.
  • Logins are brief.
  • Forms are short and on one page (or demonstrate step X of Y, and why collecting a larger amount of data is important and how the user will benefit).
  • Documentation pages are searchable and have an abundance of examples. Instructions are task-oriented and step-by-step. A short conceptual model of the system is available, including a diagram that explains how the different parts work together. Terms or difficult concepts are linked to a glossary.

Linking
  • Links are underlined.
  • Size of large pages and multi-media files is indicated next to the link, with estimated download times.
  • Important links are above the fold.
  • Links to related information appear at bottom of content or above/near the top.
  • Linked titles make sense out of context.
  • If site requires registration or subscription, provides special URLs for free linking. Indicates the pages are freely linkable, and includes an easy method to discover the URL.
  • If site is running an ad, it links to a page with the relevant content, not the corporate home page.
  • Keeps linked phrases short to aid scanning (2-4 words).
  • Links on meaningful words and phrases. Avoids phrases like, "click here."
  • Includes a brief description of what the user should expect on the linked page. In code:
  • Uses relative links when linking between pages in a site. Uses absolute links to pages on unrelated sites.
  • Uses link titles in the code for IE users (preferably less than 60 characters, no more than 80).

Search Capabilities
  • A search feature appears on every page (exceptions include pop-up forms and the like).
  • Search box is wide to allow for visible search parameters.
  • Advanced Search, if included, is named just that (to scare off novices).
  • Search system performs a spelling check and offers synonym expansion.
  • Site avoids scoped searching. If included it indicates scope at top of both query and results pages, and additionally offers an automatic extended site search immediately with the same parameters.
  • Results do not include a visible scoring system.
  • Eliminates duplicate occurrences of the same results (e.g., foo.com/bar vs. foo.com/bar/ vs. foo.com/bar/index.html).

Page Design
  • Content accounts for 50% to 80% of a page's design (what's left over after logos, navigation, non-content imagery, ads, white space, footers, etc.).
  • Page elements are consistent, and important information is above the fold.
  • Pages load in 10 seconds or less on users' bandwidth.
  • Pages degrade adequately on older browsers.
  • Text is over plain background, and there is high contrast between the two.
  • Link styles are minimal (generally one each of link, visited, hover, and active states). Additional link styles are used only if necessary.
  • Specified the layout of any liquid areas (usually content) in terms of percentages.

Fonts and Graphics
  • Graphics are properly optimized.
  • Text in graphics is generally avoided.
  • Preferred fonts are used: Verdana, Arial, Geneva, sans-serif.
  • Fonts, when enlarged, don't destroy layout.
  • Images are reused rather than rotated.
  • Page still works with graphics turned off.
  • Graphics included are necessary to support the message.
  • Fonts are large enough and scalable.
  • Browser chrome is removed from screen shots.
  • Animation and 3D graphics are generally avoided.

Content Design
  • Uses bullets, lists, very short paragraphs, etc. to make content scannable.
  • Articles are structured with scannable nested headings.
  • Content is formatted in chunks targeted to user interest, not just broken into multiple pages.
  • No moving text; most is left-justified; sans-serif for small text; no upper-case sentences/paragraphs; italics and bold are used sparingly.
  • Dates follow the international format (year-month-day) or are written out (August 30, 2001).

Writing
  • Writing is brief, concise, and well edited.
  • Information has persistent value.
  • Avoids vanity pages.
  • Starts each page with the conclusion, and only gradually adds the detail supporting that conclusion.
  • One idea per paragraph.
  • Uses simple sentence structures and words.
  • Gives users just the facts. Uses humor with caution.
  • Uses objective language.

Folder Structure
  • Folder names are all lower-case and follow the alpha-numeric rules found under "Naming Conventions" below.
  • Segmented the site sections according to:
    Root directory (the "images" folder usually goes at the top level within the root folder)
    Sub-directories (usually one for each area of the site, plus an images folder at the top level within the root directory)
    Images are restricted to one folder ("images") at the top level within the root directory (for global images) and then if a great number of images are going to be used only section-specifically, those are stored in local "images" folders

Naming Conventions
  • Uses client's preferred naming method. If possible, uses longer descriptive names (like "content_design.htm" vs. "contdesi.htm").
  • Uses alphanumeric characters (a-z, 0-9) and - (dash) or _ (underscore)
  • Doesn't use spaces in file names.
  • Avoids characters which require a shift key to create, or any punctuation other than a period.
  • Uses only lower-case letters.
  • Ends filenames in .htm (not .html).

Multimedia
  • Any files taking longer than 10 seconds to download include a size warning (> 50kb on a 56kbps modem, > 200kb on fast connections). Also includes the running time of video clips or animations, and indicates any non-standard formats.
  • Includes a short summary (and a still clip) of the linked object.
  • If appropriate to the content, includes links to helper applications, like Adobe Acrobat Reader if the file is a .pdf.

Page Titles
  • Follows title strategy ... Page Content Descriptor : Site Name, Site section (E.g.: Content Implementation Guidelines : CDG Solutions, Usability Process )
  • Tries to use only two to six words, and makes their meaning clear when taken out of context.
  • The first word(s) are important information-carrying one(s).
  • Avoids making several page titles start with the same word.

Headlines
  • Describes the article in terms that relate to the user.
  • Uses plain language.
  • Avoids enticing teasers that don't describe.

CSS
  • Uses CSS to format content appearance (as supported by browsers), rather than older HTML methods.
  • Uses browser detection to serve the visitor a CSS file that is appropriate for their browser/platform combination.
  • Uses linked style sheets.

Documentation and Help Pages
  • When using screen shots, browser chrome was cropped out.
  • Hired a professional to write help sections (a technical writer).
  • Documentation pages are searchable.
  • Documentation section has an abundance of examples.
  • Instructions are task-oriented and step-by-step.
  • A short conceptual model of the system is provided, including a diagram that explains how the different parts work together.
  • Terms or difficult concepts are linked to a glossary.

Content Management
Site has procedures in place to remove outdated information immediately (such as calendar events which have passed).

Web Testing Checklist Part 2

Web Testing Checklist about Performance (2)
Resources
1. Are people with skill sets available?
2. Have the following skill sets been acquired?
" DBA
" Doc
" BA
" QA
" Tool Experts
" Internal and external support
" Project manager
" Training

Time Frame
1. When will the application be ready for performance testing?
2. How much time is available for performance testing?
3. How many iterations of testing will take place?

Test Environment
1. Does the test environment exist?
2. Is the environment self-contained?
3. Can one iteration of testing be performed in production?
4. Is a copy of production data available for testing?
5. Are end-users available for testing and analysis?
6. Will the test use virtual users?
7. Does the test environment mirror production?
8. Have the differences been documented? (constraints)
9. Is the test available after production?
10. Have version control processes been used to ensure the correct versions of applications and data in the test environment?
11. Have the time frames been identified for when you will receive the test data (globally)?
12. Are there considerations for fail-over recovery? Disaster recovery?
13. Are replacement servers available?
14. Have back-up procedures been written?
Web Testing Checklist about Correctness (1)

Data
1. Does the application write to the database properly?
2. Does the application retrieve records from the database correctly?
3. Is transient data retained?
4. Does the application follow concurrency rules?
5. Are text fields storing information correctly?
6. Is inventory or out of stock being tracked properly?
7. Is there redundant info within web site?
8. Is forward/backward caching working correctly?
9. Are requirements for timing out of session met?

Presentation
1. Are the field data properly displayed?
2. Is the spelling correct?
3. Are the page layouts and format based on requirements?
(e.g., visual highlighting, etc.)
4. Does the URL show you are on a secure page?
5. Is the tab order correct on all screens?
6. Do the interfaces meet specific visual standards (internal)?
7. Do the interfaces meet current GUI standards?
8. Do the print functions work correctly?

Navigation
1. Can you navigate to the links correctly?
2. Do Email links work correctly?

Functionality
1. Is the application recording the number of hits correctly?
2. Are calculations correct?
3. Are edit rules being consistently applied?
4. Is the site listed on search engines properly?
5. Is the help information correct?
6. Do internal searches return correct results?
7. Are follow-up confirmations sent correctly?
8. Are errors being handled correctly?
9. Does the application properly interface with other applications?

Web Testing Checklist about Correctness (2)

Environment
1. Are user sessions terminated properly?
2. Is response time adequate based upon specifications?

  • Is a complete software requirements specification available?
  • Are requirements bounded?
  • Have equivalence classes been defined to exercise input?
  • Have boundary tests been derived to exercise the software at its boundaries?
  • Have test suites been developed to validate each software function?
  • Have test suites been developed to validate all data structures?
  • Have test suites been developed to assess software performance?
  • Have test suites been developed to test software behavior?
  • Have test suites been developed to fully exercise the user interface?
  • Have test suites been developed to exercise all error handling?
  • Are use-cases available to perform scenario testing?
  • Is statistical use testing (SEPA, 5/e, Chapter 26) being considered as an element of validation?
  • Have tests been developed to exercise the software against procedures defined in user documentation and help facilities?
  • Have error reporting and correction mechanisms been established?
  • Has a deficiency list been created?

Web Testing Checklist Part 1

Web Testing Checklist about Usability
Navigation
1. Is terminology consistent?
2. Are navigation buttons consistently located?
3. Is navigation to the correct/intended destination?
4. Is the flow to destination (page to page) logical?
5. Is the flow within the page top-to-bottom, left-to-right?
6. Is there a logical way to return?
7. Are the business steps within the process clear or mapped?
8. Are navigation standards followed?

Ease of Use
1. Are help facilities provided as appropriate?
2. Are selection options clear?
3. Are ADA standards followed?
4. Is the terminology appropriate to the intended audience?
5. Is there minimal scrolling, and are screens resizable?
6. Do menus load first?
7. Do graphics have reasonable load times?
8. Are there multiple paths through site (search options) that are user chosen?
9. Are messages understandable?
10. Are confirmation messages available as appropriate?

Presentation of Information
1. Are fonts consistent within functionality?
2. Are the company display standards followed?
- Logos
- Font size
- Colors
- Scrolling
- Object use
3. Are legal requirements met?
4. Is content sequenced properly?
5. Are web-based colors used?
6. Is there appropriate use of white space?
7. Are tools provided (as needed) in order to access the information?
8. Are attachments provided in a static format?
9. Is spelling and grammar correct?
10. Are alternative presentation options available (for limited browsers or performance issues)?

How to interpret/Use Info
1. Is terminology appropriate to the intended audience?
2. Are clear instructions provided?
3. Are there help facilities?
4. Are there appropriate external links?
5. Is expanded information provided on services and products? (why and how)
6. Are multiple views/layouts available?
Web Testing Checklist about Compatibility and Portability

Overall
1. Are requirements driven by business needs and not technology?

Audience
1. Has the audience been defined?
2. Is there a process for identifying the audience?
3. Is the process for identifying the audience current?
4. Is the process reviewed periodically?
5. Is there appropriate use of audience segmentation?
6. Is the application compatible with the audience experience level?
7. Where possible, has the audience readiness been ensured?
8. Are text version and/or upgrade links present?

Testing Process
1. Does the testing process include appropriate verifications? (e.g., reviews, inspections and walkthroughs)
2. Is the testing environment compatible with the operating systems of the audience?
3. Does the testing process and environment legitimately simulate the real world?

Operating Systems Environment/Platform
1. Have the operating environments and platforms been defined?
2. Have the most critical platforms been identified?
3. Have audience expectations been properly managed?
4. Have the business users/marketing been adequately prepared for what will be tested?
5. Have sign-offs been obtained?

Risk
1. Has the risk tolerance been assessed to identify the vital few platforms to test?

Hardware
1. Is the test hardware compatible with all screen types, sizes, resolution of the audience?
2. Is the test hardware compatible with all means of access, modems, etc of the audience?
3. Is the test hardware compatible with all languages of the audience?
4. Is the test hardware compatible with all databases of the audience?
5. Does the test hardware contain the compatible plug-ins and DLLs of the audience?

General
1. Is the application compatible with standards and conventions of the audience?
2. Is the application compatible with copyright laws and licenses?
Web Testing Checklist about Security (1)

Access Control
1. Is there a defined standard for login names/passwords?
2. Are good aging procedures in place for passwords?
3. Are users locked out after a given number of password failures?
4. Is there a link for help (e.g., forgotten passwords?)
5. Is there a process for password administration?
6. Have authorization levels been defined?
7. Is management sign-off in place for authorizations?

Disaster Recovery
1. Have service levels been defined? (e.g., how long should recovery take?)
2. Are fail-over solutions needed?
3. Is there a way to reroute to another server in the event of a site crash?
4. Are executables, data, and content backed up on a defined interval appropriate for the level of risk?
5. Are disaster recovery process & procedures defined in writing? If so, are they current?
6. Have recovery procedures been tested?
7. Are site assets adequately insured?
8. Is a third party "hot site" available for emergency recovery?
9. Has a Business Contingency Plan been developed to maintain the business while the site is being restored?
10. Have all levels in the organization gone through the needed training & drills?
11. Do support notification procedures exist & are they followed?
12. Do support notification procedures support a 24/7 operation?
13. Have criteria been defined to evaluate recovery completion/correctness?

Firewalls
1. Was the software installed correctly?
2. Are firewalls installed at adequate levels in the organization and architecture? (e.g., corporate data, human resources data, customer transaction files, etc.)
3. Have firewalls been tested? (e.g., to allow & deny access).
4. Is the security administrator aware of known firewall defects?
5. Is there a link to access control?
6. Are firewalls installed in effective locations in the architecture? (e.g., proxy servers, data servers, etc.)

Proxy Servers
1. Have undesirable / unauthorized external sites been defined and screened out? (e.g. gaming sites, etc.)
2. Is traffic logged?
3. Is user access defined?

Privacy
1. Is sensitive data restricted from being viewed by unauthorized users?
2. Is proprietary content copyrighted?
3. Is information about company employees limited on public web site?
4. Is the privacy policy communicated to users and customers?
5. Is there adequate legal support and accountability of privacy practices?
Web Testing Checklist about Security (2)

Data Security
1. Are data inputs adequately filtered?
2. Are data access privileges identified? (e.g., read, write, update and query)
3. Are data access privileges enforced?
4. Have data backup and restore processes been defined?
5. Have data backup and restore processes been tested?
6. Have file permissions been established?
7. Have file permissions been tested?
8. Have sensitive and critical data been allocated to secure locations?
9. Have data archival and retrieval procedures been defined?
10. Have data archival and retrieval procedures been tested?

Monitoring
1. Are network monitoring tools in place?
2. Are network monitoring tools working effectively?
3. Do monitors detect:
- Network time-outs?
- Network concurrent usage?
- IP spoofing?
4. Is personnel access control monitored?
5. Is personnel internet activity monitored?
- Sites visited
- Transactions created
- Links accessed

Security Administration
1. Have security administration procedures been defined?
2. Is there a way to verify that security administration procedures are followed?
3. Are security audits performed?
4. Is there a person or team responsible for security administration?
5. Are checks & balances in place?
6. Is there an adequate backup for the security administrator?

Encryption
1. Are encryption systems/levels defined?
2. Is there a standard of what is to be encrypted?
3. Are customers compatible in terms of encryption levels and protocols?
4. Are encryption techniques being used for secured transactions?
- Secure socket layer (SSL)
- Virtual Private Networks (VPNs)
5. Have the encryption processes and standards been documented?

Viruses
1. Are virus detection tools in place?
2. Have the virus data files been updated on a current basis?
3. Are virus updates scheduled?
4. Is a response procedure for virus attacks in place?
5. Are notifications of updates to virus files obtained from the anti-virus software vendor?
6. Does the security administrator maintain an informational partnership with the anti-virus software vendor?
7. Does the security administrator subscribe to early warning e-mail services? (e.g., www.foo.org or www.bar.net)
8. Has a key contact been defined for the notification of a virus presence?
9. Has an automated response been developed to respond to a virus presence?
10. Is the communication & training of virus prevention and response procedures to users adequate?
Web Testing Checklist about Performance (1)

Tools
1. Has a load testing tool been identified?
2. Is the tool compatible with the environment?
3. Has licensing been identified?
4. Have external and internal support been identified?
5. Have employees been trained?

Number of Users
1. Have the maximum number of users been identified?
2. Has the complexity of the system been analyzed?
3. Has the user profile been identified?
4. Have user peaks been identified?
5. Have languages been identified (e.g., English, Spanish, French) for global sites?
6. Have the lengths of sessions been identified by the number of users?
7. Have the number of user configurations been identified?

Expectations/Requirements
1. Has the response time been identified?
2. Has the client response time been identified?
3. Has the expected vendor response time been identified?
4. Have the maximum and acceptable response times been defined?
5. Has response time been met at the various thresholds?
6. Has the break point been identified for capacity planning?
7. Do you know what caused the crash if the application was taken to the breaking point?
8. How many transactions for a given period of time have been identified (bottlenecks)?
9. Have availability of service levels been defined?

Architecture
1. Has the database capacity been identified?
2. Has anticipated growth data been obtained?
3. Is the database self-contained?
4. Is the system architecture defined?
" Tiers
" Servers
" Network
5. Has the anticipated volume for initial test been defined - with allowance for future growth?
6. Has a plan for vertical growth been identified?
7. Have the various environments been created?
8. Has historical experience with the databases and equipment been documented?
9. Has the current system diagram been developed?
10. Is load balancing available?
11. Have the types of programming languages been identified?
12. Can back end processes be accessed?

Some checklists for Automation Testing

By analyzing the current test case bucket in this manner, you can quickly determine which test cases can be automated immediately and the priority for automating them that will give the best return on investment. Use the following list of questions to determine whether automation testing is appropriate for your application:

  1. The expected results of a single function are usually very predictable.
  2. The steps in the test are very repeatable and usually short.
  3. API testing consists of using supported programming languages to drive product functions. Many times these tests are based upon the unit tests generated by development.
  4. Command line testing uses higher level scripting languages such as Perl, REXX, DOS Batch Files, Shell Scripts, etc.
  5. These automated tests usually execute very rapidly.
  6. Do any of the current test cases drive function through a programmable interface?
  7. Are any of these interfaces frozen or at least stabilized?
  8. Which test cases are executed most often?
  9. Which test cases take the most time or effort to execute?
  10. Which test cases have expected results that are known or at least predictable?
  11. Which test cases seem to fail most often because of product defects?
  12. Which test cases have to be updated or rewritten most often?

Some other factors to consider:

Prerequisites and Dependencies

  1. Test plan in place that includes resources and time to automate test case execution.
  2. Test cases written and validated via manual test execution.
  3. Might require development to put some sort of testability "hooks" into the product, or to expose APIs or a command line interface.
  4. Develop a Test Execution Automation Strategy.
  5. Design, develop, implement and deploy an automated test execution system.
  6. Maintain the test execution automation system from release to release.
  7. Continually expand automated test execution into more advanced areas of testing.

Skills Required

  1. Basic programming skill in the language(s) of choice.
  2. Knowledge of the tool/harness used to drive the automated suite of tests.

Cost of Implementation

  1. Education and development of required scripting/programming skill.
  2. Cost of test tool/harness license and maintenance license if applicable.
  3. Education and development of test tool/harness usage.
  4. Effort to automate the execution of desired test cases.
  5. Effort to run automated system and analyze the results.
  6. Cost to maintain the automated system.

Type of Automation Testing:

  1. Scenario based tests driven through a GUI and/or Web Interfaces.
  2. Error or negative path testing.
  3. Self-checking tests, i.e., those that verify the final results against the expected results.
  4. Load/Stress/Performance testing.
  5. System monitoring testing, such as memory leakage detection, resource usage monitoring, I/O, core dumps, etc.

Software Specification Review Check List

· Do stated goals and objectives for software remain consistent with system goals and objectives?
· Have important interfaces to all system elements been described?
· Is information flow and structure adequately defined for the problem?
· Are diagrams clear? Can each stand alone without supplementary text?
· Do major functions remain within scope and has each been adequately described?
· Is the behaviour of the software consistent with the information it must process and the functions it must perform?
· Are design constraints realistic?
· Have the technological risks of development been considered?
· Have alternative software requirements been considered?
· Have validation criteria been stated in detail? Are they adequate to describe a successful system?
· Do inconsistencies, omissions, or redundancies exist?
· Is the customer contact complete?
· Has the user reviewed the preliminary user's manual or prototype?
· How are planning estimates affected?
· Be on the lookout for persuasive connectors (e.g., certainly, therefore, clearly, obviously, it follows that), and ask "why?"
· Watch out for vague terms (e.g., some, sometimes, often, usually, ordinarily, most, mostly); ask for clarification.
· When lists are given but not completed, be sure all items are understood. Keys to look for: "etc.," "and so forth," "and so on," "such as."
· Be sure stated ranges don't contain unstated assumptions (e.g., Valid codes range from 10 to 100. Integer? Real? Hex?).
· Beware of vague verbs such as "handled," "rejected," "processed," "skipped," "eliminated." They can be interpreted in many ways.
· Beware ambiguous pronouns (e.g., The I/O module communicates with the data validation module and its control flag is set. Whose control flag?)
· Look for statements that imply certainty (e.g., always, every, all, none, never), then ask for proof.
· When a term is explicitly defined in one place, try substituting the definition for other occurrences of the term.
· When a structure is described in words, draw a picture to aid in understanding.
· When a calculation is specified, work at least two examples.

Full Lifecycle Testing Concept

What is Software Testing?

Ø Process of identifying defects
• Defect is any variance between actual and expected results

Ø Testing should intentionally attempt to make things go wrong to determine if
• things happen when they shouldn't or
• things don't happen when they should


Types of Testing

Ø Static Testing
Ø Dynamic Testing

Static Testing

Ø Involves testing of the development work products without executing any code.
Ø Examples

• Plan reviews
• Requirements walkthroughs
• Design or code inspections
• Test plan inspections
• Test case reviews

Dynamic Testing

Ø Process of validation by exercising or operating a work product under scrutiny & observing its behavior to changing inputs or environments
Ø Some representative examples of dynamic testing are

• Executing test cases in a working system
• Simulating usage scenarios with real end-users to test usability
• Parallel testing in a production environment

Purpose of Static Testing


Ø Early removal of the Defects in the software development life cycle
Ø Increases the productivity by shortening the testing lifecycles & reducing work
Ø Increases the Quality of the project deliverables

Static Testing Techniques

Ø Prototyping
Ø Desk checks
Ø Checklists
Ø Mapping
Ø Reviews
Ø Walkthroughs
Ø Inspections
Ø Prototyping
• Prototyping is reviewing a work product model (initial version) defined without the full capability of the proposed final product

- The prototype is demonstrated and the result of the exercise is evaluated, leading to an enhanced design

• Used to verify and validate the User Interface design and to confirm usability

Ø Desk checks
• This technique involves the process of a product author reading his or her own product to identify defects

Ø Checklists
• Checklists are a series of common items, or prompting questions to verify completeness of task steps.

Ø Mapping
• The mapping technique identifies functions against the specification and shows how each function, directly or indirectly, maps to the requirements.

- Used to map Test Scripts to Test Cases, Test Conditions to Test Scripts, etc.

Ø Reviews
• Reviews are a useful mechanism for getting feedback quickly from peers and team members

Ø Walkthroughs
• Walkthroughs are generally run as scheduled meetings and participants are invited to attend.

- The minutes of the meeting are recorded, as are the issues and action items resulting from the meeting.
- Owners are assigned for issues and actions and there is generally follow up done.

Ø Inspections
• Defect detection activity, aimed at producing a "defect free" work product, before it is passed on to the next phase of the development process

Ø Objectives of an Inspection.
• Increase quality and productivity
• Minimize costs and cycle elapsed time
• Facilitate project management

Inspection Process
Ø An inspection has the following key properties

• A moderator
• Definite participant roles
• Author, Inspector, Recorder
• Stated entry and exit criteria
• Clearly defined defect types
• A record of detected defects
• Re-inspection criteria
• Detected defect feedback to author
• Follow-up to ensure defects are fixed / resolved

Inspection Process Results

Ø Major defects

• Missing (M) - an item is missing
• Wrong (W) - an item has been implemented incorrectly.
• Extra (E) - an item is included which is not part of the specifications.
• Issues (I) - an item not implemented in a satisfactory manner
• Suggestion (S) - suggestion to improve the work product.

Ø Minor defects

• Clarity in comments and description
• Insufficient / excessive documentation
• Incorrect spelling / punctuation


Testing Techniques


Ø Black Box Testing

• The testers have an "outside" view of the system.
• They are concerned with "what is done" NOT "how it is done."

Ø White Box Testing

• In the White Box approach, the testers have an inside view of the system. They are concerned with "how it is done" NOT "what is done."


Levels of Testing

* Unit testing
* Integration testing
* System testing
* Systems integration testing
* User acceptance testing

Unit Testing

Ø Unit level test is the initial testing of new and changed code in a module.
Ø Verifies the program specification against the internal logic of the program or module and validates the logic.

Integration Testing

Ø Integration level tests verify proper execution of application components and do not require that the application under test interface with other applications

Ø Communication between modules within the sub-system is tested in a controlled and isolated environment within the project

System Testing

Ø System level tests verify proper execution of all application components, including interfaces to other applications

Ø Functional and structural types of tests are performed to verify that the system is functionally and operationally sound.


Systems Integration Testing

Ø Systems Integration testing is a test level which verifies the integration of all applications

• Includes interfaces internal and external to the organization, with their hardware, software and infrastructure components.

Ø Carried out in a production-like environment

User Acceptance Testing

Ø Verify that the system meets user requirements as specified.
Ø Simulates the user environment and emphasizes security, documentation and regression tests
Ø Demonstrates to the sponsor and end user that the system performs as expected, so that they may accept the system.


Types of Tests

Ø Functional Testing
Ø Structural Testing

Functional Testing

Ø Audit and Controls testing
Ø Conversion testing
Ø Documentation & Procedures testing
Ø Error Handling testing
Ø Functions / Requirements testing
Ø Interface / Inter-system testing
Ø Installation testing
Ø Parallel testing
Ø Regression testing
Ø Transaction Flow (Path) testing
Ø Usability testing

Audit And Controls Testing

Ø Verifies the adequacy and effectiveness of controls and ensures the capability to prove the completeness of data processing results

• Their validity would have been verified during design

Ø Normally carried out as part of System Testing once the primary application functions have been stabilized


Conversion Testing

Ø Verifies the compatibility of the converted program, data, and procedures with those from existing systems that are being converted or replaced.
Ø Most programs that are developed for conversion purposes are not totally new. They are often enhancements or replacements for old, deficient, or manual systems.
Ø The conversion may involve files, databases, screens, report formats, etc.


User Documentation And Procedures Testing

Ø Ensures that the interface between the system and the people works and is usable.
Ø Done as part of procedure testing to verify that the instruction guides are helpful and accurate.
• Both areas of testing are normally carried out late in the cycle as part of System Testing or in the UAT.

Ø Not generally done until the externals of the system have stabilized.
• Ideally, the persons who will use the documentation and procedures are the ones who should conduct these tests.

Error-Handling Testing

Ø Error-handling is the system function for detecting and responding to exception conditions (such as erroneous input)
Ø Ensures that incorrect transactions will be properly processed and that the system will terminate in a controlled and predictable way in case of a disastrous failure

Function Testing

Ø Function Testing verifies, at each stage of development, that each business function operates as stated in the Requirements and as specified in the External and Internal Design documents.

Ø Function testing is usually completed in System Testing so that by the time the system is handed over to the user for UAT, the test group has already verified that the system meets requirements.


Installation Testing

Ø Any application that will be installed and run in an environment remote from the development location requires installation testing.

• This is especially true of network systems that may be run in many locations.
• This is also the case with packages where changes were developed at the vendor's site.
• Necessary if the installation is complex, critical, should be completed in a short window, or of high volume such as in microcomputer installations.
• This type of testing should always be performed by those who will perform the installation process


Interface / Inter-system Testing

Ø Application systems often interface with other application systems. Most often, there are multiple applications involved in a single project implementation.
Ø Ensures that the interconnections between applications function correctly.
Ø More complex if the applications operate on different platforms, in different locations or use different languages.

Parallel Testing

Ø Parallel testing compares the results of processing the same data in both the old and new systems.
Ø Parallel testing is useful when a new application replaces an existing system, when the same transaction input is used in both, and when the output from both is reconcilable.
Ø Useful when switching from a manual system to an automated system.

Regression Testing

Ø Verifies that no unwanted changes were introduced to one part of the system as a result of making changes to another part of the system.

Transaction Flow Testing

Ø Testing of the path of a transaction from the time it enters the system until it is completely processed and exits a suite of applications

Usability Testing

Ø Ensures that the final product is usable in a practical, day-to-day fashion
Ø Looks for simplicity and user-friendliness of the product
Ø Usability testing would normally be performed as part of functional testing during System and User Acceptance Testing.

Structural Testing

Ø Ensures that the technical and "housekeeping" functions of the system work
Ø Designed to verify that the system is structurally sound and can perform the intended tasks.

The categories for structural testing

Ø Backup and Recovery testing
Ø Contingency testing
Ø Job Stream testing
Ø Operational testing
Ø Performance testing
Ø Security testing
Ø Stress / Volume testing

Backup And Recovery Testing

Ø Recovery is the ability of an application to be restarted after failure.
Ø The process usually involves backing up to a point in the processing cycle where the integrity of the system is assured and then re-processing the transactions past the original point of failure.
Ø The nature of the application, the volume of transactions, the internal design of the application to handle a restart process, the skill level of the people involved in the recovery procedures, documentation and tools provided, all impact the recovery process

Contingency Testing

Ø Operational situations may occur which result in major outages or "disasters". Some applications are so crucial that special precautions need to be taken to minimize the effects of these situations and speed the recovery process. This is called Contingency.

Job Stream Testing

Ø Done as a part of operational testing (the test type not the test level, although this is still performed during Operability Testing).

Ø Starts early and continues throughout all levels of testing.
- Conformance to standards is checked in User Acceptance and Operability testing.

Operational Testing

Ø All products delivered into production must obviously perform according to user requirements. However, a product's performance is not limited solely to its functional characteristics. Its operational characteristics are just as important since users expect and demand a guaranteed level of service from Computer Services. Therefore, even though Operability Testing is the final point where a system's operational behavior is tested, it is still the responsibility of the developers to consider and test operational factors during the construction phase.

Performance Testing

Ø Performance Testing is designed to test whether the system meets the desired level of performance in a production environment. Performance considerations may relate to response times, turn around times (through-put), technical design issues and so on. Performance testing can be conducted using a production system, a simulated environment, or a prototype.

Security Testing

Ø Security testing of an application system is required to ensure that confidential information in the system, and in other affected systems, is protected against loss, corruption, or misuse, whether by deliberate or accidental actions. The amount of testing needed depends on the risk assessment of the consequences of a breach in security. Tests should focus on, and be limited to, those security features developed as part of the system

Stress/ Volume Testing

Ø Stress testing is defined as the processing of a large number of transactions through the system in a defined period of time. It is done to measure the performance characteristics of the system under peak load conditions.
Ø Stress factors may apply to different aspects of the system such as input transactions, report lines, internal tables, communications, computer processing capacity, throughput, disk space, I/O and so on.
Ø Stress testing should not begin until the system functions are fully tested and stable. The need for Stress Testing must be identified in the Design Phase and should commence as soon as operationally stable system units are available.

Bottom-up Testing

Ø Approach to integration testing where the lowest level components are tested first then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.

Test Bed

Ø (1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files (including databases and reference files), in a known state, used with input test data to test one or more test conditions, measuring against expected results.

Software testing methodologies

Doing software testing after coding is like looking for typos once a book has gone to press. Not only is it ineffective, it's costly. The most efficient testing approach applies sound testing practices throughout the entire software lifecycle.


Automated Testing

Automated testing is as simple as removing the "human factor" and letting the computer do the thinking. This can range from integrated debug tests to much more intricate processes. The idea of these tests is to find bugs that are often very challenging or time intensive for human testers to find. This sort of testing can save many man hours and can be more "efficient" in some cases. But it costs more to ask a developer to write more lines of code into the game (or an external tool) than it does to pay a tester, and there is always the chance there is a bug in the bug-testing program itself. Reusability is another problem; you may not be able to transfer a testing program from one title (or platform) to another. And of course, there is always the "human factor" of testing that can never truly be replaced.

Other successful alternatives or variation: Nothing is infallible. Realistically, a moderate split of human and automated testing can rule out a wider range of possible bugs, rather than relying solely on one or the other. Giving the tester limited access to any automated tools can often help speed up the test cycle.

Release Acceptance Test

The release acceptance test (RAT), also referred to as a build acceptance or smoke test, is run on each development release to check that each build is stable enough for further testing. Typically, this test suite consists of entrance and exit test cases plus test cases that check mainstream functions of the program with mainstream data. Copies of the RAT can be distributed to developers so that they can run the tests before submitting builds to the testing group. If a build does not pass a RAT test, it is reasonable to do the following:

Suspend testing on the new build and resume testing on the prior build until another build is received.

Report the failing criteria to the development team.

Request a new build.

Functional Acceptance Simple Test

The functional acceptance simple test (FAST) is run on each development release to check that key features of the program are appropriately accessible and functioning properly on at least one test configuration (preferably the minimum or common configuration). This test suite consists of simple test cases that check the lowest level of functionality for each command, to ensure that task-oriented functional tests (TOFTs) can be performed on the program. The objective is to decompose the functionality of a program down to the command level and then apply test cases to check that each command works as intended. No attention is paid to the combination of these basic commands, the context of the feature that is formed by these combined commands, or the end result of the overall feature. For example, FAST for a File/Save As menu command checks that the Save As dialog box displays. However, it does not validate that the overall file-saving feature works nor does it validate the integrity of saved files.

Deployment Acceptance Test

The configuration on which the Web system will be deployed will often be much different from develop-and-test configurations. Testing efforts must consider this in the preparation and writing of test cases for installation time acceptance tests. This type of test usually includes the full installation of the applications to the targeted environments or configurations.


Task-Oriented Functional Test

The task-oriented functional test (TOFT) consists of positive test cases that are designed to verify program features by checking the task that each feature performs against specifications, user guides, requirements, and design documents. Usually, features are organized into list or test matrix format. Each feature is tested for:

The validity of the task it performs with supported data conditions under supported operating conditions.

The integrity of the task's end result

The feature's integrity when used in conjunction with related features

Forced-Error Test

The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated. The list is used as a baseline for developing test cases, and an attempt is made to generate each error message in the list. Obviously, tests to validate error-handling schemes cannot be performed until all the error handling and error messages have been coded. However, FETs should be thought through as early as possible. Sometimes the error messages are not available; the error cases can still be considered by walking through the program and deciding how the program might fail in a given user interface (such as a dialog) or in the course of executing a given task or printing a given report. Test cases should be created for each condition to determine what error message is generated.
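As a small illustration, a forced-error test might look like the following sketch (written with pytest; validate_quantity and its error message are hypothetical stand-ins for a real product function and its baseline error-message list):

    import pytest

    # Hypothetical function under test: rejects non-positive order quantities.
    def validate_quantity(quantity):
        if quantity < 1:
            raise ValueError("Quantity must be at least 1")
        return quantity

    # Forced-error test: deliberately supply bad input and verify that the
    # expected error message from the baseline list is produced.
    def test_quantity_below_minimum_is_rejected():
        with pytest.raises(ValueError, match="must be at least 1"):
            validate_quantity(0)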

Real-world User-level Test

These tests simulate the actions customers may take with a program. Real-World user-level testing often detects errors that are otherwise missed by formal test types.

Exploratory Test

Exploratory Tests do not involve a test plan, checklist, or assigned tasks. The strategy here is to use past testing experience to make educated guesses about places and functionality that may be problematic. Testing is then focused on those areas. Exploratory testing can be scheduled. It can also be reserved for unforeseen downtime that presents itself during the testing process.

Compatibility and Configuration Testing

Compatibility and configuration testing is performed to check that an application functions properly across various hardware and software environments. Often, the strategy is to run the functional acceptance simple tests or a subset of the task-oriented functional tests on a range of software and hardware configurations. Sometimes, another strategy is to create a specific test that takes into account the error risks associated with configuration differences. For example, you might design an extensive series of tests to check for browser compatibility issues. Software compatibility configurations include variances in OS versions, input/output (I/O) devices, extensions, network software, concurrent applications, online services and firewalls. Hardware configurations include variances in manufacturers, CPU types, RAM, graphic display cards, video capture cards, sound cards, monitors, network cards, and connection types (e.g., T1, DSL, modem, etc.).

Documentation

Testing of reference guides and user guides checks that all features are reasonably documented. Every page of documentation should be keystroke-tested for the following errors:

Accuracy of every statement of fact

Accuracy of every screen shot, figure and illustration

Accuracy of placement of figures and illustrations

Accuracy of every tutorial, tip, and instruction

Accuracy of marketing collateral (claims, system requirements, and screen shots)

Accuracy of downloadable documentation (PDF, HTML, or text files)

Online Help Test

Online help tests check the accuracy of help contents, correctness of features in the help system, and functionality of the help system.

Install/uninstall Test

Web systems often require both client-side and server-side installs. Testing of the installer checks that installed features function properly--including icons, support documentation, the README file, and registry keys. The test verifies that the correct directories are created and that the correct system files are copied to the appropriate directories. The test also confirms that various error conditions are detected and handled gracefully.

Testing of the uninstaller checks that the installed directories and files are appropriately removed, that configuration and system-related files are also appropriately removed or modified, and that the operating environment is recovered in its original state.
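Part of the install/uninstall check can be automated by keeping a manifest of the files and directories the installer is supposed to create and verifying them before and after. The paths below are hypothetical placeholders; registry-key and shortcut checks would be added in the same style.

import os

# Hypothetical manifest of artifacts the installer is expected to create.
EXPECTED_PATHS = [
    "/opt/mywebapp/bin/server",
    "/opt/mywebapp/conf/app.conf",
    "/opt/mywebapp/README",
]

def check_install():
    missing = [p for p in EXPECTED_PATHS if not os.path.exists(p)]
    return missing      # an empty list means the install check passed

def check_uninstall():
    leftovers = [p for p in EXPECTED_PATHS if os.path.exists(p)]
    return leftovers    # an empty list means the uninstall cleaned up properly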

User Interface Tests

Ease-of-use UI testing evaluates how intuitive a system is. Issues pertaining to navigation, usability, commands, and accessibility are considered. User interface functionality testing examines how well a UI conforms to its specifications.

AREAS COVERED IN UI TESTING

Usability

Look and feel

Navigation controls/navigation bar

Instructional and technical information style

Images

Tables

Navigation branching

Accessibility

External Beta Testing

External beta testing offers developers their first glimpse of how users may actually interact with a program. Copies of the program or a test URL, sometimes accompanied by a letter of instructions, are sent out to a group of volunteers who try out the program and respond to questions in the letter. Beta testing is black-box, real-world testing. Beta testing can be difficult to manage, and the feedback that it generates normally comes too late in the development process to contribute to improved usability and functionality. External beta-tester feedback may be reflected in a README file or deferred to future releases.

Security Tests

Security measures protect Web systems from both internal and external threats. E-commerce concerns and the growing popularity of Web-based applications have made security testing increasingly relevant. Security tests determine whether a company's security policies have been properly implemented; they evaluate the functionality of existing systems, not whether the security policies that have been implemented are appropriate.

PRIMARY COMPONENTS REQUIRING SECURITY TESTING

Application software

Database

Servers

Client workstations

Networks

Unit Tests

Unit tests are positive tests that evaluate the integrity of software code units before they are integrated with other software units. Developers normally perform unit testing. Unit testing represents the first round of software testing--when developers test their own software and fix errors in private.

Click-Stream Testing

Click-stream testing shows which URLs the user clicked, the Web site's user activity by time period during the day, and other data otherwise found in the Web server logs. Popular choices for click-stream statistics include the Keynote Systems Internet weather report, the WebTrends log analysis utility, and the NetMechanic monitoring service.

Disadvantage: Click-stream statistics reveal almost nothing about the user's ability to achieve their goals using the Web site. For example, a Web site may show a million page views, but 35% of the page views may simply be pages with the message "Found no search results." With click-stream testing, there is no way to tell when users reach their goals.

Click-stream measurement tests

These tests make a request for a set of Web pages and record statistics about the response, including total page views per hour, total hits per week, total user sessions per week, and derivatives of these numbers. The downside is that if your Web-enabled application takes twice as many pages as it should for a user to complete his or her goal, the click-stream test makes it look as though your Web site is popular, while to the user your Web site is frustrating.
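Because these statistics come from the Web server logs, a small log-analysis script can produce them without a commercial tool. The sketch below counts page views per hour from a Common Log Format file; the access.log path is an assumption.

import re
from collections import Counter

# Matches the timestamp and request line of the Common Log Format, e.g.
# 127.0.0.1 - - [05/Feb/2008:10:15:32 -0500] "GET /index.html HTTP/1.0" 200 2326
LOG_LINE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):(\d{2}):\d{2}:\d{2}.*?\] "GET (\S+)')

def page_views_per_hour(log_path):
    views = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if match:
                day, hour, url = match.groups()
                views[(day, hour)] += 1
    return views

for (day, hour), count in sorted(page_views_per_hour("access.log").items()):
    print(day, hour + ":00", count, "page views")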

HTML content-checking tests

HTML content-checking tests make a request to a Web page, parse the response for HTTP hyperlinks, request those hyperlinks from their associated hosts, and check whether the links return successful or exceptional conditions. The downside is that the hyperlinks in a Web-enabled application are dynamic and can change depending on the user's actions, and there is little way to know the context of the hyperlinks, so just checking the links' validity is meaningless if not misleading. These tests were meant for static Web sites, not Web-enabled applications.
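The sketch below shows the mechanics such a link checker typically uses: request a page, parse its hyperlinks, and record the status of each. It is a generic illustration using only the Python standard library, and it inherits exactly the limitation described above for dynamic, Web-enabled applications.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    # Request a page, parse its hyperlinks, and report the status of each.
    html = urllib.request.urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    results = {}
    for link in collector.links:
        absolute = urljoin(page_url, link)
        try:
            results[absolute] = urllib.request.urlopen(absolute).status
        except Exception as error:          # unreachable or exceptional condition
            results[absolute] = str(error)
    return results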

Web-Enabled Application Measurement Tests

Mean time between failures in seconds

Amount of time in seconds for each user session, sometimes known as a transaction

Application availability and peak usage periods.

Which media elements are most used (for example, HTML vs. Flash, JavaScript vs. HTML forms, Real vs. Windows Media Player vs. QuickTime)

Ping tests

Ping tests use the Internet Control Message Protocol (ICMP) to send a ping request to a server. If the ping returns, the server is assumed to be alive and well. The downside is that usually a Web server will continue to return ping requests even when the Web-enabled application has crashed.
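The contrast can be seen in a short sketch. Since a true ICMP ping needs raw-socket privileges, a TCP connection attempt stands in for it here; the application-level check actually fetches a page, which is what catches a crashed Web-enabled application behind an otherwise healthy server. The host and URL are placeholders.

import socket
import urllib.request

def server_reachable(host, port=80, timeout=5):
    # A transport-level "ping": only proves that something is listening on the port.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def application_alive(url, timeout=5):
    # An application-level check: the page must actually come back with HTTP 200.
    try:
        return urllib.request.urlopen(url, timeout=timeout).status == 200
    except Exception:
        return False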

Unit Testing

Unit testing finds problems and errors at the module level before the software leaves development. Unit testing is accomplished by adding a small amount of code to the module that validates the module's responses.
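A minimal example of the idea, using Python's unittest module and a hypothetical sales_tax function as the unit under test:

import unittest

def sales_tax(amount, rate=0.06):
    # Module under test (hypothetical): compute sales tax, rounded to cents.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

class SalesTaxTest(unittest.TestCase):
    def test_typical_amount(self):
        self.assertEqual(sales_tax(100.00), 6.00)
    def test_zero_amount(self):
        self.assertEqual(sales_tax(0), 0)
    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            sales_tax(-1)

if __name__ == "__main__":
    unittest.main()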

System-Level Test

System-level tests consist of batteries of tests that are designed to fully exercise a program as a whole and check that all elements of the integrated system function properly.

Functional System Testing

System tests check that the software functions properly from end to end. The components of the system include a database, Web-enabled application software modules, Web servers, Web application frameworks, Web browser software, TCP/IP networking routers, media servers that stream audio and video, and messaging services for email.

A common mistake of test professionals is to believe that they are conducting system tests while they are actually testing a single component of the system. For example, checking that the Web server returns a page is not a system test if the page contains only a static HTML page.

System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.

A system testing checklist includes questions about:

Functional completeness of the system or the add-on module

Runtime behavior on various operating systems or different hardware configurations

Installability and configurability on various systems

Capacity limitations (maximum file size, number of records, maximum number of concurrent users, etc.)

Behavior in response to problems in the programming environment (system crash, unavailable network, full hard-disk, printer not ready)

Protection against unauthorized access to data and programs.

"black-box" (or functional) testing

Black-box testing is testing without knowledge of the internal workings of the item being tested. The outside world comes into contact with the test item only through the application interface, an internal module interface, or the input/output description of a batch process. Black-box tests check whether interface definitions are adhered to in all situations and whether the product conforms to all fixed requirements. Test cases are created based on the task descriptions.

Black-box testing assumes that the tester does not know anything about the application that is going to be tested. The tester needs to understand what the program should do, and this is achieved through the business requirements and by meeting and talking with users.

Functional tests: This type of test evaluates a specific operating condition using inputs and validating results. Functional tests are designed to test boundaries. A combination of correct and incorrect data should be used in this type of test.

Scalability and Performance Testing

Scalability and performance testing is the way to understand how the system will handle the load caused by many concurrent users. In a Web environment, concurrent use is measured as simply the number of users making requests at the same time.

Performance testing is designed to measure how quickly the program completes a given task. The primary objective is to determine whether the processing speed is acceptable in all parts of the program. If explicit requirements specify program performance, then performance tests are often performed as acceptance tests.

As a rule, performance tests are easy to automate. This is especially useful when you want to compare the performance of different system configurations through the user interface, since capturing and automatically replaying user actions removes the variation that manual interaction introduces into measured response times.

This type of test should be designed to verify response and execution time. Bottlenecks in a system are generally found during this stage of testing.
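A bare-bones response-time measurement can be scripted as below. The URL and the 2-second requirement are hypothetical; a real performance test would also control the test environment and record far more detail.

import time
import urllib.request

def measure_response_time(url, samples=10):
    # Time several requests and return the average and worst response time in seconds.
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings), max(timings)

average, worst = measure_response_time("http://test-server.example.com/")
assert worst < 2.0, "response time exceeded the 2-second requirement"
print(f"average {average:.3f}s, worst {worst:.3f}s")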

Stress Testing

Stress testing overwhelms the product to assess its performance, reliability, and efficiency; it looks for the breakpoint at which the system fails, progressively increasing the load to find the maximum number of concurrent users the system can handle.

Stress tests force programs to operate under limited resource conditions. The goal is to push the upper functional limits of a program to ensure that it can function correctly and handle error conditions gracefully. Examples of resources that may be artificially manipulated to create stressful conditions include memory, disk space, and network bandwidth. If other memory-oriented tests are also planned, they should be performed here as part of the stress test suite. Stress tests can be automated.

Breakpoint:

Finding the breakpoint exposes the capabilities and weaknesses of the product under:

High volumes of data

Device connections

Long transaction chains

Stress Test Environment:

As you set up your testing environment for a stress test, you need to make sure you can answer the following questions:

Will my test be able to support all the users and still maintain performance?

Will my test be able to simulate the number of transactions that pass through in a matter of hours?

Will my test be able to uncover whether the system will break?

Will my server crash if the load continues over and over?

The test should be set up so that you can simulate the load; for example:

If you have a remote Web site you should be able to monitor up to four Web sites or URLs.

There should be a way to monitor the load intervals.

The load test should be able to simulate SSL (Secure Sockets Layer) connections.

The test should be able to simulate a user submitting form data (GET method).

The test should be set up to simulate and authenticate keyword verification.

The test should be able to simulate up to six email or pager mail addresses and an alert should occur when there is a failure.

It is important to remember when stressing your Web site to give a certain number of users a page to stress test and give them a certain amount of time in which to run the test.
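As a rough sketch of that setup, the following Python fragment simulates a fixed number of users, each requesting a page repeatedly for a fixed period with a short think time. The URL, user count, duration, and think time are all placeholder assumptions.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def one_user(url, duration_seconds, think_time=1.0):
    # Simulate one user repeatedly requesting a page for a fixed period.
    requests_made = errors = 0
    end = time.time() + duration_seconds
    while time.time() < end:
        try:
            urllib.request.urlopen(url).read()
            requests_made += 1
        except Exception:
            errors += 1
        time.sleep(think_time)   # the time the user "spends" on the page
    return requests_made, errors

def stress(url, users=50, duration_seconds=300):
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_user(url, duration_seconds), range(users)))
    total = sum(r for r, _ in results)
    failed = sum(e for _, e in results)
    print(f"{users} users, {total} requests, {failed} errors in {duration_seconds}s")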

Some of the key data features that can help you measure this type of stress test, determine the load, and uncover bottlenecks in the system are:

Amount of memory available and used

The processor time used

The number of requests per second

The amount of time it takes ASP pages to be set up.

Server timing errors.

Load Testing

Load testing is the process of modeling application usage conditions and running them against the application and system under test, in order to analyze the application and system and determine capacity, throughput, speed, transaction-handling capability, scalability, and reliability while under stress.

This type of test is designed to identify possible overloads to the system, such as too many users signed on to the system, too many terminals on the network, and a network that is too slow.

Load testing is a simulation of how a browser will respond to intense use by many individuals. The Web sessions can be recorded live and set up so that the test can be run during peak times and also during slow times. The following are two different types of load tests:

Single session - A single session should be set up on a browser that will have one or multiple responses. The timing of the data should be put in a file. After the test, you can set up a separate file for report analysis.

Multiple session - A multiple session should be developed on multiple browsers with one or multiple responses. Multivariate statistical methods may be needed for a complex but general performance model.

When performing stress testing, looping transactions back on themselves so that the system stresses itself simulates stress loads and may be useful for finding synchronization problems and timing bugs, Web priority problems, memory bugs, and Windows API problems. For example, you may want to simulate an incoming message that is then put out on a looped-back line; this in turn will generate another incoming message. Then you can use another system of comparable size to create the stress load.

Memory leaks are often found under stress testing. A memory leak occurs when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after several iterations available memory is reduced until the system fails.
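Within a single Python process, that pattern can be detected by sampling allocated memory across iterations, for example with the standard tracemalloc module. The operation argument below stands for whatever function is being stressed; steady growth in the reported figure suggests a leak.

import tracemalloc

def check_for_leak(operation, iterations=1000):
    # Run an operation repeatedly and report memory that was never returned.
    tracemalloc.start()
    operation()                      # warm-up: caches and lazy imports allocate once
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        operation()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    growth = current - baseline
    print(f"retained growth after {iterations} iterations: {growth} bytes")
    return growth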

Peak Load and Testing Parameters:

Determining your peak load is important before beginning the assessment of the Web site test. It may mean more than just using user requests per second to stress the system. There should be a combination of determinants, such as requests per second, processor time, and memory usage. You should also consider the type of information on your Web page, from graphics and code processing (such as scripts) to ASP pages. Then it is important to determine what is fast and what is slow for your system. The type of connection can be a critical component here, such as T1 or T3 versus a modem hookup. After you have selected your threshold, you can stress your system to additional limits.

As a tester you need to set up test parameters to make sure you can log the number of users coming into and leaving the test. This should be started in a small way and steadily increased. The test should also begin by selecting a test page that may not have a large amount of graphics and steadily increasing the complexity of the test by increasing the number of graphics and image requests. Keep in mind that images will take up additional bandwidth and resources on the server but do not really have a large impact on the server's processor.

Another important item to remember is that you need to account for the length of time the user will spend surfing each page. As you test, you should set up a log to determine the approximate time spent on each page, whether it is 25 or 30 seconds. It may be recorded that each user spends at least 30 seconds on each page, which produces a heightened, sustained load on the server as requests queue up; this can be analyzed as the test continues.

Load/Volume Test

Load/volume tests study how a program handles large amounts of data, excessive calculations, and excessive processing. These tests do not necessarily have to push or exceed upper functional limits. Load/volume tests can, and usually must, be automated.

Focus of Load/Volume Testing

Pushing through large amounts of data with extreme processing demands.

Requesting many processes simultaneously.

Repeating tasks over a long period of time

Load/volume tests, which involve extreme conditions, are normally run after the execution of feature-level tests, which prove that a program functions correctly under normal conditions.

Difference between Load and Stress Testing

The idea of stress testing is to find the breaking point, in order to find bugs that will make that break potentially harmful. Load testing is merely testing at the highest transaction arrival rate in performance testing to observe resource contention, database locks, and so on.

Web Capacity Testing (Load and Stress)

The performance of the load or stress test Web site should be monitored with the following in mind:

The load test should be able to support all browsers.

The load test should be able to support all Web servers.

The tool should be able to simulate up to 500 users or playback machines.

The tool should be able to run on Windows NT, Linux, Solaris, and most Unix variants.

There should be a way to simulate various users at different connection speeds.

After the tests are run, you should be able to report the transactions, URLs, and number of users who visited the site.

The test cases should be assembled in a like fashion to set up test suites.

There should be a way to test the different server and port addresses.

There should be a way to account for the user's cookies.

Performance Test

The primary goal of performance testing is to develop effective enhancement strategies for maintaining acceptable system performance. Performance testing is a capacity analysis and planning process in which measurement data are used to predict when load levels will exhaust system resources.

The Mock Test

It is a good idea to set up a mock test before you begin your actual test. This is a way to measure the server's performance under stress. As you progress with your stress testing, you can set up a measurement of metrics to determine the efficiency of the test.

After the initial test, you can determine the breaking point for the server. It may be a processor problem or even a memory problem. You need to be able to check your log to determine the average amount of time that it takes your processor to perform the test. Running graphics or even ASP pages can cause processor problems and limitations every time you run your stress test.

Memory tends to be a problem with the stress test. This may be due to a memory leak or a lack of memory. You need to log and monitor the amount of disk capacity during the stress test. As mentioned earlier, bandwidth can account for a slowdown in the Web site's processing speed. If the test hangs and there is a large waiting period, your processor is unable to handle the amount of stress on the system.

Simulate Resources

It is important to be able to run the system under high stress so that you can actually simulate the resources and understand how to handle a specific load. For example, a bank transaction processing system may be designed to process up to 150 transactions per second, whereas an operating system may be designed to handle up to 200 separate terminals. The different tests need to be designed to ensure that the system can process the expected load. This type of testing usually involves planning a series of tests where the load is gradually increased to reflect the expected usage pattern. The stress tests can then steadily increase the load on the system beyond the maximum design load until the system fails.

This type of testing has the dual function of testing the system for failure and looking for the combination of events that occurs when a load is placed on the server. Stress testing can then determine whether overloading the system results in loss of data or loss of user service to customers. The use of stress testing is particularly relevant to an e-commerce system with a Web database.

Increase Capacity Testing

When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When you test the ASP pages, you may want to create a page similar to the original page that simulates the same items on the ASP page and have it send the information to a test bed with a process that produces just a small data output. By doing this, you will have your processor still stressing the system but not taking up the bandwidth by sending the HTML code along the full path. This will not stress the entire code but will give you a basis from which to work. Dividing the requests per second by the total number of users or threads gives the throughput each user is actually getting, and it tells you at what point the server starts becoming less efficient at handling the load. Let's look at an example. Say your test with 50 users shows your server can handle 5 requests per second, with 100 users it is 10 requests per second, with 200 users it is 15 requests per second, and eventually with 300 users it is 20 requests per second. Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Let's look at the ratios:

05/50 = 0.1

10/100 = 0.1

15/200 = 0.075

20/300 = 0.067

From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process and does give you some indication of how much leeway you have to handle expected peaks.
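The same ratio check is easy to script so it can be recomputed for every test run; the sample figures below are the ones from the example above.

def requests_per_user(samples):
    # samples: a list of (concurrent_users, requests_per_second) pairs.
    for users, rps in samples:
        print(f"{users:4d} users: {rps:5.1f} req/s -> {rps / users:.3f} req/s per user")

# Figures from the example above: efficiency drops as the load grows.
requests_per_user([(50, 5), (100, 10), (200, 15), (300, 20)])
# prints 0.100, 0.100, 0.075, 0.067 requests per second per user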

Stateful testing

When you use a Web-enabled application to set a value, does the server respond correctly later on?

Privilege testing

What happens when the everyday user tries to access a control that is authorized only for administrators?

Speed testing

Is the Web-enabled application taking too long to respond?

Boundary Test

Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values, so it is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing. If so, it is useful to determine how to drive it through its extremes and special conditions, such as zero or overflow.
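Boundary cases lend themselves to table-driven tests. The sketch below exercises the lower boundary, the upper boundary, and the values just inside and just outside them for a text field; the 255-character limit and the submit_name_field helper are hypothetical.

import pytest

FIELD_MAX = 255        # hypothetical maximum length for a text field

@pytest.mark.parametrize("length,should_accept", [
    (0,             True),    # lower boundary
    (1,             True),    # just above the lower boundary
    (FIELD_MAX - 1, True),    # just below the upper boundary
    (FIELD_MAX,     True),    # upper boundary
    (FIELD_MAX + 1, False),   # just past the upper boundary
])
def test_name_field_boundaries(length, should_accept):
    # submit_name_field is a hypothetical helper that drives the UI or API.
    accepted = submit_name_field("x" * length)
    assert accepted == should_accept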

Boundary timing testing

What happens when your Web-enabled application request times out or takes a really long time to respond?

Regression testing

Did a new build break an existing function? Repeat testing after changes to manage the risks related to product enhancement.

A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results.

Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs that have been fixed once should not turn up again in subsequent program versions.

Regression testing: after every software modification or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and existing functions all work correctly.

Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional remain intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
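One common way to automate this is to give every fixed bug a permanent, named regression test that is rerun on each build. The sketch below uses pytest markers; the issue numbers and helper functions are hypothetical.

import pytest

# Each fixed defect gets a permanent regression test, named after the issue it covers,
# so that a later build cannot quietly reintroduce the bug.

@pytest.mark.regression
def test_issue_482_discount_not_applied_twice():
    order = price_order(items=[("widget", 2)], coupon="SAVE10")   # hypothetical helper
    assert order.total == 18.00        # 2 x 10.00 minus a single 10% discount

@pytest.mark.regression
def test_issue_517_unicode_name_saved_intact():
    saved = save_customer(name="Åsa Öberg")                       # hypothetical helper
    assert saved.name == "Åsa Öberg"

A command such as pytest -m regression would then rerun the whole regression cycle against a new build.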

CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN

Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:

If an issue is confirmed as fixed, then the issue report status should be changed to Closed.

If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.

If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.

Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, each issue's status is confirmed: either the report is reproducible as is with no modification, it is reproducible with additional comments or modifications, or it is no longer reproducible.

Closed-fixed regression cycle. In the final phase of testing, a full-regression test cycle should be run to confirm the status of all fixed-closed issues.

Feature regression cycle. Each time a new build is cut, or in the final phase of testing (depending on the organization's procedure), a full regression test cycle should be run to confirm that the features previously proven correctly functional are still working as expected.

Database Testing

Items to check when testing a database:

What to test      Environment                Tool/technique
Search results    System test environment    Black-box and white-box techniques
Response time     System test environment    Syntax testing / functional testing
Data integrity    Development environment    White-box testing
Data validity     Development environment    White-box testing

Query response time

The turnaround time for responding to queries in a database must be short; therefore, query response time is essential for online transactions. The results from this test will help to identify problems, such as possible bottlenecks in the network, specific queries, the database structure, or the hardware.
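A simple timing harness like the one below can flag queries that exceed a response-time target. The catalog.db file, products table, and 200-millisecond target are hypothetical assumptions; against a production-class database the same idea would run through the usual client driver.

import sqlite3
import time

def time_query(connection, sql, runs=20):
    # Return the average wall-clock time in milliseconds for a query.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        connection.execute(sql).fetchall()
        timings.append(time.perf_counter() - start)
    return 1000 * sum(timings) / len(timings)

db = sqlite3.connect("catalog.db")      # hypothetical catalog database
average_ms = time_query(db, "SELECT * FROM products WHERE price < 20")
assert average_ms < 200, "query exceeded the 200 ms response-time target"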

Data integrity

Data stored in the database should include such items as the catalog, pricing, shipping tables, tax tables, order database, and customer information. Testing must verify the integrity of the stored data. Testing should be done on a regular basis because data changes over time.

Data integrity tests

Data integrity can be tested as follows to ensure that the data is valid and not corrupt (a small sketch follows the list):

Test the creation, modification, and deletion of data in tables as specified in the business requirement.

Test to make sure that sets of radio buttons represent a fixed set of values. You should also check for NULL or EMPTY values.

Test to make sure that data is saved to the database and that each value gets saved fully. You should watch for the truncation of strings and make sure that numeric values are not rounded off.

Test to make sure that default values are stored and saved.

Test the compatibility with old data. You should ensure that all updates do not affect the data you have on file in your database.
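As a small sketch of the save/read-back style of check mentioned above (here against a throwaway SQLite database, purely for illustration):

import sqlite3

def test_value_saved_fully():
    # A minimal create/insert/read-back check against a throwaway database.
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE customer (name TEXT, balance NUMERIC)")
    connection.execute("INSERT INTO customer VALUES (?, ?)", ("A" * 300, 1234.56))
    name, balance = connection.execute("SELECT name, balance FROM customer").fetchone()
    assert len(name) == 300          # the string was not truncated
    assert balance == 1234.56        # the numeric value was not rounded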

Data validity

The most common data errors are due to incorrect data entry, called data validity errors.

Recovery testing

The system recovers from faults and resumes processing within a predefined period of time.

The system is fault-tolerant, which means that processing faults do not halt the overall functioning of the system.

Data recovery and restart are correct in case of automatic recovery. If recovery requires human intervention, the mean time to repair the database is within predefined acceptable limits.

When testing a SQL Server database

If the Web site publishes from inside the SQL Server straight to a Web page, is the data accurate and of the correct data type?

If the SQL Server reads from a stored procedure to produce a Web page or if the stored procedure is changed, does the data on the page change?

If you are using FrontPage or Visual InterDev, is the data connection to your pages secure?

Does the database have scheduled maintenance with a log so testers can see changes or errors?

Can the tester check to see how backups are being handled?

Is the database secure?

When testing an Access database

If the database is creating Web pages from the database to a URL, is the information correct and updated? If the pages are not dynamic or Active Server pages, they will not update automatically.

If the tables in the database are linked to another database, make sure that all the links are active and giving relevant information.

Are the fields such as zip code, phone numbers, dates, currency, and social security number formatted properly?

If there are formulas in the database, do they work? How will they take care of updates if numbers change (for example, updating taxes)?

Do the forms populate the correct tables?

Is the database secure?

When testing a FoxPro database

If the database is linked to other databases, are the links secure and working?

If the database publishes to the Internet, is the data correct?

When data is deployed, is it still accurate?

Do the queries give accurate information to the reports?

If the database performs calculations, are the calculations accurate?

Other important database and security features

Credit Card Transactions

Shopping Carts

Payment Transaction Security

Secure Sockets Layer (SSL)

SSL is the leading security protocol on the Internet.

When an SSL session is started, the server sends its public key to the browser, which the browser uses to send a randomly generated secret key back to the server in order to establish a secret key exchange for that session.

SSL is a protocol that has been submitted to the WWW Consortium (W3C) working group on security for consideration as a standard. The security handshake is used to initiate the TCP/IP connection; it results in the client and server agreeing on the level of security they will use and fulfills any authentication requirements for the connection. SSL's role is then to encrypt and decrypt the byte stream of the application protocol being used. This means that all the information in both the HTTP request and the HTTP response is fully encrypted, including the URL the client is requesting, any submitted form contents (such as credit card numbers), any HTTP access authorization information (user names and passwords), and all the data returned from the server to the client.
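The negotiated protocol, cipher, and server certificate can be inspected from a test script, which is a quick way to confirm that a deployment really is encrypting traffic as expected. The sketch below uses Python's standard ssl module; the host name is a placeholder.

import socket
import ssl

def inspect_tls(host, port=443):
    # Open a secure connection and report the negotiated protocol, cipher, and peer certificate.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as secure:
            print("protocol:", secure.version())       # e.g. 'TLSv1.2' or 'TLSv1.3'
            print("cipher:  ", secure.cipher())
            print("subject: ", secure.getpeercert().get("subject"))

inspect_tls("www.example.com")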

Transport Layer Security (TLS)

TLS is a major security standard on the Internet. TLS is backward compatible with SSL and uses Triple Data Encryption Standard (Triple DES) encryption.