Tuesday, April 19, 2011

TEST PLAN

• The test plan keeps track of the tests that will be run on the system once coding is complete.
• The test plan is a document that develops as the project is being developed.
• Tests are recorded as they come up.
• Testing concentrates on the error-prone parts of the software.
• The initial test plan is abstract and the final test plan is concrete.
• The initial test plan contains high level ideas about testing the system without getting into the details of exact test cases.
• The most important test cases come from the requirements of the system.
• When the system is in the design stage, the initial tests can be refined a little.
• During the detailed design or coding phase, exact test cases start to materialize.
• After coding, the test points are all identified and the entire test plan is exercised on the software.

Purpose of Software Test Plan:

• To work toward correct code: ensure all Functional and Design Requirements are implemented as specified in the documentation.
• To provide a procedure for Unit and System Testing (a minimal unit-test sketch follows this list).
• To identify the documentation process for Unit and System Testing.
• To identify the test methods for Unit and System Testing.
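As a minimal illustration of the unit-testing procedure such a plan calls for, the sketch below uses Python's unittest module to check one hypothetical functional requirement; the function, the requirement ID, and the figures are all invented for the example:

    import unittest

    def monthly_payment(principal, annual_rate, months):
        # Hypothetical unit under test: fixed-rate payment on a line of credit.
        r = annual_rate / 12
        return principal * r / (1 - (1 + r) ** -months)

    class TestMonthlyPayment(unittest.TestCase):
        def test_fr12_known_value(self):
            # Expected result (for invented requirement FR-12) is worked out
            # before the test is executed.
            self.assertAlmostEqual(monthly_payment(1000, 0.12, 12), 88.85, places=2)

    if __name__ == "__main__":
        unittest.main()
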
Advantages of test plan

• Serves as a guide to testing throughout the development.
• By the time the testing phase begins, the test points have already been defined.
• Serves as a valuable record of what testing was done.
• The entire test plan can be reused if regression testing is done later on.
• The test plan itself could have defects just like software!

In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including
• Scope of testing
• Schedule
• Test Deliverables
• Release Criteria
• Risks and Contingencies

Process of the Software Test Plan

• Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.
• Identify which particular test(s) you're going to use to test each module.
• Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
• Identify the expected results for each test.
• Document the test case configuration, test data, and expected results. This information shall be submitted via the on-line Test Case Design (TCD) and filed in the unit's Software Development File (SDF); a sketch of such a record follows below. A successful Peer Technical Review baselines the TCD and initiates coding.
• Perform the test(s).
• Document the test data, test cases, and test configuration used during the testing process. This information shall be submitted via the on-line Unit/System Test Report (STR) and filed in the unit's Software Development File (SDF).
• Successful unit testing is required before the unit is eligible for component integration/system testing.
• Unsuccessful testing requires a Program Trouble Report to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.
• Test documents and reports shall be submitted on-line. Any specifications to be reviewed, revised, or updated shall be handled immediately.
Deliverables: Test Case Design, System/Unit Test Report, Program Trouble Report (if any).
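As a rough illustration of what such a documented test case record might capture (the field names and values here are assumptions for the example, not the actual TCD form):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TestCaseRecord:
        # Hypothetical fields loosely mirroring a Test Case Design entry.
        case_id: str
        requirement: str           # requirement traced by this case
        configuration: str         # build and environment under test
        test_data: dict            # known inputs
        expected_result: str       # worked out before execution
        actual_result: str = ""    # filled in when the test is performed
        passed: Optional[bool] = None

    record = TestCaseRecord(
        case_id="TCD-042",
        requirement="FR-12: compute the monthly line-of-credit payment",
        configuration="Build 1.3.0 on test environment A",
        test_data={"principal": 1000, "annual_rate": 0.12, "months": 12},
        expected_result="88.85",
    )
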
Test plan template
• Test Plan Identifier
• References
• Introduction of Testing
• Test Items
• Software Risk Issues
• Features to be Tested
• Features not to be Tested
• Approach
• Item Pass/Fail Criteria
• Entry & Exit Criteria
• Suspension Criteria and Resumption Requirements
• Test Deliverables
• Remaining Test Tasks
• Environmental Needs
• Staffing and Training Needs
• Responsibilities
• Schedule
• Planning Risks and Contingencies
• Approvals
• Glossary
Test plan identifier
Master test plan for the Line of Credit Payment System.
References
List all documents that support this test plan.
Documents that are referenced include:
• Project Plan
• System Requirements Specification.
• High-level design document.
• Detailed design document.
• Development and Test process standards.
• Methodology guidelines and examples.
• Corporate standards and guidelines.
Objective of the plan
Scope of the plan
Describe the scope of the plan in relation to the Software Project Plan it supports. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (Analysis & Reviews), and possibly the process to be used for change control and for communication and coordination of key activities.
Test items (functions)
These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.
This can be controlled under a local Configuration Management (CM) process if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the client.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.
Software risk issues
Identify what software is to be tested and what the critical areas are, such as:
• Delivery of a third party product.
• New version of interfacing software.
• Ability to use and understand a new package/tool, etc.
• Extremely complex functions.
• Modifications to components with a past history of failure.
• Poorly documented modules or change requests.
There are some inherent software risks such as complexity; these need to be identified.
• Safety.
• Multiple interfaces.
• Impacts on Client.
• Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If unit testing discovered a large number of defects, or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together: code that was defect-ridden earlier will most likely continue to be defect-prone.
One good approach to define where the risks are is to have several brainstorming sessions.
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a user's view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These levels are understandable to a User. You should be prepared to discuss why a particular level was chosen; a sketch of how such ratings can drive test ordering follows.
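A minimal sketch of risk-based ordering, assuming invented feature names and a simple numeric weight per rating:

    # Order features for testing by their assigned risk rating.
    RISK_WEIGHT = {"H": 3, "M": 2, "L": 1}

    features = {
        "process a payment": "H",
        "print a statement": "M",
        "change the colour scheme": "L",
    }

    # Highest-risk features come first in the test order.
    for name, level in sorted(features.items(),
                              key=lambda item: RISK_WEIGHT[item[1]],
                              reverse=True):
        print(f"{level}: {name}")
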
Features not to be tested
This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons:
• Not to be included in this release of the Software.
• Low risk: it has been used before and is considered stable.
• Will be released but not tested or documented as a functional part of the release of this version of the software.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected? (A selection sketch follows this list.)
• How will elements in the requirements and design that do not make sense or are untestable be processed?
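One way to base regression selection on defect severity is sketched below; the case IDs, severity labels, and threshold are invented for the example:

    # Keep regression cases whose linked defects meet a severity threshold.
    SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3}

    regression_suite = [
        {"case": "TC-101", "defect_severity": "high"},
        {"case": "TC-102", "defect_severity": "low"},
        {"case": "TC-103", "defect_severity": "medium"},
    ]

    def select_for_regression(cases, threshold="medium"):
        # Retain cases tied to defects at or above the threshold severity.
        cutoff = SEVERITY_RANK[threshold]
        return [c["case"] for c in cases
                if SEVERITY_RANK[c["defect_severity"]] >= cutoff]

    print(select_for_regression(regression_suite))   # -> ['TC-101', 'TC-103']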

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
• Only the full component will be tested.
• A specified segment or grouping of features/components must be tested together.
Other information that may be useful in setting the approach:
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available (a toy computation follows this list).
• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
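For reference, MTBF is simply the total operating time divided by the number of failures observed over that time; a sketch with made-up figures:

    # MTBF = total operating time / number of failures observed.
    operating_hours = 1200.0   # hypothetical cumulative operating time
    failure_count = 4          # failures observed over that period

    mtbf = operating_hours / failure_count
    print(f"MTBF = {mtbf:.0f} hours")   # -> MTBF = 300 hours
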
How will meetings and other organizational processes be handled?
Item pass/fail criteria
Identify show-stopper issues. Specify the criteria to be used to determine whether each test item has passed or failed.
Entry & exit criteria
Entrance Criteria
The Entrance Criteria specified by the system test controller should be fulfilled before System Test can commence. In the event that any criterion has not been achieved, the System Test may commence if the Business Team and Test Controller are in full agreement that the risk is manageable.
• All developed code must be unit tested. Unit and Link Testing must be completed and signed off by development team.
• System Test plans must be signed off by Business Analyst and Test Controller.
• All human resources must be assigned and in place.
• All test hardware and environments must be in place, and free for System test use.
• The Acceptance Tests must be completed, with a pass rate of not less than 80% (a pass-rate check is sketched below).
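Checking a pass-rate criterion like this is simple arithmetic; a sketch, with hypothetical counts:

    # Entrance criterion: acceptance-test pass rate of not less than 80%.
    passed, executed = 42, 50            # hypothetical acceptance-test counts
    pass_rate = passed / executed        # 0.84

    if pass_rate >= 0.80:
        print(f"Criterion met: {pass_rate:.0%} pass rate")
    else:
        print(f"Criterion NOT met: {pass_rate:.0%} pass rate")
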
Exit Criteria
The Exit Criteria detailed below must be achieved before the Phase 1 software can be recommended for promotion to Operations Acceptance status. Furthermore, I recommend a minimum of two days' effort of Final Integration testing AFTER the final fix/change has been retested.
• All High Priority errors from System Test must be fixed and tested.
• If any medium- or low-priority errors are outstanding, the implementation risk must be signed off as acceptable by the Business Analyst and Business Expert.
Suspension Criteria and Resumption Requirements:
This is a particular risk clause to define under what circumstances testing would stop and restart.

Resumption Criteria
In the event that system testing is suspended, resumption criteria will be specified, and testing will not re-commence until the software meets these criteria.
• Project Integration Test must be signed off by Test Controller and Business Analyst.
• Business Acceptance Test must be signed off by Business Expert.

Risks and Contingencies
This defines all other risk events, their likelihood and impact, and the countermeasures to overcome them.
Summary
The goal of this exercise is to familiarize students with the process of creating test plans.
This exercise is divided into three tasks as described below.
• Devise a test plan for the group.
• Inspect the plan of another group who will in turn inspect yours.
• Improve the group's own plan based on comments from the review session.
Task 1: Creating a Test Plan
The role of a test plan is to guide all testing activities. It defines what is to be tested and what is to be overlooked, how the testing is to be performed (described on a general level) and by whom. It is therefore a managerial document, not a technical one - in essence, it is a project plan for testing. Therefore, the target audience of the plan should be a manager with a decent grasp of the technical issues involved.
Experience has shown that good planning can save a lot of time, even in an exercise, so do not underestimate the effort required for this phase.
The goal of all these exercises is to carry out system testing on WordPad, a simple word processor. Your task is to write a thorough test plan in English using the above-mentioned sources as guidelines. The plan should be based on the documentation of WordPad.




Task 2: Inspecting a Test Plan

The role of a review is to make sure that a document (or code in a code review) is readable and clear and that it contains all the necessary information and nothing more. Some implementation details should be kept in mind:
• The groups will divide their roles themselves before arriving at the inspection. A failure to follow the roles correctly will be reflected in the grading. However, one of the assistants will act as the moderator and will not assume any other roles.
• There will be only one meeting with the other group and the moderator. All planning, overview and preparation is up to the groups themselves. You should use the suggested checklists in the lecture notes while preparing. Task 3 deals with the after-meeting activities.
• The meeting is rather short, only 60 minutes for a pair (that is, 30 minutes each). Hence, all comments on the language used in the other group's test plan are to be given in writing. The meeting itself concentrates on the form and content of the plan.

Task 3: Improved Test Plan and Inspection Report

After the meeting, each group will prepare a short inspection report on their test plan listing their most typical and important errors in the first version of the plan together with ideas for correcting them. You should also answer the following questions in a separate document:
• What is the role of the test plan in designing test cases?
• What were the most difficult parts in your test plan and why?
Furthermore, the test plan is to be revised according to the input from the inspection.

Test Case:
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
In software engineering, a test case is a set of conditions or variables under which a tester will determine whether a requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. Some methodologies recommend creating at least two test cases for each requirement: one should perform positive testing of the requirement and the other should perform negative testing.
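A minimal sketch of this positive/negative pairing, using Python's unittest; the function and the requirement it stands in for are hypothetical:

    import unittest

    def withdraw(balance, amount):
        # Hypothetical unit: a withdrawal must not exceed the balance.
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class TestWithdrawRequirement(unittest.TestCase):
        def test_positive_valid_withdrawal(self):
            # Positive case: valid input, expected output known in advance.
            self.assertEqual(withdraw(100, 40), 60)

        def test_negative_overdraft_rejected(self):
            # Negative case: invalid input must be rejected.
            with self.assertRaises(ValueError):
                withdraw(100, 150)

    if __name__ == "__main__":
        unittest.main()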

If the application is created without formal requirements, then test cases are written based on the accepted normal operation of programs of a similar class.
What characterises a formal, written test case is that there is a known input and an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for a new product. The first test is taken as the baseline for subsequent test/product release cycles.
Written test cases include a description of the functionality to be tested taken from either the requirements or use cases, and the preparation required to ensure that the test can be conducted.
Written test cases are usually collected into Test suites.
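As an illustration of collecting individual cases into a suite, the sketch below uses unittest.TestSuite with two throwaway example tests:

    import unittest

    class TestExample(unittest.TestCase):
        def test_addition(self):
            self.assertEqual(1 + 1, 2)

        def test_prefix(self):
            self.assertTrue("abcdef".startswith("abc"))

    # Collect the individual cases into a suite and run them together.
    suite = unittest.TestSuite()
    suite.addTest(TestExample("test_addition"))
    suite.addTest(TestExample("test_prefix"))

    if __name__ == "__main__":
        unittest.TextTestRunner().run(suite)
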
A variation of test cases is most commonly used in acceptance testing. Acceptance testing is done by a group of end-users or clients of the system to ensure the developed system meets their requirements. User acceptance testing is usually differentiated by the inclusion of happy-path or positive test cases.
