Tuesday, April 19, 2011

TESTING METRICS

What is a Test Metric?

Test metrics provide a way to analyze the current level of maturity of the testing and to predict future trends. The ultimate aim is to enhance the testing process: activities that were missed in the current cycle are added in the next build so that testing becomes more effective over time.
Metrics are numerical data that help us measure test effectiveness.
Metrics are produced in two forms:
1. Base Metrics and
2. Derived Metrics.
Examples of Base Metrics:

# Test Cases
# New Test Cases
# Test Cases Executed
# Test Cases Unexecuted
# Test Cases Re-executed
# Passes
# Fails
# Test Cases Under Investigation
# Test Cases Blocked
# 1st Run Fails
# Test Case Execution Time
# Testers

Examples of Derived Metrics:

% Test Cases Complete
% Test Cases Passed
% Test Cases Failed
% Test Cases Blocked
% Test Defects Corrected
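As a rough illustration, here is a minimal Python sketch (with invented counts; the field names are not taken from any specific tool) of how the derived metrics above can be computed from the base metrics:

# Minimal sketch: computing derived metrics from base test counts.
# All counts below are invented example figures.

def derived_metrics(total, executed, passed, failed, blocked,
                    defects_reported, defects_corrected):
    """Return the derived metrics as percentages of the relevant base counts."""
    def pct(part, whole):
        return round(100.0 * part / whole, 1) if whole else 0.0
    return {
        "% Test Cases Complete": pct(executed, total),
        "% Test Cases Passed": pct(passed, executed),
        "% Test Cases Failed": pct(failed, executed),
        "% Test Cases Blocked": pct(blocked, total),
        "% Test Defects Corrected": pct(defects_corrected, defects_reported),
    }

print(derived_metrics(total=200, executed=150, passed=120, failed=25,
                      blocked=10, defects_reported=40, defects_corrected=30))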

Objective of Test Metrics
The objective of test metrics is to capture the planned and actual quantities of effort, time and resources required to complete all the phases of testing of the software project.
Test metrics usually covers 3 things:
1. Test coverage
2. Time for one test cycle.
3. Convergence of testing

Why Testing metrics?
As we all know, a large percentage of software projects suffer from quality problems. Software testing provides visibility into product and process quality. Test metrics are key "facts" that help project managers understand their current position and prioritize their activities so as to reduce the risk of schedule over-runs on software releases.
Test metrics are a very powerful management tool. They help you measure your current performance, and because today's data becomes tomorrow's historical data, it is never too late to start recording key information on your project. This data can be used to improve future work estimates and quality levels. Without historical data, estimates will be guesses.
You cannot track the project status meaningfully unless you know the actual effort and time spent on each task as compared to your estimates. You cannot sensibly decide whether your product is stable enough to ship unless you track the rates at which your team is finding and fixing defects. You cannot quantify the performance of your new development processes without some statistics on your current performance and a baseline to compare it with. Metrics help you to better control your software projects. They enable you to learn more about the functioning of your organization by establishing a Process Capability baseline that can be used to better estimate and predict the quality of your projects in the future.
The benefits of having testing metrics
1. Test metrics data collection helps predict the long-term direction and scope for an organization and enables a more holistic view of business and identifies high-level goals
2. Provides a basis for estimation and facilitates planning for closure of the performance gap
3. Provides a means for control / status reporting
4. Identifies risk areas that require more testing
5. Provides meters to flag actions for faster, more informed decision making
6. Quickly identifies and helps resolve potential problems and identifies areas of improvement
7. Test metrics provide an objective measure of the effectiveness and efficiency of testing
Key factors to bear in mind while setting up test metrics
1. Collect only the data that you will actually use to make informed decisions and alter your strategies; if you are not going to change your strategy regardless of the findings, your time is better spent on testing.
2. Do not base decisions solely on data that is variable or can be manipulated. For example, measuring testers on the number of tests they write per day can reward them for speeding through superficial tests or punish them for tracking trickier functionality.
3. Use statistical analysis to get a better understanding of the data. Difficult metrics data should be analyzed carefully, and the templates used for presenting data should be self-explanatory.
4. One of the key inputs to the metrics program is the defect tracking system in which the reported process and product defects are logged and tracked to closure. It is therefore very important to carefully decide on the fields that need to be captured per defect in the defect tracking system and then generate customizable reports.
5. Metrics should be chosen on the basis of their importance to stakeholders rather than ease of data collection. Metrics that are of no interest to the stakeholders should be avoided.
6. Inaccurate data should be avoided and complex data should be collected carefully. Proper benchmarks should be defined for the entire program.
Deciding on the Metrics to Collect

There are literally thousands of possible software metrics to collect and possible things to measure about software development, and many books and training programs are available about software metrics. Which of the many metrics are appropriate for your situation? One method is to start with one of the many published suites of metrics and a vision of your own management problems and goals, and then customize the metrics list based on the following metrics collection checklist. For each metric, you must consider:
1) What are you trying to manage with this metric? Each metric must relate to a specific management area of interest in a direct way. The more convoluted the relationship between the measurement and the management goal, the less likely you are to be collecting the right thing.
2) What does this metric measure? Exactly what does this metric count? High-level attempts to answer this question (such as "it measures how much we've accomplished") may be misleading. Detailed answers (such as "it reports how much we had budgeted for design tasks that first-level supervisors are reporting as greater than 80 percent complete") are much more informative, and can provide greater insight into the accuracy and usefulness of any specific metric.
3) If your organization optimized this metric alone, what other important aspects of your software design, development, testing, deployment, and maintenance would be affected? Asking this question will provide a list of areas where you must check to be sure that you have a balancing metric. Otherwise, your metrics program may have unintended effects and drive your organization to undesirable behavior.
4) How hard/expensive is it to collect this information? This is where you actually get to identify whether collection of this metric is worth the effort. If it is very expensive or hard to collect, look for automation that can make the collection easier, or consider alternative metrics that can be substituted.
5) Does the collection of this metric interact with (or interfere with) other business processes? For example, does the metric attempt to gather financial information on a different periodic basis or with different granularity than your financial system collects and reports it? If so, how will the two quantitative systems be synchronized? Who will reconcile differences? Can the two collection efforts be combined into one and provide sufficient software metrics information?

6) How accurate will the information be after you collect it? Complex or manpower-intensive metrics collection efforts are often short circuited under time and schedule pressure by the people responsible for the collection. Metrics involving opinions (e.g., what percentage complete do you think you are?) are notoriously inaccurate. Exercise caution, and carefully evaluate the validity of metrics with these characteristics.
7) Can this management interest area be measured by other metrics? What alternatives to this metric exist? Always look for an easier-to-collect, more accurate, more timely metric that will measure relevant aspects of the management issue of concern.
Use of this checklist will help ensure the collection of an efficient suite of software development metrics that directly relates to management goals. Periodic review of existing metrics against this checklist is recommended.
Projects that are underestimated, over-budget, or that produce unstable products, have the potential to devastate the company. Accurate estimates, competitive productivity, and renewed confidence in product quality are critical to the success of the company.
Hoping to solve these problems as quickly as possible, the company management embarks on the 8-Step Metrics Program.
Step 1: Document the Software Development Process
Integrated Software does not have a defined development process. However, the new metrics coordinator does a quick review of project status reports and finds that the activities of requirements analysis, design, code, review, recode, test, and debugging describe how the teams spend their time. The inputs, work performed, outputs and verification criteria for each activity have not been recorded. He decides to skip these details for this "test" exercise. The recode activity includes only effort spent addressing software action items (defects) identified in reviews.
Step 2: State the Goals
The metrics coordinator sets out to define the goals of the metrics program. The list of goals in Step 2 of the 7-Step Metrics Program is broader than (yet still related to) the immediate concerns of Integrated Software. Discussion with the development staff leads to some good ideas on how to tailor these goals into specific goals for the company.
1. Estimates
The development staff at Integrated Software considers past estimates to have been unrealistic, as they were established using "finger in the wind" techniques. They suggest that the current plan could benefit from past experience, as the present project is very similar to past projects.
Goal: Use previous project experience to improve estimations of Productivity.
2. Productivity
Discussions about the significant effort spent in debugging center on a comment by one of the developers that defects found early in reviews have been faster to repair than defects discovered by the test group. It seems that both reviews and testing are needed, but the amount of effort to put into each is not clear.
Goal: Optimize defect detection and removal.
3. Quality
The test group at the company argues for exhaustive testing. This, however, is prohibitively expensive. Alternatively, they suggest looking at the trends of defects discovered and repaired over time to better understand the probable number of defects remaining.
Goal: Ensure that the defect detection rate during testing is converging towards a level that indicates that less than five defects per KSLOC will be discovered in the next year.
Step 3: Define Metrics Required to Reach Goals and Identify Data to Collect
Working from the Step 3 tables, the metrics coordinator chooses the following metrics for the metrics program.
Goal 1: Improve Estimates
• Actual effort for each type of software in PH
• Size of each type of software in SLOC
• Software product complexity (type)
• Labor rate (PH/SLOC) for each type
Goal 2: Improve Productivity
• Total number of person hours per activity
• Number of defects discovered in reviews
• Number of defects discovered in testing
• Effort spent repairing defects discovered in reviews
• Effort spent repairing defects discovered in testing
• Number of defects removed per effort spent in reviews and recode
• Number of defects removed per effort spent in testing and debug
Goal 3: Improve Quality
• Total number of defects discovered
• Total number of defects repaired
• Number of defects discovered / schedule date
• Number of defects repaired / schedule date


Types of test metrics
1. Product test metrics
i. Number of remarks
ii. Number of defects
iii. Remark status
iv. Defect severity
v. Defect severity index
vi. Time to find a defect
vii. Time to solve a defect
viii. Test coverage
ix. Test case effectiveness
x. Defects/KLOC
2. Project test metrics
i. Workload capacity ratio
ii. Test planning performance
iii. Test effort ratio
iv. Defect category
3. Process test metrics
i. Should be found in which phase
ii. Residual defect density
iii. Defect remark ratio
iv. Valid remark ratio
v. Bad fix ratio
vi. Defect removal efficiency
vii. Phase yield
viii. Backlog development
ix. Backlog testing
x. Scope changes

Product test metrics

I. Number of remarks
Definition
The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows undesired behavior. It may or may not result in software modification or changes to documentation.
Purpose
One of the earliest indicators to measure once testing commences; provides initial indications about the stability of the software.
Data to collect
Total number of remarks found.

II. Number of defects
Definition
The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications.
Purpose
Provides an indication of the quality of the software under test and of how many of the reported remarks turn out to be genuine defects.
Data to collect
Only remarks that resulted in modifying the software or the documentation are counted.

III. Remark status
Definition
The status of a defect can vary depending upon the defect-tracking tool that is used. Broadly, the following statuses are available:
To be solved: logged by the test engineers and waiting to be taken over by the software engineer.
To be retested: solved by the developer and waiting to be retested by the test engineer.
Closed: the issue was retested by the test engineer and approved.
Purpose
Track the progress with respect to entering, solving and retesting the remarks. During this phase, the information is useful to know the number of remarks logged, solved, waiting to be resolved and retested.
Data to collect
This information can normally be obtained directly from the defect tracking system based on the remark status.

IV. Defect severity
Definition
The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence).
Purpose
Provides indications about the quality of the product under test. A high-severity defect means low product quality, and vice versa. At the end of this phase, this information is useful to make the release decision based on the number of defects and their severity levels.
Data to collect
Every defect has severity levels attached to it. Broadly, these are Critical, Serious, Medium and Low.

V. Defect severity index
Definition
An index representing the average of the severity of the defects.
Purpose
Provides a direct measurement of the quality of the product—specifically, reliability, fault tolerance and stability.
Data to collect
Two measures are required to compute the defect severity index. A number is assigned to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply the number of defects at each severity level by that level's number and add the totals; divide this by the total number of defects to determine the defect severity index.
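As an illustration only, here is a small Python sketch of that calculation, using invented defect counts:

# Sketch of the defect severity index calculation described above.
SEVERITY_WEIGHT = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

def defect_severity_index(defect_counts):
    """defect_counts maps a severity level to the number of defects at that level."""
    weighted = sum(SEVERITY_WEIGHT[level] * count
                   for level, count in defect_counts.items())
    total = sum(defect_counts.values())
    return weighted / total if total else 0.0

# 2 Critical, 5 Serious, 10 Medium and 8 Low defects give an index of 2.04.
print(defect_severity_index({"Critical": 2, "Serious": 5, "Medium": 10, "Low": 8}))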

VI. Time to find a defect
Definition
The effort required to find a defect.
Purpose
Shows how fast the defects are being found. This metric indicates the correlation between the test effort and the number of defects found.
Data to collect
Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.




VII. Time to solve a defect
Definition
Effort required to resolve a defect (diagnosis and correction).
Purpose
Provides an indication of the maintainability of the product and can be used to estimate projected maintenance costs.
Data to collect
Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period.
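A quick sketch of both effort-based metrics, using invented example figures:

# Sketch of "time to find a defect" and "time to solve a defect" (hours are invented).
hours_spent_testing = 120.0   # test execution plus logging of defects
defects_found = 30
hours_spent_fixing = 90.0     # diagnosis plus correction
defects_resolved = 25

time_to_find_a_defect = hours_spent_testing / defects_found     # 4.0 hours per defect
time_to_solve_a_defect = hours_spent_fixing / defects_resolved  # 3.6 hours per defect
print(time_to_find_a_defect, time_to_solve_a_defect)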

VIII. Test coverage
Definition
Defined as the extent to which testing covers the product’s complete functionality.
Purpose
This metric is an indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. This can be used as a criterion to stop testing.
Data to collect
Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be calculated based on the number of items that were covered vs. the total number of items.

IX. Test case effectiveness
Definition
The extent to which test cases are able to find defects
Purpose
This metric provides an indication of the effectiveness of the test cases and the stability of the software.
Data to collect
Ratio of the number of test cases that resulted in logging remarks vs. the total number of test cases.
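For example (invented counts, illustrative only):

# Sketch: test case effectiveness as the share of test cases that logged at least one remark.
test_cases_with_remarks = 18
total_test_cases = 120
test_case_effectiveness = test_cases_with_remarks / total_test_cases  # 0.15
print(f"{test_case_effectiveness:.0%} of test cases found at least one remark")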

X. Defects/KLOC
Definition
The number of defects per 1,000 lines of code.
Purpose
This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version.
Data to collect
Ratio of the number of defects found vs. the total number of lines of code (thousands)

Formula used
Defects/KLOC = (total number of defects found / total number of lines of code) x 1,000
Uses of defect/KLOC
Defect density is used to compare the relative number of defects in various software components. This helps identify candidates for additional inspection or testing, or for possible re-engineering or replacement. Identifying defect-prone components allows limited resources to be concentrated on the areas with the highest potential return on investment.
Another use of defect density is to compare subsequent releases of a product to track the impact of defect reduction and quality improvement activities. Normalizing by size allows releases of various sizes to be compared. Differences between products or product lines can also be compared in this manner.
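As a sketch of how defect density can be used to rank components (the component names and counts below are invented):

# Sketch: ranking components by defect density (defects/KLOC) to pick inspection candidates.
components = {
    "billing":   {"defects": 45, "sloc": 12000},
    "reporting": {"defects": 10, "sloc": 8000},
    "auth":      {"defects": 30, "sloc": 5000},
}
density = {name: c["defects"] / (c["sloc"] / 1000.0) for name, c in components.items()}
for name, d in sorted(density.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {d:.2f} defects/KLOC")
# auth (6.00) ranks above billing (3.75) and reporting (1.25), making it the first
# candidate for additional inspection or possible re-engineering.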

Project test metrics:
I. Workload capacity
Definition
Ratio of the planned workload to the gross capacity for the total test project or phase.
Purpose
This metric helps in detecting issues related to estimation and planning. It serves as an input for estimating similar projects as well.
Data to collect
Computation of this metric often happens at the beginning of the phase or project. The workload is determined by multiplying the number of tasks by their norm times. The gross capacity is the planned working time. The ratio is the workload divided by the gross capacity.
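For instance (a minimal sketch with invented planning figures):

# Sketch of the workload capacity ratio for a test phase.
tasks = 50
norm_time_per_task = 4.0   # hours per task, taken from historical norm times
gross_capacity = 320.0     # planned working hours available for the phase

workload = tasks * norm_time_per_task                 # 200 hours
workload_capacity_ratio = workload / gross_capacity   # 0.625
print(workload_capacity_ratio)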

II. Test planning performance
Definition
The planned value related to the actual value.
Purpose
Shows how well estimation was done.
Data to collect
The ratio of the actual effort spent to the planned effort

III. Test effort percentage
Definition
Test effort is the amount of work spent, in hours, days or weeks. The overall project effort is divided among multiple phases of the project: requirements, design, coding, testing and so on. This metric relates the overall test effort to the total project effort.

Purpose
The effort spent in testing, in relation to the effort spent in the development activities, will give us an indication of the level of investment in testing. This information can also be used to estimate similar projects in the future.
Data to collect
This metric can be computed by dividing the overall test effort by the total project effort.

IV. Defect category
Definition
An attribute of the defect in relation to the quality attributes of the product. Quality attributes of a product include functionality, usability, documentation, performance, installation and internationalization.
Purpose
This metric can provide insight into the different quality attributes of the product.

Data to collect
This metric can be computed by dividing the defects that belong to a particular category by the total number of defects.

Process test metrics
I. Should be found in which phase
Definition
An attribute of the defect, indicating in which phase the remark should have been found.
Purpose
Are we able to find the right defects in the right phase as described in the test strategy? Indicates the percentage of defects that are getting migrated into subsequent test phases.
Data to collect
Computation of this metric is done by calculating the number of defects that should have been found in previous test phases.

II. Residual defect density:
Definition
An estimate of the number of defects that may remain unresolved in the product after a given phase.
Purpose
The goal is to achieve a defect level that is acceptable to the clients. We remove defects in each of the test phases so that few will remain.
Data to collect
This is a tricky issue. Released products have a basis for estimation. For new versions, industry standards, coupled with project specifics, form the basis for estimation.


III. Defect remark ratio
Definition
Ratio of the number of remarks that resulted in software modification vs. the total number of remarks.
Purpose
Provides an indication of the level of understanding between the test engineers and the software engineers about the product, as well as an indirect indication of test effectiveness.
Data to collect
The number of remarks that resulted in software modification vs. the total number of logged remarks. Valid for each test type, during and at the end of test phases.

IV. Valid remark ratio
Definition
Percentage of valid remarks during a certain period.
Purpose
Indicates the efficiency of the test process.
Data to collect
Ratio of the total number of remarks that are valid to the total number of remarks found.
Formula used
Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release.
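Using the formula above with invented counts, a small sketch:

# Sketch of the valid remark ratio.
defects = 40          # remarks that resulted in a software or documentation change
duplicates = 5
deferred = 3          # remarks that will be resolved in the next phase or release
total_remarks = 60

valid_remarks = defects + duplicates + deferred
valid_remark_ratio = valid_remarks / total_remarks    # 0.8, i.e. 80% of remarks were valid
print(valid_remark_ratio)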

V. Phase yield
Definition
Defined as the number of defects found during the phase of the development life cycle vs. the estimated number of defects at the start of the phase.
Purpose
Shows the effectiveness of the defect removal. Provides a direct measurement of product quality; can be used to determine the estimated number of defects for the next phase.
Data to collect
Ratio of the number of defects found to the total number of estimated defects. This can be used during a phase and also at the end of the phase.

VI. Backlog development
Definition
The number of remarks that are yet to be resolved by the development team.
Purpose
Indicates how well the software engineers are coping with the testing efforts.
Data to collect
The number of remarks that remain to be resolved.

VII. Backlog testing

Definition
The number of resolved remarks that are yet to be retested by the test team.
Purpose
Indicates how well the test engineers are coping with the development efforts.
Data to collect
The number of resolved remarks that have not yet been retested.

VIII. Scope changes
Definition
The number of changes that were made to the test scope.
Purpose
Indicates requirements stability or volatility, as well as process stability.
Data to collect
Ratio of the number of changed items in the test scope to the total number of items.

IX. Defect removal efficiency
Definition
The number of defects that are removed per time unit (hours/days/weeks)
Purpose
Indicates the efficiency of defect removal methods, as well as indirect measurement of the quality of the product.
Data to collect
Computed by dividing the total effort spent on defect detection, defect resolution and retesting by the number of remarks. This is calculated per test type, during and across test phases.

TEST REPORT



The Software Test Report (STR) is a record of the qualification testing performed on a Computer Software Configuration Item (CSCI), a software system or subsystem, or other software-related item.
The STR enables the acquirer to assess the testing and its results.
APPLICATION/INTERRELATIONSHIP
The Data Item Description (DID) contains the format and content preparation instructions for the data product generated by specific and discrete task requirements as delineated in the contract.
This DID is used when the developer is tasked to analyze and record the results of CSCI qualification testing, system qualification testing of a software system, or other testing identified in the contract.
The Contract Data Requirements List (CDRL) should specify whether deliverable data are to be delivered on paper or electronic media; are to be in a given electronic form (such as ASCII, CALS, or compatible with a specified word processor or other support software); may be delivered in developer format rather than in the format specified herein; and may reside in a computer-aided software engineering (CASE) or other automated tool rather than in the form of a traditional document.

PREPARATION INSTRUCTIONS
General instructions.
a. Automated techniques. Use of automated techniques is encouraged. The term "document" in this DID means a collection of data regardless of its medium.
b. Alternate presentation styles. Diagrams, tables, matrices, and other presentation styles are acceptable substitutes for text when data required by this DID can be made more readable using these styles.
c. Title page or identifier. The document shall include a title page containing, as applicable: document number; volume number; version/revision indicator; security markings or other restrictions on the handling of the document; date; document title; name, abbreviation, and any other identifier for the system, subsystem, or item to which the document applies; contract number; CDRL item number; organization for which the document has been prepared; name and address of the preparing organization; and distribution statement. For data in a database or other alternative form, this information shall be included on external and internal labels or by equivalent identification methods.
d. Table of contents. The document shall contain a table of contents providing the number, title, and page number of each titled paragraph, figure, table, and appendix. For data in a database or other alternative form, this information shall consist of an internal or external table of contents containing pointers to, or instructions for accessing, each paragraph, figure, table, and appendix or their equivalents.
e. Page numbering/labeling. Each page shall contain a unique page number and display the document number, including version, volume, and date, as applicable. For data in a database or other alternative form, files, screens, or other entities shall be assigned names or numbers in such a way that desired data can be indexed and accessed.
f. Response to tailoring instructions. If a paragraph is tailored out of this DID, the resulting document shall contain the corresponding paragraph number and title, followed by "This paragraph has been tailored out." For data in a database or other alternative form, this representation need occur only in the table of contents or equivalent.
g. Multiple paragraphs and subparagraphs. Any section, paragraph, or subparagraph in this DID may be written as multiple paragraphs or subparagraphs to enhance readability.
h. Standard data descriptions. If a data description required by this DID has been published in a standard data element dictionary specified in the contract, reference to an entry in that dictionary is preferred over including the description itself.
i. Substitution of existing documents. Commercial or other existing documents may be substituted for all or part of the document if they contain the required data.
1. Content requirements. Content requirements begin on the following page. The numbers shown designate the paragraph numbers to be used in the document. Each such number is understood to have the prefix 10.2 within this DID. For example, the paragraph numbered 1.1 is understood to be paragraph 10.2.1.1 within this DID.

Scope. This section shall be divided into the following paragraphs.
Identification. This paragraph shall contain a full identification of the system and the software to which this document applies, including, as applicable, identification number(s), title(s), abbreviation(s), version number(s), and release number(s).
System overview. This paragraph shall briefly state the purpose of the system and the software to which this document applies. It shall describe the general nature of the system and software; summarize the history of system development, operation, and maintenance; identify the project sponsor, acquirer, user, developer, and support agencies; identify current and planned operating sites; and list other relevant documents.
Document overview. This paragraph shall summarize the purpose and contents of this document and shall describe any security or privacy considerations associated with its use.

Referenced documents. This section shall list the number, title, revision, and date of all documents referenced in this report. This section shall also identify the source for all documents not available through normal Government stocking activities.

Overview of test results. This section shall be divided into the following paragraphs to provide an overview of test results.
Overall assessment of the software tested. This paragraph shall:
a. Provide an overall assessment of the software as demonstrated by the test results in this report
b. Identify any remaining deficiencies, limitations, or constraints that were detected by the testing performed. Problem/change reports may be used to provide deficiency information.
c. For each remaining deficiency, limitation, or constraint, describe:
1) Its impact on software and system performance, including identification of requirements not met
2) The impact on software and system design to correct it
3) A recommended solution/approach for correcting it
Impact of test environment. This paragraph shall provide an assessment of the manner in which the test environment may be different from the operational environment and the effect of this difference on the test results.
Recommended improvements. This paragraph shall provide any recommended improvements in the design, operation, or testing of the software tested. A discussion of each recommendation and its impact on the software may be provided. If no recommended improvements are provided, this paragraph shall state "None."

Detailed test results. This section shall be divided into the following paragraphs to describe the detailed results for each test. Note: The word "test" means a related collection of test cases.
(Project-unique identifier of a test). This paragraph shall identify a test by project-unique identifier and shall be divided into the following subparagraphs to describe the test results.
Summary of test results. This paragraph shall summarize the results of the test. The summary shall include, possibly in a table, the completion status of each test case associated with the test (for example, "all results as expected," "problems encountered," "deviations required"). When the completion status is not "as expected," this paragraph shall reference the following paragraphs for details.
Problems encountered. This paragraph shall be divided into subparagraphs that identify each test case in which one or more problems occurred.
(Project-unique identifier of a test case). This paragraph shall identify by project-unique identifier a test case in which one or more problems occurred, and shall provide:
a. A brief description of the problem(s) that occurred
b. Identification of the test procedure step(s) in which they occurred
c. Reference(s) to the associated problem/change report(s) and backup data, as applicable
d. The number of times the procedure or step was repeated in attempting to correct the problem(s) and the outcome of each attempt
e. Back-up points or test steps where tests were resumed for retesting
Deviations from test cases/procedures. This paragraph shall be divided into subparagraphs that identify each test case in which deviations from test case/test procedures occurred.
(Project-unique identifier of a test case). This paragraph shall identify by project-unique identifier a test case in which one or more deviations occurred, and shall provide:
a. A description of the deviation(s) (for example, test case run in which the deviation occurred and nature of the deviation, such as substitution of required equipment, procedural steps not followed, schedule deviations). (Red-lined test procedures may be used to show the deviations)
b. The rationale for the deviation(s)
c. An assessment of the deviations' impact on the validity of the test case

Test log. This section shall present, possibly in a figure or appendix, a chronological record of the test events covered by this report. This test log shall include:
a. The date(s), time(s), and location(s) of the tests performed
b. The hardware and software configurations used for each test including, as applicable, part/model/serial number, manufacturer, revision level, and calibration date of all hardware, and version number and name for the software components used
c. The date and time of each test-related activity, the identity of the individual(s) who performed the activity, and the identities of witnesses, as applicable
Notes. This section shall contain any general information that aids in understanding this document (e.g., background information, glossary, rationale). This section shall include an alphabetical listing of all acronyms, abbreviations, and their meanings as used in this document and a list of any terms and definitions needed to understand this document.

Boundary value analysis

What is boundary value analysis in software testing?
Concepts: Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method that refines equivalence partitioning, and it generates test cases that highlight errors better than equivalence partitioning does. The trick is to concentrate software testing effort at the extreme ends of the equivalence classes: at those points where input values change from valid to invalid, errors are most likely to occur. As well, boundary value analysis broadens the portions of the business requirement document used to generate tests. Unlike equivalence partitioning, it takes into account the output specifications when deriving test cases.

How do you perform boundary value analysis?
Once again, you'll need to perform two steps:
1. Identify the equivalence classes.
2. Design test cases.
But the details vary. Let's examine each step.
Step 1: identify equivalence classes
Follow the same rules you used in equivalence partitioning. However, consider the output specifications as well. For example, if the output specifications for the inventory system stated that a report on inventory should indicate a total quantity for all products no greater than 999,999, then you'd add the following classes
to the ones you found previously:
6. The valid class (0 <= total quantity on hand <= 999,999)
7. The invalid class (total quantity on hand < 0)
8. The invalid class (total quantity on hand > 999,999)


Step 2: design test cases
In this step, you derive test cases from the equivalence classes. The process is similar to that of equivalence partitioning, but the rules for designing test cases differ. With equivalence partitioning, you may select any test case within a range and any test case on either side of it; with boundary value analysis, you focus your attention on cases close to the edges of the range. The detailed rules for generating test cases follow:



Rules for test cases
1. If the condition is a range of values, create valid test cases for each end of the range and invalid test cases just beyond each end of the range. For example, if a valid range of quantity on hand is -9,999
through 9,999, write test cases that include:
1. the valid test case quantity on hand is -9,999,
2. the valid test case quantity on hand is 9,999,
3. the invalid test case quantity on hand is -10,000 and
4. the invalid test case quantity on hand is 10,000
You may combine valid classes wherever possible, just as you did with equivalence partitioning, and, once again, you may not combine invalid classes. Don't forget to consider output conditions as well. In our inventory example the output conditions generate the following test cases:
1. the valid test case total quantity on hand is 0,
2. the valid test case total quantity on hand is 999,999
3. the invalid test case total quantity on hand is -1 and
4. the invalid test case total quantity on hand is 1,000,000

2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.
3. Design tests that highlight the first and last records in an input or output file.
4. Look for any other extreme input or output conditions, and generate a test for each of them.
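The boundary cases for the quantity-on-hand example can be captured directly as automated checks. The sketch below assumes a hypothetical validate_quantity function (not part of the inventory example itself), purely for illustration:

# Boundary value test cases for a quantity-on-hand range of -9,999 through 9,999.
# validate_quantity() is a hypothetical function assumed to accept the valid range.

def validate_quantity(qty):
    return -9999 <= qty <= 9999

def test_quantity_boundaries():
    # Valid test cases: the two ends of the range.
    assert validate_quantity(-9999)
    assert validate_quantity(9999)
    # Invalid test cases: just beyond each end of the range.
    assert not validate_quantity(-10000)
    assert not validate_quantity(10000)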

Definition of Boundary Value Analysis from our Software Testing Dictionary:
Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases", or values that are usually just out of range as defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA attempts to derive boundary values and is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.

Equivalence Class Partitioning

A definition of Equivalence Partitioning from our software testing dictionary:
Equivalence Partitioning: An approach in which classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single state value based on class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.

WHAT IS EQUIVALENCE PARTITIONING?
Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur.
In this method, the tester identifies various equivalence classes for partitioning. A class is a set of input conditions that is likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases erroneously.
WHY LEARN EQUIVALENCE PARTITIONING?
Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps:

1. Identify the equivalence classes
2. Design test cases


STEP 1: IDENTIFY EQUIVALENCE CLASSES
Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class).
Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs which are within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:

1. One valid class: (QTY is greater than or equal to -9999 and is less than or equal to 9999). This is written as (-9999 <= QTY <= 9999)
2. The invalid class (QTY is less than -9999), also written as (QTY < -9999)
3. The invalid class (QTY is greater than 9999), also written as (QTY > 9999)

b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs and one invalid class where there are too many inputs.

For example, the specifications state that a maximum of 4 purchase orders can be registered against any one product. The equivalence classes are: the valid equivalence class (the number of purchase orders is greater than or equal to 1 and less than or equal to 4, also written as 1 <= no. of purchase orders <= 4), the invalid class (no. of purchase orders > 4) and the invalid class (no. of purchase orders < 1).
c) If the requirements state that a particular input item must match one of a set of values and each case will be dealt with in the same way, identify a valid class for values in the set and one invalid class representing values outside of the set.
Suppose the specification says that the code accepts between 4 and 24 inputs; each is a 3-digit integer.
• One partition: number of inputs
• Classes: "x < 4", "4 <= x <= 24", "24 < x"
• Chosen values: 3, 4, 5, 14, 23, 24, 25
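To make the partitions concrete, here is a small sketch; process_inputs is a hypothetical function used only to illustrate exercising one chosen value from each class:

# Equivalence-class values for "between 4 and 24 inputs, each a 3-digit integer".
# process_inputs() is hypothetical and exists only for this illustration.

def process_inputs(values):
    if not 4 <= len(values) <= 24:
        raise ValueError("expected between 4 and 24 inputs")
    if any(not 100 <= v <= 999 for v in values):
        raise ValueError("each input must be a 3-digit integer")
    return sum(values)

# Chosen input counts from the partitions above: 3 and 25 fall in invalid classes.
for count in (3, 4, 5, 14, 23, 24, 25):
    try:
        process_inputs([100] * count)
        print(count, "inputs: accepted (valid class)")
    except ValueError as err:
        print(count, "inputs: rejected (invalid class):", err)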

TEST PLAN

• The test plan keeps track of possible tests that will be run on the system after coding.
• The test plan is a document that develops as the project is being developed.
• Record tests as they come up
• Test error-prone parts of the software during development.
• The initial test plan is abstract and the final test plan is concrete.
• The initial test plan contains high level ideas about testing the system without getting into the details of exact test cases.
• The most important test cases come from the requirements of the system.
• When the system is in the design stage, the initial tests can be refined a little.
• During the detailed design or coding phase, exact test cases start to materialize.
• After coding, the test points are all identified and the entire test plan is exercised on the software.

Purpose of Software Test Plan:

• To achieve 100% CORRECT code. Ensure all Functional and Design Requirements are implemented as specified in the documentation.
• To provide a procedure for Unit and System Testing.
• To identify the documentation process for Unit and System Testing.
• To identify the test methods for Unit and System Testing.
Advantages of test plan

• Serves as a guide to testing throughout the development.
• We only need to define test points during the testing phase.
• Serves as a valuable record of what testing was done.
• The entire test plan can be reused if regression testing is done later on.
• The test plan itself could have defects just like software!

In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including
• Scope of testing
• Schedule
• Test Deliverables
• Release Criteria
• Risks and Contingencies

Process of the Software Test Plan

• Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.
• Identify which particular test(s) you're going to use to test each module.
• Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
• Identify the expected results for each test.
• Document the test case configuration, test data, and expected results. This information shall be submitted via the on-line Test Case Design (TCD) and filed in the unit's Software Development File (SDF). A successful Peer Technical Review baselines the TCD and initiates coding.
• Perform the test(s).
• Document the test data, test cases, and test configuration used during the testing process. This information shall be submitted via the on-line Unit/System Test Report (STR) and filed in the unit's Software Development File (SDF).
• Successful unit testing is required before the unit is eligible for component integration/system testing.
• Unsuccessful testing requires a Program Trouble Report to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.
• Test documents and reports shall be submitted on-line. Any specifications to be reviewed, revised, or updated shall be handled immediately.
Deliverables: Test Case Design, System/Unit Test Report, Problem Trouble Report (if any).
Test plan template
• Test Plan Identifier
• References
• Introduction of Testing
• Test Items
• Software Risk Issues
• Features to be Tested
• Features not to be Tested
• Approach
• Item Pass/Fail Criteria
• Entry & Exit Criteria
• Suspension Criteria and Resumption Requirements
• Test Deliverables
• Remaining Test Tasks
• Environmental Needs
• Staffing and Training Needs
• Responsibilities
• Schedule
• Planning Risks and Contingencies
• Approvals
• Glossary
Test plan identifier
Master test plan for the Line of Credit Payment System.
References
List all documents that support this test plan.
Documents that are referenced include:
• Project Plan
• System Requirements specifications.
• High Level design document.
• Detail design document.
• Development and Test process standards.
• Methodology guidelines and examples.
• Corporate standards and guidelines.
Objective of the plan
Scope of the plan
Describe the objective and scope of the plan in relation to the Software Project Plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and communication and coordination of key activities.
Test items (functions)
These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.
This can be controlled on a local Configuration Management (CM) process if you have one. This information includes version numbers, configuration requirements where needed, (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the client.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.
Software risk issues
Identify what software is to be tested and what the critical areas are, such as:
• Delivery of a third party product.
• New version of interfacing software.
• Ability to use and understand a new package/tool, etc.
• Extremely complex functions.
• Modifications to components with a past history of failure.
• Poorly documented modules or change requests.
There are some inherent software risks such as complexity; these need to be identified.
• Safety.
• Multiple interfaces.
• Impacts on Client.
• Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.
One good approach to define where the risks are is to have several brainstorming sessions.
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a user's view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.
Features not to be tested
This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons.
• Not to be included in this release of the Software.
• Low risk; it has been used before and was considered stable.
• Will be released but not tested or documented as a functional part of the release of this version of the software.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or are untestable be processed?

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
• Only the full component will be tested.
• A specified segment of grouping of features/components must be tested together.
Other information that may be useful in setting the approach are:
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
How will meetings and other organizational processes be handled?
Item pass/fail criteria
Show stopper issues. Specify the criteria to be used to determine whether each test item has passed or failed.
Entry & exit criteria
Entrance Criteria
The Entrance Criteria specified by the system test controller should be fulfilled before System Test can commence. In the event that any criterion has not been achieved, the System Test may commence if the Business Team and Test Controller are in full agreement that the risk is manageable.
• All developed code must be unit tested. Unit and Link Testing must be completed and signed off by the development team.
• System Test plans must be signed off by Business Analyst and Test Controller.
• All human resources must be assigned and in place.
• All test hardware and environments must be in place, and free for System test use.
• The Acceptance Tests must be completed, with a pass rate of not less than 80%.
Exit Criteria
The Exit Criteria detailed below must be achieved before the Phase 1 software can be recommended for promotion to Operations Acceptance status. Furthermore, I recommend that there be a minimum of 2 days' effort of Final Integration testing AFTER the final fix/change has been retested.
• All High Priority errors from System Test must be fixed and tested
• If any medium or low-priority errors are outstanding - the implementation risk must be signed off as acceptable by Business Analyst and Business Expert
Suspension Criteria and Resumption Requirements:
This is a particular risk clause to define under what circumstances testing would stop and restart.

Resumption Criteria
In the event that system testing is suspended resumption criteria will be specified and testing will not re-commence until the software reaches these criteria.
• Project Integration Test must be signed off by Test Controller and Business Analyst.
• Business Acceptance Test must be signed off by Business Expert.

Risks and Contingencies
This defines all other risk events, their likelihood, their impact and the countermeasures to overcome them.
Summary
The goal of this exercise is to familiarize students with the process of creating test plans.
This exercise is divided into three tasks as described below.
• Devise a test plan for the group.
• Inspect the plan of another group who will in turn inspect yours.
• Improve the group's own plan based on comments from the review session
Task 1: Creating a Test Plan
The role of a test plan is to guide all testing activities. It defines what is to be tested and what is to be overlooked, how the testing is to be performed (described on a general level) and by whom. It is therefore a managerial document, not a technical one; in essence, it is a project plan for testing. Therefore, the target audience of the plan should be a manager with a decent grasp of the technical issues involved.
Experience has shown that good planning can save a lot of time, even in an exercise, so do not underestimate the effort required for this phase.
The goal of all these exercises is to carry out system testing on Word Pad, a simple word processor. Your task is to write a thorough test plan in English using the above-mentioned sources as guidelines. The plan should be based on the documentation of Word Pad.




Task 2: Inspecting a Test Plan

The role of a review is to make sure that a document (or code in a code review) is readable and clear and that it contains all the necessary information and nothing more. Some implementation details should be kept in mind:
• The groups will divide their roles themselves before arriving at the inspection. A failure to follow the roles correctly will be reflected in the grading. However, one of the assistants will act as the moderator and will not assume any other roles.
• There will be only one meeting with the other group and the moderator. All planning, overview and preparation is up to the groups themselves. You should use the suggested check lists in the lecture notes while preparing. Task 3 deals with the after-meeting activities.
• The meeting is rather short, only 60 minutes for a pair (that is, 30 minutes each). Hence, all comments on the language used in the other group's test plan are to be given in writing. The meeting itself concentrates on the form and content of the plan.

Task 3: Improved Test Plan and Inspection Report

After the meeting, each group will prepare a short inspection report on their test plan listing their most typical and important errors in the first version of the plan together with ideas for correcting them. You should also answer the following questions in a separate document:
• What is the role of the test plan in designing test cases?
• What were the most difficult parts in your test plan and why?
Furthermore, the test plan is to be revised according to the input from the inspection.

Test Case:
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
In software engineering, a test case is a set of conditions or variables under which a tester will determine whether a requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement unless a requirement has sub-requirements. In that situation, each sub-requirement must have at least one test case. Some methodologies recommend creating at least two test cases for each requirement: one of them should perform positive testing of the requirement and the other should perform negative testing.

If the application is created without formal requirements, then test cases are written based on the accepted normal operation of programs of a similar class.
What characterises a formal, written test case is that there is a known input and an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a post condition.
Under special circumstances, there could be a need to run the test, produce results, and then a team of experts would evaluate if the results can be considered as a pass. This happens often on new products' performance number determination. The first test is taken as the base line for subsequent test / product release cycles.
Written test cases include a description of the functionality to be tested taken from either the requirements or use cases, and the preparation required to ensure that the test can be conducted.
Written test cases are usually collected into Test suites.
A variation of test cases is most commonly used in acceptance testing. Acceptance testing is done by a group of end-users or clients of the system to ensure the developed system meets their requirements. User acceptance testing is usually differentiated by the inclusion of happy path or positive test cases.

System Testing

System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic. System testing should be performed by testers who are trained to plan, execute, and report on application and system code. They should be aware of scenarios that might not occur to the end user, like testing for null, negative, and format-inconsistent values.
System testing is actually performed on the entire system against the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase, where the focus is to have an almost destructive attitude and test not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification.
Types of System Testing:

• Sanity Testing
• Compatibility Testing
• Recovery Testing
• Usability Testing
• Exploratory Testing
• Adhoc Testing
• Stress Testing
• Volume Testing
• Load Testing
• Performance Testing
• Security Testing

Sanity Testing
Testing the major working functionality of the system to check whether the system is stable enough for the major testing effort. This testing is done after coding and before the detailed testing phase begins.
Compatibility Testing
Testing how well software performs in a particular hardware/software/operating system/network environment, etc.
Recovery Testing
Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash or hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.
Usability Testing:
This testing is also called ‘Testing for User-Friendliness’. This testing is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user.
Exploratory Testing:
This testing is similar to ad-hoc testing and is done in order to learn/explore the application.
Ad-hoc Testing:
This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other testing activities, and it also helps testers in learning the application prior to starting any other testing.
Stress Testing:
The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks the stress/load the application can withstand.
Volume Testing:
Volume testing is done against the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system.
Load Testing:
The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades.
User Acceptance Testing:
In this type of testing, the software is handed over to the user in order to find out if the software meets the user's expectations and works as expected.
Alpha Testing:
In this type of testing, the users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the user. Any type of abnormal behavior of the system is noted and rectified by the developers.

Beta Testing:
In this type of testing, the software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, any exception/defect that occurs is reported to the developers.








Testing Techniques

Manual Testing Technique:

This method of testing is done manually by the tester on the application under test.
Why do manual testing?
Even in this age of short development cycles and automated-test-driven development, manual testing contributes vitally to the software development process. Here are a number of good reasons to do manual testing:
• By giving end users repeatable sets of instructions for using prototype software, manual testing allows them to be involved early in each development cycle and draws invaluable feedback from them that can prevent "surprise" application builds that fail to meet real-world usability requirements.
• Manual test cases give testers something to use while awaiting the construction of the application.
• Manual test cases can be used to provide feedback to development teams in the form of a set of repeatable steps that lead to bugs or usability problems.
• If done thoroughly, manual test cases can also form the basis for help or tutorial files for the application under test.
• Finally, in a nod toward test-driven development, manual test cases can be given to the development staff to provide a clear description of the way the application should flow across use cases.
In summary, manual testing fills a gap in the testing repertoire and adds invaluably to the software development process.
Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
Over the past few years, tools that help programmers quickly create applications with graphical user interfaces have dramatically improved programmer productivity. This has increased the pressure on testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being asked to test more and more code in less and less time. Test automation is one way to do this, as manual testing is time consuming. As and when different versions of software are released, the new features have to be tested manually time and again. But now there are tools available that help testers automate GUI testing, which reduces the test time as well as the cost; other test automation tools support the execution of performance tests.
Many test automation tools provide record and playback features that allow users to interactively record user actions and replay them any number of times, comparing actual results to those expected. However, reliance on these features poses major reliability and maintainability problems. Most successful automators use a software engineering approach, and as such most serious test automation is undertaken by people with development experience.
A growing trend in software development is to use testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit), which allow unit tests to be run to determine whether various sections of the code are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected. All three aspects of testing can be automated.
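As a concrete example of the xUnit style, a minimal JUnit 4 test might look like the sketch below; the Calculator class is a made-up unit under test, used only to show how the framework compares an actual outcome to an expected one.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit under test; any small class with defined behaviour would do.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    // JUnit runs this method and reports pass/fail automatically,
    // comparing the actual outcome with the expected one.
    @Test
    public void addReturnsSumOfItsArguments() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}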
Another important aspect of test automation is the idea of partial test automation, or automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.
Test automation is expensive and it is an addition, not a replacement, to manual testing. It can be made cost-effective in the longer term though, especially in regression testing. One way to generate test cases automatically is model-based testing where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so.

Testing Types

This article explains the different testing types: Unit Test, System Test, Integration Test, Functional Test, Performance Test, Beta Test and Acceptance Test.
Introduction:
The development process involves various types of testing. Each test type addresses a specific testing requirement. The most common types of testing involved in the development process are:

• Unit Test.
• System Test
• Integration Test
• Functional Test
• Performance Test
• Beta Test
• Acceptance Test.


Unit Test
The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller parts called units. These units have specific behavior. The test done on these units of code is called a unit test. Unit testing depends upon the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and contains clearly defined inputs and expected results.


System Test
Several modules constitute a project. If the project is a long-term project, several developers write the modules. Once all the modules are integrated, several errors may arise. The testing done at this stage is called system test.


System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.


Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off the shelf) system or any other system composed of disparate parts, where custom configurations and/or unique installations are the norm.


Functional Test
Functional test can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.


Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance.


Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.


Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.





Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.


Beta Testing
Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.


Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.
Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.


Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.


Independent Verification & Validation
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.


Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs.


Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. This testing is completed as part of unit or functional testing and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)


Load Testing
Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.


Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.


Pilot Testing
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a Move-to-Production activity for ERP releases or a beta test for commercial products. It typically involves many users, is conducted over a short period of time and is tightly controlled. (see beta testing)


Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.


Security Testing
Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.


Software Testing
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)


Stress Testing
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.


User Acceptance Testing
See Acceptance Testing.


White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

TEST DESIGN TECHNIQUES:

While developing the test cases, if at all the test engineer finds some areas complex, then to overcome that complexity the test engineer will usually use test design techniques.

Generally two types of techniques are used in most of the companies.
1. Boundary Value Analysis (BVA).
2. Equivalence Class Partitioning (ECP).
1). Boundary Value Analysis (BVA).
Whenever the engineers need to develop test cases for a range kind of input, they will go for boundary value analysis, which says to concentrate on the boundaries of the range. Usually they test with the following values (where LB is the lower bound, UB the upper bound and MV a mid value); a small sketch follows.
LB-1, LB, LB+1, MV, UB-1, UB, UB+1
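The sketch below enumerates those boundary values for an assumed numeric range of 1 to 100; the range itself is only an illustration, not taken from any requirement.

// Boundary Value Analysis sketch: for an input accepted in the range [lb, ub],
// test just below, on, and just above each boundary, plus a mid value.
public class BoundaryValues {
    public static void main(String[] args) {
        int lb = 1, ub = 100;        // illustrative bounds only
        int mv = (lb + ub) / 2;      // a mid value
        int[] candidates = {lb - 1, lb, lb + 1, mv, ub - 1, ub, ub + 1};
        for (int value : candidates) {
            boolean expectedAccepted = value >= lb && value <= ub;
            System.out.println("input " + value + " -> expect "
                    + (expectedAccepted ? "accepted" : "rejected"));
        }
    }
}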

2). Equivalence Class Partitioning (ECP).
Whenever the test engineer needs to develop test cases for a feature which has a larger number of validations, he will go for equivalence class partitioning, which says to first divide the inputs into classes and then prepare the test cases; a small sketch follows the requirements below.
Ex: Develop the test cases for an e-mail text box whose validations are as follows.
Requirements:
1. It should accept a minimum of 4 and a maximum of 20 characters.
2. It should accept only lowercase characters.
3. It should accept only the @ and _ special symbols.
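A rough sketch of how those requirements could be partitioned is shown below: the inputs are divided into one valid and several invalid classes, with one representative value per class. The validator and the representative values are assumptions chosen only to illustrate the technique.

// Equivalence Class Partitioning sketch for the e-mail text box above.
public class EmailFieldPartitions {

    // Validator for the three stated requirements:
    // 4-20 characters, lowercase letters only, plus '@' and '_' allowed.
    static boolean isValid(String s) {
        return s.length() >= 4 && s.length() <= 20 && s.matches("[a-z@_]+");
    }

    public static void main(String[] args) {
        String[][] classes = {
                {"valid: lowercase, 4-20 chars, with @ and _", "test_user@mail"},
                {"invalid: fewer than 4 characters",           "ab"},
                {"invalid: more than 20 characters",           "abcdefghijklmnopqrstuvwxyz"},
                {"invalid: uppercase characters",              "Test@mail"},
                {"invalid: other special symbols",             "test#mail"},
        };
        for (String[] c : classes) {
            System.out.println(c[0] + " -> " + (isValid(c[1]) ? "accepted" : "rejected"));
        }
    }
}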

BUG LIFE CYCLE

New / Open:
Whenever a defect is found for the first time, the test engineer will set the status as New / Open. Some companies set the status as only New at this point, and once the developers accept the defect they will set the status as Open.
Reopen and Closed:
Once the defects are rectified by the developer and the next build is released to the testing department, the testers will check whether the defects are rectified properly or not.
If they feel they are rectified, they will set the status as Closed. Otherwise they will set the status as Reopen.
Fixed for Verification / Fixed / Rectified:
Whenever defects raised by the test engineer are accepted by the developers and rectified, they will set the status as Fixed.
Hold:
Whenever the developer is unsure whether to accept or reject the defect, he will set the status as Hold.
Testers Mistake / Testers Error / Rejected:
Whenever the developer is convinced it is not a defect at all, he will set the status as Rejected.
As Per Design (This is a rare case):
Whenever some new changes are incorporated by the engineers, the test engineers will raise them as defects, but the developers will set the status as ‘As Per Design’.
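For illustration only, the statuses above could be modeled as a simple enum; this merely mirrors the life cycle described here and does not represent any particular defect-tracking tool.

// Sketch of the defect statuses described above.
public class DefectStatusDemo {
    enum Status { NEW, OPEN, FIXED, CLOSED, REOPEN, HOLD, REJECTED, AS_PER_DESIGN }

    public static void main(String[] args) {
        Status status = Status.NEW;    // tester logs the defect
        status = Status.OPEN;          // developers accept it
        status = Status.FIXED;         // developers rectify it
        status = Status.CLOSED;        // tester verifies the fix in the next build
        System.out.println("Final status: " + status);
    }
}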
Error:
It is a problem related to the program.
Defect:
If the test engineer identifies a problem with respect to the functionality, it is called a defect.
Bug:
If the developer accepts the defect, that is called as Bug.
Fault / Failure:
If the customer identifies the problem after delivery, it is called a fault / failure.

SOFTWARE TESTING LIFE CYCLE

It contains 6 phases.
1. TEST PLANNING.
2. TEST DEVELOPMENT.
3. TEST EXECUTION.
4. RESULT ANALYSIS.
5. BUG TRACKING.
6. REPORTING.
1) TEST PLANNING
Plan:
A plan is a strategic document, which describes how to perform a task in an effective, efficient and optimized way.
Optimization:
Optimization is the process of utilizing the input resources to the fullest and getting the maximum possible output.
Test Plan:
It is a strategic document, which describes how to perform testing on an application in an effective, efficient and optimized way. The Test Lead prepares the test plan.

2. TEST DEVELOPMENT.

TYPES OF TEST CASES
Test cases are broadly divided into two types.
1. G.U.I Test Cases.
2. Functional test cases.
Functional test cases are further divided into two types.
1. Positive Test Cases.
2. Negative Test Cases.
GUIDELINES TO PREPARE GUI TEST CASES:
1. Check for the availability of all the objects.
2. Check for the alignment of the objects, if at all the customer has specified alignment requirements.
3. Check for the consistency of all the objects.
4. Check for the Spelling and Grammar.
Apart from these guidelines anything we test without performing any action will fall under GUI test cases.
GUIDELINES FOR DEVELOPING POSITIVE TEST CASES.
1. A test engineer must have a positive mindset.
2. A test engineer should consider the positive flow of the application.
3. A test engineer should use the valid input from the point of functionality.
GUIDELINES FOR DEVELOPING THE NEGATIVE TEST CASES:
1. A test engineer must have a negative mindset.
2. He should consider the negative flow of the application.
3. He should use at least one invalid input for a set of data.

3. TEST EXECUTION.
During the test execution phase the test engineer will do the following.
1. He will perform the action that is described in the description column.
2. He will observe the actual behavior of the application.
3. He will document the observed value under the actual value column.

4. RESULT ANALYSIS.
In this phase the test engineer will compare the expected value with the actual value and mark the result as Pass if both match; otherwise the result is marked as Fail.
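A minimal sketch of this comparison step is given below; the test case IDs and the expected/actual values are invented for the example.

// Result analysis sketch: compare the expected value with the actual value
// and record Pass or Fail for each executed test case.
import java.util.LinkedHashMap;
import java.util.Map;

public class ResultAnalysis {
    public static void main(String[] args) {
        // expected vs. actual values observed during execution (illustrative data)
        Map<String, String[]> results = new LinkedHashMap<>();
        results.put("TC-001", new String[]{"File saved", "File saved"});
        results.put("TC-002", new String[]{"Error shown", "Application crashed"});

        for (Map.Entry<String, String[]> e : results.entrySet()) {
            String expected = e.getValue()[0];
            String actual = e.getValue()[1];
            String verdict = expected.equals(actual) ? "Pass" : "Fail";
            System.out.println(e.getKey() + ": " + verdict);
        }
    }
}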
5. BUG TRACKING.
Bug tracking is a process in which the defects are identified, isolated and managed.

DEFECT PROFILE DOCUMENT
Defect ID:
The sequence of defect numbers is listed out here in this section.
Steps of Reproducibility:
The list of all the steps that are followed by a test engineer to identify the defect is listed out here in this section.
Submitter:
The name of the test engineer who submits the defect is mentioned here in this section.
Date of Submission:
The date on which the defect was submitted is mentioned here in this section.
Version Number:
The corresponding version number is mentioned here in this section.
Build Number:
The corresponding build number is mentioned here in this section.
Assigned to:
The project lead or development lead will mention here the name of the developer to whom the defect is assigned.
Severity:
How serious the defect is, is described in terms of severity. It is classified into 4 types.
1. FATAL (Sev1 / S1 / 1)
2. MAJOR (Sev2 / S2 / 2)
3. MINOR (Sev3 / S3 / 3)
4. SUGGESTION (Sev4 / S4 / 4)
FATAL:
If at all the problems are related to navigational blocks or unavailability of functionality, then such problems are treated as FATAL defects.
Note: These are also called show-stopper defects.
MAJOR:
If at all the problems are related to the working of the features, then such problems are treated as MAJOR defects.
MINOR:
If at all the problems are related to the look and feel of the application, then such problems are treated as MINOR defects.
SUGGESTIONS:
If at all the problems are related to the value of the application, then such problems are treated as SUGGESTIONS.
Priority:
The sequence in which the defects have to be rectified is described in terms of priority. It is classified into 4 types.
1. CRITICAL
2. HIGH
3. MEDIUM
4. LOW

Usually the FATAL defects are given CRITICAL priority, MAJOR defects are given HIGH priority, MINOR defects are given MEDIUM priority and SUGGESTION defects are given LOW priority, but depending upon the situation the priority may be changed by the project lead or development lead.
Ex: -
Low Severity High Priority Case:
In the case of a customer visit, all the look-and-feel defects, which are usually less severe, are given the highest priority.

High Severity Low Priority Case:
If at all some part of the application is not available because it is still under development, the test engineer will still treat it as a FATAL defect, but the development lead will give a lower priority to those defects.
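The usual defaults, together with the lead's ability to override them in cases like the two above, could be sketched roughly as follows; the type and method names are assumptions made for this example.

// Sketch of the usual severity-to-priority defaults described above.
public class DefectTriage {
    enum Severity { FATAL, MAJOR, MINOR, SUGGESTION }
    enum Priority { CRITICAL, HIGH, MEDIUM, LOW }

    // Default mapping; the project lead or development lead may override it.
    static Priority defaultPriority(Severity s) {
        switch (s) {
            case FATAL: return Priority.CRITICAL;
            case MAJOR: return Priority.HIGH;
            case MINOR: return Priority.MEDIUM;
            default:    return Priority.LOW;   // SUGGESTION
        }
    }

    public static void main(String[] args) {
        System.out.println("MINOR defect default priority: " + defaultPriority(Severity.MINOR));
        // Low severity, high priority: before a customer visit the lead would
        // simply assign Priority.CRITICAL to a look-and-feel defect instead.
    }
}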

SOFTWARE DEVELOPMENT MODELS

There are 6 MAJOR models.
• Waterfall Model (or) Sequential Model
• Prototype Model
• Evolutionary Model
• Spiral Model
• Fish Model
• V-Model
1) Waterfall Model (or) Sequential Model









Advantages:
It is a simple model, easy to maintain, and project implementation is very easy.
Drawbacks:
New changes cannot be incorporated in the middle of project development.
2) Prototype Model








Advantages:
Whenever the customer is not clear about the requirements, this is the best model to gather clear requirements.
Drawbacks:
It is not a complete model.
It is a time-consuming model.
The prototype has to be built at the company's cost.
The user may stick to the prototype and limit his requirements.
3) Evolutionary Model








Advantages
Whenever the customer is evolving the requirements, this is the best-suited model.
Drawbacks
Deadlines are not clearly defined.
Project monitoring and maintenance is difficult.
4) Spiral Model



Advantages
This is the best-suited model for highly risk-based projects.
Drawbacks
It is a time-consuming and costly model, and project monitoring and maintenance is difficult.
5) Fish Model
Verification:
Verification is a process of checking conducted on each and every role of an organization, in order to check whether the role is performing its tasks in the right manner according to the guidelines or not, right from the start of the process till the end of the process. Usually the documents are verified in this process of checking.
Validation
Validation is a process of checking conducted on the developed product in order to check whether it is working according to the requirements or not.














Advantages
As the verification and validation are done, the outcome of the Fish Model is a quality product.
Drawbacks: Time consuming and costly model.
6) V – Model

Advantages
As the verification and validation are done along with the test management, the outcome of the V-Model is a quality product.
Drawback
Time consuming and costly model.

LEVELS OF TESTING

There are 5 levels of testing.
1) Unit level testing
2) Module level testing
3) Integration level testing
4) System level testing
5) User acceptance level testing
1) Unit level testing
If one performs testing on a unit, then that level of testing is known as unit level testing. It is white box testing; usually developers perform it.
Unit: - It is defined as the smallest part of an application.
2) Module level testing
If one performs testing on a module, that is known as module level testing. It is black box testing; usually test engineers perform it.
3) Integration level testing
Once the modules are developed, the developers will develop some interfaces and integrate the modules with the help of those interfaces. During integration they will check whether the interfaces are working fine or not. It is white box testing, and usually developers or white box testers perform it.
The developers will be integrating the modules in any one of the following approaches.
i) Top Down Approach (TDA)
In this approach the parent modules are developed first and then integrated with child modules.
ii) Bottom Up Approach (BUA)
In this approach the child modules are developed first and then integrated with the corresponding parent modules.
iii) Hybrid Approach
This approach is a mixed approach of both the top-down and bottom-up approaches.
iv) Big Bang Approach
Once all the modules are ready, integrating them all at once is known as the big bang approach.
STUB
While integrating the modules in the top-down approach, if at all any mandatory module is missing, then that module is replaced with a temporary program known as a STUB.

DRIVER
While integrating the modules in the bottom-up approach, if at all any mandatory module is missing, then that module is replaced with a temporary program known as a DRIVER; a small sketch of both follows.
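The sketch uses hypothetical OrderModule and PaymentModule names: the stub stands in for a missing child module in the top-down approach, while the main method plays the role of a driver that exercises a module whose real caller is not ready yet.

// Stub and driver sketch with hypothetical module names.
public class IntegrationHarness {

    // STUB: the real PaymentModule is not ready, so this temporary program
    // returns a canned answer good enough to let the parent module be tested.
    static class PaymentModuleStub {
        boolean processPayment(double amount) {
            return true;  // canned response
        }
    }

    // Parent module under test in the top-down approach.
    static class OrderModule {
        private final PaymentModuleStub payment = new PaymentModuleStub();
        boolean placeOrder(double amount) {
            return payment.processPayment(amount);
        }
    }

    public static void main(String[] args) {
        // DRIVER: a temporary program that calls the module under test
        // because its real caller has not been developed yet.
        OrderModule order = new OrderModule();
        System.out.println("Order placed: " + order.placeOrder(49.99));
    }
}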
4) System level testing
Once the application is deployed into the environment, testing performed on the whole system is known as system level testing. It is black box testing and is usually done by the test engineers.
At this level of testing so many types of testing are done.
Some of those are
 System Integration Testing
 Load Testing
 Performance Testing
 Stress Testing etc….
5) User acceptance testing.
The same system testing done in the presence of the user is known as user acceptance testing. It is black box testing, usually done by the test engineers.

TESTING METHODOLOGY (OR) TESTING TECHNIQUES

There are 3 methods:
1) Black Box Testing
2) White Box Testing
3) Gray Box Testing
1 Black Box Testing
It is a method of testing in which one will perform testing only on the functional part of an application, without having any structural knowledge. Usually test engineers perform it.
2 White Box Testing (or) Glass Box Testing (or) Clear Box Testing
It is a method of testing in which one will perform testing on the structural part of an application. Usually developers or white box testers perform it.
3 Gray box Testing
It is a method of testing in which one will perform testing on both the functional part as well as the structural part of an application.
Note:
The Test engineer with structural Knowledge will perform gray box testing.

SDLC

SDLC
It contains 6 phases.

1) INITIAL PHASE / REQUIREMENT PHASE.
2) ANALYSIS PHASE.
3) DESIGN PHASE.
4) CODING PHASE.
5) TESTING PHASE.
6) DELIVERY AND MAINTENANCE PHASE.
Initial Phase
Task: Interacting with the customer and gathering the requirements.
Roles: BA (Business Analyst), EM (Engagement Manager)
Process:
First of all, the business analyst will take an appointment from the customer, collect the templates from the company, meet the customer on the appointed date, gather the requirements with the support of the templates, and come back to the company with a requirements document. Then the engagement manager will check for extra requirements; if at all he finds any extra requirements, he is responsible for the excess cost of the project. The engagement manager is also responsible for prototype demonstration in case of confused requirements.
Template: It is defined as a pre-defined format with pre-defined fields used for preparing a document perfectly.
Prototype: It is a rough and rapidly developed model used for demonstrating to the client in order to gather clear requirements and to win the confidence of a customer.
Proof: The proof of this phase is the requirements document, which is also known by the following names:
FRS - (Functional Requirement Specification)
BRS - (Business Requirement Specification)
CRS - (Client/Customer Requirement Specification)
URS - (User Requirement Specification)
BDD - (Business Design Document)
BD - (Business Document)
Note: Some companies may keep the overall information in one document called the ‘BRS’ and the detailed information in another document called the ‘FRS’, but most companies will maintain both kinds of information in a single document.
2) Analysis Phase
Tasks: Feasibility study.
Tentative planning.
Technology selection.
Requirement analysis.
Roles: System Analyst (SA), Project Manager (PM), Team Manager (TM)
Process:

(I) Feasibility study: It is a detailed study of the requirements in order to check whether all the requirements are possible or not.
(II) Tentative planning: The resource planning and time planning are temporarily done in this section.
(III) Technology selection: The list of all the technologies that are to be used to accomplish the project successfully is analyzed and listed out here in this section.
(IV) Requirement analysis:
The list of all the requirements, like human resources, hardware and software, required to accomplish this project successfully is clearly analyzed and listed out here in this section.
Proof: The proof of this phase is the SRS (Software Requirement Specification).
3) Design phase
Tasks: HLD (High Level Designing), LLD (Low Level Designing)
Roles: HLD is done by the CA (Chief Architect); LLD is done by the TL (Technical Lead).
Process: The chief architect will divide the whole project into modules by drawing some diagrams, and the technical lead will divide each module into sub-modules by drawing some diagrams using UML (Unified Modeling Language).
The technical lead will also prepare the pseudo code.
Proof: The proof of this phase is technical design document (TDD).
Pseudo Code: It is a set of English instructions used for guiding the developer to develop the actual code easily.
Module: Module is defined as a group of related functionalities to perform a major task.
4) Coding Phase
Task: Programming / Coding.
Roles: Developers / Programmers.
Process: Developers will develop the actual source code by using the pseudo code and following coding standards like proper indentation, color coding, proper commenting, etc.
Proof: The proof of this phase is SCD (Source Code Document).
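As a small illustration of how pseudo code guides the actual source code, consider the hypothetical requirement below; the English instructions appear as comments and the Java method is developed from them.

// Pseudo code (hypothetical requirement):
//   read the list of marks
//   add all the marks together
//   divide the total by the number of marks
//   return the average
public class AverageMarks {
    static double average(int[] marks) {
        int total = 0;
        for (int mark : marks) {
            total += mark;                      // add all the marks together
        }
        return (double) total / marks.length;   // divide by the number of marks
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{70, 80, 90}));  // prints 80.0
    }
}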
5) Testing Phase
Task: Testing.
Roles: Test Engineer.
Process: First of all, the Test Engineer will receive the requirement documents and review them to understand the requirements.
• If at all they get any doubts while understanding the requirements, they will prepare the Review Report (RR) with the list of doubts.
• Once the clarifications are given and the requirements are clearly understood, they will take the test case template and write the test cases.
• Once the build is released, they will execute the test cases.
• After execution, if at all they find any defects, they will list them out in a defect profile document.
• Then they will send the defect profile to the developers and wait for the next build.
• Once the next build is released, they will once again execute the test cases.
• If they find any defects, they will follow the above procedure again and again till the product is defect free.
• Once they feel the product is defect free, they will stop the process.
Proof: The proof of this phase is Quality Product.
Test case: Test case is an idea of a Test Engineer based on the requirement to test a particular feature.
6) Delivery and Maintenance phase
Delivery:
Task: Installing application in the client environment.
Roles: Senior Test Engineers / Deployment Engineer.
Process: The senior test engineers or deployment engineers will go to the client place and install the application into the client environment with the help of the guidelines provided in the deployment document.
Maintenance: After the delivery, if at all any problem occurs, then that becomes a task; based on the problem the corresponding role will be appointed, and that role will define the process and solve the problem.