STA - Unit III
UNIT III
TEST DESIGN AND EXECUTION
Test Objective Identification, Test Design Factors, Requirement identification, Testable
Requirements, Modeling a Test Design Process, Modeling Test Results, Boundary Value
Testing, Equivalence Class Testing, Path Testing, Data Flow Testing, Test Design
Preparedness Metrics, Test Case Design Effectiveness, Model Driven Test Design, Test
Procedures, Test Case Organization and Tracking, Bug Reporting, Bug Life Cycle.
3.1 TEST OBJECTIVE IDENTIFICATION
Test Objective Identification in software testing refers to the process of defining clear
and specific goals for a testing effort. It involves determining what aspects of the
software need to be tested, what specific functionalities should be validated and what
quality attributes should be evaluated.
Test Objective Identification also defines the goals that a test case is intended to achieve.
This is an important step in software testing as it helps to ensure that the test cases are
targeted at the correct areas of the software and they are effective in finding faults.
Test Objective Identification phase is crucial for planning and designing effective test case
and test suites. It helps testers and stakeholders align their understanding of the testing
scope and expectations. This ensures that the testing effort is focused and purposeful. By
defining clear objectives, the testers can prioritize their testing activities and allocate
resources effectively.
We cannot test the system comprehensively if we do not understand it. Therefore, the first
step in identifying the test objective is to read, understand, and analyze the functional
specification. It is essential to have a background familiarity with the subject area, the
goals of the system, business processes, and system users for a successful analysis.
We have to understand the explicit requirements and also critically analyze requirements
to extract the inferred requirements that are embedded in the requirements.
An inferred requirement is one that a system is expected to support but is not explicitly
stated. Inferred requirements need to be tested just like the explicitly stated requirements.
As an example, let us consider the requirement that the system must be able to sort a list of
items into a desired order.
There are several unstated requirements not being verified by the above test objective. Many
more test objectives can be identified for the requirement:
• Verify that the system produces the sorted list of items when an already sorted list of
items is given as input.
• Verify that the system produces the sorted list of items when a list of items with
varying length is given as input.
• Verify that the number of output items is equal to the number of input items.
• Verify that the contents of the sorted output records are the same as the input record
contents.
• Verify that the system produces an empty list of items when an empty list of items is
given as input.
• Check the system behavior and the output list by giving an input list containing one
or more empty (null) records.
• Verify that the system can sort a list containing a very large number of unsorted items.
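The objectives above map naturally onto executable checks. As a hedged sketch (the system under test is hypothetical, so Python's built-in sorted() stands in for it here):

```python
# Sketch of the sorting test objectives above, written as pytest-style
# test functions. sort_items() is a stand-in for the hypothetical system
# under test; Python's built-in sorted() plays that role.
from collections import Counter

def sort_items(items):
    return sorted(items)

def test_already_sorted_input():
    assert sort_items([1, 2, 3]) == [1, 2, 3]

def test_varying_lengths():
    for n in (1, 2, 10, 1000):
        assert sort_items(list(range(n, 0, -1))) == list(range(1, n + 1))

def test_output_count_equals_input_count():
    items = [5, 3, 9, 1]
    assert len(sort_items(items)) == len(items)

def test_contents_preserved():
    items = [5, 3, 9, 1]
    # same multiset of records in input and output
    assert Counter(sort_items(items)) == Counter(items)

def test_empty_list():
    assert sort_items([]) == []
```

Each function verifies one of the listed objectives; the large-input and null-record objectives would follow the same pattern.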
After the test objectives have been identified, they are put together to form a test group or a subgroup. Sets of (sub)groups of test cases are logically combined to form a larger group. A hierarchical structure of test groups is called a test suite.
UNIT III-SOFTWARE TESTING AND AUTOMATION
Once the test objectives have been identified, they should be documented in the test plan. This helps to ensure that the test cases are developed and executed in a way that meets the specific goals of the testing effort.
It helps to ensure that the software meets its requirements.
3.1.3 Some of the tips for identifying the test objectives:
• Start by reviewing the software requirements
• Consider the risks associated with the software
• Think about the test environment
• Be specific and measurable
• Document the test objectives in the test plan
3.1.4 The objectives of software testing vary depending on the level of testing being performed:
Unit testing - The goal of unit testing is to ensure that each unit of code performs as
expected and is free of bugs.
Integration testing - The goal of integration testing is to identify issues with how different
units of the software interact.
System testing - The goal of system testing is to verify that the complete software system
meets all of its requirements.
Acceptance testing- The goal of acceptance testing is to ensure that the software is ready to
be delivered to the users.
The main objectives of software testing are to ensure that the software is reliable, efficient, and meets
the user's requirements.
A coverage matrix can be generated for the above idea of coverage metrics. The general structure of the coverage matrix [Aij] is represented as shown in the table, where Ti stands for the ith test case, Nj stands for the jth requirement to be covered, and [Aij] stands for the coverage of the test case Ti over the tested element Nj.
The complete set of test cases, that is, a test suite, and the complete set of tested elements of the coverage matrix are identified as Tc = {T1, T2, ..., Tq} and Nc = {N1, N2, ..., Np}, respectively.
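As an illustrative sketch (the test-case and requirement names below are invented for the example), the coverage matrix [Aij] can be represented programmatically:

```python
# Illustrative sketch of the coverage matrix [Aij]: rows are test cases
# Ti, columns are tested elements Nj, and A[i][j] = 1 when Ti covers Nj.
# The test cases and requirements below are invented for the example.

test_suite = ["T1", "T2", "T3"]            # Tc = {T1, ..., Tq}
tested_elements = ["N1", "N2"]             # Nc = {N1, ..., Np}

# Which tested elements each test case covers (assumed data):
covers = {"T1": {"N1"}, "T2": {"N1", "N2"}, "T3": {"N2"}}

A = {t: {n: int(n in covers[t]) for n in tested_elements}
     for t in test_suite}

# A tested element Nj is covered if at least one test case covers it.
uncovered = [n for n in tested_elements
             if not any(A[t][n] for t in test_suite)]
print("uncovered elements:", uncovered)
```

A matrix like this makes it easy to spot requirements that no test case in the suite exercises.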
Effectiveness : A structured test case development methodology must be used as much
as possible to generate a test suite. A structured development methodology
minimizes maintenance work and improves productivity. Careful design of test cases in
the early stages of test suite development ensures their maintainability as new
requirements emerge.
The correctness of the requirements is very critical in order to develop effective test
cases to reveal defects. Therefore, emphasis must be put on identification and analysis
of the requirements from which test objectives are derived.
Productivity : Test cases are created based on the test objectives; the number of test cases produced from the identified objectives is a measure of productivity.
Validation : Another aspect of test case production is validation of the test cases to
ensure that those are reliable. It is natural to expect that an executable test case meets
its specification before it is used to examine another system. This includes ensuring that
test cases have adequate error handling procedures and precise pass–fail criteria.
Maintenance : We need to develop a methodology to assist the production, execution,
and maintenance of the test suite.
User Skill : Another factor to be aware of is the potential users of the test suite. The test
suite should be developed with these users in mind; the test suite must be easy to
deploy and execute in other environments, and the procedures for doing so need to be
properly documented. A test suite production life cycle should consider all six factors
discussed above.
3.2.1 Other Factors that are considered for designing Test Cases:
1. Correctness
2. Negatives
3. User Interface
4. Usability
5. Performance
6. Security
7. Integration
8. Reliability
9. Compatibility
Security: Security testing is the process of determining that an information system protects data and maintains functionality as intended. The basic security concepts that need to be covered by security testing are confidentiality, integrity, authentication, and authorization.
Integration : Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the interface between them
is tested.
Reliability : Reliability testing is to monitor a statistical measure of software maturity over time
and compare this to a desired reliability goal.
Compatibility : Compatibility testing is a part of software's non-functional tests. This testing is
conducted on the application to evaluate the application's compatibility with the computing
environment. Browser compatibility testing can be more appropriately referred to as user
experience testing. This requires that the web applications are tested on various web browsers to
ensure that the Users have the same visual experience irrespective of the browsers through which
they view the web application.
Open State: In this state, the marketing manager is in charge of the requirement and
coordinates the following activities.
Reviews the requirement to find duplicate entries. The marketing manager can move the
duplicate requirement from the open state to the decline state with an explanation and a
pointer to the existing requirement. Also, he or she may ensure that there are no
ambiguities in the requirement and, if there is any ambiguity, consult with the submitter
and update the description and the note fields of the requirement.
Reevaluates the priority of the requirement assigned by the submitter and either accepts it
or modifies it. Determines the severity of the requirement. There are two levels of
severity defined for each requirement: normal and critical.
The marketing manager may decline a requirement in the open state and terminate the
development process, thereby moving the requirement to the decline state with a proper
explanation.
The following fields may be updated by the marketing manager, who is the owner of the
requirement in the open state:
priority: Reevaluate the priority—high or normal—of this requirement.
severity: Assign a severity level—normal or critical—to the requirement.
decline_note: Give an explanation of the requirement if declined.
software_release: Suggest a preferred software release for the requirement.
Review State: The director of software engineering is the owner of the requirement in the
review state. The software engineering director reviews the requirement to understand it and
estimate the time required to implement this. The director thus prepares a preliminary version
of the functional specification for this requirement. This scheme provides a framework to map
the requirement to the functional specification which is to be implemented.
The director of software engineering can move the requirement from the review state
to the assign state by changing the ownership to the marketing manager. Moreover, the
director may decline this requirement if it is not possible to implement. The following fields
may be updated by the director:
eng_comment: Comments generated during the review are noted in this field.
time_to_implement: This field holds the estimated time in person-weeks to implement
the requirement.
attachment: An analysis document, if there is any, including figures and descriptions
that are likely to be useful in the future development of functional specifications.
eng_assigned: Name of the engineer assigned by the director to review the
requirement.
Assign State: The marketing manager is the owner of the requirement in the assign state. A
marketing manager assigns the requirement to a particular software release and moves the
requirement to the commit state by changing the ownership to the program manager, who
owns that particular software release. The marketing manager may decline the requirement
and terminate the development process, thereby moving the requirement to the decline state.
The following fields are updated by the marketing manager:
decline_note and software_release.
The former holds an explanation for declining, if it is moved to the decline state. On
the other hand, if the requirement is moved to the commit state, the marketing manager
updates the latter field to specify the software release in which the requirement will be
available.
Commit State: The program manager is the owner of the requirement in the commit state.
The requirement stays in this state until it is committed to a software release. The program
manager reviews all the requirements that are suggested to be in a particular release which is
owned by him.
The requirement may be moved to the implement state by the program manager after
it is committed to a particular software release. The test engineers must complete the review
of the requirement and the relevant functional specification from a testability point of view.
Next, the test engineers can start designing and writing test cases for this requirement.
The only field to be updated by the program manager, who is the owner of the
requirement in the commit state, is committed_release. The field holds the release number
for this requirement.
Implement State: The director of software engineering is the owner of the requirement in the
implement state. This state implies that the software engineering group is currently coding
and unit testing the requirement.
The following fields may be updated by the director, since he or she is the owner of
a requirement in the implement state:
decline_note: An explanation of the reasons the requirement is moved to decline state.
Verification State: The test manager is the owner of the requirement in the verification state.
The test manager verifies the requirement and identifies one or more methods for assigning a
test verdict: (i) testing, (ii) inspection, (iii) analysis, and (iv) demonstration.
If testing is a method for verifying a requirement, then the test case identifiers and
their results are provided. This information is extracted from the test factory. Inspection
means review of the code. Analysis means mathematical and/or statistical analysis.
Demonstration means observing the system in a live operation. A verdict is assigned to the
requirement by providing the degree of compliance information: full compliance, partial
compliance, or noncompliance.
The test manager may move the requirement to the closed state after it has been
verified and the value of the verification_status field set to “passed.”
The following are some of the fields that are updated by the test manager since he or she is the owner of the requirement at the verification state:
decline_note: The reasons to decline this requirement.
verification_method: Can take one of the four values from the set {Testing, Analysis,
Demonstration, Inspection}.
verification_status: Can take one of the three values from the set {Passed, Failed,
Incomplete}, indicating the final verification status of the requirement.
Closed State: The requirement is moved to the closed state from the verification state by the
test manager after it is verified.
Decline State: In this state, the marketing department is the owner of the requirement. A
requirement comes to this state because of some of the following reasons:
• The marketing department rejected the requirement.
• It is technically not possible to implement this requirement and, possibly, there is
an associated EC (Engineering Change) number.
• The test manager declines the implementation with an EC number.
The marketing group may move the requirement to the submit state after reviewing it
with the customer.
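The state transitions described above can be summarized as an allowed-transition table. The sketch below encodes only the moves named in the text; the move() helper and any enforcement logic are assumptions, not part of a specific tool:

```python
# Allowed transitions of the requirement life cycle, as described above.
# Only moves named in the text are encoded; the move() helper is an
# illustrative assumption.

TRANSITIONS = {
    "submit": {"open"},
    "open": {"review", "decline"},
    "review": {"assign", "decline"},
    "assign": {"commit", "decline"},
    "commit": {"implement"},
    "implement": {"verification", "decline"},
    "verification": {"closed", "decline"},
    "decline": {"submit"},        # after review with the customer
    "closed": set(),              # terminal state
}

def move(state, new_state):
    """Move a requirement to new_state, rejecting undocumented moves."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = move("open", "review")    # marketing manager hands off for review
state = move(state, "assign")     # director returns it for assignment
```

Encoding the life cycle this way lets a requirements tool reject transitions the process does not allow.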
3.4 TESTABLE REQUIREMENTS
3.4.1 System-level Testable Requirements:
System-level tests are designed based on the requirements to be verified. Testable requirements
are the requirements that can be tested to determine whether they have been met. A test
engineer analyzes the requirement, the relevant functional specifications, and the standards to
determine the testability of the requirement. The above task is performed in the commit state.
Testability analysis means assessing the static behavioral characteristics of the requirement to
reveal test objectives.
One way to determine whether a requirement description is testable is as follows:
• Take the requirement description X: The system must perform X.
• Then encapsulate the requirement description to create a test objective: Verify that the
system performs X correctly.
• Review this test objective by asking the question: Is it workable? In other words, find out if
it is possible to execute it assuming that the system and the test environment are available.
• If the answer to the above question is yes, then the requirement description is clear and
detailed for testing purpose. Otherwise, more work needs to be done to revise or supplement
the requirement description.
As an example, let us consider the following requirement: The software image must be
easy to upgrade/downgrade as the network grows. This requirement is too broad and vague to
determine the objective of a test case. In other words, it is a poorly crafted requirement. One
can restate the previous requirement as: The software image must be easy to
upgrade/downgrade for 100 network elements. Then one can easily create a test objective:
Verify that the software image can be upgraded/downgraded for 100 network elements. It
takes time, clear thinking, and courage to change things.
In addition to the testability of the requirements, the following items must be
analyzed by the system test engineers during the review:
• Safety: Have the safety-critical requirements been identified? The safety-critical
requirements specify what the system shall not do, including means for eliminating and
controlling hazards and for limiting any damage in the case that a mishap occurs.
• Security: Have the security requirements, such as confidentiality, integrity, and
availability, been identified?
• Completeness: Have all the essential items been completed? Have all possible
situations been addressed by the requirements? Have all the irrelevant items been omitted?
• Correctness: Are the requirements understandable and have they been stated
without error? Are there any incorrect items?
• Consistency: Are there any conflicting requirements?
• Clarity: Are the requirement materials and the statements in the document clear,
useful, and relevant? Are the diagrams, graphs, and illustrations clear? Have those been
expressed using proper notation to be effective? Do those appear in proper places?
• Relevance: Are the requirements pertinent to the subject?
• Feasibility: Are the requirements implementable?
• Verifiable: Can tests be written to demonstrate conclusively and objectively that the
requirements have been met?
• Traceable: Can each requirement be traced to the functions and data related to it so
that changes in a requirement can lead to easy reevaluation?
3.4.2 Functional Specification
A functional specification provides:
i. A precise description of the major functions the system must fulfill to meet the requirements, a description of how the functions are implemented, and an explanation of the technological risks involved
ii. External interfaces with other software modules
iii. Data flow descriptions such as flowcharts, transaction sequence diagrams, etc., describing the sequence of activities
iv. Fault handling, memory utilization and performance estimates
The functional specification must be reviewed from the point of view of testability.
Common problems with functional specifications include lack of clarity, ambiguity, and
inconsistency.
The following are the Objectives that are kept in mind while reviewing a functional
specification:
• Correctness: Whenever possible, the specification parts should be compared directly
to an external reference for correctness.
• Extensible: The specification is designed to easily accommodate future extensions
that can be clearly envisioned at the time of review.
• Comprehensible: The specification must be easily comprehensible. By the end of
the review process, if the reviewers do not understand how the system works, the
specification or its documentation is likely to be flawed. Such specifications and
documentations need to be reworked to make them more comprehensible.
• Necessity: Each item in the document should be necessary.
• Sufficiency: The specification should be examined for missing or incomplete items. All functions must be described, as well as important properties of input and output data such as volume and magnitude.
• Implementable: It is desirable to have a functional specification that is implementable within the given resource constraints that are available in the target environment, such as hardware, processing power, memory, and network bandwidth.
• Efficient: The functional specification must optimize those parts of the solution that contribute most to the performance of the system.
• Simplicity: In general, it is easier to achieve and verify requirements stated in the form of simple functional specifications.
• Reusable Components: The specification should reuse existing components as much as possible and be modular enough that the common components can be extracted to be reused.
• Limitations: The limitations should be realistic and consistent with the requirements.
3.5 MODELING A TEST DESIGN PROCESS
Test design is the process of creating a strategic plan of test cases, scenarios, and conditions to verify the functionality, performance, and reliability of software or systems. It aims to ensure that test cases effectively uncover software defects and that the software behaves as expected under various conditions. Test objectives are identified from a requirement specification, and each test case is expected to verify the requirements referred to in its requirement_ids field. The originator_group is the group that found a need for the test. The creator may assign the test case to a specific test engineer, including himself, by filling out the eng_assigned field, and move the test case from the create state to the draft state.
Draft State: The owner of this state is the test group, that is, the system test team. In this state, the assigned test engineer enters the following information: tc_author, objective, setup, test_steps, cleanup, candidate_for_automation, automation_priority. After completing all the mandatory fields, the test engineer may reassign the test case to the creator to go through the test case. The test case stays in this state until it is walked through by the creator. After that, the creator may move it from the draft state to the review state by entering all the approvers' names in the approver_names field.
Review and Deleted States The owner of the review state is the creator of the test case. The
owner invites test engineers and developers to review and validate the test case. They ensure
that the test case is executable, and the pass–fail criteria are clearly specified.
Action items are created for the test case if any field needs a modification. Action
items from a review meeting are entered in the review_actions field, and the action items are
executed by the owner to effect changes to the test case.
The test case moves to the released state after all the reviewers approve the changes. If the reviewers decide that it is not a valid test case or that it is not executable, the test case is moved to the deleted state. For a test case to be deleted, a review action item must explicitly call for its deletion.
Released and Update States A test case in the released state is ready for execution, and it
becomes a part of a test suite. On the other hand, a test case in the update state implies that it is in
the process of being modified to enhance its reusability, being fine-tuned with respect to its pass–fail
criteria, and/or having the detailed test procedure fixed. For example, a reusable test case should be
parameterized rather than hard coded with data values.
Moreover, a test case should be updated to adapt it to changes in system functionality
or the environment.
One can improve the repeatability of the test case so that others can quickly understand, borrow, and reuse it by moving a test case in the released–update loop a small number of times.
Also, this provides the foundation and justification for the test case to be automated. A
test case should be platform independent. If an update involves a small change, the test
engineer may move the test case back to the released state after the fix. Otherwise, the test
case is subject to a further review, which is achieved by moving it to the review state. A test
case may be revised once every time it is executed.
Deprecated State An obsolete test case may be moved to a deprecated state. Ideally, if it has
not been executed for a year, then the test case should be reviewed for its continued
existence.
A test case may become obsolete over time because of the following reasons.
First, the functionality of the system being tested has much changed, and due to a lack
of test case maintenance, a test case becomes obsolete.
Second, as an old test case is updated, some of the requirements of the original test case
may no longer be fulfilled.
Third, reusability of test cases tends to degrade over time as the situation changes. This
is especially true of test cases which are not designed with adequate attention to
possible reuse.
Finally, test cases may be carried forward carelessly long after their original
justifications have disappeared. Nobody may know the original justification for a particular test case, so it continues to be used.
The schema requires a test suite ID, a title, an objective, and a list of test cases to be managed
by the test suite. One also identifies the individual test cases to be executed (test cycles 1, 2, 3
and/or regression) and the requirements that the test cases satisfy.
The idea here is to gather a selected number of released test cases and repackage them to form a test suite for a new project.
3.6 MODELING TEST RESULTS
In a large, complex system with many defects, a test execution can have several possible results, not merely passed or failed. Therefore, we model the results of test execution using a state transition diagram as shown in the figure below, and the corresponding schema is given in the table following the figure.
The execution status of a test case is put in its initial state of untested after designing or
selecting a test case.
If the test case is not valid for the current software release, the test case result is moved to
the invalid state.
In the untested state, the test suite identifier is noted in a field called test_suite_id. The
state of the test result, after execution of a test case is started, may change to one of the
following states:
passed, failed, invalid, or blocked.
A test engineer may move the test case result to the passed state from the untested state
if the test case execution is complete and satisfies the pass criteria.
If the test execution is complete and satisfies the fail criteria, a test engineer moves the
test result to the failed state from the untested state and associates the defect with the test
case by initializing the defect_ids field.
The test case must be reexecuted when a new build containing a fix for the defect is
received. If the reexecution is complete and satisfies the pass criteria, the test result is
moved to the passed state.
The test case result is moved to a blocked state if it is not possible to completely execute
it. If known, the defect number that blocks the execution of the test case is recorded in the
defect_ids field. The test case may be reexecuted when a new build addressing a blocked
test case is received.
If the execution is complete and satisfies the pass criteria, the test result is moved to the
passed state. On the other hand, if it satisfies the fail criteria, the test result is moved to
the failed state. If the execution is unsuccessful due to a new blocking defect, the test
result remains in the blocked state and the new defect that blocked the test case is listed
in the defect_ids field.
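As a hedged sketch of this result model (the state and field names follow the schema in the text; the execute() helper itself is an assumption, not part of any specific tool):

```python
# Sketch of the test result model: a result starts in the untested state
# and moves to passed, failed, or blocked as executions complete. The
# defect_ids field accumulates defect numbers, as described above.

def execute(result, outcome, defect_id=None):
    """Record the outcome of one execution: 'pass', 'fail', or 'blocked'."""
    if outcome == "pass":
        result["state"] = "passed"
    elif outcome == "fail":
        result["state"] = "failed"
        if defect_id:
            result["defect_ids"].append(defect_id)
    elif outcome == "blocked":
        result["state"] = "blocked"
        if defect_id:                      # blocking defect, if known
            result["defect_ids"].append(defect_id)
    return result

r = {"state": "untested", "defect_ids": [], "test_suite_id": "TS-1"}
execute(r, "fail", defect_id="D-101")      # first build: fail criteria met
execute(r, "pass")                         # re-execution on the fixed build
print(r["state"], r["defect_ids"])         # passed ['D-101']
```

Note that the defect recorded during the failed execution stays associated with the test case even after the re-execution passes, preserving the history the model calls for.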
The benefits of modeling test results are improved analysis, improved decision making, and improved communication.
Challenges in modeling test results:
• Complexity: It can be difficult to create a model if the results are complex.
• Time: It can take time to create a model of the test results.
• Cost: It can be expensive to create a model.
Common metrics used to model the test results:
• Number of defects found: the count of defects found during testing.
• Severity of defects: defects can be classified as critical, major, or minor.
• Time to find the defects: this can be used to identify the areas that are difficult to test.
• Coverage: the percentage of the software that was tested.
Importance:
• When a huge number of test cases is available and checking each one individually is impractical, this testing is of great use.
• Test data is analyzed at the boundaries of the partitioned data after equivalence class partitioning has been performed.
• This is a black-box testing process that focuses on valid and invalid test case scenarios and helps find the boundary values at the extreme ends without discarding any test data valuable for testing purposes.
• It is also suitable where many calculations are required for variable inputs, across a variety of applications.
• The testing mechanism helps detect errors or faults at the boundaries of a partition, which is a plus point since most errors occur at the boundaries, before the application is delivered to clients.
The following are the key steps involved in performing boundary value testing:
• Identify the boundaries
• Identify the valid and invalid boundaries
• Select test cases
• Execute the test cases
• Analyze the results
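The steps above can be sketched for a single numeric input with an assumed valid range (the range [1, 100] and the chosen values are illustrative):

```python
# Boundary value selection for one numeric input with valid range
# [min_v, max_v]: test at min, min+1, a nominal value, max-1, and max,
# plus the invalid neighbours min-1 and max+1. The concrete range used
# below is an assumption for the example.

def boundary_values(min_v, max_v):
    nominal = (min_v + max_v) // 2
    valid = [min_v, min_v + 1, nominal, max_v - 1, max_v]
    invalid = [min_v - 1, max_v + 1]
    return valid, invalid

valid, invalid = boundary_values(1, 100)
print(valid)      # [1, 2, 50, 99, 100]
print(invalid)    # [0, 101]
```

Each value in the valid list should be accepted by the system under test, and each value in the invalid list should be rejected; executing and analyzing those checks completes the remaining steps.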
Disadvantages of boundary value analysis:
• Limited scope: BVA addresses only boundary-related defects and can miss issues that occur in the interior of the input domain.
• Combinatorial explosion: BVA can result in a large number of test cases for systems with multiple inputs, increasing the testing effort.
• Time consuming: BVA can be time consuming, especially when dealing with complex input ranges or multiple boundary conditions.
• Incomplete coverage: while effective in many cases, BVA may not cover all possible scenarios or corner cases.
The divided sets are called equivalence partitions or equivalence classes. Then we pick only one value from each partition for testing. The hypothesis behind this technique is that if one condition/value in a partition passes, all the others will also pass. Likewise, if one condition in a partition fails, all other conditions in that partition will fail.
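A minimal sketch of this technique, assuming a hypothetical "age" field where 18 to 60 is valid (the field and its ranges are invented for illustration):

```python
# Equivalence classes for a hypothetical "age" input where 18-60 is
# valid. One representative value is picked from each partition, per the
# hypothesis that any member of a partition behaves like the others.

partitions = {
    "invalid_below": range(0, 18),     # 0..17   -> expect reject
    "valid":         range(18, 61),    # 18..60  -> expect accept
    "invalid_above": range(61, 121),   # 61..120 -> expect reject
}

def representative(name):
    r = partitions[name]
    return r[len(r) // 2]              # any single member would do

tests = {name: representative(name) for name in partitions}
print(tests)   # one test value per equivalence class
```

Three test values now stand in for 121 possible inputs, which is the test-data reduction the technique is valued for.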
Advantages:
• It is process-oriented.
• It helps to decrease the overall test execution time.
• It reduces the set of test data.
Disadvantages:
• All necessary inputs may not be covered.
• This technique does not consider the conditions covered by boundary value analysis.
• The test engineer might assume that the output for every value in a partition is right, which can lead to problems during the testing process.
Path Testing Process:
In the path testing method, the control flow graph of a program is designed to find a
set of linearly independent paths of execution. In this method, Cyclomatic Complexity is used
to determine the number of linearly independent paths and then test cases are generated for
each path.
Cyclomatic Complexity = E - N + 2P
= 8 - 7 + 2(1) = 3
In the above example (E = 8 edges, N = 7 nodes, P = 1 connected component), there are a few conditional statements that are executed depending on which condition is satisfied. Here there are 3 paths, or conditions, that need to be tested to get the output:
Path 1: 1,2,3,5,6, 7
Path 2: 1,2,4,5,6, 7
Path 3: 1, 6, 7
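The computation can be checked with a small sketch; the adjacency list below is an assumed control flow graph chosen to be consistent with E = 8, N = 7, P = 1 and the three paths listed:

```python
# Cyclomatic complexity V(G) = E - N + 2P computed from a control flow
# graph stored as an adjacency list. This particular graph is an
# assumption chosen to match E = 8, N = 7, P = 1 and the paths above.

graph = {
    1: [2, 6],    # decision node
    2: [3, 4],    # decision node
    3: [5],
    4: [5],
    5: [6],
    6: [7],
    7: [],        # exit node
}

E = sum(len(successors) for successors in graph.values())   # edges
N = len(graph)                                              # nodes
P = 1                                                       # components
print(E, N, E - N + 2 * P)                                  # 8 7 3
```

The result, 3, matches the number of linearly independent paths enumerated above.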
Generation of Test Cases:
After the identification of independent paths, we may generate test cases that traverse
all independent paths at the time of executing the program. This process will ensure that each
transition of the control flow diagram is traversed at least once.
Test Case   A    B    C    PATH
1           50   55   52   A=B, PRINT 55
2           50   55   60   A=C, PRINT 60
3           40   ANY  ANY  PRINT 40
Path Testing Techniques
Control Flow Graph: The program is converted into a control flow graph by representing
the code into nodes and edges.
Decision to Decision path: The control flow graph can be broken into various Decision
to Decision paths and then collapsed into individual nodes.
Independent paths: An Independent path is a path through a Decision to Decision path
graph that cannot be reproduced from other paths by other methods.
A du-path for a variable 'v' may have many redefinitions of variable 'v' between the
initial node (DEF(v, m)) and the final node (USE(v, n)).
A dc-path for a variable 'v' will not have any definition of variable 'v' between the
initial node (DEF(v, m)) and the final node (USE(v, n)).
The du-paths that are not definition clear paths are potentially troublesome
paths. They should be identified and tested with topmost priority.
Identification of du and dc Paths
The various steps for the identification of du and dc paths are given as:
(i) Draw the program graph of the program.
(ii) Find all variables of the program and prepare a table for define / use status of
all variables using the following format:
(iii) Generate all du-paths from define/use variable table of step (ii) using the following format:
(iv) Identify those du-paths which are not dc-paths. The following testing strategies are used for this purpose.
Testing Strategies Using du-Paths:
We want to generate test cases which trace every definition to each of its uses and
every use back to each of its definitions. Some of the testing strategies are given as:
a. Test all du-paths:
All du-paths generated for all variables are tested. This is the strongest data flow
testing strategy covering all possible du-paths.
b. Test all uses
Find at least one path from every definition of every variable to every use of that
variable which can be reached by that definition.
For every use of a variable, there is a path from the definition of that variable to the
use of that variable.
c. Test all definitions
Find paths from every definition of every variable to at least one use of that variable;
we may choose any strategy for testing.
As we go from 'test all du-paths' (strategy a) to 'test all definitions' (strategy c), the number
of paths is reduced. However, it is best to test all du-paths (strategy a) and give priority to those
du-paths which are not definition clear paths. The first strategy requires that each definition reaches
all possible uses through all possible du-paths, the second requires that each definition
reaches all possible uses, and the third requires that each definition reaches at least one use.
Generation of Test Cases:
After finding paths, test cases are generated by giving values to the input parameter.
We get different test suites for each variable.
Example: Let us consider the program as follows:
1. read x, y;
2. if (x > y)
3.     a = x + 1;
   else
4.     a = y - 1;
5. print a;
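The five-line program above can be transcribed as a Python function so that its du-paths can be exercised directly:

```python
# The example program above as a runnable function. The node numbers
# from the program graph are noted in comments.

def compute(x, y):
    if x > y:          # node 2: uses of x and y
        a = x + 1      # node 3: definition of 'a', use of x
    else:
        a = y - 1      # node 4: definition of 'a', use of y
    return a           # node 5: use of 'a'

# Path 1-2-3-5 (x > y) exercises the du-path (3, 5) for 'a';
# path 1-2-4-5 (x <= y) exercises the du-path (4, 5).
print(compute(20, 10))  # 21
print(compute(15, 25))  # 24
```

Because each definition of 'a' immediately reaches its only use with no intervening redefinition, both du-paths here are also definition clear paths.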
Control flow graph ( Program graph) of above example:
Define/use of variables of above example:
The du-paths with beginning node and end node are given as:
Variable   du-path (Begin, end)
x          1, 2, 3
           1, 2
y          1, 2, 4
           1, 2
a          3, 5
           4, 5
The first strategy (best) is to test all du-paths, the second is to test all uses and the third is to test
all definitions. The du-paths as per these three strategies are given as:
S.No.   x    y    a    Expected Output   Remarks
1       20   10   21   21                1, 2, 3
2       20   10   21   21                1, 2
3       15   25   24   24                1, 2, 4
4       15   25   24   24                1, 2
5       20   10   21   21                3, 5
6       15   25   24   24                4, 5
Test cases for all definitions:
S.No.   x    y    a    Expected Output   Remarks
1       20   10   21   21                1-2-3-5
2       15   25   24   24                1-2-4-5
The following metrics can be used to represent the level of preparedness of test design.
Preparation Status of Test Cases (PST): A test case can go through a number of phases, or
states, such as draft and review, before it is released as a valid and useful test case. Thus, it is
useful to periodically monitor the progress of test design by counting the test cases lying in
different states of design—create, draft, review, released, and deleted. It is expected that all the
planned test cases that are created for a particular project eventually move to the released state
before the start of test execution.
Average Time Spent (ATS) in Test Case Design: It is useful to know the amount of time it
takes for a test case to move from its initial conception, that is, create state, to when it is
considered to be usable, that is, released state. This metric is useful in allocating time to the
test preparation activity in a subsequent test project. Hence, it is useful in test planning.
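The PST and ATS metrics above can be computed from simple test-case records. This is a minimal sketch, assuming each record carries its current design state and the number of days it took to move from the create state to the released state; the sample data is illustrative only.

```python
# PST: count of test cases in each design state.
# ATS: average create-to-released time over the released test cases.
from collections import Counter

# Hypothetical test-case records for a small project.
test_cases = [
    {"id": "TC-1", "state": "released", "days_to_release": 4},
    {"id": "TC-2", "state": "released", "days_to_release": 6},
    {"id": "TC-3", "state": "review",   "days_to_release": None},
    {"id": "TC-4", "state": "draft",    "days_to_release": None},
]

# Preparation Status of Test cases: how many lie in each state.
pst = Counter(tc["state"] for tc in test_cases)

# Average Time Spent in test case design, over released cases only.
released = [tc["days_to_release"] for tc in test_cases
            if tc["state"] == "released"]
ats = sum(released) / len(released)

print(dict(pst))  # state counts
print(ats)        # 5.0 days on average
```

Tracking these two numbers over time shows whether all planned test cases will reach the released state before test execution starts.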
Number of Available Test (NAT) Cases: This is the number of test cases in the released
state from existing projects. Some of these test cases are selected for regression testing in the
current test project.
Number of Planned Test (NPT) Cases: This is the number of test cases that are in a test
suite and ready for execution at the start of system testing. This metric is useful in scheduling
test execution. As testing continues, new, unplanned test cases may be required to be
designed. A large number of new test cases compared to NPT suggests that the initial planning
was not accurate.
Coverage of a Test Suite (CTS): This metric gives the fraction of all
requirements covered by a selected number of test cases or a complete test suite. The CTS is a
measure of the number of test cases needed to be selected or designed to have good coverage
of system requirements.
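The CTS metric can be computed from a requirements traceability mapping. This is a sketch under assumed data: the requirement IDs and the mapping from test cases to the requirements they exercise are made up for illustration.

```python
# CTS: fraction of all requirements covered by the selected test suite.

all_requirements = {"R1", "R2", "R3", "R4", "R5"}

# Hypothetical mapping: each test case -> requirements it exercises.
test_suite = {
    "TC-1": {"R1", "R2"},
    "TC-2": {"R2", "R3"},
    "TC-3": {"R5"},
}

# Union of everything the suite touches, restricted to known requirements.
covered = set().union(*test_suite.values())
cts = len(covered & all_requirements) / len(all_requirements)
print(cts)  # 4 of 5 requirements covered -> 0.8
```

Here R4 is uncovered, which tells the planner that at least one more test case must be selected or designed.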
the model-based testing literature. The input values are augmented with other values needed
to run the tests (including values to reach the point in the software being tested, to display
output, and to terminate the program).
The test cases are then automated into test scripts (when feasible and practical), run on
the software to produce results, and results are evaluated. It is important that results
from automation and execution be used to feed back into test design, resulting in
additional or modified tests.
This process has two major benefits:
o First, it provides a clean separation of tasks between test design, automation,
execution and evaluation.
o Second, raising our abstraction level makes test design much easier. Instead of
designing tests for a messy implementation or complicated design model, we design
at an elegant mathematical level of abstraction. This is exactly how algebra and
calculus have been used in traditional engineering for decades.
3.14 Test Case Planning Overview (Test Plan Documents)
The figure below shows the relationships among the different types of test plans. As we can see in
the figure, moving further away from the top-level test plan puts less emphasis on the process
of creation and more on the resulting written document. The reason is that these plans are
used on a daily, sometimes hourly, basis by the testers performing the testing.
At the lowest level they become step-by-step instructions for executing a test, making
sure that they’re clear, concise, and organized.
The bottom line is that the test team should create test plans that cover the information
outlined in IEEE 829. The four goals of test case planning should be met: organization,
repeatability, tracking, and proof.
o Stop: Describes the steps for an orderly halt to the test.
o Wrap up: Explains how to restore the environment to its pre-test condition.
o Contingencies: Explains what to do if things don't go as planned.
The following are the elements that should be included in a test procedure:
Purpose of the test
Steps of the test
Expected results
Preconditions
Postconditions
Test procedures can be written in various formats, but they should be clear, concise, and easy
to follow. They should be consistent with the other documentation for the software project.
Benefits of Test Procedures:
Consistency: Test procedures help to ensure that tests are executed consistently. This
improves the accuracy of the results.
Documentation: Test procedures document the testing process, which can be helpful
for debugging and troubleshooting.
Communication: Test procedures can help to communicate the testing requirements
to the development team and other stakeholders.
Reusability: Test procedures can be reused for future testing, which can save time and
effort.
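The elements listed above can be captured as structured data rather than free text, which makes procedures easy to validate and reuse. This is a minimal sketch; the login scenario and field names are made-up examples, not a prescribed IEEE 829 schema.

```python
# A test procedure as structured data, following the elements listed
# above: purpose, steps, expected results, pre- and postconditions.

procedure = {
    "purpose": "Verify that a registered user can log in",
    "preconditions": ["User account 'alice' exists", "Server is running"],
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the Login button",
    ],
    "expected_results": ["Dashboard page is displayed"],
    "postconditions": ["User session is active"],
}

# A simple well-formedness check: every required element is present.
required = {"purpose", "steps", "expected_results",
            "preconditions", "postconditions"}
print(required.issubset(procedure))  # True
```

Storing procedures this way lets a team lint every procedure for missing elements before it is released for execution.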
3. Spreadsheet - A popular and very workable method of tracking test cases is to use a
spreadsheet. The figure below shows an example of this. By keeping all the details of the test cases
in one place, a spreadsheet can provide an at-a-glance view of your testing status. Spreadsheets
are easy to use, relatively easy to set up, and provide good tracking and proof of testing.
Figure - A spreadsheet can be used to effectively track and manage test suites and test cases.
4. Custom database - The ideal method for tracking test cases is to use a customized
database programmed specifically to handle test cases. Many commercially available
applications are set up to perform just this specific task.
Database software such as FileMaker Pro, Microsoft Access, and many others provide almost
drag-and-drop database creation that would let us build a database mapped to the
IEEE 829 standard in just a few hours. A tester could then set up reports and queries
to answer just about any question regarding the test cases.
5. Test Case Management Tool: A test case management tool is a software application
that can be used to create, store, update, and execute test cases. These tools provide
features such as version control, test case prioritization, and test case execution reports.