
UNIT III - SOFTWARE TESTING AND AUTOMATION

TEST DESIGN AND EXECUTION
Test Objective Identification, Test Design Factors, Requirement identification, Testable
Requirements, Modeling a Test Design Process, Modeling Test Results, Boundary Value
Testing, Equivalence Class Testing, Path Testing, Data Flow Testing, Test Design
Preparedness Metrics, Test Case Design Effectiveness, Model Driven Test Design, Test
Procedures, Test Case Organization and Tracking, Bug Reporting, Bug Life Cycle.
3.1 TEST OBJECTIVE IDENTIFICATION
 Test Objective Identification in software testing refers to the process of defining clear
and specific goals for a testing effort. It involves determining which aspects of the
software need to be tested, which specific functionalities should be validated, and which
quality attributes should be evaluated.
 Test Objective Identification also defines the goals that a test case is intended to achieve.
This is an important step in software testing, as it helps to ensure that the test cases are
targeted at the correct areas of the software and that they are effective in finding faults.
 The Test Objective Identification phase is crucial for planning and designing effective test
cases and test suites. It helps testers and stakeholders align their understanding of the
testing scope and expectations, ensuring that the testing effort is focused and purposeful. By
defining clear objectives, testers can prioritize their testing activities and allocate
resources effectively.
 We cannot test the system comprehensively if we do not understand it. Therefore, the first
step in identifying the test objective is to read, understand, and analyze the functional
specification. It is essential to have a background familiarity with the subject area, the
goals of the system, business processes, and system users for a successful analysis.
 We have to understand the explicit requirements and also critically analyze them to
extract the inferred requirements embedded within them.
 An inferred requirement is one that a system is expected to support but is not explicitly
stated. Inferred requirements need to be tested just like the explicitly stated requirements.
As an example, let us consider the requirement that the system must be able to sort a list of
items into a desired order.
A single test objective, such as verifying that an unsorted list is sorted, leaves several
unstated requirements unverified. Many more test objectives can be identified for this
requirement:
• Verify that the system produces the sorted list of items when an already sorted list of
items is given as input.
• Verify that the system produces the sorted list of items when a list of items with
varying length is given as input.
• Verify that the number of output items is equal to the number of input items.
• Verify that the contents of the sorted output records are the same as the input record
contents.
• Verify that the system produces an empty list of items when an empty list of items is
given as input.
• Check the system behavior and the output list by giving an input list containing one
or more empty (null) records.
• Verify that the system can sort a list containing a very large number of unsorted items.
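The objectives above can be expressed directly as executable checks. Below is a minimal sketch in Python, assuming a hypothetical system function `sort_items` that takes a list of records and returns them in sorted order (the function name and the placeholder implementation are illustrative, not part of the requirement):

```python
# Illustrative checks for the sort requirement; sort_items is a stand-in
# for the system function under test.
def sort_items(items):
    # Placeholder implementation so the checks below can run.
    return sorted(items)

def run_checks():
    # An already-sorted input must remain sorted.
    assert sort_items([1, 2, 3]) == [1, 2, 3]
    # Lists of varying length must be handled.
    for n in (1, 5, 50):
        out = sort_items(list(range(n, 0, -1)))
        assert out == sorted(out)
        # The number of output items must equal the number of input items.
        assert len(out) == n
    # Output contents must match input contents (no records lost or altered).
    data = [4, 1, 3, 1]
    assert sorted(data) == sorted(sort_items(data))
    # An empty input list must yield an empty output list.
    assert sort_items([]) == []
    # A very large unsorted list must still be sorted correctly.
    big = list(range(100_000, 0, -1))
    assert sort_items(big) == sorted(big)

run_checks()
print("all sort objectives verified")
```

Each assertion corresponds to one of the bullet objectives, which is why a single "verify sorting works" check is not enough.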

The test objectives are put together to form a test group or a subgroup after they have been
identified. A set of (sub) groups of test cases are logically combined to form a larger group. A
hierarchical structure of test groups is called a test suite.


Figure - Test suite structure.


It is necessary to identify the test groups based on test categories and refine the test groups
into sets of test objectives. Individual test cases are created for each test objective within the
subgroups. Test groups may be nested to an arbitrary depth. They may be used to help system
test planning and execution.
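The hierarchical structure described above can be sketched as a small tree of groups and test cases. The group names and test case identifiers below are illustrative only:

```python
# A minimal sketch of a hierarchical test suite: groups may contain
# subgroups or test-case identifiers, nested to arbitrary depth.
class TestGroup:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # TestGroup objects or test-case ids

    def test_cases(self):
        """Collect every test-case identifier in this (sub)tree."""
        cases = []
        for child in self.children:
            if isinstance(child, TestGroup):
                cases.extend(child.test_cases())
            else:
                cases.append(child)
        return cases

suite = TestGroup("Sorting", [
    TestGroup("Functional", ["TC-1", "TC-2", "TC-3"]),
    TestGroup("Robustness", [
        TestGroup("EmptyInputs", ["TC-4"]),
        TestGroup("LargeInputs", ["TC-5"]),
    ]),
])
print(suite.test_cases())  # → ['TC-1', 'TC-2', 'TC-3', 'TC-4', 'TC-5']
```

During execution, a whole group or subgroup can then be selected and run as a unit, which is how test groups support system test planning.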

Five Typical Objectives of Testing


Delivering quality products is the ultimate objective of testing. The various objectives of testing are:
 Identification of Bugs and Errors
 Delivering Quality Product
 Justification with Requirement
 Increasing Confidence in the Product
 Enhanced Growth

3.1.1 Key Steps Involved in Test Objective Identification:


 Requirement Analysis: Understanding the functional and non-functional requirements of the
software or the system under test. This involves analyzing project documentation, user stories,
use cases, and other relevant sources.
 Risk Assessment: Identifying potential risks and their impact on the software. This includes
evaluating the criticality of various functionalities and determining which areas require more
rigorous testing.
 Defining Testing Goals: Establishing specific goals for the testing effort, such as validating
certain functionalities, ensuring system stability under load, assessing performance metrics, or
confirming compliance with industry standards.
 Prioritization: Determining the order of testing activities based on factors such as risk,
importance, dependencies, and project timelines. This helps allocate testing resources
efficiently and ensures that critical aspects are tested first.
 Test Scope Definition: Clearly defining the boundaries of the testing effort, including which
components, modules, or functionalities will be covered and which ones are out of scope.
 Test Coverage Planning: Identifying the necessary test types, such as functional,
performance, security, and usability, to ensure comprehensive coverage of the identified
objectives.

Once the test objectives have been identified, they should be documented in the test plan. This
helps to ensure that the test cases are developed and executed in a way that meets the specific
goals of the testing effort.
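Documenting objectives in the test plan can be as simple as keeping one structured record per objective. A minimal sketch follows; the field names, priorities, and example objectives are illustrative assumptions, not a prescribed schema:

```python
# Illustrative record for a documented test objective; the field names
# are assumptions for this sketch, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class TestObjective:
    objective_id: str
    description: str           # what is to be verified
    requirement_id: str        # requirement this objective traces back to
    priority: str = "normal"   # e.g., "high" or "normal"
    in_scope: bool = True      # outcome of test scope definition
    test_types: list = field(default_factory=list)  # e.g., ["functional"]

plan = [
    TestObjective("OBJ-1", "Verify sorted output for unsorted input",
                  "REQ-10", priority="high", test_types=["functional"]),
    TestObjective("OBJ-2", "Verify behavior for an empty input list",
                  "REQ-10", test_types=["functional", "robustness"]),
]
# Prioritization: high-priority objectives are scheduled first.
ordered = sorted(plan, key=lambda o: o.priority != "high")
print([o.objective_id for o in ordered])  # → ['OBJ-1', 'OBJ-2']
```

Keeping the requirement identifier on each objective is what later makes traceability from test cases back to requirements possible.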

3.1.2 Benefits of Test Objective Identification:


 It helps to ensure that the test cases are targeted at the correct areas of the software
 It helps to ensure that the test cases are effective in finding faults
 It helps to improve the efficiency of the testing process

 It helps to ensure that the software meets the requirements

3.1.3 Tips for Identifying Test Objectives:
 Start by reviewing the software requirements
 Consider the risks associated with the software
 Think about the test environment
 Be specific and measurable
 Document the test objectives in the test plan

3.1.4 The objectives of software testing vary depending on the level of testing being performed:
 Unit testing - The goal of unit testing is to ensure that each unit of code performs as
expected and is free of bugs.
 Integration testing - The goal of integration testing is to identify issues with how different
units of the software interact.
 System testing - The goal of system testing is to verify that the complete software system
meets all of its requirements.
 Acceptance testing- The goal of acceptance testing is to ensure that the software is ready to
be delivered to the users.

The main objectives of software testing are to ensure that the software is reliable, efficient, and meets
the user's requirements.

3.2 TEST DESIGN FACTORS


Test design factors in software testing refer to the various considerations that influence
the design of test cases and test suites. The test design activities must be performed in a
planned manner in order to meet technical criteria, such as effectiveness, and economic
criteria, such as productivity. Therefore, the following factors need to be considered
during test design:
1. Coverage metrics,
2. Effectiveness,
3. Productivity,
4. Validation,
5. Maintenance, and
6. User skill.
 Coverage Metrics : Coverage metrics concern the extent to which the Device Under
Test (DUT) is examined by a test suite (or test case) designed to meet certain criteria.
Coverage metrics lend us two advantages.
1. First, these allow us to quantify the extent to which a test suite covers certain
aspects, such as functional, structural, and interface of a system.
2. Second, these allow us to measure the progress of system testing. The criteria
may be path testing, branch testing, or a feature identified from a requirement
specification.
Each test case is given an identifier and associated with a set of requirements. This
association is captured in a coverage matrix [Aij]. The general structure of the coverage
matrix [Aij] is shown in the Table, where Ti stands for the ith test case, Nj stands for the
jth requirement to be covered, and Aij denotes the coverage of test case Ti over the tested
element Nj.
The complete set of test cases, that is, a test suite, and the complete set of tested elements
of the coverage matrix are denoted Tc = {T1, T2, ..., Tq} and Nc = {N1, N2, ..., Np},
respectively.
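A coverage matrix of this kind can be represented directly in code. The sketch below uses small illustrative sets of test cases and requirements (the data is invented for the example); it computes the extent of coverage and looks up which tests cover a given requirement:

```python
# Minimal sketch of a coverage matrix A[i][j]: rows are test cases Ti,
# columns are tested elements (requirements) Nj. The data is illustrative.
Tc = ["T1", "T2", "T3"]              # test suite
Nc = ["N1", "N2", "N3", "N4"]        # requirements to be covered
A = {
    ("T1", "N1"): 1, ("T1", "N2"): 1,
    ("T2", "N2"): 1, ("T2", "N3"): 1,
    ("T3", "N1"): 1,
}

def covered(requirement):
    """Test cases that cover a given requirement."""
    return [t for t in Tc if A.get((t, requirement), 0)]

# Extent of coverage: fraction of requirements covered by at least one test.
covered_reqs = [n for n in Nc if covered(n)]
print(len(covered_reqs) / len(Nc))   # → 0.75 (N4 is uncovered)
print(covered("N2"))                  # → ['T1', 'T2']
```

The same matrix supports progress tracking: as tests execute, the fraction of covered requirements quantifies how far system testing has advanced.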
 Effectiveness : A structured test case development methodology must be used as much
as possible to generate a test suite. A structured development methodology
minimizes maintenance work and improves productivity. Careful design of test cases in
the early stages of test suite development ensures their maintainability as new
requirements emerge.
The correctness of the requirements is very critical in order to develop effective test
cases to reveal defects. Therefore, emphasis must be put on identification and analysis
of the requirements from which test objectives are derived.
 Productivity : Test cases are created based on the test objectives; a structured,
repeatable process for deriving test cases from objectives improves the productivity of
test suite development.
 Validation : Another aspect of test case production is validation of the test cases to
ensure that they are reliable. It is natural to expect that an executable test case meets
its specification before it is used to examine another system. This includes ensuring that
test cases have adequate error handling procedures and precise pass–fail criteria.
 Maintenance : We need to develop a methodology to assist the production, execution,
and maintenance of the test suite.
 User Skill : Another factor to be aware of is the potential users of the test suite. The test
suite should be developed with these users in mind; it must be easy to deploy and
execute in other environments, and the procedures for doing so need to be properly
documented. A test suite production life cycle should consider all six factors discussed
above.

3.2.1 Other Factors Considered for Designing Test Cases:
1. Correctness
2. Negatives
3. User Interface
4. Usability
5. Performance
6. Security
7. Integration
8. Reliability
9. Compatibility

Correctness : Correctness is the minimum requirement of software and the essential purpose of
testing. The tester may or may not know the internal details of the software module under test,
e.g., control flow, data flow, etc.
Negatives : Negative testing checks what the product is not supposed to do.
User Interface : In UI testing we check the user interfaces. For example, on a web page we may
check a button, including its size and shape; we can also check the navigation links.
Usability : Usability testing measures the suitability of the software for its users, and is directed at
measuring the efficiency of the software with which specified users can achieve specified goals in
particular environments.
Performance : In software engineering, performance testing is performed from one perspective
to determine how fast some aspect of a system performs under a particular workload.

Panimalar Engg. College, Chennai City Campus

Security : Security testing is the process of determining that an information system protects
data and maintains functionality as intended. The basic security concepts that need to be
covered by security testing are confidentiality, integrity, authentication, and authorization.
Integration : Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the interface between them
is tested.
Reliability : Reliability testing is to monitor a statistical measure of software maturity over time
and compare this to a desired reliability goal.
Compatibility : Compatibility testing is part of the software's non-functional tests. This testing
is conducted on the application to evaluate the application's compatibility with the computing
environment. Browser compatibility testing can more appropriately be referred to as user
experience testing. It requires that web applications be tested on various web browsers to
ensure that users have the same visual experience irrespective of the browser through which
they view the web application.

3.3 REQUIREMENT IDENTIFICATION


Requirements are a description of the needs or desires of users that a system is
supposed to implement. There are two main challenges in defining requirements.
First is to ensure that the right requirements are captured, which is essential for
meeting the expectations of the users. Requirements must be expressed in such a form that the
users can easily review and confirm their correctness.
Second is to ensure that the requirements are communicated unambiguously to the
developers and testers so that there are no surprises when the system is delivered.
Requirement Life Cycle:

Figure - State transition diagram of requirement.


Figure shows a state diagram of a simplified requirement life cycle starting from the
submit state to the closed state. This transition model provides different phases of a
requirement, where each phase is represented by a state. This model represents the life of a
requirement from its inception to completion through the following states: submit,
open, review, assign, commit, implement, verification, and finally closed. At each of these
states certain actions are taken by the owner, and the requirement is moved to the next state
after the actions are completed.
Requirements traceability is the ability to describe and follow the life of a requirement
in both forward and backward directions, i.e., from its origins, through its development
and specification, to its subsequent deployment and use, and through periods of ongoing
refinement and iteration in any of these phases.


A traceability matrix finds two applications:


(i) identify and track the functional coverage of a test and
(ii) identify which test cases must be exercised or updated when a system evolves.
Submit State: A new requirement is put in the submit state to make it available to others. The
owner of this state is the submitter. A new requirement may come from different sources:
customer, marketing manager, and program manager.
A program manager oversees a software release from its inception to its completion and is
responsible for delivering it to the customer. A software release is a version of the software
providing new features. Usually, the requirements are generated by the customers and
marketing managers.
The following fields are filled out when a requirement is submitted:
requirement_id: A unique identifier associated with the requirement.
priority: A priority level of the requirement—high or normal.
title: A title for the requirement.
description: A short description of the requirement.
product: Name of the product in which the requirement is desired.
customer: Name of the customer who requested this requirement.

Open State: In this state, the marketing manager is in charge of the requirement and
coordinates the following activities.
 Reviews the requirement to find duplicate entries. The marketing manager can move the
duplicate requirement from the open state to the decline state with an explanation and a
pointer to the existing requirement. Also, he or she may ensure that there are no
ambiguities in the requirement and, if there is any ambiguity, consult with the submitter
and update the description and the note fields of the requirement.
 Reevaluates the priority of the requirement assigned by the submitter and either accepts it
or modifies it. Determines the severity of the requirement. There are two levels of
severity defined for each requirement: normal and critical.
 The marketing manager may decline a requirement in the open state and terminate the
development process, thereby moving the requirement to the decline state with a proper
explanation.
The following fields may be updated by the marketing manager, who is the owner of the
requirement in the open state:
priority: Reevaluate the priority—high or normal—of this requirement.
severity: Assign a severity level—normal or critical—to the requirement.
decline_note: Give an explanation of the requirement if declined.
software_release: Suggest a preferred software release for the requirement.
Review State: The director of software engineering is the owner of the requirement in the
review state. The software engineering director reviews the requirement to understand it and
estimate the time required to implement this. The director thus prepares a preliminary version
of the functional specification for this requirement. This scheme provides a framework to map
the requirement to the functional specification which is to be implemented.
The director of software engineering can move the requirement from the review state
to the assign state by changing the ownership to the marketing manager. Moreover, the
director may decline this requirement if it is not possible to implement. The following
fields may be updated by the director:
eng_comment: Comments generated during the review are noted in this field.

time_to_implement: This field holds the estimated time in person-weeks to implement
the requirement.
attachment: An analysis document, if there is any, including figures and descriptions
that are likely to be useful in the future development of functional specifications.
eng_assigned: Name of the engineer assigned by the director to review the
requirement.

Assign State: The marketing manager is the owner of the requirement in the assign state. A
marketing manager assigns the requirement to a particular software release and moves the
requirement to the commit state by changing the ownership to the program manager, who
owns that particular software release. The marketing manager may decline the requirement
and terminate the development process, thereby moving the requirement to the decline state.
The following fields are updated by the marketing manager:
decline_note and software_release.
The former holds an explanation for declining, if it is moved to the decline state. On
the other hand, if the requirement is moved to the commit state, the marketing manager
updates the latter field to specify the software release in which the requirement will be
available.

Commit State: The program manager is the owner of the requirement in the commit state.
The requirement stays in this state until it is committed to a software release. The program
manager reviews all the requirements that are suggested to be in a particular release
owned by him.
The requirement may be moved to the implement state by the program manager after
it is committed to a particular software release. The test engineers must complete the review
of the requirement and the relevant functional specification from a testability point of view.
Next, the test engineers can start designing and writing test cases for this requirement.
The only field to be updated by the program manager, who is the owner of the
requirement in the commit state, is committed_release. The field holds the release number
for this requirement.

Implement State: The director of software engineering is the owner of the requirement in the
implement state. This state implies that the software engineering group is currently coding
and unit testing the requirement.
The following fields may be updated by the director, since he or she is the owner of
a requirement in the implement state:
decline_note: An explanation of the reasons the requirement is moved to decline state.

Verification State: The test manager is the owner of the requirement in the verification state.
The test manager verifies the requirement and identifies one or more methods for assigning a
test verdict: (i) testing, (ii) inspection, (iii) analysis, and (iv) demonstration.
If testing is a method for verifying a requirement, then the test case identifiers and
their results are provided. This information is extracted from the test factory. Inspection
means review of the code. Analysis means mathematical and/or statistical analysis.
Demonstration means observing the system in a live operation. A verdict is assigned to the
requirement by providing the degree of compliance information: full compliance, partial
compliance, or noncompliance.

The test manager may move the requirement to the closed state after it has been
verified and the value of the verification_status field set to “passed.”


The following are some of the fields that are updated by the test manager, since he or
she is the owner of the requirement at the verification state:
decline_note: The reasons to decline this requirement.
verification_method: Can take one of the four values from the set {Testing, Analysis,
Demonstration, Inspection}.
verification_status: Can take one of the three values from the set {Passed, Failed,
Incomplete}, indicating the final verification status of the requirement.

Closed State: The requirement is moved to the closed state from the verification state by the
test manager after it is verified.

Decline State: In this state, the marketing department is the owner of the requirement. A
requirement comes to this state because of some of the following reasons:
• The marketing department rejected the requirement.
• It is technically not possible to implement this requirement and, possibly, there is
an associated EC ( Engineering Change) number.
• The test manager declines the implementation with an EC number.
The marketing group may move the requirement to the submit state after reviewing it
with the customer.
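The life cycle described above can be modeled as a small state machine. The sketch below encodes only the states and transitions mentioned in the text; owners and field updates are omitted, and the transition set is a simplification of the full diagram:

```python
# Simplified requirement life cycle as a state machine; transitions follow
# the states described in the text (decline paths abbreviated).
TRANSITIONS = {
    "submit": {"open"},
    "open": {"review", "decline"},
    "review": {"assign", "decline"},
    "assign": {"commit", "decline"},
    "commit": {"implement"},
    "implement": {"verification", "decline"},
    "verification": {"closed", "decline"},
    "decline": {"submit"},   # may be resubmitted after customer review
    "closed": set(),
}

class Requirement:
    def __init__(self, req_id):
        self.req_id = req_id
        self.state = "submit"

    def move(self, new_state):
        # Reject any transition not allowed by the life-cycle model.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

r = Requirement("REQ-1")
for s in ("open", "review", "assign", "commit", "implement",
          "verification", "closed"):
    r.move(s)
print(r.state)  # → closed
```

Encoding the transitions explicitly makes it easy to reject illegal moves, e.g., a requirement cannot jump from open directly to closed.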

3.3.1 Key Aspects Involved in Requirement Identification:


1. Requirement Gathering – Involves collecting information about the software from various sources
2. Requirement Analysis- Gathered information should be analyzed to ensure clarity
3. Requirement Documentation – The identified requirements should be documented to serve as a
reference to the testing team
4. Requirement Prioritization – Determine the relative importance of each requirement based on
factors such as business values, risk, customer expectation, etc
5. Requirement Traceability – Should be able to track and link the test cases back to the specific
requirements
6. Requirement Validation – Involves confirming that the identified requirements reflect the needs of
the stakeholders
7. Requirement Change Management – Requirements can change throughout the software development
life cycle due to evolving business needs, customer feedback, or other factors, so there should be
mechanisms to assess the impact of requirement changes and update the testing approach when
needed.

3.3.2 Techniques Used for Requirement Identification:


 Talking to the stakeholders
 Reviewing the Software Requirement Document
 Using Use cases
 Executing Exploratory testing

3.3.3 Benefits of Requirement Identification:


 Helps to ensure that the test cases are targeted at the correct areas of the software
 Helps to ensure that the test cases are effective in finding defects
 Helps to improve the efficiency of the testing process
 Helps to ensure that the software meets the requirements

3.4 TESTABLE REQUIREMENTS
3.4.1 : System level testable Requirements :
System-level tests are designed based on the requirements to be verified. Testable requirements
are requirements that can be tested to determine whether they have been met. A test engineer
analyzes the requirement, the relevant functional specifications, and the standards to determine
the testability of the requirement. This task is performed in the commit state. Testability
analysis means assessing the static behavioral characteristics of the requirement to reveal
test objectives.
One way to determine whether a requirement description is testable is as follows:
• Take the requirement description X: The system must perform X.
• Encapsulate the requirement description to create a test objective: Verify that the
system performs X correctly.
• Review this test objective by asking the question: Is it workable? In other words, find out
whether it is possible to execute it, assuming that the system and the test environment are
available.
• If the answer to the above question is yes, then the requirement description is clear and
detailed enough for testing purposes. Otherwise, more work needs to be done to revise or
supplement the requirement description.
As an example, let us consider the following requirement: The software image must be
easy to upgrade/downgrade as the network grows. This requirement is too broad and vague to
determine the objective of a test case. In other words, it is a poorly crafted requirement. One
can restate the previous requirement as: The software image must be easy to
upgrade/downgrade for 100 network elements. Then one can easily create a test objective:
Verify that the software image can be upgraded/downgraded for 100 network elements. It
takes time, clear thinking, and courage to change things.
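The example above hinges on spotting vague wording ("easy", "as the network grows") and replacing it with measurable quantities. A crude automated aid is to flag common vague terms in requirement text; the word list below is an illustrative assumption, not a standard checklist:

```python
# Heuristic testability lint: flag vague words that usually make a
# requirement untestable. The word list is illustrative only.
VAGUE_WORDS = {"easy", "fast", "user-friendly", "efficient",
               "flexible", "robust", "as needed"}

def vague_terms(requirement):
    """Return the vague words found in a requirement description."""
    text = requirement.lower()
    return sorted(w for w in VAGUE_WORDS if w in text)

bad = "The software image must be easy to upgrade/downgrade as the network grows."
good = "The software image must be upgradable/downgradable for 100 network elements."
print(vague_terms(bad))   # → ['easy']
print(vague_terms(good))  # → []
```

Such a lint cannot prove a requirement testable, but an empty result is a useful precondition before the "is it workable?" review described above.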
In addition to the testability of the requirements, the following items must be
analyzed by the system test engineers during the review:
• Safety: Have the safety-critical requirements been identified? The safety-critical
requirements specify what the system shall not do, including means for eliminating and
controlling hazards and for limiting any damage in the case that a mishap occurs.
• Security: Have the security requirements, such as confidentiality, integrity, and
availability, been identified?
• Completeness: Have all the essential items been completed? Have all possible
situations been addressed by the requirements? Have all the irrelevant items been omitted?
• Correctness: Are the requirements understandable and have they been stated
without error? Are there any incorrect items?
• Consistency: Are there any conflicting requirements?
• Clarity: Are the requirement materials and the statements in the document clear,
useful, and relevant? Are the diagrams, graphs, and illustrations clear? Have those been
expressed using proper notation to be effective? Do those appear in proper places?
• Relevance: Are the requirements pertinent to the subject?
• Feasibility: Are the requirements implementable?
• Verifiable: Can tests be written to demonstrate conclusively and objectively that the
requirements have been met?
• Traceable: Can each requirement be traced to the functions and data related to it so
that changes in a requirement can lead to easy reevaluation?

3.4.2 Functional Specification
A functional specification provides:
i. A precise description of the major functions the system must fulfill to meet the
requirements, a description of the implementation of the functions, and an explanation
of the technological risks involved
ii. External interfaces with other software modules
iii. Data flow, such as flowcharts, transaction sequence diagrams, etc., describing
the sequence of activities
iv. Fault handling, memory utilization, and performance estimates
The functional specification must be reviewed from the point of view of testability.
Common problems with functional specifications include lack of clarity, ambiguity, and
inconsistency.

The following objectives are kept in mind while reviewing a functional specification:
• Correctness: Whenever possible, the specification parts should be compared directly
to an external reference for correctness.
• Extensibility: The specification is designed to easily accommodate future extensions
that can be clearly envisioned at the time of review.
• Comprehensibility: The specification must be easily comprehensible. If, by the end of
the review process, the reviewers do not understand how the system works, the
specification or its documentation is likely to be flawed. Such specifications and
documentation need to be reworked to make them more comprehensible.
• Necessity: Each item in the document should be necessary.
• Sufficiency: The specification should be examined for missing or incomplete items.
All functions must be described, as well as important properties of input and output
data such as volume and magnitude.
• Implementability: It is desirable to have a functional specification that is
implementable within the resource constraints available in the target
environment, such as hardware, processing power, memory, and network bandwidth.
• Efficiency: The functional specification must optimize those parts of the solution that
contribute most to the performance of the system.
• Simplicity: In general, it is easier to achieve and verify requirements stated in the
form of simple functional specifications.
• Reusable Components: The specification should reuse existing components as
much as possible and be modular enough that common components can be
extracted and reused.
• Limitations: The limitations should be realistic and consistent with the requirements.

Benefits of testable requirements:


 Effective testing
 Early defect detection
 Improved Communication

Steps to ensure requirements are testable:


 Clear and concise writing
 Use of examples
 Review by stakeholders

3.5 MODELING A TEST DESIGN PROCESS
Test design is the process of creating a strategic plan of test cases, scenarios, and conditions to verify that software or systems meet their performance and reliability goals. It aims to ensure that test cases effectively uncover software defects and that the system behaves as expected under various conditions. Test objectives are identified from a requirement specification, and one test case

Figure : State transition diagram of a test case.


is created for each test objective. Each test case is designed as a combination of modular
components called test steps. Test cases are clearly specified so that testers can quickly
understand, borrow, and reuse the test cases.
The figure above illustrates the life-cycle model of a test case in the form of a state transition diagram. The model shows the different phases, or states, in the life cycle of a test case from its inception to its completion through the following states: create, draft, review, deleted, released, update, and deprecated. Certain actions are taken by the “owner” of the state, and the test case moves to the next state after the actions are completed.
One can easily implement a database of test cases using the test case schema shown
in Table below. We refer to such a database of test cases as a test factory.

Table -Test Case Schema Summary


Create State A test case is put in this initial state by its creator, called the owner, who initiates the
design of the test case. The creator initializes the following mandatory fields associated with the
test case such as requirement_ids, tc_id, tc_title, originator_group, creator, and test_category. The

test case is expected to verify the requirements referred to in the requirement_ids fields. The
originator_group is the group that found a need for the test. The creator may assign the test case to
a specific test engineer, including himself, by filling out the eng_assigned field, and move the test
case from the create to the draft state.
Draft State The owner of this state is the test group, that is, the system test team. In this state, the
assigned test engineer enters the following information: tc_author, objective, setup, test_steps,
cleanup, candidate_for_automation, automation_priority. After completion of all the mandatory
fields, the test engineer may reassign the test case to the creator to go through the test case. The test
case stays in this state until it is walked through by the creator. After that, the creator may move the
state from the draft state to the review state by entering all the approvers’ names in the
approver_names field.
Review and Deleted States The owner of the review state is the creator of the test case. The
owner invites test engineers and developers to review and validate the test case. They ensure
that the test case is executable, and the pass–fail criteria are clearly specified.
Action items are created for the test case if any field needs a modification. Action
items from a review meeting are entered in the review_actions field, and the action items are
executed by the owner to effect changes to the test case.
The test case moves to the released state after all the reviewers approve the changes. If
the reviewers decide that this is not a valid test case or it is not executable, then the test case is
moved to the deleted state. For a test case to be deleted, a review action item must explicitly call for its deletion.
Released and Update States A test case in the released state is ready for execution, and it
becomes a part of a test suite. On the other hand, a test case in the update state implies that it is in
the process of being modified to enhance its reusability, being fine-tuned with respect to its pass–fail
criteria, and/or having the detailed test procedure fixed. For example, a reusable test case should be
parameterized rather than hard coded with data values.
Moreover, a test case should be updated to adapt it to changes in system functionality
or the environment.
Moving a test case through the released–update loop a small number of times improves its repeatability, so that others can quickly understand, borrow, and reuse it.
Also, this provides the foundation and justification for the test case to be automated. A
test case should be platform independent. If an update involves a small change, the test
engineer may move the test case back to the released state after the fix. Otherwise, the test
case is subject to a further review, which is achieved by moving it to the review state. A test
case may be revised each time it is executed.
Deprecated State An obsolete test case may be moved to a deprecated state. Ideally, if it has
not been executed for a year, then the test case should be reviewed for its continued
existence.
A test case may become obsolete over time because of the following reasons.
 First, the functionality of the system being tested may have changed substantially, and due to a lack
of test case maintenance, a test case becomes obsolete.
 Second, as an old test case is updated, some of the requirements of the original test case
may no longer be fulfilled.
 Third, reusability of test cases tends to degrade over time as the situation changes. This
is especially true of test cases which are not designed with adequate attention to
possible reuse.
 Finally, test cases may be carried forward carelessly long after their original
justifications have disappeared. Nobody may know the original justification for a
particular test case,
so it continues to be used.
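The allowed transitions described above can be captured as a small state machine. The sketch below is only an illustration: the state names follow the text, but the class, its fields, and the transition table are assumptions, not part of the original schema:

```python
# Life-cycle state machine for a test case, following the states in the text:
# create -> draft -> review -> {released, deleted}, released <-> update,
# update -> review, released -> deprecated.  The table itself is an assumption.
ALLOWED_TRANSITIONS = {
    "create": {"draft"},
    "draft": {"review"},
    "review": {"released", "deleted"},
    "released": {"update", "deprecated"},
    "update": {"released", "review"},
    "deleted": set(),      # terminal state
    "deprecated": set(),   # terminal state
}

class TestCase:
    def __init__(self, tc_id):
        self.tc_id = tc_id
        self.state = "create"  # initial state, set by the creator

    def move_to(self, new_state):
        # Reject any transition not present in the life-cycle model.
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

tc = TestCase("TC-001")
tc.move_to("draft")
tc.move_to("review")
tc.move_to("released")  # the test case is now ready for execution
```

Encoding the transitions in a table makes it easy for a test factory database to reject illegal state changes, such as releasing a test case that was never reviewed.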


Benefits of Modeling a test design process


 Improved Communication
 Improved Efficiency
 Improved Effectiveness

Challenges of modeling a test design process


 Complexity: The process can be so complex that it is difficult to create a model
 Time: Creating a model takes time
 Cost: Creating a model can be expensive

3.6 MODELING TEST RESULTS


Test engineers execute test cases from a selected test suite using different test methods. The
results of executing those test cases are recorded in the test factory database for gathering and
analyzing test metrics. After a test factory is created, a test suite schema can be used by a test
manager to design a test suite. A test suite schema, as shown in the table below, is used to group
test cases for testing a particular release.

Table - Test Suite Schema Summary

The schema requires a test suite ID, a title, an objective, and a list of test cases to be managed
by the test suite. One also identifies the individual test cases to be executed (test cycles 1, 2, 3
and/or regression) and the requirements that the test cases satisfy.
The idea here is to gather a selected number of released test cases and repackage them
to form a test suite for a new project.
In a large, complex system with many defects, there are several possibilities of the
result of a test execution, not merely passed or failed. Therefore, we model the results of test
execution by using a state transition diagram as shown in Figure below, and the
corresponding schema is given in Table following the figure.


Figure : State transition diagram of test case result.


The figure above illustrates a state diagram of a test case result, starting from the untested
state and moving to one of four different states: passed, failed, blocked, and invalid.

Table : Test Result Schema Summary

 The execution status of a test case is put in its initial state of untested after designing or
selecting a test case.
 If the test case is not valid for the current software release, the test case result is moved to
the invalid state.
 In the untested state, the test suite identifier is noted in a field called test_suite_id. The
state of the test result, after execution of a test case is started, may change to one of the
following states:
passed, failed, invalid, or blocked.
 A test engineer may move the test case result to the passed state from the untested state
if the test case execution is complete and satisfies the pass criteria.
 If the test execution is complete and satisfies the fail criteria, a test engineer moves the
test result to the failed state from the untested state and associates the defect with the test
case by initializing the defect_ids field.

 The test case must be reexecuted when a new build containing a fix for the defect is
received. If the reexecution is complete and satisfies the pass criteria, the test result is
moved to the passed state.
 The test case result is moved to a blocked state if it is not possible to completely execute
it. If known, the defect number that blocks the execution of the test case is recorded in the
defect_ids field. The test case may be reexecuted when a new build addressing a blocked
test case is received.
 If the execution is complete and satisfies the pass criteria, the test result is moved to the
passed state. On the other hand, if it satisfies the fail criteria, the test result is moved to
the failed state. If the execution is unsuccessful due to a new blocking defect, the test
result remains in the blocked state and the new defect that blocked the test case is listed
in the defect_ids field.
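The result life cycle above can likewise be sketched as a transition table. The states follow the diagram in the text, while the function and field names are illustrative assumptions:

```python
# Transitions of a test case result, per the state diagram in the text:
# untested -> {passed, failed, blocked, invalid}; failed and blocked results
# are re-executed when a new build arrives.  The table is an assumption.
RESULT_TRANSITIONS = {
    "untested": {"passed", "failed", "blocked", "invalid"},
    "failed": {"passed", "failed", "blocked"},   # re-execution on a fixed build
    "blocked": {"passed", "failed", "blocked"},  # re-execution once unblocked
    "passed": set(),
    "invalid": set(),
}

def record_result(current, new, defect_ids=None):
    """Return the new result record, attaching defect ids for failed/blocked."""
    if new not in RESULT_TRANSITIONS[current]:
        raise ValueError(f"illegal result transition {current} -> {new}")
    return {"state": new, "defect_ids": defect_ids or []}

r = record_result("untested", "failed", defect_ids=["DEF-42"])
r2 = record_result("failed", "passed")  # fix verified in a new build
```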
 The benefits of modeling test results are: Improved Analysis, Improved Decision Making, and Improved Communication
 Challenges in modeling test results:
• Complexity: It can be difficult to create a model if the results are complex
• Time: It can take time to create a model of the test results
• Cost: It can be expensive to create a model
 Common metrics used to model the test results:
• Number of defects found: the count of defects found during testing
• Severity of defects: defects can be classified as critical, major, or minor
• Time to find defects: can be used to identify areas that are difficult to test
• Coverage: the percentage of the software that was tested

11.1 BOUNDARY VALUE TESTING


Boundary Value Testing is a popular software testing technique in which test data are
chosen at boundary values, that is, at the two opposite ends of an input range, such as start to
end, lower to upper, or minimum to maximum. The process selects boundary values derived
from the inputs at the different ends of the range of test values. This black-box testing strategy
is applied after equivalence class partitioning: the input is first partitioned into classes, and the
boundaries of those partitions are then tested.
Example:
Let us assume a test case that takes the value of age from 21 to 65.

BOUNDARY VALUE TEST CASES

Invalid Test Case (Min Value - 1): 20
Valid Test Cases (Min, Min + 1, Max, Max - 1): 21, 22, 65, 64
Invalid Test Case (Max Value + 1): 66


Test Case Scenarios


1. Input: Enter the value of age as: 20 (ie., 21-1) Output: Invalid
2. Input: Enter the value of age as 21 Output: Valid
3. Input: Enter the value of age as 22 (ie., 21+1) Output: Valid
4. Input: Enter the value of age as 65 Output: Valid
5. Input: Enter the value of age as 64 (ie., 65-1) Output: Valid
6. Input: Enter the value of age as 66 (65+1) Output: Invalid
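The six scenarios above can be captured directly as a boundary-value check. The validator below is a hypothetical sketch for the 21–65 age range used in the example:

```python
def is_valid_age(age):
    """Accept ages in the inclusive range 21..65, as in the example above."""
    return 21 <= age <= 65

# Boundary-value test data: min-1, min, min+1, max-1, max, max+1
cases = {20: False, 21: True, 22: True, 64: True, 65: True, 66: False}
for value, expected in cases.items():
    assert is_valid_age(value) == expected
```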

Importance:
 When a huge number of test cases is available for testing purposes, checking each of
them individually is impractical; this technique is then of great use.
 The test data are analyzed at the boundaries of the partitioned data, after equivalence
class partitioning has been performed.
 This is a black-box testing process that covers both valid and invalid test case scenarios
and helps find the boundary values at the extreme ends without discarding test data
valuable for testing purposes.
 It is also useful where many calculations over variable inputs are required, and it applies
to a wide variety of applications.
 The technique helps detect errors or faults at the boundaries of a partition, which is a
plus point because most errors occur at the boundaries, and they can be found before the
application is delivered to clients.

The following are the key steps involved in performing Boundary value testing:
 Identify the boundaries
 Identify the valid and invalid boundaries
 Select test cases
 Execute the test cases
 Analyze the results

 Guidelines for BVA:


o If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b and just above and just below a and b.
o If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and below minimum
and maximum are also tested.
o If internal program data structures have prescribed boundaries (e.g., an array has a
defined limit of 100 entries), be certain to design a test case to exercise the data structure
at its boundary
o If the input is a Boolean value (T/F) , test cases are designed to test both values.

Advantages of Boundary Value Analysis:


 Effective defect identification: BVA focuses on the edges or boundaries of input domains,
making it effective at identifying issues related to these critical points.
 Increased test coverage: It provides comprehensive test coverage for values near the
boundaries, which are often more likely to cause errors.
 Simplicity: BVA is simple to understand and implement, making it suitable for both
experienced and inexperienced testers.
 Early defect identification: It can detect defects in the early stages of development,
lowering the cost of later problem resolution.

Disadvantages of boundary value analysis:
 Limited scope: BVA addresses only boundary-related defects and may miss issues that
occur elsewhere within the input domain.
 Combinatorial explosion: BVA can result in a large number of test cases for systems with
multiple inputs, increasing the testing effort.
 Time consuming: It can be time consuming, especially when dealing with complex input
ranges or multiple boundary conditions.
 Incomplete coverage: While effective in many cases, BVA may not cover all possible
scenarios or corner cases.

11.2 EQUIVALENCE CLASS TESTING


Equivalence Partitioning or Equivalence Class Partitioning is a type of black-box testing
technique which can be applied to all levels of software testing, such as unit, integration, and
system testing. In this technique, input data units are divided into equivalent partitions from
which test cases can be derived; this reduces the time required for testing because of the small
number of test cases.
 It divides the input data of software into different equivalence data classes.
 We can apply this technique wherever there is a range in the input field.
Example:
Let us consider the behavior of the Order Pizza text box below.

Pizza values 1 to 10 are considered valid. A success message is shown.


Values 11 to 99 are considered invalid for an order, and an error message will appear:
“Only 10 Pizza can be ordered”.
Here is the test condition:
1. Any number greater than 10 entered in the Order Pizza field (say, 11) is
considered invalid.
2. Any number less than 1, that is, 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any negative three-digit number, say -100, is invalid.
We cannot test all the possible values because, if we did, the number of test cases would be
more than 100. To address this problem, we use the equivalence partitioning hypothesis: we
divide the possible input values into groups or sets, as shown below, within which the system
behavior can be considered the same.

The divided sets are called Equivalence Partitions or Equivalence Classes. Then we
pick only one value from each partition for testing. The hypothesis behind this technique
is that if one condition/value in a partition passes all others will also pass. Likewise, if
one condition in a partition fails, all other conditions in that partition will fail.
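Under this hypothesis, one representative value per partition is enough. The sketch below assumes a hypothetical `order_pizza` function returning the messages from the example; the rejection message for low values is an assumption, since the example only specifies it for 11–99:

```python
def order_pizza(quantity):
    """Valid orders are 1..10 pizzas, as in the example above."""
    if 1 <= quantity <= 10:
        return "Success"
    return "Only 10 Pizza can be ordered"

# One representative value per equivalence class:
#   invalid low (<= 0), valid (1..10), invalid high (>= 11)
representatives = {
    -100: "Only 10 Pizza can be ordered",
    5: "Success",
    50: "Only 10 Pizza can be ordered",
}
for value, expected in representatives.items():
    assert order_pizza(value) == expected
```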


Guidelines for identifying the Equivalence classes :


 Valid Equivalence Classes: Represents the inputs that are valid and expected to produce
the same behaviour
 Invalid Equivalence Classes : Represents the inputs that are invalid and outside the
expected range
 Special Equivalence Classes: Represents special or extreme conditions.

Advantages :
 It is process-oriented
 It helps to decrease the general test execution time
 Reduce the set of test data.
Disadvantages :
 It may not cover all necessary inputs
 This technique does not consider boundary value analysis conditions
 The test engineer might assume that the output for every value in a data set is correct,
which can lead to problems during the testing process

Difference between Equivalence Partitioning and Boundary Value Analysis

Equivalence Partitioning:
 Divides the input domain into groups or partitions, where each group is expected to
behave in a similar way.
 Suitable for inputs with a wide range of valid values, where values within a partition are
expected to have similar behavior.
 Typically, one test case is selected from each equivalence class or partition.
 Provides broad coverage across input domains, ensuring that different types of inputs
are tested.

Boundary Value Analysis:
 Focuses on testing values at the edges or boundaries of the input domain.
 Effective when values near the boundaries of the input domain are more likely to cause
issues.
 Multiple test cases are created to test values at the boundaries, including just below, on,
and just above the boundaries.
 Focuses on testing edge cases and situations where errors often occur.

11.3 PATH TESTING


Path Testing is a method used to design test cases. It is a structural testing method that uses
the source code of a program to find every possible executable path. It helps to determine all
faults lying within a piece of code. The method is designed to execute all or selected paths
through a computer program.
Any software program includes multiple entry and exit points. Testing each of these
points is challenging as well as time-consuming. Path testing is used to reduce redundant tests
and to achieve maximum test coverage.

Path Testing Process:
In the path testing method, the control flow graph of a program is designed to find a
set of linearly independent paths of execution. In this method, Cyclomatic Complexity is used
to determine the number of linearly independent paths and then test cases are generated for
each path.

1. Control Flow Graph:


Draw the corresponding control flow graph of the program in which all the executable
paths are to be discovered.
2. Cyclomatic Complexity:
After the generation of the control flow graph, calculate the cyclomatic complexity of the
program using the following formula.
McCabe's Cyclomatic Complexity = E - N + 2P
Where, E = Number of edges in the control flow graph
N = Number of vertices in the control flow graph
P = Number of connected components
3. Make Set:
Make a set of all the paths according to the control flow graph and calculate cyclomatic
complexity. The cardinality of the set is equal to the calculated cyclomatic complexity.
4. Create Test Cases:
Create a test case for each path of the set obtained in the above step.
Here we will take a simple example to get a better idea of what basis path testing includes.

Cyclomatic Complexity = E - N + 2P
                      = 8 - 7 + 2(1) = 3
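The calculation above can be expressed as a one-line helper; the function name is an assumption:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's V(G) = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# For the example graph in this section: E = 8, N = 7, P = 1
assert cyclomatic_complexity(8, 7) == 3  # 3 linearly independent paths
```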
In the above example, we can see a few conditional statements that execute depending on
which condition is satisfied. Here there are 3 paths, or conditions, that need to be tested to get
the output:
Path 1: 1,2,3,5,6, 7
Path 2: 1,2,4,5,6, 7
Path 3: 1, 6, 7
Generation of Test Cases:
After the identification of independent paths, we may generate test cases that traverse
all independent paths at the time of executing the program. This process will ensure that each
transition of the control flow diagram is traversed at least once.
Test Case   A    B    C     Path
1           50   55   52    A=B, PRINT 55
2           50   55   60    A=C, PRINT 60
3           40   ANY  ANY   PRINT 40
Path Testing Techniques
 Control Flow Graph: The program is converted into a control flow graph by representing
the code into nodes and edges.
 Decision to Decision path: The control flow graph can be broken into various Decision
to Decision paths and then collapsed into individual nodes.
 Independent paths: An Independent path is a path through a Decision to Decision path
graph that cannot be reproduced from other paths by other methods.

Advantages of path testing:


 The path testing method reduces the redundant tests.
 Path testing focuses on the logic of the programs.
 Path testing is used in test case design.
Disadvantages of Path Testing
1. A tester needs a good understanding of programming to execute the tests.
2. The test case increases when the code complexity is increased.
3. It will be difficult to create a test path if the application has a high complexity of code.

11.4 DATA FLOW TESTING


Data Flow Testing is a type of structural testing. It is a method that is used to find the test
paths of a program according to the locations of definitions and uses of variables in the
program. Furthermore, it is concerned with:
 Statements where variables receive values,
 Statements where these values are used or referenced.
Define/Reference Anomalies:
Define/reference anomalies in the flow of data are detected by examining the
associations between values and variables. These anomalies are:
 A variable is defined but not used or referenced,
 A variable is used but never defined,
 A variable is defined twice before it is used
Definitions:
To illustrate the approach of data flow testing, assume that each statement in the program
is assigned a unique statement number. For a statement number ‘n’:
DEF(v,n) = statement ‘n’ contains the definition of variable ‘v’
USE(v,n) = statement ‘n’ contains the use of variable ‘v’
du-path = (definition-use path) for a variable ‘v’ is a path between two nodes ‘m’ and
‘n’, where ‘m’ is the initial node in the path and the defining node for variable
‘v’ (denoted DEF(v, m)), and ‘n’ is the final node in the path and a usage
node for variable ‘v’ (denoted USE(v, n))
dc-path = (definition-clear path) for a variable ‘v’ is a definition-use path with initial
and final nodes DEF(v, m) and USE(v, n) such that no other node in the
path is a defining node of variable ‘v’
The du-paths and dc-paths describe the flow of data across program statements from
statements where values are defined to statements where the values are used.

 A du-path for a variable ‘v’ may have many redefinitions of variable ‘v’ between the
initial node (DEF(v, m)) and the final node (USE(v, n)).
 A dc-path for a variable ‘v’ will not have any definition of variable ‘v’ between the
initial node (DEF(v, m)) and the final node (USE(v, n)).
 The du-paths that are not definition-clear paths are potentially troublesome
paths. They should be identified and tested at topmost priority.
Identification of du and dc Paths
The various steps for the identification of du and dc paths are given as:
(i) Draw the program graph of the program.
(ii) Find all variables of the program and prepare a table for define / use status of
all variables using the following format:

(iii) Generate all du-paths from define/use variable table of step (ii) using the following format:

(iv) Identify those du-paths which are not dc-paths. The following testing strategies are used for this.
Testing Strategies Using du-Paths:
We want to generate test cases which trace every definition to each of its uses and
every use back to each of its definitions. Some of the testing strategies are given as:
a. Test all du-paths:
All du-paths generated for all variables are tested. This is the strongest data flow
testing strategy covering all possible du-paths.
b. Test all uses
Find at least one path from every definition of every variable to every use of that
variable which can be reached by that definition.
For every use of a variable, there is a path from the definition of that variable to the
use of that variable.
c. Test all definitions
Find paths from every definition of every variable to at least one use of that variable;
we may choose any strategy for testing.
As we go from ‘test all du-paths’ (a) to ‘test all definitions’ (c), the number of paths is
reduced. However, it is best to test all du-paths (a) and give priority to those du-paths which
are not definition-clear paths. The first strategy requires that each definition reach all possible
uses through all possible du-paths, the second requires that each definition reach all possible
uses, and the third requires that each definition reach at least one use.
Generation of Test Cases:
After finding paths, test cases are generated by giving values to the input parameter.
We get different test suites for each variable.
Example: Let us consider the program as follows:
1. read x, y;
2. if (x > y)
3.     a = x + 1
   else
4.     a = y - 1
5. print a;
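The statements can be written as a small Python function, with test values drawn from the du-path test case tables of this example:

```python
def compute(x, y):
    # statement 1: read x, y   (x and y defined)
    if x > y:       # statement 2: use of x and y
        a = x + 1   # statement 3: definition of a (uses x)
    else:
        a = y - 1   # statement 4: definition of a (uses y)
    return a        # statement 5: print a (use of a)

# Test cases from the data flow path tables: (x, y) -> expected a
assert compute(20, 10) == 21  # exercises path 1-2-3-5
assert compute(15, 25) == 24  # exercises path 1-2-4-5
```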

Control flow graph (program graph) of the above example:
Define/use of variables of the above example:

Variable Defined at node Used at node


x 1 2, 3
y 1 2, 4
a 3, 4 5

The du-paths with beginning node and end node are given as:
Variable   du-path (begin, end)
x          1, 2, 3
           1, 2
y          1, 2, 4
           1, 2
a          3, 5
           4, 5

The first strategy (best) is to test all du-paths, the second is to test all uses and the third is to test
all definitions. The du-paths as per these three strategies are given as:

Test all du-paths:

Testing Strategy                     Paths     Definition Clear?
All du-paths and all-uses paths      1,2,3     Yes
(both are the same in this           1,2       Yes
example)                             1,2,4     Yes
(6 paths)                            1,2       Yes
                                     3,5       Yes
                                     4,5       Yes
All definitions (2 paths)            1-2-3-5   Yes
                                     1-2-4-5   Yes

Test cases for data flow paths are given below:

S.No.   x    y    a    Expected Output   Remarks
1       20   10   21   21                1,2,3
2       20   10   21   21                1,2
3       15   25   24   24                1,2,4
4       15   25   24   24                1,2
5       20   10   21   21                3,5
6       15   25   24   24                4,5

Test cases for all definitions:

S.No.   x    y    a    Expected Output   Remarks
1       20   10   21   21                1-2-3-5
2       15   25   24   24                1-2-4-5

Advantages of Data Flow Testing:


Data Flow Testing is used to find the following issues-
 To find a variable that is used but never defined
 To find a variable that is defined but never used
 To find a variable that is defined multiple times before it is used

Disadvantages of Data Flow Testing


 Time consuming and costly process
 Requires knowledge of programming languages

11.5 TEST DESIGN PREPAREDNESS METRICS


Management may be interested to know the progress, coverage, and productivity
aspects of the test case preparation work being done by a team of test engineers. Hence
metrics are used.
Test Metrics are used by the management and others involved in a software project to
(i) know if a test project is progressing according to schedule and if
more resources are required and
(ii) plan their next project more accurately.

The following metrics can be used to represent the level of preparedness of test design.
Preparation Status of Test Cases (PST): A test case can go through a number of phases, or
states, such as draft and review, before it is released as a valid and useful test case. Thus, it is
useful to periodically monitor the progress of test design by counting the test cases lying in
different states of design—create, draft, review, released, and deleted. It is expected that all the
planned test cases that are created for a particular project eventually move to the released state
before the start of test execution.

Average Time Spent (ATS) in Test Case Design: It is useful to know the amount of time it
takes for a test case to move from its initial conception, that is, create state, to when it is
considered to be usable, that is, released state. This metric is useful in allocating time to the
test preparation activity in a subsequent test project. Hence, it is useful in test planning.
Number of Available Test (NAT) Cases: This is the number of test cases in the released
state from existing projects. Some of these test cases are selected for regression testing in the
current test project.
Number of Planned Test (NPT) Cases: This is the number of test cases that are in a test
suite and ready for execution at the start of system testing. This metric is useful in scheduling
test execution. As testing continues, new, unplanned test cases may be required to be
designed. A large number of new test cases compared to NPT suggests that initial planning was
not accurate.
Coverage of a Test Suite (CTS): This metric gives the fraction of all requirements covered
by a selected number of test cases or a complete test suite. The CTS is a measure of the
number of test cases needed to be selected or designed to have good coverage
of system requirements.
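A minimal sketch of computing CTS, assuming each test case record carries a `requirement_ids` list as in the test case schema described earlier; the function name is an assumption:

```python
def coverage_of_test_suite(requirements, test_cases):
    """CTS: fraction of requirements covered by at least one test case."""
    covered = set()
    for tc in test_cases:
        covered.update(tc["requirement_ids"])
    # Count only requirements that actually belong to the project.
    return len(covered & set(requirements)) / len(requirements)

suite = [{"requirement_ids": ["R1", "R2"]},
         {"requirement_ids": ["R2", "R3"]}]
assert coverage_of_test_suite(["R1", "R2", "R3", "R4"], suite) == 0.75
```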


11.6 TEST CASE DESIGN EFFECTIVENESS


The objectives of the test case design effectiveness metric are to
(i) measure the “defect revealing ability” of the test suite and
(ii) use the metric to improve the test design process.
During system-level testing, defects are revealed due to the execution of planned test
cases. In addition to these defects, new defects are found during testing for which no test
cases had been planned. For these new defects, new test cases are designed; these are called
test case escapes (TCE).
Test case escapes occur because of deficiencies in the test design process. This
happens because the test engineers get new ideas while executing the planned test cases.
A metric commonly used in the industry to measure test case design effectiveness is
the test case design yield (TCDY), defined as

TCDY = NPT / (NPT + TCE) × 100%

where NPT = Number of Planned Test cases
      TCE = Test Case Escapes
The TCDY is also used to measure the effectiveness of a particular testing phase. For
example, the system integration manager may want to know the TCDY value for his or her
system integration testing.
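The TCDY computation, with NPT planned test cases and TCE test case escapes, can be sketched as follows (the sample numbers are purely illustrative):

```python
def tcdy(planned, escaped):
    """Test case design yield: NPT / (NPT + TCE), as a percentage."""
    return 100.0 * planned / (planned + escaped)

# e.g. 90 planned test cases and 10 test case escapes give a 90% yield
assert tcdy(90, 10) == 90.0
```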

Factors that contribute to the effectiveness of test case design:


1. Test coverage: Effective test design should ensure comprehensive coverage of the system’s
requirements, functionalities, and critical paths
2. Test case relevance: Test cases should be relevant to the system being tested
3. Clear objectives: Each test case should have a specific purpose
4. Test case independence: Test cases should be designed so that they are independent of each other
5. Test data accuracy: Test cases should have accurate and valid test data
6. Reproducibility: Effective test cases should be reproducible
7. Test case efficiency: Test cases should be effective in terms of detecting defects

Some of the other metrics for assessing test case design:


 Defect Detection Rate: Measures the percentage of test cases that successfully identify
defects
 Test Case Effectiveness Ratio: Compares the number of test cases that detect defects to
the total number of executed test cases
 Code Coverage: Measures the percentage of code covered by the executed test cases

Best Practices for test case design :


 Start by understanding the requirements of the software application
 Use a variety of test case design techniques
 Prioritize the test cases
 Trace the test cases back to the requirements
 Review the test cases with the development team
 Execute the test cases and track the results


11.7 MODEL DRIVEN TEST DESIGN


Model Driven Test Design (MDTD) is built on the idea that designs become more effective
and efficient when designers raise the level of abstraction. This approach breaks testing down
into a series of small tasks that simplify test generation. Test designers then isolate their tasks
and work at a higher level of abstraction, using mathematical engineering structures to design
test values independently of the details of the software or design artifacts, test automation,
and test execution.

Figure : Model-driven test design.


The model driven test design process is illustrated in Figure above, which shows test
design activities above the line and other test activities below.
 The starting point is a software artifact. This could be program source, a UML diagram,
natural language requirements, or even a user manual.
 A criteria-based test designer uses that artifact to create an abstract model of the
software in the form of an input domain, a graph, logic expressions, or a syntax
description.
o Criteria-based test designers design test values to satisfy coverage criteria
o They require knowledge of discrete math, programming, and testing
o This role typically requires a traditional Computer Science degree
 Then a coverage criterion is applied to create test requirements.
o Coverage criteria give structured, practical ways to search the input space
o Testers search a huge input space to find the fewest inputs that will reveal
the most problems
 A human-based test designer uses the artifact to consider likely problems in the
software, then creates requirements to test for those problems.
o Human-based test designers design test values based on domain knowledge of
the program
o They apply human knowledge of testing
o The designer must have knowledge of the user interface
o This role requires little or no traditional CS background
 These requirements are sometimes refined into a more specific form, called the test
specification. For example, if edge coverage is being used, a test requirement specifies
which edge in a graph must be covered. A refined test specification would be a complete
path through the graph.
 Once the test requirements are refined, input values that satisfy the requirements must be
defined. This brings the process down from the design abstraction level to the
implementation abstraction level. These are analogous to the abstract and concrete tests in

the model-based testing literature. The input values are augmented with other values needed
to run the tests (including values to reach the point in the software being tested, to display
output, and to terminate the program).
 The test cases are then automated into test scripts (when feasible and practical), run on
the software to produce results, and results are evaluated. It is important that results
from automation and execution be used to feed back into test design, resulting in
additional or modified tests.
 This process has two major benefits:
o First, it provides a clean separation of tasks between test design, automation,
execution and evaluation.
o Second, raising our abstraction level makes test design much easier. Instead of
designing tests for a messy implementation or complicated design model, we design
at an elegant mathematical level of abstraction. This is exactly how algebra and
calculus have been used in traditional engineering for decades.

Figure - Example method, CFG, test requirements and test paths.


The Figure illustrates this process for unit testing of a small Java method. The Java
source is shown on the left, and its control flow graph is in the middle. This is a standard
control flow graph with the initial node marked as a dotted circle and the final nodes marked
as double circles.
The first step in the MDTD process is to take this software artifact, the indexOf()
method, and model it as an abstract structure. The control flow graph from Figure 3.6 is
turned into an abstract version. This graph can be represented textually as a list of edges,
initial nodes, and final nodes, as shown in Figure above under Edges. If the tester uses
edge-pair coverage, six requirements are derived. For example, test requirement #3, [2, 3, 2],
means the subpath from node 2 to 3 and back to 2 must be executed. The Test Paths box
shows three complete test paths through the graph that will cover all six test requirements.
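The derivation of edge-pair requirements and a coverage check for candidate test paths can be sketched in a few lines. The graph below is a small illustrative one, not the exact indexOf() graph from the figure:

```python
# Edges of a small control flow graph, given as (from, to) pairs.
# Node numbers are illustrative, not taken from the figure.
edges = [(1, 2), (2, 3), (3, 2), (2, 4)]

# Edge-pair requirements: every subpath [a, b, c] formed by two adjacent edges.
requirements = [(a, b, d) for (a, b) in edges for (c, d) in edges if b == c]

def covers(path, req):
    """True if the subpath req appears contiguously in the test path."""
    n = len(req)
    return any(tuple(path[i:i + n]) == req for i in range(len(path) - n + 1))

# Two complete test paths; the first loops twice to reach the [3, 2, 3] pair.
test_paths = [[1, 2, 3, 2, 3, 2, 4], [1, 2, 4]]
uncovered = [r for r in requirements
             if not any(covers(p, r) for p in test_paths)]
print(uncovered)  # [] -- every edge-pair requirement is covered
```

This mirrors the MDTD flow: the edge list is the abstract model, the pair list is the set of test requirements, and the complete paths are the refined test specifications.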
Some of the models used for model-based testing (MBT) are:
 Use case models: Describe the different ways in which users will interact with the
software
 Data flow models: Describe the flow of data through the software
 State machine models: Describe the different states of the software
Advantages of model-driven test design:
 Increased Coverage
 Improved Efficiency
 Reduced defects

Disadvantages of model-driven test design:


 Modeling Complexity
 Tool Support

 Required Skill
3.14 Test Case Planning Overview (Test Plan Documents)
The figure below shows the relationships among the different types of test plans. As we can see in
the figure, moving further away from the top-level test plan puts less emphasis on the process
of creation and more on the resulting written document. The reason is that these plans become
useful on a daily, sometimes hourly, basis to the testers performing the testing.
At the lowest level they become step-by-step instructions for executing a test, so it is
essential that they are clear, concise, and organized.
The bottom line is that the test team should create test plans that cover the information
outlined in IEEE 829. The four goals of test case planning should be met: organization,
repeatability, tracking, and proof.

Figure - The different levels of test documents


3.14.1 : Test Design Specification:
IEEE 829 states that the test design specification “refines the test approach defined in the
test plan and identifies the features to be covered by the design and its associated tests. It also
identifies the test cases and test procedures required to accomplish the testing and specifies the
feature pass/fail criteria.”
The purpose of the test design spec is to organize and describe the testing that needs to
be performed on a specific feature. It doesn’t, however, give the detailed cases or the steps to
execute to perform the testing. The following topics, adapted from the IEEE 829 standard,
address this purpose and should be part of the test design specs that a tester creates:
 Identifiers - A unique identifier that can be used to reference and locate the test design
spec. The spec should also reference the overall test plan and contain pointers to any
other plans or specs that it references.
 Features to be tested - A description of the software feature covered by the test design
spec—for example, “the addition function of Calculator,” “font size selection and display
in WordPad”
 Approach - A description of the general approach that will be used to test the features.It
should expand on the approach, if any, listed in the test plan, describe the technique to be
used, and explain how the results will be verified.
 Test case identification - A high-level description and references to the specific test
cases that will be used to check the feature.
 Pass/fail criteria - Describes exactly what constitutes a pass and a fail of the tested
feature: what is acceptable and what is not.


3.14.2 : Test Case Specification:


IEEE 829 describes test cases as documents that specify the actual values used for input
along with the anticipated outputs.
The IEEE 829 standard also lists some other important information that should be included. The
parameters of test cases are given below :
 Identifiers - A unique identifier is referenced by the test design specs and the test
procedure specs.
 Test item- This describes the detailed feature, code module, and so on that’s being
tested. It should be more specific than the features listed in the test design spec. If the test
design spec said “the addition function of Calculator,” the test case spec would say
“upper limit overflow handling of addition calculations.”
 Input specification- This specification lists all the inputs or conditions given to the
software to execute the test case. If you’re testing Calculator, this may be as simple as
1+1. If you’re testing cellular telephone switching software, there could be hundreds or
thousands of input conditions.
 Output specification- This describes the result you expect from executing the test case.
 Environmental needs- Environmental needs are the hardware, software, test tools,
facilities, staff, and so on that are necessary to run the test case.
 Special procedural requirements- This section describes anything unusual that must be
done to perform the test.
 Intercase dependencies- If a test case depends on another test case or might be
affected by another, that information should go here. A bug caused NASA’s Mars Polar
Lander to crash on Mars; it’s a perfect example of an undocumented intercase dependency.
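The parameters above could be captured as a structured record; a hypothetical sketch, where the class and field names are illustrative rather than mandated by IEEE 829:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    """One test case specification; fields paraphrase the IEEE 829 parameters above."""
    identifier: str
    test_item: str
    input_spec: str
    output_spec: str
    environmental_needs: list = field(default_factory=list)
    special_procedures: str = ""
    intercase_dependencies: list = field(default_factory=list)

# Hypothetical example based on the Calculator feature mentioned above.
tc = TestCaseSpec(
    identifier="CALC-ADD-001",
    test_item="upper limit overflow handling of addition calculations",
    input_spec="99999999999999999999 + 1",
    output_spec="overflow reported; application does not crash",
)
print(tc.identifier)  # CALC-ADD-001
```

Keeping the spec in a machine-readable form like this also makes the tracking and reporting discussed later easier to automate.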

3.14.3 : Test Procedures: (Procedures for Execution of Test cases)


After the tester documents the test designs and test cases, what remains are the procedures
that need to be followed to execute the test cases. IEEE 829 states that the test procedure
specification “identifies all the steps required to operate the system and exercise the
specified test cases in order to implement the associated test design.”
The test procedure or test script spec defines the step-by-step details of exactly how to
perform the test cases. Here’s the information that needs to be defined:
 Identifier- A unique identifier that ties the test procedure to the associated test cases
and test design.
 Purpose - The purpose of the procedure and reference to the test cases that it will
execute.
 Special requirements - Other procedures, special testing skills, or special equipment
needed to run the procedure.
 Procedure steps- Detailed description of how the tests are to be run:
o Log: Tells how and by what method the results and observations will be recorded.
o Setup: Explains how to prepare for the test.
o Start: Explains the steps used to start the test.
o Procedure: Describes the steps used to run the tests.
o Measure: Describes how the results are to be determined.
o Shut down: Explains the steps for suspending the test for unexpected reasons.
o Restart: Tells the tester how to pick up the test at a certain point if there’s a failure.
o Stop: Describes the steps for an orderly halt to the test.
o Wrap up: Explains how to restore the environment to its pre-test condition.
o Contingencies: Explains what to do if things don’t go as planned.
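The ordered steps above can be sketched as a small runner that executes each named step and logs its outcome; the step functions here are hypothetical placeholders for a Calculator test, not part of any real tool:

```python
def run_procedure(steps, log):
    """Execute named procedure steps in order, recording each outcome in the log."""
    for name, action in steps:
        try:
            result = action()
            log.append((name, "ok", result))
        except Exception as exc:
            # Contingencies: record the failure and halt the run.
            log.append((name, "failed", str(exc)))
            break
    return log

# Hypothetical step functions for a simple Calculator addition test.
steps = [
    ("setup",     lambda: "calculator launched"),
    ("procedure", lambda: 1 + 1),
    ("measure",   lambda: "result compared against expected value 2"),
    ("wrap up",   lambda: "environment restored"),
]
log = run_procedure(steps, [])
print([entry[0] for entry in log])  # ['setup', 'procedure', 'measure', 'wrap up']
```

The log produced by each run satisfies the Log element of the procedure spec: how and by what method results and observations are recorded.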

The following are the elements that should be included in a test procedure:
 Purpose of the test
 Steps of the test
 Expected results
 Preconditions
 Postconditions
Test procedures can be written in various formats, but they should be clear, concise, and easy
to follow. They should be consistent with the other documentation for the software project.
Benefits of test procedures:
 Consistency: Test procedures help to ensure that tests are executed consistently, which
improves the accuracy of the results
 Documentation: Test procedures document the testing process, which can be helpful
for debugging and troubleshooting
 Communication: Test procedures can help to communicate the testing requirements
to the development team and other stakeholders
 Reusability: Test procedures can be reused for future testing, which can save time and
effort

3.14.4 : Test Case Organization and Tracking


One consideration that a tester should take into account when creating the test case documentation
is how the information will be organized and tracked. There are essentially five possible systems:
1. In tester’s head - This method should be considered only if tester is testing his own software
for his own personal use.
2. Paper/documents - It’s possible to manage the test cases for very small projects on paper.
Tables and charts of checklists have been used effectively. They’re obviously a weak method for
organizing and searching the data, but they do offer one very important positive: a written
checklist that includes a tester’s initials or signature denoting that tests were run. This is
excellent proof in a court of law that testing was performed.

3. Spreadsheet - A popular and very workable method of tracking test cases is by using a
spreadsheet. The figure below shows an example of this. By keeping all the details of the test cases
in one place, a spreadsheet can provide an at-a-glance view of your testing status. Spreadsheets
are easy to use, relatively easy to set up, and provide good tracking and proof of testing.

Figure- A spreadsheet can be used to effectively track and manage test suites and test cases.
4. Custom database - The ideal method for tracking test cases is to use a customized
database programmed specifically to handle test cases. Many commercially available
applications are set up to perform just this specific task.
Database software such as FileMaker Pro, Microsoft Access, and many others provide almost
drag-and-drop database creation that would let us build a database that mapped to the
IEEE 829 standard in just a few hours. A tester could then set up reports and queries that
would allow the tester to answer just about any question regarding the test cases.
5. Test case management tool - A test case management tool is a software application
that can be used to create, store, update, and execute test cases. These tools provide
features such as version control, test case prioritization, and test case execution reports.

Benefits of Test Case Organization and tracking:


1. Improved efficiency: Test case organization and tracking can improve the efficiency of testing
by making it easier to find and execute test cases
2. Improved communication: Test case organization and tracking can help to improve communication
between testing teams and other stakeholders by providing a centralized repository for test cases
and reports
3. Reduced risk: Test case organization and tracking can help to reduce the risk of defects
by making it easier to identify and fix defects during the development process
Some of the best practices for test case organization and tracking:
 Create a test case library: A test case library is a central repository for test cases.
This will make it easier to find and execute test cases
 Use a consistent format: Test cases will be easier to understand if a consistent format is used
 Include all relevant information: This will make it easier to execute test cases and to
track their status and results
 Track the status of the test cases: This will help to ensure that tests are executed
in a timely manner
 Review the test cases regularly: This ensures that test cases are up-to-date and accurate
By following these practices, organizations can organize and track test cases effectively and
efficiently, and the test results will be accurate.
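A minimal sketch of the spreadsheet-style tracker described above: one row per test case plus an at-a-glance status summary. The column names and statuses are illustrative, not from IEEE 829:

```python
import csv
import io

# One row per test case; the "feature" values echo the examples used earlier.
rows = [
    {"id": "TC-001", "feature": "addition", "status": "passed"},
    {"id": "TC-002", "feature": "font size selection", "status": "failed"},
    {"id": "TC-003", "feature": "addition", "status": "not run"},
]

# Persist the tracker as CSV, the format a spreadsheet opens directly.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "feature", "status"])
writer.writeheader()
writer.writerows(rows)

# At-a-glance summary, like a status pivot in the spreadsheet.
summary = {}
for row in rows:
    summary[row["status"]] = summary.get(row["status"], 0) + 1
print(summary)  # {'passed': 1, 'failed': 1, 'not run': 1}
```

The same rows map naturally onto a custom database or test case management tool when the project outgrows a spreadsheet.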

• For Bug Reporting, Bug Life Cycle - Refer Unit II Notes.


