Software testing
©Ian Sommerville 2004 Software Engineering, 7th edition. Chapter 23
Objectives
l To discuss the distinctions between
validation testing and defect testing
l To describe the principles of system and
component testing
l To describe strategies for generating system
test cases
l To understand the essential characteristics of
tools used for test automation
Topics covered
l System testing
l Component testing
l Test case design
l Test automation
The testing process
l Component testing
• Testing of individual program components;
• Usually the responsibility of the component developer
(except sometimes for critical systems);
• Tests are derived from the developer’s experience.
l System testing
• Testing of groups of components integrated to create a
system or sub-system;
• The responsibility of an independent testing team;
• Tests are based on a system specification.
Testing phases
Defect testing
l The goal of defect testing is to discover
defects in programs
l A successful defect test is a test which
causes a program to behave in an
anomalous way
l Tests can show the presence, not the absence, of
defects
Testing process goals
l Validation testing
• To demonstrate to the developer and the system
customer that the software meets its requirements;
• A successful test shows that the system operates as
intended.
l Defect testing
• To discover faults or defects in the software where its
behaviour is incorrect or not in conformance with its
specification;
• A successful test is a test that makes the system perform
incorrectly and so exposes a defect in the system.
The software testing process
Testing policies
l Only exhaustive testing can show a program is free
from defects. However, exhaustive testing is
impossible.
l Testing policies define the approach to be used in
selecting system tests:
• All functions accessed through menus should be tested;
• Combinations of functions accessed through the same
menu should be tested;
• Where user input is required, all functions must be tested
with correct and incorrect input.
System testing
l Involves integrating components to create a
system or sub-system.
l May involve testing an increment to be
delivered to the customer.
l Two phases:
• Integration testing - the test team have access
to the system source code. The system is tested
as components are integrated.
• Release testing - the test team test the
complete system to be delivered as a black-box.
Integration testing
l Involves building a system from its
components and testing it for problems that
arise from component interactions.
l Top-down integration
• Develop the skeleton of the system and
populate it with components.
l Bottom-up integration
• Integrate infrastructure components then add
functional components.
l To simplify error localisation, systems should
be incrementally integrated.
Incremental integration testing
Testing approaches
l Architectural validation
• Top-down integration testing is better at discovering errors
in the system architecture.
l System demonstration
• Top-down integration testing allows a limited
demonstration at an early stage in the development.
l Test implementation
• Often easier with bottom-up integration testing.
l Test observation
• Problems with both approaches. Extra code may be
required to observe tests.
Release testing
l The process of testing a release of a system
that will be distributed to customers.
l Primary goal is to increase the supplier’s
confidence that the system meets its
requirements.
l Release testing is usually black-box or
functional testing
• Based on the system specification only;
• Testers do not have knowledge of the system
implementation.
Black-box testing
Testing guidelines
l Testing guidelines are hints for the testing
team to help them choose tests that will
reveal defects in the system
• Choose inputs that force the system to generate
all error messages;
• Design inputs that cause buffers to overflow;
• Repeat the same input or input series several
times;
• Force invalid outputs to be generated;
• Force computation results to be too large or too
small.
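As an illustration of the first guideline, a test can deliberately submit invalid input and check that the expected error message is produced. A minimal sketch, assuming an invented OrderForm class and message texts so that the example is self-contained; a real release test would drive the delivered system through its external interface:

import org.junit.Test;
import static org.junit.Assert.*;

// Invented stand-in for part of the system under test, so the example runs.
class OrderForm {
    String submit(int quantity) {
        if (quantity <= 0)  return "Error: quantity must be at least 1";
        if (quantity > 999) return "Error: quantity exceeds the maximum order size";
        return "Order accepted";
    }
}

public class ErrorMessageTests {

    // Inputs chosen to force each error message to be generated.
    @Test
    public void zeroQuantityIsRejected() {
        assertEquals("Error: quantity must be at least 1",
                     new OrderForm().submit(0));
    }

    @Test
    public void oversizedQuantityIsRejected() {
        assertEquals("Error: quantity exceeds the maximum order size",
                     new OrderForm().submit(1000));
    }
}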
Testing scenario
System tests
Use cases
l Use cases can be a basis for deriving the
tests for a system. They help identify
operations to be tested and help design the
required test cases.
l From an associated sequence diagram, the
inputs and outputs to be created for the tests
can be identified.
Collect weather data sequence chart
Performance testing
l Part of release testing may involve testing
the emergent properties of a system, such
as performance and reliability.
l Performance tests usually involve planning a
series of tests where the load is steadily
increased until the system performance
becomes unacceptable.
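A minimal sketch of this load-stepping idea, assuming a hypothetical processRequest operation and an arbitrary acceptability threshold; real performance tests would use a load-generation tool and the system's operational profile:

// Illustrative load-stepping harness: increase the load until the
// measured response time exceeds an (assumed) acceptable threshold.
public class LoadStep {

    // Hypothetical operation standing in for a call to the system under test.
    static void processRequest() {
        Math.sqrt(System.nanoTime());   // placeholder work
    }

    public static void main(String[] args) {
        final long maxMillisPerBatch = 50;   // assumed acceptability threshold
        for (int load = 1000; load <= 1_000_000; load *= 10) {
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                processRequest();
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            System.out.println(load + " requests took " + elapsedMillis + " ms");
            if (elapsedMillis > maxMillisPerBatch) {
                System.out.println("Performance became unacceptable at load " + load);
                break;
            }
        }
    }
}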
Stress testing
l Exercises the system beyond its maximum design
load. Stressing the system often causes defects to
come to light.
l Stressing the system tests its failure behaviour.
Systems should not fail catastrophically. Stress
testing checks for unacceptable loss of service or
data.
l Stress testing is particularly relevant to distributed
systems that can exhibit severe degradation as a
network becomes overloaded.
Component testing
l Component or unit testing is the process of
testing individual components in isolation.
l It is a defect testing process.
l Components may be:
• Individual functions or methods within an object;
• Object classes with several attributes and
methods;
• Composite components with defined interfaces
used to access their functionality.
Object class testing
l Complete test coverage of a class involves
• Testing all operations associated with an object;
• Setting and interrogating all object attributes;
• Exercising the object in all possible states.
l Inheritance makes it more difficult to design
object class tests as the information to be
tested is not localised.
Weather station object interface
Weather station testing
l Need to define test cases for reportWeather,
calibrate, test, startup and shutdown.
l Using a state model, identify sequences of
state transitions to be tested and the event
sequences to cause these transitions.
l For example:
• Waiting -> Calibrating -> Testing -> Transmitting
-> Waiting
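A hedged sketch of a test for that transition sequence. The operation names come from the slide; the WeatherStation stub, its getState observer, the transmissionDone event and the state names are assumptions made so the example is self-contained:

import org.junit.Test;
import static org.junit.Assert.*;

// Simplified stand-in for the weather station object; the real class
// would drive instruments and a communications link.
class WeatherStation {
    private String state = "Waiting";
    String getState()       { return state; }          // assumed observer for testing
    void calibrate()        { state = "Calibrating"; }
    void test()             { state = "Testing"; }
    void reportWeather()    { state = "Transmitting"; }
    void transmissionDone() { state = "Waiting"; }     // assumed completion event
}

public class WeatherStationStateTest {

    // Exercise the transition sequence
    // Waiting -> Calibrating -> Testing -> Transmitting -> Waiting.
    @Test
    public void calibrateTestTransmitCycleReturnsToWaiting() {
        WeatherStation ws = new WeatherStation();
        assertEquals("Waiting", ws.getState());

        ws.calibrate();
        assertEquals("Calibrating", ws.getState());

        ws.test();
        assertEquals("Testing", ws.getState());

        ws.reportWeather();
        assertEquals("Transmitting", ws.getState());

        ws.transmissionDone();
        assertEquals("Waiting", ws.getState());
    }
}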
Interface testing
l Objectives are to detect faults due to
interface errors or invalid assumptions about
interfaces.
l Particularly important for object-oriented
development as objects are defined by their
interfaces.
Interface testing
Interface types
l Parameter interfaces
• Data passed from one procedure to another.
l Shared memory interfaces
• Block of memory is shared between procedures or
functions.
l Procedural interfaces
• Sub-system encapsulates a set of procedures to be called
by other sub-systems.
l Message passing interfaces
• Sub-systems request services from other sub-systems.
Interface errors
l Interface misuse
• A calling component calls another component and makes
an error in its use of its interface, e.g. parameters in the
wrong order.
l Interface misunderstanding
• A calling component embeds assumptions about the
behaviour of the called component which are incorrect.
l Timing errors
• The called and the calling component operate at different
speeds and out-of-date information is accessed.
Interface testing guidelines
l Design tests so that parameters to a called
procedure are at the extreme ends of their ranges.
l Always test pointer parameters with null pointers.
l Design tests which cause the component to fail.
l Use stress testing in message passing systems.
l In shared memory systems, vary the order in which
components are activated.
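The first three guidelines translate directly into unit tests. The sketch below invents a small parameter interface (a findMax routine) purely to illustrate extreme-value, null-pointer and deliberate-failure tests; it is not part of the weather station or LIBSYS examples:

import org.junit.Test;
import static org.junit.Assert.*;

// Invented component with a simple parameter interface.
class Stats {
    static int findMax(int[] values) {
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be a non-empty array");
        }
        int max = values[0];
        for (int v : values) {
            if (v > max) max = v;
        }
        return max;
    }
}

public class InterfaceTests {

    // Parameters at the extreme ends of their ranges.
    @Test
    public void handlesExtremeIntegerValues() {
        assertEquals(Integer.MAX_VALUE,
                     Stats.findMax(new int[] { Integer.MIN_VALUE, Integer.MAX_VALUE }));
    }

    // Always test pointer parameters with null pointers.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullArray() {
        Stats.findMax(null);
    }

    // Design tests which cause the component to fail.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsEmptyArray() {
        Stats.findMax(new int[0]);
    }
}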
Test case design
l Involves designing the test cases (inputs and
outputs) used to test the system.
l The goal of test case design is to create a
set of tests that are effective in validation and
defect testing.
l Design approaches:
• Requirements-based testing;
• Partition testing;
• Structural testing.
Requirements based testing
l A general principle of requirements
engineering is that requirements should be
testable.
l Requirements-based testing is a validation
testing technique where you consider each
requirement and derive a set of tests for that
requirement.
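As a sketch of the idea (the requirement, class and method names below are invented for illustration and are not taken from the LIBSYS figures that follow): given the requirement "the system shall reject passwords shorter than 8 characters", several tests are derived for that single requirement:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical validator used to illustrate requirements-based testing.
class PasswordPolicy {
    static boolean isAcceptable(String password) {
        return password != null && password.length() >= 8;
    }
}

// Several tests derived from the single (invented) requirement:
// "The system shall reject passwords shorter than 8 characters."
public class PasswordRequirementTests {

    @Test
    public void sevenCharacterPasswordIsRejected() {
        assertFalse(PasswordPolicy.isAcceptable("abcdefg"));
    }

    @Test
    public void eightCharacterPasswordIsAccepted() {
        assertTrue(PasswordPolicy.isAcceptable("abcdefgh"));
    }

    @Test
    public void emptyPasswordIsRejected() {
        assertFalse(PasswordPolicy.isAcceptable(""));
    }
}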
LIBSYS requirements
LIBSYS tests
Partition testing
l Input data and output results often fall into
different classes where all members of a
class are related.
l Each of these classes is an equivalence
partition or domain where the program
behaves in an equivalent way for each class
member.
l Test cases should be chosen from each
partition.
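For example, if a hypothetical routine accepts integers between 10000 and 99999, the input space has three partitions: values below 10000, values in range, and values above 99999. A minimal sketch of tests chosen from each partition and at the partition boundaries, where defects tend to cluster:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical component: accepts values in the range 10000..99999.
class InputChecker {
    static boolean accepts(int value) {
        return value >= 10000 && value <= 99999;
    }
}

public class PartitionTests {

    // One test from inside each partition, plus the boundary values.
    @Test
    public void valueBelowRangeIsRejected()  { assertFalse(InputChecker.accepts(9999)); }

    @Test
    public void lowerBoundaryIsAccepted()    { assertTrue(InputChecker.accepts(10000)); }

    @Test
    public void midRangeValueIsAccepted()    { assertTrue(InputChecker.accepts(50000)); }

    @Test
    public void upperBoundaryIsAccepted()    { assertTrue(InputChecker.accepts(99999)); }

    @Test
    public void valueAboveRangeIsRejected()  { assertFalse(InputChecker.accepts(100000)); }
}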
Equivalence partitioning
Equivalence partitions
Search routine specification
procedure Search (Key : ELEM ; T: SEQ of ELEM;
Found : in out BOOLEAN; L: in out ELEM_INDEX) ;
Pre-condition
-- the sequence has at least one element
T'FIRST <= T'LAST
Post-condition
-- the element is found and is referenced by L
( Found and T (L) = Key)
or
-- the element is not in the array
( not Found and
not (exists i, T'FIRST <= i <= T'LAST, T (i) = Key ))
Search routine - input partitions
l Inputs which conform to the pre-conditions.
l Inputs where a pre-condition does not hold.
l Inputs where the key element is a member of
the array.
l Inputs where the key element is not a
member of the array.
Testing guidelines (sequences)
l Test software with sequences which have
only a single value.
l Use sequences of different sizes in different
tests.
l Derive tests so that the first, middle and last
elements of the sequence are accessed.
l Test with sequences of zero length.
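A sketch combining the input partitions and sequence guidelines above as JUnit tests. The linear search below is an invented Java rendering of the specified Search routine (returning the index of the key, or -1 when it is absent, in place of Found and L) so that the tests are runnable; it is not the implementation discussed on the following slides:

import org.junit.Test;
import static org.junit.Assert.*;

// Invented Java rendering of the Search routine: returns the index of key
// in t, or -1 if the key is not present.
class LinearSearch {
    static int search(int[] t, int key) {
        for (int i = 0; i < t.length; i++) {
            if (t[i] == key) return i;
        }
        return -1;
    }
}

public class SearchPartitionTests {

    // Inputs conforming to the pre-condition, key in the array.
    @Test
    public void findsKeyThatIsPresent() {
        int[] t = { 17, 29, 21, 23 };
        assertEquals(2, LinearSearch.search(t, 21));
    }

    // Inputs conforming to the pre-condition, key not in the array.
    @Test
    public void reportsNotFoundWhenKeyIsAbsent() {
        int[] t = { 17, 29, 21, 23 };
        assertEquals(-1, LinearSearch.search(t, 25));
    }

    // Input where the pre-condition (at least one element) does not hold.
    @Test
    public void emptySequenceReportsNotFound() {
        assertEquals(-1, LinearSearch.search(new int[0], 17));
    }

    // Sequence guidelines: single-element sequence, and access to the
    // first and last elements of a longer sequence.
    @Test
    public void handlesSingleElementAndBoundaries() {
        assertEquals(0, LinearSearch.search(new int[] { 42 }, 42));
        int[] t = { 9, 16, 18, 30, 31, 41, 45 };
        assertEquals(0, LinearSearch.search(t, 9));
        assertEquals(6, LinearSearch.search(t, 45));
    }
}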
Search routine - input partitions
Structural testing
l Sometimes called white-box testing.
l Derivation of test cases according to
program structure. Knowledge of the
program is used to identify additional test
cases.
l Objective is to exercise all program
statements (not all path combinations).
Structural testing
Binary search - equiv. partitions
l Pre-conditions satisfied, key element in array.
l Pre-conditions satisfied, key element not in
array.
l Pre-conditions unsatisfied, key element in array.
l Pre-conditions unsatisfied, key element not in array.
l Input array has a single value.
l Input array has an even number of values.
l Input array has an odd number of values.
Binary search equiv. partitions
Binary search - test cases
Path testing
l The objective of path testing is to ensure that
the set of test cases is such that each path
through the program is executed at least
once.
l The starting point for path testing is a
program flow graph that shows nodes
representing program decisions and arcs
representing the flow of control.
l Statements with conditions are therefore
nodes in the flow graph.
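An illustrative binary search in Java, with comments marking the decision points that would appear as nodes in its flow graph; the node numbering on the following slides belongs to the flow-graph figure and is not reproduced here. Test cases are chosen so that every path between the decision nodes is executed at least once:

// Illustrative binary search; each commented decision would be a node
// in the program's flow graph.
public class BinarySearch {

    // Returns the index of key in the sorted array a, or -1 if absent.
    static int search(int[] a, int key) {
        int bottom = 0;
        int top = a.length - 1;
        while (bottom <= top) {          // decision: loop continues or exits
            int mid = (bottom + top) / 2;
            if (a[mid] == key) {         // decision: key found at mid
                return mid;
            } else if (a[mid] < key) {   // decision: search upper or lower half
                bottom = mid + 1;
            } else {
                top = mid - 1;
            }
        }
        return -1;                        // path taken when the key is absent
    }

    public static void main(String[] args) {
        int[] a = { 17, 21, 23, 29 };
        // Test cases chosen so that the found, not-found and both
        // half-selection paths are all executed.
        System.out.println(search(a, 23));   // key present, upper-half branch taken
        System.out.println(search(a, 17));   // key present, lower-half branch taken
        System.out.println(search(a, 25));   // key absent
    }
}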
Binary search flow graph
Independent paths
l 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
l 1, 2, 3, 4, 5, 14
l 1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
l 1, 2, 3, 4, 6, 7, 2, 11, 13, 5, …
l Test cases should be derived so that all of
these paths are executed.
l A dynamic program analyser may be used to
check that paths have been executed.
Test automation
l Testing is an expensive process phase. Testing
workbenches provide a range of tools to reduce the
time required and total testing costs.
l Systems such as JUnit support the automatic
execution of tests.
l Most testing workbenches are open systems
because testing needs are organisation-specific.
l They are sometimes difficult to integrate with closed
design and analysis workbenches.
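A minimal example of automatic test execution with JUnit 4. The MathUtils class is invented for the illustration; JUnit discovers the @Test methods and reports passing and failing tests without any manual checking of results:

import org.junit.Test;
import static org.junit.Assert.*;

// Invented component under test.
class MathUtils {
    static int max(int a, int b) {
        return (a > b) ? a : b;
    }
}

// Any JUnit runner (IDE, build tool or console runner) executes these
// tests automatically and reports failed assertions.
public class MathUtilsTest {

    @Test
    public void maxOfDistinctValues() {
        assertEquals(7, MathUtils.max(3, 7));
    }

    @Test
    public void maxOfEqualValues() {
        assertEquals(5, MathUtils.max(5, 5));
    }
}

With the JUnit 4 jar on the classpath, the class can be run from the command line with the console runner, e.g. java org.junit.runner.JUnitCore MathUtilsTest.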
A testing workbench
Testing workbench adaptation
l Scripts may be developed for user interface
simulators and patterns for test data
generators.
l Test outputs may have to be prepared
manually for comparison.
l Special-purpose file comparators may be
developed.
Key points
l Testing can show the presence of faults in a system;
it cannot prove there are no remaining faults.
l Component developers are responsible for
component testing; system testing is the
responsibility of a separate team.
l Integration testing is testing increments of the
system; release testing involves testing a system to
be released to a customer.
l Use experience and guidelines to design test cases
in defect testing.
Key points
l Interface testing is designed to discover defects in
the interfaces of composite components.
l Equivalence partitioning is a way of discovering test
cases - all cases in a partition should behave in the
same way.
l Structural analysis relies on analysing a program
and deriving tests from this analysis.
l Test automation reduces testing costs by supporting
the test process with a range of software tools.