STM - Unit 3
Validation is the next step after verification. As shown in the figure, each validation-testing activity focuses on a particular SDLC phase and thereby on a particular class of errors. For example, the purpose of unit validation testing is to find discrepancies between a developed module's functionality and its requirements and interfaces as specified in the SRS. Similarly, the purpose of system validation testing is to explore whether the product is consistent with its original objectives. The advantage of this structure of validation testing is that it avoids redundant testing and prevents large classes of errors from being overlooked. Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used to demonstrate conformity with requirements. Both the plan and the procedure are designed to ensure that all functional requirements are satisfied, all behavioural characteristics are achieved, all performance requirements are attained, documentation is correct and human-engineered, and other requirements are met.
Drivers
A test driver can be defined as a software module used to invoke a module under test, provide test inputs, control and monitor execution, and report test results; most simply, it is a line of code that calls a method and passes a value to that method.
Suppose a module is to be tested that receives inputs from another module. If this other module, which passes inputs to the module under test, is not ready and still under development, then the required inputs must be simulated.
Code is prepared in which the required inputs are either hard-coded or entered by the user and then passed on to the module under test.
This module, in which the required inputs for the module under test are simulated for the purpose of module or unit testing, is known as a driver module.
For example, suppose modules B and C are under test, and module A, which passes inputs to B and C, is not ready. Therefore, a driver module is needed to simulate module A, in the sense that it passes the required inputs to modules B and C, as shown in the figure.
A test driver may take inputs in the following forms and call the unit to be tested:
It may hard-code the inputs as parameters of the calling unit.
It may take the inputs from the user.
It may read the inputs from a file.
Stubs
A stub can be defined as a piece of software that works similarly to a unit referenced by the unit being tested, but is much simpler than the actual unit.
A stub works as a ‘stand-in’ for the subordinate unit and provides the minimum required
behaviour for that unit.
The module under test may also call other modules which are not ready at the time of testing. Dummy modules are therefore prepared in place of these subordinate modules. These dummy modules are called stubs.
Suppose module B under test needs to call module D and module E, but they are not ready. Therefore, stubs are designed for module D and module E.
A stub is a placeholder for the actual module to be called. Therefore, it is not designed with the full functionality of the actual module; it is a reduced implementation of it.
It does not perform any action of its own and simply returns to the calling unit.
We may include a display instruction as a trace message in the body of the stub; the idea is to show that the called module is working fine by accepting the input parameters.
A constant or null value must be returned from the body of the stub to the calling module.
A stub may simulate exceptions or abnormal conditions, if required.
Figure: Stubs
Disadvantages: Drivers and stubs also represent overheads. The effort of designing them may increase the time and cost of the entire software system, so they must be kept simple to hold these overheads low.
Stubs and drivers are generally prepared by the developer of the module under test.
Example 7.1
Consider the following program:
#include <stdio.h>

int calsum(int x, int y, int z);
int caldiff(int x, int y, int z);  /* not ready */
int calmul(int x, int y, int z);   /* not ready */

int main(void)
{
    int a, b, c, sum, diff, mul;
    scanf("%d %d %d", &a, &b, &c);
    sum = calsum(a, b, c);
    diff = caldiff(a, b, c);
    mul = calmul(a, b, c);
    printf("%d %d %d", sum, diff, mul);
    return 0;
}

int calsum(int x, int y, int z)
{
    int d;
    d = x + y + z;
    return d;
}
(a) Suppose the main() module is not ready for the testing of the calsum() module. Design a driver module to take the place of main().
(b) Modules caldiff() and calmul() are not ready when called in main(). Design stubs for these two modules.
Solution
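One possible solution is sketched below in C (one of many valid designs; the hard-coded input values and trace messages are illustrative):

#include <stdio.h>

/* Unit under test, taken from the example program. */
int calsum(int x, int y, int z)
{
    int d;
    d = x + y + z;
    return d;
}

/* (a) Driver for calsum(): simulates the not-yet-ready main() by
   hard-coding the inputs, calling the unit under test, and
   reporting the result. */
int main(void)
{
    int a = 1, b = 2, c = 3;   /* hard-coded test inputs */
    int sum = calsum(a, b, c);
    printf("calsum(%d, %d, %d) = %d (expected 6)\n", a, b, c, sum);
    return 0;
}

/* (b) Stubs for caldiff() and calmul(): placeholders that print a
   trace message and return a constant to the calling module; they
   would be linked with the original main() once it is ready. */
int caldiff(int x, int y, int z)
{
    printf("caldiff stub called with %d %d %d\n", x, y, z);
    return 0;
}

int calmul(int x, int y, int z)
{
    printf("calmul stub called with %d %d %d\n", x, y, z);
    return 0;
}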
Various types of integration testing can be seen in the hierarchical tree given below.
The integration method depends on the approach on which the activity of integration is based:
(a) integrate all the modules together and then test them - non-incremental;
(b) integrate the modules one by one and test them incrementally - incremental.
Non-Incremental/Big-Bang Integration Testing
In this type of testing, either all untested modules are combined together and then tested or
unit tested modules are combined together.
It is also known as Big-Bang integration testing.
Big-Bang method cannot be adopted practically. This theory has been discarded due to the
following reasons:
o Big-Bang requires more work.
o Actual modules are not interfaced directly until the end of the software system.
o It will be difficult to localize the errors since the exact location of bugs cannot be
found easily.
Incremental Integration Testing
In this type, you start with one module and unit test it. Then combine the module which has to be merged with it and perform a test on both modules together.
In this way, incrementally keep adding modules and testing the resulting environment, until an integrated, tested software system is achieved.
Incremental integration testing is beneficial for the following reasons:
1. Incremental approach does not require many drivers and stubs.
2. Interfacing errors are uncovered earlier.
3. It is easy to localize the errors since modules are combined one by one. The first suspect is
the recently added module. Thus, debugging becomes easy.
4. Incremental testing is more thorough.
Incremental integration suffers from the problem that the modules must be combined serially, according to the design. In practice, this is sometimes not feasible, since all modules are not ready at the same time.
Therefore, the two strategies are combined: as in the big-bang method, all modules are unit tested independently as they are developed, which provides parallelism; and as soon as a module is ready, it is combined and tested again in the integrated environment, as in incremental integration testing.
Incremental integration can be done either from top to bottom or from bottom to top. Accordingly, incremental integration testing is divided into two categories:
1. Top – down integration Testing
2. Bottom-up integration testing
Top-down Integration Testing
Start with the high-level modules and move downward through the design hierarchy.
Modules subordinate to the top module are integrated in the following two ways:
Depth first integration
o In this type, all modules on a major control path of the design hierarchy are integrated
first.
o In the example shown in Fig. 7.8, modules 1, 2, 6, 7/8 will be integrated first. Next,
modules 1, 3, 4/5 will be integrated.
Breadth first integration
In this type, all modules directly subordinate at each level are integrated first, moving horizontally across the design hierarchy.
In the example shown in Fig. 7.8, modules 2 and 3 will be integrated first. Next, modules 6, 4, and 5 will be integrated. Modules 7 and 8 will be integrated last.
Guidelines to be considered are:
1. In practice, the availability of modules matters the most.
2. If there are critical sections in the software, test them as early as possible.
3. I/O modules should be added as early as possible so that all interface errors are detected earlier.
The procedure for top-down integration process is discussed in the following steps:
1. Start with the top or initial module in the software. Substitute the stubs for all the
subordinate modules of top module. Test the top module.
2. After testing the top module, stubs are replaced one at a time with the actual modules for
integration.
3. Perform testing on this recent integrated environment.
4. Regression testing may be conducted to ensure that new errors have not appeared.
5. Repeat steps 2–4 for the whole design hierarchy.
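A minimal sketch of step 1 in C (module names are hypothetical): the top module is tested with a stub standing in for its subordinate; the stub is later replaced by the actual module and the tests are repeated.

#include <stdio.h>

/* Stub for a subordinate module that is not yet integrated:
   prints a trace message and returns a constant. */
static int compute_stub(int x)
{
    printf("compute_stub called with %d\n", x);
    return 0;
}

/* Top module under test: calls its subordinate through the stub. */
static int top(int x)
{
    return compute_stub(x) + 1;
}

int main(void)
{
    printf("top(5) = %d\n", top(5));   /* test of the top module */
    /* Next, replace compute_stub() with the actual module and
       re-run the tests (regression) on the integrated code. */
    return 0;
}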
Bottom-up Integration Testing
The bottom-up strategy begins with the terminal modules at the lowest level in the software structure.
After testing these modules, they are integrated and tested moving from bottom to top level.
Bottom-up integration can be performed at an early stage in the developmental process.
It is useful for integrating object-oriented systems, real-time systems, and systems with strict
performance requirements.
Bottom-up integration has the disadvantage that the software as a whole does not exist until
the last module is added.
It is not an optimal strategy for functionally decomposed systems, as it tests the most
important subsystem last.
The steps in bottom-up integration are as follows:
1. Start with the lowest level modules in the design hierarchy.
2. Look for the super-ordinate module which calls the module selected in step 1, and design a driver to simulate it.
3. Test the module selected in step 1 with the driver designed in step 2.
4. The next module to be tested is any module whose subordinate modules have all been tested.
5. Repeat steps 2 to 4 and move up in the design hierarchy.
6. Whenever the actual modules are available, replace the stubs and drivers with the actual modules and test again.
Call Graph-Based Integration
This integration avoids the effort spent on developing stubs and drivers. There are two types of integration testing based on the call graph: (a) pair-wise integration and (b) neighbourhood integration.
Pair-wise integration - Only one pair of calling and called modules is considered at a time for integration; the total number of test sessions equals the number of edges in the call graph. For instance, a call graph in which A calls B and C, and B calls D, has three edges and hence three pair-wise sessions. For the given example, 19 sessions are required.
Neighbourhood integration - Neighbourhoods of a node in the call graph are considered for integration. The neighbourhood of a node consists of its immediate predecessor and successor nodes.
7.2.3 PATH-BASED INTEGRATION
It focuses on interactions among system units.
It combines structural and behavioural types of testing for integration testing.
Source node: an instruction in a module at which execution starts or resumes.
Sink node: an instruction in a module at which execution terminates.
Module execution path (MEP): a path consisting of a set of executable statements within a module, as in a flow graph.
Message: a programming language mechanism by which one unit transfers control to another unit.
MM-path: a path consisting of MEPs and messages.
MM-path graph: an extended flow graph in which nodes are MEPs and edges are messages.
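The following small C sketch (with hypothetical units) grounds this vocabulary:

#include <stdio.h>

static int B(int x)   /* first executable statement of B: source node */
{
    return x + 1;     /* return statement: sink node of B */
}

int main(void)        /* entry of main: source node */
{
    int y = B(2);     /* this call is a message from main to B */
    printf("%d\n", y);
    return 0;         /* exit of main: sink node */
}

/* One MM-path here: the MEP in main up to the call, the message to B,
   the MEP through B, the return message, and the MEP in main after
   the return. */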
7.3 FUNCTION TESTING
Function testing is the process of attempting to detect discrepancies between the functional
specifications of software and its actual behaviour.
Function test is used to measure the quality of the functional (business) components of the
system.
Functional tests verify that the system behaves correctly from the user/business perspective
and functions according to the requirements, models, or any other design paradigm used to
specify the application.
The function test must determine whether each component or business event:
o performs in accordance with the specifications,
o responds correctly to all conditions presented by incoming events/data,
o moves data correctly from one business event to the next (including data stores), and
o is initiated in the order required to meet the business objectives of the system.
To keep a record of function testing, a function coverage metric is used. Function coverage can be measured with a function coverage matrix, which keeps track of those functions that exhibited the greatest number of errors.
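A hypothetical function coverage matrix might look as follows (the function and test-case names are illustrative, not from the original):

Function     Test cases     Errors found
Log-on       TC1, TC2       2
Search       TC3            0
Checkout     TC4, TC5       3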
The primary processes/deliverables for requirements-based function testing are:
o Test planning - defines the scope, schedule, test plan (document), test schedule (work plan), and deliverables for the function test cycle.
o Partitioning/functional decomposition - the breakdown of a system into its functional components.
o Requirement definition - requirements specified in the form of proper documents.
o Test case design - a tester designs and implements test cases to validate that the product performs in accordance with the requirements.
o Traceability matrix formation - a function coverage matrix is prepared to track which functions are being tested through which test cases.
o Test case execution - test cases are executed and the results are recorded.
Recovery Testing
Recovery is the ability of a system to restart operations after recovering from a failure.
Recovery testing is the activity of testing how well the software is able to recover from
crashes, hardware failures, and other similar problems.
Systems (e.g. operating system, database management systems, etc.) must recover to a known
state from programming errors, hardware failures, data errors, or any disaster in the system.
Recovery tests would determine if the system can return to a well-known state, and that no
transactions have been compromised.
A checkpoint system can be used to record the safe state of the system's transactions.
During recovery testing, the testers should work on the following:
o Restart - If there is a failure, the most recent checkpoint record is retrieved, the system is initialized to the state in the checkpoint record, and it begins to process new transactions.
o Switchover - in case of failure of one component, the standby takes over the control.
The ability of the system to switch to a new component must be tested.
A good way to perform recovery testing is under maximum load.
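A minimal checkpoint/restart sketch in C (the file name checkpoint.dat and the transaction counter are hypothetical): the last processed transaction is persisted so that, after a crash, processing resumes from the last known-good state.

#include <stdio.h>

int main(void)
{
    long last_txn = 0;

    /* Restart: recover the most recent checkpoint, if one exists. */
    FILE *fp = fopen("checkpoint.dat", "r");
    if (fp) {
        if (fscanf(fp, "%ld", &last_txn) != 1)
            last_txn = 0;
        fclose(fp);
    }

    /* Process new transactions, checkpointing after each one. */
    for (long txn = last_txn + 1; txn <= last_txn + 3; txn++) {
        printf("processing transaction %ld\n", txn);
        fp = fopen("checkpoint.dat", "w");
        if (fp) {
            fprintf(fp, "%ld", txn);
            fclose(fp);
        }
    }
    return 0;
}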
Security Testing
Security is a protection system that is needed to assure the customers that their data will be
protected.
Security may include controlling access to data, encrypting data in communication, ensuring
secrecy of stored data, auditing security events, etc.
The effects of security breaches could be extensive and can cause loss of information,
corruption of information, misinformation, privacy violations, denial of service, etc.
Types of Security Requirements While performing security testing, the following
security requirements must be considered -
o Each functional requirement, most likely, has a specific set of related security issues to be addressed in the software implementation. For example, the log-on requirement must specify the number of retries allowed, the action to be taken if the log-on fails, and so on.
o A software project has security issues that are global in nature, related to the
application’s architecture and overall implementation. For example, a Web
application may have a global requirement that all private customer data of any kind
is stored in encrypted form in the database.
Security vulnerabilities A vulnerability is an error that an attacker can exploit. Vulnerabilities arise at two levels:
o Bugs at the implementation level, such as local implementation errors or interprocedural interface errors.
o Design-level mistakes, the hardest category to identify. Examples: problems in error handling, unprotected data channels, incorrect or missing access control mechanisms, and timing errors, especially in multithreaded systems.
How to perform security testing - By identifying risks and potential loss associated
with those risks in the system and creating tests driven by those risks, the tester can properly
focus on areas of code in which an attack is likely to succeed.
Risk management and security testing Software security practitioners perform many
different tasks to manage software security risks, including:
o Creating security abuse/misuse cases
o Listing normative security requirements
o Performing architectural risk analysis
o Building risk-based security test plans
o Wielding static analysis tools
o Performing security tests
Based on design-level risk analysis and a ranking of the security-related risks, security test plans are prepared, which guide the security testing.
Thus, security testing must necessarily involve two diverse approaches:
o testing security mechanisms to ensure that their functionality is properly implemented, and
o performing risk-based security testing motivated by understanding and simulating the attacker's approach.
Elements of security testing The basic security concepts that need to be covered by
security testing are discussed below:
o Confidentiality A security measure which protects against the disclosure of
information to parties other than the intended recipient.
o Integrity A measure intended to allow the receiver to determine that the information
which it receives has not been altered in transit or by anyone other than the originator
of the information.
o Authentication A measure designed to establish the validity of a transmission,
message, or originator. It allows the receiver to have confidence that the information
it receives originates from a specific known source.
o Authorization It is the process of determining that a requester is allowed to receive a
service or perform an operation. Access control is an example of authorization.
o Availability It assures that the information and communication services will be ready
for use when expected. Information must be kept available for authorized persons
when they need it.
o Non-repudiation A measure intended to prevent the later denial that an action
happened, or a communication took place, etc. In communication terms, this often
involves the interchange of authentication information combined with some form of
provable timestamp.
Performance Testing
Performance testing tests the run-time performance of the software on the basis of various performance factors, which may be expressed in terms of memory use, response time, throughput, and delays.
For example, for a Web application you need to know at least two things: (a) the expected load in terms of concurrent users or HTTP connections, and (b) the acceptable response time.
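A minimal response-time check in C (process_request() and the 0.5 s threshold are assumptions for illustration, not part of the original):

#include <stdio.h>
#include <time.h>

static void process_request(void)
{
    /* stands in for one user transaction of the system under test */
}

int main(void)
{
    const double acceptable_seconds = 0.5;   /* assumed requirement */
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    process_request();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("response time: %.6f s (%s)\n", elapsed,
           elapsed <= acceptable_seconds ? "PASS" : "FAIL");
    return 0;
}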
The following tasks must be done for this testing:
o Decide whether to use internal or external resources to perform tests, depending on
in-house expertise (or the lack of it).
o Gather performance requirements (specifications) from users and/or business analysts.
o Develop a high-level plan (or project charter), including requirements, resources,
timelines, and milestones.
o Develop a detailed performance test plan (including detailed scenarios and test
cases, workloads, environment info, etc).
o Choose test tool(s).
o Specify test data needed.
o Develop detailed performance test project plan, including all dependencies and
associated timelines.
o Configure the test environment (ideally identical hardware to the production
platform), router configuration, deployment of server instrumentation, database test
sets developed, etc.
o Execute tests, probably repeatedly (iteratively), in order to see whether any
unaccounted factor might affect the results.
Load Testing
When a system is tested with a load that causes it to allocate its resources in maximum
amounts, it is called load testing.
Through load testing, we are able to determine the maximum sustainable load the system can
handle.
Load is varied from a minimum (zero) to the maximum level the system can sustain without
running out of resources.
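A minimal load-ramp sketch using POSIX threads (the empty worker() stands in for the operation under load; in a real load test it would issue requests against the system):

#include <pthread.h>
#include <stdio.h>

#define MAX_WORKERS 64

static void *worker(void *arg)
{
    (void)arg;   /* place the operation under load here */
    return NULL;
}

int main(void)
{
    /* Ramp the load from 1 worker up to MAX_WORKERS and observe
       resource consumption (memory, CPU, handles) at each step. */
    for (int n = 1; n <= MAX_WORKERS; n *= 2) {
        pthread_t tid[MAX_WORKERS];
        for (int i = 0; i < n; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < n; i++)
            pthread_join(tid[i], NULL);
        printf("sustained a load of %d concurrent workers\n", n);
    }
    return 0;
}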
Stress Testing
Stress testing is also a type of load testing, but the difference is that the system is put under
loads beyond the limits so that the system breaks.
Thus, stress testing tries to break the system under test by overwhelming its resources in
order to find the circumstances under which it will crash.
The areas that may be stressed in a system are:
o Input transactions
o Disk space
o Output
o Communications
o Interaction with users
Stress testing is important for real-time systems where unpredictable events may occur,
resulting in input loads that exceed the values described in the specification, and the system
cannot afford to fail due to maximum load on resources.
Therefore, in real-time systems, the entire threshold values and system limits must be noted
carefully. Then, the system must be stress-tested on each individual value.
Stress testing is demanding in terms of both the time needed to prepare for the test and the resources consumed during its actual execution.
Usability Testing
This type of system testing is related to a system’s presentation rather than its functionality.
The goal of usability testing is to verify that intended users of the system are able to interact
properly with the system while having a positive and convenient experience.
Usability testing identifies discrepancies between the user interfaces of a product and the
human engineering requirements of its potential users.
What the user wants or expects from the system can be determined in several ways:
o Area experts
o Group meetings
o Surveys
o Analysing similar products
The characteristics typically checked in usability testing include:
o Ease of use
o Interface steps
o Response time
o Help system
o Error messages
Compatibility/Conversion/Configuration Testing
Compatibility testing checks the compatibility of the system being developed with the different operating systems, hardware, and software configurations available, etc.
Configuration testing allows developers/testers to evaluate system performance and
availability when hardware exchanges and reconfigurations occur.
Some guidelines for compatibility testing are:
Operating systems - The specifications must state all the targeted end-user operating systems on which the system being developed will run.
Software/Hardware - The product may need to operate with certain versions of Web browsers, with hardware devices such as printers, or with other software such as virus scanners or word processors.
Conversion testing - Compatibility may also extend to upgrades from previous versions of the software.
Ranking of possible configurations -Since there will be a large set of possible configurations
and compatibility concerns, the testers must rank the possible configurations in order, from
the most to the least common, for the target system.
Identification of test cases - Select the most representative set of test cases that confirms the application's proper functioning on a particular platform.
Updating the compatibility test cases - The compatibility test cases must also be continually updated.
The following categories are used for evaluating regression test selection techniques: