Planning for testing begins early in a software project when the overall testing strategy for the project
is decided.
The testing strategy defines the approach to testing for the project. Defining it requires identifying the types of tests that will be performed and the related test techniques.
The required skills and resources for testing need to be identified and arranged for, and the roles and organization need to be defined. In addition, there is a need to plan for the acquisition of testing tools and for training on testing concepts, processes, and tools.
Detailed planning for specific tests is done as the specification against which testing will be performed becomes available. Each test requires planning its scope in terms of the software and the features being tested.
Test cases have to be defined, and test scripts have to be written. When the software to be tested is ready, tests
are executed, results are checked, and errors are identified.
Primary Objective of Testing:
Testing is a quality filter that is used to detect errors in a product.
In his book, The Art of Software Testing, Glenford Myers states the objective of testing as:
"Testing is a process of executing a program with the intent of finding an error."
It is important to note that the primary objective of testing is to find errors and not to prove that there are no errors.
It follows, therefore, as Myers says, "A successful test is one that uncovers an as-yet-undiscovered error."
Secondary Objective of Testing:
Besides the primary objective of uncovering errors, some of the secondary objectives of testing are:
Demonstration of conformance to user requirements: This is achieved by testing for each
requirement using high-order tests that exercise the system as a whole.
Demonstration of certain performance characteristics of the software: This is achieved by executing test cases that check for some of the performance characteristics of the software, such as the response time under defined conditions (a sketch of such a check follows this list).
Indication of the overall software quality: This is achieved by analyzing the errors that are uncovered
and using this analysis as an indication of the errors that may be remaining in the software.
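The sketch referred to above is given here in Python: a test that asserts a response-time limit under a defined condition. The function search_catalog and the 200-millisecond limit are assumptions made purely for illustration; they do not come from any stated requirement.

import time

def search_catalog(query):
    # Hypothetical stand-in for the function under test.
    catalog = ["printer cable", "printer ink", "usb hub"]
    return [item for item in catalog if query in item]

def test_search_response_time():
    # Performance characteristic: response time under a defined condition.
    start = time.perf_counter()
    search_catalog("printer")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.2, f"response took {elapsed:.3f} s, limit is 0.200 s"

if __name__ == "__main__":
    test_search_response_time()
    print("response-time check passed")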
Principle of Software Testing:
While planning the testing approach and specific tests, certain established principles should be followed. Some of
the important principles are:
Testing must be traceable to the requirements.
Testing needs to be planned for early in a software project.
The data on the errors detected should be used to focus future testing efforts.
Testing should be incremental.
Testing should focus on exceptions.
Debugging should be planned for separately.
Let us discuss each of the principles.
a) Testing must be traceable to the requirements:
Testing is a quality control mechanism and is intended to improve the quality of software by detecting errors so
that they can be fixed.
Quality is understood as meeting customer requirements. One important principle for testing, therefore, is that it
should be related to the requirements and needs to check that each requirement is met. We should, therefore,
design tests for each requirement. That is, we should be able to trace back our test cases to the requirements
being tested—establish requirement traceability.
Requirements Traceability: Requirements traceability is a mechanism that provides the information needed to trace the relationship between the requirements stated in the specification document and specific parts of the work products built in subsequent phases, such as the design document.
Traceability is made possible by having a unique label for each requirement and then labeling the work products
of all phases according to the requirements to which they correspond.
Testing is concerned with checking that each requirement is met. Traceability provides a simple way of knowing
how a requirement has been met and how it can be tested.
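As a simple illustration, a traceability mapping can be kept as a table from requirement labels to the test cases that cover them. The Python sketch below assumes illustrative labels of the form REQ-nnn and TC-nnn; the labels and the coverage check are examples, not prescribed by the text.

requirement_to_tests = {
    "REQ-001": ["TC-001", "TC-002"],  # login with valid and invalid credentials
    "REQ-002": ["TC-003"],            # password reset by e-mail
    "REQ-003": [],                    # report export: no test designed yet
}

# Traceability makes it easy to check that every requirement has at least one test.
untested = [req for req, tests in requirement_to_tests.items() if not tests]
print("Requirements without test coverage:", untested)

In practice the same mapping extends to design elements and code modules, which is how traceability to the work products of subsequent phases is maintained.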
b) Testing needs to be planned for early in a software project:
One common mistake made by software engineers is to think of testing only after coding is complete. While it is
true that tests can be executed only after the code is available, the designing of tests should be started early in a
project.
Transcript: Planning for software testing: Part of the mythology that swirls around software testing is that
testing occurs late in the process; and therefore, we don't have to worry about it until late in the process. Nothing
could be further from the truth. Software test planning begins early in the overall software engineering process.
Test case design happens a bit later, and test execution does happen late. To justify these comments, note that
test planning can occur as soon as the software requirements have been established. In fact, the definition of
validation criteria during the requirements activity is the first step towards test planning. Test case design happens
as soon as the software design has been defined. Once procedural designs have been established for each
module in the system, test case design can commence. Finally, test execution happens late after coding is
finished when the testing activity begins.
c) The data on the errors detected should be used to focus future testing efforts:
Testing requires effort; and therefore, it makes sense to focus this effort on areas that have more errors. An
important principle of testing is that the testing resources should be used to uncover the largest possible number
of errors. To be able to focus the testing efforts on areas that have more errors, we need to understand the
Pareto principle, a principle that applies to most things in life. Applied in the context of software testing, it can be
stated as: 20 percent of the software accounts for 80 percent of the errors uncovered during testing.
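To make this concrete, the data on detected errors can be tallied per module and used to rank modules for additional testing. The Python sketch below assumes a simple defect log that records the module in which each error was found; the module names and counts are invented for illustration.

from collections import Counter

defect_log = ["billing", "billing", "reports", "billing",
              "auth", "billing", "reports", "billing"]

counts = Counter(defect_log)
total = sum(counts.values())

# The modules at the top of this ranking (the "20 percent") are the
# candidates for extra testing effort.
for module, n in counts.most_common():
    print(f"{module:10s} {n:3d} errors  ({n / total:.0%} of all errors)")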
d) Testing should be incremental:
Software usually consists of a number of modules that interface with each other to provide the overall
functionality. Some testers have the tendency to test software only after it is fully coded and integrated. The
rationale for this is that if coding is done properly, there should be very few errors in it. Therefore, there is no need
to waste time testing parts of the software separately. This approach is called 'big-bang' testing. However, it is problematic: when the software is tested in one shot, it is very difficult to isolate the sources of the errors encountered, and smaller errors are easily missed. An important testing principle, therefore, is that testing should be done incrementally.
Transcript: Incremental testing: Back in the good old days, macho computer programmers used to do what we
laughingly call big-bang testing. What this meant was that we attempted to test the entire program as a whole. This rarely, if ever, worked. Effective testing is incremental, and what this means is that we begin by
testing in the small, continue testing as we construct the system, and ultimately finish by testing the requirements
in the large. This is an effective approach to testing and one that every organization should follow.
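A minimal sketch of what testing in the small before testing in the large can look like is given below, in Python. The two functions parse_amount and apply_discount are invented for the example; the point is that each module gets a test in isolation before a test exercises them together.

def parse_amount(text):
    # Module 1: convert a user-entered string such as " $120.50 " to a number.
    return float(text.strip().lstrip("$"))

def apply_discount(amount, percent):
    # Module 2: apply a percentage discount to an amount.
    return round(amount * (1 - percent / 100), 2)

def test_parse_amount_unit():
    # Testing in the small: one module, in isolation.
    assert parse_amount(" $120.50 ") == 120.50

def test_checkout_integration():
    # Testing in the large: the two modules working together.
    assert apply_discount(parse_amount("$200"), 10) == 180.00

if __name__ == "__main__":
    test_parse_amount_unit()
    test_checkout_integration()
    print("unit and integration checks passed")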
e) Testing should focus on exceptions:
Testing aims at detecting as many errors as possible. To test effectively, we therefore need to take into account
the human tendency of making mistakes. It has been found that while most programmers code correctly for typical
processing, they make mistakes in the code dealing with aberrant conditions, such as erroneous data entry or an
unexpected data combination. Testing should, therefore, focus on such exceptions in the program so that these
errors are detected.
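As a small illustration of exception-focused testing, the Python sketch below tests a hypothetical age-parsing routine both on its typical path and on erroneous data entry; the function and the error cases are assumptions made for the example.

def parse_age(value):
    age = int(value)                 # raises ValueError for non-numeric entry
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

def test_typical_processing():
    assert parse_age("42") == 42     # the path most programmers get right

def test_erroneous_data_entry():
    # Exception paths: bad entries that the code must report as errors.
    for bad in ["", "forty-two", "-1", "999"]:
        try:
            parse_age(bad)
        except ValueError:
            continue                 # expected: the error is reported
        raise AssertionError(f"no error raised for bad input {bad!r}")

if __name__ == "__main__":
    test_typical_processing()
    test_erroneous_data_entry()
    print("typical and exception-path checks passed")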
f) Debugging should be planned for separately:
While planning for testing, a common mistake is not to plan for debugging, which is the effort required to
understand why a bug occurred and to correct it. It is important to remember that testing and debugging are two
separate activities. While the activity of testing is required to uncover errors, debugging is required to understand
the underlying causes that resulted in errors and then to correct the errors. Another important testing principle is
that debugging is an activity distinct from testing and needs to be planned for and handled separately.
Summary:
In this section, we learned about the objectives and principles of software testing. Testing is performed to detect errors by executing code. A test is considered successful when it uncovers previously undetected errors. Testing can also
be used to demonstrate that the requirements are met. However, if no errors are detected during testing, it does
not mean that the code is error-free. Testing should be planned for based on the principles of testing. It must be
traceable to requirements and be planned for early in a software project. The data on the errors detected should
be used to focus future testing efforts. Testing should be incremental—starting from unit testing and proceeding to
high-order testing. Testing should focus on exceptions because errors are usually more frequent in exception-handling paths.
Debugging should be planned for separately so that there is enough time and budget for understanding and
resolving errors.
Limitations of Software Testing:
Introduction:
Software engineers tend to think of testing as a mechanism that will detect all errors and will, therefore, ensure
that the software is of good quality. Although testing is an essential activity in the software process and a useful
way of detecting errors in the software, it has a number of limitations. These limitations should be understood so
that the expectations from testing are realistic. Software engineers should also take into account these limitations
while planning for testing and performing it. In this section, you will learn about the limitations of software testing.
Limitations of Software Testing:
Testing is performed to detect errors by executing code. A test is considered successful when it detects errors. The detected errors can then be removed before the software is delivered.
However, testing cannot be considered a safety net because of its inherent limitations. Its usefulness is limited
regardless of how well testing is performed. Broadly, the limitations of testing are:
Testing cannot be exhaustive.
Selective testing cannot detect all errors.
Testing itself may have bugs.
Testing cannot prevent errors; it only detects them.
Testing is not as cost-effective as other quality filters, such as formal technical reviews, because it comes late in the cycle, when errors are more numerous and costlier to rectify.
Let us now discuss these limitations.
a) Testing cannot be exhaustive: Exhaustive testing means executing the software with all combinations of input values and preconditions for each element under test. For any non-trivial program, the number of such combinations is so large that exhaustive testing is impractical within any realistic schedule and budget.
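A rough back-of-the-envelope calculation shows why. The Python sketch below counts the input combinations for a routine that takes just two 32-bit integers; the assumed rate of one billion test executions per second is an illustrative figure.

combinations = 2 ** 32 * 2 ** 32           # every pair of 32-bit integer inputs
tests_per_second = 1_000_000_000           # assumed execution rate
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations:.3e} combinations -> about {years:,.0f} years to run them all")

Even at that rate, running every combination would take close to 600 years, which is why testing must be selective.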
b) Selective testing cannot detect all errors: It is impractical to exhaustively test code. Therefore, we need to
perform selective testing. That is, we should decide on the criteria for identifying test cases so that we select
the test cases that are more likely to detect errors. We should select the specific paths that we want to test.
This is done by selecting specific functions, exercising them with specific inputs, and deciding which
specific outputs we will examine. Selecting one set of test cases means excluding some others and could
therefore mean missing out on some errors. The choice of test case design techniques should, therefore,
depend on the desired focus of testing. It is important to note that, depending on what we include and what
we exclude while testing, there are some errors that can escape detection during testing.
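One common way of selecting test cases is to pick representatives of each input class plus the boundary values, rather than every possible input. The Python sketch below assumes a hypothetical discount rule purely for illustration.

def discount(order_total):
    # Hypothetical rule: 10% off orders of 100.00 or more, otherwise no discount.
    return 0.10 if order_total >= 100.00 else 0.0

# Selected test cases: one typical value per class plus the boundaries.
selected_cases = {
    0.00: 0.0,      # lower boundary of the "no discount" class
    99.99: 0.0,     # just below the threshold
    100.00: 0.10,   # on the threshold
    250.00: 0.10,   # typical value in the "discount" class
}

for total, expected in selected_cases.items():
    assert discount(total) == expected, f"unexpected result for {total}"
print("all selected test cases passed")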
c) Testing itself may have bugs: Test cases, test scripts, and test data are work products created by people and can therefore contain errors of their own. A faulty test may fail to detect a real defect or may report a failure where none exists, so test work products also need to be reviewed.
d) Testing is not as cost-effective as other quality filters such as formal technical reviews because it comes late in the cycle when errors are more numerous and costlier to rectify: Reviews can detect an error in the phase in which it is introduced, when it is cheapest to correct. Testing, by contrast, can begin only after code exists; by that time, errors made in earlier phases have propagated into the design and the code, and finding and rectifying them costs considerably more.