UNIT-1
Introduction:
What is testing?
Testing is the process of evaluating a system or its components, by manual or automated
means, to verify that it satisfies specified requirements.
1. The Purpose of Testing:
Testing is the most time-consuming, yet essential, activity of a software project. It is vital
to the success of a new system.
The main purposes of testing are to:
1. Measure the quality of the software.
2. Find faults in the application and software.
3. Reduce the number of bugs in the program.
4. Check whether all the requirements and specifications given by clients and customers
are met.
5. Produce a quality-oriented product.
6. Meet the needs of clients and customers.
7. Provide software that is as free of defects and errors as possible.
8. Check whether the application works as per the specified functional requirements.
9. Report the difference between expected and actual results.
An important purpose of testing is to increase project and product quality, and with it
the quality of the organization as a whole.
2. Dichotomies:
Dichotomies in software testing are the differences between various approaches,
techniques, and tools used in the process.
• Testing vs debugging
Testing aims to show that a program has bugs, while debugging locates and
removes them. Debugging usually follows testing and has different goals,
methods, and psychology.
• Functional vs structural
Tests can be designed from a functional or a structural point of view. Functional
testing treats the program as a black box, designing tests from the specification,
while structural testing designs tests from the source code and its executable
paths.
• Designer vs tester
The test designer creates the tests, while the tester actually tests the code.
• Modularity Versus Efficiency: A module is a discrete, well-defined, small
component of a system. The smaller the modules, the harder they are to
integrate; the larger the modules, the harder they are to understand. Both
tests and systems can be modular.
Testing can and should likewise be organised into modular components.
Small, independent test cases can be designed to test independent
modules.
• Small Versus Large: Programming in large means constructing programs
that consists of many components written by many different programmers.
Programming in the small is what we do for ourselves in the privacy of our
own offices. Qualitative and Quantitative changes occur with size and so
must testing methods and quality criteria.
• Builder Versus Buyer: Most software is written and used by the same
organization.
The different roles / users in a system include:
1. Builder: Who designs the system and is accountable to the buyer.
2. Buyer: Who pays for the system in the hope of profits from providing
services.
3. Model For Testing:
Model-based testing is a software testing method that uses models to automatically
generate, execute, and evaluate test cases.
The figure above is a model of the testing process. It includes three models: a model of the
environment, a model of the program and a model of the expected bugs.
• ENVIRONMENT:
o A Program's environment is the hardware and software required to make
it run. For online systems, the environment may include communication
lines, other systems, terminals and operators.
o The environment also includes all programs that interact with and are
used to create the program under test - such as OS, linkage editor, loader,
compiler, utility routines.
o Because the hardware and firmware are usually stable, it is not smart to
blame the environment for bugs.
• PROGRAM:
o Most programs are too complicated to understand in detail.
o The program must be simplified into a model in order to test it.
o If the simple model of the program does not explain the unexpected
behaviour, we may have to modify that model to include more facts and
details. And if that fails, we may have to modify the program.
• BUGS:
o Bugs are more insidious (deceiving but harmful) than ever we expect
them to be.
o An unexpected test result may lead us to change our notion of what a bug
is and our model of bugs.
o Programmers and testers who hold optimistic notions about bugs are
usually unable to test effectively and unable to justify the dirty tests
most programs need.
o OPTIMISTIC NOTIONS ABOUT BUGS:
1. Benign Bug Hypothesis: The belief that bugs are nice, tame
and logical. (Benign: Not Dangerous)
2. Bug Locality Hypothesis: The belief that a bug discovered
within a component affects only that component's behaviour.
3. Control Bug Dominance: The belief that errors in the control
structures (if, switch etc) of programs dominate the bugs.
4. Code / Data Separation: The belief that bugs respect the
separation of code and data.
• TESTS:
o Tests are formal procedures: inputs must be prepared, outcomes
predicted, tests documented, commands executed, and results
observed. All of these steps are subject to error.
o We do three distinct kinds of testing on a typical software system.
They are:
1. Unit / Component Testing: A Unit is the smallest testable
piece of software that can be compiled, assembled, linked,
loaded etc. A unit is usually the work of one programmer and
consists of several hundred or fewer lines of code. Unit
Testing is the testing we do to show that the unit does not
satisfy its functional specification or that its implementation
structure does not match the intended design structure.
A Component is an integrated aggregate of one or more
units. Component Testing is the testing we do to show that
the component does not satisfy its functional specification or
that its implementation structure does not match the intended
design structure.
2. Integration Testing: Integration is the process by which
components are aggregated to create larger
components. Integration Testing is testing done to show that,
even though the components were individually satisfactory
(having passed component testing), the combination of
components is incorrect or inconsistent.
3. System Testing: A System is a big component. System
Testing is aimed at revealing bugs that cannot be attributed to
components. It includes testing for performance, security,
accountability, configuration sensitivity, startup and recovery.
Role of Models: The art of testing consists of creating, selecting, exploring, and revising
models. Our ability to go through this process depends on the number of different models
we have at hand and their ability to express a program's behaviour.
4. Levels Of Testing:
Unit testing: Tests individual units (the smallest testable pieces of software) in isolation
Integration testing: Tests how interconnected units work together
System testing: Tests the overall system
Acceptance testing: Tests how well the software meets user requirements
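As a minimal sketch of the lowest level, the example below unit-tests a hypothetical `add` function in isolation using Python's `unittest` module:

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of software.
def add(a, b):
    return a + b

# A unit test exercises the unit in isolation against its specification.
class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the suite programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Integration testing would then combine `add` with other units and exercise their interfaces, and system testing would exercise the assembled whole.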
5. Software Testing Principles:
1. Testing shows the Presence of Defects
The goal of software testing is to make the software fail. Testing can show that defects
are present, but it cannot prove that the software is defect-free. Even multiple rounds of
testing can never ensure that software is 100% bug-free; testing can reduce the number
of defects but cannot remove all of them.
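A small sketch, with a hypothetical `safe_divide` function, illustrates the principle: a passing test says nothing about inputs that were never tried, and only a test that happens to exercise the faulty input reveals the defect:

```python
# Hypothetical function with a latent defect: it fails when b == 0.
def safe_divide(a, b):
    return a / b

# This test passes, yet it does not prove the function is defect-free.
assert safe_divide(10, 2) == 5.0

# Only a test exercising the faulty input shows the presence of the defect.
try:
    safe_divide(10, 0)
    bug_found = False
except ZeroDivisionError:
    bug_found = True
```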
2. Exhaustive Testing is not Possible
Testing the functionality of the software with all possible inputs (valid or invalid) and
pre-conditions is known as exhaustive testing. Exhaustive testing is impossible: the
software can never be tested with every possible input. We can test only a subset of
cases and assume the software will produce the correct output for the rest. Testing every
possible case would take impractical amounts of time, cost, and effort.
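A quick back-of-the-envelope calculation shows why, assuming a function with two 32-bit integer inputs and a (generous) rate of one million test cases per second:

```python
# One 32-bit integer input admits 2**32 distinct values;
# a function taking two such inputs admits 2**64 combinations.
single_input_cases = 2 ** 32
two_input_cases = single_input_cases ** 2

# At one million test cases per second, exhausting every combination
# would take well over half a million years.
seconds = two_input_cases / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
```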
3. Early Testing
Test activities should start early in order to find defects in the software. A defect detected
in the early phases of the SDLC is much less expensive to fix. For better software quality,
testing should start at the initial phase, i.e. at the requirement analysis phase.
4. Defect Clustering
In a project, a small number of modules can contain most of the defects. The Pareto Principle
for software testing states that 80% of software defects come from 20% of modules.
5. Pesticide Paradox
Repeating the same test cases, again and again, will not find new bugs. So it is necessary to
review the test cases and add or update test cases to find new bugs.
6. Testing is Context-Dependent
The testing approach depends on the context of the software developed. Different types of
software need to perform different types of testing. For example, The testing of the e-
commerce site is different from the testing of the Android application.
7. Absence of Errors Fallacy
If a built software is 99% bug-free but does not follow the user requirement then it is
unusable. It is not only necessary that software is 99% bug-free but it is also mandatory to
fulfill all the customer requirements.
6. The Tester's Role in Software Development:
In software development, the tester plays a crucial role in ensuring the quality and reliability
of the final product. Their primary responsibility is to identify and report defects or bugs in
the software, which are then fixed by the development team. Testers also verify that the
software meets the required specifications and functional requirements. Additionally, they
often participate in requirements gathering, usability testing, and exploratory testing to
identify potential issues early on. Effective testers must possess strong analytical skills,
attention to detail, and excellent communication skills to collaborate with the development
team.
• Analyze software: Testers analyze the software's usability, functionality, and
performance.
• Identify bugs: Testers identify and help remove bugs, glitches, and other user
experience issues.
• Create test designs: Testers create test designs, processes, cases, and data.
• Carry out testing: Testers carry out testing as per the defined procedures.
• Participate in walkthroughs: Testers participate in walkthroughs of testing
procedures.
7. Consequences of Bugs:
1. System Crash: Software stops working completely.
2. Data Loss: Important data is deleted or corrupted.
3. Security Risks: Hackers can exploit bugs to steal sensitive information.
4. Financial Loss: Bugs can cause monetary losses due to system downtime or incorrect
calculations.
5. Reputation Damage: Bugs can harm a company's reputation and erode customer trust.
6. Time and Resource Waste: Debugging and fixing bugs can be time-consuming and costly.
7. User Frustration: Bugs can cause frustration and annoyance for end-users.
8. Compliance Issues: Bugs can lead to non-compliance with regulatory requirements.
9. Maintenance Challenges: Bugs can make software maintenance and updates more difficult.
10. Business Disruption: Bugs can disrupt business operations and impact revenue.
8. Taxonomy of Bugs:
In software testing, a taxonomy is a scheme that classifies the different types of
bugs.
• Critical bugs
These bugs can cause a program to crash, fail, or have information issues. They
require immediate attention.
• Major bugs
These bugs can significantly impact the usefulness of a program, but they are not
as severe as critical bugs.
• Minor bugs
These bugs are minor issues that don't significantly impact the program's
execution. They can be minor blunders or interface issues that are relatively easy
to fix.
• Minor errors
These are small mistakes that usually don't impact usefulness. They can include
typing mistakes, blunders, or minor visual mistakes.
• Out-of-bound bugs
These bugs occur when logical or arithmetic errors exceed the allowable
boundaries of an operation.
• Security bugs
These bugs can pose a significant risk to the integrity and confidentiality of
software and its data.
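A minimal sketch of an out-of-bound bug, using a hypothetical `last_element` function with a classic off-by-one error:

```python
# Out-of-bound bug: valid indexes run from 0 to len(items) - 1,
# so indexing with len(items) exceeds the allowable boundary.
def last_element_buggy(items):
    return items[len(items)]

def last_element_fixed(items):
    return items[len(items) - 1]

try:
    last_element_buggy([1, 2, 3])
    out_of_bounds = False
except IndexError:
    out_of_bounds = True
```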
(or)
Functional Bugs
1. Syntax Errors: Mistakes in code syntax.
2. Logic Errors: Flaws in program logic.
3. Calculation Errors: Incorrect calculations or formulas.
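As a sketch of a functional (logic) bug, the hypothetical range check below uses `or` where `and` is required, so the buggy version is true for every value whenever `low <= high`:

```python
# Logic error: 'or' makes the check true for any x whenever low <= high.
def in_range_buggy(x, low, high):
    return x >= low or x <= high

# Corrected logic: x must satisfy both bounds at once.
def in_range_fixed(x, low, high):
    return low <= x <= high
```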
Non-Functional Bugs
1. Performance Issues: Slow or inefficient program execution.
2. Security Vulnerabilities: Weaknesses that allow unauthorized access.
3. Usability Issues: Difficulties with user interface or experience.
Environmental Bugs
1. Compatibility Issues: Problems with different hardware, software, or browsers.
2. Configuration Errors: Mistakes in system configuration or setup.
3. Network Issues: Problems with network connectivity or communication.
User Interface Bugs
1. Layout Issues: Problems with screen layout or formatting.
2. Navigation Issues: Difficulties with menu navigation or links.
3. Display Issues: Problems with text, image, or video display.
Data Bugs
1. Data Corruption: Errors in data storage or retrieval.
2. Data Loss: Loss of important data due to bugs or errors.
3. Data Inconsistency: Inconsistent data across different parts of the system.
9. Basic Concepts of Path Testing:
Basis path testing is a technique of selecting the paths in the control flow graph,
that provide a basis set of execution paths through the program or module. Since
this testing is based on the control structure of the program, it requires complete
knowledge of the program’s structure. To design test cases using this technique,
four steps are followed :
1. Construct the Control Flow Graph
2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths
1. Control Flow Graph – A control flow graph (or simply, flow graph) is a directed graph
which represents the control structure of a program or module. A control flow graph (V, E)
has V number of nodes/vertices and E number of edges in it. A control graph can also have :
• Junction Node – a node with more than one arrow entering it.
• Decision Node – a node with more than one arrow leaving it.
• Region – area bounded by edges and nodes (area outside the graph is also counted as a
region.).
Below are the notations used while constructing a flow graph:
• Sequential Statements
• If – Then – Else
• Do – While
• While – Do
• Switch – Case
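The four design steps above can be sketched for a hypothetical function containing one if-then-else and one while loop. Hand-counting its flow graph gives 7 nodes and 8 edges, and the cyclomatic complexity V(G) = E - N + 2P gives the size of the basis set of independent paths:

```python
# Cyclomatic complexity: V(G) = E - N + 2P, where E = edges,
# N = nodes, P = connected components (1 for a single routine).
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Hypothetical flow graph for:
#     if x > 0: y = 1
#     else:     y = -1
#     while y < 10: y += 1
# Nodes: entry, if-decision, then, else, while-test, loop body, exit (N = 7)
# Edges: entry->if, if->then, if->else, then->while, else->while,
#        while->body, body->while, while->exit (E = 8)
v_g = cyclomatic_complexity(edges=8, nodes=7)
# v_g == 3: three independent paths form the basis set, which matches
# the rule V(G) = number of decision nodes + 1 (here, 2 + 1).
```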
Predicates:
A predicate is a logical expression that evaluates to true or false.
- Simple predicate: a predicate consisting of a single condition (one relational comparison).
- Compound predicate: two or more simple predicates joined by Boolean operators (AND, OR, NOT).
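A short sketch of the distinction, using hypothetical variables:

```python
x, y = 5, -2

# Simple predicate: a single relational condition.
p_simple = x > 0

# Compound predicate: simple predicates joined by Boolean operators.
p_compound = (x > 0) and (y < 0)
```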
Path Predicates and Achievable Paths:
- Path predicate: the predicate associated with a path in a flow graph; it is true if and only if that path is executed.
- Achievable path: a path that can be executed by providing appropriate input values.
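A sketch of an unachievable path, using a hypothetical `classify` function: the path that takes both if-branches in a single execution has the path predicate `x > 10 and x < 5`, which no input can satisfy:

```python
def classify(x):
    path = []
    if x > 10:          # predicate P1
        path.append("big")
    if x < 5:           # predicate P2
        path.append("small")
    return path

# Achievable paths: P1 only, P2 only, or neither.
# The path taking both branches (P1 and P2) is unachievable,
# since x > 10 and x < 5 cannot hold simultaneously.
```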
Path Sensitizing:
Path sensitizing is the process of identifying the input values that will cause a particular path
to be executed.
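A sketch of path sensitizing for a hypothetical `grade` function: to force the "B" path, the input must falsify the first predicate and satisfy the second:

```python
def grade(score):
    if score >= 90:      # predicate P1
        return "A"
    elif score >= 75:    # predicate P2
        return "B"
    return "C"

# Sensitizing the "B" path requires: not P1 and P2,
# i.e. score < 90 and score >= 75 -- any value in [75, 90) works.
sensitizing_input = 80
```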
Path Instrumentation:
Path instrumentation is the process of modifying a program to collect data about the paths that
are executed.
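A sketch of path instrumentation, using hypothetical probes that append to a trace list as each branch executes:

```python
trace = []   # records which branches executed

def abs_value(x):
    if x < 0:
        trace.append("negative-branch")       # instrumentation probe
        return -x
    trace.append("non-negative-branch")       # instrumentation probe
    return x

abs_value(-3)
abs_value(7)
# trace now shows that both paths through abs_value were exercised.
```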
Applications of Path Testing:
1. Verification of Code Logic
• Path testing helps in verifying the correctness of the program's logic by checking all
possible paths.
• It ensures that conditional statements, loops, and branches behave as expected.
2. Bug Identification
• It identifies defects in control flow, such as infinite loops, unreachable code, or
missing branches.
• Complex algorithms and decision-making structures are rigorously tested to uncover
hidden errors.
3. Coverage Analysis
• Path testing is used to measure code coverage, especially decision and branch
coverage.
• It provides quantitative metrics on the proportion of code tested, aiding in
completeness assessment.
4. Regression Testing
• Path testing is applied in regression testing to ensure that new code changes have
not introduced errors into existing paths.
• It verifies that modified code still adheres to all execution paths.
5. Optimizing Test Case Design
• Test cases can be derived systematically based on paths, reducing redundancy and
improving testing efficiency.
• Critical paths in high-risk areas of the application are prioritized for testing.
6. Validation of Error Handling
• Path testing ensures that all error-handling routines and exceptions are triggered and
behave correctly.
• It validates paths for unexpected input and boundary conditions.
7. Integration Testing
• In integration testing, path testing is used to validate the interaction between
modules and ensure proper data flow.
• It ensures that all communication paths between integrated components are
exercised.
8. Performance Analysis
• Execution paths can be analyzed to identify performance bottlenecks.
• Helps in profiling to see which paths are most frequently executed and optimize
them.
9. Safety-Critical Systems
• For safety-critical applications (e.g., in aerospace, healthcare), path testing ensures
that all possible execution scenarios are thoroughly evaluated.
• Verifies compliance with stringent safety standards.
10. Automated Testing Tools
• Many automated testing tools leverage path testing to generate test cases or validate
code coverage.
• Tools like static analyzers and coverage analyzers incorporate path-testing
techniques.