UNIT 1
1.What is the purpose of testing?
The purpose of testing is to ensure that a product, system, or component
works as intended and meets the specified requirements. In software and
systems development, testing helps to:
1. Detect defects or bugs early in the development process.
2. Verify functionality to ensure the system performs correctly.
3. Validate requirements to confirm the product meets business and
user needs.
4. Improve quality by identifying areas for improvement.
5. Ensure reliability and performance under different conditions.
6. Reduce risk by identifying and fixing issues before deployment.
7. Enhance user confidence in the product or system.
2.Briefly explain the differences between testing and debugging.
Aspect | Testing | Debugging
Purpose | To identify whether there are any errors or defects in the software. | To locate and fix the cause of the identified defects.
Performed by | Usually done by testers or QA engineers. | Usually done by developers or programmers.
When it occurs | After or during development, to check software behavior. | After a test fails or an issue is reported.
Goal | To find what is wrong. | To find why it is wrong and fix it.
Process type | Systematic and can be automated. | Investigative and usually manual.
Testing finds the problem, debugging fixes it.
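A minimal Python sketch of this split (the discount function and its bug are invented for illustration):

```python
# Testing: a test case that detects a defect.
def apply_discount(price, percent):
    return price - price * percent        # Bug: percent is not divided by 100

def test_apply_discount():
    # Testing finds WHAT is wrong: 10% off 200 should be 180, not -1800.
    assert apply_discount(200, 10) == 180  # fails, exposing the defect

# Debugging: the developer traces the failure to the missing /100
# and fixes the cause.
def apply_discount_fixed(price, percent):
    return price - price * percent / 100   # defect located and corrected
```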
3.Explain a model of software testing?
The model is the V-model of software testing: each development (verification) phase on the left arm is paired with a corresponding testing (validation) phase on the right arm.
Developer's Life Cycle (Verification Phase)
1. Business Requirement Specification
• Purpose: Understand what the customer needs from the system.
• Activities:
o Collect high-level business goals and expectations.
o Interact with stakeholders for requirement gathering.
• Output: Business Requirements Document (BRD).
• Testing Counterpart: Acceptance Testing (ensures final product
meets business goals).
2. System Requirement Specification
• Purpose: Define system-level requirements based on business
needs.
• Activities:
o Specify what functions and features the system must
include.
o Define performance, security, and compliance criteria.
• Output: System Requirement Specification (SRS).
• Testing Counterpart: System Integration Testing (validates overall
system behavior against these specs).
3. High-Level Design (HLD)
• Purpose: Design the system’s architecture and module
interaction.
• Activities:
o Create module diagrams and define interfaces.
o Plan how components will work together.
• Output: Architecture Design Document.
• Testing Counterpart: Component Testing (checks each
module/component individually).
4. Low-Level Design (LLD)
• Purpose: Provide internal logic and structure for each module.
• Activities:
o Define functions, classes, and pseudocode.
o Plan detailed workflows.
• Output: Detailed design document.
• Testing Counterpart: Unit Testing (validates each function or
block of code).
5. Coding
• Purpose: Convert the design into an executable application.
• Activities:
o Developers write code for each module.
o Adhere to coding standards and best practices.
• Output: Working software ready for testing.
🟢 Tester’s Life Cycle (Validation Phase)
6. Unit Testing
• Purpose: Test individual functions or classes.
• Performed by: Developers.
• Focus: Logic accuracy, boundary conditions, error handling.
7. Component Testing
• Purpose: Test each software component/module independently.
• Performed by: Testers.
• Focus: Input/output correctness, API calls, and data flow within
the module.
8. System Integration Testing
• Purpose: Ensure all integrated components work together.
• Performed by: QA team.
• Focus: Interface testing, communication between modules, end-to-end flow.
9. Acceptance Testing
• Purpose: Validate the system against business needs.
• Performed by: End-users or stakeholders.
• Focus: Usability, compliance with requirements, readiness for
deployment.
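As a concrete illustration of the Unit Testing phase above, a minimal sketch using Python's unittest (the grading function and its boundary are hypothetical):

```python
import unittest

def grade(score):
    """Return 'pass' for scores of 40 and above, else 'fail'."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

class GradeUnitTests(unittest.TestCase):
    def test_boundary(self):
        self.assertEqual(grade(40), "pass")   # exact boundary
        self.assertEqual(grade(39), "fail")   # just below boundary

    def test_error_handling(self):
        with self.assertRaises(ValueError):
            grade(101)                        # invalid input rejected

if __name__ == "__main__":
    unittest.main()
```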
4.Give differences between functional testing and structural testing?
Aspect | Functional Testing | Structural Testing (White-box Testing)
Definition | Tests the software's functions against requirements. | Tests the internal structure or code of the program.
Also known as | Black-box testing | White-box or glass-box testing
Focus | What the system does | How the system works internally
Knowledge needed | No knowledge of the code is needed | Requires understanding of the code logic
Test basis | Based on specifications and requirements | Based on code structure, design, and logic
Performed by | Testers or QA team | Developers or white-box testers
Examples | Login functionality, form validation, payment process | Loop testing, condition testing, branch/path testing
Tools used | Selenium, QTP, Postman, etc. | JUnit, NUnit, coverage tools like JaCoCo
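A small sketch of the contrast (the login function is a made-up example): the functional test is derived from a requirement and treats the code as a black box, while the structural test is written from the code itself to exercise a specific branch.

```python
def login(username, password):
    if not username:              # branch A: missing username
        return "error"
    if password == "secret":      # branch B: correct password
        return "welcome"
    return "denied"               # branch C: wrong password

# Functional (black-box): derived from the requirement
# "a valid user logging in with the right password is greeted".
def test_functional_login():
    assert login("alice", "secret") == "welcome"

# Structural (white-box): derived from the code itself,
# written specifically to cover branch A.
def test_structural_empty_username_branch():
    assert login("", "whatever") == "error"
```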
5. Specify the factors on which the importance of bugs depends, and give
the metric for it.
The importance of a bug is typically assessed based on Severity and
Priority, along with other influencing factors:
🔹 1. Severity
Definition: Indicates how badly the bug impacts the system's
functionality.
Levels:
Critical: System crash, data loss, or complete blockage.
High: Major feature failure, no workaround.
Medium: Partial malfunction, workaround available.
Low: Minor issues, cosmetic errors, typos.
🔹 2. Priority
Definition: Reflects how urgently the bug should be fixed.
Levels:
High: Needs immediate attention.
Medium: Fix in next release or sprint.
Low: Can be fixed later; not urgent.
🔹 3. Impact on End Users
Does the bug affect the core functionality that users rely on?
🔹 4. Frequency of Occurrence
Does the bug occur consistently or only under rare conditions?
🔹 5. Affected Module’s Importance
Bugs in core modules (e.g., login, payment) are more important than in
less-used features.
🔹 6. Stage of Detection
Bugs found in production or post-release are more critical than those
found during development.
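A commonly quoted metric that combines these factors (this formulation follows Beizer's importance metric; the cost terms are interpreted as expected costs):

    importance($) = frequency × (correction cost + installation cost + consequential cost)

where frequency is how often bugs of that kind occur, correction cost is the cost of fixing them, installation cost is the cost of distributing the fix, and consequential cost is the cost of the damage they can do.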
6. Briefly explain the consequences of bugs.
1. System Failure or Crash
Bugs can cause the application to stop working, freeze, or crash entirely,
affecting availability and reliability.
2. Poor User Experience
Functional bugs or UI issues frustrate users, reduce trust, and may drive
them to switch to competitors.
3. Financial Loss
Critical bugs in financial or e-commerce systems can lead to incorrect
transactions, revenue loss, or costly downtime.
4. Security Vulnerabilities
Bugs may open loopholes for hackers, risking data breaches, privacy
violations, or compliance failures.
5. Increased Maintenance Cost
Bugs discovered late require more effort and cost to fix, especially if
they’re found in production.
6. Delays in Delivery
If bugs are frequent or severe, release cycles can be delayed due to
repeated testing and rework.
📉 7. Damage to Reputation
Frequent or public bugs reduce customer confidence and harm the
company’s image and credibility.
⚖️ 8. Legal and Regulatory Issues
Bugs causing non-compliance with legal standards (e.g., GDPR, HIPAA)
can result in fines or legal action.
7.What are the remedies of test bugs?
Remedies: The remedies of test bugs are:
1. Test Debugging: The first remedy for test bugs is testing and debugging
the tests. Test debugging, compared to program debugging, is easier
because tests, when properly designed, are simpler than programs and do
not have to make concessions to efficiency.
2. Test Quality Assurance: Programmers have the right to ask how
quality in independent testing is monitored.
3. Test Execution Automation: The history of software bug removal and
prevention is indistinguishable from the history of programming
automation aids. Assemblers, loaders, and compilers were developed to
reduce the incidence of programming and operation errors. Test execution
bugs are virtually eliminated by various test execution automation tools.
4. Test Design Automation: Just as much of software development has
been automated, much test design can be and has been automated. For
a given productivity rate, automation reduces the bug count - be it for
software or be it for tests.
8. Classify the different kinds of bugs and explain.
The major categories are: (1) Requirements, Features and Functionality
Bugs (2) Structural Bugs (3) Data Bugs (4) Coding Bugs (5) Interface,
Integration and System Bugs (6) Test and Test Design Bugs
o REQUIREMENTS, FEATURES AND FUNCTIONALITY BUGS: Various
categories in Requirements, Features and Functionality bugs include:
1. Requirements and Specifications Bugs:
Requirements, and the specifications developed from them, can be
incomplete, ambiguous, or self-contradictory. They can be misunderstood
or impossible to understand.
Even specifications that have no flaws in them may change while the
design is in progress. Features are added, modified, and deleted.
Requirements, especially, as expressed in specifications are a major
source of expensive bugs.
The range is from a few percent to more than 50%, depending on the
application and environment.
What hurts most about these bugs is that they are the earliest to invade
the system and the last to leave.
2. Feature Bugs:
Specification problems usually create corresponding feature problems.
A feature can be wrong, missing, or superfluous (serving no useful
purpose). A missing feature or case is easier to detect and correct. A
wrong feature could have deep design implications.
Superfluous (gratuitous) features can complicate the software, consume
more resources, and foster more bugs.
3. Feature Interaction Bugs:
Providing correct, clear, implementable and testable feature
specifications is not enough.
Features usually come in groups of related features. The features of
each group and the interactions of features within the group are usually
well tested.
The problem is unpredictable interactions between feature groups or
even between individual features. For example, your telephone is
provided with call holding and call forwarding. The interactions between
these two features may have bugs.
Every application has its own peculiar set of features and a much bigger
set of unspecified feature-interaction potentials, which can therefore
give rise to feature interaction bugs.
o STRUCTURAL BUGS: Various categories in Structural bugs include:
1. Control and Sequence Bugs:
Control and sequence bugs include paths left out, unreachable code,
improper nesting of loops, incorrect loop-back or loop-termination
criteria, missing process steps, duplicated processing, unnecessary
processing, rampaging GOTOs, ill-conceived (not properly planned)
switches, spaghetti code, and worst of all, pachinko code.
Control-flow bugs are comparatively well understood because this area
is amenable to theoretical treatment.
Most control-flow bugs are easily tested for and caught in unit testing.
Old code, however, especially assembly-language (ALP) and COBOL code,
is dominated by control-flow bugs.
Control and sequence bugs at all levels are caught by testing, especially
structural testing, more specifically path testing combined with a bottom
line functional test based on a specification.
2. Logic Bugs:
Bugs in logic arise especially from misunderstanding how case
statements and logic operators behave singly and in combination.
This category also includes improper evaluation of boolean expressions
in deeply nested IF-THEN-ELSE constructs.
If the bugs are part of logical (i.e., boolean) processing not related to
control flow, they are characterized as processing bugs.
If the bugs are part of a logical expression (i.e., a control-flow
predicate) used to direct the control flow, they are categorized as
control-flow bugs.
3. Processing Bugs:
Processing bugs include arithmetic bugs, algebraic bugs, mathematical
function evaluation, algorithm selection, and general processing.
Examples of processing bugs include: incorrect conversion from one
data representation to another, ignoring overflow, improper use of
greater-than-or-equal, etc.
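A minimal sketch of one listed processing bug, improper use of greater-than-or-equal (the bulk-pricing rule is invented):

```python
# Requirement: orders of 100 or more units get the bulk rate.
def unit_price(quantity):
    if quantity > 100:        # Processing bug: should be >= 100
        return 8.00
    return 10.00

# quantity == 100 is wrongly charged the normal rate:
assert unit_price(100) == 10.00   # passes, but violates the requirement
```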
4. Initialization Bugs:
Initialization bugs are common. Initialization can be improper or
superfluous.
Superfluous initialization is generally less harmful but can affect
performance.
Typical initialization bugs include: forgetting to initialize variables
before first use, assuming that they are initialized elsewhere, and
initializing to the wrong format, representation, or type, etc.
Explicit declaration of all variables, as in Pascal, can reduce some
initialization problems.
5. Data-Flow Bugs and Anomalies:
A data flow anomaly occurs where there is a path along which we expect
to do something unreasonable with data, such as using an uninitialized
variable, attempting to use a variable before it exists, modifying and then
not storing or using the result, or initializing twice without an
intermediate use.
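A short illustrative sketch of two such anomalies, using a variable before it is initialized and initializing twice without an intermediate use (the function is hypothetical):

```python
def total_with_tax(prices):
    # Anomaly 1: 'total' is used before it is ever assigned
    # (raises UnboundLocalError at run time).
    for p in prices:
        total = total + p     # use of an uninitialized variable

    rate = 0.05               # Anomaly 2: defined ...
    rate = 0.08               # ... and redefined with no use in between
    return total * (1 + rate)
```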
o DATA BUGS:
Data bugs include all bugs that arise from the specification of data
objects, their formats, the number of such objects, and their initial
values.
Data bugs are at least as common as bugs in code, but they are often
treated as if they did not exist at all.
Code migrates data: Software is evolving towards programs in which
more and more of the control and processing functions are stored in
tables.
Because of this, there is an increasing awareness that bugs in code are
only half the battle and the data problems should be given equal
attention.
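A small sketch of "code migrates to data": the processing decision lives in a table, so a wrong table entry is a data bug even though every line of code is correct (the rates are made up):

```python
# Control and processing decisions stored in a table, not in code.
SHIPPING_RATES = {
    "standard": 4.99,
    "express": 12.99,
    "overnight": 12.99,   # Data bug: should be 24.99; the code is fine
}

def shipping_cost(method):
    return SHIPPING_RATES[method]   # correct code, wrong data
```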
o CODING BUGS:
Coding errors of all kinds can create any of the other kinds of bugs.
Syntax errors are generally not important in the scheme of things if the
source language translator has adequate syntax checking.
If a program has many syntax errors, then we should expect many logic
and coding bugs.
Documentation bugs are also considered coding bugs, because they may
mislead maintenance programmers.
o INTERFACE, INTEGRATION, AND SYSTEM BUGS:
Various categories of bugs in Interface, Integration, and System Bugs
are:
1. External Interfaces:
The external interfaces are the means used to communicate with the
world.
These include devices, actuators, sensors, input terminals, printers, and
communication lines.
The primary design criterion for an interface with the outside world
should be robustness.
All external interfaces, human or machine should employ a protocol.
The protocol may be wrong or incorrectly implemented.
Other external interface bugs include: invalid timing or sequence
assumptions related to external signals; misunderstanding external input
or output formats; and insufficient tolerance to bad input data.
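A minimal sketch of the robustness criterion (the sensor message format is hypothetical): the parser tolerates bad input from the external interface instead of crashing.

```python
def parse_sensor_reading(line):
    """Parse 'sensor_id,value' from an external device, robustly."""
    try:
        sensor_id, value = line.strip().split(",")
        return sensor_id, float(value)
    except (ValueError, AttributeError):
        # Insufficient tolerance to bad input data is a classic
        # external-interface bug; reject bad frames instead of crashing.
        return None

assert parse_sensor_reading("t1,23.5") == ("t1", 23.5)
assert parse_sensor_reading("garbage") is None
```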
2. Internal Interfaces:
Internal interfaces are in principle not different from external interfaces
but they are more controlled.
A good example of internal interfaces is communicating routines.
The external environment is fixed and the system must adapt to it but
the internal environment, which consists of interfaces with other
components, can be negotiated.
Internal interfaces have the same problems as external interfaces.
3. Hardware Architecture:
Bugs related to hardware architecture originate mostly from
misunderstanding how the hardware works.
Examples of hardware architecture bugs: address generation errors, I/O
device operation/instruction errors, waiting too long for a response,
incorrect interrupt handling, etc.
The remedy for hardware architecture and interface problems is twofold:
(1) good programming and testing, and (2) centralization of hardware
interface software in programs written by hardware interface specialists.
4. Operating System Bugs:
Program bugs related to the operating system are a combination of
hardware architecture and interface bugs, mostly caused by a
misunderstanding of what the operating system does.
The remedy is to use operating-system interface specialists and to use
explicit interface modules or macros for all operating system calls.
This approach may not eliminate the bugs but at least will localize them
and make testing easier.
5. Software Architecture:
Software architecture bugs are of the kind often called "interactive".
Routines can pass unit and integration testing without revealing such
bugs.
Many of them depend on load, and their symptoms emerge only when
the system is stressed.
Examples of such bugs: assuming that there will be no interrupts, failure
to block or unblock interrupts, assuming that memory and registers were
initialized (or not initialized), etc.
Careful integration of modules and subjecting the final system to a
stress test are effective methods for these bugs.
6. Control and Sequence Bugs (Systems Level):
These bugs include: Ignored timing, Assuming that events occur in a
specified sequence, Working on data before all the data have arrived
from disc, Waiting for an impossible combination of prerequisites,
Missing, wrong, redundant or superfluous process steps.
The remedy for these bugs is highly structured sequence control.
Specialized internal sequence-control mechanisms are helpful.
7. Resource Management Problems:
Memory is subdivided into dynamically allocated resources such as
buffer blocks, queue blocks, task control blocks, and overlay buffers.
External mass storage units, such as discs, are subdivided into memory
resource pools.
Some resource management and usage bugs: required resource not
obtained, wrong resource used, resource already in use, resource
deadlock, etc.
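A minimal sketch of "required resource not obtained" using a fixed pool of buffer blocks (the pool class is invented for illustration):

```python
class BufferPool:
    """A fixed pool of buffer blocks, as described above."""
    def __init__(self, count):
        self.free = [bytearray(1024) for _ in range(count)]

    def acquire(self):
        if not self.free:
            # Resource-management bug if the caller ignores this case:
            # "required resource not obtained".
            raise RuntimeError("no free buffer blocks")
        return self.free.pop()

    def release(self, buf):
        self.free.append(buf)   # forgetting to call this leaks the resource

pool = BufferPool(count=2)
a = pool.acquire()
b = pool.acquire()
# A third acquire without a release exhausts the pool:
# pool.acquire()  # would raise RuntimeError
pool.release(a)
```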
8. Integration Bugs:
Integration bugs are bugs having to do with the integration of, and with
the interfaces between, working and tested components.
These bugs result from inconsistencies or incompatibilities between
components.
The communication methods involved (data structures, call sequences,
registers, semaphores, communication links, and protocols) are all
potential sources of integration bugs.
Integration bugs do not constitute a big bug category (about 9%), but
they are an expensive category because they are usually caught late in
the game and because they force changes in several components and/or
data structures.
9. System Bugs:
System bugs cover all kinds of bugs that cannot be ascribed to a
component or to the simple interactions among components, but result
from the totality of interactions between many components such as
programs, data, hardware, and the operating system.
There can be no meaningful system testing until there has been
thorough component and integration testing.
System bugs are infrequent (about 1.7%) but very important because they
are often found only after the system has been fielded.
o TEST AND TEST DESIGN BUGS:
Testers have no immunity to bugs. Tests require complicated scenarios
and databases, and they require code (or its equivalent) to execute;
consequently, tests themselves can have bugs.
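A short sketch of a test design bug (the area function is invented): the expected value in the first test was derived from the buggy code rather than from the specification, so the buggy test hides the product defect.

```python
def area(width, height):
    return width + height          # product bug: should be width * height

def test_area_buggy():
    # Test design bug: the expected value was computed with the
    # same wrong formula, so the test passes and hides the defect.
    assert area(3, 4) == 3 + 4

def test_area_correct():
    # Debugged test: expected value derived from the specification.
    assert area(3, 4) == 12        # fails, correctly exposing the bug
```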
9.Why is it impossible for a tester to find all the bugs in a system? Why
might it not be necessary for a program to be completely free of defects
before it is delivered to its customers?
Why It’s Impossible for a Tester to Find All Bugs in a System:
1. Complexity of Software: As software grows, it becomes more
complex with numerous interactions between different
components. Testing every possible combination is virtually
impossible.
2. Unpredictable Edge Cases: Many bugs arise only in edge cases or
unusual scenarios that developers or testers may not anticipate,
making it impossible to cover all potential situations.
3. Time and Resource Constraints: Testing takes time, and with
limited resources (human or computational), not every path
through the software can be explored before release.
4. Changing Requirements: Software requirements evolve, and bugs
may only be introduced or discovered when new features are
added or existing functionality is modified.
Why a Program Doesn't Need to Be Completely Free of Defects Before
Delivery:
1. Cost of Perfection: Striving for a completely defect-free product
can be prohibitively expensive and time-consuming, making it
unfeasible for most projects.
2. Prioritization of Critical Bugs: Not all defects are equal. It’s more
important to fix high-severity bugs that impact functionality, while
minor issues can be deferred to later updates.
3. Customer Feedback and Iterative Improvement: Software can be
released with known issues, with the understanding that it will be
improved based on customer feedback and real-world use.
4. Market Pressure and Competitive Advantage: To remain
competitive, companies often release products quickly, accepting
some bugs in exchange for early market entry or to meet
deadlines.
10.To what extent can testing be used to validate that the program is fit
for its purpose? Discuss.
Testing can help ensure a program meets its intended purpose, but it has
limitations:
1. Functionality: Validates that the program works as expected for
core tasks, but can't test every possible scenario.
2. Performance: Ensures the software performs well under expected
conditions, but may not account for extreme or real-world stress.
3. Usability: Identifies how user-friendly the software is, but may
miss some user behaviors or diverse needs.
4. Security: Finds vulnerabilities and ensures compliance, but can’t
predict unknown or future threats.
5. Requirements: Ensures the software meets specified
requirements, but doesn't guarantee it will meet future needs or
evolve with changes.
6. Defects: Finds and fixes bugs, but can’t guarantee all bugs are
caught, especially rare ones.
7. External Factors: May not account for third-party integrations or
changing environments.
8. Human Factors: Can't fully predict how real users will interact with
the software.
11.What is meant by integration testing? Goals of Integration Testing?
Integration Testing is a type of software testing where individual
components or modules of a system are combined and tested as a group.
The purpose is to ensure that the different parts of the system interact
correctly with each other. Unlike unit testing, which focuses on individual
components, integration testing checks the data flow, interactions, and
overall functioning between integrated modules or systems.
Goals of Integration Testing:
1. Verify Interface Compatibility:
o Goal: Ensure that the interfaces between different modules
or systems work correctly. This includes checking whether
data is passed properly between components and whether
communication protocols are followed.
2. Identify Issues in Data Flow:
o Goal: Ensure that data flows smoothly across different parts
of the system. Integration testing identifies problems in how
data is processed, formatted, or transferred between
modules.
3. Check for Functional Interactions:
o Goal: Verify that the combined modules perform as expected
when working together. This ensures that the modules
achieve the desired functionality as a group, not just in
isolation.
4. Detect Problems Early:
o Goal: Identify and resolve problems in the interactions
between integrated components early in the development
process. Catching these issues during integration testing can
prevent more significant problems later in the lifecycle.
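A minimal integration-test sketch (module and function names are hypothetical): two units that each pass in isolation are combined, and the test checks the interface and data flow between them.

```python
# Module A: formats an order record for transmission.
def serialize_order(order_id, amount):
    return f"{order_id}|{amount:.2f}"

# Module B: consumes what module A produces.
def parse_order(record):
    order_id, amount = record.split("|")
    return order_id, float(amount)

def test_order_integration():
    # Integration test: data produced by one module must be
    # consumed correctly by the other (interface + data flow).
    record = serialize_order("A42", 19.5)
    assert parse_order(record) == ("A42", 19.50)
```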
12.Explain white-box testing and behavioral testing?
White-Box Testing
White-box testing, also known as clear-box testing or structural testing,
is a testing methodology where the tester has knowledge of the internal
workings of the application. This type of testing focuses on verifying the
internal structure, logic, and flow of the system.
Key Features of White-box Testing:
1. Code Visibility: The tester has access to the source code and designs
of the application, allowing them to check the internal structure,
code paths, and algorithms.
2. Focus on Internal Logic: The focus is on testing individual functions,
loops, branches, conditions, and code paths.
3. Types of Tests:
o Unit Testing: Testing individual functions or methods.
o Path Testing: Ensures every possible path of the program is
tested.
o Loop Testing: Verifies loop execution in code.
o Condition Testing: Focuses on conditions within the code
(e.g., if-else statements).
4. Advantages: It helps catch logical errors and vulnerabilities early
and ensures complete code coverage.
Limitations:
• Requires knowledge of the code.
• Can be time-consuming, especially for large applications.
• Not suitable for testing user interfaces or external components.
Behavioral Testing
Behavioral testing (also known as black-box testing) focuses on testing the
behavior of the application from the user’s perspective, without
knowledge of the internal code or structure. The goal is to ensure the
system behaves as expected based on its requirements.
Key Features of Behavioral Testing:
1. Focus on Output: Testers focus on inputs and outputs, validating
that the system produces the correct outputs for given inputs.
2. No Knowledge of Code: The internal workings of the system are
unknown to the tester. Only the inputs, outputs, and the system’s
behavior are tested.
3. Types of Tests:
o Functional Testing: Ensures that the system functions
according to the requirements.
o System Testing: Verifies the system as a whole, including all
integrated components.
o Acceptance Testing: Ensures the system meets user
requirements and expectations.
4. Advantages: It’s great for testing the user experience, functional
requirements, and system behavior, independent of the underlying
code.
Limitations:
• Doesn’t detect code-specific issues like logical errors or
performance bottlenecks.
• May not cover all internal edge cases of the application.
13.State and explain various dichotomies in software testing?
Testing Versus Debugging: Testing is aimed at finding bugs; debugging is
aimed at locating and fixing their causes (see Question 2).
Function Versus Structure: Tests can be designed from a functional or a
structural point of view. In functional testing, the program or system is
treated as a black box: it is subjected to inputs, and its outputs are
verified for conformance to specified behaviour. Functional testing takes
the user's point of view, concerned with functionality and features rather
than the program's implementation. Structural testing does look at
implementation details: things such as programming style, control
method, source language, database design, and coding details dominate
structural testing.
Both structural and functional tests are useful; both have limitations, and
both target different kinds of bugs. Functional tests can detect all bugs
but would take infinite time to do so. Structural tests are inherently finite
but cannot detect all errors even if completely executed.
Designer Versus Tester: The test designer is the person who designs the
tests, whereas the tester is the one who actually executes them. During
functional testing, the designer and tester are probably different persons.
During unit testing, the tester and the programmer merge into one person.
Tests designed and executed by the software designers are by nature
biased towards structural considerations and therefore suffer the
limitations of structural testing.
Modularity Versus Efficiency: A module is a discrete, well-defined, small
component of a system. The smaller the modules, the more difficult they
are to integrate; the larger the modules, the more difficult they are to
understand. Both tests and systems can be modular. Testing can and
should likewise be organised into modular components: small,
independent test cases can be designed to test independent modules.
Small Versus Large: Programming in the large means constructing
programs that consist of many components written by many different
programmers. Programming in the small is what we do for ourselves in the
privacy of our own offices. Qualitative and quantitative changes occur
with size, and so must testing methods and quality criteria.
Builder Versus Buyer: Most software is written and used by the same
organization. Unfortunately, this situation is dishonest because it clouds
accountability. If there is no separation between builder and buyer, there
can be no accountability.
The different roles / users in a system include:
1. Builder: Who designs the system and is accountable to the buyer.
2. Buyer: Who pays for the system in the hope of profits from providing
services.
3. User: Ultimate beneficiary or victim of the system. The user's interests
are also guarded by the tester and the operator.
4. Tester: Who is dedicated to the builder's destruction.
5. Operator: Who has to live with the builders' mistakes, the buyers' murky
specifications, the testers' oversights, and the users' complaints.
Questions 14 & 15: covered in the answer to Question 8.
16.Explain different types of loops with an example of each.
Kinds of Loops: There are only three kinds of loops with respect to path
testing: nested loops, concatenated loops, and horrible loops.
Nested Loops:
If every combination of iteration values were tested, the number of tests
for nested loops would grow as the product of the tests performed on the
single loops, i.e., exponentially with nesting depth.
Since we cannot always afford to test all combinations of nested loops'
iteration values, here is a tactic used to discard some of those values:
1. Start at the inner most loop. Set all the outer loops to their minimum
values.
2. Test the minimum, minimum+1, typical, maximum-1, and maximum values
for the innermost loop, while holding the outer loops at their minimum
iteration parameter values. Expand the tests as required for out-of-range
and excluded values.
3. If you've done the outermost loop, GOTO step 5; otherwise move out one
loop and set it up as in step 2, with all other loops set to typical values.
4. Continue outward in this manner until all loops have been covered.
5. Do all the cases for all loops in the nest simultaneously.
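A small sketch that mechanizes step 2 of this tactic, generating the candidate iteration counts for the loop under test (the ranges are arbitrary examples):

```python
def loop_test_values(minimum, maximum, typical):
    """Candidate iteration counts for one loop under test:
    min, min+1, typical, max-1, max (step 2 of the tactic)."""
    values = [minimum, minimum + 1, typical, maximum - 1, maximum]
    # Remove duplicates (e.g. when the range is tiny), preserving order.
    return list(dict.fromkeys(values))

# Innermost loop tested across its range while outer loops stay at
# their minimums, e.g. a loop that may run 0..100 times:
print(loop_test_values(0, 100, 10))   # [0, 1, 10, 99, 100]
```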
Concatenated Loops:
Concatenated loops fall between single and nested loops with respect to
test cases. Two loops are concatenated if it's possible to reach one after
exiting the other while still on a path from entrance to exit.
If the loops cannot be on the same path, then they are not concatenated
and can be treated as individual loops.
Horrible Loops:
A horrible loop is a combination of nested loops, code that jumps into and
out of loops, intersecting loops, hidden loops, and cross-connected loops.
Horrible loops make iteration-value selection for test cases an awesome
and ugly task, which is another reason such loop structures should be
avoided.