1 DEFINE SOFTWARE TESTING
Software testing is an important process in the Software Development Life Cycle (SDLC). It
involves verifying and validating that a software application is free of bugs, meets the
technical requirements set by its design and development, and satisfies user requirements
efficiently and effectively.
Software testing is mainly divided into two parts, which are used throughout the software
development process:
Verification: This step checks whether the software is doing what it is supposed to do. It's
like asking, "Are we building the product the right way?"
Validation: This step verifies that the software actually meets the customer's needs and
requirements. It's like asking, "Are we building the right product?"
2 STLC LIFE CYCLE AND PRINCIPLES
STLC stands for Software Testing Life Cycle. STLC is a process where all the
components and features of an application are tested completely such that they
satisfy the requirements and quality as per the standard. STLC involves strategic
testing that needs to be carried out in each phase of development.
Phases of STLC:
1. Requirement Analysis – Understand what needs to be tested from SRS (Software
Requirement Specification).
2. Test Planning – Define scope, objectives, resources, cost, schedule, and
responsibilities.
3. Test Case Design – Create detailed test cases, test data, and expected results.
4. Test Environment Setup – Prepare hardware, software, and network environment for
execution.
5. Test Execution – Execute test cases and log defects if actual results differ from
expected.
6. Defect Reporting & Tracking – Report, fix, retest, and track defects until closure.
7. Test Closure – Evaluate exit criteria, prepare test summary reports, and share
learnings.
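The execution and defect-logging steps (5 and 6) above can be sketched in a few lines of Python. The `add` function and the test-case fields here are illustrative assumptions, not part of any standard:

```python
# Minimal sketch of test execution (step 5) and defect logging (step 6).
# The function under test and the test-case format are hypothetical.

def add(a, b):  # function under test (assumed for illustration)
    return a + b

test_cases = [
    {"id": "TC01", "inputs": (2, 3), "expected": 5},
    {"id": "TC02", "inputs": (-1, 1), "expected": 0},
]

defects = []
for tc in test_cases:
    actual = add(*tc["inputs"])
    if actual != tc["expected"]:
        # Log a defect whenever actual differs from expected
        defects.append({"case": tc["id"],
                        "expected": tc["expected"],
                        "actual": actual})

print(f"Executed {len(test_cases)} cases, {len(defects)} defect(s) found")
```

Defects logged here would then enter the defect life cycle described in the next section.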
Seven Principles of Software Testing (One Point Each)
1. Presence of Defects – Testing proves defects exist, but cannot prove software is
defect-free.
2. Exhaustive Testing Impossible – Testing all inputs/cases is impractical; only selected
cases are tested.
3. Early Testing – Start testing early in SDLC to reduce cost and effort.
4. Defect Clustering – Most defects are found in a few critical modules (Pareto
principle).
5. Pesticide Paradox – Repeated tests lose effectiveness; test cases must be updated.
6. Context Dependent – Testing method depends on application type and purpose.
7. Absence of Errors Fallacy – A bug-free system is useless if it doesn’t meet user
requirements.
3 Bugs and the life cycle of bugs/failures.
Defect States
#1) New: This is the first state of a defect in the Defect Life Cycle. When any new defect is
found, it falls in a ‘New’ state, and validations & testing are performed on this defect in the
later stages of the Defect Life Cycle.
#2) Assigned: In this stage, a newly created defect is assigned to the development team to
work on the defect. This is assigned by the project lead or the manager of the testing team to
a developer.
#3) Open: Here, the developer starts the process of analyzing the defect and works on fixing
it, if required.
If the developer feels that the defect is not appropriate, it may be moved to one of the
below four states, namely Duplicate, Deferred, Rejected, or Not a Bug, based upon a
specific reason. We will discuss these four states in a while.
#4) Fixed: When the developer finishes the task of fixing a defect by making the required
changes then he can mark the status of the defect as “Fixed”.
#5) Pending Retest: After fixing the defect, the developer assigns the defect to the tester to
retest the defect at their end, and until the tester works on retesting the defect, the state of the
defect remains in “Pending Retest”.
#6) Retest: At this point, the tester starts the task of retesting the defect to verify if the defect
is fixed accurately by the developer as per the requirements or not.
#7) Reopen: If any issue persists in the defect, then it is assigned back to the developer
for fixing, and the status of the defect is changed to ‘Reopen’.
#8) Verified: If the tester finds no remaining issue after the defect has been retested and
confirms that it has been fixed accurately, then the status of the defect is set to
‘Verified’.
#9) Closed: When the defect does not exist any longer, then the tester changes the status of
the defect to “Closed”.
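The defect states above can be modeled as a small state machine. The transition table below is an illustrative subset of the life cycle just described, not any specific tool's workflow:

```python
# Sketch of the defect life cycle as a state machine.
# Allowed transitions follow the states #1-#9 described above.

ALLOWED = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Duplicate", "Deferred", "Rejected", "Not a Bug"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Verified", "Reopen"},
    "Reopen": {"Open"},
    "Verified": {"Closed"},
}

def move(state, new_state):
    # Reject transitions the life cycle does not allow
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"Illegal transition {state} -> {new_state}")
    return new_state

s = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending Retest",
            "Retest", "Verified", "Closed"]:
    s = move(s, nxt)
print(s)  # Closed
```

A failed retest would instead take the path Retest → Reopen → Open, looping until the fix passes.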
1. Functional Bugs – Errors in software functions or features not working as expected.
Example: Login button not redirecting to homepage.
2. Performance Bugs – Issues related to speed, response time, load, or stability.
Example: Website taking 15 sec to load instead of 3 sec.
3. Usability Bugs – Problems affecting user-friendliness or ease of use.
Example: Poor navigation, unclear error messages.
4. Compatibility Bugs – Software works on one environment but fails in another.
Example: Application works in Chrome but not in Firefox.
5. Security Bugs – Weaknesses that allow unauthorized access or data leaks.
Example: User can access admin panel without login.
6. Logical Bugs – Errors in business logic or calculations.
Example: Shopping cart calculates wrong total price.
7. Syntax/Typographical Bugs – Spelling mistakes, UI alignment issues, or coding
syntax errors.
Example: “Regiser” instead of “Register” button.
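A logical bug (type 6) can be illustrated with a hypothetical cart-total function whose operator is wrong, alongside its fix:

```python
# Illustrative logical bug: the cart total uses the wrong operator,
# so the business logic is incorrect even though the code runs.

def cart_total_buggy(prices):
    total = 0
    for p in prices:
        total -= p  # BUG: subtracts instead of adds
    return total

def cart_total_fixed(prices):
    return sum(prices)  # correct business logic

prices = [100, 250, 50]
print(cart_total_buggy(prices), cart_total_fixed(prices))  # -400 400
```

Note that the buggy version raises no error and crashes nothing; only comparing the result against the expected total exposes the defect.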
4 Difference between Error, Defect, Fault (Bug), and Failure
Aspect | Error | Defect | Fault (Bug) | Failure
Definition | Mistake made by developer | Deviation from requirements found during testing | The incorrect part in code/design | System not working as expected
Origin | During coding or design | During testing phase | In software/codebase | During execution by user
Detected By | Developer (self-review) | Tester | Developer/Tester | End user
Impact | May cause a defect | Leads to a bug in software | Causes incorrect behavior in system | Causes system to stop meeting user needs
Example | Wrong formula in code | Function not giving expected output | Null pointer exception in program | App crashes when user logs in
5 Verification and validation.
What is Verification?
Verification is the process of checking that the software achieves its goal without
any bugs. It ensures that the product being developed is built the right way and
fulfills the requirements that we have. Verification is static testing. Verification
asks: "Are we building the product right?"
What is Validation?
Validation is the process of checking whether the software product is up to the
mark, i.e., whether it meets the high-level (user) requirements. It checks that what
we are developing is the right product. Validation is dynamic testing. Validation
asks: "Are we building the right product?"
Verification | Validation
It includes checking documents, designs, code, and programs. | It includes testing and validating the actual product.
Verification is static testing. | Validation is dynamic testing.
It does not include the execution of the code. | It includes the execution of the code.
6 Exhaustive and effective testing (difference).
Exhaustive testing means executing every possible combination of inputs, paths, and
preconditions; as Principle 2 states, this is impractical for any non-trivial system.
Effective testing instead selects a limited set of test cases (e.g., boundary values,
equivalence classes, risk-based cases) that finds the most defects for a reasonable
amount of effort.
7 Black Box Testing (BBT) vs White Box Testing (WBT)
Parameters | Black Box Testing | White Box Testing
Definition | Black box testing is a way of software testing in which the internal structure, code, or program is hidden and nothing is known about it. | White box testing is a way of testing the software in which the tester has knowledge of the internal structure, code, or program of the software.
Testing objectives | Mainly focused on testing the functionality of the software, ensuring that it meets the requirements and specifications. | Mainly focused on ensuring that the internal code of the software is correct and efficient.
Testing methods | Uses methods like Equivalence Partitioning, Boundary Value Analysis, and Error Guessing to create test cases. | Uses methods like Control Flow Testing, Data Flow Testing, and Statement Coverage Testing.
Knowledge level | Does not require any knowledge of the internal workings of the software; can be performed by testers who are not familiar with programming languages. | Requires knowledge of programming languages, software architecture, and design patterns.
Scope | Generally used for testing the software at the functional level. | Used for testing the software at the unit, integration, and system levels.
Implementation | Knowledge of the code implementation is not needed. | Knowledge of the code implementation is necessary.
Done By | Mostly done by software testers. | Mostly done by software developers.
Terminology | Can be referred to as outer or external software testing. | Referred to as inner or internal software testing.
Testing Level | A functional test of the software. | A structural test of the software.
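As a sketch of the black-box methods named above, Boundary Value Analysis for a hypothetical "valid age is 18 to 60" rule picks test data at and around each boundary, with no knowledge of the implementation:

```python
# Boundary Value Analysis (a black-box method) for a hypothetical
# validation rule: valid ages are 18..60 inclusive.

def is_valid_age(age):
    return 18 <= age <= 60

# Test data: just below, on, and just above each boundary,
# paired with the expected (specification-derived) result.
cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

results = [is_valid_age(age) == expected for age, expected in cases.items()]
print(all(results))  # True
```

The same cases would catch the common off-by-one mistake of writing `18 < age < 60`, which is exactly what boundary values are chosen to expose.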
1. Test Structure
It refers to the overall framework or arrangement of testing activities, processes, roles, and
responsibilities that define how testing will be conducted in a project.
2. Test Planning
Test Planning is the process of defining the objectives, scope, approach, schedule,
resources, and activities of testing. It answers what to test, how to test, who will test, and
when testing will be done.
3. Testing Group
A Testing Group is a team of professionals (testers, QA engineers, test managers) who are
responsible for carrying out testing tasks such as writing test cases, executing tests, and
reporting defects.
4. Test Organization
Test Organization refers to the structure and management of the testing function within the
software development process. It defines how testing responsibilities are distributed
among teams (independent testing team, developers, or integrated QA team).
5. Test Specification
A Test Specification is a detailed document that describes the test conditions, input data,
execution steps, and expected results for a test case. It ensures clarity and repeatability in
testing.
6. Test Design
Test Design is the process of creating test cases and test procedures based on requirements,
specifications, and design documents. It involves identifying inputs, outputs, test conditions,
and test data.
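A test specification (term 5 above) is often captured as structured data so it can be executed and reported on consistently; the fields and values below are illustrative, not a standard schema:

```python
# A test specification captured as structured data (illustrative fields):
# test condition, input data, execution steps, and expected result.

spec = {
    "test_id": "TS-001",
    "condition": "Login with valid credentials",
    "input_data": {"user": "alice", "password": "secret"},
    "steps": ["Open login page", "Enter credentials", "Click Login"],
    "expected_result": "User redirected to dashboard",
}

print(spec["test_id"], "->", spec["expected_result"])
```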
8 Explain in detail S/W metrics (Need).
Software metrics in software engineering are the standards for estimating the
quality, progress, and health of software development activity. A software metric
is a quantifiable or countable assessment of the qualities of software.
Need for Software Measurement
Software is measured to:
Assess the quality of the current product or process.
Anticipate future qualities of the product or process.
Enhance the quality of a product or process.
Regulate the state of the project concerning budget and schedule.
Enable data-driven decision-making in project planning and control.
Identify bottlenecks and areas for improvement to drive process improvement
activities.
Ensure that industry standards and regulations are followed.
Give software products and processes a quantitative basis for evaluation.
Enable the ongoing improvement of software development practices.
9 Classification of S/W metrics.
1. Product Metrics
Represent product attributes such as:
o Size
o Complexity
o Design features
o Performance
o Quality level
Measured at any stage of the software development process.
Determine whether the product meets user requirements.
Help identify and correct potential issues early, preventing catastrophic failures.
Ensure the final product aligns with consumer expectations.
2. Process Metrics
Measure defined properties of the software development process.
Help establish meaningful metrics to guide process improvement strategies.
Evaluate software process performance using process metrics.
Process is central to three key aspects influencing quality and performance:
o People (competence, motivation)
o Product (complexity)
o Technology (tools and methods used)
Aim to improve software quality and team performance by analyzing process
efficiency.
3. Project Metrics
Describe characteristics and execution aspects of a project.
Examples include:
o Number of developers
o Staffing pattern during project lifecycle
o Cost
o Schedule
o Productivity
Used by project managers to monitor project progress.
Compare actual effort, time, and cost with initial estimates.
Help reduce:
o Development cost
o Effort
o Risks
o Time
Improve overall project quality and decrease errors, rework, and delays.
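The "compare actual effort, time, and cost with initial estimates" use of project metrics can be sketched as a variance calculation; all figures here are made up for illustration:

```python
# Illustrative project-metric comparison: actual vs estimated values,
# expressed as a percentage variance per metric (figures are invented).

estimated = {"effort_pm": 24, "cost": 120_000, "schedule_months": 8}
actual    = {"effort_pm": 30, "cost": 150_000, "schedule_months": 10}

variance = {k: round((actual[k] - estimated[k]) / estimated[k] * 100, 1)
            for k in estimated}
print(variance)  # every metric overran its estimate by 25%
```

A project manager would track such variances over time to decide where corrective action (re-planning, re-staffing) is needed.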
10 What is function point & Test point analysis? (With Example).
1. Function Point Analysis (FPA)
Definition:
Function Point Analysis is a software measurement technique used to estimate the size and
complexity of a software application based on its functionality from the user’s perspective.
It measures what the system does, rather than how it is implemented.
Key Components:
Function points are calculated based on:
1. External Inputs (EI): User inputs that update internal files (e.g., form submission).
2. External Outputs (EO): Reports or messages sent to the user.
3. External Inquiries (EQ): Interactive inputs requiring immediate response.
4. Internal Logical Files (ILF): Data stored internally for processing.
5. External Interface Files (EIF): Data used from other systems.
Steps to Calculate Function Points:
1. Count the number of each component (EI, EO, EQ, ILF, EIF).
2. Assign a weight to each component (Low, Medium, High).
3. Compute Unadjusted Function Points (UFP): UFP = Σ (count × weight) across all five
component types.
4. Apply a Value Adjustment Factor (VAF) to account for system complexity:
FP = UFP × VAF, where VAF = 0.65 + 0.01 × ΣFi (the Fi are the 14 general system
characteristics, each rated 0–5).
Example:
A simple banking system has:
5 external inputs (medium) = 5 × 4 = 20
3 external outputs (high) = 3 × 7 = 21
2 internal logical files (medium) = 2 × 7 = 14
UFP = 20 + 21 + 14 = 55
If VAF = 1.1, then FP = 55 × 1.1 = 60.5 ≈ 61 Function Points
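The worked FP example can be reproduced in code; the counts and weights are the ones used in the example above:

```python
# Reproduces the FPA example: UFP = sum(count * weight), FP = UFP * VAF.
# Component counts and weights match the worked example above.

components = [
    ("EI", 5, 4),   # 5 external inputs, medium weight 4
    ("EO", 3, 7),   # 3 external outputs, high weight 7
    ("ILF", 2, 7),  # 2 internal logical files, medium weight 7
]

ufp = sum(count * weight for _, count, weight in components)
vaf = 1.1
fp = ufp * vaf

print(ufp, round(fp, 1))  # 55 60.5 -> rounds to 61 function points
```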
2. Test Point Analysis (TPA)
Definition:
Test Point Analysis is a technique to estimate the effort required for software testing
based on the functionality, complexity, and environment. It is derived from Function Point
Analysis, but specifically for testing activities.
Key Components:
TPA considers:
1. Functionality of the software (based on function points).
2. Complexity factors (e.g., user interface, platform, business logic).
3. Environment factors (e.g., test tools availability, experience of testers).
Steps to Calculate Test Points:
1. Determine the function points (FP) of the software.
2. Multiply FP by Test Weight Factor (TWF) based on complexity.
3. Adjust for the testing environment using an Environmental Factor (EF):
Test Points = FP × TWF × EF.
Example:
Software has 61 Function Points (from previous example).
Test Weight Factor (TWF) = 0.8 (medium complexity)
Environment Factor (EF) = 1.1
Test Points = 61 × 0.8 × 1.1 = 53.68 ≈ 54
This means 54 test points will be used to estimate testing effort and resources.
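The TPA estimate above can be checked in code (the factor values are the example's assumptions):

```python
# Sketch of the TPA estimate: Test Points = FP * TWF * EF,
# using the factor values assumed in the example above.

fp = 61    # function points from the FPA example
twf = 0.8  # test weight factor (medium complexity)
ef = 1.1   # environment factor

test_points = fp * twf * ef
print(round(test_points))  # 54
```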
11 Metric size
Software metrics can be categorized based on the size or scope they measure. These metrics
help quantify various aspects of software, from individual components to the whole project.
Here’s a point-wise explanation of different sizes of software metrics:
1. Small-Scale Metrics
Measure individual modules or components of software.
Focus on code-level attributes like:
o Lines of Code (LOC)
o Cyclomatic Complexity
o Function Points (per module)
Useful for:
o Detecting complexity
o Estimating debugging or testing effort
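A crude Lines of Code counter, one of the small-scale metrics listed above, might skip blank and comment-only lines. This is a simplification; real LOC tools also handle strings, docstrings, and continuation lines:

```python
# Naive LOC metric: count non-blank, non-comment lines of Python source.
# (Illustrative only; real tools are more careful.)

source = """\
# compute square
def square(x):
    return x * x

print(square(4))
"""

loc = sum(1 for line in source.splitlines()
          if line.strip() and not line.strip().startswith("#"))
print(loc)  # 3
```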
2. Medium-Scale Metrics
Measure subsystems or software units that consist of multiple modules.
Focus on integration, interactions, and quality across modules.
Metrics include:
o Cohesion and Coupling
o Module Interaction Complexity
o Defect Density (per subsystem)
3. Large-Scale Metrics
Measure entire software systems or projects.
Focus on overall quality, productivity, and project-level characteristics.
Metrics include:
o Project Schedule & Effort
o Total Defects
o Reliability
o Maintainability
o Function Point totals for the entire system
4. Extra-Large / Organizational Metrics
Measure organization-wide software processes and performance.
Focus on multiple projects and processes to improve organizational efficiency.
Metrics include:
o Process Maturity (CMMI level)
o Productivity per team or department
o Customer Satisfaction
o Overall defect trends across projects
12 Static vs Dynamic
Parameters | Static Testing | Dynamic Testing
Definition | Performed to check for defects in the software without actually executing the code. | Performed to analyze the dynamic behavior of the code.
Objective | The objective is to prevent defects. | The objective is to find and fix defects.
Stage of Execution | Performed at an early stage of software development. | Performed at a later stage of software development.
Code Execution | The code is not executed. | The code is executed.
Before/After Deployment | Performed before code deployment. | Performed after code deployment.
Cost | Less costly. | More costly.
Documents Required | Involves a checklist for the testing process. | Involves test cases for the testing process.
Time Required | Generally takes a shorter time. | Usually takes a longer time, as it involves running several test cases.
Bugs | Can discover a variety of bugs. | Exposes only the bugs that are explorable through execution, hence discovering a limited type of bugs.
Statement Coverage | Can achieve 100% statement coverage in comparably less time. | Typically achieves less than 50% statement coverage.
Techniques | Includes informal reviews, walkthroughs, technical reviews, code reviews, and inspections. | Involves functional and non-functional testing.
Process Type | It is a verification process. | It is a validation process.
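The static/dynamic contrast can be demonstrated in Python itself: a static check inspects the source without running it, while a dynamic check executes the code and observes its behavior. The `divide` function is a hypothetical example:

```python
# Static vs dynamic checking of the same source, side by side.
import ast

source = "def divide(a, b):\n    return a / b\n"

# Static: parse the source and inspect its structure -- no execution.
tree = ast.parse(source)
funcs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print("static:", funcs)

# Dynamic: execute the code and exercise it with inputs.
ns = {}
exec(source, ns)
print("dynamic:", ns["divide"](10, 4))
```

Note what each side can and cannot see: the static pass finds the function without risking a crash, while only the dynamic pass could reveal a runtime failure such as division by zero.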