
📘 LEARNING TO TEST, TESTING TO LEARN

A Beginner’s Daily QA Series


Introduction to Software and Software Testing
Software Overview:
Software is a collection of computer programs designed to perform specific tasks or functions.
Types of Software:

| Type | Description | Examples |
|---|---|---|
| 1. System Software | Controls and manages the hardware and basic system operations | Windows, Linux, device drivers |
| 2. Programming Software | Helps developers write, test, and debug code | Compilers, interpreters, debuggers |
| 3. Application Software | Used by end users for daily tasks | MS Word, Chrome, WhatsApp, YouTube |

What is Software Testing?

Software Testing is a process that checks whether the software works correctly, is bug-free, and meets the customer's requirements.

💡 Simple Definitions:
✅ 1. Software Testing is a part of the Software Development Life Cycle (SDLC).
✅ 2. It is an activity to detect defects (bugs) in the software before it reaches the customer.
✅ 3. The main goal of testing is to deliver a quality product that works as expected.
🤔 Why Do We Need Software Testing?

| Reason | Explanation |
|---|---|
| To ensure bug-free software | Defects affect user experience and trust |
| To meet customer requirements | Software should behave as expected |
| To meet user expectations | Features should work smoothly |
| To avoid costly post-release fixes | Bugs found later are expensive to fix |

📌 Real-Life Example:
Suppose you buy a mobile phone with all features, but the camera doesn’t
open. That’s a defect. Though the camera exists, it’s not working – this is why
testing is crucial before launch.

📐 Core Characteristics of Software Quality:

To call software "high quality," it must:

✅ Be Bug-free​
✅ Be Delivered on time​
✅ Stay Within budget​
✅ Meet customer requirements or expectations​
✅ Be Maintainable (easy to update/fix)
The Three P's of Software Companies

🧱 What Are the 3 P's?


These are the three foundational pillars of any successful company — not just in
IT, but in pharma, manufacturing, etc.
| P | Stands For | Meaning | Why It Matters in Testing |
|---|---|---|---|
| 👨‍💻 | People | The human resources who do the work: developers, testers, designers, business analysts, managers, and clients. Without skilled and motivated people, no project can succeed. | Skilled testers ensure bugs are found early |
| ⚙️ | Process | Step-by-step methods to develop and deliver software: a series of well-defined steps that guide how the software should be planned, developed, tested, and delivered. A good process avoids chaos and ensures quality and consistency. | A well-defined testing process = fewer missed defects |
| 🧩 | Product | The final software or application delivered to the client. It can be a web application (e.g., Flipkart), a mobile app (e.g., WhatsApp), or a desktop tool (e.g., MS Word). | The end result should be high quality and user-friendly |
Relation Between the 3 P's

People → follow a Process → to build a Product.


Project vs Product
🔹 What is a Project?
A Project is:

● Software developed for a specific customer.
● Based on the specific requirements of that individual client.
● Customized and not reusable for others without changes.

📌 Example:
Banking software created specifically for ICICI Bank cannot be used by SBI — because it's tailored to ICICI's needs.

🔹 What is a Product?

A Product is:

● Software developed for many users or customers.
● Based on general market needs, not for one client.
● It is ready to use and can be used by anyone.

📌 Example:
WhatsApp, Microsoft Word, Google Chrome – all are products made for the
general public, not a specific person or company.

● Project = Tailor-made dress — stitched just for you, as per your measurements.
● Product = Ready-made dress — made for everyone based on common sizes.

🏢 Company Types Based on This:

1. Product-Based Companies:
○ Build their own products for mass usage.
○ Ex: Google, Microsoft, Adobe, Oracle
2. Service-Based Companies:
○ Work on client projects as per their needs.
○ Ex: TCS, Infosys, Accenture, Wipro

⭐Error, Bug / Defect, and Failure


🔹 1. Error
●​ Definition: A human mistake made by a developer or tester during coding,
requirement analysis, or design.
●​ Stage: Occurs during development.
●​ Cause: Misunderstanding, lack of knowledge, or carelessness.

Example: A developer mistakenly writes a = b - c instead of a = b + c.

🔹 2. Bug / Defect

● Definition: A deviation between the actual output and the expected output of the software.
● Stage: Found during the testing phase.
● Cause: Usually caused by an earlier error in the code.

📌 Example: Expected result is "Total = 100", but software shows "Total = 50".
🧠 Bug and Defect are interchangeable terms – both mean the same in real-world
usage.

🔹 3. Failure
●​ Definition: When the defect is not caught during testing, and it causes the
application to fail in the real environment (customer's hands).
●​ Stage: Happens in the production / user environment.
●​ Cause: An uncaught bug that reaches the user.

📌 Example: A customer clicks "Pay Now" on an e-commerce site, and the app
crashes — this is a failure.

Relation Between the Three:

ERROR → leads to → DEFECT / BUG → if undetected → leads to → FAILURE
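
To make this chain concrete, here is a minimal, hypothetical Java sketch (the billing numbers and names are invented for illustration): the developer's error (a wrong operator) produces a defect (wrong output), and if testing misses it, the user experiences a failure.

```java
public class Billing {
    // ERROR: a human mistake; the developer typed '-' instead of '+'.
    static int total(int itemPrice, int shippingFee) {
        return itemPrice - shippingFee; // intended: itemPrice + shippingFee
    }

    public static void main(String[] args) {
        // DEFECT: expected "Total = 100", but the actual output is "Total = 50".
        System.out.println("Total = " + total(75, 25));
        // FAILURE: if this defect is not caught in testing, a customer
        // pressing "Pay Now" in production is charged the wrong amount.
    }
}
```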


Why Does Software Have Bugs?

🔹 1. Miscommunication or No Communication

● Between: Developers, testers, business analysts, or clients.
● Result: Different people understand the same requirement in different ways.

📌 Example: The client wanted a "Download PDF" button, but the developer created a "Save as PDF" feature.

🔹 2. Software Complexity
●​ Large applications = More modules + More interconnections.
●​ As complexity increases, chances of making mistakes also increase.

📌 Example: Banking software involves thousands of calculations, reports, and security rules — very hard to keep bug-free without strict processes.

🔹 3. Programming Errors
●​ Developers are humans too — they make mistakes in logic, syntax, or data
handling.
●​ Even a small wrong condition (if vs else) can break critical features.

Example: A misplaced semicolon or wrong formula can calculate incorrect taxes.

🔹 4. Changing Requirements (Scope Creep)

● Customers often change their minds mid-project.
● Changes in code affect other modules and create new bugs.

Example: Client adds a new payment method in the final stage → It breaks the
cart flow that was already tested.

🔹 5. Lack of Skilled Testers

● Inexperienced testers may miss critical bugs during testing.
● Test cases may not cover edge cases or real-world scenarios.

📌 Example: The tester checks only valid inputs but forgets to test what happens when you enter special characters.

There are also some additional common reasons that contribute to bugs:

1. Time Pressure ("Release Fast!")

Teams are pushed to meet tight deadlines and deliver features quickly. This can
lead to:

● Skipping test cases
● Ignoring proper QA processes
● Rushed or unoptimized code

Result: Increased chance of bugs due to compromised quality.

2. Incomplete Unit Testing

When developers skip or perform minimal unit testing:

● Core logic may go untested
● Edge cases are missed

Result: Bugs go undetected until later stages.

🔁 3. Lack of Code Reviews


If code is not reviewed properly:

● Mistakes made by developers may remain unchecked
● No second opinion to validate logic or standards

Result: Bugs slip through to testing or even production.

📄 4. Poor or Missing Documentation


Inadequate documentation leads to:

●​ Misunderstanding of functionality
●​ Wrong implementation or testing
Result: Bugs due to incorrect assumptions or confusion.

🔗 5. Third-Party Integration Issues (APIs, Plugins)


When software depends on external tools/services:

● APIs behave unexpectedly
● A plugin update breaks compatibility

Result: Bugs appear even if your code is fine.


📘 LEARNING TO TEST, TESTING TO LEARN
A Beginner’s Daily QA Series

📅 Day 2 — Part 1: Introduction to SDLC, Waterfall Model & Spiral Model


Software Development Life Cycle (SDLC)
SDLC is the process used by the software industry to design, develop, and test
high-quality software.
📌 Software Development Life Cycle (SDLC) has 6 key phases:

1️⃣ Requirement Gathering & Analysis

● This is the first and most important phase of the SDLC.
● The goal is to clearly understand what the customer needs from the software.
● The customer explains their current way of working (manual or existing system) and what improvements they expect.

A feasibility study is also done to analyze:

● Is the project technically possible?
● Is it financially and time-wise realistic?
● Can it be delivered within constraints?
📄 All requirements are documented in:
●​ SRS (Software Requirements Specification)
●​ FRS (Functional Requirements Specification)
●​ BRS (Business Requirements Specification)

These documents become the foundation for all the upcoming phases like design,
development, and testing.

Example: A bank client wants an app that allows login, balance view, and fund transfer.

2️⃣ Design

● Based on the requirement documents, the design of the system is created.
● Think of this as the blueprint of the application — just like how you'd plan a house before building it.
●​ Two types of design documents:
○​ High-Level Design (HLD): Overall architecture and modules
○​ Low-Level Design (LLD): Detailed design of each module
●​ Use case diagrams, flowcharts, and architecture diagrams are often used here.

Example: Planning the user flow — login → dashboard → transfer funds.

3️⃣ Development (Coding)

●​ Developers start writing code based on the design and requirement docs.
●​ All components of the software are built in this phase.
●​ Developers may also create:
○​ Unit tests for their code
○​ Integration points between different modules

Example: Code is written to perform login validation, balance fetching from DB, fund
transfer APIs, etc.
4️⃣ Testing

●​ Once development is complete, the testing team begins validating the software.
●​ The goal is to find bugs and ensure the software works as per the requirement.
●​ Types of testing involved:
○​ Functional Testing
○​ Non-Functional Testing (Performance, Security, etc.)
●​ Any mismatches between expected vs. actual behavior are reported as defects.
Example: Testing if invalid login shows error, if transfer works only when balance is
available, etc.

5️⃣ Deployment

● The tested software is now deployed to the customer environment.
● This could be:
○​ Uploading to a production server
○​ Publishing to App Stores
○​ Installing on client’s internal system
●​ Customer starts using the application.

Example: Bank’s staff begins using the app to manage accounts and transactions.

6️⃣ Maintenance

●​ Even after release, issues may arise or new features may be requested.
●​ The software needs to be updated, bugs fixed, and enhancements made.
●​ A support team is usually responsible for handling:
○​ Bug fixes
○​ Minor updates
○​ Environment compatibility issues

Example: After release, the bank reports that fund transfer fails at midnight →
developers investigate and fix.
🔄 Introduction to SDLC Models
In Software Development Life Cycle (SDLC), there are various models that define how
the development process flows from requirement gathering to delivery and maintenance.
Each model has its own structure, flow, and best use case.

Here are the most popular SDLC models:

1. Waterfall Model
A linear and sequential model — each phase must be completed before the next begins. Best for small, well-defined projects.
2. V-Model (Verification and Validation Model)
An extension of Waterfall where each development phase has a corresponding testing phase. Emphasizes early testing.
3.​ Iterative Model​
The software is developed in small parts (iterations). Each iteration adds more
features and is tested and improved.
4.​ Spiral Model​
Combines the iterative approach with risk analysis. Suitable for complex and
high-risk projects.
5.​ Agile Model​
A highly flexible model where the software is built in frequent, short cycles
(sprints). Involves continuous feedback and improvement. Best for dynamic
requirements.
🌟 Waterfall Model (Sequential or Linear Model) 🌟
The Waterfall Model is one of the oldest and simplest SDLC models. It's called "Waterfall" because the process flows step by step, from top to bottom, like a waterfall: once a phase is completed, we move to the next, and there is no going back.

This model is best used when requirements are clear and fixed.

1️⃣ Requirement Gathering

Understanding what the customer needs.

Who's involved: Business Analysts, Clients, Product Managers

2️⃣ Design

Planning how the software will look and work.

Who's involved: Architects, Designers

3️⃣ Development

Writing the actual code.

Who's involved: Developers

4️⃣ Testing

Checking for bugs and issues.

Who's involved: QA Testers

Example: Testers check if the login works, if fund transfer happens properly, etc.

5️⃣ Deployment

Making the software live for customers.

Who's involved: DevOps, Release Team

Example: The app is uploaded to the Play Store or delivered to the bank for use.

6️⃣ Maintenance

Fixing bugs or adding new features after release.

Who's involved: Developers, Support Team

Example: A customer complains "Transfer fails at midnight" → devs fix the bug in the next update.

Important Characteristics of Waterfall Model:

● Each phase must be completed before moving forward.
● Very documentation-heavy (every phase has supporting documents).
● Very structured and easy to manage when the requirements are fixed.
● Not suitable for frequent requirement changes (covered in the disadvantages below).
✅ Advantages vs ❌ Disadvantages of Waterfall Model

| Advantages | Disadvantages |
|---|---|
| High Product Quality: Each phase has clear documentation, ensuring the software is well-structured and tested. | Requirement Changes Not Allowed: The customer cannot change requirements mid-way, which is unrealistic in real-world projects. |
| Less Chance of Bugs: Since no changes are allowed during the process, there is less chance of introducing new bugs. | Defect Propagation: If a defect is missed in the early stages (requirements/design), it spreads to later phases. |
| Low Initial Investment: Testers are hired only at later stages, reducing upfront cost. | High Rework Cost: If bugs are found during testing, rework across all previous stages takes time and increases cost. |
| Best for Small, Fixed Projects: Works well when requirements are clearly known and won't change. | Testing Happens Late: Testing only begins after development, making early bug detection impossible. |
🌟 Spiral Model 🌟
The Spiral Model is an evolutionary process model that combines the structured approach of the Waterfall Model with the iterative nature of prototyping. It was designed to overcome limitations of classical models like Waterfall by:

● Allowing repeated development in cycles (iterations or "spirals")
● Incorporating risk analysis and planning into each cycle
● Allowing partial and regular delivery of working software
● Supporting requirement changes at the end of every cycle

It is widely used for large and complex projects where requirements are expected to evolve and risks need to be managed continuously.

Each spiral (cycle) goes through four major activities:

1.​ Planning
2.​ Risk Analysis
3.​ Development and Testing
4.​ Evaluation
1️⃣ Planning

In this step, the team defines the objectives for the current cycle. This includes:

● Understanding customer requirements for this version
● Identifying deliverables (e.g., login module, dashboard)
● Setting deadlines and responsibilities

Example: Let’s say the customer wants a login feature first. In the planning phase, the
team discusses:

● What the login should do
● Which technologies (tools, programming languages, frameworks, and databases) the team will use to build the software
● Who will handle coding, testing, and reviewing

2️⃣ Risk Analysis

Here, the team identifies possible risks or challenges that could impact the project or
this module. They then prepare solutions or backups.

Example: If the login involves OTP via SMS, what if the SMS gateway fails?

● Risk: The external SMS service may not respond
● Mitigation: Use email as a backup, or add retry logic

This step prevents failure and builds risk management into the process.
3️⃣ Development and Testing

Once risks are planned for, the team starts building the feature/module and then tests it.

●​ Code is written
●​ Unit testing, integration testing are done
●​ Testers validate the functionality

Example: Developers code the login form. Testers then check:

● Can a user log in with valid data?
● Do error messages show for wrong passwords?
● Is the password stored securely?

4️⃣ Evaluation

Once the module is tested, the team evaluates the outcome with the customer. Feedback
is collected before starting the next cycle.

Example: Customer checks the login module and says:

● "Can we add a forgot password option in the next cycle?"
● "Change the button color to match our brand."

The team notes this and plans for it in the next cycle.

🔁 Summary:
●​ Spiral Model = Iterative + Controlled + Risk-focused
●​ It overcomes some Waterfall drawbacks like rigidity and customer disconnect.
●​ But it still lacks testing at the earliest stages and doesn't allow dynamic
changes during cycles.
Spiral Model – Advantages vs Disadvantages

| Advantages | Disadvantages |
|---|---|
| Testing is done in every cycle: each spiral cycle includes testing before moving to the next, so bugs can be identified early. | Requirement changes are NOT allowed mid-cycle: if a change is needed during the coding or testing of a cycle, it must wait for the next cycle. |
| Customer gets working software after every cycle: each cycle delivers a usable version (module) of the product, keeping the customer engaged and providing faster feedback. | Each spiral cycle follows Waterfall steps: inside each cycle, the process is linear (Requirement → Design → Coding → Testing), similar to the Waterfall model. |
| Requirement changes are allowed between cycles: at the end of a cycle, customer feedback can be used to plan changes or add features in the next cycle. | No testing during the Requirement & Design stages: testing is still done after coding, not during the earlier phases of each cycle. |
| Suitable for long-term projects: can accommodate ongoing enhancements and long-duration development. | Costly model: continuous cycles, rework, and customer collaboration increase time and investment. |
| Risk is analyzed early in each cycle: the Spiral Model includes risk analysis to avoid potential failures. | No fixed number of cycles: the model keeps continuing based on new changes, so project closure can become uncertain. |

Common Problems in Waterfall & Spiral Models: Both models have a dedicated
testing phase only after coding is complete. There's no testing during earlier phases
like Requirements or Design. This increases the risk of late defect discovery.
📘 LEARNING TO TEST, TESTING TO LEARN
A Beginner’s Daily QA Series
📅 Day 2 — Part 2: The V-Model: Verification, Validation & Testing Techniques
Introduction to the V-Model: Addressing Traditional SDLC Challenges

The V-Model is a critical Software Development Life Cycle (SDLC) model designed to
enhance software quality. It addresses key limitations observed in earlier, classic SDLC
models such as the Waterfall and Spiral models.

Challenges in Traditional SDLC Models:

1. Delayed Testing Phase:

●​ In models like Waterfall, testing is typically a distinct phase initiated only after
development is substantially complete.
●​ This late-stage testing means that design flaws or requirement misunderstandings
from earlier phases are detected much later.
●​ Consequence: Defects found late in the cycle are significantly more expensive
and time-consuming to rectify.

2. Rigidity and Difficulty in Accommodating Changes:

●​ Waterfall Model: Highly sequential and rigid, making it extremely difficult and
costly to incorporate new requirements or changes once a phase is completed.
●​ Spiral Model: Offers some flexibility by allowing changes after each cycle.
However, integrating significant mid-development changes can still be challenging
across the entire project scope.
V-Model

The V-Model, also known as the Verification and Validation Model, was introduced to
overcome these limitations by emphasizing concurrent testing activities throughout
the SDLC.

Key Principle of the V-Model:

● Integrated Testing: Unlike traditional models, the V-Model explicitly integrates testing activities parallel to each development phase, rather than confining testing to a single, end-of-cycle phase.
● This approach ensures that quality checks and testing processes begin right from the initial requirement gathering stage.

Benefits of the V-Model's Approach:

● Early Defect Detection: By incorporating testing at every phase, defects are identified and addressed much earlier in the development lifecycle.
● Reduced Cost of Defects: Early detection significantly lowers the cost and effort associated with fixing bugs.
● Enhanced Product Quality: Continuous quality assurance leads to a more robust and reliable final software product.

Overview of the V-Model Structure:

The V-Model diagram visually represents its core concept with two distinct arms:

● Left Arm (Verification): Depicts the traditional development phases, moving downwards.
● Right Arm (Validation): Represents the corresponding testing phases, moving upwards, directly linked to their respective development counterparts.

The Left Arm of the V-Model: Verification & Static Testing

The V-Model's left arm represents the development phases of the Software Development
Life Cycle (SDLC), moving downwards. These phases are foundational, defining what
the software should do and how it should be built.

1. Requirement Analysis Phase:

User Requirements (BRS - Business Requirement Specification): This document captures the high-level needs and expectations of the end-users or customers. It describes the system from a business perspective.

System Specifications (SRS - Software Requirement Specification): This document translates the BRS into detailed software requirements, outlining the functionalities, performance, and other constraints of the software system.

Created by: Typically, these documents are prepared by Product Managers or Business
Analysts, as they are the primary liaisons with the customers.
2. Design Phase:

Architecture Design (HLD - High-Level Design): This phase defines the overall
structure and components of the software system, including its architecture, major
modules, database design, and external interfaces.

Module Design (LLD - Low-Level Design): This phase breaks down the high-level
components into more detailed sub-modules, specifying their internal logic, algorithms,
and data structures. It provides precise instructions for coding each module.

Note: The entire software is broken down into smaller, manageable modules during this
phase.

3. Implementation/Coding Phase:

This is where the actual software development begins. Developers start writing the code
based on the detailed design documents (HLD and LLD). At this point, concrete
software modules are being built.

Verification: "Are we building the right product?"

The left arm of the V-Model is primarily concerned with Verification.

● Definition: Verification is the process of evaluating whether the product, at each stage of the SDLC, conforms to specified requirements and standards. It asks the question: "Are we building the product right?" or "Are we following the correct process at each step?"
● Focus: Verification primarily focuses on documents, designs, and specifications. It ensures that the outputs of each development phase correctly reflect the inputs and adhere to defined rules.
● When it happens: Verification activities occur before the actual software is fully built or deployed.
Static Testing: Testing Without Executing Code

During the Requirement Analysis and Design phases (the upper parts of the left arm),
there is no executable software available – only documents. This is where Static
Testing comes into play.

● Definition: Static testing is a type of software testing that analyzes software artifacts (such as documents, code, or design specifications) without executing the actual code. It involves reviewing and analyzing documentation to find defects early.
● Why it's done: Since the software is not yet developed, static testing allows us to identify and correct issues in requirements or design documents before they translate into code, where they become much more expensive to fix.

Key Aspects of Static Testing (on Documents):

When testing documents, the primary focus is on two critical aspects:

1.​ Correctness:
○​ Ensuring the document contains accurate, factually sound, and correctly
stated information.
○​ Checking if the content aligns with user needs and industry standards.
2.​ Completeness:
○​ Verifying that the document includes all necessary information and that
nothing essential is missing.
○ Ensuring all requirements, designs, or specifications are fully detailed and unambiguous.
Static Testing Techniques:

To achieve correctness and completeness in documents, specific techniques are employed. These are often formal or informal reviews involving multiple stakeholders:

1.​ Reviews:

Process: A thorough examination of documents (e.g., Requirement Reviews, Design Reviews, Test Plan Reviews) to ensure correctness and completeness.

Execution: Can be done individually or as a team; a reviewer checks every line and paragraph of the document for correct and complete content.

Example: A Business Analyst reviewing an SRS document to ensure all customer requirements are accurately captured.

2.​ Walkthroughs:

Process: An informal review where the author of the document or code explains it to a
team of peers.

Execution: Not strictly pre-planned; conducted as needed. One person leads by "walking
through" the document, explaining each part, while others provide feedback, ask
questions, and discuss clarifications.

Documentation: Typically, no formal minutes are recorded, making it less formal than
inspections.

Example: A designer explaining a new module's design to a group of developers to get their immediate feedback.
3.​ Inspections

Process: The most formal type of review, involving a structured and systematic
examination of documents or code by a trained team.

Participants: Involves a larger team (e.g., 3-8 participants), including a moderator, a reader (who reads the document aloud), a writer (who notes down issues), and other reviewers.

Execution: Highly structured, follows a proper schedule, and often uses checklists.
Formal minutes of the meeting are recorded.

Example: A team conducting a detailed inspection of the High-Level Design document, with a moderator ensuring adherence to process and a dedicated person recording all identified defects.

In summary, the left arm of the V-Model is where we verify the "building blocks" of our
software through documentation and static testing, ensuring we're on the right path even
before a single line of code is written.

The Right Arm of the V-Model: Validation & Dynamic Testing

While the left arm of the V-Model focuses on building the product correctly (Verification
through documentation and static testing), the right arm moves upwards, focusing on
testing the actual software to ensure it meets user expectations and requirements. This
side is primarily concerned with Validation.

Implementation / Coding Phase:

At the base of the 'V' is the Implementation or Coding phase. This is the point where:

● Developers translate the Low-Level Design (LLD) documents into executable code.
● The actual software components and modules are built.
● Unlike the earlier phases where only documents existed, the implementation phase marks the beginning of having a tangible, executable software product.

Validation: "Are we building the product right?"

The right arm of the V-Model is dedicated to Validation.

● Definition: Validation is the process of evaluating the finished or partially finished product to determine if it satisfies the business needs and user requirements. It asks the question: "Are we building the right product?" or "Does the software meet the customer's actual needs and expectations?"
● Focus: Validation primarily focuses on the actual, executable software.
● When it happens: Validation activities take place after the software components have been developed and are ready for execution.

Dynamic Testing: Testing the Executable Software

Once the software (or its components) is developed during the implementation phase, we
move into Dynamic Testing.

● Definition: Dynamic testing is a type of software testing that involves executing the actual software with various inputs and observing its behavior and outputs. It's about how the software performs in action.
● Key Difference from Static Testing:
○ Static Testing: Done before execution, on documents, for correctness and completeness.
○ Dynamic Testing: Done during or after execution, on the actual software, to check functionality and performance.
Dynamic Testing Techniques (The Right Arm's Phases):

The V-Model maps specific dynamic testing phases to their corresponding development
phases on the left, ensuring thorough validation.

1.​ Unit Testing (Corresponds to Module Design - LLD):

What it is: The first level of dynamic testing. It involves testing individual, smallest
testable parts of an application, known as 'units' or 'modules' (e.g., a single function,
method, or class).

Who does it: Primarily performed by developers themselves. They test the code they
have written to ensure each unit works correctly in isolation.

Purpose: To verify that each module performs as designed.

Example: In a calculator application, a developer tests the 'add' function by providing specific inputs (e.g., 2 + 3) and verifying the output (5).

2.​ Integration Testing (Corresponds to Architecture Design - HLD):

What it is: After individual units are tested, they are combined or 'integrated'. Integration
testing focuses on verifying the interfaces and interactions between these integrated
modules or components.

Who does it: Often performed by developers, sometimes with collaboration from testers,
as it still involves understanding internal code interactions.

Purpose: To ensure that modules communicate and work together correctly as a group.

Example (Gmail Application):

● Unit 1 (Compose Module): Allows a user to write an email.
● Unit 2 (Sent Folder Module): Displays emails that have been sent.
● Integration Test: A user composes and sends an email. The integration test verifies if that same email correctly appears in the 'Sent Folder' – demonstrating successful communication between the "Compose" and "Sent Folder" modules.

3. System Testing (Corresponds to System Specification - SRS):

What it is: Once all modules are integrated into a complete system (often called a 'build'
or 'release candidate'), system testing evaluates the complete, integrated software product
against the specified requirements (SRS).

Who does it: Primarily performed by dedicated testers.

Purpose: To verify that the entire system functions as intended, meets all functional and
non-functional requirements, and works correctly in its intended environment. It
simulates real-world scenarios.

4. User Acceptance Testing (UAT) (Corresponds to User Requirements - BRS):

What it is: The final phase of testing before deployment. The software is tested by actual
end-users or customers in their own environment.

Who does it: Performed by customers or end-users who will actually be using the
software.

Purpose: To verify if the software meets their business needs, user expectations, and is
fit for purpose in a real-world setting. It confirms that the 'right product' has been built
from a user perspective.

The right arm of the V-Model ensures that the software, once built, is rigorously validated
against its requirements and user expectations through a series of increasingly
comprehensive dynamic tests. This dual approach of Verification (left arm) and
Validation (right arm) is what makes the V-Model robust for ensuring software quality.
V-Model Advantages and Disadvantages

| Advantages | Disadvantages |
|---|---|
| Early Defect Detection: Testing activities commence from the very beginning of the project, allowing earlier identification and rectification of defects and significantly reducing the cost of fixes. | Increased Documentation: Places a strong emphasis on detailed documentation at every phase (BRS, SRS, HLD, LLD, test plans, etc.), which can lead to significant documentation overhead. |
| Higher Quality Product: Rigorous verification and validation at each stage contribute to the development of a more robust, reliable, and higher-quality software product. | Higher Initial Investment: With both development and testing teams involved from the early stages, the initial setup and resource allocation can demand a higher investment compared to models where testing starts later. |
| Clear Roles and Responsibilities: The structured approach defines clear deliverables and responsibilities for each phase, simplifying project management and progress tracking. | Limited Flexibility for Changes: While an improvement over Waterfall, major requirement changes midway through the project can still be challenging and costly, as they often require revisiting multiple corresponding phases. |
| Enhanced Traceability: Direct mapping between development and testing phases (connections across the 'V' diagram) makes it easier to trace requirements to tests and vice versa. | Not Ideal for Small Projects: For very small or short-duration projects, the extensive documentation and structured process of the V-Model might be unnecessary overhead and too rigid. |
Connecting the V-Model's Arms: Traceability

The horizontal lines in the V-Model diagram visually represent the direct correspondence and traceability between the development phases on the left and their corresponding testing phases on the right. This ensures that every development output is systematically validated:

● User Requirements (BRS) are validated by User Acceptance Testing (UAT).
● System Specifications (SRS) are validated by System Testing.
● High-Level Design (HLD) is validated by Integration Testing.
● Low-Level Design (LLD) is validated by Unit Testing.

Verification vs. Validation

| Feature | Verification | Validation |
|---|---|---|
| Question | "Are we building the product right?" | "Are we building the right product?" |
| Focus | Process, methodology, and adherence to specifications. Checks how the product is being built. | The actual software's functionality and its alignment with user needs. Checks what product is built. |
| Stage | Primarily conducted before software execution (during planning, requirements, design). | Primarily conducted after the software is built and executable. |
| Techniques | Reviews, Walkthroughs, Inspections (Static Testing). | Unit, Integration, System, and UAT Testing (Dynamic Testing). |
Static Testing vs. Dynamic Testing

| Feature | Static Testing | Dynamic Testing |
|---|---|---|
| Approach | Non-execution-based. Analyzes documents and code without running the software. | Execution-based. Involves running the actual software with inputs and observing outputs. |
| What is Tested | Project documents (BRS, SRS, HLD, LLD), design documents, source code. | The actual executable software. |
| Purpose | To find defects early in documentation and design; ensure correctness and completeness. | To verify functionality, performance, and behavior of the working software. |
| Techniques | Reviews, Walkthroughs, Inspections. | Unit Testing, Integration Testing, System Testing, User Acceptance Testing (UAT). |
📘 LEARNING TO TEST, TESTING TO LEARN
A Beginner’s Daily QA Series

📅 Day 3 — Part 1
QA vs. QC & Testing Methodologies (White Box, Black Box & Grey Box)

1. Quality Assurance (QA) vs. Quality Control (QC)

While both QA and QC are crucial for delivering high-quality software, they represent different aspects of quality management:

Quality Assurance (QA): Focuses on the process used to build the software. It's about preventing defects by defining and ensuring adherence to proper procedures. It asks, "Are we building the product right?"

Quality Control (QC): Focuses on the product itself. It's about identifying and detecting defects in the software after it has been built. It asks, "Are we building the right product?"

| Feature | Quality Assurance (QA) | Quality Control (QC) |
|---|---|---|
| Focus | Process-oriented. Ensures the right processes are in place and followed throughout the SDLC. | Product-oriented. Ensures the actual product meets quality standards and requirements. |
| Goal | Preventing defects. Aims to stop defects from occurring in the first place. | Detecting defects. Aims to find and identify existing defects in the software. |
| Activity Type | Management activity. Involves defining standards and procedures and ensuring compliance. | Technical activity. Involves executing tests, identifying bugs, and reporting. |
| SDLC Involvement | Entire SDLC cycle. Involved from requirements gathering to deployment and maintenance. | Testing phase. Primarily focused on the testing phase of the SDLC. |
| Scope | Broad scope. QA encompasses all activities to assure quality throughout the project. | Specific scope. QC is a subset of QA, specifically dealing with testing of the product. |
| Team Role | Defines the process, sets rules, and monitors adherence. | Follows the defined process to perform actual testing. |
| Example | Setting up coding standards, conducting process audits, defining review guidelines. | Writing and executing test cases, performing functional testing, reporting bugs. |

In essence: QA sets the rules for building quality (prevention), while QC follows those rules to test for quality in the final product (detection). All testers, whether manual or automation, primarily fall under Quality Control (QC).

Quality Engineering (QE): A Modern Perspective

Recently, the term Quality Engineering (QE) has gained prominence in the IT industry.

● Focus: QE emphasizes building quality into the product from the very beginning, often through automation, tooling, and continuous integration.
● Role of Testers: Testers in a QE role, particularly automation testers, are often called Quality Engineers (QEs) or Software Development Engineers in Test (SDETs).
● Why SDET? The "Software Development Engineer in Test" designation highlights that modern testers, especially those in automation, write code to test software. They develop test frameworks, automation scripts, and tools, bridging the gap between development and testing. While developers write code to build the software, SDETs write code to test the software.


Testing Methodologies

▪ White Box Testing ▪ Black Box Testing ▪ Grey Box Testing

1. White Box Testing: Looking Inside the Box

Imagine a transparent box – you can see everything inside, how its components are
structured, and how they work. That's the essence of White Box Testing in software!
●​ Focus: White Box Testing involves testing the internal logic and structure of the
software's code. Testers examine the code, design, and internal workings to verify
paths, conditions, and data flow.
●​ Knowledge Required: This type of testing requires programming knowledge
and an understanding of the code written by developers.
●​ How it's Done: Testers might step through the code, analyze control flow, test
specific conditions, and ensure that all internal paths are functioning correctly.
Techniques often include statement coverage, branch coverage, and path coverage.
●​ Who Does It: Primarily performed by developers who have in-depth knowledge
of the codebase.
●​ Examples:
○​ Unit Testing: A developer tests if a specific function in their code, like an "add"
function, correctly calculates 2 + 3 = 5.
○​ Code Review: One developer reads another developer's code line by line to spot
potential errors or inefficient parts before the program is even run.
○​ Integration Testing (Code Level): When integration testing is performed by
examining the underlying code interactions between modules, it falls under white
box testing.
○​ Static Analysis: Using tools to analyze code without executing it, looking for
potential vulnerabilities or bad practices.

2. Black Box Testing: Testing from the Outside


Picture a solid box you can't see through – you can only interact with the outside (like
pressing buttons), without any knowledge of what's inside. This is Black Box Testing.

●​ Focus: Black Box Testing involves testing the functionality and behavior of the
application from the user's perspective, without any knowledge of its internal
code structure. Testers interact with the software's interface and validate whether
it meets the specified requirements.
●​ Knowledge Required: No programming knowledge is necessary. Testers focus
solely on understanding the application's features and how users are expected to
interact with it.
● How it's Done: Testers use the application as an end-user would, providing inputs and verifying the outputs against the expected results. Techniques include Equivalence Partitioning, Boundary Value Analysis, and Decision Table Testing (see the sketch after the examples below).
●​ Who Does It: Primarily performed by dedicated testers (manual and
automation), and also by end-users during User Acceptance Testing.
●​ Examples:
❖​ Website Login: Trying to log into a website by typing your username and
password to confirm it either lets you in (for valid credentials) or shows an error
(for invalid ones).
❖​ Calculator App: Testing if a calculator app gives the correct answer when you
press "2 + 2 =" and expect "4," without knowing how the app calculates internally.
❖​ Online Shopping: Verifying if clicking "Add to Cart" actually puts the item into
your shopping cart.
❖​ System Testing: As a Black Box test, System Testing involves interacting with the
complete software application (e.g., a banking portal) to ensure all its features
work together as required, without looking at its code.
❖​ User Acceptance Testing (UAT): For UAT, actual users test the software (e.g., a
new internal company tool) to confirm it meets their business needs and is ready
for real-world use, just like a customer would.
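
As a brief illustration of Boundary Value Analysis, here is a hedged JUnit 5 sketch for a hypothetical `isAdult(age)` rule (adulthood starting at exactly 18); the class, method, and boundary are invented for this example:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AgeRuleTest {
    // Hypothetical unit under test: a user is an adult at exactly 18.
    static boolean isAdult(int age) { return age >= 18; }

    @Test
    void boundaryValues() {
        // Boundary Value Analysis: probe just below, on, and just above 18,
        // where off-by-one defects (e.g., '>' instead of '>=') tend to hide.
        assertFalse(isAdult(17)); // just below the boundary
        assertTrue(isAdult(18));  // on the boundary
        assertTrue(isAdult(19));  // just above the boundary
    }
}
```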
3. Grey Box Testing: A Blend of White Box and Black Box Testing

Grey Box Testing sits in between White Box and Black Box Testing. It involves testing
with partial knowledge of the internal structure of the application.

●​ Focus: Grey Box Testers have some understanding of the system's architecture,
databases, or APIs. This limited internal knowledge is used to design more
effective tests or to investigate issues found during black box testing by looking at
logs or backend data.
●​ Knowledge Required: Requires some level of technical understanding, such as
knowledge of databases (e.g., SQL queries), APIs, or high-level design
documents.
●​ How it's Done: Testers might perform front-end actions (Black Box) and then
verify corresponding changes in the backend database (White Box insight), or
directly test APIs using known endpoints.
●​ Who Does It: Can be performed by testers who have acquired some technical
skills, or by developers in certain integration testing scenarios.
●​ Examples:
○​ Database Testing: You register on a website through the user interface (Black
Box part). Then, as a tester, you directly connect to the database (Grey Box part)
to confirm your registration details were correctly saved in the backend table.
○​ Form Submission with API Check: You fill out an online form and click
'Submit' (Black Box). As a tester, you might then use a tool to check the specific
data sent through the API (the communication layer between the front-end and
backend) to ensure it was formatted and transmitted correctly.
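
A minimal sketch of that database check in Java, assuming a hypothetical `users` table and JDBC connection details (the URL, credentials, and column names are invented for illustration):

```java
import java.sql.*;

public class RegistrationDbCheck {
    public static void main(String[] args) throws SQLException {
        // Assumed environment details; replace with your own.
        String url = "jdbc:mysql://localhost:3306/appdb";
        try (Connection conn = DriverManager.getConnection(url, "tester", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT email FROM users WHERE email = ?")) {
            // Black box step already done: we registered this email via the UI.
            ps.setString(1, "newuser@example.com");
            try (ResultSet rs = ps.executeQuery()) {
                // Grey box step: confirm the backend actually stored the row.
                System.out.println(rs.next()
                        ? "PASS: registration row found"
                        : "FAIL: registration row missing");
            }
        }
    }
}
```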

Comparison of Testing Methodologies

| Feature | White Box Testing | Black Box Testing | Grey Box Testing |
|---|---|---|---|
| Knowledge of Internals | Full: complete knowledge of source code, design, and structure (transparent). | None: treats the system as a black box; no knowledge of internal code or structure (opaque). | Partial: some knowledge of internal structure, architecture, or data flow (semi-transparent). |
| Primary Focus | Internal logic, code paths, statement/branch coverage, structural integrity, security vulnerabilities. | External functionality, user behavior, requirements compliance, usability. | Both functionality and the interaction/data flow between components/layers. |
| Objective | To verify the internal working, optimize code, improve design, and identify internal flaws. | To ensure the system meets user requirements and functions as specified, from an end-user perspective. | To test interactions between components (e.g., UI and DB), identify context-specific errors, and ensure data integrity across layers. |
| Programming Skills | Required: essential for code analysis and test case creation. | Not required: focuses on UI and functional behavior, independent of code. | Beneficial/required for specific tasks, e.g., SQL for database testing, understanding API contracts. |
| Performed By | Primarily developers. | Dedicated testers (manual & automation), end-users. | Testers with technical skills, sometimes developers. |
| Test Basis | Source code, detailed design documents, architectural diagrams. | Requirements Specification (SRS), use cases, user stories, functional specifications. | High-level design, database schema, API documentation, data flow diagrams. |
| Examples | Unit Testing, code reviews, static code analysis, integration testing (code-level). | System Testing, User Acceptance Testing (UAT), functional, regression, performance, and usability testing. | Database testing, API testing, web services testing, penetration testing (often involves some grey-box elements). |
📘 LEARNING TO TEST, TESTING TO LEARN
A Beginner’s Daily QA Series

📅 Day 3 - Part 2 : Levels of Software Testing


LEVELS OF TESTING

1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing (UAT)

1. UNIT TESTING: The First Check (Building Blocks)


Imagine a software application like a complex

machine made of many tiny gears and parts.

Before assembling the whole machine,

you'd want to make sure each individual gear

works perfectly on its own.

That's what Unit Testing is all about!

What is a Unit?
A "unit" is the smallest testable part of a software application. This could be a
single function, a method, a procedure, or a module – essentially, a small,
independent piece of code.

Example: In a banking application, the "login" logic, a function that calculates "interest," or a module for "displaying balance" are all individual units.
When is it Done?
Unit testing is the first level of testing, performed once a specific piece of code (a unit) is developed. It happens before the entire software is ready.

Who Performs It?

● Unit testing is primarily conducted by developers. They test the code they've just written.
● As a QA Tester, you are generally NOT responsible for performing unit testing. It's a development activity.

How is it Done? (The White Box Connection)

● Unit testing is a White Box Testing technique. This means developers have full knowledge of the internal code structure.
● They write specific "unit test cases" (often automated using frameworks like JUnit) to directly test the internal logic, code paths, and data flow of that single unit.

Example: For a login function, the developer would directly test the code to ensure
it correctly processes different usernames and passwords.
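
For instance, a hedged JUnit 5 sketch of such a unit test; `validateLogin` here is a made-up stand-in for the developer's real method:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class LoginUnitTest {
    // Hypothetical unit under test: accepts one known user, rejects everything else.
    static boolean validateLogin(String user, String password) {
        return "alice".equals(user) && "s3cret".equals(password);
    }

    @Test
    void validCredentialsAreAccepted() {
        assertTrue(validateLogin("alice", "s3cret"));
    }

    @Test
    void invalidCredentialsAreRejected() {
        assertFalse(validateLogin("alice", "wrong"));
        assertFalse(validateLogin("", ""));
    }
}
```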

UNIT TESTING TECHNIQUES

1. Basis Path Testing

Ensuring every independent path through a unit's code is executed at least once to confirm its functionality.

Example

```python
def number_type(x):
    if x > 0:
        return "Positive"
    elif x == 0:
        return "Zero"
    else:
        return "Negative"
```

Independent Paths

This code can take 3 different routes:

Path 1: x > 0 → "Positive"
Path 2: x == 0 → "Zero"
Path 3: x < 0 → "Negative"

Test Cases for Basis Path

| Test Case | Input | Expected Output |
|---|---|---|
| 1 | 5 | Positive |
| 2 | 0 | Zero |
| 3 | -3 | Negative |

If all three paths are tested at least once, Basis Path Testing is complete.

2. Control Structure Testing

A white-box testing technique that checks the logic flow in the program using different inputs to make sure all decision points and loops behave correctly.

● Condition Coverage

Testing all decision points within the code (e.g., if-else statements, switch-case) to ensure that both the true and false branches of conditions are executed at least once with different inputs.

```java
int age = 20;
if (age >= 18) {
    System.out.println("Adult");
} else {
    System.out.println("Minor");
}
```

Test Cases: age = 20 → condition true; age = 15 → condition false

● Branch Coverage

Ensures all possible branches from decision points (if, else if, else) are executed.

```java
int score = 75;
if (score > 80) {
    System.out.println("Excellent"); // branch 1
} else if (score >= 50) {
    System.out.println("Pass");      // branch 2
} else {
    System.out.println("Fail");      // branch 3
}
```

Test Cases:

score = 85 → branch 1
score = 75 → branch 2
score = 40 → branch 3
● Loop Coverage

Testing code that involves repetition (loops) to ensure loops start, execute the correct number of iterations, and terminate properly. This includes checking boundary conditions (e.g., zero iterations, one iteration, maximum iterations).

```java
for (int i = 0; i < n; i++) {
    System.out.println(i);
}
```

Test Cases:

● n = 0 → loop doesn't run
● n = 1 → loop runs once (prints 0)
● n = 3 → loop runs multiple times (prints 0, 1, 2)

3. Mutation Testing

We make small changes (mutations) in the program's code to see if our existing test cases catch the mistake.

If tests fail → good (they caught the bug)
If tests pass → bad (the tests are weak)

Original code:

```java
int add(int a, int b) {
    return a + b;
}
```

Mutation (changed + to -):

```java
int add(int a, int b) {
    return a - b; // mutant
}
```

A good test case should detect the wrong output:

add(2, 3) == 5; // will fail after the mutation
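
A hedged JUnit 5 sketch of a test that would "kill" this mutant: it passes against the original `add` and fails once `+` is mutated to `-`:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class AddMutationTest {
    static int add(int a, int b) { return a + b; } // original version

    @Test
    void killsTheMinusMutant() {
        // Passes for a + b (returns 5); fails for the a - b mutant (returns -1),
        // so this test kills the mutation.
        assertEquals(5, add(2, 3));
    }
}
```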


Unit vs. Component Testing
While often used interchangeably for individual pieces, some define:

Unit Testing: Testing a single piece of code directly (requires programming knowledge).

Component Testing: Testing a single application feature/UI element (e.g., a login screen) from the application side, which might involve using the UI but still focuses on that one "component" in isolation. For testers, this often blurs into Black Box testing.

In essence, Unit Testing is about ensuring that each individual building block
of the software works flawlessly on its own before it's combined with other
blocks.

2. INTEGRATION TESTING: Testing Module Connections

Once individual units (modules) have been tested and are working correctly, the next crucial step is to see if they work well together. That's where Integration Testing comes in.

What is Integration Testing?

● It's the process of testing the interfaces and interactions between two or more combined modules (or units) of an application.
● The main goal is to check if these different modules, when put together, can communicate and exchange data correctly.
Example

In a banking app, "Balance Inquiry" and "Send Money" are two separate modules. Integration testing would check whether, after sending money, "Balance Inquiry" accurately reflects the reduced balance.

Who Performs It?

Both developers and testers can perform integration testing:

Developers: Often do it at the code level (White Box Integration Testing) to ensure the logic between combined programs/functions is sound.

Testers: Conduct it at the application/UI level (Black Box Integration Testing) to verify the seamless flow and data exchange across different features from a user's perspective.

Why is it Needed?
Even if individual modules work perfectly (unit tested), defects can arise when
they start interacting. These can include:

● Incorrect data formatting between modules.
● Missing data.
● Incorrect order of operations.
● Misunderstandings in API contracts.

APPROACHES TO INTEGRATION TESTING

Developers follow different strategies to combine and test modules. These are broadly categorized into:

1. Incremental Integration Testing
2. Non-Incremental Integration Testing (Big Bang Testing)


1. Incremental Integration Testing (Preferred Method)
This approach involves adding modules one by one (incrementally) to the already
tested modules and then testing the data flow and communication. This makes it
easier to pinpoint where a defect originated.

Why is it preferred?

If an issue arises, it's typically located within the newly added module or its
interaction with the existing ones, making defect isolation much faster.

Three Main Approaches

1. Top-Down
2. Bottom-Up
3. Hybrid (Sandwich)

1. Top-Down Integration
We start testing from the main module at the top and keep adding its child modules step by step.

● Goal: Verify the main control flow early.
● Order: Parent → Child → Grandchild, and so on.

How it Works (Diagram Walkthrough)

● Incrementally add the modules and test the data flow between them.
● Ensure each module added is a child of the previous module.
● Takes the help of stubs for testing.

Step 1:

● M1 (the parent module) is developed and ready.
● M2 (its child) is not ready, so we use a Stub to represent M2.

Step 2:

● The stub for M2 is a dummy program that pretends to perform M2's job.
● It sends back predefined outputs so that M1 can still be tested.

Step 3:

● When M2's real code is ready, the stub is removed and the real M2 is integrated.
● The process continues down to M3, M4, etc., until the full system is connected.

What is a Stub in this Context?

● A Stub is a dummy program that pretends to be a missing child module.
● It returns fake or predefined outputs so the parent module can still be tested without waiting for the real child module to be developed.

Example

● M1: E-commerce Main Menu
● M2 (Stub): Order Processing
● M3: Payment Gateway

Testing Flow

1. M1 sends an order to M2 (the stub).
2. M2 (the stub) fakes the response → "Order Placed Successfully."
3. M1 behaves as if M2 is real.
4. Later, the real M2 replaces the stub, and integration continues to M3.
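
A hedged Java sketch of this idea, with invented module and interface names: `MainMenu` plays M1, and a stub stands in for the missing order-processing child M2:

```java
// Hypothetical top-down sketch; all names are invented for illustration.
interface OrderService {                 // contract of the child module M2
    String placeOrder(String item);
}

// Stub: a dummy child that returns a canned answer so M1 can be tested now.
class OrderServiceStub implements OrderService {
    public String placeOrder(String item) {
        return "Order Placed Successfully"; // predefined output, no real logic
    }
}

public class MainMenu {                  // parent module M1
    private final OrderService orders;
    MainMenu(OrderService orders) { this.orders = orders; }

    String order(String item) { return orders.placeOrder(item); }

    public static void main(String[] args) {
        // Test M1 against the stub; swap in the real M2 once it is developed.
        MainMenu m1 = new MainMenu(new OrderServiceStub());
        System.out.println(m1.order("Pizza"));
    }
}
```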

2. Bottom-Up Integration
We start testing from the lowest-level (child) modules and keep adding their parent modules step by step, moving upward.

● Goal: Prove the building blocks first, then integrate upward.
● Order: Child → Parent → Grandparent → … and so on.

How it Works (Diagram Walkthrough)

● Incrementally add the modules and test the data flow between them.
● Ensure each module added is the parent of the previous module.

Step 1:

● M1 (the lowest-level child module) is developed and ready.
● Its parent M2 is not ready, so we use a Driver to mimic M2's behavior.

Step 2:

● The Driver is a dummy program that pretends to be M2, calling M1 with test inputs and receiving outputs for verification.
● This allows us to test M1 in isolation without waiting for M2's development.

Step 3:

● M3 (the top level) is already developed, but we can't fully integrate it until M2 is done.
● Once M2 is developed, we remove the Driver and integrate M1 into M2.
● Then we integrate M3 on top of M2, completing the chain.

What is a Driver in this Context?

A Driver is a dummy program that pretends to be a missing parent module. It calls the lower-level module with test data, collects the output, and checks the results. It allows lower modules to be tested before the higher ones are ready.

Example

● M1: Database Module (Developed ✅)
● M2 (Driver): Business Logic Layer (Not developed ❌ → Driver used)
● M3: UI Layer (Developed ✅)

Testing Flow:

1. The Driver calls M1 with sample queries.
2. M1 processes them and returns results.
3. The Driver verifies that M1 works correctly.
4. When M2 is ready, replace the Driver with the real M2.
5. Finally, connect M3 → M2 → M1 for full system testing.
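
And a matching hedged sketch of a driver, again with invented names: a small `main` acts as the missing parent M2, feeding the developed database module M1 test inputs and checking the results:

```java
// Hypothetical bottom-up sketch; all names are invented for illustration.
class DatabaseModule {                    // lowest-level module M1 (developed)
    String query(String sql) {
        // Pretend lookup, just for the example.
        return sql.contains("users") ? "1 row" : "0 rows";
    }
}

public class DatabaseModuleDriver {       // driver standing in for parent M2
    public static void main(String[] args) {
        DatabaseModule m1 = new DatabaseModule();
        // The driver does what the real business-logic layer will do later:
        // call M1 with test data and verify the output.
        String result = m1.query("SELECT * FROM users");
        System.out.println("1 row".equals(result) ? "PASS" : "FAIL");
    }
}
```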
3. Hybrid (Sandwich) Integration

How it Works
This approach mixes the Top-Down and Bottom-Up methods.

●​ Some parts of the system are tested starting from the top
(like Top-Down Integration).
●​ At the same time, other parts are tested starting from the bottom
(like Bottom-Up Integration).
●​ Eventually, both sides meet in the middle layer — like putting the filling
between two slices of bread, hence the name sandwich.

Why Use It?

●​ Speeds up testing because work happens on both ends at the same time.
●​ High-level control logic is verified early (from Top-Down).
●​ Low-level utility modules are verified early (from Bottom-Up).

How It Looks in Steps:

1.​ Top-Down part: Start testing from the main control module downward. Use
stubs for any missing child modules.
2.​ Bottom-Up part: Start testing from the lowest-level modules upward. Use
drivers for any missing parent modules.
3.​ Meet in the middle: When both sides are ready, connect them and test the
whole flow.

Example
Top Part: Start from Main Menu → Order Processing, using stubs for missing
deeper modules.​
Bottom Part: Start from Database Layer → Payment Utility, using drivers for
missing upper modules.​
Eventually, connect Order Processing with Payment Utility in the middle and test
the end-to-end process.

2. Non-Incremental Integration Testing (Big Bang Testing)


How it Works

Instead of adding modules one by one, all available modules are combined and
integrated in a single shot. The entire integrated system is then tested at once.

Drawbacks (Why it's NOT Preferred):

●​ Difficult Defect Isolation: If a defect is found, it's very challenging and time-consuming to identify which specific module or interface caused the issue, as many modules were integrated simultaneously.
●​ Risk of Missing Bugs: There's a higher chance of overlooking
communication issues between specific modules due to the sheer volume of
interactions being tested at once.

Example:

Imagine you’re building an online food delivery app.

●​ You have separate modules for:


1.​ User Login
2.​ Menu Browsing
3.​ Cart Management
4.​ Payment Gateway
5.​ Order Tracking​
In Big Bang Testing:

●​ You finish coding all these modules first without testing their integration
individually.
●​ Then, in one single step, you integrate them all together and run the tests.

Scenario Problem: On the first test run:

●​ You find that orders are not being placed.


●​ But is it because the cart didn’t send the correct data?
●​ Or because the payment gateway isn’t processing?
●​ Or because the order tracking isn’t updating?

Since everything is integrated together, debugging is a nightmare because there are too many connections to check at once.

Stubs and Drivers: Temporary Assistants


Stubs and Drivers are vital tools in Incremental Integration Testing.

●​ Purpose: They are temporary, dummy programs created by developers to simulate the presence of missing or undeveloped modules. They implement just enough logic to allow the modules under test to communicate, without implementing the full functionality.
●​ Stub: Replaces a missing child module (called by the module under test).
●​ Driver: Replaces a missing parent module (calls the module under test).

In summary, Integration Testing ensures that the different parts of your software work together seamlessly, allowing for effective data communication and interaction between them.
3. SYSTEM TESTING: END-TO-END VALIDATION
Once individual modules are working (Unit Testing) and they can communicate with each other (Integration Testing), it’s time to test the entire, fully integrated software application as a complete system.

Purpose in SDLC

System Testing acts as a bridge between Integration Testing and Acceptance Testing. It validates the complete integrated system before handing it over for User Acceptance Testing (UAT).

What is System Testing?

●​ It is the process of testing the overall functionality of the application from an end-to-end perspective.
●​ The primary goal is to verify that the entire system works correctly and
meets all the client's requirements and specifications.

When is it Done?

●​ Begins after Unit Testing and Integration Testing are completed.


●​ Requires a stable, integrated build of the software.

Who Performs It?

●​ Conducted primarily by the dedicated testing team (QA Testers).


●​ This is our main area of involvement as testers.​
How is it Done? (The Black Box Connection)

●​ Black Box Testing technique — testers do not need programming knowledge or access to the internal code.
●​ Testing is performed via the User Interface (UI).
●​ Testers evaluate application behavior against documented customer
requirements.

Prerequisite

●​ Testers must thoroughly understand the customer requirements from the Software Requirements Specification (SRS) and design documents.
●​ This knowledge guides test case preparation.

Test Environment Requirements

System Testing is conducted in a staging or pre-production environment that closely mirrors the production setup in:

●​ Hardware
●​ Software
●​ Network configurations

Ensures realistic test results that reflect real-world usage conditions.

Key Focus Areas of System Testing

1.​ User Interface Testing (GUI)


2.​ Functional Testing
3.​ Non-Functional Testing
4.​ Usability Testing
1. USER INTERFACE (UI) / GUI TESTING

●​ Verify all visual elements: input boxes, checkboxes, buttons, dropdowns,


logos, text, font colors, alignment.
●​ Ensure look and feel are consistent and correct.

2. FUNCTIONAL TESTING

Ensures all features work as per requirements.​


Examples:
Banking app: Login works, money transfers, balance updates.​
E-commerce app: Product search, cart, payment, order history.​
Functional Subtypes:

●​ Smoke Testing – Quick check for basic stability.


●​ Sanity Testing – Verifying specific changes work.
●​ Regression Testing – Ensuring new changes don’t break existing features.

3. NON-FUNCTIONAL TESTING

Focuses on how well the system performs, not just what it does.

Examples:

●​ Performance Testing: How fast the application responds under normal and
peak loads (e.g., page load times, transaction speed).
●​ Security Testing: Ensuring the application is protected against unauthorized
access, data breaches, and vulnerabilities.
●​ Recovery Testing: How well the application recovers from crashes or
failures.
●​ Load Testing: Testing the system's behavior under a specific expected load.
●​ Stress Testing: Testing the system's breaking point under extreme loads.
●​ Compatibility Testing: Checking the application's functionality across
different browsers, operating systems, devices, etc.

Non-Functional Subtypes:

●​ Volume Testing – Large data handling capability.


●​ Scalability Testing – Ability to handle growth.
●​ Endurance Testing – Long-term stability.

4. USABILITY TESTING

●​ Evaluates user-friendliness and ease of navigation.


●​ Checks clarity of error messages and logical flow.

Summary

System Testing ensures the complete software product works reliably as intended
in a real-world environment and meets all customer requirements.​
It’s the final internal testing phase before handing over to the client for UAT, making it one of the most critical stages in the SDLC.

4. User Acceptance Testing (UAT): The Customer's Approval


After the software has been thoroughly tested by the development and QA teams, the ultimate validation comes from the people who will actually use it – the customers or end-users. This is the essence of User Acceptance Testing (UAT).


What is UAT?

●​ UAT is the final level of testing performed to confirm that the software
meets the business requirements and is acceptable for deployment from the
customer's or end-user's perspective.
●​ It's about ensuring the software works in their day-to-day environment,
fulfilling their specific use cases and workflows.

When is it Done?

UAT is conducted after System Testing has been completed and the software is
deemed stable and functionally complete by the testing team.

Who Performs It?

●​ UAT is typically performed by actual end-users, customers, or a dedicated UAT team (sometimes composed of business analysts or subject matter experts from the client's side).
●​ As a QA Tester, you might assist them by setting up environments or
providing support, but the actual testing and decision to "accept" the
software is by the users.

How is it Done? (The Black Box Connection)

●​ UAT is a Black Box Testing technique. The users have no knowledge of the internal code; they simply interact with the application as they would in their daily tasks.
their daily tasks.
●​ They test the application in their own environment using real-world
scenarios and data to confirm it supports their business operations.
Levels of User Acceptance Testing:

UAT is often conducted in two distinct phases:

1.​ ALPHA TESTING


●​ Performed by internal teams (developers, QA, or a small group of internal
users) within the development or testing environment of the company that
built the software.
●​ Uses real data (or realistic simulated data). It's an early form of
acceptance testing before wider release.
2.​ BETA TESTING
●​ Performed by a limited number of real end-users or customers in their
actual customer environment.
●​ Uses customer's live data. This helps gather feedback from a wider
audience outside the company, identifying issues that might not have been
caught internally, and validating user experience in diverse real-world
conditions.

In essence, UAT is the final gate, ensuring the software is not just technically
sound, but also practically useful and fully accepted by those who will rely on
it every day.

Unit Testing: Individual components, done by developers, White Box.

Integration Testing: Communication between combined modules, done by developers/testers, White Box & Black Box aspects, Incremental vs. Non-Incremental.

System Testing: Overall functionality against client requirements, done by the testing team, Black Box, focuses on UI, Functional, and Non-Functional aspects.

User Acceptance Testing (UAT): Final validation by customers/users in their environment, Black Box, includes Alpha & Beta testing.
📘 LEARNING TO TEST, TESTING TO LEARN
A Beginner’s Daily QA Series

📅 Day 4 - SYSTEM TESTING & ITS TYPES


SYSTEM TESTING OVERVIEW

What is System Testing?


System Testing is the process of verifying the entire integrated software
application to ensure it meets the client’s requirements.

●​ It is a Black Box Testing technique, which means we test the application


from the user’s perspective without looking into the code or internal logic.
●​ The goal is to ensure the software works exactly as expected in real-world
scenarios.

Why System Testing is Important


1.​ Validates that the system functions correctly as a whole.
2.​ Detects defects that may arise due to integration of different modules.
3.​ Ensures the software meets functional and non-functional requirements.
4.​ Acts as a final check before user acceptance testing (UAT).

TYPES OF SYSTEM TESTING


1) GUI Testing

2) Usability Testing

3) Functional Testing

4) Non-Functional Testing
| Type | Purpose / What it Checks | Example |
| --- | --- | --- |
| GUI Testing (Graphical User Interface) | Verifies the look and feel of the software, ensuring visual elements work correctly. | Checking buttons, icons, text fields, menus, layouts, colors, and fonts. |
| Usability Testing | Ensures the software is easy to use and understand for end-users. | Checking if navigation is simple, menus are clear, and tasks can be performed easily. |
| Functional Testing | Confirms that each function in the software works according to requirements. | Testing login, search, checkout, and other features to ensure they work as intended. |
| Non-Functional Testing | Evaluates software qualities beyond specific functions, such as performance, security, and reliability. | Testing system performance under load, security against unauthorized access, and recovery from errors. |

Note: System Testing is usually done after Integration Testing and before User
Acceptance Testing (UAT).

1.​GUI Testing (Graphical User Interface Testing)


What is GUI Testing?
GUI Testing is a type of System Testing that focuses on the appearance and functionality of a software application's interface.

Goal: Ensure the software looks correct and all interface elements work as intended.


Type: Black Box Testing – testers do not need to know the internal code.

Applicable For: Both web applications and desktop applications.

Why GUI Testing is Important


●​ Confirms the look and feel of the software matches design specifications.
●​ Ensures UI elements are functional.
●​ Improves user experience by making the interface intuitive and attractive.
●​ Detects errors in fonts, colors, alignment, images, and layouts early.

What GUI Testing Checks


1.​ UI Elements: Buttons, icons, text boxes, dropdowns, radio buttons,
checkboxes, links, menus, and overall screen layout.
2.​ Visual Design: Colors, fonts, alignment, and image quality.
3.​ Screen Layout: Sections and overall organization.
4.​ Error Messages: Correct content, color coding (red for errors, yellow for
warnings, etc.), and spelling.
5.​ User Experience: Interface should be intuitive, attractive, and prevent user
frustration.
6.​ Responsiveness: Works correctly across different screen sizes and
resolutions.
7.​ Functionality: All UI elements perform expected actions.
8.​ Scrollbars & Disabled Fields: Scrollbars appear and work properly;
disabled fields behave correctly.
9.​ Hyperlinks: Links display correct colors (normal, hover, visited).
GUI Testing Checklist
Before testing, create a checklist based on the UI Design Document or Wireframes.

A checklist is a structured document listing all the specific visual elements, properties, and interactions of an application's user interface that need to be systematically verified to ensure they meet design specifications and provide a positive user experience.

A wireframe is a basic, skeletal visual guide or "dummy screen" that outlines the structure, layout, and content arrangement of a user interface, serving as a blueprint for testing.

Example checklist includes:

●​ Element Properties: Check size, position, width, and height of all elements.
●​ Error Messages: Verify content, colors, and spelling.
●​ Screen Sections: Ensure all sections are properly aligned.
●​ Fonts: Test readability, colors, and size.
●​ Resolution & Zoom: Test different resolutions and zoom levels.
●​ Alignment: Ensure texts, buttons, icons, and images are properly aligned.
●​ Colors: Verify fonts, backgrounds, and links use correct colors.
●​ Images: Check clarity, size, and placement.
●​ Spelling: All text is free from spelling errors.
●​ User Experience: Interface is attractive and easy to use.
●​ Scrollbars: Function correctly for large pages.
●​ Disabled Fields: Appear correctly and function as expected.
●​ Headings: Properly aligned and distinct from body text.
●​ Hyperlinks: Colors change correctly on hover/click.
●​ UI Functionality: Buttons, text boxes, text areas, checkboxes, radio buttons,
dropdowns, links, etc., perform correctly.
Example of GUI Testing

Scenario: Login screen

Checks:

●​ Buttons: Login button is clickable and redirects correctly.


●​ Text Fields: Accepts correct input formats.
●​ Layout: Username/password fields are aligned; fonts are readable.
●​ Colors: Error messages appear in red.
●​ Images: Logo is clear and properly placed.
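Several of these checks can be automated. Below is a hedged sketch using Selenium (pip install selenium); the URL and element IDs are assumptions for illustration, not a real site:

```python
# Hedged sketch: automating a few login-screen GUI checks with Selenium.
# Assumes a Chrome driver is available and a page at the placeholder URL
# exposes the element IDs used below (all hypothetical).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

logo = driver.find_element(By.ID, "logo")
assert logo.is_displayed(), "Logo should be visible"

login_btn = driver.find_element(By.ID, "login-btn")
assert login_btn.is_displayed() and login_btn.is_enabled()

# Visual property check: error messages should render in red.
error = driver.find_element(By.ID, "error-msg")
print("Error text color:", error.value_of_css_property("color"))

driver.quit()
```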

GUI Testing is all about “Look and Feel” – making sure the software is visually
correct, functional, and user-friendly.

2. USABILITY TESTING
What is Usability Testing?
Usability Testing focuses on how easy, user-friendly, and efficient the software is for end users.

Goal: Ensure that users can understand, navigate, and operate the application without confusion or errors.

●​ Unlike GUI Testing, which checks appearance and visual correctness,


Usability Testing evaluates user experience and ease of use.
●​ Helps identify areas where users might get confused, frustrated, or make
mistakes.
What Usability Testing Checks

1.​ Ease of Use:


○​ Are menus, buttons, and functions easy to find and operate?
○​ Can a new user perform common tasks without help?
2.​ Understandability:
○​ Is the language clear and simple?
○​ Are instructions, messages, and labels easy to understand?
3.​ User Experience:
○​ Does the application feel intuitive and efficient?
○​ Does it prevent errors or reduce user frustration?
4.​ Help Documents & Manuals:
○​ User guides and manuals should be:
■​ Readable: Written in simple, easy-to-understand language.
■​ Accurate: Reflect the current application functionality.
5.​ Context-Sensitive Help:
○​ Features that provide help depending on the user’s current action:
■​ Tooltips: Small hints appearing when hovering over buttons or
fields.
■​ Shortcut Keys: Keyboard shortcuts to perform actions quickly.

Key Principles of Usability Testing

●​ Test the real user workflow, not just individual screens.


●​ Observe user behavior to identify difficulties or confusion.
●​ Collect feedback on user satisfaction, comfort, and overall experience.
●​ Verify that help options, tooltips, and shortcuts actually make tasks easier.
Example of Usability Testing

Scenario: Online shopping checkout process

Checks:

●​ Can a user easily add items to the cart?​

●​ Is it clear how to apply a discount code?​

●​ Are error messages understandable (e.g., “Invalid card number”)?​

●​ Does the checkout flow feel intuitive without instructions?

Note:

●​ Usability Testing is all about the ease of using the application.​

●​ Ensures that users can access all functions efficiently.​

●​ Help documents, manuals, and context-sensitive help must be readable,


accurate, and user-friendly.

Key Takeaway:

●​ GUI Testing = “Look and Feel” → checks appearance and visual


correctness.
●​ Usability Testing = “Ease of Use” → Checks how a user can interact with
the application without confusion.
●​ Both are crucial, but Usability Testing focuses on user satisfaction,
efficiency, and comfort.
⭐3. FUNCTIONAL TESTING ⭐
What is Functional Testing?
Functional Testing checks the behavior of the application.

Goal: Ensure that features and functionalities work as expected according to the customer’s requirements.

Example: If a user searches for a product in an e-commerce app, the search function should return the correct results.

Types of Functional Testing


1.​ Object Properties Testing​

2.​ Database Testing​

3.​ Error Handling Testing​

4.​ Calculations / Manipulations Testing​

5.​ Links Testing​

6.​ Cookies & Sessions Testing​

1.​ Object Properties Testing

●​ Every element on the user interface (UI) is considered an object.​

●​ Examples of objects:​

○​ Input boxes (e.g., First Name, Last Name)​


○​ Buttons (e.g., Submit, Cancel)​

○​ Dropdowns​

○​ Radio buttons​

○​ Links​

○​ Images and Tables​

●​ Each object has properties (also called attributes) that define its behavior
and appearance.
●​ Object Properties Testing ensures that these objects behave correctly
according to their properties.

Common Object Properties to Test


1.​ Enable / Disable – Can the user interact with the object?
○​ Example: An input box may be enabled for text entry or disabled to prevent editing.
2.​ Visible / Hidden – Is the object displayed on the screen?
○​ Example: A “Submit” button should be visible only after filling required fields.
3.​ Focus – Does the cursor automatically move to the correct input field?
○​ Example: After entering the first name, the cursor should automatically focus on the last name field.

4.​ Dropdown Properties​

○​ Single-select or multi-select options.​

○​ Number of available options.​


5.​ Radio Button Properties
○​ Only one option can be selected at a time.
○​ Selecting one option disables the others.
6.​ Link Properties
○​ Color before and after clicking.
○​ Opens the correct target page.
7.​ Other Visual Elements
○​ Colors, fonts, alignment, images, and buttons must follow design
specifications.

Example Scenario: A login form.

●​ Objects & Properties to Test:


○​ Username input box – enabled, visible, cursor focuses here first.
○​ Password input box – enabled, hidden characters (****), cursor moves
here automatically.
○​ Login button – visible, clickable, correct color.
○​ Forgot password link – visible, clickable, opens the correct page.
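These property checks can also be scripted. A hedged Selenium sketch, again assuming a hypothetical login page and element IDs:

```python
# Hedged sketch of object-properties checks for a login form
# (URL and element IDs are assumptions, not a real site).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

username = driver.find_element(By.ID, "username")
password = driver.find_element(By.ID, "password")

# Enabled/visible properties.
assert username.is_enabled() and username.is_displayed()

# Password characters should be masked (an HTML password field).
assert password.get_attribute("type") == "password"

# Focus property: after page load, the active element should be the username box.
assert driver.switch_to.active_element == username

driver.quit()
```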

Key Takeaway:

Object Properties Testing ensures each UI element behaves correctly, so users can interact with the application without issues.

2. Database Testing
What is Database Testing?

Database Testing is a type of Functional Testing that ensures the application interacts correctly with its database (back-end).

Goal: Verify that data entered through the front-end is stored, updated, deleted,
and retrieved correctly in the database.
●​ Also called Back-End Testing.

In Functional Testing, testers mainly focus on DML operations (Data Manipulation Language) from the front-end perspective. Advanced or in-depth database testing is usually performed by database testers or DBAs.

Key Concepts
1.​ Object Properties vs Database Testing

Object Properties Testing: Checks UI elements (buttons, text boxes, links) and
their behavior.

Database Testing: Ensures the data entered through UI elements is correctly


reflected in the database.

2.​ Gray Box Testing

Database testing is considered Gray Box Testing because it involves partial knowledge of the internal database structure.

Focus Areas for Functional Database Testing


Functional testers usually focus on DML operations to ensure data manipulation
is correct. Basic SQL knowledge is essential for functional testers. This can include
MySQL, Oracle, SQL Server, etc.

1.​ Insert – Adding new data

Example: Registering a new user; user details should appear in the users table.

2.​ Update – Modifying existing data

Example: Updating a user’s email via the profile page; changes should reflect in
the database.

3.​ Delete – Removing data

Example: Deleting a product from the cart; it should be removed from the database
table.​
4.​ Select / Retrieve – Fetching data

Example: Viewing orders placed by a user; data should be retrieved correctly.

Advanced Database Testing (For Database Testers)

For testers with strong database knowledge, additional areas include:

●​ Table and Column Validation: Check column types, lengths, constraints.


●​ Relationships / Normalization: Verify that data entered in one table is
correctly linked to other tables.
○​ Example: Employee info in Employee table; department info in
Department table.
●​ Queries: Joins, subqueries, and set operations for retrieving complex data.
●​ Functions, Procedures, Triggers, Views, Indexes: Verify database logic
implemented by developers.
●​ Requires PL/SQL or advanced SQL knowledge.

Note: These are mostly back-end tasks and not part of functional testing for
beginners. Functional testers focus on ensuring front-end actions reflect
correctly in the database.

Example: Registration Form (Functional Database Checks)

Scenario: A user registers on an e-commerce app

Steps:

1.​ Enter user details in the registration form.


2.​ Submit the form (front-end operation).
3.​ Verify in the database (users table) that the user record is created.
4.​ Update the user’s address in the profile and check that it is reflected in the
database.
5.​ Delete the user and verify the record is removed.
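The same front-end-to-database checks can be sketched with Python's built-in sqlite3 module. A real project would run the equivalent SQL against the application's actual database (MySQL, Oracle, etc.); the table and data here are illustrative:

```python
# Minimal, self-contained sketch of functional database checks using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# 1. Insert (what the registration form should do behind the scenes).
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Asha", "asha@mail.com"))

# 2. Select: verify the record really landed in the table.
row = conn.execute("SELECT name, email FROM users WHERE name = ?", ("Asha",)).fetchone()
assert row == ("Asha", "asha@mail.com"), "Registered user not found in DB"

# 3. Update: a profile change must be reflected in the database.
conn.execute("UPDATE users SET email = ? WHERE name = ?", ("new@mail.com", "Asha"))
assert conn.execute("SELECT email FROM users WHERE name='Asha'").fetchone()[0] == "new@mail.com"

# 4. Delete: removing the user must remove the row.
conn.execute("DELETE FROM users WHERE name = ?", ("Asha",))
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
conn.close()
```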
Database Testing / Back-End Testing Checklist

●​ Checking database operations with respect to user actions


●​ DML – Data Manipulation Language:
○​ Insert
○​ Update
○​ Delete
○​ Select
●​ Table & Column level validations: Column type, length, number of
columns
●​ Relations between tables (Normalization)
●​ Functions, Procedures, Triggers, Indexes, Views, etc.

Key Takeaway:​
Database Testing ensures that user actions on the application are accurately reflected in the database.

3. Error Handling Testing

What is Error Handling Testing?

Error Handling Testing checks how an application responds when users perform invalid or unexpected actions. It ensures that:

●​ The system detects the error correctly


●​ The system displays a meaningful and clear message
●​ The application does not crash but continues to work smoothly

Why is it Important?

●​ Users often make mistakes (wrong input, missing data, wrong password, etc.)
●​ A good application should guide users with clear and helpful messages
●​ Generic or unclear messages confuse users and reduce application quality

Key Things to Verify
1.​ Error is triggered correctly
○​ If wrong input is given, the system must show an error.
○​ If no error appears, that is a serious bug.
2.​ Error messages are clear and specific
○​ ❌ Bad message: “Invalid Data” (too generic, unclear)
○​ ✅ Good message: “Invalid Username” or “Invalid Password”
(specific, user knows what to correct)
3.​ Language of the error
○​ Should be simple, easy to understand, and readable
○​ Avoid technical terms like “Null Pointer Exception” for end users
4.​ Types of messages to check
○​ Error Messages → when incorrect data is entered
○​ Warning Messages → when action may cause an issue
(e.g., leaving unsaved changes)
○​ Information Messages → extra guidance for the user
(e.g., “Password must be at least 8 characters”)

Example Scenario: Login Page

●​ User enters wrong username/password →


○​ ❌ “Invalid Data” → too vague
○​ ✅ “Invalid Username” or “Invalid Password” → clear & helpful
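The expectation is easy to capture in a test. A minimal sketch where `login` stands in for the application's real validation logic (the function and messages are illustrative):

```python
# Hedged sketch: asserting that error messages are specific, not generic.
def login(username, password):
    users = {"asha": "secret123"}
    if username not in users:
        return "Invalid Username"
    if users[username] != password:
        return "Invalid Password"
    return "OK"

# The system must point at the exact problem, never just "Invalid Data".
assert login("unknown", "x") == "Invalid Username"
assert login("asha", "wrong") == "Invalid Password"
assert login("asha", "secret123") == "OK"
```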

Benefits of Error Handling Testing

●​ Improves user experience


●​ Prevents confusion and frustration
●​ Makes the system more professional and reliable
●​ Ensures application meets UI/UX design guidelines (wireframes/specs)
📌 In summary:​
Error Handling Testing ensures that applications not only detect incorrect actions
but also communicate them clearly and helpfully to users through proper error,
warning, and informational messages.

4. Calculations / Manipulations Testing


What is Calculations/Manipulations Testing?

●​ This is a type of Functional Testing where the focus is on verifying whether all calculations, numeric operations, and data manipulations are happening accurately in the application.
●​ Any system dealing with numbers (finance, payroll, shopping cart, banking,
insurance, billing, etc.) must undergo this testing.

Why is it Important?

Even a small error in calculation can lead to:

●​ Monetary losses (banking, payroll, e-commerce).


●​ Wrong decisions based on incorrect data (business dashboards, tax systems).
●​ Customer dissatisfaction or legal issues.

What Do We Test?

1.​ Mathematical Accuracy


○​ Are addition, subtraction, multiplication, division correct?
○​ Example: 2 items costing ₹200 each → Total should be ₹400.

2.​ Percentage & Tax Calculations


○​ Discounts, GST, VAT, interest rates.
○​ Example: 10% discount on ₹1000 should reduce price to ₹900.
3.​ Rounding Off Rules
○​ Round up vs Round down handling.
○​ Example: 99.567 rounded to 2 decimals should be 99.57.
4.​ Date/Time Calculations
○​ Number of days worked, tenure, subscription period.
○​ Example: Employee joined on 01-Jan, resigned on 31-Mar → 90 days.
5.​ Currency / Unit Conversions
○​ Dollars to INR, kg to lbs.
○​ Example: $1 = ₹80 → $10 should be ₹800.
6.​ Bonus / Salary / Incentive Calculations
○​ Based on joining date, years of experience, and relieving date.
○​ Example: Bonus = 1 month salary for each year of service.

Real-Life Examples

1.​ E-commerce Site:


○​ Cart total = Sum of (Price × Quantity) – Discounts + Taxes.
○​ Verify each step separately.
2.​ Banking Application:
○​ EMI calculation → (P × R × (1+R)^N) / ((1+R)^N – 1)
○​ Check principal, interest, tenure inputs are correctly used.
3.​ Payroll System:
○​ Salary = Basic + HRA + Allowances – Deductions – Taxes.
○​ Verify formulas for each component.
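These checks translate directly into small assertions. A worked sketch of the cart total and the EMI formula quoted above; all figures are illustrative:

```python
# Worked sketch of calculation checks: cart total and EMI.

def cart_total(items, discount=0.0, tax_rate=0.0):
    subtotal = sum(price * qty for price, qty in items)
    return round((subtotal - discount) * (1 + tax_rate), 2)

# 2 items at Rs.200 each -> Rs.400; 10% discount on Rs.1000 -> Rs.900.
assert cart_total([(200, 2)]) == 400.00
assert cart_total([(1000, 1)], discount=100) == 900.00

def emi(principal, annual_rate_pct, months):
    r = annual_rate_pct / 12 / 100  # monthly interest rate
    # EMI = (P x R x (1+R)^N) / ((1+R)^N - 1)
    return round(principal * r * (1 + r) ** months / ((1 + r) ** months - 1), 2)

print(emi(100000, 12, 12))  # EMI for Rs.1,00,000 at 12% p.a. over 12 months
```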

Defects Commonly Found

●​ Wrong formula applied.


●​ Calculation done but wrong rounding off.
●​ Missing calculation in edge cases (e.g., zero items in cart)
●​ Different results across modules (backend vs UI mismatch)

Summary

●​ Focus: Ensure all numeric & logical calculations are 100% accurate.
●​ Includes: Prices, discounts, taxes, salaries, bonuses, conversions.
●​ Example: Shopping cart → final bill must be correct.

5. Links Testing
What is Links Testing?

Links Testing in functional testing ensures that all the hyperlinks in an application:

1.​ Exist (are present on the page).


2.​ Work correctly (navigate to the correct target).

This guarantees smooth navigation and prevents broken user journeys.

Types of Links

1)​ Internal Links


●​ Navigate to a different section of the same page or another page within the
same website.
●​ Example: In Wikipedia, clicking on “History” scrolls down to the “History”
section in the same article.​
●​ Testing: Check if the link correctly moves to the section without errors.
2)​ External Links
●​ Navigate to a different website or domain.
●​ Example: On a news portal, clicking a “Follow us on Twitter” link opens
Twitter in a new tab.
●​ Testing: Ensure the correct external site opens (not outdated or malicious).
3)​ Broken Links
●​ Links that don’t work and show errors like 404 Not Found or Server
Error.
●​ Example: Clicking on “Careers” page → site says This page doesn’t exist.
●​ Testing: Identify broken links using manual checks or tools (e.g., Screaming
Frog, Selenium scripts).

What to Test in Links

1.​ Link Existence


○​ Verify all expected links are present on the page.
2.​ Link Execution
○​ Clicking a link should navigate to the correct target page/section.
3.​ Correct Behavior
○​ Internal links → Navigate within site.
○​ External links → Open in new tab (if expected).
○​ Broken links → Should not exist; if found, must be reported.
4.​ Performance of Links
○​ Verify page opens within reasonable time (not stuck or blank).

Examples

●​ E-commerce Site:
○​ Clicking “Cart” should always open the cart page.
○​ “Terms & Conditions” (external link) should open the correct legal
site.
●​ University Website:
○​ Clicking “Admissions” → opens Admissions section (internal).
○​ “Accreditation” → opens UGC/NAAC official page (external).
○​ Any link giving “Page Not Found” → Broken.

Common Issues Found

●​ Links redirecting to the wrong page.


●​ External links opening in the same tab (breaking session flow).
●​ Broken links after a new deployment.
●​ Old cached links still showing even if page was removed.

Summary

●​ Links Testing ensures all navigation paths work correctly.


●​ Types: Internal, External, Broken.
●​ Focus: Existence + Execution.
●​ Tools: Manual clicks, Selenium, automated link checkers.
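A simple link checker can automate the existence and execution checks. A hedged sketch using requests and BeautifulSoup (pip install requests beautifulsoup4); the base URL is a placeholder:

```python
# Hedged sketch of an automated broken-link check.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base_url = "https://example.com"
html = requests.get(base_url, timeout=10).text

for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
    url = urljoin(base_url, a["href"])  # resolve relative (internal) links
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        print("BROKEN:", url, exc)
        continue
    if status >= 400:  # 404 Not Found, 500 Server Error, ...
        print("BROKEN:", url, status)
```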

6. Cookies & Sessions Testing


What is a Web Application Architecture?

●​ Browser (Client / Front-end):​


Where users access the application using a URL.
●​ Server (Back-end):​
Stores all web pages, application logic, and databases.
●​ Request & Response Flow:
○​ Browser sends a request (like login, page access).
○​ Server processes it and sends a response (like homepage, OTP, data).

🍪 What are Cookies?


●​ Small temporary files created at the browser level.
●​ Used to store user-related information (preferences, login data, cart items).

Example:

You add products to a cart → refresh page → items still there (because of cookies).
⏳ What are Sessions?
●​ Time slots created by the server when a user is active on the application.
●​ If the user is idle for a set time, the session will expire.
●​ After session expiry:
○​ System logs you out, or
○​ Asks you to login again.

Example:

You log in to a banking site → keep the page idle for 10 mins → system logs you
out → you must log in again.

🔎 Cookies vs Sessions
| Aspect | Cookies (Browser) | Sessions (Server) |
| --- | --- | --- |
| Location | Stored in the browser | Managed by the server |
| Purpose | Store user data temporarily | Manage the user's activity time slot |
| Expiry | When the browser is closed or cookies are manually cleared | After idle timeout or a defined time |
| Example | Remembering login / cart items | Auto logout after 10–15 mins of inactivity |

How to Test Sessions?


1.​ Login to application.​

2.​ Stay idle (don’t perform any action).​

3.​ Wait until the configured timeout (e.g., 5 mins, 10 mins).​

4.​ Try to perform an action → Application should:


○​ Expire session, and
○​ Redirect to login page OR show “Session expired” message.
How to Test Cookies?
1.​ Login to application.
2.​ Close the browser → reopen the browser.
3.​ Revisit the site:
○​ If cookies are set properly → You stay logged in.
○​ If cookies are not handled → You are logged out unexpectedly (bug).
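The cookie steps above can be automated with Selenium. A hedged sketch; the site URL and element IDs are assumptions, and real cookie dictionaries may need minor cleanup before add_cookie accepts them:

```python
# Hedged sketch of a cookie-persistence check: log in, carry the cookies
# into a brand-new browser session, and verify the user is still logged in.
from selenium import webdriver
from selenium.webdriver.common.by import By

first = webdriver.Chrome()
first.get("https://example.com/login")
first.find_element(By.ID, "username").send_keys("asha")
first.find_element(By.ID, "password").send_keys("secret123")
first.find_element(By.ID, "login-btn").click()
cookies = first.get_cookies()  # snapshot the cookies the site set
first.quit()                   # "close the browser"

second = webdriver.Chrome()    # "reopen the browser"
second.get("https://example.com")
for c in cookies:
    second.add_cookie(c)       # some fields may need stripping in practice
second.refresh()
# If cookies are handled properly, the logout link is visible without a new login.
assert second.find_element(By.ID, "logout-link").is_displayed()
second.quit()
```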

Cookies & Sessions Testing (in Functional Testing)


Ensures:

●​ Proper session expiry handling.


●​ Proper persistence of cookies.
●​ No unauthorized access after session expiry.
●​ No login issues when cookies are cleared.

Summary
●​ Object Properties Testing – Checks if UI elements (buttons, textboxes)
have correct properties and behave as expected.
●​ Database Testing – Ensures data is stored, retrieved, and managed correctly
in the database.
●​ Error Handling Testing – Verifies proper error messages and system
stability during invalid operations.
●​ Calculation/Manipulation Testing – Confirms numeric and data operations
produce accurate results.
●​ Links Testing – Validates internal, external, and broken links for correct
navigation.
●​ Cookies & Sessions Testing – Tests login persistence and proper handling
of user session data.
📘 LEARNING TO TEST, TESTING TO LEARN
A Beginner’s Daily QA Series

📅 Day 5 - NON-FUNCTIONAL TESTING


NON-FUNCTIONAL TESTING
After functional testing (checking whether features work according to requirements), we test the non-functional aspects of an application.

Non-functional testing focuses on:


●​ Performance: Speed & responsiveness of the application
●​ Security: Protecting data & preventing unauthorized access
●​ Recovery: Application’s ability to recover from failures
●​ Installation & Configuration: Verifying proper setup on different systems
●​ Compatibility: Application works on different devices, OS, browsers
●​ Sanitation/Garbage Testing: Checks for unwanted or leftover data

Key Point:

●​ Non-functional testing is performed only after functional testing is stable.


●​ It needs dedicated teams and special environments (cannot use the same
environment as functional testing).
●​ Requires special tools and skill sets (like performance testing engineers,
security testing experts).

Functional vs Non-Functional Testing

●​ Functional Testing: Tests “what the application does.” Example: Are all
features working as per the customer’s requirements?
●​ Non-Functional Testing: Tests “how well the application works.”
Example: How fast is it? Is it secure? Does it handle high traffic?
Why is Non-Functional Testing Complex?

●​ Needs separate teams for tasks like performance or security testing.


●​ Needs separate environments for simulating real-world conditions.
●​ Example: To test 1000 users accessing an app at the same time, we need
virtual users (dummy users created with tools) instead of physically
arranging 1000 people & computers.

⭐ TYPES OF NON-FUNCTIONAL TESTING ⭐


1.​ PERFORMANCE TESTING (5 SUBTYPES)

○​ Load Testing

○​ Stress Testing

○​ Endurance Testing

○​ Spike Testing

○​ Volume Testing

2.​ SECURITY TESTING

3.​ RECOVERY TESTING

4.​ COMPATIBILITY TESTING

5.​ CONFIGURATION TESTING

6.​ INSTALLATION TESTING

7.​ SANITATION / GARBAGE TESTING


1.​PERFORMANCE TESTING – Overview
●​ Performance generally refers to speed, but it also includes other factors like
stability, scalability, and responsiveness.
●​ To perform performance testing, special requirement documents are
needed from the client (different from functional testing documents).

Key Details to Collect from the Client Before Performance Testing:

1.​ Number of users: Daily and concurrent users (parallel users).


2.​ Usage duration: How long users stay active on the app (10 hrs/day, 24
hrs/day, etc.).
3.​ Peak traffic days: For example, e-commerce apps have high traffic on sale
days.
4.​ Database size: How much data the app needs to handle.
5.​ Concurrent usage details: Approximate number of users accessing at the
same time.

1. LOAD TESTING (First Type of Performance Testing)​

Load Testing checks how the application performs under the expected load

(expected number of users or transactions defined by the client).

Key Points:

●​ Example: If a client expects 1000 concurrent users, we test performance with exactly 1000 virtual users, not more or less.
●​ If a client expects 100 users, we test with 100 users.
What is Measured in Load Testing?

●​ Response Time: Time taken by the server to send back a response after
receiving a request.
●​ Turnaround Time: Total time from sending the request to receiving the
response (Request Time + Response Time).
●​ Throughput: Number of requests processed by the server per second.
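These metrics can be approximated even without JMeter or LoadRunner. A minimal Python sketch that simulates virtual users with threads and reports average response time and throughput (the URL and user count are placeholders):

```python
# Minimal sketch of measuring response time and throughput under load.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL, VIRTUAL_USERS = "https://example.com", 50  # expected concurrent load

def one_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start          # response time in seconds

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    times = list(pool.map(one_request, range(VIRTUAL_USERS)))
elapsed = time.perf_counter() - t0

print(f"avg response time: {sum(times)/len(times):.3f}s")
print(f"throughput: {len(times)/elapsed:.1f} requests/second")
```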

Why Use Tools for Load Testing?

●​ Setting up 1000 physical users and computers is not practical.


●​ Tools help create virtual users that simulate real user traffic.
●​ Popular Performance Testing Tools:
○​ Apache JMeter (Open Source – free)
○​ LoadRunner (Licensed – paid)

Example of Load Testing:

●​ A new application expects 1000 concurrent users.


●​ We create a load test script and configure it for 1000 virtual users.
●​ Run it for 1 hour and measure:
○​ Response time
○​ Throughput
○​ Error rate, etc.
●​ After the test, analyze how the application behaves at expected peak load.

Key Takeaways:

●​ Non-functional testing focuses on “how well” an app works (speed, security,


recovery).
●​ Load Testing is the most common performance test, always done with the
expected load.
●​ Response time, turnaround time, and throughput are key metrics measured.
●​ Tools like JMeter and LoadRunner are essential for simulating users
realistically.

2. STRESS TESTING

●​ Stress Testing is a type of performance testing where we check how an application performs when the load is much higher than the expected load.
●​ The goal is to test the application’s stability and breaking point under
extreme conditions.

Key Difference Between Load vs Stress Testing

Load Testing:

●​ Tests the performance under expected load (the load given by the client).
●​ Example: If the client says 1000 users are expected, we test only with 1000
users.
●​ Objective: To verify whether the application can handle the expected user
load smoothly.

Stress Testing:

●​ Tests the performance under unexpected load, beyond the expected load (higher than the client’s expectation).
●​ Example: If the client says 1000 users, we test with 1200, 1300, or more
users.
●​ Objective:
○​ To intentionally break the application and find its breaking point
(the point where it stops responding or crashes).
○​ To evaluate how the system behaves under extreme stress.

How Stress Testing is Performed?

●​ Start with the expected load (e.g., 1000 users).


●​ Gradually increase the number of virtual users beyond the limit:
○​ Add 50, 100, or more users step by step.
○​ Example: Test with 1100 → 1200 → 1300, etc.
●​ Keep increasing until the application fails to respond properly.
●​ The point of failure is the breaking point, and this is reported to the client.

Example:​
If the expected load is 100 users:

●​ Load Testing → Test only with 100 users.


●​ Stress Testing → Test with 120, 130, 150 users until the application breaks.
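The step-by-step ramp can be sketched in Python as well. This hedged example reuses the threaded-request idea from the load-testing sketch; the URL, load steps, and 5% error threshold are all assumptions:

```python
# Hedged sketch of a stress ramp: keep raising the virtual-user count past
# the expected load until the error rate reveals the breaking point.
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com"

def error_rate(users):
    def hit(_):
        try:
            return requests.get(URL, timeout=10).status_code >= 500
        except requests.RequestException:
            return True  # timeout or refused connection counts as a failure
    with ThreadPoolExecutor(max_workers=users) as pool:
        failures = sum(pool.map(hit, range(users)))
    return failures / users

for users in (100, 120, 150, 200):  # expected load is 100; go beyond it
    rate = error_rate(users)
    print(f"{users} users -> {rate:.0%} errors")
    if rate > 0.05:                 # breaking-point threshold (assumed)
        print("Breaking point reached near", users, "users")
        break
```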

What Metrics Are Measured in Stress Testing?

Stress testing uses the same parameters as load testing:

●​ Response Time: Time taken by the server to respond after receiving a request.
●​ Turnaround Time: Request + Response time (total time to complete the
operation).
●​ Throughput: Number of requests processed by the server per second.

Additional Focus in Stress Testing:

●​ Breaking Point Identification:


○​ Where exactly does the application crash or fail to respond correctly?
○​ How much load beyond the expected limit can it handle?

Why is Stress Testing Important?

●​ Helps identify the maximum capacity of the system.


●​ Ensures that the application won’t suddenly crash when traffic
unexpectedly spikes.
●​ Provides insights for developers to improve scalability and stability.

What Happens If Performance is Poor?

During stress (or any performance) testing, if you notice poor performance (slow
response, crashes):

●​ The issues are reported to the development team.


●​ Developers apply optimization techniques:
○​ Improve code logic and algorithms (frontend & backend).
○​ Optimize database queries and size.
○​ Enhance network performance.

After optimization, performance is re-tested until it meets requirements.

Real-Life Example:

For an application with an expected load of 1000 concurrent users:

●​ Load Test: Run with 1000 users for 1 hour.


●​ Stress Test: Run with 1200, 1300, or more users.
●​ Identify at what point (e.g., 1500 users) the application stops responding
properly.

Key Takeaways (Load vs Stress Testing)

●​ Load Testing Objective: Can the system handle expected load?


●​ Stress Testing Objective: Where is the breaking point?
●​ Intent in Stress Testing: Intentionally push the system until it fails to
understand its limits.
●​ Both are essential parts of performance testing and provide different
insights.
3. ENDURANCE TESTING (SOAK TESTING)
●​ Endurance testing is a type of Performance Testing.
●​ It is also called Soak Testing.
●​ The main goal is to check if the system can handle the expected load
continuously for a long period of time without performance issues.
●​ Key focus areas:
○​ Time (long duration)
○​ Stability of the application

How is it Different from Load and Stress Testing?


Load Testing:
●​ Tests application with expected load (e.g., 100 users)
●​ Focus is on how the application performs under normal expected usage.
●​ Time is not the main factor; it's about performance under expected user
load.

Stress Testing:
●​ Tests application by going beyond the expected load (e.g., 1200 users for
an app designed for 1000 users).
●​ The goal is to break the application and find its breaking point.

Endurance Testing:
●​ Tests application with expected load only (we do not go beyond the
expected load).
●​ Time is the main factor.
●​ We check if the application remains stable and performs well over an
extended duration (e.g., half a day, 1 day, 2 days, a week, or even a month).
Why is Endurance Testing Important?
●​ Over a long duration, some applications may face:
○​ Memory leaks (slowly consuming more memory until failure)
○​ Performance degradation (slowing down over time)
○​ Stability issues
●​ Endurance testing helps detect these long-term issues early.

How is Endurance Testing Performed?


1. Set up the expected load:
●​ Example: Customer says 100 users will use the application at the same time.
●​ We test with exactly 100 users (whatever the customer’s expected load is).

2. Run the test for an extended time period:


●​ Example: Run the system continuously for 24 hours, 2 days, or more
depending on requirements.

3. Monitor application performance:


●​ Check stability (Is the app still working fine after long hours?)
●​ Check memory utilization (Is memory leaking or increasing abnormally?)
●​ Check response time (Is the app slowing down?)

4. Compare results with expected performance:

●​ If performance matches expectations → System is stable.


●​ If performance degrades → Report issues to developers.

5. Optimization : Developers apply fixes and optimizations to improve stability


and performance if issues are found.
Example of Endurance Testing

Income Tax Filing Application:

●​ Used by thousands of users continuously for long hours/days.


●​ Memory management is critical to avoid crashes.
●​ Run test for 24 hours to 2 days and monitor memory and stability.

Key Points to Remember


●​ Endurance testing = Stability testing over time.
●​ We do NOT exceed expected load.
●​ We focus on:
○​ Time duration
○​ Memory leaks
○​ Long-term stability

4. SPIKE TESTING
Spike Testing is a type of performance testing.​
It is used to analyze the behavior of an application when the number of users
suddenly increases or decreases.

The main purpose is to check:

1.​ How the application responds to an unexpected, sudden load.

2.​ Whether it can recover properly after the sudden spike in traffic.
Key Characteristics of Spike Testing
1. Sudden Load Increase
●​ Unlike stress testing (where load increases gradually), in spike testing the
load increases abruptly.
●​ Example: Load jumps from 100 users to 200 users suddenly.

2. Sudden Load Decrease:


●​ The load is also suddenly reduced back to normal levels (e.g., from 200
users back to 100 or even 80).
●​ This pattern is sometimes called a zig-zag load pattern (increase and decrease suddenly).

3. Beyond Expected Load:


●​ Spike testing usually tests the system beyond its normal or expected
traffic levels to see if it breaks or recovers.

Why is Spike Testing Important?

●​ To evaluate the stability and recovery capability of an application when


unexpected user traffic occurs.
●​ To ensure the application:
○​ Does not crash suddenly.
○​ Handles unexpected spikes efficiently.
○​ Returns to normal operation smoothly after the spike.

Example Scenario

Consider an e-commerce application like Amazon or Flipkart:


●​ During special sales or advertisement campaigns, huge discounts attract
many users.
●​ The number of users suddenly increases within a very short time.
●​ Spike testing checks:
○​ Can the application handle this sudden spike?
○​ Does it remain stable during and after the sudden traffic surge?

Difference Between Spike Testing and Stress Testing


Stress Testing:
●​ Load is increased gradually (e.g., 100 → 120 → 150 → 200).
●​ Aim: To find the breaking point of the system.

Spike Testing:
●​ Load is increased and decreased suddenly and unexpectedly
(e.g., 100 → 200 → 90 → 150).
●​ To test how the system behaves and recovers under sudden changes in load.

Main Focus of Spike Testing


●​ Performance under sudden load surge.
●​ Application stability and recovery.
●​ Identifying:
○​ If the system breaks under sudden traffic.
○​ Any bottlenecks or performance issues during spike conditions

Summary
Spike testing analyzes how well an application performs when the load
suddenly increases beyond expected limits and whether it recovers properly
afterward.
●​ It is especially important for applications that can face sudden traffic bursts
(e.g., e-commerce sites, ticket booking systems).
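The zig-zag pattern can be written down as a simple load profile. A tiny illustrative sketch; each phase could be fed to the thread-based loader from the load-testing sketch above:

```python
# Illustrative spike (zig-zag) load profile: abrupt jumps, not gradual ramps.
spike_profile = [
    (100, 300),  # (virtual users, seconds) -- normal load for 5 minutes
    (200, 60),   # sudden spike to double the load
    (90, 120),   # sudden drop below normal
    (150, 60),   # second spike
    (100, 300),  # back to normal: does the app recover cleanly?
]
for users, seconds in spike_profile:
    print(f"run {users} virtual users for {seconds}s")
```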

5. VOLUME TESTING
Volume Testing checks how the application performs when it has to handle a
large amount of data at once.

It focuses on:
●​ How the application processes, stores, and retrieves data.
●​ Whether the database and application remain stable with high data
volumes.
●​ Identifying any performance bottlenecks caused by data growth.

Purpose of Volume Testing – Why is it Important?

●​ Ensures that:
○​ The system can handle growth of data over time.
○​ The response time does not degrade when data increases.
○​ There are no crashes, memory issues, or slow queries.
○​ The database design and indexing can support high data loads.
●​ Especially important for:
○​ Banking apps (millions of transactions).
○​ E-commerce platforms (huge product & order records).
○​ Social media platforms (billions of posts).

How is Volume Testing Performed?

Volume testing can be done in two main ways:

1.​ Inserting a huge amount of data into the database.


○​ Example: Add millions of rows into user/order tables.
2.​ Providing the application with a large file or data for processing.
○​ Example: Uploading a file with millions of records for import.

Steps followed:

1.​ Prepare large test data.


2.​ Insert the data into the application/database.
3.​ Execute performance tests (queries, transactions, reports).
4.​ Measure:
○​ Response time (is it slower?).
○​ Memory usage (any leaks?).
○​ CPU usage.
○​ Database query performance.
5.​ Identify bottlenecks and fix them (e.g., indexing, query optimization).
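Both approaches can be sketched with sqlite3: bulk-insert a large dataset, time a typical query, and show how indexing fixes the bottleneck. Table, data, and sizes are illustrative:

```python
# Minimal sketch of a volume test: large insert, then query timing.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

# Bulk-insert one million rows (a real test would target the actual DB).
rows = ((f"user{i % 1000}", i * 1.5) for i in range(1_000_000))
conn.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)", rows)

start = time.perf_counter()
count, total = conn.execute(
    "SELECT COUNT(*), SUM(amount) FROM orders WHERE customer = 'user42'"
).fetchone()
print(f"query over 1M rows: {time.perf_counter() - start:.3f}s "
      f"({count} rows, total {total:.2f})")

# Adding an index is a typical fix for the slow-query bottleneck:
conn.execute("CREATE INDEX idx_customer ON orders(customer)")
start = time.perf_counter()
conn.execute("SELECT COUNT(*) FROM orders WHERE customer = 'user42'").fetchone()
print(f"same query with index: {time.perf_counter() - start:.3f}s")
conn.close()
```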

What Bottlenecks Can Volume Testing Detect?

●​ Slow queries due to lack of proper indexing.


●​ Memory leaks or crashes due to insufficient memory.
●​ Storage issues (e.g., database runs out of space).
●​ Long backup/restore times.
●​ Data retrieval delays (e.g., search queries take too long).

Example of Volume Testing (Real-World)

●​ Scenario:​
A newly developed e-commerce application.
●​ Expected Growth:​
Millions of customers & orders in future.
●​ Volume Test Plan:
○​ Insert millions of rows into product, customer & order tables.
○​ Perform typical operations (search products, place orders, generate
reports).
●​ Goal:
○​ Identify any performance issues early.
○​ Ensure app works well even with massive data growth.

COMPARISON OF PERFORMANCE TESTING TYPES

| Aspect | Load Testing | Stress Testing | Endurance (Soak) Testing | Spike Testing | Volume Testing |
| --- | --- | --- | --- | --- | --- |
| Goal | Check if the system performs well under expected user load. | Find the breaking point of the system by testing beyond normal limits. | Verify long-term stability under continuous expected load. | Check how the system reacts to a sudden surge or drop in traffic. | Test system performance with huge amounts of data. |
| Load Pattern / Method | Gradually increase load (users or transactions) to expected levels. | Apply extreme or unrealistic load beyond expected traffic to find failure points. | Keep the load constant (expected level) for a long duration (hours/days). | Apply a short burst of sudden traffic spikes (instant increase/decrease). | Insert or process large datasets or high data transfer rates. |
| Focus Area | Response time, throughput, stability under expected usage. | Stability under extreme conditions; how the system fails and recovers. | Memory leaks, performance degradation over time, resource usage trends. | How quickly the app recovers from sudden spikes; failure handling. | Database and storage handling; query performance with huge data. |
| Expected Load | Within normal expected user levels. | Beyond expected load (overload conditions). | Expected load for a long continuous duration. | Sudden traffic changes (spikes in a short time). | Huge data volume, not just user load. |
| Example Scenario | E-commerce app with 10,000 daily users – test with 10,000 users. | Push the app to 50,000+ users suddenly to see when it crashes. | Run the app with 10,000 users continuously for 72 hours. | A TV ad for an e-commerce app causes an instant surge in traffic – test a sudden 30,000 users at once. | Insert millions of records into the DB and run queries to check performance. |

Key Differences Explained (in Simple Words)

●​ Load vs Stress:​
Load = Normal usage​
Stress = Push beyond normal to find failure point.
●​ Load vs Endurance:​
Both use expected load, but Endurance tests long-term effects, e.g.,
memory leaks.
●​ Spike vs Load/Stress:​
Spike is about sudden instant change in traffic, while load/stress are more
gradual.
●​ Volume vs Others:​
Others focus on users/traffic, Volume focuses on data size in DB.
2. SECURITY TESTING​
Security Testing ensures that the software is secure and protects sensitive
information. Its goal is to identify potential vulnerabilities that could be exploited
by hackers, and to ensure that only authorized users can access the application and
its data.

Key Aspects of Security Testing


1. Authentication
●​ Checks whether a user is valid to access the application.
●​ Valid users should be able to log in.
●​ Invalid or unauthorized users should be denied access.
●​ Example: Creating a valid user and an invalid user in a test environment, and
verifying login access.

2. Authorization (Access Control)


●​ Determines what a valid user is allowed to access.
●​ Example: In a banking application:
○​ Employee in the Home Loan department can only access home loan
records.
○​ They cannot access Vehicle Loan or Ledger departments.
○​ Focus is on permissions assigned to each user and ensuring proper
access enforcement.

3. Network Security
●​ Tests for vulnerabilities during data transmission over the network.
●​ Hackers may intercept or tamper with data while it is traveling between
client and server.
●​ Security testers analyze potential network breaches and ensure data
protection.

4. Data Encryption & Decryption


●​ Encryption converts sensitive data into a coded format before it travels over
the network.
●​ Decryption converts it back to readable format once it reaches the server or
client.
●​ Ensures data remains secure while in transit and is only readable by intended
systems.
●​ Example: Passwords entered in a login form are masked (****) and
transmitted securely.
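The encrypt-then-decrypt idea can be illustrated with the cryptography package (pip install cryptography). Real applications rely on TLS for data in transit; this sketch only demonstrates the concept:

```python
# Illustrative sketch of symmetric encryption/decryption with Fernet.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret key
f = Fernet(key)

token = f.encrypt(b"my-password")  # unreadable ciphertext travels the network
assert token != b"my-password"

plaintext = f.decrypt(token)       # only a holder of the key can read it
assert plaintext == b"my-password"
```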

5. System Software Security


●​ Ensures that the operating system and application environment are protected.
●​ Prevents unauthorized access, malware attacks, or system-level
vulnerabilities.

6. Client-Side Application Security


●​ Protects sensitive data on the user’s device.
●​ Prevents local attacks or data leaks from the client side.

7. Server-Side Application Security


●​ Protects the server where data is processed and stored.
●​ Prevents database hacking, SQL injection, unauthorized access, or data
corruption.
Advanced Security Testing Techniques

●​ Vulnerability Testing: Checks for weaknesses in the system that can be exploited.
●​ Data Encryption/Decryption Validation: Ensures that data is properly
encrypted while traveling through the network and correctly decrypted at
client/server.
●​ Network Traffic Analysis: Security testers may capture and analyze
network packets to check if sensitive data is exposed.
●​ Data Masking: Sensitive data fields like passwords or personal info are
hidden or masked from unauthorized access.

Example:

●​ Online banking: Ensuring account info is protected, only valid users can
access it, and network data is encrypted.
●​ Testing if the application properly handles invalid login attempts.
●​ Validating that user permissions (authorization) are enforced correctly.

Who Performs Security Testing?

●​ Functional testers can test basic authentication and authorization in the test environment.
●​ Security specialists perform network security, encryption/decryption,
and vulnerability analysis using specialized tools.
●​ Tools may capture network traffic, analyze encrypted data, or attempt
simulated attacks to ensure robustness.
Summary of Security Testing Focus

| Focus Area | Purpose |
| --- | --- |
| Authentication | Verify user identity (valid/invalid users) |
| Authorization / Access Control | Ensure valid users have correct permissions |
| Data Encryption & Decryption | Protect sensitive data during network transfer |
| Network Security | Identify vulnerabilities in data transmission and communication |
| Client-side Security | Protect sensitive data on user devices |
| Server-side Security | Protect data and processes on the server |
| Vulnerability Testing | Identify and fix potential system weaknesses |

3. RECOVERY TESTING​
Recovery Testing checks how well a system or application can recover after a
failure or abnormal situation. It ensures that the software resumes normal
operation without losing data or tasks.

Why Recovery Testing is Important


Systems can fail due to unexpected events such as:

●​ Power outages
●​ Network failures
●​ Database crashes
●​ Application crashes or forced closure
●​ API failures
●​ Users should not lose their work or transactions due to such failures.
Example:

●​ Composing an email in Gmail and the power suddenly goes off


●​ When power is restored and the application is reopened
●​ The composed email should be available in the Draft folder.
●​ If the data is lost, the recovery mechanism has failed.

Common Scenarios for Recovery Testing


1. System Shutdown or Crash
●​ Power off the system during a task.
●​ Restart and check if the application resumes properly.

2. Database Failure
●​ Simulate a database crash while performing a transaction.
●​ Verify whether data is preserved or rolled back safely.

3. Network Failures
●​ Disconnect the network during data submission.
●​ Check if the application can recover and process data correctly once the
network is restored.

4. Application Closure
●​ Close the application unexpectedly during a task.
●​ Reopen and check if work is saved or needs to be redone.

5. API Failures
●​ APIs handle communication between client and server.
●​ Even if the application and database are working, API failure should not
cause permanent data loss.
Example in Banking Applications: A failed transaction should eventually restore the money to the account automatically. The recovery mechanism ensures the user does not lose money even if a failure occurs.

Key Aspects to Test During Recovery


1. System Recovery
●​ Shut down the system unexpectedly.
●​ Verify if the application can resume normal operation.

2. Database Recovery
●​ Bring down the database intentionally during transactions.
●​ Check if partial transactions are handled correctly.

3. Network Recovery
●​ Disconnect the internet while using the application.
●​ Check whether the application resumes properly once the network is back.

4. Application Closure Recovery
●​ Force close the application.
●​ Reopen and verify whether unsaved data or tasks are restored.

How Recovery Testing is Done


Functional testers can perform recovery testing in a normal test environment.

Steps include:

1.​ Start a task in the application.
2.​ Introduce a failure (power off, network down, database down, API failure).
3.​ Restore the environment.
4.​ Check if the application resumes the task or preserves data.
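To make step 4 concrete, here is a minimal Python sketch of the autosave-and-recover idea behind the Gmail draft example. The save-to-a-JSON-file design is purely an assumption for illustration, not how Gmail actually works:

```python
# Hedged sketch: autosave a draft, "crash", then verify it is recoverable.
import json
import tempfile
from pathlib import Path
from typing import Optional

def autosave(draft: dict, path: Path) -> None:
    # Write to a temp file first, then rename: a crash mid-save
    # cannot corrupt the previously saved draft.
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(draft))
    tmp.replace(path)

def recover(path: Path) -> Optional[dict]:
    return json.loads(path.read_text()) if path.exists() else None

draft_file = Path(tempfile.gettempdir()) / "email_draft.json"
autosave({"to": "qa@example.com", "body": "Hello"}, draft_file)

# ... power loss here: the process simply stops ...

# On "restart", the draft must still be recoverable from disk.
recovered = recover(draft_file)
assert recovered is not None and recovered["body"] == "Hello"
print("Draft recovered:", recovered)
```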
Key Point:

●​ Recovery Testing ensures that abnormal situations do not cause permanent data loss or application failure.
●​ It is primarily done by functional testers and does not require specialized tools.

Example in Practice:

●​ Word processor: type a document → power failure → reopen → check if the document is saved.
●​ Banking app: make a transaction → database failure → check if the transaction is rolled back safely.

Summary of Recovery Testing

Focus Area | Purpose
System Failure | Check if application recovers after shutdown or crash
Database Failure | Ensure transactions are safe, rolled back, or preserved
Network Failure | Validate recovery when internet or network is lost
Application Closure | Ensure work is not lost if the app is closed unexpectedly
API Failure | Verify proper handling of failed API calls during operations

4. COMPATIBILITY TESTING​
Compatibility Testing ensures that an application works correctly across different
devices, browsers, operating systems, and hardware configurations. It verifies
that the software behaves consistently in various environments.
Why Compatibility Testing is Important
●​ Users access applications from different platforms, devices, browsers, and
hardware setups.
●​ Without compatibility testing, the application may:
○​ Fail to install
○​ Display incorrectly
○​ Lose functionality
○​ Cause poor user experience

Example:

●​ A mobile app should work on both Android and iOS devices.
●​ A web app should render properly on Chrome, Firefox, and Edge.

Focus Areas of Compatibility Testing


1. Operating System (OS) Compatibility
●​ Checks if the application works on different operating systems.
●​ Example:
○​ Software installable on Windows, Mac, Linux, Unix.
○​ Some applications may only support specific OS versions.
●​ Ensures the application meets customer’s OS requirements.

2. Browser Compatibility (Cross-Browser Testing)


●​ Web-based applications should work on all supported browsers.
●​ Example: Chrome, Edge, Firefox, Safari.
●​ Common issues found:
○​ UI misalignment
○​ Overlapping elements
○​ Different font styles or sizes
●​ Also called Cross-Browser Testing.
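Cross-browser checks are often automated by running the same test once per browser. A minimal sketch, assuming pytest and Selenium are installed (Selenium 4 manages browser drivers itself); the URL and expected title are illustrative:

```python
# Hedged sketch: the same page check runs once per browser,
# so browser-specific rendering issues surface automatically.
import pytest
from selenium import webdriver

@pytest.fixture(params=["chrome", "firefox"])
def browser(request):
    driver = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield driver
    driver.quit()

def test_homepage_renders(browser):
    browser.get("https://example.test")  # hypothetical app URL
    assert "Example" in browser.title
```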

3. Hardware Compatibility (Configuration Testing)


●​ Ensures the application works correctly on different hardware setups.
●​ Parameters include:
○​ RAM (e.g., 8GB, 12GB, 16GB)
○​ Hard Disk (e.g., 100GB, 500GB)
○​ Processor (low-end vs high-end)
●​ Also called Configuration Testing.
●​ Ensures smooth installation and functionality on varied hardware.

4. Backward Compatibility
●​ Checks if the application supports older versions of OS, browser, or
hardware.
●​ Example: An application developed for Windows 11 should also work on
Windows 10 or 8.

5. Forward Compatibility
●​ Checks if the application will work on future versions of OS, browsers, or
devices.
●​ Example: A web app working on Chrome 75 should still work on Chrome
80 or 85.

Examples of Compatibility Testing

1. Mobile Applications
Testing WhatsApp on multiple devices: Android phones, iOS devices, different screen sizes.

2. Web Applications
Testing a website on multiple browsers to ensure UI elements display correctly.

3. Hardware Configurations
Testing a video game on computers with different RAM, processors, and hard disks.

Key Points

●​ Compatibility testing is usually performed manually; among the non-functional types, mainly performance and security testing need specialized tools.
●​ Functional testers can perform compatibility testing in the same functional
testing environment.
●​ Focus areas include:
○​ OS compatibility
○​ Browser compatibility
○​ Hardware/configuration compatibility
○​ Forward & backward compatibility

Summary Table

Compatibility Type | Purpose & Example
OS Compatibility | Application works on Windows, Mac, Linux, etc.
Browser Compatibility | Web app works across Chrome, Firefox, Edge, Safari; UI displays correctly
Hardware/Configuration | App works on different RAM, HDD, processor configurations
Backward Compatibility | Supports older versions of OS, browser, or hardware
Forward Compatibility | Supports future versions of OS, browser, or hardware
Tip:

Compatibility testing ensures users have a smooth and consistent experience, regardless of their environment.

5. CONFIGURATION TESTING​
Configuration Testing ensures that a software application works correctly under
different system configurations. It verifies that the application adapts well to
varied hardware, software, and network setups.

Why Configuration Testing is Important


●​ Users have different hardware and software setups.
●​ Without configuration testing, the application may:
○​ Fail to install
○​ Crash on certain setups
○​ Perform poorly
●​ Ensures smooth installation and operation on multiple configurations.

Focus Areas of Configuration Testing


1. Hardware Configurations
●​ Check if the application works on systems with different RAM, hard disk,
and processor setups.
●​ Example:
○​ A game tested on computers with 8GB, 12GB, and 16GB RAM.
○​ Test on processors with different speeds.

2. Software Configurations
●​ Test how the application behaves with different software setups.
●​ Example: Application tested with different versions of Java, .NET, or
browser plugins.

3. Network Configurations
●​ Checks the app performance on various network speeds or connectivity
setups.
●​ Example: App should work on Wi-Fi, LAN, or 4G/5G network.

4. Version Updates
●​ Tests if the application works properly after an upgrade or update.
●​ Ensures forward and backward compatibility:
○​ Forward: App supports future software/hardware versions.
○​ Backward: App supports previous versions.

Examples of Configuration Testing

1. Video Game Testing
●​ Tested on computers with different RAM, HDD, and processor setups.
●​ Ensures smooth gameplay regardless of hardware configuration.

2. Software Updates
●​ Test a web application after updating the browser from Chrome 80 → Chrome 85.
●​ Ensures no functionality is broken after the upgrade.

3. Network Testing
●​ Test a video streaming application on fast and slow networks.
●​ Ensure consistent performance and streaming quality.
Key Points

●​ Configuration testing is mostly manual.
●​ It ensures that the application works across different setups, environments, and versions.
●​ Functional testers can perform this type of testing without specialized
environments.
●​ Helps prevent installation issues and crashes on user devices.
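One practical habit is to record the exact configuration every test run executed on, so a failure can be traced back to a specific setup. A standard-library Python sketch; the chosen fields are just an example:

```python
# Hedged sketch: snapshot the machine configuration for the test report.
import os
import platform

def configuration_snapshot() -> dict:
    return {
        "os": platform.system(),             # e.g. Windows, Linux, Darwin
        "os_version": platform.version(),
        "architecture": platform.machine(),  # e.g. x86_64, arm64
        "cpu_count": os.cpu_count(),
        "python": platform.python_version(),
    }

if __name__ == "__main__":
    # Attach this snapshot to every test result so defects can be
    # mapped to a hardware/software configuration.
    for key, value in configuration_snapshot().items():
        print(f"{key}: {value}")
```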

Summary Table

Configuration Aspect | Purpose & Example
Hardware Configuration | Test with different RAM, HDD, processors (e.g., 8GB vs 16GB RAM)
Software Configuration | Test with different software setups or versions (Java, .NET, plugins)
Network Configuration | Test with different network speeds or connectivity setups
Version Updates | Test app after upgrades for forward & backward compatibility

Tip:

Configuration testing ensures the application adapts to all possible user environments before release.

6. INSTALLATION TESTING​
Installation Testing is a type of software testing that checks whether an application
installs and uninstalls correctly on a system. It ensures that the installation process
is smooth, easy to follow, and does not cause any issues on the computer.
Objectives of Installation Testing
1.​ Verify that the software can be installed without errors.
2.​ Ensure that the installation process is user-friendly and simple.
3.​ Confirm that all files are correctly copied to the system.
4.​ Check that uninstallation removes all installed files completely.
5.​ Verify that auto-update features work as expected.
6.​ Ensure the software does not conflict with existing applications on the
system.

Key Focus Areas During Installation Testing


1. Installation Process
●​ Check each screen during installation (Next, Back, Finish) for clarity.
●​ Ensure instructions are easy to understand; users should not be confused.
●​ Verify navigation is simple and straightforward.

2. Uninstallation
●​ Ensure the software can be removed completely from the system.
●​ Check whether uninstallation removes all files, folders, and registry entries.
●​ Verify that reinstalling the software after uninstallation does not show errors
such as “files already exist.”

3. Auto Updates
●​ Test auto-update functionality.
●​ Verify if updates are installed automatically when enabled.
●​ Ensure that if auto-update is disabled, the user gets proper notification or
prompt.
●​ Check whether updates do not break existing installation.
Important Points
●​ Installation should feel easy and simple for the user.
●​ All screens during installation must be clear and understandable.
●​ The process should not be complex; the user should just follow the
instructions by clicking “Next.”
●​ Testing should cover:
○​ First-time installation
○​ Uninstallation
○​ Reinstallation
○​ Auto-update functionality

Example
Suppose you are installing a new version of a word processor:

1.​ The software installs successfully with all screens easy to follow.
2.​ You uninstall it completely without leaving any residual files.
3.​ Later, you reinstall or update it, and the process runs smoothly without
errors.

This confirms that installation and uninstallation are working perfectly.
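Parts of this flow can be scripted. A hedged Python sketch, assuming a Windows-style installer that accepts a silent switch — the paths and the `/S` flag are hypothetical, since switches differ per installer technology:

```python
# Hedged sketch: drive a silent install/uninstall and check the results.
import subprocess
from pathlib import Path

INSTALLER = Path("C:/downloads/app-setup.exe")     # hypothetical
INSTALL_DIR = Path("C:/Program Files/ExampleApp")  # hypothetical

def test_silent_install_creates_files():
    result = subprocess.run([str(INSTALLER), "/S"], timeout=300)
    assert result.returncode == 0, "installer reported an error"
    assert INSTALL_DIR.exists(), "install directory was not created"

def test_uninstall_removes_files():
    uninstaller = INSTALL_DIR / "uninstall.exe"    # hypothetical
    result = subprocess.run([str(uninstaller), "/S"], timeout=300)
    assert result.returncode == 0
    assert not INSTALL_DIR.exists(), "leftover files after uninstall"
```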

Summary

Aspect | Focus
Installation Process | Screen clarity, ease of navigation, user-friendly installation
Uninstallation | Complete removal of files, folders, registry entries
Reinstallation | Ensure no conflicts or errors after uninstall
Auto Updates | Works automatically if enabled, prompts user if disabled
Goal | Smooth, error-free installation and uninstallation with proper user experience

7. SANITATION / GARBAGE TESTING​


Sanitation or Garbage Testing ensures that the software does not contain any
unnecessary features, leftover data, or extra functionalities that were not requested
in the requirement document. Essentially, it checks that the system “cleans up after
itself.”

Purpose:
●​ Identify extra features or functionalities that are not part of the original
requirement.
●​ Ensure the application provides only what the user expects.
●​ Consider any extra or unused features as potential bugs and report them to
the developers.

How it works:
1.​ Compare the features implemented in the software with the customer
requirements.
2.​ Identify any features that are extra, unused, or not requested.
3.​ Report them to the development team to remove or correct.
4.​ Developers fix these issues according to requirements.

Example Scenario:
Bank Software Customization:
●​ Bank X uses a custom banking software with features A, B, and C.
●​ Bank Y approaches the same software company for a slightly different set of
features.
●​ The company customizes the previous software for Bank Y.
●​ Some features from Bank X (extra features not required by Bank Y) may
still appear.
●​ These extra features are considered bugs in Garbage Testing.

Key Points:
●​ Functional testers can perform Sanitation/Garbage Testing in the normal
testing environment.
●​ Developers are responsible for removing any extra features reported as bugs.
●​ Bug reporting and fixing process is integral but separate.

Simple Example:

A messaging app deletes messages but leftover traces are still in the database.
Garbage Testing identifies this leftover data and flags it.

FUNCTIONAL TESTING VS NON-FUNCTIONAL TESTING

Aspect | Functional Testing | Non-Functional Testing
Focus | Ensures software functions as expected | Evaluates software performance, security, usability, and more
What it Checks | Specific features, actions, and behaviors from requirements | How well the software performs under various conditions
Key Question | WHAT does the software do? | HOW WELL does the software perform?
Example | Login functionality works for valid and invalid users | System can handle 1000 users simultaneously without crashing
Summary
●​ Functional Testing ensures that the software’s features work correctly.
●​ Non-Functional Testing ensures that the software performs efficiently,
securely, and provides a good user experience.
●​ Non-Functional Testing includes Performance, Security, Recovery,
Compatibility, Configuration, Installation, Sanitation, etc.

System Testing: Non-Functional Areas Overview

Non-Functional Testing Type | Purpose / Focus | Example / Notes
Performance Testing | Checks speed, response time, stability under load | Load, stress, endurance, spike, volume testing
Security Testing | Checks user authentication, authorization, data security | Online banking app protected from unauthorized access
Recovery Testing | Checks system recovery after failures | Power outage simulation, database downtime, API failure
Compatibility Testing | Ensures software works across OS, browsers, hardware | Mobile apps working on Android & iOS, different screen sizes
Configuration Testing | Ensures software works on different hardware setups | Testing a game on PCs with varying RAM, processor, HDD
Installation Testing | Ensures smooth installation/uninstallation and auto-updates | Software installs and uninstalls properly, updates work
Sanitation / Garbage Testing | Checks for extra, leftover, or unnecessary features | Extra features in customized banking software scenario
📘 LEARNING TO TEST, TESTING TO LEARN
📅 Day 6 - SOFTWARE TESTING TERMINOLOGY
A Beginner’s Daily QA Series

1. REGRESSION TESTING
Make sure new changes (bug fix, new feature, code updates) don’t break
existing working features.​
Example: In Build 2, developer adds “Upload Profile Picture.”
Tester must still check: Login, Profile, Send Messages → All should work fine.

TYPES OF REGRESSION TESTING

1. UNIT REGRESSION TESTING

●​ Focus only on the specific change made by the developer.
●​ No other areas are tested.

Example:

●​ Bug: “Password reset not working.”
●​ Developer fixes it → Tester checks ONLY the password reset function.
●​ Tester does not check login, profile, etc.

✅ Use Case: When risk of impact is very low, and change is very small.
2. REGIONAL REGRESSION TESTING

●​ Test the changed functionality + the connected/related features.


●​ Example:
○​ Bug: “Login not working.” Developer fixes it.
○​ Login is connected to Profile View and Dashboard access.
○​ Tester tests: Login, Profile, Dashboard.
●​ Why? Because if login is broken, user cannot reach those features.

✅ Use Case: When change may affect nearby or connected modules.


3. FULL REGRESSION TESTING

●​ When major changes are made, and it’s hard to identify all impacted areas.
●​ Test the entire application in one full round.

Example:

●​ Developer upgrades the payment module and changes user account features.
●​ These changes might affect: Login, Profile, Cart, Checkout, Notifications, Reports.
●​ Tester retests all modules to ensure system stability.

✅ Use Case: When risk is high and many modules are touched by the change.
Summary

Term | Focus | Example
Unit Regression | Only changed part | Only test “Password Reset” fix
Regional Regression | Changed part + connected parts | Test Login + Profile + Dashboard
Full Regression | Entire application | Test all modules after big changes
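In automated suites, the regression scope is often selected with test markers. A minimal pytest sketch using toy stand-in functions for the application (the `regression` marker must be registered in `pytest.ini` to avoid warnings); `pytest -m regression` then runs only the marked tests:

```python
# Hedged sketch: tag tests so a regression subset can be run on demand.
import pytest

# Toy stand-ins for the application under test (illustrative only).
def reset_password(email: str) -> bool:
    return "@" in email

def login(email: str, password: str) -> bool:
    return "@" in email and len(password) >= 8

@pytest.mark.regression
def test_password_reset():
    # Unit regression: re-check only the changed functionality.
    assert reset_password("user@example.com")

@pytest.mark.regression
def test_login_after_reset():
    # Regional regression: also cover the connected login flow.
    assert login("user@example.com", "NewPass123")
```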
IMPACT ANALYSIS MEETING

●​ Conducted between developers and testers before regression testing.
●​ Purpose
○​ Decide the test scope before regression.
○​ Identify which areas might be affected by the changes.
●​ Example: Developer says, “I changed code in the profile picture upload.”
○​ Tester notes: Must test Profile Module + Login + Dashboard.

2. RETESTING

What Is Retesting?

●​ Retesting means testing the same bug again after the developer has fixed it.
●​ Purpose: To confirm that the reported defect is actually resolved in the new
build.
●​ Tester action:
○​ If the fix works → close the bug ✅
○​ If the fix fails → reopen the bug ❌ and send it back to the developer

When Do We Do Retesting?

●​ Whenever a developer fixes a bug and gives a new build.
●​ Tester runs the same failed test cases again to verify if the bug is fixed.

Example of Retesting

●​ Build 1.0 is released → Tester finds defects 1.0.1 and 1.0.2 → Reports them.
●​ Developer fixes those defects → releases Build 1.1.
●​ Tester now checks defects 1.0.1 and 1.0.2 again in Build 1.1.
Key Points about Retesting

●​ Retesting is done only for bug fixes.


●​ Same test case(s) are executed multiple times until the bug is fixed.
●​ No focus on impacted or surrounding areas → only the defect itself.
●​ Retesting can happen in multiple cycles if the bug is not fixed properly.

RETESTING VS REGRESSION TESTING

Aspect | RETESTING | REGRESSION TESTING
Focus | Specific bug fix | Overall system stability after changes
Scope | Only the reported defect | Defect + connected / interdependent modules
Example | Test only the Purchase module bug fix | Test Purchase + Finance modules (since Finance depends on Purchase)
When | After developer fixes a bug | After any change (bug fix, new feature, modification)
Subset? | Retesting is part of regression testing | Regression testing is broader, includes retesting + other impacted areas

Example: Admin, Purchase, Finance Modules

●​ Application has 3 modules: Admin, Purchase, Finance
●​ Dependencies:
○​ Purchase depends on Admin
○​ Finance depends on Purchase
👉 Scenario:
●​ A bug is found in Purchase module.
●​ Developer fixes it and gives a new build.
●​ Retesting: Tester checks ONLY the Purchase module bug fix.
●​ Regression Testing: Tester also checks the Finance module, because
Finance depends on Purchase.
○​ If Purchase is broken, Finance may also get impacted.

✅ In summary:
●​ Retesting → Testing the fixed bug again.
●​ Regression Testing → Testing the fixed bug + making sure connected
modules still work fine.

Relation between Retesting & Regression

●​ Retesting is a subset of regression testing.
●​ Why? Because in regression testing:
1.​ We first check the change/bug fix (this step itself is retesting).
2.​ Then we check the impacted areas.

So regression = retesting + impact verification.

👉 Key takeaway for interviews:


●​ Retesting = Did the fix work?
●​ Regression = Did the fix break anything else?


3. SMOKE TESTING

Also called: Build Verification Test (BVT).


Purpose: To check whether the build (software version received from developers)
is stable enough for further testing.

Focus: Stability of the build.


When done: In the first few testing cycles, builds are often unstable. Smoke
testing helps identify whether we can proceed or not.

What we check:
●​ Build installation.
●​ Appearance of basic screens.
●​ Basic navigation between screens.

Origin of the term:


From hardware testing → If a device produces smoke, it means there’s a major
issue → stop testing.

Examples:

●​ Application installs successfully without crashing.
●​ After installation, screens open correctly.
●​ Database is connected and running.
●​ APIs related to the application are invoked.
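Smoke checks like these are commonly automated so every new build is verified the same way. A minimal sketch with `requests`; the staging URL and the shape of the `/health` endpoint are assumptions:

```python
# Hedged sketch: is the build even up and wired together?
import requests

BASE_URL = "https://staging.example.test"  # hypothetical test environment

def test_application_is_up():
    assert requests.get(BASE_URL, timeout=10).status_code == 200

def test_health_endpoint_reports_ok():
    # Many services expose a health endpoint that covers DB/API wiring;
    # the /health path and response fields are assumptions here.
    body = requests.get(f"{BASE_URL}/health", timeout=10).json()
    assert body.get("database") == "up"
    assert body.get("api") == "up"
```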
4. SANITY TESTING
Also called: Basic Functional Test (BFT).

Purpose: To check whether the basic functionality of the application is working properly.

Focus: Functional correctness of the build (not stability).


When done: After smoke testing passes, and the build is stable enough.
What we check:
Login, user registration, logout, etc.

Examples:

●​ Sign-up option is available on the login page.
●​ Clicking “Sign up” redirects to the proper sign-up form.
●​ Clicking “Sign in” does not redirect to the “Sign up” form.
●​ Submitting the “Sign up” form succeeds without a crash.
●​ A signed-up user is able to log in.

📱 Phone Example (Easy to Relate)


Smoke Testing (Stability Check):​
Imagine you bought a new mobile phone. First, you check:

●​ Can it be switched ON?


●​ Does the home screen appear?
●​ Can you open the menu?​
If these don’t work, you stop testing further.​
Sanity Testing (Basic Functionality Check):​
Once the phone switches on, you check:

●​ Can you make a call?


●​ Can you send an SMS?
●​ Can you install and open apps?​
If these fail, you cannot continue using the phone properly.

👉 So, Smoke = stability of the device, Sanity = basic functions working.


⭐ DIFFERENCES BETWEEN SMOKE AND SANITY TESTING ⭐

Aspect | SMOKE TESTING | SANITY TESTING
Objective | Ensure build is stable/testable. | Ensure basic functionality works.
Performed by | Developers and testers. | Only testers.
When done | On initial builds (early cycles). | On stable builds (after smoke).
Build condition | May be stable or unstable. | Relatively stable.
Depth of testing | Shallow, stability check. | Shallow, functional check.
Frequency | Done every time a new build is released. | Done when there’s no time for in-depth testing.
Example (Phone) | Can the phone power ON, open home screen? | Can the phone make a call, send SMS?
🚦 End-to-End Workflow: Smoke → Sanity → other testing

🔄 Smoke vs Sanity Workflow (Simple)


1.​ New Build Arrives (Initial/Unstable Build)
●​ First check = Smoke Testing (Build Verification Test).
●​ Purpose → Is the build stable enough to test?
●​ If FAIL → Reject build (High Sev/High Pri bug).
●​ If PASS → Move forward.
2.​ Stable Build Arrives
●​ Now do Sanity Testing (Basic Functional Test).
●​ Purpose → Do the basic functions work? (Login, signup, logout, basic
navigation).
●​ If FAIL → Reject build, send back.
●​ If PASS → Move forward.
3.​ After Both Pass

Continue with detailed testing:

●​ Functional/System Testing
●​ Regression Testing
●​ Retesting
●​ Non-functional Testing (performance, security, usability, etc.).

🎯 Quick Memory Hook


●​ Smoke = Stability check (Can we test this build at all?).
●​ Sanity = Basic functionality check (Is it worth testing deeper?).
●​ If either fails → build is rejected.
●​ Only after both pass → do full testing.

Context you must remember

●​ Early cycles = unstable builds (usually the first 2–3 rounds).


●​ Smoke Testing = Build Verification Test (BVT) → checks stability
(install, basic screens, navigation, DB/API up).
●​ Sanity Testing = Basic Functional Test (BFT) → checks basic
functionality (login, signup, logout, can proceed further).
●​ If Smoke or Sanity fails → defects are High Severity + High Priority →
reject the build; devs fix and send a new build.
●​ Developers + Testers do Smoke (devs in dev env; testers in test env).
Sanity is done by testers only.
●​ After both pass, you can do any other testing: system/functional,
regression, retesting, non-functional (performance, security, etc.).
5. EXPLORATORY TESTING
What is Exploratory Testing?
●​ In exploratory testing, the tester explores the application, understands it,
and tests it at the same time.
●​ The main goal is to learn the application while testing.
●​ This is useful when:
○​ The application is ready, but there are no requirements or
documentation available.
○​ No test cases are created yet.
○​ The tester has little or no prior knowledge of the application.

How it works
1.​ Tester opens the application.
2.​ Starts exploring its features and behavior.
3.​ Identifies possible test scenarios.
4.​ Documents those scenarios while testing.
5.​ Uses them for further testing.

👉 So, exploring + testing happen together.


📌 Example: Suppose you are given a new e-commerce mobile app
●​ You don’t have documentation or test cases.
●​ You try to log in, register, add items to cart, check out, and explore
navigation.
●​ While doing this, you’re noting down observations and possible test
scenarios.
●​ This process is exploratory testing.
📌 When do we use it?
●​ Application is ready but requirements are missing.
●​ Time is limited to create formal test cases.
●​ To quickly get an initial understanding of a new application.

📌 Drawbacks of Exploratory Testing


●​ You might misunderstand a feature as a bug or a bug as a feature,
because there are no clear requirements.
●​ It can be time-consuming, since you must explore and test at the same time.
●​ If there is a bug in the application, you may not even notice it, because
there’s no reference document to compare with.

Summary in one line: Exploratory Testing = Learn + Explore + Test the application when you have no documentation or requirements.

6. AD HOC TESTING

What is Ad hoc Testing?


●​ Ad hoc Testing means testing the application randomly without:
○​ Any test cases
○​ Any business requirement document
●​ It is an informal type of testing.
●​ The main aim is to break the system (find defects by testing in an
unstructured way).

Key Points
●​ Conducted without proper planning or documentation.
●​ No test cases, no schedule, no formal plan.
●​ Tester should have some knowledge of the application (unlike exploratory
testing where tester doesn’t know anything).
●​ Done when you want to quickly check functionalities and try to find corner
cases.
●​ Since it’s random and unplanned → it is called informal testing.

Difference: Ad hoc Testing vs Exploratory Testing

Aspect | Exploratory Testing | Ad hoc Testing
Knowledge of Application | Tester doesn’t know the application, learns while exploring. | Tester already knows something about the application.
Documentation | No requirements, no documentation. | No requirements, no documentation.
Approach | Explore → understand → test. | Randomly test functionalities without exploring.
Example | A tester is given a new e-commerce app with no requirements. They explore the app, understand features, then test. | A tester who already uses Axis Bank Net Banking starts testing HDFC Bank Net Banking randomly without documents, based on prior knowledge.
Example to Understand
👉 Imagine you already use Axis Bank Net Banking.​
Now you open an account in HDFC Bank.

●​ Since you already know how online banking works, you can assume certain
things (like money transfer, balance check, download statements).
●​ Without any requirements or documents, you randomly test HDFC Net
Banking to check whether it works.
●​ This is Ad hoc Testing.

Why Ad hoc Testing is Done?


●​ To find hidden or corner case defects that may not be captured in formal
test cases.
●​ Sometimes formal test cases miss rare scenarios → Ad hoc testing helps
catch them.
●​ The intention is to break the system (break functionality, not the whole
app).

Characteristics of Ad hoc Testing


●​ Random – No fixed sequence of actions.
●​ Unplanned – Can be done anytime.
●​ No documentation – Nothing written or tracked beforehand.
●​ Informal – No process or schedule.
●​ Knowledge-based – Relies on tester’s prior experience and understanding.
Advantages
●​ Can quickly find unexpected defects.
●​ Helps discover corner scenarios missed in formal test cases.
●​ Saves time when requirements are unclear.

Disadvantages
●​ Since it’s unplanned and undocumented:
○​ Hard to reproduce bugs.
○​ Difficult to track coverage (not sure which areas were tested).
○​ Depends heavily on tester’s knowledge and skills.

Summary in one line: Ad hoc Testing = Random, unplanned testing without documentation, done by testers who already know the application, mainly to break the system and catch hidden bugs.

7. MONKEY TESTING

What is Monkey Testing?

●​ Monkey Testing is an informal type of testing where the tester tests the
application randomly.
●​ There are:
○​ No test cases
○​ No requirement documents
○​ No predefined plan
●​ The goal is to try to break the system by giving unexpected or invalid
inputs.

Key Features
●​ Informal → unplanned, no schedule, no documentation.
●​ Tester has no knowledge of the application’s functionality.
●​ Random inputs/actions are performed.
●​ The purpose is to check if the system can still work as expected without
crashing.

Usage / Application Area


Mainly used in gaming applications.

●​ In games, users often perform unpredictable actions.


●​ Example: children press random keys, move characters unexpectedly, or click without knowing the rules. The game should still run smoothly without breaking.
●​ Apart from games, it can be applied in entertainment apps or highly
interactive software, where random actions by users are very common.

Example
Imagine a new mobile game:

●​ A child installs it and starts pressing all buttons randomly.


●​ They don’t know the instructions but just try random actions (jumping,
attacking, moving around).
●​ Even with these unexpected inputs, the game should not freeze or crash.

👉 That is Monkey Testing in action.


✅ Summary:​
Monkey Testing = Random unplanned testing without knowledge of the app →
mainly used in gaming apps to check whether unexpected user actions or invalid
inputs can crash the system.
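The same idea can be simulated in code by firing random junk at a component and asserting that it never raises. A self-contained Python sketch, with a toy command handler standing in for the game:

```python
# Hedged sketch: monkey-style random input against a toy command handler.
import random
import string

def handle_command(cmd: str) -> str:
    # Toy game logic: unknown commands are ignored, never fatal.
    actions = {"jump": "jumped", "run": "ran", "fire": "fired"}
    return actions.get(cmd, "ignored")

random.seed(42)  # reproducible randomness, so failures can be replayed
for _ in range(10_000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    result = handle_command(junk)  # must never crash
    assert result in {"jumped", "ran", "fired", "ignored"}
print("Survived 10,000 random inputs without crashing")
```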

⭐ Exploratory Testing vs Ad-hoc Testing vs Monkey Testing ⭐

Aspect | Exploratory Testing | Ad-hoc Testing | Monkey Testing
Documentation | ❌ No documentation is prepared | ❌ No documentation | ❌ No documentation
Planning | ❌ No formal test plan, informal | ❌ No formal test plan, informal | ❌ No formal test plan, informal
Style of Testing | Informal, free-style testing | Informal, free-style testing | Informal, free-style testing
Tester’s Knowledge of Application | Tester does not know much about the application at the beginning | Tester should have some knowledge of the application functionality | Tester does not know anything about the application
Testing Approach | Random testing while learning and exploring the application | Random testing with the intention to break the system | Purely random testing without any knowledge
Main Purpose / Intention | To learn and explore the functionality of the application | To break the application and find hidden/corner defects | To break the application randomly and find corner defects
Application Type | Used for any application new to the tester | Can be used for any type of application | Typically used for gaming applications
When to Perform | When documentation or prior knowledge is not available | When there is less time (e.g., just before release) to test quickly | Anytime, usually when testing robustness of gaming apps
Outcome | Helps tester understand and learn application behavior | Helps to find critical/corner issues quickly | Helps to check stability under random actions
Example Scenario | A tester gets a new application with no docs and explores its features step by step | Product release is tomorrow, only 1 day left but 20 test cases → tester does quick ad-hoc testing instead of all cases | Pressing random keys/clicks in a game to see if it crashes

8. POSITIVE TESTING
●​ Positive testing is done to check whether the application works correctly
with valid inputs.
●​ It ensures the system behaves as expected under normal or correct
conditions.
●​ The main goal is to confirm the system is doing what it is supposed to do.

Key Idea​
👉 “When everything is right, does the system work correctly?”
Examples

1. Login Test (Positive): Enter a valid username and password → User should log in successfully.

2. Calculator Addition: Enter 2 + 3 → System should return 5.

3. Form Submission: Fill out a form with all valid details (name, email, phone) → Form should submit successfully.
Summary for Positive Testing​
✅ Test with correct/valid inputs​
✅ Confirms system behaves as expected​
✅ Focus is on “system success flow”
9. NEGATIVE TESTING
●​ Negative testing is done to check how the application behaves with invalid,
wrong, or unexpected inputs.
●​ The goal is to find weaknesses or defects in the software by deliberately
providing incorrect data.
●​ It helps ensure the system doesn’t break or behave wrongly when users make
mistakes.

Key Idea 👉 “When something is wrong, does the system handle it properly?”
Examples

1. Login Test (Negative): Enter an incorrect password → System should reject the login and show an error message.

2. File Upload (Negative): Try uploading a file with an unsupported format (e.g., .exe instead of .jpg) → System should not accept it and should give a proper error message.

3. Credit Card Payment (Negative): Enter an expired credit card date → Payment should not be processed; an error message should appear.
Summary for Negative Testing​
✅ Test with wrong/invalid inputs​
✅ Confirms system handles errors properly​
✅ Focus is on “system failure flow” (making sure it fails safely)
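Positive and negative cases are often written together as one parametrized test. A minimal pytest sketch against a toy `authenticate()` stand-in — the credential rule is invented for illustration:

```python
# Hedged sketch: one test covering the success flow and the failure flows.
import pytest

def authenticate(username: str, password: str) -> bool:
    # Illustrative rule: exactly one known-good credential pair.
    return (username, password) == ("alice", "S3cret!pass")

@pytest.mark.parametrize("user,pwd,expected", [
    ("alice", "S3cret!pass", True),   # positive: valid input succeeds
    ("alice", "wrong", False),        # negative: bad password rejected
    ("", "S3cret!pass", False),       # negative: empty username rejected
    ("alice", "", False),             # negative: empty password rejected
])
def test_login(user, pwd, expected):
    assert authenticate(user, pwd) is expected
```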
10. End-to-End Testing (E2E Testing)

●​ End-to-End Testing checks the entire application workflow from start to finish.
●​ It verifies that all components of the system work together seamlessly.
●​ Unlike testing individual features separately, E2E testing ensures that the whole process works as expected.

Key Idea: “Does the system work correctly from the first step to the last step?”

Example: General Application

Features: Login → Add Customer → Edit Customer → Delete Customer → Logout

E2E testing will test the complete flow:

1.​ Login successfully
2.​ Add a new customer
3.​ Edit that customer
4.​ Delete the customer
5.​ Logout

Example: E-commerce Application (Purchase Flow)​


Scenario: User buys a product online
Steps:

1.​ User logs into the website.
2.​ Browses products and adds items to the cart.
3.​ Proceeds to checkout.
4.​ Provides shipping and payment information.
5.​ System processes payment and generates an order confirmation.
6.​ User receives an email confirmation.

Checks during E2E Testing:

●​ User can successfully log in
●​ Shopping cart calculates correct totals
●​ Checkout collects shipping & payment info accurately
●​ System generates valid order confirmation
●​ User receives expected email confirmation
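When the flow is exposed over HTTP, the whole journey can be chained in one script, each step building on the previous one’s output. A hedged sketch with `requests` — every endpoint and field name below is a hypothetical placeholder:

```python
# Hedged sketch: an end-to-end purchase flow driven over HTTP.
import requests

BASE = "https://shop.example.test"  # hypothetical storefront
session = requests.Session()        # keeps the login cookie across steps

def run_purchase_flow():
    r = session.post(f"{BASE}/login", json={"user": "alice", "pwd": "secret"})
    assert r.status_code == 200            # step 1: login

    r = session.post(f"{BASE}/cart", json={"item_id": 42, "qty": 1})
    assert r.status_code == 200            # step 2: add to cart

    r = session.post(f"{BASE}/checkout", json={"card": "...", "ship_to": "..."})
    assert r.status_code == 200            # steps 3-5: checkout and payment
    order_id = r.json()["order_id"]

    r = session.get(f"{BASE}/orders/{order_id}")
    assert r.json()["status"] == "confirmed"  # step 6: confirmation exists

if __name__ == "__main__":
    run_purchase_flow()
    print("End-to-end purchase flow passed")
```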

Summary for End-to-End Testing​


✅ Tests the complete workflow of an application​
✅ Ensures components interact correctly​
✅ Covers all steps from start to finish​
✅ Useful to verify real-world user scenarios
11. GLOBALIZATION TESTING

●​ Globalization testing ensures that an application can work across different countries, regions, and cultures.
●​ It verifies that the software supports multiple languages, date formats, currencies, and cultural norms.

Key Idea: “Can the application work anywhere in the world?”


Examples:

●​ PayPal: Works in almost all countries and supports multiple currencies.
●​ Amazon: Available in different countries, supports different languages and currencies.
●​ Date format differences: DD/MM/YYYY vs MM/DD/YYYY
●​ Currency differences: $ vs ₹ vs €

Goal: Ensure the application is truly global-ready.
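The date and currency differences above are easy to demonstrate in code. A standard-library Python sketch; the locale table is a deliberate simplification of real locale data:

```python
# Hedged sketch: the same date and amount rendered per locale.
from datetime import date

LOCALE_RULES = {
    "en_US": {"date": "%m/%d/%Y", "currency": "$"},  # MM/DD/YYYY
    "en_IN": {"date": "%d/%m/%Y", "currency": "₹"},  # DD/MM/YYYY
    "de_DE": {"date": "%d.%m.%Y", "currency": "€"},
}

def localize(d: date, amount: float, locale: str) -> str:
    rules = LOCALE_RULES[locale]
    return f"{d.strftime(rules['date'])} {rules['currency']}{amount:,.2f}"

sample = date(2024, 3, 7)
for loc in LOCALE_RULES:
    # A globalization test would assert each locale's expected output.
    print(loc, "→", localize(sample, 1499.5, loc))
# en_US → 03/07/2024 $1,499.50
# en_IN → 07/03/2024 ₹1,499.50
# de_DE → 07.03.2024 €1,499.50
```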

12. LOCALIZATION TESTING

●​ Localization testing ensures the application is adapted for a specific locale or region.
●​ It verifies that the software handles local languages, translations, cultural preferences, and regional requirements.

Key Idea:​
👉 “Does the application fit the local audience?”
Examples:

●​ A Chinese banking app: Available only in the Chinese language, supports local currency and date formats.
●​ Food delivery apps in India: Support Indian languages, Indian currency (₹), and Indian addresses.

✅ Goal: Ensure the application is suitable and functional for a specific region.
Key Difference: Globalization vs Localization

Aspect | Globalization Testing | Localization Testing
Purpose | Works across all regions & cultures | Works for a specific region/culture
Language | Supports multiple languages | Supports one specific language
Currency | Multiple currencies | Local currency only
Date/Time | Supports different formats | Local formats only
Example | PayPal, Amazon | Chinese banking apps, India-specific apps

✅ Summary:
●​ Globalization: Make software work anywhere in the world.
●​ Localization: Make software fit a specific country/region perfectly.
●​ Both are important for apps with global reach or a regional focus.
