Lecture Notes: Software Testing Strategies
Chapter 20.1: Software Testing Fundamentals
Goal of Testing: The primary goal of testing is to find errors. A good test is
characterized by having a high probability of finding an error. To achieve this, testers
must thoroughly understand the software and anticipate potential failure points.
Verification and Validation (V&V):
o Verification: Focuses on whether the software is built correctly according to
specifications. It ensures that the software "conforms to its specification".
o Validation: Focuses on whether the software built is the right product, meaning it
meets the customer's actual needs and expectations.
o V&V encompasses a broad range of Software Quality Assurance (SQA)
activities, including technical reviews, quality and configuration audits,
performance monitoring, simulation, feasibility studies, documentation reviews,
database reviews, algorithm analysis, development testing, usability testing,
qualification testing, acceptance testing, and installation testing. Testing is a
crucial part of V&V, but it's not the only activity.
Strategic Approach to Software Testing (The Big Picture):
o Software development can be visualized as spiraling inward, starting from system
engineering, moving to requirements analysis, then design, and finally coding.
o Testing is a series of four sequential steps within this context:
1. Unit Testing:
Focuses on individual components (units) to ensure they function
properly.
Employs white-box testing techniques to exercise specific paths
in a component's control structure for comprehensive coverage and
error detection.
Often conducted by individual software engineers.
2. Integration Testing:
Addresses the issues of verification and program construction
by assembling or integrating components to form the complete
software package.
Black-box testing techniques (focus on inputs and outputs) are
more common, but white-box techniques may be used for major
control paths.
3. Validation Testing:
Conducted after the software has been integrated.
Evaluates the software against the validation criteria established
during requirements analysis.
Provides final assurance that the software meets all functional,
behavioral, and performance requirements.
4. System Testing:
Falls outside the strict boundary of software engineering and into
the broader context of computer system engineering.
Verifies that the software, combined with other system elements
(e.g., hardware, people, databases), meshes properly and that
overall system function and performance are achieved.
Example
A real-time example to illustrate the software testing fundamentals, using a ride-sharing
app (like Uber) as the software project:
🎯 Goal of Testing
Suppose the team is developing a new ride-scheduling feature.
The goal of testing is to find errors—e.g., a bug where the ride gets scheduled at the wrong
time due to a timezone mismatch.
A good test would target this edge case by simulating a user in a different timezone from the
driver.
✅ Verification and Validation (V&V)
📌 Verification (Are we building the product right?)
The development team checks if the ride scheduling algorithm follows the exact logic
defined in the specification:
o "A ride cannot be scheduled more than 24 hours in advance."
Activities involved:
o Code reviews
o Static analysis of scheduling logic
o Technical documentation reviews
📌 Validation (Are we building the right product?)
Beta testers use the app to see if the ride scheduling feature actually meets their
expectations.
o E.g., "Can users in rural areas with poor connectivity schedule rides without
errors?"
Activities involved:
o Usability testing with real users
o Feedback sessions
o Acceptance testing by stakeholders
Strategic Approach to Software Testing
The testing spiral progresses through stages as the ride-scheduling feature is built:
1️⃣ Unit Testing
A developer tests the CalculateArrivalTime() function.
o Uses white-box testing to test different input times, driver speeds, and route
distances.
o Ensures the function returns correct ETA in all cases.
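The notes don't show the actual CalculateArrivalTime() code, so here is a minimal sketch of what such white-box unit tests could look like. The function body (distance divided by speed, plus a fixed pickup buffer) is an assumption made purely for illustration; the point is that each control path, including the error guard, gets its own test.

```python
# Hypothetical implementation of the function under test; the real
# CalculateArrivalTime() is not shown in the notes, so this simple model
# (travel time + pickup buffer) is an assumption for illustration.
def calculate_arrival_time(distance_km, speed_kmh, pickup_buffer_min=5):
    """Return the ETA in minutes for a given route distance and driver speed."""
    if speed_kmh <= 0:
        raise ValueError("speed must be positive")
    travel_min = (distance_km / speed_kmh) * 60
    return travel_min + pickup_buffer_min

# White-box tests: one per control path in the function.
def test_normal_route():
    # 30 km at 60 km/h = 30 min travel + 5 min pickup buffer
    assert calculate_arrival_time(30, 60) == 35

def test_zero_speed_rejected():
    try:
        calculate_arrival_time(10, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the error guard path was exercised

test_normal_route()
test_zero_speed_rejected()
```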
2️⃣ Integration Testing
The team integrates CalculateArrivalTime() with the map API, driver availability
module, and notification service.
Uses black-box testing to check if, when a ride is scheduled:
o The driver receives a correct notification.
o The estimated time appears correctly in the user’s app.
o No internal errors occur during data handoff.
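A black-box integration check of this kind can be sketched as below. The module bodies are made-up stand-ins (the notes don't show them); what matters is that the test only looks at the externally visible handoff, i.e. the notification text the driver would receive.

```python
# Stand-in for the unit-tested ETA component (simplified, assumed logic).
def calculate_arrival_time(distance_km, speed_kmh):
    return round((distance_km / speed_kmh) * 60)

# Stand-in for the notification service: the real one would push to a
# device; here we only format the message to verify the interface contract.
def notify_driver(ride):
    return f"Ride {ride['id']}: pick up in {ride['eta_min']} min"

# The integrated path under test: schedule a ride, hand data to notifications.
def schedule_ride(ride_id, distance_km, speed_kmh):
    ride = {"id": ride_id, "eta_min": calculate_arrival_time(distance_km, speed_kmh)}
    return notify_driver(ride)

# Black-box check: only inputs and the observable output are examined.
message = schedule_ride("R42", 30, 60)
print(message)  # Ride R42: pick up in 30 min
```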
3️⃣ Validation Testing
QA team checks the entire ride-scheduling feature:
o Are rides scheduled within allowed times?
o Does the feature work for different user roles (e.g., passenger, driver)?
o Does it comply with requirements gathered earlier?
4️⃣ System Testing
The feature is tested on real devices and networks with production-like data.
o Does the ride scheduling still work when the user is in airplane mode?
o Is performance acceptable during high traffic hours?
o Are all system-level interactions—with GPS, notifications, user database,
payment gateways—working together smoothly?
🔄 Summary:
| Testing Type        | Real-Time Example (Ride Scheduling Feature)                                           |
|---------------------|---------------------------------------------------------------------------------------|
| Unit Testing        | Testing CalculateArrivalTime() function logic.                                        |
| Integration Testing | Ensuring modules like GPS, map APIs, and notifications interact properly.             |
| Validation Testing  | Verifying the feature aligns with user needs and requirements.                        |
| System Testing      | End-to-end testing of the app under real-world conditions and with full system setup. |
Chapter 20.2: Integration Testing
Purpose: The objective of integration testing is to build the software architecture while
simultaneously conducting tests to uncover errors related to interfacing between
software components. It aims to take unit-tested components and construct the program
structure as dictated by the design.
Integration Strategies:
o Top-Down Integration:
Begins with the main control module as a test driver, with stubs
substituting for all subordinate components.
Subordinate stubs are replaced one at a time with actual components,
either depth-first (integrating all components on a major control path) or
breadth-first (incorporating all components directly subordinate at each
level, horizontally).
Tests are conducted as each component is integrated.
o Bottom-Up Integration:
Starts with atomic modules (lowest-level components) that are
individually tested.
These atomic modules are then combined into clusters (builds), which
are also tested.
Drivers (control programs) are written to coordinate test case input and
output for the clusters.
The process continues upward through the program structure until all
modules are integrated and tested.
o Continuous Integration (CI):
A modern agile practice where small, incremental software changes are
integrated into a larger system (the build) and then tested frequently
(often multiple times per day).
Each integration is verified by an automated build, including testing, to
detect integration errors as quickly as possible.
Benefits include reduced risk of integration failures, immediate feedback
on changes, and improved reporting.
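The stubs and drivers mentioned in these strategies can be sketched in a few lines. All module names below are illustrative: a stub gives a canned answer in place of a subordinate component that isn't integrated yet (top-down), while a driver feeds test input to a low-level module that has no caller yet (bottom-up).

```python
# --- Stub: replaces a subordinate payment component during top-down
#     integration, returning a canned "approved" response.
def payment_stub(order_total):
    return {"status": "approved", "charged": order_total}

# Higher-level module under test; the payment dependency is injected so
# the stub can stand in for the real component.
def checkout(order_total, payment=payment_stub):
    receipt = payment(order_total)
    return receipt["status"] == "approved"

# --- Driver: coordinates test input/output for a low-level module during
#     bottom-up integration, before any real caller exists.
def calculate_tax(amount, rate=0.10):
    return round(amount * rate, 2)

def tax_driver():
    cases = [(100, 10.0), (19.99, 2.0)]  # (input, expected) pairs
    return all(calculate_tax(amount) == expected for amount, expected in cases)

assert checkout(50.0)   # top-down path works against the stub
assert tax_driver()     # bottom-up module passes its driver's cases
```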
Integration Test Work Products:
o An overall plan for integration and a description of specific tests are
documented in a test specification.
o This specification includes a test plan and a test procedure and becomes part of
the software configuration.
o Testing is divided into phases and incremental builds corresponding to functional
and behavioral characteristics.
o The test plan includes: a schedule for integration, development of scaffolding
software (stubs and drivers), and descriptions of the test environment and
resources.
o The detailed testing procedure describes the order of integration and
corresponding tests, including a listing of all test cases and expected results. In
agile contexts, this level of detail occurs as code for user stories is developed.
o A test report records actual test results, problems, or peculiarities and can be
appended to the test specification, often implemented as a shared web document
for stakeholder access.
Chapter 21.4: Web Application Testing
Fundamental Philosophy: The core philosophy of testing, which is to exercise software
with the intent of finding (and ultimately correcting) errors, remains the same for
WebApps.
Challenges: Web-based systems present significant challenges due to their networked
nature and interoperability with various operating systems, browsers, hardware platforms,
communication protocols, and backend applications.
The Testing Process (Figure 21.2): WebApp testing is influenced by two key
dimensions:
o Technology: This refers to the underlying technical infrastructure and
components.
o User: This refers to the human-computer interaction and user experience.
Here's a real-time example of the concepts from Chapter 20.2: Integration Testing, using a
Food Delivery App (like Zomato or DoorDash) as the software project.
🍕 Real-Time Example: Food Delivery App – “QuickBite”
🔍 Purpose of Integration Testing
After unit testing individual modules like:
SearchRestaurants()
PlaceOrder()
TrackDelivery()
…the goal of integration testing is to combine these modules and test whether they work
together seamlessly.
Example Error Found:
The TrackDelivery() module expects the order ID in a specific format, but
PlaceOrder() is passing it incorrectly.
This kind of interface-related bug is exactly what integration testing aims to catch.
🧭 Integration Strategies in QuickBite
1️⃣ Top-Down Integration
Start with the main control module, say MainAppController.
Replace subcomponents like SearchRestaurants(), OrderModule, and PaymentModule
with stubs.
Gradually replace stubs with real components:
o First: replace SearchRestaurants() with the actual module.
o Then move down depth-first into SearchByCuisine() or breadth-first to
include all search-related modules at once.
💡 Advantage: Critical functionality is tested early.
📌 Example Stub: A placeholder function that simulates returning a list of restaurants.
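A minimal sketch of such a stub, with made-up fixture data, might look like this. It lets MainAppController be exercised before the real SearchRestaurants() module exists.

```python
# Placeholder standing in for the real SearchRestaurants() component.
# The restaurant records are throwaway fixture data, not real app data.
def search_restaurants_stub(cuisine=None):
    fixtures = [
        {"name": "Pasta Palace", "cuisine": "italian"},
        {"name": "Curry Corner", "cuisine": "indian"},
    ]
    if cuisine is None:
        return fixtures
    return [r for r in fixtures if r["cuisine"] == cuisine]

results = search_restaurants_stub("indian")
print([r["name"] for r in results])  # ['Curry Corner']
```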
2️⃣ Bottom-Up Integration
Start with lowest-level functions, like:
o ValidateCoupon()
o CalculateTax()
Test them independently.
Then combine them into a Payment Cluster and test the cluster using a driver (e.g., a
mock module that simulates placing an order).
Eventually, integrate this cluster with higher modules like OrderModule.
💡 Advantage: Useful when lower-level utilities are critical and complex.
📌 Example Driver: A small program that simulates a customer placing an order to trigger the
payment logic.
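Such a driver could be sketched as follows. The coupon table, tax rate, and function bodies are assumptions; the driver's job is simply to simulate an order so the payment cluster (ValidateCoupon + CalculateTax) runs without the front-end UI.

```python
# Assumed fixture: a single test coupon worth a 10% discount.
COUPONS = {"SAVE10": 0.10}

def validate_coupon(code):
    """Return the discount rate for a coupon, or 0.0 if invalid."""
    return COUPONS.get(code, 0.0)

def calculate_tax(amount, rate=0.05):
    return round(amount * rate, 2)

# The driver: simulates a customer placing an order to exercise the
# payment cluster end to end.
def payment_driver(subtotal, coupon_code):
    discount = subtotal * validate_coupon(coupon_code)
    taxable = subtotal - discount
    return round(taxable + calculate_tax(taxable), 2)

total = payment_driver(200.0, "SAVE10")
print(total)  # 189.0 -> 200 minus 20 discount, plus 9 tax
```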
3️⃣ Continuous Integration (CI)
QuickBite uses CI tools like Jenkins or GitHub Actions:
Every time a developer pushes code (e.g., modifies ApplyDiscount()), the system
automatically:
o Builds the app
o Runs integration tests
o Notifies the team if anything breaks
🟢 Example Test: Ensuring that a coupon applied at checkout also correctly reflects in the final
order summary.
💡 Benefits:
Bugs are caught early
Frequent feedback
Smooth collaboration between teams
📂 Integration Test Work Products in QuickBite
🧾 Test Specification Includes:
Integration Plan:
o "Day 1–3: Integrate and test Search & Order modules."
o "Day 4–5: Add and test Payment system."
Scaffolding Software:
o Stubs: Fake version of TrackDelivery() returning a static ETA.
o Drivers: Simulate placing an order without needing the front-end UI.
Test Environment:
o Mobile emulator (Android/iOS), staging backend with mock data.
🧪 Testing Procedure Example:
Order of Integration:
1. SearchRestaurants() + OrderModule
2. Add PaymentModule
3. Add TrackDelivery()
Sample Test Case:
o Input: Selects restaurant → Adds item → Applies coupon → Proceeds to payment
o Expected Result: Correct discount is applied, and payment is successful.
📝 Test Report:
Actual Result: Payment failed due to incorrect currency conversion.
Notes: Bug opened with Payment Team.
Format: Shared via Jira or Confluence for team visibility.
🧠 Summary Table
| Integration Strategy   | QuickBite Real-Time Example                               |
|------------------------|-----------------------------------------------------------|
| Top-Down Integration   | Start with MainAppController, stub out OrderModule        |
| Bottom-Up Integration  | Test ValidateCoupon() + CalculateTax() first              |
| Continuous Integration | Use CI tools to automatically test every code change      |
| Test Specification     | Contains schedule, test plan, stubs, drivers, environment |
| Test Report            | Tracks actual test outcomes and any issues found          |
Chapter 21.7: Security Testing
Importance: Security testing is a specialized testing strategy that is crucial for mobile
and web applications. This is due to their increasing integration with critical corporate
and government databases and their use in e-commerce applications that handle
sensitive customer information.
Key Measure: The primary measure of security is the ability of the mobile app and its
server environment to rebuff unauthorized access and/or thwart outright
malevolent attacks.
Further Details: Chapter 18 provides a more detailed discussion of security engineering.
Here’s a real-time example to illustrate the concepts from Chapter 21.7: Security Testing,
using a Mobile Banking App as the software project:
🏦 Real-Time Example: Security Testing in a Mobile Banking App –
“SafeBank”
🔐 Importance of Security Testing
Scenario: SafeBank is a mobile app used by millions to:
Check balances
Transfer money
Pay bills
Access personal financial data
💥 A security breach could expose:
Account numbers
Login credentials
Credit card data
Why it matters: SafeBank connects to corporate financial databases and government tax
APIs. Any vulnerability puts sensitive data and user trust at risk.
📏 Key Measure of Security
The key security goal is to prevent unauthorized access and defend against attacks.
SafeBank must withstand:
Brute force login attempts
SQL injection in forms (e.g., “Pay Tax” page)
Session hijacking (intercepting tokens over public Wi-Fi)
Rooted or jailbroken device threats
API abuse by bots or attackers
✅ Pass condition: Unauthorized users cannot gain access, escalate privileges, or manipulate
sensitive data.
🧪 Examples of Security Testing in SafeBank
1️⃣ Authentication Testing
Tester tries multiple invalid login attempts to simulate a brute force attack.
Ensure:
o App locks account after 5 failed attempts.
o 2FA (OTP or biometric) is triggered correctly.
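The lockout behavior above can be expressed as a small executable test. The Account class is a toy model written for this sketch, not SafeBank's code; only the policy (lock after 5 failures) comes from the notes.

```python
MAX_ATTEMPTS = 5  # policy from the notes: lock after 5 failed attempts

class Account:
    """Toy account model for the lockout test; assumed, not real code."""
    def __init__(self, password):
        self._password = password
        self.failed = 0
        self.locked = False

    def login(self, attempt):
        if self.locked:
            return "locked"
        if attempt == self._password:
            self.failed = 0
            return "ok"
        self.failed += 1
        if self.failed >= MAX_ATTEMPTS:
            self.locked = True
        return "denied"

acct = Account("s3cret")
for _ in range(5):            # simulate a brute-force run
    acct.login("wrong-guess")
print(acct.login("s3cret"))   # locked: even the correct password is refused
```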
2️⃣ Authorization Testing
Tester logs in as a standard user.
Attempts to access admin endpoints via URL manipulation or API tampering.
Expectation: Access is denied, and the event is logged.
3️⃣ Data Transmission Security
Simulate man-in-the-middle (MITM) attacks using tools like Wireshark.
Verify that:
o Data is encrypted via HTTPS/TLS.
o No sensitive data (e.g., passwords) is sent in plain text.
4️⃣ Session Management Testing
Tester checks:
o Do sessions expire after inactivity?
o Are session tokens regenerated after login/logout?
o Can a session token be reused maliciously?
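These three checks can be demonstrated against a toy session store. The in-memory set and uuid-based tokens are assumptions for illustration; a real backend would also enforce inactivity timeouts server-side.

```python
import uuid

SESSIONS = set()  # toy in-memory session store (assumed for this sketch)

def login():
    token = uuid.uuid4().hex   # a fresh token is generated on every login
    SESSIONS.add(token)
    return token

def logout(token):
    SESSIONS.discard(token)    # logout invalidates the token

def is_valid(token):
    return token in SESSIONS

old = login()
logout(old)
new = login()
assert not is_valid(old)   # reuse of the old token is rejected
assert is_valid(new)       # only the current session is accepted
assert old != new          # tokens are regenerated, never recycled
```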
5️⃣ Input Validation Testing
Try SQL injection:
o Input '; DROP TABLE users;-- into login or support form fields.
o Expectation: Input is sanitized and error is handled safely.
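The expected outcome, input treated strictly as data, is what parameterized queries guarantee. The sketch below uses Python's sqlite3 module with a throwaway users table to show that the injection string from the notes neither matches a row nor drops the table.

```python
import sqlite3

# Throwaway fixture: an in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "'; DROP TABLE users;--"
# The ? placeholder binds the input as a value, so no SQL is injected.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] : no match, and no statement was executed from the input

# Prove the table survived the "attack".
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 1
```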
📝 Summary Table
| Security Test Type         | SafeBank Example                        | Expected Outcome                         |
|----------------------------|-----------------------------------------|------------------------------------------|
| Authentication Testing     | Multiple failed logins                  | Account lock, 2FA prompt                 |
| Authorization Testing      | User tries accessing admin API          | Access denied, activity logged           |
| Data Transmission Security | MITM attack attempt                     | Data encrypted via TLS                   |
| Session Management         | Reuse of old session token              | Token invalidated, session expired       |
| Input Validation           | SQL injection in search or login fields | Input sanitized, error handled securely  |
🧠 Final Thought:
Security testing is non-negotiable in apps like SafeBank. A single missed vulnerability could
result in:
Financial loss
Legal consequences (e.g., GDPR/PCI-DSS violations)
Reputational damage
🔐 That’s why security testing is a core part of QA, especially for mobile/web apps handling
sensitive data.
Chapter 21.8: Performance Testing
Definition and Purpose: Performance testing is conducted to uncover run-time
performance problems, such as slow response, degraded throughput, or instability
under load.
Causes of Performance Problems: These problems can arise from a variety of factors,
including:
o Lack of server-side resources.
o Inappropriate network bandwidth.
o Inadequate database capabilities.
o Faulty or weak operating system capabilities.
o Poorly designed WebApp functionality.
o Other hardware or software issues that lead to degraded client-server
performance.
Twofold Intent: The purpose of performance testing is twofold:
1. To understand how the system responds as loading increases (e.g., number of
users, transactions, or overall data volume).
2. To collect metrics that will guide design modifications to improve performance.
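Both parts of this intent can be sketched with a toy load test: drive increasing load against a request handler and collect latency metrics that would guide redesign. The handler's timing model below is entirely made up (latency grows linearly with concurrent users); real tools like JMeter measure an actual system instead.

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(active_users):
    # Toy model (assumed): response time in ms degrades as load grows.
    return 50 + 2 * active_users

def load_test(users):
    """Fire `users` concurrent requests and summarize response times."""
    with ThreadPoolExecutor(max_workers=min(users, 32)) as pool:
        latencies = list(pool.map(handle_request, [users] * users))
    return {
        "users": users,
        "p50_ms": statistics.median(latencies),
        "max_ms": max(latencies),
    }

# Intent 1: observe how response time changes as loading increases.
# Intent 2: the collected metrics point at where redesign is needed.
for load in (10, 100, 1000):
    print(load_test(load))
```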
Here's a real-time example for Chapter 21.8: Performance Testing, using a Ticket Booking
Web Application — let’s call it “QuickTickets” — like IRCTC or Ticketmaster.
🎫 Real-Time Example: Performance Testing for “QuickTickets”
🔍 Definition and Purpose
Scenario: QuickTickets allows users to:
Search events or train tickets
Book seats
Make online payments
View booking history
During high-demand periods (e.g., concert ticket release or festival train booking), the app
slows down or crashes.
✅ Goal of Performance Testing: Identify where and why the system struggles under load and
how to fix it.
⚠️ Causes of Performance Problems (Examples from QuickTickets)

| Cause                            | Real-Time Example                                                          |
|----------------------------------|----------------------------------------------------------------------------|
| Lack of server-side resources    | CPU hits 100% when 5,000 users search tickets simultaneously.              |
| Inappropriate network bandwidth  | Mobile users experience lag due to the server's poor outbound data capacity. |
| Inadequate database capabilities | Query to fetch available seats slows down dramatically with large datasets. |
| Weak OS capabilities             | The load balancer OS crashes under high concurrency.                       |
| Poorly designed WebApp           | Seat availability check runs for all users even if they haven't searched.  |
| Other issues                     | Memory leaks cause server slowdown after 24 hours of continuous operation. |
🎯 Twofold Intent of Performance Testing
1️⃣ Understand System Behavior Under Load
Test Case: Simulate 10,000 users trying to book concert tickets at once.
Tools used: Apache JMeter or LoadRunner
Measured metrics:
o Response time: 8 seconds (too slow)
o Error rate: 15% requests failed
o Server CPU: 95%
o DB query latency: 4 seconds
📌 Conclusion: System becomes unstable beyond 3,000 users — must optimize back-end and add
scaling.
2️⃣ Collect Metrics for Design Improvements
From the load test:
Identified that SQL joins in ticket availability queries slow down dramatically.
Determined session timeout logic is inefficient, leading to memory bloat.
Based on findings:
o Optimized SQL queries
o Added Redis caching for frequently accessed data
o Upgraded server RAM and added autoscaling in the cloud environment
📈 After improvements:
Response time improved from 8s → 1.2s
System handled 10,000 users with <1% failure
📊 Summary Table
| Performance Factor          | QuickTickets Example                  | Action Taken                              |
|-----------------------------|---------------------------------------|-------------------------------------------|
| Server Resource Limitations | CPU overload with many users          | Added auto-scaling and optimized services |
| Network Bandwidth           | Slow mobile load times                | Increased outbound bandwidth on servers   |
| Database Bottlenecks        | Seat query slowdown                   | Indexed DB tables, query optimization     |
| OS/Infrastructure Issues    | Load balancer crashed                 | Switched to cloud-native load balancing   |
| WebApp Design               | Inefficient code called too often     | Refactored for event-driven execution     |
| Metrics for Redesign        | Collected via JMeter and server logs  | Used to inform cloud migration decisions  |
✅ Final Thought:
Without performance testing, QuickTickets would:
Crash during peak hours
Frustrate users
Lose revenue and brand trust
🧪 That’s why performance testing isn’t optional—it’s essential for user-facing apps, especially
those involving transactions and real-time interactions.