1) Integration Testing (with neat sketch)
Definition:
Integration Testing is a testing level where individual software modules are combined and tested
as a group to uncover errors related to the interactions between them.
Purpose:
To check the interfaces between modules.
Ensure data is passed correctly between units.
Validate the design and construction of software architecture.
Types:
Top-down integration
Bottom-up integration
Sandwich/hybrid approach
Diagram:
Top-Down Integration Sketch:
              [Main Module]
             /      |      \
          [A1]    [A2]    [A3]
          /  \
       [B1]  [B2]
Testing starts at the Main Module and proceeds downward to the lower-level modules (stubs are used for modules that are not yet ready).
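Illustrative sketch (Python; the module and function names are invented): two units are combined and the test checks that data crosses their interface intact.

    # Hypothetical units under integration; names are for illustration only.
    def create_invoice(order_id, amount):
        """Unit A: produces a record consumed by unit B."""
        return {"order_id": order_id, "total": round(amount, 2)}

    def prepare_shipment(invoice):
        """Unit B: consumes unit A's output across the interface."""
        return "Ship order {} (paid: {})".format(invoice["order_id"], invoice["total"])

    # Integration test: exercises both units together, not in isolation.
    def test_invoice_to_shipment_interface():
        invoice = create_invoice(42, 19.999)
        assert prepare_shipment(invoice) == "Ship order 42 (paid: 20.0)"

    test_invoice_to_shipment_interface()
    print("interface test passed")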
---
2) Unit Testing
Definition:
Unit Testing is the process of testing individual components or modules of a software
application in isolation.
Key Points:
Focuses on internal logic and data structures.
Done by developers during the coding phase.
Helps find bugs early in development.
Test cases may cover boundaries, logic, and data flow.
Example: Testing a function add(a, b) to verify it returns correct sum for different inputs.
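Illustrative sketch of such a unit test, using Python's built-in unittest module:

    import unittest

    def add(a, b):
        return a + b

    class TestAdd(unittest.TestCase):
        def test_typical_inputs(self):
            self.assertEqual(add(2, 3), 5)

        def test_boundary_inputs(self):
            self.assertEqual(add(0, 0), 0)      # boundary: zeros
            self.assertEqual(add(-1, 1), 0)     # boundary: sign change
            self.assertEqual(add(-2, -3), -5)   # both negative

    if __name__ == "__main__":
        unittest.main()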
---
3) Black Box Testing vs White Box Testing
Feature            | Black Box Testing                                  | White Box Testing
Focus              | Functional behavior                                | Internal logic and structure
Tester Knowledge   | No knowledge of internal code                      | Full knowledge of internal code
Techniques Used    | Equivalence partitioning, boundary value analysis  | Path testing, loop testing, condition testing
Test Basis         | Requirement specifications                         | Source code and design
Example            | Check login works with valid/invalid input         | Ensure all loops and conditions are tested
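Illustrative sketch of the black-box techniques above (equivalence partitioning and boundary value analysis) against a hypothetical rule, "a PIN is valid only if it is exactly 4 digits":

    # The implementation is irrelevant to the black-box tester; only the spec matters.
    def validate_pin(pin):
        return isinstance(pin, str) and len(pin) == 4 and pin.isdigit()

    # Test cases derived purely from the specification:
    assert validate_pin("1234") is True     # valid partition
    assert validate_pin("123") is False     # boundary: length 3
    assert validate_pin("12345") is False   # boundary: length 5
    assert validate_pin("12a4") is False    # invalid partition: non-digit
    print("black-box tests passed")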
---
4) Art of Debugging
Definition:
Debugging is the process of identifying, analyzing, and fixing bugs in the software after testing
reveals issues.
Key Strategies:
1. Brute Force:
Using print statements, logs, or memory dumps (least efficient).
2. Backtracking:
Tracing code backwards from error point.
3. Cause Elimination:
List all possible causes, then isolate the real one by testing each in turn.
Tools:
Debuggers, dynamic analyzers, test case generators.
Tip:
Debugging is more of an art than a science, relying on logic, experience, and sometimes luck!
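Illustrative sketch of the brute-force strategy, using Python's standard logging module to trace values (the buggy function is invented for illustration):

    import logging

    logging.basicConfig(level=logging.DEBUG)

    def average(values):
        total = 0
        for v in values:
            total += v
            logging.debug("running total=%s after value=%s", total, v)  # brute-force trace
        return total / len(values)  # trace shows nothing was logged for [] -> len is 0

    try:
        average([])
    except ZeroDivisionError:
        logging.error("empty input reaches the division; add a guard for len == 0")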
---
5) Validation Testing
Definition:
Validation Testing ensures the final product matches the user requirements.
Purpose:
Confirm that functional, performance, and usability requirements are met.
Performed after integration testing.
Types:
Alpha Testing: Done at developer’s site with internal users.
Beta Testing: Done at user’s site in real environment.
Steps Involved:
1. Prepare test plan.
2. Execute validation tests.
3. Record any deviations.
4. Perform configuration review.
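Illustrative sketch of one validation test, tracing a hypothetical user requirement ("every completed order appears in the report") to an executable check:

    # Requirement R-12 (invented for illustration).
    def build_report(orders):
        return [o["id"] for o in orders if o["status"] == "completed"]

    def test_requirement_r12():
        orders = [{"id": 1, "status": "completed"},
                  {"id": 2, "status": "pending"}]
        assert build_report(orders) == [1]  # any deviation here would be recorded

    test_requirement_r12()
    print("validation check for R-12 passed")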
---
6) Metrics for Software Measurement
Definition:
Metrics help in measuring software quality, productivity, and maintainability.
Types:
Direct Metrics: Cost, LOC (Lines of Code), execution time.
Indirect Metrics: Quality aspects like reliability, usability.
Categories:
1. Size-Oriented: Based on LOC, cost, effort.
2. Function-Oriented: Function Points (based on user functionality).
3. OO Metrics: Class count, inheritance depth.
4. Web Metrics: Static/dynamic page count, customization ratio.
Purpose:
Track progress.
Predict future performance.
Improve software processes.
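Worked sketch of the size-oriented category (all figures invented):

    # Hypothetical project data.
    loc = 12_000      # lines of code
    effort = 24       # person-months
    defects = 60      # defects found
    cost = 168_000    # dollars

    kloc = loc / 1000
    print(loc / effort, "LOC/person-month")   # productivity: 500.0
    print(defects / kloc, "defects/KLOC")     # quality: 5.0
    print(cost / kloc, "$/KLOC")              # unit cost: 14000.0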
---
7) Metrics for Maintenance
Key Metric:
Software Maturity Index (SMI)
Formula:
SMI = (Mt - (Fc + Fa + Fd)) / Mt
Where:
Mt = number of modules in the current release
Fc = number of modules changed from the preceding release
Fa = number of modules added
Fd = number of modules from the preceding release that were deleted
Purpose of SMI:
Measure software stability.
Track impact of changes.
Help predict future maintenance needs.
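Worked sketch with invented release data; as SMI approaches 1.0, the product is considered to be stabilizing:

    def smi(mt, fc, fa, fd):
        """Software Maturity Index: (Mt - (Fc + Fa + Fd)) / Mt."""
        return (mt - (fc + fa + fd)) / mt

    # Hypothetical release: 100 modules; 5 changed, 3 added, 2 deleted.
    print(smi(100, 5, 3, 2))  # 0.9 -> fairly stable release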
---
8) Top-Down Integration Testing
Definition:
Top-down integration is a method where testing starts from the top-level modules and integrates
downward.
Process:
Use stubs to simulate lower modules.
Gradually replace stubs with actual modules.
Conduct testing level-by-level.
Advantages:
Early detection of high-level design flaws.
Major decisions are tested first.
Diagram:
         [Main Control]
         /           \
     [Sub1]        [Sub2]
     /    \
[Stub1]  [Stub2]
Stubs are replaced later by real modules.
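Illustrative sketch of a stub in Python (names invented): the top-level module is tested against a stub that returns canned data, and the stub is swapped for the real module once it is ready.

    class TaxServiceStub:
        """Stands in for an unready lower-level module."""
        def tax_for(self, amount):
            return 0.0  # canned answer, just enough to test the level above

    class Checkout:
        """Top-level module under test; the lower module is injected."""
        def __init__(self, tax_service):
            self.tax_service = tax_service

        def total(self, amount):
            return amount + self.tax_service.tax_for(amount)

    # Top-down step: verify Checkout against the stub; replace it later.
    assert Checkout(TaxServiceStub()).total(100.0) == 100.0
    print("top-level module verified against stub")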
---
9) Software Quality Metrics
Purpose:
To measure and improve software product quality.
Common Metrics:
1. Correctness:
Defects per KLOC (thousand lines of code)
2. Maintainability:
Mean Time to Change (MTTC)
3. Integrity:
= 1 - (Threat × (1 - Security))
(threat = probability an attack occurs; security = probability the attack is repelled)
4. Usability:
Ease of use, user satisfaction
5. Defect Removal Efficiency (DRE):
DRE = E / (E + D)
(E = errors before delivery, D = defects after delivery)
Goal:
Achieve high DRE (close to 1) and good performance in other metrics for high-quality software.
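Worked sketch with invented numbers: 90 errors found before delivery and 10 defects reported afterwards gives DRE = 90 / (90 + 10) = 0.9.

    def dre(errors_before, defects_after):
        """Defect Removal Efficiency: E / (E + D)."""
        return errors_before / (errors_before + defects_after)

    print(dre(90, 10))  # 0.9 -> 90% of defects removed before delivery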