Questions 5 and 6
5) White box testing, also known as clear-box testing or structural testing, is a software testing
methodology where the tester has full visibility into the internal workings of the system. In this type of
testing, the tester is aware of the code, design, and architecture of the application being tested.
The techniques used in white box testing are focused on thoroughly checking the internal structure and
flow of the code. Below are some common white box testing techniques:
1. Code Coverage
Code coverage measures how much of the code is exercised during testing. The goal is to verify that every
part of the application has been tested and that no part of the code is left unexamined.
• Types of Code Coverage:
• Statement Coverage: Ensures that every line of code is executed at least once. It checks if the
program executes all statements in the code.
• Branch Coverage: Ensures that every decision point (like if statements) is tested for both true
and false outcomes. This verifies that all possible branches of the program have been tested.
• Path Coverage: Tests all possible paths in the program, including different combinations of
branches. It’s more comprehensive than statement or branch coverage.
• Condition Coverage: Ensures that each condition in the program evaluates to both true and
false at least once.
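For instance, here is a minimal sketch (the grade() function is hypothetical) showing that a single test can reach full statement coverage while branch coverage still requires the false outcome of the decision:

    def grade(score):
        result = "fail"
        if score >= 50:          # decision point
            result = "pass"
        return result

    # Statement coverage: grade(70) alone executes every line at least once.
    assert grade(70) == "pass"

    # Branch coverage additionally requires the false outcome of the decision,
    # so a second test with a score below 50 is needed.
    assert grade(30) == "fail"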
2. Control Flow Testing
This technique is focused on testing the flow of control through the program. It involves creating test
cases that follow different paths through the code to ensure that every possible route is executed.
• Control Flow Graph: A graph is created based on the program’s flow, where nodes represent
statements or blocks of code, and edges represent the flow of control. By following different paths in this
graph, testers ensure that all parts of the code are executed.
• This technique helps identify areas where the program might behave unexpectedly or
inefficiently.
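As a rough illustration, the sketch below (classify() is invented for the example) derives one test case per path through the function's control flow graph:

    def classify(n):
        if n < 0:
            return "negative"
        elif n == 0:
            return "zero"
        else:
            return "positive"

    # One test case per path through the control flow graph.
    assert classify(-5) == "negative"
    assert classify(0) == "zero"
    assert classify(7) == "positive"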
3. Data Flow Testing
Data flow testing focuses on the flow of data within the program. It ensures that data is correctly
manipulated and that variables are properly initialized, assigned, and used throughout the program.
• Definition-Use Pairs: This technique identifies where a variable is defined and where it is used
in the program. Test cases are created to ensure that data is appropriately passed between different
parts of the program.
• It helps catch errors such as incorrect variable assignments, use of uninitialized or undefined
variables, and incorrect data operations.
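A small sketch of the idea, using a hypothetical final_price() function in which each test exercises one definition-use pair of the variable discount:

    def final_price(price, is_member):
        discount = 0             # definition 1
        if is_member:
            discount = 10        # definition 2
        return price - discount  # use

    # Covers the pair (definition 1 -> use): discount keeps its initial value.
    assert final_price(100, False) == 100
    # Covers the pair (definition 2 -> use): discount is redefined before use.
    assert final_price(100, True) == 90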
4. Branch Testing
Branch testing is a subset of control flow testing that ensures that every branch in the program’s decision
points is executed at least once. This includes testing all if statements, switch cases, loops, and other
conditional structures.
• This technique checks that both the true and false outcomes of decision points (like if and else)
are tested. It’s more thorough than just testing the individual statements inside the conditions.
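For example, a brief sketch (is_valid_age() is made up for illustration) in which every decision point is driven to both its true and its false outcome:

    def is_valid_age(age):
        if age < 0:
            return False
        if age > 130:
            return False
        return True

    assert is_valid_age(-1) is False   # first decision true
    assert is_valid_age(200) is False  # first decision false, second true
    assert is_valid_age(35) is True    # both decisions false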
5. Path Testing
Path testing goes one step further than branch testing by ensuring that all possible execution paths in the
program are tested. It takes into account multiple decision points and their combinations, ensuring that
all logical paths are covered.
• Path Coverage: This ensures that all possible paths through the program’s code are executed.
For example, if a program has two if statements, each with true and false conditions, path testing will
ensure that all four combinations of those conditions are tested.
• Path testing is useful in detecting errors that might not be caught by testing individual branches
or statements alone.
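A minimal sketch of the two-if example above, with a hypothetical shipping_cost() function whose four execution paths are all exercised:

    def shipping_cost(weight, express):
        cost = 5
        if weight > 10:   # decision A
            cost += 3
        if express:       # decision B
            cost += 7
        return cost

    assert shipping_cost(5, False) == 5    # A false, B false
    assert shipping_cost(15, False) == 8   # A true,  B false
    assert shipping_cost(5, True) == 12    # A false, B true
    assert shipping_cost(15, True) == 15   # A true,  B true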
6. Loop Testing
This technique focuses on testing the loops in the program, which are common sources of errors.
• Types of Loops to Test:
• Simple Loops: A loop that runs a fixed number of times or until a condition is met.
• Nested Loops: Loops within other loops, which can create complex combinations of test cases.
• Unstructured Loops: Loops with multiple entry or exit points that do not follow a clean structure,
which makes them hard to analyze and test.
• The goal is to ensure that loops execute the correct number of times and handle edge cases,
such as zero iterations or very large numbers of iterations.
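As an illustration, a simple-loop sketch with a hypothetical total() function exercised for zero, one, several, and very many iterations:

    def total(values):
        s = 0
        for v in values:
            s += v
        return s

    assert total([]) == 0                             # zero iterations (edge case)
    assert total([4]) == 4                            # exactly one iteration
    assert total([1, 2, 3]) == 6                      # typical number of iterations
    assert total(range(10000)) == sum(range(10000))   # very large iteration count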
7. Mutation Testing
Mutation testing involves intentionally introducing small changes (mutations) into the program’s code to
check whether the test cases detect the introduced errors. If the tests fail on the mutated code (i.e., they
“kill” the mutants), the test suite is considered effective.
• It helps in evaluating the strength of the test cases by simulating possible faults in the code and
checking whether the test suite catches them.
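A hand-rolled sketch of the idea (real mutation tools generate and run the mutants automatically); the functions and the single mutant below are assumptions made for illustration:

    def is_adult(age):
        return age >= 18           # original code

    def is_adult_mutant(age):
        return age > 18            # mutant: '>=' changed to '>'

    def suite_passes(fn):
        # the "test suite": two assertions on the boundary
        return fn(18) is True and fn(17) is False

    assert suite_passes(is_adult) is True          # suite passes on the original
    assert suite_passes(is_adult_mutant) is False  # suite "kills" the mutant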
Black box testing is a software testing technique where the tester focuses on evaluating the functionality
of an application without any knowledge of its internal workings, source code, or structure. The primary
goal is to test the software based on its expected behavior, user requirements, and specifications, rather
than its implementation details.
Black box testing involves several techniques to ensure that the software meets its requirements and
performs correctly. Here are the key techniques used in black box testing:
1. Equivalence Partitioning
• Concept: This technique divides input data into valid and invalid partitions or classes.
Each partition represents a set of equivalent inputs that should produce similar results.
• Purpose: It reduces the number of test cases by selecting one representative value
from each partition. The goal is to minimize redundant tests while still covering various input
scenarios.
• Example: If a system accepts values from 1 to 100, you might divide the inputs into
three partitions: values less than 1 (invalid), values between 1 and 100 (valid), and values
greater than 100 (invalid). A test case from each partition will be sufficient.
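A minimal sketch of the 1-to-100 example, using a hypothetical accepts() check and one representative value from each partition:

    def accepts(value):
        return 1 <= value <= 100

    assert accepts(-5) is False   # partition: values below 1 (invalid)
    assert accepts(50) is True    # partition: values 1..100 (valid)
    assert accepts(250) is False  # partition: values above 100 (invalid)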
2. Boundary Value Analysis (BVA)
• Concept: This technique focuses on testing the boundaries or edge cases of input
values, as errors often occur at the boundaries.
• Purpose: It ensures that the system handles inputs at the extreme ends of a range, as
well as just inside and outside the boundaries.
• Example: For a system that accepts values from 1 to 100, the boundary values would
be 1, 100, and values just outside the range like 0 and 101. These are tested to ensure proper
handling.
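A sketch for the same 1-to-100 range (accepts() is again hypothetical), testing the values on and immediately around each boundary:

    def accepts(value):
        return 1 <= value <= 100

    assert accepts(0) is False    # just below the lower boundary
    assert accepts(1) is True     # lower boundary
    assert accepts(100) is True   # upper boundary
    assert accepts(101) is False  # just above the upper boundary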
3. Decision Table Testing
• Concept: A decision table is used to represent combinations of inputs and their
corresponding system outputs. It is particularly useful for testing systems with complex logic
and multiple conditions.
• Purpose: This technique helps to ensure that all possible combinations of inputs and
outputs are covered in the test cases.
• Example: If a discount system provides different discounts based on customer status
and purchase amount, a decision table could map all combinations of these inputs to
determine the expected output (e.g., discount value).
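A rough sketch of the discount example; the pricing rules below are assumptions, and each assertion corresponds to one column (rule) of the decision table:

    def discount(is_premium, amount):
        if is_premium and amount >= 100:
            return 20
        if is_premium:
            return 10
        if amount >= 100:
            return 5
        return 0

    assert discount(True, 150) == 20    # premium customer, large purchase
    assert discount(True, 50) == 10     # premium customer, small purchase
    assert discount(False, 150) == 5    # regular customer, large purchase
    assert discount(False, 50) == 0     # regular customer, small purchase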
4. State Transition Testing
• Concept: This technique tests the system’s behavior based on different states and
the transitions between those states. It is used for systems that exhibit state-dependent
behavior, where the output or behavior depends on the current state of the system.
• Purpose: To ensure that the system correctly transitions between states and handles
all possible state changes.
• Example: In a banking application, a user might be in different states (logged in,
logged out, etc.). The tester ensures that the system behaves as expected when
transitioning from one state to another (e.g., logging in, making a transaction, logging out).
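A minimal sketch with an invented Session state machine covering the logged-out and logged-in states, including one invalid transition:

    class Session:
        def __init__(self):
            self.state = "logged_out"

        def login(self):
            if self.state == "logged_out":
                self.state = "logged_in"

        def logout(self):
            if self.state == "logged_in":
                self.state = "logged_out"

    s = Session()
    s.login()
    assert s.state == "logged_in"    # logged_out --login--> logged_in
    s.logout()
    assert s.state == "logged_out"   # logged_in --logout--> logged_out
    s.logout()                       # invalid transition: logout while logged out
    assert s.state == "logged_out"   # state must remain unchanged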
5. Cause-Effect Graphing
• Concept: This technique maps the relationships between causes (inputs) and effects
(outputs). The graph is used to represent the logic of the system, and test cases are derived
from it.
• Purpose: To create tests based on the logical relationships between inputs and
outputs. It is especially useful for testing systems with complex business logic or multiple
interacting conditions.
• Example: For a shipping system that calculates shipping cost based on the order
amount, weight, and delivery location, cause-effect graphs would identify how these factors
affect the shipping cost.
6. Error Guessing
• Concept: This technique relies on the tester’s intuition and experience to predict
where errors are likely to occur in the system. The tester guesses the areas of the software
most likely to fail and designs test cases accordingly.
• Purpose: To find defects based on previous knowledge of the system or similar
systems. It complements other techniques by targeting areas that are prone to errors.
• Example: A tester might guess that an application’s login form will fail when users
input special characters in the username field. The tester then designs a test case to check
this scenario.
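For example, a sketch of error-guessing tests for the login scenario above; validate_username() and its rules are assumptions:

    def validate_username(name):
        # assumed rule: ASCII letters, digits, and underscores only, 3-20 characters
        return name.isascii() and name.replace("_", "").isalnum() and 3 <= len(name) <= 20

    assert validate_username("alice_01") is True
    # guessed failure points: injection-style and markup-style input
    assert validate_username("bob'; DROP TABLE users;--") is False
    assert validate_username("<script>") is False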
7. Random Testing
• Concept: This technique involves providing random inputs to the system and
observing its behavior. It is often used when other test techniques are not applicable or when
a wide range of random test cases is needed.
• Purpose: To discover unexpected bugs that might not be detected through more
structured testing methods.
• Example: Randomly inputting different data types, ranges, and edge cases into a
form to see if the system behaves unexpectedly or crashes.
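A small random-testing sketch: random strings are fed to a hypothetical parse_age() function and only broad properties are asserted (no crash, and any accepted value lies in a sane range):

    import random

    def parse_age(text):
        try:
            age = int(text)
        except ValueError:
            return None
        return age if 0 <= age <= 130 else None

    random.seed(0)  # reproducible run
    for _ in range(1000):
        s = "".join(random.choice("0123456789abc!- ") for _ in range(random.randint(0, 6)))
        result = parse_age(s)
        assert result is None or 0 <= result <= 130   # property-style check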
8. Functional Testing
• Concept: Functional testing involves testing the system based on its functional
specifications to ensure that the software performs all the required functions as expected.
• Purpose: To validate that the software behaves correctly according to the functional
requirements (e.g., login functionality, user registration).
• Example: Testing if a user can successfully log into an application with valid credentials and
receive an error message with invalid credentials.
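A sketch of a functional login test that checks only the expected behaviour; the user store and the login() implementation are stand-ins:

    USERS = {"alice": "s3cret"}   # assumed user store

    def login(username, password):
        if USERS.get(username) == password:
            return "welcome"
        return "invalid credentials"

    assert login("alice", "s3cret") == "welcome"                 # valid credentials
    assert login("alice", "wrong") == "invalid credentials"      # wrong password
    assert login("mallory", "s3cret") == "invalid credentials"   # unknown user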
6) System testing is a type of software testing that involves testing a complete, integrated software
application to ensure that it works as intended. It is typically performed after integration testing and
focuses on validating the overall functionality, performance, and behavior of the system as a whole.
During system testing, the system is tested in an environment that simulates real-world conditions to
ensure that all components of the system interact correctly.
Types of System Testing:
1. Functional Testing:
• Purpose: Validates that the system’s functionalities align with the specified
requirements.
• Focus: Ensures that each feature of the system works as expected (e.g., login,
registration, payment processing).
• Example: Testing whether the “Submit” button on a form works correctly by
submitting data and verifying the response.
2. Non-Functional Testing:
• Purpose: Assesses the non-functional aspects of the system, such as
performance, scalability, and security.
• Focus: Testing aspects such as load capacity, responsiveness, and security
measures.
• Example: Stress testing to see how the system performs under heavy user load.
3. Performance Testing:
• Purpose: Evaluates how the system performs under various conditions, such as
normal load, peak load, and stress.
• Focus: Measures the system’s speed, scalability, and stability.
• Types of Performance Testing:
• Load Testing: Tests the system under expected user load.
• Stress Testing: Tests the system beyond its maximum load to identify breaking
points.
• Spike Testing: Tests the system’s response to sudden, unexpected load
increases.
• Scalability Testing: Tests how well the system can handle increased load over
time.
• Example: Testing an e-commerce website by simulating thousands of
simultaneous users to check if it can handle the traffic.
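A very rough load-testing sketch (the URL and number of simulated users are placeholders; real load tests usually rely on dedicated tools): many simulated users hit the site concurrently and the error rate and average response time are reported:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://shop.example.com/"   # hypothetical system under test
    USERS = 100                         # simulated concurrent users

    def one_request(_):
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                ok = resp.status == 200
        except Exception:
            ok = False
        return ok, time.time() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_request, range(USERS)))

    errors = sum(1 for ok, _ in results if not ok)
    avg = sum(t for _, t in results) / len(results)
    print(f"errors: {errors}/{USERS}, average response time: {avg:.2f}s")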
4. Security Testing:
• Purpose: Ensures that the system is secure and protects sensitive data from
unauthorized access, attacks, or breaches.
• Focus: Identifies vulnerabilities, security flaws, and potential threats to the
system.
• Example: Testing for SQL injection vulnerabilities, cross-site scripting (XSS), and
secure user authentication.
5. Usability Testing:
• Purpose: Assesses the system’s user interface (UI) and overall user experience
(UX).
• Focus: Ensures that the system is user-friendly, intuitive, and meets user
expectations.
• Example: Having users navigate the website and provide feedback on the
design, layout, and ease of use.
6. Compatibility Testing:
• Purpose: Verifies that the system works across different environments,
platforms, devices, and configurations.
• Focus: Tests the system’s compatibility with various operating systems,
browsers, mobile devices, and hardware configurations.
• Example: Checking a website’s compatibility across different browsers (e.g.,
Chrome, Firefox, Safari) and mobile devices.
7. Regression Testing:
• Purpose: Ensures that recent code changes have not negatively affected the
existing functionality of the system.
• Focus: Validates that previously tested features still function correctly after
modifications or updates to the system.
• Example: After a bug fix is applied to a login feature, regression testing ensures
that the login functionality is still working correctly.
8. Recovery Testing:
• Purpose: Tests the system’s ability to recover from crashes, hardware failures,
or other unexpected events.
• Focus: Verifies that the system can restore data, functionality, and state after a
failure.
• Example: Simulating a server crash and ensuring that the system can recover
and continue functioning normally.
9. Installation Testing:
• Purpose: Verifies that the software can be successfully installed and uninstalled
in different environments.
• Focus: Checks for installation issues, compatibility with operating systems, and
correct configuration.
• Example: Testing the installation process on different operating systems (e.g.,
Windows, macOS) to ensure the software installs correctly without errors.
10. Compliance Testing:
• Purpose: Ensures that the system adheres to legal, regulatory, and industry-specific
standards and policies.
• Focus: Verifies that the system complies with relevant laws, standards, and
guidelines.
• Example: Ensuring that a financial application complies with data protection
regulations (e.g., GDPR) or industry standards (e.g., PCI-DSS for payment systems).
11. Localization and Internationalization Testing:
• Purpose: Verifies that the system works in different languages and regions.
• Focus: Tests if the software can handle multiple languages, currencies, and
regional settings (e.g., date formats, address fields).
• Example: Ensuring that a global e-commerce site displays prices in local
currencies and languages.
12. Smoke Testing:
• Purpose: Tests the basic functionality of the system to ensure that the most
critical parts of the software are working.
• Focus: A quick check to see if the software build is stable enough to proceed
with further testing.
• Example: Verifying that an application launches, logs in, and navigates to the
main screen without crashing.
13. Sanity Testing:
• Purpose: A focused check to verify that a specific functionality or bug fix works
correctly after a build or update.
• Focus: Ensures that specific functionality works as expected without conducting
full-scale testing.
• Example: After fixing a bug in the search function, sanity testing confirms that the search
feature now works correctly.