
Software Testing UNIT-4

Automated software testing utilizes specialized tools to execute tests and compare actual results with expected outcomes, primarily focusing on regression tests. The document outlines criteria for tool selection, types of automation frameworks, popular tools for functional and non-functional automation, and the classification of testing tools. Additionally, it discusses static analysis, test case generation, and the importance of test strategies in software testing.


UNIT-4

What is Automated Software Testing?

Software test automation makes use of specialized tools to control the execution of tests and to compare the actual results against the expected results. Usually regression tests, which involve repetitive actions, are automated.

Testing tools not only help us to perform regression tests but also help us to automate test data setup and generation, product installation, GUI interaction, defect logging, etc.

Criteria for Tool Selection:

For automating any application, the following parameters should be considered.

● Data driven capabilities
● Debugging and logging capabilities
● Platform independence
● Extensibility & customizability
● E-mail notifications
● Version control friendly
● Support for unattended test runs

Types of Frameworks:

Typically, four test automation frameworks are adopted while automating applications:

● Data-Driven Automation Framework
● Keyword-Driven Automation Framework
● Modular Automation Framework
● Hybrid Automation Framework
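As an illustration, the data-driven framework (the first in the list above) keeps test data separate from test logic, so one routine runs against many data rows. The `validate_login` function below is a hypothetical system under test, invented only for this sketch; in practice the rows would typically be loaded from a CSV or Excel file.

```python
# Minimal sketch of a data-driven framework: one test routine is executed
# against many data rows. validate_login() is a hypothetical system under
# test used for illustration only.

def validate_login(username, password):
    # Hypothetical rule: both fields non-empty, password at least 8 chars.
    return bool(username) and len(password) >= 8

# Test data kept separate from test logic (e.g. loaded from CSV/Excel).
test_data = [
    {"username": "alice", "password": "s3cretpwd", "expected": True},
    {"username": "",      "password": "s3cretpwd", "expected": False},
    {"username": "bob",   "password": "short",     "expected": False},
]

def run_data_driven(rows):
    results = []
    for row in rows:
        actual = validate_login(row["username"], row["password"])
        results.append("PASS" if actual == row["expected"] else "FAIL")
    return results
```

Adding a new test then means adding a data row, not writing new code, which is the main appeal of this framework.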

Popular Tools that are used for Functional automation:

Product | Vendor | URL
Quick Test Professional | HP | www.hp.com/go/qtp
Rational Robot | IBM | http://www-03.ibm.com/software/products/us/en/robot/
Coded UI | Microsoft | http://msdn.microsoft.com/en-us/library/dd286726.aspx
Selenium | Open Source | http://docs.seleniumhq.org/
Auto IT | Open Source | http://www.autoitscript.com/site/

Popular Tools that are used for Non-Functional automation:

Product | Vendor | URL
Load Runner | HP | www.hp.com/go/LoadRunner
Jmeter | Apache | jmeter.apache.org/
Burp Suite | PortSwigger | http://portswigger.net/burp/
Acunetix | Acunetix | http://www.acunetix.com/

Testing Tools:

A tool, in a software testing context, can be defined as a product that supports one or more test activities, from planning and requirements through creating a build, test execution, defect logging and test analysis.
Classification of Tools

Tools can be classified based on several parameters. They include:

● The purpose of the tool
● The activities that are supported within the tool
● The type/level of testing it supports
● The kind of licensing (open source, freeware, commercial)
● The technology used

Types of Tools:

S.No. | Tool Type | Used for | Used by
1. | Test Management Tools | Managing, scheduling, defect logging, tracking and analysis of tests | Testers
2. | Configuration Management Tools | Implementation, execution, tracking changes | All team members
3. | Static Analysis Tools | Static testing | Developers
4. | Test Data Preparation Tools | Analysis and design, test data generation | Testers
5. | Test Execution Tools | Implementation, execution | Testers
6. | Test Comparators | Comparing expected and actual results | All team members
7. | Coverage Measurement Tools | Providing structural coverage | Developers
8. | Performance Testing Tools | Monitoring performance, response time | Testers
9. | Project Planning and Tracking Tools | Planning | Project managers
10. | Incident Management Tools | Managing the tests | Testers

Tools Implementation - process

● Analyze the problem carefully to identify strengths, weaknesses and opportunities.
● Note constraints such as budget, time and other requirements.
● Evaluate the options and shortlist the ones that meet the requirements.
● Develop a proof of concept that captures the pros and cons.
● Create a pilot project using the selected tool within a specified team.
● Roll out the tool phase-wise across the organization.

Static Code Analyzer

A tool that analyzes source code without executing the code. Static code
analyzers are designed to review bodies of source code (at the programming
language level) or compiled code (at the machine language level) to identify
poor coding practices. Static code analyzers provide feedback to developers
during the code development phase on security flaws that might be
introduced into code.

Static analysis, also called static code analysis, is a method of computer program
debugging that is done by examining the code without executing the program. The
process provides an understanding of the code structure and can help ensure that
the code adheres to industry standards. Static analysis is used in software
engineering by software development and quality assurance teams. Automated
tools can assist programmers and developers in carrying out static analysis. The
software will scan all code in a project to check for vulnerabilities while validating
the code.

Static analysis is generally good at finding coding issues such as:

● Programming errors
● Coding standard violations
● Undefined values
● Syntax violations
● Security vulnerabilities

The static analysis process is also useful for addressing weaknesses in source code
that could lead to buffer overflows -- a common software vulnerability.

How is static analysis done?

The static analysis process is relatively simple, as long as it's automated. Generally,
static analysis occurs before software testing in early development. In the DevOps
development practice, it will occur in the create phases.

Once the code is written, a static code analyzer should be run over it. The analyzer checks the code against defined coding rules, whether from standards or from custom predefined rules, and reports whether the code complies with them. Analyzers sometimes flag false positives, so it is important for someone to review the findings and dismiss them. Once false positives are waived, developers can begin to fix the genuine defects, generally starting with the most critical ones. Once the code issues are resolved, the code can move on to testing through execution.
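As a toy illustration of the idea (not a real analyzer), the sketch below uses Python's `ast` module to inspect source code without executing it and flag calls to `eval`, a common poor-coding-practice rule. Real tools such as pylint or flake8 apply many such rules and report line numbers the same way.

```python
import ast

# Toy static check: parse Python source without executing it and flag
# calls to eval(), a commonly banned construct. This is a sketch of the
# concept only, not a substitute for a real static analyzer.

def find_eval_calls(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)   # report the offending line
    return findings

code_under_review = """\
x = input()
y = eval(x)
print(y)
"""
```

Here `find_eval_calls(code_under_review)` reports line 2, the line containing the `eval` call, without ever running the reviewed code.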

Without having code testing tools, static analysis will take a lot of work, since
humans will have to review the code and figure out how it will behave in runtime
environments. Therefore, it's a good idea to find a tool that automates the process.
Getting rid of any lengthy processes will make for a more efficient work
environment.

Types of static analysis

There are several static analysis methods an organization could use, which include:

● Control analysis -- focuses on the control flow in a calling structure. For example, a control flow could be a process, function, method or subroutine.
● Data analysis -- makes sure defined data is properly used and that data objects operate properly.
● Fault/failure analysis -- analyzes faults and failures in model components.
● Interface analysis -- verifies simulations to check the code and makes sure the interface fits into the model and simulation.

In a broader sense, with less official categorization, static analysis can be broken into formal, cosmetic, design-properties, error-checking and predictive categories. Formal analysis asks whether the code is correct; cosmetic analysis asks whether the code conforms to style standards; design-properties analysis measures the level of complexity; error checking looks for code violations; and predictive analysis asks how the code will behave when run.
Benefits of using static analysis include:

● It can evaluate all the code in an application, increasing code quality.
● Automated tools are faster than manual code review.
● Paired with normal testing methods, static testing allows for deeper debugging of code.
● Automated tools are less prone to human error.
● It increases the likelihood of finding vulnerabilities in the code, increasing web or application security.
● It can be done in an offline development environment.

However, static analysis comes with some drawbacks. For example, organizations
should stay aware of the following:

● False positives can be detected.
● A tool might not indicate what the defect is if there is a defect in the code.
● Not all coding rules can always be followed, like rules that need external documentation.
● Static analysis may take more time than comparable methods.
● Static analysis can't detect how a function will execute.
● System and third-party libraries may not be able to be analyzed.
Static verification vs. dynamic verification

The principal advantage of static analysis is that it can reveal errors that do not manifest themselves until a disaster occurs weeks, months or years after release. Nevertheless, static analysis is only a first step in a comprehensive software quality-control regime. After static analysis has been done, dynamic analysis is often performed in an effort to uncover subtle defects or vulnerabilities. In computer terminology, static means fixed, while dynamic means capable of action and/or change. Dynamic analysis involves the testing and evaluation of a program based on execution. Static and dynamic analysis, considered together, are sometimes referred to as glass-box testing.

What is Test case?

A test case is a document, which has a set of test data, preconditions, expected
results and postconditions, developed for a particular test scenario in order to
verify compliance against a specific requirement.

A test case acts as the starting point for test execution; after applying a set of input values, the application has a definitive outcome and leaves the system at some end point, also known as the execution postcondition.

Typical Test Case Parameters:

● Test Case ID
● Test Scenario
● Test Case Description
● Test Steps
● Prerequisite
● Test Data
● Expected Result
● Test Parameters
● Actual Result
● Environment Information
● Comments

Example:

Let us say that we need to check an input field that can accept a maximum of 10 characters.

While developing the test cases for the above scenario, the test cases are documented the following way. In the below example, the first case is a PASS scenario while the second case is a FAIL.

Scenario | Test Step | Expected Result | Actual Outcome
Verify that the input field can accept a maximum of 10 characters | Login to the application and key in 10 characters | Application should be able to accept all 10 characters | Application accepts all 10 characters
Verify that the input field can accept a maximum of 11 characters | Login to the application and key in 11 characters | Application should NOT accept all 11 characters | Application accepts all 10 characters

If the expected result doesn't match the actual result, we log a defect. The defect goes through the defect life cycle, and the testers retest it after the fix.
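The two documented cases above can be automated directly. In this sketch, `field_accepts` is a hypothetical stand-in for the application's input field, assumed here to enforce the 10-character limit correctly.

```python
# The two test cases above as automated checks. field_accepts() is a
# hypothetical stand-in for the application's input field, assumed to
# enforce the 10-character maximum.

MAX_LEN = 10

def field_accepts(text):
    return len(text) <= MAX_LEN

def run_case(input_text, expect_accepted):
    # Compare actual behavior with the documented expected result.
    actual = field_accepts(input_text)
    return "PASS" if actual == expect_accepted else "FAIL"
```

Against this correct stand-in, `run_case("a" * 10, True)` and `run_case("a" * 11, False)` both return "PASS"; against a faulty application that accepted all 11 characters, the second case would return "FAIL" and a defect would be logged.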

Test case Design Technique:

Following are the typical design techniques in software engineering:

1. Deriving test cases directly from a requirement specification (black-box test design techniques). The techniques include:

· Boundary Value Analysis (BVA)
· Equivalence Partitioning (EP)
· Decision Table Testing
· State Transition Diagrams
· Use Case Testing

2. Deriving test cases directly from the structure of a component or system:

· Statement Coverage
· Branch Coverage
· Path Coverage
· LCSAJ Testing

3. Deriving test cases based on the tester's experience with similar systems or the tester's intuition:

· Error Guessing
· Exploratory Testing
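Two of the black-box techniques above can be made concrete. For a field assumed to accept between 1 and 10 characters, Boundary Value Analysis picks values at and around each edge, while Equivalence Partitioning picks one representative per partition; both functions below are illustrative sketches.

```python
# Sketch of Boundary Value Analysis and Equivalence Partitioning for a
# field assumed to accept between lo and hi characters (here 1..10).

def boundary_values(lo, hi):
    # BVA: test just below, at, and just above each boundary.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo, hi):
    # EP: one representative value per partition.
    return {
        "below_range": lo - 1,       # invalid partition
        "in_range": (lo + hi) // 2,  # valid partition
        "above_range": hi + 1,       # invalid partition
    }
```

For the 1-10 field, `boundary_values(1, 10)` yields lengths [0, 1, 2, 9, 10, 11]: instead of trying every possible length, six boundary values plus one representative per partition give strong coverage with few cases.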

What is Test Strategy?

Test strategy, also known as test approach, defines how testing will be carried out. A test approach has two techniques:

· Proactive - an approach in which the test design process is initiated as early as possible in order to find and fix defects before the build is created.
· Reactive - an approach in which testing is not started until after design and coding are completed.

Different Test approaches:

There are many strategies that a project can adopt depending on the context; some of them are:

· Dynamic and heuristic approaches
· Consultative approaches
· Model-based approaches that use statistical information about failure rates
· Risk-based approaches, where the entire development takes place based on the risk
· Methodical approaches, which are based on failures
· Standard-compliant approaches specified by industry-specific standards

Factors to be considered:

· Risks of the product, risk of failure, and the environment and the company
· Expertise and experience of the people with the proposed tools and techniques
· Regulatory and legal aspects, such as external and internal regulations of the development process
· The nature of the product and the domain


What is a Test Case Generator?

A Test Case Generator is a tool in software testing that creates test cases automatically. These cases help check whether software works correctly. Here's more about it:

● Automates Test Creation: It makes test cases without manual work. This
saves time and effort.
● Improves Test Coverage: It covers more parts of the software. This finds
more bugs.
● Reduces Human Error: Less manual work means fewer mistakes in tests.
● Adapts to Changes: It can quickly make new tests when software changes.
● Efficient Testing: It speeds up the testing process. This helps software reach
users faster.

Test Case Generators are vital for quality software. They make testing efficient and
thorough. This is key for any software project.

Why Use a Test Case Generator?

Using a Test Case Generator can transform software testing. It brings efficiency
and accuracy. Here's why it's a smart choice:

1. Time and Resource Saving: A Test Case Generator saves time. It creates test
cases quickly. This means testers spend less time on routine tasks. They can
focus on more complex issues. It also reduces the need for a large testing
team. This saves resources.
2. Consistent Quality: Manual test case creation can vary in quality. A Test
Case Generator offers consistency. Every test case follows the same high
standards. This means reliable results every time. Consistent quality is
crucial for trustworthy software.
3. Comprehensive Coverage: Manual testing might miss some areas. A Test
Case Generator covers more ground. It checks every part of the software.
This thorough approach finds more bugs. It ensures the software is robust
and user-friendly.
4. Adaptability to Changes: Software often changes during development. A
Test Case Generator adapts quickly. It can create new test cases to match
these changes. This keeps the testing process up-to-date. It ensures the final
product meets all requirements.

How Does a Test Case Generator Work?

Understanding how a Test Case Generator works is key in software testing. It's a tool that automates and simplifies the process. Here's a breakdown:

1. Input Analysis: The generator starts by analyzing inputs. These inputs are
software requirements and specifications. It understands what the software
should do. This step is crucial. It sets the foundation for relevant test cases.
2. Test Case Design: Based on the analysis, the generator designs test cases. It
uses algorithms to create scenarios. These scenarios test different aspects of
the software. The goal is to cover all functionalities. This step ensures
comprehensive testing.
3. Output Generation: After designing, the generator produces test cases. These
are detailed instructions for testing. They include steps to follow, expected
outcomes, and test data. This output is ready for testers to use. It makes their
work easier and more focused.
4. Maintenance and Updates: Software changes over time. The Test Case
Generator adapts to these changes. It updates test cases to match new
requirements. This keeps the testing process relevant. It ensures ongoing
quality control.
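The input-analysis, design and output steps above can be sketched in miniature. The specification format and names below are invented for illustration; a real generator would parse formal requirement specifications.

```python
# Miniature sketch of the generator's three steps: analyze a (toy)
# specification, design one valid and one invalid scenario per feature,
# and emit test cases. The spec format here is invented for illustration.

spec = {
    "login": {"valid": "alice/secret1", "invalid": "alice/"},
    "search": {"valid": "testing", "invalid": ""},
}

def generate_test_cases(specification):
    cases = []
    for feature, inputs in specification.items():   # 1. input analysis
        for kind in ("valid", "invalid"):           # 2. test case design
            cases.append({                          # 3. output generation
                "id": "TC_%s_%s" % (feature, kind),
                "step": "Exercise %r with input %r" % (feature, inputs[kind]),
                "expected": "accepted" if kind == "valid" else "rejected",
            })
    return cases
```

When the specification changes, rerunning the generator regenerates all cases, which mirrors the maintenance-and-updates step above.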

How to Create a Test Case With This Generator?

Creating a test case with the Onethread Generator is straightforward. Here's how to
do it:

1. Input Software Specifications: Start by entering your software's details. Include its functions and requirements. This information guides the generator. It helps create relevant test cases.
2. Select Test Criteria: Choose what you want to test. This could be
performance, functionality, or usability. The generator uses these criteria to
focus the test cases.
3. Generate Test Cases: Click the generate button. The Onethread Generator
will create test cases based on your inputs. These cases will cover different
scenarios and aspects of your software.
4. Review and Customize: Once generated, review the test cases. You can
customize them if needed. This ensures they match your specific testing
needs.
5. Export and Implement: Finally, export the test cases. They are now ready for
your testing team to use. Implement them in your testing process to check
your software's performance.

By following these steps, you can efficiently create comprehensive test cases with
the Onethread Generator. This tool simplifies and streamlines the test case creation
process.

Importance of Test Case

Test cases are vital in software development. They ensure the software works well.
Here's why they are important:

● Ensures Software Quality: Test cases check if software meets requirements. They find bugs and issues. This helps in delivering high-quality software.
● Facilitates Thorough Testing: They cover different scenarios. This includes
normal, boundary, and error conditions. This thorough testing ensures
software reliability.
● Aids in Regression Testing: When software changes, test cases help check if
new changes break anything. This is key for maintaining software over time.
● Improves User Experience: By finding and fixing issues, test cases help in
creating user-friendly software. This leads to satisfied users.
● Documentation and Knowledge Transfer: Test cases serve as documentation.
They help new team members understand the software. This is useful for
training and knowledge sharing.

Advantages

● Efficiency and time savings: The TestCase Generator from MicroNova automates the process of generating test cases, thereby increasing the efficiency of safeguarding electronic control units many times over. More results are available more quickly.
● Quality assurance: Using the TCG for all test runs allows their results to be checked uniformly, and their progression to be reliably tracked. Standardizing test specifications will ensure consistently high quality in future.
● Flexibility and performance: The generator approach allows test departments to respond quickly to the rocketing rate of change in functional development.
● Simple traceability: It is sufficient to check the test specification. An additional review of the implementation is no longer necessary.
● Optimum resource utilization: Thanks to automation, test engineers can spend more time on particularly complex cases.
● Quick commissioning with low costs: The TCG can be seamlessly integrated into EXAM, so no additional, cost-intensive interfaces are required.

How it works

Central tool: The TestCase Generator understands the test specifications as a sequence of commands. Mapping these commands to EXAM operations automatically generates the test cases. This saves a lot of extra implementation cost for testing that accompanies development. Commands are mapped centrally in the EXAM model. Those writing test specifications set out clear requirements for new test specs, while existing ones can be adapted with minimal effort. The TCG then creates a full, executable test case as a sequence diagram from the user's technical specifications in just a few seconds.

The TestCase Generator uses synchronized test cases from a specifications tool as the basis. Test procedures and parameters are maintained outside EXAM in requirements management. Changes are made centrally in the respective EXAM library or the associated "TestSpec". Once made, the changes can then be easily applied to all test cases.

The TCG forms a functional enhancement to EXAM: it works as a plug-in and supports many convenient EXAM functions, since it is based on well-known concepts such as Shortnames or TestCaseStates.

There are multiple techniques available for generating test cases:

● Goal-oriented approach – The purpose of the goal-oriented test case generation approach is to cover a particular section, statement or function. Here the execution path is not important; testing the goal is the primary objective.
● Random approach – The random approach generates test cases based on
assumptions of errors and system faults.
● Specification-based technique – This model generates test cases based on the
formal requirement specifications.
● Source-code-based technique – The source-code-based case generation
approach follows a control flow path to be tested, and the test cases are
generated accordingly. It tests the execution paths.
● Sketch-diagram-based approach – This type of case generation approach
follows the Unified Modeling Language (UML) diagram to formulate the
test cases.

Apart from these test case generation approaches, there are multiple other
processes available in the testing world. But whatever the approach, a proper test
case generation process is one of the most critical factors for successful project
implementation.

What is a Capture/Replay Tool?

GUI capture/replay tools have been developed for testing applications through their graphical user interfaces. Using a capture and replay tool, testers can run an application and record the interaction between a user and the application. The script records all user actions, including mouse movements, and the tool can then automatically replay the exact same interactive session any number of times without requiring human intervention. This supports fully automatic regression testing of graphical user interfaces.
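The capture/replay mechanism can be modeled without a real GUI: in capture mode each user action is recorded as an event and performed; in replay mode the recorded events are re-applied automatically. All classes below are invented for this sketch; real tools record GUI events and object IDs instead.

```python
# Toy model of capture and replay, with a plain object standing in for a
# GUI widget. Capture records (action, args) events while performing
# them; replay re-applies the same sequence with no human intervention.

class TextField:
    def __init__(self):
        self.value = ""

    def type_text(self, text):
        self.value += text

    def clear(self):
        self.value = ""

class Recorder:
    def __init__(self, target):
        self.target = target
        self.events = []

    def capture(self, action, *args):
        self.events.append((action, args))    # record the user action
        getattr(self.target, action)(*args)   # and perform it live

    def replay(self, fresh_target):
        for action, args in self.events:      # re-run the exact session
            getattr(fresh_target, action)(*args)
```

After capturing `type_text("hello")`, `clear()`, `type_text("world")` on one field, replaying against a fresh field leaves it in the same final state ("world"); repeating a recorded session like this is the basis of automated GUI regression runs.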

Tools for GUI Capture/Replay:

Product | Vendor | URL
QF-Test | QFS | www.qfs.de/en/qftest/
SWTBot | Open Source | http://wiki.eclipse.org/SWTBot/UsersGuide
GUIdancer and Jubula | BREDEX | http://testing.bredex.de/
TPTP GUI Recorder | Eclipse | http://www.eclipse.org/tptp/

Capture or Replay Tool In Testing


We can understand the concept of capture and replay in the context of GUI capture and replay tools. These tools test an application through its graphical interface. The capture and replay tool simply records the user's interactions with the system. Let us have a look at the different types of tools that offer a capture and replay facility.

GUI capture/replay tools:

● QF-Test - This is an automation testing tool meant for testing Java and web GUI applications. It is intended for both testers and developers. The benefits offered by this tool are a robust framework, cross-platform operation, and multi-browser support.
○ It allows regression and load tests for Java Swing, Eclipse plug-ins and RCP (Rich Client Platform) applications, applets, WebStart and JavaFX on Windows and Linux/Unix.
○ As it is a cross-browser tool, one can test static and dynamic web pages (HTML5/Ajax).
○ Its robust platform offers support for complex UI components.
○ Its modular structure allows the development of advanced test cases and efficient teamwork.
○ Configurable reports and documentation.
○ User-friendly, easy to adopt for both testers and developers.
○ Well-written documentation, with quick and competent support from the authors whenever required.

● SWTBot - An open-source automation tool for testing Eclipse-based applications. It offers APIs that are quite simple to read and write, thereby reducing the complexities involved in working with SWT (Standard Widget Toolkit) and Eclipse. It can record and play back tests, integrating them with Eclipse. SWTBot has a plugin for GEF/GMF diagrams and editors, which helps to manipulate SWT widgets. To configure SWTBot, we need to add the org.eclipse.swtbot.eclipse.gef.finder plugin.

● GUIdancer and Jubula - An automation testing tool that facilitates functional testing of various GUI applications. It does not require much coding and is easily maintainable. Jubula is a product of BREDEX, and the former version of Jubula was GUIdancer; they are functionally the same. A few aspects of Jubula can be noted as follows:
○ No source code is required to implement the test scripts; it is complete black-box testing.
○ Tests are written by experts from the user perspective.
○ It builds up communication among the developers, customers and testers.
○ It helps to find and fix errors in a timely manner.
○ It provides automatic screenshots on errors.

● Jubula offers support for a variety of platforms such as Swing, JavaFX, HTML and iOS applications, is platform independent on Windows and Linux operating systems, can be accessed across a multi-user database, and is also portable due to its XML format.

● TPTP GUI Recorder - This tool is again an automation framework used to test GUI applications. It enables faster creation of GUI functional tests that are simple to maintain. The automated GUI recorder allows recording of the execution of events, which can be played back later as part of functional tests. This method of testing ensures that UI components conform to the expected ones.

● Abbot - This is an automation GUI testing framework that requires one to write Java-based unit test cases. It has a script editor named Costello that allows one to write manual test scripts, and also facilitates recording of test execution, that is, capturing the sequence of events that took place while running the application.

● Jacareto - It facilitates the creation of animated demos of UI components. It has a wide variety of features, including highlighting specific components in the GUI, replaying action-specific semantic events, etc. Jacareto has two front-end applications, CleverPHL and Picorder. The former can record, edit and replay user interaction scripts, and the latter is a command-line record and replay tool.

● Pounder - This is a completely automated testing tool for GUI applications. Unlike Abbot and Jacareto, it does not store interaction scripts as XML files, and it is not intended for manual test script writers.

● Marathon - It also has record and replay facilities; its interaction scripts are stored in Python.

● JFCUnit - It is an extension to the JUnit framework to enable GUI testing. It allows developers to write GUI Java tests and test case methods. A recording feature has been added in recent versions.

Capture and replay involves four essential steps:

● Capture mode records user interactions with user interface elements. The result is a script that documents both the test process and the test parameters.
● The scripts, which are usually defined and editable in XML formats, can be used to describe simple test scenarios or complex test suites.
● During test evaluation, it must be verified whether previously defined events or errors occur. Output formats, database contents or GUI states are checked and the results documented accordingly.
● Test scenarios can be easily reproduced by repeatedly replaying the previously recorded scripts. Individual elements of the user interface are also recognized if their position or shape has changed. This works because capture mode not only saves the behavior of the mouse pointer but also records the corresponding object ID at the same time.

Capture and replay tools were developed to test applications against graphical user interfaces. With a capture and replay tool, an application can be tested such that an interactive session can be repeated any number of times without human intervention. This saves time and effort while providing valuable insights for further application development.


What is Stress Testing?

Stress testing is a non-functional testing technique that is performed as part of performance testing. During stress testing, the system is monitored after being subjected to overload, to ensure that it can sustain the stress.

The recovery of the system from such a phase (after stress) is very critical, as such conditions are highly likely to occur in a production environment.

Reasons for conducting Stress Testing:

· It allows the test team to monitor system performance during failures.
· To verify whether the system has saved the data before crashing or not.
· To verify whether the system prints meaningful error messages while crashing, or prints random exceptions.
· To verify that unexpected failures do not cause security issues.

Stress Testing - Scenarios:


· Monitor the system behavior when the maximum number of users are logged in at the same time.
· All users performing critical operations at the same time.
· All users accessing the same file at the same time.
· Hardware issues, such as the database server going down or some of the servers in a server farm crashing.
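The first scenarios above, where many users perform the same operation simultaneously, can be simulated with threads. A minimal sketch, in which `critical_operation` is a hypothetical stand-in for the real system call:

```python
# Sketch: simulate many users performing the same critical operation at once.
# `critical_operation` is a hypothetical stand-in for the real system call.
import threading

results = []
lock = threading.Lock()

def critical_operation(user_id):
    return f"user-{user_id}: ok"

def simulated_user(user_id):
    outcome = critical_operation(user_id)
    with lock:                       # protect the shared results list
        results.append(outcome)

threads = [threading.Thread(target=simulated_user, args=(i,))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} of 100 simulated users completed")
```

Real stress tools (JMeter, LoadNinja, etc.) scale this idea to thousands of virtual users across machines.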
Characteristics of Stress Testing

1. Stress testing is a method of analyzing the system behavior subsequent to a


failure.
2. Stress testing makes sure that the system works again after something goes
wrong.
3. It verifies whether the system functions properly under abnormal
circumstances.
4. This feature ensures the display of appropriate error messages during periods
of stress on the system.
5. It verifies that unanticipated failures do not result in security concerns.
6. It verifies whether the system has saved the data prior to crashing or not.

Importance Of Stress Testing

Stress testing is an important part of the software testing process. It assesses the system's capacity to handle a substantial workload and identifies any potential issues that might arise under demanding conditions.
Typically, stress tests are conducted on applications or systems in order to determine their performance levels under various loading scenarios. This helps to identify problems and areas for improvement before release into production.
Understanding the need for stress testing is essential for maintaining high-quality software products, as it provides valuable insight into an application's performance in challenging environments.

Types Of Stress Testing


Distributed Stress Testing
Stress testing with distributed components simulates heavy demands on a system by spreading the strain across multiple devices. In a distributed environment, it helps evaluate the system's performance and scalability.

Application Stress Testing


Application stress testing checks how well an application or piece of software works when it is under a lot of stress. The objective is to identify any bottlenecks, resource limitations, or performance issues that may arise within the application when it is subjected to substantial loads.

Transactional Stress Testing


Transactional stress testing involves simulating high volumes of transactions or requests on a system in order to evaluate its capacity to manage the load effectively. It tests the system's response time, concurrency, transaction-processing abilities, and data consistency during heavy transactional activity.

Systemic Stress Testing


Systemic stress testing involves putting the whole system, including the hardware,
software, and network parts, under very stressful conditions. The objective is to
identify any deficiencies or malfunctions in the overall system architecture,
infrastructure, or communication channels when subjected to heavy loads or
adverse conditions.

Exploratory Stress Testing


The exploratory approach to stress testing involves exploring and experimenting
with various stress scenarios that were not predetermined. The tester applies
different stress types, load levels, or environmental conditions to observe how the
system responds, and uncovers any unexpected issues or vulnerabilities.

What is Stress Testing in Software Testing?

Stress Testing is a software testing technique that determines the robustness of


software by testing beyond the limits of normal operation. Stress testing is
particularly important for critical software but is used for all types of software.
Stress testing emphasizes robustness, availability, and error handling under a heavy
load rather than what is correct behavior under normal situations. Stress testing is
defined as a type of software testing that verifies the stability and reliability of the
system. This test particularly determines the system on its robustness and error
handling under extremely heavy load conditions. It even tests beyond the normal
operating point and analyzes how the system works under extreme conditions.
Stress testing is performed to ensure that the system would not crash under crunch
situations. Stress testing is also known as Endurance Testing or Torture Testing.

Characteristics of Stress Testing:

1. Identification of Risk: Stress testing’s main objective is to locate and


evaluate a system’s possible hazards and weaknesses.
2. Quantitative and Qualitative Analysis: While numerical data are crucial,
it’s also critical to comprehend the qualitative characteristics of the
system’s response and potential weak points.
3. Variable Parameters: Stress testing includes changing variables such as interest rates, market conditions, transaction volumes, and outside influences that could have an impact on the system.
4. Cross-Functional Involvement: Many departments within an organization
must work together and participate in stress testing. This cross-functional
strategy makes sure that the stress testing procedure benefits from a
variety of viewpoints and specialties.
5. Open and Honest Communication: Stress testing necessitates open and
honest communication regarding the goal, approach, and outcomes of the
testing procedure.

Need For Stress Testing:

● To accommodate the sudden surges in traffic: It is important to perform


stress testing to accommodate abnormal traffic spikes. For example,
when there is a sale announcement on the e-commerce website there is a
sudden increase in traffic. Failure to accommodate such needs may lead
to a loss of revenue and reputation.
● Display error messages in stress conditions: Stress testing is important to check whether the system is capable of displaying appropriate error messages when it is under stress conditions.
● The system works under abnormal conditions: Stress testing checks
whether the system can continue to function in abnormal conditions.
● Prepared for stress conditions: Stress testing helps to make sure there are
sufficient contingency plans in case of sudden failure due to stress
conditions. It is better to be prepared for extreme conditions by executing
stress testing.

Purpose of Stress Testing:

● Analyze the behavior of the application after failure: The purpose of


stress testing is to analyze the behavior of the application after failure and
the software should display the appropriate error messages while it is
under extreme conditions.
● System recovers after failure: Stress testing aims to make sure that there
are plans for recovering the system to the working state so that the
system recovers after failure.
● Uncover Hardware issues: Stress testing helps to uncover hardware
issues and data corruption issues.
● Uncover Security Weakness: Stress testing helps to uncover the security
vulnerabilities that may enter into the system during the constant peak
load and compromise the system.
● Ensures data integrity: Stress testing helps to determine the application’s
data integrity throughout the extreme load, which means that the data
should be in a dependable state even after a failure.

Stress Testing Process:

The stress testing process is divided into 5 steps:

1. Planning the stress test: This step involves gathering the system data,
analyzing the system, and defining the stress test goals.
2. Create Automation Scripts: This step involves creating the stress testing
automation scripts and generating the test data for the stress test
scenarios.
3. Script Execution: This step involves running the stress test automation
scripts and storing the stress test results.
4. Result Analysis: This phase involves analyzing stress test results and
identifying the bottlenecks.
5. Tweaking and Optimization: This step involves fine-tuning the system and optimizing the code with the goal of meeting the desired benchmarks.
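Steps 2 to 4 above can be sketched as a small script; `system_under_test` is a hypothetical placeholder for the real system call:

```python
# Sketch of steps 2-4 of the stress testing process: create a stress
# script, execute it, and analyze the collected results.
# `system_under_test` is a hypothetical placeholder for the real system.
import statistics
import time

def system_under_test(payload):
    time.sleep(0.001)                # pretend each request costs ~1 ms
    return len(payload)

def run_stress_test(requests=200):
    """Step 3: script execution - fire requests and record response times."""
    timings = []
    for i in range(requests):
        start = time.perf_counter()
        system_under_test(f"request-{i}")
        timings.append(time.perf_counter() - start)
    return timings

timings = run_stress_test()

# Step 4: result analysis - average and worst-case response times reveal
# whether any requests hit a bottleneck.
print(f"avg: {statistics.mean(timings) * 1000:.2f} ms, "
      f"max: {max(timings) * 1000:.2f} ms")
```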
(Figure: the stress testing process.)

Types of Stress Testing:

1. Server-client Stress Testing: Server-client stress testing also known as


distributed stress testing is carried out across all clients from the server.
2. Product Stress Testing: Product stress testing concentrates on discovering
defects related to data locking and blocking, network issues, and
performance congestion in a software product.
3. Transactional Stress Testing: Transaction stress testing is performed on
one or more transactions between two or more applications. It is carried
out for fine-tuning and optimizing the system.
4. Systematic Stress Testing: Systematic stress testing is integrated testing
that is used to perform tests across multiple systems running on the same
server. It is used to discover defects where one application data blocks
another application.
5. Analytical Stress Testing: Analytical or exploratory stress testing is
performed to test the system with abnormal parameters or conditions that
are unlikely to happen in a real scenario. It is carried out to find defects in
unusual scenarios, like a large number of users logged in at the same time or
a database going offline when it is accessed from a website.
6. Application Stress Testing: Application stress testing also known as
product stress testing is focused on identifying the performance
bottleneck, and network issues in a software product.

Stress Testing Tools:

1. JMeter: Apache JMeter is an open-source, pure Java-based stress testing tool used to stress test websites. It is an Apache project and can be used for load testing to analyze and measure the performance of a variety of services.
2. LoadNinja: LoadNinja is a stress testing tool developed by SmartBear
that enables users to develop codeless load tests, substitutes load
emulators with actual browsers, and helps to achieve high speed and
efficiency with browser-based metrics.
3. WebLoad: WebLoad is a stress testing tool that combines performance,
stability, and integrity as a single process for the verification of mobile
and web applications.
4. Neoload: Neoload is a powerful performance testing tool that simulates
large numbers of users and analyzes the server’s behavior. It is designed
for both mobile and web applications. Neoload supports API testing and
integrates with different CI/ CD applications.
5. SmartMeter: SmartMeter is a user-friendly tool that helps to create
simple tests without coding. It has a graphical user interface and has no
necessary plugins. This tool automatically generates advanced test
reports with complete and detailed test results.

Metrics of Stress Testing:

Metrics are used to evaluate the performance of the stress test and are usually collected at the end of the stress scripts or tests. Some of the metrics are given below.

1. Pages Per Second: The number of pages requested per second and the number of pages loaded per second.
2. Pages Retrieved: The average time taken to retrieve all information from a particular page.
3. Byte Retrieved: The average time taken to retrieve the first byte of information from the page.
4. Transaction Response Time: The average time taken to load or perform transactions between the applications.
5. Transactions per Second: The number of transactions completed per second, along with the number of failures that occurred.
6. Failure of Connection: The number of times clients faced connection failures.
7. Failure of System Attempts: The number of failed attempts in the system.
8. Rounds: The number of test or script conditions executed successfully by the clients, along with the number of rounds that failed.
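Several of these metrics can be computed directly from raw test results. A small sketch, assuming each result is recorded as a (succeeded, duration) pair:

```python
# Compute stress metrics from raw transaction results.
# Each result is a (succeeded, duration_in_seconds) pair - an assumed format.

txn_results = [(True, 0.12), (True, 0.30), (False, 1.10),
               (True, 0.25), (False, 0.90)]

success_times = [d for ok, d in txn_results if ok]
failure_count = sum(1 for ok, _ in txn_results if not ok)

total_time = sum(d for _, d in txn_results)
tps = len(success_times) / total_time             # transactions per second
avg_response = sum(success_times) / len(success_times)

print(f"transactions/sec: {tps:.2f}")
print(f"failures: {failure_count}")
print(f"avg transaction response time: {avg_response:.2f} s")
```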

Benefits of Stress Testing:

● Determines the behavior of the system: Stress testing determines the


behavior of the system after failure and ensures that the system recovers
quickly.
● Ensure failure does not cause security issues: Stress testing ensures that
system failure doesn’t cause security issues.
● Makes system function in every situation: Stress testing makes the
system work in normal as well as abnormal conditions in an appropriate
way.
● Improving Decision Making: Decision-making processes can benefit
from the insightful information that stress testing offers.
● Increasing Stakeholder confidence: Providing clear information about the
outcomes of stress tests helps boost stakeholder confidence.
Organizations that show a proactive approach to risk management are
valued by investors, customers, and other stakeholders, since it cultivates
credibility and confidence.

Limitations of Stress Testing:

1. Manual stress testing is complicated: The manual process of stress testing takes a long time to complete and is a complicated process.
2. Good scripting knowledge required: Good scripting knowledge for
implementing the script test cases for the particular tool is required.
3. Need for external resources: There is a need for external resources to
implement stress testing. It leads to an extra amount of resources and
time.
4. Licensed tools are costly: A licensed stress testing tool typically charges more than the average cost.
5. Additional tool required in case of open-source stress testing tool: In the
case of some open-source tools, there is a need for a load testing tool
additionally for setting up the stress testing environment.
6. Improper test script implementation results in wastage: If proper stress
scripts or test cases are not implemented then there will be a chance of
failure of some resources and wastage of time.

What is Client Server Testing?


Client-server testing is a testing approach designed to verify the accurate and
secure exchange of data between the client and server, guaranteeing that requests
and responses are synchronized correctly.
This testing also involves assessing the system’s performance, scalability, and
resource utilization to confirm its ability to handle various loads and user
interactions without compromising performance. Moreover, client-server testing
includes functional testing to ensure that the application’s features and
functionalities operate as expected on both the client and server sides.
What is the Main Goal of Client Server Testing?
The main goal of client-server testing is to ensure the robustness, availability, and
reliability of software applications or systems that are built upon a client-server
architecture.
Key objectives of client-server testing include:
● Functionality Validation: Confirm that the client and server components work together to deliver the intended features and functionalities without errors or inconsistencies.
● Data Integrity and Security: Ensure that data exchanged between the client and server is accurate, secure, and protected from unauthorized access or manipulation.
● Performance Assessment: Evaluate the responsiveness, scalability, and resource utilization of the system to guarantee that it can handle various loads and user interactions while maintaining acceptable performance levels.
● Fault Tolerance and Reliability: Test the system's ability to handle adverse conditions, such as network failures or server crashes, and to recover gracefully without data loss or service disruption.
● Compatibility: Verify that the client software is compatible with different server configurations, versions, and environments, ensuring a seamless user experience.
● Scalability and Load Handling: Determine how well the system scales to accommodate a growing number of clients and transactions while maintaining performance and stability.
● Security: Identify vulnerabilities and weaknesses in data transmission, authentication, and access control mechanisms to enhance security measures and protect user data.
Basic Characteristics of Client Server Testing Architecture

As the name suggests, the Client-Server application consists of two systems, one is
the Client and the other is the Server. Here, the client and server interact with each
other over the computer network.

In Client-Server application testing, the client sends requests to the server for
specific information and the server sends the response back to the client with the
requested information. Hence, this testing is also known as two-tier application
testing.

A few examples of client-server applications are email, web servers, FTP, etc.
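The request/response exchange described above can be demonstrated with Python's standard socket module: a server thread accepts one request and sends back a response.

```python
# Minimal client-server sketch: the client sends a request over the
# network, and the server sends back a response.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)          # receive the client's request
        conn.sendall(request.upper())      # respond with the requested data

listener = socket.socket()
listener.bind(("127.0.0.1", 0))            # port 0: pick any free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello server")
response = client.recv(1024)
client.close()
t.join()
listener.close()

print(response)    # b'HELLO SERVER'
```

Testing such a system means verifying both sides of this exchange: that the client's request is well-formed and that the server's response is correct and complete.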

(Figure: a typical client-server application, with clients sending requests to a server over a network.)

This type of testing is usually done for two-tier applications (usually developed for LAN). Here we have a front end and a back end.

Applications launched on the front end will have forms and reports for monitoring and manipulating data. For example, such applications may be developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc. The backend for these applications could be MS Access, SQL Server, Oracle, Sybase, MySQL, or Quadbase.

Client-server testing architecture is characterized by several fundamental attributes


that distinguish it from other software testing methodologies. These basic
characteristics include:
● Distributed Components: Client-server architecture consists of two primary

components—the client, which runs on the user’s device, and the server,
hosted on remote hardware. Testing involves evaluating the interaction
between these distributed components.
● Communication Over a Network: Clients and servers communicate over a

network, typically using protocols like HTTP, TCP/IP, or custom


communication protocols. Testing ensures reliable and efficient data
exchange between them.
● Service Independence: Each microservice is responsible for a specific

function or feature, making it crucial to verify that each service operates


independently and integrates seamlessly with other services. Testing
assesses how well these services work together to deliver end-to-end
functionality.
● Data Integrity: Ensuring the accuracy and integrity of data transmission

between the client and server is a key concern. Testing validates that the data
sent and received is correct, complete, and secure.
● Caching and Messaging: Depending on the architecture, client-server

systems may involve caching and message queues. Testing these


components ensures they function correctly and enhance system
performance.
● Asynchronous Processing: In some cases, client-server systems handle

asynchronous operations. Testing verifies that asynchronous tasks, such as


background processing, are executed reliably and efficiently.
● Continuous Integration/Continuous Deployment (CI/CD): Typical client-

server architectures are part of a CI/CD pipeline these days due to


automation. Testing ensures that the testing and deployment processes are
automated, consistent, and reliable.
● Monitoring and Observability: Effective monitoring and observability

solutions (e.g., Prometheus, Grafana) are crucial in any distributed system.


Testing these covers the integration and functionality of these tools for real-
time system insights.
Types Of Testing To Perform in Client-Server Test
Performing a comprehensive range of testing types is essential in client-server
testing to ensure the reliability, performance, and security of systems. Here are the
primary types of testing to consider:
Functional Testing

● Unit Testing: Evaluate individual client and server components to verify that

they perform their specific functions correctly.


● Integration Testing: Assess how well client and server components integrate

and work together, ensuring that data is exchanged accurately.


● System Testing: Conduct end-to-end testing to validate that the entire client-

server system functions as expected in real-world scenarios.


Performance Testing
● Load Testing: Measure the system’s response and behavior under varying

levels of user load to identify performance bottlenecks.


● Stress Testing: Push the system beyond its intended capacity to determine

breaking points and assess its ability to recover.


● Scalability Testing: Evaluate how well the system scales to accommodate

growing numbers of clients or data transactions while maintaining


performance.
Security Testing

● Authentication Testing: Verify that the authentication mechanisms (e.g.,

username/password, tokens) work correctly and securely.


● Authorization Testing: Ensure that users can only access resources and

functionalities they are permitted to, and unauthorized access is prevented.


● Encryption Testing: Confirm that data transmission between the client and

server is properly encrypted to protect sensitive information.


Data Integrity Testing

● Data Validation Testing: Check that data sent and received between the

client and server is accurate, complete, and follows validation rules.


● Data Corruption Testing: Assess how the system handles data corruption or

loss scenarios, ensuring data integrity is maintained.
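Data integrity checks of this kind often rely on checksums: the sender transmits a hash alongside the payload and the receiver recomputes it. A minimal sketch using the standard library:

```python
# Data integrity check via checksum: corruption in transit is detected
# because the recomputed hash no longer matches the transmitted one.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

payload = b"account=42;amount=100.00"
sent = (payload, checksum(payload))        # what the client transmits

# Receiver side: recompute the hash and compare.
received_payload, received_sum = sent
intact = checksum(received_payload) == received_sum

# A corrupted payload fails the comparison.
corrupted = b"account=42;amount=999.00"
corrupted_ok = checksum(corrupted) == received_sum

print(intact, corrupted_ok)    # True False
```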


Compatibility Testing

● Cross-Browser and Cross-Device Testing: Ensure that the client application

functions correctly on various browsers and devices.


● Cross-Platform Testing: Confirm that the client application is compatible

with different server configurations, versions, and operating systems.


Caching and Performance Optimization Testing
● Cache Testing: Assess the effectiveness of caching mechanisms (e.g., Redis)

to speed up data retrieval and reduce server load.


● Performance Optimization Testing: Identify areas for optimization to

enhance the system’s overall performance and responsiveness.


Fault Tolerance and Recovery Testing

● Failover Testing: Simulate server failures and network disruptions to

evaluate how well the system can recover without data loss.
● Redundancy Testing: Verify that redundant server setups work as intended

to ensure system availability.


Usability and User Experience Testing

● Usability Testing: Evaluate the client application’s user interface and overall

user experience to ensure it is user-friendly.


● Accessibility Testing: Confirm that the application is accessible to users

with disabilities, complying with accessibility standards.


Regression Testing

● Continuously run regression tests to ensure that new updates or changes do

not introduce unexpected issues or regressions in existing functionality.


Load Balancing and Network Testing

● Test load balancers to ensure they distribute client requests effectively and

maintain high availability.


● Assess network configurations to confirm they support secure and efficient

client-server communication.
Message Queue Testing
● Validate the reliability and efficiency of message queues (e.g., RabbitMQ) in

handling asynchronous communication between client and server


components.
Containerization and Orchestration Testing

● Verify the functionality and compatibility of containerized applications (e.g.,

Docker) and their orchestration configurations (e.g., Kubernetes).


Client-Server Testing Techniques
Client-server testing employs various techniques to help identify and address
potential issues. Here are some essential client-server testing techniques:
Manual Testing and Its Types

Manual testing is a fundamental testing approach where human testers execute test
cases without the use of automation tools or scripts. It relies on human intuition
and expertise to evaluate an application’s functionality, user interface, and overall
quality. Manual testing encompasses several types:
● Functional Testing: This involves testers verifying that an application’s

features and functionalities work as expected. For example, in a client-server


context, a functional test may involve manually validating that a user can log
in to a web application and access their account information without
encountering errors.
● Usability Testing: Usability testing assesses the user-friendliness of an

application. Testers, acting as end-users, interact with the client-side


interface and provide feedback on the ease of use, navigation, and overall
user experience. For instance, in a client-server environment, testers might
evaluate the intuitiveness of a web application’s menu structure.
● Exploratory Testing: Exploratory testing is an unscripted approach where

testers explore the application to uncover defects, usability issues, or


unexpected behavior. Testers use their creativity and domain knowledge to
simulate real-world user interactions and identify potential defects.
Automated Testing

Automated testing involves the use of specialized tools and scripts to perform
testing tasks automatically. It is especially valuable for repetitive or complex test
scenarios. Automated testing offers various advantages, such as consistency,
repeatability, and the ability to perform tests quickly and efficiently.
Example: In a client-server application, automated functional testing could involve
using a tool like Testsigma to create and execute test scripts that validate the
registration process. These scripts can simulate user actions such as filling out a
registration form, submitting it and verifying that the user’s data is correctly stored
on the server.
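As a tool-agnostic illustration of the same idea, the registration check can be scripted with plain assertions; `register` and the in-memory `users` store below are hypothetical stand-ins for the real application:

```python
# Automated functional test sketch for a registration flow.
# `register` and the in-memory `users` store are hypothetical stand-ins
# for the real client form submission and server-side storage.

users = {}

def register(username, password):
    if username in users or not password:
        return False
    users[username] = {"password": password}
    return True

# Automated test script: simulate form submission, then verify the
# "server" state, exactly as a recorded test script would.
assert register("alice", "s3cret") is True       # registration succeeds
assert "alice" in users                          # data stored server-side
assert register("alice", "other") is False       # duplicate is rejected
print("registration tests passed")
```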
Automated testing can save a great deal of testing time.

Black-Box Testing

Black-box testing focuses on evaluating an application’s functionality without


knowledge of its internal code or structure. Testers interact with the application’s
interface and assess how it responds to different inputs and conditions. This
approach ensures that testing is conducted from a user’s perspective, emphasizing
expected outcomes and behaviors.
Example: In a client-server context, black-box testing might involve validating that
a file-sharing application (client) can successfully upload files to a server and that
the server correctly stores and retrieves these files. Testers would not need to
examine the server’s code but instead, assess whether the system functions as
expected.
White-Box Testing

White-box testing is an approach where testers have access to the internal code and
structure of the application. They design tests to evaluate the correctness of the
code, its logic, and the execution paths within the application. White-box testing
aims to uncover defects in the code’s implementation.
Example: In a client-server application, white-box testing might involve code
reviews and static code analysis of the server-side components to identify potential
vulnerabilities or code quality issues. Testers may inspect the server’s code to
ensure that it handles user authentication securely and adheres to coding standards.
Mocking and Simulation

Mocking and simulation involve creating simulated or mock components to mimic


the behavior of real components or services that an application relies on. This is
useful for testing when the actual components or services are not readily available
or should not be used during testing.
Example: In a client-server application, testers can create a mock payment gateway
that simulates responses if the server depends on an external payment gateway
service that should not be invoked during testing. This allows the testing of
payment-related scenarios without using the actual payment service.
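A mock payment gateway of this kind can be built with Python's standard unittest.mock; the `checkout` function and the gateway's `charge` interface below are hypothetical:

```python
# Mocking sketch: replace a real payment gateway with a mock so that
# payment scenarios can be tested without invoking the actual service.
from unittest.mock import Mock

def checkout(gateway, amount):
    """Hypothetical server-side function depending on a payment gateway."""
    response = gateway.charge(amount)
    return "paid" if response["status"] == "approved" else "declined"

# The mock simulates the external service's responses.
gateway = Mock()
gateway.charge.return_value = {"status": "approved"}

result = checkout(gateway, 49.99)
print(result)                                     # paid
gateway.charge.assert_called_once_with(49.99)     # invoked correctly
```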
Network Testing

Network testing assesses how an application performs under various network


conditions, such as latency, packet loss, or limited bandwidth. It ensures that the
client and server components can maintain functionality and responsiveness in
real-world network environments.
Example: In a client-server setup, network testing might involve simulating high-
latency conditions to evaluate how well the application handles delayed responses.
Testers can use network emulation tools to introduce latency and assess the impact
on client-server communication.
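High-latency conditions can be approximated in a test by wrapping the transport in an artificial delay; the `send` function below is a hypothetical stand-in for real client-server communication:

```python
# Network testing sketch: inject artificial latency and measure its
# impact on the round-trip time of a client-server exchange.
import time

def send(message):                     # hypothetical transport call
    return f"ack:{message}"

def send_with_latency(message, delay=0.05):
    time.sleep(delay)                  # simulate a slow network link
    return send(message)

start = time.perf_counter()
reply = send_with_latency("ping", delay=0.05)
elapsed = time.perf_counter() - start

print(reply)
print(f"round trip took at least {elapsed:.3f}s")
```

Dedicated network emulation tools apply the same principle at the OS or network layer rather than in application code.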
Concurrency Testing

Concurrency testing evaluates how an application behaves when multiple users or


processes access it simultaneously. This type of testing helps identify
synchronization issues, race conditions, and potential conflicts that can occur in a
multi-user environment.
Example: In a client-server system, concurrency testing could involve simulating
concurrent logins from multiple clients to assess whether the server accurately
handles authentication requests without conflicts or unexpected behavior.
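The concurrent-login scenario can be sketched with a thread pool; `login` is a hypothetical stand-in for the real authentication call, with a lock guarding the shared session table so simultaneous logins do not conflict:

```python
# Concurrency testing sketch: many clients log in at the same time.
# `login` is a hypothetical stand-in for the real authentication call.
import threading
from concurrent.futures import ThreadPoolExecutor

sessions = {}
lock = threading.Lock()

def login(user):
    with lock:                 # avoid races on the shared session table
        sessions[user] = f"token-{user}"
    return sessions[user]

with ThreadPoolExecutor(max_workers=20) as pool:
    tokens = list(pool.map(login, [f"user{i}" for i in range(50)]))

# Every simulated client must end up with its own distinct session.
print(len(sessions), len(set(tokens)))    # 50 50
```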
Language Processors: Assembler, Compiler and Interpreter
The aim of compiler testing is to verify that the compiler implementation conforms to its specification, which is to generate object code that faithfully corresponds to the language semantics and syntax as specified in the language documentation. A compiler should be carefully verified before its release, since it will be used by many users. Finding an optimal and complete test suite for this process is often an exhaustive task. Various methods have been proposed for the generation of compiler test cases, and many papers have been published on testing compilers, most of which address classical programming languages. Researchers continue to assess and compare compiler testing techniques against selected criteria and to propose new research directions for testing compilers of modern programming languages.

What are Language Processors?


Compilers and interpreters translate programs written in high-level languages into machine code that a computer understands, while assemblers translate programs written in low-level (assembly) language into machine code. The compilation process consists of several stages, and tools are available to help programmers write error-free code.
Assembly language is machine-dependent, yet the mnemonics used to represent its instructions are not directly understandable by the machine, while high-level languages are machine-independent. A computer understands instructions only in machine code, i.e. in the form of 0s and 1s, and it is a tedious task to write a computer program directly in machine code. Programs are therefore written mostly in high-level languages like Java, C++, or Python, and this text is called source code. Source code cannot be executed directly by the computer and must be converted into machine language to be executed. Hence, the special translator system software that translates a program written in a high-level language into machine code is called a Language Processor, and the program after translation into machine code is called the object program (object code).

Types of Language Processors


The language processors can be any of the following three types:
1. Compiler

The language processor that reads the complete source program written in a high-level language as a whole in one go and translates it into an equivalent program in machine language is called a Compiler. Examples: C, C++, C#.
In a compiler, the source code is translated to object code successfully only if it is free of errors. When there are errors in the source code, the compiler reports them, with line numbers, at the end of the compilation. The errors must be removed before the compiler can successfully recompile the source code; once compiled, the object program can be executed any number of times without being translated again.

2. Assembler

The Assembler is used to translate a program written in assembly language into machine code. The source program, containing assembly language instructions, is the input of the assembler; the output generated by the assembler is the object code, or machine code, understandable by the computer. The assembler is essentially the first interface enabling humans to communicate with the machine, filling the gap between the two. Code written in assembly language consists of mnemonics (instructions) such as ADD, MUL, MUX, SUB, DIV, and MOV, and the assembler converts these mnemonics into binary code. These mnemonics also depend on the architecture of the machine.
For example, the architectures of the Intel 8085 and Intel 8086 are different.

3. Interpreter

An interpreter is a language processor that translates a single statement of the source program into machine code and executes it immediately, before moving on to the next line. If there is an error in a statement, the interpreter terminates its translation process at that statement and displays an error message, and it moves on to the next line only after the error has been removed. An interpreter thus directly executes instructions written in a programming or scripting language without previously converting them to object code or machine code, translating one line at a time and then executing it. Examples: Perl, Python, and MATLAB.

Difference Between Compiler and Interpreter

· Compiler: a program that converts the entire source code of a programming
language into executable machine code for a CPU. Interpreter: takes a source
program and runs it line by line, translating each line as it comes to it.

· Compiler: takes a large amount of time to analyze the entire source code, but the
overall execution time of the program is comparatively faster. Interpreter: takes less
time to analyze the source code, but the overall execution time of the program is
slower.

· Compiler: generates error messages only after scanning the whole program, so
debugging is comparatively hard as the error can be present anywhere in the
program. Interpreter: debugging is easier, as it continues translating the program
until the error is met.

· Compiler: requires a lot of memory for generating object code. Interpreter: requires
less memory than a compiler because no object code is generated.

· Compiler: generates intermediate object code. Interpreter: no intermediate object
code is generated.

· Compiler: more useful for security purposes. Interpreter: a little vulnerable in case
of security.

· Compiler examples: C, C++, C#. Interpreter examples: Python, Perl, JavaScript,
Ruby.
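The line-by-line behavior described above can be sketched with a toy interpreter. This is only an illustration: the two-statement mini-language (`SET`, `PRINT`) and the `interpret` function are invented for the example, not drawn from any real interpreter.

```python
# Toy interpreter for a hypothetical two-statement mini-language.
# It translates and executes one line at a time, stopping at the
# first erroneous statement -- mirroring interpreter behavior.

def interpret(source):
    variables = {}
    output = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        if parts[0] == "SET" and len(parts) == 3:
            variables[parts[1]] = int(parts[2])
        elif parts[0] == "PRINT" and len(parts) == 2:
            output.append(str(variables.get(parts[1], parts[1])))
        else:
            # Like an interpreter, report the error and stop here;
            # earlier lines have already executed.
            output.append(f"error on line {line_no}: {line!r}")
            break
    return output

program = """SET x 10
PRINT x
BOGUS statement
PRINT never_reached"""
print(interpret(program))  # ['10', "error on line 3: 'BOGUS statement'"]
```

Note how the first two lines execute successfully before the bad statement is reached, whereas a compiler would have rejected the whole program up front.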

web-enabled applications

Web app testing is a software testing practice that ensures the application's
functionality and quality as per the requirements. Before delivery, web testing must
identify all underlying issues, such as security breaches, integration issues,
functional inconsistencies, environmental challenges, or traffic load.

Web testing is a software testing technique to test web applications or websites for
finding errors and bugs. A web application must be tested properly before it goes
to the end-users. Also, testing a web application does not only mean finding
common bugs or errors but also testing the quality-related risks associated with the
application. Software Testing should be done with proper tools and resources and
should be done effectively. We should know the architecture and key areas of a
web application to effectively plan and execute the testing.
Testing a web application involves many of the same activities as testing any other
application, like testing functionality, configuration, or compatibility. It also includes
analysis of web-specific faults compared to general software faults. Web applications
are required to be tested on different browsers and platforms so that we can identify
the areas that need special focus while testing a web application.
Types of Web Testing:
Basically, there are 4 types of web-based testing that are available and all four of
them are discussed below:
· Static Website Testing:
A static website is a type of website in which the content shown or displayed
is exactly the same as it is stored in the server. This type of website may have
a great UI but does not have any dynamic feature that a user or visitor can
use. In static testing, we generally focus on testing the UI, as it is the most
important part of a static website. We check things like font size, color,
spacing, etc. Testing also includes checking the Contact Us form, verifying the
URLs or links used in the website, etc.
· Dynamic Website Testing:
A dynamic website consists of both a frontend, i.e., the UI, and a backend,
such as a database. This type of website gets updated or changed regularly as
per the users' requirements. Such a website involves a lot of functionality, like
what a button does when it is pressed and whether error messages are shown
properly at the right time. We also check whether the backend is working
properly, for example, whether data entered through the GUI or frontend
gets updated in the database.
· E-Commerce Website Testing:
An e-commerce website is difficult to maintain as it consists of many different
pages and functionalities. In this testing, the tester or developer has to check
various things, like whether the shopping cart works as per the requirements
and whether the user registration and login functionality work properly. The
most important things to test are whether a user can successfully make a
payment and whether the website is secure. Beyond these, there are many
more things a tester needs to test.
· Mobile-Based Web Testing: In this testing, the developer or tester checks
the website's compatibility on different devices, especially mobile devices,
because many users open the website on their phones. Keeping this in mind,
we must check that the site is responsive on all devices and platforms.
Points to be Considered While Testing a Website:
As the website consists of a frontend, backend, and servers, so things like HTML
pages, internet protocols, firewalls, and other applications running on the servers
should be considered while testing a website. There are various examples of
considerations that need to be checked while testing a web application. Some of
them are:
· Do all pages have valid internal and external links or URLs?
· Is the website working as per the system compatibility requirements?
· As per the user interface: is the display size optimal and the best fit for the
website?
· What type of security does the website need (if currently unsecured)?
· What are the requirements for getting website analytics, and for controlling
graphics, URLs, etc.?
· Should a Contact Us or customer assistance feature be added to the page?
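The first consideration above, validating internal and external links, can be partially automated. The sketch below only performs the collection step, using Python's standard library; the `LinkCollector` class and the sample URLs are hypothetical, and a real check would additionally fetch each collected URL and report non-success status codes as broken links.

```python
# Sketch of the first step in link checking: collect every href from
# a page's HTML and separate internal links from external ones.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        # Resolve relative links against the page's base URL.
        absolute = urljoin(self.base_url, href)
        if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
            self.internal.append(absolute)
        else:
            self.external.append(absolute)

page = '<a href="/about">About</a> <a href="https://example.org/x">Ext</a>'
collector = LinkCollector("https://mysite.test/")
collector.feed(page)
print(collector.internal)  # ['https://mysite.test/about']
print(collector.external)  # ['https://example.org/x']
```

In practice, a tester would feed each page of the site through such a collector and then request every URL, flagging any that return 4xx/5xx responses.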
Objectives of Web Based Testing:
· Testing for functionality: Make sure that the web application performs as
expected for all features and functions. Check that user interface elements
like form submissions and navigation work as intended.
· Testing for Compatibility: To make sure it is compatible, test the web
application across a variety of devices, operating systems, and browsers.
Verify that the program operates consistently in a range of settings.
· Evaluation of Performance: Analyze the online application’s overall
performance, speed, and responsiveness. Any performance bottlenecks, such
as slow page loads or delayed server response times, should be located and
fixed.
· Testing for load: Examine how well the web application can manage a
particular load or multiple user connections at once. Determine and fix
performance problems when there is a lot of traffic.
· Testing for accessibility: Make sure the online application complies with
applicable accessibility standards (e.g., WCAG) and is usable by people
with disabilities. Make sure the program can communicate with assistive
technologies efficiently.
· Testing Across Browsers: Make sure the operation and appearance of the
web application are consistent by testing it in various web browsers.
Determine and fix any problems that might develop with a particular
browser.
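The load-testing objective above can be sketched by firing concurrent requests and summarizing response times. This is a minimal sketch under stated assumptions: `handle_request` merely simulates server work with a short sleep, and the statistics keys are invented names; a real test would call an actual endpoint or use a dedicated tool such as JMeter or Locust.

```python
# Minimal load-test sketch: fire N concurrent "requests" and collect
# per-request response times, then summarize average and 95th percentile.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for a real HTTP call; measures its own duration."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server work
    return time.perf_counter() - start

def run_load_test(num_requests=50, concurrency=10):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = list(pool.map(handle_request, range(num_requests)))
    timings.sort()
    return {
        "requests": len(timings),
        "avg_s": sum(timings) / len(timings),
        "p95_s": timings[int(0.95 * len(timings)) - 1],
    }

stats = run_load_test()
print(stats)
```

Watching how `avg_s` and `p95_s` grow as `concurrency` rises is exactly the kind of bottleneck hunting the performance and load objectives describe.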
Steps in Software Testing:
There is a total of 11 steps in software testing. You can read all of them from the
article “General Steps of Software Testing Process”. In web-based testing, various
areas have to be tested for finding the potential errors and bugs, and steps for
testing a web app are given below:
· App Functionality: In web-based testing, we have to check the specified
functionality, features, and operational behavior of a web application to
ensure they correspond to its specifications. For example: testing all the
mandatory fields, testing that the asterisk sign is displayed for all mandatory
fields, and testing that the system does not display error messages for
optional fields. Links (external links, internal links, anchor links, and mailing
links) should also be checked properly, and any damaged link should be fixed
or removed. We can do this with the help of Functional Testing, in which we
test the app's functional requirements and specifications.
· Usability: While testing usability, the developers face issues with
scalability and interactivity. As different numbers of users will be using the
website, it is the responsibility of developers to make a group for testing the
application across different browsers by using different hardware. For
example, Whenever the user browses an online shopping website, several
questions may come to his/her mind like, checking the credibility of the
website, testing whether the shipping charges are applicable, etc.
· Browser Compatibility: To check that the website works the same in
different browsers, we test whether the content on the website is displayed
correctly across all of them.
· Security: Security plays an important role in every website that is available
on the internet. As part of security testing, testers check that unauthorized
access to secure pages is not permitted and that files confidential to users
cannot be downloaded without proper access.
· Load Issues: We perform this testing to check the behavior of the system
under a specific load so that we can measure some important transactions
and the load on the database, the application server, etc. are also monitored.
· Storage and Database: Testing the storage or the database of any web
application is also an important component, and we must make sure that the
database is properly tested. We test things like finding errors while executing
DB queries, checking the response time of a query, and testing whether the
data retrieved from the database is correctly shown on the website.
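The mandatory-field checks described under App Functionality can be expressed as a small automated check. Everything here is hypothetical: the form is modeled as a plain dict, and `MANDATORY` stands in for a signup form's required-field specification.

```python
# Sketch of a mandatory-field functional check. The form is modeled
# as a plain dict of field -> value; MANDATORY is a hypothetical
# spec of required fields for a signup form.
MANDATORY = {"username", "email", "password"}

def validate_form(form):
    """Return the sorted list of mandatory fields that are missing or blank."""
    return sorted(f for f in MANDATORY if not form.get(f, "").strip())

# A functional test asserts that submission with missing mandatory
# fields is rejected, while optional fields may stay empty.
complete = {"username": "alice", "email": "a@b.com",
            "password": "s3cret", "nickname": ""}   # nickname is optional
incomplete = {"username": "alice", "email": "  "}   # blank email, no password

print(validate_form(complete))    # []
print(validate_form(incomplete))  # ['email', 'password']
```

A real test suite would drive the same assertions through the UI (e.g. with a browser-automation tool) rather than against an in-memory dict.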

What is Adhoc Testing?

When software testing is performed without proper planning and documentation,
it is said to be Adhoc Testing. Such tests are executed only once, unless we uncover
defects.

Ad Hoc Tests are done after formal testing is performed on the application. Ad
Hoc methods are the least formal type of testing as it is NOT a structured
approach. Hence, defects found using this method are hard to replicate as there are
no test cases aligned for those scenarios.

Testing is carried out with the knowledge of the tester about the application and the
tester tests randomly without following the specifications/requirements. Hence the
success of Adhoc testing depends upon the capability of the tester, who carries out
the test. The tester has to find defects without any proper planning and
documentation, solely based on the tester's intuition.

When to Execute Adhoc Testing?


Adhoc testing can be performed when there is limited time to do exhaustive testing,
and it is usually performed after the formal test execution. Adhoc testing will be
effective only if the tester has an in-depth understanding of the System Under
Test.

Forms of Adhoc Testing:

1. Buddy Testing: Two buddies, one from the development team and one
from the test team mutually work on identifying defects in the same
module. Buddy testing helps the testers develop better test cases while
the development team can also make design changes early. This kind of
testing happens usually after completing the unit testing.
2. Pair Testing: Two testers are assigned the same modules and they share
ideas and work on the same systems to find defects. One tester executes
the tests while another tester records the notes on their findings.
3. Monkey Testing: Testing is performed randomly without any test cases
in order to break the system.

Various ways to make Adhoc Testing More Effective

1. Preparation: By getting the defect details of a similar application, the


probability of finding defects in the application is more.
2. Creating a Rough Idea: By having a rough idea in place, the tester will
have a focused approach. It is NOT required to document a detailed plan
of what to test and how to test it.
3. Divide and Rule: By testing the application part by part, we will have a
better focus and better understanding of the problems if any.
4. Targeting Critical Functionalities: A tester should target those areas
that are NOT covered while designing test cases.
5. Using Tools: Defects can also be brought to the limelight by using
profilers, debuggers, and even task monitors. Hence, by being proficient in
using these tools, one can uncover several defects.
6. Documenting the findings: Though testing is performed randomly, it is
better to document the tests if time permits and note down the deviations
if any. If defects are found, corresponding test cases are created so that it
helps the testers to retest the scenario.
Adhoc Testing :
Adhoc testing is a type of software testing that is performed informally and randomly
after the formal testing is completed to find any loophole in the system. For this
reason, it is also known as Random or Monkey testing. Adhoc testing is not performed
in a structured way so it is not based on any methodological approach. That’s why
Adhoc testing is a type of Unstructured Software Testing.

Adhoc testing has –

● No Documentation.

● No Test cases.

● No Test Design.
As it is not based on any test cases and requires no documentation or test design,
resolving the issues identified later becomes very difficult for developers. Sometimes
very interesting, unexpected, or uncommon errors are found that would never have
been found with written test cases. Ad Hoc testing is also used in Acceptance
testing.

Adhoc testing saves a lot of time. A great example: the client needs the product by
6 PM today, but product development will be completed only at 4 PM the same day.
With only two hours in hand, the developer and tester teams can test the system as a
whole by taking some random inputs and checking for errors.

Types of Adhoc Testing


Adhoc testing is divided into three types as follows.

1. Buddy Testing – Buddy testing is a type of Adhoc testing in which two
people are involved, one from the developer team and one from the tester
team. After a module is completed and unit tested, the tester can test it by
giving random inputs, and the developer can fix the issues early, based on
the currently designed test cases.
2. Pair Testing – Pair testing is a type of Adhoc testing in which two testers
from the testing team are involved to test the same module. While one
tester performs the random tests, the other maintains a record of the
findings. When two testers are paired, they exchange ideas, opinions, and
knowledge, so good testing is performed on the module.
3. Monkey Testing – Monkey testing is a type of Adhoc testing in which the
system is tested with random inputs, without any test cases; the behavior
of the system is tracked, and it is monitored whether all the functionalities
of the system are working. As a randomness approach is followed, with no
constraint on inputs, it is called Monkey testing.
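The random-input idea behind Monkey testing can be sketched in a few lines. `parse_age` is a made-up unit under test; the harness treats graceful rejections (`ValueError`/`TypeError`) as acceptable behavior and records anything else as a crash.

```python
# Minimal monkey-testing sketch: hammer a function with random,
# unconstrained inputs and record any unexpected crash.
import random
import string

def parse_age(text):
    """Hypothetical unit under test: parse an age field from user input."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def monkey_test(func, rounds=200, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    crashes = []
    for _ in range(rounds):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 8)))
        try:
            func(junk)
        except (ValueError, TypeError):
            pass  # graceful rejection is acceptable behavior
        except Exception as exc:  # anything else is a defect
            crashes.append((junk, exc))
    return crashes

print(monkey_test(parse_age))  # [] -> no unexpected crashes found
```

Because there is no constraint on the inputs, any failure found this way is exactly the kind of "interesting and unexpected error" that scripted test cases tend to miss.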

Characteristics of Adhoc Testing


● Adhoc testing is performed randomly.

● Based on no documentation, no test cases, and no test designs.

● It is done after formal testing.

● It follows an unstructured way of testing.

● It takes comparatively less time than other testing techniques.

● It is good for finding bugs and inconsistencies that are not covered by
written test cases.

When to conduct Adhoc testing


● When there is limited time in hand to test the system.

● When there are no clear test cases to test the product.

● When formal testing is completed.

● When the development is mostly complete.

When not to conduct Adhoc testing


● When an error exists in the test cases.

● When Beta testing is being carried out.

Advantages of Adhoc testing


● The errors that can not be identified with written test cases can be identified
by Adhoc testing.
● It can be performed within a very limited time.

● Helps to create unique test cases.

● This test helps to build a strong product that is less prone to future
problems.
● This testing can be performed at any time during the Software Development
Life Cycle (SDLC) process.

Disadvantages of Adhoc testing


● Sometimes resolving errors based on identified issues is difficult, as there
are no written test cases or documents.
● Needs good knowledge of the product as well as testing concepts to
perfectly identify the issues in any model.
● It does not provide any assurance that the error will be identified.

● Finding even one error may take an uncertain amount of time.

What is Adhoc Testing?

● We do this testing after the build has been checked in sequence; then we go for
Adhoc testing by checking the application randomly.

● Adhoc testing is also known as Monkey testing and Gorilla testing.

● It is negative testing because we will test the application against the client's requirements.
● When the end-user uses the application randomly, he/she may see a bug, but the
professional test engineer uses the software systematically, so he/she may not find the
same bug.

● Example of Adhoc Testing


● Scenario 1

● Suppose we will do one round of functional testing, integration, and system testing on the
software.

● Then, we click on some feature instead of going to the login page, and it goes to the blank
page, then it will be a bug.

● To avoid this sort of scenario, we do one round of adhoc testing.

● Scenario 2


● In Adhoc testing, we don't follow the requirement because we randomly check the
software. Our need is A→B→C→D, but while performing Adhoc testing, the test
engineer directly goes to C and tests the application.

● Scenario 3

● Suppose we are using two different browsers like Google Chrome and Mozilla Firefox
and login to the Facebook application in both the browsers.

● Then, we will change the password in the Google Chrome browser and then, in the
other browser (Firefox), perform some action like sending a message.

● Firefox should navigate to the login page and ask for the login credentials again,
because we changed our credentials in the other browser (Chrome). This process is
called adhoc testing.

Why do we need to perform Ad Hoc Testing?

● When the product is released to the market, we go for Adhoc testing because
customers never use the application in a fixed sequence or systematically; to cover
this, we check the application randomly.

● Since users don't know how the application is meant to be used, they may use it
randomly, without following any sequence or procedure, and find some issues; to
cover this, we do one round of Adhoc testing.

When we perform adhoc testing

● We go for Adhoc testing after all other types of testing have been performed. If time
permits, we will check all the negative scenarios during adhoc testing.

What is Buddy Testing?


The buddy system practice is used in this type of testing, wherein two team members
are identified as buddies. The buddies mutually help each other, with a
common goal of identifying defects early and correcting them. A developer and a
tester usually become buddies. It may be advantageous to team up people
with good working relationships as buddies to overcome any apprehensions. On
the other hand, if this is mapped to a complete agreement of views and approaches
between the buddies, the diversity required between the two may not be achieved,
which may make buddy testing less effective. Buddying people with good working
relationships yet diverse backgrounds is a kind of safety measure that improves the
chances of detecting errors in the program very early.

● In this technique, two team members work on the same machine where

one of the team members will work with the system and the other one is
responsible for making notes and scenarios.
● One person known as the primary tester performs the testing while the

other person, known as the buddy tester, observes and provides assistance
as needed.

Importance of Buddy Testing:


● Avoid errors or early detection of errors: A buddy test may help to

avoid errors of omission, misunderstanding, and communication by


providing varied perspectives or interactive exchanges between the
buddies.
● Provides clarity on specifications: Buddy testing not only helps in

finding errors in the code but also helps the tester to understand how the
code is written and provides clarity on specifications.
● Helps to design better testing strategy: Buddy testing is normally done

at the unit test phase, which helps testers come up with a better testing
strategy for subsequent planned testing activities.
● Helpful for testing new modules: Buddy testing is done for new or critical

modules in the product where the specification is not clear to the buddies
performing the testing.
● Helps provide additional perspective on the testing process: The

importance of buddy testing lies in its ability to enhance the effectiveness


and efficiency of the testing process. By working together, the primary
tester and the buddy tester can share knowledge and expertise, catch
errors and defects more quickly, and provide additional perspectives on
the testing process.

Types of Buddy Testing:

1. Pair Testing

In pair testing, two people work closely together at a single workstation. As the
other person watches and evaluates the process, one person assumes the role of the
tester, carrying out test cases or utilizing the application. A dynamic interchange of
ideas and viewpoints is ensured by this cooperative method, which promotes more
thorough testing and early defect discovery.

2. Developer-Tester Buddy Testing

Buddy testing for developers and testers is working together to find and fix bugs
early in the development cycle. This cooperation could be demonstrated by code
reviews, pair programming or cooperative testing. Facilitating communication
between these two crucial responsibilities will help the team improve the software
as a whole.

3. Exploratory Testing Pairing

In this type of testing, two testers collaborate to examine the programme without
using pre-written test cases. This method fosters flexibility and inventiveness,
enabling testers to find unexpected problems and situations. The cooperation of
testers guarantees a deeper investigation of the functionality of the programme.

4. Peer Review Testing

Through cooperative efforts, peer review testing focuses on the examination and
enhancement of testing artifacts. In order to find any problems, contradictions, or
places for improvement, testers go over each other’s test cases, scripts or plans.
This kind of buddy testing keeps a consistent and efficient testing procedure going
and improves the overall quality of testing documents.

5. Cross-functional Team Buddy Testing

This type of testing involves working together with individuals from several
functional domains, including development, testing and design. This method
encourages a variety of viewpoints and skill sets while advancing an overall
comprehension of the system. The cross-functional team’s ability to communicate
effectively helps them grasp the programme more thoroughly.
When to use Buddy Testing?
Buddy testing is typically used in the later stages of the software development
process when the software is almost complete and ready for final testing. It is
particularly useful for testing complex or critical systems, or for testing systems
that require specialized knowledge or expertise.

There are several factors that can influence the decision to use buddy testing,
including:

1. When testing complex systems: Buddy testing can be particularly useful

for testing complex or critical systems, as it allows two individuals to


work together to identify defects and issues more quickly.
2. When there are individuals with different levels of expertise: Buddy

testing can be beneficial if the testing team includes individuals with


different levels of expertise or knowledge. For example, if the primary
tester is an experienced tester with a strong understanding of the software
or system, but the buddy tester is a subject matter expert with knowledge
of the domain being tested, the combination of these two perspectives can
enhance the effectiveness of the testing process.
3. When there are limited resources: Buddy testing may be more efficient

and cost-effective than other testing methods, particularly if the testing


team has limited resources or time available for testing.
4. If the goal is to identify all possible defects: If the goal of the testing is

to identify as many defects and issues as possible, buddy testing can be a


useful technique, as it allows two individuals to work together to catch
defects more quickly.
5. When the specification is not clear: Lack of proper specification

confuses the tester so the presence of another developer or experienced


tester may help to resolve the issues and achieve the goal.
6. Deadline is near: Buddy testing is helpful in scenarios where the

development took a lot of time and the testing team has only a few days
for testing the product.
7. When the team is new: When there is a new team member in the team

and quick knowledge of the product is required. Using buddy testing, a


new tester can get a hold of the functional flow of the product.

Process of Buddy Testing:


The process of buddy testing involves the following steps:

1. Identify the primary tester and the buddy tester: The primary tester is

typically an experienced tester with a strong understanding of the


software or system being tested, while the buddy tester may be a less
experienced tester or a subject matter expert with knowledge of the
domain being tested.
2. Define the scope and objectives of the testing: The primary tester and

the buddy tester should agree on the scope and objectives of the testing,
including the specific features or functionality that will be tested and the
expected results.
3. Plan the testing: The primary tester and the buddy tester should develop

a testing plan that outlines the specific test cases and test scenarios that
will be executed, as well as the resources and tools needed to complete
the testing.
4. Execute the testing: The primary tester performs the testing while the

buddy tester observes and provides assistance as needed. The buddy


tester may also be responsible for documenting defects and issues that are
identified during the testing.
5. Review and debrief: After the testing is complete, the primary tester and

the buddy tester should review the results of the testing and debrief to
discuss any issues or challenges that were encountered.

Benefits of Buddy Testing:


● Enhanced effectiveness: By working together, the primary tester and the

buddy tester can share knowledge and expertise, and catch defects more
quickly.
● Increased efficiency: Buddy testing can help reduce the time and

resources needed to complete the testing process.


● Improved quality: Buddy testing can help ensure that the software or

system being tested is of high quality, as defects and issues are more
likely to be identified and addressed.
● Enhanced collaboration: Buddy testing promotes collaboration between

team members and can help build trust and teamwork within the team.
● Less workload: There is less workload in the presence of another team

member, and the tester can think clearly and use more scenarios for
testing.

Limitations of Buddy Testing:


● Training required: Buddies need to be trained (if required) on the philosophy

and objective of buddy testing. They should also be made to appreciate that


they have a responsibility to one another.
● Both have to agree on working terms: They also have to agree on the

modalities and the terms of working before actually starting the testing
work. They stay close together to be able to follow the agreed plan. The
code is unit tested to ensure what it is supposed to do before buddy
testing starts.
● Lengthy review session: After the code is successfully tested through

unit testing the developer approaches the testing buddy. Starting buddy
testing before completing unit testing may result in a lengthy review
session for the buddy on a code that may not meet specified
requirements. This in turn may cause unnecessary rework and erode the
confidence of the buddy.
● Dependence on the buddy tester: If the buddy tester is not available or

is not able to provide assistance, the testing process may be slowed down
or disrupted.
● Limited scalability: Buddy testing may not be practical for large-scale

testing projects, as it requires close collaboration between two


individuals.
● Limited flexibility: Buddy testing may not be suitable for testing

scenarios that require a high degree of flexibility or adaptability.

Pair Testing:
Pair Testing is verification of software by two team members operating behind one
machine. The first member controls the mouse and keyboard, while the second
member makes notes, discusses test scenarios, and prepares questions. One of them
has to be a tester and the other a developer or business analyst.

Pair programming is a familiar practice in extreme programming and is considered a
great approach to programming software. Likewise, pair testing is a similar process
for testing software.

How to Perform Pair Testing in Software Testing:


Before starting the pair testing, some most important things to keep in mind are –

● Pairing up with the Right Person: We can pair with anyone, but it's better

if both individuals have some sense of each other's working process and
goals.
● Allocating Proper Space: The pair needs a device and a space to sit

together and perform the test properly. Remotely, it can be done via
video-conferencing tools where the driver shares the entire screen.
● Establish the Goals: After planning a structured approach to the test, we

need to keep an eye on the areas to be covered, timebox the testing, and
stay aware of any required changes.
● Decide the Roles: Before starting the testing, we need to assign the roles

of driver and navigator.

● Logging bugs and taking notes: The navigator needs to take notes and

maintain the bug log while the driver performs the manual tasks. Once
the process is done, they should prepare a bug report and log all bugs.
Advantages Of Using Pair Testing:
1. Developers gain a different, fresh view of the software. The pair discusses

and discovers questions like "what happens if I execute this?" or "what will
happen if the business analyst's intent is not implemented?".
2. While Pair Testing is performed with a business analyst, they exchange

ideas and knowledge between them, like an analyst and a tester.

3. When a new project begins with new team members, there is often a

barrier between testers and developers; working in pairs helps remove it.

4. If we find any issue and want to register it in a bug registration

system, the issue is automatically reviewed by a second pair of eyes;
therefore, working in pairs helps each member stay sharp.

Disadvantages Of Using Pair Testing:


If you don't know the right conditions for applying Pair Testing, you should not
use it.

1. The outcome of a PT session is findings, not test cases, so we cannot use

the result of a PT session directly for test automation; tests meant to be
automated should be scripted separately.
2. Two team members may end up clashing with each other. That's why we

should not use PT when the team members are not communicating or
working together well.
3. If you are planning to execute structured test cases, executing them

together adds little or no value; that task should be performed by one
team member alone.
The Setting of Pair Testing:

1. The team members should commit to working together. It is not going to
work if we try to force cooperation, so we need to create a workable
atmosphere.


2. There should be a separate room or table where the team members can
work without being interrupted. They should also switch off their mobile
phones and notifications to work better.
3. The workplace consists of two people sitting behind one desk. You can’t
pair people up if there is no space for it.

Preparing Pair Testing Session:

1. We should create an ET charter.

2. We should define the focus and scope of the test.

3. We should spell out the aims or goals of the test.

4. We should fix a time limit to carry out the test. Normally one session is

ninety minutes.
5. We should plan the meeting in a coordinated manner.
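The preparation steps above can be captured in a small charter template. A possible sketch in Python, assuming a ninety-minute session; the field names and example values are our own, not a formal standard:

```python
# One possible shape for an exploratory testing (ET) charter.
# Field names and values are illustrative assumptions.
charter = {
    "mission": "Explore the checkout flow for payment errors",
    "scope": ["cart", "coupon codes", "payment gateway"],
    "goals": ["find defects blocking purchase", "note usability issues"],
    "timebox_minutes": 90,   # a typical session length
    "session_plan": {"driver": "Alice", "navigator": "Bob"},
}

print("Charter covers", len(charter["scope"]), "areas")
```

Keeping the charter this lightweight matches the intent of pair testing: enough structure to stay focused, without turning the session into scripted testing.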

Execution Of Pair Testing Session:

● While carrying out the session, the team members should discuss which
parts to test and how deep the testing should go. The test should stay
focused on the targets and on the parts of the test described in the ET charter.
● The first team member (the driver) controls the keyboard and mouse while
the second team member analyses, asks questions, and takes notes.
Finishing Pair Testing (PT):

After the completion of the PT session, the findings are submitted to the bug-tracking
system where needed. The ET charter is updated if it was not fully completed,
the areas where problems were found are retested, and other remarks are
followed up. The session closes with a short evaluation of the PT session: what
was done well, and what should be improved.

Finally, pair testing is a blend of teamwork and testing. It has many advantages,
such as sharing knowledge about testing and the SUT, training new members, and
breaking down barriers between members; above all, it is fun. We should use pair
testing wisely.

What is Exploratory Testing?

Exploratory Testing is an unscripted, manual software testing type where testers
examine the system with no pre-established test cases and no previous exposure to
the system. Instead of following a strict test plan, they jump straight to testing and
make spontaneous decisions about what to test on the fly.

Prior to the testing phase, they might list down some concepts or areas to explore,
but the essence of exploratory testing still emphasizes the tester’s personal freedom
and responsibility in simultaneous knowledge acquisition and quality checks. In a
way, exploratory testing is similar to an exciting adventure where testers don’t
really know what lies ahead of them, and they must utilize certain techniques to
uncover those mysteries.

During exploratory testing, testers explore the software by interacting with it,
trying out different features, then observing its behavior. They may even
intentionally break the software, input unexpected data, or explore edge cases to
uncover potential issues. As long as testers get to understand the system’s
workings and can suggest strategies to test the system more methodically, they
have accomplished the task.

Why Exploratory Testing?

Exploratory testing plays a crucial role in software testing, especially in Agile
projects, where automation testing has taken precedence. Automation testing
admittedly brings a wide range of benefits to QA teams, but it is impossible to
ignore exploratory testing and, more broadly, manual testing.

1. Exploratory testing finds bugs we aren’t yet aware of

Recent research compares exploratory testing (ET) with test-case-based testing
(TCT) in terms of their effectiveness for catching software defects. Results show
that test case based testing excels in catching immediately visible defects (Mode
0) or defects that require only 1 interaction (Mode 1) to cause failure, while
exploratory testing does a better job in catching complex defects requiring 2 and
3+ user inputs (Mode 2 and Mode 3).
In other words, exploratory testing helps testers catch bugs that automated testing
would have missed. Automated testing only catches bugs that we know may
happen (we can only create test scripts for something we know about), while
exploratory testing catches bugs that we don’t even know to be existing in the first
place, tapping in the Unknown Unknown region of our understanding.
Exploratory testing and manual testing in general expands the test coverage to
blind zones of automation testing, pushing product quality to a new level.

2. Exploratory testing encourages innovation and knowledge sharing

Exploratory testing is free-style and not as rule-based as automation testing, and
this characteristic opens up a lot of room for creativity and innovation. If a tester
individually performs exploratory testing, they need to apply their domain
knowledge, intuition, critical thinking, and even user-centric thinking to interact
with the system and uncover potential issues.

An even better approach to exploratory testing is applying the pair programming
principles, which involves 2 people - 1 driver and 1 navigator. In a time-boxed
period, the driver performs the actual testing while the navigator observes,
provides guidance, and takes notes where necessary. Sometimes called Crowd
Exploratory Testing, this approach maximizes the level of creativity and subject
expertise of both testers while encouraging knowledge sharing and collaboration
between team members.

3. Exploratory testing encourages continuous learning

In the words of Cem Kaner, who coined the term for this testing type:
“[exploratory testing treats] test-related learning, test design, test execution and
test result interpretation as mutually supportive activities that run in parallel
throughout the project.” As exploratory testing requires minimal to no planning, it
can be conducted almost whenever we want, giving us the autonomy we need to
learn about the product while also performing quality checks, saving time and
resources.

4. Exploratory testing complements continuous testing

In this era, the entire Quality Engineering industry is moving towards shift-left
testing and continuous testing where testing is performed in synchronicity with
development. Shift-left testing allows testing to happen earlier, leaving ample time
for troubleshooting while improving the project’s agility.

However, driven solely by automation, there is always a critical gap in this
approach, as we have mentioned above: the lack of investigation into the Unknown
Unknown zone. When testers become too fixated on what can be automated to fit
in the Continuous Testing approach, they get the cognitive bias called “tunnel
vision”, ignoring alternative solutions. At the end of the day, automation testing
only checks if a feature works and does not find bugs.

Exploratory testing requires every tester to acknowledge this gap, and encourages
them to explore beyond scripted scenarios, uncovering new and unexpected
behaviors. A highly recommended approach is to combine manual testing with
automated testing to utilize the benefits of both.

Exploratory Testing Example


Consider yourself in the position of testing an eCommerce website with features
like online shopping, payment processing, and order management. If you adopt an
Exploratory Testing approach, your objectives would be to navigate through the
website, identifying possible usability issues, interface defects, and any potential
risks.

You would interact with various functionalities of the website, such as browsing
product catalogs, adding items to the shopping cart, and completing the checkout
process. After that, you can explore different menus, screens, and buttons to ensure
smooth navigation and intuitive user experience. You would also input values and
scenarios into certain field forms, such as using different payment methods or
applying discount codes, to assess the accuracy of calculations and the transaction
processes. There is no rule - your goal is to explore and familiarize yourself with
the website.

In the process, you may find critical issues such as payment gateway failures and
security vulnerabilities, or minor problems like broken links and inconsistent
product descriptions. Even if you did not find any bugs, you still learn a lot about
how the system works. This knowledge will come in handy when you start
developing automation test scripts in the future.

There are many website or application-specific features that a tester needs to pay
attention to when doing exploratory testing. For example, exploring a personal
finance application would require them to apply the mindset of a customer needing
security and accuracy, compared to exploring a website that places stronger
emphasis on interactivity.
Pros and Cons To Exploratory Testing

Pros:
● Encourages creative and innovative thinking
● Identifies defects that are usually missed by formal testing methods
● Simultaneously provides a comprehensive understanding of software
functionality and quality
● Requires less preparation time compared to formal testing
● Efficient in quickly identifying major defects

Cons:
● Difficult to measure and control without a formal test script
● Requires skilled and experienced testers for effective defect identification
● Replication and automation of results can be challenging
● Inconsistencies may arise from different testers using different approaches
● Time-consuming if we can only uncover minor issues with no impact on
software performance

Types Of Exploratory Testing

There are 3 styles of exploratory testing:


1. Free-style Exploratory Testing: as the name suggests, this approach
does not require any rules or specifications, and the tester has great
control over what they want to do with the application. However, it does
not mean that testers approach it with randomness. In fact, free-style
testers must themselves build an exploration strategy in their head based
on their experience and intuition before performing the actual testing.
2. Scenario-based Exploratory Testing: this approach is performed on
real-user scenarios where the QA team isolates and tests 1 scenario at a
time from all angles. This is a more structured approach than free-style
testing.
3. Strategy-based Exploratory Testing: there is an overarching
strategy that guides the exploratory testing activity, requiring the testers
to apply various testing techniques (boundary value analysis,
equivalence partitioning, risk-based testing, error guessing, etc.)
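Of the techniques listed, boundary value analysis is the easiest to illustrate. A sketch in Python, assuming a hypothetical discount field whose valid range is 0 to 100 inclusive:

```python
# Boundary value analysis for a hypothetical discount-percentage
# field whose valid range is assumed to be 0..100 inclusive.
def is_valid_discount(value: int) -> bool:
    return 0 <= value <= 100

# Probe just outside, on, and just inside each boundary.
boundary_cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert is_valid_discount(value) == expected, value
print("all boundary cases pass")
```

A strategy-based session would pick cases like these deliberately rather than wandering: defects cluster at boundaries, so the tester probes them first.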

Agile Software Testing

Agile Testing is a type of software testing that follows the principles of agile
software development to test the software application. All members of the project
team along with the special experts and testers are involved in agile testing. Agile
testing is not a separate phase and it is carried out with all the development phases
i.e. requirements, design and coding, and test case generation. Agile testing takes
place simultaneously throughout the Development Life Cycle. Agile testers
participate in the entire development life cycle along with development team
members, and the testers help build the software according to the customer
requirements and with a better design, and thus better code becomes possible. The
agile testing team works as a single team towards the single objective of achieving
quality. Agile Testing has shorter time frames called iterations or loops. This
methodology is also called the delivery-driven approach because it provides a
better prediction of workable products in a shorter time.
· Agile testing is an informal process that is specified as a dynamic type of
testing.
· It is performed regularly throughout every iteration of the Software
Development Lifecycle (SDLC).
· Customer satisfaction is the primary concern for agile test engineers at
some stage in the agile testing process.
Features of Agile Testing
Some of the key features of agile software testing are:

· Simplistic approach: In agile testing, testers perform only the necessary
tests but at the same time do not leave behind any essential tests. This
approach delivers a product that is simple and provides value.
· Continuous improvement: In agile testing, agile testers depend mainly
on feedback and self-learning for improvement and they perform their
activities efficiently continuously.
· Self-organized: Agile testers are highly efficient and tend to solve
problems by bringing teams together to resolve them.
· Testers enjoy work: In agile testing, testers enjoy their work and thus
will be able to deliver a product with the greatest value to the consumer.
· Encourage Constant communication: In agile testing, efficient
communication channels are set up with all the stakeholders of the project to
reduce errors and miscommunications.
· Constant feedback: Agile testers need to constantly provide feedback to
the developers if necessary.
Agile Testing Principles
· Shortening feedback iteration: In Agile Testing, the testing team gets
to know the product development and its quality for each and every iteration.
Thus continuous feedback minimizes the feedback response time and the
fixing cost is also reduced.
· Testing is performed alongside development: Agile testing is not a
separate phase. It is performed alongside the development phase. It ensures
that the features implemented during that iteration are actually done. Testing
is not kept pending for a later phase.
· Involvement of all members: Agile testing involves each and every
member of the development team and the testing team. It includes various
developers and experts.
· Documentation is weightless: In place of global test documentation,
agile testers use reusable checklists to suggest tests and focus on the essence
of the test rather than the incidental details. Lightweight documentation tools
are used.
· Clean code: The defects that are detected are fixed within the same
iteration. This ensures clean code at any stage of development.
· Constant response: Agile testing helps to deliver responses or feedback
on an ongoing basis. Thus, the product can meet the business needs.
· Customer satisfaction: In agile testing, customers are exposed to the
product throughout the development process. Throughout the development
process, the customer can modify the requirements, and update the
requirements and the tests can also be changed as per the changed
requirements.
· Test-driven: In agile testing, the testing needs to be conducted alongside
the development process to shorten the development time. But testing is
implemented after the implementation or when the software is developed in
the traditional process.
Agile Testing Methodologies
Some of the agile testing methodologies are:

1. Test-Driven Development (TDD): TDD is a software development
process that relies on creating unit test cases before developing the actual code
of the software. It is an iterative approach that combines 3 operations:
programming, creation of unit tests, and refactoring.
2. Behavior Driven Development (BDD): BDD is agile software testing that
aims to document and develop the application around the user behavior a
user expects to experience when interacting with the application. It
encourages collaboration among the developer, quality experts, and
customer representatives.
3. Exploratory Testing: In exploratory testing, the tester has the freedom to
explore the code and create effective and efficient software. It helps to
discover the unknown risks and explore each aspect of the software
functionality.
4. Acceptance Test-Driven Development (ATDD): ATDD is a
collaborative process where customer representatives, developers, and
testers come together to discuss the requirements, and potential pitfalls and
thus reduce the chance of errors before coding begins.
5. Extreme Programming (XP): Extreme programming is a customer-
oriented methodology that helps to deliver a good quality product that meets
customer expectations and requirements.
6. Session-Based Testing: It is a structured and time-based approach that
involves the progress of exploratory testing in multiple sessions. This
involves uninterrupted testing sessions that are time-boxed with a duration
varying from 45 to 90 minutes. During the session, the tester creates a
document called a charter document that includes various information about
their testing.
7. Dynamic Software Development Method (DSDM): DSDM is an agile
project delivery framework that provides a framework for building and
maintaining systems. It can be used by users, developers, and testers.
8. Crystal Methodologies: This methodology focuses on people and their
interactions when working on the project instead of processes and tools. The
suitability of the crystal method depends on three dimensions: team size,
criticality, and priority of the project.
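The TDD cycle from item 1 can be sketched with Python's built-in unittest: the test is written first (red), then just enough code is added to make it pass (green). The slugify function here is a made-up example, not something from the text:

```python
import unittest

# Red: the test is written first, before any implementation exists.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Agile Testing"), "agile-testing")

# Green: write just enough code to make the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Refactor: clean up the implementation while keeping the test green.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
)
print("test passed:", result.wasSuccessful())
```

The red-green-refactor loop repeats for every small increment, which is what makes TDD an iterative approach rather than a one-off testing phase.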
Agile Testing Strategies

1. Iteration 0

It is the first stage of the testing process and the initial setup is performed in this
stage. The testing environment is set in this iteration.

· This stage involves executing the preliminary setup tasks such as finding
people for testing, preparing the usability testing lab, preparing resources,
etc.
· The business case for the project, boundary situations, and project scope
are verified.
· Important requirements and use cases are summarized.
· Initial project and cost valuation are planned.
· Risks are identified.
· Outline one or more candidate designs for the project.

2. Construction Iteration
It is the second phase of the testing process. It is the major phase of the testing and
most of the work is performed in this phase. It is a set of iterations to build an
increment of the solution. This process is divided into two types of testing:

· Confirmatory testing: This type of testing concentrates on verifying that
the system meets the stakeholder’s requirements as described to the team to
date and is performed by the team. It is further divided into 2 types of
testing:
● Agile acceptance testing: It is the combination of acceptance testing and
functional testing. It can be executed by the development team and the
stakeholders.
● Developer testing: It is the combination of unit testing and integration
testing and verifies both the application code and database schema.

· Investigative testing: Investigative testing detects the problems that are
skipped or ignored during confirmatory testing. In this type of testing, the
tester determines the potential problems in the form of defect stories. It
focuses on issues like integration testing, load testing, security testing, and
stress testing.

3. Release End Game

This phase is also known as the transition phase. This phase includes the full
system testing and the acceptance testing. To finish the testing stage, the product is
tested more rigorously than during the construction iterations. In this phase, testers
work on the defect stories. This phase involves activities like:

· Training end-users.
· Support people and operational people.
· Marketing of the product release.
· Back-up and restoration.
· Finalization of the system and user documentation.
4. Production
It is the last phase of agile testing. The product is finalized in this stage after the
removal of all defects and issues raised.

Agile Testing Quadrants


The whole agile testing process is divided into four quadrants:

1. Quadrant 1 (Automated)

The first agile quadrant focuses on the internal quality of the code and contains the
test cases and test components that are executed by the test engineers. All test cases
are technology-driven and used for automation testing. Throughout the first
quadrant of agile testing, the following testing can be executed:

· Unit testing.
· Component testing.
2. Quadrant 2 (Manual and Automated)

The second agile quadrant focuses on the customer requirements that are provided
to the testing team before and throughout the testing process. The test cases in this
quadrant are business-driven and are used for manual and automated functional
testing. The following testing will be executed in this quadrant:

· Pair testing.
· Testing scenarios and workflow.
· Testing user stories and experiences like prototypes.

3. Quadrant 3 (Manual)

The third agile quadrant provides feedback to the first and the second quadrants.
This quadrant involves executing many iterations of testing, and these reviews and
responses are then used to strengthen the code. The test cases in this quadrant are
typically executed manually. The testing that can be carried out in this
quadrant includes:

· Usability testing.
· Collaborative testing.
· User acceptance testing.
· Pair testing with customers.

4. Quadrant 4 (Tools)

The fourth agile quadrant focuses on the non-functional requirements of the
product like performance, security, stability, etc. Various types of testing are
performed in this quadrant to deliver the non-functional qualities and the expected
value. The testing activities that can be performed in this quadrant are:

· Non-functional testing such as stress testing, load testing, performance
testing, etc.
· Security testing.
· Scalability testing.
· Infrastructure testing.
· Data migration testing.

Agile Testing Life Cycle


The agile testing life cycle has 5 different phases:

1. Impact Assessment: This is the first phase of the agile testing life cycle
also known as the feedback phase where the inputs and responses are
collected from the users and stakeholders. This phase supports the test
engineers to set the objective for the next phase in the cycle.
2. Agile Testing Planning: In this phase, the developers, customers, test
engineers, and stakeholders team up to plan the testing process schedules,
regular meetings, and deliverables.
3. Release Readiness: This is the third phase in the agile testing lifecycle
where the test engineers review the features which have been created
entirely and test if the features are ready to go live or not and the features
that need to be sent again to the previous development phase.
4. Daily Scrums: This phase involves the daily morning meetings to check
on testing and determine the objectives for the day. The goals are set daily to
enable test engineers to understand the status of testing.
5. Test Agility Review: This is the last phase of the agile testing lifecycle
that includes weekly meetings with the stakeholders to evaluate and assess
the progress against the goals.

Agile Test Plan


An agile test plan includes types of testing done in that iteration like test data
requirements, test environments, and test results. In agile testing, a test plan is
written and updated for every release. The test plan includes the following:

· Test Scope.
· Testing instruments.
· Data and settings are to be used for the test.
· Approaches and strategies used to test.
· Skills required to test.
· New functionalities are being tested.
· Levels or Types of testing based on the complexity of the features.
· Resourcing.
· Deliverables and Milestones.
· Infrastructure Consideration.
· Load or Performance Testing.
· Mitigation or Risks Plan.
Benefits of Agile Testing
Below are some of the benefits of agile testing:
· Saves time: Implementing agile testing helps to make cost estimates
more transparent and thus helps to save time and money.
· Reduces documentation: It requires less documentation to execute agile
testing.
· Enhances software productivity: Agile testing helps to reduce errors,
improve product quality, and enhance software productivity.
· Higher efficiency: In agile software testing the work is divided into
small parts thus developer can focus more easily and complete one part first
and then move on to the next part. This approach helps to identify minor
inconsistencies and higher efficiency.
· Improve product quality: In agile testing, regular feedback is obtained
from the user and other stakeholders, which helps to enhance the software
product quality.
Limitations of Agile Testing
Below are some of the limitations of agile software testing:

· Project failure: In agile testing, if one or more members leave the team,
there is a chance of project failure.
· Limited documentation: In agile testing, there is little or no
documentation, which makes it difficult to predict the expected results as
there are no explicit conditions and requirements.
· Introduce new bugs: In agile software testing, bug fixes, modifications,
and releases happen repeatedly which may sometimes result in the
introduction of new bugs in the system.
· Poor planning: In agile testing, the team is not exactly aware of the end
result from day one, so it becomes challenging to predict factors like cost,
time, and resources required at the beginning of the project.
· No finite end: Agile testing requires minimal planning at the beginning
so it becomes easy to get sidetracked while delivering the new product.
There is no finite end and there is no clear vision of what the final product
will look like.

Challenges During Agile Testing


Below are some of the challenges that are faced during agile testing:

· Changing requirements: Sometimes during product development,
changes in the requirements or the specifications occur, but when they occur
near the end of the sprint, the changes are moved to the next sprint and thus
become the overhead for developers and testers.
· Inadequate test coverage: In agile testing, testers sometimes miss
critical test cases because of the continuously changing requirements and
continuous integration. This problem can be solved by keeping track of test
coverage by analyzing the agile test metrics.
· Tester’s availability: Sometimes the testers don’t have adequate skills to
perform API and Integration testing, which results in missing important test
cases. One solution to this problem is to provide training for the testers so
that they can carry out essential tests effectively.
· Less Documentation: In agile testing, there is less or no documentation
which makes the task of the QA team more tedious.
· Performance Bottlenecks: Sometimes developer builds products
without understanding the end-user requirements and following only the
specification requirements, resulting in performance issues in the product.
Using load testing tools performance bottlenecks can be identified and fixed.
· Late detection of defects: In agile testing, defects are sometimes detected
only at the testing stage or in production, which makes them very difficult to
fix.
· Skipping essential tests: In agile testing, sometimes agile testers due to
time constraints and the complexity of the test cases put some of the non-
functional tests on hold. This may cause some bugs later that may be
difficult to fix.
Risks During Agile Testing
· Automated UI slow to execute: Automated UI tests give confidence in the
testing, but they are slow to execute and expensive to build.
· Use a mix of testing types: To achieve the expected quality of the
product, a mixture of testing types and levels must be used.
· Poor automation test plan: Sometimes the automation test plan is poorly
organized and unplanned in order to save time, which results in test failures.
· Lack of expertise: Automated testing is not always the right solution, and
the team may sometimes lack the expertise needed to deliver effective
solutions.
· Unreliable tests: Fixing failing tests and resolving issues of brittle tests
should be the top priority to avoid false positives.

What is Extreme Programming (XP)?

Extreme programming (XP) is one of the most important software development
frameworks of Agile models. It is used to improve software quality and
responsiveness to customer requirements.
responsiveness to customer requirements.

Good Practices in Extreme Programming

Some of the good practices that have been recognized in the extreme programming
model and suggested to maximize their use are given below:
· Code Review: Code review detects and corrects errors efficiently. It
suggests pair programming as coding and reviewing of written code carried
out by a pair of programmers who switch their work between them every
hour.
· Testing: Testing code helps to remove errors and improves its reliability.
XP suggests test-driven development (TDD) to continually write and
execute test cases. In the TDD approach, test cases are written even before
any code is written.
· Incremental development: Incremental development is very good
because customer feedback is gained and based on this development team
comes up with new increments every few days after each iteration.
· Simplicity: Simplicity makes it easier to develop good-quality code as
well as to test and debug it.
· Design: Good quality design is important to develop good quality
software. So, everybody should design daily.
· Integration testing: It helps to identify bugs at the interfaces of different
functionalities. Extreme programming suggests that the developers should
achieve continuous integration by building and performing integration
testing several times a day.

Basic principles of Extreme programming

XP is based on the frequent iteration through which the developers implement User
Stories. User stories are simple and informal statements of the customer about the
functionalities needed. A User Story is a conventional description by the user of a
feature of the required system. It does not mention finer details such as the
different scenarios that can occur. Based on User stories, the project team proposes
Metaphors. Metaphors are a common vision of how the system would work. The
development team may decide to build a Spike for some features. A Spike is a very
simple program that is constructed to explore the suitability of a solution being
proposed. It can be considered similar to a prototype. Some of the basic activities
that are followed during software development by using the XP model are given
below:
· Coding: The concept of coding which is used in the XP model is slightly
different from traditional coding. Here, the coding activity includes drawing
diagrams (modeling) that will be transformed into code, scripting a web-
based system, and choosing among several alternative solutions.
· Testing: The XP model gives high importance to testing and considers it
to be the primary factor in developing fault-free software.
· Listening: The developers need to listen carefully to the customers if
they are to develop good-quality software. Sometimes programmers may
not have in-depth knowledge of the system to be developed. So, the
programmers should properly understand the functionality of the system,
and for that they have to listen to the customers.
· Designing: Without a proper design, a system implementation becomes
too complex, the solution becomes very difficult to understand, and
maintenance becomes expensive. A good design results in the elimination of
complex dependencies within a system. So, effective use of suitable design
is emphasized.
· Feedback: One of the most important aspects of the XP model is to gain
feedback to understand the exact customer needs. Frequent contact with the
customer makes the development effective.
· Simplicity: The main principle of the XP model is to develop a simple
system that will work efficiently in the present time, rather than trying to
build something that would take time and may never be used. It focuses on
some specific features that are immediately needed, rather than engaging
time and effort on speculations of future requirements.
· Pair Programming: XP encourages pair programming where two
developers work together at the same workstation. This approach helps in
knowledge sharing, reduces errors, and improves code quality.
· Continuous Integration: In XP, developers integrate their code into a
shared repository several times a day. This helps to detect and resolve
integration issues early on in the development process.
· Refactoring: XP encourages refactoring, which is the process of
restructuring existing code to make it more efficient and maintainable.
Refactoring helps to keep the codebase clean, organized, and easy to
understand.
· Collective Code Ownership: In XP, there is no individual ownership of
code. Instead, the entire team is responsible for the codebase. This approach
ensures that all team members have a sense of ownership and responsibility
towards the code.
· Planning Game: XP follows a planning game, where the customer and
the development team collaborate to prioritize and plan development tasks.
This approach helps to ensure that the team is working on the most
important features and delivers value to the customer.
· On-site Customer: XP requires an on-site customer who works closely
with the development team throughout the project. This approach helps to
ensure that the customer’s needs are understood and met, and also facilitates
communication and feedback.
Applications of Extreme Programming (XP)
Some of the projects that are suitable to develop using the XP model are given
below:
· Small projects: The XP model is very useful in small projects with
small teams, as face-to-face meetings are easier to arrange.
· Projects involving new technology or research projects: These
projects face rapidly changing requirements and technical problems, so the
XP model is well suited to them.
· Web development projects: The XP model is well-suited for web
development projects as the development process is iterative and requires
frequent testing to ensure the system meets the requirements.
· Collaborative projects: The XP model is useful for collaborative
projects that require close collaboration between the development team and
the customer.
· Projects with tight deadlines: The XP model can be used in projects
that have a tight deadline, as it emphasizes simplicity and iterative
development.
· Projects with rapidly changing requirements: The XP model is
designed to handle rapidly changing requirements, making it suitable for
projects where requirements may change frequently.
· Projects where quality is a high priority: The XP model places a
strong emphasis on testing and quality assurance, making it a suitable
approach for projects where quality is a high priority.
Extreme Programming (XP) is an Agile software development methodology that
focuses on delivering high-quality software through frequent and continuous
feedback, collaboration, and adaptation. XP emphasizes a close working
relationship between the development team, the customer, and stakeholders, with
an emphasis on rapid, iterative development and deployment.

Agile development approaches evolved in the 1990s as a reaction to documentation
and bureaucracy-based processes, particularly the waterfall approach. Agile
approaches are based on some common principles, some of which are:
1. Working software is the key measure of progress in a project.
2. Therefore, for progress in a project, software should be developed and
delivered rapidly in small increments.
3. Even late changes in the requirements should be entertained.
4. Face-to-face communication is preferred over documentation.
5. Continuous feedback and involvement of customers are necessary for
developing good-quality software.
6. A simple design that evolves and improves with time is a better approach
than an elaborate up-front design that tries to handle all possible scenarios.
7. The delivery dates are decided by empowered teams of talented
individuals.

Extreme programming is one of the most popular and well-known approaches in
the family of agile methods. An XP project starts with user stories, which are short
descriptions of what scenarios the customers and users would like the system to
support. Each story is written on a separate card, so they can be flexibly grouped.

XP, and other agile methods, are suitable for situations where the volume and
pace of requirements change are high and where requirement risks are
considerable.

Extreme Programming practices
· Continuous Integration: Code is integrated and tested frequently, with
all changes reviewed by the development team.
· Test-Driven Development: Tests are written before code is written, and
the code is developed to pass those tests.
· Pair Programming: Developers work together in pairs to write code and
review each other’s work.
· Continuous Feedback: Feedback is obtained from customers and
stakeholders through frequent demonstrations of working software.
· Simplicity: XP prioritizes simplicity in design and implementation, to
reduce complexity and improve maintainability.
· Collective Ownership: All team members are responsible for the code,
and anyone can make changes to any part of the codebase.
· Coding Standards: Coding standards are established and followed to
ensure consistency and maintainability of the code.
· Sustainable Pace: The pace of work is maintained at a sustainable level,
with regular breaks and opportunities for rest and rejuvenation.
· XP is well-suited to projects with rapidly changing requirements, as it
emphasizes flexibility and adaptability. It is also well-suited to projects with
tight timelines, as it emphasizes rapid development and deployment.
· Refactoring: Code is regularly refactored to improve its design and
maintainability, without changing its functionality.
· Small Releases: Software is released in small increments, allowing for
frequent feedback and adjustments based on that feedback.
· Customer Involvement: Customers are actively involved in the
development process, providing feedback and clarifying requirements.
· On-Site Customer: A representative from the customer’s organization is
present with the development team to provide continuous feedback and
answer questions.
· Short Iterations: Work is broken down into short iterations, usually one
to two weeks in length, to allow for rapid development and frequent
feedback.
· Planning Game: The team and customer work together to plan and
prioritize the work for each iteration, to deliver the most valuable features
first.
· Metaphor: A shared metaphor is used to guide the design and
implementation of the system.
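The Test-Driven Development practice listed above can be sketched with a minimal example. This is a hypothetical illustration (the `fizzbuzz` function and its tests are not from the text): the tests are written first and would initially fail, then just enough code is written to make them pass.

```python
import unittest

# Step 1 (hypothetical TDD example): write the tests first.
# Running them at this point would fail, because fizzbuzz() does not exist yet.
class TestFizzBuzz(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 2: write just enough code to make the tests pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the tests and confirm they pass; then refactor if needed,
# re-running the tests after each change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFizzBuzz)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The cycle of "failing test, minimal code, passing test, refactor" is what keeps the test suite growing alongside the code, which is also what makes the Continuous Integration and Refactoring practices above safe to apply.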
Advantages of Extreme Programming (XP)
· Slipped schedules: Short, achievable development cycles ensure timely
delivery.
· Misunderstanding the business and/or domain: Including the customer on
the team ensures constant communication and clarification.
· Canceled projects: A focus on ongoing customer engagement guarantees
open communication with the customer and prompt problem-solving.
· Staff turnover: Intensive, collaborative teamwork builds enthusiasm
and goodwill, and multidisciplinary cohesion fosters team spirit.
· Costs incurred in changes: Extensive and continuing testing ensures
that the modifications do not impair the functioning of the system. A
functioning system always guarantees that there is enough time to
accommodate changes without impairing ongoing operations.
· Business changes: Changes are accepted at any moment since they are
seen to be inevitable.
· Production and post-delivery defects: The emphasis on unit tests helps
find and repair bugs as early as possible.
