Software Testing UNIT-4
Software test automation makes use of specialized tools to control the execution
of tests and compare the actual results against the expected results. Usually
regression tests, which are repetitive, are automated.
Testing tools not only help us perform regression tests but also help us
automate test data set-up and generation, product installation, GUI interaction,
defect logging, etc.
Types of Frameworks:
Typically, there are 4 test automation frameworks that are adopted while
automating the applications.
Testing Tools:
Tools from a software testing context can be defined as a product that supports one
or more test activities right from planning, requirements, creating a build, test
execution, defect logging and test analysis.
Classification of Tools
Types of Tools:
        A tool that analyzes source code without executing the code. Static code
        analyzers are designed to review bodies of source code (at the programming
        language level) or compiled code (at the machine language level) to identify
        poor coding practices. Static code analyzers provide feedback to developers
        during the code development phase on security flaws that might be
        introduced into code.
 Static analysis, also called static code analysis, is a method of computer program
debugging that is done by examining the code without executing the program. The
process provides an understanding of the code structure and can help ensure that
the code adheres to industry standards. Static analysis is used in software
engineering by software development and quality assurance teams. Automated
tools can assist programmers and developers in carrying out static analysis. The
software will scan all code in a project to check for vulnerabilities while validating
the code.
      ●   Programming errors
      ●   Coding standard violations
      ●   Undefined values
      ●   Syntax violations
      ●   Security vulnerabilities
The static analysis process is also useful for addressing weaknesses in source code
that could lead to buffer overflows -- a common software vulnerability.
The static analysis process is relatively simple, as long as it's automated. Generally,
static analysis occurs before software testing, in early development. In the DevOps
development practice, it occurs in the create phase.
Once the code is written, a static code analyzer should be run to look over the
code. It will check against defined coding rules from standards or custom
predefined rules. Once the code is run through the static code analyzer, the
analyzer will have identified whether or not the code complies with the set rules. It
is sometimes possible for the software to flag false positives, so it is important for
someone to go through and dismiss any. Once false positives are waived,
developers can begin to fix any apparent mistakes, generally starting from the most
critical ones. Once the code issues are resolved, the code can move on to testing
through execution.
Without having code testing tools, static analysis will take a lot of work, since
humans will have to review the code and figure out how it will behave in runtime
environments. Therefore, it's a good idea to find a tool that automates the process.
Getting rid of any lengthy processes will make for a more efficient work
environment.
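The workflow above can be sketched with Python's built-in ast module, which parses source code without ever running it. This is a minimal illustration only, not a production analyzer, and the single rule it checks (a parameter-count limit) is invented for the example:

```python
import ast

# One invented rule for illustration: flag functions with more than 3 parameters.
MAX_PARAMS = 3

def analyze(source: str) -> list:
    """Statically scan source code (never executed) and report rule violations."""
    findings = []
    tree = ast.parse(source)            # parse only -- the code is not run
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            n = len(node.args.args)
            if n > MAX_PARAMS:
                findings.append(
                    f"line {node.lineno}: '{node.name}' has {n} parameters "
                    f"(limit {MAX_PARAMS})")
    return findings

code = "def report(a, b, c, d):\n    return a + b + c + d\n"
for finding in analyze(code):
    print(finding)
```

Real static analyzers apply many such rules at once and, as described above, report each finding back to the developer before the code is ever executed.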
There are several static analysis methods an organization could use. In a broader
sense, with less official categorization, static analysis can be broken into formal,
cosmetic, design properties, error checking and predictive categories.
Formal asks whether the code is correct; cosmetic, whether the code matches
style standards; design properties, the level of complexity; error checking, which
looks for code violations; and predictive, which asks how the code will behave
when run.
Benefits of using static analysis include:
However, static analysis comes with some drawbacks. For example, organizations
should stay aware of the following:
The principal advantage of static analysis is the fact that it can reveal errors that do
not manifest themselves until a disaster occurs weeks, months or years after
release. Nevertheless, static analysis is only a first step in a comprehensive
software quality-control regime. After static analysis has been done, dynamic
analysis is often performed in an effort to uncover subtle defects or vulnerabilities.
In computer terminology, static means fixed, while dynamic means capable of
action and/or change. Dynamic analysis involves the testing and evaluation of a
program based on execution. Static and dynamic analysis, considered together, are
sometimes referred to as glass-box testing.
A test case is a document, which has a set of test data, preconditions, expected
results and postconditions, developed for a particular test scenario in order to
verify compliance against a specific requirement.
A test case acts as the starting point for test execution; after a set of input
values is applied, the application has a definitive outcome and leaves the system at
some end point, also known as the execution postcondition.
   ●   Test Case ID
   ●   Test Scenario
   ●   Test Case Description
   ●   Test Steps
   ●   Prerequisite
   ●   Test Data
   ●   Expected Result
   ●   Test Parameters
   ●   Actual Result
   ●   Environment Information
   ●   Comments
Example:
Let us say that we need to check an input field that can accept a maximum of 10
characters.
While developing the test cases for the above scenario, they are documented as
follows: the first case, where a 10-character value is correctly accepted, is a PASS
scenario, while the second, where an 11-character value is not rejected, is a FAIL.
If the expected result doesn't match the actual result, then we log a defect. The
defect goes through the defect life cycle, and the testers verify it after the fix.
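As a sketch, the pass/fail cases for the 10-character field above can also be written as an executable test. The accepts_input function here is hypothetical, standing in for the application under test:

```python
import unittest

def accepts_input(value: str) -> bool:
    """Hypothetical application logic: the field accepts at most 10 characters."""
    return len(value) <= 10

class MaxLengthFieldTest(unittest.TestCase):
    def test_exactly_10_characters_is_accepted(self):
        # Expected result: input at the boundary passes
        self.assertTrue(accepts_input("a" * 10))

    def test_11_characters_is_rejected(self):
        # Expected result: input over the limit fails
        self.assertFalse(accepts_input("a" * 11))

if __name__ == "__main__":
    suite = unittest.TestLoader().loadTestsFromTestCase(MaxLengthFieldTest)
    unittest.TextTestRunner().run(suite)
```

A failing assertion here corresponds to logging a defect: the expected result did not match the actual result.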
1. Deriving test cases directly from a requirement specification or black box test
design technique. The techniques include:
      ·      Boundary Value Analysis (BVA)
      ·      Equivalence Partitioning (EP)
      ·      Decision Table Testing
      ·      State Transition Diagrams
      ·      Use Case Testing
2. Deriving test cases from the structure of the software (white box test design
techniques):
      ·      Statement Coverage
      ·      Branch Coverage
      ·      Path Coverage
      ·      LCSAJ Testing
3. Deriving test cases based on the tester's experience:
      ·      Error Guessing
      ·      Exploratory Testing
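For instance, Boundary Value Analysis for the 10-character field discussed earlier picks values at and just around each limit. A small sketch of deriving those values (the valid length range 1..10 is taken from the earlier example):

```python
# Boundary Value Analysis sketch: for a field whose valid length is min_len..max_len,
# test at, just below, and just above each boundary.
def boundary_values(min_len: int, max_len: int) -> list:
    values = {min_len - 1, min_len, min_len + 1,
              max_len - 1, max_len, max_len + 1}
    # Negative lengths are not meaningful inputs, so drop them.
    return sorted(v for v in values if v >= 0)

print(boundary_values(1, 10))   # lengths worth testing: [0, 1, 2, 9, 10, 11]
```

Each resulting length then becomes one test case, with the expected result (accept or reject) taken from the specification.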
Test strategy, also known as test approach, defines how testing will be carried
out. The test approach has two techniques:
There are many strategies that a project can adopt depending on the context and
some of them are:
Factors to be considered:
      ·       Risks of the product, risk of failure, the environment, and the
          company
      ·       Expertise and experience of the people in the proposed tools and
          techniques
      ·       Regulatory and legal aspects, such as external and internal regulations
          of the development process
      ·       The nature of the product and the domain
Test Case Generators:
   ● Automates Test Creation: It makes test cases without manual work. This
       saves time and effort.
   ●   Improves Test Coverage: It covers more parts of the software. This finds
       more bugs.
   ●   Reduces Human Error: Less manual work means fewer mistakes in tests.
   ●   Adapts to Changes: It can quickly make new tests when software changes.
   ●   Efficient Testing: It speeds up the testing process. This helps software reach
       users faster.
Test Case Generators are vital for quality software. They make testing efficient and
thorough. This is key for any software project.
Using a Test Case Generator can transform software testing. It brings efficiency
and accuracy. Here's why it's a smart choice:
   1. Time and Resource Saving: A Test Case Generator saves time. It creates test
      cases quickly. This means testers spend less time on routine tasks. They can
      focus on more complex issues. It also reduces the need for a large testing
      team. This saves resources.
   2. Consistent Quality: Manual test case creation can vary in quality. A Test
      Case Generator offers consistency. Every test case follows the same high
      standards. This means reliable results every time. Consistent quality is
      crucial for trustworthy software.
   3. Comprehensive Coverage: Manual testing might miss some areas. A Test
      Case Generator covers more ground. It checks every part of the software.
      This thorough approach finds more bugs. It ensures the software is robust
      and user-friendly.
   4. Adaptability to Changes: Software often changes during development. A
      Test Case Generator adapts quickly. It can create new test cases to match
      these changes. This keeps the testing process up-to-date. It ensures the final
      product meets all requirements.
How a Test Case Generator works is key in software testing. It's a tool that
automates and simplifies the process. Here's a breakdown:
   1. Input Analysis: The generator starts by analyzing inputs. These inputs are
      software requirements and specifications. It understands what the software
      should do. This step is crucial. It sets the foundation for relevant test cases.
   2. Test Case Design: Based on the analysis, the generator designs test cases. It
      uses algorithms to create scenarios. These scenarios test different aspects of
      the software. The goal is to cover all functionalities. This step ensures
      comprehensive testing.
   3. Output Generation: After designing, the generator produces test cases. These
      are detailed instructions for testing. They include steps to follow, expected
      outcomes, and test data. This output is ready for testers to use. It makes their
      work easier and more focused.
   4. Maintenance and Updates: Software changes over time. The Test Case
      Generator adapts to these changes. It updates test cases to match new
      requirements. This keeps the testing process relevant. It ensures ongoing
      quality control.
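The four stages above can be sketched end to end as a toy generator. The "specification" format below is invented for this sketch; a real tool would analyze actual requirement documents:

```python
# Toy test case generator: input analysis -> test case design -> output generation.
# The specification format below is invented for this sketch.
spec = {"field": "username", "min_len": 3, "max_len": 12}

def generate_test_cases(spec: dict) -> list:
    cases = []
    # Design step: one case per boundary of the analyzed specification.
    for length, expect in [(spec["min_len"] - 1, "reject"),
                           (spec["min_len"], "accept"),
                           (spec["max_len"], "accept"),
                           (spec["max_len"] + 1, "reject")]:
        cases.append({
            "id": f"TC-{spec['field']}-{length:02d}",
            "steps": f"Enter a {length}-character value into '{spec['field']}'",
            "expected": expect,
        })
    return cases

# Output step: detailed, ready-to-run instructions for testers.
for case in generate_test_cases(spec):
    print(case["id"], "-", case["expected"])
```

The maintenance stage corresponds to re-running the generator whenever the spec dictionary changes: the test cases are regenerated to match the new requirements.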
Creating a test case with the Onethread Generator is straightforward. Here's how to
do it:
By following these steps, you can efficiently create comprehensive test cases with
the Onethread Generator. This tool simplifies and streamlines the test case creation
process.
Test cases are vital in software development. They ensure the software works well.
Here's why they are important:
Advantages
   ● Efficiency and time-savings: The TestCase Generator from MicroNova
      automates the process of generating test cases, thereby increasing efficiency
      in safeguarding electronic control units many times over. More results are
      available more quickly.
   ● Quality assurance: Using the TCG for all test runs allows their results to be
      checked uniformly, and their progression to be reliably tracked.
      Standardizing test specifications will ensure consistently high quality in
      future.
   ● Flexibility and performance: The generator approach allows test departments
      to respond quickly to the rocketing rate of change in functional development.
   ● Simple traceability: It is sufficient to check the test specification. An
      additional review of the implementation is no longer necessary.
   ● Optimum resource utilization: Thanks to automation, test engineers can
      spend more time on particularly complex cases.
   ● Quick commissioning with low costs: The TCG can be seamlessly integrated
      into EXAM, so no additional cost-intensive interfaces are required.
How it works
Apart from these test case generation approaches, there are multiple other
processes available in the testing world. But whatever the approach, a proper test
case generation process is one of the most critical factors for successful project
implementation.
GUI capture/replay tools have been developed for testing applications against their
graphical user interfaces. Using a capture and replay tool, testers can run an
application and record the interaction between a user and the application. The
script is recorded with all user actions, including mouse movements, and the tool
can then automatically replay the exact same interactive session any number of
times without requiring human intervention. This supports fully automatic
regression testing of graphical user interfaces.
   ● Jubula: offers support for a variety of platforms like Swing, JavaFX, HTML,
      SWT (Standard Widget Toolkit) and Eclipse. It has the feature to record and
      play back tests that are simple to maintain, and is intended for both testers
      and developers. It allows regression and load tests for Java Swing, Eclipse
      plug-ins and Web GUI applications. The former version of Jubula was
      GUIdancer; they are functionally the same.
   ● Abbot: allows one to write Java-based unit test cases for GUIs. It offers
      APIs which are quite simple to read and write. It has a script editor named
      Costello that allows one to write manual test scripts as well as facilitating
      the recording of test execution, that is, capturing the sequence of events
      that have taken place.
   ● Jacareto: can capture events on components in a GUI, replay action-specific
      semantic events, etc. Jacareto has two front-end applications, CleverPHL
      and Picorder; the former has the capability to record, edit and replay user
      interaction scripts.
   ● Marathon: also has the facility to record and replay; its interaction scripts
      enable GUI testing. It allows developers to write GUI Java tests and test
      case methods. The recording feature has been added in recent versions. It
      does not store interaction scripts as XML files, unlike Abbot and Jacareto.
   ● The output of Capture and Replay is a script that documents both the test
      process and the test parameters.
   ● The scripts, which are usually defined and editable in XML formats, can be
      replayed to check whether deviations or errors occur. Output formats,
      database contents or GUI states are checked and the results documented
      accordingly.
   ● Test scenarios can be easily reproduced by repeatedly replaying (mode) the
      previously recorded scripts. Individual elements of the user interface are also
      recognised if their position or shape has changed. This works because the
      user input in capture mode, for example, not only saves the behavior of the
      mouse pointer, but also records the corresponding object ID at the same
      time.
Capture and replay tools were developed to test applications against their graphical
user interfaces. With a capture and replay tool, an application can be tested by
replaying recorded interaction sessions without human intervention. This saves
time and effort while providing valuable insights.
The recovery of the system from such a phase (after stress) is very critical, as it is
highly likely to happen in a production environment.
Stress testing is an important part of the software testing process. It assesses the
systems capacity to handle a substantial workload and identifies any potential
issues that might arise under demanding conditions.
Typically, stress tests are conducted on applications or systems in order to
determine their performance levels under various loading scenarios. It helps to
figure out where there are problems and areas for improvement before it is released
into production.
The comprehension of the necessity for stress testing is imperative for the upkeep
of high-quality software products, as it provides valuable insights into the
applications performance in challenging environments.
Prerequisite –
Stress Testing Process:
      1. Planning the stress test: This step involves gathering the system data,
          analyzing the system, and defining the stress test goals.
      2. Create Automation Scripts: This step involves creating the stress testing
          automation scripts and generating the test data for the stress test
          scenarios.
      3. Script Execution: This step involves running the stress test automation
          scripts and storing the stress test results.
      4. Result Analysis: This phase involves analyzing stress test results and
          identifying the bottlenecks.
      5. Tweaking and Optimization: This step involves fine-tuning the system
          and optimizing the code with the goal of meeting the desired
          benchmarks.
Metrics are used to evaluate the performance of the stress test and are usually
collected at the end of the stress scripts or tests. Some of the metrics are given
below.
      1. Pages Per Second: Number of pages requested per second and number of
          pages loaded per second.
      2. Pages Retrieved: Average time taken to retrieve all information from a
          particular page.
      3. Byte Retrieved: Average time taken to retrieve the first byte of
          information from the page.
      4. Transaction Response Time: Average time taken to load or perform
          transactions between the applications.
      5. Transactions per Second: It takes count of the number of transactions
          loaded per second successfully and it also counts the number of failures
          that occurred.
      6. Failure of Connection: It takes count of the number of times that the
          client faced connection failure in their system.
      7. Failure of System Attempts: It takes count of the number of failed
          attempts in the system.
      8. Rounds: It takes count of the number of test or script conditions executed
         by the clients successfully and it keeps track of the number of rounds
         failed.
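A few of these metrics can be computed directly from a recorded run. A minimal sketch with made-up sample data (the log format is invented for the example):

```python
# Sample stress-run log (invented data): each entry is one transaction.
results = [
    {"ok": True,  "seconds": 0.8},
    {"ok": True,  "seconds": 1.1},
    {"ok": False, "seconds": 5.0},   # failed transaction
    {"ok": True,  "seconds": 0.9},
]
window_seconds = 2.0                  # length of the measurement window

successes = [r for r in results if r["ok"]]
tps = len(successes) / window_seconds             # Transactions per Second
failures = len(results) - len(successes)          # failed attempts
avg_response = sum(r["seconds"] for r in successes) / len(successes)

print(f"transactions/sec: {tps}")     # 3 successes over a 2s window -> 1.5
print(f"failures: {failures}")
print(f"avg transaction response: {avg_response:.2f}s")
```

Pages per second, bytes retrieved, and connection failures follow the same pattern: count or average the relevant field over the measurement window.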
As the name suggests, the Client-Server application consists of two systems, one is
the Client and the other is the Server. Here, the client and server interact with each
other over the computer network.
In Client-Server application testing, the client sends requests to the server for
specific information and the server sends the response back to the client with the
requested information. Hence, this testing is also known as two-tier application
testing.
The picture below depicts what the Client-Server application looks like:
This type of testing is usually done for 2 tier applications (usually developed for
LAN). We will be having Front-end and Backend here.
Applications launched on the front-end will have forms and reports which will be
monitoring and manipulating data.
For Example, applications developed in VB, VC++, Core Java, C, C++, D2K,
PowerBuilder, etc., The backend for these applications would be MS Access, SQL
Server, Oracle, Sybase, MySQL, and Quadbase.
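The request/response interaction being tested can be sketched with a minimal in-process client and server using Python sockets. This is a toy two-tier exchange, not a real application; the request text is invented:

```python
import socket
import threading

# Toy server socket bound first so the client cannot connect too early.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_one() -> None:
    """Toy server: receive one request, answer with the requested info."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"response to: {request}".encode())
    srv.close()

threading.Thread(target=serve_one, daemon=True).start()

# Client side: send a request and verify the response -- the core check
# in two-tier (client-server) application testing.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"GET user 42")
    reply = cli.recv(1024).decode()
print(reply)                          # response to: GET user 42
```

The testing task is to assert that the reply matches what the requirements specify for each request the client can send.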
   ● Distributed Components: The application consists of two
      components—the client, which runs on the user’s device, and the server,
      hosted on remote hardware. Testing involves evaluating the interaction
      between these distributed components.
   ● Communication Over a Network: Clients and servers communicate over a
      network. Data exchange between the client and server is a key concern;
      testing validates that the data sent and received is correct, complete, and
      secure.
   ● Caching and Messaging: Depending on the architecture, client-server
      applications may also rely on caching and messaging, which need to be
      tested.
   ● Unit Testing: Evaluate individual client and server components to verify that
      each works correctly in isolation.
   ● Data Validation Testing: Check that data sent and received between the
      client and server is accurate and complete.
   ● Recovery Testing: Evaluate how well the system can recover without data
      loss.
   ● Redundancy Testing: Verify that redundant server setups work as intended.
   ● Usability Testing: Evaluate the client application’s user interface and overall
      user experience.
   ● Load Balancer Testing: Test load balancers to ensure they distribute client
      requests effectively.
Message Queue Testing
   ● Validate the reliability and efficiency of message queues (e.g., RabbitMQ) in
      client-server communication.
Manual testing is a fundamental testing approach where human testers execute test
cases without the use of automation tools or scripts. It relies on human intuition
and expertise to evaluate an application’s functionality, user interface, and overall
quality. Manual testing encompasses several types:
   ● Functional Testing: This involves testers verifying that an application’s
      features behave as specified in the requirements.
Automated testing involves the use of specialized tools and scripts to perform
testing tasks automatically. It is especially valuable for repetitive or complex test
scenarios. Automated testing offers various advantages, such as consistency,
repeatability, and the ability to perform tests quickly and efficiently.
Example: In a client-server application, automated functional testing could involve
using a tool like Testsigma to create and execute test scripts that validate the
registration process. These scripts can simulate user actions such as filling out a
registration form, submitting it and verifying that the user’s data is correctly stored
on the server.
Automated Testing can help you save tons of testing time.
Black-Box Testing
White-box testing is an approach where testers have access to the internal code and
structure of the application. They design tests to evaluate the correctness of the
code, its logic, and the execution paths within the application. White-box testing
aims to uncover defects in the code’s implementation.
Example: In a client-server application, white-box testing might involve code
reviews and static code analysis of the server-side components to identify potential
vulnerabilities or code quality issues. Testers may inspect the server’s code to
ensure that it handles user authentication securely and adheres to coding standards.
Mocking and Simulation
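When the real server is unavailable, its behaviour can be simulated so that client logic remains testable. A minimal sketch using Python's unittest.mock; the fetch_user client function and its /users/... path are hypothetical, invented for this example:

```python
from unittest.mock import Mock

def fetch_user(server, user_id: int) -> str:
    """Hypothetical client logic that asks the server for a user's name."""
    response = server.get(f"/users/{user_id}")
    if response["status"] != 200:
        raise RuntimeError("server error")
    return response["name"]

# The Mock stands in for the real server: no network or backend needed.
server = Mock()
server.get.return_value = {"status": 200, "name": "alice"}

print(fetch_user(server, 42))                 # alice
server.get.assert_called_once_with("/users/42")
```

Simulating error responses is just as easy: set return_value to a non-200 status and assert that the client raises, which is hard to arrange reliably against a live server.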
1. Compiler
The language processor that reads the complete source program written in a high-
level language as a whole in one go and translates it into an equivalent program in
machine language is called a compiler. Examples: C, C++, C#.
In a compiler, the source code is translated to object code successfully if it is free
of errors. If there are errors in the source code, the compiler specifies them at the
end of compilation, with line numbers. The errors must be removed before the
compiler can successfully recompile the source code. Once compiled, the object
program can be executed any number of times without translating it again.
2. Assembler
The assembler is used to translate a program written in assembly language into
machine code. The source program, containing assembly language instructions, is
the input of the assembler. The output generated by the assembler is the object
code, or machine code, understandable by the computer. The assembler is
basically the first interface that enables humans to communicate with the machine:
we need an assembler to fill the gap between human and machine. Code written in
assembly language consists of mnemonics (instructions) such as ADD, MUL,
SUB, DIV and MOV, and the assembler converts these mnemonics into binary
code. These mnemonics depend upon the architecture of the machine.
For example, the architectures of the Intel 8085 and Intel 8086 are different.
3. Interpreter
A language processor that translates a single statement of the source program into
machine code and executes it immediately, before moving on to the next line, is
called an interpreter. If there is an error in a statement, the interpreter terminates
its translating process at that statement and displays an error message; it moves on
to the next line for execution only after the error is removed. An interpreter
directly executes instructions written in a programming or scripting language
without previously converting them to object code or machine code: it translates
one line at a time and then executes it. Examples: Perl, Python and MATLAB.
Compiler vs. Interpreter:
      ·      A compiler translates the whole program in one go; an interpreter
             translates and executes one line at a time.
      ·      A compiler reports all errors at the end of compilation, with line
             numbers; an interpreter stops at the first erroneous statement.
      ·      Once compiled, the object program runs repeatedly without
             retranslation; an interpreted program is translated every time it runs.
      ·      Examples: C, C++, C# (compiled); Perl, Python, MATLAB
             (interpreted).
Web-Enabled Applications
Web app testing is a software testing practice that ensures the application's
functionality and quality as per the requirements. Before delivery, web testing must
identify all underlying issues, such as security breaches, integration issues,
functional inconsistencies, environmental challenges, or traffic load.
Web testing is a software testing technique to test web applications or websites for
finding errors and bugs. A web application must be tested properly before it goes
to the end-users. Also, testing a web application does not only mean finding
common bugs or errors but also testing the quality-related risks associated with the
application. Software Testing should be done with proper tools and resources and
should be done effectively. We should know the architecture and key areas of a
web application to effectively plan and execute the testing.
Testing a web application covers many of the same areas as testing any other
application, such as functionality, configuration, or compatibility. In addition, it
includes analysis of web-specific faults compared to general software faults. Web
applications are required to be tested on different browsers and platforms so that
we can identify the areas that need special focus while testing a web application.
Types of Web Testing:
Basically, there are 4 types of web-based testing that are available and all four of
them are discussed below:
   ·       Static Website Testing:
      A static website is a type of website in which the content shown is exactly
      the same as it is stored on the server. This type of website may have a great
      UI but does not have any dynamic feature that a user or visitor can use. In
      static testing, we generally focus on testing the UI, as it is the most
      important part of a static website. We check things like font size, color,
      spacing, etc. Testing also includes checking the contact-us form, verifying
      URLs or links that are used in the website, etc.
   ·       Dynamic Website Testing:
      A dynamic website is a type of website that consists of both a frontend, i.e.,
      the UI, and a backend, such as a database. This type of website gets updated
      or changed regularly as per the users’ requirements. Here there are a lot of
      functionalities involved: what a button does when it is pressed, whether
      error messages are shown properly at the defined time, etc. We also check
      whether the backend is working properly, e.g., whether data entered in the
      GUI or frontend gets updated in the database.
   ·       E-Commerce Website Testing:
      An e-commerce website is very difficult to maintain as it consists of many
      different pages and functionalities. In this testing, the tester or developer
      has to check various things, like whether the shopping cart is working as
      per the requirements, and whether user registration and login functionality
      work properly. The most important things to test are whether a user can
      successfully make a payment and whether the website is secure. There are
      many more things that a tester needs to test apart from these.
   ·       Mobile-Based Web Testing: In this testing, the developer or tester
      basically checks the website compatibility on different devices and generally
      on mobile devices because many of the users open the website on their
      mobile devices. So, keeping that thing in mind, we must check that the site
      is responsive on all devices or platforms.
Points to be Considered While Testing a Website:
As the website consists of a frontend, backend, and servers, so things like HTML
pages, internet protocols, firewalls, and other applications running on the servers
should be considered while testing a website. There are various examples of
considerations that need to be checked while testing a web application. Some of
them are:
   ·    Do all pages have valid internal and external links or URLs?
   ·    Is the website working as per the system compatibility requirements?
   ·    As per the user interface: are the display sizes optimal and the best fit for
      the website?
   ·    What type of security does the website need (if unsecured)?
   ·    What are the requirements for getting website analytics, and for
      controlling graphics, URLs, etc.?
   ·    Should a contact-us or customer assistance feature be added to the page?
Objectives of Web Based Testing:
   ·       Testing for Functionality: Make sure that the web application performs as
        expected for all features and functions. Check that user interface elements
        like form submissions and navigation work as intended.
   ·       Testing for Compatibility: To make sure it is compatible, test the web
       application across a variety of devices, operating systems, and browsers.
       Verify that the program operates consistently in a range of settings.
   ·       Evaluation of Performance: Analyze the online application’s overall
       performance, speed, and responsiveness. Any performance bottlenecks, such
       as slow page loads or delayed server response times, should be located and
       fixed.
   ·       Testing for load: Examine how well the web application can manage a
       particular load or multiple user connections at once. Determine and fix
       performance problems when there is a lot of traffic.
   ·       Testing for accessibility: Make sure the online application complies with
       applicable accessibility standards (e.g., WCAG) and is usable by people
       with disabilities. Make sure the program can communicate with assistive
       technologies efficiently.
   ·       Testing Across Browsers: Make sure the operation and appearance of the
       web application are consistent by testing it in various web browsers.
      Determine and fix any problems that might develop with a particular
      browser.
Steps in Software Testing:
There is a total of 11 steps in software testing. You can read all of them from the
article “General Steps of Software Testing Process”. In web-based testing, various
areas have to be tested for finding the potential errors and bugs, and steps for
testing a web app are given below:
   ·     App Functionality: In web-based testing, we have to check the specified
      functionality, features, and operational behavior of a web application to
      ensure they correspond to its specifications. For example: testing all the
      mandatory fields, testing that the asterisk sign displays for all mandatory
      fields, and testing that the system does not display an error message for
      optional fields. Links, such as external links, internal links, anchor links,
      and mailing links, should also be checked properly; any damaged link
      should be fixed or removed. We can do this with the help of functional
      testing, in which we test the app’s functional requirements and
      specifications.
   ·     Usability: While testing usability, the developers face issues with
     scalability and interactivity. As different numbers of users will be using the
     website, it is the responsibility of developers to make a group for testing the
     application across different browsers by using different hardware. For
     example, Whenever the user browses an online shopping website, several
     questions may come to his/her mind like, checking the credibility of the
     website, testing whether the shipping charges are applicable, etc.
   ·     Browser Compatibility: For checking the compatibility of the website to
     work the same in different browsers we test the web application to check
     whether the content that is on the website is being displayed correctly across
     all the browsers or not.
   ·     Security: Security plays an important role in every website that is
      available on the internet. As part of security testing, the testers verify
      that unauthorized access to secure pages is not permitted and that
      confidential files cannot be downloaded without proper access.
   ·    Load Issues: We perform this testing to check the behavior of the system
      under a specific load, measuring important transactions while the load on
      the database, the application server, etc. is also monitored.
   ·    Storage and Database: Testing the storage or the database of any web
      application is also an important activity, and we must make sure that the
      database is properly tested. We test things like errors raised while
      executing DB queries, the response time of a query, and whether the data
      retrieved from the database is shown correctly on the website.
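The link checks described in the functionality step above can be partially automated. The sketch below is a minimal illustration (the sample HTML and the `site_host` value are invented for the example) that uses Python's standard library to collect links from a page and classify them as internal, external, anchor, or mailto links; detecting damaged links would additionally require issuing an HTTP request for each URL.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify(href, site_host="example.com"):
    """Classify a link as anchor, mailto, external, or internal."""
    if href.startswith("#"):
        return "anchor"
    if href.startswith("mailto:"):
        return "mailto"
    host = urlparse(href).netloc
    if host and host != site_host:
        return "external"
    return "internal"

# Invented sample page for the demonstration.
html = """
<a href="/cart">Cart</a>
<a href="#top">Top</a>
<a href="mailto:support@example.com">Support</a>
<a href="https://partner.org/deals">Deals</a>
"""
collector = LinkCollector()
collector.feed(html)
for link in collector.links:
    print(link, "->", classify(link))
```

A real test would follow this classification with an HTTP HEAD request per link and flag any non-2xx response as a damaged link.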
Ad Hoc Tests are done after formal testing is performed on the application. Ad
hoc methods are the least formal type of testing, as they do NOT follow a
structured approach. Hence, defects found using this method are hard to replicate,
as no test cases are aligned to those scenarios.
Testing is carried out using the tester's knowledge of the application, and the
tester tests randomly without following the specifications/requirements. Hence the
success of ad hoc testing depends upon the capability of the tester who carries out
the test. The tester has to find defects without any proper planning and
documentation, relying solely on intuition.
Forms of Ad Hoc Testing:
        1. Buddy Testing: Two buddies, one from the development team and one
             from the test team mutually work on identifying defects in the same
             module. Buddy testing helps the testers develop better test cases while
             the development team can also make design changes early. This kind of
             testing happens usually after completing the unit testing.
        2. Pair Testing: Two testers are assigned the same modules and they share
             ideas and work on the same systems to find defects. One tester executes
             the tests while another tester records the notes on their findings.
        3. Monkey Testing: Testing is performed randomly without any test cases
             in order to break the system.
      ● No Documentation.
      ● No Test Cases.
      ● No Test Design.
As it is not based on any test cases and requires no documentation or test design,
issues identified late become very difficult for developers to resolve. Sometimes
very interesting, unexpected, or uncommon errors are found which would never have
been found with written test cases. Ad hoc testing is also used in acceptance
testing.
Ad hoc testing saves a lot of time. One good example: the client needs the product
by 6 PM today, but product development will only be completed at 4 PM the same day.
With only two hours in hand, the developer and tester team can test the system as a
whole by feeding it some random inputs and checking for errors.
       1. Buddy Testing – Buddy testing is a type of ad hoc testing in which two
           buddies are involved, one from the developer team and one from the
           tester team. After a module is completed and unit tested, the tester
           can test it with random inputs and the developer can fix the issues
           early, based on the currently designed test cases.
       2. Pair Testing – Pair testing is a type of ad hoc testing in which two
           testers from the testing team test the same module. While one tester
           performs the random tests, the other tester maintains a record of the
           findings. When two testers are paired, they exchange their ideas,
           opinions, and knowledge, so good testing is performed on the module.
       3. Monkey Testing – Monkey testing is a type of ad hoc testing in which
           the system is tested with random inputs, without any test cases; the
           behavior of the system is tracked and it is monitored whether all the
           functionalities of the system are working. As this randomness approach
           places no constraint on inputs, it is called monkey testing.
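Monkey testing can be sketched in a few lines. In this hypothetical example (the `parse_quantity` function and its deliberate bug are invented for illustration), random strings are thrown at an input parser; rejections of bad input are expected, but any other kind of crash is recorded as a defect.

```python
import random
import string

def parse_quantity(text):
    """Invented system under test: parse a quantity field (1-99)."""
    if text[0] == "+":            # latent bug: raises IndexError on ""
        text = text[1:]
    value = int(text)             # raises ValueError on non-numeric input
    if not 1 <= value <= 99:
        raise ValueError("quantity out of range")
    return value

def monkey_test(runs=200, seed=42):
    """Throw random inputs at the parser and record unexpected crashes."""
    rng = random.Random(seed)     # fixed seed so the run is reproducible
    chars = string.ascii_letters + string.digits + string.punctuation
    crashes = []
    for _ in range(runs):
        fuzz = "".join(rng.choice(chars) for _ in range(rng.randint(0, 8)))
        try:
            parse_quantity(fuzz)
        except ValueError:
            pass                  # expected rejection of invalid input
        except Exception as exc:  # anything else is a defect worth logging
            crashes.append((fuzz, type(exc).__name__))
    return crashes

crashes = monkey_test()
print("unexpected crashes found:", len(crashes))
```

With this seed the random inputs include empty strings, which trip the latent `IndexError` — exactly the kind of uncommon defect that scripted test cases tend to miss.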
      ● It is good for finding bugs and inconsistencies that are not mentioned in
         test cases.
     ● This test helps to build a strong product that is less prone to future
         problems.
      ● This testing can be performed at any time during the Software Development
          Life Cycle (SDLC).
   ● We go for ad hoc testing after the build has been tested in the planned
      sequence, by then checking the application randomly.
   ● It is a form of negative testing because we test the application beyond the
      client's stated requirements.
● When the end-user uses the application randomly, he/she may see a bug, but a
    professional test engineer uses the software systematically, so he/she may not
    find the same bug.
● Suppose we do one round of functional testing, integration testing, and system
    testing on the software.
● Then, if we click on some feature and, instead of going to the login page, it
    goes to a blank page, that is a bug.
● To catch these sorts of scenarios, we do one round of ad hoc testing.
● Scenario 2
● In ad hoc testing, we don't follow the requirements because we check the
    software randomly. The required flow is A → B → C → D, but while performing ad
    hoc testing the test engineer goes directly to C and tests the application.
● Scenario 3
● Suppose we are using two different browsers, Google Chrome and Mozilla Firefox,
    and we log in to the Facebook application in both browsers.
● Then we change the password in the Google Chrome browser, and in the other
    browser (Firefox) we perform some action like sending a message.
   ● Firefox should navigate to the login page and ask for the login credentials
      again, because we changed our credentials in the other browser (Chrome).
      Checking such a scenario is ad hoc testing.
   ● When the product is released to the market, we also go for ad hoc testing,
      because the customer never uses the application in a fixed sequence; so we
      check the application randomly.
   ● Since users don't necessarily know how to use the application, they may use
      it randomly and find issues; to cover this, we do one round of ad hoc
      testing without following any sequence or procedure.
   ● We go for ad hoc testing when all the other types of testing have been
      performed. If time permits, we check all the negative scenarios during ad
      hoc testing.
● In this technique, two team members work on the same machine where
         one of the team members will work with the system and the other one is
         responsible for making notes and scenarios.
      ● One person known as the primary tester performs the testing while the
         other person, known as the buddy tester, observes and provides assistance
         as needed.
      ● Helps in understanding the code: Buddy testing not only helps in
         finding errors in the code but also helps the tester to understand how
         the code is written, and provides clarity on specifications.
      ● Helps to design better testing strategy: Buddy testing is normally done
         at the unit test phase, which helps testers to come out with a better testing
         strategy for subsequent planned and testing activities.
      ● Helpful for testing new modules: It is done for new or critical modules
         in the product where the specification is not clear to buddies who perform
         Buddy testing.
      ● Helps provide additional perspective on the testing process.
Types of Buddy Testing:
1. Pair Testing
In pair testing, two people work closely together at a single workstation. As the
other person watches and evaluates the process, one person assumes the role of the
tester, carrying out test cases or utilizing the application. A dynamic interchange of
ideas and viewpoints is ensured by this cooperative method, which promotes more
thorough testing and early defect discovery.
2. Developer-Tester Buddy Testing: Developers and testers work together to find
and fix bugs early in the development cycle. This cooperation can take the form of
code reviews, pair programming, or cooperative testing. Facilitating communication
between these two crucial roles helps the team improve the software as a whole.
3. Exploratory Buddy Testing: In this type of testing, two testers collaborate to
examine the programme without using pre-written test cases. This method fosters
flexibility and inventiveness, enabling testers to find unexpected problems and
situations. The cooperation of the testers guarantees a deeper investigation of
the programme's functionality.
4. Peer Review Testing: Through cooperative efforts, peer review testing focuses
on the examination and enhancement of testing artifacts. Testers go over each
other’s test cases, scripts, or plans to find any problems, contradictions, or
places for improvement. This kind of buddy testing keeps the testing procedure
consistent and efficient and improves the overall quality of testing documents.
5. Cross-Functional Buddy Testing: This type of testing involves working together
with individuals from several functional domains, including development, testing,
and design. It encourages a variety of viewpoints and skill sets while advancing
an overall comprehension of the system. Effective communication within the
cross-functional team helps it grasp the programme more thoroughly.
When to use Buddy Testing?
Buddy testing is typically used in the later stages of the software development
process when the software is almost complete and ready for final testing. It is
particularly useful for testing complex or critical systems, or for testing systems
that require specialized knowledge or expertise.
There are several factors that can influence the decision to use buddy testing:
for example, when development took a lot of time and the testing team has only a
few days for testing the product, or when there is a new team member in the team.
The buddy testing process typically involves the following steps:
1. Identify the primary tester and the buddy tester: The primary tester is
         the person who performs the testing, while the buddy tester observes and
         assists.
      2. Agree on the scope: The primary tester and
         the buddy tester should agree on the scope and objectives of the testing,
         including the specific features or functionality that will be tested and
         the expected results.
      3. Plan the testing: The primary tester and the buddy tester should develop
         a testing plan that outlines the specific test cases and test scenarios that
         will be executed, as well as the resources and tools needed to complete
         the testing.
     4. Execute the testing: The primary tester performs the testing while the
        buddy tester observes, takes notes, and provides assistance as needed.
     5. Review the results: The primary tester and
        the buddy tester should review the results of the testing and debrief to
        discuss any issues or challenges that were encountered.
     ● Knowledge sharing: The primary tester and the buddy tester can share
        knowledge and expertise, and catch defects more quickly.
     ● Increased efficiency: Buddy testing can help reduce the time and effort
        required for testing.
     ● Improved quality: Buddy testing helps ensure that the
        system being tested is of high quality, as defects and issues are more
        likely to be identified and addressed.
     ● Enhanced collaboration: Buddy testing promotes collaboration between
        team members and can help build trust and teamwork within the team.
     ● Less workload: There will be less workload in the presence of another team
        member, and the tester can think clearly and use more scenarios for
        testing.
     ● Prior agreement required: The buddies must agree on the
        modalities and the terms of working before actually starting the testing
        work. They stay close together to be able to follow the agreed plan. The
        code is unit tested to ensure it does what it is supposed to do before
        buddy testing starts.
     ● Lengthy review session: After the code successfully passes unit testing,
        the developer approaches the testing buddy. Starting buddy testing before
        completing unit testing may result in a lengthy review session for the
        buddy on code that may not meet the specified requirements. This in turn
        may cause unnecessary rework and erode the confidence of the buddy.
     ● Dependence on the buddy tester: If the buddy tester is not available or
        is not able to provide assistance, the testing process may be slowed down
        or disrupted.
     ● Limited scalability: Buddy testing may not be practical for large-scale
        testing efforts or large teams.
Pair Testing:
Pair Testing is verification of software by two team members working behind one
machine. The first member controls the mouse and keyboard while the second member
makes notes, discusses test scenarios, and asks questions. One of them has to be a
tester, and the other a developer or business analyst.
● Pairing up with the Right Person: We can pair with anyone, but pairing works
          better when both individuals understand each other's working process and
          goals.
      ● Allocating Proper Space: The pair need a device and a space where they
          can sit together and perform the test properly. For remote work, this is
          done via video-conferencing tools, where the driver shares the entire
          screen.
      ● Establish the Goals: After planning the structured approach to the test,
          we need to keep an eye on the areas to be covered, timebox the testing,
          and stay aware of any required changes.
      ● Decide the Roles: Before starting the testing, we need to assign the
          roles: the second member maintains the bug log while the driver performs
          all the manual tasks. Once the session is done, they should prepare a
          bug report and log all the bugs.
Advantages Of Using Pair Testing:
       1. Developers get to see the software from a different point of view. The
          pair discusses and discovers what happens if a particular action is
          executed, or what will happen if the business analyst doesn't implement
          something.
       2. When Pair Testing is performed with a business analyst, the two exchange
          domain knowledge and testing knowledge.
Disadvantages Of Using Pair Testing:
       1. The outcome of a Pair Testing session is shared notes and findings, not
          test cases. We can't use the outcome of the PT session directly for test
          automation.
       2. When two team members are paired, there is a chance that they end up
          clashing with each other. That's why we should not use PT when the team
          members are not communicating or working together well.
       3. If you are planning to execute structured test cases, executing them
          together adds little or no value; that task should be performed by one
          team member alone.
The Setting of Pair Testing:
1. The team members should commit to the session together; it will not work if
         one of them is disengaged.
      2. They should be able to perform the session without being interrupted, and
         should switch off their mobile phones and notifications for better focus.
      3. The work place consists of two people who would sit behind one desk.
4. We should fix a time limit to carry out the test. Normally one session is
         ninety minutes.
      5. We should plan the meeting in a coordinated manner.
● While carrying out the session, the team members should discuss which parts
         to test and how deep the testing should be. The test should stay focused
         on the targets and the areas described in the ET charter.
      ● The first team member (the driver) controls the keyboard and mouse while
         the second team member analyses, asks questions, and prepares notes.
Finishing a Pair Testing (PT) Session:
After the completion of a PT session, the discoveries are submitted to the bug
registration system. If needed, the ET charter is updated, the areas where
problems were found are retested, and other remarks are checked. Closing the ET
charter includes a short evaluation of the PT session: what has been done well and
what should be improved.
Finally, pair testing is a blend of teamwork and testing. It has many advantages,
like sharing knowledge about testing and the SUT, training new members, breaking
down barriers between members, and, above all, it is fun. Still, we should use
pair testing wisely.
Prior to the testing phase, they might list down some concepts or areas to explore,
but the essence of exploratory testing still emphasizes the tester’s personal freedom
and responsibility in simultaneous knowledge acquisition and quality checks. In a
way, exploratory testing is similar to an exciting adventure where testers don’t
really know what lies ahead of them, and they must utilize certain techniques to
uncover those mysteries.
During exploratory testing, testers explore the software by interacting with it,
trying out different features, then observing its behavior. They may even
intentionally break the software, input unexpected data, or explore edge cases to
uncover potential issues. As long as testers get to understand the system’s
workings and can suggest strategies to test the system more methodically, they
have accomplished the task.
A recent study compares exploratory testing (ET) with test-case-based testing
(TCT) in terms of their effectiveness at catching software defects. Results show
that test-case-based testing excels at catching immediately visible defects (Mode
0) or defects that require only one interaction (Mode 1) to cause a failure, while
exploratory testing does a better job of catching complex defects requiring two or
3+ user inputs (Mode 2 and Mode 3).
In other words, exploratory testing helps testers catch bugs that automated
testing would have missed. Automated testing only catches bugs that we know may
happen (we can only create test scripts for something we know about), while
exploratory testing catches bugs that we don't even know exist in the first place,
tapping into the “unknown unknowns” region of our understanding.
Exploratory testing and manual testing in general expands the test coverage to
blind zones of automation testing, pushing product quality to a new level.
In the words of Cem Kaner, who coined the term for this testing type:
“[exploratory testing treats] test-related learning, test design, test execution and
test result interpretation as mutually supportive activities that run in parallel
throughout the project.” As exploratory testing requires minimal to no planning, it
can be conducted almost whenever we want, giving us the autonomy we need to
learn about the product while also performing quality checks, saving time and
resources.
In this era, the entire Quality Engineering industry is moving towards shift-left
testing and continuous testing where testing is performed in synchronicity with
development. Shift-left testing allows testing to happen earlier, leaving ample time
for troubleshooting while improving the project’s agility.
Exploratory testing requires every tester to acknowledge this gap, and encourages
them to explore beyond scripted scenarios, uncovering new and unexpected
behaviors. A highly recommended approach is to combine manual testing with
automated testing to utilize the benefits of both.
You would interact with various functionalities of the website, such as browsing
product catalogs, adding items to the shopping cart, and completing the checkout
process. After that, you can explore different menus, screens, and buttons to ensure
smooth navigation and an intuitive user experience. You would also input various
values and scenarios into form fields, such as using different payment methods or
applying discount codes, to assess the accuracy of calculations and the
transaction processes. There is no rule: your goal is to explore and familiarize
yourself with the website.
In the process, you may find critical issues such as payment gateway failures and
security vulnerabilities, or minor problems like broken links and inconsistent
product descriptions. Even if you do not find any bugs, you still learn a lot
about how the system works. This knowledge will come in handy when you start
developing automation test scripts in the future.
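As an illustration, a behavior discovered during an exploratory session can be distilled into a small automated regression check. Everything below is hypothetical (the `checkout_total` function and the `SAVE10` discount code are invented for the example): it shows how exploratory findings, such as how an unknown code or an empty cart is handled, become executable assertions.

```python
def checkout_total(prices, discount_code=None):
    """Invented checkout logic explored during a testing session."""
    total = sum(prices)
    if discount_code == "SAVE10":
        total *= 0.90            # 10% off for a valid code
    return round(total, 2)

# Regression checks distilled from exploratory findings:
assert checkout_total([19.99, 5.00]) == 24.99
assert checkout_total([19.99, 5.00], "SAVE10") == 22.49
assert checkout_total([], "SAVE10") == 0            # edge case: empty cart
assert checkout_total([10.00], "BOGUS") == 10.00    # unknown code is ignored
print("regression checks passed")
```

Once captured like this, the scenarios run automatically on every build, freeing future exploratory sessions to probe new territory.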
There are many website or application-specific features that a tester needs to pay
attention to when doing exploratory testing. For example, exploring a personal
finance application would require them to apply the mindset of a customer needing
security and accuracy, compared to exploring a website that places stronger
emphasis on interactivity.
Pros and Cons To Exploratory Testing
Pros:
   ● Encourages creative and innovative thinking
   ● Identifies defects that are usually missed by formal testing methods
   ● Simultaneously provides a comprehensive understanding of software
     functionality and quality
   ● Requires less preparation time compared to formal testing
   ● Efficient in quickly identifying major defects
Cons:
   ●    Difficult to measure and control without a formal test script
   ●    Requires skilled and experienced testers for effective defect identification
   ●    Replication and automation of results can be challenging
   ●    Inconsistencies may arise from different testers using different approaches
   ●    Time-consuming if we can only uncover minor issues with no impact on
        software performance
Agile Testing is a type of software testing that follows the principles of agile
software development to test the software application. All members of the project
team along with the special experts and testers are involved in agile testing. Agile
testing is not a separate phase and it is carried out with all the development phases
i.e. requirements, design and coding, and test case generation. Agile testing takes
place simultaneously throughout the Development Life Cycle. Agile testers
participate in the entire development life cycle along with the development team
members and help build the software according to the customer requirements; this
makes better design, and thus better code, possible. The agile testing team works
as a single team towards the single objective of achieving quality. Agile Testing
has shorter time frames called iterations or loops. This methodology is also
called the delivery-driven approach because it provides a better prediction of the
workable product in less time.
   ·     Agile testing is an informal process that is specified as a dynamic type of
     testing.
  ·      It is performed regularly throughout every iteration of the Software
     Development Lifecycle (SDLC).
   ·      Customer satisfaction is the primary concern for agile test engineers at
      every stage of the agile testing process.
Features of Agile Testing
Some of the key features of agile software testing are:
1. Iteration 0
It is the first stage of the testing process and the initial setup is performed in this
stage. The testing environment is set in this iteration.
   ·        This stage involves executing the preliminary setup tasks such as finding
        people for testing, preparing the usability testing lab, preparing resources,
        etc.
   ·        The business case for the project, boundary situations, and project scope
        are verified.
   ·        Important requirements and use cases are summarized.
   ·        Initial project and cost valuation are planned.
   ·        Risks are identified.
   ·        Outline one or more candidate designs for the project.
2. Construction Iteration
It is the second phase of the testing process. It is the major phase of the testing and
most of the work is performed in this phase. It is a set of iterations to build an
increment of the solution. Testing in this phase is divided into two types:
confirmatory testing and investigative testing.
       3. Release
This phase is also known as the transition phase. It includes full system testing
and acceptance testing. To finish this stage, the product is tested more
rigorously than while it was in the construction iterations. In this phase,
testers work on the defect stories. This phase involves activities like:
   ·       Training end-users.
   ·       Support people and operational people.
   ·       Marketing of the product release.
   ·       Back-up and restoration.
   ·       Finalization of the system and user documentation.
       4. Production
It is the last phase of agile testing. The product is finalized in this stage after the
removal of all defects and issues raised.
1. Quadrant 1 (Automated)
The first agile quadrant focuses on the internal quality of the code and contains
the test cases and test components that are executed by the test engineers. All
test cases are technology-driven and used for automation testing. In the first
agile quadrant, the following testing can be executed:
   ·      Unit testing.
   ·      Component testing.
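A quadrant-1 test is typically a small automated, technology-facing unit test owned by the team and run on every build. Below is a minimal sketch (the `normalize_username` function is a made-up unit under test) using Python's built-in unittest framework:

```python
import unittest

def normalize_username(raw):
    """Made-up unit under test: trim whitespace and lowercase a username."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    """Technology-facing checks that run automatically on every build."""

    def test_strips_whitespace(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_is_idempotent(self):
        once = normalize_username("  Bob ")
        self.assertEqual(normalize_username(once), once)

# Run the suite programmatically (in a real project a CI runner does this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeUsernameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because such tests are fast and deterministic, they make up the bulk of the automation that supports the team in this quadrant.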
       2. Quadrant 2 (Manual and Automated)
The second agile quadrant focuses on the customer requirements that are provided
to the testing team before and throughout the testing process. The test cases in this
quadrant are business-driven and are used for manual and automated functional
testing. The following testing will be executed in this quadrant:
   ·      Pair testing.
   ·      Testing scenarios and workflow.
   ·      Testing user stories and experiences like prototypes.
3. Quadrant 3 (Manual)
The third agile quadrant provides feedback to the first and second quadrants. It
involves executing many iterations of testing, and the reviews and responses are
then used to strengthen the code. The test cases in this quadrant are developed
for manual testing. The testing that can be carried out in this quadrant includes:
   ·       Usability testing.
   ·       Collaborative testing.
   ·       User acceptance testing.
   ·       Pair testing with customers.
4. Quadrant 4 (Tools)
The fourth agile quadrant focuses on the non-functional requirements of the
product, such as performance, security, and stability, which are generally tested
with the help of tools. The testing that can be carried out in this quadrant
includes performance testing, load testing, and security testing.
   1.     Impact Assessment: This is the first phase of the agile testing life cycle
        also known as the feedback phase where the inputs and responses are
        collected from the users and stakeholders. This phase supports the test
        engineers to set the objective for the next phase in the cycle.
   2.  Agile Testing Planning: In this phase, the developers, customers, test
     engineers, and stakeholders team up to plan the testing process schedules,
     regular meetings, and deliverables.
   3. Release Readiness: This is the third phase in the agile testing lifecycle
     where the test engineers review the features which have been created
     entirely and test if the features are ready to go live or not and the features
     that need to be sent again to the previous development phase.
   4. Daily Scrums: This phase involves the daily morning meetings to check
     on testing and determine the objectives for the day. The goals are set daily to
     enable test engineers to understand the status of testing.
   5. Test Agility Review: This is the last phase of the agile testing lifecycle
     that includes weekly meetings with the stakeholders to evaluate and assess
     the progress against the goals.
An agile test plan typically includes:
  ·      Test Scope.
  ·      Testing instruments.
  ·      Data and settings are to be used for the test.
  ·      Approaches and strategies used to test.
  ·      Skills required to test.
  ·      New functionalities are being tested.
  ·      Levels or Types of testing based on the complexity of the features.
  ·      Resourcing.
  ·      Deliverables and Milestones.
  ·      Infrastructure Consideration.
  ·      Load or Performance Testing.
  ·      Mitigation or Risks Plan.
Benefits of Agile Testing
Below are some of the benefits of agile testing:
   ·     Saves time: Implementing agile testing helps to make cost estimates
     more transparent and thus helps to save time and money.
  ·      Reduces documentation: It requires less documentation to execute agile
     testing.
  ·      Enhances software productivity: Agile testing helps to reduce errors,
     improve product quality, and enhance software productivity.
  ·      Higher efficiency: In agile software testing, the work is divided into
     small parts; thus developers can focus more easily, completing one part
     before moving on to the next. This approach helps to identify minor
     inconsistencies and achieve higher efficiency.
  ·      Improve product quality: In agile testing, regular feedback is obtained
     from the user and other stakeholders, which helps to enhance the software
     product quality.
Limitations of Agile Testing
Below are some of the limitations of agile software testing:
   ·       Project failure: In agile testing, if one or more members leave the
       job, there is a chance of project failure.
   ·       Limited documentation: In agile testing, there is little or no
       documentation, which makes it difficult to predict the expected results, as
       there are no explicit conditions and requirements.
   ·       Introduce new bugs: In agile software testing, bug fixes, modifications,
       and releases happen repeatedly which may sometimes result in the
       introduction of new bugs in the system.
   ·       Poor planning: In agile testing, the team is not exactly aware of the end
       result from day one, so it becomes challenging to predict factors like cost,
       time, and resources required at the beginning of the project.
   ·       No finite end: Agile testing requires minimal planning at the beginning
       so it becomes easy to get sidetracked while delivering the new product.
       There is no finite end and there is no clear vision of what the final product
       will look like.
Some of the good practices that have been recognized in the extreme programming
model and suggested to maximize their use are given below:
  ·      Code Review: Code review detects and corrects errors efficiently. XP
     suggests pair programming, in which coding and reviewing of written code are
     carried out by a pair of programmers who switch roles between them every
     hour.
  ·      Testing: Testing code helps to remove errors and improves its reliability.
     XP suggests test-driven development (TDD) to continually write and
     execute test cases. In the TDD approach, test cases are written even before
     any code is written.
  ·      Incremental development: Incremental development works very well because
     customer feedback is gained continuously, and based on it the development
     team comes up with a new increment every few days, after each iteration.
  ·      Simplicity: Simplicity makes it easier to develop good-quality code as
     well as to test and debug it.
  ·      Design: Good quality design is important to develop good quality
     software. So, everybody should design daily.
  ·      Integration testing: It helps to identify bugs at the interfaces of different
     functionalities. Extreme programming suggests that the developers should
      achieve continuous integration by building and performing integration
      testing several times a day.
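The test-driven development (TDD) practice suggested above, where test cases are written even before any code, can be sketched as follows. The `price_with_tax` function is invented purely to illustrate the red-green-refactor rhythm:

```python
# Step 1 (red): write the test before any production code exists.
# Running it at this point would fail with a NameError - the "red" state.
def test_price_with_tax():
    assert price_with_tax(100.0, 0.18) == 118.0
    assert price_with_tax(0.0, 0.18) == 0.0

# Step 2 (green): write just enough code to make the test pass.
def price_with_tax(amount, rate):
    """Invented unit, driven into existence by the failing test above."""
    return round(amount * (1 + rate), 2)

# Step 3 (refactor): clean up while re-running the test to stay green.
test_price_with_tax()
print("TDD checks passed")
```

Each new behavior follows the same loop: add a failing assertion, make it pass with minimal code, then refactor with the safety net in place.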
XP is based on frequent iterations through which the developers implement User
Stories. User stories are simple and informal statements of the customer about the
functionalities needed. A User Story is a conventional description by the user of a
feature of the required system. It does not mention finer details such as the
different scenarios that can occur. Based on User stories, the project team proposes
Metaphors. Metaphors are a common vision of how the system would work. The
development team may decide to build a Spike for some features. A Spike is a very
simple program that is constructed to explore the suitability of a solution being
proposed. It can be considered similar to a prototype. Some of the basic activities
that are followed during software development by using the XP model are given
below:
    ·      Coding: The concept of coding which is used in the XP model is slightly
       different from traditional coding. Here, the coding activity includes drawing
       diagrams (modeling) that will be transformed into code, scripting a web-
       based system, and choosing among several alternative solutions.
    ·      Testing: The XP model gives high importance to testing and considers it
       to be the primary factor in developing fault-free software.
    ·      Listening: The developers need to carefully listen to the customers
       if they are to develop good-quality software. Programmers may not always
       have in-depth knowledge of the system to be developed, so they must
       listen to the customers to properly understand the functionality of the
       system.
    ·      Designing: Without a proper design, a system implementation becomes
       too complex and its solution very difficult to understand, making
       maintenance expensive. A good design results in the elimination of
       complex dependencies within a system. So, effective use of suitable
       design is emphasized.
    ·      Feedback: One of the most important aspects of the XP model is to gain
       feedback to understand the exact customer needs. Frequent contact with the
       customer makes the development effective.
    ·      Simplicity: The main principle of the XP model is to develop a simple
       system that works efficiently in the present, rather than trying to build
       something that would take a long time and may never be used. It focuses
       on the specific features that are immediately needed, rather than
       spending time and effort on speculation about future requirements.
   ·       Pair Programming: XP encourages pair programming where two
       developers work together at the same workstation. This approach helps in
       knowledge sharing, reduces errors, and improves code quality.
   ·       Continuous Integration: In XP, developers integrate their code into a
       shared repository several times a day. This helps to detect and resolve
       integration issues early on in the development process.
   ·       Refactoring: XP encourages refactoring, which is the process of
       restructuring existing code to make it more efficient and maintainable.
       Refactoring helps to keep the codebase clean, organized, and easy to
       understand.
   ·       Collective Code Ownership: In XP, there is no individual ownership of
       code. Instead, the entire team is responsible for the codebase. This approach
       ensures that all team members have a sense of ownership and responsibility
       towards the code.
   ·       Planning Game: XP follows a planning game, where the customer and
       the development team collaborate to prioritize and plan development tasks.
       This approach helps to ensure that the team is working on the most
       important features and delivers value to the customer.
   ·       On-site Customer: XP requires an on-site customer who works closely
       with the development team throughout the project. This approach helps to
       ensure that the customer’s needs are understood and met, and also facilitates
       communication and feedback.
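The refactoring practice listed above can be illustrated with a tiny before/after sketch: the structure of the code improves while its behavior stays identical. The function names and the discount rule here are purely illustrative assumptions, not taken from any real codebase:

```python
# Before refactoring: the discount logic is tangled into the loop.
def order_total_before(items, customer_type):
    total = 0.0
    for price, qty in items:
        if customer_type == "vip":
            total += price * qty * 0.9
        else:
            total += price * qty
    return total

# Refactoring step: extract the discount rule into one named place.
def discount_rate(customer_type):
    return 0.9 if customer_type == "vip" else 1.0

# After refactoring: same result, simpler and easier-to-test structure.
def order_total_after(items, customer_type):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal * discount_rate(customer_type)

# The existing test suite guards the refactoring: behavior is unchanged.
items = [(10.0, 2), (5.0, 1)]
assert order_total_before(items, "vip") == order_total_after(items, "vip")
assert order_total_before(items, "new") == order_total_after(items, "new")
print("refactoring preserved behavior")
```

This is why XP pairs refactoring with TDD and continuous integration: the test suite is what makes restructuring the code safe, because any behavior change is caught immediately.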
Some of the projects that are suitable to develop using the XP model are given
below:
   ·     Small projects: The XP model is very useful in small projects
      consisting of small teams, as face-to-face meetings are easier to arrange.
   ·       Projects involving new technology or research projects: Such
       projects face rapidly changing requirements and technical uncertainties,
       so the XP model suits them well.
   ·       Web development projects: The XP model is well-suited for web
       development projects as the development process is iterative and requires
       frequent testing to ensure the system meets the requirements.
   ·       Collaborative projects: The XP model is useful for collaborative
       projects that require close collaboration between the development team and
       the customer.
   ·       Projects with tight deadlines: The XP model can be used in projects
       that have a tight deadline, as it emphasizes simplicity and iterative
       development.
   ·       Projects with rapidly changing requirements: The XP model is
       designed to handle rapidly changing requirements, making it suitable for
       projects where requirements may change frequently.
   ·       Projects where quality is a high priority: The XP model places a
       strong emphasis on testing and quality assurance, making it a suitable
       approach for projects where quality is a high priority.
XP, and other agile methods, are suitable for situations where the volume and
pace of requirements change are high and where requirement risks are
considerable.