
Unit I: Introduction to Software Testing

Introduction to software testing


Software testing is the process of evaluating and verifying that a software application meets
its specified requirements and is free of defects. It ensures that the software functions
correctly, is reliable, and performs efficiently under various conditions. Testing is a critical
part of the software development lifecycle (SDLC) and helps in identifying bugs, security
vulnerabilities, and performance issues before the software is deployed.
Software testing can be classified into two main types: manual testing (where testers
execute test cases manually) and automated testing (where scripts and tools are used to
perform tests). Common testing methodologies include unit testing, integration testing,
system testing, and acceptance testing.
The primary objectives of software testing are to ensure software quality, reliability,
security, and performance, ultimately leading to better user satisfaction.
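
For example, in automated testing a short script expresses the expected behaviour as an executable check. The sketch below is a minimal, hypothetical illustration (an assumed calculate_discount function, checked with the PyTest framework mentioned later in this unit); it is not taken from any particular project.

# test_discount.py -- a minimal, hypothetical automated unit test.
# Run with: pytest test_discount.py
import pytest

def calculate_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - (price * percent / 100)

def test_ten_percent_discount():
    # The expected result is fixed up front; the test fails if behaviour changes.
    assert calculate_discount(200, 10) == 180

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(200, 150)

Running PyTest on this file executes both tests and reports any failures, which is what makes the check repeatable without manual effort.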

Evolution of Software Testing


Software testing has evolved significantly over the decades:
1. Early Days (1950s-1970s): Initially, software testing was performed in an ad-hoc
manner, mainly focusing on debugging. Developers were responsible for testing their
own code without a structured approach.
2. Structured Testing (1980s): With the growth of complex software systems, structured
testing methods were introduced, such as white-box and black-box testing. Formal
testing strategies, test planning, and documentation became essential.
3. Automated Testing (1990s): As software systems grew in scale, manual testing became inefficient. Automated testing tools such as LoadRunner (and, a little later, Selenium) emerged, enabling faster and more reliable test execution.
4. Agile and Continuous Testing (2000s): The adoption of Agile methodologies emphasized iterative development and continuous testing. Test-driven development (TDD) and behavior-driven development (BDD) gained popularity, ensuring that testing was integrated throughout the development process (a brief TDD sketch follows this list).
5. DevOps and AI-Driven Testing (2010s-Present): The rise of DevOps practices
introduced continuous integration/continuous deployment (CI/CD), where testing
became a part of the automation pipeline. AI and machine learning are now being
used to optimize test case generation, predictive defect analysis, and intelligent
automation.
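
Item 4 above mentions test-driven development (TDD). The sketch below is a hedged illustration of the TDD rhythm (red, green, refactor) using a hypothetical is_even function in PyTest style; in practice the test and the production code normally live in separate files.

# A hypothetical TDD cycle, shown in one file for brevity (PyTest style).

# Step 1 (red): the test is written before the production code. At the point in
# the cycle where is_even does not exist yet, the test run fails.
def test_is_even():
    assert is_even(4) is True
    assert is_even(7) is False

# Step 2 (green): write the simplest code that makes the failing test pass.
def is_even(number):
    return number % 2 == 0

# Step 3 (refactor): clean up the code while keeping the test passing.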

Myths and facts


Myths and Facts of Software Testing
Myth 1: Testing is only about finding bugs
Fact: Testing is not just about finding defects; it also ensures software quality, usability,
security, and performance. It helps verify whether the software meets user expectations
and business requirements.
Myth 2: Complete testing is possible
Fact: It is impossible to test every possible input, scenario, and combination in a software
application. Instead, testers prioritize test cases based on risk and critical functionalities.
Myth 3: Testing is required only at the end of development
Fact: Modern software development practices (such as Agile and DevOps) emphasize
continuous testing from the early stages to avoid costly fixes later. Early testing saves
time and reduces risks.
Myth 4: Automated testing can replace manual testing
Fact: While automation helps speed up repetitive tasks, manual testing is still essential
for exploratory testing, usability testing, and scenarios requiring human intuition and
judgment.
Myth 5: Successfully tested software is 100% defect-free
Fact: No software can be guaranteed to be completely bug-free. Testing reduces the
number of defects but cannot eliminate all issues due to the complexity of real-world
environments.
Myth 6: Developers should not test their own code
Fact: Developers can and should perform unit testing to catch defects early. However,
independent testers are also needed to provide unbiased testing.
Myth 7: Testing increases project costs
Fact: Testing helps in identifying defects early, preventing costly fixes after deployment. It
reduces risks and ensures a higher-quality product, ultimately saving costs in the long
run.
Myth 8: More test cases mean better testing
Fact: The quality of test cases is more important than quantity. Effective test case design
focuses on covering critical functionalities with minimal redundancy.
Myth 9: Testing is only for large projects
Fact: Regardless of project size, testing is necessary to ensure that even small
applications function correctly and securely.
Myth 10: Only testers are responsible for quality
Fact: Quality is a shared responsibility among developers, testers, business analysts, and
stakeholders. A strong collaboration ensures better software quality.
Goals of Software Testing
The primary goal of software testing is to ensure that the software meets business and
user requirements while being reliable, secure, and defect-free. The key objectives
include:
1. Detecting and Preventing Defects – Identify bugs, security vulnerabilities, and
performance issues before deployment.
2. Ensuring Software Quality – Validate that the software meets functional,
performance, usability, and security requirements.
3. Verifying Compliance with Requirements – Ensure that the application functions as
per the defined specifications and business needs.
4. Enhancing User Satisfaction – Improve user experience by ensuring ease of use,
efficiency, and reliability.
5. Minimizing Risks – Reduce risks related to software failure, data loss, and security
breaches.
6. Improving Performance and Scalability – Ensure that the system performs well
under different conditions and can handle expected workloads.
7. Supporting Continuous Improvement – Provide feedback for refining development
and testing processes over time.
8. Facilitating Smooth Deployment – Help in achieving a stable, bug-free software
release.

Psychology of Software Testing


The psychology of software testing refers to the mindset and approach testers adopt
to effectively find defects and improve software quality. Some key psychological
aspects include:
1. Skeptical and Critical Thinking – Testers must assume that software contains defects
and actively seek out potential issues rather than assuming everything works
correctly.
2. Attention to Detail – Small errors can lead to major failures; testers must focus on
even the tiniest inconsistencies.
3. Creative and Analytical Mindset – Testers need to think outside the box and
anticipate unexpected user behaviors and edge cases.
4. Empathy for End-Users – Understanding user expectations helps testers ensure a
better user experience.
5. Collaboration and Communication – Effective testers communicate issues clearly
with developers and stakeholders to improve the product.
6. Adaptability and Learning Attitude – Testing methodologies and technologies evolve
constantly, requiring testers to stay updated with new tools and techniques.
7. Patience and Persistence – Debugging complex software requires repeated testing
and troubleshooting to identify hidden issues.
8. Balance Between Breaking and Building – While testers aim to find defects, their
ultimate goal is to help create a high-quality product, not just break the software.
A strong understanding of testing psychology helps testers adopt the right approach,
ensuring efficient, effective, and unbiased testing.

Models for Testing in Software Testing Methodologies


Software testing models define the approach and sequence in which testing activities
are performed. These models ensure systematic testing, defect identification, and
software quality assurance. Below are some commonly used testing models:

1. Waterfall Model
• Description: A sequential model where each phase (Requirement, Design,
Implementation, Testing, Deployment, and Maintenance) is completed before
moving to the next.
• Testing Approach: Testing is done in a single phase after development.
• Pros: Simple and structured.
• Cons: Late detection of defects, making fixes costly.

2. V-Model (Verification & Validation Model)


• Description: An extension of the Waterfall model where testing is performed in parallel with development at each stage.
• Testing Approach: Every development phase has a corresponding testing phase (e.g.,
unit testing for coding, system testing for design).
• Pros: Early defect detection, better quality assurance.
• Cons: Rigid, does not handle changes well.

3. Incremental Model
• Description: Software is developed and tested in small, incremental modules, with
each adding functionality.
• Testing Approach: Each module is tested individually before integrating into the
system.
• Pros: Easier defect tracking, early feedback from users.
• Cons: Requires careful planning and design.

4. Agile Model
• Description: Focuses on iterative development with frequent releases and
continuous feedback.
• Testing Approach: Continuous and automated testing (unit testing, regression
testing, exploratory testing).
• Pros: Adaptable to changing requirements, fast delivery.
• Cons: Requires close collaboration and experienced testers.

5. Spiral Model
• Description: A risk-driven model that combines iterative and waterfall approaches,
emphasizing risk assessment at each cycle.
• Testing Approach: Testing is conducted in iterative cycles with risk analysis.
• Pros: Effective for complex projects, risk management integration.
• Cons: Expensive and requires expert risk analysis.

6. DevOps Model (Continuous Testing)


• Description: Integrates development and operations to enable continuous
integration and continuous deployment (CI/CD).
• Testing Approach: Automated testing tools are used throughout the development
cycle.
• Pros: Faster releases, high-quality software, early bug detection.
• Cons: Requires advanced automation skills and infrastructure.
Software Testing Terminology
1. Bug/Defect – A flaw or issue in software that causes incorrect behavior.
2. Test Case – A set of conditions and steps to check if a feature works correctly.
3. Test Plan – A document outlining the testing scope, objectives, and strategy.
4. Test Script – A program or automation code that runs test cases.
5. Test Scenario – A high-level description of what needs to be tested.
6. Regression Testing – Rechecking software after changes to ensure no new issues
appear.
7. Unit Testing – Testing individual parts (modules) of the software.
8. Integration Testing – Checking if different modules work together correctly.
9. System Testing – Testing the entire software as a whole.
10. Acceptance Testing – Final testing before release to ensure it meets business needs.
11. Load Testing – Checking how software performs under heavy usage.
12. Stress Testing – Pushing the software beyond its limits to see how it handles failures.
13. Smoke Testing – A quick test to check if the main functions of software work.
14. White-box Testing – Testing based on knowing the internal structure of the code.
15. Black-box Testing – Testing based on functionality without knowing the internal
code.
16. Exploratory Testing – Informal testing without predefined test cases.
17. Alpha Testing – Testing done internally before releasing to users.
18. Beta Testing – Testing done by real users before the official launch.

Software Testing Methodology


Software testing methodologies are different ways of conducting tests during software
development.
1. Waterfall Model – A step-by-step, one-directional process where testing happens
after development is complete.
2. V-Model – Testing happens at each stage of development in parallel.
3. Agile Testing – Continuous testing in short cycles (sprints) with frequent updates.
4. DevOps Testing – Automated and continuous testing integrated into software
delivery pipelines.
5. Incremental Testing – Testing each module separately before integrating them.
6. Exploratory Testing – Testers actively explore the software without predefined
scripts.
Each methodology is chosen based on project requirements and business needs. Agile and
DevOps are widely used today for fast and effective testing.
Software Testing Life Cycle (STLC)
The Software Testing Life Cycle (STLC) is a step-by-step process followed to test software
and ensure its quality. It consists of different phases, each with specific activities.

Phases of STLC

1. Requirement Analysis – Understanding what needs to be tested by analyzing software requirements. Testers identify testable requirements.
2. Test Planning – Creating a test plan that includes scope, objectives, testing strategy, tools, timelines, and resources.
3. Test Case Design – Writing detailed test cases and test scripts based on requirements. Also includes preparing test data.
4. Test Environment Setup – Setting up hardware, software, and tools required for testing. Ensures the testing environment is ready.
5. Test Execution – Running test cases and logging defects if issues are found. Can be manual or automated.
6. Defect Reporting & Tracking – Identifying, documenting, and tracking defects until they are fixed. Tools like JIRA or Bugzilla are used.
7. Test Closure – Preparing test reports, analyzing results, and documenting lessons learned for future improvements.

Key Points About STLC:


• Testing is not just one phase; it happens throughout development.
• STLC works alongside the Software Development Life Cycle (SDLC) to improve
quality.
• Automation and manual testing are used depending on project needs.
• Effective defect tracking ensures issues are fixed before release.
By following STLC, organizations improve software quality, reduce defects, and ensure a
smooth user experience before deployment.
Verification and Validation in Software Testing
Verification and Validation are two crucial concepts in software testing that ensure the
software product meets both its technical requirements and user expectations. While they
are related, they focus on different aspects of the testing process.

Verification
Definition:
Verification is the process of checking whether the software is being developed according to
the specifications and design documents. It ensures that the product is built right.
Verification is primarily concerned with confirming that the software adheres to its
requirements and design.
Key Requirements and Activities for Verification:
1. Requirements and Design Specifications – Clear and detailed requirements and
design documentation are needed to verify if the software is built according to these
documents.
2. Static Testing – Involves techniques such as code reviews, walkthroughs, and
inspections, where the product is analyzed without execution.
3. Test Plans and Test Cases – Test cases should be written based on requirements to
verify that each requirement is met. These are typically reviewed early in the
development process.
4. Prototypes/Mockups – Often used in UI verification to confirm that the product
aligns with design expectations.
Types of Verification:
• Static Analysis – Reviewing code and documents for errors.
• Unit Testing – Testing individual components for correctness.
• Integration Testing – Checking if different modules work together as intended.

Validation
Definition:
Validation is the process of checking whether the software meets the user’s needs and
works in the real-world environment. It ensures that the right product is built and that it
satisfies the end user’s expectations and business requirements.
Key Requirements and Activities for Validation:
1. User Requirements – Understanding the end-user needs and how the product will be
used in the real world.
2. System Testing – Testing the entire software as a whole to ensure it meets all user
requirements and functions as expected.
3. Acceptance Testing – Typically conducted with end-users to validate the product
before it is released.
4. Beta Testing – Providing the product to a limited number of real users to validate its
functionality in a production-like environment.
5. Real-World Scenarios – Testing the software under real user conditions, like load,
performance, and security.
Types of Validation:
• System Testing – Verifying that the complete system functions as expected.
• User Acceptance Testing (UAT) – End users validate whether the software meets
their requirements and expectations.
• Alpha and Beta Testing – Performed by the development team (Alpha) and actual
users (Beta) to validate software functionality.

Conclusion:
Both Verification and Validation are essential in software development to ensure quality.
Verification makes sure the product is being built according to the design, while Validation
ensures the product meets the user’s expectations and is functional in real-world scenarios.
Together, they help in delivering a high-quality product that satisfies both technical and
business requirements.

High-Level and Low-Level Designs


In software testing, the design refers to the planning and structuring of test cases, processes,
and strategies based on the software requirements and architecture. High-level and low-level designs in software testing are terms that differentiate between broader, strategic test planning and detailed, specific test case creation.

High-Level Design in Software Testing


Definition:
High-level design (also known as Test Strategy or Test Plan Design) focuses on the broader
testing approach and overall structure of the testing activities. It provides a big-picture view
of the testing process, describing what will be tested, how it will be tested, and the overall
testing goals.
Key Characteristics:
1. Focus: It covers testing goals, scope, and overall approach rather than specific test
cases.
2. Scope: Outlines the testing scope, defining what is in-scope and out-of-scope for
testing.
3. Test Types: Identifies the types of tests to be performed (e.g., functional testing,
performance testing, security testing, etc.).
4. Resources: Specifies the resources required, such as testing tools, environments, and
personnel.
5. Testing Phases: Describes the testing phases (unit testing, integration testing, system
testing, acceptance testing).
6. Risk Management: Identifies potential risks and how they will be mitigated.
7. Timeline: Provides an estimate of the testing schedule and milestones.
Example Activities in High-Level Design:
• Creating a Test Plan outlining the testing strategy.
• Deciding on the testing tools and resources required.
• Defining the testing environment setup.
• Identifying test deliverables such as test reports and documentation.
Benefits of High-Level Design:
• Provides a structured overview of the testing approach.
• Aligns testing with business and project goals.
• Helps in resource planning and scheduling.
• Ensures that all testing activities are coordinated and organized.

Low-Level Design in Software Testing


Definition:
Low-level design (also known as Test Case Design or Test Script Design) refers to the
detailed creation of test cases, steps, and scenarios to verify the functionality of specific
components or features of the software. This level focuses on how each individual test will
be executed.
Key Characteristics:
1. Focus: It involves the detailed steps and conditions to test specific functionality or
features.
2. Scope: Focuses on individual modules or components of the software.
3. Test Scenarios and Cases: Describes the exact test steps, inputs, expected results,
and pass/fail criteria.
4. Test Data: Specifies the data to be used in the tests (e.g., boundary values, invalid
inputs).
5. Test Execution: Focuses on the execution of individual tests and their outcomes.
6. Detailed Logging: Specifies how test results will be logged, tracked, and reported.
Example Activities in Low-Level Design:
• Writing test cases for specific features (e.g., logging in, password reset).
• Designing input data sets and expected results for each test case.
• Creating test scripts for automation where necessary.
• Performing test execution and tracking test results.
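
As a small illustration of the activities listed above, the following hedged sketch designs low-level test cases for an assumed password-length rule (minimum 8, maximum 64 characters), pairing each input with its expected result and covering the boundary values. The function name and limits are assumptions for illustration only.

import pytest

def is_valid_password_length(password):
    """Hypothetical rule under test: length must be between 8 and 64 characters."""
    return 8 <= len(password) <= 64

# Each tuple is one low-level test case: input data plus expected result.
@pytest.mark.parametrize("password, expected", [
    ("a" * 7,  False),   # just below the lower boundary
    ("a" * 8,  True),    # lower boundary
    ("a" * 64, True),    # upper boundary
    ("a" * 65, False),   # just above the upper boundary
    ("",       False),   # empty input
])
def test_password_length_rule(password, expected):
    assert is_valid_password_length(password) == expected

Writing the cases as a data table like this keeps the pass/fail criteria explicit and makes it easy to add further edge cases later.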
Benefits of Low-Level Design:
• Provides detailed steps to ensure every aspect of the software is tested.
• Allows testers to identify edge cases and boundary conditions.
• Automates repetitive tasks and ensures consistent test execution.
• Provides clear documentation of testing activities and results.

Conclusion:
• High-level design is about planning and strategy, ensuring that the testing process
aligns with project goals, resources, and timeline. It ensures that testing efforts are
well-organized and structured.
• Low-level design is focused on the execution of testing at a granular level, ensuring
each functionality and feature of the software is rigorously tested with precise
details.

Verifying Code in Software Testing


Code verification is an essential part of the software development process where the quality
and correctness of the code are validated against the defined requirements, specifications,
and standards. It helps ensure that the code is functioning as intended and does not
introduce defects or issues into the system. Verification is generally done through various
testing techniques and tools.
Ways to Verify Code
1. Code Reviews (Peer Review)
o Description: A formal process where other developers (or team members)
review the code to identify defects, potential improvements, and adherence
to coding standards.
o Purpose: Ensure the code is clean, follows best practices, and does not
introduce bugs or vulnerabilities.
o Benefits: Early identification of errors, improved code quality, knowledge
sharing among team members.
2. Static Code Analysis
o Description: Using tools to examine the code without executing it. These
tools can detect potential issues such as security vulnerabilities, code smells,
performance bottlenecks, or violations of coding standards.
o Tools: Examples include SonarQube, ESLint, FindBugs, PMD.
o Purpose: Catch issues like syntax errors, security flaws, and inefficiencies
before running the code.
o Benefits: Automatic error detection, early feedback on code quality,
adherence to coding standards.
3. Unit Testing
o Description: Writing and executing small tests to validate individual functions
or methods in the code. Each unit test is designed to ensure that a specific
piece of functionality works correctly.
o Tools: JUnit (Java), NUnit (.NET), PyTest (Python).
o Purpose: Ensure that each unit (function, method, class) works as expected
under various conditions.
o Benefits: Early detection of defects, faster debugging, test automation.
4. Integration Testing
o Description: Verifying that different parts of the system work together as
expected when integrated. This ensures that modules interact correctly, and
data flows seamlessly between them.
o Tools: Postman, JUnit (for integration tests), SoapUI (for web services).
o Purpose: Ensure smooth interaction between different modules or external
systems.
o Benefits: Identifies integration issues that may not be visible in individual unit
tests.
5. Automated Testing
o Description: Running automated test scripts that simulate user actions or
specific scenarios to verify if the code works as expected. Automation ensures
repeatable tests across different environments or systems.
o Tools: Selenium, Appium, TestNG, Cypress.
o Purpose: Increase testing efficiency, reduce human error, and continuously
test the software.
o Benefits: Saves time in repetitive tests, supports regression testing, and scales
testing efforts.
6. Regression Testing
o Description: After making changes or updates to the code, regression testing
is performed to ensure that the new code does not break any existing
functionality.
o Tools: Selenium, JUnit, TestComplete.
o Purpose: Ensure that modifications do not cause any unexpected issues or
break existing features.
o Benefits: Prevents re-introduction of old bugs, maintains software stability.
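
As a minimal illustration of items 5 and 6 above, the hedged sketch below uses Selenium's Python bindings to automate a login check that can be re-run after every code change as a regression test. The URL, element IDs, credentials, and expected page title are assumptions for illustration; a browser and matching driver (for example chromedriver) must be available for the script to run.

# A hypothetical automated regression check for a login page (Selenium, Python).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()   # assumes Chrome and its driver are installed
try:
    driver.get("https://example.com/login")                       # assumed URL
    driver.find_element(By.ID, "username").send_keys("demo_user")  # assumed IDs
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "login-button").click()
    # The assertion is the regression check: it should keep passing after
    # every change that touches the login flow.
    assert "Dashboard" in driver.title
finally:
    driver.quit()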

Importance of Verifying Code


• Detect Early Bugs: Verifying code helps catch errors in early stages, reducing the cost
of fixing bugs later in the development process.
• Improved Code Quality: Ensures that the code is efficient, readable, and maintainable.
Validation
Validation in Software Testing
Validation is a crucial part of the software development lifecycle, focused on ensuring that
the software meets the user's needs and requirements. It confirms that the right product
has been built and that the product behaves as expected under real-world conditions.
Definition of Validation:
Validation is the process of evaluating a system or its components to ensure that it meets
the business requirements and user expectations. It involves confirming that the software
works as intended when used in its actual environment. Validation answers the question:
"Did we build the right product?
Key Characteristics of Validation:
1. User-Centric Focus:
Validation primarily focuses on ensuring that the software satisfies the end user's
needs and business goals.
2. Verification vs. Validation:
o Verification: Confirms that the software is being built according to
specifications and design.
o Validation: Confirms that the software fulfills the intended purpose for the
user.
3. Real-World Testing:
Validation often involves testing the software under conditions that simulate real-world usage to ensure it operates correctly in practice.
4. Acceptance Testing:
Validation often involves activities like User Acceptance Testing (UAT), where end
users test the system to ensure it meets their requirements.

Types of Validation:
1. User Acceptance Testing (UAT):
o Description: The process of validating whether the software meets the
business needs and user requirements before the software is deployed.
o Participants: End-users or clients who interact with the product.
o Purpose: To validate that the software is ready for production and will meet
user needs in the real-world environment.
2. Alpha Testing:
o Description: Performed by the internal development team or quality
assurance (QA) team before releasing the software to a limited group of
external users.
o Purpose: To catch issues early before releasing the product to the external
world.
3. Beta Testing:
o Description: The product is released to a limited audience outside the
development team for validation under real-world conditions.
o Purpose: To gather feedback and find potential defects before the final
release to all users.
4. System Testing:
o Description: Validates that the system as a whole functions as expected. This
can include testing performance, security, and compatibility.
o Purpose: Ensures that the complete system works in an integrated
environment.
5. End-to-End Testing:
o Description: Validates the entire application workflow from start to finish,
ensuring that the system meets user requirements.
o Purpose: Ensures that all components of the system work together and
support user goals.

Importance of Validation:
1. Reduces Risk:
By validating the product before its final release, the likelihood of defects in the live
environment is reduced, minimizing the risk of failure.
2. Improves Product Quality:
Validating the software ensures it meets quality standards, performance
expectations, and security requirements.
3. Ensures Compliance:
For regulated industries (e.g., healthcare, finance), validation ensures that the
software meets legal and regulatory standards.

Validation Process:
1. Requirement Gathering and Analysis:
Before starting the validation, ensure that clear, well-defined requirements are in
place. These requirements should reflect the user's needs and the business
objectives.
2. Test Planning:
Develop a test plan that outlines the validation strategy, scope, objectives, and
criteria for success, focusing on validating the product against the user's
requirements.
3. Test Execution:
Conduct tests based on real-world scenarios to ensure the software functions as
expected in its intended environment.
4. User Feedback Collection:
Collect feedback from users during UAT or beta testing to identify any issues and
confirm whether the software meets user needs.
5. Defect Fixing and Retesting:
If issues are identified during validation, they are fixed, and retesting is done to
confirm the fix and ensure the software still meets the requirements.
6. Final Release:
Once the product is validated and defects are resolved, it is ready for final release.

Key Differences Between Verification and Validation:

• Definition – Verification ensures the software is built correctly according to specifications; Validation ensures the software meets user needs and business requirements.
• Focus – Verification concerns technical correctness (is the product being built right?); Validation concerns business goals and user requirements (is the right product being built?).
• Testing Phase – Verification is performed during the development phase (early); Validation is performed near the end of the development cycle (before release).
• Types of Testing – Verification: unit testing, integration testing, code reviews. Validation: UAT, beta testing, system testing, acceptance testing.
• Involvement – Verification involves the development team and testers (technical teams); Validation involves end users, clients, and stakeholders.
• Objective – Verification ensures compliance with requirements and specifications; Validation ensures the product is usable and meets user expectations.
