Testing Question 1

The document outlines key concepts in software development and testing, including definitions of error, bug, and defect, as well as Agile methodologies and roles such as Product Owner and Scrum Master. It details the Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC), including various testing techniques like unit testing, integration testing, and black box testing methods such as equivalence partitioning and boundary value analysis. Additionally, it discusses the importance of verification, validation, and quality assurance in ensuring software quality.

Uploaded by

Praveen Bhargava

(1). Error (Human mistake) :--

An error is a mistake made by a developer during coding or
logic design.
Example:
The developer writes if(score > 60) instead of if(score >= 60).
➡ This is an error in logic.

(2). Bug (Fault in code) :--


A bug is a fault in the software code that causes the system to
behave unexpectedly during execution.
Example:
Because of the above error, students who scored exactly 60 are
marked "Fail" instead of "Pass".
➡ This is a bug found while running the program.
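The error and the resulting bug above can be sketched in a few lines of Python. The function names and pass mark are illustrative assumptions, not taken from a real system:

```python
# Hypothetical grading check illustrating the error and the bug it causes.
PASS_MARK = 60

def grade_buggy(score):
    # Error: the developer wrote > instead of >=
    return "Pass" if score > PASS_MARK else "Fail"

def grade_fixed(score):
    # Correct: a score of exactly 60 should pass
    return "Pass" if score >= PASS_MARK else "Fail"

print(grade_buggy(60))  # "Fail" -- the bug observed at runtime
print(grade_fixed(60))  # "Pass"
```

Running the buggy version with a score of exactly 60 is what exposes the fault during execution.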

(3). Defect (Mismatch with requirement) :--


A defect is a difference between the actual result and the
expected result, found during testing.
Example:
The requirement says “Users must receive a confirmation email
after signup,” but no email is sent.
➡ This is a defect found during testing.

(4). Agile : Agile is a flexible and fast software development
method where work is done in small parts, with regular
feedback and quick improvement.

(i). Product Owner : A Product Owner is the person who
defines what to build and in what order.
Example: They decide to build the login page before the profile
page based on user needs.

(ii). Scrum Master : A Scrum Master is the person who


guides the team to follow Scrum practices and removes any
blockers.
Example: If developers are stuck due to unclear requirements,
the Scrum Master arranges a meeting with the Product Owner
to resolve it.

(iii). Agile Team : An Agile team is a cross-functional


group of professionals (like developers, testers, designers) who
work together to deliver small, working parts of a product in
short cycles.
Example: A team of 6 members builds and delivers a working
shopping cart feature in one sprint.
(iv). Sprint : A Sprint is a short, fixed period (usually 1–4
weeks) in Agile during which a team builds and delivers a usable
part of the product.
Example: The team developed and tested the user registration
feature in a 2-week sprint.

(v). Scrum : Scrum is an Agile framework that organizes work into


short cycles (sprints) with defined roles and meetings to deliver
and improve the product continuously.
Example: A team using Scrum holds daily stand-ups, works in 2-
week sprints, and delivers a new feature at the end of each
sprint.

(vi). Burndown Chart : A Burndown Chart is a graph that


shows how much work is left in a sprint and helps track
progress toward the sprint goal.
Example: If tasks decrease daily on the chart, the team is on
track to finish on time.

(vii). Product Backlog: Product Backlog is a list of all


features, enhancements, bug fixes, and technical tasks needed
for a product, prioritized by the product owner.
🧾 Example:
A to-do list like: "Add search bar", "Fix login bug", "Improve
page speed" — all in one place.
(viii). Prioritized Backlog : A prioritized backlog is a list of
product features, bugs, or tasks arranged in order of
importance, so the team works on the most valuable items first.

(5). Verification :--


🔸 Before development starts, the team checks the requirement
document (SRS) and design to make sure everything is correct
and complete.
Example:
 QA reviews the login flow diagram to ensure it matches
the client’s requirement.
 Developer checks if the API list has proper
request/response formats before coding.

(6). Validation :--


🔸 After the software is built, testers actually run the app to see if
it behaves correctly.
Example:
Tester logs into the app with valid and invalid data to check if
login works as expected.
Client tests the “Add to Cart” feature and confirms the total
price is correct.
(7). QA (Quality Assurance) : QA is about preventing defects by
improving the process.
Example: Creating a test plan before development to ensure
quality steps are followed.
(8). QC (Quality Control) : QC is about finding defects in the actual
product.
Example: Testing the login page and reporting a bug when
incorrect login still allows access.

(9). SDLC(Software Development Life Cycle ) Phases


 Client Requirement Meeting
👉 Talk to the client/stakeholders to understand exactly what
they need.
Example: "We want an e-commerce website with online
payment and admin panel."
 Requirement Document (SRS - Software Requirement
Specification)
👉 Write all features, functions, and rules in a clear document for
team reference.
Example: Login must work with email or phone; payment via
UPI.
 Design (UI/UX + Architecture)
👉 Designers make screen designs (Figma, Adobe XD) and
developers plan structure.
Example: Homepage layout, database schema, API design.
 Environment Setup
👉 Developers set up tools like Git, VS Code, database, and
backend servers.
Example: Create repo on GitHub, install required packages,
connect database.
 Development (Frontend + Backend)
👉 Developers start coding features in parts — frontend UI,
backend logic, and APIs.
Example: Create product page, search feature, cart logic, login
API.
 Unit Testing (by Developers)
👉 Developers test their own code (small pieces) to ensure it's
working properly.
Example: Check if product API returns correct results.
 Integration Testing (by Testers)
👉 QA team checks that all parts work together smoothly.
Example: Add-to-cart works after login and calculates correct
total.
 Bug Fixing + Feedback Loop
👉 QA reports bugs → devs fix → QA retests until clean.
Example: Fixing "payment fails on 2nd attempt" issue.
 UAT (User Acceptance Testing)
👉 Client or Product Owner tests the software to confirm it
meets needs.
Example: Client logs in, places orders, checks invoice – gives OK.
 Deployment
👉 Code is pushed to live server (e.g., AWS, Azure) or Play
Store/Web.
Example: E-commerce site goes live at www.example.com.
 Maintenance + Updates
👉 Fix live issues, add new features, and monitor performance.
Example: Add coupons after 1 month or fix real user bugs.

(10). STLC (Software Testing Life Cycle) :--


1. Requirement Analysis
👉 QA team reads the SRS (Requirement Doc) to understand
what to test.
Example: Is login required? Are there password rules?
2. Test Planning
👉 QA Lead prepares the test strategy, decides tools,
timeline, and who will test what.
Example: Manual testing or automation? Who tests login?
Who tests payment?
3. Test Case Design
👉 Testers write test cases based on the requirements.
Example: Write test steps for valid login, invalid login,
empty fields, etc.
4. Test Environment Setup
👉 DevOps or QA team sets up the testing environment
(test server, DB, tools).
Example: Load the test build on a staging server with
sample data.
5. Test Execution
👉 Testers execute the test cases and mark Pass/Fail.
Example: Run test cases on login form — login fails? Report
bug.
6. Defect Reporting
👉 Found bugs are logged in Jira or other tools with
screenshots and steps.
Example: "Login accepts wrong password – severity: High"
7. Test Closure
👉 After fixing and retesting, QA prepares a test summary
report.
Example: Out of 30 test cases, 27 passed, 3 failed and were
fixed.
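The test case design step above (valid login, invalid login, empty fields) can be sketched as executable checks. The `login` function, its credential store, and its messages are illustrative assumptions, not a real API:

```python
# Hypothetical login function plus the three designed test cases.
VALID_USERS = {"user@example.com": "Secret@123"}  # assumed test data

def login(email, password):
    if not email or not password:
        return "Error: fields cannot be empty"
    if VALID_USERS.get(email) == password:
        return "Login successful"
    return "Error: invalid credentials"

# Test case 1: valid login
assert login("user@example.com", "Secret@123") == "Login successful"
# Test case 2: invalid login
assert login("user@example.com", "wrong") == "Error: invalid credentials"
# Test case 3: empty fields
assert login("", "") == "Error: fields cannot be empty"
```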

(11). Levels of Testing :--


1. Unit Testing
Testing individual units/components of code in isolation.
Example: A developer tests the login() function separately using
test data.
2. Integration Testing
Testing how two or more modules work together after
combining.
Example: Checking if the login module properly connects with
the dashboard module after login.
1. Top-Down Approach
Testing starts from top modules and goes downward, using
stubs for unfinished lower modules.
Example: Testing the main menu module while using a stub to
mimic the login module it calls.
2. Bottom-Up Approach
Testing starts from lower-level modules first, using drivers to
simulate higher modules.
Example: Testing the login logic first and using a driver to
simulate the call from the dashboard.
3. Hybrid (Sandwich) Approach
A mix of top-down and bottom-up — both high-level and low-
level modules are tested together.
Example: Testing login and dashboard together while still using
stubs/drivers for unfinished parts.
🔹 Stub
A fake/dummy module that replaces a called (low-level)
module not yet developed.
Example: If the payment module isn't ready, use a stub to
return "Payment Successful" during testing.
🔹 Driver
A fake/dummy module that simulates a calling (high-level)
module not yet developed.
Example: If the cart module isn't ready to call checkout, a driver
manually calls the checkout code for testing.
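A minimal sketch of a stub and a driver, reusing the payment/checkout example above. All module and function names here are illustrative assumptions:

```python
# Stub: stands in for the unfinished low-level payment module.
def payment_stub(amount):
    return "Payment Successful"

# Module under test: calls the (stubbed) payment module.
def checkout(cart_total, pay=payment_stub):
    return pay(cart_total)

# Driver: stands in for the unfinished high-level cart module
# and manually calls checkout for testing.
def driver():
    result = checkout(499)
    print(result)

driver()  # prints "Payment Successful"
```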
3. System Testing:--
System Testing is the process of testing the entire software
application as a complete system to ensure it meets the specified
requirements.
It includes both functional and non-functional testing and is
performed in an environment similar to production.
This is a type of black-box testing done after integration testing and
before user acceptance testing.

Example: QA tests the full app flow — login, cart, checkout,


payment.

(i). UI Testing (User Interface Testing):--

Definition:
Tests how the application looks and behaves visually — layout, buttons,
fonts, colors, alignment, responsiveness, etc.

Example:
Is the "Submit" button correctly aligned and visible on all devices?

(ii). UX Testing (User Experience Testing):--

Definition:
Tests how smooth, intuitive, and user-friendly the software is for real
users.

Example:
Can a new user easily understand the navigation and complete checkout
without confusion?

Microsoft Six Rules for UI/UX Testing :--


Functional Testing :-- To ensure that each and every
functionality is working fine with respect to the customer
requirement.
Black Box Testing :-- Black Box Testing is a method of software testing
where the tester checks the functionality of the application without
knowing its internal code or logic. Only inputs and expected outputs are
tested.
Example: If you're testing a login page:

 Input: Email and password


 Expected Output: User logs in or gets an error message
 Tester doesn’t see or check the backend code — only the result.
(12). Black Box vs White Box Testing :--
Black Box Testing:--

Tests the functionality of the software without knowing the internal


code.
Focus: What the system does.
Done by: Testers.
Example: Enter login details and check if login works.

White Box Testing:--

Tests the internal code, logic, and structure of the software.


🔹 Focus: How the system works.
🔹 Done by: Developers or testers with coding knowledge.
🔹 Example: Check how the login function handles input and errors.

(13). Black Box Testing Techniques :--


(i). Equivalence Partitioning :--
What: ECP (Equivalence Class Partitioning) is a testing technique
where input data is divided into valid and invalid groups, and one value
from each group is tested to reduce test cases efficiently.
Why: To avoid testing every possible input.
Example:
For age input (1–100):
Test with 25 (valid), 0 or 101 (invalid) — no need to test all 100 values.

What is Equivalence Class Partitioning?

Equivalence Class Partitioning is a black box technique (the code is not
visible to the tester) which can be applied to all levels of testing like unit,
integration, system, etc. In this technique, you divide the set of test
conditions into partitions that can be considered the same.

 It divides the input data of software into different equivalence data


classes.
 You can apply this technique, where there is a range in the input
field.
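A minimal sketch of the age-field example (valid range 1–100), testing one representative value per partition. The `is_valid_age` helper is an illustrative assumption:

```python
# ECP: one representative value per partition instead of all 100+ inputs.
def is_valid_age(age):
    return 1 <= age <= 100

assert is_valid_age(25)       # valid partition: 1-100
assert not is_valid_age(0)    # invalid partition: below the range
assert not is_valid_age(101)  # invalid partition: above the range
```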

(ii). Boundary Value Analysis (BVA) :--


What: Boundary Value Analysis is a black box testing technique that
focuses on testing values at the edges (boundaries) of input ranges,
where bugs are most likely to occur.
Why: Errors often occur at boundaries.
Example:
If input range is 1–100, test with 0, 1, 100, 101.

What is Boundary Testing?


Boundary testing is the process of testing between extreme ends or boundaries
between partitions of the input values.

 So these extreme ends like Start- End, Lower- Upper, Maximum-Minimum,


Just Inside-Just Outside values are called boundary values and the testing is
called "boundary testing".
 The basic idea in boundary value testing is to select input variable values at
their:

1. Minimum

2. Just above the minimum

3. A nominal value
4. Just below the maximum

5. Maximum

Example 1: Equivalence and Boundary Value:--


 Let's consider the behavior of the Order Pizza text box below.
 Pizza values 1 to 10 are considered valid; a success message is shown.
 Values 11 to 99 are considered invalid for an order, and an error
message will appear: "Only 10 Pizza can be ordered".

Here is the test condition :--

1. Any number greater than 10 entered in the Order Pizza field (let's say 11) is
considered invalid.

2. Any number less than 1, that is 0 or below, is considered invalid.

3. Numbers 1 to 10 are considered valid

4. Any 3-digit number, say -100, is invalid.

We cannot test all the possible values because if done, the number of test cases
will be more than 100. To address this problem, we use the equivalence
partitioning hypothesis, where we divide the possible values of the pizza order
into groups or sets as shown below, where the system behavior can be
considered the same.
The divided sets are called Equivalence Partitions or Equivalence Classes. Then
we pick only one value from each partition for testing. The hypothesis behind this
technique is that if one condition/value in a partition passes all others will also
pass. Likewise, if one condition in a partition fails, all other conditions in that
partition will fail.

Boundary Value Analysis- in Boundary Value Analysis, you test boundaries


between equivalence partitions
In our earlier example, instead of checking one value for each partition, you will
check the values at the partitions like 0, 1, 10, 11 and so on. As you may observe,
you test values at both valid and invalid boundaries. Boundary Value Analysis is
also called range checking.

Equivalence partitioning and boundary value analysis(BVA) are closely related


and can be used together at all levels of testing.

Example 2: Equivalence and Boundary Value

The following password field accepts a minimum of 6 characters and a maximum
of 10 characters. That means results for values in partitions 0-5, 6-10, 11-14
should be equivalent.

Test Scenario #   Test Scenario Description                     Expected Outcome
1                 Enter 0 to 5 characters in password field     System should not accept
2                 Enter 6 to 10 characters in password field    System should accept
3                 Enter 11 to 14 characters in password field   System should not accept
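The three scenarios above can be expressed as executable checks; the `accepts_password` helper is an illustrative assumption:

```python
# Password-length check mirroring the partitions 0-5, 6-10, and 11-14.
def accepts_password(pw):
    return 6 <= len(pw) <= 10

assert not accepts_password("a" * 5)   # 0-5 characters: should not accept
assert accepts_password("a" * 6)       # 6-10 characters: should accept
assert accepts_password("a" * 10)
assert not accepts_password("a" * 11)  # 11-14 characters: should not accept
```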

Example 3: Input box should accept the numbers 1 to 10

Here we will see the Boundary Value Test Cases


Test Scenario Description   Expected Outcome
Boundary Value = 0          System should NOT accept
Boundary Value = 1          System should accept
Boundary Value = 2          System should accept
Boundary Value = 9          System should accept
Boundary Value = 10         System should accept
Boundary Value = 11         System should NOT accept
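The boundary cases in the table can be checked directly; the `accepts_number` helper is an illustrative assumption:

```python
# BVA: check values at and just around the edges of the 1-10 range.
def accepts_number(n):
    return 1 <= n <= 10

for value, expected in [(0, False), (1, True), (2, True),
                        (9, True), (10, True), (11, False)]:
    assert accepts_number(value) == expected
```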

Why Equivalence & Boundary Analysis Testing

1. This testing is used to reduce a very large number of test cases to manageable
chunks.

2. Very clear guidelines on determining test cases without compromising on the


effectiveness of testing.

3. Appropriate for calculation-intensive applications with a large number of


variables/inputs

Summary:

Boundary Analysis testing is used when practically it is impossible to test a large


pool of test cases individually

Two techniques - Equivalence Partitioning & Boundary Value Analysis testing


techniques are used

In Equivalence Partitioning, first, you divide a set of test conditions into
partitions that can be considered the same.

In Boundary Value Analysis you then test boundaries between equivalence


partitions

Appropriate for calculation-intensive applications with variables that represent


physical quantities

(iii). Decision Table Testing :--


Decision Table Testing is a black-box testing technique used to test all
possible combinations of conditions and actions in a logical and
structured way.
It’s especially useful when the system behavior depends on multiple
rules or conditions.
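A minimal sketch of a decision table for a hypothetical login rule set; the conditions and actions here are illustrative assumptions:

```python
# Decision table: (valid_user, valid_password) -> action.
# Each key is one rule, so every combination of conditions is covered.
decision_table = {
    (True,  True):  "Grant access",
    (True,  False): "Show password error",
    (False, True):  "Show user error",
    (False, False): "Show login error",
}

def login_action(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]

# One test per rule exercises the full table:
assert login_action(True, True) == "Grant access"
assert login_action(False, False) == "Show login error"
```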

(iv). State Transition Testing :--


State Transition Testing is a black-box testing technique where you
test the behavior of a system when it moves from one state to another
in response to events or inputs.
It’s useful when the system's output depends on current state and user
action.
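A minimal sketch of state transition testing for a hypothetical ATM PIN flow; the states, events, and three-attempt rule are illustrative assumptions:

```python
# State transition table: (current_state, event) -> next_state.
transitions = {
    ("1st_try", "wrong_pin"):   "2nd_try",
    ("2nd_try", "wrong_pin"):   "3rd_try",
    ("3rd_try", "wrong_pin"):   "locked",
    ("1st_try", "correct_pin"): "access",
    ("2nd_try", "correct_pin"): "access",
    ("3rd_try", "correct_pin"): "access",
}

def next_state(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return transitions.get((state, event), state)

# Test: three wrong PINs in a row should lock the card.
state = "1st_try"
for _ in range(3):
    state = next_state(state, "wrong_pin")
assert state == "locked"
```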

(14). System Testing vs Black Box Testing :--


System Testing :--
 Definition: System Testing is a high-level testing that checks the
entire application as a whole, to ensure it meets all requirements.
 Scope: Tests end-to-end functionalities (login, payment, search,
etc.) including integrations.
 Type: It is a type of Black Box Testing.
 Tester: Done by QA team, not developers.

Example:
In Amazon, checking whether a user can search, add to cart, and place
an order successfully — all together — is system testing.

Black Box Testing --

 Definition: Black Box Testing is a testing technique where the


tester only checks the input-output without knowing the internal
code.
 Scope: Can be applied at various levels (unit, integration, system).
 Type: It is a testing approach, not a specific phase.
 Tester: QA/tester checks from the user's view.

Example:
Testing the login feature of Flipkart — enter correct username &
password → see if it logs in (without knowing how the backend works)
— is black box testing.

(15). Alpha, Beta, and User Acceptance Testing (UAT) :--

Alpha, beta, and User Acceptance Testing (UAT) are distinct phases in
software development, each with a specific focus. Alpha testing is an
internal, early-stage testing process, while beta testing involves a wider
group of external users. UAT, on the other hand, is the final phase
where end-users validate the software against their requirements
before official release.

Alpha Testing :--
• Purpose: To identify critical bugs and issues early in the development
cycle.

• Location: Typically conducted within the development team's
environment.

• Testers: Internal development team members or specialized QA
testers.

• Focus: Primarily on functionality and usability, often using both black-
box and white-box testing techniques.

• Outcome: Provides feedback to developers for fixing major issues
before moving to beta testing.
Beta Testing :--
• Purpose: To gather real-world feedback on the software's
performance, usability, and overall user experience.

• Location: In a real-world environment, often with a limited group of
external users.

• Testers: A select group of potential or actual end-users.

• Focus: Primarily on usability, functionality, and real-world
performance.

• Outcome: Provides feedback for final adjustments and bug fixes
before public release.

User Acceptance Testing (UAT) :--

• Purpose: To validate that the software meets the specific business
requirements and user expectations.

• Location: Can be conducted in a testing environment or in the live
environment, depending on the specific type of UAT.

• Testers: End-users or clients who will be using the software.

• Focus: Ensuring the software is fit for its intended purpose and meets
the agreed-upon requirements.

• Outcome: Gives the final "go/no-go" decision before the software is
released to the public.

In essence: Alpha testing is for early bug detection, beta testing is for
real-world user feedback, and UAT is the final validation of the
software's readiness for release.
