Answers to Theory Questions

The document provides detailed answers to theory questions related to software testing, including definitions of severity and priority, phases of the Software Development Life Cycle (SDLC), and types of testing. It also includes a case study with login test scenarios and a bug report, as well as answers to API testing questions and insights on AI in QA. Additionally, it covers practical QA assignments, efficiency analysis, and calculations related to test case effectiveness and effort estimation.


1.1 Answers to Theory Questions

• AQ1. Severity is the impact of the bug on the system's functionality.


Priority is the urgency of fixing the bug based on business needs.

• AQ2. Software Development Life Cycle (SDLC) is a process used to design, develop, test,
and deploy software efficiently.

SDLC Phases are stated below:

1. Requirement Gathering – Understanding what needs to be built.


2. Design – Planning the system architecture and design.
3. Development – Writing the code.
4. Testing (QA) – Testing the software for bugs and issues.
5. Deployment – Releasing the product to users.
6. Maintenance – Fixing issues and updating features post-release.

QA is involved mainly in the Testing phase, but in modern SDLCs (like Agile), QA participates
from the beginning to ensure quality at every stage.

In short: QA ensures the product meets quality standards before and after release.

• AQ3. Regression testing is re-testing existing functionality after changes (like bug fixes or
new features) to ensure nothing that previously worked is broken.

It’s important because it helps catch unintended side effects and ensures the system still
works as expected.

• AQ4. Two types of testing that don't require code execution are:

1. Static Testing – Reviewing code, documents, or requirements without running the software (e.g., code reviews, walkthroughs).

2. Inspection – A formal, documented review process to find defects in design or documentation (a structured form of static testing).

• AQ5.

Boundary Value Analysis (BVA) tests values at the edges of input ranges (e.g., min, max).
Equivalence Partitioning (EP) divides inputs into valid and invalid groups, testing one value
from each group.
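The two techniques can be sketched in code. This is an illustrative example, assuming a numeric input field that accepts values from 1 to 100 (the range is made up for the sketch):

```python
# Illustrative sketch of BVA and EP test-value selection for an
# input that accepts 1..100. The range and partition choices are
# example assumptions, not from any specific application.

def bva_values(lo, hi):
    """Boundary Value Analysis: values at and just around each edge."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def ep_representatives(lo, hi):
    """Equivalence Partitioning: one representative value per partition."""
    return {
        "valid": (lo + hi) // 2,      # any value inside the range
        "invalid_below": lo - 10,     # any value below the range
        "invalid_above": hi + 10,     # any value above the range
    }

print(bva_values(1, 100))            # [0, 1, 2, 99, 100, 101]
print(ep_representatives(1, 100))
```

BVA yields six focused checks per range, while EP collapses each group of equivalent inputs into a single test.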

• AQ6.

Roles in an Agile team include:

● Product Owner – Defines requirements and priorities.

● Scrum Master – Facilitates the process and removes blockers.


● Development Team – Builds the product (developers, designers, etc.).

● QA (Tester) – Ensures product quality through testing.

QA fits into the team by working closely with developers and the product owner to test early
and often, provide feedback, and help ensure each sprint delivers a high-quality product.

• AQ7. The four core values of the Agile Manifesto are:

1. Individuals and interactions over processes and tools

2. Working software over comprehensive documentation

3. Customer collaboration over contract negotiation

4. Responding to change over following a plan

1.2 Answers to Case Study: Bug Report

5 Login Test Scenarios:

1. Login with valid email and password – should succeed.

2. Login with valid email and invalid password – should show error.

3. Login with invalid email format – should show validation error.

4. Login with empty fields – should prompt to fill required fields.

5. Check case sensitivity of email and password – should behave correctly.
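Scenarios 2–4 can be captured as checks on a form-validation helper. This is a minimal sketch; the function name, validation rules, and error messages are assumptions for illustration, not from any real application:

```python
import re

# Hypothetical login-form validation helper illustrating scenarios
# 2-4 above. Rules and messages are example assumptions.

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_login_form(email, password):
    """Return an error message, or None if the form may be submitted."""
    if not email or not password:
        return "Please fill in all required fields"   # scenario 4
    if not EMAIL_RE.match(email):
        return "Invalid email format"                 # scenario 3
    return None                                       # proceed to auth check

assert validate_login_form("", "") == "Please fill in all required fields"
assert validate_login_form("not-an-email", "secret") == "Invalid email format"
assert validate_login_form("user@example.com", "secret") is None
```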

Bug Report: Broken "Forgot Password" Link

● Title: "Forgot Password" link not working

● Severity: Medium

● Priority: High

● Steps:

1. Open login page

2. Click "Forgot Password"


3. Link does nothing or shows error

● Expected: Should redirect to password reset page

● Actual: Link is broken or inactive

Impact in Production:

● Users can’t reset passwords → login blocked

● Increase in support requests

● Poor user experience

● Potential customer loss and damage to reputation

Section 2: Answers to Basic API Testing Questions

1. Authentication verifies the user (e.g., via API key, token, or credentials).

Authorization checks if the user has permission to access a resource.

In API testing, it's handled by:

● Adding auth tokens (e.g., Bearer Token) in headers

● Testing role-based access (e.g., admin vs user)

● Validating proper error codes (e.g., 401 Unauthorized, 403 Forbidden)

● Using tools like Postman or automated scripts to manage tokens dynamically

2. For a banking site API, the following tests are essential for Authentication and
Authorization:
● Token Validation – Check access with valid, invalid, and expired tokens.

● Role-Based Access – Ensure users (e.g., customer, admin) access only allowed
endpoints.

● Unauthenticated Access – Ensure APIs reject requests without tokens (401).


● Permission Testing – Verify restricted actions (e.g., fund transfer) fail for unauthorized
roles (403).

● Session Handling – Test token expiry, refresh, and logout behavior.

These tests help secure sensitive financial operations.
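The expected-status logic that such a test suite asserts against can be sketched as a small helper. The roles and the admin-only endpoint are illustrative assumptions:

```python
# Sketch of the expected-status rules an auth/authz test suite
# checks. Role names and endpoint permissions are made-up examples.

def expected_status(token_valid, role, endpoint_roles):
    """Status code the API should return for a given request."""
    if not token_valid:
        return 401  # Unauthorized: missing, invalid, or expired token
    if role not in endpoint_roles:
        return 403  # Forbidden: authenticated, but not permitted
    return 200      # OK: authenticated and authorized

admin_only = {"admin"}
assert expected_status(False, "admin", admin_only) == 401  # bad/missing token
assert expected_status(True, "user", admin_only) == 403    # wrong role
assert expected_status(True, "admin", admin_only) == 200   # allowed
```

Each assertion corresponds to one of the test categories above: unauthenticated access, permission testing, and the happy path.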

3. When a token is missing, the expected API status code is: 401 Unauthorized – The
request lacks valid authentication credentials.

4. Postman supports four main variable scopes:


● Global – Accessible in any request or collection.

● Environment – Specific to a selected environment (e.g., dev, prod).

● Collection – Scoped to a particular collection.

● Local – Temporary, used within a single request or script.

These help manage data and reuse values efficiently.

5. POST – Creates a new resource.


PUT – Updates or replaces an existing resource completely.
PATCH – Updates part of a resource (partial update).
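The semantic difference between PUT and PATCH is easy to miss. The toy in-memory store below (not a real HTTP server, just the update rules) makes it concrete:

```python
# Toy in-memory resource store illustrating POST / PUT / PATCH
# semantics. No networking involved; just the update rules.

resources = {}

def post(data):
    """POST: create a new resource with a fresh id."""
    rid = len(resources) + 1
    resources[rid] = dict(data)
    return rid

def put(rid, data):
    """PUT: replace the resource completely; old fields are dropped."""
    resources[rid] = dict(data)

def patch(rid, data):
    """PATCH: merge the partial update into the existing resource."""
    resources[rid].update(data)

rid = post({"name": "Alice", "role": "tester"})
patch(rid, {"role": "lead"})                       # "name" survives
assert resources[rid] == {"name": "Alice", "role": "lead"}
put(rid, {"name": "Alice"})                        # "role" is gone
assert resources[rid] == {"name": "Alice"}
```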

Section 3: Answers on AI Testing in QA

1. Yes. An AI tool once flagged a minor UI misalignment as a high-severity bug. I disagreed, as it didn't impact functionality. I manually adjusted the bug report to low severity and low priority and prioritized functional issues instead, ensuring testing efforts stayed focused on critical areas.

2. AI (Artificial Intelligence) enables machines to mimic human thinking and decision-making.

In software testing, AI helps by:

● Auto-generating test cases

● Detecting defects faster

● Improving test coverage


● Predicting risk areas

● Speeding up regression testing

It boosts efficiency, accuracy, and test automation.

3. Yes, AI is used in QA for:


● Test case generation
● Defect prediction
● Visual UI testing
● Test maintenance
● Performance anomaly detection

These help improve speed, accuracy, and test reliability.

4. Self-healing tests can help. This AI feature automatically detects UI changes (such as changed element locators) and updates the test scripts to prevent failures, reducing manual maintenance.
5. AI can analyze application behavior and user flows to auto-generate test cases,
prioritize high-risk areas, and suggest reusable test patterns, reducing the need for
writing extensive manual test cases.

Yes, I’ve worked with AI-powered tools such as:

1. Applitools

It uses AI for smart element detection, self-healing tests, and visual validations. The experience was positive: tests were more stable and required less maintenance.

Section 4: Answer to Practical QA Assignment

Here's a detailed breakdown based on the web application: https://todobackend.com/client

5 Bugs Found:

B1 – No validation for empty to-do items (allows adding blank tasks). Severity: Medium; Priority: High.

B2 – Items marked as completed are not visually distinguished clearly (no color change). Severity: Low; Priority: Medium.

B3 – Refreshing the page clears all tasks (no data persistence). Severity: High; Priority: High.

B4 – Deleting an item doesn’t show any confirmation prompt. Severity: Low; Priority: Medium.

B5 – Filter buttons (All, Active, Completed) sometimes don’t update the list immediately. Severity: Medium; Priority: Medium.

Test Cases (Positive & Negative)

TC1 – Add a valid to-do item. Expected: item appears in the list.

TC2 – Try adding an empty item. Expected: app blocks it or shows an error.

TC3 – Mark an item as completed. Expected: item moves to the Completed filter and changes appearance.

TC4 – Delete a to-do item. Expected: item is removed from the list.

TC5 – Apply filters after adding tasks. Expected: the correct list (All / Active / Completed) is displayed.

TC6 – Refresh after adding tasks. Expected: items remain (if persistence is expected).

Suggested UX Improvement
Add persistent storage (localStorage or backend DB) to retain user tasks even after
refreshing or closing the browser. This will greatly improve usability and user satisfaction.

Section 5: Answers to QA Team Efficiency Analysis

1. Each field (username and password) has 3 possible values:

● Valid

● Invalid

● Empty

So, the total number of combinations = 3 × 3 = 9 test cases

Possible Test Case Combinations:

● Valid username + Valid password

● Valid username + Invalid password

● Valid username + Empty password

● Invalid username + Valid password

● Invalid username + Invalid password

● Invalid username + Empty password

● Empty username + Valid password

● Empty username + Invalid password

● Empty username + Empty password

So, 9 test cases are needed to cover all combinations.
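The nine combinations above can be generated mechanically, which scales better when more fields or states are added. A minimal sketch:

```python
from itertools import product

# Enumerate all username/password state combinations described above.

states = ["valid", "invalid", "empty"]
combos = list(product(states, states))   # (username_state, password_state)

assert len(combos) == 9                  # 3 x 3 = 9 test cases
for username, password in combos:
    print(f"username={username}, password={password}")
```

Adding a third field (e.g., a captcha with the same three states) would simply make it `product(states, repeat=3)`, giving 27 cases.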

2. Bug Leakage Rate = (Bugs found after release ÷ Total bugs (before + after release)) × 100

Example Calculation:

● Bugs found during testing: 80

● Bugs found after release: 10

● Total bugs: 80 + 10 = 90

Bug Leakage Rate = (10 ÷ 90) × 100 = 11.11%
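The leakage formula above, as a small helper (the figures are the example values from the calculation):

```python
# Bug Leakage Rate = bugs found after release / total bugs * 100

def bug_leakage_rate(found_after_release, found_during_testing):
    total = found_during_testing + found_after_release
    return found_after_release * 100 / total

rate = bug_leakage_rate(10, 80)   # the example figures above
assert round(rate, 2) == 11.11
```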


3. Pass Percentage Formula:

Pass % = (Number of Passed Test Cases ÷ Total Test Cases) × 100

Given:

● Total test cases = 300

● Failed test cases = 90

● Passed test cases = 300 - 90 = 210

Calculation:

Pass % = (210 ÷ 300) × 100 = 70%

Final Answer:

Pass Percentage = 70%
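The same calculation in code, including the intermediate step of deriving passed cases from the failed count:

```python
# Pass % = passed test cases / total test cases * 100

def pass_percentage(total, failed):
    passed = total - failed          # 300 - 90 = 210 in the example
    return passed * 100 / total

assert pass_percentage(300, 90) == 70.0
```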

4. Effort Estimation

Formula:
Number of Testers = Total Test Cases ÷ (Test Cases per Tester per Day × Number of Days)

Given:

● Total Test Cases = 1,000


● Test Cases per Tester per Day = 50
● Number of Days = 2

Calculation:
Number of Testers = 1,000 ÷ (50 × 2)
Number of Testers = 1,000 ÷ 100
Number of Testers = 10

Final Answer:
10 testers are needed to execute 1,000 test cases in 2 days.

5. Time Allocation Problem

Scenario:
We have 6 hours (360 minutes) to execute 20 test cases.
Each test case takes either 15 minutes or 30 minutes.
We must complete all 20 test cases within 360 minutes.

Let:
● x = number of 15-minute test cases

● y = number of 30-minute test cases

We need to solve the system of equations:

1. x + y = 20 (total test cases)

2. 15x + 30y = 360 (total time in minutes)

Step-by-step Solution:

From equation 1:
x = 20 - y

Substitute into equation 2:


15(20 - y) + 30y = 360
300 - 15y + 30y = 360
15y = 60
y = 4

Now, substitute back:


x = 20 - 4 = 16

Final Answer:

You can include:

● 16 test cases that take 15 minutes

● 4 test cases that take 30 minutes

This fits exactly into 6 hours (360 minutes).
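The algebraic solution can be cross-checked by brute force over all possible mixes, which also confirms it is the only one:

```python
# Brute-force check of the mix derived above: find every (x, y) with
# x + y = 20 test cases and 15x + 30y = 360 minutes.

TOTAL_CASES = 20
TOTAL_MINUTES = 360

solutions = [
    (x, TOTAL_CASES - x)
    for x in range(TOTAL_CASES + 1)
    if 15 * x + 30 * (TOTAL_CASES - x) == TOTAL_MINUTES
]

assert solutions == [(16, 4)]   # 16 short cases, 4 long cases; unique
```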

6. Test Case Effectiveness

Formula:
Test Case Effectiveness (%) = (Bugs found by test cases ÷ Total bugs) × 100

Given:

● Bugs found by test cases = 90


● Bugs found by users after release = 30
● Total bugs = 90 + 30 = 120

Calculation:
Test Case Effectiveness = (90 ÷ 120) × 100
Test Case Effectiveness = 75%

Final Answer:

Test Case Effectiveness = 75%
