SDET
Answer:
"I'm a Mechanical Engineering graduate with strong analytical skills and a growing passion for
software testing and data analysis. I recently completed a 6-month data science internship at
Cuvette where I worked on real-world datasets, wrote Python scripts for data validation, and
built dashboards using Power BI and Excel. Currently, I'm working as a Floor Engineer at Ashok
Leyland where I track and analyze workshop data, which mirrors the bug-tracking and issue-
resolution process in QA. My technical toolkit includes Python, SQL, Excel, and Tableau, and I’m
now focused on transitioning into a Software Development Engineer in Test role to apply these
skills in a more test-driven, automated environment."
Answer:
"I'm excited about the opportunity to work at Visa because of its global impact and emphasis on
innovation in the payments domain. As someone passionate about building reliable systems,
the SDET role aligns perfectly with my skillset — blending testing, data analysis, and scripting.
The company’s focus on security and quality matches my interest in test validation and defect
prevention. I'm particularly drawn to the opportunity to work with automation tools,
collaborate with developers, and contribute to ensuring seamless payment experiences for
millions."
Answer:
"While I haven't worked in a formal QA automation team yet, I’ve written Python scripts to
validate data in real-world projects. I’ve also simulated test scenarios during my internship,
including creating test cases for dashboards and analyzing results manually and through Excel-
based metrics. I'm currently learning Selenium to build automated test cases and am
comfortable designing test plans, documenting bugs, and using SQL for backend data checks.
These experiences help me approach QA tasks methodically and with an eye for accuracy."
Answer:
"I start by thoroughly understanding the requirement or feature. Then, I define the test
scenario, identify input values, expected output, and outline preconditions. I write clear, step-
by-step test cases and include positive and negative test data. During execution, I manually
follow the steps or use scripts if automated. Results are logged — if the actual result doesn't
match the expected, I document the defect, severity, and steps to reproduce it. I’ve practiced
this process during my internship with dashboard testing and e-commerce data validation."
Answer:
"During my internship, while analyzing e-commerce data, I noticed that some customer
segmentation outputs didn’t match the sales logic. Using Python and Excel, I traced the issue to
a filter mismatch in the dataset. I documented the finding, communicated it to the mentor, and
updated the filter logic. This improved segmentation accuracy. Though it was data-focused, the
process mirrored defect identification and resolution — understanding the system, identifying
anomalies, and correcting logic."
Answer:
"I’m confident with Python at a beginner-to-intermediate level, especially for data tasks. I’ve
used libraries like Pandas and NumPy to clean, process, and analyze large datasets. In SQL, I’ve
written complex queries to join tables, filter data, and extract summaries, earning a 5-star rating
on HackerRank. I regularly used SQL during my internship for backend checks and validation
steps."
Answer:
"In the e-commerce project, I worked with over 10,000 transaction records. My role involved
checking for inconsistencies like mismatched prices or invalid timestamps. I wrote Python
scripts to clean the data, applied logical rules to validate the fields, and generated reports. I also
visualized findings in Excel to communicate anomalies clearly — very similar to what’s expected
in manual testing and test reporting."
9. What tools are you familiar with that relate to QA or SDET roles?
Answer:
"I'm familiar with Python, SQL, MS Excel, Power BI, and Tableau. I’ve also used Excel for bug
tracking and reporting. While I’m currently learning Selenium and TestNG, I understand their
role in writing and executing automation scripts. I’ve explored JIRA as part of training modules
and plan to deepen my knowledge of test frameworks soon."
Answer:
"My immediate goal is to join a high-performing QA team where I can contribute to test
planning, automation, and quality assurance using Python and SQL. Over time, I want to
become proficient with tools like Selenium, Postman, and Jenkins, and build reusable test
frameworks. Long-term, I see myself growing into a lead QA engineer who not only finds bugs
but improves systems to prevent them."
1. What is an SDET?
An SDET (Software Development Engineer in Test) bridges the gap between developers and
testers. They write code to test software automatically, ensure quality at every stage, and
often contribute to CI/CD pipelines.
I’ve used Python for scripting, MySQL for database validation, and Excel for manual test
tracking. For visualization, I’ve worked with Power BI and Tableau, and I’m currently learning
Selenium for web automation.
3. Difference between QA and Testing?
QA ensures that the processes used during development prevent bugs. Testing focuses on
finding bugs in the product by executing scenarios and validating outcomes.
A test case is a documented scenario that includes test steps, input data, and expected
output. It helps ensure the application behaves as intended.
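As a sketch of how such a test case might be automated, here is a minimal Python unittest example. The apply_discount function and its rules are invented for illustration; the point is the structure of input data, expected output, and positive and negative cases.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_valid_discount(self):
        # Positive case: price 200 with 10% off; expected output is 180.0
        self.assertEqual(apply_discount(200, 10), 180.0)

    def test_out_of_range_percent(self):
        # Negative case: invalid input should raise, not return a wrong value
        with self.assertRaises(ValueError):
            apply_discount(200, 150)
```

Run with `python -m unittest <filename>`; each failed assertion is reported as a failed test.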
Regression testing ensures that new changes or features haven’t negatively impacted existing
functionality. It is vital for product stability over time.
Manual testing is the process of manually executing test cases without using tools. It's ideal
for exploratory testing or when the test case requires visual observation.
Automation testing uses scripts or tools to run tests automatically. It saves time and effort for
large, repeatable test suites like regressions.
8. What is Selenium?
Selenium is a widely used open-source tool for automating web applications. It supports
various languages like Java, Python, and JavaScript and works across multiple browsers.
A bug report documents issues found during testing. It usually includes steps to reproduce,
actual vs expected results, screenshots, severity, and environment details.
10. How is SQL used in software testing?
SQL is used to validate backend data, check if inputs are saved correctly, and retrieve test
data. It’s essential when testing applications with databases.
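A minimal sketch of this kind of backend check, using Python's built-in sqlite3 module with an in-memory database. The orders table and the validation rule are invented for illustration.

```python
import sqlite3

# In-memory database standing in for the application's backend (schema is illustrative)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders (amount, status) VALUES (?, ?)",
    [(120.0, "PAID"), (75.5, "PAID"), (-10.0, "PAID")],
)

# Backend check: no paid order should have a non-positive amount
bad_rows = conn.execute(
    "SELECT id, amount FROM orders WHERE status = 'PAID' AND amount <= 0"
).fetchall()

print("Invalid rows found:", bad_rows)  # the -10.0 order should be flagged
```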
An error is a mistake made by a developer. A defect is a deviation from the requirement, and
a bug is a reported fault that affects the application’s behavior.
Data validation is the process of ensuring that data entered or processed is accurate and
meets specified rules. It’s crucial in both form validation and backend checks.
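A minimal sketch of field-level validation in Python. The field names and limits are invented for illustration; real rules would come from the requirements.

```python
def validate_record(record):
    """Collect rule violations for a record instead of failing on the first one."""
    errors = []
    # Rule 1: email must contain exactly one '@'
    if record.get("email", "").count("@") != 1:
        errors.append("email must contain exactly one '@'")
    # Rule 2: age must be an integer in a plausible range
    age = record.get("age")
    if not isinstance(age, int) or not 0 < age < 120:
        errors.append("age must be an integer between 1 and 119")
    return errors

print(validate_record({"email": "a@b.com", "age": 30}))       # []
print(validate_record({"email": "not-an-email", "age": -5}))  # two violations
```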
Unit testing focuses on testing individual components or functions in isolation. It's usually
done by developers before the integration stage.
Integration testing checks how different modules interact with each other. It ensures the
system works correctly when components are combined.
Test coverage is a metric that shows how much of the application’s code or features have
been tested. It helps ensure comprehensive testing.
Software Development Life Cycle (SDLC) is the process of planning, creating, testing, and
deploying software. It includes phases like requirement analysis, design, development,
testing, and maintenance.
17. What is STLC?
Software Testing Life Cycle (STLC) involves stages like test planning, test case development,
test execution, defect reporting, and closure activities.
Black-box testing involves testing software without knowledge of its internal structure. Testers
focus on input/output and functionality only.
White-box testing involves testing with full knowledge of internal code and logic. It's used to
validate code paths, loops, and conditions.
Test scenarios are high-level concepts that define what needs to be tested. They serve as a
base to create detailed test cases.
A test plan outlines the scope, strategy, objectives, resources, schedule, and deliverables for
testing. It acts as a guide for the QA process.
Exploratory testing is an unscripted approach where testers learn and test the application
simultaneously. It’s useful when requirements are unclear.
A QA dashboard displays key metrics like test execution progress, bug status, and test
coverage. I’ve built dashboards in Excel and Power BI during my projects.
24. What is data cleaning?
Data cleaning involves identifying and correcting invalid, incomplete, or duplicate data. It’s
critical for reliable analysis and test inputs.
MySQL is an open-source relational database used to manage and query structured data. It
supports SQL for retrieving and validating data in test cases.
Joins are used to retrieve related data from multiple tables in a database. Common joins
include INNER JOIN, LEFT JOIN, and RIGHT JOIN.
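A small sketch of INNER JOIN versus LEFT JOIN using Python's sqlite3. The tables and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0);
""")

# INNER JOIN: only customers that have at least one order
inner = conn.execute("""
    SELECT c.name, o.amount FROM customers c
    INNER JOIN orders o ON o.customer_id = c.id
""").fetchall()

# LEFT JOIN: all customers, with NULL amount where no order exists
left = conn.execute("""
    SELECT c.name, o.amount FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
""").fetchall()

print(inner)  # [('Asha', 250.0)]
print(left)   # [('Asha', 250.0), ('Ravi', None)]
```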
A primary key uniquely identifies each record in a table and prevents duplicates. It's often
used in validation during database testing.
Exception handling in Python uses try-except blocks to catch and manage runtime errors. It
ensures graceful recovery from unexpected issues.
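A minimal sketch of a try-except block, here guarding a conversion of untrusted input:

```python
def parse_amount(text):
    """Convert input text to a float, recovering gracefully from bad data."""
    try:
        return float(text)
    except ValueError:
        # Instead of crashing, flag the bad value so the caller can handle it
        return None

print(parse_amount("42.5"))  # 42.5
print(parse_amount("oops"))  # None
```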
Pandas is a Python library used for data manipulation and analysis. I used it in projects to
clean, transform, and validate large datasets.
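A small sketch of the kind of cleaning described, assuming pandas is installed. The dataset is invented to show two common problems: duplicate records and missing values.

```python
import pandas as pd

# Tiny dataset with a duplicated order and a missing price
df = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "price": [100.0, None, None, 250.0],
})

clean = (
    df.drop_duplicates(subset="order_id")  # keep the first copy of each order
      .dropna(subset=["price"])            # drop rows with a missing price
      .reset_index(drop=True)
)

print(clean)  # only orders 1 and 3 survive
```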
Power BI is a Microsoft business intelligence tool that creates interactive dashboards and
reports. I used it to visualize key performance metrics in my internship.
31. What is Tableau?
Tableau is a data visualization tool used for creating interactive charts and dashboards. It
helps in analyzing trends and comparing metrics.
Automation testing reduces manual effort, improves test accuracy, and increases test
coverage. It’s especially effective in regression testing.
I prioritize test cases based on risk, business impact, and frequency of use. Critical
functionalities and high-risk areas come first.
The defect lifecycle typically runs New → Assigned → Open → Fixed → Retest → Verified → Closed. It tracks the journey of a bug from detection to resolution.
Smoke testing is a quick check of critical features to ensure the software build is stable
enough for deeper testing.
Sanity testing quickly verifies that a particular bug fix or change works as intended, without rerunning the full suite. It’s usually narrow and focused.
Dashboard filters allow users to view data based on specific criteria like region, product type,
or date range. I’ve applied filters in Power BI and Tableau.
38. What are your strengths?
I’m analytical, detail-oriented, and a quick learner. I adapt fast to new tools and love solving
problems through logic and testing.
I enjoy breaking things to make them better. QA allows me to contribute to product reliability
and build confidence in every release.
I aim to master automation tools like Selenium and build robust test frameworks. I also want
to contribute to CI/CD integration in testing.
I worked on real datasets using Python, SQL, and Power BI. I created dashboards, performed
data cleaning, and validated results to simulate QA tasks.
In the e-commerce data project, I found a filter logic mismatch causing wrong customer
segments. I corrected it in Python, improving segmentation accuracy.
I use Excel sheets and Power BI dashboards to present test status, pass/fail ratios, and issues.
It helps stakeholders make informed decisions.
I’ve used Excel to track test cases and issues. I’ve started exploring JIRA to manage tasks,
bugs, and sprint cycles effectively.
45. How do you ensure quality in tight deadlines?
I prioritize critical test cases, automate repetitive ones, and communicate clearly with the
team. Fast feedback loops are essential.
Validation checks if the product meets the user needs. Verification ensures the product was
built according to specs and design.
I reach out to stakeholders for clarification and rely on exploratory testing to cover gaps. I also
document assumptions clearly.
Python is readable, fast to write, and has powerful libraries like Pandas and PyTest. It’s ideal
for data validation and test automation.
Yes, I tested filters, calculated fields, and visuals in Power BI and Tableau. I ensured data
mapping matched backend sources.
I bring a strong foundation in data, an eye for detail, and a passion for quality. I’m a fast
learner, and my hands-on experience aligns well with the SDET responsibilities at Visa.
End-to-end testing validates the entire application flow from start to finish to ensure everything
works as expected. It simulates real user scenarios across modules and systems.
52. What is the difference between severity and priority?
Severity indicates the impact of a defect on the system, while priority defines how soon it
should be fixed. A high severity bug may have low priority if it occurs rarely.
Boundary value analysis is a testing technique that focuses on the edge cases of input ranges.
It’s based on the idea that errors often occur at the boundaries.
Equivalence partitioning is a technique that divides input data into valid and invalid classes to minimize the number of test cases. You select one input from each class, assuming the system treats all members of a class the same.
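Both techniques, boundary value analysis and equivalence partitioning, can be sketched against one hypothetical rule, say a valid order quantity of 1 to 100:

```python
# Hypothetical rule under test: a valid order quantity is 1..100 inclusive
def is_valid_quantity(q):
    return 1 <= q <= 100

# Boundary value analysis: test at and just beyond each edge of the range
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

# Equivalence partitioning: one representative from each class
# (below range: invalid, inside range: valid, above range: invalid)
partition_cases = {-5: False, 50: True, 500: False}

for value, expected in {**boundary_cases, **partition_cases}.items():
    assert is_valid_quantity(value) == expected, f"failed at {value}"
print("all boundary and partition cases passed")
```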
A test data strategy defines how data is created, managed, and used in test cases. It can be
static, dynamic, or fetched from external sources like databases.
Mocks and stubs simulate the behavior of real components. They help isolate units during
testing by replacing dependencies.
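A minimal sketch using Python's unittest.mock, with an invented payment-gateway dependency. The mock stands in for the real gateway so the unit can be tested in isolation.

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Hypothetical unit under test: depends on an external payment gateway."""
    response = gateway.charge(cart_total)
    return "success" if response["status"] == "approved" else "failure"

# The mock replaces the real dependency and records how it was called
fake_gateway = Mock()
fake_gateway.charge.return_value = {"status": "approved"}

result = checkout(99.0, fake_gateway)
print(result)  # success
fake_gateway.charge.assert_called_once_with(99.0)
```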
API testing checks if API endpoints function as expected. It verifies response codes, payloads,
error messages, and performance under load.
While I haven’t done live API testing yet, I understand how to use tools like Postman to send
requests, validate responses, and write basic tests.
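As a sketch of the checks API tests make, the response below is a canned dictionary standing in for a live HTTP call; the endpoint shape and field names are invented.

```python
# Canned response standing in for what a GET to a hypothetical /users/7 might return
response = {
    "status_code": 200,
    "json": {"id": 7, "name": "Asha", "active": True},
}

# Typical API assertions: status code, required fields, and field types
assert response["status_code"] == 200
body = response["json"]
assert set(body) >= {"id", "name", "active"}
assert isinstance(body["id"], int)
print("API response checks passed")
```

With a real endpoint, a tool like Postman (or an HTTP client library) would produce the response; the assertions stay the same.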
59. What is cross-browser testing?
It ensures the application behaves the same across multiple browsers like Chrome, Firefox, and
Edge. It’s crucial for consistent user experience.
Assertions check that the actual output matches the expected result. If the condition fails, the
test fails.
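A minimal example of an assertion in Python:

```python
def add(a, b):
    return a + b

# The assertion compares actual vs expected; a mismatch raises AssertionError
actual = add(2, 3)
assert actual == 5, f"expected 5, got {actual}"
print("assertion passed")
```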
Continuous integration (CI) is the practice of merging code into a shared repository frequently. It helps detect problems early through automated builds and tests.
Continuous testing runs automated tests as part of the CI/CD pipeline. It ensures fast feedback
on every code change.
Defect density measures the number of bugs per module or per thousand lines of code. It helps identify risk-prone areas in the codebase.
Load testing evaluates application performance under expected or peak user load. It helps
identify bottlenecks and response times.
Performance testing checks system behavior under various workloads. It includes load, stress, and endurance testing to ensure speed and stability.
66. What is a test environment?
A test environment is a setup that includes hardware, software, databases, and tools required to
execute test cases. It mimics the production setup as closely as possible.
Version control like Git helps track changes in code, collaborate with teams, and manage
releases. It's crucial in modern QA environments.
Key components include bug ID, title, description, steps to reproduce, severity, priority,
environment, screenshots, and status.
Test-driven development (TDD) is a practice where tests are written before the code. It ensures all functionality is testable and minimizes bugs.
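A toy sketch of the TDD order of work, using an invented slugify function: the test is written first, then just enough code to make it pass.

```python
# Step 1 (red): write the test before any implementation exists
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify()  # the test now passes; a refactor step would follow
print("TDD cycle complete")
```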
Behavior-driven development (BDD) focuses on writing test cases in natural language using tools like Cucumber. It improves collaboration among QA, developers, and business teams.
QA in Agile is involved in sprint planning, writing test cases, testing user stories, and
collaborating closely with devs to ensure continuous delivery.
A test suite is a collection of test cases grouped for execution. It may cover a module, a feature,
or a scenario end-to-end.
73. What is static testing?
Static testing involves reviewing code or documents without executing them. Examples include
walkthroughs and code reviews.
Dynamic testing requires code execution. It validates runtime behavior and outputs for given
inputs.
Root cause analysis (RCA) identifies the original reason behind a defect. It helps prevent similar issues in the future by addressing the source, not just the symptom.
Negative testing checks how the system behaves with invalid or unexpected input. It ensures
robustness and error handling.
A positive test case checks if the application works with valid and expected inputs. It confirms
that functionality behaves correctly.
Data-driven testing runs the same test logic with multiple input sets. It helps test boundary
values, combinations, and input variations.
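A small sketch of data-driven testing in Python; the password rule and the input sets are invented. In practice the data often comes from a CSV file or a database.

```python
def is_strong_password(pw):
    """Hypothetical rule: at least 8 characters and at least one digit."""
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

# Same test logic driven by multiple input sets
test_data = [
    ("abc", False),       # too short
    ("abcdefgh", False),  # no digit
    ("abcdefg1", True),   # meets both rules
]
for pw, expected in test_data:
    assert is_strong_password(pw) == expected, pw
print("all data-driven cases passed")
```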
Code coverage measures the percentage of source code executed during testing. High coverage
reduces the chance of undetected bugs.
80. What is a flaky test?
A flaky test is one that fails intermittently without actual code issues. It’s often caused by
timing, network, or environmental instability.
A test strategy is a high-level document that outlines the approach to testing, types of testing to
be done, tools to use, and entry/exit criteria.
Usability testing checks how intuitive and user-friendly an application is. It often involves real
users performing typical tasks.
Acceptance testing ensures the system meets business requirements. It’s usually done before
final release, often by end-users or stakeholders.
Test case versioning means keeping versions of test cases, plans, and data to track changes and maintain traceability across releases.
I review test cases after each sprint or feature update to keep them relevant. I remove obsolete
ones and update those impacted by changes.
87. What is parallel testing?
Parallel testing executes multiple tests at the same time across different environments or
browsers. It saves time and increases efficiency.
Localization testing checks that the UI and messages are translated correctly and fit culturally
for different regions or languages.
Accessibility testing ensures the application is usable by people with disabilities. It follows standards like WCAG and uses tools like screen readers.
Common KPIs include test coverage, pass/fail rate, defect leakage, mean time to detect/fix, and
test execution time.
Traceability links test cases to requirements, ensuring all business needs are validated. It also
helps in impact analysis.
Exploratory data analysis (EDA) is a technique in data science to understand data patterns, outliers, and distributions using visual and statistical tools.
A test execution summary shows how many test cases passed, failed, or were blocked. It helps stakeholders assess product readiness.
94. What is cron in testing or DevOps?
A cron job is a scheduled task in Unix-like systems. It can be used to run test scripts, backups, or
reports at specific intervals.
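For illustration, a crontab entry that would run a hypothetical nightly test script at 2:00 AM; the script path and log location are made up.

```
# minute hour day-of-month month day-of-week  command
0 2 * * * /usr/bin/python3 /home/qa/run_regression.py >> /var/log/regression.log 2>&1
```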
A REST API lets applications communicate using HTTP methods like GET, POST, PUT, DELETE. It
returns data in formats like JSON or XML.
Latency is the time delay between a request and the first byte of response. Low latency is key to
good user experience.
Throughput is the number of transactions processed within a given time. It indicates system
capacity under load.
Failover testing checks the system’s ability to switch to backup systems or recover gracefully
after failure.
Smoke tests check core features in a new build, while sanity tests verify specific functionalities
after changes or bug fixes.
100. What do you do when you find a critical bug right before release?
I immediately report it to the dev and QA leads, provide evidence, and help assess impact.
Based on severity, we decide to fix or delay the release.