1. Tell me about yourself.
I’m Vignesh Thota, a QA Automation Engineer with 8+ years of hands-on experience in both
manual and automation testing across domains like e-commerce, healthcare, and finance. I
specialize in Selenium WebDriver with Java, TestNG, Cucumber, Jenkins, and REST API testing.
Over the years, I’ve built scalable hybrid automation frameworks (Page Object Model + data-driven + BDD) from
scratch, working in Agile and DevOps teams. I’ve contributed to CI/CD pipelines using Jenkins,
automated mobile apps with Appium, and validated services using SoapUI and Postman.
I bring a strong SDLC and STLC understanding, and have tested web, mobile, and cloud-based
apps hosted on AWS. I’m passionate about quality, fast feedback, and enabling continuous
delivery through smart test automation.
2. What types of testing have you performed?
I’ve performed a wide range of testing: functional, regression, integration, UAT, smoke, cross-
browser, and performance testing. I’m equally comfortable writing manual test cases and
building full-fledged automation suites.
For web testing, I use Selenium WebDriver and TestNG or Cucumber frameworks. For mobile,
I’ve used Appium to test Android and iOS apps. I’ve validated APIs through REST Assured,
Postman, and SoapUI.
Database testing is another strong area—I write complex SQL queries to validate backend data. I
also do performance testing with JMeter and cross-browser testing via Selenium Grid or Sauce
Labs. My testing is always aligned with business priorities.
3. What frameworks have you used or built?
I’ve built multiple automation frameworks including Hybrid, Data-Driven, BDD, and POM-
based ones. I usually go with Java + Selenium + TestNG or Cucumber, structured using Maven
and versioned with Git.
For BDD, I’ve created frameworks using Cucumber, writing Gherkin features and Java step
definitions. I use tagging and hooks for modularity and maintenance. I also support API testing
through REST Assured and Groovy scripting for SOAP.
Frameworks I built are CI/CD-ready and integrated with Jenkins for nightly and trigger-based
runs. I also include reporting via ExtentReports or Allure for visibility. My goal is always
modular, scalable, and maintainable test automation.
4. What’s your experience with Selenium WebDriver?
I’ve worked with Selenium WebDriver for over 6 years, automating test cases across browsers
like Chrome, Firefox, Safari, and Edge. I use it with Java and TestNG or JUnit, implementing
POM to manage page objects.
In many projects, I’ve used WebDriver with Data-Driven and BDD frameworks. I also use
Selenium Grid for parallel and cross-browser testing. Synchronization is handled with explicit
waits (WebDriverWait with expected conditions), which I prefer over implicit waits for stable,
predictable execution.
I've used tools like UI Automator Viewer, FirePath, and browser dev tools to identify complex
locators using XPath and CSS selectors. Whether it’s automating login flows or validating
dynamic UI updates, Selenium is my go-to for front-end test automation.
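The explicit-wait idea can be sketched library-free as a small polling loop; Selenium's WebDriverWait applies the same re-check-until-timeout principle (the helper below is illustrative, not Selenium's actual API):

```java
import java.util.function.Supplier;

// Minimal polling wait, illustrating what an explicit wait does under the
// hood: re-check a condition until it holds or a timeout expires.
public class PollingWait {
    public static <T> T until(Supplier<T> condition, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            T result = condition.get();          // e.g. findElement(...) or null
            if (result != null) return result;   // condition met: return the value
            Thread.sleep(intervalMs);            // back off before re-checking
        }
        throw new IllegalStateException("Condition not met within " + timeoutMs + " ms");
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated "element" that only appears after ~300 ms.
        String found = until(
            () -> System.currentTimeMillis() - start > 300 ? "element" : null,
            2000, 50);
        System.out.println(found);
    }
}
```

WebDriverWait adds expected conditions and ignored-exception lists on top of this same loop.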
5. How do you perform API testing?
For REST APIs, I use REST Assured and Postman. I validate responses, status codes, headers,
payloads, and error conditions. I also do parameterized testing and chained API flows where one
endpoint feeds into another.
For SOAP services, I’ve worked with SoapUI Pro and used Groovy scripts to assert response
values and test various XML structures. I include both positive and negative test cases for
thorough coverage.
These tests are either standalone or integrated into Jenkins jobs, triggered alongside UI
automation or backend jobs. I also pull test data from Excel or property files when needed,
ensuring flexibility in execution.
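The kinds of checks described above can be sketched without any HTTP library by asserting on a stubbed response; the Response record below is a stand-in for what REST Assured or an HTTP client would actually return:

```java
import java.util.Map;

// Library-free sketch of typical API response checks: status code, a
// header, and a field in the payload. Values here are illustrative.
public class ApiChecks {
    record Response(int status, Map<String, String> headers, String body) {}

    static void validate(Response r) {
        if (r.status() != 200)
            throw new AssertionError("Expected 200, got " + r.status());
        if (!"application/json".equals(r.headers().get("Content-Type")))
            throw new AssertionError("Unexpected content type");
        if (!r.body().contains("\"id\""))
            throw new AssertionError("Payload missing id field");
    }

    public static void main(String[] args) {
        Response ok = new Response(200,
            Map.of("Content-Type", "application/json"),
            "{\"id\": 42, \"name\": \"vignesh\"}");
        validate(ok);                       // passes silently
        System.out.println("all checks passed");
    }
}
```

In a chained flow, a value extracted from one response (for example the id above) would be passed into the next request.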
6. What is your experience with Cucumber and BDD?
I’ve built Cucumber-based BDD frameworks where business scenarios are written in Gherkin
syntax. This enables collaboration between testers, developers, and product owners. I define step
definitions in Java and maintain them under proper folder structures.
Each test suite supports tagging, reusable hooks, and scenario outlines with Examples tables for
coverage. I integrate Cucumber with Selenium or REST Assured, depending on the type of
validation needed.
BDD has helped me in aligning testing to business expectations while maintaining transparency.
The reports generated help stakeholders get visibility on functional coverage and test outcomes
without diving into code.
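A scenario in this style might look like the following (the feature, steps, tag, and data values are illustrative, not from a real project):

```gherkin
Feature: Login
  As a registered user, I want to log in so that I can see my dashboard.

  @smoke
  Scenario Outline: Login with valid and invalid credentials
    Given the user is on the login page
    When the user logs in with "<username>" and "<password>"
    Then the user should see the "<outcome>" page

    Examples:
      | username | password | outcome   |
      | valid_u  | valid_p  | dashboard |
      | valid_u  | wrong_p  | error     |
```

Each step maps to a Java step definition, and the @smoke tag lets the suite run only that slice when needed.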
7. Describe your CI/CD experience.
I’ve integrated test automation with Jenkins for continuous integration. I write pipeline jobs to
trigger tests on every build, commit, or scheduled run. Jenkins pulls code from GitHub, compiles
using Maven, and runs my test suites headlessly.
Test reports are emailed or published to dashboards for quick review. I also use Jenkins for test
environment setup, like spinning up Selenium Grid nodes or Docker containers.
This setup ensures fast feedback, high reliability, and repeatability. It also supports parallel
execution, regression runs, and smoke testing, allowing me to catch issues early in the
development cycle.
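A pipeline of that shape might be sketched as the declarative Jenkinsfile below; the repository URL, suite file, and schedule are placeholders, not values from a real job:

```groovy
// Illustrative declarative Jenkinsfile: checkout, Maven test run, and
// result publishing. All names and URLs are placeholders.
pipeline {
    agent any
    triggers { cron('H 2 * * *') }            // nightly run
    stages {
        stage('Checkout') {
            steps { git 'https://github.com/example/qa-automation.git' }
        }
        stage('Test') {
            steps { sh 'mvn clean test -Dsuite=regression.xml' }
        }
    }
    post {
        always {
            junit 'target/surefire-reports/*.xml' // publish test results
        }
    }
}
```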
8. How do you handle mobile automation?
I’ve used Appium with Java to automate Android and iOS applications. My frameworks support
device-specific capabilities, screen resolutions, and OS-level conditions like permissions or
popups.
I use Appium Inspector to locate mobile elements and validate UI interactions like gestures,
swipes, and form inputs. My mobile test cases are often parallelized using Sauce Labs or local
device farms.
I’ve validated flows like login, push notifications, and responsive layouts across devices. Mobile
automation is integrated with Jenkins for continuous testing and tracked in reports just like web
UI test runs.
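Device-specific capabilities like these are typically assembled per device; the sketch below builds them as a plain map with placeholder device and app values (in a real framework the map would feed an Appium options object such as UiAutomator2Options):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of per-device Appium capabilities as a plain map. Device name
// and app path are placeholders, not real project values.
public class DeviceCaps {
    static Map<String, Object> androidCaps(String deviceName) {
        Map<String, Object> caps = new LinkedHashMap<>();
        caps.put("platformName", "Android");
        caps.put("appium:deviceName", deviceName);
        caps.put("appium:automationName", "UiAutomator2");
        caps.put("appium:app", "/path/to/app.apk");     // placeholder path
        caps.put("appium:autoGrantPermissions", true);  // handle permission popups
        return caps;
    }

    public static void main(String[] args) {
        System.out.println(androidCaps("Pixel_7_Emulator").get("platformName"));
    }
}
```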
9. How do you validate data in the backend?
I use SQL extensively to validate backend data, especially after UI actions or API triggers. I
work with Oracle and SQL Server, writing complex joins, nested queries, and stored procedure
validations; I’ve also queried MongoDB for document-level checks.
Data validation is part of both functional and regression testing phases. I often compare UI
values with DB records to catch sync issues or data truncation problems.
I also use JDBC and Apache POI to connect test scripts with databases and spreadsheets. This
helps me automate data comparisons or setup test conditions dynamically based on backend
values.
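A UI-vs-DB comparison of that kind can be sketched as a small normalize-and-compare helper; here the DB row is stubbed with a map, where a real suite would populate it from a JDBC ResultSet:

```java
import java.util.Map;

// Library-free sketch of a UI-vs-DB field comparison. Normalizing before
// comparing avoids false diffs from whitespace or casing.
public class DataCompare {
    static String norm(String s) {
        return s == null ? "" : s.trim().toLowerCase();
    }

    static boolean rowsMatch(Map<String, String> uiValues, Map<String, String> dbRow) {
        for (Map.Entry<String, String> e : uiValues.entrySet()) {
            if (!norm(e.getValue()).equals(norm(dbRow.get(e.getKey())))) {
                System.out.println("Mismatch on " + e.getKey());
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> ui = Map.of("email", "User@Example.com ");
        Map<String, String> db = Map.of("email", "user@example.com");
        System.out.println(rowsMatch(ui, db));  // matches after normalization
    }
}
```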
10. What’s your process in Agile environments?
I’ve worked in Agile teams using Scrum and Kanban. I attend sprint ceremonies—planning,
daily stand-ups, grooming, reviews, and retrospectives. I collaborate closely with devs and
product owners to clarify acceptance criteria.
In each sprint, I prepare test cases, automate features, and ensure regression coverage. I raise
blockers early, help write BDD scenarios, and prioritize bugs based on impact.
I also contribute to sprint demo prep by validating key flows and showcasing automation
coverage. Agile keeps me aligned with evolving requirements and enables fast turnaround on
quality feedback.
11. How do you manage test data?
I use a mix of techniques depending on the project. For UI tests, I use Apache POI to read from
Excel. For APIs, I use property files or JSON templates. Some tests connect directly to DB to
fetch dynamic values.
For BDD, I define Examples tables with varied input conditions. I also use data providers in
TestNG and parameterization in Postman collections for API suites.
Managing data is key to creating reusable, scalable, and non-redundant test cases. I externalize
inputs to avoid hardcoded values and refresh data regularly so tests stay robust across
environments.
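Externalized data in property-file form can be sketched like this; the keys and environment values below are placeholders (a real suite would load one .properties file per environment rather than an inline string):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch of externalized test data loaded from a property source so tests
// read values by key instead of hardcoding them.
public class TestData {
    static Properties load(String source) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(source));  // file reader in a real suite
        return props;
    }

    public static void main(String[] args) throws IOException {
        String qaEnv = "baseUrl=https://qa.example.com\nuser=qa_user\n";
        Properties data = load(qaEnv);
        System.out.println(data.getProperty("user"));
    }
}
```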
12. How do you report defects and track them?
I’ve used tools like JIRA, HP ALM, Rally, and Bugzilla to log and track defects. I write clear
steps to reproduce, expected vs actual behavior, and include screenshots or logs.
I triage bugs with developers, helping them reproduce and identify root causes. I tag bugs by
priority and severity, and follow up during daily stand-ups or defect triage meetings.
I also maintain traceability using RTMs, linking defects to requirements and test cases. Once
fixed, I validate the fixes and mark defects for closure after regression.
13. How do you perform cross-browser testing?
I use Selenium Grid to run tests on multiple browsers—Chrome, Firefox, Safari, Edge—on
different OS combinations. This helps me validate UI consistency and catch browser-specific
issues.
Tests are defined in TestNG with parameters for browsers, and Maven profiles help manage
different suites. I also use tools like Sauce Labs and BrowserStack for cloud-based cross-browser
testing.
This kind of testing is key for public-facing apps, especially in e-commerce and banking. It helps
ensure that customers get a consistent experience regardless of their device or browser.
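A browser-parameterized TestNG suite of that kind might look like the snippet below; the suite, test, and class names are placeholders:

```xml
<!-- Illustrative TestNG suite: the same test class runs once per browser,
     in parallel, with the browser name passed as a parameter. -->
<suite name="CrossBrowserSuite" parallel="tests" thread-count="3">
  <test name="ChromeRun">
    <parameter name="browser" value="chrome"/>
    <classes><class name="tests.LoginTest"/></classes>
  </test>
  <test name="FirefoxRun">
    <parameter name="browser" value="firefox"/>
    <classes><class name="tests.LoginTest"/></classes>
  </test>
</suite>
```

The test class reads the browser parameter via @Parameters and hands it to a driver factory.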
14. How do you ensure your test scripts are maintainable?
I follow solid coding principles: modularization, reusability, and separation of concerns. I use
Page Object Model for UI automation and utility classes for common functions.
For data, I externalize inputs and avoid hardcoding. For assertions, I centralize checks and use
logging for traceability. I comment my code only where necessary and follow naming
conventions.
I also perform regular refactoring and peer reviews to ensure code health. Test scripts are
version-controlled in Git and integrated with Jenkins for validation at every commit.
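The POM separation described above can be sketched with the driver abstracted behind a tiny interface so the pattern itself is visible (UiDriver is a stand-in for WebDriver, and the locators are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Page Object Model sketch: the page class owns locators and actions,
// tests only call intent-level methods on it.
public class PomSketch {
    interface UiDriver {
        void type(String locator, String text);
        String read(String locator);
    }

    static class LoginPage {
        private static final String USER_FIELD = "#username"; // locators live here
        private static final String BANNER = "#welcome";
        private final UiDriver driver;

        LoginPage(UiDriver driver) { this.driver = driver; }

        void enterUsername(String name) { driver.type(USER_FIELD, name); }
        String welcomeText() { return driver.read(BANNER); }
    }

    public static void main(String[] args) {
        Map<String, String> fakeDom = new HashMap<>();
        fakeDom.put("#welcome", "Welcome back");
        UiDriver fake = new UiDriver() {  // in-memory stand-in for a browser
            public void type(String locator, String text) { fakeDom.put(locator, text); }
            public String read(String locator) { return fakeDom.get(locator); }
        };
        LoginPage page = new LoginPage(fake);
        page.enterUsername("vignesh");
        System.out.println(page.welcomeText());
    }
}
```

If a locator changes, only the page class is edited; every test that uses it stays untouched.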
15. How do you handle flaky tests or test failures?
First, I investigate if the failure is due to data, environment, or timing issues. I use logging,
screenshots, and reports to debug failures and isolate root causes. Flaky tests are often due to
timing—so I apply explicit waits.
If it's due to third-party dependencies, I mock them where possible. I also group flaky tests and
flag them for later review so they don't block CI builds. Stability matters more than volume.
Once identified, I either fix the issue or temporarily quarantine the test with proper
documentation. I regularly run dry runs to clean up the test suite and ensure consistent results.
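A retry wrapper for known-flaky steps can be sketched as below; in TestNG the same idea is usually expressed through an IRetryAnalyzer implementation (the flaky step here is simulated):

```java
import java.util.concurrent.Callable;

// Sketch of a retry wrapper: re-run a step up to maxAttempts times,
// logging each failure, and rethrow only if every attempt fails.
public class Retry {
    static <T> T withRetries(Callable<T> step, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.call();
            } catch (Exception e) {
                last = e;
                System.out.println("Attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        throw last;  // give up after maxAttempts
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky step that succeeds on the third attempt.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timing issue");
            return "passed";
        }, 5);
        System.out.println(result);
    }
}
```

Retries mask flakiness rather than fix it, so a retried test should still be flagged for root-cause analysis.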