Unit – V
TEST AUTOMATION
Software test automation – Skills needed for automation – Scope of automation – Design
and architecture for automation – Requirements for a test tool – Challenges in automation
- Test metrics and measurements –Project, progress and productivity metrics
❖ Automation saves time, as software can execute test cases faster than humans can. The time thus saved
can be used by test engineers for other activities, such as developing additional test cases to achieve better coverage.
❖ Test automation can free the test engineers from mundane tasks and make them focus on more creative
tasks.
✓ Automated tests can be more reliable.
✓ Automation helps in immediate testing
✓ Automation can protect an organization against attrition of test engineers.
✓ Test automation opens up opportunities for better utilization of global resources.
✓ Certain types of testing cannot be executed without automation.
✓ Automation means end-to-end, not test execution alone.
Second Generation Data-Driven
❖ This method helps in developing test scripts that generate the set of input conditions and
corresponding expected outputs. This enables the tests to be repeated for different input and output
conditions. Developing such scripts can take almost as much time and effort as developing the product itself.
❖ However, changes to the application do not require the automated test cases to be changed as long as
the input conditions and expected outputs are still valid. This generation of automation focuses on input
and output conditions using the black box testing approach.
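❖ A minimal sketch of this data-driven idea is shown below (the function calculate_discount, the data values, and the pass criterion are hypothetical, used only for illustration): the same test logic is repeated for every input/expected-output pair, so the test data can change without touching the script.

# Hypothetical data-driven test sketch: the same test logic is re-run
# for every (input, expected output) row, so changing the data does not
# require changing the test script itself.

def calculate_discount(order_value):
    """Hypothetical function under test: 10% discount above 1000."""
    return order_value * 0.10 if order_value > 1000 else 0.0

# Input conditions and corresponding expected outputs (the "data").
test_data = [
    (500, 0.0),
    (1000, 0.0),
    (1500, 150.0),
]

def run_data_driven_tests():
    for order_value, expected in test_data:
        actual = calculate_discount(order_value)
        result = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
        print(f"input={order_value} expected={expected} actual={actual} -> {result}")

if __name__ == "__main__":
    run_data_driven_tests()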
Third Generation Action-Driven
❖ This technique enables even a layperson to create automated tests. There are no input and expected output
conditions required for running the tests.
❖ All actions that appear on the application are automatically tested, based on a generic set of controls
defined for automation.
❖ The set of actions is represented as objects, and those objects are reused.
❖ The user needs to specify only the operations (such as log in, download, and so on); everything
else that is needed for those actions is automatically generated.
❖ Automation in the third generation involves two major aspects: test case automation and framework design.
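❖ A minimal sketch of the action-driven idea described above (the action names login, download, logout and their implementations are hypothetical): the test case is just a list of operations, and the framework maps each operation to a reusable implementation.

# Hypothetical action-driven sketch: test steps are written as plain
# action names; the framework maps each name to a reusable implementation.

def login(user):
    print(f"logging in as {user}")

def download(file_name):
    print(f"downloading {file_name}")

def logout():
    print("logging out")

# Generic set of controls/actions defined for automation.
ACTIONS = {"login": login, "download": download, "logout": logout}

# A "test case" a non-programmer could write: just actions and arguments.
test_case = [
    ("login", ["alice"]),
    ("download", ["report.pdf"]),
    ("logout", []),
]

def run_actions(steps):
    for name, args in steps:
        ACTIONS[name](*args)   # everything else is handled by the framework

if __name__ == "__main__":
    run_actions(test_case)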
❖ The skills needed for automation can be classified into four levels across the three generations, since the third
generation of automation introduces two levels of skills: one for the development of test cases and another for the development of the framework.
❖ Normally, changes in requirements impact scenarios and new features, not the basic
functionality of the product.
External Modules
❖ There are two modules that are external to automation: the TCDB and the defect DB. All the test
cases, the steps to execute them, and the history of their execution are stored in the TCDB.
❖ The test cases in the TCDB can be manual or automated. The thick arrows in the architecture diagram
represent the interaction between the TCDB and the automation framework, which applies only to automated test cases.
❖ The defect DB, or defect database or defect repository, contains details of all the defects found in
the various products tested in a particular organization.
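❖ The sketch below is only an illustrative guess at the kind of record each external module might hold, based on the description above; the field names are assumptions, not a prescribed schema.

# Illustrative sketch (not a prescribed schema) of the records the two
# external modules might hold.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseRecord:            # one entry in the TCDB
    test_case_id: str
    steps: List[str]             # steps to execute the test case
    automated: bool              # TCDB holds both manual and automated cases
    execution_history: List[str] = field(default_factory=list)

@dataclass
class DefectRecord:              # one entry in the defect DB / defect repository
    defect_id: str
    product: str                 # defects across all products in the organization
    severity: str
    found_by_test_case: str      # link back to the TCDB entry that found it

# Example usage
tc = TestCaseRecord("TC-001", ["launch app", "log in", "verify dashboard"], automated=True)
tc.execution_history.append("2024-01-15: PASS")
bug = DefectRecord("DEF-042", "ProductX", "High", found_by_test_case="TC-001")
print(tc, bug, sep="\n")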
❖ New test cases should be easy to add to the test suite. Test case modification and new test case insertion should not result in the
existing test cases failing.
❖ Test tools are not used for only one product having one test suite; they are used for various products
that may have multiple test suites. It is important that test suites can be added to the framework without
affecting other test suites.
Requirement 3: Reuse Of Code For Different Types Of Testing, Test Cases.
❖ The functionality of the product, when subjected to different scenarios, becomes test cases for different
types of testing. This encourages the reuse of code in automation.
❖ By allowing the framework and the test suites to take care of the "how" and "what" portions
of automation respectively, the reuse of test cases can be increased.
❖ The reuse of code is not only applicable to various types of
testing; it is also applicable to modules within automation.
Requirement 4: Automatic Set Up and Clean Up
❖ For each test case there could be some prerequisites to be met before it is run.
❖ The test cases may expect some objects to be created or certain portions of the product to be
configured in a particular way. If this prerequisite is not met by automation, then it introduces manual
intervention before running the test cases.
❖ Each test program should have a setup program that creates the necessary setup before executing
the test cases, and a corresponding cleanup that restores the environment afterwards. The framework should have the intelligence to find out which test cases are to be executed and
call the appropriate setup program.
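❖ A minimal sketch of automatic setup and cleanup (the registry, test, and setup/cleanup functions are hypothetical): the framework looks up the setup program for the selected test case, runs it before the test, and always runs the cleanup afterwards.

# Hypothetical sketch: the framework finds and runs the right setup
# program before each test case, and a cleanup afterwards.

def setup_create_user():
    print("setup: creating test user and configuring product")

def cleanup_remove_user():
    print("cleanup: removing test user and restoring configuration")

def test_login():
    print("test: user can log in")

# Framework-level registry: test case -> (setup program, test, cleanup program)
REGISTRY = {
    "test_login": (setup_create_user, test_login, cleanup_remove_user),
}

def run(test_name):
    setup, test, cleanup = REGISTRY[test_name]
    setup()          # prerequisite met by automation, not by hand
    try:
        test()
    finally:
        cleanup()    # always clean up, even if the test fails

if __name__ == "__main__":
    run("test_login")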
Requirement 5: Independent Test Cases
❖ The test cases need to be independent not only in the design phase, but also in the execution phase. To
execute a particular test case, it should not expect any other test case to have been executed before it, nor
should it implicitly assume that certain other test cases will be run after it.
❖ Each test case should be executable alone; there should be no dependency between test cases, such as
test case 2 having to be executed after test case 1, and so on.
❖ This requirement enables the test engineer to select and execute any test case at random without
worrying about other dependencies.
Requirement 6: Test Case Dependency
❖ Making test cases independent enables any one test case to be selected at random and executed. Making a
test case dependent on another makes it necessary for that particular test case to be executed before or
after the dependent test case is selected for execution.
❖ A test tool or a framework should provide both features. The framework should help to specify the
dynamic dependencies between test cases.
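❖ One possible way a framework could record such dynamic dependencies while keeping independent execution as the default is sketched below; the dependency table and test names are assumptions for illustration.

# Illustrative sketch: test cases are independent by default, but a suite
# can declare that one test must run after another; the framework orders
# the selected tests accordingly.

DEPENDS_ON = {            # hypothetical dependency declaration
    "test_delete_record": ["test_create_record"],
}

def order_tests(selected):
    """Return the selected tests with any declared prerequisites first."""
    ordered = []
    def add(test):
        for prerequisite in DEPENDS_ON.get(test, []):
            add(prerequisite)
        if test not in ordered:
            ordered.append(test)
    for test in selected:
        add(test)
    return ordered

# Selecting a dependent test at random pulls in its prerequisite.
print(order_tests(["test_delete_record"]))
# -> ['test_create_record', 'test_delete_record']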
Challenges in Automation
❖ There are a number of problems that may be encountered in trying to automate testing. Having some
idea of the types of problems that may be encountered should help in implementing an effective automation regime.
A few common problems are described below.
Unrealistic expectations:
❖ Generally there is a tendency to be optimistic and to have high expectations about what can be achieved by a
new test tool. It is human nature to hope that this new test solution will at last solve all of the problems
we are currently experiencing.
❖ Vendors usually emphasize the benefits and successes, and may play down the amount of effort
needed to achieve the desired benefits. If management expectations are unrealistic, then no matter how
well the tool is implemented from a technical point of view, it will not meet expectations.
Expectation that automated tests will find a lot of new defects:
❖ A test is most likely to find a defect the first time it is run. If a test has already been run and passed,
running the same test again is much less likely to find a new defect (unless the test is exercising code
that has been changed or could be affected by a change made in a different part of the software, or is
being run in a different environment).
❖ Test execution tools are 'record and replay' tools, i.e. regression testing tools. Their use is in repeating
tests that have already been run. This is a very useful thing to do, but it is not likely to find a large number
of new defects, particularly when the tests are run in the same hardware and software environment as before.
❖ Knowing that a set of tests has passed again gives confidence that the software is still working as well
as it was before, and that changes elsewhere have not had unforeseen effects.
Poor testing practice:
❖ If testing practice is poor, with poorly organized or designed tests, little or inconsistent documentation,
and tests that are not very effective at finding defects, automating those tests is not a good idea.
Maintenance of automated tests:
❖ When software is changed it is often necessary to update some, or even all, of the tests in a suite so that they can be
re-run successfully. This is particularly true for automated tests.
❖ Test maintenance effort is the biggest challenge and is often the reason many test automation
initiatives are abandoned. When it takes more effort to update the tests than it would take to re-run those tests
manually, test automation will be stopped.
False sense of security:
❖ Just because a test suite runs without finding any defects, it does not mean that there are no defects in
the software. The tests may be incomplete, or may contain defects themselves.
❖ If the expected outcomes are incorrect, automated tests will simply preserve those defective results.
Technical problems:
❖ Commercial test execution tools are software products, sold by vendor companies, and they are not
immune from defects or problems of support. Interoperability of the tool with other software, either
your own applications or third-party products, can be a serious problem. Many tools look ideal on
paper, but simply fail to work in some environments.
❖ In addition to technical problems with the tools themselves, we may experience technical problems
with the software we are trying to test. If software is not designed and built with testability in mind, it
can be very difficult to test, either manually or automatically. Trying to use tools to test such software
will add complication which will only make test automation even more difficult.
Organizational issues:
❖ Automating testing is not a trivial exercise, and it needs to be well supported by management and
implemented into the culture of the organization. Time must be allocated for choosing tools, for
training, for experimenting and learning what works best, and for promoting tool use within the
organization.
❖ Test automation is an infrastructure issue, not just a project issue. In large organizations, test
automation can rarely be justified on the basis of a single project, since the project will bear all of the
start-up costs and teething problems and may reap little of the benefits.
❖ If the scope of test automation is only for one project, people will then be assigned to new projects,
and the automation initiative will be lost.
Test Metrics and Measurements
❖ Software metrics are used to measure the quality of the project. Simply put, a metric is a unit used for
describing an attribute; a metric is a scale for measurement.
❖ For example, "kilogram" is a metric for measuring the attribute "weight". Similarly, in
software, consider "How many issues are found in a thousand lines of code?" Here, the number of issues is one
measurement and the number of lines of code is another measurement; the metric is defined from these two
measurements.
❖ Test metrics examples (see the sketch below):
✓ How many defects exist within the module?
✓ How many test cases are executed per person?
✓ What is the test coverage %?
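❖ The small calculation below shows how such metrics are formed from two measurements; the numbers are hypothetical.

# Hypothetical numbers showing how a metric combines two measurements.

defects_found = 12          # measurement 1
lines_of_code = 4000        # measurement 2
defect_density = defects_found / (lines_of_code / 1000)   # defects per KLOC
print(f"Defect density: {defect_density:.1f} defects/KLOC")

requirements_covered = 45   # measurement 1
total_requirements = 50     # measurement 2
test_coverage = requirements_covered / total_requirements * 100
print(f"Test coverage: {test_coverage:.0f}%")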
❖ As explained above, test metrics are most important for measuring the quality of the software.
If a project does not have any metrics, then how will the quality of the work done by a test analyst
be measured?
❖ For Example: A Test Analyst has to,
✓ Design the test cases for 5 requirements
✓ Execute the designed test cases
✓ Log the defects and fail the related test cases
✓ After the defect is resolved, re-test the defect and re-execute the corresponding failed test
case.
❖ In the above scenario, if metrics are not followed, then the work completed by the test analyst will be
subjective, i.e. the test report will not have the proper information to know the status of his/her
work/project.
❖ If metrics are used in the project, then the exact status of his/her work can be published with proper
numbers/data.
❖ In the Test report, we can publish:
1. How many test cases have been designed per requirement?
2. How many test cases are yet to be designed?
3. How many test cases are executed?
4. How many test cases are passed/failed/blocked?
5. How many test cases are not yet executed?
6. How many defects are identified & what is the severity of those defects?
7. How many test cases are failed due to one particular defect? etc.
❖ Based on the project needs we can have more metrics than above mentioned list, to know the status
of the project in detail.
❖ Based on the above metrics, the test lead/manager gets an understanding of the key points about the status
and health of the project.
❖ Based on the metrics, if the project is not going to be completed as per the schedule, then the manager
can raise an alarm to the client and other stakeholders, providing the reasons for the lag, to
avoid last-minute surprises.
Test metrics are broadly classified into two types:
1. Base Metrics
2. Calculated Metrics
Base Metrics:
❖ Base Metrics are the Metrics which are derived from the data gathered by the Test Analyst during
the test case development and execution.
❖ This data will be tracked throughout the test life cycle, i.e. collecting data such as the total number of
test cases developed for a project, the number of test cases to be executed, and the number of test cases
passed/failed/blocked.
Calculated Metrics:
❖ Calculated Metrics are derived from the data gathered in Base Metrics. These Metrics are generally
tracked by the test lead/manager for Test Reporting purpose.
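❖ A small sketch of how calculated metrics can be derived from base metrics for reporting purposes; the counts are hypothetical.

# Base metrics: raw counts gathered during test development and execution.
base = {
    "total_test_cases": 200,
    "executed": 180,
    "passed": 150,
    "failed": 20,
    "blocked": 10,
}

# Calculated metrics: derived from the base metrics for test reporting.
calculated = {
    "execution %": base["executed"] / base["total_test_cases"] * 100,
    "pass % (of executed)": base["passed"] / base["executed"] * 100,
    "fail % (of executed)": base["failed"] / base["executed"] * 100,
}

for name, value in calculated.items():
    print(f"{name}: {value:.1f}")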
Project Metrics:
❖ A typical project starts with requirements gathering and ends with product release. All the phases
that fall in between these points need to be planned and tracked.
❖ In the planning cycle, the scope of the project is finalized. The project scope gets translated to size
estimates, which specify the quantum of work to be done.
❖ This size estimate gets translated into an effort estimate for each of the phases and activities by using the
available productivity data. This initial effort estimate is called the baselined effort.
❖ As the project progresses, if the scope of the project changes or if the available productivity
numbers turn out to be incorrect, then the effort estimates are re-evaluated, and this re-evaluated effort
estimate is called the revised effort.
❖ The estimates can change based on the frequency of changing requirements and other parameters
that impact the effort.
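❖ One common way to track the gap between the baselined, revised, and actual effort is a variance percentage; the formulas and person-day figures below are an illustrative sketch, not the only convention.

# Illustrative effort-variance calculation (person-days are hypothetical).
baselined_effort = 120      # from the initial size/productivity estimate
revised_effort = 150        # re-estimated after scope/requirement changes
actual_effort = 160         # effort actually spent so far

# Variance of the revised estimate against the baseline.
estimate_variance = (revised_effort - baselined_effort) / baselined_effort * 100
# Variance of actual effort against the revised estimate.
effort_variance = (actual_effort - revised_effort) / revised_effort * 100

print(f"Estimate variance vs baseline: {estimate_variance:.1f}%")
print(f"Effort variance vs revised estimate: {effort_variance:.1f}%")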
Progress Metrics:
❖ Any project needs to be tracked from two angles. One, how well the project is doing with respect to
effort and schedule.
❖ The other equally important angle is to find out how well the product is meeting the quality
requirements for the release. There is no point in producing a release on time and within the effort
estimate but with a lot of defects, causing the product to be unusable.
❖ One of the main objectives of testing is to find as many defects as possible before any customer
finds them. The number of defects that are found in the product is one of the main indicators of
quality.
❖ Defects get detected by the testing team and get fixed by the development team. Defect metrics are
further classified in to test defect metrics (which help the testing team in analysis of product quality
and testing) and development defect metrics (which help the development team in analysis of
development activities).
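❖ As a small illustration of deriving defect metrics, the sketch below computes a few simple counts from hypothetical defect records; the fields, severities, and numbers are assumptions.

# Hypothetical defect records used to derive simple defect metrics.
defects = [
    {"id": "D1", "severity": "High",   "status": "Fixed"},
    {"id": "D2", "severity": "Medium", "status": "Open"},
    {"id": "D3", "severity": "High",   "status": "Fixed"},
    {"id": "D4", "severity": "Low",    "status": "Open"},
]

found = len(defects)
fixed = sum(1 for d in defects if d["status"] == "Fixed")
by_severity = {}
for d in defects:
    by_severity[d["severity"]] = by_severity.get(d["severity"], 0) + 1

print(f"Defects found: {found}, fixed: {fixed}, open: {found - fixed}")
print("By severity:", by_severity)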
Productivity Metrics:
❖ Productivity metrics combine several measurements and parameters with effort spent on the
product. They help in finding out the capability of the team as well as for other purposes, such as
✓ Estimating for the new release.
✓ Finding out how well the team is progressing, understanding the reasons for (both positive
and negative) variations in results.
✓ Estimating the number of defects that can be found.
✓ Estimating release date and quality.
✓ Estimating the cost involved in the release.
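❖ A minimal sketch of such productivity measures, combining output counts with the effort spent; the figures are hypothetical.

# Hypothetical effort and output figures used to compute productivity metrics.
effort_person_days = 40

test_cases_developed = 320
test_cases_executed  = 500
defects_found        = 60

print(f"Test cases developed per person-day: {test_cases_developed / effort_person_days:.1f}")
print(f"Test cases executed per person-day:  {test_cases_executed / effort_person_days:.1f}")
print(f"Defects found per person-day:        {defects_found / effort_person_days:.1f}")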
QUESTION BANK
PART – A
8. What are the modules involved in designing the architecture for automation?
Modules:
✓ External Modules
✓ Scenario and configuration file modules
✓ Test cases and test framework modules
✓ Tools and results modules
✓ Report generator and report / metrics modules
14. What are the items that should be included in a test data archive?
Archival of test data must include:
1. What configuration variables were used,
2. What scenario was used, and
3. What programs were executed and from what path.
Challenges in automation
15. Mention some challenges that are faced during test automation.[May 2017] [ Nov 2017]
1. Unrealistic expectations.
2. Expectation that automated tests will find a lot of new defects.
3. Poor testing practice
4. Maintenance of automated tests
5. False sense of security.
6. Technical problems of tools
7. Organizational issues
20. Differentiate between project monitoring and project controlling. [Nov / Dec 2012]
Project Monitoring: It refers to the activities and tasks managers engage in to periodically check the status of each project. Reports are prepared that compare the actual work done to the work that was planned.
Project Controlling: It consists of developing and applying a set of corrective actions to get a project back on track when monitoring shows a deviation from what was planned.
PART – B
1. Write short notes on software test automation. [4M] [Refer Pg.no:157]
2. Explain the skills needed for automation. [May 2017-16M] [Refer Pg.no:157]
3. Explain in detail about scope of automation.[8M] [Refer Pg.no:159]
4. Explain the design and architecture for automation. [ Nov 2017] [16M] [Refer Pg.no:160]
5. Write short notes on Testing tools.[Nov / Dec 2009 – 8M] [Refer Pg.no:162]
(Or)
List the requirements of test tool. Explain with 5 suitable examples. [May/ Jun-2012 – 8M]
6. Write short notes on challenges faced by tester in automation process. [8M] [Refer Pg.no:166]
7. Write short notes on Test metrics. [Nov / Dec 2009 – 8M][May 2017-16M] [Refer Pg.no:168]
(Or)
Narrate about the metrics or parameters to be considered for evaluating the software quality.
[Nov/Dec 2012 -16M] [ Nov 2017-16M] [Refer Pg.no:168]
8. Explain the types of Product metrics. [8M] [Refer Pg.no:171]