
03 - Testing Life Cycle

This document outlines the Software Testing Life Cycle, detailing its phases from initial requirement gathering to delivery and maintenance. It emphasizes the importance of quality assurance at each stage, including various testing models such as the V-model and Waterfall model, and highlights the significance of User Acceptance Testing. Additionally, it covers different levels of testing, including unit, integration, system, and acceptance testing, to ensure that software meets user requirements and functions correctly.

Uploaded by

Fahad Ahamed
Copyright
© All Rights Reserved

– This lesson will provide the learner with:

• Ability to identify the Software Test Life Cycle
• Ability to use different Test models
• Understanding of Test varieties
• Ability to identify different levels of Testing
• Ability to review the Test Process.
• Test Life Cycle
• Identify Test models
• Identify Test Variety
• Identify Test levels
• Review and Test Process.
Software testing …

• It is critical to ensure quality throughout the software
development life cycle.
• At every stage, a check should be made that the work-product
for that stage meets its objectives.
• If the evaluation succeeds, work can progress to the next
stage with confidence.
• In addition, it is more cost effective to fix an error at an early
development stage than to fix the problem at a later stage.
Testing Life Cycle

• The SDLC in Software Testing has 6 phases:-


– They are:-
a) Initial Phase in Software Testing
b) Analysis Phase in Software Testing
c) Design Phase in Software Testing
d) Coding Phase in Software Testing
e) Testing Phase in Software Testing
f) Delivery and Maintenance Phase in Software Testing
Initial Phase in Software Testing

• Gathering the Requirements:

– The Business Analyst (BA) gathers information about the company
through a predefined template that goes to the client.
– He collects all the information on what has to be developed,
in how many days, and all the basic requirements of the company.
– He prepares one document, called either:
• BDD – Business Development Design
• BRS – Business Requirement Specification
• URS – User Requirement Specification
• CRS – Customer Requirement Specification
– All are the same document under different names.

• Discussing the Financial Terms and Conditions:

– The Engagement Manager (EM) discusses all the financial
matters.
Analysis Phase in Software Testing
• In this phase the BDD document is taken as the input.
• Four steps are carried out in this phase:
– Analyzing the requirements
• All the requirements are analyzed and studied.
– Feasibility study
• Feasibility means the possibility of developing the project.
– Deciding the technology
• Deciding which technology to use, for example SUN or Microsoft technology.
– Estimation
• Estimating the resources, for example time, number of people, etc.
• During the Analysis Phase the Project Manager prepares the Project Plan.
• The output document of this phase is the Software Requirement
Specification (SRS), which is prepared by the Senior Analyst.
Design Phase in Software Testing

• The designing is done at 2 levels:

– High Level Designing
• At this level the project is divided into a number
of modules.
• The High Level Designing is done by the Technical
Manager (TM) or Chief Architect (CA).
– Low Level Designing
• At this level the modules are further divided
into a number of sub-modules.
• The Low Level Designing is done by the Team Lead (TL).
• In this phase the Chief Architect prepares the Technical
Design Document or Detail Design Document.
Coding Phase in Software Testing

• In this Phase the Developers would write the Programs for the
Project by following the Coding standards.

• In this phase the Developers would prepare the Source Code.


Testing Phase in Software Testing

1. When the BDD is prepared, the Test Engineer studies
the document and sends a Review Report to the Business
Analyst (BA).
2. The Review Report is simply a document prepared by the
Test Engineer.
3. The Test Engineer then writes the Test Cases for the application.
4. Manual Testing typically leaves the application up to 50% defect
free, while Automation Testing can reach about 93% defect free.
• In this phase the testing team prepares a document called the
Defect Profile Document.
Delivery & Maintenance Phase in Software Testing:-

1. In this phase, after the project is done, a mail is sent to the
client announcing the completion of the project.
• This is called the Software Delivery Note.
2. The project is then tested by the client; this is called User
Acceptance Testing.
3. The project is installed in the client environment, and the
testing done there is called Port Testing.
• If any problem occurs while installing, the maintenance people
write a Deployment Document (DD) to the Project
Manager (PM).
4. If, after some time, the client wants some changes in the
software, those changes are done by the Maintenance team.
User Acceptance Testing
1. User Acceptance Testing is a key feature of project implementation.
2. User Acceptance Testing (UAT) is the formal means by which the company ensures that the new
system actually meets the essential user requirements.
3. Each module implemented will be subject to one or more user acceptance tests before sign-off.
4. The UAT Plan describes the test scenarios, test conditions, and test cycles that must be
performed to ensure that acceptance testing follows a precise schedule and that the system is
thoroughly tested before release.
5. The acceptance procedure ensures the intermediate or end result supplied meets the users'
expectations by asking questions such as:

• Is the degree of detail sufficient?


• Are the screens complete?
• Is the content correct from the user's point of view?
• Are the results usable?
• Does the system perform as required?
User Acceptance Testing cont..
6.In UAT, the software is tested for compliance with business rules as defined in the Software
Requirement Specifications and the Detailed Design documents.
7.UAT also allows designated personnel to observe how the application will behave under
business functional operational conditions.
Test Model

• Different types of Test models are available:

– V-model
– Sequential model (the traditional Waterfall model)
– Incremental model (the function-by-function incremental model)
– Spiral model (the incremental, iterative, evolutionary, RAD,
prototype model)
V-Model
• The V-model is an SDLC model where execution of
processes happens in a sequential manner, in a V shape.
• It is also known as the Verification and Validation model.
• The V-model links early development activities to their
corresponding later testing activities.
• The V-model is an extension of the Waterfall model.
• It is based on the association of a testing phase with each
corresponding development stage.
Cont.

• Further, for every single phase in the development cycle
there is a directly associated testing phase.
• This is a highly disciplined model, and the next phase starts only
after completion of the previous phase.
• There are two main processes:
– Verification
– Validation
Verification process
• Phases in Verification Process
• Business Requirement Analysis:
– Product requirements are understood from the customer
perspective.
– The acceptance test design planning is done
– Business requirements can be used as an input for
acceptance testing.
• System Design:
– Once you have the clear and detailed product
requirements, it's time to design the complete system.
– System test plan is developed based on the system design.
Cont..
• Architectural Design:
– Usually more than one technical approach is proposed
– Based on the technical and financial feasibility the final
decision is taken.
– System design is broken down further into modules taking
up different functionality.
– This is also referred to as High Level Design (HLD).
– The data transfer and communication between the
internal modules and with the outside world (other
systems) is clearly understood and defined in this stage.
– With this information, integration tests can be designed
and documented during this stage.
Cont..

• Module Design:
– In this phase the detailed internal design for all the system
modules is specified,
– referred to as Low Level Design (LLD).
– It is important that the design is compatible with the other
modules in the system architecture and the other external
systems.
– Tests are an essential part of any development process and
help eliminate the maximum number of faults and errors at a very
early stage.
Coding Phase

– The system modules designed in the design phase are taken up
in the Coding phase.
– The best suitable programming language is decided based
on the system and architectural requirements.
– The coding is performed based on the coding guidelines
and standards.
– The code goes through numerous code reviews and is
optimized for best performance before the final build is
checked into the repository.
Validation Process

• Validation phases in V-Model:


• Unit Testing:
– Unit tests designed in the module design phase are
executed on the code during this validation phase
• Integration Testing:
– Integration testing is associated with the architectural
design phase.
– Integration tests are performed to test the
coexistence and communication of the internal
modules within the system.
Cont..
• System Testing:
– System tests check the entire system functionality and the
communication of the system under development with
external systems.

• Acceptance Testing:
– Acceptance testing is associated with the business requirement
analysis phase and involves testing the product in the user
environment.
– Acceptance tests uncover compatibility issues with the
other systems available in the user environment.
– They also discover non-functional issues, such as load and
performance defects, in the actual user environment.
V- Model Application
• The V-model's application is almost the same as the Waterfall
model's, as both models are sequential.
• Requirements have to be very clear.
• The following are suitable scenarios for using the V-model:
– Requirements are well defined, clearly documented and
fixed.
– Product definition is stable.
– Technology is not dynamic and is well understood by the
project team.
– There are no ambiguous or undefined requirements. The
project is short.
– The medical development field, as it is a strictly disciplined
domain.
Pros & Cons

Pros:
• This is a highly disciplined model.
• Phases are completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model; each phase has specific
deliverables and a review process.

Cons:
• High risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for projects where requirements are at a moderate to high
risk of changing.
• Once an application is in the testing stage, it is difficult to go back
and change a functionality.
• No working software is produced until late in the life cycle.
Waterfall Model
The waterfall model derives its name from the cascading effect from one phase to the next.
In this model each phase has a well-defined starting and ending point, with identifiable
deliverables to the next phase.
Note that this model is sometimes referred to as the linear sequential model or the software
life cycle.
Waterfall Model Cont..
1. In the requirements analysis phase:
a) The problem is specified along with the desired service objectives (goals)
b) The constraints are identified
2. In the specification phase the system specification is produced from the detailed
definitions of (a) and (b) above. This document should clearly define the product function.
Note that in some texts, the requirements analysis and specification phases are combined
and represented as a single phase.
3. In the system and software design phase, the system specifications are translated into a
software representation. The software engineer at this stage is concerned with:
a) Data structure
b) Software architecture
c) Algorithmic detail and
d) Interface representations
The hardware requirements are also determined at this stage, along with a picture of the
overall system architecture. By the end of this stage the software engineer should be
able to identify the relationship between the hardware, the software and the associated
interfaces. Any faults in the specification should ideally not be passed 'downstream'.
Waterfall Model Cont..
4. In the implementation and testing phase the designs are translated into the
software domain.
a) Detailed documentation from the design phase can significantly reduce the coding effort.
b) Testing at this stage focuses on making sure that any errors are identified and that the
software meets its required specification.
5. In the integration and system testing phase all the program units are integrated and
tested to ensure that the complete system meets the software requirements. After this
stage the software is delivered to the customer. [Deliverable: the software product is
delivered to the client for acceptance testing.]
6. The maintenance phase is usually the longest stage of the software's life. In this phase the
software is updated to:
a) Meet the changing customer needs
b) Adapt to changes in the external environment
c) Correct errors and oversights previously undetected in the testing phases
d) Enhance the efficiency of the software
Observe that feedback loops allow for corrections to be incorporated into the model. For
example, a problem/update in the design phase requires a 'revisit' to the specification
phase. When changes are made at any phase, the relevant documentation should be
updated to reflect that change.
Testing Levels
Testing levels serve to identify missing areas and to prevent overlap and repetition
between the development life cycle phases. Software development life cycle models define
phases such as requirement gathering and analysis, design, coding or implementation,
testing and deployment. Each phase goes through testing; hence there are various levels
of testing. The various levels of testing are:
1. Unit testing
2. Component testing
3. Integration testing
• Big bang integration testing
• Top down
• Bottom up
• Functional incremental
4. Component integration testing
5. System integration testing
6. System testing
7. Acceptance testing
8. Alpha testing
9. Beta testing
Unit Testing
• Unit testing is a method by which individual units of source code are tested to determine if
they are fit for use. A unit is the smallest testable part of an application like functions /
procedures, classes, interfaces.

• Unit tests are typically written and run by software developers to ensure that code meets its
design and behaves as intended.

• The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct.

• A unit test provides a strict, written contract that the piece of code must satisfy. As a result,
it affords several benefits. Unit tests find problems early in the development cycle.
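As a minimal sketch of the idea (the `add` function below is a hypothetical unit, not from the slides), a unit test isolates one small function and checks it against its written contract:

```python
def add(a, b):
    """Hypothetical unit under test: the smallest testable part."""
    return a + b

def test_add():
    # Each assertion documents one part of the contract the unit must satisfy.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0

test_add()  # raises AssertionError as soon as the unit breaks its contract
```

In practice such tests are usually run through a framework (e.g. `unittest` or `pytest`) so that failures are reported rather than raised one at a time.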
Integration testing
• Integration testing tests integration or interfaces between components, interactions to
different parts of the system such as an operating system, file system and hardware or
interfaces between systems.

• Integration testing is done by a specific integration tester or test team.


• Big Bang integration testing:

• In Big Bang integration testing all components or modules are integrated simultaneously,
after which everything is tested as a whole.

• Big Bang testing has the advantage that everything is finished before integration testing
starts.

• The major disadvantage is that in general it is time consuming and difficult to trace the
cause of failures because of this late integration.
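A rough illustration with two hypothetical modules (`parse_order` and `price_order` are invented for this sketch): in Big Bang integration, both are wired together at once and tested only through the combined interface:

```python
# Hypothetical module 1: parses "item, quantity" text into a dict.
def parse_order(text):
    item, qty = text.split(",")
    return {"item": item.strip(), "qty": int(qty)}

# Hypothetical module 2: prices an order against a price list.
def price_order(order, price_list):
    return price_list[order["item"]] * order["qty"]

# Big Bang integration: integrate everything simultaneously and
# test the whole through its external interface.
def total_cost(text, price_list):
    return price_order(parse_order(text), price_list)

prices = {"pen": 2, "book": 10}
assert total_cost("pen, 3", prices) == 6
# If this assertion fails, the late integration makes it hard to tell
# whether the parser or the pricing module caused the failure.
```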
System Testing
• It verifies whether all the system elements have been integrated and perform
the allocated functions.
• Once all the components are integrated, the application as a whole is
tested rigorously to see that it meets the quality standards.
• It is performed by a specialized testing team.
• System testing is important for the following reasons:
– It is the first step in the Software Development Life Cycle where the
application is tested as a whole.
– The application is tested thoroughly to verify that it meets the
functional and technical specifications.
– The application is tested in an environment very close to the
production environment where it will be deployed.
– It enables us to test, verify and validate both the business requirements
and the application's architecture.
Acceptance Testing
• The most important stage.
• It is conducted by the Quality Assurance Team,
– who gauge whether the application meets the intended specifications and satisfies
the client's requirements.
• The QA team will have a set of pre-written scenarios and test cases that
will be used to test the application.
• It can range from an informal test drive to a planned and systematically
executed series of tests.
• It can be conducted over a period of weeks or months.
• Acceptance tests are intended to point out not only simple spelling
mistakes, cosmetic errors or interface gaps, but also any bugs
in the application that will result in system crashes or major errors in the
application.
• There are also legal and contractual requirements for acceptance of the
system.
Alpha testing
Alpha testing is one of the most common software testing strategies used in software
development. It is especially used by product development organizations.
• This test takes place at the developer’s site. Developers observe the users and note
problems.
• Alpha testing is testing of an application when development is about to complete. Minor
design changes can still be made as a result of alpha testing.
• Alpha testing is typically performed by a group that is independent of the design team, but
still within the company, e.g. in-house software test engineers, or software QA engineers.
• Alpha testing is final testing before the software is released to the general public. It has two
phases:
• In the first phase of alpha testing, the software is tested by in-house developers. They
use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs
quickly.
• In the second phase of alpha testing, the software is handed over to the software QA
staff, for additional testing in an environment that is similar to the intended use.
• Alpha testing is simulated or actual operational testing by potential users/customers or an
independent test team at the developers’ site. Alpha testing is often employed for off-the-
shelf software as a form of internal acceptance testing, before the software goes to beta
testing.
Beta testing
• It is also known as field testing. It takes place at customer’s site. It sends the system to users
who install it and use it under real-world working conditions.

• A beta test is the second phase of software testing, in which a sampling of the intended
audience tries the product out. (Beta is the second letter of the Greek alphabet.) Originally,
the term alpha test meant the first phase of testing in a software development process. The
first phase includes unit testing, component testing, and system testing. Beta testing can be
considered "pre-release testing".

• The goal of beta testing is to place your application in the hands of real users outside of
your own engineering team to discover any flaws or issues from the user’s perspective that
you would not want to have in your final, released version of the application.
Testing varieties

• Black box testing


• White box testing
• Unit testing
• Incremental testing
• Integration testing
• Functional testing
• System testing
• End-to-end testing
• Sanity testing
• Regression testing
• Acceptance testing
• Load testing
Cont..

• Usability testing
• Install/uninstall testing
• Performance testing
• Recovery testing
• Security testing
• Compatibility testing
• Exploratory testing
• Ad-hoc testing
• User acceptance testing
• Comparison testing
• Alpha testing
• Beta testing
• Mutation testing
Black box testing
• Specification-based testing techniques are also known as 'black-box' or input/output driven
testing techniques because they view the software as a black box with inputs and outputs.

• The testers have no knowledge of how the system or component is structured inside the box.
In black-box testing the tester is concentrating on what the software does, not how it does it.

• The definition mentions both functional and non-functional testing. Functional testing is
concerned with what the system does, its features or functions. Non-functional testing is
concerned with how well the system does it, covering characteristics such as performance,
usability, portability and maintainability.

• Specification-based techniques are appropriate at all levels of testing (component testing


through to acceptance testing) where a specification exists. For example, when performing
system or acceptance testing, the requirements specification or functional specification may
form the basis of the tests.
Black box testing Cont..
There are four specification-based or black-box techniques:
1. Equivalence partitioning
2. Boundary value analysis
3. Decision tables
4. State transition testing
Equivalence partitioning
• Equivalence partitioning is a software testing technique that divides the input and/or
output data of a software unit into partitions of data from which test cases can be derived.

• The equivalence partitions are usually derived from the requirements specification for
input attributes that influence the processing of the test object.

• Test cases are designed to cover each partition at least once.


What can be found using equivalence partitioning?
• The equivalence partitioning technique uncovers classes of errors.

• Testing uncovers sets of inputs that cause errors or failures, not just individual inputs.
What can be partitioned?
• Usually it is the input data that is partitioned.

• However, depending on the software unit to be tested, output data can be partitioned as
well.

• Each partition shall contain a set or range of values, chosen such that all the values can
reasonably be expected to be treated by the component in the same way (i.e. they may be
considered ‘equivalent’).
Recommendations on defining partitions
A number of items must be considered:

• All valid input data for a given condition are likely to go through the same process.

• Invalid data can go through various processes and need to be evaluated more carefully. For
example:
 a blank entry may be treated differently than an incorrect entry,
 a value that is less than a range of values may be treated differently than a value that is
greater,
 if there is more than one error condition within a particular function, one error may
override the other, which means the subordinate error does not get tested unless the
other value is valid.
Equivalence partitioning example
• Example of a function which takes a parameter “month”.
• The valid range for the month is 1 to 12, representing January to December. This valid range
is called a partition.
• In this example there are two further partitions of invalid ranges.

    x < 1   |   1 ≤ x ≤ 12   |   12 < x

• Test cases are chosen so that each partition is tested, one representative
value per partition:

    -2 (x < 1),   5 (1 ≤ x ≤ 12),   17 (12 < x)
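The month example can be sketched in code (the `classify_month` helper is invented for illustration); one representative value per partition is enough to cover it:

```python
def classify_month(x):
    """Classify an input month against the three partitions of the example."""
    if x < 1:
        return "invalid (x < 1)"
    elif x <= 12:
        return "valid (1 <= x <= 12)"
    else:
        return "invalid (12 < x)"

# One test case per equivalence partition: -2, 5 and 17.
assert classify_month(-2) == "invalid (x < 1)"
assert classify_month(5) == "valid (1 <= x <= 12)"
assert classify_month(17) == "invalid (12 < x)"
```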
Boundary value analysis
• Equivalence partitioning is not a stand-alone method for determining test cases. It is usually
supplemented by boundary value analysis.

• Boundary value analysis focuses on values on the edge of an equivalence partition or at the
smallest value on either side of an edge.
Equivalence partitioning with boundary value analysis
We use the same example as before. The test cases are supplemented with boundary values:
the edges of the valid partition (1 and 12) and the smallest value on either side of each
edge (0 and 2, 11 and 13).

    Test values:  -2,  0, 1, 2,  5,  11, 12, 13,  17
    Partitions:      x < 1  |   1 ≤ x ≤ 12   |  12 < x
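As a sketch (the `is_valid_month` predicate is an assumed implementation, not from the slides), boundary value analysis adds the edge values and their nearest neighbours to the partition representatives:

```python
def is_valid_month(x):
    """Assumed implementation of the month check from the example."""
    return 1 <= x <= 12

# Partition representatives plus boundary values: the edges (1, 12)
# and the smallest value on either side of each edge (0, 2, 11, 13).
cases = [(-2, False), (0, False), (1, True), (2, True),
         (5, True), (11, True), (12, True), (13, False), (17, False)]
for value, expected in cases:
    assert is_valid_month(value) == expected
```

A common defect this catches is an off-by-one comparison (e.g. `x < 12` instead of `x <= 12`), which only the boundary values 12 and 13 would expose.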
Decision tables
Decision tables are a precise yet compact way to model complicated logic. Decision tables, like
if-then-else and switch-case statements, associate conditions with actions to perform.

But, unlike the control structures found in traditional programming languages, decision tables
can associate many independent conditions with several actions in an elegant way.
Decision Tables - Usage
Decision tables make it easier to observe that all possible conditions are
accounted for.

Decision tables can be used for:


Specifying complex program logic
Generating test cases (Also known as logic-based testing)

Logic-based testing is considered as:


structural testing when applied to structure (i.e. control flow graph of an
implementation).
functional testing when applied to a specification.
Decision Tables - Structure
A decision table has four areas:

    Conditions (Condition Stub)   |   Condition Alternatives (Condition Entry)
    Actions (Action Stub)         |   Action Entries

• Each condition corresponds to a variable, relation or predicate.
• The possible values for conditions are listed among the condition alternatives:
  • Boolean values (True / False) – Limited Entry Decision Tables
  • Several values – Extended Entry Decision Tables
  • Don't care values
• Each action is a procedure or operation to perform.
• The entries specify whether (or in what order) the action is to be performed.

To express the program logic we can use a limited-entry decision table
consisting of 4 areas called the condition stub, the condition entry, the action
stub and the action entry.
Decision Tables – Structure Cont..
                  Rule 1   Rule 2   Rule 3   Rule 4
    Condition 1   Yes      Yes      No       No
    Condition 2   Yes      X        No       X
    Condition 3   No       Yes      No       X
    Condition 4   No       Yes      No       Yes
    Action 1      Yes      Yes      No       No
    Action 2      No       No       Yes      No
    Action 3      No       No       No       Yes

(The conditions form the condition stub and their Yes/No/X entries the condition
entry; the actions form the action stub and their entries the action entry.
X marks a "don't care" value.)
Decision Tables – Structure Cont..
• We can specify default rules to indicate the action to be taken when
none of the other rules apply.
• When using decision tables as a test tool, default rules and their
associated predicates must be explicitly provided.

                     Rule 5   Rule 6   Rule 7   Rule 8
    Condition 1      X        No       Yes      Yes
    Condition 2      X        Yes      X        No
    Condition 3      Yes      X        No       No
    Condition 4      No       No       Yes      X
    Default action   Yes      Yes      Yes      Yes
Decision Table - Example
Printer Troubleshooting

                                            1  2  3  4  5  6  7  8
  Conditions
    Printer does not print                  Y  Y  Y  Y  N  N  N  N
    A red light is flashing                 Y  Y  N  N  Y  Y  N  N
    Printer is unrecognized                 Y  N  Y  N  Y  N  Y  N
  Actions
    Check the power cable                         X
    Check the printer-computer cable        X     X
    Ensure printer software is installed    X     X     X     X
    Check/replace ink                       X  X        X  X
    Check for paper jam                        X     X
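The troubleshooting table lends itself directly to table-driven tests. In this sketch (the function and action names are invented here), each rule column becomes one test case, and the assertions check that the implementation performs exactly the actions the table marks:

```python
# Hypothetical implementation of the printer-troubleshooting logic.
def troubleshoot(does_not_print, red_light, unrecognized):
    actions = []
    if does_not_print and not red_light and unrecognized:
        actions.append("check power cable")
    if does_not_print and unrecognized:
        actions.append("check printer-computer cable")
    if unrecognized:
        actions.append("ensure printer software is installed")
    if red_light:
        actions.append("check/replace ink")
    if does_not_print and not unrecognized:
        actions.append("check for paper jam")
    return actions

# Rule 3 of the table: does not print, no red light, unrecognized.
assert troubleshoot(True, False, True) == [
    "check power cable",
    "check printer-computer cable",
    "ensure printer software is installed",
]
# Rule 8: everything fine, so no action is marked.
assert troubleshoot(False, False, False) == []
```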
State Transition Testing
What is State Transition Testing?
State Transition testing is a black box testing technique in which outputs are triggered by
changes to the input conditions or changes to the 'state' of the system. In other words, tests
are designed to execute valid and invalid state transitions.
When to use?
• When we have a sequence of events that occur and associated conditions that apply to those
events
• When the proper handling of a particular event depends on the events and conditions that
have occurred in the past
• It is used for real-time systems with various states and transitions involved
Deriving test cases:
• Understand the various states and transitions, and mark each state and transition as valid
or invalid
• Define a sequence of events that leads to an allowed test ending state
• Each visited state and traversed transition should be noted down
• Steps 2 and 3 should be repeated until all states have been visited and all transitions
traversed
• For test cases to have good coverage, the actual input values and the actual output values
have to be generated
State Transition Testing Cont..
Example:
A system's transitions can be represented in a state transition diagram; the example here
is a simple on/off light switch.
State Transition Testing Cont..
The tests are derived from the above states and transitions; below are the possible
scenarios that need to be tested.

                  Test 1       Test 2       Test 3
    Start State   Off          On           On
    Input         Switch ON    Switch Off   Switch ON
    Output        Light ON     Light Off    Fault
    Finish State  On           Off          On
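The scenarios above can be checked against a minimal state machine. This is a sketch: the `LightSwitch` class is invented, and the invalid-transition test presses Switch ON while the light is already On, an assumption about what the table's Fault case intends:

```python
class LightSwitch:
    """Minimal two-state machine: Off <-> On."""
    def __init__(self, state="Off"):
        self.state = state

    def press(self, command):
        # Valid transitions produce an output and change the state.
        if command == "Switch ON" and self.state == "Off":
            self.state = "On"
            return "Light ON"
        if command == "Switch Off" and self.state == "On":
            self.state = "Off"
            return "Light Off"
        # Any other (state, input) pair is an invalid transition.
        return "Fault"

# Test 1: valid transition Off -> On.
s = LightSwitch("Off")
assert s.press("Switch ON") == "Light ON" and s.state == "On"
# Test 2: valid transition On -> Off.
assert s.press("Switch Off") == "Light Off" and s.state == "Off"
# Invalid transition: Switch ON while already On; the state is unchanged.
s2 = LightSwitch("On")
assert s2.press("Switch ON") == "Fault" and s2.state == "On"
```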
White box Testing

• This testing is based on knowledge of the internal logic of an application's
code.
• Also known as Glass Box Testing.
• Internal software and code workings should be known for this type of
testing.
• Tests are based on coverage of code statements, branches, paths and
conditions.
Pseudocode and Control Flow Graphs

The statements of the pseudocode form the nodes of the control flow graph, and the
possible transfers of control between them form the edges:

    input(Y)
    if (Y <= 0) then
        Y := -Y
    end_if
    while (Y > 0) do
        input(X)
        Y := Y - 1
    end_while
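A Python rendering of the pseudocode (a sketch; the recorded `path` list is added here to make the covered branches visible). White-box tests aim to execute every node and edge of the control flow graph:

```python
def process(y):
    """Mirror of the pseudocode; returns the branches actually taken."""
    path = []
    if y <= 0:            # decision node
        y = -y
        path.append("negate")
    while y > 0:          # loop: each iteration traverses the back edge
        y -= 1            # stands in for input(X); Y := Y - 1
        path.append("loop")
    return path

# Three inputs together cover every branch and the loop edge:
assert process(-2) == ["negate", "loop", "loop"]  # if-branch taken, loop twice
assert process(1) == ["loop"]                     # if-branch skipped
assert process(0) == ["negate"]                   # loop body never entered
```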
Unit testing

• Testing of individual software components or modules.

• Typically done by the programmer and not by testers, as it requires


detailed knowledge of the internal program design and code.

• May require developing test driver modules or test harnesses.


Integration Testing

• Testing of integrated modules to verify combined functionality after


integration.

• Modules are typically code modules, individual applications, client and


server applications on a network, etc.

• This type of testing is especially relevant to client/server and distributed


systems.
End-to-end & Sanity Testing

• End-to-end testing
– Similar to system testing,
– involves testing of a complete application environment in a situation
that mimics real-world use, such as interacting with a database, using
network communications, or interacting with other hardware,
applications, or systems if appropriate.
• Sanity testing –
– Testing to determine whether a new software version is performing well
enough to accept it for a major testing effort.
– If the application crashes during initial use, the system is not stable
enough for further testing, and the build or application is assigned back
to be fixed.
Regression Testing
• Whenever a change is made in a software application, it is quite
possible that other areas within the application have been affected
by this change.
• The intent of regression testing is to ensure that a change,
– such as a bug fix, did not result in another fault being uncovered in the
application.
• Regression testing is important for the following reasons:
– It minimizes gaps in testing when an application with changes
has to be tested.
– It tests the new changes to verify that the changes made did not affect
any other area of the application.
– It mitigates risks when performed on the application.
– Test coverage is increased without compromising timelines.
– It increases speed to market the product.
Performance Testing
• It is mostly used to identify any bottlenecks or performance issues rather
than finding the bugs in software.
• There are different causes which contribute to lowering the performance
of software:
– Network delay.
– Client side processing.
– Database transaction processing.
– Load balancing between servers.
– Data rendering.
• Performance testing is considered important and mandatory, and
evaluates the following aspects:
– Speed (i.e. Response Time, data rendering and accessing)
– Capacity
– Stability
– Scalability
• It can be either a qualitative or quantitative testing activity, and can be
divided into sub-types such as load testing and stress testing.
Load Testing

• Testing the behavior of the software by applying maximum load in
terms of accessing the software and manipulating large input data.
• It can be done under both normal and peak load conditions.
• Most of the time, Load testing is performed with the help of
automated tools such as
– Load Runner, AppLoader, IBM Rational Performance Tester,
Apache JMeter, Silk Performer, Visual Studio Load Test etc.
• Virtual users (VUsers) are defined in the automated testing tool and
the script is executed to verify the Load testing for the Software.
• The quantity of users can be increased or decreased concurrently or
incrementally based upon the requirements.
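The virtual-user idea can be sketched without a commercial tool; in the example below each thread acts as a VUser against a stand-in operation (the function and timings are assumptions, not a real system):

```python
import threading
import time

def system_under_test(payload):
    # Stand-in for the real operation under load (e.g. an HTTP request).
    time.sleep(0.01)
    return len(payload)

results = []
lock = threading.Lock()

def virtual_user(user_id, requests_per_user=5):
    # Each virtual user issues a fixed number of timed requests.
    for _ in range(requests_per_user):
        start = time.perf_counter()
        system_under_test(f"request from user {user_id}")
        elapsed = time.perf_counter() - start
        with lock:
            results.append(elapsed)

# Ten concurrent virtual users; the count can be raised (concurrently or
# incrementally) to increase the applied load.
threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} requests, average {sum(results) / len(results):.4f}s")
```

Dedicated tools add realistic pacing, think times, and reporting on top of this basic pattern.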
Stress Testing

• This testing type includes testing software behavior under
abnormal conditions.
• Taking away resources, or applying load beyond the actual load
limit, is stress testing.
• This testing can be performed by testing different scenarios such as:
– Shutdown or restart of Network ports randomly.
– Turning the database on or off.
– Running different processes that consume resources such as
CPU, Memory, server etc.
Usability Testing

• Measures how efficient and effective the system is to use.
• There are standards, quality models and methods which define
usability in the form of attributes and sub-attributes, such as
ISO-9126, ISO-9241-11, ISO-13407 and IEEE Std 610.12.
Security Testing
• Security testing involves the testing of software in order to identify any
flaws and gaps from a security and vulnerability point of view.
• Following are the main aspects which Security testing should ensure:
– Confidentiality, Integrity.
– Authentication, Availability.
– Authorization, Non-repudiation.
– Software is secure against known and unknown vulnerabilities.
– Software data is secure.
– Software is according to all security regulations.
– Input checking and validation.
– SQL insertion attacks.
– Injection flaws.
– Session management issues.
– Cross-site scripting attacks.
– Buffer overflows vulnerabilities.
– Directory traversal attacks.
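As a small sketch of one of the checks listed above, the test below verifies that a parameterized SQLite query does not execute an injected SQL fragment (the schema and payload are invented for the example):

```python
import sqlite3

# In-memory database with one invented table for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # Parameter binding keeps attacker input as data, never as SQL.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# Security check: a classic injection payload must match no rows, whereas
# naive string concatenation would have matched every row.
assert find_user("alice") == [("alice",)]
assert find_user("' OR '1'='1") == []
print("injection attempt returned no rows")
```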
Install/uninstall testing

• Tested for full, partial, or upgrade install/uninstall processes on different
operating systems, under different hardware and software environments.
Recovery testing

• Testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
Compatibility testing

• Testing how well software performs in a particular
– Hardware
– Software
– Operating system
– Network environment, and different combinations of the above.
An effective testing practice applies the above steps to the
testing policies of every organization, ensuring that the
organization maintains the strictest standards of software
quality.
Testing Review

• A review is a systematic examination of a document by one or more
people with the main aim of finding and removing errors.
• This can include documents such as requirement specifications, system
designs, code, test plans and test cases.
Cont.

• The types of defects most typically found by reviews are:


– Deviations from standards either internally defined and managed or
regulatory/legally defined by Parliament or perhaps a trade
organization.
– Requirements defects—for example, the requirements are ambiguous,
or there are missing elements.
– Design defects—for example, the design does not match the
requirements.
– Insufficient maintainability—for example, the code is too complex to
maintain.
– Incorrect interface specifications—for example, the interface
specification does not match the design or the receiving or sending
interface.
Review Process

• Some types of review are completely informal, while others are very
formal.
• The decision on the appropriate level of formality for a review is usually
based on combinations of the following factors:
• The maturity of the development process: the more mature the process is,
the more formal reviews tend to be.
• Legal or regulatory requirements.
• The need for an audit trail. Formal review processes ensure that it is
possible to trace backwards throughout the software development life
cycle.
• Reviews can also have a variety of objectives,
– Finding defects.
– Gaining understanding.
– Generating discussion.
– Decision making by consensus.
• The way a review is conducted will depend on its specific objective.
Types of Review

1. Informal review (least formal)
2. Walkthrough
3. Technical review
4. Inspection (most formal)
Success factors of review

• Each review should have a clearly predefined and agreed
objective.
• The right people should be involved to ensure the
objective is met.
• Review techniques (both formal and informal) need to be
suitable to the type and level of software work-products
and reviewers (this is especially important in
inspections).
• Checklists or roles should be used, where appropriate, to
increase the effectiveness of defect identification.
Cont.

• Management support is essential for a good review


process (e.g. by incorporating adequate time for
review activities in project schedules).
• There should be an emphasis on learning and
process improvement.
• Other, more quantitative approaches to success
measurement could also be used:
– How many defects were found.
– Time taken to review/inspect.
– Percentage of project budget used/saved.
Testing Process

• The test process includes the following activities:
– Planning and control
– Analysis and design
– Implementation and execution
– Evaluating exit criteria and reporting
– Test closure activities
Test planning

• Test planning is concerned with setting out standards


for the testing process which will be carried out in
the near future.
– Software engineers - to design and conduct testing
– Technical staff - to get an overall picture of testing
Content of a test plan

• The testing process - description of the major phases


• Requirement traceability –
– users are interested in the system meeting its requirements, and
therefore testing should be planned so that all requirements
are individually tested.
• Test items – the products to be tested should be specified.
• Testing schedule and resource allocation
• Test recording procedures
– results must be systematically recorded.
– it must be possible to audit the testing process.
– this verifies the correctness of the testing process.
Test control

• It is an ongoing activity.
• It involves comparing actual progress against the
plan and reporting the status, including deviations
from the plan.
• Test control guides the testing to fulfill
– the mission, strategies, and objectives, including revisiting
the test planning activities as needed.
Metrics to monitor test planning and control

• May include
– Risk and test coverage
– Defect discovery and information
– Planned versus actual hours to develop testware
and execute test cases
Test Analysis & Design

• The process of test analysis and design is used to:


– Identify the test conditions
– Create test cases that exercise the identified test
conditions
– Prioritization criteria identified during risk analysis
and test planning should be applied throughout
the process, from analysis and design to
implementation and execution
Metrics to monitor test analysis & design
• May include:
– Percentage of requirements covered by test
conditions
– Percentage of test conditions covered by test
cases
– Number of defects found during test analysis and
design
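These percentages can be computed from simple traceability mappings; the sketch below assumes invented requirement, condition, and case identifiers:

```python
# Invented traceability data for the sketch.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Which requirements each test condition covers.
condition_to_reqs = {
    "COND-1": {"REQ-1"},
    "COND-2": {"REQ-2", "REQ-3"},
}

# Which test conditions each test case exercises.
case_to_conditions = {
    "CASE-1": {"COND-1"},
}

covered_reqs = set().union(*condition_to_reqs.values())
req_coverage = 100 * len(covered_reqs & requirements) / len(requirements)

covered_conds = set().union(*case_to_conditions.values())
cond_coverage = 100 * len(covered_conds) / len(condition_to_reqs)

print(f"requirements covered by conditions: {req_coverage:.0f}%")  # 75%
print(f"conditions covered by cases: {cond_coverage:.0f}%")        # 50%
```

In practice these mappings would come from a requirements-management or test-management tool rather than hand-written dictionaries.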
Test Implementation and Execution

• Includes organizing the test cases into test


procedures (test scripts),
• Finalizing test data and test environments, and
forming a test execution schedule to enable test case
execution to begin.
• This also includes checking against explicit and
implicit entry criteria for the test level in question.
Test Execution

• Test execution begins once the test object is


delivered and entry criteria to test execution are
satisfied.
• Tests should be executed according to the test
procedures, but freedom is given to the tester to
some extent.
• Automated tests will follow their defined instructions
without deviation.
Metrics to monitor test implementation &
execution

• Percentage of test environments configured


• Percentage of test data records loaded
• Percentage of test conditions and cases executed
• Percentage of test cases automated
Documenting and Reporting

• Test progress needs to be documented for further
use.
• This includes measuring progress towards
completion.
Test Report
• For test reporting IEEE 829 specifies a Test Summary
Report, consisting of the following sections:
– Test summary report identifier
– Summary
– Variances
– Comprehensive assessment
– Summary of results
– Evaluation
– Summary of activities
– Approvals
Metrics to monitor test progress and
completion
• Number of test conditions, test cases or test specifications planned
and those executed broken down by whether they passed or failed
• Total defects raised, broken down by severity and priority for those
fixed and outstanding
• Number of changes (change requests) raised, accepted (built) and
tested
• Planned expenditure versus actual expenditure
• Planned elapsed time versus actual elapsed time
• Risks identified broken down by those mitigated by the test activity
and any left outstanding
• Percentage of test time lost due to blocking events
• Retested items
• Total test time planned against effective test time carried out
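A sketch of how some of these progress metrics might be computed from raw execution records (the records and field names are assumptions):

```python
from collections import Counter

# Invented execution records for the sketch.
test_results = [
    {"case": "CASE-1", "status": "passed"},
    {"case": "CASE-2", "status": "failed", "severity": "high"},
    {"case": "CASE-3", "status": "passed"},
    {"case": "CASE-4", "status": "failed", "severity": "low"},
]

# Executed cases broken down by outcome.
status_counts = Counter(r["status"] for r in test_results)
# Defects raised, broken down by severity (failed cases only).
severity_counts = Counter(r["severity"] for r in test_results
                          if r["status"] == "failed")

print(f"executed: {len(test_results)}, passed: {status_counts['passed']}, "
      f"failed: {status_counts['failed']}")
print(f"failures by severity: {dict(severity_counts)}")
```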
Test Closure Activities

• The key outputs should be captured and either passed to the


relevant person or archived.
• Test closure activities fall into four main groups:
– Ensuring that all test work is indeed concluded.
– Delivering valuable work products to those who need
them.
– Performing or participating in retrospective meetings
– Archiving results, logs, reports, and other documents and
work products in the configuration management system,
linked to the system itself.
Metrics to monitor test closure activities

• Percentage of test cases run during test execution
(coverage)
• Percentage of test cases checked into re-usable test
case repository
• Ratio of test cases automated: to be automated
• Percentage of test cases identified as regression tests
• Percentage of outstanding defect reports closed off
(e.g. deferred, no further action, change request,
etc.)
• Percentage of work products identified and archived.
• The software development life cycle consists of
different stages.
• When testing software, the V-model can be
followed.
• Different testing varieties are available to cover
both functional and non-functional scenarios.
• Reviewing is a very necessary part of testing.
• Testing needs to be done according to a process.
Q&A
• What are the stages of the STLC?
• What are the different types of testing available?
• What is the advantage of the review process?
• What are the success factors of the review process?
• What is the difference between alpha testing and
beta testing?
• What are the steps of the testing process?
