
BGM 576

SOFTWARE TEST AND QUALITY EVALUATION TECHNIQUES

W4: TEST DESIGN TECHNIQUES

DR. SAVAŞ ÖZTÜRK

ŞEHİR UNIVERSITY
INFORMATION SECURITY ENGINEERING
GRADUATE COURSE
SCOPE OF THE COURSE
 FUNDAMENTALS OF TESTING
 SOFTWARE DEVELOPMENT LIFE CYCLE AND
TESTING
 STATIC TECHNIQUES
 TEST DESIGN
 TEST MANAGEMENT
 TEST TOOLS AND TEST AUTOMATION
 SOFTWARE PRODUCT QUALITY MODELS
 SOFTWARE QUALITY METRICS
 APPLICATION SECURITY, STATIC CODE
ANALYSIS
 USABILITY, PERFORMANCE, SECURITY,
RELIABILITY AND MAINTAINABILITY
INTRODUCTION
 IEEE 829 – SOFTWARE TEST DOCUMENTATION
 Test conditions -> Test Design Specification
 Test cases -> Test Case Specification
 Test procedures -> Test Procedure Specification (a.k.a. test scripts)
FORMALITY OF TEST DOCUMENTATION
 Context: a safety-critical system demands far more formal test documentation than a simple web page
 Organisation: is a maturity model such as CMMI followed?
 Time pressure
TEST ANALYSIS: IDENTIFYING TEST CONDITIONS

 The test basis includes whatever the tests are based on (documents, code, sometimes experience...).
 A test condition is simply something that we could test.
 For example, if we are testing a customer management and marketing system for a mobile phone company, we might have test conditions that are related to a marketing campaign, such as age of customer (pre-teen, teenager, young adult, mature), gender, postcode or zip code, and purchasing preference (pay-as-you-go or contract).
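 A minimal sketch (not from the slides) of those conditions written down as plain data before any test cases are designed; the age and preference partitions come from the text, the other value lists are assumptions:

# Illustrative catalogue of test conditions for the campaign example above.
# Each entry is something we *could* test, not yet a test case.
test_conditions = {
    "age_of_customer": ["pre-teen", "teenager", "young adult", "mature"],
    "gender": ["female", "male"],                        # assumed values
    "postcode_or_zip": ["valid", "invalid", "missing"],  # assumed values
    "purchasing_preference": ["pay-as-you-go", "contract"],
}

for name, values in test_conditions.items():
    print(f"{name}: {values}")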
TEST CONDITION APPROACHES
 Testing experts use different names to represent
the basic idea of 'a list of things that we could
test'.
 Test requirements [Marick]

 Test objectives [Craig]

 Test inventory [Hutcheson]

 Test possibilities [Black]


TEST CONDITIONS & TRACEABILITY
 A testing technique helps us select a good set of
tests from the total number of all possible tests
for a given system.
 The test conditions that are chosen will depend
on the test strategy or detailed test approach. For
example, they might be based on risk, models of
the system, likely failures, compliance
requirements, expert advice or heuristics.
 Test conditions should be able to be linked back
to their sources in the test basis - this is called
traceability.
TRACEABILITY
 Horizontal (through test documentation, for a given test level)
 Vertical (e.g. requirements to components)

 How many tests will actually be affected by this change in the requirements?

 Traceability between the tests and the requirement being tested enables the functions or features affected to be identified more easily.

 Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification.
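 A minimal Python sketch of such traceability, assuming a hand-kept mapping or a test management tool's export; all requirement IDs and test-case names are hypothetical:

# Hypothetical requirement-to-test traceability matrix.
traceability = {
    "REQ-001 (login)":      ["TC-01", "TC-02", "TC-03"],
    "REQ-002 (discounts)":  ["TC-04", "TC-05"],
    "REQ-003 (statements)": ["TC-06"],
}

def tests_affected_by(requirement: str) -> list[str]:
    """How many tests are affected by a change to this requirement?"""
    return traceability.get(requirement, [])

def untested_requirements(executed: set[str]) -> list[str]:
    """Before a release: which specified requirements have no executed test?"""
    return [req for req, tests in traceability.items()
            if not any(t in executed for t in tests)]

print(tests_affected_by("REQ-002 (discounts)"))   # ['TC-04', 'TC-05']
print(untested_requirements({"TC-01", "TC-04"}))  # ['REQ-003 (statements)']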
TEST CONDITIONS
TEST DESIGN: SPECIFYING TEST CASES
TEST IMPLEMENTATION: SPECIFYING TEST
PROCEDURES OR SCRIPTS
CONDITION->CASE->PROCEDURE
 Login Functionalities
 Successful Login
 Admin login
 Student Login
 Lecturer login

 Unsuccessful Login
 Username wrong
 Password wrong
 Both wrong

 Login Non-Functional Tests
 Performance
 Load
 Usability
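 A sketch of how the functional conditions above could become concrete test cases with pytest; the login() function, account names and credentials are hypothetical stand-ins for the real system:

import pytest

# Hypothetical system under test: exactly one valid credential pair.
VALID = ("admin", "s3cret")

def login(username: str, password: str) -> bool:
    return (username, password) == VALID

# "Successful Login" condition -> one test case per role would follow this
# pattern; shown for a single (assumed) admin account.
def test_successful_admin_login():
    assert login("admin", "s3cret") is True

# "Unsuccessful Login" condition -> one test case per sub-condition.
@pytest.mark.parametrize("username,password", [
    ("nobody", "s3cret"),   # username wrong
    ("admin",  "oops"),     # password wrong
    ("nobody", "oops"),     # both wrong
])
def test_unsuccessful_login(username, password):
    assert login(username, password) is False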
CATEGORIES OF TEST DESIGN
TECHNIQUES
SPECIFICATION-BASED OR BLACK-
BOX TECHNIQUES

 equivalence partitioning;
 boundary value analysis;

 decision tables;

 state transition testing.


BOUNDARY VALUE ANALYSIS
 Valid higher
 Invalid higher

 Valid lower

 Invalid lower
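 For example, assuming a field specified to accept whole numbers from 1 to 100 (a hypothetical range), the four boundary tests become:

# Hypothetical requirement: the field accepts whole numbers from 1 to 100.
LOW, HIGH = 1, 100

def accepts(value: int) -> bool:
    return LOW <= value <= HIGH

# Boundary value analysis: one test at each side of each boundary.
assert accepts(LOW)           # valid lower    (1)
assert not accepts(LOW - 1)   # invalid lower  (0)
assert accepts(HIGH)          # valid higher   (100)
assert not accepts(HIGH + 1)  # invalid higher (101)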
EQUIVALENCE PARTITIONING

 The software should correctly handle values from the invalid partition, by replying with an error message such as 'Balance must be at least $0.00'.
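 A minimal sketch of this example; the error message is the one quoted above, while the function itself and the choice of one representative value per partition are assumptions:

# Two partitions for the balance: negative (invalid) and non-negative (valid).
def check_balance(amount: float) -> str:
    if amount < 0:                              # invalid partition: < $0.00
        return "Balance must be at least $0.00"
    return "OK"                                 # valid partition: >= $0.00

# Equivalence partitioning: one representative value per partition is enough.
assert check_balance(-10.00) == "Balance must be at least $0.00"
assert check_balance(50.00) == "OK"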
EP – EXEMPLARY CASE
 How would you test the table?
 (SIG Maintainability model)
OPEN BOUNDARIES
 Open boundaries are more difficult to test, but there are ways to approach them. Actually the best solution to the problem is to find out what the boundary should be specified as!
 If we really cannot find anything about what this boundary should be, then we probably need to use an intuitive or experience-based approach, probing various large values to try to make the system fail.
TEST TECHNIQUES
 We need to get a good balance between covering
too many and too few test conditions in our tests.
 When we come to test invalid partitions, the
safest option is probably to try to cover only one
invalid test condition per test case.
 To cover the boundary test cases, it may be
possible to combine all of the minimum valid
boundaries for a group of fields into one test case
and also the maximum boundary values.
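 A sketch of that combination rule, assuming three hypothetical numeric fields: valid minimums (and maximums) are combined, while invalid values are kept one per test case so a failure can be attributed to a single field:

# Hypothetical field ranges for one input form.
RANGES = {"age": (18, 65), "quantity": (1, 99), "discount": (0, 50)}

# All minimum valid boundaries combined into one test case, all maximums
# into another (valid values can safely share a test case)...
all_minimums = {field: lo for field, (lo, hi) in RANGES.items()}
all_maximums = {field: hi for field, (lo, hi) in RANGES.items()}

# ...but only ONE invalid condition per test case.
invalid_cases = [{**all_minimums, field: lo - 1}
                 for field, (lo, hi) in RANGES.items()]

print(all_minimums)  # {'age': 18, 'quantity': 1, 'discount': 0}
print(len(invalid_cases), "single-fault invalid test cases")  # 3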
DECISION TABLE
 A decision table is a good way to deal with
combinations of things (e.g. inputs). This technique is
sometimes also referred to as a 'cause-effect' table.
 Decision tables provide a systematic way of stating
complex business rules, which is useful for developers
as well as for testers.
 Decision tables can be used in test design whether or not they are used in specifications, as they help testers explore the effects of combinations of different inputs and other software states that must correctly implement business rules. Helping the developers do a better job can also lead to better relationships with them.
DT- EXAMPLE

Works with assumptions:
- New customers can’t have Loyalty cards.
- New customers can’t use both new customer and coupon discounts.
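 A sketch of this table as data: the three conditions follow from the assumptions above, every combination is one column, and infeasible columns are dropped (the simplification the next slide refers to):

from itertools import product

# Every combination of the three conditions is one column of the full table.
columns = [
    {"new_customer": n, "loyalty_card": l, "coupon": c}
    for n, l, c in product([True, False], repeat=3)
]

# Assumption from the slide: new customers can't have loyalty cards, so
# those columns are infeasible and can be dropped.
feasible = [col for col in columns
            if not (col["new_customer"] and col["loyalty_card"])]

print(len(columns), "columns ->", len(feasible), "feasible test cases")  # 8 -> 6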
SIMPLIFY DT

Design tables in order to decrease the number of possible test cases.
STATE TRANSITION
 Any system where you get a different output for
the same input, depending on what has happened
before, is a finite state system. A finite state
system is often shown as a state diagram.
PARTS OF A STATE TRANSITION MODEL
 A state transition model has four basic parts:
 • the states that the software may occupy (open/closed or funded/insufficient funds);
 • the transitions from one state to another (not all transitions are allowed);
 • the events that cause a transition (closing a file or withdrawing money);
 • the actions that result from a transition (an error message or being given your cash).
STATE EXAMPLE
STATE TABLES
 Deriving tests only from a state graph (also known
as a state chart) is very good for seeing the valid
transitions, but we may not easily see the negative
tests, where we try to generate invalid transitions.
In order to see the total number of combinations of
states and transitions, both valid and invalid, a
state table is useful.
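 A minimal sketch of the four parts and of the state table idea, using the file open/closed example from the earlier list; the 'read' event and the action strings are assumptions:

# Valid transitions of a hypothetical file object:
# (state, event) -> (new state, action)
STATE_TABLE = {
    ("closed", "open"):  ("open",   "file handle returned"),
    ("open",   "close"): ("closed", "buffers flushed"),
    ("open",   "read"):  ("open",   "data returned"),
}
STATES = ("closed", "open")
EVENTS = ("open", "close", "read")

# Enumerating every state/event pair gives the state table, which exposes
# the invalid transitions (the negative tests) as well as the valid ones.
for state in STATES:
    for event in EVENTS:
        outcome = STATE_TABLE.get((state, event))
        print(f"{state:6} --{event:5}--> "
              f"{outcome if outcome else 'INVALID (negative test)'}")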
USE CASE TESTING

 Use case testing is a technique that helps us identify test cases that exercise the whole system on a transaction by transaction basis from start to finish.
STRUCTURE-BASED OR WHITE-
BOX TECHNIQUES
 Code coverage, decision coverage, statement coverage, structural testing, structure-based testing and white-box testing are all names used for this family of techniques.

 Test coverage measures in some specific way the amount of testing performed by a set of tests (derived in some other way, e.g. using specification-based techniques).
WARNINGS!
 There is danger in using a coverage measure.
100% coverage does not mean 100% tested!
 Coverage techniques measure only one dimension
of a multi-dimensional concept.
 Two different test cases may achieve exactly the
same coverage but the input data of one may find
an error that the input data of the other doesn't.
 One drawback of code coverage measurement is
that it measures coverage of what has been
written, i.e. the code itself; it cannot say anything
about the software that has not been written.
COVERAGE MEASURES FOR SPECIFICATION-BASED TESTING
 EP: percentage of equivalence partitions exercised (we could measure valid and invalid partition coverage separately if this makes sense);
 BVA: percentage of boundaries exercised (we could also separate valid and invalid boundaries if we wished);
 Decision tables: percentage of business rules or decision table columns tested;
 State transition testing: there are a number of possible coverage measures (see the sketch after this list):
 - Percentage of states visited
 - Percentage of (valid) transitions exercised (this is known as Chow's 0-switch coverage)
 - Percentage of pairs of valid transitions exercised ('transition pairs' or Chow's 1-switch coverage) - and longer series of transitions, such as transition triples, quadruples, etc.
 - Percentage of invalid transitions exercised (from the state table)
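 A sketch of the 0-switch measure, reusing the hypothetical file-state transitions from the earlier example:

# All valid transitions of a model vs. those a test suite exercised.
all_valid = {("closed", "open"), ("open", "close"), ("open", "read")}
exercised = {("closed", "open"), ("open", "close")}

# Chow's 0-switch coverage: percentage of valid transitions exercised.
coverage = 100 * len(exercised & all_valid) / len(all_valid)
print(f"0-switch coverage: {coverage:.0f}%")  # 67%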
REQUIREMENTS COVERAGE VS. CODE COVERAGE
 When coverage is discussed by business analysts,
system testers or users, it most likely refers to
the percentage of requirements that have
been tested by a set of tests. This may be
measured by a tool such as a requirements
management tool or a test management tool.
 However, when coverage is discussed by
programmers, it most likely refers to the
coverage of code, where the structural elements
can be identified using a tool, since there is good
tool support for measuring code coverage.
COVERAGE TYPES
 Statement coverage
 Decision coverage (conditionals)
 Branch coverage (conditionals + unconditionals)
 Linear code sequence and jump (LCSAJ) coverage
 Condition coverage
 Multiple condition coverage (also known as condition combination coverage)
 Condition determination coverage (also known as multiple condition decision coverage or modified condition decision coverage, MCDC)
 Path coverage (not loop friendly)
STATEMENT COVERAGE

        A   B   C   Statement coverage
TEST 1  2   3   8   83% (5/6)
TEST 2  0   25  50  83%
TEST 3  47  1   50  83%
TEST 4  20  25  70  100%
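 The code the table refers to is not reproduced on the slide; the following hypothetical six-statement fragment is consistent with its figures - statements 1-5 run for every input, and statement 6 is reached only by TEST 4:

def fragment(a: int, b: int, c: int) -> None:
    total = a + b                        # statement 1
    total = total + c                    # statement 2
    result = total * 2                   # statement 3
    print(result)                        # statement 4
    if a > 10 and b > 10 and c > 60:     # statement 5
        print(result - 100)              # statement 6: TEST 4 (20, 25, 70) only

 TESTs 1-3 each execute five of the six statements (83%); TEST 4 executes all six (100%). A tool such as coverage.py (coverage run, then coverage report) measures this automatically.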
DECISION COVERAGE
 Stronger than statement coverage.
 It 'subsumes' statement coverage - this means that 100% decision coverage always guarantees 100% statement coverage.

        A   B   C    Decision coverage
TEST 1  20  15  -10  Not yet 100%
TEST 2  10  2   6    100%
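 Again, the underlying code is not on the slide; a hypothetical fragment consistent with the table, with a single decision whose True and False outcomes must both be exercised:

def fragment(a: int, b: int, c: int) -> int:
    if c < 0:      # decision: TEST 1 (20, 15, -10) takes the True outcome,
        c = -c     #           TEST 2 (10, 2, 6) takes the False outcome
    return a + b + c

# TEST 1 alone already gives 100% statement coverage, yet only 50% decision
# coverage (the False outcome is never taken) - hence "not yet 100%".
# TEST 1 + TEST 2 exercise both outcomes: 100% decision coverage, which in
# turn guarantees 100% statement coverage (subsumption), not vice versa.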
CODE COVERAGE TOOL
EXPERIENCE-BASED
TECHNIQUES
 Although it is true that testing should be
rigorous, thorough and systematic, this is not all
there is to testing.
 There is a definite role for non-systematic techniques, i.e. tests based on a person's knowledge, experience, imagination and intuition.
 The reason is that some defects are hard to find using more systematic approaches, so a good 'bug hunter' can be very creative at finding those elusive defects.
ERROR GUESSING

 The success of error guessing is very much dependent on the skill of the tester, as good testers know where the defects are most likely to lurk.
 In using more formal techniques, the tester is likely to gain a better understanding of the system, what it does and how it works. With this better understanding, he or she is likely to be better at guessing ways in which the system may not work properly.
 There are no rules for error guessing. The tester is encouraged to think of situations in which the software may not be able to cope.
EXPLORATORY TESTING

 Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution.
 The planning involves the creation of a test charter: a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and the possible approaches to be used.
 The test design and test execution activities are performed in parallel, typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used.
 Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.
EXPLORATORY TESTING
 This is an approach that is most useful when
there are no or poor specifications and when time
is severely limited.
 It can also serve to complement other, more
formal testing, helping to establish greater
confidence in the software.
 Exploratory testing can be used as a check on the
formal test process by helping to ensure that the
most serious defects have been found.
CHOOSING A TEST TECHNIQUE

 Which technique is best? This is the wrong question! Each technique is good for certain things, and not as good for other things.
 For example, one of the benefits of structure-based techniques is that they can find things in the code that aren't supposed to be there, such as 'Trojan horses' or other malicious code.
 Conversely, state transition testing is unlikely to find boundary defects.
INTERNAL FACTORS
 Models used
 Tester knowledge and experience

 Likely defects

 Test objective

 Documentation

 Life cycle model


EXTERNAL FACTORS
 Risk

 Customer and contractual requirements

 Type of system

 Regulatory requirements

 Time and budget
