(The graphical evaluation of the annotated flowgraph, multiplying link counts and loop factors step by step, yields a maximum of 32,768 paths.)
Alternatively, you could have substituted a"1" for each link in the path expression and then
simplified, as follows:
a(b + c)d{e(fi)*fgj(m + l)k}*e(fi)*fgh

Substituting a 1 for each link, with each (fi)* loop taken 0 to 3 times (a factor of 4) and the outer loop taken up to 4 times:

= 2 x (4 x 2)^4 x 4
= 2 x 8^4 x 4 = 32,768
This is the same result we got graphically. Actually, the outer loop should be taken exactly four times, not zero to four times. Consequently, there is a superfluous "4" on the outlink in the last step. Therefore the maximum number of different paths is 8,192 rather than 32,768.
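The arithmetic above can be checked mechanically. A minimal sketch, with the loop factors (4 for each (fi)* loop, 4 iterations of the outer loop) taken from the example:

```python
# Maximum-path count for a(b+c)d{e(fi)*fgj(m+l)k}*e(fi)*fgh with 1s
# substituted for links. Loop factors follow the example in the text.
decision = 1 + 1            # (b + c): two alternatives
fi_loop = 4                 # (fi)* taken 0 to 3 times: 4 choices
inner = fi_loop * (1 + 1)   # one pass of the outer loop body: 4 x 2 = 8
naive = decision * inner ** 4 * fi_loop   # 2 x 8^4 x 4 = 32,768
corrected = decision * inner ** 4         # drop the superfluous "4": 8,192
print(naive, corrected)
```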
STRUCTURED FLOWGRAPH:
Structured code can be defined in several different ways that do not involve ad-hoc rules such as
not using GOTOs.
A structured flowgraph is one that can be reduced to a single link by successive application of
the transformations of Figure 5.7.
(Diagram: each transformation replaces a serial, parallel, or looping construct by a single equivalent link.)
Figure 5.7: Structured Flowgraph Transformations.
The node-by-node reduction procedure can also be used as a test for structured code. Flowgraphs
that do not contain one or more of the graphs shown below (Figure 5.8) as subgraphs are
structured:
1. Jumping into loops
2. Jumping out of loops
3. Branching into decisions
4. Branching out of decisions
 
 
Figure 5.8: Unstructured subgraphs.
If you block the paths of a graph fore or aft by a graph that has no paths, there won't be any paths.
REDUCTION PROCEDURE:
+ REDUCTION PROCEDURE ALGORITHM:
This section presents a reduction procedure for converting a flowgraph whose
links are labeled with names into a path expression that denotes the set of all
entry/exit paths in that flowgraph. The procedure is a node-by-node removal
algorithm.
The steps in the reduction algorithm are as follows:
1. Combine all serial links by multiplying their path expressions.
2. Combine all parallel links by adding their path expressions.
3. Remove all self-loops (from any node to itself) by replacing them with a
link of the form X*, where X is the path expression of the link in that loop.
STEPS 4 - 8 ARE IN THE ALGORITHM'S LOOP:
4. Select any node for removal other than the initial or final node. Replace it
with a set of equivalent links whose path expressions correspond to all the
ways you can form a product of the set of inlinks with the set of outlinks
of that node.
5. Combine any remaining serial links by multiplying their path expressions.
6. Combine all parallel links by adding their path expressions.
7. Remove all self-loops as in step 3.
8. Does the graph now consist of a single link between the entry node and the exit
node? If yes, then the path expression for that link is a path expression for
the original flowgraph; otherwise, return to step 4.
A flowgraph can have many equivalent path expressions between a given pair of
nodes; that is, there are many different ways to generate the set of all paths
between two nodes without affecting the content of that set.
The appearance of the path expression depends, in general, on the order in which
nodes are removed.
+ CROSS-TERM STEP (STEP 4):
The cross-term step is the fundamental step of the reduction algorithm.
It removes a node, thereby reducing the number of nodes by one.
Successive applications of this step eventually get you down to one entry and one
exit node. The following diagram shows the situation at an arbitrary node that has
been selected for removal:
 
From the above diagram, one can infer:
(a + b)(c + d + e) = ac + ad + ae + bc + bd + be
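The cross-term identity can be exercised in a few lines; path expressions are modeled as plain strings, and the link names are the ones in the identity:

```python
# A minimal sketch of the cross-term step (step 4): removing a node replaces
# its inlink/outlink pairs by every product of an inlink with an outlink.
def cross_term(inlinks, outlinks):
    """Return the equivalent links created by removing a node."""
    return [i + o for i in inlinks for o in outlinks]

# Removing a node with inlinks {a, b} and outlinks {c, d, e}:
print(cross_term(["a", "b"], ["c", "d", "e"]))
# yields ac, ad, ae, bc, bd, be, matching (a + b)(c + d + e)
```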
Department of CSE
 
A graph over a symmetric relation is called an undirected graph. The
matrix of an undirected graph is symmetric (a_ij = a_ji for all i, j).
Antisymmetric Relations
A relation R is antisymmetric if for every a and b, if aRb and bRa, then a = b; that is, they are the same element.
Examples of antisymmetric relations: is greater than or equal to, is a subset of, time.
Examples of nonantisymmetric relations: is connected to, can be reached from, is greater than, is a relative
of, is a friend of.
Equivalence Relations
An equivalence relation is a relation that satisfies the reflexive, transitive, and symmetric properties.
Equality is the most familiar example of an equivalence relation.
 
If a set of objects satisfies an equivalence relation, we say that they form an equivalence class over that
relation.
The importance of equivalence classes and relations is that any member of the equivalence class is, with
respect to the relation, equivalent to any other member of that class.
The idea behind partition-testing strategies, such as domain testing and path testing, is that we can
partition the input space into equivalence classes.
Testing any member of the equivalence class is as effective as testing them all.
Partial Ordering Relations
A partial ordering relation satisfies the reflexive, transitive, and antisymmetric properties.
Partially ordered graphs have several important properties: they are loop-free, there is at least one maximum
element, and there is at least one minimum element.
The Powers of a Matrix
* Each entry in the graph's matrix expresses a relation between the pair of nodes that corresponds to that
entry.
Squaring the matrix yields a new matrix that expresses the relation between each pair of nodes via one
intermediate node, under the assumption that the relation is transitive.
The square of the matrix represents all path segments two links long. The
third power represents all path segments three links long.
 
Matrix Powers and Products
Given a matrix whose entries are a_ij, the square of that matrix is obtained by replacing every entry with

c_ij = SUM over k of (a_ik a_kj)

More generally, given two matrices A and B with entries a_ik and b_kj, respectively, their product is a new
matrix C, whose entries are c_ij, where:

c_ij = SUM over k of (a_ik b_kj)
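As a sketch of these formulas, here is the boolean matrix product applied to a small adjacency matrix (the three-node chain is illustrative):

```python
# A small sketch of the "powers of a matrix" idea: squaring a boolean
# adjacency matrix marks every pair of nodes joined by a two-link path.
def bool_matmul(a, b):
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Graph: 0 -> 1 -> 2 (an illustrative three-node chain)
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
A2 = bool_matmul(A, A)   # A2[0][2] is True: the two-link path 0 -> 1 -> 2
print(A2)
```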
Partitioning Algorithm
Consider any graph over a transitive relation. The graph may have loops.
We would like to partition the graph by grouping nodes in such a way that every loop is contained within
one group or another.
Such a graph is partially ordered.
There are many uses for an algorithm that does that:
* We might want to embed the loops within a subroutine so as to have a resulting graph which is loop-free at
the top level.
* Many graphs with loops are easy to analyze if you know where to break the loops.
* While you and I can recognize loops, it's much harder to program a tool to do it unless you have a solid
algorithm on which to base the tool.
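The grouping asked for above, every loop wholly inside one group, is exactly the strongly connected components of the graph; condensing each component to one node leaves a loop-free, partially ordered graph. A sketch using Kosaraju's algorithm on an illustrative graph:

```python
# Group nodes so that every loop is contained within one group: the groups
# are the strongly connected components. Kosaraju's algorithm: DFS for
# finish order, then DFS over the reversed graph in reverse finish order.
def sccs(graph):
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for w in graph.get(v, []):
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in graph:
        if v not in seen:
            dfs(v)
    rev = {v: [] for v in graph}
    for v, ws in graph.items():
        for w in ws:
            rev.setdefault(w, []).append(v)
    comps, seen = [], set()
    for v in reversed(order):
        if v not in seen:
            stack, comp = [v], set()
            seen.add(v)
            while stack:
                x = stack.pop()
                comp.add(x)
                for w in rev.get(x, []):
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            comps.append(comp)
    return comps

# Nodes 2 and 3 form a loop; they end up in one group together.
g = {1: [2], 2: [3], 3: [2, 4], 4: []}
print(sccs(g))
```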
Node Reduction Algorithm (General)
The matrix powers usually tell us more than we want to know about most graphs.
In the context of testing, we are usually interested in establishing a relation between two nodes, typically
the entry and exit nodes.
In a debugging context it is unlikely that we would want to know the path expression between every node
and every other node.
The advantage of the matrix reduction method is that it is more methodical than the graphical method (the
node-by-node removal algorithm):
1. Select a node for removal; replace the node by equivalent links that bypass that node and add those
links to the links they parallel.
2. Combine the parallel terms and simplify as you can.
3. Observe loop terms and adjust the outlinks of every node that had a self-loop to account for the effect of
the loop.
4. The result is a matrix whose size has been reduced by 1. Continue until
only the two nodes of interest exist.
 
Time Versus Sequence
State graphs don't represent time; they represent sequence. A
transition might take microseconds or centuries.
A system could be in one state for milliseconds and another for years; the state graph would be the same because it
has no notion of time.
The finite-state machine model can, however, be elaborated to include notions of time in addition to sequence, as in
timed Petri nets.
 
* Software implementation
There is rarely a direct correspondence between programs and the behavior of a process described as a state graph.
The state graph represents the total behavior, consisting of the transport, the software, the executive, the status
returns, interrupts, and so on.
There is no simple correspondence between lines of code and states; the state table forms the basis.
Good State Graphs and Bad
What constitutes a good or a bad state graph is to some extent biased by the kinds of state graphs that are likely to be
used in a software test design context.
Here are some principles for judging:
o The total number of states is equal to the product of the possibilities of factors that
make up the state.
o For every state and input there is exactly one transition specified to exactly one,
possibly the same, state.
o For every transition there is one output action specified. The output could be trivial, but at
least one output does something sensible.
o For every state there is a sequence of inputs that will drive the system back to the same
state.
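The second principle, exactly one specified transition per (state, input) pair, is easy to check mechanically. A hedged sketch; the state table below is illustrative, not from the text:

```python
# Check the "good state graph" completeness rule: every (state, input) pair
# must have exactly one specified transition and output.
def check_complete(states, inputs, table):
    """Return the (state, input) pairs with no specified transition."""
    return [(s, i) for s in states for i in inputs if (s, i) not in table]

# table maps (state, input) -> (next_state, output); illustrative machine
table = {("A", 0): ("A", "x"), ("A", 1): ("B", "y"),
         ("B", 0): ("A", "y")}            # ("B", 1) is missing
print(check_complete(["A", "B"], [0, 1], table))
```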
Important graphs
(Figure: examples of improper state graphs.)
* States A and B are not reachable.
* Two transitions are specified for an input of 1 in state A.
State Bugs: Number of States
The number of states in a state graph is the number of states we choose to recognize or model.
(Figure: more improper state graphs.)
* A state that can never be left; the initial state can never be entered again.
* State C cannot be entered.
 
The state is directly or indirectly recorded as a combination of values of variables that appear in the database.
For example, the state could be composed of the value of a counter whose possible values range from 0 to 9,
combined with the settings of two bit flags, leading to a total of 2 x 2 x 10 = 40 states.
The number of states can be computed as follows:
o Identify all the component factors of the state.
o Identify all the allowable values for each factor.
o The number of states is the product of the number of allowable values of all the factors.
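The counting rule is a one-line product. The factors below match the text's example, a 0-9 counter and two bit flags:

```python
# The number of states is the product of the number of allowable values of
# each component factor of the state.
from math import prod

factor_values = {"counter": 10, "flag1": 2, "flag2": 2}
n_states = prod(factor_values.values())
print(n_states)   # 10 * 2 * 2 = 40
```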
Before you do anything else, before you consider one test case, discuss the number of states
you think there are with the number of states the programmer thinks there are.
There is no point in designing tests intended to check the system's behavior in various states if
there's no agreement on how many states there are.
* Impossible States
Sometimes some combinations of factors may appear to be impossible.
The discrepancy between the programmer's state count and the tester's state count is often due
to a difference of opinion concerning "impossible states".
A robust piece of software will not ignore impossible states but will recognize them and invoke an illogical-
condition handler when they appear to have occurred.
  
Equivalent States
Two states are equivalent if every sequence of inputs starting from one state produces exactly the same sequence
of outputs as when started from the other state. This notion can also be extended to sets of states.
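The definition can be checked by brute force for short input sequences. A sketch with an illustrative machine in which states B and C behave identically (for an n-state machine, sequences of length n - 1 suffice, so the small depth here is enough):

```python
# Two states are equivalent if every input sequence produces the same output
# sequence from both. Checked exhaustively up to a fixed sequence length.
from itertools import product

def outputs(table, state, seq):
    out = []
    for i in seq:
        state, o = table[(state, i)]
        out.append(o)
    return out

def equivalent(table, s1, s2, inputs, depth=4):
    return all(outputs(table, s1, seq) == outputs(table, s2, seq)
               for n in range(1, depth + 1)
               for seq in product(inputs, repeat=n))

# Illustrative machine: B and C respond identically to every input.
table = {("A", 0): ("B", "x"), ("A", 1): ("C", "x"),
         ("B", 0): ("A", "y"), ("B", 1): ("B", "z"),
         ("C", 0): ("A", "y"), ("C", 1): ("C", "z")}
print(equivalent(table, "B", "C", [0, 1]))   # True
```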
 
 
Merging of Equivalent States
* States or sets of states that have become dead.
* States or sets of states that have become unreachable.
Principles of State Testing
* The strategy for state testing is analogous to that used for path testing of flow graphs.
* Just as it's impractical to go through every possible path in a flow graph, it's impractical
to go through every path in a state graph.
* The notion of coverage is identical to that used for flow graphs.
* Even though more state testing is done as a single case in a grand tour, it's impractical to
do it that way, for several reasons:
o In the early phases of testing, you will never complete the grand tour because of bugs.
o Later, in maintenance, testing objectives are understood, and only a few of the states and
transitions have to be tested. A grand tour is a waste of time.
o There is so much history in a long test sequence and so much has happened that
verification is difficult.
Starting point of state testing:
* Define a set of covering input sequences that get back to the initial state when starting
from the initial state.
* For each step in each input sequence, define the expected next state, the expected
transition, and the expected output code.
* A set of tests, then, consists of three sets of sequences:
o Input sequences
o Corresponding transitions or next-state names
o Output sequences
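The three sequences, inputs, expected next states, and expected outputs, can be replayed mechanically against the state table. A sketch with an illustrative two-state machine:

```python
# Replay one state test: step through the inputs and compare each observed
# next state and output against the expected sequences.
def run_state_test(table, start, inputs, exp_states, exp_outputs):
    state = start
    for i, es, eo in zip(inputs, exp_states, exp_outputs):
        state, out = table[(state, i)]
        if state != es or out != eo:
            return False
    return True

table = {("IDLE", "go"): ("RUN", "started"),
         ("RUN", "stop"): ("IDLE", "stopped")}
# A covering sequence that returns to the initial state:
ok = run_state_test(table, "IDLE",
                    ["go", "stop"],
                    ["RUN", "IDLE"],
                    ["started", "stopped"])
print(ok)   # True
```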
Limitations and Extensions
* State-transition coverage in a state graph model does not guarantee complete testing.
* Chow defines a hierarchy of paths and methods for combining paths to produce covers of
state graphs.
* The simplest is called a "0-switch", which corresponds to testing each transition
individually.
* The next level consists of testing transition sequences consisting of two transitions,
called "1-switches".
* The maximum-length switch is the "n-1 switch", where n is the number of states.
Situations in which state testing is useful:
* Any processing where the output is based on the occurrence of one or more sequences of
events, such as detection of specified input sequences, sequential format validation,
parsing, and other situations in which the order of inputs is important.
* Most protocols between systems, between humans and machines, and between components of
a system.
* Device drivers, such as those for tapes and discs, that have complicated retry and recovery
procedures, if the action depends on the state.
* Whenever a feature is directly and explicitly implemented as one or more state-transition tables.
(The source here contains a garbled boolean-algebra derivation: the next-state functions N1 and N2 are simplified step by step using the numbered rules of boolean algebra, and the simplified results are then compared with the functions the specification calls for; the comparison shows that some of the implemented functions are wrong.)
Loops complicate things because we may have to solve a boolean equation to determine what
predicate value combinations lead to where.
 
 
KV CHARTS:
+ INTRODUCTION:
o If you had to deal with expressions in four, five, or six variables, you could get
bogged down in the algebra and make as many errors in designing test cases as
there are bugs in the routine you're testing.
o The Karnaugh-Veitch chart reduces boolean algebraic manipulations to graphical
trivia.
o Beyond six variables these diagrams get cumbersome and may not be effective.
* SINGLE VARIABLE:
o Figure 6.6 shows all the boolean functions of a single variable and their
equivalent representation as a KV chart.
o The charts show all possible truth values that the variable A can have.
o A "1" means the variable's value is "1" or TRUE. A "0" means that the variable's
value is 0 or FALSE.
o The entry in the box (0 or 1) specifies whether the function that the chart
represents is true or false for that value of the variable.
o We usually do not explicitly put in 0 entries but specify only the conditions under
which the function is true.
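The single-variable case is small enough to enumerate outright. A sketch of the idea behind Figure 6.6: each chart is just the function's value at A = 0 and A = 1:

```python
# The four boolean functions of one variable A are 0, A, not-A, and 1.
# Each "chart" here is the pair of truth values at A = 0 and A = 1.
functions = {
    "0":     lambda a: 0,
    "A":     lambda a: a,
    "not A": lambda a: 1 - a,
    "1":     lambda a: 1,
}
charts = {name: (f(0), f(1)) for name, f in functions.items()}
print(charts)   # {'0': (0, 0), 'A': (0, 1), 'not A': (1, 0), '1': (1, 1)}
```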
 
REGULAR EXPRESSIONS AND FLOW ANOMALY DETECTION:
+ THE PROBLEM
o The generic flow-anomaly detection problem (note: not just data-flow anomalies, but any
flow anomaly) is that of looking for a specific sequence of operations considering all possible
paths through a routine.
o Let the operations be SET and RESET, denoted by s and r respectively, and we want to
know if there is a SET followed immediately by a SET or a RESET followed immediately by
a RESET (an ss or an rr sequence).
o Some more application examples:
1. A file can be opened (o), closed (c), read (r), or written (w). If the file is read or
written to after it's been closed, the sequence is nonsensical. Therefore, cr and cw
are anomalous. Similarly, if the file is read before it's been written, just after
opening, we may have a bug. Therefore, or is also anomalous. Furthermore, oo and
cc, though not actual bugs, are a waste of time and therefore should also be
examined.
2. A tape transport can do a rewind (d), fast-forward (f), read (r), write (w), stop (p),
and skip (k). There are rules concerning the use of the transport; for example, you
cannot go from rewind to fast-forward without an intervening stop, or from rewind
or fast-forward to read or write without an intervening stop. The following
sequences are anomalous: df, dr, dw, fd, and fr. Does the flowgraph lead to
anomalous sequences on any path? If so, what sequences and under what
circumstances?
3. The data-flow anomalies discussed in Unit 4 require us to detect the dd, dk, kk,
and ku sequences. Are there paths with anomalous data flows?
 
 
+ THE METHOD:
o Annotate each link in the graph with the appropriate operator or the null operator 1.
o Simplify things to the extent possible, using the facts a + a = a and 1^2 = 1.
o You now have a regular expression that denotes all the possible sequences of operators in
that graph. You can now examine that regular expression for the sequences of interest.
o EXAMPLE: Let A, B, C be nonempty sets of character sequences whose smallest string
is at least one character long. Let T be a two-character string of characters. Then if T is a
substring of (i.e., if T appears within) AB*C, then T will appear in AB^2C. (HUANG's
Theorem)
As an example, let
A = pp, B = srr, C = rp, T = ss
The theorem states that ss will appear in pp(srr)*rp if it appears in pp(srr)^2rp.
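Huang's theorem is easy to spot-check for this example: if ss does not appear in AB^2C, it appears in no expansion of AB*C:

```python
# Spot-check of Huang's theorem for A = pp, B = srr, C = rp, T = ss:
# absence of T in AB^2C implies absence in every expansion of AB*C.
A, B, C, T = "pp", "srr", "rp", "ss"

def expansion(k):
    return A + B * k + C      # the string AB^k C

print(T in expansion(2))                          # False: no ss in AB^2C
print(any(T in expansion(k) for k in range(6)))   # still False for k = 0..5
```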
SOFTWARE TESTING METHODOLOGY
 
specified domain can. A contradictory domain specification means that at least two supposedly distinct domains
overlap.
* Ambiguous Domains: Ambiguous domains mean that the union of the domains is incomplete;
that is, there are missing domains or holes in the specified domains. Not specifying what
happens to points on the domain boundary is a common ambiguity.
* Overspecified Domains: A domain can be overloaded with so many conditions that the
result is a null domain. Another way to put it is to say that the domain's path is unachievable.
* Boundary Errors: Errors caused in and around the boundary of a domain. Examples:
boundary-closure bug; shifted, tilted, missing, or extra boundary.
* Closure Reversal: A common bug. The predicate is defined in terms of
>=. The programmer chooses to implement the logical complement and incorrectly uses <= for the new
predicate; i.e., x >= 0 is incorrectly negated as x <= 0, thereby shifting boundary values to adjacent domains.
* Faulty Logic: Compound predicates (especially) are subject to faulty logic transformations
and improper simplification. If the predicates define domain boundaries, all kinds of domain
bugs can result from faulty logic manipulations.
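The closure-reversal bug is worth seeing concretely. In this sketch the correct complement of x >= 0 is x < 0, but the buggy version uses x <= 0, so the boundary point is claimed by both domains:

```python
# Closure reversal: negating x >= 0 as x <= 0 instead of x < 0 makes the
# boundary point x = 0 belong to both domains.
def in_alpha(x):        # correct predicate
    return x >= 0

def in_beta_buggy(x):   # incorrect complement; should be x < 0
    return x <= 0

overlap = [x for x in (-1, 0, 1) if in_alpha(x) and in_beta_buggy(x)]
print(overlap)   # [0]: the boundary point is claimed by both domains
```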
 
* RESTRICTIONS TO DOMAIN TESTING: Domain testing has restrictions, as do other
testing techniques. Some of them include:
o Coincidental Correctness: Domain testing isn't good at finding bugs for which the
outcome is correct for the wrong reasons. If we're plagued by coincidental correctness
we may misjudge an incorrect boundary. Note that this implies a weakness for domain
testing when dealing with routines that have binary outcomes (i.e., TRUE/FALSE).
o Representative Outcome: Domain testing is an example of partition testing.
Partition-testing strategies divide the program's input space into domains such that all
inputs within a domain are equivalent (not equal, but equivalent) in the sense that any
input represents all inputs in that domain.
If the selected input is shown to be correct by a test, then processing is presumed
correct, and therefore all inputs within that domain are expected (perhaps unjustifiably)
to be correct. Most test techniques, functional or structural, fall under partition testing
and therefore make this representative-outcome assumption. For example, x^2 and 2^x are
equal for x = 2, but the functions are different. The functional differences between
adjacent domains are usually simple, such as x + 7 versus x + 9, rather than x^2 versus
2^x.
 
  
o Simple Domain Boundaries and Compound Predicates: Compound predicates in which each
part of the predicate specifies a different boundary are not a problem: for example, x
>= 0 AND x < 17 just specifies two domain boundaries by one compound predicate. As
an example of a compound predicate that specifies one boundary, consider: x = 0 AND y >= 7
AND y <= 14. This predicate specifies one boundary equation (x = 0) but alternates closure, putting it in one
or the other domain depending on whether y < 7 or y > 14. Treat compound predicates with respect because they're
more complicated than they seem.
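A sketch of the alternating-closure example: the predicate defines the single boundary x = 0, but whether a boundary point belongs to the domain flips with y:

```python
# One boundary equation (x = 0), closure alternating with y: points on the
# boundary belong to the domain only for 7 <= y <= 14.
def in_domain(x, y):
    return x == 0 and 7 <= y <= 14

print(in_domain(0, 10))   # True: this boundary point is inside
print(in_domain(0, 20))   # False: same boundary, closure reversed by y
```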
 
o Functional Homogeneity of Bugs: Whatever the bug is, it will not change the
functional form of the boundary predicate. For example, if the predicate is ax >= b, the
bug will be in the value of a or b, but it will not change the predicate to, say, ax^2 >= b.
o Linear Vector Space: Most papers on domain testing assume linear boundaries; not a
bad assumption, because in practice most boundary predicates are linear.
o Loop-Free Software: Loops are problematic for domain testing. The trouble with
loops is that each iteration can result in a different predicate expression (after
interpretation), which means a possible domain boundary change.
NICE AND UGLY DOMAINS:
+ NICE DOMAINS:
o Where do these domains come from?
Domains are and will be defined by an imperfect iterative process aimed at achieving (user, buyer, voter)
satisfaction.
o Implemented domains can't be incomplete or inconsistent. Every input will be
processed (rejection is a process), possibly forever. Inconsistent domains will be made
consistent.
o Conversely, specified domains can be incomplete and/or inconsistent. Incomplete in
this context means that there are input vectors for which no path is specified, and
inconsistent means that there are at least two contradictory specifications over the same
segment of the input space.
o Some important properties of nice domains are: linear, complete, systematic,
orthogonal, consistently closed, convex, and simply connected.
o To the extent that domains have these properties, domain testing is as easy as testing gets.
o The bug frequency is lower for nice domains than for ugly domains.
Figure 4.3: Nice Two-Dimensional Domains
 
o An invalid input (e.g., a value that is too big) is just a special processing case called "reject".
The input then passes to a hypothetical subroutine rather than on to calculations.
o In domain testing, we focus on the classification aspect of the routine rather than on the
calculations.
o Structural knowledge is not needed for this model; only a consistent, complete
specification of input values for each case.
o We can infer that for each case there must be at least one path to process that case.
* A DOMAIN IS A SET:
o An input domain is a set.
o If the source language supports set definitions (e.g., PASCAL set types and C
enumerated types), less testing is needed because the compiler does much of it for us.
o Domain testing does not work well with arbitrary discrete sets of data objects.
o The domain for a loop-free program corresponds to a set of numbers defined over the input
vector.
+ DOMAINS, PATHS AND PREDICATES:
o In domain testing, predicates are assumed to be interpreted in terms of input vector
variables.
o If domain testing is applied to structure, then predicate interpretation must be based on
actual paths through the routine; that is, based on the implemented control flow
graph.
o Conversely, if domain testing is applied to specifications, interpretation is based on a
specified data flow graph for the routine; but usually, as is the nature of specifications,
no interpretation is needed because the domains are specified directly.
o For every domain, there is at least one path through the routine.
There may be more than one path if the domain consists of disconnected parts or if the
domain is defined by the union of two or more domains.
o Domains are defined by their boundaries. Domain boundaries are also where most domain
bugs occur.
o For every boundary there is at least one predicate that specifies what numbers belong to the domain and
what numbers don't.
For example, in the statement IF x > 0 THEN ALPHA ELSE BETA, we know that numbers greater than
zero belong to the ALPHA processing domain(s) while zero and smaller numbers belong to the BETA
domain(s).
o A domain may have one or more boundaries, no matter how many variables define it.
For example, if the predicate is x^2 + y^2 < 16, the domain is the inside of a circle of
radius 4 about the origin. Similarly, we could define a spherical domain with one
boundary but in three variables.
o Domains are usually defined by many boundary segments and therefore by many
predicates; i.e., the set of interpreted predicates traversed on that path (i.e., the path's
predicate expression) defines the domain's boundaries.
 
 
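The two predicates above can be written directly as classifiers (the function names are illustrative):

```python
# The ALPHA/BETA predicate and the circular domain as executable classifiers.
def alpha_or_beta(x):
    return "ALPHA" if x > 0 else "BETA"

def in_circle(x, y):
    return x**2 + y**2 < 16     # inside the circle of radius 4

print(alpha_or_beta(1), alpha_or_beta(0))   # ALPHA BETA (0 falls in BETA)
print(in_circle(0, 0), in_circle(4, 0))     # True False (boundary is open)
```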
* DOMAIN CLOSURE:
o A domain boundary is closed with respect to a domain if the points on the boundary
belong to the domain.
o If the boundary points belong to some other domain, the boundary is said to be
open.
(Figure: the three kinds of splits, (a) Decision, (b) Biosis, (c) Mitosis, showing parent, alternate, and daughter transactions.)
Figure 3.2: Nodes with multiple outlinks
Mergers: Transaction-flow junction points are potentially as troublesome as transaction-flow
splits. There are three types of junctions: (1) ordinary junction, (2) absorption, (3) conjugation.
1. Ordinary Junction: An ordinary junction is similar to the junction in a control flow
graph. A transaction can arrive either on one link or the other. (See Figure 3.3(a).)
2. Absorption: In the absorption case, the predator transaction absorbs the prey transaction. The prey is
gone but the predator retains its identity. (See Figure 3.3(b).)
3. Conjugation: In the conjugation case, the two parent transactions merge to form a new
daughter. In keeping with the biological flavor, this case is called conjugation. (See Figure
3.3(c).)
Figure 3.3: Transaction Flow Junctions and Mergers
We have no problem with ordinary decisions and junctions. Births, absorptions, and conjugations
are as problematic for the software designer as they are for the software modeler and the test
designer; as a consequence, such points have more than their share of bugs. The common
problems are: lost daughters, wrongful deaths, and illegitimate births.
TRANSACTION FLOW TESTING TECHNIQUES:
* GET THE TRANSACTION FLOWS:
o Complicated systems that process a lot of different, complicated transactions
should have explicit representations of the transaction flows, or the equivalent.
o Transaction flows are like control flow graphs, and consequently we should
expect to have them in increasing levels of detail.
o The system's design documentation should contain an overview section that
details the main transaction flows.
o Detailed transaction flows are a mandatory prerequisite to the rational design of a
system's functional test.
* INSPECTIONS, REVIEWS AND WALKTHROUGHS:
o Transaction flows are a natural agenda for system reviews or inspections.
o In conducting the walkthroughs, you should:
* Discuss enough transaction types to account for 98%-99% of the
transactions the system is expected to process.
* Discuss paths through flows in functional rather than technical terms.
* Ask the designers to relate every flow to the specification and to show
how that transaction, directly or indirectly, follows from the requirements.
o Make transaction-flow testing the cornerstone of system functional testing just as
path testing is the cornerstone of unit testing.
o Select additional flow paths for loops, extreme values, and domain boundaries.
o Design more test cases to validate all births and deaths.
o Publish and distribute the selected test paths through the transaction flows as early
as possible so that they will exert the maximum beneficial effect on the project.
 
 
* PATH SELECTION:
o Select a set of covering paths (C1 + C2) using the analogous criteria you used for
structural path testing.
o Select a covering set of paths based on functionally sensible transactions, as you
would for control flow graphs.
o Try to find the most tortuous, longest, strangest path from the entry to the exit of
the transaction flow.
* PATH SENSITIZATION:
o Most of the normal paths are very easy to sensitize; 80%-95% transaction flow
coverage (C1 + C2) is usually easy to achieve.
o The remaining small percentage is often very difficult.
o Sensitization is the act of defining the transaction. If there are sensitization
problems on the easy paths, then bet on either a bug in transaction flows or a
design bug.
 
 
+ PATH INSTRUMENTATION:
o Instrumentation plays a bigger role in transaction-flow testing than in unit path
testing.
o The information on the path taken for a given transaction must be kept with that
transaction and can be recorded by a central transaction dispatcher or by the
individual processing modules.
o In some systems, such traces are provided by the operating system or a running
log.
BASICS OF DATA FLOW TESTING:
* DATA FLOW TESTING:
o Data-flow testing is the name given to a family of test strategies based on
selecting paths through the program's control flow in order to explore sequences
of events related to the status of data objects.
o For example, pick enough paths to assure that every data object has been
initialized prior to use, or that all defined objects have been used for something.
o Motivation: It is our belief that, just as one would not feel confident about a
program without executing every statement in it as part of some test, one
should not feel confident about a program without having seen the effect of using the
value produced by each and every computation.
parameters that are subroutine labels, or in the above example there could be a GOTO that
targeted label 100 but could never achieve a value that would send the program to that label.
Only a dynamic analysis (that is, an analysis based on the code's behavior while running,
which is to say, to all intents and purposes, testing) can determine whether code is reachable or
not, and therefore distinguish between the ideal structure we think we have and the actual, buggy
structure.
PATH TESTING CRITERIA:
Any testing strategy based on paths must at least both exercise every instruction and take
branches in all directions.
A set of tests that does this is not complete in an absolute sense, but it is complete in the sense
that anything less must leave something untested.
So we have explored three different testing criteria or strategies out of a potentially infinite
family of strategies.
i. Path Testing (P-infinity):
1. Execute all possible control flow paths through the program: typically, this is
restricted to all possible entry/exit paths through the program.
2. If we achieve this prescription, we are said to have achieved 100% path coverage.
This is the strongest criterion in the path-testing strategy family; it is generally
impossible to achieve.
ii. Statement Testing (P1):
1. Execute all statements in the program at least once under some test. If we do enough
tests to achieve this, we are said to have achieved 100% statement coverage.
2. An alternate equivalent characterization is to say that we have achieved 100% node
coverage. We denote this by C1.
3. This is the weakest criterion in the family: testing less than this for new software is
unconscionable (unprincipled or cannot be accepted) and should be criminalized.
iii. Branch Testing (P2):
1. Execute enough tests to assure that every branch alternative has been exercised at least
once under some test.
2. If we do enough tests to achieve this prescription, then we have achieved 100% branch
coverage.
3. An alternative characterization is to say that we have achieved 100% link coverage.
4. For structured software, branch testing and therefore branch coverage strictly includes
statement coverage.
5. We denote branch coverage by C2.
Commonsense and Strategies:
* Branch and statement coverage are accepted today as the minimum mandatory testing
requirement.
To learn domain testing, path testing, and logic-based testing so as to make the testing
process easier.
Course Outcomes
Know the basic concepts of software testing and its essentials,
Able to identify the various bugs and correcting them after knowing the
consequences of the bug,
Use of program’s control flow as a structural model is the comer stone of testing,
Performing functional testing using control flow and transaction flow graphs
UNIT-I
UNIT-1
Introduction: Purpose of testing, dichotomies, model for testing, consequences of bugs, taxonomy of bugs. Flow Graphs and Path Testing: basic concepts of path testing, predicates, path predicates and achievable paths, path sensitizing, path instrumentation, application of path testing.
What is testing?
Testing is the process of exercising or evaluating a system or system components by manual
or automated means to verify that it satisfies specified requirements.
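As a minimal illustration of this definition, the sketch below exercises a small component by automated means and verifies that its outputs satisfy a stated requirement. Both the function and the requirement are hypothetical, invented only for this example:

```python
def celsius_to_fahrenheit(c):
    """Hypothetical component under test."""
    return c * 9 / 5 + 32

# Assumed requirement: F = C * 9/5 + 32, so 0 degrees C must map to
# 32 degrees F and 100 degrees C to 212 degrees F.
assert celsius_to_fahrenheit(0) == 32
assert celsius_to_fahrenheit(100) == 212
print("component satisfies the specified requirement")
```

If either assertion fails, the component does not satisfy its specification; that is testing in the sense of the definition above, independent of how the failure is later debugged.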
The Purpose of Testing
Testing consumes at least half of the time and work required to produce a functional program.
MYTH: Good programmers write code without bugs. (It's wrong!!!)
History says that even well written programs still have 1-3 bugs per hundred statements.
Productivity and Quality in Software:
In the production of consumer goods and other products, every manufacturing stage is subjected to quality control and testing, from component to final stage.
If flaws are discovered at any stage, the product is either discarded or cycled back for rework and correction.
Productivity is measured by the sum of the costs of the material, the rework, the discarded components, and the cost of quality assurance and testing.
There is a tradeoff between quality assurance costs and manufacturing costs: If
sufficient time is not spent in quality assurance, the reject rate will be high and so will
be the net cost. If inspection is good and all errors are caught as they occur, inspection
costs will dominate, and again the net cost will suffer.
Testing and quality assurance costs for manufactured items can be as low as 2% in consumer products or as high as 80% in products such as spaceships, nuclear reactors and aircraft, where failures threaten life. The manufacturing cost of software, by contrast, is trivial.
The biggest part of software cost is the cost of bugs: the cost of detecting them, the cost of correcting them, the cost of designing tests that discover them, and the cost of running those tests.
For software, quality and productivity are indistinguishable because the cost of a software copy is trivial.
Testing and test design, as parts of quality assurance, should also focus on bug prevention. A prevented bug is better than a detected and corrected bug.
Phases in a tester's mental life:
Phases in a tester’s mental life can be categorized into the following 5 phases:
Phase 0: (Until 1956: Debugging Oriented) There is no difference between testing and debugging. Phase 0 thinking was the norm in the early days of software development, till testing emerged as a discipline.
Phase 1: (1957-1978: Demonstration Oriented) The purpose of testing here is to show that the software works. This view was highlighted during the late 1970s. It failed because the probability of showing that software works decreases as testing increases; i.e. the more you test, the more likely you are to find a bug.
Phase 2: (1979-1982: Destruction Oriented) The purpose of testing is to show that the software doesn't work. This also failed because under it the software would never get released: you will always find one bug or another. Also, a corrected bug may lead to another bug.
Phase 3: (1983-1987: Evaluation Oriented) The purpose of testing is not to prove anything but to reduce the perceived risk of the software not working to an acceptable value (Statistical Quality Control). The notion is that testing improves the product to the extent that testing catches bugs and to the extent that those bugs are fixed. The product is released when the confidence in it is high enough. (Note: this is applied to large software products with millions of lines of code and years of use.)
Phase 4: (1988-2000: Prevention Oriented) Testability is the factor considered here. One reason is to reduce the labor of testing. Another is to distinguish testable from non-testable code. Testable code has fewer bugs than code that is hard to test. Identifying the testing techniques to test the code is the main key here.
Test Design:
We know that software code must be designed and tested, but many appear to be unaware that tests themselves must be designed and tested. Tests should be properly designed and tested before being applied to the actual code.
Testing isn’t everything:
There are approaches other than testing to create better software. Methods other than testing
include:
Inspection Methods: Methods like walkthroughs, desk checking, formal inspections and code reading appear to be as effective as testing, but the bugs caught do not completely overlap.
Design Style: While designing the software itself, adopting stylistic objectives such as testability, openness and clarity can do much to prevent bugs.
Static Analysis Methods: These include formal analysis of source code during compilation. In earlier days, it was a routine job of the programmer to do that; now the compilers have taken over that job.
Languages: The source language can help reduce certain kinds of bugs. Programmers find new bugs while using new languages.
Development Methodologies and Development Environment: The development process, and the environment in which that methodology is embedded, can prevent many kinds of bugs.
 
Dichotomies:
 
Testing Versus Debugging:
Many people consider the two to be the same. The purpose of testing is to show that a program has bugs. The purpose of debugging is to find the error or misconception that led to the program's failure and to design and implement the program changes that correct the error.
Debugging usually follows testing, but they differ as to goals, methods and, most important, psychology. The table below shows a few important differences between testing and debugging.
 
Testing versus Debugging:
1. Testing starts with known conditions, uses predefined procedures and has predictable outcomes. Debugging starts from possibly unknown initial conditions, and the end cannot be predicted except statistically.
2. Testing can and should be planned, designed and scheduled. The procedure and duration of debugging cannot be so constrained.
3. Testing is a demonstration of error or apparent correctness. Debugging is a deductive process.
4. Testing proves a programmer's failure. Debugging is the programmer's vindication (justification).
5. Testing, as executed, should strive to be predictable, dull, constrained, rigid and inhuman. Debugging demands intuitive leaps, experimentation and freedom.
6. Much testing can be done without design knowledge. Debugging is impossible without detailed design knowledge.
7. Testing can often be done by an outsider. Debugging must be done by an insider.
8. Much of test execution and design can be automated. Automated debugging is still a dream.
Function versus Structure:
Tests can be designed from a functional or a structural point of view.
In functional testing, the program or system is treated as a black box. It is subjected to inputs, and its outputs are verified for conformance to specified behavior.
Flexible severity rather than absolutes:
Quality can be measured as a combination of factors, of which number of bugs
and their severity is only one component.
Many organizations have designed and used satisfactory, quantitative, quality
metrics.
Because bugs and their symptoms play a significant role in such metrics, as testing progresses you see the quality rise to a reasonable value which is deemed safe enough to ship the product.
The factors involved in bug severity are:
Correction Cost: Not so important, because catastrophic bugs may be corrected easily while small bugs may take major time to debug.
Context and Application Dependency: Severity depends on the context and the application in which the software is used.
Creating Culture Dependency: What's important depends on the creators of the software and their cultural aspirations. Test tool vendors are more sensitive about bugs in their software than games software vendors.
User Culture Dependency: Severity also depends on user culture. Naive users of PC software go crazy over bugs, whereas pros (experts) may just ignore them.
The Software Development Phase: Severity depends on the development phase. A bug gets more severe as it gets closer to field use, and the longer it has been around, the more severe it gets.
TAXONOMY OF BUGS:
There is no universally correct way to categorize bugs, and the taxonomy is not rigid.
A given bug can be put into one or another category depending on its history and the programmer's state of mind.
The major categories are: (1) Requirements, Features and Functionality Bugs (2)
Structural Bugs (3) Data Bugs (4) Coding Bugs (5) Interface, Integration and System
Bugs (6) Test and Test Design Bugs.
Requirements, Features and Functionality Bugs: Various categories in Requirements, Features and Functionality bugs include:
Requirements and Specifications Bugs:
Requirements, and the specifications developed from them, can be incomplete, ambiguous, or self-contradictory. They can be misunderstood or impossible to understand.
The specifications that don't have flaws in them may change while the design is in progress. Features are added, modified and deleted.
Requirements, especially as expressed in specifications, are a major source of expensive bugs.
The range is from a few percent to more than 50%, depending on the application and environment.
What hurts most about these bugs is that they are the earliest to invade the system and the last to leave.
Feature Bugs:
Specification problems usually create corresponding feature problems.
A feature can be wrong, missing, or superfluous (serving no useful purpose). A missing feature or case is easier to detect and correct. A wrong feature could have deep design implications.
Removing the features might complicate the software, consume more resources, and foster
more bugs.
Feature Interaction Bugs
Providing correct, clear, implementable and testable feature specifications is not enough.
Features usually come in groups of related features. The features of each group and the interactions of features within the group are usually well tested.
The problem is unpredictable interactions between feature groups, or even between individual features. For example, your telephone is provided with call holding and call forwarding. The interactions between these two features may have bugs.
Every application has its peculiar set of features and a much bigger set of unspecified feature interaction potentials, which result in feature interaction bugs.
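The telephone example can be sketched in code. In this hypothetical model (all class and method names are invented for illustration), each feature behaves correctly when tested alone, but their combination raises a question the specification never answered:

```python
class Phone:
    """Hypothetical phone line with two independently tested features."""
    def __init__(self):
        self.on_hold = False
        self.forward_to = None

    def hold(self):
        self.on_hold = True

    def set_forwarding(self, number):
        self.forward_to = number

    def incoming_call(self, caller):
        # Each branch was verified when its feature was tested in
        # isolation; the combination (held AND forwarding) was not.
        if self.forward_to is not None:
            return "forwarded to " + self.forward_to
        if self.on_hold:
            return "busy signal"
        return "ringing for " + caller

p = Phone()
p.hold()
p.set_forwarding("555-0100")
# Interaction question: should a held line still forward? This code
# silently forwards, which may contradict the holding feature's spec.
print(p.incoming_call("Alice"))  # prints: forwarded to 555-0100
```

Neither feature is wrong by itself; the bug, if any, lives in the unspecified interaction, which is exactly why such bugs escape per-feature testing.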
Specification and Feature Bug Remedies
Most feature bugs are rooted in human to human communication problems. One solution is
to use high-level, formal specification languages or systems.
Such languages and systems provide short-term support but, in the long run, do not solve the problem.
Short term Support: Specification languages facilitate formalization of requirements and
inconsistency and ambiguity analysis.
Long term Support: Assume that we have a great specification language that can be used to create unambiguous, complete specifications with unambiguous, complete tests and consistent test criteria.
The specification problem has been shifted to a higher level but not eliminated.
Testing Techniques for Functional Bugs: Most functional test techniques, that is, those techniques based on a behavioral description of the software, such as transaction flow testing, syntax testing, domain testing, logic testing and state testing, are useful in testing functional bugs.
Structural bugs: Various categories in Structural bugs include:
Control and Sequence Bugs:
Control and sequence bugs include paths left out, unreachable code, improper nesting of
loops, loop-back or loop termination criteria incorrect, missing process steps, duplicated
processing, unnecessary processing, rampaging GOTOs, ill-conceived (not properly planned) switches, spaghetti code and, worst of all, pachinko code.
One reason for the relative rarity of control flow bugs is that this area is amenable (supportive) to theoretical treatment.
Most of the control flow bugs are easily tested and caught in unit testing.
Another point about control flow bugs is that old code, especially assembly language (ALP) and COBOL code, is dominated by them.
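Two of the control and sequence bugs named above, an incorrect loop-termination criterion and unreachable code, can be seen together in a short hypothetical sketch (the function is invented for illustration):

```python
def find_index(items, target):
    """Intended behavior: return the index of target, or -1 if absent."""
    i = 0
    while i <= len(items):        # Bug: the termination criterion should
        if items[i] == target:    # be i < len(items); when the target is
            return i              # absent, items[len(items)] raises
        i += 1                    # IndexError instead of falling through.
    return -1                     # Consequently this line is unreachable.
```

A unit test that searches for a missing element, e.g. `find_index([3, 1, 4], 9)`, raises IndexError instead of returning -1, exposing both the faulty loop bound and the fact that the `return -1` path can never execute.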