DFT Key R20
***
Code No: A47D3 R 20
B V RAJU INSTITUTE OF TECHNOLOGY, NARSAPUR
(UGC - AUTONOMOUS)
IV B.Tech I Semester Regular Examinations, Nov 2023
DESIGN FOR TESTABILITY
(Electronics and Communication Engineering)
SCHEME OF EVALUATION
e) BIST concepts
A built-in self-test (BIST) or built-in test (BIT) is a mechanism that permits a machine to test itself.
Engineers design BISTs to meet requirements such as:
1. high reliability
2. lower repair cycle times
or constraints such as:
1. limited technician accessibility
2. cost of testing during manufacture
The main purpose [1] of BIST is to reduce the complexity of testing, and thereby decrease the cost and reduce
reliance upon external (pattern-programmed) test equipment. BIST reduces cost in two ways:
1. reduces test-cycle duration
2. reduces the complexity of the test/probe setup, by reducing the number of I/O signals that
must be driven/examined under tester control.
PART B
2. A stuck-at fault is a particular fault model used by fault simulators and automatic test pattern generation
(ATPG) tools to mimic a manufacturing defect within an integrated circuit. Individual signals and pins are
assumed to be stuck at Logical '1', '0' and 'X'. For example, an input is tied to a logical 1 state during test
generation to assure that a manufacturing defect with that type of behavior can be found with a specific test
pattern. Likewise the input could be tied to a logical 0 to model the behavior of a defective circuit that
cannot switch its output pin. Not all faults can be analyzed using the stuck-at fault model. Compensation
for static hazards, namely branching signals, can render a circuit untestable using this model. Also,
redundant circuits cannot be tested using this model, since by design there is no change in any output as a
result of a single fault.
Single stuck-at line
Single stuck line is a fault model used in digital circuits. It is used for post manufacturing testing, not
design testing. The model assumes one line or node in the digital circuit is stuck at logic high or logic low.
When a line is stuck it is called a fault.
Digital circuits can be divided into:
1. Gate level or combinational circuits which contain no storage (latches and/or flip flops) but
only gates like NAND, OR, XOR, etc.
2. Sequential circuits which contain storage.
This fault model applies to gate level circuits, or a block of a sequential circuit which can be separated from
the storage elements. Ideally a gate-level circuit would be completely tested by applying all possible inputs
and checking that they gave the right outputs, but this is completely impractical: an adder to add two 32-bit
numbers would require 2^64 ≈ 1.8×10^19 tests, taking 58 years at 0.1 ns/test. The stuck-at fault model assumes
that only one input on one gate will be faulty at a time; if more are faulty, a test that can detect any single
fault should easily find the multiple faults.
To use this fault model, each input pin on each gate, in turn, is assumed to be grounded, and a test vector is
developed to indicate the circuit is faulty. The test vector is a collection of bits to apply to the circuit's
inputs, and a collection of bits expected at the circuit's output. If the gate pin under consideration is
grounded, and this test vector is applied to the circuit, at least one of the output bits will not agree with the
corresponding output bit in the test vector. After obtaining the test vectors for grounded pins, each pin is
connected in turn to a logic one and another set of test vectors is used to find faults occurring under these
conditions. Each of these faults is called a single stuck-at-0 (s-a-0) or a single stuck-at-1 (s-a-1) fault,
respectively.
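To make the notion of a detecting vector concrete, here is a minimal Python sketch (not part of the original key; the circuit and node names are illustrative) that injects a single stuck-at fault and checks whether a given vector detects it:

from typing import Optional, Tuple

def circuit(a: int, b: int, c: int, stuck: Optional[Tuple[str, int]] = None) -> int:
    """out = (a AND b) OR c. `stuck` optionally forces one node,
    e.g. ("a", 0) models input 'a' stuck-at-0."""
    nodes = {"a": a, "b": b, "c": c}
    if stuck and stuck[0] in nodes:
        nodes[stuck[0]] = stuck[1]
    n = nodes["a"] & nodes["b"]          # internal node
    if stuck and stuck[0] == "n":
        n = stuck[1]
    return n | nodes["c"]

def detects(vector, fault) -> bool:
    """A vector detects a fault iff good and faulty outputs differ."""
    return circuit(*vector) != circuit(*vector, stuck=fault)

# {a, b, c} = {1, 1, 0} detects a stuck-at-0 on input 'a':
print(detects((1, 1, 0), ("a", 0)))   # True
# but {0, 1, 0} does not, since the good output is already 0:
print(detects((0, 1, 0), ("a", 0)))   # False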
This model worked so well for transistor-transistor logic (TTL), which was the logic of choice during the
1970s and 1980s, that manufacturers advertised how well they tested their circuits by a number called
"stuck-at fault coverage", which represented the percentage of all possible stuck-at faults that their testing
process could find. While the same testing model works moderately well for CMOS, it is not able to detect
all possible CMOS faults. This is because CMOS may experience a failure mode known as a stuck-
open fault, which cannot be reliably detected with one test vector and requires that two vectors be applied
sequentially. The model also fails to detect bridging faults between adjacent signal lines, occurring in pins
that drive bus connections and array structures. Nevertheless, the concept of single stuck-at faults is widely
used, and with some additional tests has allowed industry to ship an acceptably low number of bad circuits.
The testing based on this model is aided by several things:
1. A test developed for a single stuck-at fault often finds a large number of other stuck-at
faults.
2. A series of tests for stuck-at faults will often, purely by serendipity, find a large number of
other faults, such as stuck-open faults. This is sometimes called "windfall" fault coverage.
3. Another type of testing called IDDQ testing measures the way the power supply current of
a CMOS integrated circuit changes when a small number of slowly changing test vectors
are applied. Since CMOS draws a very low current when its inputs are static, any increase
in that current indicates a potential problem.
Multiple Faults
• The simultaneous presence of single faults, usually of the same type.
• Usually not considered in practice:
▪ We indicated earlier that there are 3^n − 1 possible multiple stuck-faults (MSFs) in a
circuit with n SSF sites.
▪ Tests for SSFs cover a high percentage of MSFs.
• The three faults, SA1-3, are redundant because the presence of any one of them has no
effect on the logic function.
• As we will see, the vectors 00, 01, and 10 detect all other single SA faults.
• The test set T = {1111, 0111, 1110, 1001, 1010, 0101} detects all SSFs.
o The only test in T that detects both f = b SA1 and g = c SA1 is 1001.
o However, the circular masking of f and g under T prevents the MSF {f, g}
from being detected.
3. Explain the methods of equivalence fault collapsing and dominant fault collapsing with suitable example
Fault collapsing
There are two main ways for collapsing fault sets into smaller sets.
Equivalence collapsing
It is possible that two or more faults produce the same faulty behavior for all input patterns. These faults are
called equivalent faults. Any single fault from the set of equivalent faults can represent the whole set. In
this case, far fewer than k×n fault tests are required for a circuit with n signal lines. Removing equivalent
faults from the entire set of faults is called fault collapsing. Fault collapsing significantly decreases the
number of faults to check.
In the example diagram, the red faults are equivalent to the faults pointed to by the arrows, so those
red faults can be removed from the fault list. In this case, the fault collapse ratio is 12/20.
Worked example: fault collapsing for a 3-input NAND gate
The input and output s-a-0 & s-a-1 faults are indicated in the gate schematic. Let’s first generate a test
vector for a→s-a-0 at the input. We will use the path sensitization approach. To excite the fault, we need to
force logic-1 at the input ‘a’. A discrepancy ‘D’ will be created at ‘a’. Now, to propagate the fault (or ‘D’)
to the output (or make it observable), we need all other inputs (‘b’ and ‘c’) as logic-1. So, the test vector
for checking a→s-a-0 would be {a, b, c} = {1, 1, 1}. Now, repeat similar steps for ‘b’ and ‘c’ too.
Did you observe? For checking the s-a-0 fault at ‘b’ and ‘c’, we again need to apply the same test vector. {1,
1, 1} is a very good test vector as it can check three faults at once. Hence, we can conclude that the
s-a-0 faults at the inputs of a NAND gate of any size are equivalent.
Can we do it for s-a-1 faults too? Let’s make a table consisting of all input and output stuck-at faults and
their test pattern.
Fault Type    Fault Location    Test Vector {a, b, c}
s-a-0         a                 {1, 1, 1}
s-a-0         b                 {1, 1, 1}
s-a-0         c                 {1, 1, 1}
s-a-1         a                 {0, 1, 1}
s-a-1         b                 {1, 0, 1}
s-a-1         c                 {1, 1, 0}
s-a-1         out               {1, 1, 1}
Unfortunately, we can’t do this for the s-a-1 faults, because they all need different test vectors.
For out→s-a-0, we just need to excite the fault; there is no need to propagate it, as it is already at the output
and always observable. To excite the s-a-0 fault, we need to force out→1, which can be done by applying
logic-0 at any of the inputs. Hence, there are multiple test vectors.
For detection of out→s-a-1, it is required to force out→0. This can only be done using the test vector {a, b,
c} = {1, 1, 1}. Did you notice something special? Yes, out→s-a-1 is equivalent to s-a-0 at a, b & c.
Fault Type    Fault Location    Test Vector {a, b, c}
s-a-0         a, b, c           {1, 1, 1}
s-a-0         out               {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-1         a                 {0, 1, 1}
s-a-1         b                 {1, 0, 1}
s-a-1         c                 {1, 1, 0}
s-a-1         out               {1, 1, 1}
In general, an m-input gate has an equivalence set of (m+1) faults. We have successfully reduced
the test pattern set to a great extent. It can be concluded that the s-a-0 faults at the inputs and the s-a-1
fault at the output of a NAND gate of any size are always equivalent.
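This equivalence can be checked mechanically. The following Python sketch (illustrative, not from the original answer) builds the faulty truth table for each of the four faults of a 3-input NAND and confirms they are identical:

from itertools import product

def nand3(a, b, c):
    return 1 - (a & b & c)

def faulty_table(node, value):
    """Truth table of a 3-input NAND with `node` stuck at `value`.
    `node` is one of 'a', 'b', 'c', 'out'."""
    table = []
    for a, b, c in product((0, 1), repeat=3):
        v = {"a": a, "b": b, "c": c}
        if node in v:
            v[node] = value                 # force the stuck input
        out = nand3(v["a"], v["b"], v["c"])
        if node == "out":
            out = value                     # force the stuck output
        table.append(out)
    return tuple(table)

# Two faults are equivalent iff their faulty truth tables are identical:
faults = [("a", 0), ("b", 0), ("c", 0), ("out", 1)]
tables = {f: faulty_table(*f) for f in faults}
print(len(set(tables.values())) == 1)   # True: all four faults are equivalent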
We can collapse the equivalent faults in the gate schematic and keep only one. As they are equivalent,
testing only one of them is sufficient. This is a visual representation of fault reduction. By convention, we
usually keep the input faults; we will eventually come to know the reason for this later. Below are the
reduced table and schematic diagram.
Fault Type    Fault Location    Test Vector {a, b, c}
s-a-0         a                 {1, 1, 1}
s-a-0         out               {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-1         a                 {0, 1, 1}
s-a-1         b                 {1, 0, 1}
s-a-1         c                 {1, 1, 0}
The equivalent collapsed faults are crossed out in blue. These equivalent faults can be represented in a special
set known as an equivalence set.
Equivalence Set = {a-sa0, b-sa0, c-sa0, out-sa1}
This equivalence set consists of four equivalent faults. We can collapse any three of them and retain one.
In the above figure, we have retained ‘a’. We could equally retain any of the three input equivalents (a-sa0,
b-sa0, or c-sa0). The best practice is to always collapse the output first.
Fault Dominance
Fault Type    Fault Location    Test Vector {a, b, c}
s-a-0         a                 {1, 1, 1}
s-a-0         out               {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-1         a                 {0, 1, 1}
s-a-1         b                 {1, 0, 1}
s-a-1         c                 {1, 1, 0}
Notice that there can be several test patterns for out→s-a-0 in the table. Hence, the test constraints on out→s-
a-0 are very loose. This is an advantage, as its test pattern can intersect with the test patterns of other faults,
further reducing the test vector count.
From the Venn diagram, we can see that out→s-a-0 is dominant to a→s-a-1, b→s-a-1, and c→s-a-1.
Note that these aren’t equivalent; we can only choose one test pattern at a time. For the dominant fault
(out→s-a-0), only one test vector is required out of all the possible combinations (blue, red, or green).
Hence, if we test any of the blue, red, or green vectors (corresponding to a→s-a-1, b→s-a-1, and c→s-
a-1, respectively), the dominant fault gets tested automatically.
Therefore, we don’t test dominant faults separately, as they are automatically tested by the test vectors
of the faults they dominate.
Now we have eliminated another test vector. The resulting table and schematic are shown below.
The dominant collapsed fault is crossed in orange. The dominance relationship can be represented in
another special set known as a dominance set.
Dominance Set = {out-sa0: a-sa1, b-sa1, c-sa1}
The entry before the colon in the set represents the dominant fault, while those after the colon represent
the faults it dominates. In a dominance relation, we can only collapse the dominant entry (before the colon).
In this case, three test vectors or test patterns {a, b, c} = {(0, 1, 1), (1, 0, 1), (1, 1, 0)} are required to test
the above dominance set.
We did a pretty good job in collapsing the faults, straightaway reducing eight faults to four test vectors. The
test pattern size is now reduced by 50%. The final test pattern set required to test all the faults is {a, b, c} =
{(1, 1, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)}.
Finally, we conclude that, for a NAND gate with any number of inputs, the input s-a-0 faults and the output
s-a-1 fault are equivalent, while the output s-a-0 fault is dominant to the input s-a-1 faults.
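The dominance claim can be verified the same way. The Python sketch below (again illustrative) computes the set of detecting vectors for each fault and checks that every test for an input s-a-1 fault also detects out s-a-0:

from itertools import product

def nand3(a, b, c):
    return 1 - (a & b & c)

def detecting_tests(node, value):
    """Set of input vectors whose output differs under the fault."""
    tests = set()
    for vec in product((0, 1), repeat=3):
        good = nand3(*vec)
        v = dict(zip("abc", vec))
        if node in v:
            v[node] = value
        bad = value if node == "out" else nand3(v["a"], v["b"], v["c"])
        if good != bad:
            tests.add(vec)
    return tests

# out-sa0 dominates a-sa1 iff every test for a-sa1 also detects out-sa0:
t_out_sa0 = detecting_tests("out", 0)
for inp in "abc":
    t = detecting_tests(inp, 1)
    print(inp, t, t <= t_out_sa0)   # each prints True: out-sa0 is dominant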
Fault Collapsing for common gates
In the previous section, we analyzed the equivalence and dominance relations for the 3-input NAND gate.
Similarly, analysis can be done for common 2-input logic gates too. I recommend you apply a similar path
sensitization method with a tabular approach to figure out the fault relations of these gates and compare
your results with the following table.
For the OR gate:

Fault Type    Fault Location    Test Vector {a, b}
s-a-0         a                 {1, 0}
s-a-0         b                 {0, 1}
s-a-0         out               {x, 1} or {1, x}
s-a-1         a                 {0, 0}
s-a-1         b                 {0, 0}
s-a-1         out               {0, 0}

Equivalence set: {a-sa1, b-sa1, out-sa1}
Dominance set: {out-sa0: a-sa0, b-sa0}

For the buffer: equivalence sets {in-sa0, out-sa0} and {in-sa1, out-sa1}; dominance set: null.
For the NOT gate: equivalence sets {in-sa1, out-sa0} and {in-sa0, out-sa1}; dominance set: null.
The corresponding sets for the NOR, AND, and NAND gates can be derived in the same way.
Don’t worry; you won’t need to memorize the whole table, just memorize the first row (i.e. the equivalence
and dominance sets for the OR gate). This table follows the principle of Boolean duality. Hence, to find the
equivalence and dominance sets for the AND gate, just swap sa0 and sa1 in both sets.
Initially, there are 30 stuck-at faults (2 × no. of wires = 2 × 15) in the gate-level schematic. For fault
collapsing, we move backwards, evaluating each gate in each level in parallel, from the primary output(s) to
the primary inputs. Let’s highlight the highest gate level first.
There is only one AND gate. Equivalent faults are: {a-sa0, b-sa0, out-sa0}. We need to test only one fault
out of these three. Hence, we collapse two and retain one. Our standard practice is to collapse the output
faults first and retain one of the inputs. In this case ‘out‘ is collapsed first. Now, either ‘a‘ or ‘b‘ can be
collapsed as per your choice. We collapse ‘b‘ and retain ‘a‘. Equivalent fault collapsing is shown in blue.
Dominance relation for AND gate is: {out-sa1: a-sa1, b-sa1}. Here, out-sa1 is dominant to the other two.
Unlike equivalence, in dominant fault collapsing we don’t have any choice, but to collapse the dominant
one only. Dominant fault collapsing is shown in orange.
Three faults have been collapsed so far, and three remain. The remaining three faults are considered for
collapsing (if possible) at the next lower gate level.
The remaining faults are brought closer to the lower-level gates (just for simplicity; the stuck-at faults can
be represented anywhere on their respective wires). In this level, there are two OR gates. The fault collapsing
technique is followed as in step 1, using the equivalence and dominance relations (from the table).
For the OR gate, all s-a-1 faults are equivalent, while the output s-a-0 fault is dominant. This way, outputs
are collapsed and inputs retained.
The same steps are repeated in step 3. Note that there are three AND gates but also one NAND gate. Make sure
you apply the respective dominance and equivalence relations for the respective gates. Even if the same faults
are collapsed for both gate types, there is a slight difference in which faults are dominant and which are
equivalent. This can be clearly seen in the diagram in the orange and blue cross notations.
Yay! We have successfully collapsed 18 faults out of total 30 faults. Now, we only need to generate a test
pattern for the remaining 12 faults. This is insane!
The collapse ratio is a benchmarking parameter for DFT CAD tools and is defined as the number of remaining
faults divided by the total number of faults.
Collapse Ratio = 12/30 = 0.4
The smaller the collapse ratio, the fewer faults remain and hence the better the CAD tool.
Also, notice that the output faults, as well as all the internal stuck-at faults, are collapsed. We now only need
to test the primary inputs. This leads to the following theorem.
A test set that detects all single stuck-at faults on all primary inputs of a fanout free circuit must detect all
single stuck-at faults in that circuit.
The term ‘fanout free’ is used for a reason; you will know about this shortly.
Fanout Circuit
Q2. Reduce the stuck-at faults in the following circuit for minimum collapse ratio, using equivalence
and dominance relations.
• Singular cube
• Propagation D-cube (PDC)
• Primitive D-cube of a fault (PDCF)
The D algebra
The D-algebra is a 5-valued logic consisting of the values 1, 0, D, D', and X. The D stands for Discrepancy, as
discussed in the path sensitization method.
The following algebraic rules apply in the D algorithm for intersection:
0 ∩ 0 = 0 ∩ x = x ∩ 0 = 0
1 ∩ 1 = 1 ∩ x = x ∩ 1 = 1
x ∩ x = x
1 ∩ 0 = D
0 ∩ 1 = D’
Note that intersection in D-algebra is quite different from the intersection we are familiar with from sets and
relations in mathematics. It does not follow properties like commutativity or associativity.
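The rules above translate directly into a lookup table. A minimal Python sketch (illustrative; combinations the rules do not define, such as D ∩ 0, are returned as None):

RULES = {
    ("0", "0"): "0", ("0", "x"): "0", ("x", "0"): "0",
    ("1", "1"): "1", ("1", "x"): "1", ("x", "1"): "1",
    ("x", "x"): "x",
    ("1", "0"): "D", ("0", "1"): "D'",
}

def intersect(p, q):
    """D-algebra intersection of two single values (None if undefined)."""
    return RULES.get((p, q))

def cube_intersect(c1, c2):
    """Component-wise intersection of two cubes (lists of values)."""
    return [intersect(p, q) for p, q in zip(c1, c2)]

# The intersection is not commutative:
print(intersect("1", "0"), intersect("0", "1"))           # D D'
print(cube_intersect(["1", "1", "1"], ["x", "x", "0"]))   # ['1', '1', 'D']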
Cubes
Singular Cover
The Singular Cover (SC) of a logic gate is a compact form of its truth table, obtained using don’t cares
(x). The following reduced truth table is the singular cover of an AND gate.
We know that, for an AND gate, the output is logic-1 only when both of its inputs are high. At the same
time, the output is logic-0 in all the other cases, where any of its inputs is low. The output of the AND gate
is low in most cases. Hence, specifying a separate row for every possible input combination becomes
redundant. Therefore we merge the rows of the truth table and define it using don’t cares in a more
condensed form.
a   b   out
0   x   0
x   0   0
1   1   1
Each row of a singular cover is termed a singular cube. The above singular cover of the AND gate has
three singular cubes.
Primitive D-cube of a Fault
D-cubes represent the input-output behavior of the good and faulty circuits.
A Primitive D-cube of a Fault (PDCF) is used to specify the minimum input conditions required at the inputs
of a gate to produce an error at its output. This is used for fault activation. A PDCF can be derived from the
intersection of the singular covers of the gate in faulty and non-faulty conditions, taking cubes having
different outputs.
Example:
Here is an AND gate with an s-a-0 fault at the output. To generate the PDCF, we first draw the truth tables of
the faulty and non-faulty circuits. Next, we derive the singular cover for the faulty as well as the non-faulty
circuit.
For the faulty AND gate, the output is always stuck at 0, independent of its inputs; hence its singular cover
has only one row, with inputs (a, b) as don’t cares.
Now, we intersect the singular cubes of the non-faulty and faulty circuits. For the PDCF we need to intersect
only those cubes for which the output differs between the non-faulty and faulty circuits. Since the faulty
circuit has only one singular cube, {x, x, 0}, we need to intersect it with a singular cube of the non-
faulty circuit having the opposite output value (i.e. logic-1). The singular cube {1, 1, 1} perfectly fits this
criterion.
{1, 1, 1} ∩ {x, x, 0} = {1 ∩ x, 1 ∩ x, 1 ∩ 0} = {1, 1, D}
Finally, PDCF of this faulty AND gate is {a, b, out} = {1, 1, D}. This is similar to fault excitation we did
in the path sensitization method, albeit in a more structured approach.
Here, D is interpreted as being logic-1 if the circuit is fault-free and logic-0 if the fault is present. Notice
the order in which the intersection is done! Always intersect as ⇒ singular-cube (non-faulty circuit) ∩
singular-cube (faulty circuit). The reverse order yields different results, since the intersection is not
commutative, and would break the interpretation we are trying to adopt.
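The same derivation can be automated. A small Python sketch (illustrative) that intersects the good-circuit singular cover with the faulty cube, keeping only cube pairs with opposite outputs:

# D-algebra intersection rules, as listed earlier in this answer.
RULES = {("0", "0"): "0", ("0", "x"): "0", ("x", "0"): "0",
         ("1", "1"): "1", ("1", "x"): "1", ("x", "1"): "1",
         ("x", "x"): "x", ("1", "0"): "D", ("0", "1"): "D'"}

good_cover   = [["0", "x", "0"], ["x", "0", "0"], ["1", "1", "1"]]  # (a, b, out)
faulty_cover = [["x", "x", "0"]]                                    # out stuck-at-0

pdcf = []
for g in good_cover:
    for f in faulty_cover:
        if g[-1] != f[-1]:                       # opposite output values only
            # order matters: good cube first, faulty cube second
            pdcf.append([RULES[(p, q)] for p, q in zip(g, f)])
print(pdcf)   # [['1', '1', 'D']]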
Propagation D-cube
Propagation D-cubes (PDCs) of a gate cause the output of the gate to depend upon the minimum
number of its specified inputs. A PDC is used to propagate D or D’ from a specified input to the
output. Propagation D-cubes can be derived from the intersection of singular cubes of the gate having
opposite output values.
Example:
Here’s the truth table of an OR gate. To generate the PDC, we find the singular cover for the OR gate.
Now, we intersect the singular cubes of every possible combination with opposite output
values. Intersecting the singular cubes of row 2 with row 1, and row 3 with row 1, serves the purpose.
{1, x, 1} ∩ {0, 0, 0} = {1 ∩ 0, x ∩ 0, 1 ∩ 0} = {D, 0, D}
{x, 1, 1} ∩ {0, 0, 0} = {x ∩ 0, 1 ∩ 0, 1 ∩ 0} = {0, D, D}
a b out
D 0 D
0 D D
In this case, the intersection doesn’t need any specific order. If we intersect in the other order, we obtain
the PDCs {D’, 0, D’} and {0, D’, D’}.
Moreover, there is another option in which we intersect {1, 1, 1} with {0, 0, 0} of the truth table. This
yields another possible PDC as {D, D, D} or {D', D', D'}. The following are the complete PDCs of an OR
gate.
a b out
0 D D
D 0 D
D D D
0 D’ D’
D’ 0 D’
D’ D’ D’
This is very similar to the forward propagation we did in the path sensitization method.
But then why are we learning it? These D-cubes are far more inconvenient and tiresome for us than
the path sensitization method. The answer is that human beings can solve methods like path sensitization
intuitively; a computer doesn’t have the necessary intelligence (yet). The D algorithm takes the creativity
out of test generation and allows a computer to do it.
D-cubes of common gates
The D-cubes for the AND gate are shown below; similar tables can be constructed for the OR, NAND,
NOR, XOR, XNOR, and NOT gates.

Singular Cover
a    b    out
0    x    0
x    0    0
1    1    1

Primitive D-cubes of faults
Fault       a    b    out
out→sa0     1    1    D
out→sa1     0    x    D’
out→sa1     x    0    D’

Propagation D-cubes
a       b       out
1       D/D’    D/D’
D/D’    1       D/D’
5. Explain the different Testable combinational logic circuit design with examples
Combinational Logic Testing:
For testing combinational logic circuitry, a set of test patterns is generated which detects all possible fault
conditions. The first approach to testing an N-input circuit is to generate all 2^N possible input signal
combinations by means of, say, an N-bit counter (controllability) and observe the outputs for checking
(observability). This is called exhaustive testing and is very effective, but it is only practicable where N is
relatively small. Many of the patterns generated during exhaustive testing may not occur during the application
of the circuit. Thus, it is more productive to enumerate the possible faults and then generate a set of
appropriate test vectors. The basic idea is to select a path from the site of the possible fault, through a
sequence of gates, leading to an output of the logic circuitry under test. The figure below shows the
combinational logic testing block schematic.
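A minimal Python sketch of exhaustive testing (the functions golden and uut are hypothetical stand-ins for the reference design and the unit under test):

from itertools import product

def golden(a, b, c):            # intended function
    return (a & b) ^ c

def uut(a, b, c):               # unit under test, here with a defect:
    return (a & 0) ^ c          # input 'b' behaves as stuck-at-0

# Drive all 2^N input combinations (the role of the N-bit counter)
# and compare the unit under test against the golden reference.
N = 3
failing = [v for v in product((0, 1), repeat=N) if uut(*v) != golden(*v)]
print(failing)   # [(1, 1, 0), (1, 1, 1)] -- the defect is exposed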
In the test mode, the scan-in signal is clocked into the scan path, and the output of the last stage latch is
scanned out. In the normal mode, the scan-in path is disabled and the circuit functions as a sequential circuit.
The testing sequence is as follows:
Step 1: Set the mode to test and let the latches accept data from the scan-in input.
Step 2: Verify the scan path by shifting test data in and out.
Step 3: Scan in (shift in) the desired state vector into the shift register.
Step 4: Apply the test pattern to the primary input pins.
Step 5: Set the mode to normal and observe the primary outputs of the circuit after sufficient time for
propagation.
Step 6: Assert the circuit clock for one machine cycle to capture the outputs of the combinational logic
into the registers.
Step 7: Return to test mode; scan out the contents of the registers, and at the same time scan in the next
pattern.
Step 8: Repeat steps 3-7 until all test patterns are applied.
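Steps 3-7 above form a loop that can be sketched in Python as follows (a toy model, not a real tester API; the scan chain is just a list and comb_logic is a stand-in for the combinational block):

def scan_test(patterns, expected, comb_logic, chain_len):
    """Model of the scan test loop: shift in, capture, shift out, compare."""
    chain = [0] * chain_len               # scan chain after reset
    results = []
    for pat, exp in zip(patterns, expected):
        chain = list(pat)                 # step 3: scan in the state vector
        chain = comb_logic(chain)         # steps 5-6: normal mode, capture
        results.append(chain == exp)      # step 7: scan out and compare
    return results

# Toy combinational block: bitwise inversion.
invert = lambda bits: [1 - b for b in bits]
print(scan_test([[0, 1, 1]], [[1, 0, 0]], invert, 3))   # [True]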
An important approach among scan-based designs is the level sensitive scan design (LSSD), which
incorporates both the level sensitivity and the scan path approach using shift registers. The level sensitivity
is to ensure that the sequential circuit response is independent of the transient characteristics of the circuit,
such as the component and wire delays. Thus, LSSD removes hazards and races. Its ATPG is also simplified
since tests have to be generated only for the combinational part of the circuit.
The boundary scan test method is also used for testing printed circuit boards (PCBs) and multichip modules
(MCMs) carrying multiple chips. Shift registers are placed in each chip close to I/O pins in order to form a
chain around the board for testing. With successful implementation of the boundary scan method, a simpler
tester can be used for PCB testing.
On the negative side, scan design uses more complex latches, flip-flops, I/O pins, and interconnect wires
and, thus, requires more chip area. The testing time per test pattern is also increased due to shift time in
long registers.
7. Explain in detail about Ad Hoc design rules in sequential circuit testing
Ad Hoc Testable Design Techniques
One way to increase the testability is to make nodes more accessible at some cost by physically inserting
more access circuits to the original design. Listed below are some of the ad hoc testable design techniques.
Partition-and-Mux Technique:
Since long sequences of serial gates, functional blocks, or large circuits are difficult to test, such circuits
can be partitioned and multiplexers (muxes) inserted so that some of the primary inputs can be
fed to the partitioned parts through multiplexers with accessible control signals. With this design technique,
the number of accessible nodes can be increased and the number of test patterns can be reduced. A case in
point would be a 32-bit counter. Dividing this counter into two 16-bit parts would reduce the testing time
in principle by a factor of 2^15. However, circuit partitioning and the addition of multiplexers may increase
the chip area and circuit delay. This practice is not unique and is similar to the divide-and-conquer approach
to large, complex problems. Figure 1 illustrates this method.
When the sequential circuit is powered up, its initial state can be a random, unknown state. In this case, it
is not possible to start the test sequence correctly. The state of a sequential circuit can be brought to a known
state through initialization. In many designs, the initialization can be easily done by connecting
asynchronous preset or clear-input signals from primary or controllable inputs to flip-flops or latches.
To avoid synchronization problems during testing, internal oscillators and clocks should be disabled. For
example, rather than connecting the circuit directly to the on-chip oscillator, the clock signal can be ORed
with a disabling signal followed by an insertion of a testing signal as shown in Fig. 2.
The enhancement of testability requires serious tradeoffs. The speed of an asynchronous logic circuit can
be faster than that of the synchronous logic circuit counterpart. However, the design and test of an
asynchronous logic circuit are more difficult than for a synchronous logic circuit, and its state transition
times are difficult to predict. Also, the operation of an asynchronous logic circuit is sensitive to input test
patterns, often causing race problems and hazards of having momentary signal values opposite to the
expected values. Sometimes, designed-in logic redundancy is used to mask a static hazard condition for
reliability. However, the redundant node cannot be observed since the primary output value cannot be made
dependent on the value of the redundant node. Hence, certain faults on the redundant node cannot be tested
or detected. Figure 3 shows that the bottom NAND2 gate is redundant and the stuck-at fault on its output
line cannot be detected. If a fault is undetectable, the associated line or gate can be removed without
changing the logic function.
Figure 3 : (a) A redundant logic gate example. (b) Equivalent gate with redundancy removed
8. Explain about the memory test requirements for MBIST along with different Delay faults.
MBIST is a self-test and repair mechanism which tests memories through an effective set of
algorithms to detect essentially all the faults that could be present inside a typical memory cell, whether
stuck-at faults (SAF), transition delay faults (TDF), coupling faults (CF), or neighborhood pattern sensitive
faults (NPSF).
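Memory BIST engines typically implement March algorithms. Below is a hedged Python sketch of a March C- style sequence over a toy memory array; a real MBIST controller realizes this in hardware:

def march_cm(mem):
    """March C-: ⇕(w0); ⇑(r0,w1); ⇑(r1,w0); ⇓(r0,w1); ⇓(r1,w0); ⇕(r0).
    Returns True if no fault is detected in `mem` (a list of bits)."""
    n = len(mem)
    up, down = range(n), range(n - 1, -1, -1)

    def element(order, read_val, write_val):
        for addr in order:
            if read_val is not None and mem[addr] != read_val:
                return False                  # mismatch: fault detected
            if write_val is not None:
                mem[addr] = write_val
        return True

    steps = [(up, None, 0),      # ⇕(w0)
             (up, 0, 1),         # ⇑(r0, w1)
             (up, 1, 0),         # ⇑(r1, w0)
             (down, 0, 1),       # ⇓(r0, w1)
             (down, 1, 0),       # ⇓(r1, w0)
             (down, 0, None)]    # ⇕(r0)
    return all(element(*s) for s in steps)

print(march_cm([0] * 16))   # True for a fault-free memory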
Delay Faults
• Delays along every path from PI to PO or between internal latches must be less
than the operational system clock interval.
• We have already discussed a number of defects that can cause delay faults:
• GOS defects
• Resistive shorting defects between nodes and to the supply rails
• Parasitic transistor leakages, defective pn junctions and incorrect or shifted threshold
voltages
• Certain types of opens
• Process variations can also cause devices to switch at a speed lower than the specification.
• An SA0 or SA1 can be modeled as a delay fault in which the signal takes an
"infinite" amount of time to change to 1 or 0, respectively.
o Passing stuck-at fault tests is usually not sufficient, however, for
systems that operate at any appreciable speed.
• Running stuck-at fault tests at higher speed can uncover some delay faults.
Delay Tests
• Test Definition:
• At time t1, the initializing vector of the two-pattern test, V1, is applied through the input
latches or PIs and the circuit is allowed to stabilize.
• At time t2, the second test pattern, V2, is applied.
o At time t3, a logic value measurement (a sample) is made at the output latches or POs.
• The delay test vectors V1 and V2 may sensitize one or more paths, pi.
Delay Tests
• Let:
• TC = (t3 - t2) represent the time interval between the application of vector V2 at the PIs
and the sampling event at the POs
• The nominal delay of each of these paths be defined as pdi.
• The slack of each path be defined as sdi = TC - pdi.
o This is the difference between the test interval and the propagation delay of each of the
sensitized paths in the nominal circuit.
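A small numeric sketch of these definitions (the values are made up for illustration):

# slack sd_i = T_C - pd_i; a defect delay ad_i is detected iff ad_i > sd_i
T_C = 7.0                              # test interval (t3 - t2), time units
paths = {"P1": 6.0, "P2": 4.5}         # nominal path delays pd_i

for name, pd in paths.items():
    slack = T_C - pd                   # sd_i
    ad = 2.0                           # assumed extra delay from a defect
    print(name, "slack =", slack, "detected:", ad > slack)
# P1: slack 1.0 -> detected; P2: slack 2.5 -> missed at this test interval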
Delay Fault Test Generation
• Difficulties with delay fault test generation:
• The detection of a defect that introduces an additional delay, adi, along a sensitized path is
dependent on satisfying the condition:
o adi > sdi (or pdi + adi > TC)
• Static sensitization defines the case when all off-path nodes settle to non-controlling
values (0 for OR/NOR, 1 for AND/NAND) following the application of V2.
o This is a necessary condition to test a path for a delay fault.
• The gates along the sensitized path have exactly one on-path input and zero or
more non-controlling off-path inputs.
o Delay fault tests are classified according to the voltage behavior of
the off-path nodes.
o Such tests can be invalidated under certain conditions.
• Note, unlike the previous example, the glitch occurs before the intended transition
in this case, and can invalidate the test (e.g. the fault is not detected).
Delay Tests and Invalidation
• The critical path(s) of this circuit is 6 time units.
o Let's set the clock period T = 7.
• This test is called a non-robust test for delay fault P3.
Delay Fault Path Classification
• Each of the paths in a circuit can be classified:
• Hazard-free robust testable
• Robust testable
• Non-robust testable
• Non-robust testable but not redundant
• Redundant
• However, robust tests are still possible even when static hazards are present on
the off-path inputs.
o Static hazards are necessary but not sufficient to make a delay test
non-robust.
• A delay test is a robust test if the on-path nodes control the first occurrence of a
transition through all gates along the tested path.
o This ensures that a delay test is not invalidated, or a path delay fault
masked, by the delay characteristics of gates not on the tested path.
• This test is robust since F will not change state until the transition on E has
occurred.
o In other words, any assignable delay to D can never mask a delay
fault that may occur on the tested path.
• This is true because the on-path node E holds the dominant input value on gate
G4, and therefore determines the earliest transition possible on F.
o Therefore, D is allowed to delay the transition on F but not speed it
up.
Robust Test
• It is possible that:
• D can cause a transition to occur on F after the transition on the on-path node E has occurred.
• D may further delay the transition of F since it too can hold the dominant input value on
gate G4.
• The latter condition implies that a robust test does not require the sensitized path
to dominate the timing, i.e., to be the last transition to occur on all gates along the
sensitized path.
Robust Test
• For the first case, the off-path inputs of the gate must behave in either one of two
ways.
• If the off-path input node changes state, then it must make a transition from the dominant
to the non-dominant input state of the gate.
• If it does not change state, then it must remain in steady-state at the non-dominant value
during the entire test interval.
• When all off-path inputs honor these constraints, the outputs of the gates along
the test path will not make the transition until the last of all transitioning input
lines have toggled.
Robust Test
• For the second case, the off-path inputs must remain at their non-dominant states
during the entire test interval.
o No off-path transition is allowed.
• In either case, hazards will not be visible at the output until after the desired
transition has propagated along the tested path.
• However, for many circuits, even this weaker set of constraints permits only a
small percentage of path delay faults to be robustly tested.
Non-Robust Test
• A non-robust test allows the output to change before the on-path transition
propagates along the tested path.
o A non-robust test cannot guarantee the detection of a delay fault
along a test path in the presence of other faults.
• Non-robust tests only require static sensitization (arbitrary values for V1).
Path Delay Fault Test Generation
• There are no alternatives to generate the previous test, so we are stuck with a non-
robust test for the rising transition of P2.
o Note that in circuits with reconvergent fanout, backtracking is
frequently necessary.
• A path for which both the rising and falling PDFs are singly (i.e. non-robustly)
testable is called a testable path.
• A path that has one singly testable and one singly untestable PDF is called a
partially testable path and may be associated with a redundant fault.
o The fault q SA1 in our circuit is redundant -- the AND gate can be
removed.
• When no non-robust test exists for either PDF, it is a singly-untestable path.
o This path can be eliminated by circuit transformations.
False Paths
• The delay along false paths cannot affect the output transition time.
o Unfortunately, singly-untestable PDFs are not always false paths.
• This is why the delays of paths whose PDFs are untestable are still taken into
account while determining the clock period of the circuit.
o A point in favor of static timing analysis.
9. Write down the important features of ATPG testing
ATPG (acronym for both Automatic Test Pattern Generation and Automatic Test Pattern
Generator) is an electronic design automation method or technology used to find an input (or test)
sequence that, when applied to a digital circuit, enables automatic test equipment to distinguish
between the correct circuit behavior and the faulty circuit behavior caused by defects. The generated
patterns are used to test semiconductor devices after manufacture, or to assist with determining the
cause of failure (failure analysis[1]). The effectiveness of ATPG is measured by the number of
modeled defects, or fault models, detectable and by the number of generated patterns. These metrics
generally indicate test quality (higher with more fault detections) and test application time (higher
with more patterns). ATPG efficiency is another important consideration that is influenced by the
fault model under consideration, the type of circuit under test (full scan, synchronous sequential, or
asynchronous sequential), the level of abstraction used to represent the circuit under test (gate,
register-transfer, switch), and the required test quality.
10. Discuss the steps involved in design for self-test (BIST) at board level
Built-In Self Test (BIST) Techniques
BIST is a design-for-testability technique that places the testing functions physically with the circuit under
test (CUT), as illustrated in Figure 40.1 [1]. The basic BIST architecture requires the addition of three
hardware blocks to a digital circuit: a test pattern generator, a response analyzer, and a test controller. The
test pattern generator generates the test patterns for the CUT. Examples of pattern generators are a ROM
with stored patterns, a counter, and a linear feedback shift register (LFSR). A typical response analyzer is a
comparator with stored responses or an LFSR used as a signature analyzer. It compacts and analyzes the
test responses to determine correctness of the CUT. A test control block is necessary to activate the test and
analyze the responses. However, in general, several test-related functions can be executed through a test
controller circuit.
As shown in Figure 40.1, the wires from primary inputs (PIs) to the MUX and the wires from the circuit output
to the primary outputs (POs) cannot be tested by BIST. In normal operation, the CUT receives its inputs from
other modules and performs the function for which it was designed. During test mode, a test pattern
generator circuit applies a sequence of test patterns to the CUT, and the test responses are evaluated by an
output response compactor. In the most common type of BIST, test responses are compacted in the output
response compactor to form (fault) signatures. The response signatures are compared with reference golden
signatures generated or stored on-chip, and the error signal indicates whether the chip is good or faulty. Four
primary parameters must be considered in developing a BIST methodology for embedded systems; these
correspond with the design parameters for on-line testing techniques discussed in an earlier chapter [2].
Fault coverage: This is the fraction of faults of interest that can be exposed by the test patterns produced
by the pattern generator and detected by the output response monitor. In the presence of input bit-stream
errors there is a chance that the computed signature matches the golden signature and the circuit is reported
as fault-free. This undesirable property is called masking or aliasing.
Test set size: This is the number of test patterns produced by the test generator, and is closely linked to
fault coverage: generally, large test sets imply high fault coverage.
Hardware overhead: The extra hardware required for BIST is considered to be overhead. In most
embedded systems, high hardware overhead is not acceptable.
Performance overhead: This refers to the impact of BIST hardware on normal circuit performance such
as its worst-case (critical) path delays. Overhead of this type is sometimes more important than hardware
overhead.
In built-in self-test (BIST) design, parts of the circuit are used to test the circuit itself. On-line BIST is used
to perform the test under normal operation, whereas off-line BIST is used to perform the test off-line. The
essential circuit modules required for BIST include a pseudo-random pattern generator (PRPG) and an output
response analyzer (ORA). The roles of these two modules are illustrated in Fig. 1. Both the PRPG and the ORA
can be implemented with linear feedback shift registers (LFSRs).
To reduce the chip area penalty, data compression schemes are used to compare the compacted test
responses instead of the entire raw test data. One of the popular data compression schemes is the signature
analysis, which is based on the concept of cyclic redundancy checking. It uses polynomial division, which
divides the polynomial representation of the test output data by a characteristic polynomial and then finds
the remainder as the signature. The signature is then compared with the expected signature to determine
whether the device under test is faulty. It is known that compression can cause some loss of fault coverage.
It is possible that the output of a faulty circuit can match the output of the fault-free circuit; thus, the fault
can go undetected in the signature analysis. Such a phenomenon is called aliasing.
In its simplest form, the signature generator consists of a single-input linear feedback shift register (LFSR),
as shown in Fig. 3, in which all the latches are edge-triggered. In this case, the signature is the content of
this register after the last input bit has been sampled. The input sequence {an} is represented by the
polynomial G(x) and the output sequence by Q(x). It can be shown that G(x) = Q(x)P(x) + R(x), where P(x) is
the characteristic polynomial of the LFSR and R(x) is the remainder, the degree of which is lower than that
of P(x). For the simple case in Fig. 3, with the characteristic polynomial shown there, the remainder term
becomes R(x) = x^4 + x^2, which corresponds to the register contents {0 0 1 0 1}.
Figure 3 : Polynomial division using LFSR for signature analysis
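Signature analysis is ordinary polynomial division over GF(2). A Python sketch (the characteristic polynomial below is an assumed example, not necessarily the one in Fig. 3; bit i of an integer represents the coefficient of x^i):

def gf2_remainder(g, p):
    """Remainder of G(x) / P(x) with coefficients in GF(2)."""
    dp = p.bit_length() - 1                  # degree of P(x)
    while g.bit_length() - 1 >= dp:
        g ^= p << (g.bit_length() - 1 - dp)  # subtract (XOR) shifted P(x)
    return g

P = 0b100101                                 # assumed P(x) = x^5 + x^2 + 1
stream = [1, 0, 1, 1, 0, 0, 1, 0]            # input sequence {a_n}
G = 0
for bit in stream:                           # G(x) built MSB-first
    G = (G << 1) | bit
signature = gf2_remainder(G, P)
print(format(signature, "05b"))              # 5-bit signature R(x)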
Pseudo-exhaustive patterns
In pseudo-exhaustive pattern generation, the circuit is partitioned into several smaller subcircuits
based on the output cones of influence, possibly overlapping blocks with fewer than n inputs. Then
all possible test patterns are exhaustively applied to each sub-circuit. The main goal of pseudo-
exhaustive test is to obtain the same fault coverage as the exhaustive testing and, at the same time,
minimize the testing time. Since close to 100% fault coverage is guaranteed, there is no need for
fault simulation for exhaustive testing and pseudo-exhaustive testing. However, such a method
requires extra design effort to partition the circuits into pseudo-exhaustive testable sub-circuits.
Moreover, the delivery of test patterns and test responses is also a major consideration. The added
hardware may also increase the overhead and decrease the performance.
Circuit partitioning for pseudo-exhaustive pattern generation can be done by cone segmentation, as
shown in Figure 40.4. Here, a cone is defined as the fan-ins of an output pin. If the size of the largest
cone is K, the patterns must guarantee that the patterns applied to any K inputs contain all possible
combinations. In Figure 40.4, the total circuit is divided into two cones based on the cones of influence.
For cone 1, the PO h is influenced by X1, X2, X3, X4, and X5, while for cone 2 the PO f is influenced by
inputs X4, X5, X6, X7, and X8. Therefore the total number of test patterns needed for exhaustive testing
of cone 1 and cone 2 is (2^5 + 2^5) = 64, but the original circuit with 8 inputs
requires 2^8 = 256 test patterns for an exhaustive test.
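A quick numeric check of this cone-segmentation arithmetic:

# Two 5-input cones versus one flat 8-input circuit.
cone_inputs = [5, 5]
pseudo_exhaustive = sum(2 ** k for k in cone_inputs)   # 2^5 + 2^5 = 64
exhaustive = 2 ** 8                                    # 256
print(pseudo_exhaustive, exhaustive)                   # 64 256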
Pseudo-Random Pattern Generation
A string of 0’s and 1’s is called a pseudo-random binary sequence when the bits appear to be random
in the local sense but are in some way repeatable. The linear feedback shift register (LFSR)
pattern generator is most commonly used for pseudo-random pattern generation. In general, this
requires more patterns than deterministic ATPG, but fewer than an exhaustive test. In contrast with
other methods, pseudo-random pattern BIST may require a long test time and necessitate evaluation
of fault coverage by fault simulation. This pattern type, however, has the potential for lower
hardware and performance overheads and less design effort than the preceding methods. In
pseudo-random test patterns, each bit has an approximately equal probability of being a 0 or a 1. The
number of patterns applied is typically of the order of 10^3 to 10^7 and is related to the circuit's
testability and the fault coverage required.
Linear feedback shift register reseeding [5] is an example of a BIST technique that is based on
controlling the LFSR state. LFSR reseeding may be static, that is LFSR stops generating patterns
while loading seeds, or dynamic, that is, test generation and seed loading can proceed
simultaneously. The length of the seed can be either equal to the size of the LFSR (full reseeding)
or less than the LFSR (partial reseeding). In [5], a dynamic reseeding technique that allows partial
reseeding is proposed to encode test vectors. A set of linear equations is solved to obtain the seeds,
and test vectors are ordered to facilitate the solution of this set of linear equations.
Figure 40.5 shows a standard, external exclusive-OR linear feedback shift register. There are n flip-
flops (Xn-1, ..., X0), and this is called an n-stage LFSR. It can be a near-exhaustive test pattern
generator, as it cycles through 2^n - 1 states, excluding the all-0 state. This is known as a maximal-length
LFSR. Figure 40.6 shows the implementation of an n-stage LFSR with an actual digital circuit. [1]
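A Python sketch of such an external-XOR LFSR (the tap positions correspond to an assumed primitive polynomial, x^4 + x^3 + 1, so this 4-stage example is maximal length):

def lfsr(taps, state, count):
    """Yield `count` successive states of an external-XOR LFSR.
    `taps` are stage indices XORed together to form the feedback bit."""
    for _ in range(count):
        yield tuple(state)
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift, feedback into stage 0

states = list(lfsr(taps=[2, 3], state=[1, 0, 0, 0], count=15))
print(len(set(states)))   # 15 distinct states = 2^4 - 1 (maximal length)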