DFT Key R20

Code No: A47D3 R 20

BVRAJU INSTITUTE OF TECHNOLOGY, NARSAPUR


(UGC - AUTONOMOUS)
IV B.Tech I Semester Regular Examinations, Nov 2023
DESIGN FOR TESTABILITY
(Electronics and Communication Engineering)
Time: 3 Hours Max Marks: 60
Note: This Question Paper contains two Parts A and B
 Part A is compulsory which carries 10 marks. Five questions from six units. Answer all questions in Part A
at one place only.
 Part-B consists of 5 Questions (numbered from 2 to 11) carrying 10 marks each. Each of these questions
is from one unit and may contain a, b, c as sub-questions. For each question there will be an either/or
choice (that means there will be two questions from each unit and the student should answer only one
question).
PART – A (5x2 = 10 Marks)
1.                                                                  Marks  Bloom Level  CO
a  List some of the applications of fault simulation                  2        1         1
b  What are the Test generation basics for Combinational circuit      2        1         2
c  Discuss about observability & controllability                      2        2         3
d  What is RAM fault model                                            2        1         4
e  Explain any two BIST concepts                                      2        2         5
PART – B (5x10 = 50 Marks)
                                                                    Marks  Bloom Level  CO
2   Explain single stuck and multiple stuck at fault models          10       2,3        1
    with examples
OR
3   Explain the methods of equivalence fault collapsing and          10       2,3        1
    dominant fault collapsing with suitable example
***
4   Explain how test vector is generated using D algorithm           10       2,3        2
    with an example
OR
5   Explain the different Testable combinational logic circuit       10       2          2
    design with examples
***
6   Explain different Scan Path techniques & storage cells for       10       2          3
    scan design
OR
7   Explain in detail about Ad Hoc design rules in sequential        10       2          3
    circuit testing
***
8   Explain about the memory test requirements for MBIST along       10       2          4
    with different Delay faults.
OR
9   Write down the important features of ATPGA testing               10       1,2        4
***
10  Discuss the steps involved in design for self-test (BIST)        10       3          5
    at board level
OR
11  Explain how test pattern is generated in BIST.                   10       2,3        5

***
Code No: A47D3 R 20
BVRAJU INSTITUTE OF TECHNOLOGY, NARSAPUR
(UGC - AUTONOMOUS)
IV B.Tech I Semester Regular Examinations, Nov 2023
DESIGN FOR TESTABILITY
(Electronics and Communication Engineering)

SCHEME OF EVALUATION

S.NO  QUESTION                                                     MARKS

PART A
1a    List some of the applications of fault simulation            2 applications, 1 mark each; total 2 marks
1b    What are the Test generation basics for Combinational        2 points, 1 mark each; total 2 marks
      circuit
1c    Discuss about Observability & Controllability                Observability definition: 1 mark;
                                                                   Controllability definition: 1 mark
1d    What is RAM fault model                                      Explanation: 2 marks
1e    Explain any two BIST concepts                                Any two concepts, 1 mark each; total 2 marks

PART B
2     Explain single stuck and multiple stuck at fault models      Single stuck-at fault definition: 2 marks; example: 3 marks;
      with examples                                                Multiple stuck-at fault definition: 2 marks; example: 3 marks
OR
3     Explain the methods of equivalence fault collapsing and      Equivalence fault collapsing definition: 2 marks; example: 3 marks;
      dominant fault collapsing with suitable example              Dominant fault collapsing definition: 2 marks; example: 3 marks
4     Explain how test vector is generated using D algorithm       D algorithm explanation: 5 marks; example: 5 marks
      with an example
OR
5     Explain the different Testable combinational logic           Classification: 2 marks; explanation of any two concepts: 8 marks
      circuit design with examples
6     Explain different Scan Path techniques & storage cells       Scan Path techniques: 6 marks; storage cells: 4 marks
      for scan design
OR
7     Explain in detail about Ad Hoc design rules in sequential    Explanation of Ad Hoc rules: 10 marks
      circuit testing
8     Explain about the memory test requirements for MBIST         Memory requirements: 2 marks; Delay faults: 8 marks
      along with different Delay faults
OR
9     Write down the important features of ATPGA testing           ATPGA definition: 2 marks; features: 8 marks
10    Discuss the steps involved in design for self-test (BIST)    BIST diagram: 5 marks; explanation: 5 marks
      at board level
OR
11    Explain how test pattern is generated in BIST                Classification of methods: 2 marks; any one method explanation: 8 marks
PART A
1 a) Applications of Fault Simulation
1. Fault Coverage
2. Test Generation
3. Fault Dictionary

b) Test generation basics for Combinational circuit


Combinational circuit test generation is a method of testing the individual nodes or flip-flops of a logic
circuit without being concerned with the operation of the overall circuit.
1. The use of full-scan enables tests to be generated using a combinational test generator.
2. The input to the test generator is only the combinational part of the circuit under test (CUT),
obtained by removing all the flip-flops and considering all the inputs and outputs of the
combinational circuit as primary inputs and outputs, respectively

c) Observability & Controllability


Observability: In order to see what is going on inside the system under observation, the system must
be observable.
Controllability: In order to be able to do whatever we want with the given dynamic system under
control input, the system must be controllable.

d) RAM fault model


The functional fault models of memory can be a static fault or a dynamic fault. A static fault model
can be sensitized by only one operation (read or write), while a dynamic fault model requires more
than one operation to be sensitized.

e) BIST concepts
A built-in self-test (BIST) or built-in test (BIT) is a mechanism that permits a machine to test itself.
Engineers design BISTs to meet requirements such as:
1. high reliability
2. lower repair cycle times
or constraints such as:
1. limited technician accessibility
2. cost of testing during manufacture
The main purpose of BIST is to reduce the complexity of testing, and thereby decrease the cost and reduce
reliance upon external (pattern-programmed) test equipment. BIST reduces cost in two ways:
1. reduces test-cycle duration
2. reduces the complexity of the test/probe setup, by reducing the number of I/O signals that
must be driven/examined under tester control.

PART B
2. A stuck-at fault is a particular fault model used by fault simulators and automatic test pattern generation
(ATPG) tools to mimic a manufacturing defect within an integrated circuit. Individual signals and pins are
assumed to be stuck at Logical '1', '0' and 'X'. For example, an input is tied to a logical 1 state during test
generation to assure that a manufacturing defect with that type of behavior can be found with a specific test
pattern. Likewise the input could be tied to a logical 0 to model the behavior of a defective circuit that
cannot switch its output pin. Not all faults can be analyzed using the stuck-at fault model. Compensation
for static hazards, namely branching signals, can render a circuit untestable using this model. Also,
redundant circuits cannot be tested using this model, since by design there is no change in any output as a
result of a single fault.
Single stuck-at line
Single stuck line is a fault model used in digital circuits. It is used for post manufacturing testing, not
design testing. The model assumes one line or node in the digital circuit is stuck at logic high or logic low.
When a line is stuck it is called a fault.
Digital circuits can be divided into:

1. Gate level or combinational circuits which contain no storage (latches and/or flip flops) but
only gates like NAND, OR, XOR, etc.
2. Sequential circuits which contain storage.
This fault model applies to gate level circuits, or a block of a sequential circuit which can be separated from
the storage elements. Ideally a gate-level circuit would be completely tested by applying all possible inputs
and checking that they gave the right outputs, but this is completely impractical: an adder to add two 32-bit
numbers would require 2^64 ≈ 1.8×10^19 tests, taking 58 years at 0.1 ns/test. The stuck-at fault model assumes
that only one input on one gate will be faulty at a time; if more are faulty, a test set that can
detect any single fault should easily find multiple faults.

To use this fault model, each input pin on each gate in turn, is assumed to be grounded, and a test vector is
developed to indicate the circuit is faulty. The test vector is a collection of bits to apply to the circuit's
inputs, and a collection of bits expected at the circuit's output. If the gate pin under consideration is
grounded, and this test vector is applied to the circuit, at least one of the output bits will not agree with the
corresponding output bit in the test vector. After obtaining the test vectors for grounded pins, each pin is
connected in turn to a logic one and another set of test vectors is used to find faults occurring under these
conditions. Each of these faults is called a single stuck-at-0 (s-a-0) or a single stuck-at-1 (s-a-1) fault,
respectively.
This model worked so well for transistor-transistor logic (TTL), which was the logic of choice during the
1970s and 1980s, that manufacturers advertised how well they tested their circuits by a number called
"stuck-at fault coverage", which represented the percentage of all possible stuck-at faults that their testing
process could find. While the same testing model works moderately well for CMOS, it is not able to detect
all possible CMOS faults. This is because CMOS may experience a failure mode known as a stuck-
open fault, which cannot be reliably detected with one test vector and requires that two vectors be applied
sequentially. The model also fails to detect bridging faults between adjacent signal lines, occurring in pins
that drive bus connections and array structures. Nevertheless, the concept of single stuck-at faults is widely
used, and with some additional tests it has allowed industry to ship an acceptably low number of bad circuits.
The testing based on this model is aided by several things:

1. A test developed for a single stuck-at fault often finds a large number of other stuck-at
faults.
2. A series of tests for stuck-at faults will often, purely by serendipity, find a large number of
other faults, such as stuck-open faults. This is sometimes called "windfall" fault coverage.
3. Another type of testing called IDDQ testing measures the way the power supply current of
a CMOS integrated circuit changes when a small number of slowly changing test vectors
are applied. Since CMOS draws a very low current when its inputs are static, any increase
in that current indicates a potential problem.
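The single stuck-at test-generation procedure described above can be sketched in code. The following is a minimal illustration (not part of the exam material) for a small hypothetical circuit out = (a AND b) OR c with an internal net 'ab': a fault is injected on one net, and all input vectors are searched for one whose output differs from the fault-free circuit.

```python
from itertools import product

# Hypothetical example circuit (not from the exam material):
# out = (a AND b) OR c, with internal net 'ab'.
def simulate(a, b, c, fault=None):
    """Evaluate the circuit; `fault` is an optional (net, stuck_value) pair."""
    def v(name, value):
        # A stuck-at fault overrides the computed value of that net.
        if fault is not None and fault[0] == name:
            return fault[1]
        return value
    a, b, c = v('a', a), v('b', b), v('c', c)
    ab = v('ab', a & b)
    return v('out', ab | c)

def find_test(fault):
    """Return the first input vector whose output differs from the good circuit."""
    for vec in product([0, 1], repeat=3):
        if simulate(*vec) != simulate(*vec, fault=fault):
            return vec
    return None  # no test exists: the fault is undetectable (redundant)

print(find_test(('ab', 0)))   # → (1, 1, 0): excite ab=1, keep c=0 to propagate
print(find_test(('a', 1)))    # → (0, 1, 0)
```

Searching all 2^3 vectors like this is the exhaustive view; deterministic ATPG algorithms such as the D algorithm construct a test directly instead of searching.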

Multiple Faults
• The simultaneous presence of several single faults, usually of the same type.
• Usually not considered in practice:
▪ We indicated earlier that there are 3^n − 1 possible multiple stuck-at faults (MSFs) in a
circuit with n SSF sites.
▪ Tests for SSFs cover a high percentage of MSFs.

• Situations in which it is important to consider them:


▪ Diagnostic or fault location procedures don't work with multiple faults.
▪ A circuit with a redundant SSF can malfunction even when it passes the SSF
tests.

• Solution is to enhance test set to cover multiple faults or remove redundancy.


▪ The latter is usually easier.
Multiple Faults
• An example:

• The three faults, SA1-3, are redundant because the presence of any one of them has no
effect on the logic function.

• As we will see, the vectors 00, 01, and 10 detect all other single SA faults.

• However, in combination they can cause a problem.


o For input combination 11, the function is incorrect, but it's not part of the
test set.
Multiple Stuck Fault Model
• Intuitively, it seems that detecting all SSFs is sufficient to detect the MSFs.
o Unfortunately, functional masking introduced by MSFs can prevent
detection of SSFs.
• Functional masking implies masking under any test set.
o However, it's possible that a multiple fault is masked under a given test and
not under another.
o The test abc = 010 detects the MSF {c SA0, a SA1}.
Multiple Stuck Fault Model
• Given a complete test set T for SSFs, can there exist a MSF F = {f1, f2, ..., fk} such that F
is not detected by T?
o Unfortunately, the answer is yes.

• The test set T = {1111, 0111, 1110, 1001, 1010, 0101} detects all SSFs.
o The only test in T that detects both f = b SA1 and g = c SA1 is 1001.

o However, the circular masking of f and g under T prevents the MSF {f, g}
from being detected.

• Fortunately, circular masking relations are seldom encountered in practice.

3. Explain the methods of equivalence fault collapsing and dominant fault collapsing with suitable example

Fault collapsing
There are two main ways for collapsing fault sets into smaller sets.
Equivalence collapsing
It is possible that two or more faults produce the same faulty behavior for all input patterns. These faults are
called equivalent faults. Any single fault from the set of equivalent faults can represent the whole set. In
this case, far fewer than k×n fault tests are required for a circuit with n signal lines. Removing equivalent
faults from the entire set of faults is called fault collapsing; fault collapsing significantly decreases the number
of faults to check.
In the example diagram, red faults are equivalent to the faults being pointed to by the arrows, so those
red faults can be removed from the circuit. In this case, the fault collapse ratio is 12/20.
Dominance collapsing

Fault dominance example for a NAND gate


Fault F is called dominant to F' if every test of F' detects F. In this case, F can be removed from the fault list.
If F dominates F' and F' dominates F, then these two faults are equivalent.
In the example, a NAND gate is shown; the set of all input values that can test the output's SA0 is
{00, 01, 10}, and the set of all input values that can check the first input's SA1 is {01}. In this case, the output
SA0 fault is dominant and can be removed from the fault list.
Functional collapsing
Two faults are functionally equivalent if they produce identical faulty functions; in other words, two faults
are functionally equivalent if we cannot distinguish them at the primary outputs (PO) with any input test vector.
Fault Equivalence
A single test can detect more than one fault in a circuit, and many tests in a set detect the same faults. In
other words, the subsets of faults detected by each test from a test set are not disjoint. A significant objective
in test generation is to reduce the total number of faults. This is done by grouping equivalent faults into
subsets. It is then sufficient to test only one fault from each equivalent set to cover all the faults. Such
subsets are called functional equivalence classes.
Let's take the example of a 3-input NAND gate. This gate has 2×(3+1) = 8 stuck-at faults. So, initially,
without any optimization, it would require eight different test vectors to detect all the faults. Our task is to
minimize the number of test vectors as much as possible.

The input and output s-a-0 & s-a-1 faults are indicated in the gate schematic. Let’s first generate a test
vector for a→s-a-0 at the input. We will use the path sensitization approach. To excite the fault, we need to
force logic-1 at the input ‘a’. A discrepancy ‘D’ will be created at ‘a’. Now, to propagate the fault (or ‘D’)
to the output (or make it observable), we need all other inputs (‘b’ and ‘c’) as logic-1. So, the test vector
for checking a→s-a-0 would be {a, b, c} = {1, 1, 1}. Now, repeat similar steps for ‘b’ and ‘c’ too.
Did you observe? For checking the s-a-0 faults at 'b' and 'c', we again need to apply the same test vector. {1,
1, 1} is a very good test vector as it can check three faults at once. Hence, we can conclude that the s-a-0
faults at the inputs of a NAND gate of any size are equivalent.
Can we do it for s-a-1 faults too? Let’s make a table consisting of all input and output stuck-at faults and
their test pattern.
Fault Type   Fault Location   Test Vector {a, b, c}
s-a-0        a                {1, 1, 1}
s-a-0        b                {1, 1, 1}
s-a-0        c                {1, 1, 1}
s-a-0        out              {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-1        a                {0, 1, 1}
s-a-1        b                {1, 0, 1}
s-a-1        c                {1, 1, 0}
s-a-1        out              {1, 1, 1}

Unfortunately, we can’t do this for s-a-1 faults because they all need different test vectors.
For out→s-a-0, we just need to excite the fault, no need to propagate, as it’s already at the output and always
observable. So, for s-a-0 fault excitation, we need to force out→1. This can be done by applying logic-0 at
any of the inputs. Hence, there are multiple test vectors.
For detection of out→s-a-1, it is required to force out→0. This can be only done using the test vector {a, b,
c} = {1, 1, 1}. Did you notice something special? Yes, even out→s-a-1 is equivalent to s-a-0 at a, b & c.

Fault Type   Fault Location   Test Vector {a, b, c}
s-a-0        a, b, c          {1, 1, 1}
s-a-0        out              {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-1        a                {0, 1, 1}
s-a-1        b                {1, 0, 1}
s-a-1        c                {1, 1, 0}
s-a-1        out              {1, 1, 1}
In general, an m-input gate can have a total of (m+1) equivalent sets of faults. We have successfully reduced
the test pattern to a great extent. It can be concluded that s-a-0 faults at the input and s-a-1 fault at the output
of any order NAND gates are always equivalent.
We can collapse the equivalent faults in the gate schematic and keep only one. As they are equivalent,
testing only one of them is sufficient. This is a visual representation of fault reduction. By convention, we
usually keep the input faults; we will eventually come to know the reason for this later. Below are the
reduced table and schematic diagram.

Fault Type     Fault Location                Test Vector {a, b, c}
s-a-0          out                           {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-0 / s-a-1  a, b, c (s-a-0); out (s-a-1)  {1, 1, 1}
s-a-1          a                             {0, 1, 1}
s-a-1          b                             {1, 0, 1}
s-a-1          c                             {1, 1, 0}

The equivalent collapsed faults are crossed in blue. These equivalent faults can be represented in a special
set known as an equivalence set.
Equivalence Set = {a-sa0, b-sa0, c-sa0, out-sa1}
This equivalence set consists of four equivalent faults. We can collapse any three of them and retain one.
In the above figure, we have retained 'a'. We can retain any of the three input equivalents (a-sa0, b-sa0,
or c-sa0). The best practice is to always collapse the output first.
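The equivalence class above can be cross-checked mechanically: two faults are equivalent exactly when their faulty truth tables coincide. The following is a minimal sketch for the 3-input NAND (the gate model and fault encoding are illustrative, not from the exam material).

```python
from itertools import product

# Illustrative 3-input NAND model with stuck-at fault injection.
def nand3(a, b, c, fault=None):
    if fault is not None:
        net, val = fault
        if net == 'a': a = val
        if net == 'b': b = val
        if net == 'c': c = val
    out = 1 - (a & b & c)
    if fault is not None and fault[0] == 'out':
        out = fault[1]
    return out

faults = [(net, v) for net in ('a', 'b', 'c', 'out') for v in (0, 1)]

def signature(fault):
    # The faulty truth table over all 8 input vectors.
    return tuple(nand3(*vec, fault=fault) for vec in product([0, 1], repeat=3))

# Group faults whose faulty functions are identical: equivalence classes.
classes = {}
for f in faults:
    classes.setdefault(signature(f), []).append(f)

for members in classes.values():
    print(members)
# One class holds the four equivalent faults a-sa0, b-sa0, c-sa0, out-sa1;
# each of the remaining four faults sits in a class of its own.
```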

Fault Dominance
Fault Type     Fault Location                Test Vector {a, b, c}
s-a-0          out                           {0, x, x} or {x, 0, x} or {x, x, 0}
s-a-0 / s-a-1  a, b, c (s-a-0); out (s-a-1)  {1, 1, 1}
s-a-1          a                             {0, 1, 1}
s-a-1          b                             {1, 0, 1}
s-a-1          c                             {1, 1, 0}

Notice that there can be several test patterns for out→s-a-0 in the table. Hence, the test constraints on
out→s-a-0 are very loose. This proves to be an advantage, as its test set can intersect with the test patterns
of other faults, further reducing the test vectors.
From the Venn diagram, we can notice that out→s-a-0 is dominant to a→s-a-1, b→s-a-1, and c→s-a-1.
Note that these aren't equivalent; we can only choose one test pattern at a time. For the dominant fault
(out→s-a-0), only one test vector is required out of all the possible combinations (blue, red, or green).
Hence, even if we are testing any of the blue, red, or green vectors (corresponding to a→s-a-1, b→s-a-1,
and c→s-a-1, respectively), the dominant one gets tested automatically.
Therefore, we don't test dominant faults separately, as they are always tested automatically by the test
vectors of other faults.
Now, we have reduced another test vector. The resulting table and schematic are shown below.
Now, we reduced another test vector. The resulting table and schematic are shown below.

Faults                                    Test Vector {a, b, c}
a→s-a-0, b→s-a-0, c→s-a-0, out→s-a-1      {1, 1, 1}
a→s-a-1, out→s-a-0                        {0, 1, 1}
b→s-a-1, out→s-a-0                        {1, 0, 1}
c→s-a-1, out→s-a-0                        {1, 1, 0}

The dominant collapsed fault is crossed in orange. The dominance relationship can be represented in
another special set known as a dominance set.
Dominance Set = {out-sa0: a-sa1, b-sa1, c-sa1}
The entry before the colon in the set represents the dominant fault, while those after the colon represent
the faults it dominates. In a dominance relation, we can only collapse the dominant entry (before the colon).
In this case, three test vectors or test patterns {a, b, c} = {(0, 1, 1), (1, 0, 1), (1, 1, 0)} are required to test
the above dominance set.
We did a pretty good job in collapsing the faults, straightaway reducing eight vectors to four. The test
pattern size is now reduced by 50%. The final test pattern required to test all the faults is {a, b, c} = {(1, 1,
1), (0, 1, 1), (1, 0, 1), (1, 1, 0)}.
Finally, we conclude that, for a NAND gate of any size, the input s-a-0 faults and the output s-a-1 fault are
equivalent, while the output s-a-0 fault is dominant to the input s-a-1 faults.
Fault Collapsing for common gates

In the previous section, we analyzed the equivalence and dominance relations for the 3-input NAND gate.
Similarly, analysis can be done for common 2-input logic gates too. I recommend you apply a similar path
sensitization method with a tabular approach to figure out the fault relations of these gates and compare
your results with the following table.
OR Gate

Fault Type   Fault Location   Test Vector {a, b}
s-a-0        a                {1, 0}
s-a-0        b                {0, 1}
s-a-0        out              {x, 1} or {1, x}
s-a-1        a                {0, 0}
s-a-1        b                {0, 0}
s-a-1        out              {0, 0}
Equivalence set: {a-sa1, b-sa1, out-sa1}
Dominance set: {out-sa0: a-sa0, b-sa0}

Here’s a summary of all the above tabs.

Gate     Equivalence Set(s)                      Dominance Set(s)
OR       {a-sa1, b-sa1, out-sa1}                 {out-sa0: a-sa0, b-sa0}
NOR      {a-sa1, b-sa1, out-sa0}                 {out-sa1: a-sa0, b-sa0}
AND      {a-sa0, b-sa0, out-sa0}                 {out-sa1: a-sa1, b-sa1}
NAND     {a-sa0, b-sa0, out-sa1}                 {out-sa0: a-sa1, b-sa1}
Buffer   {in-sa0, out-sa0}, {in-sa1, out-sa1}    null
NOT      {in-sa1, out-sa0}, {in-sa0, out-sa1}    null

Don't worry; you won't need to memorize the whole table. Just memorize the first row (i.e. the equivalence
and dominance sets for the OR gate). This table follows the principle of Boolean duality. Hence, to find the
equivalence and dominance sets for the AND gate, just replace sa0 with sa1 (and vice versa) in both sets.

Fault Collapsing in circuits

Let’s try out a few examples on fault collapsing.


Fanout-free Circuit
Q. Reduce the stuck-at faults in the following circuit using equivalence and dominance relations.

Initially, there are 30 stuck-at faults (2 × no. of wires = 2 × 15) in the gate-level schematic. For fault
collapsing, we move backwards, evaluating each gate at each level in parallel, from the primary output(s) to
the primary inputs. Let's highlight the highest gate level first.

There is only one AND gate. Equivalent faults are: {a-sa0, b-sa0, out-sa0}. We need to test only one fault
out of these three. Hence, we collapse two and retain one. Our standard practice is to collapse the output
faults first and retain one of the inputs. In this case ‘out‘ is collapsed first. Now, either ‘a‘ or ‘b‘ can be
collapsed as per your choice. We collapse ‘b‘ and retain ‘a‘. Equivalent fault collapsing is shown in blue.
The dominance relation for the AND gate is {out-sa1: a-sa1, b-sa1}. Here, out-sa1 is dominant to the other two.
Unlike equivalence, in dominant fault collapsing we don't have a choice: we can only collapse the dominant
one. Dominant fault collapsing is shown in orange.

Three faults are collapsed so far, and three remain. The remaining three faults are considered for
collapsing (if possible) at the next lower gate level.

The remaining faults are brought closer to the lower-level gates (just for simplicity; the stuck-at faults can
be represented anywhere on their respective wires). At this level, there are two OR gates. The fault collapsing
technique is followed similarly (as in step 1) using the equivalence and dominance relations (from the table).
For the OR gate, all s-a-1 faults are equivalent while the output s-a-0 is dominant. This way, outputs are collapsed
and inputs retained.

The same steps are repeated in step 3. Note that there are three AND gates but one NAND gate too. Make sure
you apply the respective dominance and equivalence relations for the respective gates. Note that even if the same
faults are collapsed for both gate types, there is a slight difference in which faults are dominant and which are
equivalent. This can be clearly seen in the diagram in the orange and blue cross notations.
Yay! We have successfully collapsed 18 faults out of a total of 30 faults. Now, we only need to generate a test
pattern for the remaining 12 faults. This is insane!
The collapse ratio is a benchmarking parameter for DFT CAD tools and is defined as the number of remaining
faults divided by the total number of faults.
Collapse Ratio = 12/30 = 0.4
The smaller the collapse ratio, the fewer faults remain and, hence, the better the CAD tool.

Also, notice that the output as well as all the internal stuck-at faults are collapsed. We now only need to
test the primary inputs. This leads to the following theorem.
A test set that detects all single stuck-at faults on all primary inputs of a fanout-free circuit must detect all
single stuck-at faults in that circuit.
The term ‘fanout free’ is used for a reason; you will know about this shortly.
Fanout Circuit
Q2. Reduce the stuck-at faults in the following circuit for minimum collapse ratio, using equivalence
and dominance relations.

The fanout branches (X, Y, Z) are indicated in the circuit.


The stuck-at faults should be considered separately for every fanout branch and even the fanout stem.
As we know, stuck-at faults are just manifestations of physical defects in transistors; hence each fanout
branch and stem represents its respective gate's transistor defects. For example, the two fanout branches
at 'Z' represent faults inside their respective OR gates. Similarly, the fanout stem represents faults in the
previous-level AND gate. Hence, even though they emerge from the same wire, fanout branches and stems
must be considered as separate stuck-at faults. Therefore, considering all fanout stems and branches as
separate wires, there are 2×16 = 32 total stuck-at faults.

4. Explain how test vector is generated using D algorithm with an example


The D algorithm was developed by Roth at IBM in 1966 and was the first complete test pattern algorithm
designed to be programmable on a computer. The D algorithm is a deterministic ATPG method for
combinational circuits, guaranteed to find a test vector if one exists for detecting a fault. It uses cubical
algebra for the automatic generation of tests.
Three types of cubes are considered:

• Singular cube
• Propagation D-cube (PDC)
• Primitive D-cube of a fault (PDCF)

The D algebra

The D-algebra is a 5-valued logic consisting of the values 1, 0, D, D', and X. The D stands for Discrepancy, as
discussed in the path sensitization method.
The following algebraic rules apply in the D algorithm for intersection:
0∩0=0∩x=x∩0=0
1∩1=1∩x=x∩1=1
x∩x=x
1∩0=D
0 ∩ 1 = D’
Note that intersection in the D-algebra is quite different from the intersection we are familiar with from sets
and relations in mathematics. It does not follow properties like commutativity or associativity.
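The intersection rules can be captured in a small lookup table. The sketch below (illustrative, not part of the syllabus material) encodes exactly the five rules listed above over the cube values '0', '1', and 'x'; the non-commutativity is visible in the table itself.

```python
# Direct encoding of the intersection rules above, over cube values '0', '1', 'x'.
# Pairs outside the listed rules are deliberately left undefined here (KeyError).
def intersect(p, q):
    """Component-wise D-intersection of two cubes (tuples of '0'/'1'/'x')."""
    table = {
        ('0', '0'): '0', ('0', 'x'): '0', ('x', '0'): '0',
        ('1', '1'): '1', ('1', 'x'): '1', ('x', '1'): '1',
        ('x', 'x'): 'x',
        ('1', '0'): 'D',   # non-commutative by design:
        ('0', '1'): "D'",  # 1 ∩ 0 = D but 0 ∩ 1 = D'
    }
    return tuple(table[pair] for pair in zip(p, q))

# Example: good AND-gate cube {1,1,1} intersected with faulty cube {x,x,0}
print(intersect(('1', '1', '1'), ('x', 'x', '0')))   # → ('1', '1', 'D')
```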
Cubes

Singular Cover
The Singular Cover (SC) of a logic gate is a compact form of its truth table, obtained using don't cares (x).
The following reduced truth table is the singular cover of an AND gate.

We know that, for an AND gate, the output is logic-1 only when both of its inputs are high, while the output
is logic-0 in all other cases, i.e. where any input is low. Since the output of the AND gate is low for most
input combinations, specifying a separate row for every possible input combination becomes redundant.
Therefore, we merge the rows of the AND gate's truth table and define it using don't cares in a more
condensed form.

Each row of a singular cover is termed a singular cube. The above singular cover of the AND gate has
three singular cubes.
Primitive D-cube of a Fault
D-cubes represent the input-output behavior of the good and faulty circuits.
The Primitive D-cube of a Fault (PDCF) specifies the minimum input conditions required at the inputs
of a gate to produce an error at its output. It is used for fault activation. A PDCF can be derived by
intersecting singular cubes of the faulty and non-faulty gate that have different output values.
Example:
Here is an AND gate with an s-a-0 fault at the output. To generate the PDCF, we first draw the truth tables
of the faulty and non-faulty circuits. Next, we derive the singular covers for the faulty as well as the
non-faulty circuit.

For faulty AND gate, the output is always stuck-at-0 independent of its input; hence its singular cover has
only one row with inputs (a, b) as don’t cares.
Now, we intersect the singular cubes of the non-faulty and faulty circuits. For the PDCF we need to intersect
only those rows for which the output differs between the non-faulty and faulty circuits. Since the faulty
circuit has only one singular cube, {x, x, 0}, we need to intersect it with a singular cube of the non-faulty
circuit having the opposite output value (i.e. logic-1). The singular cube {1, 1, 1} perfectly fits this
criterion.
{1, 1, 1} ∩ {x, x, 0} = {1 ∩ x, 1 ∩ x, 1 ∩ 0} = {1, 1, D}
Finally, PDCF of this faulty AND gate is {a, b, out} = {1, 1, D}. This is similar to fault excitation we did
in the path sensitization method, albeit in a more structured approach.
Here, D is interpreted as logic-1 if the circuit is fault-free and logic-0 if the fault is present. Notice
the order in which the intersection is done! Always intersect as ⇒ singular cube (non-faulty circuit) ∩
singular cube (faulty circuit). The reverse will yield different results, since the intersection is not
commutative, and would break the interpretation we are adopting.
Propagation D-cube
Propagation D-cubes (PDCs) of a gate cause the output of the gate to depend upon the minimum
number of its specified inputs. They are used to propagate a D or D' from a specified input to the
output. Propagation D-cubes can be derived from the intersection of singular cubes of the gate with
opposite output values.
Example:
Here's the truth table of an OR gate. To generate the PDCs, we find the singular cover of the OR gate.
Now, we intersect the singular cubes of every possible combination with opposite output values.
Intersecting the singular cubes of row 2 and row 1, and also row 3 and row 1, serves the purpose.
{1, x, 1} ∩ {0, 0, 0} = {1 ∩ 0, x ∩ 0, 1 ∩ 0} = {D, 0, D}
{x, 1, 1} ∩ {0, 0, 0} = {x ∩ 0, 1 ∩ 0, 1 ∩ 0} = {0, D, D}

a    b    out
D    0    D
0    D    D

In this case, the intersection doesn't need any specific order. If we intersect the other way, we obtain the
PDCs {D', 0, D'} and {0, D', D'}.
Moreover, there is another option in which we intersect {1, 1, 1} with {0, 0, 0} of the truth table. This
yields another possible PDC, {D, D, D} (or {D', D', D'} in the reverse order). The following are the
complete PDCs of an OR gate.

a     b     out
0     D     D
D     0     D
D     D     D
0     D'    D'
D'    0     D'
D'    D'    D'
This is very similar to forward propagation we did in the path sensitization method.
But then why are we learning this? These D-cubes are much more inconvenient and tiresome for us
compared to the path sensitization method. The answer is that we human beings have intuition, and hence
we can solve methods like path sensitization intuitively. A computer doesn't have the necessary intelligence
(yet). The D algorithm takes the creativity out of test generation and allows a computer to do it.
D-cubes of common gates

AND Gate
Singular Cover

a   b   out
0   x   0
x   0   0
1   1   1
Primitive D-cubes of faults

Fault     a   b   out
out→sa0   1   1   D
out→sa1   0   x   D'
out→sa1   x   0   D'
Propagation D-cubes

a      b      out
1      D/D'   D/D'
D/D'   1      D/D'
D/D'   D/D'   D/D'
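The propagation D-cubes tabulated above can be derived mechanically by intersecting singular cubes of opposite output values, exactly as described for the OR gate. A sketch for a 2-input AND gate follows (the cube encoding and names are illustrative, not from the exam material).

```python
# Derive propagation D-cubes of a 2-input AND gate by intersecting singular
# cubes of opposite output values, using the D-algebra intersection rules.
def intersect(p, q):
    table = {('0', '0'): '0', ('0', 'x'): '0', ('x', '0'): '0',
             ('1', '1'): '1', ('1', 'x'): '1', ('x', '1'): '1',
             ('x', 'x'): 'x', ('1', '0'): 'D', ('0', '1'): "D'"}
    return tuple(table[pair] for pair in zip(p, q))

# Singular cover of AND (columns a, b, out): two 0-output cubes, one 1-output cube.
zero_cubes = [('0', 'x', '0'), ('x', '0', '0')]
one_cubes = [('1', '1', '1')]

# Intersecting the 1-cube with each 0-cube yields a D-carrying propagation cube.
pdcs = [intersect(one, zero) for one in one_cubes for zero in zero_cubes]
print(pdcs)   # → [('D', '1', 'D'), ('1', 'D', 'D')]
```

Intersecting in the reverse order would yield the D' versions of the same cubes, matching the D/D' rows in the table above.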

5. Explain the different Testable combinational logic circuit design with examples
Combinational Logic Testing :
For testing the combinational logic circuitry, a set of test patterns is generated which detects all possible fault
conditions. The first approach to testing an N-input circuit is to generate all the possible 2^N input signal
combinations by means of, say, an N-bit counter (controllability) and observe the outputs for checking
(observability). This is called exhaustive testing and is very effective, but is only practicable where N is
relatively small. Many of the patterns generated during exhaustive testing may not occur during the application
of the circuit. Thus, it is more productive to enumerate the possible faults and then generate a set of appropriate test vectors.
The basic idea is to select a path from the site of the possible fault, through a sequence of gates, leading to
an output of the logic circuitry under test. The figure below shows the Combinational Logic Testing block
schematic.
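The exhaustive approach above can be sketched in a few lines of Python. The 3-input circuit and the injected stuck-at fault are illustrative examples, not taken from the text:

```python
from itertools import product

# Sketch: exhaustive testing of a small 3-input circuit (assumed example),
# injecting a stuck-at-0 fault on the internal AND node.

def circuit(a, b, c, and_sa0=False):
    n = 0 if and_sa0 else (a & b)   # internal node, possibly stuck at 0
    return n | c

# Apply all 2^N input combinations (controllability) and compare outputs
# (observability); patterns where good and faulty outputs differ are tests.
tests = [v for v in product([0, 1], repeat=3)
         if circuit(*v) != circuit(*v, and_sa0=True)]
print(tests)  # [(1, 1, 0)]
```

Only one of the eight exhaustive patterns actually detects this fault, which illustrates why fault-directed test generation is more productive than exhaustive testing for larger N.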

Combinational Test Generation


• Test Generation (TG) Methods –
1. From truth table
2. Using Boolean equation
3. Using Boolean difference
4. From circuit structure
• Test Generation from Circuit Structure -
Algorithms:
1. D-Algorithm (Roth 1967)
2. PODEM (Goel 1981)
6. Explain different Scan Path techniques & storage cells for scan design
The scan design technique is a structured approach to design sequential circuits for testability. The storage
cells in registers are used as observation points, control points, or both. By using the scan design techniques,
the testing of a sequential circuit is reduced to the problem of testing a combinational circuit.
The controllability and observability can be enhanced by providing more accessible logic nodes with the use
of additional primary input lines and multiplexors. However, the use of additional I/O pins can be costly not
only for chip fabrication but also for packaging. A popular alternative is to use scan registers with both shift
and parallel load capabilities.
In general, a sequential circuit consists of a combinational circuit and some storage elements. In the scan-
based design, the storage elements are connected to form a long serial shift register, the so-called scan path,
by using multiplexors and a mode (test/normal) control signal, as shown in Fig. 1.

In the test mode, the scan-in signal is clocked into the scan path, and the output of the last stage latch is
scanned out. In the normal mode, the scan-in path is disabled and the circuit functions as a sequential circuit.
The testing sequence is as follows:

Step 1: Set the mode to test and let the latches accept data from the scan-in input.
Step 2: Verify the scan path by shifting in and out the test data.
Step 3: Scan in (shift in) the desired state vector into the shift register.
Step 4: Apply the test pattern to the primary input pins.
Step 5: Set the mode to normal and observe the primary outputs of the circuit after sufficient time for propagation.
Step 6: Assert the circuit clock for one machine cycle to capture the outputs of the combinational logic into the registers.
Step 7: Return to test mode; scan out the contents of the registers, and at the same time scan in the next pattern.
Step 8: Repeat steps 3-7 until all test patterns are applied.
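The steps above can be modeled in miniature. This toy sketch assumes a 3-stage scan chain and an arbitrary next-state function (both hypothetical, not from the text):

```python
# A toy model of test steps 3-7, assuming a 3-stage scan chain and an
# arbitrary combinational next-state function (both hypothetical).

def comb_logic(state):
    # illustrative combinational logic of the circuit under test
    return [state[0] ^ state[1], state[1] & state[2], ~state[2] & 1]

def scan_shift(chain, bits):
    """Serially shift 'bits' in; return the bits that fall out (scan-out)."""
    out = []
    for b in bits:
        out.append(chain.pop())   # last stage drives scan-out
        chain.insert(0, b)        # scan-in feeds the first stage
    return out

chain = [0, 0, 0]                 # scan path in a known reset state
scan_shift(chain, [1, 0, 1])      # step 3: scan in the desired state
chain[:] = comb_logic(chain)      # steps 5-6: normal mode, capture response
response = scan_shift(chain, [0, 0, 0])  # step 7: scan out (next pattern in)
print(response)  # [0, 0, 1] -- captured state, last stage first
```

Note how step 7 overlaps scanning out the captured response with scanning in the next pattern, which is what keeps the per-pattern overhead down to one chain-length shift.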

Figure 1: The general structure of scan-based design


The storage cells in scan design can be implemented using edge-triggered D flipflops, master-slave flip-
flops, or level-sensitive latches controlled by complementary clock signals to ensure race-free
operation. Figure 2 shows a scan-based design of an edge-triggered D flip-flop. In large high-speed circuits,
optimizing a single clock signal for skews, etc., both for normal operation and for shift operation, is
difficult. To overcome this difficulty, two separate clocks, one for normal operation and one for shift
operation, are used. Since the shift operation does not have to be performed at the target speed, its clock is
much less constrained.

An important approach among scan-based designs is the level sensitive scan design (LSSD), which
incorporates both the level sensitivity and the scan path approach using shift registers. The level sensitivity
is to ensure that the sequential circuit response is independent of the transient characteristics of the circuit,
such as the component and wire delays. Thus, LSSD removes hazards and races. Its ATPG is also simplified
since tests have to be generated only for the combinational part of the circuit.

Figure 2 : Scan-based design of an edge-triggered D flip-flop

The boundary scan test method is also used for testing printed circuit boards (PCBs) and multichip modules
(MCMs) carrying multiple chips. Shift registers are placed in each chip close to I/O pins in order to form a
chain around the board for testing. With successful implementation of the boundary scan method, a simpler
tester can be used for PCB testing.

On the negative side, scan design uses more complex latches, flip-flops, I/O pins, and interconnect wires
and, thus, requires more chip area. The testing time per test pattern is also increased due to shift time in
long registers.
7. Explain in detail about Ad Hoc design rules in sequential circuit testing
Ad Hoc Testable Design Techniques

One way to increase the testability is to make nodes more accessible at some cost by physically inserting
more access circuits to the original design. Listed below are some of the ad hoc testable design techniques.

Partition-and-Mux Technique :-

Since the sequence of many serial gates, functional blocks, or large circuits are difficult to test, such circuits
can be partitioned and multiplexors (muxes) can be inserted such that some of the primary inputs can be
fed to partitioned parts through multiplexers with accessible control signals. With this design technique,
the number of accessible nodes can be increased and the number of test patterns can be reduced. A case in
point would be the 32-bit counter. Dividing this counter into two 16-bit parts would reduce the testing time
in principle by a factor of 2^15. However, circuit partitioning and addition of multiplexers may increase the
chip area and circuit delay. This practice is not unique and is similar to the divide-and-conquer approach to
large, complex problems. Figure 1 illustrates this method.
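The claimed reduction factor can be checked directly, assuming the full counter is clocked through every state while the two 16-bit halves are tested one after the other:

```python
# Rough arithmetic behind the 32-bit counter example: clocking the full
# counter through every state versus testing two 16-bit halves separately.

full = 2 ** 32                    # cycles to exercise the whole counter
halves = 2 * 2 ** 16              # two 16-bit halves, one after the other
print(full // halves)             # 32768 = 2^15, the claimed reduction
```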

Figure 1 : Partition-and-mux method for large circuits


Initialize Sequential Circuit :-

When the sequential circuit is powered up, its initial state can be a random, unknown state. In this case, it
is not possible to start the test sequence correctly. The state of a sequential circuit can be brought to a known
state through initialization. In many designs, the initialization can be easily done by connecting
asynchronous preset or clear-input signals from primary or controllable inputs to flip-flops or latches.

Disable Internal Oscillators and Clocks :-

To avoid synchronization problems during testing, internal oscillators and clocks should be disabled. For
example, rather than connecting the circuit directly to the on-chip oscillator, the clock signal can be ORed
with a disabling signal followed by an insertion of a testing signal as shown in Fig. 2.

Figure 2 : Avoid synchronization problems-via disabling of the oscillator

Avoid Asynchronous Logic and Redundant Logic :-

The enhancement of testability requires serious tradeoffs. The speed of an asynchronous logic circuit can
be faster than that of the synchronous logic circuit counterpart. However, the design and test of an
asynchronous logic circuit are more difficult than for a synchronous logic circuit, and its state transition
times are difficult to predict. Also, the operation of an asynchronous logic circuit is sensitive to input test
patterns, often causing race problems and hazards of having momentary signal values opposite to the
expected values. Sometimes, designed-in logic redundancy is used to mask a static hazard condition for
reliability. However, the redundant node cannot be observed since the primary output value cannot be made
dependent on the value of the redundant node. Hence, certain faults on the redundant node cannot be tested
or detected. Figure 3 shows that the bottom NAND2 gate is redundant and the stuck-at fault on its output
line cannot be detected. If a fault is undetectable, the associated line or gate can be removed without
changing the logic function.

Figure 3 : (a) A redundant logic gate example. (b) Equivalent gate with redundancy removed
8. Explain about the memory test requirements for MBIST along with different Delay faults.
MBIST is a self-testing and repair mechanism which tests memories through an effective set of
algorithms to detect virtually all the faults that could be present inside a typical memory cell, whether
stuck-at faults (SAF), transition delay faults (TDF), coupling faults (CF), or neighborhood pattern sensitive
faults (NPSF).
Delay Faults
• Delays along every path from PI to PO or between internal latches must be less
than the operational system clock interval.

• We have already discussed a number of defects that can cause delay faults:
• GOS defects
• Resistive shorting defects between nodes and to the supply rails
• Parasitic transistor leakages, defective pn junctions and incorrect or shifted threshold
voltages
• Certain types of opens
• Process variations can also cause devices to switch at a speed lower than the specification.

• An SA0 or SA1 can be modeled as a delay fault in which the signal takes an
"infinite" amount of time to change to 1 or 0, respectively.
  o Passing stuck-at fault tests, however, is usually not sufficient for systems that operate at any appreciable speed.

• Running stuck-at fault tests at higher speed can uncover some delay faults.
Delay Tests

• Delay tests consist of vector-pairs.

• All input transitions occur at the same time.
• The longest-delay combinational path is referred to as the critical path, which determines the shortest clock period.
  o A delay fault means that the delay of one or more paths (not necessarily the critical path) exceeds the clock period.

• Test Definition:
• At time t1, the initializing vector of the two-pattern test, V1, is applied through the input latches or PIs and the circuit is allowed to stabilize.
• At time t2, the second test pattern, V2, is applied.
  o At time t3, a logic value measurement (a sample) is made at the output latches or POs.

• The delay test vectors V1 and V2 may sensitize one or more paths, pi.
Delay Tests
• Let:
• TC = (t3 - t2) represent the time interval between the application of vector V2 at the PIs and the sampling event at the POs
• The nominal delay of each of these paths be defined as pdi.
• The slack of each path be defined as sdi = TC - pdi.
  o This is the difference between the test interval and the propagation delay of each of the sensitized paths in the nominal circuit.
Delay Fault Test Generation
• Difficulties with delay fault test generation:

• Test generation requires a sensitized path that extends from a PI to a PO.
• Path selection heuristics must be used because the total number of paths is exponentially related to the number of inputs and gates in the circuit.
• The application of the test set must be performed at the rated speed of the device.
  o This requires test equipment that is capable of accurately timing two-vector test sequences.

• The detection of a defect that introduces an additional delay, adi, along a sensitized path is dependent on satisfying the condition:
  o adi > sdi (or pdi + adi > TC)
  o Therefore, the effectiveness of the delay fault test is dependent on both the delay defect size and the delay of the tested path.
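The detection condition can be captured in a small sketch; the numeric values are illustrative only:

```python
# Small sketch of the delay-fault detection condition; the numeric
# values below are illustrative only.

def detects(T_C, pd, ad):
    """True iff the defect delay ad exceeds the slack sd = T_C - pd."""
    sd = T_C - pd                  # slack of the sensitized path
    return ad > sd                 # equivalently, pd + ad > T_C

print(detects(T_C=7, pd=6, ad=2))  # True: 6 + 2 > 7, fault observed
print(detects(T_C=7, pd=3, ad=2))  # False: defect hidden by the slack
```

The second call shows why small delay defects on short paths escape detection: the path's slack absorbs the extra delay.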
Hazards
• A path sensitized by a delay test consists of on-path nodes and off-path nodes.
  o The nodes along the sensitized path are referred to as on-path nodes.

• Static sensitization defines the case when all off-path nodes settle to non-controlling values (0 for OR/NOR, 1 for AND/NAND) following application of V2.
  o This is a necessary condition to test a path for a delay fault.

• The gates along the sensitized path have exactly one on-path input and zero or more non-controlling off-path inputs.
  o Delay fault tests are classified according to the voltage behavior of the off-path nodes.
  o Such tests can be invalidated under certain conditions.

• Hazards can invalidate tests:

• Static hazard: describes a circuit condition where off-path nodes change momentarily when they are supposed to remain constant.
• Dynamic hazard: describes a circuit condition where off-path nodes make several transitions when they are supposed to make a single transition.
Hazards
• Static Hazards:

• Two-vector sequence is ABC = (111), (101).
  o Gate G1 introduces an additional delay of 1 unit.
  o Output E of gate G3 is driven to a logic 1, one time unit behind D -> 0.
  o Produces a glitch on F.
Hazards
• Dynamic Hazards:

• Two-vector sequence is AB = (01), (11).
  o Gate G2 has a delay value of 3 time units, due either to a defect or a different physical implementation of the NAND gate.
Hazards and Invalidation
• Static hazards can create dynamic hazards along tested paths and need to be considered during test generation.

• Note, unlike the previous example, the glitch occurs before the intended transition in this case, and can invalidate the test (e.g. the fault is not detected).
Delay Tests and Invalidation
• The critical path(s) of this circuit is 6 time units.
  o Let's set the clock period T = 7.

• Assume only one faulty path.
  o No delay fault is detected if the path delay along P3 is less than 7 units.
  o This test will not detect single delay faults along paths P1 or P2.

• Assume there can be multiple faulty paths.
  o Assume P2 and P3 are faulty and P2 extends the "static glitch" at the output beyond 7 units; then it masks P3's delay fault.

• This test is called a non-robust test for delay fault P3.
Delay Fault Path Classification
• Each of the paths in a circuit can be classified:
• Hazard-free robust testable
• Robust testable
• Non-robust testable
• Non-robust testable but not redundant
• Redundant

• Hazard-free robust test
  o Off-path inputs are stable and hazard-free throughout the test interval, TC -- the most desirable test since invalidation is not possible.
Robust Test
• Hazard-free robust tests are desirable, but it is not possible in many cases to generate them.
  o Transitions that occur at fan-out points often reconverge as off-path inputs along the tested path.

• However, robust tests are still possible even when static hazards are present on the off-path inputs.
  o Static hazards are necessary but not sufficient to make a delay test non-robust.

• A delay test is a robust test if the on-path nodes control the first occurrence of a transition through all gates along the tested path.
  o This ensures that a delay test is not invalidated, or a path delay fault masked, by the delay characteristics of gates not on the tested path.

• A robust path-delay test is guaranteed to produce an incorrect value at the output if the delay of the path exceeds the clock period, irrespective of the delay distribution in the circuit.
Robust Test
• Robust delay test:

• This test is robust since F will not change state until the transition on E has occurred.
  o In other words, any assignable delay to D can never mask a delay fault that may occur on the tested path.

• This is true because the on-path node E holds the dominant input value on gate G4, and therefore determines the earliest transition possible on F.
  o Therefore, D is allowed to delay the transition on F but not speed it up.
Robust Test
• It is possible that:
• D can cause a transition to occur on F after the transition on the on-path node E has occurred.
• D may further delay the transition of F since it too can hold the dominant input value on gate G4.

• The former condition is sufficient to cause a glitch on F (as shown).

• The latter condition implies that a robust test does not require the sensitized path to dominate the timing, or, to be the last transition to occur on all gates along the sensitized path.

• A non-input node will make the transition either:
• From the dominant input state of the gate to the non-dominant input state.
• From the non-dominant input state of the gate to the dominant input state.
Robust Test
• For the first case, the off-path inputs of the gate must behave in either one of two ways.
• If the off-path input node changes state, then it must make a transition from the dominant to the non-dominant input state of the gate.
• If it does not change state, then it must remain in steady-state at the non-dominant value during the entire test interval.
• When all off-path inputs honor these constraints, the outputs of the gates along the test path will not make the transition until the last of all transitioning input lines has toggled.
Robust Test
• For the second case, the off-path inputs must remain at their non-dominant states during the entire test interval.
  o No off-path transition is allowed.

• In either case, hazards will not be visible at the output until after the desired transition has propagated along the tested path.

• However, for many circuits, even this weaker set of constraints permits only a small percentage of path delay faults to be robustly tested.
Non-Robust Test
• A non-robust test allows the output to change before the on-path transition propagates along the tested path.
  o A non-robust test cannot guarantee the detection of a delay fault along a tested path in the presence of other faults.
  o Although the delay fault introduced by the inverter is detected (as shown), a delay fault along A-C may cause the output to remain at 0, or it may push the pulse beyond T = 3 -- which invalidates the test!
Non-Robust Test
• A non-robust path delay test guarantees the detection of a path-delay fault only when no other path delay fault is present.
  o Single fault assumption (similar to the stuck-at fault model).
• The fault is called a singly-testable path-delay fault in cases where a test exists.
Path Delay Fault Test Generation
• To generate a test for the falling transition on path P3:

• This test is robust, i.e., it cannot be invalidated irrespective of P2's delay.

• Non-robust tests only require static sensitization (arbitrary values for V1).
Path Delay Fault Test Generation
• There are no alternatives to generate the previous test, so we are stuck with a non-robust test for the rising transition of P2.
  o Note that in circuits with reconvergent fanout, backtracking is frequently necessary.

• Single input change (SIC): a simpler method of generating non-robust tests.
  o Use a combinational ATPG algorithm to statically sensitize the entire path for V2.
  o V1 is obtained by changing one bit in V2 that corresponds to the origin of the path.

• Validatable non-robust tests
  o It is desirable to find as many robust tests as possible.
  o The presence of robust tests for some paths improves the reliability of non-robust tests for other paths.
  o For example, there are 6 robustly testable paths in the previous circuit.
  o With these tests, the rising transition test of P2 is as good as a robust test.
Non-Robust and Redundancy
• Some robust untestable paths are not even non-robust testable paths.

• This path has no delay test.

• A path for which both rising and falling PDFs are singly (i.e. non-robustly) testable is called a testable path.
• A path that has one singly testable and one singly untestable PDF is called a partially testable path and may be associated with a redundant fault.
  o The fault q SA1 in our circuit is redundant -- the AND gate can be removed.
• When no non-robust test exists for both PDFs, it is a singly-untestable path.
  o This path can be eliminated by circuit transformations.
False Paths
• The delay along false paths cannot affect the output transition time.
  o Unfortunately, singly-untestable PDFs are not always false paths.
  o It is possible for multiple singly-untestable PDFs to be co-sensitized and for them to affect the circuit timing, if all have excess delays.

• These paths belong to the classes of multiply-testable PDFs and functionally sensitizable PDFs.

• This is why the delays of paths whose PDFs are untestable are still taken into account while determining the clock period of the circuit.
  o A point in favor of static timing analysis.
9. Write down the important features of ATPG testing
ATPG (acronym for both Automatic Test Pattern Generation and Automatic Test Pattern
Generator) is an electronic design automation method or technology used to find an input (or test)
sequence that, when applied to a digital circuit, enables automatic test equipment to distinguish
between the correct circuit behavior and the faulty circuit behavior caused by defects. The generated
patterns are used to test semiconductor devices after manufacture, or to assist with determining the
cause of failure (failure analysis[1]). The effectiveness of ATPG is measured by the number of
modeled defects, or fault models, detectable and by the number of generated patterns. These metrics
generally indicate test quality (higher with more fault detections) and test application time (higher
with more patterns). ATPG efficiency is another important consideration that is influenced by the
fault model under consideration, the type of circuit under test (full scan, synchronous sequential, or
asynchronous sequential), the level of abstraction used to represent the circuit under test (gate,
register-transfer, switch), and the required test quality.

10. Discuss the steps involved in design for self-test (BIST) at board level
Built-In Self Test (BIST) Techniques

BIST is a design-for-testability technique that places the testing functions physically with the circuit under
test (CUT), as illustrated in Figure 40.1 [1]. The basic BIST architecture requires the addition of three
hardware blocks to a digital circuit: a test pattern generator, a response analyzer, and a test controller. The
test pattern generator generates the test patterns for the CUT. Examples of pattern generators are a ROM
with stored patterns, a counter, and a linear feedback shift register (LFSR). A typical response analyzer is a
comparator with stored responses or an LFSR used as a signature analyzer. It compacts and analyzes the
test responses to determine correctness of the CUT. A test control block is necessary to activate the test and
analyze the responses. However, in general, several test-related functions can be executed through a test
controller circuit.

As shown in Figure 40.1, the wires from primary inputs (PIs) to MUX and wires from circuit output to
primary outputs (POs) cannot be tested by BIST. In normal operation, the CUT receives its inputs from
other modules and performs the function for which it was designed. During test mode, a test pattern
generator circuit applies a sequence of test patterns to the CUT, and the test responses are evaluated by a
output response compactor. In the most common type of BIST, test responses are compacted in output
response compactor to form (fault) signatures. The response signatures are compared with reference golden
signatures generated or stored onchip, and the error signal indicates whether chip is good or faulty. Four
primary parameters must be considered in developing a BIST methodology for embedded systems; these
correspond with the design parameters for on-line testing techniques discussed in earlier chapter [2].

• Fault coverage: This is the fraction of faults of interest that can be exposed by the test patterns produced
by the pattern generator and detected by the output response monitor. In the presence of input bit stream errors there
is a chance that the computed signature matches the golden signature, and the circuit is reported as fault
free. This undesirable property is called masking or aliasing.

• Test set size: This is the number of test patterns produced by the test generator, and is closely linked to
fault coverage: generally, large test sets imply high fault coverage.

• Hardware overhead: The extra hardware required for BIST is considered to be overhead. In most
embedded systems, high hardware overhead is not acceptable.

• Performance overhead: This refers to the impact of BIST hardware on normal circuit performance, such
as its worst-case (critical) path delays. Overhead of this type is sometimes more important than hardware
overhead.

In built-in self test (BIST) design, parts of the circuit are used to test the circuit itself. Online BIST is used
to perform the test under normal operation, whereas off-line BIST is used to perform the test off-line. The
essential circuit modules required for BIST include:

* Pseudo random pattern generator (PRPG)


* Output response analyzer (ORA)

The roles of these two modules are illustrated in Fig. 1. The implementation of both PRPG and ORA can
be done with Linear Feedback Shift Registers (LFSRs).

Pseudo Random Pattern Generator :-


To test the circuit, test patterns first have to be generated either by using a pseudo random pattern generator,
a weighted test generator, an adaptive test generator, or other means. A pseudo random test generator circuit
can use an LFSR, as shown in Fig. 2.

Figure 1 : A procedure for BIST

Figure 2 : A pseudo-random sequence generator using LFSR

Linear Feedback Shift Register as an ORA :-

To reduce the chip area penalty, data compression schemes are used to compare the compacted test
responses instead of the entire raw test data. One of the popular data compression schemes is the signature
analysis, which is based on the concept of cyclic redundancy checking. It uses polynomial division, which
divides the polynomial representation of the test output data by a characteristic polynomial and then finds
the remainder as the signature. The signature is then compared with the expected signature to determine
whether the device under test is faulty. It is known that compression can cause some loss of fault coverage.
It is possible that the output of a faulty circuit can match the output of the fault-free circuit; thus, the fault
can go undetected in the signature analysis. Such a phenomenon is called aliasing.

In its simplest form, the signature generator consists of a single-input linear feedback shift register (LFSR),
as shown in Fig. 3 in which all the latches are edge-triggered. In this case, the signature is the content of
this register after the last input bit has been sampled. The input sequence {an} is represented by polynomial
G(x) and the output sequence by Q(x). It can be shown that G(x) = Q(x)P(x) + R(x), where P(x) is the
characteristic polynomial of the LFSR and R(x) is the remainder, the degree of which is lower than that of P(x).
For the simple case in Fig. 3 the characteristic polynomial is

For the 8-bit input sequence {1 1 1 1 0 1 0 1}, the corresponding input polynomial is

and the remainder term becomes R(x) = x^4 + x^2, which corresponds to the register contents of {0 0 1 0 1}.
Figure 3 : Polynomial division using LFSR for signature analysis
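Signature computation by polynomial division can be modeled in a short sketch. The characteristic polynomial P(x) = x^5 + x^2 + 1 below is an assumed example, since the polynomial of Fig. 3 is not reproduced here:

```python
# Sketch of serial signature analysis as polynomial division over GF(2).
# The characteristic polynomial P(x) = x^5 + x^2 + 1 is an assumed
# example; the actual polynomial of Fig. 3 is not reproduced here.

def signature(bits, poly, n):
    """Shift the input stream through an n-bit LFSR dividing by P(x)."""
    reg = 0
    for b in bits:
        msb = (reg >> (n - 1)) & 1        # bit about to leave the register
        reg = ((reg << 1) | b) & ((1 << n) - 1)
        if msb:
            reg ^= poly                   # subtract (XOR) the divisor taps
    return reg                            # remainder R(x) = the signature

# feedback taps x^2 + 1 (low-order part of P), 5-stage register
sig = signature([1, 1, 1, 1, 0, 1, 0, 1], poly=0b00101, n=5)
print(format(sig, '05b'))  # 01110
```

A faulty CUT producing a different output stream would, with high probability, leave a different remainder in the register; a matching remainder despite errors is exactly the aliasing phenomenon described above.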

11. Explain how test pattern is generated in BIST.


BIST Test Pattern Generation Techniques
Stored patterns
An automatic test pattern generation (ATPG) and fault simulation technique is used to generate the
test patterns. A good test pattern set is stored in a ROM on the chip. When BIST is activated, test
patterns are applied to the CUT and the responses are compared with the corresponding stored
patterns. Although stored-pattern BIST can provide excellent fault coverage, it has limited
applicability due to its high area overhead.
Exhaustive patterns
Exhaustive pattern BIST eliminates the test generation process and has very high fault coverage. To
test an n-input block of combinational logic, it applies all possible 2^n input patterns to the block.
Even with high clock speeds, the time required to apply the patterns may make exhaustive pattern
BIST impractical for a circuit with n > 20.

Pseudo-exhaustive patterns
In pseudo-exhaustive pattern generation, the circuit is partitioned into several smaller subcircuits
based on the output cones of influence, possibly overlapping blocks with fewer than n inputs. Then
all possible test patterns are exhaustively applied to each sub-circuit. The main goal of pseudo-
exhaustive test is to obtain the same fault coverage as the exhaustive testing and, at the same time,
minimize the testing time. Since close to 100% fault coverage is guaranteed, there is no need for
fault simulation for exhaustive testing and pseudo-exhaustive testing. However, such a method
requires extra design effort to partition the circuits into pseudo-exhaustive testable sub-circuits.
Moreover, the delivery of test patterns and test responses is also a major consideration. The added
hardware may also increase the overhead and decrease the performance.
Circuit partitioning for pseudo-exhaustive pattern generation can be done by cone segmentation as
shown in Figure 40.4. Here, a cone is defined as the fan-ins of an output pin. If the size of the largest
cone is K, the patterns must have the property to guarantee that the patterns applied to any K inputs
contain all possible combinations. In Figure 40.4, the total circuit is divided into two cones
based on the cones of influence. For cone 1 the PO h is influenced by X1, X2, X3, X4 and X5, while
PO f is influenced by inputs X4, X5, X6, X7 and X8. Therefore the total number of test patterns needed for
exhaustive testing of cone 1 and cone 2 is (2^5 + 2^5) = 64, but the original circuit with 8 inputs
requires 2^8 = 256 test patterns for an exhaustive test.
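The pattern-count arithmetic for this cone-segmentation example:

```python
# Pattern-count arithmetic for the cone-segmentation example above.

cone_inputs = [5, 5]                       # input counts of cones h and f
pseudo = sum(2 ** k for k in cone_inputs)  # pseudo-exhaustive: 2^5 + 2^5
exhaustive = 2 ** 8                        # exhaustive test of all 8 inputs
print(pseudo, exhaustive)                  # 64 256
```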
Pseudo-Random Pattern Generation
A string of 0’s and 1’s is called a pseudo-random binary sequence when the bits appear to be random
in the local sense, but they are in someway repeatable. The linear feedback shift register (LFSR)
pattern generator is most commonly used for pseudo-random pattern generation. In general, this
requires more patterns than deterministic ATPG, but less than the exhaustive test. In contrast with
other methods, pseudo-random pattern BIST may require a long test time and necessitate evaluation
of fault coverage by fault simulation. This pattern type, however, has the potential for lower
hardware and performance overheads and less design effort than the preceding methods. In
pseudorandom test patterns, each bit has an approximately equal probability of being a 0 or a 1. The
number of patterns applied is typically of the order of 10^3 to 10^7 and is related to the circuit's
testability and the fault coverage required.
Linear feedback shift register reseeding [5] is an example of a BIST technique that is based on
controlling the LFSR state. LFSR reseeding may be static, that is LFSR stops generating patterns
while loading seeds, or dynamic, that is, test generation and seed loading can proceed
simultaneously. The length of the seed can be either equal to the size of the LFSR (full reseeding)
or less than the LFSR (partial reseeding). In [5], a dynamic reseeding technique that allows partial
reseeding is proposed to encode test vectors. A set of linear equations is solved to obtain the seeds,
and test vectors are ordered to facilitate the solution of this set of linear equations.

Figure 40.5 shows a standard, external exclusive-OR linear feedback shift register. There are n flip-flops
(Xn-1, ..., X0) and this is called an n-stage LFSR. It can be a near-exhaustive test pattern
generator as it cycles through 2^n - 1 states, excluding the all-0 state. This is known as a maximal length
LFSR. Figure 40.6 shows the implementation of an n-stage LFSR with an actual digital circuit. [1]
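A minimal sketch of a maximal-length LFSR; the tap choice corresponds to the assumed primitive polynomial x^4 + x^3 + 1 (a 4-stage example, not the specific circuit of Figures 40.5/40.6):

```python
# Minimal sketch of a maximal-length LFSR; the taps correspond to the
# (assumed) primitive polynomial x^4 + x^3 + 1, i.e. a 4-stage LFSR.

def lfsr_states(seed=0b0001, n=4):
    """Collect the distinct states visited before the sequence repeats."""
    state, seen = seed, []
    while state not in seen:
        seen.append(state)
        fb = ((state >> 3) ^ (state >> 2)) & 1   # XOR of stages 4 and 3
        state = ((state << 1) | fb) & ((1 << n) - 1)
    return seen

print(len(lfsr_states()))  # 15 = 2^4 - 1 states (all except the all-0 state)
```

Note that the all-0 state is a fixed point of the feedback function, which is why it must be excluded and why the maximal period is 2^n - 1 rather than 2^n.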
