This document contains information that is proprietary to Mentor Graphics Corporation. The original recipient of this
document may duplicate this document in whole or in part for internal business purposes only, provided that this entire
notice appears in all copies. In duplicating any part of this document, the recipient agrees to make every reasonable
effort to prevent the unauthorized use and distribution of the proprietary information.
The terms and conditions governing the sale and licensing of Mentor Graphics products are set forth in
written agreements between Mentor Graphics and its customers. No representation or other affirmation
of fact contained in this publication shall be deemed to be a warranty or give rise to any liability of Mentor
Graphics whatsoever.
MENTOR GRAPHICS MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE.
MENTOR GRAPHICS SHALL NOT BE LIABLE FOR ANY INCIDENTAL, INDIRECT, SPECIAL, OR
CONSEQUENTIAL DAMAGES WHATSOEVER (INCLUDING BUT NOT LIMITED TO LOST PROFITS)
ARISING OUT OF OR RELATED TO THIS PUBLICATION OR THE INFORMATION CONTAINED IN IT,
EVEN IF MENTOR GRAPHICS CORPORATION HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
U.S. Government Restricted Rights. The SOFTWARE and documentation have been developed entirely
at private expense and are commercial computer software provided with restricted rights. Use,
duplication or disclosure by the U.S. Government or a U.S. Government subcontractor is subject to the
restrictions set forth in the license agreement provided with the software pursuant to DFARS 227.7202-
3(a) or as set forth in subparagraph (c)(1) and (2) of the Commercial Computer Software - Restricted
Rights clause at FAR 52.227-19, as applicable.
Contractor/manufacturer is:
Mentor Graphics Corporation
8005 S.W. Boeckman Road, Wilsonville, Oregon 97070-7777.
Chapter 1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
What is Design-for-Test?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
DFT Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Top-Down Design Flow with DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
DFT Design Tasks and Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
User Interface Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Command Line Window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Control Panel Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Getting Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Hierarchy Browser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
Running Batch Mode Using Dofiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
Generating a Log File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-21
Running UNIX Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-21
Conserving Disk Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-21
Interrupting the Session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
Exiting the Session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
DFTAdvisor User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-23
FastScan User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-24
FlexTest User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
Chapter 2
Understanding Scan and ATPG Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
Understanding Scan Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Internal Scan Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Scan Design Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Understanding Full Scan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Understanding Partial Scan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Choosing Between Full or Partial Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Understanding Partition Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Understanding Test Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Chapter 3
Understanding Common Tool Terminology and Concepts . . . . . . . . . . . . . . . . . . . 3-1
Scan Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Scan Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Scan Chains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Scan Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Scan Clocks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Scan Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Mux-DFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Clocked-Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
LSSD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Test Procedure Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Model Flattening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Understanding Design Object Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
The Flattening Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
Simulation Primitives of the Flattened Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
Learning Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Equivalence Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Logic Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
Implied Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
Forbidden Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Dominance Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
ATPG Design Rules Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
General Rules Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
Procedure Rules Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Bus Mutual Exclusivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Scan Chain Tracing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Shadow Latch Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Data Rules Checking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Transparent Latch Identification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Clock Rules Checking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
RAM Rules Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
Bus Keeper Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
Extra Rules Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23
Scannability Rules Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23
Chapter 4
Understanding Testability Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Synchronous Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Synchronous Design Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Asynchronous Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Scannability Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Scannability Checking of Latches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Support for Special Testability Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Feedback Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Structural Combinational Loops and Loop-Cutting Methods . . . . . . . . . . . . . . . . . 4-4
Structural Sequential Loops and Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
Redundant Logic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
Asynchronous Sets and Resets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
Gated Clocks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Tri-State™ Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Non-Scan Cell Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Clock Dividers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Pulse Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
JTAG-Based Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Testing RAM and ROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Incomplete Designs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-29
Chapter 5
Inserting Internal Scan
and Test Circuitry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
Understanding DFTAdvisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
The DFTAdvisor Process Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
DFTAdvisor Inputs and Outputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Test Structures Supported by DFTAdvisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Invoking DFTAdvisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Preparing for Test Structure Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Selecting the Scan Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Defining Scan Cell and Scan Output Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Enabling Test Logic Insertion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Specifying Clock Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Specifying Existing Scan Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Handling Existing Boundary Scan Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Changing the System Mode (Running Rules Checking) . . . . . . . . . . . . . . . . . . . . . 5-17
Identifying Test Structures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Selecting the Type of Test Structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Setting Up for Full Scan Identification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Setting Up for Clocked Sequential Identification. . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Setting Up for Sequential Transparent Identification . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Setting Up for Partition Scan Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Setting Up for Sequential (ATPG, Automatic, SCOAP, and Structure) Identification 5-21
Setting Up for Test Point Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Chapter 6
Generating Test Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Understanding FastScan and FlexTest. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
FastScan and FlexTest Basic Tool Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
FastScan and FlexTest Inputs and Outputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
Understanding the FastScan ATPG Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
Understanding FlexTest’s ATPG Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Performing Basic Operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Invoking the Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Setting the System Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Setting Up Design and Tool Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-18
Setting Up the Circuit Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19
Setting Up Tool Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25
Setting the Circuit Timing (FlexTest Only) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30
Defining the Scan Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-33
Checking Rules and Debugging Rules Violations. . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
Running Good/Fault Simulation on Existing Patterns. . . . . . . . . . . . . . . . . . . . . . . . . 6-37
Fault Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
Good Machine Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-40
Running Random Pattern Simulation (FastScan) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Changing to the Fault System Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Setting the Pattern Source to Random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Creating the Faults List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-42
Running the Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Setting Up the Fault Information for ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Changing to the ATPG System Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Setting the Fault Type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-43
Creating the Faults List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-44
Adding Faults to an Existing List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-44
Loading Faults from an External List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-45
Writing Faults to an External File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-45
Chapter 7
Test Pattern Formatting and Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Test Pattern Timing Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Timing Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
General Timing Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Generating a Procedure File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Defining and Modifying Timeplates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
Saving Timing Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Features of the Formatter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Pattern Formatting Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Saving Patterns in Basic Test Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-12
Saving in ASIC Vendor Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-20
Chapter 8
Running Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
Understanding FastScan Diagnostic Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
Understanding Stuck Faults and Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Creating the Failure File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Failure File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Performing a Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
Viewing Fault Candidates in Calibre DESIGNrev . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Appendix A
Getting Started with ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Preparing the Tutorial Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Full Scan ATPG Tool Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-2
Running DFTAdvisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-4
Running FastScan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
Accessing Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-9
Tool Guide (DFTAdvisor, FastScan, and FlexTest only). . . . . . . . . . . . . . . . . . . . . A-9
Command Usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-9
Query Help (DFTAdvisor, FastScan, and FlexTest only) . . . . . . . . . . . . . . . . . . . . A-10
Popup Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-10
Informational Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-10
Online Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-11
SupportNet help (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-12
Appendix B
Clock Gaters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
PI Scan Clock Enables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
Latched (Registered) Scan Clock Enable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-4
Debugging Clock Gate Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-5
Debugging a C1 Violation Involving a Gated Clock . . . . . . . . . . . . . . . . . . . . . . . . B-7
Debugging a T3 Violation Involving a Clock Gate . . . . . . . . . . . . . . . . . . . . . . . . . B-8
OR Based Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-10
Appendix C
Figure 6-41. Basic Scan Pattern Creation Flow with MacroTest . . . . . . . . . . . . . . . . 6-112
Figure 6-42. Mismatch Diagnosis Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-132
Figure 6-43. ModelSim Waveform Viewer Display . . . . . . . . . . . . . . . . . . . . . . . . . . 6-138
Figure 6-44. DFTInsight Display of the ix1286 Mismatch Source . . . . . . . . . . . . . . . 6-138
Figure 6-45. Clock-Skew Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-139
Figure 7-1. Defining Basic Timing Process Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Figure 8-1. Diagnostics Process Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
Figure 8-2. FastScan-Calibre Diagnostics Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Figure 8-3. Loading the GDS Layout Database. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9
Figure 8-4. Specifying the Calibre Application to Run . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Figure 8-5. Invoking Calibre RVE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Figure 8-6. Accessing the FastScan Diagnostics Report . . . . . . . . . . . . . . . . . . . . . . . 8-11
Figure 8-7. Layout View of the Net Connected to a Candidate Fault Site . . . . . . . . . 8-12
Figure A-1. Tool Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-2
Figure A-2. Scan and ATPG Tool and Command Flow . . . . . . . . . . . . . . . . . . . . . . . A-3
Figure A-3. DFTAdvisor dofile dfta_dofile.do . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-4
Figure A-4. FastScan dofile fs_dofile.do. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-7
Figure B-1. PI Scan Clock Enable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1
Figure B-2. PI Scan Clock Enable for LE and/or TE Clock . . . . . . . . . . . . . . . . . . . . B-2
Figure B-3. Scan Clock Enable with Latch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Figure B-4. Enable Latch with D Changes on LE and TE of Clock . . . . . . . . . . . . . . B-3
Figure B-5. Wrong Off Value: Constraint Enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . B-4
Figure B-6. Debugging C1 Using Design View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-7
Figure B-7. Debugging C1 Using Primitive View. . . . . . . . . . . . . . . . . . . . . . . . . . . . B-8
Figure B-8. Debugging T3 Using Design View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-9
Figure B-9. Debugging T3 by Expanding to Primitive View . . . . . . . . . . . . . . . . . . . B-10
The Scan and ATPG Process Guide gives an overview of ASIC/IC Design-for-Test (DFT)
strategies and shows the use of Mentor Graphics ASIC/IC DFT products as part of typical DFT
design processes. This document discusses the following DFT products: DFTAdvisor,
FastScan, and FlexTest.
• Chapter 1 discusses the basic concepts behind DFT, establishes the framework in which
Mentor Graphics ASIC DFT products are used, and briefly describes each of these
products.
• Chapter 2 gives conceptual information necessary for determining what test strategy
would work best for you.
• Chapter 3 provides tool methodology information, including common terminology and
concepts used by the tools.
• Chapter 4 outlines characteristics of testable designs and explains how to handle special
design situations that can affect testability.
• Chapters 5 through 8 discuss the common tasks involved at each step within a typical
process flow using Mentor Graphics DFT tools.
• Appendix A provides a brief introduction and short lab exercises to help you quickly
become familiar with DFTAdvisor and FastScan.
• Appendix B introduces the topic of gated clocks, and provides some guidance on how to
avoid DRC errors related to them.
• Appendix C describes how to run FastScan as a batch process.
Online Documentation
This manual is part of a documentation bookcase provided in Adobe Portable Document Format
(PDF). This PDF-based documentation provides both online manuals and online help for most
Mentor Graphics applications. Each Mentor Graphics product typically has several PDF files
for documentation; these files are linked together with blue hypertext links. Within this manual,
these blue links will take you to either another section within the manual, or to a related
publication for reference.
Also, each group of related PDF files has a bookcase interface for ease of navigation, and a full-
text search index to facilitate searches across the library of online manuals associated with your
product flow (see “Searching This Manual” on page ATM-5). Manual excerpts may also appear
as “PDF Online Help” (page ATM-5) for many applications.
This application uses Adobe Acrobat Reader as its online help and documentation viewer.
Online help requires that you install Acrobat Reader and the Mentor Graphics-specific search
index plug-in from the Mentor Graphics CD. For more information on PDF-based
documentation, and details on performing find and search operations, refer to Using Mentor
Graphics Documentation with Acrobat Reader.
Related Publications
This section gives references to both Mentor Graphics product documentation and industry DFT
documentation.
The Mentor Graphics DFT documentation set includes the following manuals:
• ATPG Tools Reference Manual — provides reference information for FastScan (full-scan ATPG), FlexTest (non- to partial-scan ATPG), TestKompress (full-scan EDT), and DFTInsight (schematic viewer) products.
• Boundary Scan Process Guide — provides process, concept, and procedure information for the boundary scan product, BSDArchitect. It also includes information on how to integrate boundary scan with the other DFT technologies.
• BSDArchitect Reference Manual — provides reference information for BSDArchitect, the boundary scan product.
• Built-in Self-Test Process Guide — provides process, concept, and procedure information for using MBISTArchitect, LBISTArchitect, and other Mentor Graphics tools in the context of your BIST design process.
• Design-for-Test Common Resources Manual — provides information common to many of the DFT tools: design rule checks (DRC), DFTInsight (schematic viewer), library creation, VHDL support, Verilog support, core test description language, and test procedure file format.
• Design-for-Test Release Notes — provides release information that reflects changes to the DFT products for the software version release.
• DFTAdvisor Reference Manual — provides reference information for DFTAdvisor (internal scan insertion) and DFTInsight (schematic viewer) products.
• EDT Process Guide — provides process, concept, and procedure information for using TestKompress in the context of your EDT (Embedded Deterministic Test) design process.
• LBISTArchitect Reference Manual — provides reference information for LBISTArchitect, the logic built-in self-test product.
• Managing Mentor Graphics DFT Software — provides information about configuration and system management issues unique to DFT applications.
• MBISTArchitect Reference Manual — provides reference information for MBISTArchitect, the memory BIST product, and the memory BIST-in-place capabilities of MBISTArchitect.
• Scan and ATPG Process Guide — provides process, concept, and procedure information for using DFTAdvisor, FastScan, and FlexTest in the context of your ATPG design process.
• Using Mentor Graphics Documentation with Acrobat Reader — describes how to set up online manuals and help, open documents, and implement full-text searches. Also includes guidance for System Administrators on the setup and use of Acrobat Reader with the search index plug-in, and management of the PDF-based documentation system when coresident with earlier versions of Mentor Graphics products.
• Abramovici, Miron, Melvin A. Breuer, and Arthur D. Friedman. Digital Systems Testing
and Testable Design. New York: Computer Science Press, 1990.
• Agrawal, V. D. and S. C. Seth. Test Generation for VLSI Chips. Computer Society Press, 1988.
• Fujiwara, Hideo. Logic Testing and Design for Testability. Cambridge: The MIT Press,
1985.
• Huber, John P. and Mark W. Rosneck. Successful ASIC Design the First Time Through.
New York: Van Nostrand Reinhold, 1991.
• IEEE Std 1149.1-1990, IEEE Standard Test Access Port and Boundary-Scan
Architecture. New York: IEEE, 1990.
• McCluskey, Edward J. Logic Design Principles with Emphasis on Testable Semicustom
Circuits. Englewood Cliffs: Prentice-Hall, 1986.
• Rajsuman, Rochit, Digital Hardware Testing: Transistor-Level Fault Modeling and
Testing. Boston: Artech House, 1992.
IDDQ Documentation
• Aitken, R. C. “Fault Location with current monitoring,” Proceedings ITC-1991, pp. 623-
632.
• Chen, Chun-Hung and J. Abraham, “High Quality tests for switch level circuits using
current and logic test generation algorithms,” Proceedings ITC-1991, pp. 615-622.
• Ferguson, F. Joel and Tracy Larrabee, “Test Pattern Generation for Realistic Bridge
Faults in CMOS ICs,” Proceedings ITC 1991, pp. 492-499.
• Mao, W., R.K. Gulati, D.K. Goel, and M. D. Ciletti, “QUIETEST: A quiescent current
testing methodology for detecting leakage faults,” Proceedings ICCAD-90, pp. 280-283.
• Marston, Gregory “Automating IDDQ Test Generation,” Private Communication,
November 1993.
• Maxwell, Peter and Robert Aitken, “IDDQ testing as a component of a test suite: The need for several fault coverage metrics,” Journal of Electronic Testing: Theory and Applications, 3, pp. 305-316 (1992).
• Soden, J. M., R. K. Treece, M.R. Taylor, and C.F. Hawkins, “CMOS IC Stuck-open
Faults Electrical Effects and Design Considerations,” Proceedings International Test
Conference 1989, pp. 423-430.
If you desire more in-depth information, each PDF online help file also contains a hypertext link
to its corresponding online manual. This link is identified by an open book icon that appears in
the upper right corner of the PDF. Consequently, you can review the PDF online help file, move
over to the main manual, browse that document, and then move to other documents using
hypertext links and full-text searches.
Linux
At present, Adobe does not support multiple-document, full-text search in Acrobat on the
Linux platform.
The MGC > Search > Query or the Edit > Search > Query menu option searches
across multiple documents and bookcases for a given text phrase. You should use this
type of search if you are not sure which document contains the information you need.
Use the MGC > Search > Query menu option first because it automatically loads all
Mentor Graphics search indexes included in your documentation tree prior to
performing the search. Once these indexes are loaded, you can use either menu option.
For more information on performing find and search operations, refer to
Using Mentor Graphics Documentation with Acrobat Reader.
The following acronyms are used throughout this guide:
DFT — design-for-test
PI — primary input
PO — primary output
What is Design-for-Test?
Testability is a design attribute that measures how easy it is to create a program to
comprehensively test a manufactured design’s quality. Traditionally, design and test processes
were kept separate, with test considered only at the end of the design cycle. But in
contemporary design flows, test merges with design much earlier in the process, creating what
is called a design-for-test (DFT) process flow. Testable circuitry is both controllable and
observable. In a testable design, setting specific values on the primary inputs results in values on the primary outputs that indicate whether or not the internal circuitry works properly. To
ensure maximum design testability, designers must employ special DFT techniques at specific
stages in the development process.
DFT Strategies
At the highest level, there are two main approaches to DFT: ad hoc and structured. The
following subsections discuss these DFT strategies.
Ad Hoc DFT
Ad hoc DFT implies using good design practices to enhance a design's testability, without
making major changes to the design style. Some ad hoc techniques include:
Structured DFT
Structured DFT provides a more systematic and automatic approach to enhancing design
testability. Structured DFT’s goal is to increase the controllability and observability of a circuit.
Various methods exist for accomplishing this. The most common is the scan design technique,
which modifies the internal sequential circuitry of the design. You can also use the Built-in
Self-Test (BIST) method, which inserts a device’s testing function within the device itself.
Another method is boundary scan, which increases board testability by adding circuitry to a
chip. Chapter 2, “Understanding Scan and ATPG Basics,” describes these methods in detail.
This document discusses those steps shown in grey; it also mentions certain aspects of other
design steps, where applicable. This flow is just a general description of a top-down design
process flow using a structured DFT strategy. The next section, “DFT Design Tasks and
Products,” gives a more detailed breakdown of the individual DFT tasks involved.
Figure 1-1 depicts the top-down design flow with DFT: insert and verify built-in self-test circuitry (MBISTArchitect, LBISTArchitect), insert and verify boundary scan circuitry (BSDArchitect), generate and verify test patterns (FastScan, FlexTest, ASIC Vector Interfaces, ModelSim, QuickPath), and hand off to the vendor.
As Figure 1-1 shows, the first task in any design flow is creating the initial RTL-level design,
through whatever means you choose. In the Mentor Graphics environment, you may choose to
create a high-level VHDL or Verilog description using ModelSim, or a schematic using Design
Architect. You then verify the design’s functionality by performing a functional simulation,
using ModelSim or another vendor's VHDL/Verilog simulator.
If your design is in VHDL or Verilog format and contains memory models, at this point you can add built-in self-test (BIST) circuitry. MBISTArchitect creates and inserts RTL-level customized internal testing structures for design memories. Additionally, if your design is in VHDL, you can use LBISTArchitect to synthesize BIST structures into its random logic design blocks.
Also at the RTL-level, you can insert and verify boundary scan circuitry using BSDArchitect
(BSDA). Then you can synthesize and optimize the design using either Design Compiler or
another synthesis tool.
At this point in the flow you are ready to insert internal scan circuitry into your design using
DFTAdvisor. You then perform a timing optimization on the design because you added scan
circuitry. Once you are sure the design is functioning as desired, you can generate test patterns.
You can use FastScan or FlexTest (depending on your scan strategy) and ASIC Vector
Interfaces to generate a test pattern set in the appropriate format.
Now you should verify that the design and patterns still function correctly with the proper
timing information applied. You can use ModelSim, QuickPath, or some other simulator to
achieve this goal. You may then have to perform a few additional steps required by your ASIC
vendor before handing the design off for manufacture and testing.
Note
It is important for you to check with your vendor early on in your design process for
specific requirements and restrictions that may affect your DFT strategies. For example,
the vendor's test equipment may only be able to handle single scan chains (see page 2-2),
have memory limitations, or have special timing requirements that affect the way you
generate scan circuitry and test patterns.
The following list briefly describes each of the tasks presented in Figure 1-2.
1. Understand DFT Basics — Before you can make intelligent decisions regarding your
test strategy, you should have a basic understanding of DFT. Chapter 2, “Understanding
Scan and ATPG Basics,” prepares you to make decisions about test strategies for your
design by presenting information about full scan, partial scan, boundary scan, partition
scan, and the variety of options available to you.
2. Understand Tool Concepts — The Mentor Graphics DFT tools share some common
functionality, as well as terminology and tool concepts. To effectively utilize these tools
in your design flow, you should have a basic understanding of what they do and how
they operate. Chapter 3, “Understanding Common Tool Terminology and Concepts,”
discusses this information.
3. Understand Testability Issues — Some design features can enhance a design's
testability, while other features can hinder it. Chapter 4, “Understanding Testability
Issues,” discusses synchronous versus asynchronous design practices, and outlines a
number of individual situations that require special consideration with regard to design
testability.
4. Insert/Verify Memory BIST Circuitry — MBISTArchitect is a Mentor Graphics
RTL-level tool you use to insert built-in self test (BIST) structures for memory devices.
MBISTArchitect lets you specify the testing architecture and algorithms you want to
use, and creates and connects the appropriate BIST models to your VHDL or Verilog
memory models. The Built-in Self-Test Process Guide discusses how to prepare for,
insert, and verify memory BIST circuitry using MBISTArchitect.
Figure 1-2 depicts the DFT design tasks: understanding DFT and the DFT tools (DFT basics, tool concepts, and testability issues); performing test synthesis and ATPG by inserting and verifying memory BIST (MBISTArchitect), logic BIST (LBISTArchitect), and boundary scan circuitry (BSDArchitect), inserting internal scan circuitry (DFTAdvisor), and generating and verifying test patterns (FastScan/FlexTest); handing off to the ASIC vendor, which creates the ASIC and runs tests; running diagnostics (FastScan); and plugging the ASIC into the board and running board tests.
5. Insert/Verify Logic BIST Circuitry — LBISTArchitect is the Mentor Graphics logic BIST insertion tool; it synthesizes BIST structures into your design's random logic blocks and creates the appropriate BIST models. The Built-in Self-Test Process Guide discusses how to prepare for, insert, and verify logic BIST circuitry using LBISTArchitect.
6. Insert/Verify Boundary Scan Circuitry — BSDArchitect is a Mentor Graphics IEEE
1149.1 compliant boundary scan insertion tool. BSDA lets you specify the boundary
scan architecture you want to use and inserts it into your RTL-level design. It generates
VHDL, Verilog, and BSDL models with IEEE 1149.1 compliant boundary scan
circuitry and an HDL test bench for verifying those models. The Boundary Scan Process
Guide discusses how to prepare for, insert, and verify boundary scan circuitry using
BSDA.
7. Insert Internal Scan Circuitry — Before you add internal scan or test circuitry to your
design, you should analyze your design to ensure that it does not contain problems that
may impact test coverage. Identifying and correcting these problems early in the DFT
process can minimize design iterations downstream. DFTAdvisor is the Mentor
Graphics testability analysis and test synthesis tool. DFTAdvisor can analyze, identify,
and help you correct design testability problems early on in the design process. Chapter
5, “Inserting Internal Scan and Test Circuitry,” introduces you to DFTAdvisor and
discusses preparations and procedures for adding scan circuitry to your design.
8. Generate/Verify Test Patterns — FastScan and FlexTest are Mentor Graphics ATPG
tools. FastScan is a high performance, full-scan Automatic Test Pattern Generation
(ATPG) tool. FastScan quickly and efficiently creates a set of test patterns for your
(primarily full scan) scan-based design.
FlexTest is a high-performance, sequential ATPG tool. FlexTest quickly and efficiently
creates a set of test patterns for your full, partial, or non-scan design. FastScan and
FlexTest both contain an embedded high-speed fault simulator that can verify a set of
properly formatted external test patterns.
ASIC Vector Interfaces (AVI) is the optional ASIC vendor-specific pattern formatter
available through FastScan and FlexTest. AVI generates a variety of ASIC vendor test
pattern formats. FastScan and FlexTest can also generate patterns in a number of
different simulation formats so you can verify the design and test patterns with timing.
For example, within the Mentor Graphics environment, you can use ModelSim for this
verification. Chapter 6, “Generating Test Patterns,” discusses the ATPG process and
formatting and verifying test patterns.
9. Vendor Creates ASIC and Runs Tests — At this point, the manufacture of your
device is in the hands of the ASIC vendor. Once the ASIC vendor fabricates your
design, it will test the device on automatic test equipment (ATE) using test vectors you
provide. This manual does not discuss this process, except to mention how constraints of
the testing environment might affect your use of the DFT tools.
10. Vendor Runs Diagnostics — The ASIC vendor performs a diagnostic analysis on the
full set of manufactured chips. Chapter 8, “Running Diagnostics,” discusses how to
perform diagnostics using FastScan to acquire information on chip failures.
11. Plug ASIC into Board and Run Board Tests — When your ASIC design is complete
and you have the actual tested device, you are ready to plug it into the board. After board
manufacture, the test engineer can run the board level tests, which may include
boundary scan testing. This manual does not discuss these tasks.
Figure 1-3 shows a representation of the GUI elements that are common to both user interfaces.
Notice that the graphical user interfaces consist of two windows: the Command Line window
and the Control Panel window.
When you invoke a DFT product in graphical user interface mode, it opens both the Command
Line and Control Panel windows. You can move these two windows at the same time by
pressing the left mouse button in the title bar of the Command Line window and moving the
mouse. This is called window tracking. If you want to disable window tracking, choose the
Windows > Control Panel > Tracks Main Window menu item.
The following sections describe each of the user interface common elements shown in
Figure 1-3.
Pulldown Menus
Pulldown menus are available for all the DFT products. The following lists the pulldown menus
that are shared by most of the products and the types of actions typically supported by each
menu:
• File > menu contains menu items that allow you to load a library or design, read
command files, view files or designs, save your session information, and exit your
session.
• Setup > menu contains menu items that allow you to perform various circuit or session
setups. These may include things like setting up your session logfiles or output files.
• Report > menu contains menu items that allow you to display various reports regarding
your session’s setup or run results.
• Window > menu contains menu items that allow you to toggle the visibility and
tracking of the Control Panel Window.
• Help > menu contains menu items that allow you to directly access the online manual
set for the DFT tools. This includes, but is not limited to, the individual command
reference pages, the user’s manual, and the release notes. For more information about
getting help, refer to “Getting Help” on page 1-12.
Within DFTAdvisor, FastScan, and FlexTest, you can add custom menu items. For information
on how to add menu items, refer to either “DFTAdvisor User Interface” on page 1-23,
“FastScan User Interface” on page 1-24, or “FlexTest User Interface” on page 1-26.
Session Transcript
The session transcript is the largest pane in the Command Line window, as shown in Figure 1-3
on page 1-8. The session transcript lists all commands performed and tool messages, displayed in different colors.
Command Transcript
The command transcript is located near the bottom of the Command Line window, as shown in
Figure 1-3 on page 1-8. The command transcript lists all of the commands executed. You can
repeat a command by double-clicking on the command in the command transcript. You can
place a command on the command line for editing by clicking once on the command in the
command transcript. You also have a popup menu available by clicking the right mouse button
in the command transcript. The menu items are described in Table 1-2.
Command Line
The DFT products each support a command set that provides both user information and user control. You enter these commands on the command line located at the bottom of the Command
Line window, as shown in Figure 1-3 on page 1-8. You can also enter commands through a
batch file called a dofile. These commands typically fall into one of the following categories:
• Add commands - These commands let you specify architectural information, such as
clock, memory, and scan chain definition.
• Delete commands - These commands let you individually “undo” the information you
specified with the Add commands. Each Add command has a corresponding Delete
command.
• Report commands - These commands report on both system and user-specified
information.
• Set and Setup commands - These commands provide user control over the architecture
and outputs.
• Miscellaneous commands - The DFT products provide a number of other commands that do not fit neatly into the previous categories. Some of these, such as Help, Dofile, and System, are common to all the DFT/ATPG tools. Others are specific to the individual products.
Most DFT product commands follow the 3-2-1 minimum typing convention. That is, as a shortcut, you need only type the first three characters of the first command word, the first two
characters of the second command word, and the first character of the third command word. For
example, the DFTAdvisor command Add Nonscan Instance reduces to “add no i” when you use
minimum typing.
In cases where the 3-2-1 rule leads to ambiguity between commands, such as Report Scan Cells
and Report Scan Chains (both reducing to “rep sc c”), you need to specify the additional
characters to alleviate the ambiguity. For example, the DFTAdvisor command Report Scan
Chains becomes “rep sc ch” and Report Scan Cells becomes “rep sc ce”.
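For example, the following dofile lines show a full command and its 3-2-1 abbreviation side by side; the instance pathname is a hypothetical placeholder used only for illustration:

// full form
add nonscan instance /TOP/CORE/REGBANK/FF_7
// 3-2-1 minimum typing for the same command
add no i /TOP/CORE/REGBANK/FF_7
// disambiguated abbreviations for Report Scan Chains and Report Scan Cells
rep sc ch
rep sc ce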
You should also be aware that when you issue commands with very long argument lists, you
can use the “\” line continuation character. For example, in DFTAdvisor you could specify the
Add Nonscan Instance command within a dofile (or at the system mode prompt) as follows:
add no i\
/CBA_SCH/MPI_BLOCK/IDSE$2263/C_A0321H$76/I$2 \
/CBA_SCH/MPI_BLOCK/IDSE$2263/C_A0321H$76/I$3 \
/CBA_SCH/MPI_BLOCK/IDSE$2263/C_A0321H$76/I$5 \
/CBA_SCH/MPI_BLOCK/IDSE$2263/C_A0321H$76/I$8
For more information on dofile scripts, refer to “Running Batch Mode Using Dofiles” on
page 1-20.
Control Panel Window
The Control Panel window contains graphic and button panes that let you set up and modify your run. The window also presents a series of buttons that represent the actions most commonly performed.
Graphic Pane
The graphic pane is located on the left half of the Control Panel window, as shown in Figure 1-3
on page 1-8. The graphic pane can either show the functional blocks that represent the typical
relationship between a core design and the logic being manipulated by the DFT product or show
the process flow blocks that represent the groups of tasks that are a part of the DFT product
session. Some tools, such as DFTAdvisor or FastScan, have multiple graphic panes that change
based on the current step in the process.
When you move the cursor over a functional or process flow block, the block changes color to
yellow, which indicates that the block is active. When the block is active, you can click the left
mouse button to open a dialog box that lets you perform a task, or click the right mouse button
for popup help on that block. For more information on popup help, refer to “Popup Help” on
page 1-13.
Button Pane
The button pane is located on the right half of the Control Panel window, as shown in
Figure 1-3 on page 1-8. The button pane provides a list of buttons that are the actions commonly
used while in the tool. You can click the left mouse button on a button in the button pane to
perform the listed task, or you can click the right mouse button for popup help specific to that
button. For more information on popup help, refer to “Popup Help” on page 1-13.
Getting Help
There are several types of online help: query help, popup help, information messages, Tool Guide help, command usage, online manuals, and the Help menu. The following sections describe how to access each type.
Query Help
Note
Query help is only supported in the DFTAdvisor, DFTInsight, FastScan, and FlexTest
user interfaces.
Query help provides quick text-based messages on the purpose of a button, text field, text area,
or drop-down list within a dialog box. If additional information is available in the online PDF
manual, a “Go To Manual” button is provided that opens that manual to that information. In
dialog boxes that contain multiple pages, query help is also available for each dialog tab.
You activate query help mode by clicking the “Turn On Query Help” button located at the
bottom of the dialog box. The mouse cursor changes to a question mark. You can then click the
left mouse button on the different objects in the dialog box to open a help window on that
object. You leave query help mode by clicking the same button, now labeled “Turn Off Query Help”, or by pressing the Escape key.
Popup Help
Popup help is available on all active areas of the Control Panel. To activate this type of help,
click the right mouse button on a functional block, process block, or button. To remove the help
window:
Information Messages
Information messages are provided in some dialog boxes to help you understand the purpose
and use of the dialog box or its options. You do not need to do anything to get these messages to
appear.
Tool Guide
Note
The Tool Guide is only available in the DFTAdvisor, FastScan, and FlexTest user
interfaces.
The Tool Guide provides quick information on different aspects of the application. You can
click on the different topics listed in the upper portion of the window to change the information
displayed in the lower portion of the window. You can open the Tool Guide by clicking on the
Help button located at the bottom of the Control Panel or from the Help > Open Tool Guide
menu item.
Command Usage
To get the command syntax for any command, from either a shell window or the GUI command
line, use the Help command followed either by a full or partial command name. You can also
display a list of certain groups of commands by entering Help and a keyword such as Add,
Delete, Save, and so on. For example, to list all the “Add” commands in MBISTArchitect,
enter:
help add
// ADD DAta Backgrounds ADD MBist Algorithms
// ADD MEmory
To see the usage line for a command, enter the Help command followed by the command name. For example, to see the usage for the DFTAdvisor Add Clocks command, enter:
help add clocks
If you are using the GUI, you can open the reference manual page excerpts for a command, using the PDF viewer, by executing the Help > On Commands > Open Reference Page menu item.
Next, double click on the desired command in the list, or select the command and click the
Display button. The PDF viewer opens to the reference page excerpt for the command.
To accomplish the same operation from the command line, in either a shell window or the GUI
command line, issue the Help command and add the -MANual switch after the command name.
If you type Help and include only the -MANual switch, the tool opens the Design-for-Test
Bookcase, giving access to all the DFT manuals.
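For example, continuing with the DFTAdvisor Add Clocks command shown earlier, the following command lines open that command's reference page and the full DFT bookcase, respectively:

help add clocks -manual
help -manual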
Online Manuals
Application documentation is provided online in PDF format. You can access the manuals using
the Help menu (all tools) or the Go To Manual button in query help messages (DFTAdvisor,
FastScan, and FlexTest). You can also open a separate shell window and execute
$MGC_HOME/bin/mgcdocs. This opens the Mentor Graphics Bookcase in the PDF viewer.
Click on “Sys Design, Verification, Test” and then on “Design-for-Test” to open the bookcase
of DFT documentation.
For information on using the Help menu to open a manual, refer to the following “Help Menu”
section.
Help Menu
Many of the menu items use a PDF viewer to display the help text associated with the topic
request. To enable the viewer’s proper behavior, ensure that you have the proper environment. To do so, select the Help > Setup Environment menu item, described below. The Help menu contains the following items:
• Open Tool Guide - Opens the ASCII help tool. For more information, refer to the
preceding Tool Guide section. This menu item is only supported in DFTAdvisor,
FastScan, and FlexTest user interfaces.
• On Commands > Open Reference Page - Displays a window that lists the commands
for which help is available. Select or specify a command and click Display. Help opens
the PDF viewer and displays reference information for that command.
• On Commands > Open Summary Table - Opens the PDF viewer and displays the
Command Summary Table from the current tool’s reference manual. You can then click
on the command name and jump to the reference page.
• On Key Bindings - Displays the key binding definitions for the text entry boxes.
• Open DFT Bookcase - Opens the PDF viewer and displays a list of the manuals that
apply to the current tool.
• Open User’s Manual - Opens the PDF viewer and displays the user’s manual that
applies to the current tool.
• Open Reference Manual - Opens the PDF viewer and displays the reference manual
that applies to the current tool.
• Open Release Notes - Opens the PDF viewer and displays the release note information
for this release of the current tool.
• Open Common Resources Manual - Opens the PDF viewer and displays the Design-
for-Test Common Resources Manual.
• Open Mentor Graphics Bookcase - Opens the PDF viewer and displays the Mentor
Graphics Bookcase.
• Customer Support - Displays helpful information regarding the Mentor Graphics
Customer Support organization.
• How to Use Help - Displays text on how to use help.
• Setup Environment - Displays a dialog box that assists you in setting up your Online
Help environment and PDF viewer.
• Version - Displays version information for the tool.
Hierarchy Browser
The Hierarchy Browser displays a hierarchical tree of the instances in your design from the top
level to the ATPG library model instances. The graphical representation provides an easy way
to navigate through your design to select particular instances and pins for the tool to use as
arguments for commands.
Once displayed, you can select the path of an instance or pin for use with other commands. You
can expand and collapse the hierarchy, block by block, as desired.
There are two types of hierarchy browser windows: a stand-alone version and a dialog version.
The stand-alone hierarchy browser window does not have exclusive control over the program:
you can access other windows and dialogs at any time.
A dialog hierarchy browser window is accessed from the Browse Hierarchy buttons in various dialogs; each such browser is tied to the dialog from which it is opened.
Note
If you select a Browse Hierarchy button from within a dialog, the stand-alone browser
will be hidden until you close the browser you launched from the dialog.
The following DFTInsight menu item brings up the stand-alone hierarchy browser window:
The following FastScan, FlexTest, and DFTAdvisor menu item also brings up the stand-alone
hierarchy browser window:
The pop-up menus allow you to execute the following operations: Copy, Add Display Instance, and Report Gates. Copy places the selected path or pin on the clipboard so that you can paste it into a command, either on the command line in noGUI mode or on the GUI's command line.
If you select either the Add Display Instance or Report Gates command, the path currently selected in the hierarchy browser is automatically added to the end of the command and echoed in the command transcript window.
The Selected Instance(s) text entry box displays the path that is used with the command. A pop-up menu is available over this text entry box from which you can copy, paste, cut, delete, and select. This is useful for copying text to a command line for use with one of the tool's commands.
Note
The desired path must be selected before execution of the pop-up commands.
You can specify a dofile at invocation by using the -Dofile switch. You can also execute the
File > Command File menu item, the Dofile command, or click on the Dofile button to execute
a dofile at any time during a DFT application session.
If you place all commands, including the Exit command, in a dofile, you can run the entire
session as a batch process. Once you generate a dofile, you can run it at invocation. For
example, to run MBISTArchitect as a batch process using the commands contained in
my_dofile.do, enter:
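For example, a command line of the following form could be used; the executable name mbistarchitect and its location under $MGC_HOME/bin are assumptions based on a typical installation, so substitute the invocation pathname used at your site:
shell> $MGC_HOME/bin/mbistarchitect -dofile my_dofile.do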
By default, if an ATPG application encounters an error when running one of the commands in the dofile, it stops dofile execution. However, you can turn this behavior off by using the Set Dofile Abort command.
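As a minimal sketch, such a dofile might look like the following; the on/off argument shown for Set Dofile Abort and the use of // for comments are assumptions to verify in the ATPG Tools Reference Manual:
// my_dofile.do - example batch session
set dofile abort off
// tool-specific setup and test generation commands go here
exit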
You can generate log files in one of three ways: by using the -Logfile switch when you invoke
the tool, by executing the Setup > Logfile menu item, or in DFTAdvisor, FastScan or FlexTest,
by issuing the Set Logfile Handling command. When setting up a log file, you can instruct the
DFT product to generate a new log file, replace an existing log file, or append information to a
log file that already exists.
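For example, a command of the following form, entered early in a session, directs messages to a new log file; the -Replace argument shown is an assumption, so check the Set Logfile Handling reference page for the exact replace and append options:
prompt> set logfile handling my_session.log -replace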
Note
If you create a log file during a DFT product session, the log file will only contain the note, warning, or error messages that occur after you issue the command. Therefore, you should enter the command as one of the first in the session.
prompt> system ls
Note
This command applies to all files that the tool reads from and writes to.
• Set Gzip Options - Specifies which GNU gzip options to use when the tool is processing
files that have the .gz extension.
Note
The file compression used by the tools to manage disk storage space is unrelated to the
pattern compression you apply to test pattern sets in order to reduce the pattern count.
You will see many references to the latter type of compression throughout the DFT
documentation.
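As an illustration of the Set Gzip Options command, a line such as the following passes a standard GNU gzip compression-level option to the tool; the exact argument form accepted by the command should be confirmed in the reference manual:
prompt> set gzip options -9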
prompt> exit
For information on an individual tool user interface, refer to the following sections:
When you invoke DFTAdvisor in graphical mode, the Command Line and Control Panel
windows are opened. An example of these two windows is shown in Figure 1-3 on page 1-8.
The DFTAdvisor Control Panel window, shown in Figure 1-6, lets you easily set up the
different aspects of your design in order to identify and insert test structures. The DFTAdvisor
Control Panel contains three panes: a graphic pane, a button pane, and a process pane. These
panes are available in each of the process steps identified in the process pane at the bottom of
the Control Panel window.
You use the process pane to step through the major tasks in the process. Each of the process
steps has a different graphic pane and a different set of buttons in the button pane. The current
process step is highlighted in green. Within the process step, you have sub-tasks that are shown
as functional or process flow blocks in the graphic pane. To get information on each of these
tasks, click the right mouse button on the block. For example, to get help on the Clocks
functional block in Figure 1-6, click the right mouse button on it.
When you have completed the sub-tasks within a major task and are ready to move on to the
next process step, click on the “Done with” button in the graphic pane, or on the process button
in the process pane. If you have not completed all of the required sub-tasks associated with that
process step, DFTAdvisor asks you if you really want to move to the next step.
Within DFTAdvisor, you can add custom pulldown menus in the Command Line window and
help topics to the DFTAdvisor Tool Guide window. This gives you the ability to automate
common tasks and create notes on tool usage. For more information on creating these custom
menus and help topics, click on the Help button in the button pane and then choose the help
topic, “How can I add custom menus and help topics?”.
Figure 1-6. DFTAdvisor Control Panel Window (graphic pane, process pane, and button pane)
When you invoke FastScan in graphical mode, the Command Line and Control Panel windows
are opened. An example of these two windows is shown in Figure 1-3 on page 1-8. The
FastScan Control Panel window, shown in Figure 1-7, lets you set up the different aspects of your design in order to generate and verify test patterns. The FastScan Control Panel
contains three panes: a graphic pane, a button pane, and a process pane. These panes are
available in each of the process steps identified in the process pane at the bottom of the Control
Panel window.
You use the process pane to step through the major tasks in the process. Each of the process
steps has a different graphic pane and a different set of buttons in the button pane. The current
process step is highlighted in green. Within the process step, you have sub-tasks that are shown
as functional or process flow blocks in the graphic pane. You can get information on each of
these tasks by clicking the right mouse button on the block. For example, to get help on the
Clocks functional block in Figure 1-7, click the right mouse button on it.
When you have completed the sub-tasks within a major task and are ready to move on to the
next process step, simply click on the “Done with” button in the graphic pane or on the process
button in the process pane. If you have not completed all of the required sub-tasks associated
with that process step, FastScan asks you if you really want to move to the next step.
Within FastScan, you can add custom pulldown menus in the Command Line window and help
topics to the FastScan Tool Guide window. This gives you the ability to automate common
tasks and create notes on tool usage. For more information on creating these custom menus and
help topics, click on the Help button in the button pane and then choose the help topic, “How
can I add custom menus and help topics?”.
Figure 1-7. FastScan Control Panel Window (graphic pane, process pane, and button pane)
When you invoke FlexTest in graphical mode, the Command Line and Control Panel windows
are opened. An example of these two windows is shown in Figure 1-3 on page 1-8. The
FlexTest Control Panel window, shown in Figure 1-8, lets you easily set up the different aspects of your design in order to generate and verify test patterns. The FlexTest Control
Panel contains three panes: a graphic pane, a button pane, and a process pane. These panes are
available in each of the process steps identified in the process pane at the bottom of the Control
Panel window.
You use the process pane to step through the major tasks in the process. Each of the process
steps has a different graphic pane and a different set of buttons in the button pane. The current
process step is highlighted in green. Within the process step, you have sub-tasks that are shown
as functional or process flow blocks in the graphic pane. To get information on each of these
tasks, click the right mouse button on the block. For example, to get help on the Clocks
functional block in Figure 1-8, click the right mouse button on it.
When you have completed the sub-tasks within a major task and are ready to move on to the
next process step, simply click on the “Done with” button in the graphic pane or on the process
button in the process pane. If you have not completed all of the required sub-tasks associated
with that process step, FlexTest asks you if you really want to move to the next step.
Within FlexTest, you can add custom pulldown menus in the Command Line window and help
topics to the FlexTest Tool Guide window. This gives you the ability to automate common tasks
and create notes on tool usage. For more information on creating these custom menus and help
topics, click on the Help button in the button pane and then choose the help topic, “How can I
add custom menus and help topics?”.
Figure 1-8. FlexTest Control Panel Window (graphic pane, process pane, and button pane)
Before you begin the DFT process, you must first have an understanding of certain DFT
concepts. Once you understand these concepts, you can determine the best test strategy for your
particular design. Figure 2-1 shows the concepts this section discusses.
Built-in self-test (BIST) circuitry, along with scan circuitry, greatly enhances a design’s
testability. BIST leaves the job of testing up to the device itself, eliminating or minimizing the
need for external test equipment. A discussion of BIST and the BIST process is provided in the
Built-in Self-Test Process Guide.
Scan circuitry facilitates test generation and can reduce external tester usage. There are two
main types of scan circuitry: internal scan and boundary scan. Internal scan (also referred to as
scan design) is the internal modification of your design’s circuitry to increase its testability. A
detailed discussion of internal scan begins on page 2-2.
While scan design modifies circuitry within the original design, boundary scan adds scan
circuitry around the periphery of the design to make internal circuitry on a chip accessible via a
standard board interface. The added circuitry enhances board testability of the chip, the chip I/O
pads, and the interconnections of the chip to other board circuitry. A discussion of boundary
scan and the boundary scan process is available in the Boundary Scan Process Guide.
The design shown in Figure 2-2 contains both combinational and sequential portions. Before
adding scan, the design had three inputs, A, B, and C, and two outputs, OUT1 and OUT2. This
“Before Scan” version is difficult to initialize to a known state, making it difficult to both
control the internal circuitry and observe its behavior using the primary inputs and outputs of
the design.
Figure 2-2. Example Design Before and After Adding Scan
After adding scan circuitry, the design has two additional inputs, sc_in and sc_en, and one
additional output, sc_out. Scan memory elements replace the original memory elements so that
when shifting is enabled (the sc_en line is active), scan data is read in from the sc_in line.
1. Enable the scan operation to allow shifting (to initialize scan cells).
2. After loading the scan cells, hold the scan clocks off and then apply stimulus to the
primary inputs.
Figure 2-3. Full Scan Design (scan path from Scan Input to Scan Output)
The black rectangles in Figure 2-3 represent scan elements. The line connecting them is the
scan path. Because this is a full scan design, all storage elements were converted and connected
in the scan path. The rounded boxes represent combinational portions of the circuit.
For information on implementing a full scan strategy for your design, refer to “Test Structures
Supported by DFTAdvisor” on page 5-4.
• Ease of use.
Using the full scan methodology, you can both insert scan circuitry and run ATPG without the aid of a test engineer.
• Assured quality.
Full scan assures quality because parts containing such circuitry can be tested
thoroughly during chip manufacture. If your end products are going to be used in market
segments that demand high quality, such as aircraft or medical electronics—and you can
afford the added circuitry—then you should take advantage of the full scan
methodology.
Figure 2-4. Partial Scan Design (scan path from Scan Input to Scan Output)
The rectangles in Figure 2-4 represent sequential elements of the design. The black rectangles
are storage elements that have been converted to scan elements. The line connecting them is the
scan path. The white rectangles are elements that have not been converted to scan elements and
thus, are not part of the scan chain. The rounded boxes represent combinational portions of the
circuit.
In the partial scan methodology, the test engineer, designer, or scan insertion tool selects the
desired flip-flops for the scan chain. For information on implementing a partial scan strategy for
your design, refer to “Test Structures Supported by DFTAdvisor” on page 5-4.
Figure 2-5. Scan Methodology Spectrum: no scan, partial scan (mostly-sequential scan), and full scan (well-behaved sequential scan), trading test generation effort against area overhead; combinational and scan-sequential ATPG (FastScan) versus sequential ATPG (FlexTest)
Mentor Graphics provides two ATPG tools, FastScan and FlexTest. FastScan uses both
combinational (for full scan) and scan-sequential ATPG algorithms. Well-behaved sequential
scan designs can use scan-sequential ATPG. Such designs normally contain a high percentage
of scan but can also contain “well-behaved” sequential logic, such as non-scan latches,
sequential memories, and limited sequential depth. Although you can use FastScan on other
design types, its ATPG algorithms work most efficiently on full scan and scan-sequential
designs.
FlexTest uses sequential ATPG algorithms and is thus effective over a wider range of design
styles. However, FlexTest works most effectively on primarily sequential designs; that is, those
containing a lower percentage of scan circuitry. Because the ATPG algorithms of the two tools
differ, you can use both FastScan and FlexTest together to create an optimal test set on nearly
any type of design.
“Understanding ATPG” on page 2-12 covers ATPG, FastScan, and FlexTest in more detail.
Partition scan adds controllability and observability to the design via a hierarchical partition
scan chain. A partition scan chain is a series of scan cells connected around the boundary of a
design partition that is accessible at the design level. The partition scan chain improves both test
coverage and run time by converting sequential elements to scan cells at inputs (outputs) that
have low controllability (observability) from outside the block.
The architecture of partition scan is illustrated in the following two figures. Figure 2-6 shows a
design with three partitions, A, B, and C.
Figure 2-6. Design Containing Three Partitions (A, B, and C) with Design-Level Primary Inputs and Outputs
The bold lines in Figure 2-6 indicate inputs and outputs of partition A that are not directly
controllable or observable from the design level. Because these lines are not directly accessible
at the design level, the circuitry controlled by these pins can cause testability problems for the
design.
Figure 2-7 shows how adding partition scan structures to partition A increases the
controllability and observability (testability) of partition A from the design level.
Note
Only the first elements directly connected to the uncontrollable (unobservable) primary
inputs (primary outputs) become part of the partition scan chain.
Figure 2-7. Partition A with Partition Scan Added (design-level scan in and scan out pins added for the uncontrollable inputs and unobservable outputs)
The partition scan chain consists of two types of elements: sequential elements connected
directly to uncontrolled primary inputs of the partition, and sequential elements connected
directly to unobservable (or masked) outputs of the partition. The partition also acquires two
design-level pins, scan in and scan out, to give direct access to the previously uncontrollable or
unobservable circuitry.
You can also use partition scan in conjunction with either full or partial scan structures.
Sequential elements not eligible for partition scan become candidates for internal scan.
For information on implementing a partition scan strategy for your design, refer to “Setting Up
for Partition Scan Identification” on page 5-19.
Figure 2-8. Example of Blocked Observation and Uncontrollable Circuitry (one OR gate input tied to VCC)
In this example, one input of an OR gate is tied to a 1. This blocks the ability to propagate
through this path any fault effects in circuitry feeding the other input. Thus, the other input must
become a test point to improve observation. The tied input also causes a constant 1 at the output
of the OR gate. This means any circuitry downstream from that output is uncontrollable. The
pin at the output of the gate becomes a test point to improve controllability. Once these points are identified, added circuitry can alleviate the controllability and observability problems.
Figure 2-9. Test Points Added for Observability (added primary output) and Controllability (MUX controlled by a Test_Mode signal and an added primary input)
At the observability test point, an added primary output provides direct observation of the signal
value. At the controllability test point, an added MUX controlled by a test_mode signal and
primary input controls the value fed to the associated circuitry.
This is just one example of how test point circuitry can increase design testability. Refer to
“Setting Up for Test Point Identification” on page 5-23 for information on identifying test
points and inserting test point circuitry.
Test point circuitry is similar to test logic circuitry. For more information on test logic, refer to
“Enabling Test Logic Insertion” on page 5-9.
• Multiple formats.
Reads and writes the following design data formats: GENIE, EDIF (2.0.0), TDL,
VHDL, or Verilog.
• Multiple scan types.
Supports insertion of three different scan types, or methodologies: mux-DFF, clocked-
scan, and LSSD.
• Multiple test structures.
Supports identification and insertion of full scan, partial scan (both sequential ATPG-
based and scan sequential procedure-based), partition scan, and test points.
• Scannability checking.
Provides powerful scannability checking/reporting capabilities for sequential elements
in the design.
• Design rules checking.
Performs design rules checking to ensure scan setup and operation are correct—before
scan is actually inserted. This rules checking also guarantees that the scan insertion done
by DFTAdvisor produces results that function properly in the ATPG tools, FastScan and
FlexTest.
• Interface to ATPG tools.
Automatically generates information for the ATPG tools on how to operate the scan
circuitry DFTAdvisor creates.
• Optimal partial scan selection.
Provides optimal partial scan analysis and insertion capabilities.
• Flexible scan configurations.
Allows flexibility in the scan stitching process, such as stitching scan cells in fixed or
random order, creating either single- or multiple-scan chains, and using multiple clocks
on a single-scan chain.
• Test logic.
Provides capabilities for inserting test logic circuitry on uncontrollable set, reset, clock,
tri-state enable, and RAM read/write control lines.
• User specified pins.
Allows user-specified pin names for test and other I/O pins.
• Multiple model levels.
Handles gate-level, as well as gate/transistor-level models.
• Online help.
Provides online help for every command along with online manuals.
For information on using DFTAdvisor to insert scan circuitry into your design, refer to
“Inserting Internal Scan and Test Circuitry” on page 5-1.
Understanding ATPG
ATPG stands for Automatic Test Pattern Generation. Test patterns, sometimes called test
vectors, are sets of 1s and 0s placed on primary input pins during the manufacturing test process
to determine if the chip is functioning properly. When the test pattern is applied, the Automatic
Test Equipment (ATE) determines if the circuit is free from manufacturing defects by
comparing the fault-free output—which is also contained in the test pattern—with the actual
output measured by the ATE.
The two most typical methods for pattern generation are random and deterministic.
Additionally, the ATPG tools can fault simulate patterns from an external set and place those
patterns detecting faults in a test set. The following subsections discuss each of these methods.
More specifically, the tool assigns a set of values to control points that force the fault site to the
state opposite the fault-free state, so there is a detectable difference between the fault value and
the fault-free value. The tool must then find a way to propagate this difference to a point where
it can observe the fault effect. To satisfy the conditions necessary to create a test pattern, the test
generation process makes intelligent decisions on how best to place a desired value on a gate. If
a conflict prevents the placing of those values on the gate, the tool refines those decisions as it
attempts to find a successful test pattern.
If the tool exhausts all possible choices without finding a successful test pattern, it must perform
further analysis before classifying the fault. Faults requiring this analysis include redundant,
ATPG-untestable, and possible-detected-untestable categories (see page 2-25 for more
information on fault classes). Identifying these fault types is an important by-product of
deterministic test generation and is critical to achieving high test coverage. For example, if a
fault is proven redundant, the tool may safely mark it as untestable. Otherwise, it is classified as
a potentially detectable fault and counts as an untested fault when calculating test coverage.
gated clocks, set, and reset lines. Additionally, FastScan has some sequential testing
capabilities for your design’s non-scan circuitry.
• Additions to scan ATPG.
FastScan provides easy and flexible scan setup using a test procedure file. FastScan also
provides DFT rules checking (before you can generate test patterns) to ensure proper
scan operation. FastScan's pattern compression abilities ensure that you have a small,
yet efficient, set of test patterns. FastScan also provides diagnostic capabilities, so you
not only know if a chip is good or faulty, but you also have some information to pinpoint
problems. FastScan also supports built-in self-test (BIST) functionality, and supports
both RAM/ROM components and transparent latches.
• Tight integration in Mentor Graphics top-down design flow.
FastScan is tightly coupled with DFTAdvisor in the Mentor Graphics top-down design
flow.
• Support for use in external tool environments.
You can use FastScan in many non-Mentor Graphics design flows, including Verilog
and Synopsys.
• Flexible packaging.
The standard FastScan package, fastscan, operates in both graphical and non-graphical
modes. FastScan also has a diagnostic-only package, which you install normally but
which licenses only the setup and diagnostic capabilities of the tool; that is, you cannot
run ATPG.
Refer to the ATPG Tools Reference Manual for the full set of FastScan functions.
Figure 2-10. Defect Types and Associated Tests: functional defects (circuitry opens and shorts), IDDQ defects (CMOS stuck-on, CMOS stuck-open, and bridging), and at-speed defects (slow transistors and resistive bridges)
Each of these defects has an associated detection strategy. The following subsection discusses
the three main types of test strategies.
Test Types
Figure 2-10 shows three main categories of defects and their associated test types: functional,
IDDQ, and at-speed. Functional testing checks the logic levels of output pins for a “0” and “1”
response. IDDQ testing measures the current going through the circuit devices. At-speed testing
checks the amount of time it takes for a device to change logic states. The following subsections
discuss each of these test types in more detail.
Functional Test
Functional test continues to be the most widely-accepted test type. Functional test typically
consists of user-generated test patterns, simulation patterns, and ATPG patterns.
Functional testing uses logic levels at the device input pins to detect the most common
manufacturing process-caused problem, static defects (for example, open, short, stuck-on, and
stuck-open conditions). Functional testing applies a pattern of 1s and 0s to the input pins of a
circuit and then measures the logical results at the output pins. In general, a defect produces a
logical value at the outputs different from the expected output value.
IDDQ Test
IDDQ testing measures quiescent power supply current rather than pin voltage, detecting device
failures not easily detected by functional testing—such as CMOS transistor stuck-on faults or
adjacent bridging faults. IDDQ testing equipment applies a set of patterns to the design, lets the
current settle, then measures for excessive current draw. Devices that draw excessive current
may have internal manufacturing defects.
Because IDDQ tests do not have to propagate values to output pins, the set of test vectors for
detecting and measuring a high percentage of faults may be very compact. FastScan and
FlexTest efficiently create this compact test vector set.
In addition, IDDQ testing detects some static faults, tests reliability, and reduces the number of
required burn-in tests. You can increase your overall test coverage by augmenting functional
testing with IDDQ testing.
There are three IDDQ test methodologies:
• Every-vector
This methodology monitors the power-supply current for every vector in a functional or
stuck-at fault test set. Unfortunately, this method is relatively slow—on the order of 10-
100 milliseconds per measurement—making it impractical in a manufacturing
environment.
• Supplemental
This methodology bypasses the timing limitation by using a smaller set of IDDQ
measurement test vectors (typically generated automatically) to augment the existing
test set.
• Selective
This methodology intelligently chooses a small set of test vectors from the existing
sequence of test vectors to measure current.
FastScan and FlexTest support both supplemental and selective IDDQ test methodologies.
Three test vector types serve to further classify IDDQ test methodologies:
• Ideal
Ideal IDDQ test vectors produce a nearly zero quiescent power supply current during
testing of a good device. Most methodologies expect such a result.
• Non-ideal
Non-ideal IDDQ test vectors produce a small, deterministic quiescent power supply
current in a good circuit.
• Illegal
If the test vector cannot produce an accurate current component estimate for a good
device, it is an illegal IDDQ test vector. You should never perform IDDQ testing with
illegal IDDQ test vectors.
IDDQ testing classifies CMOS circuits based on the quiescent-current-producing circuitry
contained inside as follows:
• Fully static
Fully static CMOS circuits consume close to zero IDDQ current for all circuit states.
Such circuits do not have pullup or pull-down resistors, and there can be one and only
one active driver at a time in tri-state buses. For such circuits, you can use any vector for
ideal IDDQ current measurement.
• Resistive
Resistive CMOS circuits can have pullup/pull-down resistors and tristate buses that
generate high IDDQ current in a good circuit.
• Dynamic
Dynamic CMOS circuits have macros (library cells or library primitives) that generate
high IDDQ current in some states. Diffused RAM macros belong to this category.
Some designs have a low current mode, which makes the circuit behave like a fully static
circuit. This behavior makes it easier to generate ideal IDDQ tests for these circuits.
FastScan and FlexTest currently support only the ideal IDDQ test methodology for fully static,
resistive, and some dynamic CMOS circuits. The tools can also perform IDDQ checks during
ATPG to ensure the vectors they produce meet the ideal requirements. For information on
creating IDDQ test sets, refer to “Creating an IDDQ Test Set” on page 6-62.
At-Speed Test
Timing failures can occur when a circuit operates correctly at a slow clock rate, and then fails
when run at the normal system speed. Delay variations exist in the chip due to statistical
variations in the manufacturing process, resulting in defects such as partially conducting
transistors and resistive bridges.
The purpose of at-speed testing is to detect these types of problems. At-speed testing runs the
test patterns through the circuit at the normal system clock speed.
Fault Modeling
Fault models are a means of abstractly representing manufacturing defects in the logical model
of your design. Each type of testing—functional, IDDQ, and at-speed—targets a different set of
defects.
Fault Locations
By default, faults reside at the inputs and outputs of library models. However, faults can instead
reside at the inputs and outputs of gates within library models if you turn internal faulting on.
Figure 2-11 shows the fault sites for both cases.
To locate a fault site, you need a unique, hierarchical instance pathname plus the pin name.
Fault Collapsing
A circuit can contain a significant number of faults that behave identically to other faults. That
is, the test may identify a fault, but may not be able to distinguish it from another fault. In this
case, the faults are said to be equivalent, and the fault identification process reduces the faults to
one equivalent fault in a process known as fault collapsing. For performance reasons, early in
the fault identification process FastScan and FlexTest single out a member of the set of
equivalent faults and use this “representative” fault in subsequent algorithms. Also for
performance reasons, these applications only evaluate the one equivalent fault, or collapsed
fault, during fault simulation and test pattern generation. The tools retain information on both
collapsed and uncollapsed faults, however, so they can still make fault reports and test coverage
calculations.
Figure 2-12. Possible Stuck-At Faults for a Two-Input Gate (inputs a and b, output c): a s-a-1, a s-a-0, b s-a-1, b s-a-0, c s-a-1, c s-a-0 (six possible faults)
For a single-output, n-input gate, there are 2(n+1) possible stuck-at errors. In this case, with
n=2, six stuck-at errors are possible.
FastScan and FlexTest use the following fault collapsing rules for the single stuck-at model:
FastScan and FlexTest use the following fault collapsing rules for the toggle fault model:
• Buffer - a fault on the input is equivalent to the same fault value at the output.
• Inverter - a fault on the input is equivalent to the opposite fault value at the output.
• Net between single output pin and multiple input pin - all faults of the same value are
equivalent.
FastScan and FlexTest support the pseudo stuck-at fault model for IDDQ testing. Testing
detects a pseudo stuck-at model at a node if the fault is excited and propagated to the output of a
cell (library model instance or primitive). Because FastScan and FlexTest library models can be
hierarchical, fault modeling occurs at different levels of detail.
The pseudo stuck-at fault model detects all defects found by transistor-based fault models—if
used at a sufficiently low level. The pseudo stuck-at fault model also detects several other types
of defects that the traditional stuck-at fault model cannot detect, such as some adjacent bridging
defects and CMOS transistor stuck-on conditions.
The benefit of using the pseudo stuck-at fault model is that it lets you obtain high defect
coverage using IDDQ testing, without having to generate accurate transistor-level models for all
library components.
The transistor leakage fault model is another fault model commonly used for IDDQ testing.
This model represents each transistor as a four-terminal device, with six associated faults. The
six faults for an NMOS transistor include G-S, G-D, D-S, G-SS, D-SS, and S-SS (where G, D,
S, and SS are the gate, drain, source, and substrate, respectively).
You can only use the transistor level fault model on gate-level designs if each of the library
models contains detailed transistor level information. Pseudo stuck-at faults on gate-level
models equate to the corresponding transistor leakage faults for all primitive gates and fanout-
free combinational primitives. Thus, without the detailed transistor-level information, you
should use the pseudo stuck-at fault model as a convenient and accurate way to model faults in
a gate-level design for IDDQ testing.
Figure 2-13 shows the IDDQ testing process using the pseudo stuck-at fault model.
The pseudo stuck-at model detects internal transistor shorts, as well as “hard” stuck-ats (a node
actually shorted to VDD or GND), using the principle that current flows when you try to drive
two connected nodes to different values. While stuck-at fault models require propagation of the
fault effects to a primary output, pseudo stuck-at fault models allow fault detection at the output
of primitive gates or library cells.
IDDQ testing detects output pseudo stuck-at faults if the primitive or library cell output pin goes
to the opposite value. Likewise, IDDQ testing detects input pseudo stuck-at faults when the
input pin has the opposite value of the fault and the fault effect propagates to the output of the
primitive or library cell.
By combining IDDQ testing with traditional stuck-at fault testing, you can greatly improve the
overall test coverage of your design. However, because it is costly and impractical to monitor
current for every vector in the test set, you can supplement an existing stuck-at test set with a
compact set of test vectors for measuring IDDQ. This set of IDDQ vectors can either be
generated automatically or intelligently chosen from an existing set of test vectors. Refer to
section “Creating an IDDQ Test Set” on page 6-62 for information.
The fault collapsing rule for the pseudo stuck-at fault model is as follows: for faults associated
with a single cell, pseudo stuck-at faults are considered equivalent if the corresponding stuck-at
faults are equivalent.
Related Commands
Set Transition Holdpi - freezes all primary input values other than clocks and RAM controls
during multiple cycles of pattern generation.
Figure 2-14 demonstrates the at-speed testing process using the transition fault model. In this
example, the process could be testing for a slow-to-rise or slow-to-fall fault on any of the pins of
the AND gate.
Figure 2-14. At-Speed Testing Process Using the Transition Fault Model (apply the transition propagation vector, then measure the primary output value)
A transition fault requires two test vectors for detection: an initialization vector and a transition
propagation vector. The initialization vector propagates the initial transition value to the fault
site. The transition vector, which is identical to the stuck-at fault pattern, propagates the final
transition value to the fault site. To detect the fault, the tool applies proper at-speed timing
relative to the second vector, and measures the propagated effect at an external observation
point.
The tool uses the following fault collapsing rules for the transition fault model:
• Buffer - a fault on the input is equivalent to the same fault value at the output.
• Inverter - a fault on the input is equivalent to the opposite fault value at the output.
• Net between single output pin and single input pin - all faults of the same value are
equivalent.
FlexTest Only - In FlexTest, a transition fault is modeled as a fault which causes a 1-cycle delay
of rising or falling. In comparison, a stuck-at fault is modeled as a fault which causes infinite
delay of rising or falling. The main difference between the transition fault model and the stuck-
at fault model is their fault site behavior. Also, since it is more difficult to detect a transition
fault than a stuck-at fault, the run time for a typical circuit may be slightly worse.
Related Commands
Set Fault Type - Specifies the fault model for which the tool develops or selects ATPG patterns. The transition option for this command instructs the tool to develop or select ATPG patterns for the transition fault model.
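For example, commands of the following form switch between fault models during a session; the keyword spellings shown are assumptions to confirm on the Set Fault Type reference page:
prompt> set fault type transition
prompt> set fault type stuck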
Path topology and edge type identify path delay faults. The path topology describes a user-
specified path from beginning, or launch point, through a combinational path to the end, or
capture point. The launch point is either a primary input or a state element. The capture point is
either a primary output or a state element. State elements used for launch or capture points are
either scan elements or non-scan elements that qualify for clock-sequential handling. A path
definition file defines the paths for which you want patterns generated.
The edge type defines the type of transition placed on the launch point that you want to detect at
the capture point. A “0” indicates a rising edge type, which is consistent with the slow-to-rise
transition fault and is similar to a temporary stuck-at-0 fault. A “1” indicates a falling edge type,
which is consistent with the slow-to-fall transition fault and is similar to a temporary stuck-at-1
fault.
FastScan targets multiple path delay faults for each pattern it generates. Within the (ASCII) test
pattern set, patterns that detect path delay faults include comments after the pattern statement
identifying the path fault, type of detection, time and point of launch event, time and point of
capture event, and the observation point. Information about which paths were detected by each
pattern is also included.
For more information on generating path delay test sets, refer to “Creating a Path Delay Test Set
(FastScan)” on page 6-76.
Fault Detection
Figure 2-15 shows the basic fault detection process.
Figure 2-15. Fault Detection Process: apply stimulus to the good circuit and the actual circuit, then compare the responses; a difference means the fault is detected, otherwise repeat with the next stimulus
Fault detection works by comparing the response of a known-good version of the circuit to that
of the actual circuit, for a given stimulus set. A fault exists if there is any difference in the
responses. You then repeat the process for each stimulus set.
The actual fault detection methods vary. One common approach is path sensitization. The path
sensitization method, which is used by FastScan and FlexTest to detect stuck-at faults, starts at
the fault site and tries to construct a vector to propagate the fault effect to a primary output.
When successful, the tools create a stimulus set (a test pattern) to detect the fault. They attempt
to do this for each fault in the circuit's fault universe. Figure 2-16 shows an example circuit for
which path sensitization is appropriate.
Figure 2-16. Path Sensitization Example Circuit (primary inputs x1, x2, and x3; internal line y1 carrying the stuck-at-0 target fault; primary output y2)
Figure 2-16 has a stuck-at-0 on line y1 as the target fault. The x1, x2, and x3 signals are the
primary inputs, and y2 is the primary output. The path sensitization procedure for this example
follows:
1. Find an input value that sets the fault site to the opposite of the desired value. In this
case, the process needs to determine the input values necessary at x1 and/or x2 that set
y1 to a 1, since the target fault is s-a-0. Setting x1 (or x2) to a 0 properly sets y1 to a 1.
2. Select a path to propagate the response of the fault site to a primary output. In this case,
the fault response propagates to primary output y2.
3. Specify the input values (in addition to those specified in step 1) to enable detection at
the primary output. In this case, in order to detect the fault at y1, the x3 input must be set
to a 1.
Fault Classes
FastScan and FlexTest categorize faults into fault classes, based on how the faults were detected
or why they could not be detected. Each fault class has a unique name and a two-character class code. When reporting faults, FastScan and FlexTest use either the class name or the class code
to identify the fault class to which the fault belongs.
Note
The tools may classify a fault in different categories, depending on the selected fault type.
Untestable
Untestable (UT) faults are faults for which no pattern can exist to either detect or possible-
detect them. Untestable faults cannot cause functional failures, so the tools exclude them when
calculating test coverage. Because the tools acquire some knowledge of faults prior to ATPG,
they classify certain unused, tied, or blocked faults before ATPG runs. When ATPG runs, it
immediately places these faults in the appropriate categories. However, redundant fault
detection requires further analysis.
• Unused (UU)
The unused fault class includes all faults on circuitry unconnected to any circuit
observation point. Figure 2-17 shows the site of an unused fault.
Figure 2-17. Example Site of an Unused Fault (s-a-1/s-a-0 on the unconnected QB output of a latch)
• Tied (TI)
The tied fault class includes faults on gates where the point of the fault is tied to a value
identical to the fault stuck value. The tied circuitry could be due to tied signals, or AND
and OR gates with complementary inputs. Another possibility is exclusive-OR gates
with common inputs. The tools will not use line holds (pins held at a constant logic
value during test and set by the FastScan and FlexTest Add Pin Constraints command)
to determine tied circuitry. Line holds, or pin constraints, do result in ATPG_untestable
faults. Figure 2-18 shows the site of a tied fault.
Figure 2-18. Example Site of a Tied Fault (signal A tied to GND; the tied value propagates through B, C, and D)
Because tied values propagate, the tied circuitry at A causes tied faults at A, B, C, and D.
• Blocked (BL)
The blocked fault class includes faults on circuitry for which tied logic blocks all paths
to an observable point. This class also includes faults on selector lines of multiplexers
that have identical data lines. Figure 2-19 shows the site of a blocked fault.
Figure 2-19. Example Site of a Blocked Fault
Note
Tied faults and blocked faults can be equivalent faults.
• Redundant (RE)
The redundant fault class includes faults the test generator considers undetectable. After
the test pattern generator exhausts all patterns, it performs a special analysis to verify
that the fault is undetectable under any conditions. Figure 2-20 shows the site of a
redundant fault.
Figure 2-20. Example Site of a Redundant Fault
In this circuit, signal G always has the value of 1, no matter what the values of A, B, and
C. If D is stuck at 1, this fault is undetectable because the value of G can never change,
regardless of the value at D.
Testable
Testable (TE) faults are all those faults that cannot be proven untestable. The testable fault
classes include:
• Detected (DT)
The detected fault class includes all faults that the ATPG process identifies as detected.
The detected fault class contains two subclasses:
o det_simulation (DS) - faults detected when the tool performs fault simulation.
o det_implication (DI) - faults detected when the tool performs learning analysis.
The det_implication subclass normally includes faults in the scan path circuitry, as
well as faults that propagate ungated to the shift clock input of scan cells. The scan
chain functional test, which detects a binary difference at an observation point,
guarantees detection of these faults. FastScan and FlexTest both provide the Update
Implication Detections command, which lets you specify additional types of faults
for this category. Refer to the Update Implication Detections command description
in the ATPG Tools Reference Manual.
For path delay testing, the detected fault class includes two other subclasses:
o det_robust (DR) - robust detected faults.
o det_functional (DF) - functionally detected faults.
For detailed information on the path delay subclasses, refer to “Path Delay Fault
Detection” on page 6-76.
• Posdet (PD)
The posdet, or possible-detected, fault class includes all faults that fault simulation
identifies as possible-detected but not hard detected. A possible-detected fault results
from a 0-X or 1-X difference at an observation point. The posdet class contains two
subclasses:
o posdet_testable (PT) - potentially detectable posdet faults. PT faults result when the
tool cannot prove the 0-X or 1-X difference is the only possible outcome. A higher
abort limit may reduce the number of these faults.
o posdet_untestable (PU) - proven ATPG_untestable and hard undetectable posdet
faults.
By default, the calculations give 50% credit for posdet faults. You can adjust the credit
percentage with the Set Possible Credit command.
Note
If you use FlexTest and change the posdet credit to 0, the tool does not place any faults in
this category.
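For example, a command of the following form lowers the credit from its 50% default; the argument is a percentage, and the exact syntax should be confirmed on the Set Possible Credit reference page:
prompt> set possible credit 25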
• Oscillatory (OS)
The oscillatory fault class includes faults whose simulated values oscillate and therefore cannot reach a stable status. To maintain fault simulation performance, the tool drops oscillatory faults from the simulation. The tool calculates test coverage by classifying oscillatory faults as posdet faults.
The oscillatory fault class contains two subclasses:
o osc_untestable (OU) - ATPG_untestable oscillatory faults
o osc_testable (OT) - all other oscillatory faults.
Note
These faults may stabilize after a long simulation time.
• Hypertrophic (HY)
The hypertrophic fault class includes faults whose effects spread so extensively through the design that the tool may drop them from fault simulation to maintain performance.
Note
Because these faults affect the circuit extensively, even though the tool may drop them from the fault list (with accompanying lower fault coverage numbers), hypertrophic faults are most likely detected.
• Uninitialized (UI)
The uninitialized fault class includes faults for which the test generator is unable to:
o find an initialization pattern that creates the opposite value of the faulty value at the fault pin.
o prove the fault is tied.
In sequential circuits, these faults indicate that the tool cannot initialize portions of the circuit.
• ATPG_untestable (AU)
The ATPG_untestable fault class includes all faults for which the test generator is
unable to find a pattern to create a test, and yet cannot prove the fault redundant.
Testable faults become ATPG_untestable faults because of constraints, or limitations,
placed on the ATPG tool (such as a pin constraint or an insufficient sequential depth).
These faults may be possible-detectable, or detectable, if you remove some constraint,
or change some limitation, on the test generator (such as removing a pin constraint or
changing the sequential depth). You cannot detect them by increasing the test generator
abort limit.
The tools place faults in the AU category based on the type of deterministic test
generation method used. That is, different test methods create different AU fault sets.
Likewise, FastScan and FlexTest can create different AU fault sets even using the same
test method. Thus, if you switch test methods (that is, change the fault type) or tools, you
should reset the AU fault list using the Reset Au Faults command.
Note
FastScan and FlexTest place AU faults in the testable category, counting the AU faults in
the test coverage metrics. You should be aware that most other ATPG tools drop these
faults from the calculations, and thus may inaccurately report higher test coverage.
• Undetected (UD)
The undetected fault class includes undetected faults that cannot be proven untestable or
ATPG_untestable. The undetected class contains two subclasses:
o uncontrolled (UC) - undetected faults, which during pattern simulation, never
achieve the value at the point of the fault required for fault detection—that is, they
are uncontrollable.
o unobserved (UO) - faults whose effects do not propagate to an observable point.
All testable faults prior to ATPG are put in the UC category. Faults that remain UC or
UO after ATPG are aborted, which means that a higher abort limit may reduce the
number of UC or UO faults.
Note
Uncontrolled and unobserved faults can be equivalent faults. If a fault is both
uncontrolled and unobserved, it is categorized as UC.
For any given level of the hierarchy, FastScan and FlexTest assign a fault to one—and only
one—class. If the tools can place a fault in more than one class of the same level, they place it in
the class that occurs first in the list of fault classes.
Fault Reporting
When reporting faults, FastScan and FlexTest identify each fault by three ordered fields: the
stuck value (0 or 1), the two-character fault class code, and the pin pathname of the fault site. If the
tools report uncollapsed faults, they display faults of a collapsed fault group together, with the
representative fault first followed by the other members (with EQ fault codes).
Testability Calculations
Given the fault classes explained in the previous sections, FastScan and FlexTest make the
following calculations:
• Test Coverage
Test coverage, which is a measure of test quality, is the percentage of faults detected
from among all testable faults. Typically, this is the number of most concern when you
consider the testability of your design. FastScan calculates test coverage using the
formula:
#DT + (#PD * posdet_credit)
——————————————————————————— x 100
#testable
• ATPG Effectiveness
ATPG effectiveness measures the ATPG tool’s ability to either create a test for a fault,
or prove that a test cannot be created for the fault under the restrictions placed on the
tool. FastScan calculates ATPG effectiveness using the formula:
#DT + #UT + #AU + #PU + (#PT * posdet_credit)
——————————————————————————————————————————— x 100
#full
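As a purely illustrative calculation with made-up numbers, suppose a design has 10,000 testable faults, of which 9,500 are detected (DT) and 200 are possible-detected (PD), with the default 50% posdet credit. The test coverage would then be:
9500 + (200 * 0.5)
------------------ x 100 = 96.0%
10000
ATPG effectiveness is calculated the same way, with the untestable, ATPG_untestable, and posdet_untestable faults added to the numerator and the full fault count used in the denominator.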
Now that you understand the basic ideas behind DFT, scan design and ATPG, you can
concentrate on the Mentor Graphics DFT tools and how they operate. DFTAdvisor, FastScan,
and FlexTest not only work toward a common goal (to improve test coverage), they also share
common terminology, internal processes, and other tool concepts, such as how to view the
design and the scan circuitry. Figure 3-1 shows the range of subjects common to the three tools.
The following subsections discuss common terminology and concepts associated with scan
insertion and ATPG using DFTAdvisor, FastScan, and FlexTest.
Scan Terminology
This section introduces the scan terminology common to DFTAdvisor, FastScan, and FlexTest.
Scan Cells
A scan cell is the fundamental, independently-accessible unit of scan circuitry, serving both as a
control and observation point for ATPG and fault simulation. You can think of a scan cell as a
black box composed of an input, an output and a procedure specifying how data gets from the
input to the output. The circuitry inside the black box is not important as long as the specified
procedure shifts data from input to output properly.
Because scan cell operation depends on an external procedure, scan cells are tightly linked to
the notion of test procedure files. “Test Procedure Files” on page 3-9 discusses test procedure
files in detail. Figure 3-2 illustrates the black box concept of a scan cell and its reliance on a test
procedure.
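To make this reliance concrete, the following is a minimal sketch of the kind of shift and load_unload procedures such a file might contain for a simple mux-type scan cell; the statement keywords (force, force_sci, measure_sco, pulse, apply) and the values shown are assumptions to check against the test procedure file documentation referenced above:
procedure shift =
    force_sci 0;
    measure_sco 0;
    pulse clk 1;
end;
procedure load_unload =
    force clk 0 0;
    force sc_en 1 0;
    apply shift 16 1;
end;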
Figure 3-3 gives one example of a scan cell implementation (for the mux-DFF scan type).
Figure 3-3. Mux-DFF Scan Cell (system data and sc_in selected by sc_en; clocked by clk; output sc_out)
Each memory element may have a set and/or reset line in addition to clock-data ports. The
ATPG process controls the scan cell by placing either normal or inverted data into its memory
elements. The scan cell observation point is the memory element at the output of the scan cell.
Other memory elements can also be observable, but may require a procedure for propagating
their values to the scan cell’s output. The following subsections describe the different memory
elements a scan cell may contain.
Master Element
The master element, the primary memory element of a scan cell, captures data directly from the
output of the previous scan cell. Each scan cell must contain one and only one master element.
For example, Figure 3-3 shows a mux-DFF scan cell, which contains only a master element.
However, scan cells can contain memory elements in addition to the master. Figures 3-4, 3-5,
and 3-6 illustrate examples of master elements in a variety of other scan cells.
The shift procedure in the test procedure file controls the master element. If the scan cell
contains no additional independently-clocked memory elements in the scan path, this procedure
also observes the master. If the scan cell contains additional memory elements, you may need to
define a separate observation procedure (called master_observe) for propagating the master
element’s value to the output of the scan cell.
Slave Element
The slave element, an independently-clocked scan cell memory element, resides in the scan
chain path. It cannot capture data directly from the previous scan cell. When used, it stores the
output of the scan cell. The shift procedure both controls and observes the slave element. The
value of the slave may be inverted relative to the master element. Figure 3-4 shows a slave
element within a scan cell.
Figure 3-4. Scan Cell with Master and Slave Elements (Aclk and sys_clk clock the master latch; Bclk clocks the slave latch that drives sc_out)
In the example of Figure 3-4, Aclk controls scan data input. Activating Aclk, with sys_clk
(which controls system data) held off, shifts scan data into the scan cell. Activating Bclk
propagates scan data to the output.
Shadow Element
The shadow element, either dependently- or independently-clocked, resides outside the scan
chain path. Figure 3-5 gives an example of a scan cell with an independently-clocked, non-
observable shadow element with a non-inverted value.
Figure 3-5. Scan Cell with an Independently Clocked, Non-Observable Shadow Element
You load a data value into the shadow element with either the shift procedure or, if
independently clocked, with a separate procedure called shadow_control. You can optionally
make a shadow observable using the shadow_observe procedure. A scan cell may contain
multiple shadows but only one may be observable, because the tools allow only one
shadow_observe procedure. A shadow element’s value may be the inverse of the master’s
value.
Copy Element
The copy element is a memory element that lies in the scan chain path and can contain the same
(or inverted) data as the associated master or slave element in the scan cell. Figure 3-6 gives an
example of a copy element within a scan cell in which a master element provides data to the
copy.
Figure 3-6. Scan Cell with a Copy Element (the master element provides data to the copy, which drives sc_out)
The clock pulse that captures data into the copy’s associated scan cell element also captures
data into the copy. Data transfers from the associated scan cell element to the copy element in
the second half of the same clock cycle.
During the shift procedure, a copy contains the same data as that in its associated memory
element. However, during system data capture, some types of scan cells allow copy elements to
capture different data. When the copy’s value differs from its associated element, the copy
becomes the observation point of the scan cell. When the copy holds the same data as its
associated element, the associated element becomes the observation point.
Extra Element
The extra element is an additional, independently-clocked memory element of a scan cell. An
extra element is any element that lies in the scan chain path between the master and slave
elements. The shift procedure controls data capture into the extra elements. These elements are
not observable. Scan cells can contain multiple extras. Extras can contain inverted data with
respect to the master element.
Scan Chains
A scan chain is a set of serially linked scan cells. Each scan chain contains an external input pin
and an external output pin that provide access to the scan cells. Figure 3-7 shows a scan chain,
with scan input “sc_in” and scan output “sc_out”.
Figure 3-7. Scan Chain (scan cells numbered N-1 through 0 between sc_in and sc_out)
The scan chain length (N) is the number of scan cells within the scan chain. By convention, the
scan cell closest to the external output pin is number 0, its predecessor is number 1, and so on.
Because the numbering starts at 0, the number for the scan cell connected to the external input
pin is equal to the scan chain length minus one (N-1).
Scan Groups
A scan chain group is a set of scan chains that operate in parallel and share a common test
procedure file. The test procedure file defines how to access the scan cells in all of the scan
chains of the group. Normally, all of a circuit’s scan chains operate in parallel and are thus in a
single scan chain group.
[Figure 3-8. A scan chain group: two scan chains (sci1/sco1 and sci2/sco2) share clk and sc_en and operate in parallel.]
You may have two clocks, A and B, each of which clocks different scan chains. You often can
clock, and therefore operate, the A and B chains concurrently, as shown in Figure 3-8.
However, if two chains share a single scan input pin, these chains cannot be operated in parallel.
Regardless of operation, all defined scan chains in a circuit must be associated with a scan
group. A scan group is a concept used by Mentor Graphics DFT and ATPG tools.
Scan groups are a way to group scan chains based on operation. All scan chains in a group must
be able to operate in parallel, which is normal for scan chains in a circuit. However, when scan
chains cannot operate in parallel, such as in the example above (sharing a common scan input
pin), the operation of each must be specified separately. This means the scan chains belong to
different scan groups.
Scan Clocks
Scan clocks are external pins capable of capturing values into scan cell elements. Scan clocks
include set and reset lines, as well as traditional clocks. Any pin defined as a clock can act as a
capture clock during ATPG. Figure 3-9 shows a scan cell whose scan clock signals are shown in
bold.
[Figure 3-9. A scan cell whose scan clock signals (CLR, CK1, and CK2) are shown in bold, with data inputs D1 and D2 and outputs Q1, Q1', Q2, and Q2'.]
In addition to capturing data into scan cells, scan clocks, in their off state, ensure that the cells
hold their data. Design rule checks ensure that clocks perform both functions. A clock’s off-
state is the primary input value that results in a scan element’s clock input being at its inactive
state (for latches) or state prior to a capturing transition (for edge-triggered devices). In the case
of Figure 3-9, the off-state for the CLR signal is 1, and the off-states for CK1 and CK2 are both
0.
Scan Architectures
You can choose from a number of different scan types, or scan architectures. DFTAdvisor, the
Mentor Graphics internal scan synthesis tool, supports the insertion of mux-DFF (mux-scan),
clocked-scan, and LSSD architectures. Additionally, DFTAdvisor supports all standard scan
types, or combinations thereof, in designs containing pre-existing scan circuitry. You can use
the Set Scan Type command (see page 5-8) to specify the type of scan architecture you want
inserted in your design.
Each scan style provides different benefits. Mux-DFF or clocked-scan are generally the best
choice for designs with edge-triggered flip-flops. Additionally, clocked-scan ensures data hold
for non-scan cells during scan loading. LSSD is most effective on latch-based designs.
The following subsections detail the mux-DFF, clocked-scan, and LSSD architectures.
Mux-DFF
A mux-DFF cell contains a single D flip-flop with a multiplexed input line that allows selection
of either normal system data or scan data. Figure 3-10 shows the replacement of an original
design flip-flop with mux-DFF circuitry.
[Figure 3-10. Original flip-flop replaced by a mux-DFF scan cell: a multiplexer controlled by sc_en selects data or sc_in, and the flip-flop (clocked by CLK) drives Q and sc_out.]
In normal operation (sc_en = 0), system data passes through the multiplexer to the D input of
the flip-flop, and then to the output Q. In scan mode (sc_en = 1), scan input data (sc_in) passes
to the flip-flop, and then to the scan output (sc_out).
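The mux-DFF behavior can be summarized in a few lines of Python; this is a behavioral sketch with assumed signal names, not tool code.

# Behavioral sketch of a mux-DFF scan cell (illustrative only).

class MuxDFF:
    def __init__(self):
        self.q = 0                            # flip-flop state; drives Q and sc_out

    def clock_edge(self, data, sc_in, sc_en):
        """On an active CLK edge, sc_en selects scan data (1) or system data (0)."""
        self.q = sc_in if sc_en else data
        return self.q

cell = MuxDFF()
cell.clock_edge(data=0, sc_in=1, sc_en=1)     # scan mode: sc_in is captured
cell.clock_edge(data=1, sc_in=0, sc_en=0)     # normal mode: system data is captured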
Clocked-Scan
The clocked-scan architecture is very similar to the mux-DFF architecture, but uses a dedicated
test clock to shift in scan data instead of a multiplexer. Figure 3-11 shows an original design
flip-flop replaced with clocked-scan circuitry.
[Figure 3-11. Original flip-flop replaced by a clocked-scan cell: data is captured by sys_clk and sc_in is captured by sc_clk, with the output driving Q and sc_out.]
In normal operation, the system clock (sys_clk) clocks system data (data) into the circuit and
through to the output (Q). In scan mode, the scan clock (sc_clk) clocks scan input data (sc_in)
into the circuit and through to the output (sc_out).
LSSD
LSSD, or Level-Sensitive Scan Design, uses three independent clocks to capture data into the
two polarity hold latches contained within the cell. Figure 3-12 shows the replacement of an
original design latch with LSSD circuitry.
[Figure 3-12. Original latch replaced by an LSSD scan cell: the master latch captures data (sys_clk) or sc_in (Aclk), the slave latch (Bclk) drives sc_out, and Q is taken from the master latch.]
In normal mode, the master latch captures system data (data) using the system clock (sys_clk)
and sends it to the normal system output (Q). In test mode, the two clocks (Aclk and Bclk)
trigger the shifting of test data through both master and slave latches to the scan output (sc_out).
There are several varieties of the LSSD architecture, including single latch, double latch, and
clocked LSSD.
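The two-clock shift sequence can likewise be sketched in Python; this is a simplified illustration of the A clock and B clock behavior described above, not a model of any particular LSSD library cell.

# Behavioral sketch of LSSD scan shifting (illustrative only).
# Aclk loads scan data into the master latch, sys_clk loads system data,
# and Bclk copies the master value to the slave latch, which drives sc_out.

class LSSDCell:
    def __init__(self):
        self.master = 0
        self.slave = 0                 # slave latch output is sc_out

    def pulse_aclk(self, sc_in):       # scan data capture into the master latch
        self.master = sc_in

    def pulse_sys_clk(self, data):     # system data capture into the master latch
        self.master = data

    def pulse_bclk(self):              # propagate the master value to the scan output
        self.slave = self.master
        return self.slave

cell = LSSDCell()
cell.pulse_aclk(sc_in=1)               # one shift: pulse Aclk, then Bclk
sc_out = cell.pulse_bclk()             # sc_out now carries the shifted bit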
If your design contains scan circuitry, FastScan and FlexTest require a test procedure file. You
must create one before running ATPG with FastScan or FlexTest.
For more information on the new test procedure file format, see the “Test Procedure File”
chapter of the Design-for-Test Common Resources Manual, which describes the syntax and
rules of test procedure files, gives examples for the various types of scan architectures, and
outlines the checking that determines whether the circuitry is operating correctly.
Model Flattening
To work properly, FastScan, FlexTest, and DFTAdvisor must use their own internal
representations of the design. The tools create these internal design models by flattening the
model and replacing the design cells in the netlist (described in the library) with their own
primitives. The tools flatten the model when you initially attempt to exit the Setup mode, just
prior to design rules checking. FastScan and FlexTest also provide the Flatten Model command,
which allows flattening of the design model while still in Setup mode.
If a flattened model already exists when you exit the Setup mode, the tools will only reflatten
the model if you have since issued commands that would affect the internal representation of
the design. For example, adding or deleting primary inputs, tying signals, and changing the
internal faulting strategy are changes that affect the design model. With these types of changes,
the tool must re-create or re-flatten the design model. If the model is undisturbed, the tool keeps
the original flattened model and does not attempt to reflatten.
For a list of the specific DFTAdvisor commands that cause flattening, refer to the Set System
Mode command page in the DFTAdvisor Reference Manual. For FastScan and FlexTest related
commands, see below:
Related Commands
Flatten Model - creates a primitive gate simulation representation of the design.
Report Flatten Rules - displays either a summary of all the flattening rule
violations or the data for a specific violation.
Set Flatten Handling - specifies how the tool handles flattening violations.
[Figure 3-13. Hierarchical design before flattening: top level /Top with primary inputs A, B, C, D, and E, an AOI cell instance AOI1 (output Y), and an AND instance AND1 (output Z).]
Figure 3-14 shows this same design once it has been flattened.
[Figure 3-14. The same design flattened to simulation primitives: instance AOI1 becomes two AND gates and a NOR gate named /Top/AOI1, and pin pathnames such as /Top/AOI1/B and /Top/AND1/B identify the original instance pins.]
After flattening, only naming preserves the design hierarchy; that is, the flattened netlist
maintains the hierarchy through instance naming. Figures 3-13 and 3-14 show this hierarchy
preservation. /Top is the name of the hierarchy’s top level. The simulation primitives (two AND
gates and a NOR gate) represent the flattened instance AOI1 within /Top. Each of these
flattened gates retains the original design hierarchy in its naming—in this case, /Top/AOI1.
The tools identify pins from the original instances by hierarchical pathnames as well. For
example, /Top/AOI1/B in the flattened design specifies input pin B of instance AOI1. This
naming distinguishes it from input pin B of instance AND1, which has the pathname
/Top/AND1/B. By default, pins introduced by the flattening process remain unnamed and are
not valid fault sites. If you request gate reporting on one of the flattened gates, the NOR gate for
example, you will see a system-defined pin name shown in quotes. If you want internal faulting
in your library cells, you must specify internal pin names within the library model. The
flattening process then retains these pin names.
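The naming scheme itself is easy to mimic; the short Python sketch below is purely illustrative and simply joins hierarchy levels the way the flattened pathnames in Figure 3-14 are formed.

# Illustrative sketch of hierarchical pin pathnames preserved through flattening.

def pin_pathname(*levels):
    """Join hierarchy levels into a flattened pin pathname such as /Top/AOI1/B."""
    return "/" + "/".join(levels)

print(pin_pathname("Top", "AOI1", "B"))   # /Top/AOI1/B  (pin B of instance AOI1)
print(pin_pathname("Top", "AND1", "B"))   # /Top/AND1/B  (the distinct pin B of AND1)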
You should be aware that in some cases, the design flattening process can appear to introduce
new gates into the design. For example, when flattening decomposes a DFF gate into a DFF
simulation primitive, the Q and Q’ outputs require buffer and inverter gates, respectively. If your
design wires together multiple drivers, flattening adds wire gates or bus gates. Bidirectional pins
are another special case that requires additional gates in the flattened representation. The
flattened model is built from simulation primitives such as the following:
• PI, PO - primary inputs are gates with no inputs and a single output, while primary
outputs are gates with a single input and no fanout.
• BUF - a single-input gate that passes the values 0, 1, or X through to the output.
• FB_BUF - a single-input gate, similar to the BUF gate, that provides a one iteration
delay in the data evaluation phase of a simulation. The tools use the FB_BUF gate to
break some combinational loops and provide more optimistic behavior than when TIEX
gates are used.
Note
There can be one or more loops in a feedback path. In Atpg mode, you can display the
loops with the Report Loops command. In Setup mode, use Report Feedback Paths.
The default loop handling is simulation-based, with the tools using the FB_BUF to
break the combinational loops. In Setup mode, you can change the default with the Set
Loop Handling command. Be aware that changes to loop handling will have an impact
during the flattening process.
• ZVAL - a single-input gate that acts as a buffer unless Z is the input value. When a Z is
the input value, the output is an X. You can modify this behavior with the Set Z
Handling command.
• INV - a single-input gate whose output value is the opposite of the input value. The INV
gate cannot accept a Z input value.
• AND, NAND - multiple-input gates (two to four) that act as standard AND and NAND
gates.
• OR, NOR - multiple-input (two to four) gates that act as standard OR and NOR gates.
• XOR, XNOR - 2-input gates that act as XOR and XNOR gates, except that when either
input is an X, the output is an X.
• MUX - a 2x1 mux gate whose pins are order dependent, as shown in Figure 3-15.
[Figure 3-15. The MUX primitive: inputs sel, d1, and d2; output out.]
The sel input is the first defined pin, followed by the first data input and then the second
data input. When sel=0, the output is d1. When sel=1, the output is d2 (see the behavioral
sketch following this list).
Note
FlexTest uses a different pin naming and ordering scheme, which is the same ordering as
the _mux library primitive; that is, in0, in1, and cnt. In this scheme, cnt=0 selects in0 data
and cnt=1 selects in1 data.
• LA, DFF - state elements, whose order dependent inputs include set, reset, and
clock/data pairs, as shown in Figure 3-16.
[Figure 3-16. The LA/DFF primitive: inputs set, reset, and clock/data pairs (C1/D1, C2/D2); output out.]
Set and reset lines are always level sensitive, active high signals. DFF clock ports are
edge-triggered while LA clock ports are level sensitive. When set=1, out=1. When
reset=1, out=0. When a clock is active (for example C1=1), the output reflects its
associated data line value (D1). If multiple clocks are active and the data they are trying
to place on the output differs, the output becomes an X.
• TLA, STLA, STFF - special types of learned gates that act as, and pass the design rule
checks for, transparent latch, sequential transparent latch, or sequential transparent flip-
flop. These gates propagate values without holding state.
• TIE0, TIE1, TIEX, TIEZ - zero-input, single-output gates that represent the effect of a
signal tied to ground or power, or a pin or state element constrained to a specific value
(0,1,X, or Z). The rules checker may also determine that state elements exhibit tied
behavior and will then replace them with the appropriate tie gates.
• TSD, TSH - a 2-input gate that acts as a tri-state™ driver, as shown in Figure 3-17.
[Figure 3-17. The TSD primitive: enable input en, data input d, and output out.]
When en=1, out=d. When en=0, out=Z. The data line, d, cannot be a Z. FastScan uses
the TSD gate, while FlexTest uses the TSH gate for the same purpose.
• SW, NMOS - a 2-input gate that acts like a tri-state driver but can also propagate a Z
from input to output. FastScan uses the SW gate, while FlexTest uses the NMOS gate
for the same purpose.
• BUS - a multiple-input (up to four) gate whose drivers must include at least one TSD or
SW gate. If you bus more than four tri-state drivers together, the tool creates cascaded
BUS gates. The last bus gate in the cascade is considered the dominant bus gate.
• WIRE - a multiple-input gate that differs from a bus in that none of its drivers are tri-
statable.
• PBUS, SWBUS - a 2-input pull bus gate, for use when you combine strong bus and
weak bus signals together, as shown in Figure 3-18.
[Figure 3-18. The PBUS gate: a strong BUS value and a weak value (for example, a TIE0 pull-down) combine through PBUS and feed a ZVAL gate.]
The strong value always goes to the output, unless the value is a Z, in which case the
weak value propagates to the output. These gates model pull-up and pull-down resistors.
FastScan uses the PBUS gate, while FlexTest uses the SWBUS gate.
• ZHOLD - a single-input buskeeper gate (see page 3-22 for more information on
buskeepers) associated with a tri-state network that exhibits sequential behavior. If the
input is a binary value, the gate acts as a buffer. If the input value is a Z, the output
depends on the gate’s hold capability. There are three ZHOLD gate types, each with a
different hold capability:
o ZHOLD0 - When the input is a Z, the output is a 0 if its previous state was 0. If its
previous state was a 1, the output is a Z.
o ZHOLD1 - When the input is a Z, the output is a 1 if its previous state was a 1. If its
previous state was a 0, the output is a Z.
o ZHOLD0,1 - When the input is a Z, the output is a 0 if its previous state was a 0, or
the output is a 1 if its previous state was a 1.
In all three cases, if the previous value is unknown, the output is X.
• RAM, ROM- multiple-input gates that model the effects of RAM and ROM in the
circuit. RAM and ROM differ from other gates in that they have multiple outputs.
• OUT - gates that convert the outputs of multiple output gates (such as RAM and ROM
simulation gates) to a single output.
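The behavior of several of these primitives is easy to capture in a short behavioral sketch. The Python below is an illustration only (it is not the tools' internal code, and the handling of a simultaneously active set and reset is an assumption made for the sketch); it models the MUX select behavior in both the FastScan and FlexTest pin orders, the set/reset/clock resolution of the LA and DFF primitives, and the three ZHOLD hold capabilities.

# Illustrative models of a few simulation primitives (not the tools' internal code).
X, Z = "X", "Z"

def mux_fastscan(sel, d1, d2):
    """FastScan pin order: sel, then the first data input d1, then d2."""
    return d1 if sel == 0 else d2

def mux_flextest(cnt, in0, in1):
    """FlexTest pin order (same as the _mux library primitive): cnt, in0, in1."""
    return in0 if cnt == 0 else in1

def la_dff(set_, reset, clock_data_pairs, old_out):
    """LA/DFF resolution: set and reset are level-sensitive, active high; an active
    clock drives its data value; conflicting active data gives X.  Treating a
    simultaneous set and reset as X is an assumption made for this sketch."""
    if set_ == 1 and reset == 1:
        return X
    if set_ == 1:
        return 1
    if reset == 1:
        return 0
    driven = {data for clk, data in clock_data_pairs if clk == 1}
    if not driven:
        return old_out                  # no active clock: the element holds its value
    return driven.pop() if len(driven) == 1 else X

def zhold(kind, value, prev):
    """ZHOLD0 / ZHOLD1 / ZHOLD01: a buffer for binary inputs; hold behavior on Z."""
    if value != Z:
        return value
    if prev == X:
        return X
    if kind == "ZHOLD0":
        return 0 if prev == 0 else Z
    if kind == "ZHOLD1":
        return 1 if prev == 1 else Z
    return prev                         # ZHOLD0,1 holds either previous binary value

print(mux_fastscan(0, "d1_val", "d2_val"))           # d1_val
print(la_dff(0, 0, [(1, 1), (1, 0)], old_out=0))     # X (conflicting active clocks)
print(zhold("ZHOLD0", Z, prev=0))                    # 0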
Learning Analysis
After design flattening, FastScan and FlexTest perform extensive analysis on the design to learn
behavior that may be useful for intelligent decision making in later processes, such as fault
simulation and ATPG. You have the ability to turn learning analysis off, which may be
desirable if you do not want to perform ATPG during the session. For more information on
turning learning analysis off, refer to the Set Static Learning command or the Set Sequential
Learning command reference pages in the ATPG Tools Reference Manual.
The ATPG tools perform static learning only once—after flattening. Because pin and ATPG
constraints can change the behavior of the design, static learning does not consider these
constraints. Static learning involves gate-by-gate local simulation to determine information
about the design. The following subsections describe the types of analysis performed during
static learning.
Equivalence Relationships
During this analysis, simulation traces back from the inputs of a multiple-input gate through a
limited number of gates to identify points in the circuit that always have the same values in the
good machine. Figure 3-19 shows an example of two of these equivalence points within some
circuitry.
[Figure 3-19. Two equivalence points identified within example circuitry.]
Logic Behavior
During logic behavior analysis, simulation determines a circuit’s functional behavior. For
example, Figure 3-20 shows some circuitry that, according to the analysis, acts as an inverter.
If the analysis process yields no information for a particular category, it does not issue the
corresponding message.
Implied Relationships
This type of analysis consists of contrapositive relation learning, or learning implications, to
determine that one value implies another. This learning analysis simulates nearly every gate in
the design, attempting to learn every relationship possible. Figure 3-21 shows the implied
learning the analysis derives from a piece of circuitry.
Forbidden Relationships
During forbidden relationship analysis, which is restricted to bus gates, simulation determines
that one gate cannot be at a certain value if another gate is at a certain value. Figure 3-22 shows
an example of such behavior.
[Figure 3-22. Forbidden relationship example: two bus networks driven by TSD gates with tied enable and data values; a 1 at each BUS output would be forbidden.]
Dominance Relationships
During dominance relationship analysis, simulation determines which gates are dominators. If
all the fanouts of a gate go to a second gate, the second gate is the dominator of the first.
Figure 3-23 shows an example of this relationship.
[Figure 3-23. Dominance relationship example: all fanouts of gate A feed gate B, so gate B is the dominator of gate A.]
Many designs contain buses, but good design practices usually prevent bus contention. As a
check, the learning analysis for buses determines if a contention condition can occur within the
given circuitry. Once learning determines that contention cannot occur, none of the later
processes, such as ATPG, ever check for the condition.
Buses in a Z-state network can be classified as dominant or non-dominant and strong or weak.
Weak buses and pull buses are allowed to have contention. Thus the process only analyzes
strong, dominant buses, examining all drivers of these gates and performing full ATPG analysis
of all combinations of two drivers being forced to opposite values. Figure 3-25 demonstrates
this process on a simple bus system.
[Figure 3-25. Bus mutual-exclusivity analysis of a simple bus: two TSD drivers (enables E1 and E2, data D1 and D2) drive a BUS gate. Analysis tries E1=1, E2=1, D1=0, D2=1 and E1=1, E2=1, D1=1, D2=0.]
If ATPG analysis determines that either of the two conditions shown can be met, the bus fails
bus mutual-exclusivity checking. Likewise, if the analysis proves the condition is never
possible, the bus passes these checks. A third possibility is that the analysis aborts before it
completes trying all of the possibilities. In this circuit, there are only two drivers, so ATPG
analysis need try only two combinations. However, as the number of drivers increases, the
ATPG analysis effort grows significantly.
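To picture how the effort grows, the sketch below (illustrative only) simply enumerates the conditions the analysis must try: for every pair of drivers, both enables active with the data lines forced to opposite values.

# Illustrative enumeration of bus mutual-exclusivity conditions (not tool code).
from itertools import combinations

def exclusivity_conditions(drivers):
    """For each pair of drivers, yield the two conditions to try: both enables
    on, data lines forced to opposite values."""
    for a, b in combinations(drivers, 2):
        yield {"E" + a: 1, "E" + b: 1, "D" + a: 0, "D" + b: 1}
        yield {"E" + a: 1, "E" + b: 1, "D" + a: 1, "D" + b: 0}

print(len(list(exclusivity_conditions(["1", "2"]))))            # 2 conditions for 2 drivers
print(len(list(exclusivity_conditions(["1", "2", "3", "4"]))))  # 12 conditions for 4 drivers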
You should resolve bus mutual-exclusivity before ATPG. Extra rules E4, E7, E9, E10, E11,
E12, and E13 perform bus analysis and contention checking. Refer to “Extra Rules” in the
Design-for-Test Common Resources Manual for more information on these bus checking rules.
Trace rules violations are either errors or warnings, and for most rules you cannot change the
handling. The “Scan Chain Trace Rules” section in the Design-for-Test Common Resources
Manual describes the trace rules in detail.
If the circuitry allows, you can also make a shadow an observation point by writing a
shadow_observe test procedure. The section entitled “Shadow Element” on page 3-3 discusses
shadows in more detail.
The DRC process identifies shadow latches under the following conditions:
Between the PI force and PO measure, the tool constrains all pins and sets all clocks off. Thus,
for a latch to qualify as transparent, the analysis must determine that it can be turned on when
clocks are off and pins are constrained. TLA simulation gates, which rank as combinational,
represent transparent latches.
[Figure: two tri-state devices driving a BUS gate whose output feeds a ZHOLD (buskeeper) gate.]
Rules checking determines the values of ZHOLD gates when clocks are off, pin constraints are
set, and the gates are connected to clock, write, and read lines. ZHOLD gates connected to
clock, write, and read lines do not retain values unless the clock off-states and constrained pins
result in binary values.
During rules checking, if a design contains ZHOLD gates, messages indicate when ZHOLD
checking begins, the number and type of ZHOLD gates, the number of ZHOLD gates connected
to clock, write, and read lines, and the number of ZHOLD gates set to a binary value during the
clock off-state condition.
Note
Only FastScan requires this type of analysis, because of the way it “flattens” or simulates
a number of events in a single operation.
For information on the bus_keeper model attribute, refer to “Inout and Output Attributes” in the
Design-for-Test Common Resources Manual.
Figure 3-27 gives an example of a tie value gate that constrains some surrounding circuitry.
[Figure 3-27. A constrained primary input (TIE0) and the resulting constrained values of 0 in the surrounding circuitry.]
Figure 3-28 gives an example of a tied gate, and the resulting forbidden values of the
surrounding circuitry.
[Figure 3-28. A TIEX gate and the resulting forbidden values in the surrounding circuitry.]
Figure 3-29 gives an example of a tied gate that blocks fault effects in the surrounding circuitry.
[Figure 3-29. A tied value blocking fault effects: the output downstream of the tied value is always X.]
Testability naturally varies from design to design. Some features and design styles make a
design difficult, if not impossible, to test, while others enhance a design's testability. Figure 4-1
shows the testability issues this section discusses.
[Figure 4-1. Testability issues discussed in this section: 1. Synchronous Circuitry, 2. Asynchronous Circuitry, 3. Scannability Checking, 4. Support for Special Testability Cases.]
The following subsections discuss these design features and describe their effect on the design's
testability.
Synchronous Circuitry
Using synchronous design practices, you can help ensure that your design will be both testable
and manufacturable. In the past, designers used asynchronous design techniques with TTL and
small PAL-based circuits. Today, however, designers can no longer use those techniques
because the organization of most gate arrays and FPGAs necessitates the use of synchronous
logic in their design.
A synchronous circuit operates properly and predictably in all modes of operation, from static
DC up to the maximum clock rate. Inputs to the circuit do not cause the circuit to assume
unknown states. And regardless of the relationship between the clock and input signals, the
circuit avoids improper operation.
Truly synchronous designs are inherently testable designs. You can implement many scan
strategies, and run the ATPG process with greater success, if you use synchronous design
techniques. Moreover, you can create most designs following these practices with no loss of
speed or functionality.
Asynchronous Circuitry
A small percentage of designs need some asynchronous circuitry due to the nature of the
system. Because asynchronous circuitry is often very difficult to test, you should place the
asynchronous portions of your design in one block and isolate that block from the rest of the circuitry. In
this way, you can still utilize DFT techniques on the synchronous portions of your design.
Scannability Checking
DFTAdvisor performs the scannability checking process on a design’s sequential elements. For
the tool to insert scan circuitry into a design, it must replace existing sequential elements with
their scannable equivalents. Before beginning substitution, the original sequential elements in
the design must pass scannability checks; that is, the tool determines if it can convert sequential
elements to scan elements without additional circuit modifications. Scannable sequential
elements pass the following checks:
1. When all clocks are off, all clock inputs (including set and reset inputs) of the sequential
element must be in their inactive state (initial state of a capturing transition). This
prevents disturbance of the scan chain data before application of the test pattern at the
primary input. If the sequential element does not pass this check, its scan values could
become unstable when the test tool applies primary input values. This checking is a
modification of rule C1. For more information on this rule, refer to “C1 (Clock Rule
#1)” in the Design-for-Test Common Resources Manual.
2. Each clock input (not including set and reset inputs) of the sequential element must be
capable of capturing data when a single clock primary input goes active while all other
clocks are inactive. This rule ensures that this particular storage element can capture
system data. If the sequential element does not meet this rule, some loss of test coverage
could result. This checking is a modification of rule C7. For more information on this
rule, refer to “C7 (Clock Rule #7)” in the Design-for-Test Common Resources Manual.
When a sequential element passes these checks, it becomes a scan candidate, meaning that
DFTAdvisor can insert its scan equivalent into the scan chain. However, even if the element
fails to pass one of these checks, it may still be possible to convert the element to scan. In many
cases, you can add additional logic, called test logic, to the design to remedy the situation. For
more information on test logic, refer to “Enabling Test Logic Insertion” on page 5-9.
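Conceptually, the two checks reduce to a simple predicate; the Python below is only a sketch of that idea (the inputs are assumed to come from the tool's own clock analysis) and is not DFTAdvisor code.

# Conceptual sketch of scannability checking (illustrative, not DFTAdvisor code).

def is_scan_candidate(clock_inputs_inactive_when_clocks_off,
                      clocks_that_capture_alone,
                      element_clock_inputs):
    """Check 1 (C1-like): with all defined clocks at their off-states, every clock
    input of the element, including set and reset, must be inactive.
    Check 2 (C7-like): each clock input (excluding set and reset) must capture
    data when it is the only active clock primary input."""
    if not clock_inputs_inactive_when_clocks_off:
        return False
    return all(c in clocks_that_capture_alone for c in element_clock_inputs)

# Example: an element clocked only by CLK, which can capture when CLK alone is pulsed.
print(is_scan_candidate(True, {"CLK"}, ["CLK"]))    # True: the element is a scan candidate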
Note
If TIE0 and TIE1 nonscan cells are scannable, they are considered for scan. However, if
these cells are used to hold off sets and resets of other cells so that another cell can be
scannable, you must use the Add Nonscan Instances command to make them nonscan.
Feedback Loops
Designs containing loop circuitry have inherent testability problems. A structural loop exists
when a design contains a portion of circuitry whose output, in some manner, feeds back to one
of its inputs. A structural combinational loop occurs when the feedback loop, the path from the
output back to the input, passes through only combinational logic. A structural sequential loop
occurs when the feedback path passes through one or more sequential elements.
The tools, FastScan, FlexTest, and DFTAdvisor, all provide some common loop analysis and
handling. However, loop treatment can vary depending on the tool. The following subsections
discuss the treatment of structural combinational and structural sequential loops.
[Figure 4-2. A structural combinational loop formed by a single gate with multiple fanout (inputs A, B, C; output P), with its truth table: ABC = 000 gives 0, 001 gives 1, 010 gives 0, 011 gives 0, 100 gives 0, 101 gives X, 110 gives 0, 111 gives 0.]
The flattening process, which each tool runs as it attempts to exit Setup mode, identifies and
cuts, or breaks, all structural combinational loops. The tools classify and cut each loop using the
appropriate methods for each category.
The following list presents the loop classifications, as well as the loop-cutting methods
established for each. The order of the categories presented indicates the least to most pessimistic
loop cutting solutions.
1. Constant value
This loop cutting method involves those loops blocked by tied logic or pin constraints.
After the initial loop identification, the tools simulate TIE0/TIE1 gates and constrained
inputs. Loops that contain constant value gates as a result of this simulation fall into this
category.
Figure 4-3 shows a loop with a constrained primary input value that blocks the loop’s
feedback effects.
[Figure 4-3. A constant value loop: a constrained primary input (C0) holds part of the combinational logic at 0, blocking the loop's feedback effects.]
These types of loops lend themselves to the simplest and least pessimistic breaking
procedures. For this class of loops, the tool inserts a TIE-X gate at a non-constrained
input (which lies in the feedback path) of the constant value gate, as Figure 4-4 shows.
[Figure 4-4. Cutting a constant value loop: a TIEX gate is inserted at a non-constrained input of the constant value gate in the feedback path.]
This loop cutting technique yields good circuit simulation that always matches the
actual circuit behavior, and thus, the tools employ this technique whenever possible. The
tools can use this loop cutting method for blocked loops containing AND, OR, NAND,
and NOR gates, as well as MUX gates with constrained select lines and tri-state drivers
with constrained enable lines.
2. Single gate with “multiple fanout”
This loop cutting method involves loops containing only a single gate with multiple
fanout.
Figure 4-2 on page 4-4 shows the circuitry and truth table for a single multiple-fanout
loop. For this class of loops, the tool cuts the loop by inserting a TIE-X gate at one of the
fanouts of this “multiple fanout gate” that lie in the loop path, as Figure 4-5 shows.
[Figure 4-5. Cutting a single multiple-fanout loop: a TIEX gate is inserted at one of the fanouts of the multiple-fanout gate that lies in the loop path (truth table as in Figure 4-2).]
3. Gate duplication
This loop cutting method reduces the pessimism of direct TIE-X insertion by duplicating
the gates in the loop. Figure 4-6 shows an SR latch loop used to illustrate the technique.
[Figure 4-6. An SR latch loop with inputs A and B and nodes P, Q, and R, and its truth table: AB = 00 gives PQR = 001, 01 gives XXX, 10 gives 010, 11 gives 010.]
Figure 4-7 shows how TIE-X insertion would add some pessimism to the simulation at
output P.
[Figure 4-7. A TIEX gate inserted directly into the SR latch loop; the modified truth table shows the ambiguity added by TIE-X insertion (for example, AB = 11 now gives PQR = X10).]
This loop breaking technique proves beneficial in many cases. As Figure 4-8 shows, it
provides a more accurate simulation model than the direct TIE-X insertion approach.
[Figure 4-8. Loop cutting by gate duplication: duplicating the loop circuitry removes the ambiguity at output P that direct TIE-X insertion introduced, so AB = 11 again gives PQR = 010.]
However, it also has some drawbacks. While less pessimistic than the other approaches
(except breaking constant value loops), the gate duplication process can still introduce
some pessimism into the simulation model.
Additionally, this technique can prove costly in terms of gate count as the loop size
increases. Also, the tools cannot use this method on complex or coupled loops—those
loops that connect with other loops (because gate duplication may create loops as well).
4. Coupling loops
The tools use this technique to break loops when two or more loops share a common
gate. This method involves inserting a TIE-X gate at the input of one of the components
within a loop. The process selects the cut point carefully to ensure the TIE-X gate cuts as
many of the coupled loops as possible.
For example, assume the SR latch shown in Figure 4-6 was part of a larger, more
complex, loop coupling network. In this case, loop circuitry duplication would turn into
an iterative process that would never converge. So, the tools would have to cut the loop
as shown in Figure 4-9.
[Figure 4-9. Cutting a coupling loop with a TIEX gate at the input of one loop component, and the modified truth table: AB = 00 gives PQ = 11, 01 gives 1X, 10 gives 01, 11 gives XX.]
The modified truth table shown in Figure 4-9 demonstrates that this method yields the
most pessimistic simulation results of all the loop-cutting methods. Because this is the
most pessimistic solution to the loop cutting problem, the tools only use this technique
when they cannot use any of the previous methods.
The Set Loop Handling command controls how the tool breaks combinational loops:
SET LOop Handling {Tiex [-Duplication {ON | OFf}]} | {Simulation [-Iterations n]}
A learning process identifies feedback networks after flattening, and an iterative simulation is
used in the feedback network. For an iterative simulation, FastScan inserts FB_BUF gates to
break the combinational loops. Although you can define the number of iterations used to
stabilize values in the feedback networks, excessive values will reduce performance and
increase memory usage.
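The role of the iteration limit can be pictured with a small sketch (illustrative only, not the FastScan algorithm): the loop's node functions are re-evaluated until the values stop changing or the iteration budget is exhausted, at which point the unresolved nodes fall back to X, much as a loop broken by an FB_BUF would.

# Illustrative sketch of iterative simulation of a combinational feedback network.
X = "X"

def simulate_loop(node_funcs, initial_values, max_iterations):
    """Re-evaluate each node of the loop until the values stabilize.  If they do
    not settle within max_iterations, report X for every node in the loop."""
    values = dict(initial_values)
    for _ in range(max_iterations):
        new_values = {name: fn(values) for name, fn in node_funcs.items()}
        if new_values == values:
            return values                        # stable: keep the settled values
        values = new_values
    return {name: X for name in node_funcs}      # did not settle: pessimistic X

# A trivial self-stabilizing loop: node 'a' is the AND of itself and an input held at 1.
funcs = {"a": lambda v: v["a"] & 1}
print(simulate_loop(funcs, {"a": 1}, max_iterations=4))   # {'a': 1}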
FastScan also has the ability to insert TIE-X gates to break the combinational loops. The gate
duplication option reduces the impact that a TIE-X gate places on the circuit to break
combinational loops. By default, this duplication switch is off.
Note
The Set Loop Handling command replaces functionality previously available by the Set
Loop Duplication command.
FlexTest handles combinational loops using one of two methods:
• Simulation Method
In some cases, using TIEX gates decreases test coverage and causes DRC failures and
bus contentions. Also, using delay elements can cause overly optimistic test coverage and
create output mismatches and bus contentions. Therefore, by default, FlexTest uses a
simulation process to stabilize values in the combinational loop.
FlexTest can perform DRC simulation of circuits containing combinational feedback
networks: a learning process identifies the feedback networks after flattening, and an
iterative simulation process is then used in each feedback network. The state of a
feedback network is not maintained from one cycle of a sequential pattern to the next.
Some loop structures may not contain loop behavior. The FlexTest loop cutting point
has buffer behavior. However, if loop behavior exists, this buffer has an unknown
output. Essentially, during good simulation, this buffer is always initialized to have an
unknown output value at each time frame. Its value stays unknown until a dominant
value is generated from outside the loop.
To improve performance, for each faulty machine during fault simulation, this loop
cutting buffer does not start with an unknown value. Instead, the good machine value is
the initial value. However, if the value is changed to the opposite value, an unknown
value is then used the first time to ensure loop behavior is properly simulated.
During test generation, this loop cutting buffer has a large SCOAP controllability
number for each simulation value.
• TIEX or DELAY gate insertion
Because of its sequential nature, FlexTest can insert a DELAY element, instead of a
TIE-X gate, as a means to break loops. The DELAY gate retains the new data for one
timeframe before propagating it to the next element in the path. Figure 4-10 shows a
DELAY element inserted to break a feedback path.
[Figure 4-10. A DELAY element inserted to break a feedback path.]
Because FlexTest simulates multiple timeframes per test cycle, DELAY elements often
provide a less pessimistic solution for loop breaking as they do not introduce additional
X states into the good circuit simulation.
Note
In some cases, inserted DELAY elements can cause mismatches between FlexTest
simulation and a full-timing logic simulator. If you experience such mismatches,
use TIE-X gates instead of DELAY gates for loop cutting.
You can report on loops using the Report Loops or the Report Feedback Paths commands.
While both are involved with loop reporting, these commands behave somewhat differently. Refer
to the DFTAdvisor Reference Manual for details. You can write all identified structural
combinational loops to a file using the Write Loops command.
You can use the loop information DFTAdvisor provides to handle each loop in the most
desirable way. For example, assuming you wanted to improve the test coverage for a coupling
loop, you could use the Add Test Points command within DFTAdvisor to insert a test point to
control or observe values at a certain location within the loop.
[Figure: a flip-flop with data input D, output Q, and asynchronous reset RST forming part of a structural sequential loop.]
Note
The tools model RAM and ROM gates as combinational gates, and thus, they consider
loops involving only combinational gates and RAMs (or ROMs) as combinational loops–
not sequential loops.
The following sections provide tool-specific issues regarding sequential loop handling.
Within FastScan, sequential loops typically trigger C3 and C4 design rules violations. When
one sequential element (a source gate) feeds a value to another sequential element (a sink gate),
FastScan simulates old data at the sink. You can change this simulation method using the Set
Capture Handling command. For more information on the C3 and C4 rules, refer to “Clock
Rules” in the Design-for-Test Common Resources Manual. For more information on the Set
Capture Handling command refer to its reference page in the ATPG Tools Reference Manual.
Similar to fake combinational loops, fake sequential loops do not exhibit loop behavior. For
example, Figure 4-12 shows a fake sequential loop.
[Figure 4-12. A fake sequential loop: two flip-flops clocked by different phases (PH1 and PH2) with combinational logic between them form a structural loop that exhibits no loop behavior.]
While this circuitry involves flip-flops that form a structural loop, the two-phase clocking
scheme (assuming properly-defined clock constraints) ensures clocking of the two flip-flops at
different times. Thus, FlexTest does not treat this situation as a loop.
Only the timeframe considerations vary between the two loop cutting methods. Different
timeframes may require different loop cuts. FlexTest additively keeps track of the loop cuts
needed, and inserts them at the end of the analysis process.
You set whether FlexTest uses a TIE-X gate or DELAY element for sequential loop cutting
with the Set Loop Handling command. By default, FlexTest inserts DELAY elements to cut
loops.
Redundant Logic
In most cases, you should avoid using redundant logic because a circuit with redundant logic
poses testability problems. First, classifying redundant faults takes a great deal of analysis
effort. Additionally, redundant faults, by their nature, are untestable and therefore lower your
fault coverage. Figure 2-20 on page 2-27 gives an example of redundant circuitry.
Some circuitry requires redundant logic; for example, circuitry to eliminate race conditions or
circuitry which builds high reliability into the design. In these cases, you should add test points
to remove redundancy during the testing process.
Figure 4-13 shows a situation with an asynchronous reset line and the test logic added to control
the asynchronous reset line.
[Figure 4-13. Test logic for an asynchronous reset: flip-flop A drives the reset of flip-flop B, and DFTAdvisor adds an OR gate controlled by test_mode so the reset of flip-flop B stays inactive during test.]
In this example, DFTAdvisor adds an OR gate that uses the test_mode (not scan_enable) signal
to keep the reset of flip-flop B inactive during the testing process. You would then constrain the
test_mode signal to be a 1, so flip-flop B could never be reset during testing. To insert this type
of test logic, you can use the DFTAdvisor command Set Test Logic (see page 5-9 for more
information).
DFTAdvisor also allows you to specify an initialization sequence in the test procedure file to
avoid the use of this additional test logic. For additional information, refer to the Add Scan
Groups command in the DFTAdvisor Reference Manual.
Gated Clocks
Primary inputs typically cannot control the gated clock signals of sequential devices. In order to
make some of these sequential elements scannable, you may need to add test logic to modify
their clock circuitry.
For example, Figure 4-14 shows an example of a clock that requires some test logic to control it
during test mode.
[Figure 4-14. Test logic for a gated clock: the internally gated clock is multiplexed with an added test_clock, and the multiplexer is controlled by test_mode.]
In this example, DFTAdvisor makes the element scannable by adding a test clock, for both scan
loading/unloading and data capture, and multiplexing it with the original clock signal. It also
adds a signal called test_mode to control the added multiplexer. The test_mode signal differs
from the scan_mode or scan_enable signals in that it is active during the entire duration of the
test—not just during scan chain loading/unloading. To add this type of test logic into your
design, you can use the Set Test Logic and Setup Scan Insertion commands. For more
information on these commands, refer to pages 5-9 and 5-31, respectively.
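Functionally, the added circuitry is just a clock multiplexer; the following sketch is illustrative only, with signal names taken from the figure.

# Illustrative sketch of gated-clock test logic: test_mode selects a directly
# controllable test clock in place of the internally gated clock.

def clock_seen_by_flip_flop(gated_clock, test_clock, test_mode):
    """test_mode is held at 1 for the entire test; 0 restores normal operation."""
    return test_clock if test_mode else gated_clock

print(clock_seen_by_flip_flop(gated_clock=0, test_clock=1, test_mode=1))   # 1: test clock drives the cell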
Tri-State™ Devices
Tri-state buses are another testability challenge. Faults on tri-state bus enables can cause one of
two problems: bus contention, which means there is more than one active driver, or bus float,
which means there is no active driver. Either of these conditions can cause unpredictable logic
values on the bus, which allows the enable line fault to go undetected. Figure 4-15 shows a tri-
state bus with bus contention caused by a stuck-at-1 fault.
DFTAdvisor can add gating logic that turns off the tri-state devices during scan chain shifting.
The tool gates the tri-state device enable lines with the scan_enable signal so they are inactive
and thus prevent bus contention during scan data shifting. To insert this type of gating logic,
you can use the DFTAdvisor command Set Test Logic (see page 5-9 for more information).
In addition, FastScan and FlexTest let you specify the fault effect of bus contention on tri-state
nets. This capability increases the testability of the enable line of the tri-state drivers. Refer to
the Set Net Dominance command in the ATPG Tools Reference Manual for details.
• TIEX — In this category, FastScan considers the output of a flip-flop or latch to always
be an X value during test. This condition may prevent the detection of a number of
faults.
• TIE0 — In this category, FastScan considers the output of a flip-flop or latch to always
be a 0 value during test. This condition may prevent the detection of a number of faults.
• TIE1 —In this category, FastScan considers the output of a flip-flop or latch to always
be a 1 value during test. This condition may prevent the detection of a number of faults.
• Transparent (combinational) — In this category, the non-scan cell is a latch, and the
latch behaves transparently. When a latch behaves transparently, it acts, in effect, as a
buffer—passing the data input value to the data output. The TLA simulation gate models
this behavior. Figure 4-16 shows the point at which the latch must exhibit transparent
behavior.
[Figure 4-17. A DFF clocked by clock2 between combinational Region 1 and Region 2, with PIs/scan cells on each side, and the Seq_trans procedure: force clock2 0 0; force clock2 1 1; force clock2 0 2; restore_pis;]
The DFF shown in Figure 4-17 behaves sequentially transparent when the tool pulses its
clock input, clock2. The sequential transparent procedure shows the events that enable
transparent behavior.
Note
To be compatible with combinational ATPG, the value on the data input line of the non-
scan cell must have combinational behavior, as depicted by the combinational Region 1.
Also, the output of the state element, in order to be useful for ATPG, must propagate to
an observable point.
When DRC performs scan cell checking, it also checks non-scan cells. When the
checking process completes, the rules checker issues a message indicating the number of
non-scan cells that qualify for clock sequential handling.
You instruct FastScan to use clock sequential handling by selecting the -Sequential
option to the Set Pattern Type command. During test generation, FastScan generates test
patterns for target faults by first attempting combinational, and then RAM sequential
techniques. If unsuccessful with these techniques, FastScan performs clock sequential
test generation if you specify a non-zero sequential depth.
Note
Setting the -Sequential switch to either 0 (the default) or 1 results in patterns with a
maximum sequential depth of one, but FastScan creates clock sequential patterns only if
the setting is 1 or higher.
To report on clock sequential cells, you use the Report Nonscan Cells command. For
more information on setting up and reporting on clock sequential test generation, refer to
the Set Pattern Type and Report Nonscan Cells reference pages in the ATPG Tools
Reference Manual.
Limitations of clock sequential non-scan cell handling include:
o The maximum allowable sequential depth is 255 (a typical depth would range from
2 to 5).
o Copy and shadow cells cannot behave sequentially.
o The tool cannot detect faults on clock/set/reset lines.
o You cannot use the read-only mode of RAM testing with clock sequential pattern
generation.
o FastScan simulates cells that capture data on a trailing clock edge (when data
changes on the leading edge) using the original values on the data inputs.
o Non-scan cells that maintain a constant value after load_unload simulation are
treated as tied latches.
o This type of testing has high memory and performance costs.
• HOLD — The learning process separates non-scan elements into two classes: those that
change state during scan loading and those that hold state during scan loading. The
HOLD category is for those non-scan elements that hold their values: that is, FlexTest
assumes the element retains the same value after scan loading as prior to scan loading.
• INITX — When the learning process cannot determine any useful information about the
non-scan element, FlexTest places it in this category and initializes it to an unknown
value for the first test cycle.
• INIT0 — When the learning process determines that the load_unload procedure forces
the non-scan element to a 0, FlexTest initializes it to a 0 value for the first test cycle.
• INIT1 — When the learning process determines that the load_unload procedure forces
the non-scan element to a 1, FlexTest initializes it to a 1 value for the first test cycle.
• TIE0 — When the learning process determines that the non-scan element is always a 0,
FlexTest assigns it a 0 value for all test cycles.
• TIE1 — When the learning process determines that the non-scan element is always a 1,
FlexTest assigns it a 1 value for all test cycles.
• DATA_CAPTURE — When the learning process determines that the value of a non-
scan element depends directly on primary input values, FlexTest places it in this
category. Because primary inputs (other than scan inputs or bidirectionals) do not
change during scan loading, FlexTest considers their values constant during this time.
The learning process places the non-scan cells into one of the preceding categories. You can
report on the non-scan cell handling with the Report Nonscan Handling command. You can
override the default categorization with the Add Nonscan Handling command.
Clock Dividers
Some designs contain uncontrollable clock circuitry; that is, internally-generated signals that
can clock, set, or reset flip-flops. If these signals remain uncontrollable, DFTAdvisor does not
consider the sequential elements they control scannable, because those signals could disturb the
elements during scan shifting; consequently, the tool cannot convert these elements to scan.
Figure 4-19 shows an example of a sequential element (B) driven by a clock divider signal and
with the appropriate circuitry added to control the divided clock signal.
[Figure 4-19. A clock divider: flip-flop A divides CLK and drives the clock of flip-flop B; added test logic (a multiplexer controlled by TST_EN that selects TST_CLK) makes the divided clock controllable.]
DFTAdvisor can assist you in modifying your circuit for maximum controllability (and thus,
maximum scannability of sequential elements) by inserting special circuitry, called test logic, at
these nodes when necessary. DFTAdvisor typically gates the uncontrollable circuitry with chip-
level test pins. In the case of uncontrollable clocks, DFTAdvisor adds a MUX controlled by the
test_clk and test_en signals.
For more information on test logic, refer to “Enabling Test Logic Insertion” on page 5-9.
Pulse Generators
Pulse generators are circuitry that create pulses when active. Figure 4-20 gives an example of
pulse generator circuitry.
[Figure 4-20. Example pulse generator circuitry and waveforms for signals A, B, and C.]
When designers use this circuitry in clock paths, there is no way to create a stable on state.
Without a stable on state, the fault simulator and test generator have no way to capture data into
the scan cells. Pulse generators are also used in write control circuitry, where they impede RAM testing.
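The reconvergent structure is what creates the pulse. The sketch below is illustrative only: it models an AND-style sink whose two inputs come from the same source, one path direct and one inverted and delayed, and shows that a rising source transition produces only a brief pulse rather than a stable on state.

# Illustrative sketch of pulse generator behavior (not the tools' identification code).

def pulse_generator_trace(source_waveform, path_delay=2):
    """Sink output per time step: AND of the source and the delayed, inverted source."""
    out = []
    for t, s in enumerate(source_waveform):
        delayed = source_waveform[t - path_delay] if t >= path_delay else source_waveform[0]
        out.append(s & (1 - delayed))           # inversion difference between the two paths
    return out

source = [0, 0, 1, 1, 1, 1, 0, 0]
print(pulse_generator_trace(source))            # [0, 0, 1, 1, 0, 0, 0, 0]: a brief pulse only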
FastScan and FlexTest identify the reconvergent pulse generator sink gates, or simply “pulse
generators”, during the learning process. For the tools to provide support, “pulse generators”
must satisfy the following requirements:
• The “pulse generator” gate must have a connection to a clock input of a memory
element or a write line of a RAM.
• The “pulse generator” gate must be an AND, NAND, OR, or NOR gate.
• Two inputs of the “pulse generator” gate must come from the reconvergent source gate.
• The two reconvergent paths may only contain inverters and buffers.
• There must be an inversion difference in the two reconvergent paths.
• The two paths must have different lengths.
• The input gate of the “pulse generator” gate in the long path must only go to gates of the
same gate type. The tools model this input gate as tied to the non-controlling value of the
“pulse generator” gate.
FastScan and FlexTest provide two commands that deal with pulse generators: Set Pulse
Generators, which controls the identification of the “pulse generator” gates, and Report Pulse
Generators, which displays the list of “pulse generator” gates. Refer to the ATPG Tools
Reference Manual for information on the Set Pulse Generators and Report Pulse Generators
commands.
Additionally, rules checking includes some checking for “pulse generator” gates. Specifically,
Trace rules #16 and #17 check to ensure proper usage of “pulse generator” gates. Refer to “T16
(Trace Rule #16)” and “T17 (Trace Rule #17)” in the Design-for-Test Common Resources
Manual for more details on these rules.
JTAG-Based Circuits
Boundary scan circuitry, as defined by IEEE standard 1149.1, can result in a complex
environment for the internal scan structure and the ATPG process. The two main issues with
boundary scan circuitry are 1) connecting the boundary scan circuitry with the internal scan
circuitry, and 2) ensuring that the boundary scan circuitry is set up properly during ATPG. For
information on connecting boundary scan circuitry to internal scan circuitry, refer to
“Connecting Internal Scan Circuitry” in the Boundary Scan Process Guide. For an example test
procedure file that sets up a JTAG-based circuit, refer to page 6-100.
The ATPG tools, FastScan and FlexTest, do not test the internals of the RAM/ROM, although
FastScan MacroTest (separately licensed but available in the FastScan product) lets you create
tests for small memories such as register files by converting a functional test sequence or
algorithm into a sequence of scan tests. For large memories, built-in test structures within the
chip itself are the best way of testing the internal RAM or ROM. MBISTArchitect lets you
insert the access and control hardware for testing large memories.
However, FastScan and FlexTest need to model the behavior of the RAM/ROM so that tests can
be generated for the logic on either side of the embedded memory. This allows FastScan and
FlexTest to generate tests for the circuitry around the RAM/ROM, as well as the read and write
controls, data lines, and address lines of the RAM/ROM unit itself.
Figure 4-21 shows a typical configuration for a circuit containing embedded RAM.
[Figure 4-21. A circuit with embedded RAM: logic block A (fed by PIs and scan cells) drives the RAM's CONTROL, ADDR, and DATA IN lines, and the RAM's DATA OUT feeds logic block B, which drives POs and scan cells.]
ATPG must be able to operate the illustrated RAM to observe faults in logic block A, as well as
to control the values in logic block B to test faults located there. FastScan and FlexTest each
have unique strategies for operating the RAMs.
FastScan supports the following strategies for propagating fault effects through the RAM:
• Read-only mode — FastScan assumes the RAM is initialized prior to scan test and this
initialization must not change during scan. This assumption allows the tool to treat a
RAM as a ROM. As such, there is no requirement to write to the RAM prior to reading,
so the test pattern only performs a read operation. Important considerations for read-
only mode test patterns are as follows:
o The read-only testing mode of RAM only tests for faults on data out and read
address lines, just as it would for a ROM. The tool does not test the write port I/O.
o To use read-only mode, the circuit must pass rules A1 and A6.
o Values placed on the RAM are limited to initialized values.
o Random patterns can be useful for all RAM configurations.
o You must define initial values and assume responsibility that those values are
successfully placed on the correct RAM memory cells. The tool does not perform
any audit to verify this is correct, nor will the patterns reflect what needs to be done
for this to occur.
o Because the tester may require excessive time to fully initialize the RAM, partial
initialization is allowed.
• Pass-through mode — FastScan has two separate pass-through testing modes:
o Static pass-through — To detect faults on data input lines, you must write a known
value into some address, read that value from the address, and propagate the effect to
an observation point. In this situation, the tool handles RAM transparently, similar to
the handling of a transparent latch. This requires several simultaneous operations.
The write and read operations are both active and thus writing to and reading from
the same address. While this is a typical RAM operation, it allows testing faults on
the data input and data output lines. It is not adequate for testing faults on read and
write address lines.
o Dynamic pass-through — This testing technique is similar to static pass-through
testing except one pulse of the write clock performs both the write and read
operation (if the write and read control lines are complementary). While static pass-
through testing is comparable to transparent latch handling, dynamic pass-through
testing compares to sequential transparent testing.
• Sequential RAM test mode — This is the recommended approach to RAM testing.
While the previous testing modes provide techniques for detecting some faults, they
treat the RAM operations as combinational. Thus, they are generally inadequate for
generating tests for circuits with embedded RAM. In contrast, this testing mode tries to
separately model all events necessary to test a RAM, which requires modeling
sequential behavior. This enables testing of faults that require detection of multiple
pulses of the write control lines. These faults include RAM address and write control
lines.
RAM sequential testing requires its own specialized pattern type. RAM sequential
patterns consist of one scan pattern with multiple scan chain loads. A typical RAM
sequential pattern contains the events shown in Figure 4-22.
Note
For RAM sequential testing, the RAM’s read_enable/write_enable control(s) can be
generated internally. However, the RAM’s read/write clock should be generated from a
PI. This ensures RAM sequencing is synchronized with the RAM sequential patterns.
In this example of an address line test, assume that the MSB address line is stuck at 0.
The first write would write data into an address whose MSB is 0 to match the faulty
value, such as 0000. The second write operation would write different data into a
different address (the one obtained by complementing the faulty bit). For this example,
it would write into 1000. The read operation then reads from the first address, 0000. If
the highest order address bit is stuck-at-0, the 2nd write would have overwritten the
original data at address 0, and faulty circuitry data would be read from that address in
the 3rd step.
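The detection argument can be replayed directly; the Python sketch below is illustrative only, modeling a small RAM whose MSB address line is optionally stuck at 0, and shows that the read in the third step returns the wrong data only in the faulty machine.

# Illustrative replay of the RAM sequential address-line test described above.

def run_address_line_test(msb_stuck_at_0):
    ram = {}
    def effective(addr):
        return addr & 0b0111 if msb_stuck_at_0 else addr   # 4-bit address, MSB forced to 0
    def write(addr, data):
        ram[effective(addr)] = data
    def read(addr):
        return ram.get(effective(addr))

    write(0b0000, "data_A")            # 1st write: address matching the faulty value
    write(0b1000, "data_B")            # 2nd write: complement the suspect address bit
    return read(0b0000)                # 3rd step: read back from the first address

print(run_address_line_test(msb_stuck_at_0=False))   # data_A (good machine)
print(run_address_line_test(msb_stuck_at_0=True))    # data_B (faulty machine overwrote address 0)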
Another technique that may be useful for detecting faults in circuits with embedded RAM is
clock sequential test generation. It is a more flexible technique, which effectively detects faults
associated with RAM. “Clock Sequential Patterns” on page 6-9 discusses clock sequential test
generation in more detail.
If the clock that captures the data from the RAM is the same clock which is used for reading,
FastScan issues a C6 clock rules violation. This indicates that you must set the clock timing so
that the scan cell can successfully capture the newly read data.
If the clock that captures the data from the RAM is not the same clock that is used for reading,
you will likely need to turn on multiple clocks to detect faults. The default Set Clock Restriction
On command is conservative, so FastScan will not allow these patterns, resulting in a loss in test
coverage. If you issue the Set Clock Restriction Off command, FastScan will allow these
patterns, but there is a risk of inaccurate simulation results because the simulator will not
propagate captured data effects.
• You can define a pin as both a write control line and a clock if the off-states are the same
value. FastScan then displays a warning message indicating that a common write control
and clock has been defined.
• The rules checker issues a C3 clock rule violation if a clock can propagate to a write line
of a RAM, and the corresponding address or data-in lines are connected to scan latches
that have a connection to the same clock.
• The rules checker issues a C3 clock rule violation if a clock can propagate to a read line
of a RAM, and the corresponding address lines are connected to scan latches that have
a connection to the same clock.
• The rules checker issues a C3 clock rule violation if a clock can capture data into a scan
latch from a RAM read port that has input connectivity to latches with a connection to
the same clock.
• If you set the simulation mode to Ram_sequential, the rules checker will not issue an A2
RAM rule violation if a clock is connected to a write input of a RAM. Any clock
connection to any other input (including the read lines) will continue to be a violation.
• If a RAM write line is connected to a clock, you cannot use the dynamic pass through
test mode.
• Patterns which use a common clock and write control for writing into a RAM will be in
the form of ram_sequential patterns. This requires you to set the simulation mode to
Ram_sequential.
• If you change the value of a common write control and clock line during a test
procedure, you must hold all write, set, and reset inputs of a RAM off. FastScan will
consider failure to satisfy this condition as an A6 RAM rule violation and will disqualify
the RAM from being tested using read_only and ram_sequential patterns.
when writing data to and reading data from the RAM. Testing designs with RAM is a challenge
for FastScan because of its combinational nature. FlexTest, however, due to its sequential
nature, handles designs with RAM without complication; it simply treats RAMs as non-scan
sequential blocks. However, to generate the appropriate RAM tests, you do need to specify the
appropriate control lines.
For more information on any of these commands, refer to the Command Dictionary chapter in
the ATPG Tools Reference Manual.
• The checker reads the RAM/ROM initialization files and checks them for errors. If you
selected random value initialization, the tool gives random values to all RAM and ROM
gates that do not have an initialization file. If there are no initialized RAMs, you cannot use the
read-only test mode. If any ROM is not initialized, an error condition occurs. A ROM
must have an initialization file but it may contain all Xs. Refer to the Read Modelfile
command in the ATPG Tools Reference Manual for details on initialization of
RAM/ROM.
• The RAM/ROM instance name given must contain a single RAM or ROM gate. If no
RAM or ROM gate exists in the specified instance, an error condition occurs.
• If you define write control lines and there are no RAM gates in the circuit, an error
condition occurs. To correct this error, delete the write control lines.
• When the write control lines are off, the RAM set and reset inputs must be off and the
write enable inputs of all write ports must be off. You cannot use RAMs that fail this
rule in read-only test mode. If any RAM fails this check, you cannot use dynamic pass-
through. If you defined an initialization file for a RAM that failed this check, an error
condition occurs. To correct this error, properly define all write control lines or use
lineholds (pin constraints).
• A RAM gate must not propagate to another RAM gate. If any RAM fails this check, you
cannot use dynamic pass-through.
• A defined scan clock must not propagate directly (unbroken by scan or non-scan cells)
to a RAM gate. If any RAM fails this check, you cannot use dynamic pass-through.
• The tool checks the write and read control lines for connectivity to the address and data
inputs of all RAM gates. It issues a warning message for each occurrence, and if this
check fails (that is, if such connectivity exists), there is a risk of race conditions for all
pass-through patterns.
• A RAM that uses the edge-triggered attribute must also have the read_off attribute set
to hold. Failure to satisfy this condition results in an error condition when the design
flattening process is complete.
• If the RAM rules checking identifies at least one RAM that the tool can test in read-only
mode, it sets the RAM test mode to read-only. Otherwise, if the RAM rules checking
passes all checks, it sets the RAM test mode to dynamic pass-through. If it cannot set the
RAM test mode to read-only or dynamic pass-through, it sets the test mode to static
pass-through.
• A RAM with the read_off attribute set to hold must pass Design Rule A7 (when read
control lines are off, place read inputs at 0). The tool treats a RAM that fails this rule as
follows:
o as a TIE-X gate, if the read lines are edge-triggered.
o as having a read_off value of X, if the read lines are not edge-triggered.
• The read inputs of RAMs that have the read_off attribute set to hold must be at 0 during
all times of all test procedures, except the test_setup procedure.
• The read control lines must be off at time 0 of the load_unload procedure.
• A clock cone stops at read ports of RAMs that have the read_off attribute set to hold,
and the effect cone propagates from its outputs.
For more information on the RAM rules checking process, refer to “RAM Rules” in the Design-
for-Test Common Resources Manual.
Incomplete Designs
FastScan, FlexTest, and DFTAdvisor can be invoked on incomplete Verilog, VHDL, or EDIF
designs because they can automatically generate blackboxes. The VHDL, Verilog, and EDIF
parsers automatically blackbox any instantiated module or instance that is not defined in either
the ATPG library or the design netlist. The tool issues a warning message for each blackboxed
module.
For Verilog designs, if the design instantiates an undefined module, the tool generates a module
declaration based on the instantiation. If ports are connected by name, the tool uses those port
names in the generated module. If ports are connected by position, the parser generates the port
names. Port directions cannot be read from the netlist, so the tool infers them by examining the
other pins on the net connected to the given instance pin. For each instance pin, if the connected
net has a non-Z-producing driver, the tool considers the generated module port an input;
otherwise the port is an output. The tool never generates inout ports, since they cannot be
inferred from the other pins on the net.
For VHDL and EDIF designs, the tool uses the component declaration to generate a module
declaration internally using the port names and directions.
Modules that are automatically blackboxed default to driving X on the outputs while inputs are
fault sinks. To change the output values driven, refer to the Add Black Box reference page in
the ATPG Tools Reference Manual.
DFTAdvisor is the Mentor Graphics tool that provides comprehensive testability analysis and
inserts internal test structures into your design. Figure 5-1 shows the layout of this chapter, as it
applies to the process of inserting scan and other test circuitry.
(Figure 5-1 outlines the chapter tasks: understanding DFTAdvisor, preparing for test structure
insertion, and inserting and verifying boundary scan circuitry with BSDArchitect.)
This section discusses each of the tasks outlined in Figure 5-1, providing details on using
DFTAdvisor in different environments and with different test strategies. For more information
on all available DFTAdvisor functionality, refer to the DFTAdvisor Reference Manual.
Understanding DFTAdvisor
DFTAdvisor functionality is available in two modes: graphical user interface (GUI) or
command-line. For information on using basic GUI functionality, refer to “User Interface
Overview” on page 1-8 and “DFTAdvisor User Interface” on page 1-23.
Before you use either mode of DFTAdvisor, you should get familiar with the basic process
flow, the inputs and outputs, the supported test structures, and the DFTAdvisor invocation as
described in the following subsections.
You should also have a good understanding of the material in both Chapter 2, “Understanding
Scan and ATPG Basics”, and Chapter 3, “Understanding Common Tool Terminology and
Concepts.”
(Figure: DFTAdvisor process flow — starting from a DFT library and a synthesized netlist, you
set up circuit and tool information in Setup mode, insert test structures using a test procedure
file as needed, and then save the design and ATPG information: a netlist with test structures, a
dofile, and a test procedure file for the ATPG tools.)
You start with a DFT library and a synthesized design netlist. The library is the same one that
FastScan and FlexTest use. “DFTAdvisor Inputs and Outputs” on page 5-3 describes the netlist
formats you can use with DFTAdvisor. The design netlist you use as input may be an individual
block of the design, or the entire design.
After invoking the tool, your first task is to set up information about the design—this includes
both circuit information and information about the test structures you want to insert. “Preparing
for Test Structure Insertion” on page 5-8 describes the procedure for this task. The next task
after setup is to run rules checking and testability analysis, and debug any violations that you
encounter. “Changing the System Mode (Running Rules Checking)” on page 5-17 documents
the procedure for this task.
Note
To catch design violations early in the design process, you should run and debug design
rules on each block as it is synthesized.
After successfully completing rules checking, you will be in the Dft system mode. At this point,
if you have any existing scan you want to remove, you can do so. “Deleting Existing Scan
Circuitry” on page 5-15 describes the procedure for doing this. You can then set up specific
information about the scan or other testability circuitry you want added and identify which
sequential elements you want converted to scan. “Identifying Test Structures” on page 5-17
describes the procedure for accomplishing this. Finally, with these tasks completed, you can
insert the desired test structures into your design. “Inserting Test Structures” on page 5-30
describes the procedure for this insertion.
(Figure: DFTAdvisor inputs and outputs — the design netlist, circuit setup dofile, library, and
test procedure file flow in; the scan-inserted design, ATPG setup dofile, and test procedure file
flow out.)

DFTAdvisor uses the following inputs:
• Design (netlist)
The supported design data formats are Electronic Design Interchange Format (EDIF
2.0.0), GENIE, Tegas Design Language (TDL), VHDL, and Verilog.
• Circuit Setup (or Dofile)
This is the set of commands that gives DFTAdvisor information about the circuit and
how to insert test structures. You can issue these commands interactively in the
DFTAdvisor session or place them in a dofile.
• Library
The design library contains descriptions of all the cells the design uses. The library also
includes information that DFTAdvisor uses to map non-scan cells to scan cells and to
select components for added test logic circuitry. The tool uses the library to translate the
design data into a flat, gate-level simulation model on which it runs its internal
processes.
• Test Procedure File
This file defines the stimulus for shifting scan data through the defined scan chains. This
input is only necessary on designs containing preexisting scan circuitry or requiring test
setup patterns.
DFTAdvisor produces the following outputs:
• Design (Netlist)
This netlist contains the original design modified with the inserted test structures. The
output netlist formats are the same type as the input netlist formats, with the exception of
the NDL format. The NDL, or Network Description Language, format is a gate-level
logic description language used in LSI Logic’s C-MDE environment. This format is
structurally similar to the TDL format.
• ATPG Setup (Dofile)
DFTAdvisor can automatically create a dofile that you can supply to the ATPG tool.
This file contains the circuit setup information that you specified to DFTAdvisor, as
well as information on the test structures that DFTAdvisor inserted into the design.
DFTAdvisor creates this file for you when you issue the command Write Atpg Setup.
• Test Procedure File
When you issue the Write Atpg Setup command, DFTAdvisor writes a simple test
procedure file for the scan circuitry it inserted into the design. You use this file with the
downstream ATPG tools, FastScan and FlexTest.
Test Structures
The following list briefly describes the test structures DFTAdvisor supports:
• Full scan — a style that identifies and converts all sequential elements (that pass
scannability checking) to scan. “Understanding Full Scan” on page 2-4 discusses the full
scan style.
• Partial scan — a style that identifies and converts a subset of sequential elements to
scan. “Understanding Partial Scan” on page 2-5 discusses the partial scan style.
DFTAdvisor provides the following alternate methods of partial scan selection:
o Sequential ATPG-based — chooses scan circuitry using FlexTest's sequential
ATPG algorithm. Because of its ATPG-based nature, this method provides
predictable test coverage for the selected scan cells.
o Automatic — chooses as much scan circuitry as needed to achieve a high fault
coverage. It combines several scan selection techniques and typically achieves
higher test coverage for the same allocation of scan. If you limit the amount of scan,
it attempts to select the best scan cells within that limit.
o SCOAP-based — chooses scan circuitry based on controllability and observability
improvements determined by the SCOAP (Sandia Controllability Observability
Analysis Program) approach. DFTAdvisor computes the SCOAP numbers for each
memory element and chooses for scan those with the highest numbers. This method
provides a fast way to select the best scan cells for optimum test coverage.
o Structure-Based — chooses scan circuitry using structure-based scan selection
techniques. These techniques include loop breaking, self-loop breaking, and limiting
the design’s sequential depth.
Note
This technique is useful for data path circuits.
Table 5-1 shows which scan identification types you can mix in successive identification runs
(rows show the first-pass type; columns show the second-pass type):

                      Full   Clock  Seq.     Partition  Sequen-  None   Test
                      Scan   Seq.   Transp.  Scan       tial            Point
  Full Scan            N      N      N         A          N       A      A
  Clock Sequential     A      A      E         A          N       A      A
  Seq. Transparent     A      E      A         A          E       A      A
  Partition Scan       A      A      A         A          A       A      A
  Sequential           A      E      E         A          A       A      A
  None                 A      A      A         A          A       A      A
  Test Point           *      *      *         *          *       A      A

  A = Allowed.
  N = Nothing more to identify.
  E = Error. Cannot mix the given scan identification types.
  * = Not recommended. Scan selection should be performed prior to test point selection.
“Selecting the Type of Test Structure” on page 5-17 discusses how to use the Setup Scan
Identification command.
Invoking DFTAdvisor
Note
Your design must be in either EDIF, TDL, VHDL, Verilog, or Genie format. You can
choose whether to run DFTAdvisor in 32-bit or 64-bit mode. 64-bit mode supports larger
designs with increased performance and design capacity.
You can invoke DFTAdvisor in either graphical user interface (GUI) or command-line mode.
To invoke the GUI, enter the application name on the shell command line:
$MGC_HOME/bin/dftadvisor
Once the tool invokes, a dialog box prompts you for the required arguments (design_name,
design type, and library). Browser buttons on the GUI provide navigation to the design and
library directories. After the design and library finish loading, the tool is in Setup mode, ready
for you to begin working on your design. You then use the Setup mode to define the circuit and
scan data, which is the next step in the process.
Using the command-line option requires you to enter all required arguments (the design name,
netlist format, and library), as well as the -Nogui switch, at the shell command line.
The invocation syntax for DFTAdvisor in either mode includes a number of other switches and
options. For a list of available options and explanations of each, you can refer to “Shell
Commands” in the DFTAdvisor Reference Manual or enter:
$ $MGC_HOME/bin/<application> -help
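For example, a command-line (non-GUI) invocation on a Verilog netlist might look like the following sketch; the design and library file names are hypothetical, and the exact switch spellings are listed under "Shell Commands" in the DFTAdvisor Reference Manual:

$MGC_HOME/bin/dftadvisor my_design.v -verilog -library my_atpg.lib -nogui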
You use the Set Scan Type command to specify the type of scan architecture you want to insert.
The usage for this command is as follows:
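The complete usage appears in the Set Scan Type reference page. As a minimal sketch, assuming the mux-DFF scan style keyword is mux_scan:

set scan type mux_scan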
You have the option to customize the scan cell and the cell’s scan output mapping behavior.
You can change the mapping for an individual instance, all instances under a hierarchical
instance, all instances in all occurrences of a module in the design, or all occurrences of the
model in the entire design, using the Add Mapping Definition command. You can also delete
scan cell mapping and report on its current status using the Delete Mapping Definition and
Report Mapping Definition commands.
For example, you can map the fd1 nonscan model to the fd1s scan model for all occurrences of
the model in the design by entering:
The following example maps the fd1 nonscan model to the fd1s scan model for all matching
instances in the “counter” module and for all occurrences of that module in the design:
Additionally, you can change the scan output pin of the scan model in the same manner as the
scan cell. Within the scan_definition section of the model, the scan_out attribute defines which
pin is used as the scan output pin. During the scan stitching process, DFTAdvisor selects the
output pin based on the lowest fanout count of each of the possible pins. If you have a
preference as to which pin to use for a particular model or instance, you can also issue the Add
Mapping Definition command to define that pin.
For example, if you want to use “qn” instead of “q” for all occurrences of the fd1s scan model in
the design, enter:
For additional information and examples on using these commands, refer to Add Mapping
Definition, Delete Mapping Definition, or Report Mapping Definition in the DFTAdvisor
Reference Manual.
Test logic provides a useful solution to a variety of common problems. First, some designs
contain uncontrollable clock circuitry; that is, internally-generated signals that can clock, set, or
reset flip-flops. If these signals remain uncontrollable, DFTAdvisor will not consider the
sequential elements controlled by these signals scannable. Second, you might want to prevent
bus contention caused by tri-state devices during scan shifting.
DFTAdvisor can assist you in modifying your circuit for maximum controllability (and thus,
maximum scannability of sequential elements) and bus contention prevention by inserting test
logic circuitry at these nodes when necessary.
Note
DFTAdvisor does not attempt to add test logic to user-defined non-scan instances or
models; that is, those specified by Add Nonscan Instance or Add Nonscan Model.
DFTAdvisor typically gates the uncontrollable circuitry with a chip-level test pin. Figure 5-5
shows an example of test logic circuitry.
(Figure 5-5: before insertion, an uncontrollable internally generated clock drives the flip-flops;
after insertion, the clock is gated by added test logic controlled from a Test_en pin.)
You can specify the types of signals for which you want test logic circuitry added, using the Set
Test Logic command. This command’s usage is as follows:
SET TEst Logic {-Set {ON | OFf} | -REset {ON | OFf} | -Clock {ON | OFf} |
-Tristate {ON | OFf} | -Bidi {ON | Scan | OFf} | -RAm {ON | OFf}}...
This command specifies whether or not you want to add test logic to all uncontrollable (set,
reset, clock, or RAM write control) signals during the scan insertion process. Additionally, you
can specify to turn on (or off) the ability to prevent bus contention for tri-state devices. By
default, DFTAdvisor does not add test logic, except to bidirectional input/output pins used for
scan chains. You must explicitly enable the use of test logic by issuing this command.
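For example, to enable test logic insertion on uncontrollable clock, set, and reset signals, you could enter:

set test logic -clock on -set on -reset on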
In adding the test logic circuitry, DFTAdvisor performs some basic optimizations in order to
reduce the overall amount of test logic needed. For example, if the reset line to several flip-flops
is a common internally-generated signal, DFTAdvisor gates it at its source before it fans out to
all the flip-flops.
Note
You must turn the appropriate test logic on if you want DFTAdvisor to consider latches
as scan candidates. Refer to “D6 (Data Rule #6)” in the Design-for-Test Common
Resources Manual for more information on scan insertion with latches.
If your design uses bidirectional pins as scan I/Os, DFTAdvisor controls the scan direction for
the bidirectional pins for correct shift operation. This can be specified by the default option “-
Bidi Scan”. If the enable signal of the bidirectional pin is controlled by a primary input pin, then
DFTAdvisor adds a “force” statement for the enable pin in the new load_unload procedure to
enable/disable the correct direction. Otherwise, DFTAdvisor inserts gating logic to control the
enable line. The gate added to the bidirectional enable line is either a 2-input AND or OR.
There are four possible cases between the scan direction and the active values of a tri-state
driver, as shown in Table 5-2. The second input of the gate is controlled from the scan_enable
signal, which might be inverted. You will need to specify AND and OR models through the
cell_type keyword in the ATPG library or use the Add Cell Model command.
Table 5-2. Scan Direction and Active Values
Driver Scan Direction Gate Type
active high input AND
active high output OR
active low input OR
active low output AND
If you specify the "-Bidi ON" option, DFTAdvisor controls all bidirectional pins. The
bidirectional pins that are not used as scan I/Os are put into input mode (Z state) during scan
shifting by either “force” statements in the new load_unload procedure or by using gating logic.
DFTAdvisor adds a “force Z” statement in the test procedure file for the output of the
bidirectional pin if it is used as scan output pin. This ensures that the bus is not driven by the
tristate drivers of both bidirectional pin and the tester at the same time.
ADD CEll Models dftlib_model {-Type {INV | And | {Buf -Max_fanout integer} | OR | NAnd
| NOr | Xor | INBuf | OUtbuf | {Mux selector data0 data1} | {ScanCELL clk data} | {DFf
clk data} | {DLat enable dat [-Active {High | Low}]}}} [{-Noinvert | -Invert} output_pin]
The dftlib_model argument specifies the exact name of the model within the library. The -Type
option specifies the type of the gate; the possible types are INV, AND, OR, NAND, NOR,
XOR, BUF, INBUF, OUTBUF, DLAT, MUX, ScanCELL, and DFF.
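For example, you might identify an AND gate, a mux, and a D latch from your library with commands like the following; the model names (and2, mux21, dlat1) and the mux and latch pin names are hypothetical placeholders for models in your own library:

add cell models and2 -type and
add cell models mux21 -type mux S A B
add cell models dlat1 -type dlat EN D -active high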
Refer to the DFTAdvisor Reference Manual for more details on the Add Cell Models command.
Additionally, you should re-optimize a design to ensure that fanout resulting from test logic is
correctly compensated and passes electrical rules checks.
In some cases, inserting test logic requires the addition of multiple test clocks. Analysis run
during DRC determines how many test clocks DFTAdvisor needs to insert. The Report Scan
Chains command reports the test clock pins used in the scan chains.
For example, you might have two system clocks, called “clk1” and “clk2”, whose off-states are
0 and a global reset line called “rst_l” whose off-state is 1 in your circuit. You can specify these
as clock lines as follows:
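A minimal sketch, assuming the Add Clocks command takes the off-state followed by the pin names:

add clocks 0 clk1 clk2
add clocks 1 rst_l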
You can specify multiple clock pins with the same command if they have the same off-state.
You must define clock pins prior to entering Dft mode. Otherwise, none of the non-scan
sequential elements will successfully pass through scannability checks. Although you can still
enter Dft mode without specifying the clocks, DFTAdvisor will not be able to convert elements
that the unspecified clocks control.
Note
If you are unsure of the clocks within a design, you can use the Analyze Control Signals
command to identify and then define all the clocks. It also defines the other control
signals in the design.
Related Commands:
Delete Clocks - deletes primary input pins from the clock list.
Report Clocks - displays a list of all clocks.
Report Primary Inputs - displays a list of primary inputs.
Write Primary Inputs - writes a list of primary inputs to a file.
Note
If you are performing block-by-block scan synthesis, you should refer to “Inserting Scan
Block-by-Block” on page 5-38.
If your design contains existing scan chains that you want to use, you must specify this
information to DFTAdvisor while you are in Setup mode; that is, before design rules checking.
If you do not specify existing scan circuitry, DFTAdvisor treats all the scan cells as non-scan
cells and performs non-scan cell checks on them to determine if they are scan candidates.
• Remove the existing scan chain(s) from the design and reverse the scan insertion
process. DFTAdvisor will replace the scan cells with their non-scan equivalent cells.
The design can then be treated as you would any other new design to which you want to
add scan circuitry. This technique is often used when re-stitching scan cells based on
placement and routing results.
• Add additional scan chains based on the non-scan cells while leaving the original scan
chains intact.
• Stitch together existing scan cells that were previously unstitched.
The remainder of this section includes details related to these methodologies.
For information on creating test procedure files, refer to “Test Procedure Files” on page 3-9.
Additionally, defining these existing scan cells prevents DFTAdvisor from performing possibly
undesirable default actions, such as scan cell mapping and generation of unnecessary mux gates.
1. Declare the “data_in = <port_name>” in the scan_definition section of the scan cell’s
model in the ATPG library.
If you have a hierarchy of scan cell definitions, where one library cell can have another
library cell as its scan version, using the data_in declaration in a model causes
DFTAdvisor to consider that model as the end of the scan definition hierarchy, so that
no mapping of instances of that model will occur.
Note
It is not recommended that you create a hierarchy of scan cell model definitions. If, for
instance, your data_in declaration is in the scan_definitions section of the third model in
the definitions hierarchy, but DFTAdvisor encounters an instance of the first model in the
hierarchy, it will replace the first model with the second model in the hierarchy, not the
desired third model. If you have such a hierarchy, you can use the Add Mapping
Definition command to point to the desired model. Add Mapping Definition overrides the
mapping defined in the library model.
2. The scan enable port of the instance of the cell model must be either dangling or tied (0
or 1) or pre-connected to a global scan enable pin(s). In addition, the scan input port
must be dangling or tied or connected to the cell’s scan output port as a self loop or a self
loop with (multiple) buffers or inverters.
Dangling implies that there are no connected fan-ins from other pins except tied pins or
tied nets. To identify an existing (global) scan enable, use the Setup Scan Insertion
command:
SETup SCan Insertion -SEN name
Setup Scan Insertion should be issued before using the Insert Test Logic command.
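For example, if the existing global scan enable pin is named scan_en (a hypothetical name), you would enter:

setup scan insertion -sen scan_en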
If you use only criterion 1 (the data_in declaration) as the means of preventing scan cell
mapping, DFTAdvisor also checks the scan enable and scan in ports. If either one is driven by
system logic, then the tool inserts a new mux gate before the data input and uses it as a mux in
front of the preexisting scan cell. (This applies only to mux-DFF scan; this mux is not inserted
for LSSD or clocked_scan types of scan.)
If you use a combination of criteria 1 and 2, or just criterion 2, as the means of preventing
scan cell mapping, DFTAdvisor will not insert a mux gate before the data input.
Once DFTAdvisor can identify existing scan cells, they can be stitched into scan chains in the
normal scan insertion process.
To remove existing scan circuitry, you must first define it to DFTAdvisor; the preceding
subsection describes this procedure. Then, to remove the defined scan circuitry from the
design, switch to Dft mode and use the Ripup Scan Chains command as follows:
RIPup SCan Chains {-All | chain_name…} [-Output] [-Keep_scancell [Off | Tied | Loop |
Buffer]] [-Model model_name]
It is recommended that you use the -All option to remove all defined scan circuitry. You can
also remove existing scan chain output pins with the -Output option, when you remove a chain.
Note that lockup latch insertion is optional. Normally, you would not allow lockup latch
insertion during the DFTAdvisor session(s) before layout. Lockup latch insertion should be
activated during the DFTAdvisor session after placement.
Note
If the design contains test logic in addition to scan circuitry, this command only removes
the scan circuitry, not the test logic.
Note
This process involves backward mapping of scan to non-scan cells. Thus, the library you
are using must have valid scan to non-scan mapping.
If you want to keep the existing scan cells but disconnect them as a chain, use the -
Keep_scancell switch, which specifies that only the connection between the scan input/output
ports of each scan cell should be removed. The connections of all other ports are not altered and
the scan cells are not mapped to their nonscan models. This is useful when you have preexisting
scan cells that have non-scan connections that you want to preserve, such as scan enable ports
connected to a global scan enable pin.
Another reason you might use the Ripup Scan Chains command is in the normal process of scan
insertion, ripup, and re-stitch. A normal flow involves the following steps:
1. Insert scan
2. Determine optimal scan routing from a layout tool
3. Rip-up scan chains
4. Re-stitch scan chains using an order file:
INSert TEst Logic filename -Fixed
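Putting these steps together, the core commands of a re-stitch session might look like the following sketch; scan_order.list is a hypothetical file built from the layout tool's placement order, and your session will typically include additional setup:

set system mode dft
ripup scan chains -all
insert test logic scan_order.list -fixed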
If you used some other method for generating the boundary scan architecture, you must ensure
proper connection of the scan chains’ scan_in and scan_out ports to the TAP controller.
If an error occurs during the rules checking process, the application remains in Setup mode,
where you must correct the error. You can clearly identify and easily resolve the cause of many
errors. Other errors, such as those associated with proper clock definitions and test procedure
files, can be more complex. “Troubleshooting Rules Violations” in the Design-for-Test
Common Resources Manual discusses the procedure for debugging rules violations. You can
also use DFTInsight to visually investigate the causes of DRC violations. For more information,
refer to “Using DFTInsight” in the Design-for-Test Common Resources Manual.
Most of these test structures include additional setup options (which are omitted from the
preceding usage). Depending on your scan selection type, you should refer to one of the
following subsections for additional details on the test structure type and its setup options:
Full scan is the fastest identification method, converting all scannable sequential elements to
scan. You can use FastScan for ATPG on full scan designs. This is the default upon invocation
of the tool. For more information on full scan, refer to “Understanding Full Scan” on page 2-4.
Clock sequential identification selects scannable cells by cutting sequential loops and limiting
sequential depth based on the -Depth switch. Typically, this method is used to create structured
partial scan designs that can use the FastScan clock sequential ATPG algorithm. For more
information on clock sequential scan, refer to “FastScan Handling of Non-Scan Cells” on
page 4-16.
Note
Sequential transparent identification is useful for data path circuits. Scan cells are selected such
that all sequential loops, including self loops, are cut. The -Reconvergence option specifies to
remove sequential reconvergent paths by selecting a scannable instance on the sequential
path for scan. For more information on sequential transparent scan, refer to "FastScan
Handling of Non-Scan Cells" on page 4-16.
With the sequential transparent identification type, you do not necessarily need to perform any
other tasks prior to the identification run. However, if a clock enable signal gates the clock input
of a sequential element, the element will not behave in a sequentially transparent manner unless
you place proper constraints on the clock enable signal.
You specify these constraints, which constrain the clock enable signals during the sequential
transparent procedures, with the Add Seq_transparent Constraints command. This command’s
usage is as follows:
Partition scan identification provides controllability and observability of embedded blocks. You
can also set threshold limits to control the overhead sometimes associated with partition scan
identification. For example, overhead extremes may occur when DFTAdvisor identifies a large
number of partition cells for a given uncontrollable primary input or unobservable primary
output. By setting the partition threshold limit for primary inputs (-Input_threshold switch) and
primary outputs (-Output_threshold switch), you maintain control over the trade-off of whether
to scan these partitioned cells or, instead, insert a controllability/observability scan cell.
When DFTAdvisor reaches the specified threshold for a given primary input or primary output,
it terminates the partition scan identification process on that primary input or primary output
and unmarks any partition cell identified for that pin. For more information on partition scan,
refer to “Understanding Partition Scan” on page 2-7 .
Note
With the partition scan identification type, you must perform several tasks before exiting
Setup mode. These tasks include specifying partition pins and setting the partition
threshold. Partition pins may be input pins or output pins. You must constrain input pins
to an X value and mask output pins from observation.
After constraining the input partition pins to X values, you can analyze the controllability for
each of these inputs. This analysis is useful because sometimes there is combinational logic
between the constrained pin and the sequential element that gets converted to an input partition
scan cell. Constraining a partition pin can impact the fault detection of this combinational logic.
DFTAdvisor determines the controllability factor of a partition pin by removing the X
constraint and calculating the controllability improvement on the affected combinational gates.
You can analyze controllability of input partition pins as follows:
Similar to the issue with input partition pins, there may be combinational logic between the
sequential element (which gets converted to an output partition cell) and a masked primary
output. Thus, it is useful to also analyze the observability of each of these outputs because
masking an output partition pin can impact the fault detection of this combinational logic.
DFTAdvisor determines the observability factor of a partition pin by removing the mask and
calculating the observability improvement on the affected combinational gates. You can
analyze observability of output partition pins as follows:
The benefit of ATPG-based scan selection is that ATPG runs as part of the process, giving test
coverage results along the way.
It is recommended that during the first scan selection and ATPG iteration, you use the default
(not specifying -Percent and -Number) to allow the tool to determine the amount of scan
needed. Then based on the ATPG results and how they compare to the required test coverage
criteria, you can specify the exact amount of scan to select. The amount of scan selected in the
first (default) iteration can be used as a reference point for determining how much more or less
scan to select in subsequent iterations (i.e. what limit to specify).
SCOAP-based selection is typically faster than ATPG-based selection, and produces an optimal
set of scan candidates.
The Structure technique includes loop breaking, self-loop breaking, and limiting the design’s
sequential depth. These techniques are proven to reduce the sequential ATPG problem and
quickly provide a useful set of scan candidates.
SET COntention Check OFf | {ON [-Warning | -Error] [-ATpg] [-Start frame#]} [-Bus | -Port |
-ALl]
By default, contention checking is on for buses, with violations considered warnings. This
means that during the scan identification process, DFTAdvisor considers the effects of bus
contention and issues warning messages when two or more devices concurrently drive a bus. If
you want to consider contention of clock ports of flip-flops or latches, or change the severity of
this type of problem to error instead of warning, you can do so with this command. For further
information on this command, refer to the Set Contention Check command page in the
DFTAdvisor Reference Manual.
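For example, to treat bus contention as an error and also check the clock ports of flip-flops and latches, you could enter:

set contention check on -error -all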
You set the number of control and observe points with the Setup Test_point Identification
command. This command’s usage is as follows:
The following locations in the design will not have test points automatically added by
DFTAdvisor:
• Any site in the fanout cone of a declared clock (defined with the Add Clock command).
• The outputs of scanned latches or flip flops.
• The internal gates of library cells. Only gates driving the top library boundary can have
test points.
• Sites identified as notest points using the Add Notest Points command.
• The outputs of primitives that can be tri-state.
• Primary inputs (for either control or observation points).
• Primary outputs (for observation points). A primary output driver that also fans out
to internal logic could have a control point added, if needed.
• Unobservable sites (for control points).
• Uncontrollable sites (for observation points).
Related Test Point Commands:
Delete Test Points - deletes the information specified by the Add Test Points
command.
Report Test Points - displays identified/specified test points.
DFTAdvisor computes controllability and observability numbers using the SCOAP (Sandia
Controllability Observability Analysis Program) approach and determines the locations of the
difficult-to-control and difficult-to-observe points. To analyze the design for controllability and
observability, you use the Analyze Testability command with the -Scoap_only switch:
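For example, to compute the SCOAP numbers and then review them, you might enter something like the following; any arguments to the report command are described in the DFTAdvisor Reference Manual:

analyze testability -scoap_only
report testability analysis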
Note
The Analyze Testability and Report Testability Analysis commands are general-purpose commands.
You can use these commands at any time—not just in the context of automatic test point
identification—to get a better understanding of your design’s testability. They are
presented in this section because they are especially useful with regards to test points.
In many cases, a sequential element may not have a scan equivalent of the currently selected
scan type. For example, a cell may have an equivalent mux-DFF scan cell but not an equivalent
LSSD scan cell. If you set the scan type to LSSD, DFTAdvisor places these models in the non-
scan model list. However, if you change the scan type to mux-DFF, DFTAdvisor updates the
non-scan model list, in this case removing the models from the non-scan model list.
Another method of eliminating some components from consideration for scan cell conversion is
to specify that certain models should not be converted to scan. To exclude all instances of a
particular model type, you can use the Add Nonscan Models command. This command’s usage
is as follows:
Note
DFTAdvisor automatically treats sequential models without scan equivalents as non-scan
models, adding them to the nonscan model list.
To include particular instances in the scan identification process, use the Add Scan Instances
command. This command’s usage is as follows:
For example, the following command ensures the conversion of instances /I$145/I$116 and
/I$145/I$138 to scan cells when DFTAdvisor inserts scan circuitry.
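Assuming the Add Scan Instances command takes a list of instance pathnames, the command for this example would be:

add scan instances /I$145/I$116 /I$145/I$138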
To include all instances of a particular model type for conversion to scan, use the Add Scan
Models command. This command's usage is as follows:
For more information on these commands, refer to the Add Scan Instances and Add Scan
Models reference pages in the DFTAdvisor Reference Manual.
Rules checking determines the scannability status of all the non-scan sequential instances in
your design. To display this information, you use the Report Dft Check command, whose usage
is as follows:
When you issue the Report Dft Check command, it typically displays a large number of nonscan
instances, as shown in the sample report in Figure 5-6.
The fields at the end of each line in the nonscan instance report provide additional information
regarding the classification of a sequential instance. For the instance /I_266 (highlighted in
Figure 5-6), the "Clock" statement indicates a problem with the clock input of the sequential
instance: when the tool traces back from the clock input of this non-scan instance, the signal
does not reach a primary input defined as a clock. If several nodes are listed (and similarly for
"Reset" and "Set"), the line is connected to several endpoints (sequential instances or primary
inputs).
This “Clock # 1 F /I_266/clk” issue can be resolved by either defining the specified input as a
clock or allowing DFTAdvisor to add a test clock for this instance.
Related Commands:
Report Control Signals - displays control signal information.
Report Statistics - displays a statistics report.
Report Sequential Instances - displays information and testability data for
sequential instances.
DFT> run
While running the identification process, this command issues a number of messages about the
identified structures.
You may perform multiple identification runs within a session, changing the identification
parameters each time. However, be aware that each successive scan identification run adds to
the results of the previous runs. For more information on which scan types you can mix in
successive runs, refer to Table 5-1 on page 5-7.
Note
If you want to start the selection process anew each time, you must use the Reset State
command to clear the existing scan candidate list.
This command lists the total number of sequential instances, user-defined non-scan instances,
user-defined scan instances, system-identified scan instances, scannable instances with test
logic, and the scan instances in preexisting chains identified by the rules checker.
Related Commands:
Report Sequential Instances - displays information and testability data for
sequential instances.
Write Scan Identification - writes identified/specified scan instances to a file.
To give scan ports specific names (other than those created by default), you can use the Add
Scan Pins command. This command’s usage is as follows:
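As a sketch, assuming the command takes a chain name followed by the scan input and scan output pin names (all hypothetical here):

add scan pins chain1 scan_in1 scan_out1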
After the scan cells are partitioned and grouped into potential scan chains (before scan chain
insertion occurs) DFTAdvisor considers some conditions in assigning scan pins to scan chains:
1. Whether the potential scan chain has all or some of the scan cells driven by the specified
clock (Add Scan Pins -Clock). If yes, then the scan chain is assigned to the specified
scan input and output pins.
2. Whether the output of the scan candidate is directly connected to a declared output pin.
If yes, then the scan input and output pins are assigned to the scan chain containing the
scan cell candidate.
3. Any scan chains not assigned to scan input/output pins using conditions 1 and 2 are
assigned based on the order in which you declared the scan input/output pins using the
Add Scan Pins command.
If a fixed-order file is specified along with the -Fixed option in the Insert Test Logic command,
conditions 1 and 2 are ignored and the chain_id in the fixed-order file is then sorted in
increasing order. The chain with the smallest chain_id receives the first specified scan
input/output pins. The chain with the second smallest chain_id receives the second specified
scan input/output pins, and so on. If you did not specify enough scan input/output pins for all
scan chains, then DFTAdvisor creates new scan input/output pins for the remaining scan chains.
For information on the format of the fixed-order file, refer to the Insert Test Logic command in
the DFTAdvisor Reference Manual.
Related Commands:
Delete Scan Pins - deletes scan chain inputs, outputs, and clock names.
Report Scan Pins - displays scan chain inputs, outputs, and clock names.
Setup Scan Pins - specifies the index or bus naming conventions for scan
input and output pins.
SETup SCan INsertion [{-SEN name [-Isolate] | -TEn name} [-Active {Low | High}]}] [-TClk
name] [-SClk name] [-SMclk name] [-SSclk name] {{[-SET name] | [-RESet name] |
[-Write name] | [-REAd name]}... [-Muxed | -Disabled | -Gated]}
If you do not specify this command, the default pin names are scan_en, test_en, test_clk,
scan_clk, scan_mclk, scan_sclk, scan_set, scan_reset, write_clk, and read_clk, respectively. If
you want to specify the names of existing pins, you can specify top module pins or dangling
pins of lower level modules.
Note
If DFTAdvisor adds more than one test clock, it names the first test clock the specified or
default <name> and names subsequent test clocks based on this name plus a unique
number.
The -Muxed and -Disabled switches specify whether DFTAdvisor uses an AND gate or MUX
gate when performing the gating. If you specify the -Disabled option, then for gating purposes
DFTAdvisor ANDs the test enable signal with the set and reset to disable these inputs of flip-
flops. If you specify the -Muxed option, then for muxing purposes DFTAdvisor uses any set and
reset pins defined as clocks to multiplex with the original signal. You can specify the -Muxed
and -Disabled switches for individual pins by successively issuing the Setup Scan Insertion
command.
If DFTAdvisor writes out a test procedure file, it places the scan enable at 1 (0) if you specify -
Active high (low).
Note
If the test enable and scan enable have different active values, you must specify them
separately in different Setup Scan Insertion commands. For more information on the
Setup Scan Insertion command, refer to the DFTAdvisor Reference Manual.
After setting up for internal scan insertion, refer to “Running the Insertion Process” on
page 5-34 to complete insertion of the internal scan circuitry.
DFTAdvisor uses the head register (specified by the scan_input_pin) and the tail register
(specified by the scan_output_pin) to determine the beginning and ending points of the scan
chain. Scan cells are inserted between these registers.
During test logic insertion, DFTAdvisor attaches the non-scan head register’s output to the
beginning of the scan chain, performs scan replacement on the tail register, and then attaches
the scan tail register’s input to the end of the scan chain. If there is no scan replacement in the
ATPG library for the tail register, a MUX is added to include the tail DFF into the scan chain.
Note
No design rule checks are performed from the scan_in pin to the output of the head
register and from the output of the tail register to the scan_out pin. You are responsible
for making those paths transparent for scan shifting.
Note
DFTAdvisor does not determine the associated top-level pins that are required to be
identified for the Add Scan Chains command. You are responsible for adding this
information to the dofile that DFTAdvisor creates using the Write ATPG Setup
command. You must also provide the pin constraints that cause the correct behavior of
the head and tail registers.
To attach registers to the head and tail of the scan chain, you can use the Add Scan Pins
command, specifying the scan input (head register output pin) and scan output (tail register
input pin) of the registers along with the -Registered switch. This command’s usage is as
follows:
After setting up for test point insertion, refer to “Running the Insertion Process” on page 5-34 to
complete insertion of the test point circuitry.
Related Commands:
Delete Buffer Insertion - deletes added buffer insertion information.
Report Buffer Insertion - displays inserted buffer information.
When you issue this command for scan insertion (assuming appropriate prior setup),
DFTAdvisor converts all identified scannable memory elements to scan elements and then
stitches them into one or more scan chains. If you select partition scan for insertion,
DFTAdvisor converts the non-scan cells identified for partition scan to partition scan cells and
stitches them into scan chains separate from internal scan chains.
The scan circuitry insertion process may differ depending on whether you insert scan cells and
connect them up front, or insert and connect them after layout data is available. DFTAdvisor
allows you to insert scan using both methods.
To insert scan chains and other test structures into your design, you use the Insert Test Logic
command. This command’s usage is as follows:
INSert TEst Logic [filename [-Fixed]] [-Scan {ON | OFf}] [-Test_point {ON | OFf}] [-Ram
{ON | OFf}] {[-NOlimit] | [-Max_length integer] | [-NUmber [integer]]} [-Clock {Nomerge
| Merge}] [-Edge {Nomerge | Merge}] [-COnnect {ON | OFf | Tied | Loop | Buffer}] [-
Output {Share | New}] [-MOdule {Norename | Rename}] [-Verilog]
The Insert Test Logic command has a number of different options, most of which apply
primarily to internal scan insertion.
• If you are using specific cell ordering, you can specify a filename of user-identified
instances (in either a fixed or random order) for the stitching order.
• The -Max_length option lets you specify a maximum length to the chains.
• The -NOlimit switch allows an unlimited chain length.
• The -NUmber option lets you specify the number of scan chains for the design.
• The -Clock switch lets you choose whether to merge two or more clocks on a single
chain.
• The -Edge switch lets you choose whether to merge stable high clocks with stable low
clocks on chains.
The subsection that follows, "Merging Chains with Different Shift Clocks", discusses
some of the issues surrounding merging chains with different clocks.
• The -COnnect option lets you specify whether to connect the scan cells and scan-
specific pins (scan_in, scan_enable, scan_clock, and so on) to the scan chain (the
default mode), or just replace the scan candidates with scan equivalent cells. If you want
to use layout data, you should replace the scan cells (using the -connect off switch),
perform layout, obtain a placement order file, and then connect the chain in the
appropriate order (using the filename argument with the -fixed option). This option is
affected by the settings in the Set Test Logic command. The other options to the
-COnnect switch specify how to handle the input/output scan pins when not stitching
the scan cells into a chain.
• The -Scan, -Test_point, and -Ram switches let you turn scan insertion, test point
insertion and RAM gating on or off.
• The -Verilog switch causes DFTAdvisor to insert buffer instances, rather than use the
“assign” statement, for scan output pins that also fan out as functional outputs.
If you do not specify any options, DFTAdvisor stitches the identified instances into default scan
chain configurations. Because this command contains many options, refer to the Insert Test
Logic command reference page for additional information.
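For example, to stitch the identified scan cells into at most four chains without merging clocks on a chain, you could enter:

insert test logic -scan on -number 4 -clock nomerge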
Note
Because the design is significantly changed by the action of this command, DFTAdvisor
frees up (or deletes) the original flattened, gate-level simulation model it created when
you entered the DFT system mode.
When you have cells that do not share the same shift clock, you can have them use the same
scan chain by adding them to a clock group. This informs DFTAdvisor which scan cells to place
together in the chain. Note that lockup latches cannot be placed between the cells from different
clock groups since such cells will be in different scan chains. However, lockup latches will still
be inserted between the cells of different shift clocks, within the same clock group. You specify
clock groups using the Add Clock Groups command, whose usage is as follows:
Note
To have the clocks merged into one, you must specify the “-Clock merge” option when
specifying the Insert Test Logic command.
If you want to insert lockup latches, you must first specify the two-input D latch you want to use
with the Add Cell Models command. You specify for DFTAdvisor to insert lockup latches with
the Set Lockup Latch command. This command’s usage is as follows:
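A sketch of the setup, assuming a library D latch model named dlat1 (hypothetical), two shift clocks clka and clkb that should share a chain, and that Add Clock Groups takes a group name followed by the clock pins:

add cell models dlat1 -type dlat EN D
add clock groups grp1 clka clkb
set lockup latch on
insert test logic -clock merge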
Figure 5-7 illustrates lockup latch insertion. Notice the extra inverter on the clock line of the
lockup cell, which ensures a half-cycle delay for synchronization of the clock domains. The
lockup latch is inserted only on the scan path and therefore does not interfere with the
functional operation of the circuit.
(Figure 5-7: before insertion, two scan cells clocked by clka and clkb are connected directly in
the scan path; after insertion, a lockup latch (LL), clocked by an inverted clka, sits on the scan
path between the two scan cells.)
If you specify the -Last option, DFTAdvisor can also insert a lockup latch between the last scan
cell in the chain and the scan out pin. The -Nolast option is the default, which means
DFTAdvisor does not insert a lockup latch as the last element in the chain. For more
information on inserting lockup latches, please refer to the Set Lockup Latch and Insert Test
Logic commands in the DFTAdvisor Reference Manual.
Related Commands:
Delete Clock Groups - deletes the specified clock groups.
Report Clock Groups - reports the added clock groups.
Report Dft Check - displays and writes the scannability check status for all
non-scan instances.
Report Scan Cells - displays a list of all scan cells.
Report Scan Chains - displays scan chain information.
Report Scan Groups - displays scan chain group information.
WRIte NEtlist filename [-Edif | -Tdl | -Verilog | -VHdl | -Genie | -Ndl] [-Replace]
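For example, to write the scan-inserted design back out in the same format in which it was read (a Verilog netlist in this sketch; the file name is hypothetical):

write netlist my_design_scan.v -verilog -replace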
• DFTAdvisor is not intended for use as a robust netlist translation tool. Thus, you should
always write out the netlist in the same format in which you read the original design.
• If a design contains only one instantiation of a module, and DFTAdvisor modifies the
instance by adding test structures, the instantiation retains the original module name.
• When DFTAdvisor identically modifies two or more instances of the same module, all
modified instances retain the original module name. This generally occurs for full scan
designs.
• If a design contains multiple instantiations of a module, and DFTAdvisor modifies them
differently, DFTAdvisor derives new names for each instance based on the original
module name.
• DFTAdvisor assigns “net” as the prefix for new net names and “uu” as the prefix for
new instance names. It then compares new names with existing names (in a case-
insensitive manner) to check for naming conflicts. If it encounters naming conflicts, it
changes the new name by appending an index number.
• When writing directory-based Genie netlists, DFTAdvisor writes out modules based on
directory names in uppercase. Instance names within the netlist, however, remain in
their original case.
To create test procedure files, issue the Write Atpg Setup command. This command’s usage is
as follows:
For example, if DFTAdvisor adds a single scan chain and writes out an ATPG setup file named
scan_design.dofile, enter something like the following:
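A hedged sketch, assuming Write Atpg Setup takes a file-name prefix and a -Replace switch, and writes the dofile (scan_design.dofile) along with the corresponding test procedure file:

write atpg setup scan_design -replace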
Exiting DFTAdvisor
When you finish the DFTAdvisor session, exit the application by executing the File > Exit
menu item, by clicking the Exit button in the Control Panel window, or by typing:
DFT> exit
Using block-by-block scan insertion, the tool inserts scan (referred to as “sub-chains”) into
blocks A, B, and C, prior to insertion in the Top module. When A, B, and C already contain
scan, inserting scan into the Top module is equivalent to inserting any scan necessary at the top
level, and then connecting the existing scan circuitry in A, B, and C at the top level.
1. Insert scan into block A.
b. Insert scan.
Set up the circuit, run rules checking, insert the desired scan circuitry.
c. Write out scan-inserted netlist.
Write the scan-inserted netlist to a new filename, such as a_scan.hdl. The new
module interface may differ, for example:
A(a_i, a_o, sc_i, sc_o, sc_en)
e. Exit DFTAdvisor.
2. Insert scan into block B.
Follow the same procedure as in block A.
3. Insert scan into block C.
Follow the same procedure as in blocks A and B.
4. Concatenate the individual scan-inserted netlists into one file.
$ cat top.hdl a_scan.hdl b_scan.hdl c_scan.hdl > all.hdl
Figure 5-9 shows a schematic view of the design with scan connected in the Top
module.
(Figure 5-9: all.hdl — the TOP module instantiates blocks A, B, and C, each with its own
sub-chain; their sc_in and sc_out ports are connected to the top-level scan pins, a common
sc_en drives all blocks, and the top-level combinational logic surrounds the blocks.)
FastScan and FlexTest are the Mentor Graphics ATPG tools for generating test patterns.
Figure 6-1 shows the layout of this chapter and the process for generating test patterns for your
design.
This section discusses each of the tasks outlined in Figure 6-1. You will use FastScan and/or
FlexTest (and possibly ModelSim, depending on your test strategy) to perform these tasks.
Before you use FastScan and/or FlexTest, you should learn the basic process flow, the tool’s
inputs and outputs, and its basic operating methods. The following subsections describe this
information.
You should also have a good understanding of the material in both Chapter 2, “Understanding
Scan and ATPG Basics", and Chapter 3, "Understanding Common Tool Terminology and
Concepts."
(Figure 6-1: FastScan/FlexTest process flow — after invocation, the design is flattened (if not
already flattened) and the circuitry is learned; design rules checking is then performed using the
test procedure file. Once the checks pass, the tool enters Good, Fault, or ATPG mode, where you
read in patterns, create or read a fault list, run simulation or ATPG, compress patterns, and save
the resulting patterns.)
The following list describes the basic process for using FastScan and/or FlexTest:
1. FastScan and FlexTest require a structural (gate-level) design netlist and a DFT library.
“FastScan and FlexTest Inputs and Outputs” on page 6-5 describes which netlist formats
you can use with FastScan and FlexTest. Every element in the netlist must have an
equivalent description in the specified DFT library. The “Design Library” section in the
Design-for-Test Common Resources Manual gives information on the DFT library. At
invocation, the tool first reads in the library and then the netlist, parsing and checking
each. If the tool encounters an error during this process, it issues a message and
terminates invocation.
2. After a successful invocation, the tool goes into Setup mode. Within Setup mode, you
perform several tasks, using commands either interactively or through the use of a
dofile. You can set up information about the design and the design’s scan circuitry.
“Setting Up Design and Tool Behavior” on page 6-18 documents this setup procedure.
Within Setup mode, you can also specify information that influences simulation model
creation during the design flattening phase.
3. After performing all the desired setup, you can exit the Setup mode. Exiting Setup mode
triggers a number of operations. If this is the first attempt to exit Setup mode, the tool
creates a flattened design model. This model may already exist if a previous attempt to
exit Setup mode failed or you used the Flatten Model command. “Model Flattening” on
page 3-10 provides more details on design flattening.
4. Next, the tool performs extensive learning analysis on this model. “Learning Analysis”
on page 3-15 explains learning analysis in more detail.
5. Once the tool creates a flattened model and learns its behavior, it begins design rules
checking. The “Design Rules Checking” section in the Design-for-Test Common
Resources Manual gives a full discussion of the design rules.
6. Once the design passes rules checking, the tool enters either Good, Fault, or Atpg mode.
While typically you would enter the Atpg mode, you may want to perform good
machine simulation on a pattern set for the design. “Good Machine Simulation” on
page 6-40 describes this procedure.
7. You may also just want to fault simulate a set of external patterns. “Fault Simulation” on
page 6-37 documents this procedure.
8. At this point, you may typically want to create patterns. However, you must perform
some additional setup steps, such as creating the fault list. “Setting Up the Fault
Information for ATPG” on page 6-43 details this procedure. You can then run ATPG on
the fault list. During the ATPG run, the tool also performs fault simulation to verify that
the generated patterns detect the targeted faults.
If you started ATPG by using FastScan, and your test coverage is still not high enough
because of sequential circuitry, you can repeat the ATPG process using FlexTest.
Because the FlexTest algorithms differ from those of FastScan, using both applications
on a design may lead to a higher test coverage. In either case (full or partial scan), you
can run ATPG under different constraints, or augment the test vector set with additional
test patterns, to achieve higher test coverage. “Performing ATPG” on page 6-48 covers
this subject.
After generating a test set with FastScan or FlexTest, you should apply timing
information to the patterns and verify the design and patterns before handing them off to
the vendor. “Verifying Test Patterns” on page 6-130 documents this operation.
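As a minimal sketch of this flow in dofile form (the clock pin, off-state, and file name are hypothetical, and a real run normally includes more setup):

// define the clock, then exit Setup mode (triggers flattening, learning, and DRC)
add clocks 0 clk
set system mode atpg
// build the fault list, generate patterns, compact them, and write them out
add faults -all
run
compress patterns
save patterns results.ascii -replace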
[Figure: FastScan and FlexTest inputs and outputs. A design netlist, test procedure file, ATPG library, fault list, and test patterns feed into FastScan or FlexTest, which produces test patterns, ATPG information files, and a fault list.]
• Design
The supported design data formats are GENIE, Tegas Design Language (TDL), Verilog,
and VHDL. Other inputs also include 1) a cell model from the design library and 2) a
previously-saved, flattened model (FastScan Only).
• Test Procedure File
This file defines the operation of the scan circuitry in your design. You can generate this file by hand (a minimal sketch appears after this list), or DFTAdvisor can create it automatically when you issue the Write Atpg Setup command.
• Library
The design library contains descriptions of all the cells used in the design.
FastScan/FlexTest use the library to translate the design data into a flat, gate-level
simulation model for use by the fault simulator and test generator.
• Fault List
FastScan and FlexTest can both read in an external fault list. They can use this list of
faults and their current status as a starting point for test generation.
• Test Patterns
FastScan and FlexTest can both read in externally generated test patterns and use those
patterns as the source of patterns to be simulated.
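To illustrate the Test Procedure File entry above, the following is a minimal sketch in the classic procedure file format; the pin names (clk, scan_en), the number of shifts, and the timing values are hypothetical:

procedure shift =
    force_sci 0;
    measure_sco 0;
    force clk 1 1;
    force clk 0 2;
    period 3;
end;

procedure load_unload =
    force clk 0 0;
    force scan_en 1 0;
    apply shift 8 1;
end;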
FastScan and FlexTest produce the following outputs:
• Test Patterns
FastScan and FlexTest generate files containing test patterns. They can generate these
patterns in a number of different simulator and ASIC vendor formats. “Test Pattern
Formatting and Timing” on page 7-1 discusses the test pattern formats in more detail.
• ATPG Information Files
These consist of a set of files containing information from the ATPG session. For
example, you can specify creation of a log file for the session.
• Fault List
This is an ASCII-readable file that contains internal fault information in the standard
Mentor Graphics fault format.
Note
ATPG constraints and circuitry subject to bus contention are poor conditions for random pattern generation. If you specify ATPG constraints, FastScan does not perform random pattern generation.
During this process, FastScan identifies and removes redundant faults from the fault list. After it
creates enough patterns for a fault simulation pass, it displays a message that indicates the
number of redundant faults, the number of ATPG untestable faults, and the number of aborted
faults that the test generator identifies. FastScan then once again invokes the fault simulator,
removing all detected faults from the fault list and placing the effective patterns in the test set.
FastScan then selects another set of patterns and iterates through this process until no faults
remain in the current fault list, except those aborted during test generation (that is, those in the
UC or UO categories).
The most commonly used test cycle contains the events: force_pi, measure_po,
capture_clock_on, and capture_clock_off. The test vectors used to read or write into RAMs
contain the events force_pi, ram_clock_on, and ram_clock_off. You can associate real times
with each event via the timing file.
Because FastScan is optimized for use with scan designs, the basic scan pattern contains the
events from which the tool derives all other pattern types.
Clock PO Patterns
Figure 6-4 shows that in some designs, a clock signal may go to a primary output through some
combinational logic.
[Figure 6-4: a clock signal that drives primary outputs through combinational logic while also clocking state elements (LA).]
FastScan considers any pattern that measures a PO with connectivity to a clock, regardless of
whether or not the clock is active, a clock PO pattern. A normal scan pattern has all clocks off
during the force of the primary inputs and the measure of the primary outputs. However, in the
clocked primary output situation, if the clock is off, a condition necessary to test a fault within
this circuitry might not be met and the fault may go undetected. In this case, in order to detect
the fault, the pattern must turn the clock on during the force and measure. This does not happen
in the basic scan pattern. FastScan allows this within a clock PO pattern, to observe primary
outputs connected to clocks.
2. Force values on all primary inputs, potentially including clocks (with constrained pins at their constrained values).
3. Measure all primary outputs that are connected to scan clocks.
FastScan generates clock PO patterns whenever it learns that a clock connects to a primary
output and if it determines that it can only detect faults associated with the circuitry by using a
clock PO pattern. If you do not want FastScan to generate clock PO patterns, you can turn off
the capability as follows:
A depth of zero indicates combinational circuitry. A depth greater than one indicates limited
sequential circuitry. You should, however, be careful of the depth you specify. You should start
off using the lowest sequential depth and analyzing the run results. You can perform several
runs, if necessary, increasing the sequential depth each time. Although the maximum allowable
depth limit is 255, you should typically limit the value you specify to five or less, for
performance reasons.
When you activate this capability, you allow the tool to include a scan load in any pattern cycle
except the capture cycle.
You can also get the tool to generate multiple load clock sequential patterns to test RAMs. The
following command enables this capability:
A minimum sequential depth of 4 is required to enable the tool to create the multiple cycle
patterns necessary for RAM testing. The patterns are very similar to RAM sequential patterns,
but for many designs will give better coverage than RAM sequential patterns. This method also
supports certain tool features (MacroTest, dynamic compression, split-capture cycle, clock-off
simulation) not supported by RAM sequential patterns.
Conversely, if the fault on the highest order bit of the address line is a stuck-at-0 fault, you
would want to write the initial data, D, to location 0000. You would then write different data,
D’, to location 1000. If a stuck-at-0 fault were present on the highest address bit, the faulty
machine would overwrite location 0000 with the value D’. Next, you would attempt to read
from address location 0000. With the stuck-at-0 fault on the address line, you would read D’.
You can instruct FastScan to generate RAM sequential patterns by issuing the Set Pattern Type
command as follows:
For latches that do not behave transparently, a user-defined procedure can force some of them to
behave transparently between the primary input force and primary output measure. A test
procedure, which is called seq_transparent, defines the appropriate conditions necessary to
force transparent behavior of some latches. The events in sequential transparent patterns
include:
This is typically how FlexTest performs ATPG. However, FlexTest can also generate functional
vectors based on the instruction set of a design. The ATPG method it uses in this situation is
significantly different from the sequential-based ATPG method it normally uses. For
information on using FlexTest in this capacity, refer to “Creating Instruction-Based Test Sets
(FlexTest)” on page 6-107.
[Figure 6-5: primary inputs and storage elements feeding a combinational block that drives the primary outputs; the storage elements are clocked by Clk.]
In Figure 6-5, all the storage elements are edge-triggered flip-flops controlled by the rising edge
of a single clock. The primary outputs and the final values of the storage elements are always
stable at the end of each clock cycle, as long as the data and clock inputs of all flip-flops do not
change their values at the same time. The clock period must be longer than the longest signal
path in the combinational block. Also, stable values depend only on the primary input values
and the initial values on the storage elements.
For the multiple-phase design, relative timing among all the clock inputs determines whether
the circuit maintains its cycle-based behavior.
In Figure 6-6, the clocks PH1 and PH2 control two groups of level-sensitive latches which
make up this circuit’s storage elements.
[Figure 6-6: a two-phase design. Storage element 1 (between points A and B) and storage element 2 (between points C and D) are separated by a combinational block and controlled by clocks PH1 and PH2.]
When PH1 is on and PH2 is off, the signal propagates from point D to point C. On the other
hand, the signal propagates from point B to point A when PH1 is off and PH2 is on. Designers
commonly use this cycle-based methodology in two-phase circuits because it generates
systematic and predictable circuit behavior. As long as PH1 and PH2 are not on at the same
time, the circuit exhibits cycle-based behavior. If these two clocks are on at the same time, the
circuit can operate in an unpredictable manner and can even become unstable.
In FlexTest, as opposed to FastScan, you must specify the timing information for the test cycles.
FlexTest provides a sophisticated timing model that you can use to properly manage timing
relationships among primary inputs—especially for critical signals, such as clock inputs.
FlexTest uses a test cycle, which is conceptually the same as an ATE test cycle, to represent the
period of each primary input. If the input cycle of a primary input is longer (for example, a
signal with a slower frequency) than the length you set for the test cycle, then you must
represent its period as a multiple of test cycles.
A test cycle further divides into timeframes. A timeframe is the smallest time unit that FlexTest
can simulate. The tool simulates whatever events occur in the timeframe until signal values
stabilize. For example, if data inputs change during a timeframe, the tool simulates them until
the values stabilize. The number of timeframes equals the number of simulation processes
FlexTest performs during a test cycle. At least one input must change during a defined
timeframe. You use timeframes to define the test cycle terms offset and the pulse width. The
offset is the number of timeframes that occur in the test cycle before the primary input goes
active. The pulse width is the number of timeframes the primary input stays active.
Figure 6-7 shows a primary input with a positive pulse in a six timeframe test cycle. In this
example, the period of the primary input is one test cycle. The length of the test cycle is six
timeframes, the offset is two timeframes, and the width of its pulse is three timeframes.
[Figure 6-7: timeframes for pin constraints. The test cycle spans timeframes 0 through 6; the primary input pulse begins after an offset of two timeframes and lasts three timeframes.]
In this example, if other primary inputs have periods longer than the test cycle, you must define
them in multiples of six timeframes (the defined test cycle period). Time 0 is the same as time 6,
except time 0 is treated as the beginning of the test cycle, while time 6 is treated as the end of
the test cycle.
Note
To increase the performance of FlexTest fault simulation and ATPG, you should try to
define the test cycle to use as few timeframes as possible.
For most automatic test equipment, the tester strobes each primary output only once in each test
cycle and can strobe different primary outputs at different timeframes. In the non-scan
environment, FlexTest strobes primary outputs at the end of each test cycle by default.
FlexTest groups all primary outputs with the same pin strobe time in the same output bus array,
even if the outputs have different pin strobe periods. At each test cycle, FlexTest displays the
strobed values of all output bus arrays. Primary outputs not strobed in the particular test cycle
receive unknown values.
In the scan environment, if any scan memory element capture clock is on, the scan-in values in
the scan memory elements change. Therefore, in the scan test, right after the scan load/unload
operation, no clocks can be on. Also, the primary output strobe should occur before any clocks
turn on. Thus, in the scan environment, FlexTest strobes primary outputs after the first
timeframe of each test cycle by default.
If you strobe a primary output while the primary inputs are changing, FlexTest first strobes the
primary output and then changes the values at the primary inputs. To be consistent with the
boundary of the test cycle (using Figure 6-7 as an example), you must describe the primary
input’s value change at time 6 as the change in value at time 0 of the next test cycle. Similarly,
the strobe time at time 0 is the same as the strobe time at time 6 of the previous test cycle.
Cycle-based test patterns are easy to use and tend to be portable among the various automatic
test equipment. For most ATE, the tester allows each primary input to change its value up to
two times within its own input cycle period. A constant value means that the value of the
primary input does not change. If the value of the primary input changes only once (generally
for data inputs) in its own cycle, then the tester holds the new value for one cycle period. A
pulse input means that the value of the primary input changes twice in its own cycle. For
example, clock inputs behave in this manner.
Also refer to “User Interface Overview” on page 1-8 for more general information.
For FastScan:
$MGC_HOME/bin/fastscan
For FlexTest:
$MGC_HOME/bin/flextest
Once the tool is invoked, a dialog box prompts you for the required arguments (design name,
design format, and library). Browser buttons are provided for navigating to the appropriate files.
Once the design and library are loaded, the tool is in Setup mode and ready for you to begin
working on your design.
Using the second option requires you to enter all required arguments at the shell command line.
For FastScan:
$MGC_HOME/bin/fastscan {{{design_name [-VERILOG | -VHDL | -TDL | -GENIE | -EDIF |
-FLAT]} | {-MODEL cell_name}} {-LIBrary library_name} [-INSENsitive | -SENsitive]
[-LOGfile filename [-REPlace]] [-NOGui] [-TOP model_name]
[-DOFile dofile_name [-History]] [-LICense retry_limit] [-DIAG] [-32 | -64]} |
{[-HELP] | [-USAGE] | [-VERSION]}
For FlexTest:
$MGC_HOME/bin/flextest {{{design_name [-VERILOG | -VHDL | -TDL | -GENIE | -EDIF |
-FLAT]} | {-MODEL cell_name}} {-LIBrary library_name} [-INSENsitive | -SENsitive]
[-LOGfile filename] [-REPlace] [-NOGui] [-FaultSIM] [-TOP model_name] [-32 | -64]
[-DOFile dofile_name [-History]] [-LICense retry_limit] [-Hostfile host_filename]} |
{[-HELP] | [-USAGE] | [-VERSION]}
When invocation finishes, the design and library are loaded and the tool is in Setup mode, ready for you to begin working on your design. By default, the tool invokes in graphical mode, so if you want to use the command-line interface, you must specify the -Nogui switch with the second invocation option.
The application argument is either “fastscan” or “flextest”. The design_name is a netlist in one
of the appropriate formats. EDIF is the default format. The library contains descriptions of all
the library cells used in the design.
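For example, an invocation on a Verilog netlist in command-line mode might look like the following (the design and library file names are hypothetical):

$MGC_HOME/bin/fastscan my_design.v -verilog -library my_dft.lib -logfile fastscan.log -replace -nogui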
Note
The invocation syntax for both FastScan and FlexTest includes a number of other
switches and options. For a list of available options and explanations of each, you can
refer to “Shell Commands” in the ATPG Tools Reference Manual or enter:
$ $MGC_HOME/bin/<application> -help
You invoke this version of FastScan using the -Diag switch. Using the -Diag switch checks for
the diagnostics-only license, and if found, invokes the FastScan diagnostics-only capabilities.
You invoke this version of FlexTest using the -Faultsim switch, which checks for the fault
simulation license, and if found, invokes the fault simulation package.
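For example (design and library names hypothetical):

$MGC_HOME/bin/fastscan my_design.v -verilog -library my_dft.lib -diag
$MGC_HOME/bin/flextest my_design.v -verilog -library my_dft.lib -faultsim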
• Help
• all Report commands
• all Write commands
• Set Abort Limit
• Set Atpg Limits
• Set Checkpoint
• Set Fault Mode
• Set Gate Level
• Set Gate Report
• Set Logfile Handling
• Save Patterns
You may find these commands useful in determining whether or not to resume the process. By default, interrupt handling is off, so an interrupted process is aborted. If you instead want an interrupted process to remain in a suspended state, turn interrupt handling on with the Set Interrupt Handling command.
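For example (the on/off argument is inferred from the behavior described above):

set interrupt handling on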
After you turn interrupt handling on and interrupt a process, you can either terminate the
suspended process using the Abort Interrupted Process command or continue the process using
the Resume Interrupted Process command.
For more information on interrupt capabilities, see “Interrupting the Session” on page 1-22.
Note
Drc mode applies to FlexTest only. While FastScan uses the same model for design rules
checking and other processes, FlexTest creates a slightly different version of the design
after successfully passing rules checking. Thus, Drc mode allows FlexTest to retain this
intermediate design model.
To change the system mode, you use the Set System Mode command, whose usage is as
follows:
Related Commands:
To add primary inputs to a circuit, at the Setup mode prompt, use the Add Primary Inputs
command. This command’s usage is as follows:
When you add previously undefined primary outputs, they are called user class primary outputs,
while the original primary outputs are called system class primary outputs.
To add primary outputs to a circuit, at the Setup mode prompt, use the Add Primary Outputs
command. This command’s usage is as follows:
Related Commands:
From the tool’s perspective, a bidi consists of several gates and includes an input port and an
output port. In FastScan, you can use the commands, Report Primary Inputs and Report Primary
Outputs, to view PIs and POs. Pins that are listed by both commands are bidirectional pins. The
usage for the Report Primary Inputs command is as follows (Report Primary Outputs is similar):
Note
Altering the design’s interface will result in generated patterns that are different than
those the tool would generate for the original interface. It also prevents verification of the
saved patterns using the original netlist interface. If you want to be able to verify saved
patterns by performing simulation using the original netlist interface, you must use the
commands described in the following subsections instead of the Delete Primary
Inputs/Outputs commands.
Pins listed in the output of both commands (shown in bold font) are pins the tool will treat as
bidis during test generation. To force the tool to treat a bidi as a PI or PO, you can remove the
definition of the unwanted input or output port. The following example removes the input port
definition, then reports the PIs and POs. You can see the tool now only reports the bidis as POs,
which reflects how those pins will be treated during ATPG:
SYSTEM: /my_inout[2]
SYSTEM: /my_inout[1]
SYSTEM: /my_inout[0]
Because the preceding approach alters the design’s interface within the tool, it may not be
acceptable in all cases. Another approach, explained earlier, is to have the tool treat a bidi as a
PI or PO during ATPG only, without altering the design interface. To obtain PO treatment for a
bidi, constrain the input part of the bidi to the high impedance state. The following command
does this for the /my_inout[0] bidi:
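(A likely form of the command, using the CZ constraint value described later in this chapter:)

add pin constraints /my_inout[0] CZ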
To have the tool treat a bidi as a PI during ATPG only, direct the tool to mask (ignore) the
output part of the bidi. The following example does this for the /my_inout[0] and /my_inout[1]
pins:
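(A likely form of the commands; the report output itself is not reproduced here:)

add output masks /my_inout[0] /my_inout[1]
report output masks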
The “TIEX” in the output of “report output masks” indicates the two pins are now tied to X,
which blocks their observability and prevents the tool from using them during ATPG.
To add tied signals, at the Setup mode prompt, use the Add Tied Signals command. This
command’s usage is as follows:
This command assigns a fixed value to every named floating net or pin in every module of the
circuit under test.
Related Commands:
Setup Tied Signals - sets default for tying unspecified undriven signals.
Delete Tied Signals - deletes the current list of specified tied signals.
Report Tied Signals - displays current list of specified tied nets and pins.
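For example, to tie a floating net to logic 1 (the net pathname is hypothetical):

add tied signals 1 /top/core/float_net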
You can specify one or more primary input pin pathnames to be constrained to one of the
following formats: constant 0 (C0), constant 1 (C1), high impedance (CZ), or unknown (CX).
For FlexTest, the Add Pin Constraint command supports a number of additional constraint
formats for specifying the cycle-based timing of primary input pins. Refer to “Defining the
Cycle Behavior of Primary Inputs” on page 6-31 for the FlexTest-specific timing usage of this
command.
For detailed information on the tool-specific usages of this command, refer to Add Pin
Constraint in the ATPG Tools Reference Manual.
To prevent a problem caused by this loopback, use the Add Slow Pad command to modify the
simulated behavior of the bidirectional I/O pin, on a pin by pin basis. This command’s usage is
as follows:
For a slow pad, the simulation of the I/O pad changes so that the value propagated into the
internal logic is X whenever the primary input is not driven. This causes an X to be captured for
all observation points dependent on the loopback value.
Related Commands:
Delete Slow Pad - resets the specified I/O pin back to the default simulation mode.
Report Slow Pads - displays all I/O pins marked as slow.
Related Commands:
Set Learn Report - enables access to certain data learned during analysis.
Set Loop Handling - specifies the method in which to break loops.
Set Pattern Buffer - enables the use of temporary buffer files for pattern data.
Set Possible Credit - sets credit for possibly-detected faults.
Set Pulse Generators - specifies whether to identify pulse generator sink gates during
learning analysis.
Set Race Data - specifies how to handle flip-flop race conditions.
Set Rail Strength - sets the strongest strength of a fault site to a bus driver.
Set Redundancy Identification - specifies whether to perform redundancy
identification during learning analysis.
For FastScan:
SET COntention Check OFf | {{ON | CAPture_clock} [-Warning | -Error] [-Bus | -Port | -ALl]
[-BIdi_retain | -BIDI_Mask] [-ATpg | -CATpg] [-NOVerbose | -Verbose | -VVerbose]}
For FlexTest:
SET COntention Check OFf | {ON [-Warning | -Error] [-Bus | -Port | -ALl] [-ATpg]
[-Start frame#]}
By default, contention checking is on, as are the switches -Warning and -Bus, causing the tool
to check tri-state driver buses and issue a warning if bus contention occurs during simulation.
FastScan and FlexTest vary somewhat in their contention checking options. For more
information on the different contention checking options, refer to the Set Contention Check
command page in the ATPG Tools Reference Manual.
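For example, to keep contention checking on and apply it during ATPG as well (switch names are from the syntax above):

set contention check on -atpg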
To display the current status of contention checking, use the Report Environment command.
Related Commands:
On the other hand, if you have a net with multiple non-tri-state drivers, you may want to specify
this type of net’s output value when its drivers have different values. Using the Set Net
Resolution command, you can set the net’s behavior to And, Or, or Wire (unknown behavior).
The default Wire option requires all inputs to be at the same state to create a known output
value. Some loss of test coverage can result unless the behavior is set to And (wired-and) or Or
(wired-or). To set the multi-driver net behavior, at the Setup mode prompt, you use the Set Net
Resolution command. This command’s usage is as follows:
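(A likely form of the command, selecting wired-AND behavior for multiple-driver nets:)

set net resolution and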
The default for FastScan and FlexTest is to treat a Z state as an X state. If you want to account
for Z state values during simulation, you can issue the Set Z Handling command.
Internal Z handling specifies how to treat the high impedance state when the tri-state network
feeds internal logic gates. External handling specifies how to treat the high impedance state at
the circuit primary outputs. The ability of the tester normally determines this behavior.
To set the internal or external Z handling, use the Set Z Handling command at the Setup mode
prompt. This command’s usage is as follows:
Note
This command is not necessary if the circuit model already reflects the existence of a pull
gate on the tri-state net.
For example, to specify that the tester does not measure high impedance, enter the following:
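(A likely form of the command, treating external high impedance as an unknown value:)

set z handling external x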
For external tri-state nets, you can also specify that the tool measure high impedance as a 0 state that is distinguished from a 1 state (0), as a 1 state that is distinguished from a 0 state (1), as a unique state distinguishable from both a 1 and a 0 state (Z), or (for FlexTest only) as holding its previous state (Hold).
For example, FastScan and FlexTest let you turn the learning process off or change the amount
of effort put into the analysis. You can accomplish this for combinational logic using the Set
Static Learning command, whose usage is as follows:
In FlexTest, you can also use the Set Sequential Learning command to turn the learning process
off for sequential elements. This command’s usage is as follows:
FlexTest also performs state transition graph extraction as part of its learning analysis activities
in an attempt to reduce the state justification effort during ATPG. FlexTest gives you the ability
to turn on or off the state transition process. You accomplish this using the Set Stg Extraction
command, whose usage is as follows:
For example, examine the design of Figure 6-8. It shows a design fragment which fails the C3
rules check.
[Figure 6-8: a design fragment that fails the C3 rules check; state element Q1 (source) drives state element Q2 (sink).]
To allow greater flexibility of capture handling for these types of situations, FastScan provides some commands that alter the default simulation behavior. The Set Split Capture_cycle command, for example, affects whether or not the tool updates simulation data between clock edges. When set to "on", the tool can determine correct capture values for trailing edge and level-sensitive state elements despite C3 and C4 violations. If you get these violations, issue a "set split capture_cycle on" command.
Another such command, Set Capture Handling, modifies how simulation handles data capture. Its usage is as follows:
SET CApture Handling {-Ls {Old | New | X} | -Te {Old | New | X}} [-Atpg | -NOAtpg]
You can select modified capture handling for level sensitive or trailing edge gates. For these
types of gates, you select whether you want simulation to use old data, new data, or X values. If
you specify the -Atpg option, FastScan not only uses the specified capture handling for rules
checking but for the ATPG process as well.
The Set Capture Handling command changes the data capture handling globally for all the
specified types of gates that fail C3 and C4. If you want to selectively change capture handling,
you can use the Add Capture Handling command. The usage for this command is as follows:
Note
When you change capture handling to simulate new data, FastScan just performs new
data simulation for one additional level of circuitry. That is, sink gates capture new values
from their sources. However, if the sources are also sinks that are set to capture new data,
FastScan does not simulate this effect.
For more information on Set Capture Handling or Add Capture Handling, refer to the ATPG
Tools Reference Manual. For more information on C3 and C4 rules violations, refer to
“Clock Rules” in the Design-for-Test Common Resources Manual.
Related Commands:
Delete Capture Handling - removes special data capture handling for the specified
objects.
Set Drc Handling - specifies violation handling for a design rules check.
Set Sensitization Checking - specifies if DRC must determine path sensitization during
the C3 rules check.
With transient detection off, DRC simulation treats all events on state elements as valid.
Because the simulator is a zero delay simulator, it is possible for DRC to simulate zero-width
monostable circuits with ideal behavior, which is rarely matched in silicon. The tool treats the
resulting zero-width output pulse from the monostable circuit as a valid clocking event for other
state elements. Thus, state elements change state although their clock lines show no clocking
event.
With transient detection on, the tool sets state elements to a value of X if the zero-width event
causes a change of state in the state elements. This is the default behavior upon invocation of the
tool.
REPort ENvironment
If you are using the graphical user interface, select the Report > Environment pulldown menu
item.
This command reports on the tool’s current user-controllable settings. If you issue this
command before specifying any setup commands, the application lists the system defaults for
all the setup commands. To write this information to a file, use the Write Environment
command.
By default, FlexTest assumes a test cycle of one timeframe. However, typically you will need to
set the test cycle to two timeframes. And if you define a clock using the Add Clocks command,
you must specify at least two timeframes. In a typical test cycle, the first timeframe is when the
data inputs change (forced and measured) and the second timeframe is when the clock changes.
If you have multi-phased clocks, or want certain data pins to change when the clock is active,
you should set three or more timeframes per test cycle.
At least one input or set of inputs should change in a given timeframe. If not, the timeframe is
unnecessary. Unnecessary timeframes adversely affect FlexTest performance. When you
attempt to exit Setup mode, FlexTest checks for unnecessary timeframes, just prior to design
flattening. If the check fails, FlexTest issues an error message and remains in Setup mode.
To set the number of timeframes in a test cycle, you use the Set Test Cycle command. This
command’s usage is as follows:
There are three components to describing the cyclic behavior of signals. A pulse signal contains
a period (that is equal to or a multiple of test cycles), an offset time, and a pulse width.
Constraining a pin lets you define when its signal can change in relation to the defined test
cycle. To add pin constraints to a specific pin, you use the Add Pin Constraints command. This
command’s usage is as follows:
There are eleven constraint formats from which to choose. The constraint values (or waveform
types) further divide into the three waveform groups used in all automatic test equipment:
To specify a unique strobe time for certain primary outputs, you use the Add Pin Strobes
command. You can also optionally specify the period for each pin strobe. This command’s
usage is as follows:
Any primary output without a specified strobe time uses the default strobe time. To set the
default strobe time for all unspecified primary output pins, you use the Setup Pin Strobes
command. This command’s usage is as follows:
The -Default switch resets the strobe time to the FlexTest defaults, such that the strobe takes
place in the last timeframe of each test cycle, unless there is a scan operation during the test
period. If there is a scan operation, FlexTest sets time 1 as the strobe time for each test cycle.
FlexTest groups all primary outputs with the same pin strobe time in the same output bus array,
even if the outputs have different pin strobe periods. At each test cycle, FlexTest displays the
strobed values of all output bus arrays. Primary outputs not strobed in the particular test cycle
receive unknown values.
Related Commands:
You must specify the off-state for pins you add to the clock list. The off-state is the state in
which clock inputs of latches are inactive. For edge-triggered devices, the off-state is the clock
value prior to the clock’s capturing transition. You add clock pins to the list by using the Add
Clocks command. This command’s usage is as follows:
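(For example, defining two clocks with an off-state of 0; the pin names are hypothetical:)

add clocks 0 clk1 clk2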
You can constrain a clock pin to its off-state to suppress its usage as a capture clock during the
ATPG process. The constrained value must be the same as the clock off-state, otherwise an
error occurs. If you add an equivalence pin to the clock list, all of its defined equivalent pins are
also automatically added to the clock list.
Related Commands:
Delete Clocks - deletes the specified pins from the clock list.
Report Clocks - reports all defined clock pins.
Related Commands:
Delete Scan Groups - deletes specified scan groups and associated chains.
Report Scan Groups - displays current list of scan chain groups.
Note
Scan chains of a scan group can share a common scan input pin, but this condition
requires that both scan chains contain the same data after loading.
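As an illustration, a scan group and its chains are typically defined with commands along these lines (the group, chain, procedure file, and pin names are hypothetical):

add scan groups grp1 scan_group1.testproc
add scan chains chain1 grp1 scan_in1 scan_out1
add scan chains chain2 grp1 scan_in2 scan_out2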
Related Commands:
to have multiple active clocks. Domain_clock (FastScan only) allows more than just clock_po patterns to have multiple active clocks.
Note
If you choose to turn off the clock restriction, you should verify the generated pattern set
using a timing simulator—to ensure there are no timing errors.
You identify a scan cell by either a pin pathname or a scan chain name plus the cell’s position in
the scan chain.
To add constraints to scan cells, you use the Add Cell Constraints command. This command’s
usage is as follows:
If you specify the pin pathname, it must be the name of an output pin directly connected
(through only buffers and inverters) to a scan memory element. In this case, the tool sets the
scan memory element to a value such that the pin is at the constrained value. An error condition
occurs if the pin pathname does not resolve to a scan memory element.
If you identify the scan cell by chain and position, the scan chain must be a currently-defined
scan chain and the position is a valid scan cell position number. The scan cell closest to the
scan-out pin is in position 0. The tool constrains the scan cell’s MASTER memory element to
the selected value. If there are inverters between the MASTER element and the scan cell output,
they may invert the output’s value.
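For example (the chain name, position, and pin pathname are hypothetical):

add cell constraints chain1 0 C1
add cell constraints /core/ctl_reg/q C0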
Related Commands:
Delete Cell Constraints - deletes the constraints from the specified scan cells.
Report Cell Constraints - reports all defined scan cell constraints.
You can specify that the listed pin pathnames, or all the pins on the boundary and inside the
named instances, are not allowed to have faults included in the fault list.
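For example, to keep faults on a particular pin out of the fault list (the pin pathname is hypothetical; see the Add Nofaults reference page for the instance-based options):

add nofaults /top/u_pll/clk_out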
Related Commands:
FastScan and FlexTest perform model flattening, learning analysis, and rules checking when
you try to exit the Setup mode. Each of these processes is explained in detail in “Understanding
Common Tool Terminology and Concepts” on page 3-1. As mentioned previously, to change
from Setup to one of the other system modes, you enter the Set System Mode command, whose
usage is as follows:
If you are using FlexTest, you can also troubleshoot rules violations from within the Drc mode.
This system mode retains the internal representation of the design used during the design rules
checking process.
Note
FastScan does not require the Drc mode because it uses the same internal design model
for all of its processes.
Fault Simulation
The following subsections discuss the procedures for setting up and running fault simulation
using FastScan and FlexTest.
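To begin, set the system mode to Fault; a likely form of the command is:

set system mode fault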
This places the tool in Fault mode, from which you can enter the commands shown in the
remaining fault simulation subsections.
If you are using the graphical user interface, you can click on the palette menu item MODES >
Fault.
If you wish to change the fault type to toggle, pseudo stuck-at (IDDQ), transition, or path delay
(FastScan only), you can issue the Set Fault Type command. This command’s usage is as
follows:
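(For example, to switch to the transition fault type:)

set fault type transition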
“Setting Up the Fault Information for ATPG” on page 6-43 provides more information on
creating the fault list and specifying other fault information.
Note
You may notice a slight drop in test coverage when using an external pattern set as
compared to using generated patterns. This is an artificial drop. See the Set Pattern
Source command in the ATPG Tools Reference Manual for more details.
For FastScan only, the tool can perform simulation with a select number of random patterns.
FlexTest can additionally read in Table format, and also lets you specify what value to use for
pattern padding. Refer to the ATPG Tools Reference Manual for additional information on these
application-specific Set Pattern Source command options.
Related Commands: The following related commands apply if you select the Random pattern
source option:
Set Capture Clock - specifies the capture clock for random pattern simulation.
Set Random Clocks - specifies the selection of clock_sequential patterns for random
pattern simulation.
Set Random Patterns - specifies the number of random patterns to be simulated.
FAULT> run
FlexTest has some options to the run command, which can aid in debugging fault simulation
and ATPG. Refer to the ATPG Tools Reference Manual for information on the Run command
options.
Related Commands:
WRIte FAults filename [-Replace] [-Class class_type] [-Stuck_at {01 | 0 | 1}] [-All |
object_pathname...] [-Hierarchy integer] [-Min_count integer] [-Noeq]
Refer to “Writing Faults to an External File” on page 6-45 or the Write Faults command page in
the ATPG Tools Reference Manual for command option details.
To read the faults back in for ATPG, go to Atpg mode (using Set System Mode) and enter the
Load Faults command. This command’s usage is as follows:
For FastScan
LOAd FAults filename [-Restore | -Delete | -Delete_Equivalent | -Retain]
For FlexTest
LOAd FAults filename [-Restore | -Delete] [-Column integer]
First, set the system mode to Atpg if you are not already in that system mode. Next, you must
specify that the patterns you want to simulate are in an external file (named table.flex in this
example). Then generate the fault list including all faults, and run the simulation. You could
then set the pattern source to be internal and run the basic ATPG process on the remaining
undetected faults.
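A dofile matching these steps could look like the following sketch (table.flex is the external pattern file named above; the rest is a typical command sequence):

set system mode atpg
set pattern source external table.flex
add faults -all
run
set pattern source internal
run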
The following subsections discuss the procedures for running good simulation on existing hand- or ATPG-generated pattern sets using FastScan and FlexTest.
SET OUtput Comparison OFf | {ON [-X_ignore [None | Reference | Simulated | Both]]}
[-Io_ignore]
By default, output comparison during good circuit simulation is off. FlexTest performs the comparison if you specify ON. The -X_ignore option lets you control whether X values in the simulated results, the reference output, both, or neither are ignored during output comparison.
To execute the simulation comparison, enter the Run command at the Good mode prompt as
follows:
GOOD> run
The Add Lists command specifies which pins to report. The Set List File command specifies the
name of the file in which you want to place simulation values for the selected pins.
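For example (the pin pathnames and file name are hypothetical):

add lists /u1/q /u2/data_out
set list file good_sim.values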
If you prefer to perform interactive debugging, you can use the Run and Report Gates
commands to examine internal pin values. If using FlexTest, you can use the -Record switch
with the Run command to store the internal states for the specified number of test cycles.
If you are using the graphical user interface, you can click on the palette menu item MODES >
Fault.
The Delete Faults command with the -untestable switch removes faults from the fault list that
are untestable using random patterns.
FAULT> run
After the simulation run, you can display the undetected faults with the Report Faults command.
Some of the undetected faults may be redundant. You can run ATPG on the undetected faults to
identify those that are redundant.
After the application identifies all the faults, it implements a process of structural equivalence
fault collapsing from the original uncollapsed fault list. From this point on, the application
works on the collapsed fault list. However, the results are reported for both the uncollapsed and
collapsed fault lists. Executing any command that changes the fault list causes the tool to
discard all patterns in the current internal test pattern set due to the probable introduction of
inconsistencies. Also, whenever you re-enter the Setup mode, it deletes all faults from the
current fault list. The following subsections describe how to create a fault list and define fault
related information.
Assuming your circuit passes rules checking with no violations, you can exit the Setup system
mode and enter the Atpg system mode as follows:
If you are using the graphical user interface, you can click on the palette menu item MODES >
ATPG.
If you wish to change the fault type to toggle, pseudo stuck-at (IDDQ), transition, or path delay
(FastScan only), you can issue the Set Fault Type command. This command’s usage is as
follows:
If you are using the graphical user interface, you can click on the palette icon item ADD
FAULTS and specify All in the dialog box that appears.
If you do not want all possible faults in the list, you can use other options of the Add Faults
command to restrict the added faults. You can also specify no-faulted instances to limit placing
faults in the list. You flag instances as “Nofault” while in Setup mode. For more information,
refer to “Adding Nofault Settings” on page 6-35.
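For example, the first command below targets all faults, while the second (with a hypothetical instance pathname) restricts the added faults to stuck-at-1 faults in one instance:

add faults -all
add faults /core/u_alu -stuck_at 1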
When the tool first generates the fault list, it classifies all faults as uncontrolled (UC).
Related Commands:
Delete Faults - deletes the specified faults from the current fault list.
Report Faults - displays the specified types of faults.
You must enter either a list of object names (pin pathnames or instance names) or use the -All
switch to indicate the pins whose faults you want added to the fault list. You can use the -Stuck_at switch to indicate which stuck faults on the selected pins you want added to the list. If you do not use the -Stuck_at switch, the tool adds both stuck-at-0 and stuck-at-1 faults. FastScan and
FlexTest initially place faults added to a fault list in the undetected-uncontrolled (UC) fault
class.
For FastScan
LOAd FAults filename [-Restore | -Delete | -Delete_Equivalent | -Retain]
For FlexTest
LOAd FAults filename [-Restore | -Delete] [-Column integer]
The applications support external fault files in the 3, 4, or 6 column formats. The only data they
use from the external file is the first column (stuck-at value) and the last column (pin
pathname)—unless you use the -Restore option.
The -Restore option causes the application to retain the fault class (second column of
information) from the external fault list. The -Delete option deletes all faults in the specified file
from the internal faults list. The -DELETE_Equivalent option, in FastScan, deletes from the
internal fault list all faults in the file, as well as all their equivalent faults. The -Column option,
in FlexTest, specifies the column format of the fault file.
Note
In FastScan, the filename specified cannot have fault information lines with comments
appended to the end of the lines or fault information lines greater than five columns. The
tool will not recognize the line properly and will not add the fault on that line to the
fault list.
WRIte FAults filename [-Replace] [-Class class_type] [-Stuck_at {01 | 0 | 1}] [-All |
object_pathname...] [-Hierarchy integer] [-Min_count integer] [-Noeq]
You must specify the name of the file you want to write. For information on the remaining
Write Faults command options, refer to the ATPG Tools Reference Manual.
Because each self-initializing test sequence is independent of prior circuit state, you can split the test set without losing test coverage. Some pattern compaction routines also rely on
the self-initializing properties of sequences. Each self-initialized test sequence is defined as a
test pattern (to be compatible with FastScan).
The Set Self Initialization command allows you to turn this feature on or off. By default, self-
initializing behavior is on.
Note
Only the ASCII pattern format includes this test pattern information.
To set the fault mode, you use the Set Fault Mode command. This command’s usage is as
follows:
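(For example, to report and write only collapsed faults:)

set fault mode collapsed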
Note
The Report Statistics command always reports both uncollapsed and collapsed statistics.
Therefore, the Set Fault Mode command is useful only for the Report Faults and Write
Faults commands.
Note
If you are using FlexTest and you set the possible detection credit to 0, the tool does not place
any faults in the possible-detected category. If faults already exist in these categories, the
tool reclassifies PT faults as UO and PU faults as AU.
Performing ATPG
Obtaining the optimal test set in the least amount of time is a desirable goal. Figure 6-9 outlines
how to most effectively meet this goal.
[Figure 6-9: the recommended ATPG flow. Set up for ATPG, create patterns, and check coverage; if coverage is not good, adjust the ATPG approach and create patterns again; when coverage is good, save the patterns.]
The first step in the process is to perform any special setup you may want for ATPG. This
includes such things as setting limits on the pattern creation process itself. The second step is to
create patterns with default settings (see page 6-56). This is a very fast way to determine how
close you are to your testability goals. You may even obtain the test coverage you desire from
your very first run. However, if your test coverage is not at the required level, you may have to
troubleshoot the reasons for the inadequate coverage and create additional patterns using other
approaches (see page 6-58).
During deterministic pattern generation, the tool allows only the restricted values on the
constrained circuitry. Unlike pin and scan cell constraints, which are only available in Setup
mode, you can define ATPG constraints in any system mode—after design flattening. If you
want to set ATPG constraints prior to performing design rules checking, you must first create a
flattened model of the design using the Flatten Model command.
ATPG constraints are useful when you know something about the way the circuit behaves that
you want the ATPG process to examine. For example, the design may have a portion of
circuitry that behaves like a bus system; that is, only one of various inputs may be on, or
selected, at a time. Using ATPG constraints, combined with a defined ATPG function, you can
specify this information to FastScan or FlexTest. ATPG functions let you place artificial
Boolean relationships on circuitry within your design. After defining the functionality of a
portion of circuitry with an ATPG function, you can then constrain the value of the function as
desired with an ATPG constraint. This can be far more useful than just constraining a point in a
design to a specific value.
FlexTest allows you to specify temporal ATPG functions by using a Delay primitive to delay
the signal for one timeframe. Temporal constraints can be achieved by combining ATPG
constraints with the temporal function options.
To define ATPG functions, use the Add Atpg Functions command. This command’s usage is as
follows:
You can specify ATPG constraints with the Add Atpg Constraints command. This command’s
usage is as follows:
Test generation considers all current constraints. However, design rules checking considers only
static constraints. You can only add or delete static constraints in Setup mode. Design rules
checking does not consider dynamic constraints unless you explicitly use the -ATPGC switch
with the Set Drc Handling command. You can add or delete dynamic constraints at any time
during the session. By default, ATPG constraints are dynamic.
Figure 6-10 and the following commands give an example of how you use ATPG constraints
and functions together.
[Figure 6-10: gates /u1, /u2, /u3, and /u4 driving gate /u5, whose output must be contention-free; only one of the four inputs is 1 at a time.]
The circuitry of Figure 6-10 includes four gates whose outputs are the inputs of a fifth gate.
Assume you know that only one of the four inputs to gate /u5 can be on at a time, such as would
be true of four tri-state enables to a bus gate whose output must be contention-free. You can
specify this using the following commands:
ATPG> add atpg functions sel_func1 select1 /u1/o /u2/o /u3/o /u4/o
ATPG> add atpg constraints 1 sel_func1
These commands specify that the “select1” function applies to gates /u1, /u2, /u3, and /u4 and
the output of the select1 function should always be a 1. Deterministic pattern generation must
ensure these conditions are met. The conditions causing this constraint to be true are shown in
Table 6-1. When this constraint is true, gate /u5 will be contention-free.
Table 6-1. ATPG Constraint Conditions
/u1 /u2 /u3 /u4 sel_func1 /u5
0 0 0 1 1 contention-free
0 0 1 0 1 contention-free
0 1 0 0 1 contention-free
1 0 0 0 1 contention-free
Given the defined function and ATPG constraint you placed on the circuitry, FastScan and
FlexTest only generate patterns using the values shown in Table 6-1.
Typically, if you have defined ATPG constraints, the tools do not perform random pattern
generation during ATPG. However, using FastScan you can force the pattern source to random
(using Set Pattern Source Random). In this situation, FastScan rejects patterns during fault
simulation that do not meet the currently-defined ATPG constraints.
Related Commands:
Analyze Atpg Constraints - analyzes a given constraint for either its ability to be
satisfied or for mutual exclusivity.
Analyze Restrictions - performs an analysis to automatically determine the source of
the problems from a failed ATPG run.
Delete Atpg Constraints - removes the specified constraint from the list.
Delete Atpg Functions - removes the specified function definition from the list.
Report Atpg Constraints - reports all ATPG constraints in the list.
Report Atpg Functions - reports all defined ATPG functions.
FlexTest Only - The last test sequence generated by an ATPG process is truncated to ensure that the total number of test cycles does not exceed the cycle limit.
FastScan uses its clock sequential fault simulator to simulate multiple events in a single cycle.
Figure 6-11 illustrates the possible events.
[Figure 6-11: the events within a single test cycle, showing waveforms for ordinary PIs and clock PIs, the PO measure point, and event markers 1, 2, and 3.]
Event 1 represents a simulation where all clock primary inputs are at their “off” value, other
primary inputs have been forced to values, and state elements are at the values scanned in or
resulting from capture in the previous cycle. When simulating this event, FastScan provides the
capture data for inputs to leading edge triggered flip-flops. The Set Clock_off Simulation
command enables or disables the simulation of this event.
Event 2 corresponds to the default simulation performed by FastScan. It represents a point in the
simulation cycle where the clocks have just been pulsed. State elements have not yet changed,
although all combinational logic, including that connected to clocks, has been updated.
Event 3 corresponds to a time when level-sensitive and leading edge state elements have
updated as a result of the applied clocks. This simulation correctly calculates capture values for
trailing edge and level sensitive state elements, even in the presence of C3 and C4 violations.
The Set Split Capture_cycle command enables or disables the simulation of this event. This
command’s usage is as follows:
All Zhold gates hold their value between events 1 and 2, even if the Zhold is marked as having
clock interaction. All latches maintain state between events 1 to 2 and 2 to 3, although state will
not be held in TLAs between cycles.
If you issue both commands, each cycle of the clock results in up to 3 simulation passes with the
leading and falling edges of the clock simulated separately.
Note
These are not available for RAM sequential simulations. Because clock sequential ATPG
can test the same faults as RAM sequential, this is not a real limitation.
Another advantage of invoking the tool on a flattened netlist rather than on a regular (for instance, Verilog) netlist is that you save memory and have room for more patterns.
Note
Take care, before you save a flattened version of your design, that you have specified all
necessary settings accurately. Some design information, such as that related to hierarchy,
is lost when the design is flattened. Therefore, commands that require this information
will not operate with the flattened netlist. Also, some settings, once incorporated in the
flattened netlist, cannot be changed; a tied constraint you apply to a primary input pin, for
example.
There are two checkpoint commands: Setup Checkpoint, which identifies the time period
between each write of the test patterns and the name of the pattern file to which the tool writes
the patterns, and Set Checkpoint, which turns the checkpoint functionality on or off.
Before turning on the checkpoint functionality, you must first issue the Setup Checkpoint
command. This command’s usage is as follows:
To turn the checkpoint functionality on or off, use the Set Checkpoint command. This
command’s usage is as follows:
Example Checkpointing
Suppose a large design takes several days for FastScan to process. You do not want to restart
pattern creation from the beginning if a system failure ends ATPG one day after it begins. The
following dofile segment defines a checkpoint interval of 90 minutes and enables
checkpointing.
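One plausible form of such a segment (the pattern file name is hypothetical, and the argument order should be checked against the Setup Checkpoint reference page):

// write checkpoint patterns every 90 minutes, then enable checkpointing
setup checkpoint chkpnt_pats.ascii 90
set checkpoint on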
Note
The -Faultlist and -Keep_aborted switches write a fault list in which the aborted faults are
identified, and will save time if you have to resume a run after a system failure.
If you need to perform a continuation run, invoking on a flattened model can be much faster
than reflattening the netlist (see “Using a Flattened Model to Save Time and Memory” on
page 6-53 for more information). After the tool loads the design, but before you continue the
interrupted run, be sure to set all the same constraints you used in the interrupted run. The next
dofile segment uses checkpoint data to resume the interrupted run:
After it executes the above commands, FastScan should be at the same fault grade and number
of patterns as when it last saved checkpoint data during the interrupted run. To complete the
pattern creation process, you can now use the Create Patterns command as described in the next
section.
conditions for various sequential depths. This command displays an estimate of the maximum
test coverage possible at different sequential depth settings.
Patterns generated early on in the pattern set may no longer be necessary because later patterns
also detect the faults detected by these earlier patterns. Thus, you can compress the pattern set
by rerunning fault simulation on the same patterns, first in reverse order and then in random
order, keeping only those patterns necessary for fault detection. This method normally reduces
an uncompressed original test pattern set by 30 to 40 percent with very little effort.
To apply static compression to test patterns, you use the Compress Patterns command. This
command’s usage is as follows:
o The integer option lets you specify how many compression passes the fault
simulator should make. If you do not specify any number, it performs only one
compression pass.
o The -MAx_useless_passes option lets you specify a maximum number of passes
with no pattern elimination before the tool stops compression.
o The -MIn_elim_per_pass option lets you constrain the compression process by
specifying that the tool stop compression when a single pass does not eliminate a
minimum number of patterns.
The -Effort switch specifies the kind of compression strategy the tool will use. The Low option
uses the original reverse and random strategy. The higher the effort level selected, the more
complex the strategy. For more detail, refer to the Compress Patterns command in the ATPG
Tools Reference Manual.
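For example, to allow up to four compression passes but stop after two consecutive passes that eliminate no patterns (the values are hypothetical):

compress patterns 4 -max_useless_passes 2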
Note
The tool only performs pattern compression on independent test blocks; that is, for
patterns generated for combinational or scan designs. Thus, FlexTest first does some
checking of the test set to determine whether it can implement pattern compression.
When trying to establish the cause of low test coverage, you should examine the messages the
tool prints during the deterministic test generation phase. These messages can alert you to what
might be wrong with respect to Redundant (RE) faults, ATPG_untestable (AU) faults, and
aborts. If you do not like the progress of the run, you can terminate the process with CTRL-C.
If a high number of aborted faults (UC or UO) appears to cause the problem, you can set the
abort limit to a higher number, or modify some command defaults to change the way the
application makes decisions. The number of aborted faults is high if reclassifying them as
Detected (DT) or Posdet (PD) would result in a meaningful improvement in test coverage. In
the tool’s coverage calculation (see “Testability Calculations” on page 2-31), these reclassified
faults would increase the numerator of the formula. You can quickly estimate how much
improvement would be possible using the formula and the fault statistics from your ATPG run.
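For example, with hypothetical numbers: if a run reports 100,000 testable faults, 95,000 of them detected (95 percent test coverage), and 500 aborted faults, then reclassifying every aborted fault as detected would raise coverage to at most (95,000 + 500)/100,000, or 95.5 percent. Raising the abort limit is worth the extra run time only if an improvement of that size matters.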
The following subsections discuss several ways to handle aborted faults.
Note
Changing the abort limit is not always a viable solution for a low coverage problem. The
tool cannot detect ATPG_untestable (AU) faults, the most common cause of low test
coverage, even with an increased abort limit. Sometimes you may need to analyze why a
fault, or set of faults, remains undetected to understand what you can do.
Also, if you have defined several ATPG constraints or have specified Set Contention Check On
-Atpg, the tool may not abort because of the fault, but because it cannot satisfy the required
conditions. In either of these cases, you should analyze the buses or ATPG constraints to ensure
the tool can satisfy the specified requirements.
You can also report data from the ATPG run using the Report Testability Data command within
FastScan or FlexTest for a specific category of faults. This command displays information
about connectivity surrounding the problem areas. This information can give you some ideas as
to where the problem might lie, such as with RAM or clock PO circuitry. Refer to the Report
Testability Data command in the ATPG Tools Reference Manual for more information.
For FlexTest, you adjust the abort limits with the Set Abort Limit command; refer to its reference page in the ATPG Tools Reference Manual for the full usage.
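For example, to raise the abort limit (the value shown is illustrative only):

ATPG> set abort limit 100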
The application classifies any faults that remain undetected after reaching the limits as aborted
faults—which it considers undetected faults.
Related Commands:
Report Aborted Faults - displays and identifies the cause of aborted faults.
This facilitates the ATPG process; however, it minimizes random pattern detection. This is not
always desirable, as you typically want generated patterns to randomly detect as many faults as
possible. To maximize random pattern detection, FastScan provides the Set Decision Order
command to allow flexible selection of control inputs and observe outputs during pattern
generation. Refer to the Set Decision Order command in the ATPG Tools Reference Manual for the FastScan and FlexTest usage lines.
Additionally, FastScan and FlexTest support both selective and supplemental IDDQ test
generation. The tool creates a selective IDDQ test set when it selects a set of IDDQ patterns
from a pre-existing set of patterns originally generated for some purpose other than IDDQ test.
The tool creates a supplemental IDDQ test set when it generates an original set of IDDQ
patterns based on the pseudo stuck-at fault model. Before running either the supplemental or
selective IDDQ process, you must first set the fault type to IDDQ with the Set Fault Type
command.
Using FastScan and FlexTest, you can either select or generate IDDQ patterns using several
user-specified checks. These checks can help ensure that the IDDQ test vectors do not increase
IDDQ in the good circuit. The following subsections describe IDDQ pattern selection, test
generation, and user-specified checks in more detail.
By default, FastScan and FlexTest place these statements at the end of patterns (cycles) that can
contain IDDQ measurements. You can manually add these statements to patterns (cycles)
within the external pattern set.
When you want to select patterns from an external set, you must specify which patterns can
contain an IDDQ measurement. If the pattern set contains no IDDQ measure statements, you
can specify that the tools assume the tester can make a measurement at the end of each pattern
or cycle. If the pattern set already contains IDDQ measure statements (if you manually added
these statements), you can specify that simulation should only occur for those patterns that
already contain an IDDQ measure statement, or label. To set this measurement information, use
the Set Iddq Strobes command.
Additionally, you can set up restrictions that the selection process must abide by when choosing
the best IDDQ patterns. “Specifying IDDQ Checks and Constraints” on page 6-66 discusses
these IDDQ restrictions. To specify the IDDQ pattern selection criteria and run the selection process, use the Select Iddq Patterns command.
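For example, the following invocation (taken from the example flows later in this section) selects at most 15 IDDQ measures, each required to detect a minimum of 10 IDDQ faults:

ATPG> select iddq patterns -max_measure 15 -threshold 10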
Note
FlexTest supplies some additional arguments for this command. Refer to Select Iddq
Patterns in the ATPG Tools Reference Manual for details.
1. Invoke FastScan or FlexTest on the design, set up the appropriate parameters for the ATPG
run, pass rules checking, and enter the ATPG mode.
...
SETUP> set system mode atpg
This example assumes you set the fault type to stuck-at, or some fault type other than
IDDQ.
2. Run ATPG.
ATPG> run
7. Assume IDDQ measurements can occur within each pattern or cycle in the external
pattern set.
ATPG> set iddq strobe -all
8. Specify to select the best 15 IDDQ patterns that detect a minimum of 10 IDDQ faults
each.
Note
You could use the Add Iddq Constraints or Set Iddq Checks commands prior to the
ATPG run to place restrictions on the selected patterns.
The generated IDDQ pattern set may contain more patterns than you want for IDDQ testing. At
this point, you just set up the IDDQ pattern selection criteria and run the selection process using
Select Iddq Patterns.
Instead of creating a new fault list, you could load a previously-saved fault list. For
example, you could write the undetected faults from a previous ATPG run and load
them into the current session with Load Faults, using them as the basis for the IDDQ
ATPG run.
4. Run ATPG, generating patterns that target the IDDQ faults in the current fault list.
Note
You could use the Add Iddq Constraints or Set Iddq Checks commands prior to the
ATPG run to place restrictions on the generated patterns.
ATPG> run
5. Select the best 15 IDDQ patterns that detect a minimum of 10 IDDQ faults each.
ATPG> select iddq patterns -max_measure 15 -threshold 10
Note
You did not need to specify which patterns could contain IDDQ measures with Set Iddq
Strobes, as the generated internal pattern source already contains the appropriate measure
statements.
Related Commands:
Delete Iddq Constraints - deletes internal and external pin constraints that apply during IDDQ
measurement.
Report Iddq Constraints - reports internal and external pin constraints that apply during IDDQ
measurement.
SET IDdq Checks [-NONe | {-Bus | -WEakbus | -Int_float | -Pull | -Clock | -WRite | -REad |
-WIre | -WEAKHigh | -WEAKLow | -VOLTGain | -VOLTLoss}…] [-WArning | -ERror]
[-NOAtpg | -ATpg]
By default, neither tool performs IDDQ checks. Both ATPG and fault simulation processes
consider the checks you specify. Refer to the Set Iddq Checks reference page in the ATPG Tools
Reference Manual for details on the various capabilities of this command.
With the Add Iddq Constraints command, you can force a set of internal pins to a specific state
during IDDQ measurement to prevent high IDDQ.
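For illustration only (the pin pathname is hypothetical, and the constraint value format and argument order are assumptions; check the Add Iddq Constraints reference page):

ATPG> add iddq constraints c0 /core/bus_enable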
Note
This command is similar to the Add Atpg Constraints command. However, ATPG
constraints specify pin states for all ATPG generated test cycles, while IDDQ constraints
specify values that pins must have only during IDDQ measurement. You can change both
during ATPG or fault simulation to achieve higher coverage.
(Figure: flow steps; Define Capture Procedures (optional), Analyze Coverage, Create Patterns.)
Your process may be different and it may involve multiple iterations through some of the steps,
based on your design and coverage goals. This section describes these two test types in more
detail and how you create them using FastScan.
Figure 6-13 illustrates the six potential transition faults for a simple AND gate. These are
comprised of slow-to-rise and slow-to-fall transitions for each of the three terminals. Because a
transition delay test checks the speed at which a device can operate, it requires a two cycle test.
First, all the conditions for the test are set. In the figure, A and B are 0 and 1 respectively. Then
a change is launched on A, which should cause a change on Y within a pre-determined time. At
the end of the test time, a circuit response is captured and the value on Y is measured. Y might
not be stuck at 0, but if the value of Y is still 0 when the measurement is taken at the capture
point, the device is considered faulty. The ATPG tool automatically chooses the launch and
capture scan cells.
(Figure 6-13: transition faults on a 2-input AND gate with inputs A and B and output Y, with a timing diagram showing the launch, the predetermined test time, and the measure/capture of Y.)
(Figure: a launch event (force PI or a 0-1 on a scan cell) propagating X-1 and X-0 transitions through AND and NOR gates to a capture event at a PO or scan cell.)
To detect a transition fault, a typical FlexTest or FastScan pattern includes the events in
Figure 6-15.
This is a clock sequential pattern, commonly referred to as a “broadside” pattern. It has basic
timing similar to that shown in Figure 6-16 and is the kind of pattern FastScan attempts to create
by default when the clock-sequential depth (the depth of non-scan sequential elements in the
design) is two or larger. You specify this depth with the Set Pattern Type command’s
-Sequential switch. The default setting of this switch upon invocation is 0, so you would need to
change it to at least 2 to enable the tool to create broadside patterns.
Typically, this type of pattern eases restrictions on scan enable timing because of the relatively
large amount of time between the last shift and the launch. After the last shift, the clock is
pulsed at speed for the launch and capture cycles.
(Figure 6-16: broadside pattern timing; clk and scan_en waveforms with slow shift cycles followed by at-speed launch and capture pulses.)
If it fails to create a broadside pattern, FastScan next attempts to generate a pattern that includes
the events shown in Figure 6-17.
In this type of pattern, commonly referred to as a “launch off last shift” or just “launch off shift”
pattern, the transition occurs because of the last shift in the load scan chains procedure (event
#2) or the forcing of the primary inputs (event #3). Figure 6-18 shows the basic timing for a
launch that is triggered by the last shift.
(Figure 6-18: launch off shift timing; clk and scan_en waveforms in which the launch comes from the last shift pulse and scan_en must switch to capture mode at speed.)
This type of pattern requires the scan enable signal for mux-scan designs to transition from shift
to capture mode at speed. Therefore, the scan enable must be globally routed and timed similar
to a clock. If your design cannot support this requirement, you can direct FastScan not to create
launch off shift patterns by including the -No_shift_launch switch when specifying transition
faults with the Set Fault Type command, as shown in the example commands below.
Random pattern generation in FastScan always tries to produce launch off shift patterns. To
avoid this, use “set random atpg off” in addition to the “set fault type transition
-no_shift_launch” command. Again, mux-scan architectures are a good example of where this
might be desirable.
The following are example commands you could use at the command line or in a dofile to
generate broadside transition patterns:
SETUP> add pin constraint scan_en c0 //force for launch & capture.
SETUP> set output masks on //do not observe primary outputs.
SETUP> set transition holdpi on //freeze primary input values.
SETUP> add nofaults <x, y, z> //ignore non-functional logic like
... // boundary scan.
ATPG> set fault type transition -no_shift_launch //prohibit launch off last shift.
ATPG> set pattern type -sequential 2 //sequential depth depends on design.
ATPG> create patterns
To create transition patterns that launch off the last shift, use a sequence of commands similar to
this:
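A minimal sketch (setup commands, output masking, and sequential depth depend on your design):

ATPG> set fault type transition //omitting -no_shift_launch permits launch off shift
ATPG> create patterns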
Related Commands:
Set Abort Limit - specifies the abort limit for the test pattern generator.
Set Fault Type - specifies the fault model for which the tool develops or selects ATPG
patterns.
Set Pattern Type - specifies the type of test patterns the ATPG simulation run uses.
4. Enter Atpg system mode. This triggers the tool’s automatic design flattening and rules
checking processes.
SETUP> set system mode atpg
Within the test procedure file, timeplates are the mechanism used to define tester cycles and
specify where all event edges are placed in each cycle. As shown conceptually in Figure 6-16
for broadside testing, slow cycles are used for shifting (load and unload cycles) and fast cycles
for the launch and capture. Figure 6-19 shows the same diagram with example timing added.
(Figure 6-19: broadside timing with example values; clk and scan_en waveforms with 400 nanosecond shift cycles and 40 nanosecond launch and capture cycles.)
This diagram now shows 400 nanosecond periods for the slow shift cycles defined in a
timeplate called tp_slow and 40 nanosecond periods for the fast launch and capture cycles
defined in a timeplate called tp_fast.
The following are example timeplates and procedures that would provide the timing shown in
Figure 6-19. For brevity, these excerpts do not comprise a complete test procedure. Normally,
there would be other procedures as well, like setup procedures.
timeplate tp_slow =
force_pi 0;
measure_po 100;
pulse clk 200 100;
period 400;
end;

timeplate tp_fast =
force_pi 0;
measure_po 10;
pulse clk 20 10;
period 40;
end;
In this example, there are 40 nanoseconds between the launch and capture clocks. If you want to
create this same timing between launch and capture events, but all your clock cycles have the
same period, you can skew the clock pulses within their cycle periods—if your tester can
provide this capability. Figure 6-20 shows how this skewed timing might look.
(Figure 6-20: skewed clock timing; clk and scan_en waveforms with the launch pulse placed late in its cycle and the capture pulse early in the next cycle.)
The following timeplate and procedure excerpts show how skewed launch off shift pattern
events might be managed by timeplate definitions called tp_late and tp_early, in a test
procedure file:
Note
For brevity, these excerpts do not comprise a complete test procedure. The shift
procedure is not shown and normally there would be other procedures as well, like setup
procedures.
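The following is only a sketch of what the timeplates might look like; all timing values are illustrative assumptions chosen so that the launch pulse falls late in the shift and load_unload cycles (tp_late) and the capture pulse falls early in the capture cycle (tp_early), 40 nanoseconds apart, with equal 400 nanosecond periods:

timeplate tp_late =
force_pi 0;
measure_po 100;
pulse clk 380 10;
period 400;
end;

timeplate tp_early =
force_pi 0;
measure_po 10;
pulse clk 20 10;
period 400;
end;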
By moving the clock pulse later in the period for the load_unload and shift cycles and earlier in
the period for the capture cycle, the 40 nanosecond time period between the launch and capture
clocks is achieved.
(Figure: a launch event (force PI or scan cell) propagating through AND and NOR gates to a capture event at a PO or scan cell, with example 1-1, 1-0, and 0-0 values on the gating inputs.)
Path delay patterns are a variant of clock-sequential patterns. A typical FastScan pattern to
detect a path delay fault includes the following events:
The additional force_pi/pulse_clock cycles may occur before or after the launch or capture
events. The cycles depend on the sequential depth required to set the launch conditions or
sensitize the captured value to an observe point.
Note
Path delay testing often requires greater depth than for stuck-at fault testing. The
sequential depths that FastScan calculates and reports are the minimums for stuck-at
testing.
To get maximum benefit from path delay testing, the launch and capture events must have
accurate timing. The timing for all other events is not critical.
FastScan detects a path delay fault with either a robust test, a transition test, or a functional test.
If you save a path delay pattern in ASCII format, the tool includes comments in the file that
indicate which of these three types of detection the pattern uses. Robust detection occurs when
the gating inputs used to sensitize the path are stable from the time of the launch event to the
time of the capture event. Robust detection keeps the gating of the path constant during fault
detection and thus, does not affect the path timing. Because it avoids any possible reconvergent
timing effects, it is the most desirable type of detection and for that reason is the approach
FastScan tries first. However, FastScan cannot use robust detection on many paths because of
its restrictive nature and if it is unable to create a robust test, it will automatically try to create a
non-robust test. The application places faults detected by robust detection in the DR
(det_robust) fault class.
Figure 6-22 gives an example of robust detection for a rising-edge transition within a simple
path. Notice that, due to the circuitry, the gating value at the second OR gate was able to retain
the proper value for detection during the entire time from launch to capture events.
(Figure 6-22: robust detection of a rising-edge transition, showing initial-state and after-transition values along the path from launch point to capture point; the gating value on the OR gate remains constant during the transition.)
Transition detection does not require constant values on the gating inputs used to sensitize the
path. It only requires the proper gating values at the time of the capture event. FastScan places
faults detected by transition detection in the DS (det_simulation) fault class.
Figure 6-23 gives an example of transition detection for a rising-edge transition within a simple
path.
(Figure 6-23: transition detection of a rising-edge transition, showing initial-state and after-transition values along the path; the gating value on the OR gate changes during the transition.)
Notice that, due to the circuitry, the gating value on the OR gate changed during the 0 to 1
transition placed at the launch point. Thus, the proper gating value was present at the OR gate
only at the time of the capture event.
Functional detection further relaxes the requirements on the gating inputs used to sensitize the
path. The gating of the path does not have to be stable as in robust detection, nor does it have to
be sensitizing at the capture event, as required by transition detection. Functional detection
requires only that the gating inputs not block propagation of a transition along the path.
FastScan places faults detected by functional detection in the det_functional (DF) fault class.
Figure 6-24 gives an example of functional detection for a rising-edge transition within a simple
path. Notice that, due to the circuitry, the gating (off-path) value on the OR gate is neither
stable, nor sensitizing at the time of the capture event. However, the path input transition still
propagates to the path output.
(Figure 6-24: functional detection of a rising-edge transition, showing initial-state and after-transition values along the path; the gating value changes during the transition and is not sensitizing at the capture event, yet the transition still propagates to the path output.)
Related Commands:
Add Ambiguous Paths - specifies the number of paths FastScan should select when
encountering an ambiguous path.
Analyze Fault - analyzes a fault, including path delay faults, to determine why it was
not detected.
Delete Paths - deletes paths from the internal path list.
Load Paths - loads in a file of path definitions from an external file.
Report Paths - reports information on paths in the path list.
Report Statistics - displays simulation statistics, including the number of detected
faults in each fault class.
Set Pathdelay Holdpi - sets whether non-clock primary inputs can change after the first
pattern force, during ATPG.
Write Paths - writes information on paths in the path list to an external file.
(Figure: a defined path through a 2-input AND gate whose off-path input is driven by a scan cell; the on-path 0-1 transition propagates from the launch event (force PI or scan cell) to a PO or scan cell capture point (measure PO), while the off-path input is 1-1 (the tool's preference) or 0-1 (your preference) and also feeds other circuit elements requiring a 0-1 transition.)
A defined path includes a 2-input AND gate with one input on the path, the other
connected to the output of a scan cell. For a robust test, the AND gate’s off-path or
gating input needs a constant 1. The tool, in exercising its preference for a robust test,
would try to create a pattern that achieved this. Suppose however that you wanted the
circuit elements fed by the scan cell to receive a 0-1 transition. You could add a
transition_condition statement to the path definition, specifying a rising transition for
the scan cell. The path capture point maintains a 0-1 transition, so remains testable with
a non-robust test, and you also get the desired transition for the other circuit elements.
• Pin - A required statement that identifies a pin in the path by its full pin pathname. Pin
statements in a path must be ordered from launch point to capture point. A “+” or “-”
after the pin pathname indicates the inversion of the pin with respect to the launch point.
A “+” indicates no inversion, while a “-” indicates inversion.
You must specify a minimum of two pin statements, the first being a valid launch point
(primary input or data output of a state element) and the last being a valid capture point
(primary output or data or clk input of a state element). The current pin must have a
combinational connectivity path to the previous pin and the edge parity must be
consistent with the path circuitry. If a statement violates either of these conditions, the
tool issues an error. If the path has edge or path ambiguity, it issues a warning.
Paths can include state elements (through data or clock inputs), but you must explicitly
name the data or clock pins in the path. If you do not, FastScan does not recognize the
path and issues a corresponding message.
• End - A required statement that signals the completion of data for the current path.
Optionally, following the end statement, you can specify the name of the path. However,
if the name does not match the pathname specified with the path statement, the tool
issues an error.
The following shows the path definition syntax:
PATH <pathname> =
CONDition <pin_pathname> <0|1|Z>;
PIN <pin_pathname> <+|->;
PIN <pin_pathname> <+|->;
...
END [<pathname>];
PATH "path0" =
PIN /I$6/Q + ;
PIN /I$35/B0 + ;
PIN /I$35/C0 + ;
PIN /I$1/I$650/IN + ;
PIN /I$1/I$650/OUT - ;
PIN /I$1/I$951/I$1/IN - ;
PIN /I$1/I$951/I$1/OUT + ;
PIN /A_EQ_B + ;
END ;
PATH "path1" =
PIN /I$6/Q + ;
PIN /I$35/B0 + ;
PIN /I$35/C0 + ;
PIN /I$1/I$650/IN + ;
PIN /I$1/I$650/OUT - ;
PIN /I$1/I$684/I1 - ;
PIN /I$1/I$684/OUT - ;
PIN /I$5/D - ;
END ;
PATH "path2" =
PIN /I$5/Q + ;
PIN /I$35/B1 + ;
PIN /I$35/C1 + ;
PIN /I$1/I$649/IN + ;
PIN /I$1/I$649/OUT - ;
PIN /I$1/I$622/I2 - ;
PIN /I$1/I$622/OUT - ;
PIN /A_EQ_B + ;
END ;
PATH "path3" =
PIN /I$5/QB + ;
PIN /I$6/TI + ;
END ;
You use the Load Paths command to read in the path definition file. The tool loads the paths
from this file into an internal path list. You can add to this list by adding paths to a new file and
re-issuing the Load Paths command with the new filename.
(Figure: path ambiguity example; two paths exist between the defined points, one through Gate3 and Gate5 and one through Gate4.)
In this example, the defined points are an input of Gate2 and an input of Gate7. Two paths exist
between these points, thus creating path ambiguity. When FastScan encounters this situation, it
issues a warning message and selects a path, typically the first fanout of the ambiguity. If you
want FastScan to select more than one path, you can specify this with the Add Ambiguous Paths
command.
During path checking, FastScan can also encounter edge ambiguity. Edge ambiguity occurs
when a gate along the path has the ability to either keep or invert the path edge, depending on
the value of another input of the gate. Figure 6-27 shows a path with edge ambiguity due to the
XOR gate in the path.
(Figure 6-27: a path containing an XOR gate; the edge at the XOR output is ambiguous (0/1) because it depends on the value at the gate's other input.)
The XOR gate in this path can act as an inverter or buffer of the input path edge, depending on
the value at its other input. Thus, the edge at the output of the XOR is ambiguous. The path
definition file lets you indicate edge relationships of the defined points in the path. You do this
by specifying a “+” or “-” for each defined point, as was previously described in “The Path
Definition File” on page 6-81.
2. Constrain the scan enable pin to its inactive state. For example:
SETUP> add pin constraint scan_en c0
6. Enter Atpg system mode. This triggers the tool’s automatic design flattening and rules
checking processes.
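SETUP> set system mode atpg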
7. Set the fault type to path delay:
ATPG> set fault type path_delay
8. Write a path definition file with all the paths you want to test. “The Path Definition File”
on page 6-81 describes this file in detail. If you want, you can do this prior to the
session. You can only add faults based on the paths defined in this file.
9. Load the path definition file (assumed for the purpose of illustration to be named
path_file_1):
ATPG> load path path_file_1
10. Specify any ambiguous paths you want the tool to add to its internal path list. The
following example specifies to add all ambiguous paths up to a maximum of 4.
ATPG> add ambiguous paths -all -max_paths 4
11. Define faults for the paths in the tool’s internal path list:
ATPG> add faults -all
This adds a rising edge and falling edge fault to the tool’s path delay fault list for each
defined path.
12. Perform an analysis on the specified paths and delete those the analysis proves are false:
ATPG> delete paths -false_paths
FastScan can generate patterns that use customized clock waveforms, provided you describe
each allowable waveform with a named capture procedure in the test procedure file. The tool
can use named capture procedures for stuck-at, path delay, and broadside transition patterns, but
not launch off shift transition patterns. You can also have multiple named capture procedures
within one test procedure file, in addition to the default capture procedure the file typically
contains. Each named capture procedure must reflect clock behavior the clocking circuitry is
actually capable of producing. FastScan assumes you have expert design knowledge when you
use a named capture procedure to define a waveform and does not verify that the clocking
circuitry is capable of delivering the waveform to the defined internal pins.
When the test procedure file contains named capture procedures, FastScan ATPG only
generates patterns that conform to the waveforms described by those procedures. Alternatively,
you can use the Set Capture Procedure command to specify a subset of the named capture
procedures and the tool will use only that subset. You might want to exclude, for example,
named capture procedures that are unable to detect certain types of faults during test generation.
Refer to the Set Capture Procedure reference page in the ATPG Tools Reference Manual for this command’s usage line.
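As an illustration only (the on/off form and the procedure name shown are assumptions based on the examples later in this chapter):

ATPG> set capture procedure on launch_c1_cap_c2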
Note
If a DRC error prevents use of a capture procedure, the run will abort.
A common application is a design with on-chip PLL clock generating circuitry, as illustrated in Figure 6-28. In addition, an example timing diagram
for this circuit is shown in Figure 6-29. In this situation, there are only certain clock waveforms
a PLL can generate and there needs to be a mechanism to specify the allowed set of clock
waveforms to the ATPG tool. In this case, if there are multiple named capture procedures, the
ATPG engine will use these instead of assuming the default capture behavior.
(Figure 6-28: an integrated circuit in which a PLL and control logic, driven by system_clk, begin_ac, and scan_en, generate the internal clocks clk1 and clk2 for the design core; scan_clk1 and scan_clk2 are also shown.)
(Figure 6-29: example timing for system_clk, scan_en, begin_ac, scan_clk1, scan_clk2, clk1, and clk2.)
The internal mode is used to describe what happens on the internal side of the on-chip PLL
control logic, while the external mode is used to describe what happens on the external side of
the on-chip PLL. Figure 6-28 shows how this might look. The internal mode uses the internal
clocks (/pll/clk1 & /pll/clk2) and signals while the external mode uses the external clocks
(system_clk) and signals (begin_ac & scan_en). If any external clocks or signals go to both the
PLL and to other internal chip circuitry (scan_en), their behavior needs to be specified in both
modes and needs to match, as shown in the following example (timing is from Figure 6-29):
timeplate tp_cap_clk_slow =
force_pi 0;
pulse /pll/clk1 20 20;
pulse /pll/clk2 40 20;
period 80;
end;
timeplate tp_cap_clk_fast =
force_pi 0;
pulse /pll/clk1 10 10;
pulse /pll/clk2 20 10;
period 40;
end;
timeplate tp_ext =
force_pi 0;
measure_po 10;
force begin_ac 60;
pulse system_clk 0 60;
period 120;
end;
mode internal =
cycle slow =
timeplate tp_cap_clk_slow;
force scan_en 0;
force_pi;
force /pll/clk1 0;
force /pll/clk2 0;
pulse /pll/clk1;
end;
// launch cycle
cycle =
timeplate tp_cap_clk_fast;
pulse /pll/clk2;
end;
// capture cycle
cycle =
timeplate tp_cap_clk_fast;
pulse /pll/clk1;
end;
cycle slow =
timeplate tp_cap_clk_slow;
pulse /pll/clk2;
end;
end;
mode external =
timeplate tp_ext;
cycle =
force scan_en 0;
force_pi;
force begin_ac 1;
pulse system_clk;
end;
cycle =
force begin_ac 0;
pulse system_clk;
end;
end;
end;
The number of cycles used and the timeplates used can be different between the two modes, as
long as the total period of both modes is the same. Signal events you use in both internal and
external modes must happen at the same time. These events are usually things like force_pi,
measure_po, and other signal forces, but also include clocks that can be used in both modes.
Other requirements include:
• If used, a measure_po statement can only appear in the last cycle of the external or
internal mode.
• If no measure_po statement is used, the tool issues a warning that the primary outputs
will not be observed.
• The external mode cannot pulse any internal clocks or force any internal control signals.
• A force_pi statement needs to exist in the first cycle of both modes and occur before the
first pulse of a clock.
• If an external clock goes to the PLL and to other internal circuitry, the tool will issue a
C2 DRC violation.
DRC rules W20 (Timing Rule #20) through W31 (Timing Rule #31) are specifically for
checking named capture procedures. You can find reference information on each of these rules
in Chapter 2 of the Design-for-Test Common Resources Manual.
The pulse_capture_clock statement is not used in the named capture procedure; instead, the
specific clocks used are explicitly pulsed by name. In addition to the other statements supported
by the default capture procedure, the condition statement is allowed at the beginning of the
named capture procedure to specify what internal conditions need to be met at certain scan cells
in order to enable this clock sequence. Also, a new observe_method statement allows a specific
observe method to be defined for each named capture procedure.
Finally, an optional “slow” or “load” type can be added to the cycle definition. A slow cycle is
one that cannot be used for at-speed launch or capture. This is important for accurate fault
coverage simulation numbers. A load cycle is one that can have an extra scan load, and can be
used for at-speed launch, but not capture. For additional information on the slow and load types,
refer to the “Named Capture Procedure” section of the Design-for-Test Common Resources
Manual.
DRC takes all of the allowed waveforms into consideration during state stability analysis. This
reduces the pessimism of DRC, and enables sequential ATPG to be used on designs where scan
is controlled by a JTAG test access port (boundary scan). DRC analysis is responsible for
breaking each test procedure into a sequence of cycles that map onto ATPG’s natural event
order (force pi, measure po, pulse capture clock).
Note
The tool does not currently support use of both named capture procedures and clock
procedures in a single ATPG session.
Random pattern ATPG will cycle through all of the capture procedures defined unless the user
issues the Set Capture Procedure command to specify a certain procedure(s).
(Figure: example cycle timing; a cycle that pulses both clk1 and clk2 is expanded by the tool into two cycles internally for simulation.)
For example, when setting up to create patterns for the example circuit shown in Figure 6-28,
you would issue this command to define the internal clocks:
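For example (assuming an off-state of 0 for both PLL clocks):

SETUP> add clocks 0 /pll/clk1 /pll/clk2 -internal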
The two PLL clocks would then be available to the tool’s ATPG engine for pattern creation.
When you define internal clocks or signals with the Add Clocks -Internal or Add Primary Inputs -Internal command, the tool uses those
internal clocks and signals for pattern creation and simulation. To save the patterns using the
same internal clocks and signals, you must use the -Mode_internal switch with the Save
Patterns command. The -Mode_internal switch is the default when saving patterns in ASCII or
binary format.
Note
The -Mode_internal switch is also necessary if you want patterns to include internal pin
events specified in scan procedures (test_setup, shift, load_unload).
To obtain pattern sets that can run on a tester, you need to save patterns that contain only the
true primary inputs to the chip. These are the clocks and signals used in the external mode of
any named capture procedures, not the internal mode. To accomplish this, you must use the
-Mode_external switch with the Save Patterns command. This switch directs the tool to map the
information contained in the internal mode blocks back to the external signals and clocks that
comprise the I/O of the chip. The -Mode_external switch is the default when saving patterns in
a tester format such as WGL.
Note
The -Mode_external switch ignores internal pin events in scan procedures (test_setup,
shift, load_unload).
Mux-DFF Example
In a full scan design, the vast majority of transition faults are between scan cells (or cell to cell)
in the design. There are also some faults between the PIs and cells and between cells and the POs. Targeting
these latter faults can be more complicated, mostly because running these test patterns on the
tester can be challenging. For example, the tester performance or timing resolution at regular
I/O pins may not be as good as that for clock pins. This section shows a mux-DFF type scan
design example and covers some of the issues regarding creating transition patterns for the
faults in these three areas.
Figure 6-32 shows a conceptual model of an example chip design. There are two clocks in this
mux-DFF design, which increases the possible number of launch and capture combinations in
creating transition patterns. For example, depending on how the design is actually put together,
there might be faults that require these launch and capture combinations: C1-C1, C2-C2, C1-
C2, and C2-C1. The clocks may be either external or created by on-chip clock generator circuitry or a PLL.
“Timing for Transition Delay Tests” on page 6-73 shows the basic waveforms and partial test
procedure files for creating broadside and launch off shift transition patterns. For this example,
named capture procedures are used to specify the timing and sequence of events. The example
focuses on broadside patterns and shows only some of the possible named capture procedures
that might be used in this kind of design.
(Figure 6-32: conceptual mux-DFF design; scan chains and logic between the PIs and POs, clocked by C1 and C2.)
A timing diagram for cell to cell broadside transition faults that are launched by clock C1 and
captured by clock C2 is shown in Figure 6-33.
(Figure 6-33: broadside timing for cell-to-cell faults; scan_en, scan_clk, C1, and C2 waveforms with a 120 ns shift cycle, an 80 ns launch cycle (C1), a 40 ns capture cycle (C2), and a 120 ns load/unload cycle.)
Following is the capture procedure for a matching test procedure file that uses a named capture
procedure to accomplish the clocking sequence. Other clocking combinations would be handled
with additional named capture procedures that pulse the clocks in the correct sequences.
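A sketch of what this procedure might look like (the cycle contents are assumptions based on the timing in Figure 6-33; tp3 and tp2 are the longer launch and shorter capture timeplates referred to below):

procedure capture launch_c1_cap_c2 =
cycle =
timeplate tp3;
force scan_en 0;
force_pi;
pulse C1; // launch
end;
cycle =
timeplate tp2;
pulse C2; // capture
end;
end;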
Be aware that this is just one example and your implementation may vary depending on your
design and tester. For example, if your design can turn off scan_en quickly and have it settle
before the launch clock is pulsed, you may be able to shorten the launch cycle to use a shorter
period; that is, the first cycle in the launch_c1_cap_c2 capture procedure could be switched
from using timeplate tp3 to using timeplate tp2.
Another way to make sure scan enable is turned off well before the launch clock is to add a
cycle to the load_unload procedure right after the “apply shift” line. This cycle would only need
to include the statement, “force scan_en 0;”.
Notice that the launch and capture clocks shown in Figure 6-33 pulse in adjacent cycles. The
tool can also use clocks that pulse in non-adjacent cycles, as shown in Figure 6-34 if the
intervening cycles are at-speed cycles.
(Figure 6-34: launch and capture clocks pulsed in non-adjacent cycles; a 120 ns shift cycle, three 40 ns at-speed cycles allowed by “launch_capture_pair c3 c2” in which C3 launches and C2 captures with a default cycle between them, and a 120 ns load/unload cycle.)
To define a pair of nonadjacent clocks the tool can use for launch and capture, include a
“launch_capture_pair” statement at the beginning of the named capture procedure. Multiple
“launch_capture_pair” statements are permitted, and the tool will then choose one to use for a
given fault. Without this statement, the tool defaults to using adjacent clocks only. For
additional information about the use of the “launch_capture_pair” statement, refer to the
“Named Capture Procedure” section of the Design-for-Test Common Resources Manual.
If you want to try to create transition patterns for faults between the scan cells and the primary
outputs, make sure your tester can accurately measure the PO pins with adequate resolution. In
this scenario, the timing looks similar to that shown in Figure 6-33 except that there is no
capture clock. Figure 6-35 shows the timing diagram for these cell to PO patterns.
(Figure 6-35: timing for cell-to-PO faults; scan_en, scan_clk, C1, and PO waveforms with a 120 ns shift cycle, an 80 ns launch cycle (C1), a 40 ns PO measure cycle, and a 120 ns load/unload cycle.)
Note
You will need a separate named capture procedure for each clock in the design that can
cause a launch event.
What you specify in named capture procedures is what you get. As you can see in the two
preceding named capture procedures (launch_c1_cap_c2 and launch_c1_meas_PO), both
procedures used two cycles, with timeplate tp3 followed by timeplate tp2. The difference is that
in the first case (cell to cell), the second cycle only performed a pulse of C2 while in the second
case (cell to PO), the second cycle performed a measure_po. The key point to remember is that
even though both cycles used the same timeplate, they only used a subset of what was specified
in the timeplate.
To create effective transition patterns for faults between the PI and scan cells, you also may
have restrictions due to tester performance and tolerance. One way to create these patterns can
be found in the example timing diagram in Figure 6-36. The corresponding named capture
procedure is shown after the figure.
(Figure 6-36: timing for PI-to-cell faults; scan_en, scan_clk, C2, and PI waveforms with a 120 ns shift cycle, a 40 ns setup cycle that forces the initial PI values, an 80 ns launch-and-capture cycle (C2), and a 120 ns load/unload cycle.)
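A sketch of such a procedure (the procedure name and cycle contents are assumptions based on Figure 6-36):

procedure capture setup_pi_cap_c2 =
cycle =
timeplate tp2;
force_pi; // set up the initial PI values
end;
cycle =
timeplate tp3;
force_pi; // launch: change the PI values
pulse C2; // capture into the scan cells
end;
end;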
As before, you would need other named capture procedures for capturing with other clocks in
the design. This example shows the very basic PI to cell situation where you first set up the
initial PI values with a force, then in the next cycle force changed values on the PI and quickly
capture them into the scan cells with a capture clock.
Note
You do not need to perform at-speed testing for all possible faults in the design. You can
eliminate testing things like the boundary scan logic, the memory BIST, and the scan
shift path by using the Add Nofaults command in FastScan or TestKompress.
(Figure 6-37: combined fault model flow; starting from the netlist and a critical path list, create path delay patterns and grade them for transition coverage, create additional transition patterns and grade the enlarged set for stuck-at coverage, then top up with additional stuck-at patterns.)
1. Create path delay patterns for your critical path(s) and save them to a file. Fault grade
these patterns for transition fault coverage.
2. Create additional transition patterns for any remaining transition faults and add these
patterns to the original pattern set. Fault grade the enlarged pattern set for stuck-at fault
coverage.
3. Create additional stuck-at patterns for any remaining stuck-at faults and add them to the
pattern set.
The following example dofile shows one way to implement the flow illustrated in Figure 6-37.
//---------------------------------------------------------------
// Example dofile to create patterns using multiple fault models
//---------------------------------------------------------------
// Place setup commands for defining clocks, scan chains,
// constraints, etc. here.
create patterns
report statistics
run
report statistics
//---------------------------------------------------------------
create patterns
report statistics
order patterns 3 // optimize the pattern set
// Save original path delay patterns and add’l transition patterns.
save patterns pathdelay_trans_pat.ascii -ascii -replace
//---------------------------------------------------------------
run
report statistics
//---------------------------------------------------------------
You must define the tck signal as a clock because it captures data. There is one scan group,
group1, which uses the proc_fscan test procedure file (see page 6-102). There is one scan chain,
chain1, that belongs to the scan group. The input and output of the scan chain are tdi and tdo,
respectively.
The listed pin constraints only constrain the signals to the specified values during ATPG—not
during the test procedures. Thus, the tool constrains tms to a 0 during ATPG (for proper pattern
generation), but not within the test procedures, where the signal transitions the TAP controller
state machine for testing. The basic scan testing process is:
The Set Capture Clock TCK -ATPG command defines tck as the capture clock and specifies that
the capture clock must be used in every pattern (otherwise, FastScan can create patterns in which
the capture clock never gets pulsed). This ensures that the Capture-DR state properly transitions
to the Shift-DR state.
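For example, the dofile would include a line like this:

set capture clock TCK -atpg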
(Figure: TAP controller state diagram showing the Test-Logic-Reset, Run-Test/Idle, Select-DR-Scan, Capture-DR, Shift-DR, Exit1-DR, Pause-DR, Exit2-DR, and Update-DR states, the corresponding instruction register (-IR) states, and the TMS values (0/1) that drive each transition; the data register column covers the internal scan and boundary scan registers.)
The TMS signal controls the state transitions. The rising edge of the TCK clock captures the
TAP controller inputs. You may find this diagram useful when writing your own test procedure
file or trying to understand the example test procedure file that the next subsection shows.
timeplate tp0 =
force_pi 100;
measure_po 200;
pulse TCK 300 100;
period 500;
end;
procedure test_setup =
timeplate tp0;
cycle =
force TMS 1;
force TDI 0;
force TRST 0;
pulse TCK;
end;
cycle =
force TMS 0;
force TRST 1;
pulse TCK;
end;
cycle =
force TMS 1;
pulse TCK;
end;
cycle =
force TMS 1;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
cycle =
force TMS 1;
force TDI 1;
pulse TCK;
end;
cycle =
force TMS 1;
force TDI 1;
pulse TCK;
end;
cycle =
force TMS 1;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
cycle =
force TMS 0;
force TEST_MODE 1;
force RESETN 1;
pulse TCK;
end;
end;
procedure shift =
scan_group grp1;
timeplate tp0;
cycle =
force_sci;
measure_sco;
pulse TCK;
end;
end;
procedure load_unload =
scan_group grp1;
timeplate tp0;
cycle =
force TMS 0;
force CLK 0;
end;
apply shift 77;
cycle =
force TMS 1;
end;
apply shift 1;
cycle =
force TMS 1;
pulse TCK;
end;
cycle =
force TMS 1;
pulse TCK;
end;
cycle =
force TMS 0;
pulse TCK;
end;
end;
Upon completion of the test_setup procedure, the TAP controller is in the Shift-DR state in
preparation for loading the scan chain(s). It is then placed back into the Shift-DR state for the
next scan cycle. This is achieved by the following:
• The items that result in the correct behavior are the pin constraint on tms of C1 and the
fact that the capture clock has been specified as TCK.
• At the end of the load_unload procedure, FastScan asserts the pin constraint on TMS,
which forces tms to 0.
• The capture clock (TCK) occurs for the cycle and this results in the tap controller
cycling from the run-test-idle to the Select-DR-Scan state.
• The load_unload procedure is again applied. This will start the next load/unloading the
scan chain.
The first procedure in the test procedure file is test_setup. This procedure begins by resetting
the test circuitry by forcing trstz to 0. The next set of actions moves the state machine to the
Shift-IR state to load the instruction register with the internal scan instruction code (1000) for
the MULT_SCAN instruction. This is accomplished by shifting in 3 bits of data (tdi=0 for three
cycles) with tms=0, and the 4th bit (tdi=1 for one cycle) when tms=1 (at the transition to the
Exit1-IR state). The next move is to sequence the TAP to the Shift-DR state to prepare for
internal scan testing.
The second procedure in the test procedure file is shift. This procedure forces the scan inputs,
measures the scan outputs, and pulses the clock. Because the output data transitions on the
falling edge of tck, the measure_sco command at time 0 occurs as tck is falling. The result is a
rules violation unless you increase the period of the shift procedure so tck has adequate time to
transition to 0 before repeating the shift. The load_unload procedure, which is next in the file,
calls the shift procedure.
Given information on the instruction set of a design, FlexTest randomly combines these
instructions and determines the best data values to generate a high test coverage functional
pattern set. You enable this functionality with the Set Instruction Atpg command; refer to its
reference page in the ATPG Tools Reference Manual for the usage.
For example, Table 6-2 shows the pin value requirements for an ADD instruction which
completes in three test cycles.
Note
An N value indicates the pin may take on a new value, while an H indicates the pin must
hold its current value.
As Table 6-2 indicates, the value 1010 on pins Ctrl1, Ctrl2, Ctrl3, and Ctrl4 defines the ADD
instruction. Thus, a vector to test the functionality of the ADD instruction must contain this
value on the control pins. However, the tool does not constrain the data pin values to any
particular values. That is, FlexTest can test the ADD instruction with many different data
values. Given the constraints on the control pins, FlexTest generates patterns for the data pin
values, fault simulates the patterns, and keeps those that achieve the highest fault detection.
• The file consists of three sections, each defining a specific type of information: control
inputs, data inputs, and instructions.
• You define control pins, with one pin name per line, following the “Control Input:”
keyword.
• You define data pins, with one pin name per line, following the “Data Input:” keyword.
• You define instructions, with all pin values for one test cycle per line, following the
“Instruction” keyword. The pin values for the defined instructions must abide by the
following rules:
o You must use the same order as defined in the “Control Input:” and “Data Input:”
sections.
o You can use values 0 (logic 0), 1 (logic 1), X (unknown), Z (high impedance), N
(new binary value, 0 or 1, allowed), and H (hold previous value) in the pin value
definitions.
o You cannot use N or Z values for control pin values.
o You cannot use H in the first test cycle.
• You define the time of the output strobe by placing the keyword “STROBE” after the
pin definitions for the test cycle at the end of which the strobe occurs.
• You use “/” as the last character of a line to break long lines.
• You place comments after a “//” at any place within a line.
• All characters in the file, including keywords, are case insensitive.
During test generation, FlexTest determines the pin values most appropriate to achieve high test
coverage. It does so for each pin that is not a control pin, or a constrained data pin, given the
information you define in the instruction file.
Figure 6-39 shows an example instruction file for the ADD instruction defined in Table 6-2 on
page 6-107, as well as a subtraction (SUB) and multiplication (MULT) instruction.
Control Input::
Ctrl1
Ctrl2
Ctrl3
Ctrl4
Data Input::
Data1
Data2
Data3
Data4
Data5
Data6
Instruction: ADD
1010NNNNNN //start of 3 test cycle ADD Instruction
HHHHHHHHHH
HHHHHHHHHH
STROBE //strobe after last test cycle
Instruction: SUB
1101NNNNNN //start of 3 test cycle SUB Instruction
HHHHHHHHHH
HHHHHHHHHH
STROBE //strobe after last test cycle
Instruction: MULT
1110NNNNNN //start of 6 test cycle MULT Instruction
HHHHHHHHHH
1001NNNNNN //next part of MULT Instruction
HHHHHHHHHH
0101HHHHHH //last part of MULT, hold values
STROBE //strobe after 5th test cycle
HHHHHHHHHH
This instruction file defines four control pins, six data pins, and three instructions: ADD, SUB,
and MULT. The ADD and SUB instructions each require three test cycles and strobe the
outputs following the third test cycle. The MULT instruction requires six test cycles and strobes
the outputs following the fifth test cycle. During the first test cycle, the ADD instruction
requires the values 1010 on pins Ctrl1, Ctrl2, Ctrl3, Ctrl4, and allows FlexTest to place new
values on any of the data pins. The ADD instruction then requires that all pins hold their values
for the remaining two test cycles. The resulting pattern set, if saved in ASCII format, contains
comments specifying the cycles for testing the individual instructions.
(Figure: an embedded macro surrounded by logic; user-defined macro test vectors are applied to the macro inputs and observed at its outputs through the surrounding logic.)
• Allows you to define macro output values that do not require observation
• Fault grades the logic surrounding the macro
• Reduces overall test generation time
• Has no impact on area or performance
• Setup Macrotest — Modifies two rules of FastScan’s DRC to allow otherwise illegal
circuits to be processed by MacroTest. Black box (un-modelled) macros may require
this command.
• Macrotest — Runs the MacroTest utility to read functional patterns you provide and
convert them into scan-based manufacturing test patterns.
The MacroTest flow requires a set of patterns and MacroTest. The patterns are a sequence of
tests (inputs and expected outputs) that you develop to test the macro. For a memory, this is a
sequence of writes and reads. You may need to take embedding restrictions into account as you
develop your patterns. Next, you set up and run MacroTest to convert these cycle-based patterns
into scan-based test patterns. The converted patterns, when applied to the chip, reproduce your
input sequence at the macro’s inputs through the intervening logic. The converted patterns also
ensure that the macro’s output sequence is as you specified in your set of patterns.
Note
You can generate a wide range of pattern sets: From simple patterns that verify basic
functionality, to complex, modified March algorithms that exercise every address
location multiple times. Some embeddings (the logic surrounding the macro) do not
allow arbitrary sequences, however.
Figure 6-41 shows the basic flow for creating scan-based test patterns with MacroTest.
(Figure 6-41: MacroTest flow; simulate the macro stand-alone, invoke FastScan on the top-level design, issue Setup Macrotest in setup mode if necessary, switch to ATPG mode, run Macrotest, and save the patterns.)
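A minimal command sketch of this flow (the macro instance pathname and the pattern file name are hypothetical, and the Macrotest argument order should be verified against its reference page):

SETUP> setup macrotest              // only if DRC requires it
SETUP> set system mode atpg
ATPG> macrotest /core/ram1 ram1_test_patterns
ATPG> save patterns macro_patterns.ascii -ascii -replace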
When you run the Macrotest command, MacroTest reads your pattern file and begins analyzing
the patterns. For each pattern, the tool searches back from each of the macro’s inputs to find a
scan flip-flop or primary input. Likewise, the tool analyzes observation points for the macro’s
output ports. When it has justified and recorded all macro input values and output values,
MacroTest moves on to the next pattern and repeats the process until it has converted all the
patterns. The default MacroTest effort exhaustively tries to convert all patterns. If successful,
then the set of scan test patterns MacroTest creates will detect any defect inside the macro that
changes any macro output from the expected value.
If you add faults prior to running MacroTest, then FastScan will automatically fault simulate
them using the converted patterns output by MacroTest. FastScan targets faults in the rest of the
design with these patterns and reports the design’s test coverage as MacroTest successfully
converts each vector to a scan pattern. This fault simulation is typically able to detect as much
as 40% to 80% of a design’s total faults. So, by using MacroTest, you save resources in two
areas:
1. MacroTest performs all the time consuming back-tracing work for you. This can save
literally months of test generation time, without the overhead of additional test logic.
2. MacroTest scan patterns, although constructed solely for the purpose of delivering your
test patterns to the macro, usually provide a significant amount of test coverage for the
rest of the design. You may only need a supplemental ATPG run to obtain enough
additional test coverage to meet your overall design test specification.
The patterns you supply to MacroTest must be consistent with the macro surroundings
(embedding) to assure success. In addition, the macro must meet certain design requirements.
The following sections detail these requirements, describe how and when to use MacroTest, and
conclude with some examples.
1. The design has at least one combinational observation path for each macro output pin
that requires observation (usually all outputs).
2. All I/O of the RAM/macro block to be controlled or observed are unidirectional.
3. The macro/block can hold its state while the scan chain shifts, if the test patterns require
that the state be held across patterns. This is the case for a March algorithm, for
example.
If you write data to a RAM macro (RAM), for example, then later read the data from the RAM,
typically you will need to use one scan pattern to do the write, and a different scan pattern to do
the read. Each scan pattern has a load/unload that shifts the scan chain, and you must ensure that
the DFT was inserted, if necessary, to allow the scan chain to be shifted without writing into the
RAM. If the shift clock can also cause the RAM to write and there is no way to protect the
RAM, then it is very likely that the RAM contents will be destroyed during shift; the data
written in the early pattern will not be preserved for reading during the later pattern. Only if it is
truly possible to do a write followed by a read, all in one scan pattern, might you be able to
use MacroTest even with an unprotected RAM.
Because converting such a multi-cycle pattern is a sequential ATPG search problem, success is
not guaranteed even if success is possible. Therefore, you should try to convert a few patterns
before you depend on MacroTest to be able to successfully convert a given embedded macro.
This is a good idea even for combinational conversions.
If you intend to convert a sequence of functional cycles to a sequence of scan patterns, you can
insert the DFT to protect the RAM during shift: The RAM should have a write enable that is PI-
controllable throughout test mode to prevent destroying the state of the RAM. This ensures the
tool can create a state inside the macro and retain the state during the scan loading of the next
functional cycle (the next scan pattern after conversion by MacroTest).
The easiest case to identify is where FastScan issues a message saying it can use the RAM test
mode, RAM_SEQUENTIAL. This message occurs because FastScan can independently
operate the scan chains and the RAM. The tool can operate the scan chain without changing the
state of the macro as well as operate the macro without changing the state loaded into the scan
chain. This allows the most flexibility for ATPG, but the most DFT also.
However, there are cases where the tool can operate the scan chain without disturbing the
macro, while the opposite is not true. If the scan cells are affected or updated when the macro is
operated (usually because a single clock captures values into the scan chain and is also an input
into the macro), FastScan cannot use RAM_SEQUENTIAL mode. Instead, FastScan can use a
sequential MacroTest pattern (multiple cycles per scan load), or it can use multiple single cycle
patterns if the user’s patterns keep the write enable or write clock turned off during shift.
For example, suppose a RAM has a write enable that comes from a PI in test mode. This makes
it possible to retain written values in the RAM during shift. However, it also has a single edge-
triggered read control signal (no separate read enable) so the RAM’s outputs change any time
the address lines change followed by a pulse of the read clock/strobe. The read clock is a shared
clock and is also used as the scan clock to shift the scan chains (composed of MUX scan cells).
In this case, it is not possible to load the scan chains without changing the read values on the
output of the macro. For this example, you will need to describe a sequential read operation to
MacroTest. This can be a two-cycle operation. In the first cycle, MacroTest pulses the read
clock. In the second cycle, MacroTest observes and captures the macro outputs into the
downstream scan cells. This works because there is no intervening scan shift to change the
values on the macro’s output pins. If a PI-controllable read enable existed, or if you used a non-
shift clock (clocked scan and LSSD have separate shift and capture clocks), an intervening scan
load could occur between the pulse of the read clock and the capture of the output data. This is
possible because the macro read port does not have to be clocked while shifting the scan chain.
Note
Although the ATPG library has specific higher level collections of models called macros,
MacroTest is not limited to testing these macros. It can test library models and HDL
modules as well.
Here, the term “macro” simply means some block of logic, or even a distributed set of lines that
you want to control and observe. You must provide the input values and expected output values
for the macro. Typically you are given, or must create, a set of tests. You can then simulate
these tests in some time-based simulator, and use the results predicted by that simulator as the
expected outputs of the macro. For memories, you can almost always create both the inputs and
expected outputs without any time-based simulation. For example, you might create a test that
writes a value, V, to each address. It is trivial to predict that when subsequent memory reads
occur, the expected output value will be V.
MacroTest converts these functional patterns to scan patterns that can test the device after it is
embedded in systems (where its inputs and outputs are not directly accessible, and so the tests
cannot be directly applied and observed). For example, a single macro input enable might be the
output of two enables which are ANDed outside the macro. The tests must be converted so that
the inputs of the AND are values which cause the AND’s output to have the correct value at the
single macro enable input (the value specified by the user as the macro input value). MacroTest
converts the tests (provided in a file) and provides the inputs to the macro as specified in the
file, and then observes the outputs of the macro specified in the file. If a particular macro output
is specified as having an expected 0 (or 1) output, and this output is a data input to a MUX
between the macro output and the scan chain, the select input of that MUX must have the
appropriate value to propagate the macro’s output value to the scan chain for observation.
MacroTest automatically selects the path(s) from the macro output(s) to the scan chain(s), and
delivers the values necessary for observation, such as the MUX select input value in the
example above.
Often, each row of a MacroTest file converts to a single 1-system cycle scan test (sometimes
called a basic scan pattern in FastScan). A scan chain load, PI assertion, output measure, clock
pulse, and scan chain unload result for each row of the file if you specify such patterns. To
specify a write with no expected known outputs, specify the values to apply at the inputs to the
device and give X output values (don't care or don't measure). To specify a read with expected
known outputs, specify both the inputs to apply, and the outputs that are expected (as a result of
those and all prior inputs applied in the file so far). For example, an address and read enable
would have specified inputs, whereas the data inputs could be X (don’t care) for a memory read.
Mentor Graphics highly recommends that you not over-specify patterns. It may be impossible,
due to the surrounding logic, to justify all inputs otherwise. For example, if the memory has a
write clock and write enable, and is embedded in a way that the write enable is independent but
the clock is shared with other memories, it is best to turn off the write using the write enable,
and leave the clock X so it can be asserted or de-asserted as needed. If the clock is turned off
instead of the write enable, and the clock is shared with the scan chain, it is not possible to pulse
the shared clock to capture and observe the outputs during a memory read. If instead, the write
enable is shared and the memory has its own clock (not likely, but used for illustration), then it
is best to turn off the write with the clock and leave the shared write enable X.
Realize that although the scan tests produced appear to be independent tests, FastScan assumes
that the sequence being converted has dependencies from one cycle to the next. Thus, the scan
patterns have dependencies from one scan test to the next. Because this is atypical, FastScan
marks MacroTest patterns as such, and you must save such MacroTest patterns using the Save
Patterns command. The MacroTest patterns cannot be reordered or reduced using Compress
Patterns; reading back MacroTest patterns is not allowed for that reason. You must preserve the
sequence of MacroTest patterns as a complete, ordered set, all the way to the tester, if the
assumption of cycle-to-cycle dependencies in the original functional sequence is correct.
To illustrate, if you write a value to an address, and then read the value in a subsequent scan
pattern, this will work as long as you preserve the original pattern sequence. If the patterns are
reordered, and the read occurs before the write, the patterns will then mismatch during
simulation or fail on the tester. The reason is that the reordered scan patterns try to read the data
before it has been written. All other FastScan patterns, by contrast, are independent and can be reordered (for example, to allow pattern compaction to reduce test set size).
Macrotest patterns are never reordered or reduced, and the number of input patterns directly
determines the number of output patterns.
The definition of the instance/macro is accessed to determine the pin order as defined in the port
list of the definition. MacroTest expects that pin order to be used in the file specifying the I/O
(input and expected output) values for the macro (the tests). For example, the command:
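macrotest regfile_8 file_with_tests // instance pathname, then the test file name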
would specify for MacroTest to find the instance “regfile_8”, look up its model definition, and
record the name and position of each pin in the port list. Given that the netlist is written in
Verilog, with the command:
the portlist of regfile_definition_name (not the instance port list “net1, net2, …”) is used to get
the pin names, directions, and the ordering expected in the test file, file_with_tests. If the library
definition is:
model "regfile_definition_name"
("Dout_0", "Dout_1", Addr_0", "Addr_1", "Write_enable", ...)
( input ("Addr_0") () ... output ("Dout_0") () ... )
then MacroTest knows to expect the output value Dout_0 as the first value (character)
mentioned in each row (test) of the file, file_with_tests. The output Dout_1 should be the 2nd
pin, input pin Addr_0 should be the 3rd pin value encountered, etc. If it is inconvenient to use
this ordering, the ordering can be changed at the top of the test file, file_with_tests. This can be
done using the following syntax:
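macro_inputs Addr_0 Addr_1
macro_output Dout_1
macro_inputs Write_enable
...
end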
which would cause MacroTest to expect the value for input Addr_0 to be the first value in each
test, followed by the value for input Addr_1, the expected output value for Dout_1, the input
value for Write_enable, and so on.
Note
Only the pin names need be specified, because the instance name “regfile_8” was given
on the MacroTest command line.
The same macro boundary can also be described using full pathnames:
macro_inputs regfile_8/Addr_0 regfile_8/Addr_1
macro_output regfile_8/Dout_1
macro_inputs regfile_8/Write_enable
...
end
The above example uses full pathnames to illustrate the format; it defines the same macro boundary that was previously defined for regfile_8 using only pin names. Because the macro is a single instance, this form would not normally be used, as the instance name must be repeated for each pin. However, you can use this full pathname form to define a distributed macro that covers pieces of different instances.
This more general form of boundary definition allows a macro to be any set of pins at any
level(s) of hierarchy down to the top library model. If you use names which are inside a model
in the library, the pin pathname must exist in the flattened data structures. (In other words, it
must be inside a model where all instances have names, and it must be a fault site, because these
are the requirements for a name inside a model to be preserved in FastScan).
This full path/pin name form of “macro boundary” definition is a way to treat any set of
pins/wires in the design as points to be controlled, and any set of pins/wires in the design as
points to be observed. For example, some pin might be defined as a macro_input which is then
given {0,1} values for some patterns, but X for others. In some sense, this “macro input” can be
thought of as a programmable ATPG constraint (see Add ATPG Constraint) whose value can change from pattern to pattern.
Although rarely done, you can specify, for one macro output at a time, exactly which of the scan cells reported by the Macrotest -Report switch is to be used to observe that particular macro output pin. Any subset can be
so specified. For example, if you want to force macro output pin Dout_1 to be observed at one
of its reported observation sites, such as “/top/middle/bottom/ (13125)”, then you can specify
this as follows:
macro_output regfile_8/Dout_1
observe_at 13125
Note
There can be only one macro_output statement on the line above the observe_at directive.
Also, you must specify only one observe_at site, which is always associated with the
single macro_output line that precedes it. If a macro_input line immediately precedes the
observe_at line, MacroTest will issue an error message and exit.
The preceding example uses the gate id (number in parentheses in the -Report output) to specify
the scan cell DFF to observe at, but you can also use the instance pathname. Instances inside
models may not have unique names, so the gate id is always an unambiguous way to specify
exactly where to observe. If you use the full name and the name does not exactly match, the tool
selects the closest match from the reported candidate observation sites. The tool also warns you
that an exact match did not occur and specifies the observation site that it selected.
MacroTest uses the expected output values that you supply for each pattern. This allows black-boxed macros to be used, or lets you create models for normal ATPG using FastScan’s _cram primitive while treating the macro as a black box for internal testing.
A _cram primitive may be adequate for passing data through a RAM, for example, but not for
modelling it for internal faults. Macrotest trusts the output values you provide regardless of
what would normally be calculated in FastScan, allowing the user to specify outputs for these
and other situations.
Due to its black box treatment of even modelled RAMs/macros, MacroTest must sometimes get
additional information from you. Macrotest assumes that all macro inputs capture on the leading
edge of any clock that reaches them. So, for a negative pulse, MacroTest assumes that the
leading (falling) edge causes the write into the macro, whereas for a positive pulse, MacroTest
assumes that the leading (rising) edge causes the write. If these assumptions are not true, you
must specify which data or address inputs (if such pins occur) are latched into the macro on a
trailing edge.
Occasionally, a circuit uses leading DFF updates followed by trailing edge writes to the
memory driven by those DFFs. For trailing edge macro inputs, you must indicate that the
leading edge assumption does not hold for any input pin value that must be presented to the
macro for processing on the trailing edge. For a macro which models a RAM with a trailing
edge write, you must specify this fact for the write address and data inputs to the macro which
are associated with the falling edge write. To specify the trailing edge input, you must use a
boundary description which lists the macro’s pins (you cannot use the instance name only
form).
Regardless of whether you use just pin names or full path/pin names, you can replace
“macro_inputs” with “te_macro_inputs” to indicate that the inputs that follow must have their
values available for the trailing edge of the shared clock. This allows MacroTest to ensure that
the values arrive at the macro input in time for the trailing edge, and also that the values are not
overwritten by any leading edge DFF or latch updates. If a leading edge DFF drives the trailing
edge macro input pin, the value needed at the macro input will be obtained from the D input side
of the DFF rather than its Q output. The leading edge will make Q=D at the DFF, and then that
new value will propagate to the macro input and be waiting for the trailing edge to use. Without
the user specification as a trailing edge input, MacroTest would obtain the needed input value
from the Q output of the DFF. This is because MacroTest would assume that the leading edge of
the clock would write to the macro before the leading edge DFF could update and propagate the
new value to the macro input.
It is not necessary to specify leading edge macro inputs because this is the default behavior. It is
also unnecessary to indicate leading or trailing edges for macro outputs. You can control the
cycle in which macro outputs are captured. This ensures that the tool correctly handles any
combination of macro outputs and capturing scan cells as long as all scan cells are of the same
polarity (all leading edge capture/observe or all trailing edge capture/observe).
In the rare case that a particular macro output could be captured into either a leading or a trailing
edge scan cell, you must specify which you prefer by using the -Le_observation_only switch or
-Te_observation_only switch with the Macrotest command for that macro. For more
information on these switches, see “Example 4 — Using Leading Edge & Trailing Edge
Observation Only” and the Macrotest reference page in the ATPG Tools Reference Manual.
macro_input clock
te_macro_inputs Addr_0 Addr_1 // TE write address inputs
macro_output Dout_1
...
end
Note
It is the declaration of the PI pin driving the macro input, not any declaration of the macro
input itself, which determines whether a pin can be pulsed in FastScan.
Normal observable output values include {L,H}, which are analogous to {0,1}. L represents
output 0, and H represents output 1. You can give X as an output value to indicate Don't
Compare, and F for a Floating output (output Z). Neither a Z nor an X output value will be
observed. Occasionally an output cannot be observed, but must be known in order to prevent
bus contention or to allow observation of some other macro output.
If you provide a file with these characters, a check is done to ensure that an input pin gets an
input value, and an output pin gets an output value. If an “L” is specified in an input pin
position, for example, an error message is issued. This helps detect ordering mismatches
between the port list and the test file. If you prefer to use 0 and 1 for both inputs and outputs,
then use the -No_l_h switch with the Macrotest command:
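macrotest regfile_8 file_with_tests -no_l_h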
Assuming that the -L_h default is used, the following might be the testfile contents for our
example register file, if the default port list pin order is used.
XX 00 0
XX 00 1
HH 00 0
The example file above has only comments and data; spaces are used to separate the data into
fields for convenience. Each row must have exactly as many value characters as pins mentioned
in the original port list of the definition, or the exact number of pins in the header, if pins were
specified there. Pins can be left off of an instance if macro_inputs and macro_outputs are
specified in the header, so the header names are counted and that count is used unless the
instance name only form of macro boundary definition is used (no header names exist).
To specify fewer than all pins of an instance, omit the unwanted pins from the header when reordering the pins. The omitted pins are ignored for purposes of MacroTest. If the correct number of values does not exist on every row, an error occurs and a message is issued.
The following is an example where the address lines are exchanged, and only Dout_0 is to be
tested:
// t
// e
// _
// D AA e
// o dd n
// u dd a
// t rr b
// _ __ l
// 0 10 e
X 00 0
X 00 1
H 00 0
It is not necessary to have all macro_inputs together. You can repeat the direction designators as
necessary:
macro_input write_enable
macro_output Dout_0
macro_inputs Addr_1 Addr_0
macro_outputs Dout_1 ...
...
end
MacroTest fails if the surrounding logic makes the specified input values impossible to justify. For example, if the write enable line outside the macro is the complement of the read enable line
(perhaps due to a line which drives the read enable directly and also fans out to an inverter
which drives the write enable), and you specify that both the read enable and write enable pins
should be 0 for some test, then MacroTest will be unable to deliver both values. It stops and
reports the line of the test file, as well as the input pins and values that cannot be delivered. If
you change the enable values in the MacroTest patterns file to always be complementary,
MacroTest would then succeed. Alternatively, if you add a MUX to make the enable inputs
independently controllable in test mode and keep the original MacroTest patterns unchanged,
MacroTest would use the MUX to control one of the inputs to succeed at delivering the
complementary values.
MacroTest can fault simulate the scan output patterns it creates from the sequence of MacroTest
input patterns as it converts them. This is described in more detail later, but it is recommended
that you use this feature even if you do not want the stuck-at fault coverage outside the macro.
That is because the fault simulation outputs a new coverage line for each new MacroTest pattern, so you see each pattern as it is generated and simulated. This lets you monitor the progress of MacroTest pattern by pattern. Otherwise, the tool only displays a message upon completion
or failure, giving you no indication of how a potentially long MacroTest run is progressing.
If you decide to ignore the stuck-at fault coverage once MacroTest has completed, you can save the patterns using the Save Patterns command and then remove the patterns and coverage using the Reset State command. It is therefore highly recommended that you issue Add Faults -All before running MacroTest and allow the default -fault_simulate option to take effect; no simulation (and therefore no pattern-by-pattern report) occurs unless there are faults to simulate.
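For example, a minimal dofile fragment for this recommended flow (using the instance and file names from the earlier example) might be:
set system mode atpg
add faults -all
macrotest regfile_8 file_with_tests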
Once MacroTest is successful, you should simulate the resulting MacroTests in a time-based
simulator. This verifies that the conversion was correct, and that no timing problems exist.
FastScan does not simulate the internals of primitives, and therefore relies on the fact that the
inputs produced the expected outputs given in the test file. This final simulation ensures that no
errors exist due to modeling or simulation details that might differ from one simulator to the
next. Normal FastScan considerations hold, and it is suggested that DRC violations be treated as
they would be treated for a stuck-at fault ATPG run.
To prepare to macrotest an empty (TieX) macro that needs to be driven by a write control (to
allow pulsing of that input pin on the black box), issue the Setup Macrotest command. This
command prevents a G5 DRC violation and allows you to proceed. Also, if a transparent latch
(TLA) on the control side of an empty macro is unobservable due to the macro, the Setup
Macrotest command prevents it from becoming a TieX, as would normally occur. Once it
becomes a TieX, it is not possible for MacroTest to justify macro values back through the latch.
If in doubt, when preparing to MacroTest any black box, issue the Setup Macrotest command
before exiting Setup mode. No errors will occur because of this, even if none of the conditions
requiring the command exist.
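As a sketch, this precaution amounts to one extra command in the dofile before the mode change that exits Setup:
setup macrotest
set system mode atpg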
FastScan ATPG commands and options apply within MacroTest, including cell constraints,
ATPG constraints, clock restrictions (it only pulses one clock per cycle), and others. If
MacroTest fails and reports that it aborted, you can use the Set Abort Limit command to get
MacroTest to work harder, which may allow MacroTest to succeed. Mentor Graphics
recommends that you set a moderate abort limit for a normal MacroTest run, then increase the
limit if MacroTest fails and issues a message saying that a higher abort limit might help.
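For example (the limits shown are illustrative; suitable values depend on the design):
set abort limit 100
// if MacroTest aborts and suggests a higher limit:
set abort limit 1000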
ATPG effort should match the simulation checks for bus contention to prevent MacroTest
patterns from being rejected by simulation. Therefore, if you specify Set Contention Check On,
you should use the -Atpg option. Normally, if you use Set Contention Check Capture_clock,
you should use the -Catpg option instead. Currently, MacroTest does not support the -Catpg
option, so this is not advised. Set Decision Order Random is strongly discouraged. It can
mislead the search and diagnosis in MacroTest.
In a MacroTest run, as each row is converted to a test, that test is stored internally (similar to a
normal FastScan ATPG run). You can save the patterns to write out the tests in a desired format
(perhaps Verilog to allow simulation and WGL for a tester). The tool supports the same formats
for MacroTest patterns as for patterns generated by a normal ATPG run. However, because
MacroTest patterns cannot be reordered, and because the expected macro output values are not
saved with the patterns, it is not possible to read MacroTest patterns back into FastScan. You should generate MacroTest patterns, then save them in all desired formats.
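For example, you might save the same MacroTest pattern set in both a simulation format and a tester format immediately after the run (the file names are illustrative):
save patterns macro_pats.v -verilog -replace
save patterns macro_pats.wgl -wgl -replace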
Macros are typically small compared to the design that they are in. It is possible to get coverage
of normal faults outside the macro while testing the macro. The default is for MacroTest to
randomly fill any scan chain or PI inputs not needed for a particular test so that fortuitous
detection of other faults occurs. If you add faults using the Add Faults -All command before
invoking MacroTest, then the random fill and fault simulation of the patterns occurs, and any
faults detected by the simulation will be marked as DS.
MacroTest Examples
Example 1 — Basic 1-Cycle Patterns
Verilog Contents:
Note
Vectors are treated as expanded scalars.
Because Dout is declared as “array 7:0”, the string “Dout” in the port list is equivalent to
“Dout<7> Dout<6> Dout<5> Dout<4> Dout<3> Dout<2> Dout<1> Dout<0>”. If the
declaration of Dout had been Dout “array 0:7”, then the string “Dout” would be the reverse of
the above expansion. Vectors are always allowed in the model definitions. Currently, vectors
are not allowed in the macrotest input patterns file, so if you redefine the pin order in the header
of that file, scalars must be used. Either “Dout<7>”, “Dout(7)”, or “Dout[7]” can be used to
match a bit of a vector.
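For instance, a test file header that redefines the pin order using scalar references to the vector bits might begin like this (a sketch showing only the Dout bits from the declaration above):
macro_outputs Dout[7] Dout[6] Dout[5] Dout[4]
macro_outputs Dout[3] Dout[2] Dout[1] Dout[0]
...
end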
Dofile Contents:
CHAIN_TEST =
pattern = 0;
apply "grp1_load" 0 =
chain "chain1" = "0011001100110011001100";
end;
apply "grp1_unload" 1 =
chain "chain1" = "0011001100110011001100";
end;
end;
SCAN_TEST =
pattern = 0 macrotest ;
apply "grp1_load" 0 =
chain "chain1" = "0110101010000000000000";
end;
force "PI" "001X0XXXXXXXX" 1;
pulse "/scanen_early" 2;
measure "PO" "1" 3;
pulse "/clk" 4;
apply "grp1_unload" 5 =
chain "chain1" = "XXXXXXXXXXXXXXXXXXXXXX";
end;
pattern = 1 macrotest ;
apply "grp1_load" 0 =
chain "chain1" = "1000000000000000000000";
end;
force "PI" "001X0XXXXXXXX" 1;
measure "PO" "1" 2;
pulse "/clk" 3;
apply "grp1_unload" 4=
chain "chain1" = "XXXXXXXXXXXXXX10101010";
end;
... skipping some output ...
SCAN_CELLS =
scan_group "grp1" =
scan_chain "chain1" =
scan_cell = 0 MASTER FFFF "/rden_reg/ffdpb0"...
scan_cell = 1 MASTER FFFF "/wren_reg/ffdpb0"...
scan_cell = 2 MASTER FFFF "/datreg1/ffdpb7"...
... skipping some scan cells ...
scan_cell = 20 MASTER FFFF "/doutreg1/ffdpb1"...
scan_cell = 21 MASTER FFFF "/doutreg1/ffdpb0"...
end;
end;
end;
The above command and file cause MacroTest to try to test two different macros simultaneously. The macros need not have the same test set length (the same number of tests/rows in their respective test files); in this example, one .pat file has two tests while the other has four. Before making a multiple-macro run, it is best to ensure that each macro can be tested on its own: test each macro individually, discard the tests it creates, move its command into a file, and iterate. The final run then tries to test all of the individually successful macros at the same time. You indicate this by referencing the file containing the individually successful Macrotest commands in a -multiple_macros run. The multiple-macros file can be thought of as a specialized dofile containing nothing but MacroTest commands; one -multiple_macros file defines one set of macros for MacroTest to test at the same time (in one MacroTest run). This is the most effective way of reducing test set size when testing many embedded memories.
In the above example, an instance named “test0” has an instance named “mem1” inside it that is
a macro to test using file ram_patts0.pat, while an instance named “test1” has an instance
named “mem1” inside it that is another macro to test using file ram_patts2.pat as the test file.
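As a sketch, the multiple-macros file for this example might contain nothing more than the two individually verified Macrotest commands (the instance pathnames are assumed from the description above):
macrotest /test0/mem1 ram_patts0.pat
macrotest /test1/mem1 ram_patts2.pat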
For this example, the RAM is as before, except a single clock is connected to an edge-triggered
read and edge-triggered write pin of the macro to be tested. It is also the clock going to the
MUX scan chain. There is also a separate write enable. As a result, it is possible to write using a
one-cycle pattern, and then to preserve the data written during shift by turning the write enable
off in the shift procedure. However, for this example, a read must be done in two cycles—one to
pulse the RAM’s read clock and make the data come out of the RAM, and another to capture
that data into the scan chain before shifting changes the RAM’s output values. There is no
independent read enable to protect the outputs during shift, so they must be captured before
shifting, necessitating a 2 cycle read/observe.
Note that because the clock is shared, it is important to specify only one of the macro values for RdClk or WrClk, or to make them consistent. X means “Don’t Care” on macro inputs, so it can be used for one of the two values in all patterns to ensure that any external embedding can be achieved. It is easier not to over-specify MacroTest patterns; under-specified patterns can be used without having to discover the dependencies and change the patterns.
Dofile Contents:
CHAIN_TEST =
pattern = 0;
apply "grp1_load" 0 =
chain "chain1" = "0011001100110011001100";
end;
apply "grp1_unload" 1 =
chain "chain1" = "0011001100110011001100";
end;
end;
SCAN_TEST =
pattern = 0 macrotest ;
apply "grp1_load" 0 =
chain "chain1" = "0110101010000000000000";
end;
force "PI" "001X0XXXXXXXX" 1;
pulse "/scanen_early" 2;
measure "PO" "1" 3;
pulse "/clk" 4;
apply "grp1_unload" 5 =
chain "chain1" = "XXXXXXXXXXXXXXXXXXXXXX";
end;
pattern = 1 macrotest ;
apply "grp1_load" 0 =
chain "chain1" = "1000000000000000000000";
end;
force "PI" "001X0XXXXXXXX" 1;
pulse "/clk" 2;
force "PI" "001X0XXXXXXXX" 3;
measure "PO" "1" 4;
pulse "/clk" 5;
apply "grp1_unload" 6=
chain "chain1" = "XXXXXXXXXXXXXX10101010";
end;
... skipping some output ...
On the other hand, if you invoke MacroTest with the -Le_observation_only switch and indicate
in the MacroTest patterns that the macro’s outputs should be observed in the cycle after pulsing
the read pin on the macro, the rising edge of one cycle would cause the read of the macro, and
then the rising edge on the next cycle would capture into the TE scan cells.
For additional information on the use of these switches, refer to the Macrotest command in the
ATPG Tools Reference Manual.
Note
Using the -Start and -End switches will limit file size as well, but the portion of internal
patterns saved will not provide a very reliable indication of pattern characteristics when
simulated. Sampled patterns will more closely approximate the results you would obtain
from the entire pattern set.
If you selected -Verilog or -Vhdl as the format in which to save the patterns, the application
automatically creates a test bench that you can use in a timing-based simulator such as
ModelSim to verify that the FastScan-generated vectors behave as predicted by the ATPG tools.
For example, assume you saved the patterns generated in FastScan or Flextest as follows:
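save patterns <pattern_filename> -verilog -replace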
The tool writes the test patterns out in one or more pattern files and an enhanced Verilog test
bench file that instantiates the top level of the design. These files contain procedures to apply
the test patterns and compare expected output with simulated output.
After compiling the patterns, the scan-inserted netlist, and an appropriate simulation library,
you simulate the patterns in a Verilog simulator. If there are no miscompares between
FastScan’s expected values and the values produced by the simulator, a message reports that
there is “no error between simulated and expected patterns.” If any of the values do not match, a
“simulation mismatch” has occurred and must be corrected before you can use the patterns on a
tester.
Be sure to simulate parallel patterns and at least a few serial patterns. Parallel patterns simulate
relatively quickly, but do not detect problems that occur when data is shifted through the scan
chains. One such problem, for example, is data shifting through two cells on one clock cycle
due to clock skew. Serial patterns can detect such problems. Another reason to simulate a few
serial patterns is that correct loading of shadow or copy cells depends on shift activity. Because
parallel patterns lack the requisite shift activity to load shadow cells correctly, you may get
simulation mismatches with parallel patterns that disappear when you use serial patterns.
Therefore, always simulate at least the chain test or a few serial patterns in addition to the
parallel patterns.
For a detailed description of the differences between serial and parallel patterns, refer to the first
two subsections under “Pattern Formatting Issues” on page 7-9. See also “Sampling to Reduce
Serial Loading Simulation Time” on page 7-11 for information on creating a subset of sampled
serial patterns. Serial patterns take much longer to simulate than parallel patterns (due to the
time required to serially load and unload the scan chains), so typically only a subset of serial
patterns is simulated.
(Figure: flowchart for debugging simulation mismatches. If all scan tests fail, suspect Timing Violations or Library Problems; if parallel patterns fail but serial patterns pass, suspect DRC Issues or Shadow Cells.)
If you are viewing this document online, you can click on the links in the figure to see more
complete descriptions of issues often at the root of particular mismatch failures. These issues
are discussed in the following sections:
• Are the mismatches reported on primary outputs (POs), scan cells or both?
Mismatches on scan cells can be related to capture ability and timing problems on the
scan cells. For mismatches on primary outputs, the issue is more likely to be related to
an incorrect value being loaded into the scan cells.
• Are the mismatches reported on just a few or most of the patterns?
Mismatches on a few patterns indicate a problem that is unique to certain patterns, while mismatches on most patterns indicate a more generalized problem.
• Are the mismatches observed on just a few pins/cells or most pins/cells?
Mismatches on a few pins/cells indicate a problem related to a few specific instances or one part of the logic, while mismatches on most pins/cells indicate that something more general is causing the problem.
• Do both the serial and the parallel test bench fail or just one of them?
A problem in the serial test bench only indicates that the mismatch is related to shifting of the scan chains (for example, data shifting through two cells on one clock cycle due to clock skew). The shadow cell problem mentioned earlier causes the serial test bench to pass and the parallel test bench to fail.
• Does the chain test fail?
As described above, serial pattern failure can be related to shifting of the scan chain. If
this is true, the chain test (which simply shifts data from scan in to scan out without
capturing functional data) also fails.
• Do only certain pattern types fail?
If only ram sequential patterns fail, the problem is most certainly related to the RAMs
(for instance incorrect modeling). If only clock_sequential patterns fail, the problem is
probably related to nonscan flip-flops and latches. If clock_po patterns fail, it might be
due to a W17 violation. For designs with multiple clocks, it can be useful to see which
clock is toggled for the patterns that fail.
DRC Issues
The DRC violations that are most likely to cause simulation mismatches are:
• C3
• C4
• C6
• W17
For details on these violations, refer to Chapter 2, “Design Rules Checking” in the Design-For-
Test Common Resources Manual and SupportNet KnowledgeBase TechNotes describing each
of these violations. For most DRC-related violations, you should be able to see mismatches on
the same flip-flops where the DRC violations occurred.
The command “set split capture_cycle on” usually resolves the mismatches caused by the C3 and C4 DRC violations. You can avoid mismatches caused by the C6 violation by using the
command “set clock_off simulation on”. Refer to the section, “Setting Event Simulation
(FastScan Only)” for an overview on the use of these commands.
Note
These two commands do not remove the DRC violations; rather, they resolve the
mismatch by changing FastScan’s expected values.
A W17 violation is issued when you save patterns (in any format but ASCII or binary) if you
have clock_po patterns and you do not have a clock_po procedure in your test procedure file. In
most cases, this causes simulation mismatches for clock_po patterns. The solution is to define
a separate clock_po procedure in the test procedure file. The “Test Procedure File” chapter in
the Design-for-Test Common Resources Manual has details on such procedures.
Shadow Cells
Another common problem is shadow cells. Such cells do not cause DRC violations, but the tool
issues the following message when going into ATPG mode:
A shadow flip-flop is a non-scan flip-flop that has the D input connected to the Q output of a
scan flip-flop. Under certain circumstances, such shadow cells are not loaded correctly in the
parallel test bench. If you see the above message, it indicates that you have shadow cells in your
design and that they may be the cause of a reported mismatch. For more information about
shadow cells and simulation mismatches, consult the online SupportNet KnowledgeBase. Refer
to “SupportNet help (optional)” on page A-12 for information about SupportNet.
Library Problems
A simulation mismatch can be related to an incorrect library model; for example, if the reset
input of a flip-flop is modeled as active high in the ATPG model used by FastScan, and as
active low in the Verilog model used by the simulator. The likelihood of such problems depends
on the library. If the library has been used successfully for several other designs, the mismatch
probably is caused by something else. On the other hand, a newly developed, not thoroughly
verified library could easily cause problems. For regular combinational and sequential elements,
this causes mismatches for all patterns, while for instances such as RAMs, mismatches only
occur for a few patterns (such as RAM sequential patterns).
Another library-related issue is the behavior of multi-driven nets and the fault effect of bus
contention on tristate nets. FastScan is conservative by default, so non-equal values on the inputs to non-tristate multi-driven nets, for example, always result in an X on the net. For
additional information, see the commands Set Net Resolution and Set Net Dominance.
Timing Violations
Setup and hold violations during simulation of the test bench can indicate timing-related
mismatches. In some cases, you see such violations on the same scan cell that has reported
mismatches; in other cases, the problem might be more complex. For instance, during loading
of a scan cell, you may observe a violation as a mismatch on the cell(s) and PO(s) that the
violating cell propagates to. Another common problem is clock skew. This is discussed in the
section, “Checking for Clock-Skew Problems with Mux-DFF Designs.”
Another common timing-related issue is that the timeplate and/or test procedure file has not been expanded with real timing. By default, the test procedure and timeplate files have one “time unit” between each
event. When you create test benches using the -Timingfile switch with the Save Patterns
command, the time unit expands to 1000 ns in the Verilog and VHDL test benches. When you
use the default -Procfile switch and a test procedure file with the Save Patterns command, each
time unit in the timeplate is translated to 1 ns. This can easily cause mismatches.
Based on the time and scan cell where the mismatch occurred, you can generate waveforms or
dumps that display the values just prior to the mismatch. You can then compare these values to
the values FastScan expected. With this information, you can trace back in the design (in both
FastScan and the simulator) to see where the mismatch originates.
A detailed example showing this process for a Verilog test bench is contained in FastScan
AppNote 3002, available on the CSD SupportNet.
Several FastScan commands and capabilities can help reduce the amount of time you spend
troubleshooting simulation mismatches. You can use Save Patterns -Debug when saving
patterns in Verilog format, for example, to cause FastScan to automatically run the ModelSim
timing-based simulator to verify the saved vectors. After ModelSim completes its simulation,
FastScan displays a summary report of the mismatch sources. For example:
You can thus simulate parallel patterns to quickly verify that capture works as expected, or you can
simulate serial patterns to thoroughly verify scan chain shifting. In either case, the tool traces
through the design, locating the sources of mismatches, and displays a report of the mismatches
found. The report for parallel patterns includes the gate source and the system clock cycle where
mismatches start. For serial patterns, the report additionally includes the shift cycle of a
mismatch pattern and the scan cell(s) where the shift operation first failed, if a mismatch is
caused by a scan shift operation.
Another FastScan command, Analyze Simulation Mismatches, performs the same simulation
verification and analysis as “save patterns -debug”, but independent of the Save Patterns
command. In default mode, it analyzes the current internal pattern set. Alternatively, you can
analyze external patterns by issuing a “set pattern source external” command, then running the
Analyze Simulation Mismatches command with the -External switch, as in this example:
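ATPG> set pattern source external <pattern_filename>
ATPG> analyze simulation mismatches -external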
You need to perform just two setup steps before you use Save Patterns -Debug or Analyze
Simulation Mismatches:
1. Specify the invocation command for the external simulator with the
Set External Simulator command. For example, the following command specifies to
invoke the ModelSim simulator, vsim, in batch mode using the additional command line
arguments in the file, my_vsim_args.vf:
ATPG> set external simulator vsim -c -f my_vsim_args.vf
Several ModelSim invocation arguments support Standard Delay Format (SDF) back-
annotation. For an example of their use with this command, refer to the Set External
Simulator command description.
2. Compile the top level netlist and any required Verilog libraries into one working
directory. The following example uses the ModelSim vlib shell command to create such
a directory, then compiles the design and a Verilog parts library into it using the
ModelSim Verilog compiler, vlog:
ATPG> sys vlib results/my_work
ATPG> sys vlog -work results/my_work my_design.v -v my_parts_library.v
Information from “analyze simulation mismatches” is retained internally. You can access it at
any time within a session using the Analyze > Simulation Mismatches menu item in
DFTInsight, or the Report Mismatch Sources command from the FastScan command line:
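ATPG> report mismatch sources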
The arguments available with this command give you significant analytic power. If you specify
a particular mismatch source and include the -Waveform switch:
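ATPG> report mismatch sources <mismatch_source> -waveform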
FastScan displays mismatch signal timing for that source on the waveform viewer provided by
the external simulator. An example is shown in Figure 6-43. The viewer shows a waveform for
each input and output of the specified mismatch gate, with the cursor located at the time of the
first mismatch. The pattern numbers are displayed as well, so you can easily see which pattern
was the first failing pattern for a mismatch source. The displayed pattern numbers correspond to
the pattern numbers in the ASCII pattern file.
To see a DFTInsight schematic of the mismatch source, annotated with the input and output
values simulated by FastScan, specify the -Display switch. Figure 6-44 shows the DFTInsight
display for the mismatch source, a 4-input AND gate, whose waveforms appear in the example
waveform view. You can see that FastScan simulated a “1” on the output of the gate (even
though one of the inputs was a “0”), whereas ModelSim simulated a “0”. With this information,
you would know the mismatch likely resulted from an ATPG library problem. You could now
investigate the library model of this gate to find out why it simulated incorrectly. To see both
windows simultaneously, specify both switches in the same command.
For more detailed information on the commands discussed in this section, refer to the command
descriptions in the ATPG Tools Reference Manual.
Analyzing Patterns
Sometimes, you can find additional information that is difficult to access in the Verilog or
VHDL test benches in other pattern formats. When comparing different pattern formats, it is
useful to know that the pattern numbering is the same in all formats. In other words, pattern #37
in the ASCII pattern file corresponds to pattern #37 in the WGL or Verilog format.
Each of the pattern formats is described in detail in the section, “Saving Patterns in Basic Test
Data Formats,” beginning on page 7-12.
(Figure: two mux-DFF scan cells connected in a chain, showing the sc_in path through the scan MUX with its mux delay and flip-flop setup time, the sc_en select signal, and the clk path with added clock routing delay to the second cell.)
You can run into problems if the clock delay due to routing, modeled by the buffer, is greater
than the mux delay minus the flip-flop setup time. In this situation, the data does not get
captured correctly from the previous cell in the scan chain and therefore, the scan chain does not
shift data properly.
To detect this problem, you should run both critical timing analysis and functional simulation of
the scan load/unload procedure. You can use ModelSim or another HDL simulator for the
functional simulation, and a static timing analyzer such as SST Velocity for the timing analysis.
Refer to the ModelSim SE/EE User’s Manual or the SST Velocity User’s Manual for details on
performing timing verification.
Figure 7-1 shows a basic process flow for defining test pattern timing.
(Figure 7-1: the test procedure file and the internal test pattern set are combined to produce tester-format patterns with timing.)
While the ATPG process itself does not require test procedure files to contain real timing
information, automatic test equipment (ATE) and some simulators do require this information.
Therefore, you must modify the test procedure files you use for ATPG to include real timing
information. “General Timing Issues” on page 7-3 discusses how you add timing information to
existing test procedures.
After creating real timing for the test procedures, you are ready to save the patterns. You use the
Save Patterns command with the proper format to create a test pattern set with timing
information. For more information, refer to “Saving Timing Patterns” on page 7-8.
Test procedures contain groups of statements that define scan-related events. The “Test
Procedure File” chapter of the Design-for-Test Common Resources Manual explains test
procedures and statements.
Timing Terminology
The following list defines some timing-related terms:
• Non-return timing — primary inputs that change, at most, once during a test cycle.
• Offset — the timeframe in a test cycle in which pin values change.
• Period — the duration of pin timing—one or more test cycles.
• Return timing — primary inputs, typically clocks, that pulse high or low during every
test cycle. Return timing indicates that the pin starts at one logic level, changes, and
returns to the original logic level before the cycle ends.
• Suppressible return timing — primary inputs that can exhibit return timing during a
test cycle, although not necessarily.
Within a test cycle, a device under test must abide by the following restrictions:
• At most, each non-clock input pin changes once in a test cycle. However, different input
pins can change at different times.
• Each clock input pin is at its off-state at both the start and end of a test cycle.
• At most, each clock input pin changes twice in a test cycle. However, different clock
pins can change at different times.
• Each output pin has only one expected value during a test cycle. However, the
equipment can measure different output pin values at different times.
• A bidirectional pin acts as either an input or an output, but not both, during a single test
cycle.
To avoid adverse timing problems, the following timing requirements satisfy some ATE timing
constraints:
• Unused outputs
By default, test procedures without measure events (all procedures except shift) strobe
unused outputs at a time of cycle/2, and end the strobe at 3*cycle/4. The shift procedure
strobes unused outputs at the same time as the scan output pin.
• Unused inputs
By default, all unused input pins in a test procedure have a force offset of 0.
There are three ways to load existing procedure file information into FastScan and FlexTest:
• During SETUP mode, use the “Add Scan Groups <procedure_filename>” command.
Any timing information in these procedure files will be used when “Save Patterns” is
issued if no other timing information or procedure information is loaded.
• Use the “Read Procfile” command. This is only valid when not in SETUP mode. Using
this command loads a new procedure file that will overwrite or merge with the
procedure and timing data already loaded. This new data is now in effect for all
subsequent “Save Patterns” commands.
• If you specify a new procedure file on the “Save Patterns” command line, the timing
information in that procedure file will be used for that “Save Patterns” command only,
and then the previous information will be restored.
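For example, the first two methods above might appear in a session as follows (the group and file names are illustrative):
// in SETUP mode, load timing along with the scan group definition
add scan groups grp1 group1.testproc
// after leaving SETUP mode, merge in a procedure file with updated timing
read procfile group1_timing.testproc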
After you have used “Write Procfile -Full” to generate a procedure file, you can examine the procedure file and modify the timeplates with new timing if necessary. Timing changes to the existing timeplates cannot change the event order of the timeplate used for scan procedures; the times may change, but the event order must be maintained.
In the following example, there are two events happening at time 20, and both are listed as event
4. These may be skewed, but they may not interfere with any other event. The events must stay
in the order listed in the comments:
force_pi 0; // event 1
bidi_force_pi 12; // event 3
measure_po 31; // event 7
bidi_measure_po 32; // event 8
force InPin 9; // event 2
measure OutPin 35; // event 9
pulse Clk1 20 5; // event 4 & 5 respectively
pulse Clk2 20 10; // event 4 & 6 respectively
period 50; // no events but all events
// have to happen in period
A test procedure file has the following overall structure:
[set_statement ...]
[alias_definition]
timeplate_definition [timeplate_definition]
procedure_definition [procedure_definition]
The timeplate definition describes a single tester cycle and specifies where in that cycle all
event edges are placed. You must define all timeplates before they are referenced. A procedure
file must have at least one timeplate definition. The timeplate definition has the following
format:
timeplate timeplate_name =
timeplate_statement
[timeplate_statement ...]
period time;
end;
The following list describes the available timeplate_statement entries. The timeplate definition
should contain at least the force_pi and measure_po statements.
Note
You are not required to include pulse statements for the clocks. But if you do not “pulse”
a clock, the Vector Interfaces code uses two cycles to pulse it, resulting in larger patterns.
timeplate_statement:
offstate pin_name off_state;
force_pi time;
bidi_force_pi time;
measure_po time;
bidi_measure_po time;
force pin_name time;
measure pin_name time;
pulse pin_name time width;
• timeplate_name
A string that specifies the name of the timeplate.
• offstate pin_name off_state
A literal and double string that specifies the inactive, off-state value (0 or 1) for a
specific named pin that is not defined as a clock pin by the Add Clocks command. This
statement must occur before all other timeplate_statement statements. This statement is
only needed for a pin that is not defined as a clock pin by the “Add Clocks” command
but will be pulsed within this timeplate.
• force_pi time
A literal and string pair that specifies the force time for all primary inputs.
• bidi_force_pi time
A literal and string pair that specifies the force time for all bidirectional pins. This
statement allows the bidirectional pins to be forced after applying the tri-state control
signal, so the system avoids bus contention. This statement overrides the “force_pi” time for bidirectional pins.
• measure_po time
A literal and string pair that specifies the time at which the tool measures (or strobes) the
primary outputs.
• bidi_measure_po time
A literal and string pair that specifies the time at which the tool measures (or strobes) the
bidirectional pins. This statement overrides the “measure_po” time for bidirectional pins.
• force pin_name time
A literal and double string that specifies the force time for a specific named pin.
Note
This force time overrides the force time specified in force_pi for this specific pin.
• measure pin_name time
A literal and double string that specifies the measure time for a specific named pin.
Note
This measure time overrides the measure time specified in measure_po for this specific
pin.
Example 1
timeplate tp1 =
force_pi 0;
pulse T 30 30;
pulse R 30 30;
measure_po 90;
period 100;
end;
Example 2
The following example shows a shift procedure that pulses b_clk with an off-state value of 0.
The timeplate tp_shift defines the off-state for pin b_clk. The b_clk pin is not declared as a
clock in the ATPG tool.
timeplate tp_shift =
offstate b_clk 0;
force_pi 0;
measure_po 10;
pulse clk 50 30;
pulse b_clk 140 50;
period 200;
end;
procedure shift =
timeplate tp_shift;
cycle =
force_sci;
measure_sco;
pulse clk;
pulse b_clk;
end;
end;
• Generating basic test pattern data formats: FastScan Text, FlexTest Text, Lsim, Verilog,
VHDL, WGL (ASCII and binary), and Zycad.
• Generating ASIC Vendor test data formats (with the purchase of the ASIC Vector
Interfaces option): TDL 91, Compass, FTDL-E, UTIC, MITDL, TSTL2, and LSITDL.
• Supporting parallel load of scan cells (in Verilog format).
• Reading in external input patterns and output responses, and directly translating to one
of the formats.
• Reading in external input patterns, performing good or faulty machine simulation to
generate output responses, and then translating to any of the formats.
• Writing out just a subset of patterns in any test data format.
• Facilitating failure analysis by having the test data files cross-reference information
between tester cycle numbers and FastScan/FlexTest pattern numbers.
• Supporting differential scan input pins for each simulation data format.
The primary advantage of simulating serial loading is that it emulates how patterns are loaded
on the tester. You thus obtain a very realistic indication of circuit operation. The disadvantage is
that for each pattern, you must clock the scan chain registers at least as many times as you have
scan cells in the longest chain. For large designs, simulating serial loading takes an extremely
long time to process a full set of patterns.
The primary advantage of simulating parallel loading of the scan chains is it greatly reduces
simulation time compared to serial loading. You can directly (in parallel) load the simulation
model with the necessary test pattern values because you have access, in the simulator, to
internal nodes in the design. Parallel loading makes it practical for you to perform timing
simulations for the entire pattern set in a reasonable time using popular simulators like
ModelSim that utilize Verilog and VHDL formats.
After the parallel load, you apply the shift procedure one or more times (depending on the number of scan cells in the longest subchain; usually once is enough) to load the scan-in values into the sub-chains. Simulating the shift procedure only a few times can dramatically improve timing
simulation performance. You can then observe the scan-out value at the scan output pin of each
sub-chain.
Parallel loading ensures that all memory elements in the scan sub-chains achieve the same states
as when serially loaded. Also, this technique is independent of the scan design style or type of
scan cells the design uses. Moreover, when writing patterns using parallel loading, you do not
have to specify the mapping of the memory elements in a sub-chain between the timing
simulator and FastScan or FlexTest. This method does not constrain library model development
for scan cells.
Note
When your design contains at least one stable-high scan cell, the shift procedure period
must exceed the shift clock off time. If the shift procedure period is less than or equal to
the shift clock off time, you may encounter timing violations during simulation. The test
pattern formatter checks for this condition and issues an appropriate error message when
it encounters a violation.
For example, the test pattern timing checker would issue an error message when reading in the
following shift procedure and its corresponding timeplate:
timeplate gen_tp1 =
force_pi 0;
measure_po 100;
pulse CLK 200 100;
period 300; // Period same as shift clock off time
end;
procedure shift =
scan_group grp1;
timeplate gen_tp1;
cycle =
force_sci;
measure_sco;
pulse CLK; // Force shift clock on and off
end;
end;
Increasing the period so that it exceeds the shift clock off time resolves the error:
timeplate gen_tp1 =
force_pi 0;
measure_po 100;
pulse CLK 200 100;
period 400; // Period greater than shift clock off time
end;
Note
Using the -Start and -End switches limits file size as well, but the portion of internal
patterns saved does not provide a very reliable indication of pattern characteristics when
simulated. Sampled patterns more closely approximate the results you would obtain from
the entire pattern set.
After performing initial verification with parallel loading, you can use a sampled pattern set for simulating serial loading until you are satisfied that test coverage is reasonably close to the desired specification. Then, perform a serial loading simulation with the unsampled pattern set only once, as your last verification step.
Note
The Set Pattern Filtering command serves a similar purpose to the -Sample switch of the
Save Patterns command. The Set Pattern Filtering command creates a temporary set of
sampled patterns within the tool.
Several ASIC test pattern data formats support IDDQ testing. There are special IDDQ
measurement constructs in TDL 91 (Texas Instruments), MITDL (Mitsubishi), UTIC
(Motorola), TSTL2 (Toshiba), and FTDL-E (Fujitsu). The tools add these constructs to the test
data files. All other formats (WGL, Verilog, VHDL, Compass, Lsim, and LSITDL) represent
these statements as comments.
For FastScan
SAVe PAtterns pattern_filename [-Replace] [format_switch]
[{proc_filename -PRocfile} [-NAme_match | -POsition_match]
[-PARAMeter param_filename]] [-PARALlel | -Serial] [-EXternal]
[-NOInitialization] [-BEgin {pattern_number | pattern_name}]
[-END {pattern_number | pattern_name}] [-TAg tag_name]
[-CEll_placement {Bottom | Top | None}] [-ENVironment] [-One_setup]
[-ALl_test | -CHain_test | -SCan_test] [-NOPadding | -PAD0 | -PAD1] [-Noz]
[-MAP mapping_file] [-PATtern_size integer] [-MAxloads load_number] [-MEMory_size
size_in_KB] [-SCAn_memory_size size_in_KB]
[-SAmple [integer]] [-IDDQ_file] [-DEBug [-Lib work_dir]]
[-MODE_Internal | -MODE_External]
For FlexTest
SAVe PAtterns filename [format_switch] [-EXternal] [-CHain_test | -CYcle_test
| -ALl_test] [-BEgin begin_number] [-END end_number]
[-CEll_placement {Bottom | Top | None}] [proc_filename -PROcfile]
[-PAttern_size integer] [-Serial | -Parallel] [-Noz]
[-NOInitialization] [-NOPadding | -PAD0 | -PAD1] [-Replace] [-One_setup]
For more information on this command and its options, see Save Patterns in the ATPG Tools
Reference Manual.
The basic test data formats include FastScan text, FlexTest text, FastScan binary, Verilog,
VHDL, Lsim, WGL (ASCII and binary), and Zycad. The test pattern formatter can write any of
these formats as part of the standard FastScan and FlexTest packages—you do not have to buy a
separate option. You can use these formats for timing simulation.
FastScan Text
This is the default format that FastScan generates when you run the Save Patterns command.
This is one of only two formats (the other being FastScan binary format) that FastScan can read
back in, so you should generate a pattern file in either this or binary format to save intermediate
results.
This format contains test pattern data in a text-based parallel format, along with pattern
boundary specifications. The main pattern block calls the appropriate test procedures, while the
header contains test coverage statistics and the necessary environment variable settings. This
format also contains each of the scan test procedures, as well as information about each scan
memory element in the design.
To create a basic FastScan text format file, enter the following at the application command line:
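save patterns filename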
The formatter writes the complete test data to the file named filename.
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
Note
This pattern format does not contain explicit timing information. For more information
on this test pattern format, refer to the “Test Pattern File Formats” chapter in the ATPG
Tools Reference Manual.
FlexTest Text
This is the default format that FlexTest generates when you run the Save Patterns command.
This is one of only two formats (the other being FlexTest table format) that FlexTest can read
back in, so you should always generate a pattern file in this format to save intermediate results.
This format contains test pattern data in a text-based parallel format, along with cycle boundary
specifications. The main pattern block calls the appropriate test procedures, while the header
contains test coverage statistics and the necessary environment variable settings. This format
also contains each of the scan test procedures, as well as information about each scan memory
element in the design.
To create a FlexTest text format file, enter the following at the application command line:
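Because this is FlexTest's default format, no format switch is needed; a minimal form of the command (filename is the output file) is:

save patterns filename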
The formatter writes the complete test data to the file named filename.
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
Note
This pattern format does not contain explicit timing information. For more information
on this test pattern format, refer to the “Test Pattern File Formats” chapter in the ATPG
Tools Reference Manual.
You often need to correlate test data written in other pattern formats with the data in the text
format for debugging purposes. This section provides the detailed information necessary for this task.
Often, the first cycle in a test set must perform certain tasks. The first test cycle in all test data
formats turns off the clocks at all clock pins, drives Z on all bidirectional pins, drives an X on all
other input pins, and disables measurement at any primary output pins.
The FastScan and FlexTest test pattern sets can contain two main parts: the chain test block, to
detect faults in the scan chain, and the scan test or cycle test block, to detect other system faults.
The test procedure file applies each event in a test procedure at the specified time. Each test
procedure corresponds to one or more test cycles. Each test procedure can have a test cycle with
a different timing definition. By default, all events use a timescale of 1 ns.
Note
If you specify a capture clock with the FastScan Set Capture Clock command, the test
pattern formatter does not produce the chain test block. For example, the formatter does
not produce a chain test block for IEEE 1149.1 devices in which you specify a capture
clock during FastScan setup.
Each event has a sequence number within the test cycle. The sequence number’s default time
scale is 1 ns.
Unloading of the scan chains for the current pattern occurs concurrently with the loading of
scan chains for the next pattern. Therefore the last pattern in the test set contains an extra
application of the load_unload sequence.
More complex scan styles (for example, LSSD) use master_observe and skewed_load
procedures in the pattern. For designs with sequential controllers, like boundary scan designs,
each test procedure may have several test cycles in it to operate the sequential scan controller.
Some pattern types (for example, RAM sequential and clock sequential types) are more
complex than the basic patterns. RAM sequential patterns involve multiple loads of the scan
chains and multiple applications of the RAM write clock. Clock sequential patterns involve
multiple capture cycles after loading the scan chains. Another special type of pattern is the
clock_po pattern. In these patterns, clocks may be held active throughout the test cycle, without
applying capture clocks.
If the test data format supports only a single timing definition, FastScan cannot save both
clock_po and non-clock_po patterns in one pattern set. This is because the tester cannot
reproduce one clock waveform that meets the requirements of both types of patterns. Each
pattern type (combinational, clock_po, ram_sequential, and clock_sequential) can have a
separate timing definition.
Using FlexTest, you can completely define the number of timeframes and the sequence of
events in each test cycle. Each timeframe in a test cycle has a force event and a measure event.
Therefore, each event in a test cycle has a sequence number associated with it. The sequence
number’s default time scale is 1 ns.
Unloading of the scan chains for the current pattern occurs concurrently with the loading of
scan chains for the next pattern. For designs with sequential controllers, like boundary scan
designs, each test procedure may contain several test cycles that operate the sequential scan
controller.
General Considerations
During a test procedure, you may leave many pins unspecified. Unspecified primary input pins
retain their previous state. FlexTest does not measure unspecified primary output pins; it drives
Z on, and does not measure, unspecified bidirectional pins. This prevents bus contention at
bidirectional pins.
Note
If you run ATPG after setting pin constraints, you should also ensure that you set these
pins to their constrained states at the end of the test_setup procedure. The Add Pin
Constraints command constrains pins for the non-scan cycles, not the test procedures. If
you do not properly constrain the pins within the test_setup procedure, the tool does it
for you, internally adding the extra force events after the test_setup procedure. This
increases the period of the test_setup procedure by one time unit. This increased period
can conflict with the test cycle period, potentially forcing you to re-run ATPG with the
modified test procedure file.
All test data formats contain comment lines that indicate the beginning of each test block and
each test pattern. You can use these comments to correlate the test data in the FastScan and
FlexTest text formats with other test data formats.
These comment lines also contain the cycle count and the loop count, which help correlate tester
pattern data with the original test pattern data. The cycle count represents the number of test
cycles, with the shift sequence counted as one cycle. The loop count represents the number of
all test cycles, including the shift cycles. The cycle count is useful if the tester has a separate
memory buffer for scan patterns; otherwise, the loop count is more relevant.
Note
The cycle count and loop count contain information for all test cycles—including the test
cycles corresponding to test procedures. You can use this information to correlate tester
failures to a FastScan pattern or FlexTest cycle for fault diagnosis.
To create a FastScan binary format file, enter the following at the FastScan command line:
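A likely form of the command (the -BInary format switch name is an assumption; confirm it in the Save Patterns description) is:

save patterns filename -binary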
FastScan writes the complete test data to the file named filename.
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
Verilog
This format contains test pattern data and timing information in a text-based format readable by
both the Verilog and Verifault simulators. This format also supports both serial and parallel
loading of scan cells. The Verilog format supports all FastScan and FlexTest timing definitions,
because Verilog stimulus is a sequence of timed events.
To generate a basic Verilog format test pattern file, use the following arguments with the Save
Patterns command:
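A likely form (the -Verilog format switch is an assumption; -PARALlel or -Serial selects the scan-load style, as shown in the command syntax earlier) is:

save patterns filename -verilog -parallel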
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
For more information on the Verilog format, refer to the Verilog-XL Reference Manual,
available through Cadence Design Systems.
VHDL
The VHDL interface supports both a serial and parallel test bench.
The serial test bench is almost identical to the Verilog serial test bench. It consists of a top level
module which declares an input bus, an output bus, and an expected output bus. The module
also instantiates the device under test and connects these buses to the device. The rest of the test
bench then consists of assignment statements to the input bus, and calls to a compare procedure
to check the results of the output bus.
The parallel test bench is similar to the serial test bench in how it applies patterns to the primary
inputs and observes results from the primary outputs. However, the VHDL language does not,
at this time, support any way to force and observe values on internal nodes below the top level
of hierarchy. Because of this, it is necessary to create a second file, which is a simulator specific
dofile that uses simulator commands to force and observe values on the internal scan cell. This
dofile runs in sync with the test bench file by using run commands to simulate the test bench
and device under test for certain time periods.
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
Some test data flows verify patterns by translating WGL to stimulus and response files for use
by the chip foundry’s golden simulator. Sometimes this translation process uses its own parallel
loading scheme, called memory-to-memory mapping, for scan simulation. In this scheme, each
scan memory element in the ATPG model must have the same name as the corresponding
memory element in the simulation model. Due to the limitations of this parallel loading scheme,
you should ensure the following:
• There is only one scan cell for each DFT library model (also called a scan subchain).
• The hierarchical scan cell names in the netlist and DFT library match those of the golden
simulator (because the scan cell names in the ATPG model appear in the scan section of the
parallel WGL output).
• The scan-in and scan-out pin names of all scan cells are the same.
To generate a basic WGL format test pattern file, use the following arguments with the Save
Patterns command:
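A likely form (the -Wgl format switch name is an assumption; confirm it in the Save Patterns description) is:

save patterns filename -wgl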
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
For more information on the WGL format, contact Integrated Measurement Systems, Inc.
For more information on the STIL format, refer to the IEEE Standard Test Interface Language
(STIL) for Digital Test Vector Data, IEEE Std. 1450-1999.
Zycad
You can use Zycad format patterns to verify ATPG patterns on the Zycad hardware-accelerated
timing and fault simulator. Zycad patterns do not have any special constructs for scan.
Currently, the test pattern formatter creates only serial format Zycad patterns.
Zycad patterns consist of two sections: the first section defines all design pins, and the second
section defines all pin values at any time in which at least one pin changes.
To generate a basic Zycad format test pattern file, use the following arguments with the Save
Patterns command:
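A likely form (the -Zycad format switch name is an assumption) is:

save patterns filename -zycad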
A comment line in Zycad format includes the pattern number, cycle number, and loop number
information of a pattern. At the user’s request, the simulation time is also provided in the
comment line.
All the ASIC vendor data formats are text-based and load data into scan cells in a parallel
manner. Also, ASIC vendors usually impose several restrictions on pattern timing. Most ASIC
vendor pattern formats support only a single timing definition. Refer to your ASIC vendor for
test pattern formatting and other requirements.
The following subsections briefly describe the ASIC vendor pattern formats.
TI TDL 91
This format contains test pattern data in a text-based format.
Currently, FastScan and FlexTest support features of TDL 91 version 3.0. However, when using
the enhanced AVI output, FastScan and FlexTest can support features of TDL 91 version 6.0.
The version 3.0 format supports multiple scan chains, but allows only a single timing definition
for all test cycles. Thus, all test cycles must use the timing of the main capture cycle. TI’s ASIC
division imposes the additional restriction that comparison should always be done at the end of
a tester cycle.
To generate a basic TI TDL 91 format test pattern file, use the following arguments with the
Save Patterns command:
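A likely form (the -TItdl format switch name is an assumption; confirm it in the Save Patterns description) is:

save patterns filename -titdl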
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
Compass Scan
This format contains test pattern data in a text-based format. To generate a basic Compass
format test pattern file, use the following arguments with the Save Patterns command:
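A likely form (the -COmpass format switch name is an assumption) is:

save patterns filename -compass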
For more information on the Compass Scan format, refer to the Vector Reference Manual,
available through Compass Design Automation.
Fujitsu FTDL-E
This format contains test pattern data in a text-based format. The FTDL-E format splits test data
into patterns that measure 1 or 0 values, and patterns that measure Z values. The test patterns
divide into test blocks that each contain 64K tester cycles.
To generate a basic FTDL-E format test pattern file, use the following arguments with the Save
Patterns command:
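A likely form (the -FJtdl format switch name is an assumption) is:

save patterns filename -fjtdl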
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
For more information on the Fujitsu FTDL-E format, refer to the FTDL-E User's Manual for
CMOS Channel-less Gate Array, available through Fujitsu Microelectronics.
Motorola UTIC
This format contains test pattern data in a text-based format. It supports multiple scan chains,
but allows only two timing definitions. One timing definition is for scan shift cycles and one is
for all other cycles. When saving patterns, the formatter does not check the shift procedure for
timing rules. You must ensure that all the non-scan cycle timing and the test procedures (except
for the shift procedure) have compatible timing. This format also supports the use of differential
scan pins.
Because Universal Test Interface Code (UTIC) supports only two timing definitions, one for the
shift cycle and one for all other test cycles, all test cycles except the shift cycle must use the
timing of the main capture cycle. If you do not check for compatible timing, the resulting test
data may have incorrect timing.
To generate a basic Motorola UTIC format test pattern file, use the following arguments with
the Save Patterns command:
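A likely form (the -UTic format switch name is an assumption) is:

save patterns filename -utic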
Some test data verification flows do pattern verification by translating UTIC (via Motorola
ASIC tools) into stimulus and response files for use by the chip factory’s golden simulator.
Sometimes this translation process uses its own parallel loading scheme, called memory-to-
memory mapping, for scan simulation. In this scheme, each scan memory element in the ATPG
model must have the same name as the corresponding memory element in the simulation model.
Due to the limitations of this parallel loading scheme, you should ensure that the hierarchical
scan cell names in the netlist and DFT library match those of the golden simulator. This is
because the scan cell names in the ATPG model appear in the scan section of the parallel UTIC
output.
For more information on the Motorola UTIC format, refer to the Universal Test Interface Code
Language Description, available through Motorola Semiconductor Products Sector.
Mitsubishi TDL
This format contains test pattern data in a text-based format. To generate a basic Mitsubishi Test
Description Language (TDL) format test pattern file, use the following arguments with the Save
Patterns command:
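A likely form (the -MItdl format switch name is an assumption) is:

save patterns filename -mitdl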
For more information on the Save Patterns command and its options, see Save Patterns in the
ATPG Tools Reference Manual.
For more information on Mitsubishi's TDL format, refer to the TD File Format document,
produced by Hiroshi Tanaka at Mitsubishi Electric Corporation.
Toshiba TSTL2
This format contains test pattern data in a text-based format; the test pattern data files also
contain timing information. This format supports multiple scan chains, but allows only a single
timing definition for all test cycles. TSTL2 represents all scan data in a parallel format.
To generate a basic Toshiba TSTL2 format test pattern file, use the following arguments with
the Save Patterns command:
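A likely form (the -TSTl2 format switch name is an assumption) is:

save patterns filename -tstl2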
For more information about the Toshiba TSTL2 format, refer to Toshiba ASIC Design Manual
TDL, TSTL2, ROM data (document ID: EJFB2AA), available through the Toshiba Corporation.
This chapter discusses running chip failure diagnostics.
You can use FastScan to diagnose chip failures during the ASIC testing process.
Note
FlexTest does not provide this capability.
You can use fault diagnosis on chips that fail during the application of the scan test patterns to
narrow down the search for faults to localized areas, given the actual response of a faulty circuit
to a test pattern set.
You perform a diagnosis by first collecting the full set of failing pattern data from the tester.
The FastScan Diagnose Failures command utilizes this data during fault simulation to
determine the set of faults whose simulated failures most closely match the actual failures. The
more data (failing patterns) FastScan has to draw from, the more accurate the diagnosis.
Because uncompressed patterns may have slightly better isolation, if you intend to perform fault
diagnosis, you may not want to compress the pattern set when you run ATPG with FastScan.
Note
If you want to break up patterns, you must divide the tester pattern file and the ASCII
pattern file in the same way. For example, if you use the -Begin and -End switches when
you save the pattern files with the Save Patterns command, be sure to specify the same
pattern numbers when saving ASCII patterns. This ensures that, if failure occurs, the
tester pattern file is associated with the correct ASCII pattern file.
Compared to the standard fault dictionary approach, post-test fault simulation (which considers
all failing patterns) not only improves precision but also provides the capability to diagnose
non-stuck fault defects and multiple defects. The ability to precisely identify a fault site depends
on the faults associated with a single fault equivalence class. FastScan achieves this level of
precision for most defects that behave as stuck-at faults.
If your test patterns include a chain test, the ATE failure output will indicate if the chain test
fails. You should then direct FastScan to perform a chain diagnosis on the scan test fail data
by using the -Chain switch with the Diagnose Failures command. If you include the chain test
fail data in the diagnosis input, the -Chain
switch is unnecessary; FastScan will perform a chain diagnosis by default, rather than its
“normal” diagnosis. Instead of reporting a fault site, chain diagnosis reports the last scan cell in
each chain that appears to unload in a plausible way.
FastScan performs diagnosis by looking at the actual values unloaded from the scan cells. This
is achieved by XOR-ing the fail data with the expected data. The tool assumes that a chain
failure will cause constant data to be shifted out past the fault site. The diagnosis is performed
by looking for the scan cell nearest scan-in that unloads constant data. Assuming that, over a
few patterns, every cell at some time captures both a zero and a one, this provides a way to
localize the fault site.
Depending on the degree to which the defect behaves like a stuck-at fault, the diagnosis
categorizes it into one of the following three defect classes:
• SSF Single Site Defects
Diagnosis for this fault class identifies a single defect that fully explains both failing and
passing pattern results. Examples of defects in this class include open lines in bipolar
chips and cell defects that cause an output to remain at a constant value.
• Non-SSF Single Site Defects
Defects in this class do not always behave like stuck-at faults, but the source of all
failures is a single defect site. The stuck-at fault associated with the defect site explains
all failing patterns, but can cause some passing patterns to fail. FastScan cannot use
passing patterns to resolve between fault candidates, which degrades the precision
of the diagnosis.
Diagnosis for this fault class identifies a single defect that fully explains all of the failing
patterns. However, FastScan issues a warning message indicating the fault candidate
causes passing patterns to fail. Examples of defects in this class include AC defects,
CMOS opens, and intermittent defects.
• Non-SSF Multiple Site Defects
Defects in this class require more than one stuck-at fault to explain all failures. In
diagnosing these defects, FastScan assumes that a single fault explains all single pattern
failures. The diagnosis identifies faults that explain the first failing pattern and, in
addition, provide the best match for all of the failures. FastScan then eliminates the
explained failing patterns from further consideration and repeats the process for the
remaining failures. FastScan records patterns that it cannot explain by any one stuck
fault and then continues diagnosis on the next unexplained failure.
Diagnosis for this fault class identifies multiple defects; however, it may not explain all
failing patterns. Examples of defects in this class include shorts and any combination of
defects in the first two classes.
Note
Because FastScan creates patterns for the transition fault model using a stuck-at method,
performing a diagnostics run is equally simple for either pattern type. Basically, you read
in the patterns, ensure the fault type is specified as “stuck” and then enter the Diagnose
Failures command, as detailed in the section, “Performing a Diagnosis.”
You can use this failure file as input to the Diagnose Failures command, which identifies the
most likely cause of the failures.
If the file does not include all failing patterns, you must identify the last pattern applied. The file
must include the failing output measurements of all failing patterns up to that point.
It is important that this file contain all observed failures for a given pattern. Because of the scan
output's serial nature, it is easy to truncate the list of failures at a point that is not on a pattern
boundary, which hinders diagnostic resolution. Providing the tool with as many failures as
possible allows maximum resolution of the diagnosis.
The failure information must track failing patterns using the same ordering and numbering as
the original pattern set. For example, if a failure occurs at the tester on the scan chain while
unloading a particular pattern, pattern N, the failing pattern is actually pattern N-1. This is
because each current scan pattern unloads the captured values from the previous scan pattern.
In this case, you would need to reduce the number of the failing pattern in the failure file from N
to N-1 to align with the pattern number in the original pattern set.
Note
The following situation, although rare, also causes scan chain cell alignment problems in
diagnostics: If a load_unload procedure includes n shift clocks prior to calling the shift
procedure, the scan out values will appear to be off by n cycles. The failure file must
account for this by adjusting the cell value shifted out by n, or determining which cell
shifted out fails, starting from the last shift of the unload.
• The keyword “scan test” is the initial default until the keyword “chain test” is read.
These two keywords are optional and must each appear on a line by itself, with the failure
data following. The “chain test” keyword has no effect unless it is followed by scan
chain failure data, in which case it has the same effect as the Diagnose Failures -Chain
command.
Note
If you use the -Chain switch, you can avoid having to provide chain test failure
information in the failure file. This will conserve tester memory.
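The lines below illustrate the failure data format; each scan test entry appears to list a failing pattern number and the failing output pin, while each chain test entry lists a pattern number, the chain name, and the failing cell position: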
chain test
scan test
10 output17
10 output29
10 chain1 314
10 chain3 75
195 output29
311 chain2 0
Performing a Diagnosis
Figure 8-1 gives a pictorial representation of the chip testing and diagnostic process.
[Figure 8-1: The chip testing and diagnostic flow. The netlist, library, dofile, setup, and test
procedure file feed test generation (FastScan/FlexTest), which produces test vectors in the
vendor format; the ATE tests the chip and produces a failure file.]
The following list provides a basic process for performing failure diagnosis within a FastScan
session (from either the Atpg, Fault, or Good system mode):
1. Use the Save Flattened Model command to save the flattened netlist used in the original
ATPG run. Mentor Graphics highly recommends that you perform diagnostics on the
same flattened netlist. The design will load faster and rules checking also will be much
faster. Most importantly, the flattened design will contain all setup information,
including simulation switches, you used when you generated the original patterns.
Note
If you load the standard netlist and do not set up the same switches used in the original
ATPG run, the diagnostic results may be bogus. There is one exception for transition
patterns: you use “set fault type transition” when generating them, but you must use “set
fault type stuck” when performing a diagnostics run on them.
2. Prior to running a diagnosis, you must store the failing pattern data in a file in the proper
format. “Creating the Failure File” on page 8-3 describes the format of this file.
3. Set the pattern source to external and specify the test pattern file name (pattern_file).
ATPG> SET PAttern Source external pattern_file
4. Enter the Diagnose Failures command, identifying the failure file (fails_file), and the
last pattern used from the pattern file (in this case, pattern number 284), if you did not
wish to apply all patterns.
ATPG> DIAgnose FAilures fails_file -last 284
After you use the Diagnose Failures command to write the list of fault candidates (the
diagnostic report) to a file, the next step is to map the netlist locations identified in the report to
actual locations on the failing chip. Typically, failure analysis laboratories perform this
mapping and then validate the failure sites by visual or x-ray analysis.
Because the physical layout environment uses a layout database and models quite different from
the HDL in which the netlist is written, the mapping process can be very time-consuming if
done manually. It is greatly simplified if automated by use of a layout viewing tool such as the
Calibre DESIGNrev tool described in the next section.
As mentioned in the preceding section, you can easily view the fault candidates listed in a
FastScan diagnostics report on the physical layout in the Calibre DESIGNrev tool included with
Calibre 2004.2 and later releases. Figure 8-2 shows where this added step (highlighted in bold)
fits in the FastScan diagnostics flow shown in Figure 8-1.
[Figure 8-2: The diagnostics flow of Figure 8-1 with a GDS layout database added for the
layout-viewing step.]
Calibre DESIGNrev is one of several tools in the Calibre verification toolset. This toolset is
described in the Calibre Verification User’s Manual. For detailed information about Calibre
DESIGNrev, refer to the Calibre DESIGNrev User’s Manual. Following is a brief overview of
the use of Calibre DESIGNrev to view candidate fault sites, assuming as inputs a gate-level
Verilog netlist, applicable Calibre layout data and files, and a FastScan diagnostics report.
1. Set MGC_HOME to the location of the Calibre software and be sure your PATH
contains $MGC_HOME/bin.
2. Invoke Calibre DESIGNrev:
calibredrv
3. From the DESIGNrev main menu, choose File > Open Layout and load in the GDS
layout database as illustrated in Figure 8-3.
Figure 8-3. Loading the GDS Layout Database
4. From the DESIGNrev main menu, choose Tools > Calibre Interactive to display the
Calibre Interactive server window shown in Figure 8-4. This window lets you specify
which Calibre application to invoke. Select Calibre RVE Options and Multi-layer
Highlights as shown and click Run. This brings up the Calibre RVE startup window
illustrated in Figure 8-5.
5. In the Calibre RVE startup window, verify the correct Database path is shown and that
the Database Type selected is LVS; then click Open to invoke Calibre RVE and load the
LVS Query database. It is assumed that Calibre LVS has been run on the design and that
a clean LVS Query database already exists. To learn more about this database and how it
is created, refer to the “Hierarchical Query Database” section of the Verification User’s
Manual and to the description of the Mask SVDB Directory statement in the SVRF
Manual. Both manuals are part of the Calibre Documentation set.
6. When the Calibre - LVS RVE window comes up, choose the File > FastScan Report
menu item as shown in Figure 8-6, and open the diagnostics report you generated
previously in FastScan.
The defect locations listed in the diagnostics report show up as active links in the
FastScan Diagnostics Report window. For the Verilog netlist used in this example, the
defect locations are Verilog pin pathnames.
Click a link in this window and the trace corresponding to the net connected to that pin
is highlighted on the layout. You can also right click to get a menu of other display and
information options, such as for displaying the cell itself or for obtaining information
about the pathname.
Figure 8-7 shows the trace highlighted by simply clicking the first link in this example.
Figure 8-7. Layout View of the Net Connected to a Candidate Fault Site
This appendix contains a brief guide to usage and introductory lab exercises to run with
DFTAdvisor and FastScan. It is intended to familiarize new users with the operation of these
two products in an ATPG flow. The appendix does not provide full details for running these tools,
but rather, contains enough information to help you get started. It also introduces the various
information sources available to users of Mentor Graphics DFT tools.
Included in the DFT software package is a directory containing tutorial design data. The next
section describes how to access and prepare the tutorial data from that directory.
Note
This procedure requires that the MGC tree contain the training package “atpgng” as
source for the training data you will copy. The path to this object is:
$MGC_HOME/shared/training/atpg003ng. If this object does not exist, you (or your site
administrator) need to install this training package (ATPG Getting Started) before proceeding.
The procedure for installing training packages is contained in the workstation-specific
MGC software installation manual.
The procedure assumes this training package has been properly installed.
5. In the shell you will use for the examples, specify a pathname for environment variable
ATPGNG that points to your local copy of the tutorial data. For example, in a C shell,
use:
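A command of the following form would work (the path shown is a placeholder for the location of your own copy of the tutorial data):

setenv ATPGNG /path/to/your/tutorial_data_copy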
[Figure A-1: The basic design and test flow. Design requirements, RTL coding, and synthesis
produce a gate-level netlist (combinational logic with D flip-flops); DRC and scan insertion
convert the flip-flops to scan cells (sci/sen, sc_in/sc_out, sc_en) in the ATPG netlist; FastScan
then generates test patterns for the ATE.]
Figure A-2 shows a more detailed breakdown of the basic tool flow and the commands you
typically use to insert scan and perform ATPG. Following that is a brief lab exercise in which
you run DFTAdvisor and FastScan using these commands. The goal of the exercise is to expose
you to the tools and demonstrate the ease with which you can start using them.
[Figure A-2: Detailed tool flow with typical commands. The flow takes a netlist and library
through setup (for example, SETUP> add pin constraints and SETUP> analyze control signals
-auto_fix), non-scan DFT handling, and scan/test logic configuration (typically use defaults),
and ends with test patterns.]
Running DFTAdvisor
The following is an example dofile for inserting scan chains with DFTAdvisor. The commands
required for a typical run are shown in bold font. In this part of the lab exercise, you will invoke
DFTAdvisor and insert scan into a gate level netlist using just these commands. When starting
out, be sure you learn the purpose of these commands.
A few other commands, commented out with double slashes (//), are included to pique your
curiosity but are not required for a typical run. You can find out more about any of these
commands in Chapter 5 of this manual and/or in the DFTAdvisor Reference Manual.
Note
The dofile dfta_dofile_template.do, included in the training data, contains additional
commands and explanations. It is provided for use as a starting point to develop your own
custom dofiles.
// dfta_dofile.do
//
// DFTAdvisor dofile to insert scan chains.
// Enable test logic insertion for control of clocks, sets, & resets
//set test logic -clock on -reset on -set on
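For reference, a representative sequence of the required commands for a run like this one (the clock name, scan options, and output names are assumptions rather than the exact tutorial values) is:

// Define the clock, check the design, and insert scan.
add clocks 0 clk
set system mode dft
run
insert test logic -scan on
report scan chains
// Write the scan-inserted netlist and the ATPG setup files for FastScan.
write netlist dfta_out/pipe_scan.v -verilog -replace
write atpg setup dfta_out/pipe -replace
exit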
You can run the entire session as a batch process with the preceding dofile, as described next, or
you can manually enter each of the dofile commands on the tool’s command line in the
command line window (Figure 1-3 on page 1-8). Choose one of these methods:
Note
DFTAdvisor requires a gate-level netlist as input.
DFTAdvisor does not alter the original netlist when it inserts test logic. The tool creates
an internal copy of the original netlist, then makes all required modifications to this copy.
When finished, the tool writes out the modified copy as a new scan-inserted gate-level
netlist.
1. Enter the following shell command to invoke DFTAdvisor on the tutorial design (a gate
level netlist in Verilog) and run the dofile:
$MGC_HOME/bin/dftadvisor pipe_noscan.v -verilog \
-lib adk.atpg -dofile dfta_dofile.do \
-log dfta_out/logfile -replace
2. Alternatively, enter the following shell command to invoke the tool on the tutorial
netlist, ready for you to begin entering DFTAdvisor commands:
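Based on the invocation in step 1, the form without the dofile argument would be similar to:

$MGC_HOME/bin/dftadvisor pipe_noscan.v -verilog \
-lib adk.atpg -log dfta_out/logfile -replace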
When the Command Line Window appears, enter each command shown in bold font in
the preceding dofile in the same order it occurs in the dofile. Review the transcript for
each command as you enter it and try to understand the purpose of each command.
After the tool finishes its run and exits, you can review the command transcript in the logfile at
$ATPGNG/dfta_out/logfile. The transcript will help you understand what each command does.
Running FastScan
Note
Because DFTAdvisor creates files needed by FastScan, you must complete the preceding
section, “Running DFTAdvisor” before you perform the steps in this section.
Next, you will generate test vectors for the scan-inserted design. Figure A-4 shows the example
dofile you will use for generating test vectors with FastScan. As in the preceding section,
commands required for a typical run are shown in bold font. This dofile also contains examples
of other FastScan commands that modify some aspect of the pattern generation process (the
commands that are commented out). They are included to illustrate a few of the customizations
you can use with FastScan, but are not required for a typical run. You can find detailed
information about each of these commands in Chapter 6 of the Scan and ATPG Process Guide
(this manual) and/or in the ATPG Tools Reference Manual.
Note
The dofile fs_dofile_template.do, included in the training data, contains additional
commands and explanations. It is provided for use as a starting point to develop your own
custom dofiles.
// fs_dofile.do
//
// FastScan dofile to generate test vectors.
// Create reports.
//report statistics
//report testability data -class au
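For reference, a representative sequence of the required commands for a run like this one (the scan group, chain, pin, and file names are assumptions rather than the exact tutorial values) is:

// Define the scan circuitry and clock (names are assumptions).
add scan groups grp1 dfta_out/pipe.testproc
add scan chains chain1 grp1 scan_in1 scan_out1
add clocks 0 clk
// Run ATPG and save the patterns.
set system mode atpg
add faults -all
run
save patterns fs_out/patterns.ascii -replace
exit -discard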
Note
Prior to saving patterns, be sure the procedure file has the desired timing for the tester.
You can run the entire session as a batch process with the preceding dofile, as described next, or
you can manually enter each of the dofile commands on the tool’s command line in the
command line window. Be sure you are in the $ATPGNG directory, then choose one of these
methods:
1. Enter the following shell command to invoke FastScan on the scan-inserted, gate level
netlist and run the dofile:
$MGC_HOME/bin/fastscan dfta_out/pipe_scan.v -verilog\
-lib adk.atpg -dofile fs_dofile.do \
-log fs_out/logfile -replace
After FastScan completes the ATPG process, you can scroll up through the transcript in
the main window and review the informational messages. The transcript enables you to
see when commands were performed, the tasks associated with the commands, and any
error or warning messages. A copy of the transcript is saved in the file,
$ATPGNG/fs_out/logfile.
2. Alternatively, enter the following shell command to invoke the tool on the scan-inserted
netlist, ready for you to begin entering FastScan commands:
$MGC_HOME/bin/fastscan dfta_out/pipe_scan.v -verilog\
-lib adk.atpg -log fs_out/logfile -replace
When the Command Line Window appears (Figure 1-3 on page 1-8 of the Scan and
ATPG Process Guide), enter each command shown in bold font in the preceding dofile
in the same order it appears in the dofile. Try to understand what each command does by
reviewing the transcript as you enter each command. A copy of the transcript is saved in
the file, $ATPGNG/fs_out/logfile.
Leave FastScan open if you intend to continue with the exercises in the next section.
This concludes the ATPG portion of the exercises. You have now finished:
Accessing Information
There are many different types of online help available. The different types include query help,
popup help, information messages, Tool Guide help, command usage, online manuals, and the
Help menu.
All of the DFT documentation is available online using Adobe Acrobat Reader. You can browse
the documents or have the document open to a specific page containing the information on a
specific command.
Also, you can access Application Notes and Tech Notes to problem-solve specific DFT tool
issues from the SupportNet website.
In this exercise, you will examine the many different ways of getting help. You will find online
documentation. Also, you will access Application Notes and Tech Notes from the SupportNet
website.
If you just completed the “Running FastScan” section, FastScan is up and running. If FastScan
is not running, invoke it using one of the invocation commands described in that section.
The following sections list the many different types of online help and describe how to access
them:
1. Open the Tool Guide by clicking the Help button located at the bottom of the Control
Panel or select the Help > Open Tool Guide menu item.
2. Click different topics listed in the upper portion of the window to change the
information displayed in the lower portion of the window. When finished, dismiss the
Tool Guide.
Command Usage
You can get the command syntax for any command from the command line by using the Help
command, followed by either a full or partial command name.
1. For example, enter the following command to see a list of all of the “Add” commands in
FastScan:
help add
2. To see the usage line for a specific “Add” command, enter the Help command followed
by the full command name. For example, to see the usage line for the FastScan Add
Clocks command, enter:
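help add clocks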
3. To view information about the Add Clocks command within the on-line ATPG Tools
Reference Manual, enter:
help add clocks -manual
1. Click on the Generated Patterns functional block in the control panel window.
2. Click the Turn On Query Help button. The mouse cursor changes to a question mark.
Click on different objects in the dialog box to open a help window on that object.
3. Click the same button (now named “Turn Off Query Help”), or press the Escape key to
turn off Query Help.
4. Click the Cancel button to close the dialog box.
Popup Help
Popup help is available on all active areas of the control panel. To activate this type of help,
click the right mouse button (RMB) on a functional block, process block, or button. To remove
the help window:
Informational Messages
Informational messages are provided in some dialog boxes to help you understand the purpose
and use of the dialog box or its options. You do not need to do anything to get these messages to
appear.
Online Help
Application documentation is provided online in PDF format. You can open manuals using the
Help menu or the Go to MANUAL button in Query Help messages.
You can also open a separate shell window and execute the $MGC_HOME/bin/mgcdocs
command. This opens the Mentor Graphics Bookcase. You then click on the Sys Design,
Verification, Test button and then the Design-for-Test link (blue text) to open the bookcase of
DFT documentation.
1. Choose Help > Open DFT Bookcase, then open the Scan and ATPG Process Guide.
2. Press Page Down to flip forward. Press Page Up to flip back.
3. Click items from the Table of Contents (left side of the display) to automatically jump to
specific chapters.
Note
Anything you see in blue in the documentation is a link. When you click on the blue text,
you will automatically jump to the referenced topic.
SupportNet Web is maintained by Mentor Graphics to provide quick access to information that
will help you resolve technical issues. The goal of Mentor Graphics is to create a more open,
flexible approach to customer support. The Support Services Latitudes Program provides you
the following choices:
Note
To do the following steps, you need to be a registered SupportNet user. You will need
your User Name (ID) and Password.
Application Notes go into greater detail about a specific topic. For example: Debugging
Simulation Mismatches in FastScan.
1. Scroll down through the TechNotes and look at the various TechNote topics. Click on
the title to view the note.
2. Go back to the ATPG Documentation window. Click “Back” two times. Review the list
of links on the left side of the page, then click Design-for-Test. Look at the following
areas:
o Release Info & Downloads
o Documentation/Solutions
o Technical Papers
o Training
o Technical Newsletter
3. From among the links on the left side of the Design-for-Test SupportNet page, click
DFT Products & Features. The Mentor Graphics DFT homepage appears. Continue
exploring the technical events, news, and other areas of interest to you in both this site
and the SupportNet site.
4. Exit the web.
5. Exit FastScan by clicking Exit in the button pane and then click Exit in the dialog box.
Discard all changes to the design.
[Figure B-1: A gated scan clock. The en signal gates clk to produce gclk, which drives the CK
input of the scan flip-flop (scan-FF with data inputs D1 and D2, enable EN, and outputs Q
and Q').]
The scan cell shown in Figure B-1 is a positive edge triggered device and normally would have
the clock off value defined as 0 at its CK input, due to a clock off value of 0 at clk (see Add
Clocks command).
For proper scan chain operation, the load_unload procedure must ensure all clocks are off until
shift. ATPG must ensure all clocks are off except during capture. This is required in order for
scan cells to hold their captured values. If the scan clock is gated as shown in Figure B-1, the
load_unload procedure must also ensure that pulsing clk during shift pulses gclk. The following
test procedure file excerpt shows how this may be done:
procedure load_unload =
   scan_group grp1; // Identify the scan operation definitions.
   timeplate gen_tp1; // Identify the timing for those ops.
   cycle =
      force en 1; // en=1, to ensure gclk=clk for shift
      force clk 0; // Ensure clocks are off in the 1st cycle.
   end;
   apply shift ... // Safely do all shifting for load_unload.
end;
It is possible that a single clock gater will drive both leading edge (LE) and trailing edge (TE)
triggered scan cells. Figure B-2 illustrates this possibility.
[Figure B-2: A single clock gater driving two scan cells. The en signal gates clk to produce
gclk, which drives the CK inputs of both scan flip-flops (one leading-edge and one
trailing-edge triggered).]
During capture, ATPG will set en to 1 when it needs to pulse the clock. This is true whether en
is a primary input as in this example, or a signal derived from scanned registers as described in
the next section. If the clock off value at clk is 0, the capture pulse will be positive (0 -> 1 -> 0),
and the positive edge-triggered flip-flop is LE; the negative edge-triggered flip-flop is TE. If the
clock off value is 1, these are reversed.
Again, when the clock is off, the scan cells must be able to hold capture values stable until they
can be unloaded. The C1 (Clock Rule #1) DRC ensures this will be the case. It takes pin
constraints into account (which are always enforced during capture, but can be overridden in
test procedures), and ensures that all scan cell clock inputs are stable when all clocks are off. In
the clock gating arrangement shown in Figure B-2, the required stability occurs for a clock off
value of 1 at clk only if pin constraints in the dofile cause en to be 1 (see Add Pin Constraint
command).
In this circuit, when clk is pulsed from low to high, the latch is disabled and remains so as long
as the clk signal stays high. Therefore, even if the output of dff1 changes from high to low as a
result of the leading edge of the pulse, that value change cannot propagate through the latch and
affect clk_en until clk goes low again, enabling the latch. For a clock off state of 0, no C1 DRC
violations will occur because gclk will be known (0) regardless of the value of clk_en.
Equally important, scan chains must operate correctly. DRC T3 checks for this and the check is
called a “scan chain trace.” To ensure that when the clk signal pulses during shift, gclk also
pulses (so the scan chain operates properly), it is important that the nonscan latch be a
transparent latch (TLA). This allows input se to be used to ensure shift by having the tester force
se to 1. You can force se to 1 in the load_unload procedure; however, it must be done before any
“apply shift” statement. The se signal must be controllable to 1 from the chip’s primary inputs
(IC pins).
The situation is more challenging if the clock off state is 1. The top part of Figure B-4 shows an
example of such a scan cell implementation (for the mux-DFF scan type).
The latch would transfer the en_se value to clk_en only when clk is pulsed low. As a result,
clk_en is always holding an old value at the leading edge of the clk capture pulse. If en_se does
a leading edge transfer of 0 to clk_en, then the AND gate cuts off the trailing edge of gclk’s
capture pulse, as well as the leading edge of the first shift pulse: the clocking of the LE triggered
dff2 is not guaranteed during unload, so a T3 DRC violation will occur.
If en is the input to the clock gating cell as it usually is, then often the easiest fix is to place a
cell constraint on that pin to hold it at 1. If not all instances of that cell should be constrained for
some reason, then just constrain specific instances.
The following are example commands you could put in the dofile prior to “set system mode
atpg”:
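For example, if en is a primary input (as in Figure B-2), a pin constraint can hold it at 1; for an enable driven from inside the design, a cell constraint on the specific driving instances would be used instead, as the preceding paragraph describes. The pin name below is from that example:

add pin constraints en C1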
Note
For mux scan cycles where se is 0 (almost all final capture cycles), if the dff1/D1 input
cannot be controlled to a 1 in cycle j, no test of length j+1 is possible. The clk_en signal
must be 1 upon entering unload.
Initialization
For the latched clock gate circuit described in the preceding section (see Figure B-5), clk_en
must be held to a 1 throughout testing. This is a requirement similar to that described earlier for
the en signal in the simple gated clock circuit shown in Figure B-1. You can do this using the
scan enable signal during shift, and either the PI “se” or the scan DFF output “dff1/Q” during
capture.
It is still possible however to miss the first edge of the first shift pulse if, upon entering test
mode, the clk_en signal is 0. To ensure that clk_en is initialized to 1 for the first cycle of the
first shift of the first pattern of the test program, you can use a test_setup procedure that forces
the scan enable signal, se, to 1, turns off the clock, and then pulses the clock. If there are
multiple latch-gated clocks with clock off problems, you can set each to be off and pulsed in the
test_setup procedure to ensure initialization of the clock enable upon entering test. The
following is an example timeplate and procedure to accomplish this:
timeplate gen_tp1 =
force_pi 0;
measure_po 0;
pulse clk 100 100;// offset 100, width 100
period 1000;
end;
procedure test_setup =
timeplate gen_tp1;
cycle =
force se 1;// D=1 at latch (OR output)
force clk 0;// PI off value of “clk”
pulse clk;// Make latch Q=1
end;
end
The preceding assumes the clk signal drives all clock gating cells that need to be initialized. If
there are other clock PIs that are gated, they should be pulsed as well. Unless race-free operation
in test mode is guaranteed, you should also ensure the clocks are pulsed consecutively (one at a
time) to prevent race conditions.
• T3 (Trace Rule #3) DRC error—because the tool cannot trace the scan chains
• C1 (Clock Rule #1) DRC error—because a clock input is X when all clocks are off
In both cases, you can use the following debugging sequence after DRC:
a. Open DFTInsight and choose the Setup > Reporting Detail menu item. In the Set
DFTInsight Reporting Detail dialog box, select the Simulated Values Causing DRC
Error button and click OK.
b. Choose the Display > Additions menu item. Copy and paste the gate ID or instance
name displayed in the DRC error message into the Named Instances dialog and click
OK. DFTInsight displays a design view of the instance.
2. Locate the X on the clock input to this instance.
3. Using EZ-Trace Mode to backtrack, locate any blocks where the clock input is known,
but the clock output is X. Each such block could be a clock gating cell; however, the
design view does not show enough detail to be sure. Note the instance name of each of
these blocks.
4. Change to the Primitive view by choosing Setup > Design Level > Primitive.
5. In primitive view, check whether the circuit elements comprising the block function as a
clock gating circuit. Typically, a clock gating cell has a clock input at 1, two other inputs
at X (typically the scan and system enables), and an X output.
6. Apply the appropriate fix, as described in the next two sections:
Debugging a C1 Violation Involving a Gated Clock
Debugging a T3 Violation Involving a Clock Gate
[Figure B-6: Design view of the problem instance. The ClkGat /rh cell has a 1 on its clock (CP)
input and Xs on its EN and SE inputs, so its gated-clock output (CPB) is X; this X drives the
clock input of dff2 /re1.]
In the primitive view for this example, shown in Figure B-7, you can see the X coming from an
AND gate that is preceded by a latch (other input to the AND will be a 1); so this represents a
latched clock gating cell. The latch and AND gate combination drive an X on the clock input of
the DFF. To change the X to a known value, you need to constrain the system enable input to
the ClkGat cell (en) to 1.
The key point to remember when tracing Xs back through the circuit is to stop at any block
where the clock input is known, but the clock output is X. Be aware that if there are both coarse
and fine enables and they have different enable logic, then the violation may just move
upstream. In this case, you would need to perform the debugging sequence twice; once for the
fine enable logic and once for the coarse enable logic.
In the primitive view for this example, shown in Figure B-9, you can see the X (highlighted in
bold font) coming from an AND gate that is preceded by a latch (other input to the AND will be
a 1). The latch and AND gate combination drive the X on the clock input of the DFF, so this
represents a latched clock gater.
Knowing this is a T3 DRC issue, you know that an uninitialized gate (the latch) is a problem.
The fix was described earlier in the “Initialization” section and basically involves the test_setup
procedure. Also, you must constrain the system enable input to the ClkGat cell (en) to 1; this is
the same fix that was required for the C1 violation.
[Figure B-9: Primitive view of the latched clock gating cell. A latch (LA /rh/UDP1) feeds an
AND gate (AND /rh) whose other input is 1; the AND output is X and drives the CP input of the
DFF (/re1/UDP1).]
A user can interact with FastScan in a number of ways. The graphical user interface (GUI) can
be used in an interactive or non-interactive mode. If preferred, the nogui (command line) mode
can be used, again either interactively or non-interactively. With either the GUI or the command
line mode of operation, the ATPG run can be completely scripted and driven using a FastScan
dofile. This non-interactive mode of operation
allows the entire ATPG run to be performed without user interaction. This method of using
FastScan can be further expanded to allow the ATPG run to be scheduled and run as a true batch
or cron job. This appendix focuses on the features of FastScan that support its use in a batch
environment.
FastScan returns an exit status when it terminates, which allows the shell script used to launch
the FastScan run to control process flow based on the success or failure of the ATPG run. A copy
of a Bourne shell script used to invoke FastScan follows. The area of interest is the check for the
exit status following the line that invokes FastScan.
#!/bin/sh
##
## Depending on the environment it may be necessary to define
## the MGC_HOME environment variable.
##
## MGC_HOME="/path_to_mgc_home" ; export MGC_HOME
##
DESIGN=`pwd`; export DESIGN
##
##
$MGC_HOME/bin/fastscan $DESIGN/tst_scan.v -verilog -lib \
$DESIGN/atpglib -dof $DESIGN/fastscan.do -nogui -License 30 \
-log $DESIGN/`date +log_file_%m_%d_%y_%H:%M:%S`
status=$? ; export status
case $status in
  0) echo "ATPG was successful";;
  1) echo "ATPG failed";;
  *) echo " The exit code is: " $status ;;
esac
echo $status " is the exit code value."
A C shell script can be used to perform the same function. An example of a C shell script
follows:
#!/bin/csh -b
##
## Depending on the environment it may be necessary to define
## the MGC_HOME environment variable.
##
## setenv MGC_HOME "/path_to_mgc_home"
##
setenv DESIGN `pwd`
##
##
${MGC_HOME}/bin/fastscan ${DESIGN}/tst_scan.v -verilog -lib \
${DESIGN}/atpglib -dofile ${DESIGN}/fastscan.do -nogui \
-License 30 -log ${DESIGN}/`date +log_file_%m_%d_%y_%H:%M:%S`
setenv proc_status $status
if ("$proc_status" == 0) then
  echo "ATPG was successful"
  echo " The exit code is: " $proc_status
else
  echo "ATPG failed"
  echo " The exit code is: " $proc_status
endif
echo $proc_status " is the exit code value."
echo $proc_status " is the exit code value."
Environment variables can also be used in the FastScan dofile. For example, the DESIGN
environment variable is set to the current working directory in the shell script. When a batch job
is created, the process may not inherit the same environment that existed in the shell
environment. To ensure that the process has access to the files referenced in the dofile, the
DESIGN environment variable is used. A segment of a FastScan dofile displaying the use of an
environment variable follows:
//
// Here the use of variables is displayed. In the past, when
// running a batch job, it was necessary to define the
// complete network neutral path to all the files related to
// the run. Now, shell variables can be used. As an example:
//
add scan group g1 ${DESIGN}/procfile
//
add scan chain c1 g1 scan_in CO
…
//
write faults ${DESIGN}/fault_list -all -replace
//
A user-defined startup file can be used to alias common commands. An example of this can be
found in the sample dofile. To set up the predefined alias commands, the file .fastscan_startup
can be used. In this example, the contents of the .fastscan_startup file are:
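A simple hypothetical example (the alias name, the aliased command, and the exact Alias syntax are assumptions) might be:

alias rst report statistics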
The following dofile segment displays the use of the alias that was defined in the
.fastscan_startup file.
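A hypothetical dofile segment using the rst alias defined above would then be:

//
// Use the alias defined in the .fastscan_startup file.
//
rst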
The last item to address is exiting gracefully from the dofile. This is required to ensure
that FastScan exits instead of waiting for additional command line input.
//
// Here we display the use of the exit command to terminate
// the FastScan dofile. Note that the "exit -discard" is used
// to perform this function.
//
exit -discard
//
The -nogui option is used to ensure that FastScan does not attempt to open the graphical user
interface. Because a tty process is not associated with the batch job, FastScan would be unable to
open the GUI, which could result in hanging the process.
Another item of interest is the logfile name created using the UNIX “date” command. A unique
logfile name will be created for each FastScan run. The logfile will be based on the month, day,
year, hour, minute, and second that the batch job was launched. An example of the logfile name
that would be created follows:
log_file_05_30_03_08:42:37
The time can be specified using “midnight”, “noon”, or “now”. A more common method is to
enter the time as a one-, two-, or four-digit field. One- and two-digit numbers are taken as hours,
and four-digit numbers as hours and minutes; alternatively, the time can be entered as two
numbers separated by a colon, meaning hour:minute. An AM/PM indication can follow the time;
otherwise, a 24-hour time is understood. Note that if a Bourne shell script is used, you need to
specify the -s option to the at command; if a C shell script is used, use the -c option.
at -s -m -f run_bourne now
or
at -c -m -f run_csh 09 12 AM
An example of the command and its response follows. When the -m option is used, a transcript
of the FastScan run is mailed to the user who started the batch process.
zztop: at -c -m -f run_csh 09 12 AM
commands will be executed using /bin/sh
job 1054311120.a at Fri May 30 09:12:00 2003
In general, it is recommended that an X window server be running on the system on which the
batch jobs are scheduled to run.
Example
The Design-for-Test Circuits and Solutions web site includes a test circuit that demonstrates
running FastScan as a batch job. The name of the test circuit is “batch_2003”, and the following
URL provides a pointer to the Design-for-Test Circuits and Solutions web site.
Note
You must have a SupportNet account in order to access the test circuits.
http://www.mentor.com/dft/customer/circuits/
Index
— Symbols —
.gz, 1-21
.Z, 1-21

—A—
Abort limit, 6-60
Aborted faults, 6-59
  changing the limits, 6-60
  reporting, 6-59
Acronyms, ATM-7
Ambiguity
  edge, 6-84
  path, 6-83
ASCII WGL format, 7-18
ASIC Vector Interfaces, 7-8, 7-20 to 7-23
ATPG
  applications, 2-13
  basic procedure, 6-1
  constraints, 6-49
  default run, 6-56
  defined, 2-12
  for IDDQ, 6-62 to 6-66
  for path delay, 6-76 to 6-86
  for transition fault, 6-68 to 6-73
  full scan, 2-13
  function, 6-49
  increasing test coverage, 6-58 to 6-61
  instruction-based, 6-12, 6-107 to 6-109
  partial scan, 2-14
  process, 6-48
  scan identification, 5-21
  setting up faults, 6-37, 6-43
  with FastScan, 6-6 to 6-11
  with FlexTest, 6-12
At-speed test, 2-15, 6-68 to 6-98
Automatic scan identification, 5-22
Automatic test equipment, 1-7, 6-13

—B—
BACK algorithm, 6-12
Batch mode, 1-20
  see also Appendix C
Bidirectional pin
  as primary input, 6-20
  as primary output, 6-20
Binary WGL format, 7-18
Blocks, functional or process flow, 1-12
Boundary scan, defined, 2-1
Browsing instance hierarchy, 1-15
Bus
  dominant, 3-14
  float, 4-14
Bus contention, 4-14
  checking during ATPG, 6-25
  fault effects, 6-26
Button Pane, 1-12

—C—
Capture handling, 6-28
Capture point, 2-23
Capture procedure, see Named capture procedure
Chain test, 7-14
Checkpointing example, 6-55
Checkpoints, setting, 6-54
Clock
  capture, 6-33, 6-101
  list, 6-33
  off-state, 6-33
  scan, 6-33
Clock gaters in FastScan, B-1
Clock groups, 5-35
Clock PO patterns, 6-8
Clock procedure, 6-8
Clock sequential patterns, 6-9, 6-10
Clocked sequential test generation, 4-18
Clocks, merging chains with different, 5-35
Combinational loop, 4-4, 4-5, 4-6, 4-7, 4-8
  cutting, 4-5
Command Line window, 1-9
Command usage, help, 1-13
Commands
  command line entry, 1-10
  command transcript, 1-10
  interrupting, 1-22
  running UNIX system, 1-21
  transcript, session, 1-9
Compass Scan format, 7-21
Compressing files
  .gz filename extension for, 1-21
  .Z filename extension for, 1-21
  set file compression command, 1-21
  set gzip options command, 1-21
Compressing pattern set, 6-57
Conserving disk space
  UNIX utilities for, 1-21
Constant value loops, 4-5
Constraints
  ATPG, 6-49
  IDDQ, 6-67
  pin, 6-24, 6-31
  scan cell, 6-35
Contention, bus, 3-19
Continuation character, 1-11
Control Panel window, 1-11
Control points
  automatic identification, 5-24
  manual identification, 5-24
Controllability, 2-9
Copy, scan cell element, 3-4
Coupling loops, 4-8
Creating a delay test set, 6-68, 6-98
Creating patterns, default run, 6-56
Customizing
  help topics, 1-23, 1-25, 1-27
  menus, 1-23, 1-25, 1-27
Cycle count, 7-16
Cycle test, 7-14
Cycle-based timing, 6-13

—D—
Data capture simulation, 6-28
Data_capture gate, 4-20
Debugging simulation mismatches, 6-131
Debugging simulation mismatches automatically, 6-136
Decompressing files
  .gz filename extension for, 1-21
  .Z filename extension for, 1-21
Defect, 2-15
Design Compiler, handling pre-inserted scan cells, 5-13, 5-15
Design flattening, 3-10 to 3-15
Design flow, delay test set, 6-68, 6-98
Design rules checking
  blocked values, 3-23
  bus keeper analysis, 3-22
  bus mutual-exclusivity, 3-19
  clock rules, 3-22
  constrained values, 3-23
  data rules, 3-21
  extra rules, 3-23
  forbidden values, 3-23
  general rules, 3-18
  introduction, 3-18
  procedure rules, 3-19
  RAM rules, 3-22
  scan chain tracing, 3-20
  scannability rules, 3-23
  shadow latch identification, 3-20
  transparent latch identification, 3-21
Design-for-Test, defined, 1-1
Deterministic test generation, 2-12
DFTAdvisor
  block-by-block scan insertion, 5-38 to 5-41
  features, 2-11
  help topics, customizing, 1-23
  inputs and outputs, 5-3
  invocation, 5-7
  menus, customizing, 1-23
  process flow, 5-2
  supported test structures, 5-4
  user interface, 1-23
DFTAdvisor commands
  add buffer insertion, 5-34
  add cell models, 5-11
  add clock groups, 5-35
—I—
IDDQ testing, 6-62 to 6-66
  creating the test set, 6-62 to 6-66
  defined, 2-15
  methodologies, 2-16
  performing checks, 6-66
  pseudo stuck-at fault model, 2-20
  setting constraints, 6-67
  test pattern formats, 7-11
  vector types, 2-16
Incomplete designs, 4-29
Init0 gate, 4-20
Init1 gate, 4-20
InitX gate, 4-20
Instance, definition, 3-10
Instances
  finding, 1-15
Instruction-based ATPG, 6-12, 6-107 to 6-109
Internal scan, 2-1, 2-2
Interrupting commands, 1-22

—L—
Latches
  handling as non-scan cells, 4-15
  scannability checking of, 4-4
Launch point, 2-23
Layout-sensitive scan insertion, 5-35

—M—
Macro, 2-6
Macros, 2-17
MacroTest, 6-110
  basic flow, 6-110, 6-111
  capabilities, summary, 6-110
  examples, 6-124, 6-126, 6-127, 6-129
    basic 1-Cycle Patterns, 6-124
    leading & trailing edge observation, 6-129
    multiple macro invocation, 6-126
    synchronous memories, 6-127
  macro boundary
    defining, 6-116
      with instance name, 6-116
      with trailing edge inputs, 6-118
      without instance name, 6-117
    reporting & specifying observation sites, 6-118
  overview, 6-110
  qualifying macros, 6-113
  recommendations for using, 6-122
  test values, 6-120
  when to use, 6-114
Manuals, viewing, 1-14
Manufacturing defect, 2-15
Mapping scan cells, 5-8
—V—
Verilog, 7-16 to 7-17
Viewing online manuals, 1-14

—W—
Windows
  Command Line, 1-9
  Control Panel, 1-11
End-User License Agreement
IMPORTANT - USE OF THIS SOFTWARE IS SUBJECT TO LICENSE RESTRICTIONS.
CAREFULLY READ THIS LICENSE AGREEMENT BEFORE USING THE SOFTWARE.
This license is a legal “Agreement” concerning the use of Software between you, the end user, either
individually or as an authorized representative of the company acquiring the license, and Mentor
Graphics Corporation and Mentor Graphics (Ireland) Limited acting directly or through their
subsidiaries or authorized distributors (collectively “Mentor Graphics”). USE OF SOFTWARE
INDICATES YOUR COMPLETE AND UNCONDITIONAL ACCEPTANCE OF THE TERMS AND
CONDITIONS SET FORTH IN THIS AGREEMENT. If you do not agree to these terms and conditions,
promptly return, or, if received electronically, certify destruction of Software and all accompanying items
within five days after receipt of Software and receive a full refund of any license fee paid.
1. GRANT OF LICENSE. The software programs you are installing, downloading, or have acquired with this Agreement,
including any updates, modifications, revisions, copies, documentation and design data (“Software”) are copyrighted,
trade secret and confidential information of Mentor Graphics or its licensors who maintain exclusive title to all Software
and retain all rights not expressly granted by this Agreement. Mentor Graphics grants to you, subject to payment of
appropriate license fees, a nontransferable, nonexclusive license to use Software solely: (a) in machine-readable, object-
code form; (b) for your internal business purposes; and (c) on the computer hardware or at the site for which an applicable
license fee is paid, or as authorized by Mentor Graphics. A site is restricted to a one-half mile (800 meter) radius. Mentor
Graphics’ standard policies and programs, which vary depending on Software, license fees paid or service plan purchased,
apply to the following and are subject to change: (a) relocation of Software; (b) use of Software, which may be limited,
for example, to execution of a single session by a single user on the authorized hardware or for a restricted period of time
(such limitations may be communicated and technically implemented through the use of authorization codes or similar
devices); (c) support services provided, including eligibility to receive telephone support, updates, modifications, and
revisions. Current standard policies and programs are available upon request.
2. ESD SOFTWARE. If you purchased a license to use embedded software development (“ESD”) Software, Mentor
Graphics grants to you a nontransferable, nonexclusive license to reproduce and distribute executable files created using
ESD compilers, including the ESD run-time libraries distributed with ESD C and C++ compiler Software that are linked
into a composite program as an integral part of your compiled computer program, provided that you distribute these files
only in conjunction with your compiled computer program. Mentor Graphics does NOT grant you any right to duplicate
or incorporate copies of Mentor Graphics' real-time operating systems or other ESD Software, except those explicitly
granted in this section, into your products without first signing a separate agreement with Mentor Graphics for such
purpose.
3. BETA CODE. Portions or all of certain Software may contain code for experimental testing and evaluation (“Beta
Code”), which may not be used without Mentor Graphics’ explicit authorization. Upon Mentor Graphics’ authorization,
Mentor Graphics grants to you a temporary, nontransferable, nonexclusive license for experimental use to test and
evaluate the Beta Code without charge for a limited period of time specified by Mentor Graphics. This grant and your use
of the Beta Code shall not be construed as marketing or offering to sell a license to the Beta Code, which Mentor Graphics
may choose not to release commercially in any form. If Mentor Graphics authorizes you to use the Beta Code, you agree
to evaluate and test the Beta Code under normal conditions as directed by Mentor Graphics. You will contact Mentor
Graphics periodically during your use of the Beta Code to discuss any malfunctions or suggested improvements. Upon
completion of your evaluation and testing, you will send to Mentor Graphics a written evaluation of the Beta Code,
including its strengths, weaknesses and recommended improvements. You agree that any written evaluations and all
inventions, product improvements, modifications or developments that Mentor Graphics conceived or made during or
subsequent to this Agreement, including those based partly or wholly on your feedback, will be the exclusive property of
Mentor Graphics. Mentor Graphics will have exclusive rights, title and interest in all such property. The provisions of this
subsection shall survive termination or expiration of this Agreement.
4. RESTRICTIONS ON USE. You may copy Software only as reasonably necessary to support the authorized use. Each
copy must include all notices and legends embedded in Software and affixed to its medium and container as received from
Mentor Graphics. All copies shall remain the property of Mentor Graphics or its licensors. You shall maintain a record of
the number and primary location of all copies of Software, including copies merged with other software, and shall make
those records available to Mentor Graphics upon request. You shall not make Software available in any form to any
person other than employees and contractors, excluding Mentor Graphics' competitors, whose job performance requires
access. You shall take appropriate action to protect the confidentiality of Software and ensure that any person permitted
access to Software does not disclose it or use it except as permitted by this Agreement. Except as otherwise permitted for
purposes of interoperability as specified by applicable and mandatory local law, you shall not reverse-assemble, reverse-
compile, reverse-engineer or in any way derive from Software any source code. You may not sublicense, assign or
otherwise transfer Software, this Agreement or the rights under it, whether by operation of law or otherwise (“attempted
transfer”), without Mentor Graphics’ prior written consent and payment of Mentor Graphics’ then-current applicable
transfer charges. Any attempted transfer without Mentor Graphics' prior written consent shall be a material breach of this
Agreement and may, at Mentor Graphics' option, result in the immediate termination of the Agreement and licenses
granted under this Agreement.
The terms of this Agreement, including without limitation, the licensing and assignment provisions shall be binding upon
your heirs, successors in interest and assigns. The provisions of this section 4 shall survive the termination or expiration of
this Agreement.
5. LIMITED WARRANTY.
5.1. Mentor Graphics warrants that during the warranty period Software, when properly installed, will substantially
conform to the functional specifications set forth in the applicable user manual. Mentor Graphics does not warrant
that Software will meet your requirements or that operation of Software will be uninterrupted or error free. The
warranty period is 90 days starting on the 15th day after delivery or upon installation, whichever first occurs. You
must notify Mentor Graphics in writing of any nonconformity within the warranty period. This warranty shall not be
valid if Software has been subject to misuse, unauthorized modification or installation. MENTOR GRAPHICS'
ENTIRE LIABILITY AND YOUR EXCLUSIVE REMEDY SHALL BE, AT MENTOR GRAPHICS' OPTION,
EITHER (A) REFUND OF THE PRICE PAID UPON RETURN OF SOFTWARE TO MENTOR GRAPHICS OR
(B) MODIFICATION OR REPLACEMENT OF SOFTWARE THAT DOES NOT MEET THIS LIMITED
WARRANTY, PROVIDED YOU HAVE OTHERWISE COMPLIED WITH THIS AGREEMENT. MENTOR
GRAPHICS MAKES NO WARRANTIES WITH RESPECT TO: (A) SERVICES; (B) SOFTWARE WHICH IS
LICENSED TO YOU FOR A LIMITED TERM OR LICENSED AT NO COST; OR (C) EXPERIMENTAL BETA
CODE; ALL OF WHICH ARE PROVIDED “AS IS.”
5.2. THE WARRANTIES SET FORTH IN THIS SECTION 5 ARE EXCLUSIVE. NEITHER MENTOR GRAPHICS
NOR ITS LICENSORS MAKE ANY OTHER WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, WITH
RESPECT TO SOFTWARE OR OTHER MATERIAL PROVIDED UNDER THIS AGREEMENT. MENTOR
GRAPHICS AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT OF
INTELLECTUAL PROPERTY.
7. LIFE ENDANGERING ACTIVITIES. NEITHER MENTOR GRAPHICS NOR ITS LICENSORS SHALL BE
LIABLE FOR ANY DAMAGES RESULTING FROM OR IN CONNECTION WITH THE USE OF SOFTWARE IN
ANY APPLICATION WHERE THE FAILURE OR INACCURACY OF THE SOFTWARE MIGHT RESULT IN
DEATH OR PERSONAL INJURY.
8. INDEMNIFICATION. YOU AGREE TO INDEMNIFY AND HOLD HARMLESS MENTOR GRAPHICS AND ITS
LICENSORS FROM ANY CLAIMS, LOSS, COST, DAMAGE, EXPENSE, OR LIABILITY, INCLUDING
ATTORNEYS' FEES, ARISING OUT OF OR IN CONNECTION WITH YOUR USE OF SOFTWARE AS
DESCRIBED IN SECTION 7.
9. INFRINGEMENT.
9.1. Mentor Graphics will defend or settle, at its option and expense, any action brought against you alleging that
Software infringes a patent or copyright or misappropriates a trade secret in the United States, Canada, Japan, or
member state of the European Patent Office. Mentor Graphics will pay any costs and damages finally awarded
against you that are attributable to the infringement action. You understand and agree that as conditions to Mentor
Graphics' obligations under this section you must: (a) notify Mentor Graphics promptly in writing of the action;
(b) provide Mentor Graphics all reasonable information and assistance to defend or settle the action; and (c) grant
Mentor Graphics sole authority and control of the defense or settlement of the action.
9.2. If an infringement claim is made, Mentor Graphics may, at its option and expense: (a) replace or modify Software so
that it becomes noninfringing; (b) procure for you the right to continue using Software; or (c) require the return of
Software and refund to you any license fee paid, less a reasonable allowance for use.
9.3. Mentor Graphics has no liability to you if infringement is based upon: (a) the combination of Software with any
product not furnished by Mentor Graphics; (b) the modification of Software other than by Mentor Graphics; (c) the
use of other than a current unaltered release of Software; (d) the use of Software as part of an infringing process; (e) a
product that you make, use or sell; (f) any Beta Code contained in Software; (g) any Software provided by Mentor
Graphics’ licensors who do not provide such indemnification to Mentor Graphics’ customers; or (h) infringement by
you that is deemed willful. In the case of (h) you shall reimburse Mentor Graphics for its attorney fees and other
costs related to the action upon a final judgment.
9.4. THIS SECTION 9 STATES THE ENTIRE LIABILITY OF MENTOR GRAPHICS AND ITS LICENSORS AND
YOUR SOLE AND EXCLUSIVE REMEDY WITH RESPECT TO ANY ALLEGED PATENT OR COPYRIGHT
INFRINGEMENT OR TRADE SECRET MISAPPROPRIATION BY ANY SOFTWARE LICENSED UNDER
THIS AGREEMENT.
10. TERM. This Agreement remains effective until expiration or termination. This Agreement will immediately terminate
upon notice if you exceed the scope of license granted or otherwise fail to comply with the provisions of Sections 1, 2, or
4. For any other material breach under this Agreement, Mentor Graphics may terminate this Agreement upon 30 days
written notice if you are in material breach and fail to cure such breach within the 30-day notice period. If Software was
provided for limited term use, this Agreement will automatically expire at the end of the authorized term. Upon any
termination or expiration, you agree to cease all use of Software and return it to Mentor Graphics or certify deletion and
destruction of Software, including all copies, to Mentor Graphics’ reasonable satisfaction.
11. EXPORT. Software is subject to regulation by local laws and United States government agencies, which prohibit export
or diversion of certain products, information about the products, and direct products of the products to certain countries
and certain persons. You agree that you will not export any Software or direct product of Software in any manner without
first obtaining all necessary approval from appropriate local and United States government agencies.
12. RESTRICTED RIGHTS NOTICE. Software was developed entirely at private expense and is commercial computer
software provided with RESTRICTED RIGHTS. Use, duplication or disclosure by the U.S. Government or a U.S.
Government subcontractor is subject to the restrictions set forth in the license agreement under which Software was
obtained pursuant to DFARS 227.7202-3(a) or as set forth in subparagraphs (c)(1) and (2) of the Commercial Computer
Software - Restricted Rights clause at FAR 52.227-19, as applicable. Contractor/manufacturer is Mentor Graphics
Corporation, 8005 SW Boeckman Road, Wilsonville, Oregon 97070-7777 USA.
13. THIRD PARTY BENEFICIARY. For any Software under this Agreement licensed by Mentor Graphics from Microsoft
or other licensors, Microsoft or the applicable licensor is a third party beneficiary of this Agreement with the right to
enforce the obligations set forth herein.
14. AUDIT RIGHTS. With reasonable prior notice, Mentor Graphics shall have the right to audit during your normal
business hours all records and accounts as may contain information regarding your compliance with the terms of this
Agreement. Mentor Graphics shall keep in confidence all information gained as a result of any audit. Mentor Graphics
shall only use or disclose such information as necessary to enforce its rights under this Agreement.
15. CONTROLLING LAW AND JURISDICTION. THIS AGREEMENT SHALL BE GOVERNED BY AND
CONSTRUED UNDER THE LAWS OF THE STATE OF OREGON, USA, IF YOU ARE LOCATED IN NORTH OR
SOUTH AMERICA, AND THE LAWS OF IRELAND IF YOU ARE LOCATED OUTSIDE OF NORTH AND SOUTH
AMERICA. All disputes arising out of or in relation to this Agreement shall be submitted to the exclusive jurisdiction of
Dublin, Ireland when the laws of Ireland apply, or Wilsonville, Oregon when the laws of Oregon apply. This section shall
not restrict Mentor Graphics’ right to bring an action against you in the jurisdiction where your place of business is
located. The United Nations Convention on Contracts for the International Sale of Goods does not apply to this
Agreement.
16. SEVERABILITY. If any provision of this Agreement is held by a court of competent jurisdiction to be void, invalid,
unenforceable or illegal, such provision shall be severed from this Agreement and the remaining provisions will remain in
full force and effect.
17. PAYMENT TERMS AND MISCELLANEOUS. You will pay amounts invoiced, in the currency specified on the
applicable invoice, within 30 days from the date of such invoice. This Agreement contains the parties' entire
understanding relating to its subject matter and supersedes all prior or contemporaneous agreements, including but not
limited to any purchase order terms and conditions, except valid license agreements related to the subject matter of this
Agreement (which are physically signed by you and an authorized agent of Mentor Graphics) either referenced in the
purchase order or otherwise governing this subject matter. This Agreement may only be modified in writing by authorized
representatives of the parties. Waiver of terms or excuse of breach must be in writing and shall not constitute subsequent
consent, waiver or excuse. The prevailing party in any legal action regarding the subject matter of this Agreement shall be
entitled to recover, in addition to other relief, reasonable attorneys' fees and expenses.