
Search State Extensibility based Learning Framework

for Model Checking and Test Generation

Maheshwar Chandrasekar

Dissertation submitted to the Faculty of

Virginia Polytechnic Institute and State University

in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

in

Computer Engineering

Michael S. Hsiao, Chair

Sandeep K. Shukla

Dong S. Ha

Allen B. MacKenzie

Ezra A. Brown

September 10, 2010

Blacksburg, Virginia

Keywords: Design Verification, Model Checking, Fault Model, Automatic Test Generation

and Fault Diagnosis

Copyright © 2010, Maheshwar Chandrasekar


Search State Extensibility based Learning Framework for Model

Checking and Test Generation

Maheshwar Chandrasekar

ABSTRACT

The increasing design complexity and shrinking feature size of hardware designs have made design verification and manufacturing test resource-intensive phases in the product life-cycle of a digital system. At the same time, time-to-market constraints demand faster verification and test phases; otherwise, the result may be a buggy design or a defective product. This trend in the semiconductor industry has considerably increased the complexity and importance of the Design Verification, Manufacturing Test and Silicon Diagnosis phases of a digital system production life-cycle. In this dissertation, we present a generalized learning framework, which can be customized to the common solving technique for problems in these three phases.

During Design Verification, the conformance of the final design to its specifications is verified. Simulation-based and Formal Verification are the two widely known techniques for design verification. Although the former can increase confidence in the design, only the latter can ensure the correctness of a design with respect to a given specification. Originally, Design Verification techniques were based on Binary Decision Diagrams (BDDs), but such techniques are now based on branch-and-bound procedures to avoid space explosion. However, branch-and-bound procedures may explode in time; thus, efficient heuristics and intelligent learning techniques are essential. In this dissertation, we propose a novel extensibility relation between search states and a learning framework that aids in identifying non-trivial redundant search states during the branch-and-bound search procedure. Further, we also propose a probability-based heuristic to guide our learning technique. First, we utilize this framework in a branch-and-bound based preimage computation engine. Next, we show that it can be used to perform an upper-approximation based state space traversal, which is essential to handle industrial-scale hardware designs. Finally, we propose a simple but elegant image extraction technique that utilizes our learning framework to compute an over-approximate image space. This image computation is later leveraged to create an abstraction-refinement based model checking framework.

During Manufacturing Test, test patterns are applied to the fabricated system, in a test environment, to check for the existence of fabrication defects. Such patterns are usually generated by Automatic Test Pattern Generation (ATPG) techniques, which assume certain fault types to model arbitrary defects. The sizes of the fault list and the test set have a major impact on the economics of manufacturing test. Towards this end, we propose a fault collapsing approach to compact the size of the target fault list for ATPG techniques. Further, from the very beginning, ATPG techniques have been based on branch-and-bound procedures that model the problem in a Boolean domain. However, ATPG is a problem in the multi-valued domain; thus, we propose a multi-valued ATPG framework to exploit this underlying nature. We also employ our learning technique for branch-and-bound procedures in this multi-valued framework.

To improve the yield for high-volume manufacturing, silicon diagnosis identifies a set of candidate defect locations in a faulty chip. Subsequently, physical failure analysis - an extremely time-consuming step - utilizes these candidates as an aid to locate the defects. To reduce the number of candidates returned to the physical failure analysis step, efficient diagnostic patterns are essential. Towards this objective, we propose an incremental framework that utilizes our learning technique for a branch-and-bound procedure. Further, it learns from the ATPG phase, where detection-patterns are generated, and utilizes this information during diagnostic-pattern generation. Finally, we present a probability-based heuristic for X-filling of detection-patterns with the objective of enhancing the diagnostic resolution of such patterns. We unify these techniques into a framework for test pattern generation with good detection and diagnostic ability. Overall, we propose a learning framework that can speed up the design verification, test and diagnosis steps in the life cycle of a hardware system.


To my beloved family

- parents, brother, Rajani, Pavan and Yuvan


Acknowledgments

It is an honor to acknowledge all the people who made this dissertation possible. I owe my deepest gratitude to my advisor, Dr. Michael S. Hsiao, for his immense patience in listening to my not-so-well-cooked ideas and for his smart guidance throughout my graduate life. I am also greatly indebted to my brother Kamesh for introducing me to Dr. Hsiao. I would like to thank Dr. Dong S. Ha, Dr. Ezra A. Brown, Dr. Allen B. MacKenzie and Dr. Sandeep K. Shukla for serving on my thesis committee.

I would like to acknowledge Prof. Tom Walker for providing me with an opportunity to teach freshmen during my stay at Virginia Tech. I would like to thank Lila B. Wills and Prof. Vinod K. Lohani for their timely financial support. I would also like to express my sincere gratitude to the staff at Virginia Tech for helping me with my paperwork. Last but not least, I would like to thank my family, friends and relatives for their emotional support and encouragement.

Maheshwar Chandrasekar

September 2010


Contents

1 Introduction 1

1.1 Design Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.1.1 Symbolic Model Checking . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1.2 State Reduction Techniques for Model Checking . . . . . . . . . . . . 5

1.2 Manufacturing Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.2.1 Fault Collapsing and Test Generation . . . . . . . . . . . . . . . . . . 6

1.2.2 Automatic Diagnostic Test Generation . . . . . . . . . . . . . . . . . 8

1.3 Dissertation Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2 Background 12

2.1 Symbolic Model Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.1.1 State Space Traversal . . . . . . . . . . . . . . . . . . . . . . . . . . . 13


2.1.2 Preimage Computation Model . . . . . . . . . . . . . . . . . . . . . . 16

2.1.3 Bounded and Unbounded Model Checking . . . . . . . . . . . . . . . 16

2.1.4 Over-approximate State Space Traversal . . . . . . . . . . . . . . . . 18

2.2 Automatic Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.2.1 Fault Collapsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.2.2 Detection and Diagnostic Test Generation . . . . . . . . . . . . . . . 21

2.2.3 ADTG Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.3 Other Related Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.3.1 Basic Definitions/Terminology . . . . . . . . . . . . . . . . . . . . . . 24

2.3.2 Antecedent Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3 Symbolic Model Checking 29

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.2.1 Preimage Computation Example . . . . . . . . . . . . . . . . . . . . 31

3.2.2 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.3 Search State Extensibility Driven Learning . . . . . . . . . . . . . . . . . . . 34

3.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34


3.3.2 The Proposed Learning . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.3.3 Over-approximated Preimage . . . . . . . . . . . . . . . . . . . . . . 43

3.3.4 Computing the Exact Pre-image - Implementation Details . . . . . . . 46

3.3.5 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.3.6 Heuristic for guiding AT . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.4 Experimental Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.4.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

3.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4 Tight Image Extraction for Model Checking 57

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

4.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

4.3 Our Model Checking Approach based on Image Extraction . . . . . . . . . . 64

4.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.3.2 Image Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

4.4 CEGAR Framework with our Learning . . . . . . . . . . . . . . . . . . . . . 67

4.4.1 CEGAR Framework - overall approach . . . . . . . . . . . . . . . . . 71


4.5 Experimental Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

4.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

5 Fault Collapsing and Test Generation 76

5.1 Fault Collapsing based on a Novel Extensibility Relation . . . . . . . . . . . 77

5.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

5.1.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

5.1.3 Extensibility based Dominance Analysis . . . . . . . . . . . . . . . . 83

5.1.4 Algorithm and Implementation Details . . . . . . . . . . . . . . . . . 88

5.1.5 Experimental Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5.1.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.2 Multi-Valued SAT-based ATPG . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.2.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

5.2.3 Multi-Valued SAT Framework for ATPG . . . . . . . . . . . . . . . . 104

5.2.4 Search State Based Learning . . . . . . . . . . . . . . . . . . . . . . . 107

5.2.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

5.2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116


5.3 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

6 Diagnostic Test Generation 117

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

6.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

6.2.1 E-Frontier based learning . . . . . . . . . . . . . . . . . . . . . . . . . 120

6.2.2 Output Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

6.3 Our Proposed Learning Framework . . . . . . . . . . . . . . . . . . . . . . . 124

6.3.1 Success Driven Learning (SDL) based on Search State Extensibility . 124

6.3.2 Conflict Driven Learning (CDL) based on Search State Extensibility . 127

6.3.3 Learning Framework for ADTG . . . . . . . . . . . . . . . . . . . . . 132

6.3.4 Incremental Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

6.4 Output Deviation based X-filling . . . . . . . . . . . . . . . . . . . . . . . . 134

6.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

6.5.1 Incremental Learning Framework . . . . . . . . . . . . . . . . . . . . 136

6.5.2 Output Deviation based X-filling . . . . . . . . . . . . . . . . . . . . 138

6.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140


7 Conclusion 142

Bibliography 145


List of Figures

1.1 Hardware System Production Life Cycle . . . . . . . . . . . . . . . . . . . . 2

2.1 Example FSM and Preimage Computation . . . . . . . . . . . . . . . . . . . 14

2.2 Preimage Computation Model. . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.3 Model for Bounded Model Checking . . . . . . . . . . . . . . . . . . . . . . . 17

2.4 Split Circuit Model for Test Generation . . . . . . . . . . . . . . . . . . . . . 22

2.5 ADTG Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.6 Success-driven learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1 Example Circuit and DT for Preimage Computation . . . . . . . . . . . . . 33

3.2 DT showing non-trivial redundant search states . . . . . . . . . . . . . . . . 36

3.3 Search State Extensibility Based Learning . . . . . . . . . . . . . . . . . . . 40

3.4 Preimage Computation based on Search State Extensibility . . . . . . . . . . 45


3.5 Probabilistic heuristic to guide AT . . . . . . . . . . . . . . . . . . . . . . . 48

4.1 Venn Diagram showing over-approximation of Reachable State Space . . . . 58

4.2 Model for Interpolation-based Model Checking . . . . . . . . . . . . . . . . . 61

4.3 Decision Tree for BMC_k(I, T_k, F) . . . . . . . . . . . . . . . . . . . . . . . 64

4.4 Missed Image cubes due to Non-chronological Backtracking . . . . . . . . . . 68

5.1 Extensibility based dominance analysis . . . . . . . . . . . . . . . . . . . . . 84

5.2 Special case of detection dominance . . . . . . . . . . . . . . . . . . . . . . . 86

5.3 Cut-sets in the Search Space. . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.4 Multi-Valued Clauses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

5.5 Example Circuit 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.6 Multi-Valued clauses for k s@1 . . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.7 Decision Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

5.8 Example Circuit 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

5.9 Decision Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

6.1 ADTG Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

6.2 False Positive in SDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126


6.3 Conflict Driven Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

6.4 False Negative in CDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

6.5 Proposed ADTG Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


List of Tables

3.1 Comparison with SAT/ATPG-based approaches . . . . . . . . . . . . . . . . 53

3.2 Comparison with Cofactor-based approaches . . . . . . . . . . . . . . . . . . 53

4.1 Our Image Extraction vs. Interpolation . . . . . . . . . . . . . . . . . . . . . 74

5.1 EXTRACTOR vs. GRADER [124] - ISCAS85 [76] . . . . . . . . . . . . . . . 91

5.2 EXTRACTOR vs. GRADER [124] for full-scan ISCAS89 [77] . . . . . . . . . 92

5.3 Resource Usage for ISCAS85 Benchmark . . . . . . . . . . . . . . . . . . . . 93

5.4 Resource Usage for full-scan ISCAS89 Benchmark . . . . . . . . . . . . . . . 94

5.5 Efficiency of Our Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

6.1 Original Flow vs. Proposed Flow . . . . . . . . . . . . . . . . . . . . . . . . 135

6.2 Output Deviation based X-fill vs. Random X-fill . . . . . . . . . . . . . . . . 139


Chapter 1

Introduction

With ever-increasing design complexity and shrinking feature sizes, several errors may manifest in the manufactured digital system (VLSI circuit). In general, design errors are referred to as bugs and fabrication errors as defects. Figure 1.1 illustrates the current industrial practice for a hardware system production life-cycle. To ensure error-free hardware systems, it is essential to verify the design for bugs (Phase II in Figure 1.1) and to test the fabricated chips for defects (Phase IV in Figure 1.1). Both these phases usually provide feedback to design/manufacturing engineers. This feedback is then used as an aid by the engineers to fix the errors.


Maheshwar Chandrasekar Chapter 1. Introduction 2

Figure 1.1: Hardware System Production Life Cycle

1.1 Design Verification

Design verification is the process of ensuring that a given implementation adheres to its specification. In today's design cycle, more than 70% [98] of the resources are spent on design verification. The two techniques for design verification are Simulation-based Verification and Formal Verification. In the former, designs are extensively simulated to check for failure of pre-determined design assertions. In the latter, mathematical models and analysis are employed to either prove or disprove the conformance of a design to its specification or a given property. Although Simulation-based Verification can increase confidence in the design, only Formal Verification can ensure the correctness of a design (with respect to the specification/property used). Model Checking is the most widely known technique for Formal Verification of digital designs [99]. The two types of Model Checking are Equivalence Checking and Property Checking. Equivalence Checking, in general, checks if a given implementation model is equivalent to its specification (golden) model. Pragmatically, in the VLSI circuit production life-cycle, Equivalence Checking is used to determine if an unoptimized design is equivalent to its optimized version. Although it works well with combinational optimizations, researchers are still in pursuit of effective equivalence checking techniques that can robustly verify sequential optimizations (such as different state encodings and aggressive retiming) for large-scale designs. On the other hand, Property Checking techniques verify properties on the optimized designs. The properties are usually obtained from the design specification or the designers. By verifying several essential properties on the optimized design, designers can ensure that the Design Under Verification (DUV) indeed conforms to its specification. Equivalence Checking can be viewed as a special case of Property Checking, where the property is the assertion that the optimized and unoptimized versions are equivalent. In the sequel, we will refer to Property Checking as Model Checking in general. To verify properties on the design, efficient state space traversal engines are needed. In the infancy of Model Checking, explicit state space exploration was utilized. However, with increasing design complexity, the number of state variables increased significantly. Hence, explicit state traversal methods cannot scale to current large industrial designs due to the enormous number of states in them. This is usually referred to as the state explosion issue; to overcome it, symbolic methods, where implicit state traversal is performed, are essential.
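The special case above, viewing equivalence as a property, can be made concrete with a brute-force miter-style check. A minimal sketch, assuming two invented three-input functions; exhaustive enumeration is feasible only for tiny combinational designs, which is precisely why symbolic engines are needed:

```python
from itertools import product

# Two hypothetical versions of the same combinational design:
# an unoptimized specification and an optimized implementation.
def spec(a, b, c):
    return (a and b) or (a and c)          # a*b + a*c

def impl(a, b, c):
    return a and (b or c)                  # factored form: a*(b + c)

def equivalent(f, g, n_inputs):
    """Check the miter property: f XOR g is false for every input vector."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n_inputs))

print(equivalent(spec, impl, 3))           # True: the optimization is sound
```

The miter property fails on the first input vector where the two versions disagree, which is exactly the counterexample a model checker would return.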


1.1.1 Symbolic Model Checking

Currently, Symbolic Model Checking (SMC) approaches are widely used for formal verification of hardware systems. At the heart of such approaches lies an efficient image (preimage) computation engine that performs an implicit state space traversal. Essentially, image (preimage) computation involves the generation of the set of all next (previous) states that can be visited from a given set of states in one cycle. In [74], McMillan proposed the first symbolic method, based on Reduced Ordered BDDs (ROBDDs), for implicit state space traversal. He was able to model check small to medium sized designs using this approach. However, the technique did not scale to large designs, since ROBDDs may explode in space for such designs. Later, SAT/ATPG based approaches [42,38,44,45,87,90,91] were proposed as an alternative to ROBDD based methods. These approaches trade off space for time; that is, they may explode in time. In order to avoid time explosion, it is necessary to avoid visiting already explored (redundant) search states during the state space traversal.

In our work, we propose a novel learning technique based on a new notion of search state extensibility. Our technique helps in identifying several non-trivial redundant search states. We also introduce a probability-based heuristic essential to guide our learning. We incorporated the learning technique into an ATPG-based preimage (image) computation engine. Further, our learning framework facilitates over-approximate state space traversal, which can be leveraged to design a unified Model Checking framework.
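A preimage step of the kind defined above can be sketched explicitly for a toy FSM. The two-bit transition function below is invented for illustration, and symbolic engines exist precisely to avoid this kind of explicit state enumeration:

```python
from itertools import product

# Hypothetical 2-bit FSM: next state as a function of current state and input.
def next_state(s, x):
    s1, s0 = s
    return (s0 ^ x, s1 & x)    # arbitrary toy transition function

states = list(product([0, 1], repeat=2))
inputs = [0, 1]

def preimage(target):
    """All states that can reach some state in `target` in one cycle."""
    return {s for s in states if any(next_state(s, x) in target for x in inputs)}

print(preimage({(0, 0)}))
```

Image computation is the dual direction: collect next_state(s, x) for every s in the given set. The explicit loops here are what blow up exponentially in the number of state bits.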


1.1.2 State Reduction Techniques for Model Checking

Model checking of industrial-scale hardware designs is severely limited by the state explosion problem, even for symbolic methods. To handle this problem, a number of methods have been proposed in the past to reduce the states in the model used during verification [81]. Such techniques include symmetry reductions [100,101,102,103,104], partial order reductions [105,106] and abstraction techniques [107,108,109,110,87]. Techniques based on abstraction have been shown to be quite promising in tackling the state explosion problem for large industrial-scale designs. Such techniques alter the original behavior of the design under verification in an attempt to reduce verification effort. However, these techniques need to ensure that the altered behavior does not affect the final conclusions on the validity of properties. The authors in [111,112] present a detailed survey of the model checking techniques published in the literature.

1.2 Manufacturing Test

With advances in VLSI design and fabrication technology, several defects that are hard to detect may manifest in the chip. These defects range from an interconnect line stuck at a constant value to shorts and opens in transistors, signals, and vias. For the past several decades, Manufacturing Test has been a prominent methodology to screen and identify defective Integrated Circuits (ICs). In this method, test patterns are applied to the primary inputs of the chip and the primary outputs are then observed. If the expected and observed output values differ, it can be concluded that the chip is defective. To this end, effective test patterns are required. The process of generating such patterns is referred to as Automatic Test Pattern Generation (ATPG). Further, these patterns are generated assuming fault models such as stuck-at faults, bridge faults, transition faults and path-delay faults. Such fault models are essential to represent defects, reflecting the physical condition that causes a circuit to fail to perform in a desired manner [2]. It is widely accepted in both industry and academia that test patterns generated using the single stuck-at fault model can help in detecting several arbitrary defects. Thus, a significant portion of the test patterns employed during Manufacturing Test are usually generated assuming the single stuck-at fault model.
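The good/faulty comparison at the heart of this methodology can be sketched on a hypothetical two-gate circuit with an injected single stuck-at-0 fault; real flows simulate gate-level netlists rather than Python functions:

```python
def good_circuit(a, b, c):
    w = a & b          # internal AND gate
    return w | c       # OR gate driving the primary output

def faulty_circuit(a, b, c):
    w = 0              # injected fault: AND output stuck-at-0
    return w | c

def detects(pattern):
    """A pattern detects the fault iff good and faulty outputs differ."""
    return good_circuit(*pattern) != faulty_circuit(*pattern)

print(detects((1, 1, 0)))   # activates the fault (w would be 1) and propagates it
print(detects((0, 1, 0)))   # does not activate the fault
```

Note that detection requires both activation (the faulty line would carry the opposite value) and propagation (the difference reaches an observed output); pattern (1, 1, 1) activates the fault but c = 1 masks it at the OR gate.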

1.2.1 Fault Collapsing and Test Generation

Fault collapsing is the process of obtaining a compact fault list (CFL) from an uncollapsed fault list (UFL) by including in the CFL only a representative fault for each set of faults in the UFL. The CFL is then given as input to the ATPG engine. Fault collapsing is a widely researched topic, mainly due to its potential benefits on the factors affecting test economics [2,3]. The two broad types of collapsing are structural and functional collapsing. The former technique collapses faults in a fanout-free region of the circuit into a smaller representative set and is usually low-cost. The latter technique has the capacity to identify all fault collapsing opportunities; however, it is usually expensive, since it may require an ATPG engine or a SAT solver. In order to realize the full benefits of fault collapsing on test economics, it is necessary to design a low-cost fault collapsing engine that can push structural fault collapsing towards the results achievable by functional fault collapsing. In this dissertation, we use our novel extensibility notion and the unique requirements to test a fault for designing such a low-cost fault collapsing engine.
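Structural equivalence collapsing can be sketched on a single gate using the textbook rule that, for an AND gate, a stuck-at-0 on any input is equivalent to a stuck-at-0 on the output; the fault-list representation below is invented for illustration:

```python
def collapse_and_gate(gate_out, gate_ins):
    """Equivalence-collapse stuck-at faults on a single AND gate.

    Uncollapsed list: s-a-0 and s-a-1 on every input and on the output.
    For an AND gate, input s-a-0 faults are equivalent to output s-a-0,
    so one representative suffices; the s-a-1 faults remain distinct.
    """
    lines = gate_ins + [gate_out]
    uncollapsed = [(line, v) for line in lines for v in (0, 1)]
    collapsed = [(gate_out, 0)]                  # representative for all s-a-0
    collapsed += [(line, 1) for line in lines]   # s-a-1 faults kept as-is
    return uncollapsed, collapsed

ufl, cfl = collapse_and_gate("z", ["a", "b"])
print(len(ufl), len(cfl))   # 6 uncollapsed faults reduce to 4
```

Applying such per-gate rules across a fanout-free region is the low-cost structural collapsing referred to above; functional collapsing would additionally merge faults that no test can distinguish anywhere in the circuit.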

In 1966, Roth proposed the D-algorithm [4], which is essentially based on a branch-and-bound procedure, to generate test patterns for a given combinational circuit. Later, in 1981, Goel observed that the search space explored by the D-algorithm can be significantly reduced if the decisions during the search are made only on the primary inputs [15]. Subsequently, a plethora of work has been proposed in the literature to enhance ATPG, as in [49,67,53,54,55,19,2], to mention a few. Test pattern generation for sequential circuits is, in general, hard. To overcome this issue, the customary industrial practice is to enhance the controllability and/or observability of state elements in the circuit for testing purposes. This version is referred to as the full-scan version of the Circuit Under Test (CUT). Thus, an arbitrary combinational ATPG engine can be employed to generate test patterns for the full-scan version of the sequential CUT. Prior research published in the literature models the ATPG problem in the Boolean domain. However, in reality, ATPG is a problem in the multi-valued domain ({0, 1, D, D̄, X}). In this dissertation, we propose a novel ATPG framework based on multi-valued SAT and attempt to solve the ATPG problem directly in the multi-valued domain itself. Further, we also integrate powerful search space pruning techniques that learn from path sensitization conflicts. In recent years, there has been increased interest in generating diagnostic test patterns that can ideally differentiate between arbitrary faults in a CUT. The process of generating such diagnostic patterns is referred to as Automatic Diagnostic Test Generation (ADTG). Note that ATPG attempts to differentiate a fault-free circuit from a faulty circuit, whereas ADTG attempts to differentiate one faulty circuit from another.

1.2.2 Automatic Diagnostic Test Generation

Test escapes result whenever defective chips pass the test. For both defective chips that are

captured during manufacturing test and those returned as test escapes, a process of locating the

defects is needed to improve the yield for high-volume manufacturing. The process of locating

the defect in a defective chip is referred to as silicon diagnosis [1]. The diagnosis process

returns a set of possible defect locations (candidates) in a defective chip. Subsequently,

physical failure analysis - an extremely time-consuming process - is performed on the failed

chips with the aid of these candidates. For efficient physical analysis, the cardinality of the

candidate set should be as small as possible [2]. To this end, diagnostic test patterns that

can differentiate between different faults (candidates) are critical.

In the literature, several complete ADTG algorithms [8, 116, 10, 118] have been proposed.

Also, low-cost preprocessors [11, 12, 13, 117] have been proposed to reduce the number of

fault-pairs considered during ADTG. It should be noted that the fault-pairs in the final

candidate set, from silicon diagnosis, are usually either equivalent or hard to distinguish.

Thus a complete and aggressive ADTG engine is necessary. In our work, we propose an


efficient ADTG engine with the ability to prune non-trivial redundant search states. This is

again based on the notion of search state extensibility. Further, we propose an incremental

learning framework, where the information learned during ATPG is efficiently utilized during

ADTG. Finally, since the number of specified bits in a generated test pattern is usually low,

we propose a probability-based heuristic to fill the unspecified bits in the ATPG patterns

with the objective of enhancing the diagnostic-ability of such patterns.

1.3 Dissertation Organization

Chapter 2 discusses the background and related work necessary to understand the disser-

tation. First, we discuss concepts related to Symbolic Model Checking, where we explain

the model and techniques used for model checking. Next, we explain concepts related to

automatic test generation, viz., fault collapsing and the model used for diagnostic test gen-

eration. Finally, we present the basic definitions and antecedent tracing technique that will

be used in the rest of the dissertation.

In Chapter 3, we first elicit the previous work done in Symbolic Model Checking. Second,

we introduce our novel extensibility relation between a pair of search states. Next, based on

this extensibility relation, we present our learning framework for preimage computation with

a proof for its soundness and completeness. Further, we also introduce our probability-based

heuristic to guide the learning framework. Finally, we also discuss the unified model checking

framework based on our preimage computation engine.


Chapter 4 discusses the prior literature work on over-approximate state space traversal based

model checking. Next, it presents our image extraction technique. This technique, unlike

existing methods (like those based on interpolation), attempts to implicitly compute an

over-approximate image during the branch-and-bound search performed for bounded model

checking. We incorporate our extensibility based learning framework for the branch-and-

bound search and prove that the over-approximate image obtained by our technique is indeed

an interpolant. Further, if the search excites multiple unsatisfiable cores, that may inherently

exist in the problem, then our over-approximate image may be tighter than those obtained

by interpolation methods, which usually use only a single unsatisfiable core. Finally, using

our image extraction technique, we present an abstraction-refinement based model checking

framework.

Chapter 5 first surveys the prior work in fault collapsing and test generation. Next, we

present an efficient fault collapsing technique based on our novel extensibility relation and

unique requirements to test a fault. We also present a simple but elegant method to store

these pre-computed unique requirements, which significantly reduces the necessary memory

capacity. Finally, in this chapter, we observe that test generation is inherently a multi-

valued problem; however, existing test generation techniques attempt to solve it in the Boolean

domain. So we present a novel multi-valued ATPG framework for test generation along with

a powerful technique that learns on sensitization conflicts.

Chapter 6 first discusses the previous work in Diagnostic Test Generation. Second, it explains

our search state extensibility based learning framework for automatic test generation. Next,


we show that this framework can be employed for diagnostic test generation too. Further,

we propose an incremental learning framework that employs the information learned during

detection-oriented test generation in diagnostic-oriented test generation. Finally, we propose

an output-deviation based probabilistic heuristic to fill the unspecified bits in a test pattern

with the objective of enhancing its diagnostic ability.

Chapter 7 summarizes our contributions and concludes this dissertation.


Chapter 2

Background

In this chapter, we explain the necessary background to understand our contribution in the

area of Symbolic Model Checking (SMC) and Automatic Test Generation (ATG). First, we

discuss the related concepts in SMC. Next, we explain the background for ATG. Finally, we

discuss other concepts related to our work.

2.1 Symbolic Model Checking

The three basic aspects of any Symbolic Model Checking technique are (i) the model used

for design representation, (ii) the language used for property specification, and (iii) the method

employed for verification. In this dissertation, we are interested in the gate-level representation of

the hardware design and temporal logic for verifying safety properties. Note that a property

P is a safety property in a synchronous sequential design D if and only if D is P-safe, i.e., P


Maheshwar Chandrasekar Chapter 2. Background 13

holds true in the reachable state space of D. We refer the reader to [99, 98, 74] for a detailed

discussion on model representation, property specification language and different types of

properties. The core of any SMC approach is the preimage and/or image computation

technique used for state space traversal during property checking. These techniques are dual

to each other; in the rest of this chapter, we will be referring to preimage computation to

keep our discussion simple.

2.1.1 State Space Traversal

A synchronous sequential design D can be represented by a Mealy-type Deterministic Finite

State Machine (FSM). An FSM M is formally defined [72] as a 5-tuple < Σ, Q, δ, Q0, F >,

where

• Σ represents the primary input variables (I), primary output variables (O) and state

variables (S),

• Q represents the finite set of states in M defined over the variables in S,

• δ : Q × I ⇒ Q represents the transition function,

• Q0 ⊆ Q is the set of initial states of M , and

• F : Q × I ⇒ O is the output function.
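The 5-tuple above can be made concrete with a small explicit-state sketch. The following Python is illustrative only - real designs are represented symbolically rather than by enumerating Q - and all names (including the toggle machine itself) are assumptions of this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MealyFSM:
    """Explicit-state stand-in for the 5-tuple <Sigma, Q, delta, Q0, F>."""
    inputs: frozenset    # input alphabet over the primary input variables I
    states: frozenset    # finite state set Q over the state variables S
    delta: dict          # transition function: (state, input) -> state
    initial: frozenset   # initial states Q0, a subset of Q
    output: dict         # output function F: (state, input) -> output

    def step(self, state, inp):
        return self.delta[(state, inp)]

# A one-state-variable toggle machine: input 1 flips the state, input 0 holds it.
fsm = MealyFSM(
    inputs=frozenset({0, 1}),
    states=frozenset({"0", "1"}),
    delta={("0", 0): "0", ("0", 1): "1", ("1", 0): "1", ("1", 1): "0"},
    initial=frozenset({"0"}),
    output={(s, i): i for s in ("0", "1") for i in (0, 1)},
)
assert fsm.step("0", 1) == "1"
```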

For ease of discussion, we assume that the property is defined on the state variables (S)

of the design D. However, generalization of the approach to a property on any internal


Figure 2.1: Example FSM and Preimage Computation

variables of D is straight-forward. The pre-image of a set of states B (property) ⊆ Q is

the set of states A ⊆ Q from which there is at least one path to a state in B in the FSM

M . Figure 2.1A shows the FSM of a sequential design, which has three state variables.

The value within each state represents the state-variable encoding. Figure 2.1B shows the

iterative pre-image computation steps for B = {000, 110}. In each iteration, we include

only the newly visited states in the pre-image. Since the total number of states that can be

defined on n state variables is bounded by a finite number (2^n), the pre-image computation will

eventually halt with no new states to be added to the pre-image set. This is referred to as

the fix-point. In the sequel, unless specified, we use pre-image to refer to an iteration in the

fix-point computation.
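The iterative computation just described can be sketched explicitly. This is a brute-force stand-in for the symbolic techniques discussed later - the transition relation is a set of (present-state, input, next-state) triples, and the toy machine is an assumption of this example, not the FSM of Figure 2.1.

```python
def preimage(transitions, B):
    """One pre-image step: states with at least one transition into B."""
    return {x for (x, i, y) in transitions if y in B}

def preimage_fixpoint(transitions, B):
    """Iterate, adding only newly visited states, until no new state appears
    (the fix-point)."""
    reached, frontier = set(B), set(B)
    while frontier:
        frontier = preimage(transitions, frontier) - reached
        reached |= frontier
    return reached

# Toy 3-state-variable machine; the transitions are illustrative only.
T = {("001", 0, "000"), ("010", 1, "110"), ("011", 0, "010"), ("111", 1, "011")}
assert preimage(T, {"000", "110"}) == {"001", "010"}
assert preimage_fixpoint(T, {"000", "110"}) == {"000", "110", "001", "010", "011", "111"}
```

Each call to `preimage` corresponds to one iteration; the set difference keeps only the newly visited states, which is what guarantees termination.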

For pre-image computation, we are interested in the transition relation between next state

(Y ) and present state variables (X). Let δTR represent the transition relation for D. It is


defined as

δTR(X, I, Y ) = ∧_{y ∈ Y} (y ≡ δy(X, I))    (2.1)

where

• X/Y are present/next state variables and

• δy is the next-state function of next state variable y ∈ Y , defined over the present state

variables in X and the primary inputs I.

Formally, pre-image of a set of states B (defined over Y variables) is defined as

Preimage(B) = ∃I,Y δTR(X, I, Y ) ∧ B. (2.2)

Figure 2.2: Preimage Computation Model.


2.1.2 Preimage Computation Model

Note that B is the characteristic function of the set of states B (⊆ Q), represented as a

function defined over the primary inputs I and the present state variables X. Thus we can

represent the right hand side of Equation 2.2 simply as the monitor circuit shown in Figure

2.2. Now, in order to obtain Preimage(B), we need to compute the assignments to the

present state variables X for which there exists an assignment to the inputs I such that the

single output out in the monitor is set true. The set of all such assignments on X represents

the Preimage(B). In our work, we use this model for pre-image computation. For image

computation, a similar model can be constructed. However, note that the states for which

image needs to be computed must be asserted on the present state variables X.
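The monitor-circuit view of the pre-image can be sketched by brute-force enumeration: collect every present-state assignment X for which some input assignment I sets out to 1. In practice a SAT or BDD engine performs this existential quantification; the tiny out function below is purely an assumption of this example.

```python
from itertools import product

def preimage_via_monitor(out_fn, n_x, n_i):
    """States X for which EXISTS I such that out(X, I) = 1 in the monitor."""
    return {x for x in product((0, 1), repeat=n_x)
            if any(out_fn(x, i) for i in product((0, 1), repeat=n_i))}

# Illustrative monitor: out = (x0 AND i0) OR x1. The pre-image contains every
# state with x1 = 1, plus states with x0 = 1 (witnessed by the input i0 = 1).
out = lambda x, i: (x[0] & i[0]) | x[1]
assert preimage_via_monitor(out, n_x=2, n_i=1) == {(0, 1), (1, 0), (1, 1)}
```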

2.1.3 Bounded and Unbounded Model Checking

Bounded model checking (BMCk) attempts to verify the underlying design for a fixed bound

k. This bound defines the length of state paths (formed by state transitions) from the initial

state of the design. Thus BMCk cannot draw conclusions about the validity of the property beyond the bound

k. In other words, it can only detect a counter-example (if any) of length less than or equal

to k. In general, the BMCk model for a sequential design D shown in Figure 2.3A can be

represented as shown in Figure 2.3B. Here, the design D is unrolled for k time-frames and

the initial state is asserted on state variables on the left side of the first time-frame. Further,

a monitor sub-circuit asserts that the negation of the property (¬P) can be reached in at least


Figure 2.3: Model for Bounded Model Checking

one of the k time-frames. If this formulation is satisfiable then it implies that there exists

a counter-example, which can be extracted from the satisfying assignment. Otherwise, it

can be concluded that the design D is P-safe within the bound k. It has been shown that

bounded model checking is extremely beneficial in generating counter-examples for industrial

designs, especially in the error-prone early stages of design life-cycle.
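The bounded check of Figure 2.3B can be sketched with an explicit-state stand-in for the SAT formulation: unroll the design for k steps from the initial states and report a trace as soon as a state violating P (a "bad" state) is reached. The three-state machine is an assumption of this example; a real BMC engine unrolls the netlist and hands the formula to a SAT solver instead of enumerating states.

```python
def bmc(delta, inputs, initial, bad, k):
    """Return a counter-example trace (at most k transitions) from an initial
    state to a state violating P, or None if the design is P-safe within k."""
    for s in initial:                      # length-0 counter-example?
        if s in bad:
            return [s]
    frontier = [(s, [s]) for s in initial]
    for _ in range(k):                     # one iteration per unrolled time-frame
        nxt = []
        for state, trace in frontier:
            for i in inputs:
                s2 = delta[(state, i)]
                if s2 in bad:
                    return trace + [s2]    # the unrolled formula is satisfiable
                nxt.append((s2, trace + [s2]))
        frontier = nxt
    return None                            # P-safe within the bound k

delta = {("a", 0): "a", ("a", 1): "b", ("b", 0): "c", ("b", 1): "a",
         ("c", 0): "c", ("c", 1): "c"}
assert bmc(delta, (0, 1), {"a"}, {"c"}, k=1) is None          # no bug within 1 step
assert bmc(delta, (0, 1), {"a"}, {"c"}, k=2) == ["a", "b", "c"]
```

Incrementing k until a counter-example appears (or a threshold/fix-point is reached) mirrors the BMCk, BMCk+1, ... progression described above.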

If no counter-example is found in BMCk, in order to make the model checking approach

a complete algorithm, one increments the bound k to perform BMCk+1. This process of

incrementing the bound is continued until either the problem becomes intractable (resource

exhaustion) or a completeness threshold CT is reached. CT is a number which helps in


concluding that D is P-safe, if P is valid in BMCCT . In general, CT is defined as a

function of the design diameter (length of the longest ‘shortest path’ between any two states).

Computing CT is as hard as model checking itself; to overcome this issue, fix-point checks are

employed during state space traversal (image computation of initial state and/or preimage

computation of property negation) as the termination condition. This process of proving that

D is P-safe is referred to as unbounded model checking. We refer the reader to [113] for a

survey of techniques for state space traversal and model checking proposed in the literature.

2.1.4 Over-approximate State Space Traversal

In the recent past, design abstraction - removal of design behavior irrelevant to proving

a given property - has been shown to be effective in overcoming the state-explosion issue.

Abstraction, in general, can be either over-approximate or under-approximate. In the former,

false-negative cases can occur - a spurious counter-example illustrating a trace from the initial

state to a state violating the property in the design may be identified. Similarly, in the latter,

false-positive cases can occur. In other words, identification of a counter-example in the

under-approximated model guarantees the existence of a counter-example in the original

design. However, no counter-example in the under-approximated model does not guarantee

that the target property holds in the original design. In the real-world, abstraction-based

model checking is essential to handle large-scale industrial designs. In [81], an automatic

Counter-Example Guided Abstraction Refinement (CEGAR) framework was proposed; this

framework enabled the verification of industrial designs. In [87], the authors proposed


an interesting abstraction technique based on Craig interpolants [86]. Essentially, they utilize

the proof-logging capability of modern-day SAT solvers like [95, 96] for unsatisfiable runs.

These proofs can then be used to obtain an over-approximate reachable state space of the

system. Given this possibility, the basic idea is to make use of the proofs from unsatisfiable

BMC runs to determine an over-approximate image/preimage state set. This approach,

unlike previous SAT-based methods like [89], is bounded by the longest shortest path between

any two states. Although this can be significantly longer than the diameter of the DUV’s

reachable state space, this approach was shown to be effective on several large verification

instances. However, the Craig interpolant-based method suffers from two major drawbacks - the

computed interpolants representing the over-approximate state space may be highly redundant,

and the over-approximations may not be tight. Recently, in [90, 91], the authors proposed

to integrate over-approximations obtained from sources other than unsatisfiability proofs, like those

based on relationships between state variables, within the Craig interpolant-based model

checking framework. However, even tighter abstractions are necessary for verifying

industrial-sized designs.

2.2 Automatic Test Generation

To ensure defect-free chips and higher yield during high-volume manufacturing, it is essential

to generate effective test patterns. The process of automatically generating test patterns

is usually referred to as automatic test generation. Test patterns are usually generated to


test the circuit for its functionality and timing at various levels of design abstraction that

include behavioral (architecture), register-transfer, logical (gate) and physical (transistor)

levels [2]. Fault models to represent arbitrary defects are necessary for automatic test gen-

eration and for quantitative analysis of generated test patterns. A good fault model should

satisfy two criteria: (i) it should reflect the defect behavior accurately and (ii) it should be

computationally inexpensive in terms of test generation (and fault simulation). A number

of fault models like stuck-at fault, bridge fault, transition fault and path delay fault have

been published in the literature. Further, based on the fault multiplicity used to represent

defects in the circuit, there are two types of fault models - the single and the multiple fault model.

Although the latter can accurately model defects in the circuit, the number of such faults

may be significantly large for industrial-scale circuits. Fortunately, it was shown in [3] that

high fault coverage1 obtained under the single fault model will yield high fault coverage under the

multiple fault model as well. Thus, in industry, the single fault model is typically used for test

generation. In our work, we focus on functional test generation at the logical level using the single

stuck-at fault model.

2.2.1 Fault Collapsing

Typically a set of faults (fault-list) is given as an input to an automatic test generator for

test pattern generation. The size of this fault-list has a direct impact on the computational

complexity of the test generator and test set size returned by it. Fortunately, a single test

1 Fault coverage for a test set T = (number of faults detected by T) / (total number of faults)


vector t for a fault f can serve as a test for several other distinct faults in the fault-list.

Thus it is sufficient to include just f in the fault-list, as a representative fault, excluding

other faults that can be tested with t. However, this is beneficial if and only if the test

generator includes vector t in the final test set T . Otherwise, certain excluded faults may

not have a test in T . The process of eliminating faults from the fault-list is referred to as fault

collapsing. A number of fault collapsing approaches have been proposed in the literature.

These include fault equivalence based collapsing [114,115,116,117,118,128], fault dominance

based collapsing [119, 120, 121, 122, 124] and fault concurrence based collapsing [125, 126].

Of these, collapsing approaches based on the former two have been widely studied and

incorporated in several commercial test generation packages. The problem of identifying a

representative fault f , for eliminating a set of faults from the fault-list, can be as hard as test

generation for f itself. Thus, low-cost mechanisms for identifying most of the collapsing

opportunities are essential.
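A classical example of low-cost structural equivalence collapsing is the gate-local rule for an AND gate: a stuck-at-0 fault on any input forces the output to 0 exactly as the output stuck-at-0 fault does, so all of them collapse onto one representative. The sketch below applies only this one rule and uses illustrative line names; production collapsers apply a full rule set across the netlist.

```python
def collapse_and_gate(inputs, output):
    """Equivalence collapsing for a single AND gate.

    Rule: every input sa-0 is equivalent to the output sa-0, so (output, 0)
    is kept as the representative fault and the input sa-0 faults are dropped.
    Faults are (line, stuck_value) pairs."""
    faults = {(line, v) for line in inputs + [output] for v in (0, 1)}
    return faults - {(i, 0) for i in inputs}

faults = collapse_and_gate(["a", "b"], "z")
assert ("z", 0) in faults and ("a", 0) not in faults
assert len(faults) == 4   # a/1, b/1, z/0, z/1 remain of the original 6 faults
```

Any test that detects the representative (z, 0) also detects the dropped faults, which is exactly the property fault collapsing relies on.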

2.2.2 Detection and Diagnostic Test Generation

Conventional test generation techniques [4, 15, 53, 19, 54] are based on the branch-and-bound

procedure and either explicitly or implicitly utilize the split circuit model [129]. Cheng

showed that the D-algorithm performed better with the split circuit model than with just the

5-valued or 9-valued model. Figure 2.4 illustrates a conceptual representation of this model for

test generation. It uses two versions of the circuit - a fault-free version and a faulty version

which is injected with the fault f for which a test needs to be generated. The corresponding


Figure 2.4: Split Circuit Model for Test Generation

outputs in these two versions are XORed, and an OR gate with all these XOR gates as

inputs is added to the model. Since primary inputs are shared between the two versions, any

assignment to the primary inputs that can imply a logic 1 on the OR gate is a test vector

for f .
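The split circuit model lends itself to a brute-force sketch: simulate the fault-free and faulty copies on the shared inputs and accept any vector for which some XORed output pair evaluates to 1. The single-output AND circuit and its stuck-at-1 fault are assumptions of this example; real ATPG engines search the input space with branch-and-bound rather than exhaustively.

```python
from itertools import product

def find_test(good_fn, faulty_fn, n_inputs):
    """Split-circuit model: vec is a test for the injected fault iff some
    fault-free output XOR faulty output is 1 (i.e., the final OR gate fires)."""
    for vec in product((0, 1), repeat=n_inputs):
        if any(g ^ b for g, b in zip(good_fn(vec), faulty_fn(vec))):
            return vec
    return None   # search space exhausted: the fault is untestable (redundant)

# Illustrative circuit c = a AND b with the fault "input a stuck-at-1".
good   = lambda v: (v[0] & v[1],)
faulty = lambda v: (1 & v[1],)   # line a replaced by the faulty constant 1
assert find_test(good, faulty, 2) == (0, 1)   # a=0, b=1 exposes the fault
```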

Traditionally, test generation was performed with the objective of differentiating a fault

free machine from a faulty machine. Since test generation is a problem in the multi-valued

domain (like 5-valued or 9-valued), conventional techniques encode the problem into the Boolean

domain. In recent years, there has been renewed interest in diagnostic test generation with

application to silicon diagnosis - the process of locating defect sites in defective chips. The

diagnostic test patterns are generated by distinguishing one faulty circuit from another. In

the literature, different models for diagnostic test generation have been proposed [8, 116, 10].

In [118], the authors proposed to use a single pass ATPG to perform ADTG for a given


fault-pair. In our ADTG engine, we employ this ADTG model; it is described next.

2.2.3 ADTG Model

In [118], the authors proposed a novel circuit model for ADTG. Without loss of generality,

consider a fault-pair (f0, f1). Let f0 be the line B → C stuck-at-1 (sa-1) and f1 be the line

D → F sa-0 in Figure 2.5A. Now, they introduce two multiplexers with a common select

line S (new primary input) at the two fault sites as shown in Figure 2.5B. The key is that

the original signals - line B → C and line D → F - are connected to the opposing polarity

inputs in the two MUXes. For example, in Figure 2.5B, the line B → C is connected to the

1-input of MUX M0 and the line D → F is connected to the 0-input of MUX M1. Further,

constants representing the faulty values at their respective fault sites are connected to the

remaining input port of both the MUXes. In Figure 2.5B, a constant 1 is connected to the

0-input of MUX M0 and a constant 0 is connected to the 1-input of MUX M1. When the

select line S = 0, the circuit can be simulated under the presence of fault f0. Similarly, the

circuit can be simulated under the presence of fault f1 when S = 1. Note that the fault-free

circuit is absent in this model. Thus, if an ATPG engine returns a test vector t for the

select line S stuck-at-0 (or equivalently for S sa-1), then t is a diagnostic pattern that can

distinguish the fault-pair (f0, f1). On the contrary, if the ATPG engine exhausts the search

space and returns that the fault on select line S (either sa-0 or sa-1) is untestable, then

the fault-pair (f0, f1) is indistinguishable. Overall, this construction allows the ADTG to

leverage the advances made in conventional ATPG engines to prove fault equivalence or


Figure 2.5: ADTG Model

generate a diagnostic pattern for a given fault-pair in an atomic step.
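The MUX construction can be sketched on a toy circuit. Here z = a AND b, with the fault pair f0 = (line b sa-1) and f1 = (line a sa-0); the circuit and fault pair are assumptions of this example, but the MUX wiring mirrors Figure 2.5B: the original line and the faulty constant sit on opposing polarity inputs, so S = 0 simulates f0 and S = 1 simulates f1, and a test for S stuck-at distinguishes the pair.

```python
from itertools import product

def mux(sel, in0, in1):
    return in1 if sel else in0

def adtg_circuit(s, a, b):
    """MUX-based ADTG model for (f0 = b sa-1, f1 = a sa-0) on z = a AND b.
    M0 (b line): 0-input = constant 1 (faulty value), 1-input = original b.
    M1 (a line): 0-input = original a, 1-input = constant 0 (faulty value)."""
    b_line = mux(s, 1, b)   # S = 0 activates f0 (b stuck-at-1)
    a_line = mux(s, a, 0)   # S = 1 activates f1 (a stuck-at-0)
    return a_line & b_line

def distinguishing_vector():
    """A test for S sa-0 (or sa-1) is any (a, b) on which the two faulty
    machines produce different outputs."""
    for a, b in product((0, 1), repeat=2):
        if adtg_circuit(0, a, b) != adtg_circuit(1, a, b):
            return (a, b)
    return None   # exhausted search space: the fault-pair is indistinguishable

assert distinguishing_vector() == (1, 0)   # under f0, z = 1; under f1, z = 0
```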

2.3 Other Related Concepts

2.3.1 Basic Definitions/Terminology

The operation of an ATPG engine such as PODEM [15] can be regarded as a systematic

and intelligent search of all possible Boolean assignments on the (pseudo) primary inputs

in the DUV/CUT. This systematic search conceptually builds a binary tree often referred

to as the decision tree (DT ). Each node N in the DT is characterized by its decision level

d(N). Note that node N corresponds to a (pseudo) primary input in the DUV/CUT. In

the search process, a decision is referred to as a selective Boolean assignment to the node

N . A decision (say on node N), along with a set of previously assigned nodes (obtained due


to the decisions made before node N) in the DUV/CUT, may imply an internal gate G in

the DUV/CUT to a value v. For this implication, we say that G is implied at the decision

level d(N). The set of gate assignments in the DUV/CUT that imply G = v is referred

to as the antecedents of G = v. Note that, by definition, none of the gate assignments in

the antecedents of G = v is assigned at a decision level higher than d(N). Further, each

implication is defined by the input-output relationship of gates in the DUV/CUT. In the

sequel, we use G^{d(N),v} to represent that gate G is assigned a value v at decision level d(N).

Also, after each decision, usually a breadth-first traversal of the DUV/CUT is performed to

determine its implications.

• Search State Each decision in the decision tree (DT ) leads the branch-and-bound

procedure to a new state, which is referred to as a search state.

• Search State Representation A naive way to represent a search state SS is by

the sequence of decisions in the DT that leads to SS. Let this set of assignments be

represented by A. The frontier of assigned nodes in the DUV/CUT (obtained by the

implications of A) can also be used to represent the search state SS. This frontier of

assigned nodes is referred to as cutset in [38, 68, 45] and E-frontier in [50].

• Search State Classification We classify SS into three types - conflict, success and

intermediate. A conflict search state occurs when there exist no assignments to the

yet-to-be-decided (pseudo) primary inputs such that the objective of the ATPG-engine

can be satisfied. A success search state occurs when the objective of the ATPG-engine


Figure 2.6: Success-driven learning

is satisfied; an intermediate search state is one which may lead to either a success or

conflict search state based on the future decisions in the search process. For instance,

suppose the objective of the ATPG-engine is to generate a test pattern. Then a conflict

state occurs if there is no X-path to propagate the fault effect to a primary output.

Further, a success state occurs if a fault effect is propagated to at least one primary

output. Finally, SS is usually defined by a set of gate assignments. For instance, if

SS is a success search state obtained during test generation then it is represented by

the assignment o = v, where o is the output at which fault effect v is observed. In the

rest of this dissertation, we refer to the search state implied by the current set

of decisions, made by the ATPG-engine during the search process, as the current search

state.


2.3.2 Antecedent Tracing

The basic idea of Antecedent Tracing (AT) was originally published in [46] for the path

sensitization problem in test generation. Later, it was used in [17, 39] to drastically improve

the performance of modern-day SAT solvers. Here, we briefly explain AT with an example.

Consider the slightly modified partial c432 circuit from the ISCAS85 benchmark suite and the

partial DT for the fault primary input G14 sa-1 shown in Figure 2.6. After the decision at

level 7 (current decision level) the search reaches a success state - fault effect is observed at

the primary output G203. Now, AT can be used to determine the antecedents of this success

state. Essentially, we backward traverse the CUT from the gate that represents the state

(i.e., gate G203). During such a traversal, we identify the antecedents that were assigned at

the current decision level and schedule them for further backward traversal. We stop when

we reach the primary input that was decided at the current decision level. Further, the

antecedents that were assigned at a level lower than the current decision level are recorded

in a set R. In the considered example, during the backward traversal through G203, the

antecedent G202^{5,D} (G202 = D at level 5) is recorded in the set R and G171 (implied at level 7) is

scheduled for further backward traversal. Next, when traversing through G171, we find that

its antecedent is the primary input decided at the current decision level and the traversal

ends. It can be seen that the assignments in the set R ({G202^{5,D}}) together with the decision

made at the current decision level (G34^{7,0}) implied the success state G203^{7,D}. The authors in [46]

used AT to determine the antecedents for a given conflict state. However, as shown above,

AT can be utilized to determine the antecedents of any given state, whether success, conflict


or intermediate state - defined by a set of gate assignments. Further, the authors in [46] use AT only to

determine the non-chronological backtrack level. In our learning framework, in Sections 3.3

and 6.3, we show that the information learned from AT can be used to identify non-trivial

redundant search states in future search.
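Abstracting Figure 2.6 into a small data structure, the antecedent-tracing walk described above can be sketched as follows. This is a minimal illustration, not the thesis ATPG engine; the dict-based implication graph and function names are my own assumptions.

```python
# Hypothetical sketch of Antecedent Tracing (AT) over gate assignments.
# Each assigned gate maps to (decision_level, antecedent_gates).

def antecedent_trace(state_gate, assignments, current_level, decision_gate):
    """Return R: antecedents assigned below the current decision level."""
    R = set()
    frontier = [state_gate]
    seen = set(frontier)
    while frontier:
        gate = frontier.pop()
        if gate == decision_gate:          # reached the current decision: stop
            continue
        _level, antecedents = assignments[gate]
        for ant in antecedents:
            if ant in seen:
                continue
            seen.add(ant)
            ant_level, _ = assignments[ant]
            if ant_level < current_level:
                R.add(ant)                 # record; do not traverse further
            else:
                frontier.append(ant)       # assigned at current level: keep tracing
    return R

# Mirroring the example: G203 implied at level 7 by G202 (level 5) and G171
# (level 7); G171 implied by the level-7 decision on G34.
assigns = {
    "G203": (7, ["G202", "G171"]),
    "G171": (7, ["G34"]),
    "G202": (5, []),
    "G34":  (7, []),
}
R = antecedent_trace("G203", assigns, current_level=7, decision_gate="G34")
```

On this toy implication graph the trace records only G202 in R, matching the discussion above.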


Chapter 3

Symbolic Model Checking

3.1 Introduction

Symbolic Model Checking techniques traverse the design state space implicitly to verify whether the Design Under Verification (DUV) satisfies a given property. For this purpose, efficient state

space exploration methods are essential. One approach - image computation - starts from

the initial states of the design and attempts to show that no state satisfying the negation of

the property can be reached. A dual to this approach is to compute the pre-image of the states representing the negation of the property until a fix-point is reached. If the computed pre-image states include the initial states, then the property is violated; otherwise, provided the fix-point is reached, the property is valid. Either way, it is essential to efficiently

explore the design state space by avoiding already explored search spaces (redundant search



spaces).

Initially, Reduced Ordered Binary Decision Diagram (ROBDD) based methods were used for symbolic state space traversal [74]. These methods were attractive since ROBDDs provide a canonical representation of functions and efficient Boolean manipulations. However,

these methods explode in memory for large designs and are thus limited in their applicability.

Although variable ordering and partitioning techniques [30,31,32,33,34] have been proposed

to alleviate this problem, these methods can still result in exponential space complexity for

large designs. Later, graph-based variants [40,41] were proposed to represent the transition

relation as an alternative to BDD-based methods. However, the formula size may still grow

exponentially.

In [71,41], the authors proposed techniques that try to synergistically combine the strengths of BDDs and SAT solvers for state space exploration. However, BDDs are inherently prone

to memory explosion. Subsequently, interest in pure SAT solver based state space traversal

increased. The authors in [75] proposed to use pure SAT-solvers for model checking instead

of BDDs. Later, in [42], McMillan proposed a method to elegantly modify the SAT solvers

to traverse the state space. They use the solutions obtained by SAT-solvers to represent the

state space traversed and refer them as blocking clauses. Finally in [44,45], the authors pro-

posed hybrid-SAT solvers that uses the design information but still works on the conjunctive

normal form representation of the problem. Although these methods avoid the exponential

space complexity, they may potentially explode in time. Thus efficient learning mechanisms

to prune redundant search states are imperative.


In this chapter, our contributions [35, 36, 37] can be summarized as follows:

1. We propose an efficient search state representation that allows us to determine non-

trivial redundant search states. A novel extensibility concept is proposed to facilitate

this process.

2. We show that our learning method can be used to compute over-approximated pre-

image, which may be of interest for certain verification problems like Model Checking

and Pre-silicon Design Debugging.

3. Finally, we propose a probability-based heuristic to guide our learning process.

Outline: Section 3.2 discusses the basic concepts for understanding the rest of the chap-

ter. Section 3.3 discusses our contributions in this chapter and Section 3.4 provides the

experimental evaluation. Finally, we summarize our contributions in Section 3.5.

3.2 Preliminaries

3.2.1 Preimage Computation Example

A systematic way to enumerate all possible assignments on a given set of variables is to

use a branch-and-bound procedure. This procedure conceptually builds a decision tree,

which can also be thought of as a free BDD. In our work, we decide only on the variables

in I and X, unlike the SAT-based methods. This is sufficient to compute the pre-image


since the monitor circuit is a function of these variables. Further, it effectively reduces the

search space that needs to be explored. Figure 3.1B shows the partial decision tree obtained

during pre-image computation for the monitor shown in Figure 3.1A. Let {a, b, d, e} ∈ X

and {c, f, g, q} ∈ I. We recursively build the pre-image bottom-up; the logical expression

to the left of each decision node N in Figure 3.1B represents the pre-image P computed

bottom-up after exploring the search space underneath N in the decision tree. The ITE

operator is used to construct P . Let v represent the variable decided at N . Also, let P0

represent the pre-image obtained for the else decision at N (v = 0) and P1 for the then

decision (v = 1). Now P = ((v ∧ P1) ∨ (¬v ∧ P0)) if v ∈ X; otherwise P = (P1 ∨ P0), since v ∈ I and, by Equation 2.2, input variables must be existentially quantified in the pre-image.

Note that N represents the sub-tree T underneath it in the decision tree and P represents

the pre-image that is obtained by exploring T. For instance, in Figure 3.1B, ((d ∧ 1) ∨ (¬d ∧ 1)) (= constant 1) represents the pre-image that is computed after exploring the

search space under the decision node with v = d. Therefore, if N is the root node of the

decision tree then P will represent the required pre-image.
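The bottom-up combination rule can be illustrated with a toy enumeration. This is an illustrative sketch, not the thesis engine: the monitor, the variable sets, and the set-of-cubes encoding are my own assumptions. Each leaf records its assignment to the X variables, so taking the union over both branch values of v ∈ X plays the role of (v ∧ P1) ∨ (¬v ∧ P0), while for v ∈ I the union realizes the existential quantification.

```python
# Enumerate decisions on state variables X then input variables I, keeping
# the X-assignments for which SOME input assignment satisfies the monitor.

def preimage(monitor, X, I):
    def explore(variables, partial):
        if not variables:
            # leaf: a solution contributes the X-part of the assignment
            return {tuple(partial[x] for x in X)} if monitor(partial) else set()
        v, rest = variables[0], variables[1:]
        result = set()
        for val in (0, 1):
            partial[v] = val
            result |= explore(rest, partial)   # union = OR of branch pre-images
        del partial[v]
        return result
    return explore(list(X) + list(I), {})

# Hypothetical monitor: out = (a AND b) OR c, with X = (a, b) and I = (c,).
pre = preimage(lambda s: (s["a"] and s["b"]) or s["c"], ("a", "b"), ("c",))
```

Because the input c = 1 satisfies this monitor regardless of the state, every assignment to (a, b) belongs to the pre-image here.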

3.2.2 Terminology

• Redundant Search State Suppose a sequence of decisions A leads to a search state

SS in the DT . Let the sub-tree that needs to be explored in the DT (after SS search

state is obtained) be represented by T . If this sub-tree T was previously explored during

the branch-and-bound procedure, then SS is referred to as a redundant search state. Also,



Figure 3.1: Example Circuit and DT for Preimage Computation

the search space represented by the sub-tree T is referred to as redundant search space.

Example: The sub-trees that are bold-faced in Figure 3.1B represent redundant search

spaces since these sub-trees have already been explored under the decision d. Further,

we refer to the search space represented by T as the search space under SS and the

pre-image computed bottom-up after exploring T as the pre-image under SS.

• Search State Extensibility Consider two search states SS1 and SS2. Let A2 rep-

resent the sequence of assignments in the DT that leads to SS2. We define that SS1

is extensible to SS2 (SS1ExtSS2) if a certain subset (A2′) of assignments in A2 can

lead to SS1. Example: In Figure 3.2, the assignments in curly braces to the right of

each decision node N represent the search state in the DT just before N . Let SS1 =

{k} - search state obtained due to the decision g. Also let SS2 = {n} obtained by A2


= {g, f , q}. Now SS1ExtSS2 since A2′ = {q} leads to SS1.
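Operationally, SS1ExtSS2 amounts to checking that every assignment in the learned representation of SS1 is covered by the current assignments. The sketch below is my own simplified encoding of that test; the signal names loosely mirror Figure 3.2, and the concrete values are illustrative assumptions.

```python
def extensible(learned_state, current_assignments):
    """SS1 Ext SS2: every assignment representing SS1 is covered by the
    current assignments (decisions plus their implications)."""
    return all(current_assignments.get(sig) == val
               for sig, val in learned_state.items())

# SS1 = {k} was learned under the decision g; suppose a later decision
# sequence A2 implies k (via the decision q in the figure), so SS1 is
# extensible to the current search state SS2.
SS1 = {"k": 1}
current = {"g": 1, "q": 1, "k": 1}   # k implied by q; values illustrative
ok = extensible(SS1, current)
```

If the implication of k were missing from the current assignments, the same check would report that SS1 is not (yet) extensible to the current state.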

3.3 Search State Extensibility Driven Learning

3.3.1 Motivation

Branch-and-bound procedures explicitly search all possible assignments. For an efficient search process, it is imperative to identify redundant search spaces and to direct the solver

to avoid such spaces. SAT and Hybrid-SAT methods [42, 44] use AT and solution-cube en-

largement to prune redundant search states. To avoid exploring previously explored solution

assignments, these methods propose to use AT to determine the reason for the solution.

Then this reason is enlarged to capture several additional solution assignments. However,

the reason is a function of the state variables alone, so it only prevents re-exploring the same assignments on the state variables. In [39, 73], the authors have empirically shown

that the pruning capability of a solver is enhanced if the reason (a cut in the implication

graph IG) learned for a conflict is in proximity to the conflict in the IG. We refer readers

to [39,73] for the notion of proximity. However, the key is that several distinct assignments

on the decision variables can imply the same reason for the conflict. Hence learning a reason

that is close to the conflict in the IG can avoid all such distinct assignments in the future

search.

In [38,68,69,45], the authors propose to represent explored search states using cutsets. They


use an ATPG engine to perform the branch-and-bound procedure. Whenever a learned

cutset LC occurs again, they can readily conclude that the current search state CS is

redundant. To obtain the pre-image under CS they use the pre-image obtained under

the search state represented by LC. Figure 3.2 illustrates the identification of a previously

learned cutset. Suppose the current sequence of decisions is {g, a, d, e}. Then the ATPG-

based methods identify that the cutset obtained for this sequence is the same as that for

{g, a, d} sequence of decisions. Thus they use the pre-image under the decision d (constant

1) as the pre-image under the recent decision e. This allows them to avoid exploring the

search space under the decision e. Note that for the ATPG engine, the monitor circuit

serves as the IG and the cutsets in the monitor are in proximity to the relevant search states

that it represents in the DT . This is unlike the SAT/Hybrid-SAT based methods where

the blocking clauses are defined on the state variables alone. Since different assignments on

the state and input variables can lead to the same cutset (as discussed in the above case)

the ATPG-based methods can determine all such assignments as redundant. However, the

ATPG-based methods require cutsets to be learned at each decision node in the DT . This

may be an overkill on the memory resources and time spent on Boolean reasoning of these

learned cutsets. We explain the motivation for our work using the following example.

Example: Consider Figure 3.2; in ATPG-based methods, the search state for the decision d

is represented by the cutset {k, h, i}. These methods learn that whenever this cutset occurs

again during the subsequent search process, the pre-image under the current search state

will be constant 1. However, antecedent tracing for the decision c identifies that {k, h, c}



Figure 3.2: DT showing non-trivial redundant search states


is a sufficient reason (R0) to imply the solution underneath the decision c. Since AT uses

the monitor circuit as the IG we have two choices as we backtrace through gate p when

determining R0. This is because both l and m imply p. We propose a probabilistic heuristic

in Section 3.3.6 for making a decision in such cases; for now let us assume that we backtrace

through gate l. Similarly, the sufficient reason (R1) for the conflict underneath decision c

is {c}. Now, it is obvious that the binary resolution of R0 and R1 - {k, h} - is sufficient to

represent the search state under the decision d. Note that this is a stronger learning since

we learn a smaller set of assignments to represent the search state unlike the cutsets used

in ATPG-based methods. Further, it allows us to perform Non-Chronological backtracking

(NCB) during the branch-and-bound procedure. For the example case discussed, we can

directly backtrack to level 2 and avoid the redundant search space under decision d. The

NCB also helps in significantly reducing the number of learned search states - we can avoid

learning the trivial search state at decision level 3 and those under the decision d.

Furthermore, a search state (SS2) is identified as a redundant search state in ATPG-based

methods only when the cutset representing SS2 is equivalent to a previously learned cutset

(for a search state SS1). This may suggest that learning a smaller set of assignments to represent search states may not be beneficial to the ATPG-based methods. For instance,

let the current search state (SS2) be the one obtained by the set of decisions {g, a, e, b} in

Figure 3.2. It is represented by the cutset {k, h, e}. Since the current ATPG-based methods

determine a mismatch of this cutset with the search state {k, h} (SS1), previously learned

by AT, they do not conclude that the current search state SS2 is a redundant search state.


However, it may be observed that SS1ExtSS2. Further, we have already explored the search

space under SS1. Thus, it is intuitive that SS2 is a redundant search state (formally proved

later). In Figure 3.2, it can be seen that the sub-tree under SS2 is a sub-tree of that under

SS1. Therefore, our search state representation is stronger as it allows us to identify several

non-trivial redundant search states based on search state extensibility. Note that when we

identify the current search state to be redundant, we also need to compute the pre-image

under SS2. This is essential because we build the pre-image bottom-up. We discuss this in

the next section.

Overall, in order to determine non-trivial redundant search states based on search state

extensibility

1. We use AT to determine a smaller set of assignments sufficient to represent a search

state, unlike the ATPG-based methods. This succinct representation allows us to perform NCB and thereby learn significantly fewer search states than existing ATPG-based methods.

2. Based on our search state representation and search state extensibility, we propose a

method to efficiently determine and prune more non-trivial redundant search spaces than existing methods. This may effectively reduce the number of enumerations

performed during pre-image computation. We also present a method to compute pre-

image under the current search state when it is determined as a redundant search

state.


3. We propose a probability-based heuristic that directs the AT to make appropriate

choices (if any) as it backtraces through the monitor circuit. Our heuristic is directed

towards quickly obtaining a smaller, frequently occurring search state.

3.3.2 The Proposed Learning

Consider a node N in the decision tree DT . Let SS1 be the search state obtained due to

the sequence of decisions on the path from the root of DT until node N . After the sub-tree

T rooted at decision node N in the DT is explored, we use AT to obtain a smaller set of

assignments A that is sufficient to represent the search space represented by T . Thus A is

sufficient to represent SS1. Since we compute the pre-image bottom-up, pre-image P1 for

the search space represented by T will be obtained once it is explored. Now, we learn that

the pre-image under SS1 is P1. In the future search, if a search state SS2 is obtained such

that SS1ExtSS2, we can use P1 to obtain the pre-image P2 under SS2.

We encode the learned search states as Boolean clauses and store them in a clause database

DB. For each assignment made during the branch-and-bound procedure, we propagate this

constraint in the DB to see if any learned search state SS1 (in DB) becomes extensible to

the current search state SS2. This Boolean Constraint Propagation can be efficiently done

using a literal-watching scheme [18]. Note that we do not perform any implication learning in the DB, since we are just interested in identifying extensibility relationships among search

states. During the branch-and-bound procedure, suppose we determine that a previously


Figure 3.3: Search State Extensibility Based Learning

learned search state SS1 is extensible to the current search state SS2. Then SS2 is a

redundant search state. This is formally proved below.
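The constraint propagation over the database DB described above can be sketched with a simplified watching scheme. This is my own one-watch simplification of the two-literal scheme cited as [18], with illustrative names throughout, not the thesis implementation: each learned search state is a cube of assignments, we watch one not-yet-matched member per cube, and a cube is re-examined only when its watched signal is assigned.

```python
class StateDB:
    def __init__(self):
        self.cubes = []      # list of dicts: signal -> required value
        self.watch = {}      # signal -> list of cube indices watching it

    def learn(self, cube):
        idx = len(self.cubes)
        self.cubes.append(cube)
        sig = next(iter(cube))                 # watch an arbitrary member
        self.watch.setdefault(sig, []).append(idx)

    def on_assign(self, sig, val, assignment):
        """Called for each new assignment; returns indices of learned
        search states that have become extensible to the current one."""
        assignment[sig] = val
        found = []
        for idx in self.watch.pop(sig, []):
            cube = self.cubes[idx]
            unmatched = [s for s, v in cube.items() if assignment.get(s) != v]
            if not unmatched:
                found.append(idx)              # SS1 Ext SS2 detected
            else:
                # move the watch; a real engine would also park cubes whose
                # required value is contradicted until the solver backtracks
                self.watch.setdefault(unmatched[0], []).append(idx)
        return found
```

A usage example: learning the cube {k = 1} and later assigning k = 1 reports that cube's search state as extensible to the current one.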

For further discussion, let SS1 be a previously learned search state and SS2 be the current

search state. Let SS1ExtSS2 and A2 be the set of decisions in the DT that imply the

current search state SS2. Let A1 represent the set of decisions that lead to SS1 when it was

learned. Also, let T1 represent the sub-tree in the DT under SS1 and T2 that under SS2.

In the subsequent discussions we refer to out as the single output of the monitor discussed

in Section 2.1.2. Further, consider the following definitions:

1. Solution/Conflict cube under a search state SS: A cube m under SS is referred to

as the set of assignments under SS in the DT that leads to a terminal node in the DT .


m is a solution (conflict) cube if the terminal node is a solution (conflict). Example:

Consider Figure 3.3. If SS = {a, f, g} then b is a solution cube and ¬b is a conflict cube.

2. Cofactor of a cube m under a search state SS with respect to an assignment A (mA)

is the cube m restricted to the sub-domain where A is true. Example: Let m = ¬b and A = {c, b}. Then mA is constant 0 (false). If A = {c} then mA = ¬b. Also, let T be

the sub-tree under SS in the DT . Then we refer to cofactoring each cube in T with

respect to A as cofactoring T with respect to A (TA).

3. Let A and B be two assignments. We say that A and B are inconsistent if A ∧ B

evaluates to constant 0 (false); otherwise they are consistent.
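The cofactor and consistency definitions above can be made concrete with cubes encoded as variable-to-value dicts; this encoding is my own illustration, not the thesis data structure.

```python
def cofactor(cube, A):
    """Restrict cube to the sub-domain where assignment A holds.
    Returns None for the constant-0 (empty) cube, i.e. when cube and A
    are inconsistent."""
    if any(var in cube and cube[var] != val for var, val in A.items()):
        return None
    return {var: val for var, val in cube.items() if var not in A}

m = {"b": 0}                          # the cube ¬b
r1 = cofactor(m, {"c": 1, "b": 1})    # A = {c, b}: inconsistent with ¬b
r2 = cofactor(m, {"c": 1})            # A = {c}: cofactor is still ¬b
```

Here r1 is the constant-0 cube and r2 is ¬b, mirroring the running example.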

Lemma 1. Let m1 be a solution (conflict) cube under SS1. If m1 and A2 are consistent,

then (m1 ∧ A2) ⇒ out (respectively, (m1 ∧ A2) ⇒ ¬out).

Proof: We prove for solution cube; proof for conflict cube is similar. Since m1 is under SS1,

m1 and SS1 are consistent. Since SS1ExtSS2, A2 ⇒ SS1. So A2 and SS1 are consistent.

Also, it is given that m1 and A2 are consistent. Thus m1 ∧ A2 ∧ SS1 does not evaluate to constant

0. Since m1 is a solution cube m1 ∧ SS1 ⇒ out. Therefore, m1 ∧ A2 ∧ SS1 ⇒ out. Since

A2 ⇒ SS1, m1 ∧ A2 ⇒ out.

Theorem 1. T2 = {m2 | m2 = m1A2 for every cube m1 under SS1}; in other words, T2 = T1A2.

Proof: Complete: For completeness, we need to show that T1A2 is a complete sub-tree

defined over the variables V 2 = {(I ∪ X)\A2}. T1A2 will be free of variables in A2 due to


cofactoring of T1 with respect to A2; thereby it will be defined over the variables in V 2.

Further, T1 is a complete sub-tree defined over the variables V 1 = {(I ∪ X)\A1}. Thus

T1A2 will be a complete sub-tree over variables in V 2.

Sound: We need to show that the construction of T2 does not add a conflict cube as a

solution cube (Case A) and vice-versa (Case B).

Case A: Suppose an originally conflict cube m2 in T2 is added as a solution cube by our

construction. Then T1 has a corresponding solution cube m1 such that m2 = m1A2. Note

that m1 and A2 are consistent; otherwise m2 does not exist. Then by Lemma 1, (m1 ∧ A2)

⇒ out. Since m2 = m1A2 and (m1 ∧ A2) ⇒ out, (m2 ∧ A2) ⇒ out. Thus m2 is indeed a

solution cube in T2.

Case B: Suppose an originally solution cube m2 in T2 is added as a conflict cube by our

construction. Then T1 has a corresponding cube m1 such that m2 = m1A2. Note that either

m1 and A2 are inconsistent or m1 is a conflict cube in T1. If m1 and A2 are inconsistent

then m2 = m1A2 cannot be a solution cube in T2. So they must be consistent. Therefore, m1 is a conflict cube in T1 and m1 and A2 are consistent. By Lemma 1, (m1 ∧ A2) ⇒ ¬out. Further, since m2 = m1A2, (m2 ∧ A2) ⇒ ¬out. Thus m2 is indeed a conflict cube in T2.

Theorem 2. The search space represented by T2 is redundant.

Proof: T1 is already explored. Further, by Theorem 1, T2 can be constructed from T1.

Thus T2 is a redundant search space. □

Example: Let us consider Figure 3.3. The sets in curly braces next to each decision


node N represent the search state learned by AT after exploring the space under N. The

logical expression represents the pre-image (computed bottom-up) at N obtained after this

exploration. Assume that we have explored the space under decision c and that the current

search state SS2 is {¬b, f, g}, obtained by the set of decisions {c, d, ¬b} (A2). Further, let a and b be the state variables. Also, let SS1 be the learned search state - {f, g} - under the

decision c. Thus SS1ExtSS2. However, it can be seen that the pre-image (P2) under SS2

is empty (constant 0) whereas that under SS1 is a ∧ b (P1). Here, P2 can be obtained by

computing the cofactor of P1 with respect to A2.

3.3.3 Over-approximated Preimage

Theorem 1 can be used to obtain the set of solutions under the current search state SS2

whenever SS1ExtSS2 where SS1 is a previously learned search state. Thus it helps in

determining SS2 as a redundant search state during pre-image computation, which is not

possible by existing methods. One issue in using this learning for pre-image computation is

that the pre-image obtained at any decision node in the DT will have the input variables

I existentially quantified (Equation 2.2). Thus, by the construction of Theorem 1, certain

additional solution cubes will be added to the set M2. For instance, consider the example discussed above. Here, we assumed that a and b are state variables. Suppose only

a is a state variable and b is an input variable. Then the pre-image P1 under the search

state SS1, obtained by the decision c, will be a since b is existentially quantified in the

pre-image computation. Later, when SS2 is encountered, we determine that SS1ExtSS2.


In this scenario, we will compute the cofactor of P1 with respect to A2 (P1A2) to obtain

P2 as the pre-image under SS2. Now P2 = a; this is not correct since under SS2 there

is no solution as shown in the Figure 3.3. The reason is that when we learned P1, we lost

the information due to existential quantification that under SS1, we need b = 1 to get a

solution. However, if we had learned P1 as a ∧ b then P1A2 would have given the desired

result. Note that the Theorem 1 is sound and complete in determining that the search state

SS2 is redundant. However, the issue is in computing the pre-image P2 under search state

SS2. This is the reason that we refer to cubes under a search state in general and not the

pre-image cubes under a search state in the Lemma and Theorem discussed above.

One interesting observation is that if we just use the cofactor of pre-image P1 with respect

to A2, we obtain an over-approximation of P2. This is due to the nature of existential

quantification of input variables. Precisely, in each solution cube under SS1, we only lose the

necessary assignments on I variables when computing P1 due to existential quantification.

Thus no solution under SS1 is missed in P1. For instance, in the above example, we lost

the information that b = 1 is necessary under SS1 for the solution cube a ∧ b. Still we did

not miss this solution - P1 contains a. So if we use the cofactor of P1 with respect to A2

as the pre-image under SS2, the soundness (Case A) of Theorem 1 may be compromised.

Therefore, for such cases, the final pre-image computed may be an over-approximation of

the original pre-image.



Figure 3.4: Preimage Computation based on Search State Extensibility


3.3.4 Computing the exact pre-image - Implementation Details

We explain our exact pre-image computation procedure using the same example discussed

above. However, for this discussion, we assume that b and d are input variables and a

and c are state variables for the design shown in Figure 3.3. The complete decision tree

DT depicting the branch-and-bound procedure is illustrated in Figure 3.4A. To the left of

each decision node N is the pre-image function, which is defined in Figure 3.4B.

The pre-image is computed bottom-up after exploring the sub-tree rooted at N . Note that

in order to leverage the proposed learning technique and to compute the exact pre-image,

we do not existentially quantify the input variables in the computed pre-image during the

decision tree search. Thus we can obtain P2 by computing the cofactor of P1 with respect

to A2. This is because P1 is not existentially quantified and thereby no information about

necessary assignments on solution cubes is lost in P1. We store the pre-image computed as

an AND/INVERTER Graph (AIG) [70]. Note that packages to construct memory-efficient AIG structures [62, 63] are available. We leverage these packages to build memory-efficient pre-image functions. Finally, after the branch-and-bound procedure terminates and

returns the set of all solutions, we quantify the input variables. Quantifying the input

variables in this set of all solutions is just linear in complexity.

For instance, Figure 3.4C represents the final set of all solutions returned by the branch-and-

bound procedure. Now, we need to existentially quantify the input variables - b and d - in

order to obtain the original pre-image. Note that we use the ITE operator to compute the

set of all solutions under a decision node N . Let P0 represent the solution cubes explored


under the else decision at N and P1 that under the then decision. Let v represent the variable decided at node N. Then the set of all solutions representing the explored sub-tree rooted at N is (v ∧ P1) ∨ (¬v ∧ P0). Further, note that the DT represents a free BDD, i.e., no variable occurs more than once along any path of the decision tree. Since we cofactor P1 with respect to A2 to obtain P2, this characteristic remains intact in the DT. Thus P0 and P1 do not depend on the variable v. So, to existentially quantify the variable v, we replace it with the non-controlling value at each gate g such that v is an input to g. Figure 3.4D represents the scenario where primary input variable d is

existentially quantified from the set of all solutions represented in Figure 3.4C. Further, Figure 3.4E represents the final pre-image after existentially quantifying the remaining

input variable b. Thus after the set of all solutions is obtained, the quantification step is just

linear in the number of input variables.
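To make the quantification step concrete, the sketch below uses illustrative tuple-encoded AIG-style nodes (this is a minimal sketch, not the AIG packages of [62, 63]; all helper names are assumptions). It builds a solution set of the form (v ∧ P1) ∨ (¬v ∧ P0) where P0 and P1 do not depend on v, and checks by truth table that dropping the v literals agrees with the general Shannon-expansion definition of existential quantification:

```python
from itertools import product

# Tuple-encoded AIG-style nodes: ('var', name), ('const', b), ('not', f), ('and', f, g)
def VAR(n):    return ('var', n)
def NOT(f):    return ('not', f)
def AND(f, g): return ('and', f, g)
def OR(f, g):  return NOT(AND(NOT(f), NOT(g)))   # OR built from AND/NOT, as in an AIG

def evaluate(node, asg):
    tag = node[0]
    if tag == 'var':   return asg[node[1]]
    if tag == 'const': return node[1]
    if tag == 'not':   return not evaluate(node[1], asg)
    return evaluate(node[1], asg) and evaluate(node[2], asg)

def substitute(node, name, value):
    """Replace variable `name` by the Boolean constant `value` everywhere."""
    tag = node[0]
    if tag == 'var':   return ('const', value) if node[1] == name else node
    if tag == 'const': return node
    if tag == 'not':   return ('not', substitute(node[1], name, value))
    return ('and', substitute(node[1], name, value),
                   substitute(node[2], name, value))

def exists(node, name):
    # General definition via Shannon expansion: (f|v=0) OR (f|v=1).
    return OR(substitute(node, name, False), substitute(node, name, True))

# Solution set under a decision node on v: (v AND P1) OR (NOT v AND P0),
# with P0, P1 independent of v (the free-BDD property of the DT).
P0 = AND(VAR('a'), VAR('c'))
P1 = NOT(VAR('c'))
f  = OR(AND(VAR('v'), P1), AND(NOT(VAR('v')), P0))

# Because v occurs at most once on any path, quantifying v reduces to
# dropping the v literals (replacing them by the non-controlling value).
q_fast = OR(P1, P0)
q_ref  = exists(f, 'v')
for a, c in product([False, True], repeat=2):
    asg = {'a': a, 'c': c}
    assert evaluate(q_fast, asg) == evaluate(q_ref, asg)
```

The loop verifies, over all assignments to the remaining variables, that the shortcut quantification equals the reference definition, which is exactly why the final quantification step is linear in the number of input variables.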

3.3.5 Related Work

Search State Extensibility based learning efficiently utilizes the extensibility information

among search states to identify non-trivial redundant search states. However, the challenge

lies in obtaining the pre-image under a search state identified as redundant. The above section addresses this issue. A closely related work is reported in [45]. However, the main difference

between their method and ours is that they learn the entire cutsets obtained after each

decision in the decision tree. Our approach uses AT to compute a succinct assignment set

sufficient to represent the search state. Thus our learning framework can identify more non-trivial redundant search states than the approach proposed in [45]. Further, due to non-chronological backtracking, we do not learn search states at each node in the decision tree.

3.3.6 Heuristic for guiding AT

In Section 3.3.1, we pointed out that AT may encounter choices when backtracing through the

circuit to determine a reason. These choices occur when two or more inputs (ci's) independently control the value at a gate cg. Although enumerating all possible reasons (for a given search state) may be of interest, it may incur significant time overhead. This raises the

question of which ci to choose during the backward traversal through cg.

Controllability metrics of a signal have long been used for circuit-related problems. However,

we are interested in the frequency of occurrence of the value at a ci, which implies the

controlling value at cg. Choosing a ci with high frequency of occurrence of the value (which

implies controlling value at cg) may heuristically allow the AT to determine a more frequently

occurring reason. Since a learned search state that occurs more frequently leads to more search space pruning, this heuristic is a good choice.

(A) Circuit   (B) Partial Decision Tree (Obj: out = 1)

Figure 3.5: Probabilistic heuristic to guide AT


Consider the circuit in Figure 3.5. The decision tree shows the search state SS after the

decision {c}. Suppose the next decision is {a}; then the objective (out = 1) is satisfied. Now,

AT has two choices (d = 1 and e = 1) as it backtraces through the gate out to determine

the reason for out = 1. It can be seen that the frequency of occurrence of d = 1 is higher

than that of e = 1. Further, if AT backtraces through gate d then a = 1 is obtained as

the reason and for the backtrace through gate e, {a = 1, c = 0} is obtained as the reason.

Obviously the frequency of occurrence of a = 1 is higher than that of {a = 1, c = 0}. Thus

the heuristic helps in determining a reason with a high occurring frequency.

Choosing a ci with a greater frequency of occurrence may imply that a smaller reason is

obtained by the AT. This is because a smaller number of nodes may be involved in implying

the ci. For instance, to imply an AND gate to 0, it is sufficient that one of its inputs is

0. However, to imply the AND gate to 1, it is necessary that all of its inputs are 1. This

is reflected by the frequency of occurrence of 0 and 1 values at the AND gate’s output.

Obtaining a = 1 as the reason in the above example explains this case. Thus the heuristic

may direct AT to obtain the smallest possible reason.

Further, the above heuristic may boost the performance of the learning method by obtaining

a sufficient reason quickly. This again follows from the notion that a greater frequency of

occurrence of a value at ci leads to a smaller number of nodes in the fanin cone of ci to imply

it. Thus a smaller number of nodes needs to be processed in the fanin cone of ci during

the backward traversal. Hence the heuristic is beneficial even if the reason for a

given state is unique.


Determining the exact frequency of a value occurrence at each gate in the monitor (Figure

could be tedious, so we use probability-based measures. Since we represent the monitor using 2-input AND and NOT gates (And-Inverter Graph [70]), we illustrate the formulae to compute the probability of the occurrence of a value at a 2-input AND gate G below:

P(Gc) = Σi P(Gci) − Πi P(Gci)  and  P(Gnc) = Πi P(Gnci), where

• P (Gc)/P (Gnc) is the probability of G getting assigned its controlling/non-controlling

value

• P(Gci)/P(Gnci) is the probability of the ith input of G getting assigned the controlling/non-

controlling value

Note that these formulae assume that the inputs to G are independent. For a primary input

pi, we use a value of 0.5 for both P (pi0) and P (pi1). Similar formulae can be obtained for

any type of gate. These frequency occurrence values are static in the sense that they can be

pre-computed quickly for any given design.
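As a small illustration (not the dissertation's implementation; all names and the sample probabilities in the chooser are assumptions), the sketch below propagates the static controlling/non-controlling probabilities through a 2-input AND gate and an inverter under the stated independence assumption, with 0.5/0.5 at primary inputs, and then applies the heuristic of picking the backtrace candidate whose required value occurs most frequently:

```python
def and_probs(pc0, pnc0, pc1, pnc1):
    """2-input AND gate: controlling value is 0, non-controlling value is 1.
    P(Gc) = sum_i P(Gci) - prod_i P(Gci);  P(Gnc) = prod_i P(Gnci).
    Assumes the two inputs are independent."""
    pc  = pc0 + pc1 - pc0 * pc1   # at least one input carries a 0
    pnc = pnc0 * pnc1             # all inputs carry a 1
    return pc, pnc

def not_probs(pc, pnc):
    # An inverter swaps the roles of the two values at its output.
    return pnc, pc

# Primary inputs: P(0) = P(1) = 0.5
pi = (0.5, 0.5)

# g = a AND b: a 1 at g is rarer (0.25) than a 0 (0.75), reflecting that
# implying an AND gate to 1 constrains all of its inputs.
g_pc, g_pnc = and_probs(*pi, *pi)
print(g_pc, g_pnc)            # 0.75 0.25

def pick_backtrace_input(candidates):
    """Heuristic: among the inputs ci that independently justify the value
    at cg, pick the one whose required value occurs most frequently."""
    return max(candidates, key=lambda c: c[1])

# Two hypothetical choices (label, probability of the required value):
choice = pick_backtrace_input([('d=1', 0.4375), ('e=1', 0.25)])
print(choice[0])              # d=1
```

Since the gate-level probabilities depend only on the circuit structure, they can indeed be pre-computed once per design, as noted above.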

3.4 Experimental Evaluation

3.4.1 Setup

We implemented the proposed learning for computing the exact pre-image in the open source

verification platform ABC [62]. Similar to [38, 68], we modified the PODEM engine [15] to


perform the required branch-and-bound search for pre-image computation. The pre-image

was computed iteratively until the fix-point. We used VIS invariant properties [63] and some of the hard-to-reach states of ISCAS/ITC [77, 78] benchmark circuits for our experimental runs, apart from the TIP benchmark suite [79]. We conducted our experiments on a 3.0 GHz Intel Xeon machine with 2GB RAM, running the Linux operating system.

The current state-of-the-art techniques for pre-image computation can be classified into

SAT/ATPG-based [42, 38] techniques and Circuit Cofactor-based (hybrid SAT) [44, 45] techniques. These methods were also implemented in the ABC platform for comparison with

our method.

Experiment 1: In this setup, we compare our method with the SAT/ATPG-based methods. The resource limitations for each method during an iteration of the pre-image computation are described below:

• all-solutions ATPG with success driven learning [38] (Technique 0): Preimage circuit

size is limited to 125K gates and the hash table size (for storing learned cutsets) to 1

million.

• all-solutions SAT solver (using MINISAT SAT solver) [42] (Technique 1): Circuit

justification [43] for solution cube enlargement and ZBDD-to-clause conversion technique [47] for state set management were utilized. The number of learned clauses was

limited to 1 million and the number of ZBDD nodes to 200K.

• Our method (Technique 2): Preimage circuit size was limited to 125K gates and the


number of learned search states to 1 million.

Experiment 2: Note that the cofactor expansion method [45] learns cutsets to identify redundant search spaces. Thus our learning method elegantly fits into their cofactor expansion

framework. Instead of cutsets, we use AT to determine a succinct assignment set sufficient

to represent each search state. Then we identify and prune redundant search spaces based on

the extensibility relationship. In this setup, we compare the cofactor expansion method with our learning, the cofactor expansion method without our learning, and a hybrid SAT-based circuit cofactor method [44]. The resource limitations for each method during an iteration of the pre-image computation are described below:

• Circuit cofactor [44] (Technique 3): The number of learned clauses is limited to 1

million and the pre-image circuit size to 125K gates.

• Cofactor expansion without our learning (Technique 4): The number of learned clauses

is limited to 1 million and the pre-image circuit size to 125K gates.

• Cofactor expansion with our learning (Technique 5): The memory limitations were the

same as that of Technique 4.

The time limit for these fix-point computations was set to 4 hours. The results for Experiment 1 are summarized in Table 3.1 and those for Experiment 2 in Table 3.2. For each method, we report the number of steps (depth) iterated during the fix-point computation, the total number of cube enumerations (#Enum) and the time taken in seconds for these iterations. An asterisk implies that the method could not iterate until the fix-point due to resource limitations.


Table 3.1: Comparison with SAT/ATPG-based approaches

                     All Solutions SAT [43]     All Solutions ATPG [38]    Our Approach
Ckt                  D    #E      T/S           D    #E      T/S           D    #E      T/S
vis.coherence.3.E    6*   866K    TO            12*  4.4M    12.5K(MO)     14   800K    1K
eijk.S344.S          3*   587K    13.5K(MO)     5*   2.4M    7K(MO)        5    1.4M    4.6K
b11.inv              14*  1.2M    TO            17   235K    213.6         17   54K     29.8
texas.ifetch1.5.E    13*  885K    TO            28   246K    90.2          28   81K     43.2
vis.coherence.2.E    6*   894K    TO            10   519K    200.2         10   364K    175.7
b09                  3*   1M      TO            12   1.4M    5.7K          12   783K    3.7K
vis.arbiter.E        11   218K    973.5         11   881K    2.1K          11   296K    641.1
texas.ifetch1.8.E    8*   1.2M    TO            12   110K    19.4          12   91K     16.5
b11.1                8*   1.1M    TO            16*  1.4M    10K(MO)       22*  1.2M    TO
b05                  5*   1M      TO            29*  4.1M    TO            33*  2.2M    TO
eijk.bs3271.S        2*   1.3M    TO            2*   113.7K  5.7(MO)       3*   539K    3.4K(MO)

*: Incomplete   D: Depth   #E: Number of cube enumerations   TO: Time Out   MO: Memory Out

Table 3.2: Comparison with Cofactor-based approaches

                     Technique 3 [44]           Technique 4 [45]           Technique 5
Ckt                  D    #E      T/S           D    #E      T/S           D    #E      T/S
s3271.1              3*   1M      3.9K(MO)      4*   1.4M    TO            5    1.1M    3.3K
s5378.1              5*   2M      3.8K(MO)      5*   36.7K   TO            7*   41.5K   TO
s1269                8    282.5K  323.2         8    10.3K   431           8    9.6K    75.4
texas.parsesys3.E    17   1.1K    4.57          17   722     6.5           17   941     3.7
s3271.2              3*   1M      1.1K          2*   1M      6K(MO)        5*   2.2M    TO
eijk.bs3271.S        7*   1M      14.1K(MO)     11*  952K    TO            12*  734K    TO
s38417               1*   539.8K  TO            2*   49.5K   7.2K(MO)      3*   151.8K  5.7K(MO)
eijk.bs3384.S        1*   1M      1K(MO)        2*   1M      2.9K(MO)      7*   967.6K  8.4K(MO)

*: Incomplete   D: Depth   #E: Number of cube enumerations   TO: Time Out   MO: Memory Out


3.4.2 Results

Experiment 1: The all-solutions SAT method timed out for almost all runs. However,

both the all-solutions ATPG method and our method perform better than the all-solutions SAT method.

The reason is that SAT methods need to explore each unique solution (a state variable

assignment) and block them one by one. Thus the blocking clauses are defined on the state

variables. However, as discussed in Section 3.3.1, different assignments on the input and state

variables may lead the search process to the same search state in the DT . Thus distinct

solution assignments may lead to the same search state, and the all-solutions SAT method may

waste time exploring the redundant search spaces under such solution assignments. However,

the search state representation employed by the other two methods allows one to detect such redundant search states and backtrack from them immediately. The results for

instances like texas.ifetch1.5.E, texas.ifetch1.8.E and vis.coherence.2.E support this claim.

This is analogous to the observation made in [39,73]; empirical evidence is provided in [39,73]

to show that the performance of a SAT solver is enhanced if the learned reason for a conflict

is a cut in the implication graph (IG) that is closer to the conflict (in the IG) rather than

being defined on the decision variables.

The succinct representation proposed by our method to represent the explored search space allows us to determine non-trivial redundant search states. Thus our method can prune the search space represented by such search states. This is reflected by the number of enumerations required for the fix-point computation. For instance, for b11.inv our method

performs almost 180K fewer enumerations than the all-solutions ATPG method. Further, for b09, our method performs 617K fewer enumerations than the all-solutions ATPG

method. For both these runs, the all-solutions SAT method times out due to the enormous number of solution enumerations required to compute the pre-image. Further, for

eijk.S344.S and vis.coherence.3.E only our method could compute the fix-point. For b11.1,

b05 and eijk.bs3271.S our method could reach deeper than the other two methods. Note

that each iteration involves implicitly enumerating a number of assignments exponential in the number of input and state variables. This indicates that our method achieves significant

improvement in both time and memory compared to the other two methods.

Experiment 2: The circuit cofactor method [44], similar to all-solutions SAT, defines the

explored solution space using the state variables. However, they propose an efficient solution

cube enlargement technique that captures several non-trivial solution assignments

on the state variables for each solution obtained in the DT . This is analogous to the objective

of our method. We try to represent the explored search state using a succinct representation

so that distinct assignments on the state and input variables that lead to this explored

search state, in the future search process, can be identified as redundant. In contrast, the circuit cofactor method attempts to learn all the distinct solution assignments that may lead to this explored search state. The learning techniques and the search process have a complex

interplay and the effectiveness of these techniques can only be determined by empirical data

for the search process.

The reduction in the number of enumerations for s1269, eijk.bs3384.S and eijk.bs3271.S indicates that our succinct state representation indeed helps in identifying non-trivial redundant


search states. Further, for s3271.1, only our method could reach the fix-point. Overall, our

method performs significantly better than the other two methods.

3.5 Chapter Summary

We propose a succinct search state representation for pre-image computation. We incorporate this representation in our learning mechanism to determine non-trivial redundant search states using the extensibility relationship among search states. Thus, we can quickly prune redundant search spaces, which is essential for an efficient branch-and-bound procedure. Further, we

showed that our method can be used to compute an over-approximate pre-image. Also,

we proposed a probability-based heuristic to guide our learning technique. Experiments

showed significant improvements over the existing state-of-the-art techniques for pre-image

computation.


Chapter 4

Tight Image Extraction for Model

Checking

In the literature, over-approximate state space traversal has been shown to be quite promising in verifying properties on industrial-sized hardware designs. Essentially, such approaches compute an over-approximation of the reachable design state space and try to show that the desired property is valid in this over-approximated space (Figure 4.1). However, a counter-example in this over-approximated space may not translate to one in the original design; thus, efficient mechanisms to identify and eliminate (refine) such false negatives from the over-approximation are essential to make such model checking approaches complete. In this chapter, we propose an interesting over-approximate image computation technique. Further, we utilize this technique to design a unified abstraction refinement framework for model checking.




Figure 4.1: Venn Diagram showing over-approximation of Reachable State Space

4.1 Introduction

Abstraction of the design under verification is essential to combat the state explosion problem in Model Checking of industrial-sized designs. Initially, the generation of an initial abstract model and subsequent refinements of the abstract model were done manually. The success of such approaches largely depended on the creativity and expertise of the person performing them. Clarke et al. [81, 82] were the first to propose a completely automatic technique to perform

abstraction and refinement. They used atomic formulae to create an initial abstract model of

the design and used information from the spurious counter-example (if any) for refinement.

Thus, they christened their framework the Counter-Example Guided Abstraction Refinement (CEGAR) framework. Later, several variants of this automatic approach were proposed, such as

CEGAR for Probabilistic Systems [83], CEGAR with proof templates [84] and using CEGAR

to guide bounded model checking to find complex bugs [85].

In [86], Craig showed that given an inconsistent formula φ = A∧B, an interpolant P can be


derived such that A ⇒ P and P ∧ B is inconsistent. Further, P can be defined over the variables common to A and B. In [87], McMillan proposed an interesting abstraction technique

based on Craig interpolants [86]. Essentially, the idea of interpolant generation was utilized

by McMillan to obtain an over-approximate image space during Bounded Model Checking

(BMC). We refer readers to Section 4.2 for a complete discussion of this technique. This

approach, unlike previous SAT-based methods like [89], is bounded by the longest shortest

path between any two states. Note that this can be significantly longer than the diameter

of the DUV’s reachable state space but this approach was shown to be effective on several

large verification instances. Although Craig interpolant-based methods have been shown to be promising, they suffer from two major drawbacks: the computed interpolants representing the over-approximate state space may be highly redundant, and the over-approximations may not be tight. Recently, in [90, 91], the authors proposed to integrate over-approximations obtained from sources other than unsatisfiability proofs, such as those based on relationships between state variables, into the Craig interpolant-based model checking framework. However, even tighter abstractions are necessary for verifying industrial-sized designs. In our approach, we

attempt to extract image cubes obtained by simulating each path in the decision tree formed

by the branch-and-bound search during BMC. Note that the branch-and-bound search may

excite multiple unsatisfiable cores that may exist in the problem. Thus our over-approximate image is, in a sense, related to the multiple unsatisfiable cores inherent to the problem, unlike interpolant-based methods, which are usually based on a single unsatisfiable core. Although interpolants from multiple unsatisfiable cores can be generated and conjoined, such


approaches are in general tedious. Further, over-approximation in our technique is based on

the notion of extensibility among search states in the decision tree. Thus the over-approximate space computed by our technique may be tighter than that obtained by existing interpolant methods.

Our main contribution in this chapter is an efficient method to extract an over-approximate image during BMC runs [80]. We show that our search state extensibility based learning framework elegantly fits into our over-approximate image computation approach and provides for aggressive search space pruning. Further, we show that the over-approximate image space computed by our technique is indeed an interpolant. Experimental results on hard-to-prove reachable/unreachable states and sequential equivalence benchmarks of ITC99 circuits indicate the promise of our approach.

Outline: The next section introduces the interpolant-based model checking approach proposed in [87]. The subsequent section introduces our technique for over-approximate image

computation and shows that the implicitly computed image space is indeed an interpolant.

The next section discusses the application of our extensibility based learning framework in

the image-computation set up. Further, it presents an abstraction refinement framework for

model checking based on our image extraction technique. Experimental results and summary

are provided in the last two sections of this chapter.



Figure 4.2: Model for Interpolation-based Model Checking

4.2 Preliminaries

McMillan, in [87, 88], showed that Bounded Model Checking and interpolation can be combined to provide an over-approximate image operator, which can significantly aid in avoiding the costly quantification step for image computation. Consider the BMCk(I, Tk, F) problem shown in Figure 4.2: it consists of three constraints, viz., the initial constraint (I), the Iterative Logic Array constraint (k time-frames, TF0 to TFk−1) and the final constraint (F, the property negation). The initial and final constraints are asserted to the left of the first frame and to the right of the last frame, respectively. Now, McMillan partitions this problem instance into two distinct sets A and B. A is set to the initial constraint and the first time-frame (TF0), while B is set to the remaining k−1 time-frames (TF1 to TFk−1) and the property negation constraint. Suppose this BMCk problem instance is unsatisfiable; then

a Craig interpolant P can be derived such that

• I ∧ TF0 ⇒ P


• P ∧ TF1 ∧ ... ∧ TFk−1 ∧ F is inconsistent

• P is defined in terms of the state variables that are shared between time-frames TF0

and TF1
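The conditions above can be checked by brute force on a toy formula. The sketch below is a stand-alone illustration (not tied to any real BMC instance): it verifies that P(y) = y is an interpolant for A(x, y) = x ∧ y and B(y, z) = ¬y ∧ z, since A implies P, P ∧ B is inconsistent, and P mentions only the shared variable y.

```python
from itertools import product

A = lambda x, y: x and y          # plays the role of I ∧ TF0
B = lambda y, z: (not y) and z    # plays the role of TF1 ∧ ... ∧ TFk-1 ∧ F
P = lambda y: y                   # candidate interpolant over the shared variable y

# Condition 1: A ⇒ P holds for every assignment.
assert all(P(y) for x, y in product([False, True], repeat=2) if A(x, y))

# Condition 2: P ∧ B is inconsistent (no satisfying assignment).
assert not any(P(y) and B(y, z) for y, z in product([False, True], repeat=2))
```

For real circuit instances the same two checks are what the interpolant generated from the unsatisfiability proof must satisfy; they cannot, of course, be verified by exhaustive enumeration there.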

Since P is implied by the initial state I and the first time-frame TF0, it is an over-approximate

image of I. Further, from the second condition listed above, no state in P can reach a state

in F in k − 1 steps. This over-approximate image operation can be leveraged to design an

unbounded model checking algorithm as shown in Algorithm 1. It was shown in [87,88] that

this algorithm is sound and complete.

The main potential of the interpolation-based model checking approach lies in the elimination of

costly quantification step in image computation. Further, it does not increase the problem

size during each iteration of BMC, which may significantly affect the solving time. However,

such approaches require some form of recording by solvers when solving A ∧ B. This is

essential to derive an interpolant. Further, interpolants based on distinct unsatisfiable cores

in the problem may represent a tighter approximation. However, computing such tighter interpolants may increase the burden on the solver and is, in general, tedious [94].

Thus, in practice, a single unsatisfiable core is utilized for extracting the interpolant to reduce

computational costs.


Algorithm 1 Interpolant based Model Checking for BMCk(I, Tk, F )

1: if I ∧ F is satisfiable then
2:    Counter-example of length zero exists, Return
3: end if
4: Let R = I
5: Let nFrames = 0
6: while true do
7:    Construct BMCk(R, Tk, F)
8:    Let A = R ∧ TF0
9:    Let B = TF1 ∧ ... ∧ TFk−1 ∧ F
10:   if A ∧ B is satisfiable then
11:      if nFrames == 0 then
12:         Counter-example of length k exists, Return
13:      else
14:         Let k = k + nFrames
15:         Let R = I and nFrames = 0
16:      end if
17:   else
18:      Compute the interpolant P of A ∧ B
19:      if P ⇒ R then
20:         Property holds, Return
21:      end if
22:      Let R = R ∨ P
23:      Increment nFrames
24:   end if
25: end while
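The control flow of Algorithm 1 can be sketched on an explicit-state toy model. In the sketch below (illustrative only; all names are assumptions, and states are plain integers), the exact one-step image stands in for the interpolant P, which it is entitled to do since the exact image trivially satisfies both interpolant conditions, and bmc_reaches plays the role of the A ∧ B satisfiability check.

```python
def image(T, S):
    """Exact one-step image of state set S; stands in for the interpolant P."""
    return set().union(*(T[s] for s in S)) if S else set()

def bmc_reaches(T, R, bad, k):
    """Can a bad state be reached from R within k steps? (role of the A ∧ B check)"""
    frontier = set(R)
    for _ in range(k):
        frontier = image(T, frontier)
        if frontier & bad:
            return True
    return False

def interpolation_mc(I, T, bad, k=1):
    """Returns True if bad is unreachable from I, False otherwise."""
    if set(I) & bad:
        return False                      # counter-example of length zero
    while True:
        R, n_frames = set(I), 0
        restart = False
        while not restart:
            if bmc_reaches(T, R, bad, k):
                if n_frames == 0:
                    return False          # real counter-example within k frames
                k += n_frames             # possibly spurious: deepen bound, restart
                restart = True
            else:
                P = image(T, R)           # over-approximate image operator
                if P <= R:
                    return True           # fix-point reached: property holds
                R |= P                    # R = R ∨ P
                n_frames += 1

# Toy FSM: 0 -> 1 -> 2 -> 2; state 3 is unreachable.
T_safe = {0: {1}, 1: {2}, 2: {2}, 3: {3}}
print(interpolation_mc({0}, T_safe, bad={3}))   # True

T_buggy = {0: {1}, 1: {2}, 2: {3}, 3: {3}}
print(interpolation_mc({0}, T_buggy, bad={3}))  # False
```

In a real implementation P is derived from the solver's unsatisfiability proof rather than computed exactly; the loop structure, the restart with an increased bound, and the fix-point test are what this sketch mirrors.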



Figure 4.3: Decision Tree for BMCk(I, Tk, F )

4.3 Our Model Checking Approach based on Image Extraction

4.3.1 Motivation

Our key idea lies in the observation that, when solving a BMCk problem, an over-approximate image is implicitly computed during the branch-and-bound search. Suppose an ATPG engine

is used to perform the branch-and-bound search for the BMCk problem in Figure 4.2. The

decision tree formed by this search can be illustrated by the binary tree in Figure 4.3. Now,

consider the union of the cube assignments to the state variables shared between time-frames TF0 and TF1 at each terminal in the decision tree. This will represent an interpolant P, as will be

shown later. Note that this approach does not require any recording by solvers to compute


an interpolant other than the dynamic interpolant computation (just scanning the image

cubes). Further, the decision tree may excite distinct unsatisfiable cores that are inherent

to the problem. Thus, in a sense, the over-approximate image computed by our image

extraction technique may relate to the multiple unsatisfiable cores, which is necessary to

generate a tighter approximation.

Algorithm 2 BMCIE(BMCk(R, T, F ))

1: OAI = Constant 0
2: Return BMCIERec(BMCk(R, Tk, F))

Algorithm 3 BMCIERec(BMCk(R, T, F ))

1: if R assertion is violated then
2:    Return false
3: end if
4: if Counter-example found then
5:    Return true
6: end if
7: if F assertion is violated then
8:    OAI = OAI ∨ scanImageCube()
9:    Return false
10: end if
11: pi = backtrace()
12: logicSimulate(pi = v)
13: if BMCIERec(BMCk(R, Tk, F)) then
14:    Return true
15: end if
16: flipVal(pi)
17: logicSimulate(pi = v)
18: if BMCIERec(BMCk(R, Tk, F)) then
19:    Return true
20: end if
21: logicSimulate(pi = X)
22: Return false
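On an explicit toy transition function, the leaf-scanning idea of Algorithms 2 and 3 can be mimicked by enumerating the frame-0 input assignments, recording the frame-1 state at every leaf whose search dies on the F assertion, and then checking the resulting OAI against the two interpolant conditions. The state encoding and names below are illustrative assumptions, not the ATPG engine of the text.

```python
from itertools import product

def step(s, i):
    # Toy next-state function: advance by the number of asserted input bits (mod 8).
    return (s + sum(i)) % 8

def bmc_image_extract(R, bad, k, n_inputs=2):
    """Returns (sat, OAI): sat is True iff bad is reachable from R within k
    frames; otherwise OAI collects the frame-1 state reached at every
    explored leaf (each such leaf conflicts with the F assertion)."""
    oai = set()
    for s in R:
        for i0 in product([0, 1], repeat=n_inputs):   # decisions in frame 0
            m = step(s, i0)                           # state between TF0 and TF1
            frontier = {m}
            for _ in range(k - 1):                    # try to extend m toward bad
                frontier = {step(t, i) for t in frontier
                            for i in product([0, 1], repeat=n_inputs)}
            if frontier & bad:
                return True, None                     # counter-example found
            oai.add(m)                                # F violated: scan image cube
    return False, oai

R, bad, k = {0}, {7}, 2
sat, oai = bmc_image_extract(R, bad, k)
assert not sat and oai == {0, 1, 2}

# Interpolant condition 1: OAI contains the exact image of R.
assert all(step(s, i) in oai for s in R for i in product([0, 1], repeat=2))
# Interpolant condition 2: no OAI state reaches bad within k-1 frames.
assert not any(bmc_image_extract({m}, bad, k - 1)[0] for m in oai)
```

The final two assertions are exactly the properties established for OAI by Theorem 3 below; the zero-input assignment acts as a self-loop here, so "within k frames" coincides with "after k frames" in this toy.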


4.3.2 Image Extraction

Given a BMCk(R, Tk, F ) instance, our basic bounded model checking algorithm with image

extraction is illustrated in Algorithm 2. We use an ATPG engine to perform the counter-

example search. This engine decides only on the controllable signals (primary inputs) in

the BMCk(R, Tk, F ) instance. This algorithm will return true if the given BMC instance is

satisfiable; otherwise, it will return false. The scanImageCube function in line 8 of Algorithm 3 returns the conjunction of assignments to the state variables that are shared between time-frames TF0 and TF1. Whenever a conflict with the F assertion arises, the conjunction returned by the scanImageCube function is disjoined with the previously computed over-approximate image OAI. Finally, when the given BMC instance is proved to be unsatisfiable, OAI will represent an interpolant implied by A ≡ (R ∧ TF0) and will be inconsistent with B ≡ (TF1 ∧ ... ∧ TFk−1 ∧ F).

Theorem 3. If BMCIE(BMCk(R, Tk, F)) returns false, then the OAI computed by it is an interpolant of BMCk(R, Tk, F).

Proof: First, we show that OAI is a one-step over-approximate image of R. Suppose

OAI does not include an image minterm (say m) of R. Then a specific assignment PIASS to

(pseudo) primary input variables of time-frame TF0 must imply this minterm m on the state

variables shared between time-frames TF0 and TF1. Since the algorithm BMCIE(BMCk(R, Tk, F ))

explores all possible input combinations for BMCk(R, Tk, F ), it should have explored assign-

ment PIASS as well. Further, whenever a violation of the F assertion occurs, the algorithm scans


the cube assignments on the variables common to A and B and disjoins them with OAI. Note that PIASS will not be inconsistent with the R assertion; thus, the image cube implied by PIASS will be included in OAI. Note that this cube will contain the minterm m. Thus, no minterm of the image of R is excluded from OAI. Hence, OAI is an over-approximate image of R.

Next, we show that F cannot be reached in k − 1 time-frames from any state in OAI. Suppose there exists a state cube assignment s in OAI that can reach F in k − 1 steps. Let PIASS0 represent a primary input assignment, explored by BMCIE(BMCk(R, Tk, F)), due to which s is included in OAI. Further, let PIASS1 represent a primary input assignment to the k − 1 time-frames that leads to F from s. Now, we can construct a primary input assignment from PIASS0 and PIASS1 that can lead R to F in k time-frames. This contradicts the assumption that BMCIE(BMCk(R, Tk, F)) returned false. Thus, no state cube assignment s in OAI can lead to F in k − 1 steps.

Finally, note that OAI is defined in terms of the variables common to A and B. Thus, OAI is an interpolant of BMCk(R, Tk, F). □
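The way OAI is accumulated during the search can be illustrated with a small explicit-state sketch. This is not the dissertation's ATPG-based engine; step, init, bad and k are made-up stand-ins for T, R, F and the unrolling depth. Whenever the suffix search conflicts with F, the reached state (a one-state "cube" here) is disjoined into OAI.

```python
# Toy explicit-state sketch of image extraction during BMC search.
# Hypothetical system: a 2-bit counter with a 1-bit input.

def step(s, i):                       # stand-in for the transition relation T
    return (s + i) % 4

init = {0}                            # R: initial states
bad = {3}                             # F: bad states
k = 2                                 # unrolling depth

def reaches_bad(s, frames):           # can s reach F in exactly `frames` steps?
    if frames == 0:
        return s in bad
    return any(reaches_bad(step(s, i), frames - 1) for i in (0, 1))

oai = set()                           # over-approximate image of R
cex_found = False
for s0 in init:
    for i0 in (0, 1):                 # decisions on time-frame TF0 inputs
        s1 = step(s0, i0)             # the image "cube" reached at TF1
        if reaches_bad(s1, k - 1):
            cex_found = True          # BMC instance satisfiable
        else:
            oai.add(s1)               # F-conflict: disjoin the cube into OAI

print(cex_found, sorted(oai))         # prints: False [0, 1]
```

Here the instance is unsatisfiable, and the collected OAI = {0, 1} behaves as an interpolant: it contains the image of R, and no state in it reaches the bad state in k − 1 = 1 step.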

4.4 CEGAR Framework with our Learning

Algorithm 2 requires powerful learning mechanisms to perform an implicit exploration of

all possible input combinations. Otherwise, it may be very expensive in terms of run time

since it explicitly explores all possible combinations. We illustrate our updated image ex-

traction algorithm with our search state extensibility based learning framework in Algorithm

4. Similar to Section 3.3.2, we use Antecedent Tracing (AT) to compute and learn a succinct


[Figure 4.4 shows a decision tree with decision nodes PI1–PI4 and a non-chronological backtrack from decision level 3 to level 1.]

Figure 4.4: Missed Image cubes due to Non-chronological Backtracking

search state to represent an explored sub-tree in the decision tree. Further, the learned

search states always represent a conflict space; otherwise, a counter-example exists and the

algorithm would have terminated earlier.

Other than learning, the main difference between Algorithms 3 and 4 is in the image ex-

traction part. Due to the powerful non-chronological backtracking feature in our learning

framework, we cannot compute the over-approximate image as illustrated in Algorithm 3. For instance, consider the non-chronological backtracking shown in Figure 4.4. After exploring the sub-tree with decision node PI3 as the root, the search non-chronologically backtracks to decision level 1 from level 3. In other words, the space under the flipped value of decision node PI2 is ignored¹. However, if PI2 is a primary input variable in time-frame TF0, then we may miss a few image cubes that can only be obtained in the ignored sub-space. To

¹This space is ignored because there is no solution in this space for BMCk(R, Tk, F).


Algorithm 4 BMCIERec(BMCk(R, Tk, F))
 1: if R assertion is violated then
 2:   imaFn = Constant 0
 3:   Return false
 4: end if
 5: if Counter-example found then
 6:   imaFn = Constant 0
 7:   Return true
 8: end if
 9: if F assertion is violated then
10:   imaFn = Constant 1
11:   Return false
12: end if
13: Increment decision level
14: pi = backtrace()
15: retVal = false; imaFn0 = NULL
16: imaFn = Simulate(pi = v)
17: if imaFn == NULL && retVal = BMCIERec(BMCk(R, Tk, F)) then
18:   Return true
19: end if
20: if Backtrack level < decision level then
21:   Simulate(pi = X)
22:   Decrement decision level
23:   Return retVal
24: end if
25: reason0 = conflictAnalysis()  {marks the variables common to A and B that are traversed during AT}
26: imaFn0 = imaFn ∧ scanImageCube()  {scanImageCube returns the conjunction of marked variables}
27: flipVal(pi)
28: retVal = false; imaFn1 = NULL
29: imaFn = Simulate(pi = v)
30: if imaFn == NULL && retVal = BMCIERec(BMCk(R, Tk, F)) then
31:   Return true
32: end if
33: if Backtrack level < decision level then
34:   Simulate(pi = X)
35:   Decrement decision level
36:   Return retVal
37: end if
38: reason1 = conflictAnalysis()  {marks the variables common to A and B that are traversed during AT}
39: imaFn1 = imaFn ∧ scanImageCube()  {scanImageCube returns the conjunction of marked variables}
40: imaFn = (pi is an input in TF0) ? (imaFn0 ∨ imaFn1) : (imaFn0 ∧ imaFn1)
41: learn(binaryResolution(reason0, reason1), imaFn)
42: Simulate(pi = X)
43: Decrement decision level
44: Return false


overcome this issue, we propose to compute the over-approximate image bottom-up. We

mark the state variables, common to time-frames TF0 and TF1, that are visited during the

antecedent tracing performed by conflict analysis. The over-approximate image imaFn for

the space represented by a node N is computed as follows. Let imaSubFn0 represent the

over-approximate image for the space represented under N = 0 decision. Let SV0 repre-

sent the conjunction of state variable assignments marked during the antecedent tracing2.

We compute imaFn0 as SV0 ∧ imaSubFn0. Similarly, for N = 1 decision, we compute

imaFn1 as SV1 ∧ imaSubFn1. Finally, if the input represented by N belongs to the left-

most time-frame TF0, then imaFn = imaFn0 ∨ imaFn1. Otherwise, imaFn = imaFn0 ∧

imaFn1.
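The bottom-up combination rule above can be sketched with a hypothetical set-based encoding of image functions: an image function is a set of state assignments, with ∧ modeled as intersection and ∨ as union. SV0/SV1 are the cubes marked by antecedent tracing (the full state set when nothing is marked, i.e., a constant-1 cube); all names are illustrative.

```python
# Sketch of the bottom-up image computation at a decision node N.

def combine(pi_in_tf0, sv0, sub0, sv1, sub1):
    ima0 = sv0 & sub0              # imaFn0 = SV0 ∧ imaSubFn0
    ima1 = sv1 & sub1              # imaFn1 = SV1 ∧ imaSubFn1
    # Disjoin for a TF0 primary-input decision, conjoin otherwise.
    return ima0 | ima1 if pi_in_tf0 else ima0 & ima1

states = frozenset(range(4))       # constant-1 cube over a toy 2-bit state

print(sorted(combine(True, {0, 1}, states, {2}, states)))   # TF0 input: union
print(sorted(combine(False, {0, 1}, states, {2}, states)))  # otherwise: intersection
```

For a TF0 input the two branches contribute alternative images ({0, 1, 2} here), whereas for any other decision variable both branches must agree, which yields the empty intersection in this toy run.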

When learning the search state S that represents the space rooted at N , we learn its corre-

sponding over-approximate image imaFn, which is computed bottom-up (line 41 in Algo-

rithm 4). In the future, whenever this learned search state S becomes extensible to the current search state, we use imaFn as the over-approximate image under the current search space (lines 16 and 29 in Algorithm 4). Note that, by the nature of our learning, the state cubes in imaFn, together with the assignments in S, can never lead to F within the remaining k − 1 time-frames.

Theorem 4. If BMCIE(BMCk(R, Tk, F)) using Algorithm 4 returns false, then the OAI computed by it is an interpolant of BMCk(R, Tk, F).

Proof: The argument in the proof of Theorem 3 can be extended to prove this theorem. □

²If none of the state variables is marked, then the conjunction is a constant 1.


4.4.1 CEGAR Framework: Overall Approach

The overall model checking approach based on our image extraction technique is illustrated in

Algorithm 5. It mainly differs from Algorithm 1 in the manner in which over-approximate

image is computed. Further, we optimize the BMCk set-up for each iteration of the while

loop by eliminating those states from the R assertion from which it is known (from previous

iterations) that F cannot be reached in k time-frames. We refer to this constrained R as RC

in Algorithm 5.

Algorithm 5 Image Extraction based Model Checking for BMCk(I, Tk, F)
 1: if I ∧ F is satisfiable then
 2:   Counter-example of length zero exists, Return
 3: end if
 4: Let R = RC = I
 5: Let nFrames = 0
 6: while true do
 7:   Construct BMCk(RC, Tk, F)
 8:   Let A = RC ∧ TF0
 9:   Let B = ∧_{i=1}^{k−1} TFi ∧ F
10:   if BMCIE(BMCk(RC, Tk, F)) is satisfiable then
11:     if nFrames == 0 then
12:       Counter-example of length k exists, Return
13:     else
14:       Let k = k + nFrames
15:       Let R = RC = I and nFrames = 0
16:     end if
17:   else
18:     if OAI ⇒ R then
19:       Property holds, Return
20:     end if
21:     Let RC = OAI ∧ ¬R
22:     Let R = R ∨ OAI
23:     Increment nFrames
24:   end if
25: end while
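As a concrete illustration, the outer loop above can be sketched on an explicit-state toy system. Here bmc_ie, step, INIT and BAD are hypothetical stand-ins for BMCIE, the transition relation, I and F, and the counter-example/lengthening branch is collapsed into a single return value, so this is a sketch of the fixed-point side of the loop only.

```python
# Toy CEGAR-style outer loop: iterate one-step over-approximate images
# until OAI implies the accumulated reachable over-approximation R.

def step(s, i):                        # hypothetical system confined to {0,1,2,3}
    return (s + i) % 4

INIT = {0}                             # I: initial states
BAD = {5}                              # F: an unreachable bad state

def reaches_bad(s, frames):
    if frames == 0:
        return s in BAD
    return any(reaches_bad(step(s, i), frames - 1) for i in (0, 1))

def bmc_ie(rc, k):
    """Stand-in for BMCIE: returns (satisfiable?, over-approximate image OAI)."""
    oai = set()
    for s in rc:
        for i in (0, 1):
            if reaches_bad(step(s, i), k - 1):
                return True, None      # counter-example suffix found
            oai.add(step(s, i))
    return False, oai

def check(k):
    r = rc = set(INIT)
    while True:
        sat, oai = bmc_ie(rc, k)
        if sat:
            return "cex-or-refine"     # Algorithm 5 would lengthen k here
        if oai <= r:
            return "holds"             # OAI ⇒ R: fixed point reached
        rc = oai - r                   # constrained R for the next BMC run
        r |= oai

print(check(1))                        # prints: holds
```

The rc = oai − r step mirrors line 21: each iteration only re-examines states not already covered by R, which is exactly the optimization described in the text.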


4.5 Experimental Analysis

We implemented our image extraction based model checking approach presented in Algo-

rithm 5 in the open-source verification platform ABC from the University of California, Berkeley [62]. We used hard sequential equivalence benchmarks and hard-to-prove reachable or unreachable states from the ITC99 [78] benchmark suite. We compared our technique against the

state-of-the-art interpolation based model checking approach [87, 88] (available in ABC). A

maximum of 100 time-frames for unrolling, 1 million conflicts for each BMC run and a total

run time of 24 hours were set as the resource limitations for both approaches. Our results are summarized in Table 4.1. We conducted our experiments on a 3.0 GHz Intel Xeon machine with 2 GB RAM running Linux.

In Table 4.1, the first column lists the model checking instance used. An "S" superscript indicates that the instance is a sequential equivalence benchmark, whereas a "J" superscript indicates that the instance represents a hard-to-prove reachable/unreachable state. The next four columns show the number of counter-example checks performed (see line 9 in Algorithm 1), the depth unrolled (k + nFrames in Algorithm 1), the status indicating whether the property holds (Y), is violated (N), or is undecided (U), and the time taken (in seconds) for

the interpolation method. The following four columns report the same values for our image

extraction based model checking approach. The last column indicates the speed-up achieved

by our technique over interpolation.

The Speedup column (SU) in Table 4.1 indicates that our image extraction technique is


on average 3× faster than the interpolation method when both of them verified the corresponding property. One reason may be that our technique does not

require any recording during BMCk solving except for the dynamic over-approximate image

(OAI) computation. Another reason may be that our approximations are

tighter since the branch-and-bound search may excite multiple unsatisfiable cores. Tighter

approximations may limit the number of times k is increased in Algorithm 1 or 5 since

they may avoid spurious counter-examples, which may otherwise be identified due to loose

approximations. This in turn will reduce the BMC problem size solved in each iteration of the

model checking approach, resulting in reduced solving time. Further, for invalid properties in

Table 4.1, the reduced number of counter-example validity checks performed by our method,

in comparison to the interpolation method, indicates that our approximations are indeed

tight. For example, in instance P5J, only 6 spurious counter-examples needed to be checked

in our approach, compared with 15 spurious counter-examples in the interpolation-based

method, resulting in a 4× speedup.

For valid properties, loose approximations may converge sooner than tighter ones if the state

space where the negation of the property holds is very small. We conjecture that instance P4S

in Table 4.1 is one such case; however, such model checking instances may be easy to prove for

contemporary techniques since the property holds in a major portion of the state space. On

the other hand, results for instances from P0S to P3S and P11J indicate that our technique

can quickly verify valid properties where interpolation faced tremendous difficulty. In these

instances, two or three orders of magnitude speedups were achieved. Finally, our method


Table 4.1: Our Image Extraction vs. Interpolation

Hard       Interpolation [87]          Our method
Prop      SC    D    S     T         SC    D    S     T        SU
P0S       51  100    U     8          4    9    Y    0.2     > 40
P1S        7   37    U   86400        7   56    Y   810      > 107
P2S        9   54    U   86400        8   99    Y    68      > 1271
P3S        7   17    Y    13          6   16    Y    10        1.3
P4S        4    9    Y   < 1         17   18    U    84      < 0.01
P5S        9   40    N    22          5   40    N    39        0.56
P0J        6   57    N    17          5   57    N     7        2.43
P1J        9   99    N   171          7   99    N    33        5.18
P2J       11   55    N    30          8   55    N    15        2.00
P3J       12   66    N    39          8   66    N    16        2.44
P4J       12   55    N    35          8   55    N    14        2.50
P5J       15   71    N    57          6   71    N    14        4.07
P6J       13   69    N    52          5   69    N    12        4.33
P7J       14   69    N    59          5   69    N    13        4.54
P8J       13   71    N    58          5   71    N    14        4.14
P9J       15   71    N    59          5   71    N    13        4.54
P10J      10   53    N    17          9   53    N     6        2.83
P11J      23  100    U    92          3   41    Y    1.5     > 61.6

SC: # of spurious counter-example checks; D: Depth; T: Time in seconds;
SU: Speedup; S: Y-Proved, N-Counter-example exists, U-Undecided


was able to complete 4 instances for which interpolation was not able to draw conclusions.

4.6 Chapter Summary

In this chapter, we propose a novel and tight image extraction technique for model checking.

Our over-approximate images are in a sense related to the multiple unsatisfiable cores that

may exist in the problem. Thus our approximations may be tighter than those obtained by

existing interpolation methods. Further, we also incorporated our search state extensibil-

ity based learning framework and proved that the generated over-approximate images are

indeed interpolants. Experimental results on sequential equivalence instances and hard-to-prove reachable/unreachable states of ITC99 benchmark circuits indicate the promise of our approach.


Chapter 5

Fault Collapsing and Test Generation

The two important aspects of test generation are fault modeling and the underlying technique

used to generate test patterns. As mentioned in Chapter 2.2, a good fault model should

accurately model defect behavior and should also be computationally inexpensive in terms

of test generation (and fault simulation). Fault Collapsing is a pre-process to test generation

aimed at realizing the latter criteria mentioned above. In our work, we first propose an

efficient fault collapsing technique based on a novel extensibility relation exhibited by fault-

pairs. Further, traditional techniques encode the multi-valued ATPG problem into the Boolean domain. In addition to fault collapsing, we attempt to encode and solve the test generation

problem in its multi-valued domain itself. Finally, we also propose an efficient learning

mechanism for this multi-valued framework to learn from propagation conflicts.



5.1 Fault Collapsing based on a Novel Extensibility

Relation

5.1.1 Introduction

Fault collapsing (FC) is the process of obtaining a compact fault list (CFL) from an uncol-

lapsed fault list (UFL) by including only a representative fault in CFL for a set of faults

in UFL. FC is a widely researched topic mainly due to its potential benefits on factors

affecting test economics [2, 3]. For instance, during test generation, a compact test set

may be obtained with reduced run-time since the number of target faults in CFL is usu-

ally much less than that in UFL. For the same reason, fault-simulation times for CFL will

be less than that for UFL. Note that the reduction in test generation time will be signif-

icant if (i) FC time is insignificant when compared with the test generation time and (ii)

the collapse ratio (CR = |CFL|/|UFL|) is small and/or the faults not included in CFL are very hard-to-test faults. Further, since a compact test set can be obtained,

FC may also indirectly aid in reducing test data volume and test application time during

Manufacturing Test. FC techniques depend primarily on two criteria used to perform col-

lapsing: (1) the algorithm (functional/structural) and (2) the relationship among fault-pairs

(equivalence/dominance). Traditional functional techniques utilize either an Automatic Test

Pattern Generator (ATPG) [19,128] or a SAT solver [17,39]; in theory, such techniques can

identify all possible opportunities for FC. However, they are usually not suitable for large


scale designs or to serve as a low-cost preprocessor since the engines employed are known

to have exponential run-time complexity, in the worst case. On the contrary, structural

techniques utilize the design structure. Although these techniques miss some FC opportu-

nities, they scale well to large scale designs. They are known to identify a large percentage

of collapsible faults. If two faults α and β are found to be equivalent (α ≡ β), then it

is sufficient to include only one of them in the CFL. Further, if fault α dominates fault β (α ⪰ β) in a full-scan design, then fault α need not be included in the CFL, assuming that β is detectable. Note that (α ⪰ β and β ⪰ α) ⇔ α ≡ β; thus, dominance-based FC will result

in a more compact fault list than the equivalence-based FC. However, care must be taken if

dominance based FC is used as a preprocessor for test generator. For instance, let fault α

be eliminated from the collapsed list due to α w β. Now, if β is a redundant fault then the

test generator must target fault α since it may be the case that α is testable.
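The collapsing rules above can be sketched as follows. This is an illustrative toy, not the proposed engine: for an equivalent pair one representative suffices, while for α ⪰ β the dominating fault α is dropped, with the caveat from the text that a redundant β forces α back onto the target list. Fault names and relation lists are made up.

```python
# Toy fault-list collapsing from pairwise equivalence/dominance relations.

def collapse(faults, equiv, dominates):
    keep = set(faults)
    for a, b in equiv:                 # a ≡ b: keep only one representative
        if a in keep and b in keep:
            keep.discard(b)
    for a, b in dominates:             # a ⪰ b: every test for b also detects a,
        if a in keep and b in keep:    # so drop a (caveat: if b turns out
            keep.discard(a)            # redundant, a must still be targeted)
    return keep

faults = {"f1", "f2", "f3", "f4"}
print(sorted(collapse(faults, [("f1", "f2")], [("f3", "f4")])))  # ['f1', 'f4']
```

The resulting collapse ratio here is |CFL|/|UFL| = 2/4 = 0.5; a real engine would derive the relations structurally rather than take them as input.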

Related Work

In the past, several researchers have focused on identifying relationships

between fault-pairs based on equivalence [114, 115, 116, 117, 118, 128], dominance [119, 120,

121, 122, 124] and concurrence [125, 126]. In [114], McCluskey and Clegg proposed three classes of equivalence: two structural and one functional. Later, in [115], Lioy

identified an efficient method based on unique requirements (UR) and D-frontier equivalence

to determine an incomplete set of functional equivalences among fault-pairs. The authors in [116, 118, 128] each proposed a complete method to identify equivalent fault-pairs. However,

their techniques require an ATPG engine and may not be feasible for FC. In [130, 131], the

authors showed that for many real-life re-convergent fanout circuits, the set of faults at the


PIs, fanout origins and branches is sufficient to derive a complete set of tests that will detect

all detectable faults in the circuit. Later, techniques to find relations between the faults

on a fan-out stem, its branches and re-convergent points were proposed in [132, 133, 134].

In [119], Lioy proposed a method to identify dominance relationship among fault pairs based

on UR and D-frontier equivalence. This was later generalized by Vimjam et al. in [124]. In

a different approach [125,126], Doshi and Agrawal implicitly utilized the concept that if two

faults are concurrent (i.e., they share at least one common test vector t), then they can be collapsed

into a single fault. Note that in such cases t must be included in the final test set. For

practical applications of this approach, we either need an efficient technique to construct a

complete independence graph or to extend this method to incomplete independence graphs.

In addition, the ATPG engine needs to be customized to generate concurrent test patterns.

In [135], Hahn et al. were the first to propose hierarchical FC based on the idea that if

two faults are functionally equivalent in a combinational sub-module S embedded in a top-

level module M, then they are also functionally equivalent in M. Later, Prasad et al. [120]

proposed to first identify functional equivalences for logic cells (building blocks of complex

designs). They pre-compute dominance graphs for logic cells and perform transitive closure

on dominance graph for the complex design (built from the pre-computed graphs) to perform

hierarchical-based FC. In a sequel paper [121], the authors employed functional dominance

for the logic cells and claimed to have observed a collapse ratio of less than 25% for the

first time. In [122], Sandireddy et al. identified that the above two approaches require a

quadratic number of ATPG runs in the size of the target fault list. They reduced the problem


complexity to a linear number of ATPG runs. Overall, dominance-based FC would provide a

compact fault list that can be directly targeted by an off-the-shelf ATPG engine. For practical purposes, even for a hierarchical approach, we need a low-cost, efficient preprocessor that

can enhance the structural FC towards results that are achievable by functional FC.

In our work, we propose a new, low-cost technique to perform dominance analysis based on

the extensibility relationship exhibited by fault-pairs [123]. Further, from a theoretical

point of interest, we also give a lower bound on the size of a minimum collapsed fault list.

Experimental results indicate that, on average, our technique achieved a 5% reduction in the CFL size, a 96%–98% reduction in the memory consumed, and a 2.3× speed-up when compared with the best-known dominance-based FC engine.

Outline

The next section discusses the preliminaries necessary for the rest of this section. The following two sections discuss the theory and algorithm of our fault collapsing engine. Section 5.1.5 discusses the results obtained. Finally, Section 5.1.6 summarizes the section with future directions.

5.1.2 Preliminaries

Definition 1. A fault α is said to diagnostic dominate [122] fault β if and only if each test

vector t that detects β also detects α at the exact same outputs at which β was detected.

Definition 2. A fault α is said to detection dominate [122] fault β if and only if each

test vector t that detects β also detects α (no restriction on the outputs at which faults are


detected).

Similar definitions for diagnostic/detection equivalence were proposed in [122]. Diagnostic

dominance/equivalence ⇒ detection dominance/equivalence; the reverse is not necessarily

true.

Definition 3. Unique requirements for a fault α (URα) is the set of necessary gate assign-

ments that must be satisfied by any test vector t to detect α.

An inconsistent URα (no test vector can achieve all assignments in URα) ⇒ fault α is

redundant. Though it is usually hard to compute the complete URα set, there are low-cost

techniques to compute an incomplete URα [136,137].

Definition 4. D-Frontier for a fault α given URα (DFα(URα)) is the set of all gates G that

satisfy either (i) G is an internal gate whose output value is unknown and at least one of the

inputs to G has a value that is different in the good and faulty machines (D or D̄), or (ii)

G is an output that has a value that is different in the good and faulty machines.
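Definition 4 can be sketched directly on a five-valued encoding {0, 1, X, D, D̄}; the gate and net names below are made up for illustration, and "Db" stands in for D̄.

```python
# Toy D-frontier computation per Definition 4.

def d_frontier(values, gates, outputs):
    """values: net -> five-valued symbol; gates: {gate: list of input nets};
    outputs: list of primary-output nets."""
    df = set()
    for g, ins in gates.items():
        # Condition (i): internal gate with unknown output and a fault
        # effect (D or Db) on at least one input.
        if values[g] == "X" and any(values[i] in ("D", "Db") for i in ins):
            df.add(g)
    for o in outputs:
        # Condition (ii): a primary output carrying a fault effect.
        if values[o] in ("D", "Db"):
            df.add(o)
    return df

vals = {"a": "1", "b": "D", "g1": "X", "g2": "0"}
print(sorted(d_frontier(vals, {"g1": ["a", "b"], "g2": ["b"]}, ["g2"])))  # ['g1']
```

Computing DFα(URβ) then amounts to running such a scan after injecting fault α and simulating the assignments in URβ.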

We use the notation DFα(URβ) to indicate the D-frontier for fault α under the UR of

fault β. If DFα(URβ) is empty, then no vector t that detects fault β can detect fault α.

Further, if DFα(URα) is empty then α is redundant. Next, we present theorems proposed

in the literature with the objective of performing as much functional FC as possible at a

low computational cost. Note that, in the following, each theorem encompasses the theorem(s) presented before it; that is, it collapses the fault list at least as much as the theorem(s) presented before it.


Theorem 5. Two faults α and β are said to be equivalent [115] if (i) URα ≡ URβ and (ii)

DFα(URα) ≡ DFβ(URβ).

Theorem 6. Fault α dominates fault β [119] if DFα(URβ) ≡ DFβ(URβ).

The worst-case complexity of applying Theorem 6 is quadratic relative to that of Theorem 5, since the former needs to compute the D-frontier of one fault under the UR of another.

Definition 5. D-Frontier Containment [124]: D-frontier DF0 is said to contain D-frontier DF1 if and only if the following two conditions hold: (i) DF0 ⊇ DF1 and (ii) either the fanout cones of gates in (DF0 \ DF1) do not overlap with those of gates in DF1 or, if they

overlap, any gate G in the overlapping region has the same inversion parity from all paths

originating from the corresponding gates in (DF0 \ DF1) and DF1.

With the above definition, Vimjam et al. generalized the fault dominance relation based on

the following theorem [124].

Theorem 7. Fault α dominates another fault β if DFα(URβ) contains DFβ(URβ).

Unlike Theorem 6, which uses an equivalence check, Theorem 7 uses a containment check. (DF0 ≡ DF1) ⇒ (DF0 contains DF1); however, the reverse is not necessarily true. An interesting observation is that all theorems presented above focus on identifying diagnostic

equivalence/dominance. In this work, we focus on efficiently identifying diagnostic dom-

inance relationship among fault-pairs based on extensibility relationship exhibited by the

pairs. Our technique can capture a special case of detection dominance as well. We show


that our dominance analysis encompasses and supersedes the above theorems; thus our FC

technique will provide equal or more compact fault list with a comparable computational

complexity.

5.1.3 Extensibility based Dominance Analysis

Proposed Fault Collapsing

Observation 1. The superset condition in Definition 5 is an over-specification when considering Theorem 7. It is sufficient for each G ∈ DFβ(URβ) to satisfy one of the following:

1. G ∈ DFα(URβ) or

2. When fault α is present, G has a fault effect (D or D̄) under URβ, or

3. Any X-path1 from G reconverges with a gate R ∈ DFβ(URβ) and the inversion parity

from G to R is such that it masks the fault effect(s) at the input(s) of R.

We explain the above observation with the following example. Consider the circuit shown

in Figure 5.1. Let fault α be G4’s first input (the fanout branch from gate G1) sa-1 and

fault β be G1’s output sa-1. Now, URβ ≡ {G1}, DFβ(URβ) ≡ {G3, G4} and DFα(URβ)

≡ {G7}. First, G3 ∈ DFβ(URβ) satisfies the third condition in the above observation.

There are 2 paths from G3 to any primary output; the path G3 → G5 → G6 → G8

is blocked at gate G6 due to the controlling value of side input G2. The only X-path is

¹P is an X-path (to an output O) if the value of every G′ ∈ P is unknown.


[Figure 5.1 shows a gate-level circuit with primary inputs, gates G1–G8, and the fault sites used in the example below.]

Figure 5.1: Extensibility based dominance analysis

G3 → G4 → G7. However, even if the fault effect of fault β is propagated through gate G3,

it will be blocked at gate G4 ∈ DFβ(URβ). Thus G3 is not a potential D-frontier gate to

propagate the fault effect to a primary output. Next, G4 ∈ DFβ(URβ) satisfies the second

condition in the above observation. Finally, note that the second condition in Definition 5

still holds2. Hence, from the above observation we can conclude that fault α dominates fault

β. However, this conclusion cannot be obtained from Theorem 7 since DFα(URβ) does not

contain DFβ(URβ). The above observation ensures that for any vector t that detects β, the

gate assignments in DFβ(URβ) are always extensible to the assignments (D-frontier) obtained

when fault α is simulated under t (note that t satisfies URβ); we say that DFβ(URβ) is

extensible to DFα(URβ).

Definition 6. DF Extensibility - DFβ(URβ) is said to be extensible to DFα(URβ) if and

only if the following two conditions hold: (i) Observation 1 holds true and (ii) Either the

²We remove any G ∈ DFβ(URβ) that satisfies the third condition in Observation 1.


fanout cones of gates in (DFα(URβ) \ DFβ(URβ)) do not overlap with those of gates in

DFβ(URβ) or if they overlap, any gate G in the overlapping region has the same inversion

parity from all paths originating from the corresponding gates in (DFα(URβ) \ DFβ(URβ))

and DFβ(URβ).

Theorem 8. Fault α dominates another fault β if DFβ(URβ) is extensible to DFα(URβ).

Proof: Let t be a test for fault β; t satisfies all assignments in URβ. Given the extensibility

relation, we can conclude that t excites fault α³. Let Pβ be an arbitrary path through which

t propagates the fault effect from the site of fault β to a primary output O. We divide Pβ into two segments: Pβ0, the path from the fault site to one of the gates G ∈ DFβ(URβ), and Pβ1, the path from G to O. The first condition in Definition 6 ensures

that t also propagates the fault effect of fault α to G. Let us call this path Pα0. The last condition in Definition 6 ensures that Pβ1 is not blocked under t in the presence of fault α. Thus, Pα0 → Pβ1 represents the path through which t propagates the effect of α to output O. □

The proof given above implicitly shows that Theorem 8 focuses on diagnostic dominance.

Further, (DF0 contains DF1) ⇒ (DF1 is extensible to DF0); however, the reverse is not nec-

essarily true. Thus, Theorem 8 captures all dominance relationships determined by Theorem

7.

Observation 2. If DFα(URβ) includes an output O, then we can immediately conclude that α ⪰ β.

3DFβ(URβ) 6= ∅; otherwise β is redundant. Also, t is a test for β.



Figure 5.2: Special case of detection dominance

This observation follows from the fact that each test vector t for β will justify URβ and

fault α is propagated to output O under URβ. Thus, in such cases, we can omit the extensibility check. For instance, consider the c17 circuit from the ISCAS85 [76] suite (Figure

5.2A). Let fault α be gate G3’s output sa-0 and fault β be gate G8’s second input (the

fanout branch from gate G7) sa-1. Now URβ ≡ {G7, G2}, DFβ(URβ) ≡ {G10, G13} and

DFα(URβ) ≡ {G6, G10, G13}. Though G6 ∈ (DFα(URβ) \ DFβ(URβ)) reconverges with G10 ∈ DFβ(URβ) with different inversion parities⁴, from Observation 2, α ⪰ β since the output G13 ∈ DFα(URβ).

The above observation may capture a special case of detection dominance. In other words, α ⪰ β, and fault α may be detected on at least one different primary output than β. To visualize this, consider Figure 5.2B. Let fault α be gate G2's output sa-1 and fault β be

⁴This violates the second condition in Definition 6.


gate G4's second input (the fanout branch from gate G2) sa-1. Now URβ ≡ {G2, G3}⁵, DFβ(URβ) ≡ {G5} and DFα(URβ) ≡ {G5, G8}. Again, from Observation 2, α ⪰ β since the output G8 ∈ DFα(URβ). However, β can never be detected at output G8.

Lower Bound

Theorem 9. A lower bound on the size of a collapsed fault list FL is the same as the lower

bound on the test set T such that each detectable fault in the circuit is detected by at least

one test in T .

Proof: Let FLmin represent a minimum collapsed fault list and Tmin represent a minimum test set that detects all detectable faults in the circuit. Suppose |FLmin| < |Tmin|. Without loss of generality, assume that |FLmin| = |Tmin| − 1. Then, by the pigeon-hole principle, ∃ t ∈ Tmin that detects only faults that are already detected by other vectors in Tmin. Then Tmin cannot be a minimum test set; thus |FLmin| ≮ |Tmin|.

Suppose |FLmin| > |Tmin|. Without loss of generality, let |FLmin| − 1 = |Tmin|. Then, using the pigeon-hole principle again, either (i) ∃ t ∈ Tmin that must detect two faults α and β in FLmin or (ii) one of the faults (say α) is redundant. In either case, we can eliminate one fault from the collapsed fault list; then FLmin is not a minimum collapsed fault list. Thus |FLmin| ≯ |Tmin|. Since |FLmin| ≮ |Tmin| and |FLmin| ≯ |Tmin|, we can conclude that |FLmin| = |Tmin|. □

⁵FIRE [136] analysis will not find the UR G1 for fault β.


Krishnamurthy and Akers [138] showed that determining the minimum test set and its size

is NP-hard, even for an irredundant combinational circuit. Thus, from the above theorem,

determining the minimum size of a collapsed fault set is also NP-hard. On the other hand, even if we have a minimum collapsed fault list FLmin, it may not be easy for an ATPG engine to generate the minimum test set, since it needs to generate a concurrent test for each representative fault in FLmin [126]. Nevertheless, Theorem 9 presents a lower bound on the collapsed fault list size that is of theoretical interest.

5.1.4 Algorithm and Implementation Details

Algorithm 6 Extensibility based Fault Collapsing

1: Structural analysis (equivalence + dominance) to obtain FL
2: Compute static implications
3: Perform FIREUR analysis (find redundant faults + UR)
4: for each fault α ∈ FL do
5:    Obtain multi-node implications. Get reduced URα
6:    Obtain DFα(URα)
7: end for
8: for each fault β ∈ FL do
9:    Obtain neighbor fault set Nβ
10:   for each fault α ∈ Nβ do
11:      if Observation 2 holds true then
12:         Remove fault α from FL
13:      else if Theorem 8 holds true then
14:         Remove fault α from FL
15:      end if
16:   end for
17: end for

The entire algorithm is illustrated in Algorithm 6. First, we perform a quick structural

FC (equivalence and dominance) to obtain an initial fault list FL. Although Theorem 8


can capture such FC opportunities, we use structural FC instead to reduce the overall run-time. Next, we identify static implications (direct, indirect and extended backward) [2]. We use these implications to perform FIRE analysis [136]. This analysis helps remove redundant faults, although it may not remove all of them, since FIRE is an incomplete but low-cost technique. Also, URα for each α ∈ FL is implicitly determined during this analysis (FIREUR analysis), as described in [137].

Next, for each α ∈ FL, we identify the multi-node implications of URα. This step may lead

to new non-trivial UR for α that cannot be identified during the above analysis. With this

updated set of UR, we compute and store DFα(URα). The memory footprint for storing the learned UR is usually very high, especially for large-scale designs. For scalability, we store only the

antecedent of an implication (or implication path) if both antecedent and consequent are

part of the UR. For instance, reconsider the c17 circuit shown in Figure 5.2A. Let fault α

be gate G7’s output sa-1; then URα ≡ {G7, G3, G4}. Now, we reduce this to {G7} since

G7 → G3 and G7 → G4. In the sequel, we refer to this reduced set of UR as URα. One might expect a run-time penalty when accessing the UR for each fault. This is not the case: we are only interested in determining the set of UR for a fault, and we can obtain it by computing the transitive closure of each assignment in URα. In other words, we never need random access to a particular assignment in the UR set; this is analogous to the difference between linked-list and array data structures. Further, we may have to perform the antecedent check before adding an assignment to the UR set. In practice, this time is usually negligible, as shown in our experimental results.


Finally, for each fault β in FL, we first identify the neighbor set Nβ [124]. This set is defined

as the set of faults that includes (i) G’s output sa-v, where G = v ∈ URβ or (ii) G’s input

(fanout branch of gate G′) sa-v, where G′ = v ∈ URβ. Then, for each fault α ∈ Nβ, if Observation 2 holds true, then we remove fault α from FL. Otherwise, if Theorem 8 holds true, then we remove fault α from FL. The set of remaining faults in FL is reported as the final collapsed fault list.

Consider again the c17 circuit example discussed in Observation 2. Recall that for the fault

β: gate G8’s second input (the fanout branch from gate G7) sa-1, URβ ≡ {G7, G2}. After

structural FC, Nβ consists of the faults α0: gate G7’s output sa-1, α1: gate G9’s first input

(the fanout branch from gate G7) sa-1 and α2: gate G3's output sa-0. Note that structural FC will identify G2's output sa-0 ≡ G8's output sa-1, G8's output sa-0 ⪰ β, G9's output sa-0 ⪰ α1, G10's second input (the fanout branch from G8) sa-0 ≡ G10's output sa-1, G11's first input (the fanout branch from G8) sa-0 ≡ G11's output sa-1 and G11's output sa-1 ⪰ α1. Thus such faults are not considered in Nβ. Our extensibility relation based dominance analysis will determine that α0 ⪰ β and α2 ⪰ β. Thus, it will eliminate α0 and α2 from FL.

5.1.5 Experimental Analysis

We implemented EXTRACTOR (EXTensibility based dominance extRACTOR) in C++

on a Linux platform. Experiments were conducted on a 3.0 GHz Intel Xeon machine with 2 GB RAM running Linux. ISCAS85 [76] and full-scan versions of ISCAS89 [77]


benchmark circuits were used. We compared EXTRACTOR with GRADER [124]⁶ (based

on Theorem 7) since it is the best known dominance extractor so far; it produced the most

compact fault lists prior to our work. Tables 5.1 and 5.2 report the FC results obtained for

the benchmark circuits, while Tables 5.3 and 5.4 summarize the resource usage results.

Table 5.1: EXTRACTOR vs. GRADER [124] - ISCAS85 [76]

Ckt  Total Faults  GRADER [124]: |CL|, CR  EXTRACTOR: |CL|, CR  %Rem. Flts.

c432 1078 349 32.37 346 32.10 99.14

c499 1366 846 61.93 654 47.88 77.30

c1355 3366 614 18.24 614 18.24 100.00

c1908 4870 943 19.36 904 18.56 95.86

c2670 7282 1313 18.03 1153 15.83 87.81

c3540 9354 1704 18.22 1624 17.36 95.31

c5315 13988 2421 17.31 2369 16.94 97.85

c6288 14560 4111 28.23 4077 28.00 99.17

c7552 19942 3170 15.90 3105 15.57 97.95

c880 2396 535 22.33 533 22.25 99.63

AVG 7820.2 1600.6 25.19 1537.9 23.27 95.00

GRADER generates the most compact fault list prior to this work. CL: Compact Fault List; CR: Collapse Ratio.

The formats of Tables 5.1 and 5.2 are the same. The first two columns show the circuit

instance and total number of uncollapsed faults in the circuit, respectively. The next two

columns report the size of the collapsed fault list and the collapse ratio (|Collapsed Fault list|

/ |Total Faults|) for GRADER [124]. The following two columns report the same metrics

for our FC tool, EXTRACTOR. The last column reports the percentage ratio of the size of the collapsed list obtained by EXTRACTOR over that obtained by GRADER. Thus, the

⁶We acknowledge V. Vimjam for providing us with the code for GRADER.


Table 5.2: EXTRACTOR vs. GRADER [124] for full-scan ISCAS89 [77]

Ckt  Total Flts.  GRADER [124]: |CL|, CR  EXTRACTOR: |CL|, CR  %Rem. Flts.

s5378 14866 2076 13.96 1949 13.11 93.88

s9234 28130 2682 9.53 2596 9.23 96.79

s13207 41212 4500 10.92 4224 10.25 93.87

s15850 49424 4773 9.66 4547 9.20 95.27

s35932 96290 12991 13.49 12972 13.47 99.85

s38417 115226 14022 12.17 13289 11.53 94.77

s38584 110406 16344 14.80 15728 14.25 96.23

s1196 3204 610 19.04 565 17.63 92.62

s1238 3226 634 19.65 585 18.13 92.27

s1423 3982 685 17.20 669 16.80 97.66

s1488 4158 785 18.88 744 17.89 94.78

s1494 4158 783 18.83 740 17.80 94.51

s208 582 105 18.04 104 17.87 99.05

s27 78 22 28.21 19 24.36 86.36

s298 800 145 18.13 145 18.13 100.00

s344 958 140 14.61 140 14.61 100.00

s349 968 141 14.57 139 14.36 98.58

s382 1030 207 20.10 189 18.35 91.30

s386 1064 225 21.15 222 20.86 98.67

s400 1074 213 19.83 186 17.32 87.32

s420 1170 211 18.03 210 17.95 99.53

s444 1168 214 18.32 186 15.92 86.92

s510 1346 267 19.84 263 19.54 98.50

s526n 1380 286 20.72 284 20.58 99.30

s526 1378 287 20.83 284 20.61 98.95

s641 2030 290 14.29 262 12.91 90.34

s713 2160 317 14.68 263 12.18 82.97

s820 2186 492 22.51 485 22.19 98.58

s832 2206 492 22.30 484 21.94 98.37

s838 2322 423 18.22 422 18.17 99.76

s953 2470 545 22.06 485 19.64 88.99

AVG 16150 2126 17.57 2044 16.67 95.03

GRADER generates the most compact fault list prior to this work. CL: Compact Fault List; CR: Collapse Ratio.


last column indicates the percentage of faults that our technique produces with respect to

GRADER. EXTRACTOR was able to further collapse the fault list than that obtained by

GRADER for all circuits except for c1355, s298 and s344, in which EXTRACTOR achieved

the same FC results as GRADER. For c499, we achieved 14% additional FC over that obtained by GRADER. When looking only at the collapsed faults, in c499, EXTRACTOR's results show that only 77% of the collapsed list obtained by GRADER is needed. On average, for both ISCAS85 and ISCAS89 circuits, EXTRACTOR results indicate that it

can further collapse the fault list reported by GRADER consistently by an additional 5%.

Thus, for a similar algorithmic run-time complexity, our FC technique can reduce the fault

list further.

Table 5.3: Resource Usage for ISCAS85 Benchmark

Ckt  SI  Time: [124], Ours  MemUR: [124], Ours  %MR  SU

c1355 0.03 0.84 0.27 1303.00 32(8 24) 97.54 3.08

c1908 0.06 1.45 1.09 2761.00 53(20 33) 98.08 1.33

c2670 0.08 1.83 0.76 3047.00 84(31 53) 97.24 2.40

c3540 0.68 9.44 14.99 19664.00 114(49 65) 99.42 0.63

c432 0.01 0.16 0.06 204.00 15(8 7) 92.65 2.64

c499 0.00 0.13 0.06 335.00 12(4 8) 96.42 2.09

c5315 0.37 3.62 2.27 7444.00 160(63 97) 97.85 1.60

c6288 0.35 2.65 1.32 1494.00 140(34 106) 90.63 2.00

c7552 0.82 9.39 8.34 14440.00 242(101 141) 98.32 1.13

c880 0.02 0.38 0.06 362.00 27(10 17) 92.54 6.55

AVG 2.44 2.99 2.92 5105.40 87.9(32.8 55.1) 96.07 2.34

Time in seconds. SI: Static Implications; MemUR: Memory (KB) for storing UR; %MR: % Memory Reduction; SU: Speed-Up

The first two columns in Tables 5.3 and 5.4 show the circuit name and the time taken (in seconds)


Table 5.4: Resource Usage for full-scan ISCAS89 Benchmark

Ckt  SI  Time: [124], Ours  MemUR: [124], Ours  %MR  SU

s1196 0.66 1.64 2.51 4528 66(35 31) 98.54 0.65

s1238 0.90 1.79 2.99 4430 73(41 32) 98.35 0.60

s13207 3.83 72.72 79.48 68320 435(110 325) 99.36 0.91

s1423 0.03 0.79 0.22 1560 43(14 29) 97.24 3.64

s1488 5.50 3.75 11.56 9910 132(84 48) 98.67 0.32

s1494 4.25 3.82 12.03 9919 133(85 48) 98.66 0.32

s15850 3.39 69.68 41.22 83K 536(151 385) 99.36 1.69

s298 0.01 0.15 0.03 229 9(4 5) 96.07 4.45

s344 0.01 0.18 0.03 332 10(3 7) 96.99 5.42

s349 0.01 0.18 0.03 333 10(3 7) 97.00 5.06

s35932 119 2211 1487 0.9M 5853(5120 733) 99.37 1.49

s382 0.01 0.19 0.04 281 11(4 7) 96.09 4.97

s38417 7.29 58.95 42.03 77843 1223(330 893) 98.43 1.40

s38584 149 8187 4416 1.8M 2328(1501 827) 99.87 1.85

s386 0.03 0.28 0.18 839 18(10 8) 97.85 1.58

s400 0.01 0.16 0.04 296 11(4 7) 96.28 4.00

s420 0.01 0.31 0.13 902 12(3 9) 98.67 2.42

s444 0.01 0.21 0.05 355 13(5 8) 96.34 4.38

s510 0.11 0.41 0.43 1194 22(10 12) 98.16 0.97

s526 0.02 0.27 0.09 537 18(9 9) 96.65 2.83

s526n 0.02 0.24 0.09 543 18(9 9) 96.69 2.62

s5378 0.41 5.82 3.19 10376 161(46 115) 98.45 1.83

s641 0.01 0.36 0.06 714 20(4 16) 97.20 5.84

s713 0.01 0.41 0.07 722 20(4 16) 97.23 5.99

s820 0.32 0.97 1.79 3329 44(29 15) 98.68 0.54

s832 0.37 1.00 1.84 3301 44(29 15) 98.67 0.55

s838 0.03 1.05 0.75 3091 25(7 18) 99.19 1.40

s9234 1.09 42.82 25.79 58762 322(110 212) 99.45 1.66

s953 0.52 1.53 2.35 3851 47(23 24) 98.78 0.65

AVG 10.23 367.86 211.45 0.1M 402(269 133) 98.01 2.42

Time in seconds. SI: Static Implications; MemUR: Memory (KB) for storing UR; %MR: % Memory Reduction; SU: Speed-Up


for determining static implications. The next two columns show the FC time (excluding static implication time) for GRADER and EXTRACTOR, respectively. The following two columns report the memory consumed (in KB) by GRADER and EXTRACTOR for storing UR. For EXTRACTOR, we considered the memory used by both the UR (first value inside brackets) and the implication graph (second value inside brackets), since it has to find the transitive closure (using the implication graph) to determine the complete known UR for any given fault. However, for GRADER, we only considered the memory used by UR, since it does not require the implication graph during FC. The last two columns indicate the memory reduction and speed-up achieved by EXTRACTOR over GRADER. The time taken for structural FC is ignored since it is identical for both methods.

is identical for both the methods.

In EXTRACTOR, when determining URα for each fault α during FIREUR analysis, we perform the antecedent check (see Section 5.1.4) before adding a gate assignment G = v to URα. On average, this overhead incurred only 0.2 seconds for the ISCAS85 benchmarks and 10 seconds for the ISCAS89 benchmark circuits. Further, the memory reduction achieved in terms of storing the UR is significant. For instance, for the large benchmark s38584, EXTRACTOR required only 0.13% of the memory consumed by GRADER for storing the UR.

Tables 5.3 and 5.4 indicate that, on average, EXTRACTOR required only 4% and 2% of the memory used by GRADER for storing UR for the ISCAS85 and ISCAS89 benchmark circuits, respectively. Finally, our compact storage aided in speeding up the steps subsequent to FIREUR analysis. This is evident from the 2.3× average speed-up achieved by EXTRACTOR over GRADER. The speed-up is likely due to the compact storage (resulting in better cache/memory performance), since the algorithmic complexities of both

GRADER and EXTRACTOR are similar. For instances with significant slow-down (speed-

up < 0.90), the total run time for both tools was less than 15 seconds. This shows that the antecedent check overhead is a burden only when the time taken for FC is very small. Further, in terms of actual run time, the overhead caused by this check is insignificant. Overall, our compact storage of UR significantly reduced the memory footprint, which in turn resulted in an average 2.3× speed-up.

One may point out that the test generation time may be smaller than the FC time for certain benchmark circuits. Such a comparison may not be fair, since the test patterns generated using EXTRACTOR's (or even GRADER's) CFL will be more compact than those generated using the uncollapsed (or just structurally collapsed) fault list. The reason is that our FC technique identifies several non-trivial α ⪰ β relationships among faults and eliminates fault α from the CFL. If they are not collapsed, then the test generator may obtain one test vector each for fault α and fault β. Thus, we need to consider the test set compaction quality and generation time when making such a comparison. This is an interesting future direction for research, since both FC and test set compaction are essential components of commercial test generation packages.


5.1.6 Summary

Obtaining a compact fault list is essential since it has a direct impact on the costs incurred for manufacturing test. We proposed a novel extensibility relation that supersedes the containment relation, which had achieved the best fault dominance analysis results prior to our work. We also identified a special case of detection dominance not reported by any of the low-cost FC engines proposed in the literature. Further, we established a lower bound on the size of a collapsed fault list. Finally, results on ISCAS85 and full-scan versions of ISCAS89 benchmark circuits reveal that, on average, we were able to eliminate 5% of the collapsed faults reported by the best known low-cost dominance-based FC engine, GRADER. Further, the results show that our technique required only 2%–4% of the memory used by GRADER, which provided a 2.3× average speed-up for our technique. Future Work: One future direction is to intelligently enrich the FC engine with low-cost mechanisms to identify detection dominances as well. Another is to study the effects of fault collapsing on test set generation time and compaction quality.

5.2 Multi-Valued SAT-based ATPG

5.2.1 Introduction

Automatic Test Pattern Generation (ATPG) has been a widely studied problem for the past four decades. Algorithms that address the ATPG problem attempt to generate a test pattern,


if any, for the target fault (single stuck-at fault model) in the circuit. Since ATPG is an NP-complete problem [139], several heuristics and learning techniques were proposed in the literature [48, 4, 15, 50, 46, 67, 53, 54, 55, 64] to speed up test pattern generation. Roth proposed the D-algorithm [4] in 1966 for combinational test generation and proved that it is complete. In [15], Goel proposed an implicit enumeration algorithm that searches only on primary inputs to generate a test pattern for the target fault. In [67], the authors proposed an efficient learning technique for test generation and other CAD problems. In [50], the authors learned from conflict-yielding search states and used such information in subsequent searches to avoid these non-solution search subspaces. With such learning, they showed significant improvement for the underlying ATPG algorithm. However, their technique is devoid of conflict analysis and identifies an over-specified search state as the reason for a conflict. In [46], the authors perform a level-dependent analysis to backtrack non-chronologically in case of path sensitization conflicts, which helps in avoiding non-solution subspaces. Their analysis does not result in learned search states that could be used for future searches.

In recent years, there has been significant interest in using Satisfiability (SAT) solvers for ATPG. In [53], the author modeled ATPG as a Boolean SAT problem. Subsequently, several techniques to improve this formulation were proposed in [54, 55, 56, 57]. In [54], the authors propose a set of greedy heuristics to solve the test generation problem using SAT. In [55], the authors use a new data structure for the implication graph and incorporate approaches like single-cone

processing and backward justification to improve their SAT framework for test generation.

In [56] a complete algorithm based on transitive closure of implication graph was proposed


and in [57] algorithms for solving Boolean SAT instances of several combinational circuit

problems like test pattern generation and equivalence checking were proposed. All of these

methods model the ATPG problem in the Boolean domain using clauses that encode the

underlying circuit. However, in reality, ATPG is a problem in the multi-valued domain

{0, 1, D, D̄, X}.

In this section, we propose a new ATPG framework based on multi-valued SAT and attempt

to solve the ATPG problem directly in the multi-valued domain itself [58]. The framework

inherits the powerful features of a SAT solver, such as conflict-driven learning and fast constraint propagation, and at the same time reduces the size of the problem. Further, we

investigate the powerful search-space pruning that is based on search-states in the decision

tree and integrate it into the multi-valued framework. We propose a new method to learn the

exact reason for path sensitization conflicts and to perform non-chronological backtracking.

We also propose an efficient representation to store conflict reasons, which helps to prune

such search subspaces when encountered again. Furthermore, our conflict analysis has the

potential to identify several unpropagable search states from a single path-sensitization conflict. Experimental results reveal the promise of our approach, in which significantly higher coverage was obtained with fewer aborted faults for most circuits.

The rest of this section is organized as follows. We discuss preliminaries in Section 5.2.2.

The new multi-valued framework for ATPG is proposed in Section 5.2.3. In Section 5.2.4

we discuss our conflict analysis method, conflict search states representation and the early

backtrack possible with our method. Section 5.2.5 presents the experimental results and


Section 5.2.6 concludes the section.

5.2.2 Preliminaries

Multi-Valued SAT (MV-SAT)

We use the notation and definitions followed in [59] for MV-SAT. A few of the definitions

are reviewed below:

• Multi-Valued Variable: A variable in the MV-SAT problem with domain size greater

than one. Example: x is a multi-valued variable with domain {0, 1, D, D̄}.

• Multi-Valued Literal: A multi-valued literal is a Boolean function defined on a multi-

valued variable with respect to a subset of its domain. Example: x^{1D} is a multi-valued literal. It will evaluate to true if and only if the multi-valued variable x is assigned a value from {1, D}, which is a subset of its domain {0, 1, D, D̄}.

• Multi-Valued Clause: A multi-valued clause is a logical disjunction of multi-valued

literals. Example: (x^{1D} + y^{0D}).

• Multi-Valued Conjunctive Normal Form (MVCNF) formula: A MVCNF formula is a

logical conjunction of multi-valued clauses. Example: (x^{1D} + y^{D})(a^{1} + b^{0D})


Background on Search State based learning

In [60], the authors showed that simulation of a partial primary input assignment decomposes

the logic circuit. In [50] the authors use this information to define a search state as the

circuit decomposition at any decision point in the ATPG search. Furthermore, an E-frontier

is defined as a set of assigned gates connected to at least one of the primary outputs through

an X-PATH [50]. An X-PATH is a path of nodes whose values are all don’t-care (X). Any

search state in the ATPG decision tree can be uniquely represented by the corresponding E-frontier in the circuit. The logic decomposition formed by an E-frontier in the circuit is also called the cut-set for the search state in the circuit. The three terms search state, E-frontier, and cut-set for the search state all represent the logic decomposition of the circuit and hence are used interchangeably in the subsequent discussion. Precise definitions adopted from [61, 66]

needed to describe our algorithm are given below.

• Decision Tree: The tree obtained by the branch-and-bound ATPG procedure, with

input assignments as internal decision nodes.

• Search-State: After choosing each decision and performing logic simulation/implication,

logic values of all the internal gates form a search state in the circuit.

• Cut-set: Consider the circuit as a directed acyclic graph, G, with directed edges

connecting the internal gates. Let C be a subset of the directed edges in G that

partitions the graph into two disjoint sub-graphs X and Y . Then, the conjunction of

all the tail nodes of C forms a cut-set.


• Cut-set for the search-state: Each search-state can be uniquely represented by a

cut-set in the circuit. After each decision, the cut-set can be obtained by a multiple

backtrace from the primary outputs. The first frontier of specified nodes, encountered

during the backtrace, is the cut-set for the search-state. In the sequel, we use the term

cut-set to refer to “cut-set for the search-state”.

For the circuit in Figure 5.3(A), a partial decision tree is shown in Figure 5.3(B). The sets indicated on the decision edges (for example, {g=1} and {g=1, f=1, b=0}) are cut-sets for the corresponding search states. For the last decision (b=0), the cut-set {g=1, f=1, b=0} is denoted by the dashed line in Figure 5.3(A).

• solution/conflict branch: A branch in the decision tree that has at least one/no

solution below it.

• solution/conflict cut-set: A cut-set for the search-state in the solution/conflict

branch.

• solution/conflict subspace: A search subspace below a solution/conflict branch.

• activated cut-set: A cut-set in the circuit which contains at least one gate with a {D/D̄} value.

• unpropagable cut-set: An activated cut-set in the circuit in which no gate with a D or D̄ can be observed at any of the primary outputs.

• propagable cut-set: An activated cut-set in the circuit in which at least one gate with a D or D̄ can be observed at at least one of the primary outputs.

• search state dominance: A search state S1 dominates another search state S2 if and

only if

1. the gate assignments in S1 are contained in S2

2. all {D/D̄} values in S2 are contained in S1.

In our work, we also refer to search-state dominance as D-dominance for short. Example: Consider three search states: S1 = {a=1, b=D}, S2 = {a=1, b=D, c=0}, and S3 = {a=1, b=D, c=D}. S1 D-dominates S2, but S1 does not D-dominate S3.

[Figure 5.3 panels: (A) Search State Representative; (B) Decision Tree. Remaining figure content not recoverable from the text layer.]

Figure 5.3: Cut-sets in the Search Space.

A given cut-set in the circuit will lead to a specific search subspace in the decision tree.

In [50], the authors stored the cut-sets that lead to conflict subspaces in a hash-table.

After each decision, the current cut-set is searched in the hash-table. If an exact matching

cut-set exists, they simply link the current branch to the stored node of the decision tree.


Otherwise, they proceed with the usual search process. In this way, previously encountered

search subspaces are not repeatedly explored. However, such an approach does not consider

the following:

1. The exact reason for a path sensitization conflict may be a subset of the current

E-Frontier, which can be stronger than the search state represented by the E-Frontier.

2. An exact match of the latest activated cut-set (LAC) with one of the hash-table entries (HT), in order to prove that the LAC is unpropagable, is an over-specified requirement. It was proved in [66] that a LAC is unpropagable if it is D-dominated by any of the HT entries. However, no experimental results were provided to demonstrate its practical importance.

Our algorithm addresses both of these issues. We perform conflict analysis whenever a path sensitization conflict occurs and store the unpropagable cut-set information in the form of multi-valued clauses and hash-table entries. Furthermore, our conflict analysis may learn several additional unpropagable cut-sets, beyond the current one, as byproducts. In the rest of this section, the term conflict refers to a "path sensitization conflict" unless otherwise specified.

5.2.3 Multi-Valued SAT Framework for ATPG

In this section, we describe our framework to represent a given circuit as a MVCNF formula,

from which the test patterns are generated for stuck-at faults in the circuit. The MVCNF


formula is simply the logical conjunction of the MVCNF clauses for each gate. We first convert the given circuit into an AND-INVERTER Graph (AIG) using the publicly available ABC tool [62]. Hence, we present the MVCNF formula only for these two gate types in Figure 5.4. However, our approach does not require the circuit to be in AIG form, and the formula for other gate types can be obtained similarly.

[Figure omitted: the MVCNF clauses for (A) a two-input AND gate and (B) a NOT gate.]

Figure 5.4: Multi-Valued Clauses
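The consistency conditions that the clauses in Figure 5.4 encode follow the standard D-calculus, in which each 4-valued symbol is a (fault-free, faulty) pair of Boolean values. A small sketch of that semantics (illustrative only; the dissertation's MVCNF clauses encode the same truth tables declaratively):

```python
# Each 4-valued symbol as a (fault-free, faulty) Boolean pair.
ENC = {"0": (0, 0), "1": (1, 1), "D": (1, 0), "Dbar": (0, 1)}
DEC = {pair: sym for sym, pair in ENC.items()}

def and4(a, b):
    """Two-input AND evaluated component-wise over the D-calculus."""
    (ag, af), (bg, bf) = ENC[a], ENC[b]
    return DEC[(ag & bg, af & bf)]

def not4(a):
    """Inverter: complement both the fault-free and the faulty component."""
    ag, af = ENC[a]
    return DEC[(1 - ag, 1 - af)]
```

For instance, `and4("D", "1")` is `"D"`, `and4("D", "Dbar")` is `"0"` (the two fault effects cancel), and `not4("D")` is `"Dbar"`.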

Test Pattern Generation

For test pattern generation, for a target fault, we add a new variable at the fault site to disconnect it from the rest of the circuit. This new variable replaces the original variable within the clauses for gates in the fanout cone of the target fault. We then simply assert the excitation Boolean value for the original variable at the fault site and the corresponding fault-effect value (D or D̄) for the new variable. Unlike [53], where the fanout cone of the fault site has to be duplicated, we need no duplication since we represent the problem in the multi-valued domain. We then call our multi-valued SAT solver on this


MVCNF formula. Decisions are made only at the primary inputs as in [15], in order to

ensure that the cut-sets learned are justified.
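The fault-injection step above amounts to a variable rename inside the fanout cone plus two unit assertions. A schematic sketch (the clause representation, the `_f` suffix, and the helper name are all hypothetical; in the actual MVCNF a literal carries a value set rather than a single value):

```python
def inject_fault(clauses, site, fanout_cone, excitation, fault_effect):
    """Disconnect the fault site from the rest of the circuit: within the
    clauses of gates in the fanout cone, a fresh variable replaces `site`;
    the excitation value is then asserted on `site` and the fault-effect
    value (D or Dbar) on the fresh variable. A clause is a list of
    (variable, value) pairs, and `clauses` pairs each owning gate with its clause."""
    new_var = site + "_f"
    out = []
    for owner, clause in clauses:
        if owner in fanout_cone:
            clause = [(new_var if var == site else var, val) for var, val in clause]
        out.append((owner, clause))
    out.append((site, [(site, excitation)]))           # excite the fault site
    out.append((new_var, [(new_var, fault_effect)]))   # drive the fault effect
    return out

# k stuck-at-1 from the running example: excitation 0 on k, effect Dbar on k_f;
# gate j lies in k's fanout cone, so its clause now refers to k_f.
circuit = [("k", [("b", "1"), ("c", "1"), ("k", "1")]),
           ("j", [("k", "0"), ("j", "1")])]
faulty = inject_fault(circuit, "k", {"j"}, "0", "Dbar")
```

Only the clauses owned by gates in the fanout cone are rewritten; the clause at the fault site itself keeps the original variable, which is why no cone duplication is needed.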

[Figure omitted.]

Figure 5.5: Example Circuit 1

Consider the circuit shown in Figure 5.5. Suppose the fault is the output of gate k stuck-at-1. Figure 5.6 shows the original and newly generated clauses that involve gate k. Variable l is the newly introduced variable. The newly generated clauses are included in the MVCNF formula for the circuit in place of the original clauses. Note that because the fanout cone of the fault need not be duplicated, the size of the new formula is nearly the same as that of the original.

[Figure omitted: (A) the original clauses involving gate k and (B) the newly generated clauses with the introduced variable l.]

Figure 5.6: Multi-Valued clauses for k s@1


5.2.4 Search State Based Learning

In this section, we first present an example to illustrate the need for our approach. Next, we

discuss our learning procedure, the conflict cut-set representation, and the early backtrack enabled by our method. Finally, we discuss the number of unpropagable cut-sets that can be learned from a single conflict.

A Motivating Example

Consider the same circuit and stuck-at fault discussed in the previous section. The decision

tree is shown in Figure 5.7 for our discussion. The cut-set after the fourth decision on gate

c is {e1, d1, kD}. As shown in the decision tree there is no solution below this search space.

An analysis of this scenario yields that with d1, a D on gate k cannot be propagated to the

primary output j.

First, in conventional cut-set-based learning methods [50, 51], the complete cut-set {e1, d1, kD} is learned as a conflict-yielding search state, because the entire E-frontier is stored. This is weaker learning, since {d1, kD} in the frontier is the exact reason for the conflict. In subsequent searches, whenever {d1, kD} D-dominates the latest activated cut-set (LAC), it can be concluded that the LAC is unpropagable. In addition, it may be observed that {d1, kD} is itself unpropagable, because with d1, all paths from k to the primary output are blocked. This generalizes to any other blocked nodes, allowing further unpropagable cut-sets to be identified. Hence, we are able to learn several unpropagable cut-sets from a single conflict analysis.


[Figure omitted: decision tree for the k s@1 example. Legend — DL: decision level; C: conflict; CR: conflict reason; MS: masked sensitized gates.]

Figure 5.7: Decision Tree

Conflict Analysis and Learning

A path sensitization conflict occurs when all possible propagation paths from the fault site are blocked. So, whenever a conflict occurs, we first perform a traversal in the fault site's fanout cone and record the portion of the LAC within the cone, which we call EmptyD. Note that EmptyD is sufficient to represent the current scenario of all propagation paths being blocked. During the fanout-cone traversal, we also identify the maximal decision level (prev_Dblocked_lev) at which a propagation path was blocked, such that prev_Dblocked_lev is strictly less than the current decision level. The initial value of prev_Dblocked_lev is set to the decision level at which the fault was excited. Then, we perform antecedent tracing [46] (traversing the implication graph backwards) for each gate in EmptyD and record the first antecedent nodes decided at a level less than the current decision level. The recorded nodes, together with the currently decided node, constitute the exact reason for the current conflict.
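The traversal just described is a backward walk over the implication graph with a stopping rule. The sketch below is illustrative (antecedents and decision levels stored per node; the graph encoding is an assumption, not the dissertation's implementation):

```python
def trace_reason(empty_d, antecedents, level, cur_level):
    """Walk the implication graph backwards from each gate in EmptyD and
    record the first antecedent nodes decided at a level strictly below the
    current decision level; together with the current decision, these nodes
    form the exact reason for the conflict."""
    reason, seen = set(), set()
    stack = list(empty_d)
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if level[node] < cur_level:
            reason.add(node)                 # first antecedent below cur_level
        else:
            stack.extend(antecedents.get(node, []))
    return reason

# k s@1 example: EmptyD = {j0}; j0 was implied by the current decision a0 at
# level 4, so the traced set is empty and the full reason is {a0} alone.
reason = trace_reason(["j0"], {"j0": ["a0"]}, {"j0": 4, "a0": 4}, 4) | {"a0"}
```
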


Suppose we encounter a conflict for both Boolean values of the current decision variable. We then perform the above analysis in both cases and resolve the two reasons.7 Let this resolved reason be represented by CR, which is a subset of the search state obtained before the current decision. In future ATPG searches, whenever this CR D-dominates the LAC obtained, we can safely conclude that the LAC is unpropagable, and the search can simply backtrack, as there is no solution under this search space.

Let current_Dblocked_lev represent the maximal decision level at which the gates in CR are assigned. We can then backtrack non-chronologically to max(current_Dblocked_lev, prev_Dblocked_lev), since the search space below this level provably does not contain a solution [46]. It may be observed that prev_Dblocked_lev will always be identical for both Boolean values of the current decision variable.

We illustrate the above discussion with the following example. Consider the k s@1 fault scenario discussed previously. Let the current decision be a0. We encounter a path sensitization conflict due to this decision, as shown in Figure 5.7. A fanout-cone traversal from the fault site yields that EmptyD is {j0}. We also find that prev_Dblocked_lev is the level at which the fault was excited (decision level 2), since no decision other than the current one blocked a propagation path. Antecedent tracing from j0 identifies that a0 is the reason for the current conflict. Next, a similar analysis for the a1 decision yields that {d1, a1} is the reason for the current conflict. Thus, CR = {d1}, which is the binary resolution of the two identified reasons. In future ATPG searches, we can backtrack whenever {d1, kD} D-dominates the latest cut-set, as there is no solution under that search space. Furthermore, current_Dblocked_lev is level 1, and hence the non-chronological backtrack level for the current conflict is 2 (max(2, 1)), indicating that the search can directly backtrack all the way to level 2.

7Binary resolution can be performed by taking the union of the two reasons and eliminating the current decision variable.
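Per the footnote, resolving the two reasons is a union that drops the current decision variable; combined with the blocked-level bookkeeping, it yields the backtrack level. A small sketch (literals as (gate, value) pairs; function names are illustrative):

```python
def resolve(reason0, reason1, decision_var):
    """Binary resolution: union of the two reasons with all literals on the
    current decision variable eliminated."""
    return {(g, v) for g, v in reason0 | reason1 if g != decision_var}

def backtrack_level(cr_levels, prev_dblocked_lev):
    """Non-chronological backtrack level: the deeper of the maximal decision
    level among the CR gates and prev_Dblocked_lev."""
    current_dblocked_lev = max(cr_levels, default=0)
    return max(current_dblocked_lev, prev_dblocked_lev)

# Running example: reasons {a0} and {d1, a1} resolve on variable a to CR = {d1};
# CR's gate d was assigned at level 1 and prev_Dblocked_lev is 2.
cr = resolve({("a", 0)}, {("d", 1), ("a", 1)}, "a")
```
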

Unpropagable Cut-Set Representation

We complement the conflict reason CR, which is a conjunction of literals, to form a multi-valued clause mvc. A corresponding hash entry mvh is also created for the mvc, using the gates with a D or D̄ in the unpropagable cut-set being learned. Using the same running example, with CR = {d1} and an unpropagable cut-set that can result from this conflict being {d1, kD}, the {mvc, mvh} pair would be {(d0DD̄), kD}. We maintain the clauses generated from our conflict analysis separately and use a one-literal watching scheme to identify when a clause becomes UNSAT. During BCP, when a clause mvc becomes UNSAT, we compute the set of gates having D or D̄ in the latest cut-set and check whether it is equivalent to the clause mvc's corresponding hash-table entry mvh. If they are equivalent (both conditions of D-dominance are now satisfied), we backtrack; otherwise, we proceed with the search. In this way, our approach can identify whether a learned conflict-yielding search state D-dominates the current search state and act accordingly.

Early backtrack

Consider the circuit in Figure 5.8. Let the target fault be a s@1. The decision tree in Figure

5.9(A) represents a partial search process. We learn that the cut-set {iD}, obtained after


[Figure omitted.]

Figure 5.8: Example Circuit 2

the second decision (a0), cannot be propagated. So we create a multi-valued clause mvc and a hash-table entry mvh to represent this unpropagable cut-set. Suppose the next fault is b s@1. The decision tree in Figure 5.9(B) represents a partial search process. After the fourth decision (a1), our method backtracks immediately, because the previously learned mvc becomes UNSAT and the set of gates having a D or D̄ in the latest cut-set (when mvc becomes UNSAT) matches the corresponding mvh. Existing methods [50, 51] cannot backtrack after the a1 decision, because the cut-set obtained after this decision is {jD} rather than {iD}. Hence, our method performs earlier backtracks than the existing search-state based learning methods.
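The early-backtrack trigger hinges on cheaply detecting that an mvc has become UNSAT. A simplified one-literal watching sketch (the actual solver watches multi-valued literals whose value sets are drawn from {0, 1, D, D̄}; the Boolean-valued literals below are a stand-in for illustration):

```python
class WatchedClause:
    """One-literal watching: track one literal that is not yet falsified.
    When an assignment falsifies the watched literal, scan for a replacement;
    if none exists, the clause is UNSAT and the hash-table check is triggered."""

    def __init__(self, literals):
        self.literals = literals   # (variable, satisfying_value) pairs
        self.watch = 0             # index of the currently watched literal

    def is_unsat(self, assignment):
        """True once every literal is falsified under `assignment`
        (an unassigned variable never falsifies its literal)."""
        n = len(self.literals)
        for i in range(n):
            j = (self.watch + i) % n
            var, val = self.literals[j]
            if assignment.get(var, val) == val:   # unassigned or satisfied
                self.watch = j
                return False
        return True

# An mvc that is falsified exactly when gate d is assigned 1 (i.e., CR = {d1}).
mvc = WatchedClause([("d", "0")])
```

The scan restarts at the previously watched position, so a clause with many literals is revisited only when its watched literal is actually falsified.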

Byproducts from conflict analysis

Consider an unpropagable cut-set {a0, bD, cD, d1}; it was proved in [51] that {a0, bD̄, cD̄, d1}, the same cut-set with complemented fault-effect values, is also unpropagable. Hence, at least two unpropagable cut-sets can be learned for each conflict. In our work, we attempt to generate even more unpropagable cut-sets from each conflict. Using the earlier k s@1 example (in Figure 5.5), it can be seen that kD and e1 in the cut-set do not occur in the CR. We refer to such gates as masked sensitized (MS) gates.


[Figure omitted: partial decision trees for (A) the a stuck-at-1 fault and (B) the b stuck-at-1 fault.]

Figure 5.9: Decision Tree

MS gates are not part of the CR because they were blocked by a controlling value at the side inputs of their successor gates. For instance, the MS gate k in our example is blocked at gate f for the a0 decision and at gate i for the a1 decision. Irrespective of the value on the MS gates, any cut-set (CC) that is D-dominated by CR (without considering the MS gates in CC for D-dominance) is always unpropagable, because CR is unpropagable and the MS gates will be blocked.

Suppose that in subsequent ATPG searches we encounter a LAC of {eD, d1, kD}. From the previous conflict analysis, if we had learned {(d0DD̄), (kD)} as the only {mvc, mvh} pair, then we could not conclude that the LAC is unpropagable: although the mvc becomes UNSAT, the corresponding hash entry does not match all the gates with a D or D̄ in the LAC. On the other hand, if we had also learned {(d0DD̄), (eD, kD)} as a {mvc, mvh} pair, then we would be able to conclude that the LAC is unpropagable. Note that a single mvc may have several corresponding mvh's.
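The check performed when an mvc becomes UNSAT is thus a set comparison between the fault-effect gates of the latest cut-set and the clause's stored mvh entries. A sketch (one mvc paired with several mvh gate sets, as noted above; the representation is illustrative):

```python
FAULT_EFFECTS = {"D", "Dbar"}

def should_backtrack(mvc_unsat, latest_cutset, mvh_entries):
    """Backtrack when the learned mvc is UNSAT and the set of gates carrying a
    fault effect in the latest cut-set equals one of the stored mvh entries
    (both conditions of D-dominance are then satisfied)."""
    if not mvc_unsat:
        return False
    d_gates = {g for g, v in latest_cutset.items() if v in FAULT_EFFECTS}
    return any(d_gates == mvh for mvh in mvh_entries)

# LAC = {eD, d1, kD}: with only the mvh {k}, no backtrack; adding {e, k} enables it.
lac = {"e": "D", "d": "1", "k": "D"}
```
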


Consider a conflict analysis that identified n MS gates. The number of unpropagable cut-sets that our approach can learn from this conflict is then 2 × 3^n: one 3^n term is for the original set of gates having a fault-effect in the CR, and the other is for the complemented fault-effect values on the same set of gates. The 3^n term arises from the fact that each of the n MS gates can take any of the values in {D, D̄, X} in the unpropagable cut-sets being learned.

In our implementation, for each conflict involving n MS gates, although our approach allows us to learn all 2 × 3^n unpropagable cut-sets, we generate only 2 × (2n + 4) of them to avoid exhaustive enumeration. The 2n term comes from the following: for each MS gate g, we generate one unpropagable cut-set that includes gD and another that includes gD̄. The constant term 4 is arbitrary, meaning that we include only 4 unpropagable cut-sets with all n MS gates having a fault-effect, rather than enumerating all 2^n combinations. For example, when n = 4, instead of enumerating all 2^4 = 16 combinations ((g1^D, g2^D, g3^D, g4^D); ...; (g1^D̄, g2^D̄, g3^D̄, g4^D̄)), we simply pick 4 of them.
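The 2 × (2n + 4) selection can be sketched as follows. The choice of which 4 full assignments to keep is arbitrary in the dissertation; the four picked here (all-D, all-D̄, and two alternating patterns) are illustrative assumptions:

```python
def learned_cutsets(ms_gates):
    """Generate the 2*(2n+4) unpropagable cut-sets kept per conflict: for each
    polarity of the CR fault effects (2), one cut-set per MS gate carrying D
    and one carrying Dbar (2n), plus 4 arbitrarily chosen assignments in which
    every MS gate carries a fault effect (instead of all 2**n of them)."""
    n = len(ms_gates)
    out = []
    for polarity in ("original", "complemented"):
        for g in ms_gates:                       # the 2n single-gate variants
            out.append((polarity, {g: "D"}))
            out.append((polarity, {g: "Dbar"}))
        picks = [["D"] * n, ["Dbar"] * n,        # 4 full assignments, chosen arbitrarily
                 ["D" if i % 2 == 0 else "Dbar" for i in range(n)],
                 ["Dbar" if i % 2 == 0 else "D" for i in range(n)]]
        for pick in picks:
            out.append((polarity, dict(zip(ms_gates, pick))))
    return out
```
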

5.2.5 Experimental Results

The proposed method was implemented in C++. The experiments were conducted on a 3.0 GHz Intel Xeon machine with 2 GB of RAM, running the Linux operating system. To isolate the efficiency of our technique, we did not incorporate other techniques, such as those in [49, 54, 55, 46, 56, 65, 64, 52], to mention a few. In addition, we used neither initial random-pattern fault simulation nor redundancy analysis to remove faults from the fault-list. However,


Table 5.5: Efficiency of Our Algorithm
(columns 3-7: conventional cut-set-based learning; columns 8-12: our multi-valued-based learning)

Ckt #Fault Det Red Abt Mat/Mis Time Det Red Abt Mat/Mis Time

c432 590 561 6 23 2515/41355 6 561 25 4 1368/285371 2

c499 1274 1122 0 152 447/185202 43 1272 0 2 2572/24091 3

c1355 1402 1318 0 84 590/150085 35 1400 0 2 2685/32877 3

c1908 1232 1210 2 20 1011/38237 6 1227 3 2 1055/9436 1

c2670 1888 1878 7 3 904/19075 3 1878 10 0 1760/17499 2

c3540 2632 2596 1 35 3125/67546 23 2631 1 0 83/3884 4

c880 974 974 0 0 210/7480 0.7 974 0 0 0/0 0.4

c5315 4180 4178 0 2 1088/38647 12 4180 0 0 214/13323 6

c6288 7518 7309 0 209 4341/452903 581 7491 0 27 2037/573572 248

c7552 5100 5053 16 31 2232/86211 38 5060 38 2 1207/141852 15

s9234 4782 4716 40 26 1763/80189 26 4723 59 0 1359/1966703 20

s9234.1 4780 4715 33 32 2338/83899 28 4727 53 0 1381/1897268 20

s15850 9662 9543 111 8 1251/101773 63 9548 114 0 138/4857660 74

s15850.1 9676 9562 111 3 1262/90800 60 9565 111 0 117/3542459 61

s13207 7952 7904 48 0 795/55594 34 7904 48 0 10/17853 19

s13207.1 7980 7929 47 4 2543/75833 50 7933 47 0 12/20203 19

s35932 26804 26804 0 0 7810/107625 345 26804 0 0 0/0 253

s38417 26454 26248 13 193 7552/501828 737 26413 37 4 4276/11541 173

Total 124880 123620 435 825 41777/2184282 2090.7 124291 546 43 20274/13415592 923.4

Det: detected; Red: redundant; Abt: aborted; Mat/Mis: number of hash-table matches/misses. Time is reported in seconds. A backtrack limit of 1000 was used for all circuits.

we used the conflict analysis proposed in [59] for justification conflicts. We compared our method against PODEM implemented with conventional cut-set-based learning [50] for the ISCAS85 and full-scan versions of the ISCAS89 benchmark circuits. The conventional cut-set-based learning method learns from both justification and path sensitization conflicts. The results are presented in Table 5.5. For both techniques, a backtrack limit of 1000 was used.

In Table 5.5, the first and second columns report the Circuit Under Test (CUT) and the total

number of faults in the fault-list. The next five columns report the number of detected faults,

number of redundant faults, number of aborted faults, number of hash-table matches/misses

and time taken for test pattern generation for the “conventional cut-set-based learning”


method. The last five columns report the same data for our approach. The last row shows the totals over all CUTs. The numbers of hash-table matches/misses are not directly comparable, for the following reasons:

1. In the “conventional cut-set-based learning”, a hash-table check is performed after

logic simulation of each primary input decision. However, in our algorithm a hash-

table check is performed only when one of the mvc clauses becomes UNSAT.

2. The conventional cut-set-based learning method might enter non-solution subspaces that our algorithm avoids (as discussed in Section 5.2.4), and thereby record more hash-table matches/misses.

In Table 5.5, boldfaced values indicate better results (higher detection, more redundant faults identified, fewer aborted faults, shorter execution times). In terms of the number of aborted faults, our algorithm consistently performs as well as or better than the conventional cut-set-based learning method. For example, on circuit s38417, the cut-set-based method aborted on 193 faults, while ours aborted on only 4, together with more than a 75% reduction in execution time.

We did not include success-driven learning in our method; in other words, we did not learn from propagable cut-sets. However, such learning was included in the "conventional cut-set-based learning" method against which we compared our results. That our method still outperformed the cut-set-based learning method despite this handicap underscores its strength. Finally, our learning technique complements, rather than competes with, the previously published ATPG techniques [53, 54, 46, 55, 49, 56, 65, 64, 52, 67].

5.2.6 Summary

We have presented a new multi-valued framework for ATPG. We also proposed an efficient learning method for path-sensitization conflicts, together with a representation of these conflict cut-sets that avoids revisiting such states during the ATPG search. Our method allows intermediate conflict cut-sets to be identified during BCP and is thus able to backtrack early, which is not possible with existing search-state based learning methods. Experimental results showed that our test generation approach achieves significantly higher fault coverage with fewer aborted faults and smaller run times.

5.3 Chapter Summary

Fault collapsing has a direct impact on test economics during manufacturing test. In this chapter, we proposed an efficient fault collapsing approach based on a novel extensibility relationship exhibited by fault-pairs. Since ATPG is inherently a multi-valued problem, we proposed a new multi-valued framework, unlike existing Boolean-domain approaches, to model and solve the test generation problem. Finally, we elegantly integrated into this multi-valued framework an efficient learning mechanism that learns from propagation conflicts during test generation. Our experimental analysis indicated the promise of our approach.


Chapter 6

Diagnostic Test Generation

6.1 Introduction

Diagnostic test generation is the process of generating diagnostic test patterns to distinguish a given set of fault-pairs in the CUT. In [5, 6], the authors proposed the first complete diagnostic test generation engine, based on the D-algorithm [4]. In [7], the authors proposed to use fault detection status to reduce the number of fault pairs that need to be considered during ADTG. Essentially, any pair involving at least one untestable fault can be immediately ignored, since such pairs are either distinguished (if only one fault of the pair is untestable) or can never be distinguished (if both faults are untestable). So the authors in [7] first performed detection-oriented test generation (usually referred to as ATPG) and then collected only the detected faults for consideration during ADTG. Further, the authors



Figure 6.1: ADTG Flow

proposed an interesting X-filling method for the ATPG patterns, with the objective of desensitizing as many paths in the circuit under test (CUT) as possible. The motivation was to hinder the propagation of fault effects to outputs other than the one where the target fault is detected. This, in turn, may imply greater distinguishability of the target fault from the other faults in the fault list. After this X-filling of the ATPG patterns, diagnostic simulation is performed to determine the fault-pairs that can be distinguished with these patterns. For the remaining unresolved pairs, dedicated ADTG is performed. Most of the complete ADTG algorithms published since then are based on a flow similar to the above procedure, illustrated in Figure 6.1.

In [8, 116, 10], the authors split the search space for ADTG based on the observation that, for distinguishability, at least one of the faults in the target pair must be detected. Further, the authors in [8] use the δ-path (similar to the X-path in ATPG) for identifying conflict spaces.

In [116], the search space is split into three parts based on the three distinct cases in which a fault pair can be distinguished. In [10], the authors use illegal-state information to speed up test generation for sequential circuits. A striking similarity between detection- and diagnosis-oriented test generation is that, in both cases, we try to excite and propagate a value difference (on an arbitrary signal) between the two machines under consideration. Although this notion is implicitly observable in [8, 116], the authors in [10] explicitly modeled ADTG as a 2-phase detection-oriented test generation problem, allowing advances in ATPG tools to be leveraged for ADTG. Later, in [118], the authors further refined the model proposed in [10] to use a single-pass ATPG to distinguish a given fault pair.

Aside from the development of the complete ADTG algorithms discussed above, there have been several low-cost preprocessing steps [11, 12, 13, 117] to reduce the number of fault-pairs to be considered by ADTG. However, to reduce the candidate fault-set size for silicon diagnosis, we need a complete and aggressive ADTG engine, because most of the fault pairs in the final candidate set will usually be either equivalent or hard to distinguish.

Our main contributions [127,128] in this chapter are:

• We propose a novel learning framework based on search state extensibility. This framework allows us to quickly identify non-trivial redundant search states during the ATPG search process and to perform appropriate search-space reduction.

• We show that the proposed learning framework can be incorporated into ADTG, since the problem of distinguishing a given fault-pair can be modeled as an ATPG problem.

• We propose a new and efficient ADTG engine with incremental learning, based on the flow shown in Figure 6.1. Information learned during ATPG can be incrementally utilized during ADTG. Further, the learned information can also be utilized across faults and across fault-pairs during ATPG and ADTG, respectively.

• Finally, we propose an X-filling method based on an output-deviation measure, with the objective of enhancing the diagnostic resolution of detection-oriented test patterns.

Outline: The next section presents prior work related to our contributions in this chapter. The subsequent section describes the proposed incremental learning framework. Section 6.4 explains the output-deviation based X-filling method. Finally, Sections 6.5 and 6.6 present the results of our experimental analysis and conclude the chapter, respectively.

6.2 Related Work

6.2.1 E-Frontier based learning

In [50], the authors proposed to learn the E-Frontiers obtained after each decision during the search process. The E-frontier is defined as the frontier of assigned nodes F in the CUT (obtained after complete logic simulation of the current decision) such that for each gate g in the CUT that has an X-path to at least one primary output, g ∈ F. The learned E-Frontiers were classified into success and conflict frontiers, based on whether they lead to a success or a conflict state. In future search, whenever a learned frontier becomes equivalent to the current E-frontier, they conclude that the current intermediate search state is redundant. In the sequel, we refer to this method as the EST method. Our method differs from the EST method

in several major ways:

1. Let F be the E-Frontier just before node N in the decision tree (DT). Note that the reason R returned by antecedent tracing (AT) for a decision at N will be a subset of F. Since our learning is based on R (refer to Section 6.3), the search states learned by our method will be a subset of F. Thus, we use a more succinct notation than the EST method to represent visited search states.

2. Extensibility is a generalized notion of equivalence; if a learned state is equivalent to

the current E-Frontier then it is also extensible to the current E-Frontier. However,

the reverse need not be true.

3. We check for extensibility after each implication obtained from the current decision.

However, the EST method checks for equivalence only after the complete logic simulation of the current decision.

From the above differences, it can be observed that our method enables us to determine non-trivial redundant search states that may not be identified by the EST method. Further, we use clauses to represent the learned search states. A one-literal watching scheme on the learned clause database is used to determine whether a learned state becomes extensible to the current search state; this is similar to the efficient two-literal watching scheme proposed in [18]. Thus, we do not have to compute the current E-Frontier and compare it with the learned search states to determine whether it is redundant. However, we do need to perform the false-positive or false-negative check (refer to Section 6.3) when extensibility occurs. Note that


for typical circuits with bounded fanin, the number of interconnections between gates in the CUT is linearly related to the number of gates in the CUT. Since a false-positive or false-negative check visits each interconnection in the CUT at most once, these checks are linear in time complexity [46].
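The contrast between equivalence and extensibility can be made concrete: a learned state is extensible to the current one when all of its assignments hold there, so equivalence is the special case of mutual containment. A sketch (states as gate-to-value maps; illustrative only):

```python
def extensible(learned, current):
    """A learned search state is extensible to the current one if every
    assignment in the learned state also holds in the current state."""
    return all(current.get(g) == v for g, v in learned.items())

def equivalent(s1, s2):
    """Equivalence is mutual containment, i.e., identical states."""
    return s1 == s2

learned = {"a": "1", "b": "0"}
current = {"a": "1", "b": "0", "c": "1"}  # strict superset of the learned state
```

Here `extensible(learned, current)` holds although the two states are not equivalent, which is why an extensibility check can fire where an equivalence check (as in the EST method) cannot.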

6.2.2 Output Deviation

The output deviation metric was originally proposed in [22] to enrich the test sets obtained for classical fault models. Later, it was used in [26] to quickly assess test-set quality and thereby help in pattern selection from large test sets. In [23, 24], the authors utilized this metric to obtain test patterns with high defect coverage. Recently, in [25], the metric was utilized to evaluate LFSR reseeding for test data compression. We briefly discuss the concepts of output deviation below; a complete overview is presented in [22, 26].

Reliability: The reliability R of an n-input gate G is defined as a vector of 2^n components:

R = {r^(00...00), r^(00...01), r^(00...10), ..., r^(11...11)}        (6.1)

Each component denotes the probability that G's output is correct for the input combination

shown in the exponent. For instance, r^(00) represents the probability that G's output is

correct for the input combination 00 (assuming n = 2).


Probability: pG,v represents the probability of gate G being at logic value v, where v ∈

{0, 1}. Note that pG,0 + pG,1 = 1. Further, signal correlations due to reconvergent fanouts are

ignored. The following equations can be used to obtain the probability values of a 2-input

gate G (with inputs f and g).

p_{G,c⊕i} = p_{f,c} p_{g,c} r^{(cc)} + p_{f,c} p_{g,c̄} r^{(cc̄)} + p_{f,c̄} p_{g,c} r^{(c̄c)} + p_{f,c̄} p_{g,c̄} (1 − r^{(c̄c̄)})        (6.2)

p_{G,c̄⊕i} = p_{f,c} p_{g,c} (1 − r^{(cc)}) + p_{f,c} p_{g,c̄} (1 − r^{(cc̄)}) + p_{f,c̄} p_{g,c} (1 − r^{(c̄c)}) + p_{f,c̄} p_{g,c̄} r^{(c̄c̄)}        (6.3)

where c and i represent the controlling and inversion values of G, and c̄ denotes the complement of c. The above equations

intuitively model the probability of obtaining value v at the output of gate G. It is based

on the probability of values at inputs f and g together with the reliability measure that

the output at G is correct for the considered input combination. Note that the probability

values for complex gates can be similarly defined.

Deviation: The deviation δG,t at gate G for an input pattern t is the probability value p_{G,v̄},

where v is the correct output at G for the pattern t (and v̄ its complement). Intuitively, δG,t represents a measure

of the likelihood that the output at G is erroneous for the pattern t.
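The three definitions above (reliability, signal probability, deviation) combine into a short sketch. This is illustrative Python, not the dissertation's implementation; the AND gate, fixed input probabilities, and uniform reliability of 0.9 are assumed purely for the example, and reconvergent-fanout correlations are ignored as in the model.

```python
from itertools import product

def gate_probs(fn, input_probs, reliability):
    """Signal probabilities at a gate's output, per the output-deviation model.

    fn          -- Boolean function of the gate, e.g. lambda a, b: a & b
    input_probs -- list of (p0, p1) pairs, one per input
    reliability -- dict mapping an input tuple to r^(combo), the probability
                   that the gate's output is CORRECT for that combination
    """
    p = [0.0, 0.0]                       # p[0] = p_{G,0}, p[1] = p_{G,1}
    for combo in product((0, 1), repeat=len(input_probs)):
        prob_combo = 1.0
        for bit, (p0_in, p1_in) in zip(combo, input_probs):
            prob_combo *= p1_in if bit else p0_in
        good = fn(*combo)                # fault-free output for this combination
        r = reliability[combo]
        p[good] += prob_combo * r        # correct output with probability r
        p[1 - good] += prob_combo * (1.0 - r)
    return tuple(p)

# Example: a 2-input AND whose inputs are fixed at 1 (so the correct output
# is 1); every input combination is assumed correct with probability 0.9.
reliab = {c: 0.9 for c in product((0, 1), repeat=2)}
p0, p1 = gate_probs(lambda a, b: a & b, [(0.0, 1.0), (0.0, 1.0)], reliab)
deviation = p0        # likelihood of an erroneous output when the correct value is 1
```

Here the deviation comes out to 0.1: the only reachable input combination is 11, and the gate is wrong there with probability 1 − 0.9.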


6.3 Our Proposed Learning Framework

Since diagnostic test generation employs an ATPG engine, we first explain our learning

framework with respect to the base detection-oriented ATPG. Extension of this framework

to ADTG is straightforward and will be discussed as well.

6.3.1 Success Driven Learning (SDL) based on Search State Extensibility

Consider the example shown in Figure 2.6. In Section 2.3.2, we showed how AT can be used

to determine a set R for the success state obtained for the fault G14 sa-1. After performing

AT for this success state, we record the current decision G34 = 0 in a set T. We know that

R ∪ T ⇒ {G203 = D} (obtained at decision level 7). The pair {R, T} is recorded. The intuition behind this learning is that, in

future search, whenever R becomes extensible to the current search state (when targeting

other faults), its corresponding T may propagate the fault effect in R to a primary output.

After the {R, T} pair is learned, we backtrack to the highest decision level at which any

of the assignments in R is made. In the running example, since R = {G202 = D} (made at level 5), we backtrack

to level 5 from level 7. Now, R represents an intermediate state, and AT can be used to

determine the antecedents for this intermediate state. The new R returned by AT for this

intermediate state will be {G201 = D}. Again, note that this new R together with its current

decision (G22 = 0 at level 5) implies the intermediate state {G202 = D}. Next, we update T with the current

decision; so T = {G34 = 0, G22 = 0}. Now, we continue to learn the pair {R, T} with the


new R and the updated T . We iterate this process of determining antecedents and learning

the {R, T} pair until a fault effect is present in the learned R.
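The iterated {R, T} learning described above can be sketched as follows. This is a hedged, simplified sketch: `antecedent_trace` and `contains_fault_effect` are hypothetical stand-ins for the AT procedure and the stopping test, and states are plain tuples of (gate, value) assignments.

```python
def success_driven_learning(decisions, antecedent_trace, contains_fault_effect):
    """Iteratively learn {R, T} pairs from a success state.

    decisions          -- decisions indexed by level, e.g. {7: ('G34', 0)}
    antecedent_trace   -- maps a state to (R, level): its antecedent
                          assignments and the highest level at which an
                          assignment in R was made (the backtrack target)
    contains_fault_effect(R) -- True once the learned R itself carries
                          the fault effect, which stops the iteration
    """
    learned, T = [], []
    state = 'SUCCESS'                        # start from the success state
    level = max(decisions)
    while True:
        R, back_level = antecedent_trace(state)
        T = [decisions[level]] + T           # fold the current decision into T
        learned.append((tuple(R), tuple(T))) # record the {R, T} pair
        if contains_fault_effect(R):
            break                            # stop: R carries the fault effect
        state, level = tuple(R), back_level  # backtrack to back_level, iterate
    return learned
```

Run on the example of Figure 2.6, this would first record ({G202 = D}, {G34 = 0}) and then, after backtracking to level 5, ({G201 = D}, {G22 = 0, G34 = 0}).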

Now, consider the partial decision tree for the fault G19 sa-0 shown in Figure 2.6. Note

that R = {G201 = D} and T = {G34 = 0, G22 = 0} is a previously learned pair. It can

be seen that after the third decision, the learned R = {G201 = D} is implied. Further,

its corresponding T propagates the fault effect in R to the primary output G203. Thus, our

learning helps to identify intermediate states R that have the potential to lead to a success

state when the assignments in the corresponding T are made. However, false positive cases

may occur, as explained with an example below.

Consider the example circuit and the partial PODEM-based [15] decision trees shown in

Figure 6.2. When the fault e stuck-at 1 is detected at output g (Figure 6.2B), we learn the

following two {R, T} pairs: {f = D}, {h = 1} and {e = D}, {h = 1, c = 0}. Now, when

targeting the fault d sa-1 (Figure 6.2C), the previously learned R = {e = D} is implied after

the second decision. However, it can be seen that its corresponding T = {h = 1, c = 0}

does not propagate the fault effect in R to the primary output. We call such cases false

positives.

A close observation reveals that the current target fault site (d sa-1) is in the path of

implications of the signal values in T = {h = 1, c = 0}. Further, the current target fault site

reconverges with R = {e = D} at gate f. Thus, we do not get the required propagating

value at gate f for T to propagate the fault effect in R. Though such occurrences are rare, we

need to identify and avoid these false positives. We note that methods such as the EST that


Figure 6.2: False Positive in SDL

use the complete evaluation frontier would avoid such false positives, but their conservative

nature would miss many of the search state extensibility opportunities. In other words, the

benefits gained from search state extensibility outweigh the cost of the rare false

positives. To handle these false positives, we propose to perform a quick fault simulation,

augmenting T with the current sequence of decisions (that implied R). If the target fault

is detected, then we proceed with the next fault. Otherwise, we proceed as usual from the

current sequence of decisions to determine a test. Further, it may be the case that T and

the current set of decisions (that implied R) are inconsistent. In such cases, we override

the assignment in the current decision set with that in T for the false-positive check. The

motivation is that we already know that T propagated the fault effect in R to an output.

Further, the hope is that the overwritten assignment in the current decision set may not be

required to imply R. Since our learning is based on the extensibility of R with the current


search state, we call our learning search state extensibility based learning.
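The false-positive check described above amounts to a quick fault simulation of T merged over the current decisions, with T winning on any clash. A sketch, with `fault_simulate` as a hypothetical simulator interface (not the dissertation's actual routine):

```python
def false_positive_check(current_decisions, T, fault, fault_simulate):
    """Quick check that the stored T still propagates the fault effect.

    Merge T with the decisions that implied the learned R; on any clash,
    the assignment in T wins, since T is known to have propagated the fault
    effect in R to an output before.  Returns True if the merged pattern
    detects the target fault (so the match is not a false positive).
    """
    pattern = dict(current_decisions)   # decisions that implied the learned R
    pattern.update(T)                   # T overrides any inconsistent values
    return fault_simulate(pattern, fault)
```

If the check succeeds, the search can move on to the next fault; otherwise it continues as usual from the current decisions.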

6.3.2 Conflict Driven Learning (CDL) based on Search State Extensibility

Now, we explain our learning when a conflict state occurs during the search process. Similar

to the success case, we use AT to determine the R that, along with the current decision, implied

the conflict. Consider a decision node N in the DT. Suppose the decision N = 0 implies a

conflict in the CUT (i.e., all paths from the fault-site are blocked); thus, the fault-site cannot

be sensitized to an output. This conflict state is represented by the E-Frontier F obtained

after the current decision. Note that all assignments in F are Boolean-valued; otherwise,

the D-frontier would contain a potential gate that has an X-path to an output, and our assumption

that N = 0 implies a conflict state would be wrong. Now, we perform AT to determine R

for the Boolean-valued assignments in F. Let us call this R as R0. Similarly, if a conflict

occurs for the N = 1 decision as well, then we perform AT to get the set R1. Now, we compute

and learn BR = R0 ∪ R1. Also, we non-chronologically backtrack to the highest decision

level (l) at which any gate assignment in BR is made. The interpretation is that, in

future search (whether for the same target fault or a different fault), whenever BR becomes

extensible to the current search state, potentially no fault effect in BR can be propagated to

a primary output. Further, after backtracking to level l, we perform AT for the assignments

in BR and record the returned set R as R0 or R1 based on the decision at level l. If both


Figure 6.3: Conflict Driven Learning

decisions were made at level l, then we learn a new BR = R0 ∪ R1 at level l and iterate the

aforementioned process. Otherwise, we decide on the remaining value for the input decided

at l and proceed with the search process. Finally, we stop our learning process whenever

the backtrack level l is less than the level at which the fault was excited. This allows us to learn

pervasive BR (i.e., the learned information can be used to determine redundant search states

in the future search process when targeting a different fault).
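A single CDL step can be sketched as follows, with `level_of` as a hypothetical map from an assignment to the decision level at which it was made; the real engine obtains R0 and R1 from AT.

```python
def conflict_driven_learning(R0, R1, level_of):
    """One CDL step: learn BR = R0 U R1 and pick the backtrack level.

    R0 / R1 are the antecedent sets returned by AT for the conflicts under
    the N = 0 and N = 1 decisions.  We non-chronologically backtrack to the
    highest level at which any assignment in BR was made.
    """
    BR = set(R0) | set(R1)                        # the learned conflict state
    back_level = max(level_of[a] for a in BR)     # non-chronological backtrack
    return BR, back_level

# Figure 6.3B: R0 = {d = 0 (level 3)}, R1 = {e = D (level 2), f = D (level 3)}
levels = {('d', 0): 3, ('e', 'D'): 2, ('f', 'D'): 3}
BR, l = conflict_driven_learning([('d', 0)], [('e', 'D'), ('f', 'D')], levels)
```

For the Figure 6.3B values this learns BR = {e = D, f = D, d = 0} and backtracks to level 3, after which the iteration described above repeats.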

Consider Figure 6.3. For the fault b sa-0 (Figure 6.3B), both Boolean decisions on

primary input a imply a conflict state. For these two decisions, AT will return R0 = {d = 0} (made at level 3)

and R1 = {e = D (level 2), f = D (level 3)}. Thus, the learned BR = {e = D, f = D, d = 0}. Now, consider the

scenario for the fault e sa-0 shown in Figure 6.3C. After the second decision, the BR (learned

for b sa-0) is extensible to the current search state. Note that none of the fault effects in the

current D-frontier can be propagated to the output l. Thus, our learning aids in identifying


potential intermediate states that may always lead to a conflict state.

Similar to the rare false positives in SDL, two types of false negative cases (also rarely

occurring) may occur in CDL.

1. Type 0: At the instant of learning BR, we know that any assignment on a subset of

primary inputs always leads the intermediate state BR to a conflict state. For instance,

in the above example (Figure 6.3B), we know that both decisions on input a yield a

conflict state when learning BR = {e = D, f = D, d = 0}. Without loss of generality,

let us consider one such assignment A on the subset of primary inputs. Suppose in a

future search, this BR becomes extensible to the current search state. Further, let the

current target fault lie in the path of implications of A. Then, for the assignment A, we

may not get the required implications that will block all the fault effects in BR. This

is analogous to not getting the propagating values in the false positive case for SDL (refer to

Section 6.3.1). In such cases, the current search state (to which BR is extensible)

may not lead to a conflict state for the assignment A. Thus, it may be wrong to

conclude that the current intermediate state will never lead to a success state - a false

negative case. Note that since the current target fault lies in the path of implications

of A, if we do not get the required implications to block the fault effects in BR for

the assignment A, then there must exist a gate in the current D-frontier that can be

reached from the fault site without traversing any gates in BR. This information

can be used to perform the false negative check, as described in the sequel.


Figure 6.4: False Negative in CDL

2. Type 1: Whenever BR becomes extensible to the current search state, there might

be a fault effect other than those in BR that may propagate to a primary output.

For example, consider Figure 6.4. For the fault d sa-1, both Boolean decisions

on input b yield a conflict state. Here, we learn that BR = {d = D}. Next, when we

target the fault b sa-1 (Figure 6.4C), the previously learned BR = {d = D} becomes

extensible to the search state obtained after the second decision. However, the fault

effect at the fault site (b) can still be propagated through the path b → f → g → h

(with input c = 1).

Thus, even though BR is extensible to the current search state, we cannot immediately

conclude that the current search state will never lead to a success state. To detect the

above-mentioned types of false negative cases, whenever extensibility occurs, we traverse the

fanout cone of the target fault-site. If all fault effects at the inputs of gates in the current

D-frontier can be reached only after traversing through the assignments in BR, then it can


be safely concluded that the current intermediate search state will always lead to a conflict

state. Thus, we can conclude that the current space is a conflict space and can backtrack

from it. For instance, consider the fault e sa-0 example (Figure 6.3C) discussed above.

When BR = {e = D, f = D, d = 0} becomes extensible to the search state obtained after

the second decision d = 0, none of the fault effects at the inputs of the current D-frontier (gates

g and k) can be reached without traversing through the gates in BR. Thus, our method will

conclude that the current search state is a conflict state, which is indeed true since both

decisions on input a will lead to a conflict state. Next, consider the fault b sa-1 example

(Figure 6.4C) discussed above. When BR = {d = D} becomes extensible to the search

state implied after the second decision, the fault effects at the inputs of gates in the current

D-frontier (gates e and f¹) can be reached from the fault site without traversing the gates

included in BR. Thus, our method will not conclude that the current search state is a conflict

state. This is indeed true here, since the fault effect can be observed at the output h (with

input c = 1). For our learning, we need to perform AT to determine the reason for the

conflicts identified by CDL. For this purpose, we determine the antecedents of BR, since

BR represents such conflict states.
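The reachability test behind this false-negative check can be sketched as a breadth-first traversal of the fault site's fanout cone that refuses to pass through gates assigned in BR. The adjacency-map circuit representation and function name here are hypothetical, not the tool's data structures.

```python
from collections import deque

def false_negative_check(fault_site, fanout, d_frontier_inputs, BR_gates):
    """Return True when the learned BR may NOT apply (possible false negative).

    Traverse the fanout cone of the target fault site (BFS), never expanding
    a gate that appears in BR.  If some fault effect feeding the current
    D-frontier is still reachable around BR, the current search state might
    yet succeed, so it must not be pruned.  Otherwise, every propagation
    path is blocked by BR and the state can safely be declared a conflict.
    """
    seen, queue = {fault_site}, deque([fault_site])
    while queue:
        g = queue.popleft()
        if g in d_frontier_inputs:
            return True                 # reachable around BR: possible success
        for succ in fanout.get(g, ()):
            if succ not in BR_gates and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return False                        # all paths blocked by BR: prune safely
```

On the Figure 6.4C scenario, the path b → f around BR = {d = D} is found, so no pruning occurs; on the Figure 6.3C scenario, every path is cut by BR and the state is pruned. The traversal visits each interconnection at most once, matching the linear-time claim made earlier.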

Finally, we describe two enhancements to the conflict learning. The first enhancement is for

the false-negative check. When we traverse the fault-site fanout cone, we may hit a gate g

in the D-frontier without traversing through any assignment in BR. However, we can still

conclude that the current space is a conflict space if there does not exist an X-path from g.

¹ d = D is the first implication at level 2. Since we check for extensibility after each implication, the gates e

and f will not yet be implied when the extensibility occurs.


Note that, for soundness, we need to append BR with the assignments that block the X-path

before performing AT for the assignments in BR. The second enhancement improves the

quality of the learned BR. Let F be the E-frontier just before node N in the DT. Now, we

define MG = F \ (R0 ∪ R1). The insight is that irrespective of the values of the gates in MG, no

fault effect in R0 ∪ R1 can be propagated to an output. So the new BR = R0 ∪ R1 ∪ MG,

where gates in MG appear in X-literal (can assume any value) form. This enhancement

helps in the false-negative check, since certain fault effects at the inputs of gates in the

D-frontier may be reachable through gates in MG. Again, we note that the benefits gained

from search state extensibility outweigh the cost of the rare false negatives.
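The second enhancement fits in a few lines; the (gate, value) pair representation and the 'X' marker for X-literals are assumptions of this sketch, not the tool's encoding.

```python
def enhance_BR(E_frontier, R0, R1):
    """Fold E-frontier gates outside R0 U R1 into BR as X-literals.

    MG = F \\ (R0 U R1): no value on these gates can propagate a fault
    effect in R0 U R1, so they join BR with a don't-care ('X') value.
    """
    core = set(R0) | set(R1)
    MG = set(E_frontier) - core              # MG = F \ (R0 U R1)
    # gates in MG enter BR as X-literals: their value is irrelevant
    return core | {(gate, 'X') for gate, _ in MG}
```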

6.3.3 Learning Framework for ADTG

In Section 2.2.3, we discussed the model that we use for generating a diagnostic pattern for

a given fault pair (f0, f1). Note that the diagnostic test generation problem was converted

into the select line S sa-0 (or, equivalently, S sa-1) problem. Thus, the learning framework

that we proposed for the ATPG engine can be used for diagnostic pattern generation

too. However, the learned search states, whether from SDL or CDL, must not include the

additional gates used to modify the original CUT's netlist. This enables the learned search

states to be used across faults or fault-pairs during test generation. Further, we would like

to point out that our learning framework can be easily adopted by any ADTG engine, since

it simply records the visited search states succinctly and tries to prune them in future search.


Figure 6.5: Proposed ADTG Flow

6.3.4 Incremental Learning

Overall, we follow the same flow for ADTG as proposed in [7]. However, we incorporate the

learning framework discussed above in the ATPG engine. The information learned during

detection-oriented test generation of each fault can be used for subsequent faults. Similarly,

the search states that are learned during diagnostic-oriented test generation of each fault

pair can be used for subsequent fault pairs. Further, the search states that were learned

during detection-oriented test generation can be incrementally used during ADTG. This

again follows from the modeling of diagnostic test generation as detection test generation.

This flow is depicted in Figure 6.5.


6.4 Output Deviation based X-filling

The number of specified primary inputs in a test pattern returned by an ATPG tool is

often much smaller than the total number of primary inputs. Traditionally, the unspecified

primary inputs in a returned test pattern have been filled (X-filling) with different objectives,

such as reducing test time and peak power during testing [27, 28], enhancing test data compression

[29], and boosting the performance of fault-dropping based ATPG tools through random

X-filling [19]. In this section, we propose an output deviation based X-filling method with

the objective of improving the diagnostic resolution of detection-oriented test patterns.

For a test cube t returned by the ATPG tool, we randomly fill it to obtain te such that each

primary input in te is specified. Now, for each primary input I, we assign pI,v = 1 if the value

of I in te is v; otherwise pI,v = 0. Next, for each internal gate G and primary output O in

the CUT, the probability values pG/O,v, v ∈ {0, 1}, are computed using Equation 6.2. From

these probability values, the output deviation δO,te for each output O in the CUT can be

computed. Now, we define the output deviation obtained for te as ODte = Σ_O δO,te, summed over every output O.

Ideally, we would like to compute ODte for each te that can be obtained from t and pick the

te that has the maximum ODte value. However, the number of te for a given t is exponential

in the number of unspecified primary inputs in t. Thus, in our experiments, we randomly

choose up to 8 te patterns for a given t and pick the te that has the maximum ODte value.

We discard the remaining te patterns. A maximum ODte value intuitively implies that te is a

higher quality detection vector, which may help to detect more faults in the CUT. Further,


we consider the output deviation value at each primary output in selecting te. This in

turn encourages detecting each fault at as many outputs in the CUT as possible. Intuitively,

this may aid in distinguishing fault-pairs, since the faults in truly distinguishable fault-pairs

do not always share the same output response. Thus, by trying to detect each fault at as

many outputs as possible, our intuition is that the faults in an arbitrary distinguishable

fault-pair may yield different responses on at least one output.
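The fill-and-select procedure above can be sketched as follows; `compute_OD` stands in for the output-deviation computation of Section 6.2.2 (it is an assumed callback, not the dissertation's routine), and the 8-trial budget matches the experiments.

```python
import random

def deviation_based_xfill(test_cube, compute_OD, trials=8, seed=0):
    """Complete a test cube by output-deviation-guided random X-filling.

    test_cube maps each primary input to 0, 1, or 'X'.  Randomly complete
    the cube `trials` times, score each completion te by OD_te (the sum of
    output deviations over all primary outputs), and keep the best one.
    """
    rng = random.Random(seed)
    best, best_od = None, float('-inf')
    for _ in range(trials):
        te = {pi: (v if v != 'X' else rng.randint(0, 1))
              for pi, v in test_cube.items()}
        od = compute_OD(te)        # OD_te = sum of delta_{O,te} over outputs
        if od > best_od:
            best, best_od = te, od
    return best
```

Only the winning completion is retained; the remaining trial patterns are discarded, exactly as in the flow described above.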

6.5 Experimental Results

Table 6.1: Original Flow vs. Proposed Flow

Ckt      | #Flts | Det   | Red | Abt | Unres./E+06 | Dist     | Equal | Abt  | #CM  | #SM/E+06 | T(s)   | ATOM/ATOMCOM

s13207   | 7952  | 7903  | 48  | 1   | 1.47        | 1465941  | 418   | 2    | -    | -        | 8194   | 1.63
         | 7952  | 7903  | 48  | 1   | 1.63        | 1632629  | 418   | 1    | 785  | 1.3      | 5033   |

s13207.1 | 7980  | 7932  | 47  | 1   | 1.03        | 1031479  | 437   | 9    | -    | -        | 5762   | 2.00
         | 7980  | 7932  | 47  | 1   | 1.03        | 1031480  | 437   | 8    | 82   | 0.854    | 2876   |

s15850   | 9662  | 9548  | 114 | 0   | 1.28        | 1275493  | 557   | 0    | -    | -        | 7485   | 1.94
         | 9662  | 9548  | 114 | 0   | 1.28        | 1275493  | 557   | 0    | 216  | 1        | 3855   |

s15850.1 | 9676  | 9565  | 111 | 0   | 1.42        | 1416573  | 557   | 0    | -    | -        | 8983   | 1.99
         | 9676  | 9565  | 111 | 0   | 1.42        | 1416573  | 557   | 0    | 227  | 1.2      | 4518   |

s5378    | 3680  | 3663  | 17  | 0   | 0.09        | 93010    | 169   | 0    | -    | -        | 277    | 1.33
         | 3680  | 3663  | 17  | 0   | 0.1         | 104087   | 169   | 0    | 0    | 0.09     | 208    |

s9234    | 4782  | 4729  | 53  | 0   | 0.67        | 669499   | 248   | 52   | -    | -        | 1760   | 1.60
         | 4782  | 4729  | 53  | 0   | 0.59        | 593273   | 248   | 43   | 811  | 0.549    | 1098   |

s38417   | 26454 | 26413 | 41  | 0   | 10.5        | 10452822 | 779   | 440  | -    | -        | 131889 | 1.65
         | 26454 | 26413 | 41  | 0   | 11.7        | 11729057 | 779   | 414  | 6713 | 11       | 79855  |

s35932   | 26804 | 26804 | 0   | 0   | 0.03        | 27966    | 2742  | 0    | -    | -        | 4113   | 1.13
         | 26804 | 26804 | 0   | 0   | 0.03        | 27966    | 2742  | 0    | 0    | 0.01     | 3650   |

b14_1    | 11136 | 11133 | 1   | 2   | 2.67        | 2666730  | 334   | 2177 | -    | -        | 46797  | 0.80
         | 11136 | 11133 | 1   | 2   | 2.67        | 2667704  | 334   | 1203 | 6    | 2.4      | 58584  |


6.5.1 Incremental Learning Framework

We implemented the proposed learning framework based on search state extensibility on

top of a publicly available state-of-the-art ATPG tool, ATOM [19]. In the sequel, we refer

to this engine as ATOMCOM. We compared ATOM based on the original flow (Figure 6.1)

against ATOMCOM based on the proposed flow (Figure 6.5). The primary objective of this

experiment is to illustrate the effectiveness of the proposed flow and learning framework in

solving difficult fault pairs (faults). So, during ATPG, we targeted each single stuck-at fault in

the collapsed fault list individually to perform a thorough evaluation. Then we performed

diagnostic fault simulation to determine the fault pairs (from the set of detected faults)

that are distinguishable by the test set (Tdet) returned by ATPG. Our diagnostic simulator

is based on the algorithm proposed in [20], which is one of the best simulators reported to

date. This simulator returns a set of equivalence classes containing faults, where no

fault pair within an equivalence class is distinguishable by Tdet. To avoid long diagnostic

simulation times, we used only the first 200 patterns from Tdet if its cardinality is greater than

200. After diagnostic simulation, we performed ADTG for each fault pair in each equivalence

class. We performed our experiments on full-scan AND/INVERTER graph [70] versions of

the ISCAS89/ITC99 benchmark circuits. For all our experiments, we set the maximum backtrack

limit to 500.

The results for large benchmark circuits for ATOM and ATOMCOM are presented in Table

6.1. The first two columns report the circuit name and number of faults in the collapsed

fault list. For each circuit, the table shows two rows of data; the first row is for the original


ATOM and the second row is for ATOMCOM . Columns 3-5 report the number of detected,

redundant and aborted faults for the original ATPG run. Column 6 reports the number of

unresolved pairs after the diagnostic simulation of Tdet. Columns 7-9 show the number of

distinguished, equivalent and aborted fault pairs for the ADTG run. Columns 10-11 indicate

the number of conflict/success state matches that occurred for ATOMCOM across both the ATPG

and ADTG runs. Column 12 reports the time taken (in seconds) for both the ATPG and ADTG

runs. The final column shows the ratio of run time for ATOM over ATOMCOM.

From the last column in Table 6.1, it can be seen that ATOMCOM outperforms ATOM

in time taken in almost all runs. For instance, for s13207.1, we obtain a 2× speedup

over ATOM. Note that the number of fault-pairs targeted by the two methods may differ,

since the Tdet used by them for diagnostic simulation may be different. For s13207 and

s38417, we achieve speedups of 1.63× and 1.65×, respectively, even though we target more fault pairs than ATOM.

The reported speedup is achievable due to the huge number of search state matches (after

the false-positive and false-negative checks) that occur for ATOMCOM. For b14_1, our method

suffers a 0.2× slowdown over ATOM. This is due to the overhead of our learning framework.

However, we were able to solve 974 more fault-pairs than ATOM within the allotted

backtrack limit of 500. This indicates the potential of the proposed learning framework to

prune redundant search spaces during the test generation flow. For s35932, because most

fault pairs were easily resolvable under the backtrack limit of 500, the speedup we achieved

with the proposed learning was smaller; it was a speedup nevertheless. Thus, for almost all

circuits, we achieve a speedup, resolve more of the unresolved fault pairs, or both.


For both ATOM and ATOMCOM, we did not perform fault simulation during the ATPG or

ADTG runs, since our objective was to show the effectiveness of the proposed flow and learning

framework. Also, in our implementation, we did not incorporate the multiple input assignment

and look-back techniques in either ATOM or ATOMCOM. Note that this does not

invalidate our results, since the look-back mechanism tries to generate a test for a (detectable)

fault only after the maximum backtrack limit is reached for the regular search process. Since

the results reported for the two methods were obtained before exhausting the maximum

backtrack limit, the conclusions made above remain intact.

6.5.2 Output Deviation based X-filling

We implemented the proposed X-filling method in the ATOM ATPG tool [19]. For each

detection-oriented test cube generated, we used the proposed output deviation based X-filling

to generate the completely specified test vector. We compared our X-filling method

against the random X-filling method used in [19]. For the output deviation computation, we

decomposed complex gates, if any, into sub-circuits consisting of simple NOT, 2-input (N)AND,

and (N)OR gates. We considered the ISCAS85 and full-scan versions of the ITC99 circuits for

this experiment. The results are summarized in Table 6.2. The first column indicates the

CUT. For each CUT, the table shows two rows of data: the first row is for random

X-filling and the second row is for our output deviation based X-filling. The second and

third columns show the number of faults in the collapsed fault-list and the total number of

fault-pairs considered for diagnostic simulation, respectively. The fourth column shows the


number of pairs that cannot be resolved by the fully specified ATPG-generated patterns;

these pairs require a dedicated ADTG engine. The last column indicates the number of additional

fault-pairs that can be resolved by the ATPG patterns filled by our output deviation

based X-filling method.

Table 6.2: Output Deviation based X-fill vs. Random X-fill

Ckt.  | #Flts | Pairs    | Unresolved | Difference

c6288 | 7744  | 2.97E+07 | 1083       | 45
      | 7744  | 2.97E+07 | 1038       |

c1355 | 1574  | 1.23E+06 | 779        | 14
      | 1574  | 1.23E+06 | 765        |

b14   | 22802 | 2.56E+08 | 2850       | 87
      | 22802 | 2.56E+08 | 2763       |

b15_1 | 28999 | 4.15E+08 | 4481       | 34
      | 28999 | 4.15E+08 | 4447       |

b20_1 | 33255 | 5.47E+08 | 3831       | 39
      | 33255 | 5.47E+08 | 3792       |

b20   | 45459 | 1.02E+09 | 6097       | 197
      | 45459 | 1.02E+09 | 5900       |

b21_1 | 32948 | 5.37E+08 | 3846       | 68
      | 32948 | 5.37E+08 | 3778       |

b21   | 46154 | 1.05E+09 | 6221       | 36
      | 46154 | 1.05E+09 | 6185       |

b22_1 | 49945 | 1.23E+09 | 5614       | 39
      | 49945 | 1.23E+09 | 5575       |

For the circuit b20, we were able to resolve 197 more fault-pairs than the random X-fill

method. On average, the proposed X-fill method resolves 62 more fault pairs than

the random X-fill method in the results reported in Table 6.2. In the considered benchmarks,

random X-filling also helps in resolving almost 99% of the total fault pairs. This is because


the number of hard-to-distinguish fault-pairs is usually small in comparison with the total

number of fault-pairs in the CUT. However, note that for each additional fault-pair, the

ADTG engine may have to search a space that is, in the worst case, exponential in the number of primary

inputs. Thus, even a marginal increase in the number of resolved pairs may

substantially reduce the burden on the subsequent ADTG engine, which is highly desirable.

Overall, the results obtained indicate that the output deviation based X-filling technique has

the potential to distinguish more fault-pairs than the random X-filling method.

6.6 Chapter Summary

In conclusion, we have proposed a learning framework based on a new concept of search

state extensibility. Further, we proposed an incremental use of the information learned

during detection-oriented test generation in ADTG. This carry-over of learning is quite

interesting and further motivates us to investigate useful information that can be learned

during detection-oriented test generation for use in ADTG. Experimental results indicate

that our incremental learning framework achieves up to 2× speed-up and/or resolves more

initially unresolved fault pairs for each tested circuit. Finally, we proposed an X-filling

method based on output deviation measure to enhance the number of pairs that can be

resolved by detection-oriented test patterns. For the b20 circuit, our X-filling method resolved 197 more pairs than the random X-fill method.

In recent years, the semiconductor industry has shown increasing interest not only in conventional test patterns but also in diagnostic test patterns. The main reason for such interest is that these patterns can potentially help improve yield dramatically.

An intelligent test generation engine that can offer both detection and diagnostic ability

can dramatically reduce the turn-around time and associated cost in silicon diagnosis. We

believe that the proposed investigation can serve as a promising stepping stone for the design

of such an intelligent engine.


Chapter 7

Conclusion

Today, the core solving process of most Design Verification and Test Generation problems is based on the branch-and-bound procedure. For practical purposes, aggressive learning mechanisms are necessary to quickly solve such resource-intensive, inherently hard

problems. In this dissertation, we proposed an interesting extensibility relation among search

states visited during the branch-and-bound search. We synergistically combined this novel

relation with the antecedent tracing technique to design a powerful learning framework for procedures based on branch-and-bound search. Further, we fine-tuned this learning framework for distinct applications in Design Verification and Test Generation, such as preimage computation, abstraction-refinement-based model checking, and diagnostic test generation.

Initially, we introduced our search state extensibility based learning framework with respect

to preimage computation, a core step of many Model Checking techniques. We proved


that our learning framework for preimage computation is sound and complete. We proposed

a probability-based heuristic to guide antecedent tracing in our learning framework. Further,

we showed that our framework can be easily used to compute over-approximate preimage

space, which may be of interest for certain verification problems like pre-silicon debugging.
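To illustrate the preimage computation itself, the following brute-force Python sketch enumerates the existential preimage of a target state set for an invented two-bit transition function; an all-solutions SAT/ATPG engine enumerates the same set one solution cube at a time rather than by explicit enumeration.

```python
from itertools import product

def next_state(s1, s0, x):
    # (s1', s0') for present state (s1, s0) under input x; an invented
    # transition function used purely for illustration
    return (s0 ^ x, s1 & x)

def preimage(target):
    """All states s for which SOME input x drives the machine into
    `target`: the existential preimage of the target set."""
    pre = set()
    for s1, s0, x in product((0, 1), repeat=3):
        if next_state(s1, s0, x) in target:
            pre.add((s1, s0))
    return pre
```

For example, the preimage of the target set {(1, 1)} here is the single state (1, 0), reached only under input x = 1.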

Next, we presented our image extraction technique with bounded model checking to compute

an upper approximation of the image space. We illustrated that our extensibility-based learning framework elegantly fits into this approach with subtle fine-tuning for computing an over-approximate image space. Further, we proved that the computed over-approximate spaces are indeed interpolants, and we designed a counter-example guided abstraction refinement

framework for Model Checking using our novel image extraction technique.
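The interpolant property of such an over-approximation can be checked operationally: a set I is an interpolant for A = S(s) ∧ T(s, s′) and B = BAD(s′) when I contains every successor of S and excludes every bad state. The brute-force check below uses an invented two-bit system, not the image extraction engine itself.

```python
def nxt(s1, s0):
    # deterministic next-state function of an invented 2-bit system
    return (s0, s1 ^ s0)

S = {(0, 1), (1, 0)}      # current reachable states
BAD = {(0, 0)}            # error states

def image(states):
    """Exact image: the set of successors of the given states."""
    return {nxt(*s) for s in states}

def is_interpolant(I):
    """A implies I (I over-approximates the image of S), and I AND B is
    unsatisfiable (I excludes every bad state)."""
    return image(S) <= I and not (I & BAD)
```

Any set I sandwiched between the exact image and the complement of BAD passes this check, which is exactly the freedom an over-approximate image extraction exploits.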

Next, we turned to fault collapsing, an integral pre-processing step in today's commercial test generation packages, since it has a direct impact on test economics. We proposed a low-cost fault collapsing

engine based on our extensibility relationship between search states. Unique requirements

to test a fault were used for this purpose; we also proposed an efficient storage technique

for these unique requirements to significantly reduce the memory footprint. Further, we

also proved a lower bound on the size of a collapsed fault list, which is of theoretical interest. Later, we proposed a multi-valued ATPG framework to solve the test generation

problem directly in its multi-valued domain. We also integrated a powerful learning technique for sensitization conflicts into this multi-valued framework.
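The functional fault equivalence that underlies collapsing can be sketched as follows: two stuck-at faults are equivalent when their faulty output functions agree on every input, so only one representative per equivalence class needs a test. The brute-force check below uses an invented two-input NAND example rather than our extensibility-based engine.

```python
from itertools import product

LINES = ("a", "b", "m", "y")   # m is the internal AND output, y = NOT m

def evaluate(a, b, fault=None):
    """Simulate y = NOT(a AND b) with an optional stuck-at fault,
    given as a (line, stuck_value) pair."""
    v = {"a": a, "b": b}
    if fault and fault[0] in v:
        v[fault[0]] = fault[1]
    v["m"] = v["a"] & v["b"]
    if fault and fault[0] == "m":
        v["m"] = fault[1]
    v["y"] = 1 - v["m"]
    if fault and fault[0] == "y":
        v["y"] = fault[1]
    return v["y"]

def collapse():
    """Group stuck-at faults whose faulty output functions are identical
    on all inputs; one representative per group survives collapsing."""
    groups = {}
    for line in LINES:
        for sv in (0, 1):
            fault = (line, sv)
            sig = tuple(evaluate(a, b, fault)
                        for a, b in product((0, 1), repeat=2))
            groups.setdefault(sig, []).append(fault)
    return list(groups.values())
```

Here the eight stuck-at faults on the four lines collapse into four equivalence classes.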

Finally, during silicon diagnosis, diagnostic test patterns are required for distinguishing fault

pairs in the candidate list. Such pairs are usually hard to distinguish, or hard to prove equivalent. We therefore customized our search state extensibility based learning framework

for diagnostic test generation. We also proposed an incremental learning framework to utilize

the information learned during detection-oriented test generation in diagnostic-oriented test

generation. Lastly, we proposed an output-deviation-based probabilistic metric to X-fill

generated test patterns with the objective of enhancing the diagnostic resolution of such

patterns.
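A diagnostic (distinguishing) test for a fault pair is simply an input on which the two faulty machines respond differently; if no such input exists, the pair is equivalent. The following brute-force Python sketch, on an invented three-input circuit, makes this precise.

```python
from itertools import product

def faulty_eval(a, b, c, fault):
    """Simulate y = (a AND b) OR c with a single injected stuck-at fault,
    given as a (line, stuck_value) pair."""
    v = {"a": a, "b": b, "c": c}
    if fault[0] in v:
        v[fault[0]] = fault[1]
    v["m"] = v["a"] & v["b"]
    if fault[0] == "m":
        v["m"] = fault[1]
    v["y"] = v["m"] | v["c"]
    if fault[0] == "y":
        v["y"] = fault[1]
    return v["y"]

def distinguishing_pattern(f1, f2):
    """A diagnostic test for the pair (f1, f2): an input on which the two
    faulty circuits disagree, or None if the faults are equivalent."""
    for pattern in product((0, 1), repeat=3):
        if faulty_eval(*pattern, f1) != faulty_eval(*pattern, f2):
            return pattern
    return None
```

For instance, the pair (a stuck-at-0, b stuck-at-0) is equivalent here, while (a stuck-at-0, c stuck-at-1) is distinguished by the all-zero pattern.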

Overall, we proposed a generic search state extensibility based learning framework for branch-and-bound search procedures, which was then customized for each target application. In general, our framework aided in identifying non-trivial redundant search states, enabling significant pruning of redundant search spaces. We explained the applicability of this framework to Model Checking and Test Generation problems. In the future, our framework can, with suitable fine-tuning, be applied to other problems, such as design debugging, that are currently solved using branch-and-bound search procedures.


Bibliography

[1] E. J. Marinissen and N. Nicolici, “Editorial: Silicon Debug and Diagnosis,” IET Com-

puters and Digital Techniques, vol. 1, no. 6, pp. 659-660, Nov. 2007

[2] L-T. Wang et al., “VLSI Test Principles and Architectures,” Morgan Kaufmann Pub-

lishers, 2006

[3] M. Bushnell and V. D. Agrawal, “Essentials of Electronic Testing for Digital, Memory

& Mixed-Signal VLSI Circuits,” Boston, MA: Kluwer Academic Publishers, 2000

[4] J. P. Roth, “Diagnosis of automata failures: a calculus and a method,” IBM J. R&D,

vol. 10, no. 4, 1966, pp. 278-291

[5] J. P. Roth et al., “Programmed Algorithms to Compute Tests To Detect and Distin-

guish Between Failures,” IEEE Transactions on Electronic Computers, vol. EC-16, no.

5, Oct. 1967, pp. 567-580


[6] J. Savir and J. P. Roth, “Testing For, and Distinguishing Between Failures,” 12th Fault

Tolerant Computing Symposium, June 1982, pp. 165-172

[7] P. Camurati et al., “Diagnosis Oriented Test Pattern Generation,” European Design

Automation Conference, 1990

[8] P. Camurati et al., “A diagnostic test pattern generation algorithm,” International

Test Conference, 1990, pp. 52-58

[9] T. Gruning et al., “DIATEST: A Fast Diagnostic Test Pattern Generator for Combi-

national Circuits,” International Conference on Computer-Aided Design, 1991

[10] I. Hartanto et al., “Diagnostic Test Pattern Generation for Sequential Circuits,” VLSI

Test Symposium, 1997

[11] S. M. Reddy et al., “Diagnostic Test Generation for Synchronous Sequential Circuits

based on Test Elimination,” International Test Conference, 1998

[12] I. Pomeranz et al., “Diagnostic Test Generation for Combinational Circuits based on

Test Elimination,” Asian Test Symposium, 1998

[13] S. M. Reddy et al., “z-Diagnosis: Framework for Diagnostic Fault Simulation and

Test Generation Utilizing Subsets of Outputs,” IEEE Transactions on Computer-Aided

Design of Integrated Circuits and Systems, 2007

[14] I. Hartanto et al., “Diagnostic Fault Equivalence Identification using Redundancy In-

formation and Structural Analysis,” International Test Conference, 1996


[15] P. Goel, “An Implicit Enumeration Algorithm to Generate Tests for Combinational

Logic Circuits”, IEEE Transactions on Computers, vol. C-30, no. 3, March 1981

[16] J. Silva and K. A. Sakallah, “Dynamic Search-Space Pruning Techniques in Path Sen-

sitization”, Proceedings of Design Automation Conference, June 1994, pp. 705-711.

[17] J. P. Marques-Silva and K. A. Sakallah, “GRASP: A search algorithm for propositional

SAT”, IEEE Transactions on Computers, May 1999

[18] M. Moskewicz et al., “CHAFF: Engineering an efficient SAT solver,” Design Automa-

tion Conference 2001

[19] I. Hamzaoglu et al., “New Techniques for Deterministic Test Pattern Generation,”

VLSI Test Symposium 1998

[20] S. Venkataraman et al., “Rapid Diagnostic Fault Simulation of Stuck-at Faults in

Sequential Circuits using Compact Lists,” Design Automation Conference 1995

[21] A. Kuehlmann et al., “Robust Boolean Reasoning for Equivalence Checking and Func-

tional Property Verification”, IEEE Transactions on Computer-Aided Design of Inte-

grated Circuits and Systems, vol. 21, no. 12, 2002

[22] Z. Wang, K. Chakrabarty and M. Goessel, “Test Set Enrichment using a Probabilistic

Fault Model and the Theory of Output Deviations,” Design, Automation and Test In

Europe, 2006, pp 1-6


[23] Z. Wang and K. Chakrabarty, “An Efficient Test Pattern Selection Method for Improv-

ing Defect Coverage with Reduced Test Data Volume and Test Application Time,”

Asian Test Symposium, 2006, pp 333-338

[24] X. Kavousianos and K. Chakrabarty, “Generation of compact test sets with high defect

coverage,” Design, Automation and Test In Europe, 2009, pp 1130-1135

[25] Z. Wang, H. Fang, K. Chakrabarty and M. Bienek, “Deviation-Based LFSR Reseed-

ing for Test-Data Compression,” IEEE Transactions on Computer-Aided Design of

Integrated Circuits and Systems, vol. 28, no. 2, Feb. 2009, pp 259-271

[26] Z. Wang and K. Chakrabarty, “Test-Quality/Cost Optimization Using Output-

Deviation-Based Reordering of Test Patterns,” IEEE Transactions on Computer-Aided

Design of Integrated Circuits and Systems, vol. 27, no. 2, Feb. 2008, pp 352-365

[27] C-W. Tzeng and S-Y. Huang, “QC-Fill: An X-Fill method for quick-and-cool scan

test,” Design, Automation and Test In Europe, 2009, pp 1142-1147

[28] N. Badereddine et al., “Minimizing peak power consumption during scan testing: test

pattern modification with X filling heuristics,” Design and Test of Integrated Systems

in Nanoscale Technology, 2006, pp 359-364

[29] J. Li, X. Liu, Y. Zhang, Y. Hu, X. Li and Q. Xu, “On capture power-aware test

data compression for scan-based testing,” International Conference on Computer-Aided

Design, 2008, pp 67-72


[30] R. Rudell, “Dynamic Variable Ordering for ordered binary decision diagrams”, Inter-

national Conference on Computer-Aided Design, Nov. 1994, pp. 42-47.

[31] S. Panda, F. Somenzi and B. F. Plessier, “Symmetric detection and dynamic variable

ordering of Decision Diagrams”, International Conference on Computer-Aided Design

1994, pp. 628-631.

[32] C. Meinel and C. Stangier, “Speeding up Symbolic Model Checking by accelerating

dynamic variable ordering”, GLSVLSI, 2000, pp. 39-42.

[33] A. Narayan, J. Jain, M. Fujita, and A. Sangiovanni-Vincentelli, “Partitioned ROBDDs:

A compact, canonical and efficiently manipulable representation for boolean functions”,

International Conference on Computer-Aided Design, Nov. 1996, pp. 547-554.

[34] I. H. Moon, H. H. Kukula, K. Ravi and F. Somenzi, “To Split or to conjoin: The

question in image computation”, Design Automation Conference, 2000, pp. 23-28.

[35] M. Chandrasekar and M. S. Hsiao, “A Novel Learning Framework for State Space

Exploration based on Search State Extensibility Relation,” submitted to VLSI Design,

2011

[36] M. Chandrasekar and M. S. Hsiao, “Search State Compatibility and Learning for State

Space Exploration,” SRC TECHCON, Austin, 2009

[37] M. Chandrasekar and M. S. Hsiao, “Minimum Search State based Learning: An Effi-

cient Preimage Computation Technique,” SRC TECHCON, Austin, 2008


[38] S. Sheng and M. S. Hsiao, “Efficient pre-image computation using a novel success-

driven ATPG”, Design, Automation and Test In Europe, Sep. 2003, pp. 822-827.

[39] L. Zhang, C. Madigan, M. Moskewicz and S. Malik, “Efficient Conflict Driven Learning

in Boolean SAT”, International Conference on Computer-Aided Design, 2001, pp. 279-

285.

[40] P. A. Abdulla, P. Bjesse and N. Een, “Symbolic Reachability Analysis based on SAT

Solvers”, TACAS, 2000, pp. 411-425.

[41] P. Williams, A. Biere, E. M. Clarke and A. Gupta, “Combining Decision Diagrams

and SAT Procedures for Efficient Symbolic Model Checking”, Proc. CAV, 2000, pp.

124-138.

[42] K. McMillan, “Applying SAT Methods in Unbounded Model Checking”, CAV, 2002,

pp. 250-264.

[43] H. Kang and I. C. Park, “SAT-based Unbounded Symbolic Model Checking”, Design

Automation Conference 2003, pp. 840-843.

[44] M. Ganai, A. Gupta and P. Ashar, “Efficient SAT-based Unbounded Symbolic Model

Checking using Circuit Cofactoring”, International Conference on Computer-Aided

Design, 2004, pp. 510-517.

[45] K. Chandrasekar and M. S. Hsiao, “Implicit Search-Space Aware Cofactor Expansion:

A Novel Preimage Computation Technique”, ICCD, Oct. 2006.


[46] J. Silva et al., “Dynamic Search-Space Pruning Techniques in Path Sensitization”, Proc.

DAC, June 1994, pp. 705-711.

[47] K. Chandrasekar and M. S. Hsiao, “State Set Management for SAT based Preimage

Computation”, Proc. ICCD, 2005, pp. 585-590.

[48] M. Abramovici, M. A. Breuer and A. D. Friedman, “Digital Systems Testing and

Testable Design”, IEEE Press, NJ, 1990.

[49] M. H. Schulz, E. Trischler and T. M. Sarfert, “SOCRATES: A Highly Efficient Auto-

matic Test Pattern Generation System”, IEEE TCAD, Vol.7, No.1, 1988, pp. 126-137.

[50] J. Giraldi and M. L. Bushnell, “EST: The New Frontier in ATPG”, Design Automation

Conference, 1991, pp. 667-672.

[51] X. Chen and M. L. Bushnell, “Generalization of search state equivalence for automatic

test pattern generation”, Proc. VLSI Design, Jan. 1995, pp. 99-103.

[52] J. Giraldi and M. L. Bushnell. “Search State Equivalence for Redundancy Identification

and Test Generation”, Proc. ITC, Oct. 1991, pp. 184-193

[53] T. Larrabee, “Test pattern generation using Boolean Satisfiability”, IEEE TCAD, vol.

11, no. 1, Jan. 1992, pp. 4-15

[54] P. Stephan, R. K. Brayton, A. L. Sangiovanni-Vincentelli, “Combinational test gener-

ation using satisfiability”, IEEE TCAD, vol. 15, no. 9, Sep. 1996, pp. 1167-1176


[55] E. Gizdarski, H. Fujiwara, “SPIRIT: A Highly Robust Combinational Test Generation

Algorithm”, IEEE TCAD, vol. 21, no. 12, Dec. 2002, pp. 1446-1458

[56] S. T. Chakradhar, V. D. Agrawal, and S. G. Rothweiler “A Transitive Closure Algo-

rithm for Test Generation”, IEEE TCAD, vol. 12, no. 7, July 1993, pp. 1015-1028

[57] L. Guerra e Silva, L. M. Silveira, and J. Marques-Silva, “Algorithms for solving Boolean

Satisfiability in Combinational Circuits”, Proc. DATE, March 1999, pp. 526-530

[58] M. Chandrasekar and M. S. Hsiao, “Efficient Search Space Pruning for multi-valued

SAT based ATPG,” IEEE European Test Symposium, 2007

[59] C. Liu, A. Kuehlmann, M. W. Moskewicz, “CAMA: A Multi-Valued Satisfiability

Solver”, Proc. ICCAD, Nov. 2003, pp. 326-333

[60] R. S. Wei and A. Sangiovanni-Vincentelli, “PROTEUS: A Logic Verification System

for Combinational Circuits”, ITC, Oct. 1986, pp. 350-359

[61] K. Chandrasekar and M. S. Hsiao, “Decision Selection and Learning for an ’All Solu-

tions ATPG Engine’”, Proc. ITC, Oct. 2004, pp. 607-616

[62] ABC: “http://www.eecs.berkeley.edu/~alanmi/abc/”

[63] “VIS Home Page. http://embedded.eecs.berkeley.edu/Respep/Research/vis/”

[64] P. Tafertshofer, A. Ganz and K. J. Antreich, “IGRAINE-an Implication GRaph-bAsed

engINE for fast implication, justification, and propagation”, IEEE TCAD, vol. 19, no.

8, Aug. 2000, pp. 907-927


[65] T. Kirkland and M. R. Mercer, “A Topological Search Algorithm for ATPG”, Proc.

DAC, June 1987, pp. 502-508

[66] T. Fujino and H. Fujiwara, “An efficient test generation algorithm based on search

state dominance”, Proc. FTCS, July 1992, pp. 246-253

[67] W. Kunz and D. K. Pradhan, “Recursive Learning: A New Implication Technique for

Efficient Solutions to CAD Problems - Test, Verification, and Optimization”, IEEE

TCAD, vol. 13, no. 9, Sep. 1994, pp. 1143-1158

[68] B. Li, M. S. Hsiao and S. Sheng, “A Novel SAT All-Solutions Solver for Efficient

Preimage Computation”, Design, Automation and Test in Europe, Feb. 2004, pp. 272-

277

[69] K. Chandrasekar and M. S. Hsiao, “ATPG-based pre-image computation: efficient

search space pruning with ZBDD”, HLDVT, 2003, pp. 117-122

[70] A. Kuehlmann et al., “Robust Boolean Reasoning for Equivalence Checking and Func-

tional Property Verification”, IEEE TCAD, vol. 21, no. 12, 2002

[71] A. Gupta et al., “SAT-based image computation with applications in reachability anal-

ysis,” FMCAD, 2000

[72] M. Sipser, “Introduction to the Theory of Computation,” Second Edition, February 2005


[73] H. Jin et al, “Strong conflict analysis for propositional SAT,” Design, Automation and

Test in Europe, 2006

[74] K. McMillan, “Symbolic Model Checking,” Kluwer Academic, 1993

[75] A. Biere et al,“Symbolic Model Checking using SAT Instead of BDDs,” Design Au-

tomation Conference, 1999

[76] F. Brglez and H. Fujiwara, “A Neutral Netlist of 10 Combinational Benchmark Circuits and a Target Translator in Fortran,” International Symposium on Circuits and Systems,

1985, pp. 695-698

[77] F. Brglez, D. Bryan, and K. Kozminski, “Combinational Profiles of Sequential Bench-

mark Circuits”, International Symposium on Circuits and Systems, May 1989, pp.

1929-1934.

[78] F. Corno, M. Sonza Reorda and G. Squillero, “RT-Level ITC’99 Benchmarks and First

ATPG Results,” IEEE Design and Test of Computers, July-August 2000, pp. 44-53.

[79] “Temporal Induction Prover Home Page. http://een.se/niklas/Tip/”

[80] M. Chandrasekar and M. S. Hsiao, “Tight Image Extraction for Unbounded Model

Checking,” submitted to DATE, 2011

[81] E. Clarke, O. Grumberg, S. Jha, Y. Lu and H. Veith, “Counter-Example Guided

Abstraction Refinement,” Lecture Notes in Computer Science, vol. 1855, 2000, pp.

154-169


[82] E. Clarke, O. Grumberg, S. Jha, Y. Lu and H. Veith, “Counter-Example Guided

Abstraction Refinement for Symbolic Model Checking,” Journal of the ACM, vol. 50,

no. 5, September 2003, pp. 752-794

[83] R. Chadha and M. Viswanathan, “A Counterexample Guided Abstraction-Refinement

Framework for Markov Decision Processes,” to appear in ACM Transactions on Computational Logic

[84] T. E. Hart, K. Ku, A. Gurfinkel, M. Chechik and D. Lie, “Augmenting

Counterexample-Guided Abstraction Refinement with Proof Templates,” IEEE/ACM

International Conference on Automated Software Engineering, 2008, pp. 387-390

[85] P. Bjesse and J. Kukula, “Using Counter Example Guided Abstraction Refinement to

Find Complex Bugs,” Design, Automation and Test in Europe, 2004

[86] W. Craig, “Linear reasoning: A new form of the Herbrand-Gentzen theorem,” Journal

of Symbolic Logic, vol. 22, no. 3, pp. 250-268, 1957

[87] K. L. McMillan, “Interpolation and SAT-based Model Checking,” International Con-

ference on Computer-Aided Verification, 2003, pp. 1-13

[88] K. L. McMillan, “Applications of Craig Interpolants in Model Checking,” TACAS,

2005, pp. 1-12


[89] M. Sheeran, S. Singh and G. Stalmarck, “Checking Safety Properties Using Induction and a SAT-Solver,” In Proceedings of Formal Methods in Computer-Aided Design, vol. 1954 of

LNCS, pp. 108-125, 2000

[90] G. Cabodi, S. Nocco, M. Murciano and S. Quer, “Stepping Forward With Interpolants in Unbounded Model Checking,” In Proceedings of International Conference

on Computer-Aided Design, Nov. 2006

[91] G. Cabodi, P. Camurati and M. Murciano, “Automated Abstraction by Incremen-

tal Refinement in Interpolant-based Model Checking,” In Proceedings of International

Conference on Computer-Aided Design, 2008

[92] B. Keng and A. Veneris, “Scaling VLSI Design Debugging with Interpolation,” Formal

Method in Computer-Aided Design, 2009

[93] A. Smith, A. Veneris, M. F. Ali and A. Viglas, “Fault diagnosis and logic debugging

using Boolean Satisfiability,” IEEE Transactions on Computer Aided Design, vol. 24,

no. 10, pp. 1606-1621, 2005

[94] M. H. Liffiton and K. A. Sakallah, “Algorithms for computing minimal unsatisfiable

subsets of constraints,” Journal of Automated Reasoning, vol. 40, no. 1, pp. 1-33, 2008

[95] N. Een and N. Sorensson, “An extensible SAT-solver,” In International Conference on

Theory and Applications of Satisfiability Testing, pp. 502-518 2003

[96] http://fmv.jku.at/picosat/


[97] M. Davis, G. Logemann and D. Loveland, “A Machine Program for Theorem-proving,”

Communications of the ACM, vol. 5, no. 7, pp. 394-397, 1962

[98] M. K. Ganai and A. Gupta “SAT-Based Scalable Formal Verification Solutions,”

Springer US Publisher, 2007

[99] E. Clarke, O. Grumberg and D. Peled, “Model Checking,” MIT Press, 2000

[100] E. Clarke, S. Jha, Y. Lu and D. Wang, “Exploiting symmetry in temporal logic model

checking,” Formal Methods in System Design, vol. 9, no. 1-2, pp. 41-76, 1996

[101] K. Jensen, “Condensed state spaces for symmetrical colored Petri nets,” Formal Methods in System Design, vol. 9, no. 1-2, pp. 7-40, 1996

[102] E. Emerson and A. Sistla “Symmetry and Model Checking,” Formal Methods in System

Design, vol. 9, no. 1-2, pp. 105-130, 1996

[103] C. Ip and D. Dill, “Better Verification through Symmetry,” Formal Methods in System

Design, vol. 9, no. 1-2, pp. 41-75, 1996

[104] E. Emerson and R. Trefler, “From asymmetry to full symmetry: New techniques for symmetry reduction in Model Checking,” Correct Hardware Design and Verification Methods, Lecture Notes in Computer Science, vol. 1703, Springer-Verlag, pp. 142-156,

1999


[105] D. Peled, “All from one, one from all: On Model Checking using Representatives,”

In Proceedings of the 5th International Conference on Computer Aided Verification,

Lecture Notes in Computer Science, vol. 697, Springer-Verlag, pp. 409-423, 1993

[106] P. Godefroid, D. Peled and M. Staskauskas, “Using partial order methods in the formal

verification of industrial concurrent programs,” In Proceedings of the International

Symposium on Software Testing and Analysis, pp. 261-269, 1996

[107] P. Cousot and R. Cousot, “Abstract interpretation: A unified lattice model for static

analysis of programs by construction or approximation of fix-points,” In Proceedings

of the ACM Symposium of Programming Language, pp. 238-252, 1977

[108] D. E. Long, “Model Checking, abstraction and compositional verification,” PhD Dis-

sertation, School of Computer Science, Carnegie Mellon University, CMU-CS-93-178,

1993

[109] E. M. Clarke, O. Grumberg and D. E. Long, “Model Checking and Abstraction,” ACM

Transactions on Programming Languages and Systems, vol. 16, no. 5, pp. 1512-1542,

September 1994

[110] S. Graf and H. Saidi, “Construction of abstract state graphs with PVS,” In Proceedings

of Computer Aided Verification, 1997

[111] E. M. Clarke, O. Grumberg, S. Jha, Y. Lu and H. Veith, “Progress on the state

explosion problem in Model Checking,” In Informatics, 10 Years Back, 10 Years Ahead,

Lecture Notes in Computer Science, vol. 2000, Springer-Verlag, pp. 176-194, 2001


[112] A. Biere, M. R. Prasad and A. Gupta, “A Survey of Recent Advances in SAT-based

Verification,” In International Journal on Software Tools for Technology Transfer, vol.

7, no. 2, pp. 156-173, April 2005

[113] K. Chandrasekar, “Search-space aware learning techniques for unbounded model check-

ing and path delay testing,” PhD dissertation, Bradley Department of Electrical and

Computer Engineering, Virginia Tech, April 2006

[114] E. J. McCluskey and F. W. Clegg, “Fault Equivalence in Combinational Logic Networks,”

IEEE Trans. on Comp., vol. C-20, no. 11, 1971

[115] A. Lioy, “Looking for Functional Fault Equivalence,” ITC, Oct. 1991

[116] T. Gruning et al., “DIATEST: A Fast Diagnostic Test Pattern Generator for Combinational Circuits,” International Conference on Computer-Aided Design, 1991

[117] I. Hartanto et al., “Diagnostic Fault Equivalence Identification using Redundancy Information and Structural Analysis,” International Test Conference, 1996

[118] A. Veneris, R. Chang, M. S. Abadir and M. Amiri, “Fault Equivalence and Diagnostic Test

Generation Using ATPG,” IEEE International Symposium on Circuits and Systems,

May 2004, pp. 221-224

[119] A. Lioy, “Advanced Fault Collapsing,” IEEE Design and Test of Computers, vol. 9, no. 1, 1992, pp. 64-71


[120] A. V. S. S. Prasad et al., “A New Algorithm for Global Fault Collapsing into Equivalence and Dominance Sets,” International Test Conference, 2002

[121] V. D. Agrawal et al., “Fault Collapsing via Functional Dominance,” International Test

Conference, 2003, pp. 274-280

[122] R. K. K. R. Sandireddy and V. D. Agrawal, “Diagnostic and detection fault collapsing

for multiple output circuits,” DATE, 2005

[123] M. Chandrasekar and M. S. Hsiao, “Fault Collapsing using a Novel Extensibility Re-

lation,” submitted to VLSI Design, 2011

[124] V. C. Vimjam and M. S. Hsiao, “Efficient Fault Collapsing via Generalized Dominance

Relations,” VLSI Test Symposium, 2006

[125] A. S. Doshi and V. D. Agrawal, “Independence Fault Collapsing,” VLSI Design and

Test Symposium, August 2005, pp. 357-364

[126] A. S. Doshi, “Independence Fault Collapsing and Concurrent Test Generation,” Thesis

(MS), Department of ECE, Auburn University, Auburn, Alabama, USA, 2005

[127] M. Chandrasekar and M. S. Hsiao, “Diagnostic Test Generation for Silicon Diagnosis

with an Incremental Learning Framework based on Search State Compatibility,” IEEE

High Level Design Validation and Test Workshop, 2009

[128] M. Chandrasekar, N. P. Rahagude and M. S. Hsiao, “Search state compatibility based

incremental learning framework and output deviation based X-filling for diagnostic


test generation,” Journal of Electronic Testing: Theory and Applications, vol. 26, no.

2, pp. 165-176, April 2010

[129] W-T. Cheng, “Split Circuit Model for Test Generation,” In Proceedings of 25th IEEE

Design Automation Conference, pp. 96-101, 1988

[130] K. To, “Fault Folding for Irredundant and Redundant Combinational Circuits,” IEEE

Transactions on Computers, vol. C-22, no. 11, Nov. 1973, pp. 1008-1015

[131] Y. W. Ng and A. Avizienis, “Comments on Fault Folding for Irredundant and Re-

dundant Combinational Circuits,” IEEE Transactions on Computers, vol. C-25, no. 2,

1976, p. 207

[132] M. Abramovici, D. T. Miller and R. K. Roy, “Dynamic Redundancy Identification in

Automatic Test Generation,” IEEE Transactions on CAD, vol. 11, no. 3, Mar. 1992,

pp. 404-407

[133] A. Lioy, “On the Equivalence of Fanout-Point Faults,” IEEE Transactions on Com-

puters, vol. 42, no. 3, Mar. 1993, pp. 268-271

[134] M. Nadjarbashi, Z. Navabi and M. R. Movahedin, “Line Oriented Structural Equiva-

lence Fault Collapsing,” IEEE Workshop on Model and Test, 2000

[135] R. Hahn, R. Krieger and B. Becker, “A Hierarchical Approach to Fault Collapsing,”

European Design and Test Conference, 1994, pp. 171-176


[136] M. A. Iyer and M. Abramovici, “FIRE: A Fault-Independent Combinational Redundancy Identification Algorithm,” IEEE Transactions on VLSI Systems, vol. 4, no. 2, June 1996, pp. 295-301

[137] Q. Peng, M. Abramovici and J. Savir, “MUST: MUltiple STem analysis for identifying sequentially untestable faults,” International Test Conference, 2000, pp. 839-846

[138] B. Krishnamurthy and S. B. Akers, “On the Complexity of Estimating size of a Test

Set,” IEEE Trans. on Comp., vol. C-33, Aug. 1984, pp. 750-753

[139] H. Fujiwara and S. Toida, “The complexity of fault detection problems for combina-

tional logic circuits”, IEEE TC, June 1982, pp. 555-560.