Compositional Methods and Symbolic Model Checking
Transcript of Compositional Methods and Symbolic Model Checking
1
Compositional Methods and
Symbolic Model Checking
Ken McMillan
Cadence Berkeley Labs
2
Compositional methods
Reduce large verification problems to small ones by
– Decomposition
– Abstraction
– Specialization
– etc.
Based on symbolic model checking
System level verification
Will consider the implications of such an approach for symbolic model checking
3
Example -- Cache coherence
[Figure: distributed cache coherence -- protocol hosts connected by an S/F network; each host contains an interface (INTF), processors (P), memory (M), and IO, connected to the net.]
Nondeterministic abstract model
Atomic actions
Single address abstraction
Verified coherence, etc...
(Eiriksson 98)
4
Refinement to RTL level
[Figure: abstract model (S/F network protocol, host, other hosts) connected by refinement relations to the RTL implementation (~30K lines of Verilog), including CAM tables and TAGS.]
5
Contrast to block level verification
Block verification approach to capacity problem
– isolate small blocks
– place ad hoc constraints on inputs
This is falsification because
– constraints are not verified
– block interactions not exposed to verification
Result: FV does not replace any simulation activity
6
What are the implications for SMC?
Verification and falsification have different needs
– Proof is as strong as its weakest link
– Hence, approximation methods are not attractive.
Importance of predictability and metrics
– Must have reliable decomposition strategies
Implications of using linear vs. branching time.
7
Predictability
Require metrics that predict model checking hardness
– Most important is number of state variables
[Figure: verification probability (1 to 0) vs. # state bits -- a sharp threshold separates the verification regime from the falsification regime; reductions move the original system below the threshold.]
– Powerful MC can save steps, but is not essential
– Predictability more important than capacity
8
Example -- simple pipeline
Goal: prove equivalence to unpipelined model
(modulo delay)
[Figure: simple pipeline -- 32-register file, 32-bit datapath with adder, bypass paths, and control logic.]
9
Direct approach by model checking
Model checking completely intractable due to large number of state variables ( > 2048 )
[Figure: ops drive both the pipeline and a delayed reference model; their outputs are compared for equality.]
10
Compositional refinement verification
[Figure: abstract model related to the system by translations.]
11
Localized verification
[Figure: abstract model and system related by translations; one translation is assumed, the other is proved.]
12
Localized verification
[Figure: abstract model and system related by translations, with the assume and prove roles exchanged.]
13
Circular inference rule
[Figure: SPEC decomposed into components φ1 and φ2.]

  ¬(¬φ2 U ¬φ1)    -- φ1 up to t-1 implies φ2 up to t
  ¬(¬φ1 U ¬φ2)    -- φ2 up to t-1 implies φ1 up to t
  ────────────
  G(φ1 ∧ φ2)      -- always φ1 and φ2

(related: AL 95, AH 96)
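The induction behind the rule can be sketched concretely. Below is a minimal Python illustration (the function names and trace encoding are mine, not from the talk): the premises are checked over a finite trace of truth values for φ1 and φ2, and when both hold, G(φ1 ∧ φ2) follows.

```python
# Hypothetical sketch of the circular rule on a finite trace.
# A trace is a list of (phi1, phi2) truth-value pairs, one per time step.

def holds_up_to(trace, idx, t):
    """Component idx holds at every step strictly before t."""
    return all(step[idx] for step in trace[:t])

def premise(trace, frm, to):
    """'phi_frm up to t-1 implies phi_to at t', checked at every t."""
    return all(trace[t][to] for t in range(len(trace))
               if holds_up_to(trace, frm, t))

def circular_rule(trace):
    """When both premises hold, G(phi1 and phi2) follows by induction on t."""
    if premise(trace, 0, 1) and premise(trace, 1, 0):
        return all(a and b for a, b in trace)
    return None  # the rule does not apply

assert circular_rule([(True, True)] * 5) is True
```

Note the base case: at t = 0 the empty prefix holds vacuously, so each premise forces its conclusion at time 0, and the induction goes through with no extra assumption.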
14
Decomposition for simple pipeline
[Figure: pipeline with correct values from the reference model injected; φ1 marks the operands, φ2 the results.]
φ1 = operand correctness
φ2 = result correctness
15
Lemmas in SMV
Operand correctness:

  layer L1: if(stage2.valid){
    stage2.opra := stage2.aux.opra;
    stage2.oprb := stage2.aux.oprb;
    stage2.res := stage2.aux.res;
  }
16
Effect of decomposition
Bit slicing results from "cone of influence reduction"
(similarly in reference model)
[Figure: decomposed pipeline -- correct values from the reference model replace the register file; φ1 proved, φ2 assumed.]
17
Resulting MC performance
Operand correctness property
[Plot: run time (s) vs. number of registers (0 to 32); 80 state variables; run time grows to ~140 s at 32 registers and fits a 3rd order polynomial.]
Result correctness property
– easy: comparison of 32 bit adders
18
NOT!
Previous slide showed a hand-picked variable order
Actually, BDD's blow up due to bad variable ordering
– ordering based on topological distance
[Plot: run time (s) vs. number of registers with topological ordering -- run time blows up, exceeding 300 s.]
19
Problem with topological ordering
Register files should be interleaved, but this is not evident from topology
[Figure: reference and implementation register files feed the bypass logic; results are compared for equality.]
20
Sifting to the rescue (?)
Lessons (?) :
– Cannot expect to solve PSPACE problems reliably
– Need a strategy to deal with heuristic failure
[Plot: run time (s) vs. number of registers with sifting (dynamic reordering) -- note the log scale (1 to 10000 s) and high variance.]
21
Predictability and metrics
Reducing the number of state variables
[Figure: verification probability vs. # state bits; decomposition moves the problem from the falsification regime into the verification regime.]
– If heuristics fail, other reductions are available
2048 bits → 80 bits
~600 orders of magnitude in state space size
22
Big structures and path splitting
[Figure: SPEC and a property P over a big structure A, split into cases indexed by i.]
23
Temporal case splitting
Prove separately that p holds at all times when v = i:

  ∀i: G((v = i) → p)
  ──────────────────
         G p

Path splitting: v records the register index; each case considers only the paths on which v = i.
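The shape of this argument can be sketched over an explicit state set (a hypothetical Python illustration -- a real model checker would handle each case symbolically):

```python
# Hypothetical sketch of temporal case splitting on an explicit state set.
# A state is a pair (v, ok); 'states' stands in for the reachable states.

def case_holds(states, i):
    """G((v = i) -> ok): ok holds in every reachable state where v = i."""
    return all(ok for v, ok in states if v == i)

def split_check(states, values):
    """forall i: G((v = i) -> ok); together the cases imply G ok."""
    return all(case_holds(states, i) for i in values)

reachable = [(0, True), (1, True), (2, True)]
assert split_check(reachable, range(3)) == all(ok for _, ok in reachable)
```

The point of the split is not this equivalence but the reduction: each case mentions only one value of v, so the abstracted model for that case is far smaller.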
i
24
Case split for simple pipeline
Show only correctness for operands fetched from register i:

  forall(i in REG)
    subcase L1[i] of stage2.opra//L1
      for stage2.aux.srca = i;
Abstract remaining registers to "bottom"
Result
– 23 state bits in model
– Checking one case = ~1 sec
What about the 32 cases?
25
Exploiting symmetry
Symmetric types
– Semantics invariant under permutations of type.
– Enforced by type checking rules.
Symmetry reduction rule
– Choose a set of representative cases under symmetry
Type REG is symmetric
– One representative case is sufficient (~1 sec)
Estimated time savings from case split: 5 orders
But wait, there's more...
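The saving can be sketched abstractly (hypothetical Python; `verify_all` and its arguments are mine): under symmetry, one representative check stands in for the whole family of cases.

```python
# Hypothetical sketch of symmetry reduction by representatives.
# If the type is symmetric, every index is equivalent to one representative,
# so a single check settles the whole family of cases.

def verify_all(check, cases, symmetric):
    cases = list(cases)
    if symmetric:
        return check(cases[0])           # one representative case suffices
    return all(check(i) for i in cases)  # otherwise check every case

assert verify_all(lambda i: True, range(32), symmetric=True) is True
```

Here the soundness burden sits entirely in the `symmetric` flag; in SMV it is discharged by the type checking rules that enforce permutation invariance.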
26
Data type reductions
Problem: types with large ranges
Solution: reduce large (or infinite) types: T → {i, T\i},
where T\i represents all the values in T except i.
Abstract interpretation of equality:

  i = i      →  1
  i = T\i    →  0
  T\i = T\i  →  {0, 1} (unknown)
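The abstract equality table can be written out directly (a Python sketch; the three-valued encoding with `None` for "unknown" is my choice, not SMV's):

```python
# Python sketch of abstract equality over the reduced type {i, T\i}.
# 'OTHER' abstracts every concrete value except i; None encodes "unknown".

I, OTHER = "i", "T\\i"

def abs_eq(a, b):
    if a == I and b == I:
        return True        # i = i      ->  1
    if a != b:
        return False       # i = T\i    ->  0
    return None            # T\i = T\i  ->  {0, 1}, unknown

assert abs_eq(I, OTHER) is False
assert abs_eq(OTHER, OTHER) is None
```

The unknown case is what makes this a sound abstraction: any property proved under it holds for every concrete resolution of T\i = T\i.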
27
Type reduction for simple pipeline
Only register i is relevant
Reduce type REG to two values, {i, REG\i}:

  using REG->{i} prove stage2.opra//L1[i];

Number of state bits is now 11
Verification time is now independent of register file size.
Note: can also abstract out arithmetic verification using uninterpreted functions...
28
Effect of reduction
[Figure: verification probability vs. # state bits; successive reductions take the design from 2048 bits to 84 to 11.]
– Manual decomposition produces order of magnitude reductions in number of state bits
– Inflexion point in curve crossed very rapidly
29
Desiderata for model checking methods
Importance of predictability and metrics
– Proof strategy based on reliable metric (# state bits)
– Prefer reliable performance in given range to occasional success on large problems *
e.g., stabilize variable ordering
– Methods that diverge unpredictably for small problems are less useful (e.g., infinite state, widening)
Moderate performance improvements are not that important
– Reduction steps gain multiple orders of magnitude
Approximations not appropriate
* given PSPACE completeness
30
Linear v branching time
Model checking v compositional verification:

  model checking:  M ⊨ φ   (fixed model)
  compositional:     ⊨ φ   (all models)

Verification complexity (in formula size):

                    CTL     LTL
  model checking    linear  PSPACE
  compositional     EXP     PSPACE
In practice, with LTL, we can mostly recover linear complexity...
31
Avoiding "tableau variables"
Problem: added state variables for LTL operators, e.g. a fresh variable v_Fp for Fp, constrained by v_Fp = p ∨ X v_Fp
Eliminating tableau variables
– Push path quantifiers inward (LTL to CTL*)
– Transition formulas (CTL+)
– Extract transition and fairness constraints
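The fixpoint Fp = p ∨ X Fp that a tableau variable encodes can be sketched on a finite trace (Python sketch; computing backwards and assuming Fp is false past the end of the trace are my simplifications):

```python
# Python sketch of the fixpoint Fp = p or X Fp that a tableau variable encodes.
# Computed backwards over a finite trace of truth values for p.

def F(trace):
    """Truth value of Fp at each position of a finite trace of p."""
    v = [False] * (len(trace) + 1)      # assumption: Fp false after the trace
    for t in reversed(range(len(trace))):
        v[t] = trace[t] or v[t + 1]     # the fixpoint Fp = p or X Fp
    return v[:-1]

assert F([False, True, False]) == [True, True, False]
```

In a symbolic tableau the same recurrence becomes a constraint between v_Fp and its next-state value, which is exactly the extra state variable the rewriting techniques below try to avoid.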
32
Translating LTL to CTL*
Rewrite rules:

  A ¬p      ⇒  ¬E p
  A(p ∧ q)  ⇒  A p ∧ A q
  A X p     ⇒  AX A p
  E ¬p      ⇒  ¬A p
  E(p ∨ q)  ⇒  E p ∨ E q
  E X p     ⇒  EX E p

In addition, if p is boolean,

  E(p ∧ q)  ⇒  p ∧ E q
  A(p ∨ q)  ⇒  p ∨ A q
  E(p U q)  ⇒  E(p U E q)
  A(p U q)  :  no rule
By adding path quantifiers, we eliminate tableau variables
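A few of these rules can be sketched as rewrites on formula trees (a hypothetical Python fragment covering only the A-rules for ∧, boolean ∨, and X; the tuple encoding is mine):

```python
# Hypothetical sketch: push the universal path quantifier A inward.
# Formulas are nested tuples, e.g. ("and", "p", ("X", "q")); bare strings
# are boolean atoms, for which A p is just p.

def push_A(f):
    """Apply a subset of the LTL-to-CTL* rewrite rules to A f."""
    if isinstance(f, str):
        return f                                 # boolean p: A p = p
    op = f[0]
    if op == "and":
        return ("and", push_A(f[1]), push_A(f[2]))   # A(p and q)
    if op == "or" and isinstance(f[1], str):
        return ("or", f[1], push_A(f[2]))            # A(p or q), p boolean
    if op == "X":
        return ("AX", push_A(f[1]))                  # A X p = AX A p
    return ("A", f)          # no rule applies (e.g. A(p U q)): keep quantifier

assert push_A(("and", "p", ("X", "q"))) == ("and", "p", ("AX", "q"))
```

Wherever the recursion bottoms out without a leftover ("A", f) node, the result is a pure CTL* formula and no tableau variable is needed for that subformula.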
33
Rewrites that don't work

  A(p U X q)    ⇏  A(p U AX q)
  E(X p U X q)  ⇏  E(X p U EX q)

[Figures: branching counterexamples showing why each rewrite fails.]
34
Examples
LTL formulas that translate to CTL formulas:

  G(p → F q)      ⇒  AG(p → AF q)    (note singly nested fixed point)
  G(p → (p W q))  ⇒  AG(p → A(p W q))

Incomplete rewriting (to CTL*):

  G(p → F(q ∧ X q))  ⇒  AG(p → AF(q ∧ X q))

Note: 3 tableau variables reduced to 1
Conjecture: all resulting formulas are forward checkable
35
Transition modalities
Transition formulas look at most one step ahead: e.g. p → X q or v' = v + 1, but not XX q or X AF p.
CTL+ state modalities: A(p U q), E(p U q), where p is a transition formula.
Example CTL+ formulas: AG ¬(p ∧ X ¬q), E((p ∧ ¬X p) U (p ∧ q))
CTL+ still checkable in linear time
36
Constraint extraction
Extracting path constraints, where p is a transition formula:

  A(G p → q)   becomes  A q under transition constraint p
  A(GF p → q)  becomes  A q under fairness constraint GF p

Using rewriting and the above, GF p → GF q becomes AG AF q with fairness constraint GF p.

Circular compositional reasoning:

  G Φ → ¬(¬φ U ¬ψ)   checked as   A ¬(¬φ U ¬ψ) under constraints Φ

If φ and ψ are transition formulas, this is in CTL+, hence complexity is linear.
Note: typically the constraints Φ are very large, and φ, ψ are small.
37
Effect of reducing LTL to CTL+
In practice, tableau variables rarely needed
Thus, complexity exponential only in # of state variables
– Important metric for proof strategy
Doubly nested fixed points used only where needed
– I.e., when fairness constraints apply
Forward and backward traversal possible
– Curious point: backward is commonly faster in refinement verification
38
SMC for compositional verification
Cannot expect to solve PSPACE complete problems reliably
– User reductions provide fallback when heuristics fail
– Robust metrics are important to proof strategy
Each user reduction gains many orders of magnitude
– Modest performance improvements not very important
Exact verification is important
Must be able to handle linear time efficiently
BDD's are great fun, but...