
Transcript of "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks"

  • 1 / 30

    Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

    Guy Katz

    NASA Machine Learning Workshop, August 30, 2017

  • 2 / 30

    Based on:

    “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”, in Proc. CAV 2017

    Joint work with Clark Barrett, David Dill, Kyle Julian and Mykel Kochenderfer

    Available on arXiv: 1702.01135

  • 3 / 30

    Artificial Neural Networks

    An emerging solution to a wide variety of problems
    ◦ Image recognition, game playing, autonomous driving, etc.

    A “black box” solution
    ◦ An advantage, but also a drawback

    Goal: Reason about the inner workings of neural networks

  • 4 / 30

    Possible Applications

    1. Transform (simplify) neural networks
    ◦ While preserving certain properties

    2. Extract properties of networks
    ◦ Make networks more understandable to humans

    3. Check robustness against adversarial examples

    4. Formally verify safety-critical systems that incorporate neural networks

  • 5 / 30

    Case Study: ACAS Xu

    An airborne collision-avoidance system for drones
    A new standard being developed by the FAA

    Produces advisories:
    1. Strong left (SL)
    2. Weak left (WL)
    3. Strong right (SR)
    4. Weak right (WR)
    5. Clear of conflict (COC)

    The FAA is considering an implementation that uses 45 deep neural networks
    ◦ But wants to verify them!

  • 6 / 30

    Certifying ACAS Xu Networks

    Neural networks generalize to previously-unseen inputs

    Show that certain properties hold for all inputs

    Examples:
    ◦ If the intruder is distant, the advisory is always COC
    ◦ If the intruder is nearby on the left, the advisory is always “strong right”

    Crucial for increasing the level of confidence

  • 7 / 30

    Agenda

    The Reluplex Algorithm

    Verifying the ACAS Xu Networks

    Conclusion

  • 8 / 30

    Agenda

    The Reluplex Algorithm

    Verifying the ACAS Xu Networks

    Conclusion

  • 9 / 30

    Deep Neural Nets (DNNs)

    ACAS Xu networks: 8 layers, 310 nodes (× 45 networks)

    State-of-the-art verification: networks with ~20 nodes
    ◦ An NP-complete problem!

  • 10 / 30

    The Culprits: Activation Functions

    Each node computes its value in two steps:
    1. Weighted sum of the incoming values
    2. Activation function applied to the sum

    Example (from the figure): inputs 2, 0, −1 with weights 1, 3, −2 give
    2 ⋅ 1 + 0 ⋅ 3 + (−1) ⋅ (−2) = 4, and the node outputs 𝑓(4)

    [Figure: a node with inputs 2, 0, −1, edge weights 1, 3, −2, and output 𝑓(4)]

  • 11 / 30

    Rectified Linear Units (ReLUs)

    ReLU(𝑥) = max(0, 𝑥)
    ◦ 𝑥 ≥ 0: active case, return 𝑥
    ◦ 𝑥 < 0: inactive case, return 0

    Example (from the figure): with weights 1, 3, −2,
    ◦ inputs 2, 0, −1 give 2 ⋅ 1 + 0 ⋅ 3 + (−1) ⋅ (−2) = 4, and ReLU(4) = 4
    ◦ inputs 1, −2, 0 give 1 ⋅ 1 + (−2) ⋅ 3 + 0 ⋅ (−2) = −5, and ReLU(−5) = 0
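    To make the two-step node computation concrete, here is a minimal Python sketch (an illustration, not code from the talk) that reproduces the two worked examples above:

```python
# Minimal sketch of a single DNN node: weighted sum, then ReLU activation.
def relu(x):
    return max(0.0, x)

def node_output(inputs, weights):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return relu(weighted_sum)

weights = [1, 3, -2]
print(node_output([2, 0, -1], weights))  # 2*1 + 0*3 + (-1)*(-2) =  4 -> 4
print(node_output([1, -2, 0], weights))  # 1*1 + (-2)*3 + 0*(-2) = -5 -> 0
```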

  • 12 / 30

    Case Splitting

    Linear programs (LPs) are easier to solve

    Piecewise-linear constraints are reducible to LPs

    Case splitting (sketched in code below):
    ◦ Fix each ReLU to its active or inactive state
    ◦ Solve the resulting LP
    ◦ If a solution is found, we are done
    ◦ Otherwise, backtrack and try the other option

    State explosion: 300 ReLUs → 2^300 cases
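    The following sketch illustrates naive case splitting on the small network used later in the talk (slide 14). Each ReLU is fixed to its active (y = x, x ≥ 0) or inactive (y = 0, x ≤ 0) phase, turning the query into a plain LP feasibility check; using SciPy's stock LP solver here is an assumption about available tooling, not the talk's implementation:

```python
# Naive case splitting over ReLU phases, checking each case with an LP solver.
import itertools
from scipy.optimize import linprog

# Variable order: x1, x2b, x2f, x3b, x3f, x4  (b = before ReLU, f = after)
base_eq = [([-1, 1, 0, 0, 0, 0], 0),   # x2b =  x1
           ([ 1, 0, 0, 1, 0, 0], 0),   # x3b = -x1
           ([ 0, 0,-1, 0,-1, 1], 0)]   # x4  =  x2f + x3f
bounds = [(0, 1), (None, None), (0, None), (None, None), (0, None), (0.5, 1)]
relus = [(1, 2), (3, 4)]               # (before, after) index pairs

for phases in itertools.product([True, False], repeat=len(relus)):
    A_eq = [row for row, _ in base_eq]
    b_eq = [rhs for _, rhs in base_eq]
    A_ub, b_ub = [], []
    for (bi, fi), active in zip(relus, phases):
        row = [0] * 6
        if active:                     # x_f = x_b  and  x_b >= 0
            row[fi], row[bi] = 1, -1
            A_eq.append(row); b_eq.append(0)
            neg = [0] * 6; neg[bi] = -1
            A_ub.append(neg); b_ub.append(0)
        else:                          # x_f = 0  and  x_b <= 0
            row[fi] = 1
            A_eq.append(row); b_eq.append(0)
            pos = [0] * 6; pos[bi] = 1
            A_ub.append(pos); b_ub.append(0)
    res = linprog(c=[0] * 6, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if res.status == 0:                # feasible: this phase combination works
        print("SAT with phases", phases, "assignment", res.x)
        break
else:
    print("UNSAT: all phase combinations are infeasible")
```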

  • 13 / 30

    Reluplex

    A technique for solving linear programs with ReLUs
    ◦ Neural networks can be encoded as input

    Extends the simplex method

    Does not require case splitting in advance
    ◦ ReLU constraints are satisfied incrementally
    ◦ Split only if we must

    Scales to the ACAS Xu networks
    ◦ Networks an order of magnitude larger than previously possible

  • 14 / 30

    A Simple Example

    Property being checked: is it possible that x1 ∈ [0, 1] and x4 ∈ [0.5, 1]?

    [Figure: a network with input x1, hidden ReLU nodes x2 and x3, and output x4;
    edge weights are 1 and −1 into the hidden layer and 1 and 1 into the output]

  • 15 / 30

    Encoding Networks

    [Figure: the same network prepared for encoding; each hidden node is split
    into a pre-ReLU variable (x2b, x3b) and a post-ReLU variable (x2f, x3f),
    with the same edge weights 1, −1, 1, 1]

  • 16 / 30

    Encoding Networks (cont’d)

    Introduce equalities:
    ◦ x2b = x1
    ◦ x3b = −x1
    ◦ x4 = x2f + x3f
    together with the ReLU connections x2f = ReLU(x2b), x3f = ReLU(x3b)

    Set bounds:
    ◦ x1 ∈ [0, 1]
    ◦ x4 ∈ [0.5, 1]
    ◦ x2f, x3f ∈ [0, ∞)
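    Written out as plain data (an illustrative sketch, with variable names following the slide), the encoding is just three linear equalities, four bounds, and two ReLU pairings:

```python
# The example network's encoding as data. "b"/"f" mark the pre-/post-ReLU
# halves of each hidden node; everything except relu_pairs is linear.
equalities = [
    "x2b == x1",        # edge weight  1 from x1 to the top hidden node
    "x3b == -x1",       # edge weight -1 from x1 to the bottom hidden node
    "x4 == x2f + x3f",  # edge weights 1, 1 into the output node
]
bounds = {
    "x1":  (0.0, 1.0),   # input range from the property
    "x4":  (0.5, 1.0),   # output range from the property
    "x2f": (0.0, None),  # ReLU outputs are non-negative
    "x3f": (0.0, None),
}
relu_pairs = [("x2b", "x2f"), ("x3b", "x3f")]  # x_f = max(0, x_b)
```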

  • 17 / 30

    Reluplex: Example

    [Slide walkthrough: the equalities are rewritten in simplex tableau form with
    auxiliary variables (e.g., x5 = x2b − x1), all variables start at 0, and the
    algorithm repeatedly pivots, updates out-of-bounds variables (e.g.,
    x ≔ x − 0.5), and repairs broken ReLU pairs until every constraint holds:
    Success]

  • 18 / 30

    The Assignment is a Solution

    [Figure: the network annotated with the final assignment: input x1 = 0.5;
    pre-ReLU values x2b = 0.5 and x3b = −0.5; post-ReLU values x2f = 0.5 and
    x3f = 0; output x4 = 0.5]

    Property being checked: is it possible that x1 ∈ [0, 1] and x4 ∈ [0.5, 1]?
    Yes: the assignment above is a witness
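    A few lines of Python (illustrative, not from the talk) confirm that this assignment satisfies every constraint of the query:

```python
# Check the satisfying assignment from the slide against all constraints.
relu = lambda v: max(0.0, v)

x1 = 0.5
x2b, x3b = 1 * x1, -1 * x1       # weighted sums: 0.5 and -0.5
x2f, x3f = relu(x2b), relu(x3b)  # ReLU outputs:  0.5 and  0.0
x4 = 1 * x2f + 1 * x3f           # output:        0.5

assert 0.0 <= x1 <= 1.0 and 0.5 <= x4 <= 1.0
print("witness found: x1 =", x1, "-> x4 =", x4)
```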

  • 19 / 30

    Soundness & Termination

    Soundness is straightforward

    Can we always find a solution using pivots and updates alone?

    No: the search can sometimes enter a loop

    May have to split on ReLU variables
    ◦ Do so lazily
    ◦ In practice, splits are needed for only about 10% of the ReLUs

  • 20 / 30

    Reluplex: Efficient Implementation

    1. Bound tightening
    ◦ Deriving tighter lower/upper bounds can eliminate ReLUs
    ◦ Example: from 𝑥 = 𝑦 + 𝑧, 𝑥 ≥ −2, 𝑦 ≥ 1, 𝑧 ≥ 1 we can derive the tighter
      bound 𝑥 ≥ 2

    2. Use a Satisfiability Modulo Theories (SMT) framework
    ◦ Efficiently manage case splits and bound tightening

    3. Use floating-point arithmetic
    ◦ Put bounds on round-off errors

    For more details, see the paper
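    A tiny interval-arithmetic sketch of the bound-tightening example above (illustrative, assuming only the propagation rule for sums):

```python
# Bound tightening for x = y + z: the lower (upper) bound of x can be raised
# (lowered) to the sum of the lower (upper) bounds of y and z.
INF = float("inf")
lb = {"x": -2.0, "y": 1.0, "z": 1.0}
ub = {"x": INF, "y": INF, "z": INF}

lb["x"] = max(lb["x"], lb["y"] + lb["z"])  # -2 tightened to 2
ub["x"] = min(ub["x"], ub["y"] + ub["z"])  # stays +inf

print(lb["x"])  # 2.0 -- a ReLU applied to x could now be fixed to its active phase
```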

  • 21 / 30

    Agenda

    The Reluplex Algorithm

    Verifying the ACAS Xu Networks

    Conclusion

  • 22 / 30

    Properties of Interest

    1. No unnecessary turning advisories
    2. Alerting regions are consistent
    3. Strong alerts do not appear when vertical separation is large

  • 23 / 30

    ACAS Xu: Example 1

    “If the intruder is near and approaching from the left, the network advises
    strong right”
    ◦ Distance: 12000 ≤ 𝜌 ≤ 62000
    ◦ Angle to intruder: 0.2 ≤ 𝜃 ≤ 0.4
    ◦ …

    Proof time: 01:29:29 (using 4 machines)
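    A hedged sketch of how such a property might be posed as a query. The input names, the advisory list, and the “highest score wins” convention are illustrative assumptions, not the exact ACAS Xu encoding: bound the inputs to the scenario and assert the negation of the property, so that UNSAT on every query means the property holds over the whole region:

```python
# Illustrative query construction for Example 1 (assumed names/conventions).
input_bounds = {
    "rho":   (12000.0, 62000.0),  # distance to intruder
    "theta": (0.2, 0.4),          # angle to intruder
    # ... the remaining inputs are bounded similarly (elided on the slide)
}

ADVISORIES = ["COC", "WL", "SL", "WR", "SR"]

# Negation of "the network advises strong right": some other advisory scores
# at least as high as SR. Each disjunct becomes one solver query; if all of
# them come back UNSAT, the property is proved over the whole input region.
negated_queries = [
    {"bounds": input_bounds, "output_constraint": f"score_{a} - score_SR >= 0"}
    for a in ADVISORIES if a != "SR"
]
```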

  • 24 / 30

    ACAS Xu: Example 2

    “If vertical separation is large and the previous advisory is weak left, the
    network advises COC or weak left”
    ◦ Distance: 0 ≤ 𝜌 ≤ 60760
    ◦ Time to loss of vertical separation: 𝜏 = 100
    ◦ …

    Time to find counterexample: 11:08:22 (using 1 machine)

  • 25 / 30

    Robustness to Adversarial Inputs

    Slight input perturbations can cause misclassification

    Reluplex can prove that these cannot occur (for a given input and amount of
    noise)

    [Figure: an image plus an 𝜖-scaled perturbation yielding a misclassified
    image; from Goodfellow et al., 2015]

  • 26 / 30

    Local Adversarial Robustness

    Local robustness at 𝑥: how far from 𝑥 is the first adversarial example?

    Point   δ = 0.1      δ = 0.075    δ = 0.05     δ = 0.025   δ = 0.01   Avg. Time
    1       vulnerable   vulnerable   vulnerable   robust      robust     00:03:32
    2       robust       robust       robust       robust      robust     00:24:38
    3       robust       robust       robust       robust      robust     00:04:50
    4       vulnerable   vulnerable   vulnerable   robust      robust     00:09:22
    5       robust       robust       robust       robust      robust     01:08:12
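    A table like this can be produced by querying the solver once per radius. The sketch below assumes a hypothetical oracle is_robust(x, delta) that returns True when the solver proves no adversarial example exists within distance delta of x:

```python
# Probe a point's robustness at several perturbation radii.
def robustness_profile(x, deltas, is_robust):
    """is_robust is a hypothetical verification oracle (UNSAT => True)."""
    return {d: ("robust" if is_robust(x, d) else "vulnerable") for d in deltas}

# Example usage, mirroring the table's radii:
# robustness_profile(x, [0.1, 0.075, 0.05, 0.025, 0.01], is_robust)
```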

  • 27 / 30

    Agenda

    The Reluplex Algorithm

    Verifying the ACAS Xu Networks

    Conclusion

  • 28 / 30

    Conclusion

    Reluplex: a technique for solving linear programs with ReLUs

    Can encode neural networks and properties as Reluplex inputs

    Scalable

    Sound and terminating
    ◦ Modulo floating point

  • 29 / 30

    Next Steps

    1. Further certify the ACAS Xu networks

    2. Scalability: better SMT and LP techniques

    3. Additional activation functions

    4. Additional applications: autonomous driving (Intel)

  • 30 / 30

    Questions

    Thank You!

    Available on arXiv: 1702.01135

  • 31 / 30

    Comparison to SMT/LP Solvers

    φ1 … φ8: satisfiable properties; find a point 𝑥 for which the output is 𝑦
    with score at least 𝑐

    Times are in seconds; “-” indicates the 4-hour timeout

    Solver     φ1    φ2    φ3   φ4   φ5   φ6   φ7   φ8
    CVC4       -     -     -    -    -    -    -    -
    Z3         -     -     -    -    -    -    -    -
    Yices      1     37    -    -    -    -    -    -
    MathSat    2040  9780  -    -    -    -    -    -
    Gurobi     1     1     1    -    -    -    -    -
    Reluplex   11    3     9    10   155  7    10   14

  • 32 / 30

    Global Adversarial Robustness

    Local robustness: checked for a fixed 𝑥

    Global robustness: checked for all inputs simultaneously

    [Figure: two nearby input points x1 and x2]

  • 33 / 30

    Global Adversarial Robustness

    Inputs x1, x2 are labeled as ℓ with confidence p1, p2

    ∀x1, x2. ∥x1 − x2∥ ≤ 𝛿 ⇒ |p1 − p2| ≤ 𝜖

    Checked for all inputs simultaneously

    Difficult and slow
    ◦ Doubles the network size
    ◦ Large input domain
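    For contrast, a sampling-based falsifier for the same property is easy to sketch (purely illustrative; `forward` stands in for the network, and the per-coordinate perturbation is an infinity-norm assumption). Random search like this can only find counterexamples; proving the ∀ property is exactly what the doubled-network solver encoding above is for:

```python
import random

# Random search for a pair of nearby inputs whose confidences differ by more
# than epsilon -- a counterexample to global robustness. Finding none proves
# nothing; that is what the solver-based encoding is for.
def find_violating_pair(forward, dim, delta, epsilon, trials=10_000):
    for _ in range(trials):
        u = [random.random() for _ in range(dim)]
        v = [min(1.0, max(0.0, ui + random.uniform(-delta, delta))) for ui in u]
        if abs(forward(u) - forward(v)) > epsilon:
            return u, v  # counterexample found
    return None
```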