Transcript of SRAF Insertion via Supervised Dictionary Learning (byu/papers/C76-ASPDAC2019-SRAF-slides.pdf)

  • SRAF Insertion via Supervised Dictionary Learning

    Hao Geng1, Haoyu Yang1, Yuzhe Ma1, Joydeep Mitra2, Bei Yu1

    1The Chinese University of Hong Kong; 2Cadence Inc.

    1 / 19

  • Moore’s Law to Extreme Scaling

    [Figure: number of transistors per integrated circuit for Intel microprocessors (4004, 8086, 286, 386, 486, Pentium, Pentium II, Pentium 4, Core 2 Duo, Core i7) versus year, 1940–2020, from roughly 10 up to 10,000,000,000, alongside process technology scaling from 10 µm down to 45 nm. The transistor count doubles every 2.1 years.]

    2 / 19

  • Nanometer Era of Manufacturing: An Inverter Example

    3 / 19

  • Optical Proximity Correction (OPC)

    Design target → Mask → Wafer

    without OPC

    with OPC

    4 / 19

  • What is SRAF?

    I Patterns deliver light to target features without printing themselves
    I Make isolated features more dense
    I Improve the robustness of the target patterns
    I Rule-based [Jun+,SPIE’15], Model-based [Shang+,Mentor’05], Machine learning model-based [Xu+,ISPD’16]

    [Figure: target, OPC, SRAF, and PV band shapes. (a) Printing with OPC only (2688 nm² PV band area); (b) printing with both OPC and SRAF (2318 nm² PV band area).]

    5 / 19

  • Outline

    Supervised Feature Revision

    SRAF Insertion

    Experimental Results

    6 / 19

  • Outline

    Supervised Feature Revision

    SRAF Insertion

    Experimental Results

    7 / 19

  • Concentric Circle Area Sampling

    I Initial feature extraction method in SRAF generation

    [Figure: (a) SRAF labeling (label 1 for SRAF grids, label 0 for non-SRAF grids); (b) CCAS feature extraction, with sub-sampling points on concentric circles 0, 1, 2, …, N−1, in machine learning model-based SRAF generation.]

    7 / 19
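As a rough sketch of the CCAS idea above — an illustrative reconstruction, not code from the slides; the function name `ccas_feature` and its parameters are hypothetical — one samples a binary layout image at sub-sampling points lying on concentric circles around a grid center:

```python
import numpy as np

def ccas_feature(layout, cx, cy, n_circles, n_points, dr=1.0):
    """Sample a binary layout image on n_circles concentric circles around
    grid point (cx, cy), taking n_points sub-sampling points per circle.
    Out-of-bounds samples read as 0. (Hypothetical sketch of CCAS.)"""
    H, W = layout.shape
    feats = []
    for k in range(1, n_circles + 1):
        r = k * dr  # radius of the k-th circle
        for i in range(n_points):
            theta = 2.0 * np.pi * i / n_points
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            feats.append(layout[x, y] if 0 <= x < H and 0 <= y < W else 0)
    return np.asarray(feats)
```

Stacking one such vector per grid point yields the raw feature vectors fed into the learning model.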

  • Introduction to Dictionary Learning

    Overview
    Originally, the dictionary learning model is composed of two parts: one is sparse coding and the other is dictionary constructing. The joint objective function with respect to D and x is

    \min_{\mathbf{x},\mathbf{D}} \frac{1}{N} \sum_{t=1}^{N} \Big\{ \frac{1}{2} \|\mathbf{y}_t - \mathbf{D}\mathbf{x}_t\|_2^2 + \lambda \|\mathbf{x}_t\|_p \Big\},  (1)

    I y_t ∈ R^n: the t-th input data vector
    I D = {d_j}_{j=1}^{s}, d_j ∈ R^n: the dictionary, where every column is called an atom
    I x_t ∈ R^s: the sparse code
    I λ: hyper-parameter
    I p: the norm type of the penalty term, e.g. the l1 norm

    8 / 19
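A minimal alternating-minimization sketch of objective (1), assuming the l1 penalty (p = 1): ISTA handles the sparse-coding step and a least-squares fit with atom renormalization handles the dictionary step. This is an illustrative toy, not the authors' online algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(y, D, lam, n_iter=200):
    """ISTA for min_x 0.5*||y - Dx||_2^2 + lam*||x||_1 (the p = 1 case)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

def dict_learn(Y, s, lam=0.05, n_epochs=10, seed=0):
    """Alternate sparse coding and a least-squares dictionary update
    for objective (1). Y is n x N, one data vector per column."""
    rng = np.random.default_rng(seed)
    n, N = Y.shape
    D = rng.standard_normal((n, s))
    D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
    for _ in range(n_epochs):
        X = np.column_stack([sparse_code(Y[:, t], D, lam) for t in range(N)])
        D = Y @ np.linalg.pinv(X)   # least-squares dictionary update
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 1e-12, norms, 1.0)
    # re-code once with the final dictionary
    X = np.column_stack([sparse_code(Y[:, t], D, lam) for t in range(N)])
    return D, X
```

Batch alternation like this motivates the online variant on the following slides, which avoids revisiting all N samples per update.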

  • The Illustration for Dictionary Learning

    [Figure, built up over four steps: an input vector y_t is approximated as the dictionary D times its sparse code x_t.]

    9 / 19

  • Online Learning Framework

    Sparse Coding

    The subproblem with D fixed is convex. The objective function for sparse coding of the t-th training data vector in memory is

    \mathbf{x}_t \triangleq \operatorname*{argmin}_{\mathbf{x}} \frac{1}{2} \|\mathbf{y}_t - \mathbf{D}\mathbf{x}\|_2^2 + \lambda \|\mathbf{x}\|_p.  (2)

    Solver Details

    I p = 0: l0 norm, NP-hard [Mallat+,TIP’93], [Pati+,ACSSC’93]
    I p = 1: LASSO problem [Friedman+,JSS’10], [Beck+,SIIMS’09]

    10 / 19
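For the p = 0 case, a standard greedy solver is orthogonal matching pursuit; here is a compact sketch (the atom budget k is an assumed parameter, and this is generic OMP rather than the exact solver of the cited works):

```python
import numpy as np

def omp(y, D, k):
    """Orthogonal Matching Pursuit: a greedy solver for the l0-penalized
    sparse coding problem, selecting at most k atoms."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected support
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D @ x
    return x
```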

  • Online Learning Framework: Dictionary Constructing

    The subproblem with x fixed is also convex. The objective function for dictionary constructing is

    \mathbf{D} \triangleq \operatorname*{argmin}_{\mathbf{D}} \frac{1}{N} \sum_{t=1}^{N} \frac{1}{2} \|\mathbf{y}_t - \mathbf{D}\mathbf{x}_t\|_2^2 + \lambda \|\mathbf{x}_t\|_p.  (3)

    Solver Details
    I Block coordinate descent method with warm start
    I Introducing two auxiliary variables B and C to speed up the convergence rate
    I Sequentially updating atoms in the dictionary D

    \mathbf{B}_t \leftarrow \frac{t-1}{t} \mathbf{B}_{t-1} + \frac{1}{t} \mathbf{y}_t \mathbf{x}_t^{\top},  (4)

    \mathbf{C}_t \leftarrow \frac{t-1}{t} \mathbf{C}_{t-1} + \frac{1}{t} \mathbf{x}_t \mathbf{x}_t^{\top}.  (5)

    11 / 19
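The running-average updates (4)–(5) translate directly into code; a quick sketch with assumed variable names:

```python
import numpy as np

def update_stats(B, C, y_t, x_t, t):
    """One step of Eqs. (4)-(5): keep B and C as running averages of
    y_t x_t^T and x_t x_t^T. t is the 1-based sample index."""
    B = (t - 1) / t * B + np.outer(y_t, x_t) / t
    C = (t - 1) / t * C + np.outer(x_t, x_t) / t
    return B, C
```

After t steps, B and C hold the empirical means of y x^T and x x^T, so the atom-by-atom dictionary update needs only these two small matrices instead of all past samples.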

  • Further Exploration: Supervised Dictionary Learning

    Exploring Latent Label Information

    \min_{\mathbf{x},\mathbf{D},\mathbf{A}} \frac{1}{N} \sum_{t=1}^{N} \Big\{ \frac{1}{2} \Big\| \big(\mathbf{y}_t^{\top}, \sqrt{\alpha}\,\mathbf{q}_t^{\top}\big)^{\top} - \begin{pmatrix} \mathbf{D} \\ \sqrt{\alpha}\,\mathbf{A} \end{pmatrix} \mathbf{x}_t \Big\|_2^2 + \lambda \|\mathbf{x}_t\|_p \Big\}.  (6)

    Exploiting both Latent and Direct Label Information

    \min_{\mathbf{x},\mathbf{D},\mathbf{A},\mathbf{W}} \frac{1}{N} \sum_{t=1}^{N} \Big\{ \frac{1}{2} \Big\| \big(\mathbf{y}_t^{\top}, \sqrt{\alpha}\,\mathbf{q}_t^{\top}, \sqrt{\beta}\,h_t\big)^{\top} - \begin{pmatrix} \mathbf{D} \\ \sqrt{\alpha}\,\mathbf{A} \\ \sqrt{\beta}\,\mathbf{W} \end{pmatrix} \mathbf{x}_t \Big\|_2^2 + \lambda \|\mathbf{x}_t\|_p \Big\}.  (7)

    12 / 19
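Objectives (6)–(7) are ordinary dictionary-learning objectives on an augmented data vector and a stacked dictionary, so the unsupervised machinery applies unchanged. The reduction can be sketched as follows (function and variable names assumed):

```python
import numpy as np

def stack_supervised(y_t, q_t, h_t, D, A, W, alpha, beta):
    """Build the augmented pair of Eq. (7): stacking turns the supervised
    objective into the plain form of Eq. (1) on (y_aug, D_aug)."""
    y_aug = np.concatenate([y_t, np.sqrt(alpha) * q_t, [np.sqrt(beta) * h_t]])
    D_aug = np.vstack([D, np.sqrt(alpha) * A, np.sqrt(beta) * W])
    return y_aug, D_aug
```

Any sparse-coding solver for Eq. (2) can then be run on (y_aug, D_aug) directly, since the residual norms decompose term by term.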

  • The Illustration for Supervised Online Dictionary Learning

    [Figure: the augmented input (y_t^T, √α q_t^T, √β h_t)^T is factorized by the stacked dictionary (D; √α A; √β W) into the sparse code x_t.]

    13 / 19

  • Outline

    Supervised Feature Revision

    SRAF Insertion

    Experimental Results

    14 / 19

  • SRAF Insertion: Preliminary Work

    I SRAF probability learning for each grid: logistic regression
    I SRAF grid model construction: merging

    c(x, y) = \begin{cases} \sum_{(i,j)\in(x,y)} p(i,j), & \text{if } \exists\, p(i,j) \ge threshold, \\ -1, & \text{if all } p(i,j) < threshold. \end{cases}  (8)

    I p(i, j): the probability of a grid with index (i, j)
    I c(x, y): the summed probability value of the merged grid with index (x, y)

    [Figure: SRAF grid model construction; fine grids (i, j) of size 10 nm are merged into coarse grids (x, y).]

    14 / 19
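A direct reading of Eq. (8) as code, assuming fine grids merge in square blocks (the block-size parameter and the array-based interface are assumptions for illustration):

```python
import numpy as np

def merge_grid(p, block, threshold):
    """Eq. (8): sum the probabilities of fine grids inside each merged grid;
    mark a merged grid -1 when no fine grid reaches the threshold.
    p is a 2-D array of per-grid probabilities, block the merge factor."""
    H, W = p.shape
    c = np.full((H // block, W // block), -1.0)
    for x in range(H // block):
        for y in range(W // block):
            tile = p[x*block:(x+1)*block, y*block:(y+1)*block]
            if (tile >= threshold).any():
                c[x, y] = tile.sum()
    return c
```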

  • SRAF Insertion via ILP

    \max_{a(x,y)} \sum_{x,y} c(x, y) \cdot a(x, y)  (9a)
    s.t. a(x, y) + a(x-1, y-1) \le 1, \forall (x, y),  (9b)
         a(x, y) + a(x-1, y+1) \le 1, \forall (x, y),  (9c)
         a(x, y) + a(x+1, y-1) \le 1, \forall (x, y),  (9d)
         a(x, y) + a(x+1, y+1) \le 1, \forall (x, y),  (9e)
         a(x, y) + a(x, y+1) + a(x, y+2) + a(x, y+3) \le 3, \forall (x, y),  (9f)
         a(x, y) + a(x+1, y) + a(x+2, y) + a(x+3, y) \le 3, \forall (x, y),  (9g)
         a(x, y) \in \{0, 1\}, \forall (x, y).  (9h)

    [Figure: SRAF insertion design rule under the grid model, with minimum and maximum SRAF widths W_min and W_max on a 40 nm grid; diagonally adjacent placements are forbidden.]

    15 / 19
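To make the design rules in (9b)–(9h) concrete, here is a tiny brute-force reference solver. A real implementation would hand constraints (9b)–(9g) to an ILP solver; this exhaustive search is purely illustrative and only feasible for very small grids:

```python
import itertools
import numpy as np

def solve_small(c):
    """Brute-force the ILP of Eq. (9) on a tiny merged-grid score array c."""
    H, W = c.shape

    def feasible(a):
        for x in range(H):
            for y in range(W):
                if not a[x, y]:
                    continue
                # (9b)-(9e): no diagonally adjacent SRAF grids
                for dx, dy in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
                    if 0 <= x+dx < H and 0 <= y+dy < W and a[x+dx, y+dy]:
                        return False
                # (9f)-(9g): at most 3 consecutive grids per row/column
                if y+3 < W and a[x, y+1] and a[x, y+2] and a[x, y+3]:
                    return False
                if x+3 < H and a[x+1, y] and a[x+2, y] and a[x+3, y]:
                    return False
        return True

    best, best_val = None, -np.inf
    for bits in itertools.product([0, 1], repeat=H * W):
        a = np.array(bits).reshape(H, W)
        if feasible(a):
            val = float((c * a).sum())
            if val > best_val:
                best, best_val = a, val
    return best, best_val
```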

  • Outline

    Supervised Feature Revision

    SRAF Insertion

    Experimental Results

    16 / 19

  • The Overall Flow

    Layout Pattern → [Feature Extraction: CCAS Feature Extraction → Supervised Feature Revision] → [SRAF Insertion: SRAF Probability Learning → SRAF Generation via ILP] → SRAFed Pattern

    16 / 19

  • Experimental Bed

    Benchmark Set

    I The same benchmark set as applied in [Xu+,ISPD’16]
    I 8 dense layouts and 10 sparse layouts with contacts sized 70 nm
    I 70 nm spacing for dense layouts and ≥ 70 nm spacing for sparse layouts

    [Figure: (a) dense layout with golden SRAFs; (b) sparse layout with golden SRAFs.]

    17 / 19

  • Results

    The proposed approach shows some improvement on accuracy with higher false alarms.

    B. SRAF insertion

    In the flow of machine learning-based SRAF insertion, at the feature extraction stage each layout clip is first put on a 2-D grid plane with a specific grid size. Then original features are extracted via the constrained concentric circle with area sampling (CCCAS) method at each grid. With CCCAS features and corresponding labels as input, the proposed supervised online dictionary learning (SODL) model is expected to output discriminative, low-dimensional features.

    Considering the label information, the joint objective function has been proposed as Equation (2) in [2]:

    \min_{\mathbf{x},\mathbf{D}} \frac{1}{N} \sum_{t=1}^{N} \Big\{ \frac{1}{2} \Big\| \big(\mathbf{y}_t^{\top}, \sqrt{\alpha}\,\mathbf{q}_t^{\top}, \sqrt{\beta}\,h_t\big)^{\top} - \begin{pmatrix} \mathbf{D} \\ \sqrt{\alpha}\,\mathbf{A} \\ \sqrt{\beta}\,\mathbf{W} \end{pmatrix} \mathbf{x}_t \Big\|_2^2 + \lambda \|\mathbf{x}_t\|_1 \Big\},  (2)

    where y_t ∈ R^n still acts as the raw feature vector, and D = {d_j}_{j=1}^{s}, d_j ∈ R^n refers to the dictionary; x_t ∈ R^s indicates the new representation of a CCCAS feature. Besides, q_t ∈ R^s is defined as the discriminative sparse code of the t-th input sample, and A ∈ R^{s×s} transforms the original sparse code x_t into the discriminative sparse code. In addition, h_t ∈ R is the label information, while W ∈ R^{1×s} is the label weight vector. α and β are hyper-parameters which control the contribution of each term to the reconstruction error and balance the trade-off.

    After feature extraction via CCCAS and the proposed SODL framework, the new low-dimensional discriminative feature is fed into a machine learning classifier. Through the SODL model and the classifier, the probability of each 2-D grid can be obtained. Combined with some relaxed SRAF design rules such as maximum length and width and minimum spacing, SRAF insertion can be modeled as an integer linear programming (ILP) problem, which provides a global view of SRAF generation.

    We employ a benchmark set which consists of 8 dense layouts and 10 sparse layouts with contacts sized 70 nm. The spacing for dense and sparse layouts is set to 70 nm and ≥ 70 nm respectively. In the following comparisons (i.e. Figures 4 to 7), “ISPD’16” denotes the results from a state-of-the-art SRAF insertion tool, while “SODL” and “SODL+ILP” correspond to the results of our supervised online dictionary learning framework without and with the ILP model in post-processing. Note that in “SODL”, a greedy SRAF generation approach as in “ISPD’16” is utilized. Due to the length limitation, only comparisons of SRAF insertion outputs on one sparse benchmark are exemplified in Fig. 7, in which red rectangles are inserted SRAFs, while green ones are OPCed target contacts. Experimental results verify the effectiveness and the efficiency of our SODL algorithm and ILP model.

    [Fig. 3: Comparison with a state-of-the-art SRAF insertion tool on F1 score (%): ISPD’16 vs. SODL over Dense-Average, Sparse-Average, and Total-Average, in the 85–95% range.]

    [Fig. 4: Lithographic performance comparison on PV band area (0.001 µm²): ISPD’16 vs. SODL+Greedy vs. SODL+ILP, in the 2.4–2.8 range.]

    [Fig. 5: Lithographic performance comparison on EPE (nm): ISPD’16 vs. SODL+Greedy vs. SODL+ILP, in the 0.6–1 range.]

    [Fig. 6: Performance comparison on runtime (s): ISPD’16 vs. SODL+Greedy vs. SODL+ILP, in the 0–40 range.]

    REFERENCES
    [1] H. Geng, H. Yang, B. Yu, X. Li, and X. Zeng, “Sparse VLSI layout feature extraction: A dictionary learning approach,” in 2018 IEEE Computer


    Lithographic performance comparisons with a state-of-the-art machine learning based SRAF insertion tool.

    18 / 19

  • Conclusion

    Summary:
    I First introduced the concept of dictionary learning into the layout feature extraction stage, and further proposed a supervised online dictionary learning algorithm.
    I ILP for SRAF generation in a global view.
    I Boosts the F1 score and enhances lithographic performance with less time overhead.

    Future Work:
    I Speed up the SRAF insertion process
    I Consider more SRAF design rules in the ILP
    I ...

    19 / 19
