Combinational Logic Optimization



    Hauptseminar

    Combinational Logic Optimization

    Nikolaus Horr

Degree program: Informatik (Computer Science)

Supervisor: Talal Arnaout

Institut für Technische Informatik

Universität Stuttgart, Pfaffenwaldring 47

D-70569 Stuttgart


    Abstract

Today's combinational logic circuits have increasing area, numbers of gates and I/O connections. The latter aspect requires minimization algorithms which are highly efficient, since even the notation of the implemented functions (which is the input to the algorithms) can have exponential size in the number of inputs.

For the optimization of two-level logic circuits, either heuristic or exact methods are used. The practical use of the latter is restricted to smaller circuits. A data scheme on which different operations can be computed very efficiently is the positional-cube notation.

Multiple-level logic optimization is normally done using heuristic algorithms, due to the high computational complexity of exact algorithms.


    Contents

    1 Introduction 1

    2 Definitions 2

    2.1 Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    2.2 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    2.3 Representations of boolean functions . . . . . . . . . . . . . . . . . . 4

    3 Two-Level Combinational Logic Optimization 6

    3.1 Testability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    3.2 Algorithms for Exact Logic Minimization . . . . . . . . . . . . . . . . 8

    3.2.1 Positional-cube notation of binary-valued functions . . . . . . 10

    3.2.2 Positional-cube notation of multiple-valued functions . . . . . 12

    3.2.3 The unate recursive paradigm . . . . . . . . . . . . . . . . . . 14

    3.3 Algorithms for Heuristic Logic Minimization . . . . . . . . . . . . . . 15

    4 Multiple-level combinational logic optimization 16

    4.1 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4.2 Algebraic model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

4.3 Boolean model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    4.4 Delay optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

    5 Conclusion 19

    Bibliography 20


    List of Figures

    2.1 Redundant cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    2.2 Irredundant cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    2.3 Minimum cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

    3.1 Covers of a two-output function . . . . . . . . . . . . . . . . . . . . . 7

    3.2 Prime implicant table . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

    3.3 Dominated column c1 and dominant column c2 . . . . . . . . . . . . . 9

    3.4 Dominant row r1 and dominated row r2 . . . . . . . . . . . . . . . . . 9

    3.5 Reduced prime implicant table . . . . . . . . . . . . . . . . . . . . . . 10

3.6 Petrick's method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3.7 Symbol encoding in positional-cube notation . . . . . . . . . . . . . . 10

    3.8 Implicants in symbolic notation . . . . . . . . . . . . . . . . . . . . . 11

3.9 Fast test for intersection of two implicants in positional-cube notation 11

    3.10 Positional-cube notation of a multiple-output boolean function . . . . 11

3.11 Positional-cube notation of a multi-valued input function (the green numbers show the input values) . . . . . . . . . . . . . . . . . . . . . 12

3.12 Positional-cube notation of a strongly unate function (in the original bit-order, the green numbers show the input values) . . . . . . . . . . 13

3.13 Positional-cube notation of a strongly unate function (in a bit-order leaving no 0s right of a 1, the green numbers show the input values) . 13

    3.14 Positional-cube notation of a weakly unate function . . . . . . . . . . 13


    Chapter 1

    Introduction

Logic circuits can be divided into two classes: combinational logic circuits, which are circuits without feedback (in particular, no flip-flops are used or created) and are therefore stateless, and sequential logic circuits, which represent finite-state machines. We will discuss combinational logic circuits only, but note that sequential circuits also have combinational components.

Two-level logic optimization is a means of optimizing the implementation of circuits given in two-level tabular forms in terms of area and testability. A programmable logic array (PLA) is a standard example of such a circuit form. Two-level logic optimization is also a means of reducing the information needed to describe logic functions which are components of multiple-level representations of logic circuits. And lastly, it is a formal way of processing the representation of systems described by logic functions. Since any path from an input to an output has two gates in it, the delay depends on the size of the fanout stems (output capacity), which is related to the area of the circuit. So reducing the area of two-level logic circuits also reduces the delay.

Multiple-level logic optimization is used when a logic function is to be implemented in multiple-level logic. A reason for using more than two levels is the size of circuits in two-level implementations. When reducing the area using multiple-level logic, the paths from inputs to outputs can be longer¹ than in two-level logic (where each path has two gates in it). So the importance of delay minimization increases.

    The goals of logic optimization are:

    Minimization of delay times

    Minimization of the required area

    Testability (mostly irredundancy)

¹ The length of a path is correlated with the sum of gate delays along the path; the number of gates can be used as an approximation.


    Chapter 2

    Definitions

    2.1 Conventions

We assume the sum-of-products form is available (obtained using De Morgan's rules, preserving the number of literals and the number of terms, or by select transformations explained later).

We sometimes identify a cover F and the function f associated with F (i.e. if it makes a shorter description possible).

We write x' instead of x̄ for the inverse of x.

    We assume positive logic (0=false, 1=true).

    2.2 Basics

A completely specified boolean function f has an on set and an off set, which are the subsets of the domain (all possible input assignments) where the output takes the values 1 and 0, respectively.

An incompletely specified boolean function additionally has a dc set (don't care set), where the output can take any of the possible values.

Boolean functions can be either completely or incompletely specified. Since the completely specified ones can be seen as incompletely specified functions with an empty dc set, we will mostly look upon incompletely specified boolean functions [1].

The variables a function depends on are called the support of the function. If f(x1, . . . , xn) is a boolean function depending on n variables, the support of the function f is {x1, . . . , xn}.

Cofactors are widely used in logic optimization. If f(x1, . . . , xn) is a boolean function, the cofactor with respect to a positive literal xi (1 ≤ i ≤ n) is f_xi = f(x1, . . . , xi-1, 1, xi+1, . . . , xn) and the cofactor with respect to a negative literal xi' is f_xi' = f(x1, . . . , xi-1, 0, xi+1, . . . , xn). This definition will be extended to functions with multi-valued inputs in 3.2.2.

If f(x1, . . . , xn) is a boolean function, then it can be transformed into a sum of products (SOP) of n-literal terms, called the minterms of the function, by using Shannon's expansion recursively. This transformation is also called Boole's expansion and works as follows: f(x1, . . . , xn) = xi · f_xi + xi' · f_xi', i ∈ {1, . . . , n}.

The select transformation corresponds to Shannon's expansion with respect to the latest-arriving input [4], [5].

Each minterm corresponds to an input assignment which implies that the output of the corresponding function is 1 (true).
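The cofactor and expansion definitions above can be sketched in Python. This is an illustrative sketch only; the example function f and all names are made up, not taken from the text:

```python
from itertools import product

def cofactor(f, i, value):
    """Cofactor of f with respect to variable i fixed to value (0 or 1)."""
    return lambda *xs: f(*xs[:i], value, *xs[i + 1:])

def shannon(f, i, xs):
    """Boole/Shannon expansion: f = xi * f_xi + xi' * f_xi'."""
    return (xs[i] and cofactor(f, i, 1)(*xs)) or \
           (not xs[i] and cofactor(f, i, 0)(*xs))

# Hypothetical example function: f = a*b' + b*c
f = lambda a, b, c: (a and not b) or (b and c)

# The expansion agrees with f on every input assignment:
assert all(bool(f(*xs)) == bool(shannon(f, 0, xs))
           for xs in product((0, 1), repeat=3))
```

Expanding recursively on every variable in turn drives each cofactor down to a constant, which is exactly how the SOP of minterms arises.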

An implicant is either a minterm of a function f (alternatively of a cover F) or comes from combining minterms of f into a product term with fewer literals than the minterms have. In order to increase the size of an implicant, input assignments from the dc set can also be used like minterms.

    An implicant is called a prime implicant or prime if it is not contained in any otherimplicant.

A prime implicant is called an essential prime implicant if it contains an implicant that is not contained in any other prime implicant.

    A cover F of a function f is a set of implicants that satisfies the following constraint:

F_on ⊆ F ⊆ F_on ∪ F_dc. Some different covers of the same boolean function f(x, y, z), given as a sum of six minterms, are shown in figures 2.1, 2.2 and 2.3.

The size (also called cardinality) of a cover is the number of its implicants. Usually a function can be implemented by different covers. In particular, when dealing with incompletely specified boolean functions, we can use the dc set to reduce the size of a cover.

A cover F is called redundant if there exists an implicant in it that can be removed while maintaining the cover property. An example of a redundant cover is presented in figure 2.1. A cover is irredundant or minimal if it is not a proper superset of any other cover of the function. An irredundant cover (also called a minimal cover) is shown in figure 2.2.

A weaker minimality criterion is being minimal with respect to single implicant containment. A cover is minimal with respect to single implicant containment if no implicant of the cover contains any other implicant of the cover. The cover shown in figure 2.1 is redundant; however, it is minimal with respect to single implicant containment.
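The containment relation between implicants can be sketched with cubes encoded as Python dicts (variable → value; absent variables are don't cares). The encoding and examples are assumptions for illustration, not from the text:

```python
def contains(big, small):
    """Implicant 'big' contains 'small' iff every literal fixed in 'big'
    is fixed the same way in 'small' (small's cube lies inside big's)."""
    return all(small.get(v) == val for v, val in big.items())

def minimal_wrt_containment(cover):
    """Minimal with respect to single implicant containment:
    no implicant of the cover contains another one."""
    return not any(i != j and contains(a, b)
                   for i, a in enumerate(cover)
                   for j, b in enumerate(cover))

# x'y (i.e. {x:0, y:1}) is contained in x' (i.e. {x:0}):
assert contains({'x': 0}, {'x': 0, 'y': 1})
assert not minimal_wrt_containment([{'x': 0}, {'x': 0, 'y': 1}])
assert minimal_wrt_containment([{'x': 0}, {'x': 1, 'y': 1}])
```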


    Figure 2.1: Redundant cover

    Figure 2.2: Irredundant cover

A minimal cover is not inevitably a cover of minimum cardinality, since there may exist other covers consisting of fewer (and different) implicants. A minimum cover is a cover with minimum cardinality. An example of a minimum cover is given in figure 2.3.

Figure 2.3: Minimum cover

2.3 Representations of boolean functions

There are many different ways of representing boolean functions. Some of the most popular ones are briefly described here:

Tabular forms are representations like the truth table. The truth table is a complete list of all input assignments and the corresponding output values. Since all input assignments are listed (and the number of those is exponential in the number of inputs), the size of a truth table is exponential in the number of inputs and therefore it is only used for small functions [6]. Another tabular form is the implicant table. An implicant table of a single-output function consists of the implicants of the function as rows. Considering multiple-output functions, we have to add an additional column to the table which contains a row vector of the output values corresponding to each implicant.

Expression forms are used for better readability of certain equalities. Common logic expressions are the two-level and multiple-level forms of logic expressions. For example, f(a, b, c) = a·b + b·c is a two-level expression form.

Binary decision diagrams (BDDs) are graph-based representations based on trees or directed acyclic graphs (DAGs) with a root element. A BDD represents a set of binary-valued (BV) decisions, making an overall decision that can be either true or false. A BDD can have a defined order on the variables, in which case it is called an ordered binary decision diagram (OBDD). Since the costs of operations on OBDDs are (only) polynomial in the number of vertices, OBDDs are used more often in practical applications than any other BDD [1].


    Chapter 3

Two-Level Combinational Logic Optimization

Combinational logic optimization can be done either in an exact or in a heuristic manner. While algorithms for exact logic minimization yield a minimum cover (if you have enough resources for running them), algorithms for heuristic logic minimization search for minimal (irredundant) covers, but may also find a minimum cover.

Two-level logic minimization tries to reduce the size of a boolean function's representation. Since there are different styles of implementing a boolean function, the definition of the size varies slightly.

A PLA implementation, for example, has the primary target of reducing the number of terms and a secondary target of reducing the number of literals. Other implementation styles using complex gates might have the reduction of literals as the primary objective.

There are not only functions with one output, called single-output functions, but also functions with more than one output, called multiple-output functions. Multiple-output functions can be optimized using the same principles as for single-output functions, but it is a more complex task. If the scalar components of a function were optimized one at a time, the result might be suboptimal, since product term sharing is not exploited. Figure 3.1 shows an optimal and a suboptimal cover of the two-output function f, whose components f1(x, y, z) (a sum of five minterms) and f2(x, y, z) (a sum of two minterms) correspond to the first and second output.

A multiple-output minterm of a boolean function f : {0, 1}^n → {0, 1, *}^m consists of an input part and an output part, a pair of row vectors of dimensions n and m, respectively. The input part is defined as if it were a single-output minterm for one of the function's scalar outputs. The output part has exactly one 1-entry, corresponding to the subfunction for which the single-output minterm implies the value 1. (If an implicant implies more than one output to have the value 1, the according number of multiple-output minterms have to be used.)


    Figure 3.1: Covers of a two-output function

A multiple-output implicant is an extension of a multiple-output minterm, like single-output implicants are extensions of single-output minterms. Input and output parts still have dimensions n and m, respectively. However, don't cares are allowed in the input part (so this row vector can now contain *-entries) and the output part can have more than one 1-entry. The 1-entries in the output part imply that the output of the corresponding subfunction is either true or don't care.

An n-valued single-output function can also be interpreted as a binary-valued m-output function with m = ⌈log2(n)⌉ by binary encoding of the input values. The interpretation the other way round is straightforward, except that output value combinations that never occur can be omitted.

    3.1 Testability

With shrinking chip structures, the possibility of having defects increases, making testability one of the main targets. For many testing purposes, a stuck-at fault model is used to represent malfunctions. If a circuit has a stuck-at-0 fault, its output behaves as if a logic gate's input were permanently 0. The definition of stuck-at-1 faults is straightforward. A multiple stuck-at fault is related to more than one fault, while a single stuck-at fault corresponds to a single fault [1]. Despite the possibility of equipping circuits with various kinds of additional testing circuits, it is desirable to have an irredundant cover for basic testing (i.e. testing with respect to stuck-at fault models). If a set of input assignments exists which allows the detection of all possible faults, the circuit is called fully testable for a particular fault model (e.g. the single stuck-at fault model) [1].

If a (two-level) cover of a single-output function is prime and irredundant, the resulting circuit is fully testable for single and multiple stuck-at faults. A prime and irredundant cover of a multiple-output function yields a fully testable circuit if each scalar output is represented by a prime and irredundant cover or there are no shared product terms [1].


     α  β  γ  δ
abc
000  1  1  0  0
001  1  0  0  0
010  0  1  1  0
110  0  0  1  1
111  0  0  0  1

Figure 3.2: Prime implicant table

A non-minimum irredundant (= minimal) prime cover has the same testability properties as a minimum prime cover (if we do not implement additional pins or circuits for testing only) [1].

    3.2 Algorithms for Exact Logic Minimization

Exact logic minimization addresses the problem of computing a minimum cover. Because of the exponential cost of the exact algorithms, the use of exact logic minimization is limited to functions with at most about 15 variables [2]. The algorithms used are based on the work of Quine and McCluskey.

Quine discovered that there is a minimum cover that is prime, while McCluskey formulated the search for a minimum cover as a covering problem on a prime implicant table [1].

The prime implicant table, also called the covering matrix [2], is a binary-valued matrix. The rows and columns correspond to the minterms and the prime implicants, respectively.

Figure 3.2 shows the prime implicant table (with named rows and columns) for the function f = a'b'c' + a'b'c + a'bc' + abc' + abc. The prime implicants of f are α = a'b', β = a'c', γ = bc', δ = ab.

The covering matrix can be seen as the incidence matrix of a hypergraph, with vertices and edges corresponding to the minterms and prime implicants, respectively.

The covering matrix can grow very fast as the number of inputs increases, since a boolean function with n inputs and one output can have O(3^n / n) prime implicants and O(2^n) minterms [7].

As a result of applying an exponential algorithm¹ to a problem of exponential size (the covering matrix), exact minimization requires a huge amount (O(2^(3^n))) of time and memory. In order to reduce the size of the problem, the prime implicant table can be reduced. To obtain a reduced prime implicant table, the following rules are used [1], [7]:

    Double rows often occur in practical applications and can be reduced to one row.

¹ The minimum covering problem is NP-complete.


c1  c2
0   1
0   0
1   1
0   0
1   1

Figure 3.3: Dominated column c1 and dominant column c2

r1  1 1 0 1 1 0
r2  0 1 0 1 0 0

Figure 3.4: Dominant row r1 and dominated row r2

Essential columns correspond to the essential primes and must be part of any cover. A column is essential if it has a 1 in a row where no other column has a 1.

Dominated columns can be omitted. A column c1 is dominated by another column c2 if all entries of c1 are less than or equal to the corresponding entries of c2, as shown in figure 3.3.

Dominant rows can be omitted. A row r1 dominates another row r2 if all entries of r1 are greater than or equal to the corresponding entries of r2, as shown in figure 3.4. Note that this definition from [1] differs from the definition given in [7], where dominant rows and dominated rows (rows only!) are defined the other way round, and therefore dominated rows are removed.
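The column- and row-dominance checks can be sketched on a 0/1 covering matrix held as a list of rows. This is a hypothetical helper following the definitions from [1], not code from the text:

```python
def dominated_columns(M):
    """Columns c1 for which some other column c2 satisfies c1 <= c2 entrywise.
    (Equal columns dominate each other; a tie-break rule is needed so that
    only one of two identical columns is dropped.)"""
    cols = list(zip(*M))
    return [i for i, c1 in enumerate(cols)
            if any(i != j and all(a <= b for a, b in zip(c1, c2))
                   for j, c2 in enumerate(cols))]

def dominant_rows(M):
    """Rows r1 for which some other row r2 satisfies r1 >= r2 entrywise."""
    return [i for i, r1 in enumerate(M)
            if any(i != j and all(a >= b for a, b in zip(r1, r2))
                   for j, r2 in enumerate(M))]

# Column example in the spirit of figure 3.3 (c1 <= c2 everywhere):
assert dominated_columns([[0, 1], [0, 0], [1, 1], [0, 0], [1, 1]]) == [0]
# Row example in the spirit of figure 3.4 (r1 >= r2 everywhere):
assert dominant_rows([[1, 1, 0, 1, 1, 0], [0, 1, 0, 1, 0, 0]]) == [0]
```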

A reduced prime implicant table is shown in figure 3.5. If this table still has entries left, it is called cyclic and models the so-called cyclic core of the problem, which can be solved by branching. Otherwise a minimum cover has been generated by using only the essential primes. The table shown in figure 3.5 is not cyclic, hence no branching is required and we can select one of the primes β or γ to extend the set of essential primes α and δ to a cover.

McCluskey's original method uses branching to figure out which of the remaining implicants (columns) in the cyclic core have to be chosen in order to get the optimum solution.

Another algorithm used to determine minimum covers is called Petrick's method, shown in figure 3.6.

For each minterm (row), the implicants covering it are written as a sum, which has to evaluate to true, since at least one of the implicants covering a minterm has to be part of the function's cover. Then all of these sums are combined into a product of sums, which must also evaluate to true (since all minterms have to be covered by a


    Figure 3.5: Reduced prime implicant table

Figure 3.6: Petrick's method

cover). After this product of sums is transformed into a sum of products, where each product represents a possible selection of prime implicants, we can select the product term (clause) with the fewest literals, which represents a minimum cover.

At first glance Petrick's method may seem the clearly better approach, but a closer look shows that it is not, because transforming a product of sums into a sum of products is exponential in the number of operations.
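Petrick's method can be sketched as a brute-force expansion of the product of sums. The prime names are hypothetical, and the expansion is exponential, which is exactly the drawback noted above:

```python
from itertools import product

def petrick(clauses):
    """Each clause is the set of primes covering one minterm. Expand the
    product of sums into all possible prime selections and keep a smallest
    one; every selection hits all clauses, so the result is a minimum cover
    of the remaining minterms."""
    selections = {frozenset(choice) for choice in product(*clauses)}
    return min(selections, key=len)

# Three minterms, each coverable by two (made-up) primes:
best = petrick([{'a', 'b'}, {'b', 'c'}, {'a', 'c'}])
assert len(best) == 2   # no single prime covers all three minterms
```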

    3.2.1 Positional-cube notation of binary-valued functions

In order to increase the efficiency of operations on implicants, a binary encoding called positional-cube notation has been developed. When dealing with boolean functions, the symbols are encoded using 2-bit fields (representing the two values a variable can take) as shown in figure 3.7.

The positional-cube notation of a BV single-output function can be derived from the

Symbol          Binary encoding
∅ (empty)       00
0               10
1               01
* (don't care)  11

Figure 3.7: Symbol encoding in positional-cube notation


0 * * 0
0 1 * *
1 0 * *
1 * 1 1

Figure 3.8: Implicants in symbolic notation

Figure 3.9: Fast test for intersection of two implicants in positional-cube notation

function's implicants (as row vectors, one below the other), replacing the symbolic entries by the corresponding 2-bit binary encodings. The symbol ∅ can occur only in implicants which are results of manipulations on (other) implicants, and states that the resulting implicant is empty (and therefore can never evaluate to true, so it is void).

A very efficient operation based on the positional-cube notation is, for example, the intersection of two (or more) implicants. The implicants of the function f(a, b, c, d) = a'd' + a'b + ab' + acd are given in figure 3.8 as row vectors using the normal symbols.

In positional-cube notation, the intersection of two implicants can be determined bit by bit as the product of the implicants' entries. In our example, the intersection of the first two implicants (α ∩ β) is a'bd' and the intersection of the second and the third implicant (β ∩ γ) is empty (void); figure 3.9 presents this in positional-cube notation.
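The bit-by-bit intersection test can be sketched directly from the figure 3.7 encoding. The cube strings below are assumed examples written in 0/1/* symbols, not copied from the figures:

```python
def encode(cube):
    """Encode a symbolic cube such as '0**0' into positional-cube fields."""
    table = {'0': (1, 0), '1': (0, 1), '*': (1, 1)}
    return [table[s] for s in cube]

def intersect(c1, c2):
    """Field-by-field bitwise AND; an all-zero field marks a void result."""
    out = [(a0 & b0, a1 & b1) for (a0, a1), (b0, b1) in zip(c1, c2)]
    return None if (0, 0) in out else out

# a'd' ∩ a'b = a'bd'   (in symbols: 0**0 ∩ 01** = 01*0)
assert intersect(encode('0**0'), encode('01**')) == encode('01*0')
# a'b ∩ ab' is void: the fields for variable a AND to 00
assert intersect(encode('01**'), encode('10**')) is None
```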

This notation can be extended to multiple-output BV functions by appending an output (row) vector to each row (implicant). The output vectors have as many entries as the function has outputs and represent the outputs for which the implicant (row) implies the true value (as with multiple-output implicants). Figure 3.10 shows an example of a multiple-output function f with f1 = ab', f2 = ab' + a'b, f3 = ab in positional-cube notation.

01 10 110
10 01 010
01 01 001

    Figure 3.10: Positional-cube notation of a multiple-output boolean function


Figure 3.11: Positional-cube notation of a multi-valued input function (the green numbers show the input values)

    3.2.2 Positional-cube notation of multiple-valued functions

When x is a p-valued variable of a multiple-valued input (MVI) function, there are other literals than x and x': for each subset S ⊆ {0, . . . , p-1} (the values the variable x can take), the corresponding literal is x[S]. This means a literal x[S] will evaluate to true for more than one value which variable x can take, when the cardinality of S is greater than 1. In particular, x[{0, . . . , p-1}] is called the full literal, evaluating to true for any value of x and therefore representing a don't care condition on x; on the other hand, x[∅] is called the empty literal, evaluating to false for all values x can take.

The positional-cube notation can also be extended to MVI functions. When dealing with BV input functions, the left and right bits of the 2-bit symbol encoding correspond to the literal being true for the value 0 and 1, respectively. An n-valued variable is represented by an n-bit field, so again we use one bit for every possible value. For example, the function f(x, y) = x[{2}] y[{0,1}] + x[{0,1}] y[{1}] with a ternary-valued variable x (x ∈ {0, 1, 2}) and a BV variable y (y ∈ {0, 1}) can be simplified immediately to f(x, y) = x[{2}] + x[{0,1}] y[{1}], since the literal y[{0,1}] is full (representing a don't care) and therefore can be dropped. The positional-cube notation of the function f is given in figure 3.11.

Let xi be a p-valued variable in the support of f. The cofactor of an MVI function f(x1, . . . , xn) with respect to the literal xi[{k}] is f_xi[k] = f(x1, . . . , xi-1, k, xi+1, . . . , xn), i ∈ {1, . . . , n}, k ∈ {0, . . . , p-1}.

A BV function f is called positive unate in variable x if f_x' ⊆ f_x, and negative unate in variable x if f_x ⊆ f_x'. A function is positive unate if it is positive unate in all variables, negative unate if it is negative unate in all variables; otherwise it is binate [1].

The function f(a, b, c, d) = a + b' + c' + d is positive unate in variables a and d, negative unate in variables b and c, and therefore binate. The function g(b, c) = b' + c' is negative unate and h(a, d) = a + d is positive unate.
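The unateness of a function in one variable can be checked directly from the cofactor definition by a brute-force comparison over all assignments. This is an illustrative sketch; the helper name is made up:

```python
from itertools import product

def unateness(f, n, i):
    """Classify an n-input 0/1 function in variable i by comparing the
    cofactors f_xi' (variable = 0) and f_xi (variable = 1) pointwise."""
    pairs = [(f(*xs[:i], 0, *xs[i:]), f(*xs[:i], 1, *xs[i:]))
             for xs in product((0, 1), repeat=n - 1)]
    positive = all(lo <= hi for lo, hi in pairs)   # f_xi' <= f_xi
    negative = all(hi <= lo for lo, hi in pairs)   # f_xi  <= f_xi'
    return 'positive' if positive else 'negative' if negative else 'binate'

# f(a,b,c,d) = a + b' + c' + d from the text:
f = lambda a, b, c, d: int(a or not b or not c or d)
assert unateness(f, 4, 0) == 'positive'   # variable a
assert unateness(f, 4, 1) == 'negative'   # variable b
```

The check is exponential in n; it only illustrates the definition, not a practical test.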

When considering functions with multiple-valued inputs, there are two kinds of unateness, namely strong unateness and weak unateness. A function f is strongly unate in a p-valued variable x if an order ⪯² can be imposed on the values of x such that the following holds: ∀ i, j ∈ {0, . . . , p-1}, i ⪯ j : f_x[i] ⊆ f_x[j]. The cover of a function is strongly unate in a variable if an order of the bit-fields in the corresponding column of the cover's positional-cube notation exists in which no 0s are right of a 1. An example of such a reordering is shown in figures 3.12 and 3.13. If a cover F is unate, the corresponding function is also unate.

² This symbol is used to indicate that any relation which is antisymmetric, transitive and reflexive can be used as the order.

Figure 3.12: Positional-cube notation of a strongly unate function (in the original bit-order; the green numbers show the input values).

Figure 3.13: Positional-cube notation of a strongly unate function (in a bit-order leaving no 0s right of a 1; the green numbers show the input values).

Strong unateness is not used in two-level logic minimization (so far), since another kind of unateness, called weak unateness, is easier to compute, more likely to appear, and can also be used for accelerating operations which are based on tautology tests [3].

A function f is weakly unate in a p-valued variable x if there exists a value i of x such that changing x in an input assignment (where x has the value i) from i to any other value will either change the output from 0 to 1 or not change it at all. More formally: ∃ i ∈ {0, . . . , p-1} ∀ j ∈ {0, . . . , p-1} : f_x[i] ⊆ f_x[j]. If a cover is weakly unate in variable x, the bit-fields in the column corresponding to x, seen as a matrix, contain a column of 0s when omitting the full literals (represented by bit-fields having only 1s) (figure 3.14).

    Figure 3.14: Positional-cube notation of a weakly unate function
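The matrix view of weak unateness can be sketched as a check on one variable's column of bit-fields (one tuple per cube). The encoding is an assumption for illustration, not code from the text:

```python
def weakly_unate_field(fields):
    """fields: the bit-field of one MVI variable in every cube of the cover.
    After dropping full literals (all-1 fields), weak unateness shows up as
    a bit position that is 0 in every remaining field."""
    rows = [f for f in fields if not all(f)]
    if not rows:
        return True    # every literal is full: trivially weakly unate
    return any(all(row[k] == 0 for row in rows)
               for k in range(len(rows[0])))

# Position 1 is 0 in all non-full fields -> weakly unate:
assert weakly_unate_field([(1, 0, 1), (1, 0, 0), (1, 1, 1)])
# No shared 0 position -> not weakly unate in this variable:
assert not weakly_unate_field([(1, 0, 1), (0, 1, 0)])
```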


    3.2.3 The unate recursive paradigm

The unate recursive paradigm belongs to a class of algorithms following the recursive divide-and-conquer approach and (recursively) makes use of the fact that operations on sets of implicants can be done on a variable-by-variable (and value-by-value) basis. If x is a p-valued variable in the support of functions f and g, a binary operation on f and g can be executed by merging³ the results of the operation on the cofactors with respect to a chosen variable: f ⊙ g = Σ_{k=0}^{p-1} x[{k}] (f_x[k] ⊙ g_x[k]), where ⊙ is an arbitrary binary operator [1]. Note that this is an extension of Shannon's expansion. By recursive decomposition of a binate function, the result may yield unate cofactors, whose processing can be done in a very efficient way. It is possible to direct the expansion towards obtaining unate cofactors at early recursion levels by selecting the variables in a special order.

Some of this class's algorithms are briefly described in the following.

Tautology test: Since it is an important question in logic optimization whether a function is a tautology or not, a tautology test is needed. By negating the function, it can also be tested for being a contradiction.

As a direct consequence of Shannon's expansion, a BV function being a tautology is equivalent to both of its cofactors with respect to any (positive and negative) literal being tautologies. Similarly, as a direct consequence of extending Shannon's expansion to MVI functions, an MVI function being a tautology is equivalent to all of its cofactors with respect to any value of any variable being tautologies.

So a function can be tested for tautology by recursively expanding it along the values of all its variables until it is possible to decide whether a cofactor is a tautology or not. The earlier this decision can be made, the faster the algorithm is.

Since there are criteria for this decision which only apply to unate functions, it is desirable to obtain unate cofactors at the earliest possible stage of the recursion tree.

The rules which use unateness for terminating or simplifying the recursion are [1]:

A cover is not a tautology when it is weakly unate and there is no row of all 1s (in positional-cube notation).

    If the cover is weakly unate in some variable, the tautology test can bedone on the subset of rows not depending on the weakly unate variable.

³ Merging is done here by summing up the products of every result and the literal corresponding to the cofactor of the result.
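The recursive expansion behind the tautology test can be sketched for BV covers, with cubes again encoded as dicts (variable → 0/1, absent = don't care). This is a simplified illustration without the unate speed-ups described above:

```python
def is_tautology(cover):
    """Expand the cover recursively along some bound variable. A cube with
    no bound variables is the universal cube (covers everything); an empty
    cover covers nothing on the current branch."""
    if any(not cube for cube in cover):
        return True                      # universal cube present
    if not cover:
        return False                     # this branch is not covered
    var = next(v for cube in cover for v in cube)
    def cof(value):                      # cofactor of the cover wrt var=value
        return [{k: b for k, b in cube.items() if k != var}
                for cube in cover if cube.get(var, value) == value]
    return is_tautology(cof(0)) and is_tautology(cof(1))

assert is_tautology([{'x': 0}, {'x': 1}])                 # x' + x = 1
assert not is_tautology([{'x': 0}, {'x': 1, 'y': 1}])     # x' + xy
```

The containment test below reduces to exactly this routine applied to a cofactored cover.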


Containment test: A cover F containing an implicant α is equivalent to Fα⁴ being a tautology; hence the containment test can be done using our tautology test.

Complementation: The complementation of a cover is simpler to obtain with unate cofactors.

Computation of prime implicants: Since a cover of a strongly unate function contains all prime implicants, the primes can be obtained by finding a cover.

    3.3 Algorithms for Heuristic Logic Minimization

Heuristic algorithms for logic minimization were developed because of the need to reduce the size of two-level forms while having limited memory and time. Heuristic optimization can be seen as applying different operators to a cover which is the input to our algorithms. The different heuristic minimizers can be characterized by the operators they use and the order in which these are applied. When the size of the cover cannot be further reduced by applying the operators used, the algorithm terminates and outputs the modified cover. Some of the operators used in heuristic logic minimization are given below. All of them use heuristics [1].

    Expand: The expand operator tries to maximize the size of each implicant of a cover, so that other (smaller) implicants become covered and can be deleted.

    Reduce: The reduce operator decreases the size of each implicant of a cover, so that successive expansion may lead to a better result.

    Irredundant: The irredundant operator makes a cover irredundant by deleting a maximum number of redundant implicants. Recall that an irredundant cover is minimal.

    Essentials: The essentials operator is used to detect the essential primes of a cover (also for exact minimization) and can be implemented using a containment test (which in turn can be implemented using a tautology test).

    Reshape: The reshape operator modifies the cover while preserving its cardinality. To this end, the implicants are processed pairwise, expanding one and reducing the other so that, together with the other implicants, the set is still a cover of the function.

    4 Fα is another extension of the cofactor and can be seen as a cofactor with respect to an implicant (instead of a variable).
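    The overall control flow of such a heuristic minimizer (the iterate-until-no-improvement loop, in the style of Espresso-like tools) can be sketched as follows. Only the loop and the termination criterion are shown; the expand, irredundant and reduce operators are passed in as parameters and are assumed to be implemented elsewhere.

```python
# Top-level loop of a heuristic two-level minimizer (sketch).
# The operators are function parameters; real implementations of expand,
# irredundant and reduce are substantial algorithms in their own right.

def heuristic_minimize(cover, expand, irredundant, reduce_cover):
    """Apply expand/irredundant/reduce until the cover size stops shrinking."""
    best = irredundant(expand(cover))
    while True:
        # Reduce first so that the following expansion can move implicants
        # into more favourable positions, then discard redundant ones.
        new = irredundant(expand(reduce_cover(best)))
        if len(new) >= len(best):          # no further improvement: terminate
            return best
        best = new

# Trivial stand-in operators, just to exercise the control flow:
identity = lambda c: c
print(heuristic_minimize([{'x': 1}], identity, identity, identity))   # [{'x': 1}]
```

    Keeping the best cover seen so far guarantees termination, since the cover size is a non-negative integer that must strictly decrease for the loop to continue.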


    Chapter 4

    Multiple-level combinational logic optimization

    Combinational logic circuits are very often implemented as multiple-level logic networks, since additional degrees of freedom can be used to reduce area and delays. It is also possible to design a circuit in a way that special timing requirements are met for the logic paths. Although a few exact methods for optimizing multiple-level logic networks are available, they are normally not used because of their immensely high computational complexity [1]. Usually heuristic algorithms are used to optimize multiple-level logic circuits. The criteria are usually minimizing the area (respecting timing requirements) or reducing the delay (with respect to a maximum area).

    4.1 Transformations

    Combinational multiple-level circuits can be represented by directed acyclic graphs. The input variables are vertices having only outgoing edges. Each internal vertex corresponds to a boolean function of the (possibly negated) results of the vertices having an outgoing edge to the vertex considered. So an internal vertex can also be considered a variable. The outputs are represented by vertices having only one incoming edge, denoting the external visibility of a variable. Some of the transformations used in multiple-level combinational logic optimization are given below:

    Elimination: If we replace a variable by the corresponding expression in all its occurrences, the internal vertex representing the variable can be eliminated (removed). This is the simplest transformational algorithm.

    Decomposition: An internal vertex can be replaced by two (or more) vertices of a subnetwork, which together represent the same expression.

    Extraction: If two (or more) functions (represented by vertices) have a common subexpression, this subexpression can be extracted into a new vertex and the


    corresponding variable can be used within the other functions, replacing the common subexpression.

    Simplification: Simplification reduces the complexity of a function by using the properties of its representation.

    Substitution: If a subexpression of a function is already represented by a vertex, the variable corresponding to that vertex can be used for substituting the subexpression.
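    The elimination transformation can be illustrated on a toy network model. The encoding below (expressions as nested tuples like `('or', 'a', ('and', 'b', 'c'))`, a network as a dict mapping internal variable names to their expressions) and all names are illustrative assumptions, not from the original text.

```python
# Toy model of a boolean network for the elimination transformation.
# An expression is a variable name or a tuple (operator, operand, ...);
# a network maps each internal variable to its defining expression.

def eliminate(network, var):
    """Remove internal vertex `var` by substituting its expression everywhere."""
    expr = network[var]

    def subst(e):
        if e == var:
            return expr                    # replace the variable by its expression
        if isinstance(e, tuple):
            return (e[0],) + tuple(subst(arg) for arg in e[1:])
        return e                           # other variable names stay unchanged

    return {v: subst(e) for v, e in network.items() if v != var}

# t = a*b ;  y = t + c   -- after eliminating t -->   y = a*b + c
net = {'t': ('and', 'a', 'b'), 'y': ('or', 't', 'c')}
print(eliminate(net, 't'))   # {'y': ('or', ('and', 'a', 'b'), 'c')}
```

    Extraction and substitution are the inverse direction of the same bookkeeping: a shared subtree is given a name and replaced by that name in all functions that contain it.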

    4.2 Algebraic model

    Using the algebraic model, the local boolean functions are represented by algebraic expressions. Transformations in the algebraic model only use the rules of polynomial algebra, neglecting the special properties of boolean algebra. Thus De Morgan's rules, absorption, idempotence, the definition of complements, tautologies, contradictions and the don't-care conditions cannot be used. Also, only one distributive law applies [1]. For example, (x + y) · z = (x · z) + (y · z) holds, but xy + z ≠ (x + z) · (y + z), x · x' ≠ 0 and x + x' ≠ 1 in the algebraic model. The quality of the result fully depends on the function to be optimized. A given ALU, optimized using algebraic methods only, can be several times as large as the result of optimizing the same ALU using boolean methods [2].

    One of the most important methods in the algebraic model is algebraic division, used in transformations like substitution and decomposition.
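    Algebraic (weak) division can be sketched as follows. The encoding (a cube as a frozenset of literals, an expression as a set of cubes) and the function name are illustrative assumptions; the quotient is formed as the intersection, over all divisor cubes, of the candidate co-factors, which is the standard weak-division scheme.

```python
# Weak algebraic division: given f and divisor d, find quotient q and
# remainder r such that f = d*q + r under the rules of polynomial algebra.
# Cubes are frozensets of literals; an expression is a set of cubes.

def algebraic_divide(f, d):
    """Return (quotient, remainder) with f = d * quotient + remainder."""
    quotient = None
    for dc in d:
        # Cubes of f that are multiples of dc, with dc divided out.
        candidates = {fc - dc for fc in f if dc <= fc}
        quotient = candidates if quotient is None else quotient & candidates
    quotient = quotient or set()
    # Remainder: cubes of f not produced by d * quotient.
    used = {qc | dc for qc in quotient for dc in d}
    remainder = {fc for fc in f if fc not in used}
    return quotient, remainder

# (ac + ad + bc + bd + e) / (a + b)  =  (c + d), remainder e
f = {frozenset('ac'), frozenset('ad'), frozenset('bc'),
     frozenset('bd'), frozenset('e')}
d = {frozenset('a'), frozenset('b')}
q, r = algebraic_divide(f, d)
print(sorted(''.join(sorted(c)) for c in q))   # ['c', 'd']
print(sorted(''.join(sorted(c)) for c in r))   # ['e']
```

    Note that complemented literals would simply be treated as distinct symbols here, which is precisely the restriction of the algebraic model discussed above.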

    4.3 Boolean model

    In the boolean model, the full power of boolean algebra is exploited, so the transformations used have a greater computational complexity than those used in the algebraic model. Each variable represented by a vertex corresponds to a local boolean function which has a local dc set (that may be empty). When a local function is given as a sum of products, it can be optimized by means of two-level optimization, targeting a minimum number of literals (instead of a minimal number of terms) [1].

    Don't-care conditions are of great importance in multiple-level logic optimization, since they offer a certain degree of freedom for optimization. The dc set can be derived from the interconnections from and to our subnetwork's inputs and outputs. It consists of input assignments that can never occur, namely the input controllability don't-care set, and input assignments for which the output values are never observed, called the output observability don't-care set.
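    For small examples, the input controllability don't-care set can be computed by brute force: enumerate all primary-input assignments, record which patterns actually reach the subnetwork inputs, and complement. This exhaustive sketch (with illustrative names and lambda drivers) is only feasible for few inputs; practical tools derive these sets symbolically.

```python
from itertools import product

def controllability_dc(drivers, n_primary):
    """Input controllability don't-cares of a subnetwork.

    drivers: one function per subnetwork input, each computing that input
    from the primary inputs.  Returns the patterns that can never occur.
    """
    reachable = set()
    for assignment in product((0, 1), repeat=n_primary):
        reachable.add(tuple(f(*assignment) for f in drivers))
    all_patterns = set(product((0, 1), repeat=len(drivers)))
    return all_patterns - reachable        # never-occurring input patterns

# Subnetwork inputs u = a AND b, v = a OR b: (u, v) = (1, 0) can never occur.
dc = controllability_dc([lambda a, b: a & b, lambda a, b: a | b], 2)
print(dc)   # {(1, 0)}
```

    The pattern (u, v) = (1, 0) is impossible because u = 1 forces both a and b to 1, which forces v = 1; the optimizer is free to assign the subnetwork's output arbitrarily for that pattern.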


    4.4 Delay optimization

    Technology-independent delay optimization can be done using several techniques which modify the critical path1 of a circuit [4]:

    Select transformation (reduces the length of the critical path)

    Bypass transformation (makes the critical path false)

    KMS transformation (removes the critical path)

    Local transformations of the local path (resynthesis of parts of the circuit)

    Clustering (builds a few two-level clusters of circuit partitions)

    1 The critical path of a circuit is the longest sensitizable path in terms of delay.
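    The starting point of all of these techniques is knowing the critical path. A simple topological longest-path computation is sketched below; note that, per the footnote above, the true critical path must also be sensitizable, and this sketch ignores sensitization (i.e. it may report a false path). The network encoding, gate names and unit delays are illustrative assumptions.

```python
# Topological longest-path (arrival-time) computation over a gate network.
# network: gate -> list of fan-in gates; primary inputs have no entry.
# delay:   gate -> gate delay; primary inputs default to delay 0.

from functools import lru_cache

def critical_path_length(network, delay, outputs):
    """Latest arrival time over the given output gates (false paths ignored)."""
    @lru_cache(maxsize=None)
    def arrival(gate):
        fanin = network.get(gate, [])
        t_in = max(map(arrival, fanin)) if fanin else 0
        return delay.get(gate, 0) + t_in
    return max(arrival(o) for o in outputs)

# a, b are primary inputs; g1 = f(a, b); g2 = f(g1, b); output g2.
net = {'g1': ['a', 'b'], 'g2': ['g1', 'b']}
print(critical_path_length(net, {'g1': 1, 'g2': 1}, ['g2']))   # 2
```

    The KMS and bypass transformations then shorten or falsify exactly the paths this analysis reports as longest.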


    Chapter 5

    Conclusion

    Many smaller two-level circuits can be minimized exactly today by the use of smart algorithms and data structures which are capable of exploiting the problem's nature. The positional-cube notation can be used to implement data structures for product forms of literals such as implicants. It is also possible to implement data structures for implicants of functions with multiple-valued inputs and multiple binary-valued outputs. Since multiple-valued outputs can also be represented by a logarithmic number of binary-valued outputs, these data structures can be used for the optimization of any discrete function.

    Multiple-level logic implementations of circuits are normally smaller (than two-level logic implementations) in terms of area. This is due to the fact that common subexpressions can be shared. Because of the immensely high computational complexity of the few known exact algorithms for optimization, only heuristic methods are considered practical. They usually change the critical path directly, or heuristically select partitions of multiple-level combinational networks which are then subject to two-level optimization.


    Bibliography

    [1] Giovanni De Micheli, Synthesis and optimization of digital circuits (McGraw-Hill, 1994)

    [2] Soha Hassoun and Tsutomu Sasao, Logic Synthesis and Verification (Kluwer Academic Publishers, USA, 2002)

    [3] Robert K. Brayton, Logic Synthesis for Hardware Systems, Lecture Notes, Lecture 3, Berkeley, USA (1998)

    [4] H.-J. Wunderlich, Lecture Notes on Advanced Processor Architecture, Stuttgart, Germany (2002)

    [5] Avinoam Kolodny, CAD of VLSI Systems, Lecture Notes, Lecture 14, Technion City, Israel (2002)

    [6] Priyank Kalla, VLSI Logic Test, Validation and Verification, Lecture Notes,Lecture 6, Salt Lake City, USA (2002)

    [7] Andreas Kuehlmann, Logic Synthesis and Verification, Lecture Notes, Lecture 7, Berkeley, USA (2003)