Compilers Notes


  • 8/3/2019 Compilers Notes

    1/31

    Compilers

A compiler is a program that reads a program in one language, the source language, and translates it into an equivalent program in another language, the target language.

    The translation process should also report the presence of errors in the source program.

Source Program → [ Compiler ] → Target Program
                      ↓
                Error Messages

    There are two parts of compilation.

The analysis part breaks up the source program into its constituent pieces and creates an intermediate representation of the source program.

    The synthesis part constructs the desired target program from the intermediate

    representation.

    Phases of Compiler

    The compiler has a number of phases plus symbol table manager and an error handler.

Input: Source Program
   ↓
Lexical Analyzer
   ↓
Syntax Analyzer
   ↓
Semantic Analyzer
   ↓
Intermediate Code Generator
   ↓
Code Optimizer
   ↓
Code Generator
   ↓
Output: Target Program

The Symbol Table Manager and the Error Handler interact with all of the phases.

    The cousins of the compiler are

    1. Preprocessor.

    2. Assembler.

    3. Loader and Link-editor.

Front End vs Back End of a Compiler. The phases of a compiler are collected into a front end and a back end.

The front end includes all analysis phases and the intermediate code generator.

    The back end includes the code optimization phase and final code generation phase.

    The front end analyzes the source program and produces intermediate code while the

    back end synthesizes the target program from the intermediate code.


A naive approach (brute force) to the front end might run the phases serially.

1. The lexical analyzer takes the source program as input and produces a long string of tokens.

2. The syntax analyzer takes the output of the lexical analyzer and produces a large tree.

3. The semantic analyzer takes the output of the syntax analyzer and produces another tree.

4. Similarly, the intermediate code generator takes the tree produced by the semantic analyzer as input and produces intermediate code.

    Minus Points

It requires an enormous amount of space to store tokens and trees, and it is very slow, since each phase would have to read and write its input and output to and from temporary disk files.

    Remedy

Use syntax-directed translation to interleave the actions of the phases.

    Compiler construction tools.

Parser Generators:

The specification of input is based on a context-free grammar; these tools automatically produce syntax analyzers.

Scanner Generators:

The specification of input is based on regular expressions. The organization is based on finite automata.

Syntax-Directed Translation Engines:

They walk the parse tree and, as a result, generate intermediate code.

Automatic Code Generators:

They translate the intermediate language into machine language.

    Data-Flow Engines:

    It does code optimization using data-flow analysis.

    Syntax Definition

A context-free grammar (CFG; synonym: Backus-Naur Form, or BNF) is a common notation for specifying the syntax of a language.

For example, an "IF-ELSE" statement in the C language has the form

    IF (Expr) stmt ELSE stmt

    In other words, it is the concatenation of:

    the keyword IF ;

    an opening parenthesis ( ;


    an expression Expr ;

    a closing parenthesis ) ;

    a statement stmt ;

the keyword ELSE ;

    Finally, another statement stmt.

    The syntax of an 'IF-ELSE' statement can be specified by the following 'production rule'

    in the CFG.

stmt → IF (Expr) stmt ELSE stmt

The arrow (→) is read as "can have the form".

    A context-free grammar (CFG) has four components:

    1. A set of tokens called terminals.

2. A set of variables called nonterminals.

3. A set of production rules.

4. A designation of one of the nonterminals as the start symbol.

Multiple productions with the same nonterminal on the left, like:

list → list + digit
list → list - digit
list → digit

may be grouped together, separated by vertical bars, like:

list → list + digit | list - digit | digit
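These grouped productions map naturally onto a data structure. The following Python sketch (the representation and function names are my own, not from the notes) checks whether a token string can be derived from the grammar above:

```python
# A CFG as a dict: nonterminal -> list of alternatives (right-hand sides).
# Grammar from the notes: list -> list + digit | list - digit | digit
grammar = {
    "list": [["list", "+", "digit"], ["list", "-", "digit"], ["digit"]],
    "digit": [[d] for d in "0123456789"],
}

def derives(symbols, target, grammar):
    """Naive check: can the symbol string `symbols` derive the token string `target`?"""
    if not any(s in grammar for s in symbols):          # all terminals: compare directly
        return symbols == target
    if len(symbols) > len(target):                      # derivations here never shrink
        return False
    i = next(i for i, s in enumerate(symbols) if s in grammar)
    return any(
        derives(symbols[:i] + alt + symbols[i + 1:], target, grammar)
        for alt in grammar[symbols[i]]
    )

# 9 - 5 + 2 is derivable from the start symbol "list".
print(derives(["list"], list("9-5+2"), grammar))   # True
```

This brute-force expansion is only for illustration; real parsers, discussed later in these notes, find the derivation in linear time.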

    Ambiguity

A grammar is ambiguous if two or more different parse trees can derive the same token string. Equivalently, an ambiguous grammar allows two different derivations for a token string.

The grammar for a compiler should be unambiguous, since different parse trees would give a token string different meanings.

    Consider the following grammar


string → string + string
       | string - string
       | 0 | 1 | 2 | . . . | 9

To show that a grammar is ambiguous, all we need is to find a single string that has more than one parse tree.

    Figure:23 --- pg.31

The figure above shows two different parse trees for the token string 9 - 5 + 2 that correspond to two different ways of parenthesizing the expression:

(9 - 5) + 2 and 9 - (5 + 2).

The first parenthesization evaluates to 6, the second to 2.

    Perhaps, the most famous example of ambiguity in a programming language is the

    dangling 'ELSE'.

    Consider the grammar G with the production:

S → IF b THEN S ELSE S
  | IF b THEN S
  | a

    G is ambiguous since the sentence

    IF b THEN IF b THEN a ELSE a

    has two different parse trees or derivation trees.

    Parse tree I

    figure

    This parse tree imposes the interpretation

    IF b THEN (IF b THEN a ) ELSE a

    Parse Tree II

    Figure

    This parse tree imposes the interpretation


    IF b THEN (IF b THEN a ELSE a)

The reason that the grammar G is ambiguous is that an 'ELSE' can be associated with two different THENs. For this reason, programming languages which allow both IF-THEN-ELSE and IF-THEN constructs can be ambiguous.

    Associativity of Operators

If an operand has operators on both sides then, by convention, the operand is associated with the operator on the left.

In most programming languages, arithmetic operators like addition, subtraction, multiplication, and division are left associative.

    Token string: 9 - 5 + 2

    Production rules

list → list - digit | digit
digit → 0 | 1 | 2 | . . . | 9

    Parse tree for left-associative operator is

    figure 24 on pg. 31

In the C programming language the assignment operator, =, is right associative. That is, the token string a = b = c should be treated as a = (b = c).

Token string: a = b = c.

    Production rules:

right → letter = right | letter
letter → a | b | . . . | z

    Parse tree for right-associative operator is:

    Figure
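The right-recursive production can be turned directly into a tiny parser. The Python sketch below (the helper name and token handling are my own) parses a = b = c and prints the parenthesization that the grammar imposes:

```python
# right -> letter = right | letter   (right-recursive, so '=' is right associative)
def parse_right(tokens, pos=0):
    """Return (parenthesized string, next position) for the `right` nonterminal."""
    letter = tokens[pos]                           # letter -> a | b | ... | z
    if pos + 1 < len(tokens) and tokens[pos + 1] == "=":
        rest, nxt = parse_right(tokens, pos + 2)   # recurse on the right side
        return "(" + letter + "=" + rest + ")", nxt
    return letter, pos + 1

tree, _ = parse_right(list("a=b=c"))
print(tree)   # (a=(b=c)) : '=' groups to the right
```

Because the recursion happens on the right side of '=', the rightmost assignment is grouped first, exactly as C requires.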

    Precedence of Operators

An expression 9 + 5 * 2 has two possible interpretations:


(9 + 5) * 2 and 9 + (5 * 2)

The associativity of '+' and '*' does not resolve this ambiguity. For this reason, we need to know the relative precedence of operators.

The convention is to give multiplication and division higher precedence than addition and subtraction. Only when we have operators of equal precedence do we apply the rules of associativity.

So, in the example expression 9 + 5 * 2, we perform the operation of higher precedence, i.e., *, before operations of lower precedence, i.e., +. Therefore, the correct interpretation is 9 + (5 * 2).

    Separate Rule

    Consider the following grammar and language again.

S → IF b THEN S ELSE S
  | IF b THEN S
  | a

The ambiguity can be removed if we arbitrarily decide that an ELSE should be attached to the last preceding THEN, like:

    Figure

We can revise the grammar to have two nonterminals, S1 and S2. We insist that S2 generates IF-THEN-ELSE, while S1 is free to generate either kind of statement.

The rules of the new grammar are:

S1 → IF b THEN S1
   | IF b THEN S2 ELSE S1
   | a

S2 → IF b THEN S2 ELSE S2
   | a

Although there is no general algorithm that can be used to determine whether a given grammar is ambiguous, it is certainly possible to isolate rules which lead to an ambiguous grammar.

A grammar containing the productions


A → AA | α

is ambiguous because the string AAA has more than one parse tree.

    Figure

This ambiguity disappears if we use the productions

A → AB | B

or

A → BA | B

together with

B → α

Syntax of Expressions

A grammar of arithmetic expressions looks like:

expr → expr + term | expr - term | term
term → term * factor | term / factor | factor
factor → id | num | (expr)

That is, an expr is a string of terms separated by '+' and '-'.

A term is a string of factors separated by '*' and '/', and a factor is a single operand or an expression wrapped inside parentheses.
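Because term hangs below expr and factor below term, this grammar encodes precedence and left associativity by itself. A compact recursive-descent evaluator in Python makes this visible (a sketch: operands are single digits, and the left recursion is replaced by loops, as is usual in hand-written parsers):

```python
# Recursive-descent evaluator for:
#   expr   -> expr + term | expr - term | term
#   term   -> term * factor | term / factor | factor
#   factor -> num | ( expr )
def evaluate(s):
    pos = 0

    def expr():
        nonlocal pos
        value = term()
        while pos < len(s) and s[pos] in "+-":     # left-recursive rule as a loop
            op = s[pos]; pos += 1
            value = value + term() if op == "+" else value - term()
        return value

    def term():
        nonlocal pos
        value = factor()
        while pos < len(s) and s[pos] in "*/":
            op = s[pos]; pos += 1
            value = value * factor() if op == "*" else value / factor()
        return value

    def factor():
        nonlocal pos
        if s[pos] == "(":
            pos += 1                               # consume '('
            value = expr()
            pos += 1                               # consume ')'
            return value
        value = int(s[pos])                        # single-digit num
        pos += 1
        return value

    return expr()

print(evaluate("9+5*2"))   # 19: '*' binds tighter than '+'
print(evaluate("9-5+2"))   # 6:  equal precedence, left associative
```

Note that no precedence table is needed: the call chain expr → term → factor does the work.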

    Syntax-Directed Translation

Modern compilers use syntax-directed translation to interleave the actions of the compiler phases.

The syntax analyzer directs the whole process during the parsing of the source code. It:

calls the lexical analyzer whenever the syntax analyzer wants another token;

performs the actions of the semantic analyzer;

performs the actions of the intermediate code generator.

    The actions of the semantic analyzer and the intermediate code generator require the

    passage of information up and/or down the parse tree.


    We think of this information as attributes attached to the nodes of the parse tree and the

    parser moving this information between parent nodes and children nodes as it performs

    the productions of the grammar.

    Postfix Notation

Postfix notation, also called reverse Polish notation or RPN, places each binary arithmetic operator after its two operands instead of between them.

Infix Expression: (9 - 5) + 2
= (9 5 -) + 2
= (9 5 -) 2 +
= 9 5 - 2 +          : Postfix Notation

Infix Expression: 9 - (5 + 2)
= 9 - (5 2 +)
= 9 (5 2 +) -
= 9 5 2 + -          : Postfix Notation

Why postfix notation?

There are two reasons:

There is only one interpretation.

We do not need parentheses to disambiguate the grammar.
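One way to see that a postfix string needs no parentheses is to evaluate it with a stack. This Python sketch handles the single-digit operands and the '+' and '-' operators used in the notes:

```python
def eval_postfix(postfix):
    """Evaluate a postfix string of single-digit operands and +, - operators."""
    stack = []
    for ch in postfix:
        if ch.isdigit():
            stack.append(int(ch))
        else:                        # operator: pop two operands, push the result
            right = stack.pop()
            left = stack.pop()
            stack.append(left + right if ch == "+" else left - right)
    return stack.pop()

print(eval_postfix("95-2+"))   # (9 - 5) + 2 = 6
print(eval_postfix("952+-"))   # 9 - (5 + 2) = 2
```

Each string evaluates one way only: the order of the operators in the string already fixes the grouping.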

    Syntax-Directed Definitions

A syntax-directed definition uses a CFG to specify the syntactic structure of the input.

A syntax-directed definition associates a set of attributes with each grammar symbol, and a set of semantic rules with each production rule.

For example, let the grammar contain the production:

X → Y Z


Also let nodes X, Y, and Z have the associated attributes X.a, Y.a, and Z.a respectively.

    The annotated parse tree looks like:

    diagram

If the semantic rule {X.a := Y.a + Z.a} is associated with the production

X → Y Z

then the parser should add attribute 'a' of node Y and attribute 'a' of node Z together and set attribute 'a' of node X to their sum.

    Synthesized Attributes

    An attribute is synthesized if its value at a parent node can be determined from attributes

    of its children.

    diagram

Since in this example the value of node X can be determined from the 'a' attributes of the Y and Z nodes, attribute 'a' is a synthesized attribute. Synthesized attributes can be evaluated by a single bottom-up traversal of the parse tree.

    Example 2.6: Following figure shows the syntax-directed definition of an infix-to-

    postfix translator.

    Figure 2.5 Pg. 34

PRODUCTION              SEMANTIC RULE
expr → expr1 + term     expr.t := expr1.t || term.t || '+'
expr → expr1 - term     expr.t := expr1.t || term.t || '-'
expr → term             expr.t := term.t
term → 0                term.t := '0'
term → 1                term.t := '1'
  :                       :
term → 9                term.t := '9'

(Here || denotes string concatenation.)

    Parse tree corresponds to Productions

    Diagram


    Annotated parse tree corresponds to semantic rules.

    Diagram

The above annotated parse tree shows how the input infix expression 9 - 5 + 2 is translated to the postfix expression 9 5 - 2 + at the root.

    Depth-First Traversals

    A depth-first traversal of a parse tree is one way of evaluating attributes.

Note that a syntax-directed definition does not impose any particular evaluation order, as long as the order computes the attributes of a parent after all of its children's attributes.

PROCEDURE visit (n: node)
BEGIN
    FOR each child m of n, from left to right DO
        visit (m);
    Evaluate the semantic rules at node n
END

    Diagram
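The same traversal can be sketched in Python. The node representation here is my own assumption (each node carries its children and a rule that combines the children's attributes), not something from the notes:

```python
class Node:
    def __init__(self, semantic_rule, children=()):
        self.semantic_rule = semantic_rule    # combines the children's attributes
        self.children = list(children)
        self.attr = None

def visit(n):
    """Depth-first: evaluate every child before the parent's semantic rule."""
    for m in n.children:                      # left to right
        visit(m)
    n.attr = n.semantic_rule([m.attr for m in n.children])

# X -> Y Z with the rule X.a := Y.a + Z.a
y = Node(lambda _: 2)
z = Node(lambda _: 3)
x = Node(lambda kids: kids[0] + kids[1], [y, z])
visit(x)
print(x.attr)   # 5
```

Because every child is visited before its parent's rule runs, this single depth-first pass suffices for synthesized attributes.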

    Translation Schemes

    A translation scheme is another way of specifying a syntax-directed translation. This

    scheme is a CFG in which program fragments called semantic actions are embedded

    within the right sides of productions.

    For example,

    rest + term {primt ( ' + ' )} rest,indicates that a '+' sign should be printed between:

    depth-first traversal of the term node, and

    depth first traversal of the rest, node.

    Diagram

    Ex. 2.8

    REVISION: SYNTAX-DIRECTED TRANSLATION

Step 1: Syntax-directed definition for translating infix expressions to postfix form.

PRODUCTION              SEMANTIC RULE
expr → expr1 + term     expr.t := expr1.t || term.t || '+'
expr → expr1 - term     expr.t := expr1.t || term.t || '-'
expr → term             expr.t := term.t
term → 0                term.t := '0'
term → 1                term.t := '1'
  :                       :
term → 9                term.t := '9'

Step 2: A translation scheme derived from the syntax-directed definition is:

    Figure 2.15 on pg. 39

expr → expr + term {print(' + ')}
expr → expr - term {print(' - ')}
expr → term
term → 0 {print(' 0 ')}
term → 1 {print(' 1 ')}
  :
term → 9 {print(' 9 ')}

Step 3: A parse tree with actions translating 9 - 5 + 2 into 9 5 - 2 +

    Figure 2.14 on pg. 40

    Note that it is not necessary to actually construct the parse tree.
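The translation scheme of Step 2 maps almost line-for-line onto a recursive parser whose semantic actions emit output. A Python sketch (my own transcription: the left recursion in expr is replaced by a loop, and output is collected in a list instead of printed directly):

```python
# expr -> expr + term {print('+')} | expr - term {print('-')} | term
# term -> 0 {print('0')} | ... | 9 {print('9')}
def infix_to_postfix(s):
    out = []
    pos = 0

    def term():
        nonlocal pos
        out.append(s[pos])        # semantic action {print(digit)}
        pos += 1

    def expr():
        nonlocal pos
        term()
        while pos < len(s) and s[pos] in "+-":
            op = s[pos]; pos += 1
            term()
            out.append(op)        # semantic action {print(op)} AFTER the term

    expr()
    return "".join(out)

print(infix_to_postfix("9-5+2"))   # 95-2+
```

As the note says, no parse tree is ever built: the actions fire at the right moments during parsing, which is the point of interleaving the phases.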

    Parsing

Parsing is the process of finding a parse tree for a string of tokens. Equivalently, it is the process of determining whether a string of tokens can be generated by a grammar. The worst-case time of general parsing algorithms is O(n³), but the typical case is O(n) time.

For example, the production rules of grammar G are:

list → list + digit | list - digit | digit
digit → 0 | 1 | . . . | 9


The given token string is 9 - 5 + 2.

    Parse tree is:

    diagram

Each node in the parse tree is labeled by a grammar symbol:

an interior node corresponds to the left side of a production;

the children of an interior node correspond to the right side of the production.

The language defined by a grammar is the set of all token strings that can be derived from its start symbol.

    The language defined by the grammar:

list → list + digit | list - digit | digit
digit → 0 | 1 | 2 | . . . | 9

    contains all lists of digits separated by plus and minus signs.

The epsilon, ε, on the right side of a production denotes the empty string.

As we have mentioned above, parsing is the process of determining whether a string of tokens can be generated by a grammar. A parser must be capable of constructing the tree, or else the translation cannot be guaranteed correct. For any language that can be described by a CFG, parsing requires O(n³) time to parse a string of n tokens. However, most programming languages are so simple that a parser requires just O(n) time with a single left-to-right scan over the input string of n tokens.

    There are two types of Parsing

1. Top-down parsing (start from the start symbol and derive the string)

A top-down parser builds a parse tree by starting at the root and working down towards the leaves.

o Easy to generate by hand.

o Examples are: recursive-descent, predictive.

2. Bottom-up parsing (start from the string and reduce to the start symbol)

A bottom-up parser builds a parse tree by starting at the leaves and working up towards the root.

o Not easy to write by hand; usually compiler-generating software generates bottom-up parsers.

o But it handles a larger class of grammars.

o An example is the LR parser.


    Top-Down Parsing

Consider the CFG with productions:

expr → term rest
rest → + term rest | - term rest | ε
term → 0 | 1 | . . . | 9

and the input string 9 - 5 + 2.

Step 0: Initialization: the root must be the starting symbol.
Step 1: expr → term rest
Step 2: term → 9
Step 3: rest → - term rest
Step 4: term → 5
Step 5: rest → + term rest
Step 6: term → 2
Step 7: rest → ε

In the example above, the grammar made it easy for the top-down parser to pick the correct production in each step.

    This is not true in general, see example of dangling 'else'.

    Predictive Parsing

Recursive-descent parsing is a top-down method of syntax analysis that executes a set of recursive procedures to process the input. A procedure is associated with each nonterminal of the grammar.

Predictive parsing is a special form of recursive-descent parsing, in which the current input token unambiguously determines the production to be applied at each step.

    Let the grammar be:


expr → term rest
rest → + term rest | - term rest | ε
term → 0 | 1 | . . . | 9

In recursive-descent parsing, we write code for each nonterminal of the grammar. In the case of the above grammar, we should have three procedures, corresponding to the nonterminals expr, rest, and term.

    Since there is only one production for nonterminal expr, the procedure expr is:

expr ( ) {
    term ( );
    rest ( );
    return;
}

Since there are three productions for rest, the procedure rest uses a global variable, 'lookahead', to select the correct production, or simply selects "no action", i.e., the ε-production, when the lookahead variable is neither + nor -.

rest ( )
{
    IF (lookahead == '+') {
        match (' + ');
        term ( );
        rest ( );
        return;
    }
    ELSE IF (lookahead == '-') {
        match (' - ');
        term ( );
        rest ( );
        return;
    }
    ELSE {
        return;
    }
}

    The procedure term checks whether global variable lookahead is a digit.

term ( ) {
    IF (isdigit (lookahead)) {
        match (lookahead);
        return;
    }
    ELSE {
        ReportError ( );
    }
}

After loading the first input token into the variable 'lookahead', the predictive parser is started by calling the procedure for the starting symbol, 'expr'. If the input is error-free, the parser conducts a depth-first traversal of the parse tree and returns to the caller routine through expr.

    Problem with Predictive Parsing: left recursion

    Left Recursion

A production is left-recursive if the leftmost symbol on the right side is the same as the nonterminal on the left side. For example:

expr → expr + term

If one were to code this production in a recursive-descent parser, the parser would go into an infinite loop.

    diagram

We can eliminate the left recursion by introducing new nonterminals and new production rules.

For example, the left-recursive grammar is:

E → E + T | T
T → T * F | F
F → (E) | id

We can redefine E and T without left recursion as:

E → T E'
E' → + T E' | ε
T → F T'
T' → * F T' | ε
F → (E) | id


    Getting rid of such immediate left recursion is not enough. One must get rid of indirect

    left recursion too, where two or more nonterminals are mutually left-recursive.
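The transformation for immediate left recursion (A → Aα | β becomes A → βA', A' → αA' | ε) is mechanical enough to sketch in Python. The grammar representation is my own assumption: each nonterminal maps to a list of right-hand sides, with "ε" standing for the empty string:

```python
def eliminate_immediate_left_recursion(nt, productions):
    """Split A -> A alpha | beta into A -> beta A' and A' -> alpha A' | epsilon."""
    recursive = [rhs[1:] for rhs in productions if rhs[0] == nt]   # the alphas
    others = [rhs for rhs in productions if rhs[0] != nt]          # the betas
    if not recursive:
        return {nt: productions}          # nothing to do
    new_nt = nt + "'"
    return {
        nt: [beta + [new_nt] for beta in others],
        new_nt: [alpha + [new_nt] for alpha in recursive] + [["ε"]],
    }

# E -> E + T | T   becomes   E -> T E'   and   E' -> + T E' | ε
result = eliminate_immediate_left_recursion("E", [["E", "+", "T"], ["T"]])
print(result)
```

As the note warns, this only handles immediate left recursion; mutually left-recursive nonterminals need the more general ordering-based algorithm.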

    Lexical Analyzer

The main task of the lexical analyzer is to read a stream of characters as input and produce a sequence of tokens, such as names, keywords, punctuation marks, etc., for the syntax analyzer.

It discards the white space and comments between the tokens and also keeps track of line numbers.

    Tokens, Patterns, Lexemes

    Specification of Tokens

    o Regular Expressions

    o Notational Shorthand

    Finite Automata

    o Nondeterministic Finite Automata (NFA).

    o Deterministic Finite Automata (DFA).

    o Conversion of an NFA into a DFA.

    o From a Regular Expression to an NFA.

    Tokens, Patterns, Lexemes

    Token

A lexical token is a sequence of characters that can be treated as a unit in the grammar of a programming language.

    Example of tokens:

Type tokens (id, num, real, . . .)

Punctuation tokens (IF, void, return, . . .)

Alphabetic tokens (keywords)

    Example of non-tokens:

Comments, preprocessor directives, macros, blanks, tabs, newlines, . . .


    Patterns

There is a set of strings in the input for which the same token is produced as output. This set of strings is described by a rule called a pattern associated with the token.

Regular expressions are an important notation for specifying patterns.

For example, the pattern for the Pascal identifier token, id, is: id → letter (letter | digit)*.

    Lexeme

A lexeme is a sequence of characters in the source program that is matched by the pattern for a token.

For example, the pattern for the RELOP token contains six lexemes (=, <>, <, <=, >, >=), so the lexical analyzer should return a RELOP token to the parser whenever it sees any one of the six.

    3.3 Specification of Tokens

An alphabet or a character class is a finite set of symbols. Typical examples of symbols are letters and characters.

The set {0, 1} is the binary alphabet. ASCII and EBCDIC are two examples of computer alphabets.

    Strings

A string over some alphabet is a finite sequence of symbols taken from that alphabet.

For example, banana is a sequence of six symbols (i.e., a string of length six) taken from the ASCII computer alphabet. The empty string, denoted by ε, is a special string with zero symbols (i.e., its length is 0).

    If x and y are two strings, then the concatenation of x and y, written xy, is the string

    formed by appending y to x.

For example, if x = dog and y = house, then xy = doghouse. For the empty string ε, we have εS = Sε = S.

    String exponentiation concatenates a string with itself a given number of times:

S² = SS
S³ = SSS
S⁴ = SSSS, and so on.


By definition, S⁰ is the empty string, ε, and S¹ = S. For example, if x = ba and y = na, then xy² = banana.
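Python's string operators happen to mirror these definitions exactly, which makes for a quick check:

```python
x, y = "ba", "na"
print(x + y * 2)        # xy^2 = ba + na + na = banana
print("S" * 0 == "")    # S^0 is the empty string: True
```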

    Languages

    A language is a set of strings over some fixed alphabet. The language may contain a finite

    or an infinite number of strings.

Let L and M be two languages, where L = {dog, ba, na} and M = {house, ba}. Then:

Union: L ∪ M = {dog, ba, na, house}

Concatenation: LM = {doghouse, dogba, bahouse, baba, nahouse, naba}

Exponentiation: L² = LL

By definition: L⁰ = {ε} and L¹ = L

The Kleene closure of a language L, denoted by L*, is "zero or more concatenations of" L.

L* = L⁰ ∪ L¹ ∪ L² ∪ L³ ∪ . . . ∪ Lⁿ ∪ . . .

For example, if L = {a, b}, then

L* = {ε, a, b, aa, ab, ba, bb, aaa, aba, baa, . . . }

The positive closure of a language L, denoted by L⁺, is "one or more concatenations of" L.

L⁺ = L¹ ∪ L² ∪ L³ ∪ . . . ∪ Lⁿ ∪ . . .

For example, if L = {a, b}, then

L⁺ = {a, b, aa, ab, ba, bb, aaa, aba, . . . }
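These set operations can be spelled out with Python sets (a sketch with names of my own choosing; since closures are infinite, only a bounded prefix L⁰ ∪ . . . ∪ Lⁿ is computed):

```python
def concat(L, M):
    """Language concatenation LM = { xy : x in L, y in M }."""
    return {x + y for x in L for y in M}

def kleene_upto(L, n):
    """L0 U L1 U ... U Ln -- a finite prefix of the Kleene closure L*."""
    result, power = {""}, {""}          # L0 = { epsilon }
    for _ in range(n):
        power = concat(power, L)        # power is now L^(k+1)
        result |= power
    return result

L = {"dog", "ba", "na"}
M = {"house", "ba"}
print(sorted(L | M))                        # union
print(sorted(concat(L, M)))                 # LM: six strings, as in the notes
print(sorted(kleene_upto({"a", "b"}, 2)))   # epsilon, a, b, aa, ab, ba, bb
```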

    Regular Expressions

The regular expressions over an alphabet specify languages according to the following rules:

1. ε is a regular expression that denotes {ε}, that is, the set containing only the empty string.

2. If a is a symbol in the alphabet, then a is a regular expression that denotes {a}, that is, the set containing the string a.

3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then:

a. (r)|(s) is a regular expression denoting L(r) ∪ L(s).

b. (r)(s) is a regular expression denoting L(r)L(s).

c. (r)* is a regular expression denoting (L(r))*.


    d. (r) is a regular expression denoting L(r), that is, extra pairs of parentheses

    may be used around regular expressions.

Unnecessary parentheses can be avoided in regular expressions using the following conventions:

The unary operator * (Kleene closure) has the highest precedence and is left associative.

Concatenation has the second-highest precedence and is left associative.

Union has the lowest precedence and is left associative.

    Regular Definitions

    A regular definition gives names to certain regular expressions and uses those names in

    other regular expressions.

Here is a regular definition for the set of Pascal identifiers, defined as the set of strings of letters and digits beginning with a letter:

letter → A | B | . . . | Z | a | b | . . . | z
digit → 0 | 1 | 2 | . . . | 9
id → letter (letter | digit)*

The regular expression id is the pattern for the Pascal identifier token and is defined in terms of letter and digit, where letter is a regular expression for the set of all upper-case and lower-case letters in the alphabet and digit is the regular expression for the set of all decimal digits.

The pattern for the Pascal unsigned number token can be specified as follows:

digit → 0 | 1 | 2 | . . . | 9
digits → digit digit*
optional-fraction → . digits | ε
optional-exponent → (E (+ | - | ε) digits) | ε
num → digits optional-fraction optional-exponent

This regular definition says that:


An optional-fraction is either a decimal point followed by one or more digits, or it is missing (i.e., it is the empty string).

An optional-exponent is either the empty string or it is the letter E followed by an optional + or - sign, followed by one or more digits.

    Notational Shorthand

The unary postfix operator + means "one or more instances of":

(r)+ = rr*

The unary postfix operator ? means "zero or one instance of":

r? = (r | ε)

Using this shorthand notation, the Pascal unsigned number token can be written as:

digit → 0 | 1 | 2 | . . . | 9
digits → digit+
optional-fraction → (. digits)?
optional-exponent → (E (+ | -)? digits)?
num → digits optional-fraction optional-exponent
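The same definition can be checked against Python's re module. The syntax differs from the notes' notation ([0-9]+ plays the role of digits, ? marks the optional parts); this is a sketch of the num pattern only:

```python
import re

# num -> digits optional-fraction optional-exponent
num = re.compile(r"[0-9]+(\.[0-9]+)?(E[+-]?[0-9]+)?$")

for s in ["5280", "39.37", "6.336E4", "1.894E-4"]:
    print(s, bool(num.match(s)))        # each of these matches the pattern

print("E4", bool(num.match("E4")))      # no leading digits: not a num
```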

    Finite Automata

A recognizer for a language is a program that takes a string x as input and answers "yes" if x is a sentence of the language and "no" otherwise.

One can compile any regular expression into a recognizer by constructing a generalized transition diagram called a finite automaton.

A finite automaton can be deterministic or nondeterministic, where nondeterministic means that more than one transition out of a state may be possible on the same input symbol.

Both kinds of automata are capable of recognizing what regular expressions can denote.

    Nondeterministic Finite Automata (NFA)


A nondeterministic finite automaton is a mathematical model that consists of:

1. a set of states S;

2. a set of input symbols, Σ, called the input alphabet;

3. a transition function, move, that maps state-symbol pairs to sets of states;

4. a state s0 called the initial or start state;

5. a set of states F called the accepting or final states.

An NFA can be described by a transition graph (labeled graph), where the nodes are states and the edges show the transition function.

The label on each edge is either a symbol in the alphabet, Σ, or ε, denoting the empty string.

The following figure shows an NFA that recognizes the language (a|b)*abb.

    FIGURE 3.19 - pp 114

This automaton is nondeterministic because when it is in state 0 and the input symbol is a, it can either go to state 1 or stay in state 0.

The transition table is:

    FIGURE 115 pp. 115

The advantage of a transition table is that it provides fast access to the transitions of states; the disadvantage is that it can take up a lot of space.

The following diagram shows the moves made in accepting the input strings abb, aabb, and babb.

abb :

In general, more than one sequence of moves can lead to an accepting state; the input is accepted if at least one such sequence of moves ends up in a final state.

The language defined by an NFA is the set of input strings that the NFA accepts.


The following figure shows an NFA that recognizes aa* | bb*.

Note that ε's disappear in a concatenation.

    FIGURE 3.21 pp. 116

    The transition table is:

    Deterministic Finite Automata (DFA)

A deterministic finite automaton (DFA) is a special case of a nondeterministic finite automaton (NFA) in which:

1. no state has an ε-transition;

2. for each state s and input symbol a, there is at most one edge labeled a leaving s.

A DFA has at most one transition from each state on any input, which means that each entry in the transition table is a single state (as opposed to a set of states in an NFA).

Because of the single transition attached to each state, it is very easy to determine whether a DFA accepts a given input string.

    Algorithm for Simulating a DFA

INPUT:

a string x;

a DFA D with start state s0 and a set of accepting states F.

OUTPUT:

The answer 'yes' if D accepts x; 'no' otherwise.

The function move(s, c) gives the new state reached from state s on input character c. The function 'nextchar' returns the next character in the string.

Initialization:

s := s0; c := nextchar;


WHILE not end-of-file DO
    s := move (s, c);
    c := nextchar;
IF s is in F THEN
    return "yes"
ELSE
    return "no"

The following figure shows a DFA that recognizes the language (a|b)*abb.

    FIGURE

The transition table is:

state   a   b
  0     1   0
  1     1   2
  2     1   3
  3     1   0

With this DFA and the input string "ababb", the above algorithm follows the sequence of states:

    FIGURE
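The simulation algorithm and the transition table can be transcribed into Python almost directly (a sketch; the dict-of-pairs representation of the table is my own choice, and state 3 is the only accepting state):

```python
# Transition table for the DFA recognizing (a|b)*abb.
dtable = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 1, (1, "b"): 2,
    (2, "a"): 1, (2, "b"): 3,
    (3, "a"): 1, (3, "b"): 0,
}

def dfa_accepts(x, start=0, accepting={3}):
    """Simulate the DFA: one table lookup per input character."""
    s = start
    for c in x:
        s = dtable[(s, c)]
    return s in accepting

print(dfa_accepts("ababb"))   # True: the run ends in state 3
print(dfa_accepts("abab"))    # False
```

One lookup per character gives the O(n) behavior that makes DFAs attractive for lexical analysis.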

    Conversion of an NFA into a DFA

It is hard for a computer program to simulate an NFA because the transition function is multivalued. Fortunately, an algorithm, called the subset construction, will convert an NFA for any language into a DFA that recognizes the same language. Note that this algorithm is closely related to an algorithm for constructing LR parsers.

In the transition table of an NFA, each entry is a set of states; in the transition table of a DFA, each entry is a single state.

The general idea behind the NFA-to-DFA construction is that each DFA state corresponds to a set of NFA states.


For example, let T be the set of all states that an NFA could reach after reading input a1, a2, . . . , an - then the state that the DFA reaches after reading a1, a2, . . . , an corresponds to the set T.

Theoretically, the number of states of the DFA can be exponential in the number of states of the NFA, i.e., O(2ⁿ), but in practice this worst case rarely occurs.

Algorithm: Subset construction.

INPUT: An NFA N.

OUTPUT: A DFA D accepting the same language.

METHOD: Construct a transition table DTrans. Each DFA state is a set of NFA states. DTrans simulates in parallel all possible moves N can make on a given string.

Operations to keep track of sets of NFA states:

ε-Closure(s): the set of states reachable from state s via ε-transitions.

ε-Closure(T): the set of states reachable from any state in set T via ε-transitions.

move(T, a): the set of states to which there is an NFA transition from states in T on symbol a.

    Algorithm:

    initially, ε-Closure (s0) is the only state in DTrans, and it is unmarked;
    while there is an unmarked state T in DTrans do
        mark T;
        for each input symbol 'a' do
            U := ε-Closure (move (T, a));
            if U is not in DTrans then
                add U as an unmarked state to DTrans;
            DTrans [T, a] := U

    Following algorithm shows a computation of the ε-Closure function:

    push all states in T onto stack;
    initialize ε-Closure (T) to T;
    while stack is not empty do
        pop top element t;
        for each state u with an ε-edge from t to u do
            if u is not in ε-Closure (T) then
                add u to ε-Closure (T);
                push u onto stack
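The two algorithms above fit naturally together; the following is a minimal Python sketch. The NFA encoding (a dict keyed by (state, symbol), with None standing for an ε-edge) and all names are illustrative assumptions, and the sample NFA is an ε-free one for (a|b)*abb with state 3 accepting.

```python
def eps_closure(T, trans):
    """States reachable from any state in T via epsilon-edges alone."""
    closure, stack = set(T), list(T)
    while stack:
        t = stack.pop()
        for u in trans.get((t, None), ()):   # None marks an epsilon-edge
            if u not in closure:
                closure.add(u)
                stack.append(u)
    return frozenset(closure)

def subset_construction(trans, start, symbols):
    """Build the DFA table; each DFA state is a frozenset of NFA states."""
    d0 = eps_closure({start}, trans)
    dstates, unmarked, dtrans = {d0}, [d0], {}
    while unmarked:                          # while there is an unmarked state T
        T = unmarked.pop()                   # mark T
        for a in symbols:
            moved = set()
            for s in T:                      # move(T, a)
                moved |= trans.get((s, a), set())
            U = eps_closure(moved, trans)
            dtrans[(T, a)] = U
            if U not in dstates:             # add U as a new unmarked state
                dstates.add(U)
                unmarked.append(U)
    return d0, dtrans

# An epsilon-free NFA for (a|b)*abb, state 3 accepting (illustrative):
NFA = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}, (2, 'b'): {3}}
start, dtrans = subset_construction(NFA, 0, 'ab')

def accepts(s):
    T = start
    for c in s:
        T = dtrans[(T, c)]
    return 3 in T

print(accepts("ababb"))   # → True
```

On this NFA the construction yields the four DFA states {0}, {0,1}, {0,2}, {0,3}, matching the four-state DFA shown earlier.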


    Following example illustrates the method by constructing a DFA for the NFA.

    From a Regular Expression to an NFA

    Thompson's construction builds an NFA from a regular expression.

    The Thompson's construction is guided by the syntax of the regular expression with cases

    following the cases in the definition of regular expression.

    1. ε is a regular expression that denotes {ε}, the set containing just the empty string.

    diagram

    where i is a new start state and f is a new accepting state. This NFA recognizes {ε}.

    2. If a is a symbol in the alphabet Σ, then the regular expression 'a' denotes {a},
    the set containing just the symbol 'a'.

    diagram

    This NFA recognizes {a}.

    3. Suppose s and t are regular expressions denoting L(s) and L(t) respectively; then

    a. s|t is a regular expression denoting L(s) ∪ L(t)

    diagram

    b. st is a regular expression denoting L(s)L(t)

    diagram

    c. s* is a regular expression denoting L(s)*

    diagram

    d. (s) is a regular expression denoting L(s), and can be used for putting
    parentheses around a regular expression

    Example: Use above algorithm, Thompson's construction, to construct NFA for the

    regular expression r = (a|b)* abb.

    First construct the parse tree for r = (a|b)* abb.

    figure

    For r1 - use case 2.

    figure

    For r2 - use case 2.

    figure

    For r3 - use case 3a


    figure

    For r5 - use case 3c

    figure

    We have r5 = (a|b)*

    For r6 - use case 2

    figure

    and for r7 - use case 3b

    figure

    We get r7 = (a|b)*a

    Similarly for r8 and r10 - use case 2

    figure

    figure

    And get r11 by case 3b

    figure

    We have r = (a|b)*abb.
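The bottom-up construction of r1..r11 above can be sketched in Python. This is a minimal illustration hard-wired to (a|b)*abb: the NFA encoding (start, accept, transition dict, with EPS = None marking ε-edges) and all function names are assumptions for the sketch, not from the text.

```python
EPS = None
_counter = [0]

def new_state():
    _counter[0] += 1
    return _counter[0]

def symbol(a):
    """Case 2: NFA recognizing {a}."""
    i, f = new_state(), new_state()
    return (i, f, {(i, a): {f}})

def union(n1, n2):
    """Case 3a (s|t): new start/accept joined to both parts by epsilon-edges."""
    i, f = new_state(), new_state()
    (s1, f1, t1), (s2, f2, t2) = n1, n2
    trans = {**t1, **t2,
             (i, EPS): {s1, s2}, (f1, EPS): {f}, (f2, EPS): {f}}
    return (i, f, trans)

def concat(n1, n2):
    """Case 3b (st): epsilon-edge from the accept of s to the start of t."""
    (s1, f1, t1), (s2, f2, t2) = n1, n2
    return (s1, f2, {**t1, **t2, (f1, EPS): {s2}})

def star(n):
    """Case 3c (s*): epsilon-edges allow skipping s or looping back."""
    i, f = new_state(), new_state()
    s1, f1, t1 = n
    return (i, f, {**t1, (i, EPS): {s1, f}, (f1, EPS): {s1, f}})

r5 = star(union(symbol('a'), symbol('b')))                 # (a|b)*
r = concat(concat(concat(r5, symbol('a')), symbol('b')), symbol('b'))

def accepts(nfa, inp):
    """Direct NFA simulation via sets of states and epsilon-closure."""
    start, accept, trans = nfa
    def closure(T):
        T, stack = set(T), list(T)
        while stack:
            for u in trans.get((stack.pop(), EPS), ()):
                if u not in T:
                    T.add(u)
                    stack.append(u)
        return T
    T = closure({start})
    for c in inp:
        moved = set()
        for s in T:
            moved |= trans.get((s, c), set())
        T = closure(moved)
    return accept in T

print(accepts(r, "ababb"))   # → True
```

Each case allocates fresh states, so merging the transition dicts never collides; accept states never acquire outgoing edges until a later concat links them forward.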

    Code Generation

    Introduction

    Phases of typical compiler and position of code generation.


    Since code generation is an undecidable problem (mathematically speaking), we must be
    content with heuristic techniques that generate "good" code (not necessarily optimal code).

    Code generation must do the following things:

    1. Produce correct code.
    2. Make use of the machine architecture.
    3. Run efficiently.

    Issues in the Design of Code generator

    The code generator is concerned with:

    1. Memory management.
    2. Instruction selection.
    3. Register utilization (allocation).
    4. Evaluation order.

    1. Memory Management

    Mapping names in the source program to addresses of data objects is done cooperatively
    by pass 1 (the front end) and pass 2 (the code generator).

    Names in quadruples are mapped to addresses in the target instructions.

    Local variables (local to functions or procedures) are stack-allocated in the activation
    record, while global variables are kept in a static area.

    2. Instruction Selection

    The nature of instruction set of the target machine determines selection.

    Instruction selection is "easy" if the instruction set is regular, that is, uniform and complete.

    Uniform: every operation uses the same address form (for example, all three-address
    instructions, or all single-address stack instructions).

    Complete: any register can be used in any operation.

    If we don't care about the efficiency of the target program, instruction selection is
    straightforward.

    For example, the three-address code is:

    a := b + c

    d := a + e

    Inefficient assembly code is:


    1. MOV b, R0      ; R0 := b
    2. ADD c, R0      ; R0 := c + R0
    3. MOV R0, a      ; a := R0
    4. MOV a, R0      ; R0 := a
    5. ADD e, R0      ; R0 := e + R0
    6. MOV R0, d      ; d := R0

    Here the fourth statement is redundant, and so is the third statement if 'a' is not
    subsequently used.
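The redundant-reload case can be caught by a one-instruction peephole pass, sketched below. The tuple encoding (op, source, destination) and the function name are illustrative choices, not from the text; a real pass would also track liveness to remove the store.

```python
def drop_redundant_loads(code):
    """Drop a load that immediately follows a store of the same
    register to the same location, e.g. MOV R0, a ; MOV a, R0."""
    out = []
    for instr in code:
        if out:
            op, src, dst = instr
            pop_, psrc, pdst = out[-1]
            if op == "MOV" and pop_ == "MOV" and (src, dst) == (pdst, psrc):
                continue   # the reload is redundant: skip it
        out.append(instr)
    return out

code = [
    ("MOV", "b", "R0"), ("ADD", "c", "R0"), ("MOV", "R0", "a"),
    ("MOV", "a", "R0"), ("ADD", "e", "R0"), ("MOV", "R0", "d"),
]
print(len(drop_redundant_loads(code)))   # fourth MOV removed → 5
```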

    3. Register Allocation

    Registers can be accessed faster than memory words. Frequently accessed variables should
    reside in registers (register allocation). Register assignment is picking a specific register
    for each such variable.

    Formally, there are two steps in register allocation:

    1. Register allocation (which variables?)

    This is a selection process in which we choose the set of variables that
    will reside in registers.

    2. Register assignment (which register?)

    Here we pick the specific register that will contain each such variable. Note that this is
    an NP-complete problem.

    Some of the issues that complicate the register allocation problem:

    1. Special uses of hardware: for example, some instructions require specific registers.

    2. Software conventions: for example,

    Register R6 (say) always holds the return address.

    Register R5 (say) serves as the stack pointer.

    Similarly, registers are assigned for branch and link, frames, heaps, etc.

    3. Choice of Evaluation order

    Changing the order of evaluation may produce more efficient code.

    This is an NP-complete problem, but we can bypass this hindrance by generating code for
    quadruples in the order in which they have been produced by the intermediate code
    generator.

    ADD X, Y, T1
    ADD a, b, T2

    Reordering here is legal because X, Y and a, b are different (not dependent).


    The Target Machine

    Familiarity with the target machine and its instruction set is a prerequisite for designing a

    good code generator.

    Typical Architecture

    Target machine is:

    1. Byte-addressable (factor of 4).
    2. 4 bytes per word.
    3. 16 to 32 (or n) general-purpose registers.
    4. Two-address instructions of the form:

    op source, destination

    e.g., MOV A, B
          ADD A, D

    An alternative architecture:

    1. Bit-addressable (factor of 1).
    2. Word-sized general-purpose registers.
    3. Three-address instructions of the form:

    op source1, source2, destination

    e.g., ADD A, B, C

    We assume a byte-addressable memory with 4 bytes per word and n general-purpose registers,
    R0, R1, . . . , Rn-1. Each integer requires 2 bytes (16 bits).

    Two-address instructions have the form

    mnemonic source, destination

    MODE                FORM    ADDRESS                     EXAMPLE             ADDED COST

    Absolute            M       M                           ADD R0, R1          1
    Register            R       R                           ADD temp, R10       0
    Indexed             c(R)    c + contents(R)             ADD 100(R2), R11    1
    Indirect register   *R      contents(R)                 ADD *R2, R10        0
    Indirect indexed    *c(R)   contents(c + contents(R))   ADD *100(R2), R11   1
    Literal             #c      constant c                  ADD #3, R11         1

    Instruction costs:

    Each instruction has a cost of 1 plus added costs for the source and destination.

    => cost of instruction = 1 + costs associated with the source and destination addressing
    modes.

    This cost corresponds to the length (in words) of the instruction.

    Examples

    1. Move register to memory (R0 → M):

    MOV R0, M          cost = 1 + 1 = 2

    2. Indirect indexed mode:

    MOV *4(R0), M      cost = 1 (instruction word) + 1 (indirect indexed) + 1 (memory) = 3

    3. Indexed mode:

    MOV 4(R0), M       cost = 1 + 1 + 1 = 3

    4. Literal mode:

    MOV #1, R0         cost = 1 + 1 = 2

    5. Move memory to memory:

    MOV M, M           cost = 1 + 1 + 1 = 3
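The cost model above is mechanical enough to sketch in code. The mode classifier below is an illustrative helper (its operand-syntax assumptions follow the table: `#c`, `*R`, `c(R)`, `*c(R)`, register names `R<n>`, everything else absolute), not part of the text.

```python
# Added cost per addressing mode, from the table above.
ADDED_COST = {
    "absolute": 1, "register": 0, "indexed": 1,
    "indirect_register": 0, "indirect_indexed": 1, "literal": 1,
}

def mode_of(operand):
    """Classify an operand string into an addressing mode (illustrative)."""
    if operand.startswith("#"):
        return "literal"
    if operand.startswith("*"):
        return "indirect_indexed" if "(" in operand else "indirect_register"
    if "(" in operand:
        return "indexed"
    if operand.startswith("R") and operand[1:].isdigit():
        return "register"
    return "absolute"

def instruction_cost(instr):
    """cost = 1 + added costs of the source and destination modes."""
    _, operands = instr.split(None, 1)
    return 1 + sum(ADDED_COST[mode_of(op.strip())]
                   for op in operands.split(","))

print(instruction_cost("MOV R0, M"))       # example 1 → 2
print(instruction_cost("MOV *4(R0), M"))   # example 2 → 3
print(instruction_cost("MOV #1, R0"))      # example 4 → 2
```

The function reproduces each of the five worked examples above.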