
UNIT – I

INTRODUCTION

System Software consists of a variety of programs that support the operation of a computer. It makes it possible for the user to focus on an application or other problem to be solved, without needing to know the details of how the machine works internally. You probably wrote programs in a high-level language such as C, C++, or VC++, using a text editor to create and modify the program. You translated these programs into machine language using a compiler. The resulting machine language program was loaded into memory and prepared for execution by a linker and loader. You also used a debugger to find errors in your programs.

Later, you probably wrote programs in assembler language, using macro instructions to read and write data. You used an assembler, which included a macro processor, to translate these programs into machine language.

You controlled all of these processes by interacting with the computer's operating system. The operating system took care of the machine-level details for you, so you could concentrate on what you wanted to do without worrying about how it was accomplished.

Here you will come to understand the processes that were going on “behind the scenes” as you used the computer in previous courses. By studying system software, you will gain a deeper understanding of how computers actually work.

SYSTEM SOFTWARE AND MACHINE ARCHITECTURE

            An application program is primarily concerned with the solution of some problem, using the computer as a tool. The focus is on the application, not on the computing system. System programs, on the other hand, are intended to support the operation and use of the computer itself, rather than any particular application. For this reason, they are usually related to the architecture of the machine on which they are to run.

For example:

- Assemblers translate mnemonic instructions into machine code; the instruction formats, addressing modes, etc., are of direct concern in assembler design.

- Compilers generate machine code, taking into account such hardware characteristics as the number and type of registers and the machine instructions available.


- Operating systems are concerned with the management of nearly all of the resources of a computing system.

 

Some system software is machine independent; for example, the process of linking together independently assembled subprograms does not usually depend on the computer being used. Other system software is machine dependent, so we must include real machines and real pieces of software in our study.

 

However, most real computers have certain characteristics that are unusual or even unique, and it can be difficult to separate the fundamental features of the software from details tied to one machine. To avoid this problem, we present the fundamental functions of each piece of software through discussion of a Simplified Instructional Computer (SIC). SIC is a hypothetical computer that has been carefully designed to include the hardware features most often found on real machines, while avoiding unusual or irrelevant complexities.

THE SIMPLIFIED INSTRUCTIONAL COMPUTER (SIC)

 

SIC comes in two versions:

- SIC (standard model)
- SIC/XE (“extra equipment”)

The two versions have been designed to be upward compatible; i.e., an object program for the standard SIC machine will also execute properly on a SIC/XE system.

 

SIC MACHINE ARCHITECTURE

 Memory

Memory consists of 8-bit bytes; any three consecutive bytes form a word (24 bits). All addresses on SIC are byte addresses; words are addressed by the location of their lowest-numbered byte. There are a total of 32,768 (2^15) bytes in the computer memory.
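As a quick illustration of byte-addressed 24-bit words, the layout above can be modeled in a few lines of Python. This is an illustrative sketch, not part of SIC itself; the function names are invented.

```python
# Model SIC memory as a byte array; a word is three consecutive bytes,
# addressed by its lowest-numbered byte (big-endian within the word).

MEM_SIZE = 32768              # 2**15 bytes of memory
memory = bytearray(MEM_SIZE)

def store_word(addr, value):
    """Store a 24-bit word starting at its lowest-numbered byte address."""
    memory[addr]     = (value >> 16) & 0xFF
    memory[addr + 1] = (value >> 8) & 0xFF
    memory[addr + 2] = value & 0xFF

def load_word(addr):
    """Fetch the 24-bit word whose lowest-numbered byte is at addr."""
    return (memory[addr] << 16) | (memory[addr + 1] << 8) | memory[addr + 2]

store_word(0x100, 0x010005)   # the three bytes 01 00 05
assert load_word(0x100) == 0x010005
```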

 

Registers

 There are five registers, all of which have special uses. Each register is 24 bits in length.

 


Mnemonic    Number    Special Use

A           0         Accumulator; used for arithmetic operations
X           1         Index register; used for addressing
L           2         Linkage register; the jump-to-subroutine instruction stores the return address in this register
PC          8         Program counter; contains the address of the next instruction to be fetched for execution
SW          9         Status word; contains a variety of information, including a Condition Code

ASSEMBLERS

1. Introduction

There are two main classes of programming languages: high level (e.g., C, Pascal) and low level. Assembly language is a low-level programming language. Programmers code symbolic instructions, each of which typically generates one machine instruction.


An assembler is a program that accepts as input an assembly language program (source) and produces its machine language equivalent (object code) along with the information for the loader.

    assembly language program → Assembler → object code → Linker → EXE

Figure 1. Executable program generation from an assembly source code

Advantages of coding in assembly language:

- Provides more control over handling particular hardware components
- May generate smaller, more compact executable modules
- Often results in faster execution

Disadvantages:

- Not portable
- More complex
- Requires understanding of hardware details (interfaces)

Assembler:

An assembler does the following:


1. Generate machine instructions

- evaluate the mnemonics to produce their machine code
- evaluate the symbols, literals, and addresses to produce their equivalent machine addresses
- convert the data constants into their machine representations

2. Process pseudo operations

2. Two Pass Assembler

A two-pass assembler performs two sequential scans over the source code:

Pass 1: symbols and literals are defined

Pass 2: object program is generated

Parsing: scanning program lines to extract op-codes and operands

Data Structures:

- Location counter (LC): points to the next location where the code will be placed

- Op-code translation table: contains symbolic instructions, their lengths and their op-codes (or subroutine to use for translation)

- Symbol table (ST): contains labels and their values

- String storage buffer (SSB): contains ASCII characters for the strings

- Forward references table (FRT): contains pointer to the string in SSB and offset where its value will be inserted in the object code


    assembly language program → Pass 1 → (symbol table, forward references table,
    string storage buffer, partially configured object file) → Pass 2 → machine language program

Figure 2. A simple two-pass assembler.

Example 1: Decrement number 5 by 1 until it is equal to zero.

assembly language program      memory address      object code in memory
----------------------------------------------------------------------

START 0100H

LDA #5 0100 01

0101 00

0102 05

LOOP:SUB #1 0103 1D

0104 00

0105 01


COMP #0 0106 29

0107 00

0108 00

JGT LOOP 0109 34

010A 01 placed in Pass 1

010B 03

RSUB 010C 4C

010D 00

010E 00

END

Op-code Table

Mnemonic    Addressing mode    Opcode
LDA         immediate          01
SUB         immediate          1D
COMP        immediate          29
LDX         immediate          05
ADD         indexed            18
TIX         direct             2C
JLT         direct             38
JGT         direct             34
RSUB        implied            4C

Symbol Table

Symbol    Value
LOOP      0103
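The listing and tables above can be reproduced by a toy two-pass assembler. The sketch below is a deliberate simplification (every instruction is assumed to be 3 bytes: one opcode byte plus a 2-byte operand; labels end in `:`; immediate operands are marked with `#`); it is not a full SIC assembler.

```python
# Minimal two-pass assembler sketch for the toy instruction format above.
OPTAB = {'LDA': 0x01, 'LDX': 0x05, 'SUB': 0x1D, 'COMP': 0x29,
         'ADD': 0x18, 'TIX': 0x2C, 'JLT': 0x38, 'JGT': 0x34, 'RSUB': 0x4C}

def assemble(lines, start=0):
    symtab, lc = {}, start
    # Pass 1: assign addresses to labels using the location counter (LC)
    for line in lines:
        if ':' in line:
            label, line = line.split(':', 1)
            symtab[label.strip()] = lc
        if line.strip():
            lc += 3                      # every instruction is 3 bytes
    # Pass 2: generate object code
    code = []
    for line in lines:
        line = line.split(':', 1)[-1].strip()
        if not line:
            continue
        parts = line.split()
        op = OPTAB[parts[0]]
        if len(parts) == 1:              # e.g. RSUB: no operand
            operand = 0
        elif parts[1].startswith('#'):   # immediate value
            operand = int(parts[1][1:])
        else:                            # label reference via the symbol table
            operand = symtab[parts[1]]
        code += [op, (operand >> 8) & 0xFF, operand & 0xFF]
    return symtab, bytes(code)

src = ['LDA #5', 'LOOP: SUB #1', 'COMP #0', 'JGT LOOP', 'RSUB']
symtab, obj = assemble(src, start=0x100)
```

Running this on the source of Example 1 yields LOOP = 0103 in the symbol table and the object bytes 01 00 05, 1D 00 01, 29 00 00, 34 01 03, 4C 00 00, matching the listing.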


Assembler Features

• Machine Dependent Assembler Features

– Instruction formats and addressing modes (SIC/XE)

– Program relocation

• Machine Independent Assembler Features

– Literals

– Symbol-defining statements

– Expressions

– Program blocks

– Control sections and program linking

A SIC/XE Program (Fig. 2.5)


Relocatable Code

Producing object code that can be placed at any specified area in memory.

Direct Address Table (DAT): contains the offset locations of all direct addresses in the program (e.g., the 8080 instructions that specify direct addresses are LDA, STA, and all conditional jumps). To relocate the program, the loader adds the load point to all these locations.

    assembly language program → Assembler → machine language program and DAT

Figure 6. Assembler output for a relocatable code.

Example 3: Following relocatable object code and DAT are generated for Example 1.

assembly language program      memory address      object code in memory
----------------------------------------------------------------------

START

LDA #0 0000 01

0001 00

0002 00

LDX #0 0003 05

0004 00

0005 00

LOOP:ADD LIST, X 0006 18

0007 00

0008 12

TIX COUNT 0009 2C

000A 00

000B 15

JLT LOOP 000C 38

000D 00

000E 06

RSUB 000F 4C

0010 00

0011 00

LIST: WORD 200 0012 00

0013 02

0014 00

COUNT: WORD 6 0015 00

0016 00

0017 06

END


DAT

0007

000A

000D

Forward and backward references in the machine code are generated relative to address 0000. To relocate the code, the loader adds the new load point to the references in the machine code that are pointed to by the DAT.
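The relocation step above can be sketched in Python. The 2-byte address field at each DAT offset and the helper name are illustrative assumptions matching the listing, not a real loader's format; the code bytes below are copied from the relocatable listing above.

```python
# Hedged sketch: add the load point to every 2-byte address field whose
# offset appears in the DAT, leaving all other bytes untouched.

def relocate(code, dat, load_point):
    code = bytearray(code)
    for off in dat:
        addr = (code[off] << 8) | code[off + 1]   # current relative address
        addr += load_point                         # add the new load point
        code[off] = (addr >> 8) & 0xFF
        code[off + 1] = addr & 0xFF
    return bytes(code)

# Object code of the relocatable example above (LDA #0 ... COUNT: WORD 6),
# with its DAT entries 0007, 000A, 000D (the three direct address fields).
code = bytes([0x01, 0x00, 0x00,   0x05, 0x00, 0x00,
              0x18, 0x00, 0x12,   0x2C, 0x00, 0x15,
              0x38, 0x00, 0x06,   0x4C, 0x00, 0x00,
              0x00, 0x02, 0x00,   0x00, 0x00, 0x06])
relocated = relocate(code, [0x07, 0x0A, 0x0D], 0x500)
```

With load point 0500, the address fields 0012, 0015, and 0006 become 0512, 0515, and 0506, while the opcode bytes are unchanged.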

One-Pass Assemblers

Two methods can be used:

- Eliminating forward references

Either all labels used in forward references are defined in the source program before they are referenced, or forward references to data items are prohibited.

- Generating the object code in memory

No object program is written out and no loader is needed. The program needs to be re-assembled every time.

Multi-Pass Assemblers

Make as many passes as needed to process the definitions of symbols.

Example 3:

A EQU B
B EQU C
C DS 1

Three passes are required to find the address of A.


Such references can also be resolved in two passes: symbol definitions that involve forward references are entered in the symbol table, and the symbol table also indicates which symbols depend on the values of others.

Example 4:

A EQU B
B EQU D
C EQU D
D DS 1

At the end of Pass1:

Symbol Table

Symbol    Value    Depends on    Dependents
A         &1       B
B         &1       D             A
C         &1       D
D         200                    B, C

After evaluating dependencies:

Symbol Table

Symbol    Value
A         200
B         200
C         200
D         200
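The dependency evaluation of Example 4 can be mimicked by repeatedly substituting known values until every definition settles. The dictionary encoding of the symbol table below is an assumption for illustration, not the assembler's actual data structure.

```python
# Sketch: settle EQU chains by iterative substitution, as a multi-pass
# assembler (or the dependency lists of a two-pass one) effectively does.

def resolve(defs):
    """defs maps each symbol to an int (known address) or a symbol name."""
    values = {s: v for s, v in defs.items() if isinstance(v, int)}
    pending = {s: v for s, v in defs.items() if not isinstance(v, int)}
    while pending:
        progress = False
        for sym, ref in list(pending.items()):
            if ref in values:              # referenced symbol now known
                values[sym] = values[ref]
                del pending[sym]
                progress = True
        if not progress:                   # circular or undefined reference
            raise ValueError('unresolved symbols: %s' % sorted(pending))
    return values

# Example 4: A EQU B, B EQU D, C EQU D, D DS 1 (D placed at address 200)
values = resolve({'A': 'B', 'B': 'D', 'C': 'D', 'D': 200})
```

After two substitution rounds all four symbols take the value 200, matching the evaluated symbol table above.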


LINKERS AND LOADERS

1. Introduction

Installing a code in memory is called loading.

    source program → Assembler → object program → Loader → memory

Figure 1. Loading an object code into memory

The task of integrating the program modules together is called linking.

    source modules → Assembler → relocatable object modules → Linker →
    linked object modules → Loader → memory (Module 1, Module 2, Module 3)

Figure 2. Linking and loading as split processes

Linking and loading are frequently done at one time by a linking loader.

    source modules → Assembler → relocatable object modules → Linking Loader →
    memory (Module 1, Module 2, Module 3)

Figure 3. Linking loader

Types of loaders:


1. Absolute loaders
2. Relocating loaders
3. Linking loaders

2. Absolute Loaders

The assembler generates the code and writes the instructions to a file together with their load addresses.

The loader reads the file and places the code at the absolute address given in the file.

Example 1: Assume that the assembler generates the following code:

Address: instruction:

0100 F2 01 04

0103 47

0104 B5

… …

0200 05

0201 F2 02 00

The above code is written in the file as follows:

0100 location (2 bytes)


5 number of bytes

F2

01

04 code

47

B5

0200 location

4 number of bytes

05

F2 code

02

00

EOF


    repeat:
        read 2 bytes; if they are the EOF marker, return
        set LC to the address read                 (LC: location counter)
        read a byte and set NB to it               (NB: number of bytes)
        while NB > 0:
            read a byte and place it in the memory location pointed to by LC
            LC = LC + 1;  NB = NB – 1

Figure 4. Absolute loader

3. Relocating Loader

    independently assembled modules → Assembler →
    relocatable code + relocation information → Relocating Loader

Example 2: Assume that the following two relocatable codes and the associated DATs are generated by the assembler:

Address: instruction:

0000 F2 00 04


0003 47

0004 B5

… …

DAT

0001

0000 05

0001 F2 00 00

… …

DAT

0002

The relocating loader adds the load-point of the code to all the references specified in DAT. If the load-points of the above programs are 500 and 700, they are placed in the memory as follows:

Memory

0500 F2

0501 05

0502 04

0503 47


0504 B5

0700 05

0701 F2

0702 07

0703 00

    get load point;  LC = 0
    repeat:
        read a byte; if it is the EOF marker, return
        if LC is not in the DAT:
            place the byte at memory location LC + load point
            LC = LC + 1
        else:
            read the next byte; add the load point to the two-byte address;
            place the bytes at memory locations LC + load point and
            LC + load point + 1
            LC = LC + 2

Figure 5. Relocating loader


4. Linking Loader

Linking loaders perform four functions:

1. Allocation: allocating space in memory for the programs
2. Linking: resolving external references between modules
3. Relocation: adjusting all address references
4. Loading: physically placing the machine instructions and data in memory

Entry and External Points

When a statement in one module refers to a symbol in another module, it is called an external reference (EXTREF).

Any symbol in a module that can be referred to from another module is called an entry point (EXTDEF).

Module A                      Module B

EXTERNAL ALPHA, BETA          ENTRY ALPHA, BETA
…                             …
LDA ALPHA                     ALPHA: …
…                             …
LXI BETA                      BETA: …
…                             …


Example 3: The assembler has generated the object code of the following source programs, but external references have not yet been resolved. These codes will be linked and loaded into memory by a linking loader.

Source programs Code generated by the assembler

H PROGA    R SIZE    R BUF    R SUM

PROGA: START 0 address

EXTREF SIZE, BUF, SUM

LDA #128 0000 29 01 28

STA SIZE 0003 0C __ __

LDA #1 0006 29 00 01

LDX #0 0009 05 00 00

L1: STA BUF, X 000C 0F __ __

ADD #1 000F 19 00 01

TIX SIZE 0012 2C __ __

JEQ L2 0015 30 00 1B (placed by the assembler in pass 2)

J L1 0018 3C 00 0C (placed by the assembler in pass 1)

L2: JSUB SUM 001B 48 __ __

RSUB 001E 4F 00 00

END

DAT
0016
0019

M SIZE  0004
M BUF   000D
M SIZE  0013
M SUM   001B

H PROGB    D SUM 0000    R BUF    R SIZE    R TOT

PROGB: START 0

EXTDEF SUM

EXTREF SIZE, BUF, TOT

SUM: LDA #0 0000 29 00 00

LDX #0 0003 05 00 00

L3: ADD BUF, X 0006 1B __ __

TIX SIZE 0009 2C __ __

JEQ L4 000C 30 00 12 (placed by the assembler in pass 2)

J L3 000F 3C 00 06 (placed by the assembler in pass 1)

L4: STA TOT 0012 0C __ __

RSUB 0015 4F 00 00

END


DAT
000D
0010

M BUF   0007
M SIZE  000A
M TOT   0013

H PROGC    D SIZE 0000    D TOT 0003    D BUF 0006

PROGC: START 0

EXTDEF SIZE, BUF, TOT

SIZE: RESW 1 0000 __ __ __

TOT: RESW 1 0003 __ __ __

BUF: RESW 200 0006 __ __ __

END 0009 __ __ __

000C __ __ __

… …

    Assembler → relocatable machine codes + DATs + H/D/R/M information → Linker


The linker does two passes:

Pass1:

1. Gets the load points of all programs.
2. Creates the ESTAB table, using the information in the D/R records. Calculates the absolute address of each label in ESTAB as: load point + relative address.

Pass 2:

1. Using the M information and the absolute addresses in ESTAB, resolves the external references in all programs.

2. Using the DAT information, adds load points to the local references.

ESTAB

Program    Load point    Label    Absolute address
PROGA      1000
PROGB      2000          SUM      2000
PROGC      3000          SIZE     3000
                         TOT      3003
                         BUF      3006
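The ESTAB construction of Pass 1 can be sketched as follows. The tuple-based record layout is an assumption; the load points and labels are taken from the table above, read as hexadecimal.

```python
# Pass 1 sketch: for each program, record its load point, and for each
# D (define) record compute absolute address = load point + relative address.

def build_estab(programs):
    """programs: list of (name, load_point, [(label, relative_addr), ...])."""
    estab = {}
    for name, load_point, defs in programs:
        estab[name] = load_point
        for label, rel in defs:
            estab[label] = load_point + rel
    return estab

estab = build_estab([
    ('PROGA', 0x1000, []),
    ('PROGB', 0x2000, [('SUM', 0x0000)]),
    ('PROGC', 0x3000, [('SIZE', 0x0000), ('TOT', 0x0003), ('BUF', 0x0006)]),
])
```

Pass 2 would then look up `estab['SUM']`, `estab['SIZE']`, etc., to fill in the blank address fields flagged by the M records.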


Source programs Code generated Code generated

by the assembler at the end of linking

address address

PROGA: START 0

EXTREF SIZE, BUF, SUM

LDA #128 0000 29 01 28 0000 29 01 28

STA SIZE 0003 0C __ __ 0003 0C 30 00

LDA #1 0006 29 00 01 0006 29 00 01

LDX #0 0009 05 00 00 0009 05 00 00

L1: STA BUF, X 000C 0F __ __ 000C 0F 30 06

ADD #1 000F 19 00 01 000F 19 00 01

TIX SIZE 0012 2C __ __ 0012 2C 30 00

JEQ L2 0015 30 00 1B 0015 30 10 1B

J L1 0018 3C 00 0C 0018 3C 10 0C

L2: JSUB SUM 001B 48 __ __ 001B 48 20 00

RSUB 001E 4F 00 00 001E 4F 00 00

END

PROGB: START 0

EXTDEF SUM

EXTREF SIZE, BUF, TOT

SUM: LDA #0 0000 29 00 00 0000 29 00 00

LDX #0 0003 05 00 00 0003 05 00 00


L3: ADD BUF, X 0006 1B __ __ 0006 1B 30 06

TIX SIZE 0009 2C __ __ 0009 2C 30 00

JEQ L4 000C 30 00 12 000C 30 20 12

J L3 000F 3C 00 06 000F 3C 20 06

L4: STA TOT 0012 0C __ __ 0012 0C 30 03

RSUB 0015 4F 00 00 0015 4F 00 00

END

PROGC: START 0

EXTDEF SIZE, BUF, TOT

SIZE: RESW 1 0000 __ __ __ 0000 __ __ __

TOT: RESW 1 0003 __ __ __ 0003 __ __ __

BUF: RESW 200 0006 __ __ __ 0006 __ __ __

END 0009 __ __ __ 0009 __ __ __

000C __ __ __ 000C __ __ __

… … … …


    for each program:
        get load point
        read the program name and enter it in ESTAB
        repeat until end of program:
            read a record
            if it is a D record: enter the symbol in ESTAB and calculate
            its absolute address (load point + relative address)
    when all programs are processed, go to Pass 2

Figure 6. Pass 1 of the Linker

    for each program:
        get its load point from ESTAB and set LC to the load point
        while more records:
            read a record
            if it is object code: write it in the memory location pointed to by LC
            if it is an M record: find the absolute address of the symbol in
            ESTAB and write it in the memory location pointed to by
            LC + relative location
            if it is a DAT record: add the load point to the value in the
            memory location pointed to by LC + DAT entry
    done

Figure 7. Pass 2 of the Linker


Linked codes are combined into a single file in the following format:

File format:

    Header:
        file name
        file type
        number of blocks

    Block 1:
        load point (2 bytes)
        number of bytes
        code
        checksum

    …

    Block N:
        load point (2 bytes)
        number of bytes
        code
        checksum

Checksum:

To calculate the checksum, add up all the data bytes in the block and take the low-order byte of the sum.


When writing the code to the file, calculate checksum and write it at the end of each block.

When installing the code in memory, recompute checksum and compare it with the one read from the file.
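The checksum rule above amounts to one line of code; the example block bytes below are arbitrary.

```python
# Checksum sketch: sum all data bytes in the block and keep the
# low-order byte of the sum.

def checksum(block: bytes) -> int:
    return sum(block) & 0xFF

block = bytes([0xF2, 0x01, 0x04, 0x47, 0xB5])
stored = checksum(block)          # written at the end of the block

# On loading, the loader recomputes the checksum and compares it with
# the stored byte; a mismatch signals a corrupted block.
assert checksum(block) == stored
```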


Loader algorithm:

    read header;  NB = block count               (NB: number of blocks)
    while NB > 0:
        read load point;  LC = load point        (LC: location counter)
        read byte count;  BC = byte count        (BC: byte count)
        initialize checksum
        while BC > 0:
            read a byte and write it in memory at the location pointed to by LC
            add the byte to the checksum
            LC = LC + 1;  BC = BC – 1
        read the checksum byte; if it does not match, print an error message
        NB = NB – 1
    return


Figure 8. Absolute loader, which reads the linked codes from a file and loads them into the memory.

5. Linking with Libraries

The library contains the relocatable image of each module. User programs are assembled individually, and the library routines they use appear as external references.

    library routines:  relocatable machine code + DAT + D/R/M information  →  Linker
    user programs:     relocatable machine code + DAT + D/R/M information  →  Linker

Most linkers execute a default, transparent search of a set of libraries that contain the most commonly called routines.

- Searches are performed at the end of Pass 1.
- Library routines may themselves contain global symbols to be resolved; the linker performs an iterative search until all symbols are resolved.
- Library files are designed to expedite the search: it is not necessary to go through the whole file. Instead, a library file header, usually located at the beginning of the file, is reviewed. The header contains all the needed information.


Ex: Library file header

Routine Name    Global Symbols    Location in the File
…               …                 …
…               …                 …

Library routines are linked with user programs, between Pass 1 and Pass 2, while the linker is resolving external references.

    while more external references remain:
        take an external reference
        if an entry for it is found in the user programs: resolve it
        else:
            check all libraries (or only those specified by the user)
            to find an entry for it
            if not found: print an error message
            else:
                calculate a load point for the library routine
                configure the direct addresses in the module
                append the library routine to the user file
                merge the new D/R information into ESTAB
    then continue with Pass 2

Figure 9. Linking the library routines with the user programs.


Dynamic Address Resolution

In dynamic linking, a subroutine is loaded and linked to the other programs when it is first called during the execution. Dynamic linking allows several executing programs to share the same copy of a subroutine. For example, a single copy of the routines in a dynamic linking library (run-time library) can be loaded into the memory, and all currently executing programs can be linked to this copy.

Dynamic linking provides the ability to load routines only when they are needed. For example, error-correction routines need not be linked and loaded if no errors occur during execution of the program.

Dynamic linking also avoids the necessity of loading the entire library for each execution. For example, a program may be calling a different routine depending on the input data.

Dynamic linking and loading has the following steps:

1. The user program makes a Load-and-Call request to the operating system (dynamic loader).
2. The operating system examines its internal tables to determine whether or not the routine is already loaded. If not, it loads the routine into memory. It then transfers control to the routine.
3. After the subroutine finishes, it returns control to the operating system.
4. The operating system returns control to the program that issued the request.
5. The subroutine may be retained in memory for later use.

Bootstrap loaders

When a computer is first turned on or restarted, a bootstrap loader is executed. The bootstrap loader is a simple absolute loader. Its function is to load the first system program to be run by the computer, which is either the operating system or a more complex loader that loads the rest of the system.

The bootstrap loader is coded as a fixed-length record and added to the beginning of the system programs that are to be loaded into an empty system. Built-in hardware or a very simple program in ROM reads this record into memory and transfers control to it. When the bootstrap loader executes, it loads the program that follows it, which is either the operating system itself or other system programs to be run without an operating system.


UNIT – II

MACRO PROCESSORS

A macro name is an abbreviation, which stands for some related lines of code. Macros are useful for the following purposes:

- to simplify and reduce the amount of repetitive coding
- to reduce errors caused by repetitive coding
- to make an assembly program more readable

    source code with macro calls + macro definitions → Macroprocessor →
    expanded source code → Assembler

Figure 1. Macro expansion on a source program.


Ex:

Macro definition:

INITZ   MACRO              ; header (the prototype gives the macro name)
        MOV AX, @data      ;
        MOV DS, AX         ; template (code)
        MOV ES, AX         ;
        ENDM               ; trailer

User program:

…
INITZ                      ; macro call
…

User program (after macro expansion):

…
MOV AX, @data
MOV DS, AX
MOV ES, AX
…

Using parameters in macros:

Ex: displaying a message. (Assume that: MES2 DB ‘Enter the data as mm/dd/yy’)

Macro definition:

PROMPT  MACRO MESSGE       ; MESSGE is a dummy argument
        MOV AH, 09H
        LEA DX, MESSGE
        INT 21H
        ENDM               ; end of macro

User program:

…
PROMPT MES2                ; macro call
…
MES2    DB ‘Enter the data as mm/dd/yy’

User program (after macro expansion):

…
MOV AH, 09H
LEA DX, MES2
INT 21H
…
MES2    DB ‘Enter the data as mm/dd/yy’

Conditional assembly:

Only part of the macro is copied out into the code. Which part is copied out is controlled by the parameters in the macro call.

CONMB (condition), branch address

Ex:

Line no.   Assembly instructions
  …        …
  8        CONMB (&C>2), 15
  9        …
  …        …
  15       …
  …        …

If the condition is true, expansion skips the code up to line 15.

If the condition is false, expansion continues from line 9.
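The CONMB behaviour can be sketched in Python. The template representation below (a dictionary keyed by the example's line numbers, with CONMB lines stored as tuples) is an invented illustration, not an actual macroprocessor format.

```python
# Illustrative sketch of CONMB-style conditional expansion: copy template
# lines out, skipping ahead when the condition on a CONMB line is true.

def expand(template, params):
    """template: dict of line number -> text; CONMB lines are tuples."""
    out = []
    lines = sorted(template)
    i = 0
    while i < len(lines):
        n = lines[i]
        stmt = template[n]
        if isinstance(stmt, tuple):           # ("CONMB", condition, branch)
            _, cond, branch = stmt
            if cond(params):                  # condition true: skip ahead
                i = lines.index(branch)
                continue
            # condition false: fall through to the next template line
        else:
            out.append(stmt)
        i += 1
    return out

template = {
    8: ("CONMB", lambda p: p["&C"] > 2, 15),  # line 8 of the example
    9: "PART-A",                              # copied only if condition false
    15: "PART-B",                             # always reached
}
print(expand(template, {"&C": 3}))   # condition true: line 9 is skipped
print(expand(template, {"&C": 1}))   # condition false: both parts copied
```

With &C = 3 the expansion jumps directly to line 15; with &C = 1 it continues from line 9, matching the two cases described above.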

Macro calls within Macros

- Use a stack to keep the order of the macro calls.
- Create a parameter table at each macro call.
- If a dummy parameter appears in the real parameter field of a called macro, search the parameter table of the calling macro and replace it with the appropriate real parameter.


[Flowchart: expansion of macro calls within macros, using LP (line pointer). Find the appropriate macro definition, build a parameter table relating dummy and real parameters, push LP onto the stack, and set LP to the first line in the template. Then examine each line:
- instruction: substitute real parameters, write it to the output file, LP = LP + 1;
- conditional assembly: evaluate the Boolean expression; if true, LP = line number in the statement, otherwise LP = LP + 1;
- macro call: find the appropriate macro definition, build a parameter table, push LP and the pointer to the last parameter table onto the stack, and set LP and the current parameter table pointer to the first line in the called template;
- macro end: pop LP and the parameter table pointer from the stack.
Repeat while more lines remain, then return.]


Basic Macro Processor Functions:

1. Macro Definition and Expansion
2. Macro processor algorithms and data structures

1. Macro Definition and Expansion

The figure shows the MACRO expansion. The left block shows the MACRO definition and the right block shows the expanded macro, with the MACRO call replaced by its block of executable instructions.

M1 is a macro with two parameters D1 and D2. The MACRO stores the contents of register A in D1 and the contents of register B in D2. Later M1 is invoked with the parameters DATA1 and DATA2, and a second time with DATA4 and DATA3. Every call of the MACRO is expanded with the executable statements.

The statement M1 DATA1, DATA2 is a macro invocation statement that gives the name of the macro instruction being invoked and the arguments (DATA1 and DATA2) to be used in expanding it. A macro invocation is referred to as a macro call or invocation.

Macro Expansion

The program with macros is supplied to the macro processor. Each macro invocation statement will be expanded into the statements that form the body of the macro, with the arguments from the macro invocation substituted for the parameters in the macro prototype. During the expansion, the macro definition statements are deleted since they are no longer needed.

The arguments and the parameters are associated with one another according to their positions. The first argument in the macro invocation matches the first parameter in the macro prototype, and so on.


MACROPROCESSORS

INTRODUCTION

Macro Instructions

• A macro instruction (macro)
  – It is simply a notational convenience for the programmer to write a shorthand version of a program.
  – It represents a commonly used group of statements in the source program.
  – It is replaced by the macro processor with the corresponding group of source language statements. This operation is called “expanding the macro”.
• For example:
  – Suppose it is necessary to save the contents of all registers before calling a subroutine.
  – This requires a sequence of instructions.
  – We can define and use a macro, SAVEREGS, to represent this sequence of instructions.

Macro Processor

• A macro processor
  – Its functions essentially involve the substitution of one group of characters or lines for another.
  – Normally, it performs no analysis of the text it handles.
  – It is not concerned with the meaning of the involved statements during macro expansion.
• Therefore, the design of a macro processor generally is machine independent.
• Macro processors are used in
  – assembly language
  – high-level programming languages, e.g., C or C++
  – OS command languages
  – general-purpose applications

Format of macro definition

A macro can be defined as follows:

MACRO                       - The MACRO pseudo-op shows the start of the macro definition.
Name [List of Parameters]   - Macro name with a list of formal parameters.
…                           - Sequence of assembly language instructions.
MEND                        - MEND (MACRO-END) pseudo-op shows the end of the macro definition.

Example:

MACRO
SUM X, Y
LDA X
MOV BX, X
LDA Y
ADD BX

MEND

4.1 BASIC MACROPROCESSOR FUNCTIONS

The fundamental functions common to all macro processors are:
1. Macro Definition
2. Macro Invocation
3. Macro Expansion

Macro Definition and Expansion

Two new assembler directives are used in macro definition:
o MACRO: identifies the beginning of a macro definition
o MEND: identifies the end of a macro definition

· Prototype for the macro:
o Each parameter begins with ‘&’

label    op       operands
name     MACRO    parameters
         :
         body
         :
         MEND

· Body: the statements that will be generated as the expansion of the macro.


· It shows an example of a SIC/XE program using macro instructions.
· This program defines and uses two macro instructions, RDBUFF and WRBUFF.
· The functions and logic of the RDBUFF macro are similar to those of the RDBUFF subroutine.
· The WRBUFF macro is similar to the WRREC subroutine.
· Two assembler directives (MACRO and MEND) are used in macro definitions.
· The first MACRO statement identifies the beginning of the macro definition.
· The symbol in the label field (RDBUFF) is the name of the macro, and entries in the operand field identify the parameters of the macro instruction.
· In our macro language, each parameter begins with the character &, which facilitates the substitution of parameters during macro expansion.
· The macro name and parameters define the pattern or prototype for the macro instruction used by the programmer. The macro definition statements are deleted after the macros are expanded, since they are no longer needed.
· Each macro invocation statement is expanded into the statements that form the body of the macro, with the arguments from the macro invocation substituted for the parameters in the macro prototype.
· The arguments and parameters are associated with one another according to their positions.

Macro Invocation

· A macro invocation statement (a macro call) gives the name of the macro instruction being invoked and the arguments to be used in expanding the macro.
· The processes of macro invocation and subroutine call are quite different.
o Statements of the macro body are expanded each time the macro is invoked.
o Statements of the subroutine appear only once, regardless of how many times the subroutine is called.
· The macro invocation statements are treated as comments, and the statements generated from macro expansion will be assembled as though they had been written by the programmer.


Macro Expansion

· Each macro invocation statement will be expanded into the statements that form the body of the macro.

· Arguments from the macro invocation are substituted for the parameters in the macro prototype.

o The arguments and parameters are associated with one another according to their positions: the first argument in the macro invocation corresponds to the first parameter in the macro prototype, etc.

· Comment lines within the macro body have been deleted, but comments on individual statements have been retained.

· Macro invocation statement itself has been included as a comment line.
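The positional association described above can be sketched as a simple pairing of the prototype's parameters with the invocation's arguments. The parameter and argument names below follow the RDBUFF example; the `associate` function itself is illustrative.

```python
# Pair macro parameters with invocation arguments by position,
# as a macro processor does during expansion.

def associate(parameters, arguments):
    """First argument binds to first parameter, second to second, etc."""
    return dict(zip(parameters, arguments))

mapping = associate(["&INDEV", "&BUFADR", "&RECLTH"],
                    ["F1", "BUFFER", "LENGTH"])
print(mapping["&INDEV"])    # the first argument binds to the first parameter
print(mapping["&BUFADR"])   # the second binds to the second
```

During expansion, each occurrence of a parameter in the macro body would be replaced by the argument it is paired with here.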

Example of a macro expansion


· In expanding the macro invocation on line 190, the argument F1 is substituted for the parameter &INDEV wherever it occurs in the body of the macro.

· Similarly BUFFER is substituted for BUFADR and LENGTH is substituted for RECLTH.

· Lines 190a through 190m show the complete expansion of the macro invocation on line 190.

· The label on the macro invocation statement CLOOP has been retained as a label on the first statement generated in the macro expansion.

· This allows the programmer to use a macro instruction in exactly the same way as an assembler language mnemonic.

· After macro processing, the expanded file can be used as input to the assembler.
· The macro invocation statements will be treated as comments, and the statements generated from the macro expansions will be assembled exactly as though they had been written directly by the programmer.


4.1.1 Macro Processor Algorithm and Data Structures

· It is easy to design a two-pass macro processor in which all macro definitions are processed during the first pass, and all macro invocation statements are expanded during the second pass.
· Such a two-pass macro processor would not allow the body of one macro instruction to contain definitions of other macros.

Example 1:

Example 2:


· Defining MACROS or MACROX does not define RDBUFF and the other macro instructions. These definitions are processed only when an invocation of MACROS or MACROX is expanded.

· A one pass macroprocessor that can alternate between macro definition and macro expansion is able to handle macros like these.

· There are 3 main data structures involved in our macro processor.

Definition table (DEFTAB)
1. The macro definitions themselves are stored in the definition table (DEFTAB), which contains the macro prototype and the statements that make up the macro body. References to macro instruction parameters are converted to a positional notation.
2. Comment lines from the macro definition are not entered into DEFTAB because they will not be a part of the macro expansion.

Name table (NAMTAB)
1. The macro names are entered into NAMTAB, which serves as an index to DEFTAB.
2. For each macro instruction defined, NAMTAB contains pointers to the beginning and end of the definition in DEFTAB.

Argument table (ARGTAB)
1. The third data structure is an argument table (ARGTAB), which is used during the expansion of macro invocations.
2. When a macro invocation statement is recognized, the arguments are stored in ARGTAB according to their position in the argument list.
3. As the macro is expanded, arguments from ARGTAB are substituted for the corresponding parameters in the macro body.
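A minimal sketch of the three data structures is given below. It uses the DEFTAB, NAMTAB and ARGTAB roles described above with the ?n positional notation; the `define` and `expand` helpers are simplified illustrations, not the textbook algorithm.

```python
# DEFTAB holds prototypes and body lines (parameters as ?1, ?2, ...).
# NAMTAB maps a macro name to begin/end pointers into DEFTAB.
# ARGTAB holds the current invocation's arguments by position.

DEFTAB = []
NAMTAB = {}
ARGTAB = []

def define(name, parameters, body):
    begin = len(DEFTAB)
    DEFTAB.append((name, parameters))            # prototype line
    for line in body:
        for i, p in enumerate(parameters, 1):    # convert params to ?n
            line = line.replace(p, "?%d" % i)    # (fine for < 10 params)
        DEFTAB.append(line)
    NAMTAB[name] = (begin, len(DEFTAB) - 1)      # pointers into DEFTAB

def expand(name, arguments):
    global ARGTAB
    ARGTAB = list(arguments)                     # store args by position
    begin, end = NAMTAB[name]
    out = []
    for line in DEFTAB[begin + 1:end + 1]:       # skip the prototype line
        for i, arg in enumerate(ARGTAB, 1):
            line = line.replace("?%d" % i, arg)  # ?n -> n-th argument
        out.append(line)
    return out

define("RDBUFF", ["&INDEV", "&BUFADR"], ["TD =X'&INDEV'", "STX &BUFADR"])
print(expand("RDBUFF", ["F1", "BUFFER"]))
```

Storing parameters positionally in DEFTAB means expansion needs only an index into ARGTAB, exactly the simple indexing operation mentioned later in these notes.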


· Positional notation is used for the parameters. The parameter &INDEV has been converted to ?1, and &BUFADR has been converted to ?2.
· When the ?n notation is recognized in a line from DEFTAB, a simple indexing operation supplies the proper argument from ARGTAB.

Algorithm:

· The procedure DEFINE, which is called when the beginning of a macro definition is recognized, makes the appropriate entries in DEFTAB and NAMTAB.

· EXPAND is called to set up the argument values in ARGTAB and expand a macro invocation statement.

· The procedure GETLINE gets the next line to be processed.
· This line may come from DEFTAB or from the input file, depending upon whether the Boolean variable EXPANDING is set to TRUE or FALSE.
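The GETLINE / EXPANDING control can be sketched as below. This is a deliberately simplified model: `macros` maps a macro name directly to its body lines (a real processor would consult NAMTAB and DEFTAB), and the leading period marks the invocation as a comment.

```python
# Sketch of the main loop: lines come from the macro body while
# EXPANDING is true, and from the input file otherwise.

def process(source, macros):
    output = []
    expanding = False          # the Boolean variable EXPANDING
    buffer = []                # body lines being taken while expanding
    lines = iter(source)
    while True:
        # GETLINE: next line from the macro body if expanding,
        # otherwise from the input file.
        if expanding:
            if buffer:
                line = buffer.pop(0)
            else:
                expanding = False      # body exhausted: back to the input
                continue
        else:
            line = next(lines, None)
            if line is None:
                break                  # end of input file
        op = line.split()[0]
        if op in macros:               # macro invocation: start expanding
            output.append("." + line)  # invocation is kept as a comment
            buffer = list(macros[op])
            expanding = True
        else:
            output.append(line)        # ordinary line: pass it through
    return output

macros = {"SAVEREGS": ["PUSH AX", "PUSH BX"]}
print(process(["START 0", "SAVEREGS", "END"], macros))
```

Note how the single flag reproduces the behaviour described above: the source of the next line switches to DEFTAB for the duration of an expansion and back to the input file afterwards.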


4.2 MACHINE INDEPENDENT MACRO PROCESSOR FEATURES

Machine-independent macro processor features are extended features that are not directly related to the architecture of the computer for which the macro processor is written.

4.2.1 Concatenation of Macro Parameters

· Most macro processors allow parameters to be concatenated with other character strings.
· A program contains a set of series of variables:
  XA1, XA2, XA3, …
  XB1, XB2, XB3, …
· If similar processing is to be performed on each series of variables, the programmer might want to incorporate this processing into a macro instruction.
· The parameter to such a macro instruction could specify the series of variables to be operated on (A, B, C, …).
· The macro processor constructs the symbols by concatenating X, (A, B, …), and (1, 2, 3, …) in the macro expansion.

· Suppose such a parameter is named &ID. The macro body may contain a statement LDA X&ID1, in which &ID is concatenated after the string “X” and before the string “1”:

LDA XA1   (&ID=A)
LDA XB1   (&ID=B)

· Ambiguity problem: X&ID1 may mean either “X” + &ID + “1” or “X” + &ID1. This problem occurs because the end of the parameter is not marked.
· Solution to this ambiguity problem: use a special concatenation operator (written -> here) to specify the end of the parameter: LDA X&ID->1. Now the end of the parameter &ID is clearly identified.

Macro definition

Macro invocation statements


· The macro processor deletes all occurrences of the concatenation operator immediately after performing parameter substitution, so the character will not appear in the macro expansion.
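The substitute-then-delete behaviour can be sketched as follows. The -> spelling of the concatenation operator is an assumption of this sketch (the notes only say a special operator character is used).

```python
# Parameter substitution with a concatenation operator: the operator
# marks the end of a parameter name and is deleted after substitution.

def substitute(line, params):
    """params: mapping of &NAME -> argument value."""
    # Replace longer names first, so &ID1 would win over &ID if both exist.
    for name in sorted(params, key=len, reverse=True):
        line = line.replace(name, params[name])
    return line.replace("->", "")    # delete the concatenation operator

# Without the operator, X&ID1 is ambiguous; with it, the end of &ID is clear.
print(substitute("LDA X&ID->1", {"&ID": "A"}))
print(substitute("LDA X&ID->1", {"&ID": "B"}))
```

After substitution the operator is removed, so the expanded lines read LDA XA1 and LDA XB1, just as in the series-of-variables example above.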

4.2.2 Generation of Unique Labels

· Labels in the macro body may cause “duplicate labels” problem if the macro is invocated and expanded multiple times.

· Use of relative addressing at the source statement level is very inconvenient, error-prone, and difficult to read.

· It is highly desirable to
1. Let the programmer use labels in the macro body.
   · Labels used within the macro body begin with $.
2. Let the macro processor generate unique labels for each macro invocation and expansion.
   · During macro expansion, the $ will be replaced with $xx, where xx is a two-character alphanumeric counter of the number of macro instructions expanded.
   · xx = AA, AB, AC, …

Consider the definition of WRBUFF:

5     COPY    START   0
      :
135   TD      =X'&OUTDEV'
      :
140   JEQ     *-3
      :
155   JLT     *-14
      :
255   END     FIRST


· If a label was placed on the TD instruction on line 135, this label would be defined twice, once for each invocation of WRBUFF.

· This duplicate definition would prevent correct assembly of the resulting expanded program.

· The jump instructions on lines 140 and 155 are written using the relative operands *-3 and *-14, because it is not possible to place a label on line 135 of the macro definition.
· This relative addressing may be acceptable for short jumps such as “JEQ *-3”.
· For longer jumps spanning several instructions, such notation is very inconvenient, error-prone and difficult to read.
· Many macro processors avoid these problems by allowing the creation of special types of labels within macro instructions.

RDBUFF definition

· Labels within the macro body begin with the special character $.

Macro expansion


· Unique labels are generated within the macro expansion.
· Each symbol beginning with $ has been modified by replacing $ with $AA.
· The character $ will be replaced by $xx, where xx is a two-character alphanumeric counter of the number of macro instructions expanded.
· For the first macro expansion in a program, xx will have the value AA. For succeeding macro expansions, xx will be set to AB, AC, etc.
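The $xx scheme can be sketched directly. The counter here is purely alphabetic (AA, AB, …, AZ, BA, …), which is one common reading of "two-character alphanumeric counter"; the helper names are illustrative.

```python
# Unique-label generation: $ in the macro body is replaced with $xx,
# where xx advances once per macro expansion.

import string

expansion_count = 0

def label_prefix(n):
    """Map 0 -> AA, 1 -> AB, ..., 25 -> AZ, 26 -> BA, ..."""
    letters = string.ascii_uppercase
    return letters[n // 26] + letters[n % 26]

def expand_labels(body):
    global expansion_count
    prefix = "$" + label_prefix(expansion_count)
    expansion_count += 1                     # one step per expansion
    return [line.replace("$", prefix) for line in body]

body = ["$LOOP TD =X'F1'", "JEQ $LOOP"]
print(expand_labels(body))   # first expansion: $LOOP becomes $AALOOP
print(expand_labels(body))   # second expansion: $LOOP becomes $ABLOOP
```

Because every occurrence of $ in one expansion gets the same prefix, the label and the jump that references it stay consistent, while two different expansions can never collide.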

4.2.3 Conditional Macro Expansion

· Arguments in a macro invocation can be used to:
o Substitute the parameters in the macro body without changing the sequence of statements expanded.
o Modify the sequence of statements for conditional macro expansion (or conditional assembly when related to an assembler).
· This capability adds greatly to the power and flexibility of a macro language.

Consider the example

[Figure: the example macro definition, annotated to show a macro-time variable and a Boolean expression.]

· Two additional parameters are used in the example of conditional macro expansion:
o &EOR: specifies a hexadecimal character code that marks the end of a record
o &MAXLTH: specifies the maximum length of a record
· A macro-time variable (SET symbol)
o can be used to
  - store working values during the macro expansion
  - store the evaluation result of a Boolean expression
  - control the macro-time conditional structures
o begins with “&” and is not a macro instruction parameter
o is initialized to a value of 0
o is set by a macro processor directive, SET
· Macro-time conditional structures:
o IF-ELSE-ENDIF
o WHILE-ENDW

65

Page 66: UNIT – I - Web viewMemory consists of 8- bit bytes, any three consecutive bytes form a word (24 bits). ... Stacks, arrays and records are some examples for data structures. To have

4.2.3.1 Implementation of Conditional Macro Expansion (IF-ELSE-ENDIF Structure)

· A symbol table is maintained by the macro processor.
o This table contains the values of all macro-time variables used.
o Entries in this table are made or modified when SET statements are processed.
o This table is used to look up the current value of a macro-time variable whenever it is required.
· The testing of the condition and the looping are done while the macro is being expanded.
· When an IF statement is encountered during the expansion of a macro, the specified Boolean expression is evaluated. If the value is
o TRUE: The macro processor continues to process lines from DEFTAB until it encounters the next ELSE or ENDIF statement. If ELSE is encountered, it then skips to ENDIF.
o FALSE: The macro processor skips ahead in DEFTAB until it finds the next ELSE or ENDIF statement.

4.2.3.2 Implementation of Conditional Macro Expansion (WHILE-ENDW Structure)

· When a WHILE statement is encountered during the expansion of a macro, the specified Boolean expression is evaluated. If the value is
o TRUE: The macro processor continues to process lines from DEFTAB until it encounters the next ENDW statement. When ENDW is encountered, the macro processor returns to the preceding WHILE, re-evaluates the Boolean expression, and takes action again.
o FALSE: The macro processor skips ahead in DEFTAB until it finds the next ENDW statement and then resumes normal macro expansion.
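The WHILE-ENDW behaviour can be sketched as follows. The list-of-lines body format, with WHILE lines as tuples and SET statements as callables, is an invented representation for the sketch (a real processor walks DEFTAB).

```python
# Macro-time WHILE-ENDW over body lines, with a symbol table of
# macro-time variables (no nested WHILEs, to keep the sketch small).

def expand_while(body, symtab):
    out = []
    i = 0
    while i < len(body):
        stmt = body[i]
        if isinstance(stmt, tuple) and stmt[0] == "WHILE":
            cond = stmt[1]
            endw = body.index("ENDW", i)   # find the matching ENDW
            if cond(symtab):
                i += 1                     # TRUE: keep processing the body
            else:
                i = endw + 1               # FALSE: skip past ENDW
        elif stmt == "ENDW":
            # return to the preceding WHILE and re-evaluate its condition
            i = max(j for j in range(i)
                    if isinstance(body[j], tuple) and body[j][0] == "WHILE")
        elif callable(stmt):
            stmt(symtab)                   # SET statement: update symtab
            i += 1
        else:
            out.append(stmt)               # ordinary line: generate it
            i += 1
    return out

# Generate one RESB line per iteration while &CTR is less than 3.
body = [
    ("WHILE", lambda s: s["&CTR"] < 3),
    "RESB 1",
    lambda s: s.update({"&CTR": s["&CTR"] + 1}),   # &CTR SET &CTR+1
    "ENDW",
]
print(expand_while(body, {"&CTR": 0}))   # three RESB lines are generated
```

The loop terminates because the SET statement advances &CTR in the symbol table; without it the macro-time WHILE would expand forever, which is a real hazard of macro-time loops.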


4.2.4 Keyword Macro Parameters

· Positional parameters

o Parameters and arguments are associated according to their positions in the macro prototype and invocation. The programmer must specify the arguments in the proper order.
o If an argument is to be omitted, a null argument should be used to maintain the proper order in the macro invocation statement.
o For example: suppose a macro instruction GENER has 10 possible parameters, but in a particular invocation of the macro only the 3rd and 9th parameters are to be specified.
o The statement is GENER ,,DIRECT,,,,,,3.
o This is not suitable if a macro has a large number of parameters, and only a few of these are given values in a typical invocation.

· Keyword parameters

o Each argument value is written with a keyword that names the corresponding parameter.

o Arguments may appear in any order.
o Null arguments no longer need to be used.
o If the 3rd parameter is named &TYPE and the 9th parameter is named &CHANNEL, the macro invocation would be

GENER TYPE=DIRECT,CHANNEL=3.

o It is easier to read and much less error-prone than the positional method.

Consider the example

· Here each parameter name is followed by an equal sign, which identifies a keyword parameter, and a default value is specified for some of the parameters.

Here the value of &INDEV is specified as F3 and the value of &EOR is specified as null.
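Keyword-parameter binding with defaults can be sketched as below. The parameter names follow the GENER and RDBUFF examples, but the particular default values in the `defaults` table are assumptions of this sketch.

```python
# Keyword parameters: start from the prototype's defaults, then let
# NAME=value arguments from the invocation override them, in any order.

def bind_keywords(defaults, invocation_args):
    """defaults: {'&NAME': default}; invocation_args: ['NAME=value', ...]"""
    values = dict(defaults)                 # start from the default values
    for arg in invocation_args:
        name, _, value = arg.partition("=")
        values["&" + name] = value          # keyword overrides the default
    return values

defaults = {"&TYPE": "", "&CHANNEL": "0", "&INDEV": "F1", "&EOR": "04"}
bound = bind_keywords(defaults, ["TYPE=DIRECT", "CHANNEL=3"])
print(bound["&TYPE"], bound["&CHANNEL"])    # explicitly given arguments
print(bound["&INDEV"])                      # unspecified: default retained
```

Compare this with the positional GENER ,,DIRECT,,,,,,3 form: here no null arguments are needed and the order of the two keywords is irrelevant.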


4.3. MACROPROCESSOR DESIGN OPTIONS

4.3.1 Recursive Macro Expansion

· RDCHAR:
o reads one character from a specified device into register A
o should be defined beforehand (i.e., before RDBUFF)


Implementation of Recursive Macro Expansion

· The previous macro processor design cannot handle such recursive macro invocation and expansion, e.g., RDBUFF BUFFER, LENGTH, F1.
· Reasons:
1) The procedure EXPAND would be called recursively, so the invocation arguments in ARGTAB would be overwritten.
2) The Boolean variable EXPANDING would be set to FALSE when the “inner” macro expansion is finished; that is, the macro processor would forget that it had been in the middle of expanding an “outer” macro.
3) A similar problem would occur with PROCESSLINE, since this procedure too would be called recursively.
· Solutions:
1) Write the macro processor in a programming language that allows recursive calls, so that local variables are retained.
2) Use a stack to take care of pushing and popping local variables and return addresses.
· Another problem: can a macro invoke itself recursively?

4.3.2 One-Pass Macro Processor

· A one-pass macro processor that alternates between macro definition and macro expansion in a recursive way is able to handle recursive macro definitions.

· Because of the one-pass structure, the definition of a macro must appear in the source program before any statements that invoke that macro.

Handling Recursive Macro Definition

· In the DEFINE procedure:
o When a macro definition is being entered into DEFTAB, the normal approach is to continue until an MEND directive is reached.
o This would not work for recursive macro definitions, because the first MEND encountered in the inner macro would terminate the whole macro definition process.
o To solve this problem, a counter LEVEL is used to keep track of the level of macro definitions.
  - Increase LEVEL by 1 each time a MACRO directive is read.
  - Decrease LEVEL by 1 each time an MEND directive is read.
  - An MEND can terminate the whole macro definition process only when LEVEL reaches 0.


This process is very much like matching left and right parentheses when scanning an arithmetic expression.
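The LEVEL counter can be sketched directly. The `read_definition` helper and the bare MACRO/name line layout are illustrative (the notes' own definition format); the counting logic is the point.

```python
# Collect one (possibly nested) macro definition: MEND ends the whole
# definition only when it matches the outermost MACRO, like balancing
# parentheses in an arithmetic expression.

def read_definition(lines):
    deftab, level = [], 0
    for line in lines:
        op = line.split()[0] if line.split() else ""
        if op == "MACRO":
            level += 1                # entering a (nested) definition
        deftab.append(line)
        if op == "MEND":
            level -= 1                # leaving one definition level
            if level == 0:
                break                 # outermost MEND: definition complete
    return deftab

source = ["MACRO", "MACROS", "MACRO", "RDBUFF", "MEND", "MEND", "LDA X"]
print(read_definition(source))   # stops at the second MEND; LDA X excluded
```

Without LEVEL, the first MEND (which belongs to the inner RDBUFF definition) would wrongly end the outer MACROS definition, leaving the second MEND stranded in the source.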

4.3.3 Two-Pass Macro Processor

· Two-pass macro processor
o Pass 1: process macro definitions.
o Pass 2: expand all macro invocation statements.
· Problem
o This kind of macro processor cannot allow recursive macro definition, that is, the body of a macro containing definitions of other macros (because all macros would have to be defined during the first pass, before any macro invocations were expanded).

Example of Recursive Macro Definition

· MACROS (for SIC)
o Contains the definitions of RDBUFF and WRBUFF written in SIC instructions.
· MACROX (for SIC/XE)
o Contains the definitions of RDBUFF and WRBUFF written in SIC/XE instructions.
· A program that is to be run on a SIC system could invoke MACROS, whereas a program to be run on SIC/XE could invoke MACROX.
· Defining MACROS or MACROX does not define RDBUFF and WRBUFF. These definitions are processed only when an invocation of MACROS or MACROX is expanded.


4.3.4 General-Purpose Macro Processors

Goal
· Macro processors that do not depend on any particular programming language, but can be used with a variety of different languages.

Advantages
· Programmers do not need to learn many macro languages.


· Although its development costs are somewhat greater than those for a language-specific macro processor, this expense does not need to be repeated for each language, which saves substantial overall cost.

Disadvantages
· A large number of details must be dealt with in a real programming language:
o Situations in which normal macro parameter substitution should not occur, e.g., comments.
o Facilities for grouping together terms, expressions, or statements.
o Tokens, e.g., identifiers, constants, operators, keywords.
o Syntax.

4.3.5 Macro Processing within Language Translators

Macro processors can be

1) Preprocessor
o Processes macro definitions.
o Expands macro invocations.
o Produces an expanded version of the source program, which is then used as input to an assembler or compiler.
2) Line-by-line macro processor
o Used as a sort of input routine for the assembler or compiler.
o Reads the source program.
o Processes macro definitions and expands macro invocations.
o Passes output lines to the assembler or compiler.

3) Integrated macro processor

4.3.5.1 Line-by-Line Macro Processor

Benefits
· It avoids making an extra pass over the source program.
· Data structures required by the macro processor and the language translator can be combined (e.g., OPTAB and NAMTAB).
· Utility subroutines can be used by both the macro processor and the language translator:
o Scanning input lines
o Searching tables
o Data format conversion
· It is easier to give diagnostic messages related to the source statements.

4.3.5.2 Integrated Macro Processor


· An integrated macro processor can potentially make use of any information about the source program that is extracted by the language translator.

· As an example, in FORTRAN:

DO 100 I = 1,20
– a DO statement:
  • DO: keyword
  • 100: statement number
  • I: variable name

DO 100 I = 1
– an assignment statement:
  • DO100I: variable (blanks are not significant in FORTRAN)

· An integrated macro processor can support macro instructions that depend upon the context in which they occur.

Drawbacks of Line-by-Line or Integrated Macro Processors

· They must be specially designed and written to work with a particular implementation of an assembler or compiler.

· The cost of macro processor development is added to the costs of the language translator, which results in a more expensive software.

· The assembler or compiler will be considerably larger and more complex.


TEXT- EDITORS

OVERVIEW OF THE EDITING PROCESS.

An interactive editor is a computer program that allows a user to create and revise a target document. The term document includes objects such as computer programs, texts, equations, tables, diagrams, line art and photographs: anything that one might find on a printed page. A text editor is one in which the primary elements being edited are character strings of the target text. The document editing process is an interactive user-computer dialogue designed to accomplish four tasks:

1) Select the part of the target document to be viewed and manipulated 2) Determine how to format this view on-line and how to display it. 3) Specify and execute operations that modify the target document. 4) Update the view appropriately.

Traveling – Selection of the part of the document to be viewed and edited. It involves first traveling through the document to locate the area of interest, such as “next screenful”, “bottom”, or “find pattern”. Traveling specifies where the area of interest is.

Filtering - The selection of what is to be viewed and manipulated is controlled by filtering. Filtering extracts the relevant subset of the target document at the point of interest such as next screenful of text or next statement.

Formatting: Formatting determines how the result of filtering will be seen as a visible representation (the view) on a display screen or other device.

Editing: In the actual editing phase, the target document is created or altered with a set of operations such as insert, delete, replace, move or copy.

Manuscript-oriented editors operate on elements such as single characters, words, lines, sentences and paragraphs; program-oriented editors operate on elements such as identifiers, keywords and statements.

THE USER-INTERFACE OF AN EDITOR.

The user of an interactive editor is presented with a conceptual model of the editing system. The model is an abstract framework that defines the editor and the world on which its operations are based. The line editors simulated the world of the keypunch: they allowed operations on a numbered sequence of 80-character card-image lines.

Screen editors define a world in which a document is represented as a quarter-plane of text lines, unbounded both down and to the right. The user sees, through a cutout, only a rectangular subset of this plane on a multi-line display terminal. The cutout can be moved left or right, and up or down, to display other portions of the document. The user interface is also concerned with the input devices, the output devices, and the interaction language of the system.

INPUT DEVICES: The input devices are used to enter elements of text being edited, to enter commands, and to designate editable elements. Input devices are categorized as: 1) Text devices 2) Button devices 3) Locator devices

1) Text or string devices are typically typewriter-like keyboards on which the user presses and releases keys, sending a unique code for each key. Virtually all computer keyboards are of the QWERTY type.

2) Button or choice devices generate an interrupt or set a system flag, usually causing the invocation of an associated application program. Special function keys are also available on the keyboard. Alternatively, buttons can be simulated in software by displaying text strings or symbols on the screen; the user chooses a string or symbol instead of pressing a button.

3) Locator devices: They are two-dimensional analog-to-digital converters that position a cursor symbol on the screen by observing the user's movement of the device. The most common such devices are the mouse and the tablet.

The data tablet is a flat, rectangular, electromagnetically sensitive panel. Either a ballpoint-pen-like stylus or a puck, a small device similar to a mouse, is moved over its surface. The tablet returns to a system program the coordinates of the position on the data tablet at which the stylus or puck is currently located. The program can then map these data-tablet coordinates to screen coordinates and move the cursor to the corresponding screen position. Text devices with arrow (cursor) keys can be used to simulate locator devices. Each of these keys shows an arrow that points up, down, left or right. Pressing an arrow key typically generates an appropriate character sequence; the program interprets this sequence and moves the cursor in the direction of the arrow on the key pressed.
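The tablet-to-screen mapping described above amounts to a linear scaling of coordinates. The sketch below is a hypothetical illustration; the tablet and screen resolutions are assumed values, not those of any particular device.

```python
def tablet_to_screen(tx, ty, tablet_w=10000, tablet_h=8000,
                     screen_w=1024, screen_h=768):
    """Map data-tablet coordinates to screen coordinates by scaling.

    The tablet and screen resolutions are hypothetical example values.
    """
    sx = tx * screen_w // tablet_w
    sy = ty * screen_h // tablet_h
    return sx, sy

# A stylus at the centre of the tablet maps to the centre of the screen.
print(tablet_to_screen(5000, 4000))   # (512, 384)
```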

VOICE-INPUT DEVICES, which translate spoken words into their textual equivalents, may prove to be the text input devices of the future. Voice recognizers are currently available for command input on some systems.

OUTPUT DEVICES

The output devices let the user view the elements being edited and the results of the editing operations.

The first output devices were teletypewriters and other character-printing terminals that generated output on paper.

Next came “glass teletypes” based on Cathode Ray Tube (CRT) technology, which used the CRT screen essentially to simulate the hard-copy teletypewriter.

Today's advanced CRT terminals use hardware assistance for such features as moving the cursor, inserting and deleting characters and lines, and scrolling lines and pages.

Modern professional workstations are based on personal computers with high-resolution displays; they support multiple proportionally spaced character fonts to produce realistic facsimiles of hard-copy documents.

INTERACTION LANGUAGE:

The interaction language of the text editor is generally one of several common types.

The typing-oriented or text-command-oriented method is the oldest of the major editing interfaces. The user communicates with the editor by typing text strings, both for command names and for operands. These strings are sent to the editor and are usually echoed to the output device. Typed specification often requires the user to remember the exact form of all commands, or at least their abbreviations. If the command language is complex, the user must continually refer to a manual or an on-line help function. The typing required can be time-consuming for inexperienced users.

Function key interfaces: Each command is associated with a marked key on the keyboard. This eliminates much typing. E.g.: Insert key, Shift key, Control key

Disadvantages:

· There can be too many unique keys
· Multiple-keystroke commands are needed

Menu-oriented interface: A menu is a multiple-choice set of text strings or icons, which are graphical symbols that represent objects or operations. The user performs actions by selecting items from the menus; the editor prompts the user with a menu. One problem with menu-oriented systems can arise when there are many possible actions and several choices are required to complete an action, since the display area for the menu is rather limited.


Most text editors have a structure built from the components described below.

The Command Language Processor accepts input from the user's input devices and analyzes the tokens and syntactic structure of the commands. It functions much like the lexical and syntactic phases of a compiler. The command language processor may invoke the semantic routines directly. In a text editor, these semantic routines perform functions such as editing and viewing. The semantic routines involve traveling, editing, viewing and display functions. Editing operations are always specified explicitly by the user, and display operations are specified implicitly by the other three categories of operations. Traveling and viewing operations may be invoked either explicitly by the user or implicitly by the editing operations.

Editing Component

In editing a document, the start of the area to be edited is determined by the current editing pointer maintained by the editing component, which is the collection of modules dealing with editing tasks. The current editing pointer can be set or reset explicitly by the user using traveling commands, such as next paragraph and next screen, or implicitly as a side effect of the previous editing operation, such as delete paragraph.

Traveling Component

The traveling component of the editor actually performs the setting of the current editing and viewing pointers, and thus determines the point at which the viewing and/or editing filtering begins.

Viewing Component

The start of the area to be viewed is determined by the current viewing pointer. This pointer is maintained by the viewing component of the editor, which is a collection of modules responsible for determining the next view. The current viewing pointer can be set or reset explicitly by the user or implicitly by the system as a result of the previous editing operation. The viewing component formulates an ideal view, often expressed in a device-independent intermediate representation. This view may be a very simple one consisting of a window's worth of text arranged so that lines are not broken in the middle of words.

Display Component

It takes the idealized view from the viewing component and maps it to a physical output device in the most efficient manner. The display component produces a display by mapping the buffer to a rectangular subset of the screen, usually a window.

Editing Filter

Filtering consists of the selection of contiguous characters beginning at the current point. The editing filter filters the document to generate a new editing buffer based on the current editing pointer as well as on the editing filter parameters.

Editing Buffer

It contains the subset of the document filtered by the editing filter based on the editing pointer and editing filter parameters.

Viewing Filter


When the display needs to be updated, the viewing component invokes the viewing filter. This component filters the document to generate a new viewing buffer based on the current viewing pointer as well as on the viewing filter parameters.

Viewing Buffer

It contains the subset of the document filtered by the viewing filter based on the viewing pointer and viewing filter parameters.

E.g., the user of a certain editor might travel to line 75 and, after viewing it, decide to change all occurrences of “ugly duckling” to “swan” in lines 1 through 50 of the file by using a change command such as

[1,50] c/ugly duckling/swan/

As a part of the editing command there is implicit travel to the first line of the file. Lines 1 through 50 are then filtered from the document to become the editing buffer. Successive substitutions take place in this editing buffer without corresponding updates of the view.
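The effect of such a change command on the editing buffer can be sketched as follows; the function name and the representation of the document as a list of lines are illustrative assumptions, not the editor's actual implementation.

```python
import re

def change(lines, first, last, pattern, replacement):
    """Apply a substitution to lines first..last (1-based, inclusive),
    mimicking a command such as [1,50] c/ugly duckling/swan/."""
    return [re.sub(pattern, replacement, line) if first <= i <= last else line
            for i, line in enumerate(lines, start=1)]

doc = ["see the ugly duckling", "one ugly duckling here", "the ugly duckling stays"]
edited = change(doc, 1, 2, "ugly duckling", "swan")
print(edited)   # ['see the swan', 'one swan here', 'the ugly duckling stays']
```

Only lines within the addressed range are filtered and substituted; the rest of the document is untouched, just as only lines 1 through 50 enter the editing buffer above.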

In line editors, the viewing buffer may contain the current line; in screen editors, this buffer may contain a rectangular cutout of the quarter-plane of text. The viewing buffer is then passed to the display component of the editor, which produces a display by mapping the buffer to a rectangular subset of the screen, usually called a window.

The editing and viewing buffers, while independent, can be related in many ways. In the simplest case, they are identical: the user edits the material directly on the screen. At the other extreme, the editing and viewing buffers may be completely disjoint.

Windows typically cover the entire screen or a rectangular portion of it. Mapping viewing buffers to windows that cover only part of the screen is especially useful for editors on modern graphics-based workstations. Such systems can support multiple windows, simultaneously showing different portions of the same file or portions of different files.


This approach allows the user to perform inter-file editing operations much more effectively than with a system having only a single window.

The mapping of the viewing buffer to a window is accomplished by two components of the system.

(i) First, the viewing component formulates an ideal view, often expressed in a device-independent intermediate representation. This view may be a very simple one consisting of a window's worth of text arranged so that lines are not broken in the middle of words. At the other extreme, the idealized view may be a facsimile of a page of fully formatted and typeset text with equations, tables and figures.

(ii) Second, the display component takes this idealized view from the viewing component and maps it to a physical output device in the most efficient manner possible.

The components of the editor deal with a user document on two levels: (i) in main memory and (ii) in the disk file system. Loading an entire document into main memory may be infeasible. However, if only part of a document is loaded, and if many user-specified operations require a disk read by the editor to locate the affected portions, editing might be unacceptably slow. In some systems this problem is solved by mapping the entire file into virtual memory and letting the operating system perform efficient demand paging.

An alternative is to provide editor paging routines, which read one or more logical portions of a document into memory as needed. Such portions are often termed pages, although there is usually no relationship between these pages and the hard-copy document pages or virtual-memory pages. These pages remain resident in main memory until a user operation requires that another portion of the document be loaded.

Editors function in three basic types of computing environment: (i) time-sharing, (ii) stand-alone and (iii) distributed. Each type of environment imposes some constraints on the design of an editor.

The Time-Sharing Environment

The time-sharing editor must function swiftly within the context of the load on the computer's processor, central memory and I/O devices.

The Stand-Alone Environment

The editor on a stand-alone system must have access to the functions that the time-sharing editor obtains from its host operating system. These may be provided in part by a small local operating system, or they may be built into the editor itself if the stand-alone system is dedicated to editing.

The Distributed Environment

The editor operating in a distributed resource-sharing local network must, like a stand-alone editor, run independently on each user's machine and must, like a time-sharing editor, contend for shared resources such as files.


INTERACTIVE DEBUGGING SYSTEMS

An interactive debugging system provides programmers with facilities that aid in the testing and debugging of programs interactively.

DEBUGGING FUNCTIONS AND CAPABILITIES

Execution sequencing – It is the observation and control of the flow of program execution. For example, the program may be halted after a fixed number of instructions are executed.

Breakpoints – The programmer may define breakpoints which cause execution to be suspended when a specified point in the program is reached. After execution is suspended, debugging commands are used to analyze the progress of the program and to diagnose errors detected. Execution of the program can then be resumed.
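The suspend-inspect-resume cycle can be sketched with a toy interpreter; a real debugger hooks the run-time environment instead, and the statement representation here is purely an illustrative assumption.

```python
def run(program, breakpoints):
    """Execute a toy 'program' (a list of (target, expression) statements
    evaluated over an environment dict), suspending at breakpoints,
    which is a set of statement numbers."""
    env = {}
    snapshots = []                       # state captured at each suspension
    for num, (target, expr) in enumerate(program, start=1):
        if num in breakpoints:
            # A real debugger would enter an interactive command loop here;
            # this sketch records the environment, then resumes execution.
            snapshots.append((num, dict(env)))
        env[target] = eval(expr, {}, env)
    return env, snapshots

prog = [("x", "6"), ("y", "x * 7"), ("z", "x + y")]
env, snaps = run(prog, breakpoints={3})
print(env)    # {'x': 6, 'y': 42, 'z': 48}
print(snaps)  # [(3, {'x': 6, 'y': 42})]
```

The snapshot shows the program state at the moment of suspension: the analysis happens there, after which execution resumes and the remaining statements run.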

Conditional Expressions – Programmers can define conditional expressions that are evaluated during the debugging session. Program execution is suspended when the conditions are met; analysis is made, and execution is later resumed.

Gaits – Given a good graphical representation of program progress, it may even be useful to run the program at various speeds, called gaits. A debugging system should also provide functions such as tracing and traceback. Tracing can be used to track the flow of execution logic and data modifications. The control flow can be traced at different levels of detail – procedure, branch, individual instruction, and so on.

Traceback can show the path by which the current statement in the program was reached. It can also show which statements have modified a given variable or parameter. The statements themselves are displayed, rather than hexadecimal displacements.

Program-Display Capabilities

It is also important for a debugging system to have good program-display capabilities. It must be possible to display the program being debugged, complete with statement numbers.

Multilingual Capability

A debugging system should consider the language in which the program being debugged is written. Most user environments and many application systems involve the use of different programming languages. A single debugging tool should be applicable to such multilingual situations.

Context Effects

The context being used has many different effects on the debugging interaction. For example, the notation for statements differs depending on the language:

COBOL   - MOVE 6.5 TO X
FORTRAN - X = 6.5

Likewise, conditional statements should use the notation of the source language:

COBOL   - IF A NOT EQUAL TO B
FORTRAN - IF (A .NE. B)


Similar differences exist with respect to the form of statement labels, keywords and so

on.

Display of Source Code

The language translator may provide the source code or source listing tagged in some standard way so that the debugger has a uniform method of navigating about it.

Optimization

It is also important that a debugging system be able to deal with optimized code. Many optimizations involve the rearrangement of segments of code in the program. For example:

· invariant expressions can be removed from loops
· separate loops can be combined into a single loop
· redundant expressions may be eliminated
· unnecessary branch instructions may be eliminated

The debugging of optimized code requires a substantial amount of cooperation from the optimizing compiler.

Relationship with Other Parts of the System

An interactive debugger must be related to other parts of the system in many different ways.

Availability

The interactive debugger must appear to be a part of the run-time environment and an integral part of the system. When an error is discovered, immediate debugging must be possible, because it may be difficult or impossible to reproduce the program failure in some other environment or at some other time.

Consistency with security and integrity components

Users need to be able to debug in a production environment. When an application fails during a production run, work dependent on that application stops. Since the production environment is often quite different from the test environment, many program failures cannot be repeated outside the production environment. The debugger must also exist in a way that is consistent with the security and integrity components of the system. Use of the debugger must be subject to the normal authorization mechanisms and must leave the usual audit trails. An unauthorized user must not be able to access any data or code, and it must not be possible to use the debugger to interfere with any aspect of system integrity.

Coordination with existing and future systems

The debugger must coordinate its activities with those of existing and future language compilers and interpreters. It is assumed that debugging facilities in existing languages will continue to exist and be maintained. The requirement of a cross-language debugger assumes that such a facility would be installed as an alternative to the individual language debuggers.

USER-INTERFACE CRITERIA

The interactive debugging system should be user-friendly. The facilities of the debugging system should be organized into a few basic categories of functions which closely reflect common user tasks.


Full-Screen Displays and Windowing Systems

The user interaction should make use of full-screen displays and windowing systems. The advantage of such an interface is that information can be displayed and changed easily and quickly.

Menus

With menus and full-screen editors, the user has far less information to enter and remember.

It should be possible to go directly to the menus without having to retrace an entire hierarchy.

When a full-screen terminal device is not available, the user should have an equivalent action available through commands in a linear debugging language.

Command Language

The command language should have a clear, logical, simple syntax. Parameter names should be consistent across the set of commands.

Parameters should automatically be checked for errors of type and range of values, and defaults should be provided for parameters. The command language should minimize punctuation such as parentheses, slashes, and special characters.

On-Line HELP Facility

A good interactive system should have an on-line HELP facility that provides help for all options of a menu. Help should be available from any state of the debugging system.

UNIT – III


COMPILERS AND INTERPRETERS

ASPECTS OF COMPILATION

Definition – A compiler bridges the semantic gap between the PL (Programming Language) domain and the execution domain. In simple words, a compiler is the system program that translates a source program into an object module.

Responsibility of compiler

1. Generate code to implement the meaning of the source program.

2. If there are any errors in the source program, detect them during compilation and provide the error information (called diagnostics).

To understand the compiler's responsibilities, we need to consider the following Programming Language (PL) features:

1. Data types
2. Data structures
3. Scope rules
4. Control structures

Data Types

Definition: A data type is the specification of

1. the legal values for data items, and
2. the legal operations on the values of data items.


The compiler has to check the compatibility of an operation with its operands, it has to generate code to perform conversion of values where necessary, and it must use appropriate instruction sequences.
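As a sketch of this responsibility, the toy checker below validates operand compatibility for '+' and records the conversion code the compiler would have to generate for mixed-mode arithmetic; the type rules and the instruction name are illustrative assumptions, not those of any particular compiler.

```python
def check_add(left_type, right_type):
    """Return (result_type, conversions) for 'left + right', where
    conversions lists the code the compiler must generate first."""
    numeric = {"int", "float"}
    if left_type not in numeric or right_type not in numeric:
        raise TypeError(f"'+' not defined for {left_type} and {right_type}")
    conversions = []
    if left_type != right_type:
        # Mixed-mode arithmetic: convert the int operand to float.
        which = "left" if left_type == "int" else "right"
        conversions.append(f"CONVERT {which} int->float")
        return "float", conversions
    return left_type, conversions

print(check_add("int", "float"))   # ('float', ['CONVERT left int->float'])
print(check_add("int", "int"))     # ('int', [])
```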

Data Structures

Stacks, arrays and records are some examples of data structures. To reference an element of a data structure, the compiler has to develop memory mapping functions. When the records in a data structure contain different types of data, this leads to complex memory mapping functions.

Scope Rules

The scope of an element in a program is the part of the program where the element is accessible. For example, auto variables are accessible only in the function where they are declared, while global variables can be used throughout the program.

A collection of such elements forms a name space; associations between elements in different name spaces are called interprocedural aliases.

Control Structure

A control structure is the collection of language features which alter the normal flow of control in a program. Examples are loop constructs (while, for, etc.), function calls, and conditional control (if statements).

MEMORY ALLOCATION


Memory allocation involves the following tasks:

1. To find the amount of memory needed for a data item.
2. To use the memory allocation model appropriate for the scope of the data item.
3. To find the memory mappings for the elements of a data structure (e.g. an array).

Memory Binding: Definition

A memory binding is an association between the memory address attribute of a data item and the address of a memory area.

There are two types of memory allocation:

1. Static memory allocation
2. Dynamic memory allocation

Static allocation: In static memory allocation, memory is allocated to a variable before the execution of the program, i.e., memory is allocated at compile time. FORTRAN is an example of a language using this type of allocation.

Dynamic allocation: In dynamic memory allocation, memory is allocated and deallocated during execution time. Pascal and Ada are examples of languages using this type of allocation.

Dynamic allocation is of two types,


1. Automatic allocation/deallocation
2. Program controlled allocation/deallocation

Automatic allocation: In automatic allocation, memory is allocated to the variables of a program unit when the program unit is entered, and deallocated when the program unit is exited. This is implemented using stacks.

Program controlled allocation: In program controlled allocation, memory is allocated and deallocated at arbitrary points during execution. This is implemented using heaps.

Memory allocation in block structured languages

A program in a block structured language is a nested structure of blocks. A block is a program unit which can contain data declarations. Examples of block structured languages are Algol-60, Pascal and Ada.

Scope Rules

A variable ( say var) which is given a name (say name) is referred by the binding

(name, var).

i.e., if you have the declaration int x, y, z; then x, y and z are the names, each bound to a variable of type int, so we can represent the bindings as (x, int), (y, int) and (z, int).

If we have a block ‘b1’ inside block ‘b’, and if b1 contains the declaration (int x), then the binding (x, int) of block ‘b’ is suppressed inside block ‘b1’. Only if block ‘b1’ does not contain such a declaration is the binding (x, int) of block ‘b’ effective inside block ‘b1’.

So a variable (var) with a name (name1) declared in a block b:

1. can be accessed in block b, and
2. can be accessed in a block b1 enclosed in block b, unless b1 contains a declaration using the same name.

A variable declared inside a block b is called a local variable of block b. A variable of an enclosing block which is accessible inside block b is called a nonlocal variable of block b.

Ex: Consider a block b with the declarations x, y, z : int, and a block b1 nested inside b with the declaration g : int.


So here g is a local variable of block b1, and since x is accessible inside block b1, x is said to be a nonlocal variable of block b1.
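These scope rules can be sketched with a stack of symbol tables, one per open block, searched innermost-first; this is a hypothetical illustration, not a production symbol table design.

```python
class ScopeStack:
    """One symbol table per open block; lookup searches innermost first,
    so an inner declaration suppresses an outer binding of the same name."""
    def __init__(self):
        self.tables = []

    def enter_block(self):
        self.tables.append({})

    def exit_block(self):
        self.tables.pop()

    def declare(self, name, type_):
        self.tables[-1][name] = type_

    def lookup(self, name):
        for table in reversed(self.tables):
            if name in table:
                return table[name]
        raise NameError(name)

s = ScopeStack()
s.enter_block()                 # block b
s.declare("x", "int")
s.enter_block()                 # block b1
s.declare("g", "int")
print(s.lookup("g"))            # local to b1 -> int
print(s.lookup("x"))            # nonlocal, found in b -> int
s.declare("x", "float")         # suppresses the outer (x, int) binding
print(s.lookup("x"))            # float
s.exit_block()
print(s.lookup("x"))            # binding (x, int) effective again -> int
```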

Memory allocation and access

Automatic allocation is implemented using the stack model with a minor variation, viz. two reserved pointers in each record. The memory allocated on the stack for the variables of a block is called the activation record (AR) of the block. A register called the Activation Record Base (ARB) points to the start address of the TOS (Top Of Stack) record.

Dynamic Pointer

The first reserved pointer is called the dynamic pointer; it has the address 0(ARB). This pointer is used to deallocate an AR.

Static Pointer

The second reserved pointer is called the static pointer; it has the address 1(ARB). It is used to access the nonlocal variables of a block.

Accessing nonlocal variables.

The block which encloses a block (‘b’) is called the static or textual ancestor of block ‘b’. So a block which immediately encloses a block is called the level 1 ancestor of that block. In general a level ‘m’ ancestor is a block which immediately encloses the level (m-1) ancestor.


Displays

Accessing nonlocal variables using static pointers is expensive when the number of levels increases. A display is an array used to make access to nonlocal variables efficient. When a block B is in execution, the display array contains the following information:

Display[1] = address of the level (nesting level – 1) ancestor of B
Display[2] = address of the level (nesting level – 2) ancestor of B
…
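A sketch of why the display helps: a nonlocal access becomes one indexed load from the display followed by one access into the AR, regardless of how many nesting levels separate the blocks. The activation-record representation below is a simplified assumption.

```python
def access_nonlocal(display, defining_level, displacement):
    """Fetch a variable of a block at the given static nesting level:
    one indexed load from the display, then one from the AR --
    constant cost, independent of the nesting distance."""
    ar = display[defining_level]          # AR of the defining block
    return ar[displacement]

# Hypothetical ARs of a program nested three levels deep.
ar_main  = {2: "x-value"}     # level 1, variable x at displacement 2
ar_outer = {2: "g-value"}     # level 2
ar_inner = {}                 # level 3, currently executing
display = {1: ar_main, 2: ar_outer, 3: ar_inner}

print(access_nonlocal(display, 1, 2))   # 'x-value'
```

With static pointers the same access would require following one pointer per intervening level before reaching the defining block's AR.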

Symbol table requirements

To implement dynamic allocation and access, a compiler should perform the following tasks while compiling the usage of a name v in a block b_current, the block being compiled:

1. Determine the static nesting level of b_current.
2. Determine the variable designated by the name v.
3. Determine the static nesting level of the block in which v is defined.

The symbol table in the TOS record is searched first. Existence of v there implies that v designates a local variable of b_current. If v does not exist there, the previous record in the stack is searched. When v is found in a symbol table, its displacement in the AR is obtained from the symbol table entry.

Recursion

Recursion means that many invocations of a procedure may coexist during the execution of a program. A copy of the local variables of the procedure must be allocated for each invocation.


Limitations of stack based memory allocation

Stack based memory model is not suitable for program controlled memory allocation. It is also inadequate for multi-activity programs because the concurrent activities in such a program enter and exit program blocks in a non-LIFO manner.

Array allocation and access

The order in which the elements of an array are arranged depends on the rules of the PL. For static memory allocation, the dimension bounds of the array must be known at compilation time.

The address of an element a[s1, s2] is given by

Ad.a[s1,s2] = Ad.a[1,1] + { (s2-1)*n + (s1-1) } * k

where n is the number of rows in the array and k is the number of memory locations occupied by each element of the array.

For a two dimensional array

a[L1:U1, L2:U2],

where Li and Ui represent the lower and upper bounds of the ith subscript.

The range of the ith subscript is given by Rangei = Ui – Li + 1, i.e.

Range1 = U1 – L1 + 1

Range2 = U2 – L2 + 1

So the address of element a[s1,s2] is given by

Ad. A[s1, s2] = Ad.a[L1, L2] + {(s2 – L2) * (U1 – L1 + 1) + ( s1 – L1) } * k

The address and the dimension bounds of an array can be stored in an array descriptor called the dope vector. If this information is known at compile time, the dope vector need exist only at compile time; otherwise the dope vector must exist during program execution. In that case the dope vector is allocated in the AR of the block, and its displacement is noted in the symbol table entry of the array.

Dope Vector

Ad.a[0, …, 0]
No. of dimensions (m)
L1   U1   Range1
L2   U2   Range2
L3   U3   Range3
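The address computation of the previous section can be written directly from the dope vector contents; the function below is a sketch for the two-dimensional case, assuming column-major order and base = Ad.a[L1, L2].

```python
def element_address(base, bounds, k, s1, s2):
    """Address of a[s1, s2] for a column-major array a[L1:U1, L2:U2],
    where base = Ad.a[L1, L2] and k is the element size in bytes.
    Implements: base + ((s2 - L2) * (U1 - L1 + 1) + (s1 - L1)) * k
    """
    (L1, U1), (L2, U2) = bounds
    range1 = U1 - L1 + 1
    return base + ((s2 - L2) * range1 + (s1 - L1)) * k

# Hypothetical array a[1:10, 1:20] of 4-byte elements starting at address 1000.
print(element_address(1000, [(1, 10), (1, 20)], 4, 1, 1))   # 1000
print(element_address(1000, [(1, 10), (1, 20)], 4, 3, 2))   # 1000 + (1*10 + 2)*4 = 1048
```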


COMPILATION OF EXPRESSIONS

A Toy Code Generator for expressions

The major issues in code generation for expressions are

1. Determination of the evaluation order of operators
2. Selection of instructions to be used in the target code
3. Use of registers and memory locations for holding partial results

The evaluation order depends on the precedence (priority) of the operators: an operator which takes precedence over both its left and right neighbours must be evaluated before either of them.

The choice of the instruction depends on

1. The type and the length of the operand
2. The addressability of the operand

An Operand Descriptor is used to maintain this information.

A partial result is the value of some subexpression computed while evaluating an expression. If the number of partial results exceeds the number of available CPU registers, some of them are stored in memory. A Register Descriptor is used to maintain this information.

Operand Descriptor

The fields in Operand Descriptor are


1. Attributes: contains the type, length and miscellaneous information
2. Addressability: where the operand is located and how it can be accessed. The subfields are

   a. Addressability code:
        ‘M’  – operand in memory
        ‘R’  – operand in a register
        ‘AM’ – address in memory
        ‘AR’ – address in a register

   b. Address: address of the CPU register or memory word

Register Descriptor

It has two fields:

1. Status: contains free/occupied
2. Operand descriptor #: if status = occupied, this field contains the operand descriptor number for the operand held in the register

Generating an instruction

When an operator op is reduced by the parser, the function codegen is called with op and the operand descriptors as parameters. If one operand is in memory and the other in a register, a single instruction is generated to evaluate op. If both operands are in memory, one of them has to be moved into a CPU register first.


Saving partial results

If all registers are occupied when the operator op is to be evaluated, a register r is freed by copying its contents into a temporary memory location. We assume that an array temp is used to store the partial results. When a partial result is moved to a temporary location, the descriptor of the partial result must be changed accordingly.

Intermediate code for expressions

Postfix strings

In postfix notation an operator appears immediately after its last operand; thus a binary operator op appears in the postfix string after its second operand. Code generation from the postfix string is performed using a stack of operand descriptors.

For example for a source string

a + b * c + d * e

The postfix notation is given by abc*+de*+
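The conversion of the infix string to this postfix form can be sketched with an operator stack. This is an illustrative simplified pass (single-letter operands, left-associative binary operators, no parentheses are all assumptions of the sketch):

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def to_postfix(infix):
    """Convert a simple infix expression to a postfix string.
    Operands go straight to the output; an operator first pops any
    stacked operators of equal or higher precedence."""
    out, ops = [], []
    for ch in infix.replace(' ', ''):
        if ch.isalnum():
            out.append(ch)
        else:
            while ops and PREC[ops[-1]] >= PREC[ch]:
                out.append(ops.pop())
            ops.append(ch)
    while ops:                       # flush the remaining operators
        out.append(ops.pop())
    return ''.join(out)

print(to_postfix("a + b * c + d * e"))   # abc*+de*+
```

The result for the source string above matches the postfix form shown in the text.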

Triples and Quadruples

A triple (or three address code) is a representation of an elementary operation in the form of a pseudo-machine instruction.

Operator operand1 operand2

These are the fields of a triple.


For the expression a + b * c the triple representation is

operator  operand1  operand2
1:   *    b    c
2:   +    #1   a

where #1 denotes the result of triple number 1.

A program representation called indirect triples is useful in optimizing compilers: all distinct triples are entered only once in a table of triples, and a separate list of pointers into this table defines the order of execution.

A quadruple (or four address code) represents an elementary evaluation in the following format:

Operator operand1 operand2 Result name

Where Result name indicates the result of evaluation.

For the expression a + b * c the quadruples are

operator operand1 operand2 result name

* b c t1

+ t1 a t2
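The quadruple table can be produced mechanically from the postfix form using a stack of operand names, in the spirit of the descriptor stack described earlier. The following Python sketch is illustrative only; for commutative operators the operand order may come out differently from the table shown.

```python
def postfix_to_quadruples(postfix):
    """Build the quadruple table (operator, operand1, operand2, result)
    for a postfix string; temporaries t1, t2, ... name the results."""
    stack, quads = [], []
    for ch in postfix:
        if ch.isalnum():                 # operand: push its name
            stack.append(ch)
        else:                            # binary operator: pop two operands
            op2, op1 = stack.pop(), stack.pop()
            result = f"t{len(quads) + 1}"
            quads.append((ch, op1, op2, result))
            stack.append(result)         # the temporary becomes an operand
    return quads

# a + b * c  ->  postfix abc*+
print(postfix_to_quadruples("abc*+"))
# [('*', 'b', 'c', 't1'), ('+', 'a', 't1', 't2')]
```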


Expression Trees

A compiler must analyse an expression to find the best evaluation order for its operators. An expression tree is an abstract syntax tree which depicts the structure of an expression.

A two-step procedure is used to find the best evaluation order:

1. Determine the register requirement (RR) of each subtree in the expression tree.
2. Analyse the register requirement labels of the child nodes of a node and determine the order in which the child nodes are evaluated.

Algorithm(Evaluation order for operators)

1. Visit all nodes of the expression tree in post order. For each node n:

   a. If n is a leaf node then
         if n is the left operand of its parent then RR(n) := 1;
         else RR(n) := 0;

   b. If n is not a leaf node then
         if RR(l_childnode(n)) ≠ RR(r_childnode(n)) then
            RR(n) := max(RR(l_childnode(n)), RR(r_childnode(n)));
         else
            RR(n) := RR(l_childnode(n)) + 1;

2. Perform the procedure call evaluation_order(root).


Procedure evaluation_order(node):

    if node is a leaf node then
        print node;
    else
        if RR(l_childnode) < RR(r_childnode) then
            evaluation_order(r_childnode);
            evaluation_order(l_childnode);
        else
            evaluation_order(l_childnode);
            evaluation_order(r_childnode);
    return;

end evaluation_order;
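The two-step procedure can be sketched directly in Python. The Node class and the example tree (a + b) * (c - d) are hypothetical illustrations; the labelling and ordering follow the algorithm above.

```python
class Node:
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

def label(n, is_left=True):
    """Step 1: register-requirement (RR) labelling in post order."""
    if n.left is None:                        # leaf node
        n.rr = 1 if is_left else 0
    else:
        label(n.left, True)
        label(n.right, False)
        if n.left.rr != n.right.rr:
            n.rr = max(n.left.rr, n.right.rr)
        else:
            n.rr = n.left.rr + 1
    return n.rr

def evaluation_order(n, out):
    """Step 2: evaluate the child that needs more registers first."""
    if n.left is None:
        out.append(n.op)
    else:
        first, second = ((n.right, n.left) if n.left.rr < n.right.rr
                         else (n.left, n.right))
        evaluation_order(first, out)
        evaluation_order(second, out)
        out.append(n.op)
    return out

# Hypothetical example: (a + b) * (c - d)
tree = Node('*', Node('+', Node('a'), Node('b')),
                 Node('-', Node('c'), Node('d')))
label(tree)
order = evaluation_order(tree, [])
print(order)        # ['a', 'b', '+', 'c', 'd', '-', '*']
```

Both subtrees here get RR = 1, so the root gets RR = 2, and the left subtree is evaluated first.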


COMPILATION OF CONTROL STRUCTURES

Definition – The control structure of a programming language is the collection of language features which govern the sequencing of control through a program.

Ex. Conditional control constructs, iteration control constructs, procedure calls

Control transfer, conditional and iterative constructs

Control transfer is implemented through conditional and unconditional gotos. When the target language is an assembly language, a statement goto lab can be compiled simply as the assembly statement BC ANY, LAB.

if (e1) then
    S1;
else
    S2;
S3;

The above program when compiled takes the following form

If (~e1) then goto int1;

S1;

goto int2 ;

int1 : S2;


int2 : S3;
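The translation above can be sketched as a small emitter. The label names intN follow the text; everything else (function names, the list-of-strings representation of code) is an assumed illustration.

```python
label_count = 0
def new_label():
    global label_count
    label_count += 1
    return f"int{label_count}"

def compile_if(cond, then_code, else_code):
    """Emit the pattern shown above: branch to the else part when the
    condition is false, and jump over the else part after the then part."""
    l_else, l_end = new_label(), new_label()
    return ([f"if (~{cond}) then goto {l_else};"]
            + then_code
            + [f"goto {l_end};", f"{l_else}:"]
            + else_code
            + [f"{l_end}:"])

code = compile_if("e1", ["S1;"], ["S2;"])
print("\n".join(code))
```

The printed code is exactly the goto form shown in the text, with int1 as the else label and int2 as the exit label.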

Function and procedure calls

A function call such as

x = fun(a, b) + y

computes the value of fun(a, b) by executing the body of the function and returns it to the calling program. In addition, the function call may also produce side effects.

Side Effect

A side effect of a function call is a change in the value of a variable which is not local to the called function.

While implementing a function call, the compiler must ensure that

1. The actual parameters are accessible in the called function
2. The side effects of the called function are realized
3. Control is transferred to and from the called function
4. The function value is returned to the calling program

The compiler uses the following features to implement the function calls

1. Parameter list: contains a descriptor for each actual parameter

2. Save area: the called function saves the contents of the CPU registers in this area and restores them before returning.

3. Calling conventions: these indicate
   a. how the parameter list must be accessed
   b. how the save area is to be accessed
   c. how the call and return are to be implemented
   d. how the value of the function is to be returned

Parameter passing mechanisms

The four mechanisms available are

call by value

call by value- result

call by reference

call by name

Call by value

The values of the actual parameters are passed to the called function and assigned to the formal parameters. Passing takes place in one direction only: if a formal parameter is changed, the change is not reflected in the actual parameter, so the function produces no side effects through its parameters. The advantage of this mechanism is its simplicity.


Call by value – result

It extends the capabilities of call by value by copying the values of the formal parameters back to the corresponding actual parameters. Thus side effects are realized on return.
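The difference between call by value and call by value-result can be simulated explicitly. The sketch below (all names hypothetical) models the caller's memory and the callee's formal parameters as dictionaries, copying values in, and, for value-result, copying them back out.

```python
# Caller's memory: variable name -> value.
memory = {'x': 1}

def callee(env):
    # The function body assigns to its formal parameter p.
    env['p'] = env['p'] + 10

def call_by_value(var):
    env = {'p': memory[var]}     # copy the value in
    callee(env)                  # the change stays local: no side effect

def call_by_value_result(var):
    env = {'p': memory[var]}     # copy the value in
    callee(env)
    memory[var] = env['p']       # copy the result back out on return

call_by_value('x')
print(memory['x'])               # 1  (actual parameter unchanged)
call_by_value_result('x')
print(memory['x'])               # 11 (change copied back on return)
```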

Call by reference

The address of the actual parameter is passed to the called function. If the parameter is an expression, its value is stored in a temporary location and the address of that location is passed. The parameter list is the list of addresses of the actual parameters.

Call by name

Each occurrence of a formal parameter in the body of the called function is replaced by the name of the corresponding actual parameter. The actual parameter corresponding to a formal parameter can thus change dynamically during the execution of the function, which makes call by name immensely powerful. It has nevertheless become less attractive because of its overheads.

CODE OPTIMIZATION

Code optimization in compilers aims at improving the execution efficiency of a program by eliminating redundancies and by rearranging the computations in the program without affecting its meaning. Optimization seeks to improve the program rather than the algorithm.

Optimizing transformations


An optimizing transformation is a rule for rewriting a segment of a program to improve its execution efficiency without affecting its semantics.

The two classifications of transformations are

1. Local optimization
2. Global optimization

Examples of optimizing transformations used in compilers:

1. Compile time evaluation :

Execution efficiency is improved by performing, during compilation itself, certain actions specified in the program; this reduces the execution time of the program. For example, the constant expression a = 5/2 can be evaluated at compile time and replaced by a = 2.5, which eliminates a division operation at run time.

2. Elimination of common subexpressions

Common subexpressions are occurrences of expressions that yield the same value; they are also called equivalent expressions. First, the expressions which may yield the same value are identified. Their equivalence is then established by checking whether their operands have the same values at all the occurrences. Redundant occurrences of expressions which satisfy this criterion can be eliminated.

3. Dead code elimination

Code that can be removed from a program without affecting its results is called dead code. Such code can be eliminated, as it serves no purpose in the program.


4. Frequency reduction

Execution time of a program can be reduced by moving code from a high execution frequency region of the program to a low execution frequency region (ie, from inside a loop to outside the loop).

5. Strength reduction

This replaces a time-consuming operation by a faster one; for example, a multiplication can often be replaced by an addition, which is faster (eg, x * 2 can be computed as x + x).

Local Optimization

Definition : the optimizing transformations are applied over small program segments containing a few statements

The scope of local optimization is around a basic block

Definition : Basic Block

A basic block is a sequence of program statements (s1, s2, …, sn) in which only sn can be a transfer-of-control statement and only s1 can be the destination of a transfer of control.

A basic block is thus a program segment with a single entry point: if control reaches statement s1 during execution, all of the statements s1, s2, …, sn are executed.


Value numbers

Value numbers provide a simple technique for determining whether two occurrences of an expression in a basic block are equivalent. A value number vn_alpha is associated with each variable alpha; the value number of alpha changes when a statement

alpha := …

is processed. The statements in a basic block are numbered, and while processing statement n we set vn_alpha to n. A new field in the symbol table entry of a variable holds its value number, and the intermediate code (IC) is the quadruple. A Boolean flag save is associated with each quadruple to indicate whether its value should be saved for future use; the flag is false for every new quadruple entered in the table.

When a quadruple is formed for an expression e, the value numbers of its operands are copied from the symbol table. The new quadruple is compared with the existing ones in the IC; a match implies that the current expression has the same value as a previous expression. In that case the newly generated quadruple is not entered in the table; instead, the result name of the matching quadruple is used for this occurrence of the expression, and the save flag of that entry is set to true, since its value is used later.
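A minimal value-numbering pass over a basic block of quadruples might look as follows. This is a sketch under the assumption that each result is a fresh temporary; for simplicity it emits a copy in place of a redundant quadruple rather than manipulating the save flag.

```python
def value_number_block(stmts):
    """Local common-subexpression elimination by value numbering over
    quadruples (op, a, b, result).  Two occurrences of an expression are
    equivalent only if their operands carry the same value numbers."""
    vn = {}        # variable -> current value number
    table = {}     # (op, vn(a), vn(b)) -> variable holding that value
    out = []
    counter = 0

    def number(x):
        nonlocal counter
        if x not in vn:            # first occurrence gets a fresh number
            counter += 1
            vn[x] = counter
        return vn[x]

    for op, a, b, result in stmts:
        key = (op, number(a), number(b))
        if key in table:                        # same value computed before
            out.append(('copy', table[key], None, result))
        else:
            table[key] = result
            out.append((op, a, b, result))
        counter += 1
        vn[result] = counter                    # assignment: new value number
    return out

stmts = [('*', 'b', 'c', 't1'), ('*', 'b', 'c', 't2'), ('+', 'a', 't1', 't3')]
result = value_number_block(stmts)
print(result)
# [('*', 'b', 'c', 't1'), ('copy', 't1', None, 't2'), ('+', 'a', 't1', 't3')]
```

The second evaluation of b * c matches the first (same operator, same operand value numbers), so it is replaced by a copy of t1.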

Global Optimization

Definition : Optimizing transformations are applied over a program unit ie over a function or a procedure

Compared to local optimization, global optimization requires more analysis effort to establish the feasibility of an optimization. If some expression x*y occurs in a set of basic blocks SB in a program P, the occurrence in a block b1 belonging to SB can be eliminated under the following conditions:


1. Basic block b1 is executed only after some block b2 that belongs to SB has been executed one or more times

2. No assignments to x or y have been executed after the last evaluation of x*y in block b2

To ensure the above conditions the program is analysed using two techniques

1. Control flow analysis
2. Data flow analysis

Program Representation

A program is represented in the form of a program flow graph (PFG).

Definition (PFG): A program flow graph for a program P is a directed graph

GP = (N, E, n0)

where

N – the set of basic blocks in P

E – the set of directed edges (bi, bj) indicating the possibility of control flow from the last statement of bi to the first statement of bj

n0 – the start node of P

If (bi, bj) belongs to E, then bi is a predecessor of bj and bj is a successor of bi. A path is a sequence of edges such that the destination of each edge is the origin of the following edge. bi is an ancestor of bj if a path exists from bi to bj.


Control flow analysis

A path in Gp represents a possible flow of control during the execution of program P. Control flow analysis analyses the paths in a program to determine its structure, ie the presence and nesting of loops in the program.

Data flow analysis

This technique analyses the definitions and uses of data items in a program to collect information for the purpose of optimization. This information is called data flow information and is associated with the entry and exit of each basic block in Gp.

Three such data flow concepts are

1. Available expressions – used in common subexpression elimination
2. Live variables – used in dead code elimination
3. Reaching definitions – used in constant and variable propagation

Available Expressions

Definition: An expression e is available at a program point p if a value equal to its value is computed before program execution reaches p. The availability of the expression at the entry/exit of a basic block can be determined using the following rules:

1. e is available at the exit of bi if
   i) bi contains an evaluation of e which is not followed by assignments to any operand of e, or
   ii) the value of e is available at the entry to bi and bi does not contain assignments to any operands of e

2. e is available at the entry of bi if it is available at the exit of each predecessor of bi in Gp

Available expressions is a forward data flow concept, since availability at the exit of a node determines availability at the entry of its successors. It is an all-paths concept, since availability at the entry of a basic block requires availability at the exit of all its predecessors.

We use the following notations

Avail_ini – availability of expression e at entry of block bi

Avail_outi - availability of expression e at exit of block bi

Evali – ‘true’ only if expression e is evaluated in bi and none of its operands is modified following the evaluation

Modifyi - ‘true’ only if some operand of e is modified in bi

Avail_ini and Avail_outi are global properties, which can be computed as

Avail_outi = Evali OR (Avail_ini AND (NOT Modifyi))

Avail_ini = AND of Avail_outp over all predecessors bp of bi (Avail_in of the start node is false)
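An iterative solution of these properties for a single expression can be sketched as follows. The block names and the optimistic all-true initialization typical of all-paths problems are assumptions of this illustration.

```python
def solve_available(preds, eval_b, modify_b, start):
    """Iteratively solve, for one expression e:
         Avail_out[b] = Eval[b] or (Avail_in[b] and not Modify[b])
         Avail_in[b]  = AND of Avail_out[p] over predecessors p
       Forward, all-paths problem: Avail_out starts optimistically true,
       Avail_in of the start node is false."""
    avail_out = {b: True for b in preds}
    avail_in = {b: False for b in preds}
    changed = True
    while changed:
        changed = False
        for b in preds:
            ain = (b != start) and all(avail_out[p] for p in preds[b])
            aout = eval_b[b] or (ain and not modify_b[b])
            if (ain, aout) != (avail_in[b], avail_out[b]):
                avail_in[b], avail_out[b] = ain, aout
                changed = True
    return avail_in

# b1 evaluates e, b2 modifies an operand of e, b3 is reached from both.
preds = {'b1': [], 'b2': ['b1'], 'b3': ['b1', 'b2']}
eval_b = {'b1': True, 'b2': False, 'b3': False}
modify_b = {'b1': False, 'b2': True, 'b3': False}
avail = solve_available(preds, eval_b, modify_b, 'b1')
print(avail)    # {'b1': False, 'b2': True, 'b3': False}
```

e is available at the entry of b2 but not of b3, because the path through b2 kills it; availability requires it on all paths.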

Live variables

A variable var is said to be live at a program point p in basic block b if the value contained in it at p is likely to be used somewhere during the execution of the program. If var is not live at the program point that follows a definition var := …, the value assigned to var by this definition is redundant; such a definition constitutes dead code and can be eliminated.

The live variables data flow concept uses the following notation:

Live_ini : var is live at the entry of bi

Live_outi : var is live at the exit of bi

Refi : var is referenced in bi and no assignment to var precedes the reference

Defi : An assignment to var exists in bi


Live variables is a backward data flow concept, since liveness at the entry of a block determines liveness at the exit of its predecessors. It is an any-path concept, since liveness at the exit of a basic block requires liveness at the entry of only some successor.

INTERPRETERS

The use of interpreters avoids the overheads of compilation of a program. This is advantageous during program development where a program can be modified between executions. But interpretation is expensive in terms of CPU time. It is best not to interpret a program with large execution requirements.

let

tc – average compilation time per statement

te – average execution time per statement

ti – average interpretation time per statement

Then a typical relation between them is

tc ≈ ti and tc ≈ 20 · te,

ie the execution time per statement can be several times smaller than the compilation (and interpretation) time per statement.

Consider a program P; let size and stmt_executed represent the number of statements in P and the number of statements executed in some execution of P, respectively.

Use of interpreters

Interpreters are used for two reasons:

1. Simplicity
2. Efficiency


It is better to use interpretation when stmt_executed < size, and when a program is modified between executions. In other situations compilation should be used.
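Plugging the relations above into a total-cost comparison shows when interpretation wins. The numbers below are hypothetical per-statement costs in arbitrary time units, chosen to satisfy ti = tc and tc = 20 · te.

```python
tc = 20.0         # compilation time per statement
ti = tc           # interpretation time per statement
te = tc / 20      # execution time per compiled statement

def compile_and_run(size, stmts_executed):
    # compile every statement once, then execute the compiled code
    return size * tc + stmts_executed * te

def interpret_run(size, stmts_executed):
    # pay the (high) interpretation cost for every statement executed
    return stmts_executed * ti

# A 1000-statement program executing only 200 statements (eg a test run):
print(compile_and_run(1000, 200), interpret_run(1000, 200))        # 20200.0 4000.0
# The same program executing 100000 statements in production:
print(compile_and_run(1000, 100000), interpret_run(1000, 100000)) # 120000.0 2000000.0
```

With few statements executed, interpretation is cheaper; once stmt_executed greatly exceeds size, compilation wins decisively.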

It is simpler to develop an interpreter than a compiler because interpretation does not involve code generation, and this simplicity makes interpretation attractive. Interpretation is therefore the technique of choice for commands to an operating system and for commands in an editor.

Overview of interpretation

The interpreter has three main components.

1. A symbol table to hold information concerning the program entities
2. A data store to hold values of the different data types
3. A set of data manipulation routines, with a routine for every legal data manipulation action in the source language

Pure and impure Interpreters

[Figure: A pure interpreter reads the source program and its data directly. An impure interpreter first passes the source program through a preprocessor, which produces an IR that the interpreter then executes on the data.]

In a pure interpreter the source program is retained in source form throughout interpretation; this avoids compilation overheads entirely, but the analysis of each statement is repeated every time it is executed.

An impure interpreter performs some preliminary processing of the source program to minimize the analysis overheads during interpretation: a preprocessor converts the program to an intermediate code or intermediate representation (IR), which is then used during interpretation. This speeds up interpretation.
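A toy pure interpreter with the three components listed above might be sketched as follows. In this illustration the symbol table and the data store are merged into one dictionary, and the data manipulation routines are a table mapping each operator to a Python function; the statement syntax ("x = 5", "x = y + z") is an assumption of the sketch.

```python
import operator

# data manipulation routines: one per legal operator
ops = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def interpret(program):
    store = {}                       # symbol table + data store combined
    def value(tok):
        # a token is either a known variable or an integer literal
        return store[tok] if tok in store else int(tok)
    for line in program:             # source retained and analysed each time
        target, expr = [s.strip() for s in line.split('=')]
        toks = expr.split()
        if len(toks) == 1:           # simple assignment: x = 5 or x = y
            store[target] = value(toks[0])
        else:                        # binary operation: x = y op z
            a, op, b = toks
            store[target] = ops[op](value(a), value(b))
    return store

print(interpret(["a = 2", "b = 3", "c = a * b", "c = c + 1"]))
# {'a': 2, 'b': 3, 'c': 7}
```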

UNIT - IV

DEFINITION OF OPERATING SYSTEM:


In the 1960s an operating system was defined as the software that controls the hardware, but a better definition is needed: an operating system consists of the programs, implemented in either software or firmware, that make the hardware usable. Hardware provides "raw computing power"; operating systems make this computing power conveniently available to users, and they manage the hardware carefully to achieve good performance.

Operating systems are primarily resource managers; the main resource they manage is computer hardware, in the form of processors, storage, input/output devices, communication devices, and data. Operating systems perform many functions, such as implementing the user interface, sharing hardware among users, allowing users to share data among themselves, preventing users from interfering with one another, scheduling resources among users, facilitating input/output, recovering from errors, accounting for resource usage, facilitating parallel operations, organizing data for secure and rapid access, and handling network communications.

OPERATING SYSTEM OBJECTIVES AND FUNCTIONS:

An operating system is a program that controls the execution of application programs and acts as an interface between the user of a computer and the computer hardware. An operating system can be thought of as having three objectives or performing three functions:

1) Convenience:

An operating system makes a computer more convenient to use.

2) Efficiency:

An operating system allows the computer system resources to be used in an efficient manner.

3) Ability to evolve:

An operating system should be constructed in such a way as to permit the effective development, testing and introduction of new system functions without at the same time interfering with service.

EARLY HISTORY:

The 1940’s and the 1950’s:


Operating systems have evolved over the last four decades through a number of distinct phases or generations. In the 1940s, the earliest electronic digital computers had no operating systems. On machines of that time, programs were entered one bit at a time on rows of mechanical switches. Eventually, machine language programs were entered on punched cards, and assembly languages were developed to speed the programming process.

The General Motors research laboratories implemented the first operating system in the early 1950s for their IBM 701. The systems of the 1950s generally ran only one job at a time and smoothed the transition between jobs to get maximum utilization of the computer system. These were called single-stream batch processing systems, because programs and data were submitted in groups, or batches.

The 1960’s:

The systems of the 1960’s were also batch processing systems, but they were able to take better advantage of the computer’s resources by running several jobs at once. They contained many peripheral devices such as card readers, card punches, printers, tape drives and disk drives. Any one job rarely utilised all a computer’s resources effectively. Operating system designers that when one job was waiting for an i/o operation to complete before the job could continue using the processor, some other job could use the idle processor. Similarly, when one job was using the processor other job could be using the various input /output devices. In fact running a mixture of diverse jobs appeared to be the best way to optimize computer utilization. So operating system designers developed the concept of in which several jobs are in main memory at once, a processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use.

More advanced operating systems were developed to service multiple interactive users at once. Timesharing systems were developed to multiprogram large numbers of simultaneous interactive users. Many of the time-sharing systems of the 1960s were multimode systems, supporting batch processing as well as real-time applications. Real-time systems are characterized by supplying immediate response.

The key time-sharing development efforts of this period included the CTSS system developed at MIT, the TSS system developed by IBM, and the Multics system developed at MIT as the successor to CTSS. Turnaround time, that is, the time between submission of a job and the return of results, was reduced to minutes or even seconds.

THE EMERGENCE OF A NEW FIELD: SOFTWARE ENGINEERING

In the operating systems developed during the 1960s, endless hours and countless dollars were spent detecting and removing bugs that should never have entered the systems in the first place. So much attention was given to these problems of constructing software systems that a new field emerged: software engineering, which is concerned with developing a disciplined and structured approach to the construction of reliable, understandable and maintainable software.

The 1970’s:

The systems of the 1970’s were primarily multimode timesharing systems that supported batch processing, time - sharing, and real-time applications. Personal computing was in its incipient stages. Communication in local area networks was made practical and economical by the ethernet. Security problems increased with the huge volumes of information passing over vulnerable communication lines. Encryption and decrytion received much attention.

The 1980’s:

The 1980’s was the decade of the personal computer and the workstation. Individuals could have their own dedicated computers for performing the bulk of their work, and they use communication facilities for transmitting data between systems. Computing was distributed to the sites at which it was needed rather than bringing the data to be processed to some central, large - scale, computer installation. The key was to transfer information between computers in computer networks. E-mail file transfer and remote database access applications and client/server model become widespread.

The 1990’s and beyond:

In 1990’s the distributed computing were used in which computations will be paralleled into sub - computations that can be executed on other processors in multiprocessor computers and in computer networks. Networks will be dynamically configured new devices and s/w are added/removed. When new server is added, it will make itself known to the server tells the networks about its capabilities, billing policies accessibility and forth client need not know all the details of the networks instead they contact locating brokers for the services provided by servers. The locating brokers know which servers are available, where they are, and how to access them. This kind of connectivity will be facilitated by open system standards and protocols.

Computing is destined to become very powerful and very portable. In recent years, laptop computers have been introduced that enable people to carry their computers with them wherever they go. With the development of the OSI communication protocols and the integrated services digital network (ISDN), people will be able to communicate and transmit data worldwide with high reliability.


UNIX:

The UNIX operating system was originally designed in the late 1960s, and its power and elegance attracted researchers in the universities and in industry. UNIX is the only operating system that has been implemented on computers ranging from micros to supercomputers.

PROCESS CONCEPTS:

The notion of process is central to the understanding of today's computer systems, which perform and keep track of many simultaneous activities.

DEFINITIONS OF “PROCESS”:

The term "process" was first used by the designers of the Multics system in the 1960s.

Some definitions are as follows.

A program in execution.
An asynchronous activity.
The "animated spirit" of a procedure.
The "locus of control" of a procedure in execution.
That which is manifested by the existence of a "process control block" in the operating system.
That entity to which processors are assigned.
The "dispatchable" unit.

PROCESS STATES:

A process goes through a series of discrete process states. Various events can cause a process to change states.

A process is said to be running (ie., in the running state) if it currently has the CPU. A process is said to be ready (ie., in the ready state) if it could use a CPU if one were available. A process is said to be blocked (ie., in the blocked state) if it is waiting for some event to happen (such as an I/O completion event) before it can proceed.


For example, in a single-CPU system only one process can run at a time, but several processes may be ready and several may be blocked. The system therefore maintains a ready list of ready processes and a blocked list of blocked processes. The ready list is kept in priority order, so that the next process to receive the CPU is the first process on the list.

PROCESS STATE TRANSITIONS:

When a job is admitted to the system, a corresponding process is created and normally inserted at the back of the ready list. The process gradually moves to the head of the ready list as the processes before it complete their turns at using the CPU. When the process reaches the head of the list and the CPU becomes available, the process makes a state transition from the ready state to the running state. The assignment of the CPU to the first process on the ready list is called dispatching, and it is performed by a system entity called the dispatcher. We indicate this transition as follows:

Dispatch ( processname ) : ready --> running.

To prevent any one process from monopolizing the system, the operating system sets a hardware interrupting clock (or interval timer) to allow the process to run for a specific time interval, or quantum. If the process does not leave the CPU before the time interval expires, the interrupting clock generates an interrupt, causing the operating system to regain control. The operating system then makes the previously running process ready and makes the first process on the ready list running. These state transitions are indicated as

timerunout ( processname ) : running --> ready

and dispatch ( processname ) : ready --> running

PROCESS STATE TRANSITIONS


If a running process initiates an input/output operation before its quantum expires, the running process voluntarily leaves the CPU. This state transition is

block ( processname ) : running --> blocked

When an input/output operation (or some other event the process is waiting for) completes, the process makes the transition from the blocked state to the ready state. The transition is

wakeup ( processname ) : blocked --> ready
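The four transitions above can be sketched as a toy state machine. The dictionary-based process records and the two list names are illustrative assumptions for this sketch, not part of any real operating system:

```python
# A minimal sketch of the four process state transitions described above.
READY, RUNNING, BLOCKED = "ready", "running", "blocked"

ready_list = []    # maintained in priority order; the head runs next
blocked_list = []

def dispatch(process):
    """ready --> running: assign the CPU to the first ready process."""
    assert process["state"] == READY and ready_list[0] is process
    ready_list.pop(0)
    process["state"] = RUNNING

def timerunout(process):
    """running --> ready: the quantum expired on the interrupting clock."""
    assert process["state"] == RUNNING
    process["state"] = READY
    ready_list.append(process)   # back of the ready list

def block(process):
    """running --> blocked: the process initiated an I/O operation."""
    assert process["state"] == RUNNING
    process["state"] = BLOCKED
    blocked_list.append(process)

def wakeup(process):
    """blocked --> ready: the awaited event (e.g. I/O completion) occurred."""
    assert process["state"] == BLOCKED
    blocked_list.remove(process)
    process["state"] = READY
    ready_list.append(process)

p = {"name": "A", "state": READY}
ready_list.append(p)
dispatch(p); block(p); wakeup(p)
print(p["state"])   # the process is ready again
```

Running the sequence dispatch, block, wakeup leaves the process back on the ready list, mirroring the transition diagram.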

THE PROCESS CONTROL BLOCK(PCB):

The PCB is a data structure containing certain important information about the process, including:

The current state of the process.
Unique identification of the process.
A pointer to the process's parent (i.e., the process that created this process).
Pointers to the process's child processes (i.e., processes created by this process).
The process's priority.
Pointers to locate the process's memory.
Pointers to allocated resources.
A register save area.
The processor it is running on (in a multiprocessor system).

The PCB is a central store of information that allows the operating system to locate all key information about a process. The PCB is the entity that defines a process to the operating system.
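The fields listed above can be sketched as a record type. The field names here are illustrative assumptions; real systems use system-specific layouts:

```python
# A sketch of a PCB holding the fields listed above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    state: str                    # current state of the process
    pid: int                      # unique identification of the process
    parent: "PCB" = None          # pointer to the process's parent
    children: list = field(default_factory=list)  # child processes
    priority: int = 0             # the process's priority
    memory: object = None         # pointers to locate the process's memory
    resources: list = field(default_factory=list) # allocated resources
    registers: dict = field(default_factory=dict) # register save area
    cpu: "int" = None             # processor it runs on (multiprocessor)

init = PCB(state="running", pid=1)
child = PCB(state="ready", pid=2, parent=init)
init.children.append(child)
print(child.parent.pid)   # 1 -- the OS can walk from child to creator
```

Because every key fact about a process hangs off this one record, the operating system can locate all of a process's information from a single pointer, which is exactly the role the text assigns to the PCB.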

INTERRUPT PROCESSING:

An interrupt is an event that alters the sequence in which a processor executes instructions. It is generated by the hardware of the computer system. When an interrupt occurs:

The operating system gains control.
The operating system saves the state of the interrupted process. In many systems this information is stored in the interrupted process's PCB.
The operating system analyzes the interrupt and passes control to the appropriate routine to handle the interrupt.
The interrupt handler routine processes the interrupt.
The state of the interrupted process is restored.
The interrupted process executes.

An interrupt may be initiated by a running process, in which case it is called a trap and is said to be synchronous with the operation of the process; or it may be caused by some event that may or may not be related to the running process, in which case it is said to be asynchronous with the operation of the process.

INTERRUPT CLASSES:

There are six interrupt classes. They are

* SVC (Supervisor Call) interrupts:

These are initiated by a running process that executes the SVC instruction. An SVC is a user-generated request for a particular system service, such as performing input/output, obtaining more storage, or communicating with the system operator.

* I/O interrupts:


These are initiated by the input/output hardware. They signal to the CPU that the status of a channel or device has changed. For example, they are caused when an I/O operation completes or when an I/O error occurs.

* External interrupts:

These are caused by various events including the expiration of a quantum on an interrupting clock or the receipt of a signal from another processor on a multiprocessor system.

* Restart interrupts:

These occur when the operator presses the restart button, or on arrival of a restart signal processor instruction from another processor on a multiprocessor system.

* Program check interrupts:

These may occur as a program's machine language instructions are executed. They signal problems such as division by zero, arithmetic overflow or underflow, data in the wrong format, an attempt to execute an invalid operation code, or an attempt to reference a memory location that does not exist or a protected resource.

* Machine check interrupts:

These are caused by malfunctioning hardware.

CONTEXT SWITCHING:

The operating system includes routines called first level interrupt handlers (FLIHs) to process each different class of interrupt. Thus there are six first level interrupt handlers: the SVC FLIH, the I/O FLIH, the external FLIH, the restart FLIH, the program check FLIH and the machine check FLIH. When an interrupt occurs, the operating system saves the status of the interrupted process and routes control to the appropriate first level interrupt handler. This is accomplished by a technique called context switching. The first level interrupt handlers must then distinguish between interrupts of the same class; processing of these different interrupts is then carried out by various second level interrupt handlers.

Program status words (PSWs) control the order of instruction execution and contain various information about the state of a process. There are three types of PSWs, namely current PSWs, new PSWs and old PSWs.

PSW swapping in interrupt processing

The address of the next instruction to be executed is kept in the current PSW. On a uniprocessor system there is only one current PSW, but there are six new PSWs (one for each interrupt type) and six old PSWs (one for each interrupt type). The new PSW for a given interrupt type contains the permanent main memory address at which the interrupt handler for that interrupt type resides. When an interrupt occurs,

(Figure: the six new PSWs and six old PSWs, one of each for the supervisor call, input/output, machine check, restart, external, and program check interrupt types.)


if the processor is not disabled for that type of interrupt, then the hardware automatically switches PSWs by

Storing the current PSW in the old PSW for that type of interrupt.
Storing the new PSW for that type of interrupt into the current PSW.

After this PSW swap, the current PSW contains the address of the appropriate interrupt handler. The interrupt handler executes and processes the interrupt. When the processing of the interrupt is complete, the CPU is dispatched either to the process that was running at the time of the interrupt or to the highest priority ready process.
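The PSW swap can be modeled in a few lines. This is a toy model only: the handler addresses are made-up numbers, and real hardware performs the swap atomically in silicon, not in software:

```python
# A toy model of the PSW swap: one current PSW, plus a new and an old PSW
# slot per interrupt class.
CLASSES = ["svc", "io", "external", "restart", "program_check", "machine_check"]

# New PSWs hold the permanent address of each class's first level handler.
new_psw = {c: {"next_instruction": 0x1000 + i * 0x100}
           for i, c in enumerate(CLASSES)}
old_psw = {c: None for c in CLASSES}
current_psw = {"next_instruction": 0x5000}   # interrupted user program

def interrupt(klass):
    """Hardware PSW swap: current -> old[klass], new[klass] -> current."""
    global current_psw
    old_psw[klass] = current_psw            # save the interrupted state
    current_psw = dict(new_psw[klass])      # control passes to the handler

interrupt("io")
print(hex(current_psw["next_instruction"]))   # address of the I/O FLIH
print(hex(old_psw["io"]["next_instruction"])) # where the user program was
```

After the swap the current PSW points at the I/O first level interrupt handler, and the old PSW for the I/O class preserves the interrupted program's address so it can be resumed later.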

SEMAPHORES:

A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation we call semaphoreinitialize. Binary semaphores can assume only the value 0 or the value 1. Counting semaphores (or general semaphores) can assume any nonnegative integer value.

The P operation on semaphore S, written P(S), operates as follows

If S > 0

Then S : = S - 1

Else ( Wait on S )

The V operation on semaphore S, written V ( S ), operates as follows,

If ( one or more processes are waiting on S )

Then ( let one of these processes proceed )

Else S := S+1
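The P and V definitions above can be sketched in Python. The class and method names follow the text; using a lock and condition variable to make the test-and-decrement indivisible is one possible implementation choice, assumed here for illustration:

```python
# A sketch of P and V over an integer counter, protected so the
# test-and-decrement is indivisible.
import threading

class Semaphore:
    def __init__(self, value=0):
        self._value = value                 # semaphoreinitialize
        self._cond = threading.Condition()

    def P(self):
        with self._cond:
            while self._value == 0:         # else (wait on S)
                self._cond.wait()
            self._value -= 1                # then S := S - 1

    def V(self):
        with self._cond:
            self._value += 1                # S := S + 1 ...
            self._cond.notify()             # ... and let one waiter proceed

s = Semaphore(1)   # a binary semaphore initialized to 1
s.P()              # succeeds: S was 1, now 0
s.V()              # S is back to 1
print(s._value)    # 1
```

A second P while the value is 0 would wait until some other process performs a V, which is exactly the "wait on S" branch of the definition.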

PROCESS SYNCHRONIZATION WITH SEMAPHORES:


When a process issues an input/output request, it blocks itself to await the completion of the I/O. Some other process must awaken the blocked process. Such an interaction is an example of a block/wakeup protocol.

Suppose one process wants to be notified about the occurrence of a particular event, and some other process is capable of detecting that the event has occurred. A simple two-process block/wakeup synchronization mechanism of this kind can be implemented with semaphore operations.

THE PRODUCER-CONSUMER RELATIONSHIP:

When one process passes data to another process, the transmission is an example of interprocess communication.

Consider the following producer-consumer relationship. Suppose one process, a producer, is generating information that a second process, a consumer, is using. Suppose they communicate by a single shared integer variable, numberbuffer. The producer does some calculations and then writes the result into numberbuffer; the consumer reads the data from numberbuffer and prints it.

Suppose the speeds of the processes are mismatched. If the consumer is operating faster than the producer, the consumer could read and print the same number twice before this producer deposits the next number. If the producer is operating faster than the consumer, the producer could overwrite its previous result before the consumer has had a chance to read it and print it; a fast producer could in fact do this several times so that many results would be lost.

The producer and the consumer must cooperate in such a manner that data written to numberbuffer are neither lost nor duplicated. Enforcing such behavior is an example of process synchronization.

The algorithm below shows a concurrent program that uses semaphore operations to implement a producer-consumer relationship. Here there are two semaphores: numberdeposited is indicated (V'd) by the producer and tested (P'd) by the consumer; the consumer cannot proceed until a number has been deposited in numberbuffer. The semaphore numberretrieved is indicated (V'd) by the consumer and tested (P'd) by the producer; the producer cannot proceed until a number already in the buffer has been retrieved. The initial settings of the semaphores force the producer to deposit a value in numberbuffer before the consumer can proceed.

Producer-consumer relationship implemented with semaphores.

program producerconsumerrelationship;
var numberbuffer : integer;
    numberdeposited : semaphore;
    numberretrieved : semaphore;

procedure producerprocess;
var nextresult : integer;
begin
    while true do
    begin
        calculate nextresult;
        P( numberretrieved );
        numberbuffer := nextresult;
        V( numberdeposited )
    end
end;

procedure consumerprocess;
var nextresult : integer;
begin
    while true do
    begin
        P( numberdeposited );
        nextresult := numberbuffer;
        V( numberretrieved );
        write( nextresult )
    end
end;

begin
    semaphoreinitialize( numberdeposited, 0 );
    semaphoreinitialize( numberretrieved, 1 );
    parbegin
        producerprocess;
        consumerprocess
    parend
end;
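The same producer-consumer relationship can be sketched with Python threads, where semaphore acquire plays the role of P and release the role of V. One deliberate change from the pseudocode: the loops produce a fixed number of items rather than running forever, so the sketch terminates:

```python
# Producer-consumer with two semaphores, mirroring the program above.
import threading

numberbuffer = None
numberdeposited = threading.Semaphore(0)   # semaphoreinitialize(..., 0)
numberretrieved = threading.Semaphore(1)   # semaphoreinitialize(..., 1)
results = []

def producerprocess(n):
    global numberbuffer
    for nextresult in range(n):            # "calculate nextresult"
        numberretrieved.acquire()          # P( numberretrieved )
        numberbuffer = nextresult
        numberdeposited.release()          # V( numberdeposited )

def consumerprocess(n):
    for _ in range(n):
        numberdeposited.acquire()          # P( numberdeposited )
        nextresult = numberbuffer
        numberretrieved.release()          # V( numberretrieved )
        results.append(nextresult)         # "write( nextresult )"

p = threading.Thread(target=producerprocess, args=(5,))
c = threading.Thread(target=consumerprocess, args=(5,))
p.start(); c.start(); p.join(); c.join()
print(results)   # [0, 1, 2, 3, 4] -- nothing lost, nothing duplicated
```

Because the two semaphores force strict alternation of deposit and retrieve, every value reaches the consumer exactly once, no matter how the two threads' speeds are mismatched.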

COUNTING SEMAPHORE:

Counting semaphores are particularly useful when a resource is to be allocated from a pool of identical resources. The semaphore is initialized to the number of resources in the pool. Each P operation decrements the semaphore by 1, indicating that another resource has been removed from the pool and is in use by a process. Each V operation increments the semaphore by 1, indicating that a process has returned a resource to the pool, and the resource may be reallocated to another process. If a P operation is attempted when the semaphore has been decremented to zero, then the process must wait until a resource is returned to the pool by a V operation.
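The pool-allocation pattern above can be sketched with a counting semaphore. The pool size and the idea of modeling tape drives are illustrative assumptions:

```python
# Allocating from a pool of identical resources with a counting semaphore.
import threading

POOL_SIZE = 3
pool = threading.Semaphore(POOL_SIZE)  # initialized to the number of resources

held = 0
for _ in range(POOL_SIZE):
    pool.acquire()        # P: remove one resource from the pool
    held += 1

# The semaphore has been decremented to zero, so a further P must wait;
# a non-blocking attempt simply fails instead of waiting.
print(pool.acquire(blocking=False))   # False -- no resource available

pool.release()            # V: a process returns a resource to the pool
print(pool.acquire(blocking=False))   # True -- it can be reallocated
```

The non-blocking acquire is used here only to make the "must wait" case visible without deadlocking the sketch; a real requester would block until some V returns a resource.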

DEADLOCK AND INDEFINITE POSTPONEMENT:

A process in a multiprogramming system is said to be in a state of deadlock if it is waiting for a particular event that will not occur. In multiprogrammed computing systems, resource sharing is one of the primary goals of the operating system. When resources are shared among a population of users, each of whom maintains exclusive control over particular resources allocated to that user, it is possible for deadlocks to develop in which the processes of some users will never be able to finish.

EXAMPLES OF DEADLOCK:

If a process is given the task of waiting for an event to occur, and if the system includes no provision for signaling that event, then we have a one-process deadlock. Several common examples of deadlock are:


A Traffic Deadlock:

A number of automobiles are attempting to drive through a busy section of the city, but the traffic has become completely jammed. Traffic comes to a halt, and it is necessary for the police to unwind the jam by slowly and carefully backing cars out of the area. Eventually the traffic begins to flow normally, but not without much annoyance, effort and the loss of considerable time.

A Simple Resource Deadlock:

A simple example of a resource deadlock is illustrated below.

This resource allocation graph shows two processes as rectangles and two resources as circles. An arrow from a resource to a process indicates that the resource belongs to, or has been allocated to, the process. An arrow from a process to a resource indicates that the process is requesting, but has not yet been allocated, the resource. The diagram illustrates a deadlocked system: process A holds Resource 1 and needs Resource 2 to continue. Process B holds Resource 2 and needs Resource 1 to continue.

(Figure: Resource 1 is allocated to Process A; Process A is requesting Resource 2; Resource 2 is allocated to Process B; Process B is requesting Resource 1.)


Each process is waiting for the other to free a resource that it will not free until the other frees its resource, which it will not do until the other frees its resource, and so on. This circular wait is characteristic of deadlocked systems.

DEADLOCK IN SPOOLING SYSTEMS:

Spooling systems are often prone to deadlock. A spooling system is used to improve system throughput by disassociating a program from the slow operating speeds of devices such as printers. For example, if a program sending lines to the printer had to wait for each line to be printed before it could transmit the next line, then the program would execute slowly. To speed the program's execution, output lines are routed to a much faster device, such as a disk drive, where they are temporarily stored until they may be printed. In some spooling systems, the complete output from a program must be available before actual printing can begin. Thus several partially complete jobs generating print lines to a spool file could become deadlocked if the available space fills before any job completes. Unwinding or recovering from such a deadlock might involve restarting the system, with the loss of all work performed so far.

A RELATED PROBLEM: INDEFINITE POSTPONEMENT

In any system that keeps processes waiting while it makes resource allocation and process scheduling decisions, it is possible to delay indefinitely the scheduling of a process while other processes receive the system's attention. This situation is called indefinite postponement, indefinite blocking, or starvation.

Indefinite postponement may occur because of resource scheduling policies. When resources are scheduled on a priority basis, it is possible for a given process to wait for a resource indefinitely as processes with higher priorities continue arriving. In some systems, indefinite postponement is prevented by allowing a process’s priority to increase as it waits for a resource. This is called aging.

RESOURCE CONCEPTS:

An operating system is primarily a resource manager. It is responsible for the allocation of a vast array of resources of various types. We first consider resources that are preemptible, such as the CPU and main memory; a user program currently occupying a particular range of locations in main memory may be removed or preempted by another program. The CPU must be rapidly switched among a large number of processes competing for system service to keep all those processes progressing at a reasonable pace.

Certain resources are nonpreemptible and cannot be removed from the processes to which they are assigned. For example, tape drives are normally assigned to a particular process for periods of several minutes or hours.

While a tape drive belongs to one process, it cannot be taken away and given to another. Code that cannot be changed while in use is said to be reentrant. Code that may be changed but is reinitialized each time it is used is said to be serially reusable.

FOUR NECESSARY CONDITIONS FOR DEADLOCK:

Coffman, Elphick and Shoshani stated the following four necessary conditions that must be in effect for a deadlock to exist.

Processes claim exclusive control of the resources they require (mutual exclusion condition).

Processes hold resources already allocated to them while waiting for additional resources (wait-for condition).

Resources cannot be removed from the processes holding them until the resources have been used to completion (no-preemption condition).

A circular chain of processes exists in which each process holds one or more resources that are requested by the next process in the chain (circular wait condition).

MAJOR AREA OF DEADLOCK RESEARCH:

Deadlock has been one of the more productive research areas in computer science and operating systems. There are four areas of interest in deadlock research: deadlock prevention, deadlock avoidance, deadlock detection and deadlock recovery.

In deadlock prevention our concern is to condition a system to remove any possibility of deadlocks occurring. In deadlock avoidance the goal is to impose less stringent conditions than in prevention, in an attempt to get better resource utilization while still avoiding deadlock. The goal of deadlock detection is to determine whether a deadlock has occurred and to identify the processes and resources involved. Deadlock recovery methods are used to clear deadlocks from a system so that it may proceed to operate free of the deadlock, and so that the deadlocked processes may complete their execution and free their resources.

Deadlock prevention:

Havender concluded that if any of the four necessary conditions is denied, a deadlock cannot occur.

Each process must request all its required resources at once and cannot proceed until all have been granted.

If a process holding certain resources is denied a further request, that process must release its original resources and, if necessary, request them again together with the additional resources.

Impose a linear ordering of resource types on all processes, ie., if a process has been allocated resources of a given type, it may subsequently request only those resources of types later in the ordering.

Denying the “wait for” condition:

Havender's first strategy requires that all of the resources a process will need must be requested at once. If the set of resources needed by a process is available, then the system may grant all of those resources to the process at one time and the process will be allowed to proceed. If the complete set is not available, then the process must wait until the complete set is available. While the process waits, however, it may not hold any resources. Thus the “wait for” condition is denied and deadlocks simply cannot occur.

DENYING THE ‘NO-PREEMPTION’ CONDITION:

Havender's second strategy requires that when a process holding resources is denied a request for additional resources, it must release its held resources and, if necessary, request them again together with the additional resources. Implementation of this strategy effectively denies the “no preemption” condition: resources can be removed from the processes holding them prior to the completion of those processes.

When a process releases resources, it may lose all its work to that point. One serious consequence of this strategy is the possibility of indefinite postponement.

DENYING THE “CIRCULAR WAIT” CONDITION:


Havender's third strategy denies the possibility of a circular wait. Because all resources are uniquely numbered, and because processes must request resources in linear ascending order, a circular wait cannot develop.

Resources must be requested in ascending order by resource number. Resource numbers are assigned for the installation and must be “lived with” for long periods. If new resource types are added at an installation, existing programs and systems may have to be rewritten.

Clearly, when resource numbers are assigned, they should reflect the normal ordering in which most jobs actually use resources. For jobs matching this ordering, efficient operation may be expected.

One of the most important goals in today’s operating systems is to create user-friendly environments.

DEADLOCK AVOIDANCE AND THE BANKER’S ALGORITHM:

Deadlock can be avoided by being careful when resources are allocated. The most famous deadlock avoidance algorithm is Dijkstra's Banker's Algorithm, so called because it involves a banker who makes loans and receives payments from a given source of capital.

DIJKSTRA’S BANKERS ALGORITHM:

Here we consider resources of a single type; for example, consider the allocation of a quantity, t, of identical tape drives.

An operating system shares a fixed number of equivalent tape drives, t, among a fixed number of users u. Each user specifies in advance the maximum number of tape drives required during the execution of the job on the system.

The operating system will accept a user's request if that user's maximum need for tape drives does not exceed t.

A user may obtain or release tape drives one by one. Sometimes, a user may have to wait to obtain an additional tape drive, but the operating system guarantees a finite wait. The current number of tape drives allocated to a user will never exceed that user’s stated maximum need.

If the operating system is able to satisfy a user's maximum need for tape drives, then the user guarantees that the tape drives will be used and released to the operating system within a finite time.


The current state of the system is called safe if it is possible for the operating system to allow all current users to complete their jobs within a finite time. If not, then the current system state is called unsafe.

Now suppose there are n users, where:

Loan(i) represents user i's current loan of tape drives.
Max(i) is the maximum need of user i.
Claim(i) is the current claim of user i, where a user's claim equals that user's maximum need minus current loan.

EXAMPLE OF A SAFE STATE:

Suppose a system has twelve equivalent tape drives and three users sharing the drives, as in State I:

State I

            Current loan    Maximum need
User(1)          1               4
User(2)          4               6
User(3)          5               8
Available        2

This state is “safe” because it is possible for all three users to finish. User(2) currently has a loan of four tape drives and will eventually need a maximum of six, or two additional drives. The system has twelve drives, of which ten are currently in use and two are available. If these two available drives are given to user(2), fulfilling user(2)'s maximum need, then user(2) may run to completion. User(2) upon finishing will release all six tape drives, which the system may then assign to user(1) and user(3). User(1) has one tape drive and will eventually need three more; user(3) has five and will eventually need three more. If user(2) returns six, then three may be given to user(1), who may then finish and return four tape drives to the system, after which user(3) may finish. Thus the key to a state being safe is that there is at least one way for all users to finish.

EXAMPLE OF AN UNSAFE STATE:

Assume a system's twelve tape drives are allocated as in State II:

State II

            Current loan    Maximum need
User(1)          8              10
User(2)          2               5
User(3)          1               3
Available        1

Here eleven of the system’s twelve tape drives are currently in use and only one drive is available for allocation. No matter which user requests the available drive, we cannot guarantee that all three users will finish.

EXAMPLE OF SAFE STATE TO UNSAFE STATE TRANSITION:

That a state is known to be safe does not imply that all future states will be safe. So a resource allocation policy must carefully consider all resource requests before granting them. For example, consider State III:

State III

            Current loan    Maximum need
User(1)          1               4
User(2)          4               6
User(3)          5               8
Available        2

Now suppose user(3) requests an additional resource. If the system were to grant this request, then the new state would be State IV:

State IV

            Current loan    Maximum need
User(1)          1               4
User(2)          4               6
User(3)          6               8
Available        1


The state has gone from a safe state to an unsafe one. State IV characterizes a system in which completion of all user processes cannot be guaranteed.

BANKER’S ALGORITHM RESOURCE ALLOCATION:

In Dijkstra's Banker's Algorithm the “mutual exclusion”, “wait-for”, and “no-preemption” conditions are allowed. The system grants only requests that result in safe states. A user's request that would result in an unsafe state is repeatedly denied until that request can eventually be satisfied.
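The safety test at the heart of this policy can be sketched for a single resource type. The function below is an illustrative version (the name is ours), checked against the twelve-tape-drive states discussed above:

```python
# A sketch of the Banker's Algorithm safe-state test for one resource type.
def is_safe(loans, maxneeds, available):
    """A state is safe if some order lets every user run to completion."""
    loans, finished = list(loans), [False] * len(loans)
    while True:
        for i, done in enumerate(finished):
            claim = maxneeds[i] - loans[i]   # claim = max need - current loan
            if not done and claim <= available:
                available += loans[i]        # user i finishes, returns loan
                finished[i] = True
                break                        # rescan from the start
        else:
            # no user could be completed this pass
            return all(finished)

print(is_safe([1, 4, 5], [4, 6, 8], 2))    # States I/III: True (safe)
print(is_safe([8, 2, 1], [10, 5, 3], 1))   # State II: False (unsafe)
print(is_safe([1, 4, 6], [4, 6, 8], 1))    # State IV: False (unsafe)
```

A request is granted only if the state that would result still passes this test; granting user(3)'s request in State III produces State IV, which the test rejects.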

Weaknesses in the Banker's Algorithm:

The algorithm requires a fixed number of resources to allocate. Because resources periodically require service, whether for breakdowns or maintenance, the number of resources cannot be regarded as fixed.

The algorithm requires that the population of users remain fixed. In today's multiprogrammed systems, the user population is constantly changing.

The algorithm requires that the banker grant all requests within a finite time. Clearly much better guarantees than this are needed in real systems.

Similarly, the algorithm requires that clients repay all loans within a finite time. Much better guarantees than this are needed in real systems.

The algorithm requires that users state their maximum needs in advance. It is difficult to predict a user's maximum need.

DEADLOCK DETECTION:

Deadlock detection is the process of actually determining that a deadlock exists and of identifying the processes and resources involved in the deadlock.

RESOURCE ALLOCATION GRAPHS:

To detect deadlocks a popular notation is used in which a directed graph indicates resource allocation and requests. Squares represent processes and large circles represent classes of identical devices. Small circles drawn inside large circles indicate the number of identical devices of each class. Diagram:

REDUCTION OF RESOURCE ALLOCATION GRAPHS:

One technique useful for detecting deadlocks involves graph reductions, in which the processes that may complete their execution and the processes that will remain deadlocked are determined. If a process's resource requests may be granted, then we say that the graph may be reduced by that process. The reduction of a graph by a process is shown by removing the arrows to that process from resources and by removing the arrows from that process to resources. If a graph can be reduced by all its processes, then there is no deadlock. If a graph cannot be reduced by all its processes, then the irreducible processes constitute the set of deadlocked processes in the graph.
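Graph reduction can be sketched for the simple case of single-unit resource classes. The dictionary representation and function name are illustrative assumptions: allocation maps each resource to the process holding it, and request maps a process to the resource it is waiting for:

```python
# A sketch of graph reduction for deadlock detection (single-unit resources).
def deadlocked(allocation, request):
    """Return the set of irreducible (deadlocked) processes."""
    procs = set(allocation.values()) | set(request)
    reduced = True
    while reduced:
        reduced = False
        for p in list(procs):
            res = request.get(p)
            # p is reducible if it is not waiting, or its request is free
            if res is None or allocation.get(res) is None:
                # reduce by p: remove its arrows, releasing its resources
                for r, holder in list(allocation.items()):
                    if holder == p:
                        del allocation[r]
                procs.discard(p)
                reduced = True
    return procs   # whatever could not be reduced is deadlocked

# The two-process circular wait from the resource deadlock example:
print(deadlocked({"R1": "A", "R2": "B"}, {"A": "R2", "B": "R1"}))  # {'A','B'}

# With no cycle, every process reduces and the result is empty:
print(deadlocked({"R1": "A"}, {"B": "R1"}))  # set()
```

In the first call neither process can be reduced, so both are reported deadlocked; in the second, A finishes and releases R1, after which B's request can be granted and the graph reduces completely.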

DEADLOCK RECOVERY:

Once a system has become deadlocked, the deadlock must be broken by removing one or more of the necessary conditions. Several processes will lose some or all of the work they have accomplished.

Recovery is done by forcibly removing a process from the system and reclaiming its resources. The removed process is ordinarily lost, but the remaining processes may now be able to complete. Sometimes it is necessary to remove several processes until sufficient resources have been reclaimed to allow the remaining process to finish. Processes may be removed according to some priority order.

STORAGE ORGANIZATION:

Storage organization means how the main storage is viewed:

*do we place only a single user in the main storage or

*do we place several users in it at the same time?

*if several user programs are in the main storage at the same time

(1) Do we give each of them the same amount of space, or
(2) Do we divide main storage into portions called partitions of different sizes?
(3) Do we provide more dynamic partitioning, or
(4) Do we allow jobs to run anywhere they will fit?
(5) Do we require that each job be placed in one contiguous block of storage locations?

STORAGE MANAGEMENT:

Storage management strategies determine how a particular storage organization performs under various policies.

*when do we get a new program to place in the memory?

*do we get it when the system specifically asks for it,or


*do we attempt to anticipate the systems requests?

*where in main storage do we place the next program to be run?

*do we place the program as close as possible into available memory slots to minimize wasted space?

If a new program needs to be placed in main storage and main storage is currently full, which of the other programs do we displace? Should we replace the oldest programs, or should we replace those that are least frequently used or least recently used?

STORAGE MANAGEMENT STRATEGIES:

Storage management strategies aim to obtain the best possible use of the main storage resource. They are divided into the following categories:

1.fetch strategies

(a)demand fetch strategies

(b)anticipatory fetch strategies

2.placement strategies

3.replacement strategies.

Fetch strategies are concerned with when to obtain the next piece of program or data for transfer to main storage from secondary storage. In demand fetch, the next piece of program or data is brought into main storage when it is referenced by a running program. Placement strategies are concerned with determining where in main storage to place an incoming program. Replacement strategies are concerned with determining which piece of program or data to displace to make room for incoming programs.

CONTIGUOUS VS NONCONTIGUOUS STORAGE ALLOCATION:

In contiguous storage allocation each program had to occupy a single contiguous block of storage locations. In noncontiguous storage allocation, a program is divided into several blocks or segments that may be placed throughout main storage in pieces not necessarily adjacent to one another.

SINGLE USER CONTIGUOUS STORAGE ALLOCATION:


The earliest computer systems allowed only a single user at a time, and all of the machine's resources were at that user's disposal. The user wrote all the code necessary to implement a particular application, including the highly detailed machine-level input/output instructions. Eventually the code implementing these basic I/O functions was consolidated into an input/output control system (IOCS).

SINGLE USER CONTIGUOUS STORAGE ALLOCATION

[Figure: main storage from address 0 to a holds the operating system, a to b the user program, and b to c is unused.]
Programs are limited in size to the amount of main storage, but it is possible to run programs larger than the main storage by using overlays.

If a particular program section is not needed for the duration of the program’s execution, then another section of the program may be brought in from the secondary storage to occupy the storage used by the program section that is no longer needed.


136

Page 137: UNIT – I - Web viewMemory consists of 8- bit bytes, any three consecutive bytes form a word (24 bits). ... Stacks, arrays and records are some examples for data structures. To have

A TYPICAL OVERLAY STRUCTURE

[Figure: a user program with a storage requirement larger than the available portion of main storage. The operating system occupies 0 to a; the portion of user code and data that must remain in main storage for the duration of execution occupies a to b; b to c is the overlay area. The program's phases share the overlay area: (1) load the initialization phase at b and run, (2) then load the processing phase at b and run, (3) then load the output phase at b and run.]
PROTECTION IN SINGLE USER SYSTEMS:

In single user contiguous storage allocation systems, the user has complete control over all of main storage. Storage is divided into a portion holding operating system routines, a portion holding the user’s program and an unused portion.

Suppose the user accidentally destroys part of the operating system, for example by changing certain input/output routines. The operating system must be protected from the user. Protection is implemented by a single boundary register built into the CPU. Each time a user program refers to a storage address, the boundary register is checked to be certain the user is not about to destroy the operating system. The boundary register contains the highest address used by the operating system. If the user tries to enter the operating system's area, the instruction is intercepted and the job terminates with an appropriate error message.

The user needs to enter the operating system from time to time to obtain services such as input/output. This problem is solved by giving the user a specific instruction with which to request services from the operating system (i.e., a supervisor call instruction). A user wanting to read from tape issues an instruction asking the operating system to do so on the user's behalf.
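The boundary-register check can be sketched as a toy simulation (the addresses, names and exception type here are made up for illustration, not any particular machine's logic):

```python
# Sketch of single-user storage protection with one boundary register.
# The register holds the highest address used by the operating system;
# a user reference at or below it is intercepted and the job terminated.

class ProtectionFault(Exception):
    """Raised when a user program addresses operating-system storage."""

def check_reference(address, boundary_register):
    """Permit the reference only if it lies above the OS boundary."""
    if address <= boundary_register:
        raise ProtectionFault(
            f"address {address:#x} lies inside the operating system "
            f"(boundary = {boundary_register:#x}); job terminated")
    return address

BOUNDARY = 0x3FFF               # hypothetical: OS occupies 0 .. 0x3FFF

check_reference(0x4000, BOUNDARY)      # legal user reference
try:
    check_reference(0x0100, BOUNDARY)  # attempt to touch the OS
except ProtectionFault as fault:
    print(fault)
```

A real machine performs this comparison in hardware on every storage reference; the supervisor call instruction is the one sanctioned way past the boundary.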

[Figure: storage protection with single-user contiguous storage allocation. The operating system occupies 0 to a, the user a to b, and b to c is unused. The CPU's boundary register contains a; all addresses developed by the user program are checked to be sure they are not less than a.]


SINGLE STREAM BATCH PROCESSING:

Early single-user real storage systems were dedicated to one job for more than the job's execution time: during job setup and teardown the computer sat idle. Designers realized that if they could automate job-to-job transition, they could considerably reduce the amount of time wasted between jobs. In single stream batch processing, jobs are grouped into batches by loading them consecutively onto tape or disk. A job stream processor reads the job control language statements and facilitates the setup of the next job. When the current job terminates, the job stream reader automatically reads in the control language statements for the next job and facilitates the transition to it.

FIXED-PARTITION MULTIPROGRAMMING:

Even with batch processing systems, single-user systems still waste a considerable amount of the computing resource.

[Figure: CPU utilization on a single-user system (shaded area indicates "CPU in use"). For a user doing intensive calculation, the CPU is busy almost continuously. For a user doing regular input/output, bursts of CPU use alternate with waits for input/output completion.]

*The program consumes the CPU resource until an input or output is needed.

*When an input or output request is issued, the job often cannot continue until the requested data is either sent or received.

*Input and output speeds are extremely slow compared with CPU speeds.

*To increase the utilization of the CPU, multiprogramming systems are implemented, in which several users simultaneously compete for system resources.

*An advantage of multiprogramming is that several jobs reside in the computer's main storage at once. Thus when one job requests input/output, the CPU may be switched immediately to another job and may do calculations without delay. Both input/output and CPU calculations can therefore occur simultaneously. This greatly increases CPU utilization and system throughput.

*Multiprogramming normally requires considerably more storage than a single-user system, because the programs of several users must be kept in main storage at once.

FIXED PARTITION MULTIPROGRAMMING: ABSOLUTE TRANSLATION AND LOADING:

The earliest multiprogramming systems used fixed partition multiprogramming.

*The main storage is divided into a number of fixed size partitions.

*Each partition could hold a single job.

*The CPU switches between users to create the appearance of simultaneity.

*Jobs were translated with absolute assemblers & compilers to run only in a specified partition.

*If a job was ready to run and its partition was occupied, then that job had to wait, even if other partitions were available.

*This resulted in waste of the storage resource.

FIXED PARTITION MULTIPROGRAMMING WITH ABSOLUTE TRANSLATION AND LOADING


[Figure: fixed-partition multiprogramming with absolute translation and loading. Main storage holds the operating system followed by partitions 1, 2 and 3; each partition has its own job queue, and jobs queued for a partition may be run only in that partition.]

[Figure: an extreme example of poor storage utilization in fixed partition multiprogramming with absolute translation and loading. No jobs are waiting for partition 1 or partition 2, while jobs A, B and C wait for partition 3.]

These jobs are small and could "fit" in the other partitions, but with absolute translation and loading they may run only in partition 3. The other two partitions remain empty.

FIXED PARTITION MULTIPROGRAMMING : RELOCATABLE TRANSLATION AND LOADING

*Relocating compilers, assemblers and loaders are used to produce relocatable programs that can run in any available partition that is large enough to hold them.

*This scheme eliminates the storage waste inherent in fixed partition multiprogramming with absolute translation and loading.

[Figure: fixed-partition multiprogramming with relocatable translation and loading. A single job queue serves all partitions (1, 2 and 3, above the operating system); jobs may be placed in any available partition in which they will fit.]

PROTECTION IN MULTIPROGRAMMING SYSTEMS:

*In contiguous allocation multiprogramming systems, protection is implemented with boundary registers.

*With two registers, the low and high boundaries of a user partition can be delineated, or the low boundary and the length of the region can be indicated.

*When the user wants a service performed by the operating system, it makes the request through a supervisor call (SVC) instruction.

*This allows the user to cross into the operating system without compromising operating system security.

Storage protection in contiguous allocation multiprogramming systems. While the user in partition 2 is active, all storage addresses developed by the running program are checked to be sure they fall between b and c.

[Figure: the CPU's low and high boundary registers contain b and c, delimiting partition 2, the currently active user's partition; the operating system and partitions 1 and 3 lie outside these bounds.]

FRAGMENTATION IN FIXED PARTITION MULTIPROGRAMMING:

There are two difficulties with the use of equal-size fixed partitions.

*A program may be too big to fit into a partition. In this case, the programmer must design the program to use overlays, so that only a portion of the program need be in main memory at any one time.

*Main memory use is extremely inefficient. Any program, no matter how small, occupies an entire partition. In our example, a program that occupies less than 128KB of memory still takes up a 512KB partition whenever it is swapped in. This phenomenon, in which space is wasted internal to a partition because the block of data loaded is smaller than the partition, is referred to as internal fragmentation.
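The internal fragmentation just described can be quantified with a small sketch (job sizes are made up; the 512KB partition matches the example above):

```python
# Internal fragmentation in fixed, equal-size partitions: every job
# occupies a whole partition, so the unused tail of each partition
# is wasted storage.

PARTITION_SIZE = 512  # KB, as in the example in the text

def internal_fragmentation(job_sizes_kb, partition_kb=PARTITION_SIZE):
    """Total space wasted inside partitions by the given jobs."""
    waste = 0
    for size in job_sizes_kb:
        assert size <= partition_kb, "too big: needs overlays instead"
        waste += partition_kb - size
    return waste

# A 128 KB job alone wastes 384 KB of its 512 KB partition.
print(internal_fragmentation([128]))            # 384
print(internal_fragmentation([128, 500, 512]))  # 384 + 12 + 0 = 396
```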

VARIABLE PARTITION MULTIPROGRAMMING:

*To overcome the problems of fixed partition multiprogramming, jobs are allowed to occupy as much space as they need.

*No fixed boundaries.

[Figure: the job queue for variable-partition multiprogramming: user A needs 15K, user B 20K, user C 10K, user D 25K, user E 14K and user F 32K.]


*Instead, jobs would be given as much storage as they required.

*This scheme is called variable partition multiprogramming.

*There is no waste: a job's partition is exactly the size of the job.

[Figure: the operating system is followed in turn by user A (15K), user B (20K), user C (10K) and user D (25K) as each job arrives, with the remaining storage free.]

An example of variable partition multiprogramming is shown below using 1MB of main memory. Initially main memory is empty except for the operating system (Fig a). The first three processes are loaded in, starting where the operating system ends, and each occupies just enough space (Figs b, c, d).

This leaves a "hole" (i.e., an unused space) at the end of memory that is too small for a fourth process. At some point, none of the processes in memory is ready. The operating system therefore swaps out process 2 (Fig e), which leaves sufficient room to load a new process, process 4 (Fig f). Because process 4 is smaller than process 2, another small hole is created.

[Figures (a)-(d): 1MB of main memory. In (a) memory is empty except for the operating system; processes 1 (320K), 2 (224K) and 3 (288K) are then loaded in turn, leaving a small unused space at the top of memory.]

[Figures (e)-(h): process 2 (224K) is swapped out, leaving a hole; process 4 (128K) is loaded into part of it, leaving a 96K hole; process 1 (320K) is then swapped out and process 2 swapped back in, leaving 96K and 64K holes among processes 2, 4 and 3 (288K).]

The operating system then swaps out process 1 and swaps process 2 back in (Fig h). As this example shows, the method starts out well, but it eventually leaves many small holes in memory. As time goes on memory becomes more and more fragmented, and memory utilization declines. This phenomenon is called external fragmentation. One technique for overcoming external fragmentation is compaction.

COALESCING HOLES:


When a job finishes in a variable partition multiprogramming system, we can check whether the storage being freed borders on other free storage areas (holes). If it does, we may record in the free storage list either (1) an additional hole, or (2) a single hole reflecting the merger of the existing hole and the new adjacent hole. The process of merging adjacent holes to form a single larger hole is called coalescing. By coalescing we reclaim the largest possible contiguous blocks of storage.
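Coalescing can be sketched over a free list of (start, length) holes (the representation and function name are illustrative assumptions, not from the text):

```python
# Coalescing: merge holes in the free-storage list that are adjacent,
# i.e. one hole ends exactly where the next begins.

def coalesce(holes):
    """holes: list of (start, length) tuples; returns merged, sorted list."""
    merged = []
    for start, length in sorted(holes):
        if merged and merged[-1][0] + merged[-1][1] == start:
            prev_start, prev_len = merged[-1]
            merged[-1] = (prev_start, prev_len + length)  # one larger hole
        else:
            merged.append((start, length))
    return merged

# A freed 5K block at address 12K borders an existing 2K hole at 10K:
print(coalesce([(10, 2), (12, 5)]))   # [(10, 7)]: a single 7K hole
```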

COALESCING STORAGE "HOLES" IN VARIABLE PARTITION MULTIPROGRAMMING

[Figure: user A (5K) finishes and frees its storage, which borders an existing 2K hole; the operating system combines the adjacent holes to form a single larger 7K hole.]

STORAGE COMPACTION:

Sometimes when a job requests a certain amount of main storage, no individual hole is large enough to hold it, even though the sum of all the holes is larger than the storage needed by the new job.

Suppose user 6's program requires 100K of main storage. Under contiguous storage allocation it cannot be loaded: 100K of storage is available, but it is divided into holes of 20K, 40K and 40K, so the space is wasted. To avoid this waste, the technique of storage compaction is used.

STORAGE "HOLES" IN VARIABLE PARTITION MULTIPROGRAMMING:

[Figure: users 1 (10K) through 5 (50K) occupy storage below the operating system's area; as user 2 (20K) and then user 4 (40K) complete and free their storage, 20K and 40K holes are left among the remaining users.]


The technique of storage compaction involves moving all occupied areas of storage to one end or the other of main storage. This leaves a single large hole instead of the numerous small holes common in variable partition multiprogramming. All of the available free storage is then contiguous, so a waiting job can run if its memory requirement is met by the single hole that results from compaction.
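Compaction can be sketched in a few lines (block names and sizes are illustrative; the sizes echo the 10K, 30K and 40K users of the figure below):

```python
# Storage compaction: slide every in-use block toward address 0 so the
# free storage becomes one large hole at the top of memory.

def compact(blocks, memory_size):
    """blocks: list of (name, start, length) in-use regions.
    Returns (relocated_blocks, single_free_hole)."""
    next_free = 0
    relocated = []
    for name, _old_start, length in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, next_free, length))  # block moves down
        next_free += length
    hole = (next_free, memory_size - next_free)      # one contiguous hole
    return relocated, hole

# Users 1, 3 and 5 with holes between them (sizes in K, OS omitted):
blocks = [("user1", 0, 10), ("user3", 30, 30), ("user5", 90, 40)]
print(compact(blocks, 130))
```

Note that every moved block needs its addresses adjusted, which is why the relocation information mentioned below must be kept available.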

STORAGE COMPACTION IN VARIABLE PARTITION MULTIPROGRAMMING

[Figure: before compaction, storage holds user 1 (10K), a 20K hole, user 3 (30K), a 30K hole and user 5 (40K); after compaction, the operating system has placed all "in use" blocks together, leaving free storage as a single large 50K hole.]

Compaction involves drawbacks:

*It consumes system resources that could otherwise be used productively.

*The system must stop everything while it performs the compaction. This can result in erratic response times for interactive users and could be devastating in real-time systems.


*Compaction involves relocating the jobs that are in storage. This means that relocation information, ordinarily lost when a program is loaded, must now be maintained in readily accessible form.

*With a normal, rapidly changing job mix, it is necessary to compact frequently.

STORAGE PLACEMENT STRATEGIES:

Storage placement strategies are used to determine where in main storage to place incoming programs and data.

Three strategies of storage placement are:

1) Best-fit Strategy: An incoming job is placed in the hole in main storage in which it fits most tightly, leaving the smallest amount of unused space.

[Figure: best-fit strategy. Place the job in the smallest possible hole in which it will fit. The free storage list, kept in ascending order by hole size, records holes at E (5K), C (14K), A (16K) and G (30K). A request for 13K is satisfied from the 14K hole at C.]

2) First-fit Strategy: An incoming job is placed in main storage in the first available hole large enough to hold it.

[Figure: first-fit strategy. Place the job in the first hole on the free storage list in which it will fit. The free storage list, kept in storage address order (or sometimes in random order), records holes at A (16K), C (14K), E (5K) and G (30K). A request for 13K is satisfied from the first sufficiently large hole, the 16K hole at A.]

3) Worst-fit Strategy: Worst fit says to place a program in the hole in which it fits worst, i.e., the largest possible hole. The idea is that after placing the program in this large hole, the remaining hole is often also large and thus able to hold a relatively large new program.

[Figure: worst-fit strategy. Place the job in the largest possible hole in which it will fit. The free storage list, kept in descending order by hole size, records holes at G (30K), A (16K), C (14K) and E (5K). A request for 13K is satisfied from the 30K hole at G.]
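The three placement strategies can be compared in a small sketch (the hole sizes are taken from the figures; the list layout and function names are illustrative assumptions):

```python
# Best-fit, first-fit and worst-fit placement over a free-storage list.
# Holes are (label, length-in-K) pairs; a request is a size in K.

def best_fit(holes, size):
    """Smallest hole that fits: leaves the least unused space."""
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1]) if fits else None

def first_fit(holes, size):
    """First hole, in list (address) order, that fits."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None

def worst_fit(holes, size):
    """Largest hole: the remainder stays usefully large."""
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1]) if fits else None

# Free list in address order, as in the figures: A 16K, C 14K, E 5K, G 30K.
holes = [("A", 16), ("C", 14), ("E", 5), ("G", 30)]
print(best_fit(holes, 13))    # ('C', 14)
print(first_fit(holes, 13))   # ('A', 16)
print(worst_fit(holes, 13))   # ('G', 30)
```

The three answers match the three figures: best fit picks C's 14K hole, first fit picks A's 16K hole, and worst fit picks G's 30K hole.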

MULTIPROGRAMMING WITH STORAGE SWAPPING:

In swapping systems, one job occupies main storage at a time. The job runs until it can no longer continue and then relinquishes both the storage and the CPU to the next job.


The entire storage is dedicated to one job for a brief period. That job is then removed (i.e., swapped out or rolled out) and the next job is brought in (i.e., swapped in or rolled in). A job will normally be swapped in and out many times before it is completed. The storage swapping technique is useful when main storage is limited, and it was used in many early timesharing systems that supported relatively few users. A user runs in main storage until:

a) I/O is issued
b) Timer runout
c) Voluntary termination

MULTIPROGRAMMING IN A SWAPPING SYSTEM IN WHICH ONLY A SINGLE USER AT A TIME IS IN MAIN STORAGE

[Figure: main storage holds the operating system and a swapping area; main storage images for users A, B and C are kept on secondary direct access storage and swapped into the swapping area in turn.]

VIRTUAL STORAGE:

The term virtual storage is associated with the ability to address a storage space much larger than that available in the primary storage of a particular computer system.

The two most common methods of implementing virtual storage are paging and segmentation. Fixed-Size blocks are called pages; variable-size blocks are called segments.

VIRTUAL STORAGE MANAGEMENT STRATEGIES:

1) FETCH STRATEGIES: These are concerned with when a page or segment should be brought from secondary to primary storage.

Demand Fetch Schemes: Demand fetch strategies wait for a page or segment to be referenced by a running process before bringing it to primary storage.

Anticipatory Fetch Schemes: Anticipatory fetch strategies attempt to determine in advance what pages or segments will be referenced by a process.

2) PLACEMENT STRATEGIES: These are concerned with where in primary storage to place an incoming page or segment.

3) REPLACEMENT STRATEGIES: These are concerned with deciding which page or segment to displace to make room for an incoming page or segment when primary storage is already fully committed.

PAGE REPLACEMENT STRATEGIES:

In this case operating system storage management routines must decide which page in primary storage to displace to make room for an incoming page.

The following replacement strategies are considered:

1) The principle of optimality
2) Random page replacement
3) First-in-first-out
4) Least-recently used
5) Least-frequently used
6) Not-used-recently
7) Second chance
8) Clock
9) Working set
10) Page fault frequency


THE PRINCIPLE OF OPTIMALITY:

The principle of optimality states that to obtain optimum performance, the page to replace is the one that will not be used again for the longest time in the future.

RANDOM PAGE REPLACEMENT:

All pages in main storage have an equal likelihood of being selected for replacement. This strategy could select any page, including the next page to be referenced.

FIRST-IN-FIRST-OUT(FIFO) PAGE REPLACEMENT:

When a page needs to be replaced, we choose the one that has been in storage the longest. First-in-first-out is likely to replace heavily used pages because the reason a page has been in primary storage for a long time may be that it is in constant use.

FIFO ANOMALY:

Belady, Nelson and Shedler discovered that under FIFO page replacement, certain page reference patterns actually cause more page faults when the number of page frames allocated to a process is increased. This phenomenon is called the FIFO Anomaly or Belady's Anomaly.
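The anomaly can be reproduced with a short FIFO simulation (an illustrative sketch; the reference string is the classic one used to demonstrate Belady's Anomaly):

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    """Count page faults under first-in-first-out replacement."""
    frames = deque()          # oldest (longest-resident) page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()          # evict the longest-resident page
            frames.append(page)
    return faults

# Belady's reference string: more frames, yet more faults.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames
```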

LEAST-RECENTLY-USED(LRU) PAGE REPLACEMENT:

This strategy selects that page for replacement that has not been used for the longest time. LRU can be implemented with a list structure containing one entry for each occupied page frame. Each time a page frame is referenced, the entry for that page is placed at the head of the list. Older entries migrate toward the tail of the list. When a page must be replaced to make room for an incoming page, the entry at the tail of the list is selected, the corresponding page frame is freed, the incoming page is placed in that page frame, and the entry for that page frame is placed at the head of the list because that page is now the one that has been most recently used.
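The list mechanism just described can be sketched directly (illustrative code, not from the text; a plain Python list stands in for the hardware-assisted list structure):

```python
# LRU via the list structure described above: the most recently used
# page sits at the head, the least recently used at the tail, and the
# tail entry is the replacement victim.

def lru_faults(reference_string, frame_count):
    """Count page faults under least-recently-used replacement."""
    lru_list = []             # index 0 = head = most recently used
    faults = 0
    for page in reference_string:
        if page in lru_list:
            lru_list.remove(page)         # referenced: move to the head
        else:
            faults += 1
            if len(lru_list) == frame_count:
                lru_list.pop()            # evict the tail entry
        lru_list.insert(0, page)          # newest reference goes on top
    return faults

print(lru_faults([1, 2, 3, 1, 4, 2], 3))   # 5 faults
```

Maintaining such a list on every reference is expensive, which is why real systems approximate LRU (e.g. with the NUR bits described below) rather than implement it exactly.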

LEAST-FREQUENTLY-USED(LFU) PAGE REPLACEMENT:

In this strategy the page to replace is that page that is least frequently used or least intensively referenced. The wrong page could be selected for replacement. For example , the least frequently used page could be the page brought into main storage most recently.


NOT-USED RECENTLY(NUR) PAGE REPLACEMENT:

Pages not used recently are not likely to be used in the near future and they may be replaced with incoming pages.

The NUR strategy is implemented with the addition of two hardware bits per page:

a) referenced bit = 0 if the page has not been referenced; 1 if the page has been referenced

b) modified bit = 0 if the page has not been modified; 1 if the page has been modified

The NUR strategy works as follows. Initially, the referenced bits of all pages are set to 0. As a reference to a particular page occurs, the referenced bit of that page is set to 1. When a page is to be replaced we first try to find a page which has not been referenced.
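Victim selection under NUR can be sketched as follows. The four-class preference order used here (unreferenced-unmodified first, then unreferenced-modified, then referenced-unmodified, then referenced-modified) is the commonly used refinement, an assumption beyond the text, which only says to prefer an unreferenced page:

```python
# NUR sketch: each page carries a referenced bit and a modified bit.
# Lower class number = better replacement candidate.

def nur_victim(pages):
    """pages: dict name -> (referenced, modified); returns the victim."""
    def klass(bits):
        referenced, modified = bits
        return 2 * referenced + modified   # class 0 is the best candidate
    return min(pages, key=lambda name: klass(pages[name]))

pages = {"p1": (1, 1), "p2": (1, 0), "p3": (0, 1), "p4": (0, 0)}
print(nur_victim(pages))    # p4: neither referenced nor modified
```

Replacing an unmodified page is cheaper because it need not be written back to secondary storage.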

MODIFICATIONS TO FIFO; CLOCK PAGE REPLACEMENT AND SECOND CHANCE PAGE REPLACEMENT:

The second chance variation of FIFO examines the referenced bit of the oldest page; if this bit is off, the page is immediately selected for replacement. If the referenced bit is on, it is set off and the page is moved to the tail of the FIFO list and treated essentially as a new arrival; this page gradually moves to the head of the list from which it will be selected for replacement only if its referenced bit is still off. This essentially gives the page a second chance to remain in primary storage if indeed its referenced bit is turned on before the page reaches the head of the list.

LOCALITY:

Locality is a property exhibited by running processes: processes tend to favor a subset of their pages during an execution interval. Temporal locality means that if a process references a page, it will probably reference that page again soon. Spatial locality means that if a process references a page, it will probably also reference adjacent pages in its virtual address space.


WORKING SETS:

Denning developed a view of program paging activity called the working set theory of program behavior. A working set is the collection of pages a process is actively referencing. To run a program efficiently, its working set of pages must be maintained in primary storage; otherwise excessive paging activity, called thrashing, may occur as the program repeatedly requests pages from secondary storage.

A working set storage management policy seeks to maintain the working sets of active programs in primary storage. The working set of pages of a process, W(t, w), at time t is the set of pages referenced by the process during the process-time interval t-w to t. Process time is the time during which a process has the CPU.
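The definition of W(t, w) can be sketched directly (the encoding of references as a list indexed by process time is an illustrative assumption):

```python
# Working set W(t, w): the set of pages referenced during the
# process-time interval (t - w, t]. Here reference_string[i] is the
# page referenced at process time i + 1.

def working_set(reference_string, t, w):
    """Pages referenced in the window of width w ending at time t."""
    start = max(0, t - w)
    return set(reference_string[start:t])

refs = [1, 2, 1, 3, 4, 4, 2]
print(working_set(refs, t=5, w=3))   # pages referenced at times 3, 4, 5
```

A larger window w can only grow the working set; the policy's job is to keep this set resident for every active process.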

PAGE FAULT FREQUENCY PAGE REPLACEMENT:

Processes attempting to execute without sufficient space for their working sets often experience thrashing, a phenomenon in which they continually replace pages and then immediately recall the replaced pages back to primary storage.

The page fault frequency algorithm adjusts a process's resident page set, i.e., those of its pages currently in memory, based on the frequency at which the process is faulting.

DEMAND PAGING:

No page is brought from secondary to primary storage until it is explicitly referenced by a running process. Demand paging guarantees that the only pages brought to main storage are those actually needed by processes, but as each new page is referenced, the process must wait while the page is transferred to primary storage.

ANTICIPATORY PAGING:

Anticipatory paging is a method of reducing the amount of time users must wait for results from the computer; it is sometimes called prepaging. In anticipatory paging, the operating system attempts to predict the pages a process will need and preloads these pages when space is available. If correct decisions are made, the total run time of the process can be reduced considerably: while the process runs with its current pages, the system loads new pages that will be available when the process requests them.

PAGE RELEASE:

When a page will no longer be needed, a user could issue a page release command to free the page frame. It could eliminate waste and speed program execution.


PAGE SIZE:

A number of issues affect the determination of optimum page size for a given system

A small page size leads to larger page tables; the waste of storage due to excessively large tables is called table fragmentation.

A large page size causes large amounts of information that may ultimately never be referenced to be paged into primary storage.

I/O transfers are more efficient with large pages, but localities tend to be small, and internal fragmentation is reduced with small pages. On balance, most designers feel these factors point to the need for small pages.

UNIT – V

PROCESSOR MANAGEMENT:

The assignment of physical processors to processes allows processes to accomplish work. Determining when processors should be assigned, and to which processes, is called processor scheduling.

SCHEDULING LEVELS:

Three important levels of scheduling are considered.

High-Level Scheduling: Sometimes called job scheduling, this determines which jobs shall be allowed to compete actively for the resources of the system. It is also called admission scheduling because it determines which jobs gain admission to the system.

Intermediate-Level Scheduling:This determines which processes shall be allowed to compete for the CPU.

The intermediate-level scheduler responds to short-term fluctuations in system load by temporarily suspending and activating (or resuming) processes to achieve smooth system operation and to help realize certain system wide performance goals.

Low-Level Scheduling:


This determines which ready process will be assigned the CPU when it next becomes available, and actually assigns the CPU to this process.

PREEMPTIVE VS NONPREEMPTIVE SCHEDULING:

A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. A scheduling discipline is preemptive if the CPU can be taken away.

Preemptive scheduling is useful in systems in which high-priority processes require rapid attention. In real-time systems and interactive timesharing systems, preemptive scheduling is important for guaranteeing acceptable response times.

To make preemption effective, many processes must be kept in main storage so that the next process is normally ready for the CPU when it becomes available. Keeping nonrunning programs in main storage also involves overhead.

In nonpreemptive systems, short jobs are made to wait by longer jobs, but the treatment of all processes is fairer. Response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.

In designing a preemptive scheduling mechanism, one must carefully consider the arbitrariness of virtually any priority scheme.

THE INTERVAL TIMER OR INTERRUPTING CLOCK:

The process to which the CPU is currently assigned is said to be running. To prevent users from monopolizing the system, the operating system has mechanisms for taking the CPU away from the user. The operating system sets an interrupting clock or interval timer to generate an interrupt at some specific future time. The CPU is then dispatched to the process. The process retains control of the CPU until it voluntarily releases the CPU, the clock interrupts, or some other interrupt diverts the attention of the CPU. If the user is running and the clock interrupts, the interrupt causes the operating system to run. The operating system then decides which process should get the CPU next. The interrupting clock helps guarantee reasonable response times to interactive users, prevents the system from getting hung up on a user in an infinite loop, and allows processes to respond to time-dependent events. Processes that need to run periodically depend on the interrupting clock.


PRIORITIES:

Priorities may be assigned automatically by the system or they may be assigned externally.

STATIC VS DYNAMIC PRIORITIES:

Static priorities do not change. Static priority mechanisms are easy to implement and have relatively low overhead. They are not responsive, however, to changes in the environment that might make it desirable to adjust a priority.

Dynamic priority mechanisms are responsive to change. The initial priority assigned to a process may have only a short duration, after which it is adjusted to a more appropriate value. Dynamic priority schemes are more complex to implement and have greater overhead than static schemes.

PURCHASED PRIORITIES:

An operating system must provide competent and reasonable service to a large community of users but must also provide for those situations in which a member of the user community needs special treatment.

A user with a rush job may be willing to pay a premium, i.e., to purchase priority, for a higher level of service. This extra charge is merited because resources may need to be withdrawn from other paying customers. If there were no extra charge, then all users would request the higher level of service.

DEADLINE SCHEDULING:

In deadline scheduling certain jobs are scheduled to be completed within a specific time or deadline. These jobs may have very high value if delivered on time and may be worthless if delivered later than the deadline. The user is often willing to pay a premium to have the system ensure on-time completion. Deadline scheduling is complex for many reasons.

The user must supply the resource requirements of the job in advance. Such information is rarely available.

The system must run the deadline job without severely degrading service to other users.

The system must plan its resource requirements through to the deadline because new jobs may arrive and place unpredictable demands on the system.

If many deadline jobs are to be active at once, scheduling can become extremely complex.

The intensive resource management required by deadline scheduling may generate substantial overhead.


FIRST-IN-FIRST-OUT (FIFO) SCHEDULING:

First-in-first-out is the simplest scheduling discipline: processes are dispatched according to their arrival time on the ready queue. Once a process has the CPU, it runs to completion. FIFO is a nonpreemptive discipline. It is somewhat unfair in that long jobs make short jobs wait, and unimportant jobs make important jobs wait. FIFO is not useful for scheduling interactive users because it cannot guarantee good response times.
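FIFO's unfairness to short jobs can be seen by computing waiting times directly. The following is a minimal sketch (the job names and burst times are invented for illustration; all jobs are assumed to arrive at time 0):

```python
# Illustrative sketch: waiting times under nonpreemptive FIFO dispatching.
# Jobs are (name, burst_time) pairs listed in arrival order.
def fifo_waits(jobs):
    """Return {name: waiting_time} when jobs run to completion in arrival order."""
    waits, clock = {}, 0
    for name, burst in jobs:
        waits[name] = clock      # a job waits for everything dispatched ahead of it
        clock += burst           # nonpreemptive: the job runs to completion
    return waits

# A long job arriving first makes the short jobs behind it wait.
waits = fifo_waits([("long", 24), ("short1", 3), ("short2", 3)])
avg_wait = sum(waits.values()) / 3   # (0 + 24 + 27) / 3 = 17.0
```

Had the two short jobs run first, the average wait would have been far lower, which motivates the shortest-job-first discipline discussed below.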

[Figure: first-in-first-out scheduling — processes enter the ready list, receive the CPU in arrival order, and run through to completion.]

FIFO is often embedded within other schemes. For example, many scheduling schemes dispatch processes according to priority, but processes with the same priority are dispatched FIFO.

ROUND ROBIN (RR) SCHEDULING:

In round robin (RR) scheduling, processes are dispatched FIFO but are given a limited amount of CPU time called a time-slice or quantum. If a process does not complete before its CPU time expires, the CPU is preempted and given to the next waiting process. The preempted process is then placed at the end of the ready list. Round robin is effective in timesharing environments in which the system needs to guarantee reasonable response times for interactive users.
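The dispatch-preempt-requeue cycle can be sketched with a simple simulation (the burst times and quantum here are invented, and context-switch overhead is ignored):

```python
from collections import deque

# Illustrative sketch: round-robin dispatching with a fixed quantum.
def round_robin(jobs, quantum):
    """jobs: list of (name, burst_time). Returns {name: completion_time}."""
    ready = deque(jobs)          # the ready list, in FIFO order
    clock, done = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))  # preempted: end of ready list
        else:
            done[name] = clock                     # finished within its slice
    return done

done = round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2)
```

Note how the short job B completes quickly even though the longer job A arrived first — the behavior that makes RR attractive for interactive timesharing.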

[Figure: round-robin scheduling — processes A, B, C on the ready list are dispatched to the CPU in turn; a preempted process returns to the end of the ready list, and completed processes leave the system.]

Kleinrock discusses a variant of round robin called selfish round robin. In this scheme, as processes enter the system, they first reside in a holding queue until their priorities reach the levels of processes in an active queue.

QUANTUM SIZE:

Determination of quantum size is critical to the effective operation of a computer system. Should the quantum be large or small? Should it be fixed or variable? Should it be the same for all users or should it be determined separately for each user?

When the quantum is very large, each process is given as much time as it needs to complete, so the round-robin scheme degenerates to FIFO. When the quantum is small, context switching overhead becomes a dominant factor and the performance of the system degrades to the point that most of the time is spent switching.


The quantum can be set anywhere between zero and infinity. When the quantum is near zero, context switching overhead consumes most of the CPU resource and interactive users experience poor response times. As the quantum is increased, response times improve: eventually the percentage of CPU consumed by overhead is small enough that users receive some CPU service, although response times are still not very good.

As the quantum is increased further, response times continue to improve until users are getting prompt responses from the system; at some point the quantum is optimal. If the quantum is increased beyond this point, response times become sluggish again. The quantum is becoming large enough for each user to run to completion upon receiving the CPU, so the scheduling degenerates to FIFO, in which longer processes make shorter ones wait and the average waiting time increases as the longer processes run to completion before yielding the CPU.

Interactive requests typically require less time than the duration of the quantum. When an interactive process begins executing, it normally uses the CPU only long enough to generate an I/O request; once the I/O is generated, that process yields the CPU to the next process. The optimal quantum varies from system to system, and it also varies from process to process.

SHORTEST-JOB-FIRST(SJF) SCHEDULING:

Shortest-job-first (SJF) is a nonpreemptive scheduling discipline in which the waiting job with the smallest estimated run-time to completion is run next. SJF reduces average waiting time over FIFO. SJF favors short jobs at the expense of longer ones. SJF selects jobs for service in a manner that ensures the next job will complete and leave the system as soon as possible. This tends to reduce the number of waiting jobs, and also reduces the number of jobs waiting behind large jobs. The obvious problem with SJF is that it requires precise knowledge of how long a job or process will run, and this information is not usually available. The best SJF can do is to rely on user estimates of run times.
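The reduction in average waiting time can be verified numerically. This sketch uses invented burst times and assumes all jobs are available at time 0:

```python
# Illustrative sketch: SJF reduces average waiting time relative to FIFO.
def avg_wait(jobs):
    """jobs in dispatch order as (name, burst); returns average waiting time."""
    clock, total = 0, 0
    for _, burst in jobs:
        total += clock           # this job waited for everything before it
        clock += burst
    return total / len(jobs)

jobs = [("J1", 6), ("J2", 8), ("J3", 7), ("J4", 3)]
fifo = avg_wait(jobs)                               # dispatch in arrival order
sjf = avg_wait(sorted(jobs, key=lambda j: j[1]))    # shortest burst first
```

Here FIFO yields an average wait of 10.25 while SJF yields 7.0 — the same work is done, but shorter jobs no longer queue behind longer ones.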

If users know that the system is designed to favor jobs with small estimated run-times, they may give small estimates. The scheduler can be designed to remove this temptation. The user can be forewarned that if the job runs longer than estimated, it will be terminated and the user will be charged for the work. A second option is to run the job for the estimated time plus a small percentage extra,


and then to shelve it, i.e., preserve it in its current form so that it can be restarted at a later time. Another solution is to run the job for the estimated time at normal billing rates, and then to charge a premium rate, well above the normal charges, for additional execution time. SJF is nonpreemptive and thus not useful in timesharing environments in which reasonable response times must be guaranteed.

SHORTEST-REMAINING-TIME(SRT) SCHEDULING:

Shortest-remaining-time (SRT) scheduling is the preemptive counterpart of SJF and is useful in timesharing. In SRT, the process with the smallest estimated run-time to completion is run next, including new arrivals. In SJF, once a job begins executing, it runs to completion. In SRT, a running process may be preempted by a new process with a shorter estimated run-time. SRT has higher overhead than SJF: it must keep track of the elapsed service time of the running job, and must handle occasional preemptions. Arriving small processes will run almost immediately. SRT requires that elapsed service times be recorded, and this contributes to the scheme's overhead.
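A minimal sketch of SRT follows; the arrival and burst times are invented, and the simulation advances one time unit at a time so that a newly arrived shorter job can preempt at the next unit boundary:

```python
import heapq

# Illustrative sketch: shortest-remaining-time scheduling with arrivals.
def srt(jobs):
    """jobs: list of (name, arrival, burst). Returns {name: completion_time}."""
    jobs = sorted(jobs, key=lambda j: j[1])   # order by arrival time
    ready, done, clock, i = [], {}, 0, 0
    while len(done) < len(jobs):
        while i < len(jobs) and jobs[i][1] <= clock:
            heapq.heappush(ready, (jobs[i][2], jobs[i][0]))  # keyed on remaining time
            i += 1
        if not ready:
            clock = jobs[i][1]                # idle until the next arrival
            continue
        remaining, name = heapq.heappop(ready)
        clock += 1                            # run one unit, then recheck arrivals
        if remaining > 1:
            heapq.heappush(ready, (remaining - 1, name))
        else:
            done[name] = clock
    return done

done = srt([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)])
```

P1 is preempted first by P2 and then by the tiny P3, so the short arrivals finish almost immediately while P1 completes last.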

Suppose a running job is almost complete, and a new job with a small estimated service time arrives. Should the running job be preempted? The pure SRT discipline would perform the preemption, but is it really worth it? This situation may be handled by building threshold value so that once a running job needs less than this amount of time to complete, the system guarantees it will run to completion uninterrupted.

HIGHEST-RESPONSE-RATIO-NEXT(HRN) SCHEDULING:

Brinch Hansen developed the highest-response-ratio-next (HRN) strategy, which corrects some of the weaknesses of SJF, particularly its bias toward short new jobs. HRN is a nonpreemptive scheduling discipline in which the priority of each job is a function not only of the job's service time but also of the amount of time the job has been waiting for service. Dynamic priorities in HRN are calculated according to the formula:

priority = (time waiting + service time) / service time
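The formula can be evaluated directly; the waiting and service times below are invented for illustration:

```python
# Illustrative sketch of the HRN priority formula.
def hrn_priority(time_waiting, service_time):
    return (time_waiting + service_time) / service_time

# Every job starts at priority 1.0 (zero waiting time)...
fresh_short = hrn_priority(0, 2)      # (0 + 2) / 2 = 1.0
# ...but a long job that has waited grows in priority and is not starved.
waited_long = hrn_priority(30, 10)    # (30 + 10) / 10 = 4.0
```

Because waiting time appears in the numerator, even a long job eventually overtakes freshly arrived short jobs, which is exactly the anti-starvation property HRN adds over SJF.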

DISTRIBUTED COMPUTING:

Parallel processing techniques are used to push computing power to its limits. These techniques are frequently employed in the class of machines called supercomputers.


Multiprocessing is the use of multiple processors to execute separate portions of a computation truly simultaneously. The small size of microprocessors makes it reasonable to consider packaging many of them in a single system.

SUPERCOMPUTER PROJECTS:

The goal of supercomputers is to push the state of the art in computing to its practical limits. A commercial computer with vector instructions and pipelined floating-point arithmetic operations is referred to as a supercomputer. Supercomputers are very powerful, high-performance machines used mostly for scientific computations. To speed up the operation, the components are packed tightly together to minimize the distance that the electronic signals have to travel. Supercomputers also use special techniques for removing the heat from circuits to prevent them from burning up because of their close proximity.

A supercomputer is a computer system best known for its high computation speed, fast and large memory systems, and extensive use of parallel processing. It is equipped with multiple functional units, and each unit has its own pipeline configuration. Supercomputers are used mainly in a limited number of scientific applications such as numerical weather forecasting, seismic wave analysis, and space research.

A measure used to evaluate computers' ability to perform a given number of floating-point operations per second is referred to as flops. The term megaflops is used to denote a million flops and gigaflops to denote a billion flops.

The first supercomputer, developed in 1976, was the Cray-1. It uses vector processing with 12 distinct functional units operating in parallel. Each functional unit is segmented to process the incoming data through a pipeline.

All the functional units can operate concurrently with operands stored in the large number of registers in the CPU.

Cray Research extended its supercomputer line to a multiprocessor configuration called the Cray X-MP, which first appeared in 1982. The Cray X-MP has two or four identical processors that share I/O and memory. Main memory uses 64-bit words, an indication of the machine's design bias toward high-precision scientific computation. The memory is 32-way interleaved. The CPUs communicate through clusters of shared registers.

The Cedar system was developed at the Center for Supercomputing Research and Development at the University of Illinois. It connects eight Alliant FX/8 superminicomputers through a multistage network to 64 global memory modules.


The Cosmic Cube is a "hypercube" multiprocessor developed at the California Institute of Technology. IBM is working on a machine dubbed the TF-1, a supercomputer with a goal of executing 3 trillion double-precision floating-point operations per second. This is about 1000 times faster than today's most powerful supercomputers.

CLASSIFICATION OF SEQUENTIAL AND PARALLEL ARCHITECTURES:

The purpose of parallel processing is to speed up the computer's processing capability and increase its throughput, that is, the amount of processing that can be accomplished during a given interval of time. There are a variety of ways that parallel processing can be classified. It can be considered from the internal organization of the processors, from the interconnection structure between processors, or from the flow of information through the system. One classification, introduced by M. J. Flynn, considers the organization of a computer system by the number of instructions and data items that are manipulated simultaneously. The normal operation of a computer is to fetch instructions from memory and execute them in the processor. The sequence of instructions read from memory constitutes an instruction stream. The operations performed on the data in the processor constitute a data stream.

Flynn’s classification divides computers into four major groups as follows.

Single Instruction Stream, Single Data Stream (SISD)

Single Instruction Stream, Multiple Data Stream (SIMD)

Multiple Instruction Stream, Single Data Stream (MISD)

Multiple Instruction Stream, Multiple Data Stream (MIMD)

SISD represents the organization of a single computer containing a control unit, a processor unit, and a memory unit. Instructions are executed sequentially, and the system may or may not have internal parallel processing capabilities. These are uniprocessor computers that process one instruction at a time. SISD machines process data from a single data stream.

Acronym   Meaning                                              Instruction Streams   Data Streams   Examples
SISD      Single instruction stream, single data stream        1                     1              IBM 370, DEC VAX, Macintosh
SIMD      Single instruction stream, multiple data stream      1                     >1             Illiac IV, Connection Machine, NASA's MPP
MISD      Multiple instruction stream, single data stream      >1                    1              Not used
MIMD      Multiple instruction stream, multiple data stream    >1                    >1             Cray X-MP, Cedar, Butterfly

CLASSIFICATION SCHEME FOR COMPUTER ARCHITECTURES

SIMD represents an organization that includes many processing units under the supervision of a common control unit. All processors receive the same instruction from the control unit but operate on different items of data. The shared memory unit must contain multiple modules so that it can communicate with all the processors simultaneously. It is commonly referred to as an array processor, a device that essentially performs the same operation simultaneously on every element of an array. Vector processors and pipeline processors are sometimes included in this category.

The MISD (Multiple Instruction Stream, Single Data Stream) organization has not found application in industry.


The MIMD (Multiple Instruction Stream, Multiple Data Stream) machine is a true parallel processor; machines in this class are commonly called multiprocessors. The MIMD organization refers to a computer system capable of processing several programs at the same time.

PIPELINING: Pipelining is a technique of decomposing a sequential process into suboperations, with each subprocess being executed in a special dedicated segment that operates concurrently with all other segments. It is characteristic of pipelines that several computations can be in progress in distinct segments at the same time. Multi-stage pipelines are much like assembly lines in automobile factories: each station in the instruction pipeline performs a different operation on an instruction, then the instruction moves on to the next station; all stations operate in parallel. Pipelines enable many instructions, each in a different phase of execution, to be in progress at once. These phases typically include instruction fetch, operand fetch, instruction decode, and instruction execution. In the most general case, the computer needs to process each instruction with the following sequence of steps.

1. Fetch the instruction from memory.

2. Decode the instruction.

3. Calculate the effective address.

4. Fetch the operands from memory.

5. Execute the instruction.

6. Store the result in the proper place.
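The benefit of overlapping these steps can be quantified with the standard ideal-pipeline timing formula. This sketch assumes one cycle per stage and no stalls, which real machines only approximate:

```python
# Illustrative sketch: ideal pipeline timing versus strictly sequential execution.
# With k one-cycle stages, the first instruction finishes after k cycles and
# each subsequent instruction finishes one cycle later: k + (n - 1) cycles total.
def pipeline_cycles(k, n):
    return k + (n - 1)

def sequential_cycles(k, n):
    return k * n          # no overlap: every instruction takes all k steps alone

# Six stages, matching the six steps listed above, for 100 instructions:
pipelined = pipeline_cycles(6, 100)    # 105 cycles
serial = sequential_cycles(6, 100)     # 600 cycles
```

For long instruction streams the speedup approaches the number of stages, which is why pipelining is so pervasive.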

Early computers executed a single machine language instruction at a time, start to finish. Today's pipelined systems begin working on the next instruction, and often the next several, while the current instruction is executing.

VECTOR PROCESSING: There is a class of computational problems that are beyond the capabilities of a conventional computer. These problems are characterized by the fact that they require a vast number of computations that will take a conventional computer days or even weeks to complete. In many science and engineering applications, the problems can be formulated in terms of vectors and matrices that lend themselves to vector processing.


Computers with vector processing capabilities are in demand in specialized applications. The following are representative application areas where vector processing is of the utmost importance.

1. Long-range weather forecasting

2. Petroleum explorations

3. Seismic Data Analysis

4. Medical Diagnosis

5. Aerodynamics and Space Flight Simulations

6. Artificial Intelligence and Expert Systems

7. Mapping the Human Genome

8. Image Processing

Without sophisticated computers, many of the required computations cannot be completed within a reasonable amount of time. To achieve the required level of high performance it is necessary to utilize the fastest and most reliable hardware and apply innovative procedures from vector and parallel processing techniques.

Vector processing requires pipelined hardware, but not vice versa. Many scientific problems require arithmetic operations on large arrays of numbers. These numbers are usually formulated as vectors and matrices of floating-point numbers. A vector is an ordered set of data items in a one-dimensional array. A vector V of length n is represented as a row vector by V = [V1 V2 V3 … Vn]. It may be represented as a column vector if the data items are listed in a column. A conventional sequential computer is capable of processing operands one at a time. Consequently, operations on vectors must be broken down into single computations with subscripted variables.

A vector instruction indicates the operation to be performed, and specifies the list of operands(called a vector) on which it is to operate. It allows operations to be specified with a single vector instruction of the form

C(1:100)=A(1:100)+B(1:100)

The vector instruction includes the initial address of the operands, the length of the vectors, and the operation to be performed, all in one composite instruction.
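The effect of C(1:100) = A(1:100) + B(1:100) can be sketched in ordinary code; the sample values are invented, and the comprehension below stands in for what the hardware performs as a single composite vector instruction:

```python
# Illustrative sketch: the elementwise vector add expressed by the
# composite instruction C(1:100) = A(1:100) + B(1:100).
A = list(range(1, 101))          # A = [1, 2, ..., 100]
B = [2 * x for x in A]           # B = [2, 4, ..., 200]

# On a sequential machine this is a loop over 100 subscripted additions;
# a vector processor specifies the whole operation in one instruction.
C = [a + b for a, b in zip(A, B)]
```

Each element of C is independent of the others, which is what lets the vector hardware stream operands through its pipeline one per cycle.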

When a vector instruction is executed, the elements of the vector are fed into the appropriate pipeline one at a time, delayed by the time it takes to complete one stage of the pipeline. Compilers "vectorize" programs to run efficiently on vector processors.

The Cray-1 vector processor has 13 pipelines that can operate in parallel; these are dedicated to operations such as floating addition, floating multiplication, reciprocal approximation, fixed addition, and so on.

ARRAY PROCESSORS: Array processors are single-instruction, multiple-data machines. An array processor is a processor that performs computations on large arrays of data. An SIMD array processor is a computer in which multiple processing units are synchronized to perform the same operation under the control of a common control unit, thus providing a single instruction stream, multiple data stream (SIMD) organization. A general block diagram of an array processor is shown below.

[Figure: SIMD array processor organization — a master control unit and main memory feed a set of processing elements PE1 … PEn, each with its own local memory M1 … Mn.]

It contains a set of identical processing elements (PEs), each having a local memory M. Each processor element includes an ALU, a floating-point arithmetic unit, and working registers. The master control unit controls the operations in the processor elements. The main memory is used for storage of the program. The function of the master control unit is to decode the instructions and determine how the instruction is to be executed.

Array processors are not useful in general-purpose computing environments. One of the earliest array processors was the ILLIAC IV, developed at the University of Illinois. The MPP (Massively Parallel Processor) is an SIMD machine that has 16,384 processors. It can perform a total of more than 6 billion 8-bit operations per second. It was designed for NASA by Goodyear Aerospace to perform image processing tasks.

DATA-FLOW COMPUTERS:

In sequential processors, control of what to do next resides in the program counter which after the completion of one instruction, normally points to the next instruction to be performed.

Data flow computers can perform many operations in parallel. These machines are said to be data driven because they perform each instruction (potentially simultaneously if enough processors are available) for which the needed data is available. Data flow machines have not been implemented but data flow techniques are already being incorporated in compilers that prepare programs for optimal execution on various kinds of parallel architectures.


An example of how a data flow computer might outperform a sequential computer is shown below, in which a series of assignment statements is evaluated for both a parallel data flow architecture and a sequential architecture.

Assignments: x:=a-c*d+e;

y:=b+d/a;

z:=x+e*y;

Evaluation on a sequential processor:

1. c*d

2. a-(c*d)

3. (a-(c*d))+e

4. d/a [parentheses indicate "already evaluated"]

5. b+(d/a)

6. e*y

7. x+(e*y)


[Figure: evaluation on a data flow processor — in step 1 the operations *, /, and + whose operands are already available execute in parallel; in step 2 the dependent - and + execute in parallel; the computation of x, y, and z completes in four steps rather than seven.]
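The data-driven ordering can be sketched directly. This is an illustrative evaluation only (the input values are invented); each "round" groups operations whose operands are already available and could therefore fire in parallel on a data flow machine:

```python
# Illustrative sketch: data-driven evaluation of
#   x := a - c*d + e;   y := b + d/a;   z := x + e*y;
a, b, c, d, e = 5.0, 4.0, 3.0, 2.0, 1.0

# Round 1: c*d and d/a depend only on the inputs, so both can fire at once.
t1 = c * d
t2 = d / a
# Round 2: a-(c*d) and b+(d/a) are now enabled and can fire together.
t3 = a - t1
y = b + t2
# Round 3: the additions that complete x and e*y.
x = t3 + e
t4 = e * y
# Round 4: the final addition producing z.
z = x + t4
```

A sequential processor needs seven operation steps for the same computation; the data flow ordering finishes in four rounds because independent operations overlap.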


MULTIPROCESSORS: A multiprocessor system is an interconnection of two or more CPUs with memory and input-output equipment. A multiprocessor system is controlled by one operating system that provides interaction between processors and all the components of the system cooperate in the solution of a problem.

One appeal of multiprocessing systems is that if a processor fails, the remaining processors can normally continue operating. A failing processor must somehow inform the other processors to take over; functioning processors must be able to detect a processor that has failed. The operating system must note that a particular processor has failed and is no longer available for allocation.

Multiprocessing can improve performance by decomposing a program into parallel executable tasks, or multiple independent jobs can be made to operate in parallel. With decreasing hardware costs, it has become common to connect a large number of microprocessors to form a multiprocessor; in this way, large-scale computer power can be achieved without the use of costly ultra-high-speed processors.

FAULT TOLERANCE: One of the most important capabilities of multiprocessor operating systems is their ability to withstand equipment failures in individual processors and to continue operation; this ability is referred to as fault tolerance.

Fault-tolerant systems can continue operating even when portions of the system fail. This kind of operation is especially important in so-called mission-critical systems. Fault tolerance is appropriate for systems in which it may not be possible for humans to intervene and repair the problem, such as in deep-space probes, aircraft, and the like. It is also appropriate for systems in which failures could happen so quickly that humans could not intervene in time.

Many techniques are commonly used to facilitate fault tolerance. These include:

Critical data for the system and the various processes should be maintained in multiple copies. These should reside in separate storage banks so that failures in individual components will not completely destroy the data.

The operating system must be designed so that it can run the maximal configuration of hardware effectively, but it must also be able to run subsets of the hardware effectively in case of failures.


Hardware error detection and correction capability should be implemented so that extensive validation is performed without interfering with the efficient operation of the system.

Idle processor capacity should be utilized to attempt to detect potential failures before they occur.

DISK PERFORMANCE OPTIMIZATION: In multiprogrammed computing systems, inefficiency is often caused by improper use of rotational storage devices such as disks and drums.

OPERATION OF MOVING-HEAD DISK STORAGE:

[Figure: schematic of a moving-head disk — platters mounted on a common spindle, with read-write heads carried on a movable boom.]

This is a schematic representation of the side view of a moving-head disk. Data is recorded on a series of magnetic disks, or platters. These disks are connected by a common spindle that spins at very high speed. The data is accessed (i.e., either read or written) by a series of read-write heads, one head per disk surface. A read-write head can access only data immediately adjacent to it.


Therefore, before data can be accessed, the portion of the disk surface from which the data is to be read (or the portion on which the data is to be written) must rotate until it is immediately below (or above) the read-write head. The time it takes for data to rotate from its current position to a position adjacent to the read-write head is called latency time. Each read-write head, while fixed in position, sweeps out a circular track of data on a disk surface. All read-write heads are attached to a single boom, or moving-arm assembly. The boom may move in or out; when the boom moves the read-write heads to a new position, a different set of tracks becomes accessible. For a particular position of the boom, the set of tracks swept out by all the read-write heads forms a vertical cylinder. The process of moving the boom to a new cylinder is called a seek operation. Thus, in order to access a particular record of data on a moving-head disk, several operations are usually necessary. First, the boom must be moved to the appropriate cylinder. Then the portion of the disk on which the data record is stored must rotate until it is immediately under (or over) the read-write head (i.e., latency time).

[Figure: components of a disk access — seek time, latency time, and transmission time.]


Then the record, which is of arbitrary size, must rotate past the read-write head; this is called transmission time. Each of these operations is tediously slow compared with the high processing speeds of the central computer system.


WHY DISK SCHEDULING IS NECESSARY: In multiprogrammed computing systems, many processes may be generating requests for reading and writing disk records. Because these processes sometimes make requests faster than they can be serviced by the moving-head disks, waiting lines or queues build up for each device. Some computing systems simply service these requests on a first-come-first-served (FCFS) basis. Whichever request for service arrives first is serviced first. FCFS is a fair method of allocating service, but when the request rate becomes heavy, FCFS can result in very long waiting times.

FCFS random seek pattern. The numbers indicate the order in which the requests arrived


FCFS exhibits a random seek pattern in which successive requests can cause time-consuming seeks from the innermost to the outermost cylinders. To minimize time spent seeking records, it seems reasonable to order the request queue in some manner other than FCFS. This process is called disk scheduling. Disk scheduling involves a careful examination of pending requests to determine the most efficient way to service them.
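The cost of a schedule can be measured as total head movement in cylinders. A small sketch of FCFS under this measure (the starting cylinder and request list are illustrative assumptions, not from the text):

```python
def fcfs_seek_distance(start, requests):
    """Total cylinders traversed servicing requests in arrival order (FCFS)."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)  # seek from current position to the request
        pos = cyl
    return total

# Head at cylinder 50; requests in arrival order.
print(fcfs_seek_distance(50, [95, 10, 80, 20]))  # 45 + 85 + 70 + 60 = 260
```

Reordering the same four requests as [80, 95, 20, 10] would cost only 130 cylinders, which is the motivation for the scheduling policies that follow.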


A disk scheduler examines the positional relationships among waiting requests. The request queue is then reordered so that the requests will be serviced with minimum mechanical motion. The two most common types of scheduling are seek optimization and rotation (or latency) optimization.

DESIRABLE CHARACTERISTICS OF DISK SCHEDULING POLICIES: Several other criteria for categorizing scheduling policies are

1. throughput
2. mean response time
3. variance of response times (i.e., predictability)

A scheduling policy should attempt to maximize throughput, the number of requests serviced per unit time. A scheduling policy should also attempt to minimize the mean response time (average waiting time plus average service time). Variance is a mathematical measure of how far individual items tend to deviate from the average of the items. We use variance to indicate predictability: the smaller the variance, the greater the predictability. We desire a scheduling policy that minimizes variance.

SEEK OPTIMIZATION: The most popular seek optimization strategies are the following.

1) FCFS (First-Come-First-Served) Scheduling:


In FCFS scheduling, the first request to arrive is the first one serviced. FCFS is fair in the sense that once a request has arrived, its place in the schedule is fixed. A request cannot be displaced by the arrival of a higher-priority request. FCFS will actually do a lengthy seek to service a distant waiting request even though another request may have just arrived on the same cylinder at which the read-write head is currently positioned. It ignores the positional relationships among the pending requests in the queue. FCFS is acceptable when the load on a disk is light, but as the load grows, FCFS tends to saturate the device and response times become large.

2) SSTF (Shortest-Seek-Time-First) Scheduling: In SSTF scheduling, the request that results in the shortest seek distance is serviced next, even if that request is not the first one in the queue. SSTF is a cylinder-oriented scheme. SSTF seek patterns tend to be highly localized, with the result that the innermost and outermost tracks can receive poor service compared with the mid-range tracks.

SSTF results in better throughput rates than FCFS, and mean response times tend to be lower for moderate loads. One significant drawback is that higher variances occur in response times because of the discrimination against the outermost and innermost tracks. SSTF is useful in batch processing systems where throughput is the major consideration, but its high variance of response times (i.e., its lack of predictability) makes it unacceptable in interactive systems.
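SSTF's greedy choice of the closest pending request can be sketched as follows (cylinder numbers are illustrative; ties here go to whichever request appears first in the queue, an assumption of this sketch):

```python
def sstf_order(start, requests):
    """Service order under SSTF: always pick the closest pending request."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # shortest seek next
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

# Head at cylinder 50; same request queue as in the FCFS discussion.
print(sstf_order(50, [95, 10, 80, 20]))  # [80, 95, 20, 10]
```

Notice how the head drifts toward the cluster of nearby requests first; requests near the edges (10 here) are serviced last, which is exactly the discrimination described above.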

SSTF localized seek pattern


3) SCAN Scheduling: Denning developed the SCAN scheduling strategy to overcome the discrimination and high variance in response times of SSTF. SCAN operates like SSTF except that it chooses the request that results in the shortest seek distance in a preferred direction. If the preferred direction is currently outward, then the SCAN strategy chooses the shortest seek distance in the outward direction. SCAN does not change direction until it reaches the outermost cylinder or until there are no further requests pending in the preferred direction. It is sometimes called the elevator algorithm because an elevator normally continues in one direction until there are no more requests pending and then it reverses direction.

SCAN behaves very much like SSTF in terms of improved throughput and improved mean response times, but it eliminates much of the discrimination inherent in SSTF schemes and offers much lower variance.
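The SCAN rule (keep moving in the preferred direction while requests remain, then reverse) can be sketched as a simple ordering of the pending queue. Cylinder numbers are illustrative, and "outward" is taken here to mean toward higher-numbered cylinders, which is an assumption of this sketch:

```python
def scan_order(start, requests, outward=True):
    """Service order under SCAN (elevator algorithm) for a static queue."""
    up = sorted(c for c in requests if c >= start)            # outward leg
    down = sorted((c for c in requests if c < start), reverse=True)  # inward leg
    return up + down if outward else down + up

# Head at cylinder 50, preferred direction outward.
print(scan_order(50, [95, 10, 80, 20]))  # [80, 95, 20, 10]
```

With the reverse preferred direction, the same queue is serviced as [20, 10, 80, 95]: the head sweeps inward first, then reverses.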

SCAN scheduling with preferred directions


Outward sweep

Inward sweep

4) N-STEP SCAN SCHEDULING: One interesting modification to the basic SCAN strategy is called N-step SCAN. In this strategy, the disk arm moves back and forth as in SCAN, except that it services only those requests waiting when a particular sweep begins. Requests arriving during a sweep are grouped together and ordered for optimum service during the return sweep. N-step SCAN offers good performance in throughput and mean response time. N-step SCAN has a lower variance of response times than either SSTF or conventional SCAN scheduling. N-step SCAN avoids the possibility of indefinite postponement occurring if a large number of requests arrive for the current cylinder. It saves these requests for servicing on the return sweep.
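The key difference from plain SCAN is the snapshot rule: a sweep services only the requests pending when it begins. A minimal sketch (request lists and sweep directions are illustrative assumptions):

```python
def n_step_scan(pending_at_sweep_start, arrivals_during_sweep):
    """N-step SCAN sketch: the current sweep services only the snapshot taken
    at its start; requests arriving during the sweep wait for the return sweep."""
    first_sweep = sorted(pending_at_sweep_start)             # e.g. outward
    return_sweep = sorted(arrivals_during_sweep, reverse=True)  # inward on return
    return first_sweep, return_sweep

# Requests 80 and 95 were pending when the sweep began; 20 and 60 arrived mid-sweep.
print(n_step_scan([80, 95], [20, 60]))  # ([80, 95], [60, 20])
```

Because new arrivals can never join the sweep in progress, a stream of requests for the current cylinder cannot hold the arm in place, which is how indefinite postponement is avoided.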

5) C-SCAN SCHEDULING: Another interesting modification to the basic SCAN strategy is called C-SCAN (for circular SCAN). In the C-SCAN strategy, the arm moves from the outer cylinder to the inner cylinder, servicing requests on a shortest-seek basis. When the arm has completed its inward sweep, it jumps (without servicing requests) to the request nearest the outermost cylinder, and then resumes its inward sweep, processing requests. Thus C-SCAN completely eliminates the discrimination against requests for the innermost or outermost cylinders. It has a very small variance in response times. At low loading, the SCAN policy is best. At medium to heavy loading, C-SCAN yields the best results.
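C-SCAN's circular behavior can be sketched the same way (cylinder numbers are illustrative; "inward" is taken here to mean toward lower-numbered cylinders, an assumption of this sketch):

```python
def cscan_order(start, requests):
    """Service order under C-SCAN for a static queue: sweep inward
    (high -> low cylinders), then jump to the outermost pending request
    and sweep inward again."""
    inward = sorted((c for c in requests if c <= start), reverse=True)
    after_jump = sorted((c for c in requests if c > start), reverse=True)
    return inward + after_jump

# Head at cylinder 50; 20 and 10 are serviced on this sweep,
# then the arm jumps out past 95 and services 95 and 80 on the next sweep.
print(cscan_order(50, [95, 10, 80, 20]))  # [20, 10, 95, 80]
```

Because every request is always serviced by an inward-moving head, edge cylinders wait at most one full sweep, which is why the variance of response times stays small.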


N-STEP SCAN SCHEDULING (inward sweep, then outward sweep)

C-SCAN SCHEDULING (inward sweep, then a jump to the outermost request, then the next inward sweep)


RAM DISKS: A RAM disk is a disk device simulated in conventional random access memory. It completely eliminates the delays suffered in conventional disks because of the mechanical motions inherent in seeks and in spinning a disk. RAM disks are especially useful in high-performance applications. Caching incurs a certain amount of CPU overhead in maintaining the contents of the cache and in searching for data in the cache before attempting to read the data from disk. If record reference patterns do not show locality, the disk cache hit ratio will be small and the CPU's efforts in managing the cache will be wasted, possibly resulting in poor performance. RAM disks are much faster than conventional disks because they involve no mechanical motion. They are separate from main memory, so they do not occupy space needed by the operating system or applications. Reference times to individual data items are uniform rather than widely variable as with conventional disks. RAM disks are much more expensive than regular disks. Most forms of RAM in use today are volatile, i.e., they lose their contents when power is turned off or when the power supply is interrupted. Thus RAM disk users should perform frequent backups to conventional disks. As memory prices continue decreasing and capacities continue increasing, it is anticipated that RAM disks will become increasingly popular.

OPTICAL DISKS: Various recording techniques are used. In one technique, intense laser heat is used to burn microscopic holes in a metal coating. In another technique, the laser heat causes raised blisters on the surface. In a third technique, the reflectivity of the surface is altered. The first optical disks were write-once-read-many (WORM) devices, which are not useful for applications that require regular updating. Several rewritable optical disk products have appeared on the market recently. Each person could have a disk with the sum total of human knowledge, and this disk could be updated regularly. Some estimates of capacities are so huge that researchers feel it will be possible to store 10^21 bits on a single optical disk.

FILE AND DATABASE SYSTEMS

INTRODUCTION:


A file is a named collection of data. It normally resides on a secondary storage device such as a disk or tape. It may be manipulated as a unit by operations such as

open – prepare a file to be referenced.

close – prevent further reference to a file until it is reopened.
create – build a new file.
destroy – remove a file.
copy – create another version of the file with a new name.
rename – change the name of a file.
list – print or display the contents of a file.

Individual data items within the file may be manipulated by operations like

read – input a data item to a process from a file.
write – output a data item from a process to a file.
update – modify an existing data item in a file.
insert – add a new data item to a file.
delete – remove a data item from a file.

Files may be characterized by
volatility – this refers to the frequency with which additions and deletions are made to a file.
activity – this refers to the percentage of a file's records accessed during a given period of time.
size – this refers to the amount of information stored in the file.

THE FILE SYSTEM: An important component of an operating system is the file system. File systems generally contain

Access methods – these are concerned with the manner in which data stored in files is accessed.
File management – this is concerned with providing the mechanisms for files to be stored, referenced, shared and secured.
Auxiliary storage management – this is concerned with allocating space for files on secondary storage devices.
File integrity mechanisms – these are concerned with guaranteeing that the information in a file is uncorrupted.


The file system is primarily concerned with managing secondary storage space, particularly disk storage. Let us assume an environment of a large-scale timesharing system supporting approximately 100 active terminals accessible to a user community of several thousand users. It is common for user accounts to contain between 10 and 100 files. Thus, with a user community of several thousand users, the system's disks might contain 50,000 to 100,000 or more separate files. These files need to be accessed quickly to keep response times small.

A file system for this type of environment may be organized as follows. A root is used to indicate where on disk the root directory begins. The root directory points to the various user directories. A user directory contains an entry for each of a user's files; each entry points to where the corresponding file is stored on disk. File names should be unique within a given user directory. In hierarchically structured file systems, the system name of a file is usually formed as a pathname from the root directory to the file. For example, in a two-level file system with users A, B and C, in which A has files PAYROLL and INVOICES, the pathname for file PAYROLL is A:PAYROLL.

TWO-LEVEL HIERARCHICAL FILE SYSTEM (root directory pointing to user directories, which point to user files)

FILE SYSTEM FUNCTIONS: Some of the functions normally attributed to file systems follow.

1) Users should be able to create, modify and delete files.
2) Users should be able to share each other's files in a carefully controlled manner in order to build upon each other's work.
3) The mechanism for sharing files should provide various types of controlled access, such as read access, write access, execute access, or various combinations of these.
4) Users should be able to structure their files in a manner most appropriate for each application.
5) Users should be able to order the transfer of information between files.
6) Backup and recovery capabilities must be provided to prevent either accidental loss or malicious destruction of information.
7) Users should be able to refer to their files by symbolic names rather than having to use physical device names (i.e., device independence).
8) In sensitive environments in which information must be kept secure and private, the file system may also provide encryption and decryption capabilities.
9) The file system should provide a user-friendly interface. It should give users a logical view of their data and the functions to be performed upon it, rather than a physical view. The user should not have to be concerned with the particular devices on which data is stored, the form the data takes on those devices, or the physical means of transferring data to and from these devices.

THE DATA HIERARCHY: Bits are grouped together in bit patterns to represent all data items. There are 2^n possible bit patterns for a string of n bits. The two most popular character sets in use today are ASCII (American Standard Code for Information Interchange) and EBCDIC (Extended Binary Coded Decimal Interchange Code). ASCII is popular in personal computers and in data communication systems. EBCDIC is popular for representing data internally in mainframe computer systems, particularly those of IBM.


A field is a group of characters. A record is a group of fields. A record key is a control field that uniquely identifies the record. A file is a group of related records. A database is a collection of files.

BLOCKING AND BUFFERING: A physical record or block is the unit of information actually read from or written to a device. A logical record is a collection of data treated as a unit from the user’s standpoint. When each physical record contains exactly one logical record, the file is said to consist of unblocked records. When each physical record may contain several logical records, the file is said to consist of blocked records. In a file with fixed-length records, all records are the same length. In a file with variable-length records, records may vary in size up to the block size.

Buffering allows computation to proceed in parallel with input/output. Spaces are provided in primary storage to hold several physical blocks of a file at once – each of these spaces is called a buffer. The most common scheme is called double buffering and it operates as follows (for output). There are two buffers. Initially, records generated by a running process are deposited in the first buffer until it is full. The transfer of the block in the first buffer to secondary storage is then initiated. While this transfer is in progress, the process continues generating records that are deposited in the second buffer. When the second buffer is full, and when the transfer from the first buffer is complete, transfer from the second buffer is initiated. The process continues generating records that are now deposited in the first buffer. This alternation between the buffers allows input/output to occur in parallel with a process’s computations.
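The alternation described above can be sketched without real I/O by treating a "transfer" as simply recording the full buffer. Buffer size and record values are illustrative assumptions:

```python
def double_buffered_output(records, buffer_size):
    """Sketch of double buffering for output: records fill one buffer while
    the other is (conceptually) being transferred to secondary storage."""
    buffers, active, transferred = [[], []], 0, []
    for rec in records:
        buffers[active].append(rec)
        if len(buffers[active]) == buffer_size:
            # Buffer full: initiate its transfer and switch to the other buffer.
            transferred.append(list(buffers[active]))
            buffers[active].clear()
            active = 1 - active
    return transferred  # blocks transferred, in order

print(double_buffered_output(list(range(6)), 3))  # [[0, 1, 2], [3, 4, 5]]
```

In a real system the transfer of a full buffer proceeds concurrently with filling the other one (e.g. via DMA or a writer thread); this sketch only models the buffer switching, not the parallelism itself.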

FILE ORGANIZATION: File organization refers to the manner in which the records of a file are arranged on secondary storage. The most popular file organization schemes in use today follow.

sequential – records are placed in physical order. The "next" record is the one that physically follows the previous record. This organization is natural for files stored on magnetic tape, an inherently sequential medium.


direct – records are directly (randomly) accessed by their physical addresses on a direct access storage device (DASD).
indexed sequential – records are arranged in logical sequence according to a key contained in each record. Indexed sequential records may be accessed sequentially in key order or they may be accessed directly.
partitioned – this is essentially a file of sequential subfiles. Each sequential subfile is called a member. The starting address of each member is stored in the file's directory.

The term volume is used to refer to the recording medium for each particular auxiliary storage device. The volume used on a tape drive is a reel of magnetic tape; the volume used on a disk drive is a disk.

QUEUED AND BASIC ACCESS METHODS: Operating systems generally provide many access methods. These are sometimes grouped into two categories, namely queued access methods and basic access methods. The queued methods provide more powerful capabilities than the basic methods.

Queued access methods are used when the sequence in which records are to be processed can be anticipated, such as in sequential and indexed sequential accessing. The queued methods perform anticipatory buffering and scheduling of I/O operations. They try to have the next record available for processing as soon as the previous record has been processed. The basic access methods are normally used when the sequence in which records are to be processed cannot be anticipated, such as in direct accessing, and in user applications that control record access themselves without incurring the overhead of the queued methods.

ALLOCATING AND FREEING SPACE: When files are allocated and freed, it is common for the space on disk to become increasingly fragmented. One technique for alleviating this problem is to perform periodic compaction or garbage collection. Files may be reorganized to occupy adjacent areas of the disk, and free areas may be collected into a single block or a group of large blocks. This garbage collection is often done during system shutdown; some systems perform compaction dynamically while in operation. A system may choose to reorganize the files of users not currently logged in, or it may reorganize files that have not been referenced for a long time. Designing a file system requires knowledge of the user community, including the number of users, the average number and size of files per user, the average duration of user sessions, the nature of the applications to be run on the system, and the like. Users searching a file for information often use file scan options to locate the next record or the previous record. In paged systems, the smallest amount of information transferred between secondary and primary storage is a page, so it makes sense to allocate secondary storage in blocks of the page size or a multiple of the page size. Locality tells us that once a process has referred to a data item on a page, it is likely to reference additional data items on that page; it is also likely to reference data items on pages contiguous to that page in the user's virtual address space.

CONTIGUOUS ALLOCATION: In contiguous allocation, files are assigned to contiguous areas of secondary storage. A user specifies in advance the size of the area needed to hold the file that is to be created. If the desired amount of contiguous space is not available, the file cannot be created. One advantage of contiguous allocation is that successive logical records are normally physically adjacent to one another. This speeds access compared with systems in which successive logical records are dispersed throughout the disk.

The file directories in contiguous allocation systems are relatively straightforward to implement. For each file it is necessary to retain the address of the start of the file and the file's length. A disadvantage of contiguous allocation appears as files are deleted: the space they occupied on secondary storage is reclaimed and becomes available for allocation to new files, but these new files must fit in the available holes. Thus contiguous allocation schemes exhibit the same types of fragmentation problems inherent in variable partition multiprogramming systems – adjacent secondary storage holes must be coalesced, and periodic compaction may need to be performed to reclaim storage areas large enough to hold new files.


NONCONTIGUOUS ALLOCATION: Files tend to grow or shrink over time, so dynamic noncontiguous storage allocation systems are generally used instead of contiguous allocation systems.

SECTOR-ORIENTED LINKED ALLOCATION: A file consists of many sectors, which may be dispersed throughout the disk. Sectors belonging to a common file contain pointers to one another, forming a linked list. A free space list contains entries for all free sectors on the disk. When a file needs to grow, the process requests more sectors from the free space list. Files that shrink return sectors to the free space list. There is no need for compaction. The drawback of noncontiguous allocation is that because the records of a file may be dispersed throughout the disk, retrieval of logically contiguous records can involve lengthy seeks.
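A sketch of this scheme using hypothetical data structures – a next-sector link table and a free space list (the sector numbering and the first-fit free-list policy are assumptions of this sketch, not from the text):

```python
class Disk:
    """Minimal model of sector-oriented linked allocation."""

    def __init__(self, n_sectors):
        self.next = [None] * n_sectors      # link to next sector of the same file
        self.free = list(range(n_sectors))  # free space list

    def grow(self, chain):
        """Append one sector from the free list to a file's sector chain."""
        sector = self.free.pop(0)
        if chain:
            self.next[chain[-1]] = sector   # link old tail to the new sector
        chain.append(sector)
        return sector

    def shrink(self, chain):
        """Return a file's last sector to the free space list."""
        sector = chain.pop()
        if chain:
            self.next[chain[-1]] = None     # new tail ends the linked list
        self.free.append(sector)

d = Disk(8)
f = []
d.grow(f); d.grow(f); d.grow(f)
print(f)       # [0, 1, 2]
d.shrink(f)
print(d.free)  # [3, 4, 5, 6, 7, 2]
```

Note that growing and shrinking never move existing sectors, which is why no compaction is needed; the price is that the chain must be walked sector by sector to reach a given record.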

BLOCK ALLOCATION: One scheme used to manage secondary storage more efficiently and reduce execution time overhead is called block allocation. This is a mixture of both contiguous allocation and noncontiguous allocation methods. In this scheme, instead of allocating individual sectors, blocks of contiguous sectors (sometimes called extents) are allocated. There are several common ways of implementing block-allocation systems. These include block chaining, index block chaining, and block-oriented file mapping. In block chaining, entries in the user directory point to the first block of each file. The fixed-length blocks comprising a file each contain two portions: a data block, and a pointer to the next block. Locating a particular record requires searching the block chain until the appropriate block is found, and then searching that block until the appropriate record is found. Insertions and deletions are straightforward.


BLOCK CHAINING (FILE LOCATION entries in the user directory point to the first block of each file's chain)

With index block chaining, the pointers are placed into separate index blocks. Each index block contains a fixed number of items. Each entry contains a record identifier and a pointer to that record. If more than one index block is needed to describe a file, then a series of index blocks is chained together. The big advantage of index block chaining over simple block chaining is that searching may take place in the index blocks themselves. Once the appropriate record is located via the index blocks, the data block containing that record is read


into primary storage. The disadvantage of this scheme is that insertions can require the complete reconstruction of the index blocks, so some systems leave a certain portion of the index blocks empty to provide for future insertions. In block-oriented file mapping, instead of using pointers the system uses block numbers. Normally, these are easily converted to actual block addresses because of the geometry of the disk. A file map contains one entry for each block on the disk. Entries in the user directory point to the first entry in the file map for each file. Each entry in the file map contains the block number of the next block in that file. Thus all the blocks in a file may be located by following the entries in the file map.

INDEX BLOCK CHAINING (FILE LOCATION entries in the user directory point to each file's first index block; an index block may chain to a continuation index block)


The entry in the file map that corresponds to the last entry of a particular file is set to some sentinel value like "Nil" to indicate that the last block of the file has been reached. Some of the entries in the file map are set to "Free" to indicate that the block is available for allocation. The system may either search the file map linearly to locate a free block, or a free block list can be maintained. An advantage of this scheme is that the physical adjacencies on the disk are reflected in the file map. Insertions and deletions are straightforward in this scheme.
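The lookup described above can be sketched directly from the file map figure in this section (files A, B and C; "Nil" is represented here as None, and Free entries are simply omitted from the map):

```python
# File map from the figure: each entry gives the number of the next block
# of the same file; None stands for the Nil sentinel at the end of a file.
FILE_MAP = {0: 22, 1: None, 2: 5, 3: 26, 4: 9, 5: 20, 6: 10, 8: 17, 9: 1,
            10: 14, 12: 3, 13: 4, 14: 0, 17: 12, 18: 13, 19: None, 20: 23,
            22: 18, 23: 19, 26: None}
# FILE LOCATION entries in the user directory: the first block of each file.
DIRECTORY = {"A": 8, "B": 6, "C": 2}

def blocks_of(name):
    """Follow the file map from a file's first block to its Nil entry."""
    block, chain = DIRECTORY[name], []
    while block is not None:
        chain.append(block)
        block = FILE_MAP[block]
    return chain

print(blocks_of("A"))  # [8, 17, 12, 3, 26]
print(blocks_of("C"))  # [2, 5, 20, 23, 19]
```

Cross-checking against the physical blocks figure, these chains visit A(1) through A(5) and C(1) through C(5) in logical-record order, as the scheme requires.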

BLOCK-ORIENTED FILE MAPPING

FILE LOCATION (user directory): A 8, B 6, C 2

File map (one entry per disk block; each entry holds the number of the next block of the same file; Nil marks the end of a file, Free marks an unallocated block):

entry  0: 22     entry  7: Free   entry 14: 0      entry 21: Free
entry  1: Nil    entry  8: 17     entry 15: Free   entry 22: 18
entry  2: 5      entry  9: 1      entry 16: Free   entry 23: 19
entry  3: 26     entry 10: 14     entry 17: 12     entry 24: Free
entry  4: 9      entry 11: Free   entry 18: 13     entry 25: Free
entry  5: 20     entry 12: 3      entry 19: Nil    entry 26: Nil
entry  6: 10     entry 13: 4      entry 20: 23     entry 27: Free

PHYSICAL BLOCKS ON SECONDARY STORAGE (each allocated block holds the indicated logical record of a file):

Block  0: B(4)    Block  7: Free   Block 14: B(3)   Block 21: Free
Block  1: B(10)   Block  8: A(1)   Block 15: Free   Block 22: B(5)
Block  2: C(1)    Block  9: B(9)   Block 16: Free   Block 23: C(4)
Block  3: A(4)    Block 10: B(2)   Block 17: A(2)   Block 24: Free
Block  4: B(8)    Block 11: Free   Block 18: B(6)   Block 25: Free
Block  5: C(2)    Block 12: A(3)   Block 19: C(5)   Block 26: A(5)
Block  6: B(1)    Block 13: B(7)   Block 20: C(3)   Block 27: Free

FILE DESCRIPTOR: A file descriptor or file control block is a control block containing information the system needs to manage a file. A typical file descriptor might include

1) symbolic file name
2) location of file in secondary storage
3) file organization (sequential, indexed sequential, etc.)
4) device type
5) access control data
6) type (data file, object program, C source program, etc.)
7) disposition (permanent vs. temporary)
8) creation date and time
9) destroy date
10) date and time last modified
11) access activity counts (number of reads, for example)

File descriptors are maintained on secondary storage. They are brought to primary storage when a file is opened.

ACCESS CONTROL MATRIX: One way to control access to files is to create a two-dimensional access control matrix listing all the users and all the files in the system. The entry Aij is 1 if user i is allowed access to file j; otherwise Aij = 0. In an installation with a large number of users and a large number of files, this matrix would be very large and very sparse, since allowing one user access to another user's files is the exception rather than the rule.

To make the matrix concept useful, it would be necessary to use codes to indicate various kinds of access, such as read only, write only, execute only, read/write, etc.
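One way to hold such codes is a bit per kind of access, with a sparse dictionary standing in for the mostly-zero matrix. A sketch (the bit encoding and the particular rights granted are illustrative assumptions, loosely following the matrix in this section):

```python
# Access kinds encoded as bits, so combinations like read/write are a bitwise OR.
R, W, X = 1, 2, 4  # read / write / execute

# Sparse access control matrix: only nonzero (user, file) entries are stored.
acm = {(1, 1): R | W, (1, 2): R,   # user 1: read/write file 1, read file 2
       (2, 3): R | X,              # user 2: read and execute file 3
       (3, 2): R, (3, 4): R | W,
       (4, 1): R}

def allowed(user, file, kind):
    """True if the matrix grants this kind of access; absent entries mean 0."""
    return bool(acm.get((user, file), 0) & kind)

print(allowed(1, 1, W))  # True
print(allowed(2, 1, R))  # False  (no entry: access denied)
```

Storing only the nonzero entries is exactly what makes the sparse matrix practical; the user-class scheme described next reduces the space needed even further.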

ACCESS CONTROL MATRIX

         File 1  File 2  File 3  File 4
User 1     1       1       0       0
User 2     0       0       1       0
User 3     0       1       0       1
User 4     1       0       0       0

ACCESS CONTROL BY USER CLASSES: A technique that requires considerably less space is to control access by various user classes. A common classification scheme is

1) Owner – normally, this is the user who created the file.
2) Specified user – the owner specifies that another individual may use the file.
3) Group or project – users are often members of a group working on a particular project. In this case the various members of the group may all be granted access to each other's project-related files.
4) Public – most systems allow a file to be designated as public so that it may be accessed by any member of the system's user community. Public access normally allows users to read or execute a file, but not to write it.
