Lab Manual OS


Ramrao Adik Institute Of Technology, Nerul, Navi Mumbai 2008-2009

Subject: Operating Systems Semester: VI

Practical Lists

1) Study and implement scheduling algorithms (FCFS, SJF, RR, Priority)
2) Study and implement page replacement algorithms (FIFO, LRU)
3) Study and implement memory management algorithms (Best Fit and First Fit)
4) Study and implement Dekker's and Peterson's Algorithms
5) Study and implement the Readers/Writers Problem
6) Study and implement the Dining Philosophers Problem
7) Study and implement Banker's Algorithm (Deadlock Avoidance)
8) Study basics of shell commands in Unix / Linux
9) Study and implement basics of shell programming in Linux / Unix


SCHEDULING ALGORITHMS

AIM OF THE EXPERIMENT:- To study and implement scheduling algorithms (FCFS, SJF, Round Robin and Priority)

THEORY:-

FIRST COME FIRST SERVED (FCFS):-

• The process that requests the CPU first is allocated the CPU first.
• The FCFS policy is easily implemented with a FIFO queue.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
• The average waiting time under the FCFS policy is often quite long.
• The FCFS scheduling algorithm is NON-PREEMPTIVE.
• Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
• The FCFS algorithm is particularly troublesome for time-sharing systems.
• There is a convoy effect as all the other processes wait for one big process to get off the CPU.

EXAMPLE:-

PROCESS   BURST TIME
P1        24
P2        03
P3        03

GANTT CHART:-

| P1 | P2 | P3 |
0    24   27   30

PROCESS   WAITING TIME   TURNAROUND TIME
P1        00             24
P2        24             27
P3        27             30

Avg. waiting time = (0+24+27)/3 = 17 time units.
Avg. turnaround time = (24+27+30)/3 = 27 time units.
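These figures can be checked mechanically. Below is a minimal C sketch (not part of the original manual) that hard-codes the example's burst times and assumes all processes arrive at time 0 in FCFS order:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* P1, P2, P3 */
    int n = 3, elapsed = 0;
    double totalWait = 0, totalTat = 0;

    for (int i = 0; i < n; i++) {
        int wait = elapsed;                /* time spent waiting in the ready queue */
        int tat  = wait + burst[i];        /* turnaround = waiting + burst          */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        totalWait += wait;
        totalTat  += tat;
        elapsed   += burst[i];             /* CPU moves on to the next process      */
    }
    printf("Avg. waiting=%.2f  Avg. turnaround=%.2f\n", totalWait / n, totalTat / n);
    return 0;
}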

SHORTEST JOB FIRST SCHEDULING ALGORITHM (SJF):-

• When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
• FCFS scheduling is used to break ties.


• The SJF scheduling algorithm is optimal, since it gives the minimum average waiting time for a given set of processes.
• The difficulty with SJF is knowing the length of the next CPU request.
• The SJF algorithm may be either PREEMPTIVE or NON-PREEMPTIVE.
• When a new process arrives at the ready queue while a previous process is executing, the new process may have a shorter next CPU burst than what is left of the currently executing process's burst.
• PREEMPTIVE SJF scheduling is also called shortest-remaining-time-first scheduling.
• Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling.
• SJF scheduling is used frequently in long-term scheduling, where the time limit that the user specifies while submitting the job can be used as the length of the process.

• Since there is no way to know the length of the next CPU burst, we try to predict its value.
• We expect that the next CPU burst will be similar in length to the previous ones.
• We pick the process with the shortest predicted next CPU burst. The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts:

      Tn+1 = α·tn + (1 − α)·Tn,   where 0 ≤ α ≤ 1

  tn   = length of the nth (most recent) CPU burst
  Tn   = the past history of CPU bursts (the previous prediction)
  Tn+1 = our predicted value for the next CPU burst

• The parameter α controls the relative weight of recent and past history in our prediction. If α = 0, then Tn+1 = Tn and recent history has no effect; if α = 1, then Tn+1 = tn and only the most recent CPU burst matters; if α = 1/2, recent history and past history are equally weighted.
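As an illustration, the prediction is a one-line update in C. The sketch below uses α = 1/2 and a made-up burst history (the 6, 4, 6, 4, 13, 13, 13 sequence commonly used in Silberschatz's discussion); only the update rule itself comes from the text above:

#include <stdio.h>

int main(void) {
    double alpha = 0.5;                      /* weight of recent vs. past history     */
    double tau = 10.0;                       /* T0: initial guess for the first burst */
    double t[] = {6, 4, 6, 4, 13, 13, 13};   /* measured CPU burst lengths tn         */

    for (int n = 0; n < 7; n++) {
        printf("burst %d: predicted %.2f, actual %.0f\n", n + 1, tau, t[n]);
        tau = alpha * t[n] + (1 - alpha) * tau;   /* Tn+1 = a*tn + (1-a)*Tn           */
    }
    return 0;
}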

EXAMPLE:-

PROCESS   BURST TIME
P1        06
P2        08
P3        07
P4        03

GANTT CHART:-

| P4 | P1 | P3 | P2 |
0    3    9    16   24

PROCESS   WAITING TIME   TURNAROUND TIME
P1        03             09
P2        16             24
P3        09             16
P4        00             03

Avg. waiting time = (3+16+9+0)/4 = 7 time units.
Avg. turnaround time = (9+24+16+3)/4 = 13 time units.

PRIORITY SCHEDULING ALGORITHM:-

• A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
• Priorities are generally some fixed range of numbers, such as 0 to 7 or 0 to 4095. Some systems use low numbers to represent low priority, others use low numbers for high priority (the example below treats a lower number as higher priority).
• Priority can be defined internally: for example, time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have been used to compute priorities.

• Priority can be defined externally, using criteria such as the importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, political factors, etc.

• Priority scheduling can be either PREEMPTIVE or NON-PREEMPTIVE.
• A PREEMPTIVE priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
• A NON-PREEMPTIVE priority scheduling algorithm will simply put a new process with higher priority than the currently running process at the head of the ready queue.

• A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
• A process that is ready to run but lacking the CPU can be considered blocked, waiting for the CPU.
• A priority scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
• A solution to the problem of indefinite blockage of low-priority processes is aging.
• Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.

EXAMPLE:-

PROCESS   BURST TIME   PRIORITY
P1        10           03
P2        01           01
P3        02           04
P4        01           05
P5        05           02

GANTT CHART:-

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

PROCESS   WAITING TIME   TURNAROUND TIME
P1        06             16
P2        00             01
P3        16             18
P4        18             19
P5        01             06

Avg. waiting time = (6+0+16+18+1)/5 = 8.2 time units.
Avg. turnaround time = (16+1+18+19+6)/5 = 12 time units.
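A non-preemptive priority scheduler over this example can be sketched in C as a repeated "pick the highest-priority unfinished process" loop. The sketch below hard-codes the example's data and, as in the table above, treats a lower number as higher priority (illustrative only):

#include <stdio.h>

int main(void) {
    int burst[]    = {10, 1, 2, 1, 5};  /* P1..P5 */
    int priority[] = { 3, 1, 4, 5, 2};  /* lower number = higher priority */
    int done[5] = {0}, n = 5, elapsed = 0;
    double totalWait = 0;

    for (int run = 0; run < n; run++) {
        int best = -1;
        for (int i = 0; i < n; i++)     /* pick highest-priority ready process */
            if (!done[i] && (best < 0 || priority[i] < priority[best]))
                best = i;
        printf("P%d: waiting=%d turnaround=%d\n",
               best + 1, elapsed, elapsed + burst[best]);
        totalWait += elapsed;
        elapsed += burst[best];
        done[best] = 1;
    }
    printf("Avg. waiting=%.1f\n", totalWait / n);
    return 0;
}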

ROUND ROBIN SCHEDULING (RR):-

• The round robin scheduling algorithm is designed especially for time-sharing systems and is a PREEMPTIVE algorithm. A unit of time called a time quantum (time slice) is defined.

• The ready queue is treated as a circular queue and the CPU scheduling goes around the ready queue allocating the CPU to each process for a time interval of time quantum.

• Ready queue is implemented as FIFO queue and all new processes are added to the tail of the ready queue.

• The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

• If the process has a CPU burst of less than one time quantum, the process itself will release the CPU voluntarily and the scheduler selects another process from the ready queue.

• If the CPU burst of the currently running process is longer than one time quantum, the timer will go off and cause an interrupt to the operating system.

• A context switch will be executed, the process will be put at the tail of the ready queue, and the CPU scheduler will then select the next process in the ready queue.

• If there are n processes in the queue and the time quantum is q, then each process must wait at most (n−1)·q time units until its next time quantum.

• The performance of the round robin algorithm depends heavily on the size of the time quantum.

• If the time quantum is very large, the RR policy is the same as the FCFS policy.
• If the time quantum is very small, the RR approach is called processor sharing, and it appears to the user as though each of the n processes has its own processor running at 1/n the speed of the real processor. The quantum should not be so small that most of the time is wasted in context switches rather than in computation.


• Thus the time quantum must be large with respect to the context-switch time.
• Turnaround time also depends on the size of the time quantum. The average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum.
• A rule of thumb is that 80% of the CPU bursts should be shorter than the time quantum.

EXAMPLE:-

PROCESS   BURST TIME
P1        24
P2        03
P3        03

(time quantum = 4)

GANTT CHART:-

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

PROCESS   WAITING TIME   TURNAROUND TIME
P1        06             30
P2        04             07
P3        07             10

Avg. waiting time = (6+4+7)/3 ≈ 5.67 time units.
Avg. turnaround time = (30+7+10)/3 ≈ 15.67 time units.
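Since all three processes arrive at time 0, the ready queue can be approximated by a circular scan over the process table. A C sketch of the example above (quantum = 4; illustrative only, not from the manual):

#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};      /* P1, P2, P3 */
    int remaining[] = {24, 3, 3};
    int finish[3];
    int n = 3, quantum = 4, elapsed = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {  /* circular scan of the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            elapsed      += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {   /* process completed in this slice */
                finish[i] = elapsed;
                left--;
            }
        }
    }
    for (int i = 0; i < n; i++)        /* waiting = turnaround - burst */
        printf("P%d: waiting=%d turnaround=%d\n",
               i + 1, finish[i] - burst[i], finish[i]);
    return 0;
}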

REFERENCE BOOK:- Silberschatz A., Galvin P., "Operating Systems Principles", Wiley
REQUIREMENT:- TURBO C++ / JAVA


PAGE REPLACEMENT ALGORITHMS

AIM OF THE EXPERIMENT:- To study and implement page replacement algorithms (FIFO, LRU).

THEORY:-

In a computer operating system that utilizes paging for virtual memory management, page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.

When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.

Replacement algorithms can be local or global.

When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition). A global replacement algorithm is free to select any page in memory.

Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. Most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently without contending for some shared global data structure.

FIRST-IN, FIRST-OUT

The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name: the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back and the earliest arrival at the front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application, so it is rarely used in its unmodified form. This algorithm experiences Belady's anomaly: it is possible to have more page faults when increasing the number of page frames while using the FIFO method of frame management.

The FIFO page replacement algorithm is used by the VAX/VMS operating system, with some modifications. Partial second chance is provided by skipping a limited number of entries with valid translation table references, and additionally, pages are displaced from the process's working set to a system-wide pool from which they can be recovered if not already re-used.
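A minimal FIFO simulation in C: the resident frames form a circular queue, and the oldest page is overwritten on a fault. The reference string below is made up for illustration:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};     /* sample page reference string */
    int nrefs = 8;
    int frame[FRAMES];
    int used = 0, oldest = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) hit = 1;  /* page already resident */
        if (!hit) {
            if (used < FRAMES) {
                frame[used++] = ref[i];       /* a free frame is available */
            } else {
                frame[oldest] = ref[i];       /* evict the earliest arrival */
                oldest = (oldest + 1) % FRAMES;
            }
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}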

LEAST RECENTLY USED

LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as Adaptive Replacement Cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.

The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list will have to be moved about every memory reference, which is a very time-consuming process.

Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, the current counter value is stored with that page. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter value and swaps it out. With most present hardware, this is not feasible because the required hardware counters do not exist.

Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.

One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proved, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.

On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over an array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch into a suitable replacement algorithm, like Most Recently Used (MRU).
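The counter-based method described earlier translates directly into a small C simulation: an integer timestamp per frame plays the role of the hardware counter, and the frame with the smallest timestamp is the victim. The reference string is again made up for illustration:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};  /* sample page reference string */
    int nrefs = 8;
    int frame[FRAMES], stamp[FRAMES];      /* stamp[] stands in for the counter */
    int used = 0, faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[t]) hit = j;
        if (hit >= 0) {
            stamp[hit] = t;                /* refresh the time of last use */
        } else {
            int victim = 0;
            if (used < FRAMES) {
                victim = used++;           /* a free frame is available */
            } else {
                for (int j = 1; j < FRAMES; j++)
                    if (stamp[j] < stamp[victim])
                        victim = j;        /* least recently used frame */
            }
            frame[victim] = ref[t];
            stamp[victim] = t;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}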

REFERENCE BOOK:- Silberschatz A., Galvin P., "Operating Systems Principles", Wiley

REQUIREMENT:- TURBO C++ / JAVA


MEMORY MANAGEMENT ALGORITHMS

AIM OF THE EXPERIMENT:- To study and implement memory management algorithms (Best Fit and First Fit)

THEORY:-

• Basic requirements that drive memory designs:
  o The primary memory access time must be as small as possible. This need influences both software and hardware design.
  o The primary memory must be as large as possible. Using virtual memory, software and hardware can make the memory appear to be larger than it actually is.
  o The primary memory must be cost-effective. The cost cannot be more than a small percentage of the total cost of the computer.

Memory Manager

The purpose of the memory manager is

o to allocate primary memory space to processes
o to map the process address space into the allocated portion of the primary memory
o to minimize access times using a cost-effective amount of primary memory

Memory Management Algorithms

In an environment that supports dynamic memory allocation, the memory manager must keep a record of the usage of each allocatable block of memory. This record could be kept using almost any data structure that implements linked lists. An obvious implementation is to define a free list of block descriptors, with each descriptor containing a pointer to the next descriptor, a pointer to the block, and the length of the block. The memory manager keeps a free list pointer and inserts entries into the list in some order conducive to its allocation strategy. A number of strategies are used to allocate space to the processes that are competing for memory.

o Best Fit

The allocator places a process in the smallest block of unallocated memory in which it will fit.

Problems:

• It requires an expensive search of the entire free list to find the best hole.
• More importantly, it leads to the creation of lots of little holes that are not big enough to satisfy any requests. This situation is called fragmentation, and is a problem for all memory-management strategies, although it is particularly bad for best fit.


Solution: One way to avoid making little holes is to give the client a bigger block than it asked for. For example, we might round all requests up to the next larger multiple of 64 bytes. That doesn't make the fragmentation go away, it just hides it.

• Unusable space between allocated blocks, in the form of holes, is called external fragmentation.
• Unused space hidden inside an allocated block (as with the rounding above) is called internal fragmentation.

o Worst Fit

The memory manager places the process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the hole created as a result of external fragmentation.

o First Fit

Another strategy is first fit, which simply scans the free list until a large enough hole is found. Despite the name, first-fit is generally better than best-fit because it leads to less fragmentation.

Problems:

� Small holes tend to accumulate near the beginning of the free list, making the memory allocator search farther and farther each time.

Solution:

• Next fit (described below)

o Next Fit

The first fit approach tends to fragment the blocks near the beginning of the list without considering blocks further down the list. Next fit is a variant of the first-fit strategy. The problem of small holes accumulating is solved by the next fit algorithm, which starts each search where the last one left off, wrapping around to the beginning when the end of the list is reached (a form of one-way elevator).
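The first-fit and best-fit searches differ only in their stopping rule, as the C sketch below shows. The free list is modeled here as a plain array of holes (the linked-list bookkeeping and block splitting described above are omitted); the hole sizes follow the common 100/500/200/300/600 textbook example:

#include <stdio.h>

typedef struct {
    int start;     /* starting address of the hole */
    int size;      /* size of the hole             */
} Hole;

/* First fit: return the index of the first hole large enough, or -1. */
int first_fit(Hole h[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (h[i].size >= request)
            return i;
    return -1;
}

/* Best fit: return the index of the smallest hole that still fits, or -1. */
int best_fit(Hole h[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= request)
            if (best < 0 || h[i].size < h[best].size)
                best = i;
    return best;
}

int main(void) {
    Hole holes[] = {{0,100}, {100,500}, {600,200}, {800,300}, {1100,600}};
    int n = 5, request = 212;
    printf("first fit chooses hole %d\n", first_fit(holes, n, request)); /* the 500-byte hole */
    printf("best fit  chooses hole %d\n", best_fit(holes, n, request));  /* the 300-byte hole */
    return 0;
}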

REFERENCE BOOK:- Silberschatz A., Galvin P., "Operating Systems Principles", Wiley
REQUIREMENT:- TURBO C++ / JAVA


DEKKER'S AND PETERSON'S ALGORITHM

AIM OF THE EXPERIMENT:- To study and implement Dekker's and Peterson's Algorithm

THEORY:-

DEKKER'S ALGORITHM

• Dekker's algorithm is a concurrent programming algorithm for mutual exclusion derived by the Dutch mathematician T. J. Dekker in 1964 that allows two threads to share a single-use resource without conflict, using only shared memory for communication.

• It avoids the strict alternation of a naive turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.

• If two processes attempt to enter a critical section at the same time, the algorithm will allow only one process in, based on whose turn it is. If one process is already in the critical section, the other process will busy-wait for the first process to exit.
• This is done by the use of two flags, f0 and f1, which indicate an intention to enter the critical section, and a turn variable which indicates who has priority between the two processes.

Pseudocode (shared variables):

f0 := false
f1 := false
turn := 0    // or 1

p0:
    f0 := true
    while f1 {
        if turn ≠ 0 {
            f0 := false
            while turn ≠ 0 { }
            f0 := true
        }
    }
    // critical section
    ...
    // remainder section
    turn := 1
    f0 := false

p1:
    f1 := true
    while f0 {
        if turn ≠ 1 {
            f1 := false
            while turn ≠ 1 { }
            f1 := true
        }
    }
    // critical section
    ...
    // remainder section
    turn := 0
    f1 := false


Processes indicate an intention to enter the critical section, which is tested by the outer while loop. If the other process has not flagged intent, the critical section can be entered safely irrespective of the current turn. Mutual exclusion will still be guaranteed, as neither process can become critical before setting their flag (implying at least one process will enter the while loop). This also guarantees progress, as waiting will not occur on a process which has withdrawn intent to become critical. Alternatively, if the other process's variable was set, the while loop is entered and the turn variable will establish who is permitted to become critical. Processes without priority will withdraw their intention to enter the critical section until they are given priority again (the inner while loop). Processes with priority will break from the while loop and enter their critical section.

Dekker's algorithm guarantees mutual exclusion, freedom from deadlock, and freedom from starvation. Let us see why the last property holds. Suppose p0 is stuck inside the "while f1" loop forever. There is freedom from deadlock, so eventually p1 will proceed to its critical section and set turn = 0 (and the value of turn will remain unchanged as long as p0 doesn't progress). Eventually p0 will break out of the inner "while turn ≠ 0" loop (if it was ever stuck on it). After that it will set f0 := true and settle down to waiting for f1 to become false (since turn = 0, it will never do the actions in the while loop). The next time p1 tries to enter its critical section, it will be forced to execute the actions in its "while f0" loop. In particular, it will eventually set f1 = false and get stuck in the "while turn ≠ 1" loop (since turn remains 0). The next time control passes to p0, it will exit the "while f1" loop and enter its critical section.

If the algorithm were modified by performing the actions in the "while f1" loop without checking if turn = 0, then there would be a possibility of starvation. Thus all the steps in the algorithm are necessary.

One advantage of this algorithm is that it doesn't require special test-and-set (atomic read/modify/write) instructions and is therefore highly portable between languages and machine architectures. One disadvantage is that it is limited to two processes and makes use of busy waiting instead of process suspension. (The use of busy waiting suggests that processes should spend a minimum of time inside the critical section.)

Modern operating systems provide mutual exclusion primitives that are more general and flexible than Dekker's algorithm. However, in the absence of actual contention between the two processes, entry to and exit from the critical section is extremely efficient when Dekker's algorithm is used.

Many modern CPUs execute their instructions in an out-of-order fashion. This algorithm won't work on SMP machines equipped with these CPUs without the use of memory barriers.


Additionally, many optimizing compilers can perform transformations that will cause this algorithm to fail regardless of the platform. In many languages, it is legal for a compiler to detect that the flag variables f0 and f1 are never accessed in the loop. It can then remove the writes to those variables from the loop, using a process called loop-invariant code motion. It would also be possible for many compilers to detect that the turn variable is never modified by the inner loop, and perform a similar transformation, resulting in a potential infinite loop. If either of these transformations is performed, the algorithm will fail, regardless of architecture.

To alleviate this problem, variables should be marked as volatile, i.e. modifiable outside the scope of the currently executing context. For example, in Java one would annotate these variables as 'volatile'. Note however that the C/C++ "volatile" attribute only guarantees that the compiler generates code with the proper ordering; it does not include the necessary memory barriers to guarantee in-order execution of that code.

PETERSON'S ALGORITHM

Peterson's algorithm is a concurrent programming algorithm for mutual exclusion that allows two processes to share a single-use resource without conflict, using only shared memory for communication. It was formulated by Gary Peterson in 1981 at the University of Rochester. While Peterson's original formulation worked with only two processes, the algorithm can be generalised for more than two, as discussed in "Operating Systems Review, January 1990 ('Proof of a Mutual Exclusion Algorithm', M Hofri)".

The algorithm (shared variables):

flag[0] = 0
flag[1] = 0
turn = 0

P0:
    flag[0] = 1
    turn = 1
    while (flag[1] && turn == 1)
        ;   // do nothing
    // critical section
    ...
    // end of critical section
    flag[0] = 0

P1:
    flag[1] = 1
    turn = 0
    while (flag[0] && turn == 0)
        ;   // do nothing
    // critical section
    ...
    // end of critical section
    flag[1] = 0

The algorithm uses two variables, flag and turn. A flag value of 1 indicates that the process wants to enter the critical section. The variable turn holds the ID of the process


whose turn it is. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section, or if P1 has given priority to P0 by setting turn to 0.

The algorithm can be checked against the three essential criteria of mutual exclusion:

Mutual exclusion. P0 and P1 can never be in the critical section at the same time: if P0 is in its critical section, then flag[0] is 1 and either flag[1] is 0 or turn is 0. In both cases, P1 cannot be in its critical section.

Progress requirement. This criterion states that no process which is not in a critical section is allowed to block a process which wants to enter the critical section. There is no strict alternation between P0 and P1. There is, however, a scenario in which a process must wait even though no process is in the critical section:

1. P0 sets flag[0] = 1 and turn = 1
2. P0 enters its critical section (flag[1] is still 0)
3. context switch
4. P1 sets flag[1] = 1, but is preempted before it can set turn = 0
5. context switch
6. P0 leaves its critical section
7. If P0 now wants to enter its critical section again, it has to wait until P1 resumes and sets turn = 0, even though there is currently no process in the critical section.

Bounded waiting. A process will not wait longer than one turn for entrance to the critical section: after giving priority to the other process, this process will run to completion and set its flag to 0, thereby allowing the other process to enter the critical section.

When working at the hardware level, Peterson's algorithm is typically not needed to achieve atomic access. Some processors have special instructions, like test-and-set or compare-and-swap, that, by locking the memory bus, can be used to provide mutual exclusion in SMP systems.

Most modern CPUs reorder memory accesses to improve execution efficiency. Such processors invariably give some way to force ordering in a stream of memory accesses, typically through a memory barrier instruction. Implementations of Peterson's and related algorithms on processors which reorder memory accesses generally require use of such operations to work correctly, to keep sequential operations from happening in an incorrect order. Note that reordering of memory accesses can happen even on processors that don't reorder instructions (such as the PowerPC processor in the Xbox 360). Most such CPUs also have some sort of guaranteed atomic operation, such as XCHG on x86 processors and Load-Link/Store-Conditional on Alpha, MIPS, PowerPC, and other


architectures. These instructions are intended to provide a way to build synchronization primitives more efficiently than can be done with pure shared memory approaches.
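For experimentation, here is one possible C rendering of Peterson's algorithm using POSIX threads. It is a sketch, not the canonical implementation: the shared counter exists only to make mutual exclusion observable, and __sync_synchronize() (a GCC full memory barrier) is inserted because, as discussed above, the algorithm fails on reordering CPUs without barriers. Compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

volatile int flag[2] = {0, 0};   /* flag[i]=1: process i wants to enter */
volatile int turn = 0;           /* whose turn it is to wait            */
int counter = 0;                 /* shared data protected by the lock   */

void lock(int self) {
    int other = 1 - self;
    flag[self] = 1;
    turn = other;                /* give priority to the other process  */
    __sync_synchronize();        /* memory barrier: publish flag/turn   */
    while (flag[other] && turn == other)
        ;                        /* busy wait */
}

void unlock(int self) {
    __sync_synchronize();        /* keep critical-section stores above  */
    flag[self] = 0;
}

void *worker(void *arg) {
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        counter++;               /* critical section */
        unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}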

REFERENCE BOOK:- William Stallings, "Operating Systems", Fourth Edition, Pearson Education
REQUIREMENT:- TURBO C++ / JAVA


READERS/WRITERS PROBLEM

AIM OF THE EXPERIMENT:- To study and implement the Readers/Writers Problem

THEORY:-

The readers/writers (R-W) problem is another classic problem against which the design of synchronization and concurrency mechanisms can be tested; the producer/consumer and dining philosophers problems are two others.

Definition

• There is a data area that is shared among a number of processes.
• Any number of readers may simultaneously read from the data area.
• Only one writer at a time may write to the data area.
• If a writer is writing to the data area, no reader may read it.
• If there is at least one reader reading the data area, no writer may write to it.
• Readers only read and writers only write.
• A process that both reads and writes a data area must be considered a writer (consider a producer or consumer).

Semaphore Solution: Readers have Priority

int readcount = 0;
semaphore wsem = 1;    /* writer exclusion / first-reader lock */
semaphore x = 1;       /* protects readcount */

void main() {
    int p = fork();
    if (p) reader();   /* assume multiple instances */
    else   writer();   /* assume multiple instances */
}

void reader() {
    while (1) {
        wait(x);
        readcount++;
        if (readcount == 1)
            wait(wsem);        /* first reader locks out writers */
        signal(x);
        doReading();
        wait(x);
        readcount--;
        if (readcount == 0)
            signal(wsem);      /* last reader lets writers in */
        signal(x);
    }
}

void writer() {
    while (1) {
        wait(wsem);
        doWriting();
        signal(wsem);
    }
}

Once readers have gained control, a steady flow of reader processes could starve the writer processes.

It would be better if, when a writer needs access, subsequent read requests were held up until after the writing is done.

Semaphore Solution: Writers have Priority

int readcount = 0, writecount = 0;
semaphore rsem = 1, wsem = 1;
semaphore x = 1, y = 1, z = 1;

void main() {
    int p = fork();
    if (p) reader();   /* assume multiple instances */
    else   writer();   /* assume multiple instances */
}

void reader() {
    while (1) {
        wait(z);               /* at most one reader queues on rsem */
        wait(rsem);            /* blocked while writers are waiting */
        wait(x);
        readcount++;
        if (readcount == 1)
            wait(wsem);
        signal(x);
        signal(rsem);
        signal(z);
        doReading();
        wait(x);
        readcount--;
        if (readcount == 0)
            signal(wsem);
        signal(x);
    }
}

void writer() {
    while (1) {
        wait(y);
        writecount++;
        if (writecount == 1)
            wait(rsem);        /* first writer locks out new readers */
        signal(y);
        wait(wsem);
        doWriting();
        signal(wsem);
        wait(y);
        writecount--;
        if (writecount == 0)
            signal(rsem);
        signal(y);
    }
}

State summary:

Only readers in the system:
• wsem set
• no queues

Only writers in the system:
• wsem and rsem set
• writers queue on wsem

Both, with a reader first:
• wsem set by a reader
• rsem set by a writer
• writers queue on wsem
• the second reader queues on rsem
• other readers queue on z

Both, with a writer first:
• wsem set by a writer
• rsem set by a writer
• writers queue on wsem
• the first reader queues on rsem
• other readers queue on z

REFERENCE BOOK:- William Stallings, "Operating Systems", Fourth Edition, Pearson Education
REQUIREMENT:- TURBO C++ / JAVA


DINING PHILOSOPHERS PROBLEM

AIM OF THE EXPERIMENT:- To study and implement the Dining Philosophers Problem

THEORY:-

In computer science, the dining philosophers problem is an illustrative example of a common computing problem in concurrency. It is a classic multi-process synchronization problem.

In 1965, Edsger Dijkstra set an examination question on a synchronization problem where five computers competed for access to five shared tape drive peripherals. Soon afterwards the problem was retold by Tony Hoare as the dining philosophers problem.

It provides a theoretical explanation of deadlock and resource starvation by assuming that each philosopher takes one fork first and then looks for the other.

The dining philosophers problem is summarized as five philosophers sitting at a table doing one of two things: eating or thinking. While eating, they are not thinking, and while thinking, they are not eating. The five philosophers sit at a circular table with a large bowl of spaghetti in the center. A fork is placed between each pair of adjacent philosophers, and as such, each philosopher has one fork to his left and one fork to his right. As spaghetti is difficult to serve and eat with a single fork, it is assumed that a philosopher must eat with two forks. The philosopher can only use the forks on his immediate left and right.

Illustration of the dining philosophers problem

The dining philosophers problem is sometimes explained using rice and chopsticks rather than spaghetti and forks, as it is more intuitively obvious that two chopsticks are required to begin eating.

The philosophers never speak to each other, which creates a dangerous possibility of deadlock when every philosopher holds a left fork and waits perpetually for a right fork (or vice versa).

Originally used as a means of illustrating the problem of deadlock, this system reaches deadlock when there is a 'cycle of unwarranted requests'. In this case philosopher P1 waits for the fork grabbed by philosopher P2 who is waiting for the fork of philosopher P3 and so forth, making a circular chain.


Starvation (and the pun was intended in the original problem description) might also occur independently of deadlock if a philosopher is unable to acquire both forks due to a timing issue. For example there might be a rule that the philosophers put down a fork after waiting five minutes for the other fork to become available and wait a further five minutes before making their next attempt. This scheme eliminates the possibility of deadlock (the system can always advance to a different state) but still suffers from the problem of livelock. If all five philosophers appear in the dining room at exactly the same time and each picks up their left fork at the same time the philosophers will wait five minutes until they all put their forks down and then wait a further five minutes before they all pick them up again.

The lack of available forks is an analogy to the lacking of shared resources in real computer programming, a situation known as concurrency. Locking a resource is a common technique to ensure the resource is accessed by only one program or chunk of code at a time. When the resource a program is interested in is already locked by another one, the program waits until it is unlocked. When several programs are involved in locking resources, deadlock might happen, depending on the circumstances. For example, one program needs two files to process. When two such programs lock one file each, both programs wait for the other one to unlock the other file, which will never happen.

In general the dining philosophers problem is a generic and abstract problem used for explaining various issues which arise in problems which hold mutual exclusion as a core idea. For example, as in the above case deadlock/livelock is well explained with the dining philosophers problem.


Solutions

Waiter solution

A relatively simple solution is achieved by introducing a waiter at the table. Philosophers must ask the waiter's permission before taking up any forks. Because the waiter is aware of which forks are in use, he is able to arbitrate and prevent deadlock. When four of the forks are in use, the next philosopher to request one has to wait for the waiter's permission, which is not given until a fork has been released. The logic is kept simple by specifying that philosophers always seek to pick up their left hand fork before their right hand fork (or vice versa).


To illustrate how this works, consider the philosophers are labelled clockwise from A to E. If A and C are eating, four forks are in use. B sits between A and C so has neither fork available, whereas D and E have one unused fork between them. Suppose D wants to eat. Were he to take up the fifth fork, deadlock becomes likely. If instead he asks the waiter and is told to wait, we can be sure that next time two forks are released there will certainly be at least one philosopher who could successfully request a pair of forks. Therefore deadlock cannot happen.
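One common way to realize the waiter is a counting semaphore initialized to N − 1 = 4, so that at most four philosophers may reach for forks at once and at least one of them can always obtain both. A sketch in the same semaphore pseudocode style as the earlier experiments (wait/signal as before; this code is illustrative, not from the original text):

semaphore waiter = 4;                /* at most N-1 philosophers compete for forks */
semaphore fork[5] = {1, 1, 1, 1, 1}; /* one binary semaphore per fork */

void philosopher(int i) {
    while (1) {
        think();
        wait(waiter);                /* ask the waiter for permission   */
        wait(fork[i]);               /* pick up left fork               */
        wait(fork[(i + 1) % 5]);     /* pick up right fork              */
        eat();
        signal(fork[(i + 1) % 5]);   /* put down right fork             */
        signal(fork[i]);             /* put down left fork              */
        signal(waiter);              /* tell the waiter we are finished */
    }
}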

Resource hierarchy solution

Another simple solution is achieved by assigning a partial order, or hierarchy, to the resources (the forks, in this case), and establishing the convention that all resources will be requested in order, and released in reverse order, and that no two resources unrelated by order will ever be used by a single unit of work at the same time. Here, the resources (forks) will be numbered 1 through 5, in some order, and each unit of work (philosopher) will always pick up the lower-numbered fork first, and then the higher-numbered fork, from among the two forks he plans to use. Then, he will always put down the higher numbered fork first, followed by the lower numbered fork. In this case, if four of the five philosophers simultaneously pick up their lower-numbered fork, only the highest numbered fork will remain on the table, so the fifth philosopher will not be able to pick up any fork. Moreover, only one philosopher will have access to that highest-numbered fork, so he will be able to eat using two forks. When he finishes using the forks, he will put down the highest-numbered fork first, followed by the lower-numbered fork, freeing another philosopher to grab the latter and begin eating.

While the resource hierarchy solution avoids deadlocks, it is not always practical, especially when the list of required resources is not completely known in advance. For example, if a unit of work holds resources 3 and 5 and then determines it needs resource 2, it must release 5, then 3 before acquiring 2, and then it must re-acquire 3 and 5 in that order. Computer programs that access large numbers of database records would not run efficiently if they were required to release all higher-numbered records before accessing a new record, making the method impractical for that purpose.

This is often the most practical solution for real-world computer science problems: by assigning a constant hierarchy of locks, and by enforcing the order in which the locks are obtained, the problem can be avoided.

Chandy / Misra solution

In 1984, K. Mani Chandy and J. Misra proposed a different solution to the dining philosophers problem to allow for arbitrary agents (numbered P1, ..., Pn) to contend for an arbitrary number of resources, unlike Dijkstra's solution. It is also completely distributed and requires no central authority after initialization.


1. For every pair of philosophers contending for a resource, create a fork and give it to the philosopher with the lower ID. Each fork can either be dirty or clean. Initially, all forks are dirty.

2. When a philosopher wants to use a set of resources (i.e. eat), he must obtain the forks from his contending neighbors. For all such forks he does not have, he sends a request message.

3. When a philosopher with a fork receives a request message, he keeps the fork if it is clean, but gives it up when it is dirty. If he sends the fork over, he cleans the fork before doing so.

4. After a philosopher is done eating, all his forks become dirty. If another philosopher had previously requested one of the forks, he cleans the fork and sends it.

This solution also allows for a large degree of concurrency, and will solve an arbitrarily large problem.

Algorithm:

One can consider the Dining Philosophers to be a deadlock problem, and can apply deadlock prevention to it by numbering the forks and always acquiring the lowest numbered fork first.

#define N 5                         /* number of philosophers */
#define LEFT(i)   (i)               /* index of fork on philosopher i's left  */
#define RIGHT(i)  (((i) + 1) % N)   /* index of fork on philosopher i's right */

typedef enum { THINKING, HUNGRY, EATING } phil_state;

phil_state state[N];
semaphore mutex = 1;
semaphore f[N];                     /* one semaphore per fork, all initialized to 1 */

void get_forks(int i) {
    int max, min;
    if (RIGHT(i) > LEFT(i)) {
        max = RIGHT(i);
        min = LEFT(i);
    } else {
        min = RIGHT(i);
        max = LEFT(i);
    }
    P(f[min]);                      /* always acquire the lower-numbered fork first */
    P(f[max]);
}

void put_forks(int i) {
    V(f[LEFT(i)]);
    V(f[RIGHT(i)]);
}


void philosopher(int process) {
    while (1) {
        think();
        get_forks(process);
        eat();
        put_forks(process);
    }
}

REFERENCE BOOK:- Silberschatz A., Galvin P., "Operating Systems Principles", Wiley
REQUIREMENT:- TURBO C++ / JAVA


BANKER'S ALGORITHM

AIM OF THE EXPERIMENT:- To study and implement Banker's Algorithm (Deadlock Avoidance)

THEORY:-

Deadlock Definition

A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause (including itself).

Banker's algorithm

The Banker's algorithm is a resource allocation & deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of pre-determined maximum possible amounts of all resources, and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.

The algorithm was developed in the design process for the THE operating system and originally described (in Dutch) in EWD108[1]. The name is by analogy with the way that bankers account for liquidity constraints.

The Banker's algorithm is run by the operating system whenever a process requests resources.[2] The algorithm prevents deadlock by denying or postponing the request if it determines that accepting the request could put the system in an unsafe state (one where deadlock could occur).

Resources

For the Banker's algorithm to work, it needs to know three things:

• How much of each resource each process could possibly request • How much of each resource each process is currently holding • How much of each resource the system has available

Some of the resources that are tracked in real systems are memory, semaphores and interface access.


Example

Assuming that the system distinguishes between four types of resources, (A, B, C and D), the following is an example of how those resources could be distributed. Note that this example shows the system at an instant before a new request for resources arrives. Also, the types and number of resources are abstracted. Real systems, for example, would deal with much larger quantities of each resource.

Available system resources:
     A B C D
     3 1 1 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 1 1 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 1 5 0

Safe and Unsafe States

A state (as in the above example) is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward. This is a reasonable assumption in most cases since the system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance perspective). Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.

Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests by the processes that would allow each to acquire its maximum resources and then terminate (returning its resources to the system). Any state where no such set exists is an unsafe state.

Pseudo-Code

P  - the set of processes

Mp - maximal requirement of resources for process p

Cp - current resource allocation of process p

A  - currently available resources


while (P != ∅) {
    found = FALSE;
    foreach (p ∈ P) {
        if (Mp − Cp ≤ A) {
            /* p can obtain all it needs;          */
            /* assume it does so, terminates, and  */
            /* releases what it already has        */
            A = A + Cp;
            P = P − {p};
            found = TRUE;
        }
    }
    if (!found)
        return FAIL;
}
return OK;
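A direct C translation of this loop, hard-coded with the three-process, four-resource example from above (the need of a process is Max − Allocation):

#include <stdio.h>

#define NP 3   /* number of processes      */
#define NR 4   /* number of resource types */

int main(void) {
    int avail[NR]     = {3, 1, 1, 2};
    int alloc[NP][NR] = {{1,2,2,1}, {1,0,3,3}, {1,1,1,0}};
    int max[NP][NR]   = {{3,3,2,2}, {1,2,3,4}, {1,1,5,0}};
    int done[NP] = {0}, finished = 0;

    while (finished < NP) {
        int found = 0;
        for (int p = 0; p < NP; p++) {
            if (done[p]) continue;
            int fits = 1;
            for (int r = 0; r < NR; r++)        /* need = max - alloc */
                if (max[p][r] - alloc[p][r] > avail[r])
                    fits = 0;
            if (fits) {                          /* assume p runs to completion */
                for (int r = 0; r < NR; r++)
                    avail[r] += alloc[p][r];     /* p releases what it held */
                done[p] = 1;
                finished++;
                found = 1;
                printf("P%d can finish\n", p + 1);
            }
        }
        if (!found) {
            printf("state is UNSAFE\n");
            return 1;
        }
    }
    printf("state is SAFE\n");
    return 0;
}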

Example

We can show that the state given in the previous example is a safe state by showing that it is possible for each process to acquire its maximum resources and then terminate.

1. P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum.
   o The system then still has 1 A, no B, 1 C and 1 D resource available.
2. P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system.
   o The system now has 4 A, 3 B, 3 C and 3 D resources available.
3. P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources.
   o The system now has 5 A, 3 B, 6 C and 6 D resources.
4. P3 acquires 4 C resources and terminates.
   o The system now has all resources: 6 A, 4 B, 7 C and 6 D.
5. Because all processes were able to terminate, this state is safe.

Note that these requests and acquisitions are hypothetical. The algorithm generates them to check the safety of the state, but no resources are actually given and no processes actually terminate. Also note that the order in which these requests are generated – if several can be fulfilled – doesn't matter, because all hypothetical requests let a process terminate, thereby increasing the system's free resources.

For an example of an unsafe state, consider what would happen if process 2 were holding 1 more unit of resource B at the beginning.

Requests

When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to grant the request. The algorithm is fairly straightforward once the distinction between safe and unsafe states is understood.

1. Can the request be granted?
   o If not, the request is impossible and must either be denied or put on a waiting list.
2. Assume that the request is granted.
3. Is the new state safe?
   o If so, grant the request.
   o If not, either deny the request or put it on a waiting list.

Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating system.

Example

Continuing the previous examples, assume process 3 requests 2 units of resource C.

1. There is not enough of resource C available to grant the request.
2. The request is denied.

On the other hand, assume process 3 requests 1 unit of resource C.

1. There are enough resources to grant the request.
2. Assume the request is granted. The new state of the system would be:

Available system resources:
       A B C D
Free   3 1 0 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 1 2 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 1 5 0

3. Determine if this new state is safe:
   1. P1 can acquire 2 A, 1 B and 1 D resources and terminate.
   2. Then, P2 can acquire 2 B and 1 D resources and terminate.
   3. Finally, P3 can acquire 3 C resources and terminate.
   4. Therefore, this new state is safe.
4. Since the new state is safe, grant the request.

Finally, assume that process 2 requests 1 unit of resource B.


1. There are enough resources.
2. Assuming the request is granted, the new state would be:

Available system resources:
       A B C D
Free   3 0 1 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 1 3 3
P3   1 1 1 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 1 5 0

3. Is this state safe? Assume P1, P2 and P3 each request the remainder of resources B and C:
   o P1 is unable to acquire enough B resources.
   o P2 is unable to acquire enough B resources.
   o P3 is unable to acquire enough C resources.
   o No process can acquire enough resources to terminate, so this state is not safe.
4. Since the state is unsafe, deny the request.

Note that in this example, no process was able to terminate. It is possible that some processes will be able to terminate, but not all of them. That would still be an unsafe state.


REFERENCE BOOK:- Silberschatz A., Galvin P., "Operating Systems Principles", Wiley

REQUIREMENT:- TURBO C++ / JAVA


BASICS OF SHELL COMMANDS IN UNIX / LINUX

AIM OF THE EXPERIMENT:- To study basics of shell commands in Unix / Linux

THEORY:-

Linux commands

This guide will make you familiar with basic GNU/Linux shell commands. It is just an introduction to complement Ubuntu's graphical tools. Note that Linux is case sensitive: User, user, and USER are all different to Linux.

Starting a Terminal

To open a Terminal, choose Applications → Accessories → Terminal, or press Alt+F2 and type gnome-terminal.

File and Directory Commands

cd

The cd command changes directories. When you open a terminal you will be in your home directory; to move around the file system you will use cd. Examples:

To navigate into the root directory, type: cd /
To navigate to your home directory, type: cd or cd ~
To navigate up one directory level, type: cd ..


To navigate to the previous directory (or back), type: cd -
To navigate through multiple levels of directory at once, specify the full directory path that you want to go to. For example, cd /var/www goes directly to the /www subdirectory of /var/, and cd ~/Desktop moves you to the Desktop subdirectory inside your home directory.

pwd

The pwd command shows you which directory you are located in (pwd stands for "print working directory"). For example, typing pwd in the Desktop directory will show ~/Desktop. GNOME Terminal also displays this information in the title bar of its window.

ls

The ls command shows you the files in your current directory. Used with certain options, you can see the sizes of files, when files were made, and the permissions of files. For example, typing ls ~ will show you the files that are in your home directory.

Creating a text file with cat

To create a text file called foo.txt, enter:

$ cat > foo.txt

Type the contents, for example "This is a test. Hello world!", and press Ctrl+D to save the file. To display the file's contents, type:

$ cat foo.txt


cp

The cp command makes a copy of a file. For example, cp file foo makes an exact copy of file and names it foo; the original file is still there.

mv

The mv command moves a file to a different location or renames a file. For example, mv file foo renames the file file to foo, while mv foo ~/Desktop moves the file foo to your Desktop directory without renaming it. You must specify a new file name to rename a file. If you are using mv with sudo you will not be able to use the ~ shortcut and must use the full pathnames to your files, because when you are working as root, ~ refers to the root account's home directory, not your own.

rm

Use the rm command to remove or delete a file in your directory. It will not work on directories which have files in them.

mkdir

The mkdir command allows you to create directories. For example, typing mkdir music creates a music directory in the current directory.

System Information Commands

df

The df command displays filesystem disk space usage for all partitions. df -h gives the information in megabytes (M) and gigabytes (G) instead of blocks (-h means "human-readable").

free

The free command displays the amount of free and used memory in the system.


free -m gives the information in megabytes, which is probably most useful for current computers.

top

The top command displays information on your GNU/Linux system: running processes and system resources, including CPU, RAM and swap usage and the total number of tasks being run. To exit top, press q.

uname

The uname command with the -a option prints all system information, including machine name, kernel name and version, and a few other details. It is most useful for checking which kernel you're using.

lsb_release

The lsb_release command with the -a option prints version information for the Linux release you're running. For example, typing lsb_release -a will give you:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 7.04
Release:        7.04
Codename:       feisty

ifconfig

The ifconfig command reports on your system's network interfaces.

Executing Commands with Elevated Privileges

The following commands need to be prefaced with the sudo command. Please see RootSudo for information on using sudo.

Adding a New Group

The addgroup command is used to create a new group on the system. To create a new group, type: addgroup newgroup. The above command will create a new group called newgroup.

Adding a New User

The adduser command is used to create new users on the system. To create a new user, type: adduser newuser


The above command will create a new user called newuser. To assign a password for the new user, use the passwd command: passwd newuser. Finally, to assign the new user to the new group, type: adduser newuser newgroup

Options

The default behavior of a command may usually be modified by adding an option to the command. The ls command, for example, has a -s option, so that ls -s will include file sizes in the listing. There is also a -h option to get those sizes in a "human-readable" format. Options can be grouped in clusters, so ls -sh is exactly the same command as ls -s -h. Most options have a long version, prefixed with two dashes instead of one, so even ls --size --human-readable is the same command.

"man" and getting help

command --help and man command are the two most important tools at the command line. Virtually all commands understand the -h (or --help) option, which will produce a short usage description of the command and its options, then exit back to the command prompt. Type man -h or man --help to see this in action.

Every command and nearly every application in Linux has a man (manual) file, so finding one is as simple as typing man command to bring up a longer manual entry for the specified command. For example, man mv


will bring up the mv (move) manual. Move up and down the man file with the arrow keys, and quit back to the command prompt with q. man man will bring up the manual entry for the man command, which is a good place to start. man intro is especially useful: it displays the "Introduction to user commands", a well-written, fairly brief introduction to the Linux command line.

There are also info pages, which are generally more in-depth than man pages. Try info info for the introduction to info pages.

Searching the man files

If you aren't sure which command or application you need to use, you can try searching the man files. man -k foo will search the man files for foo; try man -k nautilus to see how this works. This is the same as the apropos command. man -f foo searches only the titles of your system's man files; for example, try man -f gnome. This is the same as the whatis command.

Other Useful Things

Pasting in commands

Often you will be referred to instructions that require commands to be pasted into the terminal. You might be wondering why the text you've copied from a web page using Ctrl+C won't paste in with Ctrl+V. Surely you don't have to type in all those nasty


commands and filenames? Relax: middle-click with your mouse (both buttons simultaneously on a two-button mouse), or right-click and select Paste from the menu.

Save on typing

Up Arrow or Ctrl+p: scrolls through the commands you've entered previously.
Down Arrow or Ctrl+n: takes you back to a more recent command.
Enter: runs the command you have selected.
Tab: a very useful feature that autocompletes commands or filenames when there is only one option, or else gives you a list of options.

Changing the text

The mouse won't work; use the Left/Right arrow keys to move around the line. When the cursor is where you want it in the line, typing inserts text; it doesn't overtype what's already there.

Ctrl+a or Home: moves the cursor to the start of the line.
Ctrl+e or End: moves the cursor to the end of the line.
Ctrl+b: moves to the beginning of the previous or current word.
Ctrl+k: deletes from the current cursor position to the end of the line.
Ctrl+u: deletes the whole of the current line.
Ctrl+w: deletes the word before the cursor.

The following online guides are available:

AptGetHowto: using apt-get to install packages from the command line.


Commandline Repository Editing: adding the Universe/Multiverse repositories through the command line.
grep Howto: grep is a powerful command line search tool.
find: locate files on the command line.
CommandlineHowto: longer and more complete than this basic guide, but still unfinished.
HowToReadline: information on some more advanced customization for the command line.

For more detailed tutorials on the Linux command line, please see:

http://linuxcommand.org/ : basic BASH tutorials, including BASH scripting
http://linuxsurvival.com/index.php : Java-based tutorials
http://rute.2038bug.com/index.html.gz : a massive online book about system administration, almost all from the command line

grep

What is grep?

grep is a command line tool that allows you to find a string in a file or stream. It can be used with regular expressions to be more flexible at finding strings.

How to use grep

In the simplest case, grep can simply be invoked like this:

% grep 'STRING' filename

This is OK, but it does not show the true power of grep, since it only looks at one file. A useful example of using grep with multiple files is finding all files in a directory that contain the name of a person. This can be easily accomplished in the following way:

% grep 'Nicolas Kassis' *

Notice the use of single quotes; these are not essential, but in this example they are required since the name contains a space. Double quotes could also have been used in this example. Now let's use some regular expressions...

Page 40: Lab Manual Os

Now let's use some regular expressions.

Grep Regular Expressions

grep can search for complicated patterns to find what you need. Here is a list of some of the special characters used to create a regular expression:

^          Denotes the beginning of a line
$          Denotes the end of a line
.          Matches any one character
*          Matches 0 or more of the previous character
.*         Matches any number and type of characters
[ ]        Matches one character from those listed inside the square brackets
[^ ]       Does not match any of the characters listed
\<, \>     Denote the beginning and end (respectively) of a word

So an example of a regular expression search would be:

% grep "\<[A-Za-z].*" file

This will search for any word which begins with a letter, upper or lower case.

For more details check:
BasicCommands
http://www.gnu.org/software/grep/doc/
http://en.wikipedia.org/wiki/Grep
man grep and info grep on your computer
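Building on the table above, a few more small examples (the file names are placeholders):

% grep '^#' script.sh     # lines beginning with # (comments)
% grep 'done$' script.sh  # lines ending with the word done
% grep '[0-9]' data.txt   # lines containing at least one digit
% grep '^$' data.txt      # empty lines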

Page 41: Lab Manual Os

ALL COMMANDS A to Z

alias     Create an alias
awk       Find and replace text within file(s)
break     Exit from a loop
cal       Display a calendar
case      Conditionally perform a command
cat       Display the contents of a file
cd        Change directory
chgrp     Change group ownership
chmod     Change access permissions
chown     Change file owner and group
clear     Clear the terminal screen
cmp       Compare two files
comm      Compare two sorted files line by line
command   Run a command - ignoring shell functions
continue  Resume the next iteration of a loop
cp        Copy one or more files to another location
csplit    Split a file into context-determined pieces
cut       Divide a file into several parts
date      Display or change the date & time
dc        Desk calculator
diff      Display the differences between two files
diff3     Show differences among three files
dir       Briefly list directory contents
echo      Display a message on screen
egrep     Search file(s) for lines that match an extended expression
exec      Execute a command
exit      Exit the shell
expr      Evaluate expressions
factor    Print prime factors
false     Do nothing, unsuccessfully
fgrep     Search file(s) for lines that match a fixed string
find      Search for files that meet a desired criteria
function  Define function macros
grep      Search file(s) for lines that match a given pattern
groups    Print the group names a user is in
history   Command history

Page 42: Lab Manual Os

info      Help info
kill      Stop a process from running
locate    Find files
logname   Print current login name
logout    Exit a login shell
man       Help manual
mkdir     Create new folder(s)
more      Display output one screen at a time
mv        Move or rename files or directories
passwd    Modify a user password
printf    Format and print data
ps        Process status
pwd       Print working directory
read      Read a line from standard input
return    Exit a shell function
rm        Remove files
rmdir     Remove folder(s)
sleep     Delay for a specified time
sort      Sort text files
source    Run commands from a file `.'
split     Split a file into fixed-size pieces
tail      Output the last part of files
true      Do nothing, successfully
tty       Print filename of terminal on stdin
type      Describe a command
uniq      Uniquify files
until     Execute commands (until error)
useradd   Create new user account
usermod   Modify user account
users     List users currently logged in
watch     Execute/display a program periodically
wc        Print byte, word, and line counts
while     Execute commands
who       Print all usernames currently logged in
whoami    Print the current user id and name (`id -un')

Page 43: Lab Manual Os

BASICS OF SHELL PROGRAMMING IN UNIX / LINUX

AIM OF THE EXPERIMENT :- To study basics of shell programming in Unix / Linux

THEORY :-

Shell Programming

The shell, a command interpreter, is a program started after the user session is opened by the login process. The shell remains active until it reads the <EOT> character, which requests termination of execution and informs the operating system kernel of that fact. Each user obtains their own separate instance of sh. The sh program prints a prompt on the screen to show its readiness to read the next command. The shell interpreter works according to the following scenario:
1. displays a prompt,
2. waits for text to be entered from the keyboard,
3. analyses the command line and finds a command,
4. submits the command to the kernel for execution,
5. accepts an answer from the kernel and again waits for user input.

The Shell Initialization

The shell initialization steps:
1. values are assigned to environment variables,
2. system scripts defining other shell variables are executed.

Shell system scripts:
1. sh, ksh: .profile
2. csh: .login, .cshrc

Commands

Submitting a command:
$ [ VAR=value ... ] command_name [ arguments ... ]
$ echo $PATH

Built-in commands:
$ PATH=$PATH:/usr/local/bin
$ export PATH
The set built-in without any parameters prints the values of all variables; the export built-in without any parameters prints the values of all exported environment variables.

Special Parameters

Special parameters may only be referenced; direct assignment to them is not allowed.
$0   name of the command
$1   first argument of the script/function
$2   second argument of the script/function
$9   ninth argument of the script/function
$*   all positional arguments: "$*" = "$1 $2 .."
$@   list of all positional arguments, separately quoted: "$@" = "$1" "$2" ..
$#   the number of positional arguments (of the command, or as given to the last set)
$?   exit status of the most recently executed foreground command
$!   PID of the most recently started background command
$$   PID of the current shell
$1-$9 may also be set by the set command
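A minimal script illustrating the most common special parameters (the file name args.sh is our own choice):

#!/bin/sh
# args.sh - print some of the special parameters
echo "command name : $0"
echo "first arg    : $1"
echo "arg count    : $#"
echo "all args     : $*"
ls /nonexistent 2>/dev/null
echo "last status  : $?"    # non-zero here, since ls failed

$ sh args.sh one two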

Page 44: Lab Manual Os

Metacharacters

Special characters called metacharacters are used when resolving file names and when grouping commands into bigger sets:
*          any string not containing the "/" character
?          any single character
[ ]        one character from the given set
[...-...]  like [ ], with a range running from the first to the last character
[!...-...] any character except those within the given range
#          start of a comment
\          escape character; preserves the literal value of the following character
$          the value of the variable named by the following string
;          command separator
` `        a string in accent (back-quote) characters is executed as a command, and the stdout of the execution is the result of the quotation
' '        preserves the literal value of each character within the quotes
" "        preserves the literal value of all characters within the quotes, with the exception of $, `, and \
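The quoting rules can be checked directly at the prompt (the variable name FILE is arbitrary):

$ FILE=report
$ echo '$FILE'   # single quotes: prints $FILE literally
$ echo "$FILE"   # double quotes: prints report
$ echo `date`    # accent quotes: the output of date is substituted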

Command interpretation

Steps in command interpretation under the sh shell:
1. a line of characters is entered,
2. the line is divided into a sequence of words, based on the IFS value,
3. substitution 1: $name strings are replaced with the variables' values, e.g.
   $ b=/usr/user
   $ ls -l prog.* > ${b}3
4. substitution 2: the metacharacters * ? [ ] are replaced with matching file names from the current directory,
5. substitution 3: accent-quoted strings, ` `, are interpreted as commands and executed.

Grouping

Besides the special argument --, commands may be grouped with brackets:
- round brackets, ( command-sequence; ), group commands which are to be run as a separate sub-process; the group may be run in the background (&),
- curly brackets, { command-sequence; }, just group commands.
A command's end is recognized by: <NL> ; &
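The difference between the two groupings can be seen at the prompt (directory names are only illustrative):

$ cd /home/user
$ (cd /tmp; pwd)     # prints /tmp; the cd happened in a sub-process
$ pwd                # still /home/user
$ { cd /tmp; pwd; }  # prints /tmp; the cd happened in the current shell
$ pwd                # now /tmp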

Input/Output Redirection

After session opening, the user environment contains the following streams:
- standard input (stdin) - stream 0,
- standard output (stdout) - stream 1,
- standard error output (stderr) - stream 2.

There are the following redirection operators:
> file    redirect stdout to file
>> file   append stdout to file
< file    redirect stdin from file
<< EOT    read the input stream directly from the following lines, until the word EOT occurs
n> file   redirect the output stream with descriptor n to file
n>> file  append the output stream with descriptor n to file
n>&m      redirect output of stream n to input of stream m
n<&m      redirect input of stream n to output of stream m
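A few examples of the operators in use (all file names are placeholders):

$ ls -l > listing.txt         # stdout to a file
$ ls /nonexistent 2> err.txt  # stderr (stream 2) to a file
$ sort < listing.txt          # stdin from a file
$ cat >> notes.txt << EOT
first line
second line
EOT

Here cat reads the two lines up to the word EOT and appends them to notes.txt.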

Shell Scripts

Commands grouped together in a common text file may be executed by:
$ sh [options] file_with_commands [arg ...]
After giving the file execute permission with the chmod command, e.g.:
$ chmod +x plik_z_cmd

Page 45: Lab Manual Os

one can submit it as a command, without putting sh before the text file name:
$ file_with_commands arg ...

Compound Commands

For steering the execution of a shell script there are the following instructions: if, for, while, until, case. It is also possible to write a short form of if:
And-if  &&  (run the next command when the result equals 0)
Or-if   ||  (run the next command when the result differs from 0)
$ cp x y && vi y
$ cp x y || cp z y
Each command execution places its result in the $? variable. The value "0" means that the execution was successful; a nonzero result means that some error occurred during command execution.
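This can be checked directly with the true and false built-ins (x, y and z above are placeholder file names):

$ true; echo $?    # prints 0
$ false; echo $?   # prints a nonzero value (typically 1)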

'if' Instruction

The standard structure of the compound if:

if if-list
then then-list
[ elif elif-list; then then-list ] ...
[ else else-list ]
fi

The if-list is executed. If its exit status is zero, the then-list is executed. Otherwise, each elif-list is executed in turn, and if its exit status is zero, the corresponding then-list is executed and the command completes. Otherwise, the else-list is executed, if present.

if cc -c p.c
then
    ld p.o
else
    echo "compilation error" 1>&2
fi

'case' Instruction

The standard structure of the compound case:

case word in
pattern1) list1;;
pattern2) list2;;
*) list_default;;
esac

A case command first expands word, and tries to match it against each pattern in turn, using the same matching rules as for path-name expansion. An example:

case $# in

0) echo 'usage: man name' 1>&2; exit 2;;
esac

Loop Instructions

In the sh command interpreter there are three types of loop instructions:

for name [ in word ] ; do list ; done
while list; do list; done
until list; do list; done

- the for instruction is executed once for each element of the for-list,
- the while instruction repeats the loop while the condition returns a 0 exit code (while the condition is fulfilled),
- the until instruction repeats the loop until the condition finally returns a 0 exit code (the loop runs while the condition is not fulfilled).

The instructions continue and break may be used inside loops.

#!/bin/sh
for i in /tmp /usr/tmp
do
    rm -rf $i/*
done
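Since only for and while examples appear here, a comparable until sketch:

#!/bin/sh
# count down from 3; the loop runs while the test returns nonzero
n=3
until [ $n -eq 0 ]
do
    echo $n
    n=`expr $n - 1`
done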

Page 46: Lab Manual Os

How to create a shell program file:
1. Create a file with the .sh extension in any editor, e.g. prog1.sh.
2. Enter commands just as they are typed at the $ prompt.
3. Save the file (the exact key combination depends on the editor).
4. To execute the shell program, pass the file name (with its .sh extension) to sh:
   $ sh prog1.sh
5. The read command is used to get a value into a variable through the keyboard.
6. The echo command is used to display characters on the screen.
   Syntax:  echo "HELLO"
   Display: HELLO (on screen)
7. Explain the commands used in the shell program.
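Putting steps 1-7 together, one possible prog1.sh (the variable name is our own choice):

#!/bin/sh
# prog1.sh - read a name from the keyboard and greet the user
echo "Enter your name:"
read name
echo "HELLO $name"

Run it with:
$ sh prog1.sh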

Different examples

Reading a file line by line:
$ cat file.dat | while read x y z
do
    echo $x $y $z
done

Counting with a while loop:
#!/bin/sh
i=1
while [ $i -le 5 ]; do
    echo $i
    i=`expr $i + 1`
done

Using set to split command output into positional parameters:
$ who -r
.  run-level 2  Aug 21 16:58  2  0  S
$ set `who -r`
$ echo $6
16:58

The Real-world Example

#!/usr/bin/zsh
PATH=/usr/bin:/usr/local/bin:/bin
WAIT_TIME=5
. /export/home/oracle/.zshenv
# check whether it makes sense to check it
PID=`ps -ef | grep LISTENER | grep -v grep | awk '{ print $2 }'`
if test -z "$PID"
then
    exit 0
fi
# check how it works
lsnrctl status >/dev/null 2>&1 &
sleep $WAIT_TIME
kill $! 2>/dev/null
res="$?"
if test "$res" != "1"
then
    kill $PID
    kill -9 $PID
    logger -p user.err "Oracle LISTENER ERROR (stunned) - restarted"
    lsnrctl start
fi

Page 47: Lab Manual Os

BANKER’S ALGORITHM (DEADLOCK AVOIDANCE)

AIM OF THE EXPERIMENT :- To study and implement Banker’s Algorithm (Deadlock Avoidance)

THEORY :-

Banker's algorithm

The Banker's algorithm is a resource allocation & deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of pre-determined maximum possible amounts of all resources, and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.

The algorithm was developed in the design process for the THE operating system and originally described (in Dutch) in EWD108[1]. The name is by analogy with the way that bankers account for liquidity constraints.

Algorithm

The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm prevents deadlock by denying or postponing the request if it determines that accepting the request could put the system in an unsafe state (one where deadlock could occur).

Resources

For the Banker's algorithm to work, it needs to know three things:

• How much of each resource each process could possibly request
• How much of each resource each process is currently holding
• How much of each resource the system has available

Some of the resources that are tracked in real systems are memory, semaphores and interface access.

Example

Assuming that the system distinguishes between four types of resources (A, B, C and D), the following is an example of how those resources could be distributed. Note that this example shows the system at an instant before a new request for resources arrives. Also,

Page 48: Lab Manual Os

the types and number of resources are abstracted. Real systems, for example, would deal with much larger quantities of each resource.

Available system resources:
     A B C D
     3 1 1 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 1 1 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 1 5 0

Safe and Unsafe States

A state is said to be a safe state if there exists a sequence of other states that leads to all the customers getting loans up to their credit limits (all the processes getting all their resources and terminating). An unsafe state does not have to lead to deadlock, since a customer might not need the entire credit line available, but the banker cannot count on this behaviour.

A state (as in the above example) is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward. This is a reasonable assumption in most cases since the system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance perspective). Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.

Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests by the processes that would allow each to acquire its maximum resources and then terminate (returning its resources to the system). Any state where no such set exists is an unsafe state.

Page 49: Lab Manual Os

Pseudo-Code

P - set of processes

Mp - maximal requirement of resources for process p

Cp - resources currently allocated to process p

A - currently available resources

while (P != ∅) {
    found = FALSE;
    foreach (p ∈ P) {
        if (Mp − Cp ≤ A) {
            /* p can obtain all it needs. */
            /* assume it does so, terminates, and */
            /* releases what it already has. */
            A = A + Cp;
            P = P − {p};
            found = TRUE;
        }
    }
    if (!found) return FAIL;
}
return OK;

Example

We can show that the state given in the previous example is a safe state by showing that it is possible for each process to acquire its maximum resources and then terminate.

1. P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum
   - The system now has 1 A, no B, 1 C and 1 D available
2. P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system
   - The system now has 4 A, 3 B, 3 C and 3 D available
3. P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources
   - The system now has 5 A, 3 B, 6 C and 6 D available
4. P3 acquires 4 C resources and terminates
   - The system now has all resources: 6 A, 4 B, 7 C and 6 D
5. Because all processes were able to terminate, this state is safe
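In the spirit of the shell practicals above, the arithmetic of this safe sequence can be replayed with a short sh sketch (the variable names are our own; each step subtracts a process's remaining need and adds back its maximum):

#!/bin/sh
# replay the safe sequence P1 -> P2 -> P3 from the example
a=3 b=1 c=1 d=2                       # available A B C D

# P1: needs 2 1 0 1, then returns its maximum 3 3 2 2
a=`expr $a - 2 + 3`; b=`expr $b - 1 + 3`
c=`expr $c + 2`;     d=`expr $d - 1 + 2`
echo "after P1: $a $b $c $d"          # 4 3 3 3

# P2: needs 0 2 0 1, then returns its maximum 1 2 3 4
a=`expr $a + 1`;     b=`expr $b - 2 + 2`
c=`expr $c + 3`;     d=`expr $d - 1 + 4`
echo "after P2: $a $b $c $d"          # 5 3 6 6

# P3: needs 0 0 4 0, then returns its maximum 1 1 5 0
a=`expr $a + 1`;     b=`expr $b + 1`
c=`expr $c - 4 + 5`
echo "after P3: $a $b $c $d"          # 6 4 7 6 - everything free again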

Note that these requests and acquisitions are hypothetical. The algorithm generates them to check the safety of the state, but no resources are actually given and no processes actually terminate. Also note that the order in which these requests are generated – if several can be fulfilled – doesn't matter, because all hypothetical requests let a process terminate, thereby increasing the system's free resources.

Page 50: Lab Manual Os

For an example of an unsafe state, consider what would happen if process 2 were holding 1 more unit of resource B at the beginning.

Requests

When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to grant the request. The algorithm is fairly straightforward once the distinction between safe and unsafe states is understood.

1. Can the request be granted?
   - If not, the request is impossible and must either be denied or put on a waiting list
2. Assume that the request is granted
3. Is the new state safe?
   - If so, grant the request
   - If not, either deny the request or put it on a waiting list

Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating system.

Example

Continuing the previous examples, assume process 3 requests 2 units of resource C.

1. There is not enough of resource C available to grant the request
2. The request is denied

On the other hand, assume process 3 requests 1 unit of resource C.

1. There are enough resources to grant the request
2. Assume the request is granted
   - The new state of the system would be:

Available system resources:
     A B C D
Free 3 1 0 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 1 2 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 1 5 0

Page 51: Lab Manual Os

3. Determine if this new state is safe:
   1. P1 can acquire 2 A, 1 B and 1 D resources and terminate
   2. Then, P2 can acquire 2 B and 1 D resources and terminate
   3. Finally, P3 can acquire 3 C resources and terminate
   4. Therefore, this new state is safe
4. Since the new state is safe, grant the request

Finally, assume that process 2 requests 1 unit of resource B.

1. There are enough resources
2. Assuming the request is granted, the new state would be:

Available system resources:
     A B C D
Free 3 0 1 2

Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 1 3 3
P3   1 1 1 0

Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 1 5 0

3. Is this state safe? Assume P1, P2, and P3 each request more of resources B and C:
   - P1 is unable to acquire enough B resources
   - P2 is unable to acquire enough B resources
   - P3 is unable to acquire enough C resources
   - No process can acquire enough resources to terminate, so this state is not safe
4. Since the state is unsafe, deny the request