Operating Systems


Transcript of Operating Systems

1

OS

2 Operating System

Just a program

Provides a stable, consistent way for applications to deal with the hardware

3 What constitutes an OS?

Kernel

System Programs

Application Programs

5 Storage Device Hierarchy

6 Cache

Writing policies

Write back: initially, writing is done only to the cache; modified blocks are marked dirty for later writing to the backing store

Write through: writes are done synchronously both to the cache and to the backing store

Replacement policy?
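
A minimal sketch of the two write policies above, assuming a toy single-line cache; the struct and function names are illustrative, not from the slides:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 64

struct cache_line {
    uint64_t tag;
    bool     valid;
    bool     dirty;                  /* only meaningful for write-back */
    uint8_t  data[LINE_SIZE];
};

/* Write-through: update the cache and the backing store on every write. */
void write_through(struct cache_line *line, uint8_t *backing, size_t off, uint8_t byte)
{
    line->data[off] = byte;
    backing[off] = byte;             /* synchronous write to the backing store */
}

/* Write-back: update only the cache and mark the line dirty;
   the backing store is updated later, when the line is evicted. */
void write_back(struct cache_line *line, size_t off, uint8_t byte)
{
    line->data[off] = byte;
    line->dirty = true;
}

void evict(struct cache_line *line, uint8_t *backing)
{
    if (line->valid && line->dirty)
        memcpy(backing, line->data, LINE_SIZE);   /* flush dirty data */
    line->valid = false;
    line->dirty = false;
}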

7 System Calls

Instructions that allow access to privileged or sensitive resources on the CPU

System calls provide a level of portability

Parameters to system calls can be passed through registers, tables or stack

Is “printf” a system call?
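
printf itself is a C library function rather than a system call: it formats and buffers output in user space and eventually invokes the write system call. A minimal sketch of the distinction on a POSIX system:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Library call: formatting and buffering happen in user space,
       and write() is called underneath when the buffer is flushed. */
    printf("hello from printf\n");

    /* Direct system-call wrapper: traps into the kernel. */
    const char msg[] = "hello from write\n";
    write(STDOUT_FILENO, msg, strlen(msg));

    return 0;
}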

8 System Calls

File copy program:
Acquire input file name
Acquire output file name
Open input file; if the file doesn't exist, abort
Create output file; if it cannot be created, abort
Read from input file
Write to output file
Repeat until the read fails
Close output file
Terminate normally by returning closing status to the OS
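
A minimal sketch of these steps using POSIX system calls directly; error handling is shortened, and the file names are taken from the command line rather than prompted for:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        return 1;
    }

    int in = open(argv[1], O_RDONLY);                 /* open input file */
    if (in < 0) { perror("open input"); return 1; }   /* doesn't exist: abort */

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create output file */
    if (out < 0) { perror("create output"); return 1; }           /* cannot create: abort */

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)       /* repeat until the read fails */
        write(out, buf, n);

    close(in);
    close(out);
    return 0;                                         /* terminate normally */
}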

9 Operating System Operation

Interrupt driven

Dual-mode operation:
“Kernel” mode (also called “supervisor” or “protected” mode)
“User” mode: normal programs are executed here

10

Operating System Operation

Timer

Interrupts the computer after a specified period

The interrupt may be treated as a fatal error, or the OS may give the program more time (Penny, Penny, Penny)
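
The scheduler's timer is a hardware device programmed by the kernel, but the same idea can be sketched from user space with an alarm signal that "interrupts" the program after a specified period; this is only an analogy, not how the kernel timer is implemented:

#include <signal.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    (void)sig;
    /* The "interrupt" handler: decide whether to abort or grant more time. */
    write(STDOUT_FILENO, "timer fired\n", 12);
}

int main(void)
{
    signal(SIGALRM, on_alarm);   /* install the handler */
    alarm(2);                    /* interrupt this process after 2 seconds */
    pause();                     /* wait until a signal arrives */
    return 0;
}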

11

Process

A program in execution
A single ‘thread’ of execution

Uniprogramming: one thread at a time

Multiprogramming: more than one thread at a time

12

Process States

New: process is being created
Ready: process is waiting to run
Running: instructions are being executed
Waiting: process is waiting for some event to occur
Terminated: the process has finished execution

13

Process Control Block (PCB)


Only one PCB is active (running on the CPU) at a time
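
A minimal sketch of what a PCB might contain; the field names are illustrative, and real kernels use a much larger structure (e.g. Linux's task_struct):

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier */
    enum proc_state state;           /* New / Ready / Running / Waiting / Terminated */
    uint64_t        program_counter;
    uint64_t        registers[16];   /* saved CPU registers */
    int             priority;        /* scheduling information */
    void           *page_table;      /* memory-management information */
    int             open_files[16];  /* I/O status information */
    struct pcb     *next;            /* link in a scheduler queue */
};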

14

Process Creation

[Figure: a tree of processes on a typical UNIX system. init (pid = 1) spawns csh shells (pid = 7778, pid = 1400), which in turn spawn commands such as ls, cat, vi, and emacs; each parent is followed by its child processes.]

15

Process Creation – Fork()
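
A minimal sketch of UNIX process creation with fork(), followed by exec() in the child and wait() in the parent; the command passed to exec is just an example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: fork() returned 0 */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
        perror("exec");              /* reached only if exec fails */
        exit(1);
    } else {                         /* parent: fork() returned the child's pid */
        wait(NULL);                  /* wait for the child to finish */
        printf("child %d complete\n", pid);
    }
    return 0;
}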

16

A Question (Microsoft)

17

Inter Process Communication

Shared-Memory Systems: a process creates a shared-memory segment; other processes attach it to their address space

Message-Passing Systems: send(P, message), receive(id, message)
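
A minimal sketch of the shared-memory style using POSIX shm_open() and mmap(); the segment name and size are arbitrary, and a second process would open and map the same name to attach the segment:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_segment";
    const size_t size = 4096;

    /* Create the shared-memory segment. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, size);

    /* Attach it to this process's address space. */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(ptr, "hello via shared memory");
    printf("wrote: %s\n", ptr);

    /* Another process would shm_open("/demo_segment", O_RDWR, 0) and mmap it. */
    munmap(ptr, size);
    close(fd);
    shm_unlink(name);
    return 0;
}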

18

IPC - Pipes

Pipes are OS-level communication links between processes

Pipes are treated as file descriptors in most OSes
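
A minimal sketch: pipe() returns two file descriptors, and after fork() the parent writes into one end while the child reads from the other:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                      /* fds[0]: read end, fds[1]: write end */
    pipe(fds);

    if (fork() == 0) {               /* child: read from the pipe */
        close(fds[1]);
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        return 0;
    }

    close(fds[0]);                   /* parent: write into the pipe */
    const char *msg = "hello through a pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}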

19

Multithreading

Each thread has its own stack and current execution state
Threads encapsulate concurrency

20

Multithreading

Advantages: responsiveness, resource sharing, economy, scalability

Pthreads: a POSIX standard defining an API for thread creation and synchronization
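
A minimal sketch using the Pthreads API: create a few threads, each running on its own stack, then join them:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static void *worker(void *arg)
{
    long id = (long)arg;             /* each thread runs this function on its own stack */
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];

    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);  /* wait for each thread to finish */

    return 0;
}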

21

A Question (Google)

22

Scheduling

Scheduling – Deciding which threads are given access to resources from moment to moment

The CPU should not be idle

At least one process should be using the CPU

24

Scheduling

Goals/Criteria: minimize response time, maximize throughput, fairness

First-Come, First-Served (FCFS) Scheduling: run each job until it is done; short jobs get stuck behind long ones

P1 | P2 | P3

0 24 27 30

25

Preemption

The capability to preempt a process that is currently executing

Execution is prioritized

Higher-priority processes preempt lower-priority ones

26

Round Robin (RR)

Each process gets a small unit of CPU time (a time quantum)

After the quantum expires, the process is preempted and added to the end of the ready queue

With N processes in the ready queue and time quantum q, no process waits more than (N-1)q time units

Performance: q large -> FCFS; q must be large with respect to the context-switch time, otherwise there is too much overhead

27

Round Robin (RR)

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

Process Burst Time

P1 53

P2 8

P3 68

P4 24

Waiting time for P1 = ?, P2 = ?, P3 = ?, P4 = ?

Average Waiting Time = ? Average Completion Time = ?

28

Round Robin (RR)

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

Process Burst Time

P1 53

P2 8

P3 68

P4 24

Waiting time for P1 = (68-20) + (112-88) = 72, P2 = 20, P3 = 28 + (88-48) + (125-108) = 85, P4 = 48 + (108-68) = 88

Average Waiting Time = (72 + 20 + 85 + 88) / 4 = 66.25

Average Completion Time = (125 + 28 + 153 + 112) / 4 = 104.5
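
A small sketch that simulates round robin with quantum 20 over these burst times and prints each process's completion and waiting time, as a way of checking the numbers above; it assumes all processes arrive at time 0, in which case a simple cyclic scan produces the same schedule as a FIFO ready queue:

#include <stdio.h>

int main(void)
{
    const int n = 4, quantum = 20;
    int burst[]     = {53, 8, 68, 24};           /* P1..P4 burst times */
    int remaining[] = {53, 8, 68, 24};
    int completion[4] = {0};
    int time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {            /* cycle through the processes */
            if (remaining[i] == 0)
                continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                completion[i] = time;
                done++;
            }
        }
    }

    double total_wait = 0, total_comp = 0;
    for (int i = 0; i < n; i++) {
        int wait = completion[i] - burst[i];     /* waiting = completion - burst (arrival 0) */
        printf("P%d: completion %d, waiting %d\n", i + 1, completion[i], wait);
        total_wait += wait;
        total_comp += completion[i];
    }
    printf("average waiting %.2f, average completion %.2f\n",
           total_wait / n, total_comp / n);
    return 0;
}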

29

Round Robin (RR)

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

Process Burst Time

P1 53

P2 8

P3 68

P4 24

Pros and Cons: better for short jobs; context-switching adds up for long jobs

30

What if we know the future?

Shortest Job First (SJF): run whatever job has the least amount of computation to do

Shortest Remaining Time First (SRTF): preemptive version of SJF; if a job arrives with a shorter time to completion than the remaining time on the current job, immediately preempt the CPU

The basic idea is to get short jobs out of the system: big effect on short jobs, only a small effect on long ones; the result is better average response time

31

Synchronization

Most of the time, threads are working on separate data, so scheduling doesn’t matter

But what happens when they work on a shared variable?

Atomic operation: an operation that always runs to completion or not at all; indivisible; the fundamental building block

32

Synchronization

Synchronization: using atomic operations to ensure cooperation between threads

Mutual Exclusion: only one thread does a particular thing at a time

Critical Section: a piece of code that only one thread can execute at once

Lock: prevents someone from doing something

33

Synchronization

[Figure: layers of synchronization support]
Hardware: Load/Store, Disable Interrupts, Test&Set, Compare&Swap
Higher-level API: Locks, Semaphores, Monitors, Send/Receive
Programs: shared programs built on top

Everything is pretty painful if the only atomic primitives are load and store

34

Semaphores

A kind of generalized lock

Definition: A semaphore has a non-negative integer value and supports the following two operations

P(): an atomic operation that waits for the semaphore to become positive, then decrements it by 1; also called the wait() operation

V(): an atomic operation that increments the semaphore by 1, waking up a waiting P() if any; also called the signal() operation

35

Semaphores Implementation
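
The slide's code is not in the transcript; a minimal sketch of one common implementation, assuming a lower-level mutex and condition variable are available (Pthreads here, with the names sem_P/sem_V chosen to avoid clashing with POSIX sem_t):

#include <pthread.h>

/* A counting semaphore built from a mutex and a condition variable. */
struct semaphore {
    int             value;           /* non-negative count */
    pthread_mutex_t lock;
    pthread_cond_t  cond;
};

void sem_create(struct semaphore *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

/* P() / wait(): block until the value is positive, then decrement it. */
void sem_P(struct semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

/* V() / signal(): increment the value and wake one waiter, if any. */
void sem_V(struct semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}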

36

Semaphore vs Mutex

Are a binary semaphore and a mutex the same?

No: the purposes of a mutex and a semaphore are different

A mutex is a locking mechanism used to synchronize access to a resource; ownership is associated with the mutex, and only the owner can release the lock

A semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal)

More at http://www.geeksforgeeks.org/mutex-vs-semaphore/
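
A small sketch of the difference in usage, assuming Pthreads and POSIX semaphores: the mutex is locked and unlocked by the same thread around a critical section, while the semaphore is posted by one thread to signal another:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static sem_t ready;                  /* starts at 0: "work not done yet" */
static int shared_counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);          /* mutex: the same thread locks and unlocks */
    shared_counter++;
    pthread_mutex_unlock(&m);

    sem_post(&ready);                /* semaphore: signal "I am done, carry on" */
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&ready, 0, 0);
    pthread_create(&t, NULL, worker, NULL);

    sem_wait(&ready);                /* wait for the worker's signal */
    printf("counter = %d\n", shared_counter);

    pthread_join(t, NULL);
    return 0;
}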

37

Readers-Writers

Problem: several readers and writers access the same file, so access to the shared buffer must be controlled
If several readers are reading, there is no problem
If a writer is writing while a reader is reading, there is a problem

Variations: First-Writers-Then-Readers, First-Readers-Then-Writers

38

Readers-Writers
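
The transcript does not include this slide's code; a minimal sketch of the classic First-Readers-Then-Writers solution using two semaphores and a read count (POSIX semaphores here, with illustrative function names):

#include <semaphore.h>

static sem_t mutex_rc;   /* protects read_count */
static sem_t wrt;        /* exclusive access for writers (and the first reader) */
static int read_count = 0;

void reader(void)
{
    sem_wait(&mutex_rc);
    if (++read_count == 1)
        sem_wait(&wrt);          /* first reader locks out writers */
    sem_post(&mutex_rc);

    /* ... read the shared data ... */

    sem_wait(&mutex_rc);
    if (--read_count == 0)
        sem_post(&wrt);          /* last reader lets writers in again */
    sem_post(&mutex_rc);
}

void writer(void)
{
    sem_wait(&wrt);              /* exclusive access */
    /* ... write the shared data ... */
    sem_post(&wrt);
}

/* Before use: sem_init(&mutex_rc, 0, 1); sem_init(&wrt, 0, 1); */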

39

Deadlocks

P0:
Wait(Q); Wait(S)
some code here
Signal(Q); Signal(S)

P1:
Wait(S); Wait(Q)
some code here
Signal(S); Signal(Q)

A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
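
A minimal sketch of the same situation with two Pthreads mutexes acquired in opposite orders; the sleeps widen the window so that each thread ends up holding one lock while waiting forever for the other:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t Q = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t S = PTHREAD_MUTEX_INITIALIZER;

static void *p0(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&Q);          /* P0: acquire Q first ... */
    sleep(1);
    pthread_mutex_lock(&S);          /* ... then S */
    /* some code here */
    pthread_mutex_unlock(&S);
    pthread_mutex_unlock(&Q);
    return NULL;
}

static void *p1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&S);          /* P1: acquire S first ... */
    sleep(1);
    pthread_mutex_lock(&Q);          /* ... then Q: circular wait */
    /* some code here */
    pthread_mutex_unlock(&Q);
    pthread_mutex_unlock(&S);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, p0, NULL);
    pthread_create(&b, NULL, p1, NULL);
    pthread_join(a, NULL);           /* with the sleeps, this never returns */
    pthread_join(b, NULL);
    puts("no deadlock this time");
    return 0;
}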

40

Deadlock Requirements

Mutual Exclusion: only one thread at a time can use a resource

Hold and Wait: a thread holding at least one resource is waiting to acquire additional resources held by other threads

No Preemption: resources are released only voluntarily by the thread holding the resource, after the thread is finished with it

Circular Wait: there exists a set {T1, …, Tn} of waiting threads such that
T1 is waiting for a resource that is held by T2
T2 is waiting for a resource that is held by T3
…
Tn is waiting for a resource that is held by T1

41

Some Techniques

Deadlock Detection: resource-allocation graph, etc.

Prevention: break one of the conditions, e.g. circular wait (impose a global ordering on resource acquisition)

Avoidance: Banker's algorithm