1
OMSE 510: Computing Foundations
7: More IPC & Multithreading
Chris Gilmore <[email protected]>
Portland State University/OMSE
Material Borrowed from Jon Walpole’s lectures
2
Today
Classical IPC Problems
Monitors
Message Passing
Scheduling
3
Classical IPC problems
Producer Consumer (bounded buffer)
Dining philosophers
Sleeping barber
Readers and writers
4
Producer consumer problem
(Diagram: producer and consumer threads sharing a ring of 8 buffers, with InP marking where the producer inserts and OutP where the consumer removes)

Producer and consumer are separate threads
Also known as the bounded buffer problem
5
Is this a valid solution?

thread producer {
    while(1) {
        // Produce char c
        while (count == n) { no_op }
        buf[InP] = c
        InP = InP + 1 mod n
        count++
    }
}

thread consumer {
    while(1) {
        while (count == 0) { no_op }
        c = buf[OutP]
        OutP = OutP + 1 mod n
        count--
        // Consume char
    }
}
Global variables:
    char buf[n]
    int InP = 0   // place to add
    int OutP = 0  // place to get
    int count
6
How about this?

thread producer {
    while(1) {
        // Produce char c
        if (count == n) {
            sleep(full)
        }
        buf[InP] = c
        InP = InP + 1 mod n
        count++
        if (count == 1)
            wakeup(empty)
    }
}

thread consumer {
    while(1) {
        while (count == 0) {
            sleep(empty)
        }
        c = buf[OutP]
        OutP = OutP + 1 mod n
        count--
        if (count == n-1)
            wakeup(full)
        // Consume char
    }
}
Global variables:
    char buf[n]
    int InP = 0   // place to add
    int OutP = 0  // place to get
    int count
7
Does this solution work?
thread producer {
    while(1) {
        // Produce char c...
        down(empty_buffs)
        buf[InP] = c
        InP = InP + 1 mod n
        up(full_buffs)
    }
}

thread consumer {
    while(1) {
        down(full_buffs)
        c = buf[OutP]
        OutP = OutP + 1 mod n
        up(empty_buffs)
        // Consume char...
    }
}
Global variables:
    semaphore full_buffs = 0;
    semaphore empty_buffs = n;
    char buff[n];
    int InP, OutP;
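The semaphore solution above can be sketched as runnable Python (not from the original slides; `threading.Semaphore.acquire`/`release` play the roles of down/up, and the item values are illustrative):

```python
import threading

N = 8                                   # buffer size
buf = [None] * N
InP = OutP = 0
empty_buffs = threading.Semaphore(N)    # counts free slots
full_buffs = threading.Semaphore(0)     # counts filled slots
consumed = []

def producer(items):
    global InP
    for c in items:
        empty_buffs.acquire()           # down(empty_buffs)
        buf[InP] = c
        InP = (InP + 1) % N
        full_buffs.release()            # up(full_buffs)

def consumer(count):
    global OutP
    for _ in range(count):
        full_buffs.acquire()            # down(full_buffs)
        consumed.append(buf[OutP])
        OutP = (OutP + 1) % N
        empty_buffs.release()           # up(empty_buffs)

items = list("hello world!")
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start(); t1.join(); t2.join()
assert "".join(consumed) == "hello world!"
```

With a single producer and a single consumer, InP is touched only by the producer and OutP only by the consumer, so the two counting semaphores are enough; with multiple producers or consumers, a mutex around the buffer updates would also be needed.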
8
Producer consumer problem
(Diagram: producer and consumer threads sharing a ring of 8 buffers, with InP and OutP pointers)

Producer and consumer are separate threads
What is the shared state in the last solution?
Does it apply mutual exclusion? If so, how?
9
Definition of Deadlock
A set of processes is deadlocked if each
process in the set is waiting for an event that only
another process in the set can cause
Usually the event is the release of a currently held resource
None of the processes can run, be awakened, or release resources
10
Deadlock conditions

A deadlock situation can occur if and only if the following conditions hold simultaneously
Mutual exclusion condition – each resource is assigned to at most one process
Hold and wait condition – a process holding resources can request more
No preemption condition – resources cannot be forcibly taken away from a process
Circular wait condition – a chain of two or more processes, each waiting for a resource held by the next one in the chain
11
Resource acquisition scenarios
Thread A:
    acquire (resource_1)
    use resource_1
    release (resource_1)

Example:
    var r1_mutex: Mutex
    ...
    r1_mutex.Lock()
    Use resource_1
    r1_mutex.Unlock()
12
Resource acquisition scenarios

Thread A:
    acquire (resource_1)
    use resource_1
    release (resource_1)

Another Example:
    var r1_sem: Semaphore
    ...
    r1_sem.Down()
    Use resource_1
    r1_sem.Up()
13
Resource acquisition scenarios
Thread A:
    acquire (resource_1)
    use resource_1
    release (resource_1)

Thread B:
    acquire (resource_2)
    use resource_2
    release (resource_2)
14
Resource acquisition scenarios
Thread A:
    acquire (resource_1)
    use resource_1
    release (resource_1)

Thread B:
    acquire (resource_2)
    use resource_2
    release (resource_2)

No deadlock can occur here!
15
Resource acquisition scenarios: 2 resources
Thread A:
    acquire (resource_1)
    acquire (resource_2)
    use resources 1 & 2
    release (resource_2)
    release (resource_1)

Thread B:
    acquire (resource_1)
    acquire (resource_2)
    use resources 1 & 2
    release (resource_2)
    release (resource_1)
16
Resource acquisition scenarios: 2 resources
Thread A:
    acquire (resource_1)
    acquire (resource_2)
    use resources 1 & 2
    release (resource_2)
    release (resource_1)

Thread B:
    acquire (resource_1)
    acquire (resource_2)
    use resources 1 & 2
    release (resource_2)
    release (resource_1)

No deadlock can occur here!
17
Resource acquisition scenarios: 2 resources
Thread A:
    acquire (resource_1)
    use resource 1
    release (resource_1)
    acquire (resource_2)
    use resource 2
    release (resource_2)

Thread B:
    acquire (resource_2)
    use resource 2
    release (resource_2)
    acquire (resource_1)
    use resource 1
    release (resource_1)
18
Resource acquisition scenarios: 2 resources
Thread A:
    acquire (resource_1)
    use resource 1
    release (resource_1)
    acquire (resource_2)
    use resource 2
    release (resource_2)

Thread B:
    acquire (resource_2)
    use resource 2
    release (resource_2)
    acquire (resource_1)
    use resource 1
    release (resource_1)

No deadlock can occur here!
19
Resource acquisition scenarios: 2 resources
Thread A:
    acquire (resource_1)
    acquire (resource_2)
    use resources 1 & 2
    release (resource_2)
    release (resource_1)

Thread B:
    acquire (resource_2)
    acquire (resource_1)
    use resources 1 & 2
    release (resource_1)
    release (resource_2)
20
Resource acquisition scenarios: 2 resources
Thread A:
    acquire (resource_1)
    acquire (resource_2)
    use resources 1 & 2
    release (resource_2)
    release (resource_1)

Thread B:
    acquire (resource_2)
    acquire (resource_1)
    use resources 1 & 2
    release (resource_1)
    release (resource_2)
Deadlock is possible!
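The deadlock arises because the two threads acquire the locks in opposite orders. A standard fix is to impose one global acquisition order on all locks, as in the earlier no-deadlock scenario. A minimal Python sketch (illustrative names, not from the slides):

```python
import threading

r1 = threading.Lock()
r2 = threading.Lock()

def worker(first, second, log, tag):
    # Acquire both locks in a single global order to rule out circular wait.
    with first:
        with second:
            log.append(tag)

log = []
# Both threads take r1 then r2 -- consistent ordering, so no deadlock.
a = threading.Thread(target=worker, args=(r1, r2, log, "A"))
b = threading.Thread(target=worker, args=(r1, r2, log, "B"))
a.start(); b.start(); a.join(); b.join()
assert sorted(log) == ["A", "B"]
```

Passing `(r2, r1)` to one thread would recreate the deadlock-prone pattern shown above.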
21
Examples of deadlock

Deadlock occurs in a single program
    Programmer creates a situation that deadlocks
    Kill the program and move on
    Not a big deal

Deadlock occurs in the Operating System
    Spin locks and locking mechanisms are mismanaged within the OS
    Threads become frozen
    System hangs or crashes
    Must restart the system and kill all applications
22
Dining philosophers problem

Five philosophers sit at a table
One fork between each philosopher

Why do they need to synchronize?
How should they do it?

Each philosopher is modeled with a thread:

while(TRUE) {
    Think();
    Grab first fork;
    Grab second fork;
    Eat();
    Put down first fork;
    Put down second fork;
}
23
Is this a valid solution?
#define N 5
Philosopher() {
    while(TRUE) {
        Think();
        take_fork(i);
        take_fork((i+1) % N);
        Eat();
        put_fork(i);
        put_fork((i+1) % N);
    }
}
24
Working towards a solution …
#define N 5
Philosopher() {
    while(TRUE) {
        Think();
        take_fork(i);
        take_fork((i+1) % N);
        Eat();
        put_fork(i);
        put_fork((i+1) % N);
    }
}

Replace the two take_fork calls with take_forks(i), and the two put_fork calls with put_forks(i).
25
Working towards a solution …
#define N 5
Philosopher() {
    while(TRUE) {
        Think();
        take_forks(i);
        Eat();
        put_forks(i);
    }
}
26
Picking up forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

take_forks(int i) {
    down(mutex);
    state[i] = HUNGRY;
    test(i);
    up(mutex);
    down(sem[i]);
}

// only called with mutex set!
test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT] != EATING &&
        state[RIGHT] != EATING) {
        state[i] = EATING;
        up(sem[i]);
    }
}
27
Putting down forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

put_forks(int i) {
    down(mutex);
    state[i] = THINKING;
    test(LEFT);
    test(RIGHT);
    up(mutex);
}

// only called with mutex set!
test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT] != EATING &&
        state[RIGHT] != EATING) {
        state[i] = EATING;
        up(sem[i]);
    }
}
28
Dining philosophers
Is the previous solution correct?
What does it mean for it to be correct?
Is there an easier way?
29
The sleeping barber problem
30
The sleeping barber problem

Barber:
    While there are people waiting for a hair cut, put one in the barber chair, and cut their hair
    When done, move to the next customer
    Else go to sleep, until someone comes in

Customer:
    If barber is asleep, wake him up for a haircut
    If someone is getting a haircut, wait for the barber to become free by sitting in a chair
    If all chairs are full, leave the barbershop
31
Designing a solution

How will we model the barber and customers?
What state variables do we need?
    ... and which ones are shared?
    ... and how will we protect them?
How will the barber sleep?
How will the barber wake up?
How will customers wait?
What problems do we need to look out for?
32
Is this a good solution?
const CHAIRS = 5
var customers: Semaphore
    barbers: Semaphore
    lock: Mutex
    numWaiting: int = 0

Barber Thread:
    while true
        Down(customers)
        Lock(lock)
        numWaiting = numWaiting - 1
        Up(barbers)
        Unlock(lock)
        CutHair()
    endWhile

Customer Thread:
    Lock(lock)
    if numWaiting < CHAIRS
        numWaiting = numWaiting + 1
        Up(customers)
        Unlock(lock)
        Down(barbers)
        GetHaircut()
    else
        Unlock(lock)    -- give up & go home
    endIf
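The pseudocode maps directly onto Python primitives. A sketch, with the added assumption that the barber serves a fixed number of customers so the demo terminates (names follow the slide):

```python
import threading

CHAIRS = 5
customers = threading.Semaphore(0)   # barber sleeps on this
barbers = threading.Semaphore(0)     # customers wait on this
lock = threading.Lock()
num_waiting = 0
haircuts = 0
turned_away = 0

def barber(total):
    global num_waiting, haircuts
    for _ in range(total):
        customers.acquire()          # sleep until a customer arrives
        with lock:
            num_waiting -= 1
        barbers.release()            # call one waiting customer over
        haircuts += 1                # CutHair()

def customer():
    global num_waiting, turned_away
    with lock:
        if num_waiting < CHAIRS:
            num_waiting += 1
            customers.release()
            stay = True
        else:
            stay = False             # chairs full: give up & go home
            turned_away += 1
    if stay:
        barbers.acquire()            # wait for the barber, then GetHaircut()

n = 3                                # fewer than CHAIRS, so nobody is turned away
b = threading.Thread(target=barber, args=(n,))
cs = [threading.Thread(target=customer) for _ in range(n)]
b.start()
for c in cs: c.start()
for c in cs: c.join()
b.join()
assert haircuts == 3 and turned_away == 0 and num_waiting == 0
```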
33
The readers and writers problem
Multiple readers and writers want to access a database (each one is a thread)
Multiple readers can proceed concurrently
Writers must synchronize with readers and other writers
    only one writer at a time!
    when someone is writing, there must be no readers!

Goals:
    Maximize concurrency
    Prevent starvation
34
Designing a solution

How will we model the readers and writers?
What state variables do we need?
    ... and which ones are shared?
    ... and how will we protect them?
How will the writers wait?
How will the writers wake up?
How will readers wait?
How will the readers wake up?
What problems do we need to look out for?
35
Is this a valid solution to readers & writers?
var mut: Mutex = unlocked
    db: Semaphore = 1
    rc: int = 0

Reader Thread:
    while true
        Lock(mut)
        rc = rc + 1
        if rc == 1
            Down(db)
        endIf
        Unlock(mut)
        ... Read shared data ...
        Lock(mut)
        rc = rc - 1
        if rc == 0
            Up(db)
        endIf
        Unlock(mut)
        ... Remainder Section ...
    endWhile

Writer Thread:
    while true
        ... Remainder Section ...
        Down(db)
        ... Write shared data ...
        Up(db)
    endWhile
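The same reader-preference logic in runnable Python (a sketch; `log` is an illustrative stand-in for the shared database):

```python
import threading

mut = threading.Lock()
db = threading.Semaphore(1)
rc = 0                          # reader count
log = []

def reader(i):
    global rc
    with mut:
        rc += 1
        if rc == 1:
            db.acquire()        # first reader locks out writers
    log.append(("r", i))        # read shared data
    with mut:
        rc -= 1
        if rc == 0:
            db.release()        # last reader lets writers in

def writer(i):
    db.acquire()
    log.append(("w", i))        # write shared data
    db.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads: t.start()
for t in threads: t.join()
assert sorted(log) == [("r", 0), ("r", 1), ("r", 2), ("w", 0)]
```

Note the multiple readers can be inside `log.append` concurrently, while the writer is excluded until `rc` drops to 0.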
36
Readers and writers solution
Does the previous solution have any problems?
    Is it “fair”?
    Can any threads be starved?
    If so, how could this be fixed?
37
Monitors

It is difficult to produce correct programs using semaphores
    correct ordering of up and down is tricky!
    avoiding race conditions and deadlock is tricky!
    boundary conditions are tricky!

Can we get the compiler to generate the correct semaphore code for us?
    What are suitable higher-level abstractions for synchronization?
38
Monitors

Related shared objects are collected together
Compiler enforces encapsulation/mutual exclusion

Encapsulation:
    Local data variables are accessible only via the monitor’s entry procedures (like methods)

Mutual exclusion:
    A monitor has an associated mutex lock
    Threads must acquire the monitor’s mutex lock before invoking one of its procedures
39
Monitors and condition variables
But we need two flavors of synchronization

Mutual exclusion
    Only one at a time in the critical section
    Handled by the monitor’s mutex

Condition synchronization
    Wait until a certain condition holds
    Signal waiting threads when the condition holds
40
Monitors and condition variables
Condition variables (cv) for use within monitors

wait(cv)
    thread blocked (queued) until condition holds
    monitor mutex released!!

signal(cv)
    signals the condition and unblocks (dequeues) a thread
41
Monitor structures
(Diagram: a monitor containing shared data, condition variables x and y, initialization code, local methods, and “entry” methods)

Monitor entry queue: list of threads waiting to enter the monitor
Entry methods: can be called from outside the monitor; only one active at any moment
Condition variables: local to monitor (each has an associated list of waiting threads)
42
Monitor example for mutual exclusion
monitor: BoundedBuffer
    var buffer : ...;
        nextIn, nextOut : ...;

    entry deposit(c: char)
        begin ... end

    entry remove(var c: char)
        begin ... end
end BoundedBuffer

process Producer
begin
    loop
        <produce char “c”>
        BoundedBuffer.deposit(c)
    end loop
end Producer

process Consumer
begin
    loop
        BoundedBuffer.remove(c)
        <consume char “c”>
    end loop
end Consumer
43
Observations
That’s much simpler than the semaphore-based solution to producer/consumer (bounded buffer)!
… but where is the mutex?
… and what do the bodies of the monitor procedures look like?
44
Monitor example with condition variables
monitor: BoundedBuffer
    var buffer : array[0..n-1] of char
        nextIn, nextOut : 0..n-1 := 0
        fullCount : 0..n := 0
        notEmpty, notFull : condition

    entry deposit(c: char)
    begin
        if (fullCount = n) then
            wait(notFull)
        end if
        buffer[nextIn] := c
        nextIn := nextIn+1 mod n
        fullCount := fullCount+1
        signal(notEmpty)
    end deposit

    entry remove(var c: char)
    begin
        if (fullCount = 0) then
            wait(notEmpty)
        end if
        c := buffer[nextOut]
        nextOut := nextOut+1 mod n
        fullCount := fullCount-1
        signal(notFull)
    end remove
end BoundedBuffer
45
Condition variables
“Condition variables allow processes to synchronize based on some state of the monitor variables.”
46
Condition variables in producer/consumer

“NotFull” condition
“NotEmpty” condition

Operations Wait() and Signal() allow synchronization within the monitor

When a producer thread adds an element...
    A consumer may be sleeping
    Need to wake the consumer... Signal
47
Condition synchronization semantics
“Only one thread can be executing in the monitor at any one time.”
Scenario:
    Thread A is executing in the monitor
    Thread A does a signal, waking up thread B
    What happens now?
    Signaling and signaled threads can not both run!
    ... so which one runs, which one blocks, and on what queue?
48
Monitor design choices

Condition variables introduce a problem for mutual exclusion
    Only one process can be active in the monitor at a time, so what to do when a process is unblocked on signal?
    Must not block holding the mutex, so what to do when a process blocks on wait?

Should signals be stored/remembered?
    Signals are not stored
    If signal occurs before wait, the signal is lost!

Should condition variables count?
49
Monitor design choices

Choices when A signals a condition that unblocks B
    A waits for B to exit the monitor, or blocks again
    B waits for A to exit the monitor, or block
    Signal causes A to immediately exit the monitor or block (... but awaiting what condition?)

Choices when A signals a condition that unblocks B & C
    B is unblocked, but C remains blocked
    C is unblocked, but B remains blocked
    Both B & C are unblocked ... and compete for the mutex?

Choices when A calls wait and blocks
    A new external process is allowed to enter
    But which one?
50
Option 1: Hoare semantics

What happens when a Signal is performed?
    The signaling thread (A) is suspended
    The signaled thread (B) wakes up and runs immediately

Result:
    B can assume the condition is now true/satisfied
    Hoare semantics give strong guarantees
    Easier to prove correctness

When B leaves the monitor, A can run
    A might resume execution immediately
    ... or maybe another thread (C) will slip in!
51
Option 2: MESA Semantics (Xerox PARC)
What happens when a Signal is performed?
    The signaling thread (A) continues
    The signaled thread (B) waits
    When A leaves the monitor, then B runs

Issue: What happens while B is waiting?
    Can another thread (C) run after A signals, but before B runs?

In MESA semantics a signal is more like a hint
    Requires B to recheck the state of the monitor variables (the invariant) to see if it can proceed or must wait some more
52
Code for the “deposit” entry routine
monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char)
        if cntFull == N
            notFull.Wait()
        endIf
        buffer[nextIn] = c
        nextIn = (nextIn+1) mod N
        cntFull = cntFull + 1
        notEmpty.Signal()
    endEntry
entry remove() ...
endMonitor
Hoare Semantics
53
Code for the “deposit” entry routine
monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char)
        while cntFull == N
            notFull.Wait()
        endWhile
        buffer[nextIn] = c
        nextIn = (nextIn+1) mod N
        cntFull = cntFull + 1
        notEmpty.Signal()
    endEntry
entry remove() ...
endMonitor
MESA Semantics
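Python's `threading.Condition` has MESA semantics — a woken waiter must re-acquire the monitor lock and recheck its condition, hence the `while` loop. A sketch of the bounded buffer built on it (names follow the slide; the buffer size is illustrative):

```python
import threading

N = 4
buf = [None] * N
next_in = next_out = cnt_full = 0
monitor = threading.Lock()              # the monitor's mutex
not_full = threading.Condition(monitor)
not_empty = threading.Condition(monitor)

def deposit(c):
    global next_in, cnt_full
    with monitor:
        while cnt_full == N:            # MESA: signal is a hint, so re-check
            not_full.wait()
        buf[next_in] = c
        next_in = (next_in + 1) % N
        cnt_full += 1
        not_empty.notify()              # Signal(notEmpty)

def remove():
    global next_out, cnt_full
    with monitor:
        while cnt_full == 0:
            not_empty.wait()
        c = buf[next_out]
        next_out = (next_out + 1) % N
        cnt_full -= 1
        not_full.notify()               # Signal(notFull)
        return c

out = []
p = threading.Thread(target=lambda: [deposit(c) for c in "monitor"])
q = threading.Thread(target=lambda: out.append("".join(remove() for _ in range(7))))
p.start(); q.start(); p.join(); q.join()
assert out == ["monitor"]
```

Both condition variables share the same underlying lock, mirroring the single monitor mutex of the slides.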
54
Code for the “remove” entry routine
monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char) ...

    entry remove()
        if cntFull == 0
            notEmpty.Wait()
        endIf
        c = buffer[nextOut]
        nextOut = (nextOut+1) mod N
        cntFull = cntFull - 1
        notFull.Signal()
    endEntry
endMonitor
Hoare Semantics
55
Code for the “remove” entry routine
monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char) ...

    entry remove()
        while cntFull == 0
            notEmpty.Wait()
        endWhile
        c = buffer[nextOut]
        nextOut = (nextOut+1) mod N
        cntFull = cntFull - 1
        notFull.Signal()
    endEntry
endMonitor
MESA Semantics
56
“Hoare Semantics”

What happens when a Signal is performed?
    The signaling thread (A) is suspended.
    The signaled thread (B) wakes up and runs immediately.
    B can assume the condition is now true/satisfied
From the original Hoare Paper:
“No other thread can intervene [and enter the monitor] between the signal and the continuation of exactly one waiting thread.”
“If more than one thread is waiting on a condition, we postulate that the signal operation will reactivate the longest waiting thread. This gives a simple neutral queuing discipline which ensures that every waiting thread will eventually get its turn.”
57
Implementing Hoare Semantics

Thread A holds the monitor lock
Thread A signals a condition that thread B was waiting on
Thread B is moved back to the ready queue?
    B should run immediately
    Thread A must be suspended
    ... the monitor lock must be passed from A to B
When B finishes it releases the monitor lock
Thread A must re-acquire the lock
    Perhaps A is blocked, waiting to re-acquire the lock
58
Implementing Hoare Semantics
Problem:
    Possession of the monitor lock must be passed directly from A to B and then back to A

Simply ending monitor entry methods with
    monLock.Unlock()
... will not work
    A’s request for the monitor lock must be expedited somehow
59
Implementing Hoare Semantics
Implementation Ideas:
    Consider a thread like A, that hands off the mutex lock to a signaled thread, to be “urgent”
    Thread C is not “urgent”

Consider two wait lists associated with each MonitorLock
    UrgentlyWaitingThreads
    NonurgentlyWaitingThreads

Want to wake up urgent threads first, if any
60
Brinch-Hansen Semantics

Hoare Semantics
    On signal, allow signaled process to run
    Upon its exit from the monitor, the signaler process continues

Brinch-Hansen Semantics
    Signaler must immediately exit following any invocation of signal
    Restricts the kind of solutions that can be written
    ... but monitor implementation is easier
61
Reentrant code

A function/method is said to be reentrant if...
    A function that has been invoked may be invoked again before the first invocation has returned, and will still work correctly

Recursive routines are reentrant

In the context of concurrent programming...
    A reentrant function can be executed simultaneously by more than one thread, with no ill effects
62
Reentrant Code

Consider this function...

integer seed;

integer rand() {
    seed = seed * 8151 + 3423;
    return seed;
}
What if it is executed by different threads concurrently?
63
Reentrant Code

Consider this function...

integer seed;

integer rand() {
    seed = seed * 8151 + 3423;
    return seed;
}

What if it is executed by different threads concurrently?
    The results may be “random”
    This routine is not reentrant!
64
When is code reentrant?

Some variables are
    “local” -- to the function/method/routine
    “global” -- sometimes called “static”

Access to local variables?
    A new stack frame is created for each invocation
    Each thread has its own stack

What about access to global variables?
    Must use synchronization!
65
Making this function threadsafe

integer seed;
semaphore m = 1;

integer rand() {
    integer tmp;
    down(m);
    seed = seed * 8151 + 3423;
    tmp = seed;
    up(m);
    return tmp;
}
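The locked version can be checked concretely in Python. A sketch: the `% 2**31` is an added assumption standing in for the fixed-width integer arithmetic the pseudocode implies, and the thread/iteration counts are illustrative:

```python
import threading

seed = 1
m = threading.Lock()

def rand():
    # Same linear congruential update as the slide, protected by a lock.
    global seed
    with m:
        seed = (seed * 8151 + 3423) % (2**31)
        return seed

def worker():
    for _ in range(1000):
        rand()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

# 4000 locked updates from seed=1 give a deterministic final value,
# no matter how the threads interleave.
expected = 1
for _ in range(4000):
    expected = (expected * 8151 + 3423) % (2**31)
assert seed == expected
```

Without the lock, two threads could read the same `seed`, and updates would be lost, so the final value would vary from run to run.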
66
Making this function reentrant

integer seed;

integer rand( integer *seed ) {
    *seed = *seed * 8151 + 3423;
    return *seed;
}
67
Message Passing

Interprocess Communication
    via shared memory
    across machine boundaries

Message passing can be used for synchronization or general communication

Processes use send and receive primitives
    receive can block (like waiting on a Semaphore)
    send unblocks a process blocked on receive (just as a signal unblocks a waiting process)
68
Producer-consumer with message passing
The basic idea:
    After producing, the producer sends the data to the consumer in a message
    The system buffers messages
        The producer can out-run the consumer
        The messages will be kept in order
    But how does the producer avoid overflowing the buffer?
        After consuming the data, the consumer sends back an “empty” message
    A fixed number of messages (N=100)
    The messages circulate back and forth.
69
Producer-consumer with message passing
thread producer
    var c, em: char
    while true
        // Produce char c...
        Receive(consumer, &em)    -- Wait for an empty msg
        Send(consumer, &c)        -- Send c to consumer
    endWhile
end
70
Producer-consumer with message passing
const N = 100    -- Size of message buffer
var em: char
for i = 1 to N                  -- Get things started by
    Send(producer, &em)         -- sending N empty messages
endFor

thread consumer
    var c, em: char
    while true
        Receive(producer, &c)   -- Wait for a char
        Send(producer, &em)     -- Send empty message back
        // Consume char...
    endWhile
end
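The circulating-messages scheme can be sketched with Python queues standing in for per-process mailboxes (`queue.Queue.get`/`put` give the blocking Receive/Send behavior; the names are illustrative):

```python
import queue
import threading

N = 100
to_consumer = queue.Queue()        # carries data messages
to_producer = queue.Queue()        # carries "empty" messages back

received = []

def producer(items):
    for c in items:
        to_producer.get()          # Receive an empty msg (blocks if none left)
        to_consumer.put(c)         # Send c to consumer

def consumer(count):
    for _ in range(count):
        c = to_consumer.get()      # Receive a char
        to_producer.put("empty")   # Send empty message back
        received.append(c)         # Consume char

for _ in range(N):                 # get things started with N empty messages
    to_producer.put("empty")

items = list("message passing")
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
assert "".join(received) == "message passing"
```

The N circulating empties bound how far the producer can get ahead, just as in the slide.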
71
Design choices for message passing
Option 1: Mailboxes
    System maintains a buffer of sent, but not yet received, messages
    Must specify the size of the mailbox ahead of time
    Sender will be blocked if the buffer is full
    Receiver will be blocked if the buffer is empty
72
Design choices for message passing
Option 2: No buffering
    If Send happens first, the sending thread blocks
    If Receive happens first, the receiving thread blocks
    Sender and receiver must Rendezvous (i.e. meet)
        Both threads are ready for the transfer
        The data is copied / transmitted
        Both threads are then allowed to proceed
73
Barriers
(a) Processes approaching a barrier
(b) All processes but one blocked at barrier
(c) Last process arrives; all are let through
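Python's `threading.Barrier` implements exactly this: every thread blocks in `wait()` until the last one arrives, then all are released together. A minimal sketch (thread count is illustrative):

```python
import threading

barrier = threading.Barrier(4)     # all 4 threads must arrive before any proceeds
after = []

def worker(i):
    # ... work before the barrier would go here ...
    barrier.wait()                 # blocks until the last thread arrives
    after.append(i)                # only runs once everyone has arrived

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert sorted(after) == [0, 1, 2, 3]
```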
74
Quiz

What is the difference between a monitor and a semaphore?
    Why might you prefer one over the other?
How do the wait/signal methods of a condition variable differ from the up/down methods of a semaphore?
What is the difference between Hoare and Mesa semantics for condition variables?
    What implications does this difference have for code surrounding a wait() call?
75
Scheduling
(Diagram: process state model with states New Process, Ready, Running, Blocked, and Termination)
Process state model
Responsibility of the Operating System
76
CPU scheduling criteria

CPU Utilization – how busy is the CPU?
Throughput – how many jobs finished/unit time?
Turnaround Time – how long from job submission to job termination?
Response Time – how long (on average) does it take to get a “response” from a “stimulus”?
Missed deadlines – were any deadlines missed?
77
Scheduler options

Priorities
    May use priorities to determine who runs next: amount of memory, order of arrival, etc.

Dynamic vs. Static algorithms
    Dynamic algorithms alter the priority of the tasks while they are in the system (possibly with feedback)
    Static algorithms typically assign a fixed priority when the job is initially started.

Preemptive vs. Nonpreemptive
    Preemptive systems allow the task to be interrupted at any time so that the O.S. can take over again.
78
Scheduling policies
First-Come, First-Served (FIFO)
Shortest Job First (non-preemptive)
Shortest Job First (with preemption)
Round-Robin Scheduling
Priority Scheduling
Real-Time Scheduling
79
First-Come, First-Served (FIFO)

Start jobs in the order they arrive (FIFO queue)
Run each job until completion
80
First-Come, First-Served (FIFO)
Process   Arrival Time   Processing Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2
Start jobs in the order they arrive (FIFO queue)Run each job until completion
81
First-Come, First-Served (FIFO)

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2

(Turnaround time = total time taken, from submission to completion)

Start jobs in the order they arrive (FIFO queue)
Run each job until completion
96
First-Come, First-Served (FIFO)

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0               3             0           3
   2           2               6             1           7
   3           4               4             5           9
   4           6               5             7          12
   5           8               2            10          12

(Turnaround time = total time taken, from submission to completion)

Start jobs in the order they arrive (FIFO queue)
Run each job until completion
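The delay and turnaround columns can be reproduced with a few lines of Python (the job table is the one from the slide):

```python
# (arrival, processing) for processes 1..5, from the slide's table
jobs = [(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]

t = 0
delays, turnarounds = [], []
for arrival, burst in jobs:        # FIFO: run in arrival order, to completion
    start = max(t, arrival)        # wait for the CPU (and for the job to arrive)
    delays.append(start - arrival)
    t = start + burst
    turnarounds.append(t - arrival)

assert delays == [0, 1, 5, 7, 10]
assert turnarounds == [3, 7, 9, 12, 12]
```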
97
Shortest Job First

Select the job with the shortest (expected) running time
Non-Preemptive
98
Shortest Job First

Process   Arrival Time   Processing Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2

Same Job Mix

Select the job with the shortest (expected) running time
Non-Preemptive
112
Shortest Job First

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0               3             0           3
   2           2               6             1           7
   3           4               4             7          11
   4           6               5             9          14
   5           8               2             1           3

Select the job with the shortest (expected) running time
Non-Preemptive
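The same table can be generated by a small non-preemptive SJF simulation (jobs from the slide):

```python
jobs = [(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]   # (arrival, processing)

t = 0
done = {}
pending = list(enumerate(jobs))
while pending:
    ready = [(b, i, a) for i, (a, b) in pending if a <= t]
    if not ready:
        t = min(a for _, (a, _) in pending)       # idle until the next arrival
        continue
    b, i, a = min(ready)                           # shortest ready job runs...
    t += b                                         # ...to completion (no preemption)
    done[i] = (t - a - b, t - a)                   # (delay, turnaround)
    pending = [(j, job) for j, job in pending if j != i]

assert [done[i] for i in range(5)] == [(0, 3), (1, 7), (7, 11), (9, 14), (1, 3)]
```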
113
Shortest Remaining Time

Process   Arrival Time   Processing Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2

Same Job Mix

Preemptive version of SJF
128
Shortest Remaining Time

Preemptive version of SJF: when a job arrives, the scheduler compares its
processing time with the remaining time of the running job and runs
whichever is shorter.

Same job mix:

[Gantt chart, time 0 to 20]

Process  Arrival Time  Processing Time  Delay  Turnaround Time
   1           0              3           0           3
   2           2              6           7          13
   3           4              4           0           4
   4           6              5           9          14
   5           8              2           0           2
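The SRT table can be checked the same way. This sketch (not from the slides) is a discrete-time simulation: each time unit, the ready job with the least remaining work runs, so a short new arrival preempts a long running job.

```python
# A sketch (not from the slides) that checks the SRT table: each time
# unit, run the ready job with the least remaining work, so a short new
# arrival preempts a long running job. Ties break by arrival order.

def srt(jobs):
    """jobs: list of (pid, arrival, burst). Returns {pid: (delay, turnaround)}."""
    arrival = {pid: a for pid, a, _ in jobs}
    burst = {pid: b for pid, _, b in jobs}
    remaining = dict(burst)
    results, time = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                               # nothing has arrived yet
            time += 1
            continue
        p = min(ready, key=lambda q: (remaining[q], arrival[q]))
        remaining[p] -= 1                           # run p for one time unit
        time += 1
        if remaining[p] == 0:                       # p finished
            del remaining[p]
            turnaround = time - arrival[p]
            results[p] = (turnaround - burst[p], turnaround)
    return results

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(srt(jobs))  # processes 1, 3 and 5 now finish with zero delay
```

Compared to non-preemptive SJF, job 2's turnaround grows (it is repeatedly preempted) while jobs 3 and 5 finish with no delay at all.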
Round-Robin Scheduling

Goal: enable interactivity by limiting the amount of CPU a process can
have at one time.

Time quantum: the amount of time the OS gives a process before
intervening; the "time slice". Typically 1 to 100 ms.
Round-Robin Scheduling

[Gantt chart, time 0 to 20]

Process  Arrival Time  Processing Time  Delay  Turnaround Time
   1           0              3           1           4
   2           2              6          10          16
   3           4              4           9          13
   4           6              5           9          14
   5           8              2           5           7
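The round-robin table can be reproduced with a short simulation. This sketch makes two assumptions the slide leaves implicit: the time quantum is 1, and a newly arrived process enters the ready list ahead of the process whose quantum just expired.

```python
from collections import deque

# A sketch that reproduces the round-robin table above, assuming a time
# quantum of 1 and that new arrivals enter the ready list ahead of the
# process whose quantum just expired.

def round_robin(jobs, quantum=1):
    """jobs: list of (pid, arrival, burst). Returns {pid: (delay, turnaround)}."""
    arrival = {pid: a for pid, a, _ in jobs}
    burst = {pid: b for pid, _, b in jobs}
    remaining = dict(burst)
    pending = deque(sorted(jobs, key=lambda j: j[1]))  # not yet arrived
    ready, results, time = deque(), {}, 0

    def admit():                                       # move arrivals to ready list
        while pending and pending[0][1] <= time:
            ready.append(pending.popleft()[0])

    while len(results) < len(jobs):
        admit()
        if not ready:                                  # idle until next arrival
            time = pending[0][1]
            continue
        p = ready.popleft()
        run = min(quantum, remaining[p])
        time += run
        remaining[p] -= run
        admit()                                        # arrivals enter first...
        if remaining[p] > 0:
            ready.append(p)                            # ...then the preempted job
        else:
            turnaround = time - arrival[p]
            results[p] = (turnaround - burst[p], turnaround)
    return results

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(round_robin(jobs))  # matches the table: process 2 has delay 10, turnaround 16
```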
Round-Robin Scheduling

Effectiveness of round-robin depends on:
• The number of jobs, and
• The size of the time quantum.

A large number of jobs means that the time between schedulings of a
single job increases: slow responses.

A larger time quantum also means that the time between schedulings of a
single job increases: slow responses.

A smaller time quantum means higher processing rates, but also more
overhead!
Scheduling in general purpose systems
Priority Scheduling

Assign a priority (number) to each process and schedule processes based
on their priority: higher-priority processes get more CPU time.

Managing priorities:
• Can use "nice" to reduce your priority
• Can periodically adjust a process's priority
  • Prevents starvation of a lower-priority process
  • Can improve performance of I/O-bound processes by basing priority
    on the fraction of the last quantum used
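The periodic adjustment mentioned above is often called aging. This is a hypothetical sketch; the process names and priority numbers are made up for illustration.

```python
# A hypothetical sketch of priority scheduling with aging (the periodic
# adjustment mentioned above). Names and numbers are made up.

def pick_next(procs):
    """procs: dict pid -> priority (bigger = higher). Pick the next to run."""
    return max(procs, key=procs.get)

def age(procs, running):
    """Every process except the one that just ran gains a little
    priority, so a low-priority process cannot starve forever."""
    for pid in procs:
        if pid != running:
            procs[pid] += 1

procs = {"editor": 10, "batch": 1}
for _ in range(12):
    running = pick_next(procs)
    age(procs, running)
    # "editor" runs first, but aging eventually lets "batch" run too
```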
Multi-Level Queue Scheduling

[Diagram: ready queues from high priority to low priority feeding the CPU]

Multiple queues, each with its own priority. Equivalently: each priority
has its own ready queue.

Within each queue: round-robin scheduling.

Simplest approach: a process's priority is fixed and unchanging.
Multi-Level Feedback Queue Scheduling

Problem: fixed priorities are too restrictive; processes exhibit varying
ratios of CPU to I/O time.

Dynamic priorities: priorities are altered over time, as process
behavior changes!

Issue: when do you change the priority of a process, and how often?

Solution: let the amount of CPU used be an indication of how a process
is to be handled:
• Expired time quantum: more processing needed
• Unexpired time quantum: less processing needed

Adjusting quantum and frequency vs. adjusting priority?
Multi-Level Feedback Queue Scheduling

[Diagram: n priority queues feeding the CPU; jobs that expire their
quantum are demoted to a lower queue]

n priority levels, round-robin scheduling within a level.

Quanta increase as priority decreases.

Jobs are demoted to lower priorities if they do not complete within the
current quantum.
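The rules above can be sketched in a few lines. This is a minimal sketch under the slide's assumptions (round-robin within a level, growing quanta, demotion on an expired quantum); the quantum doubling per level is my choice, and for simplicity all jobs arrive at time 0.

```python
from collections import deque

# A minimal MLFQ sketch: round-robin within each level, quanta that
# double as priority drops (an illustrative choice), and demotion when a
# job uses its whole quantum. All jobs arrive at time 0 for simplicity.

def mlfq(jobs, levels=3, base_quantum=1):
    """jobs: list of (pid, burst). Returns {pid: completion time}."""
    queues = [deque() for _ in range(levels)]
    for pid, burst in jobs:
        queues[0].append((pid, burst))          # everyone starts on top
    time, done = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        pid, rem = queues[level].popleft()
        quantum = base_quantum * 2 ** level     # quanta grow downward
        run = min(quantum, rem)
        time += run
        rem -= run
        if rem == 0:
            done[pid] = time                    # unexpired quantum: finished
        else:                                   # expired quantum: demote
            queues[min(level + 1, levels - 1)].append((pid, rem))
    return done

print(mlfq([(1, 3), (2, 1)]))  # the short job 2 finishes before the long job 1
```

Short, interactive-style jobs finish within their first quantum and keep high priority; CPU hogs sink to the bottom queue, which behaves like plain round-robin with a large quantum.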
Multi-Level Feedback Queue Scheduling

Details, details, details...
• Starting priority: high priority vs. low priority?
• Moving between priorities: how long should the time quantum be?
Lottery Scheduling

Scheduler gives each thread some lottery tickets.

To select the next process to run:
• The scheduler randomly selects a lottery number
• The winning process gets to run

Example: there are 100 tickets outstanding
• Thread A gets 50 tickets: 50% of CPU
• Thread B gets 15 tickets: 15% of CPU
• Thread C gets 35 tickets: 35% of CPU

Lottery scheduling is:
• Flexible
• Fair
• Responsive
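A single lottery-scheduling decision can be sketched directly from the example's ticket counts. A thread's chance of winning is proportional to its tickets, so long-run CPU shares approach 50% / 15% / 35%.

```python
import random

# A sketch of one lottery-scheduling decision, using the ticket counts
# from the example above.

def draw_winner(tickets, rng=random):
    """tickets: dict thread -> ticket count. Returns the winning thread."""
    winner = rng.randrange(sum(tickets.values()))  # winning ticket number
    for thread, count in tickets.items():
        if winner < count:
            return thread
        winner -= count                            # skip this thread's tickets

tickets = {"A": 50, "B": 15, "C": 35}
wins = {t: 0 for t in tickets}
for _ in range(10_000):
    wins[draw_winner(tickets)] += 1
print(wins)  # roughly 5000 / 1500 / 3500
```

Changing a thread's share is as simple as handing it more or fewer tickets, which is where the flexibility comes from.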
A Brief Look at Real-Time Systems

Assume processes are relatively periodic, with a fixed amount of work
per period (e.g. sensor systems or multimedia data).

Two "main" types of schedulers:

• Rate-Monotonic schedulers
  • Assign a fixed, unchanging priority to each process
  • No dynamic adjustment of priorities
  • Less aggressive allocation of the processor

• Earliest-Deadline-First schedulers
  • Assign dynamic priorities based upon deadlines
A Brief Look at Real-Time Systems

Real-time systems typically involve several steps that aren't in
traditional systems:

• Admission control
  • All processes must ask for resources ahead of time
  • If sufficient resources exist, the job is "admitted" into the system

• Resource allocation
  • Upon admission, the appropriate resources must be reserved for the task

• Resource enforcement
  • Carry out the resource allocations properly
Rate Monotonic Schedulers

For preemptable, periodic processes (tasks), RMS assigns a fixed
priority to each task, where:
• T = the period of the task
• C = the amount of processing per task period

Example: process P1 with T = 1 second and C = 1/2 second per period.

In RMS scheduling, the question to answer is: what priority should be
assigned to a given task?
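The standard RMS answer to that question (a well-known result, though not spelled out on this slide) is: the shorter the period, the higher the priority. The sketch below shows that rule together with the classic Liu & Layland utilization bound, a sufficient (not necessary) schedulability test; P2's C and T values are made up for illustration.

```python
# RMS priority assignment (shorter period = higher priority) plus the
# Liu & Layland utilization bound: sum(C/T) <= n * (2**(1/n) - 1) is a
# sufficient condition for schedulability. P2's parameters are made up.

def rms_priorities(tasks):
    """tasks: list of (name, C, T). Highest priority first (shortest T)."""
    return [name for name, _, _ in sorted(tasks, key=lambda t: t[2])]

def rms_schedulable(tasks):
    """Sufficient (not necessary) utilization-bound test."""
    n = len(tasks)
    utilization = sum(c / t for _, c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

tasks = [("P1", 0.5, 1.0), ("P2", 0.25, 2.0)]
print(rms_priorities(tasks))   # P1 first: it has the shorter period
print(rms_schedulable(tasks))  # utilization 0.625 <= ~0.828, so True
```

A task set can fail this test and still be schedulable; the bound is deliberately conservative, which is the "less aggressive allocation" noted earlier.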
Rate Monotonic Schedulers

[Timeline diagram: periodic executions of tasks P1 and P2]
Rate Monotonic Schedulers

[Timeline diagrams comparing the two assignments for P1 and P2:
P1's priority > P2's priority, and P2's priority > P1's priority]