1dv012 CPU Scheduling - orion.lnu.se


Transcript of 1dv012 CPU Scheduling - orion.lnu.se

Page 1: 1dv012 CPU Scheduling - orion.lnu.se

1dv012 CPU Scheduling

Morgan Ericsson <[email protected]>

Page 2: 1dv012 CPU Scheduling - orion.lnu.se

Reasonable model

[Figure: different views on multiprogramming, actual vs. conceptual timelines]

Page 3: 1dv012 CPU Scheduling - orion.lnu.se

Process States

Page 4: 1dv012 CPU Scheduling - orion.lnu.se

Scheduling

• Processes can be described as either:

• I/O-bound process – spends more time doing I/O than computations; many short CPU bursts

• CPU-bound process – spends more time doing computations; few very long CPU bursts

Page 5: 1dv012 CPU Scheduling - orion.lnu.se

Scheduling

Page 6: 1dv012 CPU Scheduling - orion.lnu.se

CPU burst

Page 7: 1dv012 CPU Scheduling - orion.lnu.se

Preemption

• Major classification

• Preemptive vs non-preemptive scheduling

• Non-preemptive: scheduling takes place only under cases 1 and 4, i.e. when a process switches from running to waiting, or terminates

• A process switches to a waiting state only as a function of its own behaviour, i.e. when it invokes OS services, or when it terminates

• Preemptive: all other cases

• A process can be “forced” to move to a waiting state

Page 8: 1dv012 CPU Scheduling - orion.lnu.se

Preemption

• Non-preemptive scheduling is “easier”

• Consider access to shared data

• Consider preemption while in kernel mode

• Consider interrupts occurring during crucial OS activities

• Cost of preemption: maintaining a consistent system state when processes are suspended in the midst of critical activity

Page 9: 1dv012 CPU Scheduling - orion.lnu.se

Dispatcher

• Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

• switching context

• switching to user mode

• jumping to the proper location in the user program to restart that program

• Dispatch latency – time it takes for the dispatcher to stop one process and start another running

Page 10: 1dv012 CPU Scheduling - orion.lnu.se

Scheduling Metrics: User

• Performance related

• response time: time it takes to produce the first response

• turnaround time: time spent from the time of “submission” to time of completion

• deadlines: the time within which the program must complete (the policy must maximize percentage of deadlines met)

Page 11: 1dv012 CPU Scheduling - orion.lnu.se

Scheduling Metrics: User

• Other

• predictability: the expectation that a job behaves (and performs) the same way regardless of system load

Page 12: 1dv012 CPU Scheduling - orion.lnu.se

Scheduling Metrics: System

• Performance related

• waiting time: time spent waiting to get the CPU

• throughput: the number of processes completed per unit time (directly affected by the waiting time)

• CPU utilization: percentage of time the CPU is busy

Page 13: 1dv012 CPU Scheduling - orion.lnu.se

Scheduling Metrics: System

• Other

• fairness: no process should suffer starvation

• enforcing priorities: higher-priority processes should not have to wait for lower-priority ones

Page 14: 1dv012 CPU Scheduling - orion.lnu.se

First Come First Served (FCFS)

• Non-preemptive

• Implementation

• a queue of processes

• new processes enter the ready queue at the end

• when a process terminates the CPU is given to the process at the beginning of the queue

• (in practice) when a process blocks it goes to the end of the queue
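A minimal sketch of this queue discipline, assuming processes can be represented by plain names and that admitting, dispatching, and re-queueing are the only operations of interest (the function names are illustrative, not from the slides):

```python
from collections import deque

# Illustrative FCFS ready queue; processes are just names here.
ready = deque()

def admit(process):
    """New processes enter the ready queue at the end."""
    ready.append(process)

def dispatch():
    """The CPU is given to the process at the beginning of the queue."""
    return ready.popleft() if ready else None

def requeue(process):
    """(In practice) a process that blocked goes to the end when it is ready again."""
    ready.append(process)

# P1 arrives first, then P2 and P3:
for p in ("P1", "P2", "P3"):
    admit(p)
print(dispatch())  # P1 runs first, regardless of its burst length
```

A deque keeps both enqueue at the tail and dispatch from the head constant-time, which is why FCFS has such low overhead.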

Page 15: 1dv012 CPU Scheduling - orion.lnu.se

Example

Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

P1 [0-24], P2 [24-27], P3 [27-30]

Page 16: 1dv012 CPU Scheduling - orion.lnu.se

Two Metrics

• Average wait time to finish

• (d1 + (d1+d2) + (d1+d2+d3) + ... + (d1+d2+...+dn)) / n

• Average wait time to start

• (d1 + (d1+d2) + ... + (d1+d2+...+d(n-1))) / n

[Figure: timeline of processes P1, P2, P3, ..., Pi, ..., Pn with burst durations d1, d2, d3, ..., di, ..., dn]
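As a sanity check on these formulas, here is a small sketch that computes both averages from the burst lengths d1..dn taken in the order the processes run (the function name and the use of prefix sums are my own framing, not from the slides):

```python
from itertools import accumulate

def fcfs_metrics(bursts):
    """Average wait-to-finish and wait-to-start for a given FCFS order.

    bursts: CPU burst lengths d1..dn in the order the processes run.
    """
    finish = list(accumulate(bursts))   # d1, d1+d2, ..., d1+...+dn
    start = [0] + finish[:-1]           # each process starts when the previous one finishes
    n = len(bursts)
    return sum(finish) / n, sum(start) / n

print(fcfs_metrics([24, 3, 3]))   # (27.0, 17.0) -- the order P1, P2, P3
print(fcfs_metrics([3, 3, 24]))   # (13.0, 3.0)  -- the order P2, P3, P1
```

The two calls reproduce the analyses on the next two slides.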

Page 17: 1dv012 CPU Scheduling - orion.lnu.se

Analysis

• Total time to finish: 24+3+3 = 30

• Average time to finish: (24 + (24+3) + (24+3+3))/3 = 27

• Average time to start: (24 + (24+3))/3 = 17

Order P1, P2, P3: P1 [0-24], P2 [24-27], P3 [27-30]

Page 18: 1dv012 CPU Scheduling - orion.lnu.se

Analysis

• Total time to finish: 3+3+24 = 30

• Average time to finish: (3 + (3+3) + (3+3+24))/3 = 13

• Average time to start: (3 + (3+3))/3 = 3

Order P2, P3, P1: P2 [0-3], P3 [3-6], P1 [6-30]

Page 19: 1dv012 CPU Scheduling - orion.lnu.se

Performance of FCFS

• Pro:

• very simple code and data structures, hence low overhead

• Con:

• can result in large average waiting times

• convoy effect

• General disadvantage due to lack of preemption

• consider a mix with one long-running CPU-intensive process and several I/O-intensive tasks: the I/O-bound tasks pile up behind the CPU-bound one, leaving the I/O devices idle while it holds the CPU

Page 20: 1dv012 CPU Scheduling - orion.lnu.se

Shortest Job First (SJF)

• Associate with each process the length of its next CPU burst

• Use these lengths to schedule the process with the shortest next CPU burst first

• FCFS is used to break ties
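A minimal sketch of the selection rule, assuming the predicted next bursts are already known; Python's sort is stable, so processes with equal bursts keep their FCFS (arrival) order:

```python
def sjf_order(processes):
    """Non-preemptive SJF: pick the shortest predicted next CPU burst first.

    processes: list of (name, predicted_burst) in arrival (FCFS) order.
    The stable sort means FCFS order breaks ties between equal bursts.
    """
    return sorted(processes, key=lambda p: p[1])

# The example on the next slide: P1=6, P2=8, P3=7, P4=3.
print(sjf_order([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]))
# [('P4', 3), ('P1', 6), ('P3', 7), ('P2', 8)]
```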

Page 21: 1dv012 CPU Scheduling - orion.lnu.se

Example

Process Burst Time

P1 6

P2 8

P3 7

P4 3

SJF order: P4 [0-3], P1 [3-9], P3 [9-16], P2 [16-24]

Page 22: 1dv012 CPU Scheduling - orion.lnu.se

Analysis

• Total time to finish: 6+8+7+3 = 24

• Average time to finish: (3 + (3+6) + (3+6+7) + (3+6+7+8))/4 = 13

• Average time to start: (3 + (3+6) + (3+6+7))/4 = 7

SJF order: P4 [0-3], P1 [3-9], P3 [9-16], P2 [16-24]

Page 23: 1dv012 CPU Scheduling - orion.lnu.se

Performance of SJF

• Pro:

• if times are accurate, SJF gives a minimum average waiting time

• SJF is optimal

• Con:

• it is difficult to estimate CPU burst times

• Interesting as a lower bound on achievable average waiting time

Page 24: 1dv012 CPU Scheduling - orion.lnu.se

Predicting the CPU Burst

• For long-term scheduling

• the user can be “encouraged” to give an estimate

• part of the job submission requirements

Page 25: 1dv012 CPU Scheduling - orion.lnu.se

Predicting the CPU Burst

• For short-term scheduling

• we can attempt to predict the length of the next burst by assuming some locality in CPU bursts and using exponential averaging

• τn+1 = α * Tn + (1 - α) * τn

• τn is the estimated value for the nth CPU burst

• Tn is the actual most recent burst value

• 0 ≤ α ≤ 1, α = 0.5 is commonly used

• the estimate lags the (potentially) sharper transitions of the CPU bursts
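A small sketch of the exponential-averaging predictor; the initial estimate τ0 and the sample burst sequence are assumptions for illustration only:

```python
def predict_bursts(actual_bursts, tau0=10.0, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha * T_n + (1 - alpha) * tau_n.

    tau0 is the initial guess (an assumption here; often a system default).
    Returns the sequence of predictions tau_1, tau_2, ...
    """
    predictions = []
    tau = tau0
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

# With alpha = 0.5 the estimate moves only halfway toward each new burst,
# so it lags sharp transitions, as noted above.
print(predict_bursts([6, 4, 6, 4, 13, 13, 13], tau0=10))
```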

Page 26: 1dv012 CPU Scheduling - orion.lnu.se

Predicting the CPU Burst

Page 27: 1dv012 CPU Scheduling - orion.lnu.se

Modifications to SJF

• Preemptive SJF

• if the shortest estimated CPU burst among all the processes in the ready queue (say this belongs to Pj) is less than the remaining time for the one that is running,

• preempt the currently running job;

• use its remaining time as its next CPU burst and add it to the ready queue

• start process Pj

• Shortest-Remaining-Time-First (SRTF)

• Policy prioritizes jobs with short CPU bursts
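A sketch of preemptive SJF (SRTF) under the simplifying assumptions that burst times are known exactly and that preemption can only happen when a new process arrives; it reproduces the waiting times of the example on the next slide:

```python
import heapq

def srtf(jobs):
    """Shortest-Remaining-Time-First (preemptive SJF), a minimal sketch.

    jobs: list of (name, arrival_time, burst_time), burst times assumed exact.
    Returns {name: waiting_time}, where waiting = completion - arrival - burst.
    """
    jobs = sorted(jobs, key=lambda j: j[1])              # by arrival time
    bursts = {name: burst for name, _, burst in jobs}
    ready, waiting, time, i = [], {}, 0, 0               # ready holds (remaining, arrival, name)
    while i < len(jobs) or ready:
        if not ready:                                    # CPU idle: jump to the next arrival
            time = max(time, jobs[i][1])
        while i < len(jobs) and jobs[i][1] <= time:      # admit everything that has arrived
            name, arrival, burst = jobs[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        remaining, arrival, name = heapq.heappop(ready)  # shortest remaining time first
        next_arrival = jobs[i][1] if i < len(jobs) else float("inf")
        run = min(remaining, next_arrival - time)        # run until done or until a new job may preempt
        time += run
        if remaining > run:
            heapq.heappush(ready, (remaining - run, arrival, name))
        else:
            waiting[name] = time - arrival - bursts[name]
    return waiting

# The example on the next slide: average waiting time = 26/4 = 6.5
w = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w, sum(w.values()) / len(w))
```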

Page 28: 1dv012 CPU Scheduling - orion.lnu.se

Example

Process Arrival Time Burst Time

P1 0 8

P2 1 4

P3 2 9

P4 3 5

SRTF schedule: P1 [0-1], P2 [1-5], P4 [5-10], P1 [10-17], P3 [17-26]

• Average waiting time = ((10-1) + (1-1) + (17-2) + (5-3))/4 = 26/4 = 6.5

Page 29: 1dv012 CPU Scheduling - orion.lnu.se

Priorities: A More General Notion

• Assign a numerical priority to each process

• convention: a smaller number means higher priority

• Examples of priority use

• value of the next CPU burst

• importance of interactive responsiveness

Page 30: 1dv012 CPU Scheduling - orion.lnu.se

Priorities: A More General Notion

• Priorities can be based upon external considerations

• the importance of the user group running the process

• the amount of economic investment that the group might have in the system

• or upon internal considerations

• memory and other needs of the job

• ratio of CPU to I/O burst times

• number of open files etc.

Page 31: 1dv012 CPU Scheduling - orion.lnu.se

Priorities: A More General Notion

• Priority-based scheduling

• assign the CPU to the process with the highest priority

• may be used with or without preemption

• SJF is priority scheduling where priority is the inverse of predicted next CPU burst time

Page 32: 1dv012 CPU Scheduling - orion.lnu.se

Example

Process Burst Time Priority

P1 10 3

P2 1 1

P3 2 4

P4 1 5

P5 5 2

Schedule (highest priority first): P2 [0-1], P5 [1-6], P1 [6-16], P3 [16-18], P4 [18-19]

• Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2
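A sketch of non-preemptive priority scheduling that reproduces this schedule, assuming all processes arrive at time 0 (function name and data layout are my own):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling (smaller number = higher priority).

    processes: list of (name, burst, priority), all assumed to arrive at time 0.
    Returns the run order and the average waiting time.
    """
    order = sorted(processes, key=lambda p: p[2])   # highest priority first
    time, total_wait = 0, 0
    for name, burst, _ in order:
        total_wait += time                          # this process waited until 'time'
        time += burst
    return [p[0] for p in order], total_wait / len(processes)

print(priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                         ("P4", 1, 5), ("P5", 5, 2)]))
# (['P2', 'P5', 'P1', 'P3', 'P4'], 8.2)
```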

Page 33: 1dv012 CPU Scheduling - orion.lnu.se

Disadvantages of Priority Schemes

• A process can continuously be overtaken by higher priority processes arriving later

• can lead to starvation

• may lead to better overall performance

• but not from the point of view of the process in question

• happens in real OSes unless special measures are taken

Page 34: 1dv012 CPU Scheduling - orion.lnu.se

Disadvantages of Priority Schemes

• Common solution

• a process' priority goes up with its age

• FCFS is used to break ties between processes with equal priorities

• a process will not wait forever

• given enough time in the ready queue, its priority will eventually be the highest

• What should happen if a low-priority process holds resources required by a high-priority process? (priority inversion)

Page 35: 1dv012 CPU Scheduling - orion.lnu.se

Example of Priority Aging: Unix

• Priority goes up with lack of CPU usage

• process accumulates CPU usage

• Priority is recalculated every time unit (~1 second):

• priority = CPUusage + basepriority

• CPUusage = CPUusage / 2 (accumulated usage decays by half at each recalculation)

• A smaller number implies a higher priority

• basepriority is defined by the user (predefined range)
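A sketch of one recalculation step; the slide leaves the exact ordering of the two updates open, so here the usage is decayed first and the priority computed from the decayed value, and the starting numbers in the example run are purely illustrative:

```python
def recalculate(cpu_usage, base_priority):
    """One recalculation step (run roughly every second).

    The accumulated CPU usage decays by half, and the new priority is the
    decayed usage plus the user's base priority (smaller = higher priority).
    """
    cpu_usage = cpu_usage / 2
    priority = cpu_usage + base_priority
    return cpu_usage, priority

# A process that stops using the CPU sees its usage decay toward 0, so its
# priority number shrinks toward base_priority: it "ages" back up.
usage, base = 60.0, 20          # illustrative starting values
for second in range(5):
    usage, priority = recalculate(usage, base)
    print(second, usage, priority)
```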

Page 36: 1dv012 CPU Scheduling - orion.lnu.se

Round Robin (RR)

• Algorithm

• choose a fixed time unit, called a quantum

• allocate CPU time in quanta

• preempt the process when it has used its quantum

• typically, FCFS is used as a sequencing policy

Page 37: 1dv012 CPU Scheduling - orion.lnu.se

Round Robin (RR)

• Each new process is added at the end of the ready queue

• When a process blocks or is preempted, it goes to the end of the ready queue

• A strictly preemptive policy

• very common choice for scheduling interactive systems

Page 38: 1dv012 CPU Scheduling - orion.lnu.se

Round Robin (RR)

Page 39: 1dv012 CPU Scheduling - orion.lnu.se

Example

Process Burst Time

P1 24

P2 3

P3 3

With quantum q = 4: P1 [0-4], P2 [4-7], P3 [7-10], P1 [10-14], P1 [14-18], P1 [18-22], P1 [22-26], P1 [26-30]

Typically, higher average turnaround than SJF, but better response
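A sketch of the Round Robin loop, assuming all processes are in the ready queue at time 0 and ignoring context-switch cost; with q = 4 it reproduces the schedule above:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin sketch: all processes in the ready queue at time 0.

    processes: list of (name, burst). Returns the Gantt chart as
    (name, start, end) slices.
    """
    ready = deque(processes)
    gantt, time = [], 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        gantt.append((name, time, time + run))
        time += run
        if remaining > run:                      # preempted: back to the end of the queue
            ready.append((name, remaining - run))
    return gantt

for name, start, end in round_robin([("P1", 24), ("P2", 3), ("P3", 3)], 4):
    print(name, start, end)
# P1 0-4, P2 4-7, P3 7-10, then P1 runs 10-30 in 4-unit slices
```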

Page 40: 1dv012 CPU Scheduling - orion.lnu.se

Quantum Size

• Quantum size q is critical

• Affects waiting and turnaround times

• if q is the quantum size and there are n processes in the ready queue,

• each process waits at most (n-1) * q time units for its next quantum

• As q increases, we approach FCFS scheduling

• As q decreases the rate of context switches increases

• the average wait time goes down; in the limit (processor sharing) each of the n processes appears to run on its own processor at 1/n the speed of the original, ignoring context-switch overhead
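For example (illustrative numbers): with n = 5 processes in the ready queue and q = 20 ms, a process waits at most (5 - 1) * 20 = 80 ms for its next quantum.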

Page 41: 1dv012 CPU Scheduling - orion.lnu.se

Quantum Size

Page 42: 1dv012 CPU Scheduling - orion.lnu.se

Quantum Size

Rule of thumb: 80% of CPU bursts should be shorter than q

Page 43: 1dv012 CPU Scheduling - orion.lnu.se

Multilevel Queue Scheduling

• Processes are partitioned into groups based on static criteria

• background (batch)

• foreground (interactive)

• All the processes in a group share the same scheduling strategy and have their own (family of) queues

• a different scheduling algorithm can be used for each group

• background: FCFS

• foreground: Round Robin

Page 44: 1dv012 CPU Scheduling - orion.lnu.se

Multilevel Queue Scheduling

• Need to schedule the CPU between the groups as well

• fixed-priority: e.g., serve all from foreground, then from background

• possibility of starvation

• time slice: each group gets a certain fraction of the CPU

• e.g., 80% to foreground in RR, 20% to background in FCFS

Page 45: 1dv012 CPU Scheduling - orion.lnu.se

Multilevel Queue Scheduling

Page 46: 1dv012 CPU Scheduling - orion.lnu.se

Multilevel Feedback Queues

• Provide a mechanism for jobs to move between queues

• aging can be implemented this way

• Complete specification

• queues: number, scheduling algorithms (within and across queues)

• promotion and demotion policies

• which queue should a process enter when it needs service?

Page 47: 1dv012 CPU Scheduling - orion.lnu.se

Example

• 3 queues: Q0 (FCFS, 8ms), Q1 (FCFS, 16ms), Q2 (FCFS)

• scheduling:

• process starts in Q0, gets 8ms of the CPU (FCFS)

• if not done, it moves to Q1 where it gets a further 16 ms of the CPU (FCFS)

• if still not done, it moves to Q2 and stays there till completed
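A sketch of this three-queue policy, assuming all processes are present at time 0 and leaving out preemption of a lower queue when new work arrives in a higher one; the per-queue allowances follow the 8 ms / 16 ms / run-to-completion rule above:

```python
from collections import deque

def mlfq(processes):
    """Three-queue example: Q0 (8 ms), Q1 (16 ms), Q2 (run to completion).

    processes: list of (name, burst). FCFS within each queue; a process that
    exhausts its allowance is demoted to the next queue.
    Returns (name, start, end) slices.
    """
    queues = [deque(processes), deque(), deque()]
    allowance = [8, 16, float("inf")]
    gantt, time = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(allowance[level], remaining)
        gantt.append((name, time, time + run))
        time += run
        if remaining > run:                                  # not done: demote one level
            queues[level + 1].append((name, remaining - run))
    return gantt

print(mlfq([("A", 30), ("B", 6)]))
# A gets 8 ms in Q0, B finishes in Q0, A gets 16 ms in Q1, then the rest in Q2
```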

Page 48: 1dv012 CPU Scheduling - orion.lnu.se

Example

Page 49: 1dv012 CPU Scheduling - orion.lnu.se

Choosing a Scheduling Approach

• Characterize system components as random processes

• random variables for CPU, I/O bursts

• typically have exponential distributions

• random variable for process arrival time

• we are interested in the average values of metrics

• e.g., average throughput, utilization

Page 50: 1dv012 CPU Scheduling - orion.lnu.se

Modeling

• System modeled as a network of servers

• Queueing theory:

• for a queue, Little’s formula is n = λ W

• where λ is the arrival rate; W is the average waiting time; and n is the average queue length

• Has limitations on the degree to which realistic systems can be modeled and analyzed
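For example (illustrative numbers): if processes arrive at a rate of λ = 7 per second and spend on average W = 2 seconds waiting, the queue holds on average n = 7 * 2 = 14 processes.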

Page 51: 1dv012 CPU Scheduling - orion.lnu.se

Simulations

• When analytical solutions are infeasible, the models are simulated

• distribution-driven, as in the previous case,

• or actual (possibly synthetic) workloads

• Leads to more realistic predictions than the above forms of analysis

• Computationally more expensive than either of the above approaches

Page 52: 1dv012 CPU Scheduling - orion.lnu.se

Actual Measurements

• Instrument a real system and capture the extra information

• Can make very precise determinations

• Very cumbersome and time-consuming to achieve

• a problem is that observation changes the system