
1GWU CS 259 Brad Taylor Spring 2004

Systems Programming

Meeting 10:

Scheduling, Programming Tools & Wire Project Briefs

2GWU CS 259 Brad Taylor Spring 2004

Objectives

• Processor Scheduling Methods
• Programming Tools
• Wire Project Presentations
• Next week: Final review
• Assignment 4: due today
• Written Project Reports: due next week
• Final: 5/3/04, due noon 5/4/04 (posted early)

3GWU CS 259 Brad Taylor Spring 2004

CPU Scheduling

• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-Processor Scheduling
• Real-Time Scheduling
• Algorithm Evaluation

4GWU CS 259 Brad Taylor Spring 2004

Scheduling -- Overview

Simple: put a variety of jobs on N processors

(Diagram: jobs dispatched across processors P1, P2, …, PN)

5GWU CS 259 Brad Taylor Spring 2004

Basic Concepts

• Maximize processor utilization, keep I/O devices busy, and maximize the throughput obtained with multiprogramming (jobs/time): a recurrent OS scheduling theme

• Every process (or thread) has an associated scheduling priority: larger numbers = lower priority

• Fairness: negative feedback in CPU (kernel thread) scheduling makes it difficult for a single process (thread) to hog time (process/thread aging prevents starvation)

• Minimize latency; metrics:
– response time (user time ~100 ± 50 ms)
– job completion time (asap)

• CPU–I/O Burst Cycle Distribution

6GWU CS 259 Brad Taylor Spring 2004

CPU–I/O Burst Cycle Distribution

• Alternating sequence of CPU and I/O bursts

• Process execution is a cycle of CPU execution and I/O wait

• A process relinquishing the CPU goes to sleep on an event

• When the event occurs, a system process that knows about it calls wakeup for the event address: all processes that slept on that address are put into the ready queue to run (see the sketch below)
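A rough sketch of the sleep/wakeup mechanism described above, in the spirit of classic Unix kernels; this is illustrative only, and struct proc, ready_enqueue, and schedule are assumed names rather than a real kernel API.

    /* Illustrative sleep()/wakeup(): sleepers are keyed by an event address. */
    #include <stddef.h>

    enum pstate { READY, RUNNING, BLOCKED };

    struct proc {
        enum pstate  state;
        const void  *chan;              /* event address slept on */
        struct proc *next;
    };

    static struct proc *blocked_list;           /* processes asleep on some event    */
    extern void ready_enqueue(struct proc *p);  /* assumed: put proc on ready queue  */
    extern void schedule(void);                 /* assumed: dispatch next READY proc */

    void sleep_on(struct proc *p, const void *chan)
    {
        p->state = BLOCKED;
        p->chan  = chan;
        p->next  = blocked_list;
        blocked_list = p;
        schedule();                     /* relinquish the CPU */
    }

    void wakeup(const void *chan)       /* called when the event occurs */
    {
        struct proc **pp = &blocked_list;
        while (*pp) {
            struct proc *p = *pp;
            if (p->chan == chan) {      /* every process that slept on this address */
                *pp = p->next;
                p->state = READY;
                ready_enqueue(p);       /* ...goes back onto the ready queue */
            } else {
                pp = &p->next;
            }
        }
    }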

7GWU CS 259 Brad Taylor Spring 2004

Histogram of CPU-burst Times

8GWU CS 259 Brad Taylor Spring 2004

Problem Cases

• I/O goes idle due to job-type blindness
• Optimization:

– Type “A” jobs favored over “B”
– Suppose: lots of A's
– Result: B's starve

• Interactive process trapped behind others => response time sucks for no reason

• Priorities:
– A depends on B
– A's priority > B's
– Result: B never runs

9GWU CS 259 Brad Taylor Spring 2004

Processor Scheduler

• Selects from among the processes (or threads) ready to execute in memory; allocates the CPU to one

• CPU scheduling decisions occur as a process:
– 1. Switches from running to I/O blocked (aka waiting) state
– 2. Switches from running to ready state (interrupt)
– 3. Switches from blocked to ready
– 4. Terminates

• Scheduling under 1 and 4 is nonpreemptive

• All other scheduling is preemptive … why? Explain …

• Selection: Scheduler Dispatch

(Diagram: process states start → ready → running; transitions 1 running → blocked, 2 running → ready, 3 blocked → ready, 4 running → terminate; SD = Scheduler Dispatch selects from the ready queue)

10GWU CS 259 Brad Taylor Spring 2004

Dispatcher

• Dispatcher module gives CPU control to the process selected by the short-term scheduler; involves:
– Context switch
– Switch to user mode
– Jump to the proper user program location
– Restart the process

• Dispatch latency – time for the dispatcher to stop one process & start another running
– Includes nonpreemptive kernel processing and context switch time
– Does not include interrupt processing

11GWU CS 259 Brad Taylor Spring 2004

Scheduling Optimization Criteria

• Maximize CPU utilization: keep the processor as busy as possible
• Maximize throughput: # of processes that complete execution per time unit
• Minimize turnaround time: amount of time to execute a particular process
• Minimize waiting time: amount of time a process spends waiting in the ready queue
• Minimize response time: amount of time from submitting a request until the first response is produced, not the final output

12GWU CS 259 Brad Taylor Spring 2004

First-Come, First-Served Scheduling (FCFS)

• Run jobs in the order that they arrive

  Process   Burst Time
  P1        24
  P2         3
  P3         3

• Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

  P1 [0–24] | P2 [24–27] | P3 [27–30]

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
• Advantage: simple (a worked sketch follows below)
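A minimal C sketch (not from the slides) that replays this example, assuming all three jobs arrive at time 0 and run in the listed order; it prints the average waiting time of 17.

    #include <stdio.h>

    int main(void)
    {
        int burst[] = { 24, 3, 3 };           /* P1, P2, P3 */
        int n = 3, start = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += start;              /* FCFS: waiting time = start time */
            start += burst[i];
        }
        printf("average waiting time = %.1f\n", (double)total_wait / n);  /* 17.0 */
        return 0;
    }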

13GWU CS 259 Brad Taylor Spring 2004

FCFS Scheduling (Cont.)

• Suppose that the processes arrive in the order P2, P3, P1
• The Gantt chart for the schedule is:

  P2 [0–3] | P3 [3–6] | P1 [6–30]

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect: a short process stuck behind a long process

14GWU CS 259 Brad Taylor Spring 2004

FCFS Convoy effect

• A CPU-bound job holds the processor until it finishes or causes an I/O burst (a rare occurrence for a CPU-bound process)
– Long periods when no I/O requests are issued and the CPU is monopolized
– Result: poor I/O device utilization
• Example: one CPU-bound job, many I/O-bound jobs
» CPU-bound job runs (I/O devices idle)
» CPU-bound job blocks
» I/O-bound job(s) run, quickly block on I/O
» CPU-bound job runs again
» I/O completes
» CPU-bound job still runs while I/O devices idle (continues…)

15GWU CS 259 Brad Taylor Spring 2004

Shortest-Job-First Scheduling (SJF)

• Associate the length of the next CPU burst with each process; schedule the process with the shortest time
• Two schemes:
– Nonpreemptive – once the CPU is given to a process it is not preempted until it completes its burst
– Preemptive – preempt if a new process arrives with a CPU burst length less than the remaining time of the currently executing process; the Shortest-Remaining-Time-First (SRTF) scheme
• SJF is optimal – it gives the minimum average waiting time for a given set of processes

16GWU CS 259 Brad Taylor Spring 2004

Preemptive SJF Example

  Process   Arrival Time   Burst Time
  P1        0.0            7
  P2        2.0            4
  P3        4.0            1
  P4        5.0            4

SJF (preemptive) Gantt chart:

  P1 [0–2] | P2 [2–4] | P3 [4–5] | P4 [5–7] | P2 [7–11] | P1 [11–16]

Average waiting time: (9 + 1 + 0 + 2)/4 = 3 (see the simulation sketch below)
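A small tick-by-tick simulation of the preemptive SJF (SRTF) policy on this example. It is an illustration of the policy, not kernel code; it reproduces the average waiting time of 3.

    #include <stdio.h>

    int main(void)
    {
        int arrival[] = { 0, 2, 4, 5 };       /* P1..P4 */
        int burst[]   = { 7, 4, 1, 4 };
        int remain[]  = { 7, 4, 1, 4 };
        int finish[4], n = 4, done = 0, t = 0;

        while (done < n) {
            int best = -1;                    /* shortest remaining time among arrived jobs */
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remain[i] > 0 &&
                    (best < 0 || remain[i] < remain[best]))
                    best = i;
            if (best < 0) { t++; continue; }  /* nothing ready: idle tick */
            remain[best]--;                   /* run the chosen job for one tick */
            t++;
            if (remain[best] == 0) { finish[best] = t; done++; }
        }

        int total_wait = 0;
        for (int i = 0; i < n; i++)
            total_wait += finish[i] - arrival[i] - burst[i];
        printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 3.00 */
        return 0;
    }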

17GWU CS 259 Brad Taylor Spring 2004

Determining Length of Next CPU Burst

• (Obviously) can only estimate the length
• Use exponential averaging of the lengths of previous CPU bursts
• Effects of varying α:
– α = 0: recent history does not count
– α = 1: only the last CPU burst counts
– Expanding the recurrence shows that, since α and (1 − α) are both ≤ 1, each successive term has less weight than its predecessor

Definitions:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Recursive definition: τ_{n+1} = α · t_n + (1 − α) · τ_n
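A one-function C sketch of the recurrence above; α = 0.5 in the usage comment is only a common illustrative choice, not something the slide prescribes.

    /* tau: previous prediction, t: measured length of the burst just completed */
    double predict_next_burst(double tau, double t, double alpha)
    {
        return alpha * t + (1.0 - alpha) * tau;   /* tau(n+1) = a*t(n) + (1-a)*tau(n) */
    }

    /* usage (illustrative): tau = predict_next_burst(tau, measured_burst, 0.5); */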

18GWU CS 259 Brad Taylor Spring 2004

Prediction of the Length of the Next CPU Burst

19GWU CS 259 Brad Taylor Spring 2004

Priority Scheduling

• Associate a priority (smallest integer = highest priority) with each process (or thread)
• The CPU is allocated to the process with the highest priority
– Preemptive
– Nonpreemptive
• SJF is a priority scheduling algorithm where the priority is the predicted next CPU burst time
• Starvation problem: low-priority processes may never execute
• Aging solution: as time progresses, increase the priority of waiting processes (see the sketch below)
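A hedged sketch of aging: periodically boost the priority of processes that have waited too long, so low-priority jobs cannot starve. The struct, field names, and the 100-tick threshold are illustrative assumptions, not from the slides.

    #define MIN_PRIO 0                        /* smallest number = highest priority */

    struct pcb { int priority; int wait_ticks; };

    /* called once per scheduling tick over the ready queue */
    void age_waiting(struct pcb *ready, int n)
    {
        for (int i = 0; i < n; i++) {
            if (++ready[i].wait_ticks >= 100 && ready[i].priority > MIN_PRIO) {
                ready[i].priority--;          /* raise priority one level */
                ready[i].wait_ticks = 0;
            }
        }
    }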

20GWU CS 259 Brad Taylor Spring 2004

Round Robin Scheduling (RR)

• Each process gets a small CPU time allocation, a time quantum, ~10–100 milliseconds
• A process is preempted when its quantum elapses and is added to the end of the ready queue
• Given n processes in the ready queue & time quantum q, each process gets 1/n of the CPU time in chunks of at most q
• No process waits longer than (n − 1)·q
• Performance
– q large ⇒ FIFO
– q small ⇒ q must be large with respect to the context switch time, otherwise the overhead is too high

21GWU CS 259 Brad Taylor Spring 2004

Round Robin Example: Time Quantum = 20 ms

  Process   Burst Time
  P1        53
  P2        17
  P3        68
  P4        24

The Gantt chart is:

  P1 [0–20] | P2 [20–37] | P3 [37–57] | P4 [57–77] | P1 [77–97] | P3 [97–117] | P4 [117–121] | P1 [121–134] | P3 [134–154] | P3 [154–162]

Typically, longer average turnaround than SJF, but better response (a replay sketch follows below)
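A C sketch that replays this example with a simple circular scan, which is equivalent to a FIFO ready queue when every job is ready at time 0; it prints the same dispatch intervals as the Gantt chart above.

    #include <stdio.h>

    int main(void)
    {
        int burst[] = { 53, 17, 68, 24 };     /* P1..P4, all assumed ready at t = 0 */
        int n = 4, q = 20, t = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;
                int run = burst[i] < q ? burst[i] : q;   /* run at most one quantum */
                printf("t=%3d..%3d  P%d\n", t, t + run, i + 1);
                t += run;
                burst[i] -= run;
                if (burst[i] == 0) left--;
            }
        }
        return 0;
    }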

22GWU CS 259 Brad Taylor Spring 2004

Time Quantum and Context Switch Time

• Think about long vs. short traffic lights
– Which leads to shorter waits?
– Which leads to less congestion?

23GWU CS 259 Brad Taylor Spring 2004

Turnaround Time Varies With The Time Quantum

24GWU CS 259 Brad Taylor Spring 2004

Multilevel Queue

• Ready queue partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR
– background – FCFS
• Scheduling is needed between the queues; choices:
– Fixed priority scheduling (serve all from foreground, then from background): starvation possible
– Time slice (each queue gets a certain amount of CPU time that it schedules recursively); e.g., 80% to foreground (RR) & 20% to background (FCFS)

25GWU CS 259 Brad Taylor Spring 2004

Multilevel Queue Scheduling

26GWU CS 259 Brad Taylor Spring 2004

Multilevel Feedback Queue

• A process often moves between the various queues; implement aging this way!
• A multilevel-feedback-queue scheduler is defined by the following parameters:
– number of queues
– scheduling algorithm for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter when it needs service

27GWU CS 259 Brad Taylor Spring 2004

Simple Multilevel Queue

• Attacks both the efficiency and the response-time problems
– Efficiency: long time quanta = low switching overhead
– Response time: run quickly after becoming unblocked
• Priority queue organization: a ready queue for each priority level
– process created: give it high priority and a short time slice
– if the process uses up its time slice without blocking:
» priority = priority - 1; time_slice = time_slice * 2; (see the sketch below)
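A sketch of the demotion rule in the last sub-bullet; struct and function names are illustrative, not from any particular kernel.

    struct task {
        int priority;       /* higher number = higher (more interactive) priority */
        int time_slice;     /* in ticks */
    };

    /* called when a task runs through its whole slice without blocking */
    void quantum_expired(struct task *t)
    {
        if (t->priority > 0) {
            t->priority  -= 1;      /* looks CPU-bound: drop one level */
            t->time_slice *= 2;     /* ...but give it a longer, cheaper-to-switch slice */
        }
    }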


28GWU CS 259 Brad Taylor Spring 2004

Multilevel Feedback Queue Example

• Three queues:
– Q0 – time quantum 8 ms, FCFS
– Q1 – time quantum 16 ms, FCFS
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0
– When it gains the CPU, the job receives 8 ms
– If it does not finish, the job is moved to queue Q1
– At Q1 the job is again served FCFS and receives 16 additional ms
– If it still does not complete, it is preempted & moved to queue Q2

29GWU CS 259 Brad Taylor Spring 2004

Multiple-Processor Scheduling

• CPU scheduling is more complex when multiple CPUs are available
• Homogeneous processors within the multiprocessor ⇒ symmetric multiprocessing
• Load-sharing algorithms for processes & threads
• Asymmetric multiprocessing – only one processor accesses system data structures, alleviating the need for data sharing

30GWU CS 259 Brad Taylor Spring 2004

Parallel Systems: Gang Scheduling

• N independent processes: load-balance
– Run each process on the next CPU (with some affinity)
• N cooperating processes: run at the same time
– Cluster into groups, schedule as a unit
– Can be much faster:
» Share caches
» No context switching to communicate


31GWU CS 259 Brad Taylor Spring 2004

Distributed System Load Balancing

• Large system of independent nodes
• Desire to run a job on a lightly loaded node
– Querying every node is too expensive
• Instead: randomly pick one
– (used by lots of internet servers)
• Best choice? Then randomly pick another one!
– Send the job to the shorter run queue (see the sketch below)
– Result? Really close to optimal (w/ a few assumptions ;-)
– Exponential convergence = picking 3 doesn't gain you much
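A sketch of the "pick two at random, send the job to the shorter run queue" rule; queue_len() stands in for whatever cheap load query the system provides and is an assumed helper, not a real API.

    #include <stdlib.h>

    extern int queue_len(int node);   /* assumed: current run-queue length of a node */

    int pick_node(int nnodes)
    {
        int a = rand() % nnodes;      /* first random candidate  */
        int b = rand() % nnodes;      /* second random candidate */
        return queue_len(a) <= queue_len(b) ? a : b;
    }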

32GWU CS 259 Brad Taylor Spring 2004

Real-Time Scheduling

• Hard real-time systems
– Guarantee (deterministic)
– Required to complete a critical task within a guaranteed amount of time
– Resource reservation
– Memory: non-deterministic VM & garbage collection remain unsolved challenges
– Dedicated (reserved) hardware
• Soft real-time computing
– Best effort
– Critical processes given priority over regular ones
– Highest priority not aged
– Achievable on general-purpose hardware / systems

33GWU CS 259 Brad Taylor Spring 2004

Real-Time Dispatch Latency

34GWU CS 259 Brad Taylor Spring 2004

Evaluation of CPU Schedulers by Simulation

35GWU CS 259 Brad Taylor Spring 2004

Solaris 2 Scheduling

36GWU CS 259 Brad Taylor Spring 2004

Scheduling Summary

• In principle, scheduling decisions are arbitrary, since a given system should produce the same results in any event
– ‘Good enough’: rare that “the best” process can be calculated
• Unfortunately, algorithms have a strong effect on system overhead, efficiency, and response time
• The best schemes are adaptive; the absolute best requires predicting the future (seeking fortune tellers ;)
– Most current algorithms tend to give the highest priority to the processes that need the least!
– Scheduling has gotten *increasingly* ad hoc over the years: 1960s papers were very math-heavy, now it's mostly “tweak and see”

37GWU CS 259 Brad Taylor Spring 2004

Programming Tools

• make, automake
• libtool
• cvs
• autoconf
• GOOGLE

38GWU CS 259 Brad Taylor Spring 2004

Project Presentations

Group I: “Wire Socket”

Dan, Ricardo & Kamal

39GWU CS 259 Brad Taylor Spring 2004

Project Presentations

Group II: “Wire Named Pipes”

Clayton, Jason

40GWU CS 259 Brad Taylor Spring 2004

Project Presentations

Group III: “Wire Shared Memory, Semaphores & Futexes”

Brooke, Nush, Ram