CE01000-3 Operating Systems
Lecture 7: Threads & Introduction to CPU Scheduling
Timetable change for this week only: Group 1 Tuesday 12-2pm in K106; Group 2 Friday 11am-1pm in K006; Group 3 Thursday 11am-1pm in K006.
Overview of lecture
In this lecture we will be looking at:
- What is a thread?
- Thread types
- The CPU/IO burst cycle
- CPU scheduling - preemptive & nonpreemptive
- The dispatcher
- Scheduling criteria
- First Come First Served (FCFS) algorithm
- Shortest Job First (SJF) algorithm
Threads
Threads - analogy
Analogy: a process is like a manual of procedures (code), sets of files and paper (memory), and other resources. The CPU is like a person who carries out (executes) the instructions in the manual of procedures. The CPU (person) may be 'context switched' from doing one task to doing another.
Threads - analogy (Cont.)
A thread consists of a bookmark in the manual of procedures (a program counter value) and a pad of paper that holds the information currently being used (register and stack values). It is possible for a single process to have a number of bookmarks in the manual, with a pad of paper associated with each bookmark (a number of threads within a process).
Threads - analogy (Cont.)
The person (CPU) could then switch from doing one thing in the manual of procedures (executing one thread) to doing another thing somewhere else (executing another thread).
This switching between threads is different from context switching between processes - it is quicker to switch between threads within a process.
Threads
A thread exists as the current execution state of a process, consisting of: program counter, processor register values and stack space. It is called a thread because of the analogy between a thread and a sequence of executed instructions (imagine drawing a line through each line of instructions in the manual of procedures (code) as it is executed - you get a thread (line) through the manual (code)).
Threads (Cont.)
A thread is often called a lightweight process. There can be multiple threads associated with a single process, and each thread in a process shares the following with its peer threads: code section, data section, and operating-system resources. All the threads collectively form a task.
Threads (Cont.)
A traditional process is equivalent to a task with one thread, i.e. processes used to have only a single thread.
Overhead of switching between processes is expensive especially with more complex operating systems - threads reduce switching overhead and improve granularity of concurrent operation
Threads (Cont.)
Example in use: in a multithreaded task, while one server thread is blocked and waiting, a second thread in the same task can run. Cooperation of multiple threads in the same job gives higher throughput and improved performance. Threads provide a mechanism that allows sequential processes to make blocking system calls while also achieving parallelism.
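As a small sketch of this idea (not from the slides; using Python's standard threading module, with illustrative function names), one thread can block on a simulated I/O wait while a peer thread in the same process keeps computing, and both share the same data:

```python
import threading
import time

results = []  # shared data section: all threads in the process see it

def io_worker():
    # Simulate a blocking system call (e.g. waiting for I/O to complete)
    time.sleep(0.1)
    results.append("io done")

def compute_worker():
    # CPU-bound work can proceed while the other thread is blocked
    total = sum(range(1000))
    results.append(total)

t1 = threading.Thread(target=io_worker)
t2 = threading.Thread(target=compute_worker)
t1.start()
t2.start()
t1.join()
t2.join()

print(results)  # compute_worker typically finishes while io_worker sleeps
```

Both workers append to the same list without any copying, because threads in one process share the data section; two separate processes would need explicit inter-process communication to do this.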
Thread types
There are 2 different thread types:
Kernel-supported threads (e.g. Mach and OS/2) - the kernel of the OS sees threads and manages switching between them, i.e. in terms of the analogy, the boss (OS) tells the person (CPU) which thread in the process to do next.
Thread types (Cont.)
User-level threads - supported above the kernel via a set of library calls at the user level. The kernel sees only the process as a whole and is completely unaware of any threads, i.e. in terms of the analogy, the manual of procedures (user code) tells the person (CPU) to stop the current thread and start another (using a library call to switch threads).
Introduction to CPU Scheduling
Topics:
- CPU-I/O burst cycle
- Preemptive and nonpreemptive scheduling; the dispatcher
- Scheduling criteria
- Scheduling algorithms - some this lecture, the rest next lecture. This lecture: First Come First Served (FCFS) and Shortest Job First (SJF).
CPU-I/O Burst Cycle
CPU-I/O Burst Cycle (Cont.)
Process execution consists of a cycle of CPU execution and I/O wait.
A CPU burst is the length of time a process needs to use the CPU before it next makes a system call (normally a request for I/O).
An I/O burst is the length of time a process spends waiting for I/O to complete.
Histogram of CPU-burst Times
Typical CPU burst distribution
CPU Scheduler
Allocates the CPU to one of the processes that are ready to execute (in the ready queue).
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g. on an I/O request)
2. Terminates
3. Switches from waiting to ready state (e.g. on I/O completion)
4. Switches from running to ready state (e.g. on a timer interrupt)
CPU Scheduler (Cont.)
If scheduling occurs only when 1 and 2 happen, it is called nonpreemptive - a process keeps the CPU until it voluntarily releases it (process termination or a request for I/O).
If scheduling also occurs when 3 and 4 happen, it is called preemptive - the CPU can be taken away from a process by the OS (e.g. on an external I/O interrupt or a timer interrupt).
Dispatcher
The dispatcher gives control of the CPU to the process selected by the short-term scheduler; this involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program (i.e. the last action is to set the program counter)
Dispatcher (Cont.)
Dispatch latency - the time it takes for the dispatcher to switch between processes and start a new one running.
Scheduling Criteria
CPU utilisation i.e. CPU usage - to maximise
Throughput = number of processes that complete their execution per time unit - to maximise
Turnaround time = amount of time to execute a particular process - to minimise
Scheduling criteria (Cont.)
Waiting time = amount of time a process has been waiting in the ready queue - to minimise.
Response time = amount of time from when a job is submitted until it initiates its first response (output), not the time it takes to complete output of that first response - to minimise.
First-Come, First-Served (FCFS) Scheduling
Processes are scheduled in their order of arrival in the ready queue.
Example: Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3.
FCFS Scheduling (Cont.)
The Gantt chart for the schedule then is:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.
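These waiting times can be checked with a short sketch (hypothetical helper name; burst times taken from the example above). Under FCFS, each process waits for the sum of the bursts of all processes ahead of it:

```python
def fcfs_waiting_times(bursts):
    """Given burst times in arrival order, return each process's waiting time.

    Under FCFS, a process waits for the total burst time of all
    processes ahead of it in the ready queue.
    """
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])  # arrival order P1, P2, P3
print(waits)                            # [0, 24, 27]
print(sum(waits) / len(waits))          # 17.0
```

Calling `fcfs_waiting_times([3, 3, 24])` (arrival order P2, P3, P1) gives waits [0, 3, 6] and an average of 3, matching the second arrival order considered in the lecture.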
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3.
FCFS Scheduling (Cont.)
Waiting time is usually not minimal, and there is a large variance in waiting times.
Convoy effect - a short process may have a long wait before being scheduled onto the CPU because a long process is ahead of it.
Shortest-Job-First (SJF) Scheduling
Each process has a next CPU burst, which has a length (duration). SJF uses these lengths to schedule the process with the shortest next burst.
Two schemes:
1. Non-preemptive - once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
SJF Scheduling (Cont.)
2. Preemptive - if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
SJF is optimal - it gives the minimum average waiting time for a given set of processes.
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (non-preemptive) Gantt chart:
| P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
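A minimal simulation of the non-preemptive scheme (a sketch with a hypothetical function name, not the lecture's code) reproduces these waiting times. At each decision point, the scheduler picks the arrived process with the shortest burst:

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}.

    At each scheduling decision, run the arrived process with the
    shortest burst; once started, a process runs to completion.
    """
    pending = sorted(procs, key=lambda p: p[1])  # order by arrival time
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                            # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        pending.remove((name, arrival, burst))
        waits[name] = time - arrival
        time += burst
    return waits

procs = [("P1", 0.0, 7), ("P2", 2.0, 4), ("P3", 4.0, 1), ("P4", 5.0, 4)]
waits = sjf_nonpreemptive(procs)
print(waits)                    # {'P1': 0.0, 'P3': 3.0, 'P2': 6.0, 'P4': 7.0}
print(sum(waits.values()) / 4)  # 4.0
```

Note the tie between P2 and P4 (both have burst 4): `min` keeps the first candidate it sees, so the earlier arrival P2 runs first, matching the Gantt chart above.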
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (preemptive) Gantt chart:
| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
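The preemptive (SRTF) schedule can be checked with a unit-time simulation (again a sketch, not the lecture's code; integer arrival times are used so the loop can advance one tick at a time):

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}.

    Simulate one time unit at a time; always run the arrived process
    with the shortest remaining time (preemptive SJF).
    """
    remaining = {name: burst for name, arrival, burst in procs}
    arrivals = {name: arrival for name, arrival, burst in procs}
    bursts = {name: burst for name, arrival, burst in procs}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrivals[n] <= time]
        if not ready:                  # CPU idle until next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1        # run the chosen process for one tick
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrivals[n] - bursts[n] for n in finish}

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
waits = srtf(procs)
print(waits)                    # {'P3': 0, 'P2': 1, 'P4': 2, 'P1': 9}
print(sum(waits.values()) / 4)  # 3.0
```

Because the scheduler is re-evaluated every tick, P2 preempts P1 at time 2 and P3 preempts P2 at time 4, exactly as in the Gantt chart above.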
Determining Length of Next CPU Burst
We can only estimate the length. This can be done using the lengths of previous CPU bursts, with exponential averaging (a decaying average).
Determining Length of Next CPU Burst (Cont.)
Define:
1. t_n = actual length of the nth CPU burst
2. tau_{n+1} = predicted value for the next CPU burst
3. alpha, where 0 <= alpha <= 1
4. tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
Examples of Exponential Averaging
alpha = 0: tau_{n+1} = tau_n - the last CPU burst does not count; only longer-term history matters.
alpha = 1: tau_{n+1} = t_n - only the actual last CPU burst counts.
Examples of Exponential Averaging (Cont.)
If we expand the formula, we get:
tau_{n+1} = alpha * t_n + (1 - alpha) * alpha * t_{n-1} + ... + (1 - alpha)^j * alpha * t_{n-j} + ... + (1 - alpha)^{n+1} * tau_0
Since both alpha and (1 - alpha) are less than or equal to 1, each successive term has less weight than its predecessor.
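The recurrence can be sketched in a few lines (hypothetical function name; the alpha = 0.5, tau_0 = 10 figures are illustrative, not from the slides):

```python
def exponential_average(bursts, alpha, tau0):
    """Return successive predictions tau_1..tau_n from observed bursts.

    Implements tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    """
    predictions = []
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

# With alpha = 0.5, initial guess tau_0 = 10, and observed bursts 6, 4, 6, 4:
print(exponential_average([6, 4, 6, 4], 0.5, 10))  # [8.0, 6.0, 6.0, 5.0]
```

Setting alpha = 0 ignores the measured bursts entirely, and alpha = 1 tracks only the most recent burst - the two special cases listed above.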
References
Operating System Concepts, Chapters 4 & 5.