Operating Systems


Transcript of Operating Systems

Page 1: Operating Systems

OS

Page 2: Operating Systems

Operating System

Just a program

Provides a stable, consistent way for applications to deal with the hardware

Page 3: Operating Systems

What constitutes an OS?

Kernel

System Programs

Application Programs

Page 4: Operating Systems

Storage Device Hierarchy

Page 5: Operating Systems

Cache

Writing policies

Write back: Initially, writing is done only to the cache; blocks are marked dirty for later writing to the backing store

Write through: Writing is done synchronously both to the cache and to the backing store

Replacement Policy?
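As a rough sketch (not from the slides), the two policies can be contrasted in code; the struct and function names below are made up for illustration:

#include <stdbool.h>
#include <string.h>

/* Hypothetical cache line for illustrating write policies. */
struct cache_line {
    unsigned long tag;
    unsigned char data[64];
    bool valid;
    bool dirty;   /* set on a write-back write; block must be flushed later */
};

/* Write back: update only the cache and mark the line dirty. */
static void cache_write_back(struct cache_line *line, const void *src, size_t n)
{
    memcpy(line->data, src, n);
    line->dirty = true;              /* backing store is updated later, e.g. on eviction */
}

/* Write through: update the cache and the backing store synchronously. */
static void cache_write_through(struct cache_line *line, const void *src, size_t n,
                                void (*store_to_backing)(const void *, size_t))
{
    memcpy(line->data, src, n);
    store_to_backing(line->data, n); /* synchronous write to the backing store */
}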

Page 6: Operating Systems

System Calls

The interface through which a program requests access to privileged or sensitive resources; invoked via a trap into the kernel

System calls provide a level of portability

Parameters to system calls can be passed through registers, tables or stack

Is “printf” a system call?

Page 7: Operating Systems

System Calls

File copy program:
Acquire input file name
Acquire output file name
Open input file; if the file doesn't exist, abort
Create output file; if the file cannot be created, abort
Read from input file
Write to output file
Repeat until read fails
Close output file
Terminate normally by returning closing status to OS
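As a sketch of how these steps map onto actual system calls (assuming a POSIX system; this is not the program shown in class):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {                       /* acquire input and output file names */
        fprintf(stderr, "usage: %s <in> <out>\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);      /* open input file; abort if it doesn't exist */
    if (in < 0) { perror("open input"); return 1; }
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("create output"); return 1; }  /* abort if it can't be created */

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)          /* repeat until read fails */
        if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }

    close(in);
    close(out);                            /* close output file */
    return 0;                              /* terminate normally, returning status to the OS */
}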

Page 8: Operating Systems

Operating System Operation

Interrupt driven

Dual Mode Operation
“Kernel” mode (also called “supervisor” or “protected” mode)
“User” mode: normal programs execute here

Page 9: Operating Systems


Operating System Operation

Timer

Interrupts the computer after a specified period

The timer interrupt may be treated as a fatal error, or the program may be given more time (Penny, Penny, Penny)

Page 10: Operating Systems


Process

A program in execution
A single ‘thread’ of execution

Uniprogramming: one thread at a time

Multiprogramming: more than one thread at a time

Page 11: Operating Systems


Process States

New: Process is being created
Ready: Process is waiting to run
Running: Instructions are being executed
Waiting: Process is waiting for some event to occur
Terminated: The process has finished execution

Page 12: Operating Systems


Process Control Block(PCB)


Only one PCB active at a time

Page 13: Operating Systems


Process Creation

[Figure: UNIX process tree. init (pid = 1) spawns csh shells (pid = 7778, pid = 1400), which in turn spawn processes such as vi, cat, emacs, and ls]

A parent process creates child processes

Page 14: Operating Systems


Process Creation – Fork()
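The slide's code is not reproduced in this transcript; a minimal fork()/exec()/wait() sketch on a POSIX system might look like this:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");                 /* creation failed */
        exit(1);
    } else if (pid == 0) {
        /* child: replace its image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        _exit(1);
    } else {
        /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}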

Page 15: Operating Systems


A Question (Microsoft)

Page 16: Operating Systems


Inter Process Communication

Shared-Memory Systems: A process creates a shared-memory segment; other processes attach it to their address space

Message-Passing Systems: send(P, message), receive(id, message)
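A minimal shared-memory sketch using POSIX shm_open/mmap (the segment name /demo_shm is made up; link with -lrt on older Linux systems):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* one process creates the segment... */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* ...and any cooperating process attaches it to its address space */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello via shared memory");  /* visible to every process that mapped it */
    printf("%s\n", p);

    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_shm");               /* remove the segment name */
    return 0;
}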

Page 17: Operating Systems


IPC - Pipes

Pipes are OS level communication links between processes

Pipes are treated as file descriptors in most OSes
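A small POSIX sketch (not from the slides) of a pipe carrying a message from a parent to its child:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                            /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                     /* child: reads from the pipe */
        char buf[64];
        close(fds[1]);                     /* close unused write end */
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fds[0]);
        _exit(0);
    }

    close(fds[0]);                         /* parent: writes to the pipe */
    const char *msg = "hello through a pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}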

Page 18: Operating Systems


Multithreading

Each thread has its own stack and current execution state
Threads encapsulate concurrency

Page 19: Operating Systems


Multithreading

Advantages: Responsiveness, Resource Sharing, Economy, Scalability

Pthreads: POSIX standard defining an API for thread creation and synchronization
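A minimal Pthreads sketch (not from the slides); compile with -pthread:

#include <pthread.h>
#include <stdio.h>

/* thread body: just reports which worker it is */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int ids[2] = {0, 1};

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);   /* create the threads */
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);                       /* wait for them to finish */
    return 0;
}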

Page 20: Operating Systems


A Question (Google)

Page 21: Operating Systems


Scheduling

Scheduling – Deciding which threads are given access to resources from moment to moment

The CPU should not be idle

At least one process should be using the CPU

Page 22: Operating Systems


Scheduling

Scheduling – Deciding which threads are given access to resources from moment to moment

Page 23: Operating Systems


Scheduling

Goals/Criteria: Minimize Response Time, Maximize Throughput, Fairness

First-Come, First-Served (FCFS) Scheduling: Run each job until done; short jobs get stuck behind long ones

Gantt chart: P1 runs 0-24, P2 runs 24-27, P3 runs 27-30

Page 24: Operating Systems


Preemption

Capability to preempt a process in execution

Execution is prioritized

Higher-priority processes preempt lower-priority ones

Page 25: Operating Systems


Round Robin (RR)

Each process gets a small unit of CPU time (time quantum)

After the quantum expires, the process is preempted and added to the end of the ready queue

With N processes in the ready queue and time quantum q, no process waits more than (N-1)q time units

Performance: q large -> FCFS; q must be large with respect to context-switch time, otherwise there is too much overhead

Page 26: Operating Systems


Round Robin (RR)

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

Process Burst Time

P1 53

P2 8

P3 68

P4 24

Waiting time for P1 = ?, P2 = ?, P3 = ?, P4 = ?

Average Waiting Time = ? Average Completion Time = ?

Page 27: Operating Systems


Round Robin (RR)

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

Process Burst Time

P1 53

P2 8

P3 68

P4 24

Waiting time for P1 = (68-20) + (112-88) = 72, P2 = 20, P3 = 85, P4 = 88

Average Waiting Time = (72 + 20 + 85 + 88)/4 = 66.25
Average Completion Time = (125 + 28 + 153 + 112)/4 = 104.5
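As a cross-check (not part of the slides), a small program can replay this schedule with a quantum of 20, assuming all four processes arrive at time 0:

#include <stdio.h>

#define N 4
#define QUANTUM 20

int main(void)
{
    int burst[N]      = {53, 8, 68, 24};   /* P1..P4 */
    int remaining[N]  = {53, 8, 68, 24};
    int completion[N] = {0};
    int t = 0, done = 0;

    /* Because all processes arrive at time 0 and none arrive later, cycling over
       the processes in index order matches the ready-queue order of the Gantt chart. */
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            t += slice;                    /* run this process for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) { completion[i] = t; done++; }
        }
    }

    double wait_sum = 0, comp_sum = 0;
    for (int i = 0; i < N; i++) {
        int waiting = completion[i] - burst[i];   /* arrival time assumed to be 0 */
        printf("P%d: completion %d, waiting %d\n", i + 1, completion[i], waiting);
        wait_sum += waiting;
        comp_sum += completion[i];
    }
    printf("average waiting %.2f, average completion %.2f\n", wait_sum / N, comp_sum / N);
    return 0;
}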

Page 28: Operating Systems


Round Robin (RR)

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

Process Burst Time

P1 53

P2 8

P3 68

P4 24

Pros and Cons: Better for short jobs; context-switching overhead adds up for long jobs

Page 29: Operating Systems


What if we know future?

Shortest Job First (SJF): Run whatever job has the least amount of computation to do

Shortest Remaining Time First (SRTF): Preemptive version of SJF; if a job arrives with a shorter time to completion than the remaining time of the current job, immediately preempt the CPU

Basic idea is to get short jobs out of the system
Big effect on short jobs, only a small effect on long ones
Result is better average response time

Page 30: Operating Systems


Synchronization

Most of the time, threads are working on separate data, so scheduling doesn’t matter

But what happens when they work on a shared variable?

Atomic Operations: An operation that always runs to completion or not at all; indivisible; a fundamental building block

Page 31: Operating Systems


Synchronization

Synchronization: Using atomic operations to ensure cooperation between threads

Mutual Exclusion: Only one thread does a particular thing at a time

Critical Section: Piece of code that only one thread can execute at once

Lock: Prevents someone from doing something

Page 32: Operating Systems


Synchronization

[Figure: layers of synchronization support]
Hardware: Load/Store, Disable Ints, Test&Set, Comp&Swap
Higher-level API: Locks, Semaphores, Monitors, Send/Receive
Programs: Shared programs

Everything is pretty painful if the only atomic primitives are load and store

Page 33: Operating Systems


Semaphores

A kind of generalized lock

Definition: A semaphore has a non-negative integer value and supports the following two operations

P(): An atomic operation that waits for the semaphore to become positive, then decrements it by 1; also called the wait() operation

V(): An atomic operation that increments the semaphore by 1, waking up a waiting P(), if any; also called the signal() operation
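A small POSIX sketch (not from the slides) where sem_wait plays the role of P() and sem_post the role of V(); the semaphore starts at 0, so the consumer blocks until the producer signals:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t ready;                 /* starts at 0: consumer blocks until producer signals */

static void *producer(void *arg)
{
    (void)arg;
    printf("producer: work done\n");
    sem_post(&ready);               /* V(): increment, waking a waiting P() if any */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    sem_wait(&ready);               /* P(): wait until the semaphore becomes positive */
    printf("consumer: got the signal\n");
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&ready, 0, 0);         /* shared between threads, initial value 0 */
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&ready);
    return 0;
}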

Page 34: Operating Systems


Semaphores Implementation
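The implementation on this slide is not transcribed; one common sketch keeps the integer value plus a place for waiters to block, approximated here with a Pthreads mutex and condition variable (names are my own):

#include <pthread.h>

/* Illustrative only: a counting semaphore built from a mutex + condition variable. */
struct semaphore {
    int value;                       /* non-negative count */
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;         /* signaled when value becomes positive */
};

void sem_init_value(struct semaphore *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void sem_P(struct semaphore *s)      /* wait(): block until value > 0, then decrement */
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void sem_V(struct semaphore *s)      /* signal(): increment and wake one waiter, if any */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}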

Page 35: Operating Systems


Semaphore vs Mutex

Are a binary semaphore and a mutex the same?

No, the purposes of a mutex and a semaphore are different

A mutex is a locking mechanism used to synchronize access to a resource. Ownership is associated with a mutex; only the owner can release the lock.

A semaphore is a signaling mechanism (“I am done, you can carry on” kind of signal)

More at http://www.geeksforgeeks.org/mutex-vs-semaphore/

Page 36: Operating Systems


Readers-Writers

Problem: Several readers and writers access the same file
Need to control access to the buffer
If several readers are reading, no problem
If a writer is writing while a reader is reading, there is a problem

Variations: First-Writers-Then-Readers, First-Readers-Then-Writers

Page 37: Operating Systems


Readers-Writers
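The solution on this slide is not transcribed; a common first-readers-then-writers sketch with two semaphores and a reader count (variable names are my own) is:

#include <pthread.h>
#include <semaphore.h>

static sem_t rw_mutex;      /* writers (and the first/last reader) hold this */
static sem_t mutex;         /* protects read_count */
static int read_count = 0;

void rw_init(void)
{
    sem_init(&rw_mutex, 0, 1);
    sem_init(&mutex, 0, 1);
}

void writer(void)
{
    sem_wait(&rw_mutex);        /* exclusive access */
    /* ... write to the shared file ... */
    sem_post(&rw_mutex);
}

void reader(void)
{
    sem_wait(&mutex);
    if (++read_count == 1)      /* first reader locks out writers */
        sem_wait(&rw_mutex);
    sem_post(&mutex);

    /* ... read the shared file (many readers may be here at once) ... */

    sem_wait(&mutex);
    if (--read_count == 0)      /* last reader lets writers back in */
        sem_post(&rw_mutex);
    sem_post(&mutex);
}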

Page 38: Operating Systems


Deadlocks

P0: Wait(Q); Wait(S); ... some code here ...; Signal(Q); Signal(S)
P1: Wait(S); Wait(Q); ... some code here ...; Signal(S); Signal(Q)

A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
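The same pattern in runnable form (a sketch, not from the slides): two threads acquire semaphores Q and S in opposite orders, so each may end up holding one and waiting forever for the other.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t Q, S;      /* both initialized to 1, so they act as locks */

static void *p0(void *arg)
{
    (void)arg;
    sem_wait(&Q);       /* P0: acquire Q first ... */
    sem_wait(&S);       /* ... then S */
    puts("P0 in critical section");
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

static void *p1(void *arg)
{
    (void)arg;
    sem_wait(&S);       /* P1: acquire S first ... */
    sem_wait(&Q);       /* ... then Q: opposite order, so both threads may block forever */
    puts("P1 in critical section");
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    sem_init(&Q, 0, 1);
    sem_init(&S, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);     /* may never return if the run deadlocks */
    pthread_join(t1, NULL);
    return 0;
}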

Page 39: Operating Systems


Deadlock Requirements

Mutual Exclusion: Only one thread at a time can use a resource

Hold and Wait: A thread holding at least one resource is waiting to acquire additional resources held by other threads

No Preemption: Resources are released only voluntarily by the thread holding the resource, after the thread is finished with it

Circular Wait: There exists a set {T1, …, Tn} of waiting threads such that T1 is waiting for a resource held by T2, T2 is waiting for a resource held by T3, …, and Tn is waiting for a resource held by T1

Page 40: Operating Systems


Some Techniques

Deadlock Detection: Resource-allocation graph, etc.

Prevention: Break the circular-wait condition (for example, by imposing an ordering on resource acquisition)

Avoidance: Banker's algorithm