Operating Systems CSE 411 CPU Management Sept. 29 2006 - Lecture 11 Instructor: Bhuvan Urgaonkar.


Page 1:

Operating Systems CSE 411

CPU Management

Sept. 29 2006 - Lecture 11

Instructor: Bhuvan Urgaonkar

Page 2:

Threads

Page 3:

What’s wrong with a process?

• Multi-programming was developed to allow multiplexing of the CPU and I/O
  – Multiple processes are given the illusion of running concurrently
• Several applications would like to have multiple processes
  – Web server: when one process blocks on a file I/O call, another process can run on the CPU
  – What would be needed?
• Ability to create/destroy processes on demand
  – We already know how the OS does this (sketched below)
• We may want control over the scheduling of related processes
  – This is totally controlled by the OS scheduler
• Processes may need to communicate with each other
  – Message passing (e.g., signals) or shared memory (coming up) both need OS assistance
• Processes may need to be synchronized with each other (coming up)
  – Consider two Web server processes updating the same data
• Things not very satisfactory with multi-process applications:
  1. Communication needs help from the OS (system calls)
  2. Duplication of the same code may waste memory
  3. PCBs are large and eat up precious kernel memory
  4. Process context switching imposes overheads
  5. No control over the scheduling of processes comprising the same application
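As a reminder of how the OS already supports creating/destroying processes and waiting on them, here is a minimal sketch using fork()/waitpid(); the "request handling" in the child is hypothetical and only stands in for something like one Web server worker.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();            /* system call: the OS creates a new process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* Child: would handle one request (e.g., a blocking file I/O call). */
            printf("child %d handling a request\n", (int)getpid());
            _exit(0);                  /* system call: the OS destroys the process */
        }
        /* Parent: even finding out that the child finished requires a system call. */
        waitpid(pid, NULL, 0);
        printf("parent: child finished\n");
        return 0;
    }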

Page 4:

Kernel-level threads
1. Communication between related processes needs OS system calls
   • OS intervention can be avoided if the processes were able to share some memory without any help from the OS
     – That is, we are looking for a way for multiple processes to have (almost) the same address space
     – Address space: code, data (global variables and heap), stack
     – Option #1: Share global variables
       • Problem: We don’t know in advance what communication may occur, so we do not know in advance how much memory needs to be shared
     – Option #2: Share data (globals and heap)
2. Duplication of code may cause waste of memory
   – Option #3: Share code and data
     • Note: Not all processes may want to execute the same code
     • Expose the same code to all, and let each execute whatever part it wants
       – Different threads may execute different parts of the code
• What we have now are called kernel-level threads
  – They cycle through the same 5 states that we studied for a process
  – The OS provides system calls (analogous to fork, exit, exec) for kernel-level threads (see the sketch below)
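On UNIX/Linux these kernel-level thread operations are usually reached through the POSIX threads library, which wraps the underlying system calls (clone() on Linux). A minimal sketch, assuming two threads that share one global counter (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                      /* shared data segment: visible to all threads */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                     /* all threads update the same variable */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);   /* roughly analogous to fork */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);                    /* roughly analogous to wait */
        pthread_join(t2, NULL);
        /* May print less than 200000: the increments race, which is exactly the
           synchronization problem coming up later. */
        printf("counter = %ld\n", counter);
        return 0;
    }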

Page 5:

Kernel Threads

• PCB can contain things common across threads belonging to the process

• Have a Thread Control Block (TCB) for things specific to a thread

• Side effect: TCBs are smaller than PCBs and occupy less memory
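As an illustration only (the field names below are hypothetical, not taken from any particular OS), the TCB holds just the per-thread execution state, while the PCB keeps what all threads of a process share:

    /* Hypothetical layout -- illustrative field names, not from a real kernel. */
    struct tcb {
        int         tid;          /* thread id */
        int         state;        /* ready, running, waiting, ... */
        void       *pc;           /* saved program counter */
        void       *sp;           /* saved stack pointer */
        long        regs[16];     /* saved general-purpose registers */
        struct tcb *next;         /* e.g., ready-queue link */
    };

    struct pcb {                  /* one per process, shared by all its threads */
        int         pid;
        void       *page_table;   /* the address space (code + data) */
        void       *open_files;   /* open-file table */
        struct tcb *threads;      /* list of this process's (much smaller) TCBs */
    };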

Page 6:

Things not very satisfactory with a process, and how we can address them

Kernel-level threads help fix some of the problems with processes:
1. Share data, making efficient communication possible
2. Share code
3. TCBs are smaller than PCBs => take less kernel memory

Now let us consider the remaining problems with processes:
4. Process context switching imposes overhead
   - Do threads impose a smaller overhead?
5. No control over the scheduling of processes comprising the same application
   - Do threads help us here?

Page 7:

Context Switch Revisited

A context switch involves:
– Saving all registers and the PC in the PCB: same for a kernel-level thread
– Saving the process state in the PCB: same for a kernel-level thread
  • Do not confuse “process state” (ready, waiting, etc.) with “processor state” (registers, PC) and “processor mode” (user or kernel)
– Flushing the TLB (not covered yet): not needed if the threads belong to the same process
– Running the scheduler to pick the next process and changing the address space: same for kernel-level threads belonging to different processes

A context switch between threads of the same process is faster than a process context switch
  - because it avoids the address-space-change operations
A context switch between threads of different processes is almost as expensive as a process context switch

Note: SGG says thread creation and context switching are faster than process creation and context switching - only creation is necessarily faster, not context switching!

Page 8:

How can the context switch overhead be reduced?

• If a multi-threaded application were able to switch between its threads without involving the OS …

• Can this be achieved? What would it involve?
• The application would have to
  – Maintain a separate PC and stack for each thread
    • Easy to do: allocate and maintain the PCs and stacks on the heap

– Be able to switch from thread to thread in accordance with some scheduling policy

• Need to save/restore processor state (PC, registers) while in user mode

• Possible using setjmp()/longjmp() calls

Page 9:

setjmp() and longjmp()
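(The original slide’s code is not in the transcript; the following is a minimal stand-in showing the idea: setjmp() saves the processor state into a jmp_buf, and longjmp() later restores it, all in user mode.)

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf env;

    static void jump_back(void) {
        printf("in jump_back, about to longjmp\n");
        longjmp(env, 1);            /* restore the registers/PC saved by setjmp */
    }

    int main(void) {
        if (setjmp(env) == 0) {     /* returns 0 when the state is first saved */
            printf("state saved, calling jump_back\n");
            jump_back();
        } else {                    /* returns nonzero when longjmp brings us back */
            printf("back in main via longjmp\n");
        }
        return 0;
    }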

Page 10:

setjmp() and longjmp()

Page 11:

Reducing the context switch overhead

Requirement 1: Application maintains separate PC and stack for each thread

Requirement 2: Application has a way to switch from thread to thread without OS intervention

• Final missing piece:
  • How does a thread scheduler get invoked? That is, when is a thread taken off the CPU and another scheduled?
  • Note: We are only concerned with threads of the same process. Why?
• Strategy 1: Require all threads to yield the CPU periodically
• Strategy 2: Set timers that send SIGALRM signals to the process “periodically”
  • E.g., UNIX: the setitimer() system call
  • Implement a signal handler
    • The handler saves the CPU state of the previously running thread using setjmp() into a jmp_buf struct
    • It copies the contents of the jmp_buf into the thread’s TCB on the heap
    • It calls the thread scheduler, which picks the next thread to run
    • It copies the CPU state of the chosen thread from the heap into a jmp_buf and calls longjmp()
• What we have now are called user-level threads (see the sketch below)
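A minimal, cooperative sketch of user-level threads (Strategy 1 above). It uses the POSIX ucontext API (getcontext/makecontext/swapcontext) instead of the setjmp()/longjmp()-in-a-SIGALRM-handler scheme described on the slide, because ucontext portably handles setting up a private heap-allocated stack per thread; the preemptive version would add setitimer() and perform the same switch inside the signal handler. The names yield_to and make_thread are made up for this sketch.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, t1_ctx, t2_ctx;

    static void yield_to(ucontext_t *from, ucontext_t *to) {
        swapcontext(from, to);      /* save our PC/registers/SP, load the other thread's */
    }

    static void thread1(void) {
        for (int i = 0; i < 3; i++) {
            printf("thread 1, step %d\n", i);
            yield_to(&t1_ctx, &t2_ctx);      /* yield the CPU, no OS involvement */
        }
    }                                        /* returning goes to uc_link (main) */

    static void thread2(void) {
        for (int i = 0; i < 3; i++) {
            printf("thread 2, step %d\n", i);
            yield_to(&t2_ctx, &t1_ctx);
        }
    }

    static void make_thread(ucontext_t *ctx, void (*fn)(void)) {
        getcontext(ctx);
        ctx->uc_stack.ss_sp = malloc(STACK_SIZE);   /* per-thread stack lives on the heap */
        ctx->uc_stack.ss_size = STACK_SIZE;
        ctx->uc_link = &main_ctx;                   /* where to resume when fn returns */
        makecontext(ctx, fn, 0);
    }

    int main(void) {
        make_thread(&t1_ctx, thread1);
        make_thread(&t2_ctx, thread2);
        swapcontext(&main_ctx, &t1_ctx);            /* start running thread 1 */
        printf("back in main\n");
        return 0;
    }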

Page 12:

Kernel-level threads
• Pro: The OS knows about all the threads in a process
  – Can assign different scheduling priorities to each one
  – Can context switch between multiple threads in one process
• Con: Thread operations require calling the kernel
  – Creating, destroying, or context switching threads requires system calls

Page 13:

User-level threads
• Pro: Thread operations are very fast
  – Typically 10-100X faster than going through the kernel
• Pro: Thread state is very small
  – Just CPU state and a stack
• Con: If one thread blocks, the entire process stalls
• Con: Can’t use multiple CPUs!
  – The kernel only knows about one CPU context
• Con: The OS may not make good scheduling decisions
  – It could schedule a process with only idle threads
  – It could de-schedule a process with a thread holding a lock

Page 14:

Signal Handling with Threads

• Recall: Signals are used in UNIX systems to notify a process that a particular event has occurred

• Recall: A signal handler is used to process signals
  - The signal is generated by a particular event
  - The signal is delivered to a process
  - The signal is handled
• Options:
  – Deliver the signal to the thread to which the signal applies
  – Deliver the signal to every thread in the process
  – Deliver the signal to certain threads in the process
  – Assign a specific thread to receive all signals for the process (see the sketch below)
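POSIX threads support the last option above fairly directly: block the signal in every thread and dedicate one thread to receiving it with sigwait(). A minimal sketch, assuming SIGUSR1 is the signal of interest (compile with -pthread):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static void *signal_thread(void *arg) {
        sigset_t *set = arg;
        int sig;
        sigwait(set, &sig);               /* only this thread accepts the signal */
        printf("signal %d handled by the dedicated thread\n", sig);
        return NULL;
    }

    int main(void) {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);

        /* Block SIGUSR1 here; threads created afterwards inherit this mask. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_t tid;
        pthread_create(&tid, NULL, signal_thread, &set);

        pthread_kill(tid, SIGUSR1);       /* deliver the signal to one chosen thread */
        pthread_join(tid, NULL);
        return 0;
    }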

Page 15:

Signal Handling (more)
• When does a process handle a signal?
  – Whenever it gets scheduled next after the generation of the signal
• We said the OS marks some members of the PCB to indicate that a signal is due
  – And we said the process will execute the signal handler when it gets scheduled
  – But its PC held some other address!
    • The address of the instruction the process was executing when it was last scheduled
  – This is a complex task due to the need to juggle stacks carefully while switching between user and kernel mode

Page 16:

Signal Handling (more)
• Remember that signal handlers are functions defined by processes and included in the user-mode code segment
  – They are executed in user mode, in the process’s context
• The OS forces the handler’s starting address into the program counter
  – The user-mode stack is modified by the OS so that the process’s execution starts at the signal handler (see the sketch below)
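For reference, the user-mode side looks like this: the handler is an ordinary function in the program, and sigaction() merely tells the OS where it lives; forcing its address into the PC and adjusting the user stack is the kernel's job, as described above.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* An ordinary function in the user code segment; the OS arranges for the
       PC to land here, in user mode, when SIGINT is delivered. */
    static void on_sigint(int sig) {
        (void)sig;
        write(1, "caught SIGINT\n", 14);  /* handlers should use async-signal-safe calls */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);     /* register the handler with the OS */

        printf("press Ctrl-C...\n");
        pause();                          /* returns after the handler has run */
        return 0;
    }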

Page 17:

Combining the benefits of kernel- and user-level threads

• Read from text: Sections 4.2, 4.3, 4.4.6

Page 18:

Inter-process Communication (IPC)

• Two fundamental ways
  – Shared memory
    • E.g., playing tic-tac-toe or chess (both players read and update the same board)
  – Message passing
    • E.g., a letter or email

• Any communication involves a combination of these two

Page 19:

IPC: Message Passing

• OS provides system calls that processes/threads can use to pass messages to each other

• A thread library could provide user-level calls for the same purpose
  – The OS is not involved
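A pipe is the simplest concrete example of OS-mediated message passing between related processes (the first bullet above): the channel is created by one system call, and every message crosses the kernel via write()/read().

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);                          /* system call: the OS creates the channel */

        if (fork() == 0) {                 /* child: the sender */
            close(fd[0]);
            const char *msg = "hello via the kernel";
            write(fd[1], msg, strlen(msg) + 1);   /* a system call per message */
            _exit(0);
        }

        close(fd[1]);                      /* parent: the receiver */
        char buf[64];
        read(fd[0], buf, sizeof buf);      /* a system call per message */
        printf("received: %s\n", buf);
        wait(NULL);
        return 0;
    }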

Page 20:

IPC: Shared Memory
• The OS provides system calls with which processes can create shared memory that they can read from and write to

• Threads: Can share memory without OS intervention
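A minimal sketch using mmap() (error checks omitted): one system call sets up the shared region, after which the parent and child communicate with ordinary loads and stores and no further OS involvement; this is essentially what the threads of a single process get for free for their whole address space.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* One system call to create a region that both processes will share. */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        if (fork() == 0) {
            *shared = 42;                  /* plain store: no OS involvement */
            _exit(0);
        }
        wait(NULL);                        /* crude synchronization (next lecture's topic) */
        printf("parent reads %d from shared memory\n", *shared);   /* plain load */
        return 0;
    }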

Page 21:

Process/Thread synchronization

• Fundamental problem that needs to be solved to enable IPC

• Will study it next time