Post on 21-Dec-2015
What we will cover…
Processes
• Process Concept
• Process Scheduling
• Operations on Processes
• Interprocess Communication
• Communication in Client-Server Systems (Reading Materials)
Threads
• Overview
• Multithreading Models
• Threading Issues
1-1 Lecture 3
What is a process?
An operating system executes a variety of programs:
• Batch system – jobs
• Time-shared systems – user programs or tasks
• Single-user Microsoft Windows or Macintosh OS – the user runs many programs (word processor, web browser, email)
Informally, a process is just one such program in execution, making progress in a sequential fashion, similar to any high-level-language program (C/C++/Java code, etc.) written by users.
However, formally, a process is more than just the program code (the text section)!
Process in Memory
In addition to the text section, a process includes:
• program counter
• contents of the processor's registers
• stack – contains temporary data (method parameters, return addresses, local variables)
• data section
While a program is a passive entity, a process is an active entity.
Process State
As a process executes, it goes from creation to termination, passing through various "states":
• new: the process is being created
• running: instructions are being executed
• waiting: the process is waiting for some event to occur
• ready: the process is waiting to be assigned to a processor
• terminated: the process has finished execution
Diagram of Process State
Process Control Block (PCB)
A process carries a lot of information, and a system runs many processes. How does the OS manage all this information?
Each process is represented by a Process Control Block (PCB), a table of information about that process:
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• I/O status information
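The fields above can be pictured as a C structure. This is only an illustrative sketch (real kernels keep far more state, e.g. Linux's struct task_struct); all field names here are invented for the example:

```c
#include <stdint.h>

/* Illustrative sketch of a Process Control Block.
   Field names are invented for this example. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier */
    enum proc_state state;           /* new/ready/running/waiting/terminated */
    uintptr_t       program_counter; /* saved instruction pointer */
    uintptr_t       registers[16];   /* saved CPU register contents */
    int             priority;        /* CPU scheduling information */
    uintptr_t       page_table;      /* memory-management information */
    int             open_files[16];  /* I/O status information */
    struct pcb     *next;            /* link for a ready/device queue */
};

/* Mark a newly created process ready, as the OS might after admission. */
enum proc_state admit(struct pcb *p) {
    p->state = READY;
    return p->state;
}
```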
Process Control Block (PCB)
CPU Switch From Process to Process
Process Scheduling
In a multiprogramming environment there are many processes: many of them ready to run, and many of them waiting for some other event to occur. How does the OS manage them? With queues:
• Job queue – the set of all processes in the system
• Ready queue – the set of all processes residing in main memory, ready and waiting to execute
• Device queues – the sets of processes waiting for an I/O device
Processes migrate among these various queues.
A Representation of Process Scheduling
OS Queue Structure (implemented with a linked list)
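A minimal sketch of such a queue, with PCBs linked in FIFO order through a next pointer. The structure and function names are invented for this example, not taken from any real kernel:

```c
#include <stddef.h>

/* Minimal sketch of an OS ready queue as a singly linked list of PCBs.
   Names (struct pcb, enqueue, dequeue) are invented for this example. */
struct pcb {
    int pid;
    struct pcb *next;
};

struct queue {
    struct pcb *head;   /* next process to dispatch */
    struct pcb *tail;   /* where new arrivals are linked */
};

/* Link a PCB at the tail, preserving FIFO order. */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Unlink and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```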
Schedulers
A process migrates among the various queues. Often there are more processes than can be executed immediately; they are stored on mass-storage devices (typically disk) and must be brought into main memory for execution.
The OS selects processes in some fashion; this selection is carried out by a scheduler. Two schedulers are in effect:
• Long-term scheduler (or job scheduler) – selects which processes should be brought into memory
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
Schedulers (Cont.)
• The short-term scheduler is invoked very frequently (milliseconds), so it must be fast
• The long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow
• The long-term scheduler controls the degree of multiprogramming
The long-term scheduler has another big responsibility. Processes can be described as either:
• I/O-bound – spends more time doing I/O than computation; many short CPU bursts
• CPU-bound – spends more time doing computation; few, very long CPU bursts
The long-term scheduler should maintain a good balance between the two types of processes.
Addition of Medium Term Scheduling
Context Switch
All of the process scheduling described so far has a trade-off: when the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process via a context switch.
• Context-switch time depends on hardware support
• Context-switch time is pure overhead; the system does no useful work while switching
Interprocess Communication
Concurrent processes within a system may be independent or cooperating. A cooperating process can affect or be affected by other processes, including through shared data.
Reasons for cooperating processes:
• Information sharing – several users may be interested in a shared file
• Computation speedup – break a task into subtasks that run in parallel
• Convenience
Cooperating processes need InterProcess Communication (IPC). There are two models of IPC:
• Shared memory
• Message passing
Communications Models
Message-passing
Shared-memory
Shared Memory: the Producer-Consumer Problem
A classic paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
IPC is implemented via a shared buffer:
• unbounded buffer – places no practical limit on the size of the buffer
• bounded buffer – assumes a fixed buffer size; more practical. Let's design it!
Bounded-Buffer – Shared-Memory Solution Design
Three steps in the design problem:
1. Design the buffer
2. Design the producer process
3. Design the consumer process

1. Shared buffer (implemented as a circular array with two logical pointers: in and out)

#define BUFFER_SIZE 10

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Bounded-Buffer – Producer & Consumer Process Design
2. Producer design

while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

3. Consumer design

while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
Shared Memory Design
The previous design is correct, but it can use only BUFFER_SIZE - 1 elements!
Exercise for you: design a solution where BUFFER_SIZE items can be in the buffer at the same time (part of Assignment 1).
Interprocess Communication – Message Passing
Processes communicate with each other without resorting to shared memory. The IPC facility provides two operations:
• send(message) – message size fixed or variable
• receive(message)
If processes P and Q wish to communicate, they need to:
• establish a communication link between them
• exchange messages via send/receive
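A concrete sketch of this model, using a POSIX pipe between a parent and a child process: the pipe is the communication link, write() plays the role of send() and read() the role of receive(). The function name pipe_demo is invented for this example:

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Message passing between two processes over a POSIX pipe:
   the child sends "hello", the parent receives it.
   Returns 0 if the message arrived intact. */
int pipe_demo(void) {
    int fd[2];
    char buf[32] = {0};

    if (pipe(fd) == -1)                 /* establish the communication link */
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                     /* child: the sender */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);   /* send(message) */
        close(fd[1]);
        _exit(0);
    }

    /* parent: the receiver */
    close(fd[1]);
    read(fd[0], buf, sizeof buf);       /* receive(message): blocks until sent */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return strcmp(buf, "hello") == 0 ? 0 : -1;
}
```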
Direct Communication
Processes must name each other explicitly:
• send(P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
Properties of the communication link:
• A link is associated with exactly one pair of communicating processes
• Between each pair there exists exactly one link
• Symmetric addressing: both sender and receiver must name the other to communicate
• Asymmetric addressing: the receiver is not required to name the sender
Indirect Communication
Messages are sent to and received from mailboxes (also referred to as ports):
• Each mailbox has a unique id
• Processes can communicate only if they share a mailbox
Properties of the communication link:
• A link is established only if the processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links
• A link may be unidirectional or bidirectional
Communications in Client-Server Systems
Socket connection
Sockets
A socket is defined as an endpoint for communication. It is identified by the concatenation of an IP address and a port:
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
Communication takes place between a pair of sockets.
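A loopback sketch of such a pair, kept inside one process for simplicity: a "server" endpoint bound to 127.0.0.1 and a "client" endpoint that connects to it, each identified by IP address + port. The function name socket_demo is invented for this example:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One process, two communicating sockets over the loopback interface.
   Returns 0 if the client's message reaches the server side. */
int socket_demo(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int cli = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    socklen_t len = sizeof addr;
    char buf[8] = {0};

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1 */
    addr.sin_port = 0;                              /* let the OS pick a free port */
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) == -1) return -1;
    if (listen(srv, 1) == -1) return -1;
    getsockname(srv, (struct sockaddr *)&addr, &len); /* learn the chosen port */

    /* the client endpoint connects to 127.0.0.1:<port> */
    if (connect(cli, (struct sockaddr *)&addr, sizeof addr) == -1) return -1;
    int conn = accept(srv, NULL, NULL);

    write(cli, "ping", 5);              /* communication between the pair */
    read(conn, buf, sizeof buf);

    close(conn); close(cli); close(srv);
    return strcmp(buf, "ping") == 0 ? 0 : -1;
}
```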
Socket Communication
Threads
The process model discussed so far assumed that a process was a sequentially executed program with a single thread of control.
The increasing scale of computing puts pressure on programmers; the challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
Think of a busy web server!
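Data splitting can be sketched with POSIX threads: two threads each sum half of an array, then the halves are combined. The names (struct range, parallel_sum) are invented for this example:

```c
#include <pthread.h>

/* Two threads each sum a slice of the array: a tiny example of
   dividing an activity and splitting its data across threads. */
struct range { const int *a; int n; long sum; };

static void *sum_range(void *arg) {
    struct range *r = arg;
    for (int i = 0; i < r->n; i++)
        r->sum += r->a[i];
    return NULL;
}

long parallel_sum(const int *a, int n) {
    pthread_t t1, t2;
    struct range lo = { a,         n / 2,     0 };
    struct range hi = { a + n / 2, n - n / 2, 0 };

    pthread_create(&t1, NULL, sum_range, &lo);
    pthread_create(&t2, NULL, sum_range, &hi);
    pthread_join(t1, NULL);     /* wait for both halves to finish */
    pthread_join(t2, NULL);
    return lo.sum + hi.sum;     /* combine the partial results */
}
```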
Single and Multithreaded Processes
Benefits
• Responsiveness
• Resource sharing
• Economy
• Scalability
Multithreaded Server Architecture
Concurrent Execution on a Single-core System
Parallel Execution on a Multicore System
User and Kernel Threads
• User threads: thread management is done by a user-level threads library
• Kernel threads: supported by the kernel (Windows XP, Solaris, Linux, Mac OS X)
Multithreading Models
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
Many user-level threads are mapped to a single kernel thread.
Examples:
• Solaris Green Threads
• GNU Portable Threads
One-to-One
Each user-level thread maps to a kernel thread.
Examples:
• Windows NT/XP/2000
• Linux
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Example: Solaris prior to version 9
Many-to-Many Model
Threading Issues
• Cancellation of a target thread
• Dynamic, unbounded use of threads
Thread Cancellation
Terminating a thread before it has finished. Two general approaches:
• Asynchronous cancellation – terminates the target thread immediately. Problems? The thread may be cancelled in the middle of updating data it shares with other threads.
• Deferred cancellation – allows the target thread to periodically check whether it should be cancelled
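Deferred cancellation is what POSIX threads do by default: pthread_cancel() only requests cancellation, and the target thread dies at its next cancellation point, such as an explicit pthread_testcancel() call. A minimal sketch (the function names worker and cancel_demo are invented for this example):

```c
#include <pthread.h>

/* The worker loops forever, but checks for a pending cancellation
   request once per iteration; it only terminates at that safe point. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* explicit cancellation point */
    }
    return NULL;                /* never reached */
}

int cancel_demo(void) {
    pthread_t t;
    void *res;
    pthread_create(&t, NULL, worker, NULL);
    pthread_cancel(t);          /* request (deferred) cancellation */
    pthread_join(t, &res);      /* wait until the thread acts on it */
    return res == PTHREAD_CANCELED ? 0 : -1;
}
```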
Dynamic Usage of Threads
Create a thread as and when needed.
Disadvantages:
• It takes time to create a thread, yet the thread is discarded once it has completed its work; no reuse
• There is no bound on the total number of threads created in the system, which may result in severe resource scarcity
Solution: Thread Pools
Create a number of threads in a pool, where they await work.
Advantages:
• Usually faster to service a request with an existing thread than to create a new thread
• Allows the number of threads in the application(s) to be bound by the size of the pool
Almost all modern operating systems provide kernel support for threads: Windows XP, Mac OS X, Linux, …
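The pool idea can be sketched with POSIX threads: a fixed set of workers created up front repeatedly claims tasks from a shared counter until the work is drained. All names and the task representation are invented for this example; a production pool would queue function pointers and support shutdown and resizing:

```c
#include <pthread.h>

/* Minimal thread-pool sketch: NWORKERS threads service NTASKS
   requests, reusing the same threads for every request. */
#define NWORKERS 4
#define NTASKS   100

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;   /* stand-in for a queue of pending requests */
static int done      = 0;   /* how many requests have been serviced */

static void *pool_worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_task >= NTASKS) {      /* no work left: worker retires */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        next_task++;                    /* claim one request */
        pthread_mutex_unlock(&lock);

        /* ... service the request here, reusing this same thread ... */

        pthread_mutex_lock(&lock);
        done++;
        pthread_mutex_unlock(&lock);
    }
}

int run_pool(void) {
    pthread_t workers[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)  /* create the pool up front */
        pthread_create(&workers[i], NULL, pool_worker, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(workers[i], NULL);
    return done;                        /* should equal NTASKS */
}
```

Note how the pool bounds concurrency at NWORKERS threads no matter how many requests arrive, which is exactly the resource-scarcity fix the slide describes.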