Multiple Processor Systems
Bits of Chapters 4, 10, 16
Operating Systems: Internals and Design Principles, 6/E
William Stallings
Parallel Processor Architectures
Multiprocessor Systems
• Continuous need for faster computers
– shared memory model
– message passing multiprocessor
– wide area distributed system
Multiprocessors
Definition: A computer system in which two or more CPUs share full access to a common RAM.
Multiprocessor Hardware
Bus-based multiprocessors
Non-blocking network
UMA Multiprocessor using a crossbar switch
Blocking network
Omega Switching Network
Master-Slave multiprocessors
Symmetric Multiprocessors
Symmetric Multiprocessor Organization
Multiprocessor Operating Systems Design Considerations
• Simultaneous concurrent processes or threads: reentrant routines, IPC
• Scheduling: on which processor a process should run
• Synchronization: locks
• Memory management: e.g. shared pages
• Reliability and fault tolerance: graceful degradation
Multiprocessor Synchronization
TSL (test-and-set lock) fails on a multiprocessor if the bus is not locked during the read-modify-write cycle
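The TSL idea can be made concrete with a small spinlock sketch. Python has no raw TSL instruction, so the atomic exchange is emulated here with `Lock.acquire(blocking=False)`; the class name and demo counter are illustrative only.

```python
import threading

class SpinLock:
    """Spinlock in the style of TSL: atomically read the lock flag and set
    it, looping until the old value was 'free'. CPython has no raw TSL
    instruction, so Lock.acquire(blocking=False) stands in for the atomic
    read-modify-write that real hardware performs with the bus locked."""

    def __init__(self):
        self._flag = threading.Lock()

    def test_and_set(self):
        # Returns the old value of the flag: True means it was already held.
        return not self._flag.acquire(blocking=False)

    def acquire(self):
        while self.test_and_set():   # spin until the old value was 'free'
            pass

    def release(self):
        self._flag.release()

# Demo: two threads incrementing a shared counter under the spinlock.
counter = [0]
lock = SpinLock()

def work():
    for _ in range(500):
        lock.acquire()
        counter[0] += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```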
Spinning versus Switching
• In some cases the CPU must wait
– e.g. it waits to acquire the ready list
• In other cases a choice exists
– spinning wastes CPU cycles
– switching uses up CPU cycles also
– possible to make a separate decision each time a locked mutex is encountered (e.g. using history)
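The per-mutex decision can be sketched as a spin-then-block acquire; the iteration count and return labels are illustrative, not part of any real API.

```python
import threading

def adaptive_acquire(lock, spin_iterations=1000):
    """Spin briefly (cheap when the holder releases soon), then fall back
    to a blocking acquire, letting the scheduler switch this thread out."""
    for _ in range(spin_iterations):
        if lock.acquire(blocking=False):
            return "spun"        # got the lock without a context switch
    lock.acquire()               # give up spinning and block
    return "blocked"
```

A history-based variant would tune `spin_iterations` per mutex from observed hold times.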
Scheduling Design Issues
• Assignment of processes to processors
• Use of multiprogramming on individual processors
• Actual dispatching of a process
Assignment of Processes to Processors
• Treat processors as a pooled resource and assign processes to processors on demand
• Static assignment: permanently assign a process to a processor
– Dedicated short-term queue for each processor
– Less overhead
– A processor could be idle while another processor has a backlog
Assignment of Processes to Processors (2)
• Dynamic assignment: global queue
– Schedule to any available processor
– Process migration has a cost, cf. the contents of the local cache
Master/slave architecture
– Key kernel functions always run on a particular processor
– Master is responsible for scheduling
– Slave sends service request to the master
– Disadvantages
• Failure of master brings down whole system
• Master can become a performance bottleneck
Peer architecture
– Kernel can execute on any processor
– Each processor does self-scheduling
– Complicates the operating system
• Make sure two processors do not choose the same process
Traditional Process Scheduling
• Single queue for all processes
• Multiple queues are used for priorities
• All queues feed to the common pool of processors
Thread Scheduling
• An application can be a set of threads that cooperate and execute concurrently in the same address space
• True parallelism
Comparison of One and Two Processors
Comparison of One and Two Processors (2)
Multiprocessor Thread Scheduling
• Load sharing
– Threads are not assigned to a particular processor
• Gang scheduling
– A set of related threads is scheduled to run on a set of processors at the same time
• Dedicated processor assignment
– Threads are assigned to a specific processor
Load Sharing
• Load is distributed evenly across the processors
• No centralized scheduler required
• Use global queues
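A global-queue sketch, assuming Python threads stand in for processors; `queue.Queue` supplies the mutual exclusion that the next slide's disadvantages refer to.

```python
import queue
import threading

ready_queue = queue.Queue()          # one global ready queue, internally locked

def worker(results, results_lock):
    """Each 'processor' self-serves from the shared queue until it is empty."""
    while True:
        try:
            task = ready_queue.get_nowait()   # atomic dequeue (mutual exclusion)
        except queue.Empty:
            return
        r = task()
        with results_lock:
            results.append(r)

def run_load_sharing(tasks, n_cpus=4):
    """Enqueue all tasks, then let n_cpus workers drain the global queue."""
    for t in tasks:
        ready_queue.put(t)
    results, rlock = [], threading.Lock()
    workers = [threading.Thread(target=worker, args=(results, rlock))
               for _ in range(n_cpus)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```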
Disadvantages of Load Sharing
• Central queue needs mutual exclusion
• Preempted threads are unlikely to resume execution on the same processor
• If all threads are in the global queue, not all threads of a program will gain access to the processors at the same time
Multiprocessor Scheduling (3)
• Problem with communication between two threads
– both belong to process A
– both running out of phase
Gang Scheduling
• Simultaneous scheduling of threads that make up a single process
• Useful for applications where performance severely degrades when any part of the application is not running
• Threads often need to synchronize with each other
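Gang scheduling can be pictured as packing whole gangs into time slots across the CPUs; this first-fit sketch (names and data representation invented for illustration) never splits a gang across slots.

```python
def gang_schedule(gangs, n_cpus):
    """Assign each gang (name, thread_count) to a time slot so that all of a
    gang's threads run simultaneously, one thread per CPU. First-fit packing:
    a gang goes into the earliest slot with enough free CPUs, else a new slot."""
    slots = []
    for name, size in gangs:
        for slot in slots:
            if sum(sz for _, sz in slot) + size <= n_cpus:
                slot.append((name, size))
                break
        else:
            slots.append([(name, size)])
    return slots
```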
Dedicated Processor Assignment
• When an application is scheduled, each of its threads is assigned to a processor
• Some processors may be idle
• No multiprogramming of processors
Application Speedup
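The slide's speedup figure is not recoverable from this transcript; the standard way to reason about application speedup is Amdahl's law, sketched here.

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Amdahl's law: with serial fraction f, the speedup on n processors is
    S(n) = 1 / (f + (1 - f) / n), which approaches 1/f as n grows."""
    f = serial_fraction
    return 1.0 / (f + (1.0 - f) / n_processors)
```

Even 10% serial work caps an 8-processor speedup well below 8.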
Client/Server Computing
• Client machines are generally single-user PCs or workstations that provide a highly user-friendly interface to the end user
• Each server provides a set of shared services to the clients
• The server enables many clients to share access to the same database and enables the use of a high-performance computer system to manage the database
Generic Client/Server Environment
Client/Server Applications
• Basic software is an operating system running on the hardware platform
• Platforms and the operating systems of client and server may differ
• These lower-level differences are irrelevant as long as a client and server share the same communications protocols and support the same applications
Generic Client/Server Architecture
Middleware
• Set of tools that provide a uniform means and style of access to system resources across all platforms
• Enable programmers to build applications that look and feel the same
• Enable programmers to use the same method to access data
Role of Middleware in Client/Server Architecture
Distributed Message Passing
Basic Message-Passing Primitives
Reliability versus Unreliability
• Reliable message passing guarantees delivery if possible
– Not necessary to let the sending process know that the message was delivered
• Unreliable message passing sends the message out into the communication network without reporting success or failure
– Reduces complexity and overhead
Blocking versus Nonblocking
• Nonblocking
– Process is not suspended as a result of issuing a Send or Receive
– Efficient and flexible
– Difficult to debug
Blocking versus Nonblocking
• Blocking
– Send does not return control to the sending process until the message has been transmitted
– OR does not return control until an acknowledgment is received
– Receive does not return until a message has been placed in the allocated buffer
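The two styles can be contrasted with a bounded mailbox; `queue.Queue` is a stand-in for the kernel's message buffer, and the function names are illustrative.

```python
import queue

mailbox = queue.Queue(maxsize=1)     # bounded buffer standing in for the kernel's

def nonblocking_send(msg):
    """Returns immediately, reporting only whether the message was buffered."""
    try:
        mailbox.put_nowait(msg)
        return True
    except queue.Full:
        return False                 # caller keeps running; message was dropped

def blocking_receive():
    """Does not return until a message has been placed in the buffer."""
    return mailbox.get()
```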
Remote Procedure Calls
• Allow programs on different machines to interact using simple procedure call/return semantics
• Widely accepted
• Standardized
– Client and server modules can be moved among computers and operating systems easily
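The call/return illusion rests on stubs that marshal parameters into messages; this toy sketch uses JSON as a stand-in wire format (the function names are invented for illustration).

```python
import json

def marshal(proc_name, args):
    """Client stub: pack the procedure name and parameters into a message."""
    return json.dumps({"proc": proc_name, "args": args}).encode()

def dispatch(message, procedures):
    """Server stub: unpack the message and invoke the local procedure."""
    call = json.loads(message.decode())
    return procedures[call["proc"]](*call["args"])
```

The caller sees only an ordinary procedure call; the stubs hide the message exchange.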
Remote Procedure Call
Remote Procedure Call Mechanism
Client/Server Binding
• Binding specifies the relationship between remote procedure and calling program
• Nonpersistent binding
– Logical connection established during the remote procedure call
• Persistent binding
– Connection is sustained after the procedure returns
Synchronous versus Asynchronous
• Synchronous RPC– Behaves much like a subroutine call
• Asynchronous RPC
– Does not block the caller
– Enables client execution to proceed locally in parallel with the server invocation
Clusters
• Alternative to symmetric multiprocessing (SMP)
• Group of interconnected, whole computers (nodes) working together as a unified computing resource
– Illusion is one machine
No Shared Disks
Shared Disk
Clustering Methods: Benefits and Limitations
Operating System Design Issues
• Failure management
– A highly available cluster offers a high probability that all resources will be in service
• No guarantee about the state of partially executed transactions if a failure occurs
– A fault-tolerant cluster ensures that all resources are always available
Operating System Design Issues (2)
• Load balancing
– When a new computer is added to the cluster, the load-balancing facility should automatically include it in scheduling applications
Multicomputer Scheduling - Load Balancing (1)
• Graph-theoretic deterministic algorithm
Load Balancing (2)
• Sender-initiated distributed heuristic algorithm
– initiated by an overloaded sender
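A sender-initiated heuristic can be sketched as random probing from an overloaded node; the threshold, probe limit, and peer representation are all illustrative assumptions.

```python
import random

def sender_initiated_balance(my_load, threshold, peers, probe_limit=3, rng=None):
    """If this node's load exceeds the threshold, probe up to probe_limit
    randomly chosen peers and migrate a process to the first one whose load
    is below the threshold. `peers` maps peer name -> current load."""
    if my_load <= threshold:
        return None                              # not overloaded: do nothing
    rng = rng or random.Random()
    for name in rng.sample(list(peers), min(probe_limit, len(peers))):
        if peers[name] < threshold:
            return name                          # target for migration
    return None                                  # all probes busy: keep it local
```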
Load Balancing (3)
• Receiver-initiated distributed heuristic algorithm
– initiated by an underloaded receiver
Operating System Design Issues (3)
• Parallelizing computation
– Parallelizing compiler: determines at compile time which parts can be executed in parallel
– Parallelized application: done by the programmer, using message passing for moving data
– Parametric computing: several runs with different settings, e.g. of a simulation model
Clusters Compared to SMP
• SMP is easier to manage and configure
• SMP takes up less space and draws less power
• SMP products are well established and stable
Clusters Compared to SMP
• Clusters are better for incremental and absolute scalability
• Clusters are superior in terms of availability