Synchronization


Page 1: Synchronization


Synchronization

“A parallel computer is a collection of processing elements that cooperate and communicate to solve large problems fast.”

Types of Synchronization
• Mutual Exclusion
• Event synchronization
– point-to-point
– group
– global (barriers)

Page 2: Synchronization


Components of a Synchronization Event

Acquire method
• Acquire right to the synch (enter critical section, go past event)

Waiting algorithm
• Wait for synch to become available when it isn’t

Release method
• Enable other processors to acquire right to the synch

Waiting algorithm is independent of type of synchronization

Page 3: Synchronization


Waiting Algorithms

Blocking
• Waiting processes are descheduled
• High overhead
• Allows processor to do other things

Busy-waiting
• Waiting processes repeatedly test a location until it changes value
• Releasing process sets the location
• Lower overhead, but consumes processor resources
• Can cause network traffic

Busy-waiting better when
• Scheduling overhead is larger than expected wait time
• Processor resources are not needed for other tasks
• Scheduler-based blocking is inappropriate (e.g. in OS kernel)

Hybrid methods: busy-wait a while, then block
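
A minimal C11 sketch of the hybrid idea, assuming POSIX sched_yield; the spin budget of 1000 and the name hybrid_wait are illustrative, and a production version would truly block (e.g. on a semaphore) instead of just yielding:

  #include <stdatomic.h>
  #include <sched.h>                              /* sched_yield */

  /* Spin-then-block hybrid: busy-wait for a bounded number of probes,
     then fall back to yielding the processor to the scheduler. */
  void hybrid_wait(atomic_int *flag)
  {
      for (int i = 0; i < 1000; i++)              /* tunable spin budget */
          if (atomic_load_explicit(flag, memory_order_acquire))
              return;                             /* event arrived while spinning */
      while (!atomic_load_explicit(flag, memory_order_acquire))
          sched_yield();                          /* give up the processor */
  }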

Page 4: Synchronization


Role of System and User

User wants to use high-level synchronization operations
• Locks, barriers...
• Doesn’t care about implementation

System designer: how much hardware support in implementation?
• Speed versus cost and flexibility
• Waiting algorithm difficult in hardware, so provide support for the other components

Popular trend:
• System provides simple hardware primitives (atomic operations)
• Software libraries implement lock, barrier algorithms using these
• But some propose and implement full-hardware synchronization

Page 5: Synchronization


Challenges

Same synchronization may have different needs at different times
• Lock accessed with low or high contention
• Different performance requirements: low latency or high throughput
• Different algorithms best for each case, and need different primitives

Multiprogramming can change synchronization behavior and needs
• Process scheduling and other resource interactions
• May need more sophisticated algorithms, not so good in dedicated case

Rich area of software-hardware interactions
• Which primitives are available affects what algorithms can be used
• Which algorithms are effective affects what primitives to provide

Need to evaluate using workloads

Page 6: Synchronization


Mutual Exclusion: Hardware Locks

Separate lock lines on the bus: holder of a lock asserts the line
• Priority mechanism for multiple requestors

Lock registers (Cray XMP)
• Set of registers shared among processors

Inflexible, so not popular for general purpose use
– few locks can be in use at a time (one per lock line)
– hardwired waiting algorithm

Primarily used to provide atomicity for higher-level software locks

Page 7: Synchronization


First Attempt at Simple Software Lock

lock: ld register, location /* copy location to register */

cmp location, #0 /* compare with 0 */

bnz lock /* if not 0, try again */

st location, #1 /* store 1 to mark it locked */

ret /* return control to caller */

and

unlock: st location, #0 /* write 0 to location */

ret /* return control to caller */

Problem: lock needs atomicity in its own implementation
• Read (test) and write (set) of lock variable by a process not atomic

Solution: atomic read-modify-write or exchange instructions
• atomically test value of location and set it to another value, return success or failure somehow

Page 8: Synchronization


Atomic Exchange Instruction

Specifies a location and register. In atomic operation:
• Value in location read into a register
• Another value (function of value read or not) stored into location

Many variants
• Varying degrees of flexibility in second part

Simple example: test&set
• Value in location read into a specified register
• Constant 1 stored into location
• Successful if value loaded into register is 0
• Other constants could be used instead of 1 and 0

Can be used to build locks
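
A minimal sketch in C11, where atomic_exchange stands in for the hardware instruction; the names test_and_set, ts_lock and ts_unlock are illustrative:

  #include <stdatomic.h>
  #include <stdbool.h>

  /* test&set: store 1 into the location, return the value that was there.
     The attempt succeeded if the old value was 0. */
  static inline bool test_and_set(atomic_int *loc)
  {
      return atomic_exchange_explicit(loc, 1, memory_order_acquire) == 0;
  }

  void ts_lock(atomic_int *loc)
  {
      while (!test_and_set(loc))
          ;                                       /* retry until old value was 0 */
  }

  void ts_unlock(atomic_int *loc)
  {
      atomic_store_explicit(loc, 0, memory_order_release);
  }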

Page 9: Synchronization


Simple Test&Set Lock

lock: t&s register, location /* atomically load location into register, store 1 */

bnz register, lock /* if old value not 0, try again */

ret /* return control to caller */

unlock: st location, #0 /* write 0 to location */

ret /* return control to caller */

Other read-modify-write primitives can be used too
• Swap
• Fetch&op
• Compare&swap
– Three operands: location, register to compare with, register to swap with
– Not commonly supported by RISC instruction sets

Can be cacheable or uncacheable (we assume cacheable)
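
For concreteness, a sketch of a lock built on compare&swap using C11’s atomic_compare_exchange; cas and cas_lock are illustrative names:

  #include <stdatomic.h>
  #include <stdbool.h>

  /* compare&swap: if *loc equals expect, store new_val; report success.
     The three operands mirror the instruction’s location, compare
     register, and swap register. */
  static inline bool cas(atomic_int *loc, int expect, int new_val)
  {
      return atomic_compare_exchange_strong_explicit(
                 loc, &expect, new_val,
                 memory_order_acquire, memory_order_relaxed);
  }

  void cas_lock(atomic_int *loc)
  {
      while (!cas(loc, 0, 1))
          ;                                /* retry until 0 -> 1 succeeds */
  }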

Page 10: Synchronization


Enhancements to Simple Lock Algorithm

Reduce frequency of issuing test&sets while waiting
• Test&set lock with backoff
• Don’t back off too much or will be backed off when lock becomes free
• Exponential backoff works quite well empirically: delay on ith attempt = k·c^i

Busy-wait with read operations rather than test&set
• Test-and-test&set lock
• Keep testing with ordinary load
– cached lock variable will be invalidated when release occurs
• When value changes (to 0), try to obtain lock with test&set
– only one attemptor will succeed; others will fail and start testing again
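
A sketch combining both enhancements in C11; the backoff constants (base 1, factor 2, cap 1024) are illustrative, not tuned:

  #include <stdatomic.h>

  /* Test-and-test&set with exponential backoff: spin with ordinary loads
     (hits in the cache), issue the atomic exchange only when the lock
     looks free, and back off multiplicatively after each failed attempt. */
  void tts_lock(atomic_int *loc)
  {
      unsigned delay = 1;                          /* k = 1, c = 2 here */
      for (;;) {
          while (atomic_load_explicit(loc, memory_order_relaxed) != 0)
              ;                                    /* read-only spin on cached copy */
          if (atomic_exchange_explicit(loc, 1, memory_order_acquire) == 0)
              return;                              /* won the race to set it */
          for (volatile unsigned i = 0; i < delay; i++)
              ;                                    /* crude delay loop */
          if (delay < 1024)
              delay *= 2;                          /* exponential backoff, capped */
      }
  }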

Page 11: Synchronization


Performance Criteria (T&S Lock)

Uncontended Latency
• Very low if repeatedly accessed by same processor; independent of p

Traffic
• Lots if many processors compete; poor scaling with p
• Each t&s generates invalidations, and all rush out again to t&s

Storage
• Very small (single variable); independent of p

Fairness
• Poor, can cause starvation

Test&set with backoff similar, but less traffic
Test-and-test&set: slightly higher latency, much less traffic
But still all rush out to read miss and test&set on release
• Traffic for p processors to access once each: O(p²)

Luckily, better hardware primitives as well as algorithms exist

Page 12: Synchronization


Improved Hardware Primitives: LL-SC

Goals:
• Test with reads
• Failed read-modify-write attempts don’t generate invalidations
• Nice if single primitive can implement range of r-m-w operations

Load-Locked (or -linked), Store-Conditional

LL reads variable into register

Follow with arbitrary instructions to manipulate its value

SC tries to store back to location if and only if no one else has written to the variable since this processor’s LL

• If SC succeeds, means all three steps happened atomically
• If fails, doesn’t write or generate invalidations (need to retry LL)
• Success indicated by condition codes; implementation later

Page 13: Synchronization

Simple Lock with LL-SC

lock: ll reg1, location /* LL location to reg1 */
 bnz reg1, lock /* if location was locked (nonzero), try again */
 sc location, reg2 /* SC reg2 (holding 1) into location */
 beqz reg2, lock /* if SC failed, start again */
 ret /* return control to caller */

unlock: st location, #0 /* write 0 to location */
 ret /* return control to caller */

Can do more fancy atomic ops by changing what’s between LL & SC
• But keep it small so SC likely to succeed
• Don’t include instructions that would need to be undone (e.g. stores)

SC can fail (without putting transaction on bus) if:
• Detects intervening write even before trying to get bus
• Tries to get bus but another processor’s SC gets bus first

LL, SC are not lock, unlock respectively
• Only guarantee no conflicting write to lock variable between them
• But can use directly to implement simple operations on shared variables
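
C11 exposes no LL/SC directly, but atomic_compare_exchange_weak may fail spuriously precisely so it can compile to an LL/SC pair on machines that have one; a sketch of fetch&add built in this style (function name illustrative):

  #include <stdatomic.h>

  /* fetch&add synthesized from an LL/SC-style retry loop: load the value
     ("LL"), compute a little, conditionally store it back ("SC"), retry
     on failure. Keeping the middle small makes success likely. */
  int fetch_and_add(atomic_int *loc, int n)
  {
      int old = atomic_load_explicit(loc, memory_order_relaxed);
      while (!atomic_compare_exchange_weak_explicit(
                 loc, &old, old + n,
                 memory_order_acq_rel, memory_order_relaxed))
          ;                                /* old is refreshed on failure */
      return old;
  }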

Page 14: Synchronization

More Efficient SW Locking Algorithms

Problem with Simple LL-SC lock

• No invals on failure, but read misses by all waiters after both release and successful SC by winner
• No test-and-test&set analog, but can use backoff to reduce burstiness
• Doesn’t reduce traffic to minimum, and not a fair lock

Better SW algorithms for bus (for r-m-w instructions or LL-SC)
• Only one process to try to get lock upon release
– valuable when using test&set instructions; LL-SC does it already
• Only one process to have read miss upon release
– valuable with LL-SC too
• Ticket lock achieves first
• Array-based queueing lock achieves both
• Both are fair (FIFO) locks as well

Page 15: Synchronization


Ticket Lock

Only one r-m-w (from only one processor) per acquire

Works like waiting line at deli or bank
• Two counters per lock (next_ticket, now_serving)
• Acquire: fetch&inc next_ticket; wait for now_serving to equal it
– atomic op when arrive at lock, not when it’s free (so less contention)
• Release: increment now_serving
• FIFO order, low latency for low-contention if fetch&inc cacheable
• Still O(p) read misses at release, since all spin on same variable
– like simple LL-SC lock, but no inval when SC succeeds, and fair
• Can be difficult to find a good amount to delay on backoff
– exponential backoff not a good idea due to FIFO order
– backoff proportional to (my ticket - now_serving) may work well
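
A C11 sketch of the ticket lock (without backoff); type and function names are illustrative, chosen to match the slide’s counters:

  #include <stdatomic.h>

  typedef struct {
      atomic_uint next_ticket;             /* next ticket to hand out */
      atomic_uint now_serving;             /* ticket currently allowed in */
  } ticket_lock_t;

  void ticket_acquire(ticket_lock_t *l)
  {
      /* one atomic fetch&inc per acquire, issued on arrival */
      unsigned me = atomic_fetch_add_explicit(&l->next_ticket, 1,
                                              memory_order_relaxed);
      while (atomic_load_explicit(&l->now_serving,
                                  memory_order_acquire) != me)
          ;                                /* everyone still spins on one variable */
  }

  void ticket_release(ticket_lock_t *l)
  {
      atomic_fetch_add_explicit(&l->now_serving, 1, memory_order_release);
  }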

Wouldn’t it be nice to poll different locations ...

Page 16: Synchronization


Array-based Queuing Locks

Waiting processes poll on different locations in an array of size p
• Acquire
– fetch&inc to obtain address on which to spin (next array element)
– ensure that these addresses are in different cache lines or memories
• Release
– set next location in array, thus waking up process spinning on it
• O(1) traffic per acquire with coherent caches
• FIFO ordering, as in ticket lock
• But, O(p) space per lock
• Good performance for bus-based machines
• Not so great for non-cache-coherent machines with distributed memory
– array location I spin on not necessarily in my local memory (solution later)
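
A C11 sketch of the array-based lock, with each slot padded to a separate cache line; P and the 64-byte line size are assumptions, and slot[0].must_wait must start at 0 with all other slots at 1:

  #include <stdatomic.h>

  #define P    64                          /* max processors: O(p) space */
  #define LINE 64                          /* assumed cache-line size */

  typedef struct {
      struct {
          atomic_int must_wait;            /* 0 = my turn, 1 = keep spinning */
          char pad[LINE - sizeof(atomic_int)];   /* one slot per line */
      } slot[P];
      atomic_uint next;                    /* fetch&inc hands out slots */
  } array_lock_t;

  unsigned array_acquire(array_lock_t *l)
  {
      unsigned me = atomic_fetch_add_explicit(&l->next, 1,
                                              memory_order_relaxed) % P;
      while (atomic_load_explicit(&l->slot[me].must_wait,
                                  memory_order_acquire))
          ;                                /* spin on my own cache line */
      return me;                           /* pass to array_release */
  }

  void array_release(array_lock_t *l, unsigned me)
  {
      atomic_store_explicit(&l->slot[me].must_wait, 1,
                            memory_order_relaxed);     /* re-arm my slot */
      atomic_store_explicit(&l->slot[(me + 1) % P].must_wait, 0,
                            memory_order_release);     /* wake successor */
  }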

Page 17: Synchronization


Array based lock on scalable machines

• No bus
• Avoid spinning on other locations not in local memory
• Useful (especially) for machines such as Cray T3E
– No coherent cache
• Solution based on distributed queues:
– Each node in the linked list stored on the processor that requested the lock
– Head and Tail pointers
– Each process spins on its own local node
– On release: “wake up” the next process by writing the flag in the next processor’s node
• On cache-coherent machines: (sec 8.8)

L.V.Kale

Page 18: Synchronization


Point to Point Event Synchronization

Software methods:
• Interrupts
• Busy-waiting: use ordinary variables as flags
• Blocking: use semaphores

Full hardware support: full-empty bit with each word in memory
• Set when word is “full” with newly produced data (i.e. when written)
• Unset when word is “empty” due to being consumed (i.e. when read)
• Natural for word-level producer-consumer synchronization
– producer: write if empty, set to full; consumer: read if full, set to empty
• Hardware preserves atomicity of bit manipulation with read or write
• Problem: flexibility
– multiple consumers, or multiple writes before consumer reads?
– needs language support to specify when to use
– composite data structures?
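
A software analogue of the full-empty bit for a single producer and single consumer, sketched in C11 with an ordinary flag; with multiple producers or consumers the flag would need read-modify-write updates, which is exactly the flexibility problem above:

  #include <stdatomic.h>

  static int data;                         /* the word being passed */
  static atomic_int full = 0;              /* 0 = empty, 1 = full */

  void produce(int v)
  {
      while (atomic_load_explicit(&full, memory_order_acquire) != 0)
          ;                                /* write if empty */
      data = v;
      atomic_store_explicit(&full, 1, memory_order_release);  /* set to full */
  }

  int consume(void)
  {
      while (atomic_load_explicit(&full, memory_order_acquire) != 1)
          ;                                /* read if full */
      int v = data;
      atomic_store_explicit(&full, 0, memory_order_release);  /* set to empty */
      return v;
  }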

Page 19: Synchronization


Barriers

Software algorithms implemented using locks, flags, counters

Hardware barriers
• Wired-AND line separate from address/data bus
• Set input high when arrive, wait for output to be high to leave
• In practice, multiple wires to allow reuse
• Useful when barriers are global and very frequent
• Difficult to support arbitrary subset of processors
– even harder with multiple processes per processor
• Difficult to dynamically change number and identity of participants
– e.g. latter due to process migration
• Not common today on bus-based machines

Let’s look at software algorithms with simple hardware primitives

Page 20: Synchronization


A Simple Centralized Barrier

Shared counter maintains number of processes that have arrived
• increment when arrive (lock), check until reaches numprocs

struct bar_type {int counter; struct lock_type lock; int flag = 0;} bar_name;

BARRIER (bar_name, p) {
  LOCK(bar_name.lock);
  if (bar_name.counter == 0)
    bar_name.flag = 0;             /* reset flag if first to reach */
  mycount = ++bar_name.counter;    /* mycount is private; pre-increment so last arrival sees p */
  UNLOCK(bar_name.lock);
  if (mycount == p) {              /* last to arrive */
    bar_name.counter = 0;          /* reset for next barrier */
    bar_name.flag = 1;             /* release waiters */
  }
  else
    while (bar_name.flag == 0) {}; /* busy wait for release */
}

• Problem?

Page 21: Synchronization


A Working Centralized Barrier

Consecutively entering the same barrier doesn’t work
• Must prevent process from entering until all have left previous instance
• Could use another counter, but increases latency and contention

Sense reversal: wait for flag to take different value consecutive times
• Toggle this value only when all processes reach

BARRIER (bar_name, p) {
  local_sense = !(local_sense);    /* toggle private sense variable */
  LOCK(bar_name.lock);
  mycount = bar_name.counter++;    /* mycount is private */
  if (bar_name.counter == p) {     /* last to arrive */
    UNLOCK(bar_name.lock);
    bar_name.counter = 0;          /* reset for next barrier */
    bar_name.flag = local_sense;   /* release waiters */
  }
  else {
    UNLOCK(bar_name.lock);
    while (bar_name.flag != local_sense) {};
  }
}

Page 22: Synchronization


Centralized Barrier Performance

Latency
• Want short critical path in barrier
• Centralized has critical path length at least proportional to p

Traffic
• Barriers likely to be highly contended, so want traffic to scale well
• About 3p bus transactions in centralized

Storage Cost
• Very low: centralized counter and flag

Fairness
• Same processor should not always be last to exit barrier
• No such bias in centralized

Key problems for centralized barrier are latency and traffic
• Especially with distributed memory, traffic goes to same node

Page 23: Synchronization


Improved Barrier Algorithms for a Bus

• Separate arrival and exit trees, and use sense reversal
• Valuable in distributed network: communicate along different paths
• On bus, all traffic goes on same bus, and no less total traffic
• Higher latency (log p steps of work, and O(p) serialized bus xactions)
• Advantage on bus is use of ordinary reads/writes instead of locks

Software combining tree
• Only k processors access the same location, where k is degree of tree

[Figure: flat structure (contention at one location) vs. tree structure (little contention)]

Page 24: Synchronization


Synchronization Summary

Rich interaction of hardware-software tradeoffs

Must evaluate hardware primitives and software algorithms together
• primitives determine which algorithms perform well

Evaluation methodology is challenging
• Use of delays, microbenchmarks
• Should use both microbenchmarks and real workloads

Simple software algorithms with common hardware primitives do well on bus

• Will see more sophisticated techniques for distributed machines
• Hardware support still subject of debate

Theoretical research argues for swap or compare&swap, not fetch&op
• Algorithms that ensure constant-time access, but complex

Page 25: Synchronization

List-based Queuing Locks

List-based locks

• build linked-lists per lock in SW
• acquire
– allocate (local) list element and enqueue on list
– spin on flag field of that list element
• release
– set flag of next element on list
• use compare&swap to manage lists
– swap is sufficient, but lose FIFO property
• properties:
– FIFO
– spin locally (cache-coherent or not)
– O(1) network transactions even without consistent caches
– O(1) space per lock
– but, compare&swap difficult to implement in hardware
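
A C11 sketch of such a list-based (MCS-style) lock; each caller supplies a node kept in its local memory, and all names are illustrative:

  #include <stdatomic.h>
  #include <stddef.h>

  typedef struct mcs_node {
      struct mcs_node *_Atomic next;
      atomic_int locked;
  } mcs_node_t;

  typedef struct { mcs_node_t *_Atomic tail; } mcs_lock_t;

  void mcs_acquire(mcs_lock_t *l, mcs_node_t *me)
  {
      atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
      atomic_store_explicit(&me->locked, 1, memory_order_relaxed);
      /* swap is enough to enqueue... */
      mcs_node_t *pred = atomic_exchange_explicit(&l->tail, me,
                                                  memory_order_acq_rel);
      if (pred) {                          /* lock held: link in behind pred */
          atomic_store_explicit(&pred->next, me, memory_order_release);
          while (atomic_load_explicit(&me->locked, memory_order_acquire))
              ;                            /* spin on my own local node */
      }
  }

  void mcs_release(mcs_lock_t *l, mcs_node_t *me)
  {
      mcs_node_t *succ = atomic_load_explicit(&me->next,
                                              memory_order_acquire);
      if (!succ) {
          /* ...but compare&swap is needed here, to detach the queue
             when no successor has linked in yet */
          mcs_node_t *expect = me;
          if (atomic_compare_exchange_strong_explicit(
                  &l->tail, &expect, NULL,
                  memory_order_acq_rel, memory_order_relaxed))
              return;                      /* queue empty; lock is free */
          while (!(succ = atomic_load_explicit(&me->next,
                                               memory_order_acquire)))
              ;                            /* successor is mid-enqueue; wait */
      }
      atomic_store_explicit(&succ->locked, 0,
                            memory_order_release);   /* wake next holder */
  }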

Page 26: Synchronization


Implementing Atomic Ops with Caching

One possibility: Load Linked / Store Conditional (LL/SC)

• Load Linked loads the lock and sets a bit

• When “atomic” operation is done, Store Conditional succeeds only if bit was not reset in interim

• Doesn’t need different instructions with different numbers of arguments

• Good for bus-based machine: SC result delivered by bus

• More complex for directory-based machine:
– wait for SC to go to directory and get ownership (long latency)
– have LL load in exclusive mode, so SC succeeds immediately if still in exclusive mode

Page 27: Synchronization


Bottom Line for Locks

Lots of options

SW algorithms can do well given simple HW primitives (fetch&op)

• LL/SC works well if there is locality of synch access
• Otherwise, in-memory fetch&ops are good for high contention

Page 28: Synchronization


Dissemination Barrier

Goal is to allow statically allocated flags
• avoid remote spinning even without cache coherence

log p rounds of synchronization

In round k, proc i synchronizes with proc (i + 2^k) mod p
• can statically allocate flags to avoid remote spinning

Like a butterfly network
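
A C11 sketch of one processor’s part; P, LOGP and the two-set parity trick (alternate between two statically allocated flag sets across barrier episodes so a fast neighbor cannot overwrite a flag still in use) are illustrative choices:

  #include <stdatomic.h>

  #define P    8                           /* processors */
  #define LOGP 3                           /* rounds: 2^LOGP >= P */

  static atomic_int flags[2][P][LOGP];     /* statically allocated */

  void dissemination_barrier(int i, int *parity)
  {
      int par = *parity;
      for (int k = 0; k < LOGP; k++) {
          int partner = (i + (1 << k)) % P;
          atomic_store_explicit(&flags[par][partner][k], 1,
                                memory_order_release);   /* notify (i+2^k) mod p */
          while (!atomic_load_explicit(&flags[par][i][k],
                                       memory_order_acquire))
              ;                            /* spin only on my own flag */
          atomic_store_explicit(&flags[par][i][k], 0,
                                memory_order_relaxed);   /* reset for reuse */
      }
      *parity = 1 - par;                   /* next episode uses other set */
  }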

Page 29: Synchronization


Tournament Barrier

Like binary combining tree

But representative processor at a node chosen statically
• no fetch-and-op needed

In round k, proc i sets a flag for proc j = i - 2^k (mod 2^(k+1))
• i then drops out of tournament and j proceeds in next round
• i waits for global flag signalling completion of barrier to be set by root
– could use combining wakeup tree

Without coherent caches and broadcast, suffers from either traffic due to single flag or same problem as combining trees (for wakeup)

Page 30: Synchronization


MCS Barrier

Modifies tournament barrier to allow static allocation in wakeup tree, and to use sense reversal

Every processor is a node in two p-node trees
• has pointers to its parent, building a fanin-4 arrival tree
• has pointers to its children to build a fanout-2 wakeup tree

+ spins on local flag variables

+ requires O(P) space for P processors

+ theoretical minimum no. of network transactions (2P -2)

+ O(log P) network transactions on critical path

Page 31: Synchronization


Network Transactions

Centralized, combining tree: O(p) if broadcast and coherent caches; unbounded otherwise

Dissemination: O(p log p)

Tournament, MCS: O(p)

Page 32: Synchronization


Critical Path Length

If independent parallel network paths available:
• all are O(log P) except centralized, which is O(P)

If not (e.g. shared bus):
• linear terms dominate

Page 33: Synchronization

Reading assignment: CS433

Read sections 7.9 and 8.8

Synchronization issues on scalable multiprocessors and directory-based schemes