Scalable Reader Writer Synchronization
Outline
• Abstract
• Introduction
• Simple Reader Writer Spin Lock
  • Reader Preference Lock
  • Fair Lock
• Locks with Local-Only Spinning
  • Fair Lock
  • Reader Preference Lock
  • Writer Preference Lock
• Empirical Results & Conclusions
• Summary
Abstract – readers & writers
• All processes request mutually exclusive access to the same memory section.
• Multiple readers can access the memory section at the same time.
• Only one writer can access the memory section at a time.
[Diagram: writers and readers contending for the same section of memory]
Abstract (continued)
• Mutex locks are implemented using busy waiting.
• Busy-wait locks cause memory and network contention, which degrades performance.
• The problem: the busy wait is global (everyone spins on the same variable / memory location), creating a global bottleneck instead of a local one.
• This global bottleneck prevents an efficient, larger-scale (scalable) implementation of mutex synchronization.
The purpose of the paper
Presenting reader/writer locks that exploit a local-spin busy-wait implementation, in order to reduce memory and network contention.
Global – everyone busy waits (spins) on the same location.
Local – each process busy waits (spins) on a different memory location.
Definitions
• Fair lock
  • readers wait for earlier writers
  • writers wait for any earlier process (reader or writer)
  • no starvation
• Reader preference lock
  • writers wait as long as there are reader requests
  • possible (writer) starvation
  • minimizes the delay for readers
  • maximizes throughput
• Writer preference lock
  • readers wait as long as there are writers waiting
  • possible (reader) starvation
  • prevents the system from using outdated information
The MCS lock
The MCS (Mellor-Crummey and Scott) lock is a queue-based local-spin lock.
The MCS lock – acquire lock
[Diagram: a new_node is appended to the queue by swapping it into lock.tail; if there was a predecessor, the process links behind it and spins on its own node]
The MCS lock – release lock
[Diagram: my_node unblocks its successor; if my_node has no successor, lock.tail is reset to nil]
The MCS lock – release lock
The spin is local, since each process spins (busy waits) on its own node.
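The acquire/release steps above can be sketched in code. Below is a minimal Python sketch, not the paper's pseudocode: since Python lacks raw atomic fetch_and_store / compare_and_swap instructions, they are emulated here with a small internal lock. The class and method names are mine; the point is the local spin on each node's own blocked flag.

```python
import threading

class MCSNode:
    """Per-process queue node; each process spins only on its own node."""
    __slots__ = ("next", "blocked")
    def __init__(self):
        self.next = None
        self.blocked = False

class MCSLock:
    """Queue-based local-spin mutex in the style of the MCS lock."""
    def __init__(self):
        self._tail = None
        self._guard = threading.Lock()  # stands in for hardware atomics

    def _fetch_and_store_tail(self, node):
        with self._guard:
            prev, self._tail = self._tail, node
            return prev

    def _compare_and_swap_tail(self, expected, new):
        with self._guard:
            if self._tail is expected:
                self._tail = new
                return True
            return False

    def acquire(self, node):
        node.next = None
        pred = self._fetch_and_store_tail(node)   # join the queue
        if pred is not None:
            node.blocked = True
            pred.next = node                      # link behind predecessor
            while node.blocked:                   # local spin on my own node
                pass

    def release(self, node):
        if node.next is None:
            # no known successor: try to empty the queue
            if self._compare_and_swap_tail(node, None):
                return
            while node.next is None:              # a successor is linking in
                pass
        node.next.blocked = False                 # pass the lock on
```

Note that `acquire` sets `blocked` before linking into the predecessor, so the releaser can never see a half-initialized node.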
Simple Reader-Writer Locks
This section presents centralized (not local) algorithms for busy-wait reader-writer locks.
WRITER:
    start_write(lock)
    writing_critical_section
    end_write(lock)
READER:
    start_read(lock)
    reading_critical_section
    end_read(lock)
Reader Preference Lock
• A reader preference lock is used in several cases:
  • when there are many write requests, and preference for readers is required to prevent reader starvation.
  • when the throughput of the system is more important than how up to date the information is.
Reader Preference Lock
• The lock is a single 32-bit word.
• The lowest bit (bit 0) indicates whether a writer is writing.
• The upper bits (1–31) count the interested and currently-reading processes.
• When a reader arrives it increments the counter, then waits until the writer bit is deactivated.
• Writers wait until the whole word is 0.
Reader Preference Lock
• start writing: the writer waits until the whole word is 0, then sets the writer flag.
• end writing: the writer clears the writer flag.
• A writer can write only when no reader is interested or reading, and no writer is writing.
Notice that everything is done on the same 32-bit location in memory.
Reader Preference Lock
• start reading: the reader increments the readers counter, then waits until the writer flag is clear.
• end reading: the reader decrements the counter.
• Readers always get to the front of the line, ahead of any writer other than the one already writing.
Again, notice that everything is done on the same 32-bit location in memory.
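The single-word protocol above can be written out as a short sketch. This is my illustration of the slides' scheme, not the paper's code; fetch_and_add / compare_and_swap on the word are emulated with a small lock, since Python has no raw atomics.

```python
import threading

WRITER_FLAG = 0x1   # bit 0: a writer is writing
READER_INC  = 0x2   # bits 1..31: interested / currently-reading count

class ReaderPreferenceLock:
    """Centralized reader-preference lock: all state in one integer word."""
    def __init__(self):
        self.word = 0
        self._guard = threading.Lock()  # stands in for hardware atomics

    def _fetch_and_add(self, delta):
        with self._guard:
            old, self.word = self.word, self.word + delta
            return old

    def _compare_and_swap(self, expected, new):
        with self._guard:
            if self.word == expected:
                self.word = new
                return True
            return False

    def start_read(self):
        self._fetch_and_add(READER_INC)       # announce interest first
        while self.word & WRITER_FLAG:        # then wait out an active writer
            pass

    def end_read(self):
        self._fetch_and_add(-READER_INC)

    def start_write(self):
        # proceed only when no reader is interested/reading and no writer
        # is writing, i.e. the whole word is 0
        while not self._compare_and_swap(0, WRITER_FLAG):
            pass

    def end_write(self):
        self._fetch_and_add(-WRITER_FLAG)
```

Because a reader bumps the counter before checking the writer flag, any waiting writer's compare-and-swap on 0 keeps failing while readers are present — exactly the reader preference described above.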
Fair Lock
• A fair lock is used when the system must maintain a balance between keeping the information up to date and still being reactive (responding to data requests within a reasonable amount of time).
Fair Lock
• There are two pairs of counters:
  • total readers / total writers: those who finished plus current requests
  • completed readers / completed writers: those who finished reading/writing
• Each arriving process takes a ticket = the previous counter values, and waits in line on the completed counters:
  • a writer's ticket = total readers + total writers (it must wait for everyone before it)
  • a reader's ticket = total writers only (it can read together with the rest of the readers)
Fair Lock
[Worked example (diagrams): arriving readers and writers take tickets from the total-readers / total-writers counters, then spin until the completed-readers / completed-writers counters reach their ticket values.]
Again, notice that everything is done on the same centralized location in memory – the counters.
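The ticket scheme in the worked example can be sketched as follows. This is my illustration of the counters described above, not the paper's code; atomic ticket-taking is emulated with a small lock, since Python has no raw atomics.

```python
import threading

class FairTicketRWLock:
    """Centralized fair reader-writer lock: arriving processes read the
    'total' counters as their ticket and spin on the 'completed' counters."""
    def __init__(self):
        self.total_readers = 0
        self.total_writers = 0
        self.completed_readers = 0
        self.completed_writers = 0
        self._guard = threading.Lock()  # stands in for hardware atomics

    def start_read(self):
        with self._guard:                        # ticket = writers before me
            ticket = self.total_writers
            self.total_readers += 1
        while self.completed_writers < ticket:   # wait only for earlier writers
            pass

    def end_read(self):
        with self._guard:
            self.completed_readers += 1

    def start_write(self):
        with self._guard:                        # ticket = everyone before me
            r_ticket = self.total_readers
            w_ticket = self.total_writers
            self.total_writers += 1
        while (self.completed_readers < r_ticket      # earlier readers done?
               or self.completed_writers < w_ticket): # earlier writers done?
            pass

    def end_write(self):
        with self._guard:
            self.completed_writers += 1
```

Writers are serialized by their writer ticket; readers overlap with each other because they never wait on the reader counters — only on writers that arrived earlier.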
Spin On A Global Location
• The last 2 algorithms use busy waiting by spinning on the same memory location.
• When many processes spin on the same location, it creates a hot spot in the system.
• Interference from processes that are still waiting/spinning increases the time required to release the lock by those who have finished waiting.
• Interference from still-spinning processes also degrades the performance of processes trying to access the same memory area (not just the same exact location).
Locks with Local Only Spinning
• This is the main section of the paper. It contains implementations of reader/writer locks that busy wait on local locations (not all on the same location).
• Why not just use the previously mentioned MCS algorithm?
  • too much serialization for the readers – they could read at the same time
  • too long a code path for this purpose – it can be done more efficiently
Fair Lock (local spinning only)
• Writing can be done when all previous read and write requests have been met.
• Reading can be done when all previous write requests have been met.
• Like the MCS algorithm, it uses a queue.
• A reader can begin reading when its predecessor is an active reader or when the previous writer has finished.
• A writer can write when its predecessor is done and there are no active readers.
Fair Lock (local spinning only)
node:
    type : reader/writer
    next : pointer
    blocked : boolean
    successor_type : reader/writer
lock:
    tail : pointer to a node (nil)
    reader_count : counter (0)
    next_writer : pointer to a node (nil)
Fair Lock (local spinning only)
[Diagram walkthrough: a writer appends its new_node to the queue by swapping it into lock.tail; if it has no predecessor and reader_count is 0 it proceeds, otherwise it spins on the blocked flag of its own node. On release it unblocks its successor, incrementing reader_count first if the successor is a reader. A reader arriving behind an active reader increments reader_count and reads alongside it; a reader behind a writer spins on its own node. The last reader to finish (reader_count drops back to 0) unblocks next_writer. Throughout, the busy wait is always on the process's own node.]
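The writer path of the walkthrough above corresponds roughly to the following pseudocode. This is a reconstruction from the diagrams using the node/lock fields defined earlier; the details should be checked against the paper's own pseudocode.

```
start_write(lock, I):
    I.type := writer;  I.next := nil;  I.blocked := true;  I.successor_type := none
    pred := fetch_and_store(lock.tail, I)          // join the queue
    if pred = nil:
        lock.next_writer := I
        if lock.reader_count = 0 and fetch_and_store(lock.next_writer, nil) = I:
            I.blocked := false                     // no readers, no predecessor: go
    else:
        pred.successor_type := writer
        pred.next := I
    spin while I.blocked                           // local spin on my own node

end_write(lock, I):
    if I.next != nil or not compare_and_swap(lock.tail, I, nil):
        spin while I.next = nil                    // successor still linking in
        if I.next.type = reader:
            increment(lock.reader_count)           // successor starts reading
        I.next.blocked := false                    // hand over
```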
Reader Preference Lock (local spinning only)
• In this algorithm the writers form a queue, while the readers form a list, since any present readers get preference.
• Bit 0 of the lock word is a flag indicating interested writers.
• Bit 1 is a flag indicating an active writer.
• The rest of the word is a reader counter.
• A reader can't begin if there is an active writer.
• A writer can't begin if any other writer is active or the reader counter is non-zero (there are waiting/active readers).
• To avoid a race between modifying the flag and adding the reader to the list, a reader must double-check the flag.
Reader Preference Lock (local spinning only)
node:
    next : pointer
    blocked : boolean
lock word (32 bits): bit 0 = interested writer, bit 1 = active writer, bits 2–31 = readers counter.
The lock also has 3 pointers (reader_head, writer_head, writer_tail – all nil) alongside the counter word.
Reader Preference Lock (local spinning only)
[Diagram walkthrough: an arriving writer sets the interested-writer flag and appends itself to the writer queue via writer_tail; the writer at writer_head becomes active (setting the active-writer flag) once the readers counter is 0. An arriving reader increments the counter; if a writer is active, the reader pushes itself onto the readers list at reader_head, double-checks the flag, and spins on its own node. When the writer finishes it wakes the whole readers list; the last reader to finish wakes the next writer. Each process busy waits on its own node.]
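The reader path, including the double-check mentioned above, looks roughly like this. This is a simplified reconstruction from the description, not the paper's exact pseudocode; the flag and list manipulation details are abridged.

```
start_read(lock, I):
    old := fetch_and_add(lock.word, one reader)        // announce interest
    if old has the active-writer bit set:
        I.blocked := true
        push I onto lock.reader_head                   // join the readers list
        if lock.word no longer has the active-writer bit:
            // double-check: the writer may have finished between the
            // fetch_and_add and the push -- wake the whole list ourselves
            unblock every node popped from lock.reader_head
        spin while I.blocked                           // local spin on my own node

end_read(lock, I):
    fetch_and_add(lock.word, minus one reader)
    if I was the last reader and the interested-writer bit is set:
        hand the lock to the writer at lock.writer_head
```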
Writer Preference Lock (local spinning only)
The code is very long and will not be shown here.
Performance Results & Conclusions
• This table shows the single-processor latency of each lock operation in the absence of competition.
• Notice that the latency is higher for the local-spin algorithms; this is due to their extra bookkeeping operations, which have a noticeable effect when there is no competition.
Performance Results & Conclusions
Each point in the graph is the average time for a process to acquire and release the lock.
Performance Results & Conclusions
• The two upper lines are the centralized algorithms: the more competing processes there are, the worse their time gets.
• But notice the local-spin algorithms: they have a slow start, but the more processes they handle, the better their time gets, until it levels off.
Performance Results & Conclusions
• The same data, with focus on the local-spin algorithms.
Performance Results & Conclusions
• The local-spin algorithms provide better results: they are faster, and there is no contention due to the busy wait.
• These results indicate that contention due to busy-wait synchronization is much less of a problem than has generally been thought.
Summary
• The MCS lock – a simple queue-based mutex lock with local spin
• A reader preference lock with centralized busy wait
• A fair lock with centralized busy wait
• A reader preference lock with local-spin busy wait
• A fair lock with local-spin busy wait
• The local-spin locks get better results.