Architecting and Exploiting Asymmetry in Multi-Core Architectures


Page 1: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Architecting and Exploiting Asymmetry in Multi-Core Architectures

Onur Mutlu
[email protected]
June 19, 2013

TUBITAK

Page 2: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Course Info: Who Am I?

Instructor: Prof. Onur Mutlu
Carnegie Mellon University ECE/CS
PhD from UT-Austin; worked at Microsoft Research, Intel, AMD
http://www.ece.cmu.edu/~omutlu
[email protected] (best way to reach me)
http://users.ece.cmu.edu/~omutlu/projects.htm

Research, Teaching, Consulting Interests
Computer architecture, software/hardware interaction and co-design
Many-core systems, heterogeneous systems
Memory systems, interconnects
Scalable, predictable and QoS-aware systems
Fault tolerance and security
Algorithms and architectures for genome analysis and important applications
…

1. Developing efficient, high-performance, and scalable systems
2. Solving difficult architectural problems at low cost & complexity

2

Page 3: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Overview of My Group’s Research

Heterogeneous systems, accelerating bottlenecks
Memory (and storage) systems: scalability, energy, latency, parallelism, performance; compute in/near memory
Predictable performance, QoS
Efficient interconnects
Bioinformatics algorithms and architectures
Acceleration of important applications, software/hardware co-design

3

Page 4: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Our Goals

Solve difficult platform and system (software and hardware) design problems to:
Enable more scalable system designs: solving bigger problems, or solving problems more efficiently
Enable new applications
Enable new usage models of computers

4

Page 5: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Three Key Problems in Future Systems

Memory system: many important existing and future applications are increasingly data intensive and require bandwidth and capacity; data storage and movement limit performance & efficiency

Efficiency (performance and energy) and scalability: enables scalable systems and new applications; enables better user experience and new usage models

Predictability and robustness: resource sharing and unreliable hardware cause QoS issues; predictable performance and QoS are first-class constraints

5

Page 6: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Readings and Videos

Page 7: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Mini Course: Multi-Core Architectures

Lecture 1.1: Multi-Core System Design
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-6-2013-lecture1-1-multicore-and-asymmetry-afterlecture.pptx

Lecture 1.2: Cache Design and Management
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-7-2013-lecture1-2-cache-management-afterlecture.pptx

Lecture 1.3: Interconnect Design and Management
http://users.ece.cmu.edu/~omutlu/pub/onur-Bogazici-June-10-2013-lecture1-3-interconnects-afterlecture.pptx

7

Page 9: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Readings for Today

Required (Symmetric and Asymmetric Multi-Core Systems):
Suleman et al., “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” ASPLOS 2009, IEEE Micro 2010.
Suleman et al., “Data Marshaling for Multi-Core Architectures,” ISCA 2010, IEEE Micro 2011.
Joao et al., “Bottleneck Identification and Scheduling for Multithreaded Applications,” ASPLOS 2012.
Joao et al., “Utility-Based Acceleration of Multithreaded Applications on Asymmetric CMPs,” ISCA 2013.

Recommended:
Amdahl, “Validity of the single processor approach to achieving large scale computing capabilities,” AFIPS 1967.
Olukotun et al., “The Case for a Single-Chip Multiprocessor,” ASPLOS 1996.
Mutlu et al., “Runahead Execution: An Alternative to Very Large Instruction Windows for Out-of-order Processors,” HPCA 2003, IEEE Micro 2003.
Mutlu et al., “Techniques for Efficient Processing in Runahead Execution Engines,” ISCA 2005, IEEE Micro 2006.

9

Page 11: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Online Lectures and More Information

Online Computer Architecture Lectures
http://www.youtube.com/playlist?list=PL5PHm2jkkXmidJOd59REog9jDnPDTG6IJ

Online Computer Architecture Courses
Intro: http://www.ece.cmu.edu/~ece447/s13/doku.php
Advanced: http://www.ece.cmu.edu/~ece740/f11/doku.php
Advanced: http://www.ece.cmu.edu/~ece742/doku.php

Recent Research Papers
http://users.ece.cmu.edu/~omutlu/projects.htm
http://scholar.google.com/citations?user=7XyGUGkAAAAJ&hl=en

11

Page 12: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Architecting and Exploiting Asymmetry in Multi-Core Architectures

Page 13: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Warning

This is an asymmetric talk. But we do not need to cover all of it…

Component 1: A case for asymmetry everywhere
Component 2: A deep dive into mechanisms to exploit asymmetry in processing cores
Component 3: Asymmetry in memory controllers

Asymmetry = heterogeneity: a way to enable specialization/customization

13

Page 14: Architecting and Exploiting Asymmetry in Multi-Core Architectures

The Setting

Hardware resources are shared among many threads/apps in a many-core system: cores, caches, interconnects, memory, disks, power, lifetime, …

Management of these resources is a very difficult task: when optimizing parallel/multiprogrammed workloads, threads interact unpredictably and unfairly in shared resources

Power/energy consumption is arguably the most valuable shared resource: it is the main limiter to efficiency and performance

14

Page 15: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Shield the Programmer from Shared Resources

Writing even sequential software is hard enough
Optimizing code for a complex shared-resource parallel system will be a nightmare for most programmers

The programmer should not worry about (hardware) resource management: what should be executed where, with what resources

Future computer architectures should be designed to:
Minimize programmer effort to optimize (parallel) programs
Maximize the runtime system’s effectiveness in automatic shared resource management

15

Page 16: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Shared Resource Management: Goals

Future many-core systems should manage power and performance automatically across threads/applications:
Minimize energy/power consumption while satisfying performance/SLA requirements
Provide predictability and Quality of Service
Minimize programmer effort in creating optimized parallel programs

Asymmetry and configurability in system resources are essential to achieve these goals

16

Page 17: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Asymmetry Enables Customization

Symmetric: one size fits all; energy and performance are suboptimal for different phase behaviors
Asymmetric: enables tradeoffs and customization; processing requirements vary across applications and phases; execute code on best-fit resources (minimal energy, adequate performance)

17

[Figure: a symmetric chip of identical cores (C) next to an asymmetric chip mixing one-of-a-kind cores C1, C2, C3 with clusters of small cores C4 and C5]

Page 18: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Thought Experiment: Asymmetry Everywhere

Design each hardware resource with asymmetric, (re-)configurable, partitionable components
Different power/performance/reliability characteristics
To fit different computation/access/communication patterns

18

Page 19: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Thought Experiment: Asymmetry Everywhere

Design the runtime system (HW & SW) to automatically choose the best-fit components for each phase
Satisfy performance/SLA with minimal energy
Dynamically stitch together the “best-fit” chip for each phase

19

[Figure: three differently stitched chips, one each for Phase 1, Phase 2, and Phase 3]

Page 20: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Thought Experiment: Asymmetry Everywhere

Morph software components to match asymmetric HW components
Multiple versions for different resource characteristics

20

[Figure: Version 1, Version 2, and Version 3 of a code segment]

Page 21: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Many Research and Design Questions

How to design asymmetric components? Fixed, partitionable, reconfigurable components? What types of asymmetry? Access patterns, technologies?
What monitoring to perform cooperatively in HW/SW? Automatically discover phase/task requirements
How to design the feedback/control loop between components and runtime system software?
How to design the runtime to automatically manage resources? Track task behavior, pick “best-fit” components for the entire workload

21

Page 22: Architecting and Exploiting Asymmetry in Multi-Core Architectures

22

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

Page 23: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Exploiting Asymmetry: Simple Examples

23

Execute critical/serial sections on high-power, high-performance cores/resources [Suleman+ ASPLOS’09, ISCA’10, Top Picks’10’11, Joao+ ASPLOS’12]
The programmer can write less optimized, but more likely correct programs

[Figure: serial and parallel portions of an execution]

Page 24: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Exploiting Asymmetry: Simple Examples

24

Execute streaming “memory phases” on streaming-optimized cores and memory hierarchies
More efficient and higher performance than a general-purpose hierarchy

[Figure: streaming vs. random-access phases]

Page 25: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Exploiting Asymmetry: Simple Examples

25

Partition memory controller and on-chip network bandwidth asymmetrically among threads [Kim+ HPCA 2010, MICRO 2010, Top Picks 2011] [Nychis+ HotNets 2010] [Das+ MICRO 2009, ISCA 2010, Top Picks 2011]
Higher performance and energy-efficiency than symmetric/free-for-all allocation

[Figure: latency-sensitive vs. bandwidth-sensitive threads]

Page 26: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Exploiting Asymmetry: Simple Examples

26

Have multiple different memory scheduling policies; apply them to different sets of threads based on thread behavior [Kim+ MICRO 2010, Top Picks 2011] [Ausavarungnirun+ ISCA 2012]
Higher performance and fairness than a homogeneous policy

[Figure: memory-intensive vs. compute-intensive threads]

Page 27: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Exploiting Asymmetry: Simple Examples

27

Build main memory with different technologies with different characteristics (energy, latency, wear, bandwidth) [Meza+ IEEE CAL’12]
Map pages/applications to the best-fit memory resource
Higher performance and energy-efficiency than single-level memory

[Figure: a CPU with a DRAM controller and a PCM controller in front of a hybrid main memory. DRAM: fast and durable, but small, leaky, volatile, high-cost. Phase Change Memory (or Technology X): large, non-volatile, low-cost, but slow, wears out, and has high active energy.]

Page 28: Architecting and Exploiting Asymmetry in Multi-Core Architectures

28

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

Page 29: Architecting and Exploiting Asymmetry in Multi-Core Architectures

29

Serialized Code Sections in Parallel Applications

Multithreaded applications: programs are split into threads
Threads execute concurrently on multiple cores
Many parallel programs cannot be parallelized completely
Serialized code sections reduce performance, limit scalability, and waste energy

Page 30: Architecting and Exploiting Asymmetry in Multi-Core Architectures

30

Causes of Serialized Code Sections
Sequential portions (Amdahl’s “serial part”)
Critical sections
Barriers
Limiter stages in pipelined programs

Page 31: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e., wait)

Examples:
Amdahl’s serial portions: only one thread exists, so it is on the critical path
Critical sections: ensure mutual exclusion; likely to be on the critical path if contended
Barriers: ensure all threads reach a point before continuing; the latest thread to arrive is on the critical path
Pipeline stages: different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

31

Page 32: Architecting and Exploiting Asymmetry in Multi-Core Architectures

32

Critical Sections

Threads are not allowed to update shared data concurrently, for correctness (mutual exclusion principle)
Accesses to shared data are encapsulated inside critical sections
Only one thread can execute a critical section at a given time
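For concreteness (an added example, not from the deck), this is what such a critical section looks like in C with pthreads; the mutex enforces mutual exclusion on the shared counter:

```c
#include <pthread.h>
#include <stdio.h>

static long shared_count = 0;                   /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        shared_count++;                /* access to shared data  */
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("%ld\n", shared_count);     /* always 400000: mutual exclusion */
    return 0;
}
```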

Page 33: Architecting and Exploiting Asymmetry in Multi-Core Architectures

33

Example from MySQL

[Figure: MySQL pseudocode. A critical section guards “Open database tables” via the Open Tables Cache; the subsequent “Perform the operations…” part runs in parallel.]

Page 34: Architecting and Exploiting Asymmetry in Multi-Core Architectures

34

Contention for Critical Sections

[Figure: execution timelines for P = 1, 2, 3, and 4 cores over 12 loop iterations, with 33% of instructions inside the critical section. Each timeline marks critical-section, parallel, and idle time: as P grows, threads spend more and more time idle, waiting for the critical section.]
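A back-of-the-envelope reading of this figure (my arithmetic, not stated on the slide): if a fraction s of the total work sits inside one critical section, that part executes one thread at a time, which gives an Amdahl-style bound:

```latex
% Amdahl-style bound when a fraction s of the work serializes:
\[
  \mathrm{Speedup}(P) \;\le\; \frac{1}{\,s + \frac{1-s}{P}\,},
  \qquad
  s = 0.33 \;\Rightarrow\; \lim_{P \to \infty} \mathrm{Speedup} = \frac{1}{s} \approx 3 .
\]
```

With s = 0.33, no thread count can deliver much more than a 3x speedup, which is why the idle time grows with P in the figure.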

Page 35: Architecting and Exploiting Asymmetry in Multi-Core Architectures

35

Contention for Critical Sections

[Figure: the same timelines (P = 1 to 4, 12 iterations, 33% of instructions inside the critical section), now with the critical section accelerated by 2x.]

Accelerating critical sections increases performance and scalability

Page 36: Architecting and Exploiting Asymmetry in Multi-Core Architectures

36

Impact of Critical Sections on Scalability

Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion
Contention for critical sections increases with the number of threads and limits scalability

[Figure: speedup vs. chip area (cores), 0 to 32, for MySQL (oltp-1): speedup climbs at first, then flattens as critical-section contention grows.]

Page 37: Architecting and Exploiting Asymmetry in Multi-Core Architectures

37

Impact of Critical Sections on Scalability

Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion
Contention for critical sections increases with the number of threads and limits scalability

[Figure: the same MySQL (oltp-1) speedup-vs-chip-area plot, comparing today’s symmetric chip (“Today”) against an asymmetric chip (“Asymmetric”) that keeps scaling.]

Page 38: Architecting and Exploiting Asymmetry in Multi-Core Architectures

38

A Case for Asymmetry

Execution time of sequential kernels, critical sections, and limiter stages must be short

It is difficult for the programmer to shorten these serialized sections: insufficient domain-specific knowledge, variation in hardware platforms, limited resources

Goal: a mechanism to shorten serial bottlenecks without requiring programmer effort

Idea: accelerate serialized code sections by shipping them to powerful cores in an asymmetric multi-core (ACMP)

Page 39: Architecting and Exploiting Asymmetry in Multi-Core Architectures

39

ACMP

Provide one large core and many small cores
Execute the parallel part on small cores for high throughput
Accelerate serialized sections using the large core
Baseline: Amdahl’s serial part accelerated [Morad+ CAL 2006, Suleman+, UT-TR 2007]

[Figure: an ACMP chip with one large core and twelve small cores]

Page 40: Architecting and Exploiting Asymmetry in Multi-Core Architectures

40

Conventional ACMP

[Figure: cores P1-P4 on an on-chip interconnect; P1 is the core executing the critical section (EnterCS(), PriorityQ.insert(…), LeaveCS()).]

1. P2 encounters a critical section
2. Sends a request for the lock
3. Acquires the lock
4. Executes the critical section
5. Releases the lock

Page 41: Architecting and Exploiting Asymmetry in Multi-Core Architectures

41

Accelerated Critical Sections (ACS)

Accelerate Amdahl’s serial part and critical sections using the large core
Suleman et al., “Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,” ASPLOS 2009, IEEE Micro Top Picks 2010.

[Figure: the ACMP chip, with a Critical Section Request Buffer (CSRB) attached to the large core]

Page 42: Architecting and Exploiting Asymmetry in Multi-Core Architectures

42

Accelerated Critical Sections (ACS)

[Figure: P1 is the large core with the Critical Section Request Buffer (CSRB); P2-P4 are small cores on the on-chip interconnect. P1 executes the critical section (EnterCS(), PriorityQ.insert(…), LeaveCS()).]

1. P2 encounters a critical section (CSCALL)
2. P2 sends a CSCALL request to the CSRB
3. P1 executes the critical section
4. P1 sends a CSDONE signal

Page 43: Architecting and Exploiting Asymmetry in Multi-Core Architectures

ACS Architecture Overview

ISA extensions:
CSCALL LOCK_ADDR, TARGET_PC
CSRET LOCK_ADDR
The compiler/library inserts CSCALL/CSRET

On a CSCALL, the small core:
Sends a CSCALL request to the large core (arguments: lock address, target PC, stack pointer, core ID)
Stalls and waits for CSDONE

Large core:
Critical Section Request Buffer (CSRB)
Executes the critical section and sends CSDONE to the requesting core

43
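To make the handshake concrete, here is a minimal software model of the CSCALL/CSDONE flow (my sketch, not the actual mechanism: the real design is implemented in the ISA and the large core’s hardware, and the names `csrb_entry_t`, `cscall`, and `csrb_serve` are hypothetical):

```c
/* Minimal software model of the ACS CSCALL/CSDONE handshake
 * (illustrative sketch only; serves a single request at a time). */
#include <stdatomic.h>
#include <stdint.h>

typedef struct {                     /* one CSRB entry */
    uintptr_t lock_addr;             /* LOCK_ADDR argument of CSCALL */
    void (*target_pc)(void *);       /* TARGET_PC: critical-section code */
    void *args;                      /* stands in for the stack pointer */
    atomic_int pending;              /* 1 = CSCALL issued, awaiting CSDONE */
} csrb_entry_t;

/* Small core: issue a CSCALL and stall until CSDONE arrives. */
void cscall(csrb_entry_t *e, uintptr_t lock,
            void (*pc)(void *), void *args) {
    e->lock_addr = lock;
    e->target_pc = pc;
    e->args = args;
    atomic_store_explicit(&e->pending, 1, memory_order_release);
    while (atomic_load_explicit(&e->pending, memory_order_acquire))
        ;                            /* small core stalls here */
}

/* Large core: poll the CSRB, run the critical section, signal CSDONE. */
void csrb_serve(csrb_entry_t *e) {
    if (atomic_load_explicit(&e->pending, memory_order_acquire)) {
        e->target_pc(e->args);       /* CS runs in the large core's cache */
        atomic_store_explicit(&e->pending, 0, memory_order_release);
    }                                /* clearing 'pending' is the CSDONE */
}
```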

Page 44: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Accelerated Critical Sections (ACS)

44

[Figure: execution flow between a small core and the large core.]

Small core:
A = compute()
PUSH A
CSCALL X, Target PC    (sends X, TPC, STACK_PTR, CORE_ID; the request waits in the Critical Section Request Buffer (CSRB))
… stalls until the CSDONE response …
POP result
print result

Large core, starting at TPC:
Acquire X
POP A
result = CS(A)
PUSH result
Release X
CSRET X    (sends the CSDONE response)

Page 45: Architecting and Exploiting Asymmetry in Multi-Core Architectures

False Serialization

ACS can serialize independent critical sections
Selective Acceleration of Critical Sections (SEL): saturating counters to track false serialization

45

[Figure: the Critical Section Request Buffer (CSRB) between the small cores and the large core, holding requests CSCALL (A), CSCALL (A), and CSCALL (B), with per-lock saturating counters for A and B]

Page 46: Architecting and Exploiting Asymmetry in Multi-Core Architectures

46

ACS Performance Tradeoffs

Pluses:
+ Faster critical section execution
+ Shared locks stay in one place: better lock locality
+ Shared data stays in the large core’s (large) caches: better shared data locality, less ping-ponging

Minuses:
- A large core dedicated to critical sections: reduced parallel throughput
- CSCALL and CSDONE control transfer overhead
- Thread-private data needs to be transferred to the large core: worse private data locality

Page 47: Architecting and Exploiting Asymmetry in Multi-Core Architectures

ACS Performance Tradeoffs

Fewer parallel threads vs. accelerated critical sections:
Accelerating critical sections offsets the loss in throughput
As the number of cores (threads) on chip increases, the fractional loss in parallel performance decreases, and increased contention for critical sections makes acceleration more beneficial

Overhead of CSCALL/CSDONE vs. better lock locality:
ACS avoids “ping-ponging” of locks among caches by keeping them at the large core

More cache misses for private data vs. fewer misses for shared data

47

Page 48: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Cache Misses for Private Data

48

[Figure: Puzzle benchmark code around PriorityHeap.insert(NewSubProblems). Private data: NewSubProblems. Shared data: the priority heap.]

Page 49: Architecting and Exploiting Asymmetry in Multi-Core Architectures

ACS Performance Tradeoffs

Fewer parallel threads vs. accelerated critical sections:
Accelerating critical sections offsets the loss in throughput
As the number of cores (threads) on chip increases, the fractional loss in parallel performance decreases, and increased contention for critical sections makes acceleration more beneficial

Overhead of CSCALL/CSDONE vs. better lock locality:
ACS avoids “ping-ponging” of locks among caches by keeping them at the large core

More cache misses for private data vs. fewer misses for shared data:
Cache misses reduce if shared data > private data (we will get back to this)

49

Page 50: Architecting and Exploiting Asymmetry in Multi-Core Architectures

ACS Comparison Points

50

[Figure: three equal-area chips.]
SCMP: all small cores; conventional locking
ACMP: one large core plus small cores; conventional locking; the large core executes Amdahl’s serial part
ACS: one large core plus small cores; the large core executes Amdahl’s serial part and critical sections

Page 51: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Accelerated Critical Sections: Methodology

Workloads: 12 critical-section-intensive applications (data mining kernels, sorting, database, web, networking)

Multi-core x86 simulator: 1 large and 28 small cores; aggressive stream prefetcher employed at each core

Details:
Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
Small core: 2 GHz, in-order, 2-wide, 5-stage
Private 32 KB L1, private 256 KB L2, 8 MB shared L3
On-chip interconnect: bi-directional ring, 5-cycle hop latency

51

Page 52: Architecting and Exploiting Asymmetry in Multi-Core Architectures

52

ACS Performance

[Figure: per-workload speedup over SCMP, split into coarse-grain-lock and fine-grain-lock workloads, with separate bars for accelerating sequential kernels and accelerating critical sections; three bars go off-scale at 269, 180, and 185.]

Equal-area comparison; number of threads = best threads
Chip area = 32 small cores; SCMP = 32 small cores; ACMP = 1 large and 28 small cores

Page 53: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Equal-Area Comparisons

53

[Figure: speedup over a single small core vs. chip area (in small cores, 0 to 32) for (a) ep, (b) is, (c) pagemine, (d) puzzle, (e) qsort, (f) tsp, (g) sqlite, (h) iplookup, (i) oltp-1, (j) oltp-2, (k) specjbb, (l) webcache. Each plot compares SCMP, ACMP, and ACS; number of threads = number of cores.]

Page 54: Architecting and Exploiting Asymmetry in Multi-Core Architectures

ACS Summary

Critical sections reduce performance and limit scalability

Accelerate critical sections by executing them on a powerful core

ACS reduces average execution time by:
34% compared to an equal-area SCMP
23% compared to an equal-area ACMP

ACS improves scalability of 7 of the 12 workloads

Generalizing the idea: accelerate all bottlenecks (“critical paths”) by executing them on a powerful core

54

Page 55: Architecting and Exploiting Asymmetry in Multi-Core Architectures

55

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

Page 56: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Summary

Problem: performance and scalability of multithreaded applications are limited by serializing bottlenecks of different types (critical sections, barriers, slow pipeline stages); the importance (criticality) of a bottleneck can change over time

Our goal: dynamically identify the most important bottlenecks and accelerate them
How to identify the most critical bottlenecks
How to efficiently accelerate them

Solution: Bottleneck Identification and Scheduling (BIS)
Software: annotate bottlenecks (BottleneckCall, BottleneckReturn) and implement waiting for bottlenecks with a special instruction (BottleneckWait)
Hardware: identify the bottlenecks that cause the most thread waiting and accelerate them on the large cores of an asymmetric multi-core system

BIS improves multithreaded application performance and scalability, outperforms previous work, and its benefit increases with more cores

56

Page 57: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e., wait)

Examples:
Amdahl’s serial portions: only one thread exists, so it is on the critical path
Critical sections: ensure mutual exclusion; likely to be on the critical path if contended
Barriers: ensure all threads reach a point before continuing; the latest thread to arrive is on the critical path
Pipeline stages: different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

57

Page 58: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Observation: Limiting Bottlenecks Change Over Time

A = full linked list; B = empty linked list

repeat
    Lock A
    Traverse list A
    Remove X from A
    Unlock A
    Compute on X
    Lock B
    Traverse list B
    Insert X into B
    Unlock B
until A is empty

58

[Figure: with 32 threads, Lock A is the limiter early in the run; Lock B is the limiter later, as A drains and B fills.]
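Rendered as runnable C (a sketch: arrays stand in for the linked lists, and list contents and sizes are made up), the kernel makes the shift visible: traversals of A shorten as A drains while traversals of B lengthen, so contention moves from lock A to lock B over time:

```c
/* Sketch of the two-lock kernel above (hypothetical types/sizes). */
#include <pthread.h>

#define N 100000
static int list_a[N], list_b[N];          /* stand-ins for linked lists */
static int a_len = N, b_len = 0;
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock_a);      /* critical section on A */
        if (a_len == 0) { pthread_mutex_unlock(&lock_a); break; }
        for (volatile int i = 0; i < a_len; i++) ;  /* traverse list A */
        int x = list_a[--a_len];          /* remove X from A */
        pthread_mutex_unlock(&lock_a);

        x = x * 2 + 1;                    /* compute on X (outside locks) */

        pthread_mutex_lock(&lock_b);      /* critical section on B */
        for (volatile int i = 0; i < b_len; i++) ;  /* traverse list B */
        list_b[b_len++] = x;              /* insert X into B */
        pthread_mutex_unlock(&lock_b);
    }
    return NULL;
}

int main(void) {                          /* 32 threads, as in the figure */
    pthread_t t[32];
    for (int i = 0; i < 32; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 32; i++) pthread_join(t[i], NULL);
    return 0;
}
```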

Page 59: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Limiting Bottlenecks Do Change on Real Applications

59

[Figure: MySQL running Sysbench queries with 16 threads; the dominant (limiting) bottleneck shifts among different critical sections over time.]

Page 60: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Previous Work on Bottleneck Acceleration

Asymmetric CMP (ACMP) proposals [Annavaram+, ISCA’05] [Morad+, Comp. Arch. Letters’06] [Suleman+, Tech. Report’07]: accelerate only Amdahl’s bottleneck

Accelerated Critical Sections (ACS) [Suleman+, ASPLOS’09]: accelerates only critical sections; does not take into account the importance of critical sections

Feedback-Directed Pipelining (FDP) [Suleman+, PACT’10 and PhD thesis’11]: accelerates only the stages with the lowest throughput; slow to adapt to phase changes (software-based library)

No previous work can accelerate all three types of bottlenecks or quickly adapt to fine-grain changes in the importance of bottlenecks

Our goal: a general mechanism to identify performance-limiting bottlenecks of any type and accelerate them on an ACMP

60

Page 61: Architecting and Exploiting Asymmetry in Multi-Core Architectures

61

Bottleneck Identification and Scheduling (BIS)

Key insight:
Thread waiting reduces parallelism and is likely to reduce performance
The code causing the most thread waiting is likely on the critical path

Key idea:
Dynamically identify the bottlenecks that cause the most thread waiting
Accelerate them (using powerful cores in an ACMP)

Page 62: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
(produces a binary containing BIS instructions)

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate the bottleneck(s) with the highest TWC

62

Page 63: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Critical Sections: Code Modifications

Original code:
    while cannot acquire lock
        wait loop for watch_addr
    acquire lock
    …
    release lock

Modified code:
    …
    BottleneckCall bid, targetPC
    …
targetPC:
    while cannot acquire lock
        BottleneckWait bid, watch_addr    (used to keep track of waiting cycles)
    acquire lock
    …
    release lock
    BottleneckReturn bid

(BottleneckCall/BottleneckReturn are used to enable acceleration)

63

Page 64: Architecting and Exploiting Asymmetry in Multi-Core Architectures

64

Barriers: Code Modifications

    …
    BottleneckCall bid, targetPC
    enter barrier
    while not all threads in barrier
        BottleneckWait bid, watch_addr
    exit barrier
    …
targetPC:
    code running for the barrier
    …
    BottleneckReturn bid

Page 65: Architecting and Exploiting Asymmetry in Multi-Core Architectures

65

Pipeline Stages: Code Modifications

    BottleneckCall bid, targetPC
    …
targetPC:
    while not done
        while empty queue
            BottleneckWait prev_bid
        dequeue work
        do the work …
        while full queue
            BottleneckWait next_bid
        enqueue next work
    BottleneckReturn bid

Page 66: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
(produces a binary containing BIS instructions)

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate the bottleneck(s) with the highest TWC

66

Page 67: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS: Hardware Overview

Performance-limiting bottleneck identification and acceleration are independent tasks
Acceleration can be accomplished in multiple ways:
Increasing core frequency/voltage
Prioritization in shared resources [Ebrahimi+, MICRO’11]
Migration to faster cores in an Asymmetric CMP

67

[Figure: an ACMP with one large core and many small cores]

Page 68: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
(produces a binary containing BIS instructions)

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate the bottleneck(s) with the highest TWC

68

Page 69: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Determining Thread Waiting Cycles for Each Bottleneck

69

[Figure: Small Core 1 and Small Core 2 both execute BottleneckWait x4500; Large Core 0 hosts the Bottleneck Table (BT). The BT entry for bid=x4500 tracks the number of current waiters and accumulates thread waiting cycles (twc) by the waiter count every cycle: waiters=1 (twc = 0, 1, 2, 3, 4, 5), then waiters=2 (twc = 5, 7, 9), then waiters=1 (twc = 9, 10, 11), finally waiters=0 (twc = 11).]
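The bookkeeping behind this trace can be sketched in software as follows (a hypothetical model: the real Bottleneck Table is a small associative hardware structure, and `bt_tick` stands in for the per-cycle hardware update):

```c
/* Hypothetical Bottleneck Table (BT) bookkeeping: each BottleneckWait
 * raises 'waiters'; every cycle an entry's thread waiting cycles (twc)
 * grow by its current waiter count. */
#include <stdint.h>

#define BT_ENTRIES 32
typedef struct {
    uint64_t bid;        /* bottleneck ID, e.g. 0x4500 */
    uint32_t waiters;    /* threads currently in BottleneckWait */
    uint64_t twc;        /* accumulated thread waiting cycles */
    int      valid;
} bt_entry_t;

static bt_entry_t bt[BT_ENTRIES];

static bt_entry_t *bt_lookup(uint64_t bid) {     /* associative lookup */
    for (int i = 0; i < BT_ENTRIES; i++)
        if (bt[i].valid && bt[i].bid == bid) return &bt[i];
    return 0;  /* real HW would allocate, evicting the minimum-twc entry */
}

void on_bottleneck_wait_begin(uint64_t bid) {
    bt_entry_t *e = bt_lookup(bid);
    if (e) e->waiters++;
}

void on_bottleneck_wait_end(uint64_t bid) {
    bt_entry_t *e = bt_lookup(bid);
    if (e) e->waiters--;
}

void bt_tick(void) {                 /* models one hardware cycle */
    for (int i = 0; i < BT_ENTRIES; i++)
        if (bt[i].valid) bt[i].twc += bt[i].waiters;
}
```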

Page 70: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
(produces a binary containing BIS instructions)

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate the bottleneck(s) with the highest TWC

70

Page 71: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Bottleneck Acceleration

71

[Figure: Small Core 1 issues BottleneckCall x4600 and BottleneckCall x4700. In the Bottleneck Table (BT) on Large Core 0, bid=x4600 has twc=100 and bid=x4700 has twc=10000. Because twc < Threshold for x4600, it executes locally on the small core. Because twc > Threshold for x4700, the BT installs “bid=x4700, large core 0” in the small core’s Acceleration Index Table (AIT); the call (bid=x4700, pc, sp, core1) goes into Large Core 0’s Scheduling Buffer (SB), executes remotely, and finishes with BottleneckReturn x4700.]

Page 72: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Mechanisms

Basic mechanisms for BIS:
Determining thread waiting cycles
Accelerating bottlenecks

Mechanisms to improve the performance and generality of BIS:
Dealing with false serialization
Preemptive acceleration
Support for multiple large cores

72

Page 73: Architecting and Exploiting Asymmetry in Multi-Core Architectures

False Serialization and Starvation

Observation: bottlenecks are picked from the Scheduling Buffer in thread-waiting-cycles order

Problem: an independent bottleneck that is ready to execute has to wait for another bottleneck that has higher thread waiting cycles: false serialization
Starvation: extreme false serialization

Solution: the large core detects when a bottleneck is ready to execute in the Scheduling Buffer but cannot, and sends that bottleneck back to the small core

73

Page 74: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Preemptive Acceleration

Observation: a bottleneck executing on a small core can become the bottleneck with the highest thread waiting cycles

Problem: this bottleneck should really be accelerated (i.e., executed on the large core)

Solution: the Bottleneck Table detects the situation and sends a preemption signal to the small core. The small core saves register state on the stack and ships the bottleneck to the large core

This is the main acceleration mechanism for barriers and pipeline stages

74

Page 75: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Support for Multiple Large Cores

Objective: accelerate independent bottlenecks

Each large core has its own Scheduling Buffer (shared by all of its SMT threads)

The Bottleneck Table assigns each bottleneck to a fixed large core context, to preserve cache locality and avoid busy waiting

Preemptive acceleration is extended to send multiple instances of a bottleneck to different large core contexts

75

Page 76: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Hardware Cost

Main structures:
Bottleneck Table (BT): a global 32-entry associative cache with minimum-Thread-Waiting-Cycles replacement
Scheduling Buffers (SB): one table per large core, with as many entries as small cores
Acceleration Index Tables (AIT): one 32-entry table per small core

All structures are off the critical path

Total storage cost for 56 small cores and 2 large cores: < 19 KB

76

Page 77: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Performance Trade-offs

Faster bottleneck execution vs. fewer parallel threads:
Acceleration offsets the loss of parallel throughput at large core counts

Better shared data locality vs. worse private data locality:
Shared data stays on the large core (good)
Private data migrates to the large core (bad, but the latency can be hidden with Data Marshaling [Suleman+, ISCA’10])

Benefit of acceleration vs. migration latency:
Migration latency is usually hidden by waiting (good)
Unless the bottleneck is not contended (bad, but then it is likely not on the critical path)

77

Page 78: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Methodology

Workloads: 8 critical-section-intensive, 2 barrier-intensive, and 2 pipeline-parallel applications (data mining kernels, scientific, database, web, networking, specjbb)

Cycle-level multi-core x86 simulator:
8 to 64 small-core-equivalent area, 0 to 3 large cores, SMT
1 large core is area-equivalent to 4 small cores

Details:
Large core: 4 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
Small core: 4 GHz, in-order, 2-wide, 5-stage
Private 32 KB L1, private 256 KB L2, shared 8 MB L3
On-chip interconnect: bi-directional ring, 2-cycle hop latency

78

Page 79: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Comparison Points (Area-Equivalent)

SCMP (Symmetric CMP): all small cores; results in the paper

ACMP (Asymmetric CMP): accelerates only Amdahl’s serial portions; our baseline

ACS (Accelerated Critical Sections): accelerates only critical sections and Amdahl’s serial portions; applicable to multithreaded workloads (iplookup, mysql, specjbb, sqlite, tsp, webcache, mg, ft)

FDP (Feedback-Directed Pipelining): accelerates only the slowest pipeline stages; applicable to pipeline-parallel workloads (rank, pagemine)

79

Page 80: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Performance Improvement

80

[Figure: per-workload speedup of ACS, FDP, and BIS, with the optimal number of threads on 28 small cores and 1 large core.]

BIS outperforms ACS/FDP by 15% and ACMP by 32%
BIS improves scalability on 4 of the benchmarks
BIS can accelerate barriers, which ACS cannot, and adapts as the limiting bottlenecks change over time

Page 81: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Why Does BIS Work?

81

Coverage (fraction of the program critical path that is actually identified as bottlenecks): rises from 39% (ACS/FDP) to 59% (BIS)

Accuracy (identified bottlenecks on the critical path over total identified bottlenecks): rises from 72% (ACS/FDP) to 73.5% (BIS)

[Figure: fraction of execution time spent on predicted-important bottlenecks, split by whether they are actually critical.]

Page 82: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Scaling Results

82

[Figure: BIS speedup across chip sizes, with gains of 2.4%, 6.2%, 15%, and 19% as the configuration scales.]

Performance increases with:
1) More small cores: contention due to bottlenecks increases, and the loss of parallel throughput due to the large core shrinks
2) More large cores: BIS can accelerate independent bottlenecks without reducing parallel throughput (enough cores remain)

Page 83: Architecting and Exploiting Asymmetry in Multi-Core Architectures

BIS Summary

Serializing bottlenecks of different types limit the performance of multithreaded applications, and their importance changes over time

BIS is a hardware/software cooperative solution: it dynamically identifies the bottlenecks that cause the most thread waiting and accelerates them on the large cores of an ACMP; applicable to critical sections, barriers, and pipeline stages

BIS improves application performance and scalability:
15% speedup over ACS/FDP
Can accelerate multiple independent critical bottlenecks
Performance benefits increase with more cores

BIS provides comprehensive fine-grained bottleneck acceleration for future ACMPs with little or no programmer effort

83

Page 84: Architecting and Exploiting Asymmetry in Multi-Core Architectures

84

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

Page 85: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Staged Execution Model (I)

Goal: speed up a program by dividing it up into pieces

Idea:
Split program code into segments
Run each segment on the core best-suited to run it
Each core is assigned a work-queue, storing segments to be run

Benefits:
Accelerates segments/critical paths using specialized/heterogeneous cores
Exploits inter-segment parallelism
Improves locality of within-segment data

Examples (a minimal work-queue sketch follows below):
Accelerated critical sections, bottleneck identification and scheduling
Producer-consumer pipeline parallelism
Task parallelism (Cilk, Intel TBB, Apple Grand Central Dispatch)
Special-purpose cores and functional units

85
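A minimal work-queue sketch of staged execution (my example: a two-stage producer-consumer pipeline with a mutex/condition-variable queue; a real SE runtime would map each stage to its best-fit core):

```c
/* Two-stage staged-execution sketch: one thread runs segment S0 and
 * pushes work into the next stage's work-queue, where S1 runs.
 * Queue never fills here (101 items << QSIZE), so no full-check. */
#include <pthread.h>
#include <stdio.h>

#define QSIZE 1024
static int queue[QSIZE];
static int head = 0, tail = 0;          /* single producer, single consumer */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qcond = PTHREAD_COND_INITIALIZER;

static void enqueue(int v) {
    pthread_mutex_lock(&qlock);
    queue[tail++ % QSIZE] = v;
    pthread_cond_signal(&qcond);
    pthread_mutex_unlock(&qlock);
}

static int dequeue(void) {
    pthread_mutex_lock(&qlock);
    while (head == tail) pthread_cond_wait(&qcond, &qlock);
    int v = queue[head++ % QSIZE];
    pthread_mutex_unlock(&qlock);
    return v;
}

static void *stage_s0(void *arg) {      /* segment S0: produce */
    for (int i = 0; i < 100; i++) enqueue(i * i);
    enqueue(-1);                        /* sentinel: no more work */
    return NULL;
}

static void *stage_s1(void *arg) {      /* segment S1: consume */
    for (int v; (v = dequeue()) != -1; )
        printf("S1 got %d\n", v);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, stage_s0, NULL);
    pthread_create(&t1, NULL, stage_s1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}
```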

Page 86: Architecting and Exploiting Asymmetry in Multi-Core Architectures

86

Staged Execution Model (II)

[Figure: a straight-line code fragment: LOAD X; STORE Y; STORE Y; LOAD Y; …; STORE Z; LOAD Z; ….]

Page 87: Architecting and Exploiting Asymmetry in Multi-Core Architectures

87

Staged Execution Model (III)

Split code into segments

[Figure: the same fragment split into Segment S0 (LOAD X; STORE Y; STORE Y), Segment S1 (LOAD Y; …; STORE Z), and Segment S2 (LOAD Z; …).]

Page 88: Architecting and Exploiting Asymmetry in Multi-Core Architectures

88

Staged Execution Model (IV)

[Figure: Core 0, Core 1, and Core 2, each with a work-queue holding instances of S0, S1, and S2, respectively.]

Page 89: Architecting and Exploiting Asymmetry in Multi-Core Architectures

89

Staged Execution Model: Segment Spawning

[Figure: S0 (LOAD X; STORE Y; STORE Y) runs on Core 0 and spawns S1 (LOAD Y; …; STORE Z) on Core 1, which in turn spawns S2 (LOAD Z; …) on Core 2.]

Page 90: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Staged Execution Model: Two Examples

Accelerated Critical Sections [Suleman et al., ASPLOS 2009]
Idea: ship critical sections to a large core in an asymmetric CMP
Segment 0: non-critical section; Segment 1: critical section
Benefit: faster execution of critical sections, reduced serialization, improved lock and shared data locality

Producer-Consumer Pipeline Parallelism
Idea: split a loop iteration into multiple “pipeline stages,” where one stage consumes data produced by the previous stage; each stage runs on a different core
Segment N: Stage N
Benefit: stage-level parallelism and better locality, hence faster execution

90

Page 91: Architecting and Exploiting Asymmetry in Multi-Core Architectures

91

Problem: Locality of Inter-segment Data

[Figure: S0 on Core 0 writes Y; transferring Y to Core 1 causes a cache miss when S1 executes LOAD Y. S1 writes Z; transferring Z to Core 2 causes a cache miss when S2 executes LOAD Z.]

Page 92: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Problem: Locality of Inter-segment Data

Accelerated Critical Sections [Suleman et al., ASPLOS 2009]
Idea: ship critical sections to a large core in an ACMP
Problem: the critical section incurs a cache miss when it touches data produced in the non-critical section (i.e., thread-private data)

Producer-Consumer Pipeline Parallelism
Idea: split a loop iteration into multiple “pipeline stages”; each stage runs on a different core
Problem: a stage incurs a cache miss when it touches data produced by the previous stage

Performance of Staged Execution is limited by inter-segment cache misses

92

Page 93: Architecting and Exploiting Asymmetry in Multi-Core Architectures

93

What if We Eliminated All Inter-segment Misses?

Page 94: Architecting and Exploiting Asymmetry in Multi-Core Architectures

94

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

Page 95: Architecting and Exploiting Asymmetry in Multi-Core Architectures

95

Terminology

[Figure: the S0/S1/S2 example again, with Y transferred from Core 0 to Core 1 and Z from Core 1 to Core 2.]

Inter-segment data: a cache block written by one segment and consumed by the next segment

Generator instruction: the last instruction to write to an inter-segment cache block in a segment

Page 96: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Key Observation and Idea

Observation: the set of generator instructions is stable over execution time and across input sets

Idea:
Identify the generator instructions
Record the cache blocks produced by generator instructions
Proactively send such cache blocks to the next segment’s core before initiating the next segment

Suleman et al., “Data Marshaling for Multi-Core Architectures,” ISCA 2010, IEEE Micro Top Picks 2011.

96

Page 97: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Data Marshaling

Compiler/Profiler:
1. Identify generator instructions
2. Insert marshal instructions
(produces a binary containing generator prefixes and marshal instructions)

Hardware:
1. Record generator-produced addresses
2. Marshal recorded blocks to the next core

97

Page 98: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Data Marshaling

Compiler/Profiler:
1. Identify generator instructions
2. Insert marshal instructions
(produces a binary containing generator prefixes and marshal instructions)

Hardware:
1. Record generator-produced addresses
2. Marshal recorded blocks to the next core

98

Page 99: Architecting and Exploiting Asymmetry in Multi-Core Architectures

99

Profiling Algorithm

[Figure: in the running example, the profiler observes that STORE Y (in S0) and STORE Z (in S1) are the last writers of the inter-segment data Y and Z, and marks them as generator instructions.]
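A sketch of this profiling pass (my pseudocode-in-C under the definitions above; `profile_store`/`profile_load` are assumed hooks over a memory trace tagged with segment IDs):

```c
/* Hypothetical DM profiling pass: the last instruction to write a
 * cache block that a *different* segment later reads is marked as a
 * generator instruction. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK(addr) ((addr) >> 6)      /* 64-byte cache blocks */
#define TABLE 4096

typedef struct {
    uint64_t block;      /* cache-block address of the last write */
    uint64_t store_pc;   /* PC of the last store to that block */
    int      segment;    /* segment that performed the store */
    bool     used;
} last_writer_t;

static last_writer_t lw[TABLE];

static void mark_generator(uint64_t pc) {  /* recorded into the binary */
    printf("generator at pc=0x%llx\n", (unsigned long long)pc);
}

void profile_store(uint64_t pc, uint64_t addr, int segment) {
    last_writer_t *e = &lw[BLOCK(addr) % TABLE];
    e->block = BLOCK(addr);
    e->store_pc = pc;
    e->segment = segment;
    e->used = true;
}

void profile_load(uint64_t addr, int segment) {
    last_writer_t *e = &lw[BLOCK(addr) % TABLE];
    if (e->used && e->block == BLOCK(addr) && e->segment != segment)
        mark_generator(e->store_pc);   /* inter-segment data: mark writer */
}
```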

Page 100: Architecting and Exploiting Asymmetry in Multi-Core Architectures

100

Marshal Instructions

    LOAD X
    STORE Y
G:  STORE Y          (generator prefix)
    MARSHAL C1       (when to send: at the MARSHAL; where to send: core C1)

    LOAD Y
    …
G:  STORE Z
    MARSHAL C2

0x5: LOAD Z
    …

Page 101: Architecting and Exploiting Asymmetry in Multi-Core Architectures

DM Support/Cost

Profiler/Compiler: identifies generators, inserts marshal instructions
ISA: generator prefix, marshal instructions
Library/Hardware: binds the next segment ID to a physical core

Hardware: Marshal Buffer
Stores the physical addresses of the cache blocks to be marshaled
16 entries are enough for almost all workloads: 96 bytes per core
Ability to execute generator prefixes and marshal instructions
Ability to push data to another cache

101
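A software model of the Marshal Buffer behavior described above (illustrative only; `push_block_to_cache` stands in for the hardware cache-to-cache push):

```c
/* Hypothetical Marshal Buffer model: generator-prefixed stores record
 * their cache-block address; a MARSHAL instruction pushes all recorded
 * blocks into the next segment's core cache, then clears the buffer. */
#include <stdint.h>

#define MB_ENTRIES 16                 /* "16 entries enough" per slide */
static uint64_t marshal_buf[MB_ENTRIES];
static int mb_count = 0;

static void push_block_to_cache(int core, uint64_t block) {
    (void)core; (void)block;          /* stand-in for the HW push */
}

/* Executed as a side effect of a store carrying the generator prefix. */
void on_generator_store(uint64_t block_addr) {
    if (mb_count < MB_ENTRIES)        /* on overflow: silently drop (sketch) */
        marshal_buf[mb_count++] = block_addr;
}

/* MARSHAL C1: push every recorded block to core C1, then clear. */
void on_marshal(int next_core) {
    for (int i = 0; i < mb_count; i++)
        push_block_to_cache(next_core, marshal_buf[i]);
    mb_count = 0;
}
```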

Page 102: Architecting and Exploiting Asymmetry in Multi-Core Architectures

DM: Advantages, Disadvantages

Advantages:
Timely data transfer: pushes data to the core before it is needed
Can marshal any arbitrary sequence of lines: identifies generators, not patterns
Low hardware cost: the profiler marks generators, so hardware does not need to find them

Disadvantages:
Requires profiler and ISA support
Not always accurate (the generator set is conservative): pollution at the remote core and wasted interconnect bandwidth; not a large problem, as the number of inter-segment blocks is small

102

Page 103: Architecting and Exploiting Asymmetry in Multi-Core Architectures

103

Accelerated Critical Sections with DM

[Figure: Small Core 0 runs the code leading up to the critical section (LOAD X; STORE Y; G: STORE Y; CSCALL). Its Marshal Buffer records Addr Y; on the CSCALL, Data Y is pushed into the Large Core’s L2 cache. The critical section on the Large Core (LOAD Y; …; G: STORE Z; CSRET) then gets a cache hit on Y.]

Page 104: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Accelerated Critical Sections: Methodology

Workloads: 12 critical-section-intensive applications (data mining kernels, sorting, database, web, networking); different training and simulation input sets

Multi-core x86 simulator: 1 large and 28 small cores; aggressive stream prefetcher employed at each core

Details:
Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
Small core: 2 GHz, in-order, 2-wide, 5-stage
Private 32 KB L1, private 256 KB L2, 8 MB shared L3
On-chip interconnect: bi-directional ring, 5-cycle hop latency

104

Page 105: Architecting and Exploiting Asymmetry in Multi-Core Architectures

105

DM on Accelerated Critical Sections: Results

[Figure: per-workload speedup over ACS for DM and for an Ideal scheme that eliminates all inter-segment misses; two bars go off-scale at 168 and 170. DM achieves an 8.7% average speedup over ACS.]

Page 106: Architecting and Exploiting Asymmetry in Multi-Core Architectures

106

Pipeline Parallelism

[Figure: Core 0 runs S0 (LOAD X; STORE Y; G: STORE Y; MARSHAL C1); its Marshal Buffer records Addr Y and pushes Data Y into Core 1’s L2 cache. S1 on Core 1 (LOAD Y; …; G: STORE Z; MARSHAL C2) then gets a cache hit on Y, and S2 (0x5: LOAD Z; …) is marshaled to in the same way.]

Page 107: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Pipeline Parallelism: Methodology

Workloads: 9 applications with pipeline parallelism (financial, compression, multimedia, encoding/decoding); different training and simulation input sets

Multi-core x86 simulator:
32-core CMP: 2 GHz, in-order, 2-wide, 5-stage
Aggressive stream prefetcher employed at each core
Private 32 KB L1, private 256 KB L2, 8 MB shared L3
On-chip interconnect: bi-directional ring, 5-cycle hop latency

107

Page 108: Architecting and Exploiting Asymmetry in Multi-Core Architectures

108

DM on Pipeline Parallelism: Results

[Figure: per-workload speedup over the baseline for DM and Ideal. DM achieves a 16% average speedup.]

Page 109: Architecting and Exploiting Asymmetry in Multi-Core Architectures

DM Coverage, Accuracy, Timeliness

109

[Figure: percentage coverage, accuracy, and timeliness of DM for the ACS and pipeline workloads.]

High coverage of inter-segment misses, in a timely manner
Medium accuracy does not impact performance: only 5.0 and 6.8 cache blocks are marshaled for the average segment

Page 110: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Scaling Results

DM performance improvement increases with more cores, higher interconnect latency, and larger private L2 caches

Why? Inter-segment data misses become a larger bottleneck:
More cores: more communication
Higher latency: longer stalls due to communication
Larger L2 cache: communication misses remain

110

Page 111: Architecting and Exploiting Asymmetry in Multi-Core Architectures

111

Other Applications of Data Marshaling

Can be applied to other Staged Execution models:
Task parallelism models (Cilk, Intel TBB, Apple Grand Central Dispatch)
Special-purpose remote functional units
Computation spreading [Chakraborty et al., ASPLOS’06]
Thread motion/migration [e.g., Rangan et al., ISCA’09]

Can be an enabler for more aggressive SE models:
Lowers the cost of data migration, an important overhead in remote execution of code segments
Remote execution of finer-grained tasks becomes more feasible: finer-grained parallelization in multi-cores

Page 112: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Data Marshaling Summary

Inter-segment data transfers between cores limit the benefit of promising Staged Execution (SE) models

Data Marshaling is a hardware/software cooperative solution: detect inter-segment data generator instructions and push their data to the next segment’s core
Significantly reduces cache misses for inter-segment data
Low cost, high coverage, timely for arbitrary address sequences
Achieves most of the potential of eliminating such misses

Applicable to several existing Staged Execution models:
Accelerated Critical Sections: 9% performance benefit
Pipeline Parallelism: 16% performance benefit

Can enable new models: very fine-grained remote execution

112

Page 113: Architecting and Exploiting Asymmetry in Multi-Core Architectures

113

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

Page 114: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Motivation

• Memory is a shared resource
• Threads’ requests contend for memory
– Degradation in single-thread performance
– Can even lead to starvation
• How to schedule memory requests to increase both system throughput and fairness?

114

[Figure: four cores sharing one memory]

Page 115: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Previous Scheduling Algorithms are Biased

115

[Figure: maximum slowdown (lower is better fairness) vs. weighted speedup (higher is better system throughput) for FRFCFS, STFM, PAR-BS, and ATLAS. STFM and PAR-BS fall in a fairness-biased region, ATLAS in a system-throughput-biased region; the Ideal point achieves both.]

No previous memory scheduling algorithm provides both the best fairness and system throughput

Page 116: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Why Do Previous Algorithms Fail?

116

[Figure: two approaches over threads A, B, and C, ordered from less to more memory-intensive.
Throughput-biased approach: prioritize less memory-intensive threads. Good for throughput, but the most intensive thread starves: starvation and unfairness.
Fairness-biased approach: take turns accessing memory. No thread starves, but the less memory-intensive threads are not prioritized: reduced throughput.]

A single policy for all threads is insufficient

Page 117: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Insight: Achieving the Best of Both Worlds

117

For throughput: prioritize memory-non-intensive threads

For fairness:
Unfairness is caused by memory-intensive threads being prioritized over each other: shuffle threads
Memory-intensive threads have different vulnerability to interference: shuffle asymmetrically

Page 118: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Overview: Thread Cluster Memory Scheduling

1. Group threads into two clusters
2. Prioritize the non-intensive cluster
3. Use different policies for each cluster

118

[Figure: the threads in the system are divided into a memory-non-intensive cluster (prioritized, for throughput) and a memory-intensive cluster (managed for fairness).]

Page 119: Architecting and Exploiting Asymmetry in Multi-Core Architectures

119

Non-Intensive Cluster

Prioritize threads according to MPKI: the lowest-MPKI thread gets the highest priority

• Increases system throughput
– The least intensive thread has the greatest potential for making progress in the processor
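A sketch of the clustering and prioritization step (illustrative only: the grouping below uses a made-up per-thread MPKI cutoff, `cluster_threshold`; the actual TCM algorithm derives the cluster boundary from the threads’ total memory bandwidth usage):

```c
/* Illustrative TCM-style clustering: sort threads by MPKI, put the
 * least intensive ones in the non-intensive cluster (prioritized by
 * ascending MPKI); the rest form the intensive cluster, whose equal
 * priorities are then shuffled periodically. */
#include <stdlib.h>

typedef struct { int id; double mpki; int priority; } thread_info_t;

static int by_mpki(const void *a, const void *b) {
    double d = ((const thread_info_t *)a)->mpki
             - ((const thread_info_t *)b)->mpki;
    return (d > 0) - (d < 0);
}

/* Assign priorities: lower MPKI means higher priority (larger number).
 * Returns the index of the first intensive thread. */
int tcm_cluster(thread_info_t *t, int n, double cluster_threshold) {
    qsort(t, n, sizeof *t, by_mpki);
    int split = n;
    for (int i = 0; i < n; i++)
        if (t[i].mpki > cluster_threshold) { split = i; break; }
    for (int i = 0; i < split; i++)      /* non-intensive cluster */
        t[i].priority = n - i;           /* lowest MPKI: highest priority */
    for (int i = split; i < n; i++)      /* intensive cluster */
        t[i].priority = 0;               /* equal; shuffled periodically */
    return split;
}
```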

Page 120: Architecting and Exploiting Asymmetry in Multi-Core Architectures

120

Intensive Cluster

Periodically shuffle the priority of threads

• Increases fairness
• Is treating all threads equally good enough? BUT: equal turns ≠ same slowdown

Page 121: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Results: Fairness vs. Throughput

Averaged over 96 workloads

121

[Figure: maximum slowdown vs. weighted speedup for FRFCFS, STFM, PAR-BS, ATLAS, and TCM, with annotated gaps of 5% and 39% (throughput and fairness vs. ATLAS) and 8% and 5% (vs. PAR-BS).]

TCM provides the best fairness and system throughput

Page 122: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Results: Fairness-Throughput Tradeoff

122

[Figure: maximum slowdown vs. weighted speedup when a configuration parameter is varied. Adjusting ClusterThreshold moves TCM along a curve that remains better than FRFCFS, STFM, PAR-BS, and ATLAS in both dimensions.]

TCM allows a robust fairness-throughput tradeoff

Page 123: Architecting and Exploiting Asymmetry in Multi-Core Architectures

TCM Summary

123

• No previous memory scheduling algorithm provides both high system throughput and fairness
– Problem: they use a single policy for all threads
• TCM is a heterogeneous scheduling policy
1. Prioritize the non-intensive cluster (throughput)
2. Shuffle priorities in the intensive cluster (fairness)
3. Shuffling should favor “nice” threads (fairness)
• Heterogeneity in memory scheduling provides the best system throughput and fairness

Page 124: Architecting and Exploiting Asymmetry in Multi-Core Architectures

More Details on TCM

• Kim et al., “Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior,” MICRO 2010, Top Picks 2011.

124

Page 125: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Memory Control in CPU-GPU Systems

Observation: heterogeneous CPU-GPU systems require memory schedulers with large request buffers

Problem: existing monolithic application-aware memory scheduler designs are hard to scale to large request buffer sizes

Solution: Staged Memory Scheduling (SMS) decomposes the memory controller into three simple stages:
1) Batch formation: maintains row buffer locality
2) Batch scheduler: reduces interference between applications
3) DRAM command scheduler: issues requests to DRAM

Compared to state-of-the-art memory schedulers, SMS is significantly simpler and more scalable, and provides higher performance and fairness

Ausavarungnirun et al., “Staged Memory Scheduling,” ISCA 2012.

125

Page 126: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Asymmetric Memory QoS in a Parallel Application

Threads in a multithreaded application are inter-dependent
Some threads can be on the critical path of execution due to synchronization; some threads are not

How do we schedule requests of inter-dependent threads to maximize multithreaded application performance?

Idea: estimate the limiter threads likely to be on the critical path and prioritize their requests; shuffle the priorities of non-limiter threads to reduce memory interference among them [Ebrahimi+, MICRO’11]

Hardware/software cooperative limiter thread estimation:
The thread executing the most contended critical section
The thread that is falling behind the most in a parallel for loop

Ebrahimi et al., “Parallel Application Memory Scheduling,” MICRO 2011.

126

Page 127: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Talk Outline
Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

127

Page 128: Architecting and Exploiting Asymmetry in Multi-Core Architectures

128

Related Ongoing/Future Work

Dynamically asymmetric cores
Memory system design for asymmetric cores

Asymmetric memory systems
Phase Change Memory (or Technology X) + DRAM
Hierarchies optimized for different access patterns

Asymmetric on-chip interconnects
Interconnects optimized for different application requirements

Asymmetric resource management algorithms
E.g., network congestion control

Interaction of multiprogrammed multithreaded workloads

Page 129: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Talk Outline

Problem and Motivation
How Do We Get There: Examples
Accelerated Critical Sections (ACS)
Bottleneck Identification and Scheduling (BIS)
Staged Execution and Data Marshaling
Thread Cluster Memory Scheduling (if time permits)
Ongoing/Future Work
Conclusions

129

Page 130: Architecting and Exploiting Asymmetry in Multi-Core Architectures

130

Summary

Applications and phases have varying performance requirements
Designs evaluated on multiple metrics/constraints: energy, performance, reliability, fairness, …

One-size-fits-all design cannot satisfy all requirements and metrics: cannot get the best of all worlds

Asymmetry in design enables tradeoffs: can get the best of all worlds
Asymmetry in core microarch. → Accelerated Critical Sections, BIS, DM → Good parallel performance + Good serialized performance
Asymmetry in memory scheduling → Thread Cluster Memory Scheduling → Good throughput + good fairness

Simple asymmetric designs can be effective and low-cost

Page 131: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Thank You

Onur Mutlu
[email protected]
http://www.ece.cmu.edu/~omutlu

Email me with any questions and feedback!

Page 132: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Architecting and Exploiting Asymmetry in Multi-Core Architectures

Onur Mutlu
[email protected]
June 19, 2013

TUBITAK

Page 133: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Vector Machine Organization (CRAY-1)

CRAY-1
Russell, "The CRAY-1 computer system," CACM 1978.

Scalar and vector modes
8 64-element vector registers
64 bits per element
16 memory banks
8 64-bit scalar registers
8 24-bit address registers

133

Page 134: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Identifying and Accelerating Resource Contention Bottlenecks

Page 135: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Thread Serialization

Three fundamental causes

1. Synchronization

2. Load imbalance

3. Resource contention

135

Page 136: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Memory Contention as a Bottleneck

Problem:
Contended memory regions cause serialization of threads
Threads accessing such regions can form the critical path
Data-intensive workloads (MapReduce, GraphLab, Graph500) can be sped up by 1.5X to 4X by ideally removing contention

Idea:
Identify contended regions dynamically
Prioritize caching the data from threads which are slowed down the most by such regions in faster DRAM/eDRAM

Benefits: Reduces contention, serialization, critical path

136

Page 137: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Evaluation

Workloads: MapReduce, GraphLab, Graph500

Cycle-level x86 platform simulator
CPU: 8 out-of-order cores, 32KB private L1, 512KB shared L2
Hybrid Memory: DDR3 1066 MT/s, 32MB DRAM, 8GB PCM

Mechanisms
Baseline: DRAM as a conventional cache to PCM
CacheMiss: Prioritize caching data from threads with the highest cache miss latency
Region: Cache data from the most contended memory regions
ACTS: Prioritize caching data from threads most slowed down due to memory region contention

137

Page 138: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Caching Results

138

Page 139: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Heterogeneous Main Memory

Page 140: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Heterogeneous Memory Systems

Meza, Chang, Yoon, Mutlu, Ranganathan, "Enabling Efficient and Scalable Hybrid Memories," IEEE Comp. Arch. Letters, 2012.

Diagram: A CPU with both a DRAM controller and a PCM controller. DRAM: fast and durable, but small, leaky, volatile, and high-cost. Phase Change Memory (or Technology X): large, non-volatile, and low-cost, but slow, wears out, and has high active energy.

Hardware/software manage data allocation and movement to achieve the best of multiple technologies

Page 141: Architecting and Exploiting Asymmetry in Multi-Core Architectures

141

One Option: DRAM as a Cache for PCM

PCM is main memory; DRAM caches memory rows/blocks
Benefits: Reduced latency on DRAM cache hit; write filtering
Memory controller hardware manages the DRAM cache
Benefit: Eliminates system software overhead

Three issues:
What data should be placed in DRAM versus kept in PCM?
What is the granularity of data movement?
How to design a low-cost hardware-managed DRAM cache?

Two idea directions:
Locality-aware data placement [Yoon+, CMU TR 2011]
Cheap tag stores and dynamic granularity [Meza+, IEEE CAL 2012]

Page 142: Architecting and Exploiting Asymmetry in Multi-Core Architectures

142

DRAM vs. PCM: An Observation

Row buffers are the same in DRAM and PCM
Row buffer hit latency is the same in DRAM and PCM
Row buffer miss latency is small in DRAM, large in PCM

Accessing the row buffer in PCM is fast
What incurs high latency is the PCM array access → avoid this

Diagram: CPU with a DRAM controller (DRAM cache) and a PCM controller (PCM main memory), each with multiple banks and a row buffer. Both: N ns row hit. DRAM cache: fast row miss. PCM main memory: slow row miss.

Page 143: Architecting and Exploiting Asymmetry in Multi-Core Architectures

143

Row-Locality-Aware Data Placement

Idea: Cache in DRAM only those rows that
Frequently cause row buffer conflicts, because row-conflict latency is smaller in DRAM
Are reused many times, to reduce cache pollution and bandwidth waste

Simplified rule of thumb (see the sketch below):
Streaming accesses: Better to place in PCM
Other accesses (with some reuse): Better to place in DRAM

Bridges half of the performance gap between all-DRAM and all-PCM memory on memory-intensive workloads

Yoon et al., "Row Buffer Locality-Aware Data Placement in Hybrid Memories," CMU SAFARI Technical Report, 2011.
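A minimal sketch of such a placement test, assuming per-row conflict and reuse counters and made-up thresholds (the mechanism and threshold values in the technical report differ):

#include <stdbool.h>

/* Illustrative per-row counters. */
typedef struct {
    long row_buffer_conflicts;  /* conflicts/misses this row has caused */
    long accesses;              /* total accesses: a reuse indicator */
} RowStats;

/* Cache a row in DRAM only if it both conflicts in the row buffer often
   (DRAM's smaller row-conflict latency then helps) and is reused enough
   to justify the caching bandwidth. Streaming rows fail the reuse test
   and stay in PCM. */
bool place_in_dram(RowStats r, long conflict_thresh, long reuse_thresh) {
    return r.row_buffer_conflicts >= conflict_thresh
        && r.accesses >= reuse_thresh;
}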

Page 144: Architecting and Exploiting Asymmetry in Multi-Core Architectures

144

The Problem with Large DRAM Caches

A large DRAM cache requires a large metadata (tag + block-based information) store
How do we design an efficient DRAM cache?

Diagram: On a LOAD X, the memory controller first consults the metadata store ("X → DRAM") to learn that X resides in the small, fast DRAM cache rather than in high-capacity PCM, then accesses X there.

Page 145: Architecting and Exploiting Asymmetry in Multi-Core Architectures

145

Idea 1: Tags in Memory

Store tags in the same row as data in DRAM
Store metadata in the same row as their data
Data and metadata can be accessed together

Benefit: No on-chip tag storage overhead
Downsides:
Cache hit determined only after a DRAM access
Cache hit requires two DRAM accesses

Diagram: One DRAM row holds Tag0, Tag1, Tag2 alongside cache blocks 0, 1, and 2 (a layout sketch follows).
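A toy C layout illustrating the tags-in-row organization; the block count, sizes, and lookup helper are assumptions for illustration, not the evaluated design:

#include <stdint.h>
#include <string.h>

#define BLOCKS_PER_ROW 3
#define BLOCK_BYTES 64

/* Tags live in the same DRAM row as their data blocks, so one row
   activation brings in both. */
typedef struct {
    uint64_t tag[BLOCKS_PER_ROW];                 /* Tag0, Tag1, Tag2 */
    uint8_t  block[BLOCKS_PER_ROW][BLOCK_BYTES];  /* cache blocks 0..2 */
} DramRow;

/* Hit or miss is known only after the row has been read from DRAM,
   which is exactly the downside noted above. */
int lookup(const DramRow *row, uint64_t tag, uint8_t out[BLOCK_BYTES]) {
    for (int i = 0; i < BLOCKS_PER_ROW; i++)
        if (row->tag[i] == tag) {
            memcpy(out, row->block[i], BLOCK_BYTES);
            return 1;  /* DRAM cache hit */
        }
    return 0;  /* miss: fall back to PCM */
}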

Page 146: Architecting and Exploiting Asymmetry in Multi-Core Architectures

146

Idea 2: Cache Tags in SRAM

Recall Idea 1: Store all metadata in DRAM to reduce metadata storage overhead

Idea 2: Cache frequently accessed metadata in on-chip SRAM (sketched below)
Cache only a small amount to keep the SRAM size small
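A minimal sketch of such an SRAM metadata cache, assuming a 64-entry direct-mapped structure (matching the TIMBER configuration evaluated later); the entry format is an illustrative assumption:

#include <stdint.h>
#include <stdbool.h>

#define ENTRIES 64  /* small, so the SRAM stays cheap */

typedef struct { bool valid; uint64_t row_addr; uint64_t meta; } MetaEntry;

static MetaEntry sram[ENTRIES];

/* On a hit, the DRAM-cache lookup is answered from SRAM; on a miss,
   the metadata must still be fetched from the in-DRAM tag store (Idea 1). */
bool meta_lookup(uint64_t row_addr, uint64_t *meta) {
    MetaEntry *e = &sram[row_addr % ENTRIES];
    if (e->valid && e->row_addr == row_addr) { *meta = e->meta; return true; }
    return false;
}

void meta_fill(uint64_t row_addr, uint64_t meta) {
    MetaEntry *e = &sram[row_addr % ENTRIES];
    e->valid = true; e->row_addr = row_addr; e->meta = meta;
}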

Page 147: Architecting and Exploiting Asymmetry in Multi-Core Architectures

147

Idea 3: Dynamic Data Transfer Granularity

Some applications benefit from caching more data: they have good spatial locality
Others do not: large granularity wastes bandwidth and reduces cache utilization

Idea 3: Simple dynamic caching granularity policy
Cost-benefit analysis to determine the best DRAM cache block size
Group main memory into sets of rows
Some row sets follow a fixed caching granularity
The rest of main memory follows the best granularity

Cost-benefit analysis: access latency versus number of cachings
Performed every quantum (see the sketch below)
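A hedged sketch of the per-quantum cost-benefit choice; the slide states only "access latency versus number of cachings", so the exact terms here are assumptions:

/* Pick the DRAM cache block size for a row set at the end of a quantum.
   For each candidate granularity, benefit is the accesses it served at
   the faster DRAM latency; cost is the data movement it required. */
int choose_granularity(const long accesses_served[], const long bytes_moved[],
                       const int sizes[], int n,
                       long saving_per_access, long cost_per_byte) {
    int best = 0;
    long best_net = accesses_served[0] * saving_per_access
                  - bytes_moved[0] * cost_per_byte;
    for (int i = 1; i < n; i++) {
        long net = accesses_served[i] * saving_per_access
                 - bytes_moved[i] * cost_per_byte;
        if (net > best_net) { best_net = net; best = i; }
    }
    return sizes[best];
}

An application with good spatial locality accumulates accesses_served for the large granularity and wins it; a streaming or sparse application does not, and falls back to a small block size.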

Page 148: Architecting and Exploiting Asymmetry in Multi-Core Architectures

148

Methodology

System: 8 out-of-order cores at 4 GHz

Memory: 512 MB direct-mapped DRAM, 8 GB PCM
128B caching granularity
DRAM row hit (miss): 200 cycles (400 cycles)
PCM row hit (clean / dirty miss): 200 cycles (640 / 1840 cycles)

Evaluated metadata storage techniques
All-SRAM system (8MB of SRAM)
Region metadata storage
TIM metadata storage (same row as data)
TIMBER, 64-entry direct-mapped (8KB of SRAM)

Page 149: Architecting and Exploiting Asymmetry in Multi-Core Architectures

TIMBER Performance

149

Figure: Normalized weighted speedup for the SRAM, Region, TIM, TIMBER, and TIMBER-Dyn metadata storage techniques; the slide annotates TIMBER-Dyn at -6% relative to the impractical all-SRAM design.

Meza, Chang, Yoon, Mutlu, Ranganathan, "Enabling Efficient and Scalable Hybrid Memories," IEEE Comp. Arch. Letters, 2012.

Page 150: Architecting and Exploiting Asymmetry in Multi-Core Architectures

TIMBER Energy Efficiency

150

Figure: Normalized performance per watt (for the memory system) for the SRAM, Region, TIM, TIMBER, and TIMBER-Dyn techniques; the slide annotates an 18% energy-efficiency gain for TIMBER-Dyn.

Meza, Chang, Yoon, Mutlu, Ranganathan, "Enabling Efficient and Scalable Hybrid Memories," IEEE Comp. Arch. Letters, 2012.

Page 151: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Summary

Applications and phases have varying performance requirements
Designs evaluated on multiple metrics/constraints: energy, performance, reliability, fairness, …

One-size-fits-all design cannot satisfy all requirements and metrics: cannot get the best of all worlds

Asymmetry in design enables tradeoffs: can get the best of all worlds
Asymmetry in core microarch. → Accelerated Critical Sections, BIS, DM → Good parallel performance + Good serialized performance
Asymmetry in main memory → Data Management for DRAM-PCM Hybrid Memory → Good performance + good efficiency

Simple asymmetric designs can be effective and low-cost

151

Page 152: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Overview of Research in the CMU SAFARI Research Group

Onur Mutlu
[email protected]
http://www.ece.cmu.edu/~omutlu
October 2012

Page 153: Architecting and Exploiting Asymmetry in Multi-Core Architectures

A Bit About Me and My Research

Assistant Professor @ Carnegie Mellon University ECE/CS
http://www.ece.cmu.edu/~omutlu
[email protected], 512-658-0891 (cell)

Interested in fundamental techniques for efficient, high-performance, and scalable systems; solving difficult architectural problems at low cost & complexity

Research, teaching, consulting in:
Computer architecture, hardware/software interaction and cooperation
Multi-core systems, heterogeneous systems, new execution models
Memory systems (memory controllers, caches, DRAM, emerging technologies)
Interconnects
Hardware/software interaction and co-design (PL, OS, Architecture)
Predictable and QoS-aware systems
Fault tolerance
Bioinformatics algorithms and architectures

153

Page 154: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Goal: Scalable and Energy-Efficient Memory

Problem: Main memory is a large energy & performance bottleneck
Our goal: Rethink main memory design for efficiency and scalability
1) Minimize waste and maximize parallelism in DRAM & controllers
2) Enable emerging technologies for energy-efficient memory

Example recent solution ideas:
Tiered latency DRAM: Low latency at low cost [Lee+ HPCA 2013]
SALP: Subarray-level parallelism in DRAM [Kim+ ISCA 2012]
RAIDR: Retention-aware DRAM refresh [Liu+ ISCA 2012]
Staged memory controllers [Ausavarungnirun+ ISCA 2012]
Hybrid PCM+DRAM main memory with efficient DRAM cache design, dynamic data transfer granularity, and intelligent data placement [Meza+ IEEE CAL 2012, Yoon+ ICCD 2012, Lee+ ISCA 2009]
Low complexity cache/memory compression [Pekhimenko+ PACT 2012]
Efficient caching with evicted address filters [Seshadri+ PACT 2012]

154

Page 155: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Goal: QoS-Aware, Predictable (Memory) Systems

Problem: Memory interference is uncontrolled today → uncontrollable, unpredictable, vulnerable system

Goal: We need to control it → Design a QoS-aware system

Solution: Hardware/software cooperative memory QoS
Hardware designed to provide a configurable fairness substrate
Application-aware memory scheduling, partitioning, throttling
Software designed to configure the resources to satisfy different QoS goals, satisfy SLAs

E.g., memory controllers and interconnects provide QoS and predictable performance [2007-2013, Top Picks'09,'11a,'11b,'12]

New schedulers: STFM, ATLAS, PAR-BS, TCM, IMPS, SMS, MISE [Subramanian+ HPCA 2013]

Page 156: Architecting and Exploiting Asymmetry in Multi-Core Architectures

156

Tiered Latency DRAM (TL-DRAM)

Diagram: Long bitline: + low area cost, - high latency. Short bitline: + low latency, - high area cost. Tiered bitline (TL-DRAM): + low latency, + low area cost.

Page 157: Architecting and Exploiting Asymmetry in Multi-Core Architectures

My Research Group: SAFARI

14 PhD students
1 Postdoctoral researcher (1 more joining soon)
2 Undergraduate students
4 Visiting students

Research spanning:
architecture
hardware/software interface
circuit/architecture interface
system/architecture interface
algorithms
bioinformatics
…

157

Page 158: Architecting and Exploiting Asymmetry in Multi-Core Architectures

This Set of Slides …

Describe the current major research areas/directions in my group (Slides 5-13)

Provide a partial list of papers with links to papers and talks (Slides 14-31)

Describe the Memory QoS research in some detail (Slides 32-56)

Describe the Heterogeneous Memory research in some detail (Slides 57-69)

Describe recent results in Scalable Memory Systems research (Slides 70-139)

158

Page 159: Architecting and Exploiting Asymmetry in Multi-Core Architectures

159

Research Topics (I): Memory, Caches, Prefetching

How to provide QoS and predictability in memory and interconnect
Novel memory controller, interconnect, cache designs
A major effort in my group since 2006 [Top Picks'09,'11a,'11b,'12]
Prefetch-aware shared resource management [MICRO'09, ISCA'11]

How to do effective cache/DRAM compression
Base-delta-immediate compression [PACT'12]

How to efficiently utilize large caches
Evicted-address filters [PACT'12]

How to tolerate long memory latencies
Runahead execution [HPCA'03, ISCA'05, MICRO'05, Top Picks'03,'05]
Prefetching in multi-core [HPCA'07,'09, MICRO'08,'09, ISCA'11]

Page 160: Architecting and Exploiting Asymmetry in Multi-Core Architectures

QoS-Aware, Predictable Memory Systems

Problem: Memory interference is uncontrolled → uncontrollable, unpredictable, vulnerable system

Goal: We need to control it → Design a QoS-aware system

Solution: Hardware/software cooperative memory QoS
Hardware designed to provide a configurable fairness substrate
Application-aware memory scheduling, partitioning, throttling
Software designed to configure the resources to satisfy different QoS goals

E.g., fair, programmable memory controllers and on-chip networks provide QoS and predictable performance [2007-2012, Top Picks'09,'11a,'11b,'12]

Page 161: Architecting and Exploiting Asymmetry in Multi-Core Architectures

161

More on QoS-Aware Memory Systems

Smart vs. dumb resources approaches
Application-aware memory scheduling, partitioning, throttling

How to handle prefetch requests in a QoS-aware multi-core memory system?
Prefetch-aware shared resource management, ISCA'11.
Prefetch-aware memory controllers, MICRO'08, IEEE-TC'11.
Coordinated control of multiple prefetchers, MICRO'09.

How to design QoS mechanisms in the interconnect?
Topology-aware, scalable QoS, ISCA'11.
Slack-based packet scheduling, ISCA'10.
Efficient bandwidth guarantees, MICRO'09.
Application-aware request prioritization, MICRO'09.

ISCA 2011 Talk

Micro 2009 Talk

Micro 2008 Talk

Page 162: Architecting and Exploiting Asymmetry in Multi-Core Architectures

162

Research Topics (II): Acceleration

Bottleneck identification and acceleration
BIS, ASPLOS 2012.

Heterogeneous multi-core systems with automatic resource management
Accelerating critical section execution, ASPLOS'09.

Improving programmer productivity with better hardware support for parallel and safe programs
Data Marshaling for Staged Execution, ISCA'10.
HW Support for Safe Languages, ISCA'09, ASPLOS'08, ISCA'07.

Page 163: Architecting and Exploiting Asymmetry in Multi-Core Architectures

163

Research Topics (III): Interconnects

Very efficient interconnects
Bufferless routers [ISCA'09, HotNets'10, HPCA'11, SIGCOMM'12]
Minimally buffered deflection routing [NOCS'12]
Congestion control [HotNets'10, SIGCOMM'12, Tech Report'11]
Hierarchical rings with deflection routing [Tech Report'11]
Multidrop express channels [HPCA'09]

QoS-aware interconnects
Topology-aware scalable QoS, ISCA'11.
Slack-based packet scheduling, ISCA'10.
Efficient bandwidth guarantees, MICRO'09.
Application-aware request prioritization, MICRO'09.

Core, interconnect, memory co-design

Page 164: Architecting and Exploiting Asymmetry in Multi-Core Architectures

164

Goal: Low Energy, High Performance Systems

Heterogeneous multi-core systems and accelerators
Bottleneck identification and scheduling [ASPLOS'12]
Accelerated critical sections [ASPLOS'09], Data Marshaling [ISCA'10]
Acceleration of garbage collection [ISCA'09] and virtual functions [ASPLOS'08, ISCA'07]

Very efficient interconnects
Bufferless routers [ISCA'09, HotNets'10, HPCA'11, SIGCOMM'12]
Minimally buffered deflection routing [NOCS'12]
Congestion control [HotNets'10, SIGCOMM'12, SBAC-PAD'12]
Hierarchical rings with deflection routing [CMU Tech Report'11]
Multidrop express channels [HPCA'09]

Very efficient latency-tolerant core designs
Runahead and beyond [HPCA'03, ISCA'05, MICRO'05, Top Picks'03,'05]
Prefetching in multi-core [HPCA'07,'09, MICRO'08,'09, ISCA'11]

Page 165: Architecting and Exploiting Asymmetry in Multi-Core Architectures

165

Research Topics (IV): Fault Tolerance

Hard and soft error tolerance

Online testing for wearout detection, MICRO’07, ICCAD’09, VTS’10.

Soft error detection via re-execution, DSN’05.

NVM/Flash error analysis and tolerance [DATE’12, ICCD’12]

Page 166: Architecting and Exploiting Asymmetry in Multi-Core Architectures

166

Research Topics (V): Main Memory Scaling

Scaling DRAM into the future
RAIDR: retention-aware DRAM refresh, ISCA'12.
Eliminating bank conflicts via subarray-level parallelism, ISCA'12.

Enabling emerging memory technologies as main memory
PCM-based main memory, ISCA'09.
Hybrid PCM-DRAM main memory, IEEE CAL'12.
Locality-aware data placement in hybrid memory, CMU Tech Report'11, NVMW'12, ICCD'12.
MLC PCM as main memory, NVMW'12, Submission'12.

Page 167: Architecting and Exploiting Asymmetry in Multi-Core Architectures

167

Research Topics (VI): Core Design

Latency tolerant, energy efficient core designs
Runahead execution [HPCA'03, ISCA'05, MICRO'05, Top Picks'03,'05]
Prefetching in multi-core [HPCA'07,'09, MICRO'08,'09, ISCA'11]

New execution paradigms and hardware/software interfaces

Efficient thread context management, multithreading, hardware based scheduling

Specialization of cores to different functions, heterogeneous multi-core

Page 168: Architecting and Exploiting Asymmetry in Multi-Core Architectures

168

Research Topics (VII): Bioinformatics

Algorithms for faster genome sequence analysis
mrFAST, Nature Genetics 2009.
FastHASH, BMC Genomics 2013.

Architectures and accelerators for faster genome sequence analysis

Page 169: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Memory QoS

Page 170: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Trend: Many Cores on Chip

Simpler and lower power than a single large core
Large scale parallelism on chip

170

IBM Cell BE: 8+1 cores
Intel Core i7: 8 cores
Tilera TILE Gx: 100 cores, networked
IBM POWER7: 8 cores
Intel SCC: 48 cores, networked
Nvidia Fermi: 448 "cores"
AMD Barcelona: 4 cores
Sun Niagara II: 8 cores

Page 171: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Many Cores on Chip

What we want: N times the system performance with N times the cores

What do we get today?

171

Page 172: Architecting and Exploiting Asymmetry in Multi-Core Architectures

(Un)expected Slowdowns

Diagram: Two cores share the memory system: a low-priority memory performance hog (the attacker, Core 1) slows down a high-priority application (a movie player, Core 2).

Moscibroda and Mutlu, "Memory performance attacks: Denial of memory service in multi-core systems," USENIX Security 2007.

172

Page 173: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Why? Uncontrolled Memory Interference

Diagram: A multi-core chip in which CORE 1 (attacker) and CORE 2 (movie player) have private L2 caches but share the interconnect, the DRAM memory controller, and DRAM banks 0-3 of the shared DRAM memory system; the uncontrolled sharing causes unfairness.

173

Page 174: Architecting and Exploiting Asymmetry in Multi-Core Architectures

174

A Memory Performance Hog

STREAM (streaming):
// initialize large arrays A, B
for (j=0; j<N; j++) {
  index = j*linesize;
  A[index] = B[index];
  …
}
- Sequential memory access
- Very high row buffer locality (96% hit rate)
- Memory intensive

RANDOM (random):
// initialize large arrays A, B
for (j=0; j<N; j++) {
  index = rand();
  A[index] = B[index];
  …
}
- Random memory access
- Very low row buffer locality (3% hit rate)
- Similarly memory intensive

Moscibroda and Mutlu, "Memory Performance Attacks," USENIX Security 2007.

Page 175: Architecting and Exploiting Asymmetry in Multi-Core Architectures

175

What Does the Memory Hog Do?

Diagram: A DRAM bank (row decoder, row buffer, column mux) and the memory request buffer. T0 (STREAM) keeps Row 0 open in the row buffer and floods the request buffer with row-hit requests to it, while T1's (RANDOM) requests to Rows 5, 16, 111, … are row misses that keep waiting.

Row size: 8KB, cache block size: 64B → 128 (8KB/64B) requests of T0 serviced before T1

Moscibroda and Mutlu, "Memory Performance Attacks," USENIX Security 2007.
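The 128-to-1 effect follows from the baseline FR-FCFS policy, which serves row buffer hits first and only then the oldest request. A minimal comparator sketch (the request fields are illustrative):

#include <stdbool.h>

typedef struct { int row; long arrival_time; } MemReq;

/* FR-FCFS: (1) row hits first, (2) then first-come first-served.
   Because STREAM keeps the open row hot, its requests win rule (1)
   over and over, and RANDOM's requests wait. */
bool higher_priority(MemReq a, MemReq b, int open_row) {
    bool a_hit = (a.row == open_row), b_hit = (b.row == open_row);
    if (a_hit != b_hit) return a_hit;        /* first-ready */
    return a.arrival_time < b.arrival_time;  /* then FCFS */
}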

Page 176: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Effect of the Memory Performance Hog

176

Figure: Slowdown of each program when STREAM and RANDOM run together: STREAM slows down by only 1.18X while RANDOM slows down by 2.82X. Companion plots pair STREAM with gcc and with Virtual PC.

Results on Intel Pentium D running Windows XP (similar results for Intel Core Duo and AMD Turion, and on Fedora Linux)

Moscibroda and Mutlu, "Memory Performance Attacks," USENIX Security 2007.

Page 177: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Greater Problem with More Cores

Vulnerable to denial of service (DoS) [USENIX Security'07]

Unable to enforce priorities or SLAs [MICRO'07,'10,'11, ISCA'08,'11,'12, ASPLOS'10]

Low system performance [IEEE Micro Top Picks'09,'11a,'11b,'12]

Uncontrollable, unpredictable system

177

Page 178: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Distributed DoS in Networked Multi-Core Systems

178

Diagram: Attackers (Cores 1-8) and a stock option pricing application (Cores 9-64), with cores connected via packet-switched routers on chip. The attackers inflict an ~5000X slowdown on the application.

Grot, Hestness, Keckler, Mutlu, "Preemptive Virtual Clock: A Flexible, Efficient, and Cost-effective QOS Scheme for Networks-on-Chip," MICRO 2009.

Page 179: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Solution: QoS-Aware, Predictable Memory

Problem: Memory interference is uncontrolled → uncontrollable, unpredictable, vulnerable system

Goal: We need to control it → Design a QoS-aware system

Solution: Hardware/software cooperative memory QoS
Hardware designed to provide a configurable fairness substrate
Application-aware memory scheduling, partitioning, throttling
Software designed to configure the resources to satisfy different QoS goals

E.g., fair, programmable memory controllers and on-chip networks provide QoS and predictable performance [2007-2012, Top Picks'09,'11a,'11b,'12]

Page 180: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Designing QoS-Aware Memory Systems: Approaches

Smart resources: Design each shared resource to have a configurable interference control/reduction mechanism
QoS-aware memory controllers [Mutlu+ MICRO'07] [Moscibroda+ USENIX Security'07] [Mutlu+ ISCA'08, Top Picks'09] [Kim+ HPCA'10] [Kim+ MICRO'10, Top Picks'11] [Ebrahimi+ ISCA'11, MICRO'11] [Ausavarungnirun+ ISCA'12]
QoS-aware interconnects [Das+ MICRO'09, ISCA'10, Top Picks'11] [Grot+ MICRO'09, ISCA'11, Top Picks'12]
QoS-aware caches

Dumb resources: Keep each resource free-for-all, but reduce/control interference by injection control or data mapping
Source throttling to control access to the memory system [Ebrahimi+ ASPLOS'10, ISCA'11, TOCS'12] [Ebrahimi+ MICRO'09] [Nychis+ HotNets'10]
QoS-aware data mapping to memory controllers [Muralidhara+ MICRO'11]
QoS-aware thread scheduling to cores

180

Page 181: Architecting and Exploiting Asymmetry in Multi-Core Architectures

181

Memory Channel Partitioning
A Mechanism to Reduce Memory Interference

Idea: System software maps badly-interfering applications' pages to different channels [Muralidhara+, MICRO'11]
Separate data of low/high intensity and low/high row-locality applications

Especially effective in reducing interference of threads with "medium" and "heavy" memory intensity
11% higher performance over existing systems (200 workloads)

Diagram: With conventional page mapping, App A (Core 0) and App B (Core 1) spread their pages across the banks of both Channel 0 and Channel 1 and interfere, lengthening both apps' time to finish; with channel partitioning, App A's pages map to Channel 0 and App B's to Channel 1, removing the interference (see the sketch below).

MCP Micro 2011 Talk
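A minimal sketch of the page-to-channel idea; the two-channel setup, the single intensity threshold, and the interleaving function are simplifying assumptions (the MICRO'11 mechanism classifies applications by both memory intensity and row buffer locality):

#define NCHANNELS 2

typedef struct { double mpki; double row_hit_rate; } AppProfile;

/* System software steers badly-interfering applications apart:
   high-intensity applications to one channel, the rest to the other. */
int preferred_channel(AppProfile a, double mpki_thresh) {
    return a.mpki >= mpki_thresh ? 1 : 0;
}

/* Assumed frame-to-channel interleaving; the page allocator would pick
   a free frame whose channel matches the application's preference. */
int channel_of_frame(unsigned long frame_no) {
    return (int)(frame_no % NCHANNELS);
}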

Page 182: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Designing QoS-Aware Memory Systems: Approaches

Smart resources: Design each shared resource to have a configurable interference control/reduction mechanism
QoS-aware memory controllers [Mutlu+ MICRO'07] [Moscibroda+ USENIX Security'07] [Mutlu+ ISCA'08, Top Picks'09] [Kim+ HPCA'10] [Kim+ MICRO'10, Top Picks'11] [Ebrahimi+ ISCA'11, MICRO'11] [Ausavarungnirun+ ISCA'12]
QoS-aware interconnects [Das+ MICRO'09, ISCA'10, Top Picks'11] [Grot+ MICRO'09, ISCA'11, Top Picks'12]
QoS-aware caches

Dumb resources: Keep each resource free-for-all, but reduce/control interference by injection control or data mapping
Source throttling to control access to the memory system [Ebrahimi+ ASPLOS'10, ISCA'11, TOCS'12] [Ebrahimi+ MICRO'09] [Nychis+ HotNets'10]
QoS-aware data mapping to memory controllers [Muralidhara+ MICRO'11]
QoS-aware thread scheduling to cores

182

Page 183: Architecting and Exploiting Asymmetry in Multi-Core Architectures

QoS-Aware Memory Scheduling

183

Diagram: Four cores share a memory controller in front of memory; the memory controller resolves memory contention by scheduling requests.

How to schedule requests to provide
High system performance
High fairness to applications
Configurability to system software

Memory controller needs to be aware of threads

Page 184: Architecting and Exploiting Asymmetry in Multi-Core Architectures

QoS-Aware Memory Scheduling: Evolution

Stall-time fair memory scheduling [Mutlu+ MICRO'07]
Idea: Estimate and balance thread slowdowns
Takeaway: Proportional thread progress improves performance, especially when threads are "heavy" (memory intensive)

Parallelism-aware batch scheduling [Mutlu+ ISCA'08, Top Picks'09]
Idea: Rank threads and service in rank order (to preserve bank parallelism); batch requests to prevent starvation
Takeaway: Preserving within-thread bank-parallelism improves performance; request batching improves fairness

ATLAS memory scheduler [Kim+ HPCA'10]
Idea: Prioritize threads that have attained the least service from the memory scheduler (see the sketch below)
Takeaway: Prioritizing "light" threads improves performance

184
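As one concrete point in this evolution, a minimal sketch of ATLAS-style least-attained-service ranking; the service accounting is simplified (real ATLAS uses long time quanta with exponentially weighted history):

#define NTHREADS 4

/* Memory service (e.g., bank-busy cycles) each thread has received. */
static long attained_service[NTHREADS];

/* Prioritize the thread that has attained the least service so far;
   "light" threads naturally bubble to the top. */
int highest_rank_thread(void) {
    int best = 0;
    for (int t = 1; t < NTHREADS; t++)
        if (attained_service[t] < attained_service[best]) best = t;
    return best;
}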

Page 185: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Throughput vs. Fairness

185

Throughput biased approach: Prioritize less memory-intensive threads (thread A above thread B above thread C)
Good for throughput, but the most intensive thread starves → unfairness

Fairness biased approach: Take turns accessing memory (round-robin over threads A, B, C)
Does not starve anyone, but less memory-intensive threads are not prioritized → reduced throughput

Single policy for all threads is insufficient

Page 186: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Achieving the Best of Both Worlds

186

For Throughput: Prioritize memory-non-intensive threads

For Fairness:
Unfairness is caused by memory-intensive threads being prioritized over each other → Shuffle thread ranking
Memory-intensive threads have different vulnerability to interference → Shuffle asymmetrically

Page 187: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Thread Cluster Memory Scheduling [Kim+ MICRO'10]

1. Group threads into two clusters
2. Prioritize non-intensive cluster
3. Different policies for each cluster
(See the clustering sketch below.)

187

Diagram: Threads in the system are split by memory intensity: memory-non-intensive threads form the non-intensive cluster, which is prioritized → throughput; memory-intensive threads form the intensive cluster, whose internal priorities are shuffled → fairness.
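A minimal C sketch of step 1, the clustering; using MPKI as the intensity metric and a bandwidth-fraction test against ClusterThreshold is a simplification of the paper's mechanism:

#include <stdlib.h>

#define NTHREADS 6

typedef struct { int id; double mpki; } Thread;

static int by_mpki(const void *a, const void *b) {
    double d = ((const Thread *)a)->mpki - ((const Thread *)b)->mpki;
    return (d > 0) - (d < 0);
}

/* Sort threads by memory intensity and fill the prioritized
   non-intensive cluster with the least intensive threads until it
   accounts for a ClusterThreshold fraction of total memory traffic.
   Returns the cluster boundary; threads past it form the intensive
   cluster, whose priorities are then shuffled for fairness. */
int cluster(Thread t[NTHREADS], double cluster_threshold) {
    double total = 0.0, used = 0.0;
    for (int i = 0; i < NTHREADS; i++) total += t[i].mpki;
    qsort(t, NTHREADS, sizeof(Thread), by_mpki);
    int n = 0;
    while (n < NTHREADS && used + t[n].mpki <= cluster_threshold * total)
        used += t[n++].mpki;
    return n;
}

Raising ClusterThreshold grows the prioritized cluster (more throughput emphasis); lowering it shrinks it (more fairness emphasis), which is exactly the knob varied in the fairness-throughput tradeoff figure two slides below.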

Page 188: Architecting and Exploiting Asymmetry in Multi-Core Architectures

TCM: Throughput and Fairness

24 cores, 4 memory controllers, 96 workloads

Figure: Maximum slowdown (lower = better fairness) vs. weighted speedup (higher = better system throughput) for FRFCFS, STFM, PAR-BS, ATLAS, and TCM.

188

TCM, a heterogeneous scheduling policy, provides the best fairness and system throughput

Page 189: Architecting and Exploiting Asymmetry in Multi-Core Architectures

TCM: Fairness-Throughput Tradeoff

189

Figure: Maximum slowdown vs. weighted speedup as each scheduler's configuration parameter is varied; adjusting ClusterThreshold traces TCM's curve against FRFCFS, STFM, PAR-BS, and ATLAS.

When configuration parameter is varied…

TCM allows robust fairness-throughput tradeoff

Page 190: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Memory Control in CPU-GPU Systems

Observation: Heterogeneous CPU-GPU systems require memory schedulers with large request buffers

Problem: Existing monolithic application-aware memory scheduler designs are hard to scale to large request buffer sizes

Solution: Staged Memory Scheduling (SMS) decomposes the memory controller into three simple stages:
1) Batch formation: maintains row buffer locality
2) Batch scheduler: reduces interference between applications
3) DRAM command scheduler: issues requests to DRAM

Compared to state-of-the-art memory schedulers:
SMS is significantly simpler and more scalable
SMS provides higher performance and fairness

190

Ausavarungnirun et al., "Staged Memory Scheduling," ISCA 2012.

Page 191: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Memory QoS in a Parallel Application

Threads in a multithreaded application are inter-dependent
Some threads can be on the critical path of execution due to synchronization; some threads are not

How do we schedule requests of inter-dependent threads to maximize multithreaded application performance?

Idea: Estimate limiter threads likely to be on the critical path and prioritize their requests; shuffle priorities of non-limiter threads to reduce memory interference among them [Ebrahimi+, MICRO'11]

Hardware/software cooperative limiter thread estimation:
Thread executing the most contended critical section
Thread that is falling behind the most in a parallel for loop

191

Ebrahimi et al., "Parallel Application Memory Scheduling," MICRO 2011.

Page 192: Architecting and Exploiting Asymmetry in Multi-Core Architectures

192

Some Related Past Work (that I could not cover…)

How to handle prefetch requests in a QoS-aware multi-core memory system?
Prefetch-aware shared resource management, ISCA'11.
Prefetch-aware memory controllers, MICRO'08, IEEE-TC'11.
Coordinated control of multiple prefetchers, MICRO'09.

How to design QoS mechanisms in the interconnect?
Topology-aware, scalable QoS, ISCA'11.
Slack-based packet scheduling, ISCA'10.
Efficient bandwidth guarantees, MICRO'09.
Application-aware request prioritization, MICRO'09.

ISCA 2011 Talk

Micro 2009 Talk

Micro 2008 Talk

Page 193: Architecting and Exploiting Asymmetry in Multi-Core Architectures

Summary: Memory QoS Approaches and Techniques

Approaches: Smart vs. dumb resources
Smart resources: QoS-aware memory scheduling
Dumb resources: Source throttling; channel partitioning
Both approaches are effective in reducing interference
No single best approach for all workloads

Techniques: Request scheduling, source throttling, memory partitioning
All techniques are effective in reducing interference
Can be applied at different levels: hardware vs. software
No single best technique for all workloads

Combined approaches and techniques are the most powerful
Integrated Memory Channel Partitioning and Scheduling [MICRO'11]

193

Page 194: Architecting and Exploiting Asymmetry in Multi-Core Architectures

194

Partial List of Referenced/Related Papers

Page 195: Architecting and Exploiting Asymmetry in Multi-Core Architectures

195

Heterogeneous Cores

M. Aater Suleman, Onur Mutlu, Moinuddin K. Qureshi, and Yale N. Patt, "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 253-264, Washington, DC, March 2009. Slides (ppt)

M. Aater Suleman, Onur Mutlu, Jose A. Joao, Khubaib, and Yale N. Patt, "Data Marshaling for Multi-core Architectures," Proceedings of the 37th International Symposium on Computer Architecture (ISCA), pages 441-450, Saint-Malo, France, June 2010. Slides (ppt)

Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt, "Bottleneck Identification and Scheduling in Multithreaded Applications," Proceedings of the 17th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), London, UK, March 2012. Slides (ppt) (pdf)

Page 196: Architecting and Exploiting Asymmetry in Multi-Core Architectures

196

QoS-Aware Memory Systems (I)

Rachata Ausavarungnirun, Kevin Chang, Lavanya Subramanian, Gabriel Loh, and Onur Mutlu, "Staged Memory Scheduling: Achieving High Performance and Scalability in Heterogeneous Systems," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.

Sai Prashanth Muralidhara, Lavanya Subramanian, Onur Mutlu, Mahmut Kandemir, and Thomas Moscibroda, "Reducing Memory Interference in Multicore Systems via Application-Aware Memory Channel Partitioning," Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011.

Yoongu Kim, Michael Papamichael, Onur Mutlu, and Mor Harchol-Balter, "Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior," Proceedings of the 43rd International Symposium on Microarchitecture (MICRO), pages 65-76, Atlanta, GA, December 2010. Slides (pptx) (pdf)

Eiman Ebrahimi, Chang Joo Lee, Onur Mutlu, and Yale N. Patt, "Fairness via Source Throttling: A Configurable and High-Performance Fairness Substrate for Multi-Core Memory Systems," ACM Transactions on Computer Systems (TOCS), April 2012.

Page 197: Architecting and Exploiting Asymmetry in Multi-Core Architectures

197

QoS-Aware Memory Systems (II)

Onur Mutlu and Thomas Moscibroda, "Parallelism-Aware Batch Scheduling: Enabling High-Performance and Fair Memory Controllers," IEEE Micro, Special Issue: Micro's Top Picks from 2008 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 29, No. 1, pages 22-32, January/February 2009.

Onur Mutlu and Thomas Moscibroda, "Stall-Time Fair Memory Access Scheduling for Chip Multiprocessors," Proceedings of the 40th International Symposium on Microarchitecture (MICRO), pages 146-158, Chicago, IL, December 2007. Slides (ppt)

Thomas Moscibroda and Onur Mutlu, "Memory Performance Attacks: Denial of Memory Service in Multi-Core Systems," Proceedings of the 16th USENIX Security Symposium (USENIX SECURITY), pages 257-274, Boston, MA, August 2007. Slides (ppt)

Page 198: Architecting and Exploiting Asymmetry in Multi-Core Architectures

198

QoS-Aware Memory Systems (III)

Eiman Ebrahimi, Rustam Miftakhutdinov, Chris Fallin, Chang Joo Lee, Onur Mutlu, and Yale N. Patt, "Parallel Application Memory Scheduling," Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011. Slides (pptx)

Boris Grot, Joel Hestness, Stephen W. Keckler, and Onur Mutlu, "Kilo-NOC: A Heterogeneous Network-on-Chip Architecture for Scalability and Service Guarantees," Proceedings of the 38th International Symposium on Computer Architecture (ISCA), San Jose, CA, June 2011. Slides (pptx)

Reetuparna Das, Onur Mutlu, Thomas Moscibroda, and Chita R. Das, "Application-Aware Prioritization Mechanisms for On-Chip Networks," Proceedings of the 42nd International Symposium on Microarchitecture (MICRO), pages 280-291, New York, NY, December 2009. Slides (pptx)

Page 199: Architecting and Exploiting Asymmetry in Multi-Core Architectures

199

Heterogeneous Memory

Justin Meza, Jichuan Chang, HanBin Yoon, Onur Mutlu, and Parthasarathy Ranganathan, "Enabling Efficient and Scalable Hybrid Memories Using Fine-Granularity DRAM Cache Management," IEEE Computer Architecture Letters (CAL), May 2012.

HanBin Yoon, Justin Meza, Rachata Ausavarungnirun, Rachael Harding, and Onur Mutlu, "Row Buffer Locality-Aware Data Placement in Hybrid Memories," SAFARI Technical Report, TR-SAFARI-2011-005, Carnegie Mellon University, September 2011.

Benjamin C. Lee, Engin Ipek, Onur Mutlu, and Doug Burger, "Architecting Phase Change Memory as a Scalable DRAM Alternative," Proceedings of the 36th International Symposium on Computer Architecture (ISCA), pages 2-13, Austin, TX, June 2009. Slides (pdf)

Benjamin C. Lee, Ping Zhou, Jun Yang, Youtao Zhang, Bo Zhao, Engin Ipek, Onur Mutlu, and Doug Burger, "Phase Change Technology and the Future of Main Memory," IEEE Micro, Special Issue: Micro's Top Picks from 2009 Computer Architecture Conferences (MICRO TOP PICKS), Vol. 30, No. 1, pages 60-70, January/February 2010.

Page 200: Architecting and Exploiting Asymmetry in Multi-Core Architectures

200

Flash Memory

Yu Cai, Eric F. Haratsch, Onur Mutlu, and Ken Mai, "Error Patterns in MLC NAND Flash Memory: Measurement, Characterization, and Analysis," Proceedings of the Design, Automation, and Test in Europe Conference (DATE), Dresden, Germany, March 2012. Slides (ppt)

Page 201: Architecting and Exploiting Asymmetry in Multi-Core Architectures

201

Latency Tolerance

Onur Mutlu, Jared Stark, Chris Wilkerson, and Yale N. Patt, "Runahead Execution: An Alternative to Very Large Instruction Windows for Out-of-order Processors," Proceedings of the 9th International Symposium on High-Performance Computer Architecture (HPCA), pages 129-140, Anaheim, CA, February 2003. Slides (pdf)

Onur Mutlu, Hyesoon Kim, and Yale N. Patt, "Techniques for Efficient Processing in Runahead Execution Engines," Proceedings of the 32nd International Symposium on Computer Architecture (ISCA), pages 370-381, Madison, WI, June 2005. Slides (ppt) Slides (pdf)

Onur Mutlu, Hyesoon Kim, and Yale N. Patt, "Address-Value Delta (AVD) Prediction: Increasing the Effectiveness of Runahead Execution by Exploiting Regular Memory Allocation Patterns," Proceedings of the 38th International Symposium on Microarchitecture (MICRO), pages 233-244, Barcelona, Spain, November 2005. Slides (ppt) Slides (pdf)

Page 202: Architecting and Exploiting Asymmetry in Multi-Core Architectures

202

Scaling DRAM: Refresh and Parallelism

Jamie Liu, Ben Jaiyen, Richard Veras, and Onur Mutlu, "RAIDR: Retention-Aware Intelligent DRAM Refresh," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.

Yoongu Kim, Vivek Seshadri, Donghyuk Lee, Jamie Liu, and Onur Mutlu, "A Case for Exploiting Subarray-Level Parallelism (SALP) in DRAM," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.