Memory Hierarchy


Transcript of Memory Hierarchy

Page 1: Memory Hierarchy

Memory Hierarchy

Page 2: Memory Hierarchy

Random-Access Memory

Static RAM (SRAM)
Each cell stores a bit with a six-transistor circuit. Retains its value indefinitely, as long as it is kept powered. Relatively insensitive to disturbances such as electrical noise. Faster and more expensive than DRAM.

Dynamic RAM (DRAM)
Each cell stores a bit with a capacitor and a transistor. Value must be refreshed every 10-100 ms. Sensitive to disturbances. Slower and cheaper than SRAM.

       Tran./bit  Access time  Persist?  Sensitive?  Cost   Applications
SRAM   6          1X           Yes       No          100X   Cache memories
DRAM   1          10X          No        Yes         1X     Main memories, frame buffers

Page 3: Memory Hierarchy

Conventional DRAM Organization

A d x w DRAM stores dw total bits, organized as d supercells of w bits each, laid out as a rectangular array addressed by (row, col).

[Figure: a 16 x 8 DRAM chip with a 4 x 4 array of supercells (rows 0-3, cols 0-3) and an internal row buffer. The memory controller (connected to the CPU) drives 2 address bits and transfers 8 data bits; supercell (2,1) is highlighted.]
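A minimal C sketch of this (row, col) addressing, assuming the 4 x 4 supercell geometry of the figure (the function name is illustrative, not a DRAM interface):

#include <stdio.h>

#define COLS 4   /* 16 supercells arranged as a 4 x 4 array */

/* Split a linear supercell number (0..15) into (row, col). */
static void addr_split(int supercell, int *row, int *col)
{
    *row = supercell / COLS;
    *col = supercell % COLS;
}

int main(void)
{
    int row, col;
    addr_split(9, &row, &col);
    printf("supercell 9 = (%d,%d)\n", row, col);   /* prints (2,1) */
    return 0;
}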

Page 4: Memory Hierarchy

Reading DRAM Supercell (2,1): Row Access

[Figure: the same 16 x 8 DRAM chip. The memory controller sends RAS = 2 on the 2-bit address lines; row 2 of the supercell array is copied into the internal row buffer.]

Step 1(a): Row access strobe (RAS) selects row 2.
Step 1(b): Row 2 is copied from the DRAM array to the row buffer.

Page 5: Memory Hierarchy

Reading DRAM Supercell (2,1): Column Access

[Figure: the same 16 x 8 DRAM chip. The memory controller sends CAS = 1 on the address lines; supercell (2,1) travels from the row buffer over the 8 data lines back to the controller, and on to the CPU.]

Step 2(a): Column access strobe (CAS) selects column 1.
Step 2(b): Supercell (2,1) is copied from the row buffer to the data lines, and eventually back to the CPU.
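A toy C model of the two-step protocol, reusing the 4 x 4 supercell geometry of the previous slide (ras/cas are illustrative names, not a real DRAM interface):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define ROWS 4
#define COLS 4

static uint8_t cells[ROWS][COLS];   /* the supercell array */
static uint8_t row_buffer[COLS];    /* internal row buffer */

/* Step 1: RAS copies an entire row into the row buffer. */
static void ras(int row)
{
    memcpy(row_buffer, cells[row], COLS);
}

/* Step 2: CAS selects one supercell out of the buffered row. */
static uint8_t cas(int col)
{
    return row_buffer[col];
}

int main(void)
{
    cells[2][1] = 0x5A;             /* put a value in supercell (2,1) */
    ras(2);                         /* step 1: buffer row 2 */
    printf("supercell (2,1) = 0x%02X\n", cas(1));  /* step 2: read col 1 */
    return 0;
}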

Page 6: Memory Hierarchy

Memory Modules

[Figure: a 64 MB memory module consisting of eight 8M x 8 DRAMs (DRAM 0 through DRAM 7). The memory controller broadcasts addr (row = i, col = j) to all eight chips; each chip responds with its supercell (i,j), one byte of the result.]

The eight bytes are concatenated into a 64-bit doubleword at main memory address A: DRAM 0 supplies bits 0-7, DRAM 1 bits 8-15, ..., DRAM 7 bits 56-63.
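A sketch of how the controller might assemble the doubleword, assuming each chip returns one byte for supercell (i,j); read_supercell is a hypothetical stand-in for the per-chip read:

#include <stdio.h>
#include <stdint.h>

#define NCHIPS 8

/* Hypothetical per-chip read: chip k returns its byte of supercell (i,j). */
static uint8_t read_supercell(int chip, int i, int j)
{
    (void)i; (void)j;            /* geometry unused in this stand-in */
    return (uint8_t)chip;        /* placeholder data */
}

/* DRAM k supplies bits 8k..8k+7 of the 64-bit doubleword. */
static uint64_t read_doubleword(int i, int j)
{
    uint64_t dw = 0;
    for (int k = 0; k < NCHIPS; k++)
        dw |= (uint64_t)read_supercell(k, i, j) << (8 * k);
    return dw;
}

int main(void)
{
    printf("doubleword = 0x%016llx\n",
           (unsigned long long)read_doubleword(2, 1));
    return 0;
}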

Page 7: Memory Hierarchy

Enhanced DRAMs

All enhanced DRAMs are built around the conventional DRAM core.

Fast page mode DRAM (FPM DRAM)
- Access the contents of a row with [RAS, CAS, CAS, CAS, CAS] instead of [(RAS,CAS), (RAS,CAS), (RAS,CAS), (RAS,CAS)].

Extended data out DRAM (EDO DRAM)
- Enhanced FPM DRAM with more closely spaced CAS signals.

Synchronous DRAM (SDRAM)
- Driven by the rising clock edge instead of asynchronous control signals.

Double data-rate synchronous DRAM (DDR SDRAM)
- Enhancement of SDRAM that uses both clock edges as control signals.

Video RAM (VRAM)
- Like FPM DRAM, but output is produced by shifting the row buffer.
- Dual ported (allows concurrent reads and writes).

Page 8: Memory Hierarchy

Registers vs. Data Cache (1)

Registers
- Explicitly managed by the compiler.
- The compiler can use information available at compile time to preload data into registers and to purge data more effectively.
- Outperform a data cache by nearly a factor of two in both speed and cost.
- Not easy to allocate for objects requiring multiple storage units.
- Aliasing problem.

Page 9: Memory Hierarchy

Registers vs. Data Cache (2)

Data Caches
- Based on the "locality of reference" of programs (temporal vs. spatial).
- Take dynamic program behavior into account.
- Invisible to programmers and architecture-independent (though some ISAs make the cache visible).
- Coherency problem in multiprocessor systems.

Page 10: Memory Hierarchy

Vanilla SDRAM
- FSB 100 MHz or 133 MHz (PC100, PC133)
- Memory bandwidth = FSB rate x 1 transfer/clock x 64 bits (8 bytes) per transfer
- At FSB 100 MHz: 100 MHz x 8 bytes/clock = 800 MB/s

DDR SDRAM
- Double data rate (two transfers per clock)
- PC1600: vanilla SDRAM @ 100 MHz x 2 (1.6 GB/s)
- PC2100: vanilla SDRAM @ 133 MHz x 2 (2.1 GB/s)

RDRAM
- PC600, PC700, PC800
- PC600: 600 MHz x 32 bits = 2.4 GB/s
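The arithmetic on this slide, checked in a few lines of C (the helper name and parameterization are just for illustration):

#include <stdio.h>

/* Peak bandwidth in MB/s: clock (MHz) x transfers per clock x bus bytes. */
static double peak_mb_s(double mhz, int transfers_per_clock, int bus_bytes)
{
    return mhz * transfers_per_clock * bus_bytes;
}

int main(void)
{
    printf("PC100 SDRAM : %.0f MB/s\n", peak_mb_s(100.0, 1, 8));  /* 800  */
    printf("PC1600 DDR  : %.0f MB/s\n", peak_mb_s(100.0, 2, 8));  /* 1600 */
    printf("PC600 RDRAM : %.0f MB/s\n", peak_mb_s(600.0, 1, 4));  /* 2400 */
    return 0;
}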

Page 11: Memory Hierarchy

Nonvolatile Memories

Nonvolatile memories retain their value even if powered off. The generic name is read-only memory (ROM), which is misleading because some ROMs can be both read and modified.

Types of ROMs
- Programmable ROM (PROM)
- Erasable programmable ROM (EPROM)
- Electrically erasable PROM (EEPROM)
- Flash memory

Firmware: a program stored in a ROM
- Boot-time code, BIOS (Basic Input/Output System)
- Graphics cards, disk controllers, etc.

Page 12: Memory Hierarchy

Flash Memory

[Photo of flash memory devices. © Samsung Electronics, Co.]

Page 13: Memory Hierarchy

Flash Memory Characteristics

Operations
- Read
- Write or Program: changes bit state from 1 to 0
- Erase: changes bit state from 0 to 1

Units
- Page (sector): the management/program unit
- Block: the erase unit

[Figure: a block starts erased as all 1s (1 1 1 1 1 1 1 1); a first write clears some bits (1 1 0 1 1 0 1 0); a second write clears more (0 1 0 1 0 0 1 0); an erase returns the block to all 1s.]
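A minimal C sketch of these semantics: programming can only clear bits (bitwise AND of old and new data), so overwriting old data requires an erase first. The function names and the single 8-bit "page" are illustrative only; the bit patterns match the figure above.

#include <stdio.h>
#include <stdint.h>

static uint8_t page = 0xFF;                                /* erased: all 1s */

static void flash_program(uint8_t data) { page &= data; }  /* 1 -> 0 only */
static void flash_erase(void)           { page = 0xFF; }   /* back to all 1s */

int main(void)
{
    flash_program(0xDA);   /* 1101 1010: first write */
    printf("after 1st write: 0x%02X\n", page);    /* 0xDA */
    flash_program(0x52);   /* 0101 0010: bits can only be cleared */
    printf("after 2nd write: 0x%02X\n", page);    /* 0x52 */
    flash_erase();
    printf("after erase:     0x%02X\n", page);    /* 0xFF */
    return 0;
}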

Page 14: Memory Hierarchy

NOR vs. NAND Flash (1)

NOR Flash
- Random, direct access interface
- Fast random reads
- Slow erase and write
- Uses: boot image, BIOS, cellular phones, etc.

NAND Flash
- I/O mapped access
- Smaller cell size, lower cost
- Smaller erase blocks
- Better performance for erase and write
- Uses: solid-state file storage, MP3 players, digital cameras, etc.

Page 15: Memory Hierarchy

NOR vs. NAND Flash (2)

[Table: characteristics of various memory devices; the data is not recoverable from this transcript.]

Page 16: Memory Hierarchy

Flash Advantages

- Non-volatile
- Small, light-weight
- Low power
- Robust
- Fast read access times (compared to disks)

Page 17: Memory Hierarchy

Flash Drawbacks

- Much slower write access times
- No in-place update
  - A write must be preceded by an erase operation.
  - Erase operations can only be performed in a much larger unit than the write operation.
- Limited lifetime
  - Typically 100,000 - 1,000,000 program/erase cycles
  - Bad blocks (for NAND)

Page 18: Memory Hierarchy

Flash Memory Applications

Low cost and high density.

Code Memory (NOR): fast random access, XIP (execute in place)
- BIOS / networking (PC, router, hub)
- Telecommunications (switches)
- Cellular phones (code & data)
- POS / PDA / PCA (code & data)

Mass Storage (NAND)
- Memory cards (mobile computers)
- Solid-state disks (rugged & reliable storage)
- Digital cameras (still & moving pictures)
- Voice/audio recorders (near-CD quality)

Page 19: Memory Hierarchy

Flash-based Data Storage (1)

MultiMediaCard (MMC) / CompactFlash
An on-card microprocessor provides many capabilities:
- Host independence from the details of erasing and programming flash memory
- Sophisticated error handling (bad blocks, ECC)
- Power management for low-power operation

Page 20: Memory Hierarchy

Flash-based Data Storage (2)

FFD 2.5" from M-Systems
- Solid-state flash disk in a 2.5" disk form factor
- Up to 90 GB
- ATA-6: interface speed of 100 MB/s
- 40 MB/s sustained read/write rates
- Released March 10, 2004; ~$40,000 for 90 GB

Benefits
- Reliable and robust: no mechanical parts
- Small, light-weight, low power consumption

Page 21: Memory Hierarchy

Typical Bus Structure

A bus is a collection of parallel wires that carry address, data, and control signals. Buses are typically shared by multiple devices.

[Figure: the CPU chip (register file, ALU, bus interface) connects over the system bus to an I/O bridge, which connects over the memory bus to main memory.]

Page 22: Memory Hierarchy

Modern PC Architecture

[Figure: chipset diagram with a Memory Controller Hub (MCH) and an I/O Controller Hub (ICH).]

Page 23: Memory Hierarchy

Disk Geometry

Disks consist of platters, each with two surfaces. Each surface consists of concentric rings called tracks. Each track consists of sectors separated by gaps.

[Figure: one surface spinning about a spindle, with tracks as concentric rings; track k is divided into sectors separated by gaps.]

Page 24: Memory Hierarchy

Multiple-Platter View

Aligned tracks form a cylinder.

[Figure: three platters (platter 0-2) give six surfaces (surface 0-5) on a common spindle; track k on each surface together forms cylinder k. The read/write heads, one per surface, are mounted on a common arm and move in unison from cylinder to cylinder.]

Page 25: Memory Hierarchy

Disk Operation (Single-Platter View)

The disk surface spins at a fixed rotational rate. The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air. By moving radially, the arm can position the read/write head over any track.

[Figure: a single platter on a spindle, with the arm sweeping the head across the tracks.]

Page 26: Memory Hierarchy

Disk Device (3): Hard Disk Internals

An analogy: our Boeing 747 will fly at an altitude of only a few mm, at a speed of approximately 65 mph, periodically landing and taking off. And still the surface of the runway, which consists of a few mm-thick layers, will stay intact for years.

Page 27: Memory Hierarchy

Disk Access Time

The average time to access a target sector is approximated by:
    Taccess = Tavg seek + Tavg rotation + Tavg transfer

Seek time: time to position the heads over the cylinder containing the target sector. Typically 9 ms.

Rotational latency: time waiting for the first bit of the target sector to pass under the r/w head.
    Tavg rotation = 1/2 x (1 / RPM) x 60 sec/min

Transfer time: time to read the bits in the target sector.
    Tavg transfer = (1 / RPM) x (1 / avg sectors per track) x 60 sec/min

Important points:
- Access time is dominated by seek time.
- The first bit in a sector is the most expensive; the rest are free.
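A quick numeric check of these formulas in C; the 7,200 RPM and 400 sectors/track figures are assumed example values, while the 9 ms seek time is the typical number from the slide:

#include <stdio.h>

int main(void)
{
    double rpm = 7200.0;               /* assumed example rotation rate */
    double sectors_per_track = 400.0;  /* assumed average sectors/track */
    double t_seek_ms = 9.0;            /* typical seek time from the slide */

    /* Tavg rotation = 1/2 x (1/RPM) x 60 s/min, converted to ms. */
    double t_rot_ms = 0.5 * (1.0 / rpm) * 60.0 * 1000.0;

    /* Tavg transfer = (1/RPM) x (1/sectors per track) x 60 s/min, in ms. */
    double t_xfer_ms = (1.0 / rpm) * (1.0 / sectors_per_track) * 60.0 * 1000.0;

    printf("Tavg rotation = %.2f ms\n", t_rot_ms);    /* 4.17 ms  */
    printf("Tavg transfer = %.3f ms\n", t_xfer_ms);   /* 0.021 ms */
    printf("Taccess      ~= %.2f ms\n", t_seek_ms + t_rot_ms + t_xfer_ms);
    return 0;
}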

Page 28: Memory Hierarchy

Hard Disk Data Sheet

Model                Barracuda ATA II   Cheetah 73
Capacity             30 GB              73 GB
Platters             3                  12
Heads                6                  24
RPM                  7,200              10,025
Sector size          512 B              512 B
Sectors/track        63                 463
Tracks/in            21,368             18,145
Seek time (read)     8.2 ms             5.85 ms
Seek time (write)    9.5 ms             6.35 ms
Track-to-track (r)   1.2 ms             0.6 ms
Track-to-track (w)   1.9 ms             0.9 ms

Page 29: Memory Hierarchy

Logical Disk Blocks

Modern disks present a simple abstract view of their complex sector geometry: the set of available sectors is modeled as a sequence of block-sized logical blocks (0, 1, 2, ...).

The mapping between logical blocks and actual (physical) sectors is maintained by a hardware/firmware device called the disk controller, which converts requests for logical blocks into (surface, track, sector) triples.

The disk controller also performs some intelligent functions: buffering, caching, prefetching, scheduling, etc.

Page 30: Memory Hierarchy

I/O Bus

[Figure: the CPU chip (register file, ALU, bus interface) connects via the system bus to the I/O bridge, and via the memory bus to main memory. Hanging off the I/O bus are a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.]

Page 31: Memory Hierarchy

Reading a Disk Sector (1)

[Figure: the same CPU/memory/I/O-bus diagram; the command flows from the CPU over the I/O bus to the disk controller.]

The CPU initiates a disk read by writing a command, logical block number, and destination memory address to a port (address) associated with the disk controller.

Page 32: Memory Hierarchy

Reading a Disk Sector (2)

[Figure: the sector's data flows from the disk controller over the I/O bus directly into main memory, bypassing the CPU.]

Disk controller reads the sector and performs a direct memory access (DMA) transfer into main memory.

Page 33: Memory Hierarchy

Reading a Disk Sector (3)

[Figure: the disk controller signals the CPU over the interrupt line.]

When the DMA transfer completes, the disk controller notifies the CPU with an interrupt (i.e., it asserts a special "interrupt" pin on the CPU).
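The three steps on slides 31-33 can be caricatured in C. This is a toy model only: the "controller registers" struct and the function names are invented for illustration, and in real hardware the DMA transfer and the interrupt happen without any software running on the CPU side.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Made-up "controller registers"; a real controller exposes these as
   I/O ports or memory-mapped registers. */
static struct {
    int      command;        /* 1 = READ */
    uint32_t logical_block;
    uint8_t *dest;           /* destination main-memory address */
} disk_regs;

static volatile int interrupt_pending;

/* Step 1: the CPU writes command, logical block number, and destination
   address to the controller's ports. */
static void cpu_start_read(uint32_t lba, uint8_t *dst)
{
    disk_regs.command = 1;
    disk_regs.logical_block = lba;
    disk_regs.dest = dst;
}

/* Steps 2-3: the controller reads the sector, DMAs it into main memory
   without involving the CPU, then raises the interrupt. */
static void disk_controller_run(const uint8_t sector[512])
{
    if (disk_regs.command == 1) {
        memcpy(disk_regs.dest, sector, 512);   /* the DMA transfer */
        interrupt_pending = 1;                 /* "assert the interrupt pin" */
    }
}

int main(void)
{
    static uint8_t buf[512];
    static const uint8_t sector[512] = { 0xAB };
    cpu_start_read(31, buf);        /* step 1 */
    disk_controller_run(sector);    /* steps 2 and 3 */
    printf("interrupt=%d, first byte=0x%02X\n", interrupt_pending, buf[0]);
    return 0;
}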

Page 34: Memory Hierarchy

Storage vs. CPU Trends

DRAM
metric             1980    1985   1990   1995   2000   2000:1980
$/MB              8,000     880    100     30      1       8,000
access (ns)         375     200    100     70     60           6
typical size (MB) 0.064   0.256      4     16     64       1,000

SRAM
metric             1980    1985   1990   1995   2000   2000:1980
$/MB             19,200   2,900    320    256    100         190
access (ns)         300     150     35     15      2         100

Disk
metric             1980    1985   1990   1995   2000   2000:1980
$/MB                500     100      8   0.30   0.05      10,000
access (ms)          87      75     28     10      8          11
typical size (MB)     1      10    160  1,000  9,000       9,000

CPU
metric             1980    1985   1990   1995   2000   2000:1980
processor          8080     286    386   Pent  P-III
clock rate (MHz)      1       6     20    150    750         750
cycle time (ns)   1,000     166     50      6    1.6         750

Page 35: Memory Hierarchy

The CPU-Memory Gap

The increasing gap between DRAM, disk, and CPU speeds.

[Figure: log-scale plot of time (ns, 1 to 100,000,000) versus year (1980-2000), with curves for disk seek time, DRAM access time, SRAM access time, and CPU cycle time. The gap between the CPU curve and the memory/disk curves widens over time.]

Page 36: Memory Hierarchy

Locality

Principle of Locality:
- Temporal locality: recently referenced items are likely to be referenced in the near future.
- Spatial locality: items with nearby addresses tend to be referenced close together in time.

Locality example:

sum = 0;
for (i = 0; i < n; i++)
    sum += a[i];
return sum;

Data
- Array elements are referenced in succession: spatial locality.
- sum is referenced on each iteration: temporal locality.

Instructions
- Instructions are referenced in sequence: spatial locality.
- The loop is cycled through repeatedly: temporal locality.

Page 37: Memory Hierarchy

Memory Hierarchies

Some fundamental and enduring properties of hardware and software:
- Fast storage technologies cost more per byte and have less capacity.
- The gap between CPU and main memory speed is widening.
- Well-written programs tend to exhibit good locality.

These properties suggest an approach for organizing memory and storage systems known as a memory hierarchy.

Page 38: Memory Hierarchy

An Example Memory Hierarchy

From smaller, faster, and costlier (per byte) storage devices at the top to larger, slower, and cheaper (per byte) storage devices at the bottom:

L0: registers - CPU registers hold words retrieved from the L1 cache.
L1: on-chip L1 cache (SRAM) - holds cache lines retrieved from the L2 cache.
L2: off-chip L2 cache (SRAM) - holds cache lines retrieved from main memory.
L3: main memory (DRAM) - holds disk blocks retrieved from local disks.
L4: local secondary storage (local disks) - holds files retrieved from disks on remote network servers.
L5: remote secondary storage (distributed file systems, Web servers)

Page 39: Memory Hierarchy

Caching

Cache: a smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.

Fundamental idea of a memory hierarchy:
- For each k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.

Why do memory hierarchies work?
- Programs tend to access the data at level k more often than they access the data at level k+1.
- Thus, the storage at level k+1 can be slower, and thus larger and cheaper per bit.
- Net effect: a large pool of memory that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.

Page 40: Memory Hierarchy

Caching in a Memory Hierarchy

[Figure: the larger, slower, cheaper storage device at level k+1 is partitioned into blocks 0-15. The smaller, faster, more expensive device at level k caches a subset of those blocks (shown: 8, 9, 14, 3), with blocks 4 and 10 in transit between the levels. Data is copied between levels in block-sized transfer units.]

Page 41: Memory Hierarchy

General Caching Concepts

A program needs object d, which is stored in some block b.

Cache hit
- The program finds b in the cache at level k (e.g., block 14).

Cache miss
- b is not at level k, so level k must fetch it from level k+1 (e.g., block 12).
- If the level-k cache is full, then some current block (a "victim") must be replaced (evicted).

[Figure: requests for blocks 14 and 12 against a level-k cache holding blocks 4*, 9, 14, 3. The request for 14 hits; the request for 12 misses, block 12 is fetched from level k+1, and victim block 4* is evicted.]

Page 42: Memory Hierarchy

42

Hardware0On-Chip TLBAddress translations

TLB

Web browser

10,000,000Local diskWeb pagesBrowser cache

Web cache

Network buffer cache

Buffer cache

Virtual Memory

L2 cache

L1 cache

Registers

Cache Type

Web pages

Parts of files

Parts of files

4-KB page

32-byte block

32-byte block

4-byte word

What Cached

Web proxy server

1,000,000,000Remote server disks

OS100Main memory

Hardware1On-Chip L1

Hardware10Off-Chip L2

AFS/NFS client

10,000,000Local disk

Hardware+OS

100Main memory

Compiler0 CPU registers

Managed By

Latency (cycles)

Where Cached

Examples of Caching in the Hierarchy

Page 43: Memory Hierarchy

Cache Memories

Page 44: Memory Hierarchy

Cache Memories

Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory. The CPU looks first for data in L1, then in L2, then in main memory.

[Figure: typical bus structure. On the CPU chip, the register file and ALU sit next to the L1 cache; the bus interface connects over the cache bus to the off-chip L2 cache, and over the system bus, I/O bridge, and memory bus to main memory.]

Page 45: Memory Hierarchy

Inserting L1 Cache

[Figure: the big, slow main memory has room for many 4-word blocks (e.g., block 10 = a b c d, block 21 = p q r s, block 30 = w x y z). The small, fast L1 cache has room for two 4-word blocks (line 0, line 1). The tiny, very fast CPU register file has room for four 4-byte words.]

The transfer unit between the cache and main memory is a 4-word block (16 bytes). The transfer unit between the CPU register file and the cache is a 4-byte block.

Page 46: Memory Hierarchy

General Organization of a Cache Memory

A cache is an array of S = 2^s sets. Each set contains E lines. Each line holds a block of B = 2^b data bytes, plus 1 valid bit and t tag bits.

Cache size: C = B x E x S data bytes.

[Figure: sets 0 through S-1, each with E lines; every line consists of a valid bit, a tag, and bytes 0 through B-1 of its block.]

Page 47: Memory Hierarchy

Addressing Caches

An m-bit address A is divided into three fields, from most to least significant:

Address A (m bits):  <tag: t bits> <set index: s bits> <block offset: b bits>

The word at address A is in the cache if the tag bits in one of the valid lines in set <set index> match <tag>. The word contents begin at offset <block offset> bytes from the beginning of the block.

[Figure: address A being matched against the valid bits and tags of the lines in the selected set.]
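A small C sketch of this decomposition, assuming 32-bit addresses and example parameters s = 2 and b = 4 (both assumptions, not fixed by the slide):

#include <stdio.h>
#include <stdint.h>

#define B_BITS 4                 /* b: block offset bits (B = 16 bytes) */
#define S_BITS 2                 /* s: set index bits   (S = 4 sets)    */

/* Split a 32-bit address into (tag, set index, block offset). */
static void addr_fields(uint32_t a, uint32_t *tag, uint32_t *set, uint32_t *off)
{
    *off = a & ((1u << B_BITS) - 1);
    *set = (a >> B_BITS) & ((1u << S_BITS) - 1);
    *tag = a >> (B_BITS + S_BITS);
}

int main(void)
{
    uint32_t tag, set, off;
    addr_fields(0x1234u, &tag, &set, &off);
    printf("tag=0x%x set=%u offset=%u\n", tag, set, off);
    return 0;
}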

Page 48: Memory Hierarchy

Direct-Mapped Cache

The simplest kind of cache, characterized by exactly one line per set (E = 1).

[Figure: sets 0 through S-1, each holding a single line: valid bit, tag, cache block.]

Page 49: Memory Hierarchy

Accessing Direct-Mapped Caches

Set selection: use the set index bits to determine the set of interest.

[Figure: the set index field of the address (e.g., 00001) selects set 1 out of sets 0 through S-1.]

Page 50: Memory Hierarchy

Accessing Direct-Mapped Caches

Line matching and word selection:
(1) The valid bit of the line in the selected set must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte.

[Figure: selected set i holds a valid line with tag 0110 and an 8-byte block (byte positions 0-7); the address's tag 0110 matches, and block offset 100 (= 4) selects the starting byte of the 4-byte word w0 w1 w2 w3 stored at offsets 4-7.]

Page 51: Memory Hierarchy

Direct-Mapped Cache Simulation

M = 16 byte addresses (m = 4 bits), B = 2 bytes/block, S = 4 sets, E = 1 entry/set, so addresses split as t = 1 tag bit, s = 2 set index bits, b = 1 offset bit.

Address trace (reads): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

(1) 0  [0000] miss: set 0 loads tag 0, data M[0-1]
(2) 1  [0001] hit:  set 0 already holds M[0-1]
(3) 13 [1101] miss: set 2 loads tag 1, data M[12-13]
(4) 8  [1000] miss: set 0 replaces M[0-1] with tag 1, data M[8-9]
(5) 0  [0000] miss: set 0 replaces M[8-9] with tag 0, data M[0-1]
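This trace can be reproduced with a few lines of C; a minimal sketch matching the t = 1, s = 2, b = 1 geometry above (data storage is omitted, only hit/miss is tracked):

#include <stdio.h>

#define SETS 4

/* One line per set: valid bit and tag (t = 1 bit here). */
static struct { int valid, tag; } cache[SETS];

/* Simulate one read of a 4-bit address; returns 1 on hit. */
static int access_addr(int addr)
{
    int set = (addr >> 1) & 0x3;   /* s = 2 set index bits */
    int tag = addr >> 3;           /* t = 1 tag bit */
    if (cache[set].valid && cache[set].tag == tag)
        return 1;                  /* hit */
    cache[set].valid = 1;          /* miss: load (and maybe evict) */
    cache[set].tag = tag;
    return 0;
}

int main(void)
{
    int trace[] = { 0, 1, 13, 8, 0 };
    for (int i = 0; i < 5; i++)
        printf("addr %2d: %s\n", trace[i],
               access_addr(trace[i]) ? "hit" : "miss");
    return 0;    /* prints miss, hit, miss, miss, miss */
}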

Page 52: Memory Hierarchy

Why Use Middle Bits as Index?

High-order bit indexing
- Adjacent memory lines would map to the same cache entry.
- Poor use of spatial locality.

Middle-order bit indexing
- Consecutive memory lines map to different cache lines.
- Can hold a C-byte region of the address space in the cache at one time.

[Figure: a 4-line cache (sets 00-11) against the 16 lines of a 4-bit address space (0000-1111). With high-order indexing, lines 0000-0011 all map to set 00; with middle-order indexing, consecutive lines cycle through sets 00, 01, 10, 11.]

Page 53: Memory Hierarchy

Set Associative Caches

Characterized by more than one line per set.

[Figure: sets 0 through S-1, each containing E = 2 lines; every line has a valid bit, a tag, and a cache block.]

Page 54: Memory Hierarchy

Accessing Set Associative Caches

Set selection is identical to a direct-mapped cache: the set index bits select the set.

[Figure: the set index field of the address (e.g., 00001) selects set 1; each set contains two lines.]

Page 55: Memory Hierarchy

Accessing Set Associative Caches

Line matching and word selection must compare the tag in each valid line in the selected set:
(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte.

[Figure: selected set i holds two valid lines with tags 1001 and 0110; the address's tag 0110 matches the second line, and the block offset selects the word w0 w1 w2 w3 from its 8-byte block.]
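Extending the direct-mapped sketch from slide 51 to E = 2 lines per set shows the effect of associativity; a minimal model with a deliberately naive replacement policy (always evict line 0), which is an assumption for brevity, not the policy of any real cache:

#include <stdio.h>

#define SETS 4
#define E    2                     /* lines per set (2-way) */

static struct { int valid, tag; } cache[SETS][E];

/* One read in a 2-way set associative cache; returns 1 on hit. */
static int access_addr(int addr)
{
    int set = (addr >> 1) & 0x3;   /* s = 2 set index bits */
    int tag = addr >> 3;

    for (int i = 0; i < E; i++)    /* compare the tag in every valid line */
        if (cache[set][i].valid && cache[set][i].tag == tag)
            return 1;

    for (int i = 0; i < E; i++)    /* miss: prefer an invalid line */
        if (!cache[set][i].valid) {
            cache[set][i].valid = 1;
            cache[set][i].tag = tag;
            return 0;
        }
    cache[set][0].tag = tag;       /* else naively evict line 0 */
    return 0;
}

int main(void)
{
    int trace[] = { 0, 13, 8, 0 }; /* addresses 0 and 8 now coexist in set 0 */
    for (int i = 0; i < 4; i++)
        printf("addr %2d: %s\n", trace[i],
               access_addr(trace[i]) ? "hit" : "miss");
    return 0;    /* the final read of 0 hits, unlike the direct-mapped case */
}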

Page 56: Memory Hierarchy

Multi-Level Caches

Options: separate data and instruction caches, or a unified cache.

Processor (regs, L1 d-cache, L1 i-cache) -> unified L2 cache -> memory -> disk

              Regs    L1 d/i caches   Unified L2     Main memory    Disk
size:         200 B   8-64 KB         1-4 MB SRAM    128 MB DRAM    30 GB
speed:        3 ns    3 ns            6 ns           60 ns          8 ms
$/MB:                                 $100/MB        $1.50/MB       $0.05/MB
line size:    8 B     32 B            32 B           8 KB

(larger, slower, and cheaper from left to right)

Page 57: Memory Hierarchy

Intel Pentium Cache Hierarchy

On the processor chip:
- Regs.
- L1 Data: 16 KB, 4-way set associative, write-through, 32 B lines, 1-cycle latency
- L1 Instruction: 16 KB, 4-way set associative, 32 B lines

Off chip:
- L2 Unified: 128 KB - 2 MB, 4-way set associative, write-back, write allocate, 32 B lines
- Main memory: up to 4 GB

Page 58: Memory Hierarchy

Cache Performance Metrics

Miss rate
- Fraction of memory references not found in the cache (misses/references).
- Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.

Hit time
- Time to deliver a line in the cache to the processor (includes time to determine whether the line is in the cache).
- Typical numbers: 1 clock cycle for L1; 3-8 clock cycles for L2.

Miss penalty
- Additional time required because of a miss.
- Typically 25-100 cycles for main memory.
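These three metrics combine into the standard average-access-time formula, hit time + miss rate x miss penalty. The formula itself is not on the slide, so take this as an illustrative check using values drawn from the typical ranges above:

#include <stdio.h>

int main(void)
{
    double hit_time = 1.0;       /* L1 hit, cycles */
    double miss_rate = 0.05;     /* 5%, within the 3-10% L1 range */
    double miss_penalty = 50.0;  /* within the 25-100 cycle range */

    /* Average access time = hit time + miss rate x miss penalty. */
    printf("avg access time = %.1f cycles\n",
           hit_time + miss_rate * miss_penalty);   /* 3.5 cycles */
    return 0;
}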

Page 59: Memory Hierarchy

Writing Cache Friendly Code

Repeated references to variables are good (temporal locality). Stride-1 reference patterns are good (spatial locality).

Examples: cold cache, 4-byte words, 4-word cache blocks.

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Miss rate = 100%

The row-major traversal misses once per 4-word block and then uses all four words in it (one miss per four accesses); the column-major traversal jumps a whole row ahead on every access, missing every time once N is large enough that blocks are evicted before they are revisited.

Page 60: Memory Hierarchy

The Memory Mountain

Read throughput (read bandwidth): the number of bytes read from memory per second (MB/s).

Memory mountain: measured read throughput as a function of spatial and temporal locality. A compact way to characterize memory system performance.

Page 61: Memory Hierarchy

Memory Mountain Test Function

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result; /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                      /* warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);   /* call test(elems,stride) */
    return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s */
}

Page 62: Memory Hierarchy

Memory Mountain Main Routine

/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)  /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)  /* ... up to 8 MB */
#define MAXSTRIDE 16        /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];         /* The array we'll be traversing */

int main()
{
    int size;               /* Working set size (in bytes) */
    int stride;             /* Stride (in array elements) */
    double Mhz;             /* Clock frequency */

    init_data(data, MAXELEMS);  /* Initialize each element in data to 1 */
    Mhz = mhz(0);               /* Estimate the clock frequency */
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}

Page 63: Memory Hierarchy

The Memory Mountain

[Figure: 3D surface of read throughput (MB/s, 0-1200) over stride (s1-s16, in words) and working set size (2 KB up to 8 MB), measured on a 550 MHz Pentium III Xeon with a 16 KB on-chip L1 d-cache, a 16 KB on-chip L1 i-cache, and a 512 KB off-chip unified L2 cache. Ridges of temporal locality run along the working-set axis (L1, L2, and mem regions); slopes of spatial locality run along the stride axis.]

Page 64: Memory Hierarchy

Ridges of Temporal Locality

A slice through the memory mountain with stride = 1 illuminates the read throughputs of the different caches and of memory.

[Figure: read throughput (MB/s, 0-1200) versus working set size (8 MB down to 1 KB), showing three plateaus: the main memory region, the L2 cache region, and the L1 cache region.]

Page 65: Memory Hierarchy

A Slope of Spatial Locality

A slice through the memory mountain with size = 256 KB shows the cache block size.

[Figure: read throughput (MB/s, 0-800) versus stride (s1-s16, in words); throughput falls as the stride grows until there is one access per cache line, after which it flattens out.]

Page 66: Memory Hierarchy

Concluding Observations

The programmer can optimize for cache performance:
- How data structures are organized
- How data are accessed
  - Nested loop structure
  - Blocking is a general technique

All systems favor "cache friendly code."
- Getting absolute optimum performance is very platform specific (cache sizes, line sizes, associativities, etc.).
- You can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality).