15-447 Computer Architecture Fall 2007 ©
November 7th, 2007
Majd F. Sakr
www.qatar.cmu.edu/~msakr/15447-f07/
CS-447 – Computer Architecture
M, W 10-11:20am
Lecture 19: Memory Hierarchy
During This Lecture
° Introduction to the Memory Hierarchy
• Processor-Memory Gap
• Locality
• Latency Hiding
The Big Picture
° Computer
  • Processor (active): Control (“brain”) and Datapath (“brawn”)
  • Memory (passive): where programs and data live when running
  • Devices: Input (keyboard, mouse) and Output (display, printer); disk and network act as both
Processor-DRAM Memory Gap (latency)
(Figure: performance vs. time, 1980–2000, log scale from 1 to 1000. CPU performance improves ~60%/yr (2X/1.5 yr, “Moore’s Law”) while DRAM improves ~9%/yr (2X/10 yrs); the processor-memory performance gap grows about 50% per year.)
Memories: SRAM
° Value is stored on a pair of inverting gates
° Very fast, but takes up more space than DRAM (4 to 6 transistors)
Memories: DRAM
° Value is stored as a charge on a capacitor (must be refreshed)
° Very small, but slower than SRAM (by a factor of 5 to 10)
(Figure: DRAM cell: word line, pass transistor, bit line, capacitor)
Memory (2004)
° Users want large and fast memories!
° SRAM access times are 0.5–5 ns, at a cost of $4,000 to $10,000 per GB.
° DRAM access times are 50–70 ns, at a cost of $100 to $200 per GB.
° Disk access times are 5 to 20 million ns, at a cost of $0.50 to $2 per GB.
Storage Trends
DRAM
metric             1980    1985    1990    1995    2000    2005     2005:1980
$/MB               8,000   880     100     30      1       0.20     40,000
access (ns)        375     200     100     70      60      50       8
typical size (MB)  0.064   0.256   4       16      64      1,000    15,000

SRAM
metric             1980    1985    1990    1995    2000    2005     2005:1980
$/MB               19,200  2,900   320     256     100     75       256
access (ns)        300     150     35      15      12      10       30

Disk
metric             1980    1985    1990    1995    2000    2005     2005:1980
$/MB               500     100     8       0.30    0.05    0.001    10,000
access (ms)        87      75      28      10      8       4        22
typical size (MB)  1       10      160     1,000   9,000   400,000  400,000
CPU Clock Rates
                   1980    1985    1990    1995      2000    2005    2005:1980
processor          8080    286     386     Pentium   P-III   P-4
clock rate (MHz)   1       6       20      150       750     3,000   3,000
cycle time (ns)    1,000   166     50      6         1.3     0.3     3,333
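Taken together with the DRAM numbers above, these figures make the gap concrete. A rough back-of-the-envelope calculation using the 2005 column (values rounded from the tables above): a DRAM access of roughly 50 ns divided by a cycle time of roughly 0.3 ns is about 50 / 0.3 ≈ 167 CPU cycles per memory access. In 1980 the same ratio was about 375 / 1,000, i.e. less than one cycle, so main memory has gone from being faster than the processor to slower by more than two orders of magnitude.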
The CPU-Memory Gap
(Figure: access time in ns, log scale from 1 to 100,000,000, plotted for 1980–2005: disk seek time, DRAM access time, SRAM access time, and CPU cycle time.)
The gap widens between DRAM, disk, and CPU speeds.
Locality
° Principle of Locality:
  • Programs tend to reuse data and instructions near those they have used recently, or that were recently referenced themselves.
  • Temporal locality: recently referenced items are likely to be referenced again in the near future.
  • Spatial locality: items with nearby addresses tend to be referenced close together in time.
° Locality example:

  sum = 0;
  for (i = 0; i < n; i++)
      sum += a[i];
  return sum;

  • Data
    – Reference array elements in succession (stride-1 reference pattern): spatial locality
    – Reference sum each iteration: temporal locality
  • Instructions
    – Reference instructions in sequence: spatial locality
    – Cycle through loop repeatedly: temporal locality
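To make the stride-1 case above concrete, here is a minimal sketch (the array and its size are made up for this example, not from the slides) that prints the addresses of consecutive array elements; they differ by exactly sizeof(int), which is why successive references usually fall in the same cache block:

#include <stdio.h>

int main(void)
{
    int a[8];
    int i;

    /* C lays the array out contiguously: the address of a[i] is the
       address of a[0] plus i * sizeof(int), so a stride-1 loop touches
       one cache block several times before moving on to the next. */
    for (i = 0; i < 8; i++)
        printf("&a[%d] = %p\n", i, (void *)&a[i]);
    return 0;
}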
Locality Example
° Claim: Being able to look at code and get a qualitative sense of its locality is a key skill for a professional programmer.
° Question: Does this function have good locality?
int sum_array_rows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
Locality Example
° Question: Does this function have good locality?

int sum_array_cols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
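One way to check the answer empirically is to time both traversals on the same array. A minimal sketch, assuming concrete values for M and N (2000 x 2000 here, large enough that the array does not fit in cache; the sizes and the clock()-based timing are illustrative choices, not from the slides):

#include <stdio.h>
#include <time.h>

#define M 2000
#define N 2000

static int a[M][N];

/* Same two functions as on the slides above. */
int sum_array_rows(int a[M][N])
{
    int i, j, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

int sum_array_cols(int a[M][N])
{
    int i, j, sum = 0;
    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

int main(void)
{
    clock_t t0;

    t0 = clock();
    int r = sum_array_rows(a);
    printf("rows: sum=%d, %.3f s\n", r, (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    int c = sum_array_cols(a);
    printf("cols: sum=%d, %.3f s\n", c, (double)(clock() - t0) / CLOCKS_PER_SEC);

    return 0;
}

On a typical machine the row-wise version is noticeably faster, because it visits the array in stride-1 order and so matches C's row-major layout, while the column-wise version jumps N * sizeof(int) bytes between consecutive accesses.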
Memory Hierarchy (1/3)
° Processor
  • executes instructions on the order of nanoseconds to picoseconds
  • holds a small amount of code and data in registers
° Memory
  • more capacity than registers, still limited
  • access time ~50-100 ns
° Disk
  • HUGE capacity (virtually limitless)
  • VERY slow: runs in ~milliseconds
Memory Hierarchy (2/3)
(Figure: pyramid with the Processor at the top and Level 1, Level 2, Level 3, …, Level n below; the size of memory at each level grows, and speed decreases, with increasing distance from the processor. Higher levels in the hierarchy sit closer to the processor, lower levels farther away.)
° As we move to deeper levels the latency goes up and the price per bit goes down.
Memory Hierarchy (3/3)
° If a level is closer to the Processor, it must be:
  • smaller
  • faster
  • a subset of lower levels (contains most recently used data)
° Lowest level (usually disk) contains all available data
° Other levels?
Memory Caching
° We’ve discussed three levels in the hierarchy: processor, memory, disk
° Mismatch between processor and memory speeds leads us to add a new level: a memory cache
° Implemented with SRAM technology
Memory Hierarchy Analogy: Library (1/2)
° You’re writing a term paper (Processor) at a table in the library
° Library is equivalent to disk
  • essentially limitless capacity
  • very slow to retrieve a book
° Table is memory
  • smaller capacity: you must return a book when the table fills up
  • easier and faster to find a book there once you’ve already retrieved it
Memory Hierarchy Analogy: Library (2/2)
° Open books on the table are the cache
  • smaller capacity: only a few open books fit on the table; again, when the table fills up, you must close a book
  • much, much faster to retrieve data
° Illusion created: the whole library is open on the tabletop
  • Keep as many recently used books open on the table as possible, since you are likely to use them again
  • Also keep as many books on the table as possible, since that is faster than going back to the library
Memory Hierarchy Basis
° Disk contains everything.
° When the Processor needs something, bring it into all higher levels of memory.
° Cache contains copies of data in memory that are being used.
° Memory contains copies of data on disk that are being used.
° The entire idea is based on Temporal Locality: if we use it now, we’ll want to use it again soon (a Big Idea).
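To make “copies of data in memory that are being used” concrete, here is a minimal sketch of a direct-mapped cache lookup in C. All of the names and sizes (cache_read, CACHE_LINES, BLOCK_WORDS, the simulated memory array) are hypothetical choices for illustration, not part of the course material:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINES 64          /* hypothetical: 64 lines in the cache  */
#define BLOCK_WORDS 8           /* hypothetical: 8 words per block      */
#define MEM_WORDS   (1 << 20)   /* hypothetical: 1M-word "main memory"  */

typedef struct {
    int      valid;                 /* does this line hold a copy?       */
    uint32_t tag;                   /* which memory block it copies      */
    uint32_t data[BLOCK_WORDS];     /* the copied words themselves       */
} cache_line_t;

static cache_line_t cache[CACHE_LINES];
static uint32_t     memory[MEM_WORDS];

/* Read one word through the cache: hit if the cache already holds a copy
   of the word's block; on a miss, copy the whole block in from memory
   (exploiting spatial locality) and keep it around for reuse
   (exploiting temporal locality). */
uint32_t cache_read(uint32_t addr)
{
    uint32_t block  = addr / BLOCK_WORDS;
    uint32_t offset = addr % BLOCK_WORDS;
    uint32_t index  = block % CACHE_LINES;
    uint32_t tag    = block / CACHE_LINES;

    cache_line_t *line = &cache[index];
    if (!line->valid || line->tag != tag) {               /* miss */
        memcpy(line->data, &memory[block * BLOCK_WORDS], sizeof line->data);
        line->valid = 1;
        line->tag   = tag;
    }
    return line->data[offset];                            /* hit  */
}

int main(void)
{
    memory[0] = 42;
    memory[1] = 7;
    printf("%u\n", (unsigned)cache_read(0));  /* miss: block 0 copied into cache */
    printf("%u\n", (unsigned)cache_read(1));  /* hit: same block, copy present   */
    return 0;
}

A real cache does this in hardware, of course; the point of the sketch is only that a miss pulls in a whole block (spatial locality) and the copy then stays in the cache for later reuse (temporal locality).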