CS252/CullerLec 5.1
2/5/02
CS203A Computer Architecture, Lecture 15
Cache and Memory Technology and Virtual Memory
Main Memory Background
• Random Access Memory (vs. Serial Access Memory)
• Different flavors at different levels
  – Physical makeup (CMOS, DRAM)
  – Low-level architectures (FPM, EDO, BEDO, SDRAM)
• Cache uses SRAM: Static Random Access Memory
  – No refresh (6 transistors/bit vs. 1 transistor/bit)
  – Size ratio DRAM/SRAM: 4-8; cost and cycle time ratio SRAM/DRAM: 8-16
• Main Memory is DRAM: Dynamic Random Access Memory
  – Dynamic since it needs to be refreshed periodically
  – Addresses divided into 2 halves (memory as a 2D matrix):
    » RAS or Row Access Strobe
    » CAS or Column Access Strobe
Static RAM (SRAM)
• Six transistors in a cross-coupled configuration
  – Provides regular AND inverted outputs
  – Implemented in a CMOS process
[Figure: single-port 6-T SRAM cell]
Dynamic RAM
• SRAM cells exhibit high speed but poor density
• DRAM: simple transistor/capacitor pairs in high-density form
[Figure: DRAM cell array – a storage capacitor C gated by the word line onto the bit line, read out through a sense amp]
DRAM logical organization (4 Mbit)
• Square root of bits per RAS/CAS: the 2,048 x 2,048 memory array takes an 11-bit address (A0…A10), used first by the row decoder and then by the column decoder, with sense amps & I/O driving the D/Q pins
• Access time of DRAM = row access time + column access time + refresh overhead
[Figure: 4 Mbit DRAM – row decoder, 2,048 x 2,048 array of storage cells on word lines, sense amps & I/O, column decoder]
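The row/column address split above can be sketched in a few lines. This is an illustrative model of the multiplexed addressing, assuming the 2,048 x 2,048 organization on the slide (11 row bits, 11 column bits); the function names are made up, not a real memory-controller API.

```python
# Sketch: splitting a cell address for a 4 Mbit DRAM organized as a
# 2,048 x 2,048 array. The 22-bit address is sent in two 11-bit halves:
# the row (latched on RAS), then the column (latched on CAS).

ROW_BITS = 11   # 2**11 = 2,048 rows
COL_BITS = 11   # 2**11 = 2,048 columns

def split_dram_address(addr):
    """Return the (row, column) halves strobed with RAS and CAS."""
    assert 0 <= addr < 2 ** (ROW_BITS + COL_BITS)
    row = addr >> COL_BITS               # high half, strobed with RAS
    col = addr & ((1 << COL_BITS) - 1)   # low half, strobed with CAS
    return row, col

print(split_dram_address((1025 << COL_BITS) | 3))  # (1025, 3)
```

Multiplexing the address this way is what lets a 4 Mbit part get by with only 11 address pins.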
Other Types of DRAM
• Synchronous DRAM (SDRAM): can transfer a burst of data given a starting address and a burst length – well suited to transferring a block of data from main memory to cache
• Page Mode DRAM: all bits on the same row (spatial locality)
  – No need to wait for the wordline to recharge
  – Toggle CAS with a new column address
• Extended Data Out (EDO)
  – Overlaps data output with the CAS toggle
  – Later sibling: Burst EDO (CAS toggle used to get the next address)
• Rambus DRAM (RDRAM): pipelined control
Main Memory Organizations
[Figure: three organizations –
  (a) one-word-wide memory organization: CPU, cache, bus, memory;
  (b) wide memory organization: CPU, multiplexor, cache, bus, wide memory;
  (c) interleaved memory organization: CPU, cache, bus, memory banks 0-3]
DRAM access time >> bus transfer time
Memory Interleaving
Interleaved memory is more flexible than wide-access memory in that it can handle multiple independent accesses at once.
[Figure: four-way interleaved memory – each address is dispatched, based on its 2 LSBs, to the module holding addresses that are 0, 1, 2, or 3 mod 4; the timing diagram shows the short bus cycles of modules 0-3 overlapped with their much longer memory cycles]
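The dispatch step in the figure reduces to taking an address mod the number of banks. A minimal sketch, with illustrative names, assuming four banks selected by the 2 LSBs of the word address:

```python
# Four-way interleaving: route each word address to a bank by its
# 2 LSBs, so consecutive addresses land in different banks and their
# memory cycles can overlap.

NUM_BANKS = 4  # must be a power of two for the mask trick below

def bank_of(word_addr):
    """Bank selected by the 2 LSBs (word_addr mod 4)."""
    return word_addr & (NUM_BANKS - 1)

# Four consecutive word addresses fall in four different banks:
print([bank_of(a) for a in range(100, 104)])  # [0, 1, 2, 3]
```

Because the four accesses go to four different banks, their slow DRAM cycles proceed in parallel while the bus carries one word at a time.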
Memory Access Time Example
• Assume it takes 1 cycle to send the address, 15 cycles for each DRAM access, and 1 cycle to send a word of data.
• With a cache block of 4 words and one-word-wide DRAM, miss penalty = 1 + 4x15 + 4x1 = 65 cycles.
• With a main memory and bus width of 2 words, miss penalty = 1 + 2x15 + 2x1 = 33 cycles. For 4-word-wide memory, miss penalty = 1 + 15 + 1 = 17 cycles. Expensive due to the wide bus and control circuits.
• With interleaved memory of 4 banks and a one-word-wide bus, miss penalty = 1 + 1x15 + 4x1 = 20 cycles: the banks' DRAM accesses overlap, then the 4 words transfer one per bus cycle. The memory controller must supply consecutive addresses to different banks. Interleaving is universally adopted in high-performance computers.
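The four penalties above can be recomputed directly. A small sketch under the slide's assumptions (1 cycle for the address, 15 per DRAM access, 1 per word on the bus); the function name and parameters are illustrative:

```python
# Miss penalty for fetching a 4-word cache block under the slide's
# timing assumptions.

ADDR, DRAM, XFER = 1, 15, 1   # cycles: send address, DRAM access, bus word
BLOCK_WORDS = 4

def miss_penalty(words_per_access, banks=1):
    """words_per_access: width of memory and bus, in words.
    banks: number of interleaved banks (bus stays one word wide)."""
    if banks > 1:
        # Bank accesses overlap: one DRAM latency, then one bus cycle per word.
        return ADDR + DRAM + BLOCK_WORDS * XFER
    accesses = BLOCK_WORDS // words_per_access
    return ADDR + accesses * DRAM + accesses * XFER

print(miss_penalty(1))           # 65  one-word-wide memory
print(miss_penalty(2))           # 33  two-word-wide memory
print(miss_penalty(4))           # 17  four-word-wide memory
print(miss_penalty(1, banks=4))  # 20  four-way interleaved
```

Note interleaving gets close to the 4-word-wide figure (20 vs. 17 cycles) without paying for a wide bus.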
Virtual Memory
• Idea 1: Many programs share DRAM memory so that context switches can occur
• Idea 2: Allow a program to be written without memory constraints – the program can exceed the size of main memory
• Idea 3: Relocation: parts of the program can be placed at different locations in memory instead of one big chunk
• Virtual Memory:
  (1) DRAM memory holds many programs running at the same time (processes)
  (2) DRAM memory is used as a kind of "cache" for disk
Memory Hierarchy: The Big Picture
[Figure: data movement in a memory hierarchy – words move between registers and cache (transferred explicitly via load/store), lines move between cache and main memory (transferred automatically upon a cache miss), and pages move between main memory and virtual memory on disk (transferred automatically upon a page fault)]
Virtual Memory has its own terminology
• Each process has its own private "virtual address space" (e.g., 2^32 bytes); the CPU actually generates "virtual addresses"
• Each computer has a "physical address space" (e.g., 128 megabytes of DRAM); also called "real memory"
• Address translation: mapping virtual addresses to physical addresses
  – Allows multiple programs to use (different chunks of physical) memory at the same time
  – Also allows some chunks of virtual memory to be represented on disk, not in main memory (to exploit the memory hierarchy)
Mapping Virtual Memory to Physical Memory
• Divide memory into equal-sized "chunks" (say, 4 KB each)
• Any chunk of virtual memory can be assigned to any chunk of physical memory (a "page")
[Figure: a single process's virtual memory (code, static, heap, and stack regions, addresses starting at 0) mapped onto scattered pages of a 64 MB physical memory]
Handling Page Faults
• A page fault is like a cache miss
  – Must find the page in a lower level of the hierarchy
• If the valid bit is zero, the Physical Page Number points to a page on disk
• When the OS starts a new process, it creates space on disk for all the pages of the process, sets all valid bits in the page table to zero, and sets all Physical Page Numbers to point to disk
  – Called Demand Paging: pages of the process are loaded from disk only as needed
Comparing the 2 levels of hierarchy

                Cache                               Virtual Memory
Block or Line                                       Page
Miss                                                Page Fault
Block Size:     32-64 B                             Page Size: 4K-16KB
Placement:      Direct Mapped, N-way Set Assoc.     Fully Associative
Replacement:    LRU or Random                       LRU approximation
Write Policy:   Write Through or Write Back         Write Back
How Managed:    Hardware                            Hardware + Software (Operating System)
How to Perform Address Translation?
• VM divides memory into equal-sized pages
• Address translation relocates entire pages
  – Offsets within the pages do not change
  – If the page size is a power of two, the virtual address separates into two fields: Virtual Page Number | Page Offset
  – Like the cache index and offset fields
Mapping Virtual to Physical Address
[Figure: with a 1 KB page size, virtual address bits 31-10 form the Virtual Page Number and bits 9-0 the Page Offset; translation replaces the Virtual Page Number with a Physical Page Number (bits 29-10 of the physical address) while the Page Offset passes through unchanged]
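The split-and-translate step above is a shift and a mask. A minimal sketch for the 1 KB page size in the figure; the tiny page mapping here is a made-up example, not a real one:

```python
# Virtual-to-physical translation for a 1 KB page size: the low 10 bits
# (page offset) pass through unchanged; the upper bits (virtual page
# number) are replaced by a physical page number.

PAGE_SIZE = 1024     # 1 KB -> 10 offset bits
OFFSET_BITS = 10

def translate(vaddr, page_map):
    vpn = vaddr >> OFFSET_BITS           # bits 31..10
    offset = vaddr & (PAGE_SIZE - 1)     # bits 9..0, unchanged
    ppn = page_map[vpn]                  # KeyError here would mean an unmapped page
    return (ppn << OFFSET_BITS) | offset

page_map = {5: 42}                       # virtual page 5 -> physical page 42
print(hex(translate(5 * PAGE_SIZE + 0x7, page_map)))  # 0xa807
```

Only the page number goes through the mapping; that is why translation can relocate entire pages while offsets within a page never change.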
Address Translation
• Want fully associative page placement
• How to locate the physical page?
• Search is impractical (too many pages)
• A page table is a data structure that contains the mapping of virtual pages to physical pages
  – There are several different ways, all up to the operating system, to keep this data around
• Each process running in the system has its own page table
Address Translation: Page Table
[Figure: the virtual address (VA) splits into a virtual page number and an offset; the virtual page number indexes into the page table, which is located in physical memory at the address held in the Page Table Register; each entry holds a valid bit, access rights, and a Physical Page Number; the Physical Page Number plus the offset form the physical memory address (PA); invalid entries point to disk]
Access Rights: None, Read Only, Read/Write, Executable
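The lookup in the figure can be sketched as a table indexed by virtual page number, with the valid bit and access rights checked before the physical page number is used. All names here are illustrative; a real OS keeps these bits packed into hardware-defined entry formats.

```python
# Minimal page-table walk: each entry holds a valid bit, access
# rights, and a physical page number. An invalid entry means the page
# lives on disk, so the lookup raises a page fault for the OS to handle.

from collections import namedtuple

PTE = namedtuple("PTE", "valid rights ppn")  # one page table entry

class PageFault(Exception):
    pass

def lookup(page_table, vpn, access="R"):
    pte = page_table[vpn]
    if not pte.valid:
        raise PageFault(f"virtual page {vpn} is on disk")
    if access not in pte.rights:
        raise PageFault(f"{access} access not allowed on page {vpn}")
    return pte.ppn

page_table = [PTE(True, "RW", 7), PTE(False, "", 0)]
print(lookup(page_table, 0))   # 7
try:
    lookup(page_table, 1)      # invalid entry -> page fault
except PageFault as e:
    print(e)
```

Folding the access-rights check into the same lookup is what makes the page table double as the protection mechanism described later.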
Page Tables and Address Translation
[Figure: the role of the page table in the virtual-to-physical address translation process – the page table register points to the page table in main memory; the virtual page number selects an entry whose valid bit and other flags qualify the translation]
Protection and Sharing in Virtual Memory
[Figure: virtual memory as a facilitator of sharing and memory protection – the page tables of process 1 and process 2 hold pointers and flags; entries may point to main memory or to disk; permission bits mark some shared pages as allowing only read accesses and others as allowing read & write accesses]
Optimizing for Space
• Page Table too big!
  – 4 GB virtual address space ÷ 4 KB page = 2^20 (~1 million) page table entries
  – 4 MB just for the page table of a single process!
• Variety of solutions trade page table size for slower performance when a miss occurs in the TLB:
  – Use a limit register to restrict page table size and let it grow with more pages
  – Multilevel page tables, paging the page tables, etc.
  (Take an O/S class to learn more)
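The "4 MB per process" figure follows from the arithmetic above. A quick check, assuming (for illustration) 4-byte page table entries, which the slide's 4 MB total implies:

```python
# Size of a flat page table for a 4 GB virtual address space with
# 4 KB pages and 4-byte entries (entry size assumed).

VA_SPACE = 2 ** 32       # 4 GB virtual address space
PAGE_SIZE = 4 * 2 ** 10  # 4 KB pages
PTE_SIZE = 4             # bytes per page table entry (assumed)

entries = VA_SPACE // PAGE_SIZE
table_bytes = entries * PTE_SIZE
print(entries)                 # 1048576 entries (2**20, ~1 million)
print(table_bytes // 2 ** 20)  # 4 MB per process
```

With hundreds of processes, flat tables at 4 MB each are clearly untenable, which motivates the multilevel and paged-page-table schemes the slide mentions.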
How to Translate Fast?
• Problem: virtual memory requires two memory accesses!
  – One to translate the virtual address into a physical address (page table lookup)
  – One to transfer the actual data (cache hit)
  – But the page table is in physical memory! => 2 main memory accesses!
• Observation: since there is locality in pages of data, there must be locality in the virtual addresses of those pages!
• Why not create a cache of virtual-to-physical address translations to make translation fast? (smaller is faster)
• For historical reasons, such a "page table cache" is called a Translation Lookaside Buffer, or TLB
Typical TLB Format

   "tag"              "data"
Virtual Page Nbr | Physical Page Nbr | Valid | Ref | Dirty | Access Rights

• TLB is just a cache of the page table mappings
• Dirty: since write back is used, need to know whether or not to write the page to disk when replaced
• Ref: used to calculate LRU on replacement
• TLB access time is comparable to cache (much less than main memory access time)
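The "cache of page table mappings" idea can be sketched as a tiny fully associative TLB with LRU replacement. This is a behavioral model only (an OrderedDict stands in for the hardware's parallel tag match, and a plain dict stands in for the page table in memory); all names are illustrative.

```python
# A TLB modeled as a small fully associative cache of vpn -> ppn
# mappings with LRU replacement.

from collections import OrderedDict

class TLB:
    def __init__(self, page_table, capacity=64):
        self.page_table = page_table     # vpn -> ppn: the slow path
        self.capacity = capacity
        self.entries = OrderedDict()     # vpn -> ppn, most recently used last

    def translate(self, vpn):
        if vpn in self.entries:              # TLB hit
            self.entries.move_to_end(vpn)    # refresh LRU order (the Ref bit's job)
            return self.entries[vpn]
        ppn = self.page_table[vpn]           # TLB miss: walk the page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False) # evict the least recently used entry
        self.entries[vpn] = ppn
        return ppn

tlb = TLB({1: 10, 2: 20, 3: 30}, capacity=2)
print(tlb.translate(1), tlb.translate(2))  # 10 20  (two misses fill the TLB)
print(tlb.translate(1))                    # 10     (hit)
tlb.translate(3)                           # miss: evicts vpn 2, the LRU entry
print(2 in tlb.entries)                    # False
```

Real TLBs approximate LRU with a Ref bit rather than maintaining an exact ordering, but the hit/miss/evict behavior is the same.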
Translation Look-Aside Buffers
• TLB is usually small, typically 32-4,096 entries
• Like any other cache, the TLB can be fully associative, set associative, or direct mapped
[Figure: the processor sends a virtual address to the TLB; on a TLB hit, the physical address goes to the cache, which returns data on a hit or fetches from main memory on a miss; on a TLB miss, the page table is consulted; a page fault or protection violation traps to the OS fault handler, which may go to disk]
Translation Lookaside Buffer
[Figure: virtual-to-physical address translation by a TLB, and how the resulting physical address is used to access the cache – the virtual page number is compared against the TLB tags (each entry carrying valid bits and other flags); when the tags match and the entry is valid, the physical page number joins the byte offset to form the physical address, whose fields then serve as the physical address tag, cache index, and byte offset in word]
Example: DECStation 3100/MIPS R2000
[Figure: the 32-bit virtual address (bits 31-12 = 20-bit virtual page number, bits 11-0 = 12-bit page offset) is translated by a 64-entry, fully associative TLB, each entry holding valid, dirty, tag, and physical page number fields; on a TLB hit the 20-bit physical page number joins the page offset to form the physical address, which splits into a 16-bit physical address tag, a 14-bit cache index, and a 2-bit byte offset for a 16K-entry, direct-mapped cache of valid/tag/data entries that signals a cache hit when the tag matches]
Test 2, Dec 5 (Tuesday)
Topics:
1. Compiler + VLIW: Sections 4.1-4.3, Lecture 9 and paper 9a
2. Multithreading and multimedia: Lecture 10
3. Network Processors: Lectures 11, 11a and 12
4. Cache Memory: Sections 5.1-5.7 and Lectures 13, 14
5. Memory Organization and Virtual Memory: Sections 5.8-5.10 and Lecture 15
Project II due Dec 7 in class