Cache Memory


Transcript of Cache Memory

Page 1: Cache Memory

Cache Memory

By JIA HUANG

Page 2: Cache Memory

"Computer Science has only three ideas: cache, hash, trash.“

- Greg Ganger, CMU

Page 3: Cache Memory

The idea of caching

Caching lets us access recently and frequently used data in a very short time.

Cache: fast but expensive; disks: cheap but slow.

Page 4: Cache Memory

Types of cache

CPU cache
Disk cache
Other caches
Proxy web cache

Page 5: Cache Memory

Usage of caching

Caching is used widely in:
Storage systems
Databases
Web servers
Middleware
Processors
Operating systems
RAID controllers
Many other applications

Page 6: Cache Memory

Cache Algorithms

Famous algorithms:
LRU (Least Recently Used)
LFU (Least Frequently Used)

Not-so-famous algorithms:
LRU-K
2Q
FIFO
others

Page 7: Cache Memory

LRU (Least Recently Used)

LRU is typically implemented with a linked list. It discards the least recently used items first.
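A minimal sketch of this idea in Python (not from the slides; OrderedDict stands in for the linked list, and the class and method names are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache. The OrderedDict plays the role of the linked
    list, ordered from least recently used (front) to most recently
    used (back)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                      # cache miss
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # discard the LRU item first
```

Every access moves the item to the MRU end, so the front of the list is always the next eviction candidate.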

Page 8: Cache Memory

LFU (Least Frequently Used)

LFU counts how often an item is needed; those that are used least often are discarded first.
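A matching LFU sketch (Python again, illustrative names; the linear eviction scan keeps the example short, where real implementations use a heap or frequency buckets):

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU cache: counts how often each item is needed and
    discards the least often used item first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}
        self.counts = defaultdict(int)

    def get(self, key):
        if key not in self.items:
            return None                  # cache miss
        self.counts[key] += 1            # count every use
        return self.items[key]

    def put(self, key, value):
        if key not in self.items and len(self.items) >= self.capacity:
            # Evict the least frequently used item (O(n) scan for brevity).
            victim = min(self.items, key=self.counts.__getitem__)
            del self.items[victim]
            del self.counts[victim]
        self.items[key] = value
        self.counts[key] += 1
```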

Page 9: Cache Memory

LRU vs. LFU

The fundamental locality principle claims that if a process visits a location in memory, it will probably revisit that location and its neighborhood soon.

The advanced locality principle claims that the probability of revisiting a location increases with the number of visits so far.
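A toy illustration of both principles (a hypothetical example, not from the slides):

```python
# Summing an array exhibits both kinds of locality:
# - spatial: data[i] and data[i + 1] sit at neighboring addresses, so one
#   fetched cache line serves several consecutive iterations;
# - temporal: `total` and `i` are revisited on every iteration, so they
#   stay cached (or in registers) for the whole loop.
data = list(range(1_000_000))
total = 0
for i in range(len(data)):
    total += data[i]
```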

Page 10: Cache Memory

Disadvantages

LRU: has a problem when a process scans a huge database, since a long one-time scan evicts the entire useful working set.

LFU: copes better with such scans, but in practice it can make performance even worse, since pages with high historical counts linger long after they stop being useful.

Page 11: Cache Memory

Can there be a better algorithm?

Yes.

Page 12: Cache Memory

New algorithm

ARC (Adaptive Replacement Cache)

- It combines the virtues of LRU and LFU while avoiding the vices of both. The basic idea behind ARC is to adaptively, dynamically and relentlessly balance between "recency" and "frequency" to achieve a high hit ratio.

- Invented by IBM in 2003 (Almaden Research Center, San Jose)

Page 13: Cache Memory

How does it work?

Page 14: Cache Memory

ARC

L1: pages seen once recently ("recency")

L2: pages seen at least twice recently ("frequency")

If L1 contains exactly c pages, replace the LRU page in L1; else, replace the LRU page in L2 (sketched in code below).

Lemma: the c most recent pages are in the union of L1 and L2.

[Slide diagram: two lists, L1 and L2, each ordered from MRU to LRU.]
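Read as code, the rule is short (a sketch with illustrative names; the two-list scheme on this slide is what the ARC paper calls DBL(2c), and the lists are assumed ordered from LRU at the front to MRU at the back):

```python
def replace(L1, L2, c):
    """Evict one page: if L1 holds exactly c pages, drop L1's LRU page;
    otherwise drop L2's LRU page."""
    if len(L1) == c:
        return L1.pop(0)   # front of the list is the LRU page
    return L2.pop(0)
```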

Page 15: Cache Memory

ARC

Divide L1 into T1 (top) and B1 (bottom).

Divide L2 into T2 (top) and B2 (bottom).

T1 and T2 together contain c pages that are in the cache and in the directory.

B1 and B2 together contain c pages that are in the directory but not in the cache.

If T1 contains more than p pages, replace the LRU page in T1; else, replace the LRU page in T2 (see the combined sketch after Page 16).

[Slide diagram: L1 split into T1 and B1, L2 split into T2 and B2; each list ordered from MRU to LRU.]

Page 16: Cache Memory

Adapt the target size of T1 to the observed workload.

A self-tuning algorithm:
hit in T1 or T2: do nothing
hit in B1: increase the target size of T1
hit in B2: decrease the target size of T1

L2L2"frequency"

L1L1 "recency"

Midpoint

ARCARC
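Putting Pages 14-16 together, a simplified sketch (it follows the slides' rules; the unit step for p and the ghost-list trimming are simplifications of IBM's full algorithm, and all names are illustrative):

```python
from collections import OrderedDict

class ARCSketch:
    """Simplified ARC: T1/T2 hold cached pages (recency/frequency sides),
    B1/B2 are ghost lists holding only the keys of recently evicted
    pages, and p is the adaptive target size of T1."""

    def __init__(self, c):
        self.c = c                                       # cache size in pages
        self.p = 0                                       # target size of T1
        self.T1, self.T2 = OrderedDict(), OrderedDict()  # in cache
        self.B1, self.B2 = OrderedDict(), OrderedDict()  # in directory only

    def _replace(self):
        # Page 15's rule: if T1 has more than p pages, evict T1's LRU page
        # into ghost list B1; otherwise evict T2's LRU page into B2.
        if self.T1 and (len(self.T1) > self.p or not self.T2):
            key, _ = self.T1.popitem(last=False)
            self.B1[key] = None
        else:
            key, _ = self.T2.popitem(last=False)
            self.B2[key] = None
        # Keep each ghost list bounded by c (a simplification).
        while len(self.B1) > self.c:
            self.B1.popitem(last=False)
        while len(self.B2) > self.c:
            self.B2.popitem(last=False)

    def access(self, key):
        if key in self.T1:                    # hit: seen twice now, move to T2
            self.T1.pop(key)
            self.T2[key] = None
        elif key in self.T2:                  # hit: refresh recency within T2
            self.T2.move_to_end(key)
        elif key in self.B1:                  # ghost hit: recency would have
            self.p = min(self.c, self.p + 1)  # helped, so grow T1's target
            self.B1.pop(key)
            self._replace()
            self.T2[key] = None
        elif key in self.B2:                  # ghost hit: frequency would have
            self.p = max(0, self.p - 1)       # helped, so shrink T1's target
            self.B2.pop(key)
            self._replace()
            self.T2[key] = None
        else:                                 # miss everywhere: seen once, into T1
            if len(self.T1) + len(self.T2) >= self.c:
                self._replace()
            self.T1[key] = None
```

Note how a one-time scan only churns T1 and its ghost list B1: scanned pages are never revisited, so p never grows on their account and the frequently used pages in T2 survive. This is the behavior behind the sequential-workload claim on the next slide.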

Page 17: Cache Memory

ARC

ARC has low space complexity. A realistic implementation had a total space overhead of less than 0.75%.

ARC has low time complexity, virtually identical to that of LRU.

ARC is self-tuning and adapts to different workloads and cache sizes. In particular, it gives very little cache space to sequential workloads, thus avoiding a key limitation of LRU.

ARC outperforms LRU for a wide range of workloads.

Page 18: Cache Memory

Example

For a huge, real-life workload generated by a large commercial search engine with a 4GB cache, ARC's hit ratio was dramatically better than that of LRU (40.44 percent vs. 27.62 percent).

-IBM (Almaden Research Center)

Page 19: Cache Memory

ARC vs. LRU

Page 20: Cache Memory

ARC vs. LRU

Page 21: Cache Memory

ARC

Currently, ARC is a research prototype and will be available to customers via many of IBM's existing and future products.

Page 22: Cache Memory

References

http://en.wikipedia.org/wiki/Caching#Other_caches

http://www.cs.biu.ac.il/~wiseman/2os/2os/os2.pdf

http://www.almaden.ibm.com/StorageSystems/autonomic_storage/ARC/index.shtml

http://www.almaden.ibm.com/cs/people/dmodha/arc-fast.pdf