Cache

From ScienceZero

A cache is a transparent holding place for frequently used data. Its main purpose is to reduce the average latency of retrieving data, but it can also greatly reduce traffic to and from the main storage for common access patterns.

CPU <--> Cache <--> Storage
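The arrangement above can be sketched in a few lines of Python. This is an illustrative model, not from the article: the `Storage` and `Cache` classes and the access counter are assumptions chosen to show how repeat reads are served from the cache instead of the slower storage.

```python
class Storage:
    """Slow backing store; counts reads to show how traffic is reduced."""
    def __init__(self, data):
        self.data = dict(data)
        self.reads = 0

    def read(self, key):
        self.reads += 1
        return self.data[key]


class Cache:
    """Transparent cache: holds recently read elements between CPU and storage."""
    def __init__(self, storage):
        self.storage = storage
        self.lines = {}

    def read(self, key):
        if key not in self.lines:              # miss: fetch from storage
            self.lines[key] = self.storage.read(key)
        return self.lines[key]                 # hit: served from the cache


storage = Storage({"a": 1, "b": 2})
cache = Cache(storage)
for _ in range(10):
    cache.read("a")                            # 10 CPU reads...
print(storage.reads)                           # -> 1 (only one reached storage)
```

The cache is "transparent" here in the sense that the caller uses the same `read(key)` interface whether or not a cache sits in front of the storage.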

The terms CPU, Cache and Storage are used here in a very broad sense. Cache and Storage may be implemented separately, even on different planets, or on the same storage medium. Even when the Cache and Storage are on the same medium, the cache can be far faster than the Storage because of the way its data is organised.

The downside is that the cache adds complexity, reduces the predictability of timing, and almost always causes a worst-case timing that is slower than for an uncached system.

A replacement policy decides how to make room for a new element in the cache. A write policy decides what happens to data written by the CPU. Policies can be dynamic, changing under the control of hardware or software to increase performance.

Common replacement policies

  • Random - A randomly chosen element is evicted; if it has been modified, it is written back to storage.
  • Not recently used - An element that has not been used recently is evicted; if it has been modified, it is written back to storage.
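Both policies can be sketched as follows. The class names and the fixed capacity are assumptions for illustration; the "not recently used" policy is approximated here by evicting the least recently used element, which is one common way to implement it.

```python
from collections import OrderedDict
import random


class NRUCache:
    """Approximates 'not recently used' by evicting the least recently used element."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()             # insertion order tracks recency

    def read(self, key):
        self.lines.move_to_end(key)            # a read refreshes recency
        return self.lines[key]

    def insert(self, key, value):
        if key in self.lines:
            self.lines.move_to_end(key)
        elif len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)     # evict oldest (written back if modified)
        self.lines[key] = value


class RandomCache:
    """Random replacement: evict an arbitrary element when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}

    def insert(self, key, value):
        if key not in self.lines and len(self.lines) >= self.capacity:
            victim = random.choice(list(self.lines))
            del self.lines[victim]             # evicted (written back if modified)
        self.lines[key] = value
```

Random replacement needs no bookkeeping at all, which is why it is attractive in hardware; recency-based policies track accesses in exchange for better hit rates on typical workloads.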


Common write policies

  • Write-back - Data written by the CPU is written to the cache only; storage is updated later, when the element is evicted.
  • Write-through - Data written by the CPU is written to the cache and to the storage.
  • Write-around - Data written by the CPU is written to storage only.
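The three write policies can be sketched in one small class. This is a minimal model, not a real cache implementation: the class name, the plain dict standing in for storage, and the dirty-set bookkeeping are assumptions for illustration.

```python
class WritePolicyCache:
    def __init__(self, storage, policy):
        self.storage = storage   # plain dict standing in for the backing store
        self.policy = policy
        self.lines = {}          # cached data
        self.dirty = set()       # keys modified in cache but not yet in storage

    def write(self, key, value):
        if self.policy == "write-through":
            self.lines[key] = value
            self.storage[key] = value          # cache and storage together
        elif self.policy == "write-back":
            self.lines[key] = value            # cache only...
            self.dirty.add(key)                # ...storage updated on eviction
        elif self.policy == "write-around":
            self.storage[key] = value          # storage only, cache untouched

    def evict(self, key):
        if key in self.dirty:                  # write-back: flush modified data
            self.storage[key] = self.lines[key]
            self.dirty.discard(key)
        self.lines.pop(key, None)


store = {}
wb = WritePolicyCache(store, "write-back")
wb.write("x", 1)
print("x" in store)      # -> False: storage not yet updated
wb.evict("x")
print(store["x"])        # -> 1: flushed on eviction
```

Write-back gives the least storage traffic but risks losing data if the cache fails before eviction; write-through keeps storage current at the cost of a storage access per write; write-around avoids polluting the cache with data that will not be read back soon.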