Earlier [[graphics processing unit]]s (GPUs) often had limited read-only [[texture cache]]s and used [[swizzling (computer graphics)|swizzling]] to improve 2D [[locality of reference]]. [[Cache miss]]es would drastically affect performance, e.g. if [[mipmapping]] was not used. Caching was important to leverage 32-bit (and wider) transfers for texture data that was often as little as 4 bits per pixel.
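The sketch below illustrates one common swizzling scheme, Morton (Z-order) indexing, in which the bits of the x and y texel coordinates are interleaved so that small 2D blocks of texels occupy consecutive addresses, and therefore tend to share a cache line. This is a minimal illustration only; actual GPU memory layouts vary, and the function names here are illustrative rather than taken from any particular hardware.

<syntaxhighlight lang="python">
def part1by1(n: int) -> int:
    """Spread the low 16 bits of n into the even bit positions."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton_index(x: int, y: int) -> int:
    """Interleave the bits of x and y (Z-order curve)."""
    return part1by1(x) | (part1by1(y) << 1)

# A 2x2 block of texels maps to four consecutive indices:
# (0,0)->0, (1,0)->1, (0,1)->2, (1,1)->3
assert [morton_index(x, y) for y in (0, 1) for x in (0, 1)] == [0, 1, 2, 3]
</syntaxhighlight>

With 4-bit texels, eight consecutive Morton indices cover a 4×2 block of neighbouring texels, so a single 32-bit transfer fills the cache with data that nearby 2D accesses are likely to hit.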
As GPUs advanced to support [[General-purpose computing on graphics processing units (software)|general-purpose computing on graphics processing units]] and [[compute kernel]]s, they developed progressively larger and increasingly general caches, including [[instruction cache]]s for [[shader]]s, exhibiting functionality commonly found in CPU caches. These caches have grown to handle [[synchronization primitive]]s between threads and [[atomic operation]]s, and to interface with a CPU-style MMU.
===DSPs===
The time-aware least-recently-used (TLRU) algorithm is a variant of LRU designed for situations where the stored contents of a cache have a valid lifetime. The algorithm is suitable for network cache applications such as [[information-centric networking]] (ICN), [[content delivery network]]s (CDNs) and distributed networks in general. TLRU introduces a new term, time to use (TTU): a time stamp on content which stipulates the usability time for the content, based on the locality of the content and information from the content publisher. Owing to this locality-based time stamp, TTU gives the local administrator more control over in-network storage.
In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value from the TTU value assigned by the content publisher, using a locally defined function.
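The following sketch implements the eviction side of TLRU on top of an ordinary LRU structure. The choice of locally defined function, here simply capping the publisher-assigned TTU at an administrator-set maximum, is an assumption for illustration; real deployments define their own, and the class and method names are likewise illustrative.

<syntaxhighlight lang="python">
import time
from collections import OrderedDict

class TLRUCache:
    """Sketch of time-aware LRU: each entry carries a time-to-use (TTU) expiry."""

    def __init__(self, capacity: int, max_local_ttu: float):
        self.capacity = capacity
        self.max_local_ttu = max_local_ttu
        self.entries = OrderedDict()   # key -> (value, expiry time)

    def _local_ttu(self, publisher_ttu: float) -> float:
        # Illustrative "locally defined function": never keep content
        # longer than the local administrator allows.
        return min(publisher_ttu, self.max_local_ttu)

    def put(self, key, value, publisher_ttu: float) -> None:
        expiry = time.monotonic() + self._local_ttu(publisher_ttu)
        self.entries.pop(key, None)
        self.entries[key] = (value, expiry)   # insert at most-recent end
        if len(self.entries) > self.capacity:
            # Prefer evicting already-expired content; otherwise fall
            # back to plain LRU (the oldest entry in the ordered dict).
            now = time.monotonic()
            expired = next((k for k, (_, e) in self.entries.items() if e <= now), None)
            self.entries.pop(expired if expired is not None else next(iter(self.entries)))

    def get(self, key):
        item = self.entries.get(key)
        if item is None or item[1] <= time.monotonic():
            self.entries.pop(key, None)   # expired content is dropped on access
            return None
        self.entries.move_to_end(key)     # refresh recency on a hit
        return item[0]
</syntaxhighlight>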
=====Least frequent recently used=====
Write-through operation is common when operating over [[unreliable network]]s, because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and [[client-side]] caches for [[distributed file system]]s (like those in [[Network File System (protocol)|NFS]] or [[Server Message Block|SMB]]) are typically read-only or write-through specifically to keep the network protocol simple and reliable.
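The sketch below shows why write-through keeps the protocol simple: every write is forwarded to the backing store before the local copy is updated, so the store is always authoritative and no coherency traffic between caches is needed. The dict-like backing-store interface is an assumption for illustration, standing in for a remote file or key-value service.

<syntaxhighlight lang="python">
class WriteThroughCache:
    """Minimal write-through cache over a dict-like backing store."""

    def __init__(self, backing_store):
        self.backing_store = backing_store   # e.g. a remote key-value service
        self.cache = {}

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.backing_store[key]   # fill on miss
        return self.cache[key]

    def write(self, key, value):
        # Write through first: the store is never stale, so losing the
        # cache (or the client) cannot lose data.
        self.backing_store[key] = value
        self.cache[key] = value   # then update the local copy
</syntaxhighlight>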
[[Web search engine]]s also frequently make web pages they have indexed available from their [[Search engine cache|cache]]. This can prove useful when web pages from a web server are temporarily or permanently inaccessible.
[[Database caching]] can substantially improve the throughput of [[database]] applications, for example in the processing of [[Database index|indexes]], [[Data dictionary|data dictionaries]], and frequently used subsets of data.
A [[distributed cache]]<ref>{{cite journal|last1=Paul|first1=S.|last2=Fei|first2=Z.|date=1 February 2001|title=Distributed caching with centralized control|journal=Computer Communications|volume=24|issue=2|pages=256–268|citeseerx=10.1.1.38.1094|doi=10.1016/S0140-3664(00)00322-4}}</ref> uses networked hosts to provide scalability, reliability and performance to the application.
=={{anchor|The difference between buffer and cache}}<!--Former section name used in external links-->Buffer vs. cache==