Memory hierarchy

{{expansion}}
The [[hierarchical]] arrangement of [[computer storage|storage]] in current [[computer architecture]]s is called the '''memory hierarchy'''. It is designed to take advantage of [[memory locality]] in [[computer program]]s. Each level of the hierarchy offers higher [[speed]] and lower [[latency (engineering)|latency]], but smaller capacity, than the levels below it.
 
Most modern [[Central processing unit|CPUs]] are so fast that for most program workloads the [[locality of reference]] of memory accesses and the efficiency of [[caching]] and memory transfer between levels of the hierarchy are the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete.
 
The memory hierarchy in most computers is as follows:
 
* [[Processor register]]s – fastest possible access (usually 1 CPU cycle), only hundreds of bytes in size
* Level 1 (L1) [[CPU cache|cache]] – often accessed in just a few cycles, usually tens of kilobytes
* Level 2 (L2) [[CPU cache|cache]] – 2× to 10× the latency of L1, often 512 KB or more
* Level 3 (L3) [[CPU cache|cache]] – (optional) higher latency than L2, often several megabytes
* [[Primary storage|Main memory]] ([[DRAM]]) – may take hundreds of cycles to access, but can be multiple gigabytes
* [[Disk storage]] – hundreds of thousands of cycles of latency, but very large
==Management==
Modern [[programming language]]s mainly assume two levels of memory, main memory and disk storage, though direct access to registers is possible in rare cases. Programmers are responsible for moving data between disk and memory through file I/O; hardware moves data between memory and the caches; and compilers try to optimize the use of caches and registers.
 