{{short description|Type of computing platform}}
{{multiple issues|
{{lead rewrite|date=August 2012}}
{{technical|date=August 2012}}
}}
'''Computing with memory''' is a type of computing platform in which function responses are stored in memory as lookup tables (LUTs) and functions are evaluated by retrieving values from memory rather than by recomputing them.
==Details==
'''Computing with memory''' refers to computing platforms where the function response is stored in a memory array, either one- or two-dimensional, in the form of lookup tables (LUTs), and functions are evaluated by retrieving the values from the LUTs. These computing platforms can follow either a purely spatial computing model, as in [[field-programmable gate array]]s (FPGAs), or a temporal computing model, where a function is evaluated across multiple clock cycles. The latter approach aims to reduce the overhead of the programmable interconnect in FPGAs by folding interconnect resources inside a computing element. It uses dense two-dimensional memory arrays to store large multiple-input, multiple-output LUTs. Computing with memory differs from ''computing in memory'' or [[processor-in-memory]] (PIM) concepts, which are widely investigated in the context of integrating a processor and memory on the same chip to reduce memory latency and increase bandwidth. These architectures seek to reduce the distance data travels between the processor and the memory. The Berkeley IRAM project is one notable contribution in the area of PIM architectures.
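The core idea above, evaluating a function by addressing a precomputed memory array instead of computing logic, can be sketched in a few lines of Python. This is a minimal illustration, not from the source; the majority function, `build_lut`, and `evaluate` are hypothetical names chosen for the example.

```python
# Sketch: evaluating a Boolean function by lookup instead of logic.
# A 3-input majority function is precomputed into an 8-entry LUT,
# much as an FPGA-style computing element stores function responses.

def build_lut(func, n_inputs):
    """Precompute func over all n-bit input combinations into a list (the LUT)."""
    return [func(i) for i in range(2 ** n_inputs)]

def majority3(x):
    # Majority vote over the three least-significant bits of x.
    a, b, c = x & 1, (x >> 1) & 1, (x >> 2) & 1
    return (a & b) | (b & c) | (a & c)

MAJ_LUT = build_lut(majority3, 3)

def evaluate(lut, inputs):
    # "Compute" by using the input bits as an address into the memory array.
    addr = sum(bit << i for i, bit in enumerate(inputs))
    return lut[addr]

print(evaluate(MAJ_LUT, [1, 1, 0]))  # majority(1, 1, 0) -> 1
```

At evaluation time no logic for the function itself is executed; the cost is one memory access, which is the trade-off (memory capacity for computation) that computing-with-memory platforms exploit.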
Computing with memory platforms are typically used to provide the benefit of hardware reconfigurability.
Contrary to the purely spatial computing model of the FPGA, a reconfigurable computing platform that employs a temporal computing model (or a combination of both temporal and spatial models) has also been investigated.<ref name="Ref 7">S. Paul and S. Bhunia</ref>
Each computing element incorporates a two-dimensional memory array for storing LUTs, a small controller for sequencing the evaluation of sub-functions, and a set of temporary registers to hold the intermediate outputs of individual partitions. A fast local routing framework inside each computing block generates the addresses for LUT accesses. Multiple such computing elements can be connected spatially using an FPGA-like programmable interconnect architecture to enable the mapping of large functions. The local time-multiplexed execution inside the computing elements can drastically reduce the requirement for programmable interconnect, leading to a large improvement in energy-delay product and better scalability of performance across technology generations. The memory array inside each computing element can be realized as [[content-addressable memory]] (CAM) to substantially reduce the memory requirement for certain applications.<ref name="Ref 7"/>
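The temporal, time-multiplexed evaluation described above can be sketched as follows. This is an illustrative Python sketch, not the source's design: a 4-input XOR is partitioned into three applications of a single 2-input LUT, evaluated over successive "cycles" with intermediate results held in temporary registers.

```python
# Sketch: temporal (multi-cycle) evaluation of a larger function using a
# small LUT partition and temporary registers, in the spirit of a
# memory-based computing element. Partitioning and names are illustrative.

XOR2_LUT = [0, 1, 1, 0]  # 2-input XOR, indexed by (b << 1) | a

def lookup(lut, a, b):
    # One LUT access: the local routing forms the address from the operands.
    return lut[(b << 1) | a]

def xor4_temporal(a, b, c, d):
    # Cycle 1: first sub-function, result latched in a temporary register.
    t0 = lookup(XOR2_LUT, a, b)
    # Cycle 2: second sub-function, latched in another temporary register.
    t1 = lookup(XOR2_LUT, c, d)
    # Cycle 3: combine the intermediate results through the same LUT.
    return lookup(XOR2_LUT, t0, t1)

print(xor4_temporal(1, 0, 1, 1))  # 1 ^ 0 ^ 1 ^ 1 -> 1
```

Because the three sub-evaluations reuse one memory array and route operands locally through registers, no programmable interconnect is needed between the partitions, which is the interconnect-folding benefit the text describes.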
==See also==
* [[Computational RAM]]
* [[Field-programmable gate array]] (FPGA)
* [[Memoization]]
* [[Reconfigurable computing]]
==References==
{{Reflist|2}}
[[Category:Computer engineering]]