Hybrid-core computing

'''Hybrid-core''' computing is the technique of extending a commodity [[instruction set architecture]] (e.g. [[x86]]) with application-specific instructions to accelerate application performance. It is a form of [[heterogeneous computing]] in which multiple computational units, such as a graphics processing unit (GPU) or custom acceleration logic implemented in an application-specific integrated circuit ([[ASIC]]) or a reconfigurable field-programmable gate array ([[FPGA]]), coexist with a "commodity" processor.
 
Hybrid-core processing differs from the more general heterogeneous computing<ref>"Heterogeneous Processing: a Strategy for Augmenting Moore's Law". Linux Journal, January 2006</ref> in that its computational units share a common logical address space, are cache coherent, and execute a single instruction stream; in essence, the application-specific hardware acts as a contemporary [[coprocessor]]. The instruction set of a hybrid-core computing system contains instructions that can be dispatched either to the host processor or to the application-specific hardware.<ref>Convey Computer Corp., "The Convey HC-1 Computer Architecture Overview"</ref> The main advantage of hybrid-core computing over typical hybrid implementations is a much simpler programming model, a consequence of the shared virtual memory.
 
Typically, hybrid-core computing is best deployed where most computational cycles are spent in a few identifiable kernels, as is common in high-performance computing applications. Acceleration is especially pronounced when the kernel's logic maps poorly to a sequence of commodity-processor instructions and/or maps well to the application-specific hardware.
 
Hybrid-core computing is used to accelerate applications beyond what is currently physically possible with off-the-shelf processors (i.e., to circumvent obstacles such as the power and density challenges facing today's commodity processors), or to lower power and cooling costs in a data center by reducing computational footprint.<ref>Fred Pollack, "New Microarchitecture Challenges in the Coming Generations of CMOS Process Technologies", Intel Microprocessor Research Labs. http://research.ac.upc.edu/HPCseminar/SEM9900/Pollack1.pdf</ref>