; Approximate circuits
: Approximate arithmetic circuits:<ref>Jiang et al., "[http://www.ece.ualberta.ca/~jhan8/publications/survey.pdf Approximate Arithmetic Circuits: A Survey, Characterization, and Recent Applications]", Proceedings of the IEEE, Vol. 108, No. 12, pp. 2108–2135, 2020.</ref> [[Adder (electronics)|adders]],<ref>J. Echavarria, et al. "FAU: Fast and Error-Optimized Approximate Adder Units on LUT-Based FPGAs", FPT, 2016.</ref><ref>J. Miao, et al. "[https://repositories.lib.utexas.edu/bitstream/handle/2152/19706/miao_thesis_201291.pdf?sequence=1 Modeling and synthesis of quality-energy optimal approximate adders]", ICCAD, 2012</ref> [[Binary multiplier|multipliers]]<ref>{{Cite book|last1=Rehman|first1=Semeen|last2=El-Harouni|first2=Walaa|last3=Shafique|first3=Muhammad|last4=Kumar|first4=Akash|last5=Henkel|first5=Jörg|date=2016-11-07|title=Architectural-space exploration of approximate multipliers|publisher=ACM|pages=80|doi=10.1145/2966986.2967005|isbn=9781450344661}}</ref> and other [[logical circuit]]s can reduce hardware overhead.<ref>S. Venkataramani, et al. "[http://algos.inesc-id.pt/projectos/approx/FCT/Ref-15.pdf SALSA: systematic logic synthesis of approximate circuits]", DAC, 2012.</ref><ref>J. Miao, et al. "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.453.7870&rep=rep1&type=pdf Approximate logic synthesis under general error magnitude and frequency constraints]", ICCAD, 2013</ref><ref>R. Hegde et al. "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.8929&rep=rep1&type=pdf Energy-efficient signal processing via algorithmic noise-tolerance]", ISLPED, 1999.</ref> For example, an approximate multi-bit adder can ignore the [[carry chain]] and thus allow all of its sub-adders to perform addition in parallel.
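The carry-free adder described above can be sketched in software. In this illustrative Python model (the function name and block width are assumptions, not from any cited design), the operands are split into fixed-width blocks and the carry out of each block is simply discarded, which is what lets every sub-adder work in parallel in hardware:

```python
def approximate_add(a, b, block_bits=4, width=16):
    """Model of a block-based approximate adder: operands are split
    into independent blocks and inter-block carries are discarded,
    so all sub-adders can operate in parallel."""
    mask = (1 << block_bits) - 1
    result = 0
    for shift in range(0, width, block_bits):
        # Add each block independently; the carry out of the block
        # is dropped instead of propagating to the next block.
        block_sum = ((a >> shift) & mask) + ((b >> shift) & mask)
        result |= (block_sum & mask) << shift
    return result

# Exact when no carry crosses a block boundary:
approximate_add(0x1234, 0x1111)  # -> 0x2345, same as the exact sum
# Inexact when a carry is dropped at a block boundary:
approximate_add(0x000F, 0x0001)  # -> 0x0000 (exact sum is 0x0010)
```

The error is bounded and occurs only when a carry would have crossed a block boundary, which is why such adders work well for error-tolerant workloads.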
; Approximate storage
: Instead of [[Computer data storage|storing data]] values exactly, they can be stored approximately, e.g., by [[Data truncation|truncating]] the lower bits in [[floating point]] data. Another method is to accept less reliable memory. For this, in [[DRAM]]<ref>{{Cite journal|last1=Raha|first1=A.|last2=Sutar|first2=S.|last3=Jayakumar|first3=H.|last4=Raghunathan|first4=V.|date=July 2017|title=Quality Configurable Approximate DRAM|journal=IEEE Transactions on Computers|volume=66|issue=7|pages=1172–1187|doi=10.1109/TC.2016.2640296|issn=0018-9340|doi-access=free}}</ref> and [[eDRAM]], [[Memory refresh|refresh]] rates can be lowered, saving energy at the cost of occasional bit errors.
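Truncating the lower bits of floating-point data can be illustrated with a short Python sketch (the function name and the choice of 16 dropped bits are illustrative assumptions): zeroing the low mantissa bits of a 32-bit float means only the upper bits need to be stored, at the cost of a small, bounded relative error.

```python
import struct

def truncate_float(x, drop_bits=16):
    """Approximate storage sketch: zero the lowest `drop_bits` of a
    32-bit float's bit pattern so only the upper bits are stored."""
    # Reinterpret the float as its 32-bit IEEE 754 pattern.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Clear the low-order mantissa bits.
    bits &= ~((1 << drop_bits) - 1) & 0xFFFFFFFF
    # Reinterpret the truncated pattern as a float again.
    (approx,) = struct.unpack("<f", struct.pack("<I", bits))
    return approx

truncate_float(3.14159265, 16)  # -> 3.140625, within ~0.03% of pi
```

Because the mantissa bits are weighted by decreasing powers of two, dropping the low 16 bits of a 32-bit float bounds the relative error at roughly 2<sup>-7</sup>.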
; Software-level approximation
: There are several ways to approximate at the software level. [[Memoization]] can be applied. Some [[iteration]]s of [[Loop (computing)|loops]] can be skipped (termed [[loop perforation]]) to achieve a result faster. Some tasks can also be skipped, for example when a run-time condition suggests that those tasks are not going to be useful ([[task skipping]]). [[Monte Carlo algorithm]]s and [[randomized algorithm]]s trade correctness for execution-time guarantees.<ref>C. Alippi, Intelligence for Embedded Systems: a Methodological Approach, Springer, 2014, pp. 283</ref> The computation can also be reformulated according to paradigms that readily allow acceleration on specialized hardware, e.g. a neural processing unit.<ref>{{Cite conference|last1=Esmaeilzadeh|first1=Hadi|last2=Sampson|first2=Adrian|last3=Ceze|first3=Luis|last4=Burger|first4=Doug|title=Neural acceleration for general-purpose approximate programs|conference=45th Annual IEEE/ACM International Symposium on Microarchitecture|year=2012|doi=10.1109/MICRO.2012.48|publisher=IEEE|pages=449–460|___location=Vancouver, BC}}</ref>
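Loop perforation can be sketched in a few lines of Python (the function name and skip factor are illustrative assumptions, not from any cited system): only every ''n''-th iteration is executed, and the aggregate is computed from the sampled subset, trading a small accuracy loss for a proportional speedup.

```python
def perforated_mean(values, skip_factor=2):
    """Loop perforation sketch: execute only every `skip_factor`-th
    iteration of the reduction loop and average over the subset."""
    total = 0.0
    count = 0
    # Step through the loop index by skip_factor instead of 1,
    # doing roughly 1/skip_factor of the original work.
    for i in range(0, len(values), skip_factor):
        total += values[i]
        count += 1
    return total / count

perforated_mean([1.0, 2.0, 3.0, 4.0], skip_factor=2)  # -> 2.0 (exact mean is 2.5)
```

Perforation works best for reductions over many similar items (pixels, samples, particles), where skipped iterations contribute little independent information to the result.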