==Application areas==
Approximate computing has been used in a variety of domains where the applications are error-tolerant, such as [[multimedia]] processing, [[machine learning]], [[signal processing]], and [[Computational science|scientific computing]]. Therefore, approximate computing is mostly driven by applications that are related to human perception/cognition and have inherent error resilience. Many of these applications are based on statistical or probabilistic computation, so different approximations can be made to better suit the desired objectives.<ref>{{cite journal |last1=Liu |first1=Weiqiang |last2=Lombardi |first2=Fabrizio |last3=Schulte |first3=Michael |title=Approximate Computing: From Circuits to Applications |journal=Proceedings of the IEEE |date=Dec 2020 |volume=108 |issue=12 |page=2103 |doi=10.1109/JPROC.2020.3033361 |doi-access=free }}</ref>
One notable application in [[machine learning]] is that Google is using this approach in their [[Tensor processing unit]]s (TPU, a custom [[application-specific integrated circuit|ASIC]]).<ref>{{cite journal |last1=Liu |first1=Weiqiang |last2=Lombardi |first2=Fabrizio |last3=Schulte |first3=Michael |title=Approximate Computing: From Circuits to Applications |journal=Proceedings of the IEEE |date=Dec 2020 |volume=108 |issue=12 |page=2104 |doi=10.1109/JPROC.2020.3033361 |doi-access=free }}</ref>
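As an illustration of the kind of approximation such hardware exploits (a minimal sketch, not taken from the cited sources), reduced-precision arithmetic can be modeled as uniform quantization: model weights are mapped onto a coarse integer grid, traded against a bounded per-value error that error-tolerant applications such as neural-network inference can absorb.

```python
# Illustrative sketch: uniform 8-bit quantization, a reduced-precision
# approximation of the sort used in ML inference accelerators.
# All names here (quantize, dequantize) are hypothetical, for exposition only.

def quantize(values, bits=8):
    """Map floats onto a uniform integer grid spanning their range."""
    lo, hi = min(values), max(values)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi != lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the quantized integers."""
    return [i * scale + lo for i in q]

weights = [0.12, -0.53, 0.98, -0.07, 0.44]
q, scale, lo = quantize(weights)
approx = dequantize(q, scale, lo)

# Rounding to the nearest grid point bounds the worst-case error
# by half a quantization step — the "acceptable" error the
# application tolerates in exchange for 8-bit storage/arithmetic.
max_err = max(abs(w - a) for w, a in zip(weights, approx))
assert max_err <= scale / 2 + 1e-12
```

The example makes the central trade-off concrete: precision (here, 8 bits instead of a 64-bit float) is exchanged for a quantifiable, bounded output error.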
 
==Derived paradigms==