{{Short description|Digital circuit without clock cycles}}
{{redirect|Sequentional|enumerated sequences|sequential|the conditional assertion|sequent|the corresponding type of formal logical argumentation|sequent calculus}}
{{for|additional information|Asynchronous system}}
{{Update|part=computer science|date=March 2024|reason=Some information are outdated and referred to the past years}}
{{Use dmy dates|date=May 2023|cs1-dates=y}}
'''Asynchronous circuit''' ('''clockless''' or '''self-timed circuit''')<ref name="Horowitz2007" />{{rp|at=Lecture 12}} {{refn|group="note"|[[Globally asynchronous locally synchronous]] circuits are possible.}}<ref>{{cite book |author-last=Staunstrup |author-first=Jørgen |title=A Formal Approach to Hardware Design |date=1994 |publisher=Springer USA |isbn=978-1-4615-2764-0 |___location=Boston, Massachusetts, USA |oclc=852790160}}</ref>{{rp|pages=157–186}} is a [[sequential logic|sequential]] [[digital logic]] [[electrical network|circuit]] that does not use a global [[clock circuit]] or [[clock signal|signal]] generator to synchronize its components.<ref name="Horowitz2007">{{cite web |author-last=Horowitz |author-first=Mark |author-link=Mark Alan Horowitz |date=2007 |title=Advanced VLSI Circuit Design Lecture |url=https://web.stanford.edu/class/archive/ee/ee371/ee371.1066/ |url-status=live |publisher=Stanford University, Computer Systems Laboratory |archive-url=https://web.archive.org/web/20160421093147/http://web.stanford.edu:80/class/archive/ee/ee371/ee371.1066/ |archive-date=April 21, 2016 }}</ref><ref name="Sparsø2006">{{cite web |author-last=Sparsø |author-first=Jens |date=April 2006 |title=Asynchronous Circuit Design A Tutorial |url=https://orbit.dtu.dk/files/2775719/imm855.pdf |publisher=Technical University of Denmark}}</ref>{{rp|pages=3–5}} Instead, the components are driven by a handshaking circuit which indicates the completion of a set of instructions. Handshaking works by simple data transfer [[Communications protocol|protocols]].{{r|name="Sparsø2006"|page=115}} Many synchronous circuits were developed in the early 1950s as parts of bigger [[asynchronous system]]s (e.g. [[ORDVAC]]). Asynchronous circuits and the theory surrounding them form part of several steps in [[integrated circuit design]], a field of [[digital electronics]] engineering.
Asynchronous circuits are contrasted with [[synchronous circuit]]s, in which changes to the signal values in the circuit are triggered by repetitive pulses called a [[clock signal]]. Most digital devices today use synchronous circuits. However, asynchronous circuits have the potential to be faster, and may also offer lower power consumption, less electromagnetic interference, and better modularity in large systems. Asynchronous circuits are an active area of research in [[logic design|digital logic design]].<ref>{{cite journal |author-first1=S. M. |author-last1=Nowick |author-first2=M. |author-last2=Singh |url=https://www.cs.columbia.edu/~nowick/nowick-singh-async-IEEE-DT-15-overview-article-pt1.pdf |archive-url=https://wayback.archive-it.org/all/20181221132620/http://www.cs.columbia.edu/%7Enowick/nowick%2Dsingh%2Dasync%2DIEEE%2DDT%2D15%2Doverview%2Darticle%2Dpt1.pdf |url-status=dead |archive-date=December 21, 2018 |title=Asynchronous Design — Part 1: Overview and Recent Advances |journal=IEEE Design and Test |volume=32 |issue=3 |pages=5–18 |date=May–June 2015 |doi=10.1109/MDAT.2015.2413759 |s2cid=14644656 |access-date=August 27, 2019}}</ref><ref>{{cite journal |author-first1=S. M. |author-last1=Nowick |author-first2=M. |author-last2=Singh |url=https://www.cs.columbia.edu/~nowick/nowick-singh-async-IEEE-DT-15-overview-article-pt2.pdf |archive-url=https://wayback.archive-it.org/all/20181221132622/http://www.cs.columbia.edu/%7Enowick/nowick%2Dsingh%2Dasync%2DIEEE%2DDT%2D15%2Doverview%2Darticle%2Dpt2.pdf |url-status=dead |archive-date=December 21, 2018 |title=Asynchronous Design — Part 2: Systems and Methodologies |journal=IEEE Design and Test |volume=32 |issue=3 |pages=19–28 |date=May–June 2015 |doi=10.1109/MDAT.2015.2413757 |s2cid=16732793 |access-date=August 27, 2019}}</ref>
It was not until the 1990s that the viability of asynchronous circuits was demonstrated by real-life commercial products.{{r|name="Sparsø2006"|page=4}}
== Overview ==
All [[digital logic]] circuits can be divided into [[combinational logic]], in which the output signals depend only on the current input signals, and [[sequential logic]], in which the output depends both on current input and on past inputs. In other words, sequential logic is combinational logic with [[computer memory|memory]]. Virtually all practical digital devices require sequential logic. Sequential logic can be divided into two types, synchronous logic and asynchronous logic.
=== Synchronous circuits ===
In [[Synchronous circuit|synchronous logic circuits]], an [[electronic oscillator]] generates a repetitive series of equally spaced pulses called the ''[[clock signal]]''. The clock signal is supplied to all the components of the IC. Flip-flops only flip when triggered by the [[signal edge|edge]] of the clock pulse, so changes to the logic signals throughout the circuit begin at the same time and at regular intervals. The output of all memory elements in a circuit is called the ''[[State (computer science)|state]]'' of the circuit. The state of a synchronous circuit changes only on the clock pulse. The changes in signal require a certain amount of time to propagate through the combinational logic gates of the circuit. This time is called a [[propagation delay]].
{{As of|2021}}, timing modern synchronous ICs takes significant engineering effort and sophisticated [[Electronic design automation|design automation tools]].<ref name=":4">{{cite web |date=2021-07-15 |title=Why Asynchronous Design? |url=https://galois.com/blog/2021/07/why-asynchronous-design/ |access-date=2021-12-04 |website=Galois, Inc.}}</ref> Designers have to ensure that the clock signal arrives reliably at every component; with the ever-growing size and complexity of ICs (e.g. [[Application-specific integrated circuit|ASICs]]) this is a challenging task.<ref name=":4"/> In very large circuits, signals sent over the clock distribution network often arrive at different parts of the chip at different times.<ref name=":4"/> This problem is widely known as "[[clock skew]]".<ref name=":4"/>{{r|name="Myers2001"|page=xiv}}
The maximum possible clock rate is capped by the logic path with the longest propagation delay, called the critical path. Because of this, paths that could operate quickly are idle most of the time. A widely distributed clock network also dissipates a lot of power and must run whether the circuit is receiving inputs or not.<ref name=":4"/> Because of this level of complexity, testing and debugging take over half of the development time for synchronous circuits.<ref name=":4"/>
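The relationship between the critical path and the maximum clock rate can be sketched as follows (the path names and delay values here are invented purely for illustration):

```python
# Invented path names and delays (nanoseconds) for illustration only.
path_delays_ns = {"adder": 1.2, "shifter": 0.4, "multiplier": 3.1, "logic_unit": 0.7}

# The clock period must cover the slowest (critical) path, so faster paths
# finish early every cycle and then sit idle until the next clock edge.
critical_path = max(path_delays_ns, key=path_delays_ns.get)
min_clock_period_ns = path_delays_ns[critical_path]
max_clock_rate_ghz = 1.0 / min_clock_period_ns

print(critical_path)                   # multiplier
print(round(max_clock_rate_ghz, 3))    # 0.323
```

Here even the fast 0.4 ns shifter path is granted the full 3.1 ns clock period on every cycle.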
=== Asynchronous circuits ===
Asynchronous circuits do not need a global clock, and the state of the circuit changes as soon as the inputs change. Local functional blocks may still be employed, but the ''[[clock skew]]'' problem can be tolerated.{{r|name="Myers2001"|page=xiv}}{{r|name="Sparsø2006"|page=4}}
Since asynchronous circuits do not have to wait for a clock pulse to begin processing inputs, they can operate faster. Their speed is theoretically limited only by the [[propagation delay]]s of the logic gates and other elements.{{r|name="Myers2001"|page=xiv}}
However, asynchronous circuits are more difficult to design and subject to problems not found in synchronous circuits. This is because the resulting state of an asynchronous circuit can be sensitive to the relative arrival times of inputs at gates. If transitions on two inputs arrive at almost the same time, the circuit can go into the wrong state depending on slight differences in the propagation delays of the gates.
This is called a [[race condition]]. In synchronous circuits this problem is less severe because race conditions can only occur due to inputs from outside the synchronous system, called ''asynchronous inputs''.
Although some fully asynchronous digital systems have been built (see below), today asynchronous circuits are typically used in a few critical parts of otherwise synchronous systems where speed is at a premium, such as signal processing circuits.
==Theoretical foundation==
The original theory of asynchronous circuits was created by [[David E. Muller]] in the mid-1950s.<ref>{{cite book |author-last=Muller |author-first=D. E. |title=Theory of asynchronous circuits, Report no. 66 |publisher=Digital Computer Laboratory, University of Illinois at Urbana-Champaign |date=1955}}</ref> This theory was later presented in the well-known book ''Switching Theory'' by Raymond Miller.<ref>{{cite book |author-last=Miller |author-first=Raymond E. |title=Switching Theory, Vol. II |publisher=Wiley |date=1965}}</ref>
The term "asynchronous logic" is used to describe a variety of design styles, which use different assumptions about circuit properties.<ref>{{cite journal|author-last1=van Berkel |author-first1=C. H. |author-first2=M. B. |author-last2=Josephs |author-first3=S. M. |author-last3=Nowick |url=https://www.cs.columbia.edu/~nowick/async-applications-PIEEE-99-berkel-josephs-nowick-published.pdf |title=Applications of Asynchronous Circuits |journal=Proceedings of the IEEE |volume=87 |number=2 |date=February 1999 |pages=234–242 |doi=10.1109/5.740016 |access-date=August 27, 2019 |archive-date=April 3, 2018 |archive-url=https://web.archive.org/web/20180403123227/http://www.cs.columbia.edu/~nowick/async-applications-PIEEE-99-berkel-josephs-nowick-published.pdf |url-status=dead}}</ref> These vary from the [[bundled delay]] model – which uses "conventional" data processing elements with completion indicated by a locally generated delay model – to [[delay-insensitive]] design – where arbitrary delays through circuit elements can be accommodated. The latter style tends to yield circuits which are larger than bundled data implementations, but which are insensitive to layout and parametric variations and are thus "correct by design".
=== Asynchronous logic ===
Asynchronous logic is the [[logic]] required for the design of asynchronous digital systems. These function without a [[clock signal]] and so individual logic elements cannot be relied upon to have a discrete true/false state at any given time. [[Boolean logic|Boolean]] (two valued) logic is inadequate for this and so extensions are required.
{{anchor|Venjunction|Sequention}}Since 1984, Vadim O. Vasyukevich developed an approach based upon new logical operations which he called ''venjunction'' (with asynchronous operator "''x''∠''y''" standing for "switching ''x'' on the background ''y''" or "if ''x'' when ''y'' then") and ''sequention'' (with priority signs "''x''<sub>''i''</sub>≻''x''<sub>''j''</sub>" and "''x''<sub>''i''</sub>≺''x''<sub>''j''</sub>"). This takes into account not only the current value of an element, [[sequential logic|but also its history]].<ref name="Vasyukevich_1984"/><ref name="Vasyukevich_1998"/><ref name="Vasyukevich_2007"/><ref name="Vasyukevich_2009"/><ref name="Vasyukevich_2011"/>
{{anchor|NCL|MTNCL|SCL}}Karl M. Fant developed a different theoretical treatment of asynchronous logic in his work ''Logically determined design'' in 2005 which used [[Multi-valued logic|four-valued logic]] with [[Nullable type|''null'']] and ''intermediate'' being the additional values. This architecture is important because it is [[quasi-delay-insensitive]].<ref name="Fant_2005"/><ref name="Fant_2007"/> Scott C. Smith and Jia Di developed an ultra-low-power variation of Fant's Null Convention Logic that incorporates [[multi-threshold CMOS]].<ref name="Smith-Di_2009"/> This variation is termed Multi-threshold Null Convention Logic (MTNCL), or alternatively Sleep Convention Logic (SCL).<ref name="Smith-Di_2011"/>
=== Petri nets ===
[[Petri net]]s are an attractive and powerful model for reasoning about asynchronous circuits (see [[Petri net#Other models of concurrency|Subsequent models of concurrency]]). A particularly useful type of interpreted Petri nets, called [[Signal transition graphs|Signal Transition Graphs]] (STGs), was proposed independently in 1985 by Leonid Rosenblum and Alex Yakovlev<ref>{{cite web |author-last1=Rosenblum |author-first1=L. Ya. |author-last2=Yakovlev |author-first2=A. V. |date=July 1985 |title=Signal Graphs: from Self-timed to Timed ones. Proceedings of International Workshop on Timed Petri Nets |___location=Torino, Italy |publisher=IEEE CS Press |pages=199–207 |url=https://www.staff.ncl.ac.uk/alex.yakovlev/home.formal/LR-AY-TPN85.pdf |url-status=live |archive-url=https://web.archive.org/web/20031023002229/http://www.staff.ncl.ac.uk:80/alex.yakovlev/home.formal/LR-AY-TPN85.pdf |archive-date=October 23, 2003}}</ref> and Tam-Anh Chu.<ref>{{cite journal |author-last=Chu |author-first=T.-A. |date=1986-06-01 |title=On the models for designing VLSI asynchronous digital systems |url=https://www.sciencedirect.com/science/article/abs/pii/S0167926086800025 |journal=Integration |language=en |volume=4 |issue=2 |pages=99–113 |doi=10.1016/S0167-9260(86)80002-5 |issn=0167-9260|url-access=subscription }}</ref> Since then, STGs have been studied extensively in theory and practice,<ref>{{cite journal |author-last1=Yakovlev |author-first1=Alexandre |author-last2=Lavagno |author-first2=Luciano |author-last3=Sangiovanni-Vincentelli |author-first3=Alberto |date=1996-11-01 |title=A unified signal transition graph model for asynchronous control circuit synthesis |url=https://doi.org/10.1007/BF00122081 |journal=Formal Methods in System Design |language=en |volume=9 |issue=3 |pages=139–188 |doi=10.1007/BF00122081 |s2cid=26970846 |issn=1572-8102|url-access=subscription }}</ref><ref>{{cite book |author-last1=Cortadella |author-first1=J. 
|url=http://link.springer.com/10.1007/978-3-642-55989-1 |title=Logic Synthesis for Asynchronous Controllers and Interfaces |author-last2=Kishinevsky |author-first2=M. |author-last3=Kondratyev |author-first3=A. |author-last4=Lavagno |author-first4=L. |author-last5=Yakovlev |author-first5=A. |date=2002 |publisher=Springer Berlin Heidelberg |isbn=978-3-642-62776-7 |series=Springer Series in Advanced Microelectronics |volume=8 |___location=Berlin / Heidelberg, Germany |language=en |doi=10.1007/978-3-642-55989-1}}</ref> which has led to the development of popular software tools for analysis and synthesis of asynchronous control circuits, such as Petrify<ref>{{cite web |title=Petrify: Related publications |url=https://www.cs.upc.edu/~jordicf/petrify/refs/ |access-date=2021-07-28 |website=www.cs.upc.edu}}</ref> and Workcraft.<ref>{{cite web |title=start - Workcraft |url=https://workcraft.org/ |access-date=2021-07-28 |website=workcraft.org}}</ref>
Subsequent to Petri nets other models of concurrency have been developed that can model asynchronous circuits including the [[Actor model]] and [[process calculi]].
==Benefits==
A variety of advantages have been demonstrated by asynchronous circuits. Both [[quasi-delay-insensitive]] (QDI) circuits (generally agreed to be the most "pure" form of asynchronous logic that retains computational universality){{Citation needed|date=July 2022}} and less pure forms of asynchronous circuitry which use timing constraints for higher performance and lower area and power present several advantages.
* Robust and cheap handling of [[Metastability in electronics|metastability]] of [[Arbiter (electronics)|arbiters]].
* Average-case performance: an average-case time (delay) of operation is not limited to the worst-case completion time of component (gate, wire, block etc.) as it is in synchronous circuits.{{r|name="Myers2001"|page=xiv}}{{r|name="Sparsø2006"|page=3}} This results in better latency and throughput performance.{{r|name="HPAP2001"|page=9}}{{r|name="Sparsø2006"|page=3}} Examples include ''speculative completion''<ref>{{cite book |author-last1=Nowick |author-first1=S. M. |author-first2=K. Y. |author-last2=Yun |author-first3=P. A. |author-last3=Beerel |author-first4=A. E. |author-last4=Dooply |title=Proceedings Third International Symposium on Advanced Research in Asynchronous Circuits and Systems |chapter=Speculative completion for the design of high-performance asynchronous dynamic adders |chapter-url=https://www.cs.columbia.edu/~nowick/nowick-async97-speculation-completion-fin.pdf |date=March 1997 |pages=210–223 |doi=10.1109/ASYNC.1997.587176 |isbn=0-8186-7922-0 |s2cid=1098994 |access-date=August 27, 2019 |archive-date=April 21, 2021 |archive-url=https://web.archive.org/web/20210421231646/http://www.cs.columbia.edu/~nowick/nowick-async97-speculation-completion-fin.pdf |url-status=dead}}</ref><ref>{{cite journal |author-last=Nowick |author-first=S. M. 
|url=https://www.cs.columbia.edu/~nowick/nowick-iee-pcgs-speculative-competion-96-published.pdf |title=Design of a Low-Latency Asynchronous Adder Using Speculative Completion |journal= IEE Proceedings - Computers and Digital Techniques|volume=143 |issue=5 |date=September 1996 |pages=301–307 |doi=10.1049/ip-cdt:19960704 |doi-broken-date=11 July 2025 |access-date=August 27, 2019 |archive-date=April 22, 2021 |archive-url=https://web.archive.org/web/20210422005641/http://www.cs.columbia.edu/~nowick/nowick-iee-pcgs-speculative-competion-96-published.pdf |url-status=dead}}</ref> which has been applied to design parallel prefix adders faster than synchronous ones, and a high-performance double-precision floating point adder<ref>{{cite journal |author-last1=Sheikh |author-first1=B. |author-first2=R. |author-last2=Manohar |url=https://www.cs.columbia.edu/~nowick/sheikh-manohar-async10-fp-adder.pdf |title=An Operand-Optimized Asynchronous IEEE 754 Double-Precision Floating-Point Adder |journal=Proceedings of the IEEE International Symposium on Asynchronous Circuits and Systems ('Async') |date=May 2010 |pages=151–162 |access-date=August 27, 2019 |archive-date=April 21, 2021 |archive-url=https://web.archive.org/web/20210421233614/http://www.cs.columbia.edu/~nowick/sheikh-manohar-async10-fp-adder.pdf |url-status=dead}}</ref> which outperforms leading synchronous designs.
** [[Early completion]]: the output may be generated ahead of time, when the result of input processing is predictable or irrelevant.
** Inherent elasticity: a variable number of data items may appear at the inputs of a pipeline at any time (a pipeline here meaning a cascade of linked functional blocks). This contributes to high performance while gracefully handling variable input and output rates due to the unclocked delays of the pipeline stages (functional blocks), although congestion is still possible and the delays of input and output gates should also be taken into account.{{r|name="Tsutomu1993"|page=194}}<ref name="HPAP2001">{{cite journal |author-last1=Nowick |author-first1=S. M. |author-first2=M. |author-last2=Singh |title=High-Performance Asynchronous Pipelines: an Overview |date=September–October 2011 |url=https://www.cs.columbia.edu/~nowick/nowick-singh-ieee-dt-11-published.pdf |journal=IEEE Design & Test of Computers |volume=28 |number=5 |pages=8–22 |doi=10.1109/mdt.2011.71 |bibcode=2011IDTC...28....8N |s2cid=6515750 |access-date=August 27, 2019 |archive-date=April 21, 2021 |archive-url=https://web.archive.org/web/20210421193250/http://www.cs.columbia.edu/~nowick/nowick-singh-ieee-dt-11-published.pdf |url-status=dead}}</ref>
** No need for timing-matching between functional blocks either, though this depends on the delay model assumed (predictions of gate/wire delay times) and thus on the actual approach to asynchronous circuit implementation.<ref name="Tsutomu1993">{{cite book |author-last=Sasao |author-first=Tsutomu |title=Logic Synthesis and Optimization |date=1993 |publisher=Springer USA |isbn=978-1-4615-3154-8 |___location=Boston, Massachusetts, USA |oclc=852788081}}</ref>{{rp|page=194}}
** Freedom from the ever-worsening difficulties of distributing a high-[[fan-out]], timing-sensitive clock signal.
** Circuit speed adapts to changing temperature and voltage conditions rather than being locked at the speed mandated by worst-case assumptions.{{Citation needed|date=December 2021}}{{Vague|date={{CURRENTMONTHNAME}} {{CURRENTYEAR}}}}{{r|name="Sparsø2006"|page=3}}
* Lower, on-demand power consumption;<ref name="Myers2001">{{cite book |author-last=Myers |author-first=Chris J. |title=Asynchronous circuit design |date=2001 |publisher=J. Wiley & Sons |isbn=0-471-46412-0 |___location=New York |oclc=53227301}}</ref>{{rp|page=xiv}}{{r|name="HPAP2001"|page=9}}{{r|name="Sparsø2006"|page=3}} zero standby power consumption.{{r|name="Sparsø2006"|page=3}} In 2005 [[Epson]] reported 70% lower power consumption compared to a synchronous design.<ref>[http://global.epson.com/newsroom/2005/news_2005_02_09.htm "Epson Develops the World's First Flexible 8-Bit Asynchronous Microprocessor"]{{dead link |date=October 2016 |bot=InternetArchiveBot |fix-attempted=yes}} 2005</ref> Also, clock drivers can be removed, which can significantly reduce power consumption. However, when using certain encodings, asynchronous circuits may require more area, adding similar power overhead if the underlying process has poor leakage properties (for example, deep submicrometer processes used prior to the introduction of [[high-κ dielectric]]s).
**No need for power-matching between local asynchronous functional domains of circuitry. Synchronous circuits tend to draw a large amount of current right at the clock edge and shortly thereafter. The number of nodes switching (and hence the amount of current drawn) drops off rapidly after the clock edge, reaching zero just before the next clock edge. In an asynchronous circuit, the switching times of the nodes are not correlated in this manner, so the current draw tends to be more uniform and less bursty.
* Robustness toward transistor-to-transistor variability in the manufacturing process (one of the most serious problems facing the semiconductor industry as dies shrink), as well as variations in supply voltage, temperature, and fabrication process parameters.{{r|name="Sparsø2006"|page=3}}
* Less severe [[electromagnetic interference]] (EMI).{{r|name="Sparsø2006"|page=3}} Synchronous circuits create a great deal of EMI in the frequency band at (or very near) their clock frequency and its harmonics; asynchronous circuits generate EMI patterns which are much more evenly spread across the spectrum.{{r|name="Sparsø2006"|page=3}}
* Design modularity (reuse), improved noise immunity and electromagnetic compatibility. Asynchronous circuits are more tolerant to process variations and external voltage fluctuations.{{r|name="Sparsø2006"|page=4}}
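The average-case advantage listed above can be illustrated with a simple sketch (the delay distribution is invented for the example): a synchronous block pays the worst-case delay on every operation, while a self-timed block pays only each operation's actual delay.

```python
import random

random.seed(2024)  # reproducible, invented workload

# Per-operation completion times of a functional block, in nanoseconds.
op_delays_ns = [random.uniform(0.5, 3.0) for _ in range(10_000)]

# Synchronous: every cycle must be as long as the worst-case delay.
synchronous_total = len(op_delays_ns) * max(op_delays_ns)
# Asynchronous: each operation finishes as soon as it actually completes.
asynchronous_total = sum(op_delays_ns)

print(asynchronous_total < synchronous_total)   # True
```

With delays spread uniformly between 0.5 and 3.0 ns, the self-timed total approaches the mean (about 1.75 ns per operation) rather than the 3.0 ns worst case.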
==Disadvantages==
* Area overhead caused by additional logic implementing handshaking.{{r|name="Sparsø2006"|page=4}} In some cases an asynchronous design may require up to double the resources (area, circuit speed, power consumption) of a synchronous design, due to addition of completion detection and design-for-test circuits.<ref name="Furber">{{cite web |author-last=Furber |author-first=Steve |title=Principles of Asynchronous Circuit Design |url=http://owlhouse.csie.nctu.edu.tw/~dannim/AsynCD/principles_of_ASYNC.pdf |work=Pg. 232 |access-date=2011-12-13 |url-status=dead |archive-url=https://web.archive.org/web/20120426050921/http://owlhouse.csie.nctu.edu.tw/~dannim/AsynCD/principles_of_ASYNC.pdf |archive-date=2012-04-26}}</ref>{{r|name="Sparsø2006"|page=4}}
* Compared to synchronous design, as of the 1990s and early 2000s relatively few people were trained or experienced in the design of asynchronous circuits.<ref name="Furber"/>
* Synchronous designs are inherently easier to test and debug than asynchronous designs.<ref>
"Keep It Strictly Synchronous: KISS those asynchronous-logic problems good-bye".
Personal Engineering and Instrumentation News, November 1997, pages 53–55.
http://www.fpga-site.com/kiss.html
</ref> However, this position is disputed by Fant, who claims that the apparent simplicity of synchronous logic is an artifact of the mathematical models used by the common design approaches.<ref name="Fant_2007"/>
* [[Clock gating]] in more conventional synchronous designs is an approximation of the asynchronous ideal, and in some cases, its simplicity may outweigh the advantages of a fully asynchronous design.
* Performance (speed) of asynchronous circuits may be reduced in architectures that require input-completeness (more complex data path).<ref name="van Leeuwen 2010">{{cite book |author-last=van Leeuwen |author-first=T. M. |title=Implementation and automatic generation of asynchronous scheduled dataflow graph |date=2010 |publisher=Delft |url=https://repository.tudelft.nl/islandora/object/uuid:5d87b87f-e084-491f-a18a-9c83ac2c41e1/datastream/OBJ/download}}</ref>
* Lack of dedicated, asynchronous design-focused commercial [[Electronic design automation|EDA]] tools.<ref name="van Leeuwen 2010"/> As of 2006 the situation was slowly improving, however.{{r|name="Sparsø2006"|page=x}}
==Communication==
There are several ways to create asynchronous communication channels that can be classified by their protocol and data encoding.
===Protocols===
There are two widely used protocol families which differ in the way communications are encoded:
*'''two-phase handshake''' (also known as two-phase protocol, [[non-return-to-zero]] (NRZ) encoding, or transition signaling): Communications are represented by any wire transition; transitions from 0 to 1 and from 1 to 0 both count as communications.
*'''four-phase handshake''' (also known as four-phase protocol, or [[return-to-zero]] (RZ) encoding): Communications are represented by a wire transition followed by a reset; a transition sequence from 0 to 1 and back to 0 counts as single communication.
[[File:2 and 4 phase handshakes.svg|thumb|Illustration of two and four-phase handshakes. Top: A sender and a receiver are communicating with simple request and acknowledge signals. The sender drives the request line, and the receiver drives the acknowledge line. Middle: Timing diagram of two, two-phase communications. Bottom: Timing diagram of one, four-phase communication.]]
Despite involving more transitions per communication, circuits implementing four-phase protocols are usually faster and simpler than two-phase protocols because the signal lines return to their original state by the end of each communication. In two-phase protocols, the circuit implementations would have to store the state of the signal line internally.
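The two protocols can be sketched as sequences of request-wire levels (a simplified illustrative model, not a concrete circuit): in two-phase signaling every transition of the request wire is a communication, while in four-phase signaling each communication is a transition followed by a reset.

```python
# Illustrative model of the two handshake protocols as request-wire levels.

def two_phase(req, n):
    """NRZ / transition signaling: each toggle of req is one communication."""
    levels = []
    for _ in range(n):
        req ^= 1                 # 0->1 and 1->0 both count as communications
        levels.append(req)
    return levels

def four_phase(n):
    """RZ signaling: req rises and then returns to zero per communication."""
    levels = []
    for _ in range(n):
        levels.extend([1, 0])    # transition followed by reset
    return levels

# Two communications take 2 transitions in two-phase, but 4 in four-phase.
print(two_phase(0, 2))   # [1, 0]
print(four_phase(2))     # [1, 0, 1, 0]
```

Note how the four-phase wire always ends at 0, so the circuit needs no internal record of the line's previous level, which is why the implementations tend to be simpler.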
Note that these basic distinctions do not account for the wide variety of protocols. These protocols may encode only requests and acknowledgements or also encode the data, which leads to the popular multi-wire data encoding. Many other, less common protocols have been proposed including using a single wire for request and acknowledgment, using several significant voltages, using only pulses or balancing timings in order to remove the latches.
===Data encoding===
There are two widely used data encodings in asynchronous circuits: bundled-data encoding and multi-rail encoding. Multi-rail encoding uses multiple wires to encode a single digit: the value is determined by the wire on which the event occurs. This avoids some of the delay assumptions required by bundled-data encoding, since the request and the data are no longer separated.
====Bundled-data encoding====
Bundled-data encoding uses one wire per bit of data with a request and an acknowledge signal; this is the same encoding used in synchronous circuits without the restriction that transitions occur on a clock edge. The request and the acknowledge are sent on separate wires with one of the above protocols. These circuits usually assume a bounded delay model with the completion signals delayed long enough for the calculations to take place.
In operation, the sender signals the availability and validity of data with a request. The receiver then indicates completion with an acknowledgement, indicating that it is able to process new requests. That is, the request is bundled with the data, hence the name "bundled-data".
Bundled-data circuits are often referred to as micropipelines, whether they use a two-phase or four-phase protocol, even if the term was initially introduced for two-phase bundled-data.
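A single four-phase bundled-data transfer can be sketched as follows (the channel model and names are invented for this illustration, not taken from any cited design):

```python
from dataclasses import dataclass, field

# Illustrative model of one four-phase bundled-data transfer; the Channel
# class and signal names are invented for this sketch.

@dataclass
class Channel:
    data: int = 0
    req: int = 0
    ack: int = 0
    log: list = field(default_factory=list)

def transfer(ch, value):
    ch.data = value                      # data valid before req rises ("bundled")
    ch.req = 1; ch.log.append("req+")    # sender raises request
    latched = ch.data                    # receiver latches the data
    ch.ack = 1; ch.log.append("ack+")    # receiver acknowledges
    ch.req = 0; ch.log.append("req-")    # return-to-zero phase begins
    ch.ack = 0; ch.log.append("ack-")    # channel is back in its idle state
    return latched

ch = Channel()
print(transfer(ch, 42))   # 42
print(ch.log)             # ['req+', 'ack+', 'req-', 'ack-']
```

The bounded-delay assumption is implicit here: the model latches the data as soon as the request arrives, which is only safe if the request was delayed long enough for the data wires to settle.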
[[File:4-phase bundled-data communication.svg|left|thumb|A 4-phase, bundled-data communication. Top: A sender and receiver are connected by data lines, a request line, and an acknowledge line. Bottom: Timing diagram of a bundled data communication. When the request line is low, the data is to be considered invalid and liable to change at any time.]]
====Multi-rail encoding====
Multi-rail encoding uses multiple wires without a one-to-one relationship between bits and wires and a separate acknowledge signal. Data availability is indicated by the transitions themselves on one or more of the data wires (depending on the type of multi-rail encoding) instead of with a request signal as in the bundled-data encoding. This provides the advantage that the data communication is delay-insensitive. Two common multi-rail encodings are one-hot and dual rail. The one-hot (also known as 1-of-n) encoding represents a number in base n with a communication on one of the n wires. The dual-rail encoding uses pairs of wires to represent each bit of the data, hence the name "dual-rail"; one wire in the pair represents the bit value of 0 and the other represents the bit value of 1. For example, a dual-rail encoded two bit number will be represented with two pairs of wires for four wires in total. During a data communication, communications occur on one of each pair of wires to indicate the data's bits. In the general case, an m <math>\times</math> n encoding represents data as m words of base n.
[[File:4-phase multi-rail asynchronous communications.svg|thumb|Diagram of dual rail and 1-of-4 communications. Top: A sender and receiver are connected by data lines and an acknowledge line. Middle: Timing diagram of the sender communicating the values 0, 1, 2, and then 3 to the receiver with the 1-of-4 encoding. Bottom: Timing diagram of the sender communicating the same values to the receiver with the dual-rail encoding. For this particular data size, the dual rail encoding is the same as a 2x1-of-2 encoding.]]
==== Dual-rail encoding ====
Dual-rail encoding with a four-phase protocol is the most common and is also called ''three-state encoding'', since it has two valid states (10 and 01, after a transition) and a reset state (00). Another common encoding, which leads to a simpler implementation than one-hot two-phase dual-rail, is ''four-state encoding'', or level-encoded dual-rail, which uses a data bit and a parity bit to achieve a two-phase protocol.
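Dual-rail encoding for a four-phase protocol can be sketched as follows (function names are invented for the illustration): each bit occupies a pair of wires, the pair (0, 0) is the reset state, and (1, 1) never occurs.

```python
# Illustrative dual-rail encoder/decoder; function names are invented.
# Each bit is a wire pair (rail0, rail1): (1, 0) encodes 0, (0, 1) encodes 1,
# (0, 0) is the reset/spacer state, and (1, 1) is illegal.

def encode_dual_rail(value, bits):
    """Encode an integer as a list of wire pairs, least significant bit first."""
    pairs = []
    for i in range(bits):
        bit = (value >> i) & 1
        pairs.append((0, 1) if bit else (1, 0))
    return pairs

def decode_dual_rail(pairs):
    value = 0
    for i, (rail0, rail1) in enumerate(pairs):
        assert rail0 != rail1, "(0,0) is the spacer; (1,1) is illegal"
        value |= rail1 << i
    return value

print(encode_dual_rail(2, 2))                      # [(1, 0), (0, 1)]
print(decode_dual_rail(encode_dual_rail(2, 2)))    # 2
```

Between two communications the channel would return every pair to (0, 0), the reset state that gives the encoding its "three-state" name.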
==Asynchronous CPU==
<!-- [[History of general purpose CPUs#Asynchronous CPUs]] links here -->
<!--
Is "asynchronous CPU big enough to fork off its own article,
[[clockless CPU]]s ?
-->
Asynchronous [[Central processing unit|CPUs]] are one of [[History of general purpose CPUs#1990 to today: Looking forward|several ideas for radically changing CPU design]].
Unlike a conventional processor, a clockless processor (asynchronous CPU) has no central clock to coordinate the progress of data through the pipeline.
Instead, stages of the CPU are coordinated using logic devices called "pipeline controls" or "FIFO sequencers". Basically, the pipeline controller clocks the next stage of logic when the existing stage is complete. In this way, a central clock is unnecessary. It may actually be even easier to implement high performance devices in asynchronous, as opposed to clocked, logic:
* components can run at different speeds on an asynchronous CPU; all major components of a clocked CPU must remain synchronized with the central clock;
* a traditional CPU cannot "go faster" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because the supply voltage or bus speed is higher, or the ambient temperature lower, than 'normal' or expected.
Asynchronous logic proponents believe these capabilities would have these benefits:
* lower power dissipation for a given performance level, and
* highest possible execution speeds.
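The average-case advantage described above can be sketched with a toy Python model (not from the cited sources; the stage names, delays, and functions are invented). Each stage reports its own, data-dependent completion time to its successor, with no clock period to round up to:

```python
# Illustrative sketch: a self-timed pipeline in which each stage
# signals completion to its successor instead of waiting on a global
# clock. All names and delays here are invented for illustration.

def make_stage(name, work, delay):
    """Build a stage that processes a value and reports its own
    (data-dependent) completion time."""
    def stage(value, t_ready):
        t_done = t_ready + delay(value)  # latency depends on the data
        return work(value), t_done
    return stage

# Multiplying by 0 or 1 finishes early; the next stage may start as
# soon as the result is ready, mimicking the average-case advantage.
multiply = make_stage(
    "mul", lambda v: v * 3,
    delay=lambda v: 1 if v in (0, 1) else 5)
add_one = make_stage(
    "add", lambda v: v + 1,
    delay=lambda v: 2)

def run_pipeline(stages, value):
    """Push one value through the pipeline; each handshake passes both
    the datum and the time at which it became valid."""
    t = 0
    for stage in stages:
        value, t = stage(value, t)
    return value, t

assert run_pipeline([multiply, add_one], 1) == (4, 3)   # fast path
assert run_pipeline([multiply, add_one], 7) == (22, 7)  # slow path
```

A clocked pipeline would instead have to budget every stage for the slow path (5 time units for the multiplier), so the fast-path result could not be ready before t = 7.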
The biggest disadvantage of the clockless CPU is that most [[CPU design]] tools assume a clocked CPU (i.e., a [[synchronous circuit]]). Many tools "enforce synchronous design practices".<ref>{{cite web |url=https://www.eetimes.com/reality-tv-for-fpga-design-engineers/?page_number=2 |website=eetimes.com |date=2005-03-15 |access-date=2020-11-11 |author-first=Robert |author-last=Kruger |title=Reality TV for FPGA design engineers!}}</ref> Making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids [[Metastability in electronics|metastable]] problems. The group that designed the [[AMULET microprocessor|AMULET]], for example, developed a tool called LARD<ref>[http://www.cs.man.ac.uk/apt/projects/tools/lard/ LARD] {{webarchive |url=https://web.archive.org/web/20050306161822/http://www.cs.man.ac.uk/apt/projects/tools/lard/ |date=March 6, 2005}}</ref> to cope with the complex design of AMULET3.
=== Examples ===
Despite all the difficulties, numerous asynchronous CPUs have been built.
The [[ORDVAC]] of 1951 was a successor to the [[ENIAC]] and the first asynchronous computer ever built.<ref name="ILLIAC"/><ref name=":1"/>
The [[ILLIAC II]] was the first completely asynchronous, speed-independent processor design ever built; it was the most powerful computer of its time.<ref name="ILLIAC"/>
DEC [[PDP-16]] Register Transfer Modules (ca. 1973) allowed the experimenter to construct asynchronous, 16-bit processing elements. Delays for each module were fixed and based on the module's worst-case timing.
=== Caltech ===
Since the mid-1980s, [[California Institute of Technology|Caltech]] has designed four non-commercial CPUs in an attempt to evaluate the performance and energy efficiency of asynchronous circuits.<ref name="TGOAM2003">{{cite journal |author-last1=Martin |author-first1=A. J. |author-last2=Nystrom |author-first2=M. |author-last3=Wong |author-first3=C. G. |date=November 2003 |title=Three generations of asynchronous microprocessors |journal=IEEE Design & Test of Computers |volume=20 |issue=6 |pages=9–17 |doi=10.1109/MDT.2003.1246159 |bibcode=2003IDTC...20....9M |s2cid=15164301 |issn=0740-7475}}</ref><ref name=":2">{{cite book |author-last1=Martin |author-first1=A. J. |author-last2=Nystrom |author-first2=M. |author-last3=Papadantonakis |author-first3=K. |author-last4=Penzes |author-first4=P. I. |author-last5=Prakash |author-first5=P. |author-last6=Wong |author-first6=C. G. |author-last7=Chang |author-first7=J. |author-last8=Ko |author-first8=K. S. |author-last9=Lee |author-first9=B. |author-last10=Ou |author-first10=E. |author-last11=Pugh |author-first11=J. |title=Ninth International Symposium on Asynchronous Circuits and Systems, 2003. Proceedings. |chapter=The Lutonium: A sub-nanojoule asynchronous 8051 microcontroller |date=2003 |chapter-url=https://ieeexplore.ieee.org/document/1199162 |___location=Vancouver, BC, Canada |publisher=IEEE Comput. Soc |pages=14–23 |doi=10.1109/ASYNC.2003.1199162 |isbn=978-0-7695-1898-5 |s2cid=13866418|url=https://infoscience.epfl.ch/record/102894/files/lutonium.pdf }}</ref>
; Caltech Asynchronous Microprocessor (CAM)
In 1988, Caltech made the Caltech Asynchronous Microprocessor (CAM), the first asynchronous, [[Quasi Delay Insensitive|quasi delay-insensitive]] (QDI) microprocessor.<ref name="TGOAM2003"/><ref name=":3">{{cite journal |author-last=Martin |author-first=Alain J. |date=2014-02-06 |others=Computer Science Technical Reports |title=25 Years Ago: The First Asynchronous Microprocessor |publisher=California Institute of Technology |url=https://resolver.caltech.edu/CaltechAUTHORS:20140206-111915844 |language=en |doi=10.7907/Z9QR4V3H}}</ref> The processor had a 16-bit [[Reduced instruction set computer|RISC]] ISA and [[Harvard architecture|separate instruction and data memories]].<ref name="TGOAM2003"/> It was manufactured by [[MOSIS]] and funded by [[DARPA]]. The project was supervised by the [[Office of Naval Research]], the [[Army Research Office]], and the [[Air Force Research Laboratory|Air Force Office of Scientific Research]].{{r|name="TGOAM2003"|page=12}}
During demonstrations, the researchers loaded a simple program which ran in a tight loop, pulsing one of the output lines after each instruction. This output line was connected to an oscilloscope. When a cup of hot coffee was placed on the chip, the pulse rate (the effective "clock rate") naturally slowed down to adapt to the worsening performance of the heated transistors. When [[liquid nitrogen]] was poured on the chip, the instruction rate shot up with no additional intervention. Additionally, at lower temperatures, the voltage supplied to the chip could be safely increased, which also improved the instruction rate – again, with no additional configuration.{{Citation needed|date=December 2021}}
When implemented in [[gallium arsenide]] (GaAs) it was claimed to achieve 100 MIPS.{{r|name="TGOAM2003"|page=5}} Overall, the research paper interpreted the resultant performance of CAM as superior to commercial alternatives available at the time.{{r|name="TGOAM2003"|page=5}}
; MiniMIPS
In 1998, the MiniMIPS, an experimental asynchronous [[MIPS I]]-based microprocessor, was made. Although its [[SPICE]]-predicted performance was around 280 MIPS at 3.3 V, the implementation suffered from several layout mistakes (human error) and the results turned out to be about 40% lower (see table).{{r|name="TGOAM2003"|page=5}}
; The Lutonium 8051
Made in 2003, it was a [[Quasi-delay-insensitive circuit|quasi delay-insensitive]] asynchronous microcontroller designed for energy efficiency.<ref name=":2"/>{{r|name="TGOAM2003"|page=9}} The microcontroller's implementation followed the [[Harvard architecture]].<ref name=":2"/>
{| class="wikitable center"
|+Performance comparison of the Caltech CPUs (in [[Millions of instructions per second|MIPS]]).{{refn|group="note"|[[Dhrystone]] was also used.{{r|name="TGOAM2003"|pages=4, 8}}}}
!Name
!Year
!Word size (bits)
!Transistors (thousands)
!Size (mm)
!Node size (μm)
!{{abbr|1.5V|Supply voltage}}
!2V
!3.3V
!5V
!10V
|-
|CAM [[SCMOS]] || 1988 || 16 || 20 || N/A || [[1.5 μm process|1.6]]||N/A||5||N/A||18||26
|-
|MiniMIPS [[CMOS]] || 1998 || 32 || 2000 || 8×14 || 0.6 ||60||100||180|| N/A || N/A
|-
|Lutonium [[Intel 8051|8051]] [[CMOS]] || 2003 || 8 || N/A || N/A || [[180 nm process|0.18]]|| 200 || N/A || N/A || N/A || 4
|}
=== Epson ===
In 2004, Epson manufactured the world's first bendable microprocessor, called ACT11, an 8-bit asynchronous chip.<ref>[http://www.eetimes.com/conf/isscc/showArticle.jhtml?articleID=59302081&kc=3681 "Seiko Epson tips flexible processor via TFT technology"] {{Webarchive |url=https://web.archive.org/web/20100201021253/http://eetimes.com/conf/isscc/showArticle.jhtml?articleID=59302081&kc=3681 |date=2010-02-01}} by Mark LaPedus 2005</ref><ref>[https://ieeexplore.ieee.org/document/1493974 "A flexible 8b asynchronous microprocessor based on low-temperature poly-silicon TFT technology"] by Karaki et al. 2005. Abstract: "A flexible 8b asynchronous microprocessor ACTII ... The power level is 30% of the synchronous counterpart."</ref><ref>[http://www.holtronic.ch/White_papers/SE2005_1.pdf "Introduction of TFT R&D Activities in Seiko Epson Corporation"] by Tatsuya Shimoda (2005?) has picture of "A flexible 8-bit asynchronous microprocessor, ACT11"</ref><ref>[http://www.epson.co.jp/e/newsroom/2005/news_2005_02_09.htm "Epson Develops the World's First Flexible 8-Bit Asynchronous Microprocessor"]</ref><ref>[http://www.pcadvisor.co.uk/news/index.cfm?newsid=4547 "Seiko Epson details flexible microprocessor: A4 sheets of e-paper in the pipeline] by Paul Kallender 2005</ref> Synchronous flexible processors are slower, since bending the material on which a chip is fabricated causes wild and unpredictable variations in the delays of various transistors; worst-case scenarios must therefore be assumed everywhere, and everything must be clocked at worst-case speed. The processor is intended for use in [[smart cards]], whose chips are currently limited to sizes small enough to remain perfectly rigid.
=== IBM ===
In 2014, IBM announced a [[SyNAPSE]]-developed chip that runs in an asynchronous manner, with one of the highest [[transistor count]]s of any chip ever produced. IBM's chip consumes orders of magnitude less power than traditional computing systems on pattern recognition benchmarks.<ref>[http://www.darpa.mil/NewsEvents/Releases/2014/08/07.aspx "SyNAPSE program develops advanced brain-inspired chip"] {{webarchive|url=https://web.archive.org/web/20140810011226/http://www.darpa.mil/NewsEvents/Releases/2014/08/07.aspx |date=2014-08-10}}. August 07, 2014.</ref>
===Timeline===
<!-- tubes, and therefore not a "microprocessor": TODO check if all of these are tube-based -->
* [[ORDVAC]] and the (identical) [[ILLIAC I]] (1951)<ref name="ILLIAC">"In the 1950 and 1960s, asynchronous design was used in many early mainframe computers, including the ILLIAC I and ILLIAC II ... ." [https://books.google.com/books?id=DPGJEPZGXMQC&pg=PA322&lpg=PA322 Brief History of asynchronous circuit design]</ref><ref name=":1">"The Illiac is a binary parallel asynchronous computer in which negative numbers are represented as two's complements." – final summary of [http://www.bitsavers.org/pdf/univOfIllinoisUrbana/illiac/ILLIAC/ILLIAC_Design_Techniques_May55.pdf "Illiac Design Techniques"] 1955.</ref>
* [[Johnniac]] (1953)<ref name="Johnniac">[http://www.rand.org/content/dam/rand/pubs/research_memoranda/2005/RM5654.pdf Johnniac history written in 1968]</ref>
* [[WEIZAC]] (1955)
* Kiev (1958), a Soviet machine whose programming language supported pointers long before they appeared in [[PL/I]]<ref name="Glu62">V. M. Glushkov and E. L. Yushchenko. Mathematical description of computer "Kiev". UkrSSR, 1962 (in Russian)</ref>
* [[ILLIAC II]] (1962)<ref name="ILLIAC" /><!-- built out of discrete transistors, and therefore not a "microprocessor"; closest thing appears to be async DEC modules; TODO: check around to find any others if they existed -->
* [[Victoria University of Manchester]] built [[Atlas Computer (Manchester)|Atlas]] (1964)
<!-- build out of SSI ICs -->
* ICL 1906A and 1906S mainframe computers, part of the 1900 series and sold from 1964 for over a decade by [[International Computers Limited|ICL]]<ref>{{cite web |url=http://www.cs.man.ac.uk/CCS/res/res18.htm |title=Computer Resurrection Issue 18}}</ref>
* Polish computers [[Jacek Karpiński#KAR-65|KAR-65 and K-202]] (1965 and 1970 respectively)
* [[Honeywell]] CPUs 6180 (1972)<ref>"Entirely asynchronous, its hundred-odd boards would send out requests, earmark the results for somebody else, swipe somebody else's signals or data, and backstab each other in all sorts of amusing ways which occasionally failed (the "op not complete" timer would go off and cause a fault). ... [There] was no hint of an organized synchronization strategy: various "it's ready now", "ok, go", "take a cycle" pulses merely surged through the vast backpanel ANDed with appropriate state and goosed the next guy down. Not without its charms, this seemingly ad-hoc technology facilitated a substantial degree of overlap ... as well as the [segmentation and paging] of the Multics address mechanism to the extant 6000 architecture in an ingenious, modular, and surprising way ... . Modification and debugging of the processor, though, were no fun." [http://www.multicians.org/mga.html#6180 "Multics Glossary: ... 6180"]</ref> and Series 60 Level 68 (1981)<ref>"10/81 ... DPS 8/70M CPUs" [http://www.multicians.org/chrono.html Multics Chronology]</ref><ref>"The Series 60, Level 68 was just a repackaging of the 6180." [http://www.multicians.org/features.html#tag2.4 Multics Hardware features: Series 60, Level 68]</ref> upon which [[Multics]] ran asynchronously
<!-- microprocess (all of these?) TODO: check -->
* Soviet bit-slice microprocessor modules (late 1970s)<ref>[http://worldwide.espacenet.com/publicationDetails/originalDocument?CC=US&NR=4124890A&KC=A&FT=D&ND=3&date=19781107&DB=EPODOC&locale=en_EP A. A. Vasenkov, V. L. Dshkhunian, P. R. Mashevich, P. V. Nesterov, V. V. Telenkov, Ju. E. Chicherin, D. I. Juditsky, "Microprocessor computing system," Patent US4124890, Nov. 7, 1978]</ref><ref>[http://www.computer-museum.ru/articles/?article=116 Chapter 4.5.3 in the biography of D. I. Juditsky (in Russian)]</ref> produced as К587,<ref>{{cite web |url=http://www.cpu80.ru/home/seria-587 |title=Серия 587 - Collection ex-USSR Chip's |access-date=2015-07-16 |url-status=dead |archive-url=https://web.archive.org/web/20150717061828/http://www.cpu80.ru/home/seria-587 |archive-date=2015-07-17}}</ref> К588<ref>{{cite web |url=http://www.cpu80.ru/home/seria-588 |title=Серия 588 - Collection ex-USSR Chip's |access-date=2015-07-16 |url-status=dead |archive-url=https://web.archive.org/web/20150717082004/http://www.cpu80.ru/home/seria-588 |archive-date=2015-07-17}}</ref> and К1883 (U83x in East Germany)<ref>{{cite web |url=http://www.cpu80.ru/home/seria-u83-k1883 |title=Серия 1883/U830 - Collection ex-USSR Chip's |access-date=2015-07-19 |url-status=dead |archive-url=https://web.archive.org/web/20150722062052/http://www.cpu80.ru/home/seria-u83-k1883 |archive-date=2015-07-22}}</ref>
* Caltech Asynchronous Microprocessor, the world-first asynchronous microprocessor (1988)<ref name="TGOAM2003"/><ref name=":3"/>
* [[ARM architecture|ARM]]-implementing [[AMULET microprocessor|AMULET]] (1993 and 2000)
* Asynchronous implementation of [[MIPS architecture|MIPS]] R3000, dubbed [https://web.archive.org/web/20080509090359/http://www.async.caltech.edu/mips.html MiniMIPS] (1998)
* Several versions of the [[XAP processor]] experimented with different asynchronous design styles: a bundled data XAP, a 1-of-4 XAP, and a 1-of-2 (dual-rail) XAP (2003?)<ref name="Spadavecchia"/>
* ARM-compatible processor (2003?) designed by Z. C. Yu, [[Steve Furber|S. B. Furber]], and L. A. Plana; "designed specifically to explore the benefits of asynchronous design for security sensitive applications"<ref name="Spadavecchia"/>
* SAMIPS (2003), a synthesisable asynchronous implementation of the MIPS R3000 processor<ref>{{Cite arXiv |last1=Zhang |first1=Qianyi |last2=Theodoropoulos |first2=Georgios |date=2024 |title=SAMIPS: A Synthesised Asynchronous Processor |class=cs.AR |eprint=2409.20388}}</ref><ref>{{Cite book |last1=Zhang |first1=Qianyi |last2=Theodoropoulos |first2=Georgios |date=2003 |editor-last=Omondi |editor-first=Amos |editor2-last=Sedukhin |editor2-first=Stanislav |chapter=Towards an Asynchronous MIPS Processor |chapter-url=https://link.springer.com/chapter/10.1007/978-3-540-39864-6_12 |title=Advances in Computer Systems Architecture |series=Lecture Notes in Computer Science |language=en |___location=Berlin, Heidelberg |publisher=Springer |pages=137–150 |doi=10.1007/978-3-540-39864-6_12 |isbn=978-3-540-39864-6}}</ref>
* "Network-based Asynchronous Architecture" processor (2005) that executes a subset of the [[MIPS architecture]] instruction set<ref name="Spadavecchia">[http://www.era.lib.ed.ac.uk/bitstream/1842/860/1/Spadavecchia_thesis.pdf "A Network-based Asynchronous Architecture for Cryptographic Devices"] by Ljiljana Spadavecchia 2005 in section "4.10.2 Side-channel analysis of dual-rail asynchronous architectures" and section "5.5.5.1 Instruction set"</ref>
* ARM996HS processor (2006) from Handshake Solutions
* HT80C51 processor (2007?) from Handshake Solutions.<ref>[http://www.keil.com/dd/chip/3931.htm "Handshake Solutions HT80C51"] "The Handshake Solutions HT80C51 is a Low power, asynchronous 80C51 implementation using handshake technology, compatible with the standard 8051 instruction set."</ref>
* Vortex, a [[Superscalar processor|superscalar]] [[History of general-purpose CPUs|general purpose CPU]] with a [[Load–store architecture|load/store architecture]] from Intel (2007);<ref name=":0">{{cite book |author-last=Lines |author-first=Andrew |title=13th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC'07) |chapter=The Vortex: A Superscalar Asynchronous Processor |date=March 2007 |pages=39–48 |doi=10.1109/ASYNC.2007.28 |isbn=978-0-7695-2771-0 |s2cid=33189213}}</ref> it was developed as Fulcrum Microsystem test Chip 2 and was not commercialized, excepting some of its components; the chip included [[DDR SDRAM]] and a 10Gb Ethernet interface linked via Nexus system-on-chip net to the CPU<ref name=":0"/><ref>{{Cite book |author-last=Lines |author-first=A. |title=11th Symposium on High Performance Interconnects, 2003. Proceedings. |chapter=Nexus: An asynchronous crossbar interconnect for synchronous system-on-chip designs |date=2003 |___location=Stanford, CA, USA |publisher=IEEE Comput. Soc |pages=2–9 |doi=10.1109/CONECT.2003.1231470 |isbn=978-0-7695-2012-4 |s2cid=1799204}}</ref>
* SEAforth [[multi-core]] processor (2008) from [[Charles H. Moore]]<ref>[http://www.intellasys.net/index.php?option=com_content&task=view&id=21&Itemid=41 SEAforth Overview] {{webarchive |url=https://web.archive.org/web/20080202055942/http://www.intellasys.net/index.php?option=com_content&task=view&id=21&Itemid=41 |date=2008-02-02}} "... asynchronous circuit design throughout the chip. There is no central clock with billions of dumb nodes dissipating useless power. ... the processor cores are internally asynchronous themselves."</ref>
* GA144<ref>[http://www.greenarraychips.com "GreenArrayChips"] "Ultra-low-powered multi-computer chips with integrated peripherals."</ref> [[multi-core]] processor (2010) from [[Charles H. Moore]]
* TAM16: 16-bit asynchronous microcontroller IP core (Tiempo)<ref>[http://www.tiempo-ic.com/uploads/Docs/TAM16_Datasheet.pdf?page=uploads/Docs/Tiempo%20TAM16%20IP%20Data%20Sheet%201.2.pdf Tiempo: Asynchronous TAM16 Core IP]</ref>
* Aspida asynchronous [[DLX]] core;<ref>{{cite web |title=ASPIDA sync/async DLX Core |url=http://opencores.org/project,aspida |website=OpenCores.org |access-date=September 5, 2014}}</ref> the asynchronous open-source DLX processor (ASPIDA) has been successfully implemented both in ASIC and FPGA versions<ref>[https://www.ics.forth.gr/carv/asynchronous-circuits-systems-2001-2010 "Asynchronous Open-Source DLX Processor (ASPIDA)"].</ref>
==See also==
* [[Event camera]] (asynchronous camera)
* [[Perfect clock gating]]
* [[Petri net]]s
* [[Sequential logic]] (asynchronous)
* [[Signal transition graphs]]
* {{Annotated link|Transputer}}
==Notes==
{{reflist|group="note"}}
==References==
{{Reflist|refs=
<ref name="Fant_2005">{{cite book |title=Logically determined design: clockless system design with NULL convention logic (NCL) |author-first=Karl M. |author-last=Fant |date=February 2005 |edition=1 |publisher=[[Wiley-Interscience]] / [[John Wiley and Sons, Inc.]] |publication-place=Hoboken, New Jersey, USA |isbn=978-0-471-68478-7 |lccn=2004050923 |url=https://books.google.com/books?id=UTHFcdvvHQcC}} (xvi+292 pages)</ref>
<ref name="Fant_2007">{{cite book |title=Computer Science Reconsidered: The Invocation Model of Process Expression |chapter= |author-first=Karl M. |author-last=Fant |date=August 2007 |edition=1 |publisher=[[Wiley-Interscience]] / [[John Wiley and Sons, Inc.]] |publication-place=Hoboken, New Jersey, USA |isbn=978-0-471-79814-9 |lccn=2006052821 |pages= |url=http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471798142.html |access-date=2023-07-23}} (xix+1+269<!--+7 blank--> pages)</ref>
<ref name="Smith-Di_2009">{{cite book |title=Designing Asynchronous Circuits using NULL Conventional Logic (NCL) |author-last1=Smith |author-first1=Scott C. |author-last2=Di |author-first2=Jia |date=2009 |publisher={{ill|Morgan & Claypool Publishers|d|Q64605783}} |series=Synthesis Lectures on Digital Circuits & Systems |id=Lecture #23 |issn=1932-3166 |eissn=1932-3174 |isbn=978-1-59829-981-6 |pages=61–73 |url=http://www.gbv.de/dms/tib-ub-hannover/734874170.pdf |access-date=2023-09-10 |postscript=none}}; {{cite book |title=Designing Asynchronous Circuits using NULL Conventional Logic (NCL) |author-last1=Smith |author-first1=Scott C. |author-last2=Di |author-first2=Jia |___location=[[University of Arkansas]], Arkansas, USA |series=Synthesis Lectures on Digital Circuits & Systems |date=2022 |orig-date=2009-07-23 |publisher=[[Springer Nature Switzerland AG]] |isbn=978-3-031-79799-6 |issn=1932-3166 |eissn=1932-3174 |id=Lecture #23 |doi=10.1007/978-3-031-79800-9 |pages= |url=https://books.google.com/books?id=_4JyEAAAQBAJ |access-date=2023-09-10}} (x+86+6 pages)</ref>
<ref name="Smith-Di_2011">{{cite web |author-last1=Smith |author-first1=Scott C. |author-last2=Di |author-first2=Jia |title=U.S. 7,977,972 Ultra-Low Power Multi-threshold Asychronous Circuit Design |url=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7977972.PN.&OS=PN/7977972&RS=PN/7977972 |access-date=2011-12-12}}</ref>
<ref name="Vasyukevich_1984">{{cite journal |title=<!-- not a typo! -->Whenjunction as a logic/dynamic operation. Definition, implementation and applications |language= |author-first=Vadim O. |author-last=Vasyukevich |date=1984 |journal=Automatic Control and Computer Sciences <!-- |publisher=Allerton Press |issn=1558-108X --> |volume=18 |issue=6 |pages=68–74}} (NB. The function was still called ''whenjunction'' instead of ''venjunction'' in this publication.)</ref>
<ref name="Vasyukevich_1998">{{cite journal |title=Monotone sequences of binary data sets and their identification by means of venjunctive functions |author-first=Vadim O. |author-last=Vasyukevich |journal=Automatic Control and Computer Sciences <!-- |publisher=Allerton Press |issn=1558-108X --> |date=1998 |volume=32 |issue=5 |pages=49–56}}</ref>
<ref name="Vasyukevich_2007">{{cite journal |title=Decoding asynchronous sequences |author-first=Vadim O. |author-last=Vasyukevich |journal=Automatic Control and Computer Sciences |publisher=Allerton Press |issn=1558-108X |volume=41 |number=2 |date=April 2007 |doi=10.3103/S0146411607020058 |s2cid=21204394 |pages=93–99}}</ref>
<ref name="Vasyukevich_2009">{{cite web |author-first=Vadim O. |author-last=Vasyukevich |title=Asynchronous logic elements. Venjunction and sequention |date=2009 |publisher= |url=http://asynlog.balticom.lv/Content/Files/en.pdf |archive-url=https://web.archive.org/web/20110722160840/http://asynlog.balticom.lv/Content/Files/en.pdf <!-- https://ghostarchive.org/archive/20221009/http://asynlog.balticom.lv/Content/Files/en.pdf --> |archive-date=2011-07-22 |url-status=live}} (118 pages)</ref>
<ref name="Vasyukevich_2011">{{cite book |author-first=Vadim O. |author-last=Vasyukevich |title=Asynchronous Operators of Sequential Logic: Venjunction & Sequention — Digital Circuits Analysis and Design |publisher=[[Springer-Verlag]] |publication-place=Berlin / Heidelberg, Germany |___location=Riga, Latvia |date=2011 |edition=1st |series=Lecture Notes in Electrical Engineering |volume=101 |isbn=978-3-642-21610-7 |doi=10.1007/978-3-642-21611-4 |issn=1876-1100 |lccn=2011929655}} (xiii+1+123+7 pages) (NB. The back cover of this book erroneously states volume 4, whereas it actually is volume 101.)</ref>
}}
==Further reading==
*[https://web.archive.org/web/20071215230357/http://www.handshakesolutions.com/ TiDE] from Handshake Solutions in the Netherlands, a commercial asynchronous circuit design tool. Commercial asynchronous ARM (ARM996HS) and 8051 (HT80C51) processors are available.
*[https://www.cs.columbia.edu/~nowick/ald-nowick-tr-intro.pdf An introduction to asynchronous circuit design] {{Webarchive |url=https://web.archive.org/web/20100623121342/http://www.cs.columbia.edu/~nowick/ald-nowick-tr-intro.pdf |date=23 June 2010}} by Davis and Nowick
*[http://theseusresearch.com/NullConventionLogic.htm Null convention logic], a design style pioneered by Theseus Logic, who have fabricated over 20 ASICs based on their NCL08 and NCL8501 microcontroller cores [https://web.archive.org/web/20070927214117/http://scism.sbu.ac.uk/ccsv/ACiD-WG/AsyncIndustryStatus.pdf]<!-- NOTE THIS SEEMS TO BE THE SAME AS THE REFERENCE BELOW ?? -->
*[https://web.archive.org/web/20111009112125/http://www.scism.lsbu.ac.uk/ccsv/ACiD-WG/AsyncIndustryStatus.pdf The Status of Asynchronous Design in Industry] Information Society Technologies (IST) Programme, IST-1999-29119, D. A. Edwards W. B. Toms, June 2004, via ''www.scism.lsbu.ac.uk''
*The [http://brej.org/red_star/ Red Star] is a version of the MIPS R3000 implemented in asynchronous logic
*The [https://web.archive.org/web/20060807232502/http://www.cs.manchester.ac.uk/apt/projects/processors/amulet/ Amulet microprocessors] were asynchronous ARMs, built in the 1990s at [[University of Manchester]], England
*The [https://www.gtheodoropoulos.com/Research/Projects/samips/samips.html SAMIPS] synthesised asynchronous MIPS R3000 processor.
*The [http://www.asyncart.com N-Protocol] developed by Navarre AsyncArt, the first commercial asynchronous design methodology for conventional FPGAs
*[http://www.henning-mersch.de/pgpsalm/ PGPSALM] an asynchronous implementation of the 6502 microprocessor
*[https://web.archive.org/web/20071214045029/http://async.caltech.edu/ Caltech Async Group home page]
*[http://www.tiempo-ic.com/ Tiempo]: French company providing asynchronous IP and design tools
*[https://archive.today/20130122151803/http://www.eetimes.com/showArticle.jhtml?articleID=59302120 Epson ACT11 Flexible CPU Press Release]
*[http://async.org.uk Newcastle upon Tyne Async Group page]
==External links==
*{{Commons category-inline|Asynchronous circuits}}
{{Digital electronics}}
{{DEFAULTSORT:Asynchronous Circuit}}
[[Category:Automata (computation)]]
[[Category:Clock signal]]