{{short description|Algorithm that multiplies two signed binary numbers in two's complement notation}}
{{Use dmy dates|date=April 2022}}
'''Booth's multiplication algorithm''' is a [[multiplication algorithm]] that multiplies two signed [[base 2|binary]] numbers in [[two's complement|two's complement notation]]. The [[algorithm]] was invented by [[Andrew Donald Booth]] in 1950 while doing research on [[crystallography]] at [[Birkbeck, University of London|Birkbeck College]] in [[Bloomsbury]], [[London]].<ref name="Booth_1951"/> Booth devised it to speed up multiplication on the desk calculators of his era, which were faster at shifting than at adding. Booth's algorithm is of interest in the study of [[computer architecture]].
 
==The algorithm==
Booth's algorithm examines adjacent pairs of [[bit]]s of the ''N''-bit multiplier ''Y'' in signed [[two's complement]] representation, including an implicit bit below the [[least significant bit]], ''y''<sub>−1</sub> = 0. For each bit ''y''<sub>''i''</sub>, for ''i'' running from 0 to ''N'' − 1, the bits ''y''<sub>''i''</sub> and ''y''<sub>''i''−1</sub> are considered. Where these two bits are equal, the product accumulator ''P'' is left unchanged. Where ''y''<sub>''i''</sub> = 0 and ''y''<sub>''i''−1</sub> = 1, the multiplicand times 2<sup>''i''</sup> is added to ''P''; and where ''y''<sub>''i''</sub> = 1 and ''y''<sub>''i''−1</sub> = 0, the multiplicand times 2<sup>''i''</sup> is subtracted from ''P''. The final value of ''P'' is the signed product.{{Citation needed|reason=No source for the algorithm|date=April 2025}}
 
The representations of the multiplicand and product are not specified; typically, these are both also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will work as well. As stated here, the order of the steps is not determined. Typically, it proceeds from [[Least significant bit|LSB]] to [[Most significant bit|MSB]], starting at ''i'' = 0; the multiplication by 2<sup>''i''</sup> is then typically replaced by incremental shifting of the ''P'' accumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done just on the highest ''N'' bits of ''P''.<ref name="Chen_1992"/> There are many variations and optimizations on these details.
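As an illustration, the bit-pair rule can be sketched in Python (a hypothetical helper, not from any cited source; the multiplier is passed as its raw ''N''-bit pattern, and the implicit bit ''y''<sub>−1</sub> = 0 starts the scan):

```python
def booth_multiply(multiplicand, multiplier_bits, n):
    """Multiply by scanning the multiplier's adjacent bit pairs
    (y_i, y_{i-1}), with the implicit bit y_{-1} = 0.

    multiplier_bits is the raw n-bit two's-complement pattern of Y.
    """
    p = 0
    prev = 0                              # implicit bit y_{-1} = 0
    for i in range(n):
        yi = (multiplier_bits >> i) & 1
        if yi == 0 and prev == 1:
            p += multiplicand << i        # end of a run of 1s: add m * 2^i
        elif yi == 1 and prev == 0:
            p -= multiplicand << i        # start of a run of 1s: subtract m * 2^i
        prev = yi
    return p
```

For example, `booth_multiply(3, 0b1100, 4)` treats 1100 as −4 and returns −12; a run of 1s that reaches the MSB never receives its closing addition, which is exactly how the multiplier's sign is handled.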
 
The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value.
 
==A typical implementation==
[[Image:Calculator walther hg.jpg|thumb|right|A Walther WSR160 [[Odhner Arithmometer|arithmometer]] from 1960. Each turn of the crank handle adds ''(up)'' or subtracts ''(down)'' the operand set to the top register from the value in the accumulator register at the bottom. [[Arithmetic shift|Shifting]] the adder left or right multiplies the effect by ten.]]
Booth's algorithm can be implemented by repeatedly adding (with ordinary unsigned binary addition) one of two predetermined values ''A'' and ''S'' to a product ''P'', then performing a rightward [[arithmetic shift]] on ''P''. Let '''m''' and '''r''' be the [[multiplicand]] and [[Multiplication#Terminology|multiplier]], respectively; and let ''x'' and ''y'' represent the number of bits in '''m''' and '''r'''.
 
# Determine the values of ''A'' and ''S'', and the initial value of ''P''. All of these numbers should have a length equal to (''x''&nbsp;+&nbsp;''y''&nbsp;+&nbsp;1).
## A: Fill the most significant (leftmost) ''x'' bits with the value of '''m'''. Fill the remaining (''y''&nbsp;+&nbsp;1) bits with zeros.
## S: Fill the most significant ''x'' bits with the value of (&minus;'''m''') in two's complement notation. Fill the remaining (''y''&nbsp;+&nbsp;1) bits with zeros.
## P: Fill the most significant ''x'' bits with zeros. To the right of this, append the value of '''r'''. Fill the least significant (rightmost) bit with a zero.
# Determine the two least significant (rightmost) bits of ''P''.
## If they are 01, find the value of ''P''&nbsp;+&nbsp;''A''. Ignore any overflow.
## If they are 10, find the value of ''P''&nbsp;+&nbsp;''S''. Ignore any overflow.
## If they are 00, do nothing. Use ''P'' directly in the next step.
## If they are 11, do nothing. Use ''P'' directly in the next step.
# [[arithmetic shift|Arithmetically shift]] the value obtained in the 2nd step by a single place to the right. Let ''P'' now equal this new value.
# Repeat steps 2 and 3 until they have been done ''y'' times.
# Drop the least significant (rightmost) bit from ''P''. This is the product of '''m''' and '''r'''<nowiki>.</nowiki>{{Citation needed|reason=No source for the Implementation|date=April 2025}}
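The steps above can be rendered at the register level as follows (an illustrative Python sketch; the function name and bit-masking details are assumptions, and like the procedure itself it fails when the multiplicand is the most negative representable value, a case discussed below):

```python
def booth_asp(m, r, x, y):
    """Register-level rendering of the A/S/P procedure: A, S and P are
    (x + y + 1)-bit patterns, additions discard overflow, and the
    shift between steps is an arithmetic right shift.
    """
    n = x + y + 1
    mask = (1 << n) - 1
    A = ((m & ((1 << x) - 1)) << (y + 1)) & mask      # m, then y + 1 zeros
    S = (((-m) & ((1 << x) - 1)) << (y + 1)) & mask   # -m, then y + 1 zeros
    P = (r & ((1 << y) - 1)) << 1                     # x zeros, r, one zero
    for _ in range(y):
        code = P & 0b11                               # two least significant bits
        if code == 0b01:
            P = (P + A) & mask                        # ignore overflow
        elif code == 0b10:
            P = (P + S) & mask
        P = (P >> 1) | (P & (1 << (n - 1)))           # arithmetic right shift
    P >>= 1                                           # drop the extra low bit
    if P & (1 << (x + y - 1)):                        # reinterpret as signed
        P -= 1 << (x + y)
    return P
```

With `m = 3`, `r = -4`, `x = y = 4`, the intermediate values of `P` match the worked trace in the example below and the result is −12.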
 
==Example==
Find 3 &times; (&minus;4), with '''m''' = 3 and '''r''' = &minus;4, and ''x'' = 4 and ''y'' = 4:
 
* m = 0011, &minus;m = 1101, r = 1100
* A = 0011 0000 0
* S = 1101 0000 0
* P = 0000 1100 0
* Perform the loop four times:
*# P = 0000 110'''0 0'''. The last two bits are 00.
*#* P = 0000 0110 0. Arithmetic right shift.
*# P = 0000 011'''0 0'''. The last two bits are 00.
*#* P = 0000 0011 0. Arithmetic right shift.
*# P = 0000 001'''1 0'''. The last two bits are 10.
*#* P = 1101 0011 0. P = P + S.
*#* P = 1110 1001 1. Arithmetic right shift.
*# P = 1110 100'''1 1'''. The last two bits are 11.
*#* P = 1111 0100 1. Arithmetic right shift.
* The product is 1111 0100, which is &minus;12.
 
The above-mentioned technique is inadequate when the multiplicand is the [[two's complement#Most negative number|most negative number]] that can be represented (e.g. if the multiplicand has 4 bits then this value is &minus;8). This is because an overflow then occurs when computing &minus;'''m''', the negation of the multiplicand, which is needed in order to set S. One possible correction to this problem is to extend A, S, and P by one bit each, while they still represent the same number. That is, while &minus;8 was previously represented in four bits by 1000, it is now represented in 5 bits by 1 1000. The implementation described above is then followed, with modifications in determining the bits of A and S; e.g., the value of '''m''', originally assigned to the first ''x'' bits of A, is now extended to ''x''&nbsp;+&nbsp;1 bits and assigned to the first ''x''&nbsp;+&nbsp;1 bits of A. Below, the improved technique is demonstrated by multiplying &minus;8 by 2 using 4 bits for the multiplicand and the multiplier:
* A = 1 1000 0000 0
* S = 0 1000 0000 0
* P = 0 0000 0010 0
* Perform the loop four times:
*# P = 0 0000 001'''0 0'''. The last two bits are 00.
*#* P = 0 0000 0001 0. Right shift.
*# P = 0 0000 000'''1 0'''. The last two bits are 10.
*#* P = 0 1000 0001 0. P = P + S.
*#* P = 0 0100 0000 1. Right shift.
*# P = 0 0100 000'''0 1'''. The last two bits are 01.
*#* P = 1 1100 0000 1. P = P + A.
*#* P = 1 1110 0000 0. Right shift.
*# P = 1 1110 000'''0 0'''. The last two bits are 00.
*#* P = 1 1111 0000 0. Right shift.
* The product is 11110000 (after discarding the first and the last bit) which is &minus;16.
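The widened variant can be sketched as follows (hypothetical Python mirroring the extra-bit extension described above; the name and bit handling are illustrative assumptions):

```python
def booth_asp_ext(m, r, x, y):
    """A/S/P procedure with the multiplicand sign-extended to x + 1 bits,
    so that -m is representable even when m is the most negative value.
    """
    x += 1                                            # the extra bit described above
    n = x + y + 1
    mask = (1 << n) - 1
    A = ((m & ((1 << x) - 1)) << (y + 1)) & mask      # sign-extended m, then zeros
    S = (((-m) & ((1 << x) - 1)) << (y + 1)) & mask   # -m now fits in x + 1 bits
    P = (r & ((1 << y) - 1)) << 1
    for _ in range(y):
        code = P & 0b11
        if code == 0b01:
            P = (P + A) & mask
        elif code == 0b10:
            P = (P + S) & mask
        P = (P >> 1) | (P & (1 << (n - 1)))           # arithmetic right shift
    P >>= 1                                           # discard the last bit
    P &= (1 << (x + y - 1)) - 1                       # discard the extra first bit
    if P & (1 << (x + y - 2)):                        # reinterpret as signed
        P -= 1 << (x + y - 1)
    return P
```

With `m = -8`, `r = 2`, `x = y = 4`, the registers match the trace above and the result is −16; the plain procedure would have failed here because &minus;'''m''' = 8 is unrepresentable in 4 bits.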
 
==How it works==
Consider a positive multiplier consisting of a block of 1s surrounded by 0s. For example, 00111110. The product is given by:
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ \hline \end{array} = M \times (2^5 + 2^4 + 2^3 + 2^2 + 2^1) = M \times 62 </math>
where M is the multiplicand. The number of operations can be reduced to two by rewriting the same as
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 \\ \hline \end{array} = M \times (2^6 - 2^1) = M \times 62. </math>
 
In fact, it can be shown that any sequence of 1s in a binary number can be broken into the difference of two binary numbers:
<math display="block"> (\ldots 0 \overbrace{1 \ldots 1}^{n} 0 \ldots)_{2} \equiv (\ldots 1 \overbrace{0 \ldots 0}^{n} 0 \ldots)_{2} - (\ldots 0 \overbrace{0 \ldots 1}^{n} 0 \ldots)_2. </math>
 
Hence, the string of ones in the original number can actually be handled by simpler operations: adding the multiplicand, shifting the partial product thus formed by the appropriate number of places, and then finally subtracting the multiplicand. This makes use of the fact that it is not necessary to do anything but shift while dealing with 0s in a binary multiplier, and is similar to using the mathematical property that 99&nbsp;=&nbsp;100&nbsp;&minus;&nbsp;1 while multiplying by 99.
 
This scheme can be extended to any number of blocks of 1s in a multiplier (including the case of a single 1 in a block). Thus,
 
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ \hline \end{array}\ = M \times (2^5 + 2^4 + 2^3 + 2^1) = M \times 58 </math>
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 1 & 0 & 0 & -1 & 1 & -1 & 0 \\ \hline \end{array} = M \times (2^6 - 2^3 + 2^2 - 2^1) = M \times 58. </math>
 
Booth's algorithm follows this scheme by performing an addition when it encounters the first digit of a block of ones (0 1) and a subtraction when it encounters the end of the block (1 0). This works for a negative multiplier as well. When the ones in a multiplier are grouped into long blocks, Booth's algorithm performs fewer additions and subtractions than the normal multiplication algorithm.
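This recoding can be demonstrated directly (an illustrative Python sketch, not part of the article's sources): each position receives a signed digit ''d''<sub>''i''</sub> = ''y''<sub>''i''−1</sub> &minus; ''y''<sub>''i''</sub>, so a 0→1 transition yields &minus;1 and a 1→0 transition yields +1, and the digits always sum back to the original value.

```python
def booth_recode(value, n):
    """Recode the n-bit value into signed digits d_i in {-1, 0, +1},
    with d_i = y_{i-1} - y_i, so that sum(d_i * 2^i) == value.
    """
    prev = 0                      # implicit bit y_{-1} = 0
    digits = []
    for i in range(n + 1):        # one extra position closes a run at the MSB
        yi = (value >> i) & 1
        digits.append(prev - yi)  # 0 -> 1 gives -1, 1 -> 0 gives +1
        prev = yi
    return digits
```

For example, `booth_recode(62, 8)` produces +1 at position 6 and &minus;1 at position 1, i.e. 62 = 2<sup>6</sup> &minus; 2<sup>1</sup>, matching the rewriting shown above.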
 
== Pentium multiplier implementation ==
Intel's [[Pentium]] microprocessor uses a radix-8 variant of Booth's algorithm in its 64-bit hardware multiplier. Because of the way it implements the radix-8 multiplication, it needs a complex auxiliary circuit to perform the special case of multiplication by 3 in a way that minimizes latency, combining the use of [[Carry-lookahead adder|carry-lookahead]], [[carry-select addition|carry-select]], and [[Kogge–Stone addition]].<ref>{{Cite web |first=Ken|last=Shirriff|title=The Pentium contains a complicated circuit to multiply by three |url=https://www.righto.com/2025/03/pentium-multiplier-adder-reverse-engineered.html |access-date=2025-03-03|website=righto.com}}</ref>
 
== See also ==
* [[Binary multiplier]]
* [[Non-adjacent form]]
* [[Redundant binary representation]]
* [[Wallace tree]]
* [[Dadda multiplier]]
 
==References==
{{reflist|refs=
<ref name="Booth_1951">{{cite journal |title=A Signed Binary Multiplication Technique |author-first=Andrew Donald |author-last=Booth |author-link=Andrew Donald Booth |journal=The Quarterly Journal of Mechanics and Applied Mathematics<!-- Q.J. Mech. Appl. Math. --> |volume=IV |issue=2 |date=1951 |orig-year=1950-08-01 |pages=236–240 |url=http://bwrc.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf |access-date=2018-07-16 |url-status=live |archive-url=https://web.archive.org/web/20180716222422/http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf |archive-date=2018-07-16}} Reprinted in {{cite book |title=A Signed Binary Multiplication Technique |author-first=Andrew Donald |author-last=Booth |author-link=Andrew Donald Booth |publisher=[[Oxford University Press]] |pages=100–104}}</ref>
<ref name="Chen_1992">{{cite book |title=Signal processing handbook |author-first=Chi-hau |author-last=Chen |publisher=[[CRC Press]] |date=1992 |isbn=978-0-8247-7956-6 |page=234 |url=https://books.google.com/books?id=10Pi0MRbaOYC&pg=PA234}}</ref>
}}
 
==Further reading==
* {{cite journal |author-last=Collin |author-first=Andrew |title=Andrew Booth's Computers at Birkbeck College |journal=Resurrection |issue=5 |date=Spring 1993 |___location=London |publisher=[[Computer Conservation Society]] |url=http://www.cs.man.ac.uk./CCS/res/res05.htm#e}}
* {{cite book |author-last1=Patterson |author-first1=David Andrew |author-link1=David Patterson (computer scientist) |author-first2=John Leroy |author-last2=Hennessy |author-link2=John L. Hennessy |title=Computer Organization and Design: The Hardware/Software Interface |edition=Second |isbn=1-55860-428-6 |___location=San Francisco, California, USA |publisher=[[Morgan Kaufmann Publishers]] |date=1998 |url=https://archive.org/details/computerorganiz000henn }}
* {{cite book |author-last=Stallings |author-first=William |author-link=William Stallings |title=Computer Organization and Architecture: Designing for performance |edition=Fifth |isbn=0-13-081294-3 |___location=New Jersey |publisher=[[Prentice-Hall, Inc.]] |date=2000 |url=https://archive.org/details/computerorganiza00will }}
* {{cite web |title=Advanced Arithmetic Techniques |author-first=John J. G. |author-last=Savard |date=2018 |orig-year=2006 |work=quadibloc |url=http://www.quadibloc.com/comp/cp0202.htm |access-date=2018-07-16 |url-status=live |archive-url=https://web.archive.org/web/20180703001722/http://www.quadibloc.com/comp/cp0202.htm |archive-date=2018-07-03}}
 
==External links==
* [http://www.geoffknagge.com/fyp/booth.shtml Radix-4 Booth Encoding]
* [https://web.archive.org/web/20171017093721/http://www.russinoff.com/libman/text/node65.html Radix-8 Booth Encoding] in [https://web.archive.org/web/20070927194831/http://www.russinoff.com/libman/ A Formal Theory of RTL and Computer Arithmetic]
* [http://www.ecs.umass.edu/ece/koren/arith/simulator/Booth/ Booth's Algorithm JavaScript Simulator]
* [https://philosophyforprogrammers.blogspot.com/2011/05/booths-multiplication-algorithm-in.html Implementation in Python]
 
[[Category:1950 introductions]]
[[Category:1950 in London]]
[[Category:1950 in science]]
[[Category:Binary arithmetic]]
[[Category:Computer arithmetic algorithms]]
[[Category:Multiplication]]
[[Category:Birkbeck, University of London]]