{{Short description|Algorithm that multiplies two signed binary numbers in two's complement notation}}
{{Use dmy dates|date=April 2022}}
'''Booth's multiplication algorithm''' is a [[multiplication algorithm]] that multiplies two signed [[base 2|binary]] numbers in [[two's complement|two's complement notation]]. The [[algorithm]] was invented by [[Andrew Donald Booth]] in 1950 while doing research on [[crystallography]] at [[Birkbeck, University of London|Birkbeck College]] in [[Bloomsbury]], [[London]].<ref name="Booth_1951"/> Booth's algorithm is of interest in the study of [[computer architecture]].
 
==The algorithm==
Booth's algorithm examines adjacent pairs of [[bit]]s of the ''N''-bit multiplier ''Y'' in signed [[two's complement]] representation, including an implicit bit below the [[least significant bit]], ''y''<sub>−1</sub> = 0. For each bit ''y''<sub>''i''</sub>, for ''i'' running from 0 to ''N'' − 1, the bits ''y''<sub>''i''</sub> and ''y''<sub>''i''−1</sub> are considered. Where these two bits are equal, the product accumulator ''P'' is left unchanged. Where ''y''<sub>''i''</sub> = 0 and ''y''<sub>''i''−1</sub> = 1, the multiplicand times 2<sup>''i''</sup> is added to ''P''; and where ''y''<sub>i</sub> = 1 and ''y''<sub>i−1</sub> = 0, the multiplicand times 2<sup>''i''</sup> is subtracted from ''P''. The final value of ''P''<nowiki> is the signed product.</nowiki>{{Citation needed|reason=No source for the algorithm|date=April 2025}}
 
The representations of the multiplicand and product are not specified; typically, these are both also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will work as well. As stated here, the order of the steps is not determined. Typically, it proceeds from [[Least significant bit|LSB]] to [[Most significant bit|MSB]], starting at ''i'' = 0; the multiplication by 2<sup>''i''</sup> is then typically replaced by incremental shifting of the ''P'' accumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done just on the highest ''N'' bits of ''P''.<ref name="Chen_1992"/> There are many variations and optimizations on these details.
Booth devised the algorithm in a quest to find a fast way to multiply numbers on the desk calculators of his era, as much of his early work involved a great deal of calculation on such devices. In those machines, [[shift]]ing was faster than addition, and for some patterns of [[binary number]]s his algorithm requires fewer additions; it also works directly on [[signed number]]s.
 
The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value.
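For illustration, the per-bit rule above can be sketched in a few lines of Python (an illustrative sketch, not part of Booth's original description; the function name and conventions are arbitrary):

```python
def booth_multiply(m, y, n_bits):
    """Multiply m by the n_bits-bit two's-complement multiplier y,
    examining adjacent bit pairs as described above."""
    product = 0
    prev_bit = 0                 # the implicit bit y_{-1} = 0
    for i in range(n_bits):
        bit = (y >> i) & 1
        if bit == 0 and prev_bit == 1:
            product += m << i    # high end of a block of ones: add m * 2^i
        elif bit == 1 and prev_bit == 0:
            product -= m << i    # low end of a block of ones: subtract m * 2^i
        prev_bit = bit
    return product
```

For example, with the multiplier &minus;4 encoded in four bits as 1100, booth_multiply(3, 0b1100, 4) yields &minus;12: the block of ones runs through the MSB, so only the subtraction at bit 2 takes effect.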
 
==A typical implementation==
[[Image:Calculator walther hg.jpg|thumb|right|A Walther WSR160 [[Odhner Arithmometer|arithmometer]] from 1960. Each turn of the crank handle adds ''(up)'' or subtracts ''(down)'' the operand set to the top register from the value in the accumulator register at the bottom. [[Arithmetic shift|Shifting]] the adder left or right multiplies the effect by ten.]]
Booth's algorithm can be implemented by repeatedly adding (with ordinary unsigned binary addition) one of two predetermined values ''A'' and ''S'' to a product ''P'', then performing a rightward [[arithmetic shift]] on ''P''. Let '''m''' and '''r''' be the [[multiplicand]] and [[Multiplication#Terminology|multiplier]], respectively; and let ''x'' and ''y'' represent the number of bits in '''m''' and '''r'''.
 
# Determine the values of ''A'' and ''S'', and the initial value of ''P''. All of these numbers should have a length equal to (''x''&nbsp;+&nbsp;''y''&nbsp;+&nbsp;1).
## A: Fill the most significant (leftmost) bits with the value of '''m'''. Fill the remaining (''y''&nbsp;+&nbsp;1) bits with zeros.
## S: Fill the most significant bits with the value of (&minus;'''m''') in two's complement notation. Fill the remaining (''y''&nbsp;+&nbsp;1) bits with zeros.
## P: Fill the most significant ''x'' bits with zeros. To the right of this, append the value of '''r'''. Fill the least significant (rightmost) bit with a zero.
# Determine the two least significant (rightmost) bits of ''P''.
## If they are 01, find the value of ''P''&nbsp;+&nbsp;''A''. Ignore any overflow.
## If they are 10, find the value of ''P''&nbsp;+&nbsp;''S''. Ignore any overflow.
## If they are 00, do nothing. Use ''P'' directly in the next step.
## If they are 11, do nothing. Use ''P'' directly in the next step.
# [[arithmetic shift|Arithmetically shift]] the value obtained in the 2nd step by a single place to the right; an arithmetic shift preserves the sign of a two's-complement number. Let ''P'' now equal this new value.
# Repeat steps 2 and 3 until they have been done ''y'' times.
# Drop the least significant (rightmost) bit from ''P''. This is the product of '''m''' and '''r'''<nowiki>.</nowiki>{{Citation needed|reason=No source for the Implementation|date=April 2025}}
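The steps above can be sketched in Python as follows (an illustrative sketch under the stated register layout, not a hardware description; names are arbitrary):

```python
def booth_asp(m, r, x, y):
    """Booth's algorithm using the A, S, P registers described above.
    m and r are signed integers fitting in x and y bits respectively."""
    total = x + y + 1                  # register width
    mask = (1 << total) - 1

    def encode(v, bits):
        return v & ((1 << bits) - 1)   # two's-complement bit pattern

    A = encode(m, x) << (y + 1)        # m in the high x bits, zeros below
    S = encode(-m, x) << (y + 1)       # -m in the high x bits, zeros below
    P = encode(r, y) << 1              # r, with a 0 appended on the right

    for _ in range(y):
        if (P & 0b11) == 0b01:
            P = (P + A) & mask         # ignore overflow
        elif (P & 0b11) == 0b10:
            P = (P + S) & mask
        sign = P & (1 << (total - 1))  # arithmetic shift keeps the sign bit
        P = (P >> 1) | sign
    P >>= 1                            # drop the least significant bit
    if P & (1 << (x + y - 1)):         # reinterpret as two's complement
        P -= 1 << (x + y)
    return P
```

With the example worked below, booth_asp(3, -4, 4, 4) returns &minus;12. Like the procedure it follows, this sketch is inadequate when the multiplicand is the most negative representable value, a limitation discussed later in the article.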
 
==Example==
Find 3 &times; (&minus;4), with '''m''' = 3 and '''r''' = &minus;4, and ''x'' = 4 and ''y'' = 4:
 
* m = 0011, -m = 1101, r = 1100
* A = 0011 0000 0
* S = 1101 0000 0
* P = 0000 1100 0
* Perform the loop four times:
*# P = 0000 110'''0 0'''. The last two bits are 00.
*#* P = 0000 0110 0. Arithmetic right shift.
*# P = 0000 011'''0 0'''. The last two bits are 00.
*#* P = 0000 0011 0. Arithmetic right shift.
*# P = 0000 001'''1 0'''. The last two bits are 10.
*#* P = 1101 0011 0. P = P + S.
*#* P = 1110 1001 1. Arithmetic right shift.
*# P = 1110 100'''1 1'''. The last two bits are 11.
*#* P = 1111 0100 1. Arithmetic right shift.
* The product is 1111 0100, which is &minus;12.
 
The above-mentioned technique is inadequate when the multiplicand is the [[two's complement#Most negative number|most negative number]] that can be represented (e.g. if the multiplicand has 4 bits then this value is &minus;8), because an overflow occurs when computing &minus;m, the negation of the multiplicand, which is needed in order to set S. One possible correction is to extend A, S, and P by one bit each, while they still represent the same number. That is, while &minus;8 was previously represented in four bits by 1000, it is now represented in 5 bits by 1 1000. The implementation described above is then followed, with modifications in determining the bits of A and S; e.g., the value of '''m''', originally assigned to the first ''x'' bits of A, is now extended to ''x''&nbsp;+&nbsp;1 bits and assigned to the first ''x''&nbsp;+&nbsp;1 bits of A. Below, the improved technique is demonstrated by multiplying &minus;8 by 2 using 4 bits for the multiplicand and the multiplier:
* A = 1 1000 0000 0
* S = 0 1000 0000 0
* P = 0 0000 0010 0
* Perform the loop four times:
*# P = 0 0000 001'''0 0'''. The last two bits are 00.
*#* P = 0 0000 0001 0. Right shift.
*# P = 0 0000 000'''1 0'''. The last two bits are 10.
*#* P = 0 1000 0001 0. P = P + S.
*#* P = 0 0100 0000 1. Right shift.
*# P = 0 0100 000'''0 1'''. The last two bits are 01.
*#* P = 1 1100 0000 1. P = P + A.
*#* P = 1 1110 0000 0. Right shift.
*# P = 1 1110 000'''0 0'''. The last two bits are 00.
*#* P = 1 1111 0000 0. Right shift.
* The product is 1111 0000 (after discarding the first and the last bit), which is &minus;16.
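The widened-register correction can be sketched in Python by extending the multiplicand field to ''x''&nbsp;+&nbsp;1 bits (an illustrative sketch; names are arbitrary):

```python
def booth_asp_ext(m, r, x, y):
    """Booth's algorithm with A, S, P widened by one bit, so that -m is
    representable even when m is the most negative x-bit value."""
    xe = x + 1                         # extended multiplicand width
    total = xe + y + 1
    mask = (1 << total) - 1

    def encode(v, bits):
        return v & ((1 << bits) - 1)   # two's-complement bit pattern

    A = encode(m, xe) << (y + 1)
    S = encode(-m, xe) << (y + 1)      # -m now fits in xe bits
    P = encode(r, y) << 1

    for _ in range(y):
        if (P & 0b11) == 0b01:
            P = (P + A) & mask         # ignore overflow
        elif (P & 0b11) == 0b10:
            P = (P + S) & mask
        sign = P & (1 << (total - 1))  # arithmetic right shift
        P = (P >> 1) | sign
    P >>= 1                            # drop the least significant bit
    if P & (1 << (xe + y - 1)):        # reinterpret as two's complement
        P -= 1 << (xe + y)
    return P
```

Reproducing the example above, booth_asp_ext(-8, 2, 4, 4) returns &minus;16, and &minus;8 &times; &minus;8 now yields 64 without overflow.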
 
==How it works==
Consider a positive multiplier consisting of a block of 1s surrounded by 0s. For example, 00111110. The product is given by:
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ \hline \end{array} = M \times (2^5 + 2^4 + 2^3 + 2^2 + 2^1) = M \times 62 </math>
where M is the multiplicand. The number of operations can be reduced to two by rewriting the same as
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 \\ \hline \end{array} = M \times (2^6 - 2^1) = M \times 62. </math>
 
In fact, it can be shown that any sequence of 1s in a binary number can be broken into the difference of two binary numbers:
 
<math display="block"> (\ldots 0 \overbrace{1 \ldots 1}^{n} 0 \ldots)_{2} \equiv (\ldots 1 \overbrace{0 \ldots 0}^{n} 0 \ldots)_{2} - (\ldots 0 \overbrace{0 \ldots 1}^{n} 0 \ldots)_2. </math>
 
Hence, multiplication by a block of ones in the multiplier can be replaced by simpler operations: adding the multiplicand shifted to the position just above the high end of the block, shifting the partial product by the appropriate number of places, and finally subtracting the multiplicand at the low end. This takes advantage of the fact that nothing but a shift is needed while dealing with 0s in a binary multiplier, and is similar to using the mathematical property that 99&nbsp;=&nbsp;100&nbsp;&minus;&nbsp;1 while multiplying by 99.
 
This scheme can be extended to any number of blocks of 1s in a multiplier (including the case of a single 1 in a block). Thus,
 
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ \hline \end{array}\ = M \times (2^5 + 2^4 + 2^3 + 2^1) = M \times 58 </math>
<math display="block"> M \times \begin{array}{|r|r|r|r|r|r|r|r|}\hline 0 & 1 & 0 & 0 & -1 & 1 & -1 & 0 \\ \hline \end{array} = M \times (2^6 - 2^3 + 2^2 - 2^1) = M \times 58. </math>
 
Booth's algorithm follows this scheme by performing a subtraction when it encounters the low end of a block of ones (the bit pair 1&nbsp;0) and an addition when it encounters the high end of the block (the bit pair 0&nbsp;1). This works for a negative multiplier as well. When the ones in a multiplier are grouped into long blocks, Booth's algorithm performs fewer additions and subtractions than the normal multiplication algorithm.
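The recoding that this scheme applies to the multiplier, where digit ''i'' is ''y''<sub>''i''&minus;1</sub> &minus; ''y''<sub>''i''</sub> and takes the values &minus;1, 0, or +1, can be sketched in Python (an illustrative sketch, not taken from the cited sources):

```python
def booth_recode(y, n_bits):
    """Recode an n_bits-bit two's-complement multiplier y into the
    signed digits -1/0/+1 described above, listed MSB first."""
    prev = 0                     # the implicit bit y_{-1} = 0
    digits = []
    for i in range(n_bits):
        bit = (y >> i) & 1
        digits.append(prev - bit)  # digit i = y_{i-1} - y_i
        prev = bit
    return digits[::-1]          # most significant digit first
```

For the multiplier 0011 1010 this returns [0, 1, 0, 0, -1, 1, -1, 0], matching the recoded row above; summing digit&nbsp;&times;&nbsp;2<sup>position</sup> recovers 58.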
 
== Pentium multiplier implementation ==
Intel's [[Pentium]] microprocessor uses a radix-8 variant of Booth's algorithm in its 64-bit hardware multiplier. Because of the way it implements the radix-8 multiplication, it needs a complex auxiliary circuit to perform the special case of multiplication by 3 in a way that minimizes latency, combining the use of [[Carry-lookahead adder|carry-lookahead]], [[carry-select addition|carry-select]], and [[Kogge–Stone addition]].<ref>{{Cite web |first=Ken|last=Shirriff|title=The Pentium contains a complicated circuit to multiply by three |url=https://www.righto.com/2025/03/pentium-multiplier-adder-reverse-engineered.html |access-date=2025-03-03|website=righto.com}}</ref>
 
== See also ==
* [[Binary multiplier]]
* [[Non-adjacent form]]
* [[Redundant binary representation]]
* [[Wallace tree]]
* [[Dadda multiplier]]
 
==References==
{{reflist|refs=
<ref name="Booth_1951">{{cite journal |title=A Signed Binary Multiplication Technique |author-first=Andrew Donald |author-last=Booth |author-link=Andrew Donald Booth |journal=The Quarterly Journal of Mechanics and Applied Mathematics<!-- Q.J. Mech. Appl. Math. --> |volume=IV |issue=2 |date=1951 |orig-year=1950-08-01 |pages=236–240 |url=http://bwrc.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf |access-date=2018-07-16 |url-status=live |archive-url=https://web.archive.org/web/20180716222422/http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf |archive-date=2018-07-16}} Reprinted in {{cite book |title=A Signed Binary Multiplication Technique |author-first=Andrew Donald |author-last=Booth |author-link=Andrew Donald Booth |publisher=[[Oxford University Press]] |pages=100–104}}</ref>
<ref name="Chen_1992">{{cite book |title=Signal processing handbook |author-first=Chi-hau |author-last=Chen |publisher=[[CRC Press]] |date=1992 |isbn=978-0-8247-7956-6 |page=234 |url=https://books.google.com/books?id=10Pi0MRbaOYC&pg=PA234}}</ref>
}}
 
==Further reading==
* {{cite journal |author-last=Collin |author-first=Andrew |title=Andrew Booth's Computers at Birkbeck College |journal=Resurrection |issue=5 |date=Spring 1993 |___location=London |publisher=[[Computer Conservation Society]] |url=http://www.cs.man.ac.uk./CCS/res/res05.htm#e}}
* {{cite book |author-last1=Patterson |author-first1=David Andrew |author-link1=David Patterson (computer scientist) |author-first2=John Leroy |author-last2=Hennessy |author-link2=John L. Hennessy |title=Computer Organization and Design: The Hardware/Software Interface |edition=Second |isbn=1-55860-428-6 |___location=San Francisco, California, USA |publisher=[[Morgan Kaufmann Publishers]] |date=1998 |url=https://archive.org/details/computerorganiz000henn }}
* {{cite book |author-last=Stallings |author-first=William |author-link=William Stallings |title=Computer Organization and Architecture: Designing for performance |edition=Fifth |isbn=0-13-081294-3 |___location=New Jersey |publisher=[[Prentice-Hall, Inc.]] |date=2000 |url=https://archive.org/details/computerorganiza00will }}
* {{cite web |title=Advanced Arithmetic Techniques |author-first=John J. G. |author-last=Savard |date=2018 |orig-year=2006 |work=quadibloc |url=http://www.quadibloc.com/comp/cp0202.htm |access-date=2018-07-16 |url-status=live |archive-url=https://web.archive.org/web/20180703001722/http://www.quadibloc.com/comp/cp0202.htm |archive-date=2018-07-03}}
 
==External links==
* [http://www.geoffknagge.com/fyp/booth.shtml Radix-4 Booth Encoding]
* [https://web.archive.org/web/20171017093721/http://www.russinoff.com/libman/text/node65.html Radix-8 Booth Encoding] in [https://web.archive.org/web/20070927194831/http://www.russinoff.com/libman/ A Formal Theory of RTL and Computer Arithmetic]
* [http://www.ecs.umass.edu/ece/koren/arith/simulator/Booth/ Booth's Algorithm JavaScript Simulator]
* [https://philosophyforprogrammers.blogspot.com/2011/05/booths-multiplication-algorithm-in.html Implementation in Python]
 
[[Category:1950 introductions]]
[[Category:1950 in London]]
[[Category:1950 in science]]
[[Category:Binary arithmetic]]
[[Category:Computer arithmetic algorithms]]
[[Category:Multiplication]]
[[Category:Birkbeck, University of London]]