{{Short description|Calculations where numbers' precision is only limited by computer memory}}
In [[computer science]], '''arbitrary-precision arithmetic''', also called '''bignum arithmetic''', '''multiple precision arithmetic''', or sometimes '''infinite-precision arithmetic''', indicates that [[calculation]]s are performed on numbers whose [[numerical digit|digits]] of [[precision (arithmetic)|precision]] are limited only by the available [[memory (computers)|memory]] of the host system. This contrasts with the faster fixed-precision arithmetic found in most [[arithmetic logic unit]] (ALU) hardware, which typically offers between 8 and 64 [[bit]]s of precision.
{{MOS|article|date=July 2025| [[MOS:FORMULA]] - avoid mixing {{tag|math}} and {{tl|math}} in the same expression}}
{{Floating-point}}
Several modern [[programming language]]s have built-in support for bignums,<ref>{{Cite web|last=dotnet-bot|title=BigInteger Struct (System.Numerics)|url=https://docs.microsoft.com/en-us/dotnet/api/system.numerics.biginteger|access-date=2022-02-22|website=docs.microsoft.com|language=en-us}}</ref><ref>{{Cite web|title=PEP 237 -- Unifying Long Integers and Integers|url=https://peps.python.org/pep-0237/|access-date=2022-05-23|website=Python.org|language=en}}</ref><ref>{{Cite web|title=BigInteger (Java Platform SE 7 )|url=https://docs.oracle.com/javase/7/docs/api/java/math/BigInteger.html|access-date=2022-02-22|website=docs.oracle.com}}</ref><ref>{{Cite web|title=BigInt - JavaScript {{!}} MDN|url=https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt|access-date=2022-02-22|website=developer.mozilla.org|language=en-US}}</ref> and others have libraries available for arbitrary-precision [[Integer_(computer_science)|integer]] and [[floating-point]] math.
Arbitrary precision is used in applications where the speed of [[arithmetic]] is not a limiting factor, or where [[Floating point error mitigation|precise results]] with very large numbers are required. It should not be confused with the [[symbolic computation]] provided by many [[computer algebra system]]s, which represent numbers by expressions such as {{math|''π''·sin(2)}}, and can thus ''represent'' any [[computable number]] with infinite precision.
==Applications==
A common application is [[public-key cryptography]], whose algorithms commonly employ arithmetic with integers having hundreds of digits.
Arbitrary precision arithmetic is also used to compute fundamental [[mathematical constant]]s such as [[pi|π]] to millions or more digits and to analyze the properties of the digit strings.<ref>{{cite journal |author=R. K. Pathria |title=A Statistical Study of Randomness Among the First 10,000 Digits of Pi |journal=Mathematics of Computation |year=1962 |volume=16}}</ref>
Arbitrary-precision arithmetic can also be used to avoid [[arithmetic overflow|overflow]], which is an inherent limitation of fixed-precision arithmetic. Similar to a five-digit [[odometer]]'s display, which changes from 99999 to 00000, a fixed-precision integer may exhibit ''wraparound'' if numbers grow too large to represent at the fixed level of precision. Some processors can instead deal with overflow by ''[[saturation arithmetic|saturation]]'', clamping a result that would not fit to the largest representable value.
In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow. Such guarantees may be based on pragmatic limits: a school attendance program may have a task limit of 4,000 students. A programmer can also design the computation so that intermediate results stay within specified precision boundaries.
Some programming languages such as [[Lisp (programming language)|Lisp]], [[Python (programming language)|Python]], [[Perl]], [[Haskell (programming language)|Haskell]], [[Ruby (programming language)|Ruby]] and [[Raku (programming language)|Raku]] use, or have an option to use, arbitrary-precision numbers for ''all'' integer arithmetic.
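In such languages the promotion to arbitrary precision is automatic. For instance, in Python every <code>int</code> is arbitrary-precision (per PEP 237), so large powers are computed exactly:

```python
# Python's built-in int type is arbitrary-precision (PEP 237):
# integer results stay exact no matter how large they grow.
x = 2 ** 1000
print(x.bit_length())   # 1001 bits: far beyond any hardware register
print(len(str(x)))      # 302 decimal digits
```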
==Implementation issues==
Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in [[Arithmetic logic unit|hardware arithmetic]] whereas the former must be implemented in software. Even if the [[computer]] lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only.
Numbers can be stored in a [[fixed-point arithmetic|fixed-point]] format, or in a [[floating-point]] format as a [[significand]] multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the [[numerator]] and for the [[denominator]]. But even with the [[greatest common divisor]] divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 − 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900.
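Python's standard <code>fractions</code> module illustrates the rational-number approach, including how quickly denominators grow even after common factors are divided out:

```python
from fractions import Fraction

# Exact rational arithmetic: each value is a reduced numerator/denominator
# pair of arbitrary-precision integers.
r = Fraction(1, 99) - Fraction(1, 100)
print(r)                      # 1/9900
print(r + Fraction(1, 101))   # 10001/999900: denominators grow quickly
```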
The size of arbitrary-precision numbers is limited in practice by the total storage available, and computation time.
Numerous [[algorithms]] have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that {{math|''N''}} digits are employed, algorithms have been designed to minimize the asymptotic [[Computational complexity theory|complexity]] for large {{math|''N''}}.
The simplest algorithms are for [[addition]] and [[subtraction]], where one simply adds or subtracts the digits in sequence, carrying as necessary, which yields an {{math|''O''(''N'')}} algorithm (see [[big O notation]]).
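As a sketch of that schoolbook method (the function and its names are illustrative, not from any particular library), the following adds two numbers stored as little-endian digit arrays in linear time:

```python
def add_digits(a, b, base=10):
    # Schoolbook addition of two little-endian digit arrays:
    # add digit pairs in sequence, carrying as necessary -- O(N).
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % base)       # Keep the low-order digit.
        carry = s // base          # Carry into the next position.
    if carry:
        out.append(carry)          # A final carry adds one more digit.
    return out

print(add_digits([9, 9], [1]))     # [0, 0, 1], i.e. 99 + 1 = 100
```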
[[Comparison (computer programming)|Comparison]] is also very simple. Compare the high-order digits (or machine words) until a difference is found. Comparing the rest of the digits or words is not necessary. The worst case is {{math|Θ(''N'')}}, but usually the comparison finishes much sooner.
For [[multiplication]], the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require {{math|Θ(''N''<sup>2</sup>)}} operations, but [[multiplication algorithm]]s that achieve {{math|''O''(''N'' log ''N'' log log ''N'')}} complexity have been devised, such as the [[Schönhage–Strassen algorithm]], based on [[fast Fourier transform]]s. There are also algorithms with slightly worse complexity but with sometimes superior real-world performance for smaller {{math|''N''}}; the [[Karatsuba algorithm|Karatsuba multiplication]] is one such algorithm.
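A minimal sketch of Karatsuba's divide-and-conquer multiplication, splitting each operand at a bit boundary so that three recursive half-size products replace four (illustrative only; Python's own <code>*</code> is already arbitrary-precision):

```python
def karatsuba(x, y):
    # Multiply non-negative integers using three half-size products:
    #   x = hx*2^n + lx,  y = hy*2^n + ly
    #   x*y = z2*2^(2n) + z1*2^n + z0, where
    #   z1 = (lx+hx)*(ly+hy) - z0 - z2 needs only one extra multiply.
    if x < 10 or y < 10:
        return x * y                       # Base case: tiny operands.
    n = max(x.bit_length(), y.bit_length()) // 2
    hx, lx = x >> n, x & ((1 << n) - 1)    # Split x into high/low halves.
    hy, ly = y >> n, y & ((1 << n) - 1)    # Split y the same way.
    z0 = karatsuba(lx, ly)
    z2 = karatsuba(hx, hy)
    z1 = karatsuba(lx + hx, ly + hy) - z0 - z2
    return (z2 << (2 * n)) + (z1 << n) + z0
```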
For [[Division (mathematics)|division]], see [[division algorithm]].
For a list of algorithms along with complexity estimates, see [[computational complexity of mathematical operations]].
For examples in [[x86]] assembly, see the [[#External links|external links]].
==Pre-set precision==
In some languages such as [[REXX]] and [[Object REXX|ooRexx]], the precision of all calculations must be set before doing a calculation. Other languages, such as [[Python (programming language)|Python]] and [[Ruby (programming language)|Ruby]], extend the precision automatically to prevent overflow.
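The two styles can be contrasted with Python's standard <code>decimal</code> module, which follows the REXX-like model of a precision set in advance:

```python
from decimal import Decimal, getcontext

# The working precision is declared up front; every subsequent
# operation rounds its result to that many significant digits.
getcontext().prec = 50
print(Decimal(1) / Decimal(3))   # fifty significant digits of 1/3
```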
==Example==
The calculation of [[factorial]]s can easily produce very large numbers. This is not a problem for their usage in many formulas (such as [[Taylor series]]) because they appear along with other terms, so that, given careful attention to the order of evaluation, intermediate calculation values are not troublesome. If approximate values of factorial numbers are desired, [[Stirling's approximation]] gives good results using floating-point arithmetic.
But if exact values for large factorials are desired, then special software is required, as in the [[pseudocode]] that follows, which implements the classic algorithm to calculate 1, 1×2, 1×2×3, 1×2×3×4, etc., the successive factorial numbers.
 constants:
     Limit = 1000;                    %Sufficient digits.
     Base = 10;                       %The base of the simulated arithmetic.
     FactorialLimit = 365;            %Target number to solve, 365!
     Constant tdigit[0:9] of character = ["0","1","2","3","4","5","6","7","8","9"];
 variables:
     Array digit[1:Limit] of integer; %The big number.
     Integer carry,d;                 %Assistants during multiplication.
     Integer last,i;                  %Indices to the big number's digits.
     Array text[1:Limit] of character;%Scratchpad for the output.
 
 digit[*]:=0;                         %Clear the whole array.
 digit[1]:=1;                         %The big number starts with 1,
 last:=1;                             %Its highest-order digit is number one.
 '''for''' n:=1 '''to''' FactorialLimit '''do'''    %Step through producing 1!, 2!, 3!, 4!, etc.
     carry:=0;                        %Start a multiply by n.
     '''for''' i:=1 '''to''' last '''do'''          %Step along every digit.
         d:=digit[i]*n + carry;       %The classic multiply.
         digit[i]:=d '''mod''' Base;  %The low-order digit of the result.
         carry:=d '''div''' Base;     %The carry to the next digit.
     '''while''' carry > 0 '''do'''   %Store the remaining carry in the big number.
         '''if''' last >= Limit '''then''' croak("Overflow!");  %If possible!
         last:=last + 1;              %One more digit.
         digit[last]:=carry '''mod''' Base;
         carry:=carry '''div''' Base;
     text[*]:=" ";                    %Now prepare the output, space-filled.
     '''for''' i:=1 '''to''' last '''do'''          %Translate the digits to text.
         text[Limit - i + 1]:=tdigit[digit[i]];     %Reversing the order.
     '''print''' text[Limit - last + 1:Limit], " = ", n, "!"
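The pseudocode transcribes almost line for line into executable form; a sketch in Python, keeping the same digit-array representation (Python's own integers are, of course, already arbitrary-precision):

```python
def big_factorials(limit):
    # The big number is a little-endian array of base-10 digits,
    # multiplied in place by each successive n.
    digits = [1]                       # The big number starts as 1.
    for n in range(1, limit + 1):
        carry = 0
        for i in range(len(digits)):   # The classic single-digit multiply.
            d = digits[i] * n + carry
            digits[i] = d % 10         # Keep the low-order digit.
            carry = d // 10            # Carry to the next digit up.
        while carry:                   # Append any remaining carry digits.
            digits.append(carry % 10)
            carry //= 10
        yield n, "".join(str(d) for d in reversed(digits))

for n, f in big_factorials(5):
    print(f, "=", str(n) + "!")        # 1 = 1!, 2 = 2!, ..., 120 = 5!
```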
With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base.
The second most important decision is in the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable ''d'' must be able to hold the result of a single-digit multiply plus the carry from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate as it allows up to 32767. However, this example cheats, in that the value of ''n'' is not itself limited to a single digit; the consequence is that the method fails once ''n''×9 plus the carry no longer fits in the scratchpad. In a more general implementation, ''n'' would also use a multi-digit representation.
There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of array ''digit'', but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123") which may not be appreciated, so this implementation instead builds the digits into a space-filled text variable in reversed order and then prints the text starting from the first used position.
{| class="wikitable" style="text-align: right; white-space: nowrap; line-height: 80%"
! colspan=2 style="text-align: center" | Factorial numbers
! colspan=2 style="text-align: center" | Reach of computer integers
|-
| 1 = || 1!
|-
| 2 = || 2!
|-
| 6 = || 3!
|-
| 24 = || 4!
|-
| 120 = || 5!
| 8-bit || style="text-align: left" | 255
|-
| 720 = || 6!
|-
| 5040 = || 7!
|-
| 40320 = || 8!
| 16-bit || style="text-align: left" | 65535
|-
| 3 62880 = || 9!
|-
| 36 28800 = || 10!
|-
| 399 16800 = || 11!
|-
| 4790 01600 = || 12!
| 32-bit || style="text-align: left" | 42949 67295
|-
| 62270 20800 = || 13!
|-
| 8 71782 91200 = || 14!
|-
| 130 76743 68000 = || 15!
|-
| 2092 27898 88000 = || 16!
|-
| 35568 74280 96000 = || 17!
|-
| 6 40237 37057 28000 = || 18!
|-
| 121 64510 04088 32000 = || 19!
|-
| 2432 90200 81766 40000 = || 20!
| 64-bit || style="text-align: left" | 18446 74407 37095 51615
|-
| 51090 94217 17094 40000 = || 21!
|-
| 11 24000 72777 76076 80000 = || 22!
|-
| 258 52016 73888 49766 40000 = || 23!
|-
| 6204 48401 73323 94393 60000 = || 24!
|-
| 1 55112 10043 33098 59840 00000 = || 25!
|-
| 40 32914 61126 60563 55840 00000 = || 26!
|-
| 1088 88694 50418 35216 07680 00000 = || 27!
|-
| 30488 83446 11713 86050 15040 00000 = || 28!
|-
| 8 84176 19937 39701 95454 36160 00000 = || 29!
|-
| 265 25285 98121 91058 63630 84800 00000 = || 30!
|-
| 8222 83865 41779 22817 72556 28800 00000 = || 31!
|-
| 2 63130 83693 36935 30167 21801 21600 00000 = || 32!
|-
| 86 83317 61881 18864 95518 19440 12800 00000 = || 33!
|-
| 2952 32799 03960 41408 47618 60964 35200 00000 = || 34!
| 128-bit || style="text-align: left" | 3402 82366 92093 84634 63374 60743 17682 11455
|-
| 1 03331 47966 38614 49296 66651 33752 32000 00000 = || 35!
|}
For a single-digit multiply the working variables must be able to hold the value (base − 1)<sup>2</sup> + carry, where the maximum value of the carry is (base − 1). Choosing a larger base therefore packs more digits into each word of storage but requires correspondingly wider working variables for the partial products.
==History==
IBM's first business computer, the [[IBM 702]] (a [[vacuum tube|vacuum-tube]] machine) of the mid-1950s, implemented integer arithmetic entirely in hardware on digit strings of any length from one to 511 digits.
An early widespread implementation was available via the [[IBM 1620]] of 1959–1970. The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (that used [[lookup table]]s) to perform integer arithmetic on digit strings of a length that could be from two to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only. The largest memory supplied offered 60&nbsp;000 digits; however, [[Fortran]] compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory.
==Software libraries==
== See also ==
* [[Fürer's algorithm]]
* [[Karatsuba algorithm]]
* [[Mixed-precision arithmetic]]
* [[Schönhage–Strassen algorithm]]
* [[Toom–Cook multiplication]]
* [[Little Endian Base 128]]
== References ==
{{reflist}}
* {{Cite book|last=Knuth |first=Donald |authorlink=Donald Knuth |title=Seminumerical Algorithms |series=[[The Art of Computer Programming]] |volume=2 |year=2008 |edition=3rd |publisher=Addison-Wesley |isbn=0-201-89684-2}}, Section 4.3.1: The Classical Algorithms
== Further reading ==
*{{cite book|author=Derick Wood|year=1984|title=Paradigms and Programming with Pascal|publisher=Computer Science Press|isbn=0-914894-45-5}}
*{{cite book|author=Richard Crandall, Carl Pomerance|year=2005|title=Prime Numbers|publisher=Springer-Verlag|isbn=9780387252827}}, Chapter 9: Fast Algorithms for Large-Integer Arithmetic
== External links ==
* [https://web.archive.org/web/20101019002107/http://oopweb.com/Assembly/Documents/ArtOfAssembly/Volume/Chapter_9/CH09-3.html#HEADING3-1 Chapter 9.3 of ''The Art of Assembly''] by [[Randall Hyde]] discusses multiprecision arithmetic, with examples in [[x86]]-assembly.
* Rosetta Code task [http://rosettacode.org/wiki/Arbitrary-precision_integers_%28included%29 Arbitrary-precision integers] Case studies in the style in which over 95 programming languages handle the task.
{{data types}}