{{Short description|Calculations where numbers' precision is only limited by computer memory}}
{{More citations needed|date=July 2007}}
{{Floating-point}}
In [[computer science]], '''arbitrary-precision arithmetic''', also called '''bignum arithmetic''', '''multiple-precision arithmetic''', or sometimes '''infinite-precision arithmetic''', indicates that [[calculation]]s are performed on numbers whose [[numerical digit|digits]] of [[precision (arithmetic)|precision]] are potentially limited only by the available [[memory (computers)|memory]] of the host system. This contrasts with the faster [[fixed-precision arithmetic]] found in most [[arithmetic logic unit]] (ALU) hardware, which typically offers between 8 and 64 [[bit]]s of precision.
 
Several modern [[programming language]]s have built-in support for bignums,<ref>{{Cite web|last=dotnet-bot|title=BigInteger Struct (System.Numerics)|url=https://docs.microsoft.com/en-us/dotnet/api/system.numerics.biginteger|access-date=2022-02-22|website=docs.microsoft.com|language=en-us}}</ref><ref>{{Cite web|title=PEP 237 -- Unifying Long Integers and Integers|url=https://peps.python.org/pep-0237/|access-date=2022-05-23|website=Python.org|language=en}}</ref><ref>{{Cite web|title=BigInteger (Java Platform SE 7 )|url=https://docs.oracle.com/javase/7/docs/api/java/math/BigInteger.html|access-date=2022-02-22|website=docs.oracle.com}}</ref><ref>{{Cite web|title=BigInt - JavaScript {{!}} MDN|url=https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt|access-date=2022-02-22|website=developer.mozilla.org|language=en-US}}</ref> and others have libraries available for arbitrary-precision [[Integer_(computer_science)|integer]] and [[floating-point]] math. Rather than storing values as a fixed number of bits related to the size of the [[processor register]], these implementations typically use variable-length [[array data structure|arrays]] of digits.
 
Arbitrary precision is used in applications where the speed of [[arithmetic]] is not a limiting factor, or where [[Floating point error mitigation|precise results]] with very large numbers are required. It should not be confused with the [[symbolic computation]] provided by many [[computer algebra system]]s, which represent numbers by expressions such as {{math|''π''·sin(2)}}, and can thus ''represent'' any [[computable number]] with infinite precision.
 
==Applications==
 
A common application is [[public-key cryptography]], whose algorithms commonly employ arithmetic with integers having hundreds of digits.<ref>{{cite web |url=https://arstechnica.com/news.ars/post/20070523-researchers-307-digit-key-crack-endangers-1024-bit-rsa.html |title=Researchers: 307-digit key crack endangers 1024-bit RSA |author=Jacqui Cheng |date=May 23, 2007}}</ref><ref>{{cite web|url=http://www.rsa.com/rsalabs/node.asp?id%3D2218 |title=RSA Laboratories - 3.1.5 How large a key should be used in the RSA cryptosystem? |access-date=2012-03-31 |url-status=dead |archive-url=https://web.archive.org/web/20120401144624/http://www.rsa.com/rsalabs/node.asp?id=2218 |archive-date=2012-04-01 }} recommends important RSA keys be 2048 bits (roughly 600 digits).</ref> Another is in situations where artificial limits and [[arithmetic overflow|overflows]] would be inappropriate. It is also useful for checking the results of fixed-precision calculations, and for determining optimal or near-optimal values for coefficients needed in formulae, for example the <math display=inline>\sqrt{\frac{1}{3}}</math> that appears in [[Gaussian integration]].<ref>{{cite report|url=https://tel.archives-ouvertes.fr/tel-00477243/en |title=Intégration numérique avec erreur bornée en précision arbitraire. Modélisation et simulation | author=Laurent Fousse | publisher=Université Henri Poincaré - Nancy I | language=fr | date=2006 }}</ref>
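For illustration only, the core operation of such algorithms, modular exponentiation of large integers, can be sketched with Python's built-in arbitrary-precision integers; the primes and exponents below are textbook toy values, orders of magnitude too small for real cryptography:

<syntaxhighlight lang="python">
# Sketch only: toy RSA-style numbers, far too small for real use.
# Python's int is arbitrary-precision, so 600-digit operands work the same way.
p, q = 61, 53                    # two (unrealistically small) primes
n = p * q                        # public modulus, 3233
e, d = 17, 2753                  # public and private exponents
message = 65
ciphertext = pow(message, e, n)  # modular exponentiation: 65**17 mod 3233
assert pow(ciphertext, d, n) == message
print(ciphertext)                # 2790
</syntaxhighlight>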
 
Arbitrary precision arithmetic is also used to compute fundamental [[mathematical constant]]s such as [[pi|π]] to millions or more digits and to analyze the properties of the digit strings<ref>{{cite journal |author=R. K. Pathria |author-link=Raj Pathria |title=A Statistical Study of the Randomness Among the First 10,000 Digits of Pi |year=1962 |journal=Mathematics of Computation |volume=16 |issue=78 |pages=188–197 |url=https://www.ams.org/journals/mcom/1962-16-078/S0025-5718-1962-0144443-7/ |access-date=2014-01-10 |doi=10.1090/s0025-5718-1962-0144443-7 |doi-access=free }} A quote example from this article: "Such an extreme pattern is dangerous even if diluted by one of its neighbouring blocks"; this was the occurrence of the sequence 77 twenty-eight times in one block of a thousand digits.</ref> or more generally to investigate the precise behaviour of functions such as the [[Riemann zeta function]] where certain questions are difficult to explore via analytical methods. Another example is in rendering [[fractal]] images with an extremely high magnification, such as those found in the [[Mandelbrot set]].
 
Arbitrary-precision arithmetic can also be used to avoid [[arithmetic overflow|overflow]], which is an inherent limitation of fixed-precision arithmetic. Similar to an automobile's [[odometer]] display, which may change from 99999 to 00000, a fixed-precision integer may exhibit ''[[Integer overflow|wraparound]]'' if numbers grow too large to represent at the fixed level of precision. Some processors can instead deal with overflow by ''[[saturation arithmetic|saturation]]'', which means that if a result would be unrepresentable, it is replaced with the nearest representable value. (With 16-bit unsigned saturation, adding any positive amount to 65535 would yield 65535.) Some processors can generate an [[exception handling|exception]] if an arithmetic result exceeds the available precision. Where necessary, the exception can be caught and recovered from; for instance, the operation could be restarted in software using arbitrary-precision arithmetic.
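The difference between wraparound and saturation can be sketched in a few lines of Python; since Python's own integers do not overflow, both behaviours are simulated explicitly:

<syntaxhighlight lang="python">
def wraparound_add(a, b, bits=16):
    """Unsigned fixed-precision addition that wraps, like an odometer."""
    return (a + b) % (1 << bits)

def saturating_add(a, b, bits=16):
    """Unsigned fixed-precision addition that clamps at the largest value."""
    return min(a + b, (1 << bits) - 1)

print(wraparound_add(65535, 1))  # 0     -- wraps around like 99999 -> 00000
print(saturating_add(65535, 1))  # 65535 -- sticks at the maximum
</syntaxhighlight>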
 
In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow. Such guarantees may be based on pragmatic limits: a school attendance program may have a task limit of 4,000 students. A programmer may design the computation so that intermediate results stay within specified precision boundaries.
 
Some programming languages such as [[Lisp (programming language)|Lisp]], [[Python (programming language)|Python]], [[Perl]], [[Haskell (programming language)|Haskell]], [[Ruby (programming language)|Ruby]] and [[Raku (programming language)|Raku]] use, or have an option to use, arbitrary-precision numbers for ''all'' integer arithmetic. This enables integers to grow to any size limited only by the available memory of the system. Although this reduces performance, it eliminates the possibility of incorrect results (or exceptions) due to simple overflow. It also makes it possible to almost guarantee that arithmetic results will be the same on all machines, regardless of any particular machine's [[Word (data type)|word size]]. The exclusive use of arbitrary-precision numbers in a programming language also simplifies the language, because ''a number is a number'' and there is no need for multiple types to represent different levels of precision.
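In Python, for example, no declaration or library is needed; integer results simply keep growing:

<syntaxhighlight lang="python">
x = 2 ** 200           # far beyond any 64-bit register
print(len(str(x)))     # 61: the result has 61 decimal digits
print(x * x > x)       # True: no overflow, results just grow
</syntaxhighlight>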
 
==Implementation issues==
 
Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in [[Arithmetic logic unit|hardware arithmetic]] whereas the former must be implemented in software. Even if the [[computer]] lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only and definitely not N words. There are exceptions, as certain ''[[variable word length machine|variable word length]]'' machines of the 1950s and 1960s, notably the [[IBM 1620]], [[IBM 1401]] and the [[Honeywell 200]] series, could manipulate numbers bound only by available storage, with an extra bit that delimited the value.
 
Numbers can be stored in a [[fixed-point arithmetic|fixed-point]] format, or in a [[floating-point]] format as a [[significand]] multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the [[numerator]] and for the [[denominator]]. But even with the [[greatest common divisor]] divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 &minus; 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900.
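Python's standard <code>fractions</code> module behaves exactly this way, keeping an arbitrary-precision numerator and denominator with the greatest common divisor divided out, and it reproduces the example above:

<syntaxhighlight lang="python">
from fractions import Fraction

a = Fraction(1, 99) - Fraction(1, 100)
print(a)                      # 1/9900
print(a + Fraction(1, 101))   # 10001/999900
</syntaxhighlight>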
 
The size of arbitrary-precision numbers is limited in practice by the total storage available, the variables used to index the digit strings, and computation time. A 32-bit operating system may limit available storage to less than 4&nbsp;GB. A programming language using 32-bit integers can only index 4&nbsp;GB. If multiplication is done with an {{math|[[Big O notation|O]](''N''<sup>2</sup>)}} algorithm, it would take on [[Order of approximation|the order of]] {{math|10<sup>12</sup>}} steps to multiply two one-million-word numbers.
 
Numerous [[algorithms]] have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that {{math|''N''}} digits are employed, algorithms have been designed to minimize the asymptotic [[Computational complexity theory|complexity]] for large {{math|''N''}}.
 
The simplest algorithms are for [[addition]] and [[subtraction]], where one simply adds or subtracts the digits in sequence, carrying as necessary, which yields an {{math|O(''N'')}} algorithm (see [[big O notation]]).
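As an illustrative sketch (digits stored least-significant first, matching the factorial example later in this article), addition with carry looks like this in Python:

<syntaxhighlight lang="python">
def add_digits(a, b, base=10):
    """Add two non-negative numbers held as little-endian digit lists."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(d % base)   # one digit of the sum
        carry = d // base         # at most 1 for addition
    if carry:
        result.append(carry)
    return result

print(add_digits([9, 9, 9], [1]))   # [0, 0, 0, 1], i.e. 999 + 1 = 1000
</syntaxhighlight>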
 
[[Comparison (computer programming)|Comparison]] is also very simple. Compare the high-order digits (or machine words) until a difference is found. Comparing the rest of the digits/words is not necessary. The worst case is {{math|Θ(''N'')}}, but it may complete much faster with operands of similar magnitude.
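A comparison routine for the same illustrative digit-list representation (assuming no leading zero digits are stored) scans from the high-order end:

<syntaxhighlight lang="python">
def compare_digits(a, b):
    """Return -1, 0 or 1 as a < b, a == b or a > b."""
    if len(a) != len(b):                        # more digits, larger number
        return -1 if len(a) < len(b) else 1
    for x, y in zip(reversed(a), reversed(b)):  # high-order digits first
        if x != y:
            return -1 if x < y else 1           # first difference decides
    return 0

print(compare_digits([3, 2, 1], [4, 2, 1]))     # -1, since 123 < 124
</syntaxhighlight>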
 
For [[multiplication]], the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require {{math|Θ(''N''<sup>2</sup>)}} operations, but [[multiplication algorithm]]s that achieve {{math|O(''N''&nbsp;log(''N'')&nbsp;log(log(''N'')))}} complexity have been devised, such as the [[Schönhage–Strassen algorithm]], based on [[fast Fourier transform]]s, and there are also algorithms with slightly worse complexity but with sometimes superior real-world performance for smaller {{math|''N''}}. The [[Karatsuba algorithm|Karatsuba]] multiplication is such an algorithm.
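Continuing the illustrative digit-list sketch, the schoolbook method makes the quadratic cost visible as two nested loops:

<syntaxhighlight lang="python">
def multiply_digits(a, b, base=10):
    """Schoolbook multiplication of little-endian digit lists."""
    result = [0] * (len(a) + len(b))
    for i, x in enumerate(a):                # N outer iterations...
        carry = 0
        for j, y in enumerate(b):            # ...times N inner iterations
            d = result[i + j] + x * y + carry
            result[i + j] = d % base
            carry = d // base
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()                         # trim leading zeros
    return result

print(multiply_digits([2, 1], [4, 3]))       # [8, 0, 4], i.e. 12 * 34 = 408
</syntaxhighlight>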
 
For [[Division (mathematics)|division]], see [[division algorithm]].
 
For a list of algorithms along with complexity estimates, see [[computational complexity of mathematical operations]].
 
For examples in [[x86]] assembly, see [[#External links|external links]].
 
==Pre-set precision==
In some languages such as [[REXX]] and [[Object REXX|ooRexx]], the precision of all calculations must be set before doing a calculation. Other languages, such as [[Python (programming language)|Python]] and [[Ruby (programming language)|Ruby]], extend the precision automatically to prevent overflow.
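Python's standard <code>decimal</code> module illustrates the pre-set style: the working precision is fixed before the calculation, REXX-fashion, even though Python's plain integers extend automatically:

<syntaxhighlight lang="python">
from decimal import Decimal, getcontext

getcontext().prec = 50           # precision must be chosen in advance
print(Decimal(1) / Decimal(7))   # 0.14285714285714285714285714285714285714285714285714
</syntaxhighlight>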
 
==Example==
The calculation of [[factorial]]s can easily produce very large numbers. This is not a problem for their usage in many formulas (such as [[Taylor series]]) because they appear along with other terms, so that—given careful attention to the order of evaluation—intermediate calculation values are not troublesome. If approximate values of factorial numbers are desired, [[Stirling's approximation]] gives good results using floating-point arithmetic. The largest representable value for a fixed-size integer variable may be exceeded even for relatively small arguments as shown in the table below. Even floating-point numbers are soon outranged, so it may help to recast the calculations in terms of the [[logarithm]] of the number.
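For instance, Python's <code>math.lgamma</code> returns the natural logarithm of the factorial in ordinary double precision, long after the factorial itself has outgrown every fixed-size type:

<syntaxhighlight lang="python">
import math

# ln(170!) fits comfortably in a double even though 171! overflows one.
log_fact = math.lgamma(171)              # lgamma(n + 1) = ln(n!)
print(log_fact)                          # about 706.57
print(int(log_fact / math.log(10)) + 1)  # 307: decimal digits in 170!
</syntaxhighlight>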
 
But if exact values for large factorials are desired, then special software is required, as in the [[pseudocode]] that follows, which implements the classic algorithm to calculate 1, 1×2, 1×2×3, 1×2×3×4, etc., the successive factorial numbers.
 
 constants:
     Limit = 1000                          ''% Sufficient digits.''
     Base = 10                             ''% The base of the simulated arithmetic.''
     FactorialLimit = 365                  ''% Target number to solve, 365!''
     tdigit: Array[0:9] of character = ["0","1","2","3","4","5","6","7","8","9"]
 
 variables:
     digit: Array[1:Limit] of 0..9         ''% The big number.''
     carry, d: Integer                     ''% Assistants during multiplication.''
     last: Integer                         ''% Index into the big number's digits.''
     text: Array[1:Limit] of character     ''% Scratchpad for the output.''
 
 digit[*] := 0                             ''% Clear the whole array.''
 last := 1                                 ''% The big number starts as a single digit,''
 digit[1] := 1                             ''% its only digit is 1.''
 
 '''for''' n := 1 '''to''' FactorialLimit: ''% Step through producing 1!, 2!, 3!, 4!, etc.''
     carry := 0                            ''% Start a multiply by n.''
     '''for''' i := 1 '''to''' last:       ''% Step along every digit.''
         d := digit[i] * n + carry         ''% Multiply a single digit.''
         digit[i] := d '''mod''' Base      ''% Keep the low-order digit of the result.''
         carry := d '''div''' Base         ''% Carry over to the next digit.''
 
     '''while''' carry > 0:                ''% Store the remaining carry in the big number.''
         '''if''' last >= Limit: error("overflow")
         last := last + 1                  ''% One more digit.''
         digit[last] := carry '''mod''' Base
         carry := carry '''div''' Base     ''% Strip the last digit off the carry.''
 
     text[*] := " "                        ''% Now prepare the output.''
     '''for''' i := 1 '''to''' last:       ''% Translate from binary to text.''
         text[Limit - i + 1] := tdigit[digit[i]]  ''% Reversing the order.''
 
     '''print''' text[Limit - last + 1:Limit], " = ", n, "!"
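The same algorithm transcribes almost line for line into Python; this sketch keeps the digit array as a plain list and, unlike the pseudocode, simply lets it grow instead of fixing <code>Limit</code> in advance:

<syntaxhighlight lang="python">
def print_factorials(factorial_limit, base=10):
    """Digit-array factorial computation, after the pseudocode above."""
    digit = [1]                          # the big number, low-order digit first
    for n in range(1, factorial_limit + 1):
        carry = 0
        for i in range(len(digit)):      # step along every digit
            d = digit[i] * n + carry
            digit[i] = d % base          # keep the low-order digit
            carry = d // base            # carry over to the next digit
        while carry > 0:                 # append any remaining carry digits
            digit.append(carry % base)
            carry //= base
        print("".join(str(d) for d in reversed(digit)), "=", str(n) + "!")

print_factorials(25)                     # 1! through 25!; 365 works equally well
</syntaxhighlight>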
 
With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base.
 
The second most important decision is in the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable {{mvar|d}} must be able to hold the result of a single-digit multiply ''plus the carry'' from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate as it allows up to 32767. However, this example cheats, in that the value of {{mvar|n}} is not itself limited to a single digit. This has the consequence that the method will fail for {{math|''n'' > 3200}} or so. In a more general implementation, {{mvar|n}} would also use a multi-digit representation. A second consequence of the shortcut is that after the multi-digit multiply has been completed, the last value of ''carry'' may need to be carried into multiple higher-order digits, not just one.
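The quoted failure point can be checked directly: in base ten the carry stays below {{mvar|n}}, so the largest intermediate value is roughly 10{{mvar|n}}, and a sixteen-bit signed {{mvar|d}} runs out shortly after ''n'' = 3200:

<syntaxhighlight lang="python">
# With base 10, each step computes d = digit[i]*n + carry, where the
# digit is at most 9 and the carry stays below n, so d < 10*n.
# A 16-bit signed d (maximum 32767) therefore fails once 10*n exceeds it:
print(32767 // 10)   # 3276 -- consistent with "n > 3200 or so" above
</syntaxhighlight>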
 
There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of array ''digit'', but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123") which may not be appreciated, so this implementation builds the representation in a space-padded text variable and then prints that. The first few results (with spacing every fifth digit and annotation added here) are:
 
{| class="wikitable" style="text-align: right; white-space: nowrap; line-height: 80%;"
! colspan=2 style="text-align: center" | Factorial numbers
! colspan=2 style="text-align: center" | Reach of computer integers
|-
| 1 = || 1!
|-
| 2 = || 2!
|-
| 6 = || 3!
|-
| 24 = || 4!
|-
| 120 = || 5!
| 8-bit || style="text-align: left" | 255
|-
| 720 = || 6!
|-
| 5040 = || 7!
|-
| 40320 = || 8!
| 16-bit || style="text-align: left" | 65535
|-
| 3 62880 = || 9!
|-
| 36 28800 = || 10!
|-
| 399 16800 = || 11!
|-
| 4790 01600 = || 12!
| 32-bit || style="text-align: left" | 42949 67295
|-
| 62270 20800 = || 13!
|-
| 8 71782 91200 = || 14!
|-
| 130 76743 68000 = || 15!
|-
| 2092 27898 88000 = || 16!
|-
| 35568 74280 96000 = || 17!
|-
| 6 40237 37057 28000 = || 18!
|-
| 121 64510 04088 32000 = || 19!
|-
| 2432 90200 81766 40000 = || 20!
| 64-bit || style="text-align: left" | 18446 74407 37095 51615
|-
| 51090 94217 17094 40000 = || 21!
|-
| 11 24000 72777 76076 80000 = || 22!
|-
| 258 52016 73888 49766 40000 = || 23!
|-
| 6204 48401 73323 94393 60000 = || 24!
|-
| 1 55112 10043 33098 59840 00000 = || 25!
|-
| 40 32914 61126 60563 55840 00000 = || 26!
|-
| 1088 88694 50418 35216 07680 00000 = || 27!
|-
| 30488 83446 11713 86050 15040 00000 = || 28!
|-
| 8 84176 19937 39701 95454 36160 00000 = || 29!
|-
| 265 25285 98121 91058 63630 84800 00000 = || 30!
|-
| 8222 83865 41779 22817 72556 28800 00000 = || 31!
|-
| 2 63130 83693 36935 30167 21801 21600 00000 = || 32!
|-
| 86 83317 61881 18864 95518 19440 12800 00000 = || 33!
|-
| 2952 32799 03960 41408 47618 60964 35200 00000 = || 34!
| 128-bit || style="text-align: left" | 3402 82366 92093 84634 63374 60743 17682 11455
|-
| 1 03331 47966 38614 49296 66651 33752 32000 00000 = || 35!
|}
 
This implementation could make more effective use of the computer's built-in arithmetic. A simple escalation would be to use base 100 (with corresponding changes to the translation process for output), or, with sufficiently wide computer variables (such as 32-bit integers) we could use larger bases, such as 10,000. Working in a power-of-2 base closer to the computer's built-in integer operations offers advantages, although conversion to a decimal base for output becomes more difficult. On typical modern computers, additions and multiplications take constant time independent of the values of the operands (so long as the operands fit in single machine words), so there are large gains in packing as much of a bignumber as possible into each element of the digit array. The computer may also offer facilities for splitting a product into a digit and carry without requiring the two operations of ''mod'' and ''div'' as in the example, and nearly all arithmetic units provide a ''[[carry flag]]'' which can be exploited in multiple-precision addition and subtraction. This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run much faster than the result of the compilation of a high-level language, which does not provide direct access to such facilities but instead maps the high-level statements to its model of the target machine using an optimizing compiler.
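As a sketch of the packing idea, a base of 10{{sup|9}} stores nine decimal digits per array element ("limb") while a limb-times-small-integer product still fits in 64-bit arithmetic; Python's own integers cannot overflow, so the 64-bit bound is only noted in a comment here:

<syntaxhighlight lang="python">
BASE = 10 ** 9       # nine decimal digits per limb

def multiply_small(digits, n):
    """Multiply a little-endian base-10**9 limb list by a small integer n."""
    result, carry = [], 0
    for limb in digits:
        d = limb * n + carry     # stays below 2**63 for n up to ~9 * 10**9
        result.append(d % BASE)
        carry = d // BASE
    while carry:
        result.append(carry % BASE)
        carry //= BASE
    return result

x = [1]
for n in range(1, 26):
    x = multiply_small(x, n)     # x now holds n! in base-10**9 limbs
print(str(x[-1]) + "".join(f"{limb:09d}" for limb in reversed(x[:-1])))
# 15511210043330985984000000, i.e. 25!, matching the table above
</syntaxhighlight>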
 
For a single-digit multiply the working variables must be able to hold the value (base&nbsp;&minus;&nbsp;1){{sup|2}} + carry, where the maximum value of the carry is (base&nbsp;&minus;&nbsp;1). Similarly, the variables used to index the digit array are themselves limited in width. A simple way to extend the indices would be to deal with the bignumber's digits in blocks of some convenient size so that the addressing would be via (block ''i'', digit ''j'') where ''i'' and ''j'' would be small integers, or, one could escalate to employing bignumber techniques for the indexing variables. Ultimately, machine storage capacity and execution time impose limits on the problem size.
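This bound explains the common choice of 32-bit limbs with a 64-bit working variable: even the worst case (base&nbsp;&minus;&nbsp;1){{sup|2}} + carry fits, as a quick check confirms:

<syntaxhighlight lang="python">
base = 2 ** 32
worst = (base - 1) ** 2 + (base - 1)   # largest digit product plus carry
print(worst < 2 ** 64)                 # True: a 64-bit accumulator suffices
</syntaxhighlight>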
 
==History==
IBM's first business computer, the [[IBM 702]] (a [[vacuum tube|vacuum-tube]] machine) of the mid-1950s, implemented integer arithmetic ''entirely in hardware'' on digit strings of any length from 1 to 511 digits. The earliest widespread software implementation of arbitrary-precision arithmetic was probably that in [[Maclisp]]. Later, around 1980, the [[operating system]]s [[VAX/VMS]] and [[VM/CMS]] offered bignum facilities as a collection of [[literal string|string]] [[subprogram|functions]] in the one case and in the languages [[EXEC 2]] and [[REXX]] in the other.
 
An early widespread implementation was available via the [[IBM 1620]] of 1959–1970. The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (that used [[lookup table]]s) to perform integer arithmetic on digit strings of a length that could be from two to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only. The largest memory supplied offered 60,000 digits; however, [[Fortran]] compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory.
 
==Software libraries==
 
== See also ==
* [[Fürer's algorithm]]
* [[Karatsuba algorithm]]
* [[Mixed-precision arithmetic]]
* [[Schönhage–Strassen algorithm]]
* [[Toom–Cook multiplication]]
* [[Little Endian Base 128]]
 
== References ==
 
{{reflist}}
 
== Further reading ==
* {{Cite book |last=Knuth |first=Donald |author-link=Donald Knuth |title=Seminumerical Algorithms |series=[[The Art of Computer Programming]] |volume=2 |year=2008 |edition=3rd |publisher=Addison-Wesley |isbn=978-0-201-89684-8}}, Section 4.3.1: The Classical Algorithms
*{{cite book|author=Derick Wood|year=1984|title=Paradigms and Programming with Pascal|publisher=Computer Science Press|isbn=0-914894-45-5}}
*{{cite book|author=Richard Crandall, Carl Pomerance|year=2005|title=Prime Numbers|publisher=Springer-Verlag|isbn=9780387252827}}, Chapter 9: Fast Algorithms for Large-Integer Arithmetic
 
== External links ==
* [https://web.archive.org/web/20101019002107/http://oopweb.com/Assembly/Documents/ArtOfAssembly/Volume/Chapter_9/CH09-3.html#HEADING3-1 Chapter 9.3 of ''The Art of Assembly''] by [[Randall Hyde]] discusses multiprecision arithmetic, with examples in [[x86]] assembly.
* Rosetta Code task [http://rosettacode.org/wiki/Arbitrary-precision_integers_%28included%29 Arbitrary-precision integers] Case studies in the style in which over 95 programming languages compute the value of 5**4**3**2 using arbitrary-precision arithmetic.
 
{{data types}}