Arbitrary-precision arithmetic

==Implementation issues==
 
Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in [[Arithmetic logic unit|hardware arithmetic]] whereas the former must be implemented in software. Even if the [[computer]] lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only, not ''N'' words. There are exceptions, as certain ''[[variable word length machine|variable word length]]'' machines of the 1950s and 1960s, notably the [[IBM 1620]], [[IBM 1401]] and the Honeywell ''Liberator'' series, could manipulate numbers bound only by available storage, with an extra bit that delimited the value.
 
Numbers can be stored in a [[fixed-point arithmetic|fixed-point]] format, or in a [[floating-point]] format as a [[significand]] multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the [[numerator]] and for the [[denominator]]. But even with the [[greatest common divisor]] divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 – 1/100 = 1/9900, and if 1/101 is then added the result is 10001/999900.
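
For illustration, the same computation can be carried out with Python's standard <code>fractions.Fraction</code> class (used here only as a readily available arbitrary-precision rational type; each value is kept as an exact ratio of arbitrary-precision integers, automatically reduced by the greatest common divisor):

<syntaxhighlight lang="python">
from fractions import Fraction

# Each value is an exact ratio of two arbitrary-precision integers,
# reduced by the greatest common divisor after every operation.
a = Fraction(1, 99) - Fraction(1, 100)
print(a)                      # 1/9900
print(a + Fraction(1, 101))   # 10001/999900 -- denominators grow quickly
</syntaxhighlight>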
 
The size of arbitrary-precision numbers is limited in practice by the total storage available, the variables used to index the digit strings, and computation time. A 32-bit operating system may limit available storage to less than 4&nbsp;GB. A programming language using 32-bit integers can only index 4&nbsp;GB. If multiplication is done with an {{math|[[Big O notation|O]](''N''<sup>2</sup>)}} algorithm, it would take on [[Order of approximation|the order of]] {{math|10<sup>12</sup>}} steps to multiply two one-million-word numbers.
 
Numerous [[algorithms]] have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that ''N'' digits are employed, algorithms have been designed to minimize the asymptotic [[Computational complexity theory|complexity]] for large ''N''.
 
The simplest algorithms are for [[addition]] and [[subtraction]], where one simply adds or subtracts the digits in sequence, carrying or borrowing as necessary, which yields an ''O''(''N'') algorithm (see [[big O notation]]).
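
A minimal sketch of this digit-by-digit approach, assuming digits are stored in a list with the least-significant digit first (the helper name <code>add_digits</code> and base 10 are chosen only for this example):

<syntaxhighlight lang="python">
def add_digits(a, b, base=10):
    """Add two non-negative numbers stored as lists of digits,
    least-significant digit first, in a single O(N) pass."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry
        if i < len(a):
            total += a[i]
        if i < len(b):
            total += b[i]
        result.append(total % base)   # current digit
        carry = total // base         # carry into the next position
    if carry:
        result.append(carry)
    return result

# 958 + 67 = 1025, with digits stored least-significant first
print(add_digits([8, 5, 9], [7, 6]))   # [5, 2, 0, 1]
</syntaxhighlight>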
 
[[Comparison (computer programming)|Comparison]] is also very simple. After comparing lengths, compare the high-order digits (or machine words) until a difference is found; comparing the rest of the digits/words is not necessary. The worst case is ''O''(''N''), but usually it will finish much sooner.
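
Continuing the same hypothetical digit-list representation (least-significant digit first, no leading zeros), a comparison might look like the following sketch:

<syntaxhighlight lang="python">
def compare(a, b):
    """Compare two non-negative numbers stored as in add_digits above
    (least-significant digit first, no leading zeros). Returns -1, 0 or 1."""
    if len(a) != len(b):                          # more digits means a larger magnitude
        return -1 if len(a) < len(b) else 1
    for x, y in zip(reversed(a), reversed(b)):    # scan from the high-order end
        if x != y:                                # stop at the first difference
            return -1 if x < y else 1
    return 0                                      # all digits equal

print(compare([5, 2, 0, 1], [8, 5, 9]))   # 1  (1025 > 958)
</syntaxhighlight>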
 
For [[multiplication]], the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require O(''N''<sup>2</sup>) operations, but [[multiplication algorithm]]s that achieve O(''N''&nbsp;log(''N'')&nbsp;log(log(''N''))) complexity have been devised, such as the [[Schönhage–Strassen algorithm]], based on [[fast Fourier transform]]s. There are also algorithms with slightly worse asymptotic complexity but sometimes superior real-world performance for smaller ''N''; [[Karatsuba algorithm|Karatsuba]] multiplication is one such algorithm.
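
A minimal sketch of Karatsuba's idea, which replaces four half-size multiplications with three, is shown below; for brevity it operates on Python's built-in integers and splits at a power of ten, whereas a real implementation would work on arrays of machine words:

<syntaxhighlight lang="python">
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's method:
    three half-size multiplications instead of four."""
    if x < 10 or y < 10:                   # small operands: multiply directly
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)     # split each operand at 10**m
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(12345678, 87654321) == 12345678 * 87654321)   # True
</syntaxhighlight>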
 
For [[Division (mathematics)|division]], see: [[Division algorithm]].
 
For a list of algorithms along with complexity estimates, see: [[Computational complexity of mathematical operations]].
 
For examples in [[x86]] assembly, see: [[#External links|External links]].