== Running time ==
 
[[Image:Euclidean algorithm running time X Y.png|thumb|256px|Plot of the running time for gcd(x,y). Red indicates a fast computation, while successively bluer points indicate slower computations]]
 
When analyzing the running time of Euclid's algorithm, it turns out that the inputs requiring the most divisions are two successive [[Fibonacci number]]s, because their ratios are the [[convergent (continued fraction)|convergents]] of the [[continued fraction]] expansion of the [[golden ratio]], which is the expansion that converges most slowly. The worst case therefore requires [[Big O notation|''O''(''n'')]] divisions, where ''n'' is the number of digits in the input. However, the divisions themselves are not constant-time operations; the actual time complexity of the algorithm is <math>O(n^2)</math>.
The reason is that division of two ''n''-bit numbers takes time <math>O(n(m+1))</math>, where ''m'' is the length of the quotient. Consider the computation of gcd(''a'',''b''), where ''a'' and ''b'' have at most ''n'' bits. Let <math>a_0,\dots,a_k</math> be the sequence of numbers produced by the algorithm, and let <math>n_0,\dots,n_k</math> be their lengths. Then <math>k=O(n)</math>, and the running time is bounded by
:<math>O\Big(\sum_{i<k}n_i(n_i-n_{i+1}+2)\Big)\le O\Big(n\sum_{i<k}(n_i-n_{i+1}+2)\Big)\le O(n(n_0+2k))\le O(n^2).</math>
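 
The following is a minimal Python sketch of the remainder-based algorithm that also counts the divisions performed; the helper name <code>gcd_steps</code> and the Fibonacci test are illustrative only. Running it on two successive Fibonacci numbers exhibits the worst case described above.
 
<syntaxhighlight lang="python">
def gcd_steps(a, b):
    """Remainder-based Euclidean algorithm; returns (gcd, number of divisions)."""
    steps = 0
    while b != 0:
        a, b = b, a % b   # one division (modulus) per iteration
        steps += 1
    return a, steps

# Two successive Fibonacci numbers maximise the division count for their size:
# gcd(F(k+1), F(k)) performs k - 1 divisions, because every quotient except the last equals 1.
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

print(gcd_steps(fib[-1], fib[-2]))   # gcd(6765, 4181) -> (1, 18)
</syntaxhighlight>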
 
This is considerably better than Euclid's original algorithm, in which the modulus operation is effectively performed by repeated subtraction, taking up to <math>O(2^n)</math> steps. Consequently, that version of the algorithm requires <math>O(2^n n)</math> time for ''n''-digit numbers, or <math>O(m \log{m})</math> time for the number ''m''.
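 
A sketch of that subtraction-only form, assuming positive integer inputs (the name <code>gcd_subtraction</code> is purely illustrative):
 
<syntaxhighlight lang="python">
def gcd_subtraction(a, b):
    """Subtraction-only form of Euclid's algorithm (assumes a, b > 0).

    Each modulus of the faster version is replaced by repeated subtraction,
    so an input such as gcd(1, N) needs about N iterations, which is
    exponential in the number of digits of N.
    """
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_subtraction(10**6, 1))   # 1, but only after 999999 subtractions
</syntaxhighlight>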
 
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. An alternative algorithm, the [[binary GCD algorithm]], exploits the [[binary numeral system|binary]] representation used by computers to avoid divisions and thereby increase efficiency, although it too is ''O''(''n''²); it merely shrinks the constant hidden by the [[big-O notation]] on many real machines.
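 
For comparison, a sketch of the binary GCD, assuming non-negative integer inputs (the name <code>gcd_binary</code> is illustrative); it replaces divisions with shifts and subtractions:
 
<syntaxhighlight lang="python">
def gcd_binary(a, b):
    """Binary GCD (Stein's algorithm) sketch: only shifts, comparisons and subtraction."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Remove the power of two common to both arguments and remember it.
    shift = 0
    while (a | b) & 1 == 0:
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:        # make a odd
        a >>= 1
    while b != 0:
        while b & 1 == 0:    # factors of two in b are not common, discard them
            b >>= 1
        if a > b:            # keep a <= b so the difference stays non-negative
            a, b = b, a
        b -= a               # b - a is even, handled by the inner loop above
    return a << shift        # restore the common power of two

print(gcd_binary(48, 18))    # 6
</syntaxhighlight>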
 
 