Talk:Karatsuba algorithm: Difference between revisions

Line 1:
{{WikiProject banner shell|class=C|
{{WikiProject Mathematics|priority=low}}
{{WikiProject Computing |importance=Low |science=y |science-importance=Low |software=y |software-importance=Low}}
}}
 
== Divide and conquer (February 2011) ==
Line 14 ⟶ 13:
 
By the way, it is not correct to write that the "Karatsuba algorithm is a notable example of 'Divide and Conquer' or 'Binary Splitting'", since it was Karatsuba who invented this idea; the names "Divide and Conquer" and "Binary Splitting" were coined much later. Somebody called the Karatsuba idea "Divide and Conquer", so Karatsuba did not use somebody else's idea; somebody else used the Karatsuba idea. Do you see the difference? To write "the Karatsuba algorithm is a notable example of Divide and Conquer" means to write that Karatsuba used the method "Divide and Conquer" to create a fast multiplication, but Karatsuba in fact invented this method (in computational mathematics).[[Special:Contributions/91.79.192.32|91.79.192.32]] ([[User talk:91.79.192.32|talk]]) 14:05, 1 February 2011 (UTC)
 
The comments above by 91.79.192.32 are utter nonsense and reflect a near complete lack of cognitive competence. Relative timing of the development of concepts and algorithms has no bearing on whether the algorithms are examples of the concepts. -- [[User:Jibal|Jibal]] ([[User talk:Jibal|talk]]) 22:26, 17 January 2020 (UTC)
 
== Divide and conquer ==
Line 22 ⟶ 23:
 
I think this is wrong. According to [[Merge_sort]], von Neumann invented Merge Sort in 1945. <small>—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/84.74.154.5|84.74.154.5]] ([[User talk:84.74.154.5|talk]]) 14:11, 23 July 2008 (UTC)</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
 
 
I think it is absurd to compare von Neumann's Merge Sort and the Karatsuba fast multiplication algorithm. Only after the Karatsuba algorithm did the history of fast algorithms begin. Such fast processes as the AGM method of Gauss, Newton's method, etc. become FAST only with the application of fast multiplication. Von Neumann's, or anybody else's, results cannot help here. That is why the Karatsuba algorithm can be considered the first FAST algorithm in the history of computations. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/83.149.209.253|83.149.209.253]] ([[User talk:83.149.209.253|talk]]) 14:37, 31 March 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
Line 36:
 
''"To equal such different by importance results as the Karatsuba method and von Neumann Sorting means to diminish the advantage of the Karatsuba result."'' — This sentence shows a lot of ignorance. Claiming that a fast sort algorithm is not important is uninformed at best. In fact, most applications depend on sorting rather than on multiplication of long numbers. — [[User:Adrianwn|Adrianwn]] ([[User talk:Adrianwn|talk]]) 08:19, 1 April 2010 (UTC)
 
The comments above by 83.149.209.253 are utter nonsense and reflect a near complete lack of cognitive competence. -- [[User:Jibal|Jibal]] ([[User talk:Jibal|talk]]) 22:33, 17 January 2020 (UTC)
 
== Rule of thumb ==
Line 68 ⟶ 70:
:And what exactly do you want to tell us by that? Furthermore, could you please explain [http://en.wikipedia.org/w/index.php?title=Karatsuba_algorithm&curid=6395589&diff=368458594&oldid=368191659 this edit]? I reverted a previous, similar edit of yours, because it was unclear and seemed redundant. – [[User:Adrianwn|Adrianwn]] ([[User talk:Adrianwn|talk]]) 05:37, 17 June 2010 (UTC)
 
==Original research by 93.118.212.93 (Florin Matei) I (collapsed)==
{{cot|Subdivision 1:4 that is not Toom-Cook}}
;Further subdivision 1:4 to 4:
if the operands are of 1:4 length, they might be a particular form of 4:4, and then we might find better orders for the Karatsuba algorithm (1960) <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 20:54, 25 May 2012 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
Line 79 ⟶ 83:
I think it could be a chance for a problem oriented toward programmer-analysts' concerns :-) [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 12:52, 13 February 2013 (UTC)
 
;Possible linear long integer multiplyings algo:
 
I think that could be possible, keeping an O(1)*n loop to count, in O(1), the number
Line 90 ⟶ 94:
Well, it's not the Egyptian algorithm; maybe bit-oriented processing and dynamic programming might achieve O(N) or similar processing time... who knows, since I'm such a poor coder when it comes to writing programs :( <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 15:15, 2 February 2013 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
;n*log(n) muls or even better:
 
wow, I'm sorry. Let's say we want to multiply a1 and b1, to avoid confusion:
a1*b1 = m*m - n*n; m*m = (m1;m2)*(m1;m2), with m1 the most significant part, m1 = 3*a + b, m2 = 3*b + a; computing m1*m1, m2*m2 and m1*m2 is left to "autosolve". Good luck and don't forget the credits, ok?? Florin Matei, Romania [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 19:16, 6 December 2012 (UTC)
Line 104 ⟶ 109:
I don't know about the right place for this; apparently this is all I can afford here, not to mention the brain treat... they are doing this to me for their fun, I guess [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 09:29, 9 February 2013 (UTC)
 
;Karatsuba idea n estimating MSb mantissa for factorials:
I think that, due to the arithmetic progression of the operands and using binary encodings for the numbers, we could use ideas similar to Karatsuba's to multiply in polynomial time a great deal of numbers such as factorials, only for the significant mantissa (and exponent); the details are left to you :) <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 11:53, 2 February 2013 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
Line 110 ⟶ 115:
 
 
;Possible linear algorithm simple n efficient just like K idea:
 
ok, once and for all, for everybody that wants to challenge this idea: my challenge to you is that using bit
Line 124 ⟶ 129:
:This is the general Toom-Cook idea. Please read and understand the paper by Bernstein, as I told You before.--[[User:LutzL|LutzL]] ([[User talk:LutzL|talk]]) 19:09, 9 February 2013 (UTC)
well, are you sure that Toom-Cook is linear for the splitting variant k=2? If I remember well, because of those coefficients from the system, Toom-Cook is not O(N) complexity in any case. My ideas are meant as an IQ test also [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 10:07, 10 February 2013 (UTC)
 
{{cob}}
 
== How it has been computed? ==
Line 133 ⟶ 140:
However, it doesn't explain the algorithm for computing that. Is this where long multiplication is applied, or is the Karatsuba algorithm just applied again to that expression? [[Special:Contributions/31.42.239.14|31.42.239.14]] ([[User talk:31.42.239.14|talk]]) 21:54, 25 February 2013 (UTC)
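
For what it's worth, here is a minimal sketch (in Python, written for illustration and not taken from the article) of how the recursion is usually understood to bottom out: each of the three partial products is computed by calling the same routine again, and ordinary multiplication is used only once the operands are single-digit. The split position n//2 differs from the 12/345 split in the article's example, but the identity is the same; the names and the base case are illustrative choices.

<syntaxhighlight lang="python">
def karatsuba(x, y):
    """Illustrative Karatsuba multiplication of non-negative integers."""
    if x < 10 or y < 10:
        return x * y  # base case: ordinary (single-digit) multiplication

    n = max(len(str(x)), len(str(y)))
    m = n // 2  # number of low-order digits in each half

    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)

    z0 = karatsuba(low_x, low_y)                      # low halves
    z2 = karatsuba(high_x, high_y)                    # high halves
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0

    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0


assert karatsuba(12345, 6789) == 12345 * 6789
</syntaxhighlight>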
 
==Original research by 93.118.212.93 (Florin Matei) II (collapsed)==
{{cot|Subdivision 1:4 that is not Toom-Cook}}
;ok, abt some a bit different K idea possible able 4 generalization:
Multiplying A=(a1;a2) with B=(b1;b2) might be planned like this: (a1+b1)*(a2+b2), (a1-b1)*(a2-b2), and a1*b1... I think this could challenge the Toom-Cook algorithm, by some Karatsuba generalization that uses mostly algebraic sums and possibly finds simpler systems of equations... the algebraic sums to be multiplied that offer decent algebraic sums of the desired coefficients might even be searched for automatically, by the computer... good luck! (I posted similar attempts on the Toom-Cook talk page.) [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 10:51, 27 March 2013 (UTC)
 
Line 141 ⟶ 150:
We'd want the value of the product for some x0=2^k value... we plan to do 3 small multiplications h1(1)*h2(1), l1(1)*l2(1) and (h1+l1)(1)*(h2+l2)(1), but after obtaining the values (using the classic Karatsuba algorithm) we multiply two of the resulting polynomials properly by 2^t, an action that, sooner or later, is expected to offer the desired result of the multiplication of the two polynomials; applied to the x0=2^k value, this means the multiplication is over... the time looks pretty appealing, IF it's working :) [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 07:06, 28 March 2013 (UTC)
 
;A possible generalization aiding by complex number possibilities?:
 
If we have to multiply 2 polynomials but are in fact looking for their value, the resulting number obtained at x0=2^k for the product polynomial, we might use as the xi the values that create the system of equations, xi = roots_complex_order_j(1) (the complex j-th roots of 1)... j somewhere in 1...n, n the number of planned partitions: it could help in getting a decent system of linear equations that solves faster... :) [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 08:21, 28 March 2013 (UTC)
 
Line 153 ⟶ 163:
 
 
Hi, I think algorithms could be written based on a grid concept for the stored values, and a small square for data, that possibly work in O(N) time and O(N^(2/3)) memory, for example, or even less: O(N^(1/2)), O(log(N)), or even O(1) memory for O(N) time, so I don't really think there is a need to trade time for memory. For example, the O(1)-memory algorithm first seeks the target without filling, then does intelligent border keeping, also checking the condition of connectivity (a local test) for the remaining zone... only if there is no bordering and a need to split, then proceed with splitting... the O(N^(1/2)) and O(N^(2/3)) variants are even simpler; they use O(N) time. In somewhat cautious terms, N is the maximum number of pixels of a possible image to be filled... <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 10:29, 18 January 2013 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->
 
 
;a lil-big difference in K. algo idea:
 
This idea could make the difference between O(N^1.59) and O(N*log(N)) by simply redesigning the plan of recursive calls of this good idea: basically, instead of planning 3 multiplications of T(n/2), we plan (and that refers to the multiplications of sums) the T(n/2) sum and then the two remaining T(n/4) sums. I agree there are some terms left for "autosolve", but this doesn't yet charge the data stack (the hypothetical data stack of some implementation of the algorithm), and those remaining unconsidered ("yet") terms are expected to become smaller and smaller. Practically, if the stack grows with N/2+N/4+N/4=N, then this small idea could lead us to an O(n*log(n)) time complexity algorithm. I agree my "demonstration" here is far from complete; take it as a Russian-style problem, if you wish :) Florin Matei, Romania [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 13:12, 14 April 2013 (UTC)
:No, Russian style is to leave the solution of a problem to the reader. What you want is to leave the formulation of a problem to the reader. This is crank style. Come back when you have an implementation with demonstrable runtimes.--[[User:LutzL|LutzL]] ([[User talk:LutzL|talk]]) 12:13, 15 April 2013 (UTC)
 
Well, another purely creative idea: observe first that if we manage to write the master-method equivalent for finding the O() of the Karatsuba algorithm idea in such a way that we can write something like T(N)=3*(1-p)*T(N/2), with p a small fraction of 1 meaning the part of the problem solved first from each of those 3 planned multiplications, we might find a better O() for the Karatsuba algorithm idea... basically, instead of planning a multiplication of T(N/2), we solve a small fraction of it and plan only the remaining (1-p)*T(N/2), claiming we can already talk about a better O()... as a rule of thumb, the little multiplication and the remaining multiplication should be equalized in time spent in order to find the most representative O() [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 05:15, 20 April 2013 (UTC)
 
;inspired by Karatsuba's idea:
 
If we have to multiply 2 long integers of N bits each, first we might do the multiplication of the first 3*N/4 bits with the first 3*N/4 bits, obtaining a result recursively, then we might do the same for the last 3*N/4 bits multiplied with the other operand's 3*N/4 bits. We might remark that we already got (3/8)*N (first) + (3/8)*N (last) of the required bits of the result; now we might focus on the middle of the operands (pretending that at the middle of the operands there is some aiding hypothetical floating point)... at a summary view, in order to get a clue about the master-method equation, I got 8^alfa = 2*6^alfa ... [[Special:Contributions/93.118.212.93|93.118.212.93]] ([[User talk:93.118.212.93|talk]]) 13:42, 26 May 2013 (UTC) Florin Matei, Romania
{{cob}}
 
== "Invented by Karatsuba"/"Kolmogorov was very agitated" ==
 
The lead of our article claims that this algorithm was "invented by Karatsuba". Given that the original publication was joint with Ofman, is there reliable sourcing (independent of Karatsuba himself) that this algorithm was invented solely by Karatsuba? If so, what was Ofman's contribution? —[[User:David Eppstein|David Eppstein]] ([[User talk:David Eppstein|talk]]) 02:24, 5 August 2013 (UTC)
:Hi, I believe the history section does a good job on that: invented by Karatsuba during a seminar of Kolmogorov, published two years later by Kolmogorov and Ofman (another student in the seminar, I read somewhere?) under the names of Karatsuba and Ofman. I do not believe that minutes of the seminar exist, so the amount and influence of brainstorming is not reconstructable.--[[User:LutzL|LutzL]] ([[User talk:LutzL|talk]]) 14:46, 5 August 2013 (UTC)
 
The history section is problematic, even if we ignore the plausible claims for Karatsuba, because that section also says "Kolmogorov was very agitated about the discovery; he communicated it at the next meeting of the seminar, which was then terminated." That is an uncharitable depiction that doesn't seem to be recorded in any of the sources. This sentence should either be sourced or removed.
 
== Algorithms are generally invented, not discovered ==
 
The opening paragraphs states that Karatsuba ''discovered'' this algorithm. Although the principles or properties underlying the algorithm may indeed have been discovered, the algorithm itself would have been ''invented'', unless it was somehow preexistent. I posted this instead of modifying the article directly in case somebody has a justification for the current wording. <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/38.104.57.138|38.104.57.138]] ([[User talk:38.104.57.138#top|talk]]) 16:01, 14 April 2017 (UTC)</small> <!--Autosigned by SineBot-->
 
:Nonsense. This O(n^1.58) algorithm has always existed; therefore it was discovered. One can generally interchange "discovered" and "invented" for algorithms, but the simpler and more fundamental the algorithm, the more "discovered" is appropriate, and the more ad hoc the algorithm, the more "invented" is appropriate. -- [[User:Jibal|Jibal]] ([[User talk:Jibal|talk]]) 22:40, 17 January 2020 (UTC)
 
== Karatsuba method is not (comparatively) as fast as the article said ==
 
The description in the article indicates:
 
''"For example, to multiply two 1024-digit numbers (n = 1024 = 2^10), the traditional algorithm requires (2^10)^2 = 1,048,576 single-digit multiplications, whereas the Karatsuba algorithm requires 3^10 = 59,049 thus being ~17.758 times faster"
''
 
What the article doesn't say is that the 59,049 Karatsuba multiplications are NOT ''single-digit'' multiplications (unlike the traditional algorithm), so they're comparing apples and oranges.
 
In order to correctly do comparisons, we need to set a ''standard''. If the standard is '''single-digit multiplications''', then the results in the article change.
 
In the example of the product of 12345 and 6789, the traditional algorithm requires 5 digits × 4 digits = 20 single-digit multiplications. The description indicates:
 
Only three multiplications, which operate on smaller integers, are used to compute three partial results:
z2 = 12 × 6 = 72
z0 = 345 × 789 = 272205
z1 = (12 + 345) × (6 + 789) − z2 − z0 = 357 × 795 − 72 − 272205 = 283815 − 72 − 272205 = 11538
 
However, 12 × 6 is NOT one multiplication but 2 single-digit multiplications, and 345 × 789 requires 9 single-digit multiplications, and (12 + 345) × (6 + 789) also requires 9 single-digit multiplications, giving a total of 20 multiplications! (not 3).
 
In this way, Karatsuba's method is not as fast as the article said...
 
Where is the error in my reasoning, if any? [[Special:Contributions/2806:2F0:93C0:FD4B:E92A:EC7:7FF6:5CF0|2806:2F0:93C0:FD4B:E92A:EC7:7FF6:5CF0]] ([[User talk:2806:2F0:93C0:FD4B:E92A:EC7:7FF6:5CF0|talk]]) 01:46, 14 June 2023 (UTC)
:This is a divide-and-conquer algorithm, so the multiplications in z0 and z1 wouldn't be carried out by the conventional nxn method, but rather by calling Karatsuba recursively for each of z0 and z1. I find the article's example poorly chosen and awkward, so I'm not guaranteeing there's nothing wrong with it; but what you're talking about isn't it. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 06:22, 14 June 2023 (UTC)
::I am not talking nor referring to advanced details on the method, but about a single and specific point: the comparison of the number of operations on both the traditional algorithm and Karatsuba algorithm, that the article said that is ''~17.758 times faster'' in the first example...
::The later example of the Karatsuba method (the one that specifies ''"Only three multiplications, which operate on '''smaller integers''' (NOT single-digit integers), are used to compute three partial results"'') uses multiplications over ''three-digit numbers''. For this reason I concluded that the first example, the one that requires 1024*1024 = 1,048,576 single-digit multiplications, would require 1024/3 = 342*342 three-digit multiplications; that is, 116,964 multiplications.
::In this way, if the traditional algorithm requires 116,964 ('''three-digit''') multiplications and the Karatsuba algorithm requires 59,049 ('''three-digit''') multiplications (as the second example clearly indicates), then Karatsuba is just 1.98 times faster (and ''not'' 17.758 times faster as the article said). [[Special:Contributions/2806:2F0:93C0:FD4B:B85C:D1F7:77AB:F805|2806:2F0:93C0:FD4B:B85C:D1F7:77AB:F805]] ([[User talk:2806:2F0:93C0:FD4B:B85C:D1F7:77AB:F805|talk]]) 04:58, 21 June 2023 (UTC)
:::I've removed these quantitative claims because (a) they're way overprecise; and (b) they appear to be [[WP:OR]]. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 07:21, 21 June 2023 (UTC)
::::To answer the OP's original question "Where is the error in my reasoning": the error is that the OP is confusing the number of single-digit multiplications at the bottom of the recursion (given for the example of two 1024-digit numbers) with the number of recursive multiplications at the top of the recursion (given for the example of 12345 and 6789 and always three). The top-level recursive multiplications are not single-digit. The multiplications at the bottom level of the recursion would be single-digit in whatever base you are using (which really should be much larger than base-10). —[[User:David Eppstein|David Eppstein]] ([[User talk:David Eppstein|talk]]) 07:34, 21 June 2023 (UTC)
:::::As usual, the prof put it best. [[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]] 07:49, 21 June 2023 (UTC)
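
For what it's worth, the figure the article originally gave can be reproduced mechanically from the recursion described above: with an even split all the way down to single digits, the number of single-digit multiplications satisfies T(1) = 1, T(n) = 3*T(n/2), so T(1024) = 3^10 = 59,049, against 1024^2 = 1,048,576 for the schoolbook method. A small illustrative sketch (assuming n is a power of two):

<syntaxhighlight lang="python">
def karatsuba_base_muls(n):
    """Single-digit multiplications used on two n-digit numbers,
    assuming n is a power of two and an even split at every level:
    T(1) = 1, T(n) = 3 * T(n // 2)."""
    return 1 if n == 1 else 3 * karatsuba_base_muls(n // 2)


n = 1024
schoolbook = n * n                   # 1,048,576
karatsuba = karatsuba_base_muls(n)   # 3**10 = 59,049
print(schoolbook / karatsuba)        # ratio ≈ 17.758
</syntaxhighlight>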
 
== Was Babbage faster? ==
 
"four multiplications...were known to Charles Babbage," isn't that faster than the "quadratic 'grade school' algorithm" (wherein each digit of a multiplicand is multiplied by each digit of a multiplier and
shifted results are added)? If so, that'd mean the Karatsuba algorithm wasn't the first faster multiplication algorithm. [[Special:Contributions/213.41.102.186|213.41.102.186]] ([[User talk:213.41.102.186|talk]]) 10:17, 15 December 2023 (UTC)
 
:The great observation of Karatsuba is that 3 multiplications are sufficient where Babbage needed 4 multiplications, and that this leads to an algorithm that is faster (for large factors) than all previously known algorithms. [[User:D.Lazard|D.Lazard]] ([[User talk:D.Lazard|talk]]) 11:03, 15 December 2023 (UTC)
::Indeed. But, the four multiplications known to Babbage already beats "grade school," right? [[Special:Contributions/213.41.102.186|213.41.102.186]] ([[User talk:213.41.102.186|talk]]) 15:04, 15 December 2023 (UTC)
:::Why do you think that? BTW, the 4 multiplications are the same as the "quadratic 'grade school' algorithm" with 2-digit numbers in a high radix. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 15:35, 15 December 2023 (UTC)
::::Oh, doesn't beat, understood. [[Special:Contributions/213.41.102.186|213.41.102.186]] ([[User talk:213.41.102.186|talk]]) 16:50, 15 December 2023 (UTC)
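
To make the 4-versus-3 contrast above concrete, here is a small illustrative sketch (the variable names are mine, not from any source): splitting each factor into a high and a low half in base B, the decomposition known to Babbage uses four half-size products, while Karatsuba's identity recovers the middle term from a single extra product of sums.

<syntaxhighlight lang="python">
# One level of splitting in base B = 10**m, using the article's 12345 × 6789 example.
x, y, m = 12345, 6789, 2
B = 10 ** m
a, b = divmod(x, B)   # x = a*B + b
c, d = divmod(y, B)   # y = c*B + d

# Four multiplications (the decomposition known to Babbage):
four = a * c * B * B + (a * d + b * c) * B + b * d

# Three multiplications (Karatsuba: middle term from (a+b)*(c+d)):
ac, bd = a * c, b * d
three = ac * B * B + ((a + b) * (c + d) - ac - bd) * B + bd

assert four == three == x * y
</syntaxhighlight>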