Kolmogorov complexity

 
Andrey Kolmogorov later [[multiple discovery|independently published]] this theorem in ''Problems Inform. Transmission'' in 1965.<ref>{{cite journal|volume=1 |issue=1 |year=1965 |pages=1–7 |title=Three Approaches to the Quantitative Definition of Information |url=http://www.ece.umd.edu/~abarg/ppi/contents/1-65-abstracts.html#1-65.2 |journal=Problems Inform. Transmission |first=A.N. |last=Kolmogorov |url-status=dead |archive-url=https://web.archive.org/web/20110928032821/http://www.ece.umd.edu/~abarg/ppi/contents/1-65-abstracts.html |archive-date=September 28, 2011 }}</ref> [[Gregory Chaitin]] also presented this theorem in the ''[[Journal of the ACM]]''&nbsp;– his paper, submitted in October 1966 and revised in December 1968, cites both Solomonoff's and Kolmogorov's papers.<ref>{{cite journal | last1 = Chaitin | first1 = Gregory J. | title = On the Simplicity and Speed of Programs for Computing Infinite Sets of Natural Numbers| journal = Journal of the ACM | volume = 16 | pages = 407–422| year = 1969 | doi = 10.1145/321526.321530 | issue = 3| citeseerx = 10.1.1.15.3821 | s2cid = 12584692 }}</ref>
 
The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. For every string, this algorithm allows codes as short as those allowed by any other algorithm, up to an additive constant that depends on the algorithms but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string, on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information.
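In symbols: writing <math>K_A(x)</math> for the length of a shortest code of the string <math>x</math> under decoding algorithm <math>A</math> (a notation introduced here only for illustration), the theorem asserts that there is an optimal algorithm <math>U</math> such that for each algorithm <math>A</math> there is a constant <math>c_A</math> with

<math display="block">K_U(x) \leq K_A(x) + c_A \quad \text{for all strings } x,</math>

where <math>c_A</math> depends on <math>U</math> and <math>A</math> but not on <math>x</math>.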
 
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority.<ref>{{cite journal | last1=Kolmogorov | first1=A. | title=Logical basis for information theory and probability theory | journal=IEEE Transactions on Information Theory | volume=14|issue=5 | pages=662–664 | year=1968 | doi =10.1109/TIT.1968.1054210 | s2cid=11402549 }}</ref> For several years, Solomonoff's work was better known in the Soviet Union than in the West. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while algorithmic probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist [[Ming Li]] considers this an example of the [[Matthew effect (sociology)|Matthew effect]]: "...to everyone who has, more will be given..."<ref>{{Cite book |doi=10.1007/978-0-387-49820-1_1 |chapter=Preliminaries |title=An Introduction to Kolmogorov Complexity and its Applications |url=https://archive.org/details/introductiontoko00limi_695 |url-access=limited |pages=[https://archive.org/details/introductiontoko00limi_695/page/n23 1]–99 |series=Texts in Computer Science |year=2008 |last1=Li |first1=Ming |last2=Vitányi |first2=Paul |isbn=978-0-387-33998-6 }}</ref>
 
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on [[self-delimiting program]]s, and is mainly due to [[Leonid Levin]] (1974).
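To illustrate the self-delimiting idea, here is a toy sketch (not Levin's construction; the helper names <code>encode</code> and <code>decode</code> are hypothetical). A prefix-free code can prepend a string's length in unary, so a decoder always knows where a codeword ends and no codeword is a prefix of another:

<syntaxhighlight lang="python">
def encode(s: str) -> str:
    """Self-delimiting (prefix-free) encoding of a bit string:
    len(s) ones, a terminating zero, then s itself."""
    return "1" * len(s) + "0" + s

def decode(stream: str) -> tuple[str, str]:
    """Read exactly one codeword from the front of the stream;
    return the decoded string and the remaining stream."""
    n = stream.index("0")  # the unary length ends at the first zero
    return stream[n + 1 : 2 * n + 1], stream[2 * n + 1 :]

# Concatenated codewords split unambiguously -- the property that
# self-delimiting programs give to this variant of complexity.
packed = encode("101") + encode("0110")
first, rest = decode(packed)
second, _ = decode(rest)
assert (first, second) == ("101", "0110")
</syntaxhighlight>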