Algorithmic information theory | Wikipedia audio article

This is an audio version of the Wikipedia Article:


00:02:37 1 Overview
00:06:53 2 History
00:09:43 3 Precise definitions
00:12:47 4 Specific sequence



Listening is a more natural way of learning than reading. Written language only emerged around 3200 BC, but spoken language has existed far longer.

Learning by listening is a great way to:
- increase imagination and understanding
- improve your listening skills
- improve your own spoken accent
- learn while on the move
- reduce eye strain

Now you can learn the vast amount of general knowledge available on Wikipedia through audio (audio article). You could even learn subconsciously by playing the audio while you sleep! If you plan to listen a lot, you could try bone-conduction headphones or a standard speaker instead of earphones.

Listen on Google Assistant through Extra Audio:
Other Wikipedia audio articles at:
Upload your own Wikipedia articles through:
Speaking Rate: 0.713959178620419
Voice name: en-US-Wavenet-C


"I cannot teach anybody anything, I can only make them think."
- Socrates


SUMMARY
=======
Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously." Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers.
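
The gap between "structured" and "random" strings that these complexity measures formalize can be glimpsed with an ordinary compressor. A minimal Python sketch, with the caveat that true Kolmogorov complexity is uncomputable and a zlib-compressed length is only a crude, computable upper bound on it:

    import os
    import zlib

    def description_length_upper_bound(s: bytes) -> int:
        # Length of a zlib-compressed copy of s: a crude, computable
        # upper bound on the (uncomputable) Kolmogorov complexity of s.
        return len(zlib.compress(s, 9))

    regular = b"ab" * 5000          # highly regular: a short program prints it
    random_ish = os.urandom(10000)  # typical random bytes: no short description

    print(len(regular), description_length_upper_bound(regular))
    print(len(random_ish), description_length_upper_bound(random_ish))

On a typical run the 10,000-byte regular string compresses to a few dozen bytes, while the random bytes stay close to 10,000: the regular string admits a short description, the random one almost surely does not.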
The theory was founded by Ray Solomonoff, who published the basic ideas on which the field is based as part of his invention of algorithmic probability—a way to overcome serious problems associated with the application of Bayes' rule in statistics. He first described his results at a conference at Caltech in 1960, and in a February 1960 report, "A Preliminary Report on a General Theory of Inductive Inference." Algorithmic information theory was later developed independently by Andrey Kolmogorov in 1965 and Gregory Chaitin around 1966. There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974).
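For orientation (standard definitions; notation varies slightly across the variants just mentioned), the Kolmogorov complexity of a string x relative to a universal prefix machine U, and Solomonoff's algorithmic probability of x, are usually written as:

    K(x) = \min \{\, |p| \;:\; U(p) = x \,\}
    \qquad
    m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}

Restricting to self-delimiting (prefix-free) programs, as in Levin's variant, is what makes the sum defining m(x) converge: by Kraft's inequality the program lengths satisfy \sum_p 2^{-|p|} \le 1.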
Per Martin-Löf also contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other formulations.