How a Russian student invented a faster multiplication method

To advance the field of computer science, the mathematician Andrey Kolmogorov tried to optimise the multiplication algorithm we learn in elementary school. After failing to do so, he conjectured that no faster algorithm exists. That conjecture was disproved by Anatoly Karatsuba, then an unknown student, whose fast multiplication algorithm beats the elementary-school method. This video gives an introduction to theoretical computer science and Kolmogorov's conjecture, explains Karatsuba's algorithm, proves that its runtime is faster than quadratic, and goes over the history of the multiplication algorithms that came afterwards.

0:00 Theoretical Computer Science
5:25 Kolmogorov
7:34 Karatsuba
15:12 The Post-FFT Era
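
For readers who want to try the trick themselves, here is a minimal Python sketch of Karatsuba's idea of replacing four sub-multiplications with three (an illustration with our own naming, not code from the video):

def karatsuba(x, y):
    # Base case: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around half of the longer one's digit count.
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    # Three recursive multiplications instead of four.
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(1234, 5678) == 1234 * 5678

Recursing three times instead of four is what gives the N^(log2 3) ≈ N^1.585 runtime discussed in the comments below.
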
Comments
Author

Kolmogorov is one of the coolest men I've heard of. Admitting defeat and then anonymously supporting the kid. wild

yurr
Author

Props to Kolmogorov: he could have submitted the paper under his own name, without giving credit to an unknown student, and taken all the merit. The academic world is sometimes ruthless.

soyanchd
Author

For anyone curious, at 13:38 N^1.6 is used as an approximation; the exact exponent is log base 2 of 3, so the runtime is N^(log2 3) ≈ N^1.585. If you want to enter it into a calculator, use the change-of-base formula: log(3) / log(2).

kitsurubami
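
To check that figure, a tiny Python sketch (not from the video):

import math

# The exact exponent is log base 2 of 3; the change-of-base formula gives log(3) / log(2).
exponent = math.log(3) / math.log(2)   # same value as math.log2(3)
print(exponent)                        # ~1.585, which the video rounds up to 1.6
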
Author

I think the fact that we don't teach the fast Fourier transform in elementary school says a lot about society.

alexray
Author

I love how you bring nearly-unreachable knowledge to the community through interesting and easy-to-understand videos. I would never know this bit of theoretical CS otherwise. Keep up the good work!!!

tytywuu
Author

Between Karatsuba and FFT there is the Toom-Cook algorithm, from 1963-66. Like FFT, it treats both numbers as polynomials, evaluates them naively at a few small points (like 0, ±1, -2, +inf), multiplies those values, and then interpolates back to the coefficients of the product.
"2-way" Toom-Cook recreates Karatsuba. The original "3-way" and "4-way" variants have complexity O(N^1.465) and O(N^1.404). The GMP library (a hefty library for big numbers) uses the naive algorithm, Karatsuba, "3-", "4-", "6.5-" and "8.5-way" Toom-Cook, and FFT, choosing the algorithm by the length of the numbers.

bartekltg
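
As an illustration of the evaluate/multiply/interpolate view described in the comment above, here is a minimal Python sketch of the "2-way" case, which indeed reduces to Karatsuba (the splitting base and names are our own choices):

def toom2(x, y, base=10**4):
    # View x = x1*base + x0 and y = y1*base + y0 as degree-1 polynomials
    # X(t) = x1*t + x0 and Y(t) = y1*t + y0, to be evaluated at t = base.
    if x < base or y < base:
        return x * y
    x1, x0 = divmod(x, base)
    y1, y0 = divmod(y, base)
    # Evaluate the product polynomial at the points 0, 1 and infinity.
    w0 = toom2(x0, y0)                        # X(0) * Y(0)
    w2 = toom2(x1, y1)                        # leading coefficients, i.e. the point at infinity
    w1 = toom2(x0 + x1, y0 + y1) - w0 - w2    # recovered from X(1) * Y(1)
    # "Interpolation" is trivial here: the product polynomial is w2*t^2 + w1*t + w0 at t = base.
    return w2 * base * base + w1 * base + w0

assert toom2(123456789, 987654321) == 123456789 * 987654321

The 3-way and higher variants work the same way, just with more evaluation points and a genuine interpolation step.
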
Author

It's amazing how a simple problem like multiplication can lead to such complex mathematical discoveries. Who would have thought that multiplying optimally is insanely more difficult than adding?

Ricocossa
Author

This is what I studied in my 200-level, 300-level, and 400-level computer science algorithms classes. Good explanation!

MattWyndham
Author

14:47 That was honest of Kolmogorov. I have met a few people in my career who would pretend to have done work that was actually done by someone else, and then take the credit for it.

simonmultiverse
Author

17:20 Note that log log N is also practically constant, much like the k^(log* N) term: for N equal to the number of atoms in the observable universe, log log N is only around 8. In fact, for that N, log log N is even smaller than 4^(log* N).

tomerwolberg
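
A quick numerical check of this, as a small Python sketch (using the usual ballpark figure of about 10^80 atoms; the helper log_star is our own):

import math

N = 10 ** 80   # rough count of atoms in the observable universe

def log_star(n, base=2):
    # Iterated logarithm: how many times log can be applied before the value drops to <= 1.
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count

print(math.log2(math.log2(N)))   # ~8.05, so log log N is about 8
print(log_star(N))               # 5, the "log star of the atoms in the universe" from the video
print(4 ** log_star(N))          # 1024, so log log N is indeed the smaller of the two
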
Author

I loved fast inverse square root and finally you've released some more videos! Makes my day. Take however long you want, they're worth it.

haiguyzimnew
Author

Content like this is why I still pay my internet bill. Thoughtfully presented, beautifully explained, and utterly fascinating even to a cynical math-o-phobe like me. Eighteen minutes well spent. I look forward to future content as a new subscriber. Bravo!

nicholashall
Author

Wonderful video :) I am writing an Algorithms exam next week and wanted to take a break from studying, but I ended up learning more about the algorithm than I did in my lecture, and in a more exciting and relaxing way. Thank you for this masterpiece and the wonderful editing!

polarisinglol
Author

Even if the lower bound turns out to be Ω(N × log N), there is still mathematical progress to be made (or conjectures to be disproven) in finding algorithms that reach that efficiency at smaller and smaller input sizes.

rebmcr
Author

Kolmogorov also advanced the study of fluid flow turbulence so much that they named a constant after him and still refer to his work to this day!

roberthigbee
Author

I have just discovered this channel and the animations and the gradients are so beautiful, the content, so mesmerising, that I instantly subscribed.
Thank you.

gligoradrian
Author

This whole video is incredibly interesting and explains lots of things very well, but I am laughing so hard at 17:00. The deadpan delivery of that line “log star of the number of atoms in the universe... is five.”

DavidTriphon
Author

9:30 and 16:00 I think it would have been better if you had used actual numbers and shown a practical example of the calculation instead of empty digit boxes / partially filled circle shapes; it would be easier to keep track of and follow what you're talking about. Since the video started with practical examples for the easier algorithms, I was also expecting practical examples for the more complicated ones. Having to follow where you put which blank box, or which abstract circle is filled by how much, and to work out why you gave the circles those fill values, while at the same time trying to listen to what you're saying, is rather irritating.

kyoai
Author

Fun fact: when computers multiply whole numbers, the compiler will often optimize the code so that it doubles (or halves) the number one or more times (a single machine operation known as bit shifting) and then adds or subtracts copies of the original number to reach the result.

So the code x * 9 (which is (x * 8) + x) would be compiled into the equivalent of (x << 3) + x, where '<<' denotes a bit shift to the left (shifting by 3 doubles the number three times).

mikkolukas
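
A tiny Python check of the arithmetic behind that rewrite (the compiler performs the transformation itself; this only verifies the identities):

x = 7   # any integer works

# x * 8 is x doubled three times, i.e. x shifted left by 3 bits.
assert x << 3 == x * 8

# So x * 9 becomes a shift plus one addition, and x * 7 a shift minus one subtraction.
assert (x << 3) + x == x * 9
assert (x << 3) - x == x * 7
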
Author

I really enjoyed this; good to see more coming from this channel. Excitedly looking forward to more!

mickharrigan