The Fastest Multiplication Algorithm


How can you multiply two enormous numbers? We all learn an approach in school which takes n^2 total multiplications of single-digit numbers. The first improvement was the Karatsuba algorithm, which uses a divide-and-conquer approach and a bit of clever algebra to reduce 4 multiplications to 3 (in the 2-digit by 2-digit case) and, more generally, to about n^1.58 single-digit multiplications asymptotically. On a computer with a 32-bit hardware multiplier, one can apply the approach recursively until the numbers fit within the multiplier. Toom-Cook is a generalization of the Karatsuba algorithm and is useful in various cryptography applications, but a real change came with Schönhage-Strassen, which uses discrete fast Fourier transforms to reach O(n log n log log n) complexity and is used in applications like the Great Internet Mersenne Prime Search. The best theoretical result is due to Harvey and van der Hoeven, who achieved O(n log n), although their algorithm only becomes more efficient for impractically large numbers.
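
As a rough sketch of the Karatsuba recursion described above (splitting in base 10 for readability; real implementations split on machine words):

def karatsuba(x, y):
    # base case: single-digit factors are multiplied directly
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    xh, xl = divmod(x, 10**m)   # split x into high and low halves
    yh, yl = divmod(y, 10**m)
    low = karatsuba(xl, yl)
    high = karatsuba(xh, yh)
    # the clever algebra: (xh+xl)(yh+yl) - high - low = xh*yl + xl*yh
    cross = karatsuba(xh + xl, yh + yl) - high - low
    return high * 10**(2*m) + cross * 10**m + low

print(karatsuba(1234, 5678))   # 7006652, three recursive products per level instead of four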

0:00 Review of normal multiplication
1:42 The Karatsuba Algorithm for 2x2
3:32 Example of Karatsuba
4:34 Karatsuba for larger numbers
5:45 Complexity of Karatsuba for size 2^k
7:09 Computer architecture and hardware multipliers
8:26 Newer algorithms (Schönhage-Strassen; Harvey and van der Hoeven)

Check out my MATH MERCH line in collaboration with Beautiful Equations

COURSE PLAYLISTS:

OTHER PLAYLISTS:
► Learning Math Series:
► Cool Math Series:

BECOME A MEMBER:

MATH BOOKS I LOVE (affiliate link):

SOCIALS:
COMMENTS:

I remember my professor saying n log n might be possible, and then a few years later Harvey and van der Hoeven proved it! A video on the discrete Fourier transform would be awesome. I want to understand how decomposing functions into waves has applications in things like multiplication, factoring, and discrete logarithms.

Bunnokazooie

The most brilliant part of Karatsuba's algorithm is that he took it up as a challenge to prove Kolmogorov wrong (his conjecture that multiplication can't be done in fewer than n^2 steps) and came up with this brilliant manipulation. Like damn dude, how confident do you have to be to challenge and disprove perhaps the greatest mathematician of the USSR, if not the world? He did all this while he was just a student.

swagatochatterjee

4:10 "you can immediatly tell what 7 times 8 is" me: 64... wait no...

BoBoNUto

Note that in a program, multiplying by a power of two can be just a bit shift. But you have to recognise when it's a power of two.

williamchamberlain
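
A minimal sketch of both halves of that observation (compilers typically do this strength reduction automatically for constant operands):

def is_power_of_two(n):
    # a power of two has exactly one bit set, so n & (n - 1) clears it to zero
    return n > 0 and n & (n - 1) == 0

def multiply_by_pow2(x, n):
    assert is_power_of_two(n)
    return x << (n.bit_length() - 1)   # shift by log2(n) bits

print(multiply_by_pow2(13, 8))   # 104, same as 13 * 8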

Recently, I wrote a program in C implementing the grade-school multiplication algorithm. It works fine for multiplying two numbers up to around 130 digits long, but for numbers longer than 130 digits the result starts to differ slightly from Wolfram Alpha's. I don't know what the issue is. Well, I guess it's time to try the Karatsuba algorithm and see how it goes. As usual, great video, professor!

bhavesh.adhikari
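
A common culprit at that scale is a column sum or carry overflowing a fixed-width C integer. For comparison, a minimal digit-array sketch (not the commenter's code) that carries as it goes, keeping every intermediate value small:

def schoolbook_multiply(a, b):
    # a and b are digit lists, least significant digit first: 123 -> [3, 2, 1]
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10   # keep one digit here...
            carry = total // 10          # ...and push the rest along
        result[i + len(b)] += carry
    return result   # may include leading zeros at the high end

print(schoolbook_multiply([3, 2, 1], [4, 5]))   # [2, 4, 6, 6, 0], i.e. 123 * 54 = 6642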

There is this lovely tension between theory and practicality in maths.

The Greeks: there are infinitely many primes, because any finite list can be multiplied together; add 1 and factorise to get a new prime.
Cryptographers: take 2 large primes and multiply them; bet you can't find the original primes.
Karatsuba: multiplying those primes is hard, let's make it faster.
Schönhage-Strassen, and Harvey and van der Hoeven: for numbers so big we can't in fact work with them, let's make it faster still.

Lovely - now we need a theoretical proof that n log(n) is indeed the limit.

andrewharrison
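
The Greek argument, run as a quick numeric check (the product plus 1 need not itself be prime, but its smallest factor is always a prime missing from the list):

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

n = 2 * 3 * 5 * 7 * 11 * 13 + 1    # 30031 = 59 * 509, not prime itself
print(smallest_prime_factor(n))    # 59, a prime not in the original list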

I love learning how things actually work in mathematics for larger numbers.

The love only increased further during my electrical engineering days, where we had a subject aptly called Advanced Engineering Mathematics (AEM) 1 & 2.

For sure it was more difficult than anything I had done till then, but at the same time it felt so cool to understand it and solve problems.

In the end, AEM was amongst my top 4 highest-scoring subjects. 😅

prashantsingh

Fascinating. Stumbled onto this and I'd never really thought about the rate-determining steps in larger multiplication tasks. Thanks

NowInAus

The fun thing about hardware multiplication is that it often breaks the problem down even further. From what I've heard, modern hardware generally breaks it down to 8x8-bit multiplications and then uses the fastest 8x8-bit multiplication known to mankind: a 65k-entry lookup table. And you even have a separate table for every 8x8-bit multiplication that has to be done. And while you could do a 64x64-bit multiplication (which is implemented in x86_64, though I have no idea how to get the full answer in C) using 27 tables instead of 64 via Karatsuba, they're all used at the same time, so in terms of speed that's just more layers of addition. So in case you were wondering how we manage to keep doubling those transistor counts...

What I think is really interesting, though, is that in grade school you're effectively left to assume that division isn't much slower than multiplication. But in reality, multiplication has all these different ways you can improve it, while with division it's things like "wow, we cut it down by a factor of 4, still 10x slower than multiplication though, because none of it is parallel" for hardware, and "so basically you multiply a bunch of times, because multiplication is just that much faster" for low-complexity algorithms (I checked before posting this: it turns out dividers can achieve the same complexity as multipliers, though once again, this is by turning division into multiple multiplications). It also doesn't help division's case that integer dividers have to provide two almost unrelated answers at once (quotient and remainder), not just one big one.

rubixtheslime
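
The "multiply a bunch of times" approach alluded to here is typically Newton's iteration for the reciprocal; a toy floating-point sketch, assuming the divisor has been scaled into [0.5, 1]:

def newton_reciprocal(d, iters=5):
    # Newton's method on f(x) = 1/x - d; each step uses only multiplication
    x = 48/17 - (32/17) * d       # classic linear initial estimate on [0.5, 1]
    for _ in range(iters):
        x = x * (2 - d * x)       # error roughly squares every iteration
    return x

# 7 / 3: write 3 as 0.75 * 4, so 1/3 = newton_reciprocal(0.75) / 4 (a shift)
print(7 * newton_reciprocal(0.75) / 4)   # ~2.333333...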

The first time I heard about the FFT algorithm for multiplication, it seemed... impossible. What did multiplication have to do with the frequency domain, after all? I didn't look into it further since I didn't need to multiply giant numbers anyway. Then a 3blue1brown video on convolutions in probability used long multiplication as an example to introduce them, and that was a giant AHA! moment. Of course multiplication is just convolution followed by the carry additions! I immediately opened up a Python prompt and had to try it. Mind blown! What a beautifully weird algorithm. Using the FFT to convolve the digits of a number in the frequency domain? Brilliant! Maybe a bit useless for everyday computing, but still!

slembcke
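
The same experiment fits in a few lines (floating-point FFT, so it's only trustworthy for moderate sizes; serious implementations like Schönhage-Strassen use exact number-theoretic transforms instead):

import numpy as np

def fft_multiply(a, b):
    da = [int(ch) for ch in str(a)][::-1]   # digits, least significant first
    db = [int(ch) for ch in str(b)][::-1]
    n = len(da) + len(db)                   # room for the full linear convolution
    conv = np.rint(np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real)
    # the carry step: convolution coefficients can exceed 9
    return sum(int(c) * 10**i for i, c in enumerate(conv))

print(fft_multiply(12345, 67890))   # 838102050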

"We all learn how to do this in high school."
We learned it in elementary school. By high school, most people were getting rusty at it because we could use calculators, though I always lost my calculator and so got better. (I still was able to write this before I "instantly" knew what 7*8 was, though. It's 56, I remember now.) On yhe ither hand, I like the way you explicitly write all 4 1-digit multiplications instead of just writing out 2 multiplications, one for each of the bottom digits, like I learned to do.

Mr.Nichan

The best part about O(n log n) multiplication isn't doing it (as is clear from the fact that we don't actually do it); it's analysing other things that use multiplication and having the multiplication just be O(n log n), not muddying the entire analysis with more complicated terms.

moreon

5:30 This reminds me of the DeepMind matrix multiplication thing, or the one-up to the 2x2 decomposition of larger matrices. That saves up to 33% of the calculation. I forget that guy's name.

readjordan
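
For reference, the 2x2 trick being gestured at is presumably Strassen's (named in the next comment): 7 multiplications instead of 8, applied recursively to blocks of larger matrices. A minimal sketch on plain 2x2 lists:

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]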

also the same Strassen as in the Solovay–Strassen primality test!

johnchessant

Hey Dr., I hope you are doing well! I missed your classes (during the COVID era) and videos very much! Your videos helped me get an A in Calculus. I'm also glad you got over 300k subscribers.
I am also recommending your videos to my first-year friends, because they are extremely helpful.

intereststcentury

I can't believe I've never seen the layout at 1:09, where the "cross" product of the bottom ones digit with the top tens digit is written out in full for the addition step, rather than only carrying a phantom ten onto the top tens digit. This visual makes much more sense of large-number multiplication.

AttackOnTyler
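
That layout in numbers, with example values of my choosing: all four single-digit products are written down before any adding happens.

# 47 * 82 with every partial product kept explicit (no mid-multiply carrying)
partials = [7 * 2,      # ones x ones = 14
            40 * 2,     # tens x ones = 80
            7 * 80,     # ones x tens = 560
            40 * 80]    # tens x tens = 3200
print(sum(partials))    # 3854, which is 47 * 82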

This video was really helpful. I think I know enough to code it up on my own.

gblake

Binary multiplication is a different beast altogether. It's adding shifted (doubled) copies of the multiplicand to a running sum, one for each 1 bit of the multiplier, scanned from the low end.

But I'll definitely be trying this method on binary numbers.

SameAsAnyOtherStranger
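
A minimal sketch of that shift-and-add loop, consuming the multiplier's bits from the low end:

def binary_multiply(x, y):
    total = 0
    while y:
        if y & 1:         # this multiplier bit is a 1...
            total += x    # ...so add the current copy of the multiplicand
        x <<= 1           # double the multiplicand for the next bit
        y >>= 1           # move on to the next multiplier bit
    return total

print(binary_multiply(13, 11))   # 143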

I was taught a slightly different method for multiplying multi-digit numbers, but the point of the video still stands.

Instead of just looking at individual digits and having n^2 numbers that need adding together at the end, I was taught to multiply the entire top number by each digit of the bottom number, with proper carrying, which gave me just n numbers to add together, with n being the number of digits in the bottom number.

harmsc

In high school? I'm sorry, I learned multiplication in grade 2

ilikeapplejuice