AI Just Solved a 53-Year-Old Problem! | AlphaTensor, Explained

In a year when we've seen artificial intelligence move forward in a significant way, AlphaTensor is the most exciting breakthrough yet.

This video briefly introduces AlphaTensor and what it means to us.

📚 My 3 favorite Machine Learning books:

Disclaimer: Some of the links included in this description are affiliate links where I'll earn a small commission if you purchase something. There's no cost to you.
Comments

I really admire the amount of work put into this video: the research, editing and everything else. Kudos.

tane_ma

Without having checked the subscriber count, I would have expected this to be a larger channel, given the way the video is structured and how precise some of the edits are. But that aside, it's interesting how we went from finding neural patterns to solve tasks, to finding neural patterns that optimize finding neural patterns to solve tasks; essentially a factory creating better factories.

I'm aware that in public discussion, topics such as AI are usually mentioned only in a shallow manner, referencing generic face- or speech-recognition networks and showing merely mediocre samples of them, whereas there are so many more intricate creations from the past ten years. While the media is often seen as immediate, it depicts new types of technology as concepts of the future, or reduces their depth, even though they are already defining the present. Maybe that's too metaphorical; I just think very few people are aware of the state of these models or have insight into what is coming soon.

absence

It used to be a manual process. When transistor counts increased, one of the first things computer engineers did was put 10x more logic gates into the multiplication part of the ALU, using many predicates, like filling in a truth table.

Having a known software method to optimize multiplication further is nice, but keep in mind that hardware built for the task is already heavily optimized, with memory bandwidth and latency being the limit.
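A minimal sketch in Python of Strassen's 2x2 scheme, the kind of software method referred to here: it uses 7 scalar multiplications instead of the naive 8, and AlphaTensor searches for schemes of this kind at larger sizes (the function name and test values are illustrative):

```python
# Strassen's 2x2 scheme: 7 scalar multiplications instead of the
# naive 8. The savings compound when applied recursively to blocks.

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (lists of lists) with 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Sanity check against the ordinary product.
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```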

harrytsang
Автор

The real breakthrough here is AI being able to improve the design of a crucial building block of its own implementation.

Axacqk

Always high quality, high entropy content. DeepMind's application of Reinforcement Learning (RL) to solve problems is fascinating, and I agree that it will be interesting to see how they use RL moving forward: what other problems will be "gamified" in the future ...

paulallen

Jesus, AI advancements in the last year have been insane. This is phenomenal.

jackrdye

I am stunned: now subbed, and shocked by how juicy, fun to watch, and insightful you made this video. You brought us through the whole journey in minutes and explained every obstacle.

On top of that, your visuals and audio are fantastic; you produce extremely high-quality content.
Please more :)

SinanAkkoyun

This is the video I’ve been looking for describing AlphaTensor. Concise and to the point about why this is important. Subbed.

ozzyphantom

It's one of those things that are great in theory but only mildly useful in practice. Something like a 40% speedup was achieved by the paper's authors.

michaelnurse

Thank you. I heard about the article when it came out but didn't realise the implications at a practical level. The machine improving its own hardware and software to get better: that's the path to the singularity.

mmuschalik

I loved your hardware interpretation of the results, because today IBM and basically everyone else is trying to make better chips for matrix multiplication tasks, from ASICs to accelerators, and a new algorithm eliminates a huge compute cost 💯💯. Thanks a ton for the video; I too am working with matrices at this moment.

jaykaku

Your channel and your tweets are really helpful in my ML learning process.

The quality of your videos is really high, and the content is incredibly useful for better understanding ML.

Vamos Santiago!

mmenendezg

Dude! Fantastic video. Full marks on making an exciting, clear, attention-grabbing and informative video! Honestly, the production was faultless.

JasonAndrewsUK

For non-square matrix multiplication, (a×b)(b×c), the naive number of scalar multiplications is a*b*c.
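A minimal sketch verifying that count with a naive triple loop (the function name and test matrices are illustrative):

```python
# Naive matrix multiply that also counts scalar multiplications:
# an (a x b) times (b x c) product does exactly a*b*c of them.

def naive_matmul(A, B):
    a, b, c = len(A), len(B), len(B[0])
    count = 0
    C = [[0] * c for _ in range(a)]
    for i in range(a):
        for k in range(b):
            for j in range(c):
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count

# (2 x 3) times (3 x 2): expect 2*3*2 = 12 scalar multiplications.
_, count = naive_matmul([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1], [1, 1]])
assert count == 12
```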

daniellim

Lemme just say...
Sir you are amazing🤩

The hard work, the information presented in utmost detail, and the creative editing make this video very, very engaging.

Hats 👑⛑👒🎩 off to your effort.

Subscribed right away. 😀😄

AleX_

I think I understand the implication: we're talking about an algorithm's ability to adapt, right?

We can see in some neural networks that novel behavior appears after millions upon millions of repetitions, which I guess is similar to adaptation. But an AI's ability to use different techniques on different problems as a fundamental feature, versus as a result of extensive training, might be a big deal.

I really enjoyed the video. You kept me interested and whisked me along your train of thought with very little effort on my part. Perfect storytelling, great explanations of all the necessary topics, and you made it engaging. Awesome work.

williamseipp

Provocative presentation - instant subscriber.
Things you said that caught my attention:
1. Concisely stated the problem, i.e. computation time for matrix multiplication
2. Nice historical summary of approaches to solve the stated problem
3. Clear understanding that "small gains" may yield large rewards. Examples of applying this "algorithm optimization method": solving complex problems found in general relativity and fluid mechanics (non-linear differential tensor and vector systems) or standard quantum mechanics (or string/membrane theory)
4. Your enthusiastic imagination; unafraid to jump ahead to what might be next

Looking forward to what else grabs your attention

johnpayne

In quantum computing, when we introduce noise to the system, the cost of computing the density matrix for the state of the quantum system grows exponentially. Maybe this will help a lot in that field.
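For context on why that cost grows so fast (a sketch assuming the standard density-matrix formalism): an n-qubit density matrix is 2^n x 2^n, and applying a noise (Kraus) channel, rho -> sum over k of K_k rho K_k^dagger, is a sequence of matrix products at that size, which is exactly where faster matrix multiplication would help. The sizes printed below are illustrative.

```python
# An n-qubit density matrix rho is 2^n x 2^n, and each Kraus-operator
# application K rho K^dagger is a pair of matmuls at that dimension.
for n in (4, 8, 16, 24):
    dim = 2 ** n
    print(f"{n:2d} qubits -> density matrix is {dim} x {dim} "
          f"({dim * dim} complex entries)")
```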

vinhkhangtrankhuong

Quickly saw this is an awesome channel and quickly subscribed.

But I'm shocked it only has 12k subscribers.

You deserve a million.

Thanks for sharing.

OlatundeAdegbola

Can AlphaTensor solve x265 decoding on the PlayStation 3? There are no good compilers; even IBM had separate compilers for the PPE and the SPEs in their own Cell BE CPU, and Nvidia kept the architecture details of the G70 confidential. Better yet, can AlphaTensor produce an LLVM backend for the PlayStation 3? Or optimise LLVM backends more generally?

andrewmalcolm