Exponential Progress of AI: Moore's Law, Bitter Lesson, and the Future of Computation



OUTLINE:
0:00 - Overview
0:37 - Bitter Lesson by Rich Sutton
6:55 - Contentions and opposing views
9:10 - Is evolution a part of search, learning, or something else?
10:51 - Bitter Lesson argument summary
11:42 - Moore's Law
13:37 - Global compute capacity
15:43 - Massively parallel computation
16:41 - GPUs and ASICs
17:17 - Quantum computing and neuromorphic computing
19:25 - Neuralink and brain-computer interfaces
21:28 - Deep learning efficiency
22:57 - Open questions for exponential improvement of AI
28:22 - Conclusion

CONNECT:
- Subscribe to this YouTube channel
COMMENTS:


How does Lex put out so much high quality content so quickly? Surely he has already ascended and merged with our future AI overlords

DigiByteGlobalCommunity

@Lex Fridman, your calmness and self-control are very inspiring. The way you collect your thoughts and approach these topics is something many people, including myself, could learn from you.
Thank you for your effort <3

simonmarelis

Thanks for all you do Lex!!! Keep up the amazing work!!!

cs

Thank you for breaking down these complex topics that I could never understand on my own and making them more accessible. Keep up the good work!

DM

Can't wait for a discussion about Neuralink between you and Musk <3

ltdnfjr

I appreciate these videos of yours. It's a high-quality video presentation of a state-of-the-literature scientific paper, but with the added benefit of a Russian romanticizing it and providing food for thought at the end. Thank you!

FaroukHaidar

Thanks Lex, first time I have seen one of these here. Great addition beyond your regular podcast. Keep up the great work.

marzx

Extremely well put presentation. Easy to understand while touching on profound information, not an easy balance to strike.

Chris_tothefuture

Lex, thanks again for the great content. With so many ways to spend one's time, this is definitely a favored choice.

chrisschmidt

Singularity?
For many years now I have thought that, like falling into a sufficiently large black hole, one can pass through the event horizon without even noticing. Nothing bad happens at that moment, but there is no going back from there. There might be a long time between that and meeting one's end at the singularity.
To my mind we passed through the event horizon in 1976, the year I first became aware of microprocessors: the realization that we could actually own computers of our own, that they would be everywhere, and that they would change our lives dramatically in my lifetime.

heater

Thanks, Lex, for breaking down topics which I need to study :P

prostabkundu

Loving these discussions. One of the few channels here that isn't a waste of time to consume.

genuineprofile

Really like this video format -- just you talking about some subject

prestonjensen

Thank you for the time and hard work you put into these great videos.
I appreciate it dearly.
YouTube university!

danielwestereng

Lex, I love this new format for your show! You really know your stuff. I'm an expert chess player (you can look me up) and really enjoyed your interview with Garry Kasparov. I think you should get Erik Kislik on your show. He's an International Master in chess and wrote two popular books on applied logic in chess, which is a pretty rare subject for a chess professional to delve into. One of the books won FIDE Book of the Year. I also found his podcast Logical Chess Thinking on Spotify. Seems like a very high IQ guy. He is one of the top chess coaches in the world (maybe number one now), a computer chess expert, and the only person alive right now who went from beginner to International Master in chess as a self-taught adult. Some kind of super-learner, with an emphasis on clear logical thinking. I'd love to hear you guys discuss computer chess, AI, and applied logic. Would be one of your top 5 interviews, imo.

peterbannon

It seems like there's a big hole in this argument because it doesn't consider the variety of scalings a computer program might have. Say we can complete N computations in a reasonable amount of time (a few hours, a few years, etc., depending on how badly you want to solve the problem). If the interesting problem size is n, an exponential-time algorithm takes ~e^(an) computations. With compute capacity N = e^(bt), we can solve the problem once e^(bt) = e^(an), i.e. at time t = an/b, which scales linearly with the problem size n. If, however, we can find a polynomial-time algorithm costing c*n^p, the problem becomes solvable at time t = ln(c*n^p)/b = (ln c + p*ln n)/b, which grows only logarithmically with n. Equivalently, the solvable problem size grows exponentially with time.

Now consider that a unit of programming work takes a fixed amount of time. Decreasing the constant c is probably not worth your time, but finding a polynomial-time algorithm where there previously wasn't one will help with long-term progress. Unfortunately, it's very uncommon to estimate the runtime of training an AI (even basic scaling laws; computer scientists could learn a lot from the way physicists are able to correctly find scaling laws without rigorous arguments), and people don't often estimate what type of improvement their AI training trick provides.
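The commenter's scaling argument can be sketched numerically. This is a minimal illustration, not anything from the video; the constants a, b, c, and p below are arbitrary assumptions chosen only to show the shapes of the two curves.

```python
import math

# Compute capacity available at time t grows exponentially: N(t) = e^(b*t).
# An exponential-time algorithm costs e^(a*n) operations for problem size n;
# a polynomial-time algorithm costs c*n^p operations.

def max_n_exponential(t, a=1.0, b=1.0):
    """Largest n with e^(a*n) <= e^(b*t): grows linearly in t."""
    return b * t / a

def max_n_polynomial(t, c=1.0, p=3.0, b=1.0):
    """Largest n with c*n^p <= e^(b*t): grows exponentially in t."""
    return (math.exp(b * t) / c) ** (1.0 / p)

# With each doubling of the time budget, the exponential algorithm's
# solvable n merely doubles, while the polynomial algorithm's explodes.
for t in (10, 20, 30):
    print(f"t={t}: exp-time algo n={max_n_exponential(t):.0f}, "
          f"poly-time algo n={max_n_polynomial(t):.1f}")
```

With a = b, the exponential algorithm solves n = t at time t, while the cubic algorithm solves n = e^(t/3), matching the comment's conclusion that a better asymptotic class, not a smaller constant, is what compounds with exponentially growing compute.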

crazedvidmaker

Great video! Thanks a lot for doing these, Lex!!

Cyroavernus

You're an inspiration to be doing podcasts like a beast despite the pandemic!

emilsargsyan

I was totally shocked at 22:00. People working in AI have been doing a fantastic job.

pablo_brianese