3blue1brown
How might LLMs store facts | Chapter 7, Deep Learning (0:22:43)
How large language models work, a visual intro to transformers | Chapter 5, Deep Learning (0:27:14)
Grant Sanderson (3Blue1Brown): Best Way to Learn Math | AI Podcast Clips (0:03:22)
The most unexpected answer to a counting puzzle (0:05:13)
But what is a neural network? | Chapter 1, Deep learning (0:18:40)
Why slicing a cone gives an ellipse (beautiful proof) (0:12:52)
What 'Follow Your Dreams' Misses | Harvey Mudd Commencement Speech 2024 (0:15:30)
3Blue1Brown angrily lashes out at you for being bad at math (POV) (0:00:44)
Attention in transformers, visually explained | Chapter 6, Deep Learning (0:26:10)
This pattern breaks, but for a good reason | Moser's circle problem (0:16:13)
Researchers thought this was a bug (Borwein integrals) (0:17:26)
But what is the Fourier Transform? A visual introduction. (0:20:57)
The essence of calculus (0:17:05)
Bayes theorem, the geometry of changing beliefs (0:15:11)
Group theory, abstraction, and the 196,883-dimensional monster (0:21:58)
Why do colliding blocks compute pi? (0:15:16)
Differential equations, a tourist's guide | DE1 (0:27:16)
Hilbert's Curve: Is infinite math useful? (0:18:18)
Divergence and curl: The language of Maxwell's equations, fluid flow, and more (0:15:42)
Bertrand's Paradox (with 3blue1brown) - Numberphile (0:10:43)
The impossible chessboard puzzle (0:18:42)
Vectors | Chapter 1, Essence of linear algebra (0:09:52)
Math Meme Review with Grant Sanderson (3Blue1Brown) (0:05:32)
The hardest problem on the hardest test (0:11:15)