MIT 6.S191 (2023): The Modern Era of Statistics

MIT Introduction to Deep Learning 6.S191: Lecture 9
The Modern Era of Statistics
Lecturer: Ramin Hasani
2023 Edition

Lecture Outline - coming soon!

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
Comments

This presentation was a delight to watch!
The ability of the NCP architecture to learn causal relationships and focus on what's important is impressive!

mohammadamin

This approach to NNs looks very promising to me. I wish the best of luck to Dr. Hasani and his colleagues.

antonkot

I found the concepts of double descent, Kolmogorov functions, robustness, and effective dimensionality interesting (I think this is why dimensionality reduction techniques can be beneficial).

JoaoVitorBRgomes

The talk was fantastic, much like the other sessions in the series! I'd pinpoint this lecture as the point where I begin to sense a challenge in keeping pace. My takeaway was a general sense that LTC is leaning towards a more innovative strategy for enhancement rather than focusing solely on scaling and fine-tuning.

liu

So, 19 neurons as opposed to a much larger number of neurons. But is the amount of computation genuinely reduced? These liquid neurons seem much more complex; so, are there really savings in complexity/computation?

tantzer

Great lecture, thank you for sharing! A gem on the horizon for all of us studying epistemology and ontology.

aninvisibleneophyte

Thank you! Great lecture. Very respectfully, I've thought about dynamical systems and neural networks, but not specifically with overparametrization, biologically inspired liquid networks, and dynamic causal models. Great examples of learning causal relationships.

eddiejennings

This course has helped me learn so much! Thank you! This lecture in particular was amazing!

superman

Yeah, we need to explore new architectures for neural networks. Today's architectures mainly depend on backprop (one way to learn), and this method alone is not right. Learning from the forward pass and learning from the result (the backward pass) should be the two main factors of the learning process.

peki_ooooooo

It is not that n equations require n unknowns - it is that solving for n unknowns requires n equations. Why would equations require unknowns?

venkatasivagabbita

Excellent information on the mathematical structure of NNs. I appreciate his inspiring dedication 🙏

umachandran

Interesting view regarding the Kolmogorov-Arnold representation. His buddies at MIT just released the KAN paper; I wonder how this idea evolves.

sheevys

Hmm. I've thought about the reason for that for a while, and my conclusion is that this "double descent" occurs when applying CNNs to image data, where the convolution effectively produces more samples than the original set (with some redundancy).

liujay

I think neural activation functions should also be space-dependent.

Both space- and time-dependent, I think.

ELKADUSUNhalifesi

Hi, thanks for the great videos. Quick question: how do you visualize the activated neurons as at 41:09 in this video? Could you please share whether there is a package or software to do so? Thanks a bunch! Really enjoyed your lectures.

mojganmadadi

This guy starts by saying ML isn't an ad-hoc field, but halfway through he is introducing another ad-hoc architecture 🤣

Adsgjdkcis

I wonder how you got the idea to arrange "The Modern Era of Statistics" 😊

Raymond_Cooper

Can we get a source for the Reed et al. DeepMind paper that is mentioned around 10:00? I can't find the source.

daniel-mika

High-dimensional statistics is the real "modern" era of statistics.

prod.kashkari

The difficulty of this lecture escalates quickly.

creativeuser