Machine Learning for Physicists (Lecture 1)

Lecture 1: Structure of a neural network.

Contents: Introduction (the power of deep neural networks in applications), brief discussion of the lecture outline, structure of a neural network and information processing steps, very brief introduction to Python and Jupyter, implementing a deep neural network efficiently in basic Python (without any additional packages like TensorFlow), illustration: complicated functions generated by deep neural networks
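A minimal sketch of the forward pass discussed in the lecture, assuming numpy is available for the array operations and sigmoid activations; the layer sizes and variable names below are illustrative, not the lecture's exact code:

import numpy as np

def sigmoid(z):
    # elementwise nonlinearity applied after each linear step
    return 1.0 / (1.0 + np.exp(-z))

def forward(y, weights, biases):
    # one linear transformation plus nonlinearity per layer
    for w, b in zip(weights, biases):
        y = sigmoid(np.dot(y, w) + b)
    return y

# illustrative network: 2 inputs, two hidden layers of 30 neurons, 1 output
sizes = [2, 30, 30, 1]
weights = [np.random.uniform(-1, 1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.random.uniform(-1, 1, n) for n in sizes[1:]]

print(forward(np.array([0.3, -0.7]), weights, biases))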

Lecture series by Florian Marquardt: Introduction to deep learning for physicists. The whole series covers: Backpropagation, convolutional networks, autoencoders, recurrent networks, Boltzmann machines, reinforcement learning, and more.

Lectures given in 2019, tutorials delivered in 2020 online. Friedrich-Alexander Universität Erlangen-Nürnberg, Germany.

Comments

This is a very fine introductory lecture on deep learning. Really looking forward to watching the next ones. Thanks a lot for sharing, Professor!

sebastienmartin

Your English is very pleasing. Fellow Italian physicist here, beginning my PhD studies in astrophysical techniques. I will benefit from these videos for sure, thank you.

Cannongabang

Thanks for uploading this content, and for everything else on your channel.

trigocuantico

Thanks a lot, prof.
I am interested in this field: I am an applied physics undergrad student, currently studying ML, and I was looking to integrate these two fields with each other.

ahmedsaeed

1:16:55 Professor, I have one question: you said the vertical axis corresponds to y2 and the horizontal axis corresponds to y1. In my opinion, the code fills the array in row-major order in the double for loop, so the vertical axis should correspond to y1 and the horizontal axis to y2. Am I right? Please advise.

flftqgo
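For reference, a small sketch of the indexing convention this question is about, assuming the picture is drawn with matplotlib's imshow (the names y1 and y2 are taken from the comment, not from the lecture code): imshow puts the first (row) index on the vertical axis and the second (column) index on the horizontal axis.

import numpy as np
import matplotlib.pyplot as plt

N = 50
out = np.zeros((N, N))
for j1 in range(N):          # outer loop: first (row) index, coordinate y1
    for j2 in range(N):      # inner loop: second (column) index, coordinate y2
        y1 = -1 + 2 * j1 / N
        y2 = -1 + 2 * j2 / N
        out[j1, j2] = y1 - y2    # simple asymmetric function, to make the orientation visible

# rows (j1 / y1) run along the vertical axis, columns (j2 / y2) along the horizontal axis
plt.imshow(out, origin='lower')
plt.colorbar()
plt.show()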

ReLU is a switch: f(x)=x means connect, f(x)=0 means disconnect. A dot product of a number of dot products is still a dot product, so what is actually happening in a ReLU network? Do you see?
Also, fast transforms like the FFT and the fast Walsh-Hadamard transform can be viewed as fixed systems of dot products that are obviously very quick to compute. You can think of ways to include them in neural networks. I think there is a blog, AI624 maybe.

nguyenngocly
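The "switch" picture in the previous comment can be made concrete in a few lines. Below is a small sketch, assuming a two-layer ReLU network with random weights and no biases (all names are illustrative): for a fixed input, the ReLU pattern only decides which rows are connected, so the whole network collapses to a single linear map for that input.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 3))   # first-layer weights
W2 = rng.normal(size=(1, 10))   # second-layer weights
x = rng.normal(size=3)

# ordinary forward pass with ReLU
h = np.maximum(W1 @ x, 0.0)
y = W2 @ h

# the same result as one combined dot product: the ReLU acts as a switch
# that keeps (1) or removes (0) each row of W1 for this particular input
mask = (W1 @ x > 0).astype(float)
W_effective = W2 @ (mask[:, None] * W1)
print(np.allclose(y, W_effective @ x))   # True: for this input the network is a single linear map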