Lesson 8 (2019) - Deep Learning from the Foundations

In this course, we will learn to implement much of what is inside fastai and PyTorch, and learn things we could use to build our own deep learning libraries. Along the way, we will learn to implement papers, an important skill to master when making state-of-the-art models.

In today's lesson we'll discuss the purpose of this course, which is, in some ways, the opposite of part 1. This time, we're not learning practical things that we will use right away, but foundations that we can build on. This is particularly important nowadays because this field is moving so fast. We'll also talk about why the last two lessons of this course are about Swift, not Python (Chris Lattner, the original creator of Swift, and now lead of Swift for TensorFlow, will be joining Jeremy to co-teach these lessons).

We'll also discuss the structure of this course, which is extremely "bottom up" (whereas part 1 was extremely "top down"). We'll start from the lowest-level foundations (matrix multiplication) and gradually build back up to state-of-the-art models.

We'll gradually refactor and accelerate our first pure-Python matrix multiplication, and in the process will learn about broadcasting and Einstein summation. We'll then use this to create a basic neural net forward pass, and in the process will start looking at how neural networks are initialized (a topic we'll be going into in great depth in the coming lessons).
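
As a taste of that refactoring, here is a minimal sketch of the three stages; the function names and shapes are illustrative, not the notebook's exact code:

    import torch

    def matmul_loops(a, b):
        # Baseline: triple loop over every output element (very slow)
        ar, ac = a.shape
        br, bc = b.shape
        assert ac == br
        c = torch.zeros(ar, bc)
        for i in range(ar):
            for j in range(bc):
                for k in range(ac):
                    c[i, j] += a[i, k] * b[k, j]
        return c

    def matmul_broadcast(a, b):
        # Broadcasting: one row of a, reshaped to (ac, 1), multiplies all of b at once
        c = torch.zeros(a.shape[0], b.shape[1])
        for i in range(a.shape[0]):
            c[i] = (a[i, :, None] * b).sum(dim=0)
        return c

    def matmul_einsum(a, b):
        # Einstein summation: sum over the shared index k
        return torch.einsum('ik,kj->ij', a, b)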

Then we will implement the backward pass, including a brief refresher on the chain rule (which is really all the backward pass is). We'll then refactor the backward pass to make it more flexible and concise, and finally we'll see how this translates to how PyTorch actually works.
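
For a sense of what that looks like, here is a sketch of a hand-coded forward and backward pass for a tiny two-layer net with an MSE loss, in the spirit of the lesson (simplified, not the notebook's exact code; gradients are stashed in a .g attribute on each tensor):

    import torch

    def forward_and_backward(inp, targ, w1, b1, w2, b2):
        # forward pass
        l1 = inp @ w1 + b1           # first linear layer
        l2 = l1.clamp_min(0.)        # relu
        out = l2 @ w2 + b2           # second linear layer, one output per row
        diff = out.squeeze(-1) - targ
        loss = diff.pow(2).mean()    # mse

        # backward pass: the chain rule, applied layer by layer in reverse
        out.g = 2. * diff.unsqueeze(-1) / inp.shape[0]  # d loss / d out
        l2.g = out.g @ w2.t()                # through the second linear layer
        w2.g = l2.t() @ out.g
        b2.g = out.g.sum(0)
        l1.g = l2.g * (l1 > 0).float()       # through the relu
        w1.g = inp.t() @ l1.g
        b1.g = l1.g.sum(0)
        return loss
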
Comments

Waited more than 4 months to watch this series...

mathaka-ekathuwa

This is gold. Thank you, Jeremy Howard and fast.ai.

buh

That was great! I waited a few months for this, and the first lesson already made things so much clearer. It's important to understand concepts by building them yourself. Thank you, great lesson!

michaelmuller

This is going to be an exciting journey!! Thank you Jeremy.

ReneeSLiu-zxtj

1:43:00 On convolutional layers' initialisation

palaache

Great lesson, and a clear explanation of the basic concepts.

AbdulQayyum-kdgf

Hi Jeremy, could you please create a playlist for part 2 of this course?

antoinemercier

In which lecture does he teach seq2seq, attention, transformers, etc.?

RetainTheDark

Hello, our local meetup is waiting for Part 2 of the 2020 version (and fastbook), and it's been 2 years since the 2019 lesson was posted. Still a very good start to the foundations. Sadly, the Swift for TensorFlow project is dead, and Chris Lattner seems to have left the group.

datasciyinfo

Trying to run the code at 1:04:26: a[i, None] will yield a different shape than a[i].unsqueeze(-1) and won't work. Be aware if you're trying to run it.
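
To make the shape mismatch concrete, a quick check (assuming a 2-D tensor a as in the lesson's example; the sizes here are made up):

    import torch

    a = torch.randn(5, 3)            # illustrative 2-D tensor
    i = 0
    a[i, None].shape                 # torch.Size([1, 3]): None adds a leading axis
    a[i].unsqueeze(-1).shape         # torch.Size([3, 1]): new trailing axis
    a[i, :, None].shape              # torch.Size([3, 1]): what the broadcast matmul needs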

dalissonfigueiredo

Putting it here for my reference:
Broadcasting 52:11

kshitijpatil

Will this part two happen again? I want to know whether to wait or not; there's a lot going on with transformers and GANs, and I want to make sure I look at the latest available material.

alanfortunysicart

Can I start this series after Andrew Ng's deeplearning.ai course?

aryanchauhan

1:49:48
Doesn't MSE give us the loss?

ypred = lin2(relu(lin1(x)))
loss = mse(ypred, y) # mse(ypred, one_hot(y))

Am I awful? How am I awful?
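
(For context: the lesson uses MSE against the raw label as a deliberate simplification so the backward pass stays easy to hand-code; cross-entropy is what you'd actually use for classification. A sketch of the distinction, with illustrative shapes and names:)

    import torch
    import torch.nn.functional as F

    n, n_classes = 64, 10
    y = torch.randint(0, n_classes, (n,))

    out = torch.randn(n, 1)                                  # single output, as in the toy net
    loss_mse = (out.squeeze(-1) - y.float()).pow(2).mean()   # the stand-in loss

    logits = torch.randn(n, n_classes)                       # one output per class
    loss_ce = F.cross_entropy(logits, y)                     # the proper classification loss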

jonatani

Anyone know what the @ is within some of the functions?

MasayoMusic

1:45:11
How can subtracting the same number from every point change their variance? Variance is independent of the mean, isn't it?
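
(For reference, the shift-invariance property the comment appeals to, for a constant c:)

    Var(X - c) = E[(X - c - E[X - c])^2] = E[(X - E[X])^2] = Var(X)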

jonatani

Can anyone help: can I start deep learning from this playlist, or do I need to first check the part 1 mentioned in the video?

smohanku

04:23 Complete Machine Learning summed up in 15 seconds.

akshaytiwari