Lesson 1: Deep Learning 2018

Comments

I am so glad I found this course, this is the way it should be taught. Everywhere.

DanielRamBeats

Hey friends, I recommend using Google Colab (a free online Jupyter notebook environment). It has a free GPU built in, so it's a good alternative to Crestle! Hope this helps.

MentorMelv

00:00:01 Welcome to Part 1, Version 2 of “Practical Deep Learning for Coders”,
Check the Fastai community for help with setting up your system at "forums.fast.ai"

00:02:11 The “Top-Down” approach to study, vs the “Bottom-Up”,
Why you want an NVIDIA GPU (Graphics Processing Unit, i.e. a video card) for Deep Learning

00:04:11 Use crestle.com if you don't have a PC with a GPU.

00:06:11 Use paperspace.com instead of crestle.com for faster and cheaper GPU computing. Technical hints for making it work with a Jupyter Notebook.

00:12:30 Start with Jupyter Notebook lesson1.ipynb ‘Dogs vs Cats’

00:20:20 Our first model: quick start.
Running our first Deep Learning model with the 'resnet34' architecture: epochs, and accuracy on the validation set.
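
For reference, a minimal sketch of that quick-start code, based on the old fastai (v0.7) API the 2018 notebooks use (PATH and sz would be defined near the top of the notebook):

    # fastai v0.7 imports, as used in the 2018 course notebooks
    from fastai.conv_learner import *

    PATH = "data/dogscats/"  # folder containing train/ and valid/ subfolders
    sz = 224                 # images are resized to 224x224 for resnet34

    # build the data object from the folder layout, then a pretrained learner
    data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
    learn = ConvLearner.pretrained(resnet34, data, precompute=True)
    learn.fit(0.01, 3)       # learning rate 0.01, 3 epochs; prints validation accuracy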

00:24:11 “Analyzing results: looking at pictures” in lesson1.ipynb

00:30:45 Revisiting Jeremy & Rachel's "Top-Down vs Bottom-Up" teaching philosophy, in detail.

00:33:45 Explaining the “Course Structure” of Fastai, with a slide showing its 8 steps.
Looking at Computer Vision, then Structured Data (or Time Series) with the Kaggle Rossmann Grocery Sales competition, then NLP (Natural Language Processing), then Collaborative Filtering for Recommendation Systems, then Computer Vision again with ResNet.

00:44:11 What is Deep Learning? A kind of Machine Learning.

00:49:11 The Universal Approximation Theorem, and examples of how Google uses Deep Learning.

00:58:11 More examples of Deep Learning in use, as shown in the slides from Jeremy's ML1 (Machine Learning 1) course.
What is actually going on in a Deep Learning model, with a convolutional network.

01:02:11 Adding a non-linear layer to our model: sigmoid or ReLU (Rectified Linear Unit), and SGD (Stochastic Gradient Descent)
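
To make those pieces concrete, a tiny NumPy sketch (names and numbers are illustrative, not taken from the lesson): a linear unit followed by a non-linearity, plus one SGD step:

    import numpy as np

    def relu(x):     # Rectified Linear Unit: zero out negative values
        return np.maximum(0, x)

    def sigmoid(x):  # squashes any real number into (0, 1)
        return 1 / (1 + np.exp(-x))

    x = np.array([1.0, -2.0, 3.0])  # input
    w = np.array([0.5, 0.1, -0.3])  # weights of one linear unit
    a = relu(w @ x)                 # linear combination, then non-linearity

    # one SGD step: nudge a weight against the gradient of the loss
    lr = 0.01   # learning rate
    grad = 2.0  # stand-in for the gradient of the loss w.r.t. w[0]
    w[0] -= lr * grad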

01:08:20 The paper "Visualizing and Understanding Convolutional Networks", its implementation in 'lesson1.ipynb', and 'cyclical learning rates' in the Fastai library via "lr_find", the learning rate finder.
Why it starts training a model but stops before reaching 100%: the learning rate finder deliberately ends the run early.
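
In the v0.7 library this looks roughly as follows; the finder aborts the run once the loss starts shooting up, which is why training stops before 100%:

    learn.lr_find()        # short training run with a steadily increasing learning rate
    learn.sched.plot_lr()  # plot learning rate vs. iteration
    learn.sched.plot()     # plot loss vs. learning rate; pick a rate where loss still falls steeply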

01:21:30 Why you need the Numpy and Pandas libraries with Jupyter Notebook: hit 'Tab' for autocompletion, or 'Shift-Tab' once, twice, or three times to bring up the documentation for the code.
Enter '?' before a function to see its documentation, or '??' to look at its source code in detail.
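
For example, in a notebook cell:

    learn.fit?   # '?' shows the signature and docstring
    learn.fit??  # '??' also shows the source code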

01:24:40 Using the 'H' shortcut in Jupyter Notebook to see the keyboard shortcuts.

01:25:40 Don't forget to turn off your session in Crestle or Paperspace, or you'll keep being charged.

RenaissanceAGI

I don't understand at all what you are doing and I love it! Big thanks for making these courses. That's definitely better than starting slow.

kox

Amazing course and top-down approach. Get your hands dirty first, then refine and understand better.

ronaldokun

Thanks to Jeremy Howard for making this great lecture available for free :)
I'm looking forward to the lectures; they'll be a great resource for getting better at deep learning.

bayesianlee

I remember it was very useful to have an index of the contents; feel free to correct and improve it.

Introduction, requirements, setup:
Notebook "Dog vs Cat":
Course structure and contents:
Machine learning and deep learning:
Neural network, Gradient Descent, GPUs:
Examples, CNN, Activations:
Loss, Derivative, SGD:
Convolution Intuition and Visualization:
Learning rate, Cyclic LR, Find LR:
Train a new model, Jupyter and VM howto:

CesareMontresor

I have zero doubt about this man's command of the subject matter, but I'm initially a little disappointed with the organization of this course.
The video player on the site is too small, so you end up with a separate tab for YouTube, video notes on the site, and additional important notes in a separate GitHub repo, and you have to constantly check all three because the video content is apparently liable to be out of date at any moment. Tedious, to say the least.

hunterphillips

@16:18 Did Jeremy mean to say "Practical Deep Learning" instead of "Practical Machine Learning" course? Was he referring to v1 of the MOOC?
I couldn't find any course by fast.ai entitled "Practical Machine Learning".

HarounShihab

I prefer watching the videos here, because I can scale the player to half my screen and see the video while coding. On the fastai homepage, the only choice is between too small and full screen.

soren-

I would happily watch these on the fast.ai page if the video could be made larger (like on YouTube) without having to go into fullscreen mode.
Amazing videos/course, btw!

wfpnknw

25:05 I believe the first column is cat and the second column is dog, according to data.classes, not as Jeremy says there?!

yourxylitol

Highly recommended! It even touches on a new paper on super-convergence by Leslie Smith.

bingeltube

I'm dying to know, @4:44, what are the two data science tools that were rated more important than Jupyter? :D

smithkurta

Hi, I'm really confused about the courses. The main page for this course says it is called "Practical Deep Learning For Coders, Part 1", but at ~16:25 Jeremy talks about checking out a practical machine learning course. Is that an entirely different course? I don't see anything about it on fast.ai or course.fast.ai.

TheArtificiallyIntelligent

I can't thank you enough for this course!

robosergTV

49:29 I think it should be s(x) = 1 / (1 + e^-x)
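
A quick sanity check of that formula (the standard logistic sigmoid) in Python:

    import math

    def s(x):
        return 1 / (1 + math.exp(-x))

    print(s(0))   # 0.5, the sigmoid's midpoint
    print(s(10))  # ~0.99995, saturating toward 1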

ARainsby

Hi :)
First, kudos for the amazing DL videos.
I have one question: will the 'Practical Machine Learning' video series be published online?

vyjgfyh

Oh, it's up :3 Happy New Year and thank you so much, JHoward

lethanh-svrd

I think there is an error in the gradient descent update function at 1:06:40. It should be new_x = x - derivative (with the minus). Otherwise, with the +, it moves away from the minimum.
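
A minimal sketch of that corrected update (variable names are illustrative; in practice the derivative is also scaled by a learning rate):

    lr = 0.1
    x = 5.0                      # arbitrary starting point
    for _ in range(50):
        derivative = 2 * x       # gradient of f(x) = x**2
        x = x - lr * derivative  # minus sign: step downhill, toward the minimum
    print(x)                     # close to 0, the minimum of x**2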

franfdk