Tariq Rashid - A Gentle Introduction to Neural Networks and making your own with Python

PyData London 2016

Neural networks are not only a powerful data science tool; they're also at the heart of recent breakthroughs in deep learning and artificial intelligence.

This talk, designed for complete beginners and anyone non-technical, will introduce the history and ideas behind neural networks. You won't need anything more than basic school maths. We'll gently build our own neural network in Python too.

Ideas:
- the search for intelligent machines
- what's easy for us isn't easy for computers

DIY:
- MNIST dataset
- simple 3-layer network
- matrix multiplication to aid calculations
- preprocessing, priming weights
- 95% accuracy with very simple code!
- improvements lead to 98% accuracy!
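
Below is a minimal sketch of the kind of 3-layer network the talk builds, assuming NumPy, sigmoid activations, and MNIST-style inputs; the class and variable names are illustrative and not taken from the talk's own code.

import numpy as np

# A minimal 3-layer network: input -> hidden -> output.
# Weights are primed with small random values scaled by layer size,
# and every layer-to-layer step is a plain matrix multiplication.
class SimpleNetwork:
    def __init__(self, n_input, n_hidden, n_output, lr=0.1):
        self.lr = lr
        # weight matrices: rows = next layer, columns = previous layer
        self.w_ih = np.random.normal(0.0, n_input ** -0.5, (n_hidden, n_input))
        self.w_ho = np.random.normal(0.0, n_hidden ** -0.5, (n_output, n_hidden))
        self.sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    def query(self, inputs):
        # forward pass: matrix multiply, then squash with the sigmoid
        inputs = np.array(inputs, ndmin=2).T
        hidden = self.sigmoid(self.w_ih @ inputs)
        return self.sigmoid(self.w_ho @ hidden)

    def train(self, inputs, targets):
        inputs = np.array(inputs, ndmin=2).T
        targets = np.array(targets, ndmin=2).T
        # forward pass
        hidden = self.sigmoid(self.w_ih @ inputs)
        outputs = self.sigmoid(self.w_ho @ hidden)
        # backward pass: split the output error back along the weights
        output_errors = targets - outputs
        hidden_errors = self.w_ho.T @ output_errors
        # gradient-descent updates use the sigmoid derivative out * (1 - out)
        self.w_ho += self.lr * (output_errors * outputs * (1 - outputs)) @ hidden.T
        self.w_ih += self.lr * (hidden_errors * hidden * (1 - hidden)) @ inputs.T

# Example use on one MNIST-style record (784 pixels, 10 classes).
# Pixels are rescaled into (0.01, 1.0) and targets set to 0.01/0.99,
# the sort of preprocessing the talk mentions.
net = SimpleNetwork(784, 100, 10)
pixels = np.random.randint(0, 256, 784)         # stand-in for a real image
inputs = (pixels / 255.0 * 0.99) + 0.01
targets = np.full(10, 0.01)
targets[3] = 0.99                               # the record's label, e.g. "3"
net.train(inputs, targets)
print(net.query(inputs).T)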

Comments

Thanks for the simple explanation of how a neural network functions.

sundareswaransenthilvel

Very nice intro to NN for non-programmers.

ravisawhney

Awesome talk! I finally understand how to build a neural network in Python!

invinity

Hi Tariq, this is simply excellent. I am also reading your book now; it is very good.
I have a question. The example you have used here is a classification problem. Do you have an example for a regression problem, where I have some continuous X variables and some Boolean X variables, predicting a continuous Y variable?

KayYesYouTuber
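
Not Tariq's answer, but a sketch of the usual adaptation for regression, assuming the same kind of NumPy network as above: Boolean inputs are fed in as 0/1 alongside the scaled continuous ones, the hidden layer keeps its sigmoid, and the single output node is left linear so it can produce any continuous value. Names and numbers here are illustrative.

import numpy as np

# Sketch of a 3-layer regressor: sigmoid hidden layer, linear output node.
rng = np.random.default_rng(0)
n_in, n_hidden, lr = 4, 8, 0.01                 # e.g. 2 continuous + 2 Boolean inputs
w_ih = rng.normal(0.0, n_in ** -0.5, (n_hidden, n_in))
w_ho = rng.normal(0.0, n_hidden ** -0.5, (1, n_hidden))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_step(x, y):
    global w_ih, w_ho
    x = x.reshape(-1, 1)
    hidden = sigmoid(w_ih @ x)
    out = w_ho @ hidden                         # linear output: no sigmoid here
    err = y - out                               # squared-error gradient direction
    hidden_err = w_ho.T @ err
    w_ho += lr * err @ hidden.T
    w_ih += lr * (hidden_err * hidden * (1 - hidden)) @ x.T

# two scaled continuous features plus two Boolean flags, continuous target
x = np.array([0.3, 0.7, 1.0, 0.0])
train_step(x, y=2.5)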

At 10:02 Tariq says "...it's short and long, that's a ladybird." Of course he meant caterpillar. Sorry to be so pedantic. The video is excellent; I learnt a lot.

avonstar

On page 48 of Make Your Own Neural Network in the diagram of the NN, why does neuron 2 in layer 1 connect with all 3 neurons in layer 2 and the other neurons in layer 1 connect with only 2 neurons in layer 2? He uses the same diagram in his presentation.

kloro

Hello Tariq, thank you very much for your really good book.
I already bought it and studied it. But there is something that I don't understand.

And I think it may be a bug? Hopefully I am wrong and you can give me the correct answer.

I thought the sigmoid function is used in the backpropagation.
But when I have a look at the slides (video 40:59), the sigmoid is processed during the inbound/forward propagation (from the input layer to the output layer).
Can this be?

I appreciate your answer.
Thanks and regards, Mario

mariomueller
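
A general note for readers with the same question, not Tariq's own reply: in the standard setup the sigmoid squashes the signals during the forward pass (input layer to output layer), and backpropagation then uses its derivative, out * (1 - out), when computing the weight updates. A minimal one-node illustration with made-up numbers:

import numpy as np

# Forward pass: the sigmoid is applied to the combined signal.
# Backward pass: the sigmoid's DERIVATIVE, out * (1 - out), appears
# in the weight update, not the sigmoid itself.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

w = np.array([[0.5, -0.2]])                     # 1 output node, 2 inputs
x = np.array([[0.9], [0.1]])
target = np.array([[1.0]])
lr = 0.3

out = sigmoid(w @ x)                            # forward propagation
error = target - out                            # backpropagated error
w += lr * (error * out * (1 - out)) @ x.T       # sigmoid derivative used here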

What type of neural network did we just watch being made?

Jables