Artificial Neural Networks explained

In this video, we explain the concept of artificial neural networks and show how to create one (specifically, a multilayer perceptron or MLP) in code with Keras.

🕒🦎 VIDEO SECTIONS 🦎🕒

00:30 Help deeplizard add video timestamps - See example in the description
04:15 Collective Intelligence and the DEEPLIZARD HIVEMIND

💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥

👋 Hey, we're Chris and Mandy, the creators of deeplizard!
👀 CHECK OUT OUR VLOG:

👉 Check out the blog post and other resources for this video:

💻 DOWNLOAD ACCESS TO CODE FILES
🤖 Available for members of the deeplizard hivemind:

🧠 Support collective intelligence, join the deeplizard hivemind:

🤜 Support collective intelligence, create a quiz question for this video:

🚀 Boost collective intelligence by sharing this video on social media!

❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
Tammy
Prash
Zach Wimpee

👀 Follow deeplizard:

🎓 Deep Learning with deeplizard:

🎓 Other Courses:

🛒 Check out products deeplizard recommends on Amazon:

📕 Get a FREE 30-day Audible trial and 2 FREE audio books using deeplizard's link:

🎵 deeplizard uses music by Kevin MacLeod

❤️ Please use the knowledge gained from deeplizard content for good, not evil.
Comments

I have never been disappointed by any video that you have posted so far! You have a gift for breaking down complicated topics into simple explanations that make it easier for beginners to learn. Absolutely love the channel and the videos.

eishitayadav

Here is the code from the video, thank me later :3

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(10,), activation='relu'),
    Dense(2, activation='softmax'),
])

mandarjoshi

In case it is confusing to anyone, it's worth saying that the Keras model being created has 3 layers if you consider the input layer to be the first layer, but Keras Sequential doesn't require you to explicitly specify that first layer, just its shape, passed as the input_shape parameter to the second layer (the first Dense layer).

BrettClimb
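
To make the implicit first layer from the comment above visible, here is a minimal sketch (assuming TensorFlow 2.x / tf.keras) of the same model written with an explicit Input layer; model.summary() then shows that only the two Dense layers carry parameters (352 and 66), since the input layer just declares the shape.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense

model = Sequential([
    Input(shape=(10,)),             # input layer: declares the shape only, has no weights
    Dense(32, activation='relu'),   # hidden layer: 10*32 + 32 = 352 parameters
    Dense(2, activation='softmax'), # output layer: 32*2 + 2 = 66 parameters
])
model.summary()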

Artificial? More like "Artistic and beneficial!" Thanks again for sharing such informative and well-made videos.

PunmasterSTP

Clear and short, understandable for a beginner!

Megan-bhto

Best tutorial so far. Simple and short.

MuhammadAli-hurz

These videos are fantastic!! I'll be taking an ML course next year, but honestly I feel like anything I learn at my school about ML is going to come second to these videos when it comes to quality. Keep up the great work :)

lukefernandez

OMG, great tutorials, they really blow my mind!

ankitaharwal

The narration is precise.
Thanks a bunch.

supsshares

Your videos are helping me a lot! Thanks a ton!!! ❤️❤️

gourabsarker

Hello! I find your videos very helpful, and thank you for making them! Your talking speed is fast, and it would be friendlier for me if you could pause a little (like half a second) between topics. Thank you again!

weiqingzhang

Hello, when I typed out the code from the video:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(10,), activation='relu'),
    Dense(2, activation='softmax'),
])

I kept on getting the following error:
TypeError: descriptor '_fields' for 'OpDef' objects doesn't apply to 'OpDef' object

Do you know what this error is and how to fix it? Thank you.

merazmamun

In the Quiz section (on the deeplizard website), MCQ question 1 (question by Chris):

In a neural network, a particular node's output depends on:
- the weighted sum of inputs
- the product of inputs
- the sum of inputs
- the sum of weighted connections

Although I did choose the correct answer (the weighted sum of inputs), I don't think this is applicable to the nodes of the input layer, right? Because the output of an input-layer node is simply the input data feature, which the node transmits to the nodes of the 2nd layer (the hidden layer).

ssffyy
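
As a worked illustration of that quiz answer, here is a minimal numeric sketch (the values are made up) of what a single hidden-layer node computes: an activation applied to the weighted sum of its inputs plus a bias. Input-layer nodes, as noted above, simply pass their feature value through.

import numpy as np

x = np.array([0.5, -1.0, 2.0])   # outputs of the previous layer (or raw input features)
w = np.array([0.1, 0.4, -0.2])   # weights on the node's incoming connections
b = 0.05                         # bias term

z = np.dot(w, x) + b             # weighted sum of inputs plus bias: about -0.7
a = max(0.0, z)                  # ReLU activation -> the node's output: 0.0
print(z, a)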

Is the second layer in this neural network the output layer?

GauravSingh-kuxy

Congrats! Amazing videos! Where are the sources (Jupyter notebook)? Your GitHub is completely empty. Thanks

qzwwzt

The import statement is slightly different with the latest versions of TensorFlow. Instead of from keras.models, it should be from tensorflow.keras.models, since Keras is now contained inside TensorFlow itself.

tanuj
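
For reference, a minimal sketch of the updated imports described in the comment above, for TensorFlow 2.x where Keras ships inside TensorFlow; the rest of the model code from the video is unchanged.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(10,), activation='relu'),
    Dense(2, activation='softmax'),
])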

Hi, good video. You should put affiliate links at the top so you can earn more money!! Greetings from Germany

Muenchen

It's interesting that you use the keyword argument input_shape=(10,) for the first layer, but no input_shape is defined for the second layer. Is the idea that, in a sequence of layers, the first hidden layer must specify the shape of the original input, but subsequent layers can infer their input shape from the previously defined layers? (e.g., the second hidden layer in this video can infer input_shape=(32,) because that was the output size of the previous layer?)

Stefan-hlfe
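
One way to confirm that inference (a sketch, assuming TensorFlow 2.x / tf.keras): the second Dense layer's kernel ends up with shape (32, 2), i.e. it inferred 32 inputs from the 32 units of the previous layer without being given an input_shape of its own.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(32, input_shape=(10,), activation='relu'),  # input shape given explicitly
    Dense(2, activation='softmax'),                    # input shape (32,) is inferred
])

print(model.layers[0].get_weights()[0].shape)  # (10, 32)
print(model.layers[1].get_weights()[0].shape)  # (32, 2)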

Hello, where do I get the document you are using in this video for reference?

mjshaheed

Is there any procedure for choosing the number of neurons in a Dense layer? In the first Dense layer, 32 neurons were specified. Could you please explain why you chose 32?

amsainju