Understanding AI - Lesson 2 / 15: Hidden Layers


Dive deeper into the world of Neural Networks with Lesson 2 of the "Understanding AI" course! In this session, we explore how a simple genetic algorithm helps optimize network parameters. We'll also see the power of hidden layers and their role in shaping the behavior of neural networks. Join me as we move beyond single-input neurons and venture into the realm of multi-layer perceptrons!
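As a rough illustration of the "mutate and keep the best" idea the lesson describes, here is a minimal sketch in Python (the playground itself runs in the browser, so this is not its actual code; the parameter count and the toy fitness function are assumptions made for the example):

import random

def mutate(params, amount=0.1):
    # Nudge every parameter by a small random amount.
    return [p + random.uniform(-amount, amount) for p in params]

def evolve(fitness, n_params=3, generations=100, children_per_gen=20):
    # Start from random parameters (e.g. two weights and a bias) in [-1, 1].
    best = [random.uniform(-1, 1) for _ in range(n_params)]
    best_score = fitness(best)
    for _ in range(generations):
        for _ in range(children_per_gen):
            candidate = mutate(best)
            score = fitness(candidate)
            if score > best_score:  # keep a mutation only if it scores better
                best, best_score = candidate, score
    return best, best_score

# Toy usage: reward parameter sets whose values sum close to zero.
best, score = evolve(lambda p: -abs(sum(p)))
print(best, score)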

Discover the significance of hidden neurons and nodes, and understand why they're termed "hidden". Gain insights into the terminology, and we'll also debunk common misconceptions about activation functions.

Join me on this learning journey! 🚀🧠

🚗THE PLAYGROUND🚗

💬DISCORD💬

⭐LINKS⭐

#HiddenLayers #NeuralNetworks #AIPlayground #MachineLearning #Perceptron #ActivationFunctions #ScikitLearn #AIProgression

⭐TIMESTAMPS⭐
00:00 Introduction
00:45 Genetic Algorithm
07:03 What the Network Really Learns
11:40 Two Inputs
31:08 Hidden Layers
38:37 Boolean Operations
41:22 Homework 3
41:32 Misconceptions
42:32 Homework 4
Comments

I can't explain how much I love your content. I tried to understand what an ML model actually looks like after training, but I couldn't, because everyone on YouTube used a library for it. Then I found your mini image-recognition tutorial, where you beautifully illustrated how to extract features and how a decision boundary can be set using only a couple of them. You also provided your trained model with the source code, and that was the aha moment for me: I finally saw that an ML model is nothing but some coordinates.

You did the same for neural networks. I hadn't understood them properly before this series.

You are really making me, and others like me, believe it's possible for us to learn and understand complex topics like this.

Ben Eater's 8-bit breadboard computer series demystified the idea that a computer is a black box, by building a Turing-complete breadboard computer that you program by toggling DIP switches on and off, just like the early computers!

You are doing the same thing for AI, optimizing a neural network's weights and biases just like the very first perceptron!

I honestly don't get why this doesn't have millions of views, but I hope it will very soon.

Wish you all the best from Bangladesh 🇧🇩

Coder.tahsin

So well explained, another great lesson.

diegocassinera

Very nice tutorial! I have been working as a professional software engineer for the last 25 years, but when it comes to neural networks I'm kind of a newbie. Tutorials like this helped me understand a few concepts more easily. Quick note: I would like to see a bit more explanation of the boolean operators. It took me a while to understand that we OR or AND the gray area, not the black one. Other than that, I can't wait for the next tutorials! :D

ekalyvio

The universe thanks you for this course.

tomekatomek

I'm starting to understand what I'm doing when training a DL model with Keras 😂 thank you 😁

vlad_the_player

I'm starting the 2nd lesson of the course here. Thanks!

MRX-nmdn

I actually found a balance that makes the two stop very close without crashing. Speed: -0.55, forward: -0.3, threshold: -0.28.

peryMimon

Thanks again! And again I learned something new from your video!

fdorsman

This idea of interacting with the neural network was excellent 👏👏👏👏😁. Because of some questions, I was wondering: why does the cart keep moving even when the signal stops?!?! 🤔 Then I remembered phase one, when you showed how to add physics to the system. So, using only one neuron, I changed those physical parameters. And it is very interesting how small changes influence the entire system. It's a feast for mathematicians who like to explain chaos theory. 😂😂😅😁👍

DanielJoseAutodesk

Thanks Radu, I learned something new today. I had to make a NN for boolean expressions just to make predictions on a set of data. I couldn't imagine that a NN for boolean expressions could be applied to this self-driving car.

eridarael

Thank you for the interesting, detailed explanation; I learned something. Coding this, Radu 👍

difficultdo

At 25:30 you talk about just friction slowing them down? Are you applying a constant friction and accelerating just to overcome it, or is the friction you talk about the brakes? Because if it were the brakes, why can't we prevent the collision?

alwysrite

I'm using Chrome on a Mac and can't get the fine control of the values to work. I've tried holding down the Shift key and many others, but I can only change the values in steps of 0.1. Is there a workaround? Thanks for the videos, they are excellent.

garryokeeffe

As a beginner, please tell me where to start in order to understand neural networks well.

hamzamizo

Radu, I really like your videos, but for me this one misses the mark. 1. The graph of your trigger points could have been explained simply as: "the x axis is the distance measured and the y axis is the speed; the line is where the output changes state." I still have no idea why you would go through the trouble of showing it as a 3D plane. 2. Hidden layers are demonstrated but not really explained. I would say that "each node produces an output based on a weighted combination of its inputs versus a threshold (or bias, as you refer to it); this allows for the formation of logic gates, as demonstrated" (see the sketch below). Not hating, I just think we programmers tend to needlessly overcomplicate things at times, and it tends to turn off a good portion of the audience you're targeting.

josh
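As an aside to the discussion of boolean operations and thresholds in the comments above, here is a minimal sketch in Python (illustrative only, not the playground's code; the weight and bias values are hand-picked assumptions for the example) of how a single weighted-sum-versus-threshold node can act as an AND or an OR gate:

def neuron(inputs, weights, bias):
    # Fire (output 1) only if the weighted sum of the inputs clears the threshold.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights and biases (assumptions for this example):
# AND fires only when both inputs are 1, OR fires when at least one is.
def AND(a, b):
    return neuron([a, b], weights=[1, 1], bias=-1.5)

def OR(a, b):
    return neuron([a, b], weights=[1, 1], bias=-0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))

A single node like this cannot represent XOR, which is exactly the kind of case where a hidden layer becomes necessary.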