Training Neural Networks: Crash Course AI #4

Today we’re going to talk about how neurons in a neural network learn by getting their math adjusted, called backpropagation, and how we can optimize networks by finding the best combinations of weights to minimize error. Then we’ll send John Green Bot into the metaphorical jungle to find where this error is the smallest, known as the global optimal solution, compared to just where it is relatively small, called local optimal solutions, and we'll discuss some strategies we can use to help neural networks find these optimized solutions more quickly.
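Below is a minimal sketch, in Python, of the idea the description outlines: gradient descent nudging a weight downhill on an error surface that has both a local and a global minimum. The loss function, learning rate, and starting points are made-up illustrations for this sketch, not anything taken from the video.

def loss(w):
    # Hypothetical 1-D error surface: a shallow local minimum near w = +2.1
    # and a deeper global minimum near w = -2.4 (the "jungle altitude" idea).
    return 0.05 * w**4 - 0.5 * w**2 + 0.3 * w + 2.0

def loss_gradient(w, eps=1e-5):
    # Numerical gradient; real networks get this value via backpropagation.
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

def gradient_descent(start_w, learning_rate=0.05, steps=200):
    w = start_w
    for _ in range(steps):
        w -= learning_rate * loss_gradient(w)  # step downhill on the error surface
    return w

for start in (-4.0, 4.0):
    w = gradient_descent(start)
    print(f"start {start:+.1f} -> weight {w:+.3f}, error {loss(w):.3f}")

Starting on different sides of the surface, the same update rule settles into different valleys, which is the global-versus-local-optimum distinction the episode covers.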

Crash Course is produced in association with PBS Digital Studios

Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:

Eric Prestemon, Sam Buck, Mark Brouwer, Timothy J Kwist, Brian Thomas Gossett, Haxiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Zach Van Stanley, Bob Doye, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Indika Siriwardena, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Sam Ferguson, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, David Noe, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore
--

Want to find Crash Course elsewhere on the internet?

#CrashCourse #ArtificialIntelligence #MachineLearning
Comments

I work in CS, and this is an accurate and concise way of describing neural nets

GameByGame

I'm a software engineer by trade and have to say I love this series thus far. It's a great simplification of neural networks and the concepts that make them run. I can't wait to see how the lab is put together :D

VigiliaMortisYT

As an AI scientist, I'm echoing the comments from others: this was an excellent explanation of the basics of neural network training, without digging into the complexities of gradient descent. Great job, can't wait for the lab!

mattkuhn

Episode 100 of CC AI: * John Green bot is solving climate change whilst baking 1000 cookies of all different flavors on his way to Pluto *

colorsrrealduh

Software Engineer here. I've already recommended this crash course to so many young engineers who want to get into ML and AI. Great series.

boopiechot

I'm a college student learning this stuff in class and you have helped me so much; also, the altitude metaphor is an amazing way of visualizing this. Thanks, Jabrils!

yann

Anyone else charmed by how Jabril delicately redresses John Green Bot every time after inserting a new cassette?

sabinescholle

I feel like this guy is the most chill host on Crash Course ever.

kieranmcgarvey

Thank you for pointing out possible unrelated correlations when working with big datasets.

wepotgifter

Can't wait for the next episode! I love how this series makes this topic super accessible while not glossing over important information.

misode

10:04 when someone's checking you out

khaliah

Wow~ these are by far the most intuitive analogies and the best graphics/animations I have seen to explain AI! Thank you!

pfever

Loving the John Green Bot in the jungle metaphor, you guys are doing an awesome job!

CappellaZagreb

Did 800 people actually die from becoming tangled in their bed sheets in 2009?!?!

williamcrosby

10:04 OMG Guys I think she likes me!!!

MrWilliam

This is excellent. It reminds me of many of the concepts I learned about how WinBUGS works when doing network meta-analysis. Thank you.

sohaib-ashraf

I love this series! This episode reminded me a lot of the statistics series you guys did. I love the connections.

ShinyaMiyami

Hey, this was a crazy simple explanation; I fell in love with it. I am definitely going to follow along and am looking forward to more videos like this.

vinitmanerikar

Add-on fact: Computational optimization of molecular stability (e.g., in drug development) is done with a similar algorithm. The "plane" used in that case is entropy (or energy). Re-inserting energy into the system to let it shift its molecular structure toward a possibly lower energy level (higher stability) is a common step in that process.
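For anyone curious, here is a minimal sketch, in Python, of that "re-insert energy to escape a shallow valley" idea, written in the style of simulated annealing; the energy function, temperature schedule, and step size are made-up illustrations, not taken from any actual chemistry pipeline.

import math
import random

def energy(w):
    # Hypothetical 1-D energy surface with a shallow local minimum and a deeper global one.
    return 0.05 * w**4 - 0.5 * w**2 + 0.3 * w + 2.0

def anneal(start_w, temperature=2.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    w, e = start_w, energy(start_w)
    for _ in range(steps):
        candidate = w + rng.gauss(0, 0.3)  # random kick: the "re-inserted energy"
        e_new = energy(candidate)
        # Always accept downhill moves; accept uphill moves with a probability
        # that shrinks as the temperature cools, so the system can hop out of a
        # shallow valley early on but settles down later.
        if e_new < e or rng.random() < math.exp((e - e_new) / temperature):
            w, e = candidate, e_new
        temperature *= cooling
    return w, e

print(anneal(start_w=2.0))  # starts near the shallow valley, can still end up in the deeper one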

StarrshProductions

This is wonderful! I think I’m finally understanding this stuff! Great video!

TheTwick