Building the Gradient Descent Algorithm in 15 Minutes | Coding Challenge

What's happening guys, welcome to the second episode of CodeThat!

In this ep I try to build a regression machine learning model using a gradient descent algorithm built completely from scratch in Python. The only dependency we use in the challenge is numpy!
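If you'd like a preview, here's a minimal sketch of the kind of from-scratch setup built in the challenge (the toy data and variable names below are my own, not necessarily what appears in the video):

import numpy as np

# Toy data: y is roughly 2x + 1 plus a little noise
x = np.random.rand(50)
y = 2 * x + 1 + 0.1 * np.random.randn(50)

# Parameters of the line y_hat = w * x + b
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for epoch in range(1000):
    y_hat = w * x + b
    # Gradients of the mean squared error with respect to w and b
    dldw = -2 * np.mean(x * (y - y_hat))
    dldb = -2 * np.mean(y - y_hat)
    # Step both parameters against their gradients
    w -= lr * dldw
    b -= lr * dldb

print(f"w: {w:.3f}, b: {b:.3f}")  # should land close to 2 and 1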

Oh, and don't forget to connect with me!

Happy coding!
Nick

P.S. Let me know how you go and drop a comment if you need a hand!
#machinelearning #codingchallenge #gradientdescent
Comments

I need to say this: you are the game changer here!!
As a data scientist with 2+ years of experience, I ALWAYS learn something new from your content! Please Nick, never stop doing these things, and never lose that smile on your face, even when you're hitting bugs!!
Thanks for everything

javierjdaza

Set the time limit to 20 mins next time, because you're even explaining things to us as you go.

This is really awesome!!

lakshman

Love the channel Nicholas, I have recently graduated from an NLP Master's degree, and seeing you explain stuff in a simpler way through your coding challenges is really helping me connect with the material I've learned! Keep it up and I'll keep watching!

alyt

Hey Nicholas! Love your channel and I'm really appreciating these 15 minute coding challenges - please keep it up! Also, you can disable those annoying VS Code popups you ran into at 8:35 by going to Code > Preferences > Settings, then typing "editor.hover.enabled" and unchecking the "Editor > Hover: Enabled" option. Hope that's useful!

spencerbertsch

Once you initialized lr to 0.0, I knew you were going to forget to change it lol. Love the challenges tho, keep doing them, I think it would be cool to see how you implement a neural network from scratch

nikitaandriievskyi

the zoom in on the unsaved icon was personal 💀
one of the reasons why I use autosave

Powercube

Wow. You make the subject come alive with excitement and simplicity. You are really gifted. I will take you over hard-to-understand but smart Ph.D. professors from the Ivy League any day.

spicytuna

Awesome video!! It's pretty cool to see such theoretical concepts coded and explained like this. Keep going Nick!!

MiguelNFer

I've been following your channel for a while now and I always find new cool stuff here. Keep up the good work, it's really helpful. Also, I love your positive personality, you really make complex stuff look entertaining.

Beowulf

Great video!
Would be cool to come back to this and add visualization during gradient descent using matplotlib to show what is actually happening.
For example, drawing the data points, the regression line, and the individual losses between the line and the data points, and showing stats like current step, w, b, and total loss! :)

juliansteden
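A minimal sketch of that visualization idea, assuming the same simple w * x + b linear model (matplotlib is an extra dependency beyond the challenge's numpy-only rule, and the parameter values here are made-up mid-training placeholders):

import numpy as np
import matplotlib.pyplot as plt

# Toy data and a partially trained line (placeholder values)
x = np.random.rand(30)
y = 2 * x + 1 + 0.1 * np.random.randn(30)
w, b, step = 1.5, 0.8, 100  # pretend we're mid-training

y_hat = w * x + b
total_loss = np.mean((y - y_hat) ** 2)

plt.scatter(x, y, label="data points")
line_x = np.array([0, 1])
plt.plot(line_x, w * line_x + b, "r-", label="regression line")
# Individual losses: vertical segments between each point and the line
plt.vlines(x, y_hat, y, colors="gray", alpha=0.5)
plt.title(f"step={step}  w={w:.2f}  b={b:.2f}  loss={total_loss:.3f}")
plt.legend()
plt.show()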

the essence of Deep learning in a few lines of code... awesome

cavaliereoscuro

Wow. This YouTuber has only 197K subscribers, for such absolutely high-quality videos. You deserve more than 1M+. The only thing to say is keep grinding, and you'll get there.

Mohacks

This is a very novel and cool way to teach coding. I really enjoyed it, and it was good to see you troubleshoot and get stuff wrong.

williamstephenjones

You are so good at explaining these complicated concepts. Also, if you want to close the Explorer sidebar in VS Code, try Ctrl + B

sergioquijano

This was oddly intense. Great job Nicholas! Even though you ran out of time, this video is still a win to me. 😉

leonardputtmann

Amazing! I'm learning so much watching you code. Thank you for sharing.

einsteinsboi

I think you missed dividing the derivative by 2. In the formula for the cost function we have (1 / (2 * number of training examples)) * sum of squared errors, so when we take the derivative, the 2 from dldw and the 1/2 from the cost function cancel each other out. Anyway, it was a cool video, keep up the good work brother

dipendrathakuri
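Writing out the cancellation the comment above describes, with m as the number of training examples and \hat{y}_i = w x_i + b:

L(w, b) = \frac{1}{2m} \sum_{i=1}^{m} (y_i - \hat{y}_i)^2

\frac{\partial L}{\partial w} = \frac{1}{2m} \sum_{i=1}^{m} 2 (y_i - \hat{y}_i)(-x_i) = -\frac{1}{m} \sum_{i=1}^{m} (y_i - \hat{y}_i) x_i

The 2 from differentiating the square cancels the 1/2 in the cost, which is why the final gradient carries no stray factor of 2.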

Really nice video! Love the energy and the enthusiasm. Thanks for the help!

VictorGiustiniPerez_

I'll give you half a win, since it was a small detail

brunospfc

ChatGPT won this challenge instantaneously lol:

import numpy as np

# Set the learning rate
learning_rate = 0.01

# Set the number of iterations
num_iterations = 1000

# Define the data points
X = np.array([[0, 1], [1, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0])

# Initialize the weights
weights = np.zeros(X.shape[1])

# Train the model
for i in range(num_iterations):
    # Compute the predicted values
    y_pred = 1 / (1 + np.exp(-1 * np.dot(X, weights)))

    # Compute the error
    error = y - y_pred

    # Update the weights
    weights += learning_rate * np.dot(X.T, error)

# Print the weights
print("Weights:", weights)


A.I. description of the code: "This script defines a simple dataset with four data points and trains a model using the gradient descent algorithm to learn the weights that minimize the error between the predicted values and the true values. The model uses a sigmoid activation function to make predictions.

The script initializes the weights to zeros, and then iteratively updates the weights using the gradient descent algorithm, computing the predicted values, the error, and the gradient of the error with respect to the weights. The learning rate determines the size of the step taken in each iteration.

After training the model, the final weights are printed out. You can use these weights to make predictions on new data points by computing the dot product of the data points and the weights, and applying the sigmoid function."

ibrahimx
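For anyone who wants to try that last step from the description, here's a minimal sketch of predicting on a new point (the weight values and new_point below are hypothetical placeholders, not outputs of the script above):

import numpy as np

# Placeholder weights standing in for whatever the training loop produced
weights = np.array([0.5, -0.5])

# A made-up new data point with the same two features
new_point = np.array([1, 0])

# Dot product, then the sigmoid activation, gives the predicted probability
prob = 1 / (1 + np.exp(-np.dot(new_point, weights)))
print("Predicted probability:", prob)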