Learning - Lecture 4 - CS50's Introduction to Artificial Intelligence with Python 2020

00:00:00 - Introduction
00:00:15 - Machine Learning
00:01:15 - Supervised Learning
00:08:11 - Nearest-Neighbor Classification
00:12:30 - Perceptron Learning
00:33:19 - Support Vector Machines
00:39:31 - Regression
00:42:37 - Loss Functions
00:49:33 - Overfitting
00:55:44 - Regularization
00:59:42 - scikit-learn
01:09:57 - Reinforcement Learning
01:13:02 - Markov Decision Processes
01:19:56 - Q-learning
01:38:54 - Unsupervised Learning
01:40:19 - k-means Clustering

This course explores the concepts and algorithms at the foundation of modern artificial intelligence, diving into the ideas that give rise to technologies like game-playing engines, handwriting recognition, and machine translation. Through hands-on projects, students gain exposure to the theory behind graph search algorithms, classification, optimization, reinforcement learning, and other topics in artificial intelligence and machine learning as they incorporate them into their own Python programs. By course's end, students emerge with experience in libraries for machine learning as well as knowledge of artificial intelligence principles that enable them to design intelligent systems of their own.

***

This is CS50, Harvard University's introduction to the intellectual enterprises of computer science and the art of programming.

***


LICENSE

CC BY-NC-SA 4.0
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License

David J. Malan
Comments

I am blessed to have this lecture on the internet for free

kamalkumarmukiri

He is so awesome, speaking so smoothly and clearly

chairathful

Thank you very much to Professors Brian Yu and David J. Malan; these lectures are so engrossing and helpful!

maybeonce

Wow, these lectures are really great. Never enjoyed learning something this much 🙌🏽🙌🏽🙌🏽🙌🏽🙌🏽
Thank you so much for making these!

harshitarawat

This training shows its quality in many ways; this time, for me, it was the regularisation and holdout cross-validation parts. Other trainings give pure linear regression, but here we get much more practical information instead of juggling unnecessary mathematical details that we can focus on later if we have to. So once again I prefer an old-school, two-hour, no-break class to the latest mobile-friendly, super-fancy stuff.

cagri

Thank you, Prof. David J. Malan and CS50, for making such an effort.

nfs

THE BEST LECTURE. EXACTLY WHAT I WANTED

yash-vhtk

Amazing course! Are you planning to publish a next-level course for this subject as well? :)

maciejwozniak

The perceptron learning rule is a basic algorithm used in machine learning for binary classification tasks. It is based on the concept of an artificial neuron called a perceptron, which takes multiple inputs, applies weights to them, and produces an output based on a specified activation function.

The perceptron learning rule follows a simple iterative process to adjust the weights of the inputs until the perceptron can correctly classify the given inputs. Here are the steps involved:

1. Initialize the weights and bias to small random values.
2. For each training example, calculate the weighted sum of the inputs and the bias.
3. Apply the activation function to the weighted sum to obtain the perceptron's output.
4. Compare the output with the desired target output.
5. If the output is correct, continue to the next training example.
6. If the output is incorrect, adjust the weights and bias according to the learning rule to reduce the error.
7. Repeat steps 2-6 for a specified number of iterations or until the perceptron achieves the desired accuracy.

The learning rule updates the weights and bias using the following formulas (the bias update is just the weight update with its input fixed at 1):
Δwᵢ = η * (target - output) * xᵢ
Δb = η * (target - output)

where:
- Δwᵢ is the change to be made to the weight of input xᵢ.
- η (eta) is the learning rate, which controls the magnitude of the weight updates.
- target is the desired target output.
- output is the current output of the perceptron.
- xᵢ is the value of the ith input.
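Here is a minimal Python sketch of those steps and that update rule. The step activation, the learning rate, the epoch count, and the toy AND dataset are all illustrative assumptions, not anything from the lecture:

```python
import random

def train_perceptron(data, eta=0.1, epochs=100):
    """Train a perceptron on (features, target) pairs with targets 0 or 1."""
    n = len(data[0][0])
    # Step 1: initialize weights and bias to small random values
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = random.uniform(-0.5, 0.5)

    for _ in range(epochs):
        errors = 0
        for x, target in data:
            # Steps 2-3: weighted sum plus bias, then step activation
            weighted_sum = sum(w * xi for w, xi in zip(weights, x)) + bias
            output = 1 if weighted_sum >= 0 else 0
            # Steps 4-6: if the output is wrong, nudge weights and bias
            error = target - output
            if error != 0:
                errors += 1
                for i in range(n):
                    weights[i] += eta * error * x[i]  # Δwᵢ = η(target - output)xᵢ
                bias += eta * error                   # Δb = η(target - output)
        if errors == 0:  # Step 7: stop early once every example is classified
            break
    return weights, bias

# Toy usage: learn logical AND, a linearly separable function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
for x, target in data:
    prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
    print(x, "->", prediction, "(expected", target, ")")
```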

A real-life example of the perceptron learning rule is image classification. Suppose you want to develop a system that can distinguish between images of cats and dogs. You can represent each image as a vector of numerical features (e.g., pixel values) and use a perceptron to classify them.

Initially, the perceptron's weights and bias are randomly assigned. You feed the training data (labeled images) into the perceptron, and it tries to classify them as either a cat or a dog based on the given features. If the perceptron misclassifies an image, the learning rule adjusts the weights and bias to correct the error. This process continues iteratively until the perceptron achieves satisfactory accuracy in classifying the images.

By using a large dataset of labeled images and repeating this process, the perceptron can learn to recognize patterns and make accurate predictions, allowing it to classify unseen images as cats or dogs. (via ChatGPT 🤓)

milakohen