Learning to See [Part 8: More Assumptions...Fewer Problems?]

In this series, we'll explore the complex landscape of machine learning and artificial intelligence through one example from the field of computer vision: using a decision tree to count the number of fingers in an image. It's gonna be crazy.

@welchlabs
Comments

Correction at 10:30. Thank you to Jendrik Weise, fejfo's games, and Andrew Kay for pointing this out. I mistakenly multiplied the probability of randomly selecting 5 correctly labeled examples from a set of 16 total examples, with the remaining 11 examples incorrectly labeled, p = 2/10,000, by the number of trials, 65,536, to compute the probability of this event happening one or more times. That is, I computed P("getting lucky" one or more times) = p*n. I based this on the common formula for adding the probabilities of mutually exclusive events to compute the overall probability of one or the other event occurring. As pointed out by fejfo's games, this approximation is reasonable for the n=16 case, but does not work for n=65,536.

Instead, we need to use the binomial distribution. In thinking through this again, it was helpful for me to think about this as the probability of tossing one or more heads in n=65,536 trials of a "bent coin" where P(heads) = 2/10,000 = p. As correctly pointed out by Jendrik Weise, fejfo's games, and Andrew Kay, this probability is 1 - (1-p)^n ≈ 0.999998. The conclusion is the same: "getting lucky" is highly probable, but my means of getting there were incorrect. Took me way too long to correct this - sorry for the delay!
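
A quick way to check the corrected numbers is a couple of lines of Python. This is only a minimal sketch: p and n are the values quoted in the correction above, and the old p*n shortcut is included for comparison.

# p: chance that the 5 randomly chosen training examples are exactly the
#    correctly labeled ones (from the correction above, p = 2/10,000)
# n: number of candidate rules / trials (65,536)
p = 2 / 10_000
n = 65_536

exact = 1 - (1 - p) ** n   # P("getting lucky" one or more times), complement of "no successes"
naive = p * n              # the mutually-exclusive-events shortcut from the video

print(round(exact, 6))     # ~0.999998 -> getting lucky is essentially certain
print(round(naive, 4))     # ~13.1072  -> greater than 1, so the shortcut breaks down here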

WelchLabsVideo

This is the future of education. Animations are wonderful - colors are wonderful - annotations appear instantly - music tracks the mood of logical progression. With either a blackboard or book, this material would take twice as long for half the clarity. I sincerely appreciate the time you put into making these videos.

mpete

I am so lost... I'm gonna have to watch this a few times.

markthesecond

"A probability greater than 1"

Nope. The approximation 1 - (1 - x)^n ~= nx is only valid when x and n are sufficiently small, and 65536 is not sufficiently small. The result should be 1 - (9998/10000)^65536 ≈ 0.999998. Getting an answer bigger than 1 doesn't mean the outcome is certain, it means you did something wrong.

AndrewKay

This and 3Blue1Brown are the best channels on youtube.

jacobkantor

This is the first video in the series to genuinely confuse me.

jacobkantor

I've been binge-watching your channel for the last week or so. You need more subs!!!!

blakeaustin

Your videos motivated me to learn Python image processing

qwertymanzzz

1:00 Measuring puppy cuteness actually requires machine learning, so that next series needs this one in order to continue.

ryanmurray

Let's take a moment to also appreciate how visually appealing the style of your videos is. Right up there with 3B1B, but with a different approach, using physical objects to make the concepts more tangible and intuitive. Thanks for the awesome videos.

EmadGohari

Learning is easy, they said.
Even a baby can do it, they said.

fca

You deserve so much more than you receive.
Thank you.

vuvffufg

@5:34 is the situation we are evaluating: rule G1 getting the sampled (training) data right and missing everything else (the test data). In other words, we are asking how likely it is that G1 fits only the training sample, its worst possible kind of "success". How do we know this is the right setup to evaluate? Because, as he says at @5:17 about random sampling, we didn't randomly sample our data, then choose G1, and only afterwards get the news that it fit perfectly; and even if we take a random sample of the superset, labelings that G1 identifies correctly do not come up often.

The picture at @5:34 is the superset of all possible labelings of the 4 squares, and the probability that G1 succeeds only on the training sample and on nothing else is just 0.0002 (this is the crux). Compare this with the larger rule G4: the bigger the rule (almost probability 1 at @10:26), the more chances it has of working great only on the sampled data while misclassifying the test data, i.e. not generalizing as a rule. (I wrote this explanation for people who said this lecture was confusing, which included me too, until I rewatched it many times over many days and it finally sank in.)
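
A rough numeric companion to the point about bigger rules, as a sketch only: p = 2/10,000 and the 16 and 65,536 rule counts come from the pinned correction, while the intermediate sizes are purely illustrative.

# Chance that at least one rule in a family of n candidate rules fits the 5
# training examples purely by luck, using p = 2/10,000 from the pinned correction.
p = 2 / 10_000

for n in (16, 256, 4_096, 65_536):
    lucky = 1 - (1 - p) ** n
    print(f"{n:6d} candidate rules -> P(at least one lucky perfect fit) = {lucky:.4f}")

# Approximate output:
#     16 candidate rules -> 0.0032
#    256 candidate rules -> 0.0499
#   4096 candidate rules -> 0.5593
#  65536 candidate rules -> 1.0000 (more precisely 0.999998)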

paedrufernando

This is my favorite series on all of YouTube right now. Keep up the great work!!

njrom

I am a junior comp sci major in college and want to go into machine learning. I can't thank you enough for making these videos. You have a great style of web video, and all your information is clear and concise.
Thank you

lolzist

Incredible! This is the best intro course on machine learning for both beginners and advanced learners. Keep it up

sidsarasvati

I really enjoy the little texts at the bottom right of this video XD

MegaRainnyday

I love this series, I'm learning a lot. Thanks.

masdemorf

Things have been clear to me up until 6:25 in this video.

1) We chose g1(X)=x_1 out of 2,048 potential functions, so why is your probability based on 8 functions? Why those 8 in particular?

2) Where did 2^16 come from for the number of potential rules for g4 at 9:45? I thought there were, again, 2,048 potential functions to choose from. Is it (16 inputs)*(2 outputs)*(2,048 potential functions)? I'm grasping at straws, and re-watching parts 7 and 8 has not helped. (One possible counting is sketched after this comment.)

3) Where are you getting this n*p = 8*P(5 matches) = 16/10,000 approximation from at 8:25? Isn't this a geometric distribution, where p = 'chance that a population permutation equals our population sample' and n = 'the number of rules this is being tested on', and where we're interested in the probability of all n functions matching only the training set (i.e. overfitting)?

Probability, why must you always complicate everything!
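
Regarding question 2: not an answer from the video, just one way the 2^16 can arise. A tiny sketch, under the assumption (and it is only an assumption) that g4 means an arbitrary true/false rule over 4 binary pixels.

# Assumption (not confirmed by the video): a rule that looks at k binary pixels
# has 2**k possible input patterns, and each pattern can independently be labeled
# true or false, giving 2**(2**k) distinct rules of that form.
k = 4
patterns = 2 ** k        # 16 possible 4-pixel inputs
rules = 2 ** patterns    # 2**16 = 65,536 possible rules
print(patterns, rules)   # 16 65536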

WateryIce

I need to say thank you, I'm learning a lot. Your series is really interesting and you're a great explainer.

vitorpinheiro