Ali Ghodsi, Lec 19: PAC Learning

Comments

THE BEST - BRIEF - DETAILED - EXPLANATION I HAVE FOUND, THANK YOU VERY MUCH.

lamis_

This lecture is pure gold. Thank you prof.

sudowiz

I wish I attended these lectures. Truly amazing!

maanasvohra

And done. Thanks Prof. Your style of teaching agrees most with me.

pranavchat

Hi, it's a really clear and detailed lecture! Thank you so much! Could you please also talk more about agnostic PAC learning and its sample complexity? Thanks!

Aaaa-jpcx
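
A note on the agnostic PAC question above: the lecture's bound is for the realizable case, but for a finite hypothesis class H the agnostic sample complexity has a standard form that follows from Hoeffding's inequality and a union bound. A sketch of that standard result (not covered in this lecture):

    \Pr\big[\,|\widehat{\mathrm{err}}_S(h) - \mathrm{err}_D(h)| > \epsilon/2\,\big] \le 2e^{-m\epsilon^2/2} \quad \text{for each fixed } h \in H,

so a union bound over H shows that

    m \ge \frac{2}{\epsilon^2}\left(\ln|H| + \ln\frac{2}{\delta}\right)

samples suffice for empirical risk minimization to return \hat h with \mathrm{err}_D(\hat h) \le \min_{h \in H} \mathrm{err}_D(h) + \epsilon, with probability at least 1 - \delta. Note the 1/\epsilon^2 dependence, versus 1/\epsilon in the realizable case.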

That was an awesome explanation. It's starting to make more sense, but like he said, it's a topic that one could dedicate a whole semester to.

ahsin.shabbir

Thanks a lot for your course, I hope it will help me reach a high mark on my upcoming exam! :)

solalvernier

Appreciate your very clear and simple explanation. Looking forward to watching more of your videos. 👏🌸

alitabesh

Thank you! It's much clearer to me now.

nelya.kulch

Wish my professor was as good as you =) Great lesson!

davidecremona

Super clear, thanks! I guess I'm used to Persian professors' explanations and teaching styles!

haniyek

Minor quibble: in the proof of the error bound, when you drew e^-epsilon and 1-epsilon, I think you've drawn the mirror image; they should slope up and to the left. It also seemed a bit arbitrary to point out that e^-epsilon is greater than 1-epsilon, since there are infinitely many functions greater than 1-epsilon. What made Leslie Valiant pick e^-epsilon specifically in the first place?

KW-mdbq
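
A note on the e^-epsilon question above: the choice is not arbitrary once you see how it is used later in the proof. Two standard facts (a sketch, in the notation of the lecture):

    1 - \epsilon \le e^{-\epsilon} \quad \text{(} e^{-\epsilon} \text{ is convex and } 1 - \epsilon \text{ is its tangent line at } 0\text{)}, \qquad \text{hence} \quad (1-\epsilon)^m \le e^{-\epsilon m}.

Any upper bound on 1 - \epsilon would be valid, but the exponential is the tightest convenient one, and it makes the bound easy to invert: requiring |H| e^{-\epsilon m} \le \delta and solving for m gives the familiar sample-complexity form

    m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right).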

11:41 Why are we considering all of the m points? He clearly said that this classifier correctly classifies the m points from the training data. Then he looked at the probability that it will classify a random point (from the test set) correctly: P(classifying a random point correctly) = 1 - P(misclassifying a random point) = 1 - epsilon. Now we want the probability that it will classify all the random points correctly, and those random points should come from the test set. Why does he do (1-epsilon)^m? Where am I going wrong?

desiquant
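
A note on the (1-epsilon)^m question above: the m points in that step are the training points, not test points, and the event being bounded is that a fixed "bad" hypothesis nevertheless looks perfect on the sample. A sketch of the standard argument: fix h with true error \mathrm{err}_D(h) > \epsilon. Because the m training points are drawn i.i.d. from the same distribution D that defines the error,

    \Pr[\,h \text{ classifies all } m \text{ training points correctly}\,] = (1 - \mathrm{err}_D(h))^m < (1-\epsilon)^m,

and a union bound over the at most |H| hypotheses gives

    \Pr[\,\exists\, h \text{ with } \mathrm{err}_D(h) > \epsilon \text{ consistent with the sample}\,] \le |H|(1-\epsilon)^m \le |H| e^{-\epsilon m}.

No test set enters the proof; the guarantee about unseen points is exactly the statement that, with high probability, no hypothesis with true error above epsilon survives the training data.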

Thanks very much for the video. I still think the setting is quite problematic: by stating y_i = c(x_i), we are presuming that the true model is deterministic rather than probabilistic, i.e. that y_i is fully determined by x_i. That is a hugely unrealistic restriction in most applications.

shakesbeer
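
A note on the determinism point above: y_i = c(x_i) is the "realizable" assumption of basic PAC learning, and the standard way to drop it is the agnostic setting, where (x, y) is drawn from a joint distribution D, labels may be noisy, and error is defined as

    \mathrm{err}_D(h) = \Pr_{(x,y)\sim D}[\,h(x) \ne y\,].

The learner is then only required to come within epsilon of the best hypothesis in the class, \min_{h \in H} \mathrm{err}_D(h), rather than to reach error epsilon in absolute terms. (This is the standard relaxation from the agnostic PAC literature; the lecture itself works in the realizable case.)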

Not sure I understand why the hypothesis class of linear classifiers (half-planes) in 2D has VC dimension 3.
Three collinear points labeled +, -, + can't be classified correctly by any line. Am I misunderstanding the definition of shattering?

samlaf
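
A note on the VC-dimension question above: VC dimension d only requires that some set of d points is shattered, not every set. Three collinear points labeled +, -, + cannot be separated by a line, but three points in general position realize all 8 labelings, and no set of 4 points can be shattered, which is why linear classifiers in R^2 have VC dimension 3. A small Python sketch (my own illustration, not code from the lecture) that checks this by brute force with a perceptron:

    import itertools
    import numpy as np

    def linearly_separable(points, labels, epochs=1000):
        """Try to find (w, b) separating the labeled points with the
        perceptron rule; returns True if a strict separator is found."""
        X = np.asarray(points, dtype=float)
        y = np.asarray(labels, dtype=float)
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if yi * (w @ xi + b) <= 0:      # misclassified (or on the boundary)
                    w, b = w + yi * xi, b + yi
                    mistakes += 1
            if mistakes == 0:                   # every point correctly classified
                return True
        return False

    def shattered(points):
        """A set is shattered if every +/-1 labeling is linearly separable."""
        return all(linearly_separable(points, labels)
                   for labels in itertools.product([-1, 1], repeat=len(points)))

    # Three points in general position: shattered, so VC dimension >= 3.
    print(shattered([(0, 0), (1, 0), (0, 1)]))   # True
    # Three collinear points: the labeling (+, -, +) has no separator,
    # but that does not contradict VC dimension 3.
    print(shattered([(0, 0), (1, 0), (2, 0)]))   # False

The first call prints True because all 8 labelings of a non-degenerate triangle are separable; the second prints False, which is exactly the +, -, + situation raised in the comment.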

I have a question. How can we shatter positive and negative points that lie on a line? I mean, consider a straight line with three points on it, labeled +, -, +.

soryahozhabr
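
A note on the collinear +, -, + question above (same issue as the previous comment): that set indeed cannot be shattered, and that is consistent with a VC dimension of 3, which only asks for one shatterable 3-point set to exist. A short argument for why the middle point can never get the opposite sign of the two outer points:

    x_2 = \lambda x_1 + (1-\lambda) x_3 \text{ for some } \lambda \in (0,1)
    \;\Rightarrow\;
    w^\top x_2 + b = \lambda\,(w^\top x_1 + b) + (1-\lambda)\,(w^\top x_3 + b).

If both outer points are classified positive, the right-hand side is a convex combination of two positive numbers, so the middle point is forced to be positive as well; the labeling +, -, + is therefore unrealizable by any linear classifier.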

This guy looks like he's never slept in his entire life.

vectoralphaSec

Hi Prof,
Nice tutorial!
I have two questions about PAC learning:
Is |H| the size of the set of all hypotheses whose training error = 0? And how can we count it?

yanjenhuang
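
A note on the |H| question above: in the finite-class bound, |H| is the size of the whole hypothesis class the learner searches over, not just the hypotheses with zero training error; the consistent hypotheses are handled by the union bound over all of H. How to count it depends on how the class is defined. A standard example from the PAC literature (not specific to this lecture): conjunctions over n boolean variables, where each variable may appear positively, negatively, or not at all, plus one always-false hypothesis, give

    |H| = 3^n + 1, \qquad \text{so} \qquad \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) = O\!\left(\frac{n + \ln(1/\delta)}{\epsilon}\right),

i.e. the sample complexity grows only linearly in the number of variables.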

Hope he can think things through before speaking them out; time and again he corrects his statements...

diegomabrary