Explainable AI explained! | #3 LIME

▬▬ Resources ▬▬▬▬▬▬▬▬▬▬▬

▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
02:31 Paper explanation
03:34 The math / formula
08:35 LIME in practice

▬▬ Support me if you like 🌟
Comments

One of the best and most wholesome explanations of this topic! Kudos!

sakshamgulati

The objective function part is amazingly well explained.

yangbai
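For readers who want to see it written out: the objective the video covers at 03:34 comes from the original LIME paper (Ribeiro et al., 2016, "Why Should I Trust You?") and reads

```latex
\xi(x) \;=\; \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_x) \;+\; \Omega(g)
```

where f is the complex model being explained, G a family of simple interpretable models (e.g. sparse linear models), \pi_x a proximity kernel weighting perturbed samples by their closeness to the instance x, \mathcal{L} the locally weighted loss measuring how poorly the surrogate g mimics f in that neighborhood, and \Omega(g) a penalty on the complexity of g.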

Wow, very clear and concise explanation of a fairly complex technique. Thank you very much for the effort you put into this. Very helpful!

briggstwitchell

I just want to congratulate you! You are the only one on YouTube who has a solid background (books and papers) in what they teach in a video. Thank you so much... your video helped me a lot!

arturocdb

Wow! Such a cool explanation. I had some intuition issues after reading the paper that are now solved due to this video. Thanks a lot!

dvabhinav

Great work! I am lucky to have found this channel, as my major interests are GNNs and XAI. Thank you so much for the brilliant work.

hlskojm

Concise and great explanation on the topic.

techjack

Really cool video. I love the format that mixes theory and coding. Just one point for discussion: we need to measure recall even for a binary classifier or for balanced data. It measures a classifier's ability to find all the positive examples, while precision measures how reliable the positive predictions are. So they capture two different types of errors; in statistics we talk about Type I and Type II errors.

hqi
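A minimal Python sketch of the precision/recall distinction raised above; the toy labels below are invented for illustration and are not from the video:

```python
# Toy illustration of why precision and recall capture different error types,
# even for a balanced binary problem.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # balanced ground truth
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # classifier output

# Precision: of the predicted positives, how many are truly positive?
# (penalizes false positives, i.e. Type I errors)
print("precision:", precision_score(y_true, y_pred))  # 2/3 ~ 0.67

# Recall: of the actual positives, how many did the classifier find?
# (penalizes false negatives, i.e. Type II errors)
print("recall:", recall_score(y_true, y_pred))        # 2/4 = 0.50
```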

Thank you so much for the splendid explanation!

ngwoonyee

Really amazing!! It's very exciting and cool to conduct such research; it is also very explainable.

ayaeldeeb

13:20 I believe "edu" has something to do with "atheism" :) Thanks for the explanations!

borauyar

It is just clearly explained! Thanks a lot.

youssefbenhachem

You are very helpful! Quick questions from a relatively non-tech person:
1) How does this scale for models with millions of parameters, e.g. could you even use this on an LLM?
2) Am I right in understanding that you then need a mini real dataset to train locally every time?

MoMo-ibej

Damn bro, this helped me a lot with my final thesis

ramalaccX

LIME can explain how the complex model behaves around the sample point of interest, but it cannot guarantee that the complex model's predictions around that sample are correct. Am I right? Also, we could use LIME to explain an incorrect prediction of the complex model. For example, we could take two samples, one with a correct prediction and one with an incorrect one, then fit a simple surrogate model around the two samples so that we can see how the complex model predicts and what is wrong with it. Am I right again?

heejuneAhn
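A rough sketch of the workflow described above, using the lime package. `clf`, `X_train`, `X_test`, `y_test`, and `feature_names` are assumed placeholders rather than objects from the video, and note that LIME fits a separate local surrogate for each explained instance rather than a single surrogate around both samples:

```python
# Sketch: compare LIME explanations for one correctly and one incorrectly
# classified sample. All data/model objects are assumed to exist already.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no_stroke", "stroke"],  # hypothetical class names
    mode="classification",
)

preds = clf.predict(X_test)
correct_idx = int(np.where(preds == y_test)[0][0])  # first correct prediction
wrong_idx = int(np.where(preds != y_test)[0][0])    # first incorrect prediction

for idx in (correct_idx, wrong_idx):
    exp = explainer.explain_instance(X_test[idx], clf.predict_proba, num_features=5)
    print(f"sample {idx} (true={y_test[idx]}, pred={preds[idx]}):")
    print(exp.as_list())  # local feature weights of the fitted surrogate
```

Comparing the two `as_list()` outputs side by side shows which features push the complex model toward the wrong class near the misclassified point.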

Thanks for this. It's funny that the "ever_married" feature is inversely proportional to getting a stroke :)

randalllionelkharkrang

Thank you for the amazing explanation.

Anirudh-cfoc

I think in the last example you cannot say whether the model is wrong or not. LIME just explains how it behaves near the sample. If you could explain it so easily, why would you use a complex model? You can only say that using the ground truth. Some features that you might think are not logical might still affect that specific local point.

heejuneAhn

When a feature is not continuous, how do we sample the neighborhood? For example, for binary features, do we just take combinations of features, or treat the binary features as continuous ones?

heejuneAhn
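Regarding the question above: one common approach, and roughly what the lime package does for categorical/binary tabular features as far as I understand, is to sample each discrete feature from its empirical training distribution and use a "same value as the original instance or not" indicator as the interpretable representation. A simplified sketch under those assumptions, not the library's actual code:

```python
# Simplified sketch of neighborhood sampling for discrete/binary features.
import numpy as np

rng = np.random.default_rng(0)

def perturb_discrete(instance, X_train, n_samples=500):
    """Return perturbed samples plus their binary interpretable representation."""
    n_features = len(instance)
    samples = np.empty((n_samples, n_features), dtype=X_train.dtype)
    for j in range(n_features):
        values, counts = np.unique(X_train[:, j], return_counts=True)
        # Draw feature j from its empirical (training-set) distribution.
        samples[:, j] = rng.choice(values, size=n_samples, p=counts / counts.sum())
    # Interpretable representation: 1 where the perturbed value equals the
    # original instance's value, 0 otherwise.
    interpretable = (samples == np.asarray(instance)).astype(int)
    return samples, interpretable
```

The local surrogate is then fit on the binary representation, weighted by a proximity kernel, just as in the continuous case.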

What is this for? I am intrigued (I'm a musician).

kenalexandremeridi