Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data Scientist, Aviva



- - -

Kasia discussed the complexities of interpreting black-box algorithms and how these may affect certain industries. She presented the most popular methods for interpreting machine learning classifiers, such as feature importance, partial dependence plots, and Bayesian networks. Finally, she introduced the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining the predictions of black-box learners, including text- and image-based models, using breast cancer data as a specific case scenario.
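For readers who want to try the workflow described above, here is a minimal sketch (not from the talk), assuming the Python "lime" package and scikit-learn's built-in breast cancer dataset:

    # Minimal LIME-on-tabular-data sketch; names here are illustrative.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # The "black box": any classifier exposing predict_proba will do.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        discretize_continuous=True)

    # Explain one test prediction using the 5 most influential features.
    exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
    print(exp.as_list())  # [(feature condition, signed weight), ...]

The returned pairs are feature conditions with signed weights indicating how each one pushed this particular prediction.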

Comments

One issue with LIME that is overlooked is: can you really trust the explanations?
The examples in the video had features that could be intuitively understood, but in some cases there are no features a human can interpret directly.
Then you need to somehow verify that the explanations describe local model behavior appropriately before you draw any conclusions about feature importance.

LIME is a great method for checking your model's relevance when you know what to expect.
If you are dealing with something very abstract and understudied, you will have a hard time. Imagine working with that breast cancer data without the tabular description: all you have are the images themselves. In the wolf example you can tell clearly from the explanation that the model is detecting snow, but you won't be able to tell whether the cancer microscopy model is relevant from LIME alone.

fedorgalkin
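
The verification fedorgalkin asks for can be partly automated: the Python "lime" package records the R^2 of the fitted local surrogate on the Explanation object as `score`. A minimal sketch of a fidelity gate, where `r2_floor` is an arbitrary illustrative threshold rather than a lime default:

    # Sketch: refuse to read feature weights off a low-fidelity explanation.
    def weights_if_faithful(exp, r2_floor=0.5):
        """Return the explanation's feature weights only if the local
        surrogate fit the black box reasonably well in the neighborhood.
        `exp` is a lime Explanation; exp.score is the surrogate's R^2."""
        if exp.score < r2_floor:
            raise ValueError(
                f"Low local fidelity (R^2={exp.score:.2f}); "
                "do not trust this explanation's feature weights.")
        return exp.as_list()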

Great speaker on LIME! She breaks it down to make it easier to understand

dav

She is so smart, and she makes those nitty-gritty details really interesting through her sweet presentation

ahsanulhaque

Very good presentation, with just enough technical detail to keep it interesting

ViviMagri

Playing it at 1.5x speed makes it more understandable.

Great talk.

janzaibmbaloch

This was really good - thank you for putting this up and sharing it.

geoffbenstead

Great presentation. Can't believe it was 5 years ago; even the concerns she addressed still hold up.

bessa

Does LIME use permutation or perturbation (tweaking feature values) of the input of interest (x) that we would like to explain?

bryanparis
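
On the question above: LIME perturbs; it samples a synthetic neighborhood around the single instance x and weights samples by proximity, rather than permuting a column across the whole dataset. A rough numpy sketch of that core loop for continuous tabular features and a binary classifier (all names and the kernel width are illustrative, not the library's internals):

    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_like_weights(x, predict_proba, X_train,
                          n_samples=5000, kernel_width=0.75, seed=0):
        """Illustrative LIME-style loop, not the library implementation:
        perturb around x, weight samples by proximity, fit a local linear
        surrogate, and return its coefficients as attributions."""
        rng = np.random.default_rng(seed)
        sigma = X_train.std(axis=0)                      # per-feature scale
        # Perturbation: Gaussian noise around the instance of interest.
        Z = x + rng.normal(size=(n_samples, x.shape[0])) * sigma
        # Proximity kernel: nearby samples get higher weight.
        d = np.linalg.norm((Z - x) / sigma, axis=1)
        w = np.exp(-(d ** 2) / kernel_width ** 2)
        # Query the black box (binary classifier assumed) and fit locally.
        y = predict_proba(Z)[:, 1]
        return Ridge(alpha=1.0).fit(Z, y, sample_weight=w).coef_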

I am beginning to work with H2O/lime. Thank you for providing the rationale for using these tools.

williamhenry

This is a wonderful presentation that touches on such an important part of deep network development. I was just wondering whether LIME can be used to interpret time-series classification problems, and what that would look like?

amirnasser

12:02 I DO have an interest in understanding LIME more deeply, so... MANY THANKS for this perfect video!!

bryanparis

Great talk. I'm excited to try out lime.

aristoi

Great piece of work!! Really appreciate it!
Curious to understand when not to use LIME for tabular and image data?

Secondly, what are the pros of LIME over Shapley values where tabular data is concerned, and over Grad-CAM / saliency maps / gradient maps where image data is concerned?

Thank you in advance!

akshayijantkar
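
On the image side of this question: unlike Grad-CAM or gradient/saliency maps, LIME needs no gradient access to the network, only a batch predict function over perturbed superpixels, so it applies to any image classifier. A minimal sketch assuming the Python "lime" package, where `image` and `classifier_fn` are hypothetical placeholders for your own data and model:

    # Minimal image-LIME sketch; `image` is an H x W x 3 array and
    # `classifier_fn` maps a batch of images to class probabilities.
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=1, hide_color=0, num_samples=1000)

    # Highlight the superpixels that most support the top predicted class;
    # this is how the "snow, not wolf" artefact becomes visible.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True,
        num_features=5, hide_rest=False)
    overlay = mark_boundaries(img, mask)  # ready for plt.imshow(overlay)

For tabular data, a commonly cited trade-off is simplicity and speed (LIME's single local linear fit) versus the consistency guarantees of Shapley values; both are perturbation-based and model-agnostic.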

As scientists, we are obliged to seek the truth before adhering to legislation.

nkristianschmidt