Interpreting ML models with explainable AI

We often trust our high-accuracy ML models to make decisions for our users, but it’s hard to know exactly why or how these models come to specific conclusions. Explainable AI provides a suite of tools to help you interpret your ML model’s predictions. Listen to this discussion on how to use Explainable AI to ensure your ML models are treating all users fairly. Watch a presentation on how to analyze image, text, and tabular models from a fairness perspective using Explanations on AI Platform. Finally, learn how to use the What-If Tool, an open-source visualization tool for optimizing your ML model’s performance and fairness.
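
The What-If Tool mentioned in the description runs directly inside a notebook such as Colab. Below is a minimal sketch of pointing it at a Keras regression model on tabular data; the model path, feature names, and example rows are hypothetical placeholders, and the witwidget package (pip install witwidget) is assumed to be installed.

```python
# Minimal sketch: loading the What-If Tool for a Keras regression model in a
# notebook. All data, feature names, and the model path are hypothetical.
import numpy as np
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Hypothetical tabular examples: one list of raw feature values per row.
feature_names = ["age", "income", "tenure_months"]
examples = [
    [34, 52000, 12],
    [51, 87000, 48],
    [27, 31000, 6],
]

# Hypothetical Keras model trained elsewhere and saved to disk.
model = tf.keras.models.load_model("my_regression_model")

def predict_fn(inputs):
    # The tool passes a batch of examples; return one prediction per example.
    return model.predict(np.array(inputs, dtype=np.float32)).flatten()

# Configure the tool for regression, hand it the custom predict function,
# and render the interactive widget in the notebook output cell.
config_builder = (
    WitConfigBuilder(examples, feature_names)
    .set_custom_predict_fn(predict_fn)
    .set_model_type("regression")
)
WitWidget(config_builder, height=600)
```

Because the tool only needs a function that maps a batch of examples to predictions, the same set_custom_predict_fn hook can wrap models from other frameworks as well; the widget then provides the feature editing, partial dependence, and counterfactual views interactively.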

Speaker: Sara Robinson

Watch more:

#GoogleCloudNext

AI218

product: Cloud - General; fullname: Sara Robinson; event: Google Cloud Next 2020;
Comments

This is exactly what I’ve been looking for. Thanks for making these videos!

livantorres

A very useful and helpful video! Thank you.

peterpaul

Great video. Thank you.

Can you provide information on how to explore global and local feature explainability entirely in Colab for regression/Keras models (possibly with the What-If Tool)?

coryrandolph

Can we try those demos on the Google Cloud free tier?

username

How do we deploy a model with Explainable AI in a custom classification pipeline for logistic regression?

dipanwitamitra

Is feature importance available for a multivariate forecasting AutoML problem?

souravthakur

It's helpful, thanks!
However, is this Explainable AI service available for PyTorch models? I can only see support for TensorFlow models.

prachijadhav

Can Explainable AI be used for text data?

sahilaseeja