Making sense of ML Black Box: Interpreting ML Models Using SHAP

Extracting insights from a complex machine learning model is not easy, so for many people machine learning models are, in a sense, a black box. This is a problem especially in high-stakes sectors like banking and healthcare. In this talk we will discuss how to increase the transparency, auditability, and stability of a model using the insights SHAP provides: we will explain the reasoning behind individual predictions and show how these explanations can be aggregated into powerful model-level insights. We will also see the code to calculate SHAP values.
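To give a flavour of what "calculating SHAP values" means, here is a minimal, self-contained sketch of the exact Shapley value computation that SHAP approximates (in practice you would use the `shap` library, e.g. `shap.TreeExplainer`, rather than this brute-force version). The toy linear model, feature values, and baseline below are illustrative assumptions, not from the talk:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for one prediction.

    Features absent from a coalition are replaced by their baseline
    value. Cost is exponential in the number of features, which is
    why libraries like shap use model-specific approximations.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [f for f in features if f != i]
        for k in range(len(others) + 1):
            for s in combinations(others, k):
                # Shapley kernel weight for a coalition of size |s|
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                with_i = [x[j] if (j in s or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in s else baseline[j] for j in features]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

# Hypothetical toy model: prediction = 2*x0 + 3*x1
model = lambda v: 2 * v[0] + 3 * v[1]
x = [1.0, 2.0]
baseline = [0.0, 0.0]
print(shapley_values(model, x, baseline))  # -> [2.0, 6.0]
```

Note the key property on display: the per-feature attributions sum to `model(x) - model(baseline)` (here 2.0 + 6.0 = 8.0), which is what makes SHAP explanations of individual predictions consistent and aggregatable.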

Audience level: Intermediate

Speaker: Ravi Singh, Data Scientist at HBO Europe, developing predictive models and shaping how the marketing team consumes data and insights through highly usable, visual data-analysis products.