The MOST IMPORTANT topic in Machine Learning right now

Today we're discussing the most important topic in Machine Learning right now, namely model explainability. It is one of the hottest discussion points in the data community, because ultimately, if we cannot understand how models arrive at their predictions, they are useless in many practical applications.

As I mentioned in the video, I'm linking all the relevant links:

-------------------------------------------
If you'd like to make my day, and help me keep going (with a better mic, lol), feel free to get me my beloved coffee :)

Instagram: @karo_sowinska
Comments

I don't know why YouTube hides these awesome channels from my recommendations. Absolutely loved this content.

ibrahimkhurshid

I've noticed a move away from black-box models toward clear ones instead, and I love it. A lot of companies will be moving in this direction too.

duckmeat

When you combine the data sources you lose on explainability (though increase accuracy). For XAI better to have a separate model for each "kind" of data, then have them "vote" on the outcome. E.g. for a fraud detection algo we had one model whose features would pick up gibberish usernames, another whose features analyzed the IP address space, another that was trained on the user's behavior. Then we could see which of the "triggers" would go off.
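The "one model per kind of data, then vote" idea above can be sketched as follows. Everything here is a hypothetical illustration — the three detectors, their thresholds, and the sample record are stand-ins, not the commenter's actual fraud system:

```python
# Hedged sketch: one simple detector per data source, then a majority vote,
# with a record of which "triggers" fired so the decision stays explainable.

def gibberish_username(username: str) -> bool:
    # Crude proxy: flag usernames with a very low vowel ratio (keyboard mashing).
    vowels = sum(c in "aeiou" for c in username.lower())
    return vowels / max(len(username), 1) < 0.2

def suspicious_ip(ip: str) -> bool:
    # Crude proxy: flag a (hypothetical) known-bad address prefix.
    return ip.startswith("203.0.113.")

def odd_behavior(logins_last_hour: int) -> bool:
    return logins_last_hour > 20

def fraud_vote(record: dict) -> tuple[bool, list[str]]:
    """Majority vote over per-source detectors; also report which triggers fired."""
    triggers = {
        "username": gibberish_username(record["username"]),
        "ip": suspicious_ip(record["ip"]),
        "behavior": odd_behavior(record["logins_last_hour"]),
    }
    fired = [name for name, hit in triggers.items() if hit]
    return len(fired) >= 2, fired

flagged, why = fraud_vote(
    {"username": "xkrtzq", "ip": "203.0.113.7", "logins_last_hour": 3}
)
print(flagged, why)  # → True ['username', 'ip']
```

The payoff is the `why` list: instead of one opaque score, you can point at exactly which data sources triggered.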

Deep learning does "aggregate" low level features into middle-range features (neurons), but these are not human-interpretable. You need to create your own interpretable middle-range features. You do lose on performance a bit this way. But you could have one model with all the data fused together that actually does the trading, and another set of simpler models that is trained on the output space of your complex model to "explain" it.
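The surrogate-explainer pattern described above — train a simple, interpretable model on the *predictions* of the complex one and read off its coefficients — can be sketched like this. The "complex" model here is a toy stand-in function, not a real trading model, and a real setup would use a proper library with an intercept term:

```python
# Minimal surrogate-model sketch: probe an opaque model, then fit a linear
# approximation to its outputs by least squares (normal equations).
import random

def complex_model(x1: float, x2: float) -> float:
    # Opaque stand-in for the fused, high-accuracy model.
    return 3.0 * x1 - 1.5 * x2 + 0.1 * x1 * x2

# Probe the complex model on sampled inputs.
random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
y = [complex_model(x1, x2) for x1, x2 in X]

# Solve the 2x2 normal equations for y ≈ w1*x1 + w2*x2 (no intercept, for brevity).
s11 = sum(x1 * x1 for x1, _ in X)
s22 = sum(x2 * x2 for _, x2 in X)
s12 = sum(x1 * x2 for x1, x2 in X)
t1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
t2 = sum(x2 * yi for (_, x2), yi in zip(X, y))
det = s11 * s22 - s12 * s12
w1 = (s22 * t1 - s12 * t2) / det
w2 = (s11 * t2 - s12 * t1) / det
print(f"surrogate weights: x1≈{w1:.2f}, x2≈{w2:.2f}")  # close to 3.0 and -1.5
```

The surrogate's weights recover the dominant linear behavior of the opaque model; the small interaction term is exactly the kind of performance you trade away for interpretability, as the comment notes.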

Just like explaining human decision making, you have how you REALLY decided (instinct/intuition), and the linguistic rationale/justification that backs this up. This also comes up in court cases or medical diagnoses. Judges / radiologists can always find a justification for their decision, but is that how they really decided? No, it's all muddled up in their human brains.

griffmccoy

That’s why it was so valuable to me to have graduated with a quantitative economics degree. You learn so much about interpretability and causality. Great video :)

simonlauer

You make me want to learn this field; your way of talking about it is glamorous.
Love it

sunshinenassima

Thanks for your great video. I've noticed you're not making YouTube videos very often, but I encourage you to keep going, because you're making good stuff!
Btw, I don't think your mic is wrecked; it's your necklace that made the noise! :D

mryderoc

I just discovered your channel and wow! It's really refreshing and so full of candour. Thank you so much Karolina :).

bigdhav

Not sure about the industry, but I do face this issue with one of my friends: despite having a great model, he always fails to explain to me how the model works. I'll surely share this with him; I hope he learns. Moreover, I've seen many people working with different toolkits like RASA and Kaldi, but most of them don't know what's really happening inside; they just pick the data, clean it, and feed it to the model, unsure of what's going to happen next. I really hope people start focusing on model explainability.

sukeshseth

Really love the way you explain ML topics

calebmnb

This was amazing! Thanks for sharing your experience.

FarisSkt

Hey Karolina, nice to meet you! I just found your channel, love what you're doing!

I like how clear and detailed your explanations are as well as the depth of knowledge you have surrounding the topic! Since I run a tech education channel as well, I love to see fellow Content Creators sharing, educating, and inspiring a large global audience. I wish you the best of luck on your YouTube Journey, can't wait to see you succeed! Your content really stands out and you've put so much thought into your videos, I applaud you on that!

Cheers, take care, and keep up the great work ;)

empowercode

Hi Karolina. Interesting story. One question remains unanswered for me. I totally get what they mean about black-box modelling, BUT — why do you think they agreed to meet with you? What do you think was in it for them?

laurielounge

I'm so excited I found this channel. I'm just learning ML because I want to do something similar to what you just described. Though I mostly expect my efforts not to yield any significant results on the stock market, it's an exciting topic and will surely bring a lot of challenges, which in turn will require me to learn a lot of new things. This is a whole new Pandora's box compared to my C/C++ dev day job 😅

forecaster

May I ask if you have tried this system yourself?

albertocatania

I am a master's student in statistics, and next semester I'm going to write my master's project on reinforcement learning with applications in the stock market :D Nice video. Did you try giving your algorithm real money to trade?

GodOfWar

Thanks. Machine learning algos are all pretty much the same. Good for distinguishing an apple from an orange, for now.

ChoogyNet

Any chance you could do a video on causal ML? I imagine it's relatively similar to explainable ML.

roshan

Really amazing facts and an empowering video.
Your heavy eyes describe either the hard work done or the heavy weekends 😅😅 Overall a really appreciable video and nature; wish you grow more.

theshah

This was so insightful, Karolina! And such fun storytelling 😃 Thank you for sharing these insights! I'd better share a Ko-fi with you now 😃😊

diahidvegi

Advanced ML approaches with explainable inner workings? I'm in! Thanks for sharing.

nathankomer