[09x04] Bayesian Logistic Regression | Turing.jl | Probability of Spotting Japanese Wolf Spiders

As a motivating example, you'll create a Bayesian logistic regression model to predict the presence of the Japanese wolf spider on a beach in Japan based on the median grain size of the beach's sand.
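
Below is a minimal sketch of that kind of model in Turing.jl, with made-up data and illustrative variable names (grain_size, presence); the video's own code, data, and priors may differ.

using Turing, StatsFuns

# Bayesian logistic regression: spider presence as a function of median grain size
@model function spider_model(grain_size, presence)
    # Weakly informative priors on the intercept and slope (illustrative choices)
    intercept ~ Normal(0, 10)
    slope ~ Normal(0, 10)

    # Likelihood: the logistic link maps the linear predictor to a probability
    for i in eachindex(presence)
        p = logistic(intercept + slope * grain_size[i])
        presence[i] ~ Bernoulli(p)
    end
end

# Made-up data: median grain size (mm) and observed presence (1) / absence (0)
grain_size = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65]
presence   = [0, 0, 0, 1, 1, 1]

# Draw posterior samples with the No-U-Turn Sampler
chain = sample(spider_model(grain_size, presence), NUTS(), 1_000)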

00:00 Intro
00:50 Set-up
02:20 Data
06:38 Pluto Notebook
11:13 Bayesian Logistic Regression
22:55 Final Thoughts
24:49 Outro

##############################
# Links for this tutorial
##############################

# Data for this tutorial

# Code for this tutorial (GitHub)

# Suzuki et al. "Distribution of an endangered burrowing spider Lycosa ishikariana in the San'in Coast of Honshu, Japan (Araneae: Lycosidae)." Acta Arachnologica, 2006. (PDF)

# doggo dot jl. "[05x03] Logistic Regression | Classification | Supervised Learning | Machine Learning [Julia]". (YouTube video)

##############################
# Links for this series
##############################

# Link to Series 9 Playlist [Julia Probabilistic Programming for Beginners]

# The Julia Programming Language

# VS Code

##############################

Join Button (Channel Membership):
If you like what I do, then please consider joining and becoming a channel member.

Thank you!
Comments

Thanks for your cool videos! 🌹🌹🌹
Could you please make a video explaining how to use probabilistic programming when we have a time-series dataset?

mortezababazadeh

Thank you so much for this great content!

QQ-xxmo

Bro, I need all of this Bayesian inference in Python.

musiknation

Some comments:
- When you show the mean estimates, you say the results are consistent with our prior belief. *Technically*, since the uniform prior assigns probability zero to any value outside its range, your estimates are forced to be consistent with it. It's generally recommended to use priors that assign nonzero probability over the entire support of the parameter (e.g. unbounded for means, non-negative for variances, etc.) precisely so that, in the event your data actually is not consistent with your prior belief, the posterior can still reach low-prior-probability regions (see the sketch after this list).
- Finally, it's unusual but possible to see rhat values around 0.99 when the sampler explores the posterior very efficiently, so values slightly below 1 are not an immediate cause for alarm. I have never seen values much below that, but if I did (say, something like 0.8), my first thought would be that there is a bug in the rhat calculation itself. I'd love to hear the smallest rhat someone has legitimately seen, though!
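
A minimal sketch of the prior-support point above, using illustrative models and made-up data (none of this is the video's code): a Uniform prior can never let the posterior leave its own range, while an unbounded Normal prior lets the data pull the estimate wherever it needs to go. The rhat diagnostic mentioned above appears in the summary table that MCMCChains reports.

using Turing, MCMCChains

@model function bounded_prior(y)
    μ ~ Uniform(-1, 1)              # zero prior probability outside [-1, 1]
    for i in eachindex(y)
        y[i] ~ Normal(μ, 1)
    end
end

@model function unbounded_prior(y)
    μ ~ Normal(0, 10)               # nonzero prior probability on the whole real line
    for i in eachindex(y)
        y[i] ~ Normal(μ, 1)
    end
end

y = randn(50) .+ 3                  # made-up data whose true mean (≈3) lies outside [-1, 1]

chain_bounded   = sample(bounded_prior(y), NUTS(), 1_000)    # posterior gets pushed against the upper bound
chain_unbounded = sample(unbounded_prior(y), NUTS(), 1_000)  # posterior is free to center near 3

summarystats(chain_unbounded)       # summary table includes the rhat column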

Anyway, thanks a lot for your great videos, they're helping make my transition into Julia a lot less painful. :)

luna_fazio