17 Probabilistic Graphical Models and Bayesian Networks

Virginia Tech
Machine Learning
Fall 2015
Comments
Author

The course at my university is well taught but goes into a lot of detail. Your videos help me see the forest for the trees while still being complete and correct. Thank you for the quality content.

perkelele
Author

Wow... this is amazing. Why can't my university professors teach this clearly? In this age where we do not read textbooks much (and rely on lecture after lecture), there need to be major improvements in teaching...

tradertim
Author

🎯 Key Takeaways for quick navigation:

00:00 📊 Probabilistic graphical models, such as Bayesian networks, represent probability distributions through graphs, enabling the visualization of conditional independence structures.
01:34 🎲 Bayesian networks consist of nodes (variables) and directed edges representing conditional dependencies, allowing the representation of full joint probability distributions.
03:21 🔀 Bayesian network structures reveal conditional independence relationships, simplifying the calculation of conditional probabilities and inference.
09:10 🧠 Naive Bayes and logistic regression can be viewed as specific Bayesian networks, with the former relying on conditional independence assumptions.
11:55 📜 Conditional independence is a key concept in Bayesian networks: each variable is independent of its non-descendants given its parents.
15:15 ⚖️ Inference in Bayesian networks often involves calculating marginal probabilities efficiently, which can be achieved through variable elimination, avoiding full enumeration.
23:54 ⚙️ Variable elimination replaces summations over variables with precomputed factors, eliminating variables one by one so that marginal probabilities can be computed without full enumeration (see the sketch after this list).
28:07 ⏱️ In tree-structured Bayesian networks, variable elimination can achieve linear time complexity for exact inference.
29:02 📊 Learning in a fully observed Bayesian network is straightforward: the conditional probability tables are estimated by counting occurrences in the training data.
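
To make the contrast between enumeration and variable elimination concrete, here is a minimal sketch in Python. It is not from the lecture: the three-node chain Cloudy -> Rain -> WetGrass and its CPT numbers are made up for illustration. Both functions compute the same marginal P(WetGrass = true); they differ only in whether intermediate sums are reused.

```python
# Hypothetical three-node chain Cloudy -> Rain -> WetGrass with
# made-up CPTs, indexed as P_r[c][r] = P(R=r | C=c), etc.
P_c = {True: 0.5, False: 0.5}                 # P(C)
P_r = {True: {True: 0.8, False: 0.2},         # P(R | C)
       False: {True: 0.1, False: 0.9}}
P_w = {True: {True: 0.9, False: 0.1},         # P(W | R)
       False: {True: 0.2, False: 0.8}}

def p_wet_enumeration():
    """Sum the full joint P(c)P(r|c)P(w|r) over every assignment:
    the number of terms grows exponentially with the network size."""
    return sum(P_c[c] * P_r[c][r] * P_w[r][True]
               for c in (True, False) for r in (True, False))

def p_wet_elimination():
    """Sum C out once into a factor f(r) = sum_c P(c) P(r|c),
    then sum R out: each variable is processed exactly once."""
    f = {r: sum(P_c[c] * P_r[c][r] for c in (True, False))
         for r in (True, False)}              # factor over R alone
    return sum(f[r] * P_w[r][True] for r in (True, False))

print(p_wet_enumeration())   # 0.515
print(p_wet_elimination())   # 0.515 -- same marginal, fewer repeated sums
```

On three binary variables both routes are trivially cheap, but enumeration grows with the size of the joint distribution while elimination on a chain touches each variable once, which is the linear-time claim for tree-structured networks at 28:07.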

ytpah
Author

Thanks so much for this video! It was very insightful for me👍🏾

Ama-be
Author

Excellent way of explaining. Perhaps you could also show how variable elimination reduces the table sizes, to benefit those who are still not familiar with computing them.

kaiyongong
Author

Thanks beyond measure, Bert!

johng
Author

He mentioned the next class at the end. What video is the next class?

nelsonekos
Author

What is the link to the next video immediately after this?

nelsonekos
Author

I finally understood the concept of conditional independence, thanks a lot!

sanyuktasuman
Author

How can I use Bayesian networks for machine learning, and what suitable software is available for that?

ghady
Author

Sorry, why doesn't the f function depend on r at 21:50? I know r is in the conditional, but to me that means that if r changes, the function would change (so it's a parameter rather than a variable, but shouldn't it still appear?).
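
One hedged way to see this (a sketch with hypothetical names; the exact factor at 21:50 may differ): when r is fixed as evidence, the factor's table is sliced at that value, so the resulting function no longer takes r as an argument, even though different evidence would have produced a different table.

```python
# Hypothetical CPT, indexed as P_x_given_r[r][x] = P(X=x | R=r).
P_x_given_r = {True: {True: 0.7, False: 0.3},
               False: {True: 0.4, False: 0.6}}

r_evidence = True                              # evidence fixes r
f = {x: P_x_given_r[r_evidence][x] for x in (True, False)}
# f is a function of x only: r is baked into the table rather than
# passed in, i.e. a parameter of the factor, not one of its arguments.
```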

AllTheFishAreDead
Author

I still don't understand the difference between enumeration and variable elimination.

huangbinapple
Author

Sorry, I am confused by the first rule for independence in Bayes nets: "Each variable is conditionally independent of its non-descendants given its parents." What do the non-descendants of a node have to do with its parents?

LouisChiaki
Author

Could you please give a more hands-on tutorial on how to carry out calculations on a Bayesian network?

harcourtpameela
Author

Great explanation! Thank you for the video!

rezaqorbani
Author

That was a super clear explanation for me. Thanks!

kdl
Author

I have a question about independence in Bayes nets:
At 13:14 you say that C is independent of B and D (because of the first rule).
At 14:30 you say that a variable is independent of every variable outside its Markov blanket, but there D is included.
Does that still mean that C is independent of D because of the first rule, or not? I'm a little confused at this point.


Anyway, great videos, great explanations; thank you very much for creating them.

Raventouch
Author

With all due respect to you (though not to the people who created this "variable elimination" thing), variable "elimination" sounds like bullshit to me, because you already compute all the possible states of the variable you are going to eliminate, meaning you aren't really eliminating anything. Or am I wrong?
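
The objection is understandable, and it is true that every state of the eliminated variable gets visited. The saving is that each state is visited once, and the resulting factor is then reused, instead of the same sum being recomputed inside every branch of a full enumeration. A rough, hypothetical operation count on a chain of n binary variables:

```python
# Back-of-the-envelope counts for a hypothetical chain of n binary
# variables; the constants are illustrative, not exact.
def ops_enumeration(n):
    return n * 2 ** n    # 2**n joint assignments, ~n multiplies each

def ops_elimination(n):
    return 4 * n         # each variable is summed out once into a small factor

for n in (5, 10, 20):
    print(n, ops_enumeration(n), ops_elimination(n))
# n=20: roughly 21 million operations versus 80 -- the work per
# "eliminated" variable is done once, not once per joint assignment.
```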

UsefulMotivation