From a5a703c11d4c31a70be6d401449892f0dbb662da Mon Sep 17 00:00:00 2001
From: wkaiz
Date: Fri, 1 Nov 2024 16:25:59 -0700
Subject: [PATCH] fixing images

---
 bayes-nets/d-separation.md | 18 +++++++++---------
 bayes-nets/elimination.md  |  4 ++--
 bayes-nets/structure.md    |  6 +++---
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/bayes-nets/d-separation.md b/bayes-nets/d-separation.md
index 30b49b7..ba548fe 100644
--- a/bayes-nets/d-separation.md
+++ b/bayes-nets/d-separation.md
@@ -63,11 +63,11 @@ An analogous proof can be used to show the same thing for the case where $$X$$ h
 
 ## 6.4.2 Common Cause
 
-![Common Cause with no observations](../assets/images/cause_free.PNG)
+Common Cause with no observations
 
 *Figure 3: Common Cause with no observations.*
 
-![Common Cause with Y observed](../assets/images/cause_observed.PNG)
+Common Cause with Y observed
 
 *Figure 4: Common Cause with Y observed.*
@@ -100,11 +100,11 @@ $$P(X | Z, y) = \frac{P(X, Z, y)}{P(Z, y)} = \frac{P(X|y) P(Z|y) P(y)}{P(Z|y) P(
 
 ## 6.4.3 Common Effect
 
-![Common Effect with no observations](../assets/images/effect_free.PNG)
+Common Effect with no observations
 
 *Figure 5: Common Effect with no observations.*
 
-![Common Effect with Y observed](../assets/images/effect_observed.PNG)
+Common Effect with Y observed
 
 *Figure 6: Common Effect with Y observed.*
@@ -134,7 +134,7 @@ Common Effect can be viewed as ``opposite'' to Causal Chains and Common Cause
 This same logic applies when conditioning on descendants of $$Y$$ in the graph. If one of $$Y$$'s descendant nodes is observed, as in Figure 7, $$X$$ and $$Z$$ are not guaranteed to be independent.
 
-![Common Effect with child observations](../assets/images/effect_children.PNG)
+Common Effect with child observations
 
 *Figure 7: Common Effect with child observations.*
@@ -171,8 +171,8 @@ Any path in a graph from $$X$$ to $$Y$$ can be decomposed into a set of 3 consec
 **Active triples**: We can enumerate all possibilities of active and inactive triples using the three canonical graphs we presented below in the figures.
 
-![Active triples](../assets/images/active.PNG){:width="49%"}
-![Inactive triples](../assets/images/inactive.PNG){:width="49%"}
+Active triples
+Inactive triples
 
 ## 6.4.5 Examples
@@ -193,7 +193,7 @@ $$R \perp\!\!\!\perp B | T'$$ -- Not guaranteed

 $$R \perp\!\!\!\perp T' | T$$ -- Guaranteed
 
-![Example 2](../assets/images/lrbdtt.png){:width="50%"}
+Example 2
 
 This graph contains combinations of all three canonical graphs (can you list them all?).
@@ -211,7 +211,7 @@ $$L \perp\!\!\!\perp B | T'$$ -- Not guaranteed

 $$L \perp\!\!\!\perp B | T, R$$ -- Guaranteed
 
-![Example 3](../assets/images/rtds.png)
+Example 3
 
 This graph contains combinations of all three canonical graphs.
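
The d-separation rules exercised in the hunks above are mechanical enough to sketch in code. Below is a minimal Python illustration (the helper names and the `parents` graph encoding are our own, not from the notes) that classifies a single triple as active or inactive given the observed nodes, covering the three canonical cases, including the descendant rule from Figure 7:

```python
def descendants(parents, node):
    """All nodes reachable from `node` by following edges downward."""
    children = {c for c, ps in parents.items() if node in ps}
    reached = set(children)
    for child in children:
        reached |= descendants(parents, child)
    return reached


def triple_active(parents, x, y, z, observed):
    """Return True if the triple x - y - z is active given `observed`.

    `parents` maps each node to the set of its parents; `observed` is
    the set of observed (shaded) nodes.
    """
    if x in parents[y] and z in parents[y]:
        # Common effect (x -> y <- z): active iff y or one of its
        # descendants is observed (Figure 7).
        return y in observed or bool(descendants(parents, y) & observed)
    chain = (x in parents[y] and y in parents[z]) or \
            (z in parents[y] and y in parents[x])
    common_cause = y in parents[x] and y in parents[z]
    if chain or common_cause:
        # Causal chain or common cause: active iff y is unobserved.
        return y not in observed
    return False  # x, y, z do not form a connected triple


# The common effect triple from Figures 5 and 6:
parents = {"X": set(), "Z": set(), "Y": {"X", "Z"}}
print(triple_active(parents, "X", "Y", "Z", observed=set()))   # False
print(triple_active(parents, "X", "Y", "Z", observed={"Y"}))   # True
```

Two nodes are then d-separated given the observations exactly when every undirected path between them contains at least one inactive triple.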

diff --git a/bayes-nets/elimination.md b/bayes-nets/elimination.md
index cac521b..bb599cc 100644
--- a/bayes-nets/elimination.md
+++ b/bayes-nets/elimination.md
@@ -22,11 +22,11 @@ An alternate approach is to eliminate hidden variables one by one. To **eliminat
 A **factor** is defined simply as an _unnormalized probability_. At all points during variable elimination, each factor will be proportional to the probability it corresponds to, but the underlying distribution for each factor won't necessarily sum to 1 as a probability distribution should. The pseudocode for variable elimination is here:
 
-![Variable Elimination](../assets/images/VarElim.png)
+Variable Elimination
 
 Let's make these ideas more concrete with an example. Suppose we have a model as shown below, where $$T$$, $$C$$, $$S$$, and $$E$$ can take on binary values. Here, $$T$$ represents the chance that an adventurer takes a treasure, $$C$$ represents the chance that a cage falls on the adventurer given that they take the treasure, $$S$$ represents the chance that snakes are released if an adventurer takes the treasure, and $$E$$ represents the chance that the adventurer escapes given information about the status of the cage and snakes.
 
-![Variable Elimination](../assets/images/another_bayes_nets.png)
+Variable Elimination
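
As a rough companion to the pseudocode referenced above (the `VarElim.png` figure, whose exact contents are not reproduced in this hunk), here is one way the join-and-sum-out step could look in Python; the factor representation used here is an assumption of this sketch, not the notes':

```python
from itertools import product

def eliminate(factors, var, domains):
    """Join every factor mentioning `var`, then sum `var` out.

    Each factor is a pair (vars, table): `vars` is a tuple of variable
    names and `table` maps assignments (tuples of values aligned with
    `vars`) to unnormalized probabilities.
    """
    touching = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]

    # Variables of the joined factor, with `var` summed out.
    kept = tuple(sorted({v for vs, _ in touching for v in vs} - {var}))

    table = {}
    for assignment in product(*(domains[v] for v in kept)):
        ctx = dict(zip(kept, assignment))
        total = 0.0
        for value in domains[var]:
            ctx[var] = value
            prod = 1.0
            for vs, t in touching:
                prod *= t[tuple(ctx[v] for v in vs)]
            total += prod
        table[assignment] = total
    return rest + [(kept, table)]


# Example: eliminate C from P(C | T) and P(E | C, S) (numbers made up).
domains = {"T": (0, 1), "C": (0, 1), "S": (0, 1), "E": (0, 1)}
f_c = (("C", "T"),
       {(c, t): 0.9 if c == t else 0.1 for c in (0, 1) for t in (0, 1)})
f_e = (("E", "C", "S"),
       {(e, c, s): 0.25 for e in (0, 1) for c in (0, 1) for s in (0, 1)})
new_factors = eliminate([f_c, f_e], "C", domains)  # one factor over (E, S, T)
```

Eliminating each hidden variable in turn this way, then joining the remaining factors and normalizing, yields the query distribution, matching the high-level procedure described above.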

diff --git a/bayes-nets/structure.md b/bayes-nets/structure.md
index 3962564..7a110a6 100644
--- a/bayes-nets/structure.md
+++ b/bayes-nets/structure.md
@@ -13,11 +13,11 @@ In this class, we will refer to two rules for Bayes Net independences that can b
 - **Each node is conditionally independent of all its ancestor nodes (non-descendants) in the graph, given all of its parents.**
 
- ![Parents](../assets/images/parents.png)
+Parents
 
 - **Each node is conditionally independent of all other variables given its Markov blanket.** A variable’s Markov blanket consists of parents, children, and children’s other parents.
 
- ![Markov Blanket](../assets/images/blanket.png)
+Markov Blanket
 
 Using these tools, we can return to the assertion in the previous section: that we can get the joint distribution of all variables by joining the CPTs of the Bayes Net.
@@ -28,7 +28,7 @@ This relation between the joint distribution and the CPTs of the Bayes net works

 Let's revisit the previous example. We have the CPTs $$P(B)$$ , $$P(E)$$ , $$P(A |B,E)$$ , $$P(J | A)$$ and $$P(M | A)$$ , and the following graph:
 
-![Basic Bayes Net Examples](../assets/images/basic_bayes_nets.png)
+Basic Bayes Net Examples
 
 For this Bayes net, we are trying to prove the following relation:
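
The structure.md rules quoted above are also easy to operationalize. For instance, a node's Markov blanket (its parents, its children, and its children's other parents) can be read directly off the graph; a small sketch, reusing the same hypothetical `parents` mapping as the earlier snippet:

```python
def markov_blanket(parents, node):
    """Parents, children, and children's other parents of `node`."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | co_parents
```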
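
The relation being set up at the end of the structure.md hunk is that the joint distribution factors into the five listed CPTs: $$P(B, E, A, J, M) = P(B) P(E) P(A | B, E) P(J | A) P(M | A)$$. A minimal sketch of evaluating one entry of that product follows; the CPT values are placeholders for illustration, since the notes' actual numbers are not shown in this patch:

```python
# Placeholder CPT values for illustration only -- not the notes' numbers.
P_B = {True: 0.001, False: 0.999}                    # P(B)
P_E = {True: 0.002, False: 0.998}                    # P(E)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(A=true | B, E)
P_J = {True: 0.90, False: 0.05}                      # P(J=true | A)
P_M = {True: 0.70, False: 0.01}                      # P(M=true | A)

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) as the product of the five CPT entries."""
    p_a = P_A[(b, e)] if a else 1.0 - P_A[(b, e)]
    p_j = P_J[a] if j else 1.0 - P_J[a]
    p_m = P_M[a] if m else 1.0 - P_M[a]
    return P_B[b] * P_E[e] * p_a * p_j * p_m

print(joint(True, False, True, True, False))  # one entry of the joint
```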