![[Pasted image 20260405233534.png]]

(a) Conditional Probability Table

In a Noisy-OR model, the probability that the effect $Y$ is absent ($Y=0$) is the product of the independent failure probabilities of the active causes $X_i$.

Let $q_i$ be the probability that $Y=0$ given that only $X_i=1$. The general Noisy-OR formula is:

$$P(Y=0 \mid \mathbf{X}) = \prod_{i:\,X_i=1} q_i \qquad\qquad P(Y=1 \mid \mathbf{X}) = 1 - \prod_{i:\,X_i=1} q_i$$

First, we extract the failure probabilities $q_1, q_2, q_3$ from the given rows:

  1. **Find $q_2$:**
     $P(Y=1 \mid 0,1,0) = \tfrac{1}{3} \Rightarrow P(Y=0 \mid 0,1,0) = \tfrac{2}{3}$. Therefore, $q_2 = \tfrac{2}{3}$.
  2. **Find $q_3$:**
     $P(Y=1 \mid 0,1,1) = \tfrac{4}{5} \Rightarrow P(Y=0 \mid 0,1,1) = \tfrac{1}{5}$.
     Using the Noisy-OR formula: $q_2 q_3 = \tfrac{1}{5} \Rightarrow \tfrac{2}{3} q_3 = \tfrac{1}{5} \Rightarrow q_3 = \tfrac{3}{10}$.
  3. **Find $q_1$:**
     $P(Y=1 \mid 1,1,1) = \tfrac{5}{6} \Rightarrow P(Y=0 \mid 1,1,1) = \tfrac{1}{6}$.
     Using the formula: $q_1 q_2 q_3 = \tfrac{1}{6} \Rightarrow q_1 \cdot \tfrac{1}{5} = \tfrac{1}{6} \Rightarrow q_1 = \tfrac{5}{6}$.

Now, we calculate the missing values in the table:

Completed Table:

| $X_1$ | $X_2$ | $X_3$ | $P(Y=1 \mid X_1,X_2,X_3)$ |
|:-:|:-:|:-:|:-:|
| 0 | 0 | 0 | $0$ |
| 1 | 0 | 0 | $1/6$ |
| 0 | 1 | 0 | $1/3$ |
| 0 | 0 | 1 | $7/10$ |
| 1 | 1 | 0 | $4/9$ |
| 1 | 0 | 1 | $3/4$ |
| 0 | 1 | 1 | $4/5$ |
| 1 | 1 | 1 | $5/6$ |
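As a sanity check, the completed table can be recomputed from the extracted failure probabilities $q_1 = 5/6$, $q_2 = 2/3$, $q_3 = 3/10$. This is just a sketch of the Noisy-OR formula above, using exact fractions:

```python
# Recompute the Noisy-OR CPT from the extracted failure probabilities.
from fractions import Fraction as F

q = [F(5, 6), F(2, 3), F(3, 10)]  # q1, q2, q3

def p_y1(x1, x2, x3):
    """P(Y=1 | X) = 1 - product of q_i over the active causes (X_i = 1)."""
    fail = F(1)
    for qi, xi in zip(q, (x1, x2, x3)):
        if xi == 1:
            fail *= qi
    return 1 - fail

table = {x: p_y1(*x) for x in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
                               (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]}
```

For instance, `table[(1, 1, 0)]` evaluates $1 - q_1 q_2 = 1 - \tfrac{5}{6}\cdot\tfrac{2}{3} = \tfrac{4}{9}$, matching the table row.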

![[Pasted image 20260405233526.png]]

a) Joint Probability Distribution

The joint probability distribution for a Bayesian network is the product of the conditional probability distributions of each node given its parents.

Looking at the graph, we can identify the parents for each node:

- $A$: no parents
- $C$: no parents
- $B$: parents $A$ and $C$
- $D$: parent $C$
- $E$: parent $B$

Therefore, the joint probability distribution is:

$$P(A,B,C,D,E) = P(A)\,P(C)\,P(B \mid A,C)\,P(D \mid C)\,P(E \mid B)$$
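As a quick sketch, the factorization can be evaluated numerically. The CPT values below are made-up placeholders (the exercise does not specify them); only the factorization structure comes from the network. Any valid choice of CPTs must make the joint sum to 1 over all assignments:

```python
# Hypothetical CPTs for binary variables; only the structure
# P(A) P(C) P(B|A,C) P(D|C) P(E|B) comes from the network.
from itertools import product

P_A = {0: 0.4, 1: 0.6}                                             # P(A)
P_C = {0: 0.7, 1: 0.3}                                             # P(C)
P_B = {(a, c): {0: 0.5, 1: 0.5} for a in (0, 1) for c in (0, 1)}   # P(B | A, C)
P_D = {c: {0: 0.8, 1: 0.2} for c in (0, 1)}                        # P(D | C)
P_E = {b: {0: 0.1, 1: 0.9} for b in (0, 1)}                        # P(E | B)

def joint(a, b, c, d, e):
    """Product of each node's CPT entry given its parents."""
    return P_A[a] * P_C[c] * P_B[(a, c)][b] * P_D[c][d] * P_E[b][e]

# A valid joint distribution sums to 1 over all 2^5 assignments.
total = sum(joint(*v) for v in product((0, 1), repeat=5))
```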

b) Conditional Independence Assumptions

The fundamental assumption of a Bayesian network (the Local Markov Property) is that each node is conditionally independent of its non-descendants given its parents. Applying this rule to each node in the model gives us the following conditional independence assumptions:

- $A \perp \{C, D\}$ ($A$ has no parents; its non-descendants are $C$ and $D$)
- $C \perp A$ ($C$ has no parents; its only non-descendant is $A$)
- $B \perp D \mid A, C$
- $D \perp \{A, B, E\} \mid C$
- $E \perp \{A, C, D\} \mid B$

![[Pasted image 20260405233749.png]]

a) Joint Probability Distribution

In an undirected model, the joint probability distribution is factored into potential functions (often denoted as ϕ) over the maximal cliques (in this case, the pairs of connected nodes).

The joint probability distribution is:

$$p(A,B,C,D) = \frac{1}{Z}\,\phi(A,B)\,\phi(B,C)\,\phi(C,D)$$

Where Z is the partition function (a normalization constant) that ensures all probabilities sum to 1:

$$Z = \sum_{A,B,C,D} \phi(A,B)\,\phi(B,C)\,\phi(C,D)$$

b) Potential Functions

The problem states that each variable takes a value of 0 or 1, and it is 9 times more probable for neighboring variables to have equal values than different values.

Since the rule applies identically across the entire chain, we can use the same potential function for all pairs: $\phi(A,B) = \phi(B,C) = \phi(C,D)$.

We can define the potential function $\phi(X,Y)$ as a table (or matrix) where the rows represent $X \in \{0,1\}$ and the columns represent $Y \in \{0,1\}$:

|       | $Y=0$ | $Y=1$ |
|:-:|:-:|:-:|
| $X=0$ | 9 | 1 |
| $X=1$ | 1 | 9 |
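The model above can be made concrete with a short sketch: using the 9/1 potential for every edge of the chain, we can enumerate all $2^4$ assignments to compute the partition function $Z$ and the normalized joint:

```python
# Chain MRF p(A,B,C,D) = (1/Z) phi(A,B) phi(B,C) phi(C,D) with the
# "equal neighbors 9x more probable" potential from the table above.
from itertools import product

def phi(x, y):
    """Pairwise potential: 9 if the neighbors agree, 1 if they differ."""
    return 9 if x == y else 1

# Partition function Z: sum of the unnormalized product over all assignments.
Z = sum(phi(a, b) * phi(b, c) * phi(c, d)
        for a, b, c, d in product((0, 1), repeat=4))

def p(a, b, c, d):
    """Normalized joint probability of one assignment."""
    return phi(a, b) * phi(b, c) * phi(c, d) / Z
```

With this potential, $Z = 2000$, and the two all-equal configurations are the most probable, each with $p = 9^3/2000 = 0.3645$.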