
Recording Breakdown

Course Project Introduction and Group Formation

Bayesian Networks vs. Markovian Networks

Computational Complexity in Probabilistic Networks

Handling Large Nodes and Small Datasets

Local Probabilistic Models vs. Classical Decision Trees

Techniques for Simplifying Complex Networks

Noisy-OR and Noisy-MAX Models Explained

Application of Noisy-MAX in Medical Dosage and Risk Assessment

Combining Local Models and Context-Specific Simplifications

Applying Local Probabilistic Concepts to Markovian Networks

Cliques vs. Groups - Definitions and Differences

| Term | Definition | Key Characteristics |
| --- | --- | --- |
| Group | A set of individuals sharing some common features, but not necessarily highly similar ones. | May share a few features; loosely connected. |
| Clique | A fully connected subset where all nodes are closely related or highly similar. | High similarity, strong interconnections, fully connected. |
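
To make the distinction concrete, here is a minimal Python sketch (the graph and node names are illustrative assumptions, not taken from the lecture) that checks whether a set of nodes is fully connected and therefore forms a clique:

```python
from itertools import combinations

# Illustrative undirected graph, given as a set of symmetric edges.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
adj = set(edges) | {(b, a) for a, b in edges}

def is_clique(nodes):
    """A node set is a clique iff every pair of nodes in it is connected."""
    return all((u, v) in adj for u, v in combinations(nodes, 2))

print(is_clique({"A", "B", "C"}))  # True:  fully connected subset (a clique)
print(is_clique({"A", "B", "D"}))  # False: the A-D and B-D edges are missing
```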

Clique-Based Network Decomposition and Simplification

Quantitative Example of Clique Decomposition

Calculating Normalization Constant (Z) in Markovian Networks

Approximations and Relative Probabilities

Summary of Simplification Techniques

| Technique | Purpose | Outcome |
| --- | --- | --- |
| Local Probabilistic Models | Decompose large nodes into sub-nodes | Reduced CPT size, simpler calculations |
| Context-Specific Independencies | Condition on known contexts to prune irrelevant branches | Reduced computations by ignoring irrelevant variables |
| Noisy-OR / Noisy-MAX Models | Efficiently combine independent causes | Halved computational steps, accurate probabilities |
| Clique Decomposition | Split large networks into fully connected subgraphs | Independent processing, scalable inference |
| Relative Probability Approximation | Approximate normalization constants and probabilities | Computational efficiency with acceptable accuracy |

Course Project Details and Support

Administrative and Exam Information


Key Insights and Conclusions


Glossary of Key Terms

| Term | Definition |
| --- | --- |
| Bayesian Network (BN) | Directed acyclic graph representing cause-effect relationships with conditional probabilities. |
| Markovian Network | Undirected graph representing symmetric dependencies between nodes on the same level. |
| Conditional Probability Table (CPT) | Table detailing probabilities of node states given parent states in a BN. |
| Local Probabilistic Model | Model focusing on smaller sub-networks or sub-nodes to simplify computations. |
| Context-Specific Independence (CSI) | Independence that holds under certain variable assignments, allowing branch pruning. |
| Noisy-OR Model | Probabilistic model where multiple independent causes can produce an effect via an OR operation. |
| Noisy-MAX Model | Extension of Noisy-OR for multi-valued variables and graded effects. |
| Clique | Fully connected subgraph used for decomposing Markovian networks. |
| Normalization Constant (Z) | Sum of the unnormalized scores over all joint states, used to normalize the probability distribution. |

Frequently Asked Questions (FAQ)

Q: What is the difference between Bayesian and Markovian networks?
A: Bayesian networks are directed and model cause-effect relations, while Markovian networks are undirected and model symmetric dependencies among nodes at the same level.

Q: How do we handle large nodes with many variables?
A: By decomposing large nodes into sub-nodes and organizing them as trees inside the node, which reduces CPT sizes exponentially and simplifies the calculations.
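
As a rough back-of-the-envelope illustration (binary variables and the specific split are assumptions for the sake of the example, not numbers from the lecture), routing a node's parents through a few intermediate sub-nodes collapses the parameter count:

```python
# A binary node with n binary parents needs 2**n rows in its CPT.
n_parents = 12
full_cpt_rows = 2 ** n_parents          # 4096 rows in one monolithic CPT

# Route the 12 parents through 4 intermediate sub-nodes of 3 parents each:
# every local CPT stays tiny, and the final node now has only 4 parents.
sub_nodes, parents_per_sub_node = 4, 3
decomposed_rows = sub_nodes * 2 ** parents_per_sub_node + 2 ** sub_nodes

print(full_cpt_rows)    # 4096
print(decomposed_rows)  # 4 * 8 + 16 = 48
```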

more here

Q: What is the role of Noisy-OR in probabilistic inference?
A: Noisy-OR models efficiently combine multiple independent causes affecting a single effect, reducing computational steps needed for joint probability calculation.
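
A minimal sketch of that computation (the cause probabilities below are invented for illustration): under Noisy-OR, the effect fails to occur only if every active cause independently fails to produce it, so the combination needs one parameter per cause rather than a full CPT.

```python
def noisy_or(cause_probs, active, leak=0.0):
    """P(effect = 1) under a Noisy-OR model.

    cause_probs[i]: probability that cause i alone produces the effect
    active[i]:      1 if cause i is present, 0 otherwise
    leak:           probability of the effect when no cause is present
    """
    p_all_fail = 1.0 - leak
    for p, x in zip(cause_probs, active):
        if x:
            p_all_fail *= 1.0 - p      # each active cause must fail independently
    return 1.0 - p_all_fail

# Three independent causes; only the first two are present.
print(noisy_or([0.8, 0.5, 0.3], [1, 1, 0]))        # 1 - 0.2 * 0.5 = 0.9
print(noisy_or([0.8, 0.5, 0.3], [0, 0, 0], 0.01))  # leak only, ~0.01
```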

Q: How does clique decomposition improve performance?
A: It breaks large networks into smaller fully connected subgraphs, allowing parallel and independent computation, thus reducing complexity.
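
A minimal sketch of the idea (the cliques and factor values are assumptions made up for illustration): each clique carries its own factor, the unnormalized score of a joint state is the product of the clique factors, and the normalization constant Z is the sum of those scores over all states. Ratios of unnormalized scores already give relative probabilities without computing Z.

```python
from itertools import product

# Toy Markovian network over binary variables A, B, C with two cliques,
# {A, B} and {B, C}; each clique has its own factor (potential table).
phi_ab = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 5.0}
phi_bc = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

def score(a, b, c):
    # Cliques are scored independently; the joint unnormalized score is their product.
    return phi_ab[(a, b)] * phi_bc[(b, c)]

# Normalization constant Z: sum of unnormalized scores over all joint states.
Z = sum(score(a, b, c) for a, b, c in product((0, 1), repeat=3))

p_111 = score(1, 1, 1) / Z               # exact probability of state (1, 1, 1)
ratio = score(1, 1, 1) / score(0, 0, 0)  # relative probability, no Z required
print(Z, p_111, ratio)                   # 42.0, ~0.476, ~3.33
```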

Q: How is the project structured and supported?
A: Students form groups to work on cognitive modeling projects using real-world datasets, with support from domain experts focusing on psychological state assessment.


Slides Breakdown

Slide 2: The Representation Challenge

This slide outlines why standard Bayesian Networks struggle as they scale up.
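
As a tiny illustration of that scaling problem (binary variables assumed for simplicity; the numbers are not from the slide), the CPT of a single node grows exponentially with its number of parents:

```python
# Rows needed in one node's CPT as its number of binary parents grows:
# this is the "parameter explosion" behind the representation challenge.
for k in (2, 5, 10, 20):
    print(f"{k:2d} parents -> {2 ** k:,} CPT rows")
```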

more on LCM here

Slide 3: Types of Local Structure

To fix the parameter explosion, the lecture identifies four primary types of local structures:

Slides 4, 5 & 6: Context-Specific Independence (CSI) & Tree-CPDs

These slides explain how to visualize and define CSI using tree structures.
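
A minimal sketch of a tree-CPD (the variable names and probabilities are assumptions for illustration, not from the slides): the conditional distribution is stored as a decision tree, so once the context fixes an early branch, the parents on the pruned branches no longer matter; that is exactly the context-specific independence being defined.

```python
def p_alarm(burglary, earthquake, sensor_ok):
    """Tree-CPD for P(alarm = 1 | parents); the branches encode CSI.

    In the context sensor_ok = 0 the alarm never fires, so 'burglary' and
    'earthquake' are irrelevant there and their branches are pruned away.
    """
    if not sensor_ok:
        return 0.0                 # other parents ignored in this context
    if burglary:
        return 0.95                # sensor_ok = 1, burglary = 1
    return 0.30 if earthquake else 0.01

print(p_alarm(burglary=1, earthquake=0, sensor_ok=0))  # 0.0, burglary is ignored
print(p_alarm(burglary=1, earthquake=0, sensor_ok=1))  # 0.95
```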

Slides 7 & 8: Independence of Causal Influence (ICI) & The Noisy-OR Model

These slides transition to ICI, focusing on the Noisy-OR model.

Slides 9 & 10: The Calculation Complexity and the Noisy-OR Trick

Slides 11 & 12: Naming and Logic behind Noisy-OR

Slide 13: Summary: CSI vs. ICI

This slide provides a comparative table to distinguish the two main concepts:

more here

Slides 14, 15 & 16: Generalizing ICI & Noisy-MAX

Slides 17, 18 & 19: The Need for Hybrid Models