Outline

The Problem of Induction
Bayesian Inference
Hypothesis Space

Part 1: The Problem of Induction

What is Inductive Learning?

Inductive learning is the process of moving from specific observations to general rules.

Hume's Challenge

The philosopher David Hume argued that induction cannot be rationally justified.

Part 2: Bayesian Inference

Bayesian inference provides the mathematical framework to update our beliefs as we see new data.

Bayes' theorem: $$P(h|d) = \frac{P(d|h)\,P(h)}{P(d)}$$

Example: The "Fair vs. Biased" Coin

Suppose there are two hypotheses: a fair coin $h_f$ with $P(Heads|h_f) = 0.5$, and a biased coin $h_b$ with $P(Heads|h_b) = 0.9$. We assign each a prior of 0.5, then flip the coin once and observe Heads (our data $d$).

  1. Calculate the total probability (evidence) $P(d)$: $$P(d) = P(d|h_f)P(h_f) + P(d|h_b)P(h_b) = (0.5 \times 0.5) + (0.9 \times 0.5) = 0.25 + 0.45 = 0.70$$
  2. Calculate the posterior for $h_b$: $$P(h_b|d) = \frac{\text{Likelihood} \times \text{Prior}}{\text{Total Probability (Evidence)}} = \frac{0.9 \times 0.5}{0.70} = \frac{0.45}{0.70} \approx 0.643$$
  3. After seeing one Head, our belief that the coin is biased increases from 50% to 64.3%.
  4. Repeating the same three steps after a second Head, now using 0.643 as the new prior, pushes the posterior up further to 76.4%.

We can keep repeating these steps, taking our 76.4% belief as the new prior for each subsequent flip: every additional Head drives the posterior closer to certainty that the coin is biased, while any Tail would pull it back down.
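The sequential updating described above can be sketched in a few lines of Python, using the likelihoods and priors from the example (0.9 and 0.5 for the biased and fair coins, equal 0.5 priors):

```python
# Sequential Bayesian updating for the "Fair vs. Biased" coin example.
# Values are taken from the text: P(Heads|fair) = 0.5, P(Heads|biased) = 0.9,
# and equal priors of 0.5 for each hypothesis.

def update(prior_biased, p_head_biased=0.9, p_head_fair=0.5):
    """Return the posterior P(biased | Heads) given the current prior."""
    evidence = p_head_biased * prior_biased + p_head_fair * (1 - prior_biased)
    return p_head_biased * prior_biased / evidence

belief = 0.5                       # initial prior P(h_b)
for flip in range(2):              # observe two Heads in a row
    belief = update(belief)
    print(f"After Head #{flip + 1}: P(biased) = {belief:.3f}")
# After Head #1: P(biased) = 0.643
# After Head #2: P(biased) = 0.764
```

Note how yesterday's posterior becomes today's prior; this is exactly the "repeat the same steps" pattern in the worked example.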

Part 3: Hypothesis Space

Underfitting vs. Overfitting

The "Hypothesis Space" (H) represents the pool of possible models the algorithm can choose from.
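A common textbook illustration of hypothesis-space size (not from the original notes, but consistent with them) is polynomial regression: each maximum degree defines a larger nested space $H$, and training error alone always favors the biggest one.

```python
# Illustrative sketch: how growing the hypothesis space H affects fit.
# Assumed setup: 10 noisy samples of a sine curve; H_d = polynomials of
# degree <= d. These choices are for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=x.shape)

errors = []
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)           # pick the best h in H_degree
    errors.append(np.sum((np.polyval(coeffs, x) - y) ** 2))
    print(f"degree {degree}: training error = {errors[-1]:.4f}")
# Training error shrinks as H grows, but a degree-9 polynomial through
# 10 points is memorizing noise (overfitting), while degree 1 is too
# small a space to capture the curve (underfitting).
```

The degree-3 space is the "matched" choice here: large enough to express the underlying pattern, small enough not to chase the noise.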

The Matching Principle

Inductive Bias

Because of Hume's Problem, algorithms must use an "Inductive Bias" to prefer certain hypotheses over others.

Summary

| Concept | Role in Learning |
| --- | --- |
| Induction | Generalizing from samples to populations. |
| Hume's Problem | Pure induction cannot be logically justified. |
| Bayes' Theorem | Provides a mathematical way to update beliefs. |
| Hypothesis Space | The "search area" for the learning algorithm. |
| Priors | The "initial guess" that addresses Hume's problem. |