The final lecture introduces Uncertainty Management in Rule-based Expert Systems, moving away from exact mathematics and into how systems handle ambiguity.

In classical logic, knowledge is assumed to be perfect: a statement is either exactly TRUE or exactly FALSE. However, most real-world problems do not provide clear-cut facts. Uncertainty is defined as the lack of exact knowledge needed to reach a perfectly reliable conclusion.

Sources of Uncertain Knowledge

Expert systems must navigate several sources of ambiguity. The categories below are the ones usually cited in the MYCIN-era literature:

- Weak implications: heuristic rules rarely guarantee their conclusions, so each rule must carry a degree of belief rather than a crisp truth value.
- Imprecise language: experts express knowledge in vague terms such as "often" or "sometimes", which must be mapped onto numbers.
- Unknown data: some inputs are missing or incomplete, forcing the system to reason with approximate values.
- Combining the views of different experts: independent experts may attach conflicting weights to the same rules.

Certainty Factors (CF) Theory

To handle this uncertainty, the lecture introduces Certainty Factors, an approach originally developed for the MYCIN expert system.

A Certainty Factor (cf) measures an expert's degree of belief in a hypothesis, on a scale from −1 (complete disbelief) to +1 (complete belief), with 0 meaning no evidence either way.

When a rule fires, the certainty factor assigned to its conclusion is propagated through the reasoning chain. Here is how the mathematics works for different rule structures:

1. Single Premise Rules

If a rule relies on a single piece of evidence, you simply multiply the certainty of the evidence by the certainty assigned to the rule.

cf(H, E) = cf(E) × cf

2. Conjunctive (AND) Rules

When a rule requires multiple conditions to be true (e.g., IF A is true AND B is true), the system takes the minimum certainty factor among all the conditions, and multiplies it by the rule's certainty.

cf(H, E1 ∧ E2) = min[cf(E1), cf(E2)] × cf

3. Disjunctive (OR) Rules

When a rule requires only one of several conditions to be true (e.g., IF A is true OR B is true), the system takes the maximum certainty factor among the conditions, and multiplies it by the rule's certainty.

cf(H, E1 ∨ E2) = max[cf(E1), cf(E2)] × cf
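The three propagation rules above can be sketched in a few lines of Python. The function names and the example CF values are illustrative assumptions, not values from the lecture:

```python
def cf_single(cf_e, cf_rule):
    """Single premise: cf(H, E) = cf(E) * cf(rule)."""
    return cf_e * cf_rule

def cf_and(cf_evidence, cf_rule):
    """Conjunction: take the MINIMUM evidence CF, then scale by the rule's CF."""
    return min(cf_evidence) * cf_rule

def cf_or(cf_evidence, cf_rule):
    """Disjunction: take the MAXIMUM evidence CF, then scale by the rule's CF."""
    return max(cf_evidence) * cf_rule

# Illustrative values: evidence CFs 0.8 and 0.6, rule CF 0.5.
print(cf_single(0.8, 0.5))      # 0.8 * 0.5 = 0.4
print(cf_and([0.8, 0.6], 0.5))  # min -> 0.6 * 0.5 = 0.3
print(cf_or([0.8, 0.6], 0.5))   # max -> 0.8 * 0.5 = 0.4
```

Note that the AND/OR treatment only touches the evidence side; the rule's own certainty factor is always applied as a final multiplier.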

Combining Certainty Factors

The most complex part of this theory occurs when two different rules lead to the exact same conclusion. Common sense dictates that if two independent pieces of evidence support the same hypothesis, our confidence in that hypothesis should increase.

To calculate this, the system merges the individual certainty factors (cf1 and cf2) using a piecewise equation that depends on their signs:

cf(cf1, cf2) = cf1 + cf2 × (1 − cf1)                    if cf1 ≥ 0 and cf2 ≥ 0
cf(cf1, cf2) = (cf1 + cf2) / (1 − min[|cf1|, |cf2|])    if cf1 and cf2 have opposite signs
cf(cf1, cf2) = cf1 + cf2 × (1 + cf1)                    if cf1 < 0 and cf2 < 0

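A minimal sketch of this merging step, assuming the standard MYCIN-style piecewise definition (the function name `combine` is mine):

```python
def combine(cf1, cf2):
    """Merge two certainty factors that support the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        # Both positive: incremental belief.
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        # Both negative: incremental disbelief (mirror case).
        return cf1 + cf2 * (1 + cf1)
    # Opposite signs: conflicting evidence.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(combine(0.4, 0.3))    # two supporting rules -> belief grows above 0.4
print(combine(0.4, -0.3))   # conflicting rules -> belief is pulled back down
```

A useful property to notice: combining two positive CFs always increases belief but can never push it past 1.0, which matches the common-sense requirement stated above.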
This is the step-by-step breakdown of how an expert system resolves the medical diagnosis tree in Example 2.

[Figure: rule network for the "HAVING COLD" diagnosis, Example 2]

The goal of this network is to determine the final Certainty Factor (CF) for the hypothesis "HAVING COLD". The system does this by evaluating 7 interconnected rules based on patient symptoms.

(Note: Because the exact patient inputs are partially cut off in the document snippet, I will use a realistic set of assumed patient symptoms to demonstrate the exact mathematical mechanics the system uses to reach a conclusion).

Assumed Patient Evidence (Inputs):


Phase 1: Resolving the Intermediate Hypotheses

We now have two sub-conclusions to figure out: "sore throat" and "cold symptoms."

1. Calculate "sore throat" (Rules 3 & 4)

2. Calculate "cold symptoms" (Rules 1 & 2)


Phase 2: Evaluating the Final Hypothesis Rules

Now we evaluate the three rules that point to "HAVING COLD", using our newly calculated intermediate CFs where necessary.


Phase 3: The Final Combination

We have three independent rules firing for the final diagnosis:

- R5 → cf = −0.12 (negative evidence)
- R6 → cf = 0.38
- R7 → cf = 0.28

Merge 1: Combine the positive evidence (R6 and R7)

Using the Incremental Belief formula:

cf_new1 = 0.38 + 0.28 × (1 − 0.38) = 0.38 + (0.28 × 0.62) = 0.5536

Merge 2: Combine the positive result with the negative evidence (R5)

Because we are combining a positive CF (0.5536) and a negative CF (−0.12), we must use the Conflicting Rules formula:

cf = (cf1 + cf2) / (1 − min[|cf1|, |cf2|])

cf_final = (0.5536 + (−0.12)) / (1 − min(|0.5536|, |0.12|)) = 0.4336 / (1 − 0.12) = 0.4336 / 0.88 ≈ 0.4927
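The two merges above can be checked numerically. The rule CFs (0.38, 0.28, −0.12) are the assumed values used in this walkthrough:

```python
# Rule outputs from Phase 2 (assumed values from the walkthrough).
cf_r6, cf_r7, cf_r5 = 0.38, 0.28, -0.12

# Merge 1: two positive CFs -> incremental belief formula.
cf_pos = cf_r6 + cf_r7 * (1 - cf_r6)   # 0.38 + 0.28 * 0.62

# Merge 2: positive vs. negative -> conflicting-rules formula.
cf_final = (cf_pos + cf_r5) / (1 - min(abs(cf_pos), abs(cf_r5)))

print(round(cf_pos, 4))    # 0.5536
print(round(cf_final, 4))  # 0.4927
```

Because the positive-positive formula is commutative, merging R6 and R7 in the opposite order gives the same 0.5536 before the negative evidence is applied.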

Conclusion

The system uses the conflicting evidence to temper its final diagnosis: in this particular rule base, the presence of sneezing alongside cold symptoms counts slightly against a standard cold. The final Certainty Factor for "HAVING COLD" is approximately 0.49, i.e. 49% confidence.