Lecture 1

1. Fundamental Definitions

2. The Four Categories of AI

The definitions of AI are organized into four main approaches:

3. Strong AI vs. Weak AI (Comparison)

4. Key Characteristics of Agents (Properties)

To be considered intelligent, an agent generally exhibits these four properties:

5. "Consists Of" & System Architecture


Lecture 2

1. Fundamental Definitions


2. "Consists Of" & Core Architectures

The Agent Architecture

An agent is fundamentally composed of two parts:

The 4 Pillars of Rationality

Rationality is evaluated based on four components:

  1. Performance measure: How to know the agent succeeded.

  2. Prior knowledge: What the agent knows about the environment beforehand.

  3. Actions: The capabilities the agent can perform.

  4. Percept sequence: What the agent has perceived to date.

The PEAS Framework (Specifying the Task Environment)

Used to define the relevant external factors impacting an agent:
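The PEAS components can be captured in a small data structure. A minimal sketch, assuming the classic automated-taxi example (the class name and every list entry are illustrative, not exhaustive):

```python
from dataclasses import dataclass

# A PEAS specification as a plain data structure (illustrative sketch).
@dataclass
class PEAS:
    performance: list  # how the agent's success is measured
    environment: list  # external factors the agent operates in
    actuators: list    # means of acting on the environment
    sensors: list      # means of perceiving the environment

# The standard automated-taxi example, with abbreviated entries.
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.performance)
```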


3. Key Comparisons

Table 1: Types of Physical Agents

Agent Type Sensors Actuators
Human Agent Eyes, ears, and other organs. Hands, legs, mouth, and other body parts.
Robotic Agent Cameras and infrared range finders. Various motors.

Table 2: Rationality Distinctions

Concept Characteristics
Rationality Maximizes the expected outcome based on what it has perceived. Involves learning from perceived information to avoid repeating mistakes.
Omniscience Knowing the actual outcome of every action in advance. Rational agents are not omniscient because percepts rarely supply all relevant information.
Perfection Maximizes the actual outcome. Rationality is about doing the best with what you have, not necessarily achieving a flawless actual outcome.

Table 3: Task Environment Characteristics

Characteristic Type 1 Type 2 (The Contrast)
Observability Fully Observable: Sensors detect all aspects required to choose an action (perfect information). Partially Observable: Parts of the environment are inaccessible; agent must make informed guesses.
Certainty Deterministic: The next state depends only on the current state and the agent's action. Stochastic: Non-deterministic; aspects are beyond the agent's control.
Time/Planning Episodic: The current action choice does not depend on previous actions; each episode stands alone. Sequential: Current choice affects future actions; requires planning ahead.
Stability Static: The environment doesn't change while the agent deliberates. Dynamic: The environment changes during deliberation. (Note: Semi-dynamic means the environment itself doesn't change, but the performance score drops over time).
State Space Discrete: A limited, distinct, clearly defined number of percepts and actions. Continuous: Features a range of values.
Population Single Agent: Operating by itself in an environment. Multiagent: Multiple agents operating in the same environment, whether cooperating or competing.
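As an illustration, the six dimensions above can be tabulated in code for two example task environments (the classifications shown are the conventional textbook ones; the dictionary layout is just one way to organize them):

```python
# Two example task environments classified along the six dimensions
# from the table above (illustrative sketch).
environments = {
    "crossword puzzle": {
        "observability": "fully observable",
        "certainty": "deterministic",
        "time": "sequential",
        "stability": "static",
        "state_space": "discrete",
        "population": "single agent",
    },
    "taxi driving": {
        "observability": "partially observable",
        "certainty": "stochastic",
        "time": "sequential",
        "stability": "dynamic",
        "state_space": "continuous",
        "population": "multiagent",
    },
}

for name, props in environments.items():
    print(name, "->", props["stability"])
```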

Lecture 3

1. Fundamental Definitions & Concepts


2. "Consists Of" & Core Architectures

A Model-Based Agent consists of:

A Learning Agent consists of:


3. Key Comparisons

Agents vs. Objects

Feature Agents Objects
Autonomy Embody a stronger notion of autonomy; they decide for themselves whether to perform an action requested by another agent. Possess a weaker notion of autonomy compared to agents.
Behavior Capable of flexible, reactive, proactive, and social behavior. The standard object model has nothing to say about such flexible types of behavior.
Control A multiagent system is inherently multi-threaded, meaning each agent is assumed to have at least one thread of control. Typically lack independent, multi-threaded control in standard models.

Four Basic Agent Types (Pros & Cons)

Agent Type Core Characteristics Pros Cons
Simple Reflex Direct mapping from perceptions to actions using condition-action rules. Has no memory; action depends only on the current percept. The simplest agent. Fast reaction times, making it well-suited for dynamic environments. Limited intelligence; ignores percept history; fails in partially observable environments; prone to infinite loops; cannot learn or adapt to new situations.
Goal-Based Uses knowledge about a goal to guide actions through search and planning. Needs the current state and a goal state to make decisions. Goal-oriented behavior. Can solve complex problems requiring planning and can flexibly adapt by replanning. Computationally expensive. Defining usable goals is challenging, and it struggles with incomplete information. Does not learn from experience without an added learning element.
Utility-Based Uses a utility function to evaluate goals based on factors like speed and safety. Focuses on degrees of happiness or success. Makes rational decisions that maximize expected utility. Can handle uncertainty via probabilities and manage complex, time-variant preferences. Defining an accurate utility function is challenging. Calculating expected utility is computationally expensive. Utility functions can be subjective.
Model-based Reflex Maintains an internal state (model) to track parts of the world not currently visible. Uses transition and sensor models to predict how the world evolves and how actions affect it. Can operate in partially observable environments by tracking unseen aspects of the world. More complex than simple reflex; requires constant updating of the internal world state.
Learning Can be applied to any architecture to improve performance over time. Includes a learning element, critic (evaluator), and problem generator for exploration. Improves its own performance with experience and can operate in initially unknown environments. Exploration can be costly in the short term (e.g., fewer tips for a taxi driver while experimenting).
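The simplest of the four types can be made concrete. A minimal sketch of a simple reflex agent for the classic two-square vacuum world: a direct mapping from the current percept to an action via condition-action rules, with no memory of past percepts (locations "A"/"B" and the rule set are the standard toy example):

```python
# Simple reflex agent for the two-square vacuum world.
# The percept is a (location, status) pair; the agent applies
# condition-action rules to the current percept only.
def simple_reflex_vacuum_agent(percept):
    location, status = percept  # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"           # rule: dirty square -> clean it
    if location == "A":
        return "Right"          # rule: clean at A -> move right
    return "Left"               # rule: clean at B -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
```

Note how the cons in the table follow directly from the code: with no stored percept history, the agent loops forever between clean squares and cannot adapt its rules.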

Alternative Architectural Perspectives

Architecture Category Characteristics
Reactive Behavior-based architectures, such as simple reflex agents.
Deliberative (Intentional) Involves thoughtful and planned action, reasoning before acting, and relying on internal knowledge-based models of the world.
Hybrid Combines elements of both reactive and deliberative architectures.
Learning Utilizes systems like Reinforcement Learning and Deep Learning.

Lecture 4

1. Fundamental Definitions


2. "Consists Of" & Core Frameworks

A Formal Problem consists of 5 components:

  1. Initial state: The starting configuration.

  2. Possible set of actions: What the agent can do.

  3. Transition model: A description of what each action does, i.e., the state that results from performing it.

  4. Goal test: Determines whether a given state is a goal state (or belongs to a set of possible goal states).

  5. Path cost: The numeric cost associated with the sequence of actions.
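The five components above can be sketched as a small problem class; the three-town route-finding map, the action names, and the unit step cost are all hypothetical:

```python
# Sketch of the five formal problem components as an abstract class,
# instantiated for a toy route-finding problem.
class Problem:
    def __init__(self, initial, actions, transition, goal_states, step_cost):
        self.initial = initial          # 1. initial state
        self.actions = actions          # 2. actions available in a state
        self.transition = transition    # 3. transition model: (state, action) -> state
        self.goal_states = goal_states  # 4. data for the goal test
        self.step_cost = step_cost      # 5. cost of a single action

    def goal_test(self, state):
        return state in self.goal_states

    def path_cost(self, path):          # path: list of (state, action) steps
        return sum(self.step_cost(s, a) for s, a in path)

# Hypothetical map: three towns connected by one-way roads.
roads = {("A", "go-B"): "B", ("B", "go-C"): "C"}

problem = Problem(
    initial="A",
    actions=lambda s: [a for (st, a) in roads if st == s],
    transition=lambda s, a: roads[(s, a)],
    goal_states={"C"},
    step_cost=lambda s, a: 1,
)
print(problem.goal_test("C"))  # True
```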

Solving a Problem formally consists of 4 phases:

  1. Goal formulation: Defining the objective.

  2. Problem formulation: Defining the states and actions.

  3. Search: Finding a solution sequence.

  4. Execution: Carrying out the planned actions.

A Planning Agent consists of/relies on:


3. Key Comparisons

Table 1: Levels of Agent Representation

Representation Level Description Characteristics & Usage
Atomic A state is a "black box" with no internal structure. Used in search, game-playing, and Hidden Markov Models (HMM). Standard for problem-solving agents.
Factored States are represented as a vector of attribute values (variables or properties). Example: State = [Location: Room A, SoC: 80%]. Frequently used by planning agents.
Structured States include complex relationships between objects. Actions can affect specific variables or relationships. Underlies first-order logic and knowledge-based agents.

Table 2: Graph vs. Tree

Feature Graph Tree
Parenting Nodes can have multiple parents. Nodes have a single parent.
Cycles Can contain loops or cyclic paths. Acyclic; strictly without cycles.
Relationship A graph is not necessarily a tree. Any tree is a graph.
Conversion A graph can be unrolled into a tree by replacing each undirected link with two directed links and avoiding repeated states (loops). The structure that results once cycles are avoided during a graph search.

Table 3: State Space Graph vs. Search Tree

Concept Characteristics
State Space Graph In this graph, each state occurs only once. It is rarely built fully in memory because it is usually too large.
Search Tree Built on demand. A single node in a search tree represents an entire path (plan) mapped out in the state space graph. Because different paths can lead to the same state, there is lots of repeated structure in the tree.
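The distinction can be sketched in code: a breadth-first graph search whose frontier entries are whole paths (search-tree nodes as plans through the state-space graph), with a `reached` set so that states reachable by multiple paths are expanded only once, avoiding the repeated structure described above. The toy graph is made up for illustration:

```python
from collections import deque

# Toy state-space graph; each state appears exactly once here,
# even though several tree paths can reach the same state.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A", "G"], "G": []}

def bfs(start, goal):
    # Each frontier entry is a full path (a plan), i.e. a tree node.
    frontier = deque([[start]])
    reached = {start}  # states already generated; prevents repeats
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph[state]:
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("S", "G"))  # ['S', 'A', 'G']
```

Without the `reached` set, the path S-B-A-G would also be generated even though state A was already reached via S-A, which is exactly the repeated structure the table warns about.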

Lecture 5

1. Fundamental Definitions


2. "Consists Of" & Core Frameworks

A Search Problem consists of:

  1. A state space: Configurations of the world.

  2. A successor function: World dynamics (with actions and costs).

  3. A start state

  4. A goal test: Characteristics to find.

A Tree Node consists of (Infrastructure for Search Algorithms):

Tree Complexity Parameters consist of:


3. Key Comparisons

Table 1: State Space Graph vs. Search Tree (Revisited for this Lecture)

Feature State Space Graph Search Tree
Usage Can be used directly by developing a suitable search technique. Often easier to build first before searching.
Nodes represent Abstracted world configurations (states). Plans for reaching states.
Cost representation Arcs represent actions and costs. Plans have costs (the sum of action costs).

Table 2: Search Strategy Types

Strategy Type Characteristics
Uninformed (Blind) Uses only the information available in the problem definition.
Informed Uses domain-specific knowledge beyond the problem definition (heuristics, metaheuristics).
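A minimal sketch contrasting the two strategy types on the same toy graph: the uninformed version uses only the problem definition (a FIFO frontier), while the informed version also consults a heuristic `h`. Both the graph and the heuristic estimates are invented for illustration:

```python
import heapq
from collections import deque

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}  # toy graph
h = {"S": 3, "A": 1, "B": 2, "G": 0}  # hypothetical goal-distance estimates

def bfs_expansions(start, goal):      # uninformed (blind): FIFO frontier
    frontier, reached, order = deque([start]), {start}, []
    while frontier:
        s = frontier.popleft()
        order.append(s)               # record expansion order
        if s == goal:
            return order
        for n in graph[s]:
            if n not in reached:
                reached.add(n)
                frontier.append(n)

def greedy_expansions(start, goal):   # informed: frontier ordered by h
    frontier, reached, order = [(h[start], start)], {start}, []
    while frontier:
        _, s = heapq.heappop(frontier)
        order.append(s)
        if s == goal:
            return order
        for n in graph[s]:
            if n not in reached:
                reached.add(n)
                heapq.heappush(frontier, (h[n], n))

print(bfs_expansions("S", "G"))     # ['S', 'A', 'B', 'G']
print(greedy_expansions("S", "G"))  # ['S', 'A', 'G']
```

The expansion orders show the payoff of domain knowledge: the informed search skips B entirely because the heuristic ranks A as more promising.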