AI导论 (Introduction to AI) Course Notes

02 / 20 / 2024 (last edited 02 / 20 / 2024)
Estimated reading time: 6 minutes

Since I spent a long stretch of class time working on ICS-PA, there are quite a few gaps in these notes (though to be fair the course was fairly light and didn't cover much anyway).

Agent

Introduction

Agent: anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Rational agent: selects an action to maximize the given performance measure, based on the percept sequence and built-in knowledge.

    • Rationality is different from omniscience.
    • Rationality is different from perfection.

Designing agents: PEAS

    • Performance measure
    • Environment
    • Actuators
    • Sensors

Type of Environment

Fully observable: everything the agent needs to make a decision is accessible via its sensors. vs. partially observable: when the environment is not fully accessible, the agent needs to make informed guesses. In decision theory: perfect information vs. imperfect information.

Deterministic: the next world state is fully determined by the current state and the agent's action, so the outcome depends only on the agent's decision and the current state. vs. stochastic: the world state changes (in some aspects) beyond the agent's control, so the agent needs to make guesses about how the world will change.

Episodic: the current decision doesn't depend on previous actions. vs. sequential: the current choice will affect future decisions.

Static: the environment doesn't change over time. vs. dynamic: the environment changes as time passes. vs. semidynamic: the environment itself doesn't change, but the agent's performance score (i.e., the performance measure) changes with the passage of time.

Discrete: the percepts and actions are finite, distinct, and clearly defined. vs. continuous: the percepts and actions range over a continuum.

Single agent: only one agent in the environment. vs. multiagent: multiple agents acting in the environment.

Type of Agent

  1. Simple reflex agents: percepts -> condition-based action rules -> actions

    • Actions depend only on the current percept.
    • Drawback: may fall into an infinite loop.
  2. Reflex agents with state/model

    • Know how the world evolves (via an internal state/model).
  3. Goal-based agents

    • The goal serves as the performance measure.
    • Know what the world will be like after taking some action.
  4. Utility-based agents

    • Consider utilities over states, beyond a simple binary goal.
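
The simplest of these types can be made concrete with a small sketch. Below is a minimal simple reflex agent for a hypothetical two-square vacuum world; the environment, percept format, and action names are illustrative assumptions, not from the notes:

```python
# Minimal sketch of a simple reflex agent in a hypothetical two-square
# vacuum world. The action depends ONLY on the current percept:
# no internal state, no model, no goal, no utility.

def simple_reflex_vacuum_agent(percept):
    """Map the current percept to an action via condition-action rules."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # rule: dirty square -> clean it
    if location == "A":
        return "Right"                  # rule: clean at A -> move right
    return "Left"                       # rule: clean at B -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # prints: Suck
```

Because the rules consult only the current percept, such an agent can shuttle forever between two clean squares, which is exactly the infinite-loop drawback of type 1.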

Evaluating search algorithms:

  1. completeness
  2. time complexity -> can be evaluated by counting total nodes generated
  3. space complexity
  4. optimality

Artificial Intelligence Beyond Classical Search

Hill-Climbing Algorithm
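
Hill climbing greedily moves to the best neighboring state and stops when no neighbor improves the objective, so it can get stuck at local maxima, plateaus, and ridges. A minimal sketch, assuming a generic objective `f` and a user-supplied `neighbors` function (both hypothetical):

```python
def hill_climb(f, x, neighbors, max_steps=1000):
    """Greedy local search: repeatedly move to the best neighbor.
    Stops at a local maximum (no neighbor improves f), which may
    not be the global maximum."""
    for _ in range(max_steps):
        best = max(neighbors(x), key=f, default=x)
        if f(best) <= f(x):
            return x                    # local maximum reached
        x = best
    return x

# Toy 1-D objective with a single peak at x = 5.
f = lambda x: -(x - 5) ** 2
print(hill_climb(f, 0, lambda x: [x - 1, x + 1]))   # prints: 5
```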

Simulated Annealing
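
Simulated annealing escapes local maxima by sometimes accepting downhill moves, with probability exp(Δ/T) that shrinks as the temperature T cools. A sketch under assumed helpers (a `neighbor` proposal function and a cooling `schedule`), not a definitive implementation:

```python
import math
import random

def simulated_annealing(f, x, neighbor, schedule, steps=10000):
    """Always accept an uphill move; accept a downhill move with
    probability exp(delta / T), where T = schedule(t) cools over time."""
    for t in range(steps):
        T = schedule(t)
        if T <= 0:
            break
        candidate = neighbor(x)
        delta = f(candidate) - f(x)
        if delta > 0 or random.random() < math.exp(delta / T):
            x = candidate
    return x

random.seed(0)                          # fixed seed for reproducibility
f = lambda x: -(x - 5) ** 2             # same toy objective, peak at x = 5
result = simulated_annealing(
    f, 0,
    neighbor=lambda x: x + random.choice([-1, 1]),
    schedule=lambda t: 10 * 0.99 ** t,  # geometric cooling
)
```

As T approaches 0 the acceptance test becomes purely greedy, so a long run ends with hill-climbing behavior around the best region found.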

Genetic Algorithms
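
Genetic algorithms keep a population of candidate solutions and produce each generation by selection, crossover, and mutation. A toy sketch on the classic "one-max" problem (maximize the number of 1-bits in a bitstring); all parameter values here are illustrative assumptions:

```python
import random

def genetic_algorithm(pop_size=20, length=16, generations=100, p_mut=0.05):
    """Evolve bitstrings toward all-ones via tournament selection,
    single-point crossover, and per-bit mutation."""
    random.seed(42)                      # fixed seed for reproducibility
    fitness = lambda ind: sum(ind)       # one-max: count the 1-bits

    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]

    def select():
        a, b = random.sample(pop, 2)     # tournament of two
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = random.randrange(1, length)         # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < p_mut)  # flip each bit w.p. p_mut
                     for bit in child]
            nxt.append(child)
        pop = nxt

    return max(pop, key=fitness)

best = genetic_algorithm()
```

Selection pressure pulls the population toward the all-ones string, while the mutation rate trades off exploration against disrupting good individuals.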


Constraint Satisfaction Problem