In AI, an agent is an autonomous entity that continually interacts with its environment, perceiving it through sensors and acting upon it through actuators to fulfill specified objectives.
Example
Planning agent: an agent that decides by simulating the consequences of candidate actions; it needs a model of how the world evolves in response to its actions and an explicit goal to plan toward.
Search problems occur whenever you do not have a 'direct' method to solve the problem analytically. A search problem consists of a state space, a successor function (with actions and costs), a start state (fully specified), and a goal test (a way to know you are in a goal state); a solution is a sequence of actions (a plan) that transforms the start state into a goal state.
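A minimal sketch of these components in Python (the RouteProblem class and its method names are illustrative, not from any particular library):

```python
# Illustrative search problem: find a route between two cities on a small road map.
class RouteProblem:
    def __init__(self, roads, start, goal):
        self.roads = roads      # state space as a dict: city -> list of (neighbor, step cost)
        self.start = start      # start state (fully specified)
        self.goal = goal

    def get_start_state(self):
        return self.start

    def is_goal_state(self, state):
        # Goal test: a way to know you are in a goal state.
        return state == self.goal

    def get_successors(self, state):
        # Successor function: (next state, action, step cost) triples.
        return [(city, f"go to {city}", cost) for city, cost in self.roads.get(state, [])]
```

A solution for such a problem is a plan, i.e., the list of actions returned by one of the search algorithms sketched below.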
Types of problems and what they return:
Uninformed/Informed search: Returns a 'plan' (e.g., path finding).
Local search: Returns a 'goal state' (e.g., training a NN).
Adversarial search: Returns a 'move' (e.g., chess, Go).
Example
World state
Search state
World state:
Agent positions: 120
Food count: 30
Ghost positions: 12
Agent facing: NSEW
How to calculate:
World states: 120 * 2^30 * 12^2 * 4
States for pathing: 120
States for eat-all-dots: 120 * 2^30
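As a quick check of the arithmetic above (plain Python, using the same numbers):

```python
agent_positions = 120        # possible Pacman locations
food_configs = 2 ** 30       # each of the 30 food dots is eaten or not
ghost_configs = 12 ** 2      # two ghosts, 12 possible positions each
facings = 4                  # N, S, E, W

world_states = agent_positions * food_configs * ghost_configs * facings
pathing_states = agent_positions                        # only the agent's position matters
eat_all_dots_states = agent_positions * food_configs    # position plus remaining food

print(world_states, pathing_states, eat_all_dots_states)
```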
The state space graph is a mathematical representation of a search problem. It contains the following components:
Nodes: Abstracted world configurations.
Arcs: Successors (action results).
Goal test: Set of goal nodes (maybe only one).
In a state space graph, each state only occurs once. We can rarely build this full graph in memory (as it is too big), but it is a useful idea.
A search tree:
A 'what if' tree of plans and their outcomes
The start state is the root node
Children correspond to successors
Nodes show states, but correspond to PLANS that achieve those states
For most problems, we can never actually build the whole tree (e.g., Go)
State
Compare
Size
Example
Tree search
Depth-first search (DFS) is an uninformed search algorithm for traversing or searching tree or graph data structures: it explores as far as possible along each branch before backtracking.
Imagine walking through a maze. You choose a path and follow it until you reach a dead end. Then you go back to the last decision point and try a different path. DFS mirrors this process.
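A short DFS sketch in Python, assuming the RouteProblem-style interface above (a graph-search variant: a visited set prevents revisiting states):

```python
def depth_first_search(problem):
    """DFS: the fringe is a LIFO stack of (state, plan) pairs."""
    fringe = [(problem.get_start_state(), [])]
    visited = set()
    while fringe:
        state, plan = fringe.pop()            # pop the most recently pushed node
        if problem.is_goal_state(state):
            return plan                       # the plan is the sequence of actions taken
        if state in visited:
            continue
        visited.add(state)
        for next_state, action, _cost in problem.get_successors(state):
            fringe.append((next_state, plan + [action]))
    return None                               # no solution found
```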
Properties of search algorithms
Properties of DFS
Describe
Analogy
(image)
Properties
DFS outperforms BFS when: solutions are deep in the tree (or there are many of them) and memory matters, since DFS only keeps the current path on the fringe.
However, BFS can outperform DFS when: solutions sit close to the root or the shallowest solution is wanted, since BFS explores level by level and finds the shallowest solution first.
Idea
Analogy
Properties
Cost-sensitive search
Uniform cost search (UCS) is a search strategy that always expands the cheapest node first: the fringe is a priority queue ordered by cumulative path cost g(n).
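A UCS sketch using Python's heapq as the priority queue (same illustrative problem interface as above):

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    """UCS: the fringe is a priority queue ordered by cumulative path cost g(n)."""
    tie = count()                                  # tie-breaker so heapq never compares states
    fringe = [(0, next(tie), problem.get_start_state(), [])]
    best = {}                                      # cheapest cost at which each state was expanded
    while fringe:
        g, _, state, plan = heapq.heappop(fringe)  # always expand the cheapest node
        if problem.is_goal_state(state):
            return plan
        if state in best and best[state] <= g:
            continue
        best[state] = g
        for next_state, action, step_cost in problem.get_successors(state):
            heapq.heappush(fringe, (g + step_cost, next(tie), next_state, plan + [action]))
    return None
```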
Analogy
Properties
Define
Heuristics
Example
Combining UCS and greedy search, A* orders the fringe by f(n) = g(n) + h(n): the backward (path) cost so far plus the heuristic's forward estimate of the cost to the goal.
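A* differs from the UCS sketch above only in how the fringe is ordered (f = g + h instead of g); the heuristic argument here is any function from states to estimated remaining cost:

```python
import heapq
from itertools import count

def a_star_search(problem, heuristic):
    """A*: the fringe is a priority queue ordered by f(n) = g(n) + h(n)."""
    tie = count()
    start = problem.get_start_state()
    fringe = [(heuristic(start), 0, next(tie), start, [])]   # (f, g, tie, state, plan)
    best = {}
    while fringe:
        _f, g, _, state, plan = heapq.heappop(fringe)
        if problem.is_goal_state(state):
            return plan                                      # terminate on dequeue, not enqueue
        if state in best and best[state] <= g:
            continue
        best[state] = g
        for next_state, action, step_cost in problem.get_successors(state):
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + heuristic(next_state), g2, next(tie), next_state, plan + [action]))
    return None
```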
When should A* terminate? Only when a goal state is dequeued (popped) from the fringe, not when it is first put onto the fringe.
Is A* optimal? What went wrong? With an inadmissible heuristic (one that overestimates the remaining cost), the good plan's estimated cost can exceed the bad plan's actual cost, so A* can return a suboptimal path; optimality requires an admissible heuristic (0 <= h(n) <= true cost to the nearest goal).
Proof
(3 slide images into 1 later)
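A brief sketch of the standard admissibility argument for tree-search A* (presumably what the slides above show):

```latex
\text{Assume } h \text{ is admissible: } 0 \le h(n) \le h^{*}(n), \text{ the true cost from } n \text{ to the nearest goal.}\\
\text{Let } A \text{ be an optimal goal node and } B \text{ a suboptimal one, so } g(A) < g(B).\\
\text{For any ancestor } n \text{ of } A \text{ on the fringe:}\\
f(n) = g(n) + h(n) \le g(n) + h^{*}(n) \le g(A) = f(A) < g(B) = f(B),\\
\text{so every ancestor of } A \text{ (and finally } A \text{ itself) is expanded before } B.
```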
Minimax
Properties
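A minimal recursive minimax sketch for a two-player zero-sum game; the game-state interface here (successors() returning (move, child) pairs, is_terminal(), evaluate()) is illustrative:

```python
def minimax_value(state, depth, maximizing):
    """Minimax value of `state`, searching `depth` plies ahead."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()                       # static evaluation at the leaves
    values = [minimax_value(child, depth - 1, not maximizing)
              for _move, child in state.successors()]
    return max(values) if maximizing else min(values)

def minimax_decision(state, depth):
    """Adversarial search returns a move: the root action with the best minimax value."""
    return max(state.successors(),
               key=lambda mc: minimax_value(mc[1], depth - 1, False))[0]
```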
Monte Carlo tree search (MCTS) is a search method that estimates how good a move is by running many simulated playouts (rollouts) from it and averaging their outcomes, allocating more simulations to the most promising parts of the tree.
Analogy
Upper confidence trees (UCT)
Steps: selection (descend the tree by repeatedly picking the child with the best UCB score, as sketched below), expansion (add a new child node to the tree), simulation (play a random rollout from it to a terminal state), backpropagation (propagate the outcome back up through the visited nodes).
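A sketch of the UCB1 score used during UCT's selection step (the node attributes here, visits, total_value, and parent, are illustrative; C is an exploration constant, often sqrt(2)):

```python
import math

def ucb1(node, c=math.sqrt(2)):
    """UCT selection score: average rollout value plus an exploration bonus."""
    if node.visits == 0:
        return float("inf")                  # unvisited children are tried first
    exploit = node.total_value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

# Selection: from the root, repeatedly descend to the child with the highest ucb1 score.
```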
UCT vs minimax tree