Final Examination (Description and Topic List Subject to Minor Changes)

CSE 415: Introduction to Artificial Intelligence
The University of Washington, Seattle, Winter 2018

Date: Tuesday, March 13 (2:30-4:30 PM)

Format: The final exam will be similar in format to the midterm exam, but longer. The topics covered will be drawn from the following list, which includes some topics from the first part of the course and some from the second.
Topics:
State-space search
States, state spaces, operators, preconditions, moves
Heuristic evaluation functions
Iterative depth-first search, recursive depth-first search
Breadth-first search, best-first search, uniform-cost search
Iterative deepening, A* search
Admissible heuristics, consistent heuristics
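
To review the search ideas above, here is a minimal A* sketch in Python. It is illustrative only (not the course code): the 5 x 5 grid, unit step costs, and Manhattan-distance heuristic are invented for the example.

    import heapq

    def a_star(start, goal, successors, h):
        """A* search: expand states in order of f(n) = g(n) + h(n).
        successors(s) yields (next_state, step_cost) pairs; h is an
        admissible heuristic (it never overestimates the cost to goal)."""
        frontier = [(h(start), 0, start, [start])]     # (f, g, state, path)
        best_g = {start: 0}
        while frontier:
            f, g, s, path = heapq.heappop(frontier)
            if s == goal:
                return path
            for s2, cost in successors(s):
                g2 = g + cost
                if g2 < best_g.get(s2, float('inf')):  # found a cheaper route to s2
                    best_g[s2] = g2
                    heapq.heappush(frontier, (g2 + h(s2), g2, s2, path + [s2]))
        return None

    # Example: 5 x 5 grid, unit moves, Manhattan-distance heuristic (admissible).
    def successors(s):
        x, y = s
        for x2, y2 in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= x2 < 5 and 0 <= y2 < 5:
                yield (x2, y2), 1

    print(a_star((0, 0), (4, 4), successors, lambda s: (4 - s[0]) + (4 - s[1])))

With h identically zero, the same loop behaves as uniform-cost search; breadth-first and depth-first search differ only in the queue discipline.
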
Genetic search
Application to the Traveling Salesman Problem
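
A genetic-search sketch for the TSP (illustrative; the population size, truncation selection, order-based crossover, and swap mutation are one reasonable set of choices, not necessarily the ones discussed in class):

    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def crossover(p1, p2):
        """Order-based crossover: keep a slice of p1, fill the rest in p2's order."""
        n = len(p1)
        i, j = sorted(random.sample(range(n), 2))
        kept = set(p1[i:j])
        fill = iter(c for c in p2 if c not in kept)
        return [p1[k] if i <= k < j else next(fill) for k in range(n)]

    def genetic_tsp(dist, pop_size=50, generations=200, mutation_rate=0.2):
        n = len(dist)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]  # random tours
        for _ in range(generations):
            pop.sort(key=lambda t: tour_length(t, dist))   # shorter tour = fitter
            survivors = pop[:pop_size // 2]                # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                child = crossover(*random.sample(survivors, 2))
                if random.random() < mutation_rate:        # swap mutation
                    a, b = random.sample(range(n), 2)
                    child[a], child[b] = child[b], child[a]
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda t: tour_length(t, dist))

    # Tiny random instance: 8 points in the unit square, Euclidean distances.
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(8)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
            for ax, ay in pts]
    best = genetic_tsp(dist)
    print(best, round(tour_length(best, dist), 3))
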
Problem formulation
States, operators, goal criteria
Rittel and Webber's 10 characteristics of wicked problems
Minimax search for 2-player, zero-sum games
Static evaluation functions
Backed up values
Alpha-beta pruning
Zobrist hashing
Expectimax search
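
A sketch of depth-limited minimax with alpha-beta pruning (illustrative; moves and evaluate are hypothetical game-specific callbacks). The value returned at each node is its backed-up value. Expectimax would replace the MIN layer with a probability-weighted average over outcomes, and Zobrist hashing would be used to index a transposition table of already-searched positions; both are omitted here for brevity.

    import math

    def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
        """Minimax with alpha-beta pruning for a 2-player zero-sum game.
        moves(state, maximizing) lists successor states; evaluate(state)
        is the static evaluation function, applied at the depth cutoff."""
        children = moves(state, maximizing)
        if depth == 0 or not children:
            return evaluate(state)         # static value at the frontier
        if maximizing:
            value = -math.inf
            for child in children:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, moves, evaluate))
                alpha = max(alpha, value)  # best value MAX can guarantee so far
                if alpha >= beta:          # MIN would never allow this branch
                    break                  # beta cutoff
            return value
        else:
            value = math.inf
            for child in children:
                value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                             True, moves, evaluate))
                beta = min(beta, value)    # best value MIN can guarantee so far
                if beta <= alpha:
                    break                  # alpha cutoff
            return value

The best move at the root is whichever child achieves the root's backed-up value.
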
Probabilistic reasoning
Conditional probability
Priors, likelihoods, and posteriors
Bayes' rule
Naive Bayes modeling
The joint probability distribution
Marginal probabilities
Independence of random variables
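
A worked Bayes'-rule example (the disease-test numbers are invented for illustration). The prior and likelihood combine with the marginal P(E), obtained by summing over the joint distribution, to give the posterior:

    # Bayes' rule: P(H | E) = P(E | H) P(H) / P(E),
    # where P(E) = sum over H of P(E | H) P(H)  (a marginal of the joint).
    prior      = {'disease': 0.01, 'healthy': 0.99}   # P(H)
    likelihood = {'disease': 0.95, 'healthy': 0.05}   # P(test+ | H)

    p_e = sum(likelihood[h] * prior[h] for h in prior)             # P(test+) = 0.059
    posterior = {h: likelihood[h] * prior[h] / p_e for h in prior}
    print(posterior)   # P(disease | test+) is only about 0.161

Note how a fairly accurate test still yields a modest posterior when the prior is small.
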
Markov Decision Processes
States, actions, transition model, reward function
Values, Q-states, and Q-values
Bellman updates
Policies, policy extraction
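
A value-iteration sketch showing Bellman updates, Q-values, and policy extraction (illustrative; the two-state 'cool'/'warm' MDP and its rewards are made up for the example):

    def value_iteration(states, actions, T, R, gamma=0.9, iters=100):
        """Bellman update: V(s) <- max_a sum_s' T(s,a,s') [R(s,a,s') + gamma V(s')].
        T(s, a) returns a list of (s', probability) pairs."""
        V = {s: 0.0 for s in states}
        for _ in range(iters):
            V = {s: max(sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
                        for a in actions(s))
                 for s in states}
        return V

    def extract_policy(states, actions, T, R, V, gamma=0.9):
        """Policy extraction: pi(s) = argmax_a Q(s, a)."""
        return {s: max(actions(s),
                       key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                         for s2, p in T(s, a)))
                for s in states}

    # Toy two-state MDP: going 'fast' pays more but risks the 'warm' state,
    # where going fast is heavily penalized.
    states = ['cool', 'warm']
    def actions(s): return ['slow', 'fast']
    def T(s, a):
        if a == 'slow': return [('cool', 1.0)]
        return [('cool', 0.5), ('warm', 0.5)] if s == 'cool' else [('warm', 1.0)]
    def R(s, a, s2):
        if a == 'slow': return 1.0
        return 2.0 if s == 'cool' else -10.0

    V = value_iteration(states, actions, T, R)
    print(extract_policy(states, actions, T, R, V))   # {'cool': 'fast', 'warm': 'slow'}
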
Reinforcement Learning
Model-based vs model-free learning
Policy evaluation
Temporal difference learning
Q-learning
Epsilon-greedy learning
Exploration functions for Q-learning
Application to the Towers-of-Hanoi puzzle and Grid World
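
A tabular Q-learning sketch with epsilon-greedy action selection (illustrative; the one-dimensional corridor is a stand-in for Grid World, and the hyperparameters are arbitrary). The update is a temporal-difference step toward the sample r + gamma * max_a' Q(s', a'), and the method is model-free: it never consults T or R, only sampled transitions.

    import random
    from collections import defaultdict

    def q_learning(env_step, start, actions, episodes=200,
                   alpha=0.1, gamma=0.9, epsilon=0.1):
        """Tabular Q-learning. env_step(s, a) returns (s', reward, done)."""
        Q = defaultdict(float)
        def choose(s):
            if random.random() < epsilon:                    # explore
                return random.choice(actions(s))
            best = max(Q[(s, a)] for a in actions(s))        # exploit,
            return random.choice([a for a in actions(s)      # ties broken randomly
                                  if Q[(s, a)] == best])
        for _ in range(episodes):
            s, done = start, False
            while not done:
                a = choose(s)
                s2, r, done = env_step(s, a)
                # TD update toward the sample r + gamma * max_a' Q(s', a')
                sample = r if done else r + gamma * max(Q[(s2, a2)]
                                                        for a2 in actions(s2))
                Q[(s, a)] += alpha * (sample - Q[(s, a)])
                s = s2
        return Q

    # Toy 1-D corridor (a stand-in for Grid World): states 0..4, reward at 4.
    def actions(s): return ['L', 'R']
    def env_step(s, a):
        s2 = max(0, s - 1) if a == 'L' else min(4, s + 1)
        return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

    Q = q_learning(env_step, 0, actions)
    print(max(actions(0), key=lambda a: Q[(0, a)]))   # should print 'R'

An exploration function, e.g. f(Q, n) = Q + k/n where n counts visits to (s, a), can replace the epsilon coin flip so that under-explored actions look temporarily optimistic.
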
Classification using Naive Bayes classifiers
Division by P(E) not necessary for classification
Laplace smoothing: why and how to add 1 to counts when estimating P(Ei | Cj)
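
A Naive Bayes classifier sketch with add-1 (Laplace) smoothing (the toy weather data are invented). It compares unnormalized log-posteriors, which is exactly why dividing by P(E) is unnecessary: P(E) is the same constant for every class.

    import math
    from collections import Counter, defaultdict

    def train_nb(examples, classes, n_values):
        """Naive Bayes with add-1 (Laplace) smoothing.
        examples: list of (feature_tuple, class_label) pairs;
        n_values[i]: number of possible values of feature i."""
        class_count = Counter(c for _, c in examples)
        feat_count = defaultdict(Counter)            # feat_count[(i, c)][value]
        for feats, c in examples:
            for i, v in enumerate(feats):
                feat_count[(i, c)][v] += 1

        def score(feats, c):
            # Unnormalized log-posterior: log P(C) + sum_i log P(Ei | C);
            # the division by P(E) is skipped, as it is the same for every class.
            lp = math.log(class_count[c] / len(examples))
            for i, v in enumerate(feats):
                lp += math.log((feat_count[(i, c)][v] + 1) /    # add 1 so unseen
                               (class_count[c] + n_values[i]))  # values aren't zero
            return lp

        return lambda feats: max(classes, key=lambda c: score(feats, c))

    # Toy weather data: (outlook, windy) -> play?
    data = [(('sunny', 'no'), 'yes'), (('sunny', 'yes'), 'no'),
            (('rainy', 'yes'), 'no'), (('overcast', 'no'), 'yes')]
    classify = train_nb(data, ['yes', 'no'], n_values=[3, 2])
    print(classify(('sunny', 'no')))   # 'yes'
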
Perceptrons
How to compute AND, OR, and NOT.
Simple pattern recognition (e.g., 5 x 5 binary image inputs for optical character recognition)
Training sets, training sequences, and the perceptron training algorithm.
Linear separability and the perceptron training theorem.
What is meant by "deep convolutional networks"
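
A perceptron-training sketch (illustrative). By the perceptron training theorem, the loop converges whenever the training set is linearly separable, as AND, OR, and NOT all are (XOR is not):

    def perceptron_train(examples, n_inputs, epochs=100, lr=1.0):
        """Perceptron training rule: on each misclassified example,
        w <- w + lr * (target - output) * x, and likewise for the bias."""
        w, b = [0.0] * n_inputs, 0.0
        for _ in range(epochs):
            errors = 0
            for x, target in examples:
                output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                if output != target:
                    errors += 1
                    delta = lr * (target - output)
                    w = [wi + delta * xi for wi, xi in zip(w, x)]
                    b += delta
            if errors == 0:        # converged: every example classified correctly
                break
        return w, b

    # AND is linearly separable, so training converges to an exact solution.
    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = perceptron_train(AND, 2)
    print([(x, 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)
           for x, _ in AND])
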
The Future of AI
Asimov's three laws of robotics, Kurzweil's "singularity"