Final Examination
CSE 415: Introduction to Artificial Intelligence
The University of Washington, Seattle, Autumn 2017
Date: Monday, December 11 (2:30-4:30 PM)
Format: The format of the final exam will be similar to that of the midterm exam. However, the exam will be longer. The topics covered will be drawn from the following list, which includes some topics from the first part of the course and some from the second.
Topics:
The Turing Test
State-space search
States, state spaces, operators, preconditions, moves
Heuristic evaluation functions
Iterative depth-first search, recursive depth-first search
Breadth-first search, best-first search, uniform-cost search
Iterative deepening, A* search
Admissible heuristics
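To help review the search topics above, here is a minimal A* sketch (not from the course materials; the graph, heuristic, and function names are illustrative). With an admissible heuristic, A* expands nodes in order of f(n) = g(n) + h(n) and returns a least-cost path; with h = 0 it reduces to uniform-cost search.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: expand by f(n) = g(n) + h(n); admissible h => optimal cost."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step_cost in neighbors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Illustrative weighted graph; h = 0 is trivially admissible (uniform-cost search).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
cost, path = astar("A", "C", lambda s: graph[s], lambda s: 0)
```

The direct edge A-C costs 4, but A* finds the cheaper two-step route A-B-C of cost 2.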
Genetic search
Application to the Traveling Salesman Problem
Problem formulation
States, operators, goal criteria
Rittel and Webber's 10 characteristics of wicked problems
Minimax search for 2-player, zero-sum games
Static evaluation functions
Backed up values
Alpha-beta pruning
Zobrist hashing
Expectimax search
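As a study aid for the game-search topics, here is a minimal minimax-with-alpha-beta sketch (illustrative only; the tree encoding and function names are not from the course materials). A cutoff occurs whenever alpha >= beta, since the opponent would never allow that line of play.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning for a 2-player, zero-sum game."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)        # static evaluation at the search frontier
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                # beta cutoff: MIN avoids this branch
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break                # alpha cutoff: MAX avoids this branch
        return value

# Toy 2-ply tree as nested tuples; leaves are static evaluation values.
tree = ((3, 5), (2, 9))
val = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                lambda n: n if isinstance(n, tuple) else (),
                lambda n: n)
```

The maximizer's backed-up value is max(min(3, 5), min(2, 9)) = 3, and the leaf 9 is pruned.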
Probabilistic reasoning
Conditional probability
Priors, likelihoods, and posteriors
Bayes' rule
Odds and conversion between odds and probability
Naive Bayes modeling
The joint probability distribution
Marginal probabilities
Independence of random variables
Conditional independence
Use of conditional independence in Bayes Nets
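The core probability formulas above can be checked numerically; the sketch below is illustrative (the example numbers are invented). It applies Bayes' rule for a binary hypothesis and converts between odds and probability.

```python
def posterior(prior, like_h, like_not_h):
    """Bayes' rule for binary hypothesis H and evidence E:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = like_h * prior
    return num / (num + like_not_h * (1 - prior))

def prob_to_odds(p):
    """Odds in favor: p / (1 - p)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Inverse conversion: o / (1 + o)."""
    return o / (1 + o)

# Illustrative numbers: prior P(H) = 0.01, P(E|H) = 0.9, P(E|~H) = 0.1.
p = posterior(0.01, 0.9, 0.1)   # = 0.009 / 0.108 = 1/12
```

Even with a strong likelihood ratio of 9, the small prior keeps the posterior near 1/12, a classic Bayes' rule effect.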
Markov Decision Processes
States, actions, transition model, reward function
Values, Q-states, and Q-values
Bellman updates
Policies, policy extraction
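The MDP topics above can be summarized in a short value-iteration sketch (illustrative; the toy MDP and names are invented). Each pass applies the Bellman update V(s) <- max_a sum_s' T(s,a,s')[R(s,a,s') + gamma V(s')], and the policy is extracted greedily from the converged values.

```python
def value_iteration(states, actions, T, R, gamma=0.9, iters=100):
    """Repeated Bellman updates, then greedy policy extraction."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(sum(p * (R(s, a, s2) + gamma * V[s2])
                        for s2, p in T(s, a))
                    for a in actions(s))
             for s in states}
    policy = {s: max(actions(s),
                     key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                       for s2, p in T(s, a)))
              for s in states}
    return V, policy

# Toy MDP: from "s", action "go" reaches the absorbing state "done"
# with reward 1; "stay" loops in "s" with reward 0.
def actions(s): return ["go", "stay"] if s == "s" else ["noop"]
def T(s, a):
    if s == "s" and a == "stay":
        return [("s", 1.0)]
    return [("done", 1.0)]
def R(s, a, s2):
    return 1.0 if (s == "s" and a == "go") else 0.0

V, policy = value_iteration(["s", "done"], actions, T, R)
```

Here V("s") converges to 1 and the extracted policy chooses "go", since staying only discounts future reward.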
Reinforcement Learning
Model-based vs model-free learning
Policy evaluation
Temporal difference learning
Q-learning
Exploration functions for Q-learning
Parameters alpha and epsilon used in Q-learning
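For the reinforcement-learning topics, a minimal Q-learning sketch may help (illustrative; the state and action names are invented). It shows the temporal-difference update with learning rate alpha and an epsilon-greedy action chooser that balances exploration against exploitation.

```python
import random

def q_update(Q, s, a, r, s2, actions, alpha=0.5, gamma=0.9):
    """TD update: Q(s,a) <- (1-alpha) Q(s,a) + alpha [r + gamma max_a' Q(s',a')]."""
    sample = r + gamma * max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * sample
    return Q

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """With probability epsilon explore randomly; otherwise exploit current Q."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

# One illustrative experience tuple (s, a, r, s'):
Q = {}
q_update(Q, "s0", "right", 1.0, "s1", ["left", "right"])
```

Starting from Q = 0 everywhere, a reward of 1 with alpha = 0.5 moves Q(s0, right) halfway toward the sample, to 0.5; this is model-free, since no transition model T is ever estimated.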
Classification using Naive Bayes classifiers
Division by P(E) not necessary for classification
Laplace smoothing: Adding 1 to counts when estimating P(Ei | Cj): why and how
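A minimal Naive Bayes classifier sketch for the points above (illustrative; the spam/ham data and helper names are invented, and binary features are assumed). It skips division by P(E), since that factor is the same for every class, and applies Laplace smoothing so an unseen feature value never forces P(Ei|Cj) = 0.

```python
from collections import Counter, defaultdict
import math

def train_nb(examples):
    """examples: list of (feature_tuple, class_label); features assumed binary."""
    class_counts = Counter(c for _, c in examples)
    feat_counts = defaultdict(Counter)   # (class, position) -> value counts
    for feats, c in examples:
        for i, v in enumerate(feats):
            feat_counts[(c, i)][v] += 1
    return class_counts, feat_counts, len(examples)

def classify(feats, class_counts, feat_counts, n):
    """argmax_C P(C) * prod_i P(Ei|C); P(E) is a common factor, so it is
    dropped.  Log-probabilities avoid floating-point underflow."""
    best, best_lp = None, float("-inf")
    for c, cc in class_counts.items():
        lp = math.log(cc / n)
        for i, v in enumerate(feats):
            # Laplace smoothing: add 1 to the count, 2 (binary values)
            # to the denominator.
            lp += math.log((feat_counts[(c, i)][v] + 1) / (cc + 2))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Invented toy data: two binary features per example.
data = [((1, 0), "spam"), ((1, 1), "spam"), ((0, 0), "ham"), ((0, 1), "ham")]
model = train_nb(data)
label = classify((1, 0), *model)
```

Without the +1, the ham class would assign P(E1=1 | ham) = 0 and rule itself out from a single unlucky count; with smoothing, both classes get nonzero likelihoods and spam simply wins.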
Perceptrons
How to compute AND, OR, and NOT.
Simple pattern recognition (e.g., 5 x 5 binary image inputs for optical character recognition)
Training sets, training sequences, and the perceptron training algorithm.
Linear separability and the perceptron training theorem.
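The perceptron topics above can be illustrated with a short sketch (illustrative; the encoding with targets in {-1, +1} and a trailing bias weight is one common convention, not necessarily the one used in lecture). AND is linearly separable, so by the perceptron training theorem the algorithm converges to a separating weight vector.

```python
def train_perceptron(examples, epochs=10):
    """Perceptron training rule: on a mistake, w <- w + y * x, y in {-1, +1}."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                # last weight is the bias
    for _ in range(epochs):
        for x, y in examples:
            xb = list(x) + [1.0]       # constant bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1
            if out != y:               # update only on misclassification
                w = [wi + y * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    xb = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1

# AND is linearly separable, so training converges (perceptron training theorem).
and_data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w = train_perceptron(and_data)
```

OR and NOT can be learned the same way; XOR cannot, since no single hyperplane separates its positive and negative examples.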
The meaning of deep convolutional networks
The Future of AI
Asimov's three laws of robotics, Kurzweil's "singularity"