Assignments
CSE 473: Introduction to Artificial Intelligence
The University of Washington, Seattle, Spring 2023

Due Dates:

The due date for each assignment can be found on the course's calendar.

Assignment 1: Python Warm-up and Introduction to Language Models

INDIVIDUAL WORK. (lead staff members for A1: Kevin Farhat and Emilia Gan)
 

Assignment 2: Heuristics in Search

INDIVIDUAL WORK.

(lead staff members: Markus and Khushi)


 

Assignment 3: Game-Playing Agents

Adversarial Search: This assignment may be done either individually or in a partnership of two. A team of two submits only one set of files on Gradescope. Starter code is provided for each option. In both options, you'll implement alpha-beta pruning and Zobrist hashing.

The authors of winning agents in each option get extra credit.
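To see what Zobrist hashing involves before you start, here is a minimal sketch. The board representation, table sizes, and function names below are invented for illustration; they are not the A3 starter code. The key idea is that each (square, piece) pair gets a fixed random bitstring, the board hash is the XOR of the bitstrings for occupied squares, and a move updates the hash incrementally with two or three XORs instead of a full recomputation.

```python
import random

# Hypothetical simplified board: N_SQUARES squares, each empty (None)
# or holding one of PIECE_TYPES piece codes.
N_SQUARES = 64
PIECE_TYPES = 12

random.seed(473)  # fixed seed so the table is reproducible across runs
# One random 64-bit key per (square, piece) pair.
ZOBRIST = [[random.getrandbits(64) for _ in range(PIECE_TYPES)]
           for _ in range(N_SQUARES)]

def zobrist_hash(board):
    """board: list of length N_SQUARES; None for empty, else a piece index."""
    h = 0
    for sq, piece in enumerate(board):
        if piece is not None:
            h ^= ZOBRIST[sq][piece]
    return h

def update_hash(h, sq, piece_from, piece_to):
    """Incrementally update h when square sq changes contents:
    XOR out the old piece's key, XOR in the new piece's key."""
    if piece_from is not None:
        h ^= ZOBRIST[sq][piece_from]
    if piece_to is not None:
        h ^= ZOBRIST[sq][piece_to]
    return h
```

Because XOR is its own inverse, sliding a piece from one square to another is two calls to `update_hash`, and the result matches hashing the new board from scratch. The hash is typically used as a key into a transposition table during alpha-beta search.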

Option A: Backgammon

Backgammon (individual work); a move generator is given. You'll implement both deterministic and stochastic simplified backgammon (DSBG and SSBG), using minimax/alpha-beta search for DSBG and expectiminimax search for SSBG. Both DSBG and SSBG are derived from standard Backgammon by eliminating some of the complications of the normal game, such as the betting and doubling-cube rules, and by handling "doubles" dice rolls in a special way. Students who wish to extend their agents to handle the standard Backgammon rules can do so in Assignment 6, Option B.
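Expectiminimax extends minimax with chance nodes that average over dice outcomes. A minimal sketch on a hand-built tree (the dict-based node format here is hypothetical, not the DSBG/SSBG starter code):

```python
def expectiminimax(node):
    """Evaluate a game tree with 'max', 'min', 'chance', and 'leaf' nodes.
    Chance nodes carry a 'probs' list aligned with their 'children'."""
    kind = node['type']
    if kind == 'leaf':
        return node['value']
    values = [expectiminimax(child) for child in node['children']]
    if kind == 'max':
        return max(values)                 # our agent picks the best move
    if kind == 'min':
        return min(values)                 # opponent picks the worst for us
    # chance node: expected value over dice outcomes
    return sum(p * v for p, v in zip(node['probs'], values))
```

For example, a chance node with two equally likely rolls, each leading to a max node, evaluates to the average of the two best achievable leaf values. In a real SSBG agent the chance layer would enumerate dice rolls with their probabilities, and alpha-beta pruning applies only at the deterministic layers.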

(lead staff members for option A: Kevin and Kenan)

Option B: Baroque Chess

This option is for teams of two who together build one agent.

Baroque Chess, also known as Ultima, is a variation on the game of chess. It has particularly interesting moves, especially for capturing pieces. Unlike Checkers and standard Chess, Baroque Chess has no world-champion computer agent yet, as far as we know.

Note that this option includes writing a move generator (which is part of the fun of this option). Implement minimax with alpha-beta pruning, and add the capability to play against a stochastic opponent using expectimax search. As with Option A, your Baroque Chess agent can be extended as your project in Option B of Assignment 6.

(lead staff members for option B: Khushi and Wisdom)
 

Assignment 4: Written Exercises I

INDIVIDUAL WORK.

Written exercises on heuristic search and Markov Decision Processes. For those who wish to submit LaTeX-formatted solutions, a LaTeX template file plus the needed images is available as a Zip archive.
 

Assignment 5: Reinforcement Learning

Value Iteration and Q-Learning, in the context of problem solving. Partnerships are optional for this assignment. It gives you experience coding key parts of two fundamental algorithms in the area of Reinforcement Learning. Starter code, including a graphical user interface, is provided. (lead staff members: Phuong, Markus)
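As a preview of the value-iteration half of the assignment, here is a minimal sketch on a tiny invented MDP. The states, actions, rewards, and the transition-table format below are illustrative only, not the A5 starter code; the core is the Bellman backup, repeated until values stop changing.

```python
GAMMA = 0.9

# transitions[state][action] = list of (probability, next_state, reward).
# Toy two-step chain: s0 -> s1 (usually) -> end with reward 10.
transitions = {
    's0':  {'go': [(0.8, 's1', 0.0), (0.2, 's0', 0.0)]},
    's1':  {'go': [(1.0, 'end', 10.0)]},
    'end': {},  # terminal: no actions, value stays 0
}

def value_iteration(transitions, gamma=GAMMA, tol=1e-6):
    """Repeatedly apply the Bellman optimality backup
    V(s) <- max_a sum_{s'} P(s'|s,a) * (R + gamma * V(s'))."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:
                continue  # terminal state
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                       for outcomes in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

On this toy chain, V('s1') converges to 10 and V('s0') to 0.8(0.9·10)/(1 − 0.2·0.9) ≈ 8.78. Q-learning estimates the same quantities from sampled experience instead of a known transition model.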
 

Assignment 6: Project

Projects may be done in teams of two or individually. Choose one of three options.

Option A: Language Model building. (Extends A1a)

Follow the guidelines in the Option A details page to experiment with code that builds language models. (lead staff members: Kevin and Emilia)

Option B: Enhanced Game Agents (Extends A3)

Develop either an enhanced Baroque Chess player or an enhanced Backgammon player. Here are the goals for this option:

  1. Play well (i.e., improve the performance of the agent that you created in A3.)
  2. If your choice of game is Backgammon, play the normal game, including using the doubling cube and handling "dice doubles" according to the standard rules.
  3. If your choice of game is Baroque Chess, implement the "transparency" features provided in the guidelines.
  4. Add a chat capability that is aware of the game state, game dynamics, and that adds an entertaining element to game play.
  5. The chat feature should avoid hallucination and outright false statements about the game.
(lead staff members: Kenan and Wisdom)

Option C: Probabilistic Context-Free Grammar Parsing

Implement both the standard CKY algorithm (for parsing with CFGs) and the Ney version of the CKY algorithm (for parsing with PCFGs). Explore how each works with two example grammars. A detailed explanation of the algorithms is given in chapters of the latest draft of Jurafsky and Martin's book on language processing. Links are provided in the Option C details page. (lead staff members: Khushi and Phuong)
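To give a flavor of the probabilistic (Viterbi) variant, here is a minimal sketch of CKY for a PCFG in Chomsky normal form, computing the probability of the best parse. The toy grammar and sentence below are invented for illustration; they are not the Option C grammars.

```python
from collections import defaultdict

# Rules: (lhs, rhs, prob); rhs is a 1-tuple (terminal) or 2-tuple (nonterminals).
RULES = [
    ('S',  ('NP', 'VP'), 1.0),
    ('NP', ('she',),     0.5),
    ('NP', ('fish',),    0.5),
    ('VP', ('V', 'NP'),  1.0),
    ('V',  ('eats',),    1.0),
]

def pcky(words, rules, start='S'):
    """best[(i, j, A)] = max probability that A derives words[i:j]."""
    n = len(words)
    best = defaultdict(float)
    # Fill in length-1 spans from lexical rules.
    for i, w in enumerate(words):
        for lhs, rhs, p in rules:
            if rhs == (w,):
                best[(i, i + 1, lhs)] = max(best[(i, i + 1, lhs)], p)
    # Combine shorter spans bottom-up over every split point k.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, rhs, p in rules:
                    if len(rhs) == 2:
                        b, c = rhs
                        q = p * best[(i, k, b)] * best[(k, j, c)]
                        if q > best[(i, j, lhs)]:
                            best[(i, j, lhs)] = q
    return best[(0, n, start)]
```

On the toy sentence "she eats fish" this yields 1.0 × 0.5 × (1.0 × 1.0 × 0.5) = 0.25. Dropping the probabilities (tracking membership instead of a max) recovers the standard CKY recognizer; a full parser would also store backpointers to reconstruct the tree.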
 

Assignment 7: Written Exercises II

Written exercises on: Probabilistic Reasoning, Bayes Nets, Markov Models (and HMMs), Perceptrons, Neural Networks, Probabilistic Context-Free Grammars, and Social Issues in Artificial Intelligence. This is an individual-work assignment. A LaTeX template is available.