Readings and Textbook Information for CSE 415, Spring 2021
This quarter, we are using Python as a Second Language and selected chapters of The Elements of Artificial Intelligence by S. Tanimoto. The new versions of these materials use the Python programming language for examples, rather than Lisp. The tutorial Python as a Second Language is available to CSE 415 students here. Other readings come from books by Sutton and Barto, by Mitchell, and by Tan, Steinbach, and Kumar.
A recommended reference is S. Russell and P. Norvig's Artificial Intelligence: A Modern Approach, 3rd edition.
The reading on problem solving with state-space search consists of several parts. First, for the basics of state-space search, see State-Space Search. For mathematical background on combinatorics (especially permutations and combinations), see the Combinatorics chapter of Grinstead and Snell's book on probability, paying attention to pages 75-82 on basic counting and permutations and pages 92-96 on combinations. For the topic of problem-formulation methodology, see Applying AI In Problem Solving. One of the advanced search techniques is Case-Based Reasoning, which can be viewed as the application of a particular sort of strategy to state-space search; the relevant reading consists of pages 1 to 9 of Part V of The Elements of Artificial Intelligence with Python. An optional reading item for adversarial search is the classic paper by Arthur Samuel on the design and implementation of, and experiments with, a checkers-playing program during the 1950s. In that paper one can discern not only the fundamentals of two-person, zero-sum game playing and the caching of state values, but also some of the origins of feature-based reinforcement learning.
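To make the state-space search idea concrete, here is a minimal breadth-first search sketch in Python (our own illustration, not code from the readings), applied to the classic two-water-jugs puzzle:

    from collections import deque

    def bfs(start, is_goal, successors):
        """Breadth-first search over a state space. 'start' is a hashable
        state, 'is_goal' is a predicate, and 'successors' maps a state to
        its successor states. Returns a start-to-goal path, or None."""
        frontier = deque([start])
        parent = {start: None}               # doubles as the visited set
        while frontier:
            state = frontier.popleft()
            if is_goal(state):
                path = []
                while state is not None:     # walk parent links back to start
                    path.append(state)
                    state = parent[state]
                return path[::-1]
            for s in successors(state):
                if s not in parent:
                    parent[s] = state
                    frontier.append(s)
        return None

    # Toy problem: jugs of capacity 3 and 4; a state is (a, b), the contents.
    def jug_successors(state):
        a, b = state
        moves = {(3, b), (a, 4), (0, b), (a, 0),          # fill or empty a jug
                 (max(0, a - (4 - b)), min(4, a + b)),    # pour jug 1 into jug 2
                 (min(3, a + b), max(0, b - (3 - a)))}    # pour jug 2 into jug 1
        moves.discard(state)
        return moves

    print(bfs((0, 0), lambda s: s[1] == 2, jug_successors))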
For a basic introduction to probabilistic reasoning in AI, see Probabilistic Reasoning.
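As a reminder of the style of computation involved, Bayes' rule converts a prior P(H) and likelihoods into a posterior P(H | e). A minimal sketch; all the numbers are invented for illustration:

    # Bayes' rule: P(H | e) = P(e | H) * P(H) / P(e)
    p_h = 0.01              # P(H): prior probability of the hypothesis
    p_e_given_h = 0.90      # P(e | H): probability of the evidence if H holds
    p_e_given_not_h = 0.05  # P(e | not H): false-positive rate

    # Total probability of the evidence, then the posterior.
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    print(round(p_h_given_e, 3))   # 0.154: still small despite the positive evidence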
For the subject of reinforcement learning, start with Chapter 1 of Sutton and Barto, reading through Section 1.6 (pages 1-11). Continue by reading Chapter 2 up to Section 2.4 (pages 19-25). Then read Chapter 3 (Finite Markov Decision Processes). For the Value Iteration algorithm, read Chapter 4 from its beginning at p. 57 through Section 4.4 (p. 67); a minimal code sketch of Value Iteration follows this paragraph. For Temporal-Difference learning and Q-Learning, read Chapter 6 (Sections 6.1 through 6.5).
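For a concrete feel for Value Iteration before tackling Chapter 4, here is a minimal sketch; the three-state MDP is invented for illustration and is not from Sutton and Barto:

    # Value Iteration on a tiny, made-up MDP with states 0-2.
    # transitions[s][a] is a list of (probability, next_state, reward) triples.
    transitions = {
        0: {'stay': [(1.0, 0, 0.0)], 'go': [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
        1: {'stay': [(1.0, 1, 0.0)], 'go': [(0.8, 2, 2.0), (0.2, 1, 0.0)]},
        2: {'stay': [(1.0, 2, 0.0)], 'go': [(1.0, 2, 0.0)]},  # absorbing state
    }
    gamma, theta = 0.9, 1e-6   # discount factor, convergence threshold

    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            # Bellman optimality update: best expected one-step return.
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            break

    print(V)   # approximate optimal value of each state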
(For additional applications of MDPs, see the online book by F. Spieksma. Although it uses somewhat different terminology, its varied examples show how broadly MDPs can be applied.)

For additional machine learning techniques, continue with neural networks or the introduction to classification in pp. 145-149 of Chapter 4 of Tan, Steinbach, and Kumar. A good reference on Naive Bayes classification is Chapter 3 (pp. 1-7) of Mitchell.
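A bare-bones sketch of the Naive Bayes computation may also help; the tiny training set below is invented for illustration:

    from collections import Counter, defaultdict

    # Invented training set: (features, label) pairs.
    data = [
        ({'outlook': 'sunny', 'windy': 'no'},  'play'),
        ({'outlook': 'sunny', 'windy': 'yes'}, 'stay'),
        ({'outlook': 'rainy', 'windy': 'yes'}, 'stay'),
        ({'outlook': 'rainy', 'windy': 'no'},  'play'),
        ({'outlook': 'sunny', 'windy': 'no'},  'play'),
    ]

    label_counts = Counter(label for _, label in data)
    value_counts = defaultdict(Counter)  # (feature, label) -> value counts
    domains = defaultdict(set)           # feature -> set of observed values
    for features, label in data:
        for f, v in features.items():
            value_counts[(f, label)][v] += 1
            domains[f].add(v)

    def classify(features):
        """Return the label maximizing P(label) * prod over features f of
        P(v_f | label), with add-one (Laplace) smoothing."""
        def score(label):
            n = label_counts[label]
            s = n / len(data)            # prior P(label)
            for f, v in features.items():
                s *= (value_counts[(f, label)][v] + 1) / (n + len(domains[f]))
            return s
        return max(label_counts, key=score)

    print(classify({'outlook': 'sunny', 'windy': 'yes'}))   # -> 'stay'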
Bayes nets are covered in a variety of online resources. The lecture slides are a recommended source of basic definitions. For the theory of D-separation, this excerpt from Judea Pearl's book Causality may be helpful.
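To make the basic definitions concrete: a Bayes net factors a joint distribution into one conditional distribution per node, and simple queries can then be answered by summing the factored joint over the hidden variables. A minimal sketch with an invented three-node network:

    from itertools import product

    # Invented network: Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
    p_rain = {True: 0.2, False: 0.8}                    # P(R)
    p_sprinkler = {True: {True: 0.01, False: 0.99},     # P(S | R), keyed by R
                   False: {True: 0.40, False: 0.60}}
    p_wet = {(True, True): 0.99, (True, False): 0.90,   # P(W=True | R, S)
             (False, True): 0.90, (False, False): 0.0}

    def joint(r, s, w):
        """Joint probability via the network's chain-rule factorization."""
        pw = p_wet[(r, s)] if w else 1 - p_wet[(r, s)]
        return p_rain[r] * p_sprinkler[r][s] * pw

    # Query P(Rain | WetGrass=True) by enumerating the hidden variable S.
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print(round(num / den, 3))   # posterior probability of rain, about 0.385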
A good reference on Hidden Markov Models is Appendix A of Jurafsky and Martin's new edition of their book on Speech and Language Processing. This UW handout covers the Forward Algorithm and the Viterbi Algorithm.
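As a compact sketch of the Viterbi algorithm (the two-state weather model below is a standard toy example, not taken from the handout):

    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Most likely hidden-state sequence for the observations 'obs'.
        start_p[s], trans_p[s][s2], and emit_p[s][o] are probabilities.
        Returns (path probability, best state path)."""
        # best[s] = (probability of the best path ending in state s, that path)
        best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
        for o in obs[1:]:
            best = {s: max(((best[s0][0] * trans_p[s0][s] * emit_p[s][o],
                             best[s0][1] + [s]) for s0 in states),
                           key=lambda pair: pair[0])
                    for s in states}
        return max(best.values(), key=lambda pair: pair[0])

    states = ('Rainy', 'Sunny')
    start_p = {'Rainy': 0.6, 'Sunny': 0.4}
    trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
               'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
    emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
              'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

    print(viterbi(['walk', 'shop', 'clean'], states, start_p, trans_p, emit_p))
    # -> (0.01344, ['Sunny', 'Rainy', 'Rainy'])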
Here are our readings on NLP. First, an overview of structural methods: natural language understanding. Next, the Latent Semantic Analysis method is covered here. (Excerpted from Introduction to Python for Artificial Intelligence, published in the IEEE Computer Society Ready Notes series.)
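At its core, Latent Semantic Analysis applies a truncated singular value decomposition to a term-document matrix, so that documents can be compared in a low-dimensional latent-topic space. A minimal NumPy sketch; the toy matrix is invented for illustration:

    import numpy as np

    # Invented term-document count matrix: rows are terms, columns are documents.
    A = np.array([[2, 1, 0, 0],    # 'search'
                  [1, 2, 0, 0],    # 'heuristic'
                  [0, 0, 2, 1],    # 'probability'
                  [0, 0, 1, 2]],   # 'bayes'
                 dtype=float)

    # Truncated SVD: keep only the k largest singular values and vectors.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation of A

    # Each column is one document's coordinates in the k-dimensional latent space.
    doc_vectors = np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(doc_vectors, 3))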
If time permits, we may cover aspects of image understanding. Here are readings relevant to that: Image understanding and Guzman scene analysis.
(Note: The readings authored by Tanimoto are copyrighted and are provided only for the use of students currently registered in CSE 415.) |