Project 3: Markov Decision Processes
Adapted from the Berkeley Pac-Man Projects originally created by John DeNero and Dan Klein.
Pacman seeks reward.
Should he eat or should he run?
When in doubt, Q-learn.
Introduction
In this project, you will implement value iteration and RTDP. You will test your agents first on Gridworld (from class), then apply them to a simulated robot controller (Crawler) and Pacman.
As in previous projects, this project includes an autograder for you to grade your solutions on your machine. This can be run on all questions with the command:
python autograder.py
It can be run for one particular question, such as q2, by:
python autograder.py -q q2
It can be run for one particular test by commands of the form:
python autograder.py -t test_cases/q2/1-bridge-grid
The code for this project contains the following files, which are available here:
Files you'll edit:
| valueIterationAgents.py | A value iteration agent for solving known MDPs. |
| rtdpAgents.py | An RTDP agent for solving known MDPs. |
| analysis.py | A file to put your answers to questions given in the project. |

Files you should read but NOT edit:
| mdp.py | Defines methods on general MDPs. |
| learningAgents.py | Defines the base classes ValueEstimationAgent and QLearningAgent. |
| util.py | Utilities, including util.Counter, which is particularly useful for Q-learners. |
| gridworld.py | The Gridworld implementation. |
| featureExtractors.py | Classes for extracting features on (state, action) pairs. Used for the approximate Q-learning agent (in qlearningAgents.py). |

Files you can ignore:
| environment.py | Abstract class for general reinforcement learning environments. Used by gridworld.py. |
| graphicsGridworldDisplay.py | Gridworld graphical display. |
| graphicsUtils.py | Graphics utilities. |
| textGridworldDisplay.py | Plug-in for the Gridworld text interface. |
| crawler.py | The crawler code and test harness. You will run this but not edit it. |
| graphicsCrawlerDisplay.py | GUI for the crawler robot. |
| autograder.py | Project autograder. |
| testParser.py | Parses autograder test and solution files. |
| testClasses.py | General autograding test classes. |
| test_cases/ | Directory containing the test cases for each question. |
| reinforcementTestClasses.py | Project 3 specific autograding test classes. |
Files to Edit and Submit: You will fill in portions of valueIterationAgents.py, rtdpAgents.py, and analysis.py during the assignment. You should submit these files with your code and comments. Please do not change the other files in this distribution or submit any of our original files other than these.
Evaluation: Your code will be autograded for technical correctness. Please do not change the names of any provided functions or classes within the code, or you will wreak havoc on the autograder. However, the correctness of your implementation -- not the autograder's judgements -- will be the final judge of your score. If necessary, we will review and grade assignments individually to ensure that you receive due credit for your work.
Academic Dishonesty: We will be checking your code against other submissions in the class for logical redundancy. If you copy someone else's code and submit it with minor changes, we will know. These cheat detectors are quite hard to fool, so please don't try. We trust you all to submit your own work only; please don't let us down. If you do, we will pursue the strongest consequences available to us.
Getting Help: You are not alone! If you find yourself stuck on something, contact the course staff for help. Office hours, section, and the discussion forum are there for your support; please use them. If you can't make our office hours, let us know and we will schedule more. We want these projects to be rewarding and instructional, not frustrating and demoralizing. But, we don't know when or how to help unless you ask.
Discussion: Please be careful not to post spoilers.
MDPs
To get started, run Gridworld in manual control mode, which uses the arrow keys:
python gridworld.py -m
You will see the two-exit layout from class. The blue dot is the agent. Note that when you press up, the agent only actually moves north 80% of the time. Such is the life of a Gridworld agent!
You can control many aspects of the simulation. A full list of options is available by running:
python gridworld.py -h
The default agent moves randomly. Watch it on the MazeGrid layout by running:
python gridworld.py -g MazeGrid
You should see the random agent bounce around the grid until it happens upon an exit. Not the finest hour for an AI agent.
Note: The Gridworld MDP is such that you first must enter a pre-terminal state (the double boxes shown in the GUI) and then take the special 'exit' action before the episode actually ends (in the true terminal state called TERMINAL_STATE, which is not shown in the GUI). If you run an episode manually, your total return may be less than you expected, due to the discount rate (-d to change; 0.9 by default).
Look at the console output that accompanies the graphical output (or use -t for all text). You will be told about each transition the agent experiences (to turn this off, use -q).
As in Pacman, positions are represented by (x,y) Cartesian coordinates and any arrays are indexed by [x][y], with 'north' being the direction of increasing y, etc. By default, most transitions will receive a reward of zero, though you can change this with the living reward option (-r).
Question 1 (6 points): Value Iteration
Write a value iteration agent in ValueIterationAgent, which has been partially specified for you in valueIterationAgents.py. Your value iteration agent is an offline planner, not a reinforcement learning agent, and so the relevant training option is the number of iterations of value iteration it should run (option -i) in its initial planning phase. ValueIterationAgent takes an MDP on construction and runs value iteration for the specified number of iterations before the constructor returns.
Value iteration computes k-step estimates of the optimal values, V_k. In addition to running value iteration, implement the following methods for ValueIterationAgent using V_k:
- computeActionFromValues(state) computes the best action according to the value function given by self.values.
- computeQValueFromValues(state, action) returns the Q-value of the (state, action) pair given by the value function given by self.values.
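As a rough illustration rather than the required implementation, the two methods above might be sketched as follows. The sketch assumes the MDP interface described in mdp.py (getPossibleActions, getTransitionStatesAndProbs, getReward; the exact signatures in your copy may differ slightly) and that self.values is the util.Counter of current value estimates:
def computeQValueFromValues(self, state, action):
    # Q_{k+1}(s, a) = sum over s' of T(s, a, s') * [R(s, a, s') + discount * V_k(s')]
    qValue = 0.0
    for nextState, prob in self.mdp.getTransitionStatesAndProbs(state, action):
        reward = self.mdp.getReward(state, action, nextState)
        qValue += prob * (reward + self.discount * self.values[nextState])
    return qValue

def computeActionFromValues(self, state):
    # The best action is the argmax over Q-values; states with no legal
    # actions (e.g. the terminal state) have no best action.
    actions = self.mdp.getPossibleActions(state)
    if not actions:
        return None
    return max(actions, key=lambda a: self.computeQValueFromValues(state, a))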
These quantities are all displayed in the GUI: values are numbers in squares, Q-values are numbers in square quarters, and policies are arrows out from each square.
Important: Use the "batch" version of value iteration where each vector V_k is computed from a fixed vector V_{k-1} (like in lecture), not the "online" version where one single weight vector is updated in place. This means that when a state's value is updated in iteration k based on the values of its successor states, the successor state values used in the value update computation should be those from iteration k-1 (even if some of the successor states had already been updated in iteration k). The difference is discussed in Sutton & Barto in the 6th paragraph of chapter 4.1.
Note: A policy synthesized from values of depth k (which reflect the next k rewards) will actually reflect the next k+1 rewards (i.e. you return π_{k+1}). Similarly, the Q-values will also reflect one more reward than the values (i.e. you return Q_{k+1}). You should return the synthesized policy π_{k+1}.
Hint: Use the util.Counter class in util.py, which is a dictionary with a default value of zero. Methods such as totalCount should simplify your code. However, be careful with argMax: the actual argmax you want may be a key not in the counter!
Note: Make sure to handle the case when a state has no available actions in an MDP (think about what this means for future rewards).
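Putting these notes together, one possible shape for the batch update loop is sketched below. It is only a sketch under some assumptions: the planning loop lives in a method called from the constructor (the exact method name in the skeleton may differ), util is already imported, and computeQValueFromValues reads self.values so that every update in iteration k sees only values from iteration k-1.
def runValueIteration(self):
    # Batch value iteration: each pass computes V_k entirely from V_{k-1}.
    for _ in range(self.iterations):
        newValues = util.Counter()       # collects V_k while self.values stays V_{k-1}
        for state in self.mdp.getStates():
            actions = self.mdp.getPossibleActions(state)
            if not actions:
                continue                 # no legal actions: value stays at 0
            # computeQValueFromValues still reads self.values, i.e. V_{k-1}
            newValues[state] = max(self.computeQValueFromValues(state, a)
                                   for a in actions)
        self.values = newValues          # only now does V_k replace V_{k-1}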
To test your implementation, run the autograder:
python autograder.py -q q1
The following command loads your ValueIterationAgent, which will compute a policy and execute it 10 times. Press a key to cycle through values, Q-values, and the simulation. You should find that the value of the start state (V(start), which you can read off of the GUI) and the empirical resulting average reward (printed after the 10 rounds of execution finish) are quite close.
python gridworld.py -a value -i 100 -k 10
Hint: On the default BookGrid, running value iteration for 5 iterations should give you this output:
python gridworld.py -a value -i 5
Grading: Your value iteration agent will be graded on a new grid. We will check your values, Q-values, and policies after fixed numbers of iterations and at convergence (e.g. after 100 iterations).
Question 2 (1 point): Bridge Crossing Analysis
BridgeGrid is a grid world map with a low-reward terminal state and a high-reward terminal state separated by a narrow "bridge", on either side of which is a chasm of high negative reward. The agent starts near the low-reward state. With the default discount of 0.9 and the default noise of 0.2, the optimal policy does not cross the bridge. Change only ONE of the discount and noise parameters so that the optimal policy causes the agent to attempt to cross the bridge. Put your answer in question2() of analysis.py. (Noise refers to how often an agent ends up in an unintended successor state when they perform an action.) The default corresponds to:
python gridworld.py -a value -i 100 -g BridgeGrid --discount 0.9 --noise 0.2
Grading: We will check that you only changed one of the given parameters, and that with this change, a correct value iteration agent should cross the bridge. To check your answer, run the autograder:
python autograder.py -q q2
Question 3 (5 points): Policies
Consider the DiscountGrid layout, shown below. This grid has two terminal states with positive payoff (in the middle row): a close exit with payoff +1 and a distant exit with payoff +10. The bottom row of the grid consists of terminal states with negative payoff (shown in red); each state in this "cliff" region has payoff -10. The starting state is the yellow square. We distinguish between two types of paths: (1) paths that "risk the cliff" and travel near the bottom row of the grid; these paths are shorter but risk earning a large negative payoff, and are represented by the red arrow in the figure below. (2) paths that "avoid the cliff" and travel along the top edge of the grid. These paths are longer but are less likely to incur huge negative payoffs. They are represented by the green arrow in the figure below.
In this question, you will choose settings of the discount, noise, and living reward parameters for this MDP to produce optimal policies of several different types. Your setting of the parameter values for each part should have the property that, if your agent followed its optimal policy without being subject to any noise, it would exhibit the given behavior. If a particular behavior is not achieved for any setting of the parameters, assert that the policy is impossible by returning the string 'NOT POSSIBLE'.
Here are the optimal policy types you should attempt to produce:
- Prefer the close exit (+1), risking the cliff (-10)
- Prefer the close exit (+1), but avoiding the cliff (-10)
- Prefer the distant exit (+10), risking the cliff (-10)
- Prefer the distant exit (+10), avoiding the cliff (-10)
- Avoid both exits and the cliff (so an episode should never terminate)
To check your answers, run the autograder:
python autograder.py -q q3
question3a() through question3e() should each return a 3-item tuple of (discount, noise, living reward) in analysis.py.
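Each of these functions only needs to return the three numbers (or the string 'NOT POSSIBLE'). Purely as an illustration of the expected shape, with placeholder values that are not an answer:
def question3a():
    # Placeholder values only: replace them with parameters that actually
    # produce "prefer the close exit, risking the cliff", or return the
    # string 'NOT POSSIBLE' if you believe no setting works.
    answerDiscount = 0.9
    answerNoise = 0.2
    answerLivingReward = 0.0
    return answerDiscount, answerNoise, answerLivingReward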
Note: You can check your policies in the GUI. For example, using a correct answer to 3(a), the arrow in (0,1) should point east, the arrow in (1,1) should also point east, and the arrow in (2,1) should point north.
Note: On some machines you may not see an arrow. In this case, press a button on the keyboard to switch to qValue display, and mentally calculate the policy by taking the arg max of the available qValues for each state.
Grading: We will check that the desired policy is returned in each case.
Question 4 (5 points): RTDP
In the first question you implemented an agent that uses value iteration to find the optimal policy for a given MDP. In this question, you will implement an agent that uses RTDP to find a good policy quickly. The agent has been partially specified for you in rtdpAgents.py.
(We've updated gridworld.py and graphicsGridworldDisplay.py and added a new file, rtdpAgents.py, so please download the latest files. If you are curious, you can see the changes we made in the commit history here.)
In RTDP, the agent only updates the values of the relevant states. This is different from value iteration, where the agent performs Bellman updates on every state. Bonet and Geffner (2003) implement RTDP for an SSP MDP. However, the grid world is not an SSP MDP; instead, it is an IHDR MDP*. In order to implement RTDP for the grid world, you will perform asynchronous updates on only the relevant states. Note that the relevant states are the states the agent actually visits during the simulation. You will also implement an admissible heuristic function that forms an upper bound on the value function. Assume that the living cost is always zero.
In order to implement RTDP efficiently, you will need a hash table for storing the updated values of states. Initially the table is empty and state values are given by a heuristic function. Every time the value of a state not in the table is updated, an entry for that state is created. For states not in the table, the value is given by the heuristic function.
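To make the table-plus-heuristic idea concrete, here is a rough sketch of how the value lookup and a single RTDP trial might look inside RTDPAgent. The names used here (self.values as a plain dict, self.heuristic, self.max_iters, getValue, runTrial) follow the hints further below but are otherwise illustrative assumptions, not the required interface:
import random   # at module level

def getValue(self, state):
    # States that have been backed up live in the table; everything else
    # falls back to the admissible heuristic (an upper bound on V*).
    if state in self.values:
        return self.values[state]
    return self.heuristic(state)

def runTrial(self):
    # One RTDP trial: follow the greedy policy from the start state and
    # back up only the states actually visited.
    state = self.mdp.getStartState()
    steps = 0
    while not self.mdp.isTerminal(state) and steps < self.max_iters:
        # Greedy Bellman backup of the current state using getValue().
        qValues = {}
        for action in self.mdp.getPossibleActions(state):
            q = 0.0
            for nextState, prob in self.mdp.getTransitionStatesAndProbs(state, action):
                reward = self.mdp.getReward(state, action, nextState)
                q += prob * (reward + self.discount * self.getValue(nextState))
            qValues[action] = q
        bestAction = max(qValues, key=qValues.get)
        self.values[state] = qValues[bestAction]   # creates the table entry
        # Sample the successor according to the transition probabilities
        # (the provided weighted_choice helper can do this for you instead).
        r, cumulative = random.random(), 0.0
        for nextState, prob in self.mdp.getTransitionStatesAndProbs(state, bestAction):
            cumulative += prob
            if r <= cumulative:
                break
        state = nextState
        steps += 1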
The following command loads your RTDPAgent and runs it for 10 iterations.
python gridworld.py -a rtdp -i 10
You will now compare the performance of your RTDP implementation with value iteration on the BigGrid. You can load the big grid using the option -g BigGrid.
Now answer the following questions:
- Plot the average reward (from the start state) for value iteration (VI) on the BigGrid vs. the time the planner took. The picture shows the result of running value iteration on the big grid. You may find the following command useful: python gridworld.py -a value -i 100 -k 1000 -g BigGrid -q -w 40
- By running this command and varying the -i parameter, you can change the number of iterations allowed for your planner, for example 1 through 100. (Note: you should vary -i enough that you can observe the convergence of the average reward.) For each case, record the time your planner took and the average reward.
- Plot the same average reward for RTDP on the BigGrid vs. time.
Hints:
- You may find the getGoalReward and getGoalState methods we added in gridworld.py helpful for defining your heuristic function.
- You can use Python's built-in time module to calculate the time your planner took (a timing and plotting sketch follows this list).
- You may find weighted_choice helpful for stochastically selecting the next state.
- If your RTDP trial is taking too long to reach the terminal state, you may find it helpful to terminate a trial after a fixed number of steps. You may use the max_iters variable for this purpose.
- util.py has a function for calculating Manhattan distance.
- BigGrid is defined in gridworld.py.
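For the timing measurements and the graph, any reasonable setup is fine, and you do not submit this code. A minimal sketch, assuming a hypothetical helper run_planner(iters) that builds your agent with the given number of iterations and returns the average reward from the start state, and assuming matplotlib for plotting:
import time
import matplotlib.pyplot as plt

# run_planner is a hypothetical stand-in for however you actually invoke
# your planner (e.g. constructing the agent and running the episodes).
points = []
for iters in [1, 5, 10, 25, 50, 100]:
    start = time.time()
    avgReward = run_planner(iters)
    points.append((time.time() - start, avgReward))

times, rewards = zip(*points)
plt.plot(times, rewards, marker='o', label='VI')
plt.xlabel('planning time (seconds)')
plt.ylabel('average reward from the start state')
plt.legend()
plt.savefig('rtdp.pdf')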
We will now change the backup strategy used by RTDP. Instead of updating a state immediately, insert all the states visited during a simulated trial into a stack and update them in reverse order. Plot the average reward, again from the start state, for RTDP with this backup strategy (RTDP-reverse) on the BigGrid vs. time.
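Reusing the structure of the earlier trial sketch, the change is small: remember the states visited during the trial and back them up in reverse once the trial ends. The helper names below (greedyAction, sampleNextState, computeQValue) are hypothetical stand-ins for whatever you wrote for ordinary RTDP:
def runReverseTrial(self):
    # RTDP-reverse: record the trajectory first, back up afterwards.
    state = self.mdp.getStartState()
    visited = []                          # used as a stack
    steps = 0
    while not self.mdp.isTerminal(state) and steps < self.max_iters:
        visited.append(state)
        action = self.greedyAction(state)            # hypothetical helper
        state = self.sampleNextState(state, action)  # hypothetical helper
        steps += 1
    # Pop the stack so updates run from the end of the trial back to the start.
    while visited:
        s = visited.pop()
        actions = self.mdp.getPossibleActions(s)
        if actions:
            self.values[s] = max(self.computeQValue(s, a) for a in actions)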
Submit a pdf named rtdp.pdf containing the performance of the three methods (VI, RTDP, RTDP-reverse) in a single graph. You don't need to submit the code for plotting these graphs. Explain the observed behavior in a few sentences. Also, explain the heuristic function and why it is admissible (a proof is not required; a simple line explaining it is fine).
Optional (Extra credit)
- Using problem relaxation and A* search, create a better heuristic.
- Implement a new agent that uses LRTDP (Bonet and Geffner, 2003).
*Please refer to the slides if these acronyms do not make sense to you.
Submission
1. Go to the course Dropbox page.
2. Choose PS3.
3. Click "Choose File" and submit your versions of valueIterationAgents.py, rtdpAgents.py, rtdp.pdf, and analysis.py.