Assignment 3: Adversarial Search
CSE 415: Introduction to Artificial Intelligence
The University of Washington, Seattle, Winter 2024
The reading for this assignment is Search in The Elements of Artificial Intelligence with Python.
Due Wednesday, January 31 at 11:59 PM. Partnerships are optional for this assignment.

Here is the starter code for this assignment.

A. Introduction

K-in-a-Row

In this assignment, you'll create a program that implements an agent that can participate in a game of K-in-a-Row with Forbidden Squares (defined below). Then you'll demonstrate your agent, providing one or more transcripts of the agent playing against another agent.
PART I: Agent Code (80 points). Your program should consist of a single file called minimax_agent.py.

Your program will consist mainly of a collection of specific functions for playing games of K-in-a-Row. We define a K-in-a-Row game as a kind of generalized Tic-Tac-Toe with the following features:

  • (a) Just as in Tic-Tac-Toe, there are two players: one plays X and the other plays O;
  • (b) the board is rectangular, but is not necessarily 3 by 3; it is mRows by nColumns, where these numbers are chosen by the Game Runner (referee) at the beginning of the game;
  • (c) a player wins by getting K in a row, where K is not necessarily 3; K can be any integer greater than 1 and less than or equal to the maximum of mRows and nColumns;
  • (d) there can be "forbidden squares" on the board; these are chosen at the beginning of the game by the Game Runner; a square on the board that is available is represented by a blank, whereas a forbidden square is represented by a dash "-" ;
  • (e) there can be "handicaps" in the initial state, meaning that some X and/or O tokens can be set up on the board by the Game Runner in order either to influence the succeeding play or to change the balance of advantage and disadvantage to the players.
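
For example (purely illustrative; actual configurations are chosen by the Game Runner), a 5-by-5 game with K = 4 might start with one forbidden square and an O handicap token already placed:

               [[' ',' ',' ',' ',' '],
                [' ',' ',' ',' ',' '],
                [' ',' ','-',' ','O'],
                [' ',' ',' ',' ',' '],
                [' ',' ',' ',' ',' ']]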

The starter code consists of several files:

Files you will modify and turn in:

  • minimax_agent.py

Files you should read and understand:

  • game.py
  • agent.py
  • runner.py

Other files:

  • transcript.py
  • autograder.py

Please do not add any additional files. Keep all the code for your agent in minimax_agent.py. You will turn in only this file along with your game transcripts, so any changes made to other files will not be reflected in your grading.

minimax_agent.py contains a class MinimaxAgent that must contain the following functions (a minimal skeleton illustrating these signatures follows the list):

  1. __init__(self, initial_state, piece):

    This is your agent's constructor and should perform any necessary setup upon receiving the initial state and piece that your agent will be playing as.

  2. introduce(self):

    This function should return a multi-line string to introduce your agent. This should include your agent's name (you get to make this up), your name and netid, and anything else you'd like the agent to say at the start of the game.

  3. nickname(self):

    This function should return a short version of the playing agent's name (16 characters or fewer). This name will be used to identify the player's moves in game transcripts.

  4. choose_move(self, state, time_limit):

    This function selects which move to make given a game board (state). It is the main entry point into your game playing algorithm, and is called by the Game Runner.

    If the time_limit argument is not None, the Game Runner will cut off your function once that many seconds have passed. If you have not returned a valid move by that time, you will forfeit the game. To make sure this does not happen, you can use the time_limit parameter to end your search early and return your best move so far.

    You may assume that the state provided to this function will be a valid state where there is a possible space to move and no winner has been established.

  5. minimax(self, state, depth_remaining, time_limit, alpha, beta, z_hashing):

    This function should be called by choose_move and should implement the minimax algorithm. state and depth_remaining should be used for recursively searching deeper layers and stopping at the correct layer, and will be used in our testing. The remaining arguments are up to you to use as you wish, with the caveat that your code must be able to handle a value of None for any of them; None indicates that the associated feature should not be used.

    alpha, beta, and z_hashing are all present for the extra credit opportunities on this assignment. If you do not do the extra credit, you can safely ignore these arguments.

    Your minimax function will be graded based on how many times it calls your static evaluation function. Make sure not to leave any unnecessary static evaluation calls in your code.

    The minimax function must return a move and a float value that corresponds to the evaluation of that move.

  6. static_eval(self, state):

    This is your static evaluation function. Given a game state, it should return a numeric evaluation of that state, where a higher value means the state is better for X and a lower value means the state is better for O. Note that this is regardless of which piece your agent is playing. For this assignment we define that X is always the maximizing player and O is always the minimizing player.

    Your static evaluation function should be non-trivial and should return different evaluations for states that are clearly different. We will grade based on comparing your evaluations across states that are easily distinguished by a human as significantly better or worse, so don't worry too much about determining small differences between mostly even states.

    We provide little guidance for this function beyond the lecture slides. This will be the main opportunity for you to set your agent apart from the rest of the class for the tournament, so making sure that this function gives good results and runs quickly will go a long way toward improving your agent's performance. (A simple baseline sketch appears after this list.)

    For our testing, you may assume that the state parameter is a valid state that matches the size and k values of the initial state passed to your agent in the constructor.
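
To make the required interface concrete, here is a minimal skeleton of the class. The method bodies are placeholders, not a working agent; the choose_move body shows one common way to respect the time limit (iterative deepening), which is a suggestion rather than a requirement.

    import time

    class MinimaxAgent:
        def __init__(self, initial_state, piece):
            self.piece = piece                  # 'X' or 'O'
            self.initial_state = initial_state  # remembers board size and K

        def introduce(self):
            return ("My name is Placeholder Bot.\n"
                    "I was created by A. Student (netid).\n"
                    "Good luck!")

        def nickname(self):
            return "PlaceBot"  # 16 characters or fewer

        def choose_move(self, state, time_limit):
            # Iterative deepening: keep the best move from the deepest
            # completed search, and stop well before the time limit.
            start = time.time()
            best_move = None
            for depth in range(1, 10):
                move, _ = self.minimax(state, depth, time_limit,
                                       None, None, None)
                if move is not None:
                    best_move = move
                if time_limit is not None and \
                        time.time() - start > 0.5 * time_limit:
                    break
            return best_move

        def minimax(self, state, depth_remaining, time_limit,
                    alpha, beta, z_hashing):
            # Recursive game-tree search goes here; it must return a
            # (move, float) pair.
            return None, self.static_eval(state)

        def static_eval(self, state):
            # Higher values favor X; lower values favor O.
            return 0.0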
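
As one illustration of the kind of heuristic that can work here (a baseline sketch only, not the required approach), the function below slides a window of length k over every row, column, and diagonal of a raw board (a list of lists, as in the example shown later on this page) and rewards windows occupied by only one player:

    def window_eval(board, k):
        # Scan every length-k window in each row, column, and diagonal.
        rows, cols = len(board), len(board[0])
        directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
        score = 0.0
        for r in range(rows):
            for c in range(cols):
                for dr, dc in directions:
                    end_r, end_c = r + (k - 1) * dr, c + (k - 1) * dc
                    if not (0 <= end_r < rows and 0 <= end_c < cols):
                        continue
                    window = [board[r + i * dr][c + i * dc]
                              for i in range(k)]
                    if '-' in window:  # a forbidden square blocks this line
                        continue
                    x_count, o_count = window.count('X'), window.count('O')
                    if x_count and not o_count:
                        score += 10 ** x_count  # longer X threats weigh more
                    elif o_count and not x_count:
                        score -= 10 ** o_count
        return score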

Running a Game:

You will use the file runner.py to run games between agents. agent.py contains a sample agent that plays randomly, provided for you to test against. At the bottom of runner.py, you'll see that the file is already set up to play your agent against our sample agent. You may change this to test different starting configurations, as well as to play against yourself or another agent. See the instructions in that file for making those changes.

Running Local Tests:

You can use the file autograder.py to run several tests we have provided to verify that your code is correct. To run all the tests, simply run the command python3 autograder.py, or use any other method you like to run the file with Python. In addition to these tests, you can also use the Gradescope autograder to test your code. It may take us 2-3 days to set up the Gradescope autograder after we release this assignment. Note that Gradescope will have tests that are not given to you in the starter code, so passing all tests on your machine does not guarantee a full grade on Gradescope. We will use the autograder score from Gradescope as the bulk of your grade on this assignment, so let us know on Ed if something seems wrong.

PART II: Game Transcript (20 points).

As part of your turn-in, you will submit at least one game transcript. Follow the instructions above along with the guidelines below to run the game and generate your transcript.

Using the runner.py program to run a K-in-a-Row match, create a transcript of a game between your agent and the agent of another student in the class.

Set up the match so that your agent plays 'X' and your opponent plays 'O', using the following game instance. For this run, you should set a time limit of 1.00 seconds per move (e.g., by editing the appropriate line of code near the bottom of the runner.py program).

The following is the representation of the initial board in a K-in-a-Row game called "5 in a Row on 7-by-7 board with Corners Forbidden". This board can be found in game.py as the no_corners() function.

               [['-',' ',' ',' ',' ',' ','-'],
                [' ',' ',' ',' ',' ',' ',' '],
                [' ',' ',' ',' ',' ',' ',' '],
                [' ',' ',' ',' ',' ',' ',' '],
                [' ',' ',' ',' ',' ',' ',' '],
                [' ',' ',' ',' ',' ',' ',' '],
                ['-',' ',' ',' ',' ',' ','-']]

runner.py has the ability to generate PDF or HTML transcripts of your game. When the transcript option is enabled, it will first attempt to generate a PDF and, if that fails, will create an HTML transcript as a backup. PDF generation requires an extra library, so you will need to run pip3 install pyppeteer before you can generate PDFs. In the event that this library cannot be installed or does not work for you, we will accept the HTML transcript as an alternative, though we vastly prefer the PDF transcript in your submission. Note that occasionally the PDF generation code will output an error to the console but still generate a valid PDF, so make sure to check the file output.

Make sure to set the correct options in runner.py to generate a transcript file, and turn in that file (PDF preferred, HTML okay) with your submission to Gradescope. (For an example of such a file, click here.)

Extra Credit Options

There are two ways to get extra credit in Part I and three ways in Part II, for a maximum of 25 points of extra credit. If you do any extra credit, you will also submit a file ExtraCredit.txt describing which extra credit options you completed.

In Part I:

  • A. Implement alpha-beta cutoffs. If you choose this, your minimax function must still work without alpha-beta pruning whenever it is called with values of None for alpha and beta. Use this ability to run your minimax function with and without alpha-beta cutoffs, and report the number of static evaluation calls in each case in your ExtraCredit.txt file. Be sure to use a fixed-depth search in the comparisons, such as 4 ply. (A generic sketch of alpha-beta pruning follows this list.)
  • B. Implement Zobrist hashing. We are not planning to cover this in class, so this would require reading about the technique online or discussing it with the staff. If you complete this option, provide a brief description of Zobrist hashing in your ExtraCredit.txt file. (A minimal sketch of the technique also follows this list.)
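
For option A, the following generic sketch shows the structure of minimax with optional alpha-beta cutoffs. Here get_moves, apply_move, and static_eval are hypothetical stand-ins for whatever your agent actually uses, and the None guard preserves plain minimax behavior when pruning is disabled:

    def alphabeta(state, depth, maximizing, alpha, beta,
                  static_eval, get_moves, apply_move):
        moves = get_moves(state)
        if depth == 0 or not moves:
            return None, static_eval(state)
        best_move = None
        value = float('-inf') if maximizing else float('inf')
        for move in moves:
            _, child = alphabeta(apply_move(state, move), depth - 1,
                                 not maximizing, alpha, beta,
                                 static_eval, get_moves, apply_move)
            if (maximizing and child > value) or \
                    (not maximizing and child < value):
                value, best_move = child, move
            if alpha is not None and beta is not None:
                if maximizing:
                    alpha = max(alpha, value)
                else:
                    beta = min(beta, value)
                if alpha >= beta:  # cutoff: remaining moves cannot matter
                    break
        return best_move, value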
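
For option B, the core idea of Zobrist hashing is to assign a random bitstring to each (square, piece) pair and XOR together the bitstrings of all occupied squares. The sketch below assumes the list-of-lists board format used elsewhere in this assignment:

    import random

    def make_zobrist_table(m_rows, n_cols, pieces=('X', 'O')):
        # One random 64-bit bitstring per (square, piece) pair.
        return {(r, c, p): random.getrandbits(64)
                for r in range(m_rows)
                for c in range(n_cols)
                for p in pieces}

    def zobrist_hash(board, table):
        h = 0
        for r, row in enumerate(board):
            for c, piece in enumerate(row):
                if piece in ('X', 'O'):
                    h ^= table[(r, c, piece)]
        return h

    # Because XOR is its own inverse, the hash can be updated
    # incrementally when a piece is placed or removed:
    # new_h = old_h ^ table[(r, c, piece)].
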
Competition Option.

After your program is working, you may choose to take steps to qualify for the competition if you wish. To enter the competition, first your agent must qualify, and then you submit your evidence of qualification. For your agent to qualify, you must indicate so in ExtraCredit.txt and submit game transcripts for two games that meet the following criteria.

  1. Each game must be between your agent and a different opponent. (So your agent competes against two other agents.) The submitted games may have your agent playing either X or O (your choice).
  2. Your agent must be the winner of at least one of these games.
  3. The two games must involve a version of K-in-a-Row in which K is at least 5 and the board is at least 7 by 7; the board can be bigger if you wish.
If you enter your agent and supply this qualifying information, you will automatically get 5 more points of extra credit, even if your agent doesn't win any more games. If your agent is a finalist (one of the top 8 agents in the class), you'll get 5 more points of extra credit. Finally, if your agent is the overall winner in its category, you'll receive 5 additional points, for a total of 15 points for participating in and winning the competition.
What to Turn In.

Turn in both your minimax_agent.py file and your game-transcript file(s). Each game-transcript file should be the PDF (preferred) or HTML file that is generated by runner.py.
If you are doing any of the extra credit items, then also turn in your file ExtraCredit.txt that explains those special features of your agent and/or what opponent agents your submitted games involve.
If you wish to enter the competition, explain that in the ExtraCredit.txt file, and submit the additional qualifying game transcript.

Updates and Corrections

If necessary, updates and corrections will be posted here and/or mentioned in class or on Ed.