Technology Lab

Create an interactive prototype that incorporates data generated by or processed by a language model. You can choose to implement this for web or mobile, using your preferred combination of tools and frameworks.

The goals of this technology lab are to ensure your familiarity with:

  • A web or mobile technology stack.
  • Programmatically interacting with language models.
  • Integrating language models with an interface framework.
  • Managing data persistence.
  • Managing your project with Git.

You will also need all of these in your larger group project, so this technology lab supports your rapid experimentation and learning. This assignment first outlines requirements for this technology lab, then provides examples that may be helpful for you to get started.

Requirements

  • This lab may be completed individually or in groups of two.
  • Your work must be tracked in GitLab, including in-progress work. To obtain your GitLab project, sign up in the provided document, then take ownership of that project.
  • Your interactive prototype must make non-trivial use of your chosen interface framework.
  • Your interactive prototype must include at least one core functionality that leverages a language model, either by integrating a hosted service (e.g., the Gemini API) or by integrating a local model (e.g., via ollama).
  • Your interactive prototype must store and retrieve data that is persistent across sessions.
  • If you prefer, you may consult AI assistants as part of this lab. Document how you interacted with AI to obtain a desired response, whether and why you adopted AI-generated code, and what modifications you made to AI-generated code (e.g., briefly document and track this with GitLab).
  • Plan ahead to share both a demonstration of your prototype and insights you gained through its development.

Additional Details and Guidance

  • A goal is to ensure your comfort with using Git in a collaborative project. We very strongly recommend adopting an appropriate workflow, including the use of Feature Branches.
  • A goal is to ensure your comfort with an interface framework. Ensure you exercise enough of your chosen tools and frameworks to demonstrate this (e.g., using different types of interactive elements, experimenting with styling of such elements).
  • A goal is to ensure your comfort programmatically interacting with language models, including in constructing their input and in interpreting their output. Ensure you meaningfully transform data around invocation of a language model. Examples might include:
    • Pre-processing input (e.g., analyzing input events to identify gestures, feeding those to a language model).
    • Chaining language models together (e.g., supporting a complex task by using output from one language model invocation as part of the input to another, or using a language model invocation to decide how to route different tasks to appropriate downstream invocations).
    • Post-processing language model output (e.g., parsing a JSON object generated by a language model to populate a list view; see the sketch after this list).
  • A goal is to ensure your comfort with data persistence. Ensure you appropriately store and retrieve data relevant to your functionality (e.g., in browser local storage, in an integrated database, in a hosted database).
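
As an illustration of the post-processing example above, a minimal sketch in TypeScript might look like the following. All names here are our own, for illustration only:

    // Illustrative post-processing: parse a JSON array produced by a
    // language model into typed items for a list view.
    interface StoryIdea {
      title: string;
      summary: string;
    }

    function parseIdeas(modelOutput: string): StoryIdea[] {
      // Models often wrap JSON in markdown fences; strip them before parsing.
      const cleaned = modelOutput.replace(/```(?:json)?/g, "").trim();
      try {
        const parsed = JSON.parse(cleaned);
        return Array.isArray(parsed) ? (parsed as StoryIdea[]) : [];
      } catch {
        return []; // fall back to an empty list on malformed output
      }
    }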

Deliverables

GitLab Code

Ensure the full code of your prototype is committed to your GitLab project:

  • Include a brief README.md that details how to build and run your prototype.
  • You may choose to omit an API key needed to run your prototype. If so, clearly document the additional configuration needed in the README.
  • Be responsive to any questions from the course staff regarding your prototype.

Due: Fri Apr 18, 8pm


Video Demonstration

A video demonstrating key functionality of your prototype:

  • Must not exceed 2 minutes in length.

Submit via Canvas.

Due: Fri Apr 18, 8pm


Reflection

A reflection that does not exceed 800 words.

  • Author this reflection in the Drive folder corresponding to your technology lab group.
  • Begin by copying the provided template document.
  • Images may be included and do not count against the word limit.

Include 4 sections:

  • Describe your prototype, including what needs or tasks it supports and how it integrates AI.
    • This must not exceed 250 words.
  • Describe two things you learned or insights you gained through development of your prototype.
    • Each must not exceed 200 words.
    • We are open to a wide range of learning as part of this lab, including but not limited to learning about your chosen interface framework, your chosen AI tool, or their integration in your prototype.
    • Describe what you learned and how you applied that learning in your prototype.
  • Reflect upon your prototype relative to a principle of human-AI interaction (e.g., a principle articulated in one of the papers discussed in "Principles of Human-AI Interaction").
    • This must not exceed 150 words.
    • Begin by stating and crediting the principle, then reflect on your prototype relative to that principle.
    • We are open to a wide range of reflections, including but not limited to considering how your prototype addresses or is informed by a principle, considering how a principle exposes a concern or limitation in your prototype, considering how your prototype might be modified or extended according to a principle, or considering how your use of AI or the process of developing your prototype advanced your understanding of a principle.

Submit via Canvas.

Due: Fri Apr 18, 8pm


Slide and Presentation

Prepare and present 1 slide in the provided presentation shared by the entire class:

  • Include 1 or 2 images that convey key aspects of your prototype.
  • Include brief summaries of the two things you learned or insights you gained through development of your prototype.

Presentation will be limited to 1 minute.

Slide Due: Fri Apr 18, 8pm

Presentation: In-Class, Tue Apr 22


Examples

Example 1: Creating an AI Story Generator Using Next.js

In this example (available in GitLab), we created an AI story generator that accepts a prompt describing a story a person would like to generate. The story prompt is combined with a system prompt, then sent to the Gemini API, which returns a generated story. The previously generated stories are stored in the browser's local storage.

You can reference this example to start to imagine how components may come together in your lab and project.

Story Generator Web Prototype

Screenshot of the Story Generator webpage, showing a list of past stories on the left. The second story is selected. On the right, there is a prompt entry box, a button to regenerate the story, and story content corresponding to the selected story.

Implementation

This example uses:

  • Interface Frameworks: Next.js with React and Tailwind CSS.
  • Integrated AI: Google's Gemini 2.0 Flash.
  • Storage: Browser local storage.

Implementation Steps

  1. We first followed Next.js's guide to create a new project. We used the npx create-next-app@latest command and accepted all default settings (e.g., TypeScript, Tailwind CSS, App Router).

  2. We used npm to install the packages for Gemini (@google/generative-ai) and for request handling (axios).

  3. We created the folder api/generateStory within src/app/. Following the Gemini SDK's usage example, we added code in route.tsx to initialize the model and run a prompt. We obtained a Gemini API key, which required registration but provides a free tier that seems sufficient for this course. We added that key to an .env.local file in the project directory as GEMINI_API_KEY=<obtained_gemini_api_key>.
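
As a rough sketch, the initial route.tsx might look like the following. The SDK calls follow the Gemini SDK's usage example, but the handler shape and prompt wording here are our own:

    import { GoogleGenerativeAI } from "@google/generative-ai";

    // The key is read from .env.local, as configured above
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

    // Initial test handler: generate a story from a fixed prompt
    export async function GET() {
      const result = await model.generateContent(
        "Write a three-sentence story about a lighthouse."
      );
      return Response.json({ story: result.response.text() });
    }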

  4. Using VS Code Copilot, we asked Claude to create code for a prompt text box and a submit button in page.tsx and to modify the above route.tsx to accept a POST request from the page. We further refined the design a bit and added support for dark mode. These modifications to the generated project can be seen in our GitLab commit.

In-Progress Story Generator: Text Box with Button

Screenshot of a Story Generator development prototype webpage, showing a prompt entry box and a button to generate a story.
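
With route.tsx now accepting a POST request, the client-side call in page.tsx might look roughly like this, using the axios package installed earlier (the helper name and response shape are our own):

    import axios from "axios";

    // Post the entered prompt to the API route and return the generated story
    async function generateStory(prompt: string): Promise<string> {
      const response = await axios.post("/api/generateStory", { prompt });
      return response.data.story as string;
    }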

  5. After testing the ability to generate a story, we added local storage using localStorage.setItem and localStorage.getItem. We stored the prompt and the generated story as JSON.
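
A minimal sketch of this persistence, with type and key names of our own choosing:

    // Store and retrieve past stories as a JSON array in local storage
    interface StoredStory {
      prompt: string;
      story: string;
    }

    const STORAGE_KEY = "stories";

    function loadStories(): StoredStory[] {
      const raw = localStorage.getItem(STORAGE_KEY);
      return raw ? (JSON.parse(raw) as StoredStory[]) : [];
    }

    function saveStory(entry: StoredStory): void {
      const stories = loadStories();
      stories.push(entry);
      localStorage.setItem(STORAGE_KEY, JSON.stringify(stories));
    }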

  6. The codebase was cluttered, with all its features in page.tsx. With the help of Copilot, we refactored the components into their respective files in src/app/components.

  7. We asked Copilot to help create a sidebar that displays past stories, including support for adding and deleting entries.

  8. We tested the prototype, identified minor functional and stylistic issues, and made modifications to address those. This set of modifications can be seen in our GitLab commit.

Example 2: Using a Local Language Model

In another GitLab branch of the same project, we connected our story generator to a local LLM hosted by ollama. Informed by the list of models ollama supports, we selected gemma3 with 4B parameters.

  1. We started the ollama server using ollama serve. This allowed starting a chat with a model by using ollama run <model_name> in another window.

Ollama Running in a Terminal

Screenshot of ollama running in a terminal with the gemma3 model. The prompt is 'Generate a story about a cat playing a violin.' The model generates a long story, starting with 'The rain in Oakhaven was a mournful, insistent thing, drumming a steady rhythm against the windows of Silas Blackwood’s cottage.'

  2. In the web project, we installed ollama using npm i ollama. Informed by the ollama documentation, we swapped out the Gemini API for the ollama API.
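
The swap might look roughly like the following, assuming ollama serve is running at its default local address. The route shape mirrors the POST route from Example 1 and is illustrative:

    import ollama from "ollama";

    // Same POST route as before, now backed by the local model
    export async function POST(request: Request) {
      const { prompt } = await request.json();
      const response = await ollama.chat({
        model: "gemma3", // the 4B-parameter model selected above
        messages: [{ role: "user", content: prompt }],
      });
      return Response.json({ story: response.message.content });
    }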

  3. That was everything. The Story Generator continues to work, but now uses a local LLM running on our own computer.

Assessment of Example

As a short reflection, we consider how the staff might assess the example if it were submitted for this assignment.

  • It seems to meet the requirements. It includes the use of an interface framework, an integration of AI, and the use of persistent storage. Several steps in its development were captured in GitLab.
  • It seems to fall short in the extent to which it integrates AI. The prototype merely combines input with a short system prompt, then directly displays the language model's output. Developing this did not require meaningful experimentation with the AI or other insight into its integration.
  • There are some artifacts of AI code generation, including code in StoryHistoryItem.tsx that is not very readable:
    className={`p-3 pl-6 border-b border-gray-200 dark:border-gray-700 cursor-pointer hover:bg-gray-100 dark:hover:bg-gray-700 flex justify-between ${
        isSelected ? "bg-blue-50 dark:bg-blue-900/30" : ""
    }`}
    
    
    If we continued to develop this prototype, this code would likely become difficult to extend (e.g., while trying to maintain consistency when adding new elements across the prototype).

Overall, this prototype might receive a ⭐️⭐️⭐️ (Near Expectations) or ⭐️⭐️ (Problematic) rating, primarily due to its lack of ambition and its failure to meaningfully explore its integration of AI. Preparing this writeup required significantly more time and collaboration than developing the example itself; such a minimal time investment would also not be expected to result in a strong rating. If a student had invested significantly more time to develop a similar prototype, it would be important to reflect on the learning that student found necessary in their development.

In preparing individual contribution statements based on this submission, it would also be clear that Mingyuan did all of the development, with James collaborating only through conversations and in this documentation. This is also obvious in the GitLab history of the example project. Although we both would receive the same rating for this assignment in Canvas, our eventual grades might be adjusted based on any patterns in contribution across assignments.