P.A.T.C.H. - Proactive Automated Text Correction Helper

Overview

A CLI tool that helps developers find and fix image alt-text issues in JSX and web components. It parses <img> (and <Image>) tags, runs rule-based checks for missing or weak alt text, then uses OpenAI to suggest WCAG-aligned descriptions. The tool outputs a severity-ranked markdown report and optional one-command fixes you can apply selectively. Built to support—not replace—human review and user testing.


Setup

The audit tool uses OpenAI to analyze alt text. You must set your API key locally; it must never be committed to Git.

  1. Copy the example env file
    • Mac/Linux: cp .env.example .env
    • Windows (PowerShell): Copy-Item .env.example .env
  2. Edit .env and replace your_openai_api_key_here with your real key.
    Get a key at: OpenAI API keys
  3. Do not commit .env — it is listed in .gitignore. Do not paste your key in code, README, or chat.

If you prefer not to use a .env file, you can set the variable in your shell before running the tool (e.g. export OPENAI_API_KEY=sk-... on Mac/Linux, or set it in your terminal/IDE environment).


How to Use

Setup

  1. Clone the repository and install dependencies:
    npm install
    
  2. Make the CLI scripts executable and link the package:
    chmod +x bin/*.js
    npm link
    
  3. Create a .env file in the project root with your OpenAI API key:
    OPENAI_API_KEY=sk-...your-key-here...
    

Running an Audit

fixer-run-audit <file>

Example:

fixer-run-audit src/App.jsx

This will:

  • Parse all <img> and <Image> tags from the file
  • Flag missing, empty, or generic alt text
  • Send the image tags and context to OpenAI for severity ranking and suggestions
  • Generate a report in reports/audit-<timestamp>.md
  • Save fix data to .alt-fixer-cache.json
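The "flag missing, empty, or generic alt text" step is a deterministic precheck that runs before any OpenAI call. A minimal sketch of such a check follows; the function name, rule list, and severity labels are illustrative assumptions, not the tool's actual internals:

```javascript
// Illustrative rule-based precheck for alt text, run before AI analysis.
// The generic-word list and severity labels are assumptions for this sketch.
const GENERIC_ALT = new Set(["image", "photo", "picture", "img", "graphic"]);

function precheckAlt(alt) {
  // No alt attribute at all: screen readers may announce the file name instead.
  if (alt === undefined) return { issue: "missing-alt", severity: "Critical" };
  // Empty alt on meaningful content hides the image from assistive tech.
  if (alt.trim() === "") return { issue: "empty-alt", severity: "Critical" };
  // Generic words like "photo" convey no information about the image.
  if (GENERIC_ALT.has(alt.trim().toLowerCase())) {
    return { issue: "generic-alt", severity: "High" };
  }
  return null; // passes deterministic checks; AI review may still refine it
}
```

Only images that fail (or barely pass) checks like these need to be sent onward for AI-assisted suggestions, which keeps the pipeline fast and predictable.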

Applying Fixes

Apply a single fix by its ID:

fixer-run-fixes 1

Apply all fixes at once:

fixer-run-fixes all

Fix IDs correspond to the numbers in the generated report. The fixer patches the original file in place at the exact line numbers recorded during the audit.


Output Files

File                          Description
reports/audit-<timestamp>.md  Human-readable audit report with severity rankings and suggested alt text
.alt-fixer-cache.json         Machine-readable fix data used by fixer-run-fixes
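The cache file's exact schema is internal to the tool. As a rough illustration only (every field name here is an assumption), an entry might look like:

```json
{
  "file": "src/App.jsx",
  "fixes": [
    {
      "id": 1,
      "line": 12,
      "severity": "Critical",
      "original": "<img src=\"/banner.jpg\" />",
      "suggestedAlt": "Promotional banner for the storefront"
    }
  ]
}
```

Keeping the fix data machine-readable is what lets fixer-run-fixes apply individual fixes by ID without re-running the audit or the OpenAI call.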

Demo

The demo/ folder contains a minimal JSX file with intentional alt-text issues so you can see the full audit → fix flow. Use it for screenshots, screen recording, or to verify the pipeline.

How to run the demo

From the project root:

# 1. Run audit on the demo file (writes report + cache into demo/)
npx fixer-run-audit demo/demo-capture.jsx --out-dir demo

# 2. Copy latest report to demo/audit-report.md and apply all fixes to demo/demo-capture-fixed.jsx
npm run demo:run

Before (demo/demo-capture.jsx)

Source file with missing, empty, and generic alt text so the audit finds Critical and High issues:

/**
 * Demo file for screenshots / screen recording.
 * Intentionally contains alt-text issues so the audit finds Critical + High.
 */
function ProductPage() {
  return (
    <div className="product-page">
      <img src="/hero.png" alt="" />
      <img src="/banner.jpg" />
      <img src="/icons/cart.svg" alt="image" />
      <img src="/product-shoe.png" alt="photo" />
      <img src="/about/team.jpg" alt="Team photo in office" />
    </div>
  );
}

After (demo/demo-capture-fixed.jsx)

Same file after applying suggested fixes (improved alt text, missing alt added):

/**
 * Demo file for screenshots / screen recording.
 * Intentionally contains alt-text issues so the audit finds Critical + High.
 */
function ProductPage() {
  return (
    <div className="product-page">
      <img src="/hero.png" alt="Hero section" />
      <img alt="Banner image" src="/banner.jpg" />
      <img src="/icons/cart.svg" alt="Shopping cart icon" />
      <img src="/product-shoe.png" alt="Product shoe image" />
      <img src="/about/team.jpg" alt="Team photo in office" />
    </div>
  );
}

Terminal output (capture)

Example terminal output from running the demo is saved in the repo for reference.

Contents summary:

  • Step 1 — Audit: Parse images → precheck → OpenAI analysis → report and cache written under demo/.
  • Step 2 — Apply fixes: Report copied to demo/audit-report.md, fixes applied to demo/demo-capture-fixed.jsx (e.g. 4 applied, 0 skipped).

Use this file for documentation or screenshots of the CLI output.


Project Details

Motivation and Abstract

Web developers often work in fast-paced coding environments where efficiency is prioritized, and accessibility for images and visual content is frequently overlooked. Many existing accessibility tools scan broadly for WCAG issues but rarely assess the quality of alt-text implementations. Rule-based tools can detect missing alt attributes, but often provide limited guidance and do not evaluate whether existing alt text meets accessibility standards. Generative AI has been explored as an automated auditor, yet relying on AI alone risks reducing accessibility to a checklist while ignoring human context.

Building on Ashlee M. Boyer’s critique in “How to Dehumanize Accessibility with AI,” this project proposes a lightweight developer tool focused specifically on image accessibility for web development. The system first detects missing or low-quality alt text using deterministic checks similar to pre-AI tools. It then leverages AI to evaluate existing alt text (or its absence) and generate improved, WCAG-aligned descriptions, taking into account the image and its contextual placement in the code. The tool produces a comprehensive, readable, severity-ranked report, which includes the original alt text, suggested improvements, and guidance for future accessibility practices. Developers can optionally apply suggested fixes directly to the code.

Designed to complement, not replace, human user testing, this system combines deterministic detection with AI-assisted feedback to reduce developer overhead while helping developers recognize and prevent image accessibility issues earlier in the workflow. By making the report the central output, the tool not only fixes current issues but also helps developers learn to implement better image accessibility in future projects.

Background Research

Our system builds upon existing research and perspectives regarding the current state of digital accessibility tools and developer workflows:

The Human Impact of Accessibility Gaps

In his reflection, “Growing Up With Accessibility Gaps: A Screen Reader User Reflects”, Michael Taylor details his lifelong experience navigating the digital world as a blind screen reader user relying on tools like Apple’s VoiceOver. He highlights how seemingly minor design omissions create compounding barriers across different facets of life. For instance, unlabeled images and inaccessible video players on social media platforms actively excluded him from peer conversations during his youth. In educational settings, improperly formatted digital materials, such as scanned PDFs, forced him to spend excessive time simply trying to access content rather than learning it. Furthermore, inaccessible customization tools on e-commerce sites completely blocked his ability to make independent purchases. Taylor’s account underscores that accessibility bugs are not merely technical errors—they are systemic barriers that cause real social isolation and loss of independence.

The Developer Struggle with Accessibility

In his blog post, “We Need to Talk: Accessibility Programming…”, Michael VanOverbeek (Ritchie), a blind game developer and accessibility engineer, highlights a critical bottleneck: accessibility programming itself is often inaccessible to developers. He details the immense frustration of dealing with undocumented APIs, a lack of language support (such as missing C# bindings for AccessKit), and the fundamental inability to visualize the accessibility tree in real time. VanOverbeek emphasizes that developers often fail to implement accessibility not out of a lack of care, but because standard development environments do not make accessibility features or errors visible. This account strongly validates our project’s approach: by surfacing alt-text issues, severity rankings, and suggested fixes directly in the developer’s existing terminal workflow, we provide the clear, actionable guidance developers need to build inclusive software by default without feeling overwhelmed.

Goal

The goal of this project is to build a terminal-based tool that helps developers identify and fix missing or low-quality alt text in JSX and web application components. Instead of attempting to automate accessibility entirely, the system focuses on catching common omissions early and supporting developers in making informed fixes. The tool combines rule-based checks with constrained AI-generated suggestions to propose candidate alt text for images that are missing descriptions. Developers remain fully in control of reviewing and applying changes. The goal is to reduce one of the most common accessibility gaps in modern web development while keeping the workflow lightweight and developer-friendly. Our pipeline not only provides optional fixes, generations, and suggestions for alt text, but also produces a report with severity rankings for all image-accessibility issues. In addition, unlike many current checkers, our pipeline considers the context of the code surrounding each image.

Tasks

The first task the system supports is scanning JSX files to detect images that are missing alt text or have clearly insufficient alt attributes. The user runs the CLI audit command from the project root and provides a target file or directory. The system parses the JSX and produces a structured report listing flagged elements, their locations, and severity levels. Each issue is assigned an ID so it can be referenced later. The interface supports this by keeping the output concise and terminal-friendly, allowing developers to quickly understand where problems exist. This task is guided by background research showing that missing alt text remains a commonly overlooked aspect of web development.
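The scanning step above can be sketched as a minimal line-aware pass over a source file. A regex keeps this example short; the function and field names are hypothetical, and the real tool may well use a proper JSX parser instead:

```javascript
// Hypothetical sketch of the image-tag scan with 1-based line numbers,
// as the report needs locations for each flagged element.
function extractImageTags(source) {
  const results = [];
  const tagRe = /<(?:img|Image)\b[^>]*\/?>/g;
  source.split("\n").forEach((line, i) => {
    for (const match of line.matchAll(tagRe)) {
      const alt = match[0].match(/alt\s*=\s*"([^"]*)"/);
      results.push({
        line: i + 1,                   // 1-based, as cited in the report
        tag: match[0],
        alt: alt ? alt[1] : undefined, // undefined means the attribute is absent
      });
    }
  });
  return results;
}
```

Distinguishing an absent alt attribute (`undefined`) from an empty one (`""`) matters downstream, because the two cases get different severity treatment in the audit.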

The second task is generating suggested alt text for images. For each issue, the system produces a short explanation of why the image needs alt text and provides an AI-generated candidate description. If existing alt text falls short of WCAG and class-discussed guidelines, the system offers an optional suggestion to the developer. Suggestions are intentionally constrained and review-based rather than automatically applied. The output includes the issue ID, severity, reasoning, and suggested alt text so developers can evaluate whether the fix is appropriate for their context. This design reflects research indicating that developers prefer assistive AI that supports decision making rather than silently modifying code. The report also notes which aspects of WCAG apply to image accessibility and sorts issues by severity ranking.
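A single entry in the severity-ranked report might look like the following; the layout and wording are illustrative only, not the tool's exact output:

```markdown
### Issue #2 (High)

- Location: src/App.jsx, line 14
- Original: alt="photo"
- Why flagged: "photo" is generic and does not describe the image content (WCAG 1.1.1)
- Suggested: alt="White running shoe shown from the side"
```

Pairing the reasoning with the suggestion is what makes the report a learning aid rather than just a fix list: developers can see why the original fails before deciding whether the candidate text fits their context.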

The third task enables developers to selectively apply fixes. After reviewing the audit results, the user runs the fixes command and can choose to apply all suggestions or only specific issue IDs. The system updates only the relevant JSX elements and writes a summary of what was changed. Developers can then re-run the audit to confirm that the issues were resolved. The interface is designed to preserve developer control and avoid unexpected code modifications.

Storyboard (Initial)

This was our initial storyboard plan. We later narrowed the scope considerably, but it documents the original plans for our tool. Miro link

Validation

Validation Overview

To validate the system, we tested both the audit and fix generation steps using real web pages from sites we previously audited during our accessibility implementation assignment. Because we had access to the codebases for these sites, we were able to run the tool directly on JSX files and compare results against issues we had already identified manually.

We evaluated two things:

  • Audit accuracy — The tool was run on pages with known image accessibility issues, including missing alt attributes and vague alt text. It consistently flagged the same issues identified during manual review.
  • Fix quality — Generated alt text suggestions were compared against alt text we had previously written by hand. Suggestions were assessed against three WCAG-derived criteria: whether the alt text captures the key aspects of the image, whether it is concise but descriptive, and whether it avoids duplicating adjacent text in the code.

Overall, the tool reliably detected missing alt text and correctly prioritized it as most critical in reports. Suggestions for improving existing alt text varied in quality depending on the amount of context available in the surrounding code and the descriptiveness of the image’s file path.

Compliance with Success Criteria

This project is a CLI tool that generates markdown reports and patches source files. Compliance is evaluated in the context of the tool’s output and the user’s terminal/editor environment.

  • 1.1.1 Non-text Content — Supports. The tool’s core purpose is ensuring alternative text is available for all non-text content.
  • 1.3.1 Info and Relationships — Supports. Information and relationships in generated reports are conveyed through structure, not visuals alone.
  • 1.3.4 Orientation — Supports. Dependent on the user’s terminal/editor environment.
  • 1.4.3 Contrast (Minimum) — Supports. Dependent on the user’s terminal/editor environment.
  • 1.4.4 Resize Text — Supports. Dependent on the user’s terminal/editor environment.
  • 1.4.10 Reflow — Supports. Dependent on the user’s terminal/editor environment.
  • 1.4.11 Non-text Contrast — Supports. Generated reports use severity emojis and markdown formatting with proper contrast.
  • 2.4.3 Focus Order — Supports. Generated reports follow a logical order from most to least severe.
  • 4.1.2 Name, Role, Value — Supports. Reports and documentation clearly communicate what actions can be performed.
  • 4.1.3 Status Messages — Supports. CLI status messages communicate progress and results at each step of the pipeline.

Timeline

Week 9 (Milestone One – System Implementation)

By Week 9, we will complete a working minimum viable prototype that can analyze a JSX component using specific commands and return suggested alt text in a comprehensive report. The primary focus of this milestone is building each individual part of the process.

  • Cami will extract all image tags from a given file with accurate line numbers and alt text
  • Jeongjoon will assign severity rankings and generate improved alt text suggestions using WCAG guidance.
  • Lanxuan will produce the human-readable audit report and save machine-readable fix data.
  • Sanjana will make the commands runnable and implement automatic file patching.

By the end of Week 9, the system will accept a JSX file as input, extract all image tags and their alt text, generate and submit a prompt to OpenAI to fix or generate alt text, create a readable audit report from the response, and support all of this through runnable commands with automatic file patching.

Week 10 (Stabilization and Integration)

In Week 10, we will focus on testing, debugging, and ensuring our individual parts are well integrated.

  • One team member will validate the accuracy of the parsed files and check edge cases.
  • Another member will test the consistency of OpenAI responses and refine prompts.
  • A third member will test and improve the change-summary file that documents which fixes were applied and why.
  • The remaining member will lead validation and documentation. Validation will include two components:
    • First, we will conduct an accessibility audit using a set of prepared JSX files that intentionally contain known accessibility errors regarding their alt text. We will verify that our system detects these issues and their severity consistently.
    • Second, we will perform a correctness analysis specific to our tool by confirming that suggested alt text is in line with the images, applied fixes result in syntactically valid JSX, and re-running the tool after fixes shows that alt text issues have been resolved.

Final Presentation Week

During the final week, we will refine the prototype, stabilize fix application, and finalize documentation. Responsibilities will be divided across code polishing, validation reporting, and presentation preparation. We will prepare a poster that explains the motivation, system architecture, example outputs, validation results, limitations, and future work. We will also prepare a demonstration that shows running the tool on a sample JSX file, reviewing the generated report, applying selected fixes, and confirming improvements. By the final presentation, the system will demonstrate a complete workflow from analysis to visualization and controlled modification, supported by documented validation results.

Feasibility Analysis

This project is technically ambitious but viable within the three-week class timeline. The core functionality, including parsing JSX files, detecting common image accessibility issues, and generating a markdown report, can be implemented using standard JavaScript/Node.js tooling and a command-line interface, which simplifies scope by avoiding the complexities of a full IDE extension or GUI. The AI-assisted fix generation represents a stretch goal; if time or technical constraints prevent full integration, the fallback plan is to focus on rule-based detection and reporting, ensuring a functional minimum viable product. The team has varying availability, with most members able to dedicate a few hours per week, so successful completion relies on coordinated collaboration and timely delivery of individual tasks. Key feasibility considerations include ensuring the parsing logic is accurate, that generated fixes do not break JSX syntax, and that AI suggestions are constrained to targeted nodes. By keeping the system terminal-based and modular, these risks are minimized, making the project achievable while leaving room for optional AI enhancements.

Disability Model Analysis

Disability Justice Principle 1

Definition: The principle of Leadership of Those Most Impacted, from Sins Invalid’s Disability Justice framework, argues that people who are most directly affected by systems of oppression should lead the design, analysis, and transformation of those systems. Rather than relying solely on academic experts or technologists, this principle centers the lived experiences and expertise of disabled people themselves.

Application to Our Project: Our accessibility audit tool is designed to support developers in understanding how screen reader users encounter images in JSX code. However, we recognize that this system does not replace the lived experience of blind or screen reader users. Instead, it attempts to expose structural issues (such as missing or low-quality alt text) so developers can better anticipate accessibility barriers. Importantly, our system does not claim to determine what is “fully accessible.” It flags WCAG violations in image markup, but it does not simulate subjective user experience. This distinction is critical. Following this principle, we explicitly position our tool as a supplement to user testing, not a substitute. This avoids the disability dongle problem—where technology claims to “solve” disability without disabled leadership.

However, we acknowledge a limitation: our team does not currently include blind or screen reader users in the design process. While we draw from disability scholarship (including Ashlee M. Boyer’s critique of AI-driven accessibility), we are not fully meeting this principle because we are not centering direct disabled leadership in early-stage design. In future iterations, incorporating consultation with screen reader users would be necessary to better align with this principle.

Disability Justice Principle 2

Definition: Collective Access emphasizes that access needs are diverse, dynamic, and shared across communities. Access should not be treated as an individual burden or checklist, but as a collective responsibility. This principle encourages flexibility and acknowledges that access needs vary across contexts and bodyminds.

Application to Our Project: Our tool aims to promote collective access by shifting accessibility responsibility earlier into the development process. Rather than treating accessibility as an afterthought or compliance checklist, the system makes visible how implementation decisions affect screen reader users. By flagging missing or weak alt text at the exact place it occurs in the code, the tool makes invisible access barriers visible. This encourages developers to consider accessibility as part of their normal workflow, rather than as an external add-on. However, we are careful not to reduce collective access to a purely automated checklist. While we implement rule-based checks for missing, empty, and generic alt text, we explicitly avoid claiming that passing these checks equals full accessibility. Accessibility remains contextual and relational. Additionally, the tool allows developers to selectively apply fixes rather than automatically rewriting code. This preserves developer agency and prevents over-automation that might ignore nuanced context.

Additional Information and Questions

Is the technology ableist? Explain why it is or is not.

The technology risks ableism if it is interpreted as replacing disabled perspectives with automated AI evaluation. If developers rely solely on this tool and skip user testing, the system could reinforce the idea that accessibility is merely technical compliance rather than lived experience. However, the design intentionally limits this risk. The tool:

  • Does not claim to simulate real user experience
  • Does not override developer decisions automatically
  • Requires explicit user approval before applying fixes
  • Encourages re-running audits and continued review

Because it positions itself as an auditing aid rather than a replacement for disabled expertise, it mitigates but does not eliminate ableist risks.

Is the technology informed by disabled perspectives? Explain the relationship to disabled perspectives.

The project is informed conceptually by disability scholarship, particularly critiques of AI-based accessibility overlays. We explicitly respond to concerns raised in “How to Dehumanize Accessibility with AI” by ensuring our tool supports human oversight. However, we acknowledge that the project does not yet include direct collaboration with blind or screen reader users. Therefore, while informed by disabled theory, it is not yet fully grounded in disabled community leadership. Future iterations should include structured consultation or usability feedback from screen reader users to better align with disability justice principles.

Does the technology oversimplify disability/identity? Explain how it does or does not.

The tool focuses specifically on image alt-text accessibility in JSX files. It does not address:

  • Cognitive accessibility
  • Neurodivergent access needs
  • Color contrast
  • Motion sensitivity
  • Intersectional identity concerns

Because of this, the system risks narrowing “accessibility” to screen reader compliance. We mitigate this by clearly defining the scope of the tool as image alt-text auditing, not comprehensive accessibility evaluation. Accessibility is multidimensional and intersectional. Our system addresses one dimension—image alternative text—and must not be interpreted as representing the full spectrum of disability or identity.

Learnings and Future Improvements

One of the biggest learnings from this project was the importance of properly scoping a technical idea. Our original concept was significantly broader and included several different accessibility visualization and auditing components. As we began implementation, it became clear that trying to address too many issues at once would make the system difficult to build and evaluate effectively. Narrowing the focus to a specific WCAG-related problem, image alt-text accessibility, allowed us to design a tool that addressed a common accessibility gap in a more thoughtful and practical way. Focusing on a single issue also helped us better understand the technical challenges involved in parsing JSX, identifying accessibility problems, and integrating automated suggestions into a developer workflow.

Another key learning was that AI does not need to be the immediate solution for every part of an accessibility tool. Many accessibility checks, such as detecting missing alt attributes, can be reliably implemented using deterministic rule-based approaches that have existed in accessibility tooling for many years. Using these checks first makes the system faster and more predictable, while reserving AI for tasks that benefit from language generation, such as suggesting improved alt text.

We also learned that AI must be integrated carefully when working with accessibility-related systems. The quality of generated alt text depended heavily on the context available in the surrounding code, such as image file names or nearby descriptive content. When context was limited, the suggestions were often generic, which reinforced that AI should act as an assistive tool rather than an automated replacement for human judgment. Developers still need to review suggestions to ensure that alt text reflects the actual purpose and meaning of the image in the interface.

Looking forward, there are several ways this project could be extended.
One improvement would be deeper integration into development environments so that accessibility issues appear more naturally during coding. For example, the tool could be implemented as a Visual Studio Code extension or published as an npm package that integrates directly with existing developer workflows. Another improvement would be providing a clearer visual interface for the generated reports. A web based viewer or editor integration could present flagged issues similarly to how linters display warnings and errors within the code editor. This type of interface would make accessibility feedback easier to understand and more actionable while developers are actively writing code.
