Student Projects

Team: Krish Jain & Anna Spiro

AI and Job Displacement

This project examines how public discourse around automation and AI-driven job displacement has evolved over time, and how it differs across stakeholder groups. The corpus spans historical newspaper archives (Newspaper Navigator, HRTC corpus), Reddit, consulting firm statements on AI, and labor union statements on AI.

Research Questions:

  1. How do historical views on automation contrast with contemporary views on AI, and how have ideas evolved with time?
  2. What are the differences between current corporate and worker views of AI, and what metaphors or historical examples does each group reach for to make their case?

Readings / Methods:


Team: Nicholas Batchelder, Kevin Wu & Kevin Zhang

Analyzing Emotional Dependency in LLM Discourse

This project investigates whether LLMs facilitate emotional dependency in users, and how this affects users’ perceptions of human social relationships. The team scrapes confessional Reddit posts (e.g., from r/MyBoyfriendIsAI) to study whether AI consistency, sycophancy, and judgment-free interaction lead to measurable social withdrawal from human relationships.

Research Question: To what extent do LLMs facilitate emotional dependency, and how does this substitution affect a user’s perspective of human social structures?

Readings / Methods:


Team: Khushi Khandelwal

Narratives of AI in Art — A Computational Thematic Analysis Framework

This project maps how narratives about AI in art diverge between mainstream media and tech/PR communications, using a machine-in-the-loop hybrid pipeline. The corpus includes news articles (NYT, Guardian, Wired, etc.) and company blog posts / press releases, comparing how each source frames topics like democratization, job displacement, copyright, and artistic integrity.

Research Questions:

  1. Which specific themes (e.g., “Democratization” vs. “Job displacement”) are more prevalent in PR versus news?
  2. How do artist-centered narratives differ from technology-centered narratives in their framing of AI?
  3. Do certain narratives co-occur with specific years or major industry events?

Readings / Methods:


Team: Dean Light, Ann Baturytski & Marx Wang

Paper Theme Gym

This project aims to make computational thematic analysis accessible to social scientists by building “Paper Theme Gym” (PTG), a configurable tool that uses LLM role-playing (Coder, Code Aggregator, Code Reviewer, Theme Coder, Theme Aggregator agents) to perform reflexive thematic analysis at scale. PTG segments documents into sub-populations and visualizes results using a Grammar of Graphics layer. The running example compares how CMU vs. UW researchers describe AI in their papers.

Readings / Methods:


Team: Imani Finkley

AI Autobiographies

This project studies how large language models craft autobiographical texts, and whether the stories they tell about themselves depend on the intended audience. Finkley prompted four LLMs (Claude Sonnet 4.5, Gemini 2.5 Flash, GPT-4o, OLMo3-32B) to write 250-word autobiographies targeting different audiences (non-AI user, AI user, AI engineer, CEO of AI company, AI itself), producing a corpus of 240 stories. Analysis focuses on agency (authorial voice), theme (e.g., tool, savior, threat, trickster), and narrative structure (autobiographer type: Memoirist, Dramatic, Philosopher).
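The 240-story corpus follows from the experimental grid: 4 models x 5 audiences x 12 samples per condition. A minimal sketch of that grid, with illustrative prompt wording (the project's actual prompt may differ):

```python
from itertools import product

MODELS = ["Claude Sonnet 4.5", "Gemini 2.5 Flash", "GPT-4o", "OLMo3-32B"]
AUDIENCES = ["non-AI user", "AI user", "AI engineer", "CEO of AI company", "AI itself"]
N_SAMPLES = 12  # 4 models x 5 audiences x 12 samples = 240 stories

def make_prompt(audience: str) -> str:
    # Illustrative wording, not the project's actual prompt.
    return f"Write a 250-word autobiography addressed to: {audience}."

# One entry per generation run: (model, audience, prompt).
runs = [(m, a, make_prompt(a))
        for m, a in product(MODELS, AUDIENCES)
        for _ in range(N_SAMPLES)]
print(len(runs))  # 240
```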

Research Questions:

  1. How do LLMs structure autobiographical texts — what type of autobiographer are they?
  2. What are the dominant themes of AI-generated autobiographies, and how do they relate to popular AI narratives (Frankenstein, savior, threat, tool, mirror, etc.)?

Readings / Methods:


Team: Andrew Shaw, Shreya Sathyanarayanan & Yash Mishra

Moral Values and Cultural Narratives about AI

This project investigates whether different cultural attitudes toward AI correlate with different moral values, and whether cross-national differences in AI sentiment are grounded in philosophical views about personhood. The team analyzes AI-related tweets from three countries, extracting moral values and sentiment to find correlations and connect findings to philosophical literature.

Hypothesis: Different cultural attitudes toward AI stem from different philosophical views — and anxieties — about the nature and importance of personhood.

Readings / Methods:


Team: Ge Yan

Humanoid Robots and the Stories We Believe

This project computationally analyzes public discourse around "fully autonomous" humanoid robots. Using stance detection and framing analysis on Twitter/X posts and media coverage, it examines the gap between expert and public perceptions of autonomy claims.

Research Questions / Hypotheses:

  • H1: Humanoid robot posts contain higher rates of mind-attribution language (“decide”, “understand”, “want”) than industrial/research robot posts.
  • H2: Posts without technical context produce more polarized reply stances than those with technical context.
  • H3: Skeptical replies contain higher rates of physics/interaction language (“grasp”, “slip”, “friction”), consistent with Moravec’s paradox.
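The rate comparisons in H1 and H3 amount to measuring how often words from a fixed lexicon appear per token. A minimal sketch, where the word lists and example posts are illustrative assumptions rather than the project's actual lexicons or data:

```python
import re

# Illustrative mind-attribution lexicon (H1); the project's actual
# word lists may differ and would include more inflected forms.
MIND_ATTRIBUTION = {"decide", "decides", "understand", "understands", "want", "wants"}

def lexicon_rate(text: str, lexicon: set[str]) -> float:
    """Fraction of tokens in `text` that appear in `lexicon`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in lexicon) / len(tokens)

# Toy stand-ins for scraped posts.
humanoid_posts = ["It decides on its own and wants to help people."]
industrial_posts = ["The arm welds panels on the assembly line."]

h_rate = sum(lexicon_rate(p, MIND_ATTRIBUTION) for p in humanoid_posts) / len(humanoid_posts)
i_rate = sum(lexicon_rate(p, MIND_ATTRIBUTION) for p in industrial_posts) / len(industrial_posts)
print(h_rate > i_rate)  # H1 predicts True on the real corpus
```

H3 would follow the same pattern with a physics/interaction lexicon ("grasp", "slip", "friction") applied to skeptical replies.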

Readings / Methods:


Team: Ran Tang

A Social Network Analysis of Moltbook

This project applies Social Network Analysis (SNA) to Moltbook — an AI agent community — to investigate whether AI agents can self-organize into community structures similar to those found in human online networks. Data was collected via API for posts and comments from January 27–31. Network metrics (modularity, clustering coefficient, reciprocity) are used to compare AI agent interaction patterns to known human social network benchmarks (e.g., Reddit’s 30–50% reciprocity rate).
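Of the metrics above, reciprocity is simple enough to sketch in pure Python (modularity and clustering would typically use a library such as networkx). The edge list here is a toy stand-in for the Moltbook reply graph:

```python
def reciprocity(edges: list[tuple[str, str]]) -> float:
    """Fraction of directed edges (a, b) whose reverse (b, a) also exists."""
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    reciprocated = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
    return reciprocated / len(edge_set)

# Edge (a, b) means agent a replied or commented to agent b.
# Toy data; the real graph would come from the site's API.
toy_edges = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c")]
print(reciprocity(toy_edges))  # 0.8 — compare against Reddit's reported 30-50%
```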

Research Questions:

  1. Do AI agents naturally form tight-knit sub-communities, or is their interaction random?
  2. To what degree are interactions among AI agents reciprocated?

Readings / Methods:


Team: Jay Dharmadhikari

Sycophancy as a Function of Narrative

This project investigates how narrative context (user persona, model role, and prompt framing) shapes sycophantic behavior in LLMs. The hypothesis is that most sycophancy can be detected or eliminated by controlling the narrative — i.e., by systematically varying the “social situation” the LLM is placed in. The broader motivation is that studying which social situations incentivize AI sycophancy also illuminates when humans are incentivized to be sycophantic.

Research Question / Hypothesis: Sycophancy is a function of narrative and model — most sycophantic behavior can be controlled by manipulating user persona, model role, and prompt framing.

Readings / Methods:


Team: Jack Zhang

From Insights to Artifacts: Adapting Advanced Methodologies

This project empirically measures whether ML research has shifted from epistemic discoveries and insights toward legible, immediately reusable releases (artifacts) over time. Motivated by the exponential growth of ML conferences (e.g., NeurIPS growing from ~1,500 submissions in 2012 to over 21,000 in 2025) and severely constrained reviewer bandwidth, Jack analyzes 10,000+ papers (2012–2025) by extracting, classifying, and tracking the linguistic framing and citation impact of “contribution claims.”

Research Question / Hypothesis: Papers have shifted from epistemic discoveries and insights toward artifacts (datasets, methods, tools), and the community disproportionately rewards artifact contributions over knowledge contributions.

Readings / Methods:

  • Cheng et al. (ACM FAccT 2025). Metaphor elicitation and semantic clustering — collected 12,000+ open-text metaphors, clustered via embeddings, and scored on latent dimensions (competence, warmth, agency) using LMs.
  • Pramanick et al. Built a hierarchical taxonomy for NLP paper contributions (Artifacts vs. Knowledge), manually annotated 2,000 abstracts, and trained SciBERT to classify sentences at scale across 29,000 papers.
  • Mosbach et al. (EMNLP 2024). Citation intent graphs as a stretch goal — moving beyond raw citation counts to measure how papers are used (foundational tool vs. passing context).
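The artifact-vs-knowledge distinction from the Pramanick et al. reading can be illustrated with a naive keyword baseline. The cue lists below are illustrative assumptions; the actual pipeline trains SciBERT on manually annotated abstracts rather than matching keywords:

```python
# Hypothetical cue phrases, not the taxonomy from the reading.
ARTIFACT_CUES = ("we release", "dataset", "benchmark", "toolkit", "our code")
KNOWLEDGE_CUES = ("we find", "we show that", "analysis reveals", "insight")

def label_claim(sentence: str) -> str:
    """Crude keyword-count baseline for labeling a contribution claim."""
    s = sentence.lower()
    artifact = sum(cue in s for cue in ARTIFACT_CUES)
    knowledge = sum(cue in s for cue in KNOWLEDGE_CUES)
    if artifact > knowledge:
        return "artifact"
    if knowledge > artifact:
        return "knowledge"
    return "unclear"

print(label_claim("We release a new benchmark dataset for QA."))    # artifact
print(label_claim("We show that scaling alone explains the gap."))  # knowledge
```

A trained sentence classifier replaces this heuristic precisely because contribution claims rarely announce themselves with fixed phrases.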