Computer Ethics
Administrative
Autumn 2024
Lecture: Tuesdays from 2:30 to 3:20pm in OUG 136
Sections: AA Thursdays 2:30 to 3:20pm in MGH 058, AB Thursdays 3:30 to 4:20pm in MGH 295
Instructor: Rachel Sobel, rs@cs.washington.edu
Teaching assistant: Violet Monserate, vimons@cs.washington.edu
Office hours: By appointment (please do reach out!)
Please do not hesitate to write to the instructor about any accommodations or questions related to readings or course material.
Description
Be it social-media platforms, self-driving cars, or big data systems, the code Allen School students write—the decisions they make—influences the world in which it operates. This is a survey course about those influences and ways to think about them. We recognize that "the devil is in the implementation details."
The course is divided into two parts: In the first part, we survey historical and local issues in tech, particularly those concerning data. We then engage with critical perspectives from disciplines such as machine ethics and science and technology studies as a framework for students to articulate their own beliefs concerning these systems. In the second part, we apply these perspectives to urgent issues in applied technologies; see the schedule for the topics we plan to consider this quarter.
Throughout, students hone their critical reading and discussion skills, preparing them for a life-long practice of grappling with the—often unanticipated—consequences of innovation.
We approach topics such as: AI ethics, data bias, utopianism, governance, inclusion, facial recognition, classification, privacy, automation, platforms, social media moderation, identity, fairness, power and control, activism, and environmental impact.
We hope you will find this course an essential part of your Allen School education, despite (or because of!) how different it is from most CSE courses.
Objectives
By the end of this course students will:
- Obtain awareness of issues arising from the use of computers in contemporary sociotechnical systems
- Articulate technological harms to individuals and groups in the language of critical perspectives
- Appreciate how historical, cultural, economic, and political factors contribute to how technologies are built and designed
- View themselves as both subjects and creators of sociotechnical systems
- Understand and articulate complex arguments pertaining to values in technology
- Recognize the diversity of stakeholders and views when considering a technology
- Amplify voices and values not traditionally considered in technological development (e.g., in design processes)
- Re-imagine and speculate alternative histories and futures for using and coexisting with computers
Schedule
(may change up to a week in advance)
Introduction: A Brief History
Thu, Sep 26 Groundwork
Required:
-
Read "Why the Luddites Matter" [url] by Z.M.L., 2018 (5 pages).
“That which makes the Luddites so strange, so radical, and so dangerous is not that they wanted everyone to go back to living in caves (they didn’t want that), but that they thought that those who would be impacted by a new technology deserved a voice in how it was being deployed.”
-
Read "Telephone operation was a good career for women. Then it got automated." [pdf] [url] by Dylan Matthews, 2023 (9 pages).
When AT&T moved from human operators to digit dialing, a huge number of jobs were automated away. What happened to the people who had those jobs, and to those who might have gone into those careers a few years later? And what were the economic benefits of this huge transition?
-
Read "The Anti-Digit Dialing League" [pdf] [url] by John Wilcock (1 page).
Optional:
-
Read "Organizational Frictions and Increasing Returns to Automation: Lessons from AT&T in the Twentieth Century" [pdf] by James Feigenbaum et al., 2021 (26 pages).
Digit dialing was invented a hundred years before the job of telephone operator was automated away. In this interesting recent paper, Feigenbaum and Gross dive into the organizational and technological reasons for the delay.
Who's behind the keyboard?
Tue, Oct 01 Groundwork
Required:
-
Read chapter one from "Artificial Unintelligence: How Computers Misunderstand the World" [pdf] [url] by Meredith Broussard, 2018.
In her introductory chapter, computer scientist and data journalist Broussard lays out both her love of and skepticism for computing technology. You might find a bit of yourself in her.
-
Read "Be Careful What You Code For" [url] by danah boyd, 2016 (2 pages).
danah boyd, a researcher at Microsoft and at Data & Society, highlights just how few guardrails there are for developers, from the consequences of algorithmic bias to the implications of crazy metaphors. She offers a call to action, solutions, and ample evidence for considering the implications of code. We highly recommend chasing down some of the links provided.
Optional:
-
Read "How to read a book" [pdf] by Paul N. Edwards, 2000 (8 pages).
Here, an adroit scholar walks through some tips and tricks for reading more effectively. He hits the major points and includes some bonus tips, like where to best organize your reading notes. This is an invaluable resource as our course’s weekly reading load begins to increase. Skim now, but revisit throughout the course.
-
Read "Discussion Leading" [pdf] [url] by John Rickford et al., 2007.
This resource from the Stanford Teaching Commons offers an in-depth analysis of how to have better discussions. Their recommendations, from setting an agenda to asking questions and increasing discussant engagement, are all part of creating a better climate for discourse. For leading discussions in this class and beyond, it’s worth a read.
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "Tackling Climate Change with Machine Learning" [pdf] [url] by David Rolnick et al., 2019 (97 pages).
Also check out their website.
-
Check out "Green AI" [pdf] [url] by Roy Schwartz et al., 2019.
-
Check out "Open letter to Jeff Bezos and the Amazon Board of Directors" [url] by Amazon Employees for Climate Justice, 2019.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- What do you want to get out of our class discussions?
- Do you feel able to change outcomes of how tech affects society?
- Preview the course project, part zero.
Deconstructing a Data System
Thu, Oct 03 Data
Required:
-
Read "At Amazon's New Checkout-Free Store, Shopping Feels Like Shoplifting" [pdf] [url] by Jake Bullinger, 2018 (2 pages).
Jake Bullinger describes the experiences of some of the first shoppers at the checkout-free Amazon Go store and considers its economic implications. As you’re reading the article, look for possible tensions, critiques, or questions it raises. Also think about the ways in which data is used by this store.
-
Read "In Amazon Go, no one thinks I'm stealing" [url] by Ashlee Clark Thompson (2 pages).
Ashlee Clark Thompson reflects on her experience of shopping in the Amazon Go store: “Amazon Go isn’t going to fix implicit bias or remove the years of conditioning under which I’ve operated. But in the Amazon Go store, everyone is just a shopper, an opportunity for the retail giant to test technology, learn about our habits and make some money.”
-
Read chapter four from "Artificial Unintelligence: How Computers Misunderstand the World" [pdf] [url] by Meredith Broussard, 2018.
While focusing specifically on data journalism, Broussard ably explains the uses to which data can be put to explain the world. In particular, read it to understand the different approaches people take to “challenge false claims about technology.”
Optional:
-
Read "Inside Amazon Go, a Store of the Future" [url] by Nick Wingfield, 2018 (1 page).
A high-level description of the store; it touches on themes like convenience, how this tech could affect jobs, and the vagueness of plans surrounding the system at the time. Look at the photos if you don’t visit the Amazon Go store.
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "The Loneliest Grocery" [url] by Joshua McNichols.
-
Check out ""Good" isn't good enough" [pdf] by Ben Green, 2019 (4 pages).
This paper, by a postdoc at the AI Now Institute and formerly of MIT, summarizes many of the themes we touch on throughout the quarter. It synthesizes many of the arguments we cover and applies them as a call for action to data scientists in particular. The author’s arguments are equally relevant to computer scientists.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Today’s a light reading day, but day four isn’t, so we recommend you get started on that.
- How do the readings differ in their views of the Amazon Go store? Did the Amazonians consider each of these perspectives? Should they have? How might you classify them?
- Have you visited an Amazon Go store? If so, how did you feel? If not, how do you think you would feel?
Conceptions of Data
Tue, Oct 08 Data
Required:
-
Read "Chapter 1: Conceptualising Data" [pdf] by Rob Kitchin, 2014 (25 pages).
The introduction to the book describes ways of thinking about what data is (ontologies) and goes on to discuss ethical and political considerations of data. It proposes the framework of “data assemblages” and examines how our conceptions of data shape the data themselves.
-
Read "On Being a Data Skeptic" [pdf] by Cathy O'Neil, 2014 (19 pages).
This rapid-fire, well-articulated article is about the advantages and perils of data science, with ample advice and examples to advocate for why those who use data ought to be this special kind of “skeptical”.
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "The era of blind faith in big data must end" [url] by Cathy O'Neil.
-
Check out "How ImageNet Roulette, a Viral Art Project That Exposed Facial Recognition's Biases, Is Changing Minds About AI" [url] by Naomi Rea, 2019.
-
Check out "An Investigation of the Facts Behind Columbia’s U.S. News Ranking" [url] by Michael Thaddeus, 2022.
Find the creep of proxies in university rankings in Thaddeus’s own words: “Almost any numerical standard, no matter how closely related to academic merit, becomes a malignant force as soon as universities know that it is the standard. A proxy for merit, rather than merit itself, becomes the goal.”
Before Class:
-
Daily assignment, due at 8pm the night before class:
Use the following questions as a resource to guide your reading. Then respond to a few of them, or to other questions on theme.
- What is the difference between data, information, and knowledge? How are they related?
- How can wider or economic concerns “frame” data? That is, in what sense do data act and how? (Answers might include: as an economic resource, as a form of power or knowledge, etc.) Explain why.
- How have politics or economics influenced how some data have been defined or created?
- What are reasons or incentives for controlling:
- The creation of data
- The access to data
- The standards of data, such as metrics or units
- The means of data collection, such as sensors or know-how
- From “On being a data skeptic” explain “measuring the distortion.”
- What is the relationship between models and proxies? Why are proxies used? Give an example.
- Why might O’Neil have singled out “nerds” and “business people” separately? What do the differences in her comments indicate about how those groups view problems differently? Do you agree?
- Submit a clarifying question which you’d like to discuss in class.
- Finish the course project, part zero by tonight.
"Data is the new oil": data politics
Thu, Oct 10 Data
Required:
-
Read "The world's most valuable resource is no longer oil, but data" [pdf] [url] by The Economist, 2017 (1 page).
A short article that introduces the metaphor that “data is the new oil” which reflects the widely held view that data is now “the world’s most valuable resource”. (And check out the recent FTC filing against Facebook for an example of how this conversation has progressed since 2017!)
-
Read "Do artifacts have politics?" [pdf] [url] by Langdon Winner, 1980 (15 pages).
In this widely cited essay, Langdon Winner makes the case that technologies embody social relations. He argues that we should develop a language for considering technology which focuses not only on it as a tool, or on its use, but also on the meaning of its design and the social arrangements it facilitates. Winner asks: “what, after all, does modern technology make possible or necessary in political life?” Consider this while you read the piece.
-
Read "Anatomy of an AI System" [pdf] [url] by Kate Crawford et al., 2018 (14 pages).
Kate Crawford and Vladan Joler consider the production of the Amazon Echo Dot with an astounding breadth by mapping the human labor, data, and material resources required to build it. Kate Crawford is a co-founder of the AI Now institute at NYU which is breaking ground on many questions relevant to the social implications of AI.
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "Google Will Not Renew Pentagon Contract That Upset Employees" [url] by Daisuke Wakabayashi et al., 2018.
-
Check out "The Societal Implications of Nanotechnology" [url] by Langdon Winner, 2003 (5 pages).
Winner’s testimony before Congress.
-
Check out "How ICE Picks Its Targets in the Surveillance Age" [pdf] [url] by McKenzie Funk, 2019.
Consider reading this harrowing, and physically proximal, telling of the real-life implications of some of these data systems.
-
Check out "An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System" [pdf] [url] by Chelsea Barabas et al., 2017 (8 pages).
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Pick one aspect of “Anatomy of an AI System” and discuss it with someone outside of class. In a couple of sentences, what did you talk about?
- Did any aspect of the Amazon Echo AI system surprise or interest you? Which aspect?
- What conclusions can we draw from the tomato picking example in “Do Artifacts Have Politics?”
Operationalization and Classification
Tue, Oct 15 Data
Required:
-
Read introduction (pg. 1 - 16; 31 - 32; 17 pages total) from "Sorting things out: classification and its consequences" [pdf] by Geoffrey C. Bowker et al., 1999.
Sorting Things Out is a classic text on classification and standardization. The introduction discusses the importance of attending to the ubiquity of classifications and to the processes that generate, standardize, and enforce them. It also looks at how classification has caused harm and how the processes which create standards can at times yield an inferior solution. The authors take an expansive view of classification, so be prepared to think about the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) or VHS vs. Betamax.
-
Read "Do algorithms reveal sexual orientation or just expose our stereotypes?" [url] by Blaise Aguera y Arcas et al., 2018 (5 pages).
This essay sets out to debunk a scientific study which claimed to have built a “sexual orientation detector” using machine learning. “Do algorithms reveal sexual orientation or just expose our stereotypes?” presents a thorough analysis of the offending paper and shows that one way to debunk “junk science” is to validate the study’s results against some other baseline. In this case, the authors use Amazon’s Mechanical Turk. As you read, think about what one can actually learn from a face.
Optional:
-
Read chapter one (pg. 46 - 50 from "Infrastructure" on) from "Sorting things out: classification and its consequences" [pdf] by Geoffrey C. Bowker et al., 1999 (377 pages).
-
Explore "Datasets Have Worldviews" [url] by Dylan Baker, 2022.
Check out this interactive demo about classification.
-
Read "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images." [url] by Yilun Wang et al., 2017.
This is the article critiqued by “Do algorithms…”
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "Drawing a Line" [url] by Tableau Employee Ethics Alliance, 2019.
-
Check out "Engaging the ethics of data science in practice" [pdf] [url] by Solon Barocas et al., 2017 (3 pages).
-
Check out "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI" [pdf] [url] by Philip E. Agre, 1997 (28 pages).
Agre, an AI researcher in the 1990s, convincingly walks the line between a critical perspective and that of a practitioner, evoking why practitioners may bristle at critique.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- “Do algorithms…” claims to focus on the underlying “science.” Why do you think the authors did so? Why was this distinction important?
- What strategies did “Do algorithms…” use to make its argument?
- The authors of “Do algorithms…” conclude that the paper they examined was misguided. Drawing on the discussion of classification in “Sorting Things Out,” think of another misguided method (any method, scientific or otherwise, and not necessarily one related to the “Do Algorithms” paper) and describe it.
Follow the Money
Thu, Oct 17 Critical Perspectives
Required:
-
Read Prologue, Introduction, and Chapter One of "Subprime Attention Crisis" [pdf] by Tim Hwang, 2020 (24 pages).
Tim Hwang investigates the funding model at the root of much of the internet, showing how big tech financializes attention and casting a light on the potential instability at the heart of the online ads economy. You might also enjoy his recent podcast interview with Ezra Klein, which covers many of the topics explored by the book.
-
Read "Amazon’s financial shell game let it create an “impossible” monopoly" [url] by Cory Doctorow, 2024.
Doctorow examines FTC filings against Amazon to get a close look at the internals of Amazon’s financials, and ponders the implications of Amazon’s monopolistic position in the business of online retail.
-
Read "Less money and more fear: what’s going on with tech" [pdf] [url] by Elizabeth Lopatto, 2023 (6 pages).
This quick article explores the funding structure of VC-driven tech, with particular attention to the role of low interest rates and unprofitable firms. Many observers attribute the success of gig economy companies like Uber to the decade of low interest rates, allowing them to borrow money for ‘free’ without ever turning a profit.
Optional:
-
Read Chapter Six and Epilogue of "Subprime Attention Crisis" [pdf] by Tim Hwang, 2020 (21 pages).
The end of Hwang’s book explores where we go from here, offering a variety of solutions to the problems the book identifies with the ads ecosystem.
-
Read "Europe's hidden security crisis" [pdf] [url] by Johnny Ryan and Wolfie Christl, 2023 (24 pages).
A fascinating look at how Real-Time Bidding creates security vulnerabilities for European leaders and security personnel.
-
Read "The Revolution that Died on Its Way to Dinner" [pdf] [url] by Joe Fassler, 2024 (14 pages).
This New York Times opinion essay examines the cultivated meat industry, from the heights of hype to its downfall as startup funding declined. Cultivated meat isn’t a computing industry, but this piece is an interesting illustration of the power of funding, interest rates, and attitudes in what projects get funded, when.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Why is targeted advertising “significantly more expensive than nontargeted advertising”?
- How did low interest rates impact the funding model of tech startups? What kind of impacts do you think this might have had on the corporate ecosystem we live in?
- What are the dynamics of the advertising marketplace Tim Hwang describes? What does he mean when he talks about its “resemblance to financial markets”? What does this resemblance entail?
- Think about your own internet usage. In a given day, who is making money off of your actions, and how? Who is spending money, and how? Think about server costs, ad budgets, ad clicks, computational complexity, etc.
Data Feminism
Tue, Oct 22 Critical Perspectives
Required:
-
Read introduction and chapter one from "Invisible Women: Data Bias in a World Designed for Men" [pdf] by Caroline Criado-Perez, 2019 (25 pages).
Criado-Perez investigates the gender gap in big data, describing the adverse effects for women caused by holes in data gathering, lack of consideration of women’s distinct needs, and more.
-
Read "Being Glue" [url] by Tanya Reilly.
Staff engineer Tanya Reilly discusses the “glue” work that holds technical teams together: work that is expected of senior software engineers, but often unrewarded or even punished in junior devs. You can also watch a video of this talk if you prefer.
-
Read "Patriarchy, technology, and conceptions of skill" [pdf] by Judy Wajcman, 1991 (16 pages).
When considering the future of work, one question that’s often raised is how technology negatively impacts the amount of “skill” required to complete a task, aka “deskilling”. In “Patriarchy, Technology, and Conceptions of Skill”, Judy Wajcman questions the underlying assumption that skill is entirely technically derived. Instead, she considers how men’s historical control over technology in the workplace has extensively influenced the ideological and material conceptions of skill, thus concluding in part that “definitions of skill, then, can have more to do with ideological and social constructions than with technical competencies which are possessed by men and not by women”.
Optional:
-
Read "Introduction: Why Data Science Needs Feminism" [pdf] [url] by Catherine D'Ignazio et al., 2020.
Through a historical examination of women in technology, D’Ignazio and Klein, both leading scholars in the field, introduce us to feminism and its role in shaping technologies. This is just a taste and the whole book is worth a read.
-
Read "Technically Female: Women, Machines, and Hyperemployment" [url] by Helen Hester, 2016 (10 pages).
This essay surveys a history of “electronic secretaries” to frame relevant questions of today’s tech, such as: Why are AI assistants so often feminized? We question what it means for technology to “do gender,” and in service of which “imagined technology user.” Yet we can turn that question around and ask who “does technology,” and how labor gets redistributed with the introduction of new software and AI assistants. Ultimately, Hester asks us to confront questions concerning lived experiences of gender and how it is programmed, productive vs. reproductive labor, and the (dis)advantages of automation.
-
Read "Testosterone rex: unmaking the myths of our gendered minds" [url] by Cordelia Fine, 2017.
If you were looking to throw the book at someone who continues to insist that sex differences are sufficient to explain gender differences, this would be that book. Take her word for it: “Every influence is modest, made up of countless small instances of its kind. That’s why everything—a doll packaged in pink, a sexist joke, a male-only expert panel—can seem trivial, of intangible effect. But that’s exactly why calling out even seemingly minor points of sexism matters. It all adds up, and if no one sweats the small stuff, the big stuff will never change.” And it’s funny, too: “If we stop believing that boys and men are emotional cripples and fly-by-night Casanovas who just want sex, and start believing that they’re full, complete human beings who have emotional and relational needs, imagine what might happen.”
-
Read "Gender Equality Paradox Monkey Business: Or, How to Tell Spurious Causal Stories about Nation-Level Achievement by Women in STEM" [url], 2020 (5 pages).
A widely known study argued that countries with more gender equity in society have fewer women studying STEM, but this article accompanies a peer-reviewed publication casting doubt on the study’s analysis – a scholarly back-and-forth also playing out in the blogosphere.
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "Reflecting on one very, very strange year at Uber" [url] by Susan Fowler, 2017 (4 pages).
This blog post contributed to the resignation of Uber’s CEO, Travis Kalanick.
-
Check out "Google Walkout: Employees Stage Protest Over Handling of Sexual Harassment" [url] by Daisuke Wakabayashi et al., 2018.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Why is feminism relevant to data science?
- Wajcman writes about the “way in which capitalist social relations affect technological design”. Reflect on how social relations more broadly may be affecting the team dynamics described in Reilly’s talk “Being Glue”, or on some of the many sociotechnical examples in Criado-Perez’s “Invisible Women”. How and why do you think these situations have prioritized some people’s lives and roles over others?
- Finish the course project, part one by tonight.
Latent Identity and Privacy
Thu, Oct 24 Critical Perspectives
Required:
-
Read "It's Not Privacy, and It's Not Fair" [pdf] by Cynthia Dwork et al., 2013 (6 pages).
This law review paper is the missing link between the concepts of control and privacy, as represented by the (optional) Deleuze piece and the Barocas piece, respectively.
-
Read "Think You're Discreet Online? Think Again" [pdf] [url] by Zeynep Tufekci, 2019 (2 pages).
How ought we make sense of questions of privacy, classification, tracking, and surveillance in the era of big data and computational inference? Zeynep Tufekci asks us to consider these questions by looking at examples of the collective implications of a “privacy-compromised world”.
-
Read "Big data's end run around procedural privacy protections" [pdf] [url] by Solon Barocas et al., 2014 (2 pages).
Solon Barocas and Helen Nissenbaum, both well-known AI ethics scholars, consider “why the increasingly common practice of vacuuming up innocuous bits of data may not be quite so innocent: who knows what inferences might be drawn on the basis of which bits?”
Optional:
-
Read "We Need to Take Back Our Privacy" [pdf] by Zeynep Tufekci, 2022.
Through the lens of reproductive rights, we again see Tufekci turn to fundamental questions of privacy and surveillance—rights historically challenged by the introduction of new technologies.
-
Watch "Deleuze the Societies of Control" [url].
This video highlights some significant passages in “Postscript” and explains what’s going on by connecting it back to contemporary questions of control. Only the first 10 minutes actually cover the essay; the next 12 or so offer commentary, posing relevant questions and extrapolating “Postscript”’s ideas into the future.
-
Read "Postscript on the Societies of Control" [pdf] [url] by Gilles Deleuze, 1992 (4 pages).
“[J]ust as the corporation replaces the factory, perpetual training tends to replace the school, and continuous control to replace the examination. Which is the surest way of delivering the school over to the corporation.” Deleuze considers the technologies of power, and what it means to be in a “control state”. One wonders what he would have to say about this virtual world.
Feeling motivated? Here are a few actions other people have taken in response to today's themes.
-
Check out "The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy" [url] by Jemio et al.
-
Check out "Pregnancy Tracking with Garmin" [url], 2021.
-
Check out "Why Hong Kongers Are Toppling Lampposts" [url] by Sidney Fussell, 2019.
-
Check out "neveragain.tech" [url] by Leigh Honeywell, 2016.
-
Check out "Cegłowski Senate Testimony" [url] by Maciej Cegłowski, 2019 (10 pages).
-
Check out "How the Federal Government Buys Our Cell Phone Location Data" [url] by Bennett Cyphers.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Identify a new system or one we’ve discussed in class that makes decisions which affect people’s lives in some meaningful way. Describe it and then answer the following questions:
- Does this system rely on data collection to make these decisions?
- Where does this information come from?
- What’s the consent model?
- What questions related to individual privacy does it raise?
Governance
Tue, Oct 29 Critical Perspectives
Required:
-
Read the excerpt from chapter one ("STORIES CHANGED BY AN ALGORITHM", 12 pages), the Interlude (4 pages), and the excerpt from chapter two (from "WHICH ALGORITHMS ARE HIGH STAKES" to the end, 20 pages; 36 pages total) from "Voices in the code" [pdf] by David Robinson, 2022.
Robinson delivers a poignant and approachable introduction to the question of how to govern AI and other algorithmic decision systems by both surveying the field and opening a case study on kidney transplants.
-
Read the introduction from "Automating Inequality" [pdf] by Virginia Eubanks, 2017 (13 pages).
In the introduction to her acclaimed book, Eubanks captures the personal consequences of poorly designed decision systems, particularly on poor and working class people.
Optional:
-
Read "Fairness and Machine Learning" [url] by Solon Barocas et al., 2019.
Scholars Barocas, Narayanan, and Hardt collaborate on one of the most readable and insightful texts to address questions of fairness in AI, ML, and related decision systems.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Choose one of the four governance strategies Robinson lists in his second chapter. Identify a technology or system (one not mentioned in the text) that you think could benefit from this kind of governance. Why?
Platform or Publisher?
Thu, Oct 31 Misinformation and Platforms
Required:
-
Read chapter one (pg. 1 - 14) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" [pdf] by Tarleton Gillespie, 2018 (288 pages).
Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. Beginning with an example of content moderation on Facebook, he makes the case that content moderation is an essential element of these companies and that the act of providing content to users comes with many value-laden decisions, taking up myths of openness, free speech, neutrality, and more.
-
Read "The Great Delusion Behind Twitter" [pdf] [url] by Ezra Klein, 2022 (4 pages).
Here the journalist Ezra Klein attempts to boil down the issue of regulating internet speech specifically as it pertains to Twitter—is it a town square? What metaphor applies?
Optional:
-
Read "Split Screen: How Different Are Americans’ Facebook Feeds? – The Markup" [url] by Sam Morris et al.
Use this page, designed by the data journalism publication, the Markup, to get a sense of different filter bubbles on Facebook. Try out a couple different options to see what different groups of people are seeing right now.
-
Read "Why Facebook Can't Fix Itself" [url] by Andrew Marantz, 2020 (5 pages).
Here we get a close look into the implementation of content moderation strategies at Facebook. Pay attention to how Gillespie’s arguments apply.
-
Read this Mastodon thread [url] by Tarleton Gillespie, 2023.
In a Mastodon thread, Gillespie, of Custodians of the Internet, draws out the relationship between recommendation algorithms and content moderation, their cumulative harms, and the role of the courts vs. Congress with respect to the section 230 case appearing at the Supreme Court.
-
Read introduction (9 pages) from "Kill All Normies" [pdf] [url] by Angela Nagle, 2017.
This does a better job of describing some of the thorny “grey” areas and mechanisms for radicalization that we will otherwise discuss on the platform days, and it is an illustrative antidote to the clearer-cut “malicious” content mentioned in the other pieces.
-
Read "Inside Nextdoor's "Karen problem"" [url] by Makena Kelly, 2020.
Notice how content moderation can have racially disparate impacts.
-
Watch "The Moderators" [url] by Adrian Chen et al.
Watch this 20-minute documentary on content moderators, but beware of graphic content.
-
Read "Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy" [pdf] [url] by Nancy Fraser, 1990 (56 pages).
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
-
Check out "Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads" [url] by The New York Times, 2019.
-
Check out "“So You Won't Take Down Lies?”: AOC Blasts Mark Zuckerberg in Testy House Hearing" [url] by Alison Durkee.
Also see how she solicited the public for questions to ask the CEO.
-
Check out "A Reckoning at Facebook" [url] by Nicholas Thompson, 2018.
-
Check out ""I Have Blood On My Hands": A Whistleblower Says Facebook Ignored Global Political Manipulation" [url] by Craig Silverman et al., 2020 (6 pages).
This article quotes the internal Facebook memo mentioned in the Marantz piece.
-
Check out "When War Struck, Ukraine Turned to Telegram" [url] by Matt Burgess, 2022.
Before Class:
-
Daily assignment, due at 8pm the night before class:
In a couple paragraphs address the following: What are platforms? Why does this term matter so much? Who are their stakeholders?
Content Moderation Algorithms and Free Speech
Tue, Nov 05 Misinformation and Platforms
Required:
-
Read "It's the (Democracy-Poisoning) Golden Age of Free Speech" [pdf] [url] by Zeynep Tufekci, 2018 (4 pages).
Zeynep Tufekci considers how the power to censor functions on our oversaturated social networks, and the role of misinformation and the attention economy in this. The article provides striking clarity to issues we collectively face on these platforms.
-
Read "How social media could teach us to be better citizens" [pdf] by Ethan Zuckerman, 2022 (5 pages).
Zuckerman comments on historical aspects of social media, connecting to the mid-20th century, asking what social media does for us.
-
Read Automatic Detection from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" [pdf] by Tarleton Gillespie, 2018 (14 pages).
Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. In this short excerpt from Chapter 4, “Three Imperfect Solutions to the Problem of Scale”, Gillespie considers algorithmic “automatic detection” techniques for content moderation.
Optional:
-
Read "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" [pdf] by Tarleton Gillespie, 2018 (37 pages).
Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. In Chapter 4, “Three Imperfect Solutions to the Problem of Scale”, Gillespie considers several models for content moderation, one of which is algorithmic “automatic detection” techniques.
-
Read chapter 17 from "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World" [pdf] [url] by Cade Metz, 2021.
In this chapter of his book on the rise of neural networks, Metz, a veteran journalist of Silicon Valley, concisely describes the claims which some make about those tools. Will AI “solve” content moderation for us? Read on.
-
Read "Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve" [url] by Mike Masnick, 2022.
Will Elon Musk solve free speech? Read on.
-
Read "The Problem of Free Speech in an Age of Disinformation" [url] by Emily Bazelon, 2020 (14 pages).
Read this article for an up-to-date account of free speech and its history in the United States. Notice the similarities between analog and social media. It questions how different governmental approaches to speech may be making us more or less free.
-
Read "How Facebook Hides How Terrible It Is With Hate Speech" [url] by Noah Giansiracusa.
Cut through the claims of AI content moderation and listen to Facebook’s own assessment that they ‘may action as little as 3-5% of hate … on Facebook.’
-
Read "The Risk of Racial Bias in Hate Speech Detection" [pdf] [url] by Maarten Sap et al., 2019 (9 pages).
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
-
Check out "Jigsaw" [url] .
Examine this as a “moonshot”-type project and how it attempts to reimagine people’s internet experiences using machine learning systems (such as Perspective). At the same time, consider it in light of Gillespie’s comments on Jigsaw on page 109 of “Custodians.”
-
Check out "Fighting Neo-Nazis and the Future of Free Expression" [url] , 2017.
Before Class:
-
Daily assignment, due at 8pm the night before class:
How would you change the governance of a social media site you use?
Try to be specific in exactly what problem you are addressing and the trade-offs involved in your proposal.
Your answer might include:
- De-platforming
- More automated moderation
- Hire more moderators
- Regulation…
- e.g. changing liability of publisher
- or otherwise
History Forward and Backward
Thu, Nov 07 Course Project Related Discussions
Required:
-
Read "The Rape Kit's Secret History" [pdf] [url] by Pagan Kennedy, 2020.
[Feel free to skip if triggering or otherwise too upsetting to you.] Here is a nearly-lost history of the effort behind getting a technology widely adopted. It is not a computing example, but it is an excellent example of writing about the context behind something now in our society: the challenges are often not technical, and the leaders often go unrecognized. The whole thing is fascinating, but it is long, so if you need to skim, try to focus on the politics needed for this technology to succeed.
-
Read page 5 onwards from "The Triple Revolution (original)" [pdf] [url] , 1964 (16 pages).
This famous memo to President Lyndon B. Johnson was drafted by the Ad Hoc Committee of the Triple Revolution, composed of notable social activists, scientists, and technologists, among others. It warns of revolutions in social justice, automation, and weapons development, and that if urgent social and economic changes are not made, “the nation will be thrown into unprecedented economic and social disorder.” Consider what happens when our Utopian projects are societal.
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
-
Check out "San Francisco police linked a woman to a crime using DNA from her rape exam, D.A. Boudin says" [url] by Megan Cassidy, 2022.
-
Check out "Somali Workers in Minnesota Force Amazon to Negotiate" [url] by Karen Weise, 2018.
-
Check out "Tech Won't Build It" [pdf] [url] , 2018 (28 pages).
Before Class:
-
Daily assignment, due at 8pm the night before class:
Keep the historical and argument sections of your course project in mind as you do the readings. Then, try to answer these questions or others of your own.
- Both of these readings describe trying to change the world where the challenge was not primarily about creating a new technology.
- Who were the protagonists trying to convince?
- What actions did they take to succeed?
- What was the purpose of each article? What were some techniques used in the writing to achieve this purpose?
- How is writing about the future different from writing about the past?
- In the history of technology, who gets to have their story told? Who doesn’t?
How does it work?
Tue, Nov 12 Automating Humanity
Required:
-
Read "Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models" [pdf] [url] by Shaolei Ren et al, 2023 (10 pages).
In response to criticisms of the high carbon footprint of generative AI technologies, Shaolei Ren and his team investigate the water footprint of generative AI, which is known to have caused tech giants from Google to Microsoft to miss environmental goals in recent years.
-
Read "Generative AI's end-run around copyright won't be resolved by the courts" [pdf] [url] by Arvind Narayanan and Sayash Kapoor, 2024 (7 pages).
Generative AI technologies rely on huge amounts of training data: information created and sourced by real people, who are not compensated for their work when ChatGPT spits it out. Unfortunately, Narayanan and Kapoor argue that current copyright regimes (and those who try to use them to attain labor justice) are poorly suited to addressing the real economic and ethical harms of AI technologies.
-
Read (stop or start skimming at "Language as the key to general AI") "Can machines learn how to behave?" [url] by Blaise Agüera y Arcas, 2022 (21 pages).
Agüera y Arcas, whom we’ve seen elsewhere in class, makes the argument that there is no clear line separating what AI models can understand from what people do. The issue, in his view, is only that we need to teach those models what is and is not ok.
Optional:
-
Read "What if Generative AI turned out to be a Dud" [pdf] [url] by Gary Marcus, 2023 (6 pages).
Drawing on increasing disenchantment with ChatGPT and related technologies in the business world, Gary Marcus explores the consequences that might ensue if these technologies are not all they’ve been hyped up to be.
-
Read "ChatGPT Is Dumber Than You Think" [pdf] [url] by Ian Bogost, 2022 (5 pages).
Bogost playfully unpacks the implications of large language models such as GPT-3 from OpenAI. Do they work as our enthusiasm for them might project?
-
Read "AI's Jurassic Park moment" [url] by Gary Marcus, 2022 (2 pages).
Marcus, in his very abbreviated style, lays out why current large language models built on deep learning fail and why we might need another approach.
-
Read "Language Models Understand Us, Poorly" [pdf] [url] by Jared Moore, 2022 (5 pages).
Jared, who more than anyone else created this course and taught it many times, lays out a couple of views on language understanding. You can also review the slides.
-
Read "Language Models as Agent Models" [url] by Jacob Andreas, 2022 (10 pages).
Andreas makes the case that language models can rightfully be thought of as models of humans, of agents. The philosophical and historical context is a bit lacking here. Take CSE 490A1, the philosophy of AI, to really get into the details.
-
Read "Inside the Heart of ChatGPT’s Darkness" [url] by Gary Marcus, 2023.
Read this for some quite toxic examples of how ChatGPT fails to moderate its language.
-
Read "ChatGPT Is a Blurry JPEG of the Web" [pdf] [url] by Ted Chiang, 2023 (8 pages).
Chiang, in an analogy which tugs at computer science fundamentals, compares language models like ChatGPT to lossy compressions of the web.
-
Read "Semantics derived automatically from language corpora contain human-like biases" [pdf] [url] by Aylin Caliskan et al., 2017 (4 pages).
This is the classic paper which set off the debate about bias in language models. You can also check out Caliskan’s similar 2022 paper finding bias in image models.
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
Before Class:
-
Daily assignment, due at 8pm the night before class:
What kind of issues are there in using large language models? Why might they be or not be a very good idea? In a sense: do they work?
- Finish the course project, part two, by tonight.
Does it matter?
Thu, Nov 14 Automating Humanity
Required:
-
Read "Reclaiming conversation: the power of talk in a digital age" [pdf] by Sherry Turkle, 2015 (21 pages).
As technologists we set out to do the good work of automation. We formalize, experiment, and implement, increasing the space of what computers can do. But, as Turkle asks, is this what we want? Do we want to be replacing each other? She says, “It seemed that we all had a stake in outsourcing the thing we do best—understanding each other, taking care of each other.” Consider what happens when our Utopian projects are personal.
-
Read "Why AI Isn't Going to Make Art" [pdf] [url] by Ted Chiang, 2024 (11 pages).
Noted AI critic (and science fiction author) Ted Chiang weighs in on the question of AI generated art, exploring both technical and philosophical limits. What makes something art? What gives a work meaning? And what are the limits of generative AI technologies, both now and in the future?
Optional:
-
Read "One Day, AI Will Seem as Human as Anyone. What Then?" [pdf] [url] by Joanna Bryson, 2022 (6 pages).
Bryson investigates what it is that people value and whether we should be enlarging the tent to fit in AI. She says, “our values are the way that we hold our societies together.”
-
Read "The Good Life: Lessons from the world's longest scientific study of happiness" [url] by Robert Waldinger et al., 2023.
As one of the authors of this study has said: “We found that the strongest predictors of who not just stayed happy but who was healthy as they went through life - the strongest predictors were the warmth and the quality of their relationships with other people.” What counts as a relationship with other people?
-
Read "The first known chatbot associated death" [url] by Gary Marcus, 2023.
A sad example of something that happened when a chatbot was used for the wrong purposes.
-
Read "My Weekend With an Emotional Support A.I. Companion" [pdf] [url] by Erin Griffith, 2023.
-
Read "ChatGPT Should Not Exist" [pdf] [url] by David Golumbia (4 pages).
Golumbia argues that those replacing human creativity with models such as ChatGPT are nihilists, that all they are doing is spreading meaninglessness.
-
Read "Robots should be slaves" [pdf] by Joanna Bryson, 2009 (10 pages).
This is a more academic version of the argument Bryson makes in “One Day…”
-
Read "Computer power and human reason: From judgment to calculation." [url] by Joseph Weizenbaum, 1976.
Joseph Weizenbaum, writing as a professor at MIT in the 1960s, responds to the thoughtlessness present in programmers of the time, especially in their attempts to replace human communication. Check out a recent JavaScript implementation of his Eliza chatbot.
-
Read "The Most Human Human" [url] by Brian Christian, 2011.
A fun read that gives you entry to Christian’s mindset and to philosophical and computer science ideas as he was preparing to try to come off as a human in a sort of Turing test.
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
-
Check out "'It's Hurting Like Hell': AI Companion Users Are In Crisis, Reporting Sudden Sexual Rejection" [url] by Samantha Cole, 2023.
-
Check out "Voice assistants could ‘hinder children’s social and cognitive development’" [url] by Amelia Hill, 2022.
-
Check out "We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened 👇" [url] , 2023.
-
Check out "Therapy by chatbot? The promise and challenges in using AI for mental health" [url] by Yuki Noguchi, 2023.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Automated therapists have been proposed ever since Weizenbaum released his Eliza program in the 1960s. Do you think this would be a good idea? Why or why not?
Don’t just assume that we will solve all of the ways that language models fail to be like people. (“Soon enough, language models will be indistinguishable from people. People are therapists, so why shouldn’t language models be therapists, too?” is not an interesting argument.)
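It can help to see how shallow Eliza's mechanism actually was: keyword matching and templated echoes, with no understanding anywhere. Below is a minimal Eliza-style sketch in Python; the rules and pronoun reflections are illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# A few illustrative Eliza-style rules: a regex keyword pattern paired with
# a response template that echoes back part of the user's own input.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

# Pronoun "reflection" so echoed fragments read naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    # Return the first rule's template filled with the reflected match.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I need my code to work"))  # → Why do you need your code to work?
```

Keep this mechanical shallowness in mind when weighing whether an automated therapist "understands" its patient, and whether modern language models differ in kind or only in degree.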
Techno-Utopianism
Tue, Nov 19 The Society of Tech
Required:
-
Read "Three Expensive Milliseconds" [pdf] [url] by Paul Krugman, 2014 (1 page).
While reading this article, consider: who is today’s infrastructure for? What are some metrics being optimized for which have led to some (perhaps) unexpected consequences? How are these metrics and systems shaping our world?
-
Read chapter two from "Geek Heresy" [pdf] by Kentaro Toyama, 2015 (21 pages).
The classically trained computer scientist Toyama, in an excerpt from his book about realizing how Utopian tech fails the most marginalized, describes the role of tech as amplifying—whether for good or for bad.
-
Read "Survival of the Richest" [pdf] [url] by Douglas Rushkoff, 2019 (3 pages).
From Douglas Rushkoff, a noted media theorist, read of “Apocalypto – the intolerance for presentism leads us to fantasize a grand finale. “Preppers” stock their underground shelters while the mainstream ponders a zombie apocalypse, all yearning for a simpler life devoid of pings, by any means necessary. Leading scientists – even outspoken atheists – prove they are not immune to the same apocalyptic religiosity in their depictions of “the singularity” and “emergence”, through which human evolution will surrender to that of pure information.” This idea is investigated more deeply in his book “Present Shock”.
Optional:
-
Listen to "Speed" [url] by RadioLab, 2013.
An enlivening radio show which goes over many of the same concerns about speed as the readings.
-
Read "Tech Leaders Justify Project To Create Army Of AI-Controlled Bulletproof Grizzly Bears As Inevitable Part Of Progress" [url] , 2022.
The title explains it all.
-
Read "The Microsoft Provocateur" [pdf] [url] by Ken Auletta, 1997 (14 pages).
This piece shows us the thinking of tech folks at a pivotal time when the nature of the internet was not yet decided. Skim the sections which are just about Myhrvold’s life, since we mean to focus on founders and their utopian visions.
-
Read "Origin Stories of Tech Companies if Their Founders Had Been Women" [url] by Ginny Hogan, 2019 (1 page).
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
Before Class:
-
Daily assignment, due at 8pm the night before class:
Oftentimes in computer science, we consider only the upshot, only the Utopian worlds we might create.
- For one of the technologies mentioned in the readings or another of your choosing, comment on the future worlds we imagine for it (e.g. a Utopia) and complications to that future which might arise in practice (e.g. a Dystopia).
- What are other utopias you have heard described when talking about emerging technologies? Do you find those utopias compelling? Why or why not?
Harder, Faster, Better, Stronger?
Thu, Nov 21 The Society of Tech
Required:
-
Read chapter 1 (pg. 13 - 36; 23 pages total) from "Pressed for time: The acceleration of life in digital capitalism" [pdf] by Judy Wajcman, 2015.
Wajcman investigates the demand for speed and efficiency in our current society, particularly as encouraged by technologies (like smartphones) and their creators (like Silicon Valley corporations).
-
Read Chapter two from "The Atlas of AI" [pdf] by Kate Crawford, 2021 (35 pages).
Crawford, in her tour of the complex social and political systems which make up what we call AI, homes in on the labor practices which undergird modern trends.
Optional:
-
Read "The Automation Charade" [url] by Astra Taylor, 2018 (5 pages).
Astra Taylor urges: “We shouldn’t simply sit back, awestruck, awaiting the arrival of an artificially intelligent workforce. We must also reckon with the ideology of automation, and its attendant myth of human obsolescence.”
-
Read "Communist Commentary on "The Triple Revolution"" [pdf] by Richard Loring, 1964 (10 pages).
This essay was published contemporaneously with the “Triple Revolution”, and is largely favorable toward the reforms demanded. It also touches on the utility of utopianism in futurism, while considering labor issues in a distinctly Marxist, but still American, manner. In particular, the authors summarize, and take issue with, the Triple Revolution as saying “it is useless to fight the path progress is taking and they should therefore re-direct the aims of their fight to seek a better future in a world in which labor and its role will no longer be a basic factor.” The scan we have is a bit difficult to read, but we have been unable to find another.
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
-
Check out "The Code: Silicon Valley and the Remaking of America" [pdf] by Margaret O'Mara, 2019.
-
Check out "Humane: A New Agenda for Tech (44 min. watch)" [url] , 2019.
From the Center for Humane Technology. Also examine their website.
-
Check out "Are we having an ethical crisis in computing?" [pdf] [url] by Moshe Y. Vardi, 2018 (1 page).
Before Class:
-
Daily assignment, due at 8pm the night before class:
Recall that in the “The Triple Revolution,” the authors observe:
There is no question that cybernation does increase the potential for the provision of funds to neglected public sectors. Nor is there any question that cybernation would make possible the abolition of poverty at home and abroad. But the industrial system does not possess any adequate mechanisms to permit these potentials to become realities. The industrial system was designed to produce an ever-increasing quantity of goods as efficiently as possible, and it was assumed that the distribution of the power to purchase these goods would occur almost automatically. The continuance of the income-through-jobs link as the only major mechanism for distributing effective demand—for granting the right to consume—now acts as the main brake on the almost unlimited capacity of a cybernated productive system.
Consider the impact of our current regulatory and political framework on such cybernetic systems. (E.g. In the U.S., there’s a lower tax rate on income from investments—as from ‘unicorn’ start-ups. Or: The U.S. government, like some others, invests a lot of money into basic research programs as in computer science.)
- In what ways do these social frameworks have an impact on what technology gets built?
- In what ways do the technologies which get built shape our world?
- The readings cover externalities which arise in this interaction between social and technical systems. What’s an example of one of these? (And how does the term “accelerationism” relate?)
Experiences of Injustice in Computing
Tue, Nov 26 Computing and Racial Equity
Required:
-
Read "Critical Race Theory for HCI" [pdf] [url] by Ihudiya Finda Ogbonnaya-Ogburu et al., 2020 (16 pages).
This article calls human-computer interaction research, and computing research more generally, to explicitly attend to race, namely through critical race theory. Through this theory, analysis of the field, and storytelling, the authors show that despite (some) efforts in computing, racism persists and demands redress. Pay particular attention to the stories.
-
Read "Roles for computing in social change" [pdf] [url] by Rediet Abebe et al., 2020 (9 pages).
This reading offers concrete suggestions specific to computing research. It advances four roles such research can play in discussions around fairness, bias, and accountability.
-
Read introduction through page 16 from "Race after technology: Abolitionist tools for the new jim code" [pdf] by Ruha Benjamin, 2019.
Benjamin, in a recent book, offers a “race conscious orientation to emerging technology not only as a mode of critique but as a prerequisite for designing technology differently.” Affiliated with Princeton’s Center on Information Technology and Policy, she brings a fresh perspective to many of the foundations of computing.
Optional:
-
Read introduction (16 pages) from "Stuck in the shallow end: education, race, and computing" [pdf] by Jane Margolis, 2008.
This introductory chapter details an early 2000s study which attempted to figure out why so few Black and Latinx students were enrolling in computer science courses. It, “shows how segregation and inequality along racial lines operate on a daily basis in our schools, despite our best intentions” (16).
-
Listen to particularly minutes 15 through 30 from "Episode 12: Confronting Our Reality: Racial Representation and Systemic Transformation with Dr. Timnit Gebru" [url] by Dylan Doyle-Burke et al..
Gebru, who received her Ph.D. from Stanford and then worked for Google before very publicly not working for Google, discusses the necessity of focusing on your values (hers being racial justice) in conjunction with your work.
-
Read "Combating Anti-Blackness in the AI Community" [pdf] [url] by Devin Guillory, 2020.
This reading is more about a computing research community than about the external impact of our technology, but each influences the other.
Feeling motivated? Here are a few actions other people have taken in response to today's themes. (i.e., relevant responses)
Before Class:
-
Daily assignment, due at 8pm the night before class:
Answer at least two of the following.
On “Critical Race Theory for HCI”:
- What is interest convergence?
- Why do you think the authors include information about their background? What does this achieve?
- Why do the authors contend that theory is an important focus?
- What does “true recognition of the pervasiveness of racism” (pg. 9) require?
- Do you have any stories along the lines of those shared? Would you be interested in sharing any with the group?
Choose one of the roles from “Roles for Computing in Social Change.”
- How might your work as a computer scientist engage with your chosen role?
- What social problem would you like to work on?
Responses
Tue, Dec 03 Participating in the Society of Tech
Required:
-
Read "Digital Pregnancy Test Deconstruction (Twitter Thread)" [pdf] [url] by Foone Turing, 2020.
This and the next reading offer informal Twitter threads with some surprising twists on how a technology actually works and why it may or may not be useful. Is this misleading people or providing an ethically valuable service or both? If you had chosen this for your project, what would you have investigated?
-
Read "Response to Digital Pregnancy Test Deconstruction (Also on Twitter)" [pdf] [url] by Naomi Wu, 2020.
(See above.)
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Look back through the “relevant responses” underneath the required and optional readings for each class. Either provide two of your own which fit the theme or choose two from those listed.
- Why did you choose these two? How would you categorize them? Describe the response taken. Might you act similarly? Why or why not?
- Briefly reflect on how you may have reacted differently to the short readings for today a few months ago, now that you have mostly completed this class.
Departure
Thu, Dec 05 Participating in the Society of Tech
Required:
-
Read chapter 12 from "Artificial Unintelligence: How Computers Misunderstand the World" [pdf] [url] by Meredith Broussard, 2018.
Having read Broussard’s commentaries on the first day and as an introduction to the data unit, we now finish with her conclusion: a renewed plea for computing technology to serve the people who made it—humans.
-
Read foreword and introduction (pg. xi - xxvi; pg. 1 - 5; 23 pages total) from "Hope in the dark: Untold histories, wild possibilities" [pdf] by Rebecca Solnit, 2016.
Solnit, a writer and activist, reflects on our desire for social, cultural, or political change given the appearance that we have not arrived there (considering issues from global warming to human rights abuses). Originally responding to the war in Iraq, she explores how news cycles and our personal narratives frame these issues and makes the case for hope nonetheless: “tiny and temporary victories.”
Optional:
-
Read from 'SAN DIEGO' onward in "The Code: Silicon Valley and the Remaking of America" [pdf] by Margaret O'Mara, 2019.
From the UW historian O’Mara, hear this story of how two UW CSE alumni have navigated their careers as computer scientists.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Do you feel able to change outcomes of how tech affects society?
- What’s an idea from this course that every UW CSE student ought to understand?
- Finish the course project, part three, by tonight.