Computer Ethics
Finding the devil in the implementation details.
Administrative
Winter 2020
Time: Wednesdays and Fridays from 3:30 to 4:30pm
Location: NAN 181
Instructor: Jared Moore
Email: jlcmoore@cs.washington.edu
Office hours: TBD
Please do not hesitate to write to us about any accommodations or questions related to readings or course material. Additional meetings are available by appointment.
Description
Be it social media platforms, robots, or big data systems, the code Allen School students write—the decisions they make—influences the world in which it operates. This is a survey course about those influences and how to think about them. We recognize “the devil is in the implementation details.”
The course is divided into two parts: In the first part, we survey historical and local issues in tech, particularly those concerning data. We then engage with critical perspectives from disciplines such as machine ethics and science and technology studies as a framework for students to articulate their own beliefs concerning these systems. In the second part, we apply these perspectives to urgent issues in applied technologies, such as facial recognition and misinformation.
Throughout, students hone their critical reading and discussion skills, preparing them for a life-long practice of grappling with the—often unanticipated—consequences of innovation.
We cover topics such as: AI ethics, social good, utopianism, governance, inclusion, facial recognition, classification, privacy, automation, platforms, speculative design, identity, fairness, power and control, activism, and subversive technologies.
Objectives
By the end of this course students will:
- Obtain awareness of issues arising from the use of computers in contemporary sociotechnical systems.
- Articulate technological harms to individuals and groups in the language of critical perspectives.
- Appreciate how historical, cultural, economic, and political factors contribute to how technologies are built and designed.
- View themselves as both subjects and creators of sociotechnical systems.
- Understand and articulate complex arguments pertaining to values in technology.
- Recognize the diversity of stakeholders and views when considering a technology.
- Amplify voices and values not traditionally considered in technological development (e.g. in design processes).
- Re-imagine and speculate alternative histories and futures for using and coexisting with computers.
Schedule
Introduction: A Brief History
Wed, Jan 08 Groundwork
Required:
-
Read "Our Numbered Days: The Evolution of the Area Code" by Megan Garber, 2014 (5 pages)
Are you a “425” or a “206”? Through an exploration of phone area codes, Megan Garber shows us how cultures can be built up around artifacts spawned by engineering decisions. The piece traces the history nicely, while also discussing the hype cycle and pushback from the community, as well as how such choices continue to resonate long after they’re made.
-
Read "Why the Luddites Matter" by Z.M.L., 2018 (5 pages)
“That which makes the Luddites so strange, so radical, and so dangerous is not that they wanted everyone to go back to living in caves (they didn’t want that), but that they thought that those who would be impacted by a new technology deserved a voice in how it was being deployed.”
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "The Anti-Digit Dialing League" by John Wilcock (1 page)
-
Check out "About CPSR" by Douglas Schuler, 2008
Who's behind the keyboard?
Fri, Jan 10 Groundwork
Required:
-
Read "How to read a book" by Paul N. Edwards, 2000 (8 pages)
Here, an adroit scholar walks through some tips and tricks for reading more effectively. He hits the major points and includes some bonus tips, like where to best organize your reading notes. This is an invaluable resource as our course’s weekly reading load begins to increase. Skim now, but revisit throughout the course.
-
Read "Be Careful What You Code For" by danah boyd, 2016 (2 pages)
danah boyd, a researcher at Microsoft and at Data & Society, highlights just how few guardrails there are for developers, from the consequences of algorithmic bias to the implications of “crazy metaphors”. She offers a call to action, solutions, and ample evidence for considering the implications of code. We highly recommend chasing down some of the links provided.
-
Read introduction through page 16 from "Race after technology: Abolitionist tools for the new jim code" by Ruha Benjamin, 2019
Benjamin, in a recent book, offers a “race conscious orientation to emerging technology not only as a mode of critique but as a prerequisite for designing technology differently.” Affiliated with Princeton’s Center for Information Technology Policy, she brings a fresh perspective to many of the foundations of computing.
Optional:
-
Read "The Cloud Is Not the Territory" by Ingrid Burrington et al., 2014 (3 pages)
-
Read "Discussion Leading Guidelines"
This resource from the Stanford Teaching Commons offers an in-depth analysis of how to have better discussions. Their recommendations, from setting an agenda to asking questions and increasing discussant engagement, are all part of creating a better climate for discourse. For leading discussions in this class and beyond, it’s worth a read.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- What do you want to get out of our class discussions?
- Do you feel able to change outcomes of how tech affects society?
- Preview the course project, part one
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Tackling Climate Change with Machine Learning" by David Rolnick et al., 2019 (97 pages)
Also check out their website.
-
Check out "Green AI" by Roy Schwartz et al., 2019
-
Check out "Open letter to Jeff Bezos and the Amazon Board of Directors" by Amazon Employees for Climate Justice, 2019
Deconstructing a Data System
Wed, Jan 15 Data
Required:
-
Read "At Amazon's New Checkout-Free Store, Shopping Feels Like Shoplifting" by Jake Bullinger, 2018 (2 pages)
Jake Bullinger describes the experience of some of the first shoppers at the checkout-free Amazon Go store and considers its economic implications. As you’re reading the article, look for possible tensions, critiques, or questions it raises. Also think about the ways in which data is used by this store.
-
Read "In Amazon Go, no one thinks I'm stealing" by Ashlee Clark Thompson (2 pages)
Ashlee Clark Thompson reflects on her experience of shopping in the Amazon Go store: “Amazon Go isn’t going to fix implicit bias or remove the years of conditioning under which I’ve operated. But in the Amazon Go store, everyone is just a shopper, an opportunity for the retail giant to test technology, learn about our habits and make some money.”
Optional:
-
Read "Inside Amazon Go, a Store of the Future" by Nick Wingfield, 2018 (1 page)
A high-level description of the store; it touches on themes like convenience, how this tech could affect jobs, and the vagueness of plans surrounding the system at the time. Look at the photos if you don’t visit the Amazon Go store.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Today’s a light reading day, but day four isn’t, so we recommend you get started on that.
- Summarize the Amazon Go readings.
- How do the readings differ in their views of the Amazon Go store? Did the Amazonians consider each of these perspectives? Should they have? How might you classify them?
- (Optional) Visit one of the Amazon Go stores downtown (I will be going before class on Wed. 1/15)
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "The Loneliest Grocery" by Joshua McNichols
-
Check out "“Good” isn’t good enough" by Ben Green, 2019 (4 pages)
This paper, by a postdoc at the AI Now Institute and formerly of MIT, summarizes many of the themes we touch on throughout the quarter. It synthesizes many of the arguments we cover and applies them as a call for action to data scientists in particular. The author’s arguments are equally relevant to computer scientists.
Conceptions of Data
Fri, Jan 17 Data
Required:
-
Read "Chapter 1: Conceptualising Data" by Rob Kitchin, 2014 (25 pages)
The introduction to the book describes ways of thinking about what data is (ontologies) and goes on to discuss ethical and political considerations of data. It proposes the framework of “data assemblages” and examines how our ways of thinking about data shape how data come to be conceived.
-
Read "On Being a Data Skeptic" by Cathy O'Neil, 2014 (26 pages)
This rapid-fire, well-articulated article is about the advantages and perils of data science, with ample advice and examples to advocate for why those who use data ought to be this special kind of “skeptical”.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Answer each of the following questions with no more than a sentence or two. These may seem like a lot of questions, but we recommend you use them as a resource to guide your reading.
- What is the difference between data, information, and knowledge? How are they related?
- How can wider or economic concerns “frame” data? That is, in what sense do data act and how? (Answers might include: as an economic resource, as a form of power/knowledge, etc.) Explain why.
- How have politics or economics influenced how some data have been defined or created?
- What are reasons or incentives for controlling:
- The creation of data
- The access to data
- The standards of data, such as metrics or units
- The means of data collection, such as sensors or know-how
- In “On Being a Data Skeptic,” explain “measuring the distortion.”
- What is the relationship between models and proxies? Why are proxies used? Give an example.
- Why might the author have singled out “nerds” and “business people” separately? What do the differences in her comments indicate about how they view problems differently? Do you agree?
- Think of a clarifying question you’d like to discuss in class.
"Data is the new oil": data politics
Wed, Jan 22 Data
Required:
-
Read (skim) from "The world's most valuable resource is no longer oil, but data", 2017 (1 page)
A short article that introduces the metaphor that “data is the new oil” which reflects the widely held view that data is now “the world’s most valuable resource”.
-
Read "Do artifacts have politics?" by Langdon Winner, 1980 (15 pages)
In this widely-cited essay, Langdon Winner makes the case that technologies embody social relations. He argues that we should develop a language for considering technology that focuses not only on it as a tool, or on its use, but also on the meaning of its design and the social arrangements it facilitates. Winner asks: “what, after all, does modern technology make possible or necessary in political life?” Consider this while you read the piece.
-
Read "Anatomy of an AI System" by Kate Crawford et al., 2018 (14 pages)
Kate Crawford and Vladan Joler consider the production of the Amazon Echo Dot with an astounding breadth by mapping the human labor, data, and material resources required to build it. Kate Crawford is a co-founder of the AI Now institute at NYU which is breaking ground on many questions relevant to the social implications of AI.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Pick one aspect of “Anatomy of an AI System” and discuss it with someone outside of class. In a couple of sentences, what did you talk about?
- Did any aspect of the Amazon Echo AI system surprise or interest you? Which aspect?
- What conclusions can we draw from the tomato picking example in “Do Artifacts Have Politics?”
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Google Will Not Renew Pentagon Contract That Upset Employees" by Daisuke Wakabayashi et al., 2018
-
Check out "The Societal Implications of Nanotechnology" by Langdon Winner, 2003 (5 pages)
Winner’s testimony before Congress
-
Check out "An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System" by Chelsea Barabas et al., 2017 (8 pages)
Operationalization and Classification
Fri, Jan 24 Data
Required:
-
Read introduction (pg. 1 - 16; 31 - 32; 17 pages total) from "Sorting things out: classification and its consequences" by Geoffrey C. Bowker et al., 1999 (377 pages)
Sorting Things Out is a classic text on classification and standardization. The introduction discusses the importance of considering the ubiquity of classification and the processes that generate, standardize, and enforce classifications. It also looks at how classification has caused harm and how the processes which create standards can at times yield an inferior solution. The authors take an expansive view of classification, so be prepared to think about the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) or VHS vs. Betamax.
-
Read "Do algorithms reveal sexual orientation or just expose our stereotypes?" by Blaise Aguera y Arcas et al., 2018 (5 pages)
This essay sets out to debunk a scientific study which claimed to have built a “sexual orientation detector” using machine learning. “Do algorithms reveal sexual orientation or just expose our stereotypes?” presents a thorough analysis of the offending paper and shows that one way to debunk “junk science” is to validate the study’s results against some other baseline. In this case, the authors use Amazon’s Mechanical Turk. As you’re reading, think about what one can actually learn from a face.
Optional:
-
Read "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images." by Yilun Wang et al., 2017
This is the article critiqued by “Do algorithms…”
Before Class:
-
Daily assignment, due at 8pm the night before class:
- “Do algorithms…” claims to focus on the underlying “science.” Why do you think the authors did so? Why was this distinction important?
- What strategies did “Do algorithms…” use to make its argument?
- “Do algorithms…” concludes that the paper it examines was misguided. Perhaps using what you learned from “Sorting Things Out,” think of another method based on misguided assumptions. What is it?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out Chapter 1 (pg. 46 - 50 from “Infrastructure…” on) from "Sorting things out: classification and its consequences" by Geoffrey C. Bowker et al., 1999 (377 pages)
-
Check out "Drawing a Line" by Tableau Employee Ethics Alliance, 2019
-
Check out "Engaging the ethics of data science in practice" by Solon Barocas et al., 2017 (3 pages)
-
Check out "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI" by Philip E. Agre, 1997 (28 pages)
Agre, an AI researcher in the 1990s, convincingly walks the line between a critical perspective and that of a practitioner, evoking why practitioners may bristle at critique.
Wed, Jan 29
Before Class:
- Jared is out of town; students can bring in a draft of their project to share with each other.
Moral Machines
Fri, Jan 31 Critical Perspectives
Required:
-
Read "Whose Life Should Your Car Save?" by Azim Shariff et al., 2016 (1 page)
This is a high-level introduction to the Moral Machine project. It also discusses how politics and business view ethical dilemmas, and how a tragedy-of-the-commons-type scenario could undermine our common expectations for machine behavior.
-
Read "Why the moral machine is a monster" by Abby Everett Jaques, 2019 (10 pages)
This paper presents a fierce-yet-thoughtful critique of the Moral Machine experiment. It also highlights the importance of analyzing the structural implications of the problem: “In a slogan, the problem is that an algorithm isn’t a person, it’s a policy. And you don’t get policy right by just assuming that an answer that might be fine in an individual case will generalize. You have to look at the overall structure you create when you aggregate the individual transactions….The right question is what kind of world will I be creating if this is the rule. What will the patterns of advantage and disadvantage be if this preference is encoded in our autonomous vehicles. (pg 5-6)”
Optional:
-
Read "The Trolley Problem" by Judith Jarvis Thomson, 1985 (21 pages)
Judith Jarvis Thomson marches you through 21 pages of analyzing various hypothetical scenarios in order to expose the difficulty of using abstractions, and how a normative/ethical tenet may seem straightforward on its face but be undermined once one looks at the details. The essay makes it hard to believe that there is an exhaustive set of rules to encode which actions are permissible, not to mention moral.
-
Read "Ethical Machines (published)" by Irving John Good, 1982 (5 pages)
I.J. Good discusses problems with, and considerations for, thinking about whether and how a machine could exhibit “ethical” behavior.
-
Read "Ethical Machines (unpublished)" by Irving John Good, 1980 (16 pages)
The longer, unpublished version parses the implications of the arguments in greater depth. Notable in particular are the final consideration of a “synergistic relation with the boss” and the Shakespeare reference.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Judge a few scenarios on the MIT Moral Machine website.
- Summarize the readings.
- What are the trade-offs of looking at individual ethics cases (such as in the “Trolley Problem” or “Moral Machines”) as compared to a ‘structural analysis’ as described by Jaques?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars" by Simon Romero, 2018
-
Check out "Tech Billionaires Think SimCity Is Real Life" by Nicole M. Aschoff, 2019 (4 pages)
About Alphabet’s Sidewalk Labs and their attempt to create a “smart city” in Toronto. This is quite related to conversations about autonomous vehicles, creating moral machines, etc. Also check out one community response to the project.
"Techne" and Belonging
Wed, Feb 05 Critical Perspectives
Required:
-
Read "Technically Female: Women, Machines, and Hyperemployment" by Helen Hester, 2016 (10 pages)
This essay surveys a history of “electronic secretaries” to frame relevant questions about today’s tech, such as: Why are AI assistants so often feminized? What does it mean for technology to “do gender”, and in service of which “imagined technology user”? We can also turn that question around and ask who “does technology”, and how labor gets redistributed with the introduction of new software and AI assistants. Ultimately, Hester asks us to confront questions concerning lived experiences of gender and how it’s programmed, productive vs. reproductive labor, and the (dis)advantages of automation.
Optional:
-
Read "Patriarchy, technology, and conceptions of skill" by Judy Wajcman, 1991 (16 pages)
When considering the future of work, one question that’s often raised is how technology negatively impacts the amount of “skill” required to complete a task, aka “deskilling”. In “Patriarchy, Technology, and Conceptions of Skill”, Judy Wajcman questions the underlying assumption that skill is entirely technically derived. Instead, she considers how men’s historical control over technology in the workplace has extensively influenced the ideological and material conceptions of skill, thus concluding in part that “definitions of skill, then, can have more to do with ideological and social constructions than with technical competencies which are possessed by men and not by women”.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- On “Technically Female: Women, Machines, and Hyperemployment.”
- What is “hyperemployment”?
- How can we use the criticism of “hyperemployment” to explain the “productive vs. reproductive” labor distinction in the essay?
- How do “hyperemployment” and the “productive vs. reproductive labor” distinction relate to conceptions of skill?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Reflecting on one very, very strange year at Uber" by Susan Fowler, 2017 (4 pages)
This blog post, by the author of “What Have We Done?”, contributed to the resignation of Uber’s CEO, Travis Kalanick.
-
Check out "Google Walkout: Employees Stage Protest Over Handling of Sexual Harassment" by Daisuke Wakabayashi et al., 2018
-
Check out "We must do more to address gender harassment at the Allen School" by Camille Cobb, 2018 (1 page)
Latent Identity and Privacy
Fri, Feb 07 Critical Perspectives
Required:
-
Read "It's Not Privacy, and It's Not Fair" by Cynthia Dwork et al., 2013 (6 pages)
This law review paper is the missing link between the concepts of control and privacy as represented by the (optional) Deleuze piece and the Barocas piece, respectively.
-
Read "Think You're Discreet Online? Think Again" by Zeynep Tufekci, 2019 (2 pages)
How ought we make sense of questions of privacy, classification, tracking, and surveillance in the era of big data and computational inference? Zeynep Tufekci asks us to consider these questions by looking at examples of the collective implications of a “privacy-compromised world”.
-
Read "Big data's end run around procedural privacy protections" by Solon Barocas et al., 2014 (2 pages)
Solon Barocas and Helen Nissenbaum, both well-known AI ethics scholars, consider ”why the increasingly common practice of vacuuming up innocuous bits of data may not be quite so innocent: who knows what inferences might be drawn on the basis of which bits?”
Optional:
-
Read "Deleuze the Societies of Control"
This video highlights some significant passages in “Postscript” and explains what’s going on by connecting it back to contemporary questions of control. Only the first 10 minutes actually cover the essay; the next 12 or so offer commentary, posing relevant questions and extrapolating the ideas of “Postscript” into the future.
-
Read "Postscript on the Societies of Control" by Gilles Deleuze, 1992 (4 pages)
“just as the corporation replaces the factory, perpetual training tends to replace the school, and continuous control to replace the examination. Which is the surest way of delivering the school over to the corporation.” Deleuze considers the technologies of power, and what it means to be in a “control state”.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Identify a new system or one we’ve discussed in class that makes decisions which affect people’s lives in some meaningful way. Describe it and then answer the following questions:
- Does this system rely on data collection to make these decisions?
- Where does this information come from?
- What’s the consent model?
- What questions related to individual privacy does it raise?
- Finish the course project, part one by tonight.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Why Hong Kongers Are Toppling Lampposts" by Sidney Fussell, 2019
-
Check out "neveragain.tech" by Leigh Honeywell, 2016
-
Check out "Cegłowski Senate Testimony" by Maciej Cegłowski, 2019 (10 pages)
Reimagining
Wed, Feb 12 Critical Perspectives
Required:
-
Read page 151 until chapter end from "Rise of the machines: A cybernetic history" by Thomas Rid, 2016 (8 pages)
Rid, a scholar of conflict and information technology, aptly summarizes the act of reimagining as it took place around the idea of the “cyborg,” particularly as evoked by Donna Haraway (see optional readings). This kind of reimagining is important to consider in the historical development of technology. Note how this work references I.J. Good and other thinkers in the history of AI and ethical machines.
-
Read "Dynamicland and the Whimsical Digital Object" by Olivia Kan-Sperling, 2019 (3 pages)
A relatively short piece that comments on a programming lab, Dynamicland, which, while situated near Silicon Valley, eschews the ethos of human transcendence. The article is sweet and attentive to the project without withholding criticism, concluding that the conceptual indulgences it is most wary of are also what make the project amazing and notable.
-
Read "The Internet Is Acid, and America Is Having a Bad Trip" by Douglas Rushkoff, 2018 (1 page)
Douglas Rushkoff, a noted media theorist, draws a connection between software and hallucination. He is keenly attuned to the wild abstractions and metaphors used in computation: “Computers and networks were part of a much larger cultural phenomenon: a realization that reality is a collaboration…It seemed as if the world was about to become a whole lot more like a lucid dream, where the future was less a place we arrived at than a thing we created together.” Where are we today? Where do we go from here?
Optional:
-
Read sections 1 and 5 (10 pages total) from "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century" by Donna Haraway, 2006 (42 pages)
Readers not familiar with critical theory will likely struggle with this reading, particularly the fluid language and arcane references. Haraway uses the extended metaphor of the cyborg as a vehicle to explore the relationship between gender and identity, nature and technology—pushing back on traditional conceptions of radical feminism and affinity vs. identity politics, and fiercely critiquing the language of capitalism. Although much tech has changed since 1991, when the essay was written, it remains as influential as ever.
-
Read "Mother of Invention" by Nnedi Okorafor, 2018 (11 pages)
This story gives voice to themes concerning motherhood and giving birth in a way that’s not often reflected in futurist narratives. It also explores the notion of symbiosis between Anwuli and Obi 3 (the house), and how tech can be an extension of self. It shows a vision in which the computer has the agency to exhibit an ethic of “care”.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Choose one of the media documents linked below. In a couple of sentences, summarize what it does and how it connects to the readings.
- Zach Lieberman became known for co-creating openFrameworks, an open-source C++ toolkit for creative coding. Have a look at some of his delightful daily sketches, which he continues to post on his Instagram.
- Morehshin Allahyari’s lecture titled On Digital Colonialism and Re-Figuring: here she talks about the use of digital technologies and activism, digital colonialism, and re-figuring as it relates to the process of making her project called Material Speculation; also check out She Who Sees the Unknown.
- “The Glass Room” is an interactive installation produced by Firefox and curated by Tactical Tech that acts as “a place to explore how technology and data are shaping our perception, experiences, and understanding of the world.”
- “Onyx Ashanti is designing a process for reprogramming himself utilizing his own musical comprehension and sound design, he calls it Sonocybernetics.” Learn more about Onyx’s work here and view a video of sono-cybernetics performed: “of gesture and laser cymatics.”
- Adam Harvey is an American artist and researcher currently based in Berlin. His work engages with issues surrounding AI, surveillance, and computer vision.
- “The Computer for the 21st Century” is an article about Xerox PARC’s prototypes of ubiquitous computing. It can be viewed as a harbinger of the current IoT revolution.
Choose one of the following technological futures (such as Obi 3 in “Mother of Invention”). What do you see as the purpose of re-imagining it?
- Computer use (such as in “Dynamicland” or the procedurally generated sketches of Zach Lieberman),
- Our belief in the abstraction of software (such as in “Acid Trip”),
- The exclusivity of classifications and labels (such as in “The Cyborg Manifesto”),
- The role of technology in nature (such as in “The Cyborg Manifesto”),
- Or any other media which we’ve covered.
Defacing Recognition
Fri, Feb 14 Facial Recognition
Required:
-
Read "Physiognomy's New Clothes" by Blaise Aguera y Arcas et al., 2017 (16 pages)
Can an algorithm detect criminality from your face alone? In this paper, the authors critique a 2016 paper that claims to be able to do so. By focusing on the historical context and current science, this article refutes the paper’s claims while providing a salient illustration of the danger of using neural networks inappropriately.
-
Read introduction , background, and discussion (pg. 1 - 2; pg. 2 - 5; pg.10 - 14; 7 pages total) from "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition" by Os Keyes, 2018 (21 pages)
This paper surveys the HCI literature to understand how researchers conceive of “gender” and the implications for how Automatic Gender Recognition systems are developed. As you’re reading, think about who gets to decide what “gender” is when developing a product, and how different operationalizations of this concept can shape how AGR systems are deployed and how they impact Trans people.
-
Read "STEALING UR FEELINGS" by Noah Levenson, 2019
Also consider watching the video
Optional:
-
Read "Facial feature discovery for ethnicity recognition" by Cunrui Wang et al., 2019 (17 pages)
This is another example of the ways facial recognition may err.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- How does the field of physiognomy reinforce the power of some at the expense of others? Does the comparison to scientific racism seem reasonable?
- Within the context of facial recognition systems, there are many incentives for increasing systems’ efficacy, such as to advance political or economic goals. How do these incentives influence:
- The system’s ability to control and exert power?
- Philosophies of human nature and essentialism?
- Reputability of the science and facial recognition’s applications? In answering this question, pick a specific incentive of a political or economic nature and one of the above externalities.
- What are possible blind-spots in our own conceptions of human traits?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Amazon Workers Demand Jeff Bezos Cancel Face Recognition Contracts With Law Enforcement" by Kate Conger, 2018
-
Check out "CV Dazzle" by Adam Harvey, 2017
An art project which demonstrates the adversarial susceptibility of facial recognition. Read about how it works here.
-
Check out "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition" by Os Keyes, 2018 (21 pages)
Section 6, Design Recommendations (14 - 16)
The politics of faces
Wed, Feb 19 Facial Recognition
Required:
-
Read "Excavating AI: The Politics of Training Sets for Machine Learning" by Kate Crawford et al., 2019 (14 pages)
Again we see Crawford examining the history and politics of a machine learning system, here: computer vision training data sets. She and Paglen examine datasets like ImageNet, UTKFace, etc. and apply many of the critiques we’ve seen up to this point.
-
Read "Halt the use of facial-recognition technology until it is regulated" by Kate Crawford, 2019 (1 page)
Facial recognition technologies have a uniquely powerful potential. Yet few legislative safeguards are in place. It is even less clear that the technologies can actually function in a safe or desirable manner. Kate Crawford argues “These tools are dangerous when they fail and harmful when they work”—ultimately calling for urgent and comprehensive regulation of facial recognition.
Before Class:
-
Daily assignment, due at 8pm the night before class:
In 150 - 300 words: The State of Washington is considering banning all use of facial recognition technology. Do you agree with this move? Write a letter to your district’s representative or to the House Speaker expressing your position. Cite each of the readings when making your argument. (As a starting point, consider identifying the stakeholders in your argument.)
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "On the Perils of Automated Face Recognition" by Dallas Card, 2018 (4 pages)
Recall that this was an example reading for the course project.
-
Check out "Facial Recognition Technology in Public Housing Prompts Backlash" by Lola Fadulu, 2019 (2 pages)
-
Check out "SB 5376 - 2019-20" , 2019
Washington State is currently considering privacy legislation which could cover government use of facial recognition technology. You might draft a public comment drawing on what you have learned in the class, engaging with the technology as Hoffman does.
Constructing a Political Argument
Fri, Feb 21
Required:
-
Read "The Automation Charade" by Astra Taylor, 2018 (5 pages)
Astra Taylor urges: “We shouldn’t simply sit back, awestruck, awaiting the arrival of an artificially intelligent workforce. We must also reckon with the ideology of automation, and its attendant myth of human obsolescence.”
-
Read page 5 onwards from "The Triple Revolution (original)", 1964 (16 pages)
This famous memo to President Lyndon B. Johnson was drafted by the Ad Hoc Committee on the Triple Revolution, comprised of notable social activists, scientists, and technologists, among others. It warns of revolutions in social justice, automation, and weapons development, and that if urgent social and economic changes are not made, “the nation will be thrown into unprecedented economic and social disorder”.
Optional:
-
Read "Is this time different? The opportunities and challenges of artificial intelligence" by Jason Furman, 2016 (17 pages)
Before Class:
-
Daily assignment, due at 8pm the night before class:
These questions are structured to prepare you for the argument section of the course project.
- Both of these readings make calls for action. Describe each. In doing so, consider: Who is the intended audience? Which stakeholders are considered?
- Choose one rhetorical move which one of the authors made. What was it? Was it compelling? Examples include:
- Use of narrative — what was the purpose of using storytelling?
- Specific examples used by the author(s) and what made the example effective.
- Enumerating concepts, facts, etc. — what did this do for the overall argument?
- How might you incorporate a similar rhetorical move into your own writing (such as in the course project)?
- Finish the course project, part two by tonight.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Somali Workers in Minnesota Force Amazon to Negotiate" by Karen Weise, 2018
-
Check out "Tech Won't Build It" , 2018 (28 pages)
Platform or Publisher?
Wed, Feb 26 Misinformation and Platforms
Required:
-
Read chapter one (pg. 1 - 24) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" by Tarleton Gillespie, 2018 (288 pages)
Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. Beginning with an example of content moderation on Facebook, he makes the case that content moderation is an essential element of these social media companies and that the act of providing content to users comes with many value-laden decisions, taking up myths of openness, free speech, neutrality, and more.
-
Read from "Blue Feed, Red Feed" by Jon Keegan, 2016 (1 page)
Skim. When thinking about content moderation and the role that platforms play in curation, it helps to see first-hand how data appear to, and are shaped by, content moderation systems. Play with this app created by the Wall Street Journal to see how Facebook feeds differ between political leanings.
-
Read introduction (9 pages) from "Kill All Normies" by Angela Nagle, 2017 (11 pages)
This does a better job of describing some of the thorny “grey” areas and mechanisms for radicalization that we will otherwise discuss on the platform days, and is an illustrative antidote to the clearer-cut “malicious” content mentioned in the other pieces.
Optional:
-
Read "The Moderators" by Adrian Chen, et al.
Watch this 20-minute documentary on content moderators, but beware of graphic content.
-
Read "Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy" by Nancy Fraser, 1990 (56 pages)
Before Class:
-
Daily assignment, due at 8pm the night before class:
In 150 - 300 words, address the following: What are platforms? Why does this term matter so much? Who are the stakeholders?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads" by The New York Times, 2019
-
Check out "“So You Won't Take Down Lies?”: AOC Blasts Mark Zuckerberg in Testy House Hearing" by Alison Durkee
Also see how she solicited the public for questions to ask the CEO.
-
Check out "A Reckoning at Facebook" by Nicholas Thompson, 2018
-
Check out "Tech employees can make up for executives" by Jared Moore, 2019 (1 page)
Content Moderation Algorithms and Free Speech
Fri, Feb 28 Misinformation and Platforms
Required:
-
Read "It's the (Democracy-Poisoning) Golden Age of Free Speech" by Zeynep Tufekci, 2018 (4 pages)
Zeynep Tufekci considers how the power to censor functions on our oversaturated social networks, and the role of misinformation and the attention economy in this. The article brings striking clarity to the issues we collectively face on these platforms.
-
Read chapter four (pg. 74 - 110; 36 pages total) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" by Tarleton Gillespie, 2018 (288 pages)
Read the whole chapter, but, if you’re short on time, focus on the “automatic detection” section (pg. 97 to 110). Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. In chapter 4, “Three Imperfect Solutions to the Problem of Scale”, Gillespie considers several models for content moderation, one of which is algorithmic “automatic detection” techniques.
Optional:
-
Read "The Risk of Racial Bias in Hate Speech Detection" by Maarten Sap et al., 2019 (9 pages)
Before Class:
-
Daily assignment, due at 8pm the night before class:
In 150 - 300 words: Identify a system that attempts to alleviate some of the concerns raised about platforms. What does it do? What ideologies and values inform it? What potential abuses might be overlooked by this design?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Jigsaw"
Examine this as a “moonshot”-type project and consider how it attempts to reimagine people’s internet experiences using machine-learning systems (such as Perspective). At the same time, consider it in light of Gillespie’s comments on Jigsaw on page 109 of “Custodians.”
-
Check out "Fighting Neo-Nazis and the Future of Free Expression" , 2017
Techno-Utopianism
Wed, Mar 04 The Society of Tech
Required:
-
Read "The Dark Side of Techno-Utopianism" by Andrew Marantz, 2019 (6 pages)
A summary of technologists’ relationships with media and how looking at books and the printing press can help us understand how contemporary technologists idealize their own creations.
-
Read "Communist Commentary on "The Triple Revolution"" by Richard Loring, 1964 (10 pages)
This essay was published contemporaneously with the “Triple Revolution” and is largely favorable toward the reforms demanded. It also touches on the utility of utopianism in futurism, while considering labor issues in a distinctly Marxist, but still American, manner. In particular, the author summarizes, and takes issue with, the Triple Revolution as saying “it is useless to fight the path progress is taking and they should therefore re-direct the aims of their fight to seek a better future in a world in which labor and its role will no longer be a basic factor.” The scan we have is a bit difficult to read, but we have been unable to find another.
Optional:
-
Read "The Microsoft Provocateur" by Ken Auletta, 1997 (14 pages)
This piece shows us the thinking of tech folks at a pivotal time when the nature of the internet was not yet decided. Skim the sections which are just about Myhrvold’s life—we mean to focus on founders and their utopian visions.
-
Read "Origin Stories of Tech Companies if Their Founders Had Been Women" by Ginny Hogan, 2019 (1 page)
Before Class:
-
Daily assignment, due at 8pm the night before class:
- What historical precedents do you find are useful for thinking about tech, technologists, and their power in society?
- Do you find the analogy to books and publishers in “The Dark Side of Techno-Utopianism” compelling?
- What are utopias you’ve heard described when talking about emerging technologies? Do you find those utopias compelling? Why or why not?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "Why Joi Ito needs to resign" by Arwa Mboya
-
Check out "“What Have We Done?”: Silicon Valley Engineers Fear They've Created a Monster" by Susan Fowler, 2018 (2 pages)
-
Check out "Computer power and human reason: From judgment to calculation." by Joseph Weizenbaum, 1976
Joseph Weizenbaum, a professor at MIT since the 1960s, responds to the thoughtlessness he saw in programmers of the time.
Harder, Faster, Better, Stronger?
Fri, Mar 06 The Society of Tech
Required:
-
Read "Three Expensive Milliseconds" by Paul Krugman, 2014 (1 page)
While reading this article, consider: who is today’s infrastructure for? What are some metrics being optimized for which have led to some (perhaps) unexpected consequences? How are these metrics and systems shaping our world?
-
Read chapter 1 (pg. 13 - 36; 23 pages total) from "Pressed for time: The acceleration of life in digital capitalism" by Judy Wajcman, 2015
Wajcman, whom we saw previously in ”Techne” and Belonging, investigates the demand for speed and efficiency in our current society, particularly as encouraged by technologies (like smartphones) and their creators (like Silicon Valley corporations).
-
Read "Survival of the Richest" by Douglas Rushkoff, 2019 (3 pages)
From Douglas Rushkoff, also the author of “The Internet is Acid, and America is Having a Bad Trip”: “Apocalypto – the intolerance for presentism leads us to fantasize a grand finale. “Preppers” stock their underground shelters while the mainstream ponders a zombie apocalypse, all yearning for a simpler life devoid of pings, by any means necessary. Leading scientists – even outspoken atheists – prove they are not immune to the same apocalyptic religiosity in their depictions of “the singularity” and “emergence”, through which human evolution will surrender to that of pure information.” This idea is investigated more deeply in his book “Present Shock”.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Describe acceleration and at least one way it has shaped our world.
Recall that in the “The Triple Revolution,” the authors observe:
There is no question that cybernation does increase the potential for the provision of funds to neglected public sectors. Nor is there any question that cybernation would make possible the abolition of poverty at home and abroad. But the industrial system does not possess any adequate mechanisms to permit these potentials to become realities. The industrial system was designed to produce an ever-increasing quantity of goods as efficiently as possible, and it was assumed that the distribution of the power to purchase these goods would occur almost automatically. The continuance of the income-through-jobs link as the only major mechanism for distributing effective demand—for granting the right to consume—now acts as the main brake on the almost unlimited capacity of a cybernated productive system.
In what ways has the current structure of our regulatory and political framework allowed for cybernetic systems (such as social media platforms and the stock market) to shape our world? Is this the world that the systems’ creators set out to create? If not, list some unintended consequences. What are the mechanisms that shape these systems? Who gets to decide?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Check out "The Code: Silicon Valley and the Remaking of America" by Margaret O'Mara, 2019
-
Check out "Humane: A New Agenda for Tech (44 min. watch)" , 2019
From the Center for Humane Technology. Also examine their website.
-
Check out "Are we having an ethical crisis in computing?" by Moshe Y. Vardi, 2018 (1 page)
Responses
Wed, Mar 11 Participating in the Society of Tech
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Look back through the “relevant responses” associated with each day of class. Choose two to focus on. Why did you choose these two? How would you categorize them? Describe the response taken. Might you act similarly? Why or why not?
- At this point in the class, do you feel empowered to act on your values? Why or why not?
Departure
Fri, Mar 13 Participating in the Society of Tech
Required:
-
Read foreword and introduction (pg. xi - xxvi; pg. 1 - 5; 23 pages total) from "Hope in the dark: Untold histories, wild possibilities" by Rebecca Solnit, 2016
Solnit, a writer and activist, reflects on our desire for social, cultural, or political change given the appearance that we have not arrived there (considering issues from global warming to human rights abuses). Originally responding to the war in Iraq, she explores how news cycles and our personal narratives frame these issues and makes the case for hope nonetheless: “tiny and temporary victories.”
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Do you feel able to change outcomes of how tech affects society?
- What’s an idea from this course that every UW CSE student ought to understand?
- Finish the course project, part three by tonight.