Computer Ethics
Administrative
Winter 2022
Time: Wednesdays and Fridays from 3:30 to 4:20pm
Location: online for weeks 1-4 then HCK 320
Instructor: Jared Moore
Email: jlcmoore@cs.washington.edu
Office hours: by appointment (please do ask to chat!)
Please do not hesitate to write to the instructor about any accommodations or questions related to readings or course material.
Description
Be it social-media platforms, robots, or big data systems, the code Allen School students write—the decisions they make—influences the world in which it operates. This is a survey course about those influences and ways to think about them. We recognize, “the devil is in the implementation details.”
The course is divided into two parts: In the first part, we survey historical and local issues in tech, particularly those concerning data. We then engage with critical perspectives from disciplines such as machine ethics and science and technology studies as a framework for students to articulate their own beliefs concerning these systems. In the second part, we apply these perspectives to urgent issues in applied technologies; see the schedule for the topics we plan to consider this quarter.
Throughout, students hone their critical reading and discussion skills, preparing them for a life-long practice of grappling with the—often unanticipated—consequences of innovation.
We approach topics such as: AI ethics, social good, utopianism, governance, inclusion, facial recognition, classification, privacy, automation, platforms, speculative design, identity, fairness, power and control, activism, and subversive technologies.
We hope you will come to see this course as an essential part of your Allen School education, despite (or because of!) how different it is from most CSE courses.
Objectives
By the end of this course students will:
- Gain awareness of issues arising from the use of computers in contemporary sociotechnical systems
- Articulate technological harms to individuals and groups in the language of critical perspectives
- Appreciate how historical, cultural, economic, and political factors contribute to how technologies are built and designed
- View themselves as both subjects and creators of sociotechnical systems
- Understand and articulate complex arguments pertaining to values in technology
- Recognize the diversity of stakeholders and views when considering a technology
- Amplify voices and values not traditionally considered in technological development (e.g., in design processes)
- Re-imagine and speculate alternative histories and futures for using and coexisting with computers
Schedule
(may change up to a week in advance)
Introduction: A Brief History
Wed, Jan 05 Groundwork
Required:
-
Read "Our Numbered Days: The Evolution of the Area Code" by Megan Garber, 2014 (5 pages)
Are you a “425” or a “206?” Through an exploration of phone area codes, Megan Garber shows us how cultures can be built up around artifacts spawned by engineering decisions. The piece traces the history nicely while also discussing the hype cycle, pushback from the community, and how such choices continue to resonate long after they are made.
-
Read "Why the Luddites Matter" by Z.M.L., 2018 (5 pages)
“That which makes the Luddites so strange, so radical, and so dangerous is not that they wanted everyone to go back to living in caves (they didn’t want that), but that they thought that those who would be impacted by a new technology deserved a voice in how it was being deployed.”
Optional:
-
Read "Organizational Frictions and Increasing Returns to Automation: Lessons from AT&T in the Twentieth Century" by James Feigenbaum et al., 2021 (56 pages)
Check out this interesting recent paper on why Bell Labs may have moved to digit dialing!
Before Class:
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "The Anti-Digit Dialing League" by John Wilcock (1 page)
-
Read "About CPSR" by Douglas Schuler, 2008
Who's behind the keyboard?
Fri, Jan 07 Groundwork
Required:
-
Read chapter one from "Artificial Unintelligence: How Computers Misunderstand the World" by Meredith Broussard, 2018
In her introductory chapter, computer scientist and data journalist Broussard lays out both her love of and skepticism for computing technology. You might find a bit of yourself in her.
-
Read "Be Careful What You Code For" by danah boyd, 2016 (2 pages)
danah boyd, a researcher at Microsoft and at Data & Society, highlights just how few guardrails there are for developers, from the consequences of algorithmic bias to the implications of crazy metaphors. She offers a call to action, solutions, and ample evidence for considering the implications of code. We highly recommend chasing down some of the links provided.
Optional:
-
Read "How to read a book" by Paul N. Edwards, 2000 (8 pages)
Here, an adroit scholar walks through some tips and tricks for reading more effectively. He hits the major points and includes some bonus tips, like where to best organize your reading notes. This is an invaluable resource as our course’s weekly reading load begins to increase. Skim now, but revisit throughout the course.
-
Read introduction through page 16 from "Race after technology: Abolitionist tools for the new jim code" by Ruha Benjamin, 2019
Benjamin, in a recent book, offers a “race conscious orientation to emerging technology not only as a mode of critique but as a prerequisite for designing technology differently.” Affiliated with Princeton’s Center for Information Technology Policy, she brings a fresh perspective to many of the foundations of computing.
-
Read "Discussion Leading" by John Rickford et al., 2007
This resource from the Stanford Teaching Commons offers an in-depth analysis of how to have better discussions. Their recommendations, from setting an agenda, asking questions, and increasing discussant engagement are all a part of how to create a better climate for discourse. For leading discussions in this class and beyond, it’s worth a read.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- What do you want to get out of our class discussions?
- Do you feel able to change outcomes of how tech affects society?
- Preview the course project, part zero
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Tackling Climate Change with Machine Learning" by David Rolnick et al., 2019 (97 pages)
Also check out their website.
-
Read "Green AI" by Roy Schwartz et al., 2019
-
Read "Open letter to Jeff Bezos and the Amazon Board of Directors" by Amazon Employees for Climate Justice, 2019
Deconstructing a Data System
Wed, Jan 12 Data
Required:
-
Read "At Amazon's New Checkout-Free Store, Shopping Feels Like Shoplifting" by Jake Bullinger, 2018 (2 pages)
Jake Bullinger describes the experience had by some of the first shoppers of the Checkout-Free Amazon Go store and considers its economic implications. As you’re reading the article, look for possible tensions, critiques, or questions which it raises. Also think about the ways in which data is used by this store.
-
Read "In Amazon Go, no one thinks I'm stealing" by Ashlee Clark Thompson (2 pages)
Ashlee Clark Thompson reflects on her experience of shopping in the Amazon Go store: “Amazon Go isn’t going to fix implicit bias or remove the years of conditioning under which I’ve operated. But in the Amazon Go store, everyone is just a shopper, an opportunity for the retail giant to test technology, learn about our habits and make some money.”
-
Read chapter four from "Artificial Unintelligence: How Computers Misunderstand the World" by Meredith Broussard, 2018
While focusing specifically on data journalism, Broussard ably explains the uses to which data can be put in explaining the world. In particular, read it to understand the different approaches people take to “challenge false claims about technology.”
Optional:
-
Read "Inside Amazon Go, a Store of the Future" by Nick Wingfield, 2018 (1 page)
A high-level description of the store; it touches on themes like convenience, how this tech could affect jobs, and the vagueness of plans surrounding the system at this time. Look at the photos if you don’t visit the Amazon Go store.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Today’s a light reading day, but day four isn’t, so we recommend you get started on that.
- Summarize the Amazon Go readings.
- How do the readings differ in their views of the Amazon Go store? Did the Amazonians consider each of these perspectives? Should they have? How might you classify them?
- Have you visited an Amazon Go store? If so, how did you feel? If not, how do you think you would feel? (In non-stay-at-home times, I’d encourage you to visit one, time-permitting.)
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "The Loneliest Grocery" by Joshua McNichols
-
Read ""Good" isn't good enough" by Ben Green, 2019 (4 pages)
This paper, by a postdoc at the AI Now Institute and formerly of MIT, summarizes many of the themes we touch on throughout the quarter. It synthesizes many of the arguments we cover and applies them as a call for action to data scientists in particular. The author’s arguments are equally relevant to computer scientists.
Conceptions of Data
Fri, Jan 14 Data
Required:
-
Read "Chapter 1: Conceptualising Data" by Rob Kitchin, 2014 (25 pages)
The introduction to the book describes ways of thinking about what data is (ontologies) and goes on to discuss ethical and political considerations of data. It proposes the framework of “data assemblages” and shows how our conceptions of data in turn shape how data are defined and produced.
-
Read "On Being a Data Skeptic" by Cathy O'Neil, 2014 (26 pages)
This rapid-fire, well-articulated article is about the advantages and perils of data science, with ample advice and examples to advocate for why those who use data ought to be this special kind of “skeptical”.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Use the following questions as a resource to guide your reading. Then, respond to at least three of them.
- What is the difference between data, information, and knowledge? How are they related?
- How can wider or economic concerns “frame” data? That is, in what sense do data act and how? (Answers might include: as an economic resource, as a form of power or knowledge, etc.) Explain why.
- How have politics or economics influenced how some data have been defined or created?
- What are reasons or incentives for controlling:
- The creation of data
- The access to data
- The standards of data, such as metrics or units
- The means of data collection, such as sensors or know-how
- From “On being a data skeptic” explain “measuring the distortion.”
- What is the relationship between models and proxies? Why are proxies used? Give an example.
- Why might O’Neil have singled out “nerds” and “business people” separately? What do the differences in her comments indicate about how those groups view problems differently? Do you agree?
- Submit a clarifying question which you’d like to discuss in class.
- Finish the course project, part zero by tonight.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "The era of blind faith in big data must end" by Cathy O'Neil
-
Read "How ImageNet Roulette, a Viral Art Project That Exposed Facial Recognition's Biases, Is Changing Minds About AI" by Naomi Rea, 2019
-
Read "" by Michael Thaddeus, 2022
Find the creep of proxies in university rankings in Thaddeus’s own words: “Almost any numerical standard, no matter how closely related to academic merit, becomes a malignant force as soon as universities know that it is the standard. A proxy for merit, rather than merit itself, becomes the goal.”
"Data is the new oil": data politics
Wed, Jan 19 Data
Required:
-
Read (skim) from "The world's most valuable resource is no longer oil, but data", 2017 (1 page)
A short article that introduces the metaphor that “data is the new oil” which reflects the widely held view that data is now “the world’s most valuable resource”.
-
Read "Do artifacts have politics?" by Langdon Winner, 1980 (15 pages)
In this widely cited essay, Langdon Winner makes the case that technologies embody social relations. He argues that we should develop a language for considering technology that focuses not only on it as a tool, or on its use, but also on the meaning of its design and the social arrangements it facilitates. Winner asks: “what, after all, does modern technology make possible or necessary in political life?” Consider this while you read the piece.
-
Read "Anatomy of an AI System" by Kate Crawford et al., 2018 (14 pages)
Kate Crawford and Vladan Joler consider the production of the Amazon Echo Dot with astounding breadth, mapping the human labor, data, and material resources required to build it. Kate Crawford is a co-founder of the AI Now Institute at NYU, which is breaking ground on many questions relevant to the social implications of AI.
Optional:
-
Read "How ICE Picks Its Targets in the Surveillance Age" by McKenzie Funk, 2019
Consider reading this harrowing, and physically proximal, telling of the real-life implications of some of these data systems.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Pick one aspect of “Anatomy of an AI System” and discuss it with someone outside of class. In a couple of sentences, what did you talk about?
- Did any aspect of the Amazon Echo AI system surprise or interest you? Which aspect?
- What conclusions can we draw from the tomato picking example in “Do Artifacts Have Politics?”
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Google Will Not Renew Pentagon Contract That Upset Employees" by Daisuke Wakabayashi et al., 2018
-
Read "The Societal Implications of Nanotechnology" by Langdon Winner, 2003 (5 pages)
Winner’s testimony before Congress
-
Read "An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System" by Chelsea Barabas et al., 2017 (8 pages)
Operationalization and Classification
Fri, Jan 21 Data
Required:
-
Read introduction (pg. 1 - 16; 31 - 32; 17 pages total) from "Sorting things out: classification and its consequences" by Geoffrey C. Bowker et al., 1999 (377 pages)
Sorting Things Out is a classic text on classification and standardization. The introduction discusses the ubiquity of classification and the processes that generate, standardize, and enforce classifications. It also looks at how classification has caused harm and how the processes which create standards can at times yield an inferior solution. The authors take an expansive view of classification, so be prepared to think about the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) or VHS vs. Betamax.
-
Read "Do algorithms reveal sexual orientation or just expose our stereotypes?" by Blaise Aguera y Arcas et al., 2018 (5 pages)
This essay sets out to debunk a scientific study which claimed to have built a “sexual orientation detector” using machine learning. “Do algorithms reveal sexual orientation or just expose our stereotypes?” presents a thorough analysis of the offending paper and shows that one way to debunk “junk science” is to validate the study’s results against some other baseline. In this case, the authors use Amazon’s Mechanical Turk. As you’re reading this, ask yourself: what can one learn from a face?
Optional:
-
Read chapter one (pg. 46 - 50 from "Infrastructure" on) from "Sorting things out: classification and its consequences" by Geoffrey C. Bowker et al., 1999 (377 pages)
-
Read "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images." by Yilun Wang et al., 2017
This is the article critiqued by “Do algorithms…”
Before Class:
-
Daily assignment, due at 8pm the night before class:
- “Do algorithms…” claims to focus on the underlying “science.” Why do you think the authors did so? Why was this distinction important?
- What strategies did “Do algorithms…” use to make its argument?
- The authors of “Do algorithms…” conclude that the paper they examined was misguided. Drawing on the discussion of classification in “Sorting Things Out,” think of another misguided method (any method, scientific or otherwise, and not necessarily one related to the “Do Algorithms” paper) and describe it.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Drawing a Line" by Tableau Employee Ethics Alliance, 2019
-
Read "Engaging the ethics of data science in practice" by Solon Barocas et al., 2017 (3 pages)
-
Read "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI" by Philip E. Agre, 1997 (28 pages)
Agre, an AI researcher in the 1990s, convincingly walks the line between a critical perspective and that of a practitioner, evoking why practitioners may bristle at critique.
Moral Machines
Wed, Jan 26 Critical Perspectives
Required:
-
Read "Whose Life Should Your Car Save?" by Azim Shariff et al., 2016 (1 page)
This is a high-level introduction to the Moral Machine project. It also discusses how politics and businesses view these ethical dilemmas, and how a tragedy-of-the-commons scenario could undermine our common expectations for machine behavior.
-
Read "The dark side of the ‘Moral Machine’ and the fallacy of computational ethical decision-making for autonomous vehicles" by Hubert Etienne, 2021 (22 pages)
Consider this take on autonomous vehicles and the moral machine experiment in particular. Appreciate the way that Etienne unpacks the assumptions implicit in various aspects of AVs and the statements which people make about them.
Optional:
-
Read "Why the moral machine is a monster" by Abby Everett Jaques, 2019 (10 pages)
This paper presents a fierce-yet-thoughtful critique of the Moral Machine experiment. It also highlights the importance of analyzing the structural implications of the problem: “In a slogan, the problem is that an algorithm isn’t a person, it’s a policy. And you don’t get policy right by just assuming that an answer that might be fine in an individual case will generalize. You have to look at the overall structure you create when you aggregate the individual transactions….The right question is what kind of world will I be creating if this is the rule. What will the patterns of advantage and disadvantage be if this preference is encoded in our autonomous vehicles. (pg 5-6)”
-
Read "The Trolley Problem" by Judith Jarvis Thomson, 1985 (21 pages)
Judith Jarvis Thomson marches you through 21 pages of analyzing various hypothetical scenarios in order to expose the difficulty of using abstractions, and to show how a normative/ethical tenet may seem straightforward on its face but be undermined once one looks at the details. The piece makes it difficult to believe that there is an exhaustive set of rules to encode which actions are permissible, let alone moral.
-
Read "Ethical Machines (published)" by Irving John Good, 1982 (5 pages)
I.J. Good discusses problems with certain considerations one should make when thinking about whether and how a machine could exhibit “ethical” behavior.
-
Read "Ethical Machines (unpublished)" by Irving John Good, 1980 (16 pages)
The longer, unpublished version parses the implications of the arguments in greater depth. Notable in particular are the final consideration of a “synergistic relation with the boss” and the Shakespeare reference.
-
Read chapter 7 from "Braintrust: What neuroscience tells us about morality" by Patricia S. Churchland, 2018
Here Churchland discusses human ethical reasoning as a form of the “brain’s continuous decision-making.” Motivated by neuroscience, she makes the case that “the moral understanding that underlies specific rules is more like a skill than like a concrete proposition.” This has interesting implications both for how we approach learning about ethics and for how we go about creating ethical systems.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Judge a few scenarios on the MIT Moral Machines website.
- Do you agree with the arguments of one of the readings more than those of the other? Why or why not? Cite or paraphrase those arguments to support your answer.
- What are the trade-offs of looking at individual ethics cases (such as in the “Trolley Problem” or “Moral Machines”) as compared to the bigger pictures (the kind of assumptions Etienne mentions) or a “structural analysis” (as described by Jaques)?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "I Quit My Job to Protest My Company’s Work on Building Killer Robots" by Liz O'Sullivan, 2019
-
Read "Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars" by Simon Romero, 2018
-
Read "Tech Billionaires Think SimCity Is Real Life" by Nicole M. Aschoff, 2019 (4 pages)
About Alphabet’s Sidewalk Labs and their attempt to create a “smart city” in Toronto. This is quite related to conversations about autonomous vehicles, creating moral machines, etc. Also check out one community response to the project.
Data Feminism
Fri, Jan 28 Critical Perspectives
Required:
-
Read "Introduction: Why Data Science Needs Feminism" by Catherine D'Ignazio et al., 2020
Through a historical examination of women in technology, D’Ignazio and Klein, both leading scholars in the field, introduce us to feminism and its role in shaping technologies. This is just a taste and the whole book is worth a read.
-
Read "Gender Equality Paradox Monkey Business: Or, How to Tell Spurious Causal Stories about Nation-Level Achievement by Women in STEM", 2020 (5 pages)
A widely known study argued that countries with more gender equity in society have fewer women studying STEM, but this article accompanies a peer-reviewed publication casting doubt on the study’s analysis – a scholarly back-and-forth also playing out in the blogosphere.
Optional:
-
Read "Patriarchy, technology, and conceptions of skill" by Judy Wajcman, 1991 (16 pages)
When considering the future of work, one question that’s often raised is how technology negatively impacts the amount of “skill” required to complete a task, aka “deskilling”. In “Patriarchy, Technology, and Conceptions of Skill”, Judy Wajcman questions the underlying assumption that skill is entirely technically derived. Instead, she considers how men’s historical control over technology in the workplace has extensively influenced the ideological and material conceptions of skill, thus concluding in part that “definitions of skill, then, can have more to do with ideological and social constructions than with technical competencies which are possessed by men and not by women”.
-
Read "Technically Female: Women, Machines, and Hyperemployment" by Helen Hester, 2016 (10 pages)
This essay surveys a history of “electronic secretaries” to frame relevant questions about today’s tech, such as: Why are AI assistants so often feminized? What does it mean for technology to “do gender,” and in service of which “imagined technology user”? Turning the question around, who “does technology,” and how does labor get redistributed with the introduction of new software and AI assistants? Ultimately, Hester asks us to confront questions concerning lived experiences of gender and how it is programmed, productive vs. reproductive labor, and the (dis)advantages of automation.
-
Read "Testosterone rex: unmaking the myths of our gendered minds" by Cordelia Fine, 2017
If you were looking to throw the book at someone who continues to insist that sex differences are sufficient to explain gender differences, this would be that book. Take her word for it: “Every influence is modest, made up of countless small instances of its kind. That’s why everything—a doll packaged in pink, a sexist joke, a male-only expert panel—can seem trivial, of intangible effect. But that’s exactly why calling out even seemingly minor points of sexism matters. It all adds up, and if no one sweats the small stuff, the big stuff will never change.” And it’s funny, too: “If we stop believing that boys and men are emotional cripples and fly-by-night Casanovas who just want sex, and start believing that they’re full, complete human beings who have emotional and relational needs, imagine what might happen.”
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Why is feminism relevant to data science?
- What considerations does data feminism require us to make?
- “Gender Equality Paradox” is an example of data feminism at work. How so?
- Finish the course project, part one by tonight.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Reflecting on one very, very strange year at Uber" by Susan Fowler, 2017 (4 pages)
This blog post, by the author of “What Have We Done,” contributed to the resignation of Uber’s CEO, Travis Kalanick.
-
Read "Google Walkout: Employees Stage Protest Over Handling of Sexual Harassment" by Daisuke Wakabayashi et al., 2018
Latent Identity and Privacy
Wed, Feb 02 Critical Perspectives
Required:
-
Read "It's Not Privacy, and It's Not Fair" by Cynthia Dwork et al., 2013 (6 pages)
This law review paper is the missing link between the concepts of control and privacy as represented by the (optional) Deleuze piece and the Barocas piece, respectively.
-
Read "Think You're Discreet Online? Think Again" by Zeynep Tufekci, 2019 (2 pages)
How ought we make sense of questions of privacy, classification, tracking, and surveillance in the era of big data and computational inference? Zeynep Tufekci asks us to consider these questions by looking at examples of the collective implications of a “privacy-compromised world”.
-
Read "Big data's end run around procedural privacy protections" by Solon Barocas et al., 2014 (2 pages)
Solon Barocas and Helen Nissenbaum, both well-known AI ethics scholars, consider “why the increasingly common practice of vacuuming up innocuous bits of data may not be quite so innocent: who knows what inferences might be drawn on the basis of which bits?”
Optional:
-
Watch "Deleuze the Societies of Control"
This video highlights some significant passages in “Postscript” and explains what’s going on by connecting it back to contemporary questions of control. Only the first 10 minutes actually cover the essay; the next 12 or so offer commentary, posing relevant questions and extrapolating the ideas of “Postscript” into the future.
-
Read "Postscript on the Societies of Control" by Gilles Deleuze, 1992 (4 pages)
“[J]ust as the corporation replaces the factory, perpetual training tends to replace the school, and continuous control to replace the examination. Which is the surest way of delivering the school over to the corporation.” Deleuze considers the technologies of power, and what it means to be in a “control state”. One wonders what he would have to say about this virtual world.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Identify a new system or one we’ve discussed in class that makes decisions which affect people’s lives in some meaningful way. Describe it and then answer the following questions:
- Does this system rely on data collection to make these decisions?
- Where does this information come from?
- What’s the consent model?
- What questions related to individual privacy does it raise?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy" by Jemio et al.
-
Read "Pregnancy Tracking with Garmin", 2021
-
Read "Why Hong Kongers Are Toppling Lampposts" by Sidney Fussell, 2019
-
Read "neveragain.tech" by Leigh Honeywell, 2016
-
Read "Cegłowski Senate Testimony" by Maciej Cegłowski, 2019 (10 pages)
Reimagining: Computers for Thinking
Fri, Feb 04 Critical Perspectives
Required:
-
Read "The Internet Is Acid, and America Is Having a Bad Trip" by Douglas Rushkoff, 2018 (1 page)
Douglas Rushkoff, a noted media theorist, draws a connection between software and hallucination. He is keenly attuned to the wild abstractions and metaphors used in computation: “Computers and networks were part of a much larger cultural phenomenon: a realization that reality is a collaboration…It seemed as if the world was about to become a whole lot more like a lucid dream, where the future was less a place we arrived at than a thing we created together.” Where are we today? Where do we go from here?
-
Read page 151 until chapter end from "Rise of the machines: A cybernetic history" by Thomas Rid, 2016 (8 pages)
Rid, a scholar of conflict and information technology, aptly summarizes the act of reimagining as it took place around the idea of the “cyborg,” particularly as evoked by Donna Haraway (see optional readings). This kind of reimagining is important to consider in the historical development of technology. Note how this work references I.J. Good and other thinkers in the history of AI and ethical machines.
Optional:
-
Read sections 1 and 5 (10 pages total) from "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century" by Donna Haraway, 2006 (42 pages)
Readers not familiar with critical theory will likely struggle with this reading, particularly its fluid language and arcane references. Haraway uses the extended metaphor of the cyborg as a vehicle to explore the relationships between gender and identity, nature and technology—pushing back on traditional conceptions of radical feminism and affinity vs. identity politics, and fiercely critiquing the language of capitalism. Although much tech has changed since 1991, when it was written, the essay remains as influential as ever.
-
Read "Epistemological Pluralism: Styles and Voices within the Computer Culture" by Sherry Turkle et al., 1990 (31 pages)
In this classic paper, witness the meeting of minds of a great scholar of science studies and one of computer science as they discuss how computers may allow us to expand our own minds. Find yourself in the diverse stories of computers as tools for thinking.
-
Read "Dynamicland and the Whimsical Digital Object" by Olivia Kan-Sperling, 2019 (3 pages)
A relatively short piece commenting on a programming lab, Dynamicland, which, while situated near Silicon Valley, eschews the ethos of human transcendence. The article is sweet and attentive to the project without withholding criticism, concluding that the conceptual indulgences it is most wary of are also what make the project amazing and notable.
-
Read "What the dormouse said-- : how the sixties counterculture shaped the personal computer industry" by John Markoff et al., 2005 (364 pages)
This history traces computer science back to the counter-cultural roots of the 1960s Bay Area. Read it to appreciate the environment in which the imaginative and libertarian ideas of computing took off. Consider how a person’s background influences what it means for them to reimagine.
Before Class:
-
Daily assignment, due at 8pm the night before class:
Choose a vision of our technological present or future. What do you see as the purpose of re-imagining it? Example visions include:
- The way we use computers (such as in “Dynamicland” and “Epistemological”),
- Our belief in the abstraction of software (such as in “What the Dormouse Said” and “Epistemological”),
- The exclusivity of classifications and labels (such as in “The Cyborg Manifesto”),
- The role of technology in nature (such as in “The Cyborg Manifesto”),
- or any other media which we’ve covered.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Mother of Invention" by Nnedi Okorafor, 2018 (11 pages)
This story gives voice to themes of motherhood and giving birth in a way that is not often reflected in futurist narratives. It also explores the symbiosis between Anwuli and Obi 3 (the house), and how tech can be an extension of self. It shows a vision in which the computer has the agency to exhibit an ethic of “care”.
Platform or Publisher?
Wed, Feb 09 Misinformation and Platforms
Required:
-
Read chapter one (pg. 1 - 14) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" by Tarleton Gillespie, 2018 (288 pages)
Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. Beginning with an example of content moderation on Facebook, he makes the case that content moderation is an essential element of these social media companies and that the act of providing content to users comes with many value-laden decisions, taking up myths of openness, free speech, neutrality, and more.
-
Read "Opinion | Facebook Is Bad. Fixing It Rashly Could Make It Much Worse." by Farhad Manjoo, 2021
Here tech journalist Manjoo attempts to boil-down the issue of regulating internet speech in the U.S., commenting on public health and the checks and balances of governmental institutions.
-
Read introduction (9 pages) from "Kill All Normies" by Angela Nagle, 2017 (11 pages)
This introduction does a better job of describing some of the thorny “grey” areas and mechanisms of radicalization that we will otherwise discuss on the platform days, and it is an illustrative counterpoint to the clearer-cut “malicious” content mentioned in the other pieces.
Optional:
-
Read "Split Screen: How Different Are Americans’ Facebook Feeds? – The Markup" by Sam Morris et al.
Use this page, designed by the data journalism publication, the Markup, to get a sense of different filter bubbles on Facebook. Try out a couple different options to see what different groups of people are seeing right now.
-
Read "Why Facebook Can't Fix Itself" by Andrew Marantz, 2020 (5 pages)
Here we get a close look into the implementation of content moderation strategies at Facebook. Notice how Gillespie’s arguments apply.
-
Read "How We Analyzed the Cost of Trump’s and Biden’s Campaign Ads on Facebook" by Jeremy B Merrill, 2020
This article details how one media outlet, The Markup, in conjunction with data from an NYU research project, attempts to measure the influence of social media companies and their moderation.
-
Read "Facebook Seeks Shutdown of NYU Research Project Into Political Ad Targeting" by Jeff Horwitz, 2020
Here find more details about Facebook’s efforts to push back against the very research described above.
-
Read "Inside Nextdoor's "Karen problem"" by Makena Kelly, 2020
Notice how content moderation can have racially disparate impacts.
-
Watch "The Moderators" by Adrian Chen, et al.
A 20-minute documentary on content moderators; beware of graphic content.
-
Read "Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy" by Nancy Fraser, 1990 (56 pages)
Before Class:
-
Daily assignment, due at 8pm the night before class:
In 150 - 300 words, address the following: What are platforms? Why does this term matter so much? Who are the stakeholders?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads" by The New York Times, 2019
-
Read "“So You Won't Take Down Lies?”: AOC Blasts Mark Zuckerberg in Testy House Hearing" by Alison Durkee
Also see how she solicited the public for questions to ask the CEO.
-
Read "A Reckoning at Facebook" by Nicholas Thompson, 2018
-
Read ""I Have Blood On My Hands": A Whistleblower Says Facebook Ignored Global Political Manipulation" by Craig Silverman et al., 2020 (6 pages)
This article quotes the internal Facebook memo mentioned in the Marantz piece.
Content Moderation Algorithms and Free Speech
Fri, Feb 11 Misinformation and Platforms
Required:
-
Read "It's the (Democracy-Poisoning) Golden Age of Free Speech" by Zeynep Tufekci, 2018 (4 pages)
Zeynep Tufekci considers how the power to censor functions on our oversaturated social networks, and the role that misinformation and the attention economy play in this. The article provides striking clarity on issues we collectively face on these platforms.
-
Read chapter 17 from "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World" by Cade Metz, 2021
In this chapter of his book on the rise of neural networks, Metz, a veteran journalist of Silicon Valley, concisely describes the claims which some make about those tools. Will AI “solve” content moderation for us? Read on.
-
Read chapter four (pg. 74 - 110; 36 pages total) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" by Tarleton Gillespie, 2018 (288 pages)
Read the whole chapter, but, if you’re short on time, focus on the “automatic detection” section (pg. 97 to 110). Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. In chapter 4, “Three Imperfect Solutions to the Problem of Scale”, Gillespie considers several models for content moderation, one of which is algorithmic “automatic detection” techniques.
Optional:
-
Read "The Problem of Free Speech in an Age of Disinformation" by Emily Bazelon, 2020 (14 pages)
Read this article for an up-to-date account of free speech and its history in the United States. Notice the similarities between analog and social media. It questions how different governmental approaches to speech may be making us more or less free.
-
Read "How Facebook Hides How Terrible It Is With Hate Speech" by Noah Giansiracusa
Cut through the claims of AI content moderation and listen to Facebook’s own assessment that they ‘may action as little as 3-5% of hate … on Facebook.’
-
Read "The Risk of Racial Bias in Hate Speech Detection" by Maarten Sap et al., 2019 (9 pages)
Before Class:
-
Daily assignment, due at 8pm the night before class:
From the 2016 Russian misinformation campaign, to the 2021 booting of the U.S. president, to the recent (late 2021) whistle-blower revelations regarding mental health and hate speech, a maelstrom has surrounded Facebook and other popular social media platforms for the past few years (e.g. see the optional reading, “How Facebook Hides How Terrible It Is With Hate Speech”).
What do you think ought to be done about the content and its moderation on Facebook? Try to be specific in exactly what problem you are addressing and the trade-offs involved in your proposal.
Your answer might include:
- De-platforming
- More automated moderation
- Hire more moderators
- Regulation…
- e.g. changing liability of publisher
- or otherwise
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Jigsaw"
Examine this “moonshot”-type project and how it attempts to reimagine people’s internet experiences using machine learning systems (such as with Perspective). At the same time, consider it in light of Gillespie’s comments on Jigsaw on page 109 of “Custodians.”
-
Read "Fighting Neo-Nazis and the Future of Free Expression", 2017
History Forward and Backward
Wed, Feb 16 Course Project Related Discussions
Required:
-
Read "The Rape Kit's Secret History" by Kennedy, Pagan, 2020
[Feel free to skip if triggering or otherwise too upsetting to you.] Here is a nearly-lost history of the effort behind getting a technology widely adopted. It is not a computing example, but it is an excellent example of writing about the context behind something now embedded in our society: the challenges are often not technical, and the leaders often go unrecognized. The whole thing is fascinating, but it is long, so if you need to skim, try to focus on the politics needed for this technology to succeed.
-
Read page 5 onwards from "The Triple Revolution (original)" , 1964 (16 pages)
This famous memo to President Lyndon B. Johnson was drafted by the Ad Hoc Committee of the Triple Revolution, comprised of notable social activists, scientists, and technologists, among others. It warns that revolutions in social justice, automation, and weapons development are underway, and that if urgent social and economic changes are not made “the nation will be thrown into unprecedented economic and social disorder.” Consider what happens when our Utopian projects are societal.
Optional:
-
Read "Is this time different? The opportunities and challenges of artificial intelligence" by Jason Furman, 2016 (17 pages)
Before Class:
-
Daily assignment, due at 8pm the night before class:
Keep the historical and argument sections of your course project in mind as you do the readings and try to answer some of these questions or other questions of your own.
- Both of these readings describe trying to change the world where the challenge was not primarily about creating a new technology.
- Who were the protagonists trying to convince?
- What actions did they take to succeed?
- What was the purpose of each article? What were some techniques used in the writing to achieve this purpose?
- How is writing about the future different than writing about the past?
- Beyond writing about the future vs. the past, how else are these articles different?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "San Francisco police linked a woman to a crime using DNA from her rape exam, D.A. Boudin says" by Cassidy, Megan, 2022
[Feel free to skip if triggering or otherwise too upsetting to you.] A recent coda to the rape kit’s history: San Francisco police used DNA collected from a woman’s rape exam to link her to an unrelated crime, showing how a technology built for survivors can be turned against them.
-
Read "Somali Workers in Minnesota Force Amazon to Negotiate" by Karen Weise, 2018
-
Read "Tech Won't Build It" , 2018 (28 pages)
Defacing Recognition
Fri, Feb 18 Facial Recognition
Required:
-
Read chapters 15 and 16 from "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World" by Cade Metz, 2021
In these chapters of his book on the rise of neural networks, Metz, a veteran journalist of Silicon Valley, provides an up-to-date account of the roles these methods, particularly computer vision, have played in both bigotry and weaponization. Can AI solve those problems for us?
-
Watch "STEALING UR FEELINGS" by Noah Levenson, 2019
Also consider reading through the implementation!
Optional:
-
Read "Physiognomy's New Clothes" by Blaise Aguera y Arcas et al., 2017 (16 pages)
Can an algorithm detect criminality from your face alone? In this paper, the authors critique a 2016 paper that claims to be able to do so. By focusing on the historical context and current science, this article refutes the paper’s claims while providing a salient illustration of the danger of using neural networks inappropriately.
-
Read introduction , background, and discussion (pg. 1 - 2; pg. 2 - 5; pg.10 - 14; 7 pages total) from "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition" by Os Keyes, 2018 (21 pages)
This paper surveys the HCI literature to understand how researchers conceive of “gender” and the implications for how Automatic Gender Recognition (AGR) systems are developed. As you read, think about who gets to decide what “gender” is when developing a product, and how different operationalizations of the concept shape how AGR systems are deployed and how they impact trans people.
-
Read "Facial feature discovery for ethnicity recognition" by Cunrui Wang et al., 2019 (17 pages)
This paper, pilloried by Agüera et al., is another example of the ways facial recognition may err.
-
Read "Dark matters: On the surveillance of blackness" by Simone Browne, 2015
Chapter three, page 108 onward.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Within the context of facial recognition systems there are many incentives for increasing systems’ efficacy. Choose one incentive (e.g. to make money or to win an election). Now, how does that incentive influence (pick one of the below):
- The system’s ability to control, or direct the behavior of its users?
- Ideas about what it means to be human?
- Reputability of the science and applications of the facial recognition systems?
- What are possible blind-spots in our own conceptions of human traits?
- Finish the course project, part two by tonight.
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "New Coalition Calls to End ‘Racist’ A.I. Research Claiming to Match Faces to Criminal Behavior" by Dave Gershgorn, 2020
Notice the specific attention to the racist implications of facial recognition technology.
-
Read "Amazon Workers Demand Jeff Bezos Cancel Face Recognition Contracts With Law Enforcement" by Kate Conger, 2018
-
Read "CV Dazzle" by Adam Harvey, 2017
An art project which demonstrates the adversarial susceptibility of facial recognition; the project’s site explains how it works.
-
Read "The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition" by Os Keyes, 2018 (21 pages)
Section 6, Design Recommendations (14 - 16)
The politics of faces
Wed, Feb 23 Facial Recognition
Required:
-
Read "Excavating AI: The Politics of Training Sets for Machine Learning" by Kate Crawford et al., 2019 (14 pages)
Again we see Crawford examining the history and politics of a machine learning system, here: computer vision training data sets. She and Paglen examine datasets like ImageNet, UTKFace, etc. and apply many of the critiques we’ve seen up to this point.
-
Read "System Error: Where Big Tech Went Wrong and How We Can Reboot" by Rob Reich et al., 2021
From a different vantage, but nonetheless pertinent to facial recognition technology in particular, appreciate the appeal that these authors—who teach a similar computer science ethics class at Stanford—make toward democracy: using the tools of civil society to combat the ills of technology.
Optional:
-
Read "On the genealogy of machine learning datasets: A critical history of ImageNet" by Emily Denton et al., 2021 (11 pages)
Consider reading this genealogical history of ImageNet, which, not unlike your projects, examines the decisions and statements that went into the creation of the data set.
-
Read "Facial Recognition Technologies in the Wild: A Call for a Federal Office" by Erik Learned-Miller et al., 2020
Hear from ascendant researchers Buolamwini and Morgenstern on exactly what they would recommend for regulating facial recognition technology.
-
Read "How Wrongful Arrests Based on AI Derailed 3 Men's Lives" by Khari Johnson, 2022
Hear about what can go wrong in deployed facial recognition systems.
-
Read "Regulating Biometrics: Global Approaches and Urgent Questions" by Amba Kak, 2020
Scan this survey of the legislative attempts to answer questions on the use of biometric technologies and particularly facial recognition. How do the approaches differ?
Before Class:
-
Daily assignment, due at 8pm the night before class:
In the past few years there have been a variety of attempts to regulate the use of facial recognition technology (e.g. see some of the relevant responses for today or those detailed in “Regulating Biometrics”) with some regulatory controls prohibiting governmental use of facial recognition, some prohibiting commercial use in public areas (e.g. cafes, outside, but not inside a company’s buildings), some imposing moratoriums so as to give the legislative body more time, some requiring reporting on the use of these technologies with oversight boards, and more.
Oftentimes in these debates the two sides do not necessarily disagree on whether there should be regulation but rather on what kind of regulation should be imposed. For example, the recent Washington State moratorium on facial recognition technology received support from Microsoft (a purveyor of facial recognition products).
Comment on these trends and perhaps address the following questions. Try to draw on sources beyond the required reading for class today.
- Why have regulations been proposed?
- What issues are at stake?
- What kinds of arguments do proponents of the regulation of facial recognition technology use? What kinds of arguments do opponents of the regulation use?
- What regulatory controls might be more favorable to those building facial recognition technology? What controls might be less favorable to them?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Halt the use of facial-recognition technology until it is regulated" by Kate Crawford, 2019 (1 page)
Facial recognition technologies have a uniquely powerful potential. Yet few legislative safeguards are in place. It is even less clear that the technologies can actually function in a safe or desirable manner. Kate Crawford argues “These tools are dangerous when they fail and harmful when they work”—ultimately calling for urgent and comprehensive regulation of facial recognition.
-
Read "Safe or Just Surveilled?: Tawana Petty on the Fight Against Facial Recognition Surveillance"
Activist Tawana Petty discusses her opposition to racist facial recognition technology.
-
Read "On the Perils of Automated Face Recognition" by Dallas Card, 2018 (4 pages)
Recall that this was an example reading for the course project.
-
Read "Facial Recognition Technology in Public Housing Prompts Backlash" by Lola Fadulu, 2019 (2 pages)
-
Read "SB 5376 - 2019-20" , 2019
In the 2020 session, the Washington State legislature considered privacy legislation which could cover government use of facial recognition technology. Could you have drafted a public comment based on what you have learned in this class, engaging with the technology as Hoffman does?
Techno-Utopianism
Fri, Feb 25 The Society of Tech
Required:
-
Read "Reclaiming conversation: the power of talk in a digital age" by Sherry Turkle, 2015 (21 pages)
As technologists we set out to do the good work of automation. We formalize, experiment, and implement, increasing the space of what computers can do. But, as Turkle asks, is this what we want? Do we want to be replacing each other? She says, “It seemed that we all had a stake in outsourcing the thing we do best—understanding each other, taking care of each other.” Consider what happens when our Utopian projects are personal.
-
Read "Communist Commentary on "The Triple Revolution"" by Richard Loring, 1964 (10 pages)
This essay was published contemporaneously with “The Triple Revolution” and is largely favorable toward the reforms demanded. It also touches on the utility of utopianism in futurism, while considering labor issues in a distinctly Marxist, but still American, manner. In particular, the author summarizes, and takes issue with, the Triple Revolution as saying “it is useless to fight the path progress is taking and they should therefore re-direct the aims of their fight to seek a better future in a world in which labor and its role will no longer be a basic factor.” The scan we have is a bit difficult to read, but we have been unable to find another.
Optional:
-
Read "The Automation Charade" by Astra Taylor, 2018 (5 pages)
Astra Taylor urges: “We shouldn’t simply sit back, awestruck, awaiting the arrival of an artificially intelligent workforce. We must also reckon with the ideology of automation, and its attendant myth of human obsolescence.”
-
Read "The Microsoft Provocateur" by Ken Auletta, 1997 (14 pages)
This piece shows us the thinking of tech folks at a pivotal time when the nature of the internet was not yet decided. Skim the sections which are just about Myhrvold’s life—we mean to focus on founders and their utopian visions.
-
Read "Origin Stories of Tech Companies if Their Founders Had Been Women" by Ginny Hogan, 2019 (1 page)
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Oftentimes in computer science, we consider only the upshot, only the Utopian worlds we might create. Today has two readings, both of which consider the implications of technologies, one on an interpersonal level and the other on a societal level.
- For one of the technologies mentioned in the readings or another of your choosing, comment on the future worlds we imagine for it (e.g. a Utopia) and complications to that future which might arise in practice (e.g. a Dystopia).
- What are other utopias you have heard described when talking about emerging technologies? Do you find those utopias compelling? Why or why not?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "Why Joi Ito needs to resign" by Arwa Mboya
-
Read "“What Have We Done?”: Silicon Valley Engineers Fear They've Created a Monster" by Susan Fowler, 2018 (2 pages)
-
Read "Computer power and human reason: From judgment to calculation." by Joseph Weizenbaum, 1976
Joseph Weizenbaum, writing as a professor at MIT in the 1960s, responds to the thoughtlessness present in programmers of the time.
Harder, Faster, Better, Stronger?
Wed, Mar 02 The Society of Tech
Required:
-
Read "Three Expensive Milliseconds" by Paul Krugman, 2014 (1 page)
While reading this article, consider: who is today’s infrastructure for? What are some metrics being optimized for which have led to some (perhaps) unexpected consequences? How are these metrics and systems shaping our world?
-
Read chapter 1 (pg. 13 - 36; 23 pages total) from "Pressed for time: The acceleration of life in digital capitalism" by Judy Wajcman, 2015
Wajcman investigates the demand for speed and efficiency in our current society, particularly as encouraged by technologies (like smartphones) and their creators (like Silicon Valley corporations).
-
Read "Survival of the Richest" by Douglas Rushkoff, 2019 (3 pages)
From Douglas Rushkoff, a noted media theorist, read of “Apocalypto”: “the intolerance for presentism leads us to fantasize a grand finale. ‘Preppers’ stock their underground shelters while the mainstream ponders a zombie apocalypse, all yearning for a simpler life devoid of pings, by any means necessary. Leading scientists – even outspoken atheists – prove they are not immune to the same apocalyptic religiosity in their depictions of ‘the singularity’ and ‘emergence’, through which human evolution will surrender to that of pure information.” This idea is investigated more deeply in his book “Present Shock”.
Optional:
-
Listen to "Speed" by RadioLab, 2013
An enlivening radio show which goes over many of the same concerns about speed as the readings.
-
The title explains it all
Before Class:
-
Daily assignment, due at 8pm the night before class:
Describe acceleration and at least one way it has shaped our world.
Recall that in the “The Triple Revolution,” the authors observe:
There is no question that cybernation does increase the potential for the provision of funds to neglected public sectors. Nor is there any question that cybernation would make possible the abolition of poverty at home and abroad. But the industrial system does not possess any adequate mechanisms to permit these potentials to become realities. The industrial system was designed to produce an ever-increasing quantity of goods as efficiently as possible, and it was assumed that the distribution of the power to purchase these goods would occur almost automatically. The continuance of the income-through-jobs link as the only major mechanism for distributing effective demand—for granting the right to consume—now acts as the main brake on the almost unlimited capacity of a cybernated productive system.
Has the current structure of our regulatory and political framework allowed for cybernetic systems (such as social media platforms and the stock market) to shape our world? Consider:
- In what ways?
- Is this the world that the systems’ creators set out to create? If not, list some unintended consequences.
- What are the mechanisms that shape these systems? Who gets to decide on them?
Feeling motivated? Here are a few relevant responses to today’s themes:
-
Read "The Code: Silicon Valley and the Remaking of America" by Margaret O'Mara, 2019
-
Read "Humane: A New Agenda for Tech (44 min. watch)", 2019
From the Center for Humane Technology. Also examine their website.
-
Read "Are we having an ethical crisis in computing?" by Moshe Y. Vardi, 2018 (1 page)
Project Discussion
Fri, Mar 04 Course Project Related Discussions
Required:
-
Read "Digital Pregnancy Test Deconstruction (Twitter Thread)" by Foone Turing, 2020
This and the next reading offer informal Twitter threads with some surprising twists on how a technology actually works and why it may or may not be useful. Is this misleading people or providing an ethically valuable service or both? If you had chosen this for your project, what would you have investigated?
-
Read "Response to Digital Pregnancy Test Deconstruction (Also on Twitter)" by Naomi Wu, 2020
(See above.)
Before Class:
-
Daily assignment, due at 8pm the night before class:
Come to class prepared to share a description of your project with others.
Responses
Wed, Mar 09 Participating in the Society of Tech
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Look back through the “relevant responses” underneath the required and optional readings for each class. Either provide two of your own which fit the theme or choose two from those listed. Why did you choose these two? How would you categorize them? Describe the response taken. Might you act similarly? Why or why not?
- At this point in the class, do you feel empowered to act on your values? Why or why not?
Departure
Fri, Mar 11 Participating in the Society of Tech
Required:
-
Read chapter 12 from "Artificial Unintelligence: How Computers Misunderstand the World" by Meredith Broussard, 2018
Having read Broussard’s commentaries on the first day and as an introduction to the data unit, we now finish with her conclusion: a renewed plea for computing technology to serve the people who made it—humans.
-
Read foreword and introduction (pg. xi - xxvi; pg. 1 - 5; 23 pages total) from "Hope in the dark: Untold histories, wild possibilities" by Rebecca Solnit, 2016
Solnit, a writer and activist, reflects on our desire for social, cultural, or political change given the appearance that we have not arrived there (considering issues from global warming to human rights abuses). Originally responding to the war in Iraq, she explores how news cycles and our personal narratives frame these issues, and she makes the case for hope nonetheless: “tiny and temporary victories.”
Optional:
-
Read from "SAN DIEGO" onward in "The Code: Silicon Valley and the Remaking of America" by Margaret O'Mara, 2019
From UW historian O’Mara, hear the story of how two UW CSE alumnae have navigated their careers as computer scientists.
Before Class:
-
Daily assignment, due at 8pm the night before class:
- Do you feel able to change outcomes of how tech affects society?
- What’s an idea from this course that every UW CSE student ought to understand?
- Finish the course project, part three by tonight.