Findings from a discussion-based course on computer ethics
This page covers the pedagogical approach of our course:
- in the abstract,
- through its theoretical frameworks,
- in light of related courses,
- and with an example activity.
For more information, consult our:
- poster write-up,
- course project,
- resources for students,
- and Jared’s paper at ACM FAT* on ethics instruction in computer science.
Here we share findings from a discussion-based course on computer ethics, first taught at the University of Washington School of Computer Science in Winter 2020. Recent work in computer science and related fields has demonstrated the limitations of both conversations around ethics and ethics instruction, particularly around concepts like artificial intelligence. At the same time, computing professionals have increasingly begun to navigate ethical issues in their places of employment; consider, for example, recent employee actions at Google. We draw on classic work in engineering education, science and technology studies, and, through consultation with other scholars, current pedagogy in computer ethics.
This senior-level course for undergraduates unifies the aforementioned literatures with an eye toward students recognizing the application of ethics not only individually and professionally but also societally. In an active-learning setting, it offers students examples of practitioners’ individual responses to societal issues, technical analyses of relevant systems, and exposure to current theory and critical perspectives. It does so through a novel set of readings, associated summaries, reading and discussion questions, in- and out-of-classroom activities, daily and course-long learning outcomes, a multi-part course project, and a user-friendly website.
Designed to be modular and shareable, this course provides instructors—particularly those new to the literature—a means through which to overcome some of the ‘engineering mindset’ while staying abreast of developments in the field. Indeed, given the increased attention to the social impact of AI and computing technologies, students have access to an almost real-time stream of commentary and analysis on emerging issues (e.g. see our topics).
Who we are
This course addresses drawbacks of previous versions (taught in Winter 2018 and 2019) and was produced in collaboration between Jared Moore at the University of Washington and Johan Michalove at Australian National University, under the guidance of Dan Grossman. It draws on Jared’s related paper, which explains some of the background literature and motivation, but the course stands as its own work.
Theoretical frameworks
In constructing and teaching this course, we follow a number of theoretical frameworks: situated learning, a perspectival understanding, social constructivism, actor-network theory, and specific definitions of ethics, agency, and representation.
Situated learning tells us that knowledge is “distributed among people and their environments, including objects, artifacts, tools, books, and the communities of which they are a part” (Greeno, Collins & Resnick, 1996; p. 17). Indeed, “the constraints and affordances of social practices and of the material and technological systems of environments” (ibid) are hugely important. Combined with a perspectival understanding, one focused on the perspective of the individual learner, this framework emphasizes students’ “ability to construct perspectival understandings that are situated in activity and that are organized according to principles that are taken as defining the conception” (Greeno & van de Sande, 2007; p. 14). In this interpretation we follow, in particular, Johri and Olds’ analysis of the above, especially regarding the role of “social and material context,” “activities and interactions,” and “participation and identity” (Johri & Olds, 2011). These frameworks go hand-in-hand with a social constructivist approach and a complication of a simple narrative of technological determinism, particularly as Wajcman evokes (Wajcman, 2015; p. 27).
Similarly, we follow actor-network theory (Bruni, 2007) in foregrounding the agency of students and future engineers in their associations with each other, their employers, technological artifacts, and so on. We follow Metcalf (Metcalf, 2019; p. 4) in defining ethics primarily “as social phenomena and not as primarily philosophical abstractions,” which accords with ethics as both personal and societal (Moor, 1985; Herkert, 2005). We call for greater political engagement in ethics to increase the representation of other histories, as Hoffman does with regard to antidiscrimination (Hoffman, 2019).
Related courses
This course draws on and is in conversation with many others, namely ethics courses in computer science departments in the United States. Some of these are detailed in the resources page.
In considering (computer) ethics education, one arrives at a few choices to make. Some of the current divides we navigate include:
- embedded or stand-alone?
- discussion-based or lecture-style?
- micro-ethics or macro-ethics?
- include a focus on philosophical frameworks (from consequentialism, virtue ethics, and deontology to specific frameworks like Value-Sensitive Design)?
- humanities or engineering focused?
As supported by the aforementioned theoretical frameworks, we articulate what we think is a somewhat unique combination of those divides. Our course is a stand-alone, discussion-based seminar which does not dwell on philosophical frameworks but instead focuses on critical scholarship, largely from Science and Technology Studies, in complement with technical, core computer science content. In this sense, it focuses more on macro-ethics and recognizes the engineering bent of its students while avowing the worth of approaches from the humanities.
Example activity
Here we include the activity used on the last day of our data unit. Students, in groups, were assigned one of the following prompts and asked to follow the directions on the board.
“In a relational database, the schema defines the tables, the fields in each table, and the relationships between fields and tables.”
Design such a schema for the prompt your group has been given. For our examples, a single table (or even a list) will suffice. Fields might be integers, floats, strings, or blobs, and contain information such as a unique identifier, the age of a person, various pieces of demographic information, etc. Favor quantity of data points over quality of the actual schema (e.g., whether the schema should be decomposed into multiple tables).
Fitness to adopt a dog: You’ve been hired as data scientists by the United States Department of Agriculture (USDA) to help improve animal welfare. Your task is to create a model that can be used for ranking adoption applicants (people) at animal shelters by “adoption fitness.” Your fitness score might reflect policies detailed in the Animal Welfare Act (AWA). Design a questionnaire for applicants that reflects the relevant schema and data you require for your model. If you finish, discuss the collection method for the data. Discuss trade-offs.
Demolition ranking: As part of a new effort by the Seattle City Council, the city plans to use “impartial AI” to determine which houses will be demolished in order to create affordable housing. You’ve been tasked to design a prototype ranking system which recommends which properties should be demolished first. Your proposed design might comply with standards related to eminent domain. Design the schema of the data required. If you finish, discuss the collection method for the data. Discuss trade-offs.
Retirement estimator: You’ve been hired by a social media company for a data science internship. Congrats! Your task is to determine what data is needed to determine whether a user will retire soon. This will be used for interface changes, advertising, and other unknown future applications. The first step is to design a schema of the data required for your classifier. If you finish, discuss the collection method for the data. Discuss trade-offs.
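To illustrate the kind of artifact groups might produce, here is one possible sketch for the adoption-fitness prompt, written in Python with the standard-library sqlite3 module. The table and field names are our own hypothetical choices, not part of the course materials; a single flat table is used, per the activity’s directions to favor quantity of fields over normalization.

```python
import sqlite3

# In-memory database for illustration; a real system would persist to disk.
conn = sqlite3.connect(":memory:")

# One hypothetical schema for the "fitness to adopt a dog" prompt:
# a single applicants table with a mix of field types.
conn.execute("""
    CREATE TABLE applicants (
        applicant_id INTEGER PRIMARY KEY,  -- unique identifier
        age          INTEGER,              -- applicant's age in years
        zip_code     TEXT,                 -- coarse location (a demographic proxy)
        income       REAL,                 -- annual income in dollars
        housing_type TEXT,                 -- e.g. 'apartment', 'house'
        has_yard     INTEGER,              -- boolean: 0 or 1
        prior_pets   INTEGER,              -- number of pets previously owned
        hours_home   REAL                  -- average hours at home per day
    )
""")

# Insert a sample applicant and read part of the record back.
conn.execute(
    "INSERT INTO applicants VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    (1, 34, "98105", 55000.0, "apartment", 0, 2, 9.5),
)
row = conn.execute("SELECT housing_type, prior_pets FROM applicants").fetchone()
print(row)  # ('apartment', 2)
```

Even this toy schema invites the trade-off discussion the prompts ask for: fields like zip_code and income act as demographic proxies, and each choice of field shapes who the resulting “fitness” model advantages.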
Get in touch
Reach out if you’d like to collaborate or learn more about our day-by-day and unit-by-unit:
- discussion questions,
- and assignments.