We’ve created this page as a resource for folks to further act on their own values in computer science. The page contains four parts:

This list is of course biased (e.g., toward the UW and toward students) and incomplete, but we hope that it is at least useful. In each category, items generally appear in order of perceived relevance or worth. It was created by Jared Moore.

Please get in touch if you have suggestions or questions!


Mailing lists

  • The Data Science Community Newsletter from NYU is a comprehensive, “witty, [and] informative weekly newsletter launched in 2015,” now published twice a month. highly recommended

  • Upturn, which works “Towards Justice in Technology,” has a weekly mailing list called Equal Future in which they “send out a short summary of what [their] team is reading.” highly recommended

  • Responsible Data and their mailing list — “The RD community is a place for those who are using data in social change and advocacy to develop practical approaches to deal with the ethical, legal, social and privacy-related challenges they face.”

Traditional Media

  • The New York Times has Shira Ovide’s On Tech newsletter, a follow-up to Charlie Warzel’s Privacy newsletter. It’s subscriber-only.

  • The Markup, led by Julia Angwin, formerly of ProPublica, is “a new publication illuminating how powerful institutions are using technology in ways that impact society.” highly recommended

  • Slate’s Future Tense, which “explores how emerging technologies will change the way we live.”

  • Douglas Rushkoff, in his incredible Team Human podcast, covers content quite relevant to the course, such as his discussion with danah boyd.


In presenting opportunities, we encourage a conception of computer science beyond just software engineering at large companies. That is, we consider: what other paths might a student take? Indeed, opportunities might entail jobs, internships, research positions, extra-organizational groups, volunteer positions, etc. These opportunities are always changing, so check mailing lists and job boards for the most up-to-date information and use these as a reference.


If you’re interested in doing research, check out and contact the academic groups listed below. Find more here. Consider reaching out to the faculty or grad students associated with labs that interest you. Start early and be persistent.


  • Ben Green, of “Good isn’t Good Enough,” has collected a long list of data- and technology-for-good jobs.

  • The folks at 80,000 Hours are quite pointed (and perhaps accelerationist) in their positioning—“How can you best use [the 80,000 hours of your career] to help solve the world’s most pressing problems?”—but their job board is worth a look.

  • For those more on the research and academic track, the Open Tech Fund, which “supports internet freedom online,” offers a number of alternative (and highfalutin) forms of support, including their research-oriented fellowships (Research and Multidisciplinary).

  • Along similar lines is the Open Philanthropy AI Fellowship. “With this program, we seek to fully support a small group of the most promising PhD students in AI and ML who are interested in making the long-term, large-scale impacts of AI a central focus of their research.”


There are many organizations and actors concerned with some combination of the ethics, politics, and social good of computer science, data science, AI, etc.:

On (or near) campus

  • The Center for an Informed Public is an interdisciplinary research center, founded by researchers in the Information School, School of Law, and HCDE, that works to combat misinformation. Check them out!

  • The Change Group is a computer science-focused but cross-disciplinary group that has historically held weekly seminars on the use of computing technologies in the developing world and among under-resourced groups. They’ve also got a (mailing list).

  • Relatedly, consider TASCHA, the Technology & Social Change Group at the Information School. They’re not as active.

  • The Design Use Build group (DUB) is “an interdisciplinary group of people to share ideas, collaborate on research, and advance teaching related to the interaction between design, people, and technology.” Their weekly lunch talk is worth checking out.

  • The eScience Institute, which is “advancing data-intensive discovery in all fields,” has a number of cool ways to get involved.

  • We know that acting on our values often means having a diverse group of people in the room. See the Allen School’s efforts and/or the Diversity Allies (mailing list).

  • For the policy-inclined, the Tech Policy Lab, also a cross-disciplinary group (again mostly siloed, but this time in the law school), might be appealing. They hold weekly lunches, as detailed in their (mailing list), which, after a few months, tend to be repetitive and news- (as opposed to theory-) focused.

  • Nonetheless, for those more interested in theory, the SSNet, which, through its mailing list, “brings together faculty and graduate students at the University of Washington who share an interest in Science, Technology and Society Studies” might suffice.

  • Outside of campus, the Seattle Privacy Coalition’s Tech-Activism 3rd Monday (discussed on their mailing list) is “an informal meetup designed to connect software creators and activists who are interested in censorship, surveillance, and open technology.”

There are also a number of undergraduate student groups, which operate similarly.

Off campus

Of course, there are a plethora of related organizations off campus. Here are a few of note.


  • Preeminent in this space, and founded by Kate Crawford, is the AI Now Institute (mailing list), a center based at NYU in New York. They are “a research institute examining the social implications of artificial intelligence” and have both critical perspectives and technical depth.

  • Also in New York is Data and Society (mailing list), founded by danah boyd, which appears strikingly similar in that it “studies the social implications of data-centric technologies & automation,” but they are much more focused on the social and only incidentally on the technical.

  • Upturn, which “advances equity and justice in the design, governance, and use of technology” is new within the past couple of years. They’re a small operation, but offer hard-hitting investigative pieces and policy documents.

  • Older, and often compared to the ACLU of the web, is the Electronic Frontier Foundation (EFF) (mailing list). They claim to be the “leading nonprofit defending digital privacy, free speech, and innovation.” Look to them for the defense of net neutrality and the like.

  • Center for Effective Altruism (mailing list) and Effective Altruism try to apply a science of ethics in “doing good.” Theirs is an interesting perspective to be aware of.

  • Such organizations also exist outside of the US. Chile, for example, has Derechos Digitales, CL (mailing list) and Mexico has R3D, MX.

Universities and centers

  • Harvard’s Berkman Klein Center on Internet and Society (mailing list) has long been a (if not the) central seat for these topics in the US. Their mailing list comes weekly and contains lots of info. Still, it’s mostly promotional, and it’s unclear how accessible any of their postings are to those who haven’t already ‘made it.’

  • Less loud, but seemingly more impactful on tech policy, is Princeton’s Center for Information Technology Policy (mailing list). Their mailing list isn’t really worth it—just talks that won’t be live-streamed.

  • Likewise, Georgetown’s Center on Privacy and Technology (mailing list) does great work on tech policy, but more from the policy angle and with fewer technical chops. Their mailing list is also just promotion.

  • It’s unclear how the Fordham Center on Law and Information Policy (mailing list) ranks in terms of influence, but they have one thing going for them: their bi-monthly mailing list is generally accessible for those interested in tech policy.

  • Back with technical chops, but with more of an eye for investigative work, the University of Toronto’s Citizen Lab (mailing list) is fascinating. Look to them for the fight against big government spyware.

  • On a different note, University of Michigan’s Center for Ethics, Society, and Computing “is dedicated to intervening when digital media and computing technologies reproduce inequality, exclusion, corruption, deception, racism, or sexism.” They’re new so keep an eye out.

  • Australia’s 3AI, which “was created to enable the safe, ethical and effective design, integration, management and regulation of cyber-physical systems,” takes a refreshingly different tack on the topics we’ve discussed. Also, Johan, who co-designed this course, now teaches there.


There are many, many more related organizations. Use the Civic Tech Field Guide as a means to find more.

For example, you might want to get involved in Climate Change AI, which “aims to facilitate work at the nexus of climate change and machine learning” (mailing list).

Conferences and groups

There are also a number of academic groups which work in this space. Here are a few.

  • The AAAI (Association for the Advancement of Artificial Intelligence) holds, jointly with the ACM, the Conference on Artificial Intelligence, Ethics, and Society (AIES).

  • The ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*) is “A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.”

  • The ACM Special Interest Group on Computers and Society (SIGCAS) aims “to raise awareness about the impact that technology has on society, and to support and advance the efforts of those who are involved in this important work.”

  • The IEEE Society on the Social Implications of Technology hosts the International Symposium on Technology and Society (ISTAS) and publishes Technology and Society. They “provide forums for us (including you) to interact, and address the challenges and opportunities the application of technology can have for our world.”


As should be obvious, many other courses cover topics similar to this one. Here, we list a few in particular. Many, in fact, motivated this very course.


Of course, offerings change. When looking for courses to take, we recommend exploring the time schedules (including the grad offerings) of, and talking with advisors in, Computer Science, Information Science, Human-Centered Design and Engineering, and other departments.

  • Hoffmann, in her graduate-level course Data, Politics, and Power: Critical and Ethical Perspectives on Big Data and Algorithms in the Information School, delivers a comprehensive walk through the implications of big-data systems with a heavy focus on foundations from sociology, philosophy, and science and technology studies.

  • Blaise Aguera y Arcas, who leads a large AI team at Google and wrote “Physiognomy’s New Clothes,” taught the Intelligence Machinery course at the University of Washington last year; Johan was the teaching assistant. The syllabus is not public, but look out for more of his publications. The course “examine[s] large-scale trends and patterns” of how “Companies, governments, and institutions are rushing to explore and exploit the possibilities that AI opens up.”

  • Os Keyes, of “Misgendering Machines,” taught Human Centered Data Science as a graduate-level introduction to “fundamental principles of data science and its human implications.” It covers more on research ethics, a subject we omit.

  • CSE 599H, for graduate students, appears to be a reasonable extension of this class. In fall 2019, under Jen Mankoff, it was 3D-printer focused. In spring 2020, Kurtis Heimerl will be teaching it with, as we understand it, a different focus, like Barath Raghavan’s class at USC.


There are a number of seminars and talk series which relate more tangentially to this one.

See the on campus organizations. Also consider the CSE Colloquium series.


  • The Critical Data Studies Reading list is quite a good one.

  • Cornell’s information science course Ethics and Policy in Data Science (syllabus), made by Solon Barocas, offers deep coverage of data science, policy and theory. It’s much harder hitting and is able to dwell on many of the data science specific issues which we omit for the sake of time.

  • Daniel Greene’s Technology, Culture, and Society course at the University of Maryland College Park College of Information offers a quite recent take with readable, but still pointed, articles. “It is designed to help future members of the tech sector, broadly construed, understand the different social conflicts engulfing ‘tech’ and the lived experiences of people on different sides of these conflicts.”

  • Alvaro Bedoya’s The Color of Surveillance: Law and History course at Georgetown University walks us through the sordid technological pasts of surveillance with ample commentary on the modern age.

  • Data and Ethics, an introduction to critical perspectives on technology at UC Berkeley, was made by Anna Hoffmann, whose work we cover in the course.

  • Rahul Tongia’s and Lorrie Cranor’s 2007 Computers and Society course at Carnegie Mellon also provides a series of case studies. While introductory, it both provides a worthwhile historical perspective (from just over a decade ago!) on how ethics have changed and is more pointed in its critical perspectives.

  • In her Medium post, Casey Fiesler, of the University of Colorado Boulder, famously collects a large number of technology-ethics syllabi. We used this as a basis to find some of the courses included here.

  • Moritz Hardt, a renowned machine learning researcher, taught Fairness in Machine Learning at UC Berkeley as a very technical graduate seminar. “The focus is on understanding and mitigating discrimination based on sensitive characteristics, such as gender, race, religion, physical ability, and sexual orientation.”

  • MIT’s (now infamous) Joi Ito and Harvard’s Jonathan Zittrain, in The Ethics and Governance of Artificial Intelligence (syllabus), provide good coverage of classical computing dilemmas, like robots and artificial general intelligence. Also, it has recorded lectures!

  • Angèle Christin, in The Politics of Algorithms in Stanford’s Communication Department, presents a valuable resource for understanding the political and social implications of many computing technologies. In particular, we draw from the section on “Facebook, Filter Bubbles, and the Public Sphere.”

  • For more in-depth scholarship with a media-studies angle on the constructs of race and technology, check out Lori Kido Lopez and Jackie Land’s collaborative syllabus on Critical Race and Digital Studies.

  • UC Berkeley’s Social Implications of Computing offers a nice series of case studies of the implications of computing in a variety of domains.

  • Stanford’s lecture-style Computers, Ethics, and Public Policy offers case studies in four principal areas in a traditional ethics manner. It does not cover much theory.