UW CSE 582, Spring 2023
W/F 3:00pm-4:20pm, CSE2 G04
Teaching Assistant: Lucille Njoo
As AI technologies have become increasingly prevalent, there is a growing awareness that the decisions we make about our data, methods, and tools are often tied to their impact on people and societies. This course introduces students to real-world applications of AI, the potential ethical implications of their design, and technical solutions to mitigate associated risks.
The class will study topics in the intersection of AI, ethics, and computer science for social good, with applications to Natural Language Processing, Speech, Vision and Robotics. Centered around classical and state-of-the-art research, lectures will cover philosophical foundations of ethical research along with concrete case studies, ethical challenges in development of intelligent systems, and machine learning approaches to address critical issues.
Methodologically, we will discuss how to analyze large-scale data sets generated by people, or data sets aggregating information about people, and how to reason about them ethically and technically through data-, network-, and people-centered models. From an engineering perspective, students will apply and extend existing machine learning libraries (e.g., Scikit-learn, PyTorch) to text and vision problems. There will be an emphasis on practical design and implementation of useful and ethical AI systems, with annotation and coding assignments, a course project, and an ethics paper-reviewing assignment. Discussion topics include:
- Philosophical foundations: what ethics is and its history; medical and psychological experiments; IRB and human subjects; ethical decision making; and moral dilemmas and the ethical frameworks for reasoning about them.
- Fairness and bias: fairness in machine learning, algorithms to identify social biases in data, models, and learned representations, and approaches to debiasing AI systems.
- Civility in communication: techniques to monitor trolling, hate speech, abusive language, cyberbullying, and toxicity.
- Misinformation and propaganda: approaches to identifying propaganda and manipulation in news, detecting fake news, and analyzing political framing.
- Privacy: privacy protection algorithms against demographic inference and personality profiling.
- Green AI: energy and climate considerations in building large-scale AI models.
- Optional additional topics: AI for social good (low-resource AI and its applications to disaster response and epidemic monitoring); fairness in decision support systems; ethical design and more careful experimental methods in AI; intellectual property; digital preservation.
Calendar is tentative and subject to change. More details will be added as the quarter continues.
| Week | Date | Topic | Description | Deadlines |
|------|------|-------|-------------|-----------|
| 1 | 03/29 | Introduction | Motivation, course overview and requirements. Examples of projects in computational ethics [slides] [readings] | |
| 1 | 03/31 | Human subjects | History: medical and psychological experiments, IRB and human subjects. Participants, labelers, and data. [slides] [readings] | |
| 2 | 04/05 | Human subjects | Paper discussion [slides] [readings] | |
| 2 | 04/07 | Philosophical foundations | Ethical frameworks, benefit and harm, power, automation [slides] [readings] | |
| 3 | 04/12 | Philosophical foundations | Paper discussion [slides] [readings] | Project teams due |
| 3 | 04/14 | Social bias in AI models | Psychological foundations of bias; social bias and disparities in NLP models [slides] [readings] | |
| 4 | 04/19 | Social bias in AI models | Paper discussion [slides] [readings] | |
| 4 | 04/21 | Project workshopping | Brainstorming / feedback workshop between pairs of project teams | Project proposal due Friday, April 21st, 11:59pm |
| 5 | 04/26 | Privacy auditing and protection in LLMs | Privacy auditing and protection in large language models [slides] [readings] | |
| 5 | 04/28 | Privacy auditing and protection in LLMs | Paper discussion [slides] [readings] | |
| 6 | 05/03 | Mitigating harms from LLMs | Practical/technical approaches to mitigating harms from large language models, including bias detection and removal, factuality, etc. [slides] [readings] | |
| 6 | 05/05 | Mitigating harms from LLMs | Paper discussion [slides] [readings] | Ethics review due |
| 7 | 05/10 | Hate speech | Identifying and countering hate speech/toxicity/abuse [slides] [readings] | |
| 7 | 05/12 | Hate speech | Paper discussion [slides] [readings] | Project mid-quarter report due Friday, May 12th, 11:59pm |
| 8 | 05/17 | Misinformation | Fact-checking and fake news detection. Computational propaganda and political misinformation. Detection of generated text. [slides] [readings] | |
| 9 | 05/26 | Project meetings | Individual meetings with instructors on projects | |
| 10 | 05/31 | Project presentations | Project presentations | |
| 11 | 06/05 | | | Final report submission due |
Mattermost. Course communication will be via Mattermost, an open-source communications platform similar to IRC for which the Allen School has an internally-hosted instance. You should sign in using your CSE NetID and join the “team” for the class using an invite link which has been sent to you via email. Mattermost will be used for sharing discussion points and questions prior to in-class paper discussions.
Google Drive. Course materials, including lectures, reading lists, etc., are in a Google Drive folder which has been shared with all students. You will also use Google Drive for submitting your ethics reviews and project proposals.
There will be three components to course grades.
- Paper readings and discussion (20%).
- 10%: Submit discussion questions before class and participate regularly during class.
- 10%: Lead at least one discussion.
- Ethics review (30%). Review ethics reviewing guidelines and write an ethics review for a research paper.
- Project (50%). Work on a research project in ethical AI.
- 10%: Proposal.
- 10%: Mid-quarter check-in.
- 20%: Final project report.
- 10%: Final project presentation.
Late policy. Each student has 5 late days that may be used for any deliverable at any point during the quarter, with at most 3 late days used per deadline.
Academic honesty. Homework assignments are to be completed individually. Verbal collaboration on homework assignments is acceptable, as well as re-implementation of relevant algorithms from research papers, but everything you turn in must be your own work, and you must note the names of anyone you collaborated with on each problem and cite resources that you used to learn about the problem. The project proposal is to be completed by a team. Suspected violations of academic integrity rules will be handled in accordance with UW guidelines on academic misconduct.
Accommodations. If you have a disability and have an accommodations letter from the Disability Resources office, I encourage you to discuss your accommodations and needs with me as early in the quarter as possible. I will work with you to ensure that accommodations are provided as appropriate. If you suspect that you may have a disability and would benefit from accommodations but are not yet registered with the office of Disability Resources for Students, I encourage you to apply here.
Take care of yourself! As a student, you may experience a range of challenges that can interfere with learning, such as strained relationships, increased anxiety, substance use, feeling down, difficulty concentrating and/or lack of motivation. All of us benefit from support during times of struggle. There are many helpful resources available on campus and an important part of having a healthy life is learning how to ask for help. Asking for support sooner rather than later is almost always helpful. UW services are available, and treatment does work. You can learn more about confidential mental health services available on campus here. Crisis services are available from the counseling center 24/7 by phone at 1 (866) 743-7732 (more details here).
Please see UW guidelines.