CSE 390HA - 122 Honors Seminar
This is the website for the Spring 2024 iteration of CSE 390HA.
Note: looking for a different iteration of CSE 390HA? Visit the 390HA course listing.
- Instructor: Matt Wang (email: mxw@cs.washington.edu)
- Meeting Time: Tuesdays, 3:30 - 4:50 PM (starting Tuesday, April 2nd)
- Location: LOW 112
Overview
Welcome to CSE 390HA, the Honors section for CSE 122! Each week, we will discuss various topics related to computer science. Our sessions will mostly focus on the societal and cultural impacts of computer science (and more broadly, technology), with some exploration of technical concepts to support these discussions. This is intended to be an opportunity to think about computer science and other related topics in a broader context.
Notably: this course is not an opportunity to learn more programming or computer science, nor to add more "rigor" to 122. No background or familiarity with computer science is required beyond what is necessary for CSE 122.
All information in this class will be available on this course website. Canvas will be used for peer discussion, and Google Forms will be used for submitting work. Further course policies, including how to get credit, are listed under course policies.
Course Content
Overview & Schedule
Click on the topic entry to go to a more detailed overview!
Date | Topic | Homework (for next week) |
---|---|---|
April 2, 2024 | Introduction | Post introduction; complete required reading & one optional reading; answer reflection. |
April 9, 2024 | Computing Education | Complete required reading & one optional reading; answer reflection. |
April 16, 2024 | Accessibility & Disability | Complete required reading & one optional reading; answer reflection. |
April 23, 2024 | Machine Learning & AI | Complete required reading & one optional reading; answer reflection. |
April 30, 2024 | Privacy & Security | Complete required reading & one optional reading; vote on student choice topics. |
May 7, 2024 | User Interfaces & Human-Computer Interaction (Student's Choice) | Complete required reading & one optional reading; answer reflection. |
May 14, 2024 | AI, Revisited (Student's Choice) | Propose culminating reflection idea; complete required reading & one optional reading; answer reflection. |
May 21, 2024 | Open-Source Software (Student's Choice) | Work on culminating reflection; complete required reading & one optional reading; answer reflection. |
May 28, 2024 | Careers Panel! | Before class: complete your culminating reflection! Before the start of finals week: complete peer review. |
Week 1: Introduction
This class will be a combination of:
- a quick (but important) set of introductions!
- a typical "syllabus day" overview of course policies and expectations
- a meta discussion on what you want to get out of this class, and setting up community norms
- some priming questions for our quarter: what computer science is, why we study it, what computers do, and whether or not it's "good" or "bad"
- finally, setting the stage for next week - who gets to study computer science?
For this session, you don't have to prepare anything - just show up and bring your best self! After the session, I'll also post a more detailed summary of what we talked about.
Week 1 Summary
We generally followed the structure outlined above! First we did introductions: armed with a table tent and some Mott's fruit snacks, we partnered up and then introduced our partners to the class.
Then, we did some typical "syllabus day" activities with a co-design twist. In particular, we covered:
- Core course policies: how to get credit, disability and religious accommodations, and academic honesty.
- What this class is (an exploration of sociotechnical content; learning by discussing and creating knowledge together; a class which can be tailored towards students' interests, but needs buy-in) and is not (more Java, Matt telling you the "right" answer).
- What we all wanted to get out of this class (with a think-pair-share).
- Our community norms (which we devised and voted on).
We then answered some preliminary questions with "Yes, And" - Matt seeded an (incomplete) worldview, and students chimed in to expand it!
What is computer science (CS)? Can you come up with a formal definition?
Matt's incomplete seed: a degree and field of study at universities like UW. CS is what you study in the major: some programming, and some math.
Many great student answers here! Some choice "Yes, And"s included: CS is a broader skill about understanding how computers work and how to use them; CS is not just a formal program at universities, but informal and across ages; CS is also a field to study the impacts of computers on the world.
Why do "we" study computer science?
Matt's incomplete seed: it's a very lucrative field, and makes a lot of money! And, it's required for my major.
Wonderful student answers included: it's hard to "escape" from computing, so it's better to face it head-on; computing is everywhere in any field; computers can make you more productive; computers impact people, so we should learn about them.
How are computers used in your life today?
Matt's incomplete seed: when I think of my "computer", I think of my laptop and phone.
A diverse pool of answers, including: any data about you is stored somewhere (the cloud is just someone else's computer), computers are embedded in many devices (from calculators to cars to boats), many physical items have been manufactured with (and made more productive by) computers, and almost any form of communication!
Are computers good or bad? How do you make a decision like this?
Matt's incomplete seed: computers are mostly good, since they've rapidly improved people's lives through technological progress.
Interestingly, there was near consensus on: computers are tools and are not inherently good or bad (like a hammer). Instead, it's how humans use computers and embody their values in them that affects people in positive or negative ways.
Finally, we set the stage for next week. In particular, Matt explained the context for the required reading + the different types of optional readings. See the Homework for Week 2 tab for more information.
After class, Matt wrote this summary up, added the community norms to the website, and notified the students!
Homework for Week 2
- post a short introduction on Canvas; please include:
- your name, pronouns (optional), major, and year
- why you took this class
- what you're looking to get out of this class
- one goal for this quarter that's related to this class
- one goal for this quarter that's unrelated to this class
- read the required reading(s):
- pages 1-25 (Introduction + Chapter 1) of Stuck In The Shallow End, Jane Margolis (et al.), MIT Press, 2008.
- one-sentence pitch: an unlikely metaphor on barriers for minoritized students in computing and a reminder that diversity problems facing computing educators are not new.
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: Computer science was always supposed to be taught to everyone, and it wasn't about getting a job, Mark Guzdial, 2021.
- one-sentence pitch: Michigan has a unique system where there are three departments that teach computing (engineering CS, information school, and "LSA" - similar to letters and sciences) - learn about some of the history and ethos.
- back & forth (two opposing viewpoints)
- article 1: Computational Thinking, Jeannette M. Wing, Communications of the ACM, 2006.
- article 2: Do We Really Need Computational Thinking, Enrico Nardelli, Communications of the ACM, 2019.
- one-sentence pitch: "computational thinking" is a pair of words that has dominated the computing education discourse (including a shoutout from Obama), but what really is it?
- rabbit hole (a set of short articles meant to spark a deep dive)
- article 1: CS education policymaking: how a (state) bill becomes a (state) law, Amy J. Ko, 2019.
- article 2: What counts as computer science in K-12 education?, Amy J. Ko, 2019.
- article 3: Computer science education bill would add new graduation requirements in Washington state, Nathalie Graham, GeekWire, 2024.
- one-sentence pitch: learn a bit about the current state of computing education policy in Washington, and how surprisingly many of Matt's coworkers are involved!
- answer the reflection on Google Forms; this is around seven short answer questions.
Week 2: Computing Education
Before class: complete homework for week 2.
Broadly speaking, this class will have two focuses. First, we'll dive deep into the required reading (Stuck In The Shallow End) and reflect on how it connects to computer science then and now. Then, we'll talk about the topic of the optional readings: different mechanisms for broadening participation in computing "in the large", with some focus on Washington state.
Week 2 Summary
Here is a stripped-down, (mostly) anonymized summary of the questions we discussed today! Each bullet is a different point (and most are in response to the previous bullet).
Why is the book called "Stuck in the Shallow End"?
- the history of segregated schools and pools is relevant to computer science education (CS Ed) in the 2000s
- metaphorically, being "stuck in the shallow end" happens when you don't have access to the right skills and resources. This is just like CS Ed: it's not a lack of interest, but a lack of resources.
- similar to swimming, insecurity is a huge barrier to entry for CS. These insecurities stem from historical elements of society.
- expectations are built from existing societal structures - similar to the floating line that separates the shallow and deep ends of a pool, students may feel like an invisible expectation prevents them from doing CS.
- in both cases, we often don't solve the true, structural problem - often ignoring it or applying a bandaid fix.
When discussing barriers to participation in CS, how does identity come up?
- the book explicitly discusses the stereotype that computer scientists are white or Asian (or Indian); this pressure feels real!
- similar identity issues around being a "math-oriented", "STEM-oriented", or "writing-oriented" person.
- computer scientists are perceived as technical, concise speakers, and good at math - even though CS is much more than that!
- a perceived link from "these groups are good at math" to "these groups are good at computing"
- core underlying issue: representation. Not seeing someone who looks like you in computer science affects whether you engage with it yourself.
- as a class, we then tried to name famous swimmers (Michael Phelps, Katie Ledecky, and Ryan Lochte) - who were all white.
- as a class, we then tried to name famous computer scientists (Bill Gates, Steve Jobs, Tim Cook, Paul Allen, Ada Lovelace, Alan Turing, Richard Feynman, "the people from Hidden Figures"). Matt commented that typically, people are unable to name women, and that none of these people are Asian - even though that's a heavy stereotype of software engineers.
- "Asian" isn't a monolithic categorization, and depends on what you count as Asia (e.g. is Russia in Asia? the Middle East?). Participation and inclusion is very different for East Asians, Southeast Asians, and further breakdowns.
Did the CS education experience in the LAUSD reflect yours? Did you take CS in high school - why or why not?
Note from Matt: going to mostly omit the specific points here since they're personally identifying. Instead, these were just the broad-strokes themes - not a play-by-play of the conversation.
- large difference in expected graduation rates (~50% versus always 100%)
- in a different state, had single-digit Latino/a representation - "almost too few people to discriminate against".
- connection to generational expectations in the book. Some students grew up in engineering centers where everybody had at least one parent as an engineer: so, for them and their peers, tech was highly valued, and going to college was highly valued - the expectation was that you'd do it!
- stark differences for many students who moved during their childhood, with different emphasis placed on technological education, "traditional" education, and sports. [note from Matt: this was a big chunk of the conversation, but it's hard to do this without leaking info about each student!]
- in IB schools, taking computer science fits awkwardly with the rest of the curriculum.
- in affluent areas in Washington, some schools had many, many CS classes (including cybersecurity and robotics)! Perhaps influenced by the demographics of the parents, more of whom work in tech. Yet, some of our students still didn't take them!
What details from the book are the same now (~ 20 years later) and what are different? How does this tie into the problems facing computing education?
- book mentions "technologically rich, educationally poor". The government has funded programs, but has relied on (new) organizations like Code.org to build out courses and pedagogy.
- Scratch is much more popular and is frequently used to teach kids programming!
- the field of computer science has skyrocketed (especially with social media), with a bigger emphasis on it being a great career, having prestige, and being put on a pedestal
- everything is now digital: menus, QR Codes, even Disneyland! But, access to technology has not improved proportionally, which widens the technological gap.
- not having computer skills puts a harsher limit on your socioeconomic status.
- separate from access, teachers are now more overworked, burned out, and (sometimes) hate their jobs! Giving them more technology will not solve the problem by itself - broader reform is needed.
- accessibility has two different meanings in computer science, but both are related. Connection to CSE 121 reflection TED talk on "CS Unplugged": how to teach CS on pen and paper.
How would you define a "CS class"? Or, in other words, why is a class on how to use Word or Excel not a CS class?
- argument: Word is interacting with software, but CS is more about writing something out to accomplish a function/goal
- counterpoint: Excel does this too! You can write formulas that do most things Java methods can do (including 121 P3). Does this mean that Excel is the most used programming language in the world?
- in "Stuck in the Shallow End", there is a specific difference between computer literacy (using computers) and computer science (thinking about computers).
- everybody should learn computer literacy, it should be a mandatory part of K-12.
- but, computational thinking - and computer science - is not so different from other sciences (e.g. learning Java is like designing a lab, analyzing results - just with different tools). So, can think of this as adding another "science" (similar to biology or physics).
- computer science is not just programming - there's also an element of how computers work. For example, in AP CS Principles (AP CSP), you learn how wifi works!
- depending on how deep you go into Excel, it could be computer science? But, literacy is more about font sizes and solving "simple" problems. CSE 121's problems take hours to dive into.
- CSE classes are about demystifying computers. Learning to use Excel is like learning to label pictures, while CS classes are like describing them in your own words and analyzing their meaning. CS is learning how these computing tools work and how you apply them in your lives.
Should we make computer science a requirement for graduating high school?
- if we do, we need to improve access to technology (via education reform) - you need a system that guarantees access to computers for all students.
- personally regret not taking CS, even though the high school had it. Avoided it because of the association that "only smart kids take it", and if it was required, could have dispelled that notion.
- completely agree: same experience in high school, and only realized after doing 121. While this doesn't "fix" the perception/insecurity issue fully, it helps quite a bit!
- had Scratch in middle school - this felt like a good balance, since it wasn't a heavy requirement but still left time for exploration.
- if we add another requirement, could delay graduation more. Some students already struggle to graduate because they can't pass their required classes - this would make it worse!
- earlier exposure in middle school (with simpler content and lower stakes) may be helpful.
- also, many bad experiences with high school CS teachers - perhaps mandatory requirements induce more demand for good teachers?
- or, we'd run out of good CS teachers - and we'd have a big problem!!
- requiring CS feels idealistic - would be great if we had infinite resources, but we don't.
- if the requirement is "just have a CS class", that's too broad! Especially since school curricula are often very, very specific items with learning objectives. What would these be?
- one model: don't require it, but let it fulfill a "bundle" like math or a science (instead of physics or biology), and make sure that it hits the same learning objectives.
- going back to "Stuck in the Shallow End": students didn't take CS then because they didn't know what it was. We have the same problem now, and requiring it could help fix this!
- if CS is optional, could result in less emphasis on making sure that all students can access a computer.
- feels related to "test-optional" applications, which slightly backfired (by exacerbating existing equity gaps).
Homework for Week 3
- read/watch the required materials (each on a different accessible technology):
- article: Semantics to Screen Readers, Melanie Richards, A List Apart, 2019.
- article: Captions vs. Subtitles: Breaking Down the Differences, Jena Wallace, 3Play Media, 2023.
- video: Detecting and Defending Against Seizure-Inducing GIFs in Social Media, South, Saffo, and Borkin, ACM CHI '21, 2021.
- video: Introducing the Xbox Adaptive Controller, Xbox, 2018.
- article: Assistive Technologies - The Switch, Hampus Sethfors, Axess Lab, 2018.
- article: What makes writing more readable?, Monteleone, Brew, and McGhee, The Pudding, 2022.
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: Disabled people don't need so many fancy new gadgets. We just need more ramps., s.e. smith, Vox, 2019.
- one-sentence pitch: engineers often jump to "there's a problem? technology will fix it!" - resulting in technologies that don't actually help disabled people.
- back & forth (two opposing viewpoints)
- article 1: How generative AI tools like ChatGPT can revolutionize web accessibility, Ran Ronen, VentureBeat, 2023.
- article 2: No, 'AI' Will Not Fix Accessibility, Adrian Roselli, 2023.
- one-sentence pitch: while we haven't yet talked about AI in-depth, the intersection with accessibility is such a hot button topic - I couldn't not include it!
- rabbit hole (a set of short articles meant to spark a deep dive)
- article 1: What are legal issues associated with the design of accessible software?, UW DO-IT, 2022.
- article 2: The ADA was a victory for the disabled community, but we need more. My life shows why, Shruti Rajkumar, NPR, 2022.
- article 3: Driving Forward the ADA for Digital Inclusion, Sarah Malaier, American Foundation for the Blind, 2023.
- one-sentence pitch: learn about the ADA, one of the central laws in the US on disability rights, and reflect on how it works (and doesn't work) with technology.
- bonus (this is a long, long article): #accessiBe Will Get You Sued, Adrian Roselli, 2020.
- answer the reflection on Google Forms; this is around five short answer questions.
Week 3: Accessibility & Disability
Before class: complete homework for week 3.
Broadly speaking, this class will focus on the interplay between accessibility and technology. This will cover both technology that includes and technology that excludes, whether it be intentional or not. We'll also discuss how students can begin to build accessible software (and communities).
Week 3 Summary
Here is an updated summary of what we discussed in class!
First, quick logistics:
- after all the votes were counted, students voted for a careers panel (instead of a research panel) and a structured culminating reflection
- the course schedule has been updated to reflect this; one consequence is that we no longer have a "museum walk" on the last day. Instead, there will be an extra student's choice session!
We then discussed the accessible technologies in the required readings.
Did anything surprise you from the accessible technologies mentioned in the reading?
- the Xbox Adaptive Controller: did not grow up with video game consoles, and didn't think about the implications this had on disabled gamers.
- had not heard of the difference between subtitles and captions before the reading, but definitely makes sense in retrospect.
- shocked to hear that people deliberately give others seizures via flashing images, and sad that research is needed to defend against this.
- reading ability was a surprising but important one. It was jarring to see how out-of-date school standards can be.
- the adaptive controller had many different use-cases depending on the relevant disability (e.g. pressing the buttons with many different body parts), and that seems related to broader ideas when designing for accessibility.
- was familiar with the idea of switches, but had not realized that breath-controlled devices were switches (and is curious to learn more)!
Let's zero in on readability. This one is divisive, especially since some argue that reading comprehension is a core part of learning English. More broadly speaking (i.e. outside the lens of disability), good readability is just a good goal to aim for - more people understand what you're writing! What do you think?
- short, concise text can be very helpful to those who have ADHD or other attention deficit disabilities.
- outside of disability, it's very helpful to those who don't speak English as a first language - e.g. in helping grandparents understand documents.
- important for things like taxes and bank statements to be readable by others in simple language. But, this is different from academia, where more nomenclature might be needed.
- similar to other ADA standards (e.g. all buildings need to be built to specific codes), it would be great if all government documents, bank statements, etc. were readable.
- also similar to conversations around absurdly long terms & conditions for apps (you shouldn't need a law degree to understand the terms & conditions)!
- the argument of "this is just learning English" isn't effective - there's a difference between literature analysis and finishing day-to-day tasks.
Matt then did a live screenreader demo (using VoiceOver on macOS). We didn't record this, but as a first-order approximation, the video "Screen Reader Basics: NVDA" from Google's "Chrome for Developers" channel covers similar ground using NVDA, an open-source screenreader for Windows.
Switching gears, what is a disability dongle?
- a product that designers make that tries to solve a problem that may not actually exist, doesn't correctly solve the problem, or overlooks elements of that disability.
- when it's created, it often doesn't take into account the experiences of those who face those issues.
- learning about user research in INFO 200 right now! However, one common pitfall is when designers come in with preconceptions on how users would use their work, and only really ask questions based on the designer's own experience (rather than the user's) - similar to this!
- feels like those creating disability dongles are trying to compensate for guilt/pity towards disabled people - rather than properly putting themselves in other people's shoes and getting input from actual users.
- favorite example was the stair-climbing wheelchair: why not install a ramp (instead of charging people thousands of dollars)!
- instead of making something accessible in the first place, disability dongles are almost an "add-on" (that you have to pay for), which is not useful!
- real dongles are converters which exist when hardware doesn't have the necessary plug. Instead, we should try to standardize things so that a dongle isn't necessary.
- as an aside, nobody enjoys using dongles!!
What advice would you give to engineers (or engineering students) to avoid making disability dongles?
- when solving a problem, figure out if there is a systematic solution or not
- one example: "smart" glasses for deaf people that display captions on the lenses sound interesting. But, this puts the burden on deaf people. Instead, systems like closed captions or sign translators fix these systemic issues, rather than forcing deaf people to help themselves.
- currently taking HCDE 315 (inclusive design) which focuses on disability as a mismatch with how the world is designed. It's the world that we should change!
- too easy to slip into "let's build technology" and build something cool, rather than actually helping people.
After reading, what are our thoughts on AI and accessibility? Will it revolutionize accessibility, do nothing, or something in-between?
- AI feels too much like a buzzword - e.g. AI rice cookers?
- but, it's certainly helpful - e.g. with automatic captioning. However, it's still unreliable/inaccurate and may not solve the problem.
- in a way, feels like a dongle or a bandaid fix.
- going back to the plain text readability project: you could have a crowdsourced database that converts complicated text into plain language. AI could do some of this, but it can miss a lot of nuance. Useful as a tool, but not the solution.
- what are we letting AI learn off of? Machine learning could be learning data from exclusionary sources or replicate biases.
- and, generative AI tools are still inaccurate!
In some cases, AI tools may be the "only option". In these cases, should we use it? Is it fair for us to make this judgement?
- reasonable as an only option - better than nothing. But, not a permanent solution.
- feels like a constant "imperfect solution with issues" versus "is this perpetuating the problem" debate.
- AI is not at a place where we can trust it to perform important jobs, but AI is in a "beta" phase - and we need to collect feedback!
- throwing AI at things without monitoring its use "feels weird", e.g. image-generating AIs could be trained on other images generated by AI, creating nightmare fuel
- can't discount how big of an impact this could be: imagine if someone could select and copy-paste a paragraph into ChatGPT, and ask "summarize this concisely" or "translate this into another language" - that's a huge deal!
- need to be careful of utilitarian thinking - can be easy to fall down the rabbit hole of "if it doesn't work for everybody, we can't use it". Okay to use things even if there are some downsides.
- we should help develop what people are using - so if disabled people are using ChatGPT, we should work to make that better!
What are potential solutions to the problems we discussed?
In-person, we discussed:
- implementing some legislation forcing websites to be accessible (e.g. requiring all images to have captions)
- but, people could just write "image", or add some well-intentioned (but bad) alt text.
- what would you do about legacy websites?
- creating software or languages that make it easier to be accessible (e.g. describing images by default, making it easily navigable)
- related solution: can we feed pre-existing structure from things like markdown into a website?
- teaching accessibility somewhere. But, where? Last week, we saw there are many tough questions (what age? what program? should it be required?)
- should be similar to how we teach civil engineers or architects ADA guidelines - through accreditation. Why not add it as a checkmark for a CS degree?
- for K-12, we should teach it somewhere, but maybe not the technical details (are students building websites)?
- K-12 doesn't immediately solve the problem, since you need to convince many CEOs and VPs. How long would it take for kids now to get to these positions of power?
- need some sort of workforce education; could be training programs or through representation.
- when onboarding as a research assistant, had to do mandatory modules about Title IX, OSHA, etc. - could do something similar.
- but, many people don't pay attention to mandatory trainings, and those people are the most important people to teach! You need to teach people to care!
- as a resident assistant, have to go through many accessibility trainings (over a week). But, they're quite effective - in part because they weren't just tacked on.
- in addition, there were tangible outcomes - e.g. if a poster wasn't accessible, it would be denied.
- could require government agencies to have accessible websites through law, and incentivize private companies to make accessibility a priority through tax breaks.
I also asked us all to answer this question on paper as an "exit ticket". Here are a few of the answers that touch on ideas we didn't talk about in-person.
- on a personal level, taking more initiative to add alt-text and other screen-reader friendly items for individual creations.
- teach about accessibility from a very early age - so that thinking about how to make the world accessible is something we all think about, in everything that we do. Almost making it the default!
- avoid reinventing the wheel - instead, first ask if the design is needed!
- Wordplay!
Homework for Week 4
- read/watch the required materials:
- article: Machine Bias, Angwin, Larson, Mattu, and Kirchner, ProPublica, 2016.
- one-sentence pitch: one of the "classic" articles on algorithmic bias, but not on a machine learning system!
- video: But what is a GPT? Visual intro to transformers, 3Blue1Brown, YouTube, 2024.
- one-sentence pitch: a math-y partial explanation of how GPTs work, from the best in the business of math visual explanations.
- note: I'm not expecting you to understand any of the math, but rather to just build some intuition. In addition, some of Grant's examples are provocative, and I'll ask you to comment on them in the reflection.
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: Is generative AI bad for the environment?, Kate Saenko, The Conversation, 2023.
- one-sentence pitch: a quick primer into a surprisingly hard-to-measure problem.
- back & forth (two opposing viewpoints)
- article 1: A.I. Poses 'Risk of Extinction,' Industry Leaders Warn, Kevin Roose, The New York Times, 2023.
- article 2: AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype, Alex Hanna, Emily Bender, Scientific American, 2023.
- one-sentence pitch: "existential risk" is an attention-grabbing headline - but does it deserve this level of discussion?
- rabbit hole (a set of short articles meant to spark a deep dive)
- article 1: We read the paper that forced Timnit Gebru out of Google. Here's what it says., Karen Hao, MIT Technology Review, 2020.
- article 1.5 (key context): About Google's approach to research publication, Jeff Dean, 2020.
- article 2: 'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases, John Harris, The Guardian, 2023.
- one-sentence pitch: Timnit Gebru's controversial departure (resignation? firing?) from Google touches on many key themes in this class, from core AI questions on ethics, interpretability, bias, and environmental impacts, to reflections on tech culture. There's also an unlikely cameo from folks at UW.
- answer the reflection on Google Forms.
Week 4: Machine Learning & AI
Before class: complete homework for week 4.
Broadly speaking, this class will be a broad-strokes overview of machine learning with an emphasis on its societal impacts. While one big focus will be algorithmic bias and fairness, we'll also touch on many other issues (such as interpretability, provenance, labor inputs, and environmental impacts).
Week 4 Summary
We spent the first 20 minutes talking about the questions that you had from the reflections; check out the "Answering your questions" section for more!
We then mostly talked about three topics: embeddings, debiasing humans versus algorithms, and explainability. We closed with a short conversation on the environment.
What did you think about the "word embeddings" from the video? What questions do you still have?
- The video uses a very simple example (3 dimensions instead of 120000), so it's not clear if computers are actually subtracting gender to assume that things are feminine or masculine.
- Using gender could lead to the wrong context: for example, "king - man = queen" ignores that queen doesn't always refer to royalty.
- This adding/subtracting directions feels a bit binary.
- You could have issues where historical biases creep in: e.g. maybe "president" is assigned a male connotation since the U.S. has not had a female president, so "subtracting" female could give you vice-president or first lady?
- In some languages (e.g. French or Spanish), nouns themselves have genders (and this is a core part of the grammar). How does this work?
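If the vector arithmetic in the video felt abstract, here's a toy sketch of the idea. The 3-dimensional vectors (and their values) below are completely made up for illustration - real embeddings have thousands of dimensions and are learned from data, not written by hand.

```python
import numpy as np

# Made-up 3-dimensional "embeddings" (real models learn these from data).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.8, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

# The famous analogy: king - man + woman should land near queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]

def cosine_similarity(a, b):
    """1.0 means the two vectors point in exactly the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

closest = max(embeddings, key=lambda word: cosine_similarity(embeddings[word], target))
print(closest)  # "queen" (with these made-up numbers)
```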
What do you think about the argument "algorithms are easier to debias than humans"? Should we use AI in areas like policing (where bias is very, very prevalent and obvious - in both humans and algorithms)?
- Algorithms & humans are actually quite similar: these algorithms are human-made. So, if the person who makes it has a subconscious bias, then the algorithm will too.
- Throughout U.S. history, policing has always been racist - so we can't just "fix the data" when all of the data is the problem. Even if we remove explicit categories (e.g. "race"), there are many proxies (e.g. see redlining and low-income housing).
- Algorithms are controlled, tailored, and have lots of nuance; they can be more explainable than humans (since you can't pry open a human's brain). So, a more realistic solution could be fixing our current algorithms.
- Change in human behaviour is very hard, especially since biases can be integrated into your environment (e.g. parents, school, where you're from). Also noticed that the locations discussed in the ProPublica article are more likely to be historically racist.
- The questions that COMPAS asks are inherently biased - is it even possible to tweak questions like "have your parents gone to jail?" to not have racial undertones?
- You could try to weight algorithms differently, but it's a delicate balance. For example, Google's Gemini tool went way too far in the opposite direction. And swings in either direction have big impacts on people's lives.
- Fundamental problem is that humans are prejudiced and we need to fix that within humans. But, since governments and companies are going to use these tools, we need to fix them anyways.
- In order to debias algorithms, you need completely unbiased data and people. But that's not possible - this is all systemic!
- Slight disagreement with previous point - do you need unbiased people to recognize and fix bias in data? We're all biased, and yet can see issues with current systems.
- Would like to see AI with a human in the loop - AI cannot have historical context or understand why data has different outcomes, but a human could!
- Related to our education discussion: colleges have essays because we don't think just an SAT/ACT score fully tells someone's story. Algorithms can't truly understand language, systemic oppression, or context!
COMPAS is an "explainable" algorithm. But, for most of machine learning and large language models, we can't explain why the model comes to a certain decision. But, their accuracy seems to be really good. Should we use these systems?
- In everyday life, we use lots of things we don't understand (like our brains or our phones). This should be no different! (but of course, like the brain, we should still try to understand it.)
- What's most important is if they help us - and in this case, these do, so we should use them.
- Tying back to computing education - those who understand computer science (and LLMs) have much more power and can make these decisions on behalf of many other people. So, we need to think about who has this knowledge (and what biases that reflects).
- If you understand that ChatGPT is a black box and can make mistakes (and is not an infallible god), it would be good to use. However, not sure if this is how people really view it.
- There is some accountability - e.g. the recent "Lying AI Chatbot" case with Air Canada.
What are the environmental impacts of LLMs? Who pays for them?
- missing from the article: the physical cost of making the computers, like raw minerals and labour
- many of the raw minerals come from conflict zones (e.g. cobalt in the DRC)
- land use! (to host the data centers and servers)
- people who do not use these technologies still have to pay for them - e.g. people in the DRC may not even be using ChatGPT, and are being deprived of their own land and resources.
- climate change is regressive (i.e. marginalized communities get affected more). For example, unhoused people are disproportionately affected by climate change.
- also, climate change affects everyone! literally, everyone!
Answering your questions (from the reflections)!
How big are the datasets they use for these, and how do they get them?
These datasets are gigantic, within a few orders of magnitude of "the entire internet". For proprietary models (like GPT-4), it's hard to get an exact number (since OpenAI has chosen not to disclose this). For GPT-3 (its precursor), OpenAI's 2020 paper says roughly 500 billion tokens from the Common Crawl dataset, various public domain books, and Wikipedia. OLMo, an open-source LLM developed in part by people at UW, uses the 3 trillion token dataset called Dolma.
Many of these datasets combine archives of written books and content produced on the internet (often gathered through "web scraping"). The internet is famously toxic, and many AI models can replicate this toxicity - Microsoft's Tay was an infamous example that was shut down within 24 hours of being released. One of the most famous papers studying this phenomenon (RealToxicityPrompts) is from CSE professors at UW!
How does the data source affect bias, and how can you avoid this?
Data sources certainly create biases in machine learning models! "Bias" can mean all sorts of things - from racism, sexism, and ableism to working better for certain human languages or opinions the model may repeat. It's hard to summarize the history of bias in ML in one paragraph (or even one article), but Joy Buolamwini's TED Talk is a common entrypoint.
Avoiding bias is very challenging, and broadly speaking is an open problem in machine learning (and in computer science). To summarize the lay of the land: "just get unbiased data" is typically not feasible, and may not itself resolve the problem. Many researchers work in a subfield of AI called fair machine learning (and related ideas, such as "responsible AI"). Some approaches are purely technical (can we quantify "fairness"? can we then optimize for that metric?), while others focus on transparency (e.g. "Model Cards"), representation, or regulation. Timnit Gebru (from the rabbit hole readings) is one of the leaders in this field!
How do machine learning models deal with data they haven't seen before?
Long story short, "their best"! Like the video discusses, many of these models are based on probabilistic reasoning. Typically, models will still pick the most likely option (e.g. most likely token) rather than explicitly fail, and then continue chugging along. Among other things, this explains how language models can output gibberish (if you give them inputs of things that don't exist)!
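As a toy illustration of "pick the most likely option" - the probabilities below are invented, and real models compute them over tens of thousands of possible tokens:

```python
# Invented probabilities for the next token, as a model might produce them.
next_token_probs = {
    "the": 0.41,
    "...": 0.35,
    "a": 0.22,
    "glorbix": 0.02,  # even nonsense gets *some* probability mass
}

# Greedy decoding: always take the single most probable token and keep going.
best_token = max(next_token_probs, key=next_token_probs.get)
print(best_token)  # "the"
```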
What physical structures are necessary to make machine learning work? Where (physically) does this happen?
Great question! Generally speaking, there are two broad steps in machine learning (that involve different physical structures).
The first is "training", where the model tries to figure out the best weights (the linear algebra mentioned in the 3Blue1Brown video). Practically speaking, training is a bunch of math (in particular: matrix multiplication and some calculus + probability calculations). This is done on specialized computer chips that are really, really good at doing math; the most common example is a "Graphics Processing Unit" (GPU), which is particularly good at doing matrix multiplication (the fundamental operation for much of computer graphics and games). The exact data is not available for proprietary models (though Sam Altman has claimed that training GPT-4 was at least $100 million). Using OLMo as a proxy again, they trained their model twice: once on 1024 AMD MI250Xs (10k per chip), and once on 216 Nvidia A100s (very hard to actually buy, but also ~ 10k). Competition for buying graphics cards is fierce - this is one of the biggest "moats" established ML players have.
OLMo also publishes power consumption and carbon footprint estimates - they say that training used 239 MWh of energy, which is about the amount of energy generated by all hydroelectric dams in the Northwest United States (thanks Nathan Brunelle for finding this link!). They estimate training used about 70 tonnes of CO2, which is about 150000 miles of driving.
The second step is deploying the model - or in other words, letting people use it. Generally, this is done with servers across the country (and the world!) - every time a website is visited, a complicated set of algorithms figure out how to direct that query to a specific computer, that then tokenizes the input, does the matrix math, and then sends it back to the user. Cloud providers (such as Amazon Web Services, or AWS) own hundreds of datacenters that serve this exact purpose (and make boatloads of money). One of the most famous AWS regional datacenters is us-west-2, which is right beside us in Oregon! There is no explicitly public data on how many of these servers AWS owns, but almost every estimate puts it into the millions.
Do people who make ML models know how they work?
In short, not really (but it depends)! These models are so complicated (with billions of parameters) that it's not currently possible to truly "explain" what each of the individual pieces of the model is doing. This subfield of AI is broadly called "explainable" and "interpretable" AI, and is one of the most active fields of research.
However, researchers (and members of the public) can make broad explanations of how some parts of the model work - such as the explainer video that we just watched!
How does tokenization actually work?
In short, this is really challenging! Very briefly (and reductively): people used to use hard rules to split text into tokens based on nouns, verbs, adjectives, and other "rules" of grammar. However, these tended to be very manual, error-prone, and not scalable (especially to other languages). Modern approaches often blend some of this domain-specific knowledge with statistical tools that try to "guess" what tokens are based on a set of data (in other words ... more machine learning). A famous (very technical) tutorial that gets quite close to the bleeding edge is Andrej Karpathy's Let's build the GPT Tokenizer.
In their reflection, a student recommended InfiniteCraft, which is a fun game that shows some of this in action. Neal also has a ton of other fun visualizations on his website!
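To make the old "hard rules" approach concrete, here's a toy tokenizer that just splits on words and punctuation. Note that this is not how GPT's tokenizer works (GPT uses a learned byte-pair encoding); it's only a sketch of the rule-based style described above:

```python
import re

def naive_tokenize(text):
    # Hard rule: a token is either a run of word characters or a single
    # punctuation mark. Learned tokenizers instead discover subword pieces
    # from data, so rare words get split into smaller, reusable chunks.
    return re.findall(r"\w+|[^\w\s]", text)

print(naive_tokenize("Don't panic, CSE 390HA!"))
# ['Don', "'", 't', 'panic', ',', 'CSE', '390HA', '!']
```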
What is softmax and why is it used here?
I will mostly skip this question since it's not super relevant to our discussion, but long story short, it takes in a set of numbers and "normalizes" them to all be between 0 and 1 and sum to 1 (while keeping their relative "size" to each other). Why? In math, we typically define probabilities as numbers between 0 and 1 that sum to 1 - so softmax lets us "shrink" large sets of numbers into valid probabilities.
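If you're curious, here's a minimal sketch of softmax itself (written plainly, rather than the numerically-careful version real libraries use):

```python
import math

def softmax(scores):
    # Exponentiate each number, then divide by the total: every output is
    # between 0 and 1, they sum to 1, and bigger inputs stay bigger.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([2.0, 1.0, 0.1]))
# roughly [0.66, 0.24, 0.10] - a valid probability distribution
```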
How can models deal with these absurd amounts of data? Would they not lose context, overfit, or not fit on a computer?
Long story short, models do lose context, overfit, and often cannot fit on cheap computers. So, lots of smart people work on this (and invent cool tricks to make it work). One of the most relevant ones (that you'll explore yourself in CSE 123) is "compression" - basically, treating the input from the user, weights, and other "big" things in the language model as things that we can put into a zip file and make smaller. If you stick around in 123, you'll learn about one of the classic ways of doing this, Huffman coding; modern linear algebra techniques like sparse matrix computation are extremely helpful.
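As a tiny taste of the compression idea, here's a sketch using a general-purpose compressor from Python's standard library. Model weights are shrunk with more specialized techniques, but the goal - store the same information in less space - is the same:

```python
import zlib

# Repetitive data compresses extremely well: the compressor finds and
# reuses repeated patterns instead of storing every copy.
text = ("to be or not to be " * 50).encode("utf-8")
compressed = zlib.compress(text)

print(len(text), "bytes before,", len(compressed), "bytes after")
```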
How is AI development protected and/or regulated? Who is it regulated by?
Generally speaking, it isn't. Some existing laws that apply to technology (like intellectual property and copyright) may apply, but courts are in the process of deciding that right now! The closest thing we have to American legislation is an executive order from the Biden administration from October 2023, but it's not yet clear how this will be enforced.
What is the cost of one query to ChatGPT (for OpenAI or the environment)?
This was a great question! Unfortunately, after some digging, I couldn't find a reliable first-party source (many articles claim to know the answer, but looking through the citations left me unsatisfied).
Some things I could verify were:
- as of April 2024, OpenAI charges $30 for 1M input tokens and $60 for 1M output tokens via its GPT-4 API. The token:word ratio seems to be 4:3, so if you estimate that a query is about 50 words and a response is 250, you get a napkin math pricing of 1.4 cents per query.
- BLOOM, a comparable LLM, is estimated to generate 1.5g of CO2 per query (per a paper written by members of the BLOOM team).
- estimations are quite a bit more complicated because it's hard to factor in upfront costs from purchasing hardware and needing to repair it. There are many more estimates with "amortized" costs; the controversial Nature paper The carbon emissions of writing and illustrating are lower for AI than for humans may be of interest!
Your suggestions on what's "missing" from the legal, policy, and education discussions on AI!
Homework for Week 5
- read/watch the required materials:
- video: Computer Security and the Internet of Things, Tadayoshi Kohno, USENIX Enigma, YouTube, 2016.
- one-sentence pitch: hear from a UW professor (who teaches security!) on some of the pragmatic research that folks do here on security :)
- video: Protecting Privacy with MATH, Minute Physics, YouTube, 2020.
- one-sentence pitch: a slightly different angle on privacy than the typical popular science topics on surveillance and passwords!
- article: The State of Consumer Data Privacy Laws in the US (And Why It Matters), Thorin Klosowski, The New York Times, 2021.
- one-sentence pitch: a quick overview of where we are with data privacy in the US!
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: Most Americans support right to have some personal info removed from online searches, Brooke Auxier, Pew Research, 2020.
- one-sentence pitch: what kinds of personal information should be removed from online searches?
- back & forth (two opposing viewpoints)
- article 1: Ban social media for kids? Fed-up parents in Senate say yes, Mary Clare Jalonick, The New York Times, 2023.
- article 2: The Protecting Kids on Social Media Act is A Terrible Alternative to KOSA, Jason Kelley, Sophia Cope, Electronic Frontier Foundation, 2023.
- one-sentence pitch: a controversial topic that spans privacy, security, and policy - pitting an uncommon bipartisan coalition against the EFF and ACLU!
- rabbit hole (a set of short articles meant to spark a deep dive)
- article 1: G.D.P.R., a New Privacy Law, Makes Europe World's Leading Tech Watchdog, Adam Satariano, The New York Times, 2018.
- article 2: Europe's Privacy Law Hasn't Shown Its Teeth, Frustrating Advocates, Adam Satariano, The New York Times, 2020.
- article 3: Meta Fined $1.3 Billion for Violating E.U. Data Privacy Rules, Adam Satariano, The New York Times, 2023.
- one-sentence pitch: the GDPR is one of the most impactful pieces of data privacy legislation, well, ever; here's a rare chance of seeing its evolution from the perspective of the same author (over the past half-decade)!
- answer the reflection on Google Forms (and vote on the student choice topics).
Week 5: Privacy & Security
Before class: complete homework for week 5.
Broadly speaking, this class will focus on data privacy and security in computing systems. We'll briefly touch on some interesting technical ideas, but mostly focus on many, many different case studies (with different stakeholders and harms). More coming soon!
Week 5 Summary
Sticky note activity: privacy for different types of data.
First, we did an activity based on categorizing different types of data and their privacy implications. We talked about items inspired from the survey on the right to be forgotten (medical records, financial records, criminal records, embarrassing photos) and ones related to you as a student (student records, employment, and news articles about people). We rated them across two axes:
- would we be comfortable with the information being completely public, completely private (e.g. just available to you), or available to third parties upon request (with a password or some other system)
- if the information concerns a certain person, should we allow requests to delete that data?
The class bucketed different types of data in different areas! We touched on a few key points:
- "news articles" is a really tricky piece! Outside of existing processes for libel and defamation (when the article is clearly false), there's a delicate balancing act between freedom of speech, holding powerful people accountable, the right to privacy, and the fact that it's hard to write a straightforward rule for these situations. Two prescient examples we talked about were public reporting on items that people may want to hide from their public image for completely acceptable reasons (e.g. drag) and the rights of children and letting them move on in adulthood, especially for child stars or athletes. Who are we to tell people what an embarassing news article is, but also how can we contrast this with important news that the public deserves to know?
- "student records" is tricky too! We briefly talked about how "student record" is more expansive than simply your grades and homework in a class (it includes communication about you in classes, intersects with you as a student employee, and can even include private messages about students). In addition, we talked about FERPA and its provisions in requiring retention for a certain amount of time, and required provisions on deleting data.
Questions from the readings!
- Q: are modern cars (maybe made in the last ten years) easily hackable? Or have they fixed the problems Yoshi talked about?
- Matt's answer: as a non-expert (but someone who has worked directly in this field, using the same technology), several things are true. Some car manufacturers have taken note and have patched some of the low-hanging fruit, while others have not (and many research and industry teams have replicated these types of attacks over the last decade - Matt's expertise is in attacks on the "CAN bus" in cars). However, even for car manufacturers that have fixed these issues, there are many, many other attacks that work against cars. Notable examples that Matt is personally familiar with include "replay attacks" on keyless cars and generally poor security for in-car entertainment systems. In many fields, security is an "arms race": security researchers or hackers will find vulnerabilities, engineers will fix them, and researchers or hackers find another angle (or exploit a bug in the "fix").
- Matt's tangential answer: one of the things that keeps the average person generally safe is that ... it's not worth the effort to set all of this up to hack into a college student's car (especially when you can usually just break the windows and hotwire it - no hacking necessary). However, this is a bigger deal for higher-profile targets, often in the lens of national security.
- Matt's post-hoc followup: if you are interested, one interesting profile of the front-to-back of car hacking is the Wired article "Hackers Remotely Kill a Jeep on the Highway—With Me in It" by Andy Greenberg (2015).
- Q: how easy is it to hack into a "smart home", and how could that work?
- Matt's answer: it's definitely possible! An infamous case is hacking into smart lightbulbs (that has reached almost meme status). There are many different approaches to take, but one of the simplest is to find something connected to the internet (e.g. a smart garage door, fridge, or lightbulb) that has bad "access controls" or other security (e.g. using the default username and password for the device, basing the password off of serial number, or just ... not having a password). After you gain access, you can then wreak all sorts of havoc! Similar to my answer about cars, there's almost an "arms race" of security here too; companies have started to patch the "low-hanging fruit", but researchers and hackers have just moved on to other techniques.
- Matt's post-hoc followup: I forgot to mention in class, but companies pay serious money to researchers who find these bugs and report them (rather than using them in the wild). For example, Apple's Security Bounty can pay upwards of millions of dollars for researchers who find these types of nasty bugs.
- Q: what is the "safest car"?
- Matt's answer: I don't have a specific make or model in mind, but there's probably a balance of minimizing your "attack surface" (i.e. things that are hackable) - so perhaps, no infotainment system, no wireless tracking, no keyless fobs, etc. - while still being modern-ish so that there is a layer of security over the computers that power all cars (e.g. the CAN bus). This would be a great question for Yoshi and other researchers!
- Q: Yoshi's video mostly talked about the brakes - could you blow up the engine? How does this work with other things (like boats or planes)?
- Matt's answer: in short, probably yes (this is what I worked on as an intern - applying these concepts to naval ships). The "CAN bus" system (fancy word: network topology and protocol) is often used in ships and planes. You can disable the thermal regulator (or coolant dispenser, temperature monitor, etc.) on some engines via a CAN bus attack, which can cause the engine to predictably overheat and enter a failure state. However, the specifics are quite complicated (and perhaps best suited in a different type of conversation).
Finally, we closed off the day with a set of discussions surrounding the complicated math (that we may not understand) that powers computer security and our obligation to explain these algorithms to the public.
Were you convinced by the MinutePhysics video? Would you feel comfortable with your data being released after "jittering"?
- There are so many technologies that use lots of math, are related to security, and we have no idea how they work (e.g. Face ID) - we just trust that they keep our information safe.
- Things like this YouTube video help though, and we should have more of these things!
- Would making the implementation open help? Maybe that would cause security issues because people can see exactly how it works...
- But, making it open also makes it easier for all of us to trust the code and audit it for bugs. This is a common argument against security through obscurity.
- Also a difference between trusting the overall idea in theory, and implementations in practice - it might have a bug, people may not prioritize it, etc.
- Would be good to have experts review and approve these solutions - almost like toothpaste commercials?
- How would we find these experts? Who is your ideal panel?
- Would be good if they are public and can be held accountable (e.g. for toothpaste commercials, who are the 4 out of 5 dentists?). And, they should be independent from whoever is implementing this security (and should not be paid by them).
- But also, people can be pretty suspicious of the government? And many of these explanations might involve complicated math that most computer science majors wouldn't understand (let alone the public).
- [at this point, Matt briefly chimed in that this exact concern has happened with security! If you're curious, look into Daniel J. Bernstein, who is famous for a landmark court case on cryptography and arms export regulations and for his work on the Dual_EC_DRBG "backdoor", among many other contributions to computer science]
- Related note: the video was sponsored by (and done in conjunction) with the US Census Bureau - those were the experts!
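For the curious, here's a toy sketch of what "jittering" a statistic (from the question above) might look like. This is only an illustration of the idea - real systems, like the Census Bureau's differential privacy work mentioned in the video, choose and account for the noise far more carefully:

```python
import random

true_count = 42  # e.g. the real number of people in a group with some attribute

def jittered(count, scale=2.0):
    # Laplace-style noise: a random sign times an exponentially-distributed
    # magnitude. Usually small, occasionally larger - so no single released
    # number pins down an individual's exact contribution.
    noise = random.choice([-1, 1]) * random.expovariate(1 / scale)
    return round(count + noise)

print([jittered(true_count) for _ in range(5)])
# e.g. [41, 44, 42, 39, 43] - close to the truth, never a guaranteed exact value
```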
What obligation, if at all, do computer scientists have to explain these processes to the general public?
- A good baseline would just be to give out the code and math itself.
- But, for the general public, it'd be pretty challenging to understand. Maybe we need to break things down more?
- One analogy could be nutrition facts?
- But, much of the limiting factor here is higher education - requires much more subject matter expertise (math & computer science) than a nutrition facts label.
- And, there are people who can't fully comprehend a nutrition facts label!
- Related point: think of the foods that advertise with "only with ingredients you can pronounce" - do we want that for CS?
- Is the average person even aware that they need to be concerned about data security? This class is not representative of the "average" American (since we've all taken a computer science class). Maybe we need to make people more aware of security first.
- Related to last week's reading on "Machine Bias": the company wasn't required to disclose exactly how the algorithm worked, right?
- Maybe it doesn't matter too much because, even if the average person doesn't understand it, they'll still use it.
- The math part of the security is not even that important. For example, 2-factor authentication is pretty common, but now people use SIM swapping to hack into celebrity Instagram accounts - better encryption doesn't fix those!
- Many (most?) hacks involve attacking the person ("social engineering") - e.g. spearphishing, the "Nigerian prince" scam - and that's what we need to fix, with better education.
- The math issues might be more of an issue for companies like Google or bigger/more important people.
Homework for Week 6
- read/watch the required materials:
- article: The Psychology of Design, Jon Yablonski, A List Apart, 2018.
- one-sentence pitch: a brief foray into some of the "laws" of design that are grounded in psychology and cognitive science.
- article: Dark patterns, the tricks websites use to make you say yes, explained, Sara Morrison, Vox, 2021.
- one-sentence pitch: giving a name to something you've almost certainly encountered before.
- article: The worst volume control UI in the world, Fabricio Teixeira, Medium, 2017.
- one-sentence pitch: one of the "classic" memes in user interface design, but also with a surprisingly relevant end note.
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: Programming languages are the least usable, but most powerful human-computer interfaces ever invented
- one-sentence pitch: user interfaces extend beyond just websites and apps!
- an interesting viewpoint (no back and forth this time)!
- article: Playful worlds of creative math: a design exploration, Jason Brennan, Scott Farrar, Natalie Fitzgerald, May-Li Khoe, Andy Matuschak, Khan Academy Research, 2017.
- one-sentence pitch: a brief look at what the design process might look like from one of Matt's favourite industry research groups!
- rabbit hole! This one is a bit special: instead of any articles, I've given you three demos with an educational focus. Give each of them a spin (and reflect on them)!
- demo 1: Sound, Bartosz Ciechanowski, 2022.
- demo 2: The Evolution of Trust, Nicky Case, 2017.
- demo 3: We explored university syllabi to identify the literary canon, The Pudding, 2023.
- answer the reflection on Google Forms (and review the culminating reflection prompts).
Week 6: User Interfaces & Human-Computer Interaction (Student's Choice #1)
Before class: complete homework for week 6.
Week 6 Summary
Brief look at your reflection responses!
Where have you seen laws of design in apps that you use?
- Listed apps: Spotify (2x), Instagram (2x), TikTok (2x), BeReal, Word, Google Docs, Gmail, VLC, Canvas, Ghost Commander
- Big emphasis on consistent patterns across apps - e.g. the "heart" icon meaning "like" across various social media sites, and expecting similar features (and keyboard shortcuts) across Word and Google Docs
- A note that many social media apps restrict the number of things you can do, perhaps to minimize cognitive overhead (but also can be frustrating!)
- A note that these design norms are almost a part of competition - people won't use your Word competitor (e.g. Notion, Obsidian, Bear, Evernote, ...) if it's too different in design!
Where have you seen dark patterns?
- Listed patterns: hidden costs, fake limited-time offers, microtransactions, loyalty programs and streaks, subliminal and hidden advertisements, guilt (Wikipedia donations, Duolingo), tipping
- Discussed how Duolingo has really committed to the guilt bit (including creating many joke advertisements), and how that contrasts with more serious situations (e.g. guilt would probably be viewed poorly in a college setting)
- A discussion on different types of dark patterns with tipping: from it being a hidden cost, to guilt around making sure employees are paid, to the physical interface of tipping machines (e.g. having pre-selected options being too high, making it hard to make a custom or no tip, etc.)
Where have you seen unwanted innovation?
- general listed topics: disability dongles, "adding AI to everything", requiring an app when it's not necessary (e.g. Disney, parking), forcing algorithmic timelines, changing layouts in apps (mentioned: AirDrop, Google, Instagram)
- brief discussion on how this is almost exactly the same as the disability dongle article, and how this is a common issue with engineering - building cool things without asking if it's needed!
Broadly speaking, do you think design is based on ironclad laws or more about intuition?
- Don't think there should be laws: "follow these things always" can stifle creativity, and you wouldn't be able to adapt to different situations.
- e.g. look at art history: some of the most interesting art breaks existing artistic laws and norms
- While innovation is great, having guidelines to make websites similar to previous ones is good - getting rid of these "laws" could harm usability.
- These laws could be great for teaching and getting started, especially when design is so open-ended.
- Think of them as guidelines. Some are just good design (e.g. "don't put white text on a white background"), but generally speaking you can break some of these laws and still have good looking things!
- Keep in mind that "intuition" is different for each person, and you want to be more general/intentional about thinking about what is intuitive.
- The answer depends on the context and your goal: if your goal is usability for a broad audience, you may want to prioritize design laws and intuition; but, if your goal is to create web art, ignoring the laws can help you be more creative.
- If you follow the laws of design, you may feel like you have to follow existing products (like Word) - but that could be stagnant, and then you'll always have Word.
- Need to strike a balance: if something is too foreign, people won't use it!
- Related to the history of the printing press and movable type: for more, see the Gutenberg Bible.
How does this compare/contrast with other disciplines, either artistic or tech-based?
- In photography, one of the first things you focus on is the "rule of thirds" and prioritizing symmetry and the golden ratio. But, some of the most breathtaking images come from exceptions.
- Similar to genre conventions in music: pop songs can be formulaic and follow a similar format or unspoken structure. But, songs that really break the mold are more memorable!
- Literature: some of the best books can be very nontraditional, but the books that stand the test of time are the ones that make up the literary canon.
- Sometimes, breaking norms doesn't work and fails spectacularly: for example, early 2000s websites were very hard to navigate! Now, the web is more standardized (if also boring), which can make it more accessible.
Aside: is 12X code quality a law of design?
In Amy's article, she talks about how programming languages are a human interface - which would make code quality a potential law of design!
Broadly speaking, there are three types of code quality rules in 12X:
- Rules that are almost universally agreed upon, similar to "don't put white text on a white background". Indentation is the prototypical example here: almost all programmers (Java or not) follow indentation rules.
- Rules that are controversial, but we pick an option to make things consistent (and thus, more usable across the class). A classic example is how many spaces to indent by: many people disagree on 2 versus 4 spaces, whether or not you should use tabs or spaces, etc. Another is commenting style: there are many, many different approaches to commenting, but we've picked BERP & pre/post to provide focus.
- Rules that exist for pedagogical purposes (e.g. "forbidden features") - mostly so that you learn a specific skill or pattern, rather than skipping over it.
But, sometimes we break the rules of 12X code quality!
- With "BERP", the "E" (exceptions) is frequently excluded when it's not necessary.
- Context matters: for quick tests, we often don't comment our code or pick bad variable names; but for final submissions and code we'll share with others, we care more about these.
- When instructors live code, we often don't follow all the code quality guidelines (e.g. commenting) - in part because the purpose is different!
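To make the aside above a bit more concrete, here is a tiny, hypothetical Java snippet written in the spirit of those rules - consistent indentation and a comment describing behavior, parameters, and return value. (It's illustrative only, not an excerpt from the official 12X style guide.)

```java
// A tiny, illustrative example (not official 12X reference code).
public class CodeQualityExample {
    public static void main(String[] args) {
        int[] temps = {58, 72, 49, 81, 65};
        System.out.println(countAbove(temps, 60)); // prints 3
    }

    // Behavior:   returns how many values in the array are strictly greater than the threshold.
    // Parameters: values - the array to search; threshold - the cutoff to compare against.
    // Returns:    the count of values greater than the threshold.
    public static int countAbove(int[] values, int threshold) {
        int count = 0;
        for (int value : values) {
            if (value > threshold) {
                count++;
            }
        }
        return count;
    }
}
```

Note that the method would compile and run just fine with terrible indentation or no comment at all - these rules exist for the humans reading the code, which is exactly what makes code quality feel like a "law of design".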
How can we stop "dark patterns" from existing? How would we define it, and what are possible legal strategies?
- Definition 1: someone convincing you to do something you didn't want to do. But, what about ... homework? Or requiring a user to sign in?
- Definition 2: intentionally (or indirectly) misleading someone into doing something. But, what does indirectly mean? What about long terms and conditions?
- Definition 3: influencing or manipulating the consumer beyond the scope of advertising. But, isn't some advertising fine? What about the method of advertising?
- Definition 4: "obscuring" the truth as a result of manipulating the consumer's lack of information or knowledge. But, what about things like infinite scroll?
- Maybe we only apply these laws to larger companies?
- Need to avoid getting to the state of really long terms and conditions - people don't read them! This is kind of like TikTok's "you've been scrolling for a while" video - most people just keep on scrolling.
- Does Steam's "you've played __ game for __ hours" help or hurt? On one hand, it's very in-your-face and not deletable; on the other hand, it could be a badge of honour.
- What if we had lifelong screen time? Confronting the user can be very powerful!
- But, isn't that guilt or shame? And users can get used to pressing "ignore" every time?
Homework for Week 7
- read/watch the required materials:
- article: The lawsuit that could rewrite the rules of AI copyright, James Vincent, The Verge, 2022.
- one-sentence pitch: one of the first lawsuits levied against language model developers (and about code!), by someone who is both a professional programmer and lawyer.
- article: The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work, Michael M. Grynbaum and Ryan Mac, The New York Times, 2023.
- one-sentence pitch: one of the big lawsuits - and a fun case of a news company reporting on itself.
- article: This new data poisoning tool lets artists fight back against generative AI, Melissa Heikkilä, MIT Technology Review, 2023.
- one-sentence pitch: a novel approach in the ongoing conversation with AI and copyright!
- article: GPT-4 and professional benchmarks: the wrong answer to the wrong question, Arvind Narayanan and Sayash Kapoor, AI Snake Oil, 2023.
- one-sentence pitch: a slightly deeper look at the claims that GPT-4 beats lawyers, doctors, software engineers, etc.
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: 'Without these tools, I'd be lost': how generative AI aids in accessibility, Amanda Heidt, Nature, 2024.
- one-sentence pitch: another perspective combining AI + our accessibility week!
- back & forth (two opposing viewpoints)
- video 1: How AI Could Save (Not Destroy) Education, Sal Khan, TED, 2023.
- article 2 + 3: AI Chatbots Will Help Students Learn Nothing Faster Than Ever and Why Generative AI Will Underperform Expectations in Education, Dan Meyer, 2023.
- one sentence pitch: examining a huge claim that AI will revolutionize education (with a focus on math education).
- rabbit hole (a set of short articles meant to spark a deep dive)
- article 1: Three ways AI chatbots are a security disaster, Melissa Heikkilä, MIT Technology Review, 2023.
- article 2: Multi-modal prompt injection image attacks against GPT-4V, Simon Willison, 2023.
- article 3: Your Personal Information Is Probably Being Used to Train Generative AI Models, Lauren Leffer, Scientific American, 2023.
- one-sentence pitch: a flyover of some of the issues surrounding AI + our privacy and security week!
- answer the reflection on Google Forms (which includes proposing your activity idea).
Week 7: AI, Revisited (Student's Choice #2)
Before class: complete homework for week 7.
Week 7 Summary
Answering your questions from the reflection!
Why does AI often make low-level mistakes in STEM-related issues - mathematics, physics, and even science?
This depends on the type of AI that you're talking about. Assuming that you mean Large Language Models (e.g. ChatGPT), the reason is that these models don't have true understanding of the world (e.g. the laws of math, physics, ...) - they are really just predicting the next token (remember the 3B1B video?). So, they're really good at making things that "look right", but aren't grounded in principles. This is broadly true of pure machine learning techniques (which look for patterns in data, rather than knowledge from the world).
(this topic is controversial, and some proponents of LLMs would disagree with me on this take - but broadly speaking, it is factually true. There are philosophical arguments on what it actually means to "understand" the world as well!)
There are other forms of AI that try to directly encode these rules into their algorithms; two keywords are "automated reasoning" and "knowledge representation" (and more broadly, "classical AI"). These are closer to COMPAS than they are to ChatGPT. However, there are also many people who try to blend both of these approaches together - indeed, that's what some LLM products (e.g. ChatGPT) likely do!
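To make "predicting the next token" a bit more concrete, here is a toy word-level sketch in Java. To be clear, this is emphatically not how real LLMs work - they use huge neural networks trained on enormous datasets, and operate on "tokens" rather than whole words - but it shows how pure pattern-matching over text can produce output that "looks right" without any understanding of meaning:

```java
import java.util.*;

// A toy "next word" predictor (illustrative only - nothing like a real LLM):
// count which word follows which in some training text, then repeatedly pick
// the most frequent follower. The output can look fluent with zero understanding.
public class ToyNextWordPredictor {
    public static void main(String[] args) {
        String training = "the cat sat on the mat and the cat slept on the mat";
        String[] words = training.split(" ");

        // For each word, count how often each other word follows it.
        Map<String, Map<String, Integer>> followerCounts = new HashMap<>();
        for (int i = 0; i < words.length - 1; i++) {
            followerCounts
                .computeIfAbsent(words[i], w -> new HashMap<>())
                .merge(words[i + 1], 1, Integer::sum);
        }

        // "Generate" text by repeatedly predicting the most likely next word.
        String current = "the";
        StringBuilder output = new StringBuilder(current);
        for (int i = 0; i < 8; i++) {
            Map<String, Integer> followers = followerCounts.get(current);
            if (followers == null) {
                break; // we've never seen anything follow this word
            }
            current = Collections.max(followers.entrySet(), Map.Entry.comparingByValue()).getKey();
            output.append(" ").append(current);
        }
        System.out.println(output); // plausible-looking word salad, no meaning involved
    }
}
```

Real models are vastly more sophisticated, but the core loop - look at the context, predict what probably comes next, append it, repeat - is the same basic idea.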
What triggered this sudden "boom" in AI? It's now so prevalent! Has it been growing behind the scenes and suddenly became super popular? Or has there been some sudden advancement that made it pop up on the radar? Have I just been missing all the progressive advancement, and this isn't actually a "boom"?
Really great question! Matt's (mostly non-expert) answer is that there were sudden "booms" in AI (so it's not on you)!
In Matt's opinion, the main restriction was "compute" (fancy word for effective hardware). Much of the core theory & algorithms for (non-LLM) machine learning existed for decades (e.g. backpropagation) - but, we were only recently able to actually compute these algorithms effectively. This includes:
- computer hardware becoming faster (particularly, GPUs - in part fueled by an increase in the popularity of gaming. thanks gamers!)
- fun stat: the average high-end GPU now can do more math per second than the fastest supercomputer in the world in 2000.
- related point: "Moore's law"
- computer memory has become bigger (remember how many parameters GPT-3 had!)
- more people writing code to effectively use this hardware (some keywords here include "GPU programming" and "CUDA", among many others)
Separately, there is a bombshell paper called "Attention Is All You Need" (2017) that kickstarted the "transformer revolution" (the tech that powers all LLMs). That would be an example of one of the "sudden advancements" that made this pop up!
Separately, the amount of data the internet produces has grown rapidly - especially as more of the world becomes online. As we talked about last time, more data (generally) means more effectiveness!
Some related points (that Matt argues are causally related to the previous two) are massive investment from industry (often outpacing academia), which lets companies focus on speed, and the popularity of open-source machine learning software (why do companies do this? look up "commoditize your complements").
Do different AIs also train off of each other?
In short: many do! The most famous case is a "generative adversarial network" (GAN), where you can think of two AI algorithms "competing" against each other - the competition makes both of them "better". In a slightly reductive example, you could train a very effective "is this plant poisonous" detector by having:
- some AI that tries to look at a picture and correctly guess if a plant is poisonous
- another AI that tries to generate images to trick the first AI (by making it guess wrong)
You'd then have these two AIs "compete" against each other! This has powered many of the recent advances in image/video/audio AI; two high-profile examples include upscaling art and deepfakes.
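For a (very) rough feel for the mechanics, here is a heavily simplified, one-dimensional toy in Java. Every number and name here is made up for illustration - real GANs use neural networks and work on images, audio, etc. - but the back-and-forth structure (a discriminator tries to tell real from fake, a generator tries to fool it) is the same idea:

```java
import java.util.Random;

// A heavily simplified, 1-dimensional "GAN"-style toy (not a real image GAN!).
// Real data is drawn from a bell curve centered at 4.0. The "generator" is just a
// single number g that it tries to make look real; the "discriminator" is a tiny
// logistic model D(x) = sigmoid(w*x + b) that tries to output 1 for real samples
// and 0 for fakes. Trained against each other, g drifts toward the real data.
public class ToyAdversarialTraining {
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        Random rng = new Random(0);
        double w = 0.0, b = 0.0; // discriminator parameters
        double g = 0.0;          // generator "parameter": the value it outputs
        double lr = 0.05;        // learning rate for both players

        for (int step = 0; step <= 2000; step++) {
            double real = 4.0 + rng.nextGaussian() * 0.5; // a real sample
            double fake = g + rng.nextGaussian() * 0.1;   // a generated sample

            // Discriminator step: nudge (w, b) to score real samples higher and fakes lower.
            double dReal = sigmoid(w * real + b);
            double dFake = sigmoid(w * fake + b);
            w -= lr * (-(1 - dReal) * real + dFake * fake);
            b -= lr * (-(1 - dReal) + dFake);

            // Generator step: nudge g so the (freshly updated) discriminator scores it higher.
            double dFakeNew = sigmoid(w * fake + b);
            g -= lr * (-(1 - dFakeNew) * w);

            if (step % 500 == 0) {
                System.out.printf("step %4d: generator output is near %.2f (real data is centered at 4.0)%n", step, g);
            }
        }
    }
}
```

If you run it, the generator's output starts far from the real data and drifts toward it - not because it "knows" anything about the data, but purely because it is constantly being graded by the discriminator.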
How should copyright apply to AI-generated content?
From Matt: most students answered that copyright law should apply to AI-generated content, and the medium doesn't matter too much.
As a rebuttal: one common talking point in the AI + copyright conversation is that "humans who look at art and then are inspired by it are not beholden to copyright. LLM training is just like reading all the English books before writing your own. Why is AI any different?". Do you agree or disagree?
- Disagree: the core difference between a human and AI is creativity. Humans can invent beyond what they've seen, while AI cannot.
- The tension could be more the lack of giving credit rather than the actual act of training and/or generating copied images.
- When you learn art, there is a social contract based on respect for the artist and their work. A company training an AI model (and not giving credit) is not paying respect - the actual training process is a red herring.
- There are copyright laws that exist now that could essentially apply to generative AI - e.g. requiring AIs to cite its sources, or for training to do so.
- AI isn't just purely copying and reproducing work - it is changing it slightly (keyword: "transformative work").
- Sidebar: could we train an AI to cite its sources? (Matt: theoretically possible, but as of now, practically infeasible)
- In pop music, there are also many situations where music sounds the same (e.g. Olivia Rodrigo and Taylor Swift's Cruel Summer). But, in music, there are some nuances about copyright (e.g. homages, sampling, covers). The core issue there (and here) is credit - you need to attribute work when necessary.
- Copyright law in general is very messy! If you watch Nathan For You, a savvy Canadian businessman does a lot of fun things with business -- including "dumb Starbucks" (which was allowed because it is parody). Also, many YouTubers complain about copyright law - e.g. is reacting to videos "transformative work"? Should there be copyright strikes for reaction videos?
- It's hard to prove copying. For example, many people rip off Mr Beast's video style, but it's hard to prove that they're actually copying off of him. With AI, you could prove it since the data is literally there.
- How related is inspiration and memory? ChatGPT can't actually completely memorize everything - it's using its data more like a stencil.
- How different is humans being inspired by things and AI extracting data? What actually is creativity??
- At the end of the day, AI is just a tool - people are the ones using AI to break copyright law, etc.
- Humans copying things requires skill and dedication - but an AI can do it effortlessly. Similarly, a human prompting an AI takes no skill.
- Disagree: prompting an AI is actually hard!
- For music covers, the artist needs to okay it - can we do the same here?
Let's say we do apply copyright law to AI-generated content. Mechanically, how would we do this? How much needs to be "reproduced" for a copyright violation to occur?
- Could we build AI that can explain "generally" what data influenced their response (e.g. 50% of this answer comes from X source, 50% comes from Y), and then mandate that AI companies properly label their data?
- Can we make something like Turnitin for AI-generated art?
- tools like this already exist - Turnitin has one, but its accuracy is dubious, and startups like GPTZero exist, but their accuracy is also controversial
- separately, there's the idea of "watermarking" AI-generated output; the challenge is that a user can just remove the watermark and/or spoof it
- post-hoc note from Matt: on the day of our session, Google announced a new video generation model called Veo, which uses a watermarker called SynthID
- Similar to what we'd expect in a school essay, we can require AI to cite "non-obvious" or "not common-sense" information (heuristic: if 5+ sources say something is true, then it's common sense).
- But, how would we define common sense? This would differ across cultures. And, is 5 sources enough?
- What if we treat datasets as opt-in (rather than opt-out); as an artist, you need to agree that you want your work to be trained on. But, not sure how we'd enforce this (certainly that is not happening right now)...
- Can we treat this like copyright infringement on Etsy? e.g. if someone uploads obviously copyrighted or trademarked stuff, take it down?
- this is very hard to do reliably, and typically only very big companies (like Disney and Nintendo) have the money to do this.
- but, Disney and Nintendo can also be used as tools! For example, there are Twitter bots that make t-shirt stores using other people's art. Artists have figured out that by posting a Mickey Mouse artwork (with the text "I am trying to infringe on Disney's copyright"), Disney will instantly start legal action and take those bots down. What if we do something similar with AI?
- right now, there already are huge lawsuits like this - like the ones we read about in the readings!!
- tangent: Nintendo and Disney are really harsh with copyright enforcement...
Tools like Nightshade can help protect artists' rights, but also adversely impact other people (e.g. by reducing the accuracy of a classification algorithm). How do we balance these concerns?
- initial thought: "fighting harm with harm" is bad. But, it's more complicated than that...
- is fighting corporate greed with harm okay? Especially as companies have money for legal fights, while individual artists do not?
- Nightshade actually makes the art worse (especially as you turn up its parameters) - and you won't be able to beat big tech in an arms race of detection and catching.
- it shouldn't be the artists' job to stop their work from being stolen. If a company is "changing the world", then they definitely should have the money and resources to make sure that they aren't stealing people's work (or cite sources)! Corporations need to re-evaluate and figure out how to act ethically...
- the perpetual arms race (with an "antidote" and Nightshade) could also cause long-term harms with the usage of AI, since the models could learn the "wrong" things - which could hurt people or stall the technology. And, this would also make new Nightshade art worse.
- similar example: captchas have gotten more distorted as AI has gotten better, and as a result, a third party (the humans filling out the captchas) is also hurt.
- relating to the reading: imagine if a person with a disability impacting their vision (or visualizing things) relies on generative AI to recognize items in pictures. If their tool spits out the wrong information because of Nightshade, we are hurting accessibility use-cases of the tool.
- the conversation should not be about "fighting" GenAI, but rather creating policy - can we do something similar to the GDPR/CCPA with cookie consent notifications, which require users to opt-in to companies using their data?
- or, something like the "search by creative commons" option on Google?
- it might not be fair/moral to use Nightshade on your work if someone wants to use it under fair use - e.g. for education and nonprofit use. If it's socially beneficial, then maybe this should be okay?
- agree with the fair use point - this feels like putting malware on a library computer (or public space) just because people can misuse it.
- feels dystopian: if you put art out in the world, it's not your obligation to make sure that it doesn't hurt people when people grossly misuse it - then, you couldn't do anything!!
Homework for Week 8
- read/watch the required materials:
- video: The Rise Of Open-Source Software, CNBC, 2019.
- one-sentence pitch: a reasonably high-level general interest story on the history of open-source and some of its challenges.
- book chapter: Is Open Source Good for Security?, David Wheeler, Secure Programming HOWTO, 2015.
- one-sentence pitch: discussing one of the most common questions about open-source (with various viewpoints).
- article: This Software Giant Declared War On Amazon. Will Other Open Source Companies Follow?, David Jeans, Forbes, 2021.
- one-sentence pitch: an interesting ethical conundrum surrounding open-source and some Seattle tech companies.
- article: Open source has a funding problem, James Turner, StackOverflow Blog, 2021.
- one-sentence pitch: it's the title!
- read (at least) one of the optional readings:
- short & sweet (one short-ish article)
- article: What Google v. Oracle means for open source, Jeffrey Robert Kaufman, opensource.com, 2021.
- one-sentence pitch: the software Supreme Court case of the decade - and on a topic you just learned about in CSE 122!
- (as an aside, the general reporting on this story is ... of dubious technical quality. I would trust technology outlets (e.g. Ars Technica) or honestly even the Wikipedia article over traditional news media (e.g. TIME magazine or WaPo/NYT))
- back & forth (two opposing viewpoints)
- article 1: Why Open Source Misses the Point of Free Software, Richard Stallman, GNU Project, 2007 (Revised 2016).
- article 2: It's time to say goodbye to the GPL, Martin Kleppmann, 2021.
- one sentence pitch: "copyleft" licenses (and also, Richard Stallman) are a divisive issue in open-source - let's take a quick look (mostly at the former)!
- rabbit hole (a set of short articles meant to spark a deep dive)
- article 1: kik, left-pad, and npm, Isaac Z. Schlueter, npm Blog, 2016
- article 2: What happens when the maintainer of a JS library downloaded 26m times a week goes to prison for killing someone with a motorbike? Core-js just found out, Thomas Claburn, The Register, 2020.
- article 3: So, what's next?, Denis Pushkarev, core-js GitHub Repository, 2023
- one-sentence pitch: a set of articles on the messy side of open-source, individual maintainers, and JavaScript.
- answer the reflection on Google Forms and work on your culminating activity.
Week 8: Open-Source Software (Student's Choice #3)
Before class: complete homework for week 8.
Week 8 Summary
Answering your questions from the reflection!
Companies obviously benefit pretty well from using open source code, especially financially, but are there any major drawbacks?
Great question! There are many - but the ones that come to Matt's mind are related to the fact that there are often few maintainers:
- if there's a critical bug, it may take a while to fix (e.g. a recent famous vulnerability in a really important Java library called "Log4j")
- if a project has a sole maintainer, them being unavailable for a while can stall important updates (e.g. the rabbit hole reading)
- the development of new features, etc. can take much longer!
How do you think AI will impact open source?
Great question! I'm not 100% sure yet, but it almost certainly will. Some folks think that AI can help save maintainers time, especially if resources are stretched thin. Others are concerned by how AI can generate more spam and low-quality code, and overall add "noise" to the community. Anecdotally, I've definitely reviewed code submitted by others that is AI-generated and quite broken :')
When you make a change and in the end you realize that it worked better before, is it possible to go back to the previous code without undoing new things that are working?
Wonderful question! Long story short, yes - the most common example is a tool called git, which is a type of "version control software". This is also the "git" in "GitHub" and "GitLab". Among other things, git can let you revert your codebase to a previous iteration. If you're curious, CSE has a few classes on git (e.g. CSE 391, which Matt is teaching over the summer!)
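For the curious, here is a minimal sketch of what that looks like on the command line. These are real git commands (though actual workflows involve a few more steps, and `<commit-id>` is a placeholder you'd fill in from the log):

```
git log --oneline        # list past commits (saved snapshots) with their short IDs
git revert <commit-id>   # create a new commit that undoes just that one change,
                         # leaving the newer changes that still work intact
```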
How often are people making enough money to get by nowadays on open source?
It depends on what you mean by "making enough money"!
- If you mean living only on donations (e.g. through something like Patreon), the number is very, very small.
- If you mean working at a nonprofit or foundation that supports open-source, there are a few more people who do this (e.g. look up the Linux Foundation or the Rust Foundation)
- If you mean having a full-time job working on open-source, that's more common! There are some companies that are entirely built on supporting open-source products (e.g. RedHat). In addition, the biggest tech companies often devote full-time resources towards their own open-source projects: some examples include Apple's Swift programming language (used to make iOS apps), the Chromium project (which powers Google Chrome and Microsoft Edge), and Meta's React web development library.
- There are also non-programmers that support open-source: e.g. designers, community managers, people who run conferences, ...
(the talk I linked below by Evan actually talks about this!)
To what degree are tech companies contributing to open source (as in money, coding, whatever else, and how much)?
This is a great question! It's hard to quantify since some of these metrics can be subjectively interpreted, but "quite a bit, though they could do more" is a pretty reasonable response. Big companies like Meta, Google, and Apple dedicate at least hundreds of engineers (and millions of dollars a year) to open-source.
Why might companies do this? One core reason is all the general benefits of open-source: many more people use and contribute to your software, and you can get "free" developers, testers, etc. If you're not planning on selling the software (or it's not the core part of your business), it might be worth it to give up the sales for "better" software.
From an economics perspective, this could also be the best thing for their business (even if it may not sound like it)! For example, Apple makes a good chunk of its money by selling iPhones. So, it has a vested interest in making people want iPhones - which, among other things, means that it should have good apps. As a result, Apple wants to make it as easy as possible for people to make good iPhone apps - which includes spending millions of dollars on free materials teaching app development, but also supporting a free and open-source programming language to make apps (Swift) and libraries to make developing apps easier (e.g. SwiftUI).
(there are many other reasons too - from altruism, to branding/marketing, to controlling the direction of software)
Are there any super legendary open-source programs that you recommend we look at? Anything really cool and interesting that you like?
So many! Some of the most important pieces of software in the world are open-source! Linux (the operating system) is the "classic" open-source project to talk about, and it was part of the free software movement (as a fun fact, macOS and iOS are also partially based on FreeBSD, a related open-source project). Git, which I mentioned earlier, is also open-source and was created to make developing Linux easier!
Here are some other "mega famous" projects:
- implementations/runtimes of almost all programming languages (used by CS people): Python, JavaScript, TypeScript, Swift, Ruby, Go, R, Rust, ... (Java is ... a bit complicated)
- almost all modern libraries used to develop websites - from website styling (e.g. Bootstrap) to interactivity (e.g. React) to "backends" (e.g. Django) and databases (e.g. MySQL)
- WordPress, which powers something like 40% of all websites in the world!
- much of the software that supports data science and machine learning, such as NumPy, Pandas, TensorFlow, PyTorch, or OpenCV
- and, many other important tools for programmers - from command-line tools (like curl and tmux) to full applications (like VSCode)
There are also lots of fun ones. Matt loves "esoteric" programming languages - one particularly funny one is Folders, which is a programming language written with just folders (no code). There are also video games, like Mindustry!
How can I ensure that I am using open-source code correctly on platforms like GitHub without breaking copyright laws? What are the best practices to follow to avoid any copyright issues, especially when machine detection tools might not catch everything?
Long story short, open-source projects will have a license that dictates how you can use their code. Usually, this is in a file called "LICENSE" or "LICENSE.txt" or "LICENSE.md". As a developer, you'd be responsible for reading this license and making sure that you follow the rules in it. There are a few common licenses (e.g. MIT, Apache, GPL, ...) - so you can usually recognize the license and make your decision from there.
How often have you interacted with open-source code in the jobs you've had? Is it as common as these articles make it seem?
If anything, I would say that the articles undersell how common open-source is. In my personal experience, I've done a ton of web development and cloud computing - fields which have really, really embraced open-source. The programming languages, libraries, and software systems I worked on and with are all open-source (to name a few others outside of what I've already mentioned: Docker, Kubernetes, Node.js, Jest, ESLint, ...).
Even what you do in CSE 122 is relevant: the code editor that EdStem uses is actually an open-source project called Monaco (which also powers VSCode), the Java implementation that your code runs on is open-source (OpenJDK), the libraries used to style the website, add interactivity, and store your data are open-source, etc!
Reflecting on the Amazon and Elastic situation (required reading), do you think what Amazon did was morally/ethically correct?
- even though it is legally okay, it feels morally wrong: you're trying to compete against Elastic, and taking their main product.
- the fact that it retains a name containing "Elastic" felt particularly bad (maybe IP issues?) - especially since it harms their ability to market.
- feels strange that Amazon claimed it was a "partnership" with Elastic, when they are also competitors.
- but, Amazon did contribute engineering resources to the core product. Does that count as partnership?
- even if Amazon contributes engineering resources, the power imbalance and size of Amazon makes this an issue - Amazon doesn't need to compete?
- would it be more okay if a startup did the same thing with Elastic?
- well, Elastic (the company) still seems to be doing fine - at least the CEO is worth lots of money :)
- disallowing this sort of "copying" goes against the ethos of open-source - even though this specific instance feels shady, this is exactly what open-source is supposed to allow.
- Elastic did forfeit their rights, and Amazon is a company that is trying to earn as much money as possible - so within these bounds, perhaps moral?
- Amazon broke the social contract around open source (this is why we can't have nice things). But, if we add laws to restrict open-source, this could hurt the world.
- could we instead innovate with open-source licenses to prevent behaviour like this? We already have an existing legal framework to deal with licensing.
- what are our thoughts on the GPL and other "copyleft licenses"? (one key clause of the GPL family is that all derivative works need to be distributed with a similar license, i.e. open-source)
- open-source is supposed to be mutually beneficial, and it sounds like the GPL preserves that more: the author helps the world by making their code available, and the GPL requires the world to contribute back too.
- the GPL could force "bad faith" actors like Amazon to contribute back and be more mutually beneficial
If I release open-source code to the world, am I morally obligated to fix bugs, maintain the project over time, and/or responsible for harm it may cause?
- no: it's just your own project, and you don't owe other people anything just because they're using your code.
- you shouldn't be bound to a project just because you worked on it for a moment: you have a life and should be able to prioritize other things!
- the point of open-source is collaboration: putting the blame and responsibility on one person seems counterintuitive. If there's a problem, you could go fix it (instead of making the creator do so)!
- but, what about external harms? In other situations - like when a company makes a product - if they release something and it hurts people, they're on the hook (for at least negligence). Why would that not apply here?
- in open-source, you're not working for a company (which has different moral and ethical implications) - you're working for yourself. You owe the world less!
- companies profit off of manufacturing - so, it then makes sense to hold them accountable (with fines, laws, etc.). Open-source maintainers aren't benefitting from it in the same way.
- we can hold the consumers of open-source software accountable: if a company makes a product using open-source software and that product hurts people, it's on the company for not double-checking the code (and being negligent)!
- someone could abuse this "open-source software has no liability" loophole.
- consumer expectations are important: similar to volunteering (where you may have lower expectations because you know they're a volunteer), we could treat open-source the same way.
You just learned about interfaces in 122. Do you think they should be copyrightable? Patentable?
- interfaces seem more comparable to a framework, structure, or strategy - it's different from copying someone's implementation.
- if interfaces are protected this way, this could harm the actual utility of the interface.
- the point of interfaces is to make compatible software!
- but, the interfaces discussed in the lawsuit are probably much more complicated than the ones we write in CSE 122. It probably required a lot of thought - which is the thing that we should protect?
- interesting analogy to recipes (when they're just a list of ingredients/instructions), which are generally not patentable/copyrightable.
Where should we draw the line for when a code "idea" should be copyrightable or patentable? What about algorithms?
- it feels like algorithms shouldn't be patented - isn't the whole point that everybody can then use them, implement them, and improve upon them?
- but, the point of patents is for that exact reason - that's why you have to explain how the thing works to patent it.
- how would you differentiate minor changes in an algorithm?
- also, there are different types of patents (e.g. patenting a process is different from a broader idea). And, there are some restrictions (novelty, non-obvious).
- algorithms also feel like a strategy rather than a tangible thing - it's not like making a tool or physical object. Can you copyright something like this (like "folding in" as a technique)?
- related to math: it'd be really bad if someone patented or copyrighted Newton's equations, Gaussian elimination, or the idea of standard deviation.
Bonus: if you have an extra 45 minutes, I cannot recommend enough the talk The Economics of Programming Languages by Evan Czaplicki. Paradigm shift in how I thought about money in computer science, and extremely funny!
Homework for Week 9
- work on your culminating activity; when ready, submit it on Canvas
- complete the shorter than normal Google Form
(and, keep an eye out for information on the panelists)
Week 9: Panel & What's Next
Before class: complete homework for week 9 (i.e. submit your culminating reflection).
We'll have a one-hour panel from folks who work in some tech-adjacent job (i.e., they would have taken CSE 121 and 122), but with a breadth of discipline and job type. Come ready with questions :)
(here's a sketch of their bios)
- Amy Zhu is a 4th-year PhD student at UW CSE, advised by Adriana Schulz and Zachary Tatlock. Her work is focused on applying programming languages ideas to create tools for computational design and fabrication workflows. Her main interest is tools that push the boundaries of machine knitting, making it possible to create surprising objects in interesting ways.
- Ashvin Nagarajan is a software engineer at Microsoft working on Outlook Calendar. He graduated with a Bachelors and Masters in Materials Engineering from UCLA in 2022. He has accepted a seat in the Harvard Business School Class of 2027 and has future career goals of venture capitalism and entrepreneurship.
- Jesse Martinez is a fifth-year PhD student at UW CSE, advised by James Fogarty. Jesse's research is in Human-Computer Interaction with a focus on Interactive Media Accessibility; his aim is to make games more accessible through technology, and make the world more accessible through games.
- Rohini Mettu graduated from UW in 2021 with a B.A. in Sociology and minors in Informatics and Data Science. Today she is a Solutions Architect at Amazon Web Services (AWS), where she helps small businesses in the PNW learn about AWS cloud technologies. Her job is the perfect mix of her university studies, blending social and communication skills with technical knowledge.
If time permits, we'll have a closing conversation on the entire class as a whole, reflect on the goals we set in our first week of class, and talk about "what's next" in our CSE journey!
Last homework!
Before the start of finals week (i.e. by May 31st, 11:59 PM): complete your two peer reviews. They should be assigned to you on Canvas!
Culminating Activity
Part 1: Making the Deliverable
In your culminating activity, you'll synthesize the topics we've touched on in the course into a final deliverable (either an essay or a recorded video). In it, you'll dive more into a specific problem identified within that topic and compare and contrast various solutions. You should aim to answer one of three prompts:
- In many of the topics we discussed, we identified either gaps in current laws and regulations or a lack of regulation altogether. Pick a specific problem (from our set of topics) and sketch out a potential law and/or regulation that would help address this problem. Where would this law be and who would enforce it? What are the existing laws in this space, and why does this better address the core issue? What problems does your law not address (no law is all-encompassing)?
- In almost all of our topics, we noticed that lack of education contributed to core problems (if not being the problem itself). Pick a specific problem (from our set of topics). Pretend that you could create just one "class" (or other educational program) to try to solve this problem, and address the topics we discussed in the computing education week. Who would be the audience of this class (K-12, college, working professionals, software engineers, CS majors, etc.)? Would it be required or opt-in? What prerequisites would you require? How would you train (and fund) the teachers, and who should pay for the program?
- There were many more topics we wanted to discuss than weeks in the quarter. Imagine that you had a chance to design and run a "Week 11" seminar day. What would you pick for the topic, and what would the required and optional readings be (you should pick all three categories of optional readings: short & sweet, back & forth, and rabbit hole)? What would you want to have the other students discuss, and what learning outcomes do you have for that seminar? What would the trickiest parts of the topic be (and how would you guide these conversations)?
The importance of the activity is mostly focused on the content, rather than the style. You are free to choose how to structure your essay or video however you would like, with only a handful of caveats:
- if you choose the essay option, your essay should be at minimum 2 pages (12pt font, double-spaced), including any diagrams, images, or figures you'd like. If you choose the video option, your video should be at least 5 minutes long. There are no maximum lengths (but keep in mind that your peers will be evaluating your work).
- you should cite any sources you use, including resources provided in the class. You can use whatever citation style you would like, as long as you are internally consistent.
- following our accessibility topic conversations, you should make a best-effort attempt to make your deliverable accessible. This includes (but is not limited to): adding alt text for images, describing aloud any visual elements in your video, adding subtitles or closed captioning to your video, and picking design elements that are easy to understand and perceive.
Part 2: Peer Review
After your deliverables are submitted, I'll ask you to review two other submissions and leave a short & constructive comment. Your comment should do at least two things: highlight what you think the unique contributions were to the conversations we've had in the class (i.e., what did the student talk about that is not present in our course material and discussions), and suggest one new perspective or idea you'd add to their culminating activity.
If you have any questions, let Matt know!
Community Norms
We initially drafted these community norms in our first lecture session, though they may evolve over time (and we might revisit them in the future)! Last updated: 04/02.
- respect others and respect others' opinions, even if you disagree with them
- listen to other people, don't just hear them
- try to engage in good faith conversations, and have constructive discussions and criticism
- use inviting, encouraging, and non-confrontational language
- minimize background noise when people are speaking
- respect the speaker
- ensure that all can hear
- active listening & eye contact
- complete the readings and speak purposefully in discussions
- try not to force others into discussion when they aren't ready
- avoid uneducated judgement
- allow for anonymous posts to protect individuals' privacy
- "be nice :)"
In addition, Matt promises to:
- always finish the class on time
- do his best to upload summaries and content to the course website quickly
- respond to any communication (e.g. email, Canvas) quickly
Course Policies
Credit
This is a 1-credit, discussion-based course. To earn credit for this course, you need to complete 7 weeks of discussion activities and the culminating activity.
To complete a weekly discussion activity, you need to:
- do the assigned reading
- do any assigned activities (requires some effort for completion)
- attend the discussion for that week.
If you finish all of the above tasks for any given week, it's considered completed.
Our class will meet for 9 weeks in the quarter. This means that students can still miss up to 2 discussion activities and receive credit for the class. Details about the culminating activity will be posted towards the end of the quarter.
Readings and activities for this class are not intended to take up a significant portion of your time. The focus of this class is to start conversations and reflections on computer science and its impacts on the world around us - not mastery of the material. If you have concerns about the workload for this class, we strongly encourage you to reach out to the instructors to discuss.
Disability and Accessibility
All students deserve an equitable opportunity to education, regardless of whether they have a temporary health condition or permanent disability. This applies to both CSE 390HA and your broader academic experience at UW. If there are ways that we can better accommodate you, please let us know.
We are happy to work with you directly or through Disability Resources for Students (DRS) to make sure that this class meets your needs. If you have not yet established services through DRS, we encourage you to contact DRS directly at uwdrs@uw.edu. DRS offers a wide range of services that support students with individualized plans while simultaneously removing the need to reveal sensitive medical information to course staff. However, these processes can take time - so we encourage students to start this process as soon as possible to avoid delays.
Religious Accommodations
Washington state law requires that UW develop a policy for accommodation of student absences or significant hardship due to reasons of faith or conscience, or for organized religious activities. The UW's policy, including more information about how to request an accommodation, is available at the registrar's page on the Religious Accommodations Policy. Accommodations must be requested within the first two weeks of this course using the Religious Accommodations Request form.
Academic Honesty and Collaboration
Broadly speaking, the philosophy and policy for academic honesty and collaboration in this class mirrors the CSE 122 Academic Honesty and Collaboration policies. In particular, all work that you submit for grading in this course must be predominantly and substantially your own. Quoting from the CSE 122 syllabus:
Predominantly means that the vast majority of the work you submit on any given assignment must be your own. Submitting work that includes many components that are not your own work is a violation of this clause, no matter how small or unimportant the pieces that are not your work may be.
Substantially means that the most important parts of the work you submit on any given assignment must be your own. Submitting work that includes major components that are not your own work is a violation of this clause, no matter how little work that is not your own you include.
In this class, this primarily applies to the culminating activity and weekly discussion activities that involve submitting an artifact (e.g. a short answer response to a question). Allowed behaviours under this policy include discussing the question and answers with others or using search engines and generative AI to explore more information on the topic. Prohibited behaviours under this policy are primarily related to copying work written by others, where "others" can be other students in the class, other people in general, or generative AI tools.
You are welcome (and in fact, encouraged) to draw on outside sources when creating your artifacts. In situations like these, we simply ask that you cite these sources. The exact format (e.g. MLA or APA) is not important, as long as it is clear which works are cited and how they have influenced your own work.
Acknowledgements
Many folks at UW CSE have helped shape the overall direction of this course through direct and indirect advice, conversations, and support. Thanks to Miya Natsuhara, Brett Wortzman, Elba Garza, Lauren Bricker, Nathan Brunelle, Kevin Lin, and Rachel Sobel.
Much of the accessibility module is inspired by the 2023 autumn offering of CSE 493E: Accessibility, taught by Jennifer Mankoff. This includes some of the readings (Richards, Sethfors, South et al., Monteleone et al., s.e.smith) and the overall framing of the conversation. Thank you Jen!
The framing of the computational thinking debate in the computing education week is inspired by my colleague Ben Shapiro's great CSE 599: Computing Education Research graduate class. Thank you Ben!
I am grateful for our panelists who joined us for our last week (Amy Zhu, Ashvin Nagarajan, Jesse Martinez, and Rohini Mettu) - as well as Param and Nicole who helped me find them!
Many of the other readings, framings, and ideas come from years of taking and teaching classes like this at UCLA. There are too many influences to name, but I am particularly thankful to my peers Arjun Subramonian, Sharvani Jha, Megha Ilango, Kendrake Tsui, and Leo Krashanoff (who taught or discussed these very issues); and, to UCLA faculty/researchers Safiya Noble, Ramesh Srinivasan, Jean Ryoo, Kate Lehman, and Jane Margolis for directly or indirectly shaping this work.