Much of the following language was borrowed or adapted from the 2023 ITiCSE Working Group for Generative AI in Computing Education and the CSE 12X generative AI blurb.
Caution
If you decide to use generative AI in this course (e.g., ChatGPT, Bard, GitHub Copilot), make sure that your use is in line with course policy. If you are unsure about what constitutes inappropriate use of AI tools, please don't hesitate to reach out to the course staff about the situation!
What is Generative AI?
Generative AI refers to a kind of artificial intelligence software that is capable of generating content in response to prompts. The software is trained on source data and uses that training data as input to a sophisticated model that predicts an appropriate response to the prompt. It does not understand the prompts, but it produces a convincing simulation of understanding. Examples of generative AI systems that generate text include ChatGPT and Bard, and generative AI models capable of generating images include Midjourney and DALL-E.
Risks of Generative AI
- Accuracy - If you are using generative AI tools for learning, then you should always double-check the content. For example, if you are assigned to write a program that uses a specific algorithm, AI tools may generate a solution that arrives at the correct answer but does not use the required algorithm. Further, because generative AI generates answers to your prompts, it's possible for it to fabricate sources and provide incorrect information. If you use generative AI to assist in the creation of assessed content, then you are responsible for the accuracy and correctness of the work that you submit.
- Quality - Generated content may be of poor quality and generic in nature. Generated code may have security flaws and may contain bugs. It is important that you understand how any generated code works and that you evaluate the quality of the content.
- Learning - Generative AI can be a powerful productivity tool for users who are already familiar with the topic of the generated content because they can evaluate and revise the content as appropriate. Tasks assigned by your teachers are designed to help you learn, and relying on AI tools to complete tasks denies you the opportunity to learn, and to receive accurate feedback on your learning.
- Over-reliance - Using AI tools to do your work for you may achieve the short-term goal of assignment completion, but consistent over-reliance on AI tools may prevent you from being prepared for later examinations, subsequent coursework, or future job opportunities.
- Motivation - Some students may experience lack of motivation for tasks that generative AI can complete. It is important to understand that you need to master simple tasks (which generative AI can complete) before you can solve more complex problems (which generative AI cannot complete). Stay motivated!
Impact on Others
There are many consequences to inappropriate usage of AI tools. Some of these consequences may be unintended and could potentially harm others. For example:
- Other students - You could expose other students to harm by preventing their learning or by including content in a group assignment that violates academic integrity.
- Faculty - Detecting and discouraging violations of academic integrity standards through the use of AI tools takes time and energy, and enforcing those standards is emotionally draining for teachers and administrators.
- The institution - Including code from AI tools that you do not understand could expose the university to loss of reputation or even financial harm through lawsuits.
Examples
Generally speaking, you should treat generative AI tools the same way you would treat other third parties (students in the class, other people, online forums): asking general questions is permitted, but you should not provide or use solutions to homework problems.
Examples of Permitted Use
Despite the risks of using AI tools, you may well use them on the job in industry after graduation. Therefore, you should learn to use them appropriately so that you receive the most long-term benefit. As a student, these uses are mostly centered on helping you understand course material. This includes asking generative AI to:
- Explain a given topic, or to provide an example of how programming constructs are used.
- Produce an example that is similar to assignment questions.
- Explain the meaning of error messages.
If you use AI tools, you should always include a statement about how they were used in completion of the assignment.
An example of a prompt that is perfectly okay to ask is “What is the difference between an argument and standard input?”
In this case, the answer to the prompt helps the student understand a particular concept, which can further their understanding of the course material.
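To make the distinction that prompt asks about concrete, here is a minimal sketch (the function and variable names are illustrative, not course-provided code): command-line arguments are fixed at the moment a program starts, while standard input is a stream the program reads from as it runs.

```python
import io

def describe_inputs(argv, stdin):
    """Summarize where each piece of input came from.

    argv:  a list of command-line arguments (like sys.argv[1:]),
           handed to the program once, when it starts.
    stdin: a file-like object (like sys.stdin), a stream the
           program reads from while it is running.
    """
    args = list(argv)                   # arguments: already present at startup
    lines = stdin.read().splitlines()   # standard input: read during execution
    return {"arguments": args, "stdin_lines": lines}

# Simulate running `python prog.py alpha beta` with "hello" piped to stdin:
result = describe_inputs(["alpha", "beta"], io.StringIO("hello\n"))
```

Here `result` separates the two sources: `["alpha", "beta"]` came in as arguments, while `["hello"]` was read from standard input.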
Examples of Inappropriate Use
There are many situations in which using AI tools is not permitted. Examples of these uses are:
- Asking generative AI to complete an assignment for you - an assignment that you were meant to complete - where it can generate an entire solution for you that you don’t understand.
- Providing a generative AI tool with your solution to a homework problem and asking it to fix the bugs for you - just as this would be inappropriate to ask of another student.
- Writing a code solution in a language you know, and then asking an AI tool to translate that code into the language required for the assignment. In this case, part of the assignment’s value to your learning is to better understand the required language, which you would be missing out on.
Violations
Violations of this policy will be referred to the UW Community Standards and Student Conduct office. Students found responsible for violating this policy will, at minimum, receive no credit for the assignment on which the violation occurred (regardless of whether that assignment was later resubmitted or reattempted). Additional penalties may be imposed for repeated or egregious violations.