CSE 590Y (Security Seminar), with Adversarial Deep Learning Focus
Wednesdays @ 2:30pm in CSE 203
Topic: Adversarial machine/deep learning papers from security conferences (USENIX Security, Oakland, EuroS&P, CCS, NDSS) and machine learning and vision conferences (ICML, ICLR, NIPS, CVPR, ICCV, ECCV, AAAI, AISTATS, etc.)
Schedule:
Seminar Structure:
The goal of this seminar is to introduce adversarial deep learning to participants who are not necessarily working in machine learning. Although this is a field that requires some mathematical maturity to understand and appreciate the contributions of its papers, the seminar will not require that background; instead, each session will include some basic background information for participants. At the first meeting, I will spend some time explaining background ideas independently of the papers for that day.
Following that initial discussion, participants will select and present papers in the general area of adversarial deep learning. For participants already working in this space (or in related spaces), my expectation is that you will first spend about 15 to 20 minutes discussing background concepts, and then spend another 15 to 20 minutes presenting the paper. For those familiar with the technical details, this will serve as a review; for those who are not, it will serve as a helpful starting point for better appreciating the paper.
Here is a list of papers you can choose from. Feel free to select a paper that is not on this list; however, before presenting, please email me (Earlence) the title of the paper so that I can make sure it is within the scope of the seminar. I will update this list as I find other interesting and relevant papers.
Paper Presentation Guidelines:
- Your goal is to provide an overview of the paper and foster a discussion around it, including: What problem does the paper try to address and how? How does it fit into the broader context (e.g., related work)? What are the positive and negative aspects of the paper/approach? What new research questions does it raise?
- Please be positive! It’s easy to find reasons to criticize papers. While it’s useful to discuss these weaknesses, you should also focus on why a paper might have been accepted, what we can learn from it, or how you might build on it. Think about how you would like other seminar participants to discuss *your* work! :)
- The use of slides to guide and organize your presentation is preferred. You’re welcome to borrow slides from the paper’s authors if you can find them. USENIX posts videos of conference presentations; they’re great to help you prepare, but please use them sparingly during seminar.
- Keep in mind that not everyone is a security expert (yet! :)), so please make sure to explain concepts or context that those newer to the field might not yet know.
- Borrowing from USENIX’s code of conduct and speaker guidelines: “Speakers are responsible for the content of their presentations, but USENIX requests that speakers be cognizant of potentially offensive actions, language, or imagery, and that they consider whether it is necessary to convey their message. If they do decide to include it, USENIX asks that they warn the audience at the beginning of the talk and provide them with the opportunity to leave the room to avoid seeing or hearing the material.” Other types of content to avoid include content identified as classified by a government, content that clearly violates a company’s intellectual property, and so on. We do have one deviation from the USENIX policy: since this is a class, everyone should be able to attend. This means you should avoid using the types of material mentioned in this bullet altogether.
Questions?
earlence@cs.washington.edu
franzi@cs.washington.edu
yoshi@cs.washington.edu