CSE 590Y (Security Seminar)

Wednesdays @ 2:30pm in CSE 203

Topic: Recent papers from USENIX Security 2017 (or other security conferences)

Schedule:

11/1 Guest Speaker Information


Title: United We Stand: A Protocol for End-to-End Encrypted Collaborative Editing

Abstract: In recent years, end-to-end encrypted (e2ee) protocols have become popular as a way for people to use the internet without giving service providers access to their private data. The two common patterns for e2ee protocols are messaging/chat and simple file storage. Messaging protocols, like Signal, deal with transient data. File storage protocols, like Tresorit and Boxcryptor, handle data in an opaque and coarse-grained way. This talk introduces a new protocol called United We Stand (UWS) that advances e2ee in a couple of ways. First, it allows members of a team to collaborate on shared data in a fine-grained, low-latency (well, low-ish, anyway) way (think Google Docs, or similar). Second, there is no central server in the protocol at all; the only necessary pieces are client computers (phone, desktop, whatever) and a commodity cloud storage account per user (Dropbox, OneDrive, whatever). This makes it quite challenging for a third party to even collect metadata about the protocol's users.
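
(For attendees who want a concrete picture before the talk, here is a toy Python sketch of the general pattern the abstract describes: clients encrypt fine-grained edit operations and exchange them through an ordinary cloud-sync folder, with no server that ever sees plaintext. This is emphatically not the UWS protocol itself; the pre-shared team key, the file naming, the sync-folder path, and the single-character edit operations are all invented for illustration.)

    # Toy sketch only -- NOT the actual UWS protocol. Assumes a pre-shared
    # symmetric team key and uses a local folder as a stand-in for a
    # commodity cloud-sync directory (e.g., a Dropbox folder).
    import json, os, time, uuid
    from cryptography.fernet import Fernet  # pip install cryptography

    SYNC_DIR = os.path.expanduser("~/Dropbox/uws-demo")  # hypothetical path
    TEAM_KEY = Fernet.generate_key()  # in practice, distributed out of band

    def publish_edit(op: dict) -> str:
        """Encrypt one fine-grained edit op and drop it in the sync folder.
        The cloud provider sees only ciphertext blobs with opaque names."""
        blob = Fernet(TEAM_KEY).encrypt(json.dumps(op).encode())
        name = f"{time.time_ns()}-{uuid.uuid4().hex}.op"
        os.makedirs(SYNC_DIR, exist_ok=True)
        with open(os.path.join(SYNC_DIR, name), "wb") as f:
            f.write(blob)
        return name

    def apply_edits(doc: list) -> list:
        """Decrypt and replay all edit ops in rough timestamp order."""
        f = Fernet(TEAM_KEY)
        for name in sorted(os.listdir(SYNC_DIR)):
            blob = open(os.path.join(SYNC_DIR, name), "rb").read()
            op = json.loads(f.decrypt(blob))
            doc.insert(op["pos"], op["char"])  # toy op: single-char insert
        return doc

    publish_edit({"pos": 0, "char": "h"})
    publish_edit({"pos": 1, "char": "i"})
    print("".join(apply_edits([])))  # -> "hi"

(A real protocol of this kind also has to handle key distribution, membership changes, and concurrent edits that conflict; the talk presumably covers how UWS actually addresses those.)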

Bio: Benjamin Ylvisaker got his PhD from the fair University of Washington back in 2010. After that he worked for a few years at Grammatech, a software engineering and security contractor in Ithaca. He then followed a more teaching-focused path, with a one-year position at Swarthmore and now three years at Colorado College. His research interests are in programming languages and information privacy.

10/18 Guest Speaker Information


Title: Secure Learning in Adversarial Environments

Abstract: Advances in machine learning have led to the rapid and widespread deployment of software-based inference and decision making, in applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. In this talk I will show that motivated adversaries can circumvent anomaly detection or classification models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce classification errors through poisoning attacks. I will describe my recent research on evasion and poisoning attacks against machine learning systems in adversarial environments.
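
(As background for the talk, here is a minimal Python sketch of a test-time evasion attack in the fast-gradient-sign style, applied to a linear classifier. It is not the speaker's code: the weights and inputs below are made up, and real attacks target far more complex models, but it shows how a small, deliberate perturbation can flip a prediction.)

    # Minimal evasion-attack sketch (FGSM-style, on a linear classifier).
    # Weights and inputs are invented for illustration only.
    import numpy as np

    w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
    b = 0.1

    def predict(x):
        return 1 if x @ w + b > 0 else 0

    x = np.array([0.4, -0.3, 0.9])   # a benign input classified as 1
    assert predict(x) == 1

    # FGSM idea: step against the sign of the gradient of the score
    # w.r.t. the input. For a linear model that gradient is just w.
    eps = 0.8
    x_adv = x - eps * np.sign(w)     # push the score toward the other class

    print(predict(x), predict(x_adv))  # -> 1 0: the small change flips the label

(Poisoning attacks, the other topic of the abstract, work at training time instead: the adversary injects crafted points into the training set so the learned w itself is wrong.)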

Bio: Bo Li is a Postdoctoral Researcher in Computer Science and Engineering working with Dr. Dawn Song at UC Berkeley. Li will join the CS@ILLINOIS faculty in the fall of 2018. Her research focuses on machine learning, security, privacy, game theory, social networks, and adversarial deep learning. Li received the Symantec Research Labs Graduate Fellowship in 2015.

Paper Presentation Guidelines:

Questions?

franzi@cs.washington.edu

yoshi@cs.washington.edu