CSE 599R: Agentic Systems Security (Spring 2026)

Course Description

Advances in language models have enabled a new agentic computing paradigm, in which systems integrate one or more AI “agents” that promise to take actions on behalf of users based on their natural language queries. For example, we are now seeing the emergence of agentic browsers, computer-use agents, and increasingly complex chat agents that connect with external tools. Though these systems can enable many exciting use cases, there are serious security, privacy, and safety risks to consider. For example, a successful prompt injection attack by a malicious website on an agentic browser can potentially undermine the browser’s traditional security model, breaking isolation between websites and compromising the user’s accounts.

Addressing these security risks cannot be done solely within the AI models: we must consider the design of the entire system. Thus, in this course, we will explore the question: How can we design agentic systems with strong security, privacy, and safety properties? We will explore this question from several angles: (1) understanding core traditional systems security principles, (2) (ethically) exploring security, privacy, and safety risks in current agentic systems, and (3) developing and/or evaluating emerging defenses or secure agentic system designs.

The course will involve reading and discussing the latest research on agentic systems security and related topics, several homework assignments to experiment with the latest technologies, and an independent or group-based research project.

Prerequisites

  • CSE PhD and MS students: No formal prerequisites. I recommend some familiarity with computer security, computer systems, and/or AI/ML.
  • CS/CE undergraduates: CSE 484 and CSE 473, or instructor permission (via a course petition).
  • Interested non-CS grads / undergrads: Please submit a course petition.