CSE 599N: Deep Learning for Neuroscience
Winter 2023, 4 Credits
Instructor: Matt Golub (mgolub@cs.washington.edu)
Teaching Assistant: Jacob Sacks (jsacks6@cs.washington.edu)
Time: Mondays and Wednesdays, 1:30pm–2:50pm
Location: CSE2 287 (Gates Center)
Office Hours:
 Matt: Tuesdays, 4:30–5:30pm, CSE 528 (Allen Center)
 Jacob: Mondays, 11:00am–Noon, CSE2 150 (Bradlee TA Office, Gates Center)
Course description:
Brains are remarkably complex, massive networks of interconnected neurons that underlie our abilities to intelligently sense, reason, learn, and interact with our world. Technologies for monitoring neural activity in the brain are revealing rich structure within the coordinated activity of these interconnected populations of neurons. In this course, we will discuss deep learning models that can be applied toward 1) understanding how neural activity in the brain gives rise to intelligent behavior and 2) designing algorithms for brain-interfacing biomedical devices. Topics will center on variational autoencoders and recurrent neural networks, along with their probabilistic foundations from classical machine learning. Coursework will include readings from the deep learning and computational neuroscience literature, programming assignments, and a final modeling project applied to neural population data.
Prerequisites:
Multivariate calculus, probability & statistics, linear algebra, and some exposure to machine learning. Programming assignments will be completed in Python. No prior knowledge of neuroscience is needed.
Course Goals:
The primary goals for the course are to:
 Build practical foundations for developing deep learning models for neuroscience and neuroengineering applications.
 Enable students to ask research-level questions at the intersection of deep learning and neuroscience.
 Introduce the real-world challenges and opportunities around working with experimental neuroscience data.
Textbooks (please do not purchase ahead of the first class):
Deep Learning
Ian Goodfellow, Yoshua Bengio and Aaron Courville, MIT Press, 2016.
Theoretical Neuroscience
Peter Dayan and Larry Abbott. MIT Press, 2001.
Principles of Neural Science
Eric R. Kandel, John D. Koester, Sarah H. Mack and Steven A. Siegelbaum. McGraw Hill, 2021.
We will also read foundational research papers from the neuroscience and deep learning literature.
Assignments:
The first half of the quarter will include reading assignments from the textbooks above, and three programming assignments using Python and PyTorch.
The second half of the quarter will focus on papers from the literature. Each class will be based on one paper, and all students will be expected to submit a half-page paper summary ahead of each class. Students will sign up to present one paper and facilitate the discussion of another paper.
Grading breakdown:
20% inclass participation
20% programming assignments
20% paper summaries
20% paper presentation & facilitation
20% final project
Grading policy:
Each student is granted two late days to be used toward the three programming assignments (first half of course). Each student is granted two late days to be used toward the paper summaries (second half of course). One late day is used when submitting an assignment up to 24 hours late. Two late days are used when submitting an assignment 24–48 hours late. No credit will be given for assignments submitted late once these late days are exhausted.
Students are welcome and encouraged to work together on the readings and programming assignments. However, all submitted programming assignments and paper summaries must be written up independently by each student. You may not simply copy another student's work. All students are bound by the UW's policies on Academic Integrity and Misconduct.
Course Outline:
Intro to neural data (3 classes): Brains, biological neurons, electrical physiology, optical physiology, Poisson processes.

Deep learning fundamentals (2 classes): Architectures, regularization, optimization, PyTorch.

Recurrent neural networks (RNNs) (~5 classes): Task-optimized RNNs, data-modeling RNNs, spiking RNNs, interpretation via fixed-point analysis.

Variational autoencoders (~5 classes): Motivation from linear-Gaussian models (factor analysis, linear dynamical systems), Latent Factor Analysis via Dynamical Systems (LFADS).

Final project presentations (2 classes)
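To give a flavor of the kind of modeling the course starts from, the sketch below simulates spike counts from a homogeneous Poisson process, the classical baseline model of neural spiking covered in the "Intro to neural data" unit. This is an illustrative example, not course material: the neuron rates and bin width are made up, and it uses plain NumPy rather than PyTorch.

```python
import numpy as np

# Illustrative sketch: spike counts for three hypothetical neurons under a
# homogeneous Poisson model (rates and bin width are arbitrary choices).
rng = np.random.default_rng(seed=0)

rates_hz = np.array([5.0, 20.0, 50.0])  # mean firing rates (spikes/s)
bin_s = 0.05                            # 50 ms time bins
n_bins = 1000

# Poisson spike counts: counts[i, t] ~ Poisson(rate_i * bin width)
counts = rng.poisson(lam=rates_hz[:, None] * bin_s,
                     size=(len(rates_hz), n_bins))

# A Poisson process has variance equal to its mean, so the empirical
# Fano factor (variance / mean of the counts) should be close to 1.
fano = counts.var(axis=1) / counts.mean(axis=1)
print(fano)
```

Checking the Fano factor against 1 is a standard first sanity check when asking whether recorded spike trains are well described by a Poisson model.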