Due Date: Friday (February 25)
Option 1:
Slide a hard copy under Raj’s door (CSE 566) and email Scott
your Matlab code.
Option 2:
If your answers are in electronic form, you can email Raj
and Scott your answers by the due date.
Submit: (1) your answers to any questions asked in each exercise, (2) any figures, plots, or
graphs supporting your answers, and (3) any new Matlab code that you wrote
to answer the questions in the assignment. Also email Scott (scotths@cs) any Matlab
code that you wrote.
1. Nonlinear Recurrent Networks (50 points): Write Matlab code and answer the
questions in Exercise 4 from Chapter 7 in the textbook as described in
http://people.brandeis.edu/~abbott/book/exercises/c7/c7.pdf.
Create figures reproducing Figures 7.18 and 7.19 using your code, and include
additional figures illustrating the effects of varying the inhibitory time constant tauI.
(The following files implement a nonlinear recurrent network in Matlab:
http://people.brandeis.edu/~abbott/book/exercises/c7/code/c7p5.m and
http://people.brandeis.edu/~abbott/book/exercises/c7/code/c7p5sub.m
These files are for Exercise 5 in c7.pdf, but you can modify them and use
them for Exercise 4. For an analytical derivation of the stability matrix, see
your lecture notes and Section A.3 of the Mathematical Appendix.)
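As a starting point, here is a minimal sketch of Euler integration for a two-population excitatory-inhibitory firing-rate network of the kind described in Section 7.5. The connection weights, thresholds, and time constants below are illustrative placeholders only; substitute the values specified in Exercise 4.

    % Euler integration of an excitatory-inhibitory firing-rate network.
    % All parameter values below are placeholders; use those from Exercise 4.
    dt   = 0.1;                 % integration time step (ms)
    T    = 1000;                % total simulation time (ms)
    tauE = 10;                  % excitatory time constant (ms)
    tauI = 50;                  % inhibitory time constant (ms); vary this
    MEE = 1.25; MEI = -1;       % recurrent weights (placeholders)
    MIE = 1;    MII = 0;
    gammaE = -10; gammaI = 10;  % thresholds (placeholders)
    nSteps = round(T/dt);
    vE = zeros(1,nSteps); vI = zeros(1,nSteps);
    vE(1) = 50; vI(1) = 20;     % arbitrary initial firing rates (Hz)
    for t = 1:nSteps-1
        % steady-state drives, rectified at zero
        FE = max(MEE*vE(t) + MEI*vI(t) - gammaE, 0);
        FI = max(MIE*vE(t) + MII*vI(t) - gammaI, 0);
        vE(t+1) = vE(t) + (dt/tauE)*(-vE(t) + FE);
        vI(t+1) = vI(t) + (dt/tauI)*(-vI(t) + FI);
    end
    tAxis = (0:nSteps-1)*dt;
    plot(tAxis, vE, tAxis, vI);
    xlabel('t (ms)'); ylabel('firing rate (Hz)'); legend('v_E','v_I');

Plotting vE against vI for the same run gives the phase-plane view needed for the analogue of Figure 7.19.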
2. Unsupervised Learning (50 points): Write Matlab code to implement Oja’s Hebb
rule (Equation 8.16) for a single linear neuron (Equation 8.2) receiving as input the 2D
data provided in http://people.brandeis.edu/~abbott/book/exercises/c10/data/c10p1.mat
but with the mean of the data subtracted from each data point. Use “load -ascii
c10p1.mat” and type “c10p1” to see the 100 (x,y) data points. You may plot them using
“scatter(c10p1(:,1),c10p1(:,2))”. Compute and subtract the mean (x,y) value from each
(x,y) point. Display the points again to verify that the data cloud is now centered around
0. Implement a discrete-time version (like Equation 8.7) of the Oja rule with alpha = 1.
Start with a random w vector and update it according to w(t+1) = w(t) + delta*dw/dt,
where delta is a small positive constant (e.g., delta = 0.01) and dw/dt is given by the Oja
rule (assume tauw = 1). In each update iteration, feed in a data point u = (x,y) from
c10p1. If you’ve reached the last data point in c10p1, go back to the first one and
repeat. Keep updating w until the change in w, given by norm(w(t+1) - w(t)), is negligible
(i.e., below a small positive threshold of your choosing), indicating that w has converged.
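A minimal sketch of this procedure, assuming delta = 0.01, alpha = 1, tauw = 1, and an arbitrarily chosen convergence threshold tol = 1e-6:

    % Discrete-time Oja rule on the centered c10p1 data.
    load -ascii c10p1.mat                   % loads the 100 x 2 matrix c10p1
    u = c10p1 - repmat(mean(c10p1), size(c10p1,1), 1);  % subtract the mean
    w = randn(2,1);                         % random initial weight vector
    delta = 0.01;                           % learning-rate step
    tol = 1e-6;                             % convergence threshold (arbitrary)
    i = 1;                                  % index of the current data point
    while true
        x  = u(i,:)';                       % present one data point
        v  = w' * x;                        % linear neuron output (Equation 8.2)
        dw = v*x - (v^2)*w;                 % Oja rule (Equation 8.16), alpha = 1
        wNew = w + delta*dw;                % Euler step, tauw = 1
        if norm(wNew - w) < tol             % per-step change in w
            w = wNew;
            break;                          % a stricter test would check a full pass
        end
        w = wNew;
        i = i + 1;
        if i > size(u,1), i = 1; end        % cycle back to the first point
    end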
a. To illustrate the learning process, print out figures displaying the current weight vector
w overlaid on the input data scatterplot, at several different time points during
learning.
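One way to draw such a figure, assuming u and w are the centered data and current weight vector from the sketch above:

    scatter(u(:,1), u(:,2));                         % centered data cloud
    hold on;
    plot([0 w(1)], [0 w(2)], 'r-', 'LineWidth', 2);  % w drawn from the origin
    hold off; axis equal;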
b. Compute the principal eigenvector (i.e., the one with the largest eigenvalue) of the
zero-mean input correlation matrix (this will be of size 2 x 2). Use the Matlab function
“eig” to compute its eigenvectors and eigenvalues. Verify that the learned weight vector w
is proportional to the principal eigenvector of the input correlation matrix (read
Sections 8.2 and 8.3).
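A sketch of this check, again assuming u and w from the sketches above:

    Q = (u' * u) / size(u,1);     % 2 x 2 correlation matrix of the centered data
    [V, D] = eig(Q);              % columns of V are the eigenvectors
    [maxEig, k] = max(diag(D));   % locate the largest eigenvalue
    e = V(:,k);                   % principal eigenvector
    disp([w/norm(w), e])          % the two columns should match up to sign

Since the Oja rule normalizes the weight vector to unit length when alpha = 1 (Section 8.3), the converged w itself should already be close to a unit vector.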