Due Date: Wednesday, June 3, 2009 midnight
Submission Procedure:
Create a Zip file called "528-hw4-lastname-firstname" containing the following:
(1) Document containing your answers to any questions asked in each exercise,
as well as any figures, plots, or graphs supporting your answers,
(2) Your Matlab program files,
(3) Any other supporting material needed to understand/run your solutions in Matlab.
Upload your Zip file to: https://catalysttools.washington.edu/collectit/dropbox/huangyp/5522
Upload your file by 11:59pm Wednesday, June 3, 2009.
Late submission policy is here.
1. Nonlinear Recurrent Networks (50 points): Write Matlab code and answer the
questions in Exercise 4 from Chapter 7 in the textbook as described in
http://people.brandeis.edu/~abbott/book/exercises/c7/c7.pdf.
Create figures reproducing Figures 7.18 and 7.19 using your code, and include
additional figures to illustrate the effects of varying τI (the inhibitory time constant).
(The following files implement a nonlinear recurrent network in Matlab:
http://people.brandeis.edu/~abbott/book/exercises/c7/code/c7p5.m and
http://people.brandeis.edu/~abbott/book/exercises/c7/code/c7p5sub.m
These files are for Exercise 5 in c7.pdf, but you can modify them and use
them for Exercise 4. For an analytical derivation of the stability matrix, see
Mathematical Appendix Section A.3).
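The sketch below shows one possible way to structure the simulation: Euler
integration of a rate-based excitatory-inhibitory pair with half-wave
rectification. The coupling weights, thresholds, time constants, and initial
rates shown are placeholders, not the exercise's values; substitute the
parameters specified in c7.pdf.

    % Sketch of an Euler integration of a two-population (E/I) rate model.
    % All parameter values below are placeholders -- use the values given
    % in Exercise 4 of c7.pdf.
    tauE = 10e-3;            % excitatory time constant (s)  -- placeholder
    tauI = 50e-3;            % inhibitory time constant (s)  -- vary this
    MEE = 1.25; MEI = -1;    % recurrent weights             -- placeholders
    MIE = 1;    MII = 0;
    gammaE = -10; gammaI = 10;   % thresholds (Hz)           -- placeholders

    dt = 0.1e-3; T = 1;          % time step and total duration (s)
    t = 0:dt:T;
    vE = zeros(size(t)); vI = zeros(size(t));
    vE(1) = 50; vI(1) = 20;      % arbitrary initial rates

    for k = 1:numel(t)-1
        % half-wave rectified recurrent input to each population
        fE = max(MEE*vE(k) + MEI*vI(k) - gammaE, 0);
        fI = max(MIE*vE(k) + MII*vI(k) - gammaI, 0);
        vE(k+1) = vE(k) + (dt/tauE)*(-vE(k) + fE);
        vI(k+1) = vI(k) + (dt/tauI)*(-vI(k) + fI);
    end

    figure; plot(t, vE, t, vI);       % rates vs. time (cf. Fig. 7.19)
    xlabel('time (s)'); ylabel('firing rate (Hz)'); legend('v_E','v_I');
    figure; plot(vE, vI);             % phase-plane trajectory (cf. Fig. 7.18)
    xlabel('v_E (Hz)'); ylabel('v_I (Hz)');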
2. Unsupervised Learning (50 points): Write Matlab code to implement Oja’s Hebb
rule (Equation 8.16) for a single linear neuron (Equation 8.2) receiving as input the 2D
data provided in http://people.brandeis.edu/~abbott/book/exercises/c10/data/c10p1.mat
but with the mean of the data subtracted from each data point. Use "load -ascii
c10p1.mat" and type "c10p1" to see the 100 (x,y) data points. You may plot them using
"scatter(c10p1(:,1),c10p1(:,2))". Compute and subtract the mean (x,y) value from each
(x,y) point. Display the points again to verify that the data cloud is now centered around
0. Implement a discrete-time version (like Equation 8.7) of the Oja rule with α = 1.
Start with a random w vector and update it according to w(t+1) = w(t) + delta*dw/dt,
where delta is a small positive constant (e.g., delta = 0.01) and dw/dt is given by the Oja
rule (assume τw = 1). In each update iteration, feed in a data point u = (x,y) from
c10p1. If you’ve reached the last data point in c10p1, go back to the first one and
repeat. Keep updating w until the change in w, given by norm(w(t+1) - w(t)), is negligible
(i.e., below an arbitrary small positive threshold), indicating that w has converged.
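As a starting point, here is a minimal sketch of the update loop just described.
Variable names and the convergence threshold are arbitrary choices; the update
itself follows Equations 8.2 and 8.16 with α = 1 and τw = 1.

    % Sketch of the discrete-time Oja rule described above.
    % Assumes c10p1.mat is in the current directory.
    load -ascii c10p1.mat
    u = c10p1 - repmat(mean(c10p1), size(c10p1,1), 1);  % zero-mean data

    delta = 0.01;      % learning-rate constant
    tol   = 1e-6;      % convergence threshold (arbitrary small value)
    w = randn(1, 2);   % random initial weight vector

    i = 1;
    while true
        x  = u(i, :);              % current input point u = (x, y)
        v  = w * x';               % linear neuron output v = w . u  (Eq. 8.2)
        dw = v*x - v^2*w;          % Oja rule with alpha = 1         (Eq. 8.16)
        wNew = w + delta*dw;
        if norm(wNew - w) < tol    % stop once the change in w is negligible
            w = wNew; break;
        end
        w = wNew;
        i = mod(i, size(u,1)) + 1; % cycle back to the first data point
    end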
a. To illustrate the learning process, print out figures displaying the current weight vector
w and the input data scatterplot on the same graph, for different time points during the
learning process.
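For part (a), a small helper along the following lines may be useful; the
function name and the choice to draw w as a line segment from the origin are
illustrative, not required. Save it as plotWeightSnapshot.m and call it at
whatever snapshot intervals you like during the loop above.

    function plotWeightSnapshot(u, w, nUpdates)
    % Overlay the current weight vector on the zero-mean data cloud.
    % u: N-by-2 data matrix, w: 1-by-2 weight vector, nUpdates: your
    % iteration counter (used only for the title).
    scatter(u(:,1), u(:,2)); hold on;
    plot([0 w(1)], [0 w(2)], 'r-', 'LineWidth', 2);  % w drawn from the origin
    axis equal; hold off;
    title(sprintf('weight vector after %d updates', nUpdates));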
b. Compute the principal eigenvector (i.e., the one with largest eigenvalue) of the zero-
mean input correlation matrix (this will be of size 2 x 2). Use the Matlab function "eig"
to compute its eigenvectors and eigenvalues. Verify that the learned weight vector w
is proportional to the principal eigenvector of the input correlation matrix (read
Sections 8.2 and 8.3).
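Continuing from the sketch above (with u the zero-mean data and w the converged
weight vector), the check for part (b) might look like this:

    % Verify that w is proportional to the principal eigenvector of the
    % zero-mean input correlation matrix Q = <u u'>.
    Q = (u' * u) / size(u, 1);          % 2-by-2 correlation matrix
    [V, D] = eig(Q);                    % columns of V are eigenvectors
    [lambdaMax, idx] = max(diag(D));    % index of the largest eigenvalue
    e1 = V(:, idx);                     % principal eigenvector
    % If w is proportional to e1 (up to sign), this is close to +1 or -1:
    disp(dot(w/norm(w), e1))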