Name of Reviewer
------------------
Noah Snavely

Key Contribution
------------------
Summarize the paper's main contribution(s). Address yourself to both the class and to the authors, both of whom should be able to agree with your summary.

This paper presents a technique for finding the nearest subspace to a point via a reduction to nearest neighbor search. This reduction allows fast approximate nearest neighbor algorithms to be applied to nearest subspace search.

Novelty
--------
Does this paper describe novel work? If you deem the paper to lack novelty please cite explicitly the published prior work which supports your claim. Citations should be sufficient to locate the paper and page unambiguously. Do not cite entire textbooks without a page reference.

Yes, this work is new.

Reference to prior work
-----------------------
Please cite explicitly any prior work which the paper should cite.

Clarity
-------
Does it set out the motivation for the work, relationship to previous work, details of the theory and methods, experimental results and conclusions as well as can be expected in the limited space available? Can the paper be read and understood by a competent graduate student? Are terms defined before they are used? Is appropriate citation made for techniques used?

Yes, the paper is relatively clear, although it took me a while to understand their experiments. Another question I had is which version of ANN they used (i.e., priority search vs. regular search).

Technical Correctness
---------------------
You should be able to follow each derivation in most papers. If there are certain steps which make overly large leaps, be specific here about which ones you had to skip.

Experimental Validation
-----------------------
For experimental papers, how convinced are you that the main parameters of the algorithms under test have been exercised? Does the test set exercise the failure modes of the algorithm? For theoretical papers, have worked examples been used to sanity-check theorems? Speak about both positive and negative aspects of the paper's evaluation.

The experimental results are convincing in some ways. I liked that the authors presented results on synthetic data showing the runtime and accuracy of ANS with varying database size and (ambient) dimensionality. I would also have liked to see similar plots for varying intrinsic dimensionality and for different values of epsilon (incidentally, an epsilon of 100 seems very large to me). I would also have liked to see these plots for real data, which can have structure that affects nearest neighbor search. Their results on real data were interesting as a test of the algorithm, but I found the particular applications they tried a bit arbitrary (except for the Yale faces experiment). In addition to the experiments they tried, it would be good to show results for some really simple, intuitive applications, such as finding the nearest line to a point (a brute-force baseline for this is sketched below).
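To make that suggestion concrete, here is a minimal NumPy sketch of the brute-force nearest-line baseline an approximate method would be judged against. The function names are mine, not the paper's; a line through the origin is represented by a single unit basis vector:

```python
import numpy as np

def point_to_subspace_dist(x, U):
    # Distance from point x to span(U); U has orthonormal columns
    # (for a line through the origin, U is a single unit column).
    return np.linalg.norm(x - U @ (U.T @ x))

def nearest_subspace_bruteforce(x, subspaces):
    # Exact linear scan over a list of orthonormal bases.
    dists = [point_to_subspace_dist(x, U) for U in subspaces]
    best = int(np.argmin(dists))
    return best, dists[best]

# Toy example: 1000 random lines through the origin in R^3.
rng = np.random.default_rng(0)
lines = [v / np.linalg.norm(v) for v in rng.standard_normal((1000, 3, 1))]
x = rng.standard_normal(3)
idx, dist = nearest_subspace_bruteforce(x, lines)
print(idx, dist)
```

Even on an example this simple one could plot the speed/accuracy trade-off of the approximate method against this exact scan.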
Overall Evaluation
------------------
This paper presents a nice, simple technique for approximate nearest subspace search, and presents a pretty good evaluation.

Questions and Issues for Discussion
-----------------------------------
What questions and issues are raised by this paper? What issues do you think this paper does not address well? How can the work in this paper be extended?

One thing that is implicit in this paper is that nearest subspace search is *harder* than nearest neighbor search (apparently, its hardness is equal to the hardness of NN search *squared*). I don't understand the fundamental reason why this would be the case, and would like to have a better understanding of it. The accuracy vs. speed trade-off could also be explored further: how do you know how accurate you need to be for a given application (i.e., how should you set the epsilon parameter, how many random projections should you use, etc.)? I think Ian has figured out lots of ways to extend this paper. To his ideas I would add the question of whether it makes sense to do nearest subspace-to-subspace search (is that even a meaningful question?).
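On the "squared" point, my best guess (an assumption on my part, not something the paper states in these terms) is that reductions of this flavor lift both the query point and each subspace into a space of roughly the ambient dimension squared, e.g., via vectorized projection matrices, so the NN search runs in d^2 dimensions. A sketch of one standard such lifting, not necessarily the exact map the authors use:

```python
import numpy as np

def lift_subspace(U):
    # Subspace (orthonormal basis U, shape d x k) -> point in R^(d^2):
    # vectorize the orthogonal projector onto the subspace.
    return (U @ U.T).reshape(-1)

def lift_point(x):
    # Query point -> R^(d^2) via its vectorized outer product.
    return np.outer(x, x).reshape(-1)

# Why this works:
#   <lift_point(x), lift_subspace(U)> = trace(x x^T U U^T)
#                                     = ||x||^2 - dist(x, span(U))^2,
# and ||lift_subspace(U)|| = sqrt(k) is constant when every database
# subspace has the same dimension k, so for a fixed query x,
#   ||lift_point(x) - lift_subspace(U)||^2 = const + 2 * dist(x, span(U))^2.
# Euclidean NN search among the lifted vectors therefore returns the
# nearest subspace -- at the cost of squaring the dimension.

# Quick numeric check against the brute-force baseline sketched earlier:
rng = np.random.default_rng(0)
lines = [v / np.linalg.norm(v) for v in rng.standard_normal((1000, 3, 1))]
x = rng.standard_normal(3)
lifted = np.stack([lift_subspace(U) for U in lines])
nn = int(np.argmin(np.linalg.norm(lifted - lift_point(x), axis=1)))
print(nn)  # matches the brute-force argmin over point-to-line distances
```

If this reading is right, it would explain the hardness gap: the subspace problem inherits NN search's dependence on dimension, but in a d^2-dimensional ambient space.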