Name of Reviewer
------------------
Alex Colburn

Key Contribution
------------------
Summarize the paper's main contribution(s). Address yourself to both the class and to the authors, both of whom should be able to agree with your summary.

The key contribution is using a learning approach to find the trade-off between invariance and discriminative power in image-classification descriptors. The authors combine different low-level descriptors in a manner that produces good classification results on several standard data sets.

Novelty
--------
Does this paper describe novel work? If you deem the paper to lack novelty, please cite explicitly the published prior work which supports your claim. Citations should be sufficient to locate the paper and page unambiguously. Do not cite entire textbooks without a page reference.

I don't know; I am not familiar enough with the related work to form an opinion.

Reference to prior work
-----------------------
Please cite explicitly any prior work which the paper should cite.

Clarity
-------
Does it set out the motivation for the work, relationship to previous work, details of the theory and methods, experimental results and conclusions as well as can be expected in the limited space available? Can the paper be read and understood by a competent graduate student? Are terms defined before they are used? Is appropriate citation made for techniques used?

Yes, it is fairly straightforward. It does require some knowledge of kernel learning and SVM methods to make sense of the optimization framework.

Technical Correctness
---------------------
You should be able to follow each derivation in most papers. If there are certain steps which make overly large leaps, be specific here about which ones you had to skip.

Experimental Validation
-----------------------
For experimental papers, how convinced are you that the main parameters of the algorithms under test have been exercised? Does the test set exercise the failure modes of the algorithm?
For theoretical papers, have worked examples been used to sanity-check theorems? Speak about both positive and negative aspects of the paper's evaluation.

I assume that the test sets are sufficient. I am curious about the failure cases and why the authors think those cases failed; there isn't much discussion of failures.

Overall Evaluation
------------------
This seems like a good technique that improves state-of-the-art performance on the selected data sets.

Questions and Issues for Discussion
-----------------------------------
What questions and issues are raised by this paper? What issues do you think this paper does not address well? How can the work in this paper be extended?

The authors claim that what distinguishes one descriptor from another is the trade-off it achieves between discriminative power and invariance. This is a broad claim; is it true? Should researchers stop looking for more accurate descriptors in favor of learning mixtures of existing base descriptors? Is this a big improvement, or an incremental tweak?
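To make the "learning mixtures of base descriptors" idea concrete for the class, here is a minimal toy sketch of combining per-descriptor kernels with a learned mixing weight. All names and data here are hypothetical stand-ins: two synthetic feature "views" play the role of base descriptors, a leave-one-out nearest-neighbour score plays the role of the classifier, and a grid search over the weight stands in for the paper's convex optimization over kernel weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data with two "descriptor" views (hypothetical stand-ins
# for base descriptors, e.g. a shape channel vs. a colour channel).
n = 60
y = np.repeat([0, 1], n // 2)
view1 = rng.normal(y[:, None] * 2.0, 1.0, size=(n, 5))  # discriminative view
view2 = rng.normal(0.0, 1.0, size=(n, 5))               # uninformative view

def rbf_kernel(X, gamma=0.1):
    """Gram matrix of the RBF kernel on one descriptor view."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K1, K2 = rbf_kernel(view1), rbf_kernel(view2)

def loo_nn_accuracy(K):
    """Leave-one-out 1-NN accuracy under the kernel-induced distance."""
    d = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
    np.fill_diagonal(d, np.inf)          # exclude each point itself
    return (y[d.argmin(axis=1)] == y).mean()

# Grid search over the mixing weight stands in for the paper's
# optimization over kernel weights (d1 >= 0, d2 >= 0, d1 + d2 = 1).
weights = np.linspace(0, 1, 11)
scores = [loo_nn_accuracy(w * K1 + (1 - w) * K2) for w in weights]
best = weights[int(np.argmax(scores))]
print(f"best weight on view 1: {best:.1f}, accuracy: {max(scores):.2f}")
```

On this toy data the search should put most of the weight on the discriminative view, which is the mechanism, in miniature, by which the paper's approach suppresses base descriptors whose invariance destroys the distinctions a given task needs.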