Name of Reviewer
----------------
Evan Herbst

Key Contribution
----------------
Summarize the paper's main contribution(s). Address yourself to both the class and to the authors, both of whom should be able to agree with your summary.

They train a series of classifiers less greedily than has previously been done, by repeatedly running through the list of classifiers. They reduce the running time of this extra loop by introducing some new features at each outer iteration.

Novelty
-------
Does this paper describe novel work? If you deem the paper to lack novelty, please cite explicitly the published prior work which supports your claim. Citations should be sufficient to locate the paper and page unambiguously. Do not cite entire textbooks without a page reference.

No complaints.

Reference to prior work
-----------------------
Please cite explicitly any prior work which the paper should cite.

Clarity
-------
Does it set out the motivation for the work, relationship to previous work, details of the theory and methods, experimental results and conclusions as well as can be expected in the limited space available? Can the paper be read and understood by a competent graduate student? Are terms defined before they are used? Is appropriate citation made for techniques used?

Yes.
Motivation: too many parameters given noisy data; runtime.
Methods: they do eventually explain why they review SVMs.

Technical Correctness
---------------------
You should be able to follow each derivation in most papers. If there are certain steps which make overly large leaps, be specific here about which ones you had to skip.

Experimental Validation
-----------------------
For experimental papers, how convinced are you that the main parameters of the algorithms under test have been exercised? Does the test set exercise the failure modes of the algorithm? For theoretical papers, have worked examples been used to sanity-check theorems? Speak about both positive and negative aspects of the paper's evaluation.

The test set seems small; I don't know whether a better one is available. They choose the best graph for their evaluation metric.

Overall Evaluation
------------------
I think it's good that they were able to find some sort of convergence statement for an iterative algorithm.

Questions and Issues for Discussion
-----------------------------------
What questions and issues are raised by this paper? What issues do you think this paper does not address well? How can the work in this paper be extended?

I gather that they essentially shuffle the iterations of optimizing each classifier by itself so that the iterations are interspersed. Is there work on moving these iterations around in different patterns? Is there a reason to think this method is better than performing m iterations on each of n classifiers, all in a random order? (These are questions I have personally.) Why do they refer to classification as "online"? Isn't it always?
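
To make concrete the two schedules I am contrasting above, here is a minimal Python sketch. The Classifier class and its partial_step() method are placeholders of my own, not anything from the paper; this is only how I read the interleaved (round-robin) schedule versus the blocked, random-order alternative I am asking about.

    import random

    class Classifier:
        """Placeholder standing in for whatever model the paper trains."""
        def partial_step(self):
            pass  # one optimization step on this classifier alone

    def interleaved_schedule(classifiers, num_passes):
        # Round-robin: each outer pass takes one step on every classifier,
        # which is how I read the paper's "less greedy" training loop.
        for _ in range(num_passes):
            for clf in classifiers:
                clf.partial_step()

    def blocked_schedule(classifiers, steps_per_classifier):
        # Alternative I am asking about: finish m steps on one classifier
        # before moving on, visiting the n classifiers in a random order.
        order = list(classifiers)
        random.shuffle(order)
        for clf in order:
            for _ in range(steps_per_classifier):
                clf.partial_step()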