This seminar is meant to provide exposure to various topics in AI research. This quarter we will have weekly presentations by CSE professors and outside speakers from industry and academia about their research projects. 590A provides an exciting opportunity to learn about AI research happening at UW and at various other leading AI institutes.
Talk announcements will be made on both the cse590a and uw-ai mailing lists. The talk videos will also be made available online here.
Date | Venue | Speaker | Details |
---|---|---|---|
1/6 | EEB 045 | Dan Weld (UW) | Decision-Theoretic Planning to Control Crowdsourced Workflows (video)
Crowdsourcing labor markets (e.g., Amazon Mechanical Turk) are booming, because they enable rapid construction of complex workflows that seamlessly mix human computation with computer automation. Example applications range from photo tagging to audio-visual transcription and interlingual translation. Similarly, workflows on citizen science sites (e.g., GalaxyZoo) have allowed ordinary people to pool their effort and make interesting discoveries. Unfortunately, constructing a good workflow is difficult, because the quality of the work performed by humans is highly variable. Typically, a task designer will experiment with several alternative workflows to accomplish a task, varying the amount of redundant labor, until she devises a control strategy that delivers acceptable performance.
Fortunately, this control challenge can often be formulated as an automated planning problem ripe for algorithms from the probabilistic planning and reinforcement learning literature. I describe our recent work on the decision-theoretic control of crowdsourcing and suggest open problems for future research.
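To make the planning formulation concrete, here is a minimal sketch, in the spirit of the talk but not Weld's actual controller: a single binary labeling task where the agent maintains a Bayesian belief over the true label and repeatedly chooses between buying another noisy vote and submitting the likelier answer. The worker accuracy, vote cost, reward, and budget are all assumed toy values.

```python
from functools import lru_cache

# Assumed toy parameters (not from the talk).
ACC = 0.8      # probability a worker votes correctly
COST = 0.1     # cost of requesting one more vote
REWARD = 1.0   # reward for submitting the correct label
BUDGET = 5     # maximum number of extra votes we may request

def update(belief, vote):
    """Bayes update of P(label = 1) after observing one noisy vote."""
    like1 = ACC if vote == 1 else 1 - ACC   # P(vote | label = 1)
    like0 = 1 - ACC if vote == 1 else ACC   # P(vote | label = 0)
    p1 = like1 * belief
    return p1 / (p1 + like0 * (1 - belief))

@lru_cache(maxsize=None)
def value(belief, votes_left):
    """Expected utility of acting optimally from this belief state."""
    submit = REWARD * max(belief, 1 - belief)   # submit the likelier label now
    if votes_left == 0:
        return submit
    p_vote1 = belief * ACC + (1 - belief) * (1 - ACC)   # P(next vote = 1)
    ask = -COST + (
        p_vote1 * value(round(update(belief, 1), 6), votes_left - 1)
        + (1 - p_vote1) * value(round(update(belief, 0), 6), votes_left - 1))
    return max(submit, ask)

# From an uninformative prior, the optimal policy buys at least one vote:
# value(0.5, BUDGET) exceeds the 0.5 expected reward of guessing immediately.
print(value(0.5, BUDGET))
```

With these toy numbers, one more vote raises the probability of a correct submission from 0.5 to 0.8 at a cost of only 0.1, so the recursion prefers asking over guessing; the same dynamic-programming pattern extends to richer workflow decisions.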
1/13 | EEB 045 | Carla P. Gomes (Cornell) | Challenges for AI in Computational Sustainability (video)
Computational sustainability is a new interdisciplinary research field with the overarching goal of developing computational models, methods, and tools to help manage the balance between environmental, economic, and societal needs for a sustainable future. I will provide an overview of computational sustainability, with examples ranging from wildlife conservation and biodiversity, to poverty mitigation, to materials discovery for renewable energy. I will also highlight cross-cutting computational themes and challenges for AI at the intersection of constraint reasoning, optimization, machine learning, and citizen science and crowdsourcing.
1/20 | EEB 045 | Subbarao Kambhampati (ASU) | Challenges in Planning for Human-Robot Cohabitation
Like much of AI, research into automated planning has, for the most part, focused on planning a course of action for autonomous agents acting in isolation. Humans, if allowed in the loop at all, were mostly used as a crutch to improve planning efficiency.

The significant current interest in human-machine collaboration scenarios brings with it a fresh set of planning challenges for a planning agent, including the need to model and reason about the capabilities of the humans in the loop, the need to recognize their intentions so as to provide proactive support, the need to project its own intentions so that its behavior is explainable to the humans in the loop, and finally the need for evaluation metrics that are sensitive to human factors. These challenges are complicated by the fact that the agent has at best highly incomplete models of the intentions and capabilities of the humans.

In this talk, I will discuss these challenges in adapting and extending planning technology to support teaming and cohabitation between humans and automated agents. I will then describe our recent research efforts to address them, including novel planning models that, while incomplete, are easier to learn, and planning and plan recognition techniques that can leverage these incomplete models to provide stigmergic and proactive assistance while exhibiting "explainable" behaviors. I will conclude with an evaluation of these techniques within human-robot teaming scenarios.
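As a taste of what intention recognition with simple models can look like, here is a minimal sketch, a toy in the spirit of plan-recognition-as-planning (Ramirez and Geffner) rather than the group's actual techniques: the agent scores each candidate goal by how much of a detour the human's observed path prefix implies, under a Boltzmann-rationality assumption. The grid, goals, and observations are invented.

```python
import math

# Invented toy instance: a 4-connected grid with two candidate goals.
GOALS = {"fetch_tool": (4, 0), "exit_room": (4, 4)}
START = (0, 0)
OBSERVED = [(0, 1), (0, 2)]   # prefix of the human's path seen so far
BETA = 1.0                    # assumed rationality parameter

def dist(a, b):
    """Optimal cost between cells on an empty grid (Manhattan distance)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def path_cost(points):
    return sum(dist(p, q) for p, q in zip(points, points[1:]))

posterior = {}
for name, goal in GOALS.items():
    cost_with_obs = path_cost([START] + OBSERVED) + dist(OBSERVED[-1], goal)
    cost_optimal = dist(START, goal)
    # Boltzmann-rational human: the larger the detour that the observations
    # imply for this goal, the less probable the goal.
    posterior[name] = math.exp(-BETA * (cost_with_obs - cost_optimal))

z = sum(posterior.values())
for name, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({name} | observations) = {p / z:.3f}")
```

Here the observed steps lie on an optimal path to exit_room but force a detour for fetch_tool, so the posterior concentrates on exit_room; proactive assistance can then be planned against the most probable goal.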
1/27 | EEB 045 | Jayant Krishnamurthy (AI2) | Probabilistic Models for Learning a Semantic Parser Lexicon (video)
Lexicon learning is the first step of training a semantic parser for a new application domain, and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, the proposed probabilistic models are trained directly from question/answer pairs using EM, and the simplest model has a concave objective function that guarantees that EM converges to a global optimum. An experimental evaluation on a dataset of 4th grade science questions demonstrates that these models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work, despite using less human input. The models also obtain competitive results on Geoquery without any dataset-specific engineering.
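To illustrate the flavor of EM-based lexicon learning, here is a minimal sketch, assuming a toy setup rather than the paper's actual models: each question is a bag of words paired with the predicate implied by its answer, and the generative story picks one question word uniformly, then a predicate from that word's distribution. Like IBM Model 1, this mixture has a concave log-likelihood, so EM reaches a global optimum, the kind of guarantee highlighted in the abstract.

```python
from collections import defaultdict

# Hypothetical training pairs: (question words, predicate implied by answer).
DATA = [
    (["what", "conducts", "electricity"], "Conducts"),
    (["what", "is", "a", "conductor"], "Conducts"),
    (["what", "melts", "ice"], "Melts"),
    (["what", "makes", "ice", "melt"], "Melts"),
]
PREDS = sorted({p for _, p in DATA})
WORDS = sorted({w for ws, _ in DATA for w in ws})

# theta[w][p] = P(predicate p | word w), initialized uniformly.
theta = {w: {p: 1.0 / len(PREDS) for p in PREDS} for w in WORDS}

for _ in range(50):                      # EM iterations
    counts = defaultdict(float)          # expected (word, predicate) counts
    for words, pred in DATA:
        z = sum(theta[w][pred] for w in words)
        for w in words:
            counts[(w, pred)] += theta[w][pred] / z   # E-step responsibility
    for w in WORDS:                                    # M-step: renormalize
        total = sum(counts[(w, p)] for p in PREDS)
        if total > 0:
            theta[w] = {p: counts[(w, p)] / total for p in PREDS}

# The highest-probability predicate per word forms the learned lexicon.
for w in ("conducts", "melts", "what"):
    print(w, "->", max(theta[w], key=theta[w].get))
```

Content words like "conducts" end up pinned to one predicate, while function words like "what" stay spread across predicates, which is exactly the separation a lexicon learner needs.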
2/3 | EEB 045 | Ali Farhadi (UW) | Deja Vu: The Story of Vision & AI (video)
2/10 | EEB 045 | Sumit Gulwani (MSR) | Programming by Examples: Applications, Ambiguity Resolution, and Approach (video)
Ninety-nine percent of computer end users do not know programming and struggle with repetitive tasks. Programming by Examples (PBE) can revolutionize this landscape by enabling users to synthesize intended programs from example-based specifications.
A key technical challenge in PBE is to search for programs that are consistent with the examples provided by the user. Our efficient search methodology is based on two key ideas: (i) restriction of the search space to an appropriate domain-specific language that offers balanced expressivity and readability, and (ii) a divide-and-conquer-based deductive search paradigm that inductively reduces the problem of synthesizing a program of a certain kind that satisfies a given specification to sub-problems that refer to sub-programs or sub-specifications.

Another challenge in PBE is to resolve the ambiguity in the example-based specification. We will discuss two complementary approaches: (a) machine-learning-based ranking techniques that can pick an intended program from among those that satisfy the specification, and (b) active-learning-based user interaction models. The above concepts will be illustrated using FlashFill, FlashExtract, and FlashRelate, PBE technologies for data manipulation domains. These technologies, which have been released inside various Microsoft products, are useful for data scientists, who spend 80% of their time wrangling data. The Microsoft PROSE SDK allows easy construction of such technologies. A toy version of the deductive search idea is sketched below.
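The following is a minimal sketch of deductive, divide-and-conquer search, not the actual PROSE/FlashFill implementation: the toy DSL concatenates atoms (constant strings, absolute-index substrings, or the suffix after a delimiter character), candidate atoms for each piece of the output are deduced from the first example, and the assembled programs are verified against the remaining examples.

```python
def atoms_for(inp, piece):
    """Deduce all atomic programs that produce `piece` from `inp`."""
    progs = [("Const", piece)]                       # emit the literal text
    for i in range(len(inp)):
        for j in range(i + 1, len(inp) + 1):
            if inp[i:j] == piece:
                progs.append(("SubStr", i, j))       # copy inp[i:j]
        if inp.find(inp[i]) == i and inp[i + 1:] == piece:
            progs.append(("After", inp[i]))          # suffix after first inp[i]
    return progs

def run(prog, inp):
    """Execute a program (a tuple of atoms) on an input string."""
    out = ""
    for atom in prog:
        if atom[0] == "Const":
            out += atom[1]
        elif atom[0] == "SubStr":
            _, i, j = atom
            if j > len(inp):
                return None
            out += inp[i:j]
        else:                                        # ("After", c)
            k = inp.find(atom[1])
            if k < 0:
                return None
            out += inp[k + 1:]
    return out

def synthesize(examples, max_atoms=2):
    """Enumerate programs consistent with the first example, verify the rest."""
    inp0, out0 = examples[0]
    found = []

    def extend(prefix, remaining, depth):
        if not remaining:
            found.append(tuple(prefix))
            return
        if depth == 0:
            return
        for cut in range(1, len(remaining) + 1):          # divide the output,
            for atom in atoms_for(inp0, remaining[:cut]): # conquer one piece
                extend(prefix + [atom], remaining[cut:], depth - 1)

    extend([], out0, max_atoms)
    return [p for p in found if all(run(p, i) == o for i, o in examples[1:])]

# Hypothetical task: extract the domain of an email address.
examples = [("ada@uw.edu", "uw.edu"), ("alan@cornell.edu", "cornell.edu")]
print(synthesize(examples))
```

Note how ambiguity resolution already surfaces in miniature: the first example alone admits constants and absolute substrings, but only the delimiter-based program survives verification on the second example; in real PBE systems, learned ranking plays that disambiguating role when only one example is given.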
2/17 | EEB 045 | Jeff Bilmes (UW) | Concrete Applications of Submodular Theory in Machine Learning and NLP (video)
Machine learning is one of the most promising areas within computer science and AI that has the potential to address many of society’s challenges. It is important, however, to develop machine learning constructs that are simple to define, mathematically rich, naturally suited to real-world applications, and scalable to large problem instances. Convexity and graphical models are two such broad frameworks that are highly successful, but there are still many problem areas for which neither is suitable. This talk will discuss submodularity, a third such framework that is becoming more popular. Despite having been a key concept in economics, discrete mathematics, and optimization for over 100 years, submodularity is a relatively recent phenomenon in machine learning and AI. We are now seeing a surprisingly diverse set of real-world problems to which submodularity is applicable. In this talk, we will cover some of the more prominent examples, drawing often from the speaker's own work. This includes applications in dynamic graphical models, clustering, summarization, computer vision, natural language processing (NLP), and parallel computing. We will see how submodularity leads to efficient and scalable algorithms while simultaneously guaranteeing high-quality solutions; in addition, we will demonstrate how these concrete applications have advanced and contributed to the purely mathematical study of submodularity.
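To make the "efficient and scalable while guaranteeing high-quality solutions" point concrete, here is a minimal sketch of the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, applied to toy extractive summarization with a word-coverage objective; coverage functions are submodular, so greedy achieves at least a (1 - 1/e) fraction of the optimal value (Nemhauser, Wolsey, and Fisher, 1978). The sentences and budget are invented.

```python
# Hypothetical corpus and budget.
SENTENCES = [
    "submodular functions generalize coverage",
    "greedy selection maximizes coverage",
    "graphical models capture dependencies",
    "coverage functions are submodular",
]
K = 2  # summary budget (number of sentences)

def coverage(selected):
    """f(S) = number of distinct words covered: monotone and submodular."""
    return len({w for i in selected for w in SENTENCES[i].split()})

summary = []
for _ in range(K):
    base = coverage(summary)
    # Pick the sentence with the largest marginal gain f(S + {i}) - f(S).
    best = max((i for i in range(len(SENTENCES)) if i not in summary),
               key=lambda i: coverage(summary + [i]) - base)
    summary.append(best)

for i in summary:
    print(SENTENCES[i])
```

Because marginal gains of a submodular function only shrink as the set grows, the greedy loop can also be accelerated with lazy evaluation, which is one reason these methods scale to large summarization and data subset selection problems.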
2/24 | EEB 045 | Daniel Sheldon (UMass) | Big Data and Algorithms for Ecology and Conservation (video)
Ecological processes such as bird migration are complex, difficult to measure, and occur at the scale of continents, making it impossible for humans to grasp their broad-scale patterns by direct observation. Yet we urgently need to improve scientific understanding and design conservation practices that help protect Earth's ecosystems from threats such as climate change and human development. Fortunately, novel data sources, such as large sensor networks and millions of bird observations reported by human "citizen scientists", provide new opportunities to understand ecological phenomena at very large scales. The ability to fit models, test hypotheses, make predictions, and reason about human impacts on biological processes at this scale has the potential to revolutionize ecology and conservation.

In this talk, I will present work from two broad algorithmic frameworks designed to overcome challenges in model fitting and decision-making in large-scale ecological science. Collective graphical models permit very efficient reasoning about probabilistic models of large populations when only aggregate data is available; they apply to learning about bird migration from citizen-science data and to learning about human mobility from data that is aggregated for privacy. Stochastic network design is a framework for designing robust networks and optimizing cascading behavior in networks; it applies to spatial conservation planning, optimizing dam removal in river networks, and increasing the resilience of road networks to natural disasters.
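Here is a minimal sketch of the stochastic network design idea, on an invented toy instance rather than the talk's actual algorithms: each edge of a network survives independently with some probability, a limited budget can upgrade edges to raise their survival probability, and upgrades are chosen greedily to maximize a Monte Carlo estimate of the expected number of nodes reachable from a source (think of habitat reachable by fish after barrier removals).

```python
import random

EDGES = [(0, 1), (1, 2), (1, 3), (3, 4), (2, 4)]  # hypothetical river network
BASE_P, UPGRADED_P = 0.4, 0.95                    # edge survival probabilities
BUDGET, SAMPLES = 2, 2000
SOURCE = 0

def expected_reach(upgraded, rng):
    """Monte Carlo estimate of E[number of nodes reachable from SOURCE]."""
    total = 0
    for _ in range(SAMPLES):
        alive = [e for e in EDGES
                 if rng.random() < (UPGRADED_P if e in upgraded else BASE_P)]
        reached, frontier = {SOURCE}, [SOURCE]    # search over surviving edges
        while frontier:
            u = frontier.pop()
            for a, b in alive:
                if u in (a, b):
                    v = b if u == a else a
                    if v not in reached:
                        reached.add(v)
                        frontier.append(v)
        total += len(reached)
    return total / SAMPLES

rng = random.Random(0)
upgraded = set()
for _ in range(BUDGET):                            # greedy budget allocation
    best = max((e for e in EDGES if e not in upgraded),
               key=lambda e: expected_reach(upgraded | {e}, rng))
    upgraded.add(best)
print("upgrade plan:", sorted(upgraded))
```

The greedy-plus-sampling recipe is only a baseline; the research frameworks in the talk exploit problem structure to scale this kind of expected-value optimization far beyond what naive simulation allows.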
3/2 | EEB 045 | Ashish Sabharwal (AI2) | Beyond Information Retrieval: Semi-Structured Reasoning for Answering Science Questions (video)
Artificial intelligence and machine learning communities have made tremendous strides in the last decade. Yet the best systems to date still struggle with routine tests of human intelligence, such as standardized science exams posed as-is in natural language, even at the elementary-school level. Can we demonstrate human-like intelligence by building systems that can pass such tests? Unlike typical factoid-style question answering (QA) tasks, these tests challenge a student's ability to combine multiple facts in various ways, and appeal to broad common-sense and science knowledge.

Going beyond arguably shallow information retrieval (IR) and statistical correlation techniques, we view science QA through the lens of combinatorial optimization over a semi-formal knowledge base derived from text. Our structured inference system, formulated as an Integer Linear Program (ILP), turns out to be not only highly complementary to IR methods, but also more robust to question perturbation, as well as substantially more scalable and accurate than prior attempts using probabilistic first-order logic and Markov Logic Networks (MLNs). This talk will discuss fundamental challenges behind the science QA task, the progress we have made, and the many challenges that lie ahead.
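To give a feel for the ILP view of QA, here is a minimal sketch with toy data and heavily simplified constraints, not the actual system from the talk; the PuLP library is used purely as an illustrative solver. The program picks exactly one answer option and up to K supporting facts, maximizing a crude word-overlap alignment between facts, the question, and the chosen option, with fact-option links linearized via auxiliary binary variables.

```python
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

# Toy question, options, and "knowledge base" (all invented).
QUESTION = "which form of energy is used to cook food"
OPTIONS = ["heat", "light"]
FACTS = [
    "a stove converts electricity into heat",
    "heat energy is used for cooking food",
    "light comes from the sun",
]
K = 2   # support budget: at most K facts may be used

def overlap(a, b):
    """Crude alignment score: number of shared words."""
    return len(set(a.split()) & set(b.split()))

prob = LpProblem("science_qa", LpMaximize)
a = [LpVariable(f"ans_{j}", cat="Binary") for j in range(len(OPTIONS))]
f = [LpVariable(f"fact_{i}", cat="Binary") for i in range(len(FACTS))]
# y[i][j] = 1 iff fact i supports option j (a linearized AND of f[i], a[j]).
y = [[LpVariable(f"link_{i}_{j}", cat="Binary") for j in range(len(OPTIONS))]
     for i in range(len(FACTS))]

# Objective: selected facts should align with the question and the answer.
prob += (lpSum(overlap(FACTS[i], QUESTION) * f[i] for i in range(len(FACTS)))
         + lpSum(overlap(FACTS[i], OPTIONS[j]) * y[i][j]
                 for i in range(len(FACTS)) for j in range(len(OPTIONS))))

prob += lpSum(a) == 1              # choose exactly one answer option
prob += lpSum(f) <= K              # limited number of supporting facts
for i in range(len(FACTS)):
    for j in range(len(OPTIONS)):
        prob += y[i][j] <= f[i]    # a link requires its fact to be selected
        prob += y[i][j] <= a[j]    # ... and its option to be chosen

prob.solve()
print("answer: ", [OPTIONS[j] for j in range(len(OPTIONS)) if a[j].value() == 1])
print("support:", [FACTS[i] for i in range(len(FACTS)) if f[i].value() == 1])
```

Even in this toy, the appeal of the ILP view is visible: the solver returns not just an answer ("heat") but the facts selected to support it, which is what makes structured inference more robust to perturbation than bag-of-words retrieval.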