AitF: Collaborative Research: Algorithms for Probabilistic Inference in the Real World


Project Details


Probabilistic models are the method of choice in large-scale machine learning systems for describing complex dependencies in data and performing prediction tasks across diverse fields such as computer vision, natural language processing, and computational biology. A fundamental inference problem underlying many of these applications of graphical models is to find the most likely configuration of the probability distribution, called the maximum a posteriori (MAP) assignment. MAP inference problems for graphical models often reduce to well-studied combinatorial optimization problems that are known to be NP-hard in the worst case. However, practitioners in areas such as computer vision and machine learning have made significant strides in designing heuristic algorithms that perform real-world inference accurately and efficiently. Reconciling this large gap between theory and practice in probabilistic inference, and designing efficient MAP inference algorithms with provable guarantees, is a major challenge in machine learning. This proposal focuses on bridging the gap between theory and practice for probabilistic inference problems in large-scale machine learning systems by:

  • Understanding why many real-world instances of MAP inference problems on graphical models are tractable. The goal is to identify structural properties and methods of analysis that differentiate real-world instances from worst-case instances of these NP-hard combinatorial optimization problems, and then to design efficient algorithms with provable guarantees that apply to most instances arising in practice.
  • Understanding why heuristics such as linear programming and other convex relaxations are so successful on real-world instances. The broad goal is to develop a theory that explains when these relaxations become integral or partially integral on real-world instances.
  • Developing new scalable algorithms that take advantage of properties exhibited by real-world instances.

Keywords: beyond worst-case analysis; inference; convex relaxations; approximation algorithms; machine learning

Intellectual Merit: The project aims to bridge fundamental gaps between the theoretical and practical understanding of probabilistic inference problems and combinatorial optimization. Tackling the main goals of the project will lead to fundamental advances in both theory and practice: it will involve identifying structural properties and new beyond-worst-case paradigms that, on the one hand, capture real-world datasets and, on the other hand, allow provable guarantees on running time and inference quality.

Broader Impact: The efficient algorithms for probabilistic inference developed as part of this proposal have the potential to be transformative in machine learning, statistics, and more applied areas such as computer vision, social networks, and computational biology. More broadly, this work is likely to lead to completely new directions of study in algorithms research: a major focus will be the creation of new average-case models and algorithmic frameworks that are both amenable to theoretical study and capture the essence of real-world instances. A series of workshops will be organized to bring together the theory and machine learning communities and foster new collaborations, including a workshop hosted at Northwestern University and workshops at conferences such as NIPS, STOC or FOCS, and UAI. A public repository of real-world inference problems arising from natural language processing will also be created.
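As a concrete illustration of the convex-relaxation phenomenon the abstract refers to, the following sketch (a hypothetical toy example, not part of the proposal itself) solves the standard local-polytope LP relaxation of MAP inference on a two-node binary MRF using `scipy.optimize.linprog`. The potentials here are attractive (they reward the two variables agreeing), a regime in which the relaxation is known to be tight, so the solver returns an integral vertex that directly encodes the MAP assignment.

```python
import numpy as np
from scipy.optimize import linprog

# Local-polytope LP relaxation of MAP inference on a two-node binary MRF.
# Variable order: mu_i(0), mu_i(1), mu_j(0), mu_j(1),
#                 mu_ij(0,0), mu_ij(0,1), mu_ij(1,0), mu_ij(1,1)
theta_i = np.array([0.0, 2.0])             # unary potential on node i (favors state 1)
theta_j = np.array([1.0, 0.0])             # unary potential on node j (favors state 0)
theta_ij = np.array([1.5, 0.0, 0.0, 1.5])  # attractive pairwise term: rewards agreement

c = -np.concatenate([theta_i, theta_j, theta_ij])  # linprog minimizes, so negate

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],    # mu_i sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],    # mu_j sums to 1
    [-1, 0, 0, 0, 1, 1, 0, 0],   # sum_y mu_ij(0,y) = mu_i(0)
    [0, -1, 0, 0, 0, 0, 1, 1],   # sum_y mu_ij(1,y) = mu_i(1)
    [0, 0, -1, 0, 1, 0, 1, 0],   # sum_x mu_ij(x,0) = mu_j(0)
    [0, 0, 0, -1, 0, 1, 0, 1],   # sum_x mu_ij(x,1) = mu_j(1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
mu = res.x
# The best of the four assignments is (i=1, j=1) with value 2 + 0 + 1.5 = 3.5,
# and the LP attains it at an integral vertex of the local polytope.
print("LP optimum:", round(-res.fun, 6))
print("integral:", bool(np.allclose(mu, np.round(mu), atol=1e-6)))
```

On non-attractive (frustrated) potentials the same LP can instead return fractional, half-integral vertices; characterizing when real-world instances avoid this is exactly the kind of question the project targets.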
Effective start/end date: 9/1/16 to 8/31/22


  • National Science Foundation (CCF-1637585)

