Optimality of approximate inference algorithms on stable instances

Hunter Lang, David Sontag, Aravindan Vijayaraghavan

Research output: Contribution to conference › Paper › peer-review

7 Scopus citations

Abstract

Approximate algorithms for structured prediction problems—such as LP relaxations and the popular α-expansion algorithm (Boykov et al. 2001)—typically far exceed their theoretical performance guarantees on real-world instances. These algorithms often find solutions that are very close to optimal. The goal of this paper is to partially explain the performance of α-expansion and an LP relaxation algorithm on MAP inference in Ferromagnetic Potts models (FPMs). Our main results give stability conditions under which these two algorithms provably recover the optimal MAP solution. These theoretical results complement numerous empirical observations of good performance.
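To make the setting concrete, here is an illustrative sketch of α-expansion on a tiny ferromagnetic Potts model. All names and numbers below are hypothetical; for simplicity each expansion move is solved by brute force over binary choices, whereas the algorithm of Boykov et al. (2001) solves each move exactly with a graph cut.

```python
import itertools

# Hypothetical tiny ferromagnetic Potts model (all values illustrative).
nodes = [0, 1, 2, 3]
edges = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 2.0, (0, 3): 1.0}  # w_uv >= 0
labels = [0, 1, 2]
# unary[v][k]: cost of assigning label k to node v
unary = [[0.0, 1.0, 2.0],
         [2.0, 0.0, 1.0],
         [1.0, 2.0, 0.0],
         [0.5, 0.5, 0.0]]

def energy(x):
    """Potts energy: unary costs plus w_uv for each disagreeing edge."""
    e = sum(unary[v][x[v]] for v in nodes)
    e += sum(w for (u, v), w in edges.items() if x[u] != x[v])
    return e

def expansion_move(x, alpha):
    """Best alpha-expansion of x: every node either keeps its current
    label or switches to alpha. Brute force here; a min-cut in practice."""
    best = x
    for mask in itertools.product([0, 1], repeat=len(nodes)):
        y = tuple(alpha if m else x[v] for v, m in zip(nodes, mask))
        if energy(y) < energy(best):
            best = y
    return best

def alpha_expansion(x):
    """Cycle expansion moves over all labels until none lowers the energy."""
    improved = True
    while improved:
        improved = False
        for a in labels:
            y = expansion_move(x, a)
            if energy(y) < energy(x):
                x, improved = y, True
    return x

x0 = tuple(0 for _ in nodes)
x_hat = alpha_expansion(x0)
```

On stable instances in the sense of the paper, the local optimum `x_hat` provably coincides with the MAP solution; in general, α-expansion only guarantees a constant-factor approximation for the Potts pairwise cost.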

Original language: English (US)
Pages: 1157-1166
Number of pages: 10
State: Published - Jan 1 2018
Event: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 - Playa Blanca, Lanzarote, Canary Islands, Spain
Duration: Apr 9 2018 - Apr 11 2018

Conference

Conference: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Country/Territory: Spain
City: Playa Blanca, Lanzarote, Canary Islands
Period: 4/9/18 - 4/11/18

ASJC Scopus subject areas

  • Statistics and Probability
  • Artificial Intelligence
