TY - GEN
T1 - Convex relaxation regression
T2 - 32nd Conference on Uncertainty in Artificial Intelligence 2016, UAI 2016
AU - Azar, Mohammad Gheshlaghi
AU - Dyer, Eva L.
AU - Körding, Konrad P.
N1 - Copyright:
Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2016
Y1 - 2016
AB - Finding efficient and provable methods to solve non-convex optimization problems is an outstanding challenge in machine learning and optimization theory. A popular approach used to tackle non-convex problems is to use convex relaxation techniques to find a convex surrogate for the problem. Unfortunately, convex relaxations typically must be found on a problem-by-problem basis. Thus, providing a general-purpose strategy to estimate a convex relaxation would have a wide-reaching impact. Here, we introduce Convex Relaxation Regression (CoRR), an approach for learning convex relaxations for a class of smooth functions. The idea behind our approach is to estimate the convex envelope of a function f by evaluating f at a set of T random points and then fitting a convex function to these function evaluations. We prove that with probability greater than 1 - δ, the solution of our algorithm converges to the global optimizer of f with error O((log(1/δ)/T)^α) for some α > 0. Our approach enables the use of convex optimization tools to solve non-convex optimization problems.
UR - http://www.scopus.com/inward/record.url?scp=85001944004&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85001944004&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85001944004
T3 - 32nd Conference on Uncertainty in Artificial Intelligence 2016, UAI 2016
SP - 22
EP - 31
BT - 32nd Conference on Uncertainty in Artificial Intelligence 2016, UAI 2016
A2 - Janzing, Dominik
A2 - Ihler, Alexander
PB - Association For Uncertainty in Artificial Intelligence (AUAI)
Y2 - 25 June 2016 through 29 June 2016
ER -
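
The abstract above sketches the CoRR recipe: evaluate f at T random points, fit a convex function to those evaluations, and minimize the fit with a convex solver. Below is a minimal illustrative sketch of that recipe, not the paper's algorithm: it assumes a convex quadratic surrogate fit by least squares (the paper fits a richer convex function class), and the toy objective, sample count, and PSD-projection step are all assumptions made for illustration.

```python
# Sketch of the CoRR idea (illustrative assumptions, not the paper's method):
# 1) evaluate a non-convex f at T random points,
# 2) fit a convex surrogate to the evaluations,
# 3) minimize the surrogate, which is a convex problem.
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Toy non-convex objective: a quadratic bowl plus a sinusoidal ripple."""
    return np.sum(x**2) + 0.5 * np.sum(np.sin(5.0 * x))

T, d = 500, 2
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(T, d))   # T random evaluation points
y = np.array([f(x) for x in X])           # function evaluations of f

# Least-squares fit of q(x) = x'Ax + b'x + c.
# Features: pairwise products x_i * x_j (i <= j), coordinates x, and 1.
pairs = [(i, j) for i in range(d) for j in range(i, d)]
Phi = np.column_stack(
    [X[:, i] * X[:, j] for (i, j) in pairs] + [X, np.ones((T, 1))]
)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Rebuild (A, b, c); off-diagonal features absorb the factor of 2 in x'Ax.
A = np.zeros((d, d))
for k, (i, j) in enumerate(pairs):
    A[i, j] = A[j, i] = theta[k] if i == j else theta[k] / 2.0
b = theta[len(pairs):len(pairs) + d]
c = theta[-1]

# Project A onto the PSD cone so the surrogate q is convex.
w, V = np.linalg.eigh(A)
A = V @ np.diag(np.clip(w, 0.0, None)) @ V.T

q = lambda x: x @ A @ x + b @ x + c       # convex surrogate of f
x_hat = minimize(q, np.zeros(d)).x        # any local minimizer of q is global
print("surrogate minimizer:", x_hat)
```

Because the surrogate is convex, the final minimization is globally solvable by any off-the-shelf convex method; the paper's analysis concerns how well such a fitted surrogate tracks the convex envelope of f as T grows.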