Computational methods for optimization via simulation using Gaussian Markov Random Fields

Mark Semelhago, Barry L Nelson, Andreas Waechter, Eunhye Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

There has been recent interest, and significant success, in adapting and extending ideas from statistical learning via Gaussian process (GP) regression to optimization via simulation (OvS) problems. At the heart of all such methods is a GP representing knowledge about the objective function whose conditional distribution is updated as more of the feasible region is explored. Calculating the conditional distribution requires inverting a large, dense covariance matrix, and this is the primary bottleneck for applying GP learning to large-scale OvS problems. If the GP is a Gaussian Markov Random Field (GMRF), then the precision matrix (inverse of the covariance matrix) can be constructed to be sparse. In this paper we show how to exploit this sparse-matrix structure to extend the reach of OvS based on GMRF learning for discrete-decision-variable problems.
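
To make the sparse-precision point concrete, here is a minimal sketch in Python with NumPy/SciPy; it is not the authors' implementation. It assumes a first-order, AR(1)-type GMRF on a one-dimensional lattice of feasible points, with illustrative parameters phi and sigma2, a handful of hypothetical observation sites B with values yB, and noise-free observations for simplicity (the paper deals with noisy simulation output). The conditional mean of the unobserved points is obtained by solving sparse linear systems with the precision submatrix Q_AA; no dense covariance matrix is ever formed or inverted.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical setup: a first-order (AR(1)-type) GMRF over n feasible points
# on a 1-D lattice; phi and sigma2 are illustrative precision parameters.
n = 10_000
phi, sigma2 = 0.95, 1.0

# Sparse tridiagonal precision matrix Q of the AR(1)-type GMRF.
main = np.full(n, (1.0 + phi**2) / sigma2)
main[0] = main[-1] = 1.0 / sigma2
off = np.full(n - 1, -phi / sigma2)
Q = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

mu = np.zeros(n)  # prior mean of the GMRF

# Hypothetical observed feasible points B with (noise-free) objective values yB;
# A is the set of unobserved points.
B = np.array([0, 2500, 5000, 7500, 9999])
yB = np.array([1.2, 0.4, -0.3, 0.1, 0.9])
A = np.setdiff1d(np.arange(n), B)

# GMRF conditioning: x_A | x_B = yB is Gaussian with precision Q_AA and mean
#   mu_A - Q_AA^{-1} Q_AB (yB - mu_B).
# Only sparse submatrices and sparse solves are needed; no dense inverse.
Q_AA = Q[A, :][:, A]
Q_AB = Q[A, :][:, B]
rhs = Q_AB @ (yB - mu[B])
cond_mean_A = mu[A] - spla.spsolve(Q_AA, rhs)

print(cond_mean_A[:5])

Conditional variances would additionally require the diagonal of the inverse of Q_AA; in large problems this is typically extracted from a sparse Cholesky factorization rather than an explicit inverse, which is the kind of sparse-matrix computation the abstract refers to.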

Original language: English (US)
Title of host publication: 2017 Winter Simulation Conference, WSC 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2080-2091
Number of pages: 12
ISBN (Electronic): 9781538634288
DOIs: https://doi.org/10.1109/WSC.2017.8247941
State: Published - Jan 4 2018
Event: 2017 Winter Simulation Conference, WSC 2017 - Las Vegas, United States
Duration: Dec 3 2017 - Dec 6 2017

Other

Other: 2017 Winter Simulation Conference, WSC 2017
Country: United States
City: Las Vegas
Period: 12/3/17 - 12/6/17

Fingerprint

  • Gaussian Markov Random Field
  • Simulation Optimization
  • Computational Methods
  • Gaussian Process
  • Covariance Matrix
  • Conditional Distribution
  • Simulation-based Optimization
  • Statistical Learning
  • Large-scale Optimization
  • Inverse Matrix
  • Feasible Region
  • Sparse Matrix
  • Learning Process
  • Objective Function
  • Regression
  • Simulation
  • Learning

ASJC Scopus subject areas

  • Software
  • Modeling and Simulation
  • Computer Science Applications

Cite this

Semelhago, M., Nelson, B. L., Waechter, A., & Song, E. (2018). Computational methods for optimization via simulation using Gaussian Markov Random Fields. In 2017 Winter Simulation Conference, WSC 2017 (pp. 2080-2091). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/WSC.2017.8247941