Abstract
Determining whether a solution is of high quality (optimal or near-optimal) is a fundamental question in optimization theory and algorithms. We develop a Monte Carlo sampling-based procedure for assessing solution quality in stochastic programs. Quality is defined via the optimality gap, and the procedure's output is a confidence interval on this gap. We present a result that justifies a single-replication procedure requiring the solution of only one optimization problem, in contrast to an existing multiple-replications procedure. Although the single-replication procedure is significantly less computationally demanding, the resulting confidence interval can have low coverage probability for small sample sizes on some problems. We point to variants of this procedure that require two replications instead of one, use optimal solutions, and perform better empirically.
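To make the idea concrete, below is a minimal sketch of a single-replication optimality-gap estimate in the spirit of the abstract. The toy newsvendor problem, its cost and price parameters, the exponential demand distribution, the grid search used to solve the sample-average approximation, and the candidate solution `x_hat` are all illustrative assumptions, not taken from the paper; the gap estimator and one-sided confidence interval follow one plausible reading of a single-replication scheme, in which the same sample is used both to solve the sampling problem and to estimate the candidate's gap.

```python
# Hypothetical illustration: single-replication optimality-gap estimate
# for a toy newsvendor problem (all problem data below are assumptions).
import numpy as np

rng = np.random.default_rng(0)

cost, price = 1.0, 2.5          # purchase cost and selling price (assumed)

def f(x, xi):
    """Per-scenario cost of ordering x when demand is xi."""
    return cost * x - price * np.minimum(x, xi)

x_hat = 80.0                    # candidate solution whose quality we assess
n = 500                         # sample size for the single replication
alpha = 0.05                    # target: ~95% one-sided confidence interval

xi = rng.exponential(scale=100.0, size=n)   # demand sample (assumed distribution)

# Solve the sample-average approximation over a simple grid of order levels.
grid = np.linspace(0.0, 300.0, 3001)
saa_obj = f(grid[:, None], xi[None, :]).mean(axis=1)
x_star_n = grid[np.argmin(saa_obj)]

# Gap estimator: average per-scenario cost difference between the candidate
# and the SAA-optimal solution, computed on the *same* sample (one replication).
diffs = f(x_hat, xi) - f(x_star_n, xi)
gap_n = diffs.mean()
s_n = diffs.std(ddof=1)

z = 1.645                       # standard normal quantile for alpha = 0.05
ci_upper = gap_n + z * s_n / np.sqrt(n)
print(f"gap estimate = {gap_n:.3f}, one-sided CI on gap: [0, {max(ci_upper, 0.0):.3f}]")
```

In a multiple-replications procedure, the sampling, solving, and gap-estimation steps above would be repeated across independent samples and the results averaged; the single-replication variant sketched here solves only one optimization problem, which is the computational saving the abstract highlights.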
| Original language | English (US) |
| --- | --- |
| Journal | Dagstuhl Seminar Proceedings |
| Volume | 5031 |
| State | Published - 2005 |
| Event | Algorithms for Optimization with Incomplete Information 2005, Wadern, Germany; Jan 16 2005 → Jan 21 2005 |
Funding
This research is partially supported by the National Science Foundation under Grant DMI-0217927.
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Control and Systems Engineering