Algorithms for Nonlinear Nonconvex Optimization under Uncertainty

Project: Research project

Project Details

Description

Overview. In many real-life situations, decisions have to be made in the absence of perfect information; for example, when the future cannot be predicted with high confidence or data is not available with high accuracy. Approximating the missing information in optimization problem formulations via deterministic estimates, such as averages, often leads to solutions that are suboptimal or, worse, infeasible with high probability. Stochastic optimization techniques have been developed that take uncertainty explicitly into account. The two proposed research projects will develop new methods that broaden the class of problems that can currently be solved and address shortcomings of existing methods. This will be done by harnessing the power of nonconvex nonlinear programming (NLP) techniques, whose potential has barely been tapped in stochastic optimization.
The first project addresses the limitation that the majority of state-of-the-art methods for chance-constrained optimization seek a global optimum of an instance. As a result, the only problems that can currently be solved are those that either (i) permit a convex approximation whose solutions are not too suboptimal, or (ii) possess specific structures, such as linearity, enabling the use of enumeration schemes with potentially exponential complexity. The new sequential quadratic chance-constrained programming framework that will be developed under this project will compute local solutions of general nonconvex nonlinear chance-constrained optimization problems. By employing modern NLP techniques, such as exact merit functions with steering rules and approximate subproblem solutions, this significantly lifts the restrictions on problem structure and size. The development of an efficient solver for the quadratic chance-constrained subproblem will play an essential role.
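To make the setting concrete: a chance constraint requires P[g(x, ξ) ≤ 0] ≥ 1 − α for a random parameter ξ and risk level α. The sketch below is an illustrative Monte Carlo feasibility check for a hypothetical nonconvex constraint; the function `g`, its distribution, and all names are assumptions for illustration, not the framework developed in the project.

```python
import numpy as np

def g(x, xi):
    # Illustrative nonconvex constraint with a random coefficient xi:
    # the point x is chance-feasible if g(x, xi) <= 0 with high probability.
    return xi * x[0] ** 2 + x[1] - 1.0

def chance_violation(x, alpha, n_samples=10_000, rng=None):
    """Monte Carlo estimate of P[g(x, xi) > 0], compared to the risk level alpha."""
    rng = np.random.default_rng(rng)
    xi = rng.normal(loc=1.0, scale=0.2, size=n_samples)  # assumed distribution
    p_violate = np.mean(g(np.asarray(x), xi) > 0.0)
    return p_violate, p_violate <= alpha

# At this point the violation probability is essentially zero, so the
# candidate is feasible at risk level alpha = 0.05.
p, feasible = chance_violation([0.5, 0.3], alpha=0.05, rng=0)
```

State-of-the-art global methods handle such constraints only under convexity or special structure; the proposed SQP-type framework instead seeks local solutions of the general nonconvex case.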
The second project deals with the optimization of processes that are simulated by a computer program with noisy output, such as a differential equation solver or a discrete-event simulator. In the proposed framework, the noise is smoothed via convolution with a Gaussian kernel, and the resulting integrals are approximated by a sample-average approximation that is dynamically refined as the algorithm progresses. To ensure global convergence for nonconvex instances, state-of-the-art NLP concepts, such as trust-region model-based derivative-free algorithms, form the foundation of the novel framework. The key aspect of the proposed framework is that the recently developed multiple importance sampling technique permits the reuse of all previously computed simulation output; this is in stark contrast to existing optimization-via-simulation approaches and constitutes a major reduction in computational effort.
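The smoothing step described above can be sketched in a few lines: the smoothed objective F(x) = E[f(x + σu)], with u standard normal, is replaced by a sample average. The test function, sample sizes, and all names below are illustrative assumptions, not the project's actual solver.

```python
import numpy as np

def noisy_sim(x, rng):
    # Stand-in for a noisy black-box simulation: a nonconvex test
    # function plus small observation noise (both assumed here).
    return np.sin(3.0 * x) + 0.5 * x ** 2 + 0.01 * rng.standard_normal()

def smoothed_objective(x, sigma=0.1, n_samples=200, rng=None):
    """Sample-average approximation of the Gaussian-convolved objective
    F(x) = E[f(x + sigma * u)], u ~ N(0, 1)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(n_samples)
    vals = np.array([noisy_sim(x + sigma * ui, rng) for ui in u])
    return vals.mean()
```

In a full algorithm the sample size would grow as the iterates converge, and multiple importance sampling would reweight earlier samples so that no simulation output is discarded.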
In both projects, the convergence properties of the algorithms will be analyzed. The methods
will be implemented and their numerical performance will be evaluated on relevant problems.
Intellectual Merit. The proposed research will generate novel numerical algorithms together with
the accompanying convergence theory for general-purpose techniques that can be applied across
various disciplines. A successful outcome of the development of NLP algorithms with hot starts could spark the design of novel methods for mixed-integer nonlinear programming (MINLP). In addition, the development of decomposition
algorithms for nonconvex optimization is a timely step towards harnessing the quickly growing
parallel computing power. Most current methods are unable to exploit these new resources.
Broader Impact. The availability of NLP solvers capa
Status: Finished
Effective start/end date: 9/15/15 – 8/31/19

Funding

  • National Science Foundation (DMS-1522747)
