### Abstract

Originally accepted: 2015. Last modified: 2017.

In this paper we consider the optimization of a functional $F$ defined as the convolution of a function $f$ with a Gaussian kernel. This type of objective function arises in the optimization of the expensive output of complex computational simulations, which often exhibit some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points by minimizing a regression model of the expensive function $f$ over a trust region. The regression model is constructed from function values at sample points that are chosen randomly around iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are obtained according to an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to re-use all expensive function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional $F$. We prove that, with probability 1, each limit point of the iterates is a stationary point of $F$. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with a regular derivative-free trust-region method. The proposed algorithm is shown to be similarly efficient in the early stages of the optimization and to overcome convergence problems of the regular method, which can become trapped in spurious local minima induced by the noise.
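The two main ingredients described in the abstract, the Gaussian-smoothed objective $F(x) = \mathbb{E}[f(x + \sigma Z)]$ with $Z \sim N(0, I)$ and the importance-weighted quadratic regression model, can be illustrated with a short sketch. The example below is not the authors' implementation; the test function `f`, the kernel width `sigma`, the sample centers, and the use of balance-heuristic weights are illustrative assumptions chosen to show the structure of the approach.

```python
# Minimal sketch (assumptions, not the paper's code) of:
#  (1) the Gaussian-smoothed objective F = f convolved with a Gaussian kernel, and
#  (2) a quadratic regression model of f fitted from samples drawn around several
#      past iterates, re-weighted with multiple importance sampling (balance
#      heuristic) so that all expensive evaluations can be re-used.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3          # standard deviation of the Gaussian smoothing kernel (assumed)
dim = 2

def f(x):
    """Cheap stand-in for an expensive simulation with deterministic 'noise'."""
    return np.sum(x**2) + 0.05 * np.sin(40.0 * np.sum(x))

def smoothed_F(x, n_mc=2000):
    """Monte Carlo estimate of F(x) = E[f(x + sigma*Z)], Z ~ N(0, I)."""
    z = rng.standard_normal((n_mc, dim))
    return np.mean([f(x + sigma * zi) for zi in z])

def gauss_pdf(y, mean, sd):
    """Density of an isotropic Gaussian N(mean, sd^2 * I)."""
    d = y - mean
    return np.exp(-0.5 * np.dot(d, d) / sd**2) / ((2.0 * np.pi * sd**2) ** (dim / 2))

# Expensive function values collected around several past iterates / trial points.
centers = [np.array([1.0, 1.0]), np.array([0.6, 0.8]), np.array([0.4, 0.5])]
n_per_center = 40
samples, values = [], []
for c in centers:
    for _ in range(n_per_center):
        y = c + sigma * rng.standard_normal(dim)
        samples.append(y)
        values.append(f(y))
samples = np.array(samples)
values = np.array(values)

# Balance-heuristic multiple importance sampling weights with respect to the
# Gaussian sampling density centered at the current iterate x_k.
x_k = centers[-1]
target = np.array([gauss_pdf(y, x_k, sigma) for y in samples])
mixture = np.array([sum(n_per_center * gauss_pdf(y, c, sigma) for c in centers)
                    for y in samples])
weights = target / mixture

# Weighted least-squares fit of a quadratic model
#   m(s) = c0 + g^T s + 0.5 s^T H s,  with s = y - x_k,
# i.e. minimize sum_i w_i * (m(y_i - x_k) - f(y_i))^2.
def quad_features(s):
    quad = np.outer(s, s)[np.triu_indices(dim)]   # upper-triangular products
    return np.concatenate(([1.0], s, quad))

A = np.array([quad_features(y - x_k) for y in samples])
W = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(W * A, np.sqrt(weights) * values, rcond=None)
print("model gradient at x_k ~", coef[1:1 + dim])
print("smoothed F(x_k)       ~", smoothed_F(x_k))
```

In the algorithm sketched by the abstract, a model of this kind would be minimized over a trust region to produce the next trial point; here only the regression step is shown.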

Original language | English (US)
---|---
Publisher | Unknown Publisher
State | Published - 2017


### Cite this

Maggiar, A., Waechter, A., Dolinskaya, I. S., & Staum, J. C. (2017). *A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling*. Unknown Publisher.

Research output: Book/Report › Other report