A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling

Alvaro Maggiar, Andreas Waechter, Irina S Dolinskaya, Jeremy C Staum

Research output: Book/Report › Other report

Abstract

Originally accepted: 2015
Last modified: 2017

In this paper we consider the optimization of a functional $F$ defined as the convolution of a function $f$ with a Gaussian kernel. This type of objective function arises in the optimization of the expensive output of complex computational simulations, which often exhibit some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points by minimizing a regression model of the expensive function $f$ over a trust region. The regression model is constructed from function values at sample points chosen randomly around iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are determined by an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to re-use all expensive function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional $F$. We prove that, with probability 1, each limit point of the iterates is a stationary point of $F$. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with a regular derivative-free trust-region method. They demonstrate that the proposed algorithm performs with similar efficiency in the early stages of the optimization and is able to overcome convergence problems of the regular method, which may become trapped in spurious local minima induced by the noise.
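For concreteness, the smoothed objective the abstract refers to is the Gaussian convolution

$$F(x) = (f * \phi_\sigma)(x) = \int_{\mathbb{R}^n} f(x+u)\,\phi_\sigma(u)\,du = \mathbb{E}_{u \sim N(0,\sigma^2 I)}\bigl[f(x+u)\bigr],$$

where $\phi_\sigma$ denotes the density of $N(0,\sigma^2 I)$; the symbol $\sigma$ for the kernel width is our notation, not taken from the paper. The Python sketch below illustrates, under stated assumptions rather than as the authors' implementation, the two ingredients the abstract combines: re-weighting samples drawn around past iterates with deterministic-mixture (balance-heuristic) multiple importance sampling weights, and fitting a weighted quadratic regression model of the kind a trust-region step would then minimize. All concrete choices (the toy function f, sigma, the iterates, the sample counts) are illustrative.

# Minimal sketch (not the authors' code) of MIS-weighted quadratic regression
# over samples collected around several iterates of a smoothing-based method.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy stand-in for an expensive simulation output with deterministic,
    # high-frequency "noise" on top of a smooth trend.
    return np.sum(x**2) + 0.05 * np.sin(50.0 * np.sum(x))

sigma = 0.3          # assumed width of the Gaussian smoothing kernel
dim = 2
iterates = [np.array([1.5, -1.0]), np.array([1.0, -0.6])]  # past and current iterate
n_per_iterate = 40   # samples drawn around each iterate (assumed)

# Draw u ~ N(x_k, sigma^2 I) around every iterate and evaluate f once per point.
samples, values = [], []
for xk in iterates:
    pts = xk + sigma * rng.standard_normal((n_per_iterate, dim))
    samples.append(pts)
    values.append(np.array([f(p) for p in pts]))
X = np.vstack(samples)
y = np.concatenate(values)

def gauss_pdf(pts, mean, s):
    d = pts - mean
    return np.exp(-0.5 * np.sum(d**2, axis=1) / s**2) / ((2 * np.pi * s**2) ** (pts.shape[1] / 2))

# Deterministic-mixture MIS weights: each sample is weighted by the target
# density centered at the current iterate divided by the equal-weight mixture
# of all sampling densities actually used. This is what lets every previously
# computed function value be re-used at the current iterate.
x_cur = iterates[-1]
target = gauss_pdf(X, x_cur, sigma)
mixture = np.mean([gauss_pdf(X, xk, sigma) for xk in iterates], axis=0)
w = target / mixture

# Weighted least-squares fit of a full quadratic model around x_cur. As the
# sample size grows, such a weighted regression approaches the second-order
# Taylor model of the smoothed objective F = f * phi_sigma (the abstract's claim).
def quad_features(pts, center):
    d = pts - center
    cross = np.column_stack([d[:, i] * d[:, j]
                             for i in range(d.shape[1]) for j in range(i, d.shape[1])])
    return np.column_stack([np.ones(len(pts)), d, cross])

Phi = quad_features(X, x_cur)
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * y, rcond=None)
print("model value at current iterate:   ", coef[0])
print("model gradient at current iterate:", coef[1:1 + dim])

Minimizing this quadratic model over a trust-region ball around x_cur would then produce the trial point; the balance-heuristic ratio target/mixture is the re-use mechanism highlighted in the abstract.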
Original language: English (US)
Publisher: Unknown Publisher
State: Published - 2017

Fingerprint

Trust Region Algorithm
Derivative-Free
Importance Sampling
Convolution
Regression Model
Sample Point
Optimization
Iterate
Value Function
Derivative-Free Methods
Trust Region Method
Sampling Strategy
Gaussian Kernel
Trust Region
Computational Simulation
Limit Point
Stationary Point
Local Minima
Computational Experiments
Reuse

Cite this

@book{ad7ad360091a474d929ac33170f28555,
title = "A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling",
abstract = "Originally accepted: 2015Last modified: 2017In this paper we consider the optimization of a functional $F$ defined as the convolution of a function f with a Gaussian kernel. This type of objective function is of interest in the optimization of the expensive output of complex computational simulations, which often present some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points from the minimization of a regression model of the expensive function f over a trust region. The regression model is constructed from function values at sample points that are chosen randomly around iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are obtained according to an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to re-use all expensive function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional F. We prove that, with probability 1, each limit point of the iterates is a stationary point of F. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with a regular derivative-free trust-region method. It is demonstrated that the proposed algorithm performs similarly efficient in the early stages of the optimization and is able to overcome convergence problems of the regular method, which might get trapped in spurious local minima induced by the noise.",
author = "Alvaro Maggiar and Andreas Waechter and Dolinskaya, {Irina S} and Staum, {Jeremy C}",
year = "2017",
language = "English (US)",
publisher = "Unknown Publisher",

}

A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling. / Maggiar, Alvaro; Waechter, Andreas; Dolinskaya, Irina S; Staum, Jeremy C.

Unknown Publisher, 2017.


TY - BOOK

T1 - A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling

AU - Maggiar, Alvaro

AU - Waechter, Andreas

AU - Dolinskaya, Irina S

AU - Staum, Jeremy C

PY - 2017

Y1 - 2017

N2 - In this paper we consider the optimization of a functional $F$ defined as the convolution of a function $f$ with a Gaussian kernel. This type of objective function arises in the optimization of the expensive output of complex computational simulations, which often exhibit some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points by minimizing a regression model of the expensive function $f$ over a trust region. The regression model is constructed from function values at sample points chosen randomly around iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are determined by an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to re-use all expensive function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional $F$. We prove that, with probability 1, each limit point of the iterates is a stationary point of $F$. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with a regular derivative-free trust-region method. They demonstrate that the proposed algorithm performs with similar efficiency in the early stages of the optimization and is able to overcome convergence problems of the regular method, which may become trapped in spurious local minima induced by the noise.

AB - In this paper we consider the optimization of a functional $F$ defined as the convolution of a function $f$ with a Gaussian kernel. This type of objective function arises in the optimization of the expensive output of complex computational simulations, which often exhibit some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points by minimizing a regression model of the expensive function $f$ over a trust region. The regression model is constructed from function values at sample points chosen randomly around iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are determined by an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to re-use all expensive function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional $F$. We prove that, with probability 1, each limit point of the iterates is a stationary point of $F$. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with a regular derivative-free trust-region method. They demonstrate that the proposed algorithm performs with similar efficiency in the early stages of the optimization and is able to overcome convergence problems of the regular method, which may become trapped in spurious local minima induced by the noise.

M3 - Other report

BT - A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling

PB - Unknown Publisher

ER -