A continuous Newton-type method for unconstrained optimization

Lei Hong Zhang, C. T. Kelley, Lizhi Liao*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

13 Citations (Scopus)

Abstract

In this paper, we propose a continuous Newton-type method in the form of an ordinary differential equation by combining the negative gradient and the Newton direction. We show that for a general function f(x), our method converges globally to a connected subset of the stationary points of f(x) under some mild conditions, and converges globally to a single stationary point for a real analytic function. The method reduces to the exact continuous Newton method if the Hessian matrix of f(x) is uniformly positive definite. We report numerical results for the new method on a set of standard test problems from the literature.
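The record does not give the paper's exact formulation, but the idea of blending the negative gradient with the Newton direction can be sketched as a regularized Newton flow dx/dt = -(∇²f(x) + λI)⁻¹∇f(x), which behaves like gradient flow for large λ and like the continuous Newton method as λ → 0. The sketch below integrates this ODE with explicit Euler on the Rosenbrock function; the function names, the fixed shift λ, and the step size are illustrative assumptions, not the authors' scheme (the paper's pseudotransient-continuation setting would adapt λ along the trajectory).

```python
import numpy as np

def rosenbrock_grad(x):
    # Gradient of f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def rosenbrock_hess(x):
    # Hessian of the Rosenbrock function
    return np.array([
        [1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])

def continuous_newton(x0, grad, hess, lam=1.0, dt=0.05, tol=1e-8, max_steps=100000):
    """Explicit-Euler integration of the regularized Newton flow
        dx/dt = -(H(x) + lam * I)^{-1} grad(x).
    The shift lam*I interpolates between the negative gradient
    (lam large) and the Newton direction (lam -> 0); both values
    used here are illustrative, not taken from the paper."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop near a stationary point
            break
        H = hess(x) + lam * np.eye(len(x))
        x = x - dt * np.linalg.solve(H, g)
    return x

# Standard Rosenbrock starting point; the flow should track the
# valley to the minimizer (1, 1).
x_star = continuous_newton([-1.2, 1.0], rosenbrock_grad, rosenbrock_hess)
```

A fixed λ keeps the sketch short; a practical pseudotransient-continuation code would instead grow the effective step (shrink λ) as the residual norm decreases.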

Original language: English
Pages (from-to): 259-277
Number of pages: 19
Journal: Pacific Journal of Optimization
Volume: 4
Issue number: 2
Publication status: Published - May 2008

Scopus Subject Areas

  • Control and Optimization
  • Computational Mathematics
  • Applied Mathematics

User-Defined Keywords

  • Continuous method
  • Global convergence
  • ODE method
  • Pseudotransient continuation
  • Unconstrained optimization
