TY - JOUR
T1 - Stability Analysis of Gradient-Based Neural Networks for Optimization Problems
AU - Han, Qiaoming
AU - Liao, Li-Zhi
AU - Qi, Houduo
AU - Qi, Liqun
N1 - Funding Information:
This research was supported in part by the Australian Research Council, the Research Grants Council of Hong Kong, grants FRG/98-99/II-29 and FRG/99-00/II-23 from Hong Kong Baptist University, and the State Key Lab of Scientific and Engineering Computing in P. R. China.
PY - 2001/4
Y1 - 2001/4
AB - This paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With this approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory from any initial point converges to an equilibrium point satisfying the second-order necessary optimality conditions for optimization problems. Promising simulation results for the refined gradient-based neural network on several problems are also reported.
KW - Asymptotic stability
KW - Equilibrium point
KW - Equilibrium set
KW - Exponential stability
KW - Gradient-based neural network
UR - http://www.scopus.com/inward/record.url?scp=0035306809&partnerID=8YFLogxK
U2 - 10.1023/A:1011245911067
DO - 10.1023/A:1011245911067
M3 - Journal article
AN - SCOPUS:0035306809
SN - 0925-5001
VL - 19
SP - 363
EP - 381
JO - Journal of Global Optimization
JF - Journal of Global Optimization
IS - 4
ER -