On the stability of Lagrange programming neural networks for satisfiability problems of propositional calculus
Published in: Neurocomputing (Amsterdam), Vol. 13, No. 2, pp. 119-133
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-10-1996
Summary: Hopfield-type neural networks for solving difficult combinatorial optimization problems have used gradient descent algorithms to solve constrained optimization problems via penalty functions. However, it is well known that convergence to local minima is inevitable in these approaches. Boltzmann machines have used a simulated annealing technique and were proven theoretically to be able to find global minima; however, they require large computational resources. Recently, Lagrange programming neural networks have been proposed. They differ from the gradient descent algorithms by using anti-descent terms in their dynamical differential equations. In this paper we theoretically analyze the stability and the convergence property of one of the Lagrange programming neural networks (LPPH) when it is applied to a satisfiability problem (SAT) of propositional calculus. We prove that (1) the solutions of the SAT are the equilibrium points of the LPPH and vice versa, and (2) if the given expression is satisfiable, there is at least one stable equilibrium point of the LPPH.
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/0925-2312(95)00087-9
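The abstract's central mechanism, gradient descent on the variables combined with an anti-descent (ascent) term on the Lagrange multipliers, can be illustrated with a small simulation. Below is a minimal Python sketch of such saddle-point dynamics on a toy CNF instance. The clause encoding, the unsatisfaction measure s_r(x), the step size, and the initialization are all illustrative assumptions, not the paper's exact LPPH equations. At an equilibrium every multiplier derivative s_r(x) vanishes, i.e. every clause is satisfied, which mirrors claim (1) of the summary.

```python
import numpy as np

# Toy CNF instance (an illustrative assumption, not from the paper):
# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3).
# Each clause lists signed, 1-based variable indices.
clauses = [[1, 2], [-1, 3], [-2, -3]]
n_vars = 3

def unsat_degree(x, clause):
    """Degree of unsatisfaction s_r(x) of one clause: the product of
    (1 - literal value); it is 0 exactly when some literal is fully true."""
    s = 1.0
    for lit in clause:
        v = x[abs(lit) - 1]
        s *= (1.0 - v) if lit > 0 else v
    return s

def grad_x(x, w):
    """Numerical gradient of L(x, w) = sum_r w_r * s_r(x) with respect to x."""
    g, eps = np.zeros_like(x), 1e-6
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        Lp = sum(wr * unsat_degree(xp, c) for wr, c in zip(w, clauses))
        Lm = sum(wr * unsat_degree(xm, c) for wr, c in zip(w, clauses))
        g[i] = (Lp - Lm) / (2.0 * eps)
    return g

# Saddle-point dynamics, Euler-integrated:
#   dx/dt = -grad_x L(x, w)   (descent on the variables)
#   dw/dt = +s_r(x)           (ascent: the "anti-descent" term; a multiplier
#                              keeps growing while its clause is unsatisfied)
rng = np.random.default_rng(0)
x = rng.uniform(0.25, 0.75, n_vars)  # continuous relaxation of truth values
w = np.ones(len(clauses))            # one Lagrange multiplier per clause
dt = 0.05
for _ in range(5000):
    s = np.array([unsat_degree(x, c) for c in clauses])
    if s.max() < 1e-6:               # all clauses satisfied -> equilibrium
        break
    x = np.clip(x - dt * grad_x(x, w), 0.0, 1.0)
    w += dt * s

print("assignment:", np.round(x).astype(int), "unsat degrees:", np.round(s, 6))
```

The multipliers here act as per-clause penalty weights that grow automatically while a clause remains violated, which is what lets such dynamics escape the fixed-penalty local minima the abstract attributes to Hopfield-type networks.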