This is the mail archive of the
gsl-discuss@sources.redhat.com
mailing list for the GSL project.
Re: High-dimensional Minimization without analytical derivatives
- From: "Robert G. Brown" <rgb at phy dot duke dot edu>
- To: James Theiler <jt at lanl dot gov>
- Cc: John Lamb <J dot D dot Lamb at btinternet dot com>, <gsl-discuss at sources dot redhat dot com>
- Date: Sun, 5 Sep 2004 17:21:46 -0400 (EDT)
- Subject: Re: High-dimensional Minimization without analytical derivatives
On Sun, 5 Sep 2004, James Theiler wrote:
> On Sat, 4 Sep 2004, John Lamb wrote:
>
> ] Calling simulated annealing SA is also potentially problematic because
> ] there is a Stochastic Approximation algorithm due to Kiefer and
> ] Wolfowitz that is commonly known as SA. It can be used on nonstochastic
> ] problems by adding noise and has guaranteed convergence to a global
> ] optimum, albeit the convergence is often too slow for SA to be practical.
>
> Just a minor point, but SA (stochastic approximation) does not
> guarantee convergence to a global minimum. There are theorems
> about convergence of SA (simulated annealing) to a global minimum,
> but that convergence is arguably too slow to be practical.
>
> My own opinion[*] is that the only practical way to guarantee
> convergence to the global minimum is to make sure you are minimizing a
> convex function -- that may not be saying a lot, since for a convex
> function, there is only one local minimum and it is the global
> minimum. And it certainly doesn't help if the function you need to
> minimize doesn't happen to be convex. Sorry if this is getting far
> afield of discussions about how to use GSL.
Or to put it more bluntly, if you could guarantee convergence to a
global minimum for systems of high dimensionality, I think there are a
few serious physics problems waiting for you and a lifetime position at
the Santa Fe Institute to boot.
This is a >>hard<< problem, and Simulated Annealing and Genetic
Algorithms are some of the very few approaches that do something
approaching a good job. That is, they'll often give a "good" minimum,
not necessarily or generally the "best" minimum.
They also need a gradient methodology of one sort or another as a
finishing step, as they are better suited to finding the right (or a
"good") valley than to descending efficiently to the bottom of the
valley they are in.
> jt
>
> [*] whenever you combine "practical" and "guarantee" into a sentence,
> I think you are inevitably in the realm of opinion.
I practically guarantee that this is a true statement ;-)
rgb
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb@phy.duke.edu