Re: multidimensional optimization
Fabrice Rossi writes:
> - parameters:
> Is it better to have a big structure that describes the parameters
> of the one-dimensional part (as that is going to be contained in
> the gsl_min_fX_G_minimizer structure that holds the base state
> of the descent algorithm), or to pass these parameters to the
> iterate function (which runs one iteration of the descent
> algorithm)?
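To make the two options concrete, here is a hedged sketch in C. Every
name except gsl_min_fX_G_minimizer is invented purely for illustration;
this shows only the shape of the trade-off, not a proposed interface.

    /* Hypothetical sketch of the two designs being weighed above.
       All names except gsl_min_fX_G_minimizer are invented here. */

    typedef struct {
        double step_size;      /* initial step for the 1d search       */
        double tol_1d;         /* stopping tolerance for the 1d search */
    } min_1d_params;

    typedef struct {
        /* ... base state of the descent algorithm ... */
        min_1d_params p;       /* option A: parameters kept in the state */
    } gsl_min_fX_G_minimizer;

    /* option A: iterate reads the 1d parameters from the state */
    int iterate_stored(gsl_min_fX_G_minimizer *s);

    /* option B: the caller passes the 1d parameters on every call */
    int iterate_passed(gsl_min_fX_G_minimizer *s, const min_1d_params *p);

Option A keeps the iterate call simple; option B lets the caller vary
the 1d settings between iterations.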
It might be important for the user to follow the progress of the 1d
minimisations, so perhaps the algorithm should be split at two levels.
A separate inner loop for the 1d minimisation could be used like this:
do {
    get gradient
    do {
        minimise in 1d
    } while (!converged_1d)
    ...
} while (!converged)
I don't think that the user should call the existing gsl_min
functions; instead we would provide a suitable minimisation function
(maybe a wrapper around an existing gsl_min function) that takes care
of the vector arithmetic, so the 1d minimisation is as simple as the
outer loop.
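To make that nested structure concrete, here is a minimal
self-contained C sketch. It does not use any GSL interface: the
objective, the names, and the crude backtracking inner loop are all
illustrative stand-ins for whatever 1d minimiser the library would
eventually supply.

    #include <stdio.h>
    #include <math.h>

    /* example objective f(x) = (x0 - 1)^2 + 2 (x1 - 2)^2 */
    static double f(const double x[2])
    {
        return (x[0] - 1.0) * (x[0] - 1.0)
             + 2.0 * (x[1] - 2.0) * (x[1] - 2.0);
    }

    static void gradient(const double x[2], double g[2])
    {
        g[0] = 2.0 * (x[0] - 1.0);
        g[1] = 4.0 * (x[1] - 2.0);
    }

    int main(void)
    {
        double x[2] = { 0.0, 0.0 };     /* starting point */
        double g[2];
        int iter = 0;

        do {                            /* outer loop: one descent step */
            gradient(x, g);             /* get gradient */

            double fx = f(x);
            double t = 1.0;             /* trial step along -g */

            do {                        /* inner loop: crude 1d minimisation */
                double trial[2] = { x[0] - t * g[0], x[1] - t * g[1] };
                if (f(trial) < fx) {    /* decrease found: accept the step */
                    x[0] = trial[0];
                    x[1] = trial[1];
                    break;
                }
                t *= 0.5;               /* otherwise shrink the step */
            } while (t > 1e-12);

            iter++;
        } while (sqrt(g[0] * g[0] + g[1] * g[1]) > 1e-8 && iter < 1000);

        printf("minimum near (%g, %g) after %d iterations\n",
               x[0], x[1], iter);
        return 0;
    }

In the real library the inner loop would of course be the wrapped
gsl_min-style minimiser rather than this simple step-halving.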
> - iteration:
> If I focus on descent algorithms, I think I don't need a
> different iterate function for each algorithm, basically because
> the only thing that really differs between two such algorithms is
> the way the descent direction is calculated. So I think it's better
> to have a direction function that computes a new descent
> direction. I don't know of any algorithm that needs anything other
> than the value of the function and its gradient at the current
> estimate of the minimum, plus the previous descent directions. I
> guess I don't need to provide the function itself to the algorithm
> as long as I give it the current gradient and value. Any objection?
No objection. If there is a more natural way to decompose the
algorithm, then it is better to use that.
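As an illustration, here is a hedged C sketch of what such a direction
function might look like. The interface and names are hypothetical,
not an agreed design; the second update shown is the standard
Fletcher-Reeves conjugate-gradient rule, which uses exactly the inputs
listed above (current gradient, previous gradient, previous
direction), while steepest descent is the degenerate case that ignores
the history.

    #include <stddef.h>

    /* hypothetical interface: given the current gradient g, the
       previous gradient g_prev and the previous descent direction d,
       overwrite d with the new descent direction */
    typedef void (*direction_fn)(size_t n, const double *g,
                                 const double *g_prev, double *d);

    /* steepest descent: d = -g, ignoring the history */
    static void steepest_direction(size_t n, const double *g,
                                   const double *g_prev, double *d)
    {
        (void) g_prev;
        for (size_t i = 0; i < n; i++)
            d[i] = -g[i];
    }

    /* Fletcher-Reeves conjugate gradient:
       beta = (g.g) / (g_prev.g_prev),  d_new = -g + beta * d_old */
    static void fletcher_reeves_direction(size_t n, const double *g,
                                          const double *g_prev, double *d)
    {
        double num = 0.0, den = 0.0;
        for (size_t i = 0; i < n; i++) {
            num += g[i] * g[i];
            den += g_prev[i] * g_prev[i];
        }
        double beta = (den > 0.0) ? num / den : 0.0;
        for (size_t i = 0; i < n; i++)
            d[i] = -g[i] + beta * d[i];
    }

A generic iterate function would then call whichever direction_fn the
chosen algorithm registered, so only this one function differs between
methods.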