This is the mail archive of the gsl-discuss@sourceware.cygnus.com mailing list for the GSL project.



Re: multidimensional optimization


Fabrice Rossi writes:
 > 1) convergence test is trickier. In general (for Neural Network
 > applications) we test for numerical nullity of the gradient. This
 > means the gradient must be known, and therefore the convergence
 > test prepares part of the next iteration.  We can also use a
 > last-move test.

It's true that the decomposition of "iteration" and "test" would not
work if the desired convergence test is algorithm-dependent.

However, quantities such as the gradient and the "last move" can be part
of the algorithm-independent state. This way they are available for testing.

For example, the following struct could be used for a gradient solver:

typedef struct
  {
    const gsl_min_fgsolver_type * type;  /* algorithm-specific method table */
    gsl_min_fgsystem * fgsystem;         /* user-supplied function and gradient */
    double f;            /* function value */
    double f1;           /* previous function value */
    double * x;          /* position */
    double * x1;         /* previous position */
    double * g;          /* gradient */
    double * g1;         /* previous gradient */
    void * state;        /* algorithm-dependent state */
  }
gsl_min_fgsolver;
        
These quantities would seem to be sufficient for most convergence
tests. The convergence test itself would not compute any quantities
like the gradient; those would be computed in the main iteration.
That's how I imagine it working, anyway.
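For illustration, a test on the gradient norm and the size of the last
step could then be written purely in terms of the public fields above.
The function below is only a sketch of the idea, not an existing GSL
routine, and its name and tolerances are invented:

#include <stddef.h>
#include <math.h>
#include <gsl/gsl_errno.h>

/* Hypothetical convergence test: succeeds when both the gradient norm
   and the size of the last step fall below user-supplied tolerances.
   It only reads fields of gsl_min_fgsolver; it computes nothing new. */
int
gsl_min_test_gradient_delta (const gsl_min_fgsolver * s, size_t n,
                             double epsgrad, double epsstep)
{
  double gnorm = 0.0, step = 0.0;
  size_t i;

  for (i = 0; i < n; i++)
    {
      gnorm += s->g[i] * s->g[i];
      step  += (s->x[i] - s->x1[i]) * (s->x[i] - s->x1[i]);
    }

  if (sqrt (gnorm) < epsgrad && sqrt (step) < epsstep)
    return GSL_SUCCESS;

  return GSL_CONTINUE;
}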

 >  2) restarting. Conjugate gradient methods must be restarted to work
 > properly (the descent direction must be reset to the gradient from
 > time to time). Systematic restarting is not a good idea in general,
 > which is why some algorithms compute a last step size to trigger
 > restarting, or sometimes use other techniques.  Unfortunately, the
 > test that triggers restarting generally overlaps with the
 > convergence test.

It's possible for the iteration function to return an error code,
suggesting a restart or indicating that the internal minimisation
has failed. See roots/newton.c (newton_iterate) for an example where
the iteration function returns an error code when it encounters a
difficulty.
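A caller's loop could then react to such a code without the test and
the restart logic getting tangled up. In the sketch below the error
code GSL_ERESTART and the gsl_min_fgsolver_* functions are invented
names, used only to illustrate the control flow:

#include <stddef.h>
#include <gsl/gsl_errno.h>

/* Sketch of a driver loop, assuming an iterate function that returns a
   hypothetical GSL_ERESTART code when the conjugate-gradient direction
   should be reset to the steepest-descent direction. */
int
minimise (gsl_min_fgsolver * s, size_t n, size_t max_iter)
{
  int status;
  size_t iter = 0;

  do
    {
      iter++;
      status = gsl_min_fgsolver_iterate (s);

      if (status == GSL_ERESTART)        /* hypothetical restart signal */
        {
          gsl_min_fgsolver_restart (s);  /* hypothetical: reset d to -g */
          status = GSL_CONTINUE;
          continue;
        }
      else if (status != GSL_SUCCESS)    /* genuine failure */
        break;

      /* algorithm-independent test on the public state */
      status = gsl_min_test_gradient_delta (s, n, 1e-6, 1e-8);
    }
  while (status == GSL_CONTINUE && iter < max_iter);

  return status;
}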

 >  3) bracketing for the one-dimensional algorithm. When we have a
 > descent direction d and a current guess x, we must optimize the
 > one-dimensional function g(e)=f(x+ed). Basically, we don't have an
 > initial bracketing of the minimum (as required by the golden
 > section, for instance).  In general, we find it by trying a fixed
 > value for e. By comparing g(e) with the already known g(0), we know
 > whether we need to find a smaller value e'\in]0,e[ such that
 > g(e')<g(e) and g(e')<g(0), or a bigger value.
 >  This part can be tricky and can fail. Once again, failure can
 > suggest that restarting is a good idea, or that stopping is.
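To make the bracketing step concrete, here is one way the fixed
trial-step search described above could look. Everything in it (the
function name, the arguments, the doubling/halving strategy and the
retry limit) is an illustrative sketch of the idea, not code from GSL:

#include <gsl/gsl_errno.h>

/* Sketch of the bracketing step for the one-dimensional function
   g(e) = f(x + e d).  Starting from a fixed trial step *e it looks for
   an interior point *e_m with g(*e_m) < g(0) and g(*e_m) < g(*e), as
   required by a golden-section search on [0, *e].  A failure to
   bracket is reported so the caller can decide to restart or stop. */
int
bracket_line_minimum (double (*g) (double, void *), void *params,
                      double *e, double *e_m, int max_tries)
{
  double g0 = g (0.0, params);
  double ge = g (*e, params);
  double g_new, g_mid;
  int tries;

  for (tries = 0; tries < max_tries; tries++)
    {
      if (ge < g0)
        {
          /* g is still decreasing at *e: keep this point as the
             interior candidate and push the upper end outwards */
          *e_m = *e;
          *e *= 2.0;
          g_new = g (*e, params);
          if (g_new > ge)
            return GSL_SUCCESS;   /* (0, *e_m, *e) brackets a minimum */
          ge = g_new;
        }
      else
        {
          /* the trial step overshot: try an interior point closer to 0 */
          *e_m = 0.5 * (*e);
          g_mid = g (*e_m, params);
          if (g_mid < g0)
            return GSL_SUCCESS;   /* (0, *e_m, *e) brackets a minimum */
          *e = *e_m;              /* shrink the trial step and retry */
          ge = g_mid;
        }
    }

  return GSL_FAILURE;             /* could not bracket: restart or stop */
}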
