This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.
Re: [Help-gsl] questions about gsl_multimin_f*_minimizers (efficiency,drawbacks)
- From: Brian Gough <bjg at network-theory dot co dot uk>
- To: Marc Baaden <baaden at smplinux dot de>
- Cc: help-gsl at gnu dot org, gsl-discuss at sources dot redhat dot com
- Date: Wed, 12 Nov 2003 12:53:56 +0000
- Subject: Re: [Help-gsl] questions about gsl_multimin_f*_minimizers (efficiency,drawbacks)
- References: <200311081040.LAA15830@apex.ibpc.fr>
Marc Baaden writes:
> I have some questions about the efficiency of the routines in gsl_multimin.
> I have replaced a routine in an existing Fortran code, which originally
> used a quasi-Newton minimizer (Harwell VA13A), with a call to the
> gsl_multimin routines, choosing among conjugate_pr, conjugate_fr,
> steepest_descent, vector_bfgs and nm_simplex.
>
> The original Harwell VA13A algorithm should be quite similar to vector_bfgs,
> so I was rather surprised to see a noticeable performance difference:
> the Harwell code is roughly a factor of 1.5-2 faster.
Hi,
How does the choice of line-minimisation tolerance affect the
comparison?
In addition to the number of function evaluations, it would be of
interest to compare the number of direction vectors used, since this
is (to some extent) independent of the line-minimisation tolerance.
--
Brian Gough