This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



multifit_nlin on linear data


I have a multiparameter fitting problem which is nonlinear in general,
but behaves linearly in some regions where I can calculate the
derivatives exactly. So sometimes when I use gsl_multifit_fdfsolver it
is able to reach the correct solution in a single step, as you'd
expect it to.

But when this happens, the second call to
gsl_multifit_fdfsolver_iterate returns GSL_ETOLF, because after the
first step it can neither predict nor achieve any further reduction.
To paraphrase the code in gsl/multifit/lmiterate.c:

  prered = predicted reduction from step * Jacobian  [0 here]
  actred = actual reduction from taking that step    [also 0]

  if (prered > 0)
      ratio = actred / prered;
  else
      ratio = 0;

  ...

  if (ratio >= 0.0001) {
     ... update answer ...
  }
  else if (fabs(actred) <= GSL_DBL_EPSILON &&
           prered <= GSL_DBL_EPSILON &&
           p5 * ratio <= 1.0) {          /* p5 == 0.5 */
    return GSL_ETOLF;
  }
Does this need rethinking, or does my problem need re-posing? My
function has regions of linearity and nonlinearity and it's hard for
me to know in advance if it's going to admit a single-step solution or
not, so it's not easy to just switch to a linear multifit algorithm
instead.

Should I be using a different termination criterion than
gsl_multifit_test_delta?

One workaround is to set the elements of the Jacobian matrix to 0.99
times the actual derivatives; the solver then converges nicely in 2-3
steps.

Or, I could just accept ETOLF as an indication of being done.

--
Trevor Blackwell         tlb@tlb.org          (650) 776-7870

