This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



ODE NAN INF derivs


I'm porting a neuron model to GSL. The trouble is that it has lots of logs and square roots, plus a square wave that causes some rather rapid changes. Rather often a sudden change makes a derivative become NAN. Because NAN propagates and every comparison with NAN is false, I think the step size h will never decrease. If I have this right, in this section of cstd.c:

double rmax = DBL_MIN;
size_t i;

for (i = 0; i < dim; i++) {
  const double D0 =
    eps_rel * (a_y * fabs(y[i]) + a_dydt * fabs(h_old * yp[i]))
      + eps_abs;
  const double r = fabs(yerr[i]) / fabs(D0);
  rmax = GSL_MAX_DBL(r, rmax);
}

any time there is a NAN in yp[i], the NAN propagates into D0 and then into r, every comparison inside GSL_MAX_DBL is false, and rmax is left unchanged at DBL_MIN. The step will be accepted.

I'd like advice on adding a way for my derivative function to reject the step, say by returning GSL_ERANGE, without terminating the model. Perhaps the step size could be immediately reduced.

I already tried trapping the NANs and replacing them with weird sentinel values (inf, -inf, alternating + and - inf), but then all my weirdness just got accepted too.


