This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



High-dimensional Minimization without analytical derivatives


Hello,
I need to find a (local) maximum of my likelihood function, with 80 data points, over a 17-dimensional parameter space. I want to use gsl_multimin_fdfminimizer_vector_bfgs (or some other gradient-based algorithm), but I would really hate to specify 17 (or possibly many more, if we change the model) analytic derivatives.
Can you please tell me whether I have better options? Can I use the one-dimensional numerical derivative gsl_diff_central instead of analytic derivatives when I write the "my_df" function for BFGS? How would this approach (if it is feasible at all) compare to the Nelder-Mead simplex algorithm provided in my version of GSL, 1.4? Is there a better option that couples numerical gradient evaluation with the BFGS method?
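To make the question concrete, here is the kind of thing I have in mind: an untested sketch in which each component of the gradient is obtained from a one-dimensional central difference of the objective along that coordinate. The names my_f, f_along_coord and diff_params are just my placeholders; my_f would be the negative log-likelihood, since the multimin interface minimises.

#include <gsl/gsl_math.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_diff.h>
#include <gsl/gsl_multimin.h>

/* my_f: the scalar objective I already have (negative log-likelihood) */
double my_f (const gsl_vector *v, void *params);

/* helper exposing the objective along one coordinate as a 1-d function */
struct diff_params { gsl_vector *x; size_t i; void *params; };

static double
f_along_coord (double xi, void *p)
{
  struct diff_params *dp = (struct diff_params *) p;
  double saved = gsl_vector_get (dp->x, dp->i);
  double fval;

  gsl_vector_set (dp->x, dp->i, xi);
  fval = my_f (dp->x, dp->params);
  gsl_vector_set (dp->x, dp->i, saved);   /* restore the point */

  return fval;
}

/* my_df: fill df with the central-difference gradient of my_f at v */
void
my_df (const gsl_vector *v, void *params, gsl_vector *df)
{
  size_t i, n = v->size;
  gsl_vector *x = gsl_vector_alloc (n);
  gsl_vector_memcpy (x, v);

  for (i = 0; i < n; i++)
    {
      struct diff_params dp = { x, i, params };
      gsl_function F = { &f_along_coord, &dp };
      double grad_i, abserr;

      gsl_diff_central (&F, gsl_vector_get (v, i), &grad_i, &abserr);
      gsl_vector_set (df, i, grad_i);
    }

  gsl_vector_free (x);
}

/* my_fdf: combined function required by the fdfminimizer interface */
void
my_fdf (const gsl_vector *v, void *params, double *f, gsl_vector *df)
{
  *f = my_f (v, params);
  my_df (v, params, df);
}

If I understand correctly, each gradient evaluation would then cost several likelihood calls per parameter, so on the order of a few times 17 function evaluations per BFGS step, which is why I am wondering how this would compare with the simplex method in practice.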
 
I really appreciate any help or advice you can give me.
Sincerely,
Anatoliy
 

