Gradient calculation in GSLMultiFit

I think I mentioned this a long time ago (although I am not sure), but perhaps it could be fixed now.

It seems that GSLMultiFit, during the gradient calculation performed before the first iteration of the fit, computes the gradient for all parameters - even the fixed ones. Nothing like that happens with Minuit. I think this could mean a large efficiency loss when performing many small, quick separate fits of functions with lots of fixed parameters.

Hi,

are you sure this is happening with the current trunk revision or the latest release, 5.27.06?
Can you reproduce it with a small example?

Lorenzo

My ROOT displays: 5.27/07 :)

I’ll try to make an example.

It seems that the problem exists only when fitting via the ROOT::Math / ROOT::Fit interfaces, not via TH1::Fit(). I modified the script for the simultaneous fitting of two functions. It displays the changes of the fixed parameter before the first fit iteration.
combinedFit2.C (5.69 KB)
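
For the record, the idea of the check is simple. A much smaller, self-contained sketch of the same kind of test (a single histogram instead of the combined fit; the names and the setup here are mine, not the attached macro's) is to print the parameters inside the model function and watch whether the fixed one is varied:

    #include <cstdio>
    #include "TF1.h"
    #include "TH1D.h"
    #include "TMath.h"
    #include "Fit/Fitter.h"
    #include "Fit/BinData.h"
    #include "HFitInterface.h"
    #include "Math/WrappedMultiTF1.h"

    // model function that prints the parameters it is called with,
    // so any variation of the fixed parameter becomes visible
    double model(double *x, double *p) {
       printf("p0=%g p1=%g p2=%g\n", p[0], p[1], p[2]);
       return p[0] * TMath::Gaus(x[0], p[1], p[2]);
    }

    void watchFixedPar() {
       TF1 f("f", model, -5., 5., 3);
       TH1D h("h", "h", 50, -5., 5.);
       h.FillRandom("gaus", 1000);

       ROOT::Math::WrappedMultiTF1 wf(f, 1);
       ROOT::Fit::BinData data;
       ROOT::Fit::FillData(data, &h);

       ROOT::Fit::Fitter fitter;
       fitter.Config().SetMinimizer("GSLMultiFit");
       fitter.SetFunction(wf);
       double par0[3] = {100., 0., 1.};
       fitter.Config().SetParamsSettings(3, par0);
       // fix the width; if the derivative of fixed parameters is skipped,
       // p2 should stay constant in the printout
       fitter.Config().ParSettings(2).Fix();
       fitter.Fit(data);
    }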

Hi,
the problem was in the macro attached in the previous post.

The global chi2 function needs to be modified so that it does not force the gradient calculation for all parameters. See the changes in the new macro attached to this post.
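
For reference, in a simultaneous fit of two functions the global chi2 is typically a plain functor that maps the global parameters onto the parameters of the two sub-chi2 functions, as in the ROOT combinedFit tutorial. A sketch only - the index maps, parameter counts and names below are illustrative, not the ones from the attached macro:

    #include "Math/IFunction.h"

    // maps from the global parameter indices to the local ones (illustrative)
    int iparRed[3]  = {0, 1, 2};
    int iparBlue[3] = {0, 3, 4};

    struct GlobalChi2 {
       GlobalChi2(ROOT::Math::IMultiGenFunction &f1,
                  ROOT::Math::IMultiGenFunction &f2)
          : fChi2Red(&f1), fChi2Blue(&f2) {}

       // only operator() is provided; each sub-chi2 sees only the
       // parameters that belong to it
       double operator()(const double *par) const {
          double pRed[3], pBlue[3];
          for (int i = 0; i < 3; ++i) pRed[i]  = par[iparRed[i]];
          for (int i = 0; i < 3; ++i) pBlue[i] = par[iparBlue[i]];
          return (*fChi2Red)(pRed) + (*fChi2Blue)(pBlue);
       }

       const ROOT::Math::IMultiGenFunction *fChi2Red;
       const ROOT::Math::IMultiGenFunction *fChi2Blue;
    };

In the tutorial the two sub-chi2 objects are ROOT::Fit::Chi2Function instances built from the wrapped TF1s and the corresponding BinData sets.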

Best Regards

Lorenzo
combinedFit2.C (5.99 KB)

Is this example, in general, easily changeable to a 2D case? I thought it was, but after the modification it still computes the gradient of the fixed parameter in my 2D case. Perhaps it is a bug in my copy & paste, or perhaps something more general.

To change it into a 2D case, I simply changed the wrapper functions to 2D:

		ROOT::Math::WrappedMultiTF1 wfRed(*psf_fin_red,2);
		ROOT::Math::WrappedMultiTF1 wfBlue(*psf_fin_blue,2);

where psf_fin_* are TF2

and the data ranges to two-dimensional:

		rangeRed.SetRange(-5.5, 5.5, -5.5, 5.5);

Of course I fit to TH2Ds. Should any additional changes be made to the GlobalChi2 function or anything else? Sorry, my own code is too long and complicated to attach here…
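
Schematically, the 2D-specific part of the setup is the following (heavily simplified, with placeholder names):

    #include "TF2.h"
    #include "TH2D.h"
    #include "Fit/BinData.h"
    #include "Fit/DataOptions.h"
    #include "Fit/DataRange.h"
    #include "HFitInterface.h"
    #include "Math/WrappedMultiTF1.h"

    void setup2D(TF2 *psf_fin_red, TH2D *hRed) {
       // the second argument of the wrapper is now the dimension (2)
       ROOT::Math::WrappedMultiTF1 wfRed(*psf_fin_red, 2);

       // two-dimensional range: (xmin, xmax, ymin, ymax)
       ROOT::Fit::DataOptions opt;
       ROOT::Fit::DataRange rangeRed(2);
       rangeRed.SetRange(-5.5, 5.5, -5.5, 5.5);

       // the bin data are filled from the TH2 exactly as in the 1D case
       ROOT::Fit::BinData dataRed(opt, rangeRed);
       ROOT::Fit::FillData(dataRed, hRed);

       // ... same for the blue data set, then build the chi2 objects and
       // the GlobalChi2 as before
    }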

Yes, these changes should be enough!

Lorenzo

Found a bug in my copy & paste. It seems that this change not only skips the unnecessary gradient calculations at the beginning, but also some calculations during the normal iterations - the normal iterations are now, I guess, a few dozen times faster :)