Hi Rooters,

One of the things I noticed when fitting a TH1 using log-likelihood maximization is that it is terribly slow. When I looked into H1FitLikelihood, I saw that log(N!) is recalculated for each bin in every iteration. In my particular case that takes a lot of time: I have about 3000 bins, with contents varying from zero to about 100000 counts, and the fit needs about 500 iterations. So I have a few questions:

1) Is there a way to avoid recalculating sum(log(Ni!)) at each iteration? It is supposed to be constant as long as the fitting range doesn't change.

2) Is there a reason to keep the sum(log(Ni!)) term at all? Because it is constant, it has no effect on the maximization process (at least it shouldn't); one only needs it in the likelihood for the calculation of the errors.

3) Can I convince ROOT to use my own log(L) calculation when fitting a TH1, without going through the pain of initializing a TMinuit myself and executing one of those prehistoric six-letter commands? Or would I have to rewrite the TH1 class to get that done?

4) Wouldn't Stirling's approximation for log(N!) be a good idea for larger N? Alternatively, couldn't one calculate the likelihood using the fact that the Poisson distribution can be approximated by a Gaussian for large N? (What the threshold N should be needs to be discussed, but a value of the order of 50 to 100 seems appropriate.)

In spite of everything, still a happy user,

-- Gerco

Dr. C.J.G. Onderwater
Nuclear Physics Laboratory
312 Loomis Laboratory of Physics
University of Illinois at Urbana-Champaign
1110 West Green Street
Urbana, IL 61801-3080
Phone : (217) 244-7363
Fax : (217) 333-1215
E-mail: onderwat@uiuc.edu
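[To illustrate questions 1 and 4, here is a minimal plain-C++ sketch, not ROOT code; the function names are my own. It computes the constant term sum(log(Ni!)) once, using the identity log(N!) = lgamma(N+1), and shows Stirling's series for log(N!), which for bin contents in the tens of thousands agrees with the exact value to well beyond double precision at that magnitude.]

```cpp
#include <cmath>
#include <vector>

// Compute the constant term sum_i log(Ni!) once, before the fit starts,
// instead of once per Minuit iteration. log(N!) = lgamma(N + 1).
double sumLogFactorials(const std::vector<double>& binContents) {
    double s = 0.0;
    for (double n : binContents)
        s += std::lgamma(n + 1.0);
    return s;
}

// Stirling's series for log(N!) (valid for N >= 1):
//   log(N!) ~ N log N - N + 0.5 log(2 pi N) + 1/(12 N)
// The truncation error is of order 1/(360 N^3), i.e. negligible for large N.
double stirlingLogFactorial(double n) {
    const double kTwoPi = 6.283185307179586;
    return n * std::log(n) - n + 0.5 * std::log(kTwoPi * n) + 1.0 / (12.0 * n);
}
```

With either function, the expensive factorial sum drops out of the per-iteration cost: the cached value can simply be added back to the log-likelihood at the end if the absolute value is needed for error estimation.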
This archive was generated by hypermail 2b29 : Tue Jan 04 2000 - 00:43:39 MET