Hi Topher and Michael,

> Speaking of which, I tend to disagree with Eddy's proscription of giving
> Minuit derivatives. Yes, in theory that's a very nice thing to do, but
> if you have to compute it yourself through finite differences, then
> you're better off leaving it to Minuit. I have never given Minuit
> derivatives before because I've never encountered a case where it's
> practical.

Just to get the right info into the roottalk digest: if you decide to supply the derivative of the objective function yourself, then of course only an analytical expression makes sense; there is no point in trying to repeat what Minuit already does numerically. This has many advantages (OK, it is a bit of initial work):

1) Speed: instead of calling the objective function three times per parameter (for var = a, a+epsilon and a-epsilon), Minuit will call it only once, with iflag=2.

2) Accuracy. Just quoting the TMinuit header:

   How to get the right answer from MINUIT. MINUIT offers the user a
   choice of several minimization algorithms. The MIGRAD algorithm is in
   general the best minimizer for nearly all functions. It is a
   variable-metric method with inexact line search, a stable metric
   updating scheme, and checks for positive-definiteness. Its main
   weakness is that it depends heavily on knowledge of the first
   derivatives, and fails miserably if they are very inaccurate.

Eddy
This archive was generated by hypermail 2b29 : Thu Jan 01 2004 - 17:50:11 MET