Re: [ROOT] Fitting with stochastic optimizers

From: Anton Fokin (anton.fokin@smartquant.com)
Date: Fri Mar 02 2001 - 17:17:24 MET


Hi Eddy,

If we talk about stochastic optimization, an optimizer minimizes your objective
function without taking care of its shape. In the case of least squares it minimizes

Objective = Sum_i ( F(P1, ..., PN; Xi) - Yi )^2

and reports the best set of parameters P1, ..., PN.

In fact you can use any error measure in place of the squared residuals; the
choice does not affect the algorithm itself.
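
As a minimal sketch in plain C++ (the model function F and all the names here
are hypothetical, not part of any package):

    // Hypothetical model function, defined elsewhere.
    double F(const double *p, double x);

    // Least-squares objective: a stochastic optimizer only ever asks
    // for this scalar value at a given parameter set, nothing else.
    double Objective(const double *p,
                     const double *x, const double *y, int ndata)
    {
       double sum = 0;
       for (int i = 0; i < ndata; i++) {
          double r = F(p, x[i]) - y[i];
          sum += r * r;
       }
       return sum;
    }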

There are no direct Hessian-matrix or other derivative calculations in these
algorithms. Instead they change the set of parameters in a smart way to
explore the search space and find the best set.

Note that stochastic algorithms do not guarantee that you find the global
minimum in finite time, but they are quite helpful if you want a good
solution in reasonable time for a complex objective function under a number
of constraints.
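
For illustration, a bare-bones simulated annealing loop might look like this
sketch (the step proposal, cooling schedule and stopping rule are simplistic
assumptions, not the scheme of the class referenced below):

    #include <cmath>
    #include <cstdlib>

    double Objective(const double *p, const double *x,
                     const double *y, int ndata);   // sketch above

    // Bare-bones simulated annealing: random single-parameter steps,
    // Metropolis acceptance, geometric cooling.
    void Anneal(double *p, int npar,
                const double *x, const double *y, int ndata)
    {
       double T   = 1.0;                        // initial temperature
       double cur = Objective(p, x, y, ndata);
       while (T > 1e-6) {
          int    i   = std::rand() % npar;      // pick one parameter
          double old = p[i];
          p[i] += T * (2.0 * std::rand() / RAND_MAX - 1.0);
          double trial = Objective(p, x, y, ndata);
          double u     = (double)std::rand() / RAND_MAX;
          if (trial < cur || u < std::exp((cur - trial) / T))
             cur = trial;                       // accept, sometimes uphill
          else
             p[i] = old;                        // reject, restore
          T *= 0.999;                           // geometric cooling
       }
    }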

If you have carefully optimized your objective on a set of N data points, you
can spend much less time recalibrating your model after adding M << N points,
assuming that the new minimum lies near the old one.
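
In code this warm start is nothing more than reusing the previous optimum as
the starting point, e.g. with the hypothetical Anneal sketch above:

    // pBest still holds the optimum found on the first N points;
    // refit all N + M points starting from it. A lower initial
    // temperature inside Anneal would be the natural refinement.
    Anneal(pBest, npar, xAll, yAll, nOld + mNew);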

Regards,
Anton

PS. Could someone clever help me with the inverse probability function of a
generalized (perhaps too generalized :) ) Cauchy distribution?

see the g(dx) function at

http://www.smartquant.com/htmldoc/TSimulatedAnnealing.html

generalized annealing parameter displacement formula ...
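
For the ordinary Cauchy the inverse CDF is known in closed form, so
inverse-transform sampling is a one-liner; whether something similar exists
for the generalized g(dx) is exactly the open question. A sketch of the
ordinary case only:

    #include <cmath>
    #include <cstdlib>

    // Inverse-transform sampling for the ordinary Cauchy distribution:
    // F^-1(u) = x0 + gamma * tan(pi * (u - 1/2)).
    // Does NOT cover the generalized g(dx) asked about above.
    double CauchyDeviate(double x0, double gamma)
    {
       double u = (double)std::rand() / RAND_MAX;      // uniform in [0,1]
       return x0 + gamma * std::tan(M_PI * (u - 0.5)); // M_PI from <cmath>
    }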


http://www.smartquant.com


----- Original Message -----
From: Eddy Offermann <eddy@rentec.com>
To: <anton.fokin@smartquant.com>
Cc: <roottalk@pcroot.cern.ch>
Sent: Friday, March 02, 2001 4:22 PM
Subject: Re: [ROOT] Fitting with stochastic optimizers


> Hi Anton,
>
> If we start to tinker/rewrite Minuit, I would like to see the following
> additions/improvements:
>
> 1) This refers to linear fits:
>     When fitting N data points and getting M additional ones, I would like
>     to ADD the information of the new ones to my Hessian matrix and
>     gradient. I do NOT want to analyze all M+N data points again !
>     This is important for on-line applications and non-cheating
>     data analysis with time series. It is straightforward to implement
>     for a least-squares objective function (see the sketch after this
>     list).
>
>  2) Be able to easily specify Bayesian priors for my parameters
>
>  3) Have the possibility of applying a robust algorithm instead
>      of least squares, like least median of squares
>
> What do others wish/think ??
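>
> For a linear model y = A*p, the update in point 1 amounts to keeping the
> normal-equations matrix H = A^T A and vector g = A^T y and folding each
> new point into them. A sketch with hypothetical names, not Minuit's API:
>
>     #include <vector>
>
>     // Incremental normal equations for a linear least-squares fit.
>     // "basis" holds the basis-function values at the new x
>     // (hypothetical helper output); y is the new measurement.
>     void AddPoint(std::vector<double> &H, std::vector<double> &g,
>                   const std::vector<double> &basis, double y)
>     {
>        int n = (int)g.size();
>        for (int i = 0; i < n; i++) {
>           g[i] += basis[i] * y;
>           for (int j = 0; j < n; j++)
>              H[i * n + j] += basis[i] * basis[j];
>        }
>     }
>     // After M new points, solve H p = g again; the first N points
>     // never need to be revisited.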
>
> Best regards Eddy
>
> (out of town for the next 2weeks)
>
> > From: "Anton Fokin" <anton.fokin@smartquant.com>
> > To: "roottalk" <roottalk@pcroot.cern.ch>
> > Subject: [ROOT] Fitting with stochastic optimizers
> > Date: Fri, 2 Mar 2001 11:42:49 +0100
> >
> > Hi rooters,
> >
> > I've just seen several postings about fitting. I am curious whether anyone
> > ever ran into trouble with ROOT's Minuit and would like to have stochastic
> > optimization as an alternative? I can provide simulated annealing/genetic
> > algorithms if enough people want it.
> >
> > Currently I have minimization functionality for TF1/TF2 user-defined
> > functions at www.smartquant.com/neural.html. I can extend the package so
> > that it could be used with TVirtualFitter.
> >
> > Regards,
> > Anton
> >
> > http://www.smartquant.com
> >
> >
> >
>
> Eddy A.J.M. Offermann
> Renaissance Technologies Corp.
> Route 25A, East Setauket NY 11733
> e-mail: eddy@rentec.com
> http://www.rentec.com
>
>
>


