ROOT Version 5.20/00 Release Notes

ROOT version 5.20/00 was released on June 25, 2008. In case you are upgrading from version 5.14, please read the release notes of version 5.16 and version 5.18 in addition to these notes.

Binaries for all supported platforms are available at:
Versions for AFS have also been updated. See the list of supported platforms:

For more information, see:

The following people have contributed to this new version:
Ilka Antcheva,
Jean-François Bastien,
Bertrand Bellenot,
Rene Brun,
Philippe Canal,
Olivier Couet,
Valeri Fine,
Fabrizio Furano,
Leo Franco,
Gerri Ganis,
Andrei Gheata,
Mihaela Gheata,
David Gonzalez Maline,
Andreas Hoecker,
Jan Iwaszkiewicz,
Lukasz Janyst,
Anna Kreshuk,
Wim Lavrijsen,
Sergei Linev,
Anar Manafov,
Diego Marcos-Segura,
Lorenzo Moneta,
Axel Naumann,
Mathieu de Naurois,
Eddy Offermann,
Valeriy Onuchin,
Timur Pocheptsov,
Fons Rademakers,
Paul Russo,
Alja Tadel,
Matevz Tadel,
Wouter Verkerke,
Guido Volpi,
Hady Zalek





Schema Evolution


Use the new DirectoryAutoAdd facility for the classes:
        TTree, TH1, TEventList, TEntryList, TGraph2D
(and hence their derived classes).
Instances of these classes are now added automatically to the current directory only when constructed with arguments or Clone'd, and to the directory they are read from when they are stored directly in a TKey. [Note: the default constructor never adds the object to the current directory.]
The directory auto-add can still be disabled for instances of TH1 and TGraph2D by calling the static function TH1::AddDirectory(kFALSE).
Additionally, one can disable the directory auto-add for a specific class via TClass::SetDirectoryAutoAdd. To be able to restore the behavior later, save the current function first:
    TClass *cl = TClass::GetClass("myclass");
    ROOT::DirAutoAdd_t func = cl->GetDirectoryAutoAdd();
    cl->SetDirectoryAutoAdd(0);    // disable the auto-add
    // ...
    cl->SetDirectoryAutoAdd(func); // restore the previous behavior
TROOT::ReadingObject is marked as deprecated. It is still set (as it was) but is no longer used by the above-mentioned classes.
NOTE: One side effect of this change is that instances of TTree, TH1, TEventList, TEntryList and TGraph2D that are retrieved from a TMessage (i.e. from a socket) no longer register themselves automatically in the current ROOT directory.





RGLite plug-in - a ROOT plug-in module which implements the ROOT Grid interface and lets ROOT users perform a number of operations using the gLite middleware from within ROOT.
Supported features:
Usage examples:

Job operations

// load the RGLite plug-in by connecting to the gLite middleware
TGrid::Connect("glite");
// submit a Grid job
TGridJob *job = gGrid->Submit("JDLs/simple.jdl");
// getting status object
TGridJobStatus *status = job->GetJobStatus();
// getting status of the job.
TGridJobStatus::EGridJobStatus st( status->GetStatus() );
// when the st is TGridJobStatus::kDONE you can retrieve job's output

File Catalog operations

// load the RGLite plug-in by connecting to the gLite middleware
TGrid::Connect("glite");
// change the current directory to "/grid/dech"
gGrid->Cd("/grid/dech");
// use Mkdir to create a new directory
Bool_t b = gGrid->Mkdir("root_test2");
// listing the current directory
TGridResult* result = gGrid->Ls();
// full file information
// removing the directory
b = gGrid->Rmdir("root_test2");



The new directory sql includes the following packages:


Branch creation enhancement and clarifications

TTreeFormula (TTree::Draw, TTree::Scan)

Splitting STL collections of pointers

STL collection of pointers can now be split by calling
TBranch *branch = tree->Branch( branchname, STLcollection, buffsize, splitlevel )
where STLcollection is the address of a pointer to a std::vector, std::list, std::deque, std::set or std::multiset containing pointers to objects, and where splitlevel is a value greater than 100. The collection will then be written in split mode; i.e. if it contains objects of any type deriving from TTrack, this function will sort the objects by type and store them in separate branches in split mode.

The ROOT test example in $ROOTSYS/test/bench.cxx shows many examples of collections and their storage in a TTree, with and without split mode. This program illustrates the important gain in space and time when using this new facility.

Parallel unzipping

This release introduces a parallel unzipping algorithm for pre-fetched buffers. Since we already know which buffers are going to be read, we can decompress a few of them in advance in an additional thread and give the impression that the data decompression comes for free (we gain up to 30% in read-intensive jobs).

The size of this unzipping cache is 20% the size of the TTreeCache and can be modified with TTreeCache::SetUnzipBufferSize(Long64_t bufferSize). Theoretically, we only need one buffer in advance but in practice we might fall short if the unzipping cache is too small (synchronization costs).

This experimental feature is disabled by default, to activate it use the static function

TTreeCache::SetParallelUnzip(TTreeCacheUnzip::EParUnzipMode option = TTreeCacheUnzip::kEnable).
The possible values to pass are:
TTreeCacheUnzip is activated only if you have more than one core. To activate it with only one core, use the TTreeCacheUnzip::kForce option (for example to measure the overhead).
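The mechanism can be illustrated outside ROOT with a toy prefetcher (the helper names prefetch_unzip and read_buffer are invented for this sketch; only the idea of decompressing buffers whose read order is already known in a helper thread mirrors TTreeCacheUnzip):

```python
import threading
import zlib

def prefetch_unzip(buffers, cache, budget):
    """Toy prefetcher: decompress upcoming buffers, up to a byte budget."""
    used = 0
    for i, buf in enumerate(buffers):
        data = zlib.decompress(buf)
        if used + len(data) > budget:
            break
        cache[i] = data
        used += len(data)

def read_buffer(i, buffers, cache):
    """Serve a buffer from the unzip cache when present, else unzip on demand."""
    data = cache.pop(i, None)
    return data if data is not None else zlib.decompress(buffers[i])

# The read order of the baskets is known in advance (as with TTreeCache),
# so a helper thread can decompress them before the reader asks for them.
baskets = [zlib.compress((("basket-%d " % i) * 100).encode()) for i in range(8)]
cache = {}
t = threading.Thread(target=prefetch_unzip, args=(baskets, cache, 64 * 1024))
t.start()
t.join()   # in the real cache the reader runs concurrently with this thread
payload = read_buffer(3, baskets, cache)
```

The budget plays the role of the unzip buffer size: once it is exhausted, remaining buffers fall back to on-demand decompression.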

Disk and Memory Space Gain

In ROOT versions older than v5.20/00, each branch's last basket, also known as the write basket, was always saved in the same "key" as the TTree object. When reading, this write basket was always present in memory, even if the branch was never accessed.

Starting in v5.20/00, TTree::Write closes out, compresses (when requested) and writes to disk, each in its own file record, the write baskets of all the branches. (This is implemented via the new functions TTree::FlushBaskets, TBranch::FlushBaskets and TBranch::FlushOneBasket.)

TTree::AutoSave supports a new option "FlushBaskets" which will call FlushBaskets before saving the TTree object.

Flushing the write baskets has several advantages: Note: calling FlushBaskets too often (either directly or via AutoSave("FlushBaskets")) can lead to unnecessary fragmentation of the ROOT file, since it writes the baskets to disk (and a new basket will be started at the next fill) whether or not the content was close to filling the basket.



Histogram package

The libHist library now depends on libMathCore which must be linked whenever one needs to use the histogram library.





TProfile and TProfile2D


New Tutorials


CINT's directory structure has been re-arranged; the top-most cint directory now contains CINT, Reflex, and Cintex. CINT's headers can now alternatively be included as #include "cint/header.h", i.e. from the cint/ subdirectory. This will become the default location in a coming release.

In view of future changes, and because we want to further decouple ROOT from CINT, we strongly recommend not to include any CINT headers directly. Instead, please use TInterpreter as a virtual interface; it has been updated to satisfy most of ROOT's use cases of CINT. If you still need to include CINT headers directly because functionality is missing from TInterpreter, then please let us know!

CINT, Reflex, and Cintex have been ported to GCC 4.3.


This release contains two big new features: the ability to use PROOF with python, and the ability to pickle (python serialize) ROOT objects. Pickling of ROOT objects is straightforward: just hand them to pickle (or cPickle) like any other python object. To use PROOF with python, derive your custom class from TPySelector, override the methods that you want to specialize, and put it in a file that is shipped to the worker nodes, e.g.:

from ROOT import TPySelector

class MyPySelector( TPySelector ):
   def Begin( self ):
      print 'py: beginning'

   def SlaveBegin( self, tree ):
      print 'py: slave beginning'

   def Process( self, entry ):
      self.fChain.GetEntry( entry )
      print 'py: processing', self.fChain.ipi
      return 1

   def SlaveTerminate( self ):
      print 'py: slave terminating'

   def Terminate( self ):
      print 'py: terminating'

The file containing the class will be treated as a python module and should be loadable through PYTHONPATH (typically '.') on the worker node. Set up PROOF as normal, and call:

dataset.Process( 'TPySelector', 'mymodule' )

PROOF will instantiate a TPySelector instance, which will in turn pick up the python class from module 'mymodule' and forward all calls.
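As for the pickling support mentioned above: outside a ROOT session, the pattern can be exercised with a plain Python class standing in for a ROOT object (the Hist class below is purely illustrative; with PyROOT loaded you hand the ROOT object itself to pickle in exactly the same way):

```python
import pickle

# A plain stand-in for a ROOT object; with PyROOT the ROOT object itself
# is handed to pickle in exactly the same way.
class Hist:
    def __init__(self, name, bins):
        self.name = name
        self.bins = bins

h = Hist("h_pt", [0, 3, 5, 1])
blob = pickle.dumps(h)        # serialize to a byte string
h2 = pickle.loads(blob)       # restore an equivalent object
```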

There are several improvements in language mappings, as well as cleanup of the code for python2.2 (Py_ssize_t handling) and MacOS 10.3. Additionally, there are code cleanups (removing use of CINT internals) that should be fully transparent to the end-user.

The language mapping improvements are:

The python presentation of ROOT objects (ObjectProxy) as well as the meta-class hierarchy have undergone an internal change where individual ObjectProxy's no longer carry a TClassRef. Instead, this has moved to the python class level. Although there is now an extra layer of indirection to retrieve the class, the code is actually faster due to lower memory usage and lower memory turnover.

Math Libraries


MathCore now includes classes which were previously contained in libCore, like TMath, TComplex and the TRandom classes. Furthermore, some of the algorithms implemented in the TF1 class have been moved to MathCore. This implies that all other ROOT libraries using one of these classes, such as libHist, have a direct dependency on the MathCore library. Linking with libMathCore is therefore required for running any major ROOT application. It has been added to the list of libraries obtained when doing root-config --libs.

N.B.: users building ROOT applications and not using root-config MUST add libMathCore to their list of linking libraries.

Together with the libraries merge, many changes have been applied to both TMath and the other mathcore classes.


A major clean-up and re-structuring has been done for the functions present in TMath. Some functions have been implemented using the STL algorithms, which perform better in terms of CPU time, and a template interface has also been added. Some of the basic special mathematical functions of TMath, like the error function or the gamma and beta functions, now use the Cephes implementation from Stephen L. Moshier, which is also used by the ROOT::Math functions. This implementation has been found to be more accurate and in some cases more efficient in terms of CPU time. More detailed information on the new mathematical functions can be found in this presentation from M. Slawinska at a ROOT team meeting.


MathCore now includes new classes for performing fits and minimization of multi-dimensional functions. The aim of these classes is to extend and improve the fitting functionality provided in ROOT via the TVirtualFitter classes and the fitting methods present in many data analysis objects, such as TH1::Fit.
The fit data are decoupled from the fitter class and described by dedicated fit data classes: ROOT::Fit::BinData for binned data, containing coordinate values of any dimension, bin content values and optionally errors on coordinates and bin content, and ROOT::Fit::UnBinData for unbinned data of any dimension.
The fitter class, ROOT::Fit::Fitter, provides the functionality for fitting those data with any model function implementing the parametric function interface, ROOT::Math::IParamMultiFunction. Fit methods such as least squares and binned and unbinned likelihood are supported. The fit solution is found using the ROOT::Math::Minimizer interface class, and the results are stored in the ROOT::Fit::FitResult class. Fit parameters can be configured individually using the ROOT::Fit::FitParameterSettings class.
Various implementations of the minimizer interface can be used automatically via the ROOT plug-in manager mechanism, including the linear fitter for a fast and direct solution in the case of a linear least-squares model, or the Minuit, Minuit2 or GSL minimization methods provided by the MathMore library.
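The decoupling of fit data from the fitter can be sketched in plain Python (the class and function names below are invented for illustration and are not the ROOT::Fit API; a weighted least-squares line fit is solved directly with the normal equations):

```python
# Sketch of the data/fitter separation: the data object only stores
# coordinates, values and errors; the fitter minimizes the objective.
class BinData:
    def __init__(self):
        self.points = []          # (x, y, error) triplets
    def add(self, x, y, ey):
        self.points.append((x, y, ey))

def fit_line(data):
    """Weighted least-squares fit of y = a + b*x via the normal equations."""
    S = Sx = Sy = Sxx = Sxy = 0.0
    for x, y, ey in data.points:
        w = 1.0 / (ey * ey)
        S += w; Sx += w * x; Sy += w * y
        Sxx += w * x * x; Sxy += w * x * y
    d = S * Sxx - Sx * Sx
    return (Sxx * Sy - Sx * Sxy) / d, (S * Sxy - Sx * Sy) / d  # a, b

data = BinData()
for x in range(5):
    data.add(x, 1.0 + 2.0 * x, 0.5)   # points on the exact line y = 1 + 2x
a, b = fit_line(data)
```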

Functions for filling the new ROOT::Fit::BinData classes with all the histogram and graph types have been added in the histogram library (libHist) and graph library:

MathCore Numerical Algorithms

Classes implementing numerical methods which can be used by all the other ROOT libraries have been added in MathCore. These originate mainly from methods previously present in the implementation of the TF1 class; now they can also be used outside this class. In addition, in order to have a common entry point, interface classes for these numerical algorithms have been included. These interfaces are also implemented by classes using the GSL library, located in the MathMore library; that library can be loaded automatically using the ROOT plug-in manager. In detail, the new classes containing implementations previously present in TF1 are: In addition, we now use the ROOT convention for all enumeration names defining the type of numerical algorithm: the names start with k, like kADAPTIVE for the integration type. This change affects both MathCore and MathMore and breaks backward compatibility.
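The adaptive integration type (kADAPTIVE) refers to algorithms of the following flavour, sketched here as a textbook recursive adaptive Simpson rule (a generic illustration, not ROOT's implementation):

```python
def simpson(f, a, b):
    """One Simpson's-rule panel on [a, b]."""
    c = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def adaptive(f, a, b, eps=1e-10, whole=None):
    """Recursive adaptive Simpson rule: subdivide until the local
    error estimate (difference between one and two panels) is small."""
    c = 0.5 * (a + b)
    if whole is None:
        whole = simpson(f, a, b)
    left, right = simpson(f, a, c), simpson(f, c, b)
    if abs(left + right - whole) < 15.0 * eps:
        return left + right + (left + right - whole) / 15.0
    return adaptive(f, a, c, eps / 2, left) + adaptive(f, c, b, eps / 2, right)

val = adaptive(lambda x: x * x, 0.0, 3.0)   # integral of x^2 on [0, 3] is 9
```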

MathCore Function interfaces

MathCore also provides interfaces for the evaluation of mathematical and parametric functions to be used in the numerical methods. This release contains the following changes:

More detailed description of the current MathCore release can be found at this location.


This new release contains:

More detailed description of the current MathMore release can be found at this location.


The new physics vector classes have been moved out of the MathCore library into a new library, libGenVector. The library also contains the CINT dictionary, including the main instantiations of the template classes. For this release, instantiations of some extra methods, in particular of the class ROOT::Math::Transform3D, have been added in the dictionary library. Due to a CINT limitation, the dictionary for the explicit template constructors of the Rotation classes, taking as input any other type of rotation, is missing. Therefore code like the following will not work in CINT (or Python):
ROOT::Math::Rotation3D r; 
ROOT::Math::EulerAngles eulerRot(r);
A possible solution is to use the operator=:
ROOT::Math::EulerAngles eulerRot; eulerRot = r;

In addition, the setter methods for the 2D, 3D and 4D vector classes have been extended, following a suggestion by G. Raven. Functions like SetX now return a reference to the vector class itself (*this) instead of void.
Detailed description of the current GenVector release can be found at this location.


Fixed a bug, discovered by Harald Soleng, in the addition of two matrix expressions. Also removed some compilation warnings found on Windows when compiling matrices instantiated with float types.
Detailed description of the current SMatrix release can be found at this location.


Two new classes have been added: In addition, the method TLinearFitter::SetBasisFunction(TObjArray * f) has been added to set directly the linear terms of the fit function.


Various fixes have been applied to different problems discovered mainly by a test program from Alfio Lazzaro. In detail:

More detailed description of the current Minuit2 release can be found at this location.


A new version, 1.2.4, has been added to fix mainly some problems found in gcc 4.3. For the detailed changes of this new UNU.RAN version see the file $ROOTSYS/math/unuran/src/unuran-1.2.4-root/NEWS.

Last modified: Tue Jun 24 17:22:42 CEST 2008


New tutorial macros available

A set of seventeen new tutorial macros has been added to $ROOTSYS/tutorials/roofit

Update of class documentation

The documentation in the code itself that is extracted by THtml to construct the online class documentation has been updated for all classes. Now all classes have (again) a short class description, as well as a (short) description of each member function and most data members. An update to the users manual is foreseen shortly after the 5.20 release.


A new feature has been added that allows persisting the source code of RooFit classes that are not part of the ROOT distribution inside a RooWorkspace, to facilitate sharing of custom code with others. To import the code of custom classes, call RooWorkspace::importClassCode()
after importing the objects themselves into the workspace. For all classes that are compiled with ACLiC, RooWorkspace can automatically find the source code using the ROOT TClass interface. For custom classes that are compiled externally and loaded into ROOT as a shared library, it might be necessary to provide the location of the source files manually using the static RooWorkspace member functions addClassDeclImportDir() and addClassImplImportDir().

When a TFile with a RooWorkspace containing source code is opened in a ROOT session that does not have the class code already loaded for the classes contained in the workspace, the code in the workspace is written to file, compiled and loaded into the ROOT session on the fly.

The code repository of RooWorkspace is designed to handle classes that have either their own implementation and header file, or are part of a group of classes that share a common header and implementation file. More complicated structuring of source code into files is not supported.

Also, new accessors have been added for discrete-valued functions (catfunc()), and stored category functions are now also printed under their own heading in Print().

Parameterized ranges

It is now possible to use RooAbsReal derived functions as range definition for variables to construct ranges that vary as function of another variable. For example
         RooRealVar x("x","x",-10,10) ; // variable with fixed range [-10,10] 
         RooRealVar y("y","y",0,20) ; // variable with fixed range [0,20] 
         RooFormulaVar x_lo("x_lo","y-20",y) ;      
         RooFormulaVar x_hi("x_hi","sin(y)*5",y) ;      
         x.setRange(x_lo,x_hi) ;  // Change x to have variable range depending on y
It is also possible to define parameterized named ranges in the same way
         x.setRange("signalRegion",x_lo,x_hi) ;
There are no fundamental limits to the complexity of the parameterized ranges that can be defined as long as the problem is uniquely defined. For example, given three observables x, y and z, one can define a parameterized named range 'R' of x in terms of y and of y in terms of z and ask to calculate the three dimensional integral of any function or p.d.f in terms of (x,y,z) over that range 'R' and it will be calculated correctly, taking recursive range dependencies into account. A definition of a range 'R' on the other hand where the bounds of x depend on y and the bounds of y depend on x is not allowed, and an error message will be printed to complain about the ambiguity of the problem definition. Integrals over non-rectangular regions are created the same way as integrals over rectangular regions using the RooAbsReal::createIntegral() function; the chosen mode of operation depends on the shape of the requested integration range.

Note that, in general, integration over non-(hyper)rectangular regions will be more computationally intensive, as only a subset of the observables can be integrated analytically: all of those that do not have parameterized ranges, plus those that have parameterized ranges but are not involved in the parameterization of others (e.g. x and y in the example above).
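Numerically, integrating over such a region amounts to evaluating the x bounds for each y value; a minimal midpoint-rule sketch (integrate2d is an invented helper, unrelated to RooFit internals):

```python
def integrate2d(f, ylo, yhi, xlo, xhi, n=400):
    """Midpoint-rule integral of f(x, y) over ylo < y < yhi with
    y-dependent x bounds xlo(y) < x < xhi(y)."""
    hy = (yhi - ylo) / n
    total = 0.0
    for i in range(n):
        y = ylo + (i + 0.5) * hy
        a, b = xlo(y), xhi(y)          # the x range depends on y
        hx = (b - a) / n
        row = sum(f(a + (j + 0.5) * hx, y) for j in range(n)) * hx
        total += row * hy
    return total

# Area of the triangle 0 < x < y, 0 < y < 1 (f = 1): exactly 1/2.
area = integrate2d(lambda x, y: 1.0, 0.0, 1.0, lambda y: 0.0, lambda y: y)
```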

Running integrals and Cumulative distribution functions

It is now possible to create running integrals from any RooAbsReal function and to create cumulative distribution functions from any RooAbsPdf using the following methods:
        // Create int[xlo,x] f(x') dx' from f(x)
        RooAbsReal* runInt = func.createRunningIntegral(x) ;

        // Create int[xlo,x] f(x') dx' from p.d.f f(x) normalized over x
        RooAbsReal* cdf = pdf.createCdf(x) ;

        // Create int[xlo,x] f(x',y) dx' from p.d.f f(x,y) normalized over (x,y)
        RooAbsReal* cdf = pdf.createCdf(x,y) ;
As with the similarly styled function createIntegral, running integrals and c.d.f.s can be created over any number of observables, e.g. createCdf(RooArgSet(x,y,z)) will create a three-dimensional cumulative distribution function. C.d.f.s and running integrals that are calculated from p.d.f.s with support for analytical integration are constructed from an appropriately reconnected RooRealIntegral. If numeric integration is required, the c.d.f or running integral is calculated by a dedicated class RooRunningIntegral that precalculates results for all observable values, which is more efficient in most use cases. Cumulative distribution functions that are calculated numerically are handled slightly differently than standard running integrals: their value is constructed to converge to exactly zero at the lower bound and exactly 1 at the upper bound, so that algorithms relying on that property of c.d.f.s can do so reliably.
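The numeric construction, a precalculated running integral rescaled so that it is exactly 0 at the lower bound and exactly 1 at the upper bound, can be sketched as follows (make_cdf is an invented helper, not RooFit code):

```python
def make_cdf(f, xlo, xhi, n=1000):
    """Precompute a running integral of f on a grid and rescale it so the
    result is exactly 0 at xlo and exactly 1 at xhi."""
    h = (xhi - xlo) / n
    run, grid = 0.0, [0.0]
    for i in range(n):
        run += f(xlo + (i + 0.5) * h) * h   # midpoint running integral
        grid.append(run)
    total = grid[-1]
    return [g / total for g in grid]        # normalized to [0, 1]

cdf = make_cdf(lambda x: 2.0 * x, 0.0, 1.0)  # pdf 2x on [0,1] -> cdf x^2
```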

Constraints management

New tools have been added to simplify studies with fits involving (external) constraints on parameters. The general philosophy is that constraints on parameters can be represented as probability density functions and can thus be modeled by RooAbsPdf classes (e.g. a RooGaussian for a simple Gaussian constraint on a parameter). There are two modes of operation: you can add parameter constraints to your problem definition by multiplying the constraint p.d.f.s with your 'master' p.d.f., or you can specify them externally in each operation. The first mode of operation keeps all information in your master p.d.f and may make the logistics of non-trivial fitting problems easier. It works as follows: first you define your regular p.d.f, then you define your constraint p.d.f, and you multiply them with RooProdPdf.
        // Construct constraint
        RooGaussian fconstraint("fconstraint","fconstraint",f,RooConst(0.8),RooConst(0.1)) ;

        // Multiply constraint with p.d.f
        RooProdPdf pdfc("pdfc","p.d.f with constraint",RooArgSet(pdf,fconstraint)) ;
If your top-level p.d.f is already a RooProdPdf, it is also fine to multiply all terms together in one go. Constraints do not need to be specified at the top-level RooProdPdf; constraint p.d.f.s in any component RooProdPdf lower in the expression tree are used as well. Constraints are not used by default in fitting if present in a p.d.f. To activate the use of a constraint in fitting, use the Constrain() argument in fitTo()
        // Fit with internal constraint
        RooFitResult* r2 = pdfc.fitTo(*d,Constrain(f)) ;
This will instruct RooAbsPdf::fitTo() to include any constraint p.d.f on parameter f in the definition of the likelihood. It is possible to add multiple constraints on the same parameter to the 'master' p.d.f. If so, all constraints on a given parameter will be added to the likelihood.
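In likelihood terms, multiplying in a Gaussian constraint p.d.f adds a quadratic penalty to the negative log-likelihood; a pure-Python toy sketch (invented helper names and toy numbers) shows how the constraint pulls the fitted value away from the unconstrained minimum:

```python
import math

def nll(mu, data, sigma=1.0):
    """Unbinned -log L for a Gaussian model N(x | mu, sigma)."""
    return sum(0.5 * ((x - mu) / sigma) ** 2 +
               math.log(sigma * math.sqrt(2.0 * math.pi)) for x in data)

def constrained_nll(mu, data, mu0=0.8, smu=0.1):
    """The constraint pdf multiplies L, i.e. adds a penalty term to -log L."""
    return nll(mu, data) + 0.5 * ((mu - mu0) / smu) ** 2

data = [0.5, 1.0, 1.5]      # sample mean 1.0, so unconstrained minimum at 1.0
# Grid scan: the constrained minimum sits between the sample mean (1.0)
# and the constraint mean (0.8), at (3*1.0 + 100*0.8) / 103 ~ 0.806.
best = min((constrained_nll(m / 1000.0, data), m / 1000.0)
           for m in range(500, 1200))[1]
```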

The RooMCStudy class has been extended to accept the Constrain() argument as well in its constructor. If specified, it will do two things: 1) it will pass the Constrain() argument to the fitting pass of the toy study, and 2) it will modify the generation step into a two-step procedure: for each toy in the study it will first sample a value of each constrained parameter from the joint constraints p.d.f, and it will then generate the observables for that experiment with the thus obtained parameter values. In this mode of operation the parameter values for each toy may thus be different. The actual parameters for each toy can be obtained with the newly added RooMCStudy::genParDataSet() member function. The calculation of the pull values for each parameter has been modified accordingly.

Alternatively, it is possible to specify constraints to both RooAbsPdf::fitTo() and the RooMCStudy constructor using the ExternalConstraint() named argument to supply constraint p.d.f.s that are not part of the 'master' p.d.f but rather an ad-hoc supplied external constraint. The argument supplied to ExternalConstraint() should be (a set of) constraint p.d.f(s), rather than (a set of) parameters for which internal constraint p.d.f.s should be picked up.

New operator class RooLinearMorph

A new numeric operator class RooLinearMorph has been added that provides a continuous transformation between two p.d.f. shapes in terms of a linear parameter alpha. The algorithm for histograms is described in the paper by Alex Read, 'Linear interpolation of histograms', NIM A 425 (1999) 357-369. The implementation in RooLinearMorph is for continuous functions.
        // Observable and sampling binning to be used by RooLinearMorph ("cache")
        RooRealVar x("x","x",-20,20) ;
        x.setBins(1000,"cache") ;

        // End point shapes : a gaussian on one end, a polynomial on the other
        RooGaussian f1("f1","f1",x,RooConst(-10),RooConst(2)) ;
        RooPolynomial f2("f2","f2",x,RooArgSet(RooConst(-0.03),RooConst(-0.001))) ;

        // Interpolation parameter: rlm=f1 at alpha=0, rlm=f2 at alpha=1
        RooRealVar alpha("alpha","alpha",0,1.0) ;
        RooLinearMorph rlm("rlm","rlm",f1,f2,x,alpha) ;

        // Plot halfway shape
        RooPlot* frame = x.frame() ;
        rlm.plotOn(frame) ;
In short, the algorithm works as follows: for both f1(x) and f2(x), the cumulative distribution functions F1(x) and F2(x) are calculated. One takes a value 'y' of both c.d.f.s and determines the corresponding x values x1,x2 at which F1(x1)=F2(x2)=y. The value of the interpolated p.d.f fbar(x) is then calculated as fbar(alpha*x1+(1-alpha)*x2) = f1(x1)*f2(x2) / ( alpha*f2(x2) + (1-alpha)*f1(x1) ). Given that it is not easily possible to calculate the value of RooLinearMorph at a given value of x, the values for all values of x are calculated in one pass (through a scan over y) and stored in a cache. NB: the range of the interpolation parameter does not need to be [0,1]; it can be anything.
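The c.d.f.-matching step of this algorithm can be sketched in plain Python (cdf_gauss, quantile and morph_x are invented helpers; only the x-interpolation step is shown, not the full p.d.f value formula):

```python
import math

def cdf_gauss(x, mean, sigma):
    """Cumulative distribution function of a Gaussian."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2.0))))

def quantile(cdf, y, lo=-50.0, hi=50.0):
    """Invert a monotone c.d.f. by bisection."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def morph_x(y, alpha, cdf1, cdf2):
    """Find x1, x2 with F1(x1) = F2(x2) = y and interpolate:
    x = alpha*x1 + (1-alpha)*x2 (the c.d.f.-matching step above)."""
    return alpha * quantile(cdf1, y) + (1.0 - alpha) * quantile(cdf2, y)

cdf1 = lambda x: cdf_gauss(x, -10.0, 2.0)   # end-point shape 1
cdf2 = lambda x: cdf_gauss(x, 5.0, 2.0)     # end-point shape 2
median = morph_x(0.5, 0.5, cdf1, cdf2)      # halfway: midpoint of the medians
```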

New workspace tool RooSimWSTool

A new tool to clone and customize p.d.f.s into a RooSimultaneous p.d.f has been added. This new tool succeeds the original RooSimPdfBuilder tool, which had similar functionality, but has a much cleaner interface, partly thanks to its use of the RooWorkspace class for both the input of prototype p.d.f.s and the output of built p.d.f.s.

The simplest use case is to take a workspace p.d.f as prototype and 'split' a parameter of that p.d.f into two specialized parameters depending on a category in the dataset. For example, given a Gaussian p.d.f G(x,m,s), we want to construct a G_a(x,m_a,s) and a G_b(x,m_b,s) with different mean parameters, to be fit to a dataset with observables (x,c) where c is a category with states 'a' and 'b'. Using RooSimWSTool one can create a simultaneous p.d.f from G_a and G_b out of G with the following commands
        RooSimWSTool wst(wspace) ;
        wst.build("G_sim","G",SplitParam("m","c")) ;
From this simple example one can go to builds of arbitrary complexity by specifying multiple SplitParam arguments on multiple parameters involving multiple splitting categories. Splits can also be performed in the product of multiple categories, e.g.
        SplitParam("m","c,d")) ;
splits parameter m in the product of states of c and d. Another possibility is the 'constrained' split, which clones the parameter for all but one state and inserts a formula specialization in a chosen state that evaluates to 1 - sum_i(a_i), where the a_i are all other specializations. For example, given a category c with states "A","B","C","D", the specification
will result in parameters m_A, m_B and m_C, and a formula expression m_D that evaluates to (1-(m_A+m_B+m_C)). Constrained splits can also be specified in products of categories. In that case the name of the remainder state follows the syntax {State1;State2}, where State1 and State2 are the state names of the two splitting categories. Additional functionality exists to work with multiple prototype p.d.f.s simultaneously.

Improved infrastructure for caching p.d.f and functions

The infrastructure that exists for caching p.d.f.s, i.e. p.d.f.s that precalculate their value for all observable values at once and cache those in a histogram that is returned as the p.d.f shape (with optional interpolation), has been expanded. This infrastructure comprises RooAbsCachedPdf, the base class for all caching p.d.f.s, RooAbsSelfCachedPdf, a base class for end-user caching p.d.f implementations that simply cache the result of evaluate(), and RooCachedPdf, which can wrap and cache any input p.d.f specified in its constructor.

By default a p.d.f is sampled and cached in all observables in any given use context, with no need to specify what those are in advance. The internal code has also been changed such that all cache histograms now store pre-normalized p.d.f.s, which is more efficient than 'raw' p.d.f histograms that are explicitly post-normalized through integration. Multiple different use cases (e.g. definitions of what are observables vs parameters) can be cached simultaneously. It is now also possible to specify that p.d.f.s should be sampled and cached in one or more parameter dimensions in addition to the automatically determined set of observables.

Also a complete new line of classes with similar functionality has been added inheriting from RooAbsReal. These are RooAbsCachedReal,RooAbsSelfCachedReal and RooCachedReal. A newly added class RooHistFunc presents these shapes and is capable of handling negative entries.

New PDF error handling structure

New infrastructure has been put into place to propagate and process p.d.f evaluation errors during fitting. Previously evaluation errors were marked with a zero p.d.f value and propagated as a special condition in RooAddPdf, RooProdPdf etc to result in a zero top-level p.d.f value that was caught by the RooFit minuit interface as a special condition. Summary information on the value of the parameters and the observables was printed for the first 10 occurrences of such conditions.

Now, each p.d.f component that generates an error in its evaluation logs the error into a separate facility during fitting, and the RooFit minuit interface polls this error logging facility for problems. This allows much more detailed and accurate warning messages during the minimization phase. The level of verbosity of this new error facility can be controlled with a new
        PrintEvalErrors(Int_t code)
argument to fitTo().

The new-style error logging is active whenever MINUIT is operating on such a p.d.f. By default, the first 3 evaluation errors are printed. Outside the MINUIT context, each evaluation error generates a separate message through RooMsgService.

Other new features




The new montecarlo directory groups the packages


New Classes: TGSplitFrame, TGShapedFrame, TGEventHandler

These three classes have been primarily developed to be used in EVE. For an example of how to use them, see tutorials/eve/SplitGLView.C (this macro is used as a plugin by tutorials/eve/alice_esd_split.C).







Hierarchical context menus.

Modal Dialogs

Context Menus

The context menus of ROOT classes can be created with hierarchical sub-menus, which are more convenient and offer better organization. This makes it possible to access more class methods from the context menu (without the menu becoming larger than the screen). The following is an example producing the hierarchical submenu structure shown below.
 void SetLevelOne(EPaletteType  palette = pal3); // *MENU={Hierarchy="Candidates/SetLevelOne"}*
 void SetPalette(EPaletteType palette = pal3);   // *SUBMENU={Hierarchy="Candidates/SetPalette"}*
 void SetCatalog(const char * = "HESS") { }      // *MENU={Hierarchy="Candidates/SetCatalog"}*
 void AddCatalog(const char * = "HESS") { }      // *MENU={Hierarchy="Candidates/AddCatalog"}*
 void RemoveCatalog(const char *  = "HESS") { }  // *MENU={Hierarchy="Candidates/RemoveCatalog"}*
 void AddCandidate(const char * = "NAME") { }    // *MENU={Hierarchy="Candidates/AddCandidate"}*

 EPaletteType fPalette; //*OPTION={GetMethod="GetPalette";SetMethod="SetPalette";Items=(PrettyPalette="PrettyPalette",SchlenkPalette="Schlenk",pal3="Pal3",pal4="Pal4")}*

Hierarchical context menus.


GUI Builder

TASImage - libAfterImage library



Graphical Output







Histograms painting










Version 3 of QT is not supported anymore. If you install ROOT with the QT option you must have QT version 4 already installed.


Major changes

Minor changes, fixes and improvements

Possible performance issues with ATI drivers (fglrx)

In late 2007 ATI switched to a new driver architecture. With these drivers a significant degradation of GL performance in selection mode, up to a factor of 50, was observed. Both linux and Windows drivers were affected. The issue has been resolved in the latest driver versions.


Major changes

Minor changes, fixes and improvements


A new directory, minicern, has been introduced. This directory contains the zebra and hbook files required to build the h2root and g2root utilities. These small files remove the dependency on the old CERNLIB files. h2root and g2root, as well as the library libHbook, are automatically built when configuring ROOT if a Fortran compiler is found on the system.
