
ROOT Version 5.30/00 Release Notes

ROOT version 5.30/00 was released on June 28, 2011. If you are upgrading from an older version, please read the release notes of versions 5.22, 5.24, 5.26 and 5.28 in addition to these notes.

The release of version 5.32 is scheduled for November 29, 2011.


The following people have contributed to this new version:

Bertrand Bellenot, CERN/SFT,
Dario Berzano, INFN and University of Torino, ALICE, Proof,
Brian Bockelman, UNL/CMS,
Rene Brun, CERN/SFT,
Philippe Canal, FNAL,
Olivier Couet, CERN/SFT,
Kyle Cranmer, NYU, RooStats,
David Dagenhart, FNAL/CMS,
Gerri Ganis, CERN/SFT,
Andrei Gheata, CERN/Alice,
Sven Kreiss, NYU, RooStats,
Wim Lavrijsen, LBNL, PyRoot,
Sergei Linev, GSI,
Lorenzo Moneta, CERN/SFT,
Axel Naumann, CERN/SFT,
Eddy Offermann, Renaissance,
Giovanni Petrucciani, UCSD,
Bartolomeu Rabacal, CERN/ADL, Math,
Fons Rademakers, CERN/SFT,
Paul Russo, FNAL,
Elvin Alin Sindrilaru, CERN,
Joerg Stelzer, DESY/Atlas, TMVA,
Alja Tadel, CERN/CMS, Eve,
Matevz Tadel, CERN/Alice, Eve,
Eckhard von Toerne, University Bonn, ATLAS, TMVA,
Wouter Verkerke, NIKHEF/Atlas, RooFit,











Building with CMake

     mkdir <builddir>                  # create an empty directory in which CMake will put temporary and binary files
     cd <builddir>
     cmake [options] <rootsources>     # by default it will generate a Makefile (or NMake file on Windows)
     make [options]                    # you can use any standard make options (e.g. -jN)
     make install                      # installs into the source tree by default; use CMAKE_INSTALL_PREFIX to change it

Useful [options] for the cmake step include:

     -DCMAKE_INSTALL_PREFIX=<installdir>  # installation prefix
     -Dxxxx=ON -Dyyyy=OFF                 # optional ROOT components (e.g. tmva, mathmore, gdml, etc.)
     -DGIF_INCLUDE_DIR=/usr/include       # specify locations for external libraries and packages
     -G "<generator name>"                # e.g. "Xcode", "Eclipse CDT4 - Unix Makefiles", "Visual Studio 9 2008"
     -DCMAKE_BUILD_TYPE=Debug             # other build types: Release, RelWithDebInfo, MinSizeRel

TextInput library

The getline and editline libraries have been replaced by a single, new, platform-independent TextInput library. Previously, ROOT used separate readline implementations for Windows (clib/Getline) and all other platforms (editline/[n]curses based). The Windows prompt was limited to one line (with '$' as continuation character), no colors, etc. The new TextInput library works the same way on all platforms, with history and color highlighting, multiple inputs and outputs (prompt, GUI), and without the need to link [n]curses, i.e. it is much smaller than before. It implements all bash-like editor shortcuts and enables us to add e.g. context-sensitive help in the future.

The only known backward incompatibility is the constness of the return type of Getline() and Getlinem(): they now return const char*.



Object Merging

We introduced a new explicit interface for providing merging capability. If a class has a method with the name and signature:
   Long64_t Merge(TCollection *input, TFileMergeInfo*);
it will be used by a TFileMerger (and thus by PROOF) to merge one or more other objects into the current object. Merge should return a negative value if the merging failed.

If this method does not exist, the TFileMerger will use a method with the name and signature:

   Long64_t Merge(TCollection *input);
TClass now provides quick access to these merging functions via TClass::GetMerge. The wrapper function is automatically created by rootcint and can also be installed via TClass::SetMerge. The wrapper function should have the signature/type ROOT::MergeFunc_t:
   Long64_t (*)(void *thisobj, TCollection *input, TFileMergeInfo*);
We added the new Merge function to TTree and THStack. We also added it to TQCommand, as the existing TQCommand::Merge does not have the right semantics (in part because TQCommand is a collection).
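The calling convention can be sketched outside ROOT with stand-in types (MergeInfoStub and Counter below are hypothetical illustrations, not ROOT's real TCollection and TFileMergeInfo):

```cpp
#include <vector>

// Hypothetical stand-in for ROOT's TFileMergeInfo (illustration only).
struct MergeInfoStub {
    bool fIsFirst = true;   // true on the first Merge call of a series
};

// A mergeable object following the Merge(collection, info) convention:
// merge every object of the input collection into *this; return a
// negative value if the merging failed.
struct Counter {
    long long fCount = 0;

    long long Merge(const std::vector<Counter*>& input, MergeInfoStub* /*info*/) {
        long long nMerged = 0;
        for (Counter* c : input) {
            if (!c) return -1;          // signal failure with a negative value
            fCount += c->fCount;
            ++nMerged;
        }
        return nMerged;
    }
};
```

In ROOT itself the merger looks the method up via the dictionary (TClass::GetMerge), so any class exposing this signature is picked up by TFileMerger without inheriting from a special base class.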

In TFileMerger, we added a PrintLevel to allow hadd to request more output than the regular TFileMerger produces.

We removed all hard dependencies of TFileMerger on TH1 and TTree. (Soft dependencies still exist to be able to disable the merging of TTrees and to be able to disable the AutoAdd behavior of TH1).

The object TFileMergeInfo can be used inside the Merge function to pass information between runs of the Merge (see below). In particular it contains:

   TDirectory  *fOutputDirectory;  // Target directory where the merged object will be written.
   Bool_t       fIsFirst;          // True if this is the first call to Merge for this series of objects.
   TString      fOptions;          // Additional text-based options passed down to customize the merge.
   TObject     *fUserData;         // Placeholder to pass extra information. This object will be deleted at the end of each series of objects.

The default in TFileMerger is to call Merge for every object in the series (i.e. the collection has exactly one element) in order to save memory (by not having all the objects in memory at the same time).

However, for histograms the default is to first load all the objects and then merge them in one go; this is customizable when creating the TFileMerger object.

LZMA Compression and Compression Level Setting

ROOT I/O now supports the LZMA compression algorithm for compressing data, in addition to the ZLIB compression algorithm. LZMA compression typically results in smaller files but takes more CPU time to compress the data. To use the new feature, the external XZ package must be installed when ROOT is configured and built: download version 5.0.3 and make sure to configure it with -fPIC:
   ./configure CFLAGS='-fPIC'
Then the client C++ code must call routines to explicitly request LZMA compression.

ZLIB compression is still the default.

Setting the Compression Level and Algorithm
There are three equivalent ways to set the compression level and algorithm. For example, to select the LZMA algorithm with compression level 5:
  1. TFile f(filename, option, title);
    f.SetCompressionSettings(ROOT::CompressionSettings(ROOT::kLZMA, 5));
  2. TFile f(filename, option, title, ROOT::CompressionSettings(ROOT::kLZMA, 5));
  3. TFile f(filename, option, title);
    f.SetCompressionAlgorithm(ROOT::kLZMA);
    f.SetCompressionLevel(5);
These methods work for TFile, TBranch, TMessage, TSocket, and TBufferXML. The compression algorithm and level settings only affect data compressed after they have been set. TFile passes its settings to a TTree's branches only at the time the branches are created; this can be overridden by explicitly setting the level and algorithm for the branch. These classes also have the following methods to access the compression algorithm and level:
Int_t GetCompressionAlgorithm() const;
Int_t GetCompressionLevel() const;
Int_t GetCompressionSettings() const;
If the compression level is set to 0, then no compression will be done. All of the currently supported algorithms allow the level to be set to any value from 1 to 9. The higher the level, the larger the compression factors will be (smaller compressed data size). The tradeoff is that for higher levels more CPU time is used for compression and possibly more memory. The ZLIB algorithm takes less CPU time during compression than the LZMA algorithm, but the LZMA algorithm usually delivers higher compression factors.
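The three accessors above reflect that the algorithm and level are packed into a single settings integer. To the best of my knowledge the encoding is simply algorithm * 100 + level; the following is a small illustrative re-implementation, not ROOT's actual code (the real definitions live in core/zip/inc/Compression.h):

```cpp
// Illustrative sketch of the compression-settings encoding (not ROOT's code).
enum ECompressionAlgorithm {
    kUseGlobalSetting = 0, kZLIB = 1, kLZMA = 2, kOldCompressionAlgo = 3
};

inline int CompressionSettings(ECompressionAlgorithm algorithm, int level) {
    if (level < 0)  level = 0;        // clamp to the supported 0..99 range
    if (level > 99) level = 99;
    return algorithm * 100 + level;   // e.g. kLZMA, level 5 -> 205
}

inline int GetCompressionAlgorithm(int settings) { return settings / 100; }
inline int GetCompressionLevel(int settings)     { return settings % 100; }
```

This makes clear why a single Int_t can be passed through TFile constructors and why GetCompressionSettings() returns both pieces of information at once.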

The header file core/zip/inc/Compression.h declares the function "CompressionSettings" and the enumeration for the algorithms. Currently the following selections can be made for the algorithm: kZLIB (1), kLZMA (2), kOldCompressionAlgo (3), and kUseGlobalSetting (0). The last option refers to an older interface used to control the algorithm, which is maintained for backward compatibility. The following function, defined in core/zip/inc/Bits.h, sets the global variable:

R__SetZipMode(int algorithm);
If the algorithm is set to kUseGlobalSetting (0), the global variable controls the algorithm for compression operations. This is the default and the default value for the global variable is kZLIB.

Asynchronous Prefetching

The prefetching mechanism uses two new classes (TFilePrefetch and TFPBlock) to prefetch a block of tree entries in advance. A dedicated thread takes care of actually transferring the blocks and making them available to the main requesting thread, so the time the main thread spends waiting for data before processing decreases considerably. Besides the prefetching mechanism there is also a local caching option which can be enabled by the user. Both capabilities are disabled by default and must be explicitly enabled by the user.

In order to enable prefetching the user must set the rootrc environment variable TFile.AsyncPrefetching as follows: gEnv->SetValue("TFile.AsyncPrefetching", 1). Only when prefetching is enabled can the user set the local cache directory in which the transferred files will be saved. For subsequent reads of the same file the system will use the local copy from the cache. To set up a local cache directory, the client can use the following commands:

TString cachedir="file:/tmp/xcache/";
// or using xrootd on port 2000  
// TString cachedir="root://localhost:2000//tmp/xrdcache1/";
gEnv->SetValue("Cache.Directory", cachedir.Data());
The TFilePrefetch class is responsible for actually reading and storing the requests received from the main thread. It also creates the working thread which will transfer all the information. Apart from managing the block requests, it also deals with caching the blocks on the local machine and retrieving them when necessary.

The TFPBlock class represents the encapsulation of a block request. It contains the chunks to be prefetched and also serves as a container for the information read.










Histogram package

All histogram classes (including the TProfile classes)















PyROOT

A couple of minor bug fixes related to TPython::LoadMacro, TClass::DynamicCast, and the setting of pointers through operator[] on STL containers.

Math Libraries



TRandom1 and TRandom3

ROOT::Fit::Fitter and related classes


  • Add new methods Minimizer::GetHessianMatrix(double * mat) and Minimizer::GetCovMatrix(double * mat) to return the full matrices by filling the passed C arrays, which must have a dimension of at least n x n, where n is the total number of parameters. The elements for the fixed parameters will be filled with zeros. These methods are currently implemented only by Minuit and Minuit2.
  • Change the default tolerance in ROOT::Math::MinimizerOptions from 0.0001 to 0.01.
  • MathMore





    One of the core classes used by HistFactory models (RooRealSumPdf) was modified, leading to substantial speed improvements (for models that use the default -standard_form option). This new version supports a few types of interpolation for the normalization of the histograms: the piece-wise logarithmic interpolation paired with a Gaussian constraint is equivalent to a log-normal constraint in a transformed version of the nuisance parameter. The benefit of this approach is that it is easy to prevent the normalization from taking on unphysical negative values. This is the prescription used by the CMS Higgs group and agreed upon by the LHC Higgs Combination Group. There is not yet XML-based steering for the different interpolation types, but there is a simple script to modify it.

    Near term goals for HistFactory


    General Improvements

    This release brings several speed improvements to the RooStats tools and improved stability and performance with PROOF. This comes mainly through changes to the ToyMCSampler. In addition, the HypoTestInverter tool has been rewritten, leading to some changes in the HypoTestResult. Finally, a new hypothesis test calculator called FrequentistCalculator was written, which plays the same role as the HybridCalculator but eliminates nuisance parameters in a frequentist way.


    The primary interface for this class is to return a SamplingDistribution of a given TestStatistic. The ToyMCSampler had a number of internal changes for improved performance with PROOF; these should be transparent. In addition, a new method, RooAbsData* GenerateToyData(RooArgSet& paramPoint), was added that gives public access to the generation of toy data with all the same options for the treatment of nuisance parameters, binned or unbinned data, treatment of the global observables, importance sampling, etc. This new method is particularly useful for producing the expected limit bands, where one needs to generate background-only pseudo-experiments in the same way as was used for the primary limit calculation.


    In the process of writing the new HypoTestInverter the conventions for p-values, CLb, CLs+b, and CLs were revisited. The situation is complicated by the fact that when performing a hypothesis test for discovery the null is background-only, but when performing an inverted hypothesis test the null is a signal+background model. The new convention is that the p-value for both the null and the alternate are taken from the same tail (as specified by the test statistic). Both CLs+b and CLb are equivalent to these p-values, and the HypoTestResult has a simple switch SetBackgroundIsAlt() to specify the pairing between (null p-value, alternate p-value) and (CLb, CLs+b).
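    For reference, CLs itself is conventionally defined as the ratio of these two quantities, CLs = CLs+b / CLb. A one-line sketch (the helper name is mine, not part of RooStats' API):

```cpp
// CLs = CLs+b / CLb (illustrative helper; not part of RooStats' API).
inline double ComputeCLs(double clsplusb, double clb) {
    if (clb <= 0.0) return -1.0;   // undefined; flag with a negative value
    return clsplusb / clb;
}
```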

    HypoTestInverter, HypoTestInverterResult, HypoTestInverterPlot

    These classes have been rewritten so that they can be used with the new hypothesis test calculators. The HypoTestInverter class can now be constructed from any generic HypoTestCalculator; both the HybridCalculator and the new FrequentistCalculator are supported. The HypoTestInverter can be constructed in two ways: either by passing a HypoTestCalculator and a data set, or by passing the models for the signal and the background together with a data set. In the first case the user configures the HypoTestCalculator before passing it to the HypoTestInverter; it must be configured to use the signal plus background model as the null model and the background model as the alternate model. Optionally the user can pass the parameter to scan; if it is not passed, the first parameter of interest of the null model will be used. In the second case (when passing the models and the data directly) the HypoTestInverter can be configured to use either the frequentist or the hybrid calculator, and the user can configure the class afterwards, for example setting the test statistic via SetTestStatistic, or the number of toys to run for each hypothesis by retrieving the contained HypoTestCalculator:
    HypoTestInverter inverter(obsData, model_B, model_SB, parameterToScan, HypoTestInverter::kFrequentist);
    ProfileLikelihoodRatioTestStat profLR( *model_SB->GetPdf() );
    FrequentistCalculator * htcalc = (FrequentistCalculator*) inverter.GetHypoTestCalculator();
    htcalc->SetToys( ntoySB, ntoyB);
    The inverter can then run on a fixed grid of npoints between xmin and xmax, or perform an automatic scan in which a bisection algorithm is used. For a fixed grid one calls SetFixedScan(npoints, xmin, xmax); for an automatic scan one uses SetAutoScan. The result is returned by the GetInterval function as a HypoTestInverterResult object. If a fixed grid is used, the upper limit is obtained by interpolating the scanned points; the interpolation can be linear or a spline (if result.SetInterpolationOption(HypoTestInverterResult::kSpline) is called). The upper limit, the expected p-value distributions, and also the upper limit distributions can be obtained from the result class.
    HypoTestInverterResult * result = inverter.GetInterval();
    double upperLimit = result->UpperLimit();
    double expectedLimit = result->GetExpectedUpperLimit(0);
    The limit values, p values and bands can be drawn using the HypoTestInverterPlot class. Example:
    HypoTestInverterPlot * plot = new HypoTestInverterPlot("Result","POI Scan Result",result);
    plot->Draw("2CL CLb");

    The Draw option "2CL CLb" draws, in addition to the observed limit and bands, the observed CLs+b and CLb. The result is shown in this figure:


    A new tutorial, StandardHypoTestInvDemo, has been added to show the usage of this class and to produce a picture like the one shown above.


    This is a HypoTestCalculator that returns a HypoTestResult similar to the HybridCalculator. The primary difference is that this tool profiles the nuisance parameters for the null model and uses those fixed values of the nuisance parameters for generating the pseudo-experiments, where the HybridCalculator smears/randomizes/marginalizes the nuisance parameters.


    Several improvements have been made to the class, in particular the possibility to set different integration types. One can select the integration types available in the ROOT integration routines (ADAPTIVE, VEGAS, MISER, PLAIN for multiple dimensions). In addition, one can use an integration type based on generating nuisance-parameter toy MC (the TOYMC method). If the nuisance parameters are uncorrelated, this last method can scale up to a large number of nuisance parameters; it has been tested to work with up to 50-100 parameters.


    TMVA version 4.1.2 is included in this ROOT release. The changes with respect to ROOT 5.28 / TMVA 4.1.0 are, in detail:

    Variable transformations

    Bug fixes









    TGToolBar, TRootGuiBuilder, TStyleManager, TRecorder









    TGColorDialog, TGFontDialog, TGTextEditDialogs
















    TH3 Painting




