# ROOT Version 5.24/00 Release Notes

ROOT version 5.24/00 will be released at the end of June 2009. If you are upgrading from an older version, please read the release notes of versions 5.16, 5.18, 5.20 and 5.22 in addition to these notes.

• Bindings - packages related to the interplay with other programming languages (Python, Ruby)
• Cint - the C++ interpreter
• Core - the basic ROOT functionality
• Geometry - building, representing and drawing geometrical objects
• 2D Graphics - ROOT's two dimensional graphics interface
• 3D Graphics - ROOT's three dimensional graphics interface
• Graphical User Interface - from basic GUI elements to ROOT's own, complete dialogs
• Histogramming - counting values, spectra, and drawing them
• HTML - the documentation generator
• Input/Output - storing and reading data
• Mathematics - everything one can use to calculate: minimizers, matrices, FFT, and much more
• Miscellaneous - things that didn't make it into the other groups: table
• Monte Carlo - Monte Carlo and physics simulation interfaces
• Networking - network-related parts, e.g. protocols and authentication interfaces
• PROOF - parallel ROOT facility
• RooFit - a fitting library
• SQL - database interfaces
• TMVA - multivariate analysis tools
• Trees - ROOT's unique container class and related utilities

Binaries for all supported platforms are available at:

      http://root.cern.ch/root/Version524.html
Versions for AFS have also been updated. See the list of supported platforms:

      http://root.cern.ch/Welcome.html

For more information, see the ROOT home page:

      http://root.cern.ch

The following people have contributed to this new version:
Kevin Belasco, Princeton University, for MCMC,
Bertrand Bellenot, CERN/SFT,
Rene Brun, CERN/SFT,
Philippe Canal, FNAL,
Or Cohen, CERN & Weizmann, TMVA
Olivier Couet, CERN/SFT,
Kyle Cranmer, NYU/Atlas, RooStats
Dominik Dannheim, MPI-Munich/Atlas, TMVA
Valeri Fine, BNL/STAR,
Fabrizio Furano, CERN/IT,
Gerri Ganis, CERN/SFT,
Andrei Gheata, CERN/Alice,
Mihaela Gheata, CERN/Alice,
David Gonzalez Maline, CERN/SFT,
Roberto Gracia Del Baño, Universidad de Valencia, recorder,
Andreas Hoecker, CERN/Atlas, TMVA
Jan Iwaszkiewicz, CERN,
Lukasz Janyst, CERN/IT,
Anna Kreshuk, GSI,
Wim Lavrijsen, LBNL, PyRoot
Alfio Lazzaro, Milano/Atlas, Minuit
George Lewis, New York University/Atlas for SPlot,
Josef Leydold, TU Vienna, Unuran
Sergei Linev, GSI,
Johan Lundberg, CERN/Atlas, class TRolke
Anar Manafov, GSI,
Lorenzo Moneta, CERN/SFT,
Axel Naumann, CERN/SFT,
Eddy Offermann, Renaissance,
Katerina Opocenska, Charles University of Prague, recorder,
Rustem Ospanov, CERN/Atlas, TMVA
Mario Pelliccioni, Turin/CMS, RooStats
Timur Pocheptsov, JINR/Dubna,
Paul Russo, FNAL,
Manuel Schiller, Heidelberg/LHCb, SMatrix,
Gregory Schott, Karlsruhe/CMS, RooStats
Stefan Schmitt, DESY, TUnfold
Peter Speckmayer, CERN/Atlas, TMVA
Joerg Stelzer, DESY/Atlas, TMVA
Matthew Strait, umn.edu, doc
Jan Therhaag, Bonn/Atlas, TMVA
Eckhard von Toerne, Bonn/Atlas, TMVA
Wouter Verkerke, NIKHEF/Atlas, RooFit
Alexander Voigt, Dresden/Atlas, TMVA
Helge Voss, MPI-K-Heidelberg/Atlas, TMVA
Andrzej Zemlja, IFJ-Krakow/Atlas

### Core

• New class TBase64 providing Base64 encoding and decoding. Base64 encoded messages are typically used in authentication protocols and to pack binary data in HTTP or mail messages.
• New method in TSystem:
   TString TSystem::GetFromPipe(const char *command)

which executes "command" in the shell and returns the output in the TString. Multi-line output is separated by \n's.
• Add proper support for Microsoft Visual C++ 9.0
• Add support for 'unix' sockets on Windows.
• New method TString::Clear() to reset the string but not to resize it to the default (small) size. Useful when the string was pre-allocated to a large size and has to be re-used.
• New method signature:
   TAttAxis::SetNdivisions(Int_t n1, Int_t n2, Int_t n3, Bool_t optim);
• The statically linked roota executable and libRoot.a are currently only supported on Linux platforms. We hope to extend this to MacOS X soon.

### Meta

• Add new macro ClassDefNV (ClassDef Non Virtual) which does not define any virtual function, whereas ClassDef defines IsA, Streamer and ShowMembers as virtual.
This should be used only in classes that are never inherited from!
• Improve performance of TClass::GetMethod (and friends)

### ACLiC

• Implement TClassEdit::InsertStd() which puts "std::" in front of all STL classes.
• The generated library now always records which version of ROOT it was built with, and is rebuilt if the running version of ROOT is different.
• Add support for '+' character embedded in the script's name or directory name.
• The dependency tracking file (script_C.d) is now always created when the library is built. The dependency tracking file now records with which version of ROOT the library was built and the library is now rebuilt if it is loaded in a different version of ROOT.

### I/O

• Implement proxy support in TWebFile. The proxy URL can be specified either via TWebFile::SetProxy() or via the shell variable http_proxy, as used by wget, e.g.:
   export http_proxy=http://pcsalo.cern.ch:3128
To bypass the proxy, the TWebFile ctor (or via TFile::Open()) supports the option "NOPROXY".
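A configuration sketch (host names are illustrative):

```cpp
// Configure a proxy for TWebFile access, and show the "NOPROXY" escape
// hatch (sketch; the proxy and server URLs are illustrative).
#include "TWebFile.h"
#include "TFile.h"

void proxy_demo()
{
   // All subsequent TWebFile opens go through this proxy...
   TWebFile::SetProxy("http://proxy.example.com:3128");
   TFile *f = TFile::Open("http://server.example.com/data.root");

   // ...unless "NOPROXY" is passed as the open option.
   TFile *g = TFile::Open("http://server.example.com/data.root", "NOPROXY");
}
```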
• Add support for streaming std::bitset STL containers
• Extend the checks done in case of a StreamerInfo checksum mismatch, to avoid spurious failures (for example due to the various possible type names for STL containers) and to report details on the nature of the mismatch: explicitly list missing base classes, missing data members, or the actual differences in type or comments. For example:
   Warning in : The following data member of the on-file layout version 2 of class 'Tdata' differs from the in-memory layout version 2:
      double mydouble; //
   vs
      double mydouble_two; //
   Warning in : The following data member of the in-memory layout version 2 of class 'Tdata' is missing from the on-file layout version 2:
      int more; //
   Warning in : The following data member of the in-memory layout version 2 of class 'Tdata' is missing from the on-file layout version 2:
      int three; //

• Upgrade MakeProject to be able to handle ROOT files created by ATLAS.
• Allow the user to provide a custom reallocator when the TBuffer is being passed memory. If the TBuffer does not own the memory __and__ no custom memory reallocator has been set, a Fatal error will be issued:
   Fatal in : Failed to expand the data buffer because TBuffer does not own it and no custom memory reallocator was provided.
• Re-allow reading an empty vector< long double >; however, long double itself is still not supported.
• Upgrade TSQLFile to properly work with MySQL on MacOS.
• Update the CollectionProxyInfo interface to ensure the proper creation of iterators over std containers on all platforms.
• In XML and SQL output, use %e format to write float and double:
• Conversion from float/double to string per default performed with "%e" (exponential) format.
• The format can be configured with the SetFloatFormat methods, where one can specify the precision and width arguments of the printf call.
• sscanf works as before - "%f" accepts both exponential and decimal formats.

### Networking

• Class TWebFile now supports proxies. Set the proxy URL either via static method:
   TWebFile::SetProxy(const char *url);
or via the shell variable http_proxy, as used by e.g. wget.
• Class TWebFile now supports the basic authentication (AuthType Basic) scheme. The user name and password can be specified in the URL like this:
   http://username:mypasswd@pcsalo.cern.ch/files/aap.root

### Bonjour Support

New Zero-Configuration networking classes using Bonjour:
• TBonjourRecord
• TBonjourRegistrar
• TBonjourBrowser
• TBonjourResolver

Zeroconf is meant to solve the problem of finding services and connecting to them. Instead of having to know a machine's IP address and port number for the service, a machine offering a service simply announces that it offers the service. Clients who want to use a service ask for all the machines that are offering it and then the user decides which one to connect to.

Traditionally, you would have to make sure that each machine is configured correctly and on the network. Zeroconf takes care of all of this for you for a local area network. Lots of new hardware, such as printers with networking support or wireless routers, come with their own Zeroconf server to allow easy network configuration. On Mac OS X, many applications take advantage of Bonjour to advertise services, such as the ssh server, iTunes shares, or iChat availability. Zeroconf is a powerful way of simplifying your applications, and there are implementations available for most operating systems.

If you have Mac OS X, you already have Bonjour installed; otherwise, you can download the source code from the Apple website (http://developer.apple.com/Bonjour) and build and install Bonjour in relatively short order. Most modern Linux distributions come with Avahi, an LGPL implementation of Zeroconf with a compatibility API for Bonjour. The ROOT Bonjour classes were tested to work with both Apple's Bonjour implementation and Avahi's Bonjour compatibility layer.

Service discovery consists of three steps: registering a service, browsing for available services, and resolving the service to an actual address. A server will register its services with the Bonjour daemon. Clients will browse for services to get a list to provide to the user. Finally, when it is time to connect to a service, the client will resolve the selected service to an actual IP address and port and then connect to the service provider using TCP/IP.

### XROOTD

• New version 20090610-0430
• Improvements
• Add the possibility of using the xrd command line from batch scripts
• Add support for Adler32 checksum calculation of a local unix file (including stdin) and of a file on a remote xrootd data server.
• Add support for the so-called Xtreme copy, allowing xrdcp to read multiple chunks from several servers, in parallel.
• Add possibility to use a different version of a given C++ compiler or linker (--with-cxx=..., etc)
• Increase flexibility in configuring openssl and openafs support
• In GSI authentication, automate the loading of CRLs; the information about the URI is looked for either in the dedicated extension of the CA certificate or in the file "<CA hash>.crl_url", and the file is automatically downloaded and transformed into PEM format
• Fixes
• Server side
• Fix wrong reporting of the refresh option for Locate
• Fix incorrect propagation of selected nodes
• Prevent potential long duration loop (15 mins) after client disconnections
• Avoid potential deadlocks when trying to remove a node from a cluster
• Correct matching of incoming connection with previously dropped connection
• Correct export of cluster identification
• Correctly propagate information about files that could not be staged
• Prevent an endless deadlock when parallel streams stall due to a large WAN RTT
• Fix infinite wait for primary login that will never happen if you are a manager without a meta-manager
• Prevent an annoying (but not deadly) infinite loop should a server that is subject to a locate request display go offline.
• Client side
• Better handling of errno, especially for parallel streams
• Allow the client to cycle through all the remaining valid security protocols in the list of protocols returned by the server
• Fix a rare race condition happening when destroying instances with outstanding open requests
• Enforce cache coherency in the case of reads+writes in the same file
• Correctly guess the filesize of a file opened for writing in sync mode
• Make server host name check more flexible for GSI authentication
• Fix some relevant issues with cache handling on the client, including a rare but fatal bug in determining the cache holes list and the end of a cache lookup
• More complete detection of async read errors
• General
• Fix problem in handling the return code of X509_REQ_verify in XrdCryptosslX509Req.cc
• Avoid SEGV when doing an lsd admin command with authenticated xrootd clients
• Close race conditions that allowed a supervisor/manager to subscribe without declaring a data port. Initialize nostage state in XrdCmsState to prevent erroneous state declaration during initialization.
• Fix a problem with the subject name of proxies of level > 1; this was creating a failure when a Globus application was trying to use the proxy certificate
• For now, turn off IPV6 processing as it seems to create several problems.
• Fix a few issues with the available releases of gcc 4.4
• Fix a few issues with the 'icc' compiler
• Fix several issues in GSI and PWD authentication modules
• New features
• New File Residency Manager (frm), replacement for the MPS scripts
• Scripts are now provided to
• automatically download a CRL certificate (utils/getCRLcert)
• install the recommended version of OpenSSL and build it with the options optimal for usage in XROOTD/SCALLA (utils/installOpenSSL.sh)
• install the recommended version of OpenAFS and build it with the options optimal for usage in XROOTD/SCALLA (utils/installOpenAFS.sh)
• Miscellanea
• TokenAuthz and CS2 modules are no longer part of the main build; they have to be built externally

### Tree

• Significantly improve performance of TTree Proxy.
• Improve read performance of sub-branch containing vector of single types.
• Fix TTree::LoadBasket to properly handle the (new) case where no basket is stored with the TTree object.
• Fix the axis used for a histogram created by TTree::Draw for a branch of TString or std::string objects.
• MakeProxy now correctly supports branches that were created with a leaflist with more than one leaf (usually used for C structs).
• TTree::CloneTree and TChain::Merge in fast mode can now recover from some mismatch errors between the input and output TTrees by falling back to the 'slow' mode. In particular this allows a 'fast cloning' to handle files that require schema evolution (albeit, of course, much more slowly).
• Make sure that the TTreeCache does not (wrongly) attempt to cache the content of branches that are in auxiliary files.
• Make sure that FillBuffer does its work when the learning phase is over, even if the entry number is 'low' for the 'current' file of a chain.
• If TTree::SetEventList is called, TTree::GetEntryList no longer relinquishes ownership of the automatically created TEntryList
• Add the ability to see the TTree UserInfo list from the TBrowser
• Fix the case of reading a TTree containing an 'old' class layout that contained a std::vector which is no longer part of the current class layout
• Implement direct interfaces from TTree to the result of TSelector::Draw: TTree::GetVal(int) and TTree::GetVar(int)
• Added support for "5-D" plotting.

### Parallel Coordinates

• Fix a memory leak. The TParallelCoord destructor was not called when the canvas used to draw it was closed.

### PROOF

• Warning
• The classes TProofDataSetManager and TProofDataSetManagerFile have been renamed TDataSetManager and TDataSetManagerFile
• New functionality
• Add support for session queuing in the scheduler. This makes it possible to control the number of sessions allowed to process queries concurrently. The feature is enabled by a new parameter 'queue:fifo' in the 'xpd.schedparam' directive. In the case of static worker assignment (default, random, round-robin) the max number of running sessions can be limited by another new parameter 'mxrun'; for example
xpd.schedparam default mxrun:3 queue:fifo
will run concurrently only 3 sessions. Additional requests are queued and run as soon as one of the running sessions goes idle. The current policy is FIFO, so that there is a rotation among queued sessions. In the case of load-based worker assignment, the max number of running queries is determined dynamically.
• Add support for repeat functionality in the xrd.worker directive. To avoid repeating the same line N times one can just add 'repeat=N' in the line; for example
xpd.worker worker proofwrks:2093 repeat=4
will define 4 workers on port 2093 of machine 'proofwrks'.
• Add support for port specification via the directive 'xpd.port'
• Enable variable substitution in 'xpd.' directives using the standard Scalla mechanism described in http://xrootd.slac.stanford.edu/doc/dev/Syntax_config.htm .
• Build also a binary named 'xproofd' which runs an xrootd daemon with only the XrdProofdProtocol (i.e. no data serving).
This simplifies setups when data serving is not needed and also makes it easier to disentangle problems related to one specific protocol. The new binary accepts the same arguments as 'xrootd' and parses the same directives from the same configuration file, with the exception of 'xpd.protocol xproofd libXrdProofd.so', which should now be dropped. An alternative port can be specified via the new 'xpd.port' directive (see above).
• Add support for 'MasterOnly' mode in starting a PROOF session. This avoids starting the workers when one just wants to browse the datasets or retrieve results. To start a session in 'MasterOnly' mode, enter "masteronly" as the second argument to TProof::Open, e.g.
root[] TProof *p = TProof::Open("<masterurl>", "masteronly")
• Add full support for placeholders <uid>, <gid>, <group> and <homedir> for the directives specified via 'xpd.putenv'
• Add the configuration directive 'proofservparents' to allow specifying a different list of parent names for the 'proofserv' tasks. This is needed to avoid untimely killing of 'proofserv' instances in test setups when multiple instances of the daemons are running on the same machines under different names.
• Add the possibility to switch to asynchronous mode while running synchronously. A new button "Run in background" has been added to the dialog box. The behaviour of Ctrl-C has also been modified: the user is prompted for a choice among continuing asynchronously, stopping (terminating) or aborting the query.
• Add the possibility to define the dataset information sources via the directive 'xpd.datasetsrc'. In this way the permissions should be set correctly and the related problems disappear.
• Record the logs from the ROOT version validation tests (proofserv forked in test mode). In case of failure - or if the debug flag is on - the log files are kept under <xproof_adminpath>/rootsysvalidation/root.<tag>.log (the <tag> has all the '/' replaced by '-'). This should facilitate understanding problems in case of validation failures.
• Add support for automatic running of PROOF sessions in valgrind. The second argument of TProof::Open is used to trigger the relevant settings. To valgrind the master session start PROOF with TProof::Open("<master>","valgrind=master"); to valgrind two workers sessions use TProof::Open("<master>","valgrind=workers"); to valgrind master and 2 workers, use TProof::Open("<master>","valgrind=master+workers"). Other combinations are available.
The valgrind logs are available with the tag '<ordinal>-valgrind' in the log dialog or from TProofMgr::GetSessionLogs().
To add options to valgrind execute TProof::AddEnvVar("PROOF_WRAPPERCMD", "valgrind_opts:<options>") before starting the session.
• Add a new static method TProof::LogViewer("<master>") to graphically browse the session logs independently of the progress dialog. The improved log window allows choosing a different master and/or session and displays human-readable information about the starting time of the session being browsed.
• A set of scripts for quick interaction with a dataset manager via PROOF is available under $ROOTSYS/etc/proof/utils/pq2 . The scripts are prefixed pq2 (proof quick query - or proof-dq2) and allow to {browse, register, remove, verify} datasets on a given PROOF master. See $ROOTSYS/etc/proof/utils/pq2/README for more information.
• Improvements
• Enable schema evolution in TMessage by default; it can be disabled by setting 'Proof.SchemaEvolution: 0'.
• Extend the functionality of the dataset API to obtain information on a per-server basis; also add two new methods:
• TProof::SetDataSetTreeName(<dataset>,<treename>): set/change the default tree name in the TFileCollection;
• TProof::ExistsDataSet(<dataset>): check by-name the availability of a given dataset;
• In ProofBench,
• Load the macro before executing it. This circumvents a recently fixed problem, reducing the dependency on the server version.
• In make_dset.C, simplification of the body and of the signature, eliminating one redundant argument
• In TProofOutputFile, improve flexibility in defining the URL for the local file server. The "LOCALDATASERVER" env variable is tested, which can be defined with placeholders via the xpd.putenv directive in the xrootd/xproofd config files.
• Improve the parsing of lines with memory info. This solves occasional crashes while generating the memory plots.
• In TProofMgr::GetSessionLogs:
• add the possibility to postpone the retrieval of the logs files when the TProofLog object is created. This improved functionality is exploited in the log window.
• add decoding of the session starting time and full information about the master URL
• Enable new xrootd configuration options, including the possibility to set the compiler and linker
• Clean up the TProofMgr functions DetachSession and ShutdownSession, and better handle the internal list registration, to fix potential segvs when reopening a PROOF session inside the same ROOT session.
• Optimize the way results are transferred and merged:
• Output objects are added to the same TMessage until a HWM is reached (default 1MB; controlled by 'ProofServ.MsgSizeHWM'); this limits the number of transfers in the case of large numbers of small objects.
• Reasonably small histograms (GetSize() < MsgSizeHWM) are merged in one-go at the end instead of one-by-one to exploit, for example, the better performance of TH1::Merge on the full list of histos.
• Add possibility to compress the messages; this is controlled by ProofServ.CompressMessage <compression_level>
The default is still 'no compression' but this will allow to study the impact of compression.
• A sort of 'progress' counter for merging is now shown on the client:
root [n] p->Process(...)
...
Mst-0: merging output objects ... / (4 workers still sending)

This asserts socket activity and fixes the timeout problems during long merging phases reported in a few cases.
• In TFileMerger, create the output file directly at the final destination instead of first making a local copy in the temp directory (if needed, one can always set the temporary destination to temp, followed by a TFile::Cp to the final destination); this avoids the reported problems with small temp partitions (see Forum).
• In XrdProofConn, enable cycling through the authentication protocols presented by the server. This only holds for the choice of the protocol, because the server currently supports only one full handshake.
• In test/stressProof.cxx, avoid interferences between the settings used for the PROOF tutorial and possible local settings (daemon, dataset manager).
• Move the validation of <proof.conf> at the moment of use; this allows to specify a file path and to dynamically create/modify/destroy the file; used by PoD.
• Improve displaying speed of large log files
• Fixes
• Fix two severe bugs in the way TTreeCache was used in PROOF: one bug was de facto deactivating the cache; the other was causing a std::bad_alloc exception to be thrown on workers when opening a remote file after a local one.
• Fix several problems in TChain::Draw including
• drawing into an existing histogram, i.e. chain->Draw("var>>myhist");
• treatment of histogram merging in case of small statistics, i.e. when
the autobinning is not or only partially active;
• usage of existing canvases when different histogram names are specified;
• Fix a problem causing a duplication of the final feedback object
• Fix problem with determining the subdir name in TFileMerger::MergeRecursive on Windows
• Make sure that the default sandbox is under $HOME/.proof
• Fix a problem with dataset validation in multi-level master setups
• Fix a problem with ordinal numbers in multi-master setups
• Fix a problem with defining the internal paths for executables when configuring with '--prefix'
• Fix a backward-incompatibility issue giving the error message "unknown action code: 5112"
• Fix a few problems with file retrieval from the cache
• Fix a problem with the iteration of a std::list occasionally causing seg-violations in TXSocket
• Fix a few problems preventing correct usage of entry lists in PROOF
• Fix a problem with the permissions of the credentials files created under <sandbox>/.creds
• Fix a potential problem while determining the log paths in log retrieval
• Do not use vsnprintf in the XrdProofd plug-in, a potential source of deadlocks.
• Fix a problem overwriting the local environment settings for the xrootd sec modules
• In XrdProofdProofServMgr::Destroy, fix a segv in message creation when all sessions are destroyed at once
• Fix a problem determining the relative time order of old sessions for log retrieval
• In TProof::HandleInputMessage, fix a possible double delete after kPROOF_STOPPROCESS
• Fix a couple of issues on reconnection to a running session (some dialog buttons not in the correct state; logs not correctly redirected)
• Fix a problem creating spurious warnings during 'draw' queries

### Histogram package

#### TPaletteAxis

• New method Int_t TPaletteAxis::GetValueColor(z) to return the color index of the given z value.
This function should be used after a histogram has been plotted with the option COL or COLZ, as in the following example:

   h2->Draw("COLZ");
   gPad->Update();
   TPaletteAxis *palette = (TPaletteAxis*)h2->GetListOfFunctions()->FindObject("palette");
   Int_t ci = palette->GetValueColor(30.);

Then it is possible to retrieve the RGB components in the following way:

   TColor *c = gROOT->GetColor(ci);
   float x,y,z;
   c->GetRGB(x,y,z);

This function is used by TPaletteAxis::GetBinColor().

#### TAxis

• Implement a new function, TAxis::GetBinCenterLog(Int_t bin), as suggested in issue #8263.

#### TGraph

• When adding an object to the list of functions of a TGraph, there was a crash at TGraph drawing time if the fitting option (gStyle->SetOptFit(1)) was on. This was reported in https://savannah.cern.ch/bugs/?46525 . The following macro reproduces the problem:

   {
      gStyle->SetOptFit(1);
      TGraph *gr = new TGraph(2);
      gr->SetPoint(0,1,1);
      gr->SetPoint(1,2,2);
      TLatex *l1 = new TLatex(gr->GetX()[0], gr->GetY()[0], "#1");
      gr->GetListOfFunctions()->Add(l1);
      gr->Draw("APL");
   }

• Fixed bug #45607 by creating a list of functions when using the TGraph default constructor.
• Fixed a bug when fitting TGraphErrors with zero error in y but non-zero error in x.
• In GetHistogram: if fHistogram exists, the log scale is on, the computed range minimum is > 0, and the fHistogram minimum is zero, then the fHistogram limits have been computed in linear scale and might therefore be too strict and cut some points. In that case the fHistogram limits should be recomputed, i.e. the existing fHistogram should not be returned. An example covering this case has been added in stressGraphics.
#### TH1

• Speed up TH1::GetStats, TH2::GetStats and TH3::GetStats in case the sum of weights is null and the number of entries is also null
• Optimize the way the function integral is computed in TH1::FillRandom
• Add new functions TH1::IsBinUnderflow(bin) and TH1::IsBinOverflow(bin) which use the global bin number.
• Add new functions Int_t TH1::FindFirstBinAbove(Double_t threshold, Int_t axis) and Int_t TH1::FindLastBinAbove(Double_t threshold, Int_t axis) which find the first (and last) bin with content above the given threshold. The same functions have been added in TH2 and TH3.
• Add a protection in TH1::Sumw2() to avoid calling GetBinContent when the histograms are empty.
• In TH1::Copy, reset temporarily the kCanRebin bit before calling SetBinContent.
• Fix bug #48649 in TH1::Add.
• Fix a bug when calling TH1::Sumw2() on a non-empty histogram with default sum2 (i.e. when TH1::GetDefaultSumw2() is true).
• Add a method, TH1::ResetStats(), to reset the internal statistics and force their re-calculation using the bin centers the first time they are needed
• Fix some problems with the statistics (in particular the number of entries) after some of the histogram operations

#### TH2

• Take into account in the projections of TH2 the axis range set by the user. This fixes the issue https://savannah.cern.ch/bugs/index.php?47946
• Add a new option, "o", in the projection methods TH2::ProjectionX, TH2::ProjectionY, TH2::ProfileX and TH2::ProfileY. When an axis range is set, using option "o" the original axis range of the target axis will be kept, but only the bins inside the selected range will be filled, while bins outside the range will be empty.

#### TH3

• Add an implementation of TH3::Interpolate using a tri-linear interpolation method
• Fix a bug in TH3::Project3D (https://savannah.cern.ch/bugs/?46432) for the error calculation in case of a weighted histogram (or when using option "E") and no axis range is set.
• In the projections to profiles, when Sumw2 is set, the projected errors are now correct thanks to the new TProfile data member.
• Add TH3::ProjectionX and TH3::ProjectionY to complement the already existing ProjectionZ. They are all implemented using the Project3D method.
• Re-implement the TH3::Project3D method using the internal methods DoProject1D and DoProject2D depending on the option. This new implementation is faster in case sub-ranges are selected, and fixes the issue https://savannah.cern.ch/bugs/index.php?45494
• A similar new implementation is done for TH3::ProjectProfile.
• Add the new option "o", as in TH2, for the histogram and profile projections.

#### TProfile, TProfile2D, TProfile3D

• Add a new data member (TArrayD fBinSumw2) for storing the sum of weights squared per bin. This is needed for correct error calculation in case of a profile filled with weights different from 1. The new structure is filled only when TProfile::Sumw2() is called or when TH1::SetDefaultSumw2() is set.
• Add a new internal class, TProfileHelper, providing a common implementation for all TProfile classes of complex methods like Add and Merge.
• Fix a bug in the TProfile::GetStats method.

#### THnSparse

• Fix a bug where the axes of a THnSparse created by THnSparse::Projection() would be filled wrongly if the axis's range was set.
• Fix a bug where the TAxis::kAxisRange bit was not reset for the new TH1/2/3 axes created by THnSparse::Projection(), if the original axis had a range and "A" was not given.
• Implement a new option "O" for Projection(): respect the range set for the target axis (i.e. only project bins that are in range) but create the target histogram with the full axis.
• Fix a bug in the multiplication of THnSparse.
• Fix a bug when creating a THnSparse with a given set of axes. Ensure that the first bin of the axis is >= 1.

#### THistPainter

• In case the errors of the fit parameters had large values (>E+07) the errors in the fit box looked weird.
The method GetBestFormat has been changed. The problem was visible with the following macro:

   {
      gStyle->SetOptFit(1111);
      h = new TH1F("h","h", 2,0.,1.);
      h->SetBinContent(1, 5E8);
      h->SetBinError(1, 4.9E8);
      h->Fit("pol0");
   }

• In THistPainter::PaintAxis, repainting (gPad->RedrawAxis()) an alphanumeric-label axis on a plot done with the option HBAR (horizontal) needed some adjustments. The following macro showed the problem; the axis labels were wrongly painted:

   {
      TCanvas* canvas = new TCanvas("Canvas", "Canvas", 0, 0, 1000, 500);
      canvas->Divide(2,1);
      THStack* stack = new THStack("Stack", "StackTitle");
      TH1F* hist1 = new TH1F("Hist1", "Title1", 1, 0, 100);
      TH1F* hist2 = new TH1F("Hist2", "Title2", 1, 0, 100);
      hist1->SetFillColor(kBlack);
      hist2->SetFillColor(kGray);
      for (int i = 0; i < 4; ++i) {
         char dataName[50];
         sprintf(dataName, "Data%d", i);
         hist1->Fill(dataName, 10 + 50*i);
         hist2->Fill(dataName, 145 - 40*i);
      }
      stack->Add(hist1);
      stack->Add(hist2);
      canvas->cd(1);
      stack->Draw("nostack,bar");
      canvas->cd(2);
      stack->Draw("nostack,hbar");
   }

• In THistPainter::PaintTriangles, if the option SAME is used, the view limits are taken from the current TView.

#### TGraphPainter

• The painting option "][" did not work if the frame line width (set with gStyle->SetFrameLineWidth()) was bigger than 1.
• The clipping in case of option "same" was not correct since the move from TGraph to TGraphPainter. The following small example showed the problem:

   {
      TH1F *h1 = new TH1F("h1", "h1", 100, -3., 3.);
      TH1F *h2 = new TH1F("h2", "h2", 100, -3., 3.);
      h1->FillRandom("gaus", 5000);
      h2->FillRandom("gaus", 4000);
      h1->SetMaximum(100);
      h1->Draw();
      h2->Draw("same");
   }

#### TUnfold

• Add a new version. A new class TUnfoldSys provides support for the propagation of systematic errors.
• Some bugs due to multiplication and addition of sparse matrices were also fixed.

#### Fitting Methods

• Introduce a better treatment of the step size used when fitting an object with a TF1.
The error provided by TF1, when not zero, is now used by default as the initial step size. When limits are set, an appropriate step size is used to prevent Minuit from going over the limits.
• Fix bug https://savannah.cern.ch/bugs/?45909 when fitting with bad range values (outside the histogram range).
• Detect the case when the data set is empty: no minimization is performed in this case; the fit exits and produces a warning message.
• Fix a bug when fitting histograms with option W and bin errors equal to 0.
• Fix a bug in the InitGaus function when having only one data point (see https://savannah.cern.ch/bugs/?48936).
• Fix a bug in calculating the error on the integral after a fit when fixed parameters were present.
• Fix a bug in calculating the confidence intervals when the number of bins of the given object is different from the number of bins of the fitted object.

#### FitPanel

• Add support for drawing the fit function confidence levels.
• Make gaus the default function when fitting 1D objects.
• Add the GSL minimizer and use a new widget for showing and selecting the list of available algorithms according to the minimizer.

### CINT

This version contains no major new features. CINT7 has seen considerable speed improvements; only bug fixes were incorporated in the other packages. CINT5 and CINT7 can now be configured independently; --enable-cint5 --disable-cint7 is the default. There is a new web site for ROOT's interpreters and dictionaries, for stand-alone CINT, and for Reflex.

• CINT5 and CINT7
• Improve the platform independence of paths.
• Fix the lookup of types nested in classes with default template parameters.
• .Lk macro.C will load the file macro.C only if it is not currently loaded. .L macro.C would unload it and all files that have been loaded later, and then reload it. Also implemented .xk macro.C.
• Fix for recursive loading (e.g. autoloading) of libraries during dictionary initialization.
• Fix parsing of negative values that are larger than int.
• Create the proper short name for templates with templated default arguments: a<b<c>,d=e> was shortened to a<b<c>> instead of a<b<c> >.
• Add missing complex<T> functions (thanks, Daniel Barna!).
• Improve the casting from (free) function pointer to void* in dictionaries using a union.
• Support __attribute__ in the parser (by ignoring it).
• Do not convert path names to lower case anymore.
• Support const static data members with inline initialization: class A { static const int i = 42; };
• CINT5
• The include files are moved back from include/cint/ to include/ for backward compatibility reasons. CINT7's headers remain in include/cint7.
• CINT7
• Major improvements in the CPU performance of CINT7; it is now much faster (> factor 5 for interpreting stress.cxx).
• Fixes for const correctness of CINT's code, especially for const char*.
• Added support for autoloading typedefs.
• Fix an issue with G__struct being initialized too late.
• Properly identify the library that a dictionary belongs to.
• Fix compiler warnings.
• Fix the dictionary for abs().
• Reduce memory usage by reimplementing some internal data structures.
• Reflex
• Full support for ClassDef() macros, with the full benefit of faster I/O due to direct access instead of a search in a map when writing out objects.
• Reimplement UpdateMembers() and PathToBase() in a backward compatible way. The member getters now support an enum argument (Reflex::INHERITEDMEMBERS_NO or Reflex::INHERITEDMEMBERS_ALSO) to request only the class's own members or also the inherited members. This makes it possible to avoid calls to UpdateMembers(). The default value for this argument to the member getters is Reflex::INHERITEDMEMBERS_DEFAULT, which behaves like Reflex::INHERITEDMEMBERS_NO until the first call to UpdateMembers().
• Fix visibility of UnionBuilder's symbols (GCC, MSVC).
• Fix for unnamed types in genreflex.
• genreflex got fixed for functions taking arrays or pointers thereof.
• Many improvements to the test suite, many new tests.
• Further improvements to genreflex support of TObject-derived classes / classes implementing the ClassDef macro.
• Complain if a class derives from TObject but does not use ClassDef.
• Fix constness and scope of the return type of shadows' final overrider.
• FunctionTypeBuilder does not delete existing types anymore.
• For derived types (array, typedef), delay the calculation of SizeOf() until the underlying type is known.
• Added a CMake macro REFLEX_ADD_DICTIONARY for the dictionary creation, to be used by external packages.
• Fix for unsigned long template parameters (A<unsigned long>, which GCCXML instantiates as e.g. A<12ul>); fixes an issue with boost::array.
• Added MSVC8 to genreflex's list of supported compilers; fix the build for MSVC7.1.
• Remove stray spaces after strings in rootmap files.
• Use qualified types for Reflex types instead of using namespace Reflex.
• Allow selection of genreflex classes and their members via a typedef-to-class.
• Implement a new genreflex option --gccxmlpost to postprocess a GCC_XML output file with genreflex, useful for debugging.
• Remove dictionaries for PropertyList methods that had been deprecated for years.
• Suppress dictionaries for unnamed enums.
• Satisfy python 2.6.2 and silence its warnings.
• Rename the GCC_XML output file to *_gccxmlout.xml, so it is easier to distinguish from the selection.xml file.
• Improve the initialization and shutdown of Reflex; remove the memory leaks due to the reflection containers not being cleared at the end of the process.
• Modified the behavior of Reflex dictionaries (namely ClassBuilder). Rather than unconditionally erasing existing information, a second ClassBuilder will either add new information or check that it is compatible with the existing information (throwing an exception in case of problems). To be able to override an existing definition, unload the class before calling ClassBuilder.
• Also create dictionaries for const static data members.
• Cintex
• Follow changes in ROOT's CollectionProxy interface.
• Don't cache Reflex::Member for functions' return types in the stubs; fixes an issue caused by duplicated dictionaries.
• Add a dictionary for basic_string<char> for backward compatibility.

### PyROOT

Null pointers now carry type, rather than being the None object, to make sure that correct overloads are selected. The memory policy is settable on individual functions, rather than only globally, through the _mempolicy data member that functions carry. In order to support PyPy analysis of PyROOT code, getter/setter methods have been added to the proxies. The pydoc tool already benefits from this, since PyROOT objects are now a bit easier to inspect by such standard tools. By short-circuiting some paths during class proxy creation, loading of the libPyROOT module is now faster.

### Math Libraries

#### MathCore

• Various fixes have been applied in the fitting classes:
• Fix issue #46006 for the normalization of the error resulting from fitting a TGraph.
• Fix a problem in the Chi2 calculation in case of overflow.
• Fix issue #46601 to avoid crashes when a linear fit fails.
• Fix in the FitData classes bug #45909, occurring when setting a function range outside the histogram range.
• Fix the default integration method to be the Gauss algorithm of MathCore instead of the GSL method, when libMathMore is not built or when the plug-in manager fails to load it.
• Add a protection against negative logarithms when fitting using the Poisson log-likelihood function.
• Improve the calculation of the derivative in x for the fitted function. This fixes some problems observed when fitting using the errors on the coordinates.
• Fitter class: add new methods for calculating the error matrix after minimization, Fitter::CalculateHessErrors(), and for calculating the Minos errors, Fitter::CalculateMinosErrors().
• FitConfig: add to the configuration the possibility to select a subset of the parameters for calculating the Minos errors, using the method FitConfig::SetMinosErrors(listOfParameters). If no list is passed, by default the Minos errors are computed for all parameters.
• UnBinData class: add a new constructor for creating an unbinned data set, passing a range to select the data to copy into the internal array.
• FitResult: the class now stores a map of the Minos errors using the parameter index as key. If the Minos error has not been calculated for a parameter, FitResult::LowerError(i) and FitResult::UpperError(i) return the parabolic error.
• Add a new class, MinimTransformFunction, to perform a transformation of the function object to deal with limited and fixed variables. This class uses the same transformations that are used inside Minuit: a sin transformation for doubly bounded variables and a sqrt transformation for singly bounded variables, defined in the class MinimizerVariableTransformation. These classes can be used by minimizers which do not support bounds internally (like the GSL minimizers).
• Add two new methods in the ROOT::Math::Minimizer class:
• int Minimizer::CovMatrixStatus(): returns the status of the covariance matrix. Implemented by Minuit and Minuit2, it follows the original Minuit code meaning: code = 0 (not calculated), 1 (approximated), 2 (matrix was made pos def), 3 (accurate).
• bool Hesse(): performs a full calculation of the Hessian matrix.
• TMath
• Fix a numerical problem in TMath::ErfcInverse for small input values. The normal quantile function is now used to implement it.

#### MathMore

• Fix 2 bugs in the quartic equation solver (see issue #49031).
• A protection has been added against numerical errors which could cause NaN due to wrong inputs to an acos function. This problem appears also in the GSL cubic solver; a patched GSL cubic function has therefore been added in MathMore.
• A wrong statement (coming from the original CERNLIB code but not applicable in this case) has been removed.
• Add support for limited and fixed variables for all the GSL minimizers ("GSLMultiMin"), including the simulated annealing ("GSLSimAn") and the non-linear least squares fit methods ("GSLMultiFit").

#### SMatrix

• Remove an unneeded check on the element value in the factorization routines used for inverting matrices (both for the LU and for the Bunch-Kaufman factorization). The check was preventing the inversion of matrices when some of the matrix elements (like the diagonal) were smaller than an epsilon value set to ~1E-15. This is not needed, since it is enough to check that the values are not zero (i.e. that the matrix is not singular). This bug was causing several failures in the CMS code when inverting matrices.
• Add the Cholesky decomposition method for symmetric positive definite matrices (thanks to Manuel Schiller). A new class, ROOT::Math::CholeskyDecomp, provides methods for decomposing or inverting a matrix and also for solving a linear system.
• New methods have also been added in SMatrix: bool SMatrix::InvertChol() and SMatrix & SMatrix::InverseChol(ifail) for the inversion of a symmetric positive definite matrix. Specialized implementations exist for matrices up to size 6x6. The speed is comparable to the Cramer method (SMatrix::InvertFast), but with much better accuracy. The new InvertChol method is in any case faster than the general inverter for all symmetric matrices (SMatrix::Invert), which uses the Bunch-Kaufman decomposition.
• Add also a new free function, ROOT::Math::SolveChol, for solving a symmetric linear system.
For users who need only the solution, this function avoids performing the inversion followed by a matrix multiplication.
• Add support in the SMatrix class for the operator m[i][j].
• Add to the dictionary the typedefs for some square and symmetric matrices based on double and float (up to size 7), defined in the files Math/SMatrixDfwd and Math/SMatrixFfwd.

#### Minuit

• Apply various improvements in the TMinuitMinimizer class thanks to the feedback of Alfio Lazzaro:
• implement Hesse() and CovMatrixStatus();
• add a new method based on SEEK. The Tolerance() value can be used to specify the volume (in units of sigma) for searching for the global minimum;
• fix some of the methods, like NCalls() and GlobalCC().

#### Minuit2

• Apply some fixes in the MnHesse and MnPosDef classes to correctly check that variables are not zero (use the same checks as in F77 Minuit).
• Fix a bug introduced in DavidonErrorCalculator when checking for delgam: negative values are allowed. This fixes a test problem given privately by A. Suter.
• Also use a tighter condition on the EDM when exiting the iterations (a factor of 5 smaller). This is more consistent with the conditions used by F77 Minuit.
• Fix a bug in MnCross in the standalone version of Minuit2 (when WARNINGMSG was not defined).
• Fix a bug in the sign of the derivative for the sine transformation used with doubly bounded parameters. The bug could affect the minimization of functions with a user-provided gradient and bound parameters. It could also affect Fumili2. Furthermore, a wrong sign in the correlation matrix could also have been obtained in some cases with bound parameters.
• Use a tolerance of 0.01 instead of 0.05 in MnContours. The value of 0.01 is the same used in Minos. This is sufficient to get good quality contours.
• Improve also the debugging in MnContours: add printing of the points as info messages.
• Remove some unnecessary assert() calls when defining the minimization parameters.
• Fix a bug in MnHesse to return the information whether the matrix was made pos def. In addition, change in MinimumError the condition so that when the matrix was made pos def the status of the error is still considered valid, not invalid as before. This also makes the function minimum valid when a matrix was declared pos def.
• Improvements in the Minuit2Minimizer class:
• implement the new methods defined in the base class: Hesse(), using MnHesse, and CovMatrixStatus();
• improve the switching off of the info messages according to the print level;
• define the variables passed with zero step size as constant (as is done in F77 Minuit).
• Fix a problem in building the parallel version of Minuit2. The parallel version is built if the environment variables USE_PARALLEL_MINUIT2 and USE_OPENMP are set before compiling Minuit2 on a compiler which supports OpenMP (for example gcc version >= 4.2).
• Add, thanks to Alfio Lazzaro, support for running Minuit2 in multi-process mode using MPI. A new class, MPIProcess, deals with starting and terminating the MPI processes. Each process calculates independently the derivatives for a given set of parameters. A Minuit2 library with MPI support can be built by defining the environment variables USE_PARALLEL_MINUIT2 and USE_MPI before compilation.

#### Unuran

Add constructors of the TUnuran distribution classes using function objects defined with the MathCore interfaces:

• TUnuranContDist(const ROOT::Math::IGenFunction & pdf, const ROOT::Math::IGenFunction * dpdf, bool isLogPdf);
• TUnuranMultiContDist(const ROOT::Math::IMultiGenFunction & pdf, bool isLogPdf);
• TUnuranDiscrDist(const ROOT::Math::IGenFunction & func);

#### TRolke

New version of TRolke from J. Lundberg.

• The interface of the class has been changed. The old user interface was very hard to use, and the documentation in the source was also not on par with the correct usage.
The old interface was a single get-function with 12 arguments, and the user was supposed to figure out which ~5 arguments were relevant for a specific model (1 out of 7 models). The new user interface is easy to use correctly and hard to use incorrectly (TM). It is a single set-method for each model:

```cpp
SetPoissonBkgBinomialEff(Int_t x,Int_t y,Int_t z,Double_t tau,Int_t m);
SetPoissonBkgGaussianEff(Int_t x,Int_t y,Double_t em, Double_t tau,Double_t sde);
SetGaussianBkgGaussianEff(Int_t x,Double_t bm,Double_t em, Double_t sde,Double_t sdb);
SetPoissonBkgknownEff(Int_t x,Int_t y,Double_t tau,Double_t e);
SetGaussianBkgknownEff(Int_t x,Double_t bm,Double_t sdb,Double_t e);
SetKnownBkgBinomialEff(Int_t x, Int_t z,Int_t m,Double_t b);
SetknownBkgGaussianEff(Int_t x,Double_t em,Double_t sde,Double_t b);
```

• New methods for getting the sensitivity (average limits) and related quantities, and for the critical number related to rejection of the null hypothesis (no signal).
• Some small bug fixes: some variables were used uninitialized (e.g. input arguments which were not supposed to be used were used anyway).

### RooFit

This release of ROOT contains RooFit version 3.00. A summary of the new features is listed below.

#### RooFit Web documentation moved to ROOT web site

The starting point for online RooFit documentation (Users Manual, tutorials, slides etc.) has moved to the ROOT website: http://root.cern.ch/drupal/content/roofit

#### Error visualization

It is now possible to visualize the effect of the uncertainties on the parameters from a fit on any p.d.f. or function projection. To do so, use the new VisualizeError() argument in a plotOn() call:

```cpp
RooFitResult* fr = pdf->fitTo(*data,Save(),...) ;
pdf->plotOn(frame,VisualizeError(*fr),...) ;
```

Two techniques for error visualization are implemented. The default is linear error propagation, which results in an error band that is by construction symmetric.
The linear error is calculated as

```
error(x) = Z * F_a(x) * Corr(a,a') * F_a'(x)

where F_a(x)    = [ f(x,a+da) - f(x,a-da) ] / 2
      f(x)       = the plotted curve
      da         = error taken from the fit result
      Corr(a,a') = the correlation matrix from the fit result
      Z          = requested significance ('Z sigma band')
```

The linear method is fast (it requires 2*N evaluations of the curve, where N is the number of parameters), but may not be accurate in the presence of strong correlations (~>0.9) and at Z>2, due to the linear and Gaussian approximations made.

Alternatively, errors can be visualized using a sampling method. In this method a number of curves is calculated with variations of the parameter values, sampled from a multivariate Gaussian p.d.f. constructed from the fit result's covariance matrix. The error(x) is determined by calculating a central interval that captures N% of the variations for each value of x, where N% is controlled by Z (i.e. Z=1 gives N=68%). The number of sampling curves is chosen such that at least 100 curves are expected to be outside the N% interval. Intervals from the sampling method can be asymmetric, and may perform better in the presence of strong correlations, but may take (much) longer to calculate. The sampling method also assumes that the uncertainty on the parameters can be modeled by a multivariate Gaussian distribution.

A complete example is provided in the new tutorial macro rf610_visualerror.C. It is also possible to visualize partial errors (from a subset of the parameters).

#### Binned dataset generation

A new method, RooAbsPdf::generateBinned(), has been implemented that samples binned datasets (RooDataHist) from any p.d.f.

```cpp
RooDataHist* data = pdf.generateBinned(x,10000) ;
```

This binned generation interface samples the p.d.f. at each bin center and applies a Poisson fluctuation to each sampled value.
The binning of the returned RooDataHist is controlled by the default binning associated with the observables generated. To set the number of bins in x to 200, do e.g. x.setBins(200) prior to the call to generateBinned(). The binned dataset generation method does not (yet) support the concept of prototype datasets.

#### New minimizer interface to Minuit2, GSLMinimizer etc.

A new minimizer interface, RooMinimizer, has been added (contribution from Alfio Lazzaro). The new class is similar in functionality to the existing class RooMinuit, but supports the new ROOT abstract minimizer interface and, through it, multiple minimizer packages and algorithms. The present interface of RooMinimizer is identical to that of RooMinuit, with two extensions:

• The setMinimizer(const char*) method allows choosing between "minuit" and "minuit2" as the implementation for migrad(), hesse(), minos() etc.
• The minimizer(const char* package, const char* alg) method provides a completely generic interface to all minimizers, where package is the package (minuit, GSLminimizer) and alg is the algorithm (migrad) to be used.

By default, RooMinuit is still used when RooAbsPdf::fitTo() is called, but this can be overridden with a Minimizer() named argument:

```cpp
// Minimization with MINUIT/MIGRAD through RooMinuit
pdf->fitTo(data) ;
// Minimization with MINUIT/MIGRAD through RooMinimizer
pdf->fitTo(data,Minimizer("minuit")) ;
// Minimization with MINUIT2/MIGRAD through RooMinimizer
pdf->fitTo(data,Minimizer("minuit2")) ;
// Minimization with GSLMultiMin/conjugatefr through RooMinimizer
pdf->fitTo(data,Minimizer("GSLMultiMin","conjugatefr")) ;
```

Note that installation of GSL and the ROOT MathMore package is needed to access the GSL minimizers, and that the GSL minimizers do not implement error analysis.

#### New numeric integration algorithms available

RooFit can now interface all MathCore numeric integration algorithms.
In this release ROOT::Math::AdaptiveIntegratorMultiDim, which implements the 'Genz & Malik' algorithm, has been interfaced in RooAdaptiveIntegratorND and is now the default numeric integrator for numeric integrations in two or more dimensions. This new default integrator has much improved stability and speed for relatively smooth p.d.f.s in two or three dimensions, and can generally be used for p.d.f. normalization integrals without causing MINUIT convergence problems due to numeric precision issues. In future releases more numeric integrators will be migrated to a MathCore implementation.

#### Interface to TFoam adaptive MC sampler added

RooFit can now use the TFoam adaptive MC sampler for event generation of p.d.f.s that do not have an internal generator. The TFoam generator adaptively subdivides the observable space and is generally more efficient, in both warmup and generation, than the original RooAcceptReject algorithm. In its current interface in RooFit, TFoam cannot yet handle problems with discrete observables or conditional observables. For those problems the original RooAcceptReject generator is still used.

The choice of MC sampling algorithm can be steered through the class RooNumGenConfig, which is similar in style and structure to RooNumIntConfig, which configures the choice of numeric integration algorithm. A new tutorial macro rf902_numgenconfig.C has been added to $ROOTSYS/tutorials/roofit to illustrate the use of the steering.

A macro that demonstrates the power of these newly interfaced numeric algorithms is provided at the end of the RooFit section of the release notes.

#### Optional persistent caching of numeric integrals

For p.d.f.s with numeric integrals that remain difficult or very time consuming, a new persistent caching technique is now available that allows these integrals to be precalculated and their values stored for future use. This technique works transparently for any p.d.f. stored in a RooWorkspace.

One can store numeric integral values for problems with zero, one or two floating parameters. In the first case, the value is simply stored. In cases with one or two floating parameters a grid (histogram) of integral values is stored, which is interpolated to return the integral value for each value of the parameters.

A new tutorial macro rf903_numintcache.C has been added to $ROOTSYS/tutorials/roofit to illustrate the use of this feature.

#### Representation of function and p.d.f. derivatives

A new class has been added that can represent the derivative of any p.d.f. or function w.r.t. any parameter or observable. To construct e.g. a first order derivative of a Gaussian p.d.f., do

```cpp
RooAbsReal* dgdx = gauss.derivative(x,1) ;
```

A more complete example is available in the new tutorial macro rf111_derivatives.C.

#### Improved handling of chi-squared fits

Chi-squared fits can now be performed through the same style of interface as likelihood fits, through the newly added method RooAbsReal::chi2FitTo(const RooDataHist&,...). Functions that can be fitted with chi-squared minimization are any RooAbsReal-based function as well as RooAbsPdf-based p.d.f.s. In case of non-extended p.d.f.s the probability density calculated by the p.d.f. is multiplied by the number of events in the histogram to adjust the scale of the function. In case of extended p.d.f.s, the adjustment is made with the expected number of events, rather than the observed number of events. Tutorial macro rf602_chi2fit.C has been updated to use this new interface.

#### Chi-squared fits to X-Y datasets now possible

In addition to the ability to perform chi-squared fits to histograms, it is now also possible to perform chi-squared fits to unbinned datasets containing a series of X and Y values with associated errors on Y and optionally on X. These 'X-Y' chi-squared fits are interfaced through the newly added method RooAbsReal::chi2FitTo(const RooDataSet&,...). By default the event weight is interpreted as the 'Y' value, but a YVar() argument can designate any other dataset column as the Y value.
If X errors are defined, one can choose to integrate the fitted function over the range of the X errors, rather than taking the central value, by adding an Integrate(kTRUE) argument to chi2FitTo(). Two new arguments, StoreError(const RooArgSet&) and StoreAsymError(const RooArgSet&), have been added to the RooDataSet constructor to simplify the process of storing the errors of X and Y variables along with their values in a dataset. The newly added tutorial macro rf609_xychi2fit.C illustrates the use of all this new functionality.

#### Uniform interface for creation of profile likelihoods and chi-squared from p.d.f.s

It is now recommended to use the method RooAbsPdf::createNLL(RooAbsData&,...) to create a likelihood from a p.d.f. and a dataset, rather than constructing a RooNLLVar object directly, because part of the likelihood construction functionality, such as using multiple Range()s or the inclusion of constraint terms, is only available through createNLL(). To promote the consistency of this interface, a similar method, RooAbsReal::createChi2(), has been added to construct chi-squared functions of a dataset and a function or p.d.f. Along the same lines, it is recommended to use RooAbsReal::createProfile() rather than constructing a RooProfileLL object directly, as the former will efficiently recast a profile of a profile into a single profile object.

#### Multivariate Gaussian modeling of parameter estimates from a fit

You can now construct a multivariate Gaussian p.d.f. on the parameters of a model, representing the result of a fit, from any RooFitResult object:

```cpp
RooAbsPdf* paramPdf = fitresult->createHessePdf(RooArgSet(a,b)) ;
```

The returned object is an instance of the newly added class RooMultiVarGaussian, which can model correlated Gaussian distributions in an arbitrary number of dimensions, given a vector of mean values and a covariance matrix.
Class RooMultiVarGaussian implements analytical integration as well as analytical partial integrals over the first 31 dimensions (if you have that many), and implements an efficient internal generation strategy for its observables. A new tutorial macro rf608_fitresultaspdf.C has been added to illustrate the use of MV Gaussians constructed from a RooFitResult.

#### Improved functionality of RooFFTConvPdf

The FFT convolution operator p.d.f. class RooFFTConvPdf has been substantially upgraded for improved performance and has several new options:

• For the overflow buffering, which aims to reduce cyclical spillover from the FFT convolution, a choice of three algorithms is now provided:
1. Extend the p.d.f. somewhat beyond its original domain (the new default).
2. Fill the buffer 50/50 with the value of the p.d.f. at the upper/lower bound of the convolution observable (the previous default).
3. Mirror the p.d.f. over the boundary.
The new default algorithm provides a more sensible result for p.d.f.s with significant spillover issues, provided that the p.d.f. can be continued beyond its original domain.
• Convolution in non-observables is also explicitly supported now. One can e.g. construct a p.d.f. of the form G(x) = Int[dy] ( F(x,y) (*) H(y) ). A new tutorial macro, rf211_paramconv, illustrates how such convolutions can be constructed.
• It is now also possible to express FFT convolutions in terms of observables other than the convolution observable itself. A common occurrence of that situation is a (circular) convolution in a polar angle theta, for a p.d.f. that is ultimately expressed in terms of cos(theta). A new tutorial macro, rf210_angularconv, illustrates how to construct convolutions of an angular observable with or without an optional cosine transformation for the final observable.
#### Option for improved calculation of errors in weighted likelihood fits

A new option, SumW2Error(), has been added to RooAbsPdf::fitTo() that performs an improved error calculation for weighted unbinned likelihood fits. In its unmodified form, an ML fit to a weighted dataset will correctly estimate the parameters, but the errors will scale with the sum of the weights, rather than the number of events in the dataset (i.e. if you double all event weights, all parameter errors go down by sqrt(2)). In chi-squared fits event weights can be processed correctly by using both the sum of the weights and the sum of the weights squared for each bin. The newly added option SumW2Error() implements a similar strategy for (unbinned) weighted ML fits by applying a correction to the covariance matrix as follows:

```
V' = V C^-1 V
```

where V is the covariance matrix from the fit to weighted data, and C^-1 is the inverse of the covariance matrix calculated from a similar likelihood constructed with the event weights applied squared.

#### Redesign of RooFit dataset class structure

The original class structure of RooFit featured an abstract dataset class, RooAbsData. Inheriting from it was a single class, RooTreeData, which implemented datasets with a ROOT TTree-based storage implementation, and inheriting from that were two classes: RooDataSet, representing unbinned data, and RooDataHist, representing binned data. A main problem with this structure was that the implementation of the storage technology (TTree) and the data representation (binned vs unbinned) were intertwined. Starting with version 3.00 the class structure has been rearranged: classes RooDataSet and RooDataHist now inherit directly from RooAbsData, and RooAbsData owns an object inheriting from RooAbsDataStore that implements the storage of the data.
This new class structure allows multiple data storage implementations to be applied efficiently to both RooDataSet and RooDataHist. At present a single implementation of RooAbsDataStore exists, class RooTreeDataStore, which contains the storage implementation formerly implemented in class RooTreeData. Methods in class RooTreeData that were not specific to the storage technology have been moved to class RooAbsData.

If your user code only uses the classes RooDataSet, RooDataHist and RooAbsData, nothing will change: existing RooDataSets and RooDataHists (that inherit from RooTreeData) can be read in without problems in RooFit 3.00 and are converted on the fly to the new dataset structure in memory. User code that explicitly uses RooTreeData pointers should be changed to use RooAbsData pointers. This change should be transparent for all uses, with the exception of the RooTreeData::tree() method. Explicit access to the tree implementation can still be obtained through the RooTreeDataStore::tree() method. (A pointer to the datastore can be obtained through the RooAbsData::store() method.) Note that in future releases it is no longer guaranteed that all datasets are implemented with a plain TTree, so any user code that uses the tree implementation directly should check that the implementation is indeed tree-based (data->store()->InheritsFrom(RooTreeDataStore::Class())==kTRUE).

In future releases additional implementations of RooAbsDataStore will be provided that will support new dataset functionality, such as the ability to construct a 'joint' dataset from two input datasets without the need to copy the input data, and 'filtered' datasets that represent a reduced view (in dimensions or by selecting events) of a dataset without the need to copy content.

#### Various workspace improvements

A number of smaller and larger improvements has been made to the RooWorkspace class.
• Direct interactive access to contents from CINT -- One can now directly access the contents of any RooWorkspace on the ROOT command line through CINT if the RooWorkspace::exportToCint() call is made. In CINT, all workspace objects will appear as correctly typed references to workspace objects in a C++ namespace with the same name as the RooWorkspace object. Given e.g. a workspace w, with a Gaussian p.d.f. gauss in terms of variables x,m,s one can now do

      RooWorkspace w("w",kTRUE) ; // workspace with CINT interface activated
      // ... fill workspace with RooGaussian gauss(x,m,s) ...
      RooPlot* frame = w::x.frame() ;
      w::gauss.plotOn(frame) ;

to access the workspace contents. Each reference has the correct type, e.g. w::gauss is a RooGaussian&. If a workspace is deleted from memory, the corresponding CINT namespace is removed as well. Note that this feature is strictly available in interpreted C++ only. A new tutorial macro has been added to illustrate this functionality in more detail: rf509_wsinteractive.C.
• writeToFile -- A new utility method RooWorkspace::writeToFile() has been added to simplify the process of saving a workspace to file.
• Named sets and parameter snapshots -- It is now possible to define and retrieve named RooArgSets of objects that live in the workspace through methods defineSet() and set(). While named sets merely group objects logically, methods loadSnapshot() and saveSnapshot() allow making copies of the values, errors and 'constant' status of sets of variable objects that live in the workspace. A newly added tutorial macro rf510_namedsets.C illustrates the functionality of both of these features.
• Improved printing of contents -- Many operator p.d.f. and function components now show a more intuitive natural representation of their contents (these changes are mostly in the respective p.d.f.s, but are most relevant in the context of a workspace).
#### New object factory interface to workspace to facilitate script driven model definition

An object factory has been added to RooFit to simplify the process of creating p.d.f. and function expressions consisting of multiple objects. The factory has two goals: the first is to provide a back-end for higher level factories and tools to process the creation of objects. The second is to provide a simple end-user language to populate a RooWorkspace with function and p.d.f. objects. For the latter purpose the object creation language is executed through the factory() method of a workspace object:

      RooWorkspace w("w") ;
      RooAbsArg* arg = w.factory("expression_goes_here") ;

Basic Syntax

The rules at their simplest level are as follows:

• Expressions with square brackets create variables (discrete and continuous)

      "m[-10,10]" -- Creates a RooRealVar named 'm' with range [-10,10]
      "m[5,-10,10]" -- Idem, but with initial value 5
      "m[5]" -- Creates a constant RooRealVar with name 'm' and value 5
      "tagCat[Lep,Kao,NT1,NT2]" -- Creates a RooCategory with name tagCat and labeled states Lep,Kao,NT1,NT2
      "b0flav[B0=1,B0bar=-1]" -- Creates a RooCategory with name b0flav and states B0 and B0bar with explicit index assignments

• Expressions with parentheses create RooAbsArg function objects of any type

      "RooGaussian::g(x,m,s)" -- Creates a RooGaussian named g with variables x,m,s. This expression maps 1-1 to a createArg() call
      "Gaussian::g(x,m,s)" -- Idem. The 'Roo' prefix on any class may be omitted
      "Gaussian(x,m,s)" -- Creates a RooGaussian with an automatically assigned name with variables x,m,s

• Expressions with curly brackets create RooArgSets or RooArgLists

      "{x,y,z}"

Compound expressions

The real power of this language is that all these expressions may be nested to result in a compact and readable expression that creates an entire p.d.f.
and its components:

      "Gaussian::g(x[-10,10],m[-10,10],3)"

Creates a RooGaussian named 'g', its observable 'x' with range [-10,10], its parameter 'm' with range [-10,10] and a constant width of 3.

      "SUM::model( f[0.5,0,1] * Gaussian( x[-10,10], m[0], 3 ), Chebychev( x, {a0[0.1],a1[0.2],a2[-0.3]}))"

Creates a RooAddPdf model of a RooGaussian and a RooChebychev (which are implicitly named model_0 and model_1), its observable x and its parameters f, m, a0, a1 and a2. Note that each object may be created only once (with [] or () brackets) but may be referenced multiple times in the expression by just giving its name. Here is a much more complicated example:

      "PROD::sig(BMixDecay::sig_t( dt[-20,20], mixState[mixed=1,unmix=-1],
                 tagFlav[B0=1,B0bar=-1], tau[1.54], dm[0.472], w[0.05], dw[0],
                 AddModel({GaussModel(dt,biasC[-10,10],sigmaC[0.1,3],dterr[0.01,0.2]),
                           GaussModel(dt,0,sigmaT[3,10]),
                           GaussModel(dt,0,20)},{fracC[0,1],fracT[0,1]}),
                 DoubleSided ),
                 Gaussian::sig_m( mes[5.20,5.30], mB0[5.20,5.30], sigmB0[0.01,0.05] ))"

This creates a double-sided B mixing decay p.d.f. with observables dt, per-event error dterr and all its parameters, convoluted with a triple Gaussian resolution model and multiplied with a Gaussian p.d.f. in the energy substituted mass. (In plain RooFit this would have required at least 23 lines of code.) A series of three new tutorial macros has been added to illustrate the various features of the object factory:

• rf511_wsfactory_basic.C - Basic factory concepts
• rf512_wsfactory_oper.C - Using operator p.d.f.s in the factory
• rf513_wsfactory_tools.C - Advanced example using interfaced high level tools

A formal transaction model is used to commit composite objects into the workspace. If an error is detected in the expression, no objects will be committed to the workspace, thus leaving no 'partial builds'.
#### Compact demo of several new major features

The macro below demonstrates in a couple of lines a number of major new features in RooFit 3.00:

• workspace factory to quickly create and store (compiled) models
• workspace CINT interface to easily access contents in a typesafe way
• new adaptive ND numeric integration technique to normalize arbitrary p.d.f.s in a fast and reliable way
• new adaptive TFoam sampling technique to efficiently generate toy MC data from strongly peaked distributions
• parallel processing in likelihood construction and use of the profile likelihood operator to represent profile likelihoods as regular RooFit functions

      void demo() {
        // Construct compiled 2-D model that requires numeric integration for normalization
        RooWorkspace w("w",1) ;
        w.factory("CEXPR::model('1/((x-a)*(x-a)+0.001)+1/((y-b)*(y-b)+0.001)',x[-1,1],y[-1,1],a[-5,5],b[-5,5])") ;

        // Generate data from model (using TFoam adaptive sampling algorithm)
        RooDataSet* d = w::model.generate(RooArgSet(w::x,w::y),1000) ;
        w::model.fitTo(*d) ;

        // Make 2D plot on (x,y)
        TH2* hh = w::model.createHistogram("x,y",40,40) ;
        hh->SetLineColor(kBlue) ;

        // Make projection on x (integrate over y)
        RooPlot* framex = w::x.frame(Title("Data and p.d.f. projected on X")) ;
        d->plotOn(framex) ;
        w::model.plotOn(framex) ;

        // Construct likelihood, profile likelihood in a, and draw the latter
        RooAbsReal* nll = w::model.createNLL(*d,NumCPU(2)) ;
        RooAbsReal* pll = nll->createProfile(w::a) ;
        RooPlot* framea = w::a.frame(Title("Profile likelihood in parameter a")) ;
        pll->plotOn(framea) ;

        // Construct 2D cumulative distribution function from p.d.f.
        RooAbsReal* cdfxy = w::model.createCdf(RooArgSet(w::x,w::y),ScanNoCdf()) ;
        TH2* hhcdf = cdfxy->createHistogram("x,y",40,40) ;
        hhcdf->SetLineColor(kRed) ;

        TCanvas* c = new TCanvas("c","c",650,650) ;
        c->Divide(2,2) ;
        c->cd(1) ; hh->Draw("surf") ;
        c->cd(2) ; framex->Draw() ;
        c->cd(3) ; framea->Draw() ;
        c->cd(4) ; hhcdf->Draw("surf") ;
      }

(Plot resulting from the above macro.)

#### Miscellaneous small improvements

• Utility functions bindFunction() and bindPdf(), which can bind external C++ functions as RooFit functions or p.d.f.s, can now also take ROOT::Math::Functor objects as input arguments, which allows the binding of member functions of classes.
• By default datasets are no longer cloned for fit operations. This change should save some time and memory, especially in toy MC studies where the overhead in setting up the likelihood can dominate the total time spent in fitting. The data cloning behavior of RooAbsPdf::fitTo() and RooAbsPdf::createNLL() can be explicitly set through the CloneData() named argument.
• It is now possible to construct a RooSimultaneous p.d.f. from other RooSimultaneous p.d.f.s, provided the constructor form is used that takes all input p.d.f.s. In this constructor, simultaneous-of-simultaneous p.d.f.s are automatically recast into an equivalent top-level simultaneous p.d.f.
• Several improvements were made in the internal handling of datasets that will speed up certain data intensive operations.

### RooStats

#### New Tutorials

Several new tutorials were added for RooStats:

• rs101_limitexample.C - Demonstrates use of Frequentist, Bayesian, and Likelihood intervals for a simple number counting experiment with uncertainty on signal and background rates.
• rs301_splot.C - Demonstrates use of the RooStats sPlot implementation.
• rs401c_FeldmanCousins.C - Demonstrates use of the FeldmanCousins interval calculator with a Poisson problem; reproduces results from tables IV and V of the original paper Phys.Rev.D57:3873-3889,1998.
• rs401d_FeldmanCousins.C - Demonstrates use of the FeldmanCousins interval calculator with the neutrino oscillation toy example described in the original paper Phys.Rev.D57:3873-3889,1998. Reproduces figure 12.
• rs_bernsteinCorrection.C - Demonstrates use of the BernsteinCorrection class, which corrects a nominal PDF with a polynomial to agree with observed or simulated data.

#### TestStatistic interface and implementations

We added a new interface class called TestStatistic. It defines the method Evaluate(data, parameterPoint), which returns a double. This class can be used in conjunction with the ToyMCSampler class to generate sampling distributions for a user-defined test statistic. The following concrete implementations of the TestStatistic interface are currently available:

• ProfileLikelihoodTestStat - Returns the log of the profile likelihood ratio. Generally a powerful test statistic.
• NumEventsTestStat - Returns the number of events in the dataset. Useful for number counting experiments.
• DebuggingTestStat - Simply returns a uniform random number between 0 and 1. Useful for debugging.

#### SamplingDistribution and the TestStatSampler interface and implementations

We introduced a 'result' or data model class called SamplingDistribution, which holds the sampling distribution of an arbitrary real valued test statistic. The class also can return the inverse of the cumulative distribution function (with or without interpolation). We introduced an interface for any tool that can produce a SamplingDistribution, called TestStatSampler. The interface is essentially GetSamplingDistribution(parameterPoint), which returns a SamplingDistribution based on a given probability density function.
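The shape of the TestStatistic interface can be sketched structurally in plain C++. This is a hypothetical, simplified analogue for illustration only: the real RooStats classes take RooFit datasets and parameter points, not std::vector.

```cpp
#include <cassert>
#include <vector>

// Structural sketch of the RooStats TestStatistic idea:
// any test statistic is a rule mapping (data, parameterPoint) -> double.
struct TestStatistic {
    virtual ~TestStatistic() {}
    virtual double Evaluate(const std::vector<double>& data,
                            const std::vector<double>& parameterPoint) = 0;
};

// Analogue of NumEventsTestStat: the statistic is simply the event count,
// independent of the parameter point (useful for number counting).
struct NumEventsTestStat : TestStatistic {
    double Evaluate(const std::vector<double>& data,
                    const std::vector<double>&) override {
        return static_cast<double>(data.size());
    }
};
```

A sampler in this picture would repeatedly generate toy datasets and call Evaluate() on each to build up a sampling distribution.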
We foresee a few versions of this tool based on toy Monte Carlo, importance sampling, Fourier transforms, etc. The following concrete implementations of the TestStatSampler interface are currently available:

• ToyMCSampler - Uses a toy Monte Carlo approach to build the sampling distribution. The pdf's generate() method is used to generate toy data, and then the test statistic is evaluated at the requested parameter point.
• DebuggingSampler - Simply returns a uniform distribution between 0 and 1. Useful for debugging.

#### NeymanConstruction and FeldmanCousins

A flexible framework for the Neyman construction was added in this release. The NeymanConstruction is a concrete implementation of the IntervalCalculator interface, but it needs several additional components to be specified before use. The design factorizes the choice of the parameter points to be tested, the choice of the test statistic, and the generation of the sampling distribution into separate parts (described above). Finally, the NeymanConstruction class is simply in charge of using these parts (strategies) and constructing the confidence belt and confidence intervals. The ConfidenceBelt class is still under development, but the current version works fine for producing confidence intervals. We are also working to make this class work with parallelization approaches, which is not yet complete. The FeldmanCousins class is a separate concrete implementation of the IntervalCalculator interface. It uses the NeymanConstruction internally, and enforces specific choices of the test statistic and ordering principle to realize the unified intervals described by Feldman and Cousins in their paper Phys.Rev.D57:3873-3889,1998. In an extension to the technique discussed in the Feldman and Cousins paper, the FeldmanCousins class also performs a "profile construction" if there are nuisance parameters. In this case, the parameters of interest are scanned in a regular grid.
For each point in the grid the calculator finds the best fit value of the nuisance parameters (given the data). The construction is then only performed in this subspace of the parameters. As a result, the number of points in the construction only scales with the number of parameters of interest, not with the number of nuisance parameters.

#### Markov Chain Monte Carlo Interval

A flexible framework for Markov chain Monte Carlo was added in this release. The MCMCCalculator is a concrete implementation of the IntervalCalculator interface. To use it one needs to specify the ProposalFunction. There is a base class for ProposalFunctions and one concrete implementation: UniformProposal. Support for other proposal functions will be added in the next release. The MCMCCalculator scans the space of the parameters of interest and nuisance parameters and produces a Bayesian posterior. In this version, the prior must be added to the model initially, otherwise a flat prior is assumed. The MCMCCalculator returns an MCMCInterval, which produces the smallest interval by taking a contour of the posterior. This first version only supports 1-, 2-, and 3-dimensional intervals, but will be generalized in the next release. In addition to the MCMC implementation in RooStats, one can export a model and dataset into a workspace, and then use the Bayesian Analysis Toolkit (BAT) for the MCMC; a wrapper is available.

#### Redesigned SPlot class

The RooStats SPlot implementation works with any RooAbsPdf. The class has been redesigned for more convenient use. It also adds some helper functions to check that the sum of sWeights over species is 1 for each event, and that the sum over events for a given species equals the yield for that species.

#### Plotting classes

We have added new plotting classes: SamplingDistPlot and LikelihoodIntervalPlot. In 1-d, LikelihoodIntervalPlot shows the profile likelihood ratio and the upper/lower limits of the interval for the parameter of interest.
In 2-d, the LikelihoodIntervalPlot shows the contour of the profile likelihood ratio for the parameters of interest.

#### Bernstein Correction

BernsteinCorrection is a utility in RooStats to augment a nominal PDF with a polynomial correction term. This is useful for incorporating systematic variations into the nominal PDF. The Bernstein basis polynomials are particularly appropriate because they are positive definite. This tool was inspired by the work of Glen Cowan together with Stephan Horner, Sascha Caron, Eilam Gross, and others. The initial implementation is independent work. The major step forward in the approach was to provide a well defined algorithm that specifies the order of polynomial to be included in the correction. This is an empirical algorithm, so in addition to the nominal model it needs either a real data set or a simulated one. In the early work, the nominal model was taken to be a histogram from Monte Carlo simulations, but in this implementation it is generalized to an arbitrary PDF (which includes a RooHistPdf). The algorithm basically consists of a hypothesis test of an n-th order correction (null) against an (n+1)-th order correction (alternate). The quantity q = -2 log LR is used to determine whether the (n+1)-th order correction is a major improvement over the n-th order correction. The distribution of q is expected to be roughly \chi^2 with one degree of freedom if the n-th order correction is a good model for the data. Thus, one only moves to the (n+1)-th order correction if q is relatively large. The chance that one moves from the n-th to the (n+1)-th order correction when the n-th order correction is already sufficient (i.e. a type I error) is given by Prob(\chi^2_1 > threshold). The constructor of this class allows you to directly set this tolerance (in terms of the probability that the (n+1)-th term is added unnecessarily).

#### HybridCalculator

Added the profile likelihood ratio as a new test statistic.
It will be redesigned to use TestStatSampler and TestStatistic in the next release.

### TMVA

TMVA version 4.0.1 is included in this ROOT release.

#### Main changes and new features introduced with TMVA 4

• Reorganisation of internal data handling and constructors of methods to allow building arbitrary composite MVA methods, and to deal with multi-class classification and multi-target regression
• Extended TMVA to multivariate multi-target regression
• Any TMVA method can now be boosted (linearly or non-linearly)
• Transformations of input variables can be chained as desired
• Weight files are now in XML format
• New MVA methods "PDE-Foam" and "LD", both featuring classification and regression

##### Comments

On the XML format: the old text format is obsolete though still readable in the application. Backward compatibility is NOT guaranteed. Please contact the authors if you require the reading of old text weight files in TMVA 4.

Standard macros: The structure of the standard macros has changed: macros are still in the "$ROOTSYS/tmva/test" directory, but are now separated into classification and regression examples:

      TMVAClassification.C, TMVAClassificationApplication.C
      TMVARegression.C, TMVARegressionApplication.C
The results of classification and regression training are analysed as usual via standard macros that can be called from dedicated GUIs.

Regression:

• Not yet available for all MVA methods. It exists for: PDE-RS, PDE-Foam, K-NN, LD, FDA, MLP, BDT for single targets (1D), and MLP for multiple targets (nD).
• Not all transformations of input variables are available (only "Norm" so far).
• Regression requires specific evaluation tools:
• During the training we provide a ranking of input variables, using various criteria: correlations, transposed correlation, correlation ratio, and "mutual information" between input variables and regression target. (Correlation ratio and mutual information implementations provided by Moritz Backes, Geneva U)
• After the training, the trained MVA methods are ranked with respect to the deviations between regression target and estimate.
• Macros plot various deviation and correlation quantities. A new GUI (macros/TMVARegGui.C) collects these macros.
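The deviation-based ranking mentioned above can be sketched with a small helper (hypothetical name, assuming the deviation is measured as an RMS over events; TMVA's exact figure of merit may differ):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Root-mean-square deviation between regression targets and a method's
// estimates; a smaller value ranks the method higher.
double rmsDeviation(const std::vector<double>& target,
                    const std::vector<double>& estimate) {
    double sum = 0;
    for (std::size_t i = 0; i < target.size(); ++i) {
        double d = target[i] - estimate[i];
        sum += d * d;
    }
    return std::sqrt(sum / target.size());
}
```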

#### Improvements of / new features for MVA methods

• Linear Discriminant: Re-implementation of "Fisher" method as general linear discriminant ("LD"), which is also regression capable (so far: single-target only)
• PDEFoam: PDE-Foam is a variation of the PDE-RS method using a self-adapting binning method to divide the multi-dimensional variable space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells such that the variance of the signal and background densities inside the cells reaches a minimum.
• BDT: Introduced gradient boosting and stochastic gradient boosting for classification with BDT (as described by Friedman 1999). See the "BDTG" example in TMVAClassification.C/cxx. A new option allows restricting the maximum tree depth. This may be used to avoid overtraining and often gives better performance than pruning. (The pruning mechanism needs to be revisited)
• MLP: Introduced recognition of convergence via a general ConvergenceTest class for interrupting computations when convergence is reached. This feature is now used in MethodMLP. Improved treatment of event weights in BFGS training. Implemented random and importance sampling of events in DataSet, and implemented the usage of this feature for MLP.
• TMlpANN (interface to TMultiLayerPerceptron) now also uses event weights and writes a standalone C++ class.
• k-NN: A new global knn search function has been added to NodekNN that searches for the k nearest neighbors using event weights instead of raw event counts. ModulekNN has been modified to allow searches using the "weight" or "count" option, where "count" is the default. Added a UseWeight option to MethodKNN to select between "weight" and "count". (Work by Rustem Ospanov, CERN)
• Likelihood (and general PDF treatment): Adaptive smoothing in the PDF class, allowing it to smooth between MinSmoothNum (for regions with more signal) and MaxSmoothNum (for regions with less signal). Configuration of the PDF parameters from the option string has moved to the PDF class, allowing the user to define all the PDF functionality in every classifier in which the PDF is used (i.e., also for the MVA PDFs). The reading of these variables was removed from MethodBase and MethodLikelihood. This also allows improved (full) PDF configuration of the MVA output via the "CreateMvaPdf" option. (Work by Or Cohen, CERN & Weizmann)
• New generalisation methods:
• MethodCompositeBase: combines more than one classifier into one.
• MethodBoost: boosts/bags any classifier type. A special booking procedure for it was added to Factory class.
• MethodDT: a classifier composed of a single decision tree, boosted using MethodBoost. Results are compatible with BDT, but BDT remains the default for boosted decision trees, because it has pruning (among other additional features).
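The convergence-test idea used by MethodMLP can be sketched generically (illustrative names, not the actual TMVA ConvergenceTest API): training stops once the error has not improved significantly for a given number of consecutive steps.

```cpp
#include <cassert>

// Generic convergence test: HasConverged() is fed the current training
// error after each step and returns true once the error has failed to
// improve by more than `tolerance` for `maxSteps` consecutive steps.
class ConvergenceTest {
public:
    ConvergenceTest(double tolerance, int maxSteps)
        : fTolerance(tolerance), fMaxSteps(maxSteps) {}

    bool HasConverged(double currentError) {
        if (fBest - currentError > fTolerance) {
            fBest = currentError;  // significant improvement: reset counter
            fCounter = 0;
        } else {
            ++fCounter;            // stagnation
        }
        return fCounter >= fMaxSteps;
    }

private:
    double fTolerance;
    int    fMaxSteps;
    double fBest    = 1e30;  // best (lowest) error seen so far
    int    fCounter = 0;     // consecutive steps without improvement
};
```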
##### Other improvements
• Improved handling of small Likelihood values such that Likelihood performance increases in analyses with many variables (~>10). Thanks to Ralph Schaefer (Bonn U.) for reporting this.
• Nicer plotting: custom variable titles and units can be assigned in "AddVariable" call.
• Introduced the inverse transformation (InverseTransform) of the variable transformations into the framework. While this is not necessary for classification, it is necessary for regression. The inverse of the normalization transformation has been implemented.
• Started to extend the variable transformations to the regression targets as well.
• MethodCuts now produces the 'optimal-cut' histograms needed by macro mvaeffs.C. (macro 5a of TMVAGui.C)
• MsgLogger can be silenced in order to prevent excess output during boosting.
• Third dataset type added centrally (Training, Validation and Testing). The validation data is split off the original training data set.
• Update of GUI and other Macros according to the new features of PDF and the addition of MethodBoost.

• "Spectator" variables can be defined now which are computed just as the input variables and which are written out into the TestTree, but which don't participate in any MVA calculation (useful for correlation studies).
• New booking option "IgnoreNegWeightsInTraining" to test the effect of events with negative weights on the training. This is especially useful for methods that do not properly deal with such events. Note that this new option is not available for all methods (a training interrupt is issued if not available).
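The three-way dataset split mentioned above, with the validation sample carved out of the original training sample, can be sketched as follows (illustrative helper; the actual TMVA splitting logic differs in details such as event ordering and fractions):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Split an input training sample into a reduced training sample and a
// validation sample; the test sample is left untouched elsewhere.
template <typename T>
void splitTrainValidation(const std::vector<T>& training, double valFraction,
                          std::vector<T>& trainOut, std::vector<T>& valOut) {
    std::size_t nVal = static_cast<std::size_t>(training.size() * valFraction);
    valOut.assign(training.begin(), training.begin() + nVal);
    trainOut.assign(training.begin() + nVal, training.end());
}
```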
##### Bug fixes:
• Fixed regression bug in VariableNormalizeTransform (Use number of targets from Event instead of DataSet)
• Fixed Multitarget-Regression in PDEFoam, foam dimensions were miscalculated
• Added writing of targets to the weight files in regression mode to fix problems in RegressionApplication
• Added standard C++ header files that were missing from some classes, which led to compilation failures on some architectures (thanks to Lucian Ancu, Nijmegen, for reporting these).
• Added checks for unused options to Factory and DataSetFactory configuration options interpretation. Will now complain if wrong option labels are used.
• Fixed standard creation of correlation matrix plots
• Fixed internal mapping problem giving a fatal error message when destroying and recreating the Factory.

### GUI

#### TRootCanvas

• In SetWindowSize the event queue is flushed to make sure the window size change is really done.

• When creating the dialog from the context menu, skip arguments that are pointers (but not char *) and have a default value. This should avoid confusing input fields in the dialog.
• Implemented online help in root dialogs (the dialog boxes used with contextual menus) via a new "Online Help" button. This opens a Root HTML browser at the right class/method location in the Root reference guide on the web.
The base url can be changed with the Browser.StartUrl option in system.rootrc (by default: http://root.cern.ch/root/html/ClassIndex.html)

• Added the possibility to add a right aligned shortcut by using a tab character ('\t') before the shortcut string.
• Use new way of adding right aligned shortcuts in the menu entries in most of the GUI classes using shortcuts in their menu

#### TGSlider

• Added HandleConfigureNotify() to handle resizing events.

#### New Browser

• Automatically browse ROOT files if any are open when starting the browser.
• Corrected file system manipulations (copy, rename, delete) and automatic update of the list tree.

### GUIHTML

#### TGHtmlBrowser

• Added the ability to display a single picture from the web and to open PDF files with an external viewer (Windows only).
• Implemented anchor navigation (e.g. http://root.cern.ch/root/html/TH1.html#TH1:Multiply)

### Graphical Output

#### TASImage / libAfterImage

• In TImageDump the way the markers 6 and 7 are drawn (medium dot and big dot) has been changed to make sure they have the same size as the one on screen.
• Changes in libAfterImage (draw.c & draw.h), TASImage.cxx and TImageDump.cxx in order to produce nice looking circular (hollow and solid) markers. Previously the line used to draw hollow circular markers looked very thick and the solid ones did not look circular.
• Remove the global variable named "dpy" in libAfterImage. It produced an error if a user program used that simple variable name. "dpy" was a pointer to a "Display".

#### PostScript and PDF

• Text with size 0 is no longer drawn in PDF files; previously such text produced an invalid PDF file.
• The landscape orientation is now correct in PDF files. gv now recognizes the files as "Landscape" and the orientation is no longer upside down (seascape) as it was before.
• In PostScript and PDF files the method DrawPS is used to write a single (x,y) position. This case was not treated correctly and, because of that, the PS and PDF files might contain useless attribute settings. That was only a few extra bytes in the file, but they were useless.

#### TLegend

• When an object is added "by name" to a legend, the TMultiGraph and THStack objects present in the current pad are scanned if no object with this name has been found in the pad directly. Previously, graphs and histograms hidden inside multi-graphs and histogram stacks could not be added by name.
• New reference guide.

#### TGaxis

• In PaintAxis the option "U", for unlabeled axes, was not implemented in the case of alphanumeric axis labels.
• On log-scale TGaxis, with labels having values lower than 1 and tick marks set on the positive side, alignment issues came up. The following example shows four TGaxis objects drawn respectively with the options RG-, RG+, LG- and LG+. For the RG+ and LG+ options, the 10E-1 and 10E-2 labels were misaligned, showing a shift to the right compared to the 1E2, 1E1 and 1 labels.
  {
c1 = new TCanvas("c1","Examples of Log TGaxis",10,10,700,500);
c1->Range(-10,-1,10,1);
TGaxis *axis1 = new TGaxis(-7,-0.8,-7,0.8,0.01,100,50510,"RG-");
axis1->SetTitle("RG-"); axis1->Draw();
TGaxis *axis2 = new TGaxis(-2,-0.8,-2,0.8,0.01,100,50510,"RG+");
axis2->SetLabelOffset(-0.04); axis2->SetTitleOffset(-1.5);
axis2->SetTitle("RG+"); axis2->Draw();
TGaxis *axis3 = new TGaxis(2,-0.8,2,0.8,0.01,100,50510,"LG-");
axis3->SetLabelOffset(-0.04);
axis3->SetTitle("LG-"); axis3->Draw();
TGaxis *axis4 = new TGaxis(7,-0.8,7,0.8,0.01,100,50510,"LG+");
axis4->SetTitleOffset(-1);
axis4->SetTitle("LG+"); axis4->Draw();
}

• gStyle->SetStripDecimals(kFALSE) did not work in cases like the following one:
  {
gStyle->SetStripDecimals(kFALSE);
TGraph graph_freq;
graph_freq.SetPoint(0, 933., 40078879.);
graph_freq.SetPoint(1, 934., 40078966.);
graph_freq.Draw("A*");
}


#### TCrown

• The crown picking did not work.
• Improved the help.

#### TLatex

• The text angle was not taken into account in case the text was painted in low precision like in:
  gStyle->SetTitleFont(60,"xy");
TH1F* h=new TH1F("foo", "bar;#int;#int", 10, 0, 1);
h->Draw();

In that example the Y title was not rotated.

#### TCanvas

• A canvas is turned into GL mode only if the canvas name starts with "gl". Before, the "gl" string could be anywhere in the name.

#### QtRoot/ libGQt

• The redundant Qt3-related code was removed.
• The Q3_SUPPORT flag was eliminated. The plug-in can be used with and without Q3_SUPPORT now.
• The code was adjusted to work under the Qt 4.5.x.
• Many platform-dependent (win32) sections were replaced with cross-platform code

### OpenGL

#### Major changes

• GLEW - The OpenGL Extension Wrangler Library - has been added to facilitate detection of OpenGL version and available extensions at run-time. This will allow usage of advanced visualization techniques while still allowing fall-back solutions to be used on systems not supporting the required functionality. If GLEW and GLEW-devel packages are detected during configure, the ROOT provided GLEW is not built. See also: http://glew.sourceforge.net/.
• Latest (1.3.3) version of gl2ps has been imported (we had 1.2.6 before). See http://www.geuz.org/gl2ps/ for detailed change-log.
• New implementation of GL-in-TPad - instead of mixture of GL and non-GL graphics in a pixmap all pad graphics (2D/3D) is now done by OpenGL.
To make this possible new TVirtualPadPainter, TPadPainter, TGLPadPainter classes were introduced and painting operations inside TPad class were modified to use TVirtualPadPainter instead of TVirtualX. TVirtualPadPainter is an abstract base class, interface for drawing 2D primitives and pixmap management. TPadPainter is a default, non-GL implementation, based on TVirtualX (gVirtualX). TGLPadPainter is a GL implementation. Currently, TGLPadPainter does not support off-screen rendering (support for frame-buffer objects is planned).
Current limitations:
1. The glpad can be saved only as PS now.
2. Several sub-pads with complex 3d geometry can be slow due to lack of off-screen rendering which would allow for caching of resulting images.
Future directions:
1. Use frame-buffer objects for off-screen rendering.
2. Support "Save as" png, jpg, pdf, etc.
3. With GLEW and GL-shading-language, use of hardware anti-aliasing and shaders is possible.
• Prototype visualization of 5-dimensional distributions:
1. New option for TTree::Draw - "gl5d", for the case you have 5 and more dimensional dataset.
2. Set of iso-surfaces created, 4-th dimension is used to select iso-level.
3. "gl5d" is now very similar to the "gliso" option, but instead of filling a TH3 object (a very primitive and crude "density estimator"), points are fed directly to the kernel density estimator, based on the Fast Gauss Transform.
See TGL5D* classes.
Limitations: the 5-th dimension is not shown correctly at the moment (this requires sophisticated algorithms that are not yet in ROOT's math library). Because of this limitation, the GUI is just a toy for now and must be changed.
Future directions:
1. GUI improvements.
2. Support several different density estimators.
3. Implement regression tools.
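The painter split described above for GL-in-TPad can be sketched structurally: TPad draws through an abstract painter, so the same pad code renders via TVirtualX or via OpenGL depending on which concrete painter is installed. This is a hypothetical one-method version; the real TVirtualPadPainter interface covers lines, boxes, text and pixmap management.

```cpp
#include <cassert>
#include <string>

// Abstract painter interface: pad code only ever sees this base class.
struct VirtualPadPainter {
    virtual ~VirtualPadPainter() {}
    virtual std::string DrawLine(double x1, double y1,
                                 double x2, double y2) = 0;
};

// Default, non-GL implementation (analogous to the TVirtualX-based TPadPainter).
struct PadPainter : VirtualPadPainter {
    std::string DrawLine(double, double, double, double) override {
        return "gVirtualX line";
    }
};

// OpenGL implementation (analogous to TGLPadPainter).
struct GLPadPainter : VirtualPadPainter {
    std::string DrawLine(double, double, double, double) override {
        return "GL line";
    }
};
```

Swapping the installed painter switches the whole pad between backends without touching the drawing code itself.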

#### Minor changes, fixes and improvements

• It is now possible to draw a histogram with the "GLBOX" in the GL-viewer.
• New class TGLColor has been introduced to simplify color management in TGLViewer and TGLRnrCtx.
• Added support for several color sets (class TGLColorSet); each set defines colors for background, foreground, outline, markup, and for the outlines of selected and highlighted objects. This also allows independent changing of the background color and outline mode in the GL viewer - the 'e' key now toggles between dark and light backgrounds.
• New class TGLAnnotation - allows display of annotation text on top of the displayed objects. An annotation can be created from the TGLViewer editor ("Guides" tab) and can then be dragged around the screen, edited or closed.
• TGLAxisPainter - reimplemented to completely separate label and tick-mark positioning code from the rendering itself.
• TGLSAViewer - when exporting an image, the file extension typed by the user is now properly taken into account.
• TGLFont now uses the same font-naming scheme as the rest of ROOT (previously font-file names had to be specified).
• Overlay-object management has been improved.
• Allow the clipping object to be fixed by the user - until now it was updated on every redraw. See TGLViewer::SetClipAutoUpdate().
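A minimal sketch of the last item from a ROOT macro (the way the viewer pointer is obtained here is an assumption; any valid TGLViewer* will do):

```cpp
// Sketch (requires a ROOT session with a GL viewer open): freeze the
// clipping object so it is no longer updated on every redraw.
// "v" is assumed to be a TGLViewer*, e.g. obtained from the current pad.
TGLViewer *v = (TGLViewer *) gPad->GetViewer3D();
v->SetClipAutoUpdate(kFALSE);  // the user-positioned clip object now stays fixed
```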

### Eve

• TEveElement - add context-menu functions allowing the source-object to be printed, dumped or exported to CINT.
• TEveTrack - added a flag for locking the current track points - the track will not be re-extrapolated automatically, even when the extrapolation parameters are changed.
• TEveTrack - removed the ALICE-specific ImportXyzz() functions for loading kinematics, hits and clusters associated with a track. These called macros that were not available in ROOT.
• Several improvements in rendering of coordinate axes in TEveCaloLego and TEveProjectionAxes.
• New class TEveJetCone for display of circular and elliptic jet-cones clipped to the calorimeter's inner surface.
• Added support for extraction of composite-shape tessellations. A new class TEveGeoPolyShape has been introduced to make the tessellation serializable. See the example in tutorials/eve/csgdemo.C.
• Generalize representation of EVE-window title-bar - it can be modified to display user-provided icons, menus or buttons.
• TEveWindowPack now supports registration of sub-frames with weights that determine relative sub-frame length along the pack's major direction.
• TEveUtil::SetColorBrightness() now scales colors according to the screen-gamma transformation formula.
• Some examples using the GUI recorder have been added to the tutorials. See macros tutorials/eve/*_playback.C.

## Quick Look plugin for MacOS X

New Quick Look plugin that allows quick inspection of the content of a ROOT file.

Quick Look is available on MacOS X since version 10.5 (Leopard). To use QL, select a file icon in the Finder and hit the space bar. For all file types supported by QL you will get a window showing the file content; for unsupported file types you will get a generic window showing some basic file info.

The idea of QL is that file content can be shown without a heavy application startup. The time needed to generate a QL view of a ROOT file depends on the size of the file, but it is generally a quick operation.

Get the binary for the ROOTQL plugin from:

   ftp://root.cern.ch/root/ROOTQL.tgz


To install the plugin, after untarring the above file, just drag the ROOTQL.qlgenerator bundle to /Library/QuickLook (global, i.e. for all users on the system) or to ~/Library/QuickLook (this user only). You may need to create the folder if it doesn't already exist.

To build from source, get it from svn using:

   svn co http://root.cern.ch/svn/root/trunk/misc/rootql rootql


Open the ROOTQL project in Xcode and click "Build" (make sure the Active Build Configuration is set to "Release"). Copy the resulting plugin from build/Release to the desired QuickLook directory.

## SpotLight plugin for MacOS X

This is a Spotlight plugin that allows ROOT files to be indexed by SL. Once indexed, SL can find ROOT files based on the names and titles of the objects in the files.

Spotlight is available on MacOS X since version 10.4 (Tiger). To use SL, click the SL icon at the top right of the menu bar and type in a search text.

Get the binary for the ROOTSL plugin from:

   ftp://root.cern.ch/root/ROOTSL.tgz


To install the plugin, after untarring the above file, just drag the ROOTSL.mdimporter bundle to /Library/Spotlight (global, i.e. for all users on the system) or to ~/Library/Spotlight (this user only). You may need to create the folder if it doesn't already exist.

To build from source, get it from svn using:

   svn co http://root.cern.ch/svn/root/trunk/misc/rootsl rootsl


Open the ROOTSL project in Xcode and click "Build" (make sure the Active Build Configuration is set to "Release"). Copy the resulting plugin from build/Release to the desired Spotlight directory.
