ROOT Version 6.26 Release Notes

2022-11-16

Introduction

ROOT version 6.26/00 was released on March 03, 2022.

For more information, see:

http://root.cern

The following people have contributed to this new version:

Sitong An, CERN/SFT,
Simone Azeglio, CERN/SFT,
Rahul Balasubramanian, NIKHEF/ATLAS,
Bertrand Bellenot, CERN/SFT,
Josh Bendavid, CERN/CMS,
Jakob Blomer, CERN/SFT,
Patrick Bos, Netherlands eScience Center,
Rene Brun, CERN/SFT,
Carsten D. Burgard, DESY/ATLAS,
Will Buttinger, STFC/ATLAS,
Philippe Canal, FNAL,
Olivier Couet, CERN/SFT,
Mattias Ellert, Uppsala University,
Gerri Ganis, CERN/SFT,
Andrei Gheata, CERN/SFT,
Enrico Guiraud, CERN/SFT,
Stephan Hageboeck, CERN/IT,
Jonas Hahnfeld, CERN/SFT,
Ahmat Hamdan, GSOC,
Fernando Hueso-González, University of Valencia,
Ivan Kabadzhov, CERN/SFT,
Shamrock Lee (@ShamrockLee),
Sergey Linev, GSI,
Javier Lopez-Gomez, CERN/SFT,
Pere Mato, CERN/SFT,
Emmanouil Michalainas, CERN/SFT,
Lorenzo Moneta, CERN/SFT,
Nicolas Morange, CNRS/IJCLab,
Axel Naumann, CERN/SFT,
Vincenzo Eduardo Padulano, CERN/SFT and UPV,
Max Orok, U Ottawa,
Alexander Penev, University of Plovdiv,
Danilo Piparo, CERN/SFT,
Fons Rademakers, CERN/SFT,
Jonas Rembser, CERN/SFT,
Enric Tejedor Saavedra, CERN/SFT,
Aaradhya Saxena, GSOC,
Oksana Shadura, UNL/CMS,
Sanjiban Sengupta, GSOC,
Harshal Shende, GSOC,
Federico Sossai, CERN/SFT,
Matevz Tadel, UCSD/CMS,
Vassil Vassilev, Princeton/CMS,
Wouter Verkerke, NIKHEF/ATLAS,
Zef Wolffs, NIKHEF/ATLAS,
Stefan Wunsch, CERN/SFT

Deprecation, Removal, Backward Incompatibilities

The “Virtual MonteCarlo” facility VMC (montecarlo/vmc) has been removed from ROOT. The development of this package has moved to a separate project. ROOT’s copy of VMC had been deprecated since v6.18. The previously deprecated packages memstat and memcheck have been removed; please use tools such as valgrind or the memory sanitizers instead. ROOT’s “alien” package has been deprecated and will be removed in 6.28. Please contact ALICE software support if you still rely on it.

TTree.AsMatrix has been removed, after being deprecated in 6.24. Please use RDataFrame.AsNumpy instead to read and process data in ROOT files and store it in NumPy arrays (a tutorial can be found here). TTreeProcessorMT::SetMaxTasksPerFilePerWorker has been removed; TTreeProcessorMT::SetTasksPerWorkerHint is a superior alternative. TTree::GetEntry() and TTree::GetEvent() no longer have 0 as the default value for the first parameter, entry. We are not aware of correct uses of these functions without providing an entry number; if you have one, simply pass 0 explicitly from now on. TBufferMerger is now production ready and no longer experimental: ROOT::Experimental::TBufferMerger is deprecated, please use ROOT::TBufferMerger instead.
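For instance, code relying on the removed default entry argument can be migrated as follows (a minimal sketch, not compilable standalone; `tree` is assumed to point to an existing TTree, and the TBufferMerger constructor arguments are illustrative):

```cpp
// Before 6.26 this compiled and implicitly read entry 0:
//   tree->GetEntry();
// Now the entry number must be passed explicitly:
tree->GetEntry(0);   // read the first entry

// TBufferMerger moved out of the Experimental namespace:
//   ROOT::Experimental::TBufferMerger merger("out.root");  // deprecated
ROOT::TBufferMerger merger("out.root");                     // use this instead
```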

The following RooFit container classes are marked as deprecated with this release: RooHashTable, RooNameSet, RooSetPair, and RooList. These classes are still available in this release but will be removed in the next one. Please migrate to STL container classes such as std::unordered_map, std::set, and std::vector. The RooFit::FitOptions(const char*) command to steer RooAbsPdf::fitTo() with an option string is now deprecated and will be removed in ROOT v6.28. Please migrate to the RooCmdArg-based fit configuration. The former character flags map to RooFit command arguments as follows:

Consequently, the RooMinimizer::fit(const char*) function and the RooMCStudy constructor that takes an option string are deprecated as well.

Core Libraries

Interpreter

As of v6.26, cling diagnostic messages can be redirected to the ROOT error handler. Users may enable/disable this via TCling::ReportDiagnosticsToErrorHandler(), e.g.

root [1] gInterpreter->ReportDiagnosticsToErrorHandler();
root [2] int f() { return; }
Error in <cling>: ROOT_prompt_2:1:11: non-void function 'f' should return a value [-Wreturn-type]
int f() { return; }
          ^

More details at PR #8737.

Continuation of input lines using backslash \ is supported in ROOT’s prompt, e.g.

root [0] std::cout \
root (cont'ed, cancel with .@) [1]<< "ROOT\n";

ROOT now interprets code with optimization (-O1) by default, including proper inlining and the like. This significantly accelerates especially “modern” code (C++ stdlib, RDataFrame, etc). According to our measurements, the increased time for just-in-time compiling the code is reasonable given the runtime speed-up. The optimization level can be switched with .O 0, .O 1, etc.; the current optimization level is shown by .O. The CPP macro NDEBUG is now set unconditionally for interpreted code. Note that symbols that have been emitted at a given optimization level will not be re-emitted once the optimization level changes.
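For example, at the ROOT prompt (an illustrative session; the exact wording of the printed message may differ):

```
root [0] .O
Current cling optimization level: 1
root [1] .O 0
root [2] .O
Current cling optimization level: 0
```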

Unless ROOT is used with an interactive prompt (root [0]), ROOT does not inject the pointer checks anymore, accelerating code execution at the cost of not diagnosing the dereferencing of nullptr or uninitialized pointers.

Several internal optimizations of cling reduce the amount of symbols cling emits, and improve the just-in-time compilation time.

I/O Libraries

Command line utilities

$: rootls -l https://root.cern/files/ttree_read_imt.root
TTree  Mar 13 17:17 2019 TreeIMT;2 "TTree for IMT test" [current cycle]
TTree  Mar 13 17:17 2019 TreeIMT;1 "TTree for IMT test" [backup cycle]
$: root --random -z --nonexistingoption
root: unrecognized option '--random'
root: unrecognized option '-z'
root: unrecognized option '--nonexistingoption'
Try 'root --help' for more information.

TTree Libraries

RNTuple

ROOT’s experimental successor of TTree has been upgraded to version 1 of the binary format specification. Compared to the v0 format, the header is ~40% smaller and the footer ~100% smaller (after zstd compression). More details in PR #8897. RNTuple is still experimental and is scheduled to become production grade in 2024; until then, we appreciate feedback and suggestions for improvement.

If you have been trying RNTuple for a while, these are the other important changes that you will notice:

RDataFrame

New features

Notable changes in behavior

Other improvements

Experimental Distributed RDataFrame

The distributed RDataFrame module has been improved. It now supports sending RDataFrame tasks to a Dask scheduler. Through Dask, RDataFrame can also be scaled to a cluster of machines managed through a batch system such as HTCondor or Slurm. Here is an example:

import ROOT
from dask.distributed import Client
RDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame

# In a Python script, the Dask client needs to be initialized inside the
# __main__ guard below; Jupyter notebooks / Python sessions don't need this
if __name__ == "__main__":
    client = Client("SCHEDULER_ADDRESS")
    df = RDataFrame("mytree","myfile.root", daskclient=client)
    # Proceed as usual
    df.Define("x","someoperation").Histo1D("x")

Other notable additions and improvements include:

Histogram Libraries

Math Libraries

RooFit Libraries

Experimental CUDA support for RooFit’s BatchMode

RooFit’s BatchMode has been around since ROOT 6.20. It was further improved in ROOT 6.24 to use vector extensions of modern CPUs without recompiling ROOT, introducing the new RooBatchCompute library as a backend that is compiled multiple times for different instruction sets. With this release, RooBatchCompute is also compiled with the Nvidia CUDA compiler to support computation on GPUs, provided the RooFit objects used support it. You can use the CUDA mode by passing "cuda" to the BatchMode() command argument:

model.fitTo(data);                            // not using the batch mode
model.fitTo(data, RooFit::BatchMode(true));   // using the BatchMode on CPU (RooFit::BatchMode("cpu") is equivalent)
model.fitTo(data, RooFit::BatchMode("cuda")); // using the new CUDA backend

The RooBatchCompute backend now also supports ROOT’s implicit multithreading (similar to RDataFrame), which can be enabled as follows:

ROOT::EnableImplicitMT(nThreads);

For more information, please have a look at this contribution to the ACAT 2021 conference or consult the RooBatchCompute README. The README also describes how to enable BatchMode support for your own PDFs.

Parallel calculation of likelihood gradients during fitting

This release features two new optional RooFit libraries: RooFit::MultiProcess and RooFit::TestStatistics. To activate both, build with -Droofit_multiprocess=ON.

The RooFit::TestStatistics namespace contains a major refactoring of the RooAbsTestStatistic-RooAbsOptTestStatistic-RooNLLVar inheritance tree into:

  1. statistics-based classes on the one hand;
  2. calculation/evaluation/optimization based classes on the other hand.

The main selling point of using RooFit::TestStatistics from a performance point of view is the implementation of the RooFit::MultiProcess based LikelihoodGradientJob calculator class. To use it to perform a “migrad” fit (using Minuit2), one should create a RooMinimizer using a new constructor with a RooAbsL likelihood parameter as follows:

using RooFit::TestStatistics::RooAbsL;
using RooFit::TestStatistics::buildLikelihood;

RooAbsPdf* pdf = ...;   // build a pdf
RooAbsData* data = ...; // get some data

std::shared_ptr<RooAbsL> likelihood = buildLikelihood(pdf, data, [OPTIONAL ARGUMENTS]);

RooMinimizer m(likelihood);
m.migrad();

The RooMinimizer object behaves as usual, except that behind the scenes it will now calculate each partial derivative on a separate process, ideally running on a separate CPU core. This can be used to speed up fits with many parameters (at least as many as there are cores to parallelize over), since every parameter corresponds to a partial derivative. The resulting fit parameters will be identical to those obtained with the non-parallelized gradients minimizer in most cases (see the usage notes linked below for exceptions).

In upcoming releases, further developments are planned:

For more details, consult the usage notes in the TestStatistics README.md. For benchmarking results on the prototype version of the parallelized gradient calculator, see the corresponding CHEP19 proceedings paper.

New pythonizations

Various new pythonizations are introduced to streamline your RooFit code in Python.

For a complete list of all pythonized classes and functions, please see the RooFit pythonizations page in the reference guide. All RooFit Python tutorials have been updated to profit from all available pythonizations.

Some notable highlights are listed in the following.

Keyword argument pythonizations

All functions that take RooFit command arguments as parameters now accept equivalent Python keyword arguments, for example simplifying calls to RooAbsPdf::fitTo() such as:

model.fitTo(data, ROOT.RooFit.Range("left,right"), ROOT.RooFit.Save())

which becomes:

model.fitTo(data, Range="left,right", Save=True)
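Under the hood, such keyword arguments are translated into the corresponding RooFit command arguments. The following is a rough, hypothetical sketch of that translation in plain Python; make_cmd_args and the stand-in factories are illustrative only, not RooFit's actual implementation:

```python
def make_cmd_args(factories, **kwargs):
    """Translate Python keyword arguments into command-argument objects.

    `factories` maps keyword names to factory callables, standing in for
    functions like RooFit.Range or RooFit.Save.
    """
    cmds = []
    for name, value in kwargs.items():
        factory = factories[name]
        if value is True:          # Save=True          -> Save()
            cmds.append(factory())
        else:                      # Range="left,right" -> Range("left,right")
            cmds.append(factory(value))
    return cmds

# Stand-in factories that just record how they were called:
factories = {"Range": lambda *a: ("Range",) + a, "Save": lambda *a: ("Save",) + a}
print(make_cmd_args(factories, Range="left,right", Save=True))
# -> [('Range', 'left,right'), ('Save',)]
```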

String to enum pythonizations

Many functions that take an enum as a parameter now also accept a string with the enum label.

Take for example this expression:

data.plotOn(frame, ROOT.RooFit.DataError(ROOT.RooAbsData.SumW2))

Combining the enum pythonization with the keyword argument pythonization explained before, this becomes:

data.plotOn(frame, DataError="SumW2")

This pythonization is also useful for your calls to RooFit::LineColor() or RooFit::LineStyle(), to give some more common examples.

Implicit conversion from Python collections to RooFit collections

You can now benefit from implicit conversion from Python lists to RooArgLists, and from Python sets to RooArgSets.

For example, you can call RooAbsPdf::generate() with a Python set to specify the observables:

pdf.generate({x, y, cut}, 10000)

Or, you can create a RooPolynomial from a Python list of coefficients:

ROOT.RooPolynomial("p", "p", x, [0.01, -0.01, 0.0004])

Note that here we benefit from another new feature: the implicit call to RooFit::RooConst() when passing raw numbers to the RooFit collection constructors.

Allow for use of Python collections instead of C++ STL containers

Some RooFit functions take STL map-like types such as std::map as parameters, for example the RooCategory constructor. In the past, you had to create the correct C++ class in Python, but now you can usually pass a Python dictionary instead. For example, a RooCategory can be created like this:

sample = ROOT.RooCategory("sample", "sample", {"Sample1": 1, "Sample2": 2, "Sample3": 3})

RooWorkspace accessors

In Python, you can now get objects stored in a RooWorkspace with the item retrieval operator, and the return value is also always downcasted to the correct type. That means in Python you don’t have to use RooWorkspace::var() to access variables or RooWorkspace::pdf() to access pdfs, but you can always get any object using square brackets. For example:

# w is a RooWorkspace instance that contains the variables `x`, `y`, and `z` for which we want to generate toy data:
model.generate({w["x"], w["y"], w["z"]}, 1000)

New PyROOT functions for interoperability with NumPy and Pandas

New member functions of RooFit classes were introduced exclusively in PyROOT for better interoperability between RooFit and NumPy and Pandas:

For more details, consult the tutorial rf409_NumPyPandasToRooFit.py.

Modeling Effective Field Theory distributions with RooLagrangianMorphFunc

The RooLagrangianMorphFunc class is a new RooFit class for modeling a continuous distribution of an observable as a function of the parameters of an effective field theory, given the distribution sampled at some points in the parameter space. Two new classes were added to help provide this functionality:

For example usage of the RooLagrangianMorphFunc class, please consult the tutorials for a single parameter case (rf711_lagrangianmorph.C / .py) and for a multi-parameter case (rf712_lagrangianmorphfit.C / .py).

A RooLagrangianMorphFunc can also be created with the RooWorkspace::factory interface, showcased in rf512_wsfactory_oper.C / .py.

Exporting and importing RooWorkspace to and from JSON and YML

The new component RooFitHS3 implements serialization and deserialization of RooWorkspace objects to and from JSON and YML. The main class providing this functionality is RooJSONFactoryWSTool. For now, this functionality is not feature complete with respect to all functions and pdfs available in RooFit, but it provides an interface that is easily extensible by users, as documented in the corresponding README. It is hoped that, through user contributions, a sufficiently comprehensive library of serializers and deserializers will emerge over time.

For more details, consult the tutorial rf515_hfJSON.

Creating RooFit datasets from RDataFrame

RooFit now contains two RDataFrame action helpers, RooDataSetHelper and RooDataHistHelper, which allow for creating RooFit datasets by booking an action:

  RooRealVar x("x", "x", -5.,   5.);
  RooRealVar y("y", "y", -50., 50.);
  auto myDataSet = rdataframe.Book<double, double>(
    RooDataSetHelper{"dataset",          // Name   (directly forwarded to RooDataSet::RooDataSet())
                    "Title of dataset",  // Title  (                  ~ '' ~                      )
                    RooArgSet(x, y) },   // Variables to create in dataset
    {"x", "y"}                           // Column names from RDataFrame
  );

For more details, consult the tutorial rf408_RDataFrameToRooFit.

Storing global observables in RooFit datasets

RooFit groups model variables into observables and parameters, depending on whether their values are stored in the dataset. For fits with parameter constraints, there is a third kind of variable, called global observables. These represent the results of auxiliary measurements that constrain the nuisance parameters. In the RooFit implementation, a likelihood is generally the sum of two terms:

Before this release, the global observable values were always taken from the model/pdf. With this release, a mechanism is added to store a snapshot of global observables in any RooDataSet or RooDataHist. For toy studies where the global observables assume different values for each toy, this change makes the bookkeeping of the set of global observables, and in particular of their values, much easier.
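Schematically, such a constrained likelihood can be written as follows (a sketch in generic notation, not RooFit's exact internals; θ are the parameters, x the observables, and g the global observables):

```latex
% Data term times constraint term; the constraint term is
% evaluated at the global observables g.
L(\theta \,|\, x, g) =
  \underbrace{\prod_{i} f(x_i \,|\, \theta)}_{\text{data term}}
  \times
  \underbrace{C(g \,|\, \theta)}_{\text{constraint term}}
```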

Usage example for a model with global observables g1 and g2:

auto data = model.generate(x, 1000); // data has only the single observable x
data->setGlobalObservables(g1, g2); // now, data also stores a snapshot of g1 and g2

// If you fit the model to the data, the global observables and their values
// are taken from the dataset:
model.fitTo(*data);

// You can still define the set of global observables yourself, but the values
// will be taken from the dataset if available:
model.fitTo(*data, GlobalObservables(g1, g2));

// To force `fitTo` to take the global observable values from the model even
// though they are in the dataset, you can use the new `GlobalObservablesSource`
// command argument:
model.fitTo(*data, GlobalObservables(g1, g2), GlobalObservablesSource("model"));
// The only other allowed value for `GlobalObservablesSource` is "data", which
// corresponds to the new default behavior explained above.

In case you create a RooFit dataset directly by calling its constructor, you can also pass the global observables in a command argument instead of calling setGlobalObservables() later:

RooDataSet data{"dataset", "dataset", x, RooFit::GlobalObservables(g1, g2)};

To access the set of global observables stored in a RooAbsData, call RooAbsData::getGlobalObservables(). It returns a nullptr if no global observable snapshots are stored in the dataset.

For more information of global observables and how to attach them to the toy datasets, please take a look at the new rf613_global_observables.C / .py tutorial.

Changes in RooAbsPdf::fitTo behaviour for multi-range fits

The RooAbsPdf::fitTo and RooAbsPdf::createNLL functions accept a command argument to specify the fit range. One can also fit in multiple ranges simultaneously. The definition of such multi-range likelihoods for non-extended fits changes in this release. Previously, the individual likelihoods were normalized separately in each range, which meant that the relative number of events in each sub-range was not used to estimate the PDF parameters. From now on, the likelihoods are normalized by the sum of integrals in each range. This implies that the likelihood takes into account all inter-range and intra-range information.
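Schematically, for a non-extended fit in two ranges A and B with pdf p(x|θ), the change can be written as follows (generic notation, not RooFit's exact internals):

```latex
% Old behavior: each range normalized separately
L_{\text{old}} = \prod_{x_i \in A} \frac{p(x_i|\theta)}{\int_A p\,dx}
                 \times
                 \prod_{x_j \in B} \frac{p(x_j|\theta)}{\int_B p\,dx}

% New behavior: a single normalization by the sum of the range integrals,
% so the relative event yields in A and B now constrain the parameters
L_{\text{new}} = \prod_{x_i \in A \cup B}
                 \frac{p(x_i|\theta)}{\int_A p\,dx + \int_B p\,dx}
```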

Deprecation of the RooMinuit class

The RooMinuit class was the old interface between RooFit and Minuit. With ROOT version 5.24, the more general RooMinimizer adapter was introduced, which became the default with ROOT 6.08.

Before 6.26, it was still possible to use RooMinuit by passing the Minimizer("OldMinuit", "minimizer") command argument to RooAbsPdf::fitTo(). This option is now removed.

Increase of the RooAbsArg class version

The class version of RooAbsArg was incremented from 7 to 8 in this release. In some circumstances, this can cause TStreamerInfo warnings for classes inheriting from RooAbsArg when reading older RooFit models from a file. These warnings are harmless and can be avoided by also incrementing the class version of the inheriting class.
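For example, if you maintain a custom class inheriting from RooAbsPdf (itself a RooAbsArg), the version can be bumped in its ClassDef macro (MyPdf and the version numbers here are purely illustrative):

```cpp
// Illustrative sketch: MyPdf is a hypothetical user-defined pdf.
class MyPdf : public RooAbsPdf {
public:
  // ... data members and overrides ...
  ClassDefOverride(MyPdf, 2); // was 1; incremented along with RooAbsArg's version
};
```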

Compile-time protection against creating empty RooCmdArgs from strings

The implicit RooCmdArg constructor from const char* was removed to avoid the accidental construction of meaningless RooCmdArgs that only have a name but no payload. This causes new compiler errors in your code if you pass a string instead of a RooCmdArg to various RooFit functions, such as RooAbsPdf::fitTo(). If this happens, please consult the documentation of fitTo() to check which of the free functions in the RooFit namespace you need to use to achieve the desired configuration.

Example of an error that is now caught at compile time: confusing the RooAbsPdf::fitTo() function signature with the one of TH1::Fit() and passing the fit range name as a string literal:

pdf.fitTo(*data, "r"); // ERROR!
// Will not compile anymore, as `"r"` is not a recognized command and will be ignored!
// Instead, to restrict to a range called "r", use:
pdf.fitTo(*data, RooFit::Range("r"));

TMVA

SOFIE : Code generation for fast inference of Deep Learning models

ROOT/TMVA SOFIE (“System for Optimized Fast Inference code Emit”) is a new package introduced in this release that generates C++ functions that can easily be invoked for the fast inference of trained neural network models. It takes ONNX model files as input and produces C++ header files that can be included and utilized in a “plug-and-go” style. This is a new development and is currently still in an experimental stage.

From the ROOT command line, or in a ROOT macro, you can use the following code to parse a model in ONNX file format and generate C++ code that can be used to evaluate the model:

using namespace TMVA::Experimental;
SOFIE::RModelParser_ONNX parser;
SOFIE::RModel model = parser.Parse("./example_model.onnx");
model.Generate();
model.OutputGenerated("./example_output.hxx");

A C++ header file will be generated, along with a text file, example_output.dat, containing the model weight values that will be used to initialize the model. A full example for parsing an ONNX input file is given by the tutorial TMVA_SOFIE_ONNX.C.

To use the generated inference code, you need to create a Session class and call the function Session::infer(float*):

#include "example_output.hxx"
float input[INPUT_SIZE] = {.....};   // input data 
TMVA_SOFIE_example_model::Session s("example_output.dat");
std::vector<float> out = s.infer(input);

To use the ONNX parser, you need to build ROOT with the configure option tmva-sofie=ON; this option is enabled automatically when a Google Protocol Buffers library (protobuf, see https://developers.google.com/protocol-buffers) is found on your system.

If you don’t have protobuf and don’t want to install it, you can still use SOFIE, albeit with more limited operator support, by directly parsing Keras .h5 or PyTorch .pt input files. In this case, you can convert the model directly to an RModel representation, which can then be used as above to generate the header and the weight file.

For parsing a Keras input file you need to do:

SOFIE::RModel model = SOFIE::PyKeras::Parse("KerasModel.h5");

See the tutorial TMVA_SOFIE_Keras.C. For parsing a PyTorch input file:

SOFIE::RModel model = SOFIE::PyTorch::Parse("PyTorchModel.pt",inputShapes);

where inputShapes is a std::vector<std::vector<size_t>> defining the shapes of the input tensors. This information is required by PyTorch since it is not stored in the model. A full example for parsing a PyTorch file is in the TMVA_SOFIE_PyTorch.C tutorial.

To use the Keras and/or PyTorch parsers, you need to have Keras and/or PyTorch installed in your Python environment and, in addition, build ROOT with support for PyMVA, obtained when configuring with -Dtmva-pymva=On.

Note that the SOFIE::RModel object created by the parser can also be stored in a ROOT file, using standard ROOT I/O functionality:

SOFIE::RModel model = SOFIE::PyKeras::Parse("KerasModel.h5");
TFile file("model.root","NEW");
model.Write();
file.Close(); 

2D Graphics Libraries

A TCanvas can now be saved as a standalone LaTeX (TikZ) file, ready to be compiled, via the “Standalone” option of TPad::Print():

canvas->Print(".tex", "Standalone");

The generated .tex file has the form:

\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{patterns,plotmarks}
\begin{document}
<----- here the graphics output
\end{document}

Geometry Libraries

GUI Libraries

WebGUI Libraries

PyROOT

Jupyter lab

Tutorials

Class Reference Guide

Build, Configuration and Testing Infrastructure

For users building from source the latest-stable branch and passing -Droottest=ON to the CMake command line, the corresponding revision of roottest pointed to by latest-stable will be downloaded as required.

ROOT now requires CMake version 3.16 or later. ROOT cannot be built with C++11 anymore; the supported standards are currently C++14 and C++17. ROOT’s experimental features (RNTuple, RHist, etc) now require C++17.
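A minimal configure-and-build sketch reflecting these requirements (the source path and job count are placeholders):

```shell
# Requires CMake >= 3.16; C++17 is needed for the experimental features
cmake -DCMAKE_CXX_STANDARD=17 -Droottest=ON ../root_src
cmake --build . -- -j8
```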

Bugs and Issues fixed in this release

Release 6.26/02

Published on April 12, 2022

Bugs and Issues fixed in this release

Release 6.26/04

Published on June 7, 2022

Bugs and Issues fixed in this release

Release 6.26/06

Published on July 28, 2022

Bugs and Issues fixed in this release

Release 6.26/08

Published on Oct 18, 2022

Bugs and Issues fixed in this release

Release 6.26/10

Published on November 16, 2022

Bugs and Issues fixed in this release

Release 6.26/14

Published on November 28, 2023

Bugs and Issues fixed in this release

This release also addresses a security issue. More details will follow.

Release 6.26/16

Published on March 20, 2024

Bugs and Issues fixed in this release

The latest Apple operating system supported is Monterey (macOS version 12).

HEAD of the v6-26-00-patches branch

These changes will be part of a future 6.26/18.