// Author: Enrico Guiraud, Danilo Piparo CERN 12/2016

/*************************************************************************
 * Copyright (C) 1995-2018, Rene Brun and Fons Rademakers.               *
 * All rights reserved.                                                  *
 *                                                                       *
 * For the licensing terms see $ROOTSYS/LICENSE.                         *
 * For the list of contributors see $ROOTSYS/README/CREDITS.             *
 *************************************************************************/
#include "ROOT/RDataFrame.hxx"
#include "ROOT/RDataSource.hxx"
#include "ROOT/RDF/Utils.hxx"
#include <string_view>
#include "TChain.h"
#include "TDirectory.h"
#include "RtypesCore.h" // for ULong64_t
#include "TTree.h"

#include <fstream>           // std::ifstream
#include <nlohmann/json.hpp> // nlohmann::json::parse
#include <memory>            // for make_shared, allocator, shared_ptr
#include <ostream>           // ostringstream
#include <stdexcept>
#include <string>
#include <vector>

// clang-format off
/**
* \class ROOT::RDataFrame
* \ingroup dataframe
* \brief ROOT's RDataFrame offers a modern, high-level interface for analysis of data stored in TTree, CSV and other data formats, in C++ or Python.

In addition, multi-threading and other low-level optimisations allow users to exploit all the resources available
on their machines completely transparently.<br>
Skip to the [class reference](#reference) or keep reading for the user guide.
In a nutshell:
~~~{.cpp}
ROOT::EnableImplicitMT();                    // Tell ROOT you want to go parallel
ROOT::RDataFrame d("myTree", "file_*.root"); // Interface to TTree and TChain
auto myHisto = d.Histo1D("Branch_A");        // This books the (lazy) filling of a histogram
myHisto->Draw();                             // Event loop is run here, upon first access to a result
~~~
Calculations are expressed in terms of a type-safe *functional chain of actions and transformations*; RDataFrame takes
care of their execution. The implementation automatically puts in place several low-level optimisations such as
multi-thread parallelization and caching.

<a href="https://doi.org/10.5281/zenodo.260230"><img src="https://zenodo.org/badge/DOI/10.5281/zenodo.260230.svg" alt="DOI"></a>
## For the impatient user
You can directly see RDataFrame in action in our [tutorials](https://root.cern/doc/master/group__tutorial__dataframe.html), in C++ or Python.
## Table of Contents
- [Cheat sheet](\ref cheatsheet)
- [Introduction](\ref introduction)
- [Crash course](\ref crash-course)
- [Working with collections](\ref collections)
- [Transformations: manipulating data](\ref transformations)
- [Actions: getting results](\ref actions)
- [Distributed execution in Python](\ref distrdf)
- [Performance tips and parallel execution](\ref parallel-execution)
- [More features](\ref more-features)
  - [Systematic variations](\ref systematics)
  - [RDataFrame objects as function arguments and return values](\ref rnode)
  - [Storing RDataFrame objects in collections](\ref RDFCollections)
  - [Executing callbacks every N events](\ref callbacks)
  - [Default column lists](\ref default-branches)
  - [Special helper columns: `rdfentry_` and `rdfslot_`](\ref helper-cols)
  - [Just-in-time compilation: column type inference and explicit declaration of column types](\ref jitting)
  - [User-defined custom actions](\ref generic-actions)
  - [Dataset joins with friend trees](\ref friends)
  - [Reading data formats other than ROOT trees](\ref other-file-formats)
  - [Computation graphs (storing and reusing sets of transformations)](\ref callgraphs)
  - [Visualizing the computation graph](\ref representgraph)
  - [Activating RDataFrame execution logs](\ref rdf-logging)
  - [Creating an RDataFrame from a dataset specification file](\ref rdf-from-spec)
  - [Adding a progress bar](\ref progressbar)
- [Efficient analysis in Python](\ref python)
- <a class="el" href="classROOT_1_1RDataFrame.html#reference" onclick="javascript:toggleInherit('pub_methods_classROOT_1_1RDF_1_1RInterface')">Class reference</a>
\anchor cheatsheet
## Cheat sheet
These are the operations which can be performed with RDataFrame.

### Transformations
Transformations are a way to manipulate the data.

| **Transformation** | **Description** |
|--------------------|-----------------|
| Alias() | Introduce an alias for a particular column name. |
| Define() | Create a new column in the dataset. Example usages include adding a column that contains the invariant mass of a particle, or a selection of elements of an array (e.g. only the `pt`s of "good" muons). |
| DefinePerSample() | Define a new column that is updated when the input sample changes, e.g. when switching tree being processed in a chain. |
| DefineSlot() | Same as Define(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, `0` to `nThreads - 1`, for each thread of execution. This is meant as a helper in writing thread-safe Define() transformations when using RDataFrame after ROOT::EnableImplicitMT(). DefineSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
| DefineSlotEntry() | Same as DefineSlot(), but the entry number is passed in addition to the slot number. This is meant as a helper in case the expression depends on the entry number. For details about entry numbers in multi-threaded runs, see [here](\ref helper-cols). |
| Filter() | Filter rows based on user-defined conditions. |
| Range() | Filter rows based on entry number (single-thread only). |
| Redefine() | Overwrite the value and/or type of an existing column. See Define() for more information. |
| RedefineSlot() | Overwrite the value and/or type of an existing column. See DefineSlot() for more information. |
| RedefineSlotEntry() | Overwrite the value and/or type of an existing column. See DefineSlotEntry() for more information. |
| Vary() | Register systematic variations for an existing column. Varied results are then extracted via VariationsFor(). |
### Actions
Actions aggregate data into a result. Each one is described in more detail in the reference guide.

In the following, whenever we say an action "returns" something, we always mean it returns a smart pointer to it. Actions only act on events that pass all preceding filters.

Lazy actions only trigger the event loop when one of the results is accessed for the first time, making it easy to
produce many different results in one event loop. Instant actions trigger the event loop instantly.

| **Lazy action** | **Description** |
|-----------------|-----------------|
| Aggregate() | Execute a user-defined accumulation operation on the processed column values. |
| Book() | Book execution of a custom action using a user-defined helper object. |
| Cache() | Cache column values in memory. Custom columns can be cached as well, filtered entries are not cached. Users can specify which columns to save (default is all). |
| Count() | Return the number of events processed. Useful e.g. to get a quick count of the number of events passing a Filter. |
| Display() | Provides a printable representation of the dataset contents. The method returns a ROOT::RDF::RDisplay instance which can print a tabular representation of the data or return it as a string. |
| Fill() | Fill a user-defined object with the values of the specified columns, as if by calling `Obj.Fill(col1, col2, ...)`. |
| Graph() | Fills a TGraph with the two columns provided. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
| GraphAsymmErrors() | Fills a TGraphAsymmErrors. If multi-threading is enabled, the order of the points may not be the one expected; it is therefore suggested to sort it before drawing. |
| Histo1D(), Histo2D(), Histo3D() | Fill a one-, two-, three-dimensional histogram with the processed column values. |
| HistoND() | Fill an N-dimensional histogram with the processed column values. |
| Max() | Return the maximum of processed column values. If the type of the column is inferred, the return type is `double`, the type of the column otherwise.|
| Mean() | Return the mean of processed column values.|
| Min() | Return the minimum of processed column values. If the type of the column is inferred, the return type is `double`, the type of the column otherwise.|
| Profile1D(), Profile2D() | Fill a one- or two-dimensional profile with the column values that passed all filters. |
| Reduce() | Reduce (e.g. sum, merge) entries using the function (lambda, functor...) passed as argument. The function must have signature `T(T,T)` where `T` is the type of the column. Return the final result of the reduction operation. An optional parameter allows initialization of the result object to non-default values. |
| Report() | Obtain statistics on how many entries have been accepted and rejected by the filters. See the section on [named filters](#named-filters-and-cutflow-reports) for a more detailed explanation. The method returns a ROOT::RDF::RCutFlowReport instance which can be queried programmatically to get information about the effects of the individual cuts. |
| Stats() | Return a TStatistic object filled with the input columns. |
| StdDev() | Return the unbiased standard deviation of the processed column values. |
| Sum() | Return the sum of the values in the column. If the type of the column is inferred, the return type is `double`, the type of the column otherwise. |
| Take() | Extract a column from the dataset as a collection of values, e.g. a `std::vector<float>` for a column of type `float`. |

| **Instant action** | **Description** |
|--------------------|-----------------|
| Foreach() | Execute a user-defined function on each entry. Users are responsible for the thread-safety of this callable when executing with implicit multi-threading enabled. |
| ForeachSlot() | Same as Foreach(), but the user-defined function must take an extra `unsigned int slot` as its first parameter. `slot` will take a different value, `0` to `nThreads - 1`, for each thread of execution. This is meant as a helper in writing thread-safe Foreach() actions when using RDataFrame after ROOT::EnableImplicitMT(). ForeachSlot() works just as well with single-thread execution: in that case `slot` will always be `0`. |
| Snapshot() | Write the processed dataset to disk, in a new TTree and TFile. Custom columns can be saved as well, filtered entries are not saved. Users can specify which columns to save (default is all). Snapshot, by default, overwrites the output file if it already exists. Snapshot() can be made *lazy* by setting the appropriate flag in the snapshot options.|
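Snapshot() is instant by default but, as noted in the table, it can be turned lazy through the options struct. A minimal sketch (the dataframe `df` and the tree, file and column names are illustrative):
~~~{.cpp}
ROOT::RDF::RSnapshotOptions opts;
opts.fLazy = true; // book the write, but do not run the event loop yet
auto out = df.Snapshot("outTree", "out.root", {"x", "y"}, opts);
// dereferencing the result triggers the event loop and writes the file
out->Count();
~~~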
### Queries

These operations do not modify the dataframe or book computations but simply return information on the RDataFrame object.

| **Operation** | **Description** |
|---------------|-----------------|
| Describe() | Get useful information describing the dataframe, e.g. columns and their types. |
| GetColumnNames() | Get the names of all the available columns of the dataset. |
| GetColumnType() | Return the type of a given column as a string. |
| GetColumnTypeNamesList() | Return the list of type names of columns in the dataset. |
| GetDefinedColumnNames() | Get the names of all the defined columns. |
| GetFilterNames() | Return the names of all filters in the computation graph. |
| GetNRuns() | Return the number of event loops run by this RDataFrame instance so far. |
| GetNSlots() | Return the number of processing slots that RDataFrame will use during the event loop (i.e. the concurrency level). |
| SaveGraph() | Store the computation graph of an RDataFrame in [DOT format (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) for easy inspection. See the [relevant section](\ref representgraph) for details. |
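For instance, these queries can be combined to inspect a dataset before booking any computations (a sketch; the tree and file names are illustrative):
~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
df.Describe().Print(); // columns, their types and general dataset information
for (const auto &c : df.GetColumnNames())
   std::cout << c << ": " << df.GetColumnType(c) << '\n';
~~~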
\anchor introduction
## Introduction
Users define their analysis as a sequence of operations to be performed on the dataframe object; the framework
takes care of the management of the loop over entries as well as low-level details such as I/O and parallelization.
RDataFrame provides methods to perform most common operations required by ROOT analyses;
at the same time, users can just as easily specify custom code that will be executed in the event loop.

RDataFrame is built with a *modular* and *flexible* workflow in mind, summarised as follows:

1. Construct a dataframe object by specifying a dataset. RDataFrame supports TTree as well as TChain, [CSV files](https://root.cern/doc/master/df014__CSVDataSource_8C.html), [SQLite files](https://root.cern/doc/master/df027__SQliteDependencyOverVersion_8C.html), [RNTuples](https://root.cern/doc/master/structROOT_1_1Experimental_1_1RNTuple.html), and it can be extended to custom data formats. From Python, [NumPy arrays can be imported into RDataFrame](https://root.cern/doc/master/df032__MakeNumpyDataFrame_8py.html) as well.

2. Transform the dataframe by:

   - [Applying filters](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). This selects only specific rows of the dataset.

   - [Creating custom columns](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#transformations). Custom columns can, for example, contain the results of a computation that must be performed for every row of the dataset.

3. [Produce results](https://root.cern/doc/master/classROOT_1_1RDataFrame.html#actions). *Actions* are used to aggregate data into results. Most actions are *lazy*, i.e. they are not executed on the spot, but registered with RDataFrame and executed only when a result is accessed for the first time.

Make sure to book all transformations and actions before you access the contents of any of the results. This lets RDataFrame accumulate work and then produce all results at the same time, upon first access to any of them.

The following table shows how analyses based on TTreeReader and TTree::Draw() translate to RDataFrame. Follow the
[crash course](#crash-course) to discover more idiomatic and flexible ways to express analyses with RDataFrame.
<table>
<tr>
   <td>
      <b>TTreeReader</b>
   </td>
   <td>
      <b>ROOT::RDataFrame</b>
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
TTreeReader reader("myTree", file);
TTreeReaderValue<A_t> a(reader, "A");
TTreeReaderValue<B_t> b(reader, "B");
TTreeReaderValue<C_t> c(reader, "C");
while(reader.Next()) {
   if(IsGoodEvent(*a, *b, *c))
      DoStuff(*a, *b, *c);
}
~~~
   </td>
   <td>
~~~{.cpp}
ROOT::RDataFrame d("myTree", file, {"A", "B", "C"});
d.Filter(IsGoodEvent).Foreach(DoStuff);
~~~
   </td>
</tr>
<tr>
   <td>
      <b>TTree::Draw</b>
   </td>
   <td>
      <b>ROOT::RDataFrame</b>
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
auto *tree = file->Get<TTree>("myTree");
tree->Draw("x", "y > 2");
~~~
   </td>
   <td>
~~~{.cpp}
ROOT::RDataFrame df("myTree", file);
auto h = df.Filter("y > 2").Histo1D("x");
~~~
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
tree->Draw("jet_eta", "weight*(event == 1)");
~~~
   </td>
   <td>
~~~{.cpp}
df.Filter("event == 1").Histo1D("jet_eta", "weight");
// or the fully compiled version:
df.Filter([] (ULong64_t e) { return e == 1; }, {"event"}).Histo1D<RVec<float>>("jet_eta", "weight");
~~~
   </td>
</tr>
<tr>
   <td>
~~~{.cpp}
// object selection: for each event, fill histogram with array of selected pts
tree->Draw("Muon_pt", "Muon_pt > 100");
~~~
   </td>
   <td>
~~~{.cpp}
// with RDF, arrays are read as ROOT::VecOps::RVec objects
df.Define("good_pt", "Muon_pt[Muon_pt > 100]").Histo1D("good_pt");
~~~
   </td>
</tr>
</table>
\anchor crash-course
## Crash course
All snippets of code presented in the crash course can be executed in the ROOT interpreter. Simply precede them with
~~~{.cpp}
using namespace ROOT; // RDataFrame's namespace
~~~
which is omitted for brevity. The terms "column" and "branch" are used interchangeably.
### Creating an RDataFrame
RDataFrame's constructor is where the user specifies the dataset and, optionally, a default set of columns that
operations should work with. Here are the most common methods to construct an RDataFrame object:
~~~{.cpp}
// single file -- all constructors are equivalent
TFile *f = TFile::Open("file.root");
TTree *t = f->Get<TTree>("treeName");

RDataFrame d1("treeName", "file.root");
RDataFrame d2("treeName", f); // same as TTreeReader
RDataFrame d3(*t);

// multiple files -- all constructors are equivalent
TChain chain("myTree");
chain.Add("file1.root");
chain.Add("file2.root");

RDataFrame d4("myTree", {"file1.root", "file2.root"});
std::vector<std::string> files = {"file1.root", "file2.root"};
RDataFrame d5("myTree", files);
RDataFrame d6("myTree", "file*.root"); // the glob is passed as-is to TChain's constructor
RDataFrame d7(chain);
~~~
Additionally, users can construct an RDataFrame with no data source by passing an integer number. This is the number of rows that
will be generated by this RDataFrame.
~~~{.cpp}
RDataFrame d(10); // an RDF with 10 entries (and no columns/branches, for now)
d.Foreach([] { static int i = 0; std::cout << i++ << std::endl; }); // silly example usage: count to ten
~~~
This is useful to generate simple datasets on the fly: the contents of each event can be specified with Define() (explained below). For example, we have used this method to generate [Pythia](https://pythia.org/) events and write them to disk in parallel (with the Snapshot action).
For data sources other than TTrees and TChains, RDataFrame objects are constructed using ad-hoc factory functions (see e.g. FromCSV(), FromSqlite(), FromArrow()):
~~~{.cpp}
auto df = ROOT::RDF::FromCSV("input.csv");
// use df as usual
~~~
### Filling a histogram
Let's now tackle a very common task, filling a histogram:
~~~{.cpp}
// Fill a TH1D with the "MET" branch
RDataFrame d("myTree", "file.root");
auto h = d.Histo1D("MET");
h->Draw();
~~~
The first line creates an RDataFrame associated to the TTree "myTree". This tree has a branch named "MET".

Histo1D() is an *action*; it returns a smart pointer (a ROOT::RDF::RResultPtr, to be precise) to a TH1D histogram filled
with the `MET` of all events. If the quantity stored in the column is a collection (e.g. a vector or an array), the
histogram is filled with all vector elements for each event.

You can use the objects returned by actions as if they were pointers to the desired results. There are many other
possible [actions](\ref cheatsheet), and all their results are wrapped in smart pointers; we'll see why in a minute.
### Applying a filter
Let's say we want to cut over the value of branch "MET" and count how many events pass this cut. This is one way to do it:
~~~{.cpp}
RDataFrame d("myTree", "file.root");
auto c = d.Filter("MET > 4.").Count(); // computations booked, not run
std::cout << *c << std::endl; // computations run here, upon first access to the result
~~~
The filter string (which must contain a valid C++ expression) is applied to the specified columns for each event;
the names and types of the columns are inferred automatically. The string expression is required to return a `bool`
which signals whether the event passes the filter (`true`) or not (`false`).

You can think of your data as "flowing" through the chain of calls, being transformed, filtered and finally used to
perform actions. Multiple Filter() calls can be chained one after another.

Using string filters is nice for simple things, but they are limited to specifying the equivalent of a single return
statement or the body of a lambda, so it's cumbersome to use strings with more complex filters. They also add a small
runtime overhead, as ROOT needs to just-in-time compile the string into C++ code. When more freedom is required or
runtime performance is very important, a C++ callable can be specified instead (a lambda in the following snippet,
but it can be any kind of function or even a functor class), together with a list of column names.
This snippet is analogous to the one above:
~~~{.cpp}
RDataFrame d("myTree", "file.root");
auto metCut = [](double x) { return x > 4.; }; // a C++11 lambda function checking "x > 4"
auto c = d.Filter(metCut, {"MET"}).Count();
std::cout << *c << std::endl;
~~~
An example of a more complex filter expressed as a string containing C++ code is shown below:
~~~{.cpp}
RDataFrame d("myTree", "file.root");
auto df = d.Define("p", "std::array<double, 4> p{px, py, pz}; return p;")
           .Filter("double p2 = 0.0; for (auto&& x : p) p2 += x*x; return sqrt(p2) < 10.0;");
~~~
The code snippet above defines a column `p` that is a fixed-size array using the component column names and then
filters on its magnitude by looping over its elements. It must be noted that the usage of strings to define columns
like the one above is currently the only possibility when using PyROOT. When writing expressions as such, only constants
and data coming from other columns in the dataset can be involved in the code passed as a string. Local variables and
functions cannot be used, since the interpreter will not know how to find them. When capturing local state is necessary,
it must first be declared to the ROOT C++ interpreter.
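For instance, a local threshold could be made visible to the interpreter before being used in a string expression. This is a sketch; the variable and column names are illustrative:
~~~{.cpp}
// a local variable is not visible to just-in-time-compiled expressions;
// declare the state to the ROOT C++ interpreter instead:
gInterpreter->Declare("const double gPtCut = 10.0;");
auto filtered = df.Filter("sqrt(px*px + py*py) > gPtCut");
~~~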
More information on filters and how to use them to automatically generate cutflow reports can be found [below](#Filters).
### Defining custom columns
Let's now consider the case in which "myTree" contains two quantities "x" and "y", but our analysis relies on a derived
quantity `z = sqrt(x*x + y*y)`. Using the Define() transformation, we can create a new column in the dataset containing
the variable "z":
~~~{.cpp}
RDataFrame d("myTree", "file.root");
auto sqrtSum = [](double x, double y) { return sqrt(x*x + y*y); };
auto zMean = d.Define("z", sqrtSum, {"x","y"}).Mean("z");
std::cout << *zMean << std::endl;
~~~
Define() creates the variable "z" by applying `sqrtSum` to "x" and "y". Later in the chain of calls we refer to
variables created with Define() as if they were actual tree branches/columns, but they are evaluated on demand, at most
once per event. As with filters, Define() calls can be chained with other transformations to create multiple custom
columns. Define() and Filter() transformations can be concatenated and intermixed at will.

As with filters, it is possible to specify new columns as string expressions. This snippet is analogous to the one above:
~~~{.cpp}
RDataFrame d("myTree", "file.root");
auto zMean = d.Define("z", "sqrt(x*x + y*y)").Mean("z");
std::cout << *zMean << std::endl;
~~~
Again the names of the columns used in the expression and their types are inferred automatically. The string must be
valid C++ and it is just-in-time compiled. The process has a small runtime overhead and, like with filters, it is currently the only possible approach when using PyROOT.

Previously, when showing the different ways an RDataFrame can be created, we showed a constructor that takes a
number of entries as a parameter. In the following example we show how to combine such an "empty" RDataFrame with Define()
transformations to create a dataset on the fly. We then save the generated data on disk using the Snapshot() action.
~~~{.cpp}
RDataFrame d(100); // an RDF that will generate 100 entries (currently empty)
int x = -1;
auto d_with_columns = d.Define("x", [&x] { return ++x; })
                       .Define("xx", [&x] { return x*x; });
d_with_columns.Snapshot("myNewTree", "newfile.root");
~~~
This example is slightly more advanced than what we have seen so far. First, it makes use of lambda captures (a
simple way to make external variables available inside the body of C++ lambdas) to act on the same variable `x` from
both Define() transformations. Second, we have *stored* the transformed dataframe in a variable. This is always
possible, since at each point of the transformation chain users can store the status of the dataframe for further use (more
on this [below](#callgraphs)).

You can read more about defining new columns [here](#custom-columns).

\image html RDF_Graph.png "A graph composed of two branches, one starting with a filter and one with a define. The end point of a branch is always an action."
### Running on a range of entries
It is sometimes necessary to limit the processing of the dataset to a range of entries. For this reason, RDataFrame
offers the concept of ranges as a node of the RDataFrame chain of transformations; this means that filters, columns and
actions can be concatenated to and intermixed with Range()s. If a range is specified after a filter, the range will act
exclusively on the entries passing the filter -- it will not even count the other entries! The same goes for a Range()
hanging from another Range(). Here are some commented examples:
~~~{.cpp}
RDataFrame d("myTree", "file.root");
// Here we store a dataframe that loops over only the first 30 entries in a variable
auto d30 = d.Range(30);
// This is how you pick all entries from 15 onwards
auto d15on = d.Range(15, 0);
// We can specify a stride too; in this case we pick one entry every 3 out of the first 15
auto d15each3 = d.Range(0, 15, 3);
~~~
Note that ranges are not available when multi-threading is enabled. More information on ranges is available
[below](#ranges).
### Executing multiple actions in the same event loop
As a final example let us apply two different cuts on branch "MET" and fill two different histograms with the "pt_v" of
the filtered events.
By now, you should be able to easily understand what is happening:
~~~{.cpp}
RDataFrame d("treeName", "file.root");
auto h1 = d.Filter("MET > 10").Histo1D("pt_v");
auto h2 = d.Histo1D("pt_v");
h1->Draw();       // event loop is run once here
h2->Draw("SAME"); // no need to run the event loop again
~~~
RDataFrame executes all above actions by **running the event loop only once**. The trick is that actions are not
executed at the moment they are called, but they are **lazy**, i.e. delayed until the moment one of their results is
accessed through the smart pointer. At that time, the event loop is triggered and *all* results are produced
simultaneously.

It is therefore good practice to declare all your transformations and actions *before* accessing their results, allowing
RDataFrame to run the loop once and produce all results in one go.
### Going parallel
Let's say we would like to run the previous examples in parallel on several cores, dividing events fairly between cores.
The only modification required to the snippets would be the addition of this line *before* constructing the main
dataframe object:
~~~{.cpp}
ROOT::EnableImplicitMT();
~~~
Simple as that. More details are given [below](#parallel-execution).
\anchor collections
## Working with collections and object selections

RDataFrame reads collections as the special type [ROOT::RVec](https://root.cern/doc/master/classROOT_1_1VecOps_1_1RVec.html): for example, a column containing an array of floating point numbers can be read as a ROOT::RVecF. C-style arrays (with variable or static size), STL vectors and most other collection types can be read this way.

RVec is a container similar to std::vector (and can be used just like a std::vector) but it also offers a rich interface to operate on the array elements in a vectorised fashion, similarly to Python's NumPy arrays.

For example, to fill a histogram with the "pt" of selected particles for each event, Define() can be used to create a column that contains the desired array elements as follows:

~~~{.cpp}
// h is filled with all the elements of `good_pts`, for each event
auto h = df.Define("good_pts", [](const ROOT::RVecF &pt) { return pt[pt > 0]; })
           .Histo1D("good_pts");
~~~

And in Python:

~~~{.py}
h = df.Define("good_pts", "pt[pt > 0]").Histo1D("good_pts")
~~~

Learn more at ROOT::VecOps::RVec.
\anchor transformations
## Transformations: manipulating data
\anchor Filters
### Filters
A filter is created through a call to `Filter(f, columnList)` or `Filter(filterString)`. In the first overload, `f` can
be a function, a lambda expression, a functor class, or any other callable object. It must return a `bool` signalling
whether the event has passed the selection (`true`) or not (`false`). It should perform "read-only" operations on the
columns, and should not have side-effects (e.g. modification of an external or static variable) to ensure correctness
when implicit multi-threading is active. The second overload takes a string with a valid C++ expression in which column
names are used as variable names (e.g. `Filter("x[0] + x[1] > 0")`). This is a convenience feature that comes with a
certain runtime overhead: C++ code has to be generated on the fly from this expression before using it in the event
loop. See the paragraph about "Just-in-time compilation" below for more information.

RDataFrame only evaluates filters when necessary: if multiple filters are chained one after another, they are executed
in order and the first one returning `false` causes the event to be discarded and triggers the processing of the next
entry. If multiple actions or transformations depend on the same filter, that filter is not executed multiple times for
each entry: after the first access it simply serves a cached result.
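This caching can be relied upon when several results are booked on the same filter node, as in this sketch (the column names are illustrative):
~~~{.cpp}
auto sel  = df.Filter("MET > 4."); // evaluated at most once per entry
auto hPt  = sel.Histo1D("pt");     // both histograms share the same filter node,
auto hEta = sel.Histo1D("eta");    // so the selection is not recomputed
~~~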
\anchor named-filters-and-cutflow-reports
#### Named filters and cutflow reports
An optional string parameter `name` can be passed to the Filter() method to create a **named filter**. Named filters
work as usual, but also keep track of how many entries they accept and reject.

Statistics are retrieved through a call to the Report() method:

- when Report() is called on the main RDataFrame object, it returns a ROOT::RDF::RResultPtr<RCutFlowReport> relative to all
named filters declared up to that point
- when called on a specific node (e.g. the result of a Define() or Filter()), it returns a ROOT::RDF::RResultPtr<RCutFlowReport>
relative to all named filters in the section of the chain between the main RDataFrame and that node (included).

Stats are stored in the same order as named filters have been added to the graph, and *refer to the latest event loop*
that has been run using the relevant RDataFrame.
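A minimal cutflow sketch, with illustrative column and filter names:
~~~{.cpp}
ROOT::RDataFrame df("myTree", "file.root");
auto filtered = df.Filter("MET > 4.", "METCut")     // a named filter
                  .Filter("nJets >= 2", "TwoJets"); // another named filter
auto report = filtered.Report(); // lazy: booked now, filled during the event loop
filtered.Count().GetValue();     // trigger the event loop
report->Print();                 // prints pass/reject statistics for each named filter
~~~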
\anchor ranges
### Ranges
When RDataFrame is not being used in a multi-thread environment (i.e. no call to EnableImplicitMT() was made),
Range() transformations are available. These act very much like filters but instead of basing their decision on
a filter expression, they rely on `begin`, `end` and `stride` parameters.

- `begin`: initial entry number considered for this range.
- `end`: final entry number (excluded) considered for this range. 0 means that the range goes until the end of the dataset.
- `stride`: process one entry of the [begin, end) range every `stride` entries. Must be strictly greater than 0.

The actual number of entries processed downstream of a Range() node will be `(end - begin)/stride` (or fewer, if fewer
entries than that are available).

Note that ranges act "locally", not based on the global entry count: `Range(10,50)` means "skip the first 10 entries
*that reach this node*, let the next 40 entries pass, then stop processing". If a range node hangs from a filter node,
and the range has a `begin` parameter of 10, that means the range will skip the first 10 entries *that pass the
preceding filter*.

Ranges allow "early quitting": if all branches of execution of a functional graph reached their `end` value of
processed entries, the event loop is immediately interrupted. This is useful for debugging and quick data explorations.
548\anchor custom-columns
549### Custom columns
550Custom columns are created by invoking `Define(name, f, columnList)`. As usual, `f` can be any callable object
551(function, lambda expression, functor class...); it takes the values of the columns listed in `columnList` (a list of
552strings) as parameters, in the same order as they are listed in `columnList`. `f` must return the value that will be
553assigned to the temporary column.
555A new variable is created called `name`, accessible as if it was contained in the dataset from subsequent
558Use cases include:
559- caching the results of complex calculations for easy and efficient multiple access
560- extraction of quantities of interest from complex objects
561- branch aliasing, i.e. changing the name of a branch
563An exception is thrown if the `name` of the new column/branch is already in use for another branch in the TTree.
565It is also possible to specify the quantity to be stored in the new temporary column as a C++ expression with the method
566`Define(name, expression)`. For example this invocation
569df.Define("pt", "sqrt(px*px + py*py)");
will create a new column called "pt" whose value is calculated from the columns px and py. The system
builds a just-in-time compiled function from the expression, after deducing the list of necessary branches
from the names of the variables used.
576#### Custom columns as function of slot and entry number
Custom columns can also be defined as functions of the processing slot and entry number. The methods that can
be invoked are:
580- `DefineSlot(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, T1, T2, ...)`: the
581first parameter is the slot number which ranges from 0 to ROOT::GetThreadPoolSize() - 1.
582- `DefineSlotEntry(name, f, columnList)`. In this case the callable f has this signature `R(unsigned int, ULong64_t,
583T1, T2, ...)`: the first parameter is the slot number while the second one the number of the entry being processed.
585\anchor actions
586## Actions: getting results
587### Instant and lazy actions
588Actions can be **instant** or **lazy**. Instant actions are executed as soon as they are called, while lazy actions are
executed whenever the object they return is accessed for the first time. As a rule of thumb, actions with a return value
are lazy; the others are instant.
592### Return type of a lazy action
594When a lazy action is called, it returns a \link ROOT::RDF::RResultPtr ROOT::RDF::RResultPtr<T>\endlink, where T is the
595type of the result of the action. The final result will be stored in the `RResultPtr` and can be retrieved by
596dereferencing it or via its `GetValue` method.
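The laziness can be pictured with a toy stand-in for `RResultPtr` (an illustrative sketch, not the actual ROOT implementation): the booked computation runs only on first access, and only once.

```python
class LazyResult:
    """Toy stand-in for RResultPtr: runs the 'event loop' on first access."""
    def __init__(self, compute):
        self._compute = compute      # the booked action
        self._value = None
        self._ran = False

    def GetValue(self):
        if not self._ran:            # the event loop is triggered here, once
            self._value = self._compute()
            self._ran = True
        return self._value

loop_runs = []
result = LazyResult(lambda: loop_runs.append("loop") or sum(range(5)))
assert loop_runs == []               # booking alone does not run the loop
assert result.GetValue() == 10       # first access runs it
assert result.GetValue() == 10 and loop_runs == ["loop"]  # it ran only once
```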
598### Actions that return collections
600If the type of the return value of an action is a collection, e.g. `std::vector<int>`, you can iterate its elements
601directly through the wrapping `RResultPtr`:
ROOT::RDataFrame df{5};
auto df1 = df.Define("x", []{ return 42; });
for (const auto &el: df1.Take<int>("x")){
   std::cout << "Element: " << el << "\n";
}

or in Python:

df = ROOT.RDataFrame(5).Define("x", "42")
for el in df.Take[int]("x"):
    print(f"Element: {el}")
617\anchor distrdf
618## Distributed execution
RDataFrame applications can be executed in parallel through distributed computing frameworks on a set of remote machines
thanks to the Python package `ROOT.RDF.Experimental.Distributed`. This experimental, **Python-only** package makes it possible to scale the
optimized performance RDataFrame can achieve on a single machine to multiple nodes at the same time. It is designed so
that different backends can be easily plugged in, currently supporting [Apache Spark](http://spark.apache.org/) and
[Dask](https://dask.org/). To make use of distributed RDataFrame, you only need to replace `ROOT.RDataFrame` with
the backend-specific `RDataFrame` of your choice, for example:
628import ROOT
630# Point RDataFrame calls to the Spark specific RDataFrame
631RDataFrame = ROOT.RDF.Experimental.Distributed.Spark.RDataFrame
633# It still accepts the same constructor arguments as traditional RDataFrame
634df = RDataFrame("mytree", "myfile.root")
636# Continue the application with the traditional RDataFrame API
637sum = df.Filter("x > 10").Sum("y")
638h = df.Histo1D(("name", "title", 10, 0, 10), "x")
The main goal of this package is to support running any RDataFrame application in a distributed fashion. Nonetheless, not all
parts of the RDataFrame API currently work with this package. The subset that is currently available is:
646- AsNumpy
647- Count
648- Define
649- DefinePerSample
650- Filter
651- Graph
652- Histo[1,2,3]D
653- HistoND
654- Max
655- Mean
656- Min
657- Profile[1,2,3]D
658- Redefine
659- Snapshot
660- Stats
661- StdDev
662- Sum
663- Systematic variations: Vary and [VariationsFor](\ref ROOT::RDF::Experimental::VariationsFor).
664- Parallel submission of distributed graphs: [RunGraphs](\ref ROOT::RDF::RunGraphs).
665- Information about the dataframe: GetColumnNames.
667with support for more operations coming in the future. Data sources other than TTree and TChain (e.g. CSV, RNTuple) are
668currently not supported.
670\note The distributed RDataFrame module requires at least Python version 3.8.
672### Connecting to a Spark cluster
674In order to distribute the RDataFrame workload, you can connect to a Spark cluster you have access to through the
675official [Spark API](https://spark.apache.org/docs/latest/rdd-programming-guide.html#initializing-spark), then hook the
676connection instance to the distributed `RDataFrame` object like so:
import ROOT
from pyspark import SparkConf, SparkContext

# Create a SparkContext object with the right configuration for your Spark cluster
conf = SparkConf().setAppName(appName).setMaster(master)
sc = SparkContext(conf=conf)
686# Point RDataFrame calls to the Spark specific RDataFrame
687RDataFrame = ROOT.RDF.Experimental.Distributed.Spark.RDataFrame
689# The Spark RDataFrame constructor accepts an optional "sparkcontext" parameter
690# and it will distribute the application to the connected cluster
691df = RDataFrame("mytree", "myfile.root", sparkcontext = sc)
694If an instance of [SparkContext](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.SparkContext.html)
695is not provided, the default behaviour is to create one in the background for you.
697### Connecting to a Dask cluster
699Similarly, you can connect to a Dask cluster by creating your own connection object which internally operates with one
700of the cluster schedulers supported by Dask (more information in the
701[Dask distributed docs](http://distributed.dask.org/en/stable/)):
704import ROOT
705from dask.distributed import Client
707# Point RDataFrame calls to the Dask specific RDataFrame
708RDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame
# In a Python script the Dask client needs to be initialized in a context
# Jupyter notebooks / Python sessions don't need this
712if __name__ == "__main__":
713 # With an already setup cluster that exposes a Dask scheduler endpoint
714 client = Client("dask_scheduler.domain.com:8786")
716 # The Dask RDataFrame constructor accepts the Dask Client object as an optional argument
717 df = RDataFrame("mytree","myfile.root", daskclient=client)
718 # Proceed as usual
719 df.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
722If an instance of [distributed.Client](http://distributed.dask.org/en/stable/api.html#distributed.Client) is not
723provided to the RDataFrame object, it will be created for you and it will run the computations in the local machine
724using all cores available.
726### Choosing the number of distributed tasks
728A distributed RDataFrame has internal logic to define in how many chunks the input dataset will be split before sending
729tasks to the distributed backend. Each task reads and processes one of said chunks. The logic is backend-dependent, but
730generically tries to infer how many cores are available in the cluster through the connection object. The number of
731tasks will be equal to the inferred number of cores. There are cases where the connection object of the chosen backend
732doesn't have information about the actual resources of the cluster. An example of this is when using Dask to connect to
733a batch system. The client object created at the beginning of the application does not automatically know how many cores
734will be available during distributed execution, since the jobs are submitted to the batch system after the creation of
the connection. In such cases, the default is to process the whole dataset in 2 tasks.
The number of tasks submitted for distributed execution can also be set programmatically, by providing the optional
keyword argument `npartitions` when creating the RDataFrame object. This parameter is accepted irrespective of the
backend used:
742import ROOT
744# Define correct imports and access the distributed RDataFrame appropriate for the
745# backend used in the analysis
746RDataFrame = ROOT.RDF.Experimental.Distributed.[BACKEND].RDataFrame
748if __name__ == "__main__":
749 # The `npartitions` optional argument tells the RDataFrame how many tasks are desired
750 df = RDataFrame("mytree","myfile.root", npartitions=NPARTITIONS)
751 # Proceed as usual
752 df.Define("x","someoperation").Histo1D(("name", "title", 10, 0, 10), "x")
755Note that when processing a TTree or TChain dataset, the `npartitions` value should not exceed the number of clusters in
756the dataset. The number of clusters in a TTree can be retrieved by typing `rootls -lt myfile.root` at a command line.
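The interplay between `npartitions` and TTree clusters can be pictured with a small sketch (`split_clusters` is a hypothetical helper, not the actual backend logic): a cluster is never split across tasks, which is why asking for more partitions than there are clusters cannot help.

```python
def split_clusters(cluster_boundaries, npartitions):
    """Group TTree clusters into tasks; a cluster is never split across tasks."""
    nclusters = len(cluster_boundaries)
    npartitions = min(npartitions, nclusters)  # more tasks than clusters is pointless
    out, start = [], 0
    for i in range(npartitions):
        # distribute clusters over tasks as evenly as possible
        count = nclusters // npartitions + (1 if i < nclusters % npartitions else 0)
        out.append(cluster_boundaries[start:start + count])
        start += count
    return out

# four clusters, expressed as (first entry, one-past-last entry) pairs
clusters = [(0, 100), (100, 250), (250, 300), (300, 420)]
tasks = split_clusters(clusters, 3)
assert len(tasks) == 3 and sum(len(t) for t in tasks) == 4
# requesting 10 partitions is capped at 4: one task per cluster
assert split_clusters(clusters, 10) == [[c] for c in clusters]
```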
758### Distributed Snapshot
760The Snapshot operation behaves slightly differently when executed distributedly. First off, it requires the path
761supplied to the Snapshot call to be accessible from any worker of the cluster and from the client machine (in general
it should be provided as an absolute path). Another important difference is that `n` separate files will be produced,
where `n` is the number of dataset partitions. As with local RDataFrame, the result of a Snapshot on a distributed
RDataFrame is another distributed RDataFrame, on which we can define a new computation graph and run more distributed
computations.
767### Distributed RunGraphs
769Submitting multiple distributed RDataFrame executions is supported through the RunGraphs function. Similarly to its
770local counterpart, the function expects an iterable of objects representing an RDataFrame action. Each action will be
771triggered concurrently to send multiple computation graphs to a distributed cluster at the same time:
774import ROOT
775RDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame
776RunGraphs = ROOT.RDF.Experimental.Distributed.RunGraphs
# Create 4 different dataframes and book a histogram on each one
histoproxies = [
   RDataFrame(100)
      .Define("x", "rdfentry_")
      .Histo1D(("name", "title", 10, 0, 100), "x")
   for _ in range(4)
]

# Execute the 4 computation graphs
RunGraphs(histoproxies)
# Retrieve all the histograms in one go
histos = [histoproxy.GetValue() for histoproxy in histoproxies]
Every distributed backend supports this feature, and graphs belonging to different backends can still be triggered with
a single call to RunGraphs (e.g. it is possible to send a Spark job and a Dask job at the same time).
795### Histogram models in distributed mode
797When calling a Histo*D operation in distributed mode, remember to pass to the function the model of the histogram to be
798computed, e.g. the axis range and the number of bins:
801import ROOT
802RDataFrame = ROOT.RDF.Experimental.Distributed.[BACKEND].RDataFrame
804if __name__ == "__main__":
805 df = RDataFrame("mytree","myfile.root").Define("x","someoperation")
806 # The model can be passed either as a tuple with the arguments in the correct order
807 df.Histo1D(("name", "title", 10, 0, 10), "x")
808 # Or by creating the specific struct
809 model = ROOT.RDF.TH1DModel("name", "title", 10, 0, 10)
810 df.Histo1D(model, "x")
813Without this, two partial histograms resulting from two distributed tasks would have incompatible binning, thus leading
814to errors when merging them. Failing to pass a histogram model will raise an error on the client side, before starting
815the distributed execution.
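Why a shared model matters can be seen with a minimal merge sketch (illustrative only, not ROOT's merging code): two partial histograms can only be summed bin by bin if they were booked with identical binning.

```python
def merge_histograms(h1, h2):
    """Merge two partial fixed-bin histograms, as distributed tasks must do.

    Each histogram is (nbins, xmin, xmax, counts); merging is only well
    defined when the models (nbins, xmin, xmax) match exactly.
    """
    if h1[:3] != h2[:3]:
        raise ValueError("incompatible binning: histograms share no common model")
    nbins, xmin, xmax = h1[:3]
    return (nbins, xmin, xmax, [a + b for a, b in zip(h1[3], h2[3])])

task1 = (4, 0.0, 10.0, [1, 0, 2, 1])   # partial result from one task
task2 = (4, 0.0, 10.0, [0, 3, 1, 0])   # partial result from another
assert merge_histograms(task1, task2)[3] == [1, 3, 3, 1]

try:
    merge_histograms(task1, (8, 0.0, 10.0, [0] * 8))  # different model
except ValueError:
    pass
else:
    raise AssertionError("merging incompatible binnings must fail")
```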
### Live visualization in distributed mode with Dask
819The live visualization feature allows real-time data representation of plots generated during the execution
820of a distributed RDataFrame application.
821It enables visualizing intermediate results as they are computed across multiple nodes of a Dask cluster
822by creating a canvas and continuously updating it as partial results become available.
824The LiveVisualize() function can be imported from the Python package **ROOT.RDF.Experimental.Distributed**:
827import ROOT
829LiveVisualize = ROOT.RDF.Experimental.Distributed.LiveVisualize
The function takes drawable objects (e.g. histograms) and optional callback functions as arguments; it accepts 4 different input formats:
834- Passing a list or tuple of drawables:
835You can pass a list or tuple containing the plots you want to visualize. For example:
838LiveVisualize([h_gaus, h_exp, h_random])
841- Passing a list or tuple of drawables with a global callback function:
842You can also include a global callback function that will be applied to all plots. For example:
845def set_fill_color(hist):
846 hist.SetFillColor(ROOT.kBlue)
848LiveVisualize([h_gaus, h_exp, h_random], set_fill_color)
851- Passing a Dictionary of drawables and callback functions:
852For more control, you can create a dictionary where keys are plots and values are corresponding (optional) callback functions. For example:
plot_callback_dict = {
    graph: set_marker,
    h_exp: fit_exp,
    tprofile_2d: None
}

LiveVisualize(plot_callback_dict)
864- Passing a Dictionary of drawables and callback functions with a global callback function:
865You can also combine a dictionary of plots and callbacks with a global callback function:
868LiveVisualize(plot_callback_dict, write_to_tfile)
871\note The allowed operations to pass to LiveVisualize are:
872 - Histo1D(), Histo2D(), Histo3D()
873 - Graph()
874 - Profile1D(), Profile2D()
876\warning The Live Visualization feature is only supported for the Dask backend.
878\anchor parallel-execution
879## Performance tips and parallel execution
880As pointed out before in this document, RDataFrame can transparently perform multi-threaded event loops to speed up
881the execution of its actions. Users have to call ROOT::EnableImplicitMT() *before* constructing the RDataFrame
882object to indicate that it should take advantage of a pool of worker threads. **Each worker thread processes a distinct
883subset of entries**, and their partial results are merged before returning the final values to the user.
884There are no guarantees on the order in which threads will process the batches of entries.
885In particular, note that this means that, for multi-thread event loops, there is no
886guarantee on the order in which Snapshot() will _write_ entries: they could be scrambled with respect to the input dataset. The values of the special `rdfentry_` column will also not correspond to the entry numbers in the input dataset (e.g. TChain) in multi-thread runs.
888\warning By default, RDataFrame will use as many threads as the hardware supports, using up **all** the resources on
a machine. This might be undesirable on shared computing resources such as a batch cluster. Therefore, when running on shared computing resources, use

ROOT::EnableImplicitMT(i)

replacing `i` with the number of CPUs/slots that were allocated for this job.
895### Thread-safety of user-defined expressions
896RDataFrame operations such as Histo1D() or Snapshot() are guaranteed to work correctly in multi-thread event loops.
897User-defined expressions, such as strings or lambdas passed to Filter(), Define(), Foreach(), Reduce() or Aggregate()
must be thread-safe, i.e. it must be possible to call them concurrently from different threads.
900Note that simple Filter() and Define() transformations will inherently satisfy this requirement: Filter() / Define()
901expressions will often be *pure* in the functional programming sense (no side-effects, no dependency on external state),
902which eliminates all risks of race conditions.
904In order to facilitate writing of thread-safe operations, some RDataFrame features such as Foreach(), Define() or \link ROOT::RDF::RResultPtr::OnPartialResult OnPartialResult()\endlink
905offer thread-aware counterparts (ForeachSlot(), DefineSlot(), \link ROOT::RDF::RResultPtr::OnPartialResultSlot OnPartialResultSlot()\endlink): their only difference is that they
906will pass an extra `slot` argument (an unsigned integer) to the user-defined expression. When calling user-defined code
907concurrently, RDataFrame guarantees that different threads will employ different values of the `slot` parameter,
908where `slot` will be a number between 0 and `GetNSlots() - 1`.
909In other words, within a slot, computation runs sequentially and events are processed sequentially.
910Note that the same slot might be associated to different threads over the course of a single event loop, but two threads
911will never receive the same slot at the same time.
912This extra parameter might facilitate writing safe parallel code by having each thread write/modify a different
913processing slot, e.g. a different element of a list. See [here](#generic-actions) for an example usage of ForeachSlot().
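The slot mechanics can be mimicked without ROOT (an illustrative sketch, not RDataFrame's scheduler): give each slot its own accumulator, let every thread write only to its own slot, and merge at the end. In CPython the GIL would serialize this anyway, but the pattern is the one that makes ForeachSlot-style code safe.

```python
import threading

nslots = 4
partial_sums = [0] * nslots          # one accumulator per slot, no sharing

def foreach_slot(slot, values):
    """Each thread owns one slot index, so no locking is needed."""
    for v in values:
        partial_sums[slot] += v

data = list(range(100))
chunks = [data[i::nslots] for i in range(nslots)]  # one chunk per slot
threads = [threading.Thread(target=foreach_slot, args=(s, chunks[s]))
           for s in range(nslots)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# the partial per-slot results are merged once the "event loop" is over
assert sum(partial_sums) == sum(range(100))  # 4950
```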
915### Parallel execution of multiple RDataFrame event loops
916A complex analysis may require multiple separate RDataFrame computation graphs to produce all desired results. This poses the challenge that the
917event loops of each computation graph can be parallelized, but the different loops run sequentially, one after the other.
918On many-core architectures it might be desirable to run different event loops concurrently to improve resource usage.
919ROOT::RDF::RunGraphs() allows running multiple RDataFrame event loops concurrently:
922ROOT::RDataFrame df1("tree1", "f1.root");
923ROOT::RDataFrame df2("tree2", "f2.root");
924auto histo1 = df1.Histo1D("x");
925auto histo2 = df2.Histo1D("y");
927// just accessing result pointers, the event loops of separate RDataFrames run one after the other
928histo1->Draw(); // runs first multi-thread event loop
929histo2->Draw(); // runs second multi-thread event loop
931// alternatively, with ROOT::RDF::RunGraphs, event loops for separate computation graphs can run concurrently
932ROOT::RDF::RunGraphs({histo1, histo2});
933histo1->Draw(); // results can then be used as usual
936### Performance considerations
938To obtain the maximum performance out of RDataFrame, make sure to avoid just-in-time compiled versions of transformations and actions if at all possible.
939For instance, `Filter("x > 0")` requires just-in-time compilation of the corresponding C++ logic, while the equivalent `Filter([](float x) { return x > 0.; }, {"x"})` does not.
940Similarly, `Histo1D("x")` requires just-in-time compilation after the type of `x` is retrieved from the dataset, while `Histo1D<float>("x")` does not; the latter spelling
941should be preferred for performance-critical applications.
943Python applications cannot easily specify template parameters or pass C++ callables to RDataFrame.
944See [Efficient analysis in Python](#python) for possible ways to speed up hot paths in this case.
946Just-in-time compilation happens once, right before starting an event loop. To reduce the runtime cost of this step, make sure to book all operations *for all RDataFrame computation graphs*
947before the first event loop is triggered: just-in-time compilation will happen once for all code required to be generated up to that point, also across different computation graphs.
949Also make sure not to count the just-in-time compilation time (which happens once before the event loop and does not depend on the size of the dataset) as part of the event loop runtime (which scales with the size of the dataset). RDataFrame has an experimental logging feature that simplifies measuring the time spent in just-in-time compilation and in the event loop (as well as providing some more interesting information). See [Activating RDataFrame execution logs](\ref rdf-logging).
951### Memory usage
953There are two reasons why RDataFrame may consume more memory than expected. Firstly, each result is duplicated for each worker thread, which e.g. in case of many (possibly multi-dimensional) histograms with fine binning can result in visible memory consumption during the event loop. The thread-local copies of the results are destroyed when the final result is produced. Reducing the number of threads or using coarser binning will reduce the memory usage.
955Secondly, just-in-time compilation of string expressions or non-templated actions (see the previous paragraph) causes Cling, ROOT's C++ interpreter, to allocate some memory for the generated code that is only released at the end of the application. This commonly results in memory usage creep in long-running applications that create many RDataFrames one after the other. Possible mitigations include creating and running each RDataFrame event loop in a sub-process, or booking all operations for all different RDataFrame computation graphs before the first event loop is triggered, so that the interpreter is invoked only once for all computation graphs:
958// assuming df1 and df2 are separate computation graphs, do:
959auto h1 = df1.Histo1D("x");
960auto h2 = df2.Histo1D("y");
961h1->Draw(); // we just-in-time compile everything needed by df1 and df2 here
964// do not:
965auto h1 = df1.Histo1D("x");
966h1->Draw(); // we just-in-time compile here
967auto h2 = df2.Histo1D("y");
968h2->Draw("SAME"); // we just-in-time compile again here, as the second Histo1D call is new
971\anchor more-features
972## More features
973Here is a list of the most important features that have been omitted in the "Crash course" for brevity.
974You don't need to read all these to start using RDataFrame, but they are useful to save typing time and runtime.
976\anchor systematics
977### Systematic variations
979Starting from ROOT v6.26, RDataFrame provides a flexible syntax to define systematic variations.
980This is done in two steps: a) register variations for one or more existing columns using Vary() and b) extract variations
981of normal RDataFrame results using \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()". In between these steps, no other change
982to the analysis code is required: the presence of systematic variations for certain columns is automatically propagated
983through filters, defines and actions, and RDataFrame will take these dependencies into account when producing varied
results. \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" is included in the header `ROOT/RDFHelpers.hxx`. Compiled C++ programs must include this header
explicitly; this is not required for ROOT macros.
987An example usage of Vary() and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in C++:
990auto nominal_hx =
991 df.Vary("pt", "ROOT::RVecD{pt*0.9f, pt*1.1f}", {"down", "up"})
992 .Filter("pt > pt_cut")
993 .Define("x", someFunc, {"pt"})
994 .Histo1D<float>("x");
996// request the generation of varied results from the nominal_hx
997ROOT::RDF::Experimental::RResultMap<TH1D> hx = ROOT::RDF::Experimental::VariationsFor(nominal_hx);
999// the event loop runs here, upon first access to any of the results or varied results:
1000hx["nominal"].Draw(); // same effect as nominal_hx->Draw()
1005A list of variation "tags" is passed as the last argument to Vary(). The tags give names to the varied values that are returned
1006as elements of an RVec of the appropriate C++ type. The number of variation tags must correspond to the number of elements of
1007this RVec (2 in the example above: the first element will correspond to the tag "down", the second
1008to the tag "up"). The _full_ variation name will be composed of the varied column name and the variation tags (e.g.
1009"pt:down", "pt:up" in this example). Python usage looks similar.
1011Note how we use the "pt" column as usual in the Filter() and Define() calls and we simply use "x" as the value to fill
1012the resulting histogram. To produce the varied results, RDataFrame will automatically execute the Filter and Define
1013calls for each variation and fill the histogram with values and cuts that depend on the variation.
1015There is no limitation to the complexity of a Vary() expression. Just like for the Define() and Filter() calls, users are
1016not limited to string expressions but they can also pass any valid C++ callable, including lambda functions and
complex functors. The callable can be applied to zero or more existing columns and it will always receive their
_nominal_ values as input.
1020#### Varying multiple columns in lockstep
1022In the following Python snippet we use the Vary() signature that allows varying multiple columns simultaneously or
1023"in lockstep":
1026df.Vary(["pt", "eta"],
1027 "RVec<RVecF>{{pt*0.9, pt*1.1}, {eta*0.9, eta*1.1}}",
1028 variationTags=["down", "up"],
1029 variationName="ptAndEta")
1032The expression returns an RVec of two RVecs: each inner vector contains the varied values for one column. The
1033inner vectors follow the same ordering as the column names that are passed as the first argument. Besides the variation tags, in
1034this case we also have to explicitly pass the variation name (here: "ptAndEta") as the default column name does not exist.
1036The above call will produce variations "ptAndEta:down" and "ptAndEta:up".
1038#### Combining multiple variations
1040Even if a result depends on multiple variations, only one variation is applied at a time, i.e. there will be no result produced
1041by applying multiple systematic variations at the same time.
1042For example, in the following example snippet, the RResultMap instance `all_h` will contain keys "nominal", "pt:down",
1043"pt:up", "eta:0", "eta:1", but no "pt:up&&eta:0" or similar:
1046auto df = _df.Vary("pt",
1047 "ROOT::RVecD{pt*0.9, pt*1.1}",
1048 {"down", "up"})
1049 .Vary("eta",
1050 [](float eta) { return RVecF{eta*0.9f, eta*1.1f}; },
1051 {"eta"},
1052 2);
1054auto nom_h = df.Histo2D(histoModel, "pt", "eta");
1055auto all_hs = VariationsFor(nom_h);
1056all_hs.GetKeys(); // returns {"nominal", "pt:down", "pt:up", "eta:0", "eta:1"}
1059Note how we passed the integer `2` instead of a list of variation tags to the second Vary() invocation: this is a
1060shorthand that automatically generates tags 0 to N-1 (in this case 0 and 1).
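The naming scheme can be reproduced in a few lines (`variation_keys` is a hypothetical helper for illustration): keys combine the varied name with each tag, and an integer tag count is shorthand for tags 0 to N-1.

```python
def variation_keys(name, tags):
    """Build RResultMap-style keys: 'nominal' plus '<name>:<tag>' per variation."""
    if isinstance(tags, int):            # integer shorthand: tags 0 .. N-1
        tags = [str(i) for i in range(tags)]
    return ["nominal"] + [f"{name}:{t}" for t in tags]

assert variation_keys("pt", ["down", "up"]) == ["nominal", "pt:down", "pt:up"]
assert variation_keys("eta", 2) == ["nominal", "eta:0", "eta:1"]
```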
1062\note Currently, VariationsFor() and RResultMap are in the `ROOT::RDF::Experimental` namespace, to indicate that these
1063 interfaces might still evolve and improve based on user feedback. We expect that some aspects of the related
1064 programming model will be streamlined in future versions.
1066\note Currently, the results of a Snapshot(), Report() or Display() call cannot be varied (i.e. it is not possible to
    call \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" on them). These limitations will be lifted in future releases.
1069See the Vary() method for more information and [this tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8C.html)
1070for an example usage of Vary and \ref ROOT::RDF::Experimental::VariationsFor "VariationsFor()" in the analysis.
1072\anchor rnode
1073### RDataFrame objects as function arguments and return values
1074RDataFrame variables/nodes are relatively cheap to copy and it's possible to both pass them to (or move them into)
1075functions and to return them from functions. However, in general each dataframe node will have a different C++ type,
1076which includes all available compile-time information about what that node does. One way to cope with this complication
1077is to use template functions and/or C++14 auto return types:
template <typename RDF>
auto ApplySomeFilters(RDF df)
{
   return df.Filter("x > 0").Filter([](int y) { return y < 0; }, {"y"});
}
1086A possibly simpler, C++11-compatible alternative is to take advantage of the fact that any dataframe node can be
1087converted (implicitly or via an explicit cast) to the common type ROOT::RDF::RNode:
// a function that conditionally adds a Range to an RDataFrame node.
RNode MaybeAddRange(RNode df, bool mustAddRange)
{
   return mustAddRange ? df.Range(1) : df;
}
1094// use as :
1095ROOT::RDataFrame df(10);
1096auto maybeRangedDF = MaybeAddRange(df, true);
1099The conversion to ROOT::RDF::RNode is cheap, but it will introduce an extra virtual call during the RDataFrame event
1100loop (in most cases, the resulting performance impact should be negligible). Python users can perform the conversion with the helper function `ROOT.RDF.AsRNode`.
1102\anchor RDFCollections
1103### Storing RDataFrame objects in collections
1105ROOT::RDF::RNode also makes it simple to store RDataFrame nodes in collections, e.g. a `std::vector<RNode>` or a `std::map<std::string, RNode>`:
std::vector<ROOT::RDF::RNode> dfs;
dfs.emplace_back(ROOT::RDataFrame(10)); // implicit conversion to RNode
dfs.emplace_back(dfs[0].Define("x", "42.f"));
1113\anchor callbacks
1114### Executing callbacks every N events
1115It's possible to schedule execution of arbitrary functions (callbacks) during the event loop.
1116Callbacks can be used e.g. to inspect partial results of the analysis while the event loop is running,
1117drawing a partially-filled histogram every time a certain number of new entries is processed, or
1118displaying a progress bar while the event loop runs.
1120For example one can draw an up-to-date version of a result histogram every 100 entries like this:
auto h = df.Histo1D("x");
TCanvas c("c","x hist");
h.OnPartialResult(100, [&c](TH1D &h_) { c.cd(); h_.Draw(); c.Update(); });
h->Draw(); // the event loop runs here; this final `Draw` is executed after the event loop has finished
Callbacks are registered to a ROOT::RDF::RResultPtr and must be callables that take a reference to the result type as argument
1130and return nothing. RDataFrame will invoke registered callbacks passing partial action results as arguments to them
1131(e.g. a histogram filled with a part of the selected events).
1133Read more on ROOT::RDF::RResultPtr::OnPartialResult() and ROOT::RDF::RResultPtr::OnPartialResultSlot().
1135\anchor default-branches
1136### Default column lists
1137When constructing an RDataFrame object, it is possible to specify a **default column list** for your analysis, in the
1138usual form of a list of strings representing branch/column names. The default column list will be used as a fallback
1139whenever a list specific to the transformation/action is not present. RDataFrame will take as many of these columns as
1140needed, ignoring trailing extra names if present.
1142// use "b1" and "b2" as default columns
1143RDataFrame d1("myTree", "file.root", {"b1","b2"});
1144auto h = d1.Filter([](int b1, int b2) { return b1 > b2; }) // will act on "b1" and "b2"
1145 .Histo1D(); // will act on "b1"
1147// just one default column this time
1148RDataFrame d2("myTree", "file.root", {"b1"});
1149auto min = d2.Filter([](double b2) { return b2 > 0; }, {"b2"}) // we can still specify non-default column lists
1150 .Min(); // returns the minimum value of "b1" for the filtered entries
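The fallback logic can be sketched in a few lines (`resolve_columns` is a hypothetical helper for illustration, not RDataFrame's internals): an explicit column list wins, otherwise the first `n` default columns are taken and trailing extras are ignored.

```python
def resolve_columns(default_columns, explicit_columns, n_needed):
    """Pick the columns an operation will read: the explicit list wins;
    otherwise take the first n_needed defaults, ignoring trailing extras."""
    cols = explicit_columns if explicit_columns else default_columns
    if len(cols) < n_needed:
        raise ValueError("not enough columns to satisfy the operation")
    return cols[:n_needed]

defaults = ["b1", "b2"]
assert resolve_columns(defaults, [], 2) == ["b1", "b2"]   # Filter acts on b1 and b2
assert resolve_columns(defaults, [], 1) == ["b1"]         # Histo1D takes only b1
assert resolve_columns(defaults, ["b2"], 1) == ["b2"]     # explicit list wins
```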
1153\anchor helper-cols
1154### Special helper columns: rdfentry_ and rdfslot_
1155Every instance of RDataFrame is created with two special columns called `rdfentry_` and `rdfslot_`. The `rdfentry_`
1156column is of type `ULong64_t` and it holds the current entry number while `rdfslot_` is an `unsigned int`
1157holding the index of the current data processing slot.
1158For backwards compatibility reasons, the names `tdfentry_` and `tdfslot_` are also accepted.
1159These columns are ignored by operations such as [Cache](classROOT_1_1RDF_1_1RInterface.html#aaaa0a7bb8eb21315d8daa08c3e25f6c9)
1160or [Snapshot](classROOT_1_1RDF_1_1RInterface.html#a233b7723e498967f4340705d2c4db7f8).
1162\warning Note that in multi-thread event loops the values of `rdfentry_` _do not_ correspond to what would be the entry numbers
1163of a TChain constructed over the same set of ROOT files, as the entries are processed in an unspecified order.
1165\anchor jitting
1166### Just-in-time compilation: column type inference and explicit declaration of column types
1167C++ is a statically typed language: all types must be known at compile-time. This includes the types of the TTree
1168branches we want to work on. For filters, defined columns and some of the actions, **column types are deduced from the
1169signature** of the relevant filter function/temporary column expression/action function:
1171// here b1 is deduced to be `int` and b2 to be `double`
1172df.Filter([](int x, double y) { return x > 0 && y < 0.; }, {"b1", "b2"});
1174If we specify an incorrect type for one of the columns, an exception with an informative message will be thrown at
1175runtime, when the column value is actually read from the dataset: RDataFrame detects type mismatches. The same would
1176happen if we swapped the order of "b1" and "b2" in the column list passed to Filter().
1178Certain actions, on the other hand, do not take a function as argument (e.g. Histo1D()), so we cannot deduce the type of
1179the column at compile-time. In this case **RDataFrame infers the type of the column** from the TTree itself. This
1180is why we never needed to specify the column types for any of the actions in the snippets above.
1182When the column type is not a common one such as `int`, `double`, `char` or `float` it is nonetheless good practice to
1183specify it as a template parameter to the action itself, like this:
1185df.Histo1D("b1"); // OK, the type of "b1" is deduced at runtime
1186df.Min<MyNumber_t>("myObject"); // OK, we explicitly specify that "myObject" is of type `MyNumber_t`
1189Deducing types at runtime requires the just-in-time compilation of the relevant actions, which has a small runtime
1190overhead, so specifying the type of the columns as template parameters to the action is good practice when performance is a goal.
1192When strings are passed as expressions to Filter() or Define(), fundamental types are passed as constants. This avoids certain common mistakes such as typing `x = 0` rather than `x == 0`:
1195// this throws an error (note the typo)
1196df.Define("x", "0").Filter("x = 0");
1199\anchor generic-actions
1200### User-defined custom actions
1201RDataFrame strives to offer a comprehensive set of standard actions that can be performed on each event. At the same
1202time, it allows users to inject their own action code to perform arbitrarily complex data reductions.
1204#### Implementing custom actions with Book()
1206Through the Book() method, users can implement a custom action and have access to the same features
1207that built-in RDataFrame actions have, e.g. hooks to events related to the start, end and execution of the
1208event loop, or the possibility to return a lazy RResultPtr to an arbitrary type of result:
1211#include <ROOT/RDataFrame.hxx>
1212#include <memory>
1213#include <numeric>  // std::accumulate
1214#include <iostream> // std::cout
1214class MyCounter : public ROOT::Detail::RDF::RActionImpl<MyCounter> {
1215 std::shared_ptr<int> fFinalResult = std::make_shared<int>(0);
1216 std::vector<int> fPerThreadResults;
1218public:
1219 // We use a public type alias to advertise the type of the result of this action
1220 using Result_t = int;
1222 MyCounter(unsigned int nSlots) : fPerThreadResults(nSlots) {}
1224 // Called before the event loop to retrieve the address of the result that will be filled/generated.
1225 std::shared_ptr<int> GetResultPtr() const { return fFinalResult; }
1227 // Called at the beginning of the event loop.
1228 void Initialize() {}
1230 // Called at the beginning of each processing task.
1231 void InitTask(TTreeReader *, int) {}
1233 /// Called at every entry.
1234 void Exec(unsigned int slot)
1235 {
1236 fPerThreadResults[slot]++;
1237 }
1239 // Called at the end of the event loop.
1240 void Finalize()
1241 {
1242 *fFinalResult = std::accumulate(fPerThreadResults.begin(), fPerThreadResults.end(), 0);
1243 }
1245 // Called by RDataFrame to retrieve the name of this action.
1246 std::string GetActionName() const { return "MyCounter"; }
1249int main() {
1250 ROOT::RDataFrame df(10);
1251 ROOT::RDF::RResultPtr<int> resultPtr = df.Book<>(MyCounter{df.GetNSlots()}, {});
1252 // The GetValue call triggers the event loop
1253 std::cout << "Number of processed entries: " << resultPtr.GetValue() << std::endl;
1254}
1257See the Book() method for more information and [this tutorial](https://root.cern/doc/master/df018__customActions_8C.html)
1258for a more complete example.
1260#### Injecting arbitrary code in the event loop with Foreach() and ForeachSlot()
1262Foreach() takes a callable (lambda expression, free function, functor...) and a list of columns and
1263executes the callable on the values of those columns for each event that passes all upstream selections.
1264It can be used to perform actions that are not already available in the interface. For example, the following snippet
1265evaluates the root mean square of column "x":
1267// Single-thread evaluation of RMS of column "x" using Foreach
1268double sumSq = 0.;
1269unsigned int n = 0;
1270df.Foreach([&sumSq, &n](double x) { ++n; sumSq += x*x; }, {"x"});
1271std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1273In multi-thread runs, users are responsible for the thread-safety of the expression passed to Foreach(): each
1274thread will execute the expression concurrently.
1275The code above would need to employ some resource protection mechanism to ensure non-concurrent writing of `sumSq` and `n`; but
1276this is probably too much head-scratching for such a simple operation.
1278ForeachSlot() can help in this situation. It is an alternative version of Foreach() for which the function takes an
1279additional "processing slot" parameter besides the columns it should be applied to. RDataFrame
1280guarantees that ForeachSlot() will invoke the user expression with different `slot` parameters for different concurrent
1281executions (see [Special helper columns: rdfentry_ and rdfslot_](\ref helper-cols) for more information on the slot parameter).
1282We can take advantage of ForeachSlot() to evaluate a thread-safe root mean square of column "x":
1284// Thread-safe evaluation of RMS of column "x" using ForeachSlot
1286const unsigned int nSlots = df.GetNSlots();
1287std::vector<double> sumSqs(nSlots, 0.);
1288std::vector<unsigned int> ns(nSlots, 0);
1290df.ForeachSlot([&sumSqs, &ns](unsigned int slot, double x) { sumSqs[slot] += x*x; ns[slot] += 1; }, {"x"});
1291double sumSq = std::accumulate(sumSqs.begin(), sumSqs.end(), 0.); // sum all squares
1292unsigned int n = std::accumulate(ns.begin(), ns.end(), 0); // sum all counts
1293std::cout << "rms of x: " << std::sqrt(sumSq / n) << std::endl;
1295Notice how we created one accumulator per processing slot and later merged the partial results via `std::accumulate`.
1298\anchor friends
1299### Dataset joins with friend trees
1301Vertically concatenating multiple trees that have the same columns (creating a logical dataset with the same columns and
1302more rows) is trivial in RDataFrame: just pass the tree name and a list of file names to RDataFrame's constructor, or create a TChain
1303out of the desired trees and pass that to RDataFrame.
1305Horizontal concatenation of trees or chains (creating a logical dataset with the same number of rows and the union of the
1306columns of multiple trees) leverages TTree's "friend" mechanism.
1308Simple joins of trees that do not have the same number of rows are also possible with indexed friend trees (see below).
1310To use friend trees in RDataFrame, set up trees with the appropriate relationships and then instantiate an RDataFrame
1311with the main tree:
1314TTree mainTree([...]);
1315TTree friendTree([...]);
1316mainTree.AddFriend(&friendTree, "myFriend");
1318RDataFrame df(mainTree);
1319auto df2 = df.Filter("myFriend.MyCol == 42");
1322The same applies for TChains. Columns coming from the friend trees can be referred to by their full name, like in the example above,
1323or the friend tree name can be omitted in case the column name is not ambiguous (e.g. "MyCol" could be used instead of
1324"myFriend.MyCol" in the example above if there is no column "MyCol" in the main tree).
1326\note A common source of confusion is that trees that are written out from a multi-thread Snapshot() call will have their
1327 entries (block-wise) shuffled with respect to the original tree. Such trees cannot be used as friends of the original
1328 one: rows will be mismatched.
1330Indexed friend trees provide a way to perform simple joins of multiple trees over a common column.
1331When a certain entry in the main tree (or chain) is loaded, the friend trees (or chains) load the entry whose
1332"index" columns have the same value as in the main tree. For example, in Python:
1335main_tree = ...
1336aux_tree = ...
1338# If a friend tree has an index on `commonColumn`, when the main tree loads
1339# a given row, it also loads the row of the friend tree that has the same
1340# value of `commonColumn`
1341aux_tree.BuildIndex("commonColumn")
1342main_tree.AddFriend(aux_tree)
1345df = ROOT.RDataFrame(main_tree)
1348RDataFrame supports indexed friend TTrees from ROOT v6.24 in single-thread mode and from v6.28/02 in multi-thread mode.
1350\anchor other-file-formats
1351### Reading data formats other than ROOT trees
1352RDataFrame can be interfaced with RDataSources. The ROOT::RDF::RDataSource interface defines an API that RDataFrame can use to read arbitrary columnar data formats.
1354RDataFrame calls into concrete RDataSource implementations to retrieve information about the data, retrieve (thread-local) readers or "cursors" for selected columns
1355and to advance the readers to the desired data entry.
1356Some predefined RDataSources are natively provided by ROOT, such as ROOT::RDF::RCsvDS, which allows reading comma-separated files:
1358auto tdf = ROOT::RDF::FromCSV("MuRun2010B.csv");
1359auto filteredEvents =
1360 tdf.Filter("Q1 * Q2 == -1")
1361 .Define("m", "sqrt(pow(E1 + E2, 2) - (pow(px1 + px2, 2) + pow(py1 + py2, 2) + pow(pz1 + pz2, 2)))");
1362auto h = filteredEvents.Histo1D("m");
1366See also FromNumpy (Python-only), FromRNTuple(), FromArrow(), FromSqlite().
1368\anchor callgraphs
1369### Computation graphs (storing and reusing sets of transformations)
1371As we saw, transformed dataframes can be stored as variables and reused multiple times to create modified versions of the dataset. This implicitly defines a **computation graph** in which
1372several paths of filtering/creation of columns are executed simultaneously, and finally aggregated results are produced.
1374RDataFrame detects when several actions use the same filter or the same defined column, and **only evaluates each
1375filter or defined column once per event**, regardless of how many times that result is used down the computation graph.
1376Objects read from each column are **built once and never copied**, for maximum efficiency.
1377When "upstream" filters are not passed, subsequent filters, temporary column expressions and actions are not evaluated,
1378so it might be advisable to put the strictest filters first in the graph.
1380\anchor representgraph
1381### Visualizing the computation graph
1382It is possible to print the computation graph from any node to obtain a [DOT (graphviz)](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) representation either on the standard output
1383or in a file.
1385Invoking ROOT::RDF::SaveGraph() on any node that is not the head node prints the computation graph of the branch
1386that node belongs to. Invoking it on the head node prints the entire computation graph.
1388An example of usage follows:
1390// First, a sample computational graph is built
1391ROOT::RDataFrame df("tree", "f.root");
1393auto df2 = df.Define("x", []() { return 1; })
1394 .Filter("col0 % 1 == col0")
1395 .Filter([](int b1) { return b1 < 2; }, {"cut1"})
1396 .Define("y", []() { return 1; });
1398auto count = df2.Count();
1400// Prints the graph to the mydot.dot file in the current directory
1401ROOT::RDF::SaveGraph(df, "./mydot.dot");
1402// Prints the graph to standard output
1403std::cout << ROOT::RDF::SaveGraph(df) << std::endl;
1406The generated graph can be rendered using one of the graphviz filters, e.g. `dot`. For instance, the image below can be generated with the following command:
1408$ dot -Tpng computation_graph.dot -ocomputation_graph.png
1411\image html RDF_Graph2.png
1413\anchor rdf-logging
1414### Activating RDataFrame execution logs
1416RDataFrame has experimental support for verbose logging of event loop runtimes and other related information. It is activated as follows:
1418#include <ROOT/RLogger.hxx>
1420// this increases RDF's verbosity level as long as the `verbosity` variable is in scope
1421auto verbosity = ROOT::Experimental::RLogScopedVerbosity(ROOT::Detail::RDF::RDFLogChannel(), ROOT::Experimental::ELogLevel::kInfo);
1424or in Python:
1426import ROOT
1428verbosity = ROOT.Experimental.RLogScopedVerbosity(ROOT.Detail.RDF.RDFLogChannel(), ROOT.Experimental.ELogLevel.kInfo)
1431More information (e.g. start and end of each multi-thread task) is printed using `ELogLevel.kDebug` and even more
1432(e.g. a full dump of the generated code that RDataFrame just-in-time-compiles) using `ELogLevel.kDebug+10`.
1434\anchor rdf-from-spec
1435### Creating an RDataFrame from a dataset specification file
1437RDataFrame can be created using a dataset specification JSON file:
1440import ROOT
1442df = ROOT.RDF.Experimental.FromSpec("spec.json")
1445The input dataset specification JSON file needs to be provided by the user and it describes all necessary samples and
1446their associated metadata information. The main required key is "samples" (at least one sample is needed) and the
1447required sub-keys for each sample are "trees" and "files". Additionally, one can specify a metadata dictionary for each
1448sample in the "metadata" key.
1450A simple example for the formatting of the specification in the JSON file is the following:
1454 "samples": {
1455 "sampleA": {
1456 "trees": ["tree1", "tree2"],
1457 "files": ["file1.root", "file2.root"],
1458 "metadata": {
1459 "lumi": 10000.0,
1460 "xsec": 1.0,
1461 "sample_category": "data"
1462 }
1463 },
1464 "sampleB": {
1465 "trees": ["tree3", "tree4"],
1466 "files": ["file3.root", "file4.root"],
1467 "metadata": {
1468 "lumi": 0.5,
1469 "xsec": 1.5,
1470 "sample_category": "MC_background"
1471 }
1472 }
1473 }
1477The metadata information from the specification file can then be accessed using the DefinePerSample() function.
1478For example, to access luminosity information (stored as a double):
1481df.DefinePerSample("lumi", 'rdfsampleinfo_.GetD("lumi")')
1484or sample_category information (stored as a string):
1487df.DefinePerSample("sample_category", 'rdfsampleinfo_.GetS("sample_category")')
1490or directly the filename:
1493df.DefinePerSample("name", "rdfsampleinfo_.GetSampleName()")
1496An example usage of FromSpec() is available in the df106_HiggstoFourLeptons.py tutorial, which also
1497provides a corresponding example JSON file for the dataset specification.
1499\anchor progressbar
1500### Adding a progress bar
1502A progress bar showing the processed event statistics can be added to any RDataFrame program.
1503The event statistics include the elapsed time, the currently processed file, the number of processed events, the event processing rate
1504and an estimated remaining time (per file being processed). They are recorded and printed in the terminal every m events and every
1505n seconds (by default m = 1000 and n = 1). The ProgressBar can also be added when multi-thread (MT) mode is enabled.
1507ProgressBar is added after creating the dataframe object (df):
1509ROOT::RDataFrame df("tree", "file.root");
1510ROOT::RDF::Experimental::AddProgressBar(df);
1513Alternatively, RDataFrame can be cast to an RNode first, giving the user more flexibility.
1514For example, the ProgressBar can be attached to any computational node, such as a Filter or a Define, not only the head node,
1515with no change to the ProgressBar function itself (please see the [Efficient analysis in Python](#python)
1516section for appropriate usage in Python):
1518ROOT::RDataFrame df("tree", "file.root");
1519auto df_1 = ROOT::RDF::RNode(df.Filter("x>1"));
1520ROOT::RDF::Experimental::AddProgressBar(df_1);
1522Examples of implemented progress bars can be seen by running [Higgs to Four Lepton tutorial](https://root.cern/doc/master/df106__HiggsToFourLeptons_8py_source.html) and [Dimuon tutorial](https://root.cern/doc/master/df102__NanoAODDimuonAnalysis_8C.html).
1525// clang-format on
1527namespace ROOT {
1530using ColumnNamesPtr_t = std::shared_ptr<const ColumnNames_t>;
1533/// \brief Build the dataframe.
1534/// \param[in] treeName Name of the tree contained in the directory
1535/// \param[in] dirPtr TDirectory where the tree is stored, e.g. a TFile.
1536/// \param[in] defaultColumns Collection of default columns.
1538/// The default columns are looked at in case no column is specified in the
1539/// booking of actions or transformations.
1540/// \see ROOT::RDF::RInterface for the documentation of the methods available.
1541RDataFrame::RDataFrame(std::string_view treeName, TDirectory *dirPtr, const ColumnNames_t &defaultColumns)
1542 : RInterface(std::make_shared<RDFDetail::RLoopManager>(nullptr, defaultColumns))
1544 if (!dirPtr) {
1545 auto msg = "Invalid TDirectory!";
1546 throw std::runtime_error(msg);
1547 }
1548 const std::string treeNameInt(treeName);
1549 auto tree = static_cast<TTree *>(dirPtr->Get(treeNameInt.c_str()));
1550 if (!tree) {
1551 auto msg = "Tree \"" + treeNameInt + "\" cannot be found!";
1552 throw std::runtime_error(msg);
1553 }
1554 GetProxiedPtr()->SetTree(std::shared_ptr<TTree>(tree, [](TTree *) {}));
1558/// \brief Build the dataframe.
1559/// \param[in] treeName Name of the tree contained in the file(s)
1560/// \param[in] filenameglob Path or name glob of the file(s) containing the tree.
1561/// \param[in] defaultColumns Collection of default columns.
1563/// The filename glob supports the same type of expressions as TChain::Add(), and it is passed as-is to TChain's
1564/// constructor.
1566/// The default columns are looked at in case no column is specified in the
1567/// booking of actions or transformations.
1568/// \see ROOT::RDF::RInterface for the documentation of the methods available.
1569#ifdef R__HAS_ROOT7
1570RDataFrame::RDataFrame(std::string_view treeName, std::string_view fileNameGlob, const ColumnNames_t &defaultColumns)
1571 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(treeName, fileNameGlob, defaultColumns))
1575RDataFrame::RDataFrame(std::string_view treeName, std::string_view fileNameGlob, const ColumnNames_t &defaultColumns)
1576 : RInterface(ROOT::Detail::RDF::CreateLMFromTTree(treeName, fileNameGlob, defaultColumns))
1582/// \brief Build the dataframe.
1583/// \param[in] datasetName Name of the dataset (tree) contained in the files
1584/// \param[in] fileNameGlobs Collection of file names or filename globs
1585/// \param[in] defaultColumns Collection of default columns.
1587/// The filename globs support the same type of expressions as TChain::Add(), and each glob is passed as-is
1588/// to TChain's constructor.
1590/// The default columns are looked at in case no column is specified in the booking of actions or transformations.
1591/// \see ROOT::RDF::RInterface for the documentation of the methods available.
1592#ifdef R__HAS_ROOT7
1593RDataFrame::RDataFrame(std::string_view datasetName, const std::vector<std::string> &fileNameGlobs,
1594 const ColumnNames_t &defaultColumns)
1595 : RInterface(ROOT::Detail::RDF::CreateLMFromFile(datasetName, fileNameGlobs, defaultColumns))
1599RDataFrame::RDataFrame(std::string_view datasetName, const std::vector<std::string> &fileNameGlobs,
1600 const ColumnNames_t &defaultColumns)
1601 : RInterface(ROOT::Detail::RDF::CreateLMFromTTree(datasetName, fileNameGlobs, defaultColumns))
1607/// \brief Build the dataframe.
1608/// \param[in] tree The tree or chain to be studied.
1609/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
1611/// The default columns are looked at in case no column is specified in the
1612/// booking of actions or transformations.
1613/// \see ROOT::RDF::RInterface for the documentation of the methods available.
1614RDataFrame::RDataFrame(TTree &tree, const ColumnNames_t &defaultColumns)
1615 : RInterface(std::make_shared<RDFDetail::RLoopManager>(&tree, defaultColumns))
1620/// \brief Build a dataframe that generates numEntries entries.
1621/// \param[in] numEntries The number of entries to generate.
1623/// An empty-source dataframe constructed with a number of entries will
1624/// generate those entries on the fly when some action is triggered,
1625/// and it will do so for all the previously-defined columns.
1626/// \see ROOT::RDF::RInterface for the documentation of the methods available.
1628 : RInterface(std::make_shared<RDFDetail::RLoopManager>(numEntries))
1634/// \brief Build dataframe associated to data source.
1635/// \param[in] ds The data source object.
1636/// \param[in] defaultColumns Collection of default column names to fall back to when none is specified.
1638/// A dataframe associated to a data source will query it to access column values.
1639/// \see ROOT::RDF::RInterface for the documentation of the methods available.
1640RDataFrame::RDataFrame(std::unique_ptr<ROOT::RDF::RDataSource> ds, const ColumnNames_t &defaultColumns)
1641 : RInterface(std::make_shared<RDFDetail::RLoopManager>(std::move(ds), defaultColumns))
1646/// \brief Build dataframe from an RDatasetSpec object.
1647/// \param[in] spec The dataset specification object.
1649/// A dataset specification includes trees and file names,
1650/// as well as an optional friend list and/or entry range.
1652/// ### Example usage from Python:
1653/// ~~~{.py}
1654/// spec = (
1655/// ROOT.RDF.Experimental.RDatasetSpec()
1656/// .AddSample(("data", "tree", "file.root"))
1657/// .WithGlobalFriends("friendTree", "friend.root", "alias")
1658/// .WithGlobalRange((100, 200))
1659/// )
1660/// df = ROOT.RDataFrame(spec)
1661/// ~~~
1663/// See also ROOT::RDataFrame::FromSpec().
1665 : RInterface(std::make_shared<RDFDetail::RLoopManager>(std::move(spec)))
1671 // If any node of the computation graph associated with this RDataFrame
1672 // declared code to jit, we need to make sure the compilation actually
1673 // happens. For example, a jitted Define could have been booked but
1674 // if the computation graph is not actually run then the code of the
1675 // Define node is not jitted. This in turn would cause memory leaks.
1676 // See https://github.com/root-project/root/issues/15399
1677 fLoopManager->Jit();
1680namespace RDF {
1681namespace Experimental {
1684/// \brief Create the RDataFrame from the dataset specification file.
1685/// \param[in] jsonFile Path to the dataset specification JSON file.
1687/// The input dataset specification JSON file must include a number of keys that
1688/// describe all the necessary samples and their associated metadata information.
1689/// The main key, "samples", is required and at least one sample is needed. Each
1690/// sample must have at least one key "trees" and at least one key "files" from
1691/// which the data is read. Optionally, metadata information can be added, as
1692/// well as friend list information.
1694/// ### Example specification file JSON:
1695/// The following is an example of the dataset specification JSON file formatting:
1697/// {
1698/// "samples": {
1699/// "sampleA": {
1700/// "trees": ["tree1", "tree2"],
1701/// "files": ["file1.root", "file2.root"],
1702/// "metadata": {"lumi": 1.0}
1703/// },
1704/// "sampleB": {
1705/// "trees": ["tree3", "tree4"],
1706/// "files": ["file3.root", "file4.root"],
1707/// "metadata": {"lumi": 0.5}
1708/// },
1709/// ...
1710/// },
1711/// }
1713ROOT::RDataFrame FromSpec(const std::string &jsonFile)
1715 const nlohmann::ordered_json fullData = nlohmann::ordered_json::parse(std::ifstream(jsonFile));
1716 if (!fullData.contains("samples") || fullData["samples"].empty()) {
1717 throw std::runtime_error(
1718 R"(The input specification does not contain any samples. Please provide the samples in the specification like:
1720 "samples": {
1721 "sampleA": {
1722 "trees": ["tree1", "tree2"],
1723 "files": ["file1.root", "file2.root"],
1724 "metadata": {"lumi": 1.0}
1725 },
1726 "sampleB": {
1727 "trees": ["tree3", "tree4"],
1728 "files": ["file3.root", "file4.root"],
1729 "metadata": {"lumi": 0.5}
1730 },
1731 ...
1732 },
1734 }
1736 RDatasetSpec spec;
1737 for (const auto &keyValue : fullData["samples"].items()) {
1738 const std::string &sampleName = keyValue.key();
1739 const auto &sample = keyValue.value();
1740 // TODO: if requested in https://github.com/root-project/root/issues/11624
1741 // allow union-like types for trees and files, see: https://github.com/nlohmann/json/discussions/3815
1742 if (!sample.contains("trees")) {
1743 throw std::runtime_error("A list of tree names must be provided for sample " + sampleName + ".");
1744 }
1745 std::vector<std::string> trees = sample["trees"];
1746 if (!sample.contains("files")) {
1747 throw std::runtime_error("A list of files must be provided for sample " + sampleName + ".");
1748 }
1749 std::vector<std::string> files = sample["files"];
1750 if (!sample.contains("metadata")) {
1751 spec.AddSample(RSample{sampleName, trees, files});
1752 } else {
1753 RMetaData m;
1754 for (const auto &metadata : sample["metadata"].items()) {
1755 const auto &val = metadata.value();
1756 if (val.is_string())
1757 m.Add(metadata.key(), val.get<std::string>());
1758 else if (val.is_number_integer())
1759 m.Add(metadata.key(), val.get<int>());
1760 else if (val.is_number_float())
1761 m.Add(metadata.key(), val.get<double>());
1762 else
1763 throw std::logic_error("The metadata keys can only be of type [string|int|double].");
1764 }
1765 spec.AddSample(RSample{sampleName, trees, files, m});
1766 }
1767 }
1768 if (fullData.contains("friends")) {
1769 for (const auto &friends : fullData["friends"].items()) {
1770 std::string alias = friends.key();
1771 std::vector<std::string> trees = friends.value()["trees"];
1772 std::vector<std::string> files = friends.value()["files"];
1773 if (files.size() != trees.size() && trees.size() > 1)
1774 throw std::runtime_error("Mismatch between trees and files in a friend.");
1775 spec.WithGlobalFriends(trees, files, alias);
1776 }
1777 }
1779 if (fullData.contains("range")) {
1780 std::vector<int> range = fullData["range"];
1782 if (range.size() == 1)
1783 spec.WithGlobalRange({range[0]});
1784 else if (range.size() == 2)
1785 spec.WithGlobalRange({range[0], range[1]});
1786 }
1787 return ROOT::RDataFrame(spec);
1790} // namespace Experimental
1791} // namespace RDF
1793} // namespace ROOT
1795namespace cling {
1797/// Print an RDataFrame at the prompt
1798std::string printValue(ROOT::RDataFrame *df)
1800 // The loop manager is never null, except when its construction failed.
1801 // This can happen e.g. if the constructor of RLoopManager that expects
1802 // a file name is used and that file doesn't exist. This point is usually
1803 // not even reached in that situation, since the exception thrown by the
1804 // constructor will also stop execution of the program. But it can still
1805 // be reached at the prompt, if the user tries to print the RDataFrame
1806 // variable after an incomplete initialization.
1807 auto *lm = df->GetLoopManager();
1808 if (!lm) {
1809 throw std::runtime_error("Cannot print information about this RDataFrame, "
1810 "it was not properly created. It must be discarded.");
1811 }
1812 auto *tree = lm->GetTree();
1813 auto defCols = lm->GetDefaultColumnNames();
1815 std::ostringstream ret;
1816 if (tree) {
1817 ret << "A data frame built on top of the " << tree->GetName() << " dataset.";
1818 if (!defCols.empty()) {
1819 if (defCols.size() == 1)
1820 ret << "\nDefault column: " << defCols[0];
1821 else {
1822 ret << "\nDefault columns:\n";
1823 for (auto &&col : defCols) {
1824 ret << " - " << col << "\n";
1825 }
1826 }
1827 }
1828 } else if (auto ds = df->fDataSource) {
1829 ret << "A data frame associated to the data source \"" << cling::printValue(ds) << "\"";
1830 } else {
1831 ret << "An empty data frame that will create " << lm->GetNEmptyEntries() << " entries\n";
1832 }
1834 return ret.str();
1836} // namespace cling