Dear all,
I have tried to switch to THnSparseD for my analysis, since it allows multiple binnings, which helps speed up my analysis and keeps the relations between all the variables I use to bin my histograms. However, this effectively gives me an object with a very large number of bins (~400k), even though, along the most segmented axes, almost one quarter of them should be empty.
Since I run jobs on the grid, I have many output files, each containing a set of such objects, which I then have to merge into a single file.
In my file, the histograms are stored in a TList that contains three deeper levels of TLists (i.e. TList->TList->TList->TList->object).
Now, when I use "hadd" (i.e. the TFileMerger) to do this, the memory usage of the process grows to almost all the RAM available on my PC (4 GB), so I cannot add the files without getting segmentation faults or aborts due to excessive memory consumption.
I was wondering whether this is attributable to the huge size of the THnSparse objects, or to problems related to having many nested levels of TLists, all stored under a single key inside the file.
Can someone help me understand this, or suggest the best way to store the data in the file?
Thanks, best regards
Alberto

Received on Tue Jun 16 2009 - 09:19:08 CEST