Hi Donal.
There is currently no easy way to enforce that a ROOT file is written in exact 64 MB chunks. However, you can get close (within statistical fluctuations) by setting the 'auto-flush' size to 64 MB when writing the TTree (SetAutoFlush(-64*1024*1024)).
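
For illustration, here is a minimal sketch of writing a TTree with ~64 MB auto-flush clusters; the file, tree, and branch names are just placeholders. A negative argument to SetAutoFlush is interpreted as a byte count, so baskets are flushed (and a new cluster started) roughly every 64 MB of written data:

   #include "TFile.h"
   #include "TTree.h"

   void write_with_64mb_clusters() {
      TFile f("output.root", "RECREATE");      // placeholder output file
      TTree tree("tree", "example tree");      // placeholder tree name/title

      double value = 0;
      tree.Branch("value", &value);            // placeholder branch

      // Flush after about 64 MB of data has been written.
      tree.SetAutoFlush(-64 * 1024 * 1024);

      for (Long64_t i = 0; i < 1000000; ++i) {
         value = i * 0.001;
         tree.Fill();
      }

      tree.Write();
      f.Close();
   }
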
Cheers,
Philippe.
On 11/5/11 3:56 AM, donal0412 wrote:
> Hi ROOT experts and users,
> I'm considering using HDFS to store ROOT files and a map-reduce framework to process them (reconstruction, analysis, MC).
> I wonder if there is an efficient way to split a ROOT file into fixed-size chunks (say 64 MB), and to merge several ROOT
> files into one file.
>
> Thanks !
> Donal
>
Received on Mon Nov 07 2011 - 16:40:52 CET