Hi Mandeep,

I am cc'ing my reply to roottalk, as I have received a few questions similar to yours.

The conversion of histograms from Hbook format to the Root classes is extremely fast; the overhead is negligible. The THbookTree class (via THbookBranch/THbookFile) converts only the ntuple header to a Root class. All the data buffers are read directly by the Hbook/Zebra routines.

I have benchmarked some queries with:
  A - PAW itself on myfile.hbook
  B - THbookFile/THbookTree on myfile.hbook
  C - Root on myfile.root created via h2root

On average, the myfile.root files that I have used are two times smaller than the original myfile.hbook. Concerning the time, one has to distinguish the first pass from the following passes through the data.

- First pass: C is always the best (because of less physical I/O); B is slightly faster than A.
- Second pass: C is still the best in general (unless one uses a small ntuple that the PAW query processor caches in memory); A is faster than B (for the reason above).

It would be possible to develop a cache mechanism for Hbook files within Root, but I would prefer not to start this exercise. I am interested in additional benchmarks.

These new classes were developed at the request of some experiments that have a large collection of PAW ntuples (I was told multiple Terabytes) and cannot convert all their applications to Root immediately. Also note that THbookFile supports reading only; there are no plans to support a Write option. In fact, my main motivation in implementing these classes was not so much the direct support for Hbook files, but mainly to show a concrete example of how to interface to foreign formats while still using the standard ROOT Tree query processor.

Rene Brun

On Wed, 20 Feb 2002, Mandeep S. Gill wrote:
>
> hi Rene: this sounds very intriguing, do you have any feeling for how
> much overhead there is though for loading and converting files on the fly,
> vs. pre-converting them with h2root?
>
> thanks-
> -M
>
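[Editor's note: for readers who have not tried these classes, a minimal sketch of reading a PAW ntuple directly from a ROOT session. The file name "myfile.hbook" and the ntuple id 111 are placeholders for illustration; this requires a ROOT installation built with the Hbook interface.]

```cpp
// Sketch: read an Hbook/PAW ntuple in ROOT without pre-converting via h2root.
// "myfile.hbook" and ntuple id 111 are placeholder values.
{
   gSystem->Load("libHbook");        // load the Hbook interface library
   THbookFile f("myfile.hbook");     // THbookFile is read-only
   TTree *T = (TTree*)f.Get(111);    // only the ntuple header is converted;
                                     // data buffers are read by Hbook/Zebra
   T->Draw("px");                    // standard ROOT Tree query processor
}
```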
This archive was generated by hypermail 2b29 : Sat Jan 04 2003 - 23:50:42 MET