RE: Cannot have more than 255 TProcessID's in one file ?

From: Philippe Canal <pcanal_at_fnal.gov>
Date: Tue, 15 Mar 2005 09:45:14 -0600


Hi Nicolas,

Note that files that were produced using an older version of ROOT and contain
more than 255 ProcessIDs will present some difficulties when read back. In
particular, it is likely that one referenced object will be confused for another.

As you mention, the issue could be a bad interaction between TClonesArray and TRefs.
To help us track this issue down, could you please provide me with a small, complete
example reproducing the problem?

Cheers,
Philippe.

-----Original Message-----
From: owner-roottalk_at_pcroot.cern.ch [mailto:owner-roottalk_at_pcroot.cern.ch] On Behalf Of Nicolas Berger
Sent: Tuesday, March 15, 2005 2:37 AM
To: roottalk_at_pcroot.cern.ch
Subject: Re: [ROOT] Cannot have more than 255 TProcessID's in one file ?

Hi Again,

  Thank you for adding this feature; it is exactly what was needed. However, after migrating to ROOT 4.03/02 I still have some difficulty with it: I now get many error messages of the form

Error in <TExMap::Add>: key 197702208 is not unique

My guess as to the cause is that the hashmap mapping objects to the index of their TProcessID uses the address of the object as the lookup key. In my case the objects are stored in a TClonesArray inside an "event" class, so every time a new event is loaded, objects from the previous event can be overwritten by new objects at the same address. The new objects would then trigger the error by attempting to register themselves under the same key as the previous ones.

If this is indeed the problem, it could be solved by clearing the hashmap when a new event is loaded, but it seems this is impossible for the user to do, since the hashmap (fObjPIDs) is not accessible outside TProcessID.

Any help with this problem is greatly appreciated! Thanks,
- Nicolas

> Hi Nicolas,
>
> This limitation was removed a few weeks ago in the development
> version; see http://root.cern.ch/root/Version40302.news.html
> (under TRefs/TObject).
>
> see also the development notes at :
> http://root.cern.ch/root/html/examples/V4.03.txt.html
>
> 2005-01-28 06:45 brun
>
> * base/inc/TProcessID.h, base/src/TObject.cxx,
> base/src/TProcessID.cxx, base/src/TRef.cxx,
> meta/src/TStreamerInfoReadBuffer.cxx,
> meta/src/TStreamerInfoWriteBuffer.cxx:
> From Philippe:
> This patch solved a problem due to the fact that TObject's fUniqueID can
> only hold an 8-bit ProcessID identifier (in addition to storing a
> 24-bit object ID). However, we support 65535 distinct ProcessID
> identifiers per file. Before this patch, for a file containing more than
> 255 distinct ProcessIDs, the TRefs using the later ProcessIDs would be
> unable to find their referenced objects.
>
> Specifically, fUniqueID can now store the ProcessID identifiers 0 through
> 254. When more identifiers are used, then instead of storing the identifier
> in the 8 higher bits of fUniqueID, we store it in a table
> (TProcessID::fgObjPIDs) linking object addresses to pids.
>
>
> Rene Brun
>
> On Sat, 12 Mar 2005,
> Nicolas Berger wrote:
>
> >
> > Hi,
> >
> > I am using ROOT to write out TTrees containing objects refering to each
> > others using TRef's. This works well, but I run into problems when
> > trying to merge a large number of such trees produced by different jobs.
> >
> > Starting out with 300 files, each containing a TTree and a TProcessID
> > object, I load the files into a chain and then try to merge everything
> > into a single file by copying the events one by one into a new file. When
> > the number of initial files is N<255, this works and a file is produced
> > that contains a TTree and N TProcessID objects. For N>=255, the produced
> > file still looks OK (it contains a tree and a long list of TProcessIDs)
> > but it is corrupted: all TRefs that are associated with ProcessID255 and
> > above cannot find the object they point to even after it is loaded. Also,
> > in this latter case, the file actually contains N+1 TProcessIDs:
> > ProcessID1-ProcessID254 are correct, ProcessID255 is actually the
> > process ID of the ROOT session during which the files were merged and
> > ProcessID256-ProcessID(N+1) are the remaining ones.
> >
> > From looking into the TRef/TProcessID code it seems this is due to a
> > limitation of ROOT: apparently the index of the TProcessID is encoded into
> > the upper 8 bits of the TObject's unique ID; at least that's how it seems
> > to be used in TProcessID::GetProcessWithUID(UInt_t uid, void *obj):
> >
> > -------------------------------
> > Int_t pid = (uid>>24)&0xff;
> > <...>
> > (TProcessID*)fgPIDs->At(pid);
> > -------------------------------
> >
> > so when the PID index reaches 256, pid overflows to 0. Since pid=0 is the
> > TProcessID of the current session, there can be only an extra 255 PIDs in
> > use before the index overflows. This would also explain why the ProcessID
> > of the current session is also written into the corrupted file.
> >
> > I would appreciate it if someone knowledgeable with the inner workings
> > of TRefs could comment on whether this is indeed the problem. If it is,
> > is there a different way of merging files with TProcessIDs that does not
> > run into this problem?
> >
> > Thank you for your help,
> > - Nicolas
> >
>
>
Received on Tue Mar 15 2005 - 16:47:47 MET

This archive was generated by hypermail 2.2.0 : Tue Jan 02 2007 - 14:45:06 MET