[ROOT] Re: CINT cpu speed? soft-link related!

From: Arthur E. Snyder (snyder@SLAC.stanford.edu)
Date: Wed Jan 29 2003 - 00:56:12 MET


It seems these problems are somehow related to the use of soft links. The
difference between your version of my routine (art.C) and my original
seems to be that I use a soft link to reach the ASCII file in my paw
directory, where it is made, while your version reads it directly.
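
To be concrete, the layout is like this (directory names illustrative,
only the shape matters):

  my paw directory:   .../paw/pion-1.612.tuples    <-- the ASCII file
  working directory:  ./paw -> .../paw             <-- the soft link

  my original opens:  ifstream fp("paw/pion-1.612.tuples");  // via the link
  your version opens: ifstream fp("pion-1.612.tuples");      // directly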

Here is a little table of the times (CPU seconds) for the two root
versions, with and without the soft link:


          |  3.02-07   | 2.23-12 |
direct    |    2.41    |   1.79  |
soft-link |    8.95    |   4.60  |


The old version is a little faster than the newer one even without the
soft link, but the difference grows dramatically when the soft link is
used.
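
For what it's worth, the times can also be taken mechanically with a
small driver like the sketch below, instead of reading them off the
prompt. The file name timing.C is made up; it assumes the art.C from
the mail below is in the current directory:

{
   // timing.C -- illustrative driver (file name made up)
   TStopwatch t;
   t.Start();
   gROOT->ProcessLine(".x art.C");    // interpreted by CINT
   t.Stop();
   printf("CINT       : CP %.2f s\n", t.CpuTime());
   t.Start(kTRUE);                    // reset the stopwatch
   gROOT->ProcessLine(".x art.C+");   // ACLiC: compiles first, then runs
   t.Stop();
   printf("ACLiC+build: CP %.2f s\n", t.CpuTime());
   t.Start(kTRUE);
   gROOT->ProcessLine(".x art.C+");   // library already built: pure run time
   t.Stop();
   printf("ACLiC      : CP %.2f s\n", t.CpuTime());
}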

Why should a soft link have such a big effect? The file being read is
actually on the same physical disk, so one would think that once the
stream was opened it would be no slower than reading a file directly in
the same directory.
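
One way I can think of to pin this down is to resolve the link once and
time the same read through both paths. The macro below is purely an
illustration (the name linktest.C is made up, realpath() is POSIX, and
the row handling is simplified to just pulling every number through the
stream):

//--------file linktest.C (illustrative; not from the original thread)
#include "TStopwatch.h"
#include <fstream>
#include <cstdio>
#include <cstdlib>   // realpath() (POSIX)
#include <climits>   // PATH_MAX

// Read every number in the file and report the CPU time taken.
double readAll(const char* path) {
   TStopwatch timer;
   timer.Start();
   std::ifstream fp(path);
   float x;
   long n = 0;
   while (fp >> x) n++;
   timer.Stop();
   printf("%-35s %ld values, CP time %.3f s\n", path, n, timer.CpuTime());
   return timer.CpuTime();
}

void linktest() {
   // resolve the soft link once, up front
   char real[PATH_MAX];
   if (!realpath("paw/pion-1.612.tuples", real)) { perror("realpath"); return; }
   readAll("paw/pion-1.612.tuples");  // through the soft link
   readAll(real);                     // through the resolved physical path
}

If the two reads take the same time, the cost is not in following the
link at open time and must be elsewhere.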

-Art

A.E. Snyder, Group EC                        \!c*p?/
SLAC Mail Stop #95                          ((.   .))
Box 4349                                        |
Stanford, Ca, USA, 94309                      '\|/`
e-mail:snyder@slac.stanford.edu                 o
phone:650-926-2701                              _
http://www.slac.stanford.edu/~snyder          BaBar
FAX:650-926-2657                          Collaboration



On Tue, 28 Jan 2003, Rene Brun wrote:

> Art,
>
> Could you run the script below on your machine with your latest Root:
>  root > gROOT.Time()
>  root > .x art.C
>  root > .x art.C+
>  root > .x art.C+
>
> and send me the times
>
> Rene Brun
>
> //--------file art.C
> #include "TNtuple.h"
> #include <fstream>
> #include <cstdio>
> using std::ifstream;
>
> // ASCII tuple reader: fills a TNtuple with 16434 rows of 56 floats
> TNtuple* art() {
>    ifstream fp("pion-1.612.tuples");
>
>    TNtuple* temp = new TNtuple("ntuple","ascii data",
>      "evtno:npievt:status:idmom:eve:ideve:b:idb:xb:yb:zb:"
>      "xeve:yeve:zeve:pxpimc:pypimc:pzpimc:ptpimc:ppimc:thpimc:"
>      "phipimc:pxpirc:pypirc:pzpirc:ptpirc:ppirc:thpirc:phipirc:"
>      "xpirc:ypirc:zpirc:nsvtpi:ndchpi:lenpi:delth:delthalt:pxalt:"
>      "pyalt:pzalt:thalt:phialt:xalt:yalt:zalt:pxmom:pymom:pzmom:"
>      "ptmom:pmom:thmom:phimom:xmom:ymom:zmom:idpimom:idpigma");
>
>    Float_t array[56];
>    Int_t count = 0;
>
>    while (1) {
>       for (Int_t i = 0; i < 56; i++) fp >> array[i];
>       temp->Fill(array);
>       count++;
>       if (count >= 16434) break;
>       if (count%1000 == 1) printf("count: %d\n",count);
>    }
>
>    printf("total: %d\n",count);
>    return temp;
> }
>
>
>
> "Arthur E. Snyder" wrote:
> >
> > I ran it on the same file on the same machine with the two different root
> > versions, so the only thing different was the root version. The times I got
> > with the faster, older version were about the same as you're seeing.
> >
> > -Art
> >
> > ----- Original Message -----
> > From: "Rene Brun" <Rene.Brun@cern.ch>
> > To: "Arthur E. Snyder" <snyder@SLAC.Stanford.EDU>
> > Cc: <roottalk@pcroot.cern.ch>
> > Sent: Tuesday, January 28, 2003 7:25 AM
> > Subject: Re: CINT cpu speed?
> >
> > > Art,
> > >
> > > I have run your script on your file
> > >  - with CINT      : Real time 0:00:04, CP time 4.280
> > >  - with ACLIC/gcc : Real time 0:00:04, CP time 3.590
> > >
> > > As you can see, CINT is extremely fast, and your factor of 2 or 10
> > > must come from somewhere else. Is your file somewhere on a slow server?
> > >
> > > Rene Brun
> > >
> > > "Arthur E. Snyder" wrote:
> > > >
> > > > The factor of 10 is very odd. I'm just using the standard root
> > > > executables at SLAC. I haven't recompiled them or anything, so I
> > > > assume they are optimized to the same level.
> > > >
> > > > I'll send you the file.
> > > >
> > > > ----- Original Message -----
> > > > From: "Rene Brun" <Rene.Brun@cern.ch>
> > > > To: "Arthur E. Snyder" <snyder@SLAC.Stanford.EDU>
> > > > Cc: <roottalk@pcroot.cern.ch>
> > > > Sent: Tuesday, January 28, 2003 5:57 AM
> > > > Subject: Re: CINT cpu speed?
> > > >
> > > > > Art,
> > > > >
> > > > > I do not understand this factor 10. Are you sure that you run
> > > > > with the same CINT optimisation level in both cases?
> > > > > Could you send me your file pion-1.612.tuples?
> > > > >
> > > > > Rene Brun
> > > > >
> > > > > "Arthur E. Snyder" wrote:
> > > > > >
> > > > > > I find the speed of the C++ interpreter to be much slower in new
> > > > > > versions of root than in old ones. Using the code attached below
> > > > > > to read in an ASCII file, I find a factor of 2 decrease in the
> > > > > > speed of the macro between 3.02-07 and 2.23-12. Why is that? This
> > > > > > is not progress!
> > > > > >
> > > > > > Even stranger is that the original version of this code, which
> > > > > > used "cout" rather than "printf" to print out the variable
> > > > > > "count", is even slower. That one ran 10x slower in 3.02-07 than
> > > > > > in 2.23-12 the first time it was executed, but improved to only 2x
> > > > > > slower when executed again. I'm not sure this really had anything
> > > > > > to do with the use of "cout <<" instead of printf, since other
> > > > > > minor changes, such as putting in a few statements to print the
> > > > > > CPU time used, also produced an improvement from 10x worse to only
> > > > > > 2x worse.
> > > > > >
> > > > > > Anybody have any idea what's going on here?
> > > > > >
> > > > > > -Art Snyder, SLAC
> > > > > >
> > > > > > ASCII tuple reader:
> > > > > >
> > > > > > TNtuple* readASCII() {
> > > > > >    ifstream fp("paw/pion-1.612.tuples");  // through the soft link
> > > > > >
> > > > > >    TNtuple* temp = new TNtuple("ntuple","ascii data",
> > > > > >      "evtno:npievt:status:idmom:eve:ideve:b:idb:xb:yb:zb:"
> > > > > >      "xeve:yeve:zeve:pxpimc:pypimc:pzpimc:ptpimc:ppimc:thpimc:"
> > > > > >      "phipimc:pxpirc:pypirc:pzpirc:ptpirc:ppirc:thpirc:phipirc:"
> > > > > >      "xpirc:ypirc:zpirc:nsvtpi:ndchpi:lenpi:delth:delthalt:pxalt:"
> > > > > >      "pyalt:pzalt:thalt:phialt:xalt:yalt:zalt:pxmom:pymom:pzmom:"
> > > > > >      "ptmom:pmom:thmom:phimom:xmom:ymom:zmom:idpimom:idpigma");
> > > > > >
> > > > > >    Float_t array[56];
> > > > > >    Int_t count = 0;
> > > > > >
> > > > > >    while (1) {
> > > > > >       for (Int_t i = 0; i < 56; i++) fp >> array[i];
> > > > > >       temp->Fill(array);
> > > > > >       count++;
> > > > > >       if (count >= 16434) break;
> > > > > >       if (count%1000 == 1) printf("count: %d\n",count);
> > > > > >    }
> > > > > >
> > > > > >    printf("total: %d\n",count);
> > > > > >    return temp;
> > > > > > }
> > > > > >
> > > > > > A.E. Snyder, Group EC                        \!c*p?/
> > > > > > SLAC Mail Stop #95                          ((.   .))
> > > > > > Box 4349                                        |
> > > > > > Stanford, Ca, USA, 94309                      '\|/`
> > > > > > e-mail:snyder@slac.stanford.edu                 o
> > > > > > phone:650-926-2701                              _
> > > > > > http://www.slac.stanford.edu/~snyder          BaBar
> > > > > > FAX:650-926-2657                          Collaboration
>


