Re: [ROOT] h2root ... How build it?

From: Christian Holm Christensen (cholm@hehi03.nbi.dk)
Date: Wed Aug 06 2003 - 12:31:59 MEST


Hi Zaldy,

zaldy <zaldy@neutrino.kek.jp> wrote concerning
  Re: [ROOT] h2root ... How build it? [Tue, 5 Aug 2003 22:54:24 +0900 (JST)] 
----------------------------------------------------------------------
> At this time am looking for the libpacklib.{so,a}. My Compiler is gcc 
> 3.2.2 on RedHat 9.0. Unfortunately, available libraries are up to RedHat 
> 7.3, built on gcc 2.95.2 (pls see: 
> http://cernlib.web.cern.ch/cernlib/download/2002_rh73/README)
> 
> One option now is to build the library locally. 

Whoha! Have you ever tried building the CERNLIB libraries yourself?
Not funny at all. 

> I wonder if using the existing cernlibs would be ok?

They should be OK (as long as it's not for Red Hat 4.x or earlier,
when the `native' libc5 library was used).  

That's the short answer - now for the longer one:

The point is that the CERNLIB libraries are (AFAIK) all C or
Fortran77 libraries.  The ABI (Application Binary Interface) of C and
Fortran77 hasn't changed for a very long time, and is (again AFAIK)
completely backward compatible. 
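
As an illustration of how simple and stable that ABI is, here is a
minimal sketch of calling a Fortran77 routine from C++.  The routine
`myfort' is purely hypothetical, and I assume the usual g77
convention: the symbol is the lower-cased name with a trailing
underscore, and all arguments are passed by reference.

  // Hypothetical Fortran77 routine:  SUBROUTINE MYFORT(N, X)
  // With g77 this becomes the plain symbol `myfort_', taking its
  // arguments by reference.  That convention hasn't changed in
  // ages, which is why old CERNLIB binaries still link fine.
  extern "C" void myfort_(int* n, float* x);

  int main()
  {
    int   n = 10;
    float x = 1.5f;
    myfort_(&n, &x);   // call the Fortran routine from C++
    return 0;
  }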

FYI: An ABI is different from an API (Application Programming
Interface), in that the ABI is what the runtime system and linker
see, while the API is what the (human) developer sees.  Two different
domains, with similar names. 
 
The issue of ABI compatibility for C++ (compiled) libraries is
completely different.  The C++ ABI for GCC has changed twice within
the past two years or so.  Pre-3.0 GCC used an ABI that was specific
to GCC (based on the scheme described in Bjarne Stroustrup's `The
Annotated C++ Reference Manual' - A.K.A. the C++ ARM), while GCC 3.0
and 3.1 used an ABI which is closer to the pseudo-standard published
by Intel.  However, that ABI changed a wee bit, and GCC 3.2 followed
suit.

Hence, C++ libraries compiled with GCC pre-3.0 are not compatible
with GCC 3.0, which again isn't compatible with GCC 3.2 (and it's a
transitive relation, by the way :-) 
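
To see the difference the way the linker sees it, here is a tiny
illustration.  The declaration below is the same API under both
compilers, but the mangled symbol - the ABI-level name - differs.
The mangled names are only roughly what the two ABIs produce; check
with `nm' and `c++filt' on your own system.

  // API: what the (human) developer sees.
  void baz(int i);

  // ABI: what the linker sees is the mangled symbol, roughly
  //   GCC 2.95 (old GNU mangling):  baz__Fi
  //   GCC 3.2  (Itanium-style):     _Z3bazi
  // Different names, so objects from the two compilers cannot
  // resolve each other's symbols.
  void baz(int) { }

  int main()
  {
    baz(42);
    return 0;
  }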

The reason for all this fuss around the C++ ABI is that calling C++
code is very different from calling C (or Fortran77) code.  Consider: 

  extern "C" 
  {
    void bar(const char* gret) // Context independent
    { 
      printf("%s\n", gret); 
    } 
  }
 
  struct foo 
  {
    int _foo;
    int get() const { return _foo; }  // Context dependent function
  };

  int main() 
  { 
    bar("Hello, World");              // No context
    foo f(10); 
    return f.get();                   // Execute foo::get in f's context
  }

When the runtime system executes `bar' it doesn't need to know
anything but the arguments.  That is, it only needs to push the
arguments onto the stack and jump to the `bar' binary code - sort of. 

However, to execute `foo::get' the runtime system needs to know the
object (or context) that executes this call.  Hence, it must find the
object and pass that context along to the code of `foo::get'.
Usually, this is implemented by implicitly passing a pointer to the
object as the first argument of the function, sort of like: 

  int _3get_3fooi(_3foo* self) 
  { 
    return self->_foo;
  }

[This is more or less how CINT does it by the way]

How exactly to do all this is what the ABI is all about.  And to
complicate things further, we have templates in C++ too.
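
For example - and this is only a rough sketch, so check the exact
names with `nm' and `c++filt' - every instantiation of a template
becomes a separate mangled symbol, and the rules for encoding the
template arguments into that symbol are part of the ABI as well:

  // Each instantiation ends up as its own symbol, with the template
  // arguments encoded in the mangled name, e.g. (Itanium-style ABI,
  // roughly):
  //   smaller<int>    ->  _Z7smallerIiET_S0_S0_
  //   smaller<double> ->  _Z7smallerIdET_S0_S0_
  template <typename T>
  T smaller(T a, T b) { return a < b ? a : b; }

  int main()
  {
    return smaller(1, 2) + static_cast<int>(smaller(1.5, 2.5));
  }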

Please note that this _only_ applies to C++, and possibly also
native-compiled Java. 

Also note that Micros**t Virtual C--, Sun C++, and possibly others
use completely different ABIs.  Intel C++ and GCC have agreed to use
the same ABI, so that they are binary compatible.  Sometimes, Intel
does do The Right Thing(tm).  
 
Phew, that got a bit long - sorry about that. 

So I guess your original problem was that you didn't have CERNLIB
installed? 

Note that the libraries you get from CERN are _static_ libraries
(sigh) - ending in `.a'.  The ALICE experiment has done the job of
making shared versions of the libraries - ending in `.so'.  Hence,
the `configure' script checks for both, mainly as a service to ALICE
users. 

In general, shared libraries should be preferred over static
libraries (`configure' option `--enable-shared').

When you link a static library into another library or program, you
essentially copy all the binary stuff you need into the library or
executable.  With shared libraries, you simply make a reference to the
library and symbols.

If a program needs a shared library, the OS will load that library
into memory if it is not already present; if it is already there, it
just increments a reference count.  Hence, if many applications
(running at the same time) use the same shared library, you will
essentially save memory by using shared libraries over static ones. 
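
Here's a small illustration of that reference counting, using the
POSIX `dlopen' interface.  I use `libm.so.6' below only because it
is usually present on a GNU/Linux box; link the example with `-ldl'.

  #include <dlfcn.h>
  #include <stdio.h>

  int main()
  {
    // Opening the same shared library twice does not map it twice;
    // the dynamic loader just bumps a reference count and hands
    // back the same handle.
    void* h1 = dlopen("libm.so.6", RTLD_NOW);
    void* h2 = dlopen("libm.so.6", RTLD_NOW);
    if (!h1 || !h2) { printf("%s\n", dlerror()); return 1; }
    printf("same handle: %s\n", h1 == h2 ? "yes" : "no");
    dlclose(h2);   // decrements the count
    dlclose(h1);   // last close may unload the library
    return 0;
  }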

You'll also save disk space, as the library binary is only present
once - in the shared library - while for statically linked
executables, the library code is present as many times as there are
programs linked to it. 

Note that this only applies to modern OSes, like GNU/Linux, GNU/Hurd,
MacOS X, BeOS, and to some extent Windoze.  `Program launchers' like
DOS could never do that, as the OS is gone once a program executes. 

Yours, 

 ___  |  Christian Holm Christensen 
  |_| |	 -------------------------------------------------------------
    | |	 Address: Sankt Hansgade 23, 1. th.  Phone:  (+45) 35 35 96 91
     _|	          DK-2200 Copenhagen N       Cell:   (+45) 24 61 85 91
    _|	          Denmark                    Office: (+45) 353  25 305
 ____|	 Email:   cholm@nbi.dk               Web:    www.nbi.dk/~cholm
 | |


