Re: Re: CINT and overload resolution

From: Masaharu Goto (MXJ02154@nifty.ne.jp)
Date: Mon Apr 03 2000 - 14:05:57 MEST


Hello George and Rene,

just an update,

I am implementing the new overloading scheme. It is about 60% done and
working for some simple cases.

Now, I am concerned about backward compatibility. Overload resolution
behavior will change once the new implementation is activated. This is
one of the most complicated things in the C++ spec to implement, and it
will take time for it to settle to a reasonable quality.

Thank you
Masaharu Goto


>Hi Masa,
>I agree with George on this point. 
>Could you clarify what you mean by a performance penalty? Only when you
>generate the byte code, or when you execute it? Could you quantify how
>much overhead there would be in the case where the cost is at run time?
>I have no idea how much work it is to implement the same behaviour
>as the compilers, but I think it is important not to diverge from
>standard C++ on this point.
>
>Rene
>
>
>George Heintzelman wrote:
>> 
>> > Cint checks parameter matches from exact match down to user
>> > conversion, with all the arguments considered together at each
>> > level. In this case,
>> >     TPhCalKey("String","String",100)
>> > Cint searches in the following order
>> >     TPhCalKey(char*,char*,int)
>> >     template<class T, class E> TPhCalKey(T,T,E)
>> >     TPhCalKey(char*,char*,(any integral type))
>> >     TPhCalKey(void*,void*,(any numerical type))
>> >     TPhCalKey(anyUserConv(char*),anyUserConv(char*),anyUserConv(int))
>> >
>> > In this case, all 3 parameters matched via user defined conversions
>> > before Cint saw the true candidate. This behavior is not fully
>> > compliant with the standard, but it speeds up overload resolution in
>> > an interpreter environment. Please understand the speed advantage and
>> > stay with the current implementation.
>> 
>> Yes, I understand what CINT is doing. I'm claiming the opposite: I
>> think this particular deviation from the standard is unintuitive and
>> leads to subtle changes in code behavior between compiled and
>> interpreted versions, and that avoiding it is worth a small speed
>> penalty. At least for ROOT users, I suspect that little of the 'real
>> work' (the bulk of the CPU time) is spent in interpreted code.
>> Certainly for us (Phobos), we use scripts first to control and direct
>> a ROOT session, and second as a way to do fast prototyping, debugging
>> and testing. I think this deviation causes potential problems for both
>> of these uses.
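>> 
>> To make that concrete, here is a minimal sketch (hypothetical
>> overloads, not from real code) of a call where a stage-by-stage
>> scheme and a standard-conforming compiler can disagree:
>> 
>>     void h(int i, double d);  // exact on arg 1, int->double on arg 2
>>     void h(long l, long m);   // integral conversion on both args
>> 
>>     h(1, 2);  // the standard picks h(int,double): at least as good
>>               // on every argument and strictly better on the first;
>>               // a scheme requiring all arguments to match at the
>>               // same level finds no exact-match candidate and then
>>               // sees both candidates at the conversion level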
>> 
>> Furthermore, the standard says that a function call with a genuine
>> ambiguity is an error and must be diagnosed. CINT's current behavior
>> here is to accept an ambiguous call, pick one candidate essentially at
>> random, and issue no diagnostic. Solving the problem of sub-resolution
>> within each class of conversion would fix this second deviation from
>> the standard as well.
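>> 
>> For instance (again hypothetical), a conforming compiler must reject
>> 
>>     void f(int i, double d);
>>     void f(double d, int i);
>> 
>>     f(1, 1);  // ambiguous: each candidate is better on one argument
>> 
>> with a diagnostic, while CINT today would silently pick one of the two.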
>> 
>> Is the speed advantage really that big a deal? Most functions (except
>> constructors) have at most a few overloads, and even for constructors
>> the cases where more complicated resolution is needed shouldn't be all
>> that common. I wouldn't expect the time spent in overload resolution
>> to grow much except in pathological cases: all you need to do is make
>> a list of the candidates matching at a given stage, and make a single
>> pass through them to find the best one if there is more than one, as
>> in the sketch below. From profiling, you should be able to say how
>> much time CINT currently spends in overload resolution versus its
>> other tasks in typical cases; is it really significant?
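>> 
>> For what it's worth, here is a rough sketch (invented names, assuming
>> each viable candidate already carries a per-argument conversion rank)
>> of the pass I have in mind. Because "better than" is only a partial
>> order, a second, confirming pass is needed to detect ambiguity:
>> 
>>     #include <cstddef>
>>     #include <vector>
>> 
>>     enum Rank { EXACT, PROMOTION, STDCONV, USERCONV };
>> 
>>     // One entry per argument: the conversion rank the candidate
>>     // needs for that argument (smaller value = better match).
>>     typedef std::vector<Rank> Ranks;
>> 
>>     // True if a is at least as good as b on every argument and
>>     // strictly better on at least one.
>>     bool betterThan(const Ranks& a, const Ranks& b) {
>>         bool better = false;
>>         for (size_t i = 0; i < a.size(); ++i) {
>>             if (a[i] > b[i]) return false;
>>             if (a[i] < b[i]) better = true;
>>         }
>>         return better;
>>     }
>> 
>>     // Index of the unique best viable candidate, or -1 if the
>>     // call is ambiguous (no candidate beats all the others).
>>     int bestCandidate(const std::vector<Ranks>& viable) {
>>         if (viable.empty()) return -1;
>>         size_t best = 0;
>>         for (size_t i = 1; i < viable.size(); ++i)
>>             if (betterThan(viable[i], viable[best]))
>>                 best = i;
>>         for (size_t i = 0; i < viable.size(); ++i)
>>             if (i != best && !betterThan(viable[best], viable[i]))
>>                 return -1;   // ambiguous: diagnose, don't guess
>>         return (int)best;
>>     }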
>> 
>> George


