Re: [Fwd: Re: Wikipedia criticism about root]

From: Andy Buckley <>
Date: Thu, 29 Jun 2006 20:40:07 +0100

cstrato wrote:
> It is really sad that you did not cc your reply to roottalk :-(

I did. And this one, too.

> You are correct that C++ was never designed to be interpreted.
> This means that CINT is even more ingenious to achieve this.

Or misguided.

> I do not want to use Python, Perl, Ruby or any other slow language,
> I want to be able to code C++ efficiently, and this is what CINT
> offers to me.

For a long time you couldn't use, for example, parenthetic initialisers on primitive types, such as int foo(3);, without introducing virtually untraceable errors in CINT. You still can't use templates. Given any remotely complex C++ construct I'd maybe give it a 40% chance of doing exactly what it should.

I don't know to what extent CINT is unit tested or peer reviewed. I don't know of any reference as to which bits of C++ are supported in CINT, and I don't trust the interpreter to throw errors at all unsupported constructs. So my only manual for "CINT-script" is the C++ language reference and a painful process of trial and error to find which bits work and which don't. Hmm.

For Python I have a much greater degree of confidence, not least because the test community is much larger and more communicative; my comparative figure of confidence there is impressively close to 100%. Opinion, but based on quite a bit of experience of using both languages.

Incidentally, I suspect that the ROOT-Python/Ruby bindings indicate that similar views are held elsewhere.

> BTW, the NeXT people have shown that Obj-C is much better and easier
> to use than C++, and has all advantages of Java w/o the disadvantages,
> but as is, C++ is the standard, and Java not suitable for writing
> large applications.

What does that have to do with anything I've been talking about here? The only remotely related point I can think of is that I've been saying that clear, modern interpreted languages (like Python) make better interactive UIs than C++, which was never designed for that purpose. And that the speed problems of interpreted languages can be largely mitigated by handling the critical-path computations in fast, compiled library objects generated (for example) from complex, powerful, type-safe C++. Exactly the path taken by SciPy/NumPy and HippoDraw/NumPy. I'm not sure that's remotely controversial, but have a go if you must :-)
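To make the pattern concrete, here is a minimal sketch (my own, with NumPy standing in for the compiled library layer): the same computation written as a pure-Python loop, and again with the inner loop delegated to NumPy's compiled core. The function names are illustrative, not from any library:

```python
import numpy as np

def mean_square_python(xs):
    """Pure-Python inner loop: every iteration runs in the interpreter."""
    total = 0.0
    for x in xs:
        total += x * x
    return total / len(xs)

def mean_square_numpy(xs):
    """Same computation, but the loop executes inside NumPy's compiled C core."""
    a = np.asarray(xs, dtype=float)
    return float(np.mean(a * a))

data = [1.0, 2.0, 3.0, 4.0]
# Both give the same answer; for large arrays the second is far faster
# because the critical path never re-enters the interpreter.
assert abs(mean_square_python(data) - mean_square_numpy(data)) < 1e-12
```

The interactive, interpreted layer stays expressive while the hot loop runs at compiled speed, which is the division of labour described above.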

The "no silver bullet" idea is a well-established one in software engineering, and neither C++, Python, HippoDraw, etc. --- nor ROOT --- serves as a counter-example. I'd be a fool to argue that they did.


PS. cstrato and I seem to have assumed the twin roles of Chief Arguer here. I think it would be of benefit if others --- including some of those who've mailed me personally with supportive messages, and the main ROOT developers, who presumably (I hope!) have opinions on these issues --- could add to the debate. Otherwise this whole affair is a waste of words... a fact which I suspect is not lost on the more prominent silent parties ;-)

Received on Thu Jun 29 2006 - 21:40:22 MEST

This archive was generated by hypermail 2.2.0 : Mon Jan 01 2007 - 16:31:59 MET