Anton Fokin wrote:
> ... snip ...
> But it is quite hard to explain the "one write many reads" issue to an
> average customer of the framework, which is supposed to be general enough
> to serve people doing different things (including university researchers
> in finance, for example).

Hi Anton,

Have you tried explaining to an "average customer" something along the
following lines: data that arrive in real time and cannot be reproduced
later should NEVER be overwritten or corrected. This is what we call a
primary dataset in HEP, and it is why "one write many reads" is a very
natural concept in experimental physics.

We also have the notion of a secondary dataset, which is a derivative
(a selection, or the result of reprocessing) of a primary dataset.
Secondary datasets, unlike primary ones, can be recreated as many times
as needed, so correcting the "raw" data never becomes an issue.

Best,
Pasha
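To make the distinction concrete for that "average customer", here is a minimal sketch of the idea in Python. The class and function names (`PrimaryDataset`, `derive_secondary`) are hypothetical illustrations, not part of any framework: the primary dataset is append-only and refuses in-place corrections, while a secondary dataset is just a reproducible derivation that can be regenerated whenever the selection changes.

```python
# Hypothetical sketch of "one write, many reads": primary data is
# append-only and never overwritten; secondary data is re-derivable.

class PrimaryDataset:
    """Write-once storage: records can be appended, never replaced."""

    def __init__(self):
        self._records = []

    def append(self, record):
        self._records.append(record)

    def __setitem__(self, index, value):
        # Corrections to raw data are forbidden by construction.
        raise TypeError("primary data is write-once; no in-place corrections")

    def records(self):
        # Readers get a copy, so they cannot mutate the primary data.
        return list(self._records)


def derive_secondary(primary, selection):
    """A secondary dataset: recreate it as often as needed from the primary."""
    return [r for r in primary.records() if selection(r)]


raw = PrimaryDataset()
for energy in [12.5, 3.1, 45.0, 7.7]:
    raw.append(energy)

# Rerun the selection with improved cuts at any time; the raw data is untouched.
high_energy = derive_secondary(raw, lambda e: e > 10.0)
print(high_energy)  # [12.5, 45.0]
```

The point the sketch makes is that nothing is ever lost by a bad selection: throw the secondary list away, change the cut, and derive it again from the same untouched primary records.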
This archive was generated by hypermail 2b29 : Tue Jan 01 2002 - 17:50:38 MET