As the date of the first LHC collisions steadily approaches, preparing
the end-user data analysis environment is becoming more urgent.
Over the past few years, the PROOF system has been developed to provide
a seamless extension of the single-user ROOT session to large clusters.
Using PROOF, the analysis is carried out in parallel on the cluster,
thereby reducing the analysis time and increasing the amount
of data that can be processed.
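For illustration only, the sketch below shows how a ROOT macro might hand its analysis over to a PROOF cluster; the master host name, tree name, file location and selector are placeholder assumptions, and the details depend on the local setup.

// proof_example.C -- minimal sketch, not a prescription.
// "proof-master.example.org", the "Events" tree, the file URL and
// MySelector.C are all hypothetical placeholders.
void proof_example()
{
   // Open a session on the PROOF cluster instead of running locally.
   TProof *proof = TProof::Open("proof-master.example.org");

   // Build a chain of trees stored on the cluster.
   TChain *chain = new TChain("Events");
   chain->Add("root://se.example.org//data/run1/*.root");

   // Redirect processing of the chain to the PROOF session; the
   // selector is distributed to the workers and executed in parallel.
   chain->SetProof();
   chain->Process("MySelector.C+");
}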
Meanwhile, the computing landscape is changing with the arrival
of dual- and quad-core CPUs, and desktop machines with 32 or more cores are
on the horizon. To harness all these cores, programs must be multi-threaded
and parallelized. PROOF already runs efficiently on 8-core machines and
is designed to scale to many more.
This small workshop is organized to bring together people who plan to
run PROOF as a Central Analysis Facility (CAF) on hundreds of nodes with
terabytes of disk, and those who plan to run it on small and medium-sized
departmental clusters (Tier 2 and 3) for physics workgroup analysis.
The main workshop topics are data set management, user and resource
scheduling, the best hardware to run PROOF on, cluster monitoring and management,
experiment analysis models and frameworks, missing features, and user feedback.