[this is what interested me (G.R.)]

a plenary session of invited oral presentations
-----------------------------------------------

High Energy Physics and Computing
=================================
Perspectives from DOE
Dr. Glen Crawford

Simulations for Science
- The analysis of Cosmic Microwave Background data depends on computationally challenging simulations
- Researchers generated the first comprehensive simulation of the ongoing ESA/NASA Planck mission

Applications Beyond HEP
- Geant4 is freely available to the public and has found important uses in industry
- Aerospace and medical-device companies use the software in their work; Boeing and Lockheed Martin use it to study the effects of cosmic rays on the electronics in satellites
- Geant4 Application for Tomographic scanning

HEP as an Early Adopter of Computing
- Automated bubble chamber scanning, starting in the 1970s

HEP as a Leader in Development
- First research discipline-wide computer network, precursor of modern research networks (1980s)

HEP as a Leader in Implementation
- Large collaborations (and QCD) drove rapid development of large, cost-effective computing
- Highly parallel computing "farms" (1990s); note: lattice QCD farms => IBM Blue Gene
- Large-scale implementation of distributed (grid) computing (2000s)

Computational High Energy Physics at HEP
----------------------------------------
- Scientific Discovery through Advanced Computing (SciDAC): new proposals submitted under the SciDAC 3 solicitation are under consideration, for research to advance the HEP mission by fully exploiting leadership-class computing resources in the areas of Cosmic Frontier scientific simulations, lattice gauge theory research, and accelerator science modeling and simulation
- General HEP computing: addresses current community needs for event generators, data tools, distributed computing, networks and software

HEP Computing
=============
René Brun

Software Evolution in HEP: From Mainframes =====> Clusters

- 1994: move to C++. Painful but
successful, even if it took 10 years!
  - Missing support for reflection
  - Abuse of inheritance
  - Long learning phase to make modular shared libraries
  - Abuse of "new", generating scattered structures
  - Missing concept of ownership (memory leaks, double deletes; complex algorithms during I/O or deep copy; no way to visualize a tree/graph structure)
- Development process
  - Software committees generate more bureaucracy than practical results
  - Launching a new major system requires agility
  - Software metrics and dynamic & static code analyzers (e.g. Coverity) are essential
  - User support with instantaneous response time is a must

Systems in 2030? Keyword: parallelism
- One such project is GEANT5, launched 18 months ago.
- Compare the results with the most efficient sequential version, not just with the version using one single thread.
- Prove that you use an 8-core node more efficiently with one job than by running 8 independent jobs (memory, CPU, I/O).

New Computing Models and LHCOne
===============================
Ian Fisk

Over its development, the WLCG production grid has oscillated between structure and flexibility.

Old model:
- Foresaw tiered computing facilities to meet the needs of the LHC experiments
- Assumes poor networking
- Hierarchy of functionality and capability

New model:
- Divisions in functionality become more blurry, especially for chaotic activities like analysis
- More access over the wide area

Services like the Data Popularity Service track all file accesses and can show what data is accessed and for how long.

Dynamic data placement -> reduction in the amount of disk needed
Wide-area access -> with optimized I/O, other methods of managing the data and the storage become available; it is not immediately obvious that this increases wide-area network transfers

The challenge of HEP: a lot of data in a highly distributed environment
- Hard to maintain many static copies
- Need to be able to make dynamic replicas and clean them up
- Need to access data over
long distances
- Trying to make networking more predictable: enter LHCOne

Changes: the new model has less structure
- More options for where data comes from
- More flexibility in where activities happen
This lack of structure places more expectations on support services like networking and data management
- Developing more advanced services for management and transport
- The network has been very reliable; programs like LHCOne try to keep it that way

Outlook
- More flexible and dynamic use of the available resources will make more efficient use of them
- All these actions try to ensure that we do not introduce artificial separations
- This should put us in a better position to make use of computing services we do not control: clouds and opportunistic computing

VC in HEP: Status and Perspectives
==================================
Philippe Galvez, Caltech

Current VC services used by HEP:
- Dedicated H.323 services (Renater/IN2P3/INRA/Inserm/CNRS)
- Vidyo, operated by CERN. Starting in 2008, Vidyo was presented as a possible product to serve the LHC and CERN-related experiments by CERN/IT, which had been mandated to look for a commercial alternative to EVO. As of today, the product still appears to be missing some key functionalities to be a full release candidate. Nevertheless, some research groups at CERN are starting to use the service.
- EVO (Enabling Virtual Organization), released in 2007

From EVO to SeeVogh
-------------------
In order to continue to expand and operate the current EVO service and provide enhancements, private funding is now needed. Evogh, Inc.
was formed by January 2013.

Computing the Universe
======================
Adrian Pope

Computational cosmology from a "particle physics" perspective:
- Primary research target: cosmological signatures of physics beyond the Standard Model
- Structure formation probes: exploit the nonlinear regime of structure formation
- Precision cosmology: "inverting" the 3-D sky
- Computing the universe: simulations for surveys

Simulating the universe with high-performance computing: HACC (Hybrid/Hardware Accelerated Cosmology Code)

Architectures and algorithms:
- IBM Cell Broadband Engine accelerator
- IBM Blue Gene/Q
- GPGPU

Future Experiments and Impact on Computing
==========================================
Johan Messchendorp

PANDA online computing: software trigger algorithms (trigger-less data acquisition), i.e. continuous data sampling with a self-triggered detector.

a number of parallel sessions comprising oral and poster presentations
----------------------------------------------------------------------

Software Engineering, Data Stores and Databases
###############################################

Cling – The New C++ Interpreter for ROOT 6
==========================================
V. Vassilev

Cling is better than CINT
-------------------------
- Full C++ support
- Correctness
- Better type information and representations
- Always compiles in memory
- Much less code to maintain

Cling's dual personality
------------------------
- An interpreter: looks like an interpreter and behaves like an interpreter
- More than an interpreter: built on top of compiler libraries (Clang and LLVM)

Cling in the world
------------------
- Announced in July 2011 as a working C++ interpreter
- Cling and OpenGL
- Cling and Qt
- MATLAB-to-C++ translator
- Regular bug reports from outside HEP

Cling + ROOT = ROOT 6

A CMake-based build and configuration framework
===============================================
M. Clemencic

CMT, a Configuration Management Tool:
* manages concurrent versions of projects and packages
* dynamic runtime environment
Special feature of CMT:
* a project can override packages from the projects it uses
Used extensively in LHCb:
* pick up bug fixes before releases
* lightweight development environment
CMT has its limitations:
* OK on small projects, but very slow on big ones
* limited logic in the configuration language
CMake is powerful and widely used (e.g. by KDE). Cons:
* no support for the runtime environment
* cannot override targets
* transitivity of libraries, but not of includes
Some things fit and some do not, but the language and the features are powerful enough to outweigh the limitations.

Structured Storage in ATLAS Distributed Data Management
=======================================================
Mario Lassnig

Non-relational modelling and storage of data: use the native data layout of an application.
Main problems addressed:
* there is an upper limit to the processing power you can put in a single node
* explicit partitioning can be cumbersome
* query plans need information about the data contents
Three technologies evaluated:
* MongoDB (10gen, Inc.)
* Cassandra (Apache Software Foundation, formerly Facebook)
* Hadoop with HBase (Apache Software Foundation, formerly Yahoo)
Hadoop is a framework for distributed data processing; it is not a database like MongoDB or Cassandra.
Use cases: log-file aggregation, trace mining, wildcard search, accounting.

ROOT I/O in JavaScript: Reading ROOT Files in Any Browser
=========================================================
Bertrand Bellenot

* How to share thousands of histograms on the web without having to generate picture files (gif, jpg, ...)?
* How to easily share a ROOT file?
* How to browse & display the content of a ROOT file from any platform (even from a smartphone or tablet)?
* Online monitoring?
* And obviously, all that without having to install ROOT anywhere

Solution:
* HTML & JavaScript: JSROOTIO.js
* copy the ROOT file onto any plain web server
* data is transferred over the web
* visualization happens on the client side

New software library of geometrical primitives for modelling of solids used in Monte Carlo detector simulations
===============================================================================================================
Marek Gayer

* Optimize, and guarantee better long-term maintenance of, the ROOT and Geant4 solids libraries
* Create a single library of high-quality implementations
* Create an extensive testing suite

Computer Facilities, Production Grids and Networking
####################################################

Review of CERN Computer Centre Infrastructure
=============================================
Tim Bell

Maintenance costs for home-grown tools are too high:
* the CERN computer centre is no longer leading edge in size
* meanwhile, many open-source solutions are available
* Puppet: large user and support community
* better chances on the job market!

////////////////////////////////////////////////////////////////////////
http://www.chep2012.org/program.php
A conference timetable (with the track summaries as well) can be found on the CHEP Indico timetable page:
http://indico.cern.ch/conferenceTimeTable.py?confId=149557#all.detaile
////////////////////////////////////////////////////////////////////////