| Paper | Title | Page |
|---|---|---|
| TUB001 | Accelerator Data Foundation: How It All Fits Together | 61 |
|
Since 2003, a coherent data management approach has been envisaged for the needs of installing, commissioning, operating and maintaining the LHC. Data repositories in the distinct domains of physical equipment, installed components, controls configuration and operational data have been established to cater for these different aspects. The interdependencies between the domains have been implemented as a distributed database. This approach, based on a very wide data foundation, has been used for the LHC and is being extended to the CERN accelerator complex.
|
Slides
| TUB002 | The National Ignition Facility Data Repository | 66 |
|
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. This presentation discusses the design, architecture, and implementation of the NIF Data Repository (NDR), which provides for the capture and long-term digital storage of peta-scale datasets produced by conducting experimental campaigns. The NDR is a federated database that provides for the capture of experimental campaign plans, machine configuration and calibration data, raw experimental results, and the processed results produced by scientific workflows. The NDR provides for metadata, pedigree, quality, effectivity, versioning and access control for each of the data categories. A critical capability of the NDR is its extensive data provisioning capabilities and protocols, which enable scientists, local and remote alike, to review the results of analysis produced by the NDR's analysis pipeline or to download datasets for offline analysis. The NDR provides for the capture of these locally produced analysis results to enable both peer review and follow-on automated analysis.
| TUB003 | Event-Synchronized Data-Acquisition System for SPring-8 XFEL | 69 |
|
We report the status and the upgrade of the event-synchronized data-acquisition system for the accelerator control of XFEL/SPring-8. Because the XFEL is based on a linac, most of the equipment is driven in pulsed operation. The stability of the equipment is critically important to achieve and stabilize FEL lasing. We need a fast data-acquisition system to take a set of RF signals and beam monitor signals synchronized with the same electron beam shots. For this purpose, an event-synchronized data-acquisition system has been introduced to the control system of the SCSS test accelerator, an XFEL prototype machine. The system consists of a data-filling computer, a relational database server, VME-based shared memory boards and a distributed shared memory network. So far, a total of 54 signals from the beam monitoring system have been successfully collected in synchronization with the 60 Hz beam operation cycle. The accumulated data have been used quite effectively for fast feedback correction of beam trajectories and energy. Signals from the RF systems will be taken by the upgraded data-acquisition system utilizing a distributed memory-cache system.
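The core idea of event-synchronized acquisition — tagging every reading with the shot (trigger event) number delivered by the timing system so that signals from different front-ends can be correlated per beam shot before filling the database — can be sketched as follows. This is an illustrative sketch only; the class and signal names are invented and do not reflect the actual SPring-8 implementation.

```python
from collections import defaultdict

# Illustrative sketch: front-ends push readings tagged with the shot
# number; a buffer groups them so one complete row per shot is produced,
# ready to be filled into the relational database.
class ShotBuffer:
    def __init__(self, n_signals):
        self.n_signals = n_signals          # signals expected per shot
        self.pending = defaultdict(dict)    # shot number -> {signal: value}

    def push(self, shot, signal, value):
        """Record one reading; return (shot, row) once the shot is complete."""
        self.pending[shot][signal] = value
        if len(self.pending[shot]) == self.n_signals:
            return shot, self.pending.pop(shot)
        return None

buf = ShotBuffer(n_signals=2)
assert buf.push(100, "bpm1", 0.12) is None   # shot 100 still incomplete
shot, row = buf.push(100, "bpm2", -0.05)     # second signal completes the shot
```

Grouping by shot number rather than by arrival time is what makes the correlation robust when readings from different VME crates arrive out of order.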
| TUB004 | CERN's Global Equipment Data Repository | 72 |
|
Infor EAM/MTF is the official equipment maintenance and asset tracking system at CERN. It has become a central repository for data gathered throughout the equipment's entire lifecycle: specifications, nominal values, manufacturing data, results of tests and measurements, non-conformities, commissioning data, logging of breakdowns, maintenance interventions, etc. Even though it was not explicitly designed for control applications, Infor EAM/MTF complements their information systems. In addition, some of the data logged by control applications that might be of interest to other systems can be made publicly available through MTF. Major benefits of the collaboration between Infor EAM/MTF and control systems have been the enforcement of the CERN equipment naming conventions and standardized access to data regardless of the type of equipment. Data in Infor EAM/MTF are made available to control clients via web interfaces or, in more tailored and efficient ways, by directly accessing the Oracle database upon which Infor EAM/MTF is built. Some examples of data sharing between Infor EAM/MTF and control applications are presented in this paper.
| TUB005 | Experimental Data Storage Management in NeXus Format at Synchrotron SOLEIL | 75 |
|
At Synchrotron SOLEIL, twenty beamlines are already in operation and daily produce many experimental data files, ranging from a few MB up to 10 GB. Almost all experimental data are stored using the NeXus data format [1]. It is a logical format based on the physical format HDF, able to store any kind of scientific data produced at neutron or synchrotron sources. Moreover, the NeXus format allows recording experimental data together with all the needed context, such as experiment, instrumentation, sample and user information. Several tools have been developed or are under development at SOLEIL: a web data browser to retrieve, browse and download data, and a command-line tool to export data from NeXus into ASCII or binary data files.

The storage system is fully integrated into the Tango bus [2] through a set of dedicated devices. Therefore it is possible to record any data coming from Tango devices into a NeXus file. The storage system is designed to allow recording data from various data sources using a plug-in system. In terms of hardware, a high-availability system based on the innovative cellular storage concept "Active Circle" [3] has been in use since December 2006.

[1] http://www.nexusformat.org/MainPage
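The abstract describes NeXus as a logical layer over the physical HDF format: HDF groups are annotated with an `NX_class` attribute (`NXentry`, `NXsample`, `NXdata`, ...) so that generic tools can navigate any file. A minimal sketch of that structure, written with the h5py Python library — group names and metadata here are illustrative, not SOLEIL's actual schema:

```python
import h5py

def write_scan(path, counts):
    """Write a minimal NeXus-style file: one entry with sample metadata
    and a plottable data group, following the NX_class convention."""
    with h5py.File(path, "w") as f:
        entry = f.create_group("entry")
        entry.attrs["NX_class"] = "NXentry"
        # contextual information stored alongside the measured data
        sample = entry.create_group("sample")
        sample.attrs["NX_class"] = "NXsample"
        sample.create_dataset("name", data="illustrative sample")
        # the measured data, with "signal" marking the default plottable
        data = entry.create_group("data")
        data.attrs["NX_class"] = "NXdata"
        data.attrs["signal"] = "counts"
        data.create_dataset("counts", data=counts)

write_scan("scan.nxs", [1, 5, 3])
```

Because the class markers are plain HDF attributes, a browser or export tool only needs HDF support plus knowledge of the NeXus conventions to interpret files from any instrument.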
| TUB006 | Marrying a High Performance Computer with Synchrotron Beamlines | 78 |
|
A high-performance computing facility is being established to serve the imaging needs of beamlines at the Australian Synchrotron. It is planned to schedule the high-performance computing together with the experimental procedures in order to provide a near-real-time imaging response to experimental data collection. In this way, significantly greater efficiency in the use of the experimental facilities can be achieved. It is also intended to utilise non-scheduled beamtime to provide a centre of high-performance imaging excellence for other scientific imaging needs in the geographic region. A petabyte data store, fast links to the researchers' home institutes, an OptIPortal, 3D imaging resources and grid portals are also provided to researchers.