MOB  —  Status Reports I   (12-Oct-09   10:40—12:05)

Paper Title Page
MOB001 ITER Controls Design Status 1
 
  • W.-D. Klotz, H. Dave, F. Di Maio, H. K. Gulati, C. Hansalia, J. Y. Journeaux, K. Mahajan, L. Scibile, D. Stepanov, A. Wallander, I. Yonekawa
    ITER, St Paul lez Durance
  • D. Joonekindt
    Atos Origin, Meylan
 
  The ITER Control, Data Access and Communication (CODAC) system passed its conceptual design review in late 2007. Since then a local CODAC group has been formed on the ITER site, currently consisting of 12 staff representing four of the seven parties. The work of transforming the conceptual design into an engineering design is now in full swing. The primary current focus is on standardization of the development process as well as of the hardware and software components for instrumentation and control. The system will provide the services and communication functions needed to orchestrate and integrate the Plant Systems, currently estimated at 165, which will be delivered 'in kind' by the seven ITER Parties; the latter poses the largest challenge of the project. This paper details the architecture of the system, reports on the standards selected, presents the activities, strategy and technology choices made during the last year, and outlines the plans ahead.
Slides
MOB002 Status Report of the LMJ (Laser Mégajoule) Control System 1
 
  • J. J. Dupas, J. I. Nicoloso
    CEA, Arpajon
  • J. C. Picon, F. P. Signol
    CESTA, Le Barp
 
  The French Commissariat à l'Énergie Atomique (CEA) is currently building the Laser Mégajoule (LMJ), a 240-beam laser facility, at the CEA CESTA laboratory near Bordeaux. It is designed to deliver about 2 MJ of energy to targets for high-energy-density physics experiments, including fusion experiments. The LMJ technological choices were validated with the LIL, a full-scale prototype of one LMJ bundle. The construction of the LMJ building itself is now complete and the assembly of laser components is ongoing. The presentation gives an overview of the control system architecture and focuses on the way it was divided among the dozen or so contractors involved in the LMJ design. We will also discuss how we tried to preserve system consistency by developing a common framework shared by the different contractors. This framework is based on the PANORAMA industrial SCADA and uses WCF technology for subsystem communication. It is intended to integrate all the common components and rules for configuring, controlling and operating a large facility, so that developers only have to concentrate on process-specific tasks.
Slides
MOB003 Status of the LHC Power Converter Controls 4
 
  • Q. King
    CERN, Geneva
 
  The LHC has more than 1700 power converters spread around the 27 km machine. Controlling them all is an unprecedented challenge due to the part-per-million level accuracy required for the main circuits and because the majority of the systems are exposed to significant levels of radiation. The project started in 1996 and consumed 7 MSF and around 50 man-years. The architecture chosen is similar to the one used successfully in LEP, with one embedded controller per power converter linked by fieldbuses to middle-tier gateway systems. This paper presents a summary of the architecture and the results achieved during the commissioning of the LHC from 2006 to 2008. The system contains numerous enhancements compared to LEP, including: digital regulation of current; automatic configuration based on a machine-readable inventory; extensive remote diagnostics of power converter and controller faults; and distribution of time of day, timing events, software updates and power-cycle requests over the WorldFIP fieldbus. The paper reports on what worked well, what could have been designed better, and what are expected to be the important issues for exploitation in the future.
Slides
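The abstract above mentions digital regulation of the converter current in the embedded controllers. Purely as a minimal sketch of what sampled-data current regulation can look like in general (this is not the FGC algorithm used at CERN; the struct, the gains and limits, and the toy plant model below are all hypothetical), a PI loop with output clamping and integrator anti-windup might be structured as follows:

```cpp
// Illustrative sketch only: a sampled PI current regulator with output
// clamping and integrator anti-windup. This is NOT the LHC power converter
// controller algorithm; names, gains and limits are hypothetical placeholders.
#include <algorithm>
#include <cstdio>

struct PiRegulator {
    double kp;      // proportional gain [V/A]
    double ki;      // integral gain [V/(A*s)]
    double dt;      // regulation period [s]
    double v_min;   // lower actuator (voltage reference) limit [V]
    double v_max;   // upper actuator limit [V]
    double integral = 0.0;

    // One regulation step: returns the voltage reference sent to the converter.
    double step(double i_ref, double i_meas) {
        const double error = i_ref - i_meas;
        const double v_unclamped = kp * error + integral;
        const double v = std::clamp(v_unclamped, v_min, v_max);
        if (v == v_unclamped) {
            // Anti-windup: only integrate while the output is not saturated.
            integral += ki * error * dt;
        }
        return v;
    }
};

int main() {
    PiRegulator reg{/*kp=*/0.5, /*ki=*/20.0, /*dt=*/1e-3, /*v_min=*/-10.0, /*v_max=*/10.0};
    // Toy first-order magnet/converter model, purely for demonstration.
    const double plant_gain = 2.0;   // steady-state current per volt [A/V]
    const double tau = 0.05;         // plant time constant [s]
    double i_meas = 0.0;             // measured current [A]
    for (int k = 0; k <= 500; ++k) {
        const double v = reg.step(/*i_ref=*/10.0, i_meas);
        i_meas += (plant_gain * v - i_meas) * (reg.dt / tau);  // Euler update
        if (k % 100 == 0) std::printf("t=%.2f s  I=%.3f A\n", k * reg.dt, i_meas);
    }
    return 0;
}
```

In a real converter controller the regulation period, gains and limits would come from configuration data such as the machine-readable inventory mentioned in the abstract rather than being hard-coded.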
MOB004 Commissioning of the New Control Systems for the PETRA3 Accelerator Complex at DESY 7
 
  • R. Bacher
    DESY, Hamburg
 
  At DESY the existing accelerator complex has been upgraded and partially rebuilt to become the high-brilliance 3rd-generation light source PETRA3. The pre-accelerators were successfully re-commissioned during the second half of 2008, while PETRA3 provided stored beam for the first time in April 2009. In the context of the PETRA3 project, the control systems of all accelerators involved have been rebuilt, and radical and significant changes have been introduced at all levels. Key elements of the chosen architecture and technologies include: TINE as the core control system software suite; Java as the principal programming language for implementing graphical operator applications as well as many device and middle-layer servers (other device servers making use of C, C++, VB, and LabVIEW); integrated Matlab and light-weight dynamic web-based applications; generic device access; CANopen as the interface for standard process control; more than 200 Libera Brilliance beam position internet appliances; and integrated high-bandwidth video transmission. The paper reports on the experiences gained so far during the commissioning of the new control systems.
Slides
MOB005 The Compact Muon Solenoid Detector Control System 10
 
  • R. Gomez-Reino, B. Beccati, E. Cano, M. Ciganek, S. Cittolin, J. A. Coarasa, C. Deldicque, D. Gigi, F. Glege, J. Gutleber, Y. L. Hwong, J. F. Laurens, F. Meijers, E. Meschi, R. Moser, L. Orsini, A. Racz, H. Sakulin, C. Schwick, M. Simon, M. Zanetti
    CERN, Geneva
  • G. Bauer, C. Loizides, C. Paus, J. F. Serrano Margaleff, K. Sumorok
    MIT, Cambridge, Massachusetts
  • U. Behrens, D. Hatton, A. Meyer
    DESY, Hamburg
  • K. Biery, H. Cheung, J. A. Lopez-Perez, R. K. Mommsen, V. O'Dell, D. Shpakov
    Fermilab, Batavia
  • J. Branson, A. Petrucci, M. Pieri, M. Sani
    UCSD, La Jolla, California
  • S. Erhan
    UCLA, Los Angeles, California
 
  The Compact Muon Solenoid (CMS) experiment at CERN is one of the Large Hadron Collider multi-purpose experiments. Its large subsystems add up to around 6 million Detector Control System (DCS) channels to be supervised. A cluster of ~100 servers is needed to provide the required processing resources. To cope with this size, a scalable approach has been chosen, factorizing the DCS system as much as possible. CMS DCS makes a clear division between computing resources and functionality through a computing framework into which functional components are plugged. DCS components are developed by the subsystem expert groups, while the computing infrastructure is developed centrally. To ease the component development task, a framework based on PVSS II [1] has been developed by the CERN Joint Controls Project (JCOP) [2]. This paper describes the current status of the CMS Detector Control System, giving an overview of the DCS computing infrastructure, the integration of the DCS subsystem functional components, and the experience gathered so far.
Slides
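The abstract above describes factorizing the DCS into functional components developed by subsystem expert groups and plugged into a centrally developed computing framework. As a generic sketch of that pattern only (the CMS system is built on PVSS II and the JCOP framework, not on custom C++ like this, and all class and function names below are hypothetical), the split between a shared framework and pluggable components can be expressed as a common component interface plus a supervisor that drives whatever has been registered:

```cpp
// Generic illustration of a plug-in component architecture, in the spirit of
// "central framework + subsystem functional components". This is NOT the
// CMS PVSS II / JCOP framework; all names here are hypothetical.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Interface every subsystem component implements; the central framework
// only knows about this contract, not about subsystem specifics.
class DcsComponent {
public:
    virtual ~DcsComponent() = default;
    virtual std::string name() const = 0;
    virtual void configure() = 0;   // load channel lists, alarm limits, ...
    virtual void supervise() = 0;   // one supervision cycle over the channels
};

// Central framework: owns the shared infrastructure and drives registered components.
class DcsSupervisor {
public:
    void registerComponent(std::unique_ptr<DcsComponent> c) {
        components_.push_back(std::move(c));
    }
    void run() {
        for (auto& c : components_) c->configure();
        for (auto& c : components_) {
            std::cout << "supervising " << c->name() << '\n';
            c->supervise();
        }
    }
private:
    std::vector<std::unique_ptr<DcsComponent>> components_;
};

// Example subsystem component, as it might be written by an expert group.
class TrackerHighVoltage : public DcsComponent {
public:
    std::string name() const override { return "tracker high voltage"; }
    void configure() override { /* read channel configuration */ }
    void supervise() override { /* poll channels, publish states, raise alarms */ }
};

int main() {
    DcsSupervisor supervisor;
    supervisor.registerComponent(std::make_unique<TrackerHighVoltage>());
    supervisor.run();
    return 0;
}
```

The point of such a factorization is that the supervisor and the shared infrastructure are written once, centrally, while each expert group only implements the component interface for its own subsystem.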