WED  —  Safety/High Reliability and Fabric Management   (14-Oct-09   16:30—18:00)

Paper Title Page
WED001 Measured Performance of the LHC Collimators Low Level Control System 612
 
  • A. Masi, R. Losito, S. Redaelli
    CERN, Geneva
 
  The LHC will be protected against uncontrolled beam losses by the collimation system, which consists of more than 100 collimators, each equipped with two movable jaws of different materials. The nominal LHC stored beam energy is 362 MJ, so great care has been taken in optimizing the performance of the low-level control system. It controls the position and angle of the jaws with an accuracy of a few microns, and monitors the actual position for errors with respect to the desired position at a rate of up to 100 Hz, triggering a fast beam dump in case of problems. We chose stepping motors for accurate open-loop positioning, while LVDTs and resolvers monitor the axes. The National Instruments PXI platform has been adopted as the real-time low-level hardware. In this paper we describe the control architecture and the custom low-level solution implemented on the FPGA, and we provide a detailed performance review of the entire system. In particular, we present the excellent synchronization of several hundred motors over a profile of about 30 minutes simulating the nominal energy ramp of the LHC, and show that the position error is well below the specified 20 microns.
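  The position survey described in the abstract can be sketched as follows; this is an illustrative outline of the principle only (names, units and structure are assumptions, not taken from the paper or the FPGA implementation):

```python
# Hypothetical sketch of a jaw-position survey: compare each measured
# position against the desired profile and flag a dump request when the
# error exceeds the specified tolerance. Values are illustrative.

ERROR_THRESHOLD_UM = 20.0  # specified maximum position error, in microns


def jaw_in_tolerance(measured_um: float, desired_um: float) -> bool:
    """Return True if the jaw position error is within tolerance."""
    return abs(measured_um - desired_um) < ERROR_THRESHOLD_UM


def survey(samples):
    """Scan (measured, desired) pairs, as sampled at the survey rate
    (up to 100 Hz); return the index of the first out-of-tolerance
    sample, or None if every sample passed."""
    for i, (measured_um, desired_um) in enumerate(samples):
        if not jaw_in_tolerance(measured_um, desired_um):
            return i  # a real system would request a fast beam dump here
    return None
```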
WED002 Development of a Remote Experiment Control System 615
 
  • Y. Furukawa, K. Hasegawa
    JASRI/SPring-8, Hyogo-ken
  • D. Maeda, G. Ueno
    RIKEN Spring-8 Harima, Hyogo
 
  A remote experiment control system has long been desired at synchrotron radiation facilities. To reconcile radiation and physical safety with operation and data security, careful consideration is required. The beamline interlock system provides radiation safety by means of radiation shielding hutches: under conditions in which synchrotron radiation can be introduced into a hutch, the interlock system guarantees that no one is inside it. Using information from the radiation safety interlock system, a physical safety system has been built. For operation and data security, we made a custom SSL server with bidirectional authentication. The server relays commands from the remote user's program to the experiment control computer. A client certificate file contains information about the remote user's experiment, and the SSL server accepts commands only from authorized users. With video streaming, remote users can watch their samples and/or experimental equipment. The system latency is small enough that remote users can perform their experiments as if they were beside the beamline. The system is under testing and will be opened to users in this fiscal year.
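  The bidirectional (mutual) SSL authentication described above can be sketched with Python's standard ssl module; this is a minimal illustration of the principle, and the function and file names are assumptions, not taken from the paper:

```python
import ssl

# Hypothetical sketch of mutual SSL authentication: the server presents
# its own certificate and, in turn, only accepts connections from clients
# presenting a certificate signed by a trusted CA.


def make_server_context(certfile=None, keyfile=None, cafile=None):
    """Build a server-side SSL context requiring client certificates.
    File paths are optional here only so the sketch runs without real
    certificate files; a real deployment would always supply them."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # server identity
    if cafile:
        ctx.load_verify_locations(cafile=cafile)  # CA that signed client certs
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert
    return ctx
```

  A server built on such a context can then inspect the peer certificate of each accepted connection to decide which experiment the remote user is authorized for, before relaying commands to the control computer.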
WED003 Progress of the Virtual Laboratory for Fusion Researches in Japan 618
 
  • T. Yamamoto, S. Ishiguro, Y. Nagayama, H. Nakanishi, S. Okamura, S. Takami
    NIFS, Gifu
  • K. Hiraki
    IST, Tokyo
 
  Funding: SNET is partly supported by the Cyber Science Infrastructure development project of the National Institute of Informatics.

SNET is a virtual laboratory system for nuclear fusion research in Japan; it has been developed since 2001 on SINET3, the national academic network backbone operated by the National Institute of Informatics. Twenty-one sites, including major Japanese universities, the Japan Atomic Energy Agency and the National Institute for Fusion Science (NIFS), are mutually connected on SNET at a speed of 1 Gbps in the 2008 fiscal year. SNET is a closed network system based on L2/L3 VPN. The collaboration categories in SNET are as follows: LHD remote participation; remote use of the supercomputer system; and the all-Japan Spherical Tokamak (ST) research program. For example, collaborators in the first category can control their diagnostic devices at LHD from a remote station and analyze the LHD data as if they were in the LHD control room. ITER activities started in 2007, and 'The ITER Remote Experimentation Centre' will be constructed at Rokkasho village in Japan under the ITER-BA agreement. SNET would be useful for distributing ITER data to Japanese universities and institutions.

 
WED004 Management of the LHCb Online Network Based on SCADA System 621
 
  • G. Liu, N. Neufeld
    CERN, Geneva
 
  LHCb employs two large networks based on Ethernet. One is a data network dedicated to data acquisition; the other is a control network which connects all devices in LHCb. Sophisticated monitoring of both networks at all levels is essential for the successful operation of the experiment. LHCb uses a commercial SCADA system (PVSS II) for its Experiment Control System (ECS). For consistency and efficiency, the network management system is implemented in the same framework. We show here how a large-scale network can be monitored and managed using tools originally made for industrial supervisory control, and discuss several tools developed to facilitate the integration of network management and monitoring into LHCb's control system. In the network management system, the status of the network is monitored at different levels: the application level, the devices, the ports and the connectivity. Alarms can be issued to inform the experiment operators and online network experts about errors such as dropped packets or broadcast storms. Reports and long-term monitoring are possible using powerful trending tools.
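  A port-level check of the kind described above can be sketched as follows; this is an illustrative example of deriving an alarm condition from interface counters, not the PVSS II implementation (counter names and the threshold are assumptions):

```python
# Hypothetical sketch of a dropped-packet alarm: compare two successive
# samples of a port's counters and raise an alarm when the fraction of
# dropped packets exceeds a threshold. Names and values are illustrative.

DROP_RATE_ALARM = 0.01  # alarm if more than 1% of received packets are dropped


def drop_rate(prev: dict, curr: dict) -> float:
    """Fraction of packets dropped between two counter samples."""
    received = curr["rx_packets"] - prev["rx_packets"]
    dropped = curr["rx_dropped"] - prev["rx_dropped"]
    total = received + dropped
    return dropped / total if total else 0.0


def port_alarm(prev: dict, curr: dict) -> bool:
    """True if the port should raise a dropped-packet alarm."""
    return drop_rate(prev, curr) > DROP_RATE_ALARM
```

  In a SCADA framework the same pattern applies at each monitoring level: a datapoint is updated from raw counters, and an alarm is attached to a derived quantity crossing a configured threshold.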
WED005 Implementing High Availability with COTS Components and Open-source Software 624
 
  • R. Schwemmer, N. Neufeld
    CERN, Geneva
 
  High availability of IT services is essential for the successful operation of large experimental facilities such as the LHC experiments. In the past, high availability was often taken for granted and/or ensured by using very expensive high-end hardware based on proprietary, single-vendor solutions. Today's IT infrastructure in HEP is usually a heterogeneous environment of cheap, off-the-shelf components which usually have no intrinsic failure tolerance and thus cannot be considered reliable. Many services, in particular networked services such as the Domain Name Service, shared storage and databases, need to run on this unreliable hardware, yet they are indispensable for the operation of today's control systems. We present our approach to this problem, based on a combination of open-source tools, such as the Linux High Availability Project, and home-made tools, to ensure high availability for the LHCb Experiment Control System, which consists of over 200 servers and several hundred switches, and controls thousands of devices ranging from custom-made devices connected to the LAN to the servers of the event-filter farm.
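  The core failover principle behind tools like the Linux High Availability Project can be sketched as follows; this is an illustrative simplification, not the LHCb setup (class, names and the timeout are assumptions):

```python
import time

# Hypothetical sketch of heartbeat-based failover: a standby node promotes
# itself to active when the active node's heartbeat goes stale. In a real
# cluster, promotion would claim shared resources (e.g. a virtual IP).

HEARTBEAT_TIMEOUT_S = 3.0  # illustrative staleness threshold


class StandbyNode:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        """Called whenever a heartbeat message arrives from the peer."""
        self.last_heartbeat = time.monotonic()

    def poll(self):
        """Promote to active if the peer's heartbeat is stale; return
        whether this node is currently active."""
        stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S
        if not self.active and stale:
            self.active = True  # take over the service here
        return self.active
```

  Real cluster managers add fencing and quorum on top of this to avoid "split-brain" situations where both nodes believe they are active.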
WED006 Upgrade of the SPring-8 Control Network for Integration of XFEL 627
 
  • T. Sugimoto, M. I. Ishii, T. Ohata, T. Sakamoto, R. Tanaka
    JASRI/SPring-8, Hyogo-ken
 
  Today, new synchrotron radiation facilities are being built around the world. One of these, the RIKEN XFEL project in Japan, is characterized by its location beside an existing facility, SPring-8. Using X-rays from the two facilities in coincidence, new scientific applications are expected, such as pump-and-probe experiments. We also plan to use the linac of the XFEL as another injector for SPring-8. To benefit from combined applications of the two facilities, it is necessary to integrate the two control systems. The important point of the integration is the combination and segregation of the two facilities. For combined applications, the two control systems should be treated as one facility. On the other hand, when the two facilities are operated separately, the two control systems should be independent of each other, and neither system must be affected by any trouble in the other. To achieve this, we physically segregate the control system into two networks using a firewall. Since the control architecture at SPring-8 is database oriented, the two systems can be coupled through database synchronization for combined applications. We show the concept and upgrade status of the new network and control system.