Introduction to CODAS
Transcript of Introduction to CODAS
CCFE is the fusion research arm of the United Kingdom Atomic Energy Authority
Introduction to CODAS
Sverker Griph
Overview 1
• Fusion history
• The JET experiment
• The JET machine
• Functional overview
• The operator view
• HW organisation
  – Computer networks
  – Plant interface
• JET machine safety
• Real time view
• JET state machine
• JET control software
• S/W infrastructure
• System admin view
• CODAS configuration
• Database view
• Result access view
– Presentation to be defined
Overview 2
• Documentation view
• JET ROs
• JET roles
• Security view
• CODAS on-duty view
• Product view
• Personal computing environment
• Management view
• S/W engineering view
• S/W technology view
• JET software and the future of software technology
Presentation to be defined for all of these points
1905: E = mc²
1920: F.W. Aston
1920: A. Eddington
1933: L. Szilard
1934: E. Rutherford
1938: Meitner, Hahn, Strassmann
1941: Manhattan project
1945: First fission bombs
1952: First fusion bomb
Without magnetic field
Charges in a magnetic field
Magnetic confinement
1956 - T1 at Kurchatov Institute, Moscow
1969 - T3: high-performance plasma confirmed
The JET Experiment
• The principle of fusion
• Confinement principles:
  – gravity, inertia, magnetic
• Tokamak: magnetic confinement in a torus
  – Toroidal magnetic field
  – Poloidal magnetic field
• Fusion reaction: D+ + T+ = He++ + n
Deuterium Tritium Fusion
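The energy released by the D-T reaction can be checked against E = mc² from the mass defect, tying the reaction back to the 1905 and 1920 results on the previous slides. A minimal sketch, using atomic mass values assumed from standard tables:

```python
# Energy released by D + T -> He-4 + n, via E = delta_m * c^2.
# Mass values (unified atomic mass units, u) are taken from standard
# tables; 1 u corresponds to 931.494 MeV.
U_TO_MEV = 931.494

masses = {
    "D":   2.014102,   # deuterium
    "T":   3.016049,   # tritium
    "He4": 4.002602,   # helium-4
    "n":   1.008665,   # neutron
}

def dt_energy_mev():
    """Return the D-T fusion energy release in MeV."""
    delta_m = (masses["D"] + masses["T"]) - (masses["He4"] + masses["n"])
    return delta_m * U_TO_MEV

print(f"D-T energy release: {dt_energy_mev():.1f} MeV")  # about 17.6 MeV
```

The ~17.6 MeV is shared between the alpha particle and the neutron in inverse proportion to their masses.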
The JET Experiment
• European and international collaboration
  – JET Joint Undertaking (1978 - 1999)
  – EFDA-JET (2000 - 2014 and beyond?)
• JET Operating Contract - JOC: UKAEA-EFDA
• JET followed by ITER
  – Site preparations in France
  – Design under further development
  – JET experiments to support design decisions
The JET Machine
• Toroidal vacuum vessel
• Toroidal field (4.0 T) - toroidal coils
• Poloidal field - poloidal coils
• Divertor - divertor coils
• Additional heating and fuelling systems
• Diagnostics
The JET Machine
• Power supply systems
  – National grid: 415 kV - max 575 MW
  – Two flywheel generators - max 630 MW
  – Total 1.2 GW
• Vacuum systems
  – Partitions for vacuum isolation
  – Gas introduction system
• Cryogenic systems
  – Pumping the divertor, beams, diagnostics
The JET Machine
• Tritium plant - J25 building
  – Store tritium in uranium beds
  – Recover tritium from exhausts
• Remote handling system
• Interface and control systems
[Diagram: power supplies - national grid 415 kV, max 575 MW; flywheel generators max 315 MW × 2; total max 1.2 GW]
Functional Overview
• JET is an ‘electrical transformer’ - pulse
  – Preparations
  – Countdown
    • Power supply preparations
    • Checks and initialisations
  – Pulse
  – Data collection
  – Analyses
Functional Overview
• Typical pulse:
  – PF pre-magnetisation
  – TF ramp up
  – Gas introduction - PF fast rise - plasma
  – PF null
  – X-point formation
  – Additional heating and fuelling
JET Plasma Pulse - 60-120 s
• Executed automatically after a countdown
• Synchronized - optical fibre distribution
  – triggers + 2 MHz clock (phase-locking)
  – clock usage is recorded
• Waveforms are generated
  – used as plant control references
• Real-time feedback loops are executed
• Signals are recorded - local memory
  – analog inputs, pulse counters, cameras
Countdown - circa 3 minutes
• Automatic checks
• Pulse number allocated
• Pulse archives prepared
• Slow control sequences executed
  – flywheel generators wound up
  – valves opened or closed
  – circuit breakers opened or closed
• More automatic checks
• Trigger pulse timing system
After Pulse
• Recorded signals transferred to archive
• Recorded time bases transferred to archive
  – Associations between signals and time are kept
• Pulse setup info is archived
• Archives transferred to central storage
• Automatic chain of analysis is executed
• Immediate human evaluation
• Continuing evaluation - days, months & years
JET Experiment Pulse Cycle
[Diagram: pulse cycle - Pulse Preparation, Countdown, Pulse, Data Collection, Data Storage, Intershot Analysis, Control/Diagnostic Setup/Validation, Offline Analysis]
• JET is pulsed: ~25 pulses per day
• ~80,000 pulses since 1983
• Two shifts: 06:30 – 22:30
• Pulse every ~30 minutes
• ~30-40 s of plasma per pulse
• Goal: maximise repetition rate
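The figures above are roughly self-consistent, as a quick arithmetic check shows (a sketch; shift times and cycle length are taken from the slide):

```python
# Quick consistency check of the pulse-cycle figures on this slide.
shift_start_h, shift_end_h = 6.5, 22.5   # two shifts, 06:30 - 22:30
cycle_min = 30                           # one pulse roughly every 30 minutes

operating_minutes = (shift_end_h - shift_start_h) * 60
theoretical_max = operating_minutes // cycle_min

print(f"theoretical maximum: {int(theoretical_max)} pulses/day")  # 32
# ~25 achieved pulses/day implies intershot overhead (faults, setup
# changes, validation) costs roughly 7 slots per day.
```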
Hierarchical and Modular Architecture
Three-level hierarchical and modular control system structure in hardware and software:
• Central/Supervisory - level 1
• Subsystem - level 2
• Component - level 3
Autonomy at Subsystem Level
One subsystem per major plant system: Vacuum, Poloidal Field, RF Heating ...
10 Subsystems for ~70 Diagnostics
Same structure applicable to Machine Control and to Diagnostics
21 subsystems in total
Functional Overview
• JET control is organised as subsystems
  – Central subsystems
  – Essential subsystems
  – Additional heating and fuelling subsystems
  – Other pulse oriented subsystems
  – Non-pulse subsystems
  – Test and off-line subsystems
Functional Overview
• JET control is organised as subsystems– Central subsystems
• MC - machine control
• SS - safety and (machine) security
• SA - storage & analysis
• PM - pulse management
Functional Overview
• JET control is organised as subsystems– Essential subsystems
• GS - general services
• VC - vacuum control
• PF - poloidal field control
• TF - toroidal field control
• DA - magnetic diagnostic (KC1)
• DF - plasma density diagnostic (KG1)
Functional Overview
• JET control is organised as subsystems
  – Additional heating and fuelling subsystems
    • AH - neutral beam heating system
    • YC - neutral beam heating system
    • RF - Radio Frequency heating system
    • LH - Lower Hybrid current drive system
    • PL - Pellet Launcher system
Functional Overview
• JET control is organised as subsystems– Other pulse oriented subsystems
• SC - Saddle Coil system
• Diagnostic subsystems
– DB, DC, DD, DE– DG, DH, DI, DJ
Functional Overview
• JET control is organised as subsystems– Non-pulse subsystems
• NM - network monitoring
Functional Overview
• JET control is organised as subsystems– Test and off-line subsystems
• PD - Power supply development system
• YD - diagnostic commissioning system
• YC - neutral beam off-line testing system
– Development systems
  • YE - driver development system
  • CC - Codas commissioning system
  • EL - Electronics commissioning system
Functional Overview
• Within each subsystem, control is subdivided into subsystem components
  – Components can be selected for pulse or not
  – Components can be operational or not
  – Components can be marked as essential or not
• A subsystem component that is selected and essential, but not operational
  – stops the JET countdown
  – its state may be forced
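The selection rules above amount to a simple predicate: a component blocks the countdown only when it is selected for the pulse, marked essential, and not operational, unless its state has been forced. A sketch of that logic (names are illustrative, not the CODAS API):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    selected: bool        # included in the pulse
    essential: bool       # marked essential
    operational: bool     # currently operational
    forced: bool = False  # operator has forced the state

def blocks_countdown(c: Component) -> bool:
    """True if this component stops the JET countdown."""
    return c.selected and c.essential and not c.operational and not c.forced

pf = Component("PF amplifier", selected=True, essential=True, operational=False)
assert blocks_countdown(pf)       # stops the countdown
pf.forced = True
assert not blocks_countdown(pf)   # forcing the state lets the pulse proceed
```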
The Operator view
• Preparations for pulse
  – Prepare parameters for pulse - level1 etc
  – Manually check and set up the machine
    • Mimics - xmimic (active points: commands & links)
    • EIC provided with ‘reasons’
  – Select systems to be included in pulse
  – Clear all states that raise alarms - xdalar
  – Put CISS in normal (Central Interlock and Safety System)
The Operator view
• Countdown
  – Dcountd: coordination of the activities of all subsystems taking part in the pulse - cdutil
    • Start with automatic checks
    • Initialisation of H/W and S/W modules
    • Wait - then trigger pulse
    • More automatic checks
    • Automatic execution of the pulse
    • Data collection:
      – pulse files: JPF, IPF, QPF, DPF, LPF
The Operator view
• Countdown
  – Hardware and software checks - the pulse may fail
    • ‘Pulse Aborted’
      – H/W - something like CISS detects a hard fault
      – S/W - an unconditional check fails
      – Engineer in charge aborts the pulse
    • ‘Force State’ - a check has failed but…
    • Alarms - xdalar displays, operator acknowledges
The Operator view
• Result analysis of pulse file data
  – Control room session
    • PAD analysis - old results automatically removed
    • XPAD displays pulse file signals
    • Some outside control room analyses may be used to adjust the next pulses
  – After session analyses
    • JET Analysis Cluster (JAC): PC farm - 120+ Linux systems
HW organisation
• Computer networks - Ethernet
  – Off-line office network - JETNET
  – On-line control networks
  – Core network
  – IP gap - no IP routing between on- and off-line
• Bespoke proxies implement access across the gap
  – X11 server proxy: used by ‘tunnels’ through the gap
  – OMS - object monitoring service
HW organisation
• Computer networks - on-line control:– Many, many sub-nets:
• Datanet - control and data acquisition between control subsystems and subordinate host computers
• JPFnet - Pulse file data transfer from control subsystems to central file storage
• One sub-net per subsystem fileserver cluster
HW organisation
• Computer networks
  – Subsystem fileserver cluster subnet
    • One SUN file server
    • Related control subsystems each running on one SUN V40 computer
      – No specific discs on subsystem computers
      – Subsystem internal disc used for swapping and tmpfs
    • srv-control: MC, SA, GS, PF, TF, VC
    • srv-heat: RF, AH, YC, LH, PL
HW organisation
• Computer networks - what for?
  – NFS
  – TCP/IP
    • X11 client-server
    • CODAS message protocols
      – CSL5 - ported from Nord/Sintran
      – CSL8 - extended client-server event based API
      – HTTP ‘Black Box’ protocol
  – UDP/IP
    • echo, CSL6 ‘real-time’ broadcast
HW organisation
• Computer networks - ATM (not Ethernet)
  – Star configuration - central ATM switch
  – High bandwidth
  – Real time compatible
    • Dedicated channels with guaranteed bandwidth
    • Supports both point-to-point and broadcasts
    • TCP/IP in parallel on dedicated channels
    • Usage: real time signal server input and signal distribution
HW organisation
• Computer networks – NM - network monitoring subsystem
• Regularly polls all known hosts using whatever method is available
• Raises alarm if a host is no longer available
– What hosts are there?
  • ypcat -k hosts | less
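The NM subsystem's job, polling every known host and raising an alarm when one stops answering, can be sketched as below (the function names are illustrative; the real NM uses whatever protocol each host supports, not only ICMP):

```python
import subprocess

def host_alive(host: str, timeout_s: int = 2) -> bool:
    """Crude availability check: a single ICMP echo request.
    Note: the -W flag is the Linux iputils timeout; other platforms differ."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        capture_output=True,
    )
    return result.returncode == 0

def poll_hosts(hosts, raise_alarm):
    """One polling sweep: alarm on every host that fails to answer."""
    for host in hosts:
        if not host_alive(host):
            raise_alarm(f"{host} is no longer available")

# Usage: poll_hosts(known_hosts, alarm_system.raise_alarm)
```

A real monitor would run sweeps on a timer and de-bounce transient failures before alarming.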
HW organisation
• Plant interface
  – CAMAC buses with CAMAC modules
  – VME buses with VME modules
  – TCP/IP to plant PCs
    • National Instruments modules etc
  – PCI buses for a few module types
    • PC hardware, especially memory, is cheap
    • BAD2
  – FPGA front ends, no bus
    • UXD7 via Ethernet
• Plant interfaces are ‘hosted’ on subsystems
HW organisation
• Plant interface - CAMAC
  – Old ‘slow’ bus standard - but still going strong
    • CAMAC modules in CAMAC crates
    • Standard modules for crate control, crate LAMs and crate supervision
    • Specialised CAMAC modules
    • FPGA based module refurbishment programme
  – Serial communication with host via loop
    • Fibre optics (U-port interface with batteries)
    • Main loop and backup loop
HW organisation
• Plant interface - VME
  – VME crate with 1 VPLS module (CODAS design)
    • CTTS interface
    • Crate monitoring
    • LSD digital I/O
  – VME crate controller with Ethernet interface
    • PPC or 68K processor (VxWorks operating system)
  – Specialised VME modules
HW organisation
• Plant interface - TCP/IP
  – Standard plant PCs for dedicated purposes
    • National Instruments modules etc.
  – Specialised PCs
  – All use TCP/IP
    • a subset of the CSL8 server protocol
    • HTTP ‘Black Box’ protocol now dominates new developments
Rack Mounted Modules (2007)
• VME: 1,668 modules of 70 types
• CAMAC: 5,746 modules of 120 types
• Eurocard: 14,393 modules of 304 types
• PCs: 81 processors
  – many PCI signal interface boards
• Circa 35,000 cables and optical fibres
• Circa 200,000 plant access data channels
• Circa 35,000 data channels archived per pulse
Some Modules Developed
• Most modules are commercial off-the-shelf (OTS)
• Some have special requirements
  – Isolation: 1.5 kV or more
  – Fast analog optical isolation is expensive
• Latest example: UXD7 - Eurocard
  – 2 fast network ports
  – 8 mezzanine mounted channels - isolated ADCs
  – Large FPGA and large RAM
  – Processor with HTTP server
JET machine safety
• All primary protection implemented in H/W
• Secondary protection only in real time S/W
JET machine safety
• All primary protection implemented in H/W
  – PEMS - Plant Essentials Monitoring System
    • Failsafe non-pulse oriented backup monitoring system
  – CISS - Central Interlock and Safety System
    • PLC based failsafe logic to provide protection during JET pulsing
JET machine safety
• All primary protection implemented in H/W
  – CISS - Central Interlock and Safety System
    • Logic to provide protection during JET pulsing
      – Hierarchical organisation
        » CISS supervisor (hosted on the SS subsystem)
        » CISS subsystems (hosted on GS, PF x2, TF, VC, RF, AH, YC)
      – State machine: normal, pulse-on, pulse-inhibit, emergency-shutdown, fulldown
      – CISS serial link to host: mimics with logic history
      – CISS inputs or outputs sometimes need to be patched
JET machine safety
• Secondary protection only in real time S/W
  – CPS - Coil Protection System (on PF)
    • Dedicated VME DSP system with H/W watchdog
  – PPS - Plasma Protection System (on PF)
    • Dedicated CAMAC CACs on PF
  – PIW - Protection for ITER-like Wall - NEW!
    • The new inner wall is made of metal - it is prone to melting
    • ATM network of collaborating systems
JET machine safety
• Secondary protection only in real time S/W
  – PPS - Plasma Protection System
    • Dedicated CAMAC CACs on PF
    • PDV input - Plasma Density Validation system
      » Dedicated VME system (input from KS3 and KG1)
      » Will be integrated with new PDLM
    • PPCC input - Plasma Position and Current Control system
      » Dedicated VME DSP systems (vertical stabilisation and plasma shape control)
    • Output to CISS (hard stop) and PPCC (soft stop)
    • Will be integrated with PIW
JET machine safety
• Secondary protection only in real time S/W
  – PIW - Protection for ITER-like Wall - NEW!
    • ATM network of collaborating systems
    • Many diagnostic inputs
      – 100+ thermocouples
      – 15+ cameras
      – pyrometers for hotspot monitoring
      – spectrometers for monitoring of W and Be lines
    • Mainly PPC systems - the JET pulse may be stopped (hard/fast/slow) or downgraded (less heating)
      – VTM - Virtual Thermal Map
JET machine safety
• Whenever designing JET systems, have personal and machine safety in mind.
  – KE4: equipment worth millions blown up
    • An operator programmed a CODAS CTM1 module to stay ON for ever when trying to switch it off
    • The implementation bypassed the machine safety procedures
  – KG1 can send a validated false density to PDV, which may let NIB through a too-thin plasma
Real Time View
• CTTS - Central Timing and Triggering System
  – Dedicated VME system hosted on PF
    • Was a CAMAC system hosted on PF (still exists)
    • Operated from a level1 setup page
  – Fibre optic distribution system of the JET clock, reset and pulse triggers to all controlling parts of JET
Real Time View
• CTTS - Central Timing and Triggering System
  – What is distributed on the fibres?
    • 2 MHz JET clock (divided to e.g. 200 kHz)
    • Reset triggers to VME systems (unique addresses)
    • Pulse oriented triggers:
      – PRE - always 1 ms after CTTS is triggered
      – GAS - typically 40 s after PRE
      – Pulse triggers are both ungated and gated (by core plant)
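The trigger times above form a fixed schedule relative to the moment CTTS is triggered. A sketch of computing absolute trigger times (the PRE and GAS offsets come from the slide; any further triggers would be configured per experiment):

```python
# Pulse trigger schedule relative to CTTS being triggered at t = 0 s.
TRIGGER_OFFSETS_S = {
    "PRE": 1e-3,          # always 1 ms after CTTS is triggered
    "GAS": 1e-3 + 40.0,   # typically 40 s after PRE
}

def absolute_times(t0_s: float) -> dict:
    """Map each trigger name to an absolute time, given the CTTS trigger time t0."""
    return {name: t0_s + off for name, off in TRIGGER_OFFSETS_S.items()}

sched = absolute_times(0.0)
print(sched["GAS"] - sched["PRE"])   # about 40 s between PRE and GAS
```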
Real Time View
• CTTS - Central Timing and Triggering System
  – What receives the CTTS signals?
    • Clock to CAMAC master U-ports in the computer room, then distributed to all CAMAC crates
      – Clock then distributed on a daisy chain through each crate
    • Triggers to CAMAC timing modules (CTM1, CTM2). CTM1 modules output the triggers to CPG2 and CPG3 modules after controlled delays.
Real Time View
• CTTS - Central Timing and Triggering System
  – What receives the CTTS signals?
    • The VME VPLS module receives a 2 MHz clock with pulse and reset triggers coded on top. Each VPLS has an address equal to the H/W address of its crate. The JET pulse triggers are reproduced on the VPLS back.
    • Dedicated Eurocard modules decode the clock and JET pulse triggers for non-CAMAC, non-VME equipment.
Real Time View
• CTTS - Central Timing and Triggering System - summary
  – A fibre optic based system to distribute a common clock and synchronised triggers throughout the JET plant
  – Ensures microsecond-accurate coordination of JET control and synchronisation of JET data collection
Real Time View
• Typical timing arrangement used in most CAMAC crates
  – Reflected in the ‘timing diagram’ drawing
  – Reflected in CODAS configuration trees like the GAP tree
[Diagram: PRE trigger into CTM1; CTM1 drives CPG2 and CPG3 (delay and clock outputs) feeding ADC transient recording; CAC3 CAMAC Auxiliary Controller]
Real Time View
• Applications on subsystems are quasi real time - response time is not guaranteed.
• Hard real time software is run on dedicated processors, where response time can be predicted:
  – CAMAC: CAC3 programmed in BCL
  – VME: 68K, PPC, DSP C40
  – PCI: Intel
  – Private Ethernet networks
  – FPGA front ends
Real Time View
• Hard real time software examples:
  – Embedded 68K applications under VxWorks
    • VPL1/Slowadc, VPLC/Latching scaler
  – Downloaded moderately configurable S/W:
    • PPCC, CPS, KG1, KC1D, KK3rt, RFLM, NBLM
  – Downloaded highly configurable S/W:
    • RTGS: Real Time Controller
      – Real time signal server input and output via ATM
      – Configurable networks of algorithms
JET State Machine
• The JET state machine drives pulse countdown
[Diagram: countdown state machine - states: Idle, Standby, Check, Pre-pulse, Incrementing pulse number, Module initialisation, Trigger pulse, Check, Pulse, Data collection, End of pulse sequence; transitions: Start countdown, Go to standby, Abort pulse, CTTS control; FEC = Forcible Error Condition]
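The countdown can be modelled as a mostly linear state machine in which a Forcible Error Condition (FEC) pauses progress until the operator forces the state or aborts. A simplified sketch (state names from the diagram; the real Dcountd logic is considerably richer):

```python
# Simplified model of the JET countdown state machine shown above.
STATES = [
    "Idle", "Standby", "Check", "Pre-pulse", "Incrementing pulse number",
    "Module initialisation", "Trigger pulse", "Check (post-trigger)",
    "Pulse", "Data collection", "End of pulse sequence",
]

class Countdown:
    def __init__(self):
        self.state = "Idle"
        self.fec = False     # Forcible Error Condition pending?

    def advance(self):
        """Move to the next state unless an FEC is pending."""
        if self.fec:
            raise RuntimeError(f"FEC pending in state {self.state}")
        i = STATES.index(self.state)
        if i + 1 < len(STATES):
            self.state = STATES[i + 1]

    def force(self):
        """Operator forces the error condition; the countdown may continue."""
        self.fec = False

    def abort(self):
        """Abort pulse: back to Idle."""
        self.state = "Idle"
        self.fec = False

cd = Countdown()
cd.advance()      # Idle -> Standby (start countdown)
cd.fec = True     # a check fails
cd.force()        # EIC forces the state
cd.advance()      # Standby -> Check
print(cd.state)   # -> Check
```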
JET State Machine
• Countdown overview:
  – xmimic mc/codas/countdown
    • Holds active buttons to include or exclude subsystems from the JET countdown
• Countdown control:
  – Main control program: Dcountd on MC
  – User interface: cdutil as users EIC or ST
JET State Machine
• Countdown control:
  – Dcountd on MC talks to
    • Dchkinx on MC
      – Dchkinx on MC talks to Dgapchk on every subsystem included in the pulse countdown
    • Dsuperv on every included subsystem
      – Commands all pulse oriented control programs on the subsystem to go to the next state and follows this up
      – In particular it commands Dgap, which sets up a large part of the pulse parameters and collects most of the data
JET State Machine
• Countdown error handling:
  – What can go wrong:
    • Dgapchk on an included subsystem may report to Dchkinx on MC that a subsystem component that is marked as essential is not operational → Forcible Error Condition
JET State Machine
• Countdown error handling:
  – What can go wrong:
    • Dsuperv on a subsystem fails to reach the next stage in the countdown in time → Forcible Error Condition
    • An explicit check in Dsuperv on a subsystem fails:
      – Forcible Error Condition, or
      – Abort Pulse
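The failure modes listed on these slides reduce to a simple classification: component and timeout failures become Forcible Error Conditions, while an explicit check may be configured to abort the pulse outright. A sketch (daemon names from the slides; the classification rule is illustrative):

```python
# Illustrative classification of countdown errors, per the slides:
#  - essential component not operational -> forcible error condition
#  - subsystem misses its state deadline -> forcible error condition
#  - an explicit Dsuperv check fails     -> FEC or abort, per the check
def classify_error(kind: str, fatal: bool = False) -> str:
    if kind in ("essential_not_operational", "state_timeout"):
        return "FORCIBLE_ERROR"
    if kind == "explicit_check_failed":
        return "ABORT_PULSE" if fatal else "FORCIBLE_ERROR"
    return "UNKNOWN"

assert classify_error("state_timeout") == "FORCIBLE_ERROR"
assert classify_error("explicit_check_failed", fatal=True) == "ABORT_PULSE"
```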
JET State Machine
• Countdown error handling:
  – How are errors reported:
    • cdutil provides a popup error box - some error reports are cryptic
      – Separate cdutil dialog for each forcible error condition
    • The countdown mimic on MC and the substat mimics on the subsystems should explain (depends on correct use of the SS-ERROR software point)
    • CODAS error logs on the subsystems may provide additional information
    • Reasons - XML based hierarchical report → EIC, DC, CDO
JET Control Software
• Level1 control:
  – Pulse parameter setup
• Level2 control:
  – Subsystem control
• Level3 control:
  – Low level H/W access
• Subsystem infrastructure
JET Control Software
• Level1 control:
  – Pulse parameter setup
    • Dmread on the PM subsystem
      – Manages the level1 parameter database (parameter state)
      – Executes check and parameter distribution algorithms
      – Maintains parameter sets: old pulses and pulse schedules
    • Dpdslav on the PM subsystem
    • User interface: xpsedit
JET Control Software
• Level1 control:
  – Pulse parameter setup
    • Dmread on the PM subsystem
    • Dpdslav on the PM subsystem
      – Collects an image of the current setup
        » uses OMS (Object Monitoring System)
    • User interface: xpsedit
JET Control Software
• Level1 control:
  – Pulse parameter setup
    • Dmread on the PM subsystem
    • Dpdslav on the PM subsystem
    • User interface: xpsedit
      – Maintains many independent configurable views on user parameter setup
        » Each view has a separate configuration file
        » Each view has a separate authorisation scheme
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem control daemons
    • Subsystem mimics
    • Subsystem touch panel input
    • Subsystem alarms
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem control daemons - tasks:
      – Take part in pulse oriented control
      – Perform non pulse oriented control
      – Monitor plant and subsystem
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem control daemons
      – Standard subsystem daemons
        » Pulse control daemons: Dcountd, Dsuperv, Dgapchk, Dgap, Dlip1, Dlap1, Dlip2, Dlap2, Ddap
        » Non-pulse standard daemons: none
        » Monitoring daemons: Dwatchdg, Dvxmon, Dshdmon
      – Specific subsystem daemons
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem control daemons
      – Standard subsystem daemons
      – Specific subsystem daemons
        » Normally at least one specialised daemon per subsystem component: pulse and non-pulse oriented control integrated with plant monitoring
        » Configurable daemons: Drut, Dcfw, Duxgap
        » Bespoke daemons
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem mimics
      – Standard control display interface: xmimic
        » Mimics have active buttons: links and commands
      – Configured from files in /jet/<ss>/mimics/
        » JET mimic programming language
        » One .mim file per mimic
        » Can include .mlib files
        » .mim files are compiled (mcomp) into .mpic files
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem touch panel input
      – Standard user control interface
        » Used to run on special H/W: small square panel displays with 4 x 4 touch sensitive areas
        » Organised in touch panel trees - some ‘buttons’ are commands, others lead to subordinate panels
      – Root of touch panel tree is /jet/<ss>/tp
      – xtp -d /jet/<ss>/tp or xtp -as <ss>
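A touch panel tree under /jet/<ss>/tp is a directory hierarchy; a sketch of walking it to list panels and their buttons (the on-disk layout assumed here is illustrative - the real xtp format may differ):

```python
import os

def walk_panel_tree(root: str, depth: int = 0):
    """Print a touch panel tree.
    ASSUMPTION for illustration: subdirectories are subordinate panels,
    plain files are command/mimic 'buttons'; consult the real
    /jet/<ss>/tp tree for the actual on-disk format."""
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        kind = "panel" if os.path.isdir(path) else "button"
        print("  " * depth + f"{entry} [{kind}]")
        if os.path.isdir(path):
            walk_panel_tree(path, depth + 1)

# Usage (hypothetical subsystem path): walk_panel_tree("/jet/mc/tp")
```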
JET Control Software
• Subsystem touch panel input
[Diagram: touch panel tree - tp panels lead to subordinate tp panels; leaf ‘buttons’ invoke mimics and commands]
JET Control Software
• Level2 control:
  – Subsystem control:
    • Subsystem alarms (alarm levels, audible)
      – Alarm control daemons: Dalman, Daehan
      – Alarm display: xdalar
      – Alarm definitions:
        » JET alarm definition language
        » Default source in /jet/<ss>/alarms/alarms.sou
        » Compiled by ecomp into alarms.obj
      – Alarms can be exported between subsystems
JET Control Software
• Level3 control:
  – Low level H/W access
    • Drivers
      – Disser - signal server
      – Dpsi - ‘plant state image’ in shared memory
      – Dissupr - VME signal access proxy
      – libdrv - shared code for driver daemons
      – libdr - API for application programs
      – udriv - driver user utility
      – HW tree (tree structured database)
      – mtm - compiles the HW tree into device tables
JET Control Software
• Summary:
  – JET levels of control:
    • level1 - pulse parameter setup
    • level2 - subsystem control
    • level3 - low level H/W access
    • subsystem infrastructure
  – What is running on a subsystem?
    • lrtp or lrtp <ss>, e.g. lrtp PM
S/W infrastructure
• Technology ownership
• Generic applications
  – Configurable
  – Data driven
    • Table driven
    • Object oriented
    • Component based
• GAP - General Acquisition Program
  – LAP, DAP, CAP
• CFW - Component Framework
  – 100+ control applications
System admin view
• SOLARIS 5.10
  – Configure
  – Boot
  – Shutdown
  – Patches
  – Performance
  – Logging
• Filesystem
  – NFS
  – ZFS
• Distribution of required functionality
  – Servers
  – Subsystems
  – Data store
    • Cache
    • Tapestore
• Security
  – Authentication
  – Authority
  – Backup
CODAS configuration
• In-house DBMS
  – Tree databases
    • HW tree
    • GAP tree
• Configuration files
  – CODAS wide
  – Subsystem
  – CCL
    • CODAS Configuration Language
    • Based on RDF
Database view
• HW tree
• GAP tree
• JET logging
• SW product database
  – deployment
    • Releases
    • Installation
    • Commit
    • Roll back
• PDS - product repository
  – Version control
  – Product level
• Electra
  – Relational database
  – HW
    • Module types
    • Inventory
    • Functional positions
    • Cables
    • Terminal blocks
    • History
  – Projects - change
    • F20
    • F78
    • Activities
  – Documents
Result access view
Documentation view
JET Responsible Officers
• JET ROs
JET roles
Security view
CODAS on-duty view
Product view
Personal computing environment
Management view
S/W engineering view
S/W technology view
JET - the future of IT