Helsinki University of Technology Control Engineering Espoo 2010 Report 167
WIRELESS CONTROL SYSTEM SIMULATION AND NETWORK ADAPTIVE CONTROL Mikael Björkbom
AALTO UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY DEPARTMENT OF AUTOMATION AND SYSTEMS TECHNOLOGY
Doctoral dissertation for the degree of Doctor of Science in Technology, to be presented with due permission of the Faculty of Electronics, Communications and Automation for public examination and debate in Auditorium AS1 at the Aalto University School of Science and Technology (Espoo, Finland) on the 10th of December 2010 at 12 noon.
Aalto University
School of Science and Technology
Faculty of Electronics, Communications and Automation
Department of Automation and Systems Technology
Distribution:
Aalto University
Department of Automation and Systems Technology
P.O. Box 15500
FI-00076 Aalto, Finland
Tel. +358-9-470 25201
Fax. +358-9-470 25208
E-mail: [email protected]
http://autsys.tkk.fi/
ISBN 978-952-60-3460-7 (printed)
ISBN 978-952-60-3461-4 (pdf)
ISSN 0356-0872
URL: http://lib.tkk.fi/Diss/2010/isbn9789526034614
Aalto-Print
Helsinki 2010
ABSTRACT OF DOCTORAL DISSERTATION
AALTO UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY
P.O. Box 11000, FI-00076 Aalto
http://www.aalto.fi
Author: Mikael Björkbom
Name of the dissertation: Wireless control system simulation and network adaptive control
Manuscript submitted: 27.5.2010
Manuscript revised: 7.10.2010
Date of the defence: 10.12.2010
Type of dissertation: Monograph
Faculty: Faculty of Electronics, Communications and Automation
Department: Department of Automation and Systems Technology
Field of research: Control Engineering
Opponents: Prof. Matti Vilkko, Prof. Tapani Ristaniemi
Supervisor: Prof. Heikki Koivo
Abstract
With the arrival of the wireless automation standards WirelessHART and ISA100.11a, wireless technology is now emerging in the automation industry. The main benefits of wireless devices range from the absence of cabling and lower installation costs to more flexible positioning. When next-generation agile wireless communication methods are used in control applications, the unreliability of the wireless network becomes an issue, because of the real-time requirements of control. Research has previously focused either on control design and stability for wired control, or on network protocols for wireless sensor networks. Only a marginal part of the research has studied wireless control.
This thesis takes a practical approach to the field of wireless control design. A simulation system called PiccSIM is developed, in which communication and control can be co-simulated and studied. Some simulation tools already exist, such as TrueTime, but none of them offers capabilities as flexible and versatile as PiccSIM for simulating specific protocols and algorithms. PiccSIM is not only a simulation system: it comprises a tool-chain for network and control design, and further supports implementation on real wireless nodes. A variety of wireless control scenarios are simulated and studied. The effects of the network on control performance are studied both theoretically and through simulations, to gain insight into the interaction between communication and control.
Typical control design approaches in the literature are of the optimal control type, with guaranteed stability under given network-induced delay and packet losses. Such control design has been complicated and has resulted in complex controllers. This thesis concentrates on PID-type controllers because of their simplicity and wide use in industry. To accommodate PID controllers to control over unreliable wireless networks, several adaptive schemes are developed that adapt to the network quality of service. The result is flexible, self-tuning control that can cope with non-deterministic and time-varying wireless networks. The proposed adaptive control algorithms are tested and verified in simulations using PiccSIM.
Keywords: wireless networked control systems, co-simulation, network adaptive control
ISBN (printed): 978-952-60-3460-7
ISSN (printed): 0356-0872
ISBN (pdf): 978-952-60-3461-4
Language: English
Number of pages: 173
Publisher: Aalto University, Department of Automation and Systems Technology
Print distribution: Aalto University, Department of Automation and Systems Technology
The dissertation can be read at http://lib.tkk.fi/Diss/2010/isbn9789526034614/
ABSTRACT OF DOCTORAL DISSERTATION (translated from the Swedish)
AALTO UNIVERSITY SCHOOL OF SCIENCE AND TECHNOLOGY
P.O. Box 11000, FI-00076 Aalto
http://www.aalto.fi
Author: Mikael Björkbom
Title: Simulation of wireless control systems and network adaptive control
Manuscript submitted: 27.5.2010
Manuscript revised: 7.10.2010
Date of the defence: 10.12.2010
Type of dissertation: Monograph
Faculty: Faculty of Electronics, Communications and Automation
Department: Department of Automation and Systems Technology
Field of research: Control Engineering
Opponents: Prof. Matti Vilkko, Prof. Tapani Ristaniemi
Supervisor: Prof. Heikki Koivo
Abstract

The use of wireless technology in the automation industry is now breaking through, thanks to the new standards for wireless automation: WirelessHART and ISA100.11a. The main advantages of wireless devices are the absence of cables, with consequently lower installation costs, and increased flexibility. Using next-generation agile wireless networks in control applications causes problems, owing to the unreliability of the networks and the real-time performance that the control system requires. Research in this area has previously focused either on control design and stability of wired control systems, or on network protocols for wireless sensor networks. Only a marginal part has studied wireless control.

This thesis approaches these problems from a practical point of view. A simulation system called PiccSIM is developed, in which the wireless communication and the control can be simulated and studied together. A few similar simulators already exist, for example TrueTime, but none of them is as flexible and versatile as PiccSIM, which enables simulation of specific protocols and algorithms. PiccSIM is not only a simulator, but consists of several tools for the design of networks and control systems. Several wireless control systems are simulated and studied. The performance of the wireless networks and their effect on the control system are studied both theoretically and through simulations, in order to understand the interaction between the wireless network and the control system.

A typical approach in the literature is optimal control, where the controller is designed according to given delay and packet-loss specifications. This results in a complex control design. This thesis concentrates on PID-type controllers, because they are simple and widely used in industry. To apply PID controllers over unreliable wireless networks, several adaptive control methods are developed that adapt to the network performance. The result is flexible, self-tuning controllers that work despite the non-deterministic wireless network. The developed adaptive control methods are tested and verified in simulations with PiccSIM.

Keywords: wireless control systems, co-simulation, network adaptive control
ISBN (printed): 978-952-60-3460-7
ISSN (printed): 0356-0872
ISBN (pdf): 978-952-60-3461-4
Language: English
Number of pages: 173
Publisher: Aalto University, Department of Automation and Systems Technology
Print distribution: Aalto University, Department of Automation and Systems Technology
The dissertation can be read at http://lib.tkk.fi/Diss/2010/isbn9789526034614/
Anyone who has a Master’s degree can become a Ph.D. – but the persistent drive to discover new knowledge is essential.
PREFACE
I started my research career at the former Control Engineering Laboratory at Helsinki University of Technology in 2003, as a summer trainee with Prof. Heikki Koivo as my supervisor. The following summer I developed the MoCoNet platform, which was later extended into the PiccSIM platform. The MoCoNet platform became part of my Master's thesis, which I finished in 2006. Since the Master's thesis, I have worked in the WiSA I and II projects (Wireless Sensor and Actuator Networks for Measurement and Control), to which the PiccSIM Toolchain is a major contribution. My Licentiate thesis on PiccSIM was a convenient stepping stone for this doctoral thesis, as it now forms part of the foundation of this thesis.
My supervisor, Prof. Heikki Koivo, has given me academic freedom in my research work; in other words, I have developed my adaptive control algorithms entirely myself. In the implementation of PiccSIM I have collaborated with Shekar Nethi from the Department of Communications and Networking, who has assisted with the network simulation part. Tuomo Kohtamäki has, under my guidance, done the hard work of implementing the Toolchain interfaces, for which I am grateful. Sofia Piltz did her Master's thesis under my supervision on the step adaptive controller; I thank her for her hard and careful work. For the simulation case studies I have received invaluable input and assistance from Prof. Riku Jäntti, Shekar Nethi, and Lasse Eriksson. Lasse Eriksson has also read the thesis thoroughly and given some excellent suggestions for improving it; I am very grateful for the countless hours of bedtime reading he has done. William Martin did the proofreading with tireless attention to detail and grammar. I received the final comments from the pre-examiners, Associate Prof. Anton Cervin and Prof. Muhammed Elmusrati, of which the comments by Cervin were objective and insightful.
The funding of the WiSA I-II projects came from the Finnish Funding Agency for Technology and Innovation (TEKES), through the Nordite program. The research has been a collaboration between Nordic universities, in our case with Kungliga Tekniska Högskolan (KTH) in Stockholm, Sweden. I had the pleasure of visiting Mikael Johansson at KTH for one month in May 2009, with many shorter visits later on. The research launched during that visit has continued to be fruitful. I appreciate the graduate student position I received at the Graduate School in Electronics, Telecommunications and Automation (GETA) in 2007. It gave me the freedom to work solely on my own subject, although there has not been any situation where I have needed to exercise that freedom. The wireless measurements were done at the facilities of Konecranes, for which I thank D.Sc. Timo Sorsa for allowing us to visit their industrial halls.
I would additionally like to thank the Finnish Foundation for Technology Promotion, the Emil Aaltonen Foundation, the Finnish Foundation for Economic and Technology Sciences - KAUTE, Neles Oy:n 30-vuotissäätiö (The 30th Anniversary Foundation of Neles), the Walter Ahlström Foundation, and the Oskar Öflund Foundation for the support I have received. I have also received several travel grants to conferences from the Automation Foundation and GETA.
Finally, I thank my wife Susse for listening patiently to me when I try to explain, in a simple way, things that she does not understand. The marriage left its mark on the contributed papers, as my family name changed: Pohjola was exchanged for the index-unfriendly Björkbom.
Espoo, October 2010 Mikael Björkbom
TABLE OF CONTENTS
Preface
Table of Contents
List of Publications by the Author
List of Abbreviations
List of Symbols
1. Introduction
   1.1. Objectives of the Thesis
   1.2. Contributions and Organization of the Thesis
   1.3. Background of Wireless Control
   1.4. Wireless Control Systems and Simulation
   1.5. Research on Wireless Control Networks and Applications
        1.5.1. Wireless Networks for Control
        1.5.2. Current Standards for Wireless Automation
        1.5.3. Wireless Sensor Networks
2. Preliminaries - Networks and Controllers
   2.1. The Networked Control Problem
   2.2. General Assumptions
   2.3. Networked Control Structures
   2.4. Network Models
        2.4.1. Packet Drop - Delay Jitter
        2.4.2. Drop and Delay Models based on Markov-chains
   2.5. Jitter Margin
   2.6. The PID Controller in Networked Systems
        2.6.1. Tuning of PID Controllers in Networked Control Systems
        2.6.2. The PID PLUS Controller
   2.7. Internal Model Control
        2.7.1. Internal Model Control Design
        2.7.2. IMC-PID Controller Design
   2.8. Network Quality of Service in Networked Control Systems
        2.8.1. Network Performance Considerations
        2.8.2. Network Congestion and Traffic Rate Control
   2.9. Kalman Filtering in Networked Control Systems
   2.10. Summary
3. Networks and Controllers in Practice
   3.1. Measurements of Radio Environments
   3.2. Estimated Gilbert-Elliott Models
   3.3. The Networked PID Controller
   3.4. Internal Model Control in Networked Control Systems
        3.4.1. Approximations of Closed-loop Step Response
        3.4.2. IMC Control and Jitter Margin
        3.4.3. Sampling Interval and IMC Tuning for Jitter Margin
   3.5. Effect of Network Quality of Service on Control Performance
        3.5.1. Network Cost for Control
        3.5.2. Simulations for Network and Control Performance Relationship
   3.6. Summary
4. PiccSIM - Toolchain for Network and Control Co-Design and Simulation
   4.1. Development of the Co-simulation Platform
   4.2. Review of Networked Control System Simulators
   4.3. PiccSIM Architecture
        4.3.1. Simulink and ns-2 Integration
        4.3.2. Data Exchange Between Simulators
        4.3.3. Simulation Clock Synchronization
        4.3.4. Other Implemented Features
   4.4. PiccSIM Toolchain
        4.4.1. PiccSIM Block Library
        4.4.2. Toolchain User Interfaces
   4.5. Remote User Interfaces
   4.6. Automatic Code Generation and Implementation
   4.7. Simulation Case Studies
        4.7.1. Target Tracking Scenario
        4.7.2. Robot Squad with Formation Changes
        4.7.3. Building Automation Scenario
        4.7.4. Crane Control in an Industrial Hall
        4.7.5. PiccSIM Toolchain Demonstrations
   4.8. Summary
5. Adaptive Control in Wireless Networked Control Systems
   5.1. Adaptive Jitter Margin PID Control
        5.1.1. Delay Jitter Estimation Simulations
        5.1.2. Adaptive Control Tuning Scenario Simulations
        5.1.3. Summary
   5.2. Adaptive Control Speed Based on Network Quality of Service
        5.2.1. The Adaptive Control Speed Scheme
        5.2.2. Changing the Sampling Interval
        5.2.3. Analysis of the Adaptive Control Speed Algorithm
        5.2.4. Simulation Scenario
        5.2.5. Summary
   5.3. Step Adaptive Controller for Networked MIMO Control Systems
        5.3.1. Controller Tuning by Optimization for MIMO Systems
        5.3.2. Step Adaptive Controller Tuning and Simulations
        5.3.3. Summary
   5.4. Steady-State Outage Compensation Heuristic
        5.4.1. The Steady-State Heuristic
        5.4.2. Stability of the Steady-State Heuristic
        5.4.3. Simulations and Comparisons
        5.4.4. Summary
6. Conclusions
References
LIST OF PUBLICATIONS BY THE AUTHOR
Although this doctoral dissertation is a monograph, the results presented here are based on the following publications, presented at international conferences or published in journals.
[P1] Pohjola, M., L. Eriksson, V. Hölttä, and T. Oksanen, Platform for monitoring and controlling educational laboratory processes over Internet, in Proc. 16th IFAC World Congress, Prague, Czech Republic, 4‐8 July, 2005.
[P2] Nethi, S., M. Pohjola, L. Eriksson, and R. Jäntti, Platform for emulating networked control systems in laboratory environments, in Proc. 8th International Symposium on a World of Wireless, Mobile and Multimedia Networks, Helsinki, Finland, 18‐21 June, 2007.
[P3] Kohtamäki, T., M. Pohjola, J. Brand, and L.M. Eriksson, PiccSIM Toolchain – Design, simulation and automatic implementation of wireless networked control systems, in Proc. IEEE International Conference on Networking, Sensing and Control, Okayama, Japan, 26‐29 March, 2009.
[P4] Nethi, S., M. Pohjola, L. Eriksson, and R. Jäntti, Simulation case studies of wireless networked control systems, in Proc. 10th ACM/IEEE International Symposium on Modelling, Analysis and Simulation of Wireless and Mobile Systems, Crete, Greece, 22‐26 October, 2007.
[P5] Björkbom, M., S. Nethi, and R. Jäntti, Wireless control of multihop mobile robot squad, IEEE Wireless Communications, Special Issue on Wireless Communications in Networked Robotics, vol. 16, no. 1, February, 2009.
[P6] Björkbom, M., S. Nethi, L. Eriksson, and R. Jäntti, Wireless control system design and co‐simulation, submitted.
[P7] Pohjola, M. and H. Koivo, Measurement delay estimation for Kalman filter in networked control systems, in Proc. 17th IFAC World Congress, Seoul, Korea, 6‐11 July, 2008.
[P8] Pohjola, M., Adaptive jitter margin PID controller, in Proc. 4th IEEE Conference on Automation Science and Engineering, Washington D.C., USA, 23‐26 August, 2008.
[P9] Pohjola, M., Adaptive control speed based on network quality of service, in Proc. 17th Mediterranean Conference on Control and Automation, Thessaloniki, Greece, 24‐26 June, 2009.
[P10] Piltz, S., M. Björkbom, L.M. Eriksson, and H.N. Koivo, Step adaptive controller for networked MIMO control systems, in Proc. IEEE International Conference on Networking, Sensing and Control, Chicago, USA, 11‐13 April, 2010.
[P11] Björkbom, M. and M. Johansson, Networked PID control: tuning and outage compensation, in Proc. 36th IEEE Industrial Electronics Conference, Glendale, AZ, USA, 7‐10 November, 2010.
LIST OF ABBREVIATIONS
ACS Adaptive Control Speed
AIMD Additive Increase, Multiplicative Decrease
AJM Adaptive Jitter Margin
ANSI American National Standards Institute
AODV Ad Hoc On-demand Distance Vector
CAN Controller Area Network
COTS Commercial Off The Shelf
CSMA Carrier Sense Multiple Access
DCF Distributed Coordination Function (MAC protocol for WLAN)
FDMA Frequency Division Multiple Access
FOLIPD First Order Lag Plus Integral Plus Delay
FOTD First Order Time-Delay
G-E Gilbert-Elliott
GUI Graphical User Interface
HART Highway Addressable Remote Transducer
HVAC Heating, Ventilation and Air Conditioning
IAE Integral of Absolute Error
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
IMC Internal Model Control
ISA International Society of Automation
ISE Integral of Square Error
ISM Industrial, Scientific, and Medical (frequency band)
ITAE Integral of Time weighted Absolute Error
ITSE Integral of Time weighted Square Error
KF Kalman Filter
LAN Local Area Network
LMI Linear Matrix Inequality
LMNR Localized Multiple Next-hop Routing
MAC Medium Access Control
MIMO Multiple-Input Multiple-Output
MoCoNet Monitoring and Controlling Educational Laboratory Processes over Internet
NCC Network Cost for Control
NCS Networked Control System
NS-2 Network Simulator version 2
OPNET Optimized Network Engineering Tool
PiccSIM Platform for Integrated Communications and Control design, Simulation, Implementation and Modeling
PID Proportional-Integral-Derivative
RTE Real-Time Ethernet
SAC Step Adaptive Controller
SISO Single-Input Single-Output
SSH Steady-State Heuristic
TCL Tool Command Language
TCP Transmission Control Protocol
TDMA Time Division Multiple Access
TLC Target Language Compiler
TOSSIM TinyOS Simulator
TSMP Time Synchronized Mesh Protocol
UDP User Datagram Protocol
WNCS Wireless Networked Control System
WLAN Wireless Local Area Network
WSAN Wireless Sensor and Actuator Network
WSN Wireless Sensor Network
QoS Quality of Service
QPT Quantitative Parameter Tuning
ZOH Zero Order Hold
LIST OF SYMBOLS
α Weighting factor
β Filtering factor
γ Time-constant of discrete-time filter
δ Delay jitter
δmax Jitter margin (maximum allowed delay jitter)
θ Markov-chain jump parameter
λ IMC tuning parameter, closed-loop system time-constant
π, πG, πB Markov-chain steady-state probability distribution, Good and Bad state of Gilbert-Elliott model
σ, σD, σGE Standard deviation, of data, of Gilbert-Elliott model
σnorm Normalized standard deviation
σtot Total standard deviation, on several time-scales
σNCC Network cost for control fairness measure
τ Process delay (without network induced delay)
ω Angular velocity
Γc Controller input matrix
Φc Controller state-transition matrix
Χ Stochastic process
a Controller gain parameter
b Set-point weighting
c Update step scaling factor of adaptive control speed algorithm
cv Coefficient of variation
cJ Cost scaling factor
d Delay of packet
Δd Delay difference (jitter)
df Time-constant of discrete-time derivative filter
dG, dB, dGE Packet drop probabilities of Gilbert-Elliott model: Good state, Bad state, average
dmax Maximum delay before control is switched to stop mode
e, Σe, Δe Control error, integral of error, derivative of error
ehold Error signal value held constant during network outage
f Frequency
f(k) Filter for PID PLUS
g Time-constant of steady-state heuristic
h, hbase Sampling interval, base sampling interval
i Index
j Imaginary unit or index
k Discrete time-index
ks Time-index for switching of controller
m Time-scale
m(k) Relative update speed of adaptive control speed scheme
maxcross Maximum constraint for cross-interaction
n Integer, order of IMC filter
pGG, pBB State-holding probabilities for Gilbert-Elliott model
pGB, pBG State-transition probabilities for Gilbert-Elliott model
pdrop Packet drop probability
pij Markov-chain state-transition probability
r, rd, rtot, rmeas Packet drop, desired, total, and measured packet drop
Δr Velocity of adaptive control speed algorithm
s Laplace-transform variable
t, t(k) Continuous time, discrete time-instant
tn Time-instant
u, uhold, uol Control signal, signal value held constant during network outage, and control signal of open-loop system
uD Derivative part of control signal
x, xKF, xs, xc Process, Kalman filter, sensor, controller state vector
y, yhold, yol, ys Process output, signal value held constant during network outage, output during open-loop control, sensor output
yin, yout Input and output signal of network
yr Control reference signal
Δy Difference in output, change in process output
z Process output measurement vector
A, B, C, D State-space matrices: state-transition, input, output, and direct terms. Xc: controller, Xc,drop: controller during packet drop, Xp: process, Xs: sensor
Adrop State-space transition matrix for whole system during packet drop
D Vector of delays
D(z) Denominator of discrete-time controller
Df Time-constant of derivative filter
Dhist Histogram of consecutive drop lengths
Dload Load disturbance
G, Gp Process transfer function
G−, G+ Invertible, non-invertible part of transfer function
Gc Controller transfer function
Gcl Closed-loop transfer function
Gf Low-pass filter
GIMC Internal model control transfer function
Gm Process transfer function model
Hc Controller output matrix
Jδ,est Delay jitter estimation cost function
Jtot Total cost function of MIMO process
JIAE, JISE, JITAE, JITSE Integral error cost functions
K, Km Process gain, process model gain
KNCC Network cost for control measure
Kp, Ki, Kd PID controller proportional, integral, and derivative gain
KKF Kalman gain
L Process time-delay (including constant minimum network induced delay)
LN Time-delay of network
N Number of …
N(z) Numerator of discrete-time controller
Nd Derivative filter constant of discrete-time PID controller
Nh Sampling instants per rise-time
Nmax Jitter margin in terms of sampling intervals
NM Number of states in Markov-chain
P Kalman filter state covariance matrix
P Markov-chain state-transition matrix
Q State covariance matrix
R Measurement covariance matrix
T, Tm, Tf Time-constant of process, process model, low-pass filter
Ti, Td Integration, derivation time of PID controller
Tout Length of network outage
TGE State-residence time of Gilbert-Elliott model
Tr Rise-time
TW Time-window
ΔT Difference in time
L Laplace operator
ℕ Natural numbers
Pr Probability
U Uniform random distribution
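The Gilbert-Elliott model parameters listed above (state-holding, state-transition, and per-state drop probabilities) describe a two-state Markov packet-loss channel. As an illustration of how such a model generates correlated drop sequences, a minimal sketch follows; the function name and the numeric parameter values are hypothetical, not taken from the thesis:

```python
import random

def gilbert_elliott(n, p_gb, p_bg, d_g, d_b, seed=1):
    """Simulate n packet transmissions through a two-state
    Gilbert-Elliott channel. p_gb and p_bg are the Good->Bad and
    Bad->Good transition probabilities; d_g and d_b are the packet
    drop probabilities in the Good and Bad states."""
    rng = random.Random(seed)
    state = "G"
    drops = []
    for _ in range(n):
        drop_p = d_g if state == "G" else d_b
        drops.append(rng.random() < drop_p)
        # State transition for the next packet
        if state == "G":
            state = "B" if rng.random() < p_gb else "G"
        else:
            state = "G" if rng.random() < p_bg else "B"
    return drops

# The long-run drop rate approaches the steady-state mixture
# d_GE = pi_G * d_G + pi_B * d_B, with pi_G = p_bg / (p_gb + p_bg).
drops = gilbert_elliott(100000, p_gb=0.05, p_bg=0.4, d_g=0.01, d_b=0.5)
```

Because the Bad state persists over several packets, losses arrive in bursts rather than independently, which is what distinguishes this model from a plain Bernoulli drop model.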
1. INTRODUCTION
The use of wireless networks in control applications, so-called "wireless automation", is an emerging application area [50], [110], with the possibility to revolutionize the automation industry [16]. The primary benefit of wireless control technology is reduced installation cost, as a considerable investment is made in the wiring of factories, both financially and in labor. Wireless technology is not only a replacement for cables; the benefits go beyond that. With wireless devices, increased flexibility is gained, as sensors can be placed more freely, even on rotating machines. Robustness is increased, as the communication can be done over several paths in a mesh network and failure of cables is eliminated [155]. Finally, there are the opportunities for new applications that are enabled by wireless control. Some existing or emerging applications are remote control of devices, for example cranes or dexterous and mobile robots, mobile applications, and wireless monitoring of large plants for fault detection, maintenance, production quality monitoring, and compliance with environmental regulations [59].
There is a strong aim [156] to develop and deploy wireless networked control systems (WNCS), where a control system communicates over a wireless network, in factory and home automation [9], [40], [50], [59], [82], [163]. In a related field, sensor network applications have also received much attention [2], [11], [158], [176]. Today, wireless automation technology is mostly applied in monitoring applications, because in these applications the network requirements in terms of real-time performance are low. The industry is cautious about applying wireless technology to closed-loop control, because of the unreliability of wireless networks. Consequently, current research on this subject generally aims at deterministic wireless control.
In addition to the technological and research interests, the simulation of WNCSs is important and necessary for several reasons. Current networked control system (NCS) research needs to be complemented by simulation to assess the validity and practical benefits of the developed theory and algorithms. The applicability of the developed algorithms must be evaluated in practical case studies. Simulations are a feasible way to test and assess the network and control strategies and theories for WNCSs before deployment. With simulations, problems occurring in the network, and the resulting performance of the control algorithms under these problems, can be studied. The critical properties and behavior of the network, and their impact on the control system, can be analyzed. In particular, the interaction between the network and the control system must be better understood, and its practical impact must be studied by simulation. These issues, especially the protocol-specific ones, are hard to approach analytically. Simulation studies will, hopefully, unravel these matters and lead to a coherent theory, best-practices knowledge, and design expertise for WNCSs.
This thesis focuses on simulations of WNCSs and on controller adaptation based on the quality of service of the wireless network. The aim is closed-loop control over an unreliable network, where the control system adapts to the network uncertainties. The network uncertainties can be due to fading and interference in the wireless communication, the non-determinism of the network protocols, or the varying demands of the application. The unreliability thus refers to the non-determinism and non-real-time operation of the network.
When starting to work on the thesis, the questions that immediately arose were: How does the quality of service (QoS) of the wireless network change? How does that affect the control system? What should the control system compensate for? How should it compensate for the changes in the network QoS? The investigation of these issues started with the development of the communication and control co-simulator PiccSIM.
The currently available simulation tools for WNCSs are few and limited in their simulation capabilities. Most of the available simulators concentrate on either the network or the control part. At the moment there exist only a couple of co-simulators in which both the network and the control system are properly addressed. The PiccSIM simulator, presented in Chapter 4, is an attempt to remedy this situation, with a complete set of modeling, design, and simulation tools. The initial simulation case studies presented in Section 4.7 give some insight into how the communication and control layers interact.
With PiccSIM, the controller adaptation part of this thesis can be addressed. The main impact of the wireless network on the control system is the limited bandwidth and non-determinism, causing communication delay jitter and packet losses. The adaptive control schemes developed in Chapter 5 deal with these issues. The adaptive control algorithms are not adaptive in the traditional sense of adapting to changes in the process [184], but rather to the network conditions. The controllers are not necessarily updated continuously, as in traditional adaptive control, but only when compensation of the network conditions requires it. Thus, the control system is flexible in compensating for the problems in the network. The adaptive schemes are ultimately verified by simulation on PiccSIM.
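The event-triggered flavor of this adaptation, retuning the controller only when the network conditions have drifted enough to warrant it, can be sketched as a toy example. This is an illustration of the structure only, not any of the Chapter 5 algorithms: the detuning rule, the 0.1 s nominal delay, and the 20 % trigger threshold are invented for the example.

```python
# Toy sketch: retune a controller gain only when the measured network
# delay drifts far enough from the delay the current tuning assumed.
# The detuning rule and all numeric values are illustrative assumptions.

def detune_gain(nominal_gain, delay, nominal_delay=0.1):
    """Reduce controller gain as the network delay grows (illustrative rule)."""
    return nominal_gain * nominal_delay / max(delay, nominal_delay)

class EventTriggeredTuner:
    def __init__(self, nominal_gain=2.0, threshold=0.2):
        self.nominal_gain = nominal_gain
        self.threshold = threshold      # relative delay change that triggers retuning
        self.tuned_for_delay = 0.1      # delay assumed by the current tuning (s)
        self.gain = nominal_gain

    def update(self, measured_delay):
        """Retune only if the delay changed by more than the threshold."""
        change = abs(measured_delay - self.tuned_for_delay) / self.tuned_for_delay
        if change > self.threshold:
            self.gain = detune_gain(self.nominal_gain, measured_delay)
            self.tuned_for_delay = measured_delay
            return True                 # controller was retuned
        return False                    # keep the current tuning

tuner = EventTriggeredTuner()
retunes = [tuner.update(d) for d in [0.10, 0.11, 0.25, 0.26, 0.10]]
```

The point is only the event-triggered structure: tuning work happens on the two large delay changes (to 0.25 s and back to 0.10 s), while small fluctuations leave the controller untouched.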
1.1. Objectives of the Thesis

When the subject of the thesis was first envisioned, the premise was that a WNCS uses an unreliable network whose QoS changes over time due to the inherent uncertainties of the wireless communication, changes in the environment, and non-deterministic network protocols. The solution would be to develop agile control algorithms that are flexible, self-tuning, and adaptive, to compensate for the deficiencies of the wireless communication.
The field of WNCSs is cross-disciplinary: both the network and the control system need to be taken into account. Traditionally, either the network or the control system has been studied separately, and there has been little research focusing on both aspects at the same time. The stability of NCSs has received plenty of attention in the literature [23], [72], [178], [74], [103], [160], but little is said about the practical implementation, behavior, and performance of the control systems. Many of the stability proofs or controller design methods are cumbersome, for instance [74], and if all the network-related problems are to be taken into account, the proofs become complicated [103].
This thesis aims at simplicity, giving a practical viewpoint on WNCS operation through the simulation cases and implementation. Practical controller design methods that are likely to be applied and implemented in real WNCS applications are employed. Easy implementation is facilitated by using proportional-integral-derivative (PID) controllers and internal model control (IMC) design. The PiccSIM simulator, described in Chapter 4, is merely a tool to test the developed adaptive networked control algorithms presented in Chapter 5. The scientific contribution of this thesis is the developed adaptive control algorithms for WNCSs. The aim is not state-of-the-art WNCS control performance or stability proofs, but to give more insight into the general tendencies of WNCSs and their practical implementation.
Wireless networks are inherently non-deterministic, and no network design can make them fully dependable, because of interference in the open communication medium. If, for instance, an industrial-standard WirelessHART-type network is used, the network performance can largely be considered deterministic, and the research deals with communication and controller scheduling [137], [160]. Instead of trying to make the network completely deterministic, which ultimately will fail, an alternative is to accept the network-related problems and use a cheap, but unreliable, network based on ZigBee or similar commercial off-the-shelf (COTS) technology. In return, the robustness of the control system to cope with these deficiencies needs to be improved. With this approach, wireless control can be applied in the automation industry and other applications without using possibly expensive industrial-grade hardware.
Increasing the control robustness against the network uncertainty can be done, for instance, by controller tuning [47]. In this thesis, the idea of changing the controller is taken further. Several adaptive control schemes or heuristics that compensate for the unreliable and non-deterministic network in a WNCS are developed in Chapter 5. The objective of this thesis is thus to develop control systems that work even if problems arise in the network.
The developed adaptation schemes address several different situations that arise in a WNCS: the self-configuration or self-tuning of the controllers depending on the network characteristics; the adaptation of control aggressiveness and generated network traffic according to the network congestion; the change of tuning in multiple-input multiple-output distributed control systems; and a heuristic to overcome network outages. All the developed algorithms are tested with PiccSIM, with promising results.
1.2. Contributions and Organization of the Thesis

There are many research topics in the field of WNCSs and sensor networks, such as hardware, sensor and energy technology, network protocols, software, middleware, and control algorithms. In this thesis little or nothing is said about the hardware, lower-level layers, and protocols, such as radio, medium access control, bandwidth allocation, controller scheduling, and security. The focus of this thesis is on WNCS simulation and design, and on adaptive control algorithms for WNCSs.
The main contributions of this thesis are the development of the simulation platform PiccSIM for communication and control co-simulation, including the user interfaces, the case study simulations done with the simulator, and the adaptive control algorithms for WNCSs. PiccSIM is released as an open source package and is free to use [127].
The contributions are summarized in the following list:
Development and implementation of a simulation platform for communication and control co-simulation and design.
- Development of the communication and control co-simulator PiccSIM for wireless control systems.
- Development of the PiccSIM Toolchain for integrated networked control system design with PiccSIM, including network design, a control tuning tool, and simulation graphical user interfaces (GUIs).
- Integration of additional propagation models into the network simulator ns-2 for more realistic simulation of wireless networks with data-based radio environment models.
- Implementations and case studies of several different scenarios simulated on PiccSIM. Simulations of all the adaptive controllers developed in this thesis. The results give new insights into the behavior of networked control systems.
- Development of remote access for PiccSIM for educational remote laboratory experiments and for researchers around the world.
- Automatic code generation from a Simulink model block diagram for implementation on Sensinode wireless nodes, with two demonstration cases.

New concepts and algorithms for networked control systems.
- Network cost for control, relating network quality of service to quality of control.
- IMC-PID design for networked control systems.
- Networked PID controller, a distributed version of the PID controller.
- Method for online changing of the controller sampling interval without bumps.

Development and simulation of several adaptive controller algorithms for networked control.
- Adaptive control tuning based on network delay jitter.
- Adaptive control speed and sampling interval based on network congestion.
- Adaptive MIMO control based on step response and load disturbance rejection. Selection of the cost function for controller parameter optimization in a decentralized MIMO control scenario.
- Control heuristic and compensation during network outages.

The contents of the thesis are based on the work presented in the papers [P1]-[P11], done in cooperation with the co-authors. The thesis can be divided into two parts. The first part deals with practical control system design for wireless control systems. Chapter 2 gives the preliminaries of the thesis. Chapter 3 introduces some results regarding WNCSs, related to network performance measurements and evaluation, and to control design. Chapter 5 treats different kinds of adaptive control algorithms [P8], [P9] or heuristics [P10], [P11] for wireless control systems. Minor contributions related to this area can also be found among the control theory preliminaries in Chapter 2.
The other half of the thesis deals with the development of the PiccSIM simulator and the PiccSIM Toolchain in Chapter 4 [P1], [P2], [P3], [P6]. A survey of related simulators is given in Section 4.2. Because the PiccSIM platform has evolved over the years and a considerable number of simulations have been done, Chapter 4 concentrates on giving a complete, up-to-date view of the platform and a coherent presentation of the simulations and their results. Some illustrative simulations are additionally carried out with PiccSIM in Section 4.7, where different simulation scenarios are considered, ranging from building automation and mobile robot control to wireless process control [P4], [P5], [P6].
The main work of the author is the development of PiccSIM and the implementation of the simulation cases in Chapter 4, the practical control results in Chapter 3, and the network adaptive control algorithms in Chapter 5. The co-authors of the related papers have mainly been involved in planning the simulation cases and writing the publications. In addition, Shekar Nethi has in particular developed the ns-2 part of PiccSIM, made the wireless measurements in Section 3.1, and assisted in the simulations. Jenna Brand has developed the wall-fading model in Section 4.3.4. Huang Chen from Vaasa University of Applied Sciences has implemented the ns-2 configuration tool presented in Section 4.4.2, with further development by Tuomo Kohtamäki, who has also implemented the PiccSIM user interfaces and the simulator time-synchronization and data-exchange mechanisms. Sofia Piltz has executed the simulations in Section 5.3. Kohtamäki and Piltz have done the work under the supervision of and in co-development with the author. The author has made the field overview and literature survey in Chapters 1 and 2, and developed the theory in Chapter 3.
The organization of the thesis is the following. In Chapter 2, the preliminaries used in the later chapters are established. Most notable are the jitter margin tuning and PID controllers in Sections 2.5 and 2.6, and the IMC design framework in Section 2.7, which are used in several of the adaptive control schemes. In Chapter 3, new results regarding WNCSs are presented: measurements of packet drop and estimated network models are shown, and the application of IMC design to NCSs is analyzed. A novel network QoS measure for NCSs, based on packet drops, and its effect on the control system are presented in Section 3.5. The proposed network cost for control measure correlates with the obtainable control performance, and hence gives a good network design objective for WNCSs. The network and control co-simulator PiccSIM is introduced in Chapter 4, including the technical details and the PiccSIM Toolchain in Sections 4.3-4.6, and some simulation results that point out special characteristics of WNCSs in Section 4.7. In Chapter 5, the adaptive control algorithms and heuristics are developed. The adaptive schemes are presented in separate sections, with the simulations, results, and conclusions obtained with PiccSIM. The thesis is finalized with conclusions in Chapter 6.
1.3. Background of Wireless Control

One of the first real wireless control systems can be traced to US patent no. 613809 by Nikola Tesla, filed on the 1st of July 1898. The patent, named "Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles", described how to remotely control a boat, without mechanical devices or wires, by switching electrical motors on or off, or holding their state. In one demonstration Tesla remotely controlled a boat from 18 miles away on the Isle of Wight [57]. The design was improved by Leonardo Torres Quevedo in 1903 (patent in Spain) with his Telekino, which introduced multiple states and codewords to control multiple devices (up to 19) of different types [125]. Later, Torres Quevedo envisioned implementing the same technology on torpedoes. He also had plans to apply the Telekino to remotely control dirigible balloons and planes (because test flying was dangerous), but lack of funding made him abandon the development of his inventions.
The few early remote control applications used analog commands and radio-controlled electromechanical escapement mechanisms, similar to the "Tesla boat" or the Telekino of Torres Quevedo. In the 1960s remote control developed drastically with transistor-based radios and multi-channel communication, which allowed simultaneous control in several dimensions, for example the pitch, yaw, and motor speed of a remote-controlled model plane. The space age drove the technology forward, dictated by the need to get data from spacecraft (telemetry) or send commands to them (remote control).
The first packet-based radio network, ALOHANET, was deployed in 1971 at the University of Hawaii [57]. Industrial applications also started to emerge, as more information could be communicated to separate devices. In the beginning, the wireless communication used proprietary protocols. The first widespread industrial applications emerged in the 1980s, when remote-controlled switchyard locomotives and cranes appeared. At that time, proprietary devices working on standardized radio communication protocols were developed [140], [150].
Development of wireless local area networks (WLAN) operating on the Industrial, Scientific, and Medical (ISM) radio band started in 1985; WLAN later became generally accepted through the IEEE 802.11 standard, which solved the limitations of the earlier implementations [57]. Digital wireless communication developed in the early 1990s for cellular phones. Nowadays coded pulse-width modulation or pulse-code modulation is used for model planes and similar remote-controlled toys, and some more advanced model plane remote controls use the license-free ISM radio band at 2.4 GHz. At the moment, the standardization of digital wireless communication and protocols suitable for industrial control systems, such as IEEE 802.15.4 "ZigBee" [180], has sparked the field, and new interoperable devices from different vendors are emerging [16]. These advances have enabled the cheap and ubiquitous devices of today, and wireless devices are currently starting to be applied in wireless automation applications. The development from fieldbus-based automation systems to networked systems, such as real-time Ethernet (RTE), and in the near future to wireless networks, is described in [50].
1.4. Wireless Control Systems and Simulation

In a networked control system, sensors, controllers, and actuators are connected through a computer network [9]. The standard approach in automation is to use a fieldbus, which connects all the devices through a shared network. One of the benefits of NCSs is reduced cabling cost [115], and cabling is removed completely by the introduction of wireless devices. Other advantages include the ease of adding field devices, two-way communication with field devices for remote configuration, device status, diagnostics, and health monitoring, and the possibility to utilize more advanced control strategies because of improved field data [59], [115].
Cheap and proven technology from the office environment is being applied to automation. Ethernet networks are becoming regularly used and have to some extent replaced fieldbus technology in control applications [110]. "Industrial Ethernets", or RTE [36], [115], which allow for real-time operation, where an operation is guaranteed to be executed within a given time, are gradually being applied. The same benefits are also available with wireless technology, with the addition of accessing the data wirelessly using a handheld device, enabling in-situ inspection of the process [19].
The terms wireless networked control system and wireless sensor and actuator network (WSAN) refer to a control system that communicates over a wireless network. These systems deliver more benefits in terms of flexibility and cost compared to NCSs, as there are no wires, but also more problems, mainly because of the open-air and shared communication medium. The general convention for distinguishing between the two terms is related to the background of the researchers working in this field. WSAN refers to a wireless sensor network (WSN) [11] with the addition of actuators, whereas a WNCS is more aimed at wireless industrial automation. The former is rooted in the networking area and is more ad hoc, redundant, and tolerant of failures in the system, whereas the latter comes from the control area and is designed for high reliability and dependability.
An overview of NCSs can be found in [9] and [65]. The benefits of NCSs are that cabling is reduced, similar to using an automation fieldbus, and that cheaper office-grade hardware is utilized [21]. The general development and philosophy of networked control systems are presented in [21] and [50]. There are many technological and social obstacles to using wireless networks in control. The main concerns against deploying wireless networks for control are the uncertainty of communication, co-existence with other wireless networks [50], and security. The inability to guarantee a sufficient quality of service for the control system is a real concern. Control engineers are hesitant to apply technology that cannot be trusted, since failure in control can cause physical damage. The network must therefore provide real-time and constant operation [110]. This required real-time operation may not always be guaranteed, which causes problems for the control system design [93]. This thesis tries to show through simulations that hard real-time operation is not necessarily needed in practical applications: soft real-time operation is enough, if it is taken into account in the control design, for instance through adaptation. Another concern hindering the adoption of wireless technologies is security, since the wireless medium is open to eavesdropping and interference [112].
WNCSs are in essence non-deterministic, stochastic, and asynchronous systems, which are difficult for traditional control theory, where a constant sampling interval is assumed, cf. the Z-transform. Therefore simulators for NCSs are needed, where the asynchronism and the issues related to the network and control interaction can be studied. Uniform packet loss or analytical delay distributions are usually assumed in networked control design, but these assumptions do not necessarily hold in practice. Simulation of WNCSs with specific network protocols is thus needed, and therefore the network and control co-simulator PiccSIM is developed in this thesis. The strength of PiccSIM is that it enables one to quickly test several control algorithms in realistic WNCS scenarios [P2]. With the automatic code generation capabilities, the algorithms can easily be tested further in real applications [P3].
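The basic effect of non-ideal communication on a control loop can be illustrated even without a network simulator. The following sketch (not PiccSIM; the plant and PI parameters are invented for the example) closes a loop over a link that drops each control packet independently with probability p, the actuator holding its last received value on a drop. The tracking error grows with the loss rate, which is the kind of interaction a co-simulator lets one study with realistic, protocol-specific loss patterns instead of this idealized i.i.d. model.

```python
import random

# Minimal sketch (not PiccSIM): a first-order plant x[k+1] = a*x[k] + b*u[k]
# under PI control, where each controller-to-actuator packet is dropped
# independently with probability p_drop and the actuator then holds its
# previous value.  Plant and controller parameters are illustrative.

def simulate(p_drop, steps=200, seed=1):
    rng = random.Random(seed)
    a, b = 0.9, 0.1            # stable first-order plant
    kp, ki = 2.0, 0.5          # PI gains (illustrative tuning)
    x = integ = u_held = 0.0
    ref, err_sq = 1.0, 0.0
    for _ in range(steps):
        e = ref - x
        integ += e
        u = kp * e + ki * integ        # controller output
        if rng.random() >= p_drop:     # packet delivered
            u_held = u                 # actuator updates
        x = a * x + b * u_held         # plant step (stale u on a drop)
        err_sq += e * e
    return err_sq / steps              # mean squared tracking error

def avg_mse(p_drop, n_seeds=20):
    return sum(simulate(p_drop, seed=s) for s in range(n_seeds)) / n_seeds

lossless = avg_mse(0.0)
lossy = avg_mse(0.4)       # 40 % packet loss degrades the tracking
```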
There are already some suitable simulators for WNCSs, such as TrueTime [22] and the Modelica and ns-2 co-simulator [17], reviewed in Section 4.2. PiccSIM integrates two simulators to achieve an accurate and versatile simulation system for WNCSs at both the communication and the control level. It has the unique feature of delivering a whole chain of tools for network and control modeling and design, integrated into one package with communication and control co-simulation capabilities. By combining the design and simulation of WNCSs into one tool, a flexible, integrated, and powerful co-simulation platform for research is obtained [P3]. With PiccSIM, the specific characteristics of WNCSs can be studied by simulation, as is done in some example simulations presented in Section 4.7.
The algorithms developed in this thesis are aimed at future agile wireless control systems, in industrial or consumer applications. The adaptive control algorithms are designed to work when a non-deterministic network is used for the control system communication. The network used would be classified either as an office network or as a WSN/WSAN. The target applications are in process control, as opposed to discrete factory automation. Typical uses are stable processes in industry, toys and home applications, or ubiquitous applications in society. Examples of home applications are building automation and remote-controlled cars and robots. In a ubiquitous computing future, the applications would be diverse. The initial industrial applications would be such that, by adding a cheap wireless control system, additional value is obtained from the assistance of this secondary control. Nothing prevents the future use of cheap wireless control for a whole plant, provided that it is stable and non-critical. For critical and unstable industrial processes, special industrial networks and protocols, which can deliver deterministic real-time performance, are recommended.
1.5. Research on Wireless Control Networks and Applications
The wireless roadmap developed by the RUNES project, with the needed technological and social developments for the adoption of wireless technology in automation, is summarized in [80]. A comprehensive overview of current technologies, future issues, and research topics of wireless industrial networking is given in [59] and [165]. Several wireless standards are presented and the anticipated promising research topics are introduced, among them: network architecture and scalability, network standards, quality of service measures, provisioning and analysis of wireless industrial networks, real-time operation and reliability, security, and energy efficiency. Another source of information on industrial wireless control is the report [46], where the whole field is reviewed, starting from wireless communication, moving to control issues and theories, and finally to simulation tools. The wired NCS case, with MAC, QoS, and other issues similar to the wireless case, is discussed in [110].
There are many other papers giving an overview of the current wireless technologies and networks for control, e.g. [59], [69], [124], and [163]. Gungor reviews the challenges, design goals, and technical solutions for industrial wireless sensor networks [59]. Willig [163] discusses several properties and challenges of using wireless communication in real-time control applications. Some of the network-related issues are interference, path loss, timing and timeliness, co-existence with other wireless networks, and connection to an existing wired automation system. Pellegrini [124] discusses the requirements and features for using wireless communication at the device level in an automation system, including power consumption, security, and connection to the wired control system. The necessity of wireless protocols aimed specifically at control applications is also pointed out.
Wireless communication can be applied in many control applications in process control and factory automation. The first benefit is the reduced wiring and installation cost [19]. The savings naturally increase with plant size, for example in oil refineries, and with the number of sensors. The use of wireless technologies in automation makes it possible to place sensors more freely in a factory, even in places where it was previously expensive or impossible, such as explosive environments and rotating devices. Industrial robots will also become more agile as the wires are removed [150]. New applications using wireless communication, such as mobile applications, will emerge.
1.5.1. Wireless Networks for Control

Wireless networks for control applications are currently envisioned to use standard existing wireless devices such as Bluetooth, ZigBee (based on the IEEE 802.15.4 radio) [11], and WLAN (IEEE 802.11). The wireless network design problems are presented, for instance, in [82]. Traditional computer networks, such as Ethernet and WLAN, use carrier sense multiple access (CSMA) type medium access control (MAC) with exponential back-off in case of collisions. Several MAC types are compared and their suitability for control purposes is evaluated in [25], where, among the compared protocols, the CSMA type was found to be the best because of the immediate transmission opportunity. This result does not hold in high-traffic conditions, where collisions trigger back-offs, which were not taken into account in [25]. The non-deterministic exponential back-off of the default CSMA protocol is not suitable for wireless control applications, since the communication delay, which is important for control stability [23], cannot be bounded, and packet drop due to congestion decreases the performance [96]. The current preferred solution is to use deterministic networks, based on polling (e.g. Bluetooth) or scheduling (WirelessHART and ISA100.11a).
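The unbounded-delay argument can be made concrete with a small sketch of binary exponential back-off as used in CSMA/CA networks: the contention window doubles after every collision, so the mean channel-access delay grows roughly geometrically with the number of collisions. The CW values of 16 and 1024 and the 20 microsecond slot are 802.11b-like, but are used here purely as illustrative assumptions.

```python
import random

# Sketch of CSMA binary exponential back-off (802.11-style): the contention
# window doubles after every collision, up to CW_MAX, and the node waits a
# uniformly random number of slots before each attempt.  CW_MIN = 16,
# CW_MAX = 1024 and the 20 us slot are 802.11b-like illustrative values.

CW_MIN, CW_MAX, SLOT_US = 16, 1024, 20

def backoff_delay_us(collisions, rng):
    """Total back-off wait in microseconds after `collisions` failed attempts."""
    total_slots, cw = 0, CW_MIN
    for _ in range(collisions + 1):       # initial attempt + one per collision
        total_slots += rng.randrange(cw)  # uniform draw in [0, cw)
        cw = min(2 * cw, CW_MAX)          # window doubles after a collision
    return total_slots * SLOT_US

rng = random.Random(0)
# Mean access delay grows roughly geometrically with the collision count,
# so the channel-access delay cannot be tightly bounded under congestion.
avg = {c: sum(backoff_delay_us(c, rng) for _ in range(2000)) / 2000
       for c in (0, 2, 4)}
```

With these values the mean waits come out at roughly 0.15 ms, 1.1 ms, and 4.9 ms for 0, 2, and 4 collisions, and the tail of the distribution is what makes the delay impossible to bound for control purposes.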
Wireless networks are already used for control. Some early adoptions of wireless devices as cable replacements are listed in [80]. The first wireless deployments have mostly been cable replacements using Bluetooth. Bluetooth has, however, given way to ZigBee, as ZigBee has lower power consumption and more flexible networking. An overview of ZigBee/IEEE 802.15.4 can be found in [11]. ZigBee has rightfully been criticized for being unreliable, lacking techniques to mitigate the communication problems, and being unsuitable for industrial control [88]. ZigBee is more suitable for small applications, and there are separate industrial standards for wireless automation. Using standard wireless hardware for automation is considered in [124], where two application layer protocols suitable for real-time control are designed and evaluated.
In the current wireless automation applications, the radios typically operate in the open ISM frequency band. The ISM band is quite crowded, since office networks (WLAN, Bluetooth) also operate at the same frequencies. In the future, a separate frequency band could be reserved world-wide exclusively for industrial automation applications, to enable proper, interference-free wireless control operation.
The use of heterogeneous networks spanning the whole automation system, from low-level devices to high-level functions such as production monitoring, is considered in [115] and [110], where the applicability of different networks at the different levels and for the different tasks is evaluated. For higher-level functions, such as plant monitoring and production planning, trend analysis, or the gathering of batch information, real-time operation is not necessary, and office-grade wireless networks are suitable. The current wireless automation standards consider only device-level wireless networks, where sensor devices report their measured values and possible health data to a gateway and the rest of the automation system. The network is thus used only at the lowest device level of the whole automation system [150]. In practice, plant-wide wireless networks with proprietary protocols based on the office-grade IEEE 802.11 standard are also used.
Despite the wireless communication, the devices may still have wired power, because of the large power requirements of the sensor or, more often, the actuator. For truly wireless devices, the power source must be local. A battery contains a finite amount of energy, so either the device lifetime is limited, or energy must be gathered from the environment during operation with energy harvesting techniques. Sources of auxiliary energy are, for example, electromagnetic waves, light, vibration, or temperature differences [123]. Another solution for completely getting rid of cables is wireless power transfer. An existing solution is inductive power transfer to devices located inside a cage [140]. The cage walls induce a rotating magnetic field that solenoids in the devices convert to current. Typical transferred power ranges from 10 to 100 mW [150].
1.5.2. Current Standards for Wireless Automation

Currently, there are two standards for industrial wireless automation applications: WirelessHART and ISA100.11a. Both industrial standards are based on the IEEE 802.15.4 radio [180]. The IEEE 802.15.4 standard is suitable for building automation [76], industrial monitoring, and control applications [40], [161]. Its main characteristics are a low bit rate and low power consumption. The WirelessHART standard and some implementation details are discussed in [148]. ISA100.11a is in practice very similar to WirelessHART, as both have similar design goals and use the same radio, but the two standards are not compatible. The WISA system is a complete solution for a reliable wireless cell in industrial manufacturing [140].
The architecture of both industrial wireless network standards includes sensor nodes, wireless routers communicating with each other, and a gateway, which is connected to the automation fieldbus and the rest of the automation system. Mesh networking is possible for reliability, but all communication between devices in the wireless network is routed via the gateway. This routing constraint makes the network scheduling and routing design easier.
WirelessHART was approved by the International Electrotechnical Commission (IEC) as a full international standard (IEC 62591 Ed. 1.0) in March 2010. Several manufacturers have released WirelessHART devices, and it is by now in use in control applications [166]. The ISA100.11a standard [70] was published in September 2009 and gained IEC approval in 2010. Hence, the field of industrial wireless control has taken its first steps. The standards are designed for determinism, such that traditional control can readily be applied. Although determinism is the main design goal, it is never fully assured, and it comes at the expense of performance and flexibility.
WirelessHART uses a MAC protocol combining time division multiple access (TDMA) and frequency division multiple access (FDMA). The TDMA slot is 10 ms, in which a data packet with sensor or control information and an acknowledgement are exchanged between two nodes. The network and transport layers are based on the Time Synchronized Mesh Protocol (TSMP), originally developed by Dust Networks [155]. Each node pair is assigned a unique time/frequency slot for contention-free communication by a centralized network manager [155]. Some slots can be reserved for contention-based access using CSMA, for communicating rare event messages or retransmissions in case of dropped packets. Additionally, frequency hopping is used to mitigate interference on some channels. A more detailed presentation of WirelessHART can be found in [148]. The benefits of WirelessHART, and how to accommodate the control system to the wireless network and meet the required control performance, are discussed in [117]. ISA100.11a uses similar techniques, and both network standards can be applied where the application tolerates a delay jitter on the order of 100 ms. The delay jitter stems from packet drops due to interference.
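The centralized time/frequency slot assignment can be illustrated with a greedy toy scheduler. It is a stand-in for a real WirelessHART network manager, not the actual algorithm: each link (node pair) receives a unique (slot, channel) cell, under the constraint that a node participates in at most one link per time slot. The 10 ms slot length comes from the text; the 16 channels are those of IEEE 802.15.4 in the 2.4 GHz band; the node names are invented.

```python
# Sketch of WirelessHART-style TDMA/FDMA scheduling: each link (node pair)
# gets a unique (time slot, channel) cell, and a node may take part in at
# most one link per time slot.  The greedy assignment below is an
# illustrative stand-in for a real network manager's scheduler.

SLOT_MS = 10        # WirelessHART time slot length
N_CHANNELS = 16     # IEEE 802.15.4 channels in the 2.4 GHz band

def schedule(links, n_slots):
    """Greedily map links (a, b) to (slot, channel) cells without conflicts."""
    cells = {}                                  # (slot, channel) -> link
    busy = [set() for _ in range(n_slots)]      # nodes already active per slot
    for link in links:
        a, b = link
        placed = False
        for t in range(n_slots):
            if a in busy[t] or b in busy[t]:
                continue                        # node already busy in this slot
            for ch in range(N_CHANNELS):
                if (t, ch) not in cells:
                    cells[(t, ch)] = link
                    busy[t].update(link)
                    placed = True
                    break
            if placed:
                break
        if not placed:
            raise ValueError(f"no free cell for link {link}")
    return cells

# Four links among five nodes; links sharing a node land in different slots.
cells = schedule([("s1", "r1"), ("s2", "r1"), ("r1", "gw"), ("s3", "gw")], n_slots=4)
superframe_ms = 4 * SLOT_MS     # a 4-slot superframe repeats every 40 ms
```

Links that share a node (here r1 or gw) necessarily land in different time slots, while independent links can share a slot on different channels; the resulting 4-slot superframe repeats every 40 ms.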
The scheduling and routing of the WirelessHART and ISA100.11a networks are left open in their standards. Due to the determinism of the TDMA approach with a pre-determined schedule, fixed bounds on the communication can be advertised, although not guaranteed. In the case of packet drops, retransmission is needed, which may cause the information to exceed the delay bound. Retransmission slots must thus be incorporated into the schedule, which reduces the available bandwidth and unavoidably introduces delay jitter. Retransmission can take place on the slots allocated for random access, or on extra slots allocated in the schedule. The schedule and retransmissions determine when information is available to the control system, and hence affect the control operation. There exists work where the actual network MAC protocol and related functions, such as the duty-cycle [102], or routing and schedule [137], [160], are taken into account in the control stability proof.
The current standards are designed for reliability and are thus conservative, which implies that closed-loop control of fast processes is not possible. The design decisions of both standards ensure a relatively simple network design. The use of TDMA ensures determinism (disregarding packet drop due to interference) and the routing-via-gateway constraint results in a simpler routing design. Current research related to the standards concerns, for instance, the optimality of the time/frequency-slot scheduling and routing [160]. The room for improvement is thus limited.
The future research issues therefore include new technologies and algorithms to advance the capabilities of wireless control. The introduction of new agile and
intelligent communication methods will improve the field. These new networks will probably not guarantee a certain QoS or be deterministic, as is the case when using TDMA. One research direction is then the introduction of adaptive control methods to compensate for the deficiencies of the wireless communication, which this thesis focuses on.
In the future, wireless control systems with low performance requirements are likely to emerge. These can be based on commercial off-the-shelf hardware, by adopting robust control algorithms. Today’s COTS hardware, such as WLAN, Bluetooth, and IEEE 802.15.4, utilizes mostly CSMA type communications [69]. This implies that the network is inherently non-deterministic and unreliable. There are no quality of service guarantees, such as designated transmission slots. This does not mean that wireless applications on this hardware are impossible; it is rather a research opportunity. Several practical applications can be proven to work satisfactorily, using simulations and pilot implementations.
1.5.3. Wireless Sensor Networks
Wireless sensor networks are a field closely related to WNCSs, with a lot of ongoing research. In a WSN, a low powered wireless network with hundreds or thousands of nodes senses or observes some phenomenon, collaborating on environment monitoring to deliver situation awareness to the user [73]. The nodes are small and low cost with a limited operational time [158]. The limited power source of WSN nodes demands algorithms with low computational and communication requirements to enable a long lifetime of the application [59]. The applications range from environmental, agricultural or structural health monitoring (forests, crops, earthquakes, bridges, buildings, among others), and asset management (inventory surveillance, plant monitoring, and maintenance), to military and battlefield applications (detection of events such as enemy activity, poisonous gases, or radioactivity) [11]. The key properties, applications, and open research problems of wireless sensor networks are summarized in [176], [2] and [59]. The leading research is summarized in [11].
The network related research topics in WSNs are mostly medium access control or routing [73]. The networking issues are similar to WNCSs, but there is usually no closed-loop control and thus the real-time operation requirement is not as strict as in wireless automation. Reliability is obtained with redundancy and distributed computation. The low power consumption of the tiny network nodes is necessary to save the battery. This boils down to hardware and MAC protocol design, for example in the WiseNET sensor network [43], or TUTWSN developed at the Tampere University of Technology, Finland [78]. Other topics in sensor networks are data compression, storage, transportation, processing and enhancing [54].
Sensor networks can be used as a monitoring system for plants, where the sensors deliver additional measurements of a plant, independent of the automation system. The increased demands for high efficiency and ecological production require new, cheap, and flexible production monitoring technologies. Industrial wireless sensor networks can be used for production monitoring of energy efficiency and compliance to environmental regulations [59]. Another similar application is the “mobile wireless industrial worker,” where a serviceman can walk in a factory and monitor the nearby sensors and actuators with a wireless handheld device [19].
The issues and challenges of applying a sensor network to factory automation are summarized in [179]. Such lessons are valuable for the deployment of wireless control in industrial environments, as the conditions may be quite harsh, including shadowing and interference from motors and devices [164]. There are some reports on the experiences of sensor network deployments in industrial environments. A four month continuous monitoring campaign of a plant has been reported, where power management protocols and periodic system resets were used [81]. Another example is a sewage overflow control system called CSOnet. This is a metropolitan wide sensor and actuator network, consisting of about 150 wireless sensor nodes, used to control sewage overflow by measuring the water levels in the sewage and controlling storm water flow to prevent overflow in case of heavy rain [109]. The experiences of a WSN deployment in a mine are presented in [1].
2. PRELIMINARIES – NETWORKS AND CONTROLLERS
In this chapter, preliminary information and relevant theory that are needed later are summarized. First the general assumptions of the WNCS used in this thesis are listed and networked control structures are discussed. A defining feature of WNCSs is packet drop; therefore several packet drop models are presented in Section 2.4. Measurements and estimation of the corresponding packet drop models are presented in Sections 3.1 and 3.2.
In the following sections some controller design and tuning methods for WNCSs are presented. First, a stability criterion for NCSs, used in many of the controller tuning algorithms, is given in Section 2.5. Several PID controller structures suitable for NCSs are then presented in Section 2.6 and later, in Section 3.3, a new control structure is proposed. The internal model control framework is treated in Section 2.7, including the IMC-PID controller design.
In Section 2.8, some initial approaches in the literature on network adaptive control or control traffic adjustment are reviewed. Network congestion and adaptation methods of control traffic are also discussed. These issues are later developed further in Section 3.5. Finally, Kalman filtering in NCSs with packet dropout is presented in Section 2.9.
2.1. The Networked Control Problem
The general problem in the NCS field is related to the stability of the control system in the case of information loss. In a traditional wired control system, the operation is deterministic and the sampling instants are equally spaced. These dynamic systems can be analyzed effectively using the Z-transform, where several proofs of stability exist, based for instance on the poles of the closed-loop system transfer function.
In the wireless control case the information flow between some of the components is stochastic and the situation becomes problematic. In this case the stability depends on the varying delay and packet drop of the network. Often the system is also not synchronized, or periodic sampling is not possible as the sensors and controllers are distributed, which means that the Z-transform cannot
be readily applied. This results in stochastic stability proofs or cases where for example all the possible packet drop realizations have to be enumerated for proving the stability.
Current wireless control system research has its roots in networked control system theory, as the issues of a shared communication medium are the same. The research problems are mainly related to variable communication time-delays and packet losses, and system architecture design, see [179] and [165]. Both fields deal with network protocols [6], [102], transmission scheduling [160], [159], communication and control co-scheduling [137], [153], traffic reduction [27], [92], congestion control [134], [157], and estimation [113], [173], [177]. The main difference between NCSs and WNCSs is that wireless communication is less deterministic because of external interference and finite communication range, but problems with wiring and failing connectors are eliminated.
Some of the approaches for proving controller stability include: LQG control [60], Linear Matrix Inequalities (LMIs) [178], Markov Jump Linear Systems (MJLS) [74], the jitter margin [23], [72], Lyapunov functions [103], power spectrum [94], and optimal communication scheduling for stability [160]. Other control related theory relates to Kalman filtering [144], [171], controller tuning [47], [67], and control performance [91], [94].
2.2. General Assumptions
Throughout this thesis certain assumptions on the studied WNCS are made. The assumptions are declared and motivated here. Previously the majority of the literature focused on wired networked control systems. Nowadays, wireless NCSs are also considered. This thesis focuses solely on WNCSs and the simulated cases are all with a wireless network. Some of the developed theory can be applied to wired NCSs, although the problems with NCSs are exaggerated in the wireless case, as wireless networks are, in general, less reliable than wired ones, because the shared and open transmission medium is susceptible to interference.
The wireless network is thus assumed to be unreliable, with time-varying delivered quality of service and with the possibility of longer outages. The unreliability is either due to the properties of the wireless communication, or due to the used non-deterministic network protocols, such as a CSMA-type MAC.
The adaptive algorithms adapt to the general, slowly changing, performance of the network. Instantaneous accommodation to sudden bursts of packet drops is in practice impossible, and can cause instability due to switching of controller parameters. The adaptation is done slowly, such that problems of instability due to switching of tuning are not an issue, as is customary in adaptive control approaches [184].
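The slow adaptation principle can be illustrated with a minimal sketch: instead of reacting to individual drops, the adaptive layer tracks a long-term drop rate estimate, for example with an exponentially weighted moving average. The smoothing factor below is an illustrative value, not one used in the thesis.

```python
# Minimal sketch of slow adaptation: the controller is retuned based on an
# exponentially weighted moving average (EWMA) of the drop indicator, not on
# individual packet drops. alpha is a made-up illustrative smoothing factor.

def ewma_drop_rate(drops, alpha=0.02, initial=0.0):
    """Track the long-term packet drop probability from a 0/1 drop sequence."""
    rate = initial
    history = []
    for dropped in drops:
        rate = (1 - alpha) * rate + alpha * dropped
        history.append(rate)
    return history

# A short burst of drops barely moves the estimate, so the controller tuning
# stays put; only a sustained change in network quality shifts it.
burst = [1, 1, 1] + [0] * 97
history = ewma_drop_rate(burst)
assert history[2] < 0.1          # three consecutive drops: still a small estimate
assert history[-1] < history[2]  # estimate decays again once drops stop
```
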
Time-driven sensors, controllers, and actuators are assumed throughout. This implies that the observed delays and delay jitters are effectively quantized to multiples of the sampling interval. This simplifies the analysis, since actions between sampling instants need not be taken into account. In the case where event-driven controllers and actuators are assumed, as is sometimes done in the literature, the theory and implementation would be more complicated, as the algorithms would become truly time-variant. In practice, systems are still asynchronous, as there might be a time-offset between the sampling instants of the clocks of all the nodes in the WNCS, if they are not synchronized. Random time offsets are automatically used in the PiccSIM simulator.
Due to the choice of time‐driven operation, a zero order hold (ZOH) is assumed at the receiver until the next sampling instant. In the case of a dropped packet, ZOH is also used, such that the previously received information is held until a new value is received.
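The ZOH assumption amounts to a one-line rule at the receiver: on a drop, keep the last received value. A minimal sketch (using None to mark a dropped packet is an implementation choice for the illustration):

```python
def zoh_receiver(samples):
    """Zero order hold at the receiver: hold the last received value over
    dropped packets (None marks a drop)."""
    held = None
    out = []
    for s in samples:
        if s is not None:
            held = s          # new packet received: update the held value
        out.append(held)      # on a drop the previous value is held
    return out

# One delivered value bridges the two following drops.
assert zoh_receiver([1.0, None, None, 2.5, None]) == [1.0, 1.0, 1.0, 2.5, 2.5]
```
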
The wireless nodes are assumed to be ideal, in the sense that the input/output and computational tasks are always performed on time. The hardware, including the microcontroller and radio, is not modeled. The scheduling of the tasks in the microcontroller and the resulting computational delays are not taken into account in the PiccSIM simulator. It is assumed that the operations are bounded by the sampling interval, such that sampling, transmission, and reception are executed before the next sampling instant. This is motivated by the short communication delay compared to the sampling interval, typically observed in the simulations of this thesis.
The wireless network is often assumed to reside between the sensor and controller. The controller is co-located at the actuator, which then naturally eliminates one (unnecessary) wireless communication link between the controller and actuator, and the controller can take advantage of the wired power often required by the actuator. Only wireless measurements are assumed in the theory because of technical aspects, where stability proofs are only formulated for this case. In practice, depending on the application, some simulation cases also have wireless communication between the controller and actuator.
Stable processes are assumed, as an outage in the network makes the control system work in an open-loop configuration, which would be detrimental in control of an unstable process. Furthermore, a simple process model is preferred. When doing control design, generally a first-order process with time delay (FOTD) [185] of the form
G(s) = K/(Ts + 1) e^(−τs),   (1)
where K is the process gain, T is the time‐constant and τ is the time‐delay, is assumed. In the case of higher‐order processes, a first‐order approximation can in some cases be used.
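For reference, the FOTD model (1) has a closed-form unit step response: the output stays at zero during the dead time τ and then rises exponentially with time constant T towards the gain K. A small sketch:

```python
# Unit step response of the FOTD model G(s) = K/(Ts + 1) * exp(-tau*s).
import math

def fotd_step(K: float, T: float, tau: float, t: float) -> float:
    """Step response of the FOTD model (1) at time t (zero initial state)."""
    if t < tau:
        return 0.0               # dead time: no response yet
    return K * (1.0 - math.exp(-(t - tau) / T))
```

For example, with K = 2, T = 5 and τ = 1 the response reaches about 63 % of the gain one time constant after the dead time, i.e. at t = 6.
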
The total time‐delay L in a control loop is defined as
L = L_N + τ,   (2)

where L_N is the constant minimum communication delay of the network [23]. The control design is always done for the total delay L. On top of the constant time-delay, an additional varying delay δ(t), caused by the network, is often present.
For communication, today’s commercial off-the-shelf radios, or similar, are assumed to be used. In the PiccSIM simulations an IEEE 802.15.4 network [11] is always used. This network type is selected because it is well suited for low power, low bandwidth communication, and the current wireless automation standards use it. Non-deterministic operation of the network is assumed, mainly due to the CSMA type MAC protocol. Deterministic approaches such as WirelessHART are not considered, because they do not pose the same problems of varying delivered QoS. UDP-like (User Datagram Protocol) communication is used, since the sensors and controllers are time-driven and they send packets at a fixed rate. UDP does not retransmit in case of packet drop, but this is not required in control applications: due to the real-time operation, sending new information is more desirable than retransmitting old information, which may already be outdated. Traffic rate adapting protocols, such as the Transmission Control Protocol (TCP), cannot be used in control applications, because of the constant packet rate produced by the sensors. Thus, before deployment of a wireless automation system, the designer has to verify, for instance by simulation, that the bandwidth of the network is adequate for the application. In Section 5.2 a controller with adaptive communication rate is developed to alleviate this situation.
In this thesis only problems of packet drops in the network are considered. Due to the time-driven assumption, packet drop can be thought of as a kind of varying delay, as shown in Section 2.4.1, since the controller has to wait for the next measurement packet if the current one is dropped by the network. In the simulation cases of this thesis the varying delay induced by the network is negligible compared to the sampling interval, thus only packet drop needs to be considered in the control design.
Only lightweight control algorithms, such as variations of the PID controller, are considered. The low computation capabilities and power saving requirements of wireless nodes necessitate the usage of simple algorithms. The PID controller is also favored because of its widespread use in industry.
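As a baseline for the PID variants discussed later, a textbook discrete-time positional PID is only a few lines; this generic sketch (not the thesis’ networked PID) illustrates why such controllers suit low-power nodes:

```python
class DiscretePID:
    """Textbook positional PID with backward-difference derivative.

    A generic lightweight baseline; the networked PID variants of
    Sections 2.6 and 3.3 add logic on top of a core like this.
    """

    def __init__(self, Kp: float, Ki: float, Kd: float, h: float):
        self.Kp, self.Ki, self.Kd, self.h = Kp, Ki, Kd, h
        self.integral = 0.0      # accumulated integral term
        self.prev_error = 0.0    # for the derivative difference

    def update(self, setpoint: float, measurement: float) -> float:
        """Compute one control output per sampling interval h."""
        error = setpoint - measurement
        self.integral += error * self.h
        derivative = (error - self.prev_error) / self.h
        self.prev_error = error
        return self.Kp * error + self.Ki * self.integral + self.Kd * derivative
```

One multiply-accumulate per term per sample and two stored floats: well within the capability of the microcontrollers used in wireless nodes.
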
2.3. Networked Control Structures
When designing a control system for a WNCS, the selection of the control structure is important, as it determines what information is processed in which part of the network, and what information needs to be communicated to the other nodes in the control system. The controller algorithm can then be constructed with special logic to handle separate cases depending on what information has been received or lost. There are many possible control structures and design approaches for NCSs or WNCSs, of which only some are discussed here.
In this work single-input single-output (SISO) control loops are mainly considered, which can be extended to the multiple-input multiple-output (MIMO) case by parallelizing several SISO loops. Other MIMO architectures, such as centralized or hierarchical, are naturally possible. Three main control design and tuning approaches, with more or less traditional control structures, for NCSs are considered next. The first and most complicated approach is to design an optimal controller that can stabilize the process with given delay and loss specifications. In the literature, the controller is usually of state-feedback type, either time-varying or constant, and depicted in Figure 1a. The control system may need a state observer at the transmitter, if the state is not directly observable. The optimal controller is usually designed by casting it to an optimization problem of linear matrix inequalities, see e.g. [65] and [67]. The math is quite involved and it is thus unlikely that this method will become a mainstream approach in practical applications, where the operator should be able to understand the control algorithm and be assured that it works properly.
It seems intuitively clear to use a model at the controller to predict the process output during outages. The objective is to estimate the current process state, as shown in Figure 1b, by using the received intermittent and delayed measurement packets [108]. The network delays are taken into account in the state-estimator and the state can be predicted if a packet is dropped. In this way there is always a current process state estimate available for the controller, which can be any conventional (non-network aware) controller [98]. A suitable estimator for NCSs is the Kalman filter (see Section 2.9), because of its convenient form with a prediction and an update phase.
In [141] and [174] a “smart sensor” is used, capable of doing some processing on its own. The filtering is done at the sensor and the state estimate is sent over the network. This ensures that the estimate is optimal, since no measurements are lost, and the current state can be calculated by prediction, if packets are dropped. The estimation at the sensor has the downside that the control input to the process has to be transmitted to the sensor without delay and loss, which is not practically achievable. Further, state-estimators at both the sensor and controller can be used to reduce the traffic, by estimating the current process state without the need to transmit all the measurements [177]. In this case the estimates are updated by communication only if the estimation error grows too large.
The third alternative is to still use a conventional controller, such as the PID controller (Figure 1c), and tune it to be robust to the packet drops and delay jitter (Section 2.5) [47]. The advantage of this approach is that the PID controller is widely used in the industry. When wireless communication is adopted for control applications, the PID controller is already available in the automation system and implementing a new controller suitable for wireless automation is more laborious than retuning an existing PID controller. Thus, PID controllers will most probably be adopted for wireless control applications. Additionally, the operators are familiar with them, they understand how the control law works, and they have confidence in it.
[Diagrams omitted: (a) optimal state feedback with a state estimator; (b) state estimator and regular (conventional) PID controller; (c) jitter margin tuned, network aware PID controller. Each structure contains the process with its control output, the network in the feedback path (y in, y out), and the controller acting on the reference yr.]
Figure 1. Some control structures suitable for networked control systems.
[Diagrams omitted: both show a controller, the process with its control output, and the network in the feedback path. Top: a continuous-time state feedback controller with the output y(t) sampled to y(k) only for communication. Bottom: a discrete-time PID controller operating on y(k) and y(k − 1).]
Figure 2. Approaches to control with discrete-time feedback information in NCSs. Discrete-time signal indicated with dashed line. Top: only communication is in discrete-time. Bottom: discrete-time controller.
The simulations in Section 4.7.2 compare these control structures. The rest of the simulations use, in general, the structure of case (c), whereas the proposed Networked PID in Section 3.3 is an attempt to use the advantages of case (a) in a lightweight manner. This is further combined with case (b) to achieve more benefits, in the steady‐state heuristic suggested in Section 5.4.
Besides the controller structure, the approach of control design with packet based communication is another fundamental issue. In the literature there are two approaches to deal with the case when the feedback information is received as discrete-time packets over the network, as depicted in Figure 2. One is to look at the control as a continuous-time system, where, for implementation reasons due to the network, only the feedback information is in discrete-time, such as in [103]. In this case, the discrete-time communication approaches the continuous-time system asymptotically when decreasing the sampling interval. Typical approaches are state-feedback controllers [128] or other continuous-time controllers with information updated at discrete time-instants [103].
With truly discrete-time controllers, the control algorithm is calculated whether a packet is received or not. This might cause some trouble for the correct operation. On the other hand, if the control algorithm is only calculated at the reception of a new packet, the control response changes depending on the timing of the execution events. In this case the constant operation approach is not valid. The controller must be changed as a function of the packet inter-arrival time or rate, similarly to the PID PLUS controller in Section 2.6.2 or [53], to produce the same operation as the ideal continuous-time counterpart. This is typically not done in the literature, e.g. [4], [20], and [128], and as a consequence the control response degrades when the actual sampling interval deviates from the designed one. Proper changing of the controller sampling interval and tuning is shown with one of the developed adaptive control schemes in Section 5.2.2.
Both continuous- and discrete-time approaches have their advantages. In the former case, the control design is done in continuous-time, where event-driven feedback is most naturally formulated [8], [153]. In the discrete-time controller case, packet drop is more natural to deal with, as the signal value is held until the next sampling instant. The resulting network traffic is predictable as the sampling interval is constant, and the implementation is better suited for scheduled networks.
2.4. Network Models
In WNCSs, the essential challenges for the control system are packet drop and delay jitter caused by the network. Delay jitter is in general caused by packet drop, random transmission opportunities in CSMA-type MAC protocols, or different sequences of timeslots in TDMA MAC protocols. In all cases the delay jitter is aggravated in multihop communication, typical for WNCSs, as the delay accumulates at every hop. Packet drop occurs when there is packet collision, poor signal strength or interference. For simulation of WNCSs and analysis purposes, network models that imitate the packet drop and delay jitter of real wireless networks are needed.
In industrial or factory environments the radio propagation deviates considerably from the ideal free space propagation models used in most network simulators. Besides the simple free space model there exist many other fading models for wireless communication [57]. Metal and obstacles cause shadowing and multipath effects that amplify or attenuate the radio signal strength. The radio environment in a factory can be harsh, with interfering electromagnetic radiation from motors and moving machinery temporarily blocking links of the wireless network. Reflections of radio waves can in these environments be an advantage, because shadowed locations can obtain a strong signal through reflections.
There are several studies of the performance of IEEE 802.11 networks, e.g. [131], where the network design is also discussed. There are some reports on studies of measurements done in industrial environments. The received signal strength in a chemical pulp factory, a cable factory and a nuclear power plant was measured with an IEEE 802.11 network at the 2.45 GHz ISM radio band [77]. The conclusion of the experiments was that the radio environment is not as harsh as initially thought; reflections and diffractions improve the signal strength in shadow areas. The study in Section 3.1 reveals that, while many locations are improved by multipath fading, communication in some locations is impossible, due to no signal or destructive interference, even if the distance is short. Another study presents measurements of the bit-error-rate and, more importantly, the error pattern of an IEEE 802.11 network in an industrial environment [162]. Interesting findings were that the packet losses are correlated, and that error burst and packet loss burst lengths fluctuate over several orders of magnitude with time. This means that runs of consecutive packet drops may be long at some instants and hard to eliminate, for various physical reasons caused by the environment and the radio. On the other hand, error free periods also vary and can be long. Packet loss rates vary from as high as 80 % down to less than 10 % in favorable situations. In the Internet, packet drop is found to be mostly random [15].
In office environments, similar measurements can be made. An example is [169], where the propagation channel is measured. Among the tested models, the Ricean model fits the data best. Ricean models are estimated for different distances and configurations between the transmitter and receiver. Because of multipath propagation, the parameters of the model are not linearly dependent on the transmission distance, as generally assumed. On large scales, the log‐normal distribution fitted the data well [169].
Wired Ethernet traffic is studied in [87], where the self‐similar property of the traffic is demonstrated. Similar behavior can be assumed with WLAN networks in office environments, as they both use CSMA. Studies of the traffic properties in the Internet have also been done [111].
In this section the focus is on models for the packet drop in the network. This restriction is made because the main limiting factor in real-time control is the loss of feedback, for instance caused by packet drop. First the relationship between packet drop and delay is established. Both simple and data-based packet drop models, which are adequate for basic simulations of unreliable networks, are developed in the following subsections. For more realistic packet drop behavior of the network, a network simulator, where also the network protocols and packet collisions are taken into account, can be used as discussed in Section 4.3. Real environments have also been measured in this thesis, as reported in Section 3.1, to make the simulation results more realistic. Based on the radio environment measurements, packet drop models are estimated and the model fit is evaluated in Section 3.2. These network packet drop models are integrated into the network simulation model as described in Section 4.3.4.
2.4.1. Packet Drop ‐ Delay Jitter
Although delay jitter and packet drop are two distinct phenomena with different causes, they are linked in a sense, as the effects on the control system are similar. Consider a controller with zero-order-hold. When a packet is dropped, the controller will use the most recently received data. The drop of a packet will thus effectively cause an increase in the delay, seen as delay jitter. In this thesis the notion delay jitter is used even if the actual underlying event is packet drop. With pure delay jitter no information is lost, but in a real-time system it may become outdated and thus useless.
In wireless communication, packet drop due to interference or collisions can be approximated with a uniform random packet drop defined by a certain probability [15]. Consider a network with a constant delay L_N = nh, where n ∈ ℕ indicates the delay in terms of sampling intervals h, and a random packet drop with probability p_drop. With time-driven algorithms and the ZOH assumption at the receiving side, it follows that in the network simulations the output of the network is described by

y_out(k) = y_in(k − L_N/h),   if r(k) > p_drop, with r(k) ~ U(0,1),
y_out(k) = y_out(k − 1),      otherwise,   (3)

where y_in and y_out are the input to, and output from, the network respectively, and r is a uniformly distributed random number between zero and one. The previous output is thus held if a packet is dropped.
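Model (3) is straightforward to simulate; the following sketch reproduces its behavior with Python’s standard random generator (the fixed seed and zero initial condition are implementation choices for the illustration):

```python
import random

def simulate_network(y_in, p_drop, delay_steps, seed=0):
    """Network model of Eq. (3): constant delay plus uniform packet drop.

    On delivery (r(k) > p_drop) the output is the delayed input; on a drop
    the previous output is held (ZOH). Zero initial condition assumed.
    """
    rng = random.Random(seed)
    y_out, last = [], 0.0
    for k in range(len(y_in)):
        delayed = y_in[k - delay_steps] if k >= delay_steps else 0.0
        if rng.random() > p_drop:   # packet delivered
            last = delayed
        y_out.append(last)          # held value on a drop
    return y_out
```

With p_drop = 0 the model reduces to a pure delay line; with p_drop = 1 the output never updates, i.e. a permanent outage.
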
The resulting delay jitter caused by packet drop according to the above model is thus
δ(t) = t − t_n,   ∀ t ∈ [t_n, t_(n+1)),   (4)

where t_n are the times of the received packets, i.e. t_n = t when r(k) > p_drop.

An example realization of the packet drop induced delay is plotted in Figure 3 for a uniform packet drop probability of p_drop = 0.2, and a sampling interval of h = 0.1 seconds. Notice the additional constant minimum delay L_N related to transmission.
At the receiving side, the communication delay is in certain cases needed by the control algorithm. Delay estimation with a linear estimator, assuming a slowly changing random delay, is presented in [139].
A simple delay jitter estimation algorithm for a quickly changing delay, where the delay can change on every time-step, is presented next. It relies on counting the timestamps, and the gaps due to packet drop, between the received packets.
[Plot omitted: a 20 s realization of the network delay, varying between the constant minimum delay and about 0.7 s.]
Figure 3. Delay with uniform packet drop probability of pdrop = 0.2, and sampling interval of h = 0.1 s.
On the reception of a packet with timestamp t_(n−1), the next packet is expected at time t_(n−1) + h, where h is the sampling interval of the sensor. If however one packet is dropped, the next packet received has timestamp t_n = t_(n−1) + h + d_n, where d_n > 0 is the additional delay. The delay difference d_n is the difference in timestamps between the two most recently received packets t_(n−1) and t_n according to

d_n = t_n − t_(n−1) − h.   (5)

If d_n = 0, there is no delay jitter. To record the delay jitter, the tuple of delay jitter d_n and timestamp t_n of the received packet is stored. In practice, the delay statistics of a given time-window [t − T_W, t] of length T_W is used. Thus, all the jitters from the current time-period are collected in D(k):

D(k) = {(d_n, t_n) | ∀ t_n ∈ [t(k) − T_W, t(k)]}.   (6)

Here t(k) refers to the current time. The maximum delay jitter in the time-frame [t(k) − T_W, t(k)] is defined as

δ_max(k) = max{d_n | (d_n, t_n) ∈ D(k)}.   (7)
This delay counting is used in the adaptive jitter margin controller of Section 5.1 and the notion of packet drop caused delays (4) is used in all the simulations.
The assumptions of this method are that every packet has a timestamp and that the delay jitter seen by the controller is only due to dropped packets. This means that the delay variation of successfully transmitted packets is considerably smaller than the sampling interval of the controller. In most applications this can be assumed if the network is small and the communication times are small compared to h. A more complex delay estimation algorithm, which avoids these assumptions, is the Kalman filter based maximum a posteriori method presented in [P7].
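The counting scheme of (5)-(7) can be sketched as follows (a minimal illustration with hypothetical class and method names; the thesis itself gives no code):

```python
from collections import deque

class JitterEstimator:
    """Timestamp-based delay jitter counting, eqs. (5)-(7):
    d_n = t_n - t_{n-1} - h, with jitters kept over a sliding
    window of length T_W and the maximum reported as delta_max."""
    def __init__(self, h, T_W):
        self.h, self.T_W = h, T_W
        self.t_prev = None
        self.window = deque()                     # tuples (d_n, t_n)

    def on_packet(self, t_n):
        if self.t_prev is not None:
            d_n = t_n - self.t_prev - self.h      # eq. (5)
            if d_n > 1e-9:                        # jitter only when packets were dropped
                self.window.append((d_n, t_n))
        self.t_prev = t_n
        # keep only jitters inside the window [t_n - T_W, t_n], eq. (6)
        while self.window and self.window[0][1] < t_n - self.T_W:
            self.window.popleft()

    def delta_max(self):
        return max((d for d, _ in self.window), default=0.0)  # eq. (7)
```

For example, with h = 0.1 s and receptions at 0.0, 0.1, 0.3, 0.4 and 0.7 s, the gaps correspond to one and two dropped packets and `delta_max()` returns about 0.2 s.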
2.4.2. Drop and Delay Models based on Markov-chains

Instead of using a static drop probability as a model for the network, a Markov-chain can be used to model correlated network delay or packet drop [172]. In this section several Markov-chain packet drop models are described and Gilbert-Elliott model identification is presented. These models are identified from data in Section 3.2 and used later in the thesis in the simulations.
A Markov-chain is a sequence of random variables X(k) defined by the probability of being in a state χ according to

P = Pr( X(k + 1) = χ(k + 1) | X(k) = χ(k) ), (8)

where P = [p_ij] is the state-transition matrix, giving the probability of changing from state i to state j. The steady-state state distribution of the Markov-chain is given by the left eigenvector of the equation π = πP, corresponding to the eigenvalue 1. [34]

For modeling a network with a maximum delay jitter of δ_max, a Markov-chain with N_M = δ_max/h states, each corresponding to a delay value, can be used. The delayed output of the network is then dictated by the current state of the Markov-chain.
If a network with constant delay and only packet drops is considered, a Markov-chain can also be used. In this case the delay increases by one sampling interval if a packet is dropped, or it returns to the minimum delay if a packet is transmitted successfully. Thus, with uniform random packet drop and a maximum number of consecutive packet drops of N_M = δ_max/h, the Markov chain state-transition matrix is of the form

P = ⎡ 1 − p_drop   p_drop      0      ⋯   0 ⎤
    ⎢ 1 − p_drop      0      p_drop   ⋯   0 ⎥
    ⎢      ⋮                          ⋱   ⋮ ⎥ . (9)
    ⎣      1          0         0     ⋯   0 ⎦
Figure 4. Gilbert-Elliott model with states Good and Bad. State-transitions and probabilities indicated.
In the case that the packet drop probability is not uniform, different transition probabilities can be used for the separate states and thus correlated packet drops can be simulated. Other Markov chains are also possible, see e.g. [172].
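A minimal sketch of the drop chain (9) follows; the helper names are ours, and the states 0…N_M are assumed to count consecutive drops:

```python
import random

def drop_chain_matrix(p_drop, N_M):
    """State-transition matrix of eq. (9): state i means i consecutive
    drops (delay i*h); a success returns the chain to state 0, a drop
    moves it one state up, and after N_M drops it returns to 0."""
    P = [[0.0] * (N_M + 1) for _ in range(N_M + 1)]
    for i in range(N_M):
        P[i][0] = 1.0 - p_drop     # successful transmission
        P[i][i + 1] = p_drop       # one more consecutive drop
    P[N_M][0] = 1.0                # forced return after N_M drops
    return P

def simulate(P, n_steps, seed=1):
    """Draw a state sequence from a transition matrix row by row."""
    rng, state, states = random.Random(seed), 0, []
    for _ in range(n_steps):
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = j
                break
        states.append(state)
    return states
```

Feeding the state sequence through a delay of `state * h` gives the correlated delay realizations used in the simulations.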
A common way to model a network with packet drops is the Gilbert-Elliott (G-E) model [41], [56], which is based on the Markov-chain. The G-E model has two states: one corresponding to good (G) and the other to bad (B) conditions, with separate packet drop probabilities in the good and bad state, P(drop|G) = d_G and P(drop|B) = d_B, respectively. The transitions between the states follow a two-state Markov model. The state-transition matrix is given by
P = ⎡ p_GG   p_GB ⎤ = ⎡ 1 − p_GB     p_GB   ⎤ , (10)
    ⎣ p_BG   p_BB ⎦   ⎣   p_BG     1 − p_BG ⎦

where p_GB = Pr( X(k) = B | X(k−1) = G ) and p_BG = Pr( X(k) = G | X(k−1) = B ). Here p_GG and p_BB are the state-holding, and p_GB and p_BG are the state-transition probabilities, as illustrated in Figure 4. The state residence time of state i is given by

T_GE,i = h / (1 − p_ii), (11)

where h is the time-step of the Markov-chain.
The average good and bad state probabilities of the G-E model are

π_G = p_BG / (p_BG + p_GB),   π_B = p_GB / (p_BG + p_GB), (12)

and the mean packet drop is [66]

d_GE = π_G d_G + π_B d_B. (13)
In Section 3.2, the Gilbert-Elliott model is fitted to the data collected from an industrial environment. These models are implemented, as explained in Section 4.3.4, for realistic simulation purposes. To fit the G-E model to the data, the two drop probabilities (d_G and d_B) and the state-transition probabilities (p_GB and p_BG) must be identified from the data. The model identification is a Hidden Markov Model fitting problem [66], where the observations, in this case the packet drops, are available and the underlying states and emission probabilities are estimated. To evaluate the model fit on the data, using second order statistics over different time-scales is a standard approach [66].
The time-scales are defined as follows. The stochastic process X can be examined on different time-scales m by taking the average of non-overlapping blocks of size m

X^(m)(k) = (1/m)( X((k − 1)m + 1) + … + X(km) ). (14)
For time-series with little data, averaging with a sliding window or partly overlapping windows of size m can be used.
The model fit is evaluated by the mean packet drop (13) and the normalized error in standard deviation

σ_norm(m) = ( σ_D(m) − σ_GE(m) ) / σ_D(m), (15)

where σ_D and σ_GE are the standard deviations of the data and the Gilbert-Elliott model, at time-scale m. The error (15) is zero if the variances coincide and one if
the difference in variances is as large as the variance in the data. The overall model fit is evaluated with the mean of the normalized standard deviation error over logarithmically spaced time-scales, listed in the set M

σ_tot = (1/|M|) Σ_{m∈M} σ_norm(m). (16)
The statistical properties of the Gilbert-Elliott model and higher order Markov models are derived in [66]. The coefficient of variation

c_v ≡ σ(X) / E(X) (17)
for the G-E model is

c_v²(m) = (1 − d_GE) / (m d_GE)
        + [ 2 p_GB p_BG (d_G − d_B)² (1 − p_GB − p_BG) / ( d_GE² m² (p_GB + p_BG)⁴ ) ]
          × [ m (p_GB + p_BG) − 1 + (1 − p_GB − p_BG)^m ], (18)

from which the variance at different time-scales can be calculated

σ_GE(m) = c_v(m) d_GE. (19)
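The steady-state probabilities (12) and mean drop (13) can be checked against a simulated G-E drop sequence; the parameter values below are arbitrary examples, not fitted values from the thesis:

```python
import random

def simulate_ge(p_GB, p_BG, d_G, d_B, n, seed=2):
    """Gilbert-Elliott drop sequence: a two-state Markov chain
    (Good/Bad) with per-state drop probabilities d_G and d_B."""
    rng = random.Random(seed)
    good, drops = True, []
    for _ in range(n):
        drops.append(1 if rng.random() < (d_G if good else d_B) else 0)
        if good:
            good = rng.random() >= p_GB      # stay good with prob 1 - p_GB
        else:
            good = rng.random() < p_BG       # recover with prob p_BG
    return drops

# steady-state probabilities (12) and mean drop (13)
p_GB, p_BG, d_G, d_B = 0.05, 0.25, 0.01, 0.5
pi_G = p_BG / (p_BG + p_GB)
pi_B = p_GB / (p_BG + p_GB)
d_GE = pi_G * d_G + pi_B * d_B

drops = simulate_ge(p_GB, p_BG, d_G, d_B, 100000)
# the empirical mean drop approaches d_GE for long sequences
```

Aggregating `drops` into blocks of size m as in (14) likewise gives empirical counterparts of σ_GE(m) for the model-fit evaluation (15)-(16).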
2.5. Jitter Margin

Control with packet drops and varying delay stemming from a network is a complex case to analyze, because of the stochastic and time-varying nature of the problem. Ensuring stability of NCSs has been under much research lately [65]. Some results deal with optimal control [95], jump-linear Markov models [172] and the jitter margin [23], [72].
The jitter margin [23] defines the amount of additional delay that a control system can tolerate without becoming unstable. The delay may vary in any way, provided that it is bounded by the jitter margin δ_max. By selecting a tuning of a conventional controller such that the control loop has a positive jitter margin, the control loop is stable for network induced delay jitter and packet drop bounded by the jitter margin.
The theorem for the jitter margin states that in the continuous-time case, the closed loop system with process G(s) and controller G_c(s) is stable for any additional delay δ(t) in the loop with 0 ≤ δ(t) ≤ δ_max, if [72]

| G(jω)G_c(jω) / (1 + G(jω)G_c(jω)) | < 1/(δ_max ω), ∀ ω ∈ [0, ∞), (20)

or equivalently, with the closed-loop transfer function G_cl(s) = G(s)G_c(s)/(1 + G(s)G_c(s)),

| G_cl(jω) | < 1/(δ_max ω), ∀ ω ∈ [0, ∞). (21)
In the discrete-time case the criterion becomes

| G(e^{jω})G_c(e^{jω}) / (1 + G(e^{jω})G_c(e^{jω})) | < 1 / ( N_max |e^{jω} − 1| ), ∀ ω ∈ [0, π], (22)

where

N_max = δ_max / h (23)
is the jitter margin in terms of sampling intervals N_max and h is the sampling interval of the control loop. The mixed discrete-continuous-time case is the same as (22), provided that the sampling interval is chosen properly, i.e. sufficiently small, to prevent aliasing.
The jitter margin is in essence an extension of the phase margin [72]. In the case of only packet drop, the delay follows a sawtooth shape, as in Figure 3, and Mirkin's lemma [106] can be used, which makes the jitter margin 57 % less conservative.
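Criterion (21) lends itself to a simple numerical check on a frequency grid. The sketch below assumes the open loop G(s)G_c(s) is given as polynomial coefficients; the example loop 1/(s(s+1)) is hypothetical:

```python
import numpy as np

def jitter_margin_ok(num_ol, den_ol, delta_max, w=None):
    """Numerical check of criterion (21): |G_cl(jw)| < 1/(delta_max*w)
    over a frequency grid, for an open loop G(s)Gc(s) given as
    numerator/denominator polynomial coefficients (highest order first).
    A grid check is only indicative, not a formal stability proof."""
    w = np.logspace(-2, 3, 2000) if w is None else w
    jw = 1j * w
    L = np.polyval(num_ol, jw) / np.polyval(den_ol, jw)   # open loop
    G_cl = L / (1.0 + L)                                  # closed loop
    return bool(np.all(np.abs(G_cl) * delta_max * w < 1.0))

# hypothetical loop G(s)Gc(s) = 1/(s(s+1)): num = [1], den = [1, 1, 0]
ok = jitter_margin_ok([1.0], [1.0, 1.0, 0.0], delta_max=0.5)
```

For this loop the check passes for δ_max = 0.5 s but fails for a much larger requested margin, illustrating the trade-off between tuning tightness and tolerated jitter.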
2.6. The PID Controller in Networked Systems

PID controllers have the reputation of being simple, yet delivering acceptable performance. Their wide use in industry suggests that they will be applied in the NCS case as well. The traditionally used controllers, such as the PID controller, have been shown to work well also in the networked control case [47].
In this thesis the discrete-time PID controller of the form

u(k) = K_p ( 1 + h/(T_i(z − 1)) + T_d N_d (z − 1)/((T_d + N_d h)z − T_d) ) e(k) (24)

is used, where the control signal u is calculated based on the error signal e = y_r − y between the set-point value y_r and the actual process output y. K_p is the controller gain, T_i and T_d the integration and derivation time, respectively, and N_d is the derivative filter constant. The sampling interval h is naturally used at the sensor also, and determines the packet rate over the wireless network. T_i and T_d are related to the PID controller integral and derivative gains K_i and K_d through

K_i = K_p / T_i,   K_d = K_p T_d. (25)
2.6.1. Tuning of PID controllers in Networked Control Systems
The tuning of PID controllers for different cases and requirements is an extensively studied topic with an abundance of tuning rules and methods [185]. Tuning of PID controllers for networked control systems is a difficult task because of the varying delay induced by the network, where stability is hard to show. Some methods use Lyapunov functions [103], LMIs [178], MJLS [74], or the power spectrum [94]. Another method to guarantee stability is to use the jitter margin theorem presented in Section 2.5. In the following, some approaches and PID controller tuning methods for varying time-delay systems are presented. These are used in the adaptive control schemes and simulations of this thesis.
PID controller tuning methods and rules for varying delay control systems using the jitter margin theorem have been developed by Eriksson [47]. The tuning rules are developed into formulas for the K_p, K_i and K_d gains of the PID controller. The basics of one tuning rule are briefly repeated here. This tuning is used in some of the simulation studies in Sections 4.7.1-5.1.
Consider a first order lag plus integral plus delay process (the so-called FOLIPD model)

G(s) = K e^{−sτ} / ( s(1 + Ts) ), (26)

where K is the velocity gain, T the time-constant, and τ is the time-delay. The PID tuning is given in the form [48]

K_p = a/(KL),   K_i = 0,   K_d = aT/(KL), (27)

as a function of a tuning parameter a which depends on the desired jitter margin δ_max

a = α · 0.9485 / ( δ_max/L + 0.6356 ), (28)

where α is a tightness factor describing how close to the stability bound the tuning is selected. Usually as tight a tuning as possible is selected with α = 1. In the tuning, the total constant delay L (2), including the process and minimum network delay, is used.
The control system with a PID controller tuned in this way can tolerate any excess delay that is smaller than δ_max, without the risk of instability. The parameter a gives the maximum gain of the tuning for the given jitter margin. There are other similar tuning methods, each giving the gain a through a different formula depending on the design goal. Instead of using the jitter margin, robustness to delay jitter can be obtained by optimizing the worst cost of several step responses with different network realizations, though this method does not guarantee stability [129].
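As a sketch, the tuning rule (27)-(28) as transcribed here reduces to a few lines; the constants should be verified against [47], [48] before use:

```python
def folipd_jitter_tuning(K, T, L, delta_max, alpha=1.0):
    """PID gains for the FOLIPD process (26) from the jitter margin
    tuning rule (27)-(28) as transcribed in this chapter; the numeric
    constants are taken from that transcription and should be checked
    against Eriksson's original rule before use."""
    a = alpha * 0.9485 / (delta_max / L + 0.6356)   # eq. (28)
    K_p = a / (K * L)                               # eq. (27)
    K_i = 0.0                                       # no integral action
    K_d = a * T / (K * L)
    return K_p, K_i, K_d
```

Note that K_d/K_p = T by construction, and that a (and hence all gains) shrinks as the requested jitter margin δ_max grows.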
The PID controller can also be tuned by an optimization procedure. When using optimization, there is a choice of several different cost functions that can be used to evaluate the control performance. The most common are the IAE, ISE (Integral of Absolute/Square Error) and the ITAE, ITSE (Integral of Time-weighted Absolute/Square Error) criteria [185]

J_IAE = ∫_{t1}^{t2} |e(t)| dt (29)

J_ITAE = ∫_{t1}^{t2} t |e(t)| dt (30)

J_ISE = ∫_{t1}^{t2} e²(t) dt (31)

J_ITSE = ∫_{t1}^{t2} t e²(t) dt (32)
where e(t) = y_r(t) − y(t) is the difference between the reference and the output of the process. The cost criterion is usually evaluated over a step response beginning at t1 until the response has settled down at t2, and minimized with respect to the PID parameters. The time-weighted cost criteria emphasize the steady-state error and discount the transients in the beginning, whereas the other costs are suitable for measuring the impact of disturbances. The cost criteria can also be used in multiobjective optimization of PID controllers for NCSs [48], where the control performance is optimized with a target desired jitter margin or jitter margin constraint.
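The criteria (29)-(32) are easily evaluated from a sampled response by numerical integration (a minimal sketch with our own function names):

```python
import numpy as np

def control_costs(t, e):
    """IAE, ITAE, ISE and ITSE criteria (29)-(32), evaluated by
    trapezoidal integration of a sampled error signal e(t)."""
    t, e = np.asarray(t, float), np.asarray(e, float)
    def trapz(y):                      # plain trapezoidal rule
        return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)
    return {
        "IAE":  trapz(np.abs(e)),
        "ITAE": trapz(t * np.abs(e)),
        "ISE":  trapz(e ** 2),
        "ITSE": trapz(t * e ** 2),
    }

# example: exponentially settling error e(t) = exp(-t) on [0, 10]
t = np.linspace(0.0, 10.0, 10001)
costs = control_costs(t, np.exp(-t))
```

For the decaying example error, IAE evaluates to roughly 1 and ISE to roughly 0.5, matching the analytic integrals.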
In the next section and in Section 3.3 two variants of the PID controller are presented, which are modified to better suit the NCS case where packet drops are present.
2.6.2. The PID PLUS Controller

A variation of the PID controller is an event based PID, which is an extension of the conventional PID controller to a varying calculation interval, where the integral and derivative parts take into account the time passed since the previous iteration [181]. The PID PLUS controller is a heuristic PID control approach to packet drops developed by industry [147].
The main idea of the PID PLUS is to implement an integral anti-windup type scheme in the controller for dropped measurement and control packets. The integral and derivative actions of the controller are calculated over the time-interval between two consecutively received packets. Thus, the PID PLUS is event-driven in the sense that if no new information is received, the control output is constant. The structure of the PID PLUS controller is depicted in Figure 5. The filter equation that replaces the integral action is
Figure 5. PID PLUS controller block diagram [147].
f(k) = f(k − 1) + ( u(k − 1) − f(k − 1) ) ( 1 − e^{−ΔT/T_i} ), (33)

where f is the output of the filter, u is the controller output, T_i is the integration time, and ΔT is the time-difference between two consecutively received packets. The integral filter is derived from the typical integral anti-windup scheme with the filter F(s) = 1/(1 + T_i s), which acts as an integrator when arranged in a positive feedback loop

F(s) / (1 − F(s)) = 1/(T_i s). (34)

Discretizing the filter with sampling interval h leads to F(q⁻¹) = (1 − γ)q⁻¹ / (1 − γq⁻¹), where γ = e^{−h/T_i}. The input to the filter is the previous control value, thus
f(k) = γ f(k − 1) + (1 − γ) u(k − 1)
⇒ f(k) = f(k − 1) + ( u(k − 1) − f(k − 1) ) ( 1 − e^{−ΔT/T_i} ). (35)

The last implication is obtained by adding and subtracting f(k − 1), and replacing the constant sampling interval h with the time-difference ΔT, which results in the filter equation (33) for the PID PLUS.

The derivative is calculated according to an approximation of the derivative where the time ΔT since the previous measurement is taken into account

u_D(k) = K_d ( e(k) − e(k − 1) ) / ΔT. (36)
The integral and derivative thus depend on the time between the previous measurement packets and they are only calculated when a new measurement has arrived and the new value flag is set by the communication stack, as indicated in Figure 5. The PID PLUS scheme is compared to the proposed IMC tuning and outage heuristic in Section 5.4.
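One plausible arrangement of the update equations (33) and (36), consistent with the event-driven behavior described above, is sketched below; the class structure and the way the three terms are summed are our assumptions, not the implementation of [147]:

```python
import math

class PidPlus:
    """Sketch of the PID PLUS update, eqs. (33) and (36): the integral
    filter and the derivative act over the elapsed time dT between two
    received packets; without new packets the output stays constant."""
    def __init__(self, K_p, T_i, K_d):
        self.K_p, self.T_i, self.K_d = K_p, T_i, K_d
        self.f = 0.0          # filter state replacing the integral
        self.e_prev = 0.0
        self.u_prev = 0.0

    def update(self, e, dT):
        """Called only when a new measurement packet arrives,
        dT seconds after the previously received one."""
        # eq. (33): positive-feedback filter acting as the integral
        self.f += (self.u_prev - self.f) * (1.0 - math.exp(-dT / self.T_i))
        # eq. (36): derivative over the actual elapsed time
        u_D = self.K_d * (e - self.e_prev) / dT
        u = self.K_p * e + self.f + u_D
        self.e_prev, self.u_prev = e, u
        return u
```

With a constant error the filter state slowly tracks the output, so the control signal ramps up like integral action, but only at packet arrivals.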
2.7. Internal Model Control

The IMC control approach, first brought to a comprehensive framework by [55], uses a model G_m(s) of the process G_p(s). The difference between the model and the process is fed back to the controller G_c(s). In the case of a perfect model, choosing G_c(s) = G_m(s)⁻¹ yields perfect control. To make the controller realizable a low-pass filter
G_f(s) = 1 / (λs + 1)^n (37)
with an appropriate integer n to make the closed‐loop strictly proper, and a positive tuning parameter λ, is added to the controller. Now the closed loop transfer function becomes
G_cl(s) = G_f(s). (38)
Thus, λ determines the speed of the control, and a step response of desired speed is achieved.
2.7.1. Internal Model Control Design

In practice the following steps are taken due to problems caused by noise, modeling error, and problems when inverting the model. If the process model is non-invertible, it is split into an invertible and a non-invertible part, G_m(s) = G_m⁺(s) G_m⁻(s). The non-invertible part G_m⁺(s) contains all positive zeros and time-delays e^{−Ls}, which upon inverting would become unstable or non-realizable. The rest of the model consists of the invertible part G_m⁻(s), which is incorporated into the controller G_c. The non-invertible part is treated as un-modeled dynamics and is handled by the feedback.
The IMC approach can be transformed into an output feedback control loop, with the model included in the controller. With elementary block diagram algebra the IMC controller becomes [55]

G_c^IMC(s) = G_c / (1 − G_c G_m) = ( G_m⁻(s) )⁻¹ G_f(s) / ( 1 − G_m⁺(s) G_f(s) ). (39)
The obtained closed-loop system then becomes

G_cl = G_c^IMC G_p / (1 + G_c^IMC G_p) = G_f (G_m⁻)⁻¹ G_p / ( 1 + G_f ( (G_m⁻)⁻¹ G_p − G_m⁺ ) ), (40)

where G_p(s) = G_p⁺(s) G_p⁻(s) similarly as for the process model. If the process model is exact (G_m⁻ = G_p⁻ and G_m⁺ = G_p⁺), this reduces to

G_cl = G_f G_p⁺. (41)
That is, the obtained closed‐loop system is a low‐pass filter with the desired time‐constant λ, and the non‐invertible part, which cannot be avoided.
In practice, especially in NCSs where the communication is with discrete packets, the controller is implemented as a discrete-time algorithm: either with continuous-time design followed by discretization of the controller, or the controller is designed in discrete-time from the start. The discrete-time design procedure is similar to the continuous-time case, using the same controller structure. In the discrete-time case, given a continuous-time process model G_m(s), the model can be discretized to G_m(z) using a suitable discretization method.
In the discrete-time case, the non-invertible part contains the delays, z^{−d}, all zeros outside the unit circle, and negative zeros inside the unit circle, which otherwise cause oscillations in the control signal. The separation is not unique, but the all-pass form is advantageous [55]. For all p1 zeros v_i outside the unit circle, a pole at 1/v_i is added, forming an all-pass form of the non-invertible part. All p2 oscillating zeros w_j in the left-half unit circle are included as well, balanced by a pole at zero. The non-invertible part thus becomes

G_m⁺(z) = z^{−d} ∏_{i=1}^{p1} [ (z − v_i)(1 − 1/v_i) ] / [ (z − 1/v_i)(1 − v_i) ] ∏_{j=1}^{p2} (z − w_j) / [ z(1 − w_j) ]. (42)
The corresponding discrete-time low-pass filter is

G_f(z) = (1 − γ)^n / (1 − γz⁻¹)^n, (43)

where

γ = e^{−h/λ} (44)
gives the relationship between the continuous‐time λ and the corresponding discrete‐time filter coefficient γ.
More elaborate IMC controller design discussions can be found in [135] and [86]. The case with an IMC controller in a NCS is studied further in Section 3.4. In the following, an IMC based tuning method for PID controllers, also used in the thesis, is described.
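The filter relation (43)-(44) can be illustrated with a short sketch; the function names are ours:

```python
import math

def imc_filter_coeff(h, lam):
    """Relation (44) between the continuous filter constant lambda
    and the discrete filter coefficient gamma of (43)."""
    return math.exp(-h / lam)

def imc_filter_step_response(h, lam, n_steps):
    """Step response of the first-order case (n = 1) of the filter
    G_f(z) = (1 - gamma) / (1 - gamma z^-1), i.e. the desired
    closed-loop response (41) when the non-invertible part is a gain."""
    gamma = imc_filter_coeff(h, lam)
    y, out = 0.0, []
    for _ in range(n_steps):
        y = gamma * y + (1.0 - gamma) * 1.0   # filter driven by a unit step
        out.append(y)
    return out

resp = imc_filter_step_response(h=0.1, lam=1.0, n_steps=100)
```

After λ seconds (here 10 samples) the response has reached 1 − e⁻¹ ≈ 0.63 of the step, confirming that λ sets the closed-loop speed.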
2.7.2. IMC-PID Controller Design

The IMC design procedure can result in a conventional PID-type controller with certain model choices or approximations. Implementing the IMC controller with a PID controller means that the tuning of a PID controller is selected based on the IMC design [135]. This is called IMC-PID tuning, and it is often readily implemented as the PID control structure is simple and available in automation products.
The IMC-PID tuning usually involves approximations to convert the IMC controller to PID-type. Additionally, the delay of the non-invertible part of the IMC controller (39) must be approximated to implement the controller. An example of an IMC-PID tuning rule for a first-order process with time-delay, approximated with the first-order Padé approximation and filter-order n = 1, is [135]

K_p = ( T + τ/2 ) / ( K(λ + τ) ),
K_i = 1 / ( K(λ + τ) ),
K_d = Tτ / ( 2K(λ + τ) ), (45)

with a pre-filter

G_f(s) = 1/(T_f s + 1),  where  T_f = λτ / ( 2(λ + τ) ). (46)
This tuning is used in the simulation cases of Sections 5.2 and 5.4. A table of other IMC‐PID design alternatives can be found in [135].
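The rule (45)-(46) reduces directly to code (a sketch; the gains follow the K_p, K_i, K_d convention of (25)):

```python
def imc_pid_tuning(K, T, tau, lam):
    """IMC-PID rule (45)-(46) for a first-order plus time-delay
    process, with first-order Pade approximation of the delay and
    filter order n = 1. Returns PID gains and pre-filter constant."""
    K_p = (T + tau / 2.0) / (K * (lam + tau))
    K_i = 1.0 / (K * (lam + tau))
    K_d = T * tau / (2.0 * K * (lam + tau))
    T_f = lam * tau / (2.0 * (lam + tau))   # pre-filter time constant (46)
    return K_p, K_i, K_d, T_f
```

Note that the implied integration time T_i = K_p/K_i = T + τ/2, i.e. the model time-constant extended by half the delay, as expected from the Padé approximation.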
2.8. Network Quality of Service in Networked Control Systems
In networked control systems several control loops are distributed in a plant and connected with a shared wired or wireless network. The goal is for the network to deliver sufficient QoS with minimum effort to obtain a desired quality of control [149]. In the literature there exists no systematic study assessing the control performance in relation to the network quality of service. The control performance is usually measured with the traditional integral cost functions, see Section 2.6 [71]. An example of network and control performance comparison is in [90] and [92], where different wired networks and their effect on the control system are studied as a function of the controller sampling interval.
In this section the effect of the network quality of service on the control performance is discussed. First, network congestion, which may cause information loss, and traffic rate control from the control application point of view are considered. The rate control algorithms for control systems reviewed in the next subsection are the first methods in the literature in the field of network adaptive control. In Section 3.5 this issue is studied further and a network QoS cost for control systems is presented. Practical insights are gained in the simulations of Sections 4.7 and 5.1.
2.8.1. Network Performance Considerations

The network traffic in control applications is considerably different from that in computer networks. In WNCSs, for example, a small amount of data, such as a measurement value, needs to be communicated periodically, reliably and in real-time. In computer networks, typical usage is the transfer of files in bursts of large packets, where the average throughput is important. The required QoS is thus significantly different in WNCSs compared to computer networks, and transferring the knowledge from the computer network field to WNCSs is not straightforward.
The key characteristics of a wireless network affecting closed-loop control are communication delay and packet loss. Typical end-to-end delay of a moderately sized IEEE 802.15.4 network for control applications is less than 100 ms, see Section 4.7.3. Wireless control may not be applicable to very fast or unstable processes, due to the inherent unreliability of the wireless communication. The current wireless standards, WirelessHART and ISA100.11a, are both intended for applications where delay jitter of about 100 ms is tolerated [166]. In other words, these networks can be considered deterministic when examined at larger time-scales. In current practical stable wireless control applications, the minimal sampling interval is about 1 s. This is reflected in the devices sold today, where the sampling interval is restricted by the device manufacturers to a minimum of one second, partly also because of energy constraints. In this thesis, the opposite case is considered, with non-deterministic networks where the traffic in the network affects the network QoS and further the control performance.
In wireless networks the QoS can never truly be guaranteed, as interference can always hamper the communication. This is especially troublesome in wireless control, since deviating from real-time operation can cause physical damage. The overall wireless automation system must thus be designed such that the probability of a fault is low and that no damage is caused when a fault happens. A good networked control system design should exhibit graceful degradation, where the control performance is minimally, or non-catastrophically, degraded when the network QoS decreases, and turn to safe operation when the network malfunctions. This restricts wireless control to stable processes, where the process remains steady even if the control is open-loop.
There are trade-offs between the packet rates, control performance and network congestion. The network performance depends mostly on the utilized MAC protocol, as it determines the access to the network [96]. The general performance of a CSMA type MAC is good with low traffic, but becomes poor with increased traffic, mainly due to a larger probability of collisions. This is further aggravated by the first-come last-served behavior of the exponential backoff mechanism in the case of a collision.
If a low sampling rate compared to the process dynamics is used, the control is poor. Increasing the sampling rate improves the control performance until the network becomes congested and the control performance starts to degrade, due to packet drops or increased communication delay. Naturally, the control performance generally degrades when packets are dropped and thus less information is available at the controller. In a NCS with limited bandwidth there exists an optimal region, in terms of the control performance, for the sampling interval of the control loops [92]. Selecting an optimal control bandwidth is a case of cross-layer optimization, where the performance of the whole system is optimized by tailoring the different parts of the system to suit each other.
2.8.2. Network Congestion and Traffic Rate Control

In wireless networks, the techniques for guaranteeing a specified QoS for the user can be divided into two parts: admission control and scheduling. With admission control, users are admitted to use the medium only when the network can guarantee to meet the user's QoS request. Then the task becomes to schedule or prioritize the admitted users on the available bandwidth such that everyone gets the best possible QoS, according to their needs. For more details, see for example [68] and references therein. This framework is called radio resource management, where the target is to deliver specified QoS guarantees to each user.
The required QoS depends on the application, and can be for example a bandwidth, a delay, or a packet drop constraint. In WNCSs the problem is how to share the limited available bandwidth among all the control loops, such that every loop attains an equal control performance. There exist several algorithms to calculate the optimal bandwidth shares, sampling intervals [42], [71], or transmission schedules to be allocated for each controller, for instance by using utility functions, which describe the control quality given a certain bandwidth [27]. These methods rely on a model of the network and the control system, and the optimal, according to some criterion, schedule or allocation is calculated beforehand. Perfect communication or a simple network model is usually assumed, where a certain bandwidth is divided among the control loops.
Other bandwidth control approaches are dynamic scheduling or transmission heuristics such as maximum-error-first [159], where the sensor with the largest error should transmit. These methods have the drawback that the schedule needs to be updated continuously or the transmission opportunities arbitrated online in real-time, which consumes bandwidth and may even be impossible in practice. Similar issues are encountered in embedded control systems with task scheduling [22], but many of the results cannot be applied to WNCSs as the scheduling must be distributed over the network, which requires communication, in contrast to processor scheduling where all information is locally available.
In reality, the network is more complex and the actual performance of the network is different than assumed, because of interference, other overhead traffic, and simplified models. The operation of the network can additionally change over time, for example when new devices are installed, or when the traffic changes depending on the control tasks which are currently executing. This calls for online control adaptation in WNCSs, which adjusts the sampling interval and used bandwidth of networked controllers. A new method for control system traffic adaptation is proposed in Section 5.2.
There are many rate control approaches proposed in the literature. The adaptive rate fallback method, which is used in IEEE 802.11, is a network layer communication rate adjustment method. The communication bit-rate is adaptively reduced in case of poor signal strength. This takes care of poor communication conditions by reducing the bit-rate, and thus increasing the robustness of communication, when the radio signal quality is diminished [32]. An example is in [99], where modulation and coding rates are changed depending on the network QoS.
Radio resource management only tries to optimize the communication depending on the channel conditions. There must furthermore be cross-layer optimization for adaptation on the application layer to adjust the generated traffic; otherwise queues will fill up with data that the network cannot deliver [90]. One example of application layer congestion control is the Rate Adaptation Protocol [134]. It adjusts the amount of data to be sent for a multimedia stream depending on the network QoS, namely packet loss.
In control systems, the corresponding action is to change the sampling interval. Adapting the sampling interval to the available network bandwidth was among the first attempts in the literature on network adaptive control. In [32] and [33], several different update rules to change the sampling interval between specified bounds depending on the round-trip-time are proposed for cases of packet drop either due to interference or congestion. In [74], the sampling interval is adapted based on dropped packets, using parameter estimation of a Markov model related to the network state, similar to the Gilbert-Elliott model (Section 2.4.2), followed by a certain sampling update policy. Another approach is to decide the length of the next sampling interval based on the available bandwidth and the control error [157]. In this case a criticalness factor is assigned to each loop and a heuristic formula determines how much the control error affects the sampling interval adjustment.
A control-oriented approach is to use a PI controller with saturation to change the sampling interval of the control system based on packet drop feedback [128]. In this particular case, however, no adjustment of the process controller tuning is done; the controller is designed for a nominal sampling frequency of 200 Hz. The saturation of the PI controller limits the sampling frequency to a minimum of 100 Hz, to prevent instability due to operation far from the designed sampling frequency. The trade-off between the designed and operating sampling frequency is shown in [20], where a method for pre-selecting a discrete set of sampling intervals is presented, such that the degradation because of operation too far from the design point is bounded. A better approach is presented in [53], where a discretized PI controller with the sampling interval left explicitly in the control algorithm is used. A heuristic sampling interval and gain scheduling selection algorithm that depends on the packet drop and delay jitter of the network is then used. The adjustable sampling interval in the controller ensures nominal behavior even if the packet rate is changed. The heuristic controller has also been simulated with an ns-2 based network and control co-simulator in [152].
Another control-oriented approach is a queue controller where the transmission rates are controlled by a P or PI controller [4]. It is assumed that maintaining a user specified queue length at the routers (intermediate nodes) results in a desired network QoS. A utility function of each control loop, based on the process dynamics, determines how much of the shared link capacity is used by the loop. A P-controller is applied in [170], with an adaptive gain depending on the traffic amount relative to the point of catastrophic congestion, for control over a WLAN. Congestion control of the Internet is beginning to use control system algorithms too, such as PID controllers, instead of the heuristic TCP [136].
Most of the algorithms in the literature so far require hand selected parameters, and are thus very application specific, needing extensive testing to be applied on different systems. The adaptive control speed scheme developed in Section 5.2 adapts to the QoS of the network and does not need any arbitrary parameters, only the time-constant of the controlled process. It adjusts the sampling interval and controller tuning such that a specified network QoS is obtained.
2.9. Kalman Filtering in Networked Control Systems
As pointed out in Section 2.3, the Kalman filter (KF) is suitable for estimation in NCSs, especially when there are measurement packet drops. It is an optimal state-estimator, given a process model (in state-space form: Ap, Bp, Cp) and associated state and measurement noise covariances, Q and R.

The assumed state-space model of the process is

x(k + 1) = Ap x(k) + Bp u(k) + w(k)
z(k) = Cp x(k) + v(k) , (47)
where w and v are normally distributed white noise sequences with covariances Q and R, x is the state vector, u the input vector, and z the measurement vector. The matrices Ap, Bp, and Cp are constant and have appropriate dimensions. The state matrices can also be time-varying. Kalman filtering is performed according to the well-known prediction and update equations [104].
Estimation in the NCS case, with intermittent information, that is, packet drops, is straightforward with a Kalman filter, since the algorithm is divided into a prediction and an update step. If there is no new measurement, only the prediction step is carried out. This is natural, since the prediction is the best estimate of the current state if no new observation is received. This is equivalent to receiving a measurement with infinite variance [144]: as the measurement noise R tends to infinity, the Kalman gain K tends to zero, resulting in no update.
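A minimal sketch of this prediction/update logic (illustrative Python, not code from the thesis; the function name and the scalar test system below are assumptions):

```python
import numpy as np

def kf_step(x, P, u, z, Ap, Bp, Cp, Q, R):
    """One Kalman filter iteration; pass z=None when the measurement packet was dropped."""
    # Prediction step: always carried out.
    x = Ap @ x + Bp @ u
    P = Ap @ P @ Ap.T + Q
    # Update step: only when a measurement arrives. A dropped packet is
    # equivalent to R -> infinity, for which the gain K tends to zero.
    if z is not None:
        S = Cp @ P @ Cp.T + R                 # innovation covariance
        K = P @ Cp.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (z - Cp @ x)
        P = (np.eye(len(x)) - K @ Cp) @ P
    return x, P
```

During an outage the covariance P then grows by Q at every step, reflecting the increasing uncertainty of the prediction.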
The case of missing information can be extended to partial measurements, where only a part of the measurement vector is received. The arrival rates of the measurements determine the stability of the KF if the process is unstable. If the rates are too low, the KF is unstable. The upper and lower bounds of the stability border can be calculated by showing that the (in this case stochastic) state covariance P is bounded [97].
Optimal Kalman filtering with varying measurement delay is treated in [141], where the previous measurements, the state, and the covariance estimates are stored in buffers and the filtering is redone up to the current time every time a new measurement arrives. This is computationally heavy, and the delays or time-stamps of the measurements must be known. The convergence is proven with LMIs [65]. The paper additionally presents estimation with a constant Kalman gain.
The Kalman filter can be used to additionally estimate a process load disturbance, by augmenting a state for the load disturbance into the model. Assuming a constant load disturbance, the model used in the KF is thus as follows

x̂KF(k + 1) = AKF x̂KF(k) + BKF u(k)
ŷ(k) = CKF x̂KF(k)

with

AKF = [ Ap  Bp ; 0  1 ] , BKF = [ Bp ; 0 ] , CKF = [ Cp  0 ] , (48)

where x̂KF is the augmented Kalman filter state-vector. The load disturbance estimate is obtained through

D̂load(k) = Cload x̂KF(k) = [ 0  1 ] x̂KF(k) . (49)
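The augmented matrices of (48)-(49) can be assembled as in the following sketch (illustrative numpy code assuming a single input; the helper name is not from the thesis):

```python
import numpy as np

def augment_load_disturbance(Ap, Bp, Cp):
    """Build the augmented model of (48) with a constant load disturbance state."""
    n = Ap.shape[0]
    # A_KF = [[Ap, Bp], [0, 1]]: the disturbance acts like the input and stays constant.
    A_kf = np.block([[Ap, Bp],
                     [np.zeros((1, n)), np.ones((1, 1))]])
    # B_KF = [Bp; 0]: the known input drives only the process states.
    B_kf = np.vstack([Bp, np.zeros((1, Bp.shape[1]))])
    # C_KF = [Cp, 0]: the disturbance state is not measured directly.
    C_kf = np.hstack([Cp, np.zeros((Cp.shape[0], 1))])
    # C_load = [0, 1] extracts the load disturbance estimate, cf. (49).
    C_load = np.hstack([np.zeros((1, n)), np.ones((1, 1))])
    return A_kf, B_kf, C_kf, C_load
```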
The Kalman filter is used in the simulations of Section 4.7.1, where a varying delay KF is used, and in Section 4.7.2, where the state-estimator with conventional PID control is compared with a jitter margin tuned PID. In Section 5.4.3 the KF is used as a load disturbance estimator for the control reference target.
2.10. Summary

In this chapter the preliminaries of WNCSs were summarized. The focus of the control methods is on the PID controller, because of its lightweight algorithm suitable for wireless nodes and its widespread use in industry. Several PID controller alternatives suitable for WNCSs are presented, both from the tuning and the algorithmic viewpoint. The main control algorithms are IMC, IMC-PID, and the PID PLUS controller.
Network models for packet drops are considered based on drop probabilities and Markov chains. The properties of the Gilbert-Elliott model are recapitulated. Then the network quality of service from the control system point of view is discussed, and the communication and control performances are related. The network QoS is related to the traffic in the network, which can be affected by changing the controller sampling interval to attain a comfortable communication rate. Several existing methods for QoS guarantees or rate control are reviewed, with special attention on methods for traffic rate control of control systems. This field is still developing and only some tentative methods have been devised; to the knowledge of the author, controller rate adaptation is the only existing network adaptive control method today. This field is further developed in the next chapter and several new methods are presented in Chapter 5.
3. NETWORKS AND CONTROLLERS IN PRACTICE
In this chapter the issues presented previously are developed further and motivated using practical measurements and simulations. Measurements of radio environments are done to estimate network packet drop models, which are later used in the simulation. Properties of the control algorithms and tuning methods important for NCSs are established. The results are extensively used in the proposed adaptive control schemes of Chapter 5.
The relation between the network quality of service, the controller tuning and performance is studied. The effect of the network QoS on the control system is discussed and a network cost for control measure is proposed. These issues are important in the network QoS adaptive control schemes of Chapter 5.
3.1. Measurements of Radio Environments

Realistic packet drop models of wireless networks are needed to study WNCSs, since information loss affects the control performance. One approach is to measure the packet drop with wireless nodes in authentic environments and use these results to build data-based network models. This allows one to study the effects and the behavior of a real network on the control system in a particular environment. The packet loss models obtained in the next section are utilized in the simulations of Sections 4.7.3 and 4.7.4, to obtain realistic simulation results.
The physical properties of an existing radio environment are assessed by carrying out actual measurements at the target site, as described next. The transmitter device consists of a sensor node equipped with a Texas Instruments CC2431 radio module connected to two monopole WLAN antennas, with a separation of 12.5 cm. Similarly, four receivers are arranged in an array, placed 6.25 cm from each other, which is half the wavelength at 2.4 GHz. The IEEE 802.15.4 radio channel 26 is used, which has the least frequency overlap with the IEEE 802.11 radio, to mitigate packet drop due to WLAN interference and other devices. Transmission power is set to 0 dBm and measurements are taken for several different distances and locations. The transmitter switches between the two antennas for every consecutive packet, thus eight different signal paths are recorded. A total of 15000 packets of size 119 bytes are transmitted for each location at an interval of 0.1 seconds.
Packets are recorded with their RSSI value (Received Signal Strength Indicator) and an indication of whether the packet was correctly received with no bit errors, or dropped. These measurements differ from other similar measurements, e.g. [77], where only the received signal strength is measured, not the actual packet reception. Here, the same hardware as would be used in a real application is used, not a specialized measurement device, which could differ significantly from the signal reception capabilities of the actual device.
Measurements are performed in an industrial assembly hall and an office. The estimated models are presented in the next section. In the industrial hall there are machines, racks of tools, and open spaces. Measurements are made in different parts of the hall, which can be categorized as light: open space, medium: mostly open with machines standing on the floor, and heavy: metal racks of tools obstructing the line-of-sight. The distances between the transmitter and receiver for the different measurements are in the range of 25-35 m. The office is an indoor environment with plaster walls, used as a reference in the simulations of Section 4.7.3. The measurements are made from room to room as depicted in Figure 6. No communication was possible across more rooms in this environment.
Figure 6. Measured prototype locations and links for the office. Transmitter to receiver direction indicated by arrow. [Floor plan omitted: six wireless node locations in rooms separated by brick and plaster walls; room dimensions 3.9 m, 4.5 m, 3.9 m, 1.5 m, and 4.5 m.]

The measured packet drop probabilities from the prototype locations of the real office are shown in Figure 7. The drop probability is given for every antenna pair, ordered such that every odd numbered link represents a transmission from antenna 1. The packet drop probability varies from location to location and there is significant variation between the antenna pairs. This implies that the signal strength is very sensitive to the antenna location, due to multipath fading. Similar results are obtained in the industrial hall case shown in Figure 8, with even more variability between the antenna pairs.
The histograms in Figure 9 show the number of consecutively dropped packets in the industrial hall. The distribution is long tailed, as it is about linear on the log‐log scale. This implies that long network outages are possible, although unlikely. The control system should thus be designed to handle outages of unbounded length. One method is proposed in Section 5.4.
As the most common outage in the previous results is one packet, the minimum outage length is upper bounded by the packet interval 0.2 s. The question remains what the shortest outage is. Therefore, several new measurements in the medium environment of the industrial hall are done with a faster packet rate. One transmitting antenna with a 10 ms packet interval and four receivers are used. The histograms of the consecutively dropped packets are shown for one representative result of the second measurement campaign in Figure 10, from which it is evident that the most frequent outage is about 40 ms, which is over a decade less than the packet interval in the first measurements. The average outage length, which varies from 0.01 s to 1 s, for all the new measurements is given in Figure 11.
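The outage statistics used here (consecutive drop counts and average outage time) can be computed from a reception trace with a short helper; this is an illustrative sketch, not the measurement code used in the thesis:

```python
from collections import Counter

def outage_histogram(received):
    """Count runs of consecutively dropped packets.

    `received` is a sequence of booleans, True when the packet arrived.
    Returns a Counter mapping outage length (in packets) to occurrences.
    """
    hist = Counter()
    run = 0
    for ok in received:
        if ok:
            if run:            # a drop burst just ended
                hist[run] += 1
            run = 0
        else:
            run += 1
    if run:                    # trace ended inside an outage
        hist[run] += 1
    return hist

def average_outage_time(received, packet_interval):
    """Mean outage length in seconds, as plotted in Figure 11."""
    hist = outage_histogram(received)
    bursts = sum(hist.values())
    drops = sum(k * c for k, c in hist.items())
    return packet_interval * drops / bursts if bursts else 0.0
```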
Figure 7. Measured packet drop probabilities in the office for all the prototype locations and links. [Bar charts omitted: packet drop [%] versus link number 1-8 for prototype locations 1-6.]
Figure 8. Measured packet drop probability for different locations in the industrial hall. [Bar charts omitted: packet drop [%] versus link number 1-8 for locations Heavy 1-4, Medium 1-2, and Light 1-2.]
Figure 9. Histogram of consecutive dropped packets for every link in the industrial hall, Heavy 1 measurement point. [Log-log histograms omitted: occurrences versus number of consecutive packet drops for links 1-8.]
Figure 10. Histogram of outage length of the second measurement campaign in the medium industrial environment. [Log-log histograms omitted: occurrences versus outage length [s] for links 1-4.]
Figure 11. Average outage time for all the links in the second measurement campaign of the medium industrial environment. [Bar charts omitted: average outage time [s] versus link number 1-4.]
3.2. Estimated Gilbert-Elliott Models

Based on the measurements performed in the industrial hall and the office, presented in the previous section, Gilbert-Elliott packet drop models are identified from the data. The data from all the links are individually fitted to separate G-E models using the Baum-Welch algorithm [12].
As an example, the identification results of prototype location 4 from Figure 6 are illustrated in Figure 12. In general, the packet drop probability of the good state is low whereas the drop probability of the bad state is high, and the time spent in the good state (11) is longer than in the bad state. There are, however, large variations among the different links. Similar results are obtained for the other locations and for the industrial hall case.
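For use in simulations, a fitted model can be turned into a drop sequence generator; the following is an illustrative sketch (parameter names are assumptions) of a two-state Gilbert-Elliott simulator:

```python
import random

def gilbert_elliott_drops(n, p_gg, p_bb, d_g, d_b, seed=0):
    """Simulate n packets through a two-state Gilbert-Elliott channel.

    p_gg / p_bb: probabilities of staying in the good / bad state.
    d_g / d_b:   packet drop probabilities within each state.
    Returns a list of booleans, True when the packet is dropped.
    """
    rng = random.Random(seed)
    good = True                    # start in the good state
    drops = []
    for _ in range(n):
        drops.append(rng.random() < (d_g if good else d_b))
        if rng.random() >= (p_gg if good else p_bb):
            good = not good        # state transition
    return drops
```

The state residence times are then geometrically distributed, with a mean of 1/(1 − p_gg) packets in the good state, cf. (11).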
Figure 12. Gilbert-Elliott model for each link of prototype location 4 in the office. Grey bar indicates mean state-residence time (11) and black bar packet drop probability. [Bar charts omitted: state residence time [s] for the good and bad states, and packet drop [%], for links 1-8.]
The mean packet drop (13) absolute error between the fitted models and the measurements is small, less than 0.01, for all the links in both studied environments. In Figure 13 the mean of the normalized standard deviation error (16) for different time-scales (14) is shown between the data and the model. The model fit is satisfactory, with normalized errors of less than 25% of the total standard deviation for both the office and industrial hall environments. A better fit than the one obtained with the Baum-Welch algorithm may be obtained by doing a mean square error fit of the model using the variance [66]. A better fit is also achieved by using a Markov-chain model with more states. With three states the maximum standard deviation error is less than 10%.

Figure 13. Mean of the normalized standard deviation error between data and Gilbert-Elliott model for measurements from the office. [Bar charts omitted: ||σtot|| versus link number 1-8 for locations 1-6.]

3.3. The Networked PID Controller

As noted previously, the PID controller is likely to be applied in industrial wireless control systems. The conventional PID controller can be modified to suit WNCSs better. One proposal is the PID PLUS controller discussed previously in Section 2.6.2. A new PID controller structure for NCSs proposed in this thesis is the Networked PID (NPID) controller, with the architecture shown in Figure 14.
Figure 14. Networked PID control in a NCS setting. [Block diagram omitted: a smart sensor computes the error terms e, Σe, and Δe from the reference yr and the process output y, and sends them over the wireless network to the networked controller, whose control output u drives the process.]
The Networked PID is a PID controller split into two parts and distributed over the network, where part of the algorithm is at the sensor. Thus a "smart sensor" with some computational abilities is needed. The only additional information the sensor needs is the reference signal. On the sensor side the error e, the integral of error Σe, and the derivative of error Δe are calculated. The three terms are then transmitted to the controller, where the final control signal is calculated at the actuator side by

u(k) = Kp e(k) + Ki Σe(k) + Kd Δe(k) . (50)

This division is motivated by the good properties of the control architecture of Figure 1a, where the estimates, or in this case e, Σe, and Δe, are updated exactly, even if packets are dropped. Whenever the controller receives a packet, the control signal is correct. If no data from the sensor is received, the previously received values can be held, as in conventional PID control in a NCS setting. A similar approach is in [132], with event-driven control, where the proportional, integral, and derivative actions are coded and transmitted to the controller.

The Networked PID architecture is further motivated by the extension to event- or self-triggered control [25], [132], [153], where the control signal is updated only if necessary, to save bandwidth [8]. In this case some additional computation at the sensor is needed, to decide when to send information to the actuator. The control signal would for example only be transmitted if it has changed more than some threshold.

Next, the behavior of the NPID during an outage is analyzed. If the control packet is dropped, the output error is

Gc G/(1 + Gc G) · yr − G uhold , (51)

where uhold is the most recently received control value. The notable difference to the conventional PID controller is that there is no windup in the output if packets from the sensor are dropped. There is still some integral windup on the sensor side, as the integral is calculated there,

uol = Gc G uhold . (52)

This issue is further studied in Section 3.5. Then the behavior of the Networked PID controller is shown in the simulations of Section 5.4.3, where it is compared with and shown to behave better than a conventional PID controller during network outages.

3.4. Internal Model Control in Networked Control Systems

In Section 2.7 the basics of IMC controller synthesis were covered. Now, the essential properties related to applying the IMC controller in NCSs are studied. First, the approximation of the closed-loop step response, which is used in the outage compensation heuristic of Section 5.4, is analyzed to determine when the heuristic can be used. Then the stability of the IMC controller in a NCS setting with delay jitter is established.

3.4.1. Approximations of Closed-loop Step Response

A model is rarely perfect, but some approximations to get a closed-loop transfer function of the desired form

Gcl ≈ Gf Gp+ (53)

can be considered. If the non-invertible part is identical to one, or exactly known, this reduces the closed-loop system to

Gcl ≈ Gf Gp+ · Gp−/Gm− . (54)

The deviation from the desired closed-loop system thus depends on the ratio between the invertible part of the process and the model. Assuming a first order process, this can be analyzed by noting that

Gp−/Gm− = (K/Km) · (Tm s + 1)/(T s + 1) (55)

is a lag filter. The maximum error in gain is K Tm/(Km T) and in phase φ = sin⁻¹((T/Tm − 1)/(T/Tm + 1)) [37]. If the parameters of the process K and T are sufficiently well known, which can often be assumed, these errors are small and (55) can be approximated as Gp−/Gm− ≈ 1, and further (54) leads to (53), i.e. the invertible part can be canceled out.

If the process is of higher order, the closed-loop system will roll off at higher frequencies, as the process denominator is in the denominator of Gm−. As an example, take a process with two real left-hand plane poles; one pole is canceled by the process model in the IMC design and the other is left in the closed loop.
On the other hand, the non-invertible part may contain a delay, which must be approximated in the controller implementation (39) and thus is never exact. Assuming Gm− = Gp−, the closed-loop transfer function is

Gcl ≈ Gf Gp+ / (1 + (Gp+ − Gm+) Gf) . (56)

The approximation to a closed-loop of the form Gcl ≈ Gf Gp+ is possible if the inequality

|(Gp+ − Gm+) Gf| ≪ 1 (57)

holds, which depends on the difference between the non-invertible part of the process and the model.

In the case that the non-invertible part is a time-delay Gp+ = e^(−Ls), suitable approximation alternatives are for instance one of the following:

e^(−Ls) ≈ 1 − Ls , first order Taylor approximation, (58)

e^(−Ls) = 1/e^(Ls) ≈ 1/(1 + Ls) , first order inverse Taylor approximation, and (59)

e^(−Ls) = e^(−Ls/2)/e^(Ls/2) ≈ (1 − Ls/2)/(1 + Ls/2) , first order Padé approximation. (60)

The Taylor approximation is the coarsest approximation of them all. Assuming the non-invertible part in (57) is a pure time-delay, approximating it with the Taylor approximation leads to

(Gp+ − Gm+) Gf = (e^(−Ls) − 1 + Ls)/(λs + 1)^n

and, on s = jω, the inequality (57) becomes

(2 − 2cos(Lω) − 2Lω sin(Lω) + L²ω²)^(1/2) / (1 + λ²ω²)^(n/2) ≪ 1 , ∀ ω , (61)

which cannot be solved analytically. The value of the left side of the inequality (61) is approximately zero at low frequencies, and at high frequencies (ω → ∞) it approaches L/λⁿ. The inequality

λⁿ ≫ L (62)

must thus hold for the approximation (53) to hold in case of a time-delay in the process.

3.4.2. IMC Control and Jitter Margin

Using the IMC controller design and assuming that the approximation (53) holds, according to the previous discussion, then

Gcl ≈ Gf Gp+ = e^(−jωL)/(1 + jλω)^n , |Gcl| = 1/(1 + (λω)²)^(n/2) . (63)

The control loop is thus stable according to the jitter margin (21) for

δmax < (1 + (λω)²)^(n/2)/ω , ∀ ω ∈ [0, ∞[ . (64)

When n = 1, the minimum is at infinity, which gives a jitter margin of δmax = λ. For n > 1 the jitter margin is solved by taking the derivative of (64), solving for the minimum, which is at ω* = 1/(λ√(n − 1)), and finally substituting in (64), which results in

δmax = λ , for n = 1,
δmax = λ √(n − 1) · (n/(n − 1))^(n/2) , for n > 1, (65)

for the jitter margin. Conversely, the corresponding tuning λ can be solved, given a jitter margin constraint.
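The closed-form margin (65) can be checked numerically against the inequality (64); the sketch below (illustrative Python, function names assumed) minimizes the right-hand side of (64) over a frequency grid:

```python
import math

def jitter_margin(lam, n):
    """Closed-form jitter margin (65) for the IMC tuned loop."""
    if n == 1:
        return lam
    return lam * math.sqrt(n - 1) * (n / (n - 1)) ** (n / 2)

def jitter_margin_numeric(lam, n, w_max=1e3, steps=200000):
    """Minimize (1 + (lam*w)^2)^(n/2) / w over w > 0, cf. (64)."""
    best = float("inf")
    for i in range(1, steps):
        w = w_max * i / steps
        best = min(best, (1 + (lam * w) ** 2) ** (n / 2) / w)
    return best
```

For n = 1 the right-hand side of (64) decreases toward λ as ω grows, so the numeric minimum approaches the closed-form value from above.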
In the case of a FOTD process, where the non-invertible time-delay makes the approximation (53) invalid, as the delay must be approximated in the controller implementation, the above stability to delay jitter is not guaranteed. The jitter margin then depends on the approximation method used for the time-delay, e.g. one of (58)-(60). Now Gcl is according to (56) and the jitter margin inequality (21) becomes

δmax < |(λjω + 1)^n + Gp+ − Gm+| / ω , ∀ ω ∈ [0, ∞[ , (66)

where Gp+ = e^(−jωL) and Gm+ is one of the delay approximations; (66) reduces to (64) if the delay is insignificant. Using the Taylor approximation (58) and n = 1 results in

δmax < (1 + (λ + L)²ω² − 2(λ + L)ω sin(ωL))^(1/2) / ω . (67)

In [48] a similar inequality is solved numerically. Using the same technique, the jitter margin is approximately

δmax ≈ 0.9562 (λ + L) − 0.6431 L . (68)

With a negligible delay this approximation is close to the case without a delay given in (65) (n = 1).

The discrete-time case of (63) using (43) leads to

Gcl ≈ e^(−jωL) (1 − γ)^n / (1 − γ e^(−jω))^n (69)

and the restriction for the jitter margin after manipulation becomes

Nmax < |1 − γ e^(−jω)|^n / ((1 − γ)^n |e^(−jω) − 1|) = (1 + γ² − 2γ cos ω)^(n/2) / ((1 − γ)^n (2 − 2cos ω)^(1/2)) . (70)

Substituting n = 1, the global minimum of (70) occurs at ω = π, which gives

Nmax,n=1 = (1 + γ)/(2(1 − γ)) = (1 + e^(−h/λ)) / (2(1 − e^(−h/λ))) , γ = e^(−h/λ) . (71)

The approximation

Nmax,n=1 = (1 + e^(−h/λ)) / (2(1 − e^(−h/λ))) ≈ λ/h (72)

holds for large λ. For n > 2 there exist closed form solutions to (70), but numerical solutions are more convenient. As an example, the jitter margins for the continuous-time (65), (67) and discrete-time cases (22), (71) are plotted later on in Figure 16.
3.4.3. Sampling Interval and IMC Tuning for Jitter Margin

The selection of the sampling interval of discrete-time controllers needs to be considered, especially as in the case of packet drop the induced delay jitter is dependent on the sampling interval and the number of consecutive dropped packets. The rule of thumb for selecting the sampling interval h for control of a first-order process is

Nh = Tr/h ≈ [4 … 10] , (73)

where Tr is the rise-time of the closed-loop system [183]. Using the IMC design with a specified time-constant λ = Tr, this equates to a sampling interval of

h = λ/Nh . (74)

This relation between the IMC tuning and sampling interval is also supported by (72), or the linear relationship between the IMC λ and the jitter margin seen in Figure 16. The resulting jitter margin with this selection of tuning and sampling interval is obtained by combining (72) with (74), which gives

Nmax,n=1 ≈ Nh . (75)

Thus, a suitable jitter margin in terms of consecutive packet drops with a discrete-time IMC controller can be selected directly by specifying Nh and using (74) to get the controller tuning parameter, given a fixed sampling interval. To illustrate this, the jitter margin (22) according to the case described in Section 5.2.4 (closed-loop control with an IMC designed controller of a process with time-constant T = 10) is solved numerically and plotted as a function of the IMC tuning parameter λ in Figure 15. Without quantization the obtained jitter margin is as specified at Nmax = Nh = 8, and with quantization the jitter margin is at least the specified one.

To accommodate the conventional PID control to the WNCS setting, the controller can be made robust to the network or "network aware" by selecting a tuning such that the control loop is stable for a specified delay jitter, for example by the jitter margin theorem. The jitter margin of the PID controller (24) with the IMC-PID tuning (45) without the pre-filter is plotted in Figure 16, for a first order process with K = 1, T = 10, and τ = 0.1 (discrete-time controller parameters: h = 0.1, Nd = 5). The jitter margin for the IMC-PID controller is solved numerically, as a closed-form minimum of (21) is infeasible. The approximations (65) and (72), which coincide, are also given.

The control is stable for less than Nmax consecutive drops, where Nmax = δmax/h (23). The discrete-time controller has a larger jitter margin than the equivalent continuous-time controller, whose jitter margin practically saturates to about 2 seconds. By increasing the sampling interval, the jitter margin is increased. This is a consequence of the limited possible delay jitter values in discrete-time. The jitter margin of the IMC-PID controller without the pre-filter is less than the approximations indicate. With the pre-filter, the actual jitter margin coincides with the approximations.

Figure 15. Jitter margin measured in consecutive packet drops with IMC controller as function of λ, with (96) and without (74) sampling interval quantization.
[Plot omitted: jitter margin Nmax versus IMC controller tuning parameter λ (0-60), for δmax with quantized h and with h = λ/Nh.]
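The selection rule of this section can be sketched as follows (illustrative Python; the function names are not from the thesis): given a fixed sampling interval h and a desired margin of Nh packets, (74) gives the tuning λ = Nh·h, and the discrete-time margin (71) then lands close to Nh, cf. (75):

```python
import math

def imc_tuning_for_margin(h, n_h):
    """IMC tuning for sampling interval h and a margin of n_h packets, cf. (74)."""
    return n_h * h                       # lam = N_h * h  <=>  h = lam / N_h

def discrete_jitter_margin_packets(h, lam):
    """Discrete-time jitter margin in packets for n = 1, equation (71)."""
    g = math.exp(-h / lam)               # gamma = e^(-h/lam)
    return (1 + g) / (2 * (1 - g))
```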
Figure 16. Jitter margin for a first order process with K = 1, T = 10, and τ = 0.1. Continuous- and discrete-time PID controller with IMC tuning. Jitter margin for the closed-loop system (21) and the approximations given in Section 3.4.1, (65) and (72). The approximations for the closed-loop system coincide. [Plot omitted: jitter margin δmax versus IMC tuning parameter λ (1-10), for the continuous-time case, discrete-time cases with h = 0.01, 0.05, and 0.1, and the closed-loop approximations.]

3.5. Effect of Network Quality of Service on Control Performance

The network quality of service for a control loop must be guaranteed for maintaining stable networked control. There exist theoretical data rate or packet drop rate bounds for stability [120], but they do not indicate the respective quality of control or control performance. The problem is how to measure control performance related to network QoS [165]. A common approach is to use some of the integral of error criteria (29)-(32) [71], [92]. An increase in these measures with decreasing network QoS indicates degradation in control performance due to network related problems.

An approach to relate network QoS and control performance is through packet drop. The packet drop sequence, in other words the number of consecutive packet drops, is important for the control system. Packet drops or outages affect the real-time operation, as no feedback is received. A control loop can usually tolerate packet drops occasionally, but control is impossible if several consecutive packets are dropped, although the average packet drop may be lower in this case. This is in contrast to computer networks, which usually transfer large files over a network, where only the average throughput and packet drop are important. A controller in a WNCS must be designed to handle single packet drops, since these cannot be avoided, because of fading and interference in the wireless network. Larger gaps, when the network is congested, for example during link breaks, routing, moving obstacles, or during extra traffic in the network, are detrimental to the control system and may lead to instability, e.g. when the jitter margin is exceeded.
The network used for control should thus be designed to minimize consecutive packet drops, eliminate outages, and quickly recover from these. The selection of MAC and routing protocols is important in this case. The MAC protocol should guarantee fair and regular access to the medium. The routing protocol should quickly switch to an alternate route during link breaks or changed traffic conditions, if the QoS of the network does not satisfy the requirements of the control system. For the routing protocol, Sections 4.7.1 and 4.7.2 give some insight into how the network affects the control loop in low and high mobility scenarios.

3.5.1. Network Cost for Control

In the literature there exists no network QoS measure specifically for NCSs. One control related network QoS measure proposed here is the network cost for control (NCC) measure, JNCC. The objective of the NCC is to indicate the network performance experienced by the controller. This is done through the packet drop statistic, or length of consecutive packet drops, as the outage length directly affects the control performance, as outlined next.

Consider a PID controller Gc controlling a process G. The closed-loop response is

y = Gc G/(1 + Gc G) · yr . (76)
When measurement packets are dropped between the sensor and controller, and a ZOH at the controller input is assumed, the control is open-loop and the open-loop response yol is

yol = Gc G (yr − yhold) , (77)

where yhold is the most recent received value before the outage. The difference between normal and outage operation is

yol − y = Gc G (yr − yhold) − Gc G/(1 + Gc G) · yr = Gc G (Gc G/(1 + Gc G) · yr − yhold) = Gc G (y − yhold) , (78)
which in the case of a long outage, when the transient dynamics have ended, can be approximated by the integral windup effect, as Gc G contains the integrator of the PID controller. In this case

y ≈ (1/s)(yr − yhold) = (1/s) ehold = ehold t , (79)

where ehold is the difference between the desired and the controller observed output. During an outage of length Tout with a (constant) non-zero reference error ehold, the total squared output error, cf. the ISE criterion, is given by

y²(Tout) = ∫₀^Tout (t ehold)² dt = ehold² Tout³/3 ∼ Tout³ . (80)
The error induced by a network outage given by (80) leads to the conclusion that the network cost for control should be proportional to the third power of the outage length. In a WNCS with discrete-time packets, the outage length translates to the number of consecutive packet drops. The NCC is only valid for open-loop stable systems. If the system turns unstable, the error grows exponentially instead.

The NCC is related to stability through the jitter margin (Section 2.5), where stability is guaranteed until a certain outage length determined by the jitter margin is exceeded. If the control loops have different delay jitter stability margins, using the δmax-normalized outage length might be more appropriate to evaluate the outage cost

yδ²(Tout) = ∫₀^Tout (t ehold/δmax)² dt = ehold² Tout³/(3 δmax²) ∼ Tout³/δmax² . (81)
The NCC is now defined as the average outage cost

JNCC = (1/N) Σk Dhist(k) k³ , (82)

where Dhist is a histogram as a function of the drop length k (k³ ↔ Tout³), and N is the count of the total number of packets.
The NCC measure (82) is applicable if packet drop affects all control loops similarly, and they all have the same sampling interval. The δ_max/h-normalized outage length can instead be used,
J_NCC,δ = (1/N) ∑_k D_hist(k)·(hk)³/δ_max² = (h³/δ_max²)·J_NCC    (83)
if the control loops have different jitter margin limits or sampling intervals h.
To measure the average NCC, the count of consecutive packet drops is collected in a histogram D_hist as a function of the drop length. The histogram is accumulated over N sent packets. The sum ∑_k k·D_hist(k) is the total number of packet drops, where k is the histogram bin. The network cost for control J_NCC is then a k³-weighted sum of the number of outage lengths, averaged over all the sent packets N.
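The procedure above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the thesis code (the original simulations were done in Matlab); `drops` is assumed to be a boolean packet trace with True for a dropped packet:

```python
# Sketch of the network cost for control (82): accumulate the
# drop-length histogram D_hist from a packet trace, then take the
# k^3-weighted sum averaged over all N sent packets.
from collections import Counter

def ncc(drops):
    n_packets = len(drops)
    d_hist = Counter()          # drop-length histogram D_hist(k)
    run = 0                     # current consecutive-drop count
    for dropped in drops:
        if dropped:
            run += 1
        elif run:
            d_hist[run] += 1    # an outage of length `run` ended
            run = 0
    if run:
        d_hist[run] += 1        # trailing outage at end of trace
    return sum(count * k**3 for k, count in d_hist.items()) / n_packets

# Ten sent packets, four drops: isolated drops vs. one burst of four.
print(ncc([True, False] * 4 + [False] * 2))  # 4 * 1**3 / 10 = 0.4
print(ncc([True] * 4 + [False] * 6))         # 4**3 / 10 = 6.4
```

The k³ weighting makes one burst of four drops sixteen times as costly as the same four drops spread out as single losses.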
As an example, suppose that n of N packets are dropped in equally long bursts of n_B packets. Assume also that n is divisible by n_B, and that in N there is room for n/n_B separate packet bursts. Then the NCC becomes

J_NCC(n, n_B, N) = (1/N)·(n/n_B)·n_B³ = (n/N)·n_B².    (84)
Thus, fixing the number of dropped packets, the NCC is minimized with single packet bursts, J_NCC(n, 1, N) = n/N, and maximized when all the packets are dropped in one long burst, J_NCC(n, n, N) = n³/N. A network protocol should try to minimize J_NCC, that is, favor single packet drops over consecutive packet drops, to deliver a good QoS for the control loop.
3.5.2. Simulations for Network and Control Performance Relationship
Simulations of a first order system (1) with K = 1, T = 10, and τ = 2 and an IMC-PID controller (see Section 2.7.2) discretized with h = 0.5 seconds, tuned with different values for λ, are done. Several simulations with a Gilbert-Elliott model with average packet drop probabilities ranging from 0 to 0.5 are made. The parameters of the model are d_G = 0, d_B ∈ [0, 0.98], and p_GG = p_BB ∈ [0, 0.99]; if p_BB = 0, then p_GG = 1. To average out the particular packet drop realizations, the average of 1000 individual step responses is calculated.
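The Gilbert-Elliott channel used above is a two-state Markov model: packets drop with probability d_G in the good state and d_B in the bad state, while p_GG and p_BB are the probabilities of staying in the respective state. The Python fragment below is a minimal illustrative sketch of such a model (the thesis simulations themselves were run in Matlab/ns-2):

```python
# Minimal two-state Gilbert-Elliott packet drop model. State G drops
# packets with probability d_g, state B with probability d_b; p_gg
# and p_bb are the self-transition (stay) probabilities.
import random

def gilbert_elliott(n, p_gg, p_bb, d_g=0.0, d_b=0.98, seed=1):
    rng = random.Random(seed)   # seeded for reproducibility
    state = "G"
    drops = []
    for _ in range(n):
        drop_prob = d_g if state == "G" else d_b
        drops.append(rng.random() < drop_prob)
        stay = p_gg if state == "G" else p_bb
        if rng.random() >= stay:
            state = "B" if state == "G" else "G"
    return drops

# The bad state produces bursty losses: runs of consecutive drops.
trace = gilbert_elliott(10_000, p_gg=0.95, p_bb=0.9)
print(round(sum(trace) / len(trace), 2))  # average drop rate of the trace
```

With these stay probabilities the stationary probability of the bad state is (1 − p_GG)/((1 − p_GG) + (1 − p_BB)) = 1/3, so roughly a third of the packets fall in the lossy state.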
The cases of choosing three values for the controller tuning parameter λ are shown in Figure 17, with and without a measurement packet outage in the beginning of the step. The control cost measured with the ISE criterion (31) is plotted as a function of packet drop probability and network cost for control (82) in Figure 18. The cases show the following situations: With λ = 1 the control is tight and the best performance is obtained, but it deteriorates with increasing packet drop. With λ = 2 a slightly lower control performance is obtained, but a graceful degradation is evident, as the control cost barely increases with an increased NCC. In the last case, with λ = 4, a too conservative tuning is selected. In this case the control performance might even improve with increased packet drops, due to more aggressive control induced by integral windup.

Figure 17. Step response comparisons of differently tuned IMC-PID controllers (λ = 1, 2, 4) with (solid) and without (dotted) measurement packet outage between t = 2–3 s.

Figure 18. Control cost (ISE) as a function of packet drop probability and of the network cost for control (82), for λ = 1, 2, 4.
While the network drop probability does not give a clear indication of the control performance, there is an approximate linear correspondence between the NCC and the ISE cost. Applying the NCC in a more realistic case is done in the crane control example of Section 4.7.4, which shows a similar behavior between the NCC and control performance.
Furthermore, the network should deliver equal QoS to all the control loops. Although the overall packet drop rate may be low, a control system cannot function properly if all the packet drops are concentrated in one loop. No control loop should experience more packet losses than the other loops. The assumption of an equal packet drop requirement is natural if the control loops are tuned with the same assumption of network performance or packet drops, for example using the same jitter margin.
Control loop fairness can be measured by comparing the NCC between different control loops. Provided that every control loop has equal packet drop requirements, every control loop should have an equal NCC. The standard deviation of J_NCC calculated over all the M control loops is then a measure of the packet drop fairness among the control loops

σ_NCC = √( (1/M) ∑_{i=1}^{M} (J_NCC,i − J̄_NCC)² ),    (85)

where J̄_NCC is the average network cost for control and J_NCC,i is the cost of the i-th control loop. The packet drop fairness is employed in Section 4.7.3, where the performances of the control loops are compared.
3.6. Summary

In this chapter some practical results and control design considerations were presented. The radio environments of two sites were measured and Gilbert-Elliott models were fitted to the observed packet drops. These models are integrated into the PiccSIM simulator and will be used in the simulations later on.
The step response and stability properties of the IMC design in the NCSs are established. It turns out that the tuning parameter of the IMC design directly determines the jitter margin of the resulting controller. In the discrete-time case the tuning gives the stability limit in terms of the number of consecutive dropped packets. These results are used in the adaptive control systems developed in Chapter 5.
The relationship between network QoS, control performance, and stability is established. With decreasing network QoS, the control performance decreases, as indicated by the proposed network cost for control measure, until instability of the control system is reached. The NCC is further motivated and visualized by simulations.
4. PICCSIM – TOOLCHAIN FOR NETWORK AND CONTROL CO‐DESIGN AND SIMULATION
This chapter presents and discusses the simulation platform PiccSIM, which is a toolchain for design and simulation of WNCSs. PiccSIM stands for Platform for integrated communications and control design, simulation, implementation and modeling. The aim of PiccSIM is to deliver, as the name suggests, a complete toolset for developing a wireless control system. The tools in PiccSIM range from the beginning of the design of the system, through simulation and system testing, to implementation of a wireless control system. The main purpose of PiccSIM is a co-simulation tool for network and control system simulation in a networked control system setting. It is intended for research on NCSs and WNCSs.
The main characteristics of the PiccSIM platform are:
- Co-simulation of network and control system.
- Graphical user interface for running simulations or batches of simulations.
- Several integrated tools for network and control system design and modeling, and a controller tuning tool.
- Control of a true process in real-time or a simulated process over a user-specified simulated (if available in ns-2) or real network.
- Automatic code generation from the Simulink model block diagram for implementation on actual wireless nodes.
- Remote user interface for doing student laboratory experiments over the Internet or for sharing the PiccSIM platform with other researchers.
The PiccSIM simulator is an integration of Matlab/Simulink, where the dynamic system, including the control system, is simulated, and ns-2 [118], where the network simulation is done. The PiccSIM Toolchain is a graphical user interface for network and control design, realized in Matlab. It is a front-end for the PiccSIM simulator and gives the user full access to all the PiccSIM modeling, simulation and implementation tools.
There are several reasons to build a co-simulation platform consisting of Matlab and ns-2. Matlab and Simulink are widely employed research tools used in dynamic system simulation, providing efficient tools for control design. Control engineers are accustomed to working in this environment. Ns-2 [118], on the other hand, is the de facto standard for network simulation in the communication research community. Ns-2 simulates the network on a per packet basis, with models for the MAC, routing and transport protocol layers. The wireless communication part of ns-2 includes radio models with propagation time, signal propagation and fading models, and error thresholds for received signal strength. The decision to use pre-existing simulators is supported by the minimum amount of maintenance needed to improve the simulation environment, and by the advantage of using well known and powerful tools. Models for new wireless technologies, such as routing protocols, are frequently developed for ns-2 [18].
The PiccSIM simulator is presented in more detail in Section 4.3 [P2]. Other existing network and control co-simulation tools are reviewed in Section 4.2 [P6]. The PiccSIM Toolchain is described in Section 4.4. The Toolchain has several tools for setting network properties and controller tuning suitable for networked control systems. With the GUI, simulation and management of both simulators is made easy. The advantage of integrating all in one tool is that it is easy to study all aspects of communication and control, including the interaction between them. PiccSIM enables automatic code generation from the simulation model to actual wireless network nodes, as presented in Section 4.6. The simulated system can thus be tested with real hardware with no extra programming effort.
The PiccSIM simulator has, in addition to the PiccSIM Toolchain, two remote graphical user interfaces, presented in Section 4.5. The remote interfaces are applets based on the MoCoNet system, for accessing the simulation functions with a web browser, without the need to install PiccSIM [P1], [P2]. One of the interfaces is for students and provides convenient fields for inputting controller tunings and running experiments; the other is the Researcher's Interface, which offers all researchers the opportunity to use the PiccSIM simulator.
4.1. Development of the Co-simulation Platform

The PiccSIM platform has been developed by the author over several years, starting from the summer of 2004. In the beginning it was in the form of the MoCoNet (Monitoring and Controlling Educational Laboratory Processes over the Internet) platform. The platform was at that time developed for educational purposes, specifically for enabling remote laboratory experiments. Much of the architecture has survived to the PiccSIM platform. The remote user interface, communication with Matlab and a simple network simulator were already implemented then. [P1]
Much, though, has changed over the years. Some features, such as the network simulation, have been improved, and some features have remained the same, for instance the MoCoNet user interface. Completely new features, such as the PiccSIM Toolchain, have been developed as well.
Already in the MoCoNet platform there was a possibility to simulate a network by routing packets through a simple network simulator. The simulator delayed the packets according to a specific time-delay distribution and implemented a random packet drop with a certain probability, imitating statistically the delay and packet drops of a network [P1]. The simulations were run approximately in real-time. In the PiccSIM platform the network is simulated with the ns-2 simulator, which is more realistic, since it actually simulates packets traveling in a user specified network [P2]. Now simulations are run as fast as possible, with the aid of time-synchronization between the simulators [P3].
When reading the older publications related to this thesis (mainly [P1], [P2], and [P4]), one has to keep in mind that some information presented there may be outdated, because of the constantly evolving development of the simulation platform. For example, the connection between Matlab and the remote user interface was previously implemented with the Matlab Web Server, but has now been replaced by the Java Native Interface. On the simulation side, the time-synchronization mechanism between the two simulators was developed quite late. This improvement slightly changed some of the simulation results. New, more accurate simulations have been done for this thesis. The most recent publication [P3] depicts the current situation most accurately.
Next, other NCS simulators are surveyed, and in the following sections an updated documentation of the PiccSIM platform is presented.
4.2. Review of Networked Control System Simulators
The PiccSIM Toolchain is unique, because it enables the design, simulation and implementation of wireless control systems in one framework. There are other similar WNCS simulators, but they do not deliver any design support or automatic code generation for actual wireless nodes. PiccSIM is also rich in simulation features, as it comprises two simulators.
WNCS or sensor network simulators can be divided into several categories: network simulators with control or application extensions; control system simulators with network simulation extensions; sensor node simulators, where the actual code of the sensor node is executed; and hybrid simulators, where a network and a control simulator are combined.
In addition to the simulators reviewed here, there are plenty of network-only simulators, which cannot be applied for WNCS simulation as such. Some of the simulators are the sensor network simulation tools and testbeds ns-2 [118], TOSSIM [89], OMNeT++, J-Sim, WISENES [83], and Cooja [38], which do not consider real-time plant dynamics, control or actuation. For other network simulators, mostly aimed at sensor networks, see e.g. [35]. PiccSIM can also be used for sensor network application simulation, where a typical WSN simulation case would be testing a distributed algorithm.
The basic NCS simulators are commonly implemented by extending an existing simulator with a network or dynamic simulation extension, and as such the extensions are usually not as versatile as the main simulator. The most common approaches are extending ns-2 or Simulink. The advantage of these extensions is that the network and control simulation is done within the same tool, but the disadvantage is that the simulator may not be equally suitable for both network and control simulation. Additionally, the network or control extension must be developed from scratch, which usually leads to simplistic and inaccurate models. Most of the network simulators have no control or dynamic simulation mode that would enable reasonable WNCS simulation. Therefore a variety of extensions to these simulators have been implemented.
Ns-2 [118] is the de facto standard network simulator in the communication research community. It is flexible and can be extended by new classes written in C++. For ns-2 there exist some dynamic system simulation add-ons, for example the Agent/Plant extensions [17], [152], or the more general NSCSPlant and NSCSController classes [5], which define agents with dynamic properties in the form of ODEs to simulate the process and controller. The process output is sampled with an, optionally adaptive, schedule and packets with the measurement are sent to the controller. The downside is that complex control system logic is difficult to realize with differential equations. The Scatterweb application programming interface has been added to ns-2 to enable running sensor node executables in ns-2 [167].
Simulink has been extended to simulate WNCSs by creating Simulink blocks that simulate the network. There exist many Simulink network simulation blocks, e.g. [53]. One of the first wireless network blocks developed is an S-function that implements the IEEE 802.11b DCF (Distributed Coordination Function) [31]. It has a frame-level correlated channel model, which models indoor, non-line-of-sight environments. Another Simulink-based network simulation blockset, developed at the University of Michigan, is capable of simulating Ethernet, ControlNet, and DeviceNet networks [93]. The networks are modeled by theoretical communication times calculated in [91].
Perhaps the most well‐known Simulink network blockset is TrueTime, which is actively developed at Lund University, Sweden [22]. It supports many network types (Wired: Ethernet, CAN, TDMA, FDMA, Round Robin, and switched
Ethernet, and wireless networks: 802.11b WLAN and IEEE 802.15.4) and it is widely used to simulate wireless NCSs [7]. TrueTime simulates only the physical and MAC layers. Besides the dynamic system simulation offered by Simulink, network node simulation includes simulation of real-time kernels. The user can write Matlab m-file functions that are scheduled and executed on a simulated CPU. Even ultrasound network (from version 2.0) and node battery simulation are included.
The Ad Hoc On-demand Distance Vector (AODV) routing protocol [126] has been implemented on TrueTime by appropriate functions running on the simulated kernels [24]. The simulation of mobile robots, including the physical robot model and an inter-robot communication protocol, has been implemented for studying robots in simulations and comparing them to real robots [182]. In recent work, the simulation of a WirelessHART network has been made possible by an extension of TrueTime [13]. The simulation uses frequency hopping and a TDMA MAC protocol, but time-synchronization is not simulated and is assumed to be perfect. The developed WirelessHART network block has useful features, with input ports with which one can specify beforehand the radio interference and packet drops. With these, one can study the impact of packet drops at instants critical for the control loop. The device table, routing and communication schedule are specified by the user, so no network manager functionality is implemented.
The NMLab co‐simulator combines ns‐2 and Matlab [63]. The control system tasks are defined with Matlab scripts and callbacks. The approach is scalable, as it is easy to duplicate control loops using the Matlab scripting language, but complex dynamic systems might be difficult to implement compared to using Simulink. Ns‐2 and Matlab are synchronized, such that Matlab commands ns‐2 to execute to a specific time whereupon a new event is scheduled.
The wireless node operating system simulators TOSSIM, COOJA, and RTNS (Real-Time Network Simulator) are worth mentioning. They do not specifically support control system simulation, but complete wireless applications can be simulated with these tools. TOSSIM and COOJA simulate the code execution on the wireless nodes and have simple radio models to allow simulation of many nodes communicating with each other. Both sensor node simulators use simplistic range-based network propagation models. RTNS [122] is a simulator for real-time wireless node operating systems. It simulates the scheduling of tasks using RTSim (Real-Time operating system SIMulator) and the network using ns-2.
TOSSIM [89] is a simulator for TinyOS [154]. With TOSSIM, whole networks consisting of nodes running TinyOS can be modeled. The actual application code is executed on the node simulator and the communication between the nodes is simulated on the bit level. Sensing and actuation are emulated with external code for the analog read and write operations. Simulation of wireless
control systems can thus be done by implementing suitable read and write functions.
COOJA is a cross-layer simulator for the Contiki node operating system [38], implemented in Java. COOJA combines the simulation of code execution, radio transceiver, network, and operating system into one tool [186]. Simulation of physical processes can be achieved by developing plug-ins representing process models that can be attached to the simulated input/output interface of the nodes. The nodes are run either as compiled code on the host CPU or on a TI MSP430 emulator, but simpler Java node models can also be used and combined in the same simulation, which makes the simulator both accurate and scalable according to the user's needs.
Another sensor network simulator is WISENES, which can simulate sensor network nodes with communication and application scheduling and takes into account energy, memory, and processing power consumption. The configuration is done with a high-level Specification and Description Language, and the results are presented in GUIs and trace files [83].
Other extended simulators include Ptolemy II and Arena/ns. Ptolemy is a discrete event simulator, with emphasis on simulation of heterogeneous, hierarchical and asynchronous systems. It has, for instance, been extended to simulate distributed detection with sensor networks, but is no longer developed [10]. Arena is a tool aimed at simulations of mobile multi-robot scenarios. It is extended by integrating ns-2 for inter-robot communications [175]. Arena provides mechanisms for sensor reading and motor control command implementation in the simulator. Similarly to PiccSIM, the positions of the robots are synchronized between Arena and ns-2. The simulator is only suitable for mobile robot scenarios and the simulation is done in real-time, neglecting synchronization issues (see Section 4.3.3).
More advanced WNCS simulators are hybrid: they combine two simulators by integrating a network and a dynamic system simulator. The advantages are that relevant, existing, powerful, and well known tools for both network and control simulation are used. A caveat is that it may be difficult to properly integrate two simulation tools and produce correct results.
The most relevant co-simulation tool for WNCS simulation besides PiccSIM appears to be Modelica/ns-2 [5]. It is a very similar platform, developed at Case Western Reserve University (USA). As in PiccSIM, the network simulation is done in ns-2, but the plant dynamics and the control simulation are done in Modelica. Modelica is a general purpose dynamic system simulation software [107]. With a graphical modeling and simulation environment, such as Dymola [39] among others, it corresponds to Simulink. In Modelica/ns-2, both simulators exchange information with each other to synchronize the simulation of the system in both the control and network domains. The simulation is controlled by
ns-2, and Modelica is instructed to run until a certain time, upon which data synchronization (copying values to be sent to ns-2 and received values to Modelica) is performed. The packet rates, sources and destinations are specified in the TCL (Tool Command Language) script for ns-2 before simulation, and thus no event-based communication can be done, for example in response to possibly unforeseen events in the dynamic model, such as alarms or threshold crossings. In PiccSIM, the traffic is generated in Simulink, and event-based transmission of packets is possible. This enables simulation of event- or self-triggered control.
Optimized Network Engineering Tool (OPNET) is a commercial package for general purpose detailed simulation and analysis of many different networks [26], [119]. It is widely used and generally regarded as one of the best network simulator packages. It is more advanced than ns-2, as it among other things supports simulation of the physical link and the antennas, and has better configuration and visualization capabilities. OPNET can be customized using the Proto-C language, but dynamic system simulation is not easily done.
Regarding WNCS simulation, the effect of sampling interval, data rate, node movement and routing algorithm on several different plant models has been investigated with OPNET [61]. The main result was that with a higher network data rate, the radio range is decreased and more hops are needed to reach the destination, with the result that the more real-time demanding plants could not be controlled. OPNET has been integrated with Simulink to simulate a two-pendulum WNCS, including both a simulated and a real wireless network, with a remote controller [62].
The simulator by Soglo [146] combines ns-2 with Matlab using a C/C++ interface. The plant and control algorithms are implemented in C code or with Matlab m-files. Matlab executes them when called, through an external interface program, by ns-2. A special UDP packet format is implemented to carry control data in the simulation. The presented results focus only on the network performance. Performance measures are given with wired links as a function of bottleneck bandwidth, number of processes, and sampling intervals. No control related results are presented.
In choosing the network simulator, it is important to evaluate the simulation objective, the accuracy of the simulation result and the simulation efficiency. With simple simulation models using delay distributions or packet drop probabilities, the simulation results are not accurate, but for a number of cases the result may be adequate. An example is the case of investigating the control robustness during packet drops, where the drop rate is the factor under investigation.
Several studies have been done to compare and evaluate the simulation results and accuracy of ns‐2. In the wired case, ns‐2 and OPNET give similar results compared to a real testbed network [100]. In the wireless case, however, the
simulation results seem to diverge significantly between different simulators [133]. This is mainly due to different assumptions and simplifications in the environment, signal propagation and radio models used to model the real wireless network [79]. The current PiccSIM simulation results are as accurate as any other simulation based on ns-2. When evaluating control results in Chapter 5, the qualitative network performance is more important than the quantitative: patterns of packet drops rather than the average packet drop rate.
To compare the existing co-simulators, Table 1 summarizes the main properties of the relevant simulators. For evaluating a complete wireless control system application, accurate network and control system simulation models must be built. This rules out TrueTime, as it has simple network models. OPNET, Agent/Plant, Arena/ns and the node simulators are not suitable either, as they do not support control systems well. Viable alternatives for WNCS simulation are only the co-simulators PiccSIM and Modelica/ns-2. Finally, PiccSIM has the advantage over all the other simulators that it offers control and network design tools.
Table 1. Comparison of simulators for wireless networked control systems.

Simulator           | Type           | Based on       | Free | Control supported | Advanced network models | Event-/Time-driven
PiccSIM             | co-simulator   | Simulink, ns-2 | No   | Yes               | Yes                     | Yes/Yes
Modelica/ns-2       | co-simulator   | Modelica, ns-2 | Yes  | Yes               | Yes                     | No/Yes
TrueTime            | control        | Simulink       | No   | Yes               | No                      | Yes/Yes
OPNET               | network        | -              | No   | No                | Yes                     | Yes/Yes
Cooja, TOSSIM, RTNS | node simulator | -              | Yes  | No                | No                      | Yes/Yes
Agent/Plant         | network        | ns-2           | Yes  | Limited           | Yes                     | Yes/Yes
Arena/ns            | co-simulator   | Arena, ns-2    | Yes  | Yes               | Yes                     | Yes/Yes
4.3. PiccSIM Architecture

The general architecture of the PiccSIM platform is depicted in Figure 19. The PiccSIM simulator consists basically of two computers on a local area network (LAN), with access from the Internet: the Simulink or xPC Target computer for system simulation, including plant dynamics, signal processing and control algorithms, and the ns-2 computer for network simulation. An example of a wireless control system simulation with network and control co-simulation is depicted in Figure 20, where the simulation domains are indicated using green for the network and blue for the control domain, as in Figure 19. The technical details are explained in the next subsections.
The Simulink model can either be run normally in Simulink (free‐run) or in real‐time with the Matlab xPC Target real‐time operating system. The Simulink mode is for pure simulations, where the time is synchronized between Simulink and ns‐2 (see Section 4.3.3), and the xPC Target mode for hardware‐in‐the‐loop runs, where a real process is run over a simulated network in real‐time. This mode is used for educational purposes in student lab exercises.
The xPC Target computer runs a compiled version of the Simulink model, in real‐time using a Matlab proprietary real‐time operating system, where the control algorithms and nodes are modeled. The xPC Target computer has an I/O board to connect it to the real process for measurements and actuation.
The network is simulated in PiccSIM by the ns-2 computer. Packets sent over the simulated network are routed through the ns-2 computer, which simulates the network in ns-2 according to the TCL script specification generated automatically by the network configuration tool (see Section 4.4.2). In free-run simulations, simulation time-synchronization is performed between the computers. The integration of the simulators is explained in more detail in the next subsections.
The server computer is responsible for the remote user connections and for running the PiccSIM Toolchain and Simulink models during normal simulation (free-run). In case the xPC Target computer is used, Simulink models built by the user are automatically compiled to executable code using the automated code generation capabilities of Matlab (rapid control prototyping) and uploaded to the xPC Target, where they are run. For the remote user interface (Section 4.5) the server stores simulation results in a database for later retrieval. The PiccSIM server computer is attached by a LAN to a gateway, such that users on the Internet can connect to the system and operate it.
In the following subsections, some features and implementation details about the simulator are presented.
Figure 19. PiccSIM architecture with control and network simulators, connection to hardware and user interfaces. Modified from [45].
Figure 20. Wireless control loop split into dynamic and network simulation domains.
4.3.1. Simulink and ns-2 Integration

The PiccSIM simulator is created by integrating two different simulators: Simulink for control system simulation and ns-2 for network simulation. Communication over the simulated network is done with UDP packets, since in control system applications a lightweight container for a small amount of data is more suitable than TCP. Packets sent over the network in the simulation model are routed through the ns-2 computer, which simulates the network according to a TCL script specification.
The simulated communication over a network commences by Simulink sending a UDP packet to the ns-2 computer. The ns-2 computer captures the UDP packets from the LAN (with so-called taps) and injects them into the simulated network model. If the packet reaches the destination in the simulated network, it is sent back to Simulink. The corresponding Simulink UDP receive block captures the packet, converts it to a Simulink signal and outputs it immediately to the rest of the simulation model. Figure 21 shows the connectivity mapping between the Simulink and the ns-2 nodes. UDP port numbers are used to associate packets with the corresponding node in ns-2. The communication over the simulated network is handled in Simulink by a ready-made library of blocks, as explained in Section 4.4.1.
Figure 21. Simulink and ns-2 integration. Communication between the simulators: data packets, information updates, and time-synchronization.
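The packet path described above can be sketched with ordinary UDP sockets. The following Python fragment is purely illustrative: the tap address, port number, and the (sequence, value) payload layout are assumptions, not the actual PiccSIM packet format.

```python
# Illustrative sketch (not PiccSIM source code) of the UDP transport
# between the simulators: the control-simulator side sends a sampled
# measurement as a UDP datagram toward the network-simulator host,
# which would capture it (tap) and inject it into the simulated
# network. Addresses and the payload format are hypothetical.
import socket
import struct

NS2_HOST, NODE_PORT = "127.0.0.1", 40001  # hypothetical tap endpoint

def send_measurement(value, seq):
    """Pack a (sequence, value) sample and send it toward the ns-2 tap."""
    payload = struct.pack("!Id", seq, value)  # 4-byte seq + 8-byte double
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (NS2_HOST, NODE_PORT))
    sock.close()

def receive_measurement(sock):
    """Unpack one sample on the receiving side (blocking)."""
    data, _addr = sock.recvfrom(1024)
    seq, value = struct.unpack("!Id", data)
    return seq, value
```

In PiccSIM itself this exchange is handled by the ready-made Simulink blocks and the ns-2 taps; the sketch only illustrates the datagram-per-sample idea.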
The integration of the control and network simulators is such that transmitting and receiving packets from/to Simulink is equivalent to communicating over a real network. In practice, at the time-instant when a packet is sent, it is instantly (before the next simulation step) transferred to the network simulator. When a packet reaches the destination node in the network simulator, it is received in Simulink at the closest following time-step. This ensures that the connection between the control and network simulators is as transparent as possible. Thus, the simulation is as accurate as the quantization imposed by the time-steps of Simulink implies. Reducing the Simulink time-step decreases the timing error due to the integration of the two simulators. Compared to an actual implementation with real hardware, inaccuracies only occur in the precise timing when the packet is sent, where the preparation of the packet, going down the network stack, and other scheduled operations interfere with the timing. The transfer between Simulink and ns-2 does not take any time, since the simulations are time-synchronized and the transfer occurs before the next time-step is taken.
The capabilities of the current ns-2 version (v2.34) have been extended to suit the requirements of the PiccSIM simulator. A new scheduler was developed for ns-2 to synchronize the two simulators. In normal simulation mode (free-run), the simulation time is synchronized between the simulators, as described in Section 4.3.3. In real-time operation with the xPC Target, the network simulator uses the emulation mode of ns-2, known as NSE (Network Simulator Emulator), to run in real time. Other newly developed features are the dynamic data update mechanism (Section 4.3.2) and the packet drop models (Section 4.3.4).
4.3.2. Data Exchange Between Simulators Since PiccSIM is an integration of two simulators, they are by definition separated. To close the gap between the simulators, a data exchange mechanism is implemented. This data exchange passes information from one simulator to the other. This enables the simulation of cross-layer protocols that take advantage of information from the other application layers.
An example of data exchange arises with mobile scenarios. Ns-2 supports node mobility, but natively only with predetermined or random movement. There exist, however, many applications, such as mobile robots, search-and-rescue, exploration, tracking and control (see Section 4.7.1), or collaborating robots (see Section 4.7.2), where the control system or application determines the node movement at run time, e.g. [P4] and [P5]. In these cases the controlled node positions must be updated from the dynamic simulation to the network simulator. The updated node positions are then used in the network simulation and affect, for instance, the received signal strength; changes in the network topology may further initiate re-routing.
The data exchange mechanism is used in this thesis to update the node positions in the simulations in Sections 4.7.1 and 4.7.2, where the simulated movement
of a node is updated from Simulink to ns-2 [130]. The node ID and x-y position are transmitted at a user specified time-interval to ns-2 by a ready-made block (see Section 4.4.1), which updates the node position in ns-2, as illustrated in Figure 21. Besides position information, other data updates are also possible, both from Simulink to ns-2 and vice versa.
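A position update of this kind can be sketched as a small datagram plus its handler. The layout below (16-bit node ID followed by x and y as 32-bit floats) is invented for the illustration; the actual PiccSIM payload format is not specified here.

```python
import struct

# Hypothetical layout of a dynamic position-update datagram from Simulink
# to ns-2: node ID + x-y coordinates.
def pack_position(node_id, x, y):
    return struct.pack("!Hff", node_id, x, y)

def apply_position(payload, node_positions):
    """What the ns-2 side does conceptually: update the node's coordinates,
    which then feed the propagation model and may trigger re-routing."""
    node_id, x, y = struct.unpack("!Hff", payload)
    node_positions[node_id] = (x, y)
    return node_id

positions = {}
apply_position(pack_position(7, 12.5, 3.25), positions)
```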
4.3.3. Simulation Clock Synchronization To generate correct simulation results, the integrated simulators must be synchronized in time. This is accomplished by the data exchange mechanism and a new scheduler for ns-2 [P3]. Previously both simulations were run in real time, enabling control of a real plant over a simulated network. This feature is still available with the xPC Target simulation. Running in real time, however, cast doubt on the correctness of the whole simulation, since accurate synchronization could not be guaranteed. Using time-synchronization, slightly different results are obtained compared to running in real time, indicating that synchronization makes a difference. Other reported work suggests that the real-time simulation of ns-2 is not accurate, due to simulation clock inaccuracies and scheduling problems [101].
The benefit of time-synchronization between the simulators is that the simulations do not need to be run in real time, so the simulation takes less time. The results are also more accurate, because synchronization ensures that both Simulink and the network simulator are at the same time-instant. This furthermore removes the minute time-delay caused by the LAN connecting the Simulink and ns-2 computers.
The time-synchronization scheme is built upon the data exchange framework presented in the previous subsection and works as depicted in Figure 22: Simulink sends ns-2 a packet, which contains the current simulation time. Then ns-2 simulates the network up to that time, replies to Simulink, and waits for a new synchronization packet. Upon receiving the reply from ns-2, Simulink will advance one time-step in the simulation and send a new synchronization packet to ns-2. Communication packet and data exchange are performed before the synchronization and clock advancement. The accuracy of the integration is that of the Simulink time-step, such that packets returned by ns-2 are received by Simulink on the next time-step. The accuracy can be improved by decreasing the maximum time-step of the Simulink solver.
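The lockstep handshake just described can be sketched as follows. Simulink owns the clock and advances one step at a time; the network side catches up to each announced time before the next step is taken. The event representation and function names are hypothetical; the point is that a packet released by the network at time t is seen by the control side at its next discrete step.

```python
# Minimal sketch of the lockstep co-simulation loop (names are illustrative).
def ns2_catch_up(sync_time, pending):
    """Process all queued network events with timestamps <= sync_time."""
    done = [t for t in pending if t <= sync_time]
    pending = [t for t in pending if t > sync_time]
    return done, pending

def cosimulate(t_end, step, pending):
    t, delivered = 0.0, []
    while t <= t_end:
        done, pending = ns2_catch_up(t, pending)   # 1. sync packet to ns-2
        # 2. ns-2 replies; packets it released are seen at this Simulink step
        delivered += [(round(t, 10), e) for e in done]
        t = round(t + step, 10)                    # 3. advance one time-step
    return delivered

# Packets leaving the network at 0.013 s and 0.028 s are delivered at the
# next Simulink steps, 0.02 s and 0.03 s, illustrating the quantization.
log = cosimulate(0.05, 0.01, [0.013, 0.028])
```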
External time-synchronization of ns-2 from Simulink is enabled by modifying the ns-2 scheduler. Enabling the time-synchronization mechanism needs minor changes in the ns-2 configuration script and in the Simulink model. The Toolchain automatically generates the additional TCL code and the user must add a ready-made block called "Synchronize with ns-2", which is included in the PiccSIM library, into the Simulink model.
Figure 22. Simulink and ns-2 simulation time-synchronization messaging. Exchange of packets to and from the simulated network is shown with dashed arrows.
4.3.4. Other Implemented Features Some other special features implemented in PiccSIM and used in this thesis are worth mentioning.
Often it is desired to compare the obtained WNCS simulation results with the case of a perfect network, to evaluate the degradation in control performance due to the network. For this purpose a super-network feature is added to ns-2, where the network simulator returns the packet immediately to Simulink without injecting it into the network simulation model. Using the super-network feature, the same PiccSIM simulation model can easily be simulated with or without a network simply by toggling the option.
In order to study the effects of different parameter values, a batch run feature is developed in PiccSIM. Through user defined scripts, any value in the simulation, either on the network or the control side, can be varied, and several simulations performed automatically. The specified results of the simulations are stored in vectors for analysis. This tool allows for an easy survey of the impact of different parameters, be it controller parameters or network properties. The batch run feature is used in the simulations presented in Sections 4.7.4 and 5.4.
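The batch run idea can be sketched as a parameter sweep that collects one result per setting. The plant and cost function below are toy stand-ins (an integrator under proportional control, scored by its step-response ISE), not the thesis models; only the sweep-and-store pattern corresponds to the feature described above.

```python
# Sketch of a batch run: a user-defined script varies one value (here a
# controller gain), runs the simulation for each setting, and stores the
# specified result in a vector for analysis.
def step_response_ise(kp, steps=500, dt=0.01):
    """ISE of a unit step response for an integrator plant under P control."""
    x, ise = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        ise += dt * e * e
        x += dt * kp * e          # plant: x' = u, with u = kp * e
    return ise

gains = [0.5, 1.0, 2.0, 5.0]
results = [step_response_ise(kp) for kp in gains]   # stored for analysis
```

For this toy loop the ISE falls as the gain rises (analytically it approaches 1/(2*kp)), so the sweep immediately shows the parameter's impact.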
The radio environment models in ns-2 are quite simple [118] and thus two additional propagation models are implemented in ns-2. For indoor simulations, an indoor fading model is integrated into ns-2, and for simulation of any environment, a data-based packet drop model is made. The indoor propagation
model extension to ns-2 takes into account the shadowing from walls. This extension makes indoor simulations more realistic than the default ns-2 propagation models, since the attenuation of the walls of a real building is taken into account [P3]. The extension of ns-2 reads a file containing signal attenuation values for every node pair in the network. The values are calculated with a multi-wall model, as explained below. The extension also allows one to use other attenuation values, for example real measurements from a factory. A similar model is presented in [29], where the signal propagation is modeled based on a blueprint and the attenuation model is integrated into the OMNeT++ network simulator. Signal strength measurements from the site can be integrated into the model.
The implemented multi-wall model takes into account the walls located directly between the transmitter and the receiver and allows for individual wall material properties. The wall attenuation values are selected based on statistical results from measurements. The selection of a proper path loss exponent, which determines how much the signal fades with distance, is crucial, because the value is highly dependent on the type of building or the structure of the indoor environment. In an office environment with walls and furniture, the value is usually between 3 and 6 [P3].
The calculation of wall attenuations requires a description of the indoor scenario. The simulator takes a simplified grayscale picture of the environment, the building blueprint, as an input. This picture portrays different wall materials with different colors. The corresponding attenuation factors are defined for each color in a table. The simulator calculates the attenuation of the transmitted power at every pixel in the given blueprint. The losses due to walls are added to the overall path loss, which can be given by any of the default ns-2 propagation models. An example is illustrated in Figure 23, where the coverage prediction of the simulator and the error compared to real measurements are shown. About 75 % of the simulator values differ from the real values by less than 7.5 dB.
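A toy version of the multi-wall calculation is sketched below: walk the straight line between transmitter and receiver through the blueprint grid and add an attenuation term each time a wall of a given material is entered. The material table and geometry are invented for the example; in the full model these wall losses are added on top of a distance-dependent path loss of the usual form PL(d) = PL(d0) + 10*n*log10(d/d0), with n the path loss exponent (3 to 6 for offices, per the text).

```python
# Illustrative material table: pixel value -> wall attenuation in dB
# (0 = free space). Values are invented for the sketch.
WALL_LOSS_DB = {0: 0.0, 1: 3.5, 2: 6.8}

def wall_loss(blueprint, tx, ty, rx, ry, samples=200):
    """Sum wall attenuation along the line (tx, ty) -> (rx, ry), counting
    each contiguous wall segment once (a simplified multi-wall model)."""
    loss, prev = 0.0, 0
    for i in range(samples + 1):
        f = i / samples
        px = int(round(tx + f * (rx - tx)))
        py = int(round(ty + f * (ry - ty)))
        mat = blueprint[py][px]
        if mat != 0 and mat != prev:   # entering a new wall segment
            loss += WALL_LOSS_DB[mat]
        prev = mat
    return loss

# One wall of material 1 between the nodes adds 3.5 dB to the path loss.
blueprint = [[0, 0, 1, 0, 0] for _ in range(5)]
extra_loss = wall_loss(blueprint, 0, 2, 4, 2)
```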
The wall propagation model is based on a blueprint or measured signal attenuation values. It represents the average signal conditions and does not capture the actual packet drop dynamics. To better model the communication channel, a Gilbert-Elliott model (Section 3.2) can be used. A Gilbert-Elliott model is implemented in ns-2, with a separate model for every link pair. G-E models based on measured data can be specified for every link pair, and ns-2 will simulate the packet drops according to the Gilbert-Elliott model.
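The Gilbert-Elliott process can be sketched as a two-state Markov chain: a Good and a Bad state with transition probabilities p_gb (Good to Bad) and p_bg (Bad to Good), each state having its own packet drop probability. The parameter names below are illustrative; in the thesis the per-link models are identified from measurements (Section 3.2).

```python
import random

def gilbert_elliott(n, p_gb, p_bg, drop_good, drop_bad, seed=0):
    """Generate a length-n packet-loss trace from a two-state Markov chain
    (True = packet dropped)."""
    rng = random.Random(seed)
    good = True
    trace = []
    for _ in range(n):
        drop_p = drop_good if good else drop_bad
        trace.append(rng.random() < drop_p)
        if rng.random() < (p_gb if good else p_bg):
            good = not good                        # state transition
    return trace

# Degenerate parameters make the behaviour easy to check: never leaving the
# Good state with a zero drop rate yields a loss-free trace.
clean = gilbert_elliott(50, 0.0, 0.5, 0.0, 1.0)
```

The burstiness of real links comes from making drop_bad much larger than drop_good, so losses cluster while the chain sits in the Bad state.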
In the building automation simulation in Section 4.7.3, both the wall propagation and the Gilbert-Elliott packet drop models have been used. The measurement procedure and the obtained G-E models are shown in Sections 3.1 and 3.2, respectively. In this thesis only the simulation results with the G-E model are presented, as the simulation results are very similar with either model.
Figure 23. Coverage prediction (left) and error compared to measurements of the real building (right) in dB.
4.4. PiccSIM Toolchain The PiccSIM Toolchain ties the design, implementation and simulation of WNCSs into one integrated package, where all the functionalities of the PiccSIM simulator are available through a graphical user interface. It combines several tools for designing, simulating and implementing wireless control systems. PiccSIM is unique among the WNCS simulators, because it enables the design and simulation of wireless control systems in one framework. There are other similar NCS simulators, see Section 4.2, [5], [31], [62] and references therein, but they do not, for instance, provide any control design support.
The designed and simulated controllers or any generic algorithms can further be implemented on actual wireless nodes with the automatic code generation tool, as explained in Section 4.6. Thus, any distributed application can first be designed and simulated in PiccSIM, and later tested on real hardware, without extra programming work.
The Toolchain runs as a Matlab GUI. It consists of tools for generating the ns‐2 configuration file with the GUI, automatically adjusting controller parameters (through tuning rules or algorithms), identifying process transfer functions and automatic generation of embedded code for wireless nodes. Next the Toolchain architecture is presented, followed by the different network and control system design tools.
With the Toolchain, both the network and the control simulators are managed, by starting and stopping them at the same time with a button click. This hides the complex networked control system co‐simulation behind one GUI, leaving the user the full capability to specify the simulation model.
4.4.1. PiccSIM Block Library The PiccSIM library, shown in Figure 24, is a set of Simulink blocks that add wireless communication capabilities to any Simulink model, for example to construct a networked control loop. The communication over the ns-2 simulated network is handled by the ready-made node blocks, such that the user need not pay attention to the integration of Simulink and ns-2. The library contains Controller and Process blocks to create control loops and wireless node blocks to create wireless communications between parts of the control loop. The Controller and Process blocks are replaced by the correct implementation by the PiccSIM Toolchain. However, they can be edited to allow custom implementation of the controller or process models. [P3]

Figure 24. PiccSIM Toolchain library of Simulink blocks. Blocks for sending and receiving over a network; controller, process model and generic node blocks; additional blocks for displaying signals in a scope and for simulator time-synchronization; data exchange blocks; blocks implementing radio triggered logic; a collection of utility blocks and controller implementations for PiccSIM.

Constructing a wireless control loop with the PiccSIM library blocks is easy. In Figure 25 a simple example of a control loop with wireless measurements is
shown. Before simulation, the network nodes need to be configured with the source and destination IDs of the communicating nodes, and the data types and dimensions of the signals, using the dialog shown in Figure 26. The blocks support both event based and periodic transmission. The conversion to UDP packets and back to Simulink signals is done in the network node blocks. Since the information transmitted over the network is actually included in the packet, the choice of data types and information sent is reflected in the packet size and the network simulation results, as it would be in a real system. The output of the received signals is held until a new packet arrives. Whether a new packet has arrived can be determined by observing the timestamp port. [P2]
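The hold-until-new-packet behaviour of the receive blocks can be sketched as a tiny zero-order-hold with a timestamp output. This is only a conceptual stand-in for the Simulink block, not its actual implementation.

```python
class ReceiveBlock:
    """Sketch of the receive behaviour described above: the output holds the
    last received value; the timestamp output lets the model detect whether
    a new packet has arrived."""
    def __init__(self, initial=0.0):
        self.value, self.timestamp = initial, None

    def step(self, now, packet=None):
        if packet is not None:             # new packet: update held output
            self.value, self.timestamp = packet, now
        return self.value, self.timestamp  # held until the next packet

rx_block = ReceiveBlock()
held0 = rx_block.step(0.0)                 # nothing received yet
held1 = rx_block.step(0.1, packet=4.2)     # packet arrives at t = 0.1
held2 = rx_block.step(0.2)                 # output is held, timestamp unchanged
```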
The library includes, for the remote user interface, a block for logging Simulink signals into the database for later retrieval and displaying the signals in a scope in real‐time during simulation [P1]. The library also has a block for sending dynamic data to ns‐2 as explained in Section 4.3.2, which is used for the mobile node simulations in Sections 4.7.1 and 4.7.2 [P3].
The PiccSIM library contains a generic node block, which is used in automatic code generation to create an implementation of any algorithm defined with Simulink blocks and execute the same algorithm on real node hardware. This block can implement any computational algorithm, whereas the Controller and Process blocks are specialized for control systems. Naturally the limited memory and computational resources of the hardware set a constraint on the implementable algorithms. The generic node block supports reading from and writing to analog inputs and outputs and communicating user defined signals over the radio. The generic node block and the automatic code generation are explained in more detail in Section 4.6. [P3]

4.4.2. Toolchain User Interfaces The main graphical user interface of the Toolchain is shown in Figure 27. In the GUI, the Simulink model and the TCL script for the network are selected, and the controls for running simulations are available. The GUI provides access to the PiccSIM block library and the other design and simulation tools, such as network setup, controller tuning, and code generation. [P3]

In the network settings window shown in Figure 28, configuration scripts for the network simulator ns-2 are created with a user-friendly GUI, where the user can specify the settings of the ns-2 network simulator, including the node positions, network protocols and simulation parameters, and PiccSIM related simulation settings. The settings include the propagation model, routing and MAC protocols, node movement pattern, node connection pattern, etc. It is also possible to create additional simulated traffic. The generated script can also be edited by hand for custom simulation settings. The network specification script
Figure 25. Example of a simple wireless control loop with controller, process, and blocks for transmitting and receiving the wireless process output data.

Figure 26. Node communication block configuration dialog for specifying properties, such as packet payload data types and the node to communicate with.
is automatically loaded to the ns‐2 computer before each simulation, so that the current network configuration is used.
With the controller tuning tool shown in Figure 29, the controllers are designed for the respective processes. Processes are modeled as transfer functions with delays, or, if more complex processes are needed, any custom process can be created using Simulink blocks. The supported controller types are mainly of PI, PD or PID type and they can be tuned automatically using several tuning methods presented in [47], which are suitable for networked control systems. One of the implemented tuning rules is the jitter margin based PID controller tuning rule described in Section 2.6.1. If other types of controllers are needed, the controller block can be customized manually.
Figure 27. The Toolchain main graphical user interface window gives access to the control and network models, and management of the simulation.
Figure 28. Network settings window showing node locations.
Figure 29. Controller tuning window with automatic tuning methods based on a process model suitable for wireless control systems.
4.5. Remote User Interfaces A remote, virtual, or web laboratory is a system which enables the user to run a laboratory experiment and view the results remotely, for instance with a web browser. The main use of a remote laboratory is educational: to enable students to run laboratory experiments from home. Remote laboratories are used in education to enable flexible hands-on experience and resource sharing. An example of an often used laboratory process is the inverted pendulum [138]. For a review of the history, role, objectives, benefits and impact of educational remote laboratories, see [49].
There are a large number of remote laboratories developed at universities around the world. The field is still developing, there are not yet any standards for remote laboratories, and every lab is implemented in a different way [58]. No survey of remote laboratories is attempted here, since it would be impossible to cover them adequately. A survey of remote laboratories and the technologies used can be found in [58], where the future challenges and development objectives are also listed.
The remote user interface developed in this thesis is the MoCoNet platform. A similar remote laboratory setup is reported in [121]. The MoCoNet remote user interface is a Java applet that runs in a Java enabled browser. The remote interface allows students to select controllers, adjust tuning parameters, and simulate and run the process. The web interface is shown in Figure 30 and the scope for viewing the results of a run is shown in Figure 31. The scope displays a simulation run of the wireless control system shown in Figure 25. Additional traffic is simulated in the middle of the run at t = 20 - 40 s. Disturbances due to the extra traffic are seen in the control signals. The signals are plotted in real time and stored in a database for later retrieval. Previous experiments are saved in a database by the MoCoNet server and the user can load these for later inspection. [P1]

The MoCoNet interface is extended to the PiccSIM Researcher's remote interface to enable remote usage of PiccSIM. The remote researcher's interface is implemented as a special version of the MoCoNet interface, with options suitable for research work on PiccSIM. The PiccSIM Researcher's Interface allows other researchers to upload custom simulation models (Simulink model and ns-2 TCL script), and to run simulations on PiccSIM, thus serving a larger group of researchers for WNCS simulations.
Figure 30. MoCoNet – the PiccSIM remote user interface.
Figure 31. Scope for the remote user interface, showing control and output response of the process, and communication delay. Additional traffic is simulated between 20 and 40 seconds.
4.6. Automatic Code Generation and Implementation
For implementing the controller or generic node algorithms on actual wireless node hardware, the PiccSIM Toolchain automatically converts the algorithms in the Simulink model to C code. This allows one to test the designed and simulated system on real hardware, without the laborious and error-prone task of implementing the same algorithms manually on the target platform. With the code generation capabilities it is relatively easy to compare simulation results with the real performance, as no extra coding effort is needed. A comparison between the simulated and real performance has, however, not yet been done.
The code is generated with the Matlab Target Language Compiler (TLC) and Real‐Time Workshop Toolbox. The TLC constructs the code according to a TLC template with code instructions for the Simulink blocks. The generated code of the block contents is combined with a TLC generated main file, containing the framework for running the algorithm on the nodes. The wrapper code executes the computation task, reads and writes to the inputs and outputs of the node hardware, and takes care of the transmission and reception of packets.
The main code template is currently compatible with Sensinode Micro U100 series [142] wireless nodes, but it can be easily modified for different hardware. The complete code is compiled and programmed, on a UNIX operating system or with Cygwin in Windows, to the node with the operating system FreeRTOS [52] and communication stack NanoStack-1.0.3, developed by Sensinode. The Sensinode nodes communicate with each other using an IEEE 802.15.4 based radio. The same radio model is available and used in ns-2 for simulations.

The generic node block supports reading and writing from/to analog interfaces and sending and receiving packets with the radio. With the code generation tool, hardware specific options related to input/output voltages and resolutions can be specified. The transmitted packet format and data types are the same as the ones used in the simulation model with the simulated network. An application modeled in Simulink can thus be automatically implemented on sensor node hardware. This not only enables testing control applications, but also numerous sensor network applications. The automatic code generation feature is demonstrated with two examples in Section 4.7.5.

Not all models built with Simulink can be compiled to run on the node hardware, due to various restrictions of Matlab and the PiccSIM Toolchain. The main restrictions are: code generation is only available for the blocks specified by the Real-Time Workshop Toolbox; the algorithms are limited by the memory capabilities of the hardware; receiving and transmission of only one type of packet is possible; and the order of the phases is fixed to: receive, then computations and I/O, and finally transmit.
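The fixed phase order of the generated node code can be sketched as one execution cycle: receive first, then computations and I/O, and transmit last. All callables below are placeholders standing in for the generated block code; the real wrapper is generated C, so this is only a conceptual outline.

```python
# One cycle of the generated node code, in the fixed order described above.
def node_cycle(receive, read_ad, compute, write_da, transmit, state):
    rx = receive()                        # phase 1: receive packet (if any)
    u = read_ad()                         # phase 2: analog input ...
    state, y, tx = compute(state, rx, u)  # ... computation ...
    write_da(y)                           # ... analog output
    transmit(tx)                          # phase 3: transmit packet
    return state

# Record the call order with throwaway stand-ins for the generated blocks.
order = []
final_state = node_cycle(
    lambda: order.append("rx") or 1,
    lambda: order.append("ad") or 2,
    lambda s, rx, u: order.append("comp") or (s + 1, rx + u, None),
    lambda y: order.append("da"),
    lambda tx: order.append("tx"),
    state=0,
)
```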
Figure 32. Window for automatic code generation.
4.7. Simulation Case Studies In this section some PiccSIM simulation studies are presented. The simulations have been used to develop PiccSIM by testing different simulation scenarios, and to obtain a better insight into the behavior of WNCSs. These simulations show the capabilities of PiccSIM and they also provide an understanding of how several networked control loops interact. Although all cases are control system simulations, other distributed computation applications, for example consensus algorithms or sensor network applications, can also be simulated. The IEEE 802.15.4 radio is always used in the PiccSIM simulations here and in Chapter 5.
The first two scenarios are related to mobile robots, either a single robot or a squad of robots. They emphasize the packet routing performance, as the network topology changes when the mobile nodes move. The first simulation is a comparison of different routing protocols, whereas the second one also compares different control structures: the jitter margin tuned PID controller versus a Kalman filter and a conventional PID controller. In Section 4.7.3, a heating and ventilation case of an office, based on wireless measurements, is simulated with the indoor models presented in Sections 3.2 and 4.3.4. Then, in Section 4.7.4 an industrial
case is simulated, where a crane in a hall is controlled over a wireless network and the impact of the network QoS on the control performance is shown in a more realistic case than in Section 3.5.2. Finally, the automatic code generation feature is demonstrated with two simple control cases. One is a laboratory‐scale heated airflow process and the other is a more demanding trolley crane anti‐swing control, which requires real‐time operation. The final implementation cases are used to show how easy it is to develop wireless control applications aided with the PiccSIM design, simulation, and automatic code generation tools.
4.7.1. Target Tracking Scenario The target tracking scenario considers a grid of nodes forming a static sensor network and a mobile node or robot. The sensor network serves as an infrastructure network for transmitting measurement and control signals from/to the mobile node and providing a localization service. The objective for a centralized controller, located at an edge of the infrastructure grid, is to control the mobile node along a predefined track. On the control side a Kalman filter is used for filtering the mobile node position and predicting the position if the information is not available due to packet drops. A PID controller is then used to control the mobile node. The control signal is routed to the mobile robot, which applies the acceleration command. If no control packet is received for three consecutive sampling intervals, an automatic stop mechanism is triggered. [P4]
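The automatic stop mechanism can be sketched as a small watchdog around the actuation: hold the last control value while packets are missing, and command a stop once the miss count reaches the limit. The class and its interface are invented for the illustration; only the three-interval rule comes from the text.

```python
class AutoStop:
    """Safety stop: if no control packet arrives for `limit` consecutive
    sampling intervals, command zero acceleration (limit = 3 in the text)."""
    def __init__(self, limit=3):
        self.limit = limit
        self.missed = 0
        self.last_u = 0.0

    def step(self, u=None):
        """Call once per sampling interval; u is None if no packet arrived."""
        if u is None:
            self.missed += 1
        else:
            self.missed = 0
            self.last_u = u
        return 0.0 if self.missed >= self.limit else self.last_u

stop = AutoStop(limit=3)
u1 = stop.step(1.0)    # packet received: apply it
u2 = stop.step(None)   # 1 missed interval: hold last command
u3 = stop.step(None)   # 2 missed intervals: still holding
u4 = stop.step(None)   # 3 missed intervals: automatic stop
```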
The issue under investigation on the network side is whether singlepath or multipath routing is more advantageous in mobile scenarios. A similar scenario has been investigated with the TrueTime simulator, where the controller, robot hardware and communication protocol are implemented in Simulink [182]. An application using Kalman filtering for target tracking is presented in [113].

Nearby infrastructure nodes can measure their distance to the mobile node, for example by using ultrasound. The distances are transmitted to the controller. Using at least three distance measurements, the controller can determine the position of the mobile node by triangulation. By simulation it is noted that the requirement to receive three measurements from the same sampling interval is not always fulfilled. Hence the controller has to use data from older sampling instants for which more measurements have arrived. The time delay caused by intermittent data is plotted in Figure 33. Notice that in this case the delay jitter is not caused by varying communication time in the network, but by the available information at the controller. A Kalman filter capable of fusing measurements with varying time-delays (see [P7], but in this case without the delay estimation part) is applied to estimate the current robot location and filter the localization noise.
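The position fix from three distance measurements can be sketched with the standard linearized form of the circle equations (often called trilateration): subtracting the equations pairwise removes the quadratic terms and leaves a 2x2 linear system. The anchor coordinates below are invented for the example; this is a generic textbook construction, not the thesis implementation.

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Position from three anchors and measured distances; exact for
    noise-free data, undefined if the anchors are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields a linear 2x2 system.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # zero for collinear anchors
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Mobile node at (3, 4) measured from anchors at (0, 0), (10, 0), (0, 10).
est_x, est_y = trilaterate((0, 0), (10, 0), (0, 10),
                           5.0, math.sqrt(65), math.sqrt(45))
```

With noisy distances the same linear system is typically solved in a least-squares sense over more than three anchors, which is one reason the Kalman filter is used on top of the raw fixes.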
Table 2. Network and control performance metrics from the target tracking case.

        Average delay [s]   Routing overhead [%]   Packet loss [%]   Control cost (ISE)
AODV    0.08                8.1                    23                18
LMNR    0.001               0.5                    10                8.6
Figure 33. Number of distance measurements used in position estimation for target localization, and the induced delay when the number of available measurements is less than the required minimum of three.
An IEEE 802.11 network is used. A comparison between a singlepath routing protocol, specifically Ad hoc On-demand Distance Vector (AODV) [126], and a multipath extension of AODV called Localized Multiple Next-hop Routing (LMNR) [114], is displayed in Figure 34, where the paths of the remotely controlled robot are shown. The numerical results are in Table 2.
The simulation results show that the multipath routing protocol has lower communication delay, routing overhead and packet losses than the singlepath routing protocol. There are additionally shorter communication outages and the number of automatic stop instances of the mobile node is low, whereas using singlepath takes a considerable time before a new path is established. An example of this can be seen in the upper left corner of the trajectory in Figure 34. This simulation shows that multipath routing is advantageous in some mobile scenarios, since it can quickly switch to a backup route (see the next section for a counter-example). The control is satisfactory with both routing protocols during normal operation, as the network performance between the routing outages is the same.

The target position is time-dependent, with a total time for the whole trajectory of 250 seconds. Thus, when a communication break occurs, the mobile node is left behind. When the communication is restored, the controller moves the node straight to the current target position, as seen to the left in Figure 34. If the target position is paused when there is a communication break, the time to traverse the whole trajectory is 280 seconds for AODV and 270 seconds for LMNR, resulting in a similar conclusion about the routing protocol performance.
Figure 34. Mobile node trajectory control with singlepath and multipath routing (reference trajectory, singlepath routing, multipath routing, no contact). Triangles indicate communication outages.
4.7.2. Robot Squad with Formation Changes The target tracking scenario presented in the previous section is extended to a squad of 25 wireless mobile robots. The squad consists of a leader, which controls the movement of the rest of the group. The target scenario is an exploration or search-and-rescue type of situation, where the squad moves in different formations, depending on the environment or the requirements of the task. Several formation changes are done, causing changes in the network topology. In this case the control and network interactions are clear: the controlled mobility changes the network topology, which causes rerouting; vice versa, the rerouting performance determines the network availability for control. The main objective is to evaluate routing protocols and control architectures in a scenario with harsh network conditions. This case has been studied in [130] and [P5].
Compared to the previous simulations, the infrastructure nodes are removed and each robot can localize itself, for example using GPS or inertial measurements. The robots send their position information to the leader robot. The leader then calculates the desired path and sends the control signals, taking into account collisions and the final desired formation. The position controller for each mobile robot is a discrete-time PID controller. Several control structures are compared: a network aware PID controller tuned with the jitter margin method ((27) in Section 2.5, with δmax = 3h); a conventional PID controller (for comparison purposes tuned for no jitter margin, δmax = 0); and a Kalman filter used as a state-estimator with a non-network aware PID controller (δmax = 0), or a conventionally tuned controller. For comparison, the same tuning method is used for all the controllers, either taking into account the delay jitter of the network or not. The controller with zero jitter margin has higher controller gains and should give a better performance, but it is less robust to delay variation than the jitter margin tuned controller. The control performance is calculated with an integral of square error (31) between the desired and actual location, summed over all the mobile robots.
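The squad-level cost just described can be written as a discrete-time sum. This is only a sketch of the metric's shape (squared position error, integrated over time and summed over robots); the exact form is given by (31) in the thesis.

```python
# Discrete-time stand-in for the ISE cost summed over all mobile robots.
# desired and actual are lists (one per robot) of (x, y) sample sequences.
def squad_ise(desired, actual, dt):
    total = 0.0
    for des_traj, act_traj in zip(desired, actual):          # per robot
        for (dx, dy), (ax, ay) in zip(des_traj, act_traj):   # per sample
            total += ((dx - ax) ** 2 + (dy - ay) ** 2) * dt
    return total

# One robot, two samples, 1 m error at the second sample, dt = 0.5 s.
cost = squad_ise([[(0, 0), (1, 0)]], [[(0, 0), (0, 0)]], dt=0.5)
```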
Simulations of four formation changes, shown in Figure 35, of a squad of 25 robots are done. The differences on the network layer between singlepath and multipath routing, and on the control layer between the different controller structures, are investigated. The results are compared to the case with no mobility, where the nodes in the network do not actually move, and to the case without a network, that is, control with perfect communication. The network results are in Table 3 and all the control results (ISE cost function (31) and time to reach the final formation) are collected into Table 4. The ISE cost is only calculated for the part without an outage, because the error during an outage would otherwise dominate the total cost and only correlate with the performance of the routing protocol.

Contrary to the previous case in Section 4.7.1, using singlepath routing is slightly more advantageous than multipath routing. The reason is that in multipath routing more link breaks occur in high mobility scenarios, and switching to backup links, which may also be close to breaking, is frequent. Singlepath routing takes longer to find a new route, but the links seem to hold longer, hence the differences in the NCC between the singlepath and multipath protocols. Since the routing is under heavy load, with frequent route breaks, a better performance might be achievable by flooding. [P5]
Table 3. Network performance metrics from the robot squad simulations using a jitter margin PID tuned controller.

              Avg. delay [s]  Routing overhead [%]  Packet loss [%]  NCC (82)  Packet drop fairness (85)
No mobility   0.009           0.8                   0.1              1.6       2
Singlepath    0.015           3.2                   30               1330      862
Multipath     0.09            11.2                  20               381       398
Figure 35. Formation changes. The leader indicated by “L”. All robots start initially from almost the same location in the center at coordinates (30, 30).
Table 4. Control cost (ISE) and extra time to reach formation with different control and network configurations.

                        No jitter PID       Jitter PID          Kalman filter + PID
                        Cost    Time [s]    Cost    Time [s]    Cost    Time [s]
Perfect communication   0.1     0           1.0     0           0.3     0
No mobility             3.3     2.3         61.5    10.5        1.4     0
AODV                    2.7     84          2.9     15          1.9     5
LMNR                    3.3     2.8         4.5     40          26.5    2.0
Using a state estimator with a conventional PID controller leads to a better control performance than using the network delay jitter tuned PID controller. State estimation, however, requires more computation. The non-network aware controller has low control cost values, but there is a risk of it being unstable, contrary to the Kalman filter plus PID alternative, even if they have the same tuning. Without a network, the jitter margin controller is conservative compared to the other control structures. Contrary to expectation, when taking the network into account, the more conservative controller performs relatively better: introducing the network has only a small effect on its control performance. This is a general observation: with a higher jitter margin the control is more conservative, but also more robust to the adverse effects of the network, yielding graceful degradation.
The results can alternatively be compared by the time to reach the desired formation, listed in Table 4. Using the KF and PID controller is better than the jitter margin tuned PID, and the non-network aware controller fares the worst. This shows that it is more advantageous to use a network aware control structure, even if the pure performance metrics may be worse.
The robot squad scenario is furthermore evaluated with different sampling intervals or packet rates and using prioritization based on packet forward count. Using longer sampling intervals improves the network performance, but degrades the control results, thus there is a trade‐off between network and control performance [93]. Prioritization equalizes the network QoS between control loops and yields better overall control, similarly as in Section 4.7.3, where load balancing between several access points is used [P5].
4.7.3. Building Automation Scenario
The Building Automation case is a heating, ventilation and air conditioning (HVAC) scenario. The office of the Control Engineering group at the Aalto University, Department of Automation and Systems Technology, is used as a test case for wireless HVAC system simulations, similar to the cases studied in [76]. The layout of the office is shown in Figure 36, with a total of 39 rooms.

The temperature and CO2 concentration of the rooms, which depend on the occupancy of the room, are modeled using first principles [P4]. The network is a wireless IEEE 802.15.4 network [76], as it is suitable for building automation, using the AODV routing protocol [126]. Both the wall propagation model presented in Section 4.3 [P4] and the identified packet drop model based on real measurements, as described in Sections 3.1-3.2 and [P6], are tested. Here, only the results using the measured packet model presented in Section 3.2 are shown, as the results are very similar. The measurements from the office prototype locations are generalized to the whole building, as the rooms are more or less identical. The paths between the nodes in the building are categorized according to the six prototype locations. As there are eight different path measurements for one prototype location, one of the path models is randomly selected for the node pair in the simulation model. Thus, spatial variation between similar links in the office is obtained.

Figure 36. Layout of the office in the building automation case. Node positions and wall materials indicated.
Wireless sensors in each room measure the temperature and CO2 concentration. This information, along with the desired temperature (set by the occupant) and the status of the lights, is sent to the central controller at the access point. Additionally, presence event messages are sent to the command center when people enter or exit a room, which turns the lights on or off. The central control system coordinates the heating and ventilation of the individual rooms based on the wireless measurements. The local heating/cooling and ventilation commands are transmitted back to the rooms.

The wireless network deals with both time and event-triggered messaging, all communicated through the wireless gateway. This communication topology is similar to WirelessHART, where all the data is routed through the gateway. The centralized control architecture is justified, since it provides better capabilities for applying globally optimal control schemes. Because of the quantity of nodes, multiple hops, the radio environment, and the random access MAC, there are packet drops, which impair the control result.

An appropriate sampling interval cannot be easily calculated in advance, since the throughput depends on the specific network, protocols, topology, and application-generated traffic. In this case a sampling interval of h = 30 s with data quantization turned out in simulations to be the shortest obtainable without causing congestion. The average packet drop turned out to be 18 %, mainly because of the channel conditions and multihop communication. Hence the controllers need to be tuned to tolerate gaps in the measurements.

The end-to-end delay is in this scenario on average 0.14 s, considerably smaller than the sampling interval. Thus, only packet drop and outage lengths need to be considered in the control design. PID controllers for the heating control are tuned with the extended plant approach [47], where the controller parameters are determined partially based on the desired jitter margin δmax, which is set to the temporal length of two consecutively dropped packets (δmax = 2h = 60 s). Conventional tuning methods are not applicable, since they fail to guarantee stability under lost measurements.

Examples of the simulation results are shown in Figure 37, where the temperature of one room is shown. The results are given for a PID controller tuned to be stable with either one or two consecutive packet losses. The response with the controller with the larger jitter margin is slower, but it is conversely less prone to oscillations during packet drops.

The packet drop and the network cost for control (82) for each room are shown in Figure 38a,b. The QoS is worse for the nodes multiple hops away from the access point. The control performance is evaluated with the ISE cost criterion (31) with respect to the desired temperature. The increase in control cost compared to the case without a network (no packet drops) can be seen in Figure 39a,b, which reflects the different QoS conditions. Evidently the performance of the control system depends on the network QoS, such that the control performance of the far away nodes is limited by the network.
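The hold-last-measurement behaviour that such a tuning must tolerate can be illustrated with a small sketch; the first-order room model and the PI gains below are hypothetical stand-ins, not the extended-plant tuning of [47]:

```python
import random

# Sketch: a PI heating loop with h = 30 s that holds the last received
# measurement over dropped packets. The room model and gains are
# illustrative assumptions, not the tuning of [P6].

def run(p_drop, steps=200, seed=1):
    rng = random.Random(seed)
    h, tau, gain = 30.0, 600.0, 0.002     # sample time, room time constant, heater gain
    kp, ki = 500.0, 5.0                   # PI gains (illustrative)
    temp, ref, integ, last_meas = 20.0, 22.0, 0.0, 20.0
    ise = 0.0
    for _ in range(steps):
        if rng.random() >= p_drop:        # measurement packet received
            last_meas = temp
        e = ref - last_meas               # control acts on the last sample
        integ += ki * h * e
        u = max(0.0, kp * e + integ)      # heating power [W], no cooling
        temp += (h / tau) * (20.0 - temp + gain * u)  # first-order room
        ise += h * (ref - temp) ** 2
    return temp, ise
```

Because the room dynamics are slow compared to the sampling interval, acting on a sample that is a few periods stale degrades the loop only gradually, which is the property the jitter margin tuning exploits.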
Figure 37. Simulation results of the building automation case. Temperature and heating in room 18. Comparing PID controllers tuned with different jitter margins: allowing for one packet drop (dotted) and allowing for two consecutive packet drops (solid). The number of people in the room is indicated at the top. An initial time of 20 minutes to stabilize the room temperature is not shown here.
To improve the control results, the controllers of the far away rooms could be re-tuned with a larger jitter margin. An alternative option is to add more access points and spread them out in the building. A higher bandwidth connection, such as wired Ethernet or WLAN, between the access points could then be formed. Such a hierarchical design increases the performance of the network and, hence, improves the control results.

By using a hierarchical network (Figure 38c,d), the network QoS increases significantly: the routing overhead, delay, packet drop and NCC listed in Table 5 are reduced. This results in better control and a smaller control cost, as depicted in Figure 39c, which is comparable to the case without a network, Figure 39a. [P6]
Figure 38. Packet drops and network cost for control, for individual rooms. On the left, the packet drops, and on the right the corresponding network cost for control. Top: one access point. Bottom: two access points. Boundary between the nodes belonging to the two access points indicated. [P6]

This short example shows that the design of a WNCS is not straightforward. Issues related to the drawbacks of the wireless network need to be considered in the control design. The drawbacks can be compensated by selecting proper network protocols and re-tuning the control system. The topology of the network is also worth considering. By simulating the system, one can make changes to the communication and control design and iterate before installation. Thus, PiccSIM is a valuable simulation tool to test wireless control applications.
Table 5. Building Automation simulation results.

                               One access point   Two access points
Packet drop [%]                18                 4.5
Network cost for control (82)  0.40               0.05
End-to-end delay [s]           0.14               0.075
Routing overhead [%]           2.5                0.3
Mean control cost (ISE)        0.037              0.023
Figure 39. Control cost for the building automation case. Without network (a), one access point (b), and two access points (c). The control cost reflects the network cost for control shown in Figure 38. [P6]

4.7.4. Crane Control in Industrial Hall
This case considers wireless control of a trolley crane in an industrial hall. It emphasizes the real-time requirements of wireless communication in wireless control applications. The operator gives the velocity reference for the crane with a wireless handheld device to the control system. The control messages are routed over a local wireless IEEE 802.15.4 network, installed in the hall and on the crane, to the crane control system.
The laboratory scale crane model presented in [44] is scaled up by a factor of five and used in the simulation cases. The crane control system consists of PID controllers for the trolley and hoist motors, which operate the actuators based on the velocity reference given by the operator through the wireless handheld device. An overview of the Simulink model is shown in Figure 40. For simulation purposes the operator is represented by PID controllers for the vertical and horizontal movement of the load and one for stabilizing the load swing. The load of the crane is moved according to a predefined trajectory, given as reference to the "operator controllers". The controller tuning is selected such that good performance is obtained without packet drop. There are PID tuning rules for WNCSs that could be applied, but they assume simple process models and cannot be applied to the complex and nonlinear crane model.
To assess the impact of network QoS on the control performance, simulations with different network QoS parameters are made. Several load movement trajectories are simulated with different Gilbert-Elliott (10) network model parameters. Examples of the resulting load angle swing are given in Figure 41 for different packet drop parameters. A significant increase in the oscillations is seen, depending on the packet drop distribution. With a correlated packet drop, where the probability of packet drop is 95 % given that the previous packet is dropped, the fast oscillations are significantly larger compared to the uniform packet drop distribution, even when the mean drop probabilities are the same.

Figure 40. Overview of the Simulink model for wireless control of the crane.

The resulting control performances, each averaged over ten runs, are shown in Figure 42. The control cost, the integral of squared error (31) for the load angle, is shown as a function of the packet drop probability and the network cost for control (82) in Figure 43. Considering only the packet drop does not give a good indication of the resulting control performance, whereas the NCC correlates well with the control cost. This result is general, as similar results are obtained with a simple first-order system in Section 3.5.2. There are naturally variations depending on the particular random packet drop realization.
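The uniform and correlated drop patterns compared in Figure 41 can be reproduced with a two-state Gilbert-Elliott model. This is a minimal sketch with transition probabilities chosen to match the 30 % mean drop rate and the 95 % drop-to-drop correlation of the figure, not the exact parameters of the thesis model (10):

```python
import random

# Two-state Gilbert-Elliott packet drop model: packets are dropped in
# the 'bad' state. p_gb is the good-to-bad and p_bg the bad-to-good
# transition probability; the stationary drop rate is p_gb/(p_gb + p_bg).

def gilbert_elliott(p_gb, p_bg, n, seed=0):
    rng = random.Random(seed)
    bad, drops = False, []
    for _ in range(n):
        bad = rng.random() < ((1.0 - p_bg) if bad else p_gb)
        drops.append(bad)
    return drops

def mean_outage(drops):
    """Mean length of runs of consecutive drops, in packets."""
    runs, run = [], 0
    for d in drops:
        if d:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return sum(runs) / len(runs) if runs else 0.0

n = 200_000
uniform = gilbert_elliott(0.3, 0.7, n)             # uncorrelated, 30 % drops
corr = gilbert_elliott(0.3 * 0.05 / 0.7, 0.05, n)  # P(drop|drop) = 0.95, 30 % mean
```

With the same mean drop rate, the correlated process produces far longer outages (mean run length near 1/p_bg = 20 packets versus about 1.4 for the uncorrelated case), which is what degrades the swing damping in Figure 41 and why the NCC, unlike the raw drop rate, tracks the control cost.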
Figure 41. Crane load angle swing with different packet drop probabilities of the network. Left: no packet drop. Center: 30 % uniformly distributed packet drop. Right: 30 % packet drop with correlation of 95 %.
The radio environment of an industrial hall, where a similar crane is located, is measured in Section 3.1. The packet drop range is 10-50 % and the mean outage length of consecutive dropped packets is 0.05-0.5 seconds, as shown in Figure 11; these values are similar to those used in the simulation results of Figure 42. From Figure 42 it can be concluded that the control performance in a real environment is degraded by about 200-400 % compared to the case of perfect communication. There is thus room for improvement of the network, to regain the wireless control performance and reliability compared to the wired system.
Figure 42. Integral square error of load angle as a function of packet drop probability and mean bad state residence time.
Figure 43. Integral square error of load angle as a function of packet drop probability and network cost for control (82). Linear mean square error fit added.
The crane model is based on a real laboratory scale trolley crane, which is used in the PiccSIM Toolchain demonstration in the following section. There the automatic code generation is shown for the compensation of the load angle swing.
4.7.5. PiccSIM Toolchain Demonstrations
Two brief examples are given here to demonstrate the modeling, simulation and automatic code generation capabilities of the PiccSIM Toolchain (Sections 4.4 and 4.6). A laboratory scale trolley crane system with an ultrasound based measurement system to measure the swing of the load is used as a testbed [44]. The system includes a Kalman filter to estimate the load angle when the ultrasound measurement system is unable to calculate it. Previously, the swing was compensated with a wired control system using a fuzzy logic controller.
The Kalman filter and a simple anti-swing controller are modeled with the generic node blocks of the PiccSIM Toolchain. The corresponding PiccSIM radio blocks are added to enable wireless communication between the nodes. The process to be controlled is modeled and attached to a generic node block functioning as an interface node to the trolley crane. The interface node samples the process with an analog input, and sends the measurement to the angle estimation node (the Kalman filter). The Kalman filter node estimates the current load
angle and angle velocity, and sends these values to the controller node. The controller is a PD controller, which uses the received estimates to calculate an appropriate control signal to compensate the load swing. Upon reception of the control value from the controller node, the interface node outputs it with the analog output to the trolley crane system. The sampling interval of the control system is 0.1 seconds, and the whole loop is traversed in two sampling intervals (because of time‐driven operation). The whole simulation model is shown in Figure 44.
Figure 44. Simulink simulation model for load swing estimation and con‐trol with wireless nodes. Green: blocks implemented on wireless nodes with automatic code generation. Gray: blocks used for communication. Red: model of the process, only used for simulation. Wireless communica‐tion indicated with arrows.
The system is implemented with the PiccSIM Toolchain and the controller is tuned by simulation. When the results are approved, the interface, Kalman filter and controller blocks are converted into C code using the automatic code generation feature of the Toolchain, and downloaded to the Sensinode wireless nodes. The interface node is connected to the trolley crane system for reading the load angle measurement and writing the trolley swing compensation control signal. The whole system is run and the anti-swing result is shown in Figure 45. [P3]
Figure 45. Anti-swing test run with wireless nodes, angle of load and anti-swing control signal shown. Start of anti-swing at 50 s.

Another example using automatic code generation is realizing the wireless controller of a heated airflow process, the "Process Trainer" PT326 by Feedback Ltd., which is part of the Automation and Systems Technology laboratory course. The control system consists of a wireless PID controller that controls the temperature of the out-flowing air, as shown in Figure 46. The same process has also been tested in an NCS setting with a controller area network [159].

The process is first identified and modeled by using input/output data gathered with the wireless nodes. The transfer function of the process is identified as
Figure 46. Wireless control of the heated airflow process. The wireless controller, wireless sensor, and wireless actuator nodes are connected to the heated airflow process.
Figure 47. Control result of wireless control of heated airflow process.
G(s) = 2.5925 / (0.0233 s² + 0.6049 s + 1) · e^(−0.1872 s),  (86)
by minimizing the integral squared error between the step response of the process and that of the model. A PID controller is tuned with an ISE-optimization-based tuning method and a jitter margin constraint, using the tuning tool of the PiccSIM Toolchain. The satisfactory control result when controlling the actual process is shown in Figure 47.
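The step response of a second-order-plus-delay structure like (86) can be simulated with a simple forward-Euler sketch, which is the kind of model response compared against the measured data during identification. The coefficients below are rounded placeholders for illustration, not the identified values:

```python
# Sketch: step response of a model of the form
#   G(s) = K e^{-Ls} / (a s^2 + b s + 1)
# simulated with forward Euler. Coefficients are illustrative only.

def step_response(K, a, b, L, t_end=5.0, dt=0.001):
    n, delay = int(t_end / dt), int(L / dt)
    y, dy, out = 0.0, 0.0, []
    for k in range(n):
        u = K if k >= delay else 0.0   # delayed unit step, scaled by the gain
        ddy = (u - b * dy - y) / a     # a*y'' + b*y' + y = K*u(t - L)
        dy += dt * ddy
        y += dt * dy
        out.append(y)
    return out

y = step_response(K=2.6, a=0.023, b=0.60, L=0.19)
```

Identification by ISE minimization then amounts to searching over (K, a, b, L) for the response closest, in the integral-squared-error sense, to the measured step data.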
This demonstrates that a wireless control system designed in simulations can be automatically implemented on actual wireless nodes with the PiccSIM Toolchain. The generated code is, for instance, efficient enough to run a two-state Kalman filter ten times a second to estimate the load angle and angle velocity.
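The structure of that estimator-plus-controller loop can be sketched as follows; the linearized pendulum model, the fixed Kalman gains, and the PD gains are illustrative assumptions, not the values used on the real crane in [P3]/[44]:

```python
# Sketch: fixed-gain two-state Kalman filter (load angle, angular
# velocity) and a PD anti-swing controller at h = 0.1 s. All numerical
# values are hypothetical stand-ins.

h, g, rope = 0.1, 9.81, 1.0   # sample time [s], gravity, rope length [m]
K1, K2 = 0.5, 0.1             # steady-state Kalman gains (assumed)
kp, kd = 8.0, 4.0             # PD anti-swing gains (assumed)

def estimate_and_control(measurements):
    """measurements: load angle samples [rad]; None when ultrasound fails."""
    ang = vel = u = 0.0
    controls = []
    for y in measurements:
        # Predict: linearized pendulum driven by the trolley command u
        ang_p = ang + h * vel
        vel_p = vel - h * (g / rope) * ang - h * u / rope
        if y is None:                     # outage: keep the prediction
            ang, vel = ang_p, vel_p
        else:                             # correct with the innovation
            ang = ang_p + K1 * (y - ang_p)
            vel = vel_p + (K2 / h) * (y - ang_p)
        u = kp * ang + kd * vel           # swing compensation command
        controls.append(u)
    return controls
```

Because the filter keeps predicting through `None` samples, the controller remains operational during short ultrasound outages, which is exactly the role the Kalman filter plays on the testbed.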
Effort may be needed to connect the wireless nodes to the process (sensors and actuators), including making the interface circuitry to accommodate the respective input and output voltages of the node and of the process. The voltage bias and range need to be calibrated to translate the voltages to temperature values. The computations for the conversion are implemented in the sensor and actuator nodes, as part of their program.

4.8. Summary
In this chapter the developed communication and control co-simulator PiccSIM was presented. The integration of ns-2 and Simulink delivers a versatile tool to simulate and study aspects of WNCSs. Several tools are available in PiccSIM that enable the design of the network and controllers, and automatic code generation for implementation. Both simulators are extended based on the special simulation requirements of WNCSs, such as packet drop models. The integration of the simulators is accomplished with simulation time-synchronization, data exchange capabilities between the simulators, enabling for instance controlled node mobility, and a Simulink blockset library for communication over the simulated network. With the graphical user interfaces and tools of PiccSIM, the development of WNCSs can be carried out all the way from design and simulation to implementation.

The chapter is concluded by some simulation cases, where the capabilities of PiccSIM and the properties of WNCSs are highlighted. The simulation cases show that there are considerable interactions between the network and control, where the control performance depends significantly on the network QoS and the specific behaviour of the network, as shown in the crane control case. The network and protocol design determines the resulting communication performance and, further, the control result. By simulations, the network protocol suitability for real-time control applications can be studied. Conversely, the application determines the proper selection of the network protocols, depending on the application properties and requirements.
5. ADAPTIVE CONTROL IN WIRELESS
NETWORKED CONTROL SYSTEMS
In this chapter several novel network adaptive control algorithms are presented. The different adaptive schemes [P8]-[P11] are presented in separate subsections, with the conclusions gained from the simulations at the end of each section.
The first adaptation scheme is the adaptive jitter margin PID controller, which changes its tuning based on the observed delay jitter of the network [P8]. This is a simple scheme where the network-induced delay jitter is first measured, whereupon a suitable controller tuning is selected such that the control loop is stable with the given jitter. The tuning is then changed on-line as the observed network statistics change.
The adaptive jitter margin controller adapts only itself according to the network characteristics. The adaptive control speed scheme of Section 5.2, in contrast, tries to affect the network performance [P9]. Whereas the previous scheme only changes to a more conservative tuning in terms of the jitter margin and cannot prevent network congestion, this scheme changes the used network bandwidth such that it avoids congesting the network and the accompanying poor control performance.
The previous two adaptive control schemes are both for plants consisting only of SISO control loops. The step adaptive controller presented in Section 5.3 is a decentralized control scheme for MIMO plants [P10]. Full MIMO control is not desired in WNCSs, because the resulting network traffic would be high. Instead, several SISO control loops are formed. The interactions between the control loops are then handled by selecting an appropriate tuning depending on the situation. The appropriate tuning is explored in Section 5.3.2.
In Section 5.4, the case of a longer network outage, when the jitter margin stability condition is exceeded, is considered. A heuristic scheme based on the IMC design to bring the process to a desired steady state during the outage is examined [P11]. The control action during an outage is based on the fact that the controller is tuned such that the closed-loop system behaves as a first-order system.
5.1. Adaptive Jitter Margin PID Control
In this section the adaptive jitter margin (AJM) controller, which adapts to the delay jitter or packet loss of the network, is presented. Deploying a control system in a real-world application usually demands simple configuration or self-configuration. In WNCSs this is even more important, as the network performance will change with time, depending on interference, moving machinery, routing, or the installation of new wireless devices. Adaptive controllers are needed in these cases, or if the network performance is not known exactly in advance. It is not practical to re-tune the whole control system after every change in the network or the environment. Adaptation brings robustness to the control system in the case of changed network parameters, as the changes are compensated by automatic controller re-tuning. This motivates the development of adaptive network aware controllers.
Changes in the network might affect the control performance, which should adapt to the new conditions. These changes can stem from obstacles or interference from other devices, which may change the route in a multihop system. The tuning of the AJM controller is automatically selected, such that (re-)configuration is not needed when a WNCS is deployed, new devices are added, or the network topology is changed. A similar approach is taken in [116], where gain scheduling of a state-feedback controller, depending on the number of communication hops, is used.
In the building automation case presented in Section 4.7.3, the QoS delivered to the different rooms depended on the location in the network and the distance to the access point. This implies that the control loops should be tuned individually according to the observed network QoS. Therefore, the adaptive jitter margin PID controller is developed, such that every loop obtains a suitable tuning based on the experienced network properties and no laborious network analysis and subsequent tuning is needed.
The adaptive jitter margin PID controller principle is to tune a PID controller as tightly as possible without endangering the stability because of varying delay or packet drops. This is accomplished by observing or estimating the delay jitter, and tuning the controller according to the current maximum estimated jitter margin δmax(k) with a jitter margin tuning method [47]. The tuning rules (27)-(28) in Section 2.6.1 are used. [P8]
Two alternative methods for estimating the delay jitter δmax(k) are developed. The first is based on counting the timestamps and the gaps between the received packets, which is simple and exact (Section 2.4.1), but relies on certain assumptions. The other is based on probabilistic estimation using a Kalman filter, which is more complex and has less restricting assumptions. The delay estimation is made on the maximum a posteriori probability of the delay, given the
current estimated process output and the received measurement. It takes into account the uncertainty of the estimate and probability of the delay. [P7]
In practice, the jitter margin for the controller tuning should be at least the observed delay jitter plus a one sampling interval margin, because if an additional packet is dropped it will increase the necessary jitter margin by h for the control loop to remain stable. The jitter margin used for controller tuning is thus at least h, according to
δ′max(k) = argmax_d {D(k)} + h.  (87)
The tuning of the AJM‐PID controller is then updated with this delay jitter estimate at every time‐step.
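The simple timestamp-counting estimator can be sketched as follows; the window length, sampling interval, and arrival times are illustrative, and the margin rule adds one sampling interval of safety as in (87):

```python
# Sketch of the timestamp-based delay jitter estimator: the worst gap
# between received packets within a sliding time window, plus one
# sampling interval of safety margin. Values are illustrative only.

def jitter_margin(recv_times, h, window):
    """recv_times: sorted arrival times [s] of received packets."""
    now = recv_times[-1]
    recent = [t for t in recv_times if t >= now - window]
    gaps = [b - a for a, b in zip(recent, recent[1:])]
    worst = max(gaps, default=h)
    # Observed jitter = worst gap minus the nominal interval h; add h margin
    return max(worst - h, 0.0) + h

# Packets every h = 1 s, with the samples at t = 5..7 lost (a 3h gap):
times = [0, 1, 2, 3, 4, 8, 9, 10]
dmax = jitter_margin(times, h=1.0, window=60.0)   # -> 4.0 (= 3h jitter + h)
```

The AJM-PID tuning would then be re-selected whenever this estimate changes, making the loop more conservative after an observed burst of losses.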
Two simulations with delay jitter induced by a network are performed. In Section 5.1.1 a Simulink-only model with a specified packet drop probability is used, and in Section 5.1.2 PiccSIM is used, where the packet drop and delay are simulated with ns-2 for more realistic results. Both the simple and the advanced delay jitter estimation techniques are compared. In both cases the sensors, controllers and actuators are time-driven, with a sampling interval of h. The process to be controlled is in both cases (26) with K = 1, T = 5 s, τ = 0 s, and the minimum communication delay NL. The output has added white noise with variance R = 0.01².
5.1.1. Delay Jitter Estimation Simulations

A simulation with random packet drop is performed in Simulink to compare a constant gain PID with the AJM-PID controller. The constant discrete-time PID controller with sampling interval h = 1 s is tuned by (88) for the maximum jitter margin of δmax = 5 s, and the jitter adaptive PID to the current estimated jitter. Both jitter estimation algorithms are evaluated with the AJM-PID controller. A tightness factor of α = 1 is used.
A random packet drop with probability pdrop = 0.3 and a maximum of six consecutive packet drops is implemented with a Markov chain with six states (9), resulting in a jitter margin of dmax = 6h seconds. In the simulations pdrop is a linearly increasing function of time (in this case from 0 to 0.3). The maximum delay in the simulations is (1 + 5)h seconds, with the overall maximum delay jitter of 5h seconds (see Figure 49). The average delay distribution when pdrop = 0.3 is about π(d) ≈ 10⁻² · [70 21 6.3 2.0 0.57 0.17]ᵀ [P7], which is used in the KF based delay estimation algorithm for the whole simulation.
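A toy version of such a drop model can be sketched as follows (the state encoding and transition rule are assumptions for illustration; the actual six-state chain (9) may differ):

```python
import random

def markov_drop_sequence(n, p_drop, max_consec=6, seed=1):
    """Packets are dropped with probability p_drop, but after max_consec
    consecutive drops the next packet is always delivered, which bounds
    the induced delay jitter to max_consec * h."""
    random.seed(seed)
    dropped, consec = [], 0
    for _ in range(n):
        if consec < max_consec and random.random() < p_drop:
            dropped.append(True)
            consec += 1
        else:
            dropped.append(False)
            consec = 0
    return dropped
```

Bounding the run length of consecutive drops is what makes a finite jitter margin tuning meaningful: the controller only needs to tolerate max_consec lost samples in a row.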
A time-window (6) of TW = 60 seconds is used for the jitter margin calculations. The KF process noise is chosen as Q = 0.0001². The accuracy of the KF-based delay estimator depends on the change in the output Δy of the process. To obtain good delay estimates, only estimates based on data exceeding a validation threshold tvalid are selected. The probability of a wrong delay estimate as a function of the output change Δy for the simulated process is shown in Figure 48 for different true delays [P7]. The probability of a wrong estimate is large for improbable (large) delays, but at an output change of about Δy = 0.03 the probability is smaller than 0.2 in all cases. Thus a threshold of tvalid = 0.03 for the valid delay jitter estimates is selected.
Simulations of 200 step responses, with a frequency of f = 1/50 1/s, are done. The performance of the methods is evaluated by a jitter margin estimation cost and a control cost. The jitter margin estimation cost is given as the normalized sum of the absolute jitter estimation error

Jδ,est = (1/N) Σ k=0…N−1 | ceil(5k/N) − δmax(k)/h |,    (89)

where ceil(5k/N) approximates the "maximum possible" jitter margin, δmax(k) is the estimated jitter, and N = 200/(2fh). The control costs are the IAE (29) and ISE (31) averaged over all the step responses.
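The cost (89) can be evaluated with a few lines of Python (a sketch with assumed names; the reference ramp ceil(5k/N) follows the definition above):

```python
import math

def jitter_margin_cost(delta_est, h):
    """Mean absolute error, in units of h, between the estimated jitter
    delta_est[k] and the growing reference jitter ceil(5k/N)*h, cf. (89)."""
    N = len(delta_est)
    return sum(abs(math.ceil(5 * k / N) - delta_est[k] / h)
               for k in range(N)) / N
```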
The true delay and the maximum delay estimates of both algorithms are plotted in Figure 49. Both algorithms estimate the delay jitter properly. The KF based algorithm overestimates the maximum delay in the beginning of the run, because it uses the wrong delay distribution (the one from the end of the run).
Figure 48. Probability of a wrong delay estimate as a function of the change in output, Δy, for different values (1-6) of the true delay [P7].
The performance costs are compared in Table 6, including constant controllers with tunings of δmax = h and δmax = 5h, corresponding to the tuning for the minimum and maximum delay jitter, respectively. The case of a simulation with no packet drop and a constant jitter margin tuning of δmax = h is also given (the case with no network, and thus the maximum achievable performance). Both jitter margin estimation algorithms result in lower costs than the constant alternatives, but naturally also in a higher cost than with perfect communication and without packet drops. Selecting a tuning for the minimum delay is worse than for the maximum delay, although the average delay jitter is closer and the jitter margin cost is lower in this case. This implies that it is better to overestimate the delay jitter, as the opposite will degrade the control performance and endanger the stability. The KF based algorithm performs better and has slightly lower costs than the simple algorithm, because of better delay jitter estimates.
Table 6. Jitter margin estimation and control simulation results.

Jitter margin method     Jitter margin cost Jδ,est   Control cost JIAE   Control cost JISE
Assume min, δmax = 1     1.97                        12.5                6.4
Assume max, δmax = 5     2.5                         10.7                5.97
Simple estimation        0.69                        6.22                3.57
Advanced estimation      0.66                        6.08                3.37
No network, δmax = 1     -                           5.32                2.90
Figure 49. Delay and estimated maximum delay for the simple (left) and the KF based (right) algorithm. Actual delay, maximum estimated delay and maximum possible delay plotted against time.
5.1.2. Adaptive Control Tuning Scenario Simulations

The PiccSIM simulated scenario considers a distributed plant with 25 wirelessly measured processes arranged in a 5x5 grid and centralized control. The distances between the sensors are set such that the radio can only communicate with the neighboring nodes (the eight square neighbors). Because of fading there will be packet losses. Additionally, there are extra delays and packet losses because of MAC queuing and collisions. Thus the hop count and the network performance depend on the distance from the central controller, located in a corner of the grid, such that the farthest nodes get the worst quality of service.
The dynamic model of the process is (26), replicated in 25 SISO loops. PID controllers with the tuning (27) are applied. A tightness factor of α = 0.9 is used, because of the additional unreliability of the wireless network. The worst case packet drop is in this case determined by the simulated network properties and the traffic rates of this simulation case. Based on the simulation results, an average control loop experiences packet drops with a probability of about pdrop = 0.1. This value is used in the packet drop probability model (9) of the KF based jitter estimation algorithm. Since the network is simulated, the drops are in reality neither uniformly distributed nor uncorrelated.
The network performance results are given in Table 7 and the control results in Table 8. It can be noted that the network communication delay is much smaller than the sampling interval h of the control loop. This is typical, and motivates the assumptions of the simple delay jitter estimation method.
The control costs for each individual loop, displayed in Figure 50, visualize the differences between the tuning alternatives. Constant tuning assuming the maximum delay jitter has the highest costs, essentially equal to those of the no-network case. This implies that the tuning is in fact robust to packet drops. The simple delay estimation works satisfactorily and has the lowest costs. In this case the restricting assumptions of the method are fulfilled; if the assumptions of the simple delay estimation method are not fulfilled, the advanced method may perform better. The advanced delay estimation method tends to have a larger cost deviation between the control loops, because of the uncertainty in the estimation. The larger costs are due to delay jitter overestimates, which result in conservative control, so the stability is not endangered.
Figure 51 displays the packet drop and network communication delay as a function of the distance between the sensor and the controller. Indeed, they increase with increasing distance. Thus, the network quality of service experienced by the control loops is different, which suggests the need for individual (adaptive) tuning of the loops in a WNCS.
Table 7. Network results of a run with PiccSIM.

Average communication delay [s]   Communication delay std [s]   Packet delivery [%]   Routing overhead [packets]
0.031                             0.014                         93                    216
Figure 50. Scatter plot of the cost of each control loop (average JISE) for the different delay jitter estimation methods: assume max jitter, simple estimation, advanced estimation, and no network.
Table 8. Control results of a run with PiccSIM.

Jitter margin method                       Control cost JISE   JISE std
Assume maximum delay jitter, δmax = 5      3.31                0.078
Simple estimation                          2.03                0.27
Advanced estimation                        2.89                0.28
No network                                 3.42                0.079
Figure 51. Scatter plot of the measurement packet drop and communication delay of each control loop as a function of the distance from sensor to controller.
The average control costs in Table 8 indicate that both jitter margin estimation methods give better control performance than the constant tuning case. The standard deviations of the cost between the different control loops are, however, larger (see also Figure 50). This is mainly because the control loops experience different network performance (depending on the distance to the controller), and some loops can be tuned tighter than others, giving a lower cost. A similar phenomenon was observed in Figure 18, when the control was tuned too conservatively or too aggressively compared to the network QoS.
5.1.3. Summary

The adaptive tuning of PID controllers in wireless control systems is the first of the adaptive control methods developed in this thesis. The tuning is based on the estimated delay jitter caused by the network. Two jitter estimation algorithms are compared: one is simple, but has the constraining assumptions that only packet losses are present and that packets carry timestamps. The other algorithm is more general and is applicable to any network delay, even when the delay is unknown.
The adaptive jitter margin PID is tuned based on the estimated delay jitter, such that the control loop is stable for all observed delay jitters. The adaptation algorithm is suitable both for step-wise changes in the network conditions and delay jitters, shown with Simulink simulations, and for slow changes, shown with PiccSIM simulations.
The AJM-PID controller is compared in simulations and shown to perform better than a constant gain controller tuned for the minimum or maximum delay jitter. The AJM-PID is further tested with the PiccSIM simulator in a multihop scenario. The simulation results show that the network quality of service depends on the number of hops of the control loop communication. With the AJM scheme, the individual control loops are tuned independently online according to the network performance. This demonstrates the advantages of network aware adaptive controllers in a WNCS case: easy deployment, automatic tuning based on the observed network quality of service, and reaction to changing conditions. The overall performance is better than tuning for the worst case.
5.2. Adaptive Control Speed Based on Network Quality of Service
As discussed in Section 2.8, in a network with a CSMA type MAC, the network QoS depends on the amount of traffic in the network. If the network becomes congested, the QoS decreases drastically. On the other hand, the control system performance improves with decreasing sampling interval, which implies more traffic. The target of the WNCS is thus to do cross‐layer optimization to select a suitable sampling interval, where network and control performances are good [92], [96].
The adaptive control speed (ACS) algorithm [P9] is a distributed algorithm for adaptively selecting the sampling intervals and control speeds in a networked control system, based on a network related QoS measure. Control speed refers here to the speed of the step response, or rise time, of the control loop.
Increasing the control speed must be accompanied by an increase in the sampling rate. In case the network cannot deliver the required QoS, the control speed is reduced, yielding slower and more robust control. Reducing the sampling rate results in lower congestion of the network and hence a better QoS. This trade-off has previously been demonstrated in [128], where the sampling interval is changed according to a PI controller with a desired packet drop of 5 %.
The various control loops may have different requirements in terms of sampling interval, because of the different time-constants of the controlled processes. This diversity of experienced QoS and requirements must be coordinated to enable a working WNCS. Instead of using a fixed controller tuning, the tuning is changed according to the network congestion and, as a consequence, the sampling rate is changed. This is the opposite of most other control adaptation mechanisms in the literature, where only the sampling interval is changed.
The framework for the adjustable control algorithm is that of the internal model control paradigm described in Section 2.7, because the control or step response speed can be nicely described with one parameter, λ, in the continuous-time case (Section 3.4.1). This is transferred to a discrete-time controller, since the process measurements are transmitted in discrete packets over the network. The update algorithm for the control speed λ is in discrete time and is described in the next section.
In a WNCS, the network performance experienced by the controller depends, among other things, on the location of the control loop in the network and on the traffic generated by the other control loops on the communication path. In the following simulations it is assumed that the network congestion correlates with the packet drop rate of the network.
The presented ACS algorithm tries to converge to a suitable control speed for all the control loops in a NCS, such that a user specified QoS level is achieved. This is accomplished by adjusting the control speed λ, and indirectly the sampling interval h, which affects the network traffic and QoS. The λ based control design is applicable to stable processes, controlled using measurements transmitted over a network, either wired or wireless. No admission control is applied here. It is assumed that the network is designed such that sufficient bandwidth is available for the control application.
In the following subsections the adaptive control speed algorithm is described and some analysis is performed. The required internal model control preliminaries are given in Section 2.7. The simulation in Section 5.2.4 demonstrates a WNCS case, using the PiccSIM simulator.
5.2.1. The Adaptive Control Speed Scheme

The adaptive control speed algorithm adapts the λ parameter of an IMC tuned controller, depending on the network QoS. The user selects a desired network QoS level rd, for which the controller is stable and performs well, and ACS tries to maintain a suitable control speed and traffic rate to meet the goal.
The QoS measure should depend on the amount of traffic in the network. In this work the packet drop rate is used as the criterion. The algorithm can naturally be modified to adapt the control speed according to any other network based QoS measure, for example the network delay, the packet drop QoS measure (82), or any other network congestion related measure.
If packet drops are used, a drop is detected by observing a gap in the sequence numbers of the received packets. The measured QoS is then

rmeas(k) = 1, if a packet is dropped,
rmeas(k) = 0, otherwise.    (90)
For practical application, the instantaneous packet drop is low-pass filtered to obtain the average packet drop QoS measure, r. The filtered drop rate is then

r(k+1) = β r(k) + (1 − β) rmeas(k),    (91)

where 0 ≤ β < 1 is a filter constant and rmeas is the measured QoS.
The total QoS rtot, used to evaluate the simulations, is calculated as a weighted sum of the QoS values of the individual loops, weighted by their share of the traffic, the reciprocal of the sampling interval hi:

rtot = ( Σi ri/hi ) / ( Σi 1/hi ).    (92)
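The drop detection (90), the low-pass filter (91) and the traffic-weighted total (92) can be sketched as follows (class and function names are assumptions):

```python
class DropRateQoS:
    """Sketch of the packet-drop QoS measure (90)-(91): drops are
    detected from gaps in the packet sequence numbers and low-pass
    filtered with coefficient beta into the average drop rate r."""
    def __init__(self, beta=0.98):
        self.beta = beta
        self.r = 0.0
        self.last_seq = None

    def receive(self, seq):
        if self.last_seq is not None:
            # every missing sequence number counts as one dropped packet
            for _ in range(seq - self.last_seq - 1):
                self.r = self.beta * self.r + (1 - self.beta) * 1.0
        self.r = self.beta * self.r + (1 - self.beta) * 0.0
        self.last_seq = seq
        return self.r

def total_qos(r, h):
    """Total QoS (92): loop QoS values weighted by traffic share 1/h_i."""
    return sum(ri / hi for ri, hi in zip(r, h)) / sum(1 / hi for hi in h)
```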
The algorithm for adapting λ is developed from the following desired properties. The control speed is changed proportionally to the dominating process time-constant, and proportionally to the time passed since the last update (the update interval is the same as the sampling interval). This has the effect that all the control loops are adjusted relative to the natural process speed. The adaptation step-size is proportional to the error between the actual and desired network QoS. If any loop experiences worse QoS than desired, all loops reduce their traffic.
Because of the exponential relationship between λ and γ (44), it is more natural to adapt the linear (with respect to the response speed) λ and then calculate the corresponding γ for the discrete-time controller. These considerations and the analysis in Section 5.2.3 lead to the following ACS update algorithm:

λ(k+1) = λ(k) + c h(k) Δr(k) m(k),    (93)
where c > 0 is an update step scaling factor, h(k) is the current sampling interval, and m(k) is the update speed. Δr(k) determines the size and direction, or velocity, of the update according to the following rules:

Δr(k) = max_i (ri − rd), if ri > rd for any loop i,
Δr(k) = r(k) − rd, otherwise,    (94)

where ri is the QoS of the ith loop and rd is the desired QoS. If any loop experiences worse QoS than desired, all loops use max_i (ri − rd). This decreases the traffic generated by all loops, to obtain a better QoS for the loop whose QoS is too low. This global adjustment is used because bad QoS is usually due to the other control loops taking too much of the available bandwidth. Otherwise the loops
adjust λ according to the local QoS. Moreover, the update speed m(k) depends on Δr(k) such that

m(k) = λ/T, if Δr(k) ≤ 0,
m(k) = T/λ, if Δr(k) > 0,    (95)

where T is the time constant of the process. The update speed thus depends on how much the control speed λ differs from the natural speed of the process T. In the case of a higher order process model, the dominating time-constant is used for T in the adaptation algorithm.
At every time-step the control speed λ is updated and the corresponding IMC controller is calculated. The sampling interval is updated according to (74). The sampling interval of the sensor and controller is thus proportional to the control speed. The sampling interval is additionally quantized to a power-of-two multiple of a base sampling interval hbase, such that

h(k) = 2^p hbase, where p = floor( log2( λ(k) / (Nh hbase) ) ),    (96)
where floor rounds down to the nearest integer. Quantization is used for practical reasons, because the controller cannot change the sampling interval continuously. The procedure for the change is described in the next subsection.
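The quantization (96) is a one-liner in code (argument names assumed):

```python
import math

def quantized_interval(lam, N_h, h_base):
    """Sampling interval quantized to a power-of-two multiple of the
    base interval h_base, cf. (96); lam is the current control speed."""
    p = math.floor(math.log2(lam / (N_h * h_base)))
    return 2 ** p * h_base
```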
According to (75), a suitable jitter margin for the ACS scheme can be selected directly by specifying Nh. The jitter margin in terms of consecutive packet drops is thus the same regardless of the control speed. The actual jitter margin according to (22), with IMC-PID control and the parameters given in the simulation case described in Section 5.2.4 (T = 10), was solved previously in this thesis and is plotted as a function of the control speed in Figure 15. Without quantization the obtained jitter margin is as specified at Nh = 8, but quantization alters the jitter margin.
5.2.2. Changing the Sampling Interval

The algorithm for changing the sampling interval starts with a change in the calculated quantized h(k) (96) at the controller. The process model is first re-discretized and then the new IMC controller is calculated. Changing the sampling interval in the middle of a run requires some calculation to make a seamless transition [3]. The decision to use quantized sampling intervals in the ACS algorithm simplifies the transition calculations and avoids changing the sampling interval continuously.
Changing to a longer sampling interval is easy, as the interval is in this case doubled: the new samples are calculated as averages over pairs of previous control input/output values and the new controller is switched on immediately. When halving the sampling interval, the controller needs to be initialized with in-between samples. There exist several applicable methods to change the sampling online without bumps; one can use interpolation with splines or optimization to find the in-between values [3]. Here, an algorithm that matches the outputs of the old and new controllers is proposed [P9].
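The doubling case can be sketched directly from the description above (signal buffering details are assumptions):

```python
def double_sampling_history(u, y):
    """When the sampling interval is doubled, the new controller's past
    signal values are formed as averages over pairs of the old values.
    u and y hold the most recent samples, newest last; an even number
    of samples is assumed."""
    u2 = [(u[i] + u[i + 1]) / 2 for i in range(0, len(u) - 1, 2)]
    y2 = [(y[i] + y[i + 1]) / 2 for i in range(0, len(y) - 1, 2)]
    return u2, y2
```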
The change to a shorter sampling interval is sketched in Figure 52. The sensor is first informed of the new sampling interval, and it starts transmitting with it. The old controller is still run during the initialization phase, using every other measurement of the new sampling interval. Once enough samples with the faster sampling rate have been received, the initialization is done according to the following algorithm and the new controller is applied.
The switch is done at the time-instant k = ks, with indexing according to the new, faster sampling rate. As the slow-sampling controller has been used, every other control value is matched such that the same output response is achieved, i.e. "u(k)" of the old controller must equal "u(2k)" of the new. The in-between u-values (indicated in Figure 53) are solved using the controller equation

D(z) u(ks − m) = N(z) y(ks − m),    (97)

where Gc(z) = N(z)/D(z). The values u(ks − even) are fixed by the old controller and the values u(ks − uneven) are unknown (even = 0, 2, 4, … and uneven = 1, 3, 5, …). The unknown "uneven" values u(ks − uneven) are found by solving x from the linear equation Ax = b, using the fixed even values u(ks − even) (Figure 53), where

Figure 52. Proposed method to switch to a shorter sampling interval, with deg(Gc) = 5. Control signal and instants for process measurements shown.
D(1)u(ks) + D(2)u(ks−1) + D(3)u(ks−2) = N(1)y(ks) + N(2)y(ks−1)
D(1)u(ks−1) + D(2)u(ks−2) + D(3)u(ks−3) = N(1)y(ks−1) + N(2)y(ks−2)
D(1)u(ks−2) + D(2)u(ks−3) + D(3)u(ks−4) = N(1)y(ks−2) + N(2)y(ks−3)
D(1)u(ks−3) + D(2)u(ks−4) + D(3)u(ks−5) = N(1)y(ks−3) + N(2)y(ks−4)

Figure 53. Unknown u-values to be solved (indicated by boxes) when switching from slower to faster sampling.
x = [ u(ks−1)  u(ks−3)  …  u(ks−(2M+1)) ]ᵀ,

A(rows 2m and 2m+1, columns m to m + ceil(deg(D)/2)) =
[ D(2)  D(4)  …  (even indices)
  D(1)  D(3)  …  (odd indices) ],

b(rows 2m and 2m+1) =
[ N(z)y(ks−2m) − D(even)u(ks−2m)
  N(z)y(ks−2m−1) − D(uneven)u(ks−2m−1) ],

where deg(D) > 2 is the order of the polynomial D(z), D(n) is the coefficient of the nth power of D with one-based indexing, ceil rounds up to the nearest integer, m = 0…M, and M = deg(D) − 2. If deg(D) ≤ 2, no solving needs to be done; the new controller can continue immediately using every other previously received value.
An example of changing the sampling interval is given in Figure 54, for both increasing and decreasing the sampling interval. With the initialization calculation presented above, the control continues smoothly after the switch.
5.2.3. Analysis of the Adaptive Control Speed Algorithm

In this section the ACS algorithm is shown to be of the additive increase, multiplicative decrease (AIMD) type, which is a typical approach for bandwidth control of network traffic. AIMD is for instance used in TCP.
The evolution of λ is analyzed by combining (93) with (95) and using (96), neglecting the rounding by using h(k) = λ(k)/Nh instead. When Δr > 0, (93) becomes
λ(k+1) = λ(k) + (cT/Nh) Δr(k)    (98)

and when Δr ≤ 0

λ(k+1) = λ(k) ( 1 + (c/(T Nh)) Δr(k) ).    (99)

Figure 54. Switching of the sampling interval. Top: slow to fast (h = 1 s to 0.5 s); the controller switches to fast sampling at t = 10 s because of the required initialization. Bottom: fast to slow (h = 0.5 s to 1 s) at time t = 8 s. Control signal u and process response y plotted.
Equations (98) and (99) show that λ is increased additively when it is too small and decreased multiplicatively when it is too large. Thus ACS is an AIMD type algorithm. The additive and multiplicative constants are proportional to the error from the desired QoS, Δr(k). The main difference between this algorithm and the TCP algorithms is that ACS adjusts the control speed, and thereby the actual amount of traffic generated at the application layer, instead of adjusting the transmission rate at the transport layer.
Now the stability of the ACS scheme is analyzed. The general stability of an AIMD type rate control algorithm is difficult to prove. An early analysis is given in [30]. One can consider several cases, such as one bottleneck link [14] or several [105], [75]. Below is a very simplistic proof for the case with one bottleneck, instantaneous packet drop feedback and no queue overflows. Consider a system with several control loops, all governed by the ACS scheme. If the desired QoS is reached, then Δr = 0 and the control speed update (98), (99) remains constant, λ(k+1) = λ(k). As there exists an equilibrium, we next assess what happens when the ACS is not at steady-state. Assume that the network QoS is a function of the traffic over a bottleneck link:

r(k) = f( Σi 1/hi(k) ) = f( Σi Nh/λi(k) ),    (100)
where the sum is the packet frequency over that link, summed over all the control loops with sampling intervals hi. The function f gives the QoS cost r as a positive, increasing function of the traffic (and ultimately the control speed). Hence, the network QoS cost increases in some (non-linear) manner as the traffic over the network increases, which is a typical behavior of networks with a CSMA type MAC.
If Δr > 0, (98) implies λ(k+1) > λ(k), and since f is positive and increasing, f(k+1) < f(k) and Δr(k+1) < Δr(k). Similar reasoning when Δr ≤ 0 gives λ(k+1) < λ(k) in (99) and f(k+1) > f(k), which leads to Δr(k+1) > Δr(k). The reasoning is the same for all the control loops, as they all measure the same r; thus |Δr(k)| is always decreasing until Δr approaches zero. In practice this may never happen, as packet drops are randomly distributed and all loops do not observe exactly the same QoS. The following simulations indicate that the ACS is still well behaved.
As with any similar learning algorithm, the choice of c determines the rate of convergence of the algorithm. Selecting a small value makes convergence slow, but a too large value may cause oscillation around the optimum.
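The argument can be illustrated with a toy closed-loop simulation of (98)-(99) (the traffic-to-drop mapping f below is invented purely for illustration; only its positivity and monotonicity matter for the argument):

```python
def simulate_acs(lams, r_d=0.04, T=10.0, c=2.0, N_h=8, steps=2000):
    """Toy simulation of the AIMD behaviour (98)-(99): all loops share
    one bottleneck whose drop rate is an increasing function f of the
    total packet frequency sum(N_h / lam_i)."""
    lams = list(lams)
    r = 0.0
    for _ in range(steps):
        traffic = sum(N_h / lam for lam in lams)
        r = min(1.0, 0.005 * traffic)        # hypothetical QoS cost f
        dr = r - r_d
        for i, lam in enumerate(lams):
            if dr > 0:
                lams[i] = lam + (c * T / N_h) * dr          # (98)
            else:
                lams[i] = lam * (1 + c / (T * N_h) * dr)    # (99)
    return lams, r
```

Starting from a conservative tuning, the loops speed up (λ decreases) until the bottleneck drop rate settles near the desired level, mirroring the fixed point argument above.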
5.2.4. Simulation Scenario

The simulation scenario consists of six control loops using ACS. Measurements of the controlled processes are transmitted wirelessly over an IEEE 802.15.4 network. The network topology is shown in Figure 55, where all control loops communicate over one bottleneck in the center of the network. The distances are such that the radio signal reaches only the nearest neighbors, so multihop communication is used. AODV [126] is used as the routing protocol. A simulation of 6000 seconds is done, where loops 5 and 6 are initially idle and start operation at times t = 2000 s and t = 4000 s, to show how the ACS algorithm reacts when the traffic is suddenly increased.
The process models in the loops are continuous-time, first order transfer functions with unit gain and time-constants as indicated in Figure 55. All the processes have a delay of τ = 0.5 seconds. A PID controller with the IMC-PID tuning without a pre-filter, described in Section 2.7.2, is used.
Figure 55. Network topology in the simulated scenario, consisting of six wireless control loops (process time-constants T = 10, 20, 30, 40, 20 and 30 s for loops 1-6). Possible communication routes are indicated.
The selected parameters for the ACS algorithm are the following: the packet drop low-pass filter coefficient is β = 0.98 and the update speed is c = 2. The desired packet drop is rd = 4 %. The base sampling interval is set sufficiently low at hbase = 0.01 s, and Nh = 8.
Changing the sampling interval in practice commences by the controller sending a packet to the sensor, instructing it to use the new interval. The measurement packets from the sensor contain the sampling interval in use, such that the controller knows when the sensor has successfully switched to the new sampling interval. If no change is made, the controller repeats the request.
Another practical issue is the individual QoS needed by (94): the loops must obtain this information from the other loops. Sharing this information is done with the so called send-on-delta approach, to minimize the used bandwidth. The send-on-delta mechanism means that a loop notifies the other loops by sending a packet with its current local QoS ri, if it is above rd and has changed more than a certain threshold since the previous update. Additionally, the nodes send a packet when the QoS returns to the desired region.
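The send-on-delta rule can be sketched as a small decision function (names and the threshold value are assumptions):

```python
def send_on_delta(r_i, r_d, last_sent, threshold=0.01):
    """Decide whether a loop should broadcast its local QoS r_i.
    Returns (transmit, new_last_sent): transmit only if r_i is above
    the desired level r_d and has moved more than `threshold` since the
    last transmitted value, or when it first returns below r_d."""
    if r_i > r_d:
        if last_sent is None or abs(r_i - last_sent) > threshold:
            return True, r_i          # transmit, remember value
        return False, last_sent
    if last_sent is not None:
        return True, None             # announce return to desired region
    return False, None
```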
The results of one of several runs are shown in the following figures. Figure 56 shows the average packet drop of the individual loops, where the bold line is the total QoS (92), which is mostly kept below the desired level of 4 %. The control speeds and corresponding sampling intervals for all the loops are shown in Figure 57. Initially all the loops decrease their sampling intervals, until packets start to drop. When loops 5 and 6 start, congestion occurs and all the loops slow down to accommodate the increased congestion introduced by the additional loops. Notice how the new loops find an appropriate control speed, even though they initially start with a conservative one. The ACS thus compensates for the changing traffic conditions.
Figure 56. Observed average packet drop for all individual loops and the total QoS rtot (92) (black line). The desired QoS is drawn with a dotted line.
Figure 57. Top: evolution of the control speed λ of all the control loops as a function of time. Bottom: the corresponding sampling intervals h.
From the experience of the simulations, which include all the network related issues of the media access and routing protocols, one can conclude that the ACS algorithm works as intended. The assumption that the packet drop depends directly on the congestion of the network turns out, in practice, not to hold completely. Packet drops mostly depend on the precise timing of the network traffic, for example collisions when two nodes transmit simultaneously, or queue overflows. These events are more probable when the network is congested, but stochastic in nature. ACS could instead use network congestion information to adjust the control speed. Congestion feedback, either implicit through random early drop [51] or explicit by messaging from the intermediate nodes [136], could be used to accomplish the rate control.
5.2.5. Summary

The adaptive control speed algorithm for NCSs changes the tuning λ of an IMC controller depending on the network QoS. The measurement sampling rate is changed as a function of λ, which adjusts the traffic of the network such that the network is not congested. If the network is congested, the control speeds and sampling rates of all the control loops are reduced to compensate. The algorithm is unique in the sense that it adjusts the controller generated traffic in a NCS setting, depending on the offered network QoS. It is a control oriented approach to a network layer problem. The sampling interval adaptation can also be applied to sensor network type monitoring applications, where the importance of the measurement is specified by the parameter T.
The proper change of the sampling interval is considered here, whereas in most works found in the literature the old controller continues to be used with a new sampling interval and the whole issue is ignored. The adaptive IMC based controller handles online changes of the sampling rate without bumps, by an initialization procedure.
The presented ACS algorithm is demonstrated with PiccSIM, where six control loops using ACS are simulated. The control speeds are adjusted online as more loops are added to the network, such that the desired QoS is maintained.
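As an illustration, the core of the ACS adaptation can be sketched in a few lines. The adaptation gain K_ADAPT, the lambda bounds, the QoS measure (here, e.g., 1 minus the observed drop rate), and the lambda-to-sampling-interval ratio are illustrative assumptions for this sketch, not the exact rules used in the thesis.

```python
# Sketch of the adaptive control speed (ACS) idea: a loop slows down
# (larger IMC lambda, longer sampling interval h) when the observed QoS is
# worse than desired, and speeds up again when the network recovers.
K_ADAPT = 0.5                       # adaptation step size (assumed)
LAMBDA_MIN, LAMBDA_MAX = 1.0, 40.0  # allowed range for the IMC tuning (assumed)

def adapt_control_speed(lam, qos_measured, qos_desired):
    """Increase lambda (slow the loop down) when measured QoS is below desired."""
    lam += K_ADAPT * (qos_desired - qos_measured) * lam
    return min(max(lam, LAMBDA_MIN), LAMBDA_MAX)

def sampling_interval(lam, ratio=0.1):
    """Choose the sampling interval proportional to the closed-loop time constant."""
    return ratio * lam

lam = 5.0
for qos in [0.95, 0.80, 0.70, 0.90, 0.99]:  # measured QoS trace, e.g. 1 - drop rate
    lam = adapt_control_speed(lam, qos, qos_desired=0.90)
h = sampling_interval(lam)
```

Because the sampling interval grows with λ, slowing a loop down also reduces the traffic it offers to the network, which is the self-regulating property the ACS algorithm relies on.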
5.3. Step Adaptive Controller for Networked MIMO Control Systems
In this section the multiple‐input multiple‐output WNCS case is considered. A decentralized wireless 2x2 MIMO control system is depicted in Figure 58. A MIMO process, with wireless sensors measuring all the outputs and separate controllers for the inputs, i.e. diagonal MIMO control, is assumed.
Figure 58. Diagram of a 2x2 MIMO process in a NCS.
Full MIMO control results in high network traffic, because information from every sensor is needed for every control input. Due to the communication requirements of full MIMO control, diagonal MIMO control, where separate SISO loops control the MIMO process as shown in Figure 58, is more suitable for WNCSs and is thus considered here.
The lightweight requirement, due to the low communication capabilities of wireless nodes, demands restricting the algorithms to simple controller types, such as PID or IMC controllers. Although the achievable performance with several SISO PID or IMC controllers, each controlling one input-output pair, may not be as good as with a full MIMO controller, the decomposition is justified in a WNCS because of the low and local communication needs compared to the full MIMO case. When carefully tuned, the structural simplicity of the individual controllers may outweigh the performance advantage of the more complex MIMO controller in a WNCS setting. Thus, good diagonal MIMO PID controller tuning is clearly needed, and there are plenty of tuning methods to choose from [145]. Here, a controller tuning switching method is proposed, such that good control is achieved depending on the input in which a step change in the reference is made [P10].
In the multivariable control case, the objectives of the controllers are to produce a feasible step response in one loop and efficient cross-interaction elimination in all the other loops. The idea of the step adaptive controller (SAC) is similar to cascade control, where the disturbance is suppressed by creating a plain speed difference between the loops. In other words, the controller of the loop which performs a step would correspond to the primary controller of the cascade control, with a lower loop speed (equivalent to a larger IMC λ value). At the same time, the other loop would be tuned faster (smaller λ), and thus be more efficient at compensating for the cross-interaction disturbance. [P10]
The step adaptive controller thus switches the tuning depending on whether the loop has a change in its own reference or not. If a step response is expected, the tuning is changed in order to ensure a good response. Conversely, if a set point change is made in another interacting loop, a tuning more suitable for cross‐interaction rejection is selected. If there are concurrent reference changes, the latter strategy is selected.
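The switching rule above can be sketched as follows; the function name and the example λ values are illustrative, not the optimized tunings from the thesis.

```python
# Minimal sketch of the step adaptive controller (SAC) switching rule: each
# loop keeps two IMC tunings, a slower one for its own step response and a
# faster one for cross-interaction rejection, and switches between them.
def select_tunings(step_in_loop, lambdas_step, lambdas_cross):
    """step_in_loop[i] is True if loop i has a set point change.

    A loop uses its step tuning only when it is the single loop being
    stepped; otherwise (including concurrent reference changes) it uses
    the cross-interaction rejection tuning.
    """
    n_steps = sum(step_in_loop)
    chosen = []
    for i, stepping in enumerate(step_in_loop):
        if stepping and n_steps == 1:
            chosen.append(lambdas_step[i])   # slower: smooth own step response
        else:
            chosen.append(lambdas_cross[i])  # faster: reject cross-interaction
    return chosen
```

For a step in loop 1 only, `select_tunings([True, False], [20.0, 25.0], [5.0, 6.0])` keeps loop 1 slow (20.0) and makes loop 2 fast (6.0); with concurrent steps both loops fall back to the cross-interaction tunings.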
The design of the step adaptive controller is naturally done by using the IMC framework (Section 2.7), where the controller can be tuned with only one tuning parameter related to the speed of the step response. The tuning can be applied on a conventional IMC controller or on an IMC-PID controller, which are considered here, with some design alternatives summarized in Figure 59. The SAC framework, which changes the controller tuning depending on the situation, is not restricted to IMC control with the notion of control speed, but can be applied through optimization to any parameterized controller. An example used here is optimizing the parameters of a PID controller.
The controller tuning is chosen by optimization; thus, the envisioned speed difference between the loops is not guaranteed to materialize. By changing the cost criterion, the operator can choose an acceptable step response. The selection of the cost criterion is investigated in the next section.
Although the step adaptive controller is applied here to a 2x2 process, it can be extended to an n x n MIMO case as well. In the n x n case, the proposed procedure would yield n − 1 tuning parameter values for every controller, optimized for eliminating the cross-interactions that originate from the n − 1 other loops. The large number of different tuning values (n x (n − 1)) needed for cross-interaction elimination could be reduced by first analyzing the interactions between the loops and then optimizing only for the loop that causes the largest interaction. The chosen tuning would then also be suitable for the other, less significant cross-interactions from the other loops.
Figure 59. Step adaptive controller tuning alternatives: design an IMC-PID as a discrete-time PID and optimize λ; design a discrete-time PID and optimize Kp, Ki, and Kd; or design a discrete-time IMC and optimize γ. Each optimization is done n times, for separate steps in all the loops.
5.3.1. Controller Tuning by Optimization for MIMO Systems

To tune the step adaptive controller, the simulation based tuning procedure [129] is applied. In the MIMO case, unit reference changes are, for example, made sequentially to each input of the system. Hence, each output response is composed of two different situations where the control performance is assessed: the step response and the cross-interaction. These two cases are different in nature, and may set competing requirements for the control actions. Therefore the cost criterion for the tuning optimization is considered in this subsection. A suitable cost criterion is selected to fit the desired control objectives in both of the above-mentioned situations for the SAC.
In order to evaluate the control performance of a MIMO system a new cost criterion is proposed. The total cost, which is minimized for optimal control tuning, is chosen as a weighted sum of two individual costs: the costs during the step response and during the cross-interaction. The ITSE criterion (32) yields good step responses because of the absolute time included in the cost calculation, which discounts the initial step transient and emphasizes the settling to a steady-state. The ISE criterion (31) is more suitable for evaluating the cost under load disturbances, which can occur at any time. Therefore, the cost criterion is switched from ITSE to ISE at the time t_load when the character of the response changes from step response to cross-interaction, i.e. when another loop has a step response, as shown in Figure 60.
A weighted sum of the two cost functions is taken, similarly as in [53]. The weight factor α (0 < α < 1) multiplies the cross-interaction cost (ISE) and 1 − α weights the step response cost (ITSE). Since the cost criteria are of different types, their absolute values may differ. This is compensated by adding a scaling term c_J such that the costs are equal when α = 0.5, which leads to

\[
c_J \equiv \left. \frac{J_{\mathrm{ITSE}}}{J_{\mathrm{ISE}}} \right|_{\alpha = 0.5} . \tag{101}
\]

Thus, the total cost, J_tot, summed over all the N loops is

\[
J_{\mathrm{tot}} = \sum_{i=1}^{N} \left( 1 - \alpha \right) J_{\mathrm{ITSE},i} + \sum_{i=1}^{N} \alpha \, c_J \, J_{\mathrm{ISE},i}, \quad \alpha \in \left[ 0, 1 \right] . \tag{102}
\]
The main objective of the cost criterion weighting α is the ability to emphasize the cost from the cross-interaction and thereby obtain tuning parameters that can suppress the load disturbance adequately.

Another way to obtain efficient handling of the cross-interaction is to set a restriction on the maximum cross-interaction, depicted in Figure 60. During the cross-interaction caused by a unit step reference change in another loop at time t_load, the absolute value of the error signal should not exceed a predefined maximum cross-interaction constraint, maxcross. The effect of the weight α and the constraint on the obtained controller parameters is studied in the next section.
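A minimal sketch of the switched cost of (101)-(102) for a single loop pair follows; the full criterion sums this over all N loops. The synthetic error traces and helper names are illustrative only.

```python
# Sketch of the switched cost criterion: ITSE scores a loop's own step
# response and ISE scores the cross-interaction part, scaled by c_J and
# weighted by alpha.
def itse(t, e):
    """Integral of time-weighted squared error (discrete approximation)."""
    return sum(ti * ei ** 2 for ti, ei in zip(t, e))

def ise(e):
    """Integral of squared error (discrete approximation)."""
    return sum(ei ** 2 for ei in e)

def total_cost(t, e_step, e_cross, alpha, c_j=1.0):
    """Weighted sum of the step response (ITSE) and cross-interaction (ISE) costs."""
    return (1.0 - alpha) * itse(t, e_step) + alpha * c_j * ise(e_cross)

# Synthetic decaying error traces for one loop pair (illustrative)
t = [k * 0.2 for k in range(50)]
e_step = [2.718 ** (-ti / 2.0) for ti in t]          # own step response error
e_cross = [0.3 * 2.718 ** (-ti / 1.0) for ti in t]   # cross-interaction error

J = total_cost(t, e_step, e_cross, alpha=0.8)
```

Setting α close to 1 makes the optimizer concentrate on suppressing the cross-interaction, at the price of a slower own step response.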
Figure 60. Optimization criteria and tuning selection. The rectangle marked by the dashed line represents the allowed region during cross-interaction. The times when a change in the adaptive controller parameters occurs are indicated. [P10]
5.3.2. Step Adaptive Controller Tuning and Simulations

In this section the tuning of the step adaptive controller is investigated. The example plant used in the simulations is the Wood & Berry benchmark plant [168],

\[
\begin{bmatrix} y_T(s) \\ y_B(s) \end{bmatrix} =
\begin{bmatrix}
\dfrac{12.8\, e^{-s}}{16.7 s + 1} & \dfrac{-18.9\, e^{-3 s}}{21.0 s + 1} \\[2ex]
\dfrac{6.6\, e^{-7 s}}{10.9 s + 1} & \dfrac{-19.4\, e^{-3 s}}{14.4 s + 1}
\end{bmatrix}
\begin{bmatrix} R(s) \\ S(s) \end{bmatrix}, \tag{103}
\]
where the top product y_T and the bottom product y_B of a 2x2 distillation column are controlled by the reflux flow rate R(s) and the steam flow rate to the reboiler S(s) in a decentralized manner, with two controllers. The control is done in a NCS setting as shown in Figure 58.
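For reference, the Wood & Berry plant parameters in (103) can be collected as data and the static gain matrix G(0) read off directly; the dictionary layout below is an illustrative choice, and only the gains are evaluated.

```python
# The Wood & Berry plant (103) as data: each element is a first-order-plus-
# time-delay (FOTD) transfer function K * exp(-L*s) / (T*s + 1).
WOOD_BERRY = {  # (output, input): (gain K, time constant T, delay L)
    ("yT", "R"): (12.8, 16.7, 1.0),
    ("yT", "S"): (-18.9, 21.0, 3.0),
    ("yB", "R"): (6.6, 10.9, 7.0),
    ("yB", "S"): (-19.4, 14.4, 3.0),
}

def static_gain(out, inp):
    """G(0) of an FOTD element is simply its gain K."""
    K, _T, _L = WOOD_BERRY[(out, inp)]
    return K

G0 = [[static_gain("yT", "R"), static_gain("yT", "S")],
      [static_gain("yB", "R"), static_gain("yB", "S")]]
```

The large off-diagonal gains of G(0) show why cross-interaction elimination dominates the tuning of the decentralized controllers for this plant.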
Two step adaptive controllers (the IMC-PID and the pure PID) are compared to a constant parameter PID controller, which is tuned by minimizing the total ISE cost summed from both loops. Discrete-time controllers are used, as they will later be implemented in a WNCS setting. A sampling interval of h = 0.2 s is used. First the controllers are optimized by simulation in Simulink with the criterion described in Section 5.3.1 for different values of α and cross-interaction restrictions. The tuning of the IMC controllers is obtained by optimization of (102). To get suitable control for the WNCS case, a jitter margin constraint (Section 2.5) of two consecutive packet drops, δmax = 2h = 0.4 s, is included for each control loop in the optimization. An upper bound for the tuning parameter, λ = 50, is selected to prevent the controller from becoming too slow. The objective is to investigate how the cost function and restrictions on the step responses influence the control results. The system is simulated in Simulink and settings yielding suitable controller performance are investigated. Later the system is run and compared in a WNCS setting with the PiccSIM simulator.
The optimal controller parameters as a function of the weight α are plotted in Figure 61. Several initial values are used in the optimization to avoid local minima. As anticipated, the weighting of the step response cost versus the cross-interaction cost changes the control speed parameter λ of the IMC based controller, such that the controller compensating the load disturbance becomes faster with increasing weight on the load disturbance. In this case the optimal value of λ for the cross-interaction tuning is restricted from below by the jitter margin constraint. The final result is always a trade-off between an adequate step response in one loop and efficient cross-interaction elimination in the other. By changing the cost criterion weight and the cross-interaction restriction, the operator can choose, on one hand, how much cross-interaction elimination is desired, and, on the other, a suitable step response.
The cross-interaction in the previous case is still significant: the peak during a unit step response is between 0.6 and 0.8. This can be reduced with the cross-interaction restriction. Next, optimizations are done with a cross-interaction restriction of maxcross = 0.4. A comparison of step responses is shown in Figure 62, where the optimal tuning values for α = 0.8 are used. Unit step changes in the reference signals are made at time t = 0 s for the first reference and at t = 100 s for the second reference. The cross-interaction is now diminished as much as possible with this control structure. Optimizations with a tighter cross-interaction restriction yield very slow step responses, in order to fulfill the cross-interaction constraint. The ISE costs of the step and cross responses of all the controllers are collected in Table 9, calculated with α = 0.5 and c_J = 1. The realized jitter margin is additionally shown in the table, calculated according to the discrete-time jitter margin theorem (22) [72].
In the following, the step adaptive IMC controller is compared with some well known tuning methods from the literature. The quantitative parameter tuning scheme (QPT) proposed in [28] and the Mp criterion based tuning method introduced in [84], which is an extension to the MIMO case of the generalized IMC-PID tuning rule developed in [85], are used in the comparison.
Figure 61. Tuning parameters of adaptive IMC-PID (top) and PID controllers (bottom) as a function of the cross-interaction weight. Parameters for steps in loop 1 and loop 2 are shown separately.

Although many of the MIMO control tuning methods are in continuous-time, in NCSs the controllers are implemented in discrete-time, because the sensors transmit the measurements in discrete packets and the algorithms have to run on microprocessors. The discretization interval used here is h = 0.2 s, which is also the sensor sampling and transmit interval in the WNCS simulation case.

The controllers obtained by the QPT and Mp criterion are discretized and compared to the developed adaptive IMC-PID controller with α = 0.8 and maxcross = 0.4. The simulation results are shown in Figure 63.

Finally, WNCS simulation is done with PiccSIM using an IEEE 802.15.4 network. Separate sensor nodes for the two outputs are used and the controllers are also implemented on separate receiver nodes. All three controller alternatives are simulated at the same time and the nodes are placed close to each other, such that the nodes compete for the same limited bandwidth of the network.
Figure 62. Step responses of control loops in simulations without a network.

Figure 63. Step response comparison to results in literature.

Care has to be taken when implementing the adaptive controllers. The abrupt changing of parameters may cause bumps in the control signal. The velocity form of the PID controller is in this case advantageous, because the incremental form suppresses the bumps when changing the tuning [185]. In the proposed control scheme, the adaptive controllers are either aware of the set point change in the other reference, or a coordinator at a hierarchically higher level commands them to change their tuning values. The change may take place in advance, slightly before the step times, as shown in Figure 60.
The PiccSIM simulation results are shown in Figure 64. The results are very similar to those in Figure 63, as the packet drop rate is low, about 10%. The ISE cost values in Table 9 are slightly higher due to the additional delay and packet drop induced by the network. The Mp design results in the most sensitive controllers in this case.
Figure 64. Step responses of control loops from the PiccSIM run.

5.3.3. Summary

The step adaptive controller is proposed for decentralized control of MIMO systems. The tuning parameters of the SAC change depending on whether cross-interaction disturbance rejection or a step response is desired. The decentralized control structure is advantageous in WNCSs, where the required communication bandwidth is less than when using full MIMO controllers. The selection of the cost function for optimization based tuning in a MIMO control case is discussed and suitable cost function parameters are investigated. Optimization with a restriction on the maximum cross-interaction yields the most pleasing step responses.
Table 9. ISE costs from simulations and realized jitter margin, given separately for both controllers and for the step response and cross-interaction parts.

Loop           | Simulink Step | Simulink Cross | PiccSIM Step | PiccSIM Cross | Jitter margin (Step / Cross)
IMC-PID 1      | 4.07          | 1.02           | 4.05         | 1.10          | 2.24 / 0.95
IMC-PID 2      | 5.00          | 0.74           | 5.10         | 0.84          | 0.51 / 0.61
Adaptive PID 1 | 2.74          | 0.91           | 2.70         | 0.95          | 0.56 / 0.38
Adaptive PID 2 | 4.09          | 0.64           | 4.14         | 0.70          | 0.41 / 0.32
Constant PID 1 | 2.52          | 1.18           | 2.44         | 1.27          | 0.37
Constant PID 2 | 4.19          | 0.97           | 4.22         | 1.02          | 0.98
QPT 1          | 3.71          | 1.67           | 5.38         | 1.57          | 5.8
QPT 2          | 5.39          | 1.49           | 4.16         | 0.71          | 2.0
Mp criterion 1 | 1.31          | 0.17           | 5.46         | 3.00          | 0.36
Mp criterion 2 | 5.46          | 2.65           | 4.22         | 1.01          | 0.82
The step adaptive controller is evaluated with simulations both in Simulink and with PiccSIM. The control result of the SAC is evaluated against other decentralized MIMO controllers proposed in the literature, and it was shown to perform equally well. It is observed that the SAC is better than a constant parameter PID.

It should be noted that the step adaptive control strategy presented here cannot only be used for NCSs, but also for traditional MIMO control problems.

5.4. Steady-State Outage Compensation Heuristic

The previous controller designs have relied on the jitter margin theorem, which requires an upper bound on the delay jitter or on the number of consecutive packet drops. This is a common assumption in the NCS literature [23], [103], [178], [74]. Generally, a jitter margin as large as possible is desired, with the control tuning becoming slow and conservative, due to the stability limit (Section 3.4.2 and Figure 16). In WNCSs the required jitter margin may not be possible to guarantee at all times, due to the uncertainties in the wireless network. The length of an outage might be bounded, such that the probability of a long outage is sufficiently small. During longer outages, which in wireless networks can be unbounded in length, the stability region of the closed-loop system is quickly exceeded. Due to the unbounded outages, only open-loop stable processes are considered here. In these situations outage action for the control system is needed. One method is to use an estimator, for example a Kalman filter, to predict the outage gaps and continue control. Another alternative is the predictive outage compensator [64], which is a reduced order version of a Kalman filter. In this section an outage heuristic is proposed to bring the process to a safe steady-state.

The proposed outage heuristic for network outages that exceed the stability bounds of the control system is based on the approximate closed-loop step response, which is used as a rough estimate of the output behavior when the actual feedback information is unavailable. Using this estimate, the control can continue during the outage to bring the process into a desired steady-state. In the following subsections the details of the steady-state heuristic (SSH) are first developed, then the properties of the SSH are established, and, finally, several simulation results are presented comparing the PID controller with the Networked PID, and the PID PLUS controller. [P11]
5.4.1. The Steady-State Heuristic

Assume that the parameters of a simple FOTD approximation of the process are known, and that the operator specifies the desired time-constant of the closed-loop system within the internal model control framework. The IMC controller is tuned with a model of the process. In outage situations it seems intuitive to use the model to predict the process output at the controller.
A Kalman filter is suitable for state estimation in the case of missing feedback information [97], [144]. It is, however, computationally heavy. Another alternative to predict the required control action is to consider the case from the PID controller viewpoint and its three terms. A simple method is to linearly predict the error e, the integral of error Σe, and the derivative of error Δe of the PID controller during an outage from the previous values according to
\[
\begin{cases}
\hat e(k) = e(k-1) + h\,\Delta e(k-1) \\
\widehat{\Sigma e}(k) = \Sigma e(k-1) + h\, e(k-1) \\
\widehat{\Delta e}(k) = \Delta e(k-1)
\end{cases} \tag{104}
\]
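The prediction rule (104) is a one-line update per term; the function below is a direct transcription (the function name is an illustrative choice).

```python
# Linear prediction of the PID error terms during an outage: each term is
# extrapolated one step from its last received values.
def predict_pid_terms(e_prev, sum_e_prev, delta_e_prev, h):
    """One-step prediction of the error, its integral, and its derivative."""
    e_hat = e_prev + h * delta_e_prev    # extrapolate along the last slope
    sum_e_hat = sum_e_prev + h * e_prev  # keep integrating the last error
    delta_e_hat = delta_e_prev           # hold the derivative
    return e_hat, sum_e_hat, delta_e_hat
```

For example, with e = 0.5, Σe = 2.0, Δe = −0.1 and h = 1 s, the prediction gives approximately (0.4, 2.5, −0.1).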
Predicting, however, can make the system unstable in case of longer outages, although prediction should be advantageous for short breaks. Therefore, to tackle long network outages and to calculate the control action during outages, a method based on the expected step response of the IMC control design is proposed.
The general idea is that during an outage, the controller should behave as normal, approaching the steady-state values of the control loop. Consider a FOTD process with an IMC controller. The transfer function from the reference input y_r to the control error e is e = (1 − G_cl) y_r. Assuming that the approximations are sufficiently accurate, e.g. that the closed-loop transfer function can be approximated by G_cl ≈ G_f G_p^+ (53) and that (62) holds, the transfer function from y_r to e results in

\[
e = \left( 1 - G_f G_p^+ \right) y_r . \tag{105}
\]

If there is a unit step reference change, the error behaves as

\[
e(t) = \mathcal{L}^{-1}\left\{ \left( 1 - G_f G_p^+ \right) \frac{1}{s} \right\}
= \mathcal{L}^{-1}\left\{ \frac{1}{s} - \frac{e^{-\tau s}}{s\left( \lambda s + 1 \right)} \right\}
= e^{-(t-\tau)/\lambda} . \tag{106}
\]

The error thus decreases exponentially to zero with the time-constant λ, after an initial delay. A similar analysis can be done for the integral and the derivative of the error:

\[
\int e(t)\,dt = \mathcal{L}^{-1}\left\{ \frac{E(s)}{s} \right\}
= \mathcal{L}^{-1}\left\{ \frac{1}{s^2} - \frac{e^{-\tau s}}{s^2\left( \lambda s + 1 \right)} \right\}
= \tau + \lambda\left( 1 - e^{-(t-\tau)/\lambda} \right) \tag{107}
\]

\[
\dot e(t) = \mathcal{L}^{-1}\left\{ sE(s) \right\}
= \mathcal{L}^{-1}\left\{ 1 - \frac{e^{-\tau s}}{\lambda s + 1} \right\}
= \Delta(t) - \frac{1}{\lambda}\, e^{-(t-\tau)/\lambda} , \tag{108}
\]
where Δ(t) is the Dirac delta function. The general trends are likewise an exponential behavior. A good heuristic during a network outage is thus to let the error e, the integral of error Σe, and the derivative of error Δe approach exponentially the steady-state values of the PID controller. The e and Δe parts approach zero, as y − y_r = 0 at steady-state. The Σe part should approach the steady-state value of the integral term, determined by y_r, the process static gain G_p(0), and a possible output load disturbance D_load. The time-constant of the exponential decay is the closed-loop time-constant, which when using the IMC tuning method is λ (Section 3.4.1).
The action at the actuator during an outage with the steady-state heuristic is thus

\[
\begin{cases}
\hat e(k) = g\, e(k-1) \\
\widehat{\Sigma e}(k) = g\, \Sigma e(k-1) + \dfrac{1-g}{K_i K'}\, y_r(k) \\
\widehat{\Delta e}(k) = g\, \Delta e(k-1)
\end{cases} \tag{109}
\]

where g = e^{−h/λ} (44) accomplishes the exponential decay with time-constant λ. The Networked PID with SSH is depicted in Figure 65, where the control action is extrapolated at the actuator side during an outage.
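One outage step of the SSH update (109) can be sketched as follows, with K′ taken as the static gain of the process; iterating the update drives the controller terms to the steady state of the loop. The function name and the example numbers are illustrative.

```python
# Steady-state heuristic (109): the PID error terms decay with factor
# g = exp(-h/lambda) toward their steady-state values; the integral term
# approaches y_r / (Ki * Kprime), with Kprime the static gain of the process.
from math import exp

def ssh_step(e, sum_e, delta_e, y_r, h, lam, Ki, Kprime):
    """One outage step of the steady-state heuristic for a networked PID."""
    g = exp(-h / lam)
    e_hat = g * e                                            # error decays to 0
    sum_e_hat = g * sum_e + (1.0 - g) * y_r / (Ki * Kprime)  # integral -> target
    delta_e_hat = g * delta_e                                # derivative decays to 0
    return e_hat, sum_e_hat, delta_e_hat

# A long outage: the terms settle at the steady state of the loop
e, s, d = 1.0, 0.0, 0.2
for _ in range(1000):
    e, s, d = ssh_step(e, s, d, y_r=1.0, h=1.0, lam=4.0, Ki=0.5, Kprime=1.0)
```

Because each term is a stable first-order recursion with |g| < 1, the update stays bounded no matter how long the outage lasts, which is the stability property analyzed in Section 5.4.2.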
Figure 65. Networked PID with steady-state heuristic. The switch in the actuator side of the controller between the normal case and the outage case is indicated.

In the case of a load disturbance, it should be compensated at steady-state. In an outage situation where feedback is not available, disturbances cannot, in general, be compensated. A solution is to estimate the load disturbance at the controller side using a Kalman filter, as described in Section 2.9, and assume that the disturbance remains constant during the outage. The target steady-state is modified according to

\[
\widehat{\Sigma e}_{\mathrm{ss}}(k) = \frac{1}{K_i K'} \left( y_r(k) - \hat D_{\mathrm{load}}(k) \right), \tag{110}
\]

where D̂_load is the estimated load disturbance (49). This accomplishes a steady-state where the estimated load disturbance is compensated.
The advantage of this method is that it is stable even for long outages (see Section 5.4.2), as in an outage situation the system becomes effectively open loop and the heuristic approaches the desired steady-state. This heuristic can only be used with stable processes, but on the other hand, control of unstable systems is not advisable if unbounded network outages are to be expected.

The SSH can, moreover, be applied to other controllers, if the general step response behavior is an exponential approach to the reference value. This should be the case in many well tuned control loops.

The prediction and the steady-state heuristic can be combined, using the advantages of both methods above, by
\[
\begin{cases}
\hat e(k) = g\left( e(k-1) + h\,\Delta e(k-1) \right) \\
\widehat{\Sigma e}(k) = g\left( \Sigma e(k-1) + h\, e(k-1) \right) + \dfrac{1-g}{K_i K'}\, y_r(k) \\
\widehat{\Delta e}(k) = g\,\Delta e(k-1)
\end{cases} \tag{111}
\]
This may increase the performance during short outages, since the error signals should be more accurate, while at the same time ensuring stability during long outages.
5.4.2. Stability of the Steady-State Heuristic

In order to analyze the steady-state heuristic, state-space representations are derived. The state-space representations are given for the conventional PID and Networked PID controller structures for the normal operation case, and with the SSH for the case with an outage between the sensor and the controller. In the outage case, the controller can naturally only use its previous state-vector, and the received measurement is held. The networked controller, however, continues to integrate the error at the sensor, although it is not available to the controller input.
The network delay is assumed, without loss of generality, to be one sampling interval, and is incorporated in the sensor model. The sensor incorporates a low-pass filter (discrete-time, first-order, with time-constant D_f, d_f = e^{−h/D_f}) and set-point weighting b for the reference signal in the derivative part. Thus, the state-space representation for the normal sensor, with state-vector x_s(k) = [e(k) y_r(k)]^T, is

\[
\begin{cases}
x_s(k+1) = A_s x_s(k) + B_s \begin{bmatrix} y(k) & y_r(k) \end{bmatrix}^T \\
y_s(k) = C_s x_s(k)
\end{cases} \tag{112}
\]

where

\[
A_s = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad
B_s = \begin{bmatrix} B_{s1} & B_{s2} \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 0 & 1 \end{bmatrix}, \quad
C_s = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\]
The corresponding sensor matrices for the networked controller, containing the sensor outputs e, Σe, Δe, and the reference signal y_r, are defined for the state-vector x_Ns(k) = [e(k) Σe(k) Δe(k) e(k−1) y_r(k)]^T. In A_Ns and B_Ns = [B_Ns1 B_Ns2], the integrator state accumulates the error scaled by the sampling interval h, and the filtered derivative state is governed by the filter constant d_f, the sampling interval h, and the set-point weight b. The output matrix selects e, Σe, Δe, and y_r:

\[
C_{Ns} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}.
\]

The following controller structures are considered here: a conventional PID, which receives the error signal from the sensor and, in case of an outage, uses the previously received value (PID); a conventional PID with the steady-state heuristic during outages (PID-SSH). Then PID controllers with the networked structure are considered: the Networked PID (NPID), a Networked PID with the steady-state heuristic (NPID-SSH), and a predictive NPID-SSH (NPID-PSSH) controller.
The state-space representation of the conventional PID controller contains the states x_c(k) = [e(k−1) Σe(k−1) Δe(k−1) e(k−2) y_r(k−1)]^T, where the previous reference value is also stored, in case the next packet from the sensor is dropped. Correspondingly, the states of the Networked PID are x_c(k) = [e(k−1) Σe(k−1) Δe(k−1) y_r(k−1)]^T. The controller state-space equations for both regular control and control during outages are given in Table 10 and Table 11. The controller matrices Φ_c, Γ_c, and H_c are given such that the state-space representation

\[
\begin{cases}
x_c(k+1) = A_c x_c(k) + B_c y_s(k) \\
u(k) = C_c x_c(k) + D_c y_s(k)
\end{cases} \tag{113}
\]

is obtained by the conversion

\[
A_c = \Phi_c, \quad B_c = \Gamma_c, \quad C_c = H_c \Phi_c, \quad D_c = H_c \Gamma_c . \tag{114}
\]

To prove the stability of the proposed SSH a closed-loop model is needed. Define the state-vector of the whole system as x(k) = [x_p(k) x_s(k) x_c(k)]^T, consisting of the states of the process, sensor, and controller, respectively. The state-space representation of the whole system is then
\[
\begin{cases}
x(k+1) = \begin{bmatrix} A_p & B_p D_c C_s & B_p C_c \\ B_{s1} C_p & A_s & 0 \\ 0 & B_c C_s & A_c \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ B_{s2} \\ 0 \end{bmatrix} y_r(k) \\[1ex]
y(k) = \begin{bmatrix} C_p & 0 & 0 \end{bmatrix} x(k)
\end{cases} \tag{115}
\]

where the lower index p indicates the discretized state-space model of the process, and the rest of the matrices are as defined previously.

The stability of the SSH depends on two things: the stability of the heuristic during an outage, and the possible instability due to the switching.
Table 10. State-space representations of conventional and Networked PID controllers during normal operation. "Same" indicates that the matrix is the same as the one in the cell above.

Control strategy | Φc | Γc | Hc
Conventional PID | 5x5 matrix updating e, Σe, and the filtered derivative Δe (entries built from h, the filter constant d_f, and the set-point weight b) | 5x2 matrix mapping the received e(k) and y_r(k) into the states | [Kp Ki Kd 0 0]
PID-SSH | Same | Same | Same
NPID | 0 (4x4) | I (4x4) | [Kp Ki Kd 0]
NPID-SSH | Same | Same | Same
NPID-PSSH | Same | Same | Same
If the switching to the outage action is assumed to be infrequent, for example when a large jitter margin tuning of the controller is used, the switching can be neglected. Since the system is open-loop during the outage, the stability is given by the maximum eigenvalue of the state-transition matrix. The condition for stability is thus

\[
\max\left\{ \left| \operatorname{eig}\left( A_{\mathrm{drop}} \right) \right| \right\}
= \max\left\{ \max\left\{ \left| \operatorname{eig}(A_p) \right| \right\}, \max\left\{ \left| \operatorname{eig}(A_c) \right| \right\} \right\} < 1 , \tag{116}
\]

where

\[
A_{\mathrm{drop}} = \begin{bmatrix} A_p & B_p C_c \\ 0 & A_c \end{bmatrix}.
\]

The separation into the maximum eigenvalue of either the process or the controller is due to the fact that the A_drop matrix is upper triangular. Thus, if the process is stable, which is the case when using lambda tuning, the open-loop stability depends only on the controller stability. The PID-SSH and NPID-SSH both have a triple eigenvalue at g < 1 and a couple of eigenvalues at 1, due to the requirement to keep the reference in memory at the sensor.
Table 11. State-space representations of regular and Networked PID controllers during network outage operation. "Same" indicates that the matrix is the same as the one in the cell above.

Control strategy | Φc,drop | Γc,drop | Hc,drop
Conventional PID | [1 0 0 0 0; h 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1] | 0 (5x2) | [Kp Ki Kd 0 0]
PID-SSH | [gI (3x3), 0; 0, I (2x2)] | Same | Same
NPID | I (4x4) | 0 (4x4) | [Kp Ki Kd 0]
NPID-SSH | [g 0 0 0; 0 g 0 (1−g)/(Ki K′); 0 0 g 0; 0 0 0 1] | Same | Same
NPID-PSSH | [g 0 gh 0; gh g 0 (1−g)/(Ki K′); 0 0 g 0; 0 0 0 1] | Same | Same
Thus, the system is always stable during outages. With infrequent switching between regular operation and the outage heuristic, stability is preserved.

With frequent switching, the stability theory of Markov jump linear systems [34] can be used to show the stability of the proposed control scheme as follows. The whole system with switching between the regular and outage modes can be modeled as a discrete-time Markov jump linear system
x(k+1) = A_θ(k) x(k) + B_θ(k) y_r(k),   (117)

where θ(k) is the random jumping parameter, which in this case indicates whether the outage heuristic is used [34], similarly as for example in [33]. The stochastic parameter θ obeys the Markov chain with probability transition matrix P, with two states: one corresponding to good (G) and the other to bad (B) conditions (10). Define

θ(k) = 1, normal operation (G); θ(k) = 2, outage (B),

which indicates if the normal controller (θ = 1: Ac, Bc, Cc and Dc) or the packet drop controller matrices (θ = 2: Ac,drop, Bc,drop, Cc,drop, and Dc,drop) are used, Table 10 - Table 11 and (114).

The stability of a Markov jump linear system can be shown in several ways [34], of which one is repeated in the following. Define first

C = diag(A_i ⊗ A_i), i = 1, 2,
N = P^T ⊗ I,
A_1 = N C,   (118)

where A_i (n × n matrix) is the state-transition matrix of the system corresponding to the i-th state of the Markov chain θ, I is the identity matrix (here of dimension n²), and ⊗ is the Kronecker product. The system is mean square stable if [34]

max{ |eig(A_1)| } < 1.   (119)
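The mean square stability test (118)-(119) is easy to check numerically. The sketch below uses illustrative mode matrices and transition matrix, not values taken from the thesis: it builds A_1 with Kronecker products and evaluates its spectral radius.

```python
import numpy as np

def mss_spectral_radius(A_modes, P):
    """Spectral radius of A1 = (P^T kron I) diag(A_i kron A_i), cf. (118);
    the jump system is mean square stable iff the result is < 1 [34]."""
    n = A_modes[0].shape[0]
    m = len(A_modes)
    C = np.zeros((m * n * n, m * n * n))
    for i, A in enumerate(A_modes):          # C = diag(A_i kron A_i)
        C[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(A, A)
    N = np.kron(P.T, np.eye(n * n))          # N = P^T kron I_{n^2}
    return max(abs(np.linalg.eigvals(N @ C)))

# Illustrative matrices (not from the thesis): a stable normal mode, a
# marginally stable outage mode, and infrequent transitions into outage.
A_normal = np.array([[0.5, 0.1],
                     [0.0, 0.6]])            # theta = 1: regular operation (G)
A_outage = np.array([[1.0, 0.0],
                     [0.0, 0.9]])            # theta = 2: outage heuristic (B)
P = np.array([[0.95, 0.05],                  # rows: current state, cols: next
              [0.50, 0.50]])

print(mss_spectral_radius([A_normal, A_outage], P) < 1.0)  # True: mean square stable
```

Note that the outage mode alone is only marginally stable (eigenvalue 1 from the reference memory), yet the switched system is mean square stable because the outage state is left with high probability.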
5.4.3. Simulations and Comparisons

In this section the Networked PID control scheme described in Section 3.3 and the steady-state heuristic (Section 5.4.1) are evaluated in simulations. Two general simulation scenarios are considered, and in the following subsections the results are presented from various points of view. The first case is a deterministic simulation of a step response with an outage. All possible outage start and length combinations are simulated, from the beginning of the step response until it has settled down. The second case is a simulation with a random packet drop probability. Simulations of 400 step responses with different realizations of random packet drops are done and the average performance measured with the integral square error (31) is calculated.

A first order process (1) is used with K = 1, T = 10 s, and τ = 1 s, and an IMC-PID controller is used. The sampling interval of the controller is h = 1 s. The network delay is one sampling interval, which implies a total delay L = τ + h = 2 s (2) used in the IMC design. The closed-loop time-constant parameter for the IMC design (62) is selected as λ = 4 and n = 1. This tuning does not fulfill the requirement of a pure first-order step response, which is required for the steady-state heuristic. The SSH works flawlessly with a perfect model when the λ time-constant approximation holds, which has been tested in simulations. Therefore it is interesting to simulate how well it works outside this region. The following simulations show that the SSH works remarkably well also in these cases.

First the actions of the control schemes are compared. Step responses of an IMC-tuned PID controller (Section 2.7.2), the Networked PID controller (Section 3.3) with and without the SSH, the PID PLUS controller (Section 2.6.2), and Kalman filter based estimation (Section 2.9) with a standard PID are compared. In Figure 66 the control signal and process output are plotted for a step response and a step-wise load disturbance, with a sensor to controller communication outage between t = 8 - 12 seconds. The plots show that the conventional PID exhibits integral windup and overshoot because of the outage, whereas the Networked PID controller keeps the control signal constant during the outage.
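For concreteness, the tuning numbers used in these simulations can be reproduced with the textbook IMC-PID rules for a first-order plus dead-time model; the exact rule (62) used in the thesis may differ in detail.

```python
# Textbook IMC-PID tuning for a FOPDT model K e^{-Ls}/(T s + 1); the exact
# rule (62) used in the thesis may differ in detail.
K, T, tau, h = 1.0, 10.0, 1.0, 1.0   # gain, time constant, dead time, sampling interval
L = tau + h                          # total delay including the one-interval network delay
lam = 4.0                            # desired closed-loop time constant (lambda tuning)

Kp = (T + L / 2) / (K * (lam + L / 2))   # proportional gain
Ti = T + L / 2                           # integral time
Td = T * L / (2 * T + L)                 # derivative time
print(Kp, Ti, Td)  # 2.2 11.0 0.909...
```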
Figure 66. Step response (top) and load disturbance response (bottom) comparison. Step change at t = 1 s, with communication outage between t = 8 - 12 seconds, indicated by grey bar. On the left: control signal; on the right: process output.
The addition of the SSH makes the control follow approximately the desired control. The NPID‐SSH as well as the KF based outage estimator are close to the ideal response without an outage.
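The qualitative difference above can be reproduced with a small sketch. The discrete PI implementation, gains, and outage logic below are illustrative assumptions, not the thesis' exact controllers.

```python
import math

def simulate(hold_during_outage, n_steps=40, outage=range(8, 13)):
    """Unit step response of a first-order process under a positional PI
    controller. During a sensor-to-controller outage the controller either
    keeps integrating on the last received measurement (conventional PID,
    windup) or freezes its output (outage action of the Networked PID)."""
    K, T, h = 1.0, 10.0, 1.0
    a = math.exp(-h / T)
    Kp, Ti = 2.2, 11.0                       # illustrative PI tuning
    r = 1.0                                  # step reference
    y, u, integ, y_meas = 0.0, 0.0, 0.0, 0.0
    us = []
    for k in range(n_steps):
        if k not in outage:
            y_meas = y                       # fresh measurement arrives
        if k in outage and hold_during_outage:
            pass                             # Networked PID: keep u constant
        else:
            e = r - y_meas                   # stale y_meas during an outage
            integ += (h / Ti) * e            # -> integral windup
            u = Kp * (e + integ)
        us.append(u)
        y = a * y + (1 - a) * K * u          # process update
    return us

u_conv = simulate(hold_during_outage=False)
u_net = simulate(hold_during_outage=True)
print(len(set(u_net[8:13])) == 1)   # True: control held constant in outage
print(u_conv[12] > u_conv[8])       # True: windup grows the control signal
```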
Next, the performance of the algorithms as a function of network packet drop probability and outage length is studied. In Figure 67 the ISE control cost (31) is given as a function of the average packet drop probability, simulated with the random packet drop case, and as a function of outage length, averaged over all possible outage start times. The performance of the conventional PID is poor in both cases due to integral windup, which is obvious with increasing outage length. Introducing the SSH restores the behavior close to the case with no outage. These results are for a step response and similar results are obtained with a load disturbance. The generally higher cost for the random packet drop case is due to the measurement noise used in the simulations. Simulations with measurement noise variances ranging between 10⁻⁹ and 10⁻² are done and the results are averaged.

Figure 67. Control cost as function of outage length (left) and average packet drop probability (right).

The selection of the jitter margin for the controller determines the switching point of the SSH. With a large jitter margin, this switch is infrequent, but the control is conservative, as it corresponds to a larger λ according to (65). A too small jitter margin, on the other hand, switches to the outage action too often. Next, the selection of the jitter margin and the relationship between control performance and control degradation during outages is studied. The target is high control performance and a graceful degradation, where the control cost only increases slightly with decreasing network QoS. As already demonstrated in Figure 16, this depends on the controller tuning aggressiveness.
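The random packet drop scenario with averaged ISE can be sketched as follows; the hold-last-measurement strategy and the PI tuning are illustrative assumptions, not the exact simulation setup of the thesis.

```python
import math, random

def step_ise(drop_prob, n_steps=100, seed=None):
    """ISE of a unit step response when each sensor sample is lost
    independently with probability drop_prob; on a drop the controller
    reuses the last received measurement (an illustrative strategy)."""
    rng = random.Random(seed)
    K, T, h = 1.0, 10.0, 1.0
    a = math.exp(-h / T)
    Kp, Ti = 2.2, 11.0                   # illustrative PI tuning
    y, integ, y_meas, ise = 0.0, 0.0, 0.0, 0.0
    for _ in range(n_steps):
        if rng.random() >= drop_prob:
            y_meas = y                   # packet received
        e = 1.0 - y_meas
        integ += (h / Ti) * e
        u = Kp * (e + integ)
        y = a * y + (1 - a) * K * u
        ise += h * (1.0 - y) ** 2        # integral square error, cf. (31)
    return ise

# Average the cost over independent realizations, as in the 400-run study.
runs = 400
avg_ise = sum(step_ise(0.2, seed=i) for i in range(runs)) / runs
print(avg_ise)
```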
The control cost for different jitter margin tunings, or equivalently the selection of λ (65), and packet drop probabilities are plotted in Figure 68, with increasing cost for increased packet drop probability. The conventional PID controller (and the KF+PID) has a minimum cost in this case near δmax = 2.75 s in case of a small packet drop probability. When the drop probability increases, the cost increases drastically, making the point δmax = 3.2 s optimal (indicated by dotted arrow). Using the steady-state heuristic does not cause a shift in the optimal jitter margin tuning, because the SSH is used when packet drop occurs. Thus, for a graceful degradation δmax = 3.2 s should be chosen in this case for the conventional PID tuning, whereas there is no large degradation using the SSH and, consequently, the optimal point δmax = 2.75 s can be used in this setting.
The previous simulations were done with a perfect model. The process model is in practice never perfect, or the model is usually a simple low-order approximation of the real process. Therefore, the impact of modeling errors on the performance of the different approaches is studied next. The time-constant of the actual process is varied, whereas the process model is kept constant. The results shown in Figure 69 indicate that the methods have a similar degradation with error in the time-constant, except for the PID PLUS controller.
Figure 68. ISE cost as a function of jitter margin for different packet drop probabilities.
Figure 69. Control cost as function of model error, averaged over all combinations of outage start and length (left) and random packet drop probabilities ranging from 0 to 0.4 (right).

5.4.4. Summary

In this section the steady-state heuristic was introduced. During the potential longer outages in the wireless network, the control system must work in an open-loop configuration. The SSH is proposed to steer the system to a desired steady-state during these outages.

The SSH approximates the behaviour of the step response of the closed-loop system and is based on the IMC design framework. Using a load disturbance estimator at the controller, a constant load disturbance can be compensated for in the SSH.

Extensive simulations are done where the Networked PID controller is compared with the conventional PID controller, the PID PLUS controller, and Kalman-filter based estimation during outages. Despite its simplicity, the Networked PID controller with the SSH performs well compared to other methods, such as model based estimation during outages.
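The open-loop steering idea can be illustrated with a small sketch. The inversion-based input below is one plausible realization under a first-order process assumption, not the exact SSH of Section 5.4.1.

```python
import math

# Process K/(T s + 1) and the desired first-order closed-loop profile with
# time constant lam, as in the IMC design; numbers match the simulation setup.
K, T, h, lam, r = 1.0, 10.0, 1.0, 4.0, 1.0
a = math.exp(-h / T)

def ssh_input(t, y0=0.0, d_hat=0.0):
    """Open-loop input that makes the process output follow
    y_d(t) = r - (r - y0) exp(-t/lam), compensating an estimated constant
    input load disturbance d_hat: u = (T * dy_d/dt + y_d)/K - d_hat."""
    y_d = r - (r - y0) * math.exp(-t / lam)
    dy_d = (r - y0) / lam * math.exp(-t / lam)
    return (T * dy_d + y_d) / K - d_hat

y = 0.0
for k in range(60):                  # outage: no feedback is available
    u = ssh_input((k + 0.5) * h)     # midpoint sampling of the profile
    y = a * y + (1 - a) * K * u      # open-loop process update (no load)
print(abs(y - r) < 0.02)             # True: steered to the setpoint open loop
```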
6. CONCLUSIONS
In this thesis the simulation of wireless control systems and network adaptive control are discussed. Today, the use of wireless technology in control is mostly for single cable replacements, or in cases where cables are infeasible, for instance due to long distances. There is an interest in applying wireless control systems because of reduced installation and cabling costs, and the flexibility of wireless networks. The aim is to develop and deploy WNCSs in at least monitoring applications, process control, and home automation. New wireless applications will emerge due to the enabling properties regarding flexibility and mobility of wireless control technology.
The current wireless standards are designed for determinism and reliability, such that traditional control can be applied without modification. This thesis considers control over a next generation flexible, but unreliable network. The simulations and theory show that control is possible in this case also. By modifying the control algorithms, the control system can be tailored for wireless control. The presented adaptive control algorithms adapt to the conditions in the network, maintaining a working wireless control system despite the inherent network problems.

An approach of simplicity is followed, where the basic behavior of the network and control system is studied. Control based on the PID controller and the internal model control framework is utilized, to obtain a wireless control system that is easy to implement in practice and thus likely to be used in industry. To accommodate the basic control algorithms to the problems related to the unreliable wireless network, several controller adaptation mechanisms are proposed and studied through simulation and analytically.

The PiccSIM simulation platform developed in this thesis is a network and control co-simulation tool, which enables the study of (Wi)NCSs. The necessity of the simulation approach is motivated by the current research problems, in order to understand how control algorithms behave when run over an unreliable wireless network with specific network protocols. Additionally, sensor network applications have received much attention. The developed theory and control algorithms must be tested and verified through simulations before deployment. On large scale systems, simulation is the only option to study and verify the protocols and algorithms.
The simulations of WNCSs presented here give insight into the behavior and interaction between the network and the control system. Especially the impact of the network on the control system must be further understood. With PiccSIM, specific network protocols and the critical design details can be studied, and the particular impact on the control system assessed. Protocol and algorithm specific issues of both the network and the control system can be examined in detail.

The general assumptions used in the literature, for instance the uniform random drop assumption, are exchanged for the actual behavior of the network. It is, among other things, shown that different control loops experience different network QoS and thus the control performance varies, depending on their location in the network. The network QoS is mainly determined by the amount of nearby nodes and traffic, which can vary from one location to another in the wireless control system. The effect on the control thus varies, depending on the actual network protocols, traffic patterns, and implementation issues, which only become evident through realistic simulations of the whole application. In this thesis the focus is on the control system. Network protocol design is not studied here in detail, but specific protocol design for real-time control can be done with PiccSIM.

PiccSIM is an integration of two separate simulators. The process dynamics and control algorithms are simulated in Simulink and the network with ns-2. The integration of Simulink and ns-2, which are both widely used in their own fields, into one simulation tool produces a powerful and flexible NCS simulator, enabling the study of both the network and the control system at the same time. PiccSIM is suitable for simulation of any wireless or wired distributed application, including networked control systems and wireless sensor network applications. It has already delivered invaluable benefits in the research work at Aalto University. PiccSIM has been released as open source, and is available for download. Several research groups around the world are using it today.

The unique feature of this simulator is the PiccSIM Toolchain, which is a complete toolset for networked control system design, modeling and implementation. With it, the whole control system development, from design to simulation and implementation, can be done in one framework. The Toolchain contains graphical user interfaces for easy network and controller design, and a main interface for running simulations.

The main intended usage of PiccSIM is for research and rapid testing of new wireless control applications. The strength of the PiccSIM simulation platform is thus in the realistic simulation of WNCSs. This is demonstrated with several WNCS case studies in this thesis. Different wireless control applications are simulated and evaluated with PiccSIM. The practical simulations give insight into the behavior of WNCSs and their purpose is to act as stepping stones for the adaptive control algorithms developed in this thesis.
The PiccSIM Toolchain supports automatic code generation from Simulink models to wireless sensor nodes. This enables the verification of the simulated algorithms on real hardware, without the laborious and error prone process of re-implementing the algorithm on a different platform. Any wireless application can be first designed and tested in simulations, and then implemented by automatic code generation on real wireless sensor node hardware, with a button click. The automatic code generation capability is demonstrated with two examples, where control algorithms designed in simulations are implemented on real wireless nodes controlling an actual process.

PiccSIM, or ns-2, does not at this moment have simulation models for the emerging wireless automation network standards such as WirelessHART or ISA100.11a. Work by the Wireless Sensor Systems group at Aalto University is underway to add extensions to ns-2 that allow the simulation of these wireless automation fieldbus-type networks using the TSMP protocol.

In addition to the simulations, several networked control aspects, such as control system structures, network models, and the jitter margin stability criterion, are discussed. The considered network models focus on packet drop, as loss of feedback is the main obstacle for real-time control. The effect of the network on the control system is discussed. The congestion of the network and the control traffic rate is a trade-off between control robustness and performance. The network cost for control measure is developed to evaluate the impact of the network packet drop on the control performance.

The hard real-time operation requirement often imposed by traditional control design is not necessarily needed. The simulations in this thesis show that the control performance is degraded with degrading network quality of service. The stability is not endangered, as proven by the jitter margin theorem. The unreliability and varying performance of the wireless network can even be compensated by adaptive control schemes developed in this thesis.

The properties of the IMC-PID controller, in a NCS context, are studied, and the stability to delay jitter is investigated. The jitter margin theorem is applied to guarantee the stability of the controllers in the simulations. A simple relationship between the controller speed and stability to packet drop is established.

The conventional PID controller and network aware variants suitable for WNCSs, where the distributed nature of wireless control is taken into account, are considered. The conventional PID controller modifications are the suggested Networked PID and the industry-proposed PID PLUS controller. The Networked PID is a distributed PID controller, where a part of the control algorithm is calculated at the sensor, taking advantage of the full information from the process.
Action must be taken if the network induced delay exceeds the stability boundary of the control system. This is the case of longer network outages during congestion or rerouting, resulting for example in exceeding of the jitter margin stability bound. Switching to an outage heuristic is proposed, which is based on an open loop steady-state approach for driving the system to the desired state, according to the approximate closed-loop dynamics. Several control structures are compared with and without the outage heuristic, and the Networked PID with the SSH is shown in simulations to perform well compared to other conventional methods.

For enabling flexible wireless control applications, adaptive control schemes that can adapt to the QoS changes in the network are needed. In the literature, only control sampling interval adaptation has been considered. This thesis introduces several new methods for network adaptive control. The adaptive tuning of controllers depending on the network QoS enables automatic controller self-tuning at deployment and during operation. The controller tuning is changed if the network performance changes, according to the observed network delay jitter. The second adaptive control mechanism is an adaptive control speed mechanism that adjusts the control system to the current congestion of the network, such that a suitable control performance and traffic rate are obtained. The adaptation is done on the control speed and sampling interval, which determine the generated traffic. This accommodates the wireless control system to the available network bandwidth, without causing congestion in the network. The third adaptive scheme is the selection of tuning values for controllers in a diagonal decentralized MIMO control system depending on the reference signal. Diagonal control is selected due to the reduced communication requirement compared to full MIMO control. The adaptation scheme switches the controller tuning to load disturbance rejection when cross-interaction from another control loop is experienced, and then back to an efficient step response tuning when needed. The final adaptive method considers the control during network outages, where a heuristic is used when a long outage in the network occurs. All the proposed adaptive control schemes are simulated with PiccSIM and verified to perform as intended.

This thesis studies and gives an insight into wireless networked control, and several adaptive control algorithms, which adapt to the problems caused by the wireless network, are proposed. The knowledge gained in this thesis can be used to develop the agile wireless control systems of the future. The thesis considered primarily time-driven control with an unreliable network. Future work includes the development and simulation study of event-driven control, where one of the premises of the thesis regarding the use of a non-deterministic, CSMA-type MAC is natural. Event-driven control will result in more agile and resource optimized control applications than time-driven control.
The field of wireless control systems still needs better understanding. New design and tuning tools need to be developed to apply wireless control in real industrial cases. Pilot plants are needed to demonstrate the feasibility and applicability of wireless control systems in practice, to pave the way for general use. The benefits of wireless automation, both financial and in flexibility, will most certainly drive the increasing application of wireless technology in control.
REFERENCES
[1] Aakvaag, N., M. Mathiesen, and G. Thonet, Timing and power issues in wireless sensor networks ‐ an industrial test case, in Proc. International Conference on Parallel Processing Workshops, 14‐17 June, 2005, pp. 419‐ 426.
[2] Akyildiz, I.F., S. Weilian, Y. Sankarasubramaniam, and E. Cayirci, Wireless sensor networks: A survey, Computer Networks, vol. 38, iss. 4, March 2002.
[3] Albertos, P., M. Vallés, and A. Valera, Controller transfer under sampling rate dynamic changes, IFAC Workshop on Modelling and Analysis of Logic Controlled Dynamic Systems, Irkutsk, Russia, 30 June‐1 August, 2003.
[4] Al‐Hammouri, A.T., M.S. Branicky, V. Liberatore, and S.M. Phillips, Decentralized and dynamic bandwidth allocation in networked control systems, in Proc. 20th International Parallel and Distributed Processing Symposium, 25‐29 April, 2006.
[5] Al‐Hammouri, A.T., V. Liberatore, H. Al‐Omari, Z. Al‐Qudah, M.S. Branicky, and D. Agrawal, A co‐simulation platform for actuator networks, in Proc. ACM Conference on Embedded Networked Sensor Systems, Sydney, 2007.
[6] Akkaya, K. and M. Younis, A survey on routing protocols for wireless sensor networks, Ad Hoc Networks, vol. 3, iss. 3, Elsevier, May 2005, pp. 325‐349.
[7] Andersson, M., D. Henriksson, A. Cervin, and K.‐E. Årzén, Simulation of wireless networked control systems, in Proc. 44th IEEE Conference on Decision and Control and European Control Conference, Seville, Spain, December 2005.
[8] Anta, A. and P. Tabuada, On the benefits of relaxing the periodicity assumption for networked control systems over CAN, IEEE International Real‐Time Systems Symposium, pp. 3‐12, 2009.
[9] Antsaklis, P. and J. Baillieul, (Eds.), Special issue on networked control systems, IEEE Transactions on Automatic Control, vol. 49, no. 9, September 2004, pp. 1421–1597.
[10] Baldwin, P., S. Kohli, E.A. Lee, X. Liu, and Y. Zhao, Modeling of sensor nets in Ptolemy II, in Proc. Third International Symposium on Information Processing in Sensor Networks, Berkeley, CA, 26‐27 April, 2004, pp. 359‐368.
[11] Baronti, P., P. Pillai, V.W.C. Chook, S. Chessa, A. Gotta, and Y.F. Hu, Wireless sensor networks: A survey on the state of the art and the 802.15.4 and ZigBee standards, Computer Communications, vol. 30, iss. 7, 26 May 2007, pp. 1655‐1695.
[12] Baum, L. E., T. Petrie, G. Soules, and N. Weiss, A maximization technique occurring in the statistical analysis of probabilistic functions in Markov chains, The Annals of Mathematical Statistics, vol. 41 no. 1, February 1970, pp. 164–171.
[13] Biasi, M. De, C. Snickars, K. Landernas, and A.J. Isaksson, Simulation of process control with WirelessHART networks subject to packet losses, in Proc. IEEE International Conference on Automation Science and Engineering, 23‐26 August, 2008.
[14] Bolot J.‐C. and A.U. Shankar, Dynamic behavior of rate‐based flow control mechanisms, in Proc. ACM SIGCOMM Computer Communication Review, vol. 20, iss. 2, April 1990, pp. 35‐49.
[15] Bolot, J.‐C., Characterizing end‐to‐end packet delay and loss in the Internet, Journal of High‐Speed Networks, vol. 2, 1993, pp. 289 – 298.
[16] Bond, A., My view: wireless offers a chance to get it right the third time, The IEE Computing & Control Engineering, vol. 16, iss. 6, Dec. 2005/Jan. 2006.
[17] Branicky, M.S., V. Liberatore, and S.M. Phillips, Networked control system co‐simulation for co‐design, in Proc. American Control Conference, vol. 4, 4‐6 June, 2003, pp. 3341‐ 3346.
[18] Breslau, L., D. Estrin, K. Fall, S. Floyd, J. Heidemann, A. Helmy, P. Huang, S. McCanne, K. Varadhan, Y. Xu, and H. Yu, Advances in network simulation, Computer, vol. 33, no. 5, May 2000, pp. 59‐67.
[19] Brooks, T., Wireless technology for industrial sensor and control networks, in Proc. Sensors for Industry Conference, Rosemount, Illinois, USA, 5‐7 November, 2001.
[20] Buttazzo, G., M. Velasco, P. Marti, G. Fohler, Managing quality‐of‐control performance under overload conditions, in Proc. 16th Euromicro Conference on Real‐Time Systems, July, 2004.
[21] Cena, G. and F. Vasques, Guest Editorial: Special section on communication in automation—Part I, IEEE Transactions on Industrial Informatics, vol. 4, iss. 2, May, 2008, pp. 2‐5.
[22] Cervin, A., D. Henriksson, B. Lincoln, J. Eker, and K.‐E. Årzén, How does control timing affect performance?, IEEE Control Systems Magazine, vol. 23, iss. 3, June 2003, pp. 16‐30.
[23] Cervin, A., B. Lincoln, J. Eker, K.‐E. Årzén, and G. Buttazzo, The jitter margin and its application in the design of real‐time control systems, in Proc. 10th International Conference on Real‐Time and Embedded Computing Systems and Applications, Göteborg, Sweden, Aug. 2004.
[24] Cervin, A., M. Ohlin, and D. Henriksson, Simulation of networked control systems using TrueTime, in Proc. 3rd International Workshop on Networked Control Systems: Tolerant to Faults, Nancy, France, June 2007.
[25] Cervin, A. and T. Henningsson, Scheduling of event‐triggered controllers on a shared network, in Proc. 47th IEEE Conference on Decision and Control, Cancun, Mexico, December 2008.
[26] Chang, X., Network simulations with OPNET, in Proc. 1999 Winter Simulation Conference, vol. 1, Phoenix, AZ, USA, 5‐8 December, 1999, pp. 307‐314.
[27] Chen, L., S.H. Low, M. Chiang, and J.C. Doyle, Cross‐layer congestion control, routing and scheduling design in ad hoc wireless networks, in Proc. 25th IEEE International Conference on Computer Communications, April 2006.
[28] Chen, P., W. Zhang, and D. Gu, Quantitative parameter tuning scheme for a class of multi‐loop control systems, IET Control Theory & Applications, vol. 1, no. 5, 2007, pp. 1413‐1422.
[29] Chen, F., R. German, and F. Dressler, QoS‐oriented integrated network planning for industrial wireless sensor networks, in Proc. 6th IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, Poster Session, Rome, Italy, June 2009.
[30] Chiu, D.M. and R. Jain, Analysis of the increase and decrease algorithms for congestion avoidance in computer networks, Computer Networks and ISDN Systems, vol. 17, 1989, pp. 1‐14.
[31] Colandairaj, J., G. Irwin, and W. Scanlon, Analysis and co‐simulation of an IEEE 802.11B wireless networked control system, in Proc. 16th IFAC World Congress, Prague, Czech Republic, 4‐8 July, 2005.
[32] Colandairaj, J., G.W. Irwin, and W.G. Scanlon, A co‐design solution for wireless feedback control, in Proc. IEEE International Conference on Networking, Sensing and Control, London, UK, 15‐17 April, 2007.
[33] Colandairaj, J., G.W. Irwin, and W.G. Scanlon, Wireless networked control systems with QoS‐based sampling, IET Control Theory & Applications, 2007, no. 1, pp. 430‐438.
[34] Costa, O.L., M.D. Fragoso, and R.P. Marques, Discrete‐time Markov jump linear systems, Springer‐Verlag, London, UK, 2005.
[35] Curren, D., A survey of simulation in sensor networks, University of Binghamton.
[36] Decotignie, J.‐D., Ethernet‐based real‐time and industrial communications, Proceedings of the IEEE, vol. 93, iss. 6, June 2005, pp. 1102‐1117.
[37] Dorf, R.C., R.H. Bishop, Modern Control Systems (10th edition), Prentice Hall, 2004.
[38] Dunkels, A., B. Grönvall, and T. Voigt, Contiki ‐ a lightweight and flexible operating system for tiny networked sensors, in Proc. First IEEE Workshop on Embedded Networked Sensors, Tampa, Florida, USA, November 2004.
[39] Dymola, online: www.dynasim.se.
[40] Egan, D., The emergence of ZigBee in building automation and industrial control, Computing & Control Engineering Journal, vol. 16, iss. 2, April‐May 2005, pp. 14‐19.
[41] Elliott, E.O., Estimates of error rates for codes on burst‐noise channels, Bell System Technical Journal, no. 42, 1963, pp. 1977‐1997.
[42] Eker, J., P. Hagander, and K.E. Årzén, A feedback scheduler for real time control tasks, Control Engineering Practice, vol. 8, 2000, pp. 1369‐1378.
[43] Enz, C.C., A. El‐Hoiydi, J.‐D. Decontigne, and V. Peiris, WiseNET: an ultralow‐power wireless sensor network solution, IEEE Computer, vol. 37, iss. 8, August 2004.
[44] Eriksson, L., V. Hölttä, and M. Misol, Modeling, simulation and control of a laboratory‐scale trolley crane, in Proc. 47th Conference on Simulation and Modelling, Helsinki, Finland, 28‐29 September, 2006.
[45] Eriksson, L., R. Jäntti, and M. Pohjola, Säädön ja tietoliikenteen yhteissimulointi langattomassa automaatiossa [Co‐simulation of control and communication in wireless automation], Automaatioväylä, no. 6, 2007.
[46] Eriksson, L., M. Elmusrati, M. Pohjola, Introduction to wireless automation, Collected Papers of the Spring 2007 Postgraduate Seminar, Helsinki University of Technology, Control Engineering, Report 155, 2008.
[47] Eriksson, L., PID controller design and tuning in networked control systems, Ph.D. Thesis, Helsinki University of Technology, October 2008.
[48] Eriksson, L., T. Oksanen, and K. Mikkola, PID controller tuning rules for integrating processes with varying time‐delays, Journal of the Franklin Institute, vol. 346, iss. 5, June 2009, pp. 470‐487.
[49] Feisel, L.D. and A.J. Rosa, The role of the laboratory in undergraduate engineering education, Journal of Engineering Education, vol. 94, no. 1, January 2005.
[50] Flammini, A., P. Ferrari, D. Marioli, E. Sisinni, and A. Taroni, Wired and wireless sensor networks for industrial applications, Microelectronics Journal, vol. 40, iss. 9, September 2009, pp. 1322‐1336.
[51] Floyd, S. and V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Transactions on Networking, vol. 1, iss. 4, August 1993, pp. 397‐413.
[52] FreeRTOS ‐ Free Real‐Time Operating System, available at http://www.freertos.org/
[53] Gabel, O. and L. Litz, QoS‐adaptive control in NCS with variable delays and packet losses ‐ a heuristic approach, in Proc. 43rd IEEE Conference on Decision and Control, Atlantis, Paradise Island, Bahamas, December 14‐17, 2004.
[54] Ganesan, D., D. Estrin, and J. Heidemann, Dimensions: why do we need a new data handling architecture for sensor networks?, ACM SIGCOMM Computer Communication Review, vol. 33, iss. 1, January 2003.
[55] Garcia, C.E. and M. Morari, Internal model control. A unifying review and some new results, Industrial & Engineering Chemistry Process Design and Development, vol. 21, iss. 2, 1982.
[56] Gilbert, E.N., Capacity of a burst‐noise channel, Bell System Technical Journal, no. 39, 1960, pp. 1253‐1265.
[57] Goldsmith, A., Wireless communication, Cambridge University Press, Cambridge, UK, 2004.
[58] Gravier, C., J. Fayolle, B. Bayard, M. Ates, and J. Lardon, State of the art about remote laboratories paradigms ‐ Foundations of ongoing mutations, International Journal of Online Engineering, vol. 4, iss. 1, 2008.
[59] Gungor, V.C., G.P. Hancke, Industrial wireless sensor networks: challenges, design principles, and technical approaches, IEEE Transactions on Industrial Electronics, vol. 56, no. 10, October 2009.
[60] Gupta, V., B. Hassibi, R.M. Murray, Optimal LQG control across packet‐dropping links, Systems & Control Letters, vol. 56, iss. 6, June 2007, pp 439‐446.
[61] Hasan, M.S., Y. Hongnian, A. Griffiths, and T.C. Yang, Simulation of distributed wireless networked control systems over MANET using OPNET, in Proc. IEEE International Conference on Networking, Sensing and Control, London, UK, 15‐17 April, 2007, pp. 699‐704.
[62] Hasan, M.S., H. Yu, A. Griffiths, and T.C. Yang, Co‐simulation framework for networked control systems over multi‐hop mobile ad‐hoc networks, in Proc. 17th IFAC World Congress, Seoul, Korea, 6‐11 July, 2008.
[63] Heimlich, O., R. Sailer, and L. Budzisz, NMLab: A co‐simulation framework for Matlab and ns‐2, in Proc. 2nd International Conference on Advances in System Simulation, Nice, France, 22‐27 August, 2010.
[64] Henriksson, E., H. Sandberg, and K.H. Johansson, Reduced‐order predictive outage compensators for networked systems, in Proc. IEEE Conference on Decision and Control, Shanghai, China, 2009.
[65] Hespanha, J., P. Naghshtabrizi, and Y. Xu, A survey of recent results in networked control systems, Proceedings of the IEEE, Special Issue on Networked Control Systems Technology, vol. 95, no. 1, 2006, pp. 138‐162.
[66] Hohlfeld, O., R. Geib, G. Haßlinger, Packet loss in real‐time services: Markovian models generating QoE impairments, in Proc. 16th International Workshop on Quality of Service, June 2008, pp. 239‐248.
[67] Hongbo, L., Z. Sun, M.‐Y. Chow, and B. Chen, State feedback controller design of networked control systems with time delay and packet dropout, in Proc. IFAC World Congress, Seoul, South Korea, 6‐11 July, 2008.
[68] Hou, I.H., V. Borkar, P.R. Kumar, A theory of QoS for wireless, in Proc. 28th IEEE Conference on Computer Communications, Rio de Janeiro, Brazil, June 2009, pp. 486‐494.
[69] Irwin, G.W., J. Colandairaj, and W.G. Scanlon, An overview of wireless networks in control and monitoring, in Proc. International Conference on Intelligent Computing, Kunming, China, 16‐19 August, 2006.
[70] ISA100, Wireless Systems for Automation, online: http://www.isa.org/isa100/.
[71] Ji, K. and W.‐J. Kim, Optimal bandwidth allocation and QoS‐adaptive control co‐design for networked control systems, International Journal of Control, Automation, and Systems, vol. 6, no. 4, August 2008, pp. 596‐606.
[72] Kao, C.‐Y. and B. Lincoln, Simple stability criteria for systems with time‐varying delays, Automatica, vol. 40, 2004, pp. 1429‐1434.
[73] Karl, H., A. Willig, Protocols and architectures for wireless sensor networks, Wiley, June 2005.
[74] Kawka, P. and A. Alleyne, Stability and feedback control of wireless networked systems, in Proc. American Control Conference, Portland, OR, USA, 2005, pp. 2953–2959.
[75] Kelly, F.P., A.K. Maulloo, D.K.H. Tan, Rate control for communication networks: Shadow prices, proportional fairness and stability, The Journal of the Operational Research Society, vol. 49, no. 3, March 1998, pp. 237‐252.
[76] Kintner‐Meyer, M., Opportunities of wireless sensors and controls for building operation, Journal of Energy Engineering, vol. 102, iss. 5, September 2005, pp. 27‐48.
[77] Kjesbu, S. and T. Brunsvik, Radiowave propagation in industrial environments, in Proc. 26th Annual Conference of the IEEE Industrial Electronics Society, vol. 4, Nagoya, Japan, 22‐28 October, 2000.
[78] Kohvakka, M., Medium access control and hardware prototype designs for low‐energy wireless sensor networks, Ph.D. Thesis, Tampere University of Technology, publication 808, May 2009.
[79] Kotz, D., C. Newport, R.S. Gray, J. Liu, Y. Yuan, and C. Elliott, Experimental evaluation of wireless simulation assumptions, in Proc. 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Venice, Italy, 4‐6 October, 2004.
[80] Koumpis, K., L. Hanna, M. Andersson, and M. Johansson, Wireless industrial control and monitoring beyond cable replacement, in Proc. PROFIBUS International Conference, Coombe Abbey, Warwickshire, UK, June 2005.
[81] Krishnamurthy, L., R. Adler, P. Buonadonna, J. Chhabra, M. Flanigan, A. Kushalnagar, and M. Yarvis, Design and deployment of industrial sensor networks: experiences from a semiconductor plant and the North Sea, in Proc. 3rd International Conference on Embedded Networked Sensor Systems, San Diego, California, USA, 2005.
[82] Kumar, P.R., New technological vistas for systems and control: the example of wireless networks, IEEE Control Systems Magazine, vol. 21, iss. 1, February 2001, pp. 24‐37.
[83] Kuorilehto, M., M. Hännikäinen, T.D. Hämäläinen, Rapid design and evaluation framework for wireless sensor networks, Ad Hoc Networks, Elsevier, vol. 6, iss. 6, 15 August, 2008, pp. 909‐935.
[84] Lee, D.‐Y., M. Lee, Y. Lee, and S. Park, Mp criterion based multi‐loop PID controller tuning for desired closed‐loop responses, Korean Journal of Chemical Engineering Science, vol. 61, no. 5, 2003, pp. 8‐13.
[85] Lee, J., W. Cho, and T.F. Edgar, Multi‐loop PI controller tuning for interacting multivariable processes, Computers and Chemical Engineering, vol. 22, 1998, pp. 1711‐1722.
[86] Lee, Y., S. Park, M. Lee, C. Brosilow, PID controller tuning for desired closed‐loop responses for SI/SO systems, AIChE Journal, vol. 44, no. 1, 1998, pp. 106‐115.
[87] Leland, W.E., M.S. Taqqu, W. Willinger, and D.V. Wilson, On the self‐similar nature of Ethernet traffic (extended version), IEEE/ACM Transactions on Networking, vol. 2, iss. 1, February 1994.
[88] Lennvall, T., S. Svensson, and F. Hekland, A comparison of WirelessHART and Zigbee for industrial applications, in Proc. 7th IEEE International Workshop on Factory Communication Systems, 2008.
[89] Levis, P., N. Lee, M. Welsh, and D. Culler, TOSSIM: accurate and scalable simulation of entire TinyOS applications, in Proc. 1st International Conference on Embedded Networked Sensor Systems, Los Angeles, California, USA, 2003, pp. 126‐137.
[90] Lian, F.‐L., J.R. Moyne, and D.M. Tilbury, Control performance study of a networked machining cell, in Proc. American Control Conference, vol. 4, 2000, pp. 2337‐2341.
[91] Lian, F.‐L., J.R. Moyne, and D.M. Tilbury, Performance evaluation of control networks: Ethernet, ControlNet, and DeviceNet, IEEE Control Systems Magazine, vol. 21, no. 1, February 2001, pp. 66–83.
[92] Lian, F.‐L., J. Moyne, and D. Tilbury, Network design consideration for distributed control systems, IEEE Transactions on Control Systems Technology, vol. 10, no. 2, March 2002.
[93] Lian, F.‐L., J.K. Yook, D.M. Tilbury, and J. Moyne, Network architecture and communication modules for guaranteeing acceptable control and communication performance for networked multi‐agent systems, IEEE Transactions on Industrial Informatics, vol. 2, no. 1, February 2006.
[94] Ling, Q., M.D. Lemmon, Robust performance of soft real‐time networked control systems with data dropouts, in Proc. 41st IEEE Conference on Decision and Control vol. 2, 10‐13 December, 2002, pp. 1225‐1230.
[95] Lincoln, B. and B. Bernhardsson, Optimal control over networks with long random delays, in Proc. 14th International Symposium on Mathematical Theory of Networks and Systems, Laboratoire de Théorie des Systèmes, University of Perpignan, 2000.
[96] Liu, X. and A. Goldsmith, Wireless medium access control in networked control systems, in Proc. 42nd Conference on Decision and Control, December 2003.
[97] Liu, X. and A. Goldsmith, Kalman filtering with partial observation losses, in Proc. 43rd IEEE Conference on Decision and Control, vol. 4, Bahamas, 14‐17 December, 2004, pp. 4180‐4186.
[98] Liu, X. and A. Goldsmith, Cross‐layer design of distributed control over wireless networks, Systems and Control: Foundations and Applications, Editor T. Basar, Birkhauser, 2005.
[99] Liu, Q., S. Zhou, and G.B. Giannakis, Cross‐layer scheduling with prescribed QoS guarantees in adaptive wireless networks, IEEE Journal on Selected Areas in Communications, vol. 23, no. 5, May 2005, pp. 1056‐1066.
[100] Lucio, G.F., M. Paredes‐Farrera, E. Jammeh, M. Fleury, and M.J. Reed, OPNET modeler and ns‐2: Comparing the accuracy of network simulators for packet‐level analysis using a network testbed, WSEAS Transactions on Computers, vol. 2, no. 3, July 2003, pp. 700‐707.
[101] Mahrenholz, D. and S. Ivanov, Real‐time network emulation with ns‐2, in Proc. 8th IEEE International Symposium on Distributed Simulation and Real‐Time Applications, 21‐23 October, 2004, pp. 29‐36.
[102] Di Marco, P., P. Park, C. Fischione, and K.H. Johansson, TREnD: a timely, reliable, energy‐efficient and dynamic WSN protocol for control applications, in Proc. IEEE International Conference on Communications, Cape Town, South Africa, 23‐27 May, 2010.
[103] Maurice, W.P.H.H., A.R. Teel, N. van de Wouw, and D. Nešić, Networked Control Systems With Communication Constraints: Tradeoffs Between Transmission Intervals, Delays and Performance, IEEE Transactions on Automatic Control, vol. 55, no. 8, August 2010, pp. 1781‐1796.
[104] Maybeck, P.S., Stochastic models, estimation and control, vol. 1, Academic Press, 1979, pp. 229–230.
[105] Middleton, R.H., C.M. Kellett, and R.N. Shorten, Fairness and convergence results for additive‐increase multiplicative‐decrease multiple‐bottleneck networks, in Proc. 45th IEEE Conference on Decision and Control, San Diego, CA, USA, December 13‐15, 2006.
[106] Mirkin, L., Some remarks on the use of time‐varying delay to model sample‐and‐hold circuits, IEEE Transactions on Automatic Control, vol. 52, iss. 6, 2007, pp. 1109‐1112.
[107] Modelica and the Modelica Association, online: http://www.modelica.org.
[108] Montestruque, L.A. and P.J. Antsaklis, On the model‐based control of networked systems, Automatica, vol. 39, no. 10, 2003.
[109] Montestruque, L. and M.D. Lemmon, CSOnet: a metropolitan scale wireless sensor‐actuator network, in Proc. International Workshop on Mobile Device and Urban Sensing, 2008.
[110] Moyne, J.R. and D.M. Tilbury, The emergence of industrial control networks for manufacturing control, diagnostics, and safety data, Proceedings of the IEEE, vol. 95, iss. 1, January 2007.
[111] Mukherjee, A., On the dynamics and significance of low frequency components of the internet load. Internetworking: Research and Experience, 5, no. 4. 1994, pp. 163–205.
[112] Mustard, S., Security of distributed control systems: the concern increases, The IEE Computing & Control Engineering, vol. 16, iss. 6, Dec. 2005/Jan. 2006.
[113] Nebot, E.M., M. Bozorg, and H.F. Durrant‐Whyte, Decentralized architecture for asynchronous sensors, Journal of Autonomous Robots, Kluwer Academic Publishers, Boston, vol. 6, no. 2, May, 1999, pp. 147‐164.
[114] Nethi, S., C. Gao, R. Jäntti, and M. Pohjola, Localized multiple next‐hop routing protocol, in Proc. 7th international conference on ITS telecommunication, Sophia Antipolis, France, June 5‐8, 2007.
[115] Neumann, P., Communication in industrial automation—What is going on? Control Engineering Practice, Special Issue on Manufacturing Plant Control: Challenges and Issues, vol. 15, iss. 11, November 2007, pp. 1332‐1347.
[116] Nikolakopoulos, G., A. Panousopoulou, A. Tzes, and J. Lygeros, Multi‐hopping induced gain scheduling for wireless networked controlled systems, in Proc. 44th IEEE Conference on Decision and Control, and the European Control Conference, Seville, Spain, December 12‐15, 2005.
[117] Nixon, M., D. Chen, T. Blevins, and A.K. Mok, Meeting control performance over a wireless mesh network, in Proc. 4th IEEE Conference on Automation Science and Engineering, Washington DC, USA, 23‐26 August, 2008, pp. 540‐547.
[118] ns‐2, The Network Simulator, online: http://www.isi.edu/nsnam/ns/ and http://nsnam.isi.edu/nsnam/index.php/Main_Page.
[119] OPNET Technologies, online: http://www.opnet.com.
[120] O.C. Imer, S. Yuksel, T. Basar, Optimal control of LTI systems over unreliable communication links, Automatica, vol. 42, iss. 9, September 2006, pp. 1429‐1439.
[121] Overstreet, J.W., A. Tzes, An internet‐based real‐time control engineering laboratory, IEEE Control Systems Magazine, vol. 19, iss. 5, October 1999.
[122] Pagano, P., M. Chitnis, G. Lipari, C. Nastasi, and Y. Liang, Simulating real‐time aspects of wireless sensor networks, EURASIP Journal on Wireless Communications and Networking, vol. 2010, Article ID 107946, 2010.
[123] Paradiso, J.A., T. Starner, Energy scavenging for mobile and wireless electronics, IEEE Pervasive Computing, vol. 4, no. 1, January‐March 2005, pp. 18‐27.
[124] Pellegrini, F. De, D. Miorandi, S. Vitturi, and A. Zanella, On the use of wireless networks at low level of factory automation systems, IEEE Transactions on Industrial Informatics, vol. 2, iss. 2, May 2006, pp. 129‐143.
[125] Pérez Yuste, A., Early developments of wireless remote control: the Telekino of Torres‐Quevedo, Proceedings of the IEEE, vol. 96, no. 1, January 2008.
[126] Perkins, C.E. and E.M. Royer, Ad‐hoc on‐demand distance vector routing, in Proc. 2nd IEEE Workshop on Mobile Computer Systems and Applications, Washington, DC, USA, 1999.
[127] PiccSIM, available online: http://wsn.tkk.fi/en/software/PiccSIM
[128] Ploplys, N. J., P.A. Kawka, and A.G. Alleyne, Closed‐loop control over wireless networks, IEEE Control Systems Magazine, vol. 24, iss. 3, June, 2004, pp. 58‐ 71.
[129] Pohjola, M., L. Eriksson, and H. Koivo, Tuning of PID Controller in Networked Control, in Proc. 32nd IEEE Industrial Electronics Conference, Paris, France, 7‐10 November, 2006.
[130] Pohjola, M., S. Nethi, R. Jäntti, Wireless control of mobile robot squad with link failure, in Proc. First Workshop on Wireless Multihop Communications in Networked Robotics, Berlin, Germany, 4th of April, 2008.
[131] Prasad, A.R., N.R. Prasad, A. Kamerman, H. Moelard, and A. Eikelenboom, Performance evaluation, system design and network deployment of IEEE 802.11, Wireless Personal Communications, Kluwer Academic Publishers, Hingham, MA, USA, vol. 1, iss. 19, October 2001, pp. 57‐79.
[132] Rabi, M. and K.H. Johansson, Event‐triggered strategies for industrial control over wireless networks, invited paper, Wireless Internet Conference, Maui, HI, USA, 2008.
[133] Reddy, D., G.F. Riley, B. Larish, and Y. Chen, Measuring and explaining differences in wireless simulation models, in Proc. 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, 2006.
[134] Rejaie, R., M. Handley, and D. Estrin, RAP: An end‐to‐end rate‐based congestion control mechanism for realtime streams in the Internet, in Proc. 18th Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3, New York, USA, 21‐25 March, 1999.
[135] Rivera, D.E., M. Morari, S. Skogestad, Internal model control: PID controller design, Industrial & Engineering Chemistry Process Design and Development, no. 25, 1986, pp. 252‐265.
[136] Ryu, S., C. Rump, and C. Qiao, Advances in active queue management (AQM) based TCP congestion control, Telecommunication Systems, vol. 25, no. 3‐4, March 2004, pp. 317‐351.
[137] Samii, S., A. Cervin, P. Eles, and Z. Peng, Integrated scheduling and synthesis of control applications on distributed embedded systems, in Proc. ACM IEEE Design, Automation & Test in Europe, April 2009, pp. 57–62.
[138] Sánchez, J., S. Dormido, R. Pastor, and F. Morilla, A Java/Matlab‐based environment for remote control system laboratories: Illustrated with an inverted pendulum, IEEE Transactions on Education, vol. 47, no. 3, August 2004.
[139] Sanchis, R., I. Peñarrocha, and P. Albertos, Design of robust output predictors under scarce measurements with time‐varying delays, Automatica, no. 43, 2007, pp. 281‐289.
[140] Scheible, G., D. Dzung, J. Endresen, and J.‐E. Frey, Unplugged but connected ‐ design and implementation of a truly wireless real‐time sensor/actuator interface, IEEE Industrial Electronics Magazine, vol. 1, no. 2, Summer 2007, pp. 25‐34.
[141] Schenato, L., Optimal estimation in networked control systems subject to random delay and packet drop, in Proc. 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 2006.
[142] Sensinode Ltd., online: http://www.sensinode.com/.
[143] Schieble, G., D. Dzung, J. Endresen, and J.‐E. Frey, Design and implementation of a truly wireless real‐time sensor/actuator interface, IEEE Industrial Electronics Magazine, Summer 2007.
[144] Sinopoli, B., L. Schenato, M. Franceschetti, K. Poolla, M.I. Jordan, and S.S. Sastry, Kalman filtering with intermittent observations, IEEE Transactions on Automatic Control, vol. 49, iss. 9, September 2004, pp. 1453‐1464.
[145] Skogestad, S. and I. Postlethwaite, Multivariable feedback control, Analysis and Design (2nd edition), Wiley, September 2005.
[146] Soglo, A.B., X. Yang, Networked control system simulation design and its application, Tsinghua Science & Technology, vol. 11, iss. 3, June 2006, pp. 287‐294.
[147] Song, J., A. Mok, K. Aloysius, D. Chen, M. Nixon, T. Blevins, and W. Wojsznis, Improving PID control with unreliable communications, in ISA EXPO, Reliant Center, Houston, Texas, 2006.
[148] Song, J., S. Han, A.K. Mok, D. Chen, M. Lucas, M. Nixon, and W. Prat, WirelessHART: Applying wireless technology in real‐time industrial process control, in Proc. IEEE Real‐Time and Embedded Technology and Applications Symposium, 22‐24 April, 2008.
[149] Song, Y.‐Q., Networked control systems: from independent designs of the network QoS and the control to the co‐design, in Proc. 8th IFAC International Conference on Fieldbuses and Networks in Industrial and Embedded Systems, Ansan, Korea, May 2009.
[150] Steigmann, R., J. Endresen, Introduction to WISA, White Paper, V2.0, ABB, July 2006.
[151] Thomesse, J.‐P., Fieldbus technology in industrial automation, Proceedings of the IEEE, vol. 93, no. 6, June 2005, pp. 1073–1101.
[152] Tian, G., C.J. Fidge, and Y.‐C. Tian, Hybrid system simulation of computer control applications over communication networks, in Proc. IEEE International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Imperial College, London, 21‐23 September, 2009.
[153] Tiberi, U., C. Fischione, K.H. Johansson, M.D. Di Benedetto, Adaptive self‐triggered control over IEEE 802.15.4 networks, in Proc. IEEE Conference on Decision and Control, Atlanta, Georgia, USA, 15‐17 December, 2010.
[154] TinyOS, Operating system for wireless embedded sensor networks, online: http://www.tinyos.net/
[155] TSMP, Technical overview of time synchronized mesh protocol (TSMP), Dust Networks, online: http://www.dustnetworks.com/docs/TSMP_Whitepaper.pdf.
[156] U.S. Department of Energy, Assessment study on sensors and automation in the industries of the future: Reports on industrial controls, information processing, automation, and robotics, Office of Energy and Renewable Energy, November 2004.
[157] Velasco, M., J.M. Fuertes, C. Lin, P. Marti, S. Brandt, A control approach to bandwidth management in networked control systems, in Proc. 30th Annual Conference of IEEE Industrial Electronics Society, vol. 3, 2‐6 November, 2004.
[158] Vieira, M.A.M., C.N. Coelho Jr, D.C. da Silva Jr, and J.M. da Mata, Survey on wireless sensor network devices, in Proc. IEEE Conference on Emerging Technologies and Factory Automation, vol. 1, 16‐19 September, 2003, pp. 537‐544.
[159] Walch, G.C., H. Ye, Scheduling of networked control systems, IEEE Control Systems Magazine, vol. 21, no. 1, February 2001, pp. 57‐65.
[160] Weiss, G., A. D'Innocenzo, R. Alur, K.H. Johansson, G.J. Pappas, Robust stability of multi‐hop control networks, in Proc. Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, P.R. China, December 16‐18, 2009.
[161] Wheeler, A., Commercial applications of wireless sensor networks using ZigBee, IEEE Communications Magazine, vol. 45, iss. 4, April 2007, pp. 70‐77.
[162] Willig, A., M. Kubisch, C. Hoene, and A. Wolisz, Measurements of a wireless link in an industrial environment using an IEEE 802.11‐compliant physical layer, IEEE Transactions on Industrial Electronics, vol. 49, iss. 6, December 2002.
[163] Willig, A., K. Matheus, and A. Wolisz, Wireless technology in industrial networks, Proceedings of the IEEE, vol. 93, iss. 6, June 2005, pp. 1130‐1151.
[164] Willig, A. and R. Mitschke, Result of bit error measurements with sensor nodes and casuistic consequences for the design of energy‐efficient error control schemes, in Proc. 3rd European Workshop on Sensor Networks, Lecture Notes in Computer Science, vol. 3868, Springer Berlin / Heidelberg, 2006, pp. 310‐325.
[165] Willig, A., Recent and emerging topics in wireless industrial communications: A selection, IEEE Transactions on Industrial Informatics, vol. 4, iss. 2, May 2008.
[166] WirelessHART, online: http://www.hartcomm2.org/hart_protocol/ wireless_hart/wireless_hart_main.html.
[167] Wittenburg, G. and J. Schiller, Running real‐world software on simulated wireless sensor nodes, in Proc. ACM Workshop on Real‐World Wireless Sensor Networks, Uppsala, Sweden, June 2006.
[168] Wood, R.K. and M.W. Berry, Terminal composition control of binary distillation columns, Chemical Engineering Science, vol. 28, 1973, pp. 1707‐1717.
[169] Wyne, S., A.P. Singh, F. Tufvesson, A.F. Molisch, A statistical model for indoor office wireless sensor channels, IEEE Transactions on Wireless Communications, vol. 8, no. 8, August 2009, pp. 4154‐4164.
[170] Xia, F., L. Ma, C. Peng, Y. Sun, and J. Dong, Cross‐layer adaptive feedback scheduling of wireless control systems, Sensors, vol. 8, no. 7, 15 July, 2008, pp. 4265‐4281.
[171] Xiangheng, L. and A. Goldsmith, Kalman filtering with partial observation losses, in Proc. 43rd IEEE Conference on Decision and Control, vol. 4, 14‐17 December, 2004, pp. 4180‐4186.
[172] Xiao, L., A. Hassabi, and J.P. How, Control with random communication delays via a discrete‐time jump system approach, in Proc. American Control Conference, Chicago, Illinois, June 2000.
[173] Xiao, J.‐J., Z.‐Q. Luo, Decentralized estimation in an inhomogeneous sensing environment, IEEE Transactions on Information Theory, vol. 51, no. 10, October 2005.
[174] Xu, Y. and J.P. Hespanha, Estimation under uncontrolled and controlled communications in networked control systems, in Proc. 44th IEEE Conference on Decision and Control, and the European Control Conference, Seville, Spain, 2005.
[175] Ye, W., R.T. Vaughan, G.S. Sukhatme, J. Heidemann, D. Estrin, and M.J. Mataric, Evaluating control strategies for wireless‐networked robots using an integrated robot and network simulation, in Proc. IEEE International Conference on Robotics & Automation, Seoul, Korea, 21‐26 May, 2001.
[176] Yick, J., B. Mukherjee, and D. Ghosal, Wireless sensor network survey, Computer Networks, Elsevier B.V., iss. 52, 2008, pp. 2292‐2330.
[177] Yook, J.K., D. Tilbury, and N.R. Soparkar, Trading computation for bandwidth: reducing communication in distributed control systems using state estimators, IEEE Transactions on Control Systems Technology, vol. 10, no. 4, July 2002.
[178] Zhang, W., M.S. Branicky, and S.M. Phillips, Stability of networked control systems, IEEE Control Systems Magazine, vol. 21, iss. 1, February 2001, pp. 84‐99.
[179] Zhuang, L.Q., K.M. Goh, and J.B. Zhang, The wireless sensor networks for factory automation: Issues and challenges, in Proc. IEEE Conference on Emerging Technologies & Factory Automation, 25‐28 September, 2007.
[180] ZigBee, IEEE 802.15.4‐2006, IEEE Standard for Information Technology, Telecommunications and information exchange between systems ‐ Local and metropolitan area networks ‐ Specific requirements, Part 15.4: wireless medium access control (MAC) and physical layer (PHY) specifications for low‐rate wireless personal area networks (WPANs), IEEE Computer Society, New York, USA, September 2006, ZigBee Alliance online: http://www.zigbee.org.
[181] Årzén, K.‐E., A simple event‐based PID controller, in Proc. 14th IFAC World Congress, Beijing, P.R. China, 1999.
[182] Årzén, K.‐E., M. Ohlin, A. Cervin, P. Alriksson, and D. Henriksson, Holistic simulation of mobile robot and sensor network applications using TrueTime, in Proc. The European Control Conference, Kos, Greece, 2‐5 July, 2007.
[183] Åström, K.J. and B. Wittenmark, Computer‐Controlled Systems, Prentice Hall, Upper Saddle River Inc., NJ, USA, 1997.
[184] Åström, K.J. and B. Wittenmark, Adaptive Control, 2nd edition, Addison‐Wesley Longman Publishing Co. Inc., Boston, MA, USA, 1994.
[185] Åström, K.J. and T. Hägglund, PID controllers: theory, design, and tuning, 2nd ed., Instrument Society of America, 1995.
[186] Österlind, F., A. Dunkels, J. Eriksson, N. Finne, and T. Voigt, Cross‐level sensor network simulation with COOJA, in Proc. First IEEE International Workshop on Practical Issues in Building Sensor Network Applications, Tampa, Florida, USA, November 2006.
HELSINKI UNIVERSITY OF TECHNOLOGY CONTROL ENGINEERING
Editor: H. Koivo
Report 155 Eriksson, L., Elmusrati, M., Pohjola, M. (eds.) Introduction to Wireless Automation - Collected papers of the spring 2007 postgraduate seminar. April 2008.
Report 156 Korkiakoski, V. Improving the Performance of Adaptive Optics Systems with Optimized Control Methods. April 2008.
Report 157 Al-Towati, A. Dynamic Analysis and QFT-Based Robust Control Design of Switched-Mode Power Converters. September 2008.
Report 158 Eriksson, L. PID Controller Design and Tuning in Networked Control Systems. October 2008.
Report 159 Pohjoranta, A. Modelling Surfactant Mass Balance with the ALE Method on Deforming 2D Surfaces. May 2009.
Report 160 Kaartinen, J. Machine Vision in Measurement and Control of Mineral Concentration Process. June 2009.
Report 161 Hölttä, V. Plant Performance Evaluation in Complex Industrial Applications. September 2009.
Report 162 Halmevaara, K. Simulation Assisted Performance Optimization of Large-Scale Multiparameter Technical Systems. September 2009.
Report 163 Haavisto, O. Reflectance Spectrum Analysis of Mineral Flotation Froths and Slurries. November 2009.
Report 164 Cosar, E. I. A Wireless Toolkit for Monitoring Applications. October 2009.
Report 165 Pohjoranta, A. Computational Models for Microvia Fill Process Control. February 2010.
Report 166 Mendelson, A. Identification and Control of Deposition Processes. March 2010.
Report 167 Björkbom, M. Wireless Control System Simulation and Network Adaptive Control. October 2010.
ISBN 978-952-60-3460-7 (printed)
ISBN 978-952-60-3461-4 (pdf)
ISSN 0356-0872
Aalto-Print, Helsinki 2010