TO APPEAR IN IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. XX, NO. XX, OCTOBER 2019

A System Level Approach to Controller Synthesis

Yuh-Shyang Wang, Member, IEEE, Nikolai Matni, Member, IEEE, and John C. Doyle

Abstract—Biological and advanced cyberphysical control systems often have limited, sparse, uncertain, and distributed communication and computing in addition to sensing and actuation. Fortunately, the corresponding plants and performance requirements are also sparse and structured, and this must be exploited to make constrained controller design feasible and tractable. We introduce a new “system level” (SL) approach involving three complementary SL elements. System Level Parameterizations (SLPs) provide an alternative to the Youla parameterization of all stabilizing controllers and the responses they achieve, and combine with System Level Constraints (SLCs) to parameterize the largest known class of constrained stabilizing controllers that admit a convex characterization, generalizing quadratic invariance (QI). SLPs also lead to a generalization of detectability and stabilizability, suggesting the existence of a rich separation structure, that when combined with SLCs, is naturally applicable to structurally constrained controllers and systems. We further provide a catalog of useful SLCs, most importantly including sparsity, delay, and locality constraints on both communication and computing internal to the controller, and external system performance. Finally, we formulate System Level Synthesis (SLS) problems, which define the broadest known class of constrained optimal control problems that can be solved using convex programming.

Index Terms—constrained & structured optimal control, decentralized control, large-scale systems

Preliminaries & Notation: We use lower and upper case Latin letters such as x and A to denote vectors and matrices, respectively, and lower and upper case boldface Latin letters such as x and G to denote signals and transfer matrices, respectively. We use calligraphic letters such as S to denote sets. In the interest of clarity, we work with discrete time linear time invariant systems, but unless stated otherwise, all results extend naturally to the continuous time setting. We use standard definitions of the Hardy spaces H2 and H∞, and denote their restriction to the set of real-rational proper transfer matrices by RH2 and RH∞. We use G[i] to denote the ith spectral component of a transfer function G, i.e., G(z) = ∑_{i=0}^{∞} (1/z^i) G[i] for |z| > 1. Finally, we use F_T to denote the space of finite impulse response (FIR) transfer matrices with horizon T, i.e., F_T := {G ∈ RH∞ | G = ∑_{i=0}^{T} (1/z^i) G[i]}.

This paper was presented in part at the IEEE American Control Conference, June 4–6, 2014; in part at the 52nd Annual Allerton Conference on Communication, Control, and Computing, September 30 – October 3, 2014; in part at the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, December 15–17, 2014; in part at the 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems, September 10–11, 2015; in part at the IEEE American Control Conference, July 6–8, 2016; and in part at the IEEE American Control Conference, May 24–26, 2017 [1]–[7]. This work was supported by Air Force Office of Scientific Research and National Science Foundation and gifts from Huawei and Google.

Y.-S. Wang is with the Control and Optimization Group, GE Global Research Center, Niskayuna, NY 12309 USA (e-mail: [email protected]).

N. Matni is with the Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720 USA (e-mail: [email protected]).

J. C. Doyle is with the Department of Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125 USA (e-mail: [email protected]).
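Throughout the paper, FIR transfer matrices are manipulated directly through their spectral components. The snippet below is an illustrative aid from us (not part of the original text): it stores a member of F_T as a Python list of numpy matrices and evaluates it at a point |z| > 1; the helper name fir_eval is our own.

    import numpy as np

    def fir_eval(G, z):
        """Evaluate G(z) = sum_{i=0}^{T} (1/z^i) G[i] for an FIR transfer
        matrix stored as the list of spectral components [G[0], ..., G[T]]."""
        return sum(Gi / z**i for i, Gi in enumerate(G))

    # Example: G = I + (1/z) A, i.e., spectral components G[0] = I, G[1] = A.
    A = np.array([[0.5, 1.0], [0.0, 0.5]])
    G = [np.eye(2), A]
    print(fir_eval(G, z=2.0))   # equals I + A/2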

I. INTRODUCTION

THE foundation of many optimal controller synthesis procedures is a parameterization of all internally stabilizing controllers, and the responses that they achieve, over which relevant performance measures can be easily optimized. For finite dimensional linear-time-invariant (LTI) systems, the class of internally stabilizing LTI feedback controllers is characterized by the celebrated Youla parameterization [8] and the closely related factorization approach [9]. In [8], the authors showed that the Youla parameterization defines an isomorphism between a stabilizing controller and the resulting closed loop system response from sensors to actuators – therefore rather than synthesizing the controller itself, this system response (or Youla parameter) could be directly optimized. This allowed for the incorporation of customized design specifications on the closed loop system into the controller design process via convex optimization [10] or interpolation [11]. Subsequently, analogous parameterizations of stabilizing controllers for more general classes of systems were developed: notable examples include the polynomial approach [12] for generalized Rosenbrock systems [13] and the behavioral approach [14]–[17] for linear differential systems. These results illustrate the power and generality of Youla parameterization and factorization approaches to optimal control in the centralized setting. Together with state-space methods, they played a major role in shifting controller synthesis from an ad hoc, loop-at-a-time tuning process to a principled one with well defined notions of optimality, and in the LTI setting, paved the way for the foundational results of robust and optimal control that would follow [18].

However, as control engineers shifted their attention from centralized to distributed optimal control, it was observed that the parameterization approaches that were so fruitful in the centralized setting were no longer directly applicable. In contrast to centralized systems, modern cyber-physical systems (CPS) are large-scale, physically distributed, and interconnected. Rather than a logically centralized controller, these systems are composed of several sub-controllers, each equipped with their own sensors and actuators – these sub-controllers then exchange locally available information (such as sensor measurements or applied control actions) via a communication network. These information sharing constraints make the corresponding distributed optimal controller synthesis problem challenging to solve [19]–[24]. In particular, imposing such structural constraints on the controller can lead to optimal control problems that are NP-hard [25], [26].

Despite these technical and conceptual challenges, a body of work [20]–[24], [27], [28] that began in the early 2000s, and that culminated with the introduction of quadratic invariance (QI) in the seminal paper [21], showed that for a large class of practically relevant LTI systems, such internal structure could be integrated with the Youla parameterization and still preserve the convexity of the optimal controller synthesis task. Informally, a system is quadratically invariant if sub-controllers are able to exchange information with each other faster than their control actions propagate through the CPS [29]. Even more remarkable is that this condition is tight, in the sense that QI is a necessary [30] and sufficient [21] condition for subspace constraints (defined by, for example, communication delays) on the controller to be enforceable via convex constraints on the Youla parameter. The identification of QI triggered an explosion of results in distributed optimal controller synthesis [31]–[39] – these results showed that the robust and optimal control methods that proved so powerful for centralized systems could be ported to the distributed setting. As far as we are aware, no such results exist for the more general classes of systems considered in [12], [14]–[17].

However, a fact that is not emphasized in the distributed optimal control literature is that distributed controllers are actually more complex to synthesize and implement than their centralized counterparts.¹ In particular, a major limitation of the QI framework is that, for strongly connected systems,² it cannot provide a convex characterization of localized controllers, in which local sub-controllers only access a subset of system-wide measurements (cf. Sections II-D and IV-D). This need for global exchange of information between sub-controllers is a limiting factor in the scalability of the synthesis and implementation of these distributed optimal controllers.

Motivated by this issue, we propose a novel parameterization of internally stabilizing controllers and the closed loop responses that they achieve, providing an alternative to the QI framework for constrained optimal controller synthesis. Specifically, rather than directly designing only the feedback loop between sensors and actuators, as in the Youla framework, we propose directly designing the entire closed loop response of the system, as captured by the maps from process and measurement disturbances to control actions and states. As such, we call the proposed method a System Level Approach (SLA) to controller synthesis, which is composed of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs) and System Level Synthesis (SLS) problems. Further, in contrast to the QI framework, which seeks to impose structure on the input/output map between sensor measurements and control actions, the SLA imposes structural constraints on the system response itself, and shows that this structure carries over to the internal realization of the corresponding controller. It is this conceptual shift from structure on the input/output map to the internal realization of the controller that allows us to expand the class of structured controllers that admit a convex characterization, and in doing so, vastly increase the scalability of distributed optimal control methods. We summarize our main contributions below.

¹For example, see the solutions presented in [31]–[39] and the message passing implementation suggested in [39].

²We say that a plant is strongly connected if the state of any subsystem can eventually alter the state of all other subsystems.

A. Contributions

This paper presents novel theoretical and computational contributions to the area of constrained optimal controller synthesis. In particular, we

• define and analyze the system level approach to controller synthesis, which is built around novel SLPs of all stabilizing controllers and the closed loop responses that they achieve;

• show that SLPs allow us to constrain the closed loop response of the system to lie in arbitrary sets: we call such constraints on the system SLCs. If these SLCs admit a convex representation, then the resulting set of constrained system responses admits a convex representation as well;

• show that such constrained system responses can be used to directly implement a controller achieving them – in particular, any SLC imposed on the system response imposes a corresponding SLC on the internal structure of the resulting controller;

• show that the set of constrained stabilizing controllers that admit a convex parameterization using SLPs and SLCs is a strict superset of those that can be parameterized using quadratic invariance – hence we provide a generalization of the QI framework, characterizing the broadest known class of constrained controllers that admit a convex parameterization;

• formulate and analyze the SLS problem, which exploits SLPs and SLCs to define the broadest known class of constrained optimal control problems that can be solved using convex programming. We show that the optimal control problems considered in the QI literature [20], as well as the recently defined localized optimal control framework [4], are all special cases of SLS problems.

B. Paper Structure

In Section II, we define the system model considered in this paper, and review relevant results from the distributed optimal control and QI literature. In Section III we define and analyze SLPs for state and output feedback problems, and provide a novel characterization of stable closed loop system responses and the controllers that achieve them – the corresponding controller realization makes clear that SLCs imposed on the system responses carry over to the internal structure of the controller that achieves them. In Section IV, we provide a catalog of SLCs that can be imposed on the system responses parameterized by the SLPs described in the previous section – in particular, we show that by appropriately selecting these SLCs, we can provide convex characterizations of all stabilizing controllers satisfying QI subspace constraints, convex constraints on the Youla parameter, finite impulse response (FIR) constraints, sparsity constraints, spatiotemporal constraints [1]–[4], [7], controller architecture constraints [5], [40], [41], and any combination thereof. In Section V, we define and analyze the SLS problem, which incorporates SLPs and SLCs into an optimal control problem, and show that the distributed optimal control problem ((5) in Section II-C) is a special case of SLS. We end with conclusions in Section VI.


II. PRELIMINARIES

A. System Model

We consider discrete time linear time invariant (LTI) systems of the form

x[t+1] = A x[t] + B1 w[t] + B2 u[t]    (1a)
z[t] = C1 x[t] + D11 w[t] + D12 u[t]    (1b)
y[t] = C2 x[t] + D21 w[t] + D22 u[t]    (1c)

where x, u, w, y, z are the state vector, control action, external disturbance, measurement, and regulated output, respectively. Equation (1) can be written in state space form as

\mathbf{P} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix} = \begin{bmatrix} \mathbf{P}_{11} & \mathbf{P}_{12} \\ \mathbf{P}_{21} & \mathbf{P}_{22} \end{bmatrix}

where Pij = Ci (zI − A)^{-1} Bj + Dij. We refer to P as the open loop plant model.

Consider a dynamic output feedback control law u = Ky. The controller K is assumed to have the state space realization

ξ[t+1] = Ak ξ[t] + Bk y[t]    (2a)
u[t] = Ck ξ[t] + Dk y[t],    (2b)

where ξ is the internal state of the controller. We have K = Ck(zI − Ak)^{-1}Bk + Dk. A schematic diagram of the interconnection of the plant P and the controller K is shown in Figure 1.

Fig. 1. Interconnection of the plant P and controller K.

The following assumptions are made throughout the paper.

Assumption 1: The interconnection in Figure 1 is well-posed – the matrix (I − D22 Dk) is invertible.

Assumption 2: Both the plant and the controller realizations are stabilizable and detectable; i.e., (A, B2) and (Ak, Bk) are stabilizable, and (A, C2) and (Ak, Ck) are detectable.

The goal of the optimal control problem is to find a controller K to stabilize the plant P and minimize a suitably chosen norm³ of the closed loop transfer matrix from external disturbance w to regulated output z. This leads to the following centralized optimal control formulation:

minimize_K    ‖P11 + P12 K (I − P22 K)^{-1} P21‖
subject to    K internally stabilizes P.    (3)

³Typical choices for the norm include H2 and H∞.

B. Youla Parameterization

A common technique to solve the optimal control problem (3) is via the Youla parameterization, which is based on a doubly co-prime factorization of the plant, defined as follows.

Definition 1: A collection of stable transfer matrices Ur, Vr, Xr, Yr, Ul, Vl, Xl, Yl ∈ RH∞ defines a doubly co-prime factorization of P22 if P22 = Vr Ur^{-1} = Ul^{-1} Vl and

\begin{bmatrix} \mathbf{X}_l & -\mathbf{Y}_l \\ -\mathbf{V}_l & \mathbf{U}_l \end{bmatrix} \begin{bmatrix} \mathbf{U}_r & \mathbf{Y}_r \\ \mathbf{V}_r & \mathbf{X}_r \end{bmatrix} = I.

Such doubly co-prime factorizations can always be computed if P22 is stabilizable and detectable [42]. Let Q be the Youla parameter. From [42], problem (3) can be reformulated in terms of the Youla parameter as

minimize_Q    ‖T11 + T12 Q T21‖
subject to    Q ∈ RH∞    (4)

with T11 = P11 + P12 Yr Ul P21, T12 = −P12 Ur, and T21 = Ul P21. The benefit of optimizing over the Youla parameter Q, rather than the controller K, is that (4) is convex with respect to the Youla parameter. One can then incorporate various convex design specifications [10] in (4) to customize the controller synthesis task. Once the optimal Youla parameter Q, or a suitable approximation thereof, is found in (4), we reconstruct the controller by setting K = (Yr − UrQ)(Xr − VrQ)^{-1}.

C. Structured Controller Synthesis and QI

We now move our discussion to the distributed optimal control problem. We follow the paradigm adopted in [21], [31]–[38], and focus on information asymmetry introduced by delays in the communication network – this is a reasonable modeling assumption when one has dedicated physical communication channels (e.g., fiber optic channels), but may not be valid under wireless settings. In the references cited above, locally acquired measurements are exchanged between sub-controllers subject to delays imposed by the communication network,⁴ which manifest as subspace constraints on the controller itself.⁵

Let C be a subspace enforcing the information sharing constraints imposed on the controller K. A distributed optimal control problem can then be formulated as [21], [30], [43], [44]:

minimize_K    ‖P11 + P12 K (I − P22 K)^{-1} P21‖
subject to    K internally stabilizes P,  K ∈ C.    (5)

A summary of the main results from the distributed optimal control literature [21], [31]–[38] can be given as follows: if the subspace C is quadratically invariant with respect to P22 (KP22K ∈ C, ∀K ∈ C) [21], then the set of all stabilizing controllers lying in subspace C can be parameterized by those stable transfer matrices Q ∈ RH∞ satisfying ℳ(Q) ∈ C, for ℳ(Q) := K(I − P22K)^{-1} = (Yr − UrQ)Ul.⁶ Further, these conditions can be viewed as tight, in the sense that quadratic invariance is also a necessary condition [30], [43] for a subspace constraint C on the controller K to be enforced via a convex constraint on the Youla parameter Q.

⁴Note that this delay may range from 0, modeling instantaneous communication between sub-controllers, to infinite, modeling no communication between sub-controllers.

⁵For continuous time systems, the delays can be encoded via subspaces that may reside within H∞ as opposed to RH∞.

This allows the optimal control problem (5) to be recast as the following convex model matching problem:

minimize_Q    ‖T11 + T12 Q T21‖
subject to    Q ∈ RH∞,  ℳ(Q) ∈ C.    (6)

D. QI imposes limitations on controller sparsity

When working with large-scale systems, it is natural to impose that sub-controllers only collect information from a local subset of all other sub-controllers. This can be enforced by setting the subspace constraint C in problem (5) to encode a suitable sparsity pattern Kij = 0,⁷ for some i, j. However, if the plant P22 is dense (i.e., if the underlying system is strongly connected), which may occur even if the system matrices (A, B2, C2) are sparse, then any such sparsity constraint is not quadratically invariant with respect to the plant P22: this follows immediately from the algebraic definition of QI, KP22K ∈ C, ∀K ∈ C. As QI is a necessary and sufficient condition for the subspace constraint K ∈ C to be enforced via a convex constraint on the Youla parameter Q, we conclude that for strongly connected systems, any sparsity constraint imposed on the controller K can only be enforced via a non-convex constraint on the Youla parameter. A major motivation for the SLA developed in this paper was to circumvent this limitation of the QI framework – we revisit this discussion in Section IV-C, and show, through the use of a simple example, that the SLA does indeed allow for these limitations to be overcome.
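To make the algebraic QI test concrete for sparsity patterns, the following sketch (ours, not from the original text; numpy assumed, function names illustrative) checks the condition KP22K ∈ C at the level of boolean supports: a sparsity subspace C is quadratically invariant with respect to P22 when the boolean product supp(K)·supp(P22)·supp(K) stays inside supp(K). The support of P22 is built here from its first n Markov parameters, which suffices by the Cayley–Hamilton theorem.

    import numpy as np

    def support_of_P22(A, B2, C2, D22):
        """Boolean support of P22 = C2 (zI - A)^{-1} B2 + D22, built from the
        first n Markov parameters (enough by Cayley-Hamilton)."""
        n = A.shape[0]
        patt = np.abs(D22) > 0
        Ak = np.eye(n)
        for _ in range(n):
            patt = patt | (np.abs(C2 @ Ak @ B2) > 0)
            Ak = Ak @ A
        return patt

    def is_qi_sparsity(S, P22_patt):
        """Boolean QI test for the sparsity subspace C = {K : supp(K) <= S}."""
        prod = (S.astype(int) @ P22_patt.astype(int) @ S.astype(int)) > 0
        return bool(np.all(S | ~prod))   # prod must stay inside S

For the strongly connected plants discussed above, the support of P22 is full, and sparsity patterns of practical interest fail this test.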

III. SYSTEM LEVEL PARAMETERIZATION

In this section, we propose a novel parameterization of internally stabilizing controllers centered around system responses, which are defined by the closed loop maps from process and measurement disturbances to state and control action. We show that for a given system, the set of stable closed loop system responses that are achievable by an internally stabilizing LTI controller is an affine subspace of RH∞, and that the corresponding internally stabilizing controller achieving the desired system response admits a particularly simple and transparent realization.

We begin by analyzing the state feedback case, as it has a simpler characterization and allows us to provide intuition about the construction of a controller that achieves a desired system response. With this intuition in hand, we present our results for the output feedback setting, which is the main focus of this paper. We conclude the section with a comparison of the pros and cons of using the SL and Youla parameterizations.

⁶By definition, we have P22 = Vr Ur^{-1} = Ul^{-1} Vl. This implies that the transfer matrices Ur and Ul are both invertible. Therefore, ℳ is an invertible affine map of the Youla parameter Q.

⁷Kij denotes the (i, j)-entry of the transfer matrix K.

A. State Feedback

Consider a state feedback model given by

\mathbf{P} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ I & 0 & 0 \end{bmatrix}.    (7)

The z-transform of the state dynamics (1a) is given by

(zI − A) x = B2 u + δx,    (8)

where we let δx := B1 w denote the disturbance affecting the state. We define R to be the system response mapping the external disturbance δx to the state x, and M to be the system response mapping the disturbance δx to the control action u. Substituting a given dynamic state feedback control rule u = Kx into (8), we define the system response {R, M} achieved by the controller K to be

R = (zI − A − B2K)^{-1}
M = K(zI − A − B2K)^{-1},    (9)

from which it follows that x = Rδx and u = Mδx.

Similarly, given transfer matrices {R, M}, we say that they define an achievable system response for the system (7) if there exists an LTI controller K such that x = Rδx and u = Mδx, for {R, M} as defined in equation (9).

The main result of this subsection is an algebraic characterization of the set {R, M} of state-feedback system responses that are achievable by an internally stabilizing controller K, as stated in the following theorem.

Theorem 1: For the state feedback system (7), the following are true:

(a) The affine subspace defined by

\begin{bmatrix} zI - A & -B_2 \end{bmatrix} \begin{bmatrix} \mathbf{R} \\ \mathbf{M} \end{bmatrix} = I    (10a)

R, M ∈ (1/z) RH∞    (10b)

parameterizes all system responses (9) achievable by an internally stabilizing state feedback controller K.

(b) For any transfer matrices {R, M} satisfying (10), the controller K = MR^{-1} achieves the desired system response (9).⁸ Further, if the controller K = MR^{-1} is implemented as in Fig. 2, then it is internally stabilizing.

The rest of this subsection is devoted to the proof of Theorem 1.

Necessity: The necessity of a stable and achievable system response {R, M} lying in the affine subspace (10) is shown in the following lemma.

Lemma 1 (Necessity of conditions (10)): Consider the state feedback system (7). Let {R, M} be the system response achieved by an internally stabilizing controller K. Then, {R, M} is a solution of (10).

Proof: Equation (10a) follows directly from (8), which holds for the system response achieved by any controller. For an internally stabilizing controller, the system response {R, M} is in RH∞ by definition of internal stability. From (9) and the properness of K, the system response is strictly proper, implying equation (10b) and completing the proof.

⁸Note that for any transfer matrices {R, M} satisfying (10), the transfer matrix R is always invertible because its leading spectral component (1/z)I is invertible. This is also true for the transfer matrices defined in equation (9).

Remark 1: We show in Lemma 6 in Appendix A that the feasibility of (10) is equivalent to the stabilizability of the pair (A, B2). In this sense, the conditions described in (10) provide an alternative definition of the stabilizability of a system. A dual argument is also provided to characterize the detectability of the pair (A, C2).

Sufficiency: Here we show that for any system response {R, M} lying in the affine subspace (10), we can construct an internally stabilizing controller K that leads to the desired system response (9).

Fig. 2. The proposed state feedback controller structure, with R̂ = I − zR and M̂ = zM.

Consider the block diagram shown in Figure 2, where R̂ = I − zR and M̂ = zM. It can be checked that R̂ and M̂ are in RH∞, and hence the internal feedback loop between δ̂x and the reference state trajectory is well defined. As is standard, we introduce external perturbations δx, δy, and δu into the system and note that the perturbations entering other links of the block diagram can be expressed as a combination of (δx, δy, δu) being acted upon by some stable transfer matrices.⁹ Hence the standard definition of internal stability applies, and we can use a bounded-input bounded-output argument (e.g., Lemma 5.3 in [42]) to conclude that it suffices to check the stability of the nine closed loop transfer matrices from the perturbations (δx, δy, δu) to the internal variables (x, u, δ̂x) to determine the internal stability of the structure as a whole. With this in mind, we can prove the sufficiency of Theorem 1 via the following lemma.

Lemma 2 (Sufficiency of conditions (10)): Consider the state feedback system (7). Given any system response {R, M} lying in the affine subspace described by (10), the state feedback controller K = MR^{-1}, with structure shown in Figure 2, internally stabilizes the plant. In addition, the desired system response, as specified by x = Rδx and u = Mδx, is achieved.

Proof: We first note that from Figure 2, we can express the state feedback controller K as K = M̂(I − R̂)^{-1} = (zM)(zR)^{-1} = MR^{-1}. Now, for any system response {R, M} lying in the affine subspace described by (10), we construct a controller using the structure given in Figure 2. To show that the constructed controller internally stabilizes the plant, we list the following equations from Figure 2:

zx = Ax + B2u + δx
u = M̂ δ̂x + δu
δ̂x = x + δy + R̂ δ̂x.

Routine calculations show that the closed loop transfer matrices from (δx, δy, δu) to (x, u, δ̂x) are given by

\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \\ \hat{\delta}_x \end{bmatrix} = \begin{bmatrix} \mathbf{R} & -\hat{\mathbf{R}} - \mathbf{R}A & \mathbf{R}B_2 \\ \mathbf{M} & \hat{\mathbf{M}} - \mathbf{M}A & I + \mathbf{M}B_2 \\ \frac{1}{z}I & I - \frac{1}{z}A & \frac{1}{z}B_2 \end{bmatrix} \begin{bmatrix} \delta_x \\ \delta_y \\ \delta_u \end{bmatrix}.    (11)

As all nine transfer matrices in (11) are stable, the implementation in Figure 2 is internally stable. Furthermore, the desired system response {R, M}, from δx to (x, u), is achieved.

⁹The matrix A may define an unstable system, but viewed as an element of F0, defines a stable (FIR) transfer matrix.

Remark 2: The controller parameterization K = MR^{-1} can also be derived by rewriting (10) as

\begin{bmatrix} I - \frac{1}{z}A & -\frac{1}{z}B_2 \end{bmatrix} \begin{bmatrix} z\mathbf{R} \\ z\mathbf{M} \end{bmatrix} = I, \quad z\mathbf{R}, z\mathbf{M} \in \mathcal{RH}_\infty.

Note that \begin{bmatrix} I - \frac{1}{z}A & -\frac{1}{z}B_2 \end{bmatrix} is a left coprime factorization of the plant model. Classical methods therefore allow for the controller K = (zM)(zR)^{-1} = MR^{-1} to be obtained via the Youla parameterization. Although the controller can be implemented via the dynamic feedback gain K = MR^{-1}, we show in Section IV that the proposed realization in Figure 2 has significant advantages. Specifically, this implementation allows us to connect constraints imposed on the system response to constraints on the internal blocks of the controller implementation.

Summary: Theorem 1 provides a necessary and sufficient condition for the system response {R, M} to be stable and achievable, in that elements of the affine subspace defined by (10) parameterize all stable system responses achievable via state-feedback, as well as the internally stabilizing controllers that achieve them. Further, Figure 2 provides an internally stabilizing realization for a controller achieving the desired response.
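Restricting the system response to the FIR class F_T turns the affine subspace (10a) into finitely many linear equality constraints on the spectral components, so that optimal control over {R, M} becomes a finite-dimensional convex program, a point revisited in Section III-E. The sketch below is our illustration only (not from the original text): it assumes the numpy and cvxpy packages, an LQR-style H2 objective with B1 = I, and randomly generated data; all variable names are ours.

    import numpy as np
    import cvxpy as cp

    np.random.seed(0)
    n, m, T = 4, 2, 20                       # states, inputs, FIR horizon
    A = np.random.randn(n, n) / np.sqrt(n)
    B2 = np.random.randn(n, m)
    Q_half, Ru_half = np.eye(n), np.eye(m)   # square roots of the LQR weights

    # Spectral components: R = sum_{k=1}^{T} z^{-k} R_k (Python index k-1); same for M.
    Rk = [cp.Variable((n, n)) for _ in range(T)]
    Mk = [cp.Variable((m, n)) for _ in range(T)]

    # (10a) restricted to F_T: matching powers of 1/z gives
    #   R_1 = I,   R_{k+1} = A R_k + B2 M_k,   A R_T + B2 M_T = 0.
    constraints = [Rk[0] == np.eye(n)]
    constraints += [Rk[k + 1] == A @ Rk[k] + B2 @ Mk[k] for k in range(T - 1)]
    constraints += [A @ Rk[T - 1] + B2 @ Mk[T - 1] == 0]

    # Squared H2 norm of [Q^{1/2} R; Ru^{1/2} M], the closed loop from delta_x.
    cost = sum(cp.sum_squares(Q_half @ Rk[k]) + cp.sum_squares(Ru_half @ Mk[k])
               for k in range(T))

    prob = cp.Problem(cp.Minimize(cost), constraints)
    prob.solve()
    print("optimal squared H2 cost:", prob.value)

The resulting controller K = MR^{-1} can then be realized as in Figure 2, and the SLCs of Section IV (e.g., sparsity or locality) enter simply as additional convex constraints on the components Rk, Mk.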

B. Output Feedback with D22 = 0

We now extend the arguments of the previous subsection to the output feedback setting, and begin by considering the case of a strictly proper plant

\mathbf{P} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & 0 \end{bmatrix}.    (12)

Letting δx[t] = B1 w[t] denote the disturbance on the state, and δy[t] = D21 w[t] denote the disturbance on the measurement, the dynamics defined by plant (12) can be written as

x[t+1] = A x[t] + B2 u[t] + δx[t]
y[t] = C2 x[t] + δy[t].    (13)


Analogous to the state-feedback case, we define a system response {R, M, N, L} from perturbations (δx, δy) to state and control inputs (x, u) via the following relation:

\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \begin{bmatrix} \delta_x \\ \delta_y \end{bmatrix}.    (14)

Substituting the output feedback control law u = Ky into the z-transform of system equation (13), we obtain

(zI − A − B2KC2) x = δx + B2K δy.

For a proper controller K, the transfer matrix (zI − A − B2KC2) is always invertible, hence we obtain the following equivalent expressions for the system response (14) in terms of an output feedback controller K:

R = (zI − A − B2KC2)^{-1}
M = KC2R
N = RB2K
L = K + KC2RB2K.    (15)

We now present one of the main results of the paper: an algebraic characterization of the set {R, M, N, L} of output-feedback system responses that are achievable by an internally stabilizing controller K.

Theorem 2: For the output feedback system (12), the following are true:

(a) The affine subspace described by

\begin{bmatrix} zI - A & -B_2 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} = \begin{bmatrix} I & 0 \end{bmatrix}    (16a)

\begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \begin{bmatrix} zI - A \\ -C_2 \end{bmatrix} = \begin{bmatrix} I \\ 0 \end{bmatrix}    (16b)

R, M, N ∈ (1/z) RH∞,  L ∈ RH∞    (16c)

parameterizes all system responses (15) achievable by an internally stabilizing controller K.

(b) For any transfer matrices {R, M, N, L} satisfying (16), the controller K = L − MR^{-1}N achieves the desired response (15).¹⁰ Further, if the controller is implemented as in Fig. 3, then it is internally stabilizing.
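Before turning to the proof, here is a quick numerical sanity check of the necessity direction (ours, not part of the original text; numpy assumed, all names illustrative): evaluating the maps (15) at a point |z0| > 1 for a static controller K, the resulting matrices satisfy (16a)–(16b) exactly.

    import numpy as np

    def response_of_static_K(A, B2, C2, K, z0):
        """Evaluate the system response (15) at z0 for a static controller K."""
        n = A.shape[0]
        R = np.linalg.inv(z0 * np.eye(n) - A - B2 @ K @ C2)
        M = K @ C2 @ R
        N = R @ B2 @ K
        L = K + K @ C2 @ R @ B2 @ K
        return R, M, N, L

    np.random.seed(1)
    n, m, p, z0 = 4, 2, 3, 1.7
    A = np.random.randn(n, n) / n
    B2, C2 = np.random.randn(n, m), np.random.randn(p, n)
    K = 0.1 * np.random.randn(m, p)
    R, M, N, L = response_of_static_K(A, B2, C2, K, z0)
    zIA = z0 * np.eye(n) - A

    print(np.allclose(zIA @ R - B2 @ M, np.eye(n)))   # (16a), first block column
    print(np.allclose(zIA @ N - B2 @ L, 0))           # (16a), second block column
    print(np.allclose(R @ zIA - N @ C2, np.eye(n)))   # (16b), first block row
    print(np.allclose(M @ zIA - L @ C2, 0))           # (16b), second block row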

Necessity: The necessity of a stable and achievable system response {R, M, N, L} lying in the affine subspace (16) is shown in the following lemma.

Lemma 3 (Necessity of conditions (16)): Consider the output feedback system (12). Let {R, M, N, L}, with x = Rδx + Nδy and u = Mδx + Lδy, be the system response achieved by an internally stabilizing control law u = Ky. Then {R, M, N, L} lies in the affine subspace described by (16).

Proof: Consider an internally stabilizing controller K with state space realization (2). Combining (2) with the system equation (13), we obtain the closed loop dynamics

\begin{bmatrix} z\mathbf{x} \\ z\boldsymbol{\xi} \end{bmatrix} = \begin{bmatrix} A + B_2 D_k C_2 & B_2 C_k \\ B_k C_2 & A_k \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \boldsymbol{\xi} \end{bmatrix} + \begin{bmatrix} I & B_2 D_k \\ 0 & B_k \end{bmatrix} \begin{bmatrix} \delta_x \\ \delta_y \end{bmatrix}.

From the assumption that K is internally stabilizing, we know that the state matrix of the above equation is a stable matrix (Lemma 5.2 in [42]). The system response achieved by u = Ky is given by

\begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} = \left[ \begin{array}{cc|cc} A + B_2 D_k C_2 & B_2 C_k & I & B_2 D_k \\ B_k C_2 & A_k & 0 & B_k \\ \hline I & 0 & 0 & 0 \\ D_k C_2 & C_k & 0 & D_k \end{array} \right],    (17)

which satisfies (16c). In addition, it can be shown by routine calculation that (17) satisfies both (16a) and (16b) for arbitrary (Ak, Bk, Ck, Dk). This completes the proof.

¹⁰Note that for any transfer matrices {R, M, N, L} satisfying (16), the transfer matrix R is always invertible because its leading spectral component (1/z)I is invertible. The same holds true for the transfer matrices defined in equation (15).

Remark 3: We show in Lemma 7 of Appendix A that the feasibility of (16) is equivalent to the stabilizability and detectability of the triple (A, B2, C2). In this sense, the conditions described in (16) provide an alternative definition of stabilizability and detectability.

Fig. 3. The proposed output feedback controller structure, with R+ = zR̂ = z(I − zR), M̂ = zM, and N̂ = −zN.

Sufficiency: Here we show that for any system response {R, M, N, L} lying in the affine subspace (16), there exists an internally stabilizing controller K that leads to the desired system response (15). From the relations in (15), we notice the identity K = L − KC2RB2K = L − MR^{-1}N. This relation leads to the controller structure given in Figure 3, with R+ = zR̂ = z(I − zR), M̂ = zM, and N̂ = −zN. As was the case for the state feedback setting, it can be verified that R+, M̂, and N̂ are all in RH∞. Therefore, the structure given in Figure 3 is well defined. In addition, all of the blocks in Figure 3 are stable filters – thus, as long as the origin (x, β) = (0, 0) is asymptotically stable, all signals internal to the block diagram will decay to zero. To check the internal stability of the structure, we introduce external perturbations δx, δy, δu, and δβ to the system. The perturbations appearing on other links of the block diagram can all be expressed as a combination of the perturbations (δx, δy, δu, δβ) being acted upon by some stable transfer matrices, and so it suffices to check the input-output stability of the closed loop transfer matrices from the perturbations (δx, δy, δu, δβ) to the controller signals (x, u, y, β) to determine the internal stability of the structure [42]. With this in mind, we can prove the sufficiency of Theorem 2 via the following lemma.


TABLE I
CLOSED LOOP MAPS FROM PERTURBATIONS TO INTERNAL VARIABLES

        δx              δy              δu                 δβ
x       R               N               RB2                (1/z) N C2
u       M               L               I + MB2            (1/z) L C2
y       C2 R            I + C2 N        C2 R B2            (1/z) C2 N C2
β       −(1/z) B2 M     −(1/z) B2 L     −(1/z) B2 M B2     (1/z) I − (1/z²)(A + B2 L C2)

Lemma 4 (Sufficiency of conditions (16)): Consider the output feedback system (12). For any system response {R, M, N, L} lying in the affine subspace defined by (16), the controller K = L − MR^{-1}N (with structure shown in Figure 3) internally stabilizes the plant. In addition, the desired system response, as specified by x = Rδx + Nδy and u = Mδx + Lδy, is achieved.

Proof: For any system response {R, M, N, L} lying in the affine subspace defined by (16), we construct a controller using the structure given in Figure 3. We now check the stability of the closed loop transfer matrices from the perturbations (δx, δy, δu, δβ) to the internal variables (x, u, y, β). We have the following equations from Figure 3:

zx = Ax + B2u + δx
y = C2x + δy
zβ = R+β + N̂y + δβ
u = M̂β + Ly + δu.

Combining these equations with the relations in (16a)–(16b), we summarize the closed loop transfer matrices from (δx, δy, δu, δβ) to (x, u, y, β) in Table I.

Equation (16c) implies that all sixteen transfer matrices in Table I are stable, so the implementation in Figure 3 is internally stable. Furthermore, the desired system response from (δx, δy) to (x, u) is achieved.

The controller implementation of Figure 3 is governed by the following equations:

zβ = R+β + N̂y
u = M̂β + Ly,    (18)

which can be informally interpreted as an extension of the state-space realization (2) of a controller K. In particular, the realization equations (18) can be viewed as a state-space like implementation where the constant matrices Ak, Bk, Ck, Dk of the state-space realization (2) are replaced with stable proper transfer matrices R+, M̂, N̂, L. The benefit of this implementation is that arbitrary convex constraints imposed on the transfer matrices R+, M̂, N̂, L carry over directly to the controller implementation. We show in Section IV that this allows for a class of structural (locality) constraints to be imposed on the system response (and hence the controller) that are crucial for extending controller synthesis methods to large-scale systems. In contrast, we recall that imposing general convex constraints on the controller K or directly on its state-space realization Ak, Bk, Ck, Dk does not lead to convex optimal control problems.
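When the transfer matrices in (18) are FIR, the controller is implemented directly as two banks of convolutions over a finite history of β and y. The snippet below is our illustration (not from the original text); it treats R+, M̂, N̂, and L generically as FIR filter banks given by their spectral components, and all names are ours.

    import numpy as np

    def sls_controller_step(Rp, Mh, Nh, L, beta_hist, y_hist):
        """One step of (18): u[t] = sum_k Mh[k] beta[t-k] + sum_k L[k] y[t-k],
        beta[t+1] = sum_k Rp[k] beta[t-k] + sum_k Nh[k] y[t-k].
        Filters are lists of spectral components; *_hist[k] holds the signal
        k steps in the past (index 0 is the current value)."""
        conv = lambda F, hist: sum(F[k] @ hist[k]
                                   for k in range(min(len(F), len(hist))))
        u_t = conv(Mh, beta_hist) + conv(L, y_hist)
        beta_next = conv(Rp, beta_hist) + conv(Nh, y_hist)
        return u_t, beta_next

At run time, the current β and y are pushed to the front of their histories before each call; any sparsity imposed on the spectral components translates directly into local computations here, which is the point made in the text above.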

Remark 4: The controller implementation (18) admits the following equivalent representation

\begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \begin{bmatrix} z\boldsymbol{\beta} \\ \mathbf{y} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{u} \end{bmatrix},    (19)

allowing for an interesting interpretation of the controller K = L − MR^{-1}N in terms of Rosenbrock system matrix representations [13]. In particular, the system response (14) specifies a Rosenbrock system matrix representation of the controller that achieves it.

Summary: Theorem 2 provides a necessary and sufficient condition for the system response {R, M, N, L} to be stable and achievable, in that elements of the affine subspace defined by (16) parameterize all stable achievable system responses, as well as all internally stabilizing controllers that achieve them. Further, Figure 3 provides an internally stabilizing realization for a controller achieving the desired response.

C. Specialized Implementations for Open-loop Stable Systems

In this subsection, we propose two specializations of the controller implementation in Figure 3 for open loop stable systems. From Table I, if we set δu and δβ to 0, it follows that β = −(1/z)B2u. This leads to a simpler controller implementation given by u = Ly − MB2u, with the corresponding controller structure shown in Figure 4(b). This implementation can also be obtained from the identity K = (I + MB2)^{-1}L, which follows from the relations in (15). Unfortunately, as shown below, this implementation is internally stable only when the open loop plant is stable.

For the controller implementation and structure shown in Figure 4(b), the closed loop transfer matrices from perturbations to the internal variables are given by

\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{N} & \mathbf{R}B_2 & (zI - A)^{-1}B_2 \\ \mathbf{M} & \mathbf{L} & I + \mathbf{M}B_2 & I \end{bmatrix} \begin{bmatrix} \delta_x \\ \delta_y \\ \delta_u \\ \delta_\beta \end{bmatrix}.    (20)

When A defines a stable system, the implementation in Figure 4(b) is internally stable. However, when the open loop plant is unstable (and the realization (A, B2) is stabilizable), the transfer matrix (zI − A)^{-1}B2 is unstable. From (20), the effect of the perturbation δβ can lead to instability of the closed loop system. This structure thus shows the necessity of introducing and analyzing the effects of perturbations δβ on the controller internal state.

Alternatively, if we start with the identity K = L(I + C2N)^{-1}, which also follows from (15), we obtain the controller structure shown in Figure 4(c), in which the control action is computed as u = Lv, with v = y − C2Nv the internal signal entering the L block. The closed loop map from perturbations to internal signals is then given by

\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \\ \mathbf{v} \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{N} & \mathbf{R}B_2 \\ \mathbf{M} & \mathbf{L} & I + \mathbf{M}B_2 \\ C_2(zI - A)^{-1} & I & C_2(zI - A)^{-1}B_2 \end{bmatrix} \begin{bmatrix} \delta_x \\ \delta_y \\ \delta_u \end{bmatrix}.


As can be seen, the controller implementation is once again internally stable only when the open loop plant is stable (if the realization (A, C2) is detectable). This structure thus shows the necessity of introducing and analyzing the effects of perturbations on the controller internal state β.

Of course, when the open loop system is stable, the controller structures illustrated below may be appealing as they are simpler and easier to implement. In fact, we can show that the controller structure in Figure 4(b) is an alternative realization of the internal model control (IMC) principle [45], [46] as applied to the Youla parameterization. Specifically, for open loop stable systems, the Youla parameter is given by Q = K(I − P22K)^{-1}. As we show in Lemma 5 of Section IV-A, the Youla parameter Q is equal to the system response L for open loop stable systems. We then have

u = Ly − MB2u    (21a)
  = Qy − LC2(zI − A)^{-1}B2u    (21b)
  = Qy − QP22u    (21c)
  = Q(y − P22u),    (21d)

where (21b) is obtained by substituting M = LC2(zI − A)^{-1} from (16b) into (21a). Equation (21d) is exactly IMC. Thus, we see that IMC is equivalent to our proposed parameterization (and the simplified representation shown in Figure 4(b)) for open loop stable systems.

Fig. 4. Alternative controller structures for stable systems: (a) Internal Model Control; (b) Structure 1; (c) Structure 2.

D. Output Feedback with D22 ≠ 0

Finally, for a general proper plant model (1) with D22 ≠ 0, we define a new measurement ȳ[t] = y[t] − D22u[t]. This leads to the controller structure shown in Figure 5. In this case, the closed loop transfer matrices from δu to the internal variables become

\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \\ \mathbf{y} \\ \boldsymbol{\beta} \end{bmatrix} = \begin{bmatrix} \mathbf{R}B_2 + \mathbf{N}D_{22} \\ I + \mathbf{M}B_2 + \mathbf{L}D_{22} \\ C_2\mathbf{R}B_2 + D_{22} + C_2\mathbf{N}D_{22} \\ -\frac{1}{z}B_2(\mathbf{M}B_2 + \mathbf{L}D_{22}) \end{bmatrix} \delta_u.

The remaining entries of Table I remain the same. Therefore, the controller structure shown in Figure 5 internally stabilizes the plant.

Fig. 5. The proposed output feedback controller structure for D22 ≠ 0.

E. System Level and Youla Parameterizations

A key difference between the SL and Youla parameterizations is the manner in which they characterize the achievable closed loop responses of a system. The Youla parameterization provides an image space representation of the achievable system responses, parameterized explicitly by the free Youla parameter. This parameterization lends itself naturally to efficient computation via the standard and theoretically supported approach [10] of restricting the Youla parameter and objective function to be FIR. However, despite this ease of computation, as alluded to earlier and discussed in detail in Section IV-D, imposing sparsity constraints on the controller via the Youla parameter is in general intractable.

In contrast, the proposed SL parameterization specifies a kernel space representation of achievable system responses, parameterized implicitly by the affine space (16a)–(16b). While our discussion highlights the benefits and flexibility of the SL approach, there is the important caveat that the affine constraints (16a)–(16b) are in general infinite dimensional. Hence, although the parameterization is a convex one, it does not immediately lend itself to efficient computation. In Section IV-E we show that imposing FIR constraints on the system responses leads to a finite-dimensional optimization problem, and further show that such constraints are feasible if the system is controllable and observable.

IV. SYSTEM LEVEL CONSTRAINTS

An advantage of the parameterizations described in the previous section is that they allow us to impose additional constraints on the system response and the corresponding internal structure of the controller. These constraints may be in the form of structural (subspace) constraints on the response, or may capture a suitable measure of system performance: in this section, we provide a catalog of useful SLCs that can be naturally incorporated into the SLPs described in the previous section. In addition to all of the performance specifications described in [10], we also show that QI subspace constraints are a special case of SLCs. We then provide an example as to why one may wish to go beyond QI subspace constraints to localized (sparse) subspace constraints on the system response, and show that such constraints can be trivially imposed in our framework. As far as we are aware, no other parameterizations [9], [12], [15]–[17], [21] allow for such constraints to be tractably enforced for general (i.e., strongly connected) systems. As such, we provide here a description of the largest known class of constrained stabilizing controllers that admit a convex parameterization. Further, as we show in our companion paper [47], it is this ability to impose locality constraints on the controller structure via convex constraints that allows us to scale the methods proposed in [10], [21] to large-scale systems.

Before proceeding, we emphasize that although the Youla parameterization and co-prime factors are needed to prove the results presented in Sections IV-A and IV-B, these are only used for the purposes of establishing connections between the Youla/QI parameterizations and the SLA. The SLPs presented in the previous section require neither the Youla parameterization nor co-prime factors.

A. Constraints on the Youla Parameter

We show that any constraint imposed on the Youla parameter can be translated into a SLC, and vice versa. In particular, if this constraint is convex, then so is the corresponding SLC. Consider the following modification of the standard Youla parameterization, which characterizes a set of constrained internally stabilizing controllers K for a plant (12):

K = (Yr − UrQ)(Xr − VrQ)^{-1},   Q ∈ 𝒬 ∩ RH∞.    (22)

Here the expression for K is in terms of the co-prime factors defined in Section II-B, and 𝒬 is an arbitrary set – if we take 𝒬 = RH∞, we recover the standard Youla parameterization. Similarly, if we take 𝒬 to be a QI subspace constraint, we recover a distributed optimal control problem that admits a convex parameterization: we discuss the connection between QI and SLCs in more detail in the next subsection. Further, if the plant is open-loop stable or has special structure, it may be desirable to enforce non-QI constraints on the Youla parameter. In general, one can use this expression to characterize all possible constrained internally stabilizing controllers by suitably varying the set 𝒬,¹¹ and hence this formulation is as general as possible. We now show that an equivalent parameterization can be given in terms of a SLC.

Theorem 3: The set of constrained internally stabilizing controllers described by (22) can be equivalently expressed as K = L − MR^{-1}N, where the system response {R, M, N, L} lies in the set

{ R, M, N, L | (16a)–(16c) hold, L ∈ ℳ(𝒬) },    (23)

for ℳ(Q) := K(I − P22K)^{-1} = (Yr − UrQ)Ul the invertible affine map as defined in Section II-C. Further, this parameterization is convex if and only if 𝒬 is convex.

In order to prove this result, we first need to understand the relationship between the controller K, the Youla parameter Q, and the system response {R, M, N, L}.

¹¹In particular, to ensure that K ∈ C, it suffices to enforce that (Yr − UrQ)(Xr − VrQ)^{-1} ∈ C.

Lemma 5: Let L be defined as in (15), and the invertible affine map ℳ be defined as in Section II-C. We then have that

L = K(I − P22K)^{-1} = ℳ(Q).    (24)

Proof: From the equations u = Ky and y = P21w + P22u, we can eliminate u and express y as y = (I − P22K)^{-1}P21w. We then have that

u = Ky = K(I − P22K)^{-1}P21w.    (25)

Recall that we define δx = B1w and δy = D21w. As a result, we have P21w = C2(zI − A)^{-1}δx + δy. Substituting this identity into (25) yields

u = K(I − P22K)^{-1}[C2(zI − A)^{-1}δx + δy].    (26)

By definition, L is the closed loop mapping from δy to u. Equation (26) then implies that L = K(I − P22K)^{-1}. From [44], [48] (cf. Section II-C), we have K(I − P22K)^{-1} = ℳ(Q), which completes the proof.

Proof of Theorem 3: The equivalence between the parameterizations (22) and (23) is readily obtained from Lemma 5. As ℳ is an invertible affine mapping between L and Q, any convex constraint imposed on the Youla parameter Q can be equivalently translated into a convex SLC imposed on L, and vice versa.

B. Quadratically Invariant Subspace Constraints

Recall that for a subspace C that is quadratically invariant with respect to a plant P22, the set of internally stabilizing controllers K that lie within the subspace C can be expressed as the set of stable transfer matrices Q ∈ RH∞ satisfying ℳ(Q) ∈ C, for ℳ the invertible affine map defined in Section II-C. We therefore have the following corollary to Theorem 3.

Corollary 1: Let C be a subspace constraint that is quadratically invariant with respect to P22. Then the set of internally stabilizing controllers satisfying K ∈ C can be parameterized as in Theorem 3 with L = ℳ(Q) ∈ C.

Proof: From Lemma 5, we have L = K(I − P22K)^{-1}. Invoking Theorem 14 of [21], we have that K ∈ C if and only if L = K(I − P22K)^{-1} ∈ C. The claim then follows immediately from Theorem 3.

Note that Corollary 1 holds true for stable and unstable plants P. Therefore, in order to parameterize the set of internally stabilizing controllers lying in C, we do not need to assume the existence of an initial strongly stabilizing controller as in [21], nor do we need to perform a doubly co-prime factorization as in [44]. Thus we see that QI subspace constraints are a special case of SLCs.

Finally, we note that in [30] and [43], the authors show that QI is necessary for a subspace constraint C on the controller K to be enforceable via a convex constraint on the Youla parameter Q. However, when C is not a subspace constraint, no general methods exist to determine whether the set of internally stabilizing controllers lying in C admits a convex representation. In contrast, determining the convexity of a SLC is straightforward.


C. Beyond QI

Before introducing the class of localized SLCs, we present a simple example for which the QI framework fails to capture an “obvious” controller with localized structure, but which the SLA can. This example also serves to illustrate the importance of locality in achieving scalability of controller implementation. Our companion paper [47] shows how locality further leads to scalability of controller synthesis.

Example 1: Consider the optimal control problem:

minimizeu

limT→∞ 1T

∑Tt=0 E‖x[t]‖22

subject to x[t+ 1] = Ax[t] + u[t] + w[t],(27)

with disturbance w[t]i.i.d∼ N (0, I). We assume full state-

feedback, i.e., the control action at time t can be expressedas u[t] = f(x[0 : t]) for some function f . An optimal controlpolicy u? for this LQR problem is easily seen to be given byu?[t] = −Ax[t].

Further suppose that the state matrix A is sparse and let its support define the adjacency matrix of a graph G for which we identify the ith node with the corresponding state/control pair (xi, ui). In this case, we have that the optimal control policy u⋆ can be implemented in a localized manner. In particular, in order to implement the state feedback policy for the ith actuator ui, only those states xj for which Aij ≠ 0 need to be collected – thus only those states corresponding to immediate neighbors of node i in the graph G, i.e., only local states, need to be collected to compute the corresponding control action, leading to a localized implementation. As we discuss in our companion paper [47], the idea of locality is essential to allowing controller synthesis and implementation to scale to arbitrarily large systems, and hence such a structured controller is desirable.
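To make this locality concrete, the following minimal sketch (the tridiagonal chain matrix A is purely illustrative and not taken from the text) computes each control action from neighboring states only and checks that the result matches the centralized policy u = −Ax.

```python
import numpy as np

# Minimal sketch of the localized implementation of u*[t] = -A x[t]:
# actuator i only needs the states x_j with A_ij != 0 (its graph neighbors).
# The tridiagonal chain matrix below is purely illustrative.
n = 6
A = 0.3 * (np.eye(n, k=-1) + np.eye(n) + np.eye(n, k=1))
neighbors = [np.flatnonzero(A[i]) for i in range(n)]   # local information set of node i

def local_control(i, x_local):
    """Control action of actuator i computed from neighboring states only."""
    return -A[i, neighbors[i]] @ x_local

x = np.random.randn(n)
u = np.array([local_control(i, x[neighbors[i]]) for i in range(n)])
assert np.allclose(u, -A @ x)   # agrees with the centralized optimal policy
```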

Now suppose that we naively attempt to solve optimal control problem (27) by converting it to its equivalent H2 model matching problem (5) and constraining the controller K to have the same support as A, i.e., K = ∑_{t=0}^{∞} (1/z^t) K[t] with supp(K[t]) ⊂ supp(A). If the graph G is strongly connected, then any sparsity constraint of the form Kij = 0 is not QI with respect to the plant P22 = (zI − A)−1. To see this, note that if the graph G is strongly connected, then P22 is a dense transfer matrix: it then follows immediately that any subspace C enforcing sparsity constraints on K fails to satisfy KP22K ∈ C for all K ∈ C, and hence is not QI with respect to P22. The results of [30] further allow us to conclude that computing such a structured controller can never be done using convex programming when using the Youla parameterization.

In contrast, in the case of a full control (B2 = I) problem, the condition (10) simplifies to (zI − A)R − M = I, R, M ∈ (1/z)RH∞. Again, suppose that we wish to synthesize an optimal controller that has a communication topology given by the support of A – from the above implementation, it suffices to constrain the support of transfer matrices R and M to be a subset of that of A. It can be checked that R = (1/z)I and M = −(1/z)A satisfy the above constraints, and recover the globally optimal controller K = −A.
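This claim is easy to verify symbolically. The sketch below uses an illustrative tridiagonal A (any sparse matrix works) and checks both the achievability constraint and the recovered controller.

```python
import sympy as sp

# Sketch verifying, for an illustrative tridiagonal A, that R = (1/z) I and
# M = -(1/z) A satisfy the full-control constraint (zI - A)R - M = I and
# recover the globally optimal controller K = M R^{-1} = -A.
z = sp.symbols('z')
A = sp.Matrix([[2, 1, 0],
               [1, 2, 1],
               [0, 1, 2]])          # hypothetical sparse (chain) state matrix
I3 = sp.eye(3)

R = I3 * (1 / z)
M = -A * (1 / z)

assert sp.simplify((z * I3 - A) * R - M - I3) == sp.zeros(3, 3)
assert sp.simplify(M * R.inv()) == -A     # K = -A inherits the sparsity of A
```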

D. Subspace and Sparsity Constraints

Motivated by the previous example, we consider here subspace SLCs, with a particular emphasis on those that encode sparse structure in the system response and corresponding controller implementation. Let L be a subspace of RH∞. We can parameterize all stable achievable system responses that lie in this subspace by adding the following SLC to the parameterization of Theorem 2:

[R N; M L] ∈ L. (28)

Of particular interest are subspaces L that define transfer matrices of sparse support. An immediate benefit of enforcing such sparsity constraints on the system response is that implementing the resulting controller (18) can be done in a localized way, i.e., each controller state βi and control action ui can be computed using a local subset (as defined by the support of the system response) of the global controller state β and sensor measurements y. For this reason, we refer to the constraint (28) as a localized SLC when it defines a subspace with sparse support. As we show in our companion paper [47], such localized constraints further allow for the resulting system response to be computed in a localized way, i.e., the global computation decomposes naturally into decoupled subproblems that depend only on local sub-matrices of the state-space representation (1). Clearly, both of these features are extremely desirable when computing controllers for large-scale systems. To the best of our knowledge, such constraints cannot be enforced via convex constraints using existing controller parameterizations [9], [12], [15]–[17], [21] for general systems.
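As an illustration of how such a sparse-support subspace L might be specified in practice, the sketch below (using a hypothetical chain adjacency structure, not data from the paper) builds boolean support masks for the impulse response elements: the allowed support grows with the graph distance reachable in t steps, capped at a locality radius d.

```python
import numpy as np

# Sketch of encoding a localized SLC as support masks on the impulse response
# elements, assuming the graph G is the support of an illustrative chain matrix.
n, d, T = 6, 2, 5                                               # states, locality radius, FIR horizon
adj = ((np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) > 0).astype(int)   # chain adjacency with self loops

def reach_mask(adj, hops):
    """Boolean mask: entry (i, j) is True iff node j is within `hops` steps of node i."""
    reach = np.eye(len(adj), dtype=int)
    step = np.eye(len(adj), dtype=int)
    for _ in range(hops):
        step = ((step @ adj) > 0).astype(int)
        reach = ((reach + step) > 0).astype(int)
    return reach.astype(bool)

# Allowed support of R[t]: the disturbance propagates one hop per time step but
# is confined to a region of radius d, mirroring the space-time picture of Fig. 6.
masks = [reach_mask(adj, min(t, d)) for t in range(1, T + 1)]
```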

A caveat of our approach is that although arbitrary subspace structure can be enforced on the system response, it is possible that the intersection of the affine space described in Theorem 2 with the specified subspace is empty. Indeed, selecting an appropriate (feasible) localized SLC, as defined by the subspace L, is a subtle task: it depends on an interplay between actuator and sensor density, information exchange delay, and disturbance propagation delay. Formally defining and analyzing a procedure for designing a localized SLC is beyond the scope of this paper: as such, we refer the reader to our recent paper [5], in which we present a method that allows for the joint design of an actuator architecture and corresponding feasible localized SLC.

E. FIR Constraints

Given the parameterization of stabilizing controllers of Theorem 2, it is straightforward to enforce that a system response be FIR with horizon T via the following SLC

R, M, N, L ∈ FT. (29)

Whereas the pros and cons of deadbeat control in the centralized setting are well studied [49]–[51], we argue here that imposing an appropriately tuned FIR SLC has benefits that are specific to the distributed large-scale setting:

(a) The controller achieving the desired system response can be implemented using the FIR filter banks R+, M, N, L ∈ FT, as illustrated in Figure 3. This simplicity of implementation is extremely helpful when applying these methods in practice.

(b) When a FIR SLC is imposed, the resulting set of stable achievable system responses and corresponding controllers admit a finite dimensional representation – specifically, the constraints specified in Theorem 2 only need to be applied to the impulse response elements {R[t], M[t], N[t], L[t]}_{t=0}^{T}.

Remark 5: It should be noted that the computational benefits claimed above hold only for discrete time systems. For continuous time systems, a FIR transfer matrix is still an infinite dimensional object, and hence the resulting parameterizations and constraints are in general infinite dimensional as well.

Remark 6: The complexity of local implementations using FIR filter banks scales linearly with the horizon T – an interesting direction for future work is to determine if infinite impulse response (IIR) system responses lead to simpler controller implementations via state-space realizations.

We conclude this subsection by showing that such FIR constraints are always feasible, for suitably chosen horizons T, if the system is controllable and observable.

Theorem 4: The SLP (16) admits a FIR solution if the triple (A, B2, C2) is controllable and observable.

Proof: By definition, if (A, B2) is controllable, then there exist FIR transfer matrices (R1, M1) ∈ F_{T1} satisfying (10) for some finite T1. Similarly, if (A, C2) is observable, then there exist FIR transfer matrices (R2, N2) ∈ F_{T2} satisfying (37) for some finite T2. When (A, B2, C2) is controllable and observable, the following FIR transfer matrices can be verified to lie in the affine space (16):

R = R1 + R2 − R1(zI − A)R2 (30a)
M = M1 − M1(zI − A)R2 (30b)
N = N2 − R1(zI − A)N2 (30c)
L = −M1(zI − A)N2. (30d)
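The algebra behind (30) can be checked symbolically on a small example. The sketch below assumes (as in the standard SLS output-feedback parameterization, and consistent with (10) and (37)) that the affine space (16) consists of (zI − A)R − B2M = I, (zI − A)N − B2L = 0, R(zI − A) − NC2 = I, and M(zI − A) − LC2 = 0; the system matrices and deadbeat gains are illustrative assumptions.

```python
import sympy as sp

# Symbolic check of the construction (30) on illustrative data. Deadbeat gains
# make A + B2*F and A - Lo*C2 nilpotent, so (R1, M1) and (R2, N2) are FIR.
z = sp.symbols('z')
I2 = sp.eye(2)
A = sp.Matrix([[0, 1], [1, 1]])
B2 = sp.Matrix([[0], [1]])
C2 = sp.Matrix([[1, 0]])
F = sp.Matrix([[-1, -1]])            # A + B2*F nilpotent
Lo = sp.Matrix([[1], [2]])           # A - Lo*C2 nilpotent

R1 = (z * I2 - A - B2 * F).inv()     # state-feedback FIR pair (R1, M1) solving (10)
M1 = F * R1
R2 = (z * I2 - A + Lo * C2).inv()    # full-control FIR pair (R2, N2) solving (37)
N2 = -R2 * Lo

# Composite response (30)
R = R1 + R2 - R1 * (z * I2 - A) * R2
M = M1 - M1 * (z * I2 - A) * R2
N = N2 - R1 * (z * I2 - A) * N2
L = -M1 * (z * I2 - A) * N2

zero = lambda e: sp.simplify(e) == sp.zeros(*e.shape)
assert zero((z * I2 - A) * R - B2 * M - I2)   # assumed form of (16a), left block
assert zero((z * I2 - A) * N - B2 * L)        # assumed form of (16a), right block
assert zero(R * (z * I2 - A) - N * C2 - I2)   # assumed form of (16b), top block
assert zero(M * (z * I2 - A) - L * C2)        # assumed form of (16b), bottom block
```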

Finally, we note that recently developed relaxations [52] of SLP can be used when such FIR constraints cannot be satisfied. This may occur, for instance, when the underlying system is only stabilizable and/or detectable.

F. Intersections of SLCs and Spatiotemporal Constraints

Another major benefit of SLCs is that several such constraints can be imposed on the system response at once. Further, as convex sets are closed under intersection, convex SLCs are also closed under intersection. To illustrate the usefulness of this property, consider the intersection of a QI subspace SLC (enforcing information exchange constraints between sub-controllers), a FIR SLC, and a localized SLC. The resulting SLC can be interpreted as enforcing a spatiotemporal constraint on the system response and its corresponding controller, as we explain using the chain example shown below.

Figure 6 shows a diagram of the system response to a particular disturbance (δx)i. In this figure, the vertical axis denotes the spatial coordinate of a state in the chain, and the horizontal axis denotes time: hence we refer to this figure as a space-time diagram. Depicted are the three components of the spatiotemporal constraint, namely the communication delay imposed on the controller via the QI subspace SLC, the deadbeat response of the system to the disturbance imposed by the FIR SLC, and the localized region affected by the disturbance (δx)i imposed by the localized SLC.

When the effect of each disturbance (δx)i can be localized within such a spatiotemporal SLC, the system is said to be localizable (c.f., [2], [4]). It follows that the feasibility of a spatiotemporal constraint implies a more general notion of controllability (observability), wherein the system impulse response is constrained to be finite in both space and time, and the controller is subject to communication delays. Thus rather than the traditional computational test of verifying the rank of a suitable controllability (observability) matrix, localizability is verified by the feasibility of a set of affine constraints.

[Fig. 6 is a space-time diagram: the vertical axis is space (states xi−1, xi, xi+1), the horizontal axis is time, and the figure is annotated with the sparsity constraint, the FIR horizon T, the communication delay, and the region affected by the disturbance (δx)i.]

Fig. 6. Space-time diagram for a single disturbance striking the chain described in Example 1.

G. Closed Loop Specifications

As in [10], our parameterization allows for arbitrary performance constraints to be imposed on the closed loop response. In contrast to the method proposed in [10], these performance constraints can be combined with structural (i.e., localized spatiotemporal) constraints on the controller realization, naturally extending their applicability to the large-scale distributed setting. In the interest of completeness, we highlight some particularly useful SLCs here.

1) System Performance Constraints: Let g(·) be a functional of the system response – it then follows that all internally stabilizing controllers satisfying a performance level, as specified by a scalar γ, are given by transfer matrices {R, M, N, L} satisfying the conditions of Theorem 2 and the SLC

g(R, M, N, L) ≤ γ. (31)

Further, recall that the sublevel set of a convex functional is a convex set, and hence if g is convex, then so is the SLC (31). A particularly useful choice of convex functional is

g(R, M, N, L) = ‖ [C1 D12] [R N; M L] [B1; D21] + D11 ‖, (32)

for a system norm ‖ · ‖, which is equivalent to the objective function of the decentralized optimal control problem (5). Thus by imposing several performance SLCs (32) with different choices of norm, one can naturally formulate multi-objective optimal control problems.

2) Controller Robustness Constraints: Suppose that the controller is to be implemented using limited hardware, thus introducing non-negligible quantization (or other errors) to the internally computed signals: this can be modeled via an internal additive noise δβ in the controller structure (c.f., Figure 3). In this case, we may wish to design a controller that further limits the effects of these perturbations on the system: to do so, we can impose a performance SLC on the closed loop transfer matrices specified in the rightmost column of Table I.

3) Controller Architecture Constraints: The controller implementation (18) also allows us to naturally control the number of actuators and sensors used by a controller – this can be useful when designing controllers for large-scale systems that use a limited number of hardware resources (c.f., Section V-B3). In particular, assume that implementation (18) parameterizes stabilizing controllers that use all possible actuators and sensors. It then suffices to constrain the number of non-zero rows of the transfer matrix [M, L] to limit the number of actuators used by the controller, and similarly, the number of non-zero columns of the transfer matrix [N⊤, L⊤]⊤ to limit the number of sensors used by the controller. As stated, these constraints are non-convex, but recently proposed convex relaxations [40], [41] can be used in their stead to impose convex SLCs on the controller architecture.
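As a rough sketch of the kind of convex relaxation referred to above (a row-wise group-norm penalty in the spirit of [40], [41]; the dimensions, weight, and cvxpy encoding are illustrative assumptions, not the paper's formulation), one can penalize the rows of the FIR coefficients of [M, L] so that entire rows, and hence actuators, are driven to zero.

```python
import cvxpy as cp

# Sketch of a row-wise group-norm penalty on the FIR coefficients of [M, L]:
# a row that is zero in every impulse response element corresponds to an
# actuator that is never used. All dimensions and weights are illustrative.
nu, n, ny, T = 4, 5, 3, 8
ML = [cp.Variable((nu, n + ny)) for _ in range(T)]   # stacked [M[t], L[t]] coefficients
lam = 0.1

actuator_penalty = sum(
    cp.norm(cp.hstack([ML[t][i, :] for t in range(T)]), 2)   # group = row i across all t
    for i in range(nu)
)
# lam * actuator_penalty would be added to the SLS objective (33), together with
# the realizability constraints of Theorem 2 (omitted in this fragment).
regularized_objective_term = lam * actuator_penalty
```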

4) Positivity Constraints: It has recently been observed that (internally) positive systems are amenable to efficient analysis and synthesis techniques (c.f., [53] and the references therein). Therefore it may be desirable to synthesize a controller that either preserves or enforces positivity of the resulting closed loop system. We can enforce this condition via the SLC that the elements

{ [C1 D12] [R[t] N[t]; M[t] L[t]] [B1; D21] }_{t=1}^{∞}

and the matrix (D12 L[0] D21 + D11) are all element-wise nonnegative matrices. This SLC is easily seen to be convex.

V. SYSTEM LEVEL SYNTHESIS

We build on the results of the previous sections to formulate the SLS problem. We show that by combining appropriate SLPs and SLCs, the largest known class of convex structured optimal control problems can be formulated. As a special case, we show that we recover all possible structured optimal control problems of the form (5) that admit a convex representation in the Youla domain.

A. General Formulation

Let g(·) be a functional capturing a desired measure of the performance of the system (as described in Section IV-G1), and let S be a SLC. We then pose the SLS problem as

minimize_{R,M,N,L}   g(R, M, N, L)
subject to   (16a)–(16c), [R N; M L] ∈ S. (33)

For g a convex functional and S a convex set,12 the resulting SLS problem is a convex optimization problem.

12 More generally, we only need the intersection of the set S and the restriction of the functional g to the affine subspace described in (16) to be convex.

Remark 7: For a state feedback problem, the SLS problem can be simplified to

minimize_{R,M}   g(R, M)
subject to   (10a)–(10b), [R; M] ∈ S. (34)

B. Examples of Convex SLS

Here we highlight some convex SLS problems. A more extensive list can be found in [54], [55].

1) Distributed Optimal Control: The distributed optimal control problem (5) with a QI subspace constraint C can be formulated as a SLS problem as

minimize (32)
subject to (16a)–(16c), L ∈ C. (35)

Thus all distributed optimal control problems that can be formulated as convex optimization problems in the Youla domain are special cases of the convex SLS problem (33).

2) Localized LQG Control: In [2], [4] we posed and solved a localized LQG optimal control problem. In the case of a state-feedback problem [2], the resulting SLS problem is of the form

minimize_{R,M}   ‖C1R + D12M‖_{H2}^2
subject to   (10a)–(10b), [R; M] ∈ C ∩ L ∩ FT, (36)

for C a QI subspace SLC, L a sparsity SLC, and FT a FIR SLC.
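As a concrete illustration, the following sketch instantiates a state-feedback problem of the form (36) for a small chain system using cvxpy. The dynamics, FIR horizon, and the banded mask standing in for the spatiotemporal SLC C ∩ L ∩ FT are illustrative assumptions, and the equality constraints are the impulse-response (z-coefficient) form of (10a)–(10b).

```python
import cvxpy as cp
import numpy as np

# Sketch of the localized state-feedback SLS problem (36) for a small chain.
n, T, d = 5, 10, 1
A = 0.4 * (np.eye(n, k=-1) + np.eye(n) + np.eye(n, k=1))   # sparse chain dynamics (illustrative)
B2 = np.eye(n)                                              # actuation at every node
C1 = np.vstack([np.eye(n), np.zeros((n, n))])               # state penalty block
D12 = np.vstack([np.zeros((n, n)), np.eye(n)])              # input penalty block
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= d   # locality (banded) pattern
off = (~mask).astype(float)

# FIR spectral components R[1..T], M[1..T] (index 0 unused).
R = [None] + [cp.Variable((n, n)) for _ in range(T)]
M = [None] + [cp.Variable((n, n)) for _ in range(T)]

constraints = [R[1] == np.eye(n)]                           # z^0 coefficient of (zI - A)R - B2 M = I
for t in range(1, T):
    constraints.append(R[t + 1] == A @ R[t] + B2 @ M[t])    # z^{-t} coefficients
constraints.append(A @ R[T] + B2 @ M[T] == 0)               # FIR termination at horizon T
for t in range(1, T + 1):                                   # localized (sparsity) SLC
    constraints += [cp.multiply(off, R[t]) == 0, cp.multiply(off, M[t]) == 0]

# H2 cost: sum of squared Frobenius norms of the closed-loop impulse response.
cost = sum(cp.sum_squares(C1 @ R[t] + D12 @ M[t]) for t in range(1, T + 1))
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(prob.status, prob.value)
```

Note that because B2 = I in this sketch, the problem is always feasible (for instance R[1] = I, M[1] = −A, and all later coefficients zero); for sparser actuation, feasibility depends on the chosen horizon and locality radius, as discussed in Section IV-D.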

The observation that we make in [2] (and extend to the output feedback setting in [4]) is that the localized SLS problem (36) can be decomposed into a set of independent sub-problems solving for the columns Ri and Mi of the transfer matrices R and M – as these problems are independent, they can be solved in parallel. Further, the sparsity constraint L restricts each sub-problem to a local subset of the system model and states, as specified by the nonzero components of the corresponding column of the transfer matrices R and M (e.g., as was described in Example 1), allowing each of these sub-problems to be expressed in terms of optimization variables (and corresponding sub-matrices of the state-space realization (16)) that are of significantly smaller dimension than the global system response {R, M}. Thus for a given feasible spatiotemporal SLC, the localized SLS problem (36) can be solved for arbitrarily large-scale systems, assuming that each sub-controller can solve its corresponding sub-problem in parallel.13 As far as we are aware, such constrained optimal control problems cannot be solved via convex programming using existing controller parameterizations in the literature.

13 We also show how to co-design an actuation architecture and feasible corresponding spatiotemporal constraint in [5], and so the assumption of a feasible spatiotemporal constraint is a reasonable one.

In our companion paper [47], we generalize all of these concepts to the system level approach to controller synthesis, and show that appropriate notions of separability for SLCs can be defined which allow for optimal controllers to be synthesized and implemented with order constant complexity (assuming parallel computation is available for each subproblem) relative to the global system size.

3) Regularization for Design: The regularization for design (RFD) framework [40], [41], [56], [57] explores tradeoffs between closed loop performance and architectural cost using convex programming, by augmenting the objective function with a suitable convex regularizer that penalizes the use of actuators, sensors, and communication links. To integrate RFD into the SLA, it suffices to add a suitable convex regularizer, as mentioned in Section IV-G3 and described in [5], [40], to the objective function of the SLS problem (33). We demonstrate the usefulness of combining RFD, locality, and SLS in our companion paper [47].

C. Computational Complexity and Non-convex Optimization

A final advantage of the SLS problem (33) is that the computational complexity of the resulting optimization problem is transparent. Specifically, the complexity of solving (33) is determined by the type of the objective function g(·) and the characterization of the intersection of the set S and the affine space (16a)–(16c). Further, when the SLS problem is non-convex, the direct nature of the formulation makes it straightforward to determine suitable convex relaxations or non-convex optimization techniques for the problem. In contrast, as discussed in [30], no general method exists to determine the computational complexity of the decentralized optimal control problem (5) for a general constraint set C.

VI. CONCLUSION

In this paper, we defined and analyzed the system level approach to controller synthesis, which consists of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs), and System Level Synthesis (SLS) problems. We showed that all achievable and stable system responses can be characterized via the SLPs given in Theorems 1 and 2. We further showed that these system responses could be used to parameterize internally stabilizing controllers that achieved them, and proposed a novel controller implementation (18). We then argued that this novel controller implementation had the important benefit of allowing for SLCs to be naturally imposed on it, and showed in Section IV that using this controller structure and SLCs, we can characterize the broadest known class of constrained internally stabilizing controllers that admit a convex representation. Finally, we combined SLPs and SLCs to formulate the SLS problem, and showed that it recovered as a special case many well studied constrained optimal controller synthesis problems from the literature. In our companion paper [47], we show how to use the system level approach to controller synthesis to co-design controllers, system responses, and actuation, sensing, and communication architectures for large-scale networked systems.

APPENDIX A
STABILIZABILITY AND DETECTABILITY

Lemma 6: The pair (A, B2) is stabilizable if and only if the affine subspace defined by (10) is non-empty.

Proof: We first show that the stabilizability of (A, B2) implies that there exist transfer matrices R, M ∈ (1/z)RH∞ satisfying equation (10a). From the definition of stabilizability, there exists a matrix F such that A + B2F is a stable matrix. Substituting the state feedback control law u = Fx into (8), we have x = (zI − A − B2F)−1δx and u = F(zI − A − B2F)−1δx. The system response is given by R = (zI − A − B2F)−1 and M = F(zI − A − B2F)−1, which lie in (1/z)RH∞ and are a solution to (10a).

For the opposite direction, we note that R, M ∈ RH∞ implies that these transfer matrices do not have poles in the region |z| ≥ 1. From (10a), we further observe that [zI − A  −B2] is right invertible in the region where R and M do not have poles, with [R⊤ M⊤]⊤ being its right inverse. This then implies that [zI − A  −B2] has full row rank for all |z| ≥ 1. This is equivalent to the PBH test [58] for stabilizability, proving the claim.
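The forward direction of this proof is easy to verify symbolically. The sketch below uses an illustrative stabilizable pair (A, B2) and a deadbeat gain F (so that exact arithmetic suffices); none of the numerical values come from the paper.

```python
import sympy as sp

# Sketch checking the construction in the proof of Lemma 6 on illustrative data:
# with A + B2 F stable (here nilpotent), R = (zI - A - B2 F)^{-1} and
# M = F (zI - A - B2 F)^{-1} satisfy (zI - A) R - B2 M = I, i.e., (10a).
z = sp.symbols('z')
I2 = sp.eye(2)
A = sp.Matrix([[2, 1], [0, 0]])
B2 = sp.Matrix([[1], [0]])
F = sp.Matrix([[-2, -1]])            # A + B2*F = 0, a (deadbeat) stabilizing gain

R = (z * I2 - A - B2 * F).inv()
M = F * R

assert sp.simplify((z * I2 - A) * R - B2 * M - I2) == sp.zeros(2, 2)
```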

We note that the analysis for the state feedback problem in Section III-A can be applied to the state estimation problem by considering the dual to a full control system (c.f., §16.5 in [42]). For instance, the following corollary to Lemma 6 gives an alternative definition of the detectability of the pair (A, C2) [6].

Corollary 2: The pair (A, C2) is detectable if and only if the following conditions are feasible:

[R N] [zI − A; −C2] = I (37a)
R, N ∈ (1/z)RH∞. (37b)

A parameterization of all detectable observers can be constructed using the affine subspace (37) in a manner analogous to that described above.

Lemma 7: The triple (A, B2, C2) is stabilizable and detectable if and only if the affine subspace described by (16) is non-empty.

Proof: This follows from an identical construction as that presented in the proof of Theorem 4, but now using stable transfer matrices with possibly infinite impulse responses.


REFERENCES

[1] Y.-S. Wang, N. Matni, S. You, and J. C. Doyle, "Localized distributed state feedback control with communication delays," in Proc. 2014 IEEE Amer. Control Conf., June 2014, pp. 5748–5755.

[2] Y.-S. Wang, N. Matni, and J. C. Doyle, "Localized LQR optimal control," in Proc. 2014 53rd IEEE Conf. Decision Control, 2014, pp. 1661–1668.

[3] Y.-S. Wang and N. Matni, "Localized distributed optimal control with output feedback and communication delays," in IEEE 52nd Annual Allerton Conference on Communication, Control, and Computing, 2014, pp. 605–612.

[4] ——, "Localized LQG optimal control for large-scale systems," in Proc. 2016 IEEE Amer. Control Conf., 2016, pp. 1954–1961.

[5] Y.-S. Wang, N. Matni, and J. C. Doyle, "Localized LQR control with actuator regularization," in Proc. 2016 IEEE Amer. Control Conf., 2016, pp. 5205–5212.

[6] Y.-S. Wang, S. You, and N. Matni, "Localized distributed Kalman filters for large-scale systems," in 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems, vol. 48, no. 22, 2015, pp. 52–57.

[7] Y.-S. Wang, N. Matni, and J. C. Doyle, "System level parameterizations, constraints and synthesis," in Proc. 2017 Amer. Control Conf., May 2017, pp. 1308–1315.

[8] D. C. Youla, H. A. Jabr, and J. J. Bongiorno, Jr., "Modern Wiener-Hopf design of optimal controllers - Part II: The multivariable case," IEEE Trans. Autom. Control, vol. 21, no. 3, pp. 319–338, 1976.

[9] M. Vidyasagar, Control System Synthesis: A Factorization Approach, Part II. Morgan & Claypool, 2011.

[10] S. Boyd and C. Barratt, Linear controller design: limits of performance. Prentice-Hall, 1991.

[11] M. A. Dahleh and I. J. Diaz-Bobillo, Control of uncertain systems: a linear programming approach. Prentice-Hall, Inc., 1994.

[12] J. Y. Ishihara and R. M. Sales, "Parametrization of admissible controllers for generalized Rosenbrock systems," in Proc. 2000 39th IEEE Conf. Decision Control, vol. 5, 2000, pp. 5014–5019.

[13] H. H. Rosenbrock, Computer aided control system design. Academic Press, 1974.

[14] J. C. Willems and J. W. Polderman, Introduction to mathematical systems theory: a behavioral approach. Springer Science & Business Media, 2013, vol. 26.

[15] J. C. Willems and H. L. Trentelman, "Synthesis of dissipative systems using quadratic differential forms: Part I," IEEE Trans. Autom. Control, vol. 47, no. 1, pp. 53–69, 2002.

[16] H. L. Trentelman and J. C. Willems, "Synthesis of dissipative systems using quadratic differential forms: Part II," IEEE Trans. Autom. Control, vol. 47, no. 1, pp. 70–86, 2002.

[17] C. Praagman, H. L. Trentelman, and R. Z. Yoe, "On the parametrization of all regularly implementing and stabilizing controllers," SIAM Journal on Control and Optimization, vol. 45, no. 6, pp. 2035–2053, 2007.

[18] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Trans. Autom. Control, vol. 34, no. 8, pp. 831–847, Aug 1989.

[19] Y.-C. Ho and K.-C. Chu, "Team decision theory and information structures in optimal control problems–Part I," IEEE Trans. Autom. Control, vol. 17, no. 1, pp. 15–22, 1972.

[20] A. Mahajan, N. Martins, M. Rotkowitz, and S. Yuksel, "Information structures in optimal decentralized control," in Proc. 2012 51st IEEE Conf. Decision Control, 2012, pp. 1291–1306.

[21] M. Rotkowitz and S. Lall, "A characterization of convex problems in decentralized control," IEEE Trans. Autom. Control, vol. 51, no. 2, pp. 274–286, 2006.

[22] B. Bamieh, F. Paganini, and M. A. Dahleh, "Distributed control of spatially invariant systems," IEEE Trans. Autom. Control, vol. 47, no. 7, pp. 1091–1107, 2002.

[23] B. Bamieh and P. G. Voulgaris, "A convex characterization of distributed control problems in spatially invariant systems with communication constraints," Systems & Control Letters, vol. 54, no. 6, pp. 575–583, 2005.

[24] A. Nayyar, A. Mahajan, and D. Teneketzis, "Decentralized stochastic control with partial history sharing: A common information approach," IEEE Trans. Autom. Control, vol. 58, no. 7, pp. 1644–1658, July 2013.

[25] H. S. Witsenhausen, "A counterexample in stochastic optimum control," SIAM Journal of Control, vol. 6, no. 1, pp. 131–147, 1968.

[26] J. N. Tsitsiklis and M. Athans, "On the complexity of decentralized decision making and detection problems," in Proc. 1984 23rd IEEE Conf. Decision Control, 1984, pp. 1638–1641.

[27] X. Qi, M. V. Salapaka, P. G. Voulgaris, and M. Khammash, "Structured optimal and robust control with multiple criteria: A convex solution," IEEE Trans. Autom. Control, vol. 49, no. 10, pp. 1623–1640, 2004.

[28] G. E. Dullerud and R. D'Andrea, "Distributed control of heterogeneous systems," IEEE Trans. Autom. Control, vol. 49, no. 12, pp. 2113–2128, 2004.

[29] M. Rotkowitz, R. Cogill, and S. Lall, "Convexity of optimal control over networks with delays and arbitrary topology," Int. J. Syst., Control Commun., vol. 2, no. 1/2/3, pp. 30–54, Jan. 2010.

[30] L. Lessard and S. Lall, "Convexity of decentralized controller synthesis," IEEE Trans. Autom. Control, vol. 61, no. 10, pp. 3122–3127, 2016.

[31] ——, "Optimal controller synthesis for the decentralized two-player problem with output feedback," in Proc. 2012 IEEE Amer. Control Conf., June 2012, pp. 6314–6321.

[32] P. Shah and P. A. Parrilo, "H2-optimal decentralized control over posets: A state space solution for state-feedback," in Proc. 2010 49th IEEE Conf. Decision Control, 2010, pp. 6722–6727.

[33] A. Lamperski and J. C. Doyle, "Output feedback H2 model matching for decentralized systems with delays," in Proc. 2013 IEEE Amer. Control Conf., June 2013, pp. 5778–5783.

[34] L. Lessard, M. Kristalny, and A. Rantzer, "On structured realizability and stabilizability of linear systems," in Proc. 2013 IEEE Amer. Control Conf., June 2013, pp. 5784–5790.

[35] C. W. Scherer, "Structured H∞-optimal control for nested interconnections: A state-space solution," Systems and Control Letters, vol. 62, pp. 1105–1113, 2013.

[36] L. Lessard, "State-space solution to a minimum-entropy H∞-optimal control problem with a nested information constraint," in Proc. 2014 53rd IEEE Conf. Decision Control, 2014, pp. 4026–4031.

[37] N. Matni, "Distributed control subject to delays satisfying an H∞ norm bound," in Proc. 2014 53rd IEEE Conf. Decision Control, 2014, pp. 4006–4013.

[38] T. Tanaka and P. A. Parrilo, "Optimal output feedback architecture for triangular LQG problems," in Proc. 2014 IEEE Amer. Control Conf., June 2014, pp. 5730–5735.

[39] A. Lamperski and L. Lessard, "Optimal decentralized state-feedback control with sparsity and delays," Automatica, vol. 58, pp. 143–151, 2015.

[40] N. Matni and V. Chandrasekaran, "Regularization for design," IEEE Trans. Autom. Control, vol. 61, no. 12, pp. 3991–4006, 2016.

[41] ——, "Regularization for design," in Proc. 53rd IEEE Conf. Decision Control, Dec 2014, pp. 1111–1118.

[42] K. Zhou, J. C. Doyle, and K. Glover, Robust and optimal control. Prentice Hall, New Jersey, 1996.

[43] L. Lessard and S. Lall, "Quadratic invariance is necessary and sufficient for convexity," in Proc. 2011 IEEE Amer. Control Conf., 2011, pp. 5360–5362.

[44] S. Sabau and N. C. Martins, "Youla-like parametrizations subject to QI subspace constraints," IEEE Trans. Autom. Control, vol. 59, no. 6, pp. 1411–1422, 2014.

[45] D. E. Rivera, M. Morari, and S. Skogestad, "Internal model control: PID controller design," Industrial & Engineering Chemistry Process Design and Development, vol. 25, no. 1, pp. 252–265, 1986.

[46] C. E. Garcia and M. Morari, "Internal model control. A unifying review and some new results," Industrial & Engineering Chemistry Process Design and Development, vol. 21, no. 2, pp. 308–323, 1982.

[47] Y.-S. Wang, N. Matni, and J. C. Doyle, "Separable and localized system level synthesis for large-scale systems," IEEE Trans. Autom. Control, vol. 63, no. 12, pp. 4234–4249, 2018.

[48] A. Lamperski and J. C. Doyle, "The H2 control problem for quadratically invariant systems with delays," IEEE Trans. Autom. Control, vol. 60, no. 7, pp. 1945–1950, 2015.

[49] B. Leden, "Multivariable dead-beat control," Automatica, vol. 13, no. 2, pp. 185–188, 1977.

[50] H. Kwakernaak and R. Sivan, Linear optimal control systems. Wiley-Interscience, New York, 1972, vol. 1.

[51] J. O'Reilly, "The discrete linear time invariant time-optimal control problem—an overview," Automatica, vol. 17, no. 2, pp. 363–370, 1981.

[52] N. Matni, Y.-S. Wang, and J. Anderson, "Scalable system level synthesis for virtually localizable systems," in Proc. 2017 56th IEEE Conf. Decision Control, 2017, pp. 3473–3480.

[53] A. Rantzer, "Scalable control of positive systems," European Journal of Control, vol. 24, pp. 72–80, 2015.

[54] Y.-S. Wang, "A system level approach to optimal controller design for large-scale distributed systems," Ph.D. dissertation, California Institute of Technology, 2016.


[55] J. C. Doyle, N. Matni, Y.-S. Wang, J. Anderson, and S. Low, "System level synthesis: A tutorial," in Proc. 2017 56th IEEE Conf. Decision Control, 2017, pp. 2856–2867.

[56] N. Matni, "Communication delay co-design in H2 distributed control using atomic norm minimization," IEEE Trans. Control Netw. Syst., vol. 4, no. 2, pp. 267–278, 2017.

[57] ——, "Communication delay co-design in H2 decentralized control using atomic norm minimization," in Proc. 2013 52nd IEEE Conf. Decision Control, Dec 2013, pp. 6522–6529.

[58] G. E. Dullerud and F. Paganini, A Course in Robust Control Theory: A Convex Approach. Springer-Verlag, 2000.

Yuh-Shyang Wang (M'10) received the B.S. degree in electrical engineering from National Taiwan University, Taipei, Taiwan, in 2011, and the Ph.D. degree in control and dynamical systems from Caltech, Pasadena, CA, USA, in 2016 under the advisement of John C. Doyle.

He is currently a Research Engineer at GE Global Research Center, Niskayuna, NY, USA. His research interests include optimization, control, and machine learning for industrial cyber-physical systems and renewable energy systems.

Dr. Wang was the recipient of the 2017 ACC Best Student Paper Award.

Nikolai Matni (M'08) received the B.A.Sc. and M.A.Sc. degrees in electrical engineering from the University of British Columbia, Vancouver, BC, Canada, in 2008 and 2010, respectively, and the Ph.D. degree in control and dynamical systems from the California Institute of Technology, Pasadena, CA, USA, in June 2016 under the advisement of John C. Doyle.

He is currently a Postdoctoral Scholar in Electrical Engineering & Computer Sciences, UC Berkeley, Berkeley, CA, USA. His research interests include the use of learning, layering, dynamics, control and optimization in the design and analysis of complex cyber-physical systems.

Dr. Matni received the IEEE CDC 2013 Best Student Paper Award, the IEEE ACC 2017 Best Student Paper Award (as co-advisor), and was an Everhart Lecture Series speaker at Caltech.

John C. Doyle received the B.S. and M.S. degrees in electrical engineering from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 1977, and the Ph.D. degree in mathematics from UC Berkeley, Berkeley, CA, USA, in 1984.

He is currently the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bio-Engineering at Caltech, Pasadena, CA, USA. His research interests include mathematical foundations for complex networks with applications in biology, technology, medicine, ecology, neuroscience, and multiscale physics, integrating theory from control, computation, communication, optimization, and statistics (e.g., machine learning).

Dr. Doyle received the 1990 IEEE Baker Prize (for all IEEE publications), also listed among the world's top 10 most important papers in mathematics 1981–1993, the IEEE Automatic Control Transactions Award (twice, 1998 and 1999), the 1994 AACC American Control Conference Schuck Award, the 2004 ACM SIGCOMM Paper Prize and 2016 Test of Time Award, and inclusion in Best Writing on Mathematics 2010. His individual awards include the 1977 IEEE Power Hickernell Award, the 1983 AACC Eckman Award, the 1984 UC Berkeley Friedman Award, the 1984 IEEE Centennial Outstanding Young Engineer Award (a one-time award for the IEEE's 100th anniversary), and the 2004 IEEE Control Systems Field Award.