
International Journal of Control, 2014, Vol. 87, No. 1, 143–160, http://dx.doi.org/10.1080/00207179.2013.824612

State feedback control of nonlinear systems: a simple approach

Jacob Hammer∗

Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA

(Received 3 October 2012; accepted 9 July 2013)

A simple methodology for the design of state feedback controllers for nonlinear continuous-time systems is introduced. The objective is to develop controllers that drive a nonlinear system from an initial condition into an assigned region of state space. It is shown that such state feedback controllers can be derived by solving a set of inequalities obtained directly from quantities given in the controlled system’s equation. The results are applied to the design of state feedback controllers that achieve robust asymptotic stabilisation for a rather general class of nonlinear systems.

Keywords: nonlinear control; robust control; state feedback

1. Introduction

Techniques for the design of state feedback controllers for nonlinear systems have received considerable attention in the scientific literature for at least three quarters of a century. Important progress has been made, but, to this day, the process of deriving feedback controllers for nonlinear systems remains a daunting one for many classes of systems. Important difficulties persist in conceptual as well as in computational aspects of the theory of nonlinear feedback control. This paper revisits state feedback control of nonlinear systems from a perspective that enhances intuitive insight and yields a relatively simple process for the derivation of state feedback controllers.

We concentrate on the problem of designing state feedback controllers that drive a nonlinear system Σ from an initial condition into a specified region of state space, a problem that includes asymptotic stabilisation. We show that such feedback controllers can be derived simply by solving a set of inequalities that is obtained directly from quantities given in the equation of the controlled system. These inequalities provide necessary and sufficient conditions for the existence of appropriate feedback controllers as well as computational means for controller design.

The discussion concentrates on ‘autonomous’ controllers, namely, on controllers that operate on their own with no need for external intervention. The control configuration is described in Figure 1, where Σ is the system being controlled and C is a controller. It is assumed that Σ is an input/state system, namely, that Σ has its state as output, so that C is a state feedback controller. We denote by Σc the system induced by the closed-loop configuration. In formal terms, our control objective can be described as follows.

∗Email: [email protected]

Problem 1.1: Let Σ be an input/state system, let X0 be a set of potential initial conditions of Σ, and let D0 be an open domain in Rn. Referring to Figure 1, find necessary and sufficient conditions for the existence of a state feedback controller C that takes Σc from every initial condition in X0 into D0 in finite time. If such a controller exists, provide a method for its derivation.

The domain D0 of Problem 1.1 is called the target domain; it is the domain into which Σ must be driven by the state feedback controller C.

We show in Section 3 that state feedback controllers that fulfill the design objective of Problem 1.1 can be derived from the solution of a set of inequalities; the inequalities are obtained directly from quantities that appear in the differential equation of the controlled system Σ. Furthermore, we show that, whenever achievable, the objective of Problem 1.1 can be achieved by static state feedback controllers, namely, by controllers that are described by a state feedback function, rather than being described by a differential equation. All controllers derived in this paper are robust in the sense that they continue to perform their task in the presence of small deviations or errors.

In Section 6, we specialise our discussion to the case where the target domain D0 is a small neighbourhood of the state-space origin and consider the existence and the design of state feedback controllers that take the controlled system Σ closer and closer to the origin of its state space as time progresses. In other words, we examine the existence and the design of state feedback controllers that asymptotically stabilise a given nonlinear system Σ. As before, we show that such controllers can be derived from the solution of a set of inequalities obtained directly from quantities given in the differential equation of Σ.

© 2013 Taylor & Francis


Figure 1. State feedback.

To be specific, consider a time-invariant input/state system Σ described by the differential equation

Σ : ẋ(t) = f(x(t), u(t)), t ≥ 0, x(0) = x0, (1.1)

where x(t) ∈ Rn is the state of Σ and u(t) ∈ Rm is the input of Σ at the time t. Here, f : Rn × Rm → Rn is a continuous function, and x0 is the initial state of Σ. (In Section 6, the function f is required to be twice continuously differentiable.) Small uncertainties about the exact values of f and of x0 are allowed.

Referring to Problem 1.1, our objective is to derive (whenever possible) a state feedback controller C that takes Σ from its initial condition x0 into the target domain D0. We show in Section 3 that, if such a state feedback controller exists, it can be chosen as a static time-invariant state feedback controller, namely, as a member of the simplest class of controllers. Such a controller is characterised by a state feedback function ϕ : Rn → Rm that generates the input signal u(t) of Σ according to the equation

u(t) = ϕ(x(t)). (1.2)

The corresponding control configuration is shown in Figure 2, where Σϕ denotes the closed-loop system. The differential equation of Σϕ is given by

Σϕ : ẋ(t) = f(x(t), ϕ(x(t))), t ≥ 0, x(0) = x0. (1.3)

As Σϕ includes no external input, it is an autonomous system. In these terms, Problem 1.1 reduces to the following: find necessary and sufficient conditions for the existence of a state feedback function ϕ such that, for every initial state x0 ∈ X0, there is a time τ > 0 at which the state of Σϕ satisfies x(τ) ∈ D0.

Figure 2. Static state feedback control.
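As a concrete illustration of the closed-loop equation (1.3), the following sketch integrates ẋ(t) = f(x(t), ϕ(x(t))) by forward Euler until the state enters a target domain D0. The plant f, the feedback function ϕ, and the ball-shaped target domain are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def f(x, u):
    # a simple illustrative nonlinear plant: x1' = x2, x2' = -sin(x1) + u
    return np.array([x[1], -np.sin(x[0]) + u])

def phi(x):
    # an illustrative static state feedback function (1.2)
    return -2.0 * x[0] - 2.0 * x[1]

def in_D0(x, radius=0.05):
    # target domain D0: an open ball around the origin
    return np.linalg.norm(x) < radius

def simulate(x0, dt=1e-3, t_max=20.0):
    # forward-Euler integration of the autonomous closed-loop system (1.3)
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < t_max:
        if in_D0(x):
            return t, x              # reached D0 at a finite time tau
        x = x + dt * f(x, phi(x))    # Euler step of x' = f(x, phi(x))
        t += dt
    return None, x                   # did not reach D0 within t_max

tau, x_final = simulate([1.0, 0.0])
```

With these illustrative choices the closed-loop state reaches the ball D0 in finite time, matching the role of the time τ in the problem statement.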

An important aspect of the present approach is its conceptual simplicity. We show in Sections 2 and 3 that an appropriate state feedback function ϕ can be calculated from the solution of a set of inequalities derived directly from the function f that appears in the differential equation (1.1) of the controlled system Σ. Furthermore, Problem 1.1 has a solution if and only if this set of inequalities has a solution. In this sense, the methodology presented here is reminiscent of linear state feedback theory, where state feedback functions are derived directly from matrices that appear in the controlled system’s differential equation. Notwithstanding, the present methodology is very different from its linear counterpart; it does not yield exclusively linear feedback functions when the controlled system Σ is linear.

When discussing feedback controllers that take a system Σ from an initial condition into an assigned target domain in state space, it is natural to contemplate the form of the path taken by the closed-loop system as it makes its way. We focus here on paths formed by broken straight lines, namely, paths that consist of successions of straight line segments. The state feedback function ϕ is designed to guide the closed-loop system Σϕ along a broken line from an initial condition x0 to the target domain D0. Of course, errors and disturbances, which are permitted, may distort the form of the path. Other categories of paths can also be employed.

The construction of state feedback functions ϕ is described in detail in Sections 2 and 3. We provide here a simplified (and somewhat inaccurate) overview of this construction. At a state x ∈ Rn, let U1(x) be the set of all input values u ∈ Rm for which the vector f(x, u) points from x to the target domain D0. Let D1 be the set of all states x ∈ Rn at which the set U1(x) is not empty. The set D1 is derived by solving an inequality induced by the function f of (1.1), namely, by the function that appears in the differential equation of the controlled system Σ.

At each point x ∈ D1, choose a value u(x) ∈ U1(x) and define the state feedback function ϕ(x) := u(x). Then, in view of (1.3), the path derivative ẋ(t) = f(x(t), ϕ(x(t))) of the closed-loop system Σϕ is directed toward D0 at every point x(t) ∈ D1. As a result, the state x(t) of the closed-loop system Σϕ moves toward the target domain D0 as time progresses. In the special case when the target domain D0 is a single point, the fact that ẋ(t) points toward D0 at every point of D1 implies that the closed-loop system Σϕ moves within D1 in constant direction toward D0, thus tracing a straight line segment.

Having built the set D1, we look at the difference set D′2 := Rn \ D1, namely, the set of remaining states. At a state x ∈ D′2, denote by U2(x) the set of all input values u ∈ Rm for which the vector f(x, u) points from x to a point of the set D1. Let D2 be the set of all points x ∈ D′2 at which U2(x) is not empty. As before, at each point x ∈ D2, choose a value u(x) ∈ U2(x) and define the state feedback function ϕ(x) := u(x). The set U2(x) and the domain D2 are calculated by solving an inequality based on the given function f of (1.1). Considering (1.3), we conclude that the derivative ẋ(t) = f(x(t), ϕ(x(t))) points toward the set D1 at all points x(t) of D2. Consequently, the state trajectory x(t) takes Σϕ from every point of D2 toward a point of D1. Once a point of D1 is reached, the values of the state feedback function ϕ previously defined on D1 take Σϕ to the target domain D0. Thus, the resulting state feedback function ϕ takes the closed-loop system Σϕ to the target domain D0 from all points of the union D1 ∪ D2.

Continuing with this process, we derive in a recursive manner a sequence of domains D1, D2, . . . ⊆ Rn, as follows. Let i ≥ 1 be an integer, and assume that the domains D1, D2, . . . , Di ⊆ Rn have been derived. Then, define the difference set

D′i+1 := Rn \ (D0 ∪ D1 ∪ · · · ∪ Di)

that consists of all points outside these domains. At each point x ∈ D′i+1, let Ui+1(x) be the set of all input values u ∈ Rm for which the vector f(x, u) points from x to a point of the set Di. Denote by Di+1 the set of all points x ∈ D′i+1 at which Ui+1(x) is not empty. The domain Di+1 is obtained from the solution of an inequality based on the given function f of (1.1). At each point x ∈ Di+1, choose a value u(x) ∈ Ui+1(x) and define the state feedback function ϕ(x) := u(x). Then, considering (1.3), the path x(t) of the closed-loop system Σϕ points toward the set Di at all points of Di+1. As a result, Σϕ moves from any point of Di+1 toward a point of Di. Once a point of Di is reached, the previously defined values of ϕ on Di take the closed-loop system into the set Di−1. From there, previously defined values of ϕ assure that Σϕ progresses to the set Di−2, and so on, until Σϕ reaches the target domain D0.
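The recursive construction of the domains D1, D2, . . . can be sketched on a grid: at each state not yet covered, search the admissible inputs for a value u whose direction f(x, u) points toward the previously constructed set. The scalar plant, the grid, and the input sampling below are illustrative assumptions, not from the paper.

```python
import numpy as np

def f(x, u):
    # illustrative scalar plant: x' = x - x**3 + u
    return x - x**3 + u

M = 1.0                                   # input bound |u| <= M (cf. Assumption 2.1)
inputs = np.linspace(-M, M, 21)           # sampled admissible input values
grid = np.round(np.linspace(-3.0, 3.0, 61), 2)   # 1-D state grid, step 0.1

D = [{0.0}]                               # D0: a neighbourhood of the origin (one grid point)
covered = set(D[0])

for _ in range(100):                      # build D1, D2, ... until nothing new is added
    Di = set()
    for x in grid:
        if x in covered:
            continue
        # nearest point of the previously constructed domain
        target = min(D[-1], key=lambda s: abs(s - x))
        # U_{i+1}(x): inputs for which f(x, u) points from x toward that point
        if any(np.sign(f(x, u)) == np.sign(target - x) and f(x, u) != 0
               for u in inputs):
            Di.add(x)
    if not Di:
        break
    D.append(Di)
    covered |= Di

S = covered                               # grid approximation of the union S(D0)
```

For this particular plant every grid state already admits an input pointing toward the origin, so the recursion terminates after constructing D1 and the approximation of S(D0) covers the whole grid.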

Schematically, the progression of the closed-loop system Σϕ toward its target domain can be described as depicted in Figure 3. Note that, in general, the domain Di may not be a connected set for some or for all integers i = 1, 2, . . .

Figure 3. Domains and paths.

In summary, the static state feedback function ϕ so constructed takes the system Σ to the target domain D0 from any point of the union

S(D0) := ⋃_{i≥0} Di.

Furthermore, we show in Section 3 that this is an exclusive feature of the set S(D0): there is no state feedback controller, neither static nor dynamic, which, in finite time, can take Σ into the target domain D0 from an initial state outside S(D0). Thus, recalling that X0 denotes the set of all potential initial conditions of Σ, we conclude that there is a state feedback controller solving Problem 1.1 if and only if

X0 ⊆ S(D0). (1.4)

We show in Section 3 that, whenever it exists, a state feedback controller C that solves Problem 1.1 can be implemented as a static state feedback function ϕ. The state feedback function ϕ is obtained from the solution of a set of inequalities based on the function f given in the differential equation (1.1) of the controlled system Σ.

As mentioned earlier, the tools presented here can be used to derive feedback controllers that asymptotically stabilise a nonlinear system Σ. To this end, simply choose the target domain D0 as a tight neighbourhood of the origin and derive a state feedback function ϕ that takes Σ into D0. Then, extend ϕ onto D0 by patching it together with a linear state feedback function that asymptotically stabilises a linearisation of Σ at the origin. The resulting state feedback function provides asymptotic stabilisation of Σ over the domain S(D0). This process yields a simple approach to global stabilisation of nonlinear systems by state feedback. The critical computational step involves the solution of a set of inequalities based on the function f given in the differential equation (1.1) of Σ. A detailed discussion of asymptotic stabilisation is provided in Section 6.

As mentioned earlier, we show in Section 3 that, whenever solvable, Problem 1.1 can be solved by a static state feedback controller. Thus, in the context of Problem 1.1, dynamic state feedback is not necessary. Considering the discussion of the previous paragraph, it also follows that asymptotic stabilisation of nonlinear systems, whenever possible, can be achieved by static state feedback. This is not to say that dynamic state feedback controllers are to be ignored, as they may offer broader capabilities of molding the dynamical behaviour of the closed-loop system Σc. The static state feedback controllers derived in this report can help calculate desired dynamic controllers. Indeed, a stabilising static state feedback controller can be used to obtain a fraction representation of the controlled system Σ, and such a fraction representation can be utilised to derive dynamical state feedback controllers that assign desirable dynamical behaviour to the closed-loop system (Hammer, 1984, 1985, 1989, 1994).

To conclude, static state feedback controllers that lead a nonlinear system Σ into a prescribed target domain D0 in state space can be derived from the solution of a set of inequalities obtained directly from the function f given in the differential equation (1.1) of the controlled system Σ. If this set of inequalities has no solution, then there is no state feedback controller, neither static nor dynamic, that takes Σ into D0 from the desired initial condition. These facts provide a simple and transparent foundation for the solution of a wide range of problems in nonlinear control. In Sections 2 and 3, these principles are formulated within a framework that assures robustness of the closed-loop feedback system Σϕ, guaranteeing that small errors in the function f or in the implementation of the state feedback function ϕ do not breach control objectives.

Alternative approaches to the control of nonlinear systems can be found in Lasalle and Lefschetz (1961), Lefschetz (1965), Hammer (1984, 1985, 1989, 1994, 2004), Desoer and Kabuli (1988), Verma (1988), Sontag (1989), Chen and de Figueiredo (1990), Paice and Moore (1990), Verma and Hunt (1993), Sandberg (1993), Paice and van der Schaft (1994), Baramov and Kimura (1995), Georgiou and Smith (1997), Logemann, Ryan, and Townley (1999), the references cited in these publications, and others.

The paper is organised as follows. Section 2 introduces basic concepts and notation, while Section 3 expands on these concepts and utilises them to derive state feedback controllers that solve Problem 1.1. Section 4 derives necessary and sufficient conditions for robust state feedback control of nonlinear systems. Section 5 consists of two examples that demonstrate the computation of robust state feedback controllers using the formalism developed in Section 4. The paper concludes in Section 6 with the derivation of state feedback controllers that provide robust asymptotic stabilisation of nonlinear systems.

2. Basics

In this section, we introduce some of the notions that underlie our discussion. Let Σ be an input/state system described by the differential equation (1.1) with a continuous function f : Rn × Rm → Rn, and consider the static state feedback configuration of Figure 2. We concentrate on necessary and sufficient conditions for the existence of a state feedback function ϕ : Rn → Rm for which the closed-loop system Σϕ proceeds from an initial state x0 into a specified target domain D0 in state space. First, some notation.

Notation: As usual, the superscript T indicates the transpose. Throughout our discussion, we employ the Euclidean norm | · | given, for an m-dimensional vector u = (u1, u2, . . . , um)T ∈ Rm, by

|u| := √(u1² + u2² + · · · + um²).

In practice, systems often have bounds on the maximal input signal amplitude they can tolerate. These bounds are determined by structural limitations of a system’s components. To incorporate such bounds into our considerations, we assume that the controlled system Σ allows only input signals whose amplitude does not exceed a specified magnitude M > 0. It is convenient to state this fact formally for future reference.

Assumption 2.1: The controlled system Σ permits only input signals u of magnitude |u| ≤ M, where M > 0 is a specified real number.

Note that when Assumption 2.1 is valid, the solution x(t) of the controlled system’s differential equation (1.1) is a continuous function of time. We take implicit advantage of this continuity in our forthcoming discussion.

Given a point s ∈ Rn and a real number ρ > 0, the open ball B(s, ρ) of centre s and radius ρ is, as usual, the set

B(s, ρ) := {x ∈ Rn : |x − s| < ρ}.

With every non-zero vector z ∈ Rn, we associate a unit vector z̄ in the direction of z; we take z̄ to be the zero vector when z = 0. Formally,

z̄ := z/|z| if z ≠ 0, and z̄ := 0 if z = 0. (2.1)

A slight reflection shows that the function z ↦ z̄ is continuous everywhere, except at the origin.

An object that is frequently used in our discussion is the straight line segment ℓ(y, z) that connects two distinct points y, z ∈ Rn; it consists of the set of points

ℓ(y, z) := {α(z − y) + y : α ∈ [0, 1]}. (2.2)

Using such straight line segments, we build the following body in Rn.

Definition 2.2: Given two points z, s ∈ Rn and a real number ρ > 0, the ball-cone χ(z, s, ρ) is a body in Rn consisting of all straight line segments that start in the open ball B(s, ρ) and end at z, namely,

χ(z, s, ρ) := ⋃_{y ∈ B(s,ρ)} ℓ(y, z) = {x ∈ Rn : x = α(z − y) + y, α ∈ [0, 1], y ∈ B(s, ρ)}.

The point z is the apex of the ball-cone χ(z, s, ρ), while s is its centre of base.
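Definition 2.2 admits a direct numerical membership test: x ∈ χ(z, s, ρ) exactly when x = α(z − y) + y for some α ∈ [0, 1] and some y in the open ball B(s, ρ); solving for y at a fixed α < 1 gives y = (x − αz)/(1 − α), so it suffices to scan α. A small sketch, with illustrative sample points that are not from the paper:

```python
import numpy as np

def in_ball_cone(x, z, s, rho, n_alpha=2001):
    """Approximate test of x in chi(z, s, rho) by scanning alpha in [0, 1)."""
    x, z, s = map(np.asarray, (x, z, s))
    if np.allclose(x, z):
        return True                          # the apex z itself (alpha = 1)
    for alpha in np.linspace(0.0, 1.0, n_alpha)[:-1]:
        y = (x - alpha * z) / (1.0 - alpha)  # candidate base point for this alpha
        if np.linalg.norm(y - s) < rho:      # y must lie in the open ball B(s, rho)
            return True
    return False

z = np.array([0.0, 2.0])    # apex
s = np.array([0.0, 0.0])    # centre of base
rho = 0.5
```

For instance, the midpoint [0, 1] of the axis and the base point [0.4, 0] lie in χ(z, s, ρ), while a point such as [2, 0] far to the side does not.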


Figure 4. A ball-cone χ (z, s, ρ).

Note that, when s = z, the ball-cone reduces to the ball χ(s, s, ρ) = B(s, ρ).

Visually, the ball-cone is akin to a cone, except that instead of a cone’s ‘flat’ base, the ball-cone has a ball, as depicted in Figure 4. The shaded area in the figure shows a standard right circular cone c(z, s, ρ) whose vertex is at z, the centre of its base is at s, and the radius of its base is ρ. The angle between the generator and the axis of the right circular cone c(z, s, ρ) is denoted by θ in the figure. We refer to θ as the opening angle of the ball-cone χ(z, s, ρ). Note that the angle between the generator and the axis of χ(z, s, ρ) is larger than θ, since a ball-cone has the entire n-dimensional ball B(s, ρ) as its ‘base’.

For the right circular cone c(z, s, ρ), we have

ρ = |s − z| tan θ. (2.3)

2.1 Directional error

Consider a system Σ described by the differential equation (1.1), and assume that the system is at a state z ∈ Rn. Suppose that it is necessary to make Σ proceed from the state z to a state s ≠ z ∈ Rn along the straight line segment that connects the two states, namely, along the segment ℓ(z, s) of (2.2). For Σ to proceed from z to s along this straight line segment, the trajectory x(t) of Σ must have the following feature: the derivative ẋ = f(x, u) must point in the direction from z to s at all points x ∈ ℓ(z, s). As the direction of the derivative ẋ is given by the function f, there must be, at every state x ∈ ℓ(z, s), an input value u(x) for which f(x, u(x)) points in the direction of the unit vector (s − z)‾. Considering the representation (2.2) of the line segment, every point x of ℓ(z, s) is characterised by the triplet z, s, α; hence, we can express the appropriate input value at each point as a function u(z, s, α). In these terms, it follows that Σ can be driven along the straight line segment from z to s if and only if there are input values u(z, s, α) ∈ Rm such that

f(α(z − s) + s, u(z, s, α))‾ = (s − z)‾ for all α ∈ [0, 1]. (2.4)

The requirement (2.4) implies complete accuracy and, as a result, cannot be implemented in practice. We must modify the requirement into a form that allows for small errors and leads to a robust implementation. To this end, we replace (2.4) by the following somewhat weaker condition: rather than pointing exactly at s, we allow the derivative ẋ = f(x, u) to point to the vicinity of s. Permitting such deviation in the direction of motion removes the burden of absolute accuracy and facilitates a robust implementation.

Formally, let ε > 0 be a real number that describes the available directional accuracy. Then, (2.4) is replaced by the requirement that there be an input function u(z, s, α) ∈ Rm satisfying

|f(α(z − s) + s, u(z, s, α))‾ − (s − z)‾| < ε for all α ∈ [0, 1]. (2.5)

The error ε may be caused by a combination of factors, including inaccuracies of the function f and errors in the implementation of the input u(z, s, α). We say that (2.5) represents a directional error of ε. As usual, the term ‘error’ indicates an unpredictable event. The expression ‘an error of ε’ refers to all errors whose magnitude does not exceed ε.

To be meaningful, condition (2.5) must be valid at all states through which the system might pass on its way from z to s. Due to the directional error of ε, the motion of Σ induced by the input values u(z, s, α) may not be confined to the straight line segment connecting z to s. To characterise the states through which the system may travel under condition (2.5), let θ(ε) ≥ 0 be the supremal angle between the two unit vectors f(α(z − s) + s, u(z, s, α))‾ and (s − z)‾ that is consistent with a directional error of ε; namely, θ(ε) is the angle between the two unit vectors f(α(z − s) + s, u(z, s, α))‾ and (s − z)‾ when |f(α(z − s) + s, u(z, s, α))‾ − (s − z)‾| = ε. Build a ball-cone with opening angle θ(ε) and base centre s, as depicted in Figure 5. Letting ρ(z, s, ε) be the base radius of this ball-cone, it follows by (2.3) that

ρ(z, s, ε) = |z − s| tan θ(ε). (2.6)

Letting *(z, s, ε) denote the resulting ball-cone, we have

*(z, s, ε) = ⋃_{y ∈ B(s, ρ(z,s,ε))} ℓ(z, y) = {x ∈ Rn : x = α(z − y) + y, α ∈ [0, 1], y ∈ B(s, ρ(z, s, ε))}, (2.7)

where the radius ρ(z, s, ε) is given by (2.6).

To express our quantities directly in terms of ε, consider the supremal case when

|f(α(z − s) + s, u(z, s, α))‾ − (s − z)‾| = ε.


Figure 5. Directional errors.

Then, the triangle formed by the two unit vectors f(α(z − s) + s, u(z, s, α))‾ and (s − z)‾, when pinned together at their tails and connected by a straight line segment at their heads, is an isosceles triangle with sides of length 1 (the unit vectors) and base of length ε. The height h of this triangle is determined by the Pythagorean theorem to be h = √(1 − ε²/4), and the apex angle of this triangle is

θ(ε) = 2 arctan((ε/2)/h) = 2 arctan((ε/2)/√(1 − ε²/4)). (2.8)

A direct examination of Figures 4 and 5 shows that θ(ε) is the opening angle of the ball-cone *(z, s, ε). Note that θ(ε) is completely determined by ε.

In view of (2.6), the base radius ρ(z, s, ε) of the ball-cone *(z, s, ε) is given by

ρ(z, s, ε) = |z − s| tan(2 arctan((ε/2)/√(1 − ε²/4))). (2.9)

For ε → 0, this yields the first-order approximations

θ(ε) ≈ ε,
ρ(z, s, ε) ≈ |z − s| ε.

As we discuss next, condition (2.5) must be valid within theentire ball-cone *(z, s, ε), if it is to be meaningful.
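The formulas (2.8) and (2.9) can be checked numerically: two unit vectors separated by an angle θ lie at distance 2 sin(θ/2) from each other, so θ(ε) of (2.8) must reproduce a chord of length exactly ε, and θ(ε) ≈ ε for small ε. A quick sketch of this check:

```python
import numpy as np

def theta(eps):
    # equation (2.8): apex angle of the isosceles triangle with
    # two unit sides and base eps; h is the height of the triangle
    h = np.sqrt(1.0 - eps**2 / 4.0)
    return 2.0 * np.arctan((eps / 2.0) / h)

def rho(z, s, eps):
    # equation (2.9): base radius of the ball-cone *(z, s, eps)
    return np.linalg.norm(np.asarray(z) - np.asarray(s)) * np.tan(theta(eps))

for eps in (0.01, 0.1, 0.5, 1.0):
    t = theta(eps)
    u = np.array([1.0, 0.0])               # one unit vector
    v = np.array([np.cos(t), np.sin(t)])   # the other, at angle theta from u
    # the chord between the two unit vectors has length eps
    assert abs(np.linalg.norm(u - v) - eps) < 1e-12

# first-order approximation: theta(eps) ~ eps as eps -> 0
assert abs(theta(1e-4) - 1e-4) < 1e-9
```

The check passes because sin(θ(ε)/2) = ε/2 holds identically: the hypotenuse √(h² + ε²/4) of the half-triangle equals 1.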

2.2 Ball-cones and invariance

Note that the ball-cone *(z, s, ε) of (2.7) is not an open set, since it does not include a neighbourhood of the apex z. However, a slight reflection shows that, except for z, all points of *(z, s, ε) are interior points. Given a subset S ⊆ Rn, denote by S‾ the closure of S in Rn. We start now the process of showing that the closure of a ball-cone *(z, s, ε) forms an invariant set in the sense that, as long as the directional error does not exceed ε, the trajectory of the closed-loop system does not exit *(z, s, ε) before reaching a vicinity of s.

Proposition 2.3: Let z, s ∈ Rn be two distinct points, and let ε > 0 be a directional error for which the opening angle of the ball-cone *(z, s, ε) satisfies θ(ε) < π/4. Then, *(z′, s, ε) ⊆ *(z, s, ε) for all z′ ∈ *(z, s, ε).

Proof: For a point z′ ∈ *(z, s, ε), we can write

*(z′, s, ε) = {x ∈ Rn : x = α(z′ − y′) + y′, α ∈ [0, 1], y′ ∈ B(s, ρ(z′, s, ε))}, (2.10)

where, according to (2.6), we have ρ(z′, s, ε) = |z′ − s| tan θ(ε). Considering that z′ ∈ *(z, s, ε), it follows from (2.7) that

z′ = β(z − y) + y (2.11)

for some β ∈ [0, 1] and y ∈ B(s, ρ(z, s, ε)). Substituting into (2.10), we get

*(z′, s, ε) = {x ∈ Rn : x = α(β(z − y) + y − y′) + y′, α, β ∈ [0, 1], y ∈ B(s, ρ(z, s, ε)), y′ ∈ B(s, ρ(z′, s, ε))}. (2.12)

We claim that the point

x = α(β(z − y) + y − y ′) + y ′ (2.13)

of (2.12) is a member of *(z, s, ε) for all α, β ∈ [0, 1].


International Journal of Control 149

To this end, note first that when α = β = 1, we get x = z, so that x ∈ *(z, s, ε), and our claim is valid in this case. Next, consider the case where α and β are not both equal to 1. Then, x can be rewritten in the form

x = αβ(z − η) + η, (2.14)

where a comparison of (2.13) and (2.14) yields, after some simplification, that

η = y − [(1 − α)/(1 − αβ)] (y − y′) for all α, β ∈ [0, 1] satisfying αβ ≠ 1.    (2.15)

Now, assume for a moment that η ∈ B(s, ρ(z, s, ε)). Then, setting δ := αβ ∈ [0, 1], we can rewrite (2.14) in the form x = δ(z − η) + η. In view of (2.7), this implies that x ∈ *(z, s, ε). As the latter is true for every point x of the ball-cone *(z′, s, ε) and for all z′ ∈ *(z, s, ε), it follows that *(z′, s, ε) ⊆ *(z, s, ε) for all z′ ∈ *(z, s, ε). Thus, our proof will conclude upon showing that η ∈ B(s, ρ(z, s, ε)).

To prove the latter, examine the coefficient of (2.15)

γ(α, β) := (1 − α)/(1 − αβ),

where α, β ∈ [0, 1] and αβ ≠ 1. Considering that

∂γ(α, β)/∂β = α(1 − α)/(1 − αβ)² > 0 for all α, β ∈ [0, 1], αβ ≠ 1,

it follows that γ(α, β) is a monotone increasing function of β for all α, β ∈ [0, 1], αβ ≠ 1. Since we have γ(α, 0) = (1 − α) and γ(α, 1) = 1, we conclude that 0 ≤ γ(α, β) ≤ 1 for all α, β ∈ [0, 1], αβ ≠ 1. Thus, we can rewrite (2.15) in the form

η = γ(α, β)(y′ − y) + y,  γ(α, β) ∈ [0, 1],

which implies that η is a point on the straight line segment connecting y and y′.

Now, recall that y ∈ B(s, ρ(z, s, ε)); if we can show that also y′ ∈ B(s, ρ(z, s, ε)), then, since B(s, ρ(z, s, ε)) is a convex set, it would follow that η ∈ B(s, ρ(z, s, ε)), and our proof would conclude. Further, since y′ ∈ B(s, ρ(z′, s, ε)) by (2.12), and since the two balls B(s, ρ(z′, s, ε)) and B(s, ρ(z, s, ε)) are concentric (with centre at s), the inclusion y′ ∈ B(s, ρ(z, s, ε)) would follow from the inequality ρ(z′, s, ε) ≤ ρ(z, s, ε). By (2.9), this inequality would be a consequence of the inequality |z′ − s| ≤ |z − s|. Thus, our proof will conclude upon showing that the last inequality is valid.

Using the Proposition’s assumption that θ(ε) < π/4, we infer from (2.6) that

ρ(z, s, ε) < |z − s|. (2.16)

Subtracting s from both sides of (2.11), we obtain z′ − s = β(z − y) + (y − s) = β(z − s) + (1 − β)(y − s). As 0 ≤ β ≤ 1, we can write

|z′ − s| = |β(z − s) + (1 − β)(y − s)| ≤ β|z − s| + (1 − β)|y − s|.    (2.17)

Now, since y ∈ B(s, ρ(z, s, ε)), we have |y − s| ≤ ρ(z, s, ε); using (2.16), this yields |y − s| < |z − s|. Substituting into (2.17), we obtain |z′ − s| < β|z − s| + (1 − β)|z − s| = |z − s|, and our proof concludes. !
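The pivotal algebraic fact in the proof above is that the coefficient γ(α, β) = (1 − α)/(1 − αβ) stays in [0, 1]. A brute-force grid check (our illustration only, not a substitute for the proof) confirms the bound:

```python
def gamma(alpha, beta):
    # Coefficient from Equation (2.15); defined whenever alpha*beta != 1.
    return (1.0 - alpha) / (1.0 - alpha * beta)

n = 200
for i in range(n + 1):
    for j in range(n + 1):
        a, b = i / n, j / n
        if a * b == 1.0:          # skip the excluded pair alpha = beta = 1
            continue
        assert 0.0 <= gamma(a, b) <= 1.0
print("0 <= gamma(alpha, beta) <= 1 on the whole grid")
```

The endpoint values γ(α, 0) = 1 − α and γ(α, 1) = 1 used in the proof show up as the extremes of each row of the grid.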

Proposition 2.3 is an important component of our forthcoming discussion. As it is valid for θ(ε) < π/4, we restrict our attention from now on to cases that satisfy this requirement. In light of (2.8), this assumption is not overly restrictive; a direct calculation shows that it corresponds to 0 < ε < 0.7654. Normally, directional errors are small, and their corresponding opening angles are much smaller than π/4.

Assumption 2.4: The directional error satisfies θ(ε) < π/4.
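The numerical bound 0.7654 quoted above can be pinned down in closed form: inverting (2.8) at θ(ε) = π/4 gives ε = 2 sin(π/8) ≈ 0.7654. This closed form is our derivation from (2.8), checked numerically below:

```python
import math

def theta(eps):
    # Apex angle from Equation (2.8).
    return 2.0 * math.atan(eps / (2.0 * math.sqrt(1.0 - eps**2 / 4.0)))

# Inverting theta(eps) = pi/4 yields eps = 2 sin(pi/8).
eps_max = 2.0 * math.sin(math.pi / 8.0)
assert abs(eps_max - 0.7654) < 1e-4
assert abs(theta(eps_max) - math.pi / 4.0) < 1e-9
```

Equivalently, θ(ε) = 2 arcsin(ε/2): a chord of length ε between two unit vectors subtends the angle 2 arcsin(ε/2), which is why the threshold comes out as 2 sin(π/8).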

The next statement is a technical result. It will help us later to show that a system pointed toward a point s with a directional error of ε remains within the closed ball-cone *(z, s, ε) until reaching a vicinity of s.

Notation 2.5: For a real number β > 0, the symbol 0(β) represents the set of all functions ω : R → Rn for which limβ→0 |ω(β)|/β = 0.

Lemma 2.6: Let x, x′, s, z ∈ Rn be points and assume that x′ − x = βa + µ(x′, x), where β > 0 is a real number, a is a unit vector satisfying |a − (s − x)| < ε, and µ(x′, x) ∈ 0(β). If x ∈ *(z, s, ε), then also x′ ∈ *(z, s, ε) for sufficiently small β > 0.

Proof: Consider x as a fixed point, so that x′ is a function of β. Note that, if x = s, then x is the centre of the ball B(s, ρ(z, s, ε)) ⊆ *(z, s, ε); in such a case, every point x′ sufficiently close to x is in B(s, ρ(z, s, ε)) and, consequently, in *(z, s, ε). Thus, the statement is valid when x = s.

Next, examine the case where x ≠ s. Let θ(ε) be the opening angle of the ball-cone *(z, s, ε). Since x ∈ *(z, s, ε), it follows by Proposition 2.3 that *(x, s, ε) ⊆ *(z, s, ε). This implies that *(z, s, ε) includes the straight line segment ℓ(x, s), as well as any straight line segment that connects x to a point of the ball B(s, ρ(x, s, ε)).

Taking magnitudes in the equation of the Lemma’s statement, we obtain 0 ≤ |x′ − x| ≤ β|a| + |µ(x′, x)| = β(1 + |µ(x′, x)|/β), since a is a unit vector. As


µ(x′, x) ∈ 0(β), this implies that limβ→0 |x′ − x| = 0. Keeping in mind that x ≠ s, the latter implies that there is a real number β0 > 0 such that

|x′ − x| < |s − x|/2 for all 0 < β < β0.    (2.18)

Further, define the real number ξ := |a − (s − x)| < ε, and let θ(ξ) be the angle between the unit vectors a and (s − x). Then, as the function (2.8) is strictly monotone increasing over our range and ξ < ε, we have that θ(ξ) < θ(ε). Let ψ(x′) be the angle formed between the vector x′ − x and the unit vector (s − x). Using the expression x′ − x = βa + µ(x, x′) and the fact that (s − x) is a unit vector, we can write

ψ(x′) = arccos((x′ − x) · (s − x)/|x′ − x|)
= arccos([βa · (s − x) + µ(x, x′) · (s − x)] / √(β² + µ(x, x′) · µ(x, x′) + 2β µ(x, x′) · a))
= arccos([a · (s − x) + µ(x, x′) · (s − x)/β] / √(1 + µ(x, x′) · µ(x, x′)/β² + 2µ(x, x′) · a/β)).

Considering that limβ→0 |µ(x, x′)|/β = 0, we conclude that

0 ≤ limβ→0 ψ(x′) = arccos(a · (s − x)) = θ(ξ) < θ(ε).    (2.19)

Now, denote ζ := θ(ε) − θ(ξ); in view of (2.19), there is a real number β1 > 0 such that ψ(x′) < θ(ε) − ζ/2 for all 0 < β < β1. Then, for all 0 < β < min{β0, β1}, the vector x′ − x forms an angle smaller than θ(ε) with the line segment ℓ(x, s), and, by (2.18), the length of x′ − x does not exceed |x − s|. This shows that x′ ∈ *(x, s, ε), and, since x ∈ *(z, s, ε), the lemma follows by Proposition 2.3. !

In Section 3, we examine state feedback functions ϕ that take a system ! from a state z to a state s, while continually aiming at s with a directional error of ε. At that point, Lemma 2.6 will help us show that such state feedback keeps the closed-loop system !ϕ within the ball-cone *(z, s, ε) until a vicinity of s is reached. This fact turns out to be critical to our discussion.
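The limit (2.19) that drives Lemma 2.6 can be observed directly. In the planar sketch below (our illustration; the perturbation µ(β) = (β², 0) is one admissible member of 0(β), and the 0.1 rad offset plays the role of θ(ξ)), the angle ψ(x′) approaches the angle between a and the unit vector toward s as β shrinks:

```python
import math

def angle(u, v):
    # Angle between two nonzero planar vectors, via the dot product.
    dot = u[0]*v[0] + u[1]*v[1]
    return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*u) * math.hypot(*v)))))

to_s = (0.6, 0.8)                                # unit vector from x toward s
phi = math.atan2(to_s[1], to_s[0])
a = (math.cos(phi + 0.1), math.sin(phi + 0.1))   # unit vector, 0.1 rad off target

for beta in (1e-1, 1e-2, 1e-3):
    mu = (beta**2, 0.0)                          # an o(beta) perturbation
    step = (beta*a[0] + mu[0], beta*a[1] + mu[1])  # the displacement x' - x
    print(beta, angle(step, to_s))               # psi(x') tends to 0.1

assert abs(angle(step, to_s) - 0.1) < 1e-2       # smallest beta: close to theta(xi)
```

Because the perturbation shrinks faster than β, the direction of x′ − x is eventually dominated by a, exactly as the arccos computation in the proof shows.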

2.3 Directional uncertainty and interception

We start by characterising the set of all states from which a system ! with a directional error of ε can reach the vicinity of a specific target state. For a state s ∈ Rn, let 5(s, ε) be the set of all states of ! at which the trajectory of ! can be pointed in the direction of s with a directional error of ε > 0. Recalling Assumption 2.1, we have

5(s, ε) = {x ∈ Rn \ {s} : |f(x, u(x)) − (s − x)| < ε for some u(x) ∈ Rm satisfying |u(x)| ≤ M}.    (2.20)

Figure 6. Interception.

Note that pointing the path of ! toward s does not guarantee that ! can reach s from x, since it might not be possible to maintain this direction all the way from x to s. The set of all states from which ! can actually reach the vicinity of s will be characterised shortly.
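Membership in 5(s, ε) can be tested pointwise by searching over admissible input values. In the sketch below, the grid search, the bound M = 5, and the example dynamics f(x, u) = (x2, u) (a double integrator) are our assumptions for illustration; the condition tested is exactly (2.20), with unit(·) standing in for the normalisation of f:

```python
import math

M = 5.0  # assumed input bound, in the role of Assumption 2.1

def f(x, u):
    # Assumed example dynamics: dx1/dt = x2, dx2/dt = u (double integrator).
    return (x[1], u)

def unit(v):
    n = math.hypot(*v)
    return (0.0, 0.0) if n == 0.0 else (v[0]/n, v[1]/n)

def in_Lambda(x, s, eps, n_grid=1001):
    # Condition (2.20): is there |u| <= M with |unit(f(x,u)) - unit(s-x)| < eps?
    t = unit((s[0] - x[0], s[1] - x[1]))
    for k in range(n_grid):
        u = -M + 2.0 * M * k / (n_grid - 1)
        fh = unit(f(x, u))
        if math.hypot(fh[0] - t[0], fh[1] - t[1]) < eps:
            return True
    return False

print(in_Lambda((0.0, 1.0), (1.0, 0.0), 0.3))   # True: s lies ahead of the motion
print(in_Lambda((0.0, 1.0), (-1.0, 0.0), 0.3))  # False: x1 cannot reverse instantly
```

The second call fails because the first component of f is fixed at x2 = 1, so no input can point the trajectory leftward: a concrete instance of a state outside 5(s, ε).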

Considering that our objective is to move the state of ! in the direction of s within 5(s, ε), we have to make sure that f(x, u(x)) ≠ 0 at all states x along the way, since otherwise a stationary point is met at (x, u(x)) and the system stops progressing. Now, substituting f(x, u(x)) = 0 into (2.20) yields |(s − x)| < ε, or 1 < ε, since the normalised vector f(x, u(x)) is zero whenever f(x, u(x)) = 0 (see (2.1)). Thus, Assumption 2.4 guarantees that f(x, u(x)) ≠ 0 at all points of 5(s, ε), and stationary points are avoidable in 5(s, ε).
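For completeness, the contradiction in the preceding paragraph can be written in one line (our notation: f̄ denotes the normalised vector of (2.1), and the overlined difference term of (2.20) is the unit vector toward s):

```latex
f(x,u(x)) = 0 \;\Longrightarrow\; \bar f(x,u(x)) = 0
\;\Longrightarrow\; \bigl|\bar f(x,u(x)) - \overline{(s-x)}\bigr|
  = \bigl|\overline{(s-x)}\bigr| = 1 > \varepsilon ,
```

so the defining inequality of 5(s, ε) fails wherever f vanishes, as long as ε < 1 (which Assumption 2.4 guarantees).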

Problem 1.1 requires the design of a feedback controller that takes ! from an initial state x0 = z into a target domain D0. The path that takes ! from z to D0 is, generally speaking, unpredictable, since the closed-loop system is subject to a directional error of ε. Nevertheless, as Proposition 2.3 hints (this connection is made precise in Proposition 3.1), the closed-loop system remains confined to the closed ball-cone *(z, s, ε). Thus, if every path through *(z, s, ε) meets the target domain D0, we are assured that the objective of entering D0 is achievable, despite possible directional errors. These considerations lead us to the following notion.

Definition 2.7: An open domain D0 ⊆ Rn intercepts the ball-cone *(z, s, ε) if ℓ(z, y) ∩ D0 ≠ ∅ for all y ∈ B(s, ρ(z, s, ε)).
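When the target is itself a ball, interception is easy to test numerically: a segment meets an open ball exactly when the distance from the ball's centre to the segment is smaller than the radius. The sketch below (our illustration, in the plane; it samples the base ball's boundary and centre as a finite proxy, so it is a sketch rather than an exhaustive test of Definition 2.7):

```python
import math

def seg_dist(p, q, c):
    # Distance from point c to the straight segment from p to q.
    vx, vy = q[0] - p[0], q[1] - p[1]
    wx, wy = c[0] - p[0], c[1] - p[1]
    vv = vx*vx + vy*vy
    t = 0.0 if vv == 0.0 else max(0.0, min(1.0, (wx*vx + wy*vy) / vv))
    return math.hypot(p[0] + t*vx - c[0], p[1] + t*vy - c[1])

def intercepts(z, s, rho, c, r, n=360):
    # Does D0 = B(c, r) intercept the ball-cone with apex z and base B(s, rho)?
    # Finite proxy: test l(z, y) for y at the base ball's centre and boundary.
    pts = [s] + [(s[0] + rho*math.cos(2*math.pi*k/n),
                  s[1] + rho*math.sin(2*math.pi*k/n)) for k in range(n)]
    return all(seg_dist(z, y, c) < r for y in pts)

print(intercepts((0, 3), (0, 0), 0.5, (0, 1), 0.8))  # True: D0 blocks every ray
print(intercepts((0, 3), (0, 0), 0.5, (2, 1), 0.3))  # False: D0 sits beside the cone
```

In the first call the ball B((0, 1), 0.8) straddles the axis of the cone, so every segment from the apex to the base must pass through it; in the second, D0 lies entirely to the side.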

When the target domain D0 intercepts the ball-cone *(z, s, ε), then every ray within the ball-cone from the apex z meets D0. We show later that this implies that every path taking ! from z to s with a directional error of ε must enter D0. The situation is depicted schematically in Figure 6. In particular, note that D0 intercepts the ball-cone *(z, s, ε)


whenever z ∈ D0; this is, of course, the degenerate case here.

Our interest in the system’s progress ends upon entering the target domain D0. Therefore, in Figure 6, our interest is confined to the ‘upper’ part of the ball-cone, namely to the part between the apex z and the set D0. Formally, this is described by the following notion.

Definition 2.8: Let D0 be an open subset of Rn that intercepts the ball-cone *(z, s, ε), and denote by ∂D0 := D̄0 \ D0 the boundary of D0. Then, the restriction *D0(z, s, ε) is the set

*D0(z, s, ε) := ⋃{ℓ(y, z) : y ∈ *(z, s, ε) ∩ ∂D0 and ℓ(y, z) ∩ D0 = ∅}   if z ∉ D0,
*D0(z, s, ε) := ∅   if z ∈ D0.    (2.21)

When z ∉ D0, the restriction *D0(z, s, ε) consists of all points of the closed ball-cone *(z, s, ε) that are between the apex z of the ball-cone and the target domain D0, including z and the ‘upper’ boundary of D0. The restriction is represented graphically in Figure 6. It is a closed and bounded set in Rn, as follows.

Lemma 2.9: Let D0 ⊆ Rn be an open set that intercepts the ball-cone *(z, s, ε). Then, the restriction *D0(z, s, ε) is a compact set.

Proof: According to Definition 2.8, the restriction *D0(z, s, ε) is a subset of *(z, s, ε), and hence is a bounded set. Thus, it remains to show that the restriction is a closed set. To this end, consider a sequence of points x1, x2, ... ∈ *D0(z, s, ε) that converges to a point x ∈ Rn. We have to show that x ∈ *D0(z, s, ε). By Definition 2.8, there is a sequence of points y1, y2, ... ∈ *(z, s, ε) ∩ ∂D0 such that xi ∈ ℓ(yi, z) and ℓ(yi, z) ∩ D0 = ∅, i = 1, 2, ...

As *(z, s, ε) ∩ ∂D0 is a closed subset of a bounded set, it is a compact set, whence the sequence y1, y2, ... has a convergent subsequence that converges to a point y ∈ *(z, s, ε) ∩ ∂D0. It follows then that ℓ(y, z) = limi→∞ ℓ(yi, z), so that ℓ(y, z) ∩ D0 = (limi→∞ ℓ(yi, z)) ∩ D0 = limi→∞(ℓ(yi, z) ∩ D0) = ∅. Referring again to Definition 2.8, we conclude that ℓ(y, z) ⊆ *D0(z, s, ε). This implies that, if x ∈ ℓ(y, z), then x ∈ *D0(z, s, ε), and it follows that *D0(z, s, ε) is a closed set. Thus, our proof will conclude upon showing that x ∈ ℓ(y, z).

To show the latter, assume by contradiction that x ∉ ℓ(y, z). Then, considering that ℓ(y, z) is a closed set, the complement of ℓ(y, z) is an open set, and hence there is a real number µ > 0 such that the ball B(x, µ) includes no points of ℓ(y, z). As the sequence {xi} converges to x, there is an integer N > 0 such that |x − xi| < µ/2 for all i ≥ N. This implies that none of the points {xi}∞i=N gets closer than µ/2 to the segment ℓ(y, z). Considering that the segments {ℓ(yi, z)}∞i=1 all have the common apex z, this implies that none of the points {yi}∞i=N gets closer than µ/2 to y, contradicting the fact that y is a limit point of the sequence {yi}. Thus, we must have that x ∈ ℓ(y, z), and our proof concludes. !

3. Existence and construction of state feedback functions

3.1 Reaching the target domain

We start our investigation of the existence of state feedback functions that solve Problem 1.1 by considering states from which the controlled system can be driven into the target domain along a path related to a straight line.

Proposition 3.1: Let ! be a system described by (1.1), where the function f is continuous. Let z, s ∈ Rn be a pair of points, let ε > 0 be a real number, let 5(s, ε) be given by (2.20), and let D0 ⊆ Rn be an open domain that intercepts *(z, s, ε). If *D0(z, s, ε) ⊆ 5(s, ε), then there is a state feedback function ϕ with a directional error of ε that takes ! from z into D0 in finite time.

Proof: As the proposition is clearly true when z ∈ D0, we consider the case where z ∉ D0. Let x(t) be the state of the system ! at the time t, and let u(x(t)) be the input of ! at t. Use the initial state x(0) = z, so that x(0) ∈ *D0(z, s, ε) by Definition 2.8. As *D0(z, s, ε) ⊆ 5(s, ε) by the proposition’s assumption, it follows that x(0) ∈ 5(s, ε). Consequently, there is an input value u(x(0)) ∈ Rm such that

|f (x(0), u(x(0))) − (s − x(0))| < ε. (3.1)

Now, let τ ≥ 0 be a real number for which the following is true: there is a state feedback function u(x) that drives ! with a directional error of ε in such a way that the state satisfies x(t) ∈ *D0(z, s, ε) for all t ∈ [0, τ]. We are interested in the upper limit of τ, namely, in the quantity

T := sup τ.    (3.2)

In view of (3.1), we have τ ≥ 0, so also T ≥ 0. By (1.1), we can write for a real number δ > 0 that

x(t + δ) = x(t) + f (x(t), u(x(t))) δ + 0(δ). (3.3)

Setting t = 0 and considering that x(0) = z ∈ *(z, s, ε), it follows by Lemma 2.6 that there is a real number δ0 > 0 such that x(δ) ∈ *(z, s, ε) for all 0 < δ < δ0. Clearly, if x(δ) ∈ D0 for some 0 < δ < δ0, then our proof is complete, as the target domain has been reached. Otherwise, it follows by Definition 2.8 that x(δ) ∈ *D0(z, s, ε) for all 0 < δ < δ0. Also, since *D0(z, s, ε) ⊆ 5(s, ε) by the proposition’s assumption, it follows that x(δ) ∈ 5(s, ε) for all 0 < δ < δ0.

Next, by (3.2), there is a sequence of times t1, t2, ... ∈ [0, T] converging to T such that x(ti) ∈ *D0(z, s, ε) for all


i = 1, 2, .... As *D0(z, s, ε) is compact by Lemma 2.9, the sequence {x(ti)}∞i=1 has a convergent subsequence {x(tik)}∞k=1 and limk→∞ x(tik) ∈ *D0(z, s, ε). Considering that x(t) is the solution of the differential equation (1.1) with a bounded input function (Assumption 2.1), it follows that x(t) is a continuous function of time; thus, limk→∞ tik = T implies that limk→∞ x(tik) = x(T), whence x(T) ∈ *D0(z, s, ε).

Further, since *D0(z, s, ε) ⊆ 5(s, ε) by assumption, x(T) ∈ 5(s, ε). Thus, there is an input value u(x(T)) ∈ Rm such that |f(x(T), u(x(T))) − (s − x(T))| < ε. Using (3.3) with t = T and Lemma 2.6, this implies that there is a real number δ1 > 0 such that x(T + δ) ∈ *(z, s, ε) for all 0 < δ < δ1. Selecting one such value of δ, we are left with the following two options: (i) x(T + δ) ∉ D0, or (ii) x(T + δ) ∈ D0. Case (i) implies that x(T + δ) ∈ *D0(z, s, ε), violating the fact that T is the supremum given by (3.2). Thus, (ii) must be valid, and our proof concludes. !

Proposition 3.1 forms the foundation of our solution of Problem 1.1. The solution is based on a concept introduced in the next subsection.

3.2 The expansion set

Let D0 be an open domain in Rn serving as the target domain for the system !. Let ε > 0 be a real number, let f : Rn × Rm → Rn be a continuous function, and let 5(s, ε) be given by (2.20). Then, in view of Proposition 3.1, the system ! can be driven into the target domain D0 by a state feedback function with a directional error of ε from any point of the set

E1f(D0, ε) := {z ∈ Rn : D0 intercepts *(z, s, ε) for some s ∈ Rn, and *D0(z, s, ε) ⊆ 5(s, ε)}.    (3.4)

Definition 3.2: E1f(D0, ε) is the expansion set of D0 relative to f with a directional error of ε.

In view of Definitions 2.7 and 2.8, we have that

D0 ⊆ E1f (D0, ε). (3.5)

Proposition 3.1 can then be restated in the following form.

Proposition 3.3: Let ! be a system described by (1.1) with a continuous function f. Let ε > 0 be a real number, let D0 ⊆ Rn be an open domain, and let E1f(D0, ε) be the expansion set. Then, there is a state feedback function ϕ with a directional error of ε that takes ! from every initial state z ∈ E1f(D0, ε) into D0 in finite time.

A partial inverse of Proposition 3.3 is provided by the following.

Proposition 3.4: Let ! be a system described by the differential equation (1.1) with a continuous function f. Let D0 be an open domain in Rn, let ε > 0 be a real number, and let E1f(D0, ε) be the expansion set. If the difference set E1f(D0, ε) \ D0 is empty, then there is no state feedback function with a directional error of ε that drives ! from a state outside of D0 into D0 in finite time.

Proof: The proof is by contradiction. Assume that E1f(D0, ε) \ D0 = ∅, but that there is a state z ∉ D0 from which ! can be driven into D0 by a state feedback function with a directional error of ε. Let ϕ : Rn → Rm be such a state feedback function, and let s ∈ D0 be a point reached by the closed-loop system !ϕ from z. Recalling that a directional error of ε includes all directional errors smaller than ε, it follows by the proof of Proposition 3.1 that D0 intercepts *(z, s, ε).

Now, let x(t) be the state of the closed-loop system !ϕ at the time t, as it progresses from z to s. Set x(0) = z, and let t1 > 0 be a time at which x(t1) = s ∈ D0. Considering that D0 is an open set, x(t1) is an interior point of D0. In view of Assumption 2.1, all input values of ! are uniformly bounded and, as a result, x(t) is a continuous function of t. Define the time

t∗ := inf{t ≥ 0 : x(t) ∈ D0};    (3.6)

as x(t1) is an interior point of D0, it follows that t1 − t∗ ≥ 0. We claim that x(t∗) is a boundary point of D0.

Indeed, assume, by contradiction, that x(t∗) ∈ D0. As D0 is an open set by the proposition’s assumptions, there is a real number δ > 0 such that the ball B(x(t∗), δ) of radius δ and centre x(t∗) is included in D0. By the continuity of x(t), there is a real number µ > 0 such that |x(t∗) − x(t)| < δ/2 for all times t satisfying |t∗ − t| < µ. But then, x(t∗ − µ) ∈ B(x(t∗), δ) and, since B(x(t∗), δ) ⊆ D0, it follows that x(t∗ − µ) ∈ D0, in contradiction to the definition of t∗ as the infimum (3.6). Thus, x(t∗) ∉ D0, and x(t∗) is a boundary point of the open domain D0.

Consider next the vector f(x(t∗), ϕ(x(t∗))). We claim that this vector has the following two properties: (a) it is not zero, and (b) it points toward a point of D0. To prove this claim, note that, if f(x(t∗), ϕ(x(t∗))) = 0, then x(t∗) is a stationary point of the closed-loop system !ϕ. In such a case, !ϕ remains at the state x(t∗) indefinitely and, as x(t∗) ∉ D0, the closed-loop system !ϕ never reaches a point of D0. This is in contradiction to the fact that x(t1) ∈ D0, since t1 ≥ t∗. Thus, f(x(t∗), ϕ(x(t∗))) ≠ 0, and (a) is valid.

Regarding (b), it follows by (3.6) and the continuity of the function x(t) that there is a real number η0 > 0 such that x(t∗ + η) ∈ D0 for all 0 < η < η0. Using the differential equation (1.1) of !, we can write x(t∗ + η) =


x(t∗) + ηf(x(t∗), ϕ(x(t∗))) + 0(η), or

x(t∗ + η) − x(t∗) = η[f(x(t∗), ϕ(x(t∗))) + 0(η)/η].    (3.7)

Considering that x(t∗ + η) is an interior point of D0 for all 0 < η < η0, it follows that the vector

fη := f(x(t∗), ϕ(x(t∗))) + 0(η)/η

points from x(t∗) to an interior point of D0 for all 0 < η < η0. As all directions may include a directional error of ε, there is then a point s′ ∈ D0 such that |fη − (s′ − x(t∗))| < ε. Denote ξ(η) := |fη − (s′ − x(t∗))| < ε.

Using the facts that limη→0 0(η)/η = 0 and that f(x(t∗), ϕ(x(t∗))) ≠ 0, it follows that the unit vector

fη := [f(x(t∗), ϕ(x(t∗))) + 0(η)/η] / |f(x(t∗), ϕ(x(t∗))) + 0(η)/η|

satisfies the condition limη→0 fη = f(x(t∗), ϕ(x(t∗))). Therefore, for every real number ζ > 0, there is a real number 0 < η1 < η0 such that

|fη − f (x(t∗),ϕ(x(t∗)))| < ζ (3.8)

for all 0 < η < η1. In particular, we can choose 0 < ζ < ε. Then, the vector f(x(t∗), ϕ(x(t∗))) points in the direction of a point of D0 with a directional error not exceeding ε. By (3.7) and (3.8), we obtain that x(t∗) ∈ 5(x(t∗ + η), ε), and by the definition of η we have x(t∗ + η) ∈ D0. Thus, x(t∗) ∈ E1f(D0, ε) by (3.4). Recalling that x(t∗) ∉ D0, this contradicts the assumption that E1f(D0, ε) \ D0 = ∅, and our proof concludes. !

Our ensuing discussion depends on the fact that the expansion set is an open set, as follows.

Lemma 3.5: Let ! be a system described by (1.1) with a continuous function f, and let D0 be an open domain in Rn. Then, the expansion set E1f(D0, ε) is an open set for every ε > 0.

Proof: First, we show that the set 5(s, ε) is an open set. Indeed, consider a point s ∈ Rn at which 5(s, ε) ≠ ∅. Then, since ε < 1 by Assumption 2.4, it follows from (2.20) that f(x, u(x)) ≠ 0 for all x ∈ 5(s, ε). Consequently, the unit vector f(y, u(x)) is a continuous function of y in a neighbourhood of x at any x ∈ 5(s, ε). Now, consider a point x ∈ 5(s, ε) and let u(x) ∈ Rm be an input value at which |f(x, u(x)) − (s − x)| < ε. Denote δ := ε − |f(x, u(x)) − (s − x)| > 0. By continuity, there are real numbers ζ1, ζ2 > 0 such that |f(x′, u(x)) − f(x, u(x))| < δ/3 for all |x′ − x| < ζ1 and |(s − x′′) − (s − x)| < δ/3 for all |x′′ − x| < ζ2. Let ζ := min{ζ1, ζ2}. Then, for all y ∈ B(x, ζ), we have |f(y, u(x)) − (s − y)| = |[f(y, u(x)) − f(x, u(x))] + f(x, u(x)) − [(s − y) − (s − x)] − (s − x)| ≤ |f(y, u(x)) − f(x, u(x))| + |(s − y) − (s − x)| + |f(x, u(x)) − (s − x)| ≤ δ/3 + δ/3 + |f(x, u(x)) − (s − x)| < ε. This shows that B(x, ζ) ⊆ 5(s, ε) (the constant input value u := u(x) is used for all y ∈ B(x, ζ)). Consequently, 5(s, ε) is an open set.

Now, consider a point z ∈ E1f(D0, ε). By (3.4), there is a point s ∈ Rn such that D0 intercepts *(z, s, ε) and *D0(z, s, ε) ⊆ 5(s, ε). The fact that D0 and 5(s, ε) are both open sets implies that there is a real number ρ1 > 0 such that D0 intercepts *(y, s, ε) and *D0(y, s, ε) ⊆ 5(s, ε) for all y ∈ B(z, ρ1). Consequently, B(z, ρ1) ⊆ E1f(D0, ε). As this argument is valid at every point z ∈ E1f(D0, ε), it follows that E1f(D0, ε) is an open set. !

3.3 Iterating expansion sets

Iterating the construction (3.4) of an expansion set, we build a sequence of sets E0f(D0, ε), E1f(D0, ε), E2f(D0, ε), ... by setting

Ei+1f(D0, ε) := E1f(Eif(D0, ε), ε), i = 1, 2, ...,
E0f(D0, ε) := D0.    (3.9)

According to (3.5), we have the relationship

Eif(D0, ε) ⊆ Ei+1f(D0, ε), i = 0, 1, 2, ...,

so that we have obtained a monotone increasing sequence of sets. Translating Proposition 3.3 to the current notation, we obtain

Proposition 3.6: Let ε > 0 be a real number, and let D0 be an open domain in Rn. There is a static state feedback controller with a directional error of ε that drives ! from every initial state z ∈ Ei+1f(D0, ε) into Eif(D0, ε).

Iterating Lemma 3.5 yields the following.

Lemma 3.7: Let ! be a system described by the differential equation (1.1) with a continuous function f, and let D0 be an open domain in Rn. Then, the expansion set Eif(D0, ε) is an open set for all ε > 0 and all i = 1, 2, ...

We can introduce now the main notion of our discussion.

Definition 3.8: Let D0 be an open domain in Rn, let f : Rn × Rm → Rn be a continuous function, and let ε > 0 be a real number. The expansion Ef(D0, ε) of D0 with respect to f and ε is given by

Ef(D0, ε) := ⋃i≥0 Eif(D0, ε).    (3.10)

Considering that the union of open sets is an open set, we conclude from Lemma 3.7 that the following is true.


Corollary 3.9: Let f : Rn × Rm → Rn be a continuous function and let ε > 0 be a real number. Then, for every open domain D0 ⊆ Rn, the expansion Ef(D0, ε) is an open set.

The definition of a union directly yields the next statement.

Lemma 3.10: If x ∈ Ef(D0, ε), then there is a first integer i ≥ 0 such that x ∈ Eif(D0, ε).

Our interest in the expansion set Ef(D0, ε) originates from the following statement, which is one of the main results of our discussion.

Theorem 3.11: Let ! be a system described by the differential equation (1.1) with a continuous function f, and let D0 be an open domain in Rn. Then, (i) and (ii) are equivalent.

(i) There is a static state feedback controller with a directional error of ε that drives ! from a state z ∈ Rn into D0 in finite time.

(ii) z ∈ Ef(D0, ε).

Furthermore,

(iii) For a state z ∉ Ef(D0, ε), there is no state feedback controller – static or dynamic – that drives ! from z into D0 in finite time with a directional error of ε.

Proof: First, consider a point z ∈ Ef(D0, ε). According to Lemma 3.10, there is a first integer i such that z ∈ Eif(D0, ε). In view of Proposition 3.6, this implies that there is a state feedback function ϕ with a directional error of ε that drives ! from z to a point z1 ∈ Ei−1f(D0, ε) in finite time. The same Proposition implies further that the state feedback function ϕ can be extended to take ! from z1 to a point z2 ∈ Ei−2f(D0, ε) in finite time, and so on, until ! reaches a point zi ∈ D0. As the number of such segments is finite and each segment is traversed in finite time, the total time is also finite. Hence, (ii) implies (i).

Conversely, assume by contradiction that (i) is valid, but z ∉ Ef(D0, ε). Considering (3.10), the latter implies that

z ∉ Eif(D0, ε) for any i = 0, 1, 2, ...    (3.11)

Now, let ϕ : Rn → Rm be a state feedback function satisfying (i), and denote by x(t) the trajectory of the closed-loop system !ϕ from the initial state x(0) = z to a point s ∈ D0. Let T be the time at which !ϕ reaches s. As x(t) is a solution of the differential equation (1.1) with bounded input (Assumption 2.1), it follows that x(t) is a continuous function of t. Further, as D0 ⊆ Ef(D0, ε) by definition, we conclude that s ∈ Ef(D0, ε). By replacing the open set D0 by the open set Ef(D0, ε) in the proof of Proposition 3.4, we conclude that the assumption at the start of the current paragraph leads to a contradiction. Hence, (i) implies (ii), and (i) and (ii) are equivalent.

Finally, to prove (iii), consider again a state z ∉ Ef(D0, ε), and assume, by contradiction, that there is a state feedback controller C that takes ! from z to a point s of D0 in finite time T. By the previous part of the proof, it follows that C is not a static state feedback controller. Then, C must be a dynamic state feedback controller. Denote by x(t), 0 ≤ t ≤ T, the path of the closed-loop system !C from x(0) = z to x(T) = s. Note that, irrespective of the nature of the controller C, the system ! receives an input value u(t) at the state x(t) at every time 0 ≤ t ≤ T, and this input function takes the closed-loop system !C from z into D0. As the latter is accomplished through the values of f(x(t), u(t)), 0 ≤ t ≤ T, the argument used in the previous paragraph also shows that our current assumption leads to a contradiction. Therefore, (iii) is valid, and the proof concludes. !

So far, we have given no consideration to continuity features of the state feedback function ϕ that guides ! toward the target domain D0. We show next that a piecewise continuous implementation of ϕ can be used.

Theorem 3.12: Let ! be a system described by the differential equation (1.1) with a continuous function f. Let D0 be an open domain in Rn, let ε > 0 be a real number consistent with Assumption 2.4, and assume that the initial state of ! satisfies x0 ∈ Ef(D0, ε). Then, there is a piecewise continuous state feedback function ϕ that takes ! from x0 into D0 in finite time, with a directional error of ε.

Proof: The proof is an elaboration on the proofs of Theorem 3.11 and Lemma 3.5. Assume that the closed-loop system has reached a point x1 ∈ Eif(D0, ε), i ≥ 1. There is then an input value u(x1) ∈ Rm for which the vector f(x1, u(x1)) points to a point y ∈ Ei−1f(D0, ε). As Ei−1f(D0, ε) is an open set by Lemma 3.7, the state y is an interior point of Ei−1f(D0, ε). Recall from the proof of Lemma 3.5 that f(x, ϕ(x)) ≠ 0 at all points of Ef(D0, ε) \ D0, since the closed-loop system cannot have a stationary point there. Combining this with the continuity of the function f, it follows that there is a real number δ′1 > 0 such that the vector f(x′, u(x1)) points to a point of Ei−1f(D0, ε) for all x′ ∈ B(x1, δ′1). In addition, as Eif(D0, ε) is an open set, there is a real number δ′′1 > 0 for which the ball B(x1, δ′′1) is included in Eif(D0, ε). Letting δ1 := min{δ′1, δ′′1}, we conclude that the constant value ϕc(x′) := u(x1) can be used as a state feedback function at all states x′ ∈ B(x1, δ1). When we reach a point x2 in the boundary of the ball B(x1, δ1), we repeat this process to obtain a radius δ2 analogous to δ1; extend the function ϕc with the corresponding new constant value. Continuing in this manner, one of the following two options must result: (a) the closed-loop system !ϕc reaches a point of the set Ei−1f(D0, ε); or (b) the sequence of radii δ1, δ2, ... converges to zero for all possible choices of δ1, δ2, ...


In case (a), the state feedback function ϕc takes the closed-loop system !ϕc into Ei−1f(D0, ε); as ϕc is a piecewise constant function, it is piecewise continuous, and our theorem follows by recursion on i. In case (b), let z be a limit point of the sequence x1, x2, .... As the argument of the previous paragraph applies to the point z in the same way it applied to the point x1, we conclude that there is a real number δ > 0 such that the ball B(z, δ) has the same features as the ball B(x1, δ1), contradicting the possibility that limi→∞ δi = 0 for all possible choices of the radii δ1, δ2, .... Hence, case (b) leads to a contradiction, and our proof concludes. !

The proof of Theorem 3.11 outlines a simple and effective method for calculating state feedback functions that drive a given system ! into a desired domain in state space with a directional error of ε: at each state x of a domain Eif(D0, ε), choose a state feedback function ϕ for which the vector f(x, ϕ(x)) points to a point of Ei−1f(D0, ε). Such a value of ϕ is obtained by solving an inequality based on the function f – the function that is given in the differential equation of the controlled system !. Section 5 demonstrates this process of deriving state feedback functions on two examples.

4. Robust control

The question often arises as to whether a system can be controlled toward an assigned objective under conditions of imperfect accuracy, with no specific inaccuracy specified. The motivation behind this question originates from the fact that controller accuracy can frequently be improved at additional cost. Formally, this leads us to the notion of robustness, which addresses the issue of whether an imperfect controller can be effective. If an imperfect controller can be effective, then one can proceed further to estimate the maximal tolerable controller error. In more precise terms, we refer to the following.

Definition 4.1: A robust implementation of a state feedback controller is an implementation with a non-zero directional error.

Our investigation of robust implementations requires the next notion.

Definition 4.2: Let f : R^n × R^m → R^n be a continuous function, and let D0 be an open domain in R^n. The super expansion set of D0 with respect to f is

Ef(D0) := ⋃_{ε>0} Ef(D0, ε).   (4.1)

Combining this notion with Theorem 3.11 leads to the following characterisation of the circumstances under which a robust implementation can be effective.

Theorem 4.3: Let Σ be a system described by the differential equation (1.1) with a continuous function f. Let D0 be an open domain in R^n, and let Ef(D0) be the super expansion set. Then, (i) and (ii) are equivalent.

(i) There is a robust implementation of a static state feedback controller that drives Σ from a state z ∈ R^n into D0 in finite time.

(ii) z ∈ Ef(D0).

Furthermore,

(iii) For a state z ∉ Ef(D0), there is no robust implementation of a state feedback controller, whether static or dynamic, that takes Σ from z into D0 in finite time.

Proof: Since z ∈ Ef(D0), it follows from (4.1) that there is a real number ε > 0 such that z ∈ Ef(D0, ε). Consequently, parts (i) and (ii) follow directly from Theorem 3.11(i) and (ii). Regarding (iii), the fact that z ∉ Ef(D0) means that there is no ε > 0 for which z ∈ Ef(D0, ε). Thus, (iii) is a consequence of Theorem 3.11(iii). □

Considering that a union of open sets is an open set, the following is a consequence of Corollary 3.9.

Corollary 4.4: Let f : R^n × R^m → R^n be a continuous function, and let D0 be an open domain in R^n. Then, the super expansion set Ef(D0) is an open set.

5. Examples

In this section, we demonstrate the computation of robust controllers. We start with an example where the controlled system has a one-dimensional state space.

Example 5.1: Consider the system

ẋ = x² + xu

with the target domain D0 = B(0, 0.1), namely, a ball of radius 0.1 around the origin. Considering that this is a one-dimensional system, it follows by Theorem 3.11 that we are looking for a state feedback function ϕ : R → R that satisfies the following inequalities:

(i) x² + xϕ(x) < 0 for x ≥ 0.1; and
(ii) x² + xϕ(x) > 0 for x ≤ −0.1.

There are, in fact, infinitely many state feedback functions ϕ(x) that satisfy these inequalities at all states x ∈ R in a robust manner. To demonstrate one such state feedback




Figure 7. Closed-loop trajectory as a function of time for Example 5.1.

function, define

sgn(x) :=  1  for x > 0,
           0  for x = 0,
          −1  for x < 0.

Then, for an input magnitude bound of M > 0, the state feedback function given by

ϕ(x) := −x(1 + sgn(x))

satisfies the requirements for all |x| ≤ M/2. A brief examination shows that this function tolerates non-zero implementation errors, and hence provides robust control. According to (1.3), this state feedback function yields a closed-loop system Σϕ with the differential equation ẋ = −x² sgn(x). A slight reflection shows that this closed-loop system approaches the origin asymptotically. Figure 7 shows the trajectory of Σϕ as a function of time, starting from two initial states: x(0) = −1 (left side of the figure) and x(0) = 1 (right side of the figure).
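The closed-loop behaviour just described can be checked with a short forward-Euler simulation; the step size and horizon below are arbitrary illustrative choices, not values from the paper.

```python
def sgn(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def simulate(x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the closed-loop system
    xdot = -x^2 * sgn(x), obtained from phi(x) = -x*(1 + sgn(x))."""
    x = x0
    for _ in range(steps):
        x += dt * (-x * x * sgn(x))
    return x

# From either side of the origin, the state decays monotonically toward 0
# and, in particular, enters the target domain D0 = B(0, 0.1).
left, right = simulate(-1.0), simulate(1.0)
```

The exact solution from x(0) = 1 is x(t) = 1/(1 + t), so the 1/t-type decay toward the origin is visible directly in the simulated values.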

Next, we demonstrate the construction of a state feedback function for a nonlinear system with a two-dimensional state space.

Example 5.2: Consider a system Σ with a two-dimensional state space and a one-dimensional input described by the differential equation

Σ : ẋ = x² − y²
    ẏ = u,

so that f(x, y, u) = (x² − y², u).

We use again the target domain D0 = B(0, 0.1). For the sake of transparency, we ignore here the input magnitude bound M; it can be readily incorporated. Note that the first component of f cannot be directly affected by a state feedback function, since the input u does not appear in it. Whenever x² > y², we have ẋ > 0, so the state will move generally to the right, as indicated by the arrows in Figure 8. Similarly, when x² < y², we have ẋ < 0, and the state will move generally toward the left, as indicated in the same figure. Seeing that ẏ = u, it follows that the tilt of the state's trajectory can be assigned as desired by selecting the value of the input u. Combining the observations of the last three sentences, we conclude that f can be directed toward the origin only in the domains marked A in Figure 8. Consequently, the domains marked A form the domain E^1_f(D0) in this case. In explicit form, we have

E^1_f(D0) = {(x, y) : y² > x² and x > 0; or y² < x² and x < 0}.

A slight reflection shows that, in the remaining parts of the plane, the direction of the vector f can always be

Figure 8. Orientations of motion for Example 5.2.




oriented toward a point of E^1_f(D0) by choosing an appropriate input value u. Furthermore, within E^1_f(D0), the vector f can also be oriented toward a point of E^1_f(D0) by choosing an appropriate input value u. Consequently,

E^2_f(D0) ⊇ R² \ E^1_f(D0),

and E^2_f(D0) can be taken as an open set that includes the set R² \ E^1_f(D0). Specifically, E^2_f(D0) is formed by an open set that includes the domains marked as B in Figure 8. Combining our conclusions, we obtain in this case that

Ef(D0) = E^1_f(D0) ∪ E^2_f(D0) = R²

(ignoring input magnitude bounds). Thus, according to Theorem 4.3, there is a state feedback function ϕ that takes Σ to a close vicinity of the origin from every bounded domain in state space (with a suitable input magnitude bound).

We construct now one such state feedback function (as usual, there are infinitely many appropriate state feedback functions). To simplify the form of our state feedback function, define the domain

A′ := {(x, y) : x² − y² < 0, x > 0, |y/x| < 100, or x² − y² > 0, x < 0, |y/x| < 100}.

This domain is obtained from the domain A by excluding a thin angle around the y-axis. All remaining points of the plane are included in the domain

B′ := R² \ A′.

Then, a slight reflection shows that the following state feedback function assigns directions to the vector f(x, y, ϕ(x, y)) that point to the origin within A′, and point to A′ within B′. Consequently, this feedback function takes Σ into D0 in finite time from every state (x, y) in the plane (with an input magnitude bound determined by |(x, y)|).

ϕ(x, y) :=
  (x² − y²)y/x     for all (x, y) ∈ A′,
  5(x² − y²) + 1   for all (x, y) ∉ A′ with y ≥ 0 and x > 0,
  5(x² − y²) − 1   for all (x, y) ∉ A′ with y > 0 and x < 0,
  5(y² − x²) − 1   for all (x, y) ∉ A′ with y ≤ 0 and x > 0,
  5(y² − x²) + 1   for all (x, y) ∉ A′ with y < 0 and x < 0.

Figure 9 shows the path of the closed-loop system Σϕ in state space, starting from four different initial conditions, one initial condition in each quadrant; the origin is at the centre of each subfigure.
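The piecewise feedback of Example 5.2 can likewise be exercised with a short forward-Euler simulation. The step size, horizon, initial state, and the handling of boundary cases (such as x = 0, which the piecewise definition leaves unspecified) are our own illustrative choices.

```python
import math

def phi(x, y):
    """Piecewise state feedback function of Example 5.2 (boundary
    cases such as x = 0 resolved by an arbitrary fallback branch)."""
    in_A = x != 0 and abs(y / x) < 100 and (
        (x * x - y * y < 0 and x > 0) or (x * x - y * y > 0 and x < 0))
    if in_A:
        return (x * x - y * y) * y / x
    if y >= 0 and x > 0:
        return 5 * (x * x - y * y) + 1
    if y > 0 and x < 0:
        return 5 * (x * x - y * y) - 1
    if y <= 0 and x > 0:
        return 5 * (y * y - x * x) - 1
    return 5 * (y * y - x * x) + 1

def simulate(x, y, dt=1e-3, steps=20000):
    """Forward-Euler integration of xdot = x^2 - y^2, ydot = phi(x, y)."""
    for _ in range(steps):
        x, y = x + dt * (x * x - y * y), y + dt * phi(x, y)
    return x, y

# Starting inside A' (here x < 0 with x^2 > y^2), the state slides along
# the ray y/x = const toward the origin and enters D0 = B(0, 0.1).
xf, yf = simulate(-1.0, 0.5)
```

Inside A′ the feedback keeps the ratio y/x constant, so the trajectory is a straight ray into the origin, which the simulated end point reflects.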

6. Asymptotic stabilisation

The feedback methodology developed in the previous sections can be used to achieve global asymptotic stabilisation of nonlinear systems by static state feedback under rather general conditions. In fact, Examples 5.1 and 5.2 demonstrate static state feedback controllers that achieve global asymptotic stabilisation. To consider this issue in general, let Σ be a system described by the differential equation (1.1), where f is twice continuously differentiable. Given a state feedback function ϕ, denote by Σϕ(x0, t) the response of the closed-loop system as a function of the time t, when started at the initial state x0. We aim to derive a state feedback function ϕ that takes Σ asymptotically from x0 to the origin; namely, we look for a state feedback function ϕ for which lim_{t→∞} Σϕ(x0, t) = 0. It is convenient to assume in this section that Σ has a stationary point at the origin, i.e., that f(0, 0) = 0.

Clearly, a state feedback function ϕ that achieves global asymptotic stabilisation of Σ can be built in two steps:

(i) Use the technique of Theorem 4.3 to derive a state feedback function ϕ1 that brings Σ from the initial state x0 into a close vicinity V = B(0, ρ) of the origin, where ρ > 0 is a small radius.

(ii) Use the linear approximation of Σ at the origin

ẋ(t) = (∂f(0, 0)/∂x) x(t) + (∂f(0, 0)/∂u) u(t)   (6.1)

to derive a linear state feedback function ϕ2 that takes Σ asymptotically to the origin from within V.

Patching ϕ1 and ϕ2 together into one function ϕ yields a state feedback function that drives Σ asymptotically from an initial state x0 to the origin, thus achieving asymptotic stabilisation.
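The patching itself amounts to a switch on the norm of the state. In the sketch below, patched_feedback, the outer controller, the gain matrix, and the switching radius rho_star are hypothetical placeholders standing in for the quantities produced by steps (i) and (ii).

```python
import numpy as np

def patched_feedback(phi1, F, rho_star):
    """Combine a global controller phi1 with a local linear gain F:
    apply u = F x inside the ball B(0, rho_star), and phi1 outside it."""
    def phi(x):
        x = np.asarray(x, float)
        return F @ x if np.linalg.norm(x) < rho_star else phi1(x)
    return phi

# Illustrative placeholders: a constant outer controller, a 1x2 gain row,
# and an assumed switching radius rho_star = 0.5.
phi = patched_feedback(lambda x: np.array([-1.0]),
                       np.array([[-2.0, -3.0]]), rho_star=0.5)
```

Once ϕ1 steers the state into B(0, ρ*), the switch hands control to the linear gain, which completes the asymptotic approach to the origin.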

As step (ii) involves the use of a linear state feedback to asymptotically stabilise Σ near the origin, we must assume that the linear approximation (6.1) is stabilisable.

Preliminary consideration must be given to the question of how close Σ must be taken to the origin before a linear state feedback function can be activated. The proof of the next statement includes an estimate of the radius ρ* of a ball B(0, ρ*) from within which a linear state feedback function can take Σ asymptotically to the origin.

Proposition 6.1: Let Σ be a system with the differential equation ẋ(t) = f(x(t), u(t)), where f is twice continuously differentiable and f(0, 0) = 0, and assume that the linear approximation (6.1) of Σ at the origin is stabilisable. Then, there is a real number ρ* > 0 and an m × n constant matrix F such that the solution x(t) of the differential equation ẋ(t) = f(x(t), Fx(t)) satisfies lim_{t→∞} x(t) = 0 for all initial conditions x(0) ∈ B(0, ρ*).




Figure 9. Trajectories of the closed-loop system of Example 5.2.

Proof: As f(0, 0) = 0 and the linear system (6.1) is stabilisable, there is an m × n constant matrix F for which every solution z(t) of the linear differential equation

ż(t) = (∂f(0, 0)/∂x) z(t) + (∂f(0, 0)/∂u) F z(t) =: Dz(t)   (6.2)

satisfies lim_{t→∞} z(t) = 0; here,

D := ∂f(0, 0)/∂x + (∂f(0, 0)/∂u)F   (6.3)

is a constant n × n matrix. As this linear system is asymptotically stable, it follows by Lyapunov's theory that there is a constant symmetric positive definite matrix P such that

(d/dt)[z^T(t)Pz(t)] < 0   (6.4)

for all z(t) ≠ 0. Expanding the derivative, we get ż^T(t)Pz(t) + z^T(t)Pż(t) < 0 for all z(t) ≠ 0. Substituting from (6.2) yields

z^T(t)D^T Pz(t) + z^T(t)PDz(t) < 0

for all z(t) ≠ 0. Consequently, the symmetric matrix

Q := −(D^T P + PD)   (6.5)

is positive definite. Turning now to the nonlinear system Σ, define the function

g(x) := f(x, Fx).

Then, the closed-loop system around Σ with the linear state feedback function F is described by the differential equation

ẋ(t) = g(x(t)), x(0) = x0.   (6.6)
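These quantities can be computed numerically for a concrete case. In the sketch below, the linearisation matrices, the stabilising gain F, and the choice Q = I are our own illustrative assumptions (the example system is not one from the paper); SciPy's continuous Lyapunov solver supplies P.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative linearisation: A = df(0,0)/dx, B = df(0,0)/du, with a
# gain F chosen so that D = A + B F is Hurwitz (eigenvalues -1, -1).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-2.0, -2.0]])
D = A + B @ F

# Solve D^T P + P D = -Q with Q = I, matching (6.5); the solver's
# equation a X + X a^H = q is invoked with a = D^T and q = -Q.
Q = np.eye(2)
P = solve_continuous_lyapunov(D.T, -Q)
```

Since D is Hurwitz and Q is positive definite, the resulting P is symmetric positive definite, exactly as the proof requires.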




Using the fact that f is twice continuously differentiable and expanding it around the origin yields, according to Taylor's theorem, that

g(x) = Dx + (1/2) (x, . . . , x)^T g^(2)(ξ) (x, . . . , x),   (6.7)

where D is given by (6.3), (x, . . . , x) denotes the column vector formed by stacking n copies of x,

g^(2)(ξ) := col(H(g1(ξ)), . . . , H(gn(ξ)))

is the block matrix obtained by stacking the n × n Hessians H(gi(ξ)) of the components of g (so that, componentwise, the i-th entry of the second term of (6.7) is (1/2) x^T H(gi(ξ)) x), and ξ ∈ R^n is an appropriate point determined according to the error term of Taylor's theorem. To simplify notation, define the n-dimensional vector

µ(x, ξ) := (1/2) (x, . . . , x)^T g^(2)(ξ) (x, . . . , x),

so that

g(x) = Dx + µ(x, ξ).   (6.8)

Substituting the solution x(t) of (6.6) into (6.4) instead of z(t), and using the form (6.8) of g, we obtain

(d/dt)[x^T(t)Px(t)] = g^T(x(t))Px(t) + x^T(t)Pg(x(t))
= (Dx(t) + µ(x(t), ξ(t)))^T Px(t) + x^T(t)P(Dx(t) + µ(x(t), ξ(t)))
= −x^T(t)Qx(t) + [µ^T(x(t), ξ(t))Px(t) + x^T(t)Pµ(x(t), ξ(t))]
= x^T(t)Qx(t) × [−1 + (µ^T(x(t), ξ(t))Px(t) + x^T(t)Pµ(x(t), ξ(t))) / (x^T(t)Qx(t))],
x(t) ≠ 0,   (6.9)

where Q is given by (6.5). Now, according to Taylor's theorem and the fact that f is twice continuously differentiable, µ(x, ξ) is a second-order term in x. Combining this with the fact that Q is positive definite, it follows that, for a real number ρ > 0, the quantity

e(ρ) := sup_{x, ξ ∈ B(0,ρ), x ≠ 0} |(µ^T(x, ξ)Px + x^T Pµ(x, ξ)) / (x^T Qx)|

satisfies

lim_{ρ→0} e(ρ) = 0.

Consequently, for every real number 0 < α < 1, there is a radius ρα > 0 such that 0 < e(ρ) < α for all 0 < ρ ≤ ρα. Substituting this fact into (6.9), we obtain that

(d/dt)[x^T(t)Px(t)] < 0   (6.10)

for all x(t) ≠ 0 satisfying x(t) ∈ B(0, ρ), where 0 < ρ ≤ ρα

and 0 < α < 1.

Next, let p1 be the smallest eigenvalue of P and let p2 be the largest eigenvalue of P. Then, since P is a positive definite matrix, we have

p2 ≥ p1 > 0.   (6.11)

By classical features of eigenvalues of positive definite matrices, we can write

p1|x|² ≤ x^T Px ≤ p2|x|²   (6.12)

for all x ∈ R^n. Now, select a real number 0 < α < 1 and consider the positive number

ρ* := (p1/p2) ρα.   (6.13)

Note that by (6.11), we have ρ* ≤ ρα. An initial state x(0) ∈ B(0, ρ*) satisfies x^T(0)Px(0) ≤ p2|x(0)|² ≤ p2ρ* = p1ρα. In view of (6.10) and the fact that ρ* ≤ ρα, this implies that x^T(t)Px(t) ≤ p1ρα for all t ≥ 0. But then, by (6.12), we have p1|x(t)|² ≤ p1ρα, or |x(t)|² ≤ ρα for all t ≥ 0. In other words, when x(0) ∈ B(0, ρ*), the state of Σ stays within the ball B(0, ρα) for all t ≥ 0, and (6.10) is valid for all t ≥ 0. Lyapunov's theorem then guarantees that lim_{t→∞} x(t) = 0. This concludes our proof. □

The proof of Proposition 6.1 also establishes the following statement.

Corollary 6.2: Under the notation and conditions of Proposition 6.1, the system Σ can be asymptotically stabilised by linear state feedback within a ball of radius ρ* around the origin, where ρ* is given by (6.13).
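Given P, the radius ρ* of (6.13) is immediate from the extreme eigenvalues of P. In the sketch below, the matrix P and the inner radius ρα = 0.2 are illustrative assumptions.

```python
import numpy as np

def rho_star(P, rho_alpha):
    """Radius of (6.13): rho* = (p1 / p2) * rho_alpha, where p1 and p2
    are the smallest and largest eigenvalues of the positive definite P."""
    eigs = np.linalg.eigvalsh(P)   # ascending order for a symmetric matrix
    return (eigs[0] / eigs[-1]) * rho_alpha

# Illustrative Lyapunov matrix (assumed) and an assumed rho_alpha = 0.2.
P = np.array([[1.5, 0.5], [0.5, 0.5]])
r = rho_star(P, 0.2)
```

Since p1 ≤ p2, the computed radius always satisfies r ≤ ρα, matching the remark following (6.13).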

In view of Proposition 6.1 and Corollary 6.2, we can obtain asymptotic stabilisation of the system Σ if we can find a state feedback function that takes Σ from its initial state into the ball B(0, ρ*), where ρ* is given by (6.13). Combining this with Theorem 4.3 and recalling that linear state feedback is robust yields the following, which is the main result of this section.




Theorem 6.3: Let Σ be a system described by the differential equation (1.1), where the function f is twice continuously differentiable and f(0, 0) = 0. Assume that the linear approximation (6.1) of Σ at the origin forms a stabilisable linear system. Let X0 ⊆ R^n be the set of all potential initial states of Σ, let ρ* be given by (6.13), and let Ef(B(0, ρ*)) be the super expansion set. Then, the following two statements are equivalent.

(i) Σ is robustly and asymptotically stabilisable over the domain X0 of initial states.

(ii) X0 ⊆ Ef(B(0, ρ∗)).

7. Conclusion

In summary, we have described a general framework for the construction of state feedback controllers for nonlinear systems. The main step of this construction involves the solution of a set of inequalities based on the given function f that appears in the differential equation (1.1) of the controlled system Σ. These inequalities are induced by requiring the vector f(x, u) to point in an appropriate range of directions. As the examples of Section 5 demonstrate, the calculation of stabilising feedback controllers is rather simple in this framework.

In the special case of systems with states of dimension 1, 2, or 3, the selection of state feedback functions can be visualised. However, visualisation is, of course, not necessary; the corresponding inequalities can be solved computationally in any dimension, and appropriate state feedback functions are derived from these solutions.

