Preliminary Contributions Towards Auto-Resilience

Vincenzo De Florio

PATS/University of Antwerp and PATS/iMinds research institute
Middelheimlaan 1, 2020 Antwerp, Belgium
[email protected]

Abstract. The variability in the conditions of deployment environments introduces new challenges for the resilience of our computer systems. As a response to said challenges, novel approaches must be devised so that identity robustness be guaranteed autonomously and with minimal overhead. This paper provides the elements of one such approach. First, building on top of previous results, we formulate a metric framework to compare specific aspects of the resilience of systems and environments. Such framework is then put to use by sketching the elements of a handshake mechanism between systems declaring their resilience figures and environments stating their minimal resilience requirements. Despite its simple formulation it is shown how said mechanism enables scenarios in which resilience can be autonomously enhanced, e.g., through forms of social collaboration. This paves the way to future “auto-resilient” systems, namely systems able to reason and revise their own architectures and organisations so as to optimally guarantee identity persistence.

1 Introduction

Self-adaptive systems are able to mutate their structure and function in order to match “changing circumstances” [1]. When relevant changes in their deployment environment are perceived—due, for instance, to application mobility or ambient adaptations—self-adaptive systems typically perform some form of reasoning and introspection so as to conceive a new structure best matching the new circumstances at hand. This new structure may indeed allow the adapting system to tolerate or even profit from the new conditions; at the same time, it is possible that the mutation affected the identity of that system, that is, the functional and non-functional aspects and properties characterising the expected behaviour of that system. A relevant problem then becomes robust feature persistence, namely a system’s capability to retain certain characteristics of interest throughout changes and adaptations affecting, e.g., its constituent modules, topology, and the environment.

The term commonly used to refer to robust feature persistence is resilience—a concept discussed as early as in Aristotle’s Physics and Psychology [2]. Resilience is what Aristotle calls entelechy, which he defines as the ability to pursue completion (that is, one’s optimal behaviour) by continuously re-adjusting oneself.


Sachs’s translation for entelechy is particularly intriguing and pertinent here, as it is “being-at-work-staying-the-same” [3]. So complex and central is this idea within Aristotle’s corpus that Sachs again refers to it in the cited reference as a “three-ring circus of a word”. In fact resilience still escapes a clear and widely agreed understanding. Different domain-specific definitions exist, each capturing but a few aspects of the whole [4]. In previous contributions [5–7] we conjectured that some insight into this complex concept may be gained by realising its nature as a multi-attribute property, “defined and measured by a set of different indicators” [8]. As a matter of fact, breaking down a complex property into a set of constituent attributes proved to be beneficial with another most elusive property—dependability, which was characterised into six constituent properties by Laprie [9, 10]. Encouraged by this lesson, in [5–7] we set out to apply the same method to try and capture some aspects of the resilience of adaptive systems.

Building on top of the above-mentioned preliminary results, this paper’s first contribution is the definition of a number of system classes and partial orders to enable a qualitative evaluation of system-environment fits—in other words, how a system’s resilience features match with the resilience requirements called for by that system’s deployment environments. This is done in Sect. 2.

A second contribution is presented in Sect. 3 through the high-level description of a handshake mechanism between systems declaring their resilience figures and environments stating their minimal resilience requirements. Said mechanism is exemplified through an ambient intelligence case study. In particular it is shown how putting the resilience characteristics of systems and environments in the foreground enables scenarios in which resilience can be enhanced through simple forms of social collaboration.

Finally, in Sect. 4, we enunciate a conjecture: resilience-oriented handshake mechanisms such as the one presented in this paper pave the way to future auto-resilient systems—entities, that is, that are able to reason about their own architectures and organisations and to optimally revise them, autonomously, in order to match the variability of conditions in their deployment environments.

2 Perception, Apperception, and Entelechism

In previous work we identified three main constituent properties for system and organisational resilience [5–7]. Here we recall and extend said properties and discuss the major threats associated with their failure. We also introduce system classes and partial orders to facilitate the assessment of how a system’s resilience architecture matches its mission and deployment environment.

2.1 Perception

What we cannot perceive, we cannot react to—hence we cannot adapt to. As a consequence a necessary constituent attribute of resilience is given by perception, namely a system’s ability to become timely aware of some portion of the context.


In what follows we shall represent perception through the collection of context figures—originating within and without the system boundaries—whose changes we can be alerted of within a reasonable amount of time. From this definition we observe how perception may be interpreted as a measure of how “open-world” a system is—be it biological, societal, or computer-based. Perception is carried out through several mechanisms. We distinguish three sub-functions to perception, which we call sensors, qualia, and memory. Sensors represent a system’s primary interface with the physical world. The sensors’ main function is to reflect a given subset of the world’s “raw facts” into internal representations that are then stored in some form within the system’s processing and control units—its “brains”. Qualia [6] is the name used in the literature to refer to such representations. Qualia are then persisted—to some extent—in the system memory.

Sensors, qualia, and memory are very important towards the emergence of resilience: the quality of reactive control strictly depends on the quality of service of the sensory system as well as that of the system components responsible for the reliable production, storage, persistence, and retrieval of trustworthy qualia [6]. Important aspects of such quality of service include what we call the qualia manifestation latency (namely the time between the physical appearance of a raw fact and the corresponding production of a qualia), the reflective throughput (that is, the largest amount of raw facts that may be reliably encoded as qualia per time unit), and the qualia access time (how quickly the control layers may access the qualia). An example of a software system using application-level qualia to operate control is described in [11, 12].

As mentioned already, be it computer-based or organic, any system is characterised—and limited—in its resilience by the characteristics of its perception sub-system. In particular the amount and quality of its sensors and the quality of its qualia production, storage, and persistence services define what the system is going to timely and reliably perceive, and consequently what it may effectively react upon.

This concept matches well with what Leibniz referred to as a system’s “clear representation”, as opposed to an “obscure representation” resulting from, e.g., sensor shortage or insufficient quality of service in the qualia layers. We refer to this region of clear representation as a system’s perception spectrum. A hypothetical system of all clear representation and no obscure representation is called by Leibniz a monad. At the other end of the spectrum we have closed-world systems—systems, that is, that operate in their “virtual world” completely unaware of any physical-world “raw fact”. The term we use to refer to such context-agnostic systems is ataraxies (from “ataraxy”, namely the attitude of taking actions without considering any external event or condition; from a-, not, and tarassein, to disturb). Ataraxies may operate as reliably and efficiently as monads, but they are not designed to withstand changes—they are what Americans refer to as “sitting ducks” in the face of changes.


As long as their system assumptions hold, they constitute our unquestioning avatars diligently performing their appointed tasks; though they fail miserably when facing the slightest perturbation in their design hypotheses¹ [15]. Likewise monads, though characterised by perfect perception, may be unable to make use of this quality to achieve awareness and ultimately guarantee their resilience or other design goals of interest. In what follows we shall refer to a system’s quality of perception as its “power of representation”—a term introduced by Leibniz [16].

¹ As discussed in [13], another problem with closed-world systems is that they are in a sense systems “frozen in time”: verifications for any such system implicitly refer to scenarios that may differ from the current one. We use the term frozen ducks to refer to ataraxies with stale certifications. A typical case of frozen ducks is efficaciously reported by engineer Bill Strauss: “A plane is designed to the right specs, but nobody goes back and checks if it is still robust” [14].

In [6] we presented a simple algebraic model for perception by considering perception spectra as subsets of a same “perfect” perception spectrum (corresponding to the “all-seeing eye” of the fabled monad, which “could see reflected in it all the rest of creation” [16]). Figure 1(a) depicts this by considering the perception spectra of two systems, a and b, respectively represented as set A and set B. Little can be said in this case about the power of representation of a with respect to that of b: here in fact the spectra are not comparable with one another, because it is not true that

(A ⊂ B) ∨ (B ⊂ A).

On the other hand, when for instance A ⊆ B then we shall say that b has “greater perception” (that is, a greater power of representation) than a:

a ≺P b if and only if A ⊆ B. (1)

This is exemplified in Fig. 1(b), in which A ⊆ B ⊆ M, the latter being the whole context (that is, the perception spectrum of monad m). This means that a, b, and m are endowed with a larger and larger set of perception capabilities—a greater and greater power of representation. Expression a ≺P b ≺P m states such property.
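Under this set-based model, Eq. (1) amounts to a subset test. The following minimal sketch (in Python) illustrates it; the context-figure names and the helper precedes_P are illustrative assumptions, not part of the original model beyond Eq. (1):

```python
def precedes_P(A: frozenset, B: frozenset) -> bool:
    """a ≺P b iff A ⊆ B: b has at least a's power of representation."""
    return A <= B

# Illustrative context figures; M plays the role of the monad's spectrum.
A = frozenset({"temperature"})
B = frozenset({"temperature", "humidity"})
M = frozenset({"temperature", "humidity", "CO", "CH4"})

assert precedes_P(A, B) and precedes_P(B, M)   # a ≺P b ≺P m

# Two incomparable spectra, as in Fig. 1(a): neither inclusion holds.
C = frozenset({"temperature", "CO"})
assert not precedes_P(B, C) and not precedes_P(C, B)
```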

We deem it important to highlight how perception spectra such as sets A and B should be actually represented as functions of time; the mission characteristics; and the current context. In other words, perception should not be taken as an absolute and immutable feature but rather as the result of several dynamic processes, e.g., the current state of the sensory subsystem, the current quality of their services, as well as how the resulting times, throughputs, failures, and latencies match with the current mission requirements. For the sake of simplicity we shall nevertheless refer to perception spectra simply as sets.

Perception spectra / powers of representation may also be used to evaluate the environmental fit of a given system with respect to a given deployment environment—that is, to gain insight into the match between that system and its intended execution environment. As an example, Fig. 1(a) may be interpreted also as the perception spectrum of system a and the power of representation called for by deployment environment b.



Fig. 1. Exemplification of perception spectra and regions of clear representation. (a) Regions of clear representation of systems a and b with respect to that of hypothetical perfect system m; the intersection region represents the portion of the spectrum that is in common between a and b. (b) The region of clear representation A is fully included in B, in turn fully included in M; in this case we can state that the power of representation of system a is inferior to that of b, which in turn is less than m’s.

The fact that B \ A is non-empty tells us that a will not be sufficiently aware of the context changes occurring in b. Likewise A \ B ≠ ∅ tells us that a is designed so as to be aware of figures that will not be subjected to change while a is in b. The corresponding extra design complexity is (in this case) a waste of resources, in that it does not contribute to any improvement in resilience. The case study introduced in Sect. 3 makes use of perception spectra to evaluate a system-environment fit.
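The two set differences just discussed suggest a simple fit diagnostic. Below is a tentative sketch under the same set-based model; fit_report and the example spectra are hypothetical names introduced here for illustration:

```python
def fit_report(A: frozenset, B: frozenset) -> dict:
    """Compare a system's spectrum A with the spectrum B called for by a
    deployment environment."""
    return {
        "shortfall": B - A,  # B \ A: changes in b the system will miss
        "excess": A - B,     # A \ B: capabilities wasted while deployed in b
    }

system_a = frozenset({"temperature", "vibration"})
ambient_b = frozenset({"temperature", "CO"})
print(fit_report(system_a, ambient_b))
# {'shortfall': frozenset({'CO'}), 'excess': frozenset({'vibration'})}
# A non-empty shortfall signals insufficient awareness; a non-empty excess
# signals design complexity that buys no resilience in this environment.
```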

As a final remark, perception spectra may be used to compare environments with one another. This may be useful especially in ambient intelligence scenarios in which some control may be exercised on the properties of the deployment environment(s).

Estimating shortcoming or excess in a system’s perception capabilities provides useful information to the “upper functions” responsible for driving the evolution of that system. Such functions may then make use of said information to perform design trade-offs among the resilience layers. As an example, the system may reduce its perception spectrum and use the resulting complexity budget to widen its apperception capabilities—that is, the subject of the next section.

2.2 Apperception

As the perception spectrum defines the basic facts that are going to trigger awareness and ultimately reaction and control, likewise apperception defines how the reflected qualia are accrued, put in relation with past perception, and used to create dynamic models of the “self” and of the “world” [17]. In turn this ability enables higher-level functions of system evolution—in particular, the planning of reactions (e.g., parametric adaptations or system reconfigurations). Also in the case of apperception we can introduce a ranking of sorts stating different powers of apperception.


Several such rankings and classifications were introduced in the past; the first and foremost example may be found in Aristotle’s De Anima². Leibniz also compiled a hierarchy of “substances”—as he referred to systems and beings [16]. More recently Lycan suggested [18] that there might be at least eight classes of apperception. An important contribution in the matter is due to Rosenblueth, Wiener, and Bigelow, who proposed in [19] a classification of systems according to their behaviour and purpose. In particular in their cited work they composed a hierarchy consisting of the following behavioural classes:

1. Systems characterised by passive behaviour: no source of “output energy” may be identified in any activity of the system.

2. Systems with active, but non-purposeful behaviour—systems, that is, that do not have a “specific final condition toward which they strive” [19].

3. Systems with purposeful, but non-teleological (i.e., feedback-free) behaviour: systems, that is, in which “there are no signals from the goal which modify the activity of the object” (viz., the system) “in the course of the behaviour.”

4. Systems with teleological, but non-extrapolative behaviour: systems that are purposeful but unable to construct models and predictions of a future state to base their reactions upon.

5. First-order predictive systems, able to extrapolate along a single perception dimension—i.e., a single qualia.

6. Higher-order predictive systems, or in other words systems that are able to base their reactions on the correlation of two or more qualia dimensions, possibly of different nature—temporal and spatial coordinates for instance.

The behaviours of systems in classes 4–6 exhibit increasing powers of apperception.

This seminal work was then continued by Boulding in his classic paper on General Systems Theory [20]. In said paper the author introduced nine classes structured after a system’s perception and apperception capabilities. More specifically, Boulding’s classes refer to the following system types:

1. Ataraxies, subdivided into so-called Frameworks and Clockworks.

2. Simple control mechanisms, e.g., thermostats, that are able to track a single context figure.

3. Self-maintaining structures, e.g., biological cells, which are able to track multiple context features. Both thermostats and cells correspond to the systems with purposeful, though non-teleological, behaviour of [19].

4. Simple stationary systems comprising several specialised sub-systems, like plants, characterised by very simple forms of predictive behaviour and apperception.

5. Complex mobile systems with extensive power of representation and simple forms of apperception (especially self-awareness). Boulding refers to this class as “animals”. A classic example of this is a cat moving towards its prey’s extrapolated future position [19]. These systems may be characterised by “precooked apperception”, i.e., innate behaviour commonly known as instinct. This corresponds to systems initialised with domain-specific predefined and immutable apperception capabilities and adaptation plans.

6. Complex mobile systems endowed with extensive apperception capability, e.g., self-awareness, self-consciousness, and high-order extrapolative capability. “Human beings” is the term used by Boulding for this class.

7. Collective adaptive systems, e.g. digital ecosystems, cyber-physical societies, multi-agent systems, or social organisations [21]. Boulding refers to this class as “a set of roles tied together with channels of communication”.

8. Totally open-world systems, namely the equivalent of Leibniz’s monads. Transcendental systems is the name that Boulding gives to this class.

² As cleverly expressed in [2], Aristotle finds that “living things all take their place in a cosmic hierarchy according to their abilities in the fields of nutrition, perception, thought and purposive action.”

Again classes 4–6 represent (non-transcendental, non-collective) systems with increasing powers of apperception. It is then possible to define a projection map π returning, for any such system s, the class that system belongs to (or, alternatively, the behaviour class characterising s), represented as an integer in {1, …, 6}. Function π then defines a second partial order among systems—for any two systems p and q with apperception capability we shall say that p has less power of apperception than q when the following condition holds:

p ≺A q if and only if π(p) < π(q). (2)

As we have done with perception, also in this case we remark how the above partial order may apply to environments as well as to systems. As such the above partial order may be used to detect mismatches between a system’s apperception characteristics and those expected by a given environment. One such mismatch is detected in the scenario discussed in Sect. 3.
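As an illustration, the projection map π and the comparison of Eq. (2) can be sketched as follows; the class table and the example systems are hypothetical, loosely echoing Boulding’s classes:

```python
# Illustrative class table, loosely after Boulding (classes of Sect. 2.2).
APPERCEPTION_CLASS = {
    "thermostat": 2,  # simple control mechanism
    "cell": 3,        # self-maintaining structure
    "plant": 4,       # simple stationary system
    "cat": 5,         # complex mobile system, "precooked" apperception
    "human": 6,       # extensive apperception capability
}

def pi(s: str) -> int:
    """Projection map: the (ap)perception class that system s belongs to."""
    return APPERCEPTION_CLASS[s]

def precedes_A(p: str, q: str) -> bool:
    """p has less power of apperception than q (Eq. 2): pi(p) < pi(q)."""
    return pi(p) < pi(q)

assert precedes_A("cat", "human")
# The same test can flag a system-environment mismatch: an environment
# expecting class c rejects any system s with pi(s) < c.
```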

2.3 Entelechism

Once trustworthy models of the endogenous conditions and exogenous scenarios are built through perception and apperception, resilient systems typically make use of the accrued knowledge to plan some form of reactive control. The aim of this reactive control is to guarantee the persistence of a system’s functional and non-functional “identity”—namely what that system is supposed to do and under which conditions and terms. As mentioned in Sect. 1, already Aristotle identified this quality, which he called entelechy and solely attributed to human beings. Entelechy is in fact the driving force—the movement, or “energy”—that makes active-behaviour systems strive towards resilience. By analogy, in what follows we refer to a system’s entelechy as the quality of the mechanisms responsible for planning and controlling the robust emergence of that system’s peculiar characteristics while changes and system adaptations take place. Such characteristics may include, e.g., timeliness, determinism, security, safety, or functional behaviours as prescribed in the system specifications.


In [7] we called evolution engine of system s the portion of s responsible for controlling its adaptation. In what follows we shall refer to the evolution engine as EE(s)—or simply EE when s can be omitted without ambiguity.

We now propose a tentative classification of systems according to their entelechism—namely, according to the properties and characteristics of their EE. Also in this case we found it convenient to isolate a number of ancillary constituent components in order to tackle separately different aspects of this “three-ring circus” [3] of a concept.

Meta-apperception. When considered as a separate entity, system EE(s) may be subjected to a classification such as Boulding’s or Rosenblueth’s, intended to highlight the characteristics of the resilience logics of system s. Said characteristics may differ considerably from those of s. As an example, the adaptively redundant data structures introduced in [22] may be regarded as a whole as a first-order predictive-behaviour mechanism [5]. On the other hand that system’s EE(s) is remarkably simpler and only capable of purposeful active behaviours. In fact, a system’s EE may or may not be endowed with apperception capabilities, and it may or may not be a resilient system altogether. This feature represents a first coordinate to assess the entelechism of evolving systems. Making use of the partial order defined in Sect. 2.2 we shall say that, for any two systems p and q, q is endowed with greater meta-apperception than p (written as p ≺µA q) if and only if the following condition holds:

p ≺µA q if and only if π(EE(p)) < π(EE(q)). (3)
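A corresponding sketch of Eq. (3), assuming each system can report the class of its own evolution engine; the EE_CLASS table and its entries are illustrative guesses, not figures from the cited works:

```python
# Hypothetical table of pi(EE(s)): the apperception class of each system's
# evolution engine.
EE_CLASS = {
    "adaptively_redundant_ds": 2,  # EE capable of purposeful active behaviour only
    "transformer_framework": 5,    # an EE with predictive capabilities
}

def precedes_muA(p: str, q: str) -> bool:
    """p has less meta-apperception than q (Eq. 3): pi(EE(p)) < pi(EE(q))."""
    return EE_CLASS[p] < EE_CLASS[q]

assert precedes_muA("adaptively_redundant_ds", "transformer_framework")
```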

Multiplicity and organisation of the planning entities. In what follows we propose to identify classes of resilient systems also by taking into account the individual or social organisation of the processes that constitute their evolution engines. Three aspects, we deem, play an important role in this context:

– The presence of a single or of multiple concurrent evolution engines.

– The individual vs. social nature of the interactions between neighbouring systems. This may range from “weak” forms of interactions [23]—e.g., as in the individual-context middleware of [24]—up to high-level forms of structured social organisation (multi-level coupling of the individual to the environment). The latter case corresponds to the social-context middleware systems of [24].

– (When multiple concurrent EE’s contribute to the emergence of the global system behaviour:) the organisation of control amongst the EE’s.

Table 1 provides a classification of systems according to the just enunciated criteria. The first class is given by systems with a single EE and only capable of individual-context planning. This means that decisions are taken in isolation and without considering the decisions taken by neighbouring systems [24].


Table 1. A tentative classification of evolving systems according to the number and the complexity of their EE’s:
1) Single-logic individual-context systems; 2) Single-logic social-context systems; 3) Collective-logic social-context hierarchies; 4) Collective-logic social-context heterarchies; 5) Bionic, holarchic, or fractal organisations.

GPS navigation systems planning their route only by means of digital maps of the territory are examples of said systems. The second class comprises again systems with a single EE, but this time planning is executed while taking into account the behaviour of neighbouring systems [24]. A collision avoidance system in a smart car belongs to this class. Classes 3 to 5 all consist of systems capable of collective planning. Class 3 includes systems where planning is centralised or hierarchical: one or multiple decision layers exist and on each layer multiple planners submit or publish their plans to a next-layer planner. Air traffic control systems and the ACCADA middleware [25] provide us with two examples of this type of systems. Class 4 refers to decentralised societies with peer-to-peer planning and management. The term used to refer to such systems is heterarchy [26]. Heterarchies are flat (i.e., layer-less) organisations characterised by multiple concurrent systems-of-values and -goals. They introduce redundant control logics from which a system’s expected service may be distributed across a diversity of routes and providers. Such diversity provides a “mutating factor” of sorts, useful to avoid local minima—what Stark refers to as “lock-ins” [26]. The absence of layers removes the typical flaws of hierarchical organisations (propagation and control delays and failures). The distributed decision making introduces new criticalities though; e.g., deterministic and timely behaviours are more difficult to guarantee. “Different branches of government that have checks and balances through separation and overlap of power” [27] constitute an example of heterarchy. The fifth and last class includes systems characterised by distributed hierarchical organisation: bionic organisations, holarchies, and fractal organisations. Said systems are a hierarchical composition of autonomous planners—called respectively modelons, holons, and fractals—characterised by spontaneous behaviour and local interaction. Said planners autonomously establish cooperative relationships with one another, which ultimately produce the emerging functional and adaptive behaviours of the system. “Simultaneously a part and a whole, a container and a contained, a controller and a controlled” [28], these organisations result in systems able to avoid the flaws of both hierarchical and heterarchical systems. The emergence of stability, flexibility, and efficient use of the available resources has been experienced in systems belonging to this class [29–31].

In this case the above classes cannot be used to define a partial order—as was the case for perception, apperception, and meta-apperception—but rather to identify general characteristics exhibited by systems or expected by a hosting environment. As an example, a digital ecosystem may have an admittance policy granting deployment only to systems characterised by social-context capabilities.


This may be done, e.g., so as to prevent the diffusion of greedy individualistic behaviours potentially jeopardising the whole ecosystem.
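Such an admittance policy can be pictured as a simple membership test over the classes of Table 1. The following sketch is illustrative; the enumeration names and the admit helper are assumptions introduced here:

```python
from enum import Enum

class EEOrganisation(Enum):
    """The five classes of Table 1 (illustrative encoding)."""
    SINGLE_LOGIC_INDIVIDUAL_CONTEXT = 1
    SINGLE_LOGIC_SOCIAL_CONTEXT = 2
    COLLECTIVE_LOGIC_HIERARCHY = 3
    COLLECTIVE_LOGIC_HETERARCHY = 4
    BIONIC_HOLARCHIC_FRACTAL = 5

# Hypothetical ecosystem policy: admit only social-context systems.
SOCIAL_CONTEXT = {
    EEOrganisation.SINGLE_LOGIC_SOCIAL_CONTEXT,
    EEOrganisation.COLLECTIVE_LOGIC_HIERARCHY,
    EEOrganisation.COLLECTIVE_LOGIC_HETERARCHY,
    EEOrganisation.BIONIC_HOLARCHIC_FRACTAL,
}

def admit(org: EEOrganisation) -> bool:
    """Grant deployment only to systems with social-context capabilities."""
    return org in SOCIAL_CONTEXT

assert not admit(EEOrganisation.SINGLE_LOGIC_INDIVIDUAL_CONTEXT)
```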

Complexity of the planned adaptive behaviours. A third aspect that—we conjecture—plays an important role in an entity’s reactive control processes is given by the magnitude and complexity of the adaptation behaviours. We distinguish three major cases (a small encoding sketch follows the list):

1. Parametric adaptation. In this case s retains its structure and organisation whatever the adaptation processes instructed by EE(s). Adaptation is achieved by switching among structurally equivalent configurations that depend on one or more internal “knobs” or tunable parameters—e.g., the number of replicas in the redundant data structures in [7]. The adaptive behaviours of parametrically adaptive systems are therefore simple³. As done by Rosenblueth et al. for their classification of behaviours, we shall classify here parametrically adaptive systems by considering their order, namely the number of involved knobs. As an example, the above-mentioned redundant data structures are a first-order parametrically adaptive system.

2. Structural adaptation. In this case the adaptation processes of EE(s) bring s to mutate its structure and/or organisation by reconfiguring the topology, the role, and the number of its constituents. Note how said constituents may also be part of EE(s). Clearly the adaptive behaviours of this class of systems are more complex and thus less stable. An example of such systems is given by Transformer, a framework for self-adaptive component-based applications described in [32, 33].

3. Hybrid adaptation—systems, that is, whose adaptation plans comprise both structural and parametric adaptation. An example of this class of systems is given by the family of adaptive distributed gossiping algorithms described in [34], for which the choice of a combinatorial parameter also induces a restructuring of the roles of the involved agents.

³ Of course this does not mean that the effect that said adaptations are going to have on s will also be simple. In general this will depend on the sensitivity of the parameters and on the extent of their correlation.
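Anticipating the entelechism triple Es used in Sect. 3, the three adaptation classes above and the other two coordinates of entelechism might be encoded as follows; the field names and the example instance are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class AdaptationKind(Enum):
    PARAMETRIC = 1  # knob-turning among structurally equivalent configurations
    STRUCTURAL = 2  # reconfiguration of topology, roles, number of constituents
    HYBRID = 3      # adaptation plans comprising both of the above

@dataclass(frozen=True)
class Entelechy:
    """Hypothetical encoding of the triple E_s of Sect. 3."""
    meta_apperception: int        # pi(EE(s)), cf. Eq. (3)
    ee_organisation: int          # one of the five classes of Table 1
    adaptation: AdaptationKind
    order: int = 0                # number of knobs, for parametric adaptation

# The redundant data structures of [7, 22], read through this encoding:
# a first-order parametrically adaptive system with a simple EE.
redundant_ds = Entelechy(meta_apperception=2, ee_organisation=1,
                         adaptation=AdaptationKind.PARAMETRIC, order=1)
```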

3 Resilience Handshake Mechanisms

As is well known, any system—be it made by man or by nature—is the result of organisational and design choices, in turn produced by the mechanisms of biological or machine-driven evolution [35]. Resilience is a key property emerging from the match between these choices and a deployment environment. Regrettably, both man and nature have no complete freedom in their design choices, as enhancing one design aspect in most cases reduces the degree of freedom on other design aspects. Isolating the constituent attributes of resilience helps gaining insight into this problem and paves the way to approaches where perception, apperception, and entelechism can be dynamically refined so as to optimally match with corresponding figures expected by the target environments.



In what follows we propose a strategy to achieve said “auto-resilient” behaviours. The main idea is to set up admission control mechanisms constraining the deployment of a system in a target environment. This allows a system’s resilience figures to be matched with the expected minimal resilience requirements of a deployment environment. This is similar to defining an “adaptation contract” to be matched with an “environment policy”—in the sense discussed, e.g., in [24].

Figure 2 exemplifies our idea through an ambient intelligence scenario. In this case the ambient is a coal mine. Said environments are known to occasionally experience high concentrations of toxic gases—e.g., carbon monoxide and dioxide as well as methane—that are lethal to both animals and human beings. Regrettably human beings are not endowed with perception capabilities able to provide early warning against the increasing presence of toxic gases. In other words, miners are subjected to dangerous perception failures when working in coal mines. A common way to address said problem is to make use of so-called sentinel species [36], namely systems or animals able to compensate for another system’s lack of perception. The English vernacular “being like a canary in a coal mine” refers to the traditional use of canaries as sentinel species for miners. Our scenario is inspired by the above expedient. We envision the presence of two types of environmental agents: a Feature Register (FR) and one or more Ambient Agents (AA).

FR is the manager of a dynamically growing associative array. It stores associations of the form

s → {Ps, As, Es}, (4)

stating the perception, apperception, and entelechy characteristics of systems. As an example, if s is a miner, then Ps is a representation of the perception spectrum of said agent, As is his apperception class, and Es is a triple representing the entelechism of a miner. We shall refer to the triples {Ps, As, Es} as the “R-features” of s (a minimal sketch of the registry follows the definitions).

AA is an entity representing the R-features of a certain ecoregion, e.g., a “mine”. Indicator species is the term used in the literature to refer to entities representative of an ecoregion [37].
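A minimal sketch of the FR’s associative array (4) follows. Only DclClient is named in the scenario; RFeatures, FeatureRegister, and the field layout are assumptions made here for illustration:

```python
from typing import NamedTuple

class RFeatures(NamedTuple):
    P: frozenset  # perception spectrum P_s
    A: int        # apperception class A_s
    E: tuple      # entelechism triple E_s

class FeatureRegister:
    """Manager of the dynamically growing associative array of Eq. (4)."""

    def __init__(self) -> None:
        self.table: dict[str, RFeatures] = {}  # s -> {P_s, A_s, E_s}

    def DclClient(self, s: str, rf: RFeatures) -> None:
        """Record (or update) the R-features declared by system or ambient s."""
        self.table[s] = rf
```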

In the scenario depicted in Fig. 2 we have a single AA called Mine Ambient. We assume that every deployment in a target environment e (in this case, a “mine”) must be authorised through a handshake with the local FR. This means that, before processing any admittance requests, the FR first expects the AA of e to declare their R-features. This is done in Fig. 2(a) by calling method DclClient.

For the sake of simplicity we assume that said R-features are constant. When that is not the case the AA is responsible for updating their R-features with new DclClient calls.

The scenario continues with a system, a Miner Agent, requesting access to e. This is done in Fig. 2(b) through another call to DclClient.


Fig. 2. Resilience handshake scenario. A Mine Ambient declares its resilience requirements (in particular, perception of carbon monoxide, methane, or carbon dioxide). A Miner Agent and a Canary Agent are both not qualified enough to enter. A Feature Register detects that collaboration between them may solve the problem. As a result a new collective system, Miner+Canary, is created, which passes the test and is allowed into the Mine Ambient.

Once the FR receives the corresponding R-features, a record is added to the FR associative array and the request is evaluated.


By comparing the perception spectra of e and the Miner Agent, the FR is able to detect a perception failure: Miner Agent ≺P e, or in other words some of the events in e would go undetected by the Miner Agent when deployed in e. As a consequence, a call to method PerceptionFailure notifies the Miner Agent that the resilience handshake failed (Fig. 2(c)). Despite this, the entry describing the R-features of the Miner Agent is not purged from the associative array in FR.

After some time a second system, called Canary Agent, requests deployment in the mine e by submitting its R-features. This is shown in Fig. 2(d). The Canary Agent is considerably simpler than the Miner Agent in terms of both apperception and entelechism, and in particular the apperception class of the Canary Agent is insufficient with respect to the apperception expected by e: Canary Agent ≺A e. As a consequence, a failure is declared (see Fig. 2(e)) by calling method ApperceptionFailure. Despite said failure, a new record stating the R-features of the Canary Agent is added to the associative array of FR.

By some strategy, e.g., a brute-force analysis of every possible union of all stored associations, the FR realises that the union of the perception spectrum of the Miner Agent and that of the Canary Agent optimally fulfils the admittance requirements of e and therefore does not result in a perception failure. Both Miner and Canary agents are then notified of this symbiotic opportunity by means of a call to method JoinPerceptionSpectra (Fig. 2(f)). This is followed by the creation of a simple form of social organisation: the Miner Agent monitors the state of the Canary Agent in order to detect the presence of toxic gases. If this monitoring process is not faulty—that is, if the Miner Agent does not fail to check regularly and frequently enough the state of the Canary Agent—this results in an effective method to artificially augment one’s perception spectrum. The resulting collective system, Miner+Canary Agent, is created in Fig. 2(g). Finally, Fig. 2(h) and (i) show how the newly created system fulfils the admittance requirements and is allowed into the Mine Ambient.
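The handshake and the symbiosis search might then be sketched as follows, reusing the hypothetical FeatureRegister above; admit and find_symbiosis are illustrative helpers, and the failure notifications are returned as labels rather than issued as method calls:

```python
from itertools import combinations

def admit(fr: "FeatureRegister", ambient: str, s: str) -> str:
    env, sys_ = fr.table[ambient], fr.table[s]
    if not env.P <= sys_.P:        # s's spectrum does not cover e's figures
        return "PerceptionFailure"
    if sys_.A < env.A:             # s ≺A e: insufficient apperception class
        return "ApperceptionFailure"
    return "Admitted"

def find_symbiosis(fr: "FeatureRegister", ambient: str):
    """Brute-force search over pairs of registered systems (pairs only,
    for brevity) whose joint spectrum covers the ambient's requirements."""
    env = fr.table[ambient]
    candidates = [s for s in fr.table if s != ambient]
    for p, q in combinations(candidates, 2):
        if env.P <= fr.table[p].P | fr.table[q].P:
            return p, q  # both would then be notified via JoinPerceptionSpectra
    return None
```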

4 Conclusions

Continuing our work reported in [5–7], we have introduced here a classification of resilience based on several attributes. We have shown how breaking down resilience into simpler constituents makes it possible to conceive handshake mechanisms between systems declaring their resilience figures and environments stating their minimal resilience requirements. One such mechanism has been exemplified through an ambient intelligence scenario. We have shown in particular how identifying shortcoming and excess in resilience may be used to enhance the system-environment fit through simple forms of social collaboration.

We observe how decomposing resilience into a set of constituent attributes allows a set of sub-systems to be orthogonally associated to the management of said attributes. This paves the way to strategies that

1. assess the resilience requirements called for by the current environmental conditions; and


2. reconfigure the resilience sub-systems by optimally redistributing the available resource budgets, e.g., in terms of complexity and energy.

Fine-tuning the resilience architectures and organisations after the current environmental conditions may be used to design auto-resilient systems—systems, that is, whose evolution engines are able to self-guarantee identity persistence while systematically adapting their perception, apperception, and entelechism sub-systems. We conjecture that this in turn may help match the challenges introduced by the high variability in current deployment environments.

We envisage the study and the application of auto-resilience to constitute a significant portion of our future research activity.

References

1. Jen, E.: Stable or robust? What’s the difference? In Jen, E., ed.: Robust Design: a repertoire of biological, ecological, and engineering case studies. SFI Studies in the Sciences of Complexity. Oxford Univ. Press (2004) 7–20
2. Aristotle, Lawson-Tancred, H.: De Anima (On the Soul). Penguin (1986)
3. Sachs, J.: Aristotle’s Physics: A Guided Study. Rutgers (1995)
4. Meyer, J.F.: Defining and evaluating resilience: A performability perspective. In: Proc. Int.l Work. on Performability Modeling of Comp. & Comm. Sys. (2009)
5. De Florio, V.: On the constituent attributes of software and organizational resilience. Interdisciplinary Science Reviews 38(2) (2013)
6. De Florio, V.: On the role of perception and apperception in ubiquitous and pervasive environments. In: Proc. of the 3rd Work. on Service Discovery & Composition in Ubiquitous & Pervasive Environments (SUPE’12) (2012)
7. De Florio, V.: Robust-and-evolvable resilient software systems: Open problems and lessons learned. In: Proc. of the 8th Workshop on Assurances for Self-Adaptive Systems (ASAS’11), Szeged, Hungary, ACM (2011) 10–17
8. Costa, P., Rus, I.: Characterizing software dependability from multiple stakeholders perspective. Journal of Software Technology 6(2) (2003)
9. Laprie, J.C.: Dependable computing and fault tolerance: Concepts and terminology. In: Proc. of the 15th Int.l Symp. on Fault-Tolerant Computing (FTCS-15), Ann Arbor, Mich., IEEE Comp. Soc. Press (1985) 2–11
10. Laprie, J.C.: Dependability—its attributes, impairments and means. In Randell, B., et al., eds.: Predictably Dependable Computing Systems. Springer, Berlin (1995) 3–18
11. De Florio, V., Blondia, C.: Reflective and refractive variables: A model for effective and maintainable adaptive-and-dependable software. In: Proc. of the 33rd Conf. on Software Eng. & Adv. Appl. (SEAA 2007), Lubeck, Germany (2007)
12. De Florio, V., Blondia, C.: System structure for dependable software systems. In: Proc. of the 11th Int.l Conf. on Computational Science and its Applications (ICCSA 2011), Santander, Spain (2011)
13. De Florio, V.: Cost-effective software reliability through autonomic tuning of system resources (2011) http://mediasite.imec.be/mediasite/SilverlightPlayer/Default.aspx?peid=a66bb1768e184e86b5965b13ad24b7dd
14. Charette, R.: Electronic devices, airplanes and interference: Significant danger or not? (2011) IEEE Spectrum blog “Risk Factor”, http://spectrum.ieee.org/riskfactor/aerospace/aviation/electronic-devices-airplanes-and-interference-significant-danger-or-not
15. De Florio, V.: Software assumptions failure tolerance: Role, strategies, and visions. In Casimiro, A., de Lemos, R., Gacek, C., eds.: Architecting Dependable Systems VII. Vol. 6420 of LNCS. Springer (2010) 249–272
16. Leibniz, G., Strickland, L.: The Shorter Leibniz Texts. Continuum (2006)
17. Runes, D.D., ed.: Dictionary of Philosophy. Philosophical Library (1962)
18. Lycan, W.: Consciousness and Experience. Bradford Books. MIT Press (1996)
19. Rosenblueth, A., Wiener, N., Bigelow, J.: Behavior, purpose and teleology. Philosophy of Science 10(1) (1943) 18–24
20. Boulding, K.: General systems theory—the skeleton of science. Management Science 2(3) (1956)
21. De Florio, V., Blondia, C.: Service-oriented communities: Visions and contributions towards social organizations. In Meersman, R., et al., eds.: OTM 2010 Workshops. Vol. 6428 of LNCS. Springer (2010) 319–328
22. De Florio, V., Blondia, C.: On the requirements of new software development. Int.l Journal of Business Intelligence and Data Mining 3(3) (2008)
23. Pavard, B., et al.: Design of robust socio-technical systems. In: Proc. of the 2nd Int.l Symp. on Resilience Eng., Cannes, France (2006)
24. Eugster, P.T., Garbinato, B., Holzer, A.: Middleware support for context aware applications. In Garbinato, B., Miranda, H., Rodrigues, L., eds.: Middleware for Network Eccentric and Mobile Applications. Springer (2009) 305–322
25. Gui, N., et al.: ACCADA: A framework for continuous context-aware deployment and adaptation. In: Proc. of the 11th Int.l Symp. on Stabilization, Safety, and Security of Distr. Sys. (SSS 2009). Vol. 5873 of LNCS. Springer (2009) 325–340
26. Stark, D.C.: Heterarchy: Distributing authority and organizing diversity. In: The Biology of Business. Jossey-Bass (1999) 153–179
27. Anonymous: Heterarchy. Technical report, P2P Foundation (2010)
28. Sousa, P., Silva, N., Heikkila, T., Kallingbaum, M., Valcknears, P.: Aspects of co-operation in distributed manufacturing systems. Studies in Informatics and Control Journal 9(2) (2000) 89–110
29. Ryu, K.: Fractal-based Reference Model for Self-reconfigurable Manufacturing Systems. PhD thesis, Pohang Univ. of Science and Technology, Korea (2003)
30. Tharumarajah, A., Wells, A.J., Nemes, L.: Comparison of emerging manufacturing concepts. In: 1998 IEEE Int.l Conf. on Systems, Man, and Cybernetics. Vol. 1 (1998) 325–331
31. Warnecke, H., Huser, M.: The Fractal Company. Springer (1993)
32. Gui, N., De Florio, V.: Towards meta-adaptation support with reusable and composable adaptation components. In: Proc. of the 6th IEEE Int.l Conf. on Self-Adaptive and Self-Organizing Systems (SASO 2012), IEEE (2012)
33. Gui, N., De Florio, V., Holvoet, T.: Transformer: An adaptation framework with contextual adaptation behavior composition support. Software Pract. Exper. (2012)
34. De Florio, V., Blondia, C.: Robust and tuneable family of gossiping algorithms. In: Proc. of the 20th Euromicro Int.l Conf. on Parallel, Distr., and Network-Based Processing (PDP 2012), Garching, Germany, IEEE Comp. Soc. (2012) 154–161
35. Nilsson, T.: How neural branching solved an information bottleneck opening the way to smart life. In: Proc. of the 10th Int.l Conf. on Cognitive and Neural Systems, Boston Univ., MA (2008)
36. van der Schalie, W.H., et al.: Animals as sentinels of human health hazards of environmental chemicals. Environ. Health Persp. 107(4) (1999)
37. Farr, D.: Indicator species. In: Encycl. of Environmetrics. Wiley (2002)