Mental States as Emergent Properties: From Walking to Consciousness

Holk Cruse & Malte Schilling

In this article we propose a bottom-up approach to higher-level mental states, such as emotions, attention, intention, volition, or consciousness. The idea behind this bottom-up approach is that higher-level properties may arise as emergent properties, i.e., occur without requiring explicit implementation of the phenomenon under examination. Using a neural architecture that shows the abilities of autonomous agents, we want to come up with quantitative hypotheses concerning cognitive mechanisms, i.e., testable predictions concerning the underlying structure and functioning of an autonomous system that can be tested in a robot-control system.

We do not want to build an artificial system that is, for example, conscious in the first place. On the contrary, we want to construct a system able to control behavior. Only then will this system be used as a tool to test to what extent descriptions of mental phenomena used in psychology or philosophy of mind may be applied to such an artificial system. Originally these phenomena are necessarily defined using verbal formulations that allow for interpreting them differently. A functional definition, in contrast, does not suffer from being ambiguous, because it can be expressed explicitly using mathematical formulations that can be tested, for example, in a quantitative simulation. It is important to note that we are not concerned with the “hard” problem of consciousness, i.e., the subjective aspect of mental phenomena. This approach is possible because, adopting a monist view, we assume that we can circumvent the “hard” problem without losing information concerning the possible function of these phenomena. In other words, we assume that phenomenality is an inherent property of both access consciousness and metacognition (or reflexive consciousness). Following these arguments, we claim that our network not only shows emergent properties on the reactive level; it also shows that mental states, such as emotions, attention, intention, volition, or consciousness, can be observed. Concerning consciousness, we argue that properties assumed to partially constitute access consciousness are present in our network, including the property of global availability, which means that elements of the procedural memory can be addressed even if they do not belong to the current context. Further expansions are discussed that may allow for the recognition of properties attributed to metacognition or reflexive consciousness.

Keywords
Access consciousness | Consciousness | Emergent properties | Internal body model | Minimal cognitive system | Motor control | ReaCog | Recurrent neural networks | Reflexive consciousness | Robotic architecture

Authors

Holk Cruse
holk.cruse@uni-bielefeld.de
Universität Bielefeld, Bielefeld, Germany

Malte Schilling
malte.schilling@uni-bielefeld.de
Universität Bielefeld, Bielefeld, Germany

Commentator

Aaron Gutknecht
aaron-gutknecht@gmx.de
Goethe-Universität, Frankfurt a. M., Germany

Editors

Thomas Metzinger
metzinger@uni-mainz.de
Johannes Gutenberg-Universität, Mainz, Germany

Jennifer M. Windt
jennifer.windt@monash.edu
Monash University, Melbourne, Australia

1 Introduction

In this article we propose a bottom-up approach to higher-level mental states, such as, for example, consciousness. In contrast to most related approaches, we do not take consciousness as our point of departure, but rather aim, firstly, to construct a system that has the basic properties of a reactive system. In a second step, this system will be expanded and will gain cognitive properties in the sense of being able to plan ahead. Only after this work is finished do we ask to what extent this system is equipped with higher-level properties such as, for example, emotions or consciousness. While other approaches would require an exact definition of, for example, consciousness, in our case we do not have to start from a clear-cut definition and try to fit it into a model. We follow this alternative route because there are no generally accepted definitions of these higher-level phenomena. In this way we hope to identify the essential elements required to instantiate, for example, consciousness.

The idea behind this approach is that higher-level properties may arise as emergent properties, i.e., may occur without requiring explicit implementation of the phenomenon under examination but instead arise from the cooperation of lower-level elements. Some authors distinguish between “strong” emergence and “weak” emergence (e.g., Laughlin & Pines 2000). Strong emergence means that there is principally no way to explain the emergent property by known properties of the elements of the system and their coupling. Here we are dealing with weak emergence. In this case, a property recognized when looking at the whole system cannot at first glance (or perhaps only partially) be traced back to known properties of the elements and their couplings. Often, auxiliary assumptions are made to explain this property as a global property, i.e., as a property ascribed to the system as a whole. A more detailed inspection may, however, show that such auxiliary assumptions are not required. Instead, the emergent property follows from the properties of the elements and the specific ways in which they causally interact. This insight allows for an understanding of an emergent property in the sense that this property can be predicted, although we may not understand why it arises, and that one is able to construct a new system showing this property.

Following this approach, one crucial decision to be made at the beginning concerns the granularity of the lower-level elements. In our approach, we start from a behavioral perspective and focus on the nervous system as central to the control of action. Therefore, we use neuronal units as the basic elements for our modeling and for the analysis. Specifically, we use artificial neural network units with analogue activation values and dynamic (low-pass filter) properties1. That is, our neural elements are qualitatively comparable with non-spiking neurons. Although there are arguments that consciousness, in order to arise, might require synchronously oscillating spikes (Singer & Gray 1995), we claim that the level applied here is general enough to allow for an understanding of such mental processes. As a side effect, this level of abstraction covers different evolutionary groups, such as those represented by insects and mammals, for example. Though much of our discussion, below, focuses on the example of insects, we do not want to argue that insects have all the higher-level properties addressed later in this article, but only that they share the same fundamental functions used in motor control and have, on that level, a comparable structure.
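
To make the dynamics of these units concrete, the following minimal sketch implements such a low-pass filter unit in Python; the time constant beta is an illustrative assumption, not a value from the paper.

```python
class LowPassUnit:
    """Sketch of the analogue neural unit described above: a leaky
    integrator whose output, for a constant input, asymptotically
    approaches that input (cf. footnote 1)."""

    def __init__(self, beta=0.1):
        self.beta = beta  # assumed time constant of the filter
        self.a = 0.0      # analogue activation value

    def step(self, x):
        # Move a fraction beta of the way toward the input each tick.
        self.a += self.beta * (x - self.a)
        return self.a

unit = LowPassUnit()
trace = [unit.step(1.0) for _ in range(50)]
# trace rises asymptotically toward 1.0, as footnote 1 describes
```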

Using these simple neural elements, we start by implementing very basic faculties that include the ability to move one’s own (non-trivial2) body, and allow for orientation and navigation in an only partially known environment. To this end we use a body with six, insect-like legs. This means that we deal with at least eighteen active degrees of freedom (DoF) and not two, as is the case for many robots that are restricted to moving around on a two-dimensional plane. The controller therefore has to deal with a large number of redundant DoFs. To control the behavior of the robot we use a reactive and embodied neuronal controller, as is available from earlier work on insect behavior (Schilling et al. 2013a). Later, a minor expansion of the network will allow for cognitive faculties.

1 A low-pass filter is qualitatively characterized by an increase of output activation that, when excited by a constant stimulus, asymptotically approaches a given output value.

2 I.e., a body with redundant degrees of freedom arranged in both parallel and serial order.

Figure 1: Arrangement of the leg-controllers (boxes: FL front left, ML middle left, HL hind left, FR front right, MR middle right, HR hind right) of the hexapod walker. The arrows show coordinating influences (1–4) that act between neighbouring leg-controllers.

What are the properties of the reactive/cognitive system considered here? The reactive system is called “Walknet”, and it is based on biological insights from experiments on the walking behavior of stick insects (Dürr et al. 2004; Bläsing 2006; Schilling et al. 2013b). As will be explained in section 2, Walknet was set up as a system for controlling the walking behavior of a six-legged system in an unpredictable environment, e.g., on cluttered terrain or when climbing over large gaps, which, when performed in a realistic, natural environment, is a non-trivial task. Already on this level we can observe emergent properties. The number of legs on the ground differs depending on the velocity of the walker (for slower walking more legs are on the ground). As a consequence, the phase relations between different legs differ depending on the velocity of the walker. Importantly, the resulting stepping patterns (“gaits”) are not explicitly encoded in the control network, but are a result of the interaction of the control network with the environment as mediated through the body (1st-order embodiment, Metzinger 2014).

Figure 2: The network controlling the reactive system. Motivation units (red) form an RNN that can assume various attractor states (only two leg-controllers are shown). Arrows show excitatory influences, T-shaped connections show inhibitory influences (fw forward, bw backward, r1 coordination rule 1). The motivation units at the lower margin control procedures (boxes, e.g., swing, stance). The procedures include the internal body model (blue). The body is marked by dashed boxes (“leg”). Indicated here is the network Navinet that controls walking direction (see figure 4 for more details).

In a further step, the reactive controller is expanded to be able to deal with navigation tasks. This additional network, called “Navinet”, is able to simulate a number of experimental results observed in desert ants and honeybees, such as the capability of finding food sources using path integration and orientation with respect to visual landmarks.

Both networks are characterized by their decentralized nature. These networks consist of procedural (reactive) elements, namely small neural networks that in general connect sensory input with motor output, thereby forming the procedural memory. Inspired by Maes (1991), these procedural elements are coupled via a “motivation unit network”, a recurrent neural network (RNN) that forms the backbone of the complete system. This type of architecture has been termed MUBCA, for Motivation Unit Based Columnar Architecture (Schilling et al. 2013b). The motivation unit network allows for the selection of different behaviors by adopting different attractor states, where each attractor represents a group of motivation units being activated, which in turn control the procedural elements. As the different groups do in part overlap, albeit in different ways, the network allows for the representation of a heterarchical structure (e.g., see the upper left part of figure 2).

As a next “evolutionary” step, this reactive network will be expanded to be able to embrace cognitive properties (sections 3 and 6). The notion of cognition is often used in a broad and sometimes unspecific way. In the following we will rely on the definition given by McFarland & Bösser (1993), who assume that a cognitive system is characterized by the capability of planning ahead. We prefer this clear-cut definition of cognition to many others found in the literature, as the latter are generally quite weak (in extreme cases cognitive properties are even attributed to bacteria, which, in our view, would make the term cognition meaningless). While such a specific definition might seem too narrow, in our understanding it captures the essence of cognition. Focusing on planning ahead as realized by mental simulation (Hesslow 2002) allows extending this notion of cognition to easily include other high-level phenomena, while still relying on the same internal mechanism. Therefore, in this article, apart from section 10.3 (Metacognition), we will use the term cognition in the strict sense as proposed by McFarland & Bösser (1993).

Being able to plan ahead implies the capability to internally simulate behavior, which basically means being able to simulate movements of one’s own body within a given environment. This faculty requires, as a first step, the availability of a flexible, “manipulable” internal body model. Planning ahead is interesting in a situation where the actually executed reactive behavior cannot reach the currently pending goal. Therefore, a further expansion is required that allows for the invention of new behaviors. Together with the faculty of planning ahead, the system can then test newly-invented behaviors by applying internal simulation (“internal trial-and-error”) in order to find a solution for novel problems for which no solution is currently known to the system.3

This system, called “reaCog”, represents a basic version of a cognitive system in the strict sense intended by McFarland & Bösser (1993). As such, the cognitive expansion does not function by itself, but only, like a parasite, on top of the reactive structures, a view that has been supported for a long time (Norman & Shallice 1986). The cognitive system depends on its reactive basis (hence the name reaCog). Therefore, the evolution of cognitive abilities crucially requires a correspondingly rich (procedural) memory.

In order to increase the richness of the memory of the complete system, in section 5 we introduce perceptual memory and complete the system by implementing “Word-nets”, a specific form of procedural and perceptual memory. In this way, the whole system is equipped with aspects of semantic memory, and can be claimed to represent a minimal cognitive system. We do not deal with learning but only discuss the properties of the finished network. The learning of some aspects has, however, been treated earlier (Hoinville et al. 2012; Cruse & Schilling 2010a).

3 Note that the term simulation is used here in two different ways. “Internal simulation” enables the agent to simulate behaviors internally, i.e., without actually performing them in reality. Simulation of an animal addresses the construction of an artificial agent. The agent may take the form of a software simulation or a hardware simulation (i.e., a physical robot).

After having introduced reaCog in sections 2–6, we will, in sections 7–11, discuss how more abstract functions, such as those described in, e.g., psychology, can be based on such a simply-structured network.

A fundamental problem when aiming for an understanding of phenomena like emotions or consciousness concerns the phenomenal aspect. The phenomenal aspect, often characterized as the hard problem (Chalmers 1997), refers to the strange, unexplainable phenomenon that physical systems, in our case represented by specific dynamics of neuronal structures, can be accompanied by subjective experience. Basic examples are experiencing pain, a color, or the internal state of an emotion (e.g., joy, fear). In section 7 we discuss this aspect in some detail and postulate that phenomenality is an emergent property. As mentioned, we are not aiming to solve the “hard” problem (Chalmers 1997), but we argue that it is sufficient to concentrate on the functional aspect.

In particular, we focus on the phenomena of emotions and consciousness. According to a number of authors (e.g., Valdez & Mehrabian 1994), these are assumed to be an inherent property of some cognitive systems. Therefore, although we do not want to state that emotions (section 8), attention, volition, intention (section 9), and consciousness (section 10) should necessarily be attributed to our system in any sense, we want to discuss to what extent properties characterized by different levels of description can be observed in our model.

Emotions are defined on different levels in the literature, so that there is no clear, generally accepted distinction between concepts like emotions, moods, motivations, drives, etc., which appear to form a continuum of overlapping, not clearly separable concepts (Pérez et al. 2012). Focusing on selected examples, in section 8 we will show how these phenomena may be attributed to our system, for example by referring to the basic emotions proposed by Ekman (1999).

Concerning consciousness, as discussed by Cleeremans (2005), this phenomenon should be approached by differentiating its different aspects and treating those aspects separately. To this end, following Block (1995, 2001), Cleeremans (2005) introduces a distinction between access consciousness, metacognition, and phenomenal consciousness. In sections 10.1 (access consciousness) and 10.3 (metacognition) we will focus on whether and how the presented model can be related to the different aspects described by Cleeremans (2005), such as access consciousness and metacognition. From our point of view the simple control structure presented does fulfill some aspects of both access consciousness and metacognition. We shall finish with a discussion and conclusion in sections 11 and 12.4

4 This article comprises an essential extension of an earlier paper (Cruse & Schilling 2013).

2 Walknet

ReaCog is an expansion of a control system that has been realized as a neural network. The underlying system has been termed Walknet. Walknet is biologically inspired and is supposed to describe the results of many behavioral studies on the walking behavior of stick insects (Dürr et al. 2004; Schilling et al. 2013b). We will briefly sketch the properties of the network as far as is required for understanding the basic abilities considered here.

Overall, the controller has to deal with the difficult task of coordinating multiple degrees of freedom; in the case of the hexapod walker the body consists of twenty-two DoF. There are three DoF for each of the six legs, and an additional four DoF are present between the body segments. The system is redundant, as only six DoFs are needed to define a position and orientation in three-dimensional space. The controller therefore has to deal with sixteen extra DoFs. The architecture of the Walknet controller is decentral. Each leg has an individual and more or less independent controller that decides which action to choose (two such leg-controllers are shown in figure 2, the black boxes in the lower part). A single leg controller consists of several procedures. In the figure, each procedure is represented as a single black box. In the basic system, the two important behaviors a leg can perform are the swing and the stance movement. The procedures themselves are realized as artificial RNNs. Examples are the two basic procedures: the “Swing-net”, which controls the swing movement, and the “Stance-net”, which controls the stance movement of the leg. Only two of the six leg-controllers are shown. These networks constitute the procedural memory of the system. The procedural modules receive direct sensory input and provide motor control commands as output. But there are also modules that provide input to another module. The controller on the leg level determines which procedure should be activated at any given time, depending on the current state of the leg (swing or stance), as well as on sensory inputs (ground contact, position). In addition, controllers of neighboring legs can influence each other through a small number of connections between those controllers. These influences are explicitly derived from experiments on the coordination of legs in walking stick insects.

As was found in the insects, during the swing movement (protraction) the legs aim towards a position at the front, close to the position of the anterior leg. Therefore, each leg possesses a so-called “target net” in order to produce these targeted movements. During forward walking the so-called “Target_fw-net” is responsible for this targeting. During backward walking the “Target_bw-net” is used. Both directly influence the Swing-net. Procedures marked as blue boxes (“body model”, “leg model”) will be explained below (section 3.1).
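
The local swing/stance selection just described can be summarized in a toy sketch; the condition names are illustrative assumptions, not the actual Walknet procedures:

```python
def select_procedure(state, ground_contact, at_extreme_position):
    """Toy sketch of the local selection rule described above: a leg in
    stance switches to swing when it reaches its extreme rearward
    position, and a swinging leg switches back to stance on ground
    contact. Neighboring legs would additionally shift these switching
    points via the coordinating influences (figure 1)."""
    if state == "stance" and at_extreme_position:
        return "swing"   # activate Swing-net
    if state == "swing" and ground_contact:
        return "stance"  # activate Stance-net
    return state         # otherwise keep the current behavior active
```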

ReaCog is expanded by an RNN, which consists of motivation units (figure 2, marked in red). This network allows the system to autonomously select one of the different possible behaviors. For example, the system may choose between forward walking, backward walking, or standing. A motivation unit is an artificial neuron with linear summation of its input and a piecewise linear activation function, showing output values from zero to one. Applied to a procedure, for example Swing-net, a motivation unit determines the strength of the output of the corresponding procedural network (in a multiplicative way). As mentioned above, motivation units form a recurrent neural network and can influence each other through excitatory or inhibitory connections (as shown in figure 2).
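
In code, such a unit and its multiplicative gating could look like the following sketch, reconstructed from the description above (not the actual reaCog implementation):

```python
import numpy as np

def motivation_activation(inputs):
    """Motivation unit as described above: linear summation of the
    inputs followed by a piecewise linear activation that clips the
    output to the range [0, 1]."""
    return float(np.clip(np.sum(inputs), 0.0, 1.0))

def gated_output(motivation, procedure_output):
    """The motivation unit scales the output of its procedure
    multiplicatively; a motivation of 0 switches the procedure off."""
    return motivation * procedure_output

# Example: a motivation of 0.6 scales the (here unit-strength) output
m = motivation_activation([0.4, 0.3, -0.1])  # summed input is 0.6
print(gated_output(m, 1.0))                  # -> 0.6
```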

In addition, there are sensory units that are part of this RNN and that can directly influence the motivation units’ activation, e.g., as shown in figure 2 for the “lower-level” units for Swing and Stance. There, an active ground-contact sensor of a leg reinforces the stance motivation unit for this leg. As the motivation unit network can be arbitrarily expanded, it allows for the control of complex behaviors. To illustrate a small group of behaviors only, units such as “walk”, “fw” (forward), “bw” (backward), and “leg1” are depicted (for more examples see Schilling et al. 2013b; Cruse & Wehner 2011).

The network of motivation and sensory units does not have to form a simple, tree-like structure (see figure 2). It can constitute a heterarchy. Motivation units can be bi-directionally connected through positive (arrowheads) and negative (T-shaped heads) connections. As shown in the figure, this can lead to cycles. There are also different overlapping subnetworks, e.g., the “leg” units as well as the motivation unit for “walk” are active during backward and forward walking. But only one unit indicating the direction of walking can be active at any given time, i.e., either the unit “fw” or “bw” can be active. As a consequence, there are multiple stable attractor states formed through the combinations of excitatory and inhibitory connections. These stable “internal states” stabilize the behavior of the overall control system, as the system cannot easily be disturbed solely through inappropriate sensory inputs. For example, sensory inputs are treated differently depending on the current state (swing or stance) of the control system, and these internal states can be differentiated on a higher level, e.g., into walking, standing, or feeding (for details see Schilling et al. 2013a; Schilling et al. 2013b).
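
The attractor property can be illustrated with a toy network of this kind: a “walk” unit drives the two direction units “fw” and “bw”, which inhibit each other. All weights and the update rule are illustrative assumptions, not the values used in reaCog.

```python
import numpy as np

def settle(w=2.0, beta=0.3, steps=60, seed=1):
    """Two motivation units, "fw" and "bw", receive the same excitatory
    drive from an active "walk" unit and inhibit each other with weight
    w. Starting from small random activations, the leaky update lets
    the network settle so that only one direction unit stays active."""
    rng = np.random.default_rng(seed)
    f = lambda x: min(max(x, 0.0), 1.0)   # piecewise linear activation
    fw, bw = rng.random(2) * 0.1
    for _ in range(steps):
        fw, bw = (fw + beta * (f(1.0 - w * bw) - fw),
                  bw + beta * (f(1.0 - w * fw) - bw))
    return fw, bw

print(settle())  # one unit saturates near 1.0, the other is driven to 0.0
```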

Cruse, H. & Schilling, M. (2015). Mental States as Emergent Properties - From Walking to Consciousness.In T. Metzinger & J. M. Windt (Eds). Open MIND: 9(C). Frankfurt am Main: MIND Group. doi: 10.15502/9783958570436 6 | 38

www.open-mind.net

Figure 3: Step pattern arising from the decentralized leg-controllers connected by local rules and the environment. Abscissa is time; black bars indicate swing movement; the gaps represent stance movement of this leg (from top to bottom: front left leg (FL), middle left leg (ML), hind left leg (HL); correspondingly front right leg (FR), middle right leg (MR), and hind right leg (HR) for the right side). The lower bars indicate 500 iterations corresponding to 5 s real time. These “foot-fall patterns” show various locally or globally stable patterns depending on walking velocity (a: slow, b: fast) and on starting position. In (a) the legs start with an “uncomfortable” leg configuration leading to a gallop-like pattern (indicated by the vertical ellipses) that after about six steps changes to the globally stable pattern, typical for slow insect walking (see inclined ellipses, step #8). (b) shows fast walking leading to a tripod gait characterized by synchronous swing movements of ML, FR, HR and FL, HL, MR (see vertical ellipses).

For an RNN, maintaining a stable state is a non-trivial problem, in particular when there are various disturbances. To illustrate the adaptability and at the same time the stability of the behavior controlled by such a motivation unit network, in figure 3 we show two cases of hexapod walking. Figure 3a shows an example of a slow walking speed where the legs begin from a difficult starting configuration (both front legs, both middle legs, and both hind legs start from the same position, which is opposite to the coordination found in normal walking, where opposite legs alternate). Nonetheless, the agent is able to walk. After some steps, the agent reaches a temporally stable pattern corresponding to normal walking. Figure 3b shows a step pattern corresponding to high-speed walking, often termed “tripod gait”. Although usually considered to be a regular pattern, detailed inspection shows that there are local temporal variations, but the overall pattern remains stable (for videos of further walking examples see Schilling et al. 2013b). It is important to note that none of these step patterns are explicitly implemented; they arise as emergent properties (for details see Schilling et al. 2013a). As another impressive emergent property, Bläsing (2006) showed that, with some minor extensions, this walker is able to climb over large obstacles (which can be more than twice the normal step-width).

3 Internal representation

In addition to using the loop through the environment itself, some form of internalization is a prerequisite for any kind of planning. Therefore, specific internal representations5 are necessary for a cognitive system. This is well in line with the embodied perspective, because from an evolutionary point of view internal models are not at first disconnectable from a very specific function, and they work in service of a specific behavior (Glenberg 1997). Internal models have, in this sense, co-evolved with behavior (Steels 2003). An early representation is the representation of one’s own body, and such a representation becomes meaningful early on, in simple control tasks like targeted movements or sensor fusion.

3.1 Body model

In reaCog we introduced an internal model of the body. This model is realized as an RNN (Schilling 2011) and has a modular structure (Schilling & Cruse 2007; Schilling et al. 2012). The overall model consists of two different levels. On the top level the whole body and the structure of the insect are represented in an abstract way. Only on the lower level are the details filled in. The lower level consists of six leg networks. Here, for each leg the functional structure of the joints and the limb is captured. In this way this level of representation can be used for motor control and provides detailed information about joint movements. On the higher level, the structure of the body and the legs is represented in an abstract form, i.e., only the footholds of the legs appear on this level. Figure 2 shows the different parts of the body model (drawn in blue). The body model is modular. It comprises a holistic system that is realized as an RNN (figure 5; see Schilling 2011; Schilling et al. 2012 for details).

5 The term representation is used here in the broad sense of Steels (1995): “physical structures (for example electro-chemical states) which have correlations with aspects of the environment”.

The body model is used during normal walking, meaning that the system is still in the reactive mode, in forward as well as backward walking or when negotiating curves. It coordinates the movement of the joints and delivers the appropriate control signals for the Stance-networks. As explained above, overall the system is redundant, with twenty-two DoFs in the whole body structure, and this makes deriving consistent control signals for all the joints a difficult problem that cannot be computed directly, but rather requires the application of additional criteria (e.g., for optimizing energy consumption). In our approach, which uses the internal body model, we employ the passive motion paradigm (von Kleist 1810; Mussa-Ivaldi et al. 1988; Loeb 2001). Consider the body model as a simulated puppet of the body (figure 5) that is pulled by its head in the direction of the goal (figure 5b, pull_fw). This information on the target direction could be provided by sensory input, e.g., from the antennae or vision, in the form of a target vector (figure 2, sensory input). When pulled in this direction, the whole model should take up this movement, and therefore the individual legs currently in stance should follow the movement in an appropriate way. The induced changes in the joints can be read out and applied as motor commands in order to control the real joints. In backward or curved walking, the body model only has to be pulled in a corresponding direction (in backward walking using the vector attached to the back of the body model, pull_bw, figure 5b). In this way we obtain an easy solution to the inverse kinematic problem, as the body model represents the kinematic constraints of the body of the walker. It restrains the possible movements of the individual joints through these constraints and only allows possible solutions for the legs standing on the ground, thereby providing coordinated movements in all the involved joints.

The body model is also connected to the sensors of the walking system and integrates the incoming sensory information into the currently-assumed state of the body as represented in the body model. In this way the body model is able to correct noisy or incorrect sensory data (Schilling & Cruse 2012). Overall, the main task of the body model is pattern completion. It uses the current state and incoming sensory data to come up with the most likely state of the body that fulfils the encoded kinematic constraints. In this way, the model can also be used as a forward model, meaning that, given a specific joint configuration, the model can predict the three-dimensional arrangement of the body, for example the position of the leg tips. The predictive nature of the model is crucial, as it allows exploiting the model for planning ahead (see below). It is important to note that while we do not want to claim the existence of such a model in insects, the functions of internal models are prediction, inverse function, and sensor fusion, and these can all already be found in insects.
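
To make the passive motion idea concrete, here is a deliberately simplified stand-in: a planar two-joint leg whose foot is “pulled” toward a target, with the induced joint changes computed via the Jacobian transpose. This is not the authors’ MMC body model; link lengths, gain, and iteration count are assumptions for illustration only.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths of a toy two-joint planar leg

def forward(q):
    """Forward model: joint angles -> foot position (cf. prediction)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def pull_step(q, target, gain=0.1):
    """One passive-motion-style step: the pull vector (target - foot)
    induces joint changes via the Jacobian transpose; these changes
    could be read out as motor commands."""
    J = np.array([
        [-L1*np.sin(q[0]) - L2*np.sin(q[0]+q[1]), -L2*np.sin(q[0]+q[1])],
        [ L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),  L2*np.cos(q[0]+q[1])],
    ])
    pull = target - forward(q)
    return q + gain * (J.T @ pull)

q = np.array([0.3, 0.6])
for _ in range(200):
    q = pull_step(q, np.array([1.2, 0.8]))
print(forward(q))  # the foot has been pulled close to the target
```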

3.2 Representation of the environment

Of course, internal representation should also contain information about the surroundings. We started with a focus on the body and want to extend this network in a way that reflects how the environment affords (Gibson 1979) itself to the body, i.e., a focus on interaction with the environment.

As an example of how the reaCog architecture could be extended to include representation of meaningful parts of the environment, we want to briefly sketch an expansion of Walknet that allows for insect-like navigation (“Navinet”, Cruse & Wehner 2011; Hoinville et al. 2012). Navinet provides an output that is used by the body model explained above to guide the walking direction. Due to this network, the agent can make an informed decision about which learned food source she will visit (e.g., source A, B, or C), or whether she is travelling back home (Outbound and Inbound, respectively). The output of Navinet is, in this way, on the one hand tightly coupled to the control of walking and the representation of the body. On the other hand, Navinet is constructed using motivation units in the same way as the walking controller, and those motivation units take part in the action-selection process. Importantly, Navinet (like desert ants) shows the capability of selective attention, since it is context dependent and only responds to learned visual landmarks in the appropriate context, i.e., when they are related to the currently active target food source. The structure of the motivation-unit network is sketched in figure 4. Examples of possible stable internal states are (Forage – Outbound – source A – landmarks associated with source A) or (Inbound – landmarks associated with Inbound), for instance. As an interesting emergent property, Navinet does not presuppose an explicit “cognitive map”. Such a map-like representation has been assumed necessary by several other authors (Cruse & Wehner 2011). How learning of food source positions and food quality is possible has been shown by Hoinville et al. (2012).

Figure 4: Motivation unit network of Navinet for the control of ant-like navigation. Unit Outbound controls travel from the home to a food source (A, B, C) or a default search for a new source (D). Unit Inbound controls travel back to the home. Memory elements (black boxes) contain position and quality of the food source (A, B, C) or information on visual landmarks (landmark memory).
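
The path-integration capability mentioned above can be sketched in a few lines (an illustrative reconstruction, not Navinet’s actual network): summing the outbound movement vectors yields a home vector without any map-like representation.

```python
import numpy as np

def path_integrate(steps):
    """Minimal sketch of path integration: accumulate the movement
    vectors of an outbound journey; the negated sum is the home
    vector pointing from the current position back to the nest."""
    position = np.zeros(2)
    for heading, distance in steps:  # heading in radians
        position += distance * np.array([np.cos(heading), np.sin(heading)])
    return -position

outbound = [(0.0, 2.0), (np.pi / 2, 1.0), (np.pi / 4, 1.5)]
print(path_integrate(outbound))  # direction and distance back to the nest
```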

4 Planning ahead, cognition

Even though Walknet is set up as a fixed structure consisting of hard-wired connections of the RNN, it can flexibly adapt to disturbances in the environment as needed during, for instance, crossing large gaps (Bläsing 2006). Nonetheless, the system might of course run into novel situations that require an even higher degree of adaptation, and as such will require novel behaviors. As an example, think of a situation in which all the legs except the right hind leg are in the anterior part of the working range. When the right hind leg is forced to lift off the ground as it approaches a position very far to the rear, the whole system will become unstable, as the center of gravity is positioned very far towards the rear of the animal. In this case, the center of gravity would not be supported by the other legs, nor by the right hind leg that tries to start a swing movement. As a consequence, the agent would fall over, backwards. This problem could be detected by “problem detectors”, e.g., specific sensory input that reacts to the specific load distribution (a different solution is explained in section 8). In order to overcome this problem, the system would have to break out of its usual pattern of behavioral selection and try to select a different behavioral module that is usually not applicable in the given context. For instance, making a step backward with the right middle leg would be a possible solution, as this would provide support for the body and would afterwards allow going back to the normal walking behavior and the subsequent swing movement of the right hind leg. Usually, backward steps can only be selected in the context of backward walking.

Figure 6 shows an expansion that allows the system to search for solutions that are not connected to the current context. This expansion is termed the “attention controller”. We introduce a third layer of units (figure 6, in green), which is essentially a recurrent winner-take-all network (WTA-net). For each motivation unit there is a corresponding partner unit in this WTA-network. Currently-active motivation units suppress their winner-take-all (WTA) partner units (T-shaped connections in figure 6). Therefore, a random activation of this WTA-net will lead to the activation of one single unit not belonging to the currently-activated context. The random activation is induced by another parallel layer, the “Spreading Activation Layer” (not depicted in figure 6; further details are described in Schilling & Cruse submitted). The winning unit of the WTA layer then activates its corresponding motivation unit. This triggers the connected behavior, which can be tested as a solution to the problem at hand. The network follows a trial-and-error strategy as observed in, e.g., insects.
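
A toy version of this selection mechanism might look as follows; the unit count, update rule, and all parameters are illustrative assumptions. Units belonging to the currently active context are suppressed, the rest receive random activation, and recurrent competition leaves a single winner.

```python
import numpy as np

def propose_behavior(suppressed, n_units=8, steps=80, seed=0):
    """Sketch of the attention controller's WTA layer: random input
    (standing in for the Spreading Activation Layer), suppression of
    the units whose motivation units are currently active, and
    recurrent competition that leaves one winning unit, i.e., one
    behavior from outside the current context."""
    rng = np.random.default_rng(seed)
    a = rng.random(n_units)
    a[suppressed] = 0.0                  # active context suppresses partners
    for _ in range(steps):
        inhibition = a.sum() - a         # each unit is inhibited by all others
        a = np.clip(a + 0.2 * (a - 0.5 * inhibition) - 0.01, 0.0, 1.0)
    return int(np.argmax(a))

print(propose_behavior(suppressed=[0, 1, 2]))  # index of the proposed behavior
```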

As has been proposed (Schilling & Cruse 2008), a further expansion of the system, which is most probably not given in insects, is not the testing of a behavior in reality, but instead the application of a newly-selected behavior to the body model and the use of the model instead of the real body. The motor output is routed to the body model instead of to the real body, and the real body is decoupled from the control system while testing new behaviors. Due to the predictive nature of the body model, it can be used to predict possible consequences and to afterwards decide whether a behavior solves the current problem and should be tried out on the real body. This procedure is called internal simulation and requires the introduction of switches that reroute motor output signals from the real body to the body model (figure 6, switch SW). Only after a successful internal simulation will the behavior be applied to the real body. McFarland & Bösser (1993) defined a cognitive system as a system that has the ability of planning ahead, i.e., that is able to perform internal simulations in order to predict possible outcomes of behaviors. Therefore, this latter expansion makes the control system cognitive (for details see Cruse & Schilling 2010b).
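
Putting the pieces together, the resulting cognitive cycle can be summarized as follows; `propose`, `simulate`, and `execute` are hypothetical callables standing in for the attention controller, the internal simulation on the body model (switch SW set to the model), and execution on the real body.

```python
def plan_by_internal_simulation(problem_detected, propose, simulate, execute,
                                max_trials=20):
    """Sketch of the reaCog planning cycle under the stated assumptions.
    While no problem is detected, reactive control continues; otherwise
    candidate behaviors are drawn from outside the current context and
    tested on the internal body model before any is applied to the body."""
    if not problem_detected:
        return None                      # reactive control continues as usual
    for _ in range(max_trials):
        behavior = propose()             # WTA picks a unit outside the context
        if simulate(behavior):           # internal trial-and-error on the model
            execute(behavior)            # only successful behaviors reach the body
            return behavior
    return None                          # no solution found within the trial budget
```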

Cruse, H. & Schilling, M. (2015). Mental States as Emergent Properties - From Walking to Consciousness.In T. Metzinger & J. M. Windt (Eds). Open MIND: 9(C). Frankfurt am Main: MIND Group. doi: 10.15502/9783958570436 10 | 38

www.open-mind.net

5 Word-net and perceptual memory

In our network, we have up to this point only dealt with procedural memories, i.e., memories representing the connections between specific sensorimotor elements that are able to control specific behaviors (e.g., Swing-net, landmark). As a final extension, we will now show how the network might also be equipped with some aspect of semantic memory, such that meaning can be attributed to verbal expressions. To this end, the network can be expanded through the introduction of another layer (not shown in figure 6). In this fourth layer, verbal expressions are stored as procedures or “Word-nets”. These procedures can either be used to pronounce a stored word or to comprehend it, i.e., they can be used for motor control and for auditory perception. As is the case for other procedures, each Word-net is equipped with a motivation unit. As the motivation units of Word-nets have a specific function, for easier distinction we will call them word units (WU). Following Steels (2007; Steels & Belpaeme 2005), each Word-net is related to a corresponding unit of the motivation network that carries meaning (e.g., the motivation unit for walking is connected to a Word-net “walk”). The meaning of the Word-nets is in this way grounded in the behaviors of the corresponding motivation units. As an example, figure 7 shows a possible detail of such a network, including some elements of Walknet and Navinet. The motivation units of a procedure (e.g., Swing-net) and its corresponding Word-net (e.g., “Swing”) are coupled via bidirectional connections (dashed double-headed arrows). The connections cannot be active at the same time, but depend on an overall state of the network, termed “Report” and “Perceive”. In the Perceive state, only connections from the word unit to the motivation unit of its non-word procedure can be activated (from top to bottom in figure 7), whereas in the Report state only the opposite connections can be activated. As can be seen in figure 7, Word-nets can be connected not only with motivation units of the sensorimotor nets, but also with motivation units that do not directly control a sensorimotor element (e.g., Walk, Outbound).

What might be the function of this extension by Word-nets? In the Perceive state (or react state), a perceived word, uttered by another agent, will activate, via its word unit, its partner motivation unit, and thereby possibly influence behavior (depending on the actual internal state of the system and on the strength of the word input). When in the Report state, the actually active motivation units will in turn activate their corresponding word units, which may lead to the uttering of a word. As, of course, only one word can be activated at a given time, some kind of decision network (e.g., a WTA-net) is required, though, for reasons of simplicity, this is not shown in figure 7. In any case, the introduction of Word-nets allows for a very basic form of communication between the agent and any other partner, communication being limited to “one-word sentences”.

Figure 5: The body model and its relation to the body of the robot Hector (a). (b) shows the vectors forming the central body (left) and the vectors forming one leg model (right). The central model and the leg models are connected via the shared “leg vector” (white arrows) that points from the hip to the tip of the leg (shown here for the left front leg only). Walking direction and velocity are controlled by the input vectors pull_fw (forward) or pull_bw (backward) provided by sensory input.
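
The mode-dependent gating can be sketched as follows (illustrative data structures, not the actual reaCog implementation):

```python
def word_net_coupling(mode, word_unit, motivation_unit):
    """Sketch of the gating described above: in the Perceive state
    activation flows from the word unit to its partner motivation
    unit; in the Report state it flows the opposite way. The two
    directions are never active at the same time."""
    if mode == "Perceive":
        motivation_unit["input"] += word_unit["activation"]
    elif mode == "Report":
        word_unit["input"] += motivation_unit["activation"]
    else:
        raise ValueError("mode must be 'Perceive' or 'Report'")

walk_mu = {"activation": 1.0, "input": 0.0}   # active motivation unit "walk"
walk_wu = {"activation": 0.0, "input": 0.0}   # word unit of the Word-net "walk"
word_net_coupling("Report", walk_wu, walk_mu)
# walk_wu["input"] is now 1.0: the active behavior can be reported as "walk"
```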

As indicated on the left side of figure 7 (units “front”, “left”), further motivation units might be introduced into the network that do not have a direct function within, in this case, the Walknet controller. Of course, these units may be connected to word units. (Note that we do not deal with the question of how these units may be connected within the network through training.)

This architecture combines sensorimotor procedures with Word-nets (which by themselves represent specific sensorimotor procedures). Together, they form a simple case of semantic memory, because procedural memory representing an action (e.g., Swing-net) is connected with a memory element representing verbal symbols.

To illustrate the versatility of this architecture, we will briefly address how it can also be applied in order to embrace perceptual memory. Following ideas of O’Connor et al. (2009), Cruse & Schilling (2010a) have shown how an RNN, using the same elements as applied here for the motivation unit network, could be used to construct a perceptual memory. This network does not only allow the representation of directly perceived perceptual elements (e.g., the colour or shape of an object), but also of superordinate concepts (e.g., Cow, Animal, four-legged). Note that “four-legged” might also be a feature of non-animals, e.g., a table. Therefore, the ability of our network to deal with heterarchical structures is advantageous for perceptual memory, too. Elements of such a distributed memory can also be connected to specific Word-nets (e.g., “red”, “Cow”, “animal”), as has been explained above for the sensorimotor motivation units. Correspondingly, activation of one memory element of this perceptual memory may elicit the uttering of the corresponding word, and, in turn, when in Perceive mode, the hearing of a word may activate various elements of the procedural memory that are associated with this word.

6 ReaCog: Emergent properties characterized by applying other levels of description

To summarise, the neural controller Walknet (for details see Dürr et al. 2004; Schilling et al. 2013a) is an embodied control system (first-order embodiment, cf. Metzinger 2006, 2014). The reactive system can deal with varying unpredictable environments. It relies only on information that is available to the given mechanosensors, which is possible because both body and environment are integral to the overall computational system. In this way, the system is embodied. Of course, the system has a physical body, but even more, being embodied means that properties of the body (like its geometry) are exploited in the computations of the controller. Using its own body as part of a loop through the world allows for dramatically simplifying computations (Schmitz et al. 2008). These properties are of course also present in the expanded version, reaCog. Even though in reaCog an internal body model is introduced in order to control the high number of DoFs, reaCog still relies heavily on the cooperation of individual parts, i.e., the combination of couplings between body, environment, the internal body model, and the controller itself. In addition, this internal model of its own body is used for planning ahead. Such a network, following Metzinger (2006, 2014), represents a system that is characterized by second-order embodiment.

As shown in figure 2, the procedures forming the decentralized controller are basically arranged in parallel, i.e., each procedure obtains its own sensory input and provides a specific motor output. But procedures can also receive input from other procedures and can provide output directly to other procedures. This relatively flat, heterarchical structure is also applied by the Word-nets and in perceptual memory (Cruse & Schilling 2010a).

ReaCog automatically selects actions on the lower reactive level. Several of these procedures can be performed in parallel. On the cognitive level, decisions about which action to choose are not based solely on sensory input, but are chosen depending on the imagined action, since there is a stochastic effect due to noise in the attention controller. The decision is afterwards tested by internal simulation before it is applied to the real system, and only after successful execution is the proposed behavior stored in long-term memory. Therefore, this decision process can be envisioned as a Darwinian type of selection that begins from stochastic “mutations” that are then tested for “fitness” and selected based on this fitness. Thus, reaCog is a minimally cognitive system in the sense of the definition given by McFarland & Bösser (1993).

After we have defined the control network quantitatively, we can use reaCog to analyze emergent properties, which have not been implemented explicitly. As an example we have already considered a term like “tripod gait”, which is sensible on a behavioral level in order to describe the emergent overall behavior of the walker. But on the control level there is no explicit tripod gait controller in reaCog (Schilling et al. 2008; Schilling et al. 2013a). The local influences coupling neighboring legs are responsible for the overall coordinated walking behavior (different from many other hexapod controllers), and different gaits can emerge just by choosing different velocities. Therefore, apparent “gaits”, or the observation that “cognitive maps” are required, can be seen as emergent properties of such a network.

Figure 6: The controller of the reactive system as depicted in figure 2, expanded by a WTA-net (green units; not all connections are shown). Each WTA unit has a bi-directional connection to a unit of the motivation unit network. This architecture provides the basis of reaCog, as explained in the main text (for further explanations see figure 2).

In the following, we will turn to concepts that are usually applied in fields different from computer science or behavioral biology, like psychology and philosophy of mind. Choosing another level of description can help us gain a better understanding of the system on a more abstract level. In addition, this approach can lead to more operational definitions for concepts used in other disciplines. This is based on the assumption that many of the above-mentioned phenomena emerge (Vision 2011) and that they can be used as concepts only on a higher, more abstract level.

For some authors, consciousness is thought to be restricted to human beings. In contrast, other authors share the opinion that there are degrees of consciousness and that consciousness does occur, to a smaller degree, in lower-level animals (Dennett 1991). Showing that quite small and simplistic networks can allow for interesting cognitive properties (Chittka & Niven 2009; Menzel et al. 2007) supports such a view, as it provides a plausible evolutionary explanation for consciousness (or, better, degrees of consciousness). Agreeing with this basic assumption, we want to analyze to what extent our simple control network fulfils certain aspects of consciousness or emotions, even though we did not intend to realize this in our system in the beginning. The graded emergence of such high-level concepts would offer an evolutionary account and might allow us to address questions about the function of, e.g., consciousness, and explain how it relates to the control of behavior.

7 Phenomenality

Before concentrating on specific phenomena, such as emotions or consciousness, we would like to address a more fundamental aspect that appears to be relevant for all higher-level phenomena, namely the occurrence of subjective experience.

An example of subjective experience is pain. Even though it might be possible for us to closely attend to all neuronal activities of a human test subject while stimulating that person’s skin with a needle, the observed data would be different from the experienced pain, which is only felt by that person. Nobody other than that person can feel the pain. This form of experiencing an internal perspective is therefore only accessible to us through self-observation. Intuitively, other systems, like non-living things or simple machines, lack such an internal perspective. But in many cases, as for animals, it is hard to determine whether they have subjective experience or are merely reflexive machines that do not possess an internal perspective.

This problem is also visible when we consider a human brain in the contrasting states of being awake or asleep, for example. While in (dreamless) sleep or under anesthesia the same neuronal systems as in the wakeful state may be active, subjective experience is assumed not to be present. And even in a normal wakeful state, we are not aware of all the contents of the different neuronal activities that take place in our brain. Therefore, only a specific type of neuronal activity seems to be accompanied by subjective experience.

There is only indirect evidence on the conditions required for subjective experience. Libet et al. (1964) performed an early experiment in which the cortex of a human subject was directly stimulated, electrically. Only for stimuli longer than 500 ms did the subjects report a subjective experience. Bloch’s law (Bloch 1885) formulates this connection more generally: the subjectively-experienced strength of a stimulus depends on the mathematical product of stimulus duration and stimulus intensity. In other words, a stimulus is only experienced subjectively when the temporally-integrated stimulus intensity surpasses a given threshold.
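
Stated as a formula (our reconstruction of the verbal rule above, with θ an assumed threshold constant):

```latex
% A stimulus of intensity I(t) is subjectively experienced iff its
% temporally-integrated intensity surpasses the threshold:
\[
  \int_{0}^{T} I(t)\,\mathrm{d}t \;\ge\; \theta .
\]
% For a constant intensity I this reduces to Bloch's reciprocity
% relation: the product I \cdot T must reach \theta.
```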

More recent experiments have studied the concurrent activation of different procedures that compete for becoming subjectively experienced. A basic experiment was performed by Fehrer & Raab (1962) and has been followed by detailed later studies (Neumann & Klotz 1994). First, participants learned to press a button whenever a square was shown on a screen, but not when two squares were shown in positions on the screen flanking the first square. After the learning period was over, in the experiment the single square was presented for only a short period (about 30 ms), which was then followed by a longer presentation of the two squares. The participants did not report having seen the single square, but reported only having seen the two squares. Nonetheless, they pressed the button. This result shows, first, that the first procedure A ("stimulus single square – motor response") can be executed without being accompanied by subjective experience of stimulus stimA, the single square. Second, procedure B ("stimulus double squares – no motor response") appears to influence how the first procedure is experienced, i.e., this procedure inhibits the subjective experience of stimulus stimA. Therefore, stimulus stimA is not subjectively experienced (the "masking" effect), but nonetheless triggers the motor reaction.

This situation can be interpreted in the following way (figure 8, left). On the input side, each procedure shows temporal dynamics similar to those of a low-pass filter (LPF) (see footnote on page 2) followed by an integrator (IntA, IntB).6 Stimulation of one procedure inhibits the representation of the other procedure for some limited time (figure 8, Δt). In addition, both integrators are coupled via mutual inhibition (in figure 8 depicted by separate units). In the masking experiment, the first stimulus (stimA) does not inhibit the second procedure (B), because the latter is not yet stimulated as long as stimulus stimA is active. In contrast, when the second stimulus, stimB, is given, the representation of procedure A may be suppressed. The representations of the input given by units IntA and IntB activate the corresponding motivation units (MU) of the procedures, MUA and MUB, respectively. This could be explained if we assume two different thresholds. First, the motor command of a procedure can be elicited when a small threshold (thr1, figure 8) is reached. But a second, larger threshold (thr2, figure 8) must be reached in order to have subjective experience. Then, in our paradigm, procedure A, which was activated first, may reach the level of thr1, which is sufficient to activate the motor output, but not thr2. Only the second procedure, B, has enough time to reach the state of subjective experience (thr2, figure 8, right), which allows the double square (stimB) to become subjectively experienced (however this comes about). The model therefore suffices to explain the basic properties characterizing the backward-masking experiment. As has been shown by Cruse & Schilling (2014), the structure depicted in figure 8 can also deal with a forward-masking paradigm, the so-called attentional blink effect (Schneider 2013). To describe a further experiment, showing the so-called psychological refractory period (PRP) paradigm (e.g., Zylberberg et al. 2011), the motivation units (MUA, MUB) of procedure A and procedure B are connected in such a way as to inhibit each other. In other words, the motivation units of these procedures form a WTA network. In addition, each procedure inhibits its own motivation unit after its action has been completed.

6 An integrator performs a mathematical integration, i.e., it sums the input over time.
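
To make this interpretation concrete, the following is a minimal simulation sketch of the network of figure 8. It assumes discrete-time Euler updates and illustrative parameter values of our own choosing; the published model (Cruse & Schilling 2014) differs in detail, and the transient cross-inhibition of duration Δt is omitted here. The sketch only illustrates the two-threshold logic: the briefly presented stimulus A reaches the motor threshold thr1 but not the experience threshold thr2, while the longer stimulus B reaches both.

```python
# Minimal sketch of the two-procedure network of figure 8.
# All parameter values are illustrative assumptions.
import numpy as np

dt, T = 1.0, 600          # time step (ms) and total duration (ms)
tau = 50.0                # low-pass filter time constant (ms)
k = 0.025                 # integrator input gain
w_inh = 0.015             # mutual inhibition between IntA and IntB
thr1, thr2 = 0.4, 1.2     # motor threshold and "experience" threshold

def stim(t):
    """stimA: brief first stimulus (0-30 ms); stimB: mask (30-200 ms)."""
    return np.array([1.0 if 0.0 <= t < 30.0 else 0.0,
                     1.0 if 30.0 <= t < 200.0 else 0.0])

lpf = np.zeros(2)                        # LPFA, LPFB
integ = np.zeros(2)                      # IntA, IntB
triggered = np.array([False, False])     # crossed thr1: motor command
experienced = np.array([False, False])   # crossed thr2: phenomenal state

for step in range(int(T / dt)):
    lpf += dt / tau * (stim(step * dt) - lpf)      # low-pass dynamics
    integ += dt * (k * lpf - w_inh * integ[::-1])  # mutual inhibition
    integ = np.clip(integ, 0.0, None)              # activations stay >= 0
    triggered |= integ >= thr1
    experienced |= integ >= thr2

print("reached thr1 (A, B):", triggered)     # expected: [ True  True]
print("reached thr2 (A, B):", experienced)   # expected: [False  True]
```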

From these observations we conclude that there are specific neuronal states that require time to develop. While eliciting an output signal (like a motor command) is the basic function of the system, this can happen without accompanying subjective experience. Only some procedures may give rise to such phenomenal experience and might, in addition, trigger subsequent functions in the neural system. For example, such a procedure may be able to access more neuronal sources and perhaps allow faster storing of new information (e.g., for one-shot learning). In addition to such functional properties, the network can be endowed with the (mental) property of showing subjective experience, i.e., of entering the phenomenal state.

The experimental findings mentioned above support a non-dualist, or monist, view, which means that there are no separate domains (or "substances"), such as the mental and the physical domain, in the sense that there are causal influences from one domain to the other as postulated by substance dualism. Rather, the impression of there being two "domains", often characterized as being separated by an explanatory gap (Levine 1983), results from using different levels of description.7

7 There are various views adopting a monist approach that differ in detail (epiphenomenalism, emergentism, property dualism, and their many derivatives; see Vision 2011). We will not enter into this discussion here.

An explanation of the necessary and sufficient conditions under which neural networks allow for subjective experience would be extremely interesting. Even though there currently exist only early insights or mere speculations, there has been a lot of progress during the past few years (for reviews see Schier 2009; Dehaene & Changeux 2011). The continuation of these research projects will hopefully yield a more detailed understanding. Using combinations of neurophysiological and behavioral studies may lead to a better understanding of the physiological properties and functions of this state. It is, however, generally assumed that even if we knew the physical details at some future time, we would not understand why this state, which is characterized by physical properties, is accompanied by phenomenal experience. Here we propose another view. We assume that this problem will be "solved" in the sense that the question concerning the explanatory gap will simply disappear, as has happened in the case of explaining the occurrence of life. Concerning the latter, there was an intensive debate between Vitalists and Mechanists at the beginning of the last century on how non-living matter could be transformed into living matter. The Vitalists argued that a special, unknown force, termed vis vitalis, was required. After many decades of intensive research, we are now in a position where an internal model is available that represents the observation that a specific collection and arrangement of molecules is endowed with the property of being alive. This and similar cases may be generalized as the following rule: if we have enough information to develop an internal model of the phenomena under examination, and if this model is sufficiently detailed to allow the prediction of the properties of the system, we have the impression of having understood the system. In the case of life, we indeed no longer need a vis vitalis, but consider being alive an emergent property. Correspondingly, we propose that if we knew the functional details and conditions that lead to matter having subjective experience well enough, so that the appearance of subjective experience could be predicted, we would have the impression of having understood the problem. Therefore, we assume that the question of the explanatory gap will disappear at some point, as was the case in the example of life.

Adopting a monist view allows us to concentrate on the functional aspects when trying to compare systems endowed with phenomenality, i.e., human beings, with animals or artificial systems. According to this view, phenomenality is considered a property that is directly connected with specific functions of the network. This means that mental phenomena that are characterized by phenomenal properties, as are, for example, attention, intention, volition, emotion, and consciousness, can be examined by concentrating on the aspect of information processing (Neisser 1967).

To avoid possible misunderstandings, we want to stress that we do not mean that the phenomenal aspect has no function, in the sense that the system would work in the same way if there were no such phenomenal properties. Since, according to our view, phenomenality necessarily arises with such a system, a version of such a system showing exactly the same functions but lacking the phenomenal aspect would not be possible. A change in the phenomenal properties of a system has to be accompanied by a change in its functional properties. Functional and phenomenal aspects are two sides of one coin. However, remaining on the functional side makes the discussion much easier.

To summarize, the content of any memory element may be subjectively experienced (or available to conscious awareness) if (1) the (unknown) neuronal structures that allow for the neural dynamics required for the phenomenal aspect to occur are given, and (2) the strength and duration of the activation of the memory element are large enough, provided the element is not inhibited by competing elements.

The question of how any system can possibly have subjective experience was famously called the "hard problem" by Chalmers (1997). Adopting a monist view, we can avoid this question and leave it open, as we are interested in understanding the functional aspects of consciousness (on the ethical implications of an artificial system having subjective experience implemented in appropriate neural dynamics see Metzinger 2009, 2013). Regarding what kind of dynamics could be thought of, it has been speculated that subjective experience might occur in a recurrent neural network that is equipped with attractor properties. Following this hypothesis, subjective experience would occur if such a network approached its attractor state (Cruse 2003). This assumption would mean that any system showing an attractor might be endowed with the phenomenon of subjective experience. It may, however, not have all the other properties characterizing consciousness. On the other hand, there might be systems in which the functional aspects currently attributed to consciousness are fulfilled, but where there is no subjective experience present. This case would imply that our list representing the functions of consciousness, as given in section 10 below, is not yet complete.

Figure 7: The reactive network expanded by a layer containing procedures that represent words (Word-net, upper row). The motivation unit of a Word-net (WU) is bi-directionally connected (dashed double-headed arrows) with the corresponding motivation unit of the reactive system containing procedural elements of Walknet (left, see figure 2) and of Navinet (right, see figure 4). The word stored in a Word-net is indicated as (" ... "). Not all of these motivation units have to be connected with a Word-net.

In the following two sections we shall briefly treat two phenomena, emotions and consciousness, and discuss how they might be related to the minimally cognitive system as represented by reaCog.

8 Emotions

Most authors generally agree that emotions are accompanied by subjective experience and that they have the function of helping the subject respond adaptively to environmental pressures. So there is a phenomenal aspect of emotions as well as a functional aspect. As we have already treated the phenomenal aspect above, here we will put it aside, i.e., how it feels to be happy, sad, etc., and concentrate on the functional aspect of emotions.

Even though several authors assume or even demand that emotions are already present in simple reactive systems, and that they are necessary for a cognitive system (Valdez & Mehrabian 1994), in our above description of the properties of the network reaCog no emotional aspects have been taken into account. We did not require the term "emotions" to explain our approach, nor have we built in any kind of explicit emotional system. However, we will argue that there are emergent properties that are comparable to what is usually ascribed to emotional systems. In the following, we want to focus on which parts of our system take this role and how the functions of these parts can be described and related to attributes of emotional systems.

The attempt to relate the properties of our network to the concept of emotions appears not very promising at first sight, because a series of interrelated conceptual terms, such as emotions, attitudes, motivations, sentiments, moods, drives, and feelings, can be found in the literature, and these are defined in different but partly overlapping ways by different authors (Pérez et al. 2012). The reason for this disagreement might be that there are indeed no clearly separable mechanisms underlying these phenomena; rather, we are dealing with a holistic system, which makes separation into clear-cut concepts difficult, if not impossible. As mentioned, the problem of being confronted with heterarchical structures appeared when looking at the reactive level (and reappeared later when dealing with perceptual memory), which led us to the neutral term "motivation unit" for all "levels" of the heterarchy formed by the motivation unit network. To simplify matters, we will only deal with the term emotions in the following.

What might be possible functions of emotions? As follows from the examples of overlapping conceptual approaches found in the literature and mentioned below, various functions, characterized by different levels of complexity, are attributed to emotions. These range from enabling the agent to select sensory input (e.g., tunnel vision, Pérez et al. 2012) and activate different procedures, or, at a higher level, to select between different behavioral demands (e.g., hunger – thirst, flight – fight, Parisi & Petrosino 2010), up to more abstract states such as suffering from sadness or being in a state of happiness and controlling the corresponding behaviors (e.g., Ekman 1999). The lower-level decisions are well covered by our motivation unit network, which forms a heterarchical system showing attractor states (e.g., swing – stance, Inbound – Outbound). These states allow for selection of sensory input and/or motor procedures that are stimulated by sensory input to specific motivation units. In the following, we therefore focus on higher-level states, such as the emotions listed by Ekman (1999).

In general, and as discussed below, one can distinguish between prototypical approaches and reductionist approaches, the latter simplifying emotions down to just a few basic dimensions. In current research, both views appear to be justified, as they both try to describe the phenomena observed, though at different levels of description.

Figure 8: (a) A hypothetical network that is capable of dealing with some dual-task experiments, for example the backward-masking experiment. Stimulation of one of the procedures, A or B, activates a low-pass filter (LPFA, LPFB) followed by an integrator (IntA, IntB) and inhibits the corresponding units of the other procedure for a limited time (Δt). The integrators are coupled via mutual inhibition. After the activation of one of the integrator units has reached threshold thr1 (lower dashed line), the corresponding motor motivation unit (MuA or MuB), coupled via mutual inhibition, is activated, which drives the behavior. If threshold thr2 (upper dashed line) is reached, the stimulus can be phenomenally experienced. A feedback from the procedure can provide an "end" signal to inhibit its own motivation unit. (b) Temporal development of the activation of some units (procedure A, blue; procedure B, red). Abscissa is relative time. If stimB follows briefly after stimA, the unit IntA may reach its motor threshold thr1, but not the threshold thr2 for eliciting the phenomenal experience. In contrast, stimB elicits both the motor output and the phenomenal experience, which corresponds to the backward-masking effect (for details see Cruse & Schilling 2014).

Following the first approach, research tries to trace emotions back to a set of basic emotions, the combination of which can explain further derived emotions. This approach has been advocated by Plutchik (1980). A problem with such an approach is how to draw borders between emotions and what counts as a basic emotion. Ekman (1999) proposed a list of characteristics of similarity between emotions and came up with a set of fifteen basic emotions. Later on, based on their relation to facial expressions, he reduced this number to six. This set, which is now widely used as the basic set of emotions in many different contexts, consists of happiness, anger, disgust, sadness, fear, and surprise. As an example, let us consider happiness. Happiness is elicited when we are in a state of having had or expecting positive situations. The behavioral effect of happiness might be characterized as being open to new ideas, perhaps not being too critical, and being open to performing new, unconventional behaviors. How might such a phenomenon be represented in reaCog? First of all, a neuronal state of the motivation network would correspond to a specific emotion. Such a network state is usually triggered by some sensory stimulus eliciting an emotion. This stimulus activates specific, basically innate, networks which, when active, influence the system and put it into the respective emotional state. Such a network, which could, in the most reduced case, consist of just one neuronal unit, has not been introduced in reaCog, but if assumed as given, it may modulate meta-control parameters such as, for example, noise levels, thresholds, or learning rates (Doya 2000, 2002). To stick to our example of "happiness", activation of such a network, which represents stimulus situations considered to elicit this state, may, within the Spreading Activation Layer, lead to a faster diffusion process, perhaps supported by stronger noise amplitude. Such a broadening of the attention range as a consequence of positive affect has been reported by Dreisbach & Goschke (2004). In addition, or as an alternative, the threshold of the problem detectors that we mentioned in section 4 might be increased. As a consequence, the system would take more risks. All these changes would lead to an increase in "creativity", i.e., the ability to find new ideas for possible solutions. Corresponding structures might be found for the other basic emotions listed by Ekman.
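
As an illustration of this idea, the following sketch shows an emotion-like unit acting purely as a modulator of meta-control parameters, in the spirit of Doya (2000, 2002). The parameter names and numbers are our own illustrative assumptions; no such unit exists in the current reaCog implementation.

```python
# Hypothetical sketch: a "happiness" unit modulating meta-control
# parameters rather than selecting behaviors directly. All names and
# values are invented for illustration.
from dataclasses import dataclass

@dataclass
class MetaParameters:
    diffusion_rate: float = 0.1     # speed of spreading activation
    noise_amplitude: float = 0.05   # exploration noise in the WTA layer
    problem_threshold: float = 0.8  # sensitivity of the problem detectors

def apply_happiness(params: MetaParameters, happiness: float) -> MetaParameters:
    """Scale meta-parameters with the activation (0..1) of a 'happiness'
    unit: faster diffusion, more noise, and a higher risk tolerance."""
    return MetaParameters(
        diffusion_rate=params.diffusion_rate * (1.0 + happiness),
        noise_amplitude=params.noise_amplitude * (1.0 + happiness),
        problem_threshold=params.problem_threshold * (1.0 + 0.5 * happiness),
    )

print(apply_happiness(MetaParameters(), happiness=0.8))
```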

In the second group of approaches to characterizing emotions, emotions are described through a set of dimensions that represent the emotional state. We will briefly sketch this seemingly alternative, reductionist approach and will again draw parallels with reaCog. The connection to reaCog is made on a different level and is therefore not logically exclusive with respect to the former. Wundt (1863) was quite opposed to the idea of breaking down emotions into a set of basic emotions that serve as prototypes, mainly because he assumed that the set of emotions is better described by a continuum than by separable categories. This follows his idea of describing emotions through principal components, leading to dimensional systems like the pleasure-arousal-dominance (PAD) framework (Mehrabian 1996). In the PAD framework, three dimensions span the space of the emotions. The first describes the state pleasure–displeasure and corresponds to the affective state (excited – relaxed). Arousal, as the second dimension, represents the level of mental alertness and physical activity (tense – sleepy). The third axis describes the level of dominance–submissiveness, i.e., the feeling of being in control. The three factors of the PAD framework have successfully been employed as semantic differential factors to describe emotional states in different contexts, e.g., for describing postures, facial expressions, gestures, and vocal expression. The three dimensions appear to be sufficient, as they capture large parts of the variance (Mehrabian 1996). Mehrabian has related the three traits, pleasure, arousal, and dominance, to specific cognitive characteristics. First, pleasure-displeasure, according to Mehrabian, deals with the fulfillment of expectations. Fulfillment of an expectation (or not) occurs when, during a problematic situation, planning ahead is activated and after some time and searching a solution is found (or not); a state that can be found in reaCog, too. But fulfillment of expectation might also occur at lower levels, when, for example, a simple procedure such as Swingnet is equipped with a target value and this goal is either reached or not. The error signal might then be used as a measure of the fulfillment of expectation. For example, it might be used as a problem detector in the case mentioned earlier, when a subject tries to lift a leg off the ground, but due to an inconvenient load distribution the body falls down and the leg remains in contact with the substrate. The arousability trait, as introduced by Mehrabian (1996), was meant to incorporate the process of "stimulus screening". In short, "stimulus screening" is a process of attentional focusing. Such a process of focusing attention occurs in our system, too: at one end of the spectrum, the system broadly attends to all environmental influences as perceived through its sensors, which characterizes its being in the "reactive state". At the other extreme, when a specific problematic situation occurs, it is necessary to focus attention and to guide the search for a solution towards specific modalities, parts of the body, etc. But even on the reactive level, attention selection can be observed, as we mentioned earlier. Finally, the dominance trait ("generalized expectations of control", Mehrabian 1996) concerns the extent to which the agent takes over in the actual situation and is not only responding and reacting, which agrees with the main thesis of our approach, namely that it is possible to switch between the reactive mode and the cognitive mode. Similarly, Russell & Norvig (2003) have required an autonomous agent to be able both to react to known situations and to be in control of the situation itself (or, as Russell and Norvig call it, being proactive).
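
One speculative way to read this proposed correspondence is as three scalar read-outs of internal variables of a reaCog-like system. The signal names below are our own illustrative assumptions, not parts of the published architecture.

```python
# Speculative PAD read-out from hypothetical reaCog-like state variables.
def pad_readout(expectation_error: float,   # e.g., a Swingnet target error
                attention_focus: float,     # 0 = broad/reactive, 1 = focused
                cognitive_mode: float):     # 0 = reactive, 1 = planning
    pleasure = 1.0 - expectation_error   # fulfilled expectations
    arousal = attention_focus            # "stimulus screening"
    dominance = cognitive_mode           # taking over vs. merely reacting
    return {"pleasure": pleasure, "arousal": arousal, "dominance": dominance}

# A system stuck on a problem, attending narrowly, in planning mode:
print(pad_readout(expectation_error=0.7, attention_focus=0.9, cognitive_mode=1.0))
```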

Our approach, as we have mentioned, does not aim to build in specific emotional properties, but tries to build a functional autonomous system and then to look at the aspects of emotional properties that might be found in the network or gained after some further functional expansion of the network. We have listed examples from different levels of description in psychology and pointed to related properties in our network. We are not arguing that reaCog has emotions (we are in any case agnostic with respect to the subjective aspect). Rather, we claim that by taking a network like reaCog as a scaffold, different conceptualizations of the functional aspects of emotions can be mapped onto such a quantitatively defined system and thus be considered emergent properties.

It might be added here that recent studies support the idea that emotion-like states do indeed occur in brains that are far less complex than mammalian brains. Yang et al. (2014) could show that the concept of "learned uncontrollability", generally considered an animal model for depression as observed in humans, can be found in Drosophila, too. For vertebrates, it is known that stress induces the state of fear or of anxiety, the latter being considered a second-order emotion. Fossat et al. (2014) could show that a crayfish treated with stressors (i) avoids illuminated parts of the environment and (ii) shows an increased level of serotonin in the brain, as can be observed in vertebrates. As in vertebrates, the state of anxiety could be relieved by application of anxiolytic drugs. Both results have been interpreted as showing that the ability to adopt emotional states must have evolved before the separation of the arthropods and vertebrates.

9 Attention, volition and intention

In the following section we want to turn to attention, intention, and volition. To what degree can those properties be attributed to our system? We start from the definitions of attention provided by Desimone & Duncan (1995), of intention from Pacherie (2006) and Goschke (2013), and of volition from Goschke (2013).

Attention is the ongoing selection process in perception. It can be driven bottom-up, i.e., by sensory influences, or it can be controlled by top-down influences (Desimone & Duncan 1995). Top-down driving of attention depends on the internal or emotional state and might depend on familiarity with the stimulus.

We can indeed find properties corresponding to attention in reaCog. The motivation network is constituted of local clusters of units that always compete on this local level and in this way form coalitions of units and small subclusters. As an example, we introduced the selection of procedures at the leg level: either a swing or a stance motivation unit can be active, and each inhibits the other. These two units compete for control of behavior. Sensory units can influence this competition. For example, an incoming ground-contact signal ends a swing movement and initiates stance activation. After activation of the "Stance" unit, only sensory input relevant to stance can be perceived by the system, but not inputs relevant to swing. This case therefore corresponds to bottom-up attention control.
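
A minimal sketch of such a local competition, assuming a simple rectified-linear update and invented weights (the published Walknet equations differ), shows how a ground-contact signal can flip the swing-stance decision bottom-up:

```python
# Two-unit winner-take-all between "swing" and "stance" motivation units.
# Weights and the update rule are illustrative assumptions.
import numpy as np

w_self, w_inh = 1.0, -1.5          # self-excitation and mutual inhibition

def step(units, sensory):
    """One relaxation step of the two-unit WTA; units = [swing, stance]."""
    W = np.array([[w_self, w_inh],
                  [w_inh, w_self]])
    return np.clip(W @ units + sensory, 0.0, 1.0)

units = np.array([1.0, 0.0])       # the leg is currently swinging
for t in range(20):
    # at t == 10 a ground-contact signal inhibits swing and excites stance
    sensory = np.array([-1.2, 0.9]) if t >= 10 else np.zeros(2)
    units = step(units, sensory)

print("swing, stance:", units)     # expected: stance wins after contact
```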

Such competition can also be found on a global level, on which different behaviors can be chosen. The activation of these higher-level elements influences the lower level. This activation provides a context for the lower level, which guides the selection process on that level and decides which sensory inputs might be relevant. Thereby, more global clusters control the attention of the lower levels in a top-down fashion. Corresponding examples can be found in Navinet, which we mentioned earlier. Only visual signals concerning landmarks that belong to the currently active context are considered, and switching between contexts only becomes possible after the food source has been depleted and found empty.

The cognitive expansion of reaCog represents another case of top-down influence. This system comes up with new behaviors and probes them via internal simulation. As mentioned, there is a specific WTA layer that mirrors the arrangement of the lower motor-control layer (figure 6, green units). This part of the controller can be called an "attention controller", as the explicit function of this layer is to narrow down the search for suitable behavior and to actively select a single one. We call this selection a cognitive decision, as the system is supposed to select a behavior that would not normally be triggered through the given context. In this way the system represents a special type of top-down attention. The focusing mechanism may correspond to what has sometimes been termed a "spot light" (Baars & Franklin 2007, p. 955). Overall, we can therefore observe three different types of attentional influence in reaCog.

Volition is an umbrella term denoting mechanisms allowing for voluntary actions. The latter are "actions that are not fully determined by the immediate stimulus situation but depend on mental representations of intended goals and anticipated effects" (Goschke 2013). For an outside observer, voluntary actions cannot be predicted. As mentioned above, it is crucial for the cognitive expansion that it can select behaviors that are not triggered by the current situation. The system has to invent new behaviors. Even though the consequences of these behaviors are predicted, from the outside the finally chosen behavior is not predictable, as this invention and selection of new behaviors is stochastic to some extent. The application of internal simulation only guarantees that the proposed behavior will lead to a solution, but it does not give away which behavior will be chosen. On the contrary, the search space of possible solutions can easily become very large and has to be restricted. Such restrictions help to span a tractable space of possible solutions. In our example, reaCog looks first for solutions in the morphological neighborhood, i.e., it tries to use the neighboring legs to help find a solution for a locally given problem. There are still many possible behaviors that must be tested in a somewhat random order. The system will end up with one that has been anticipated as a solution in internal simulation, but this solution is not selected through sensory inputs or the current context as such. Therefore, volition may be attributed to a system like reaCog.

Does an agent controlled by reaCog show intentions? Intentions are present when the controlled action is goal-directed. We follow Pacherie (2006), who proposes a differentiation of three types of intentions (based on Bratman's (1987) original differentiation into two such types). Pacherie distinguishes future-directed as well as present-directed intentions and introduces motor intentions as a third type. Present-directed intentions are considered to be under "conscious" (or "rational") control. In contrast, motor intentions are related to lower-level function (Pacherie 2006). What defines these types of intention is that they provide guidance for the function on the respective level. In reaCog, motor intentions are realized by the fact that, on the reactive-control level, behaviors can be selected based on the context. Present-directed intentions can be found on the level of cognitive decisions. Future-directed intentions are not treated by reaCog, because its architecture in the current version only allows for dealing with problems that occur in the context of current walking behavior. However, an expansion of reaCog that included planning ahead using Navinet as a substrate would include future-directed intentions, too.

Goschke (2013) defines intentions as "causal preconditions explaining why a particular stimulus triggers a particular action (rather than a different action)" (Goschke 2013, p. 415). In other words, "intentions can be said to shape the 'attractor landscape' of an agent's behavioral state space" (Kugler et al. 1990, cited from Goschke 2013, p. 415). In reaCog, such an attractor landscape is described by the motivation unit network. As explained in the preceding paragraph on attention, the activation of a context guides, in a top-down fashion, both the selection of a suitable behavior and the question of which sensory inputs the system should attend to. The lower-level activation and incoming sensory inputs influence, on the one hand, the adaptive execution of the behavior as such. On the other hand, the sensory input can inform the higher level in a bottom-up fashion and might indirectly trigger changes on this higher level, too. The activation on the higher level will, however, in general be more stable on a temporal scale and will reflect a specific context as well as relate to specific goals. For example, in the case of Navinet, there are different possible goals, such as food sources or the nest, which are represented in the higher-level network. Selecting one of these as a goal will guide the overall function of the system, as its behavior is directed towards approaching that location, while the sensory system will attend only to the specific (expected) sensory stimuli. Therefore, reaCog can be assumed to show goal-directed behavior and intentions.

10 Consciousness

In this section we would like to discuss to what extent properties of consciousness might be found in our system. Even though we start from a common notion of how consciousness can be viewed as consisting of separable domains, we are well aware that this approach is not the only or ultimate solution for approaching this question. But such a differentiation appears well-suited for our bottom-up approach.

Overall, many authors contribute to the view in which consciousness is broken down into a set of properties. We start from a review by Cleeremans (2005), who gives a good overview of the diverse philosophical views on consciousness and tries to integrate them into one framework. While there is disagreement in general and also on the details (see also Vision 2011), Cleeremans interestingly finds a common denominator between the different opinions that characterizes possible computational correlates of consciousness. He introduced a differentiation of consciousness into three domains: phenomenal consciousness, access consciousness, and metacognition (in other contexts referred to as reflexive consciousness). There is disagreement on the phenomenal aspect, as it is seen by one group of philosophers to be an independent domain. In contrast, there is also a view in which phenomenality cannot be separated from metacognition and access consciousness, but must be seen in relation to those (see the review by Cleeremans 2005).

We have argued in section 7 that the phenomenal aspect as such, i.e., the property of some neuronal structures of being equipped with subjective experience, has per se no function, but is nonetheless not separable from the functional properties. Therefore, we see the phenomenal aspect not as a separate type of consciousness, but as a property of both access consciousness and metacognition. This view has convincingly been supported by Kouider et al. (2010) as well as, in a recent review, by Cohen & Dennett (2011). Therefore, we will compare properties of reaCog with current definitions found in the literature concerning the phenomena of access consciousness and metacognition, abstracting from the phenomenal aspect.

While other philosophers require metacognition or reflexive consciousness in a system in order to attribute consciousness to it (see for example Rosenthal 2002, or Lau & Rosenthal 2011 for a recent review defending this view), we do not want to, and cannot, enter this discussion, as it is not our goal to review the different types of taxonomies. We basically follow one valid and common perspective, as presented by Cleeremans, and apply it to our system in order to analyze functions of our system that can match the different phenomena described. With this approach we do not aim to give a rigorous definition of consciousness (which does not seem feasible at this point, see also Holland & Goodman 2003). Instead, applying our approach, we aim to provide insight into specific functions of our system that are connected to the phenomena discussed.

10.1 Access consciousness

In this section we want to focus on the aspects of access consciousness that can be found in reaCog. Following Cleeremans, access consciousness of a system is defined by the ability to plan ahead, to guide actions, and to reason, as well as to report verbally on the content of internal representations. In contrast, non-conscious representations cannot be used this way. Selecting behaviors, planning ahead, and guiding actions are the central tasks of reaCog (see section 4, Planning ahead).


Being able to use internal representations for verbal report is currently not a part of reaCog. However, the internal representation of reaCog is already suited to allow for accessing internal representations (section 5 and figure 7). The simple solution proposed allows for communication using one-word sentences only, but provides a way, within the framework of reaCog, for the symbol-grounding problem to be addressed. Steels (2007; Steels & Belpaeme 2005) and Narayanan (1997) have already studied in detail how more complex sentences may be grounded in simple reactive systems. Thus, there already exists work on similar systems that shows how the ability to report by using more complex language structures could be implemented in a reactive system. Therefore, at least in principle, this property could be realized in reaCog, too.

The last property describing access consciousness, symbolic reasoning, is not addressed by reaCog. In the symbolic domain there are, however, many interesting approaches in the literature that might be connected to a system like reaCog after the symbolic level has been implemented.


Figure 9: A possible expansion of reaCog. Without the connections 1A and 1B the network enables the agent to represent its own actions, as is already possible for the network shown in figure 2 and figure 6. After introduction of connections 1A and 1B the network is also able to represent the actions of a partner using the now shared procedure "grasp". (a) and (b) show two attractor states where active motivation units are depicted in red, whereas inactive motivation units are shown in black. Half circles indicate sensory input.


Concerning related work, Dehaene & Changeux (2011) review relevant network models that are supposed to simulate consciousness, including their own approach, which is termed global neural workspace theory (GNW) (see also Seth 2007 for a systematic summary). A comparison of reaCog with these approaches can be found in Cruse & Schilling (2013). Here we will only refer to one important notion, "global availability", as used by several authors to represent a crucial property of access consciousness (e.g., Dehaene & Changeux 2011; Dehaene & Naccache 2001; Baars & Franklin 2007; Cleeremans 2005). Global availability describes the notion that many representations of the system can potentially become conscious. These representations can be selected to solve a current problem (as described for reaCog) or could be selected in a task (see GNW).

Are the representations used in reaCog globally accessible? During execution of a behavior, the reactive system simply reacts to sensory inputs. Single local modules of the procedural memory are activated by the context, for example the walking behavior, which can control walking even in a cluttered environment. While the behavior is driven by sensory stimuli, it is not "cognitively attended" and runs automatically in response to direct interaction with the environment. In this case, the representations are not attended by the cognitive expansion and are clearly not a part of access consciousness. But, importantly, this can change whenever a problem is detected and the reactive (automatic) system is no longer sufficient. In such a case, the WTA-net of the attention controller is activated and has to select one of the elements of the procedural memory. During planning, these elements become accessible to the attention system (Norman & Shallice 1986). The WTA-net, which constitutes the essential part of the attention controller, projects directly back to the motivation units of the procedural memories (figure 6, dashed arrows) and thereby selects just one of the possible behaviors (due to the characteristics of a Winner-Take-All network). Therefore, all the procedural modules that could be activated by the attention controller are "globally available" and form possible elements of access consciousness.
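
The following sketch illustrates this sense of global availability: when a problem detector fires, a noisy winner-take-all pass runs over all procedural modules, not only those of the currently active context. Module names, saliences, and the noise level are our own illustrative assumptions.

```python
# Sketch: the attention controller as a noisy winner-take-all selection
# over ALL procedural modules, making each of them globally available.
import random

procedures = ["swing_front_left", "stance_front_left",
              "swing_middle_left", "retract_hind_right"]

def attention_controller(saliences, noise=0.3):
    """Return the procedure whose noisy salience wins the competition.
    Every module competes, regardless of the current context."""
    scores = {p: s + random.uniform(0.0, noise) for p, s in saliences.items()}
    return max(scores, key=scores.get)

# a rough prior, e.g., from the morphological neighborhood of the problem:
saliences = {p: 0.5 for p in procedures}
saliences["retract_hind_right"] = 0.6

winner = attention_controller(saliences)
print("cognitively selected behavior:", winner)  # then tested in simulation
```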

10.2 Further relations between reaCog and access consciousness

Another interesting parallel between reaCog and findings in psychology concerns the relation between conscious and automatic procedures. It is well known that humans are able to learn a new behavior by consciously attending to that behavior. Over time, this can change, and the execution of the behavior becomes more and more automatic, i.e., it is no longer necessary to be consciously aware of the exact execution of the behavior. A similar shift of attention can be found when reaCog is planning new behaviors. Triggered by the activation of a problem detector, reaCog has to shift its attention towards the new behavior during planning and the following execution of the behavior. As long as the problem detector is still active, the reactive system is basically suspended (by switching off the loop through the body), and instead the planning system tries out new procedures that have to be attended to. After successful execution, the new solution can also be stored as a procedural memory and become part of the reactive system; it does not require cognitive attention anymore (the procedure for storing this information has not yet been implemented in reaCog). An advantage of this integration into the reactive system is that access to reactive procedures is faster than using the cognitive process, which agrees with the findings mentioned above.

There are other experimental findings highlighting the relation between conscious and non-conscious access to procedural elements. Beilock et al. (2002) found that athletes who have learned a behavior so that it can be performed automatically perform worse when they concentrate on the behavior than when performing the behavior while being distracted. In the attention controller of reaCog we can observe a similar phenomenon. If the attention controller is externally activated by a higher-level unit while the connected behavior is performed, this could possibly activate learning. Such an influence would change the underlying neuronal module and could worsen the result. In contrast, without attention the behavior would be performed as it had been learned earlier.

ReaCog differs in an important respect from the simulation studies conducted by Dehaene and colleagues, as well as from those conducted by Baars and colleagues. While the latter approaches aim to relate conscious functioning to individual brain areas or brain circuits, reaCog is not intended at all as a model of the human brain or any of its areas. Instead, it is envisioned as a reductionist approach that focuses only on function. From the bottom-up development of more and more higher-level functions, we offer a post-hoc discussion of the question to what extent reaCog shows aspects of access consciousness. This approach seems particularly suitable for addressing access consciousness, as it turns out that there is no single identifiable part of reaCog that might be attributed the property of access consciousness. Instead, access consciousness appears to be an emergent property constituted by the complete system. The attention controller, the procedural memory, and the connections between those two parts, as well as the internal model and the ability to use it in internal simulation, seem to be the structures required to allow access consciousness, or, in other words, together constitute the "neural workspace". The dynamics of the neural workspace as defined by Dehaene & Naccache (2001) are given through the WTA-net. But, and this is an important difference, no re-representation in this neural workspace is necessary. The already-present representations can be reused in novel contexts. The existing modules of procedural memory are recruited in the internal simulation when planning ahead. The only difference is that the body is decoupled from the control loop, and the loop through the world is replaced by a loop using internal models and their predictions as feedback. Together, these representations form the global workspace (this notion of internal models has been termed "second-order embodiment", c.f. Metzinger 2014).

Koch & Tsuchiya (2007) differentiate attention and consciousness, as both can be present individually and independently of each other. They conclude that different mechanisms are responsible for attention and consciousness. While such a differentiation is of course based on basic definitions, we can indeed identify different mechanisms related to these two phenomena, even though they seem to be related. In reaCog, attending to a specific stimulus is modelled as a specific activation of motivation units. Only if this activation is strong enough and/or active for enough time can the procedure enter the phenomenal state (section 7, figure 8). Therefore, attention and the phenomenal aspect of consciousness refer to different, but tightly coupled, properties of our system.

10.3 Metacognition

Although in this article we use the term cognition in the strict sense proposed by McFarland & Bösser (1993), when dealing with metacognition this definition is no longer generally applicable. Therefore, in this section the term cognition is used in the usual, more qualitatively defined way. We will describe how the motivation unit network could be expanded to endow our agent with different aspects of metacognition. These expansions, however, have not yet been simulated by being implemented into the complete network.

Metacognition, or reflexive consciousness (sometimes called metarepresentation), the second essential domain of consciousness according to Block (1995, 2001) and Cleeremans (2005), is characterized by Lau & Rosenthal (2011) as "cognition that is about another cognitive process as opposed to about objects in the world" (p. 365).

While the selection of procedures for the control of behavior may occur on the reactive level or by application of access consciousness, metacognition is in addition able to exploit information concerning a subject's own internal states. As a further property, a metacognitive system, when selecting behavior, can represent itself as selecting this behavior ("I make the decision"). Metzinger (2014) classifies this ability as third-order embodiment, where the subject's own body is "explicitly represented as existing" (p. 274) and the "body as a whole" can turn "into an object of self-directed attention" (p. 275). Thus, metacognition is about monitoring internal states in order to exploit this knowledge for the control of behavior. According to Cleeremans (2005), metacognition may also be used for inferring knowledge about the internal states of other agents from observing their behavior and for communicating a subject's own states to others.

Let us first focus on the individual agent. What kind of information might be used by a metacognitive system? A typical case discussed in the literature concerns some quality measure of the procedure to be selected. During decision-making, a person relying on his or her own knowledge needs to be able to access his or her own internal state in order to estimate how sure he or she is about the specific piece of knowledge. Cleeremans et al. (2007) use as an illustrative example a system consisting of two artificial neural networks. While the first network learns an input-output mapping of the task, the other network, as a second-order network, learns to estimate a quality measure describing the performance of the first-order network. As the combination of the two networks not only stores information in the complete system, but also contains information about and for the system, the authors conclude that such a system already shows a limited form of metacognition. Such a network, using an additional second-order subnet, might be implemented in our system, too. For example, motivation units could be activated by confidence or quality values estimated by such a second-order network. Such a situation can indeed be found in the network Navinet. Navinet is used for navigation control tasks and is inspired by work on navigation in ants. In this system, the salience of a stored stimulus guides memory retrieval (Cruse & Wehner 2011; Hoinville et al. 2012). For instance, the decision to choose one of many different food sources is influenced by the internal representation of the learned food quality (Hoinville et al. 2012). As another example, the confidence value of a visual landmark that is to be followed or not might depend on the salience of the visual stimulus, similar to the implementation of a Bayesian-like system. A different example is given by reaCog, which, by exploiting its internal body model, is capable of representing its own body for internal simulation as well as for control of behavior. Thus, at least some basic requirements for metacognition, such as being able to use one's own internal representations for the control of behavior, are fulfilled, if we, again, leave the phenomenal aspect aside. Below we will, in addition, briefly address the ability of the agent to represent itself.
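
A toy analogue of this two-network arrangement (our own construction, loosely following Cleeremans et al. 2007; the task, the margin feature, and the learning rates are invented for illustration): a first-order logistic unit learns a simple classification, while a second-order unit learns, from the first-order net's own decision margin, to estimate the probability that the answer is wrong.

```python
# First-order network: knowledge about the task.
# Second-order unit: knowledge about the first-order network's reliability.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

w = rng.normal(0.0, 0.1, size=2)   # first-order weights
a, b = 0.0, 0.0                    # second-order parameters

for _ in range(5000):
    x = rng.uniform(-1.0, 1.0, size=2)
    t = float(x[0] + x[1] > 0.0)          # the task to be learned
    p = sigmoid(w @ x)                    # first-order output
    w += 0.2 * (t - p) * x                # logistic-regression update
    m = abs(w @ x)                        # first-order decision margin
    q = sigmoid(a * m + b)                # second-order: estimated error
    err = float((p > 0.5) != t)           # did the first-order net err?
    a += 0.1 * (err - q) * m              # delta rule on the error signal
    b += 0.1 * (err - q)

for x in (np.array([0.9, 0.9]), np.array([0.05, -0.02])):
    m = abs(w @ x)
    print("answer:", int(sigmoid(w @ x) > 0.5),
          " estimated error probability:", round(float(sigmoid(a * m + b)), 2))
```

Near the decision boundary the margin is small, so the second-order unit reports a higher estimated error probability; this "information about and for the system" is what the cited authors regard as a limited form of metacognition.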

How may metacognition be suited to support information transfer between different agents? We will not refer to communication using verbal or gestural symbols here. Instead, we want to start with the ability to identify oneself with another agent, or, in other words, to be able to "step into the shoes of the other". This faculty has been referred to as Theory of Mind (ToM). Central is the notion of being able to attribute mental states to other agents (Premack & Woodruff 1978). A classical example is the "Sally-Anne task". In this experiment, two subjects observe how a cover hides a piece of candy lying on a table. While one subject, Sally, is outside of the room, the other subject, Anne, is able to observe how the hidden candy is moved to a new location. After the change, the candy lies underneath a white cover and not under the black cover, under which it started. The crucial test question is put to Anne: where does she think Sally will search for the candy? If Anne points to the white cover she only uses her own current beliefs about the situation, but does not apply a ToM, i.e., she does not take into account what Sally believes, since Sally has not observed the switch. But if Anne points to the original location, the black cover, she is assumed to have a Theory of Mind, as she operates on a set of mental states that she ascribes to Sally.

ToM is crucial when an agent needs to capture not only physical objects, but in addition to represent other agents. It becomes necessary to explicitly keep track of others' observations, plans, and intentions. Only agents that can attribute mental states to other agents can successfully predict their behavior. There are two common explanations to account for how ToM is realized. First, the so-called theory-theory (Carruthers 1996) assumes that there are dedicated, innate, or learned procedures that allow for the prediction of internal states and therefore of the behavior of others. We want to concentrate on the second main approach, namely simulation theory (Goldman 2005).

Central to simulation theory is the already introduced notion of an internal simulation. As a prerequisite, an agent needs an internal model of him- or herself. This model can be used (as explained) for planning ahead using internal simulation. But in the same way this model can also be recruited in order to represent another agent. Thereby, another agent may be mapped onto the agent's own internal model, which allows simulating the behavior of the other agent. This faculty would enable the agent to derive all sorts of conclusions based on its own representations, such as, for example, current goals or intentions.

In the case of reaCog, we envision an extension that allows mapping another agent onto the already existing internal model. Internal simulation could be used in this context, too. Therefore, the application of such an internal simulation of another agent could lead to an interpretation of the behavior of the other. However, the two theories mentioned do not necessarily exclude each other, as can be shown when considering the properties of the cognitive expansion further. If the interpretation found via an internal simulation of another agent is new and succeeds in simulating its behavior, the result could be stored in the procedural memory in a similar way as described for reaCog when coming up with a new solution to a given problem. In this way, a new procedure has been learned that allows for the prediction of the behavior of the other agent. As such, application of simulation theory might in the end lead to results that are described as characterizing theory-theory. The faculty of applying a ToM is currently beyond the ability of reaCog as described above, which allows for an egocentric view only. In the following, we will, however, sketch a way in which such a network may be implemented into the architecture of reaCog (for more details see Cruse & Schilling 2011).

Figure 9 shows a possible expansion of reaCog. Two motivation units represent the state "awake" and the state "sleep", respectively. In the awake state, several sensory and/or motor elements can be activated. These elements may form different contextual groups. To simplify matters, here we focus on two such groups only. One group contains the procedure "grasp" and a memory element representing the visually given input "position of an object" (relative to the agent), in this case the position of a piece of candy (pos.candy), which is hidden under a cover. We further assume that the agent can also recognize, as a specific kind of object, a conspecific ("partner") (see Steels & Spranger 2008 and Spranger et al. 2009 for solutions), to whom the agent can attribute properties. These are, in our example, the memory elements "face" and "position", which stand for the visual appearance and spatial location of the partner to be recognized. Together with the unit "partner" these motivation units form an excitatory network (the dashed connections marked 1A and 1B will be treated later). The procedure "grasp" contains a body model consisting of an RNN (Schilling 2011) that contains information on the arm used for grasping. This network can be applied to both motor control and recognition of the arm. The former function is symbolized by the output arrow. Concerning the latter function, the body model is used to minimize the error between the position of the internal model of an arm and the (underspecified) visual input of the arm (e.g., Schilling 2011). If the error can be made small enough, the visual input can be interpreted so as to match the morphology and the specific spatial configuration of the model arm. To symbolize this capability, in figure 9 the procedure "grasp" is also equipped with sensory (visual) input.

The network depicted in figure 9 (disregarding connections 1A and 1B) enables the agent to recognize the position of the candy and to grasp it ("Ego grasp candy"), as indicated by the motivation units marked red in figure 9a. It further allows recognition of the face and the position of the partner. But it does not enable the agent to "put itself into the partner's shoes". In other words, the agent is not able to realize that the partner may have his/her own representation of the world. Thus, the capability of a ToM is lacking.

The motivation unit connecting the agent-related elements "pos.candy" and "grasp" has been called "Ego" in the figures. Although not required for the functioning of this network as shown in figure 9a (disregarding connections 1A and 1B), the application of the unit Ego would allow the introduction of a Word-net representing the word "I". Thus, with this expansion the concept of "I", as opposed to other agents (e.g., a partner), can be used by our agent, allowing for internal states like "I grasp candy", and therefore for self-representation.

Unit Ego is, however, necessary in our framework when two units (here "Ego" and "Partner") share elements, as will be the case in the following example, where we will enable the agent to represent the partner performing a grasping movement. To this end, we introduce mutual excitatory connections between the unit representing the partner and the procedural element "grasp" (dashed excitatory connection 1A, figure 9). In addition, unit "Ego" and unit "Partner" have to be connected via mutual inhibition (dashed inhibitory connection 1B, figure 9). This inhibitory connection has the effect that only one of the two units, either "Ego" or "Partner", can be activated at a given moment in time. With these additional connections 1A and 1B, the network can adopt the internal state "Partner grasp candy". This situation can be represented in the agent's memory by activation of the motivation units illustrated in figure 9b, highlighted in red. Note that the introduction of connections 1A and 1B does not alter the ability of the agent to represent the situation "Ego grasp candy" addressed above.
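The role of connections 1A and 1B can be made concrete with a toy relaxation, sketched below. The unit names follow figure 9, but the weights, the leaky update rule, and the activation threshold are our invented assumptions, not reaCog's actual implementation.

```python
import numpy as np

UNITS = ["awake", "ego", "partner", "grasp", "pos_candy", "face", "position"]
idx = {u: i for i, u in enumerate(UNITS)}
n = len(UNITS)

W = np.zeros((n, n))
def link(a, b, w):
    """Symmetric (mutual) connection between two motivation units."""
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = w

link("awake", "ego", 0.3); link("awake", "partner", 0.3)
link("ego", "grasp", 0.6); link("ego", "pos_candy", 0.6)        # Ego group
link("partner", "face", 0.6); link("partner", "position", 0.6)  # Partner group
link("partner", "grasp", 0.6)   # dashed excitatory connection 1A
link("ego", "partner", -1.0)    # dashed mutual inhibition 1B

def settle(a, steps=100, rate=0.2):
    """Leaky relaxation with piecewise-linear units clipped to [0, 1]."""
    for _ in range(steps):
        a = np.clip(a + rate * (W @ a - a), 0.0, 1.0)
        a[idx["awake"]] = 1.0  # the 'awake' state unit is clamped on
    return a

a = np.zeros(n)
a[idx["awake"]] = 1.0
a[idx["partner"]] = 0.6  # e.g., the partner's face is currently seen
a[idx["ego"]] = 0.4
a = settle(a)
print([u for u in UNITS if a[idx[u]] > 0.5])
# -> ['awake', 'partner', 'grasp', 'face', 'position']: the shared
#    "grasp" procedure is bound to the Partner group while "Ego" is
#    suppressed via connection 1B.
```

Starting the relaxation with the Partner group slightly more active than Ego settles the network into the "Partner grasp candy" attractor; reversing the initial activations yields "Ego grasp candy" instead.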

The architecture depicted in figure 9, including connections 1A and 1B, has eventually been termed the application of "shared circuits", since the procedure "grasp" can be addressed by both unit "Ego" and unit "Partner", which strongly reminds us of properties characterizing mirror neurons. Therefore, application of such shared circuits has been described as "mirroring" (Keysers & Gazzola 2007). Units of the grasp-net (including the target pos.candy) represent the movement and its goal, and thus correspond to representations of a motor act, such as has been attributed to mirror neurons (Rizzolatti & Luppino 2001). The grasping movement in both cases (figure 9a, b) is represented as being viewed by the agent ("Ego grasp candy", figure 9a) or by the partner ("Partner grasp candy", figure 9b). This means that there is still no ToM possible for the agent. To enable the agent to develop a ToM, we need another expansion.

To explain this, we will present a simple simulation of the Sally-Anne task mentioned above. Both protagonists, Sally and Anne, may have different memory contents concerning the position of the candy. This means that the agent, in this case Anne, needs to be able to represent some aspects of the memory of her partner, too. Therefore, the memory section representing her partner will be equipped with a memory element representing the position of the candy as viewed by her partner Sally, who left the room (figure 10, connection 2). Both memory elements that have possible access to the procedure "grasp" have to be connected by mutual inhibition, so that only one of these elements can address the procedure at a given time, in order to allow for a sensible representation of the situation. Now imagine that the subject Anne is equipped either with a network as depicted in figure 9, or with that depicted in figure 10. Application of a system as shown in figure 9 means that the agent (Anne) has only one representation of the candy's position, namely the one seen last. Therefore only this, correct, position can be activated, and it is imagined that the partner grasps at the correct position: this kind of prediction is observed in children younger than about four years. Anne cannot take into account the likely assumption her partner will make about the location of the candy. In contrast, in a system as presented in figure 10, there is a difference between thinking of oneself grasping the candy and of the partner grasping it. When the agent, Anne, imagines herself grasping the candy, she would grasp at its position under the correct cover (figure 10a). If asked to simulate the internal state of her partner, as is required in the case of the Sally-Anne test (figure 10b), the position connected to her partner Sally is used, and the agent will rightly deduce that her partner's grasp would be directed towards this position, which is wrong, but this fact is not known by her partner. Therefore, the network shown in figure 10 allows for ToM, in contrast to the network shown in figure 9. The critical difference between the two networks is that the network shown in figure 10 contains a separate representation of (a part of) the partner's memory. This means that a comparatively simple expansion of our network shows how the agent could be equipped with the ability to apply ToM.
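The functional difference between the two networks can be condensed into a few lines of toy code, given below. The container names and hiding places are illustrative only and abstract away entirely from the motivation-unit dynamics.

```python
# Toy version of the Sally-Anne setup with the figure-10 expansion:
# Anne holds a second memory element (connection 2) for what her
# partner Sally last saw. Names and values are illustrative only.

own_memory = {"pos_candy": "box"}         # Anne saw the candy moved here
partner_memory = {"pos_candy": "basket"}  # Sally left before the move

def simulate_grasp(perspective):
    """Only one of the two memory elements can drive the shared
    'grasp' procedure at a time (mutual inhibition)."""
    memory = own_memory if perspective == "ego" else partner_memory
    return memory["pos_candy"]

print(simulate_grasp("ego"))      # 'box': Anne grasps correctly
print(simulate_grasp("partner"))  # 'basket': Sally's (false) belief
# A figure-9 system lacks partner_memory and would predict 'box' for
# both perspectives, as children younger than about four years do.
```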

11 Discussion

Consciousness and the relation of the outside world to mental representation are central to philosophy of mind, and have led to many diverse views (Vision 2011). While many of those views appear plausible in themselves, especially from a non-philosopher's perspective, there appears to be much disagreement among philosophers.


Figure 10: An expansion of the network shown in figure 9, allowing the agent to apply ToM. The expansion concerns the introduction of a new memory element ("mem.candy") plus connection # 2, which enables the agent to represent the assumed content of the partner's memory.


Many of the positions are based on high-level views approaching consciousness in a top-down fashion. In contrast, our approach starts from a low-level control system for a behaving agent. The goal is the bottom-up development of higher-level faculties. In this way, the neural architecture implements a minimal cognitive system that can be used as a hypothesis for cognitive mechanisms and higher-level functioning, which are testable in a real-world system, for example, on a robot. This allows deriving testable and quantitative hypotheses for higher-level phenomena. A bottom-up approach can thus nicely complement philosophical discussions focusing mainly on higher-level aspects. In addition, such a minimal cognitive system can provide functional descriptions of higher-level properties. We briefly introduced the reaCog system in this article, following this bottom-up approach. The central concern is the emergent properties that can be identified when analyzing this system. In particular, high-level properties, such as emotions, attention, intention, volition, or consciousness have been considered here and related to the system.

From our point of view, such a bottom-up approach leads to a system that can be used to test quantitative hypotheses. Even though the system was not intended to model, for example, consciousness, the system can be thoroughly analyzed and emergent properties can be related to mental phenomena. This is particularly interesting, as high-level descriptions can leave a lot of room for interpretation. In contrast, connecting mental phenomena to mechanisms of a well-defined system allows for detailed studies and clear-cut definitions on a functional level. In this way, a system can be examined with respect to many, even diverging, views and may allow resolving ambiguities. Knowledge gained from analyzing the system can thereby inform philosophical theories and refine existing definitions by defining sufficient aspects as well as missing criteria.

One might ask whether higher-level phenomena as considered here are not simply too far removed for such a simple system. One basic problem is represented by the frequently formulated assumption that all these phenomena have to be tied to the notion of an internal perspective and that phenomenality has a function in and of itself. In contrast, we claim that focusing on the functional aspect is a sensible approach. It is possible because we believe that the phenomenal aspect is always coupled to specific, yet unknown, properties of the neuronal system that, at the same time, have functional effects and show subjective experience. In other words, adopting a monist view, we assume that we can circumvent the "hard" problem, i.e., the question concerning the subjective aspect of mental phenomena, without losing information concerning the function of the underlying procedures. Of course, we are not in a position to claim which of these structures, if any, are accompanied by phenomenality. If, however, the functions of, for example, the artificial system indeed correspond well enough to those of the neuronal structures that are accompanied by phenomenality, the artificial system may have this property, too.

The control network reaCog consists of local procedural modules. We have presented two subnetworks: Walknet, which aims at the control of walking, and Navinet, which deals with navigation. Both consist of a heterarchical structure of motivation units that form a recurrent neural network. This, via competition and cooperation between those units, allows for various attractor states that enforce action selection. Selection of one or a group of procedures protects a current behavioral context against non-relevant sensory input. An internal model of the body is part of the control network coordinating joint movements in walking. As this model is quite flexible and predictive, it can be used for planning ahead through internal simulation. Following the definition of McFarland & Bösser (1993), the network, since it is based on reactive procedures and is capable of planning ahead, can be termed a cognitive system, giving rise to its name: reaCog. In combination with the attention controller, the whole framework can come up with new behavioral solutions when encountering problems, i.e., behaviors that are not automatically activated by the current context. Internal simulation allows us to test these behaviors and to come up with predicted consequences, which can be used to guide the selection process for the real system. The attention controller cannot function independently; it is tightly connected to the reactive structures. The procedural memory of the reactive system is further accompanied by perceptual memory and Word-nets, a specific form of mixed procedural and perceptual memory. The latter memory elements allow the introduction of symbolic information. Symbol-grounding is realized by specific connections between the motivation unit of a Word-net and its partner motivation unit, representing the corresponding concept in the procedural (or the perceptual) memory.
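The cognitive-expansion loop just summarized can be sketched schematically as follows. All names are placeholders: the toy predictor stands in for running a behavior on the internal body model, and random selection stands in for the noisy winner-take-all attention controller; this is a sketch of the described control flow, not reaCog's actual implementation.

```python
import random

procedural_memory = {
    "current_context": ["swing_leg", "stance_leg"],
    "other_contexts": ["step_back", "shift_weight", "lift_body"],
}

def predicted_success(behavior, state):
    # Toy predictor standing in for running the behavior on the
    # internal body model and inspecting the simulated outcome.
    return behavior == state["needed"]

def cognitive_expansion(state):
    """When the reactive system runs into a problem, search the whole
    procedural memory (global availability), test candidates via
    internal simulation, and only then act on the real body."""
    if not state["stuck"]:
        return None  # reactive control suffices; no simulation needed
    candidates = (procedural_memory["current_context"]
                  + procedural_memory["other_contexts"])
    random.shuffle(candidates)  # stand-in for the WTA attention controller
    for behavior in candidates:
        if predicted_success(behavior, state):
            # store the new solution so it becomes a learned procedure
            procedural_memory["current_context"].append(behavior)
            return behavior  # only now executed on the real body
    return None  # no solution found in simulation

state = {"stuck": True, "needed": "lift_body"}
print(cognitive_expansion(state))  # -> 'lift_body'
```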

Key characteristics of reaCog are modularity, heterarchy, redundancy, cross-modal influences (e.g., path integration and landmark navigation in Navinet), bottom-up and top-down attention control, i.e., the selection of relevant sensory inputs, as well as recruitment of internal models for planning. The complete control system constitutes a holistic system, as the central selection control process (including the internal body-model) is implemented as an RNN. Overall, reaCog follows Anderson's massive redeployment hypothesis (Anderson 2010), since large parts of the reactive control network structure are reused in higher-level tasks (as discussed in detail in section 4 for planning ahead and in section 10.3 for Theory of Mind).

ReaCog nicely demonstrates how complex behavior can emerge from the interaction of simple control networks and coordination on a local level, as well as through the loop through the environment. Its feasibility was shown first through implementation of the system in dynamic simulation (for Navinet on a two-DoF wheeled robot platform; for Walknet using a twenty-two-DoF hexapod robot). Second, these control networks are currently being applied to a real robot, called Hector (Schneider et al. 2011).

Emergent properties are properties that are to be addressed using levels of description other than those used to describe the properties of the elements. In the reactive part of the system (Walknet, Navinet) we have already found some emergent properties (development of different "gaits", climbing over large gaps, finding shortcuts in navigation characterized as cognitive-map-like behavior) as well as forms of bottom-up and top-down attention. With respect to the notion of access consciousness, several contributing properties are present in reaCog. Most notably, planning ahead through internal simulation is central to reaCog. New behavioral plans are tested in the internal simulation, thus exploiting the existing internal model and its predictive capabilities. Only afterwards are successful behaviors applied on the real agent. In this way, the agent can deal with novel contexts and is not restricted to the hard-wired structure of the reactive system.

Furthermore, the system shows global availability, which means that elements of the procedural memory can be addressed even if they do not belong to the current context. A third property contributing to the elements forming access consciousness concerns the ability of the system to communicate with an external supervisor by following (i.e., understanding) verbal commands and by reporting on its internal states. Therefore, except for the ability of linguistic reasoning, which is clearly missing, the issues characterizing access consciousness as listed by Cleeremans (2005) are fulfilled. But there are also disadvantages: (i) reactive automatic control is faster. As cognitive control involves internal simulation (and probably multiple simulations), the whole process takes more time. In addition, there is an overhead of higher-level control in contrast to reactive control. (ii) While access consciousness enables the system to deal with novel situations and to come up with new behaviors, the same processes might interfere when they are active during processing on the reactive control level. This might lead to worse performance when both levels are active at the same time. Both drawbacks have been confirmed in psychological experiments. We have not dealt with the subjective aspect of consciousness. But leaving this aside, we have shown how reaCog shows important constituent properties of access consciousness and how it may provide, in this way, a scaffold for a more complex system that can manifest additional basic aspects of consciousness.

The property of having an internal body-model and the property of being able to internally simulate behavior have been explicitly implemented and can therefore not be considered emergent properties in our approach. However, when referring to a hypothetical evolutionary process that may have led to the development of these properties, the appearance of the body-model and of the cognitive expansion might well be characterized as representing an emergent property.

We based our analysis and discussion on the perspective of Cleeremans, and used his concepts. One counterargument addressing the notion of access consciousness is that this notion is too unspecific, as it does not help to distinguish between systems and may cover "too many" systems. For instance, one may ask, following a minimalist approach, whether this notion of access consciousness might even include programs like chess-playing software. One might also ask whether there is a fundamental difference between such a system and a system like reaCog.

While both systems are able to search for the solution to a problem using internal simulation, there are indeed crucial differences. A typical chess program would not be embodied; but, obviously, today this difference can easily be overcome, and the system could be realized in a robot equipped with a vision system and a hand that could move the chess pieces.

However, more importantly, the basic difference between such a chess player and reaCog would be their flexibility in using internal models. A chess-playing robot always operates within the same context, which is stored in a separate memory domain, for example in a list of symbolic rules. In contrast, reaCog basically operates with a reactive system, but can also switch to the state of internal simulation when a problem occurs. It then searches for a solution by testing memory elements not belonging to the actual context. In other words, reaCog is able to exchange information between different contextual domains. Such a switch is not available to a chess-playing program at all. Such a program cannot distinguish between different contexts. In other words, there is no global accessibility in the sense described for systems showing access consciousness. As a consequence, the discussion of drawbacks connected with access consciousness as mentioned in the above paragraph on emergent properties, that is, issues (i) and (ii), is not applicable to such a chess-playing system, and nor are the dynamical effects observed in the experiments of Beilock et al. (2002) (section 10.2).

The same holds for the phenomena of the psychological refractory period, attentional blink, and the masking experiments discussed earlier in section 7. None of these phenomena can be addressed by a classical chess-player system: first, because, due to the different architectures, no search of a domain belonging to a different context is possible. A chess player does not meet the requirements of access consciousness as listed by Cleeremans (2005) and represented by reaCog. Second, no specific dynamics can be found in such a chess-player system that could be made responsible for the dynamical effects mentioned above and which may provide the substrate for the occurrence of phenomenal experience. Therefore, both systems are qualitatively different. If at all, the chess player may correspond to a subsection of the symbolic domain of access consciousness, which has not yet been explicitly addressed in this article.

In an earlier paper (Cruse & Schilling 2013), taking a conservative position, we argued that properties of metacognition could not be found in the earlier version of reaCog. We have now provided some new arguments that permit a different position concerning this matter. Using this architecture, the agent is able to monitor internal states and use this information to control its behavior. Internal states may also be able to represent the agent itself. A first expansion allows representation of the activations of a partner by using the same procedure as is used for controlling the agent's own behavior (application of "shared" circuits, "mirroring"). Furthermore, using an expansion proposed by Cruse & Schilling (2011), the agent is also able to exploit and represent knowledge about the internal states of others, specifically by applying ToM.


Cruse & Schilling (2011) have further shown how this network can be expanded to represent the discrimination between subject and object (e.g., Ego push Partner) and to attribute subjective experience (e.g., pain) to the partner using a shared body-model. A further expansion that allows for mutualism, in which two agents cooperate to reach a common goal ("shared intention", Tomasello 2009), requires two body-models, corresponding to what Tomasello calls a we-model.

In the remainder of this section we briefly mention some aspects not addressed by reaCog. First, not all combinations of the elements explained for our network have been tested within the complete system. For example, Walknet and Navinet have been tested in separate software and hardware simulations. Second, we concentrated on solving motor problems alone, and did not deal with how this system could solve problems in the symbolic domain at all. From an embodied point of view, this restriction is not as problematic as it might initially seem, as the solution process for many problems can be traced back to abilities that are based on solving motor tasks (Glenberg & Gallese 2011); for example, this holds true even for abstract domains such as mathematical problem-solving (Lakoff & Nunez 2000).

Finally, an important aspect not addressed here in any detail concerns how learning of the memory elements, including the weights of the motivation unit network, is possible. Examples of learning the position and quality of new food sources in Navinet are given by Hoinville et al. (2012), and examples of learning perceptual networks, including the heterarchical arrangement of concepts, are given by Cruse & Schilling (2010a), but the ability to learn such properties has not yet been integrated into the complete system.

12 Conclusion

We describe a way to construct an artificial agent whose architecture is characterized by a number of local, reactive procedures controlled by an RNN, termed motivation unit network. This network is able to adopt various attractor states, or internal states, which are able to protect the complete system from sensory input not belonging to the current internal state. No strict hierarchy can be observed in this network. Instead, internal states may be represented by partly overlapping state vectors.

Where required, further procedures have been introduced that can be interpreted as forming explicit representations of parts of the environment. Specifically, an internal model of the agent's own body is introduced that can, as a "manipulable" body-model, be used for planning new behaviors via internal simulation. Internal manipulation is possible because the body-model is, like a marionette puppet, able to adopt all configurations the real body can assume. This expansion allows the agent to switch between reactive control and cognitive control (in the sense of McFarland & Bösser 1993).

When aiming to study higher mental properties, at least in human beings, we have to deal with the phenomenal aspect of these properties. A number of experimental results suggest that (i) some, but not all, neuronal activities are, under specific conditions (not known in any detail), equipped with a phenomenal aspect, i.e., show subjective experience, but that (ii) there is no specific function of this phenomenal aspect apart from the functions that can be ascribed to the physical properties of the system. Note that this does not mean that the phenomenal aspect has no function. Rather, a network adopts the function only when, at the same time, the phenomenal aspect is given. This view allows us to focus the analysis on the functional aspect of the procedure (see section 7). However, due to our lack of knowledge, as external observers we cannot decide whether a given internal state is a mental state or not (if mental states are understood as internal states that are equipped with a phenomenal aspect).

The complete network represents a collection of hypotheses that can be tested by comparing their properties with experimental data and by trying to match them with theoretical concepts. Examples studied in this article concern behaviors that, for an external observer, may be conceptualized as various gait patterns, or navigation using an internal map, on the "lower" level. On a higher level, we deal with inventing new behaviors and planning ahead, as well as phenomena attributed to mental states like emotions, attention, intention, and volition. Last but not least, we compare the properties of our approach with different aspects of consciousness, such as access consciousness (including global accessibility) and metacognition. We claim that, at least in their basic form, these phenomena can be attributed to internal states emerging from the cooperation of decentralized elements of our network.

References

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33 (4), 254-313. 10.1017/S0140525X10000853

Baars, B. J. & Franklin, S. (2007). An architectural model of conscious and unconscious brain functions: Global workspace theory and IDA. Neural Networks, 20 (9), 955-961. 10.1016/j.neunet.2007.09.013

Beilock, S. L., Carr, T. H., MacMahon, C. & Starkes, J. L. (2002). When paying attention becomes counterproductive: Impact of divided versus skill-focussed attention on novice and experienced performance of sensorimotor skills. Journal of Experimental Psychology: Applied, 8 (1), 6-16. 10.1037/1076-898X.8.1.6

Bloch, A. M. (1885). Expérience sur la vision. Comptes Rendus de Séances de la Société de Biologie, 37, 493-495.

Block, N. (1995). On a confusion about a function of consciousness. The Behavioral and Brain Sciences, 18 (2), 227-287. 10.1017/S0140525X00038188

(2001). Paradox and cross purposes in recent work on consciousness. Cognition, 79 (1-2), 197-219. 10.1016/S0010-0277(00)00129-3

Bläsing, B. (2006). Crossing large gaps: A simulation study of stick insect behavior. Adaptive Behavior, 14 (3), 265-285. 10.1177/105971230601400307

Bratman, M. E. (1987). Intention, plans and practical reason. Cambridge, MA: Harvard University Press.

Carruthers, P. (1996). Simulation and self-knowledge: A defence of the theory-theory. In Carruthers, P. and Smith, P. K. (Eds.) Theories of theories of mind. Cambridge, UK: Cambridge University Press.

Chalmers, D. (1997). The conscious mind: In search of a fundamental theory. New York, NY: Oxford University Press.

Chittka, L. & Niven, J. (2009). Are bigger brains better? Current Biology, 19, R995-R1008. 10.1016/j.cub.2009.08.023

Cleeremans, A. (2005). Computational correlates of consciousness. Progress in Brain Research, 150, 81-98. 10.1016/S0079-6123(05)50007-4

Cleeremans, A., Timmermans, B. & Pasquali, A. (2007). Consciousness and metarepresentation: A computational sketch. Neural Networks, 20 (9), 1032-1039. 10.1016/j.neunet.2007.09.011

Cohen, M. A. & Dennett, D. C. (2011). Consciousness cannot be separated from function. Trends in Cognitive Sciences, 15 (8), 358-364. 10.1016/j.tics.2011.06.008

Cruse, H. (2003). The evolution of cognition: A hypothesis. Cognitive Science, 27 (1), 135-155. 10.1207/s15516709cog2701_5

Cruse, H. & Schilling, M. (2010a). Learning and retrieval of hierarchically organized information in a simple, one-layered RNN. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010 at WCCI 2010 IEEE World Congress on Computational Intelligence), Barcelona, Spain (pp. 521-528). 10.1109/IJCNN.2010.5596804

(2010b). Getting cognitive. In Bläsing, P., Puttke, M. and Schack, T. (Eds.) The Neurocognition of Dance (pp. 53-74). Psychology Press.

(2011). From egocentric systems to systems allowing for theory of mind and mutualism. In Doursat, R. (Ed.) Proceedings of the ECAL 2011, Paris (pp. 184-191). MIT Press.

(2013). How and to what end may consciousness contribute to action? Attributing properties of consciousness to an embodied, minimally cognitive artificial neural network. Frontiers in Psychology, 4 (324). 10.3389/fpsyg.2013.00324

(2014). Action selection within short time windows. Biomimetic and Biohybrid Systems, 8608, 47-58. 10.1007/978-3-319-09435-9_5

Cruse, H. & Wehner, R. (2011). No need for a cognitive map: Decentralized memory for insect navigation. PLoS Computational Biology, 7 (3), e1002009. 10.1371/journal.pcbi.1002009

Dehaene, S. & Changeux, J. P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70 (2), 200-227. 10.1016/j.neuron.2011.03.018

Dehaene, S. & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79 (1-2), 1-37. 10.1016/S0010-0277(00)00123-2


Dennett, D. C. (1991). Consciousness explained. Boston, MA: Little, Brown & Co.

Desimone, R. & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193-222. 10.1146/annurev.ne.18.030195.001205

Doya, K. (2000). Complementary roles of basal ganglia and cerebellum in learning and motor control. Current Opinion in Neurobiology, 10 (6), 732-739. 10.1016/S0959-4388(00)00153-7

(2002). Metalearning and neuromodulation. Neural Networks, 15 (4-6), 495-506. 10.3410/f.1001684.173108

Dreisbach, G. & Goschke, T. (2004). How positive affect modulates cognitive control: Reduced perseveration at the cost of increased distractibility. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30 (2), 343-353. 10.1111/j.1460-9568.2007.05949.x

Dürr, V., Schmitz, J. & Cruse, H. (2004). Behaviour-based modelling of hexapod locomotion: Linking biology and technical application. Arthropod Structure & Development, 33 (3), 237-250. 10.1016/j.asd.2004.05.004

Ekman, P. (1999). Basic emotions. In Dalgleish, T. and Power, M. J. (Eds.) Handbook of cognition and emotion (pp. 45-60). New York, NY: John Wiley & Sons Ltd.

Fehrer, E. & Raab, D. (1962). Reaction time to stimuli masked by metacontrast. Journal of Experimental Psychology, 62 (2), 143-147. 10.1037/h0040795

Fossat, P., Bacqué-Cazenave, J., De Deurwaerdère, P., Delbecque, J.-P. & Cattaert, D. (2014). Anxiety-like behavior in crayfish is controlled by serotonin. Science, 344 (6189), 1293-1297. 10.1126/science.1248811

Gibson, J. J. (1979). The ecological approach to visual perception. New Jersey: Lawrence Erlbaum Associates.

Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20 (1), 1-55.

Glenberg, A. M. & Gallese, V. (2011). Action-based language: A theory of language acquisition, comprehension, and production. Cortex, 48 (7), 905-922. 10.1016/j.cortex.2011.04.010

Goldman, A. (2005). Imitation, mind reading, and simulation. In Hurley, S. and Chater, N. (Eds.) Perspectives on imitation II (pp. 80-81). Cambridge, MA: MIT Press.

Goschke, T. (2013). Volition in action: Intentions, control dilemmas, and the dynamic regulation of cognitive control. In Prinz, W., Beisert, M. and Herwig, A. (Eds.) Action science: Foundations of an emerging discipline (pp. 409-434). Cambridge, MA: MIT Press.

Hesslow, G. (2002). Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences, 6 (6), 242-247. 10.1016/s1364-6613(02)01913-7

Hoinville, T., Wehner, R. & Cruse, H. (2012). Learning and retrieval of memory elements in a navigation task. In Prescott, T. J., Lepora, N. F., Mura, A. and Verschure, P. F. M. J. (Eds.) Biomimetic and Biohybrid Systems (pp. 120-131). 10.1007/978-3-642-31525-1_11

Holland, O. & Goodman, R. (2003). Robots with internal models: A route to machine consciousness? Journal of Consciousness Studies, Special Issue on Machine Consciousness, 10 (4-5), 77-109.

Keysers, C. & Gazzola, V. (2007). Integrating simulation and theory of mind: From self to social cognition. Trends in Cognitive Sciences, 11 (5), 194-196. 10.1016/j.tics.2007.02.002

Koch, C. & Tsuchiya, N. (2007). Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11 (1), 16-22. 10.1016/j.tics.2006.10.012

Kouider, S., de Gardelle, V., Sackur, J. & Dupoux, E. (2010). How rich is consciousness? The partial awareness hypothesis. Trends in Cognitive Sciences, 14 (7), 301-307. 10.1016/j.tics.2010.04.006

Kugler, P. N., Shaw, R. E., Vicente, K. J. & Kinsella-Shaw, J. (1990). Inquiry into intentional systems I: Issues in ecological physics. Psychological Research, 52 (2), 98-121. 10.1007/BF00877518

Lakoff, G. & Nunez, R. (2000). Where mathematics comes from: How the embodied mind brings mathematics into being. New York, NY: Basic Books.

Laughlin, R. B. & Pines, D. (2000). The theory of everything. Proceedings of the National Academy of Sciences of the United States of America, 97 (1), 28-31. 10.1073/pnas.97.1.28

Lau, H. & Rosenthal, D. M. (2011). Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences, 15 (8), 365-373. 10.1016/j.tics.2011.05.009

Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64, 354-361.

Libet, B., Alberts, W. W., Wright, E. W., Delattre, L. D., Levin, G. & Feinstein, B. (1964). Production of threshold levels of conscious sensation by electrical stimulation of human somatosensory cortex. Journal of Neurophysiology, 27 (4), 546-578. 10.1007/978-1-4612-0355-1_1

Loeb, G. E. (2001). Learning from the spinal cord. Journal of Physiology, 533, 111-117. 10.1111/j.1469-7793.2001.0111b.x


Maes, P. (1991). A bottom-up mechanism for behavior selection in an artificial creature. In Meyer, J.-A. and Wilson, S. W. (Eds.) (pp. 238-246). Cambridge, MA: MIT Press.

McFarland, D. & Bösser, T. (1993). Intelligent behavior in animals and robots. Cambridge, MA: MIT Press.

Mehrabian, A. (1996). Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 4, 261-292.

Menzel, R., Brembs, B. & Giurfa, M. (2007). Cognition in invertebrates. In Kaas, J. H. (Ed.) Evolution of nervous systems in invertebrates (pp. 403-442). Oxford, UK: Oxford University Press.

Metzinger, T. (2006). Different conceptions of embodiment. Psyche, 12 (4).

(2009). The ego tunnel - The science of the mind and the myth of the self. New York, NY: Basic Books.

(2013). Two principles for robot ethics. In Hilgendorf, E. and Günther, J.-P. (Eds.) Robotik und Gesetzgebung (pp. 263-302). Baden-Baden: Nomos.

(2014). First-order embodiment, second-order embodiment, third-order embodiment: From spatiotemporal self-location to minimal phenomenal selfhood. In Shapiro, L. A. (Ed.) The Routledge Handbook of Embodied Cognition (pp. 272-286). London, UK: Routledge.

Mussa-Ivaldi, F. A., Morasso, P. & Zaccaria, R. (1988). Kinematic networks distributed model for representing and regularizing motor redundancy. Biological Cybernetics, 60 (1), 1-16. 10.1007/BF00205967

Narayanan, S. (1997). Talking the talk is like walking the walk: A computational model of verbal aspect. COGSCI-97 (pp. 548-553). Stanford, CA. 10.1.1.35.1211

Neisser, U. (1967). Cognitive psychology. New York, NY: Appleton-Century-Crofts.

Neumann, O. & Klotz, W. (1994). Motor responses to non-reportable, masked stimuli: Where is the limit of direct parameter specification? In Umiltà, C. and Moscovitch, M. (Eds.) Attention and performance XV (pp. 123-150). Cambridge, MA: MIT Press.

Norman, D. A. & Shallice, T. (1986). Attention to action: Willed and automatic control of behavior. In Davidson, R. J., Schwartz, G. E. and Shapiro, D. (Eds.) Consciousness and self-regulation: Advances in research and theory (pp. 1-18). New York, NY: Plenum.

O'Connor, C. M., Cree, G. S. & McRae, K. (2009). Conceptual hierarchies in a flat attractor network: Dynamics of learning and computations. Cognitive Science, 33 (4), 665-708. 10.1111/j.1551-6709.2009.01024.x

Pacherie, E. (2006). Toward a dynamic theory of intentions. In Pockett, S., Banks, W. P. and Gallagher, S. (Eds.) Does consciousness cause behavior? An investigation of the nature of volition (pp. 145-167). Cambridge, MA: MIT Press.

Parisi, D. & Petrosino, G. (2010). Robots that have emotions. Adaptive Behavior, 18 (6), 453-469. 10.1177/1059712310388528

Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In Plutchik, R. and Kellerman, H. (Eds.) Emotion: Theory, research, and experience: Vol. 1. Theories of emotion (pp. 3-23). New York, NY: Academic.

Premack, D. G. & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1 (4), 515-526. 10.1017/S0140525X00076512

Pérez, C. H., Escribano, G. S. & Sanz, R. (2012). The morphological approach to emotion modelling in robotics. Adaptive Behavior, 20 (5), 388-404. 10.1177/1059712312451604

Rizzolatti, G. & Luppino, G. (2001). The cortical motor system. Neuron, 31 (6), 889-901. 10.1016/S0896-6273(01)00423-8

Rosenthal, D. M. (2002). How many kinds of consciousness? Consciousness and Cognition, 11 (4), 653-665. 10.1016/S1053-8100(02)00017-X

Russell, S. & Norvig, P. (2003). Artificial intelligence: A modern approach. Englewood Cliffs, NJ: Prentice-Hall.

Schier, E. (2009). Identifying phenomenal consciousness. Consciousness and Cognition, 18 (1), 216-222. 10.1016/j.concog.2008.04.001

Schilling, M. (2011). Universally manipulable body models - Dual quaternion representations in layered and dynamic MMCs. Autonomous Robots, 30 (4), 399-425. 10.1007/s10514-011-9226-3

Schilling, M. & Cruse, H. (2007). Hierarchical MMC networks as a manipulable body model. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL (pp. 2141-2146). 10.1109/IJCNN.2007.4371289

(2008). The evolution of cognition - From first order to second order embodiment. In Wachsmuth, I. and Knoblich, G. (Eds.) Modeling Communication with Robots and Virtual Humans (pp. 77-108). Berlin, GER: Springer.

(2012). What's next: Recruitment of a grounded predictive body model for planning a robot's actions. Frontiers in Psychology, 3 (383). 10.3389/fpsyg.2012.00383

Schilling, M. & Cruse, H. (submitted). reaCog, a minimal cognitive controller based on recruitment of reactive systems.


Schilling, M., Schneider, A., Cruse, H. & Schmitz, J. (2008). Local control mechanisms in six-legged walking. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2008 (pp. 2655-2660). 10.1109/iros.2008.4650591

Schilling, M., Paskarbeit, J., Schmitz, J., Schneider, A. & Cruse, H. (2012). Grounding an internal body model of a hexapod walker - Control of curve walking in a biologically inspired robot. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012 (pp. 2762-2768). 10.1109/iros.2012.6385709

Schilling, M., Hoinville, T., Schmitz, J. & Cruse, H. (2013a). Walknet, a bio-inspired controller for hexapod walking. Biological Cybernetics, 107 (4), 397-419. 10.1007/s00422-013-0563-5

Schilling, M., Paskarbeit, J., Hoinville, T., Hüffmeier, A., Schneider, A., Schmitz, J. & Cruse, H. (2013b). A hexapod walker using a heterarchical architecture for action selection. Frontiers in Computational Neuroscience, 7. 10.3389/fncom.2013.00126

Schmitz, J., Schneider, A., Schilling, M. & Cruse, H. (2008). No need for a body model: Positive velocity feedback for the control of an 18-DOF robot walker. Applied Bionics and Biomechanics, 5 (3), 135-147. 10.1080/11762320802221074

Schneider, W. (2013). Selective visual processing across competition episodes: A theory of task-driven visual attention and working memory. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 368 (1628), 20130060. 10.1098/rstb.2013.0060

Schneider, A., Paskarbeit, J., Schäffersmann, M. & Schmitz, J. (2011). Biomechatronics for embodied intelligence of an insectoid robot. Proc. ICIRA 2011 (pp. 1-11). 10.1007/978-3-642-25489-5_1

Seth, A. (2007). Models of consciousness. Scholarpedia, 2 (1), 1328. 10.4249/scholarpedia.1328

Singer, W. & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18, 555-586. 10.1146/annurev.ne.18.030195.003011

Spranger, M., Höfer, S. & Hild, M. (2009). Biologically inspired posture recognition and posture change detection for humanoid robots. Proceedings of ROBIO'09: IEEE International Conference on Robotics and Biomimetics (pp. 562-567). 10.1109/ROBIO.2009.5420708

Steels, L. (1995). Intelligence - Dynamics and representations. In Steels, L. (Ed.) (pp. 72-89). New York, NY: Springer. 10.1007/978-3-642-79629-6_4

(2003). Intelligence with representation. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 361 (1811), 2381-2395. 10.1098/rsta.2003.1257

(2007). The symbol grounding problem is solved, so what's next? In De Vega, M., Glenberg, A. and Graesser, A. (Eds.) Symbols, embodiment and meaning. New Haven, CT: Academic Press.

Steels, L. & Belpaeme, T. (2005). Coordinating perceptually grounded categories through language: A case study for colour. Behavioral and Brain Sciences, 28 (4), 469-489. 10.1017/S0140525X05000087

Steels, L. & Spranger, M. (2008). The robot in the mirror. Connection Science, 20 (4), 337-358. 10.1080/09540090802413186

Tomasello, M. (2009). Why we cooperate. Cambridge, MA: MIT Press.

Valdez, P. & Mehrabian, A. (1994). Effects of color on emotions. Journal of Experimental Psychology: General, 123 (4), 394-409. 10.1037/a0031821

Vision, G. (2011). Re-emergence: Locating conscious properties in a material world. Cambridge, MA: MIT Press.

von Kleist, H. (1810). Über das Marionettentheater. In Sembdner, H. (Ed.) Heinrich von Kleist, Sämtliche Werke und Briefe, Bd. 2 (p. 345). München, GER: Deutscher Taschenbuch Verlag (originally appeared in Berliner Abendblättern, 1. Jg., 1810).

Wundt, W. (1863). Vorlesung über die Menschen- und Tierseele. Leipzig, GER: Voss Verlag.

Yang, Z., Bertolucci, F., Wolf, R. & Heisenberg, M. (2014). Flies cope with uncontrollable stress by learned helplessness. Current Biology, 23 (9), 799-803. 10.1016/j.cub.2013.03.054

Zylberberg, A., Dehaene, S., Roelfsema, P. R. & Sigman, M. (2011). The human Turing machine: A neural framework for mental programs. Trends in Cognitive Sciences, 15 (7), 293-300. 10.1016/j.tics.2011.05.007
