
Journal of Artificial General Intelligence 8(1) 1-30, 2017        Submitted 2017-02-14
DOI: 10.1515/jagi-2017-0001        Accepted 2017-00-00

Motivation and Emotion in NARS

Pei Wang  PEI.WANG@TEMPLE.EDU
Xiang Li  XIANG.LI003@TEMPLE.EDU
Department of Computer and Information Sciences
Temple University, USA

Patrick Hammer  PATRICKHAMMER9@HOTMAIL.COM

Institute for Software Technology
Graz University of Technology, Austria

Editor: Joscha Bach, Eva Hudlicka, Stacy C. Marsella

Abstract

This paper describes the conceptual design and experimental implementation of the components of NARS that are directly related to motivation and emotion. As an AGI system, NARS is constantly driven by multiple tasks, which both cooperate and compete with each other, and this task complex evolves according to the system's experience. Emotion in NARS serves two main functions: to appraise and categorize the current experience, and to influence the system's allocation of resources accordingly. This design is compared to other approaches in AI and AGI research, as well as to the relevant aspects of the human mind.

Keywords: motivation, emotion, reasoning, learning, insufficient knowledge and resources

1. Introduction

There was a time when most AI (Artificial Intelligence) researchers assumed a simple motivational mechanism, and considered emotions as unnecessary and even undesired. The major reason is that a traditional AI system is designed for a specific type of problem whose solving process follows a pre-defined algorithm. In such a situation, the system's motivation is fixed, and emotions are distracting. As in human thinking, "being emotional" or "acting emotionally" is often considered irrational. Consequently, researchers thought there was no reason for an AI system to have emotion; instead, an AI system should simply pursue the goal it is given or designed for.

However, the needs of applications and the influence of other fields (like psychology) have driven AI research in the direction of more and more complicated motivational mechanisms. Often these mechanisms have emotion as a necessary component. In the field of AGI (Artificial General Intelligence), this is especially the case.

In this paper, we argue that the motivations in AGI systems are necessarily complex, conflicting, and evolving over time. To handle this situation, some form of emotion is crucial, especially when the system does not have sufficient knowledge and resources with respect to the problems being solved. We will introduce the motivation and emotion components of NARS, an AGI system we have been building, and compare them with other approaches. Though some of the ideas have been presented in our previous publications (Wang, 2006, 2012, 2013; Wang, Talanov, and Hammer, 2016), here they are described more systematically, and updated according to the progress of our research.

Published by De Gruyter Open under the Creative Commons Attribution 3.0 License.


2. Previous Works

2.1 Motivation

The study of "motivation" is the study of the initialization, orientation, and termination of the processes within an intelligent system. Similar terms include "drive", "objective", "intention", "aim", "goal", "task", "job", "purpose", and so on. Processes in an intelligent system serve certain purposes, and have teleological explanations that answer questions about "What to do" and "Why to do it". These questions are different from those about "What is being done" and "How to do it".

In traditional computer science, these topics are implicitly covered in the notions of "algorithm" and "computation". For instance, in a Turing Machine the start state and final states must be predetermined for a computation process. A computing machine can repeat such a process indefinitely, so the issue of "motivation" is trivially handled: a program is "motivated" to do what it is designed to do, and will stop whenever the job is done.

AI often involves problems that are more complex or general than those solved by typical algorithms. There are two major ways to specify the objective of these more general functions. First, predetermined "goal states" can be defined to answer the question "Has the objective been accomplished?", as seen in heuristic search methods (Simon and Newell, 1958). Second, a numerical function can be used, where the optimization of its value corresponds to the degree to which the objective has been met, as seen in reinforcement learning (Sutton and Barto, 1998).

In terms of the origin of a motivation, there are three types:

• Predetermined as part of the system design. Motivations of this type are designed and coded into the system by the designers before the system is used. When running, the system is driven by these motivations to achieve its predefined objective. Usually, this type of system does not manage resources by itself, and the knowledge required by the objective is assumed to be sufficient. When these conditions are not satisfied, the progress toward achieving the objective may be aborted.

• Entered from the user interface when the system runs. This type of system does not entirely rely on predetermined motivations, as it accepts tasks via the user interface from time to time. It is more flexible than the previous type, because it can be driven by different tasks in different conditions. There are usually restrictions on what types of task are acceptable, and when they can be accepted. Usually it is assumed that for each type of task there exists a program to call, and that the resource demands are within the system's capability. Therefore it is the user's responsibility to ensure that the system is not overloaded or asked to do things beyond its capability.

• Generated by the system itself as a means to achieve given ends. When a given (predetermined or input) goal cannot be directly achieved, it is usually decomposed into sub-goals recursively, and the results of the sub-goals are then merged or combined into the result of the given goal. Related AI techniques include means-ends analysis and backward chaining. Usually such goals are treated as subsidiary and dependent on the given goals, and their scope and generation routines are also predetermined.

Besides the above basic mechanisms, there are works with more complicated motivational mechanisms. Here we introduce a few examples.


Beaudoin (1994) surveyed the goal processing mechanisms in contemporary agent systems, and drew some insightful conclusions. He argued that autonomous agents should have multiple goals produced by different motivations, and that they should have the ability to discover, set, and meet deadlines for their goals. Furthermore, an agent often does not have complete and correct knowledge at the very beginning, and can only partially predict the consequences of its actions. To satisfy the requirements for autonomous agents, a system needs to generate and manage goals in interaction with dynamic environments.

The BDI (Belief-Desire-Intention) model provides a mechanism for the interaction among three separate components of the system's state: beliefs (information about the environment), desires (the motivational state of the system), and intentions (goals the system has committed to realize). To deal with the "changing world" problem, different types of commitment are distinguished, depending on whether the corresponding beliefs and desires can be modified. Consequently, this model allows its motivations to change as the environment changes (Rao and Georgeff, 1995).

A "goal reasoning" process is used in the actor simulator ACTORSIM to decide "which goal(s) to progress given trade-offs in dynamic, possibly adversarial, environment". A goal is described as a set of propositions, and is stored together with related information, such as its constraints and how it can be achieved. The system does not simply pursue the given goals immediately; instead, goals are created, stored, and selected to pursue. Furthermore, the system can learn how to achieve the goals from its experience (Roberts et al., 2016).

To meet the requirements of generality and versatility, AGI systems usually employ more complicated goal maintenance mechanisms.

Some AGI systems are designed mainly to achieve certain functions, and their motivational structures are built accordingly. For instance, Soar is a problem-solving system, so all deliberate goal-oriented behaviors are cast as the selection and application of operators to a state, and a goal is implicitly represented by goal-specific rules that test the state for specific features and recognize when the goal is achieved (Marinier, Laird, and Lewis, 2008). On the other hand, GLAIR is designed as a reasoning system, which is either thinking about some percept or answering some question. Later an acting component was added, so the system can also be driven by a command or a goal (Shapiro and Bona, 2010).

There are also AGI systems that are designed mainly to simulate the human mind. Consequently, their motivations are more human-like. In MicroPsi2, the system generates its own goals to satisfy a predefined set of needs: physiological needs, social needs, and cognitive needs. If a need becomes active and cannot be resolved by autonomous regulation, an urge is triggered. For each unsatisfied goal, the agent tries to recall an applicable strategy, and spreading activation is used to identify a possible sequence of actions (Bach, 2015). In contrast, LIDA does not have any built-in drives or motivators. It uses artificial feelings and emotions to implement the motivation needed for an agent to select appropriate actions. Such feelings implicitly give rise to values, that is, preferences for actions in particular situations, which in turn serve to motivate action selection (Franklin et al., 2014).

There are also works focusing on the implications of motivations. For example, Omohundro (2008) predicts that a sufficiently powerful artificial intelligence will inevitably exhibit some potentially harmful behaviors. This problem is intrinsic to the nature of a goal-driven system: regardless of their initially programmed goals, such systems will tend not to stop or limit their self-improvement, and will act to protect themselves. Such possibilities raise the issue of how to regulate the goals in AI systems.


2.2 Emotion

Emotions are complex psychophysiological phenomena displayed by countless living creatures. This complexity is highlighted by the varying definitions provided by the many academic disciplines that study emotion. These definitions typically include subjective descriptions of feelings, physiological expressions, biological reactions, and mental states.

Although emotions have been studied deeply by psychologists and cognitive scientists, they are still relatively new territory for AI researchers. The debate on the appropriateness and necessity of adding emotion to AI systems has lasted for years, but it has not stopped research on the roles emotions could play in decision making, planning, resource management, etc., within AI systems.

The objectives of emotional mechanisms in AI systems can be separated into two types:

• Recognizing and simulating human emotions. This is the major objective of affective computing (Picard, 1997), which requires the machine to have the ability to interpret the emotional states of humans. It can be achieved by training the machine with pictures or videos of human facial expressions, body postures, recordings of speech, or cues like skin temperature, so that the machine can recognize different human emotions. The machine then adapts its behaviors accordingly, so as to make appropriate responses to humans. A project of this type typically starts by simulating some basic human emotions, like fear, happiness, or sadness. Later on, those basic emotions are extended to more complicated emotions by adding more features and factors. Some researchers believe that neurotransmitters like dopamine, epinephrine, and neuropeptides play critical roles in emotions, so they try to simulate the variation of these factors to add emotions to the machine (Talanov and Toschev, 2014).

• Carrying out functions similar to those of human emotion. Works of this type consider "emotion" as a mechanism that serves certain cognitive functions, including the "intrapersonal" functions of regulating a system's perception and action, as well as the "interpersonal" functions of communicating a system's mental status to other systems. As long as an AI system has a mechanism serving similar functions, it qualifies as having a form of "emotion", which is taken to be a more general notion than "human emotion". Among the projects guided by such a belief, emotion is treated quite differently, as each system usually focuses on different functions and uses different techniques (Arbib and Fellous, 2004; Beaudoin, 1994; Ganjoo, 2005). Here the results are evaluated not according to how closely the mechanism resembles human emotion in its details, but according to how well it carries out the relevant functions.

In AGI projects, emotion has been widely recognized as a necessary component:

• Soar uses emotions to drive reinforcement learning. Emotion is understood as an appraisal process that generates a reward signal, which is then used in reinforcement learning to improve the system's performance (Marinier, Laird, and Lewis, 2008).

• Sigma also takes emotion as appraisal of situations on qualities including expectedness and desirability, though the result is mainly used in attention allocation (Rosenbloom, Gratch, and Ustun, 2015).

• In LIDA, feelings and emotions are used as motivators and facilitators of learning, where "feelings" are inputs from sensors, and "emotions" are feelings of cognitive content, "such as


the joy at the unexpected meeting with a friend or the embarrassment at having said the wrong thing". These feelings and emotions play roles in action selection by adjusting the activation of the relevant behavior schemes, as well as modulating various forms of learning (Franklin et al., 2014).

• MicroPsi models emotions as emergent phenomena that are captured implicitly "by identifying the parameters that modify the agent's cognitive behavior and are thus the correlates of the emotions". At the lowest level, there are various modulators that influence the potential of the actions, and altogether they form an "affective state" that indicates the mode of processing for the system as a whole, which may have a mental representation and eventually modifies the system's behavior (Bach, 2012).

• In Thill and Lowe (2012), some functions of emotions are discussed. It is argued that although the human body is very adaptive to the external environment, it is also very sensitive to internal changes. So when we try to build adaptive systems, the system's "feeling" is an important aspect that cannot be ignored.

• Strannegard, Cirillo, and Wessberg (2015) also proposed a simple network model with artificial emotions that is used for controlling concept development. The objective of this research is to use emotion to guide the process of internal concept formation.

Instead of focusing on how systems express emotions, AGI researchers are more concerned with how emotions affect the systems themselves, even though the concrete design differs from system to system.

3. NARS Overview

NARS (Non-Axiomatic Reasoning System) is a general-purpose intelligent system designed in the reasoning framework. Here we only introduce the aspects of NARS that are directly related to the current discussion. For comprehensive descriptions of this project, please see Wang (2006, 2013).

3.1 Theoretical and strategic assumptions

NARS does not accept the mainstream AI opinion that "intelligence" is the ability to solve problems that were previously only solvable by the human brain, and that this ability can be obtained by integrating domain-specific solutions. Instead, intelligence is taken to be general and holistic, and defined as "the ability for a system to adapt to its environment and to work with insufficient knowledge and resources" (Wang, 2006).

When designing NARS, the environment is not taken to be uniform or stationary, even in the statistical sense. Instead, the system is open to any future situation, though as an adaptive system, NARS uses its (inevitably limited) past experience to guide its behaviors, and interacts with its environment in real time. This is considered a principle of "relative rationality".

In this context, "to adapt" means that the system selects its actions as if the environment were relatively stable, so when NARS makes predictions, it makes sense to depend on the system's past experience, though the system is prepared to deal with novel and unusual situations, and to revise its beliefs accordingly.


The most significant feature of this relative rationality is the Assumption of Insufficient Knowledge and Resources, or AIKR for short (Wang, 2006). Concretely, AIKR demands the following three features of the system:

Finite: The system has a constant information-processing capacity, in terms of processor speed, storage space, etc.

Real-Time: The system has to deal with problems that may show up at any moment, and the utility of their solutions may decrease over time.

Open: The system must accept input data and problems of any content, as long as they can be expressed in a format recognizable to the system.

Hence, when dealing with unexpected problems under constrained time and resources, NARS usually cannot consider all possibilities when facing a problem, but only a subset of them: the important and relevant possibilities, judged according to the system's experience. The finiteness assumption also requires a mechanism of forgetting. This is a special feature of NARS, allowing it to deal with limited storage space.

AIKR is a fundamental assumption motivated by human problem solving in the real world. Humans learn from experience and summarize that experience into knowledge. With this knowledge, they attempt to solve problems which they do not yet know how to solve. This ability is exactly what we consider intelligence, which is characterized not by what problems a system can solve, but by the restrictions under which the problems are solved.

The research goal of NARS is to design and build a computer system that can adapt to its environment while working with insufficient knowledge and resources. This is different from the goals of mainstream AI models, which aim at systems with given, specific problem-solving skills. The NARS project aims at building a system with a given (meta-level and general-purpose) learning ability that allows the system to acquire various problem-solving skills from its experience.

Although being a reasoning system is neither a necessary nor a sufficient condition for being an intelligent system, a reasoning system can provide a suitable framework for the study of intelligence. Instead of being domain-specific, the framework of reasoning forces the system to be general purpose. Reasoning is at a more abstract level than other, lower-level cognitive activities, and it is a critical cognitive skill that qualitatively distinguishes human beings from other animals.

Many cognitive processes, such as planning, learning, and decision making, can be formulated as types of reasoning; therefore, an intelligent system designed in the framework of a reasoning system can easily be extended to cover them. As a reasoning system follows a logic, each step of processing must be justifiable independently. As a result, inference steps can be linked at run time in novel orders to handle novel problems. This is a major reason for NARS to be designed in the framework of a reasoning system.

3.2 Knowledge representation

As a reasoning system, NARS uses a formal language, "Narsese", for knowledge representation, which is defined by a set of grammar rules. The system's inference rules form a formal logic, "NAL" (Non-Axiomatic Logic), that uses Narsese sentences as premises and conclusions. Here we only briefly introduce Narsese, without repeating all the formal definitions given in Wang (2013).


NAL belongs to a type of logic called "term logic", in which the smallest component of the language is a "term", and the simplest sentence has a "subject-copula-predicate" format. An inheritance statement is the basic form of statement in NAL, with the format "S → P", where S is the subject term, P is the predicate term, and "→" is the inheritance copula, defined as a reflexive and transitive relation from one term to another. The intuitive meaning of "S → P" is "S is a special case of P" or "P is a general case of S". For example, the representation of "Robin is a type of bird" is "robin → bird".

For a given term T, its extension consists of all of its known special cases, and its intension consists of all of its known general cases. Therefore, "S → P" is equivalent to "S is in the extension of P" and "P is in the intension of S".

A term in Narsese, in its simplest "atomic" form, is just an identifier, as a string of symbols from an arbitrary alphabet, even though in this paper we use English words to make the examples more comprehensible.

Besides atomic terms, Narsese also includes compound terms of various types. A compound term (con, C1, C2, . . . , Cn) is formed by a term connector, con, and one or more component terms C1, C2, . . . , Cn. The term connector is a logical constant with a pre-defined meaning in the system. Major types of compound terms in NARS include:

• Sets: An extensional set can be specified by enumerating its instances, such as {Tom, Jerry}; an intensional set can be specified by enumerating its properties, such as [big, red].

• Intersections and differences: Given two terms, the intersection and the difference of their extensions correspond to compound terms, respectively. For example, "(∩, bird, swimmer)" and "(−, bird, swimmer)" represent "birds that can swim" and "birds that cannot swim", respectively. Similar operators can be applied to their intensions, too.

• Relations: A relation can be specified between terms. For example, "Mary is the mother of Tom" can be expressed in Narsese equivalently as "(×, {Mary}, {Tom}) → mother-of", "{Mary} → (/, mother-of, ⋄, {Tom})", and "{Tom} → (/, mother-of, {Mary}, ⋄)", where '×' and '/' are the term connectors product and extensional image, respectively.

• Statements: A statement can be used as a term. For example, "Tom knows that the Earth is round" can be expressed in Narsese as "{{Earth} → [round]} → (/, know, {Tom}, ⋄)", where the subject term of the statement is itself a statement.

• Compound statements: Statements can be combined using the term connectors for disjunction ('∨'), conjunction ('∧'), and negation ('¬'), similar to those in propositional logic.

Besides the above constant terms, each of which corresponds to a specific concept in the system, Narsese also includes variable terms, each of which corresponds to an unspecified concept.

As variants of the inheritance copula ('→') introduced previously, Narsese also has three other copulas: similarity ('↔'), implication ('⇒'), and equivalence ('⇔'); the last two are used only between statements.

In NARS, an "event" is a statement with temporal attributes. Based on their occurrence order, two events E1 and E2 may have one of the following temporal relations:

• E1 happens before E2


• E1 happens after E2

• E1 happens when E2 happens

Temporal statements are formed by combining these temporal relations with the logical relations indicated by the term connectors and copulas. For example, the implication statement "E1 ⇒ E2" has three temporal versions corresponding to the above three temporal orders, respectively:

• E1 /⇒ E2

• E1 \⇒ E2

• E1 |⇒ E2

All the previous statements can be seen as NARS describing things or events from a third-person view, with the system as a spectator. Narsese also has operations, a special kind of event that can be executed by the system itself. An atomic operation is expressed as "op(a1, . . . , an)", and interpreted as the statement "(×, SELF, a1, . . . , an) → op", where a1, . . . , an is a list of arguments, op is an operator that has a procedural interpretation, and SELF is a special term indicating the system itself. For example, "(×, SELF, {key 007}) → hold" means the system holds the key 007.

Overall, there are three types of sentences defined in Narsese:

1. Judgment: a statement with a truth-value, for a piece of new knowledge that the system needs to absorb or already believes,

2. Question: a statement without a truth-value, for a question to be answered according to the system's knowledge,

3. Goal: a statement without a truth-value, for a goal to be achieved by executing some operations, according to the system's knowledge.

These three are also the types of tasks that NARS can be required to process.

3.3 Experience-grounded semantics

When studying a language, semantics refers to how items in the language are related to the environment in which the language is used. It provides the theory of meaning and truth, by answering questions like "What is the meaning of this term?" or "What is the truth-value of that statement?".

As an adaptive system working with insufficient knowledge and resources, NARS does not determine the truthfulness of its knowledge in a static and completely described environment. Since the environment changes over time, there is no guarantee that the past is always identical to the present and the future. Hence, in NARS the truth of each statement and the meaning of each term are grounded on nothing but the system's experience.

According to this experience-grounded semantics, no knowledge is guaranteed to remain "true" or "false" without any change in the future, and due to insufficient knowledge, no knowledge can be confirmed as absolutely "true" or "false". In NARS, the truth-value of a statement is the degree to which the statement is supported by past experience.


As in the example shown in the last section, "robin → bird" states that "Robin is a type of bird", and it is equivalent to saying that the extension of robin is included in the extension of bird, as well as that the intension of bird is included in the intension of robin. Therefore, if a term is in the extension (or intension) of both robin and bird, then its existence supports the statement, or provides positive evidence. On the contrary, if a term is in the extension of robin but not the extension of bird, or is in the intension of bird but not the intension of robin, it provides negative evidence for the statement.

For a given statement, we use w+, w−, and w to denote the amount of positive evidence, the amount of negative evidence, and the total amount of evidence, respectively. With these measurements, we define a two-dimensional truth-value for a Narsese statement as a pair of real numbers 〈f, c〉. Here f is called the frequency of the statement, and is defined as the proportion of positive evidence among the total amount of evidence, that is, f = w+/w. The value c is called the confidence of the statement, and is defined as the proportion of current evidence among the total amount of evidence at a moment in the future, after new evidence of a constant amount k has been collected, that is, c = w/(w + k). In the current discussion, the "personality parameter" k takes the default value 1. Roughly speaking, frequency represents the uncertainty of the statement, and confidence represents the uncertainty of the frequency.
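To make these definitions concrete, the following is a minimal sketch of how a truth-value is obtained from evidence counts. It is written in Python purely for illustration (Python is not the implementation language of NARS, and the function and variable names here are ours):

def truth_from_evidence(w_plus, w_minus, k=1.0):
    """Compute the NAL truth-value <f, c> from positive and negative evidence.

    w_plus:  amount of positive evidence
    w_minus: amount of negative evidence
    k:       the "personality parameter" (default 1)
    """
    w = w_plus + w_minus   # total amount of evidence
    f = w_plus / w         # frequency: proportion of positive evidence
    c = w / (w + k)        # confidence: stability of f against k new pieces of evidence
    return f, c

# Example: 3 pieces of positive and 1 piece of negative evidence for "robin -> bird"
f, c = truth_from_evidence(3, 1)   # f = 0.75, c = 0.8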

Defined in this way, the truth-value of a statement depends on the available evidence. The meaning of a term is defined by its extension and intension, that is, by the experienced relations between this term and the other terms. In this way, both "truth" and "meaning" in NARS are defined according to the experience of the system, which is why the semantics is called "experience-grounded".

3.4 Inference rules

The inference rules of NAL derive conclusions recursively from the system's experience, which consists of statements with truth-values, indicating the experienced relations between terms as well as the strength of these relations. In the following paragraphs the rules are briefly introduced; their details can be found in Wang (2006).

The typical inference rules of NAL are syllogistic, as each needs two premises sharing one common term (either a subject term or a predicate term). According to the experience-grounded semantics, a valid rule should generate conclusions between the other two (unshared) terms, and the truth-value of each conclusion reflects the evidence provided by the premises.

In terms of the nature of the conclusion, the inference rules of NARS are divided into three categories:

• Local rules: No new statement is produced, and the conclusion comes from a revision or selection of the premises.

• Forward rules: New judgments are produced from a given judgment and a relevant belief.

• Backward rules: New questions (or goals) are produced from a given question (or goal) and a relevant belief.

There are two local inference rules: the revision rule and the choice rule.

Since NARS is open to all kinds of new experience, there may be inconsistent judgments. As the future is often different from the past, all existing beliefs may be revised by new evidence. The revision rule accepts two judgments about the same statement as premises, and generates a new judgment for the statement, with a truth-value obtained by pooling the evidence of the premises.


Consequently, the frequency of the conclusion is a weighted average of those of the premises, and the confidence of the conclusion is higher than that of either premise.
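The pooling of evidence by the revision rule can be sketched as follows. This is again an illustrative Python fragment rather than the actual implementation; recovering an evidence amount from a truth-value uses w = k·c/(1−c), which follows directly from c = w/(w + k):

def to_evidence(f, c, k=1.0):
    """Recover (w_plus, w) from a truth-value <f, c>, using c = w / (w + k)."""
    w = k * c / (1.0 - c)
    return f * w, w

def revision(truth1, truth2, k=1.0):
    """Pool the evidence of two judgments about the same statement."""
    (f1, c1), (f2, c2) = truth1, truth2
    w1_plus, w1 = to_evidence(f1, c1, k)
    w2_plus, w2 = to_evidence(f2, c2, k)
    w_plus, w = w1_plus + w2_plus, w1 + w2   # evidence is additive
    return w_plus / w, w / (w + k)

# Pooling <0.9, 0.5> with <0.5, 0.5> gives f = 0.7 and a confidence above 0.5
print(revision((0.9, 0.5), (0.5, 0.5)))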

The choice rule is used to choose between competing judgments, for example, as answers to a given question. There are two kinds of questions in NARS:

• Evaluative question: A question like "S → P?" asks the system to decide the truth-value of the given statement. If there are two candidates, the choice rule will return the one with the higher confidence value. This is because when an adaptive system faces contradicting beliefs, it should prefer the belief which is supported by more evidence.

• Selective question: A question like "S → ?x" (or "?x → P") asks the system to find a constant term that can instantiate the variable '?x' in the question and give the best-supported answer. If two candidate answers suggest different instantiations T1 and T2 for the variable, their frequency and confidence will be combined into a single expectation measurement, defined as e = c × (f − 0.5) + 0.5 (see the sketch below). The justification of this measurement is given in Wang (2013).
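The choice rule can be sketched as follows (an illustrative Python fragment; the names are ours):

def expectation(f, c):
    """Combine frequency and confidence into a single expectation value."""
    return c * (f - 0.5) + 0.5

def choose(candidate1, candidate2, same_statement):
    """Choose between two candidate answers, each a (statement, f, c) triple.

    For an evaluative question both candidates concern the same statement,
    so the one with higher confidence wins; for a selective question the
    candidates instantiate the variable differently, so the one with the
    higher expectation wins.
    """
    (_, f1, c1), (_, f2, c2) = candidate1, candidate2
    if same_statement:
        return candidate1 if c1 >= c2 else candidate2
    return candidate1 if expectation(f1, c1) >= expectation(f2, c2) else candidate2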

The following three inference rules are the most basic forward rules of NAL:

deduction:   M → P 〈f1, c1〉,   S → M 〈f2, c2〉   ⊢   S → P 〈f, c〉
induction:   M → P 〈f1, c1〉,   M → S 〈f2, c2〉   ⊢   S → P 〈f, c〉
abduction:   P → M 〈f1, c1〉,   S → M 〈f2, c2〉   ⊢   S → P 〈f, c〉

Each forward inference rule has its own truth-value function that calculates 〈f, c〉 from 〈f1, c1〉 and 〈f2, c2〉. These functions are established in Wang (2013); here we only put them into two groups:

• Strong Inference: The upper bound of c is 1. Among the above three, only the deduction rule belongs to this group.

• Weak Inference: The upper bound of c is 1/(1+k) ≤ 1/2. Among the above three, induction and abduction belong to this group (see the sketch below).
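As an illustration of the two groups, the following sketch shows the deduction truth function as we read it from Wang (2013), together with the generic conversion from an evidence weight to a confidence value. Weak rules (whose exact functions are omitted here; see Wang, 2013) derive their conclusions from evidence whose total weight is at most one unit, which is why their confidence is bounded by 1/(1+k). Treat this as an illustrative reading, not a complete specification:

def deduction(f1, c1, f2, c2):
    """Strong inference: confidence can approach 1 when both premises are strong."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return f, c

def weak_confidence(w, k=1.0):
    """For weak inference the evidence weight w is at most 1,
    so c = w / (w + k) can never exceed 1 / (1 + k)."""
    return w / (w + k)

print(deduction(1.0, 0.9, 1.0, 0.9))   # (1.0, 0.81): strong, c close to 1
print(weak_confidence(1.0))            # 0.5 = 1/(1+k) with k = 1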

NAL has other syllogistic rules, and also has composition rules that build compound terms to capture the observed patterns in experience. For example, from "swan → bird 〈f1, c1〉" and "swan → swimmer 〈f2, c2〉", a rule can produce "swan → (∩, bird, swimmer) 〈f, c〉".

For each forward inference rule that derives a conclusion J from two judgments J1 and J2, a backward inference rule can be established that takes J1 and a question on J as input, and derives a question on J2, because an answer to the derived question can be used together with J1 to provide an answer to the original question. For example, if the question is "robin → animal?", and there is a related belief "robin → bird 〈f1, c1〉", then a derived question "bird → animal?" can be generated. Backward inference on goals is similar.

3.5 Control mechanism

As a reasoning system, NARS works and accepts tasks in real time. New tasks include learning new knowledge, answering questions, and achieving goals. Besides input tasks (from users or other


systems), there are also derived tasks generated by the inference rules, recursively from the input tasks and the system's beliefs (i.e., accumulated judgments).

As new tasks (either input or derived) come to the system from time to time, the system has to deal with a large number of tasks at any moment. Given AIKR, the system cannot use all relevant beliefs for each task. Instead, the system has to use the available knowledge and affordable resources to process a task, and try to get the best solution possible.

A data structure specially designed for resource allocation in NARS is called a "bag". A bag contains a certain type of data item, has a constant capacity, and maintains a priority distribution among its items. There are three basic operations defined on a bag (a minimal sketch follows the list):

• put(item): put an item into the bag, and, if the bag is full, remove one item with the lowest priority.

• get(key): take an item from the bag with a given key that uniquely identifies the item.

• select(): select an item from the bag, where the probability for each item to be selected is roughly proportional to its priority value.
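The following is a deliberately simplified Python sketch of such a bag; the actual implementation is more elaborate, and the class and method details here are ours:

import random

class Bag:
    """A fixed-capacity container supporting priority-biased sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}   # key -> (priority, value)

    def put(self, key, priority, value):
        """Insert an item; if the bag is full, evict the lowest-priority item."""
        if len(self.items) >= self.capacity and key not in self.items:
            lowest = min(self.items, key=lambda k: self.items[k][0])
            del self.items[lowest]
        self.items[key] = (priority, value)

    def get(self, key):
        """Take out the item with the given key (or None if absent)."""
        entry = self.items.pop(key, None)
        return entry[1] if entry else None

    def select(self):
        """Sample an item with probability roughly proportional to its priority."""
        keys = list(self.items)
        weights = [self.items[k][0] for k in keys]
        key = random.choices(keys, weights=weights, k=1)[0]
        return key, self.items[key][1]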

NARS organizes beliefs (knowledge) and tasks into concepts. For each term T, there is a corresponding concept CT, which contains all the beliefs and tasks involving T. For example, knowledge about "robin → bird" is stored within the concept directly corresponding to that statement, and can also be accessed from the concepts corresponding to robin and bird, as well as from some other relevant concepts.

The memory of NARS can be seen as a bag of concepts, where each concept is named by a term (either atomic or compound), and contains a bag of beliefs and a bag of tasks. Each data item in a bag has a "budget", which includes a priority value indicating its current activation level and a durability value indicating how fast its activation level decreases over time.

As a reasoning system, NARS runs by repeating an inference cycle roughly consisting of the following actions (sketched after the list):

1. Select a concept from the memory.

2. Select a task within the concept.

3. Select a belief within the concept.

4. Trigger applicable inference rules on the selected task and belief to derive new tasks.

5. Adjust the budget of the selected belief, task, and concept and put them back into the corresponding bags.

6. Selectively put the new tasks into the corresponding concepts.
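Using the bag sketch above, one working cycle can be outlined roughly as follows. This is a schematic Python sketch only; `infer`, `adjust_budgets`, and `dispatch` are placeholders (passed in as parameters) for the rule application and budget bookkeeping described in the text:

from dataclasses import dataclass

@dataclass
class Concept:
    priority: float
    tasks: "Bag"     # bag of tasks (see the Bag sketch above)
    beliefs: "Bag"   # bag of beliefs

def inference_cycle(memory, infer, adjust_budgets, dispatch):
    """One working cycle, following the six steps listed above.
    `memory` is a Bag of Concept objects."""
    ckey, concept = memory.select()             # 1. select a concept
    tkey, task = concept.tasks.select()         # 2. select a task within it
    bkey, belief = concept.beliefs.select()     # 3. select a belief within it

    new_tasks = infer(task, belief)             # 4. derive new tasks

    adjust_budgets(concept, task, belief)       # 5. adjust budgets and put the
    memory.put(ckey, concept.priority, concept) #    selected items back
    concept.tasks.put(tkey, task.priority, task)
    concept.beliefs.put(bkey, belief.priority, belief)

    for derived in new_tasks:                   # 6. file each derived task under
        dispatch(memory, derived)               #    the concepts it involves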

Clearly, in the above procedure the priority distributions decide which possible inferences will be realized, by allocating the system's time-space resources. The priority value of each data item (belief, task, or concept) is decided by two factors:

• Long-term factor: More important items have higher priority values. The importance of an item is initially determined by its intrinsic features, then gradually adjusted according to the system's experience, such as how useful it has been in the past.


• Short-term factor: More relevant items have higher priority values. When a new task is added into a concept, the priority of the concept will increase, and if a task or belief is generated multiple times, its priority will increase. On the other hand, there is a forgetting process that decreases the priority of every item over time, according to its durability value.

Normally, NARS processes many tasks in parallel, but with different priorities and durabilities. The processing path of each task is determined at run time by the selections, which implicitly depend on the priority distributions at the moment, rather than by a predetermined algorithm. Since all these priority distributions change constantly and are influenced by many factors, the system works in an adaptive and context-sensitive manner.

4. Motivation in NARS

The motivation management of NARS allows general goal-oriented behavior of the system when facing complicated demands, in which the system can also generate new motivations based on existing motivations and beliefs, as well as deal with the conflicts among motivations.

4.1 Tasks and goals

In NARS, all "mental" activities are task-driven and, in a broad sense, goal-oriented. As mentioned previously, every internal activity is triggered by the processing of a certain task, be it the absorbing of new experience, the answering of a question, or the realization of a goal.

As such, in NARS a "goal" is a special type of "task", which differs from the other tasks in that it eventually invokes some operations that cause changes in the internal or external environment. In contrast, other tasks (judgments and questions) only change the system's beliefs, and provide solutions in the form of declarative sentences.

To handle conflicts and competitions among goals, each goal gets a "desire-value", which is defined as the truth-value of the corresponding belief that the achieving of this goal will lead to a desired state. In this way, desire-value functions can be obtained from truth-value functions (Wang, 2013).

Tasks themselves can either be input tasks, given by the user or by another external source, or derived tasks, meaning they were generated by the inference process recursively from the input tasks and the system's beliefs.

Input tasks can either be implanted (innate) before the system starts running or assigned at run time. The system accepts any task expressible in Narsese at any time.

Task derivation is belief-based in the sense that typically, in each inference step, a task T and a belief B derive a task T′, where T′ has the same type as T (i.e., judgment, question, or goal), and initially T′ serves as a means of achieving T. If T is a judgment, then this step is forward inference; otherwise it is backward inference. For instance, if T is a goal, T′ will be a derived goal, and its derivation is based on the system's belief that the achieving of T′ will more or less contribute to the achieving of T.

Given this mechanism, the system will generate new tasks, but never generate a task "out of nowhere". Instead, each derived task is anchored in previous experience and triggered by certain input task(s).

In NARS, all tasks are treated independently of each other, but compete for resources within the structure in which they are stored. This means that if T′ is derived from T, initially its "value" to the system depends on that of T, but later its processing will not depend on the value, or even the existence, of T anymore.


For this reason, we do not call T′ a "sub-task" of T.

What makes goal derivation different from other task derivation processes is that when a derived goal is generated, it is not immediately pursued by the system; it only increases the desire-value of the corresponding statement. Only when the desire-value of a statement is higher than a threshold is it pursued as a goal. This "decision-making" step is added because even if T′ serves the requirements of T, it may be in conflict with other co-existing goals. So it is pursued only when the overall positive evidence for its desirability outweighs the negative evidence. This is another reason that a derived goal should not be taken as a sub-goal of a single input goal.
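This decision-making step can be sketched roughly as follows. The fragment is illustrative only; the threshold value and the exact way desire-values are combined are our own simplifications of what is described in Wang (2013):

DECISION_THRESHOLD = 0.6   # an assumed default; in NARS this is a system parameter

def expectation(f, c):
    return c * (f - 0.5) + 0.5

def consider_goal(statement, desire_f, desire_c, threshold=DECISION_THRESHOLD):
    """Pursue a statement as a goal only if its desire is strong enough.

    A derived goal first only raises the desire-value of its statement (via
    revision with any existing desire); the statement is actually pursued
    once the expectation of the desire-value exceeds the threshold.
    """
    if expectation(desire_f, desire_c) > threshold:
        return ("pursue", statement)
    return ("wait", statement)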

All adaptation and self-organization activities within the system are driven and evaluated by task processing, though not by any single task alone; instead, each of them makes its individual contribution within the system. In particular:

• Self-organization of beliefs: As described previously, the priority value of a belief is determined by its contribution to task processing, which is mostly, but not only, determined by the amount of evidence that was summarized in the result, corresponding to the truth-value of the derived belief. In this way, the system's beliefs will eventually be prioritized by their usefulness. This allows the system to work properly even when it cannot consult all relevant beliefs when making a decision. A theoretical implication is that the beliefs are not aimed at an objective description of the world as it is, but at a subjective summary of the system's experience in the process of carrying out its tasks.

• Self-organization of operations: The system's primary operations are implanted, like a set of instructions from which all of its behaviors come. Under AIKR, NARS cannot afford the resources to search the space of all of their combinations when solving a problem. Instead, compound operations are formed and kept according to their contribution to task processing. Additionally, the preconditions and consequences of operations are learned whenever they are relevant to a goal. In this way the system can quickly recall relevant operations that help it to realize similar goals, and can also develop procedural skills that will be executed almost automatically, without much deliberation, while taking relevant preconditions into account. This can be considered a form of self-programming which is driven not only by the current goal, but is also shaped by the related task-processing experience in the past.

• Self-organization of goals: The system is typically driven by multiple goals, and this "goal complex" develops over time. NARS is always open to new input goals, and there is no requirement on their consistency or achievability. At the beginning, most derived goals correspond to input goals and are subordinate to them in terms of the attention that they receive. After some time this can change, however, especially when many input goals share a common derived goal, or when a derived goal gets direct rewards (from the emotional mechanism, for instance). Further, goal priority is adjusted by the goals' contribution to task processing, mainly by the evidence that was summarized in the result. Unlike with beliefs, however, the evidence revised among goals of the same content determines the summarized result, based on which the decision on whether the goal is pursued or not is made.


4.2 Major features

There are several factors that make the motivation mechanism of NARS quite special when compared to traditional approaches:

• Input goals can be specified at any time (including design time and run time). Also, the system is designed in such a way that it can potentially react immediately to new input goals, especially when they come with a high priority value. However, this does not mean that all ongoing mental processes are stopped to react to this new input. Instead, the new input is processed alongside them, and only the lowest-priority tasks are dropped to make room for the new input and the other new tasks arising from it when they are received.

• Multiple goals can co-exist within the system at the same time without any restriction on their content. This is related to "polytelic motivation", and the system's innate striving to satisfy them is a form of multidimensional optimization. Under AIKR, the system does not guarantee to find any global optimum here; rather, all of the goals act merely as "forces" that guide the self-organization within the system towards fulfillment of its goals.

• A goal is specified by a statement, rather than by a state, and can represent part of a situation. In NARS, goals are treated as events the system tries to achieve. Here, achieving means causing a belief with high expectation to appear that corresponds to the goal the system wants to achieve.

• Goals can be unreachable, variable, and also inconsistent with each other. Unreachable goals tend to be forgotten over time when the system finds no clue how to realize them. For inconsistent goals about the same statement, as previously mentioned, the system will try to resolve the conflict by the use of revision. Often, however, additional reasoning is needed even to recognize a conflict, by finding "implied" results about the same statement so that the revision rule can apply. In the case that the conflict cannot be resolved, often the best the system can do is choose one option over the other. Mechanisms such as the choice rule exist for this purpose.

• Each goal has its own priority, and the system can balance long-term goals and urgent goals: since goals are tasks, they compete with other tasks and goals within the system. This also has major implications for the previous point: for example, a conflict among low-priority goals will often be irrelevant, while a lot of time will be spent resolving conflicts among high-priority goals. But even then, full consistency cannot be guaranteed.

• A functional autonomy of derived tasks is realized, meaning that they can become independent of their parents, while competing with them and with other tasks for priority. The functional autonomy of derived goals in particular has another major consequence: in most traditional systems, the execution order from goal to sub-goal is fixed, in the sense that while the sub-goal is processed, the parent is resting, and is only active again when the sub-goal fails and an alternative sub-goal has to be selected (backtracking). In NARS there is only a loose connection here: the child and the parent can be active at the same time, work in parallel, and even compete with each other for priority. However, when a derived goal is generated, the concept corresponding to it gets activated, tending to make derived goals more active than their parents, at least for a short time after they are generated.


• In most traditional approaches, no more time is spent on fulfilled goals, while in NARS a goal can never be satisfied to 100 percent. However, the more satisfied a goal is, the more its priority will sink. This results in the system spending most of its time on highly desired goals that are not, or almost not, satisfied.

• The system's motivations are determined by both nature and nurture. For example, in a given usage scenario such as a robot, the goal to keep the battery level high should have a high desire-value by default, while other goals can gain a high desire-value over time as more and more evidence for their desirability accumulates, which mainly depends on the experience the system will have.

5. Emotion in NARS

Under AIKR, emotion is a functional necessity that improves the overall adaptive capability of the system. Emotion in NARS is not an imitation of human emotion intended to replicate all of its details. Instead, it is based on a generalization of human emotion, in which the biological aspects are removed but the functional aspects preserved.

Among the various functions of emotion in cognition and intelligence, the primary functions of emotion in NARS are object and situation appraisal, resource allocation, and action preparation. The communicative functions of emotion are taken to be secondary, and will be considered later.

5.1 The representation of emotion

As described previously, in NARS every event has a truth-value and a desire-value, expressing its current status and what the system wants it to be, respectively. Their difference is called (event-level) "satisfaction", which provides a basic appraisal of an individual aspect of the situation. It is a real number in [0, 1], with 0 for "completely unsatisfied", 1 for "completely satisfied", and the other cases in between.

There is also a system-level satisfaction, the accumulation of recent event-level satisfactions, which represents an appraisal of the overall situation. Technically, this value is adjusted in every working cycle, by using the satisfaction value of the event under processing to adjust the single overall satisfaction value. Consequently, the system is satisfied overall if, for the recently considered events, what it desires is approximately what it believes to be the case.

This system-level satisfaction can be roughly interpreted as an indicator of the system's "pleasure" or "happiness" level, which plays multiple roles within the system, such as influencing resource allocation.
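One plausible way to read this quantitatively is sketched below. This is our own illustrative formulation, not necessarily the exact functions used in the implementation: event-level satisfaction measures how close the truth-value of an event is to its desire-value, and the system-level value is a running average updated every working cycle:

def expectation(f, c):
    return c * (f - 0.5) + 0.5

def event_satisfaction(truth, desire):
    """In [0, 1]: 1 when what is believed matches what is desired, 0 when opposite."""
    t = expectation(*truth)
    d = expectation(*desire)
    return 1.0 - abs(d - t)

class OverallSatisfaction:
    """System-level satisfaction as an accumulation of recent event-level values."""

    def __init__(self, decay=0.9):
        self.value = 0.5     # start neutral
        self.decay = decay   # how much weight the past keeps each cycle

    def update(self, event_sat):
        # adjusted in every working cycle using the event under processing
        self.value = self.decay * self.value + (1.0 - self.decay) * event_sat
        return self.value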

To let the system be aware of these values, some "feeling" operators are provided, which reflect the satisfaction values into the internal experience of the system, as events whose full meaning the system has to acquire by itself. This happens through the use of reserved (innate) terms and statements, which form their own concepts, so-called "emotional concepts", within the memory of the system. These emotional concepts provide a perception of emotions and feelings to NARS, just as the perceptive concepts summarize the system's experience when interacting with the outside world.

These emotional concepts interact with other concepts as normal concepts would, leading to the generation of compound emotions by the inference process, represented by concepts that combine the emotional information with other experiences. For instance, dissatisfaction with an event may be


caused by other systems or by the system itself, may be about the past or the future, may be controllable or inevitable, etc., and all these differences will lead to different categorizations of the matter.

Additionally, desire-values are extended to non-event concepts according to their correlation with the overall satisfaction. For example, if the appearance of an object consistently leads to a high satisfaction level, it will be "liked" by the system, while if it usually leads to low satisfaction, it will be "disliked". Of course, there are many other things toward which the system has little emotion. These different attitudes mainly come from the system's experience, and will influence the system's treatment of the corresponding things.

As indicated above, in NARS emotional information appears in two distinct forms:

• At the "subconscious level", as desire-values and satisfaction values. These are outside of the experience of the system, since these values do not form statements the system could reason about.

• At the "conscious level", as events expressed using emotional concepts. These are inside the experience of the system, since they are represented as statements that are considered in the inference process of the system.

5.2 The effects of emotion

Emotions in both forms, those corresponding to the "subconscious level" as well as those corresponding to the "conscious level", contribute to the system's internal processes, and also to its external behaviors.

The emotional concepts in experience are processed like other concepts in inference. Consequently, they categorize the system's appraisal of objects and situations and allow the system to behave accordingly. For instance, the system may develop behavior patterns for "danger", even though each concrete danger has very different sources and causes.

Through the priority and desire-values of concepts, concepts also provide very specific hints to the control system: the resources given to incoming events when they enter the system depend on which concept they refer to and which statement they carry. In this way, concepts implement a top-down attention that is modulated by the emotional parameters of the system, which is effectively one place where emotions influence the system’s perception.

The “emotion-specific” treatments mainly happen at the “subconscious level”, where the emotional information is used in various processes.

• The desire-values of concepts are taken into account in attention allocation, where concepts with strong feeling (extreme desire-values) get more resources than those with weak feeling (neutral desire-values). These desire-values not only help the system to judge how long data items should be stored in memory, but also how much priority they should be assigned when entering the system.

• After an inference step, when a goal is relatively satisfied, its priority is decreased accordingly, and the belief used in the step gets a higher priority because of its usefulness. In this way, already satisfied goals get less attention from the system, while relevant knowledge that satisfied these goals tends to be kept in memory.

• In the decision-making rule, the threshold for a decision is lower in highly emotional situations, to allow quick responses, which is especially relevant when there is little time available to react (see the sketch following this list).

• The overall satisfaction is used as feedback to adjust the priority values of data items (concepts, tasks, beliefs), so that the ones associated with positive feeling are rewarded, and the ones associated with negative feeling are punished. In this way, the system shows a “pleasure-seeking” tendency, whose extent can be adjusted by a system parameter. This pleasure-seeking tendency can be considered as a motivation that is not directly based on any task, but as a “meta-task”.

• When the system is relatively satisfied, it is more likely to create new goals, while when the system is unsatisfied with the current situation, it is more likely to focus on the existing goals that have not been achieved.
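
The sketch below illustrates two of these adjustments, the lowered decision threshold and the satisfaction-driven priority feedback. All parameter values and function names are assumptions made for this sketch; the actual OpenNARS control functions differ in their details.

def decision_threshold(base_threshold, emotional_intensity, max_reduction=0.2):
    # Lower the decision threshold in highly emotional situations (intensity in [0, 1])
    # so that the system can commit to an action with less deliberation.
    return base_threshold - max_reduction * emotional_intensity

def adjust_priority(priority, associated_satisfaction, pleasure_seeking=0.1):
    # Reward data items associated with positive feeling and punish those associated
    # with negative feeling; pleasure_seeking is the system parameter that scales
    # the pleasure-seeking tendency.
    delta = pleasure_seeking * (associated_satisfaction - 0.5)
    return min(1.0, max(0.0, priority + delta))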

5.3 An example

The above mechanisms have mostly been implemented in the current NARS, and are under testing and tuning, so at the moment they have not yet produced results substantial enough to be evaluated systematically. Here we just use a simple example to show the motivational aspect of the system as well as some emotional aspects.

Here we see a belief (ending with ‘.’, with a default truth-value) and a goal (ending with ‘!’, with a default desire-value):

"Myself eating an apple makes myself replete."<eat(SELF,apple) =/> <SELF --> [replete]>>."I want to be replete!"<SELF --> [replete]>!

In this example, the system would derive a new goal that corresponds to a derived motivation:

"I am motivated to eat an apple!"eat(SELF,apple)!

Since this goal is an executable operation, it is directly achieved; the system then feels the consequence in the form of a new judgment in its experience:

"I am replete."<SELF --> [replete]>.

Now the system notices that it has satisfied the existing goal

"I want to be replete!"<SELF --> [replete]>!

This result also causes the overall satisfaction level of the system to rise. When it rises above a threshold, the following event is also generated:

"I am satisfied."<SELF --> [satisfied]>.

This event is observed like all other events, and it will be used, for example, by the temporal induction rule, with a possible resulting statement such as:

"Myself eating an apple makes myself satisfied."<eat(SELF, apple) =/> <SELF --> [satisfied]>>.

As a result, the system forms a belief that eating an apple can make itself satisfied, and will like this operation, as well as the object involved, a little more. As an inductive conclusion, the confidence of this belief is relatively low (here the numerical details are omitted), but it will still increase the chance for this behavior to be repeated in the future, even when the system has forgotten the full story of the previous experience, and so cannot explain why it likes apples.
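
The low confidence of such an inductive conclusion can be illustrated with a NAL-style truth function. The sketch below simplifies by counting each observed occasion as one unit of evidence and uses the customary evidential horizon k = 1; the exact evidence weighting in OpenNARS differs.

def induced_truth(positive, total, k=1.0):
    # Truth-value of an inductively formed implication: the frequency is the
    # proportion of supporting occasions, and the confidence grows with the
    # amount of evidence but never reaches 1 (c = w / (w + k)).
    frequency = positive / total
    confidence = total / (total + k)
    return frequency, confidence

# One observation of eating an apple followed by feeling satisfied gives
# induced_truth(1, 1) == (1.0, 0.5): positive, but far from certain, so the new
# belief only slightly increases the chance of repeating the behavior.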

6. Comparisons and Discussion

This section compares NARS with other AI/AGI models on a few major design decisions directly related to motivation and emotion.

6.1 Motivation representation

As described previously, all activities within NARS are driven or motivated by inference tasks: digesting a judgment (i.e., new experience), answering a question, or achieving a goal. Their contents are all statements.

Though all tasks are motivations or drives in NARS, only goals directly change the external environment via the execution of operations, while the other types of tasks only change the internal environment within the system. Here we will focus on the representation of goals.

As reviewed at the beginning of the article, as well as in other literature (Beaudoin, 1994; Norman, 1997; Hawes, 2011), the most common way in AI is to define a “goal” as a final state of a computation. This practice is inherited from the notion of the Turing machine (Hopcroft, Motwani, and Ullman, 2007), and it has influenced generations of AI work, from the early work on heuristic search (Newell and Simon, 1963) to the recent work on goal reasoning (Roberts et al., 2016).

Defining a goal as a statement to be realized or as a state to be reached makes a fundamental difference. To decide whether a given state has been reached, the system needs full information about the current environment in which the state is defined. In contrast, to judge the truth-value of a statement, the system only needs partial information about the environment. Given AIKR, NARS cannot take a “state-based” representation of its goals. Actually, few practical AI systems can really represent the “states” of their environment, but only the relevant information about it. This treatment is acceptable for many special-purpose systems, where “relevance” is specified according to the special purpose of the system. However, for an AGI such a treatment is hard to justify, as the designer cannot decide in advance which aspects of the environment will be relevant.

As explained previously, NARS takes a very different approach. The system is open to new tasks with any expressible content at any moment, and lets them compete, both for realization opportunity and for time–space resources. Each statement gets a “desire-value” to summarize the system’s total desire for it, and only when this value is high enough is its realization actually pursued. As soon as a desired statement becomes a goal, it will participate in resource competition with other tasks (including goals), and the system’s preference among them is represented by the “budget-value” of each task, which reflects the system’s willingness to further invest in this process, and is therefore similar to the “intensity of motivation” discussed in Norman (1997).

Since each goal is only a partial description of the desired situation, in NARS there are normally many coexistent goals, all demanding satisfaction. Though such a need has been acknowledged by many AI projects, only limited situations have been allowed. For instance, it is usually assumed that coexistent goals do not conflict, or that all conflicts can be resolved by re-planning or by rejecting some goals (Beaudoin, 1994; Rao and Georgeff, 1995; Hawes, 2011). NARS places no such restrictions on its applicable situations.

This statement-based goal representation demands that the other knowledge be represented accordingly. For instance, the system’s knowledge about each operation is mainly represented by a group of statements, each of which associates the operation with certain preconditions and certain consequences, both represented as (compound) statements, with a truth-value indicating the evidential support of the knowledge. Different from reinforcement learning (Sutton and Barto, 1998), here an operation does not represent a transition between states, since the preconditions and consequences are by no means complete. Similarly, there is no given reward signal telling the system how much the operation contributes to the overall satisfaction of the system, but only its chance to achieve a certain consequence, whose relation to the overall satisfaction remains to be determined by the system itself via further reasoning.
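
For illustration only, the knowledge about an operation might be pictured as a collection of Narsese-like statements paired with (frequency, confidence) truth-values, as in the sketch below; the statements, object names, and numbers are invented for this sketch and are not taken from the paper’s example.

# Each entry associates the operation with one precondition/consequence pattern;
# none of them claims to describe a complete state transition, and no reward
# value is attached to any of them.
operation_knowledge = [
    # "If the door is unlocked, opening it usually makes it open."
    ("<(&/, <door --> [unlocked]>, open(door)) =/> <door --> [open]>>", (0.9, 0.8)),
    # "Opening the door sometimes wakes the dog."
    ("<open(door) =/> <dog --> [awake]>>", (0.4, 0.6)),
]
# How <door --> [open]> relates to the system's overall satisfaction is not given
# here; it has to be determined by the system itself via further reasoning.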

6.2 Task life-cycle

The tasks in NARS are either input or derived. The former can be accepted at the user interface when the system is running, or built into the system so as to be active at the very beginning. Either way, their content is not decided by the system. As a general-purpose system, NARS can be given any input, as long as the content can be expressed in Narsese. In this aspect, it is fundamentally different from systems designed with human-like motivations, like MicroPsi (Bach, 2012). When NARS is applied to a specific domain, it may be assigned human-like motivations, though that is not a theoretical requirement for all applications of NARS.

Task derivation is carried out by inference rules. Normally, forward inference generates derived judgments, while backward inference generates derived questions and goals. In this process, the major difference between NARS and the other AI systems with goal generators (Beaudoin, 1994; Roberts et al., 2016) is that, since NARS uses a “non-axiomatic” logic whose inference is not binary deduction, the realization of a “child” task is usually neither a sufficient condition nor a necessary condition of the realization of its “parent” task, even though the two are related in a certain way, according to the system’s experience and the logic.

While the other types of tasks can be directly produced via derivation, a goal needs an extra “decision-making” step, since the derivation result from one goal must be checked against other coexistent goals before the system actually pursues the derived goal. For instance, when goal G1 derives goal G2, the latter is not immediately taken as a goal by the system. Instead, it is a “potential goal” that increases (or decreases) the desire-value of the statement in it. Only when the accumulated desire-value of a statement reaches a certain threshold is it turned into a goal. This “desire vs. goal” distinction is similar to the “desire vs. intention” distinction in the BDI model (Rao and Georgeff, 1995), though NARS uses a numerical desire-value to quantitatively balance the demands of the relevant goals.
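
A minimal sketch of this decision step is given below, assuming the NAL expectation function e = c (f − 0.5) + 0.5 and an illustrative threshold; the revision process that accumulates the desire-value from multiple parent goals is omitted.

def expectation(frequency, confidence):
    # NAL-style expectation of a (frequency, confidence) value.
    return confidence * (frequency - 0.5) + 0.5

def becomes_goal(desire, threshold=0.6):
    # A statement is actively pursued as a goal only when the expectation of its
    # accumulated desire-value exceeds the decision threshold.
    return expectation(*desire) > threshold

# A potential goal with desire-value (0.9, 0.4) has expectation 0.66 and passes a
# threshold of 0.6, while one with (0.9, 0.2) gives 0.58 and remains a mere desire.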

Consequently, a derived goal may have multiple parent goals, and even if its desire-value comes from a single goal, its generation still requires the absence of obstruction from the other goals. Partly for this reason, after a goal is derived and decided upon, it is treated like the other (input or derived) goals, and processed independently. In this way, all derived tasks (including goals) are “top-level” (Beaudoin, 1994) or “autonomous” (Norman, 1997), rather than being recursively the “sub-goals” of the input goals from which they are derived (Beaudoin, 1994).

NARS does not treat all of its tasks as equals. Each input task is assigned an initial budget, determined by the designer or user of the system. The system also provides a default budget for each type of input task. The inference rules have associated functions that determine the budget of the derived task, according to factors like the budget of the “parent” task and belief, the type of the inference, etc. The budget values are relatively defined, in the sense that how much time and space a task eventually gets depends not only on its own budget, but also on those of the other coexistent tasks. The control mechanism dynamically distributes resources among tasks in a context-sensitive manner, and adjusts the budgets of the tasks according to the collected feedback.
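
As an illustration only, a budget can be pictured as a triple of priority, durability, and quality values, and a derived task’s budget as a discounted version of its parent’s; the discount formula below is an assumption made for this sketch, not the actual OpenNARS budget function.

from dataclasses import dataclass

@dataclass
class Budget:
    priority: float    # current urgency of the item
    durability: float  # how slowly the priority decays
    quality: float     # long-term usefulness of the item

def derived_budget(parent, belief_quality, inference_discount):
    # The derived task inherits from the parent's budget, discounted by the quality
    # of the belief used in the step and by a factor depending on the type of
    # inference; the concrete combination here is illustrative.
    factor = belief_quality * inference_discount
    return Budget(priority=parent.priority * factor,
                  durability=parent.durability,
                  quality=parent.quality * factor)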

NARS does not take the decision-theoretic approach of Horvitz (1989) or Russell and Wefald (1991), because the system does not have the knowledge required by that approach, as new tasks may show up at any moment with any content. For the same reason, it cannot process tasks sequentially, but has to handle them in parallel, at different speeds.

Since in NARS every statement is “true to a degree”, so is the achievement of a goal, as indicated by its satisfaction level. As long as there are still available resources, the system will continue to try to increase this level, as in an anytime algorithm (Dean and Boddy, 1988; Beaudoin, 1994). In this aspect, the system is also similar to Reinforcement Learning (RL) (Sutton and Barto, 1998), where the expected total reward is optimized. However, RL also uses a “state-based” representation, and assumes the existence of a reward signal that provides reliable guidance, neither of which is assumed in NARS.

Because a goal will never reach “full satisfaction”, its processing ends whenever it loses the resource competition, though the same task may appear again when needed. What the system attempts to optimize is not the satisfaction level of any individual task, but the overall satisfaction level of the whole system. This means that some tasks may fail to be satisfied, no matter what standard is used. This is an inevitable consequence of AIKR. For this reason, NARS cannot be evaluated in the traditional way, i.e., by the quality of its solution to a specific type of problem, or even to several types of them, because its intelligence is not reflected at the problem-solving level, but at the meta-level where many problem-solving processes are carried out and coordinated.

6.3 Motivational complex

For the system as a whole, NARS’ behaviors are better seen as driven by a motivational complex, rather than by the individual tasks, one by one. As explained previously, at any moment there are multiple tasks under processing, and each of them is more or less shaped and bounded by the others, either present or past.

It is possible to talk about the “resultant” of the motivational complex – the system-level satisfaction can be seen as indicating how well it is achieved, and can be roughly related to notions like “total reward”, “pleasure”, “happiness”, and so on. However, the system’s motivation cannot be fruitfully analyzed in those terms. One reason is that the system’s beliefs (knowledge) are mostly about individual goals, and how these goals are summarized into the “overall satisfaction” changes from time to time. This is one reason why reinforcement learning should not be taken as a framework for AGI systems, since it assumes that the utility of all operations can be directly evaluated according to a unique goal (Hutter, 2005).

The widespread “Paperclip Maximizer” thought experiment proposed by Nick Bostrom warns that even seemingly harmless goals like “to make as many paper clips as possible” may drive an AGI system to cause a disaster by turning everything into paperclips. It is correct that any concrete goal may cause undesired consequences when pursued at all costs. However, this argument should be understood as a criticism of RL and other single-motivation techniques, not of all AGI approaches. Actually, we do not believe a single-motivation system can be very intelligent at all, and making this motivation more general (like “to be human-friendly”) will not solve the problem. A truly intelligent system will necessarily be driven by a motivational complex consisting of mutually restricting goals, so as to balance all the needs of the system. There is no way to summarize this complex into a single “super-goal” that is not only comprehensive enough to cover all the needs, but also concrete enough to guide the system’s behaviors.

Another distinctive feature of the motivational complex of NARS is that it evolves constantly. Since the system is open to new tasks, it can accept new goals at any moment (from users with proper authority, of course), which drives the system in new directions. More significantly, its derived goals are not bound to their parent goals logically, though derived from the latter historically, as explained previously.

This design decision has profound implications. On one hand, it means the system’s motivational complex depends not only on the input goals, but also on the system’s whole past experience. For example, if the system has highly biased or limited experience, it may initially believe that goal G1 can be achieved via goal G2, but learn at a much later time that the realization of G2 actually prevents G1 from being realized. This will indeed cause serious problems, but under AIKR, it is impossible for the system to completely avoid this type of mistake. There are interesting attempts at “goal preservation” (Goertzel, 2014), but they inevitably make assumptions conflicting with AIKR, such as being able to enumerate all possible future situations.

People tend to see this phenomenon of “functional autonomy” (Allport, 1937) or “task alienation” (Wang, 2006) as an undesired consequence, though it is arguably an intrinsic character of intelligence. Many aspects of human nature grow out of “animal nature” while not being reducible back to their origin. Besides the impossibility, under AIKR, of fully recording and maintaining the complete history of every derived task, there is actually an adaptive advantage in pursuing a child goal when its parent goal is no longer active, because similar needs may appear again in the future.

Like it or not, the evolution of the motivational complex is probably an inevitable property of AGI, as it is directly related to the adaptivity, originality, and autonomy of the system. Since this evolution is not random, but jointly determined by the system’s design and initial setting (its nature) as well as its education and usage (its nurture), it is still possible for us to develop and raise AGI systems according to human moral and ethical values.

6.4 Emotion for appraising and regulating

Compared to other AI works on emotion, the emotional mechanism in NARS is characterized by the following major design decisions:

1. Do not simulate human emotion, but take emotion as a necessary aspect of intelligence in general.

2. Do not start with the communication functions of emotion, but with its functions in categorization and resource allocation.

3. Do not realize emotion as a separate module or process, but as embedded in and unified with the core reasoning and learning mechanism.

In the following, each of the three decisions is discussed, with our position explained.

Works in AI on emotion can aim at “adapting human emotions to computers” (Picard, 1997) or aim “to understand emotions in their functional context” (Arbib and Fellous, 2004). Though both objectives are valid and valuable, they lead the research in very different directions. While the works in affective computing are mostly about the former (Picard, 1997), the works in AGI are mostly about the latter, where the focus is on what emotion means to the system itself, rather than to the human users. However, even in AGI, there is still the issue of how close we want the emotions to be to those of human beings. Given the fundamental differences between humans and computers in body and experience, it is obviously impossible for them to have identical emotions; on the other hand, if the mechanisms and processes in computer systems are too different from those in human beings, to call them “emotion” would be far-fetched and therefore misleading.

Currently, the AGI projects form a wide spectrum regarding where and how much their emotional mechanisms are similar to human emotions. Roughly speaking, systems including MicroPsi (Bach, 2012) and LIDA (Franklin et al., 2014) put more stress on the psychological realism of their designs, while systems including NARS and Sigma (Rosenbloom, Gratch, and Ustun, 2015) put more stress on the functional necessity of their designs. For example, in theory NARS allows goals of any content expressible in Narsese, while MicroPsi has built-in motivations satisfying “physiological needs” and “social needs”, which approximate certain human needs. Once again, this choice is not a matter of right or wrong, but comes from different understandings of AI/AGI (Wang, 2006). It is not necessary for an AGI system to be similar to the human brain–mind in all aspects and in all details. A mechanism can be called “emotion” if it plays functions in the system similar to those of human emotion, even though the concrete content of these emotions is different from that of human beings. For instance, any system with emotion will “like” some things and “dislike” others, though different systems can have completely different standards when making these assessments.

In terms of the functions played by emotion, it is widely agreed that there are “intrapersonal” ones (in appraisal and categorization) and “interpersonal” ones (in communication and cooperation) (Arbib and Fellous, 2004; Thill and Lowe, 2012). The assumption we make is to take the former as primary and basic, and the latter as secondary and collateral.

The need for emotion is implied by AIKR. For a system to efficiently adapt to its environment, it is necessary for it to appraise situations and objects according to its needs, and to take actions accordingly. For example, when the system is satisfied with the current situation, it can afford to pay attention to new and less crucial tasks, while when the system does not feel well about certain aspects of the situation, its attention will be largely on the unachieved goals. By combining the satisfaction measurement with other factors, the system has the potential of developing concepts for more complicated emotions, as suggested by Ganjoo (2005). These appraisals can also be used as feedback to guide the learning process, as suggested by Marinier, Laird, and Lewis (2008).

As different emotions cause different behaviors, they play important roles in communication and cooperation: they show a system’s opinions about the environment and the objects in it, so other systems can use this information to predict the system’s actions, as well as to decide their own behaviors in the shared environment. Since these interpersonal functions depend on emotion’s intrapersonal functions, we will consider them in the future when studying the communication processes between NARS and human users, as well as between multiple NARS-based systems that work together.

In the NARS implementation, emotion has been integrated into the core reasoning–learning process, as described previously. Since NARS is designed in the framework of a reasoning system, the technical details of its emotion mechanism are quite different from the emotion mechanisms developed in AGI systems using other frameworks (Bach, 2012; Rosenbloom, Gratch, and Ustun, 2015; Strannegard, Cirillo, and Wessberg, 2015), even though there are similar phenomena here and there, as these mechanisms provide related functions.

7. Conclusions

In this article, we described the motivational and emotional mechanism of NARS, and compared it with the related works.

The motivational and emotional mechanism of NARS is designed according to the postulation that intelligence means adaptation under insufficient knowledge and resources. Different from most other AI systems, NARS normally has many tasks (including goals), which may conflict and compete with each other, and usually cannot all be fully achieved. These motivations drive the actions of the system, generate new tasks via inference, and evolve according to the system’s experience. Emotions in NARS start from the system’s appraisal of events, obtained by comparing the desired status with the actual status. The system uses this information to categorize the situation, adjust its resource allocation, and decide its actions.

Limited by the length of the article, we cannot describe and discuss many related aspects of NARS, but have to leave them to other publications, such as Wang (2006, 2013).

In recent years the progress of deep learning has brought AI research to a new height, and led many people to take it as the technique that represents the right approach to AGI. However, there are many important topics in AGI, including the one discussed in this article, that have not even been touched in deep learning, or even in the broader fields of artificial neural networks and machine learning. We hope this article shows that a reasoning system still provides a better framework for AGI when the proper assumptions and suitable extensions are made.

Acknowledgments

Thanks to the development team of OpenNARS (the open-source implementation of NARS) for testing the ideas described in the article. Thanks to Max Talanov, Bill Power, Alexander Rivera, Quinn Dougherty, and Tony Lofthouse for commenting and proofreading.

References

Allport, G. W. 1937. The functional autonomy of motives. American Journal of Psychology 50:141–156.

Arbib, M., and Fellous, J.-M. 2004. Emotions: from brain to robot. Trends in Cognitive Sciences 8(12):554–559.

Bach, J. 2012. Modeling Motivation and the Emergence of Affect in a Cognitive Agent. In Wang, P., and Goertzel, B., eds., Theoretical Foundations of Artificial General Intelligence. Paris: Atlantis Press. 241–262.

Bach, J. 2015. Modeling Motivation in MicroPsi 2. In Proceedings of the Eighth Conference on Artificial General Intelligence, 3–13.

Beaudoin, L. P. 1994. Goal processing in autonomous agents. Ph.D. Dissertation, School of Computer Science, The University of Birmingham.

Dean, T., and Boddy, M. 1988. An analysis of time-dependent planning. In Proceedings of AAAI-88, 49–54.

Franklin, S.; Madl, T.; D’Mello, S.; and Snaider, J. 2014. LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning. IEEE Transactions on Autonomous Mental Development 6(1):19–41.

Ganjoo, A. 2005. Designing Emotion-Capable Robots, One Emotion at a Time. In Proceedings of the Cognitive Science Society Conference, 755–760.

Goertzel, B. 2014. GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement. Journal of Experimental & Theoretical Artificial Intelligence 26(3):391–403.

Hawes, N. 2011. A survey of motivation frameworks for intelligent systems. Artificial Intelligence 175(5-6):1020–1036.

Hopcroft, J. E.; Motwani, R.; and Ullman, J. D. 2007. Introduction to Automata Theory, Languages, and Computation. Boston: Addison-Wesley, 3rd edition.

Horvitz, E. J. 1989. Reasoning about beliefs and actions under computational resource constraints. In Kanal, L. N.; Levitt, T. S.; and Lemmer, J. F., eds., Uncertainty in Artificial Intelligence 3. Amsterdam: North-Holland. 301–324.

Hutter, M. 2005. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Berlin: Springer.

Marinier, R. P.; Laird, J. E.; and Lewis, R. L. 2008. A computational unification of cognitive behavior and emotion. Cognitive Systems Research 10:48–69.

Newell, A., and Simon, H. A. 1963. GPS, a program that simulates human thought. In Feigenbaum, E. A., and Feldman, J., eds., Computers and Thought. New York: McGraw-Hill. 279–293.

Norman, T. J. F. 1997. Motivation-based direction of planning attention in agents with goal autonomy. Ph.D. Dissertation, Department of Computer Science, University College London.

Omohundro, S. M. 2008. The Basic AI Drives. In Proceedings of the First Conference on Artificial General Intelligence, 483–492.

Picard, R. W. 1997. Affective Computing. Cambridge, Massachusetts: MIT Press.

Rao, A. S., and Georgeff, M. P. 1995. BDI-agents: from theory to practice. In Proceedings of the First International Conference on Multiagent Systems.

Roberts, M.; Shivashankar, V.; Alford, R.; Leece, M.; Gupta, S.; and Aha, D. W. 2016. Goal Reasoning, Planning, and Acting with ACTORSIM, The Actor Simulator. Advances in Cognitive Systems 4:1–16.

Rosenbloom, P. S.; Gratch, J.; and Ustun, V. 2015. Towards Emotion in Sigma: From Appraisal to Attention. In Proceedings of the Eighth Conference on Artificial General Intelligence, 142–151.

Russell, S., and Wefald, E. H. 1991. Principles of metareasoning. Artificial Intelligence 49:361–395.

Shapiro, S. C., and Bona, J. P. 2010. The GLAIR Cognitive Architecture. International Journal of Machine Consciousness 2(2):307–332.

Simon, H. A., and Newell, A. 1958. Heuristic problem solving: the next advance in operations research. Operations Research 6:1–10.

Strannegard, C.; Cirillo, S.; and Wessberg, J. 2015. Emotional Concept Development. In Proceedings of the Eighth Conference on Artificial General Intelligence, 362–372.

Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. Cambridge, Massachusetts: MIT Press.

Talanov, M., and Toschev, A. 2014. Computational emotional thinking and virtual neurotransmitters. International Journal of Synthetic Emotions 5(1).

Thill, S., and Lowe, R. 2012. On the Functional Contributions of Emotion Mechanisms to (Artificial) Cognition and Intelligence. In Proceedings of the Fifth Conference on Artificial General Intelligence, 322–331.

Wang, P.; Talanov, M.; and Hammer, P. 2016. The Emotional Mechanisms in NARS. In Proceedings of the Ninth Conference on Artificial General Intelligence, 150–159.

Wang, P. 2006. Rigid Flexibility: The Logic of Intelligence. Dordrecht: Springer.

Wang, P. 2012. Motivation Management in AGI Systems. In Proceedings of the Fifth Conference on Artificial General Intelligence, 352–361.

Wang, P. 2013. Non-Axiomatic Logic: A Model of Intelligent Reasoning. Singapore: World Scientific.
