Serious Games Get Smart: Intelligent Game-Based Learning Environments

James C. Lester, Eun Y. Ha, Seung Y. Lee, Bradford W. Mott, Jonathan P. Rowe, Jennifer L. Sabourin

AI Magazine, Winter 2013. Copyright © 2013, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602.

Intelligent game-based learning environments integrate commercial game technologies with AI methods from intelligent tutoring systems and intelligent narrative technologies. This article introduces the Crystal Island intelligent game-based learning environment, which has been under development in the authors' laboratory for the past seven years. After presenting Crystal Island, the principal technical problems of intelligent game-based learning environments are discussed: narrative-centered tutorial planning, student affect recognition, student knowledge modeling, and student goal recognition. Solutions to these problems are illustrated with research conducted with the Crystal Island learning environment.

Recent years have seen a growing recognition of the transformative potential of game-based learning environments. The burgeoning field of game-based learning has made significant advances, including theoretical developments (Gee 2007), as well as the creation of game-based learning environments for a broad range of K–12 subjects (Habgood and Ainsworth 2011; Ketelhut et al. 2010; Warren, Dondlinger, and Barab 2008) and training objectives (Johnson 2010; Kim et al. 2009). Of particular note are the results of recent empirical studies demonstrating that in addition to game-based learning environments' potential for motivation, they can enable students to achieve learning gains in controlled laboratory settings (Habgood and Ainsworth 2011) as well as classroom settings (Ketelhut et al. 2010).

Motivated by the goal of creating game-based learning experiences that are personalized to individual students, we have been investigating intelligent game-based learning environments that leverage commercial game technologies and AI techniques from intelligent tutoring systems (D'Mello and Graesser 2010; Feng, Heffernan, and Koedinger 2009; VanLehn 2006) and intelligent narrative technologies (Lim et al. 2012; McCoy et al. 2011; Si, Marsella, and Pynadath 2009; Yu and Riedl 2012). This work ranges from research on real-time narrative planning and goal recognition to affective computing models for recognizing student emotional states. Intelligent game-based learning environments serve as an excellent laboratory for investigating AI techniques because they make significant inferential demands and play out in complex interactive scenarios.

In this article we introduce intelligent game-based learning environments by presenting Crystal Island, an intelligent game-based learning environment for middle grade science education (Rowe et al. 2011). Crystal Island has been under continual development through a series of learning technology investigations and laboratory and classroom studies over the past seven years. We discuss technical problems that we have investigated in the context of Crystal Island, including narrative-centered tutorial planning, student affect recognition, student modeling, and student goal recognition. We conclude with a discussion of educational impacts of intelligent game-based learning environments, and future directions for the field.

The Crystal Island Learning Environment

Crystal Island (figure 1) is a narrative-centered learning environment that was originally built on Valve Software's Source engine, the three-dimensional (3-D) game platform for Half-Life 2, and now runs on the Unity cross-platform game engine from Unity Technologies. The curriculum underlying Crystal Island's mystery narrative is derived from the North Carolina state standard course of study for eighth-grade microbiology. The environment is designed as a supplement to classroom instruction, and it blends elements of both inquiry learning and direct instruction. Crystal Island has served as a platform for investigating a range of artificial intelligence technologies for dynamically supporting students' learning experiences. This includes work on narrative-centered tutorial planning (Lee, Mott, and Lester 2012; Mott and Lester 2006), student knowledge modeling (Rowe and Lester 2010), student goal recognition (Ha et al. 2011), and affect recognition models (Sabourin, Mott, and Lester 2011). The environment has also been the subject of extensive empirical investigations of student learning and presence (Rowe et al. 2011), with results informing the design and revision of successive iterations of the system.

Crystal Island features a science mystery where students attempt to discover the identity and source of an infectious disease that is plaguing a research team stationed on a remote island. Students adopt the role of a medical field agent who has been assigned to investigate the illness and save the research team from the outbreak. Students explore the research camp from a first-person viewpoint and manipulate virtual objects, converse with characters, and use lab equipment and other resources to solve the mystery. The mystery is solved when students complete a series of partially ordered goals that involve uncovering details about the spreading infection, testing potential transmission sources of the disease, recording a diagnosis and treatment plan, and presenting the findings to the camp nurse.

Figure 1. Crystal Island Narrative-Centered Learning Environment.

The following scenario illustrates a typical interaction with Crystal Island. The scenario begins with the player-character disembarking from her boat shortly after arriving on the island. After completing a brief tutorial that introduces the game's controls, the player leaves the dock to approach the research camp. Near the camp entrance, the student encounters an infirmary with several sick patients and a camp nurse. Upon entering the infirmary, the student approaches the nurse and initiates a conversation (figure 2). The nurse explains that an unidentified illness is spreading through the camp and asks for the player's help in determining a diagnosis. She advises the student to use an in-game diagnosis worksheet in order to record her findings, hypotheses, and final diagnosis (figure 3). This worksheet is designed to scaffold the student's problem-solving process, as well as provide a space for the student to offload any findings gathered about the illness. The conversation with the nurse takes place through a combination of multimodal character dialogue — spoken language, gesture, facial expression, and text — and player dialogue menu selections.

After speaking with the nurse, the student has several options for investigating the illness. The student can talk to sick patients lying on medical cots in order to gather information about the team members' symptoms and recent eating habits. Alternatively, the student can move to the camp's dining hall to speak with the camp cook. The cook describes the types of food that the team has recently eaten and provides clues about which items warrant closer investigation. In addition to learning about the sick team members, the student can walk to the camp's living quarters to converse with a pair of virtual scientists who answer questions about viruses and bacteria. The student can also learn more about pathogens by viewing posters hanging inside of the camp's buildings or reading books located in a small library.

Beyond gathering information about the disease from virtual scientists and other instructional resources, the student can test potential transmission sources using the laboratory's testing equipment. For example, the student encounters several food items that have been lying out in the dining hall, and she can test the items for infectious agents at any point during the learning interaction.

After running several tests, the student discovers that the sick team members have consumed milk that is contaminated with a bacterial agent. The student can use the camp's books and posters in order to investigate bacterial diseases that are associated with symptoms matching those reported by the sick team members. Once she has narrowed down a diagnosis and recommended treatment, the student returns to the infirmary in order to inform the camp nurse. If the student's diagnosis is incorrect, the nurse identifies the error and recommends that the player keep working. The student can use this feedback to roughly determine how close she is to solving the mystery. When the student correctly diagnoses the illness and specifies an appropriate treatment, the mystery is solved.

Figure 2. Camp Nurse and Sick Patient in Virtual Infirmary.


Decision-Theoretic Narrative Planning

In order to perform real-time adaptive personalization of interactive narratives in game-based learning environments like Crystal Island, director agents (or drama managers) plan unfolding narratives by operating on at least two distinct but interacting levels: they craft the global story arc, typically by traversing a plot graph that encodes a partial order of significant events in the narrative, and they plan virtual character behaviors and physical events in the storyworld. This task, known as narrative-centered tutorial planning, blends aspects of interactive narrative planning and tutorial planning in a single problem. Interactive narrative planning encompasses a number of challenges, such as balancing character believability and plot coherence in the presence of unpredictable user actions (Riedl and Young 2004), or providing rich character behaviors that are consistent with authorial objectives (Mateas and Stern 2005). The task's tutorial planning component requires that the director agent guide students' cognitive, affective, and metacognitive processes to promote effective learning outcomes.

To address these requirements for Crystal Island, we designed and implemented U-Director (Mott and Lester 2006), a decision-theoretic narrative planning architecture that uses a dynamic decision network (DDN) (Dean and Kanazawa 1989) to model narrative objectives, storyworld state, and user state in order to inform run-time adaptations of Crystal Island's science mystery. A DDN-based approach to narrative planning provides a principled decision-making framework for reasoning about the multiple sources of evidence that inform narrative-centered tutorial planning. At each game clock cycle, U-Director systematically evaluates available evidence, updates its beliefs, and selects the storyworld action that maximizes expected narrative-tutorial utility.

Because narrative is fundamentally a time-based phenomenon, in each decision cycle U-Director considers candidate narrative actions to project forward in time the effects of the actions being taken and their consequent effects on the user (Mott and Lester 2006). To do so, it evaluates its narrative objectives in light of the current storyworld state and user state. Each decision cycle considers three distinct time slices (narrative state_t, narrative state_t+1, and narrative state_t+2), each of which consists of interconnected subnetworks containing chance nodes in the DDN (figure 4). The three slices represent (1) the current narrative state, (2) the narrative state after the director agent's decision, and (3) the narrative state after the user's next action. The DDN's director action is a decision node, the DDN's user action is a chance node, and utility_t+2 is a utility node in the DDN. Each time slice encodes a probabilistic representation of the director's beliefs about the overall state of the narrative, represented with narrative-centered knowledge sources.

Figure 3. Diagnosis Worksheet Where Students Record Their Findings.

U-Director begins its computation by considering narrative state_t, which represents the model's current beliefs about the unfolding story's narrative objectives, storyworld state, and user state (Mott and Lester 2006). Links from the director action node to the narrative state_t+1 node model candidate director actions and how they affect the story. Example director actions include instructing nonplayer characters to perform actions in the environment or guiding students through the story by providing appropriately leveled hints. U-Director constrains the number of candidate director actions to evaluate at each time step using abstract director actions, thereby limiting the number of concrete actions considered. Next, it models how the possible worlds encoded in narrative state_t+1 influence the user's action and how the user's action in turn affects the story in narrative state_t+2. Finally, using links from narrative state_t+2 to the utility node utility_t+2, U-Director models preferences over potential narrative states. Preferences provide a representation in which authors specify the relative importance of salient features of the narrative state.
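To make the cycle concrete, the sketch below shows the shape of such a two-slice lookahead loop in Python. It is a minimal illustration of the expected-utility computation described above, not the published U-Director implementation: the state representation, user model, transition function, and utility values are all invented stand-ins.

```python
# A minimal sketch of a DDN-style decision cycle in the spirit of U-Director.
# All states, probabilities, and utilities here are illustrative stand-ins.

def expected_utility(action, state, user_model, transition, utility):
    """Project an abstract director action two slices forward (t+1, t+2)
    and average utility over the user's possible responses."""
    state_t1 = transition(state, action)              # narrative state at t+1
    eu = 0.0
    for user_action, prob in user_model(state_t1).items():
        state_t2 = transition(state_t1, user_action)  # narrative state at t+2
        eu += prob * utility(state_t2)
    return eu

def select_director_action(state, abstract_actions, user_model, transition, utility):
    """Choose the abstract action maximizing expected narrative-tutorial utility."""
    return max(abstract_actions,
               key=lambda a: expected_utility(a, state, user_model, transition, utility))

# Toy usage: two candidate actions, a user who follows hints 70% of the time.
if __name__ == "__main__":
    transition = lambda s, a: s + (a,)                # states as action histories
    user_model = lambda s: {"follow_hint": 0.7, "wander": 0.3}
    utility = lambda s: s.count("give_hint") + 2 * s.count("follow_hint")
    best = select_director_action((), ["give_hint", "do_nothing"],
                                  user_model, transition, utility)
    print(best)  # -> "give_hint"
```

Restricting the loop to abstract director actions, as the article describes, is what keeps the set of candidates evaluated per cycle tractable.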

To gauge the overall effectiveness of U-Director's narrative planning capabilities, a simulated user approach was taken (Mott and Lester 2006). Six simulated users (three cooperative users who typically followed the agent's guidance and three uncooperative users who typically did not) interacted with the Crystal Island director agent to create six different narrative experiences. Traces of the director agent's decision making were analyzed to see if the agent took appropriate action to guide users through the narrative. As desired, the analysis revealed that the director agent adopted a hint-centered approach for cooperative users and a more heavy-handed approach for uncooperative users. The simulated user approach offers a promising means for establishing baseline performance prior to conducting extensive focus group studies with human users.

Figure 4. High-Level Illustration of Dynamic Decision Network for Interactive Narrative Director Agent.

In addition to formalisms based on dynamic decision networks, we have also experimented with empirically based models of narrative-centered tutorial planning using a Wizard-of-Oz framework, where human wizards serve as expert narrative-centered tutorial planners (Lee, Mott, and Lester 2012). In this approach, trained wizards interact with students who are attempting to solve Crystal Island's science mystery. The wizards perform tutorial and narrative planning functionalities by controlling a virtual character and guiding students through the game-based learning environment. Detailed trace data from these wizard-student interactions is collected by the virtual environment — including all wizard decision-making, navigation, and manipulation activities — in order to generate a training corpus for inducing narrative-centered tutorial planning models using supervised machine-learning techniques.

To explore the run-time effectiveness of the machine-learned narrative-centered tutorial planning models, induced intervention and action models were integrated into Crystal Island, where the models determine when is the most appropriate time to intervene and what is the most appropriate next narrative-tutorial action. This extended version of Crystal Island was identical to the Wizard-of-Oz-based version, except that a narrative-centered tutorial planner rather than a human wizard drove nonplayer character interactions. The planning models actively observe students' activities and dynamically guide students by directing characters in the virtual storyworld.
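As a concrete illustration of this two-model design, the sketch below trains separate "when to intervene" and "what to do" classifiers from hypothetical wizard-trace features. The feature set and the use of scikit-learn decision trees are assumptions made for illustration; the article specifies only that supervised machine-learning techniques were applied to the wizard corpus.

```python
# Sketch: inducing intervention ("when") and action ("what") models from
# Wizard-of-Oz traces. Features and classifier choice are illustrative
# assumptions, not the published models.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical observation windows from wizard-student traces:
# [seconds_since_last_intervention, goals_completed, books_read, off_task_moves]
X = [[120, 1, 0, 5],
     [ 30, 2, 1, 0],
     [300, 1, 0, 9],
     [ 45, 3, 2, 1]]
intervened = [1, 0, 1, 0]                      # did the wizard intervene here?
action = ["hint", None, "redirect", None]      # what the wizard did, if anything

when_model = DecisionTreeClassifier().fit(X, intervened)

# The "what" model is trained only on windows where an intervention occurred.
X_act = [x for x, a in zip(X, action) if a is not None]
y_act = [a for a in action if a is not None]
what_model = DecisionTreeClassifier().fit(X_act, y_act)

# At run time: consult the intervention model first, then the action model.
window = [[200, 1, 0, 7]]
if when_model.predict(window)[0] == 1:
    print(what_model.predict(window)[0])       # e.g., "hint"
```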

Three experimental conditions were created to evaluate the effectiveness of the induced narrative-centered tutorial planner: minimal guidance, intermediate guidance, and full guidance. Outcomes of the three conditions were compared to determine the effectiveness of utilizing our machine-learned models.

Minimal Guidance: Students experience the storyworld under the guidance of a minimal narrative-centered tutorial planning model. This model controls events that are required to be performed by the system (that is, students cannot complete the events without the system taking action). The model in this condition is not machine learned. It performs an action once all preconditions are met for the action to be taken. This condition served as a baseline for the investigation.

Intermediate Guidance: Students experience the storyworld under the guidance of an intermediate narrative-centered tutorial planning model. This is an ablated model inspired by the notion of interactive narrative islands (Riedl et al. 2008). Islands are intermediate plan steps through which all valid solution paths must pass. They have preconditions describing the intermediate world state, and if the plan does not satisfy each island's preconditions, the plan will never achieve its goal. In our version of Crystal Island, transitions between narrative arc phases represent "islands" in the narrative. Each arc phase consists of a number of potential narrative-centered tutorial planning decisions. However, the phases are bounded by specific narrative-centered tutorial planning decisions that define when each phase starts and ends. We employ these specific tutorial action decisions as our islands.

Full Guidance: Students experience the storyworld under the guidance of the full narrative-centered tutorial planning model. The model actively monitors students interacting with the storyworld in order to determine when it is appropriate to intervene with the next tutorial action. The model has full control of the tutorial intervention decisions (that is, determining when to intervene) and tutorial action decisions (that is, determining what the intervention should be). The full guidance tutorial planning model employs the entire set of narrative-centered tutorial planning decisions available to it. This list is described in Lee, Mott, and Lester (2011).

In an experiment comparing the three narrative-centered tutorial planning models, a total of 150 eighth-grade students used Crystal Island and completed the pre- and posttest measures (Lee, Mott, and Lester 2012). The students were drawn from a suburban middle school. While the study was held during the school day, groups of students were pulled out of class to play the game in a laboratory setting. The pre- and posttests consisted of the same multiple-choice questions about Crystal Island's microbiology curriculum. Students completed the pretest several days prior to using Crystal Island, and they completed the posttest immediately after solving the science mystery. In all three conditions, the tutorial planning models' actions focused on introducing narrative events or providing problem-solving advice to students. The tutorial planner did not perform actions involving direct instruction of curricular content.

An investigation of overall learning found that students who interacted with Crystal Island achieved positive learning gains on the curriculum test. A two-tailed matched pairs t-test between posttest and pretest scores indicated that the learning gains were significant, t(149) = 2.03, p < .05. It was also found that students' Crystal Island interactions in the full guidance condition yielded significant learning gains, as measured by the difference of posttest and pretest scores. As shown in table 1, a two-tailed matched pairs t-test showed that students in the full guidance condition achieved statistically significant learning gains. Students in the intermediate and minimal guidance conditions did not achieve significant learning gains.

In addition, we analyzed learning gain differences between the conditions, and the results showed significant differences. Performing an ANCOVA to control for pretest scores, the learning gains were significantly different for the full and minimal guidance conditions, F(2, 99) = 38.64, p < .001, as were the learning gains for the full and intermediate guidance conditions, F(2, 100) = 40.22, p < .001. Thus, students who received full guidance from our machine-learned models achieved significantly higher learning gains than students in the other two conditions. These results are consistent with findings from the education literature, which suggest that students who receive problem-solving guidance (coaching, hints) during inquiry-based learning achieve greater learning outcomes than students who receive minimal or no guidance (Kirschner, Sweller, and Clark 2006; Mayer 2004).

Conditions    Students  Gain Avg.  SD    t     p
Full          55        1.28       2.66  2.03  < 0.05
Intermediate  48        0.13       2.69  0.19  0.84
Minimal       47        0.89       3.12  1.23  0.22

Table 1. Learning Gain Statistics for Comparison of Induced Narrative-Centered Tutorial Planning Models.


Modeling Student Affect

Affect has begun to play an increasingly important role in game-based learning environments. The intelligent tutoring systems community has seen the emergence of work on affective student modeling (Conati and Maclaren 2009), including models for detecting frustration and stress (McQuiggan, Lee, and Lester 2007), modeling agents' emotional states (Marsella and Gratch 2009), and detecting student motivation (de Vicente and Pain 2002). All of this work seeks to increase the fidelity with which affective and motivational processes are understood and utilized in intelligent tutoring systems in an effort to increase the effectiveness of tutorial interactions and, ultimately, learning.

This focus on examining affect is largely due to the effects it has been shown to have on learning outcomes. Emotional experiences during learning have been shown to affect problem-solving strategies, the level of engagement exhibited by students, and the degree to which the student is motivated to continue with the learning process (Picard et al. 2004). These factors have the power to dictate how students learn immediately and their learning behaviors in the future. Consequently, the ability to understand and model affective behaviors in learning environments has been a focus of recent work (Arroyo et al. 2009; Conati and Maclaren 2009; D'Mello and Graesser 2010).

Correct prediction of students' affective states is an important first step in designing affect-sensitive game-based learning environments. The delivery of appropriate affective feedback requires first that the student's state be accurately identified. However, the detection and modeling of affective behaviors in learning environments poses significant challenges. For example, many successful approaches to affect detection utilize a variety of physical sensors to inform model predictions. However, deploying these types of sensors in schools or home settings often proves to be difficult or impractical due to concerns about cost, privacy, and invasiveness. Consequently, reliance on physical sensors when building affect-sensitive learning environments can limit the potential scalability of these systems. To combat this issue, many systems attempt to model emotion without the use of physiological sensors. Some researchers taking this approach have focused on incorporating theoretical models of emotion, such as appraisal theory, which is particularly well suited for computational environments (Marsella and Gratch 2009). These models propose that individuals appraise events and actions in their environment according to specific criteria (for example, desirability or cause) to arrive at emotional experiences. While there are a variety of appraisal-based theories of emotions, few appraisal models focus specifically on the emotions that typically occur during learning (Picard et al. 2004).

The lack of a widely accepted and validated model of learner emotions poses a challenge for the development of affect-detection systems using theoretical grounding in place of physical sensors. However, Elliot and Pekrun's model of learner emotions describes how students' affective states relate to their general goals during learning tasks, such as whether they are focused on learning or performance, and how well these goals are being met (Elliot and Pekrun 2007). The generality of this model allows it to be adapted to specific learning tasks and makes it well suited for predictive modeling in open-ended learning environments.

Elliot and Pekrun's model of learner emotions was used as a theoretical foundation for structuring a sensor-free affect detection model (Sabourin, Mott, and Lester 2011). This model was empirically learned from a corpus of student interaction data. During their interactions with Crystal Island, students were asked to self-report on their affective state through an in-game smartphone device. They were prompted every seven minutes to select one emotion from a set of seven options, which included anxious, bored, confused, curious, excited, focused, and frustrated. This set of cognitive-affective states is based on prior research identifying states that are relevant to learning (Craig et al. 2004; Elliot and Pekrun 2007). Each emotional state was also accompanied by a validated emoticon to provide clarity to the emotion label.

A static Bayesian network (figure 5) was designed with the structure hand-crafted to include the relationships described within Elliot and Pekrun's model of learner emotions (Sabourin, Mott, and Lester 2011). Specifically, the structure focused on representing the appraisal of learning and performance goals and how these goals were being met based on students' progress and activities in the game. For example, some activities such as book reading or note taking in Crystal Island are related to learning objectives, while achievement of certain milestones and external validation are more likely to contribute to measures of performance goals. Several other components of Elliot and Pekrun's model are also considered. For example, students with approach orientations are expected to have generally more positive temperaments and emotional experiences than students with avoidance orientations.

Figure 5. Structure of Static Bayesian Network Model of Learner Affect.
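A minimal sketch of the appraisal idea is shown below: observed in-game evidence updates a belief that a learning goal is being met, and that belief in turn shifts the predicted valence of the student's emotion. The two-node structure and all probabilities are invented for illustration and are far simpler than the network in figure 5.

```python
# A hand-rolled toy version of the appraisal structure behind the static
# Bayesian network: evidence -> goal appraisal -> emotional valence.
# All probabilities are invented for illustration.

prior = {True: 0.5, False: 0.5}          # P(learning goal is being met)
likelihood = {                           # P(evidence | goal met)
    "read_book": {True: 0.7, False: 0.3},
}
valence_given_goal = {True: 0.8, False: 0.3}   # P(positive valence | goal met)

def posterior_goal(evidence):
    """Bayes rule over the binary 'learning goal met' node."""
    unnorm = {g: prior[g] * likelihood[evidence][g] for g in (True, False)}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

def p_positive_valence(evidence):
    """Marginalize valence over the updated goal belief."""
    post = posterior_goal(evidence)
    return sum(valence_given_goal[g] * post[g] for g in (True, False))

print(round(p_positive_valence("read_book"), 3))  # 0.65: belief shifts positive
```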

Using a training corpus of student interaction data to learn and validate the parameters of static and dynamic Bayesian network models (Sabourin, Mott, and Lester 2011), it was found that the Elliot and Pekrun (2007) model of learner emotions can successfully serve as the basis for computational models of learner emotions. The model grounded by a theoretical understanding of learner emotions outperformed several baseline measures, including most-frequent class as well as a Bayesian network model operating under naïve variable independence assumptions (table 2). It was also found that modeling the dynamic quality of emotional states across time through a dynamic Bayesian network offered significant improvement in performance over the static Bayesian network. The model performed particularly well at identifying positive states, including the state of focused, which was the most frequent emotion label (tables 3–4). The model had difficulty distinguishing between the states of confusion and frustration, which is intriguing because the causes and experiences of these two states are often very similar. These findings indicate the need for future work to improve predictions of negative emotional states so that models for automatic detection of students' learning emotions can be used for affective support in game-based learning environments.

Model        Emotion Accuracy  Valence Accuracy
Baseline     22.4%             54.5%
Naïve Bayes  18.1%             51.2%
Bayes Net    25.5%             66.8%
Dynamic BN   32.6%             72.6%

Table 2. Prediction Accuracies.

Actual Emotion  Correct Emotion Prediction  Correct Valence Prediction
anxious         2%                          60%
bored           18%                         75%
confused        32%                         59%
curious         38%                         85%
excited         19%                         79%
focused         52%                         81%
frustrated      28%                         56%

Table 3. Prediction Accuracy by Emotion.

                          Predicted Valence
                          Positive  Negative
Actual Valence  Positive  823       184
                Negative  326       512

Table 4. Valence Confusion Matrix.

Student Modeling with Dynamic Bayesian Networks

Devising effective models of student knowledge in game-based learning environments poses significant computational challenges. First, models of student knowledge must cope with multiple sources of uncertainty inherent in the modeling task. Second, knowledge models must dynamically model knowledge states that change over the course of a narrative interaction. Third, the models must concisely represent complex interdependencies among different types of knowledge, and naturally incorporate multiple sources of evidence about user knowledge. Fourth, the models must operate under the real-time performance constraints of the interactive narratives that play out in intelligent game-based learning environments.

To address these challenges, we developed a dynamic Bayesian network (DBN) approach to modeling user knowledge during interactive narrative experiences (Rowe and Lester 2010). DBNs offer a unified formalism for representing temporal stochastic processes such as those associated with knowledge modeling in interactive narrative environments. The framework provides a mechanism for dynamically updating a set of probabilistic beliefs about a student's understanding of narrative, mystery solution, strategic, and curricular knowledge components that are accumulated and demonstrated during interactions with a narrative environment. An initial version of the model has been implemented in Crystal Island.

The DBN for knowledge tracing was implemented with the SMILE Bayesian modeling and inference library developed by the University of Pittsburgh's Decision Systems Laboratory (Druzdzel 1999). The model maintains approximately 135 binary nodes, 100 directed links, and more than 750 conditional probabilities. As the knowledge-tracing model observes student actions in the environment, the associated evidence is incorporated into the network, and a Bayesian update procedure is performed. The update procedure, in combination with the network's singly connected structure, yields updates that complete in less than one second. Initial probability values were fixed across all students; probabilities were chosen to represent the assumption that students at the onset had no understanding of scenario-specific knowledge components and were unlikely to have mastery of curriculum knowledge components.
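The following sketch illustrates the kind of single-node Bayesian update such a network performs when evidence arrives, assuming invented conditional probability values; the actual model propagates evidence through roughly 135 interconnected nodes via SMILE rather than updating nodes in isolation.

```python
# Single-node sketch of the Bayesian update performed when the knowledge-
# tracing model observes a student action. CPT values are illustrative only.

def update_mastery(p_mastered, observed_correct,
                   p_obs_given_mastered=0.85, p_obs_given_unmastered=0.25):
    """Posterior P(mastered | one observed action) for a binary KC node."""
    if observed_correct:
        num = p_mastered * p_obs_given_mastered
        den = num + (1 - p_mastered) * p_obs_given_unmastered
    else:
        num = p_mastered * (1 - p_obs_given_mastered)
        den = num + (1 - p_mastered) * (1 - p_obs_given_unmastered)
    return num / den

# Starting from the article's assumption of low initial mastery, a run of
# mastery-consistent actions steadily raises the belief.
belief = 0.2
for action_correct in [True, True, False, True]:
    belief = update_mastery(belief, action_correct)
    print(round(belief, 3))   # 0.459, 0.743, 0.366, 0.663
```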

A human participant study was conducted with 116 eighth-grade students from a middle school interacting with the Crystal Island environment (Rowe and Lester 2010). During the study, students interacted with Crystal Island for approximately 50 minutes. Logs of students' in-game actions were recorded and were subsequently used to conduct an evaluation of the DBN knowledge-tracing model. The evaluation aimed to assess the model's ability to accurately predict students' performance on a content knowledge postexperiment test. While the evaluation does not inspect the student knowledge model's intermediate states during the narrative interaction, nor students' narrative-specific knowledge, an evaluation of the model's final knowledge assessment accuracy provided a useful starting point for refining the model and evaluating its accuracy.

The DBN knowledge-tracing model used the students' recorded trace data as evidence to approximate students' knowledge at the end of the learning interaction. This yielded a set of probability values for each student, corresponding to each of the knowledge-tracing model's knowledge components. The resultant data was used in the analysis of the model's ability to accurately predict student responses on postexperiment content test questions. The mapping between the model's knowledge components and individual posttest questions was generated by a researcher, and used the following heuristic: if a posttest question or correct response shared important content terms with the description of a particular knowledge component, that knowledge component was designated as necessary for providing an informed, correct response to the question. According to this heuristic, several questions required the simultaneous application of multiple knowledge components, and a number of knowledge components bore on multiple questions. This yielded a many-to-many mapping between knowledge components and posttest questions.

The evaluation procedure required the definition of a threshold value to discriminate between mastered and unmastered knowledge components: knowledge components whose model values exceeded the threshold were considered mastered, and knowledge components whose model values fell below the threshold were considered unmastered. The model predicted a correct response on a posttest question if all of the question's associated knowledge components were considered mastered. The model predicted an incorrect response on a posttest question if one or more associated knowledge components were considered unmastered. The use of a threshold to discriminate between mastered and unmastered knowledge components mirrors how the knowledge model might be used in a run-time environment to inform interactive narrative decision making.

Rather than choosing a single threshold, a series of values ranging between 0.0 and 1.0 was selected. For each threshold, the DBN knowledge model was compared to a random model, which assigned uniformly distributed random probabilities to each knowledge component. New random probabilities were generated for each knowledge component, student, and threshold. Both the DBN model and the random model were used to predict student posttest responses, and accuracies for each threshold were determined across the entire test. Accuracy was measured as the number of successfully predicted correct responses plus the number of successfully predicted incorrect responses, divided by the total number of questions.
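A compact sketch of this evaluation procedure is given below, with an invented three-question example of the many-to-many mapping between knowledge components and posttest questions; the belief values, question set, and responses are all illustrative stand-ins.

```python
# Sketch of the threshold-sweep evaluation: thresholded KC beliefs predict
# posttest responses through a many-to-many KC-question mapping, and accuracy
# is (correctly predicted corrects + correctly predicted incorrects) over all
# questions. Data below is invented for illustration.

kc_beliefs = {"kc_pathogen": 0.81, "kc_transmission": 0.42, "kc_diagnosis": 0.66}
question_kcs = {                       # each question may require several KCs
    "q1": ["kc_pathogen"],
    "q2": ["kc_pathogen", "kc_transmission"],
    "q3": ["kc_diagnosis"],
}
actual = {"q1": True, "q2": False, "q3": True}   # student's real responses

def predict(threshold):
    """Predict correct iff every associated KC exceeds the mastery threshold."""
    return {q: all(kc_beliefs[kc] >= threshold for kc in kcs)
            for q, kcs in question_kcs.items()}

def accuracy(threshold):
    preds = predict(threshold)
    return sum(preds[q] == actual[q] for q in actual) / len(actual)

# Sweep thresholds between 0.0 and 1.0, as in the evaluation, and report the best.
best = max((round(t * 0.05, 2) for t in range(21)), key=accuracy)
print(best, accuracy(best))
```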

The DBN model outperformed a random baseline model across a range of threshold values. The DBN model most accurately predicted students' posttest responses at a threshold level of 0.32 (M = .594, SD = .152). A Wilcoxon-Mann-Whitney U test verified that the DBN knowledge-tracing model was significantly more accurate than the random model at the 0.32 threshold level, z = 4.79, p < .0001. Additional Mann-Whitney tests revealed that the DBN model's predictive accuracy was significantly greater than that of the random model, at the α = .05 level, for the entire range of thresholds between .08 and .56.

Student Goal Recognition

Goal recognition and its sibling tasks, plan recognition and activity recognition, are long-standing AI problems (Charniak and Goldman 1993; Kautz and Allen 1986; Singla and Mooney 2011) that are central to game-based learning. The problems are cases of abduction: given domain knowledge and a sequence of actions performed by an agent, the task is to infer which plan or goal the agent is pursuing. Recent work has yielded notable advances in recognizing agents' goals and plans, including methods for recognizing multiple concurrent and interleaved goals (Hu and Yang 2008), methods for recognizing activities in multiagent settings (Sadilek and Kautz 2010), and methods for augmenting statistical relational learning techniques to better support abduction (Singla and Mooney 2011).

Digital games pose significant computational challenges for goal-recognition models. For example, in many games players' abilities change over time, both by unlocking new powers and improving motor skills over the course of game play. In effect, a player's action model may change, in turn modifying the relationships between actions and goals. Action failure is also a critical and deliberate design choice in games; in platform games, a poorly timed jump often leads to a player's demise. In multiplayer games, multiagent goals arise that may involve players competing or collaborating to accomplish game objectives. Individual players may also pursue ill-defined goals, such as "explore" or "try to break the game." In these cases goals and actions may be cyclically related; players take actions in pursuit of goals, but they may also choose goals after they are revealed by particular actions. For these reasons, game-based learning environments offer promising test beds for investigating different formulations of goal-recognition tasks.

To investigate these issues, we have been devising goal-recognition models for Crystal Island (Ha et al. 2011). Given its relationship to abduction, goal recognition appears well suited for logical representation and inference. However, goal recognition in digital games also involves inherent uncertainty. For example, a single sequence of actions is often explainable by multiple possible goals. Markov logic networks (MLNs) provide a formalism that unifies logical and probabilistic representations into a single framework (Richardson and Domingos 2006). To address the problem of goal recognition with exploratory goals in game environments, a Markov logic goal-recognition framework was devised.

The MLN goal-recognition model was trained on a corpus collected from player interactions with Crystal Island (Ha et al. 2011). In this setting, goal recognition involves predicting the next narrative subgoal that the student will complete as part of investigating the mystery.

An MLN consists of a set of weighted first-order logic formulae. A weight reflects the importance of the constraint represented by its associated logic formula in a given model. Figure 6 shows 13 MLN formulae that are included in the goal-recognition model. Formula 1 represents a hard constraint that needs to be satisfied at all times. This formula requires that, for each action a at each time step t, there exists a single goal g. Formulae 2–13 are soft constraints that are allowed to be violated. Formula 2 reflects the prior distribution of goals in the corpus. Formulae 3–13 predict the player's goal g at time t based on the values of the three action properties, action type a, location l, and narrative state s, as well as the previous goal. The weights for the soft formulae were learned with theBeast, an off-the-shelf tool for MLNs that uses cutting plane inference (Riedel 2008).

Figure 6. Formulae for MLN Goal-Recognition Model.
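Figure 6 itself is not reproduced in this transcript. As an illustration of the two kinds of constraints just described, the following LaTeX fragment gives a plausible rendering of the hard constraint and two representative soft constraints; the predicate names paraphrase the description above and are not the paper's exact formulae.

```latex
% Illustrative rendering, not the paper's exact notation.
% Formula 1 (hard): each observed action at time t has exactly one goal.
\forall t, a :\; \mathit{action}(t, a) \Rightarrow \exists!\, g \;\, \mathit{goal}(t, g)

% Formulae 2-13 (soft, weighted): action properties and the previously
% completed goal provide weighted evidence for the current goal, e.g.
w_1 :\; \mathit{actionType}(t, a) \Rightarrow \mathit{goal}(t, g)
w_2 :\; \mathit{location}(t, l) \wedge \mathit{prevGoal}(t, g') \Rightarrow \mathit{goal}(t, g)
```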

Similar to the U-Director framework described above, the MLN framework encodes player actions in the game environment using three properties: action type, location, and narrative state.

Action Type: The type of action taken by the player, such as moving to a particular location, opening a door, or testing an object using the laboratory's testing equipment. Our data includes 19 distinct types of player actions.

Location: The place in the virtual environment where a player action is taken. This includes 39 fine-grained and nonoverlapping sublocations that decompose the seven major camp locations in Crystal Island.

Narrative State: A representation of the player's progress in solving the narrative scenario. Narrative state is encoded as a vector of four binary variables, each of which represents a milestone event within the narrative.

The performance of the proposed MLN goal-recognition model was compared to one trivial and two nontrivial baseline models (Ha et al. 2011). The trivial baseline was the majority baseline, which always predicted the goal that appears most frequently in the training data. The nontrivial baselines were two n-gram models, unigram and bigram. The unigram model predicted goals based on the current player action only, while the bigram model considered the previous action as well. In the n-gram models, player actions were represented by a single aggregate feature that combined all three action properties: action type, location, and narrative state. In addition to game environments, n-gram models have been used in previous goal-recognition work for spoken dialogue systems (Blaylock and Allen 2003).
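The sketch below illustrates how such n-gram baselines can be implemented, with each player action collapsed into a single aggregate feature and a bigram-to-unigram-to-majority back-off; the back-off scheme and the toy corpus are assumptions made for illustration.

```python
# Sketch of unigram and bigram goal-recognition baselines. Each player action
# is one aggregate feature (action type + location + narrative state), and
# goals are predicted from conditional frequencies in the training corpus.
from collections import Counter, defaultdict

# (aggregate_action, goal) pairs in temporal order for one player (invented)
corpus = [("move|dock|0000", "talk_to_nurse"),
          ("talk|infirmary|1000", "talk_to_nurse"),
          ("test|lab|1100", "test_milk"),
          ("test|lab|1100", "test_milk"),
          ("talk|infirmary|1110", "submit_diagnosis")]

unigram = defaultdict(Counter)   # goal counts given current action
bigram = defaultdict(Counter)    # goal counts given (previous, current) actions
prev = None
for act, goal in corpus:
    unigram[act][goal] += 1
    if prev is not None:
        bigram[(prev, act)][goal] += 1
    prev = act

majority = Counter(g for _, g in corpus).most_common(1)[0][0]

def predict(prev_act, act):
    """Back off from bigram to unigram to the majority goal."""
    if bigram[(prev_act, act)]:
        return bigram[(prev_act, act)].most_common(1)[0][0]
    if unigram[act]:
        return unigram[act].most_common(1)[0][0]
    return majority

print(predict("test|lab|1100", "talk|infirmary|1110"))  # -> "submit_diagnosis"
```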

The two n-gram models and the proposed MLN model were evaluated with tenfold cross validation. The entire data set was partitioned into ten nonoverlapping subsets, ensuring that data from the same player did not appear in both the training and the testing data. Each subset of the data was used for testing exactly once. The models' performance was measured using F1, which is the harmonic mean of precision and recall. Table 5 shows the average performance of each model over tenfold cross validation. The MLN model scored 0.484 in F1, achieving an 82 percent improvement over the baseline. The unigram model performed better than the bigram model. A one-way repeated-measures ANOVA confirmed that the differences among the three compared models were statistically significant, F(2, 18) = 71.87, p < 0.0001. A post hoc Tukey test revealed that the differences between all pairs of the three models were statistically significant (p < .01).

                           Baseline  Unigram  Bigram  Factored MLN
F1                         0.266     0.396    0.330   0.484
Improvement over Baseline  N/A       49%      24%     82%

Table 5. F1 Scores for MLN and Baseline Goal-Recognition Models.

Educational Impact

In addition to the version of Crystal Island for middle school science education described above, two additional versions of Crystal Island have been developed: one for elementary science education, and one for integrated science and literacy education for middle school students. More than 4000 students have been involved in studies with the Crystal Island learning environments. While the learning environments are in continual development, they regularly achieve learning gains in students. For example, an observational study involving 150 students investigated the relationship between learning and engagement in Crystal Island (Rowe et al. 2011). All students in the study played Crystal Island. The investigation explored questions in the science education community about whether learning effectiveness and engagement are synergistic or conflicting in game-based learning. Students were given pre- and posttests on the science subject matter. An investigation of learning gains found that students answered more questions correctly on the posttest (M = 8.61, SD = 2.98) than the pretest (M = 6.34, SD = 2.02), and this finding was statistically significant, t(149) = 10.49, p < .001.

The relationship between learning and engagement was investigated by analyzing students' learning gains, problem-solving performance, and several engagement-related factors. The engagement-related factors included presence, situational interest, and in-game score. Rather than finding an oppositional relationship between learning and engagement, the study found a strong positive relationship between learning outcomes, in-game problem solving, and increased engagement (table 6). Specifically, a linear regression analysis found that microbiology background knowledge, presence, and in-game score were significant predictors of microbiology posttest score, and the model as a whole was significant, R² = .33, F(3, 143) = 23.46, p < .001. A related linear regression analysis found that microbiology pretest score, number of in-game goals completed, and presence were all significant predictors of microbiology posttest performance, R² = .35, F(4, 127) = 16.9, p < .01. Similar results have been found with the elementary school version of Crystal Island: with 800 students over twelve 50-minute class periods, students' science content test scores increased significantly, the equivalent of a letter grade.

Predictor         B      SE   β
Pretest           .46**  .09  .33**
Presence          .03*   .01  .15*
Final Game Score  .01**  .00  .31**
R²                .33

Table 6. Regression Results Predicting Students' Microbiology Posttest Performance.
Note: ** p < .01; * p < .05.

Related Work

Intelligent game-based learning environments leverage advances from two communities: intelligent tutoring systems and intelligent narrative technologies. Intelligent tutoring systems model key aspects of one-on-one human tutoring in order to create learning experiences that are individually tailored to students based on their cognitive, affective, and metacognitive states (Woolf 2008). By maintaining explicit representations of learners' knowledge and problem-solving skills, intelligent tutoring systems can dynamically customize problems, feedback, and hints to individual learners (Koedinger et al. 1997; VanLehn 2006). Recent advances in intelligent tutoring systems include improvements to fine-grained, temporal models of student knowledge acquisition (Baker, Goldstein, and Heffernan 2011); models of tutorial dialogue strategies that enhance students' cognitive and affective learning outcomes (Forbes-Riley and Litman 2011); models of students' affective states and transitions during learning (Conati and Maclaren 2009; D'Mello and Graesser 2010); machine-learning-based techniques for embedded assessment (Feng, Heffernan, and Koedinger 2009); and tutors that model and directly enhance students' self-regulated learning skills (Azevedo et al. 2010; Biswas et al. 2010). Critically, the field has converged on a set of scaffolding functionalities that yield improved student learning outcomes compared to nonadaptive techniques (VanLehn 2006).

Intelligent narrative technologies model human storytelling and comprehension processes, including methods for generating interactive narrative experiences that develop and adapt in real time. Given the central role of narratives in human cognition and communication, there has been growing interest in leveraging computational models of narrative for a broad range of applications in training (Johnson 2010; Kim et al. 2009; Si, Marsella, and Pynadath 2009) and entertainment (Lin and Walker 2011; McCoy et al. 2011; Porteous et al. 2011; Swanson and Gordon 2008; Yu and Riedl 2012). In educational settings, computational models of interactive narrative can serve as the basis for story-centric problem-solving scenarios that adapt story-centered instruction to individual learners (Johnson 2010; Lim et al. 2012; Rowe et al. 2011).

Educational research on story-centered game-based learning environments is still in its nascent stages. While there have been several examples of successful story-centric game-based learning environments for K–12 education (Ketelhut et al. 2010; Lim et al. 2012; Warren, Dondlinger, and Barab 2008) and training applications (Johnson 2010), considerable work remains to establish an empirical account of which educational settings and game features are best suited for promoting effective learning and high engagement. For example, recent experiments conducted with college psychology students have indicated that narrative-centered learning environments may not always enhance short-term content retention more effectively than direct instruction (Adams et al. 2012). However, the instructional effectiveness of story-centered game-based learning environments is likely affected by their instructional designs, game designs, and capacity to tailor scaffolding to individual learners using narrative-centered tutorial planning, student affect recognition, student modeling, and student goal recognition. Additional empirical research is needed to determine which features, and artificial intelligence techniques, are most effective in various settings. Furthermore, intelligent game-based learning environments have a broad range of applications for guided inquiry-based learning, preparation for future learning, assessment, and learning in informal settings such as museums and homes. Identifying how integrated AI systems that combine techniques from intelligent tutoring systems and intelligent narrative technologies can best be utilized to enhance robust, lifelong learning is a critical challenge for the field.

Conclusion

Intelligent game-based learning environments artfully integrate commercial game technologies and AI frameworks from intelligent tutoring systems and intelligent narrative technologies to create personalized learning experiences that are both effective and engaging. Because of their ability to dynamically tailor narrative-centered problem-solving scenarios, to customize advice to students, and to provide real-time assessment, intelligent game-based learning environments offer significant potential for learning both in and out of the classroom.

Over the next few years, as student modeling and affect recognition capabilities become even more powerful, intelligent game-based learning environments will continue to make their way into an increasingly broad range of educational settings spanning classrooms, homes, science centers, and museums. With their growing ubiquity, they will be scaled to new curricula and serve students of all ages. It will thus become increasingly important to understand how students can most effectively interact with them, and what role AI-based narrative and game mechanics can play in scaffolding learning and realizing sustained engagement. Further, continued improvements in the accuracy, scalability, robustness, and ease of authorship of AI models for narrative-centered tutorial planning, student affect recognition, student modeling, and student goal recognition will be critical for actualizing wide-scale use of intelligent game-based learning environments in learners' everyday lives.

Acknowledgements

The authors wish to thank members of the IntelliMedia Group of North Carolina State University for their assistance. This research was supported by the National Science Foundation under Grants REC-0632450, DRL-0822200, IIS-0812291, and CNS-0540523. This work was also supported by the National Science Foundation through a Graduate Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This work was also supported by the Bill and Melinda Gates Foundation, the William and Flora Hewlett Foundation, and Educause.

ReferencesAdams, D. M.; Mayer, R. E.; Macnamara, A.; Koenig, A.; andWainess, R. 2012. Narrative Games for Learning: Testing theDiscovery and Narrative Hypotheses. Journal of EducationalPsychology 104(1): 235–249.

Arroyo, I.; Cooper, D.; Burleson, W.; Woolf, B.; Muldner, K.;and Christopherson, R. 2009. Emotion Sensors Go toSchool. In Proceedings of the Fourteenth International Confer-ence on Artificial Intelligence in Education, 17–24. Amsterdam:IOS Press.

Azevedo, R.; Moos, D. C.; Johnson, A. M.; and Chauncey, A.D. 2010. Measuring Cognitive and Metacognitive Regula-tory Processes During Hypermedia Learning: Issues andChallenges. Educational Psychologist 45(4): 210–223.

Baker, R. S. J. D.; Goldstein, A. B.; and Heffernan, N. T. 2011. Detecting Learning Moment-by-Moment. International Journal of Artificial Intelligence in Education 21(1–2): 5–25.

Biswas, G.; Jeong, H.; Kinnebrew, J. S.; Sulcer, B.; and Roscoe, R. 2010. Measuring Self-Regulated Learning Skills Through Social Interactions in a Teachable Agent Environment. Research and Practice in Technology Enhanced Learning 5(2): 123–152.

Blaylock, N., and Allen, J. 2003. Corpus-Based, Statistical Goal Recognition. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, 1303–1308. San Francisco: Morgan Kaufmann Publishers.

Charniak, E., and Goldman, R. P. 1993. A Bayesian Model of Plan Recognition. Artificial Intelligence 64(1): 53–79.

Conati, C., and MacLaren, H. 2009. Empirically Building and Evaluating a Probabilistic Model of User Affect. User Modeling and User-Adapted Interaction 19(3): 267–303.

Craig, S.; Graesser, A.; Sullins, J.; and Gholson, B. 2004. Affect and Learning: An Exploratory Look into the Role of Affect in Learning with AutoTutor. Learning, Media and Technology 29(3): 241–250.

Dean, T., and Kanazawa, K. 1989. A Model for Reasoning About Persistence and Causation. Computational Intelligence 5(3): 142–150.

de Vicente, A., and Pain, H. 2002. Informing the Detection of the Students' Motivational State: An Empirical Study. In Proceedings of the Sixth International Conference on Intelligent Tutoring Systems, 933–943. Berlin: Springer.

D'Mello, S., and Graesser, A. 2010. Multimodal Semi-Automated Affect Detection from Conversational Cues, Gross Body Language, and Facial Features. User Modeling and User-Adapted Interaction 20(2): 147–187.

Druzdzel, M. J. 1999. SMILE: Structural Modeling, Inference, and Learning Engine and GeNIe: A Development Environment for Graphical Decision-Theoretic Models. In Proceedings of the Eleventh Conference on Innovative Applications of Artificial Intelligence, 902–903. Menlo Park, CA: AAAI Press.

Elliot, A. J., and Pekrun, R. 2007. Emotion in the Hierarchical Model of Approach-Avoidance Achievement Motivation. In Emotion in Education, ed. P. A. Schutz and R. Pekrun, 57–74. Amsterdam: Elsevier.

Feng, M.; Heffernan, N.; and Koedinger, K. 2009. Addressing the Assessment Challenge with an Online System That Tutors as It Assesses. User Modeling and User-Adapted Interaction 19(3): 243–266.

Forbes-Riley, K., and Litman, D. 2011. Designing and Evaluating a Wizarded Uncertainty-Adaptive Spoken Dialogue Tutoring System. Computer Speech and Language 25(1): 105–126.

Gee, J. P. 2007. What Video Games Have to Teach Us About Learning and Literacy, 2nd ed. New York: Palgrave Macmillan.

Ha, E. Y.; Rowe, J. P.; Mott, B. W.; and Lester, J. C. 2011. Goal Recognition with Markov Logic Networks for Player-Adaptive Games. In Proceedings of the Seventh International Conference on Artificial Intelligence and Interactive Digital Entertainment, 32–39. Menlo Park, CA: AAAI Press.

Habgood, M. P. J., and Ainsworth, S. E. 2011. Motivating Children to Learn Effectively: Exploring the Value of Intrinsic Integration in Educational Games. Journal of the Learning Sciences 20(2): 169–206.

Hu, D. H., and Yang, Q. 2008. CIGAR: Concurrent and Interleaving Goal and Activity Recognition. In Proceedings of the Twenty-Third National Conference on Artificial Intelligence, 1363–1368. Menlo Park, CA: AAAI Press.

Johnson, W. L. 2010. Serious Use of a Serious Game for Language Learning. International Journal of Artificial Intelligence in Education 20(2): 175–195.

Kautz, H., and Allen, J. F. 1986. Generalized Plan Recognition. In Proceedings of the Fifth National Conference on Artificial Intelligence, 32–37. Menlo Park, CA: AAAI Press.

Ketelhut, D. J.; Dede, C.; Clarke, J.; and Nelson, B. C. 2010. A Multi-User Virtual Environment for Building Higher Order Inquiry Skills in Science. British Journal of Educational Technology 41(1): 56–68.

Kim, J.; Hill, R. W.; Durlach, P. J.; Lane, H. C.; Forbell, E.; Core, M.; Marsella, S. C.; et al. 2009. BiLAT: A Game-Based Environment for Practicing Negotiation in a Cultural Context. International Journal of Artificial Intelligence in Education 19(3): 289–308.

Kirschner, P.; Sweller, J.; and Clark, R. 2006. Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist 41(2): 75–86.

Koedinger, K. R.; Anderson, J. R.; Hadley, W. H.; and Mark, M. A. 1997. Intelligent Tutoring Goes to School in the Big City. International Journal of Artificial Intelligence in Education 8(1): 30–43.

Lee, S. Y.; Mott, B. W.; and Lester, J. C. 2012. Real-Time Narrative-Centered Tutorial Planning for Story-Based Learning. In Proceedings of the Eleventh International Conference on Intelligent Tutoring Systems, 476–481. Berlin: Springer.

Lee, S. Y.; Mott, B. W.; and Lester, J. C. 2011. Modeling Narrative-Centered Tutorial Decision Making in Guided Discovery Learning. In Proceedings of the Fifteenth International Conference on Artificial Intelligence in Education, 163–170. Berlin: Springer.

Lim, M. Y.; Dias, J.; Aylett, R.; and Paiva, A. 2012. Creating Adaptive Affective Autonomous NPCs. Autonomous Agents and Multi-Agent Systems 24(2): 287–311.

Lin, G. I., and Walker, M. A. 2011. All the World's a Stage: Learning Character Models from Film. In Proceedings of the Seventh Artificial Intelligence and Interactive Digital Entertainment Conference, 46–52. Palo Alto, CA: AAAI Press.

Marsella, S. C., and Gratch, J. 2009. EMA: A Process Model of Appraisal Dynamics. Cognitive Systems Research 10(1): 70–90.

Mateas, M., and Stern, A. 2005. Structuring Content in the Facade Interactive Drama Architecture. In Proceedings of the Artificial Intelligence and Interactive Digital Entertainment Conference, 93–98. Palo Alto, CA: AAAI Press.

Mayer, R. 2004. Should There Be a Three-Strikes Rule Against Pure Discovery Learning? American Psychologist 59(1): 4–19.

McCoy, J.; Treanor, M.; Samuel, B.; Wardrip-Fruin, N.; and Mateas, M. 2011. Comme il Faut: A System for Authoring Playable Social Models. In Proceedings of the Seventh International Conference on Artificial Intelligence and Interactive Digital Entertainment, 158–163. Palo Alto, CA: AAAI Press.

McQuiggan, S.; Lee, S.; and Lester, J. 2007. Early Prediction of Student Frustration. In Proceedings of the Second International Conference on Affective Computing and Intelligent Interaction, 698–709. Berlin: Springer.

Meluso, A.; Zheng, M.; Spires, H. A.; and Lester, J. 2012. Enhancing Fifth Graders' Science Content Knowledge and Self-Efficacy Through Game-Based Learning. Computers and Education 59(2): 497–504.

Mott, B. W., and Lester, J. C. 2006. U-Director: A Decision-Theoretic Narrative Planning Architecture for Storytelling Environments. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, 977–984. New York: Association for Computing Machinery.

Picard, R. W.; Papert, S.; Bender, W.; Blumberg, B.; Breazeal, C.; Cavallo, D.; and Machover, T. 2004. Affective Learning — A Manifesto. BT Technology Journal 22(4): 253–269.

Porteous, J.; Teutenberg, J.; Charles, F.; and Cavazza, M. 2011. Controlling Narrative Time in Interactive Storytelling. In Proceedings of the Tenth International Conference on Autonomous Agents and Multiagent Systems, 449–456. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.

Richardson, M., and Domingos, P. 2006. Markov Logic Networks. Machine Learning 62(1–2): 107–136.

Riedel, S. 2008. Improving the Accuracy and Efficiency of MAP Inference for Markov Logic. In Proceedings of the Twenty-Fourth Annual Conference on Uncertainty in Artificial Intelligence, 468–475. Seattle, WA: AUAI Press.

Riedl, M. O.; Stern, A.; Dini, D. M.; and Alderman, J. M. 2008. Dynamic Experience Management in Virtual Worlds for Entertainment, Education, and Training. International Transactions on Systems Science and Applications, Special Issue on Agent Based Systems for Human Learning 4(2): 23–42.

Riedl, M. O., and Young, R. M. 2004. An Intent-Driven Planner for Multi-Agent Story Generation. In Proceedings of the Third International Conference on Autonomous Agents and Multiagent Systems, 186–193. Piscataway, NJ: Institute of Electrical and Electronics Engineers.

Rowe, J. P., and Lester, J. C. 2010. Modeling User Knowledge with Dynamic Bayesian Networks in Interactive Narrative Environments. In Proceedings of the Sixth Annual Artificial Intelligence and Interactive Digital Entertainment Conference, 57–62. Palo Alto, CA: AAAI Press.

Rowe, J. P.; Shores, L. R.; Mott, B. W.; and Lester, J. C. 2011. Integrating Learning, Problem Solving, and Engagement in Narrative-Centered Learning Environments. International Journal of Artificial Intelligence in Education 21(2): 115–133.

Sabourin, J. L.; Mott, B. W.; and Lester, J. C. 2011. Modeling Learner Affect with Theoretically Grounded Dynamic Bayesian Networks. In Proceedings of the Fourth International Conference on Affective Computing and Intelligent Interaction, 286–295. Berlin: Springer.

Sadilek, A., and Kautz, H. 2010. Recognizing Multi-Agent Activities from GPS Data. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 1134–1139. Menlo Park, CA: AAAI Press.

Si, M.; Marsella, S.; and Pynadath, D. 2009. Directorial Control in a Decision-Theoretic Framework for Interactive Narrative. In Proceedings of the Second International Conference on Interactive Digital Storytelling, 221–233. Berlin: Springer.

Singla, P., and Mooney, R. 2011. Abductive Markov Logic for Plan Recognition. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, 1069–1075. San Francisco, CA: AAAI Press.

Swanson, R., and Gordon, A. 2008. Say Anything: A Massively Collaborative Open Domain Story Writing Companion. In Proceedings of the First Joint International Conference on Interactive Digital Storytelling, 32–40. Berlin: Springer.

VanLehn, K. 2006. The Behavior of Tutoring Systems. International Journal of Artificial Intelligence in Education 16(3): 227–265.

Warren, S. J.; Dondlinger, M. J.; and Barab, S. A. 2008. A MUVE Towards PBL Writing: Effects of a Digital Learning Environment Designed to Improve Elementary Student Writing. Journal of Research on Technology in Education 41(1): 113–140.

Woolf, B. P. 2008. Building Intelligent Interactive Tutors: Student-Centered Strategies for Revolutionizing E-Learning. San Francisco: Morgan Kaufmann Publishers.

Yu, H., and Riedl, M. O. 2012. A Sequential Recommendation Approach for Interactive Personalized Story Generation. In Proceedings of the Eleventh International Conference on Autonomous Agents and Multiagent Systems, 71–78. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.

James Lester is a distinguished professor of computer science at North Carolina State University. His research focuses on transforming education with technology-rich learning environments. Utilizing AI, game technologies, and computational linguistics, he designs, develops, fields, and evaluates next-generation learning technologies for K–12 science, literacy, and computer science education. His work on personalized learning ranges from game-based learning environments and intelligent tutoring systems to affective computing, computational models of narrative, and natural language tutorial dialogue. He has served as program chair for the International Conference on Intelligent Tutoring Systems, the International Conference on Intelligent User Interfaces, and the International Conference on Foundations of Digital Games, and as editor-in-chief of the International Journal of Artificial Intelligence in Education. He has been recognized with a National Science Foundation CAREER Award and several Best Paper Awards.

Eun Ha is a postdoctoral research scholar in the Department of Computer Science at North Carolina State University. Her research interests lie in bringing robust solutions to complex problems at the interface of humans and computers by leveraging and advancing AI to create intelligent interactive systems. Her research in natural language has focused on discourse processing and natural language dialogue systems, and her research on user modeling has focused on goal recognition.

Seung Lee received the Ph.D. degree in computer science from North Carolina State University in 2012. His research interests are in multimodal interaction, interactive narrative, statistical natural language processing, and affective computing. He is currently with the SAS Institute, where his work focuses on sentiment analysis in natural language processing.

Bradford Mott is a senior research scientist at the Center for Educational Informatics in the College of Engineering at North Carolina State University. His research interests include AI and human-computer interaction, with applications in educational technology. In particular, his research focuses on game-based learning environments, intelligent tutoring systems, computer games, and computational models of interactive narrative. Prior to joining North Carolina State University, he worked in the games industry developing cross-platform middleware solutions for the PlayStation 3, Wii, and Xbox 360. He received his Ph.D. from North Carolina State University in 2006.

Jonathan Rowe is a senior research associate in the Department of Computer Science at North Carolina State University. His research investigates applications of artificial intelligence and data mining for advanced learning technologies, with an emphasis on game-based learning environments. He is particularly interested in modeling how learners solve problems and experience interactive narratives in virtual worlds.

Jennifer Sabourin is a doctoral student in the Department of Computer Science at North Carolina State University. Her research focus is on developing effective educational technology with the application of AI and data mining techniques. She is particularly interested in the affective and metacognitive processes associated with learning with technology.
