Middlesex University, eis.mdx.ac.uk/staffpages/juanaugusto/AITAmI2006.pdf · Artificial Intelligence Techniques for Ambient Intelligence (AITAmI06)


Artificial Intelligence Techniques

for Ambient Intelligence

(AITAmI06)

Ambient intelligence [1] (AmI) is emerging as an AI-based paradigm with one of the highest potentials to impact daily human life in the near future. The broad idea is to enrich a space (e.g., a room, house, building, bus station, or a critical area in a hospital) with sensors so that the people using that space can benefit from a more flexible and intelligent environment.

Ambient Intelligence offers many expected benefits. For example, it can increase safety by monitoring lifestyle patterns or recent activities and providing assistance when a potentially harmful situation arises. It can increase comfort by making resources available in advance of demand, or by automatically managing multiple parameters of the environment such as temperature, lighting, and music.

The most well-known area of Ambient Intelligence is usually referred to as “Smart Homes”, in which a house is equipped to bring advanced services to its users. Recent applications include the use of smart homes to provide a safe environment for people with special needs. For example, in the case of elderly people suffering from Alzheimer’s disease, a system can minimize risks and ensure appropriate care at critical times by monitoring activities, diagnosing situations, and advising human care-givers as available/required. Applications of this form can greatly improve quality of life.

Other applications of this concept include the use of Ambient Intelligence technology to diagnose situations where safety can be compromised in areas such as hospitals, airports, buses, trains, and underground stations. The ability to rapidly detect and react to hazards in such public environments is extremely important for diminishing the negative effects of any crisis. These environments share the characteristic of being enclosed spaces, which tend to support the necessary instrumentation. However, security-minded applications can also be developed for open areas, such as parks and individual streets.

Ambient intelligence is a multidisciplinary area that encompasses a wide range of technical and practical challenges. It includes research in agent design and distributed intelligence, broad issues in user interfaces, special problems in networking and data communication, plus design considerations involved in embedding smart systems in public spaces and buildings. This workshop emphasizes the impact that Artificial Intelligence can have on Ambient Intelligence, specifically to expand the range of services Ambient Intelligence systems can offer and to increase the flexibility such applications can provide.

Although there have been a number of workshops and conferences on related topics, none have made a strong connection between the achievements of Artificial Intelligence and the potential to improve Ambient Intelligence by borrowing/adapting solutions from this rich area.

The topic of Ambient Intelligence is strongly related to traditional AI topics such as spatio-temporal reasoning, robotics, HCI, and to knowledge representation, inference, planning and execution, which are essential in any AI application. Rather than select any single one of these threads, the workshop aims to provide a broad perspective on the possibilities that AI has to make smart spaces smarter. This can include work in: Spatio-Temporal Reasoning, Causal Reasoning, Planning, Learning / Data Mining, Case-based Reasoning, Decision Making under Uncertainty, Decision Trees, Neural Networks, and Multi-Agent Systems. This list should not be regarded as exclusive, as other, newer areas are emerging that are relevant to both AI and AmI: Human Interaction with Autonomous Systems in Complex Environments, Self-adaptive systems, Context Awareness, Innovative applications of AI to Ambient Intelligence, and Agent-based approaches to AmI.

We are very pleased to offer this compilation of papers from the 1st Workshop on Artificial Intelligence Techniques for Ambient Intelligence. It reflects the contributions made by researchers and practitioners, and includes material from three keynote speakers, five regular papers, one demo, and five posters. All of them provide interesting advances in the area. We want to thank the authors for making this event an exciting one, and thank the Program Committee members who gave shape to the event. We would also like to give our special thanks to ECAI, represented by Toby Walsh, who allowed us to organize AITAmI’06 and co-locate it with ECAI. We would also like to thank Stephen Downey at UUJ and Harsha Veeramachaneni at ECAI for their technical assistance in producing these proceedings.

References

[1] The European Union report, Scenarios for Ambient Intelligence in 2010, available at ftp://ftp.cordis.lu/pub/ist/docs/istagscenarios2010.pdf

Dr. Juan Carlos Augusto
Dr. Daniel Shapiro

August 2006


AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06


Organization

Program Chairs

Dr. Juan Carlos Augusto, University of Ulster, U.K.
Dr. Daniel Shapiro, Applied Reactivity, Inc., U.S.A.

Program Committee

E. Aarts, Philips, The Netherlands
M. Böhlen, State University of New York, USA
A. Butz, University of Munich, Germany
V. Callaghan, University of Essex, UK
C. Combi, University of Verona, Italy
A.K. Dey, Carnegie Mellon University, USA
M. Divitini, Norwegian University of Science, Norway
Ch. Fernstrom, Xerox Research
M. Freed, NASA-Ames Research, USA
B. Gottfried, University of Bremen, Germany
H. Guesgen, University of Auckland, New Zealand
A. Kameas, Computer Technical Institute, Greece
H. Gellersen, Lancaster University, UK
J. Plomp, VTT Electronics, Finland
H. Raffler, Siemens AG, Germany
P. Remagnino, Kingston University, UK
B. Schiele, Darmstadt University, Germany
K. Stathis, City University of London, UK


Table of Contents

Keynotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Towards Ambient Intelligence – An Industrial View
Michael Berger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

Social Interactions in Ambient Intelligent Environments
Boris de Ruyter and Emile Aarts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Context and Content: A Perfect Marriage
Pertti Huuskonen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Research Papers and Demos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

An Agent-based Approach to Personalized House Control
Berardina De Carolis, Giovanni Cozzolongo and Sebastiano Pizzutilo . . . . . . . . . 7

Defining basic behaviours in ambient intelligence environments by means of rule-based programming with visual tools
Andres Munoz, Antonia Vera, Juan A. Botía and Antonio F. Gomez-Skarmeta . . . . 12

A logical framework for an emotionally aware intelligent environment
Carole Adam, Benoit Gaudou, Andreas Herzig and Dominique Longin . . . . . . . . 17

A Human Home Interaction Application
Nati Herrasti, Antonio Lopez and Alfonso Garate . . . . . . . . . . . . . . . . . . . 22

An Intelligent Data Management Reasoning Model within a Pervasive Medical Environment
John O’Donoghue and John Herbert . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

IPRA – An Integrated Pattern Recognition Approach to Enhance the Sensing Abilities of Ambient Intelligence
Holger Schultheis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


Posters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33

Humans and Agents Teaming for Ambient Cognition
Masja Kempen, Manuela Viezzer and Niek Wijngaards . . . . . . . . . . . . . . . . . 38

A Rule Based Application: Diet Advisor
Nati Herrasti, Antonio Lopez and Aitzol Gallastegi . . . . . . . . . . . . . . . . . . 40

Spatiotemporal Ambient Intelligence
Bjorn Gottfried and Hans W. Guesgen . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Semantic Tuple Space: application in a Contextual Information Management System
Ignacio Nieto, Juan Botía and Antonio Gomez-Skarmeta . . . . . . . . . . . . . . . . 44

A mixture of experts for learning lighting control
Stephan K. Nufer, Mathias Buehlmann, Tobi Delbruck and Josef M. Joller . . . . . . 46



Keynotes


Towards Ambient Intelligence – An Industrial View
Dr. Michael Berger 1

1 INTRODUCTION

Ambient Intelligence (AmI) refers to the emerging computing paradigm where human users are empowered through interacting with ambient environments (such as our homes, workplaces, transportation infrastructures, etc.). The environments are aware of the context of the user, able to provide services personalized to their needs, and capable of anticipating their behavior and responding to their presence. Agent and AI technologies play a key role in realizing AmI systems and applications by providing the fundamental information processing technology.

This paper/talk first identifies and analyzes the most important technology building blocks (in the sense of a roadmap) needed to realize the vision of AmI. It shows how agent and AI technologies can support implementation and gives examples of current technological solutions. In a second step, industrial and consumer application case studies are described. The paper/talk ends with an identification of research challenges and an outlook on future developments. (For more detailed information on the roadmap and sample applications see [1].)

2 TECHNOLOGY BUILDING BLOCKS AND ROADMAP

Realizing the vision of AmI requires fulfilling three technological phases. In parallel to all phases, standards for interoperability, a general AmI (reference) architecture, and privacy mechanisms have to be developed.

Phase 1: An important basis for AmI is the provisioning of a ubiquitous information and service infrastructure. Effectively, this is a dynamic distributed network of embedded devices, sensors, actors and systems that can interact with humans and provide them with a variety of information, communication, or collaboration services. Communication services are based, for example, on Bluetooth, WiFi, or ZigBee-based self-organizing sensor networks and may comprise peer-to-peer (P2P) communication mechanisms. Service-oriented architectures, e.g. Web Services or multi-agent platforms with self-organizing behavior, are the basis for the next phases.

Phase 2: Situation/context-processing as a second phase means the ability of a system to sense/acquire and understand the context of a user or process and to use the knowledge about the situation for selecting the best course of action. By context, we mean any information that can be used to characterize the situation of an entity (a person, place, or object that is considered relevant to the interaction between a user and an application). Here, intelligent agent technology will help in modeling, reasoning/learning, and communicating context. Furthermore, more advanced services for distributed data management and synchronization are considered in that phase.

1 Siemens AG, Corporate Technology, email: [email protected]

Phase 3: In a third phase, which I call “intelligent autonomous behavior”, intelligent cooperating agents will play an important role for AmI. These are systems comprised of autonomous software components capable of flexible, collaborative action in dynamic environments. Useful core concepts include reactive (predefined rules), active (predefined goals) and pro-active (self-defined goals/anticipating behavior) functions, co-ordination, customizable and adaptive/learning mechanisms for routine automation, and flexible protocols and strategies for dynamic negotiation and collaboration among autonomous components. In addition, agent technology provides basic concepts for intelligent matchmaking, grouping, information retrieval and filtering.

Simple reactive agents monitor the user’s current situation and are triggered when certain conditions are met. These conditions can be hard-coded or derived from user preferences. More advanced agents are able to plan and consider future events. Even more advanced and adaptive agents try to learn by unobtrusive observation what behavior would be appropriate in a particular situation. Approaches include learning based on Bayesian Networks or Neural Networks as well as user modeling and information filtering techniques.
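The reactive case described above can be sketched as a small condition-action loop. The following is a minimal sketch, not Berger's implementation; the rule conditions, context fields, and action names are invented for illustration.

```python
# Minimal sketch of a reactive AmI agent: condition-action rules fire
# against the current context. Context fields and actions are
# hypothetical examples.

class ReactiveAgent:
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def perceive(self, context):
        """Fire every rule whose condition holds in this context."""
        return [action(context) for condition, action in self.rules
                if condition(context)]

agent = ReactiveAgent()
# Hard-coded rule: dim the lights when a movie starts.
agent.add_rule(lambda c: c.get("activity") == "watching_movie",
               lambda c: "dim_lights")
# Rule derived from a user preference: heat when below the preferred temperature.
preferred_temp = 21  # assumed user preference
agent.add_rule(lambda c: c.get("temperature", preferred_temp) < preferred_temp,
               lambda c: "start_heating")

actions = agent.perceive({"activity": "watching_movie", "temperature": 18})
```

A planning or learning agent would replace the fixed rule list with a policy derived from predicted future events or from observed user behavior.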

3 APPLICATION CASE STUDIES

Inside Siemens, different case studies were developed in several application domains, such as consumer applications for adaptive offices, smart homes, and intelligent cars. As an example, we developed an intelligent answering machine and the corresponding context model. It autonomously activates/deactivates itself depending on whether its user is present (and able to answer the call) or not. When activated and receiving a call, it autonomously decides how to handle the call depending on the social relationship between the caller and its user. Industrial application examples include dynamic supply chains, logistics, and the human resource market.
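The answering machine's context-dependent decision could look roughly like the sketch below. The relationship categories and call-handling actions are invented for illustration; the abstract does not specify them.

```python
# Hypothetical sketch of the intelligent answering machine: activation
# depends on user presence, call handling on the social relationship
# between caller and user. All categories and actions are assumed.

RELATIONSHIP = {"alice": "family", "bob": "colleague"}  # toy contact model

def handle_call(caller: str, user_present: bool) -> str:
    if user_present:
        return "ring"  # machine deactivated: let the user answer
    relation = RELATIONSHIP.get(caller, "unknown")
    if relation == "family":
        return "forward_to_mobile"
    if relation == "colleague":
        return "take_message"
    return "play_generic_greeting"

result = handle_call("alice", user_present=False)
```

In a real deployment, presence would come from the sensing infrastructure of Phase 1 and the relationship model from the context processing of Phase 2.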

4 CHALLENGES AND OUTLOOK

Besides case studies for heterogeneous domains, integrated solutions for several domains are needed in order to evaluate the vision of AmI. For such integrated solutions we are currently lacking higher-level interoperability concepts, privacy concepts, and an overall AmI reference architecture.

User acceptance of AmI solutions is another challenge. Bigger and longer-lasting field trials for evaluation, and approaches for testing and introducing AmI solutions at the customer side, are needed.

REFERENCES
[1] Michael Berger, Florian Fuchs, and Michael Pirker, ‘Agent technologies for ambient information systems’, in Proc. of 1st International Workshop in Software Agents in Information Systems, Fraunhofer, Karlsruhe, Germany, (February 2006).


Social Interactions in Ambient Intelligent environments

Boris de Ruyter, Emile Aarts 1

1 INTRODUCTION
Recent technological developments have triggered the realization of innovative application and service scenarios of intelligent environments. However, there is a potential problem with regard to the social acceptance of such scenarios. Aspects such as information overload, violations of privacy, and a lack of trust in general threaten the introduction of these technologies into our day-to-day life. It is also often not clear whether people will perceive such scenarios as beneficial. Within the vision of Ambient Intelligence (AmI), human needs are positioned centrally and technology is seen as a means to enrich our life. In coarse terms, Ambient Intelligence [1] refers to the embedding of technologies into electronic environments that are sensitive and responsive to the presence of people.

Essential in this vision are the experiences people have when being in AmI environments. Examples of user experiences such as immersiveness and social connectedness have been investigated in the Philips HomeLab: a fully functional home environment for conducting behavioral studies.

HomeLab offers a unique scientific environment for evaluating the feasibility and usability of technologies that are used in the realization of AmI scenarios. Equipped with an extensive observation infrastructure of 34 cameras and microphones, the HomeLab has enabled behavioral researchers to study the effect of innovative technologies on the user’s acceptance of Ambient Intelligence.

2 FROM ENTERTAINING EXPERIENCES TO ASSISTED LIVING

Whereas AmI research has traditionally been focusing on user experiences in more entertainment-oriented scenarios, there is a move towards the deployment of AmI technologies for health- and wellbeing-related scenarios. This tendency has taken form in the theme of Ambient Assisted Living (http://www.aal169.org/). In AAL there is a clear role for AmI technologies to support people in their daily life. Although user experiences (e.g. experiencing social presence) are still the major objectives, in AAL there is an additional objective to influence and change human behavior (e.g. change lifestyle to a healthier way of living). Due to these increased expectations of AmI technologies, we observe that the system intelligence of AmI environments requires complementing with social intelligence (see Figure 1).

Related to creating user acceptance, AmI technologies need to be socialized: compliant with social conventions. For example, in a sensing environment some form of system intelligence can be context-aware and thus know that a person is in a private situation. A personalized system would know that it is the user’s preference not to be disturbed in such a situation. An intelligent system that is socialized would use common-sense knowledge not to allow disturbing the person in such a context. From this simple example it should also be clear that although we make a distinction between system and social intelligence at the conceptual level, at the implementation level both forms of intelligence need to come together.
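The layering in this example can be sketched as successive checks: context awareness detects the situation, personalization knows the stated preference, and the socialized layer falls back on a common-sense default. The predicates and the do-not-disturb rule below are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch of layered system/social intelligence. The
# context fields, preference keys, and the common-sense rule are
# assumptions made for this example.

def should_interrupt(context: dict, preferences: dict) -> bool:
    # Context-aware layer: knows the person is in a private situation.
    in_private_situation = context.get("situation") == "private"
    # Personalized layer: the user has stated a do-not-disturb preference.
    user_opts_out = preferences.get("no_disturb_when_private", False)
    # Socialized layer: common-sense default, even without a stated
    # preference, do not disturb someone in a private situation.
    common_sense_block = in_private_situation
    return not ((user_opts_out and in_private_situation) or common_sense_block)

decision = should_interrupt({"situation": "private"}, {})
```

The socialized layer deliberately subsumes the personalized one here, which is the point of the example: the system behaves considerately even when no explicit preference was ever set.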

An empathic system is able to take into account the inner state of emotions and motives a person has and adapt to this state. For example, a form of system intelligence could infer that a person is getting frustrated, while the socially intelligent system with empathic capabilities would trigger the AmI environment to demonstrate understanding and helpful behavior towards the person.

Ultimately, a conscious system would not only be aware of the inner state of the person but also of its own inner state. With such a level of social intelligence, the conscious system could anticipate the effect a person is trying to have on the system. With this level of social intelligence it will be possible to develop rich, human-like interactions in AmI environments.

In the next section we describe the results of an empirical study into the effect of adding social intelligence to an interactive system.

Figure 1: Embedding and intelligence aspects of AmI

1 Philips Research Europe.


3 THE IMPACT OF SOCIAL INTELLIGENCE
In a controlled experiment [2] the effects of perceived social intelligence in a home dialogue system were studied. More specifically, the following research questions were addressed:

1) Will test participants be able to perceive the level of social intelligence implemented in the home dialogue system?

2) What is the effect of bringing the concept of social intelligence into a home dialogue system on the perception of quality of the interactive systems (other than the home dialogue system) in the environment?

3) Will the participants’ acceptance of home dialogue systems increase if the concept of social intelligence is implemented into these systems?

The home dialogue system used in our study takes the form of an “interactive Cat”. The iCat [3] is a 38 cm tall user-interface robot. The robot’s head is equipped with 13 standard R/C servos that control different parts of its face, such as the eyebrows, eyes, eyelids, mouth, and head position. With this setup we are able to generate the many different facial expressions needed to create an emotionally expressive character (see Figure 2).

Figure 2: The iCat Home Dialogue System

Test participants were asked to conduct a number of experimental tasks under different behavioral conditions of the home dialogue system:

1) During the ‘social intelligence’ condition the robot would talk (using synthesized speech) with lip synchronization, blink its eyes throughout the session, and display facial expressions while exhibiting social intelligence aspects such as not ignoring affective signals from the user, responding verbally or displaying an appropriate facial expression to obvious frustration, confusion, or contentment.

2) In the ‘socially neutral’ condition the iCat did not display any facial expressions and did not blink its eyes. It responded verbally only to explicit questions from the participant.

All interactions between the user and the iCat were recorded and annotated for further analysis. Three post-experimental questionnaires were administered:

1) The Social Behaviors Questionnaire (SBQ). This questionnaire assesses the amount of perceived social intelligence in an interactive system.

2) The User Satisfaction Questionnaire (USQ). This questionnaire measures the amount of user satisfaction resulting from interacting with technology.

3) The Unified Theory of Acceptance and Use of Technology (UTAUT). This questionnaire assesses the attitude people have towards future usage of technology.

The results from the questionnaire data confirm that: (i) users perceive the social intelligence in the home dialogue system, (ii) users are more satisfied with the technology in their environment in the context of a socially intelligent home dialogue system, and (iii) users have a more positive attitude towards having a home dialogue system in their environment if this system is socially intelligent. From the behavioral observations it was noted that participants were more ‘social’ with the socially intelligent iCat: they were much more inclined to laugh, ask questions, and ask for elaborations than with the neutral iCat.

5 CONCLUSIONS
As AmI technologies are gaining impact in our daily lives, the need for extending system intelligence with social intelligence becomes more articulated. Based on controlled experimental studies it can be concluded that bringing the concept of social intelligence into AmI systems has a positive effect on the user’s acceptance of AmI technologies.

REFERENCES
[1] Aarts, E.H.L., Harwig, R., & Schuurmans, M. (2001). Ambient Intelligence. In: P. Denning, The Invisible Future, McGraw Hill, New York, 235-250.
[2] de Ruyter, B., Saini, P., Markopoulos, P., & van Breemen, A. (2005). Assessing the effects of building social intelligence in a robotic interface for the home. Interacting with Computers, 17, 522-541, Elsevier.
[3] Van Breemen, A.J.N. (2004). Bringing Robots to Life: Applying principles of animation to robots. CHI2004 Workshop on Shaping Human-Robot Interaction, Vienna.


Context and Content: A Perfect Marriage
Pertti Huuskonen 1

1 Nokia Research Center, Visiokatu 1, FI-33720 Tampere, Finland, [email protected]

1 FILE CONTEXT
The recent surges in the popularity of digital imaging, music, and networking are causing an explosion in the amount of personal content. As a result, users are facing the task of organizing and maintaining their digital content collections [1]. Information management has so far only been a problem for large corporations, with IT departments to help with the necessary machinery. Now similar problems are appearing in the world of ordinary users. One way to decrease the apparent complexity of content management is to take advantage of context information. It can, among other things, help in limiting search spaces, decreasing the amount of user input, and visualising file histories. Mental reconstruction of the context in which an old photo was shot can be assisted by recording all the context information that was available at that moment. We call this notion the file context. Such contextual information can be crucial for reconstructing the past and predicting the future. We feel that a useful channel and format for conveying context information is to attach it to the metadata contained in media files. Even though metadata-enhanced context has been addressed in earlier work (e.g. [2]), context as metadata has gained much less attention. This context metadata can include information about the originating device of the media object, its operating parameters, the (human) creator, and practically any other available information on the surrounding situation when the object was created. Further events in the life of the media object can also be recorded in the file context. Editing, viewing, sending, and organising the objects can all leave a context history, which can be useful for later analysis. For instance, people often look for email attachments not by their names or their content, but by the name of the sender. It can be very beneficial or interesting to recover the communication routes our important files have traveled.
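The file-context notion above can be sketched as metadata on a media object plus an append-only event history. The schema and field names below are assumptions for illustration; the paper does not prescribe a format.

```python
# Sketch of "file context": a context snapshot captured at creation
# time, plus later events (edit, view, send, ...) appended as a context
# history. All field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MediaObject:
    name: str
    creation_context: dict                        # context at creation time
    history: list = field(default_factory=list)   # later context events

    def record_event(self, event: str, context: dict):
        self.history.append({"event": event, **context})

photo = MediaObject(
    "holiday.jpg",
    creation_context={"device": "phone-camera", "location": "Tampere",
                      "time": "2006-07-15T14:02", "creator": "alice"},
)
photo.record_event("sent", {"to": "bob"})
photo.record_event("viewed", {"by": "bob"})

# Search by communication route rather than by name or content:
recipients = [e["to"] for e in photo.history if e["event"] == "sent"]
```

In practice such records would be embedded in the media file's own metadata (e.g. alongside EXIF-style fields) rather than held in a separate object.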

2 USES OF FILE CONTEXT
Context and metadata enjoy a symbiosis. Metadata definitions offer a way to store and communicate context, and context information gives added semantic meaning to the metadata. The more automated analysis of media files we want, the more we need the added context clues. The file context can also be used to guess the context of other related objects. For instance, an image that belongs to a slide show probably has something to do with the other images in that show. Maybe they have been taken in the same place, showing similar subjects, having similar colors, etc. This potential similarity can be used to focus a closer search on promising initial candidates. Location and time adjacency can be strong context clues for images; other clues will be found for other media files. Any application that needs the fundamental content management operations (searching, sorting, copying, sending…) can potentially benefit from file context. Another area to benefit is user interaction. Simply put, any time context information lets us avoid asking for user input, usability increases. File context can be visualized with the help of metadata-enabled objects. Instead of showing context information as plain symbolic representations (lists of parameter name-value pairs, for instance), we can visualise contexts through media files. To present a certain context, we display a corresponding media object as a visual clue to the user. This gives the users a mental “hook” to attach the context information to. Media objects can also be convenient triggers for searches. The query-by-example principle (e.g. click a media object and ask for similar objects) has been used with large databanks; now personal media collections are evolving towards a situation where such techniques will be needed.
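The location/time-adjacency idea can be illustrated as a toy query-by-example over context metadata. The scoring weights and the one-hour adjacency window are invented for this sketch, not taken from the paper.

```python
# Toy query-by-example over file-context metadata: rank media objects
# by shared location and time adjacency. Weights and the one-hour
# window are assumptions for illustration.

from datetime import datetime

def similarity(ctx_a: dict, ctx_b: dict) -> float:
    score = 0.0
    if ctx_a["location"] == ctx_b["location"]:
        score += 1.0  # same place: strong clue
    t_a = datetime.fromisoformat(ctx_a["time"])
    t_b = datetime.fromisoformat(ctx_b["time"])
    if abs((t_a - t_b).total_seconds()) < 3600:
        score += 1.0  # taken within an hour of each other
    return score

query = {"location": "Tampere", "time": "2006-07-15T14:00"}
collection = {
    "a.jpg": {"location": "Tampere", "time": "2006-07-15T14:30"},
    "b.jpg": {"location": "Helsinki", "time": "2006-07-15T14:10"},
    "c.jpg": {"location": "Tampere", "time": "2006-06-01T09:00"},
}
ranked = sorted(collection, key=lambda n: similarity(query, collection[n]),
                reverse=True)
```

Other media types would contribute other clues (e.g. sender for email attachments), folded into the same scoring scheme.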

3 FUTURE WORK

To make file context a new enabler of context-aware applications, several advances are needed: in context representation, in standards for exchanging contextual metadata, in discovering implicit dependencies between media objects, and in ways to refine low-level context clues into higher-level concepts, to name but a few. Content search clients, such as Google Desktop and Apple Spotlight, should be enhanced with context; this obviously calls for a mature base of context-enabled files. Many of the necessary knowledge representation techniques have been designed for use in the Semantic Web [3]. We propose to apply and extend those techniques for content stored in mobile personal devices. A number of challenges need to be overcome, including limited memory sizes, intermittent connectivity, and limited possibilities for user interaction.

REFERENCES

[1] Lifeblog (2004) Nokia Lifeblog. Available at http://www.nokia.com/nokia/0,1522,,00.html?orig=/lifeblog.

[2] Hönle N., Käppler U.-P., Großmann M., Nicklas D., Schwarz T. Benefits of Integrating Meta Data into a Context Model. In Proceedings of the 2nd Workshop on Context Modeling and Reasoning (CoMoRea) at the 3rd IEEE International Conference on Pervasive Computing and Communications (PerCom'05), March 12, 2005, Hawaii, pp. 25-29.

[3] Berners-Lee T., Hendler J., and Lassila O. (2001) The Semantic Web. Scientific American, 284(5):34-43, May 2001.

AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06 5

AI Techniques for Ambient Intelligence: Research Papers and Demos

Abstract. This paper illustrates our work on the development of an agent-based architecture for the control of a smart home environment. In particular, we focus the description on one component of the system: the Butler Interactor Agent (BIA). The BIA has the role of mediating between the user and the agents controlling environment devices. Like any good butler, it is able to observe and learn its users' preferences, but it leaves the last word on critical decisions to its “owner”. This is made possible by employing user and context modeling techniques to provide a dynamic adaptation of the interaction with the environment, in line with the vision of ambient intelligence. Moreover, in order to support trust, this agent is able to adapt its autonomy on the basis of the delegation it receives from the user.

The prototype has been developed using JADE (jade.cselt.it). The decisional behaviour has been modeled and implemented using the Hugin Lite 6.6 Java API (www.huginexpert.com). The communication among the agents has been formalized using ACL messages whose content is encoded in XML/RDF, in order to use a more neutral, machine-understandable, and easy-to-parse format.

Figure 2. Initial user and context model.

Figure 1. Context-Based Intelligent Environment Control.

Figure: BIA autonomy level over events in time, with a threshold at which the BIA starts asking for confirmation.

Defining basic behaviours in ambient intelligence environments by means of rule-based programming with visual tools

Andrés Muñoz and Antonia Vera and Juan A. Botía and Antonio F. Gómez Skarmeta 1

Abstract. Applications deployed within ubiquitous systems must be equipped with a certain intelligence, with the aim of making the most of the ambient information provided by the context middleware these applications use. With the use of ontologies, we can elaborate a model of the domain which allows making inferences through rules specified over pieces of knowledge belonging to the model.

We believe that allowing the user to specify small and simple behaviours describing how the surroundings should behave is interesting and improves the user-centric character that all ambient intelligence based systems should manifest. For this, we also believe that the specification of rules must be possible in an easy and flexible way.

Within this context, the life cycle of such a program is the following: the user needs to specify a new behaviour, so she defines a new rule set, validates it, and adds it to the system. By periodically executing this small program, the system generates and/or deletes facts which, in consequence, modify the context information of the application. One problem that has to be taken into account is that conflicts may arise among rules in the same or different programs, and from the same or different users, so a mechanism to set preferences in such cases is needed. In this paper, we introduce ORE (Ontology Rule Editor), a platform-independent application to manage inference rules defined by the user within an ambient intelligence environment.

1 Introduction

The specification of the rules which govern the behaviour of context-aware applications [4][2] has so far been a responsibility of application developers. It is not easy for an end-user to define her own rule set to adapt the application's behaviour to her preferences. On the other hand, the utility of these applications increases when they are deployed on mobile devices, like laptops or PDAs, since the applications can adapt their behaviour to the highly dynamic environments in which they are working at each moment. For instance, we might wish that when we are attending a meeting, our PDA switches to vibration mode, or that when we get home, the system automatically transfers the notes we have taken on the mobile device to the desktop computer.

The personal character of these behaviours makes it impossible to compose all of them into context-aware software during its development. The end-user must be able to express and modify the behaviours according to her preferences. Thus, the application itself must allow specifying behaviours, adding them to the existing ones, and eventually executing them when the required conditions hold. Moreover, defining new behaviours has to be a simple task for users, who are usually not familiar with domain models, ontologies, and inference issues.

1 Departamento de Ingeniería de la Información y las Comunicaciones, University of Murcia, Spain, email: {amo4,avm5}alu.um.es, [email protected], [email protected]

Another problem we have to face is that of conflicting rules. Different rules, specified by the same user or by different users, could lead to contradictions; we call such rules conflicting rules. By conflicting we mean that their conclusions are both well-founded and mutually inconsistent: one rule concludes A while the other infers ¬A, but both rest on sound grounds, and hence both situations are possible. In order to solve these conflicts, we need a mechanism to set preferences among the rules. But merely giving numerical preferences to the rules is not a suitable way of resolving the problem, because we may want to use one rule or the other depending on the contextual situation. As a result, a way of defining preferences on the rules that takes the ambient information into account is needed.

The authors of this article are involved in a research project, one of whose outcomes is a highly usable graphic tool, ORE (Ontology Rule Editor). We use ORE to define inference rules in a domain modelled with a description logic based ontology. In principle, ORE can work with any rule inference engine and rule language. The version introduced here uses the Jena [1] framework as rule engine and SWRL [5] as the rule language in which rules are permanently stored.

The idea of ORE came up from previous work, whose details can be found in [3]. In that work, we experienced the need for a tool to specify the inference rules that drive the fusion of heterogeneous contextual information in ubiquitous systems. Although several applications exist that allow modelling ontologies and even defining rules over the model, like Protege2000 [9] or SWOOP [7], neither of them is able to execute the rules over it. Both applications are directed more at browsing and editing ontologies, and their inference engines are used to validate the model, not to test the rules. ORE has two main advantages: on the one hand, it makes rule editing easier, thanks to a more guided process than in the other two applications; on the other hand, ORE is agnostic with respect to the inference engine used to execute rule-based programs. It does not impose a specific one.

The rest of the paper is structured as follows. In section 2 we explain our main goals in this research, the design of the framework, and its functionality. In section 3 a use case of ORE together with a context-aware application is presented. Section 4 introduces the approach we use to resolve conflicts among rules. Finally, in section 5 we present our conclusions and future work.

2 Visually programming small rule-based programs

2.1 Goals

The main goal of ORE's development is to allow rules over an ontology to be edited easily. This means editing new rules by picking up elements of the ontology and dropping them into their corresponding place in the rule, all through an intuitive and comfortable user interface. The more intuitively the ontology is shown to the user, the easier rules may be edited. Once rules are defined, they can be executed against any general-purpose rule engine. After evaluating the rule set, ORE shows the new facts and actions that would occur if the rules were actually executed. Optionally, the newly derived facts are allowed to become part of the model represented by the ontology.

Other goals we seek are the following:

• Flexibility: ORE is able to work with different generic rule engines.
• Distribution: a user can define her rules from local information loaded on the mobile device and test those rules on a rule server, which manages much more information about that user, other users, and the environment in which they move.

2.2 Architecture

ORE's architecture is depicted in Figure 1. The left side of the figure shows the client, i.e. the user of the ORE framework. When the user edits a rule, she needs a knowledge base about the domain in which the rule is created. The ontology (usually expressed in the OWL language) that represents the domain provides this knowledge base to the client. The ontology's classes, instances, properties, etc., which represent the reality we are modelling, will be part of the LHS (left-hand side) and RHS (right-hand side) of the rules as subjects, predicates or objects. This rule format is associated with RDF triples [8].

The generated rules are stored in the local rule base at the client side, which is managed by ORE. We can add new rules, edit existing ones, or remove the rules we are not interested in at a given moment. The rules can be permanently stored in a file by using the SWRL parser, which transforms the rules from the local rule base into the SWRL abstract language. This abstract format allows us (a) to translate from the abstract rule language to any specific rule language in a simple way and (b) to keep the semantic content of the rules, which might otherwise be lost if a specific language were used.
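The idea of an abstract rule format that can be mapped onto a concrete rule language might be sketched as follows; the triple representation and the output syntax are only indicative and do not reproduce ORE's actual SWRL handling or Jena's exact rule grammar:

```python
# A rule is a pair (LHS, RHS) of RDF-style (subject, predicate, object) triples.
# Terms starting with "?" stand for variables, as in typical rule languages.
rule = (
    [("?user", "locatedAt", "Faculty"), ("?user", "hasOffice", "?office")],
    [("?device", "state", "on")],
)

def to_engine_syntax(name, rule):
    """Translate the abstract triple rule into a Jena-like rule string.
    The concrete syntax here is invented for illustration, not Jena's grammar."""
    lhs, rhs = rule
    fmt = lambda triples: ", ".join("(%s %s %s)" % t for t in triples)
    return "[%s: %s -> %s]" % (name, fmt(lhs), fmt(rhs))

text = to_engine_syntax("ruleDemo", rule)
```

Because the abstract pair of triple lists carries no engine-specific syntax, a second translator targeting a different rule engine could be written against the same `rule` value.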

ORE offers the possibility of checking, using local knowledge, the effects obtained after executing a rule set over the model that represents the domain we are working on. Depending on the generic rule engine being used by ORE (we use Jena in the version introduced in this paper), the rules are translated from the local rule base into the specific language that the rule engine understands. Using the rules and the knowledge base represented by the ontology, the ORE client is able to start an inference process. The result of this process is shown to the user in the form of which rules were fired, an explanation of why those rules were fired, and the new facts and actions that have been inferred from the modelled domain.

The right side of Figure 1 depicts the rest of the components of ORE's architecture: the rule servers. At the moment, we have only developed an ORE version that uses a rule server based on the Jena framework but, in the same way, other servers with their specific rule definitions could be used to test rules defined at the client. As we will see below, this modularity in the architecture is achieved thanks to the SWRL abstract rule language, which is used for expressing rule exchanges among clients and servers.

When the ORE client needs to execute a rule set on a server, the rule set is sent together with a reference to the ontology that was used to define the rules. The server manages a copy of that ontology, which is usually richer and more complete in representing reality than the copy held at the client. After receiving the rules in SWRL format, the server translates them into its specific language (Jena in this version) and then makes inferences through its rule engine together with its knowledge base. In this process, unlike local inference, the server makes a syntactic validation of the newly inferred facts before returning them to the client.

If the user is satisfied with the results her rules produced on the knowledge base at the server, she still has the possibility of committing those rules, with the aim of making the newly inferred facts a permanent part of the ontology, so that the new facts become available to everyone else.

Figure 1. ORE’s architecture

2.3 Functionality

ORE offers the following functionalities:

• Loads ontologies, from sources coming in different formats such as OWL, RDFS and XML files, or from a URI that identifies the ontology. Once the ontology is loaded, ORE shows diverse information about it (version, imported ontologies, etc.) and generates a hierarchical tree representing the classes and instances contained in the ontology model; see Figure 2(a) for an example.

• Edits inference rules in a guided and easy way thanks to its Wizard; see Figure 2. Both left-hand-side (LHS) and right-hand-side (RHS) tuple elements of any rule are defined in the same way, the Wizard guiding the user through three steps: choosing the subject, predicate and object of the LHS or RHS tuple element. The Wizard performs semantic checks over the rules created, and warns the user if they are not correct.

• Permanently stores rules in SWRL format. Hence, all the semantic content of the rules is kept for later use. Any rules written in the SWRL language can also be loaded into ORE.

• Tests the user-defined rules. There are two possible ways to test the rules:

(a) Step 1 in the Wizard

(b) Step 2 in the Wizard

Figure 2. ORE’s Wizard during rule editing

1. Using the local reasoner that ORE includes. This reasoner is the same one Jena provides. We can use this local reasoner to watch the results our rules produce on the local knowledge base but, at this stage, the newly inferred facts or actions cannot be asserted or performed, respectively, by the ORE client.

2. Using a remote ontology server. In this case, ORE sends the rules to the server, the server evaluates them in its inference engine with its own ontologies (normally rather richer and more complete than the local ones), and the results are returned to the ORE client and shown to the user. In this case, if the user is authorized to, the inferred facts or actions can be asserted or performed, respectively.

3 Using ORE with a context-aware application

In this section we illustrate the use of ORE with an example, showing how it may be used within a context-aware application. The client software runs on a mobile device, for example a PDA. The scenario involves a professor from the Computer Science Faculty (from now on, the user of the context-aware application). This user wants the air-conditioning unit in his office to anticipate his arrival at the Faculty and, consequently, to automatically switch on at 25°C.

To make this scenario possible, we will assume the following premises:

• We start from a context-middleware system which supports contextual information management, keeps the dynamically changing information about the elements of an environment up to date, and delivers contextual information to users. The context-middleware system is able to communicate with other systems, such as personal computers, PDAs, and any device with a network connection.

• An ontology modelling the Computer Science Faculty and its elements is available: floors, rooms, doors, windows, details of a room such as personal computers, air-conditioning devices, etc. This ontology also models the people present in the building at any moment: professors, students, service staff, etc. We call this ontology FIContextOntology.

• There is a rule server inserted into the context-middleware system, ready to accept rules defined over the elements of FIContextOntology.

• The user connects to the system through a PDA on which the context-aware application is running. This application knows where the user is located at any moment. It also has access to the FIContextOntology ontology.

Now, the user is going to edit a rule to express his preference about the state of the air-conditioning device when he arrives at the Faculty. Through the ORE client installed on his PDA (Figure 3), the user edits a rule like the one shown in Figure 4(a). To edit this rule, the ORE client uses the FIContextOntology scheme and the local, partial context knowledge available to it. In other words, the ORE client only knows its user's location, not everyone else's, just as it does not know all the elements of the building, only those related to its user.

(a) ORE running on a Zaurus sl-c3100

(b) ORE running on a Qtek S110

Figure 3. ORE displayed in several PDAs

(a) A rule defined with ORE

(b) New facts inferred from the rule server

Figure 4. ORE’s snapshots

If we express the rule using natural language:

If (user) Andrew is located at the Computer Science Faculty, and
there is an office assigned to Andrew, and
that office has available a device which is able to control the temperature,
then
the temperature device switches to 'on', and
the device thermostat value is set to 25°C.
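Stripped of the ontology machinery, the effect of such a rule on a fact base can be sketched in a few lines; the predicate and constant names below are adapted from the scenario, but the encoding is purely illustrative:

```python
def rule_demo(facts):
    """Fire the air-conditioning rule if all its conditions hold in the
    fact base; return the set of newly inferred facts."""
    if (("located", "Andrew", "ComputerScienceFaculty") in facts
            and ("has_assigned", "Andrew", "room1") in facts
            and ("temp_device_in", "CoolDesignGG", "room1") in facts):
        return {("switch_on", "CoolDesignGG"), ("value", "CoolDesignGG", 25)}
    return set()

# Facts the rule server holds about the scenario.
facts = {("located", "Andrew", "ComputerScienceFaculty"),
         ("has_assigned", "Andrew", "room1"),
         ("temp_device_in", "CoolDesignGG", "room1")}
new_facts = rule_demo(facts)
```

Committing the rule would then correspond to merging `new_facts` back into the server's knowledge base.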

The rule is now sent to the rule server at the context middleware to be tested against its knowledge base. This rule server, besides owning the FIContextOntology scheme, has a knowledge base which is richer and more complete than that located at the client, with a total overview of all the elements belonging to the Faculty. Among these facts, we are especially interested in the following:

• The user Andrew is actually staying at the Computer Science Faculty.
• The office room1 is assigned to the user Andrew.
• The office room1 is equipped with a temperature controller device called Cool Design GG.

The result of testing this client rule against the ontology at the server is shown in Figure 4(b). The rule (called ruleDemo) concludes that the state of the Cool Design GG device is on, with a temperature value of 25°C. The user agrees with the new facts obtained, so he decides to add these facts to the server's knowledge base by clicking on the Commit button. Now, the context-middleware server captures the fact that the air-conditioning in the user's office is on at 25°C. When this update is noticed by the context-middleware system, it will act accordingly with the proper systems to make these facts permanent.

4 Resolving conflicts among inference rules

As previously mentioned in Section 1, user-defined behaviours can generate conflicts in the deductions they produce. Conflicts of this kind have to be solved, as they can introduce incoherence or inconsistencies into the global ontology (i.e. the knowledge base).

Conflict resolution using argumentation theory [10] has been gaining increasing interest in recent years. One interesting approach in this field which resolves conflicts among inference rules can be found in the work of Kakas et al. [6]. In this work, the rule set which dictates behaviour in the application is arranged into several hierarchical modules. These modules represent the different strategies and even attitudes that an application could adopt according to the contextual information available. To be precise, three hierarchical levels of rules are defined:

• The Object level refers to the layer in which user-defined rules are placed. These rules are the ones that refer directly to the modelled domain the application works on. The rules at this level are called Object-level Decision Rules. An example of this kind of rule is the ruleDemo described in Section 3.

• The Default or Normal Context Priority level, in which policy rules are defined using the object-level decision rules to assign priority among them. This priority is applied in case conflicts come up in normal or default situations. These types of situations are represented by the left-hand side of the policy rule, whereas the right-hand side contains the decision rules which are in conflict.

• The Specific Context Priority level, which also defines policy rules, but now related to specific situations. The left-hand sides of these policy rules describe the specific situations, but in this case the right-hand side contains Normal and other Specific Context Priority policy rules. Thus, this top level manages the potential conflicts between policy rules.

As an example to illustrate this conflict-resolution mechanism, let us return to the scenario described in Section 3. For the example, we will use a first-order logic notation for rules, predicates and facts. Suppose that the professor from the Computer Science Faculty keeps adding behaviour rules to manage the air-conditioning device. The following object-level decision rules represent these behaviours (variables and predicates begin with a small letter, whereas constants are denoted by a capital letter):

r1: located(user, ComputerScienceFaculty), has_assigned(user, office) → switch_on(temp_device, office), value(temp_device, 25)

r2: empty(office) → ¬switch_on(temp_device, office)

Rule r1 is equivalent to the ruleDemo rule defined in Section 3. Rule r2 tries to save power by not allowing the temperature device to be switched on when the office is empty.
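The conflict that arises between these two rules can be reproduced with a naive sketch that fires both of them over the server's facts and looks for complementary conclusions; the fact encoding is an illustrative assumption:

```python
def fire_rules(facts):
    """Run r1 and r2 naively; a conclusion ('not', c) negates conclusion c."""
    conclusions = set()
    if (("located", "user", "Faculty") in facts
            and ("has_assigned", "user", "office") in facts):
        conclusions.add(("switch_on", "temp_device"))            # r1
    if ("empty", "office") in facts:
        conclusions.add(("not", ("switch_on", "temp_device")))   # r2
    return conclusions

def conflicts(conclusions):
    """Conclusions c for which some rule also concluded not-c."""
    return {c for c in conclusions if ("not", c) in conclusions}

# The server knows the user is in the Faculty AND his office is empty.
facts = {("located", "user", "Faculty"),
         ("has_assigned", "user", "office"),
         ("empty", "office")}
cs = conflicts(fire_rules(facts))
```

Both rules fire, so the switch-on conclusion shows up as conflicting, which is exactly the situation the priority levels below are designed to resolve.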

Consider now, following the above scenario, that these rules are sent to the rule server. Recall that the knowledge owned by the server asserted the body terms of rule r1, and hence this rule was fired. Moreover, the server is updated with the fact that the user's office is empty. Then, r2 infers not switching on the temperature device. Thus, a conflict between two well-founded rules has been raised. Clearly, both rules express desirable behaviours and both should be maintained. Hence, the conflict must be solved by enabling some activation mechanism between the two rules.

To solve this situation, default-policy rules can be added to set preferences between the conflicting rules in a normal situation. From the point of view of the interactions between ORE and the server, when a conflict arises, the rule server will notify the ORE client of the conflict, so that it can make use of the following default-policy rules:

R1: timetable(user, time), is_in(time, Work) → hp(r1, r2)

R2: lecture(user, room), has_assigned(user, office), room ≠ office → hp(r2, r1)

hp(X, Y) states that decision rule X has higher priority than decision rule Y. Therefore, R1 expresses that in case of conflict between r1 and r2, the former is preferred to the latter because the user is in his working hours and, despite his temporary absence from the room, he is expected to be back soon. On the other hand, R2 states that the temperature device must not be switched on because the user is giving a lecture in another room and it will take some time for him to come back to his office.

Note that through this mechanism, the preferences among decision rules are not fixed, but depend on the normal situation or context we expect to find. Given the definition of the default-level policy rules, conflicts could also arise between the rules at this level. Consider again the proposed scenario: the user is in his working hours, so R1 gets activated. But what happens if the user is also giving a lecture in another room of the Faculty? Both R1 and R2 could be fired; in that case, R1 and R2 are in conflict.

To resolve the conflicts presented at the second level, we resort to the specific-level context priority rules. This top level is able to avoid conflicts among default-level rules, and among specific-level rules themselves, by constraining the special situations in which these rules must be applied. In our example, we may want to state that if the rest of the devices located in the office are switched on, then it is the user's intention to come back soon. This rule would form part of the specific level:

C1: switch_on(other_device, room), other_device ≠ temp_device → hp(R1, R2)
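The specific-level check behind C1 can be sketched as a context-dependent choice of which default policy to prefer; again, the fact encoding is illustrative only:

```python
def choose_policy(facts):
    """Specific-level C1: if some device other than the temperature device is
    switched on in the office, prefer R1 over R2 (the user will be back soon);
    otherwise fall back to preferring R2 (the lecture policy)."""
    another_device_on = any(f[0] == "switch_on" and f[1] != "temp_device"
                            for f in facts)
    return ("R1", "R2") if another_device_on else ("R2", "R1")

# The user's desktop PC is on in room1, so C1 assigns hp(R1, R2).
prefer = choose_policy({("switch_on", "desktop_pc", "room1")})
```

The returned pair plays the role of the hp fact at the policy level, which in turn decides which object-level rule wins.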

A great advantage of this approach is its modularity. The priority policy can be changed at any level without altering any other level. For example, if the user decides to change his policy of “not switching on the air-conditioning during lecture hours” so that it applies when he is attending a meeting, then we would replace R2 with

R2′: has_assigned(user, office), meeting(user, room), room ≠ office → hp(r2, r1)

Clearly, implementing this mechanism in ORE is a very interesting prospect. Thanks to the expressiveness of the SWRL language used, we can specify policy rules that contain decision rules like the ORE client's rules described in Section 2.2. First, ORE will search for conflicts among those client rules, showing them to the user. Then, the user has the opportunity to define policy rules, which will resolve these conflicts in normal situations. Afterwards, ORE will check these policy rules in order to find new conflicts at the default level. Finally, the user is prompted to state preferences between the conflicting policy rules through new specific policy rules describing special situations.

As future work, we are exploring how to extend this mechanism to resolve conflicts among rules from different users, that is, when a rule r1 from User_a conflicts with a rule r2 from User_b in the rule server. Issues such as how to obtain the users' preferences between these two rules, and how to make the users' applications reason about the conflict to produce suitable policy conditions, must be taken into account.

5 Conclusions and future work

Ontology technology is playing a central role in the context-aware software field within ubiquitous systems. One of the most important capabilities that ontologies offer is the inference process. Reasoning about a model through inference rules, in order to establish new facts or relations between existing ones, adds great value to ontologies: when we model a context this way, the model can be updated dynamically.

Until now, we did not have a tool that offered the opportunity to edit rules on an ontology in a well-guided, easy and graphic way, and then to check and add the derived effects of their execution. The ORE application introduced in this paper aims to fill this gap. It shows that it is possible to run a valid and effective inference process on a knowledge model represented by an ontology, laying the foundations for a flexible and useful rule editor. Moreover, we have pointed out how conflicts among rules can be resolved in a smart and modular way, setting preferences according to normal or specific situations by means of the current context.

The authors of this paper continue to expand ORE's functionality. We are adding procedural builtins to the ORE Wizard, in order to allow arithmetic and logical expressions, either pre-defined or user-customized. We are also working on easing the insertion of new inference engines. Another, more ambitious long-term goal is to make context-aware applications learn the user's habits and automatically edit the rules themselves through ORE's API. Finally, we are studying how to show a customized GUI according to each user and the idea she has of the modelled domain that surrounds her.

Acknowledgments

This work was partly supported by the Seneca Foundation belonging to CARM, in part by the research project ENCUENTRO (00511/PI/04), and in part by the Spanish MEC through the research projects TIN2005-08501 and SAVIA (CIT-410000-2005-1).

REFERENCES

[1] Jena - a semantic web framework for Java. http://jena.sourceforge.net/index.html.
[2] Dey A. and Abowd G., 'Towards a better understanding of context and context-awareness', Workshop, (2000).
[3] Ignacio Nieto Carvajal, Juan A. Botía, and Antonio F. Gómez Skarmeta, 'El modelo de información y la arquitectura híbrida del sistema gestor de información contextual OCP', CEDI. I Simposio sobre Computación Ubicua e Inteligencia Ambiental, 157-165, (2005).
[4] Guanling Chen and David Kotz, 'A survey of context-aware mobile computing research', Dartmouth Computer Science Technical Report TR2000-381, (2000).
[5] Ian Horrocks, Peter F. Patel-Schneider, Harold Boley, and Said Tabet, 'SWRL: A Semantic Web Rule Language Combining OWL and RuleML', http://www.w3.org/Submission/SWRL/Overview.html, (2003).
[6] Antonis Kakas, Nicolas Maudet, and Pavlos Moraitis, 'Modular representation of agent interaction rules through argumentation', Journal of Autonomous Agents and Multiagent Systems, 11(2), 189-206, (2005). Special Issue on Argumentation in Multi-Agent Systems.
[7] Aditya Kalyanpur, Bijan Parsia, and Evren Sirin, 'SWOOP - a hypermedia-based featherweight OWL ontology editor', http://www.mindswap.org/2004/SWOOP.
[8] Graham Klyne and Jeremy J. Carroll, 'Resource Description Framework (RDF): Concepts and Abstract Syntax', http://www.w3.org/TR/rdf-concepts/, (2004).
[9] Holger Knublauch, 'Protege OWL Plugin. Ontology Editor for the Semantic Web', http://protege.stanford.edu/plugins/owl/index.html.
[10] Katia Sycara, 'The PERSUADER', in The Encyclopedia of Artificial Intelligence, ed., S. Shapiro, John Wiley and Sons, (January 1992).

AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06 16


A logical framework for an emotionally aware intelligent environment

Carole Adam and Benoit Gaudou and Andreas Herzig and Dominique Longin 1

Abstract. In the agent community, emotional aspects receive more and more attention, since they have been shown to be essential for intelligent agents. Indeed, from a theoretical point of view, results from cognitive psychology and neuroscience have established the close links that exist in humans between emotions and reasoning or decision making. And from a practical point of view, numerous research findings show the value of emotions in agents communicating with humans: interface agents, pedagogical agents, and so on. However, among the logical frameworks used to formalize these rational agents, very few integrate these emotional aspects. In this paper, we characterize some emotions, as defined in cognitive psychology, in a BDI (Belief, Desire, Intention) modal logic. We then validate our framework with a case study illustrating the problems of Ambient Intelligence.

1 INTRODUCTION

Ambient Intelligence is the art of designing intelligent environments, i.e. environments that can adapt their behavior to their user and to his specific goals and needs at every moment, in order to ensure his well-being in a non-intrusive and nearly invisible way. At the same time, a large community is interested in agents and all their aspects. Recently, some researchers have tried to integrate agents into Ambient Intelligence Systems (AmIS) [1, 3]. Our aim in this paper is to design such an agent. To be intelligent, it must have emotional abilities [6, 22], i.e. it must be able to feel emotions and to perceive the user's emotions, for example to fulfill his expectations. In this setting, we believe that an AmIS needs a computational model of emotion in the following cases: (C1) to compute the user's emotion triggered by an external event; (C2) to anticipate the effect of its actions on the user and then choose the best-adapted one; (C3) to understand the causes of an emotion noticed in the user by observing his behavior, either by inferring (C3b) or not inferring (C3a) some hypothesis about the user's beliefs. Knowing the emotion felt by the user and the causes of this emotion is fundamental to acting in a truly adapted way.

In this paper, our aim is to propose a framework for rational agents able to manipulate some emotions (viz. emotional agents), intended to be integrated into an AmIS. This framework is based on modal BDI logics (Belief, Desire, Intention). These logics, grounded in the philosophy of language, thought, and action, propose to model agents via some key concepts such as action and mental attitudes (beliefs, goals, intentions, obligations, choices...). This framework is commonly used in the international agent community, and offers well-known attractive properties: great explanatory power of the agent's

1 Université Paul Sabatier, IRIT/LILaC, 118 r. de Narbonne, F-31062 Toulouse cedex 9; {adam,gaudou,herzig,longin}@irit.fr

behavior, formal verifiability, and a rigorous and well-established theoretical frame (both from the philosophical and the formal-logic point of view).

Our approach thus consists in extending a BDI logic in a minimal way in order to handle emotions. In Sect. 3, we show that this extension needs the definition of two operators representing what an agent likes or dislikes2, and that from these two operators, traditional mental attitudes (belief, choice), action, and time, we can define some emotions (Sect. 4) that have been identified in psychology (Sect. 2). We illustrate the performance of our logic with a case study of an AmIS controlling an intelligent house, where the main agent home takes care of its dweller by handling his emotions, possibly with help from other agents of the AmIS (Sect. 5).

2 ANALYSIS OF EMOTIONS

2.1 How to represent emotions?

Psychology proposes three types of explanatory models of emotions: dimensional models represent them with three dimensions, which generally are valence, arousal, and stance [25]; discrete models consider them as basic universal adaptive mechanisms designed during evolution to favor survival [7, 8]; cognitive models argue that emotions are triggered by a cognitive process appraising the relationship between the individual and his environment [17, 21]: the individual continuously evaluates, be it consciously or not, the impact of the stimulus on his internal needs (desires, goals...). While the former two kinds of models are mainly descriptive, the latter is normative and thus better adapted to our aim.

Ortony et al. [21] proposed a typology of emotions (the OCC typology, for short) that is based on the theory of cognitive appraisal. They consider three types of physical stimuli: events, actions of agents, and objects, which can be appraised following various criteria such as pleasantness, causal attribution, or probability, triggering twenty-two different emotions. This typology has been used very often in computer science to design virtual emotional agents [11].

2.2 How to know about a human’s emotions?

Agent designers have investigated different methods to learn about the user's emotion when he does not express it directly. Prendinger and Ishizuka [23] deduce3 an emotion label from monitoring the user's physiological signals and gaze direction. This method makes it possible to detect in real time the slightest changes in the subject's emotions, but it is

2 We use two operators instead of only one because these notions are bipolar rather than complementary or dual (Sect. 3).

3 Existing work [22] shows how to perform this deduction.


quite intrusive, disobeying an important principle of ambient intelligence.

Other researchers use fuzzy inference rules [18] to deduce the user's emotion from prosody or other physiological cues. But such models cannot explain the causes of the emotion, and so do not allow the agent to adapt to it.

Another method is explored by Jaques et al. [16]. Their pedagogical agent deduces its pupil's emotion by construing events from his point of view (thanks to a user model), via an appraisal function based on the OCC typology. This method only gives a speculation about the user's emotional state, but it is quite effective when associated with a good model of his mental attitudes, and, most importantly, it is not intrusive at all. Moreover, it also allows the agent to anticipate the user's emotional reaction to its actions, and thereby to try to induce a particular emotion in the user by choosing the appropriate action. This is a very important feature, because emotions have been shown to influence all aspects of human reasoning [6]. Finally, this third method is best adapted to the problems of ambient intelligence.

In the next section we present our framework, an extension of standard BDI logics that will allow us to represent the user's mental attitudes and emotions.

3 LOGICAL FRAMEWORK

The agent’s initial knowledge base T0 is made up of: initial knowl-edge that can change over time depending on observation actions ofthe agent or informative actions of other agents of the AmIS; andof non-logical global axioms i.e. axioms true in every possible state(alias world). Initial knowledge includes Factual Knowledge (FK),i.e. knowledge of world facts (e.g.: the weather is fine); and epis-temic knowledge (EK), i.e. knowledge of mental attitudes of othersagents, who could be human or not (e.g.: the user does not know thatit is raining outside). Global axioms contain some world law knowl-edge (WLK) (e.g.: if a glass falls over, then it breaks), and somebehavioral knowledge (BK) (e.g.: if the user slams the door then hemay be angry).

In this section, we define our formal framework, based on the modal logic of belief, choice, time, and action of Herzig and Longin [15], which is a reformulation of Cohen and Levesque's work [5]. In particular, we enhance it with the probability operator defined by Herzig [12].

Let AGT be the set of agents, ACT the set of actions, and ATM = {p, q...} the set of atomic formulae. Complex formulae are noted A, B... The following paragraphs present a set of axioms characterizing our operators.4

Full belief. Full belief represents what the agent privately thinks to be true. (Note that, contrary to knowledge, belief is not truth-related.) We use the operator Bel to represent it: Bel i A reads "agent i believes that A".

The logic of the belief operator is a standard normal modal logic (KD45) [4]. Normal modal logics can be characterized by the following axiom and inference rule, under which the set of beliefs is closed:

if A then Bel i A (RNBeli )

Bel i A ∧ Bel i (A → B) → Bel i B (KBeli )

4 Only the axiomatics will be presented; we use a standard possible worlds semantics, for details see [15, 12].

For a KD45 logic, the following axioms also hold, expressing that the agent's beliefs are consistent (DBeli), and that the agent is aware of what he believes (4Beli) and of what he does not believe (5Beli):

Bel i A → ¬Bel i ¬A (DBeli )

Bel i A → Bel i Bel i A (4Beli )

¬Bel i A → Bel i ¬Bel i A (5Beli )
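Semantically, (DBeli) corresponds to a serial accessibility relation, (4Beli) to a transitive one, and (5Beli) to a Euclidean one. The paper itself stays axiomatic, but as an illustration these frame conditions can be checked mechanically; the toy frame below is invented for the example:

```python
def is_serial(worlds, R):
    # D: every world accesses at least one world
    return all(any((w, v) in R for v in worlds) for w in worlds)

def is_transitive(worlds, R):
    # 4: wRv and vRu imply wRu
    return all((w, u) in R for (w, v) in R for (v2, u) in R if v == v2)

def is_euclidean(worlds, R):
    # 5: wRv and wRu imply vRu
    return all((v, u) in R for (w, v) in R for (w2, u) in R if w == w2)

worlds = {"w0", "w1", "w2"}
# Toy KD45 frame: w0 sees w1 and w2; w1 and w2 see each other and themselves.
R = {("w0", "w1"), ("w0", "w2"),
     ("w1", "w1"), ("w1", "w2"),
     ("w2", "w1"), ("w2", "w2")}

assert is_serial(worlds, R) and is_transitive(worlds, R) and is_euclidean(worlds, R)
```

A frame violating any of the three conditions falsifies the corresponding axiom at some world, which is the standard correspondence-theory reading of (DBeli), (4Beli), and (5Beli).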

Probability. To model emotions we will need a notion of weak belief. Thus, we will use the modal operator P defined by Herzig in [12], based on the notion of a subjective probability measure. Pi A means that "for agent i, A is probable". The logic of P is much weaker than that of belief; in particular it is non-normal: the inference rule (RNBeli) and the axiom (KBeli) have no counterpart in terms of P. We still have the following inference rule and axioms:

if A → B then Pi A → Pi B (RMP )

Pi ⊤ (NP)

Pi A → ¬Pi ¬A (DP )

Belief and probability are deeply linked. We only present here the main axioms from [12]:

Bel i A → Pi A (BPR1)

Pi A → Bel i Pi A (BPR2)

¬Pi A → Bel i ¬Pi A (BPR3)

Choice. Choice refers to what the agent prefers: Choicei A reads "i prefers that A be true". As for the belief operator, choice is defined in a KD45 logic. Thus we have the following axioms, expressing that an agent's choices are consistent (DChoicei), and that an agent is aware of what he chooses (4Choicei) and of what he does not (5Choicei):

Choicei A → ¬Choicei ¬A (DChoicei )

Choicei A → Choicei Choicei A (4Choicei )

¬Choicei A → Choicei ¬Choicei A (5Choicei )

Following the work of Cohen & Levesque [5] and Rao & Georgeff [24], we here consider choice as realistic, in the sense that an agent cannot choose something he believes to be false, i.e.:

Bel i A → Choicei A (BCR1)

An agent is aware of his choices (as of his beliefs):

Choicei A → Bel i Choicei A (BCR2)

¬Choicei A → Bel i ¬Choicei A (BCR3)

Like/Dislike. Like represents a preference disconnected from reality. Likei A (resp. Dislikei A) reads "agent i likes (resp. hates) that A". These operators are close to the notions of choice (as defined above) and desire. They differ from choice because they are disconnected from the real world: Likei A ∧ Bel i ¬A is satisfiable, contrary to Choicei A ∧ Bel i ¬A. We cannot view them as desire either, because they are bipolar: agent i can like A (Likei A), hate A (Dislikei A), or be indifferent to A. Desire, on the contrary, has no symmetrical notion.


For the sake of simplicity, we use a standard KD logic for Like and Dislike5. Note that there is no strong link between Like and Dislike, so neither Likei A → Dislikei ¬A nor the converse Dislikei A → Likei ¬A is valid. As for the other mental attitudes, we make the positive (IPLikei) and negative (NPLikei) introspection hypotheses:

Likei A → Bel i Likei A (IPLikei )

¬Likei A → Bel i ¬Likei A (NPLikei )

Action. Dynamic operators Afterα and Beforeα mean that "after (resp. before) every execution of the action α, A holds". They are defined in the standard tense logic Kt, i.e. in a normal modal logic with conversion axioms:

A → Afterα ¬Beforeα ¬A (CA1)

A → Beforeα ¬Afterα ¬A (CA2)

Time. We do not use exactly the temporal logic defined in [15], but we keep a linear temporal logic LTL with the operators G (GA means that "henceforth A is true") and H (HA means that "until now A was true"). These operators are defined in an S4 logic with confluence and conversion axioms (in particular, axiom (TG) means that the future includes the present):

GA → A (TG)

GA → GGA (4G)

¬G¬GA → G¬G¬A (GG)

A → G¬H¬A (GHR1)

A → H¬G¬A (GHR2)

We add to Herzig & Longin's framework the operator "next" X (XA means that A will hold at the next instant) and its converse X−1 (X−1A means that A held at the previous instant). These operators are defined in a KD logic with the corresponding conversion axioms.
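On a finite trace, G, X and X−1 have a direct operational reading. The minimal evaluator below uses our own (hypothetical) encoding of a trace as a list of sets of atoms; it is an illustration, not the paper's semantics:

```python
# Each state in the trace is the set of atoms true at that instant.
trace = [{"raining"}, {"raining"}, {"sunny"}]

def holds_G(p, trace, t):
    """G p at instant t: p holds from t onward (future includes present, axiom TG)."""
    return all(p in state for state in trace[t:])

def holds_X(p, trace, t):
    """X p at instant t: p holds at the next instant."""
    return t + 1 < len(trace) and p in trace[t + 1]

def holds_Xinv(p, trace, t):
    """X^-1 p at instant t: p held at the previous instant."""
    return t - 1 >= 0 and p in trace[t - 1]

assert holds_G("sunny", trace, 2)        # henceforth sunny from instant 2
assert not holds_G("raining", trace, 0)  # raining fails at instant 2
assert holds_X("sunny", trace, 1)
assert holds_Xinv("raining", trace, 1)
```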

4 EMOTIONS FORMALIZATION

4.1 Appraisal criteria

The agreement criterion. This criterion characterizes stimuli providing a kind of well-being to the agent, for various reasons; non-exhaustively: because he likes the stimulus, or because it takes part in the satisfaction of one of his choices. We define Pleasant i A and Unpleasant i A as follows:

Pleasant i A def= (Likei A ∧ ¬Dislikei A) ∨ (¬Dislikei A ∧ X−1Choicei XA ∧ X−1¬Bel i XA)

Unpleasant i A def= (Dislikei A ∧ ¬Likei A) ∨ (¬Likei A ∧ X−1Choicei X¬A ∧ X−1¬Bel i X¬A)

A can thus be pleasant either if the agent likes A (and does not dislike it at the same time), or if he does not dislike A and just before preferred that A occur (X−1Choicei XA) without being sure of it (X−1¬Bel i XA).

5 This implies that Likei A holds for every tautology A. This property seems quite counterintuitive, but it does not prevent us from handling emotion in this simple logic: indeed, by defining Like′i A def= Likei A ∧ ¬Dislikei A, we get that Like′i ⊤ is not a tautology, and we will use this Like′i in the emotion definitions.

Given these definitions we deduce the following properties:

¬Pleasant i ⊤ ∧ ¬Unpleasant i ⊤ (1)

¬Pleasant i ⊥ ∧ ¬Unpleasant i ⊥ (2)

Pleasant i A → ¬Unpleasant i A (3)

(1) and (2) mean that we are indifferent to tautologies and contradictions: we neither like them nor dislike them. (3) means that what is pleasant cannot be unpleasant, and conversely (which is quite intuitive). But these definitions allow us to deduce neither (Pleasant i A ∧ Pleasant i A′) → Pleasant i (A ∧ A′) nor Pleasant i (A ∧ A′) → (Pleasant i A ∧ Pleasant i A′): A and A′ can both be pleasant but not when combined; the classical example is that it can be pleasant to marry Ann, pleasant to marry Betty, but not pleasant to be polygamous.
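The Like/Dislike disjunct of these definitions behaves like a pair of set-membership tests, which also makes the non-closure under conjunction easy to exhibit. A hedged Python sketch under our own encoding of attitudes as sets of formula strings (the conjunction string is just an opaque label, not parsed):

```python
def pleasant(agent, a):
    """First disjunct of Pleasant_i A: Like_i A and not Dislike_i A."""
    return a in agent["likes"] and a not in agent["dislikes"]

def unpleasant(agent, a):
    """First disjunct of Unpleasant_i A: Dislike_i A and not Like_i A."""
    return a in agent["dislikes"] and a not in agent["likes"]

# The classical polygamy example: each marriage pleasant, the conjunction not.
h = {"likes": {"sunny", "marry_Ann", "marry_Betty"},
     "dislikes": {"raining", "marry_Ann & marry_Betty"}}

assert pleasant(h, "marry_Ann") and pleasant(h, "marry_Betty")
assert unpleasant(h, "marry_Ann & marry_Betty")
# Property (3): nothing is both pleasant and unpleasant in this encoding.
assert not any(pleasant(h, a) and unpleasant(h, a)
               for a in h["likes"] | h["dislikes"])
```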

The probability criterion. This criterion characterizes stimuli that the agent considers to be probable. Expect i A means "agent i believes that A is probable":

Expect i A def= Pi A (DefExpecti)

We also define Envisage i A ("agent i is not sure that A is false") as follows:

Envisage i A def= ¬Bel i ¬A (DefEnvisagei)

We notice that Expect i A → Envisage i A, which is intuitive.

In the rest of this section, we specify the existence conditions of the eight emotions that we want to characterize.

4.2 Emotional existence conditions

In the OCC typology, emotions result from the occurrence of three types of stimuli (events, actions, and objects; herein we only consider events), which may change the agent's mental attitudes and make some particular conditions true. The emotions are thus abbreviations of the language, equivalent to their existence conditions.

Joy/Sadness. Joy (resp. sadness) is the emotion that an agent feels when an event occurs that is pleasant (resp. unpleasant) for him.

• Joyi A def= Bel i A ∧ Pleasant i A

• Sadnessi A def= Bel i A ∧ Unpleasant i A

Hope/Fear. An agent feels hope (resp. fear) when he expects an event to occur in the future, but envisages that the contrary event could occur instead, and this second event is pleasant (resp. unpleasant) for him.

• Hopei A def= Expect i ¬A ∧ Pleasant i A ∧ Envisage i A

• Feari A def= Expect i ¬A ∧ Unpleasant i A ∧ Envisage i A

Satisfaction/Fear-confirmed/Relief/Disappointment. These four emotions are triggered when an event occurs that confirms or disconfirms a past emotion of hope or fear.

• Satisfactioni A def= Bel i A ∧ X−1Hopei A

• Disappointmenti ¬A def= Bel i ¬A ∧ X−1Hopei A

• Reliefi ¬A def= Bel i ¬A ∧ X−1Feari A


• FearConfirmedi A def= Bel i A ∧ X−1Feari A

We notice that satisfaction implies joy, and fear-confirmed implies sadness, which seems intuitively correct.
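Since each emotion is an abbreviation, i.e. a Boolean condition over the current and previous mental state, the definitions can be prototyped directly. A sketch under our own (hypothetical) encoding of a mental state as sets of formula strings, with "~" marking negation; relief and disappointment are analogous and omitted:

```python
def joy(m, a):       # Joy_i A = Bel_i A and Pleasant_i A
    return a in m["bel"] and a in m["pleasant"]

def sadness(m, a):   # Sadness_i A = Bel_i A and Unpleasant_i A
    return a in m["bel"] and a in m["unpleasant"]

def hope(m, a):      # Hope_i A = Expect_i ~A and Pleasant_i A and Envisage_i A
    return ("~" + a) in m["expect"] and a in m["pleasant"] and a in m["envisage"]

def fear(m, a):      # Fear_i A = Expect_i ~A and Unpleasant_i A and Envisage_i A
    return ("~" + a) in m["expect"] and a in m["unpleasant"] and a in m["envisage"]

def satisfaction(m, prev, a):    # Bel_i A and X^-1 Hope_i A
    return a in m["bel"] and hope(prev, a)

def fear_confirmed(m, prev, a):  # Bel_i A and X^-1 Fear_i A
    return a in m["bel"] and fear(prev, a)

# Yesterday h hoped the meeting would go well; today he learns it did.
prev = {"bel": set(), "pleasant": {"meetingOk"}, "unpleasant": set(),
        "expect": {"~meetingOk"}, "envisage": {"meetingOk"}}
now = {"bel": {"meetingOk"}, "pleasant": {"meetingOk"}, "unpleasant": set(),
       "expect": set(), "envisage": set()}

assert hope(prev, "meetingOk")
assert satisfaction(now, prev, "meetingOk")
assert joy(now, "meetingOk")  # satisfaction implies joy here, since Pleasant persists
```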

5 CASE STUDY

We now want to apply our framework to four different scenarios corresponding to the four cases identified in the introduction where the agent needs emotions. In every case, we consider a home-managing AmIS, administered by agent m. Let h be a human dweller of this house.

Case (C1): appraisal of an external event from the user's point of view. By definition, as soon as agent m believes that h's mental state validates the conditions composing a given emotion, m believes that h feels this emotion. Thus, if m believes that h believes that the sun is shining (i.e. Belm Belh sunny) and m also believes that this is pleasant for h (Belm Pleasanth sunny), then by definition m believes that h is joyful about this (i.e. Belm Joyh sunny)6.

Case (C2): pre-evaluation of the emotional effect of an agent's action on the user. In some cases, emotional impact can be part of a plan. For example, when the production or removal of some emotion of the addressee of the action accounts for the aimed effect (commonly named Rational Effect in the agent community [9]), or when various actions with the same informative or physical effect have different emotional effects (these effects then serve as a criterion for selecting an action among the other possible ones).

In the first case, suppose that m knows that h feels a negative emotion (for example, sadness) because it is raining (and thus he cannot take a walk), i.e. Belm Sadnessh raining. Some behavioral laws can motivate m to help h cope with his emotions7, either by informing h that it is no longer raining as soon as m learns it, or by focusing h's attention on something else8. In the first case such a law could be: Belm (Sadnessh ϕ ∧ ¬ϕ) → Intendm Belh ¬ϕ (i.e., if m believes that h feels sad about ϕ whereas m himself knows that ϕ is false, then he will intend to inform h about this).

Concerning the second case, let us suppose that m believes that just before, h hoped to play chess with John (i.e. Belm X−1Hopeh JohnPlaysChess), but finally John is not coming anymore (Belm ¬JohnComesHome). We also suppose that m and the other agents have world law knowledge like XPlaysChess → XComesHome where X ∈ {John, Peter, Paul}, meaning that if X plays chess with h then necessarily X comes to h's home. Under these conditions, if m informs h that John is not coming home, m can logically deduce that h will know that he will not play chess with him (i.e. Belm Belh ¬JohnPlaysChess), which by definition will disappoint him (i.e. Disappointmenth ¬JohnPlaysChess). m then hears that Peter and Paul propose to play chess with h, and must choose the partner that will best fit h's likings. m believes that h likes that

6 Here, the considered emotion is positive. m can then aim at maintaining it, or take it into account in particular situations (for example if he has bad news to tell h).

7 In psychology, coping is the agent's choice of a strategy aiming at suppressing or decreasing a negative emotion that he feels (for example by downplaying or totally suppressing its causes). We consider here that the AmIS can help the user in this task.

8 This case (not covered here) needs a handling of activation degrees accounting for the accessibility of the belief to consciousness. See John Anderson's work in cognitive psychology [2].

Paul visit him (Belm Pleasanth PaulComesHome), but is indifferent to Peter visiting him (Belm ¬Pleasanth PeterComesHome). We can then prove that if m informs h that Paul will play chess with him, then m will believe that h is joyful about Paul visiting him (Belm Joyh PaulComesHome) (and not joyful about playing chess with Paul, to which he is indifferent; what was pleasant to him was to play chess with John in particular). We can also prove that if m informs h that Peter will play chess with him, h will feel no emotion. Thus m will rather ask Paul to come than Peter.

Case (C3a): observation and explanation of behavior. In the morning, h is visibly stressed but m does not know why. We suppose that m believes that h has a meeting in the morning and must present his work there. Moreover, his world knowledge tells m that when one is well-prepared, one expects one's meeting to go well but envisages that it could go wrong (i.e. Belm (Belh prepared → Expecth meetingOk ∧ Envisageh ¬meetingOk))9. Moreover, m knows that a good performance is pleasant for h (Belm Pleasanth meetingOk), and that a bad performance is unpleasant for h (Belm Unpleasanth ¬meetingOk). m deduces that if h believes that he is not well-prepared then h hopes that his meeting could go well anyway (Belm (Belh ¬prepared → Hopeh meetingOk)), and if h believes that he is well-prepared then h fears that his meeting could go wrong anyway (i.e. Belm (Belh prepared → Fearh ¬meetingOk)). Note: if m did not know about h's likings he could deduce no emotion from the same information.

Case (C3b): observation, and explanation hypothesis. h comes home in the evening and m observes that he looks sad (i.e. after his observation10, m believes that h is sad about a certain ϕ: Belm Sadnessh ϕ). However, m does not know the object of this emotion, i.e. there exists in his knowledge base no formula ϕ verifying the definition of sadness (i.e. verifying Belm Belh ϕ ∧ Belm Unpleasanth ϕ). Suppose that, following his factual knowledge, agent m knows that h had a meeting; we would like m to be able to deduce that h believes his meeting has gone wrong (i.e. Belm Belh ¬meetingOk), and that he is sad about this11. Thereby m could try to cheer him up or propose him some relaxation services.

For now, we have no reason to suppose that m knows whether the meeting has gone wrong (i.e. ¬BelIfm meetingOk), even though he knows a priori from his epistemic knowledge that this would be unpleasant for h (i.e. Belm Unpleasanth ¬meetingOk), and he believes that h knows whether his meeting has gone wrong or well (i.e. Belm BelIfh meetingOk).

In addition, following the emotion definitions at his disposal, m knows that if h believes that his meeting has gone well then he will be happy about it (i.e. Belm (Belh meetingOk → Joyh meetingOk)), and that if he believes that it has gone wrong he will be sad (i.e. Belm (Belh ¬meetingOk → Sadnessh ¬meetingOk)). Given that m believes that h is sad, m infers that h believes his meeting has gone wrong (Belm Belh ¬meetingOk): which is what we wanted to prove12.
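The inference in case (C3b) amounts to searching m's laws for a belief of h that would produce the observed emotion. A toy rendering of that search, with an invented rule encoding (each law maps a candidate belief of h to the emotion it would trigger):

```python
# m's behavioral laws as pairs: (belief of h, emotion it would produce).
# "~" marks negation, as a plain string prefix.
laws = [
    ("meetingOk", "joy"),
    ("~meetingOk", "sadness"),
]

def explain(observed_emotion, laws):
    """Return the beliefs of h that would produce the observed emotion."""
    return [belief for belief, emotion in laws if emotion == observed_emotion]

# m observes that h is sad, and believes h knows how the meeting went:
candidates = explain("sadness", laws)
assert candidates == ["~meetingOk"]
# so m infers Bel_m Bel_h ~meetingOk, i.e. h believes the meeting went wrong.
```

This is the modus-tollens flavor of the paper's argument: since joy would require Belh meetingOk and h is observed to be sad instead, the only law-consistent explanation is Belh ¬meetingOk.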

9 m also has a law accounting for the consequences of an unprepared performance, i.e. Belm (Belh ¬prepared → Expecth ¬meetingOk ∧ Envisageh meetingOk).

10 In previous work we studied perception actions (called sensing or knowledge-gathering actions in the literature, cf. [13, 14]).

11 This requires the integration of some abductive reasoning.

12 We can notice that we have here an example using non-trivial epistemic logic inferences.


We have sketched how the four example cases of the introduction can be handled in our framework. We did not give details of the domain modeling, and have omitted the proofs. These will be elaborated in future work.

6 RELATED WORKS & CONCLUSION

Some other researchers have recently taken an interest in the logical formalization of emotions. We analyze below the originality of our work in relation to two other contributions to this growing field.

Meyer’s work [19] is mainly oriented towards the link betweenemotions and the satisfaction of a plan. He designs KARO, a specificlogic of action, belief, and choice, and uses it to express a generationrule for each of his four emotions (happiness, sadness, anger andfear). These rules are quite complex and do not for now associatean intensity degree to the generated emotions. Moreover, only fouremotions are described. However, Meyer investigates the influencethat emotions then have on action, a very important point that we donot handle at all for now.

Ochs et al. [20] focus on emotional facial expression for embodied agents. They do provide a formalization, based on Sadek's rational interaction theory [26], a logic of belief, intention, and uncertainty. They stay very close to the OCC typology, but restrict it to only four emotions (joy, sadness, hope, and fear), which they associate with intensity degrees depending on their uncertainty degrees. Moreover, they investigate the essential problem of emotional blending. Their results are not very formal, since they propose a kind of spatial mixing of emotional facial expressions.

To conclude, we argue that our model, based on definitions of emotions stemming from psychology, is quite simple to manipulate. Of course, we will need to make it more complex to express more emotions, but we believe that its modular construction allows it to be extended quite easily. Moreover, it is already usable as is, as illustrated by our case study. In later work, we envisage exploring several directions, such as the integration of intensity degrees for emotions, allowing a sharper adaptation, or the study of adaptation strategies from psychological theories of coping.

For now, we have designed a logic able to deal with emotions. For lack of space, neither the semantics nor completeness results have been presented. The next step is to implement this framework in a theorem prover. First, we aim at using the generic prover developed in our team, Lotrec [10], to prove the feasibility of an implementation. Then a specialized prover could be developed and optimized for this logic, although this is not our field of research. All these enhancements aim at designing agents more and more useful for human beings.

REFERENCES

[1] Emile Aarts, Rick Harwig, and Martin Schuurmans, 'Ambient intelligence', in The Invisible Future, ed., Peter J. Denning, 235–250, McGraw-Hill, New York, (2002).

[2] J.R. Anderson and C. Lebiere, The Atomic Components of Thought, LEA, Mahwah, NJ, 1998.

[3] C. Bartneck and O. Michio. eMuu - An Emotional Robot. Demonstration at Robo Festa, Osaka, 2001.

[4] B. F. Chellas, Modal Logic: an introduction, Cambridge University Press, 1980.

[5] Philip R. Cohen and Hector J. Levesque, 'Intention is choice with commitment', Artificial Intelligence Journal, 42(2–3), (1990).

[6] Antonio R. Damasio, Descartes' Error: Emotion, Reason, and the Human Brain, Putnam Pub Group, 1994.

[7] Charles R. Darwin, The expression of emotions in man and animals, Murray, London, 1872.

[8] Paul Ekman, 'An argument for basic emotions', Cognition and Emotion, 6, 169–200, (1992).

[9] FIPA (Foundation for Intelligent Physical Agents). FIPA Communicative Act Library Specification, 2002. URL: http://www.fipa.org/repository/aclspecs.html.

[10] Olivier Gasquet, Andreas Herzig, Dominique Longin, and Mohamed Saade, 'LoTREC: Logical Tableaux Research Engineering Companion', in International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX 2005), Koblenz (September 14–17), Germany, ed., Bernhard Beckert, number 3702 in LNCS, pp. 318–322. Springer Verlag, (2005).

[11] Jonathan Gratch and Stacy Marsella, 'A domain-independent framework for modeling emotion', Journal of Cognitive Systems Research, 5(4), 269–306, (2004).

[12] Andreas Herzig, 'Modal probability, belief, and actions', Fundamenta Informaticae, 57(2–4), 323–344, (2003).

[13] Andreas Herzig, Jerome Lang, Dominique Longin, and Thomas Polacsek, 'A logic for planning under partial observability', in Proc. of 17th National Conf. on Artificial Intelligence (AAAI-2000), pp. 768–773, Austin, Texas, (2000). AAAI Press.

[14] Andreas Herzig and Dominique Longin, 'Sensing and revision in a modal logic of belief and action', in Proc. of 15th European Conf. on Artificial Intelligence (ECAI 2002), ed., F. van Harmelen, pp. 307–311, Amsterdam, (2002). IOS Press.

[15] Andreas Herzig and Dominique Longin, 'C&L intention revisited', in Proc. 9th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR2004), eds., Didier Dubois, Chris Welty, and Mary-Anne Williams, pp. 527–535. AAAI Press, (2004).

[16] Patricia A. Jaques, Rosa M. Vicari, Sylvie Pesty, and Jean-Francois Bonneville, 'Applying affective tactics for a better learning', in Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), (2004).

[17] Richard S. Lazarus, Emotion and Adaptation, Oxford University Press, 1991.

[18] Chul Min Lee and Shrikanth Narayanan, 'Emotion recognition using a data-driven fuzzy inference system', in Proc. of Eurospeech-2003, pp. 157–160. ISCA Archive, (2003).

[19] John Jules Meyer, 'Reasoning about emotional agents', in 16th European Conf. on Artif. Intell. (ECAI), eds., R. Lopez de Mantaras and L. Saitta, pp. 129–133, (2004).

[20] Magali Ochs, R. Niewiadomski, Catherine Pelachaud, and David Sadek, 'Intelligent expressions of emotions', in 1st International Conference on Affective Computing and Intelligent Interaction (ACII), China, (October 2005).

[21] Andrew Ortony, G.L. Clore, and A. Collins, The cognitive structure of emotions, Cambridge University Press, Cambridge, MA, 1988.

[22] Rosalind W. Picard, Affective Computing, The MIT Press, 1997.

[23] Helmut Prendinger and Mitsuru Ishizuka, 'Human physiology as a basis for designing and evaluating affective communication with life-like characters', IEICE Transactions on Information and Systems, E88-D(11), 2453–2460, (2005).

[24] Anand S. Rao and Michael P. Georgeff, 'Modeling rational agents within a BDI-architecture', in Proc. Second Int. Conf. on Principles of Knowledge Representation and Reasoning (KR'91), eds., J. A. Allen, R. Fikes, and E. Sandewall, pp. 473–484. Morgan Kaufmann Publishers, (1991).

[25] James A. Russell, 'How shall an emotion be called?', in Circumplex models of personality and emotions, eds., R. Plutchik and H.R. Conte, 205–220, American Psychological Association, Washington, DC, (1997).

[26] David Sadek, P. Bretier, and Franck Panaget, 'Artimis: Natural dialogue meets rational agency', in Proceedings of 15th International Joint Conference on Artificial Intelligence (IJCAI'97), pp. 1030–1035, Nagoya, Japan, (1997).

AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06 21


A Human Home Interaction Application

Nati Herrasti, Antonio López and Alfonso Gárate¹


Abstract. As a result of the increasing miniaturization of electronic circuitry and the corresponding increase in the computational power of embedded systems, the point has been reached where it is possible to integrate electronics into the day-to-day elements of people's lives. The currently existing types of man-machine-product relationship based on pushbuttons, keys and menus will disappear, to be replaced by intelligent systems with which we will interact through natural speech, movements, gestures, etc. The environments, by means of the systems, products and machines of which they are made, will respond to, and will even anticipate, the orders given by the user's voice, gestures or hands, facial expressions, etc. This challenge has been met by the project called GENIO: to build a multimodal user interface to control any kind of domestic device, mainly household appliances and entertainment systems, in order to obtain a first product based on the precepts of Ambient Intelligence.

1 Ikerlan research and development centre, Spain, email: {nherrasti, alopez}@ikerlan.es. Fagor Electrodomésticos, household appliance manufacturer, Spain, email: [email protected]

1 INTRODUCTION

An environment may be termed "Ambient Intelligence" [1] when it is not intrusive, when diverse technologies complement each other so that the users are surrounded by the environment, and when as many services and features as required or as predictable are offered, in as many environments as the users may have available. Thus, an Ambient Intelligence environment, with a technological network surrounding those inhabiting it, will be able to:

Recognise users and their circumstances (activities, state of mind, etc.) and act in consequence, i.e., be sensitive to human presence.

Have a predictive behaviour based on knowledge of the environment (context awareness), of the habits of those it is “serving”, and of the specific activities of the same when acting.

Produce new real-time services in fields such as entertainment, security, health, housework, work environment, access to information, computing, communications, etc., to improve the quality of life by creating adequate atmospheres and functions.

Allow access to as many services and features as it can carry out, regardless of where the user is located, the position from which the user demands the said services and the artefacts available at that particular moment (ubiquity).

Relate in a natural manner to the users by means of multimodal voice-based interface technologies; by reading movements and gestures; by generating, emitting and projecting images; by generating holograms, etc. (natural relationship) [2].

The home is the perfect place to apply Ambient Intelligence precepts and technologies, providing high-level services to the user and allowing him to address and command his home through natural human interfaces like voice and dialogue. One of the greatest human aspirations is to relate naturally with one's surroundings, and this includes the devices and machines ascribed to that environment. As there is no means of relating more natural than speech, voice-processing technologies are considerably important in the development of Ambient Intelligence applications [3].

Fagor Electrodomésticos, with the technological support of Ikerlan, has developed a first Ambient Intelligence application implementing two of the previously mentioned precepts: ubiquity and a natural relationship. In this application, the most relevant activities at home (household appliance control, entertainment and telework) are implemented as several new services that are controlled by means of a conversational voice-based interface.

2 HUMAN HOME INTERACTION APPLICATION

A network of the most sophisticated Fagor household appliances, based on power line as a communication medium, has been developed. The main product in this network is the Domotic Controller, called the Maior-Domo, which is able to receive natural voice orders from the users and to dialogue with them in a conversational way. The Maior-Domo extracts the different commands from these vocal orders and controls the home devices. In the same way, after receiving any event or information from any device of the network, the Maior-Domo generates a speech message to the users in order to inform them about it.

To achieve a demonstrator of this Ambient Intelligence application, a real scenario has been built where the users can command their home talking naturally (in Spanish). A demo has been implemented to show some actions carried out in the course of a day by a user at home [4]. These actions are: reading e-mails, programming the washing machine, checking the items in the fridge, creating a shopping list, doing the shopping with a PDA in the supermarket, turning on the dishwasher, being guided on how to prepare an oven recipe (checking if the items needed to make it are available), listening to some music stored at home, looking at some photos, watching a chosen video, and so on.


2.1 Real Kitchen: Devices and Elements Involved

In order to test the Human Home Interaction application, a real kitchen has been installed at Fagor's premises. This kitchen contains the following devices:

The prototype of a new oven, called Conect@, which contains a database of recipes and allows a recipe to be prepared simply by putting the food inside it, as it knows how to command the sequence of temperatures, times and the corresponding heating method.

A common fridge, with an RFID antenna and a reader inside to read the items stored in it. Each product has a smart label attached to it.

A washing machine that can be activated, deactivated and programmed.

A dishwasher that can be activated, deactivated and programmed.

Water and gas sensors and actuators to warn the user if a water or gas leakage is detected and close the valves.

Several intelligent plugs to automate devices such as lamps, blinds, curtains, water devices and any other device that can be switched on/off and programmed.

PDAs and mobile devices through which the user can manage all the home devices and which can be used to take home information outside the home (e.g. the shopping list).

A computer working as the Maior-Domo, which can communicate by power line with the oven, the washing machine and the dishwasher; by WiFi with some PDAs; and by radio via the pocket microphone (which looks something like a fluorescent marker pen) carried by the user. An RFID reader is attached to the Maior-Domo.

The following modules are located in this computer: ASR (Automatic Speech Recognition) voice recognition software, a TTS (Text To Speech) engine, and a home command and control application.

The same computer stores all the digital information, such as songs, videos and photos, and the context-awareness information needed for controlling the whole application. The content database and the context database are stored in this computer.

A projector to display the visual interaction with the Maior-Domo on the kitchen wall.

Figure 1: The kitchen where the Human Home Interaction Application has been installed and tested

This kitchen is a demonstrator room where ubiquity and the natural relationship, both of them Ambient Intelligence precepts, have been implemented in a real-life application.

2.2 Features and Services

The services implemented in this demonstrator are shown as a succession of daily activities carried out in the home. Let us follow the sequence of different actions of a person within the home and see how they can be managed using a natural voice interface [5]. The services implemented are described below.

2.3 Reading e-mails

The user is able to ask how many new e-mails he/she has, using the most common sentences for this, such as "Maior-Domo, tell me how many e-mails I have" or "Maior-Domo, do I have any new e-mails?", among many others. The Maior-Domo answers, saying how many messages there are and the sender of each one. The user can choose which one to listen to and the Maior-Domo will read it out.

2.4 Activation, deactivation and programming of household appliances

The user can switch any of these appliances on or off, addressing the Maior-Domo using the most usual expressions ("turn on", "switch on" or "put on" the washing machine; "please wash the clothes", etc.). In the same way, the user can set a programming time for turning on any of these appliances, holding a dialogue like this:

User: "Maior-Domo."
Maior-Domo: "Yes?"
User: "Wash my clothes by the time I come back from work."
Maior-Domo: "When are you going to come back?"
User: "At eight o'clock in the evening."
Maior-Domo: "Which washing program?"
User: "Soft, 30 ºC."
Maior-Domo: "Washing machine programmed."
User: "Show the programmed process on the TV."
Maior-Domo: "OK."

2.5 Checking the items in the fridge. Creating the shopping list and doing shopping with a PDA in the supermarket

Every item used in this application has an RFID label to identify it. An RFID antenna has been installed inside the Fagor fridge, and an RFID reader forms part of the Maior-Domo architecture. The user, always talking in a natural way, can ask for the list of the items in the fridge.

The user defines his basic shopping list with the items he will always want to purchase. When the Maior-Domo detects that any of these basic items are not in the fridge, it automatically adds them to the shopping list; the user does not even have to be aware of this. Moreover, at any moment, the user can address the Maior-Domo asking for other items to be added or current ones removed.


Before the user goes shopping, he asks the Maior-Domo to download the shopping list to his PDA, which will be used as an assistant in the supermarket. The PDA shopping application is also supported by a voice and image interface in order to guide the user. This fridge application can be extended to any kitchen storage unit. The most common dialogues are possible in this context, for example:

User: "Tell me the shopping list."
Maior-Domo: "Right now you have chicken, carrots, champagne, olive oil, fabric softener and detergent. Anything else?"
User: "Add paper napkins to the list."
User: "Tell me the shopping list."
Maior-Domo: "Right now you have chicken, carrots, champagne, olive oil, paper napkins, fabric softener and detergent. Do you want anything else?"
User: "Download it."
Maior-Domo: "Shopping list downloaded."
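The automatic list maintenance described in this section can be sketched as follows. This is a minimal illustration, not the actual Maior-Domo code; the class layout and item names are assumptions made for the example:

```python
class ShoppingAssistant:
    """Keeps a shopping list in sync with the fridge contents (illustrative sketch)."""

    def __init__(self, basic_items):
        # Items the user always wants to have in stock.
        self.basic_items = set(basic_items)
        self.shopping_list = []

    def on_fridge_scan(self, items_in_fridge):
        # Called whenever the RFID reader reports the fridge contents.
        # Any basic item missing from the fridge is added automatically.
        for item in self.basic_items - set(items_in_fridge):
            if item not in self.shopping_list:
                self.shopping_list.append(item)

    def add(self, item):
        # Explicit user request ("Add paper napkins to the list").
        if item not in self.shopping_list:
            self.shopping_list.append(item)

    def remove(self, item):
        if item in self.shopping_list:
            self.shopping_list.remove(item)

    def download(self):
        # Snapshot sent to the PDA before the user goes shopping.
        return list(self.shopping_list)


assistant = ShoppingAssistant(basic_items=["milk", "olive oil", "detergent"])
assistant.on_fridge_scan(["milk", "champagne"])   # olive oil and detergent missing
assistant.add("paper napkins")                    # explicit user request
print(sorted(assistant.download()))               # ['detergent', 'olive oil', 'paper napkins']
```

The key design point is that the fridge scan, not the user, drives the basic-item additions, which is why the paper notes the user "does not even have to be aware of this".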

2.6 Preparing a recipe

The Conect@ oven contains a database of recipes. When the user wants to cook a recipe, he asks the Maior-Domo to check if all the items needed are in the fridge. If so, the user and the Maior-Domo begin a dialogue to help the user prepare the recipe before cooking it. The user is guided through each step of the recipe at the pace he requires. An example of this is:

User: "Maior-Domo."
Maior-Domo: "What do you fancy?"
User: "What do I have in the fridge?"
Maior-Domo: "You've got apples, onions, yoghurts, milk, boiled ham and champagne in the refrigerator. That's all. Anything else?"
(The user puts a piece of chicken and some carrots into the fridge.)
User: "What do I have in the refrigerator?"
Maior-Domo: "You've got chicken, apples, onions, carrots, yoghurts, milk, boiled ham and champagne in the fridge. That's all."
Maior-Domo: "Would you like anything else?"
User: "Show me the recipe for roast chicken."
User: "I want to see the next page."
User: "Go on."
User: "Show me the ingredients."
Maior-Domo: "For how many people?"
User: "Four people."
User: "Go through the cooking process step by step."

(The Maior-Domo reads the first line of the recipe.)
User: "Done."

(The Maior-Domo reads the second line of the recipe.)
User: "OK."

(The Maior-Domo reads the third line of the recipe.)
User: "Sorry?"

(The Maior-Domo repeats the third line of the recipe.)
User: "Done."

(The Maior-Domo reads the fourth line of the recipe.)
User: "OK."

(The Maior-Domo reads the fifth line of the recipe.)
User: "All right."
Maior-Domo: "This is the last step of the recipe. Turn the oven on whenever you want."
User: "Go cook the chicken."

2.7 Entertainment

Naturally, the home is becoming more and more of a place for leisure, and the natural voice interfaces therefore need to cover this area too. In this context, the user may have a lot of digital information inside the home or coming from outside, like films, music, photos and so on. In this project this field has been covered using the database stored in the home. One of the implemented dialogues is:

User: "I'd like to listen to some music."
Maior-Domo: "What would you like to listen to?"
User: "Something by Mozart."
Maior-Domo: "You've got the 'Alla Turca March', the '40th Symphony' and 'A Little Night Music'."
User: "A Little Night Music."
User: "Maior-Domo."
Maior-Domo: "What do you fancy?"
User: "I'd like to see some photos."
Maior-Domo: "You've got photos of the Olympic games, the wedding and Tibet. Which ones would you like to see?"
User: "The ones of Tibet."
User: "Stop."
User: "Continue."
User: "End."
User: "Maior-Domo."
Maior-Domo: "What do you want?"
User: "Switch off the music."
User: "I want to watch the video of Harry Potter."
User: "Maior-Domo."
Maior-Domo: "What would you like me to do?"
User: "Switch off the video."

Figure 2: Entertainment time. Looking at photos


3 USER INTERFACES

Two different multimodal interfaces have been implemented in this Human Home Interaction Application: a speech interface and visualization.

3.1 Speech Interface

When the user speaks, the Maior-Domo extracts the different commands from these voice orders and controls the home devices. The Maior-Domo recognizes the grammar included in the application, with the common expressions used to manage the household appliances, the entertainment devices and the home in general. To create this grammar and the sentences involved, about 50 people were interviewed and asked what they would say to their home, treating it as a person and asking it to perform services and functions. People need to imagine how to talk to inanimate objects and find this a bit strange; we encouraged them to use the same sentences as they would if they were talking to a servant or a butler. The results of these interviews were quite uniform, with people using similar sentences and orders [6].
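A command grammar of this kind can be sketched as a set of phrase patterns mapped to device actions. The patterns, device names and matching strategy below are invented for illustration; the actual GENIO grammar is far larger, is in Spanish, and was derived from the interviews described above:

```python
import re

# Illustrative command grammar: each pattern either captures a device name
# or is a fixed phrase with an implied device ("please wash the clothes").
GRAMMAR = [
    (re.compile(r"(?:turn|switch|put) on the (?P<device>.+)"), "ON"),
    (re.compile(r"(?:turn|switch) off the (?P<device>.+)"), "OFF"),
    (re.compile(r"please wash the clothes"), ("washing machine", "ON")),
]

def interpret(utterance):
    """Map a recognised sentence to a (device, action) command, or None."""
    text = utterance.lower().strip(".!?")
    for pattern, action in GRAMMAR:
        m = pattern.fullmatch(text)
        if m:
            if isinstance(action, tuple):      # fixed phrase, device implied
                return action
            return (m.group("device"), action)
    return None                                # sentence outside the grammar

print(interpret("Switch on the washing machine"))  # ('washing machine', 'ON')
print(interpret("Please wash the clothes"))        # ('washing machine', 'ON')
```

Allowing several surface forms per action ("turn on", "switch on", "put on") is what lets different speakers use their own phrasing without training, as the interviews suggested.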

In the same way, after receiving any event or information from any device in the network, the Maior-Domo generates a speech message to the user to inform them about it.

In the kitchen the user carries a microphone in his/her pocket, providing mobility and ubiquity, as the user is able to access different services from anywhere in the home thanks to the wireless connection between the pocket microphone and the voice recogniser [7].

Figure 3: The user carries a microphone in his pocket

Every user has a wireless microphone in his/her shirt pocket. This microphone captures his/her voice and all the sounds around him/her and sends them to an equalizer, which filters the voice frequency range from other sounds. From there the voice recognition system "understands" the complete sentence pronounced and processes it. The user can address the whole system in different ways using a lot of expressions, talking naturally and spontaneously and dialoguing with the home. The defined grammar is so large that almost total naturalness has been achieved. This has been demonstrated by several speakers (young men and women) talking to the system without previous training.

Thanks to this natural dialogue, the user does not have to learn any kind of commands, does not need remote controllers for each of the devices, feels closer to the home and has the perception of having everything under control. To implement the speech dialogue, the tools used are ASR (Automatic Speech Recognition) and TTS (Text To Speech) from Loquendo. As said previously, the grammar and the possible dialogues were designed following the recommendations from multiple interviews with users.

3.2 Visualization

To complement the voice dialogues, a visual interface has been developed in order to obtain a multimedia and multimodal environment. This interface can be displayed on several home screens, TVs, Tablet PCs, mobile phones and PDAs, and by means of projectors. As the main visual interface, a wall projection helps the user to manage the home and receive information at any time about the home services.

3.2.1 Avatar

An avatar is a computer-generated image with human appearance, habitually used in user interfaces [8]. An avatar is an interface linking a user with the information they need. It responds to the user's requests and needs, and provides a clear, insightful, rapid link to an information database in a manner that is easy to understand. The main objective of an avatar is to increase the degree of interaction between the user and the application, as well as to obtain a more natural, interactive and customized communication.

As well as speech, the human face is an important element of human communication. Phonemes are the distinct sounds of a language. Each phoneme can be visually represented by a facial expression related to a certain shape of the mouth, called a viseme. With visemes and a phonetic representation, it is feasible to build a facial expression for each small speech segment. The application receives as input the text of the character's speech, together with gender, language and emotion parameters, and generates as output, in real time, the animation of a virtual character uttering the input text with speech synchronization and expressiveness. This system explores the naturalness of facial animation and seeks to offer a real-time interactive interface.

In a study on user interfaces, the use of facial animation in the design of interactive services was rated the most important attribute in these experiments. An important result is that users cooperate more with a computer if it is represented with facial animation and TTS instead of TTS only. The project focuses on facial animation, more specifically on facial animation with synchronization between speech and facial expressions.
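The phoneme-to-viseme mapping described above can be sketched as a lookup table that drives the avatar's mouth shapes. The phoneme inventory and viseme names here are invented for the sketch; a real system would use a full Spanish phoneme set and the avatar's own mouth-shape catalogue:

```python
# Illustrative phoneme-to-viseme table (assumed names, not the GENIO set).
PHONEME_TO_VISEME = {
    "m": "lips_closed", "b": "lips_closed", "p": "lips_closed",
    "o": "round_open",  "u": "round_narrow",
    "a": "wide_open",   "e": "mid_open", "i": "spread",
    "l": "tongue_up",   "s": "teeth_closed",
}

def viseme_track(phonemes, default="rest"):
    """Build the sequence of mouth shapes for a phoneme sequence.

    Unknown phonemes fall back to a neutral 'rest' shape so the
    animation never stalls on an unmapped sound.
    """
    return [PHONEME_TO_VISEME.get(p, default) for p in phonemes]

# Mouth shapes for the (roughly transcribed) Spanish word "hola": /o l a/
print(viseme_track(["o", "l", "a"]))  # ['round_open', 'tongue_up', 'wide_open']
```

Played back in sync with the TTS audio, such a track gives the speech-synchronized facial animation the section describes.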


Figure 4: The avatar

4 CONCLUSIONS

By building this real AmI scenario and by allowing people to interact with this real kitchen, the conclusion reached is that natural human interaction in everyday user environments can be achieved in a fairly short time, and a lot of applications and products could derive from it.

The impact of Ambient Intelligence and human interactions on developments in technological products over the next few decades promises to be colossal [9]. The same will be true of the influence that these new products will exert on the lives of those who own them. Their proliferation in the West will generate, or at least inspire, a new social change of unpredictable proportions, although it may be supposed that it will have a bearing on the very concepts of globalisation and on extending access to welfare. The standard of living, taken as natural access to numerous features and services that relieve us of tiresome tasks, giving us more leisure time while drawing us closer to culture and entertainment, will increase in a manner that we cannot imagine at present, and will be attainable by practically all social classes. The welfare and leisure society will be nearer to removing, or at least toning down, the social stratifications that still persist.

Although society as a whole will benefit from the technological advancements brought by Ambient Intelligence, it will especially be people with disabilities and the growing third-age population who will most improve their standard of living, thanks to future generations of products based on these advancements.

5 FUTURE WORK

In the near future the following work will be carried out:

Development of a wireless network of microphones (microphone arrays) integrated in the home environment (some in each room) for capturing the voice of the users and isolating it from stationary and accidental noise. This will be done through microsystem technology.

Obtaining total speaker independence by enlarging the grammar and digitalizing any kind of human voice in order to provide the system with a non-rejectable and easily understandable voice signal.

Obtaining absolute speaker dependence for some specific services like security and safety applications. The challenge is to prevent any user, for example a child, from being able to turn on the oven or activate the anti-intrusion system. Voice verification techniques will be implemented.

Fagor/Ikerlan have a test environment, called DomoLab, where the usability of the new products is tested. In this future home, different groups (families, students, etc.) will live in and test the final prototypes, with the purpose of obtaining impressions and suggestions from real users.

ACKNOWLEDGEMENTS

Fagor Electrodomésticos is the leading household appliance manufacturer in Spain and one of the biggest in Europe (www.fagor.com). Ikerlan is a research centre funded within the MCC (Mondragon Corporation Cooperative) under the auspices of the Basque Government (www.ikerlan.es).

REFERENCES

[1] Nigel Shadbolt, "Ambient Intelligence". University of Southampton, 2003.

[2] ITEA Ambience project, http://www.extra.research.philips.com/euprojects/ambience/.

[3] Berry Eggen, Gerard Hollemans, Richard van de Sluis, "Exploring and Enhancing the Home Experience". Cognition, Technology and Work. Springer-Verlag, 2002.

[4] A.J.N. van Bremen, K. Crucq, B.J.A. Kröse, "A User-Interface Robot for Ambient Intelligent Environments". 1st International Workshop on Advances in Services Robotics (Italy), 2003.

[5] K. Ducatel, M. Bogdanowicz, F. Scapolo, "Scenarios for Ambient Intelligence in 2010". European Commission Information Society Directorate-General, February 2001.

[6] L. J. Rodríguez, I. Torres, A. Varona, "Annotation and analysis of disfluencies in a spontaneous speech corpus in Spanish". Disfluency in Spontaneous Speech, August 2001.

[7] M. I. Torres, "Reconocimiento del habla en sistemas de diálogo. El procesamiento del lenguaje y del habla en los sistemas de diálogo". J. Llisterri ed., FDS, 2003.

[8] Network of Excellence: Methods for Image and Video Processing in Ambient Intelligence Applications. MIVAI, 2002.

[9] Menno Lindwer, Diana Marculescu, Twan Basten, "Ambient Intelligence Visions and Achievements: Linking Abstract Ideas to Real World Concepts". IEEE, 2003.


An Intelligent Data Management Reasoning Model within a Pervasive Medical Environment

John O’Donoghue and John Herbert 1

Abstract. Large quantities of sensory data may be generated within a pervasive medical environment. Using the information gathered, intelligent systems may be developed to assist our medical practitioners in real time. Patient datasets require constant scrutiny and must be analysed in the context of other available information (e.g. patient profile, medical knowledge base). An accurate real-time overview of a patient's state of health is not always possible, as communication links may break or servers fail. Therefore it is essential that the information provided on a patient's state of health is taken in the context of the available datasets. Presented is a Data Management System - Tripartite Ontology Medical Reasoning Model (DMS-TOMRM). It is built on three input streams: 1) external stimuli (e.g. patient vital signs, patient location), 2) a medical knowledge base (medical database, ontologies) and 3) user profiles (medical history and patient properties). All three pools of information are merged to provide the medical practitioner with a real-time diagnosis assistant. A key element of the DMS-TOMRM is its ability to cope with physical failures. For example, if the medical knowledge base fails, the DMS-TOMRM may still provide a diagnosis based on the user's profile and the current real-time sensor values. This supports the DMS principle of providing a higher quality of service at the patient point of care. Presented is the DMS-TOMRM and how it intelligently interacts with a context-rich medical environment.

1 INTRODUCTION

The importance placed on data management techniques has increased due to low-cost sensory devices and the emergence of ubiquitous environments. Current and future pervasive environments may produce large volumes of sensory data, and appropriate processing and data management techniques are required to provide an effective service for the end user. The medical DMS (Data Management System) [?] contains large volumes of static and dynamic data. Static data sources typically include databases containing patient records, and files containing medical practitioner profiles. Dynamic data may include patient real-time sensor readings and medical practitioner location information. The DMS-TOMRM performs intelligent processing on the available static and dynamic datasets. It correlates this information, highlighting anything of significance (thereby alerting the medical practitioner). The monitoring and handling of data is achieved through an intelligent agent middleware, JadeX [?]. JadeX provides built-in reasoning and goal-oriented facilities which are ideally suited to our context-rich medical environments.

The relationship and meaning of important data variables within our medical environments need to be defined.

1 Department of Computer Science, University College Cork, Ireland, email: {j.odonoghue, j.herbert}@cs.ucc.ie

The basis for this careful management of data is the underlying semantic model. A semantic model implies a precise definition of the meaning of concepts and terms. Within medical informatics, a variety of methods exist which provide precision in biomedical information. These range from medical terminologies, which provide a list of terms with basic hierarchical structures, to very comprehensive representations of medical concepts encompassing a rich set of relationships between a specific set of concepts. Basic representations may be useful for domains such as information retrieval, while sophisticated representations may be suitable for reasoning. Within a medical environment, different ontologies address specific concerns: HL7 [?] is related to medical data exchange terminologies; SNOMED CT [?] is concerned with the precise definition of clinical terms; UMLS [?] addresses the need for "information integration through terminology integration", building on over a hundred source vocabularies and containing over one million concepts. With the DMS-TOMRM, the objective is different to that of an ontology model such as UMLS. The goal is to provide meaning to all available medical data, specifically how we interpret the available datasets in real time. This leads to a very concise and responsive application-specific ontology rather than a broader, general-purpose one.

The computational and communication constraints of an application deployed within a Wireless Patient Sensor Network (WPSN) are directly addressed by the ontology model developed. From this baseline, a generic reasoning model may be created to meet application-specific requirements. If necessary, detailed ontologies and reasoning models may be integrated into the DMS-TOMRM while maintaining the same application targets. A high-level overview of the current DMS-TOMRM prototype is outlined in Figure 1.

Figure 1. A Logical Overview of the DMS Architecture.

The DMS-TOMRM is based on three inputs: 1) external stimuli, 2) a medical knowledge base, and 3) user profiles. If one of these input streams should fail, alternative JadeX plans will be activated based on the current context of the patient and their surroundings.
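The degradation behaviour described above can be sketched as a reasoner that merges whatever input streams are currently available and still produces an assessment when one of them has failed. The function, field names and threshold are illustrative assumptions, not the DMS-TOMRM implementation:

```python
def assess(vital_signs=None, knowledge_base=None, profile=None):
    """Return (assessment, basis); basis lists the streams actually used."""
    basis = []
    alerts = []
    if vital_signs is not None:                       # stream 1: external stimuli
        basis.append("external stimuli")
        if vital_signs.get("systolic", 0) >= 150:     # illustrative threshold
            alerts.append("high systolic reading")
    if profile is not None:                           # stream 3: user profile
        basis.append("user profile")
        if "hypertension" in profile.get("history", []) and alerts:
            alerts.append("consistent with known hypertension")
    if knowledge_base is not None:                    # stream 2: knowledge base
        basis.append("knowledge base")
        # Full ontology-backed reasoning would happen here.
    assessment = "; ".join(alerts) if alerts else "no alert"
    return assessment, basis

# Knowledge base down: an assessment is still produced from the other streams.
result, used = assess(vital_signs={"systolic": 155},
                      profile={"history": ["hypertension"]})
print(used)    # ['external stimuli', 'user profile']
print(result)  # high systolic reading; consistent with known hypertension
```

Reporting the basis alongside the assessment lets the practitioner judge how much of the tripartite evidence actually backed the result, matching the paper's emphasis on interpreting data "in the context of available datasets".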

The initial DMS version [?] was built on a Jade [?] agent middleware, working alongside a Jess expert system and Protégé ontology models. The current DMS version integrates a JadeX adapter. With


JadeX, a true agent BDI (Beliefs, Desires and Intentions) model is applied. It contains beliefs, goals and plans. Beliefs are all known facts about the real-world environment. From this belief base, a JadeX agent may activate plans (i.e. intentions), which may contain a set of sub-plans (enabling the JadeX agent to react to multiple scenarios). Finally, desires express what the agent wishes to achieve: when a goal is outlined, the JadeX agent will use all known avenues to reach it. The DMS-TOMRM requires a dynamic middleware to cope with intermittent communication links and sporadic end-user requests. With JadeX, an intelligent agent middleware provides the necessary service (i.e. activating alternative plans/goals based on all known contexts).
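The belief/goal/plan cycle described above can be illustrated with a minimal BDI-style sketch. This is not the JadeX API (JadeX agents are declared in Java and XML); the plan names, beliefs and selection rule are assumptions made for the example:

```python
class MiniBDIAgent:
    """Toy BDI agent: beliefs gate which plan is adopted for a goal."""

    def __init__(self):
        # Beliefs: known facts about the environment.
        self.beliefs = {"kb_online": False, "sensors_online": True}
        # Candidate plans for a diagnosis goal, tried in order of preference;
        # each carries an applicability condition over the current beliefs.
        self.plans = [
            ("full_diagnosis", lambda b: b["kb_online"] and b["sensors_online"]),
            ("sensor_only_diagnosis", lambda b: b["sensors_online"]),
            ("defer_to_practitioner", lambda b: True),  # always applicable
        ]

    def pursue(self, goal):
        # Stand-in for BDI plan selection: adopt the first plan whose
        # applicability condition holds under the current beliefs.
        for name, applicable in self.plans:
            if applicable(self.beliefs):
                return name
        return None

agent = MiniBDIAgent()
print(agent.pursue("produce diagnosis"))  # sensor_only_diagnosis
```

When the knowledge base comes back online the belief changes and the same goal is pursued through the preferred plan, which is the failover behaviour the DMS-TOMRM relies on.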

Section 2 reviews how ontologies, sensors and agent middlewares have been applied in developing reasoning models within pervasive environments. Section 3 introduces the DMS-TOMRM and how it reasons based on the available datasets. An example scenario of the DMS-TOMRM model, and how it integrates with JadeX, is given in Section 4. Finally, a conclusion on the effectiveness and importance of the DMS-TOMRM model and its future development is outlined.

2 RELATED WORK

A number of ambient reasoning models have been developed based on well-defined ontologies. How our sensors and knowledge bases are integrated plays a pivotal role in their effectiveness [?]. As our pervasive environments are coupled together from multiple distributed objects, an intelligent agent middleware may provide the necessary capabilities for delivering an effective service. This may assist software developers in meeting the data management requirements of our medical practitioners.

2.1 Ontologies

Ontologies within context-rich medical environments have been shown to play a key role in providing effective reasoning architectures. In developing a biomedical context ontology model, it is important not to view it "as a mere knowledge representation tool" [?]. It is extremely important that the true meaning of each data variable is not only understood but is explicit under all related contexts (e.g. in a partial DMS-TOMRM model, a patient blood pressure reading of 150/90 mmHg may not retain the same meaning if the user profile were not available). The term biomedical ontology covers a large range of systems. Ontologies have been developed in the following areas:

2.1.1 Service Oriented Context Ontologies

As communication links and servers fail, a query injected into a context environment does not always provide the desired result. [?] describes a context information service which serves ontology-based context queries, an important characteristic within a pervasive environment, since well-defined relationships between the knowledge base and medical practitioner requirements provide the basis for delivering succinct datasets to the mobile user. Another development within service ontologies is the Web Service Modelling Ontology [?]. Here ontologies are utilised to give "meaning to all resource descriptions as well as all data interchanged during service usage". [?] presents the SOCAM (Service-Oriented Context-Aware Middleware) architecture. It is designed to manipulate and access context-aware information, and presents a formal and extensible context model based on the OWL (Web Ontology Language) ontology.

2.1.2 Ontology-based context models

A number of general-purpose context ontologies have been developed. They outline some of the fundamental aspects required in developing an effective reasoning ontology within our ambient environments. In [?] the approach of decoupling application composition from context acquisition and representation is applied. This enables key features of the application to be designed from the top down, thus providing a pure design approach. CONON (CONtext ONtology) is a domain-specific ontology built on a hierarchical structure, enabling domain-specific extensions to be added if required, based on a formal extensible structure [?]. As our pervasive environments contain multiple heterogeneous devices, development of a common terminology between all participating devices would provide a solid base on which to develop context reasoning models. In [?] each device contains a list of services which it may provide within its ambient environment, and a generic ontology for the description of context information was developed.

A number of key areas highlighting how ontologies have been applied within ambient environments were outlined above. Development of a complete ontology model which represents every single aspect of our real-world environment is a major challenge. Defining relationships between multiple data sources is extremely complex. Ontologies are most effective when developed for a specific application domain with a well-defined collection of datasets and context environment guidelines.

2.2 Sensors

Ambient sensors come in many guises. They provide our medical community with the ability to measure patient vital signs [?] [?] [?] and to provide an overview of the real-world environment (e.g. room temperature, natural light levels) in real time. By merging patient and real-time environment datasets with a patient's profile and medical knowledge base, the potential for a higher quality of service (QoS) within our medical environments is increased.

2.3 Agent Middlewares

The benefit of deploying intelligent agent middlewares within our medical environments is well documented. They provide sufficient reactive and proactive capabilities to deal with complex, parallel tasks. Scheduling resources for patients within a medical environment is a complex task. Outlined in [?] is the MedPAge architecture, which contains Jade agents and JadeX agent adapters. Here agents negotiate with each other based on the current and future states of their respective ambient environments. An ontology-driven software development approach in the context of the semantic web is presented in [?]. Here suggestions are given on how to develop ontology-oriented software through Protege and OWL with an underlying agent platform.

Ontologies, sensors and intelligent agent middlewares provide the necessary tools for developing effective domain-specific reasoning applications. Ontologies may be designed for anything from the very specific to the very broadest of ambient environments. It is clear that attention to detail, particularly in the areas of relationships and context management, is paramount in developing a successful ambient application.

3 Context Reasoning

Presented is the DMS-TOMRM context reasoning model. It is developed specifically to assist medical practitioners in monitoring cardiovascular patients in a non-critical environment. It achieves this through the use of low-disturbance wireless patient sensors (cf. Figure 2). At present, the vital signs monitored include blood pressure, pulse, body temperature and electrocardiogram (ECG).

AI Techniques for AmI, J.C. Augusto and D. Shapiro (Eds.), AITAmI'06

Figure 2. The Tyndall-DMS-Mote measuring a patient's pulse rate while interacting with the Nokia 9500 Communicator.

For this application scenario, the ontological model is partitioned to reflect three main sources of information (cf. Figure 3). This is called the Data Management System-Tripartite Ontology Medical Reasoning Model (DMS-TOMRM); it consists of:

1. Cardiovascular measurement (external stimuli), dealing with the dynamic measurements of the heart and circulatory system as a separate bio-mechanical entity. This model defines how we produce characteristic cardiovascular measurements (for example, systolic and diastolic pressure parameters) from raw sensor datasets.

2. Patient profile record (user profile), providing static informationon the individual whose cardiovascular system is being monitored.

3. A medical knowledge base, describing the relevant clinical knowledge and rules in this particular domain. It provides real-time medical assistance during diagnosis of a cardiovascular patient. It works alongside the external stimuli and user profile domains.

Figure 3. The Data Management System-Tripartite Ontology Medical Reasoning Model (DMS-TOMRM).

The overlaps in Figure 3 of the DMS-TOMRM model indicate that as well as using all three sources of information, we can make use of any two sources to assist our medical practitioners. Thus, in the absence of a user profile, the medical knowledge base could assist the interpretation of the cardiovascular measurements. This may be important in a wireless network environment where not all sources of information may be simultaneously available. An essential task of the DMS is correlating dynamic and static information. The DMS-TOMRM ontology reflects this division: dynamic (external stimuli, e.g. sensors) and static (medical knowledge base and patient profile).
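The partial-model idea above can be sketched as a function that reasons with whichever sources are currently reachable. The function and field names are illustrative assumptions, not the paper's implementation.

```python
def available_sources(measurement, profile=None, knowledge_base=None):
    """Return which DMS-TOMRM sources can contribute to interpreting a reading."""
    # Sensor readings (external stimuli) are always the trigger for reasoning.
    sources = ["external stimuli"]
    if profile is not None:
        sources.append("user profile")
    if knowledge_base is not None:
        sources.append("medical knowledge base")
    # With any two sources a diagnosis can still be narrowed; with all
    # three the interpretation is most specific.
    return sources

# A dropped wireless link may leave the user profile unavailable, yet the
# knowledge base alone still helps interpret a 150/90 mmHg reading:
print(available_sources({"systolic": 150, "diastolic": 90},
                        knowledge_base={"bp_ranges": {}}))
# ['external stimuli', 'medical knowledge base']
```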

Medical practitioners interact with the DMS-TOMRM through the DMS-Server (cf. Figure 4). As a medical practitioner examines a patient, he/she notes the patient's symptoms on his/her mobile device (e.g. PDA, tablet PC). This information is then processed by the DMS-TOMRM, which correlates all known information and provides the medical practitioner with a list of known possibilities (diagnoses). This list of possibilities can be rejected or explored further.

Figure 4. Medical practitioner interacting with the DMS-TOMRM through the DMS-Server.

A brief overview of each of the DMS-TOMRM divisions is given:

3.1 Medical Knowledge Base

Blood pressure is taken as an example of a cardiovascular measurement for this paper. Blood pressure is defined as the pressure exerted on the arterial walls by the flow of blood [?]. This pressure is clearly related to the pumping output of the heart and the resistance of the blood vessels. However, there is a sophisticated regulation system for blood pressure that seeks to maintain blood pressure within a certain range. In addition to the heart, other important elements in the regulation system are the kidneys, the internal cellular lining of the blood vessel walls, and the baroreceptors in the heart [?]. Without going into detail, it is clear that a sophisticated ontology is required to fully model blood pressure, its regulation, and its interaction with other body functions.

In our application, the wireless patient sensor node provides two measurements:

1. the systolic pressure, i.e. specifically the maximum arterial pressure during contraction of the left ventricle of the heart, and

2. the diastolic pressure, i.e. the minimum arterial pressure during relaxation and dilatation of the ventricles of the heart.

These measurements are important indicators. High systolic blood pressure appears to be a significant indicator of heart complications at all ages, but especially in older adults. High diastolic pressure appears to be a strong predictor of heart attack and stroke in young


adults. The difference between systolic and diastolic readings (pulse pressure), although not usually considered, may be a predictor of heart problems, particularly in older adults.

Apart from defining blood pressure within the medical knowledge base (i.e. its specific ranges and categories), patient symptoms and associated causes may be coupled with a specific blood pressure range. This narrows the list of possible diagnoses which will be transmitted back to the medical practitioner.
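A knowledge-base rule of this kind can be sketched as a categorisation function. The thresholds below follow the JNC 7 classification in common use when this work was written; they are an illustrative assumption, not values stated in the paper.

```python
def bp_category(systolic: int, diastolic: int) -> str:
    """Map a blood pressure reading (mmHg) to an illustrative category."""
    # A reading falls into the highest category either value reaches.
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "normal"

# The paper's example reading of 150/90 mmHg:
print(bp_category(150, 90))  # stage 1 hypertension
```

Coupling each category with known symptoms and causes is what lets the knowledge base narrow the diagnosis list returned to the practitioner.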

3.2 External Stimuli

Patient and ambient sensors return raw data values. In isolation such datasets may assist medical staff in providing a better quality of service. By merging this information with a structured medical knowledge base and user profile, better data analysis techniques may be employed to re-examine the patient's previous state of health. It may also be used to predict future conditions. In relation to blood pressure, the Tyndall-DMS-Mote may return systolic values in the range of 0-300 mmHg and diastolic values in the range of 0-150 mmHg.

3.3 User Profile

The medical knowledge base is a collection of facts and relationships binding relevant realities together. This source of information is based on the pure mechanics of the human body (for example, if a patient's blood pressure level remains severely elevated over a long period of time, specific organs will begin to fail; this in turn may cause further complications). By merging external sensor values with this medical knowledge base, early detection of such conditions may prevent unnecessary risks.

Both the medical knowledge base and external stimuli are valuable sources of information. However, they both lack patient-specific realities, including:

1. Patient details, such as age and gender.
2. Current symptoms (e.g. heart palpitations, chest pain).
3. Medical history (operations, family history, allergies).
4. Medication (current and previous medication).

This collection of patient-specific data is defined as a user profile within the DMS architecture. This valuable set of information greatly enhances the identification process in isolating a patient's medical condition.

4 JadeX and Context Reasoning

To manage our data effectively within a pervasive medical environment, a JadeX agent platform is employed. It has similar qualities to Jade [?]; however, JadeX contains a higher abstract level of reasoning. All JadeX agents are given a specific goal which they must reach in order to complete their life cycle. An agent achieves its goals through a set of plans or through a collection of sub-goals. If one of the plans is not executable (e.g. due to communication failure) it may activate an alternative plan or a sequence of sub-goals. All known facts in relation to the monitored environment are stored within the belief base. A belief base may come in the form of an agent tuple set or a local database. JadeX agents react to stimuli in two ways:

1. Internal events may be activated based on a condition trigger within the belief base. Such events activate a deliberation process to see if any other events or plans need to be activated (for example, if a patient's pulse level becomes elevated, a JadeX agent will examine all known datasets; if it discovers that the patient is currently running, it may not raise an alarm). If a predefined plan is not currently active then a new plan is instantiated to ensure that the agent's final goal is reached. If the final goal cannot be reached then new goals may be adopted.

2. External ACL (Agent Communication Language) messages. Here mobile agents throughout the distributed environment may communicate with each other through ACL messages. Within JadeX all ACL messages must go through a deliberation process. All known goals and plans are identified and activated if in accordance with the belief base.
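The internal-event deliberation above (elevated pulse, but the patient is running, so no alarm) can be sketched as a check against the whole belief base rather than a single reading. Function and belief names are illustrative, and the pulse threshold of 120 bpm is an assumed example value.

```python
def deliberate(beliefs: dict) -> str:
    """Sketch of event deliberation: consult all beliefs before acting."""
    if beliefs.get("pulse", 0) > 120:  # condition trigger in the belief base
        # Deliberation step: is the elevated pulse explained by context?
        if beliefs.get("activity") == "running":
            return "no alarm: elevated pulse explained by activity"
        return "alarm: elevated pulse at rest"
    return "no action"

print(deliberate({"pulse": 140, "activity": "running"}))
# no alarm: elevated pulse explained by activity
print(deliberate({"pulse": 140, "activity": "resting"}))
# alarm: elevated pulse at rest
```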

Presented in Figure 5 is the DMS-TOMRM reasoning model in relation to the JadeX architecture. Aside from JadeX administration agents such as the AMS, DF, etc., four DMS JadeX agents reside on the DMS-Server. They include:

1. User Agent. The User Agent manages all incoming and outgoing ACL messages (i.e. requests, informs) from external agents residing on PCs and PDAs. Once initialised, the User Agent registers with the Directory Facilitator, where it may locate other registered agents, including the User Profile, Medical Knowledge Base and External Stimuli agents. The User Agent is responsible for correlating all known data sources (e.g. User Profile, Medical Knowledge Base and External Stimuli) in coming to an overall conclusion on the patient's state of health. A new User Agent is created for each mobile user who wishes to interact with the DMS-Server.

2. User Profile Agent. The User Profile Agent manages all relevant information in relation to a specific patient. All patient data may reside within the patient's mobile device, a local database or on the DMS-Server.

3. Medical Knowledge Base Agent. The Medical Knowledge Base (MKB) may come in many forms, e.g. a local database, ontology or XML document. The MKB stores all known medical facts in a structured manner (a Protege ontology model), which enables the MKB agent to filter and locate relevant information in real time.

4. External Stimuli Agent. Our ambient environment may contain many sensors which transmit vast quantities of information back to a central server. In relation to the DMS-TOMRM model, all patient vital-sign readings are instantly compared against known facts (i.e. the User Profile and/or MKB). This activates a series of sub-goals to identify possible irregularities (for example, high blood pressure).

Figure 5. A JadeX agent-based DMS-TOMRM reasoning model.


5 CONCLUSION

Data management within our ubiquitous medical environments requires an intelligent, sophisticated middleware to combine all known resources in an effective manner. Ontologies may be utilised to define many aspects of our pervasive environment, from the very simple (definitions of medical terms, e.g. blood pressure ranges and categories) to the very complex (complete medical reasoning models). Presented is the Data Management System-Tripartite Ontology Medical Reasoning Model (DMS-TOMRM). This novel approach is designed to assist medical practitioners in providing a higher quality of patient care within a pervasive medical environment. It correlates all known sources of data to assist medical practitioners during diagnosis. A JadeX agent platform is employed to provide the necessary real-time DMS-TOMRM reasoning features. The DMS-TOMRM executes a best-effort approach, as a User Agent on the DMS-Server will assist medical practitioners during diagnosis with "all known" data sources. The DMS-TOMRM provides a single focal point for all reasoning in relation to each patient, thus providing a distinct advantage over the current disjointed data-gathering approaches which may be found in the majority of medical communities. The current prototype of the DMS-TOMRM is still under development. Key evaluation areas include partial and complete DMS-TOMRM model analysis. The effectiveness of the DMS-TOMRM will be judged on the quality of the diagnoses returned to the medical practitioner.

ACKNOWLEDGEMENTS

This work is funded by the Boole Centre for Research in Informatics and is supported by the Tyndall National Institute through the SFI-funded National Access Programme (NAP).

REFERENCES

[1] J. O'Donoghue, J. Herbert, Data Management System: A Context Aware Architecture For Pervasive Patient Monitoring, in Proceedings of the 3rd International Conference on Smart Homes and Health Telematics (ICOST 2005), pp. 159-166, 2005.

[2] A. Pokahr, L. Braubach, W. Lamersdorf, Jadex: Implementing a BDI-Infrastructure for JADE Agents, in EXP - In Search of Innovation (Special Issue on JADE), Telecom Italia Lab, Turin, Italy, 2003.

[3] HL7 Reference Information Model http://www.hl7.org/Library/data-model/RIM/modelpage mem.htm.

[4] SNOMED CT, http://www.snomed.org/snomedct/index.html.
[5] UMLS, http://www.openclinical.org/medTermUmls.html.
[6] F. Bellifemine, A. Poggi, G. Rimassa, JADE - A FIPA Compliant Agent Framework, in Proceedings of the International Conference on the Practical Application of Intelligent Agents and Multi-Agent Systems, pp. 97-108, 1999.

[7] C. Becker, D. Nicklas, Where do spatial context-models end and where do ontologies start? A proposal of a combined approach, in J. Indulska and D. De Roure (eds.), Proceedings of the First International Workshop on Advanced Context Modelling, Reasoning and Management, in conjunction with UbiComp 2004, 2004.

[8] W. Ceusters, B. Smith, Ontology and Medical Terminology: Why Description Logics are not enough, in Proceedings of the conference Towards an Electronic Patient Record (TEPR 2003), San Antonio (electronic publication), 2003.

[9] R. Power, D. Lewis, D. O'Sullivan, O. Conlan, V. Wade, A context information service using ontology-based queries, in Proceedings of the Sixth International Conference on Ubiquitous Computing (UbiComp'04), 2004.

[10] Web Service Modelling Ontology, http://www.wsmo.org/.
[11] T. Gu, X.H. Wang, H.K. Pung, D.Q. Zhang, An Ontology-based Context Model in Intelligent Environments, in Proceedings of the Communication Networks and Distributed Systems Modeling and Simulation Conference, San Diego, California, USA, 2004.

[12] E. Christopoulou, C. Goumopoulos, A. Kameas, An ontology-based context management and reasoning process for UbiComp applications, in Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies, pp. 265-270, 2005.

[13] X.H. Wang, T. Gu, D.Q. Zhang, H.K. Pung, Ontology Based Context Modeling and Reasoning using OWL, in Workshop on Context Modeling and Reasoning (CoMoRea) at PerCom'04, 2004.

[14] J. Alametsa, A. Varri, M. Koivuluoma, L. Barna, The Potential of EMFi Sensors in Heart Activity Monitoring, 2nd OpenECG Workshop: Integration of the ECG into the EHR and Interoperability of ECG Device Systems, Berlin, Germany, 2004.

[15] J. Barton, B. O'Flynn, P. Angove, A. Gonzalez, J. O'Donoghue, J. Herbert, Wireless Sensor Networks and Pervasive Patient Monitoring, in Proceedings of the Information Technology and Telecommunications Annual Conference (IT&T 2005), poster, 2005.

[16] R. Fensli, E. Gunnarson, T. Gundersen, A Wearable ECG-recording System for Continuous Arrhythmia Monitoring in a Wireless Tele-Home-Care Situation, in Proceedings of the Eighteenth IEEE Symposium on Computer-Based Medical Systems (CBMS), 2005.

[17] T.O. Paulussen, A. Zoller, A. Heinzl, A. Pokahr, L. Braubach, W. Lamersdorf, Dynamic Patient Scheduling in Hospitals, in Coordination and Agent Technology in Value Networks, GITO, Berlin, 2004.

[18] H. Knublauch, Ontology-Driven Software Development in the Context of the Semantic Web: An Example Scenario with Protege/OWL, International Workshop on the Model-Driven Semantic Web, Monterey, CA, 2004.

[19] A. Kumar, B. Smith, The Ontology of Blood Pressure: A Case Study in Creating Ontology Partitions in Biomedicine, in press, 2003.


IPRA — An Integrated Pattern Recognition Approach to Enhance the Sensing Abilities of Ambient Intelligence

Holger Schultheis1

Abstract. The ability of ambient intelligence to serve a person could be considerably enhanced if intelligent devices took into account the person's current cognitive and affective state. One primary source of state information is physiological measures. However, obtaining state information requires concertedly employing a multitude of different techniques. To facilitate the use of physiological signals as sources of state information, and thereby allow ambient intelligence to be improved, we developed IPRA, an integrated pattern recognition approach which (a) employs all techniques necessary to appropriately analyze physiological data, (b) is broadly applicable, and (c) is easy to use even for non-experts. This approach, together with the results of its first application, is presented in this contribution.

1 INTRODUCTION

An intelligent device in a person's (P) environment will be all the more of use to P if the services it makes available to P are appropriate with respect to P's current situation. For example, presenting P with new and potentially helpful information (e.g., route advice during driving) should be dependent on (a) whether P currently has the cognitive capacity to process the information presented and (b) whether presentation of the information might distract P's attention and thereby be more harmful than helpful (e.g., P might cause an accident; cf. [10]). Less critical but also desirable might be a preselection of services based on the current mood of P, like an intelligent HiFi device suggesting songs or films on the basis of P's current emotional state. Consequently, truly useful ambient intelligence should take into account P's cognitive and affective states.

In order to take P's states into account, intelligent devices in the environment need to be provided with information about these states. Because the states are not directly observable, they have to be estimated on the basis of indirect evidence. Such evidence can be found, for example, in the user's speech and motor behavior, in data from physiological sensors, and by taking into account information about the general properties of a situation and prior knowledge of P. Of all of these indirect measures, physiological signals seem especially suitable for the goal at hand, since physiology (a) is continuously available, (b) usually changes quickly after the user state has changed, and (c) is usually quite sensitive, i.e., even small changes in user state show up as changes in physiological signals (see, e.g., [18]).

Due to their advantages there has been growing interest in using physiological signals as a source of information about P's affective (e.g., [11]) as well as cognitive (e.g., [4]) states. Yet, the specific nature of physiological measures (see Sect. 2) prohibits the straightforward use of raw signals for state estimation. Rather, a battery of

1 SFB/TR 8 Spatial Cognition, Universität Bremen, Germany, email: [email protected]

techniques (see Sect. 3) needs to be applied to extract the relevant information from the raw signals. Existing approaches, although they have yielded promising results, normally apply only a subset of all necessary techniques. What is more, each of the previous approaches has a rather narrow scope, concentrating on a given particular data set and employing techniques for extracting state information just from this data set. As a result, the available approaches not only fall short of gaining the most out of the physiological signals measured, but also often cannot be reused when analyzing other types of physiological measures for other application scenarios.

Given the considerable potential physiological signals hold for ambient intelligence in providing information about P's state, this seems to be a highly unsatisfactory situation. Desirable would be an approach which is both general enough to be applied to a wide range of physiological data and draws on the full battery of techniques required to properly analyze physiological signals. Ideally, such an approach should also be easy to employ, so that researchers wanting to endow their intelligent devices with the ability to sense cognitive and affective states need not be(come) experts in analyzing the signals. In this contribution we present such an improved approach, called IPRA, which has been developed in response to the identified lack of satisfactory solutions.

The remainder of the article is structured as follows: first, the properties of physiological data necessitating the use of sophisticated techniques are shortly described (Sect. 2). Second, the general principles underlying the design of IPRA (Sect. 3.1) and the specific methods employed in IPRA (Sect. 3.2) are detailed in Section 3. Section 4 will then explicate the results of a first application of IPRA to a set of physiological signals. Finally, some concluding remarks touch on open questions and future work (Sect. 5).

2 PROPERTIES OF PHYSIOLOGICAL DATA

Body conditions and processes are continuously available, and measurement techniques allow gauging the value of a physiological signal several hundred times a second. As a consequence, one measurement from which to extract P's cognitive and/or affective state will comprise several hundred single values. If, for example, P's states should be estimated every second and the physiological signal is sampled at 500 Hz, each measurement will comprise 500 values.

To be able to estimate the current state of P from such a 500-value vector, one has to know how particular values of a measurement relate to certain states. Due to the large number of values, however, it is usually not obvious, and for humans (even after long and thorough inspection) impossible, to determine which combinations of values are characteristic for specific states. Besides the large number of values to consider, identifying the relation between values and states


is further exacerbated by the fact that factors other than the current state of P, called noise, influence the measured values. For one, bodily processes do not change only with changing cognitive or affective state. Heart rate, for instance, depends considerably more strongly on bodily than on mental effort. Furthermore, physiological signals vary seemingly randomly over time, that is, they vary even without recognizable external influences.

The large number of values to be considered, together with the noise in the signal, normally makes it impossible to identify the relationship between measurement values and P's state with the naked eye. Therefore, pattern recognition techniques need to be applied to extract the relationship between the values of a measurement and the state P was in when the measurement was taken. As will be explained in the following sections, however, applying pattern recognition is not as easy as it may sound. It requires careful consideration and preparation utilizing other techniques like feature selection and estimating intrinsic dimensionality. In other words, pattern recognition needs to be integrated with these additional techniques to yield an appropriate approach to the analysis of physiological data. The general principles and specific methods of IPRA, which is such an integrated approach, will be detailed in the following section.

3 IPRA — AN INTEGRATED APPROACH

The aims in developing IPRA were to provide:

1. an integrated approach: utilizing a comprehensive set of methods to ensure a satisfactory analysis of physiological data.

2. a broadly applicable approach: utilizing methods and procedures such that a wide range of different kinds of physiological data can be analyzed with IPRA.

3. an easy-to-use approach: even non-experts should be able to use IPRA to extract the relevant state information from given physiological data.

3.1 General Principles

3.1.1 Aim 1: Integration

As already said, the particular nature of physiological data (Sect. 2) necessitates the use of pattern recognition techniques to extract relations between measured values and P's states. More precisely, statistical pattern recognition (cf. [8]) seems most appropriate due to both the lack of prior information about structure and the noise in the data. The basic idea of statistical methods is to view the measurements as points in an n-dimensional space, where n is the number of values in each measurement, and to assume that measurements taken during one state of P are distributed differently in this n-dimensional space than measurements taken during another state of P. The aim of statistical pattern recognition is then to either directly or indirectly estimate those ((n-1)-dimensional) decision boundaries which minimize the probability of assigning a measurement to a wrong state, that is, the probability of inferring an incorrect state from a given measurement.

It has been shown ([16]) that when estimating the decision boundaries, the estimate becomes increasingly inaccurate as the number of dimensions increases, a phenomenon which has been termed the curse of dimensionality. More precisely, the accuracy of the estimation depends on the proportion between the number of dimensions and the number of examples from which to estimate the boundaries. Jain et al. ([8]), for example, argue that to achieve a satisfactory estimate one needs at least ten times as many example measurements for each state to be distinguished as each measurement has dimensions. This, however, is virtually never the case for physiological data. Not only do physiological measurements tend to have several hundred dimensions, but there are also usually (cf. [11]) only comparatively few example measurements available for estimation.
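The ten-examples-per-dimension heuristic above doubles as a quick feasibility check: given the number of example measurements available per state, how many dimensions can the boundary estimate safely use? The function name and framing are illustrative.

```python
def max_usable_dims(n_examples_per_state: int, factor: int = 10) -> int:
    """Jain et al.'s rule of thumb: at least `factor` examples per state
    for each dimension retained in the estimation."""
    return n_examples_per_state // factor

# 80 example measurements per state support roughly 8 dimensions,
# far fewer than the several hundred values in a raw measurement:
print(max_usable_dims(80))  # 8

# Conversely, using all 500 raw values would demand 5000 examples per state:
print(500 * 10)  # 5000
```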

Due to the curse of dimensionality, applying statistical pattern recognition techniques to raw physiological signals will usually result in suboptimal decision boundaries. To avoid this, one needs either to increase the number of examples from which to estimate or to decrease the number of dimensions (i.e., values) considered during estimation. Of these two options only the latter seems feasible, since the former would require gathering several thousand measurements for each state to be distinguished. However, decreasing the number of dimensions also potentially reduces the amount of information available to distinguish measurements from different states, and may thereby also hamper boundary estimation. Thus, to achieve a satisfactory estimate it is necessary to use as few dimensions as possible, say k, such that the dimensions used contain the maximal information possible for a subset of dimensions of cardinality k.

One way of finding this subset would be to test every subset of dimensions for its suitability. Considering each subset is, however, only seldom possible, since the number of subsets is exponentially related to the number of dimensions. For physiological measurements in particular, following this approach would entail examining 2^500 subsets or more, which is clearly infeasible. Rather, the search space of dimension subsets needs to be searched using heuristics that try to find the best subset while visiting as few subsets as possible. Various such methods, called feature selection methods, have been proposed over the years (see [9] for an overview). Although the different approaches differ quite a lot in how they attempt to intelligently search the subset space, most of them require the user to specify the maximal (and sometimes the minimal) cardinality of the subsets to examine. To constrain the search as strictly as possible it would be desirable to provide as the maximal cardinality a number which corresponds to the minimum number of dimensions needed to distinguish measurements belonging to different states.
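One common heuristic of the kind described above is sequential forward selection: instead of testing all 2^d subsets, greedily add the single dimension that most improves a scoring criterion until the maximal cardinality k is reached. This sketch is a generic illustration, not IPRA's specific method; `score` stands in for any subset-quality criterion.

```python
def forward_select(dims, score, k):
    """Greedy sequential forward selection over dimension indices.

    Visits O(k * d) subsets instead of the 2^d in exhaustive search."""
    selected = []
    while len(selected) < k:
        # Add the dimension whose inclusion yields the best-scoring subset.
        best = max((d for d in dims if d not in selected),
                   key=lambda d: score(selected + [d]))
        selected.append(best)
    return selected

# Toy criterion: pretend only dimensions 3 and 7 carry class information.
informative = {3, 7}
chosen = forward_select(range(10), lambda s: len(informative & set(s)), k=2)
print(sorted(chosen))  # [3, 7]
```

Greedy selection can miss dimensions that are only informative jointly, which is one reason the choice of search heuristic matters.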

An indicator for the minimum number of dimensions necessary is the intrinsic dimensionality of the given example data. Given a data set DS lying in a space S with d dimensions, the intrinsic dimensionality of DS is defined to be d′, d′ ≤ d, such that there exists an S′ with S′ ⊆ S, S′ has dimensionality d′, and all elements of DS lie completely in S′. Usually, and especially for high-dimensional, noisy data, d′ cannot be determined exactly, but has to be estimated. But even if only estimated, d′ yields important information for the pattern recognition process. If d′ < d holds, this indicates that some of the dimensions used to characterize the elements in DS are superfluous. This entails that some of the dimensions used in DS will be of no use in distinguishing measurements stemming from different states. Therefore, d′ can be seen as a measure for the number of dimensions necessary to retain all the information in a data set that is relevant for distinguishing between different states.
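The definition can be illustrated with synthetic data (a NumPy sketch, not from the paper): 200 points described by d = 3 coordinates are constructed to lie on a d′ = 2 plane, so the third singular value of the centred data is numerically zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points in a d = 3 space S, constructed to lie on a 2-dimensional
# plane S' inside S (the plane's basis below is an arbitrary choice).
coeffs = rng.normal(size=(200, 2))
basis = np.array([[1.0, 0.0,  1.0],
                  [0.0, 1.0, -1.0]])
DS = coeffs @ basis                      # shape (200, 3)

# Singular values of the centred data show how many directions
# actually carry variance; here only two of them do.
s = np.linalg.svd(DS - DS.mean(axis=0), compute_uv=False)
d_prime = int(np.sum(s > 1e-8 * s[0]))
print(d_prime)  # 2, although the data set uses d = 3 coordinates
```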

Summarizing the above, there are at least three steps necessary to appropriately analyze physiological signals: first, the intrinsic dimensionality of the given data set has to be assessed. Second, with the estimated intrinsic dimensionality as a guiding value for the cardinality, a subset of all available dimensions is selected to avoid the curse of dimensionality. Finally, pattern recognition techniques can be applied to the data constrained to the dimensions found by the feature selection stage, to estimate the optimal decision boundaries. By implementing this three-step procedure IPRA realizes the first of the above-stated aims of providing an integrated approach to the analysis of physiological data.
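The three steps can be summarized as a pipeline skeleton; the stage functions below are hypothetical placeholders supplied by the caller, not IPRA's actual methods:

```python
import numpy as np

def analyze(data, labels, estimate_intrinsic_dim, select_features,
            train_classifier):
    """Skeleton of the three-stage procedure described in the text."""
    # Stage 1: estimate the intrinsic dimensionality d'.
    d_prime = estimate_intrinsic_dim(data)
    # Stage 2: select a feature subset, its size bounded by d'.
    chosen = select_features(data, labels, max_cardinality=d_prime)
    # Stage 3: estimate decision boundaries on the reduced data.
    model = train_classifier(data[:, chosen], labels)
    return d_prime, chosen, model

# Tiny demonstration with dummy stages (all values hypothetical).
X = np.arange(12.0).reshape(4, 3)
y = np.array([0, 0, 1, 1])
d, chosen, model = analyze(
    X, y,
    estimate_intrinsic_dim=lambda data: 1,
    select_features=lambda data, labels, max_cardinality: [0],
    train_classifier=lambda data, labels: data.shape)
print(d, chosen, model)  # 1 [0] (4, 1)
```

Any concrete estimator, selector, or classifier can be plugged into the three slots, which is the flexibility the next subsection argues for.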

AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06 33


3.1.2 Aim 2: Broadly Applicable

To realize the second aim as well, the methods employed in each of the three steps have to be chosen carefully. Over the years numerous methods for estimating intrinsic dimensionality, for feature selection, and for pattern recognition have been proposed. Each particular method in each stage has its particular strengths and weaknesses and can be assumed to provide good or optimal results only for certain kinds of data. Regarding pattern recognition techniques, for example, Schaffer ([14]) has shown that no single technique is superior in performance to any other technique when considered across all possible types of problems. This does not exclude one pattern recognition technique from being superior to other techniques for certain problem types. Rather, it shows that techniques superior to others for some problem type must be inferior to those techniques when applied to a different problem type. Consequently, which technique is optimal depends on the type of problem considered. Since one usually cannot judge from the given data set which type of problem it corresponds to², it is impossible to know prior to analyzing the data which pattern recognition technique will give the best results. This leads to two important conclusions with respect to the second aim. On the one hand, to be broadly applicable a pattern recognition approach should rely not on one but on several pattern recognition techniques. On the other hand, the techniques selected should be sufficiently different in methodology that they can be assumed to be especially suitable for different kinds of problems.

Following a similar argument, it seems necessary to apply several intrinsic dimensionality and feature selection methods, which should be as different as possible in scope. Adhering to these considerations, IPRA is equipped with three different methods for each of the three stages (see Sect. 3.2). One criterion for selecting the employed techniques, apart from methodological diversity, was the number of parameters a technique needs to have specified to be applicable: techniques with few parameters were preferred. Methods with many parameters seemed unsatisfactory because, without prior knowledge about the data to be analyzed (which usually is not available), either the user would have to provide the parameter values by hand or optimal parameter values would have to be estimated from the data. The latter seems objectionable, as more values to be estimated exacerbate the curse of dimensionality. The former, in contrast, is in disagreement with the third aim of ease of use, since setting the parameter values by hand requires considerable experience and expertise. Therefore, although it was not possible to avoid parameters completely, methods were selected so as to minimize the number of parameters to specify.

3.1.3 Aim 3: Ease of Use

In order to realize the third aim, all selected techniques for each of the three steps were combined into a single program. This program builds on the PRTools software package version 3.0 of Duin ([7]) and provides a GUI from which to access and control the employment of the techniques. For each step of the general procedure, i.e., for intrinsic dimensionality estimation, feature selection, and pattern recognition, the three techniques can be freely chosen to be applied or not. Furthermore, for every technique chosen to be applied, the parameters for the technique's application can, where necessary, either be set to some default value (extracted from the literature), be set to

2 Even if the problem type were known, selecting the best pattern recognition technique would be hard to achieve, because currently no systematic guidelines on which technique to use for which problem are available.

an arbitrary value the user might want to provide, or be set to be estimated from the given data set. In doing so, it is ensured that (a) novice users, by relying on default values or estimates, can apply IPRA successfully in most situations and (b) more experienced users have full freedom to parameterize the techniques according to their needs.

Once the given data set has been analyzed with the methods specified by the user, it is necessary to check how accurately P's state can be inferred from the physiological measurements. To avoid overfitting, the estimated decision boundaries are evaluated on measurements not used for the estimation. More precisely, before the start of the three-step procedure the given data set is split into one part for estimating and another part for evaluating the boundaries. Although this kind of evaluation is not ideal from a theoretical point of view, it has been chosen for reasons of computational efficiency (see Sect. 5).
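The split described here can be sketched as a single hold-out split; the fraction and seed below are illustrative defaults, not values from the paper:

```python
import numpy as np

def holdout_split(X, y, eval_fraction=0.3, seed=0):
    """Split the data once into an estimation part and an evaluation
    part (a single hold-out split, not cross-validation)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    n_eval = int(len(X) * eval_fraction)
    eval_idx, est_idx = order[:n_eval], order[n_eval:]
    return X[est_idx], y[est_idx], X[eval_idx], y[eval_idx]

# Demonstration on a toy data set of 10 measurements.
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_est, y_est, X_ev, y_ev = holdout_split(X, y)
print(len(X_est), len(X_ev))  # 7 3
```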

3.2 Specific Methods

In this section the three specific methods utilized in IPRA for intrinsic dimensionality estimation (Sect. 3.2.1), feature selection (Sect. 3.2.2), and pattern recognition (Sect. 3.2.3), respectively, will be briefly introduced. Although numerous methods have been proposed for each stage of the analysis, the techniques can usually be grouped with regard to one or two fundamental aspects. In accord with the aim of using preferably diverse techniques, the specific techniques were selected such that they have different characteristics with respect to these fundamental aspects. For deciding between techniques having the same characteristics, those were chosen which need fewer parameters and are more strongly supported by evaluation studies in the literature.

3.2.1 Intrinsic Dimensionality

Methods to estimate the intrinsic dimensionality d′ can be grouped according to two aspects. A first distinguishing characteristic is whether a technique is primarily sensitive to linear or non-linear relationships in the data. The second grouping factor discriminates between methods designed to estimate d′ directly and methods estimating d′ as a by-product.

The core component of virtually all linear estimation techniques is principal component analysis (PCA). PCA has the advantage of being computationally efficient and needing only a few parameters. Its major disadvantage is that it only takes into account linear relationships between dimensions and might thus overestimate the true d′. Although modifications to PCA exist to avoid this drawback, we decided to use the unmodified PCA, because the modifications require more parameters to be specified and increase the computational complexity.
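A PCA-based estimate can be sketched as follows; the 95% variance threshold is a conventional choice for illustration, not a value taken from the paper:

```python
import numpy as np

def pca_intrinsic_dim(X, var_threshold=0.95):
    """Estimate d' as the number of principal components needed to
    retain var_threshold of the total variance."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(explained, var_threshold) + 1)

# Five recorded dimensions, but only two independent sources plus a
# little noise (a synthetic example, not physiological data).
rng = np.random.default_rng(1)
sources = rng.normal(size=(500, 2))
mixing = np.array([[1.0, 0.0, 1.0, 0.0,  1.0],
                   [0.0, 1.0, 0.0, 1.0, -1.0]])
X = sources @ mixing + 0.01 * rng.normal(size=(500, 5))
print(pca_intrinsic_dim(X))  # 2
```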

Still estimating d′ directly, but also taking non-linear relationships into account, is the nearest neighbor estimation designed by Verveer and Duin ([17]). Drawing on the basic observation that the probability density at some measurement m can be estimated by the number of other measurements in its close vicinity, the estimate is based on the extent to which one has to enlarge the vicinity of m to include one additional measurement. This method is computationally efficient, needs no parameters to be specified, and has also been shown to be among the most accurate estimators (cf. [19]).
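The idea of growing each point's vicinity can be illustrated with a toy nearest-neighbour estimator. Note this is only a simplified sketch of the underlying principle (in d dimensions the expected log-ratio of the distances to the 2nd and 1st nearest neighbour is roughly 1/d), not Verveer and Duin's exact iterative scheme:

```python
import numpy as np

def nn_intrinsic_dim(X):
    """Toy nearest-neighbour estimate of intrinsic dimensionality:
    the inverse mean log-ratio of 2nd- to 1st-neighbour distances."""
    # Full pairwise distance matrix (fine for small data sets).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # ignore zero self-distances
    D.sort(axis=1)
    r1, r2 = D[:, 0], D[:, 1]          # 1st and 2nd neighbour distances
    return 1.0 / np.mean(np.log(r2 / r1))

rng = np.random.default_rng(0)
points = rng.uniform(size=(1000, 2))   # data whose true d' is 2
print(nn_intrinsic_dim(points))        # comes out near 2
```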

The third chosen method belongs to the group of projection techniques, which estimate d′ indirectly. Of the projection methods available, like auto-associative feed-forward networks, self-organizing maps, or Sammon's mapping (cf. [13]), the last is most appropriate for physiological data, because its computational complexity increases with the number of measurements but is independent of the number of dimensions.

The overall estimate of d′ is the mean of the estimates of all applied methods.

3.2.2 Feature Selection

Since it is infeasible to consider all subsets during feature selection (Sect. 3.1), selection techniques employ heuristics to guide the search. The criterion used for guidance in virtually all selection methods is the quality of the subsets considered so far, assuming that good subsets are more similar (with respect to the contained dimensions) to the best subset than bad subsets are. However, problems vary regarding the degree to which this ordering assumption holds. As Cover and Van Campenhout ([6]) have shown, there exist problems for which removing or adding one dimension to the best subset will result in the worst subset. Consequently, for some problems relying on this assumption will lead the search astray, whereas for others it will lead to the best feature subset. In order to achieve good results for a wide variety of problems, the selection methods were chosen such that they rely on the ordering assumption to different degrees.

The first method chosen was first-choice hill climbing (FCHC). Starting from some subset s, the subsets which can be created by removing or adding just one dimension from/to s (called successors) are evaluated one after another. As soon as a subset is found which has a higher quality, this subset is chosen and the procedure is repeated. This method needs few parameters, is computationally efficient, and has been shown to be quite suitable especially for large search spaces.
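A minimal sketch of FCHC over feature subsets follows; the quality function is a hypothetical stand-in (in practice it would be, e.g., classifier accuracy on held-out data):

```python
import numpy as np

def fchc(n_dims, quality, max_card, seed=0):
    """First-choice hill climbing over feature subsets.
    `quality` scores a frozenset of dimension indices."""
    rng = np.random.default_rng(seed)
    current, best_q = frozenset(), quality(frozenset())
    improved = True
    while improved:
        improved = False
        # Visit one-dimension-toggled successors in random order.
        for d in map(int, rng.permutation(n_dims)):
            succ = current - {d} if d in current else current | {d}
            if len(succ) > max_card:
                continue
            q = quality(succ)
            if q > best_q:              # first improving successor wins
                current, best_q, improved = succ, q, True
                break
    return current, best_q

# Toy quality function (hypothetical): reward dimensions 1 and 3,
# penalize everything else.
subset, score = fchc(6, lambda s: len(s & {1, 3}) - len(s - {1, 3}),
                     max_card=5)
print(sorted(subset), score)  # [1, 3] 2
```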

The sequential floating forward selection (SFFS) developed by Pudil et al. ([12]) was chosen as the second method, because it needs only one parameter to be specified and has been shown in several evaluation studies to be as good as or better than other selection techniques. Basically, this method proceeds as FCHC does, but always moves to the best successor of the current subset.

As a third selection method, a variant of the beam search proposed by Aha and Bankert ([1]) was chosen. Compared to FCHC and SFFS it searches the subset space more thoroughly by considering several subsets and their successors in each step. Although this method is computationally more complex than the other two methods, it will, in case the ordering assumption holds for the given problem, be better at finding the best subset.

As with the estimation of intrinsic dimensionality, the user can freely decide which of the three methods to employ.

3.2.3 Pattern Recognition

The two aspects used to distinguish groups of pattern recognition techniques are whether the estimated boundaries are linear and whether the estimate relies on the distances between measurements.

Out of the set of available linear pattern recognition techniques, support vector machines (SVM, cf. [5]) are used in IPRA, as they have empirically been shown (a) to be comparatively insusceptible to the curse of dimensionality and (b) to be quite effective in analyzing physiological data. Because SVMs, furthermore, need only two parameters to be specified, they seemed a good choice for IPRA.
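IPRA builds on a full SVM implementation (via PRTools/Matlab); purely as an illustration of the underlying idea, a tiny linear soft-margin SVM can be trained by subgradient descent on the hinge loss:

```python
import numpy as np

def linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Toy linear soft-margin SVM via subgradient descent on the
    regularized hinge loss. Labels y must be in {-1, +1}. The
    hyperparameters here are illustrative, not IPRA's settings."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                 # points inside the margin
        # Subgradient of 0.5*||w||^2 + C * sum of hinge losses.
        w -= lr * (w - C * (y[viol, None] * X[viol]).sum(axis=0))
        b -= lr * (-C * y[viol].sum())
    return w, b

# Synthetic two-state data: two well-separated Gaussian clouds.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (50, 2)),
               rng.normal(-2.0, 0.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w, b = linear_svm(X, y)
print(np.mean(np.sign(X @ w + b) == y))  # 1.0 on this separable data
```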

The second method chosen was dynamic batch learning vector quantization (DBLVQ), an advanced version of the learning vector quantization variants developed by Bermejo ([3, 2]). One advantage of this method is that the user does not need to specify any parameters. What is more, the two LVQ variants it combines have been shown to give good results in various pattern recognition tasks.

Both SVM and DBLVQ rely on distances between measurements to estimate the decision boundaries. To employ a wide variety of conceptually different techniques, we also chose a method which does not draw on distances. More precisely, we chose to use a decision tree (DT) as the third pattern recognition technique. Although decision trees have not been used to analyze physiological data so far, they have been applied quite successfully in several other problem domains. In addition, only two parameters have to be specified and the computational complexity of DTs is rather low.

As in the other two stages, use of any or all of these three techniques can be controlled by the user.

4 AN EXAMPLE APPLICATION

As a first evaluation of the approach, IPRA has been applied to physiological data from an experiment conducted by Schultheis and Jameson ([15]). In this experiment participants had to read texts of varying difficulty. It was hypothesized that reading easy texts would lead to a lower cognitive load than reading difficult texts. In accord with the hypothesis, several measures recorded during reading indicated that participants experienced higher load when reading difficult texts.

One of the measures signifying the load effect was the electroencephalogram (EEG). More precisely, one particular component of the EEG, the P300, was, on average, significantly higher during easy reading. Accordingly, the P300 could be used to identify the current cognitive load of P, and thus the EEG data seemed a suitable data set with which to employ and evaluate IPRA.

In the following, the characteristics of the EEG data are described (Sect. 4.1) before the results of the analysis are presented (Sect. 4.2).

4.1 The Data

EEG was measured at 500 Hz from eight electrodes across the scalp (forehead to back of the head). To be able to identify the P300 in the measurements, EEG segments of 550 ms have to be considered for each electrode. Consequently, each measurement contained 8 ∗ 276 = 2208 values, that is, the dimension of the measurement space was 2208. As mentioned above, these measurements differed in the height of the P300 depending on the load situation they were recorded in. However, this difference is far from obvious, as can be seen from Figure 1. Although on average, that is, across all participants and measurements, the P300 is clearly higher for easy texts, the relation might be reversed for individual measurements: due to the noise in the EEG signals, measurements from easy reading (solid line) might exhibit a smaller P300 than measurements from difficult reading (dashed line). Thus, the pattern recognition task can be assumed to be extremely difficult for the EEG data.
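The quoted dimensionality can be checked directly; one plausible reading of the numbers (an assumption on our part) is that a 550 ms segment at 500 Hz spans 275 sampling intervals, i.e. 276 samples including both endpoints:

```python
# Reconstructing the measurement-space dimensionality from the text.
sampling_rate_hz = 500
segment_ms = 550
n_electrodes = 8

# 275 sampling intervals plus the endpoint gives 276 samples.
samples_per_electrode = segment_ms * sampling_rate_hz // 1000 + 1
total_dims = n_electrodes * samples_per_electrode
print(samples_per_electrode, total_dims)  # 276 2208
```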

4.2 Analysis Results

The results of the analysis are displayed in Table 1. Each column presents the results for one of the pattern recognition techniques, both without (first 3 columns) and with feature selection enabled (columns 4 - 6). The first row in each column signifies how many dimensions have been used for pattern recognition. The second row gives the accuracy of the pattern recognizers obtained by the analysis. The accuracy value was determined as the percentage of measurements in that part of the data used only for evaluation (see Sect. 3.1) which were assigned to the correct state.

[Figure 1: example EEG measurements for easy (solid line) and difficult (dashed line) reading; x-axis: Milliseconds (0-500), y-axis: Electrical Potential in Microvolts (-30 to 40). The P300 usually occurs between 200 ms and 400 ms.]

Table 1. Classification results in percent correct. Only the best results for each classifier are shown. Estimated d′ was 12.

                 without feature selection    with feature selection
                 SVM     DBLVQ   DT           SVM     DBLVQ   DT
   #dimensions   all     all     all          6       3       3
   accuracy      57%     50%     51%          61%     56%     65%

Several points are noteworthy with respect to the results: first of all, although the pattern recognizers perform above chance, the accuracies are rather low. In part this is due to the extreme difficulty of the problem (Sect. 4.1). Furthermore, however, the low accuracies might be an artifact of the way the recognizers were evaluated, since splitting the original data into training and test sets has been shown to underestimate the true accuracy of pattern recognizers (cf. [8]).

Second, comparing columns 4 - 6 with columns 1 - 3 shows that feature selection is highly desirable: each of the three techniques exhibits improved accuracy with the reduced set of dimensions. Third, the estimated intrinsic dimensionality is quite close to the cardinality of the optimal subset finally used and therefore considerably helps to constrain the search space. Fourth, using more than one method pays off. Not only are there marked differences in the accuracies of the different techniques, but the technique that turned out best could also not have been predicted before the analysis.

Thus, this first example analysis of EEG data lends strong support to the general principles underlying the design of IPRA.

5 CONCLUSIONS

Physiological data constitutes a valuable source of information about a human person, and by exploiting this source ambient intelligence applications could be improved. However, since (a) extracting information from physiological signals requires the principled use of several pattern recognition techniques and (b) existing approaches are lacking in methodology and ease of use, ambient intelligence applications are usually barred from this important source of information. In response to this lack, we developed IPRA, an integrated pattern recognition approach which (a) employs all techniques necessary to appropriately analyze physiological data, (b) is broadly applicable, and (c) is easy to use even for non-experts. IPRA has been evaluated with an analysis of EEG data and found to be a suitable tool for the extraction of person information from physiological data.

Since the current Matlab implementation of IPRA is comparatively slow, to further advance our approach we plan to reimplement IPRA in a more machine-oriented programming language. Assuming a suitable speed-up can be achieved through reimplementation, it seems, moreover, desirable to improve the current evaluation procedure. Instead of using training and test sets, for example, bootstrapping could be employed.

ACKNOWLEDGEMENTS

This paper presents work done in the project READY of the Collaborative Research Center SFB 378 Resource-adaptive Cognitive Processes and in the project R1-[ImageSpace] of the Transregional Collaborative Research Center SFB/TR 8 Spatial Cognition. Funding by the German Research Foundation (DFG) is gratefully acknowledged.

REFERENCES

[1] D. W. Aha and R. L. Bankert, ‘A comparative evaluation of sequential feature selection algorithms’, in Artificial Intelligence and Statistics, eds., D. Fisher and J.-H. Lenz, 199 – 206, Springer, New York, (1996).

[2] S. Bermejo, ‘A dynamic LVQ algorithm for improving the generalisation of nearest neighbour classifiers’, in Learning with Nearest Neighbour Classifiers, ed., S. Bermejo, 6-1 – 6-12, Universitat Politecnica de Catalunya, Barcelona, (2000).

[3] S. Bermejo and J. Cabestany, ‘A batch learning vector quantization algorithm for nearest neighbour classification’, Neural Processing Letters, 11, 173 – 184, (2000).

[4] B. Blankertz, C. Schafer, G. Dornhege, and G. Curio, ‘Single trial detection of EEG error potentials: A tool for increasing BCI transmission rates’, in Artificial Neural Networks - ICANN 2002, pp. 1137 – 1143, (2002).

[5] C. J. C. Burges, ‘A tutorial on support vector machines for pattern recognition’, Data Mining and Knowledge Discovery, 2(2), 121 – 167, (1998).

[6] T. M. Cover and J. M. Van Campenhout, ‘On the possible orderings in the measurement selection problem’, IEEE Transactions on Systems, Man and Cybernetics, 7(9), 657 – 661, (1977).

[7] R. P. W. Duin, PRTools Version 3.0: A Matlab toolbox for pattern recognition, 2000.

[8] A. K. Jain, R. P. W. Duin, and J. Mao, ‘Statistical pattern recognition: A review’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1), 4 – 37, (2000).

[9] H. Liu and H. Motoda, Feature Selection for Knowledge Discovery and Data Mining, Kluwer, Boston, 1998.

[10] D. C. McFarlane and K. A. Latorella, ‘The scope and importance of human interruption in human-computer interaction design’, Human-Computer Interaction, 17, 1 – 61, (2002).

[11] R. W. Picard, E. Vyzas, and J. Healey, ‘Toward machine emotional intelligence: Analysis of affective physiological state’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), 1175 – 1191, (2001).

[12] P. Pudil, J. Novovicova, and J. Kittler, ‘Floating search methods in feature selection’, Pattern Recognition Letters, 15, 1119 – 1125, (1994).

[13] J. W. Sammon, ‘A nonlinear mapping for data structure analysis’, IEEE Transactions on Computers, 18, 401 – 409, (1969).

[14] C. Schaffer, ‘A conservation law for generalization performance’, in Proceedings of the Eleventh International Conference on Machine Learning, eds., William W. Cohen and Haym Hirsh, 259 – 265, Morgan Kaufmann, San Francisco, (1994).

[15] H. Schultheis and A. Jameson, ‘Assessing cognitive load in adaptive hypermedia systems: Physiological and behavioral methods’, in Adaptive Hypermedia and Adaptive Web-Based Systems: Proceedings of AH 2004, eds., Wolfgang Nejdl and Paul De Bra, Springer, (2004).

[16] G. V. Trunk, ‘A problem of dimensionality: A simple example’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1(3), 306 – 307, (1979).

[17] P. J. Verveer and R. P. W. Duin, ‘An evaluation of intrinsic dimensionality estimators’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1), 81 – 86, (1995).

[18] G. F. Wilson and F. T. Eggemeier, ‘Psychophysiological assessment of workload in multi-task environments’, in Multiple-Task Performance, ed., D. L. Damos, 329 – 360, Taylor and Francis, London, (1991).

[19] N. Wyse, R. Dubes, and A. K. Jain, ‘A critical evaluation of intrinsic dimensionality algorithms’, in Pattern Recognition in Practice, eds., Edzard S. Gelsema and Laveen N. Kanal, 415 – 425, North-Holland Publishing Company, Amsterdam, (1980).


AI Techniques for Ambient Intelligence Posters


Humans and Agents Teaming for Ambient Cognition

Masja Kempen1 and Manuela Viezzer1,2 and Niek Wijngaards1

Abstract. In actor-agent teams, human and artificial entities interact and cooperate in order to enhance and augment their individual and joint cognitive, ergonomic, and problem-solving capabilities. Actor-agent communities can also benefit from ‘ambient cognition’, a novel concept reaching further than ambient intelligence, which hardly takes into account the resource limitations and time-varying capabilities of both humans and agents in collaborative settings. The Dutch Companion project aims at the realization of an agent that takes advantage of ambient cognition concerning actor-agent system dynamics, such that natural, social, and emotion-sensitive interaction with an actor can be sustained over a longer period of time. We elaborate on our vision of pursuing ambient cognition within actor-agent systems and briefly describe the goals of the Dutch Companion project.

1 BEYOND AMBIENT INTELLIGENCE

What is currently missing in Ambient Intelligence [1] is ‘emotional and social intelligence’ going beyond user and preference awareness, in other words some indispensable features of cognition. We do not envisage agents that emulate human actors, but agents that augment or complement actors’ cognitive capabilities. We envision ambient intelligence evolving towards ambient cognition: an environment which augments human cognition and supports humans in achieving their goals. Not only is responding to and providing the human with support an important ingredient; more crucial are ‘services’ such as support delivered at the right time, depending on the situation, context, preferences, knowledge level, and skills of humans. Intelligent artificial entities may put themselves forward as agents with which humans can team up and form longer-term relationships. These cognitive agents are capable of profiling their human companions, not only regarding personal preferences, but also regarding moods, emotions, goals, intentions, etc. An important task for the cognitive agents is to reduce the (cognitive) workload of and stress among humans, and to facilitate the humans’ continued effectiveness in performing tasks and achieving objectives.

We regard the human inside its ambient intelligent environment as partaking in an actor-agent team (AAT). In this construct, actor refers to humans, and agents to artificial systems, ranging from sensors to highly intelligent, cognitive systems. Here, teams are associated with systems that collaborate, whereas actor-agent communities (AAC) are distributed actor-agent teams whose performance hinges on communication [2].

Teams are often considered to consist of humans [3]. In our opinion, intelligent systems such as agents and robots can also be team members, equivalent in status to humans. This is in contrast to a large amount of system-level team and agent research, which concentrates on agent-based support for individual human team members, e.g. [4]. In our view, agents may also take the initiative and give orders to human (and other agent) team members.

1 DECIS Lab, Thales Research and Technology Netherlands, P.O. Box 90, 2600 AB Delft, The Netherlands. Contact author: [email protected]

2 School of Computer Science, The University of Birmingham, B15 2TT Birmingham, United Kingdom.

The rationale for forming human teams is that they perform better than individuals on their own, assuming that the right people are together in a team. Teams are built up out of individuals, each with their own strengths, capacities and personalities, which should lead to efficient collaboration.

But what happens when agents become team members? We would expect the team performance to increase. However, actors interacting with agents bring to bear new forms of collaboration, interaction, and communication schemes, introducing new hurdles to be taken by the team members. This unmistakably affects team performance as a whole in a rather unpredictable way.

Hoffman et al. [5] emphasize that AACs require analysis as a whole, rather than from an agent-systems point of view alone. They identify actors, agents and context as the analysis triple. For actors, their cognitive capacities, perceptual capacities and goals are listed as distinguishing characteristics; for agents, their computational and interface capabilities; and the context is characterized by requirements, constraints and potentials. Herein, agents should amplify human intelligence, which implies compensating for human shortcomings and boosting competencies. This strengthens us in hypothesizing that actor-agent teams perform better than actor-only teams or agent-only teams.

2 THE DUTCH COMPANION

Humans are, perhaps after some inquiries, remarkably good at recognizing emotions in others, and we use this capability in all our communications. This leads to the assumption that the correct inference of the emotional state of communicative partners is essential for naturally developing communication and natural interaction. There is an important role set aside for emotions in future intelligent systems, acknowledged by a lot of research done in areas such as affective computing and social robotics [6, 7, 8].

The Dutch Companion project deals with augmenting agents such that they are sensitive to the emotions of humans and are capable of portraying emotions as well: a step towards ambient cognitive interaction. The main focus of the project is on improving the effectiveness of actor-agent interaction by making an agent aware of human emotions and expected social interactions. An agent can then respond not only with appropriate content but also with an appropriate emotional state. The agent will have the embodiment of Philips’ iCat ([9]) and will be situated inside the kitchen. It learns to take preferences and emotional state into account when suggesting meals or activities to the household’s inhabitants.


The project concentrates on the following: (i) the ability to focus attention on the user’s needs for information and assistance as a function of the user’s situation, goals, current (cognitive) capabilities, and emotional states; (ii) the ability to adapt in real time to behavior and responses, beyond the mere use of some built-in static user model.

The first point will be addressed by looking at the use of sound recognition and analysis [10] to enhance the iCat’s situational awareness. The classification of particular sounds from the environment will be used by the iCat to alert the user, depending on the current user context. Via deliberation it determines whether or not the user should be alerted.

The second point will be addressed by detection of emotion in the user’s voice, in combination with the emotional output from other modalities, such as facial recognition and gestures. The perceived emotional state can then be used as a controlling input for the iCat’s deliberation.

The personal and social deliberation designs and implementations will be based on agent-oriented programming, in particular 3APL [11]. A central element of 3APL is the way the agent deliberates about how to perform plans given certain goals and beliefs, in particular which goals and plans to select and pursue, and which goals and plans to revise under the given circumstances. This so-called ‘deliberation cycle’ has to be extended to also include the effect of emotional states, as well as the influence of social aspects such as norms and obligations arising from the social context [12].

In addition, we will couple the emotion recognition with the iCat communication and dialogue module, thus changing the nature of the dialogue and the content of the communication depending on the emotional state of the user. We refer to [13] for a more elaborate description of the project, expected results and some illustrative scenarios.

3 TOWARDS AMBIENT COGNITION

The current abundance and omnipresence of ICT architectures and infrastructures enable ubiquitous, pervasive, sentient, and ambient intelligent computing, communication, cooperation, and competition of both artificial and societal organizations and structures. However, they appear to us as mere nice-to-haves in a rather unstructured and unorganized ICT infrastructure. In general, they still lack cognitive engineering capabilities, namely those for anticipation and selection of attention.

Instead of perpetually handcrafting standalone ICT infrastructures and integrating them, smart human-system network interaction paradigms are needed, such that ambient intelligent or actor-agent systems can continuously select and embody, after learning, suitable anticipation and selection-of-attention schemes. Only then can those novel integrated ICT architectures and infrastructures be brought to life. In short, new paradigms like those proposed in [14] for the co-existence and co-evolution of humans, machines, and their extensions are needed in order to sustain all of them simultaneously.

Bringing ICT to life requires moving from the nice-to-have features provided by ambient intelligence to the need-to-have features, services, and better quality of life provided by ambient cognition. The shift that we envision will be enabled by building autonomous cognitive systems such as the one pursued by the Dutch Companion project. This is fully in line with ongoing research activities in the area of ambient intelligence, where the emphasis is on the development of greater user-friendliness, usability, more efficient service support, and user empowerment.

ACKNOWLEDGEMENTS

This work is supported by SenterNovem, Dutch Companion project grant nr. IS053013, and the Dutch Ministry of Economic Affairs, ICIS research program, grant nr. BSIK03024. The Dutch Companion and ICIS projects are both hosted by the DECIS Lab (http://www.decis.nl), the open research partnership of Thales Nederland, the Delft University of Technology, the University of Amsterdam, and the Netherlands Organisation for Applied Scientific Research (TNO). We explicitly want to acknowledge Dr. A. H. Salden for his invaluable comments regarding the past, present and future of ambient intelligence.

REFERENCES

[1] W. Weber, J. Rabaey and E. Aarts (Eds.), Ambient Intelligence, Springer-Verlag, (2005).

[2] N. Wijngaards, M. Kempen, A. Smits and K. Nieuwenhuis, 'Towards Sustained Team Effectiveness', in G. Lindemann et al. (Eds.), Selected Revised Papers from the Workshops on Norms and Institutions for Regulated Multi-Agent Systems (ANIREM) and Organizations and Organization Oriented Programming (OOOP) at AAMAS'05, LNCS, vol. 3913, 33-45, (2006). In press.

[3] E. Salas, T. L. Dickinson, S. A. Converse and S. I. Tannenbaum, 'Toward an understanding of team performance and training', in R.W. Swezey and E. Salas (Eds.), Teams: Their Training and Performance, 3-29, Ablex, Norwood, NJ, (1992).

[4] K. Sycara and M. Lewis, 'Integrating intelligent agents into human teams', in E. Salas and S. Fiore (Eds.), Team Cognition: Understanding the Factors that Drive Process and Performance, American Psychological Association, Washington, DC, (2004).

[5] R. Hoffman, P. Hayes, K. M. Ford and P. Hancock, 'The triples rule', IEEE Intelligent Systems, 17(3), 62-65, (2002).

[6] R. Picard, Affective Computing, MIT Press, (1997).

[7] T. Fong, I. Nourbakhsh and K. Dautenhahn, 'A Survey of Socially Interactive Robots', Robotics and Autonomous Systems, 42, 143-166, (2003).

[8] K. Dautenhahn, 'Robots We Like to Live With?! – A Developmental Perspective on a Personalized, Life-Long Robot Companion', Proceedings IEEE RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication, September 20-22, 2004, Kurashiki, Okayama, Japan, IEEE Press, 17-22, (2004).

[9] iCat Research Platform, Experimentation Platform for Human-Robot Interaction Research, available from http://www.hitech-projects.com/icat/platform.php

[10] T. C. Andringa, Continuity Preserving Signal Processing, Ph.D. Thesis, University of Groningen, The Netherlands, (2002).

[11] M. Dastani, M.B. van Riemsdijk, F. Dignum and J.-J. Ch. Meyer, 'A Programming Language for Cognitive Agents: Goal-Directed 3APL', in Programming Multi-Agent Systems, Proc. ProMAS 2003, M. Dastani, J. Dix and A. El Fallah-Seghrouchni (Eds.), LNAI 3067, Springer, Berlin, 111-130, (2004).

[12] J.-J. Ch. Meyer, 'Reasoning about Emotional Agents', in Proc. 16th European Conf. on Artif. Intell. (ECAI 2004), R. Lopez de Mantaras and L. Saitta (Eds.), IOS Press, 129-133, (2004).

[13] M. Kempen, M. Viezzer and N. Wijngaards, 'Humans and Agents Teaming for Ambient Cognition', Dutch Companion Technical Report, (2006).

[14] A. H. Salden and M. Kempen, 'Sustainable Cybernetic Systems – Backbones of Ambient Intelligent Environments', in P. Remagnino, G.L. Foresti and T. Ellis (Eds.), Ambient Intelligence: A Novel Paradigm, Springer, November, 213-238, (2005).

AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06 39


A Rule Based Application: Diet Advisor

Nati Herrasti and Antonio López and Aitzol Gallastegi1

Abstract. Rule based applications represent another approach to application development, based on logical information, that is, knowledge of the relationships between entities. This approach represents a different view from the conventional development of applications, where application logic is dispersed throughout the code. The aim of this approach is to allow the coding of logical information as quickly, easily and reliably as possible, facilitating application rule maintenance and its separation from the code. To do so, specialised new tools are used which allow the development of this type of application.

1 INTRODUCTION

Knowledge, of whatever type, is essential if we are to develop an application from which certain results can be expected. The three types of knowledge are as follows. Factual knowledge refers to facts or data, and can be data about clients, orders, products, etc. Procedural knowledge indicates how to conduct a task; for example, how to process an order, find something on the Internet or calculate mortgage repayments. Logical knowledge, on the other hand, refers to knowledge of the relationships between entities, and it allows us to relate prices and market conditions, products and their components, symptoms and diagnoses, etc. [1].

2 RULE BASED APPLICATIONS

Structure: The structure of this type of application is relatively simple, comprising two central elements: a rule representation language (knowledge base) and a rule engine (inference engine).

Operation: On the one hand there is the rule representation language or knowledge base, which stores the rules of operation defined in the corresponding format. On the other hand there is the inference engine, tasked with processing the rules using one of the permitted techniques, which will be explained in the section on "Rule engines".

2.1 Rule representation language

Rule based processing tools provide a so-called "knowledge base", a centralised point for storing rules. A rule engine uses a knowledge base not only for storing rules but also for executing rules and processing data. A knowledge base provides the option of storing different versions of a rule and recording rule version changes, allowing for greater transparency and accessibility to the rules that interest us. We frequently come across tools that integrate the rule engine and the rule representation language, such as Prolog, a very useful language for AI professionals who constantly work with inference rules, or CLIPS, a tool created by NASA for developing expert systems which provides a complete construction environment based on rules and objects. These tools are widely used in the sphere of expert systems and have a long history. Even so, the current tendency, with the rise of XML, is for them to be separate, as occurs in the RuleML standard, which contains only the rule representation language [3].

RuleML: The Rule Markup Initiative developed the RuleML standard [4]. This initiative aims at standardising rules (forward-chaining and backward-chaining) using XML.

Figure 1. Hierarchy of RuleML

The structure used by RuleML for rules is divided into two categories: reaction rules (when a rule is met, an action is executed) and transformation rules (when a rule is met, a database value is modified). Transformation rules specialise into derivation rules (rules that deduce new events from known data). Likewise, derivation rules have two components: facts and queries (structures which return data to an application).

RuleML constitutes a change in rule definition. To date, all tools (Prolog, Lisp, CLIPS, etc.) used proprietary languages that could only be interpreted by themselves. RuleML, in contrast, allows its language to be used by rule engines on any platform (Java, .NET). In summary, this platform independence and its standard XML representation are the most significant advantages of RuleML.

2.2 Rule engine

The rule engine establishes how the rules are going to be applied on execution. All reasoning engines are basically pattern-recognition search engines which scan all rules, decide which to use each time, and repeat until a condition is met [6]. An inference engine is a rule engine which emulates the human capacity to reach conclusions through reasoning.

Forward-chaining: One of the reasoning methods used by inference engines. Forward-chaining commences with the available data (facts database), and uses the inference rules to obtain more data, until the objective has been attained.

Backward-chaining: Another of the reasoning methods used by inference engines. Backward-chaining starts with a list of objectives, and searches to see if there is a rule that corresponds to these objectives. If one exists, it then searches for the data needed to execute the rule, adding that data to the list of objectives if none are found.

We shall now consider two rule engines, SRE (Simple Rule Engine) and NxBRE, both for the .NET platform.

1 Ikerlan research and development centre, Spain, email: {nherrasti, alopez, aitzol.gallastegi}@ikerlan.es
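The forward-chaining strategy described above can be sketched in a few lines. The following Python sketch is purely illustrative (the rule format and names are our own); real engines such as SRE or NxBRE add pattern matching, priorities, and RuleML parsing on top of this basic loop.

```python
# Illustrative forward-chaining sketch: rules fire on known facts, adding
# their conclusions, and the process repeats until no new fact is derived.

def forward_chain(facts, rules):
    """facts: a set of strings; rules: a list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:                        # repeat until a fixpoint is reached
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)     # the rule "fires"
                changed = True
    return facts

RULES = [(["raining"], "streets_wet"),
         (["streets_wet"], "drive_slowly")]

print(forward_chain({"raining"}, RULES))  # both conclusions are derived
```

Backward-chaining would instead start from `drive_slowly` as an objective and work back through the rules to the facts that support it.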

2.2.1 SRE (Simple Rule Engine)

From a technical perspective, SRE is a secure and flexible tool focussed on allowing software developers to create applications that can be maintained with a minimum of effort [7]:

- It facilitates rule creation, editing, storage and organisation.
- It uses a rule language created with XML labels.
- Rules are declarative rather than procedural, to simplify the problem.
- It allows rules to be separated from the application, thus providing flexibility for future changes.
- It uses a forward-chaining strategy.
- It comprises three elements (rules, facts, and actions).

SRE is a valid but very limited tool for rule declaration, it being impossible to represent relatively complex rules.

2.2.2 NxBRE

NxBRE is the first open-source rule-based engine for the .NET platform, and it offers two different approaches [8]:

- Flow engine
- Inference engine

The "Flow Engine" is comprised of a series of XML-based wrappers in the NxBRE format, stored in a file (.xbre). The rules in this file are analysed just once, from start to finish, and no consideration is given to new events that may have occurred and that would lead to the application of other rule instances.

The "Inference Engine", which uses a forward-chaining strategy, supports concepts like facts, queries, implications, rule priority, mutual exclusion and preconditions. This engine processes the RuleML file until there are no more events for rules to process. The "Inference Engine" is considered to be more interesting than the "Flow Engine" for the following reasons:

- It supports priorities, mutual exclusion and preconditions.
- It uses a standard rule definition format (RuleML), allowing for much more flexible rule definition, in order to facilitate modification as events change.
- It possesses an elaborate memory model, with support for separate deduction spaces.
- It separates the roles of the experts who design the business rules and the programmers who write the applications.

NxBRE is the first complete tool on the .NET platform that uses the RuleML standard as a rule definition language, and it can be really useful in projects that have to work with:

- Application rules that cannot be expressed in a structured manner and which require the use of logical expressions.
- Applications that are constantly changing and would otherwise require rule recompilation whenever the rules need changing.

3 DEVELOPED APPLICATION

Having illustrated the theory behind rule based applications and analysed the different tools, an application has been developed in C# .NET using the RuleML language for rule representation and the NxBRE rule engine, for the aforementioned reasons.

The application, called Diet Advisor, performs a selection of recipes that can be consumed by the user, taking into account any limitations he/she may have, dietary requirements or personal tastes, amongst others. All of these aspects are defined via rules in RuleML. These rules define the foods that are not recommended for the user due to any illness he/she may suffer. For example, if a user is diabetic, they will not be able to eat chocolate because it is contraindicated for diabetes. In this manner, once all established rules have been processed, the application lists the recipes that are suitable for the user, recommending two based on the offers arriving from the supermarket and the calories consumed that day by the user. If the user agrees with the selection made by the application, a shopping list with the necessary ingredients is created, taking into account the food available at home, and the order is processed whenever the user finds it convenient.
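The contraindication logic just described can be illustrated with a small Python sketch. The actual Diet Advisor expresses these rules in RuleML and executes them on NxBRE under C#/.NET; the rule table, function names, and recipes below are entirely hypothetical.

```python
# Hypothetical sketch of Diet Advisor-style contraindication rules
# (the real system encodes such rules in RuleML and runs them on NxBRE).

RULES = [  # (user condition, contraindicated ingredient)
    ("diabetic", "chocolate"),
    ("hypertensive", "salt"),
]

def suitable_recipes(user_conditions, recipes):
    """recipes: dict mapping recipe name -> set of ingredients. A recipe
    is dropped if any rule links one of the user's conditions to one of
    the recipe's ingredients."""
    banned = {ing for cond, ing in RULES if cond in user_conditions}
    return [name for name, ings in recipes.items() if not ings & banned]

recipes = {"mousse": {"chocolate", "egg"},
           "salad": {"lettuce", "tomato"}}
print(suitable_recipes({"diabetic"}, recipes))  # the mousse is excluded
```

Filtering by contraindications is only the first stage; the real application then ranks the surviving recipes by supermarket offers and the user's calorie intake.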

4 CONCLUSIONS

Logical knowledge, unlike factual or procedural knowledge, is difficult to code with a traditional tool. However, the ability of an organisation to successfully code logical knowledge may lead to better service provision for its users. The question then becomes what the best way to code this form of knowledge is. It can be fed into tools based on data and procedures, but the coding is difficult, the knowledge becomes opaque, and maintenance is a nightmare. Tools based on rules and logic are much better for coding this type of knowledge, but the right tool must be selected and its usage learnt. It must be highlighted that there are two key aspects to constructing an application based on rules:

- Design of the rule representation language.
- Implementation of the rule engine.

The recommended strategy for developing rule based applications on the .NET platform is to use the RuleML and NxBRE tools, for the reasons mentioned previously. This application development strategy facilitates application construction and maintenance and reduces error rates. Its usage therefore constitutes a big advantage in environments where business logic is of great importance.

REFERENCES

[1] D. Merritt, 'Best Practices for Rule-Based Application Development', Microsoft Architect Journal, January 2004.
[2] A. Cawsey, 'Basic Architecture of an Expert System', www.macs.hw.ac.uk/~alison/ai3notes/section2_5_2.html
[3] D. Merritt, AI Expert Newsletter, Dr. Dobb's Portal, November 07, 2003.
[4] The Rule Markup Initiative, www.ruleml.org
[5] D. Merritt, 'Building Custom Rule Engines', PC AI Magazine, Volume 10, Number 2, Mar/Apr 1996.
[6] 'Rules-Based Processing – Managing and Adjudicating', CNSI-WP-RBP-01-00, October 28, 2004.
[7] Simple Rule Engine (SRE) Project, http://sourceforge.net/projects/sdsre/
[8] .NET Business Rules Engine (NxBRE), www.agilepartner.net/oss/nxbre/



Spatiotemporal Ambient Intelligence

Björn Gottfried1 and Hans W. Guesgen2

Abstract. Over recent years, researchers have introduced a number of formalisms to reason about space and time. Some of these fall into the category of relational approaches, i.e. they use relations among objects or time events for reasoning. In this paper, we summarise a few of these approaches and argue that they are particularly significant in the context of ambient intelligence.

1 INTRODUCTION AND MOTIVATION

Ambient intelligence can be discussed from many angles, the same as human intelligence can be explored under different aspects. One of these aspects is our capability to reason about space and time. Not only do we use such reasoning to find our way around in the world in time, but also to communicate with each other in a concrete or metaphorical way. We can, for example, designate certain space-time coordinates to meet each other (5 o'clock at the town hall), or use them to describe the condition that we are in (yesterday I was on top of the world).

Although there are various ways to reason about space and time, it seems that we often tend to do so in a relational rather than absolute way. Therefore, we shall argue that attention should be paid especially to relational methods of spatiotemporal reasoning for ambient intelligence. By using some examples of relational methods, we will demonstrate how particular categories of spatiotemporal knowledge can be covered: topological knowledge, directional knowledge, and distance knowledge. All of them are considered in the spatial as well as the temporal sense (although directional knowledge is often implicit in the temporal domain, due to our perception of time moving along the time axis).

2 TYPES OF KNOWLEDGE

The first type of spatiotemporal knowledge is topological knowledge. Although there is a wide spectrum of approaches to spatiotemporal reasoning, there is hardly any approach that has influenced research in this field more than Allen's temporal logic [1], which has not only been used with its original intention but also as a spatial logic [6]. The logic is based on a set of thirteen atomic temporal relations between time intervals. These relations are used to describe how an event/object relates to another event/object. Given a set of events/objects and a set of relations between them, a constraint satisfaction algorithm can be used to reason about these relations.

Similar to Allen's approach are the region connection calculus [8] and the 9-intersection calculus [2]. As with Allen's approach, they both focus on topological relations and do not, for example, consider the size of the objects involved.

1 Artificial Intelligence Group, Centre for Computing Technologies (TZI), Universität Bremen, Germany, email: [email protected]

2 Computer Science Department, University of Auckland, New Zealand, email: [email protected]

The second type of spatiotemporal knowledge is directional knowledge. As time moves naturally along a time axis (from the past over the present to the future), not many approaches deal with representing temporal directional knowledge explicitly. Nevertheless there are some that do so. For example, [9] introduces the directed interval algebra, which uses 26 base relations to describe the relationship between two directed intervals.

In spatial reasoning, directional knowledge is often expressed in the form of cardinal directions [4], in particular when dealing with spatial knowledge on a geographic scale. In smaller spaces, like rooms in houses, the layout of the rooms and the arrangements of the objects in them determine directional knowledge (the board between the window and the table).

In order to capture all possible two-dimensional relationships between disconnected objects in room space, a generalisation of Allen's logic has been introduced in [5]. This approach defines relations which allow arrangements between arbitrarily aligned objects to be described at the same granularity level as Allen's relations. Rather than using external frames of reference, approaches like this one make use of intrinsic reference systems, that is, the objects themselves define a spatial context relative to which other objects can be described.

The third type of spatiotemporal knowledge employed in commonsense reasoning is knowledge about distance. Often it is unnecessary to reason about distance in a precise, Euclidean way for the purpose of making abstract spatiotemporal decisions. A simple distance metric such as the one from [3] suffices for these purposes. Here, directions are combined with simple distance information, such as "far" and "close".

In [7], we avoid using a distance measure altogether and instead define the proximity of objects by using fuzzy sets. These fuzzy sets define sets of neighbourhoods, each neighbourhood containing the objects that have the same proximity to the reference object.
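The idea of fuzzy proximity neighbourhoods can be sketched as follows. This Python sketch is only an illustration of the general idea: the triangular membership functions and the distances are our own invented examples, not the definitions used in [7].

```python
# Illustrative sketch of fuzzy proximity neighbourhoods. The membership
# functions below are hypothetical, chosen only to demonstrate the idea.

def mu_close(d):
    """Degree (0..1) to which a distance d, in metres, counts as 'close'."""
    return max(0.0, 1.0 - d / 4.0)

def mu_far(d):
    return 1.0 - mu_close(d)

def neighbourhoods(distances):
    """Group objects by the fuzzy set ('close' or 'far') in which their
    distance to the reference object has the higher membership degree."""
    groups = {"close": [], "far": []}
    for name, d in distances.items():
        label = "close" if mu_close(d) >= mu_far(d) else "far"
        groups[label].append(name)
    return groups

# Hypothetical distances of living-room objects to a reference object:
distances = {"chair": 1.0, "sofa": 1.5, "shelf": 6.0}
print(neighbourhoods(distances))
```

The point is that objects are grouped by graded membership rather than by exact Euclidean coordinates, which is all that abstract spatiotemporal decisions usually need.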

3 RELATIONAL AMBIENT INTELLIGENCE

In the previous sections, we have introduced various forms of relational spatiotemporal knowledge. We argue that this type of knowledge is particularly relevant for ambient intelligence, especially when applied to smart homes. The reasons for this are the following:

1. Binary decisions frequently have to be made, requiring a concise spatiotemporal assessment of the state of affairs instead of precise measurements of space and time. Such concise assessments are made when directly relating entities to each other, like a person and a door: as soon as someone approaches the sliding door, open it.

2. Coarse preliminary decisions are useful in planning, since the world changes all the time. It makes no sense for an AI system to commit itself too early to precise spatiotemporal conditions: the cleaning robot has to push the chairs to the side before starting to clean the floor; however, it is not necessary to consider the precise pushing procedure in order to maintain this constraint; moreover, the chairs to be pushed aside will sometimes be at different places.

3. Coarse qualitative representations frequently suffice for solving problems which do not require precise measurements at all: if there is someone in the room, switch on the heating, regardless of where precisely that person is.

4. Finally, the use of a number of qualitative distinctions leads to more efficient reasoning techniques than using precise quantities and variables with continuous domains.

Let us consider some further examples. Allen's relations can be used to describe how an event relates to another event: the event of a sensor in a room being triggered occurs during the event of a person being in that room. Or the event of the door to the room being opened occurs before the event of the person entering the room. Allen's algorithm is then used to reason about these relations. If, for example, the door to the room is opened before the person is in the room, and the sensor is triggered during the person's presence in the room, then Allen's composition table tells us that the door is opened before the sensor is triggered.
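The door/sensor inference above can be checked with a small Python sketch. Rather than encoding the full 13×13 composition table, the sketch classifies concrete interval endpoints into the few Allen relations the example needs; the endpoint values are our own invented illustration.

```python
# Sketch of Allen-style reasoning on the door/sensor example, using
# concrete interval endpoints rather than the full composition table.

def allen(a, b):
    """Return the basic Allen relation between intervals a=(s, e) and
    b=(s, e). Only the relations needed for the example are spelled out."""
    (a_start, a_end), (b_start, b_end) = a, b
    if a_end < b_start:
        return "before"
    if b_start < a_start and a_end < b_end:
        return "during"
    if b_end < a_start:
        return "after"
    return "other"

# Hypothetical endpoints for the three events:
door_open = (0, 2)   # the door to the room is opened
in_room   = (3, 10)  # the person is in the room
sensor    = (4, 9)   # the sensor is triggered

assert allen(door_open, in_room) == "before"
assert allen(sensor, in_room) == "during"
# The composition 'before' o 'contains' yields 'before', which the
# concrete endpoints confirm:
print(allen(door_open, sensor))
```

A full constraint-based reasoner would derive the same conclusion symbolically, without ever inspecting endpoints, by looking up 'before' composed with 'contains' in Allen's table.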

Similarly, the region connection calculus can be used to define a spatial context, such as the one depicted in Figure 1: a living room with a hallway, a passage, a window, a shelf, a TV, a sofa, two chairs, and a door.

Figure 1. Regions determining the spatial context in a living room.

By means of the topological relations of the RCC calculus, alone or together with knowledge about time and further background knowledge, a number of inferences can be made:

1. If a sensor detects a person in the living room, and if, after a short period of time, another sensor detects a person in the hallway, whereas the passage sensor did not detect anything in the meantime, it can be concluded that either there are two persons in the flat (one in the living room, the other one in the hallway), or the sensor in the passage is out of order. The latter holds if the system knows there is only one person in the flat.

2. From the knowledge that a cleaning robot enters the light-cone, it can be inferred that the chairs, the table, and the sofa are potential obstacles, whereas the shelf and the television are outside of the scope of the cleaning robot.

Spatial arrangements between objects can be described by generalising Allen's relations. In [6] an external reference frame is used, relative to which positions are described. This allows, for example, an ordering of obstacles to be determined which a cleaning robot has to pass when being programmed to clean the hallway first, and then to enter the living room. Precise measurements are only required as soon as local decisions are to be made, for example, to test whether the cleaning robot fits through the gap between the window and a chair.

The approach of [6] is limited to adequately representing those objects which are aligned with the axes of the underlying reference frame. In order to capture all possible two-dimensional relationships between disconnected objects in room space, the approach introduced in [5] can be used. Since this approach uses an intrinsic reference system, the ordering between arbitrarily aligned furniture and other objects can be adequately considered. For instance, the spatial relation in front of the TV can then be correctly represented, and so can alignments between the oblique light-cone and other objects. Altogether this approach distinguishes 23 positional relations, eight orientation relations, and, combined, 125 relations.

To conclude, the examples given in this section demonstrate how relational systems represent specific spatiotemporal knowledge and how to reason about this kind of knowledge. Among other things, they show that these methods characterise spatiotemporal knowledge on an abstract level. This makes it possible to concisely express relations among objects, allowing these methods to be included in more complex reasoning systems and providing a useful basis for ambient intelligence.

4 CONCLUSION

This paper outlines several relational spatiotemporal knowledge representation formalisms and shows how they can be applied in the context of ambient intelligence, in particular smart homes. We are not claiming that the list of formalisms chosen is complete, rather that it is a representative sample of formalisms. What we are claiming is that such formalisms in general are useful for smart home applications and therefore should be given significant attention.

From the given examples we learn how relational formalisms work. Worth mentioning is that they describe spatial knowledge by relating objects to other objects; for this purpose, global reference systems are not needed. This is of particular importance in the context of ambient intelligence, since the intelligent environment has to act and react in response to the given spatial situation. However, the spatial situation changes permanently, and the intelligent system has to deal with a number of objects, some of which disappear while others suddenly enter the scene. It is the flexibility of relational approaches that makes it possible to manage such spatial changes.

REFERENCES

[1] J.F. Allen, 'Maintaining knowledge about temporal intervals', Communications of the ACM, 26, 832–843, (1983).
[2] M. Egenhofer and R. Franzosa, 'Point-set topological spatial relations', International Journal of Geographical Information Systems, 5(2), 161–174, (1991).
[3] A. Frank, 'Qualitative Spatial Reasoning about Distance and Directions in Geographic Space', Journal of Visual Languages and Computing, 3, 343–373, (1992).
[4] A.U. Frank, 'Qualitative spatial reasoning: Cardinal directions as an example', International Journal of Geographical Information Systems, 10(3), 269–290, (1996).
[5] B. Gottfried, 'Reasoning about intervals in two dimensions', in IEEE Int. Conference on Systems, Man and Cybernetics, eds., W. Thissen, P. Wieringa, M. Pantic, and M. Ludema, pp. 5324–5332, The Hague, The Netherlands, (2004). Omnipress.
[6] H.W. Guesgen, 'Spatial reasoning based on Allen's temporal logic', Technical Report TR-89-049, ICSI, Berkeley, California, (1989).
[7] H.W. Guesgen, 'Reasoning about distance based on fuzzy sets', Applied Intelligence (Special Issue on Spatial and Temporal Reasoning), 17(3), 265–270, (2002).
[8] D.A. Randell, Z. Cui, and A.G. Cohn, 'A spatial logic based on regions and connection', in Proc. KR-92, pp. 165–176, Cambridge, Massachusetts, (1992).
[9] J. Renz, 'A spatial odyssey of the interval algebra: 1. Directed intervals', in Proc. IJCAI-01, pp. 51–56, Seattle, Washington, (2001).



Semantic Tuple Space: Application in a Contextual Information Management System

Ignacio Nieto and Juan Botía and Antonio Gómez-Skarmeta1

1 Introduction

Nowadays, one of the most important issues in context-aware systems is the development of a mechanism for the proper management of the contextual information surrounding the users and their environment. In this paper, we present OCP (Open Context Platform), an integral context management system. OCP aims to be a middleware for obtaining, distributing, storing and handling the contextual information of a system. Some of the advantages of OCP are the generic extraction of contextual information from heterogeneous devices, the ontological representation of contextual information by means of OWL, independence from the information storage implementation by means of an extensible tuple space, and reasoning over the information thanks to the JENA inference engine.

2 OCP

OCP proposes an integrated, generic middleware covering the complete lifetime of the contextual information, from its extraction from the environment, devices and users, to its transformation, integration and usage for obtaining new contextual information. For the ontological representation, we use the OWL language [5], which allows us to extend to other ontologies with ease, also guaranteeing their modularity. OCP has its own generic ontology, designed to give a logical structure and a proper hierarchy to the contextual information to be handled by the system. The OCP ontology can be seen as composed of two main parts. The first part refers to how the context and contextual information hierarchy is organized. The second main part of the OCP ontology involves the structure of the contextual information. In order to create a contextual system fully integrated with the ontological model used for information representation, the contextual information has been designed to be a reflection of the conceptual structure of the OWL ontological representation language, and more precisely of its Lite version.

3 Related Technologies

One of the most relevant issues in designing a contextual information management system is the development of an infrastructure capable of the logical distribution and management of the contextual information. In this field, the first efforts centered on the coordination paradigm which, based on the separation of the computational aspects from the interaction of the components forming a system, has given rise to several models and languages that have been applied with success to the development of parallel and distributed applications. The first coordination model to be developed was Linda [11]. Linda introduces the concept of a tuple space. A tuple space is an associative memory space in which objects are referred to, as within a hashtable, by their content rather than by their address. Most commercial tuple spaces, such as JavaSpaces, TSpaces and GigaSpaces, are based on the Linda model, and so is our own tuple space.

1 University of Murcia, Spain, email: [email protected]. Part of this work has been financed by the Spanish local CARM government through the Séneca Foundation, the Research Project ENCUENTRO 00511/PI/04, and the Ministry of Education and Science through the Research Project TIN2005-08501.
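The content-based matching that defines a Linda tuple space can be sketched in a few lines of Python. This is an illustrative, single-threaded sketch, not OCP-Linda or any commercial tuple space: the class and method names are our own, and `None` is used as a wildcard in templates.

```python
# Minimal sketch of Linda-style tuple-space operations: out() writes a
# tuple, rd() reads by content (None acts as a wildcard), and in_() reads
# and removes. Real systems (JavaSpaces, TSpaces, GigaSpaces) add blocking
# semantics, distribution, and concurrency control on top of this.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, t):
        """Write a tuple into the space."""
        self._tuples.append(t)

    def _match(self, template, t):
        return len(template) == len(t) and all(
            field is None or field == value
            for field, value in zip(template, t))

    def rd(self, template):
        """Return the first tuple matching the template, without removal."""
        for t in self._tuples:
            if self._match(template, t):
                return t
        return None

    def in_(self, template):   # 'in' is a reserved word in Python
        """Return and remove the first tuple matching the template."""
        t = self.rd(template)
        if t is not None:
            self._tuples.remove(t)
        return t

space = TupleSpace()
space.out(("temperature", "living_room", 21.5))
print(space.rd(("temperature", "living_room", None)))
```

The key property illustrated is associative addressing: the reader names the content it wants (any temperature reading for the living room), never a memory address or a producer identity.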

Figure 1. Architecture of the Platform.

For reasoning about contextual information, OCP uses JENA. JENA is a framework for the construction of semantic web applications. It has been designed to work with RDF, RDFS, and OWL, and contains a rule-based inference engine for extracting new semantic information from previously existing information. It includes an RDF API, an OWL API, and an RDF query language called RDQL. JENA tries to be as platform independent as possible, so its API is written in Java. The use of JENA gives us a fairly complete reasoning system for the needs of our middleware.

4 A Complete View of the System
The architecture of OCP's contextual platform is shown in Figure 1. User applications make use of a high-level contextual interface by means of OCP. At this level, contexts and contextual information are communicated. These requests are translated into concrete Linda commands and sent to the OCP-Linda logical tuple space. The OCP-Linda system has been designed in such a way that it isolates the logical representation of the tuple space from the underlying physical implementation. This implies that the information could be physically held in a distributed storage system or on a central server. Our goal is to build a hybrid architecture capable of switching between distributed and centralized operation modes when needed. To achieve this, as Figure 1 shows, we have on one hand a centralized JENA server, and on the other hand a distributed model based on an idea taken from tuple spaces, which consists of distributing the information according to its content. This hybrid system is based on a flexible mechanism of operation. The JENA server acts as a trusted repository of knowledge, storing information from the computational entities of the system. Its main functions include offering a fully fledged space of revised information from the contextual sources of the system; in this space, new information is fused with the existing information by means of a validation, consistency, and combination process.

AI Techniques for AmI J.C. Augusto and D. Shapiro (Eds.)

AITAmI'06 44

Figure 2. Interaction for reading a context.

In addition to the centralized server, the system has an underlying distributed model, composed of every computational entity of the system. Based on the philosophy of tuple spaces, the information is distributed according to its content, so the most logical option is to store the contextual information close to the nodes that are, or could be, interested in it. To reach this goal, we use a mechanism for measuring which nodes are interested in which information, and distribute the information accordingly. This mechanism, which uses content-centric hash tables, is analyzed in depth in [4].

Figure 2 shows how this model works for reading a certain context. The application passes its request to the OCP context module, which queries its ontology database to give a semantic context to the requested information and to relate it to the ontology of the domain the system is handling. With this information, the contextual module asks the Linda interface to read a certain tuple from the space. The hybrid OCP-Linda system then evaluates where this information resides. The system isolates the logical interface through which the information is distributed from the concrete storage, while the hybrid mechanism offers several available locations, such as the JENA server, a simple hash table, or the decentralized DHT (Distributed Hash Table) algorithm.
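The dispatch step of this read path could be sketched as follows. All names here are hypothetical (the paper does not give OCP's interfaces); the point is only the isolation of the caller from whichever backend, centralized or distributed, happens to hold the information:

```python
# Hypothetical sketch of the hybrid read path: a request is resolved
# to a content key, and the OCP-Linda layer probes the available
# storage locations (in the real system: the JENA server, a simple
# hash table, or a DHT node) until one of them holds the information.
class HybridSpace:
    def __init__(self):
        # Each backend maps a (subject, property) key to a value.
        self.backends = {"jena": {}, "local": {}, "dht": {}}

    def write(self, backend, key, value):
        self.backends[backend][key] = value

    def read(self, key):
        """Return (backend_name, value) for the first backend holding key."""
        for name, store in self.backends.items():
            if key in store:
                return name, store[key]
        return None, None

hs = HybridSpace()
hs.write("dht", ("room-21", "temperature"), 22.5)
print(hs.read(("room-21", "temperature")))  # → ('dht', 22.5)
```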

In order to reach a fully decentralized system, we are working on the design and development of a CHT (Contextual Hash Table). CHT approaches the problem of information distribution with a hash-table strategy in which information is distributed according to its contextual significance rather than its address. The goal of such a system is to act as a decentralized space for contextual information, distributing the context near the entities that are supposed to need, or be interested in, it. Let us not forget that we are working in a heterogeneous scenario, with multiple users accessing the system by means of diverse networking devices. In this scenario, each piece of contextual information will be assigned a target node where it is to be stored, taking into account the semantic content of the information and the nodes that could be interested in it. We call this node the home node of the contextual information. With these considerations in mind, and given that we are using OWL (which uses URIs for name addressing) as the language for representing contextual information, the CHT algorithm will be designed to minimize the overall communication cost. This can be done efficiently by evaluating the semantics of the contextual information itself and determining the home-node identifier for that particular piece of information. The hash function takes a piece of contextual information and the identifier of the node which obtained it as arguments, and produces an identifier as its output. Each device of the system possesses a unique URI that identifies it unambiguously. The hash function must ensure that contextual information is stored in the proximity of the nodes which need it. In CHT, nodes with similar contextual interests or similar contextual information are neighbour nodes, that is to say, they receive closer identifiers, thus concentrating the information near the nodes that are interested in it. The core of CHT lies in the hash function used to identify the home node of each piece of contextual data. Invoking the hash function with a concrete piece of contextual information gives us a hash value that depends, as said before, on that information and a source URI. We use this value to identify the home node of that information by means of a simple check of integer proximity. Using an ordered distributed identifier table, this can be done in a small amount of time.
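The integer-proximity lookup of a home node can be sketched as follows. This is a simplified illustration, not the CHT hash function itself (which evaluates the semantics of the information, as described above and in [4]); it only shows the mechanics of hashing (information, source URI) pairs into the same identifier space as node URIs and picking the closest node:

```python
import hashlib

def node_id(uri):
    """Derive a stable integer identifier from a node's URI."""
    return int(hashlib.sha1(uri.encode()).hexdigest(), 16) % 2**16

def cht_hash(context_info, source_uri):
    """Hash a piece of contextual information plus the URI of the node
    that produced it into an identifier in the same space as node ids."""
    key = (context_info + "|" + source_uri).encode()
    return int(hashlib.sha1(key).hexdigest(), 16) % 2**16

def home_node(context_info, source_uri, node_uris):
    """Pick the node whose identifier is closest (integer proximity)
    to the hash of the contextual information: its home node."""
    h = cht_hash(context_info, source_uri)
    return min(node_uris, key=lambda u: abs(node_id(u) - h))
```

With an ordered table of node identifiers, the `min` scan would be replaced by a binary search, which is what makes the "small amount of time" claim plausible.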

5 Conclusions and Future Work
In this article we present OCP, a contextual management middleware whose goals are to obtain, manage, reason about, and deliver contextual information in context-aware systems. OCP intends to offer an integral solution for such systems, handling everything from the recovery of contextual information from numerous heterogeneous devices to its high-level management, including reasoning about and logical distribution of the information. Our goal is the creation of a homogeneous system offering users a common API, isolating them from technical and physical issues of information distribution, architecture, and communication methods. OCP also handles contextual information from an ontological point of view, giving it semantic significance by following the OWL language in terms of contextual-information representation, internal structure, and properties. OCP acts as a dual-operation system, working with a centralized reasoning system based on JENA, but distributing the information logically by means of a tuple space based on the Linda model and a distribution mechanism based on a content-centric hash table.

REFERENCES
[4] I. Nieto-Carvajal, J. A. Botía, P. M. Ruiz, and A. F. Gómez-Skarmeta, 'Distributed Contextual Information Storage Using Content-Centric Hash Tables', EUC 2004, Aizu-Wakamatsu City, Japan, August 25-27, 2004, Proceedings, 957–966.

[5] X. Wang, 'Ontology-Based Context Modeling and Reasoning using OWL', Context Modeling and Reasoning Workshop at PerCom 2004, (2004).

[11] D. Gelernter, 'Generative Communication in Linda', ACM Transactions on Programming Languages and Systems, 7(1), 80–112, (1985).



A mixture of experts for learning lighting control
Stephan K. Nufer(1,2,3), Mathias Buehlmann(1,2,3), Tobi Delbruck(1) and Josef M. Joller(2)

Abstract. Conventional building automation is preprogrammed with behavior to suit average occupants. Machine learning offers the possibility of learning behavior that is better matched to individual occupant desires and that further reduces energy consumption. However, traditional machine learning is difficult to apply due to 1) the extremely sparse training input, typically 2-3 effector uses per day, and 2) the extreme necessity to avoid occupant rejection caused by annoyingly incorrect behavior. Our building¹ is equipped with a LonWorks building automation network that provides sensor (light, presence, temperature, etc.) and effector (light and blind switches) information. Previous work using our automation installation, [4] and [10], has explored machine learning of user preferences with the aim of increasing comfort while reducing energy consumption. All the prior systems were rejected by normal occupants. In the work described here we demonstrate, for the first time, a system for lighting control that has been accepted by normal building occupants. It has been running continuously for the past 70 days in three normal offices occupied by 9 people. It is based on a new multiagent OSGi-based infrastructure and uses a Weighted-Majority mixture of experts to learn user preferences for lighting, starting from a tabula rasa state.

1 INTRODUCTION

In [4] and [10], a hierarchical fuzzy system approach was introduced in which the inputs to the learning process are real-valued variables acquired from sensors. The output of the proposed learning algorithm is a model consisting of a number of fuzzy rules, which are continuously generalized into a fuzzy rule set using the inductive fuzzy learning algorithm of Castro et al. [2]. These fuzzy rules are then used by a fuzzy logic controller (FLC) to make decisions. Feedback acquired from the environment is continuously used by the learning process to adapt the fuzzy logic rules. Questionnaires answered by previous users showed that the system was rejected by the occupants and that attempts had been made to circumvent it. Other approaches include, e.g., [1], a system in which the different sensors of a room are connected via a sensor network (LonWorks) to a single embedded agent physically located in the room. This agent uses the acquired information to learn fuzzy logic rules with a genetic-algorithm learning paradigm. However, their learning procedure requires explicit feedback from the user. Our work is different because we use a learning algorithm which does not require any explicit feedback from the users at all, and which is also not based on fuzzy logic.

1 Institute of Neuroinformatics, ETH Zurich/University of Zurich, www: http://abi.ini.unizh.ch, email: [email protected]

2 University of Applied Sciences, Rapperswil
3 These authors contributed equally to this work.

2 OUR APPROACH
Our measurements [9] have shown that some rooms (labs and regular offices) exhibit simple regularities which can be learned by providing only a few significant input variables (e.g., interior daylight or daytime). However, not all spaces have such simple structure and ambient needs, and thus need to be learned and controlled using more sophisticated learning algorithms and more sensory information. Moreover, it is quite difficult to determine and evaluate the goodness of a learning approach without the ability to compare it against others. Additionally, not every approach turned out to succeed all of the time [9]. For instance, an approach might only be accurate under specific sensor conditions, determined by the dynamic sources of lighting, temperature, humidity, etc. Furthermore, an environment with frequently changing occupants poses an additional difficulty that must be overcome: the ability to adapt to new occupant preferences, which ultimately demands that an algorithm be capable of discarding previously learned knowledge. Our approach to setting up a suitable learning infrastructure introduces a novel Intelligent Building Framework [9] that composes a set of independent light controller agents (LCs). Each LC deals with multiple input dimensions and controls an individual light on a local basis rather than a global one (i.e., per room or even per building). The core of each LC is a Weighted-Majority based algorithm that we presented in [3], [9], which incorporates a mixture of experts (base learners). The major benefit of applying this approach is that each algorithm within each LC contributes its individual decision, which is weighted by a function of user interactions and the deviation from the target output.

However, a common problem with online learning is deciding when to discard old data and when to incorporate new data into the learning (Short Term Memory vs. Long Term Memory [10], [4]).
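The Weighted-Majority combination of base learners can be sketched as follows. This is the generic textbook form of the algorithm, not the exact update rule of [3], [9], which also weights by user interactions and deviation from the target output:

```python
class WeightedMajority:
    """Combine expert (base-learner) votes; penalize the weight of
    every expert that disagrees with the observed occupant action."""
    def __init__(self, n_experts, beta=0.5):
        self.weights = [1.0] * n_experts
        self.beta = beta  # penalty factor applied to wrong experts

    def predict(self, votes):
        # votes: list of 0/1 decisions (e.g. light off/on), one per expert
        on = sum(w for w, v in zip(self.weights, votes) if v == 1)
        off = sum(w for w, v in zip(self.weights, votes) if v == 0)
        return 1 if on > off else 0

    def update(self, votes, correct):
        # correct: the action the occupant actually took (implicit feedback)
        for i, v in enumerate(votes):
            if v != correct:
                self.weights[i] *= self.beta

wm = WeightedMajority(3)
votes = [1, 0, 1]
print(wm.predict(votes))  # → 1 (weighted majority of three experts)
wm.update(votes, correct=0)  # experts 0 and 2 were wrong; their weights halve
```

Because every weight update is driven by the occupant's own switch use, no explicit feedback is needed, and an expert that keeps mispredicting a new occupant's preferences is rapidly weighted out, which is how previously learned knowledge gets discarded.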
Also, finding a suitable training-set size for base learners is cumbersome but ultimately essential. Collecting a lot of data is not very practical, because it often results in an imbalanced-class problem, and because building occupants may change frequently.

To address the issues stated above, a preceding data pre-processing step, clustering, was employed by two (out of five) base learners. The data was clustered using g-means [7] and growing neural gases (GNG) [5], [6], and was then combined with an ANN that made a prediction based on a local or global training set. Local refers to the procedure where the new data is first classified to the most probable cluster; only the data of this cluster is then used as the sample training set. Global, by contrast, does not classify the new data to a cluster but instead takes all of the clusters as the training set (see next paragraph).

The class-imbalance problem and the problem of discovering a suitable training-set size are further addressed by:

• Limiting the size of clusters (ring-buffering) to prevent the training set from constantly growing and to avoid it becoming largely imbalanced.
• Re-clustering the training set before any conventional supervised learning technique is employed. Each re-clustering step should be launched with an initial set of clusters (depending on the number of clusters of the previous step). Thus, we allow candidate clusters to be merged with other similar clusters, preventing a form of over-clustering. Also, clusters which contain only a few data members (outliers) should be deleted completely.
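The ring-buffering in the first point can be sketched as follows (a minimal illustration using a bounded deque per cluster; the g-means/GNG clustering itself is omitted):

```python
from collections import deque

class RingBufferedCluster:
    """A cluster whose member list is a fixed-size ring buffer: adding
    a sample beyond the capacity evicts the oldest one, so the
    training set can neither grow without bound nor drift toward one
    over-represented class."""
    def __init__(self, capacity):
        self.members = deque(maxlen=capacity)

    def add(self, sample):
        self.members.append(sample)

    def training_set(self):
        return list(self.members)

c = RingBufferedCluster(capacity=3)
for sample in [1, 2, 3, 4]:
    c.add(sample)
print(c.training_set())  # → [2, 3, 4] (oldest sample evicted)
```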

The remaining three base learners shown in Fig. 2 have been realized with conventional techniques and were discussed in [3] and [9]. Due to the limited space here, we cannot give any further details about the applied base learners.

3 PRELIMINARY RESULTS
The following real experiment was used to evaluate the success of the learning. Three rooms, each of which comprises 2 LCs, were tested for a period of 70 days. Two rooms are illustrated in Fig. 1. The gap between weeks four and five in room 55.G.84 is due to Christmas and New Year's Eve, when nobody was working.

Figure 1. The left plots show the received occupant interactions, and the right plots the conducted LC decisions, versus weeks.

The collected data indicate that user interactions which would normally be taken by occupants had been taken over by the system. Thus, the system was assisting its inhabitants in their ordinary daily tasks. In the first week, four user interactions were received; this decreased to only 1-2, or mostly even zero. The learning was conducted every 10 minutes under the condition that someone was present. Additionally, in order not to end up with opposing interactions, a delay of one hour was enforced upon receiving any system or user interaction. Fig. 2 illustrates how the contributions of the base learners varied during that 70-day period. Each associated weight reflects how strongly its opinion was considered in the overall decision. When comparing Fig. 2 with Fig. 1, we can observe that around day 42 most weight alterations were conducted due to newly received user instructions. Five learners were incorporated and their weights recorded; Fig. 2 shows that not all base learners performed equally well in our two test rooms, but at the same time, we can observe that the Weighted-Majority mixture of experts succeeded in controlling the two rooms even though we had been running the same base learners in both rooms.

Figure 2. The plots illustrate the varying contribution (weight [0-1]) of the base learners.

4 SUMMARY
We have proposed and studied a new approach that succeeded in controlling the lights in three different rooms for a period of 70 days. Each light controller dynamically considered a mixture of expert decisions. The evaluation of the collected data has shown that the collective decisions adapted to individual space functions without a priori knowledge. Also, a stable OSGi-based infrastructure [8] was developed that incorporates the presented IB framework, providing a common generic architecture that facilitates further development. In the future, such a generic setup will be beneficial even when adapted to other controllers, such as blinds. Although these preliminary results are encouraging, they need to be studied over a longer period of time with a greater variety of rooms, occupants, and weather conditions. Also, controlled rooms need to be quantitatively compared with uncontrolled rooms.

REFERENCES
[1] Hagras et al., 'A hierarchical fuzzy-genetic multi-agent architecture for intelligent buildings online learning, adaptation and control', Information Sciences - Informatics and Computer Science, 150, 33–57, (2003).

[2] J. L. Castro et al., 'Learning maximal structure rules in fuzzy logic for knowledge acquisition in expert systems', Fuzzy Sets and Systems, 101, 331–342, (1999).

[3] S. Nufer et al., 'A mixture of experts for learning lighting control', Supplementary Material for AITAmI'06, http://abi.ini.unizh.ch, Riva del Garda, Italy, (2006).

[4] U. Rutishauser et al., 'Control and learning of ambience by an intelligent building', IEEE Transactions on Systems, Man, and Cybernetics, Part A, 35(1), 121–132, (2005).

[5] B. Fritzke, 'A growing neural gas network learns topologies', in Advances in NIPS 7, MIT Press.

[6] B. Fritzke, 'Supervised learning with growing cell structures', in Advances in NIPS 6, Morgan Kaufmann Publishers.

[7] G. Hamerly and C. Elkan, 'Learning the k in k-means', in Advances in NIPS, volume 17, (2003).

[8] S. Nufer and M. Buehlmann, 'A new ABI system built on the open services gateway initiative', Technical report, Institute of Neuroinformatics, ETH and University of Zurich, (2005).

[9] S. Nufer and M. Buehlmann, 'A novel approach for learning dynamic space behaviors in a non-stationary environment', Technical report, Institute of Neuroinformatics, ETH and University of Zurich, (2005).

[10] J. Trindler and R. Zwiker, 'Parallel fuzzy controlling and learning architecture based on a temporary and long-term memory', Technical report, Institute of Neuroinformatics, ETH and University of Zurich, (2003).
