European Research Consortium for Informatics and Mathematics www.ercim.org ERCIM NEWS Number 72, January 2008


Editorial Information

ERCIM News is the magazine of ERCIM. Published quarterly, it reports on joint actions of the ERCIM partners, and aims to reflect the contribution made by ERCIM to the European Community in Information Technology and Applied Mathematics. Through short articles and news items, it provides a forum for the exchange of information between the institutes and also with the wider scientific community. This issue has a circulation of 10,500 copies. The printed version of ERCIM News has a production cost of €8 per copy. Subscription is currently available free of charge.

ERCIM News is published by ERCIM EEIG
BP 93, F-06902 Sophia Antipolis Cedex, France
Tel: +33 4 9238 5010, E-mail: [email protected]
Director: Jérôme Chailloux
ISSN 0926-4981

Editorial Board:
Central editor: Peter Kunz, ERCIM office ([email protected])

Local Editors:
Austria: Erwin Schoitsch ([email protected])
Belgium: Benoît Michel ([email protected])
Czech Republic: Michal Haindl ([email protected])
Finland: Pia-Maria Linden-Linna ([email protected])
France: Bernard Hidoine ([email protected])
Germany: Michael Krapp ([email protected])
Greece: Eleni Orphanoudakis ([email protected])
Hungary: Erzsébet Csuhaj-Varjú ([email protected])
Ireland: Ray Walsh ([email protected])
Italy: Carol Peters ([email protected])
Luxembourg: Patrik Hitzelberger ([email protected])
Norway: Truls Gjestland ([email protected])
Poland: Hung Son Nguyen ([email protected])
Spain: Salvador Lucas ([email protected])
Sweden: Kersti Hedman ([email protected])
Switzerland: Harry Rudin ([email protected])
The Netherlands: Annette Kik ([email protected])
United Kingdom: Martin Prime ([email protected])
W3C: Marie-Claire Forgue ([email protected])

Contributions
Contributions must be submitted to the local editor of your country.

Copyright Notice
All authors, as identified in each article, retain copyright of their work.

Advertising
For current advertising rates and conditions, see http://ercim-news.ercim.org/ or contact [email protected]

ERCIM News online edition The online edition is published at http://ercim-news.ercim.org/

Subscription
Subscribe to ERCIM News by contacting the ERCIM office (see address above) or by filling out the form at the ERCIM website at http://ercim-news.ercim.org/

Next issue: April 2008
Special theme: Maths for Everyday Life



Keynote

The Web of Things

Progress in communications technology has been characterized by a movement from lower to higher levels of abstraction. When, first, computers were connected by telephone wires, you would have to run a special program to make one connect to another. Then you could make the second connect to a third, but you had to know how to use the second one too. Mail and news was passed around by computers calling each other late at night, and, for a while, email addresses contained a list of computers to pass the message through, such as: timbl@mcvax!cernvax!cernvms.

It's not the wires – it's the computers

The ability to use this communication power between computers wasn't truly useful until the Internet. The Internet allowed one to forget about the individual connections. It was thought of as the 'Internet Cloud'. Messages went in from one computer, and appeared in another, without one having to worry about how they were broken into packets, and how the packets were routed from computer to computer.

It's not the computers – it's the documents

This power of communication between computers wasn't easily usable by the majority of people until the Web came along. The realization of the Web was: "It's not the computers which are interesting, it's the documents!". The WWW protocols (URI, HTTP, HTML) defined how documents could be sent between Web servers and Web browsers. With these innovations, the user entered a Web of interconnected documents. She didn't have to worry about how the protocols work underneath, with two exceptions.

When things went wrong, she had to be able to figure out whether it was a problem with her connection to the Internet, with the URI in the link she was following, or an error on the server end. This involved looking under the hood, as it were. But that was, and still is, the exception.

There is another reason to be aware of what is happening. The information one browses comes from a particular server, whose name has been registered to a particular person or organization. The trust you put in that information relates to who that organization is. The serving organization is responsible for keeping the URIs you bookmark today alive tomorrow. So the Web is just a web of documents, except one has to be aware of the social aspects implied by the underlying level.

It's not the documents – it's the things

The power of the Web hadn't reached its full potential until the semantic Web came along. The semantic Web's realization is, "It isn't the documents which are actually interesting, it is the things they are about!". A person who is interested in a Web page on something is usually primarily interested in the thing rather than the document. There are exceptions, of course – documents are certainly interesting in their own right. However, when it comes to business and science, the customers, the products, or the proteins and the genes are the things of interest. A good semantic Web browser, then, shows a user information about the thing, which may have been merged from many sources. Primarily, the user is aware of the abstract Web of connections between the things – eg: this person is a customer who made this order which includes this item which is manufactured by this facility ... and so on.

Tim Berners-Lee, director of the World Wide Web Consortium and inventor of the Web.

There are again the same two exceptions. When things go wrong, the user must be able to look under the hood to see whether the document was fetched OK but had missing data, or the document was not fetched OK, in which case what was the underlying Web problem. And again, when the user is looking at a bit of data in the data view, perhaps a point on a map or a cell in a table, then she must be able to see easily which document that information came from.

One thing, or person, may have many URIs on the semantic Web. Often such a URI has within it the URI of the document which has some information about the thing: a gene as defined in the Gene Ontology; a protein as defined in this taxonomy; a citizen as defined by the Immigration and Naturalization Service glossary, and so on. Similarly, the URI of the document has within it the DNS name of the computer. So the social structure can be seen within the URI.
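To make this layering concrete, here is a small, purely illustrative Python sketch (the URIs are invented for the example): it splits a hypothetical 'thing' URI into the URI of the document that describes it, and then extracts the DNS name that anchors the document socially.

    from urllib.parse import urldefrag, urlparse

    thing_uri = "http://example.org/genes/BRCA1#gene"   # hypothetical URI for a thing
    document_uri, fragment = urldefrag(thing_uri)       # URI of the document about the thing
    dns_name = urlparse(document_uri).netloc            # the socially registered DNS name

    print(document_uri)   # http://example.org/genes/BRCA1
    print(dns_name)       # example.org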

The World of Things ... on the Web

The Web of things is built on the Web of documents, which is built on the Web of computers controlled by DNS owners, which itself is built on a set of interconnected cables. This is an architecture which provides a social backing to the names for things. It allows people to find out the social aspects of the things they are dealing with, such as provenance, trust, persistence, licensing and appropriate use as well as the raw data. It allows people to figure out what has gone wrong when things don't work, by making the responsibility clear.

The last level of abstraction is the Web of real things, built on top of the Web of documents, which is in turn built on the network of computers.

Tim Berners-Lee

Photo: Donna Coveney.


Contents

2 Editorial Information

KEYNOTE

3 The Web of Things

by Tim Berners-Lee, Director of the World Wide Web Consortium and inventor of the Web

JOINT ERCIM ACTIONS

6 Denmark joins ERCIM

7 The Future of the Web: An ERCIM View

by Keith Jeffery

7 ERCIM Working Group 'Dependable Embedded Systems' - Forthcoming Activities and Workshops
by Bruno Le Dantec

8 Second DELOS Conference on Digital Libraries

9 'The infinity Initiative' - ICT Research meets Standardization

10 IV Grids@Work Conference in China
by Bruno Le Dantec

11 EVOL 2007 - Third International ERCIM Symposium on Software Evolution
by Tom Mens and Kim Mens

12 12th International ERCIM Workshop on Formal Methods for Industrial Critical Systems
by Stefan Leue and Pedro Merino

13 ERCIM Working Group Workshops on Dependable Embedded Systems
by Erwin Schoitsch

13 ERCIM STM Working Group Award for the best PhD Thesis on 'Security and Trust Management'

THE EUROPEAN SCENE

14 The European Commission is launching 24 new Research Projects on ICT Security for €90 million under the FP7-ICT Theme
by Jacques Bus

15 European Digital Library Foundation welcomed by the Commissioner

SPECIAL THEME

Introduction to the Special Theme
14 The Future Web
by Lynda Hardman and Steven Pemberton

Semantic Web
18 Which Future Web?
An interview with Frank van Harmelen

19 KAON2 – Scalable Reasoning over Ontologies with Large Data Sets
by Boris Motik

20 The WIKINGER Project – Knowledge-Capturing Tools for Domain Experts
by Lars Bröcker

22 Tagging and the Semantic Web in Cultural Heritage
by Kees van der Sluijs and Geert-Jan Houben

23 Infrastructure for Semantic Applications - NeOn Toolkit Goes Open Source
by Peter Haase, Enrico Motta and Rudi Studer

24 Bridging the Clickable and Semantic Webs with RDFa
by Ben Adida

Media Web
25 A Framework for Video Interaction with Web Browsers
by Pablo Cesar, Dick Bulterman and Jack Jansen

27 Creating, Organising and Publishing Media on the Web
by Raphaël Troncy

Web 2.0/Online Communities
28 Empowering Online Communities to Manage Change - How to Build Viable Organisations Online
by David Lewis and Kevin Feeney

30 Owela: Open Web Laboratory for Innovation and Design
by Pirjo Näkki

31 The Principle of Identity Cultivation on the Web
by Pär J. Ågerfalk and Jonas Sjöström

32 Understanding the Hidden Web
by Pierre Senellart, Serge Abiteboul and Rémi Gilleron


Web Data Structures
33 Static Analysis of XML Programs
by Pierre Genevès and Nabil Layaïda

34 Seaside – Advanced Composition and Control Flow for Dynamic Web Applications
by Alexandre Bergel, Stéphane Ducasse and Lukas Renggli

Using the Web
35 EAGER: A Novel Development Toolkit for Universally Accessible Web-Based User Interfaces
by Constantina Doulgeraki, Alexandros Mourouzis and Constantine Stephanidis

37 Migratory Web User Interfaces
by Fabio Paternò, Carmen Santoro and Antonio Scorcia

38 Paving the Way of Designers - Towards the Development of Multimodal Web User Interfaces
by Adrian Stanciulescu, Jean Vanderdonckt and Benoit Macq

40 Distributed Personalization: Bridging Digital Islands in Museum and Interactive TV
by Lora Aroyo

Web Services
41 Discovery and Selection of Services on the Semantic Web
by Dimitrios Skoutas, Alkis Simitsis and Timos Sellis

43 QoS-Based Web Service Description and Discovery
by Kyriakos Kritikos and Dimitris Plexousakis

44 Automating the Creation of Compound Web Applications
by Walter Binder, Ion Constantinescu, Boi Faltings and Radu Jurca

R&D AND TECHNOLOGY TRANSFER

46 Epigenetic Modelling
by Dimitri Perrin, Heather J. Ruskin, Martin Crane and Ray Walshe

47 Contract-Oriented Software Development for Internet Services
by Pablo Giambiagi, Olaf Owe, Anders P. Ravn and Gerardo Schneider

48 gCube: A Service-Oriented Application Framework on the Grid
by Leonardo Candela, Donatella Castelli and Pasquale Pagano

49 The Decision-Deck Project – Developing a Multiple Criteria Decision Analysis Software Platform
by Raymond Bisdorff and Patrick Meyer

51 Lightspeed Communications in Supercomputers
by Cyriel Minkenberg, Ronald Luijten and Francois Abel

52 Mobile-Based Wireless Sensor-Actuator Distributed Platform in Pervasive Gaming
by Pär Hansson and Olov Ståhl

54 An Informatics Research Contribution to the Domotic Take-Off
by Vittorio Miori, Dario Russo and Massimo Aliberti

55 Natural Interaction - A Remedy for the Technology Paradox
by Lea Landucci, Stefano Baraldi and Nicola Torpei

EVENTS

57 ISWC07 - IEEE International Symposium on Wireless Communication Systems
by Yuming Jiang

58 INRIA Celebrated 40 years of Activity

59 ECSA 2007 - First European Conference on Software Architecture
by Carlos E. Cuesta and Paloma Cáceres

60 Announcements

62 In Brief

Next issue: April 2008
Next special theme: Maths for Everyday Life


Joint ERCIM Actions


Denmark joins ERCIM

DANAIM (Danish Research Association for Informatics and Mathematics), a research consortium established by seven major Danish universities and university collaborations, has become a member of ERCIM. Denmark is the nineteenth country to join ERCIM.

The aim of ERCIM is to foster collaborative work within the European research community in information and communication technology (ICT) and applied mathematics, and to increase cooperation with European industry. With the DANAIM membership, ERCIM continues to expand its geographical reach and increase its impact.

DANAIM Represents almost all Danish Universities

DANAIM is a consortium of seven universities: University of Southern Denmark, University of Aarhus, Roskilde University, Technical University of Denmark, IT Vest – networking Universities, University of Copenhagen and Aalborg University. We hope that the remaining two Danish universities become members in a few years. In Denmark there is a long tradition of international collaboration with other universities and collaboration with industry. DANAIM will strengthen the cooperation between Danish and other European researchers and industry.

DANAIM represents 440 researchers within the field of computer science and related fields and an annual turnover of more than €30 million. More than 75% of the students enrolled for computer science or related areas are enrolled at the institutions represented by DANAIM.

The DANAIM consortium: a map of Denmark showing the locations of the member institutions – Aalborg University, University of Aarhus, University of Southern Denmark, Roskilde University, Technical University of Denmark, University of Copenhagen, and IT University West (an educational network between the four university institutions in the Western part of Denmark: Aarhus School of Business, University of Southern Denmark, Aalborg University and University of Aarhus).

Reasons for Joining ERCIM

DANAIM chair Christian S. Jensen finds many good reasons for joining ERCIM: "Research is fundamentally international. Membership in ERCIM represents an opportunity for the partners to strengthen their international ties with research organisations across Europe. The DANAIM partners view ERCIM's many working groups as attractive fora for scientific exchange and networking. Collaborative research projects often grow out of networks. By joining ERCIM, the partners hope to expand and invigorate their European networks, thereby increasing their participation in European projects."

On the more specific level, one of the major reasons for joining ERCIM was the fellowship program. As Christian S. Jensen notes, "The partners are particularly excited about the prospects for becoming part of ERCIM's fellowship program, as this highly successful program attracts very large numbers of applicants on an annual basis."

The research spirit in Denmark is collaborative. Consequently, it is in the interest of DANAIM to influence the strategic directions for ICT research at a European scale – an objective the DANAIM consortium hopes to be able to contribute to via ERCIM by supporting ERCIM's objective of influencing the research directions.

The board of directors for DANAIM consists of Kim Skak Larsen, Department of Mathematics and Computer Science, University of Southern Denmark; Kurt Jensen, Department of Computer Science, University of Aarhus; Hans Vestman Jensen, Institute of Information and Media Studies, University of Aarhus; Henning Christiansen, Department of Communication, Business and Information Technologies, Roskilde University; Flemming Nielson, Department of Informatics and Mathematical Modelling, Technical University of Denmark; Jens Bennedsen, it-vest – Networking Universities; Jørgen P. Bansler, Department of Computer Science, University of Copenhagen; and Christian S. Jensen (chair), Department of Computer Science, Aalborg University.

Link:

http://www.danaim.dk/

Please contact:

DANAIM
Att: Henriette Frahm
c/o Aalborg Universitet, Institut for Datalogi
Selma Lagerlöfs Vej 300
9220 Aalborg Ø
Denmark
E-mail: [email protected]



The Future of the Web: An ERCIM View

by Keith G. Jeffery

It is impossible to imagine modern life without the Web. Tim Berners-Lee's original vision, so simple and elegant, has become part of the fabric of our lives. It is hard to realize that in less than two decades a line-mode 'techie' tool from a research laboratory has become indispensable to all for information, communication, business and research. Within the first few years of the Web, teams worked on aspects such as Web-database integration – leading to an explosion of e-business, especially B2C (Business to Customer) – and stylesheets to assist in consistent presentation. As time progressed and the Web became increasingly widespread, accessibility issues were addressed. Graphics recommendations such as PNG and SVG were developed. Synchronized multimedia (SMIL) was developed. Teams were already working on trust and security and on metadata to describe Web pages when 'Weaving the Web' was published, emphasising the Web of trust and semantic Web. The development of Java allowed the production of applets and servlets, leading to the client-server model being replaced by a new three-level architecture. Web services with XML provided data transfer and workflow capabilities for business. Web access from mobile devices became possible and the Mobile Web Initiative was launched by W3C, which has been in existence since the early years, providing a forum for consulting members and pushing through recommendations as effective standards.

Web of Trust

With some background in PICS (Platform for Internet Content Selection), various researchers have been developing systems for trust and security in a Web environment. The definition of standard document formats (using XML) to exchange information about companies, products and services, and the provision of SLAs (service level agreements), has underpinned the trust effort. Cryptography with page-content encoding/decoding has been a major part of security. Privacy has also been addressed, from techniques for authenticating users to methods of authorizing (and therefore also restricting) access.

Semantic Web

The development of the Resource Description Framework (RDF), essentially a binary relational representation, allowed more expressiveness to be encoded in Web applications. DAML and OIL – early ontology languages – led to the recommendation OWL for constructing domain ontologies to be used in assisting interoperation (or for input validation, query improvement and answer explanation). There is still a long way to go, but semantics are gradually being managed in the Web environment.
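As a purely illustrative sketch of RDF's binary relational flavour (subject-predicate-object triples), the following uses the Python rdflib library with an invented namespace and invented resource names; it is not taken from any project mentioned in this issue.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")   # hypothetical namespace, for illustration only
    g = Graph()

    # Each triple states one binary relation between resources (or a resource and a literal)
    g.add((EX.order42, EX.placedBy, EX.alice))
    g.add((EX.order42, EX.contains, EX.item7))
    g.add((EX.item7, EX.label, Literal("Widget, size M")))

    # Query the graph: which orders were placed by alice?
    for order in g.subjects(EX.placedBy, EX.alice):
        print(order)    # -> http://example.org/order42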

Mobile Web

The increasing capacity of handheld devices and networks has led to demand for access to the Web on mobile devices. The possibilities include data input from attached detectors (eg video), full-quality video and sound output, and push technology to make the device intelligent with respect to the user's objectives and the current spatio-temporal environment.

Web 2.0

The potential of the Web as a platform for interaction and cooperation is now emerging with the so-called Web 2.0 ideas. These include not only the use of services but also participation by users individually or collectively (collective intelligence). This can occur, for example, through blogs and Wikis with tagging, faster page loads and interactions using AJAX technology, access to multiple information sources and their integration through 'mashups', peer-to-peer connectivity and increasingly intelligent push technology.

Web Services

The development of Web services provided a sort of application programming interface (API) to allow programming of the Web, rather than simply authoring or generating content and then making it available. With XML encoding of the information, cooperative working including workflow is made possible. It also made it possible to consider SOA (service-oriented architecture), which leads to shorter development times for applications and easier maintenance. However, this also entailed metadata descriptions of services and business processes to allow discovery and use. There has been an explosion of metadata offerings in this domain – many application-domain specific – and some standardisation is needed.

GRIDs

Starting as metacomputing (linked supercomputers) in North America in the mid- to late nineties, the Europeans proposed that GRID services be based on Web services so that GRIDs could become a ubiquitous facility. In this vein, work by the Next Generation Grid Expert Group of the European Commission has led to much R&D activity in services and their specification with metadata.

Convergence

The future of the Web lies first with the convergence of the semantic Web, Web of trust and Web 2.0, followed by twinning with GRIDs. This combined e-infrastructure will allow more or less unlimited access to information, business processing, entertainment, education, research and so forth.

Standards

The outstanding barriers to realizing this vision concern the development and acceptance of standards. This is partly because some standards (recommendations in W3C terms) are inadequate, and partly because there are competing commercial interest groups. However, to provide ubiquitous interoperability these standards must be developed, accepted and evolved in a structured way.

As they were ten years ago, the key standards required now are in the areas of semantics (particularly of metadata to allow management of services and management of information resources) and trust/security (another metadata-controlled area). Resolution of these areas will allow confident progression to the vision. ERCIM researchers are involved in projects addressing these very issues and are working together with W3C Europe, located within ERCIM.

Please contact:

Keith G. Jeffery, ERCIM President
Science and Technology Facilities Council, UK
E-mail: [email protected]


Second DELOS Conference on Digital Libraries

The second DELOS Conference, which was held on 5-7 December 2007 in Tirrenia, Pisa, Italy, not only provided a forum to present and discuss recent research advances in the field of Digital Libraries, but also provided the opportunity to summarise some of the main achievements of DELOS during the past four years.

After four years of activity, the DELOS Network of Excellence on Digital Libraries, supported by the European Union under the Sixth Framework Programme and managed by ERCIM, has reached the end of its funded lifetime. However, the activities of DELOS actually started many years ago, first with the DELOS Working Group at the end of the nineties, and then through the DELOS Thematic Network, under the Fifth Framework Programme, from 2001 to 2003. Over these years, the objective of DELOS has been to fill the gap between current Digital Library practice and the needs of modern information provision. The goal has been to foster the development of technology that will eventually overcome the current restrictions of today's information systems.

In these years, it is widely recognised that DELOS has played a significant role in:
• the formation of an active European Digital Library research community
• the formulation of a vision for the future of the field
• fostering collaborative research in the direction of this vision.

The DELOS community has provided significant contributions to many key components of digital libraries. This is evidenced by the vast volume of scientific papers (more than 500) produced by the DELOS community in the last four years and by the number of projects and applications in which members of DELOS have provided or are providing a contribution. As important examples, we can cite the Digital Library Reference Model and the DELOS Digital Library Management System, which were both presented at the Conference.

The conference was attended by more than 80 participants. The 33 presentations reflected the main fields of interest of the DELOS community (DL Architectures, Interoperability, Information Access, Image and 3D Objects Retrieval, User Interfaces, Personalization, Preservation, Evaluation). One session was dedicated to the DELOS Reference Model, which constitutes one of the major legacies of DELOS to the digital library community. The final morning session consisted of presentations and demonstrations of the DelosDLMS, a globally integrated prototype of a Digital Library Management System, offering DL services and specialized functionality provided by members of the DELOS community. The DelosDLMS currently offers text and audio/visual searching, relevance feedback tools, novel interfaces and information visualization, annotation of retrieved information, and support for sensor data streams.

There were also three invited talks from key players in the field: Fabrizio Gagliardi, Microsoft, who described "Technical Computing at Microsoft Research", Maurizio Lenzerini, University of Roma La Sapienza, on "Ontology-based information access", and Stefan Gradmann, University of Hamburg, talking about "Interoperable Information Space – moving towards the European Digital Library".

The results achieved by DELOS over the years were summarised in a concluding session both by Yannis Ioannidis in a presentation entitled "DELOS Achievements" and also by a number of participants who presented various activities that have seen the light as a result of work which began in or was sponsored by DELOS. These activities include a number of independent EC-funded projects, eg MultiMatch, TrebleCLEF, LOGOS, DILIGENT, DPE, etc, as well as national initiatives such as Rinet in Greece, and the series of IRCDL conferences in Italy. The conference proceedings and copies of all presentations can be found on the DELOS Web site at http://www.delos.info/.

Yannis Ioannidis presenting the DELOS achievements.

DELOS fields of research.

Launch of the DELOS Association for Digital Libraries

In order to keep the DELOS spirit alive after the end of the DELOS Network of Excellence sponsored by the Sixth Framework Programme, a non-profit association has been established. The main objective of this initiative will be to continue the work of DELOS by promoting research activities in the field of digital libraries through the organisation of workshops, working groups, schools, etc. Membership is open both to individuals and to organisations, and the membership fee for the first year has been set at €50 for individuals and €2000 for organisations. Complete information, including the Statute of the Association and a membership application form, can be found on the DELOS Web site at http://www.delos.info/Association/.

Although, sadly, this was the last official event of the DELOS Network of Excellence, DELOS will live on through the newly formed "DELOS Association", and it is hoped that many of the DELOS events, such as the summer schools and the scientific workshops, will continue in the future. For more information about the DELOS Association, see the announcement in the box or visit the DELOS Web site.

Link:

http://www.delos.info/

Please contact:

Costantino Thanos
DELOS scientific coordinator
ISTI-CNR, Italy
E-mail: [email protected]


'The infinity Initiative' - ICT Research meets Standardization

ERCIM and ETSI, the European Telecommunications Standards Institute, have co-signed a landmark Memorandum of Understanding (MoU) to launch their future close collaboration.

Collaborative research and development in ICT (Information and Communication Technologies) plays a major role in determining the prosperity of Europe. The MoU encapsulates the organisations' joint aims to:
• exploit areas of synergy between Research and Standardisation in the innovation food chain
• accelerate R&D innovation cycles
• reduce time-to-market.

The organisations' commitment to their future collaboration was initiated by an inaugural high-level seminar at ETSI headquarters on 29 November 2007, the first in a series of envisaged joint events. The seminar took place in Sophia Antipolis, France, where both ERCIM and ETSI have their operations.

The seminar included presentations from the European Commission DG Information Society & Media and the Joint Research Centre (JRC) as well as from top-level representatives from Alcatel-Lucent, the Austrian Academy of Sciences, BT, Ericsson, Fraunhofer-Gesellschaft, University College Dublin and the World Wide Web Consortium.

According to ETSI Director-General Walter Weigel: "ICT markets are shaped by [ETSI] Standards which enable the widespread use of new technologies developed after extensive research activities led by organisations such as ERCIM, and so cooperation between our two communities can only be beneficial. This is an exciting development for all innovation food chain stakeholders and our corridors are already buzzing with ideas for the future".

ERCIM President Keith Jeffery said: "The greatest satisfaction for researchers is to see their ideas carried forward to products and services that are used widely. There are many examples where ERCIM researchers have achieved this. However, this cooperation with ETSI allows us to accelerate the innovation chain by overlapping research, testing, standardisation and exploitation, thus creating a leading position for the ICT industry within Europe, the results of which will then spread – like the World Wide Web – to the rest of the world."

"The infinity Initiative" envisages a number of joint eventsincluding advanced workshops, seminars and meetings. Itaims to strengthen the links between standardisation andresearch and has the support of the European Commission.The name "The infinity Initiative" was chosen in allusion tothe enduring and futuristic nature of research.

Link:

Seminar programme, agenda, presentations and speakers' biographies:
http://www.etsi.org/WebSite/NewsandEvents/infinity_home.aspx

Please contact:

Catherine Marchand
ERCIM office
E-mail: [email protected]

Walter Weigel, ETSI Director-General (left) and Keith Jeffery, ERCIM President.


IV Grids@Work Conference in China

by Bruno Le Dantec

Further to the success of the previous three events, ERCIM, ETSI, and INRIA teamed again this year to organise the 4th GRID Plugtests. The event, hosted by CNIC, the Computer Network Information Center of the Chinese Academy of Sciences, took place in Beijing from 28 October to 2 November 2007.

The events brought together more than 130 Grid experts and users from academia and industry from many European countries and China. It was co-located with a series of conferences and tutorials at which several Grid research projects met and networked. Participants learned about Grid perspectives for research activities and industrial applications, compared some running European and Chinese Grid middleware and were informed on the efforts made by the Grid community toward interoperability.

Workshops, Tutorials and 'All EC Projects' Meetings

EchoGrid, the project on European and Chinese Cooperation on the Grid, held its second public workshop. Over two days, presenters from Europe and China exchanged and compared past work and exciting ideas for the coming years on four topics, namely: new programming paradigms, trust and security, Grid workflow and Grid middleware for industrial applications. The outcomes will be used to draft the EchoGrid roadmap during the next consortium meeting in Roma in January 2008.

The GridComp project (Grid Programming with Components) organised its internal management meeting together with an open ProActive/GCM tutorial; this was used to present interactively the ProActive middleware for parallel, distributed and multi-threaded computing. This middleware provides the reference implementation of the Grid Component Models (GCM). At the end of the day, participants were able to develop, deploy and test ProActive/GCM applications.

Discussions relating to the Bridge project (Cooperation between Europe and China to Develop Grid Applications) were centred on successfully obtaining interoperability requirements from the application partners, and finalizing the design of the joint interoperability architecture needed by the project. The first Bridge prototype will be delivered before the end of the year.

All three European projects are partially funded by the European Commission. EchoGrid and GridComp are managed by ERCIM, while Bridge is managed by Fraunhofer-Gesellschaft. In the middle of the week, EchoGrid organised an all-EC Projects meeting to exchange information between the above projects and to organise a dedicated session on the description and comparison of the different middleware in use by Chinese and European projects. Representatives of five types of middleware participated in this exercise (ProActive/GCM, EGEE gLite, GRIA, CNGrid GOS and CROWNE), using a dedicated questionnaire defined prior to the meeting. A very interesting and exciting question and answer session followed the presentations.

IV Grids@Work Plugtest

The goal of this year's challenge was to write the best Grid distributed program that could count a maximum number of solutions within a limited amount of time to the well-known N-Queens problem. The finalist teams competing had to write a program able to work out solutions to the N-Queens problem using a Grid. All jobs were submitted from Beijing using the ProActive middleware on a worldwide Grid made available by Grid5000 (FR), In Trigger (JP), PoweRcost (IT) and DAS (NL). These jobs were then executed on the Grid during a fixed time slot and results evaluated according to their performance (number of solutions found, ability to use a maximum of grid nodes, CPU time etc). The winner of this 2007 contest is KAAPI-MOAIS, France, represented by Xavier Besseron, with ~40 379 billion solutions found. Deployed on 3654 workers, it computed for the first time N=23 in 35min 7s and N=22 in 3min 21s.

This highly oriented Grid week was a clear success. It facilitated the exchange of good experiences and best practices on the use of Grid open standards for Grid middleware and applications interoperability, and helped to reinforce existing links between the two Grid communities and to promote long and lasting partnerships.

Links:

IV Grids@Work conference: http://www.etsi.org/plugtests/grid/IVGRID_PLUGTESTS.htm
EchoGrid: http://echogrid.ercim.org/
GridComp: http://gridcomp.ercim.org/
Bridge: http://www.bridge-grid.eu/

Please contact:

Florence Pesce
ERCIM office
Tel: +33 49238 5073
E-mail: [email protected]


Plugtest participants.
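For readers unfamiliar with the benchmark, the following is a minimal, single-machine Python sketch of the underlying computation (not the ProActive/Grid code used by the contestants): it counts complete placements of n mutually non-attacking queens by backtracking, the search that grid entries typically parallelise by splitting the search tree across workers.

    def count_nqueens(n):
        """Count all placements of n mutually non-attacking queens on an n x n board."""
        count = 0

        def place(row, cols, diag1, diag2):
            nonlocal count
            if row == n:
                count += 1
                return
            for col in range(n):
                # A queen at (row, col) is attacked if its column or either diagonal is taken
                if col in cols or (row - col) in diag1 or (row + col) in diag2:
                    continue
                place(row + 1, cols | {col}, diag1 | {row - col}, diag2 | {row + col})

        place(0, frozenset(), frozenset(), frozenset())
        return count

    print(count_nqueens(8))   # 92 solutions on the classic 8x8 board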


EVOL 2007 - Third International ERCIM Symposium on Software Evolution

by Tom Mens and Kim Mens

The third edition of the International Symposium on Software Evolution took place in Paris, 5 October 2007, under the auspices of the ERCIM Working Group on Software Evolution. The event gathered people from academia and industry to identify and discuss recent advancements and emerging trends in state-of-the-art research and practice in software evolution.

The symposium was organised in 'La Maison Internationale' in Paris, in co-location with the 23rd IEEE International Conference on Software Maintenance (ICSM 2007). The organisers were Tom Mens (coordinator of the ERCIM Working Group on Software Evolution), Kim Mens (lecturer at UCL, Belgium), Maja D'hondt (Chargé de Recherches at INRIA) and Ellen Van Paesschen (ERCIM postdoctoral research fellow). Sponsoring was obtained from ERCIM, INRIA and Université des Sciences et Technologies de Lille.

As in previous years, the event was very successful. 37 people from twelve different countries attended, and in total we received 29 regular submissions and three tool demonstrations. An international programme committee of experts in the field took charge of ensuring the quality of accepted papers. Fifteen submissions and two tool demonstrations were accepted for presentation, and eleven of those were selected for inclusion in the post-proceedings, which will be published as part of the scientific open-access journal Electronic Communications of the EASST in 2008. This corresponds to an overall acceptance ratio of 34.4%.

The presentations covered a wide variety of research topics, including research advances in software refactoring, open-source software evolution, reengineering of legacy code, software product lines, and evolution problems pertaining to component-based, aspect-oriented or model-driven software. The two tool demonstrations focused on the support of feature modelling and maintaining design regularities in evolving code.

Links:

WG Web site: http://w3.umh.ac.be/evol/
Workshop Web site: http://w3.umh.ac.be/evol/events/evol2007.html
ICSM Web site: http://conferences.computer.org/icsm/

Please contact:

Tom Mens
Université de Mons-Hainaut, Belgium
Tel: +32 65 37 3453
E-mail: [email protected]

Software Evolution

As a joint effort of the ERCIM Working Group on Software Evolution, Tom Mens and Serge Demeyer have edited together a book entitled 'Software Evolution', with contributions by international authorities in the field of software evolution.

Software has become omnipresent and vital in our information-based society, so all software producers should assume responsibility for its reliability. While 'reliable' originally assumed implementations that were effective and mainly error-free, additional issues like adaptability and maintainability have gained equal importance recently. For example, the 2004 ACM/IEEE Software Engineering Curriculum Guidelines list software evolution as one of ten key areas of software engineering education.

Tom Mens and Serge Demeyer, both international authorities in the field of software evolution, together with the invited contributors, focus on novel trends in software evolution research and its relations with other emerging disciplines such as model-driven software engineering, service-oriented software development, and aspect-oriented software development. They do not restrict themselves to the evolution of source code but also address the evolution of other, equally important software artifacts such as databases and database schemas, design models, software architectures, and process management. The contributing authors provide broad overviews of related work, and they also contribute to a comprehensive glossary, a list of acronyms, and a list of books, journals, Web sites, standards and conferences that together represent the community's body of knowledge. Combining all these features, this book is the indispensable source for researchers and professionals looking for an introduction and comprehensive overview of the state of the art. In addition, it is an ideal basis for an advanced course on software evolution.

Software Evolution
Tom Mens, University of Mons-Hainaut, Belgium; Serge Demeyer, University of Antwerp, Belgium (Eds.)
Springer, ISBN 978-3-540-76439-7


12th International ERCIM Workshop on Formal Methods for Industrial Critical Systems

by Stefan Leue and Pedro Merino

Last year's ERCIM Workshop on Formal Methods for Industrial Critical Systems (FMICS) was affiliated with the Computer Aided Verification (CAV) conference, held in Berlin on 1-2 July 2007.

The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. In particular, these workshops are intended to bring together scientists and practitioners who are active in the area of formal methods and interested in exchanging their experience in the industrial usage of these methods. These workshops also strive to promote research and development for the improvement of formal methods and tools for industrial applications.

The topics for which contributions to FMICS 2007 were solicited included, but were not restricted to, the following:
• design, specification, code generation and testing with formal methods
• verification and validation of complex, distributed real-time systems and embedded systems
• verification and validation methods that aim at circumventing shortcomings of existing methods with respect to their industrial applicability
• tools for the design and development of formal descriptions
• case studies and project reports on formal methods related projects with industrial participation (eg safety critical systems, mobile systems, object-based distributed systems)
• application of formal methods in standardization and industrial forums.

The workshop included five sessions of regular contributions and three invited presentations, given by Charles Pecheur, Thomas Henzinger and Gérard Berry. In addition to the participants' proceedings, a volume in the Lecture Notes in Computer Science (LNCS) series published by Springer Verlag will be devoted to FMICS 2007. The papers included in the LNCS volume were selected from a second round of peer reviewing.

FMICS 2007 attracted 33 participants from fourteen different countries, some of whom are members of the FMICS Working Group.

We wish to thank the members of the programme committee and the additional reviewers for their careful evaluation of the submitted papers (fifteen papers have been selected out of 31 submitted). We are very grateful to the local organisers of the CAV conference and to the University of Dortmund for allowing us to use their online conference service.

Following a tradition established over the past years, the European Association of Software Science and Technology (EASST) offered an award to the best FMICS paper. This year's award was given to Robert Palmer, Michael DeLisi, Ganesh Gopalakrishnan and Robert M. Kirby for their paper 'An Approach to Formalization and Analysis of Message Passing Libraries'.

Link:
FMICS Working Group and the next FMICS workshop: http://www.inrialpes.fr/vasy/fmics

Please contact:

Pedro Merino
University of Malaga / SpaRCIM, Spain
E-mail: [email protected]
http://www.lcc.uma.es/~pedro

From left: Pedro Merino, ERCIM FMICS Working Group coordinator and workshop co-organiser; Ganesh Gopalakrishnan, winner of the FMICS best paper award; and Stefan Leue, University of Konstanz, workshop co-organiser.

ERCIM-Sponsored Events

ERCIM sponsors up to ten high-quality events per year (conferences, workshops and summer schools).

Conditions:
• ERCIM sponsorship must be acknowledged in all official printed publicity material relating to the event
• promotional and informational material received prior to event commencement must be distributed to the conference participants as a part of the participant's conference package
• event organisers must provide free-of-charge attendance to one ERCIM representative, should ERCIM decide to send a PR person. They must also provide a booth in a reasonably prominent place, where this person can present and distribute further materials or provide further information to participants
• event organisers must include the ERCIM logo with a link to ERCIM Web pages on the title (reference) page of the conference for the period starting with successful PR sponsorship negotiation, for the remaining existence of these Web pages.

If you would like to associate ERCIM to an event, please contact [email protected].


ERCIM STM Working Group Award for the best PhD Thesis on 'Security and Trust Management'

The ERCIM Working Group (WG) on Security and Trust Management (STM) has established an award for the best PhD thesis in the area of security and trust management, in order to increase the visibility of young researchers inside ERCIM as well as in the larger European scientific community.

The topics of interest include, but are not limited to:
• rigorous semantics and computational models for security and trust
• security and trust management architectures, mechanisms and policies
• networked systems security
• privacy and anonymity
• identity management
• ICT for securing digital as well as physical assets
• cryptography.

Applications for the 2008 award are open to all PhD holders who defended their thesis during 2007 in any European university. Applications consisting of the PhD thesis (in pdf format), a short summary of the thesis in English, and letters of support from researchers from at least two different ERCIM STM WG member institutions must be submitted by email not later than 20 February 2008 to the STM Working Group coordinator Fabio Martinelli ([email protected]).

Applications will be evaluated by a committee and judged on the basis of scientific quality, originality, clarity of presentation, as well as potential impact of the results. The committee members for the 2008 competition are:
• Gilles Barthe, INRIA
• Theo Dimitrakos, BT
• Dieter Gollmann, Hamburg University of Technology
• Günter Karjoth, IBM
• Javier Lopez, University of Malaga
• Fabio Martinelli, CNR
• Mogens Nielsen, University of Aarhus
• Jean-Jacques Quisquater, Université catholique de Louvain
• Peter Ryan, University of Newcastle
• Pierangela Samarati, University of Milan

The award ceremony will be held during the 4th International Workshop on Security and Trust Management (STM08) on 16-17 June in Trondheim, Norway, collocated with EUROPKI08 and IFIP TM08. The winner will be invited to give a lecture at STM08.

Link:

STM WG: http://www.iit.cnr.it/STM-WG

Please contact:

Fabio Martinelli, IIT-CNR, Italy
E-mail: [email protected]

ERCIM Working Group Workshops on Dependable Embedded Systems

The ERCIM Working Group (WG) 'Dependable Embedded Systems' organised two successful workshops in cooperation with DECOS, the Integrated Project 'Dependable Embedded Components and Systems'. Both workshops were chaired by the WG coordinators Erwin Schoitsch from the Austrian Research Centers (ARC) and Amund Skavhaug, NTNU.

In cooperation with Euromicro, the ERCIM/DECOS workshop 'Dependable Smart Systems: Research, Industrial Applications, Cooperative Systems, Standardization and Certification' took place on 28 August 2007 in Lübeck, Germany. Thirteen participants enjoyed eleven presentations on dependable embedded systems theory and research, validation and certification, industrial (product line development for SMEs) and transport applications. A very engaged discussion – especially on automotive embedded systems visions and futures (road safety) – closed the full-day sessions.

The 2nd DECOS/ERCIM/COOPERS workshop 'Dependable Embedded Systems - Challenges, Impact, Solutions, Examples, Professional and Academic Education and Training' took place during the SAFECOMP 2007 conference in Nuremberg, Germany on 18 September 2007. 23 attendees from Europe (including two Russians) discussed dependable embedded systems research, embedded systems trust and security issues, industrial applications such as very large sensor networks, and road safety. A panel of industrial suppliers for automotive and road traffic advanced driver assistance and traffic management embedded systems concluded the event. The papers and discussion results have been collected, and will be reviewed and published as ERCIM workshop proceedings.

Additionally, a special session on dependable embedded systems was organised at INDIN 2007, the Industrial Informatics conference in Vienna, 23-26 July 2007. The papers are part of the conference proceedings published by IEEE.

The WG presented ERCIM at a joint DECOS/ERCIM booth at the workshops described above. ERCIM was also present at the Vehicle Electronics conference in Baden-Baden, Germany, 10-11 October 2007, which was attended by 1500 participants, and the ITEA2 Symposium, held in Berlin from 18-19 October 2007. ITEA2, a unique opportunity to gain an insight into one of Europe's most important and strategic ICT initiatives, attracted almost 500 participants.

Link:

http://its.arcs.ac.at/ercim/

Please contact:

Erwin Schoitsch
Dependable Embedded Systems WG coordinator
Austrian Research Centres GmbH – ARC, Austria
Tel: +43 50550 4117
E-mail: [email protected]


The European Scene

The European Commission is launching 24 new Research Projects on ICT Security for €90 million under the FP7-ICT Theme

by Jacques Bus

Twenty-four new research projects are being launched by DG INFSO following the result of the first call for proposals in the area of 'secure, dependable and trusted infrastructures', which is part of the 7th Framework Programme (FP7) Information and Communication Technologies (ICT) theme. Most of the projects started on 1st January 2008 and all will be operational within the next few months.

Among the 24 projects, there are five Integrated Projects (IPs), one Network of Excellence (NoE), fourteen Focused Research Actions (STREPs) and four Coordination Actions (CAs). The duration of individual projects varies from two to four years and the funding ranges from less than €1 million (CAs) to more than €9 million (some IPs). The projects cover an extensive range of hot security-related research topics in ICT. The overall focus of the research is on developing knowledge and technologies needed for building a trustworthy ubiquitous information society, based on secure and resilient network and service infrastructures. A central issue of the new research is the empowerment of end-users through proper handling of their digital identities and respecting their privacy when they interact in the digital world. The research topics can be broadly grouped into four closely inter-related thematic areas, with each project touching upon several areas (see figure).

1. Dynamic, Reconfigurable Service Architectures

Modern IT systems tend to replace individual components and applications with compositions of services distributed over a network. It is vital to define and ensure trust, security and dependability properties when composing services, either at design time or dynamically at run-time. Four research projects investigate the implications of this paradigm shift in service-oriented architectures in terms of trust and security.

2. Network Infrastructures

Heterogeneous network infrastructures that are secure, resilient and able to cope with different attacks and threats are needed in order to deliver trustworthy and always available applications and services. Research funded in this area covers, on the one hand, the need to design, build and maintain security in such network infrastructures. On the other hand, it covers threat prevention mechanisms with a focus on the early identification and modelling of emerging threats. This will enhance the ability of operators to detect anomalies and to counter possible cyber attacks.

3. Identity Management, Privacy, Biometrics, and Trust Policies

As users are bringing more and more of their corporate and private data within the realm of the Internet, they realise the potential, and also the risks, that technology has created for the protection and management of sensitive interactions, information and data. Several projects investigate new technological solutions for the secure handling of digital identities and sensitive data by adopting a user-centric, privacy-enhancing federated identity approach. Specific research areas addressed include: new privacy and identity management schemes for social networking and Web 2.0 environments and for complex real-life applications like healthcare; novel solutions combining biometrics and crypto-hashing methods for fighting digital identity theft; new concepts in multi-modal biometric authentication; and innovative biometric authentication means for new mobile services.

4. Underpinning Technologies for Trustworthy Infrastructures

Four new projects, one of which is a Network of Excellence, will develop a number of technology building blocks underpinning trustworthy network and service infrastructures.

ICT Research in 'secure, dependable and trusted infrastructures': research areas covered by the 24 new FP7 projects.


These projects will mainly focus on: achieving further advances in cryptographic algorithms and protocols and their efficient hardware and software implementations; developing trusted computing architectures for embedded computing platforms; and building tools to help programmers in designing and developing more secure and robust software.

Coordination Actions
Finally, four new Coordination Actions (CAs) will address the sharing of knowledge and collaboration between researchers in various areas of ICT security. Two of them cover the coordination of research, respectively, in cyber-threat defence and in resilience measuring and benchmarking in computer systems and components. Another CA supports and coordinates international collaboration with a number of developed countries. A fourth one addresses the definition of new areas of work in ICT Security Research, to further the EU's strategic thinking and positioning in the field.

Disclaimer: The content of this paper is the sole responsibility of the author and in no way represents the view of the European Commission or its services.

Link:

Synopsis of the new projects funded under FP7 in the field of ICT security:
http://cordis.europa.eu/fp7/ict/security/projects_en.html

Please contact:

Jacques Bus, Head of Unit "Security", DG Information Society and Media, European Commission
E-mail: [email protected]

European Digital Library Foundation welcomed by the Commissioner

The European Commission endorsed the work of the European Digital Library Foundation when high-level EC officials met members for the formal handover of their statutes on 27 November 2007.

"Europe's citizens should all be able to enjoy our rich cultural heritage. This Foundation is a significant step towards making that ambition come true," commented Commissioner Viviane Reding, responsible for Information Society and Media. "It shows the commitment of Europe's cultural institutions to work together to make their collections available and searchable by the public through a common and multilingual access point online."

Foundation members include the key European heritage and information associations. Their statutes commit members to work in partnership to:
• provide access to Europe's cultural and scientific heritage through a cross-domain portal
• co-operate in the delivery and sustainability of the joint portal
• stimulate initiatives to bring together existing digital content
• support digitisation of Europe's cultural and scientific heritage

The European digital library is developing its prototype site for launch next year. To coincide with the formal handover to the Commissioner, the Foundation announced the CITY as the first of the site's themes.

The CITY is a broad theme that will enable the prototype site to show the European urban experience from several perspectives. Emerging ideas include:
• cities of the future/cities of the past
• migration and diaspora
• trade and industry
• design, shopping and urban cool
• pox, cholera and the plague: the route to urban health
• archaeology and architecture
• utopias and cities of the imagination
• riot and disorder
• palaces and politics

The digital library project is gathering digitised content from European archives, museums, audio-visual collections and libraries. It will use maps, artefacts, photos, sound, film material, books, archival records and artworks to explore two millennia of connectivity between Europe's cities.

Dr Wim van Drimmelen, foundation member and Director of the Koninklijke Bibliotheek, the Netherlands, which hosts the European digital library initiative, said, "archives, museums, audio-visual collections and libraries are collaborating in order to guarantee that their resources can be brought together in a virtual world, regardless of where the original is held. It's what users expect. However, the benefits are not just for the users. The development work is providing an excellent forum for knowledge transfer between the domains, which increases our collective ability to respond to changing user needs and remain relevant in the fast-moving digital environment."

The European digital library is hosted by the Koninklijke Bibliotheek, the national library of the Netherlands, and led by the Conference of European National Librarians. The initiative is one of the Commission's flagship i2010 initiatives.

In August 2006, the Commission adopted a Recommendation on digitisation and digital preservation which urged European Union member states to set up large-scale digitisation facilities, so as to accelerate the process of getting Europe's cultural heritage online via the European digital library. In November 2006 the idea of a European digital library was strongly endorsed by the Culture Ministers of all EU Member States and was recently backed by the European Parliament in its resolution of 27 September 2007.

Link:

http://www.europeandigitallibrary.eu/edlnet/

Please contact:

Jonathan Purday, Digital Library Foundation
E-mail: [email protected]




Special Theme: The Future Web

Introduction to the Special Theme

The Path to Web n+1
by Lynda Hardman and Steven Pemberton

The Sapir-Whorf Hypothesis postulates a link between thought and language: if you haven't got a word for a concept, you can't think about it; if you don't think about it, you won't invent a word for it. The term "Web 2.0" is a case in point. It was invented by a book publisher as a term to build a series of conferences around, and conceptualises the idea of Web sites that gain value by their users adding data to them. But the concept existed before the term: eBay was already Web 2.0 in the era of Web 1.0. But now we have the term we can talk about it, and it becomes a structure in our minds, and in this case a movement has built up around it.


There are inherent dangers for users of Web 2.0. For a start, by putting a lot of work into a Web site, you commit yourself to it, and lock yourself into its data formats. This is similar to data lock-in when you use a proprietary program. You commit yourself and lock yourself in. Moving comes at great cost. This was one of the justifications for creating the Extensible Markup Language (XML): it reduces the possibility of data lock-in - having a standard representation for data helps using the same data in different ways too.

As an example, if you commit to a particular photo-sharing Web site, you upload thousands of photos, tagging extensively, and then a better site comes along. What do you do? How about if the site you have chosen closes down (as has happened with some Web 2.0 music sites): all your work is lost.


How do you decide which social networking site to join? Do you join several and repeat the work? How about genealogy sites, and school-friend sites? These are all examples of Metcalfe's law, which postulates that the value of a network is proportional to the square of the number of nodes in the network. Simple maths shows that if you split a network into two, its value is halved. This is why it is good that there is a single email network, and bad that there are many instant messenger networks. It is why it is good that there is only one World Wide Web.
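To spell out the arithmetic behind that claim: a single network of n nodes has value proportional to n², whereas two disconnected halves have a combined value proportional to 2 × (n/2)² = n²/2, half as much.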

Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.

The advantage of the semantic Web, however, is that it allows your data to be distributed over the Web and not centralised. By adding metadata to your data it allows aggregation services to offer the same functionalities as Web 2.0 without the lock-in and potential loss of data should the service die. To enable the inclusion of non-textual media in the semantic Web we need to extend existing languages, and be aware that existing interfaces are based on assumptions that may no longer apply. Articles in this special issue describe emerging technologies that will contribute to enabling the use of modalities beyond the visual and audible, for the whole, mobile, Web population.

The Web 2.0 phenomenon could not have been predicted when the first Hypertext Transfer Protocol (HTTP) was being developed. Similarly, as we are going beyond human-created links to machine-integrated data, we cannot foresee the applications and social phenomena which may emerge, even in the near future.

In this special issue we are not seeking to predict where the Web will go along its apparently accelerating path, but to give insights into emerging core technologies that will enable the continuing revolution in our species' relationship to creating, using and maintaining information.

Please contact:

Lynda Hardman, CWI and Eindhoven University of Technology, The Netherlands
E-mail: [email protected]

Steven Pemberton, CWI, The Netherlands and W3C
E-mail: [email protected]

Which Future Web?
An interview with Frank van Harmelen

The semantic Web will be a considerable part of the future Web. What is the difference between the semantic Web and artificial intelligence? And what about Web 2.0? Frank van Harmelen, computer scientist in the Netherlands and a specialist in the semantic Web, answers some questions.

The semantic Web initiative is often said to address the same issues that have already been approached for thirty years in Artificial Intelligence. What makes the semantic Web, with its focus on ontologies and reasoning, so different?

There is indeed a widespread misconception that the semantic Web is 'AI all over again'. Even though the two may have tools in common (eg ontologies, reasoning, logic), the goals of the two programmes are entirely different. In fact, the goals of the semantic Web are much more technical and modest. Rather than seeking to build a general-purpose all-encompassing global Internet-based intelligence, the goal is instead to achieve interoperability between data sets that are exposed to the Web, whether they consist of structured, unstructured or semi-structured data.

Tim Berners-Lee (W3C) devoted an entire presentation to the confusion between AI and the semantic Web in July 2006 (see the link below). This presentation also does a very good job of busting some of the other myths surrounding the semantic Web, such as that the semantic Web is mainly concerned with hand-annotated text documents, or that the semantic Web requires a single universal ontology to be adopted by all.

Web 2.0 appears to be the new kid on the block - everybody's darling, loved both by academia and industry. The semantic Web, on the other hand, has fallen from grace, owing to numerous unmet promises. How do you regard the coexistence of these two Webs and what role will Web 2.0 assume in the semantic Web's story?

My feeling is that this question is based on a false premise, namely that "the semantic Web has fallen from grace, owing to numerous unmet promises". The SemTech conference, an annual industry-oriented event organised in the past three years in San Jose, California, attracted 300 attendants in 2005, 500 attendants in 2006, and 700+ attendants in 2007. Its European counterpart, the European Semantic Technologies Conference, attracted 200+ attendants to its first event in Vienna in May 2007, of whom 75% were from companies. This would suggest that research and interest in the semantic Web are alive and well.

In fact, semantic technology is in the process of an industrial breakthrough. Here is a quote from a recent (May 2007) Gartner report, the industry watcher not known for its love of short-lived hypes:

"Key finding: During the next ten years, Web-based technologies will improve the ability to embed semantic structures in documents, and create structured vocabularies and ontologies to define terms, concepts and relationships. This will offer extraordinary advances in the visibility and exploitation of information - especially in the ability of systems to interpret documents and infer meaning without human intervention." And: "The grand vision of the semantic Web will occur in multiple evolutionary steps, and small-scale initiatives are often the best starting points."

Turning to the substance of your question: there is widespread agreement in the research world that Web 2.0 and the semantic Web (or Web 3.0) are complementary rather than in competition. For example, a science panel at the WWW2006 conference in May 2006 in Edinburgh came to the following consensus: Web 2.0 has a low threshold (it's easy to start using it), but also has a low ceiling (folksonomies only get you so far), while Web 3.0 has a higher threshold (higher startup investments), but has a much higher ceiling (more is possible).

The aforementioned Gartner report has useful things to say here as well. It advises the combination of semantic Web with Web 2.0 techniques, and predicts a gradual growth path from the current Web via semantically lightweight but easy-to-use Web 2.0 techniques to higher-cost/higher-yield Web 3.0 techniques.

And what about automated means of learning ontologies, relationships between entities, and so forth - that is, resorting to natural language processing, text mining, and statistical means of knowledge extraction and inference. Do you regard these techniques as complementary to the manual composition of ontologies or rather inhibitory?

My attitude towards the acquisition of ontologies and the classification of data objects in these ontologies is: if it works, it's fine. Clearly, relying solely on the manual construction of ontologies puts a high cost and a low ceiling on the volume of knowledge that can be coded and classified. Hence, I expect that the techniques you mention will play an ever-bigger role in the range of semantic technologies. I see no reason why such techniques are 'bound to fail'; instead I am rather optimistic about their increasingly valuable contribution.

All great technological inventions and milestones are marked by the advent of a killer application. What could/will be the semantic Web's killer app? Will there be one at all?



Frank van Harmelen. Picture: Irma Bulkens.



I find the perennial question for the 'killer app' always a bit naive. For example: we can surely agree that the widespread adoption of XML was an important technical innovation. But what was XML's 'killer app'? Was there a single one? No. Instead there are many places where XML facilitates progress 'under the hood'. Semantic Web technology is primarily infrastructure technology. And infrastructure technology is under the hood, or in other words, not directly visible to users. One simply notices a number of improvements. Web sites become more personalized, because under the hood semantic Web technology is allowing your personal interest profile to be interoperable with the data sources of the Web site. Search engines provide a better clustering of results, because under the hood they have classified search results in a meaningful ontology. Desktop search tools become able to link the author names on documents with email addresses in your address book, because under the hood, these data formats have been made to interoperate by exposing their semantics. However, none of these applications will have 'semantic Web technology' written on their interface. Semantic Web technology is like Nikasil coating in the cylinders of a car: very few car drivers are aware of it, but they are aware of reduced fuel consumption, higher top speeds and the extended lifetime of the engine. Semantic Web technology is the Nikasil of the next generation of human-friendly computer applications that are being developed right now.

Links:

http://www.w3.org/2006/Talks/0718-aaai-tbl/Overview.html
http://www.cs.vu.nl/~frankh/

Please contact:

Frank van Harmelen
Vrije Universiteit Amsterdam, The Netherlands
Tel: +31 20 598 7731/7483 (secr)
E-mail: [email protected]

KAON2 – Scalable Reasoning over Ontologies with Large Data Sets
by Boris Motik

Scalability of ontology reasoning is a key factor in the practical adoption of ontology technologies. The KAON2 ontology reasoner has been designed to improve scalability in the case of reasoning over large data sets. It is based on a novel reasoning algorithm that builds upon extensive research in relational and deductive databases.

Ontologies – vocabularies of terms often shared by a community of users – are being applied in science and engineering disciplines as diverse as biology, geography, astronomy, agriculture and defence. Nowadays, ontologies are usually expressed in the W3C standard language called the Web Ontology Language (OWL). OWL ontologies consist of a schema part, called a TBox, which describes the concepts and relationships in the domain of discourse, and a data part, called an ABox, which describes the actual data in the application. An efficient reasoner is the cornerstone of most OWL-based applications. It implements the formal semantics of OWL and thus provides the application with query answering capabilities.

While reasoning over OWL ontologies is a provably intractable computational problem, it has been observed that the ontologies encountered in practice rarely involve a combination of constructs that leads to intractability. By relying on sophisticated optimizations, reasoners were developed that can handle ontologies with large TBoxes, yet these still do not provide adequate performance on ontologies containing large ABoxes. This has so far prevented the usage of OWL in applications that depend on large data sets, such as metadata management and information integration.

Parallel to the development of reasoning techniques for OWL, significant effort has been invested into improving the scalability of relational and deductive databases. In particular, numerous optimizations of query answering in (disjunctive) datalog (a widely used deductive database language) are known and have proven themselves effective in practice. It is therefore natural to try to improve the scalability of ABox reasoning in OWL by building on this large body of existing work.

This idea has been realized in a new reasoner called KAON2. The architecture of the reasoner is shown in Figure 1. The central component of the system is the reasoning engine, which implements a completely new reasoning algorithm. A query Q over an ontology consisting of a TBox T and an ABox A is answered by first reformulating T as a set of clauses in first-order logic, and then transforming the result into a disjunctive datalog program DD(T). The latter step is the key part of KAON2: based on certain novel results in resolution-based theorem proving, it ensures that all answers of Q over T and A can be equally computed by evaluating Q over DD(T) and A. The main benefit of such a transformation is that, to evaluate Q w.r.t. DD(T) and A, the disjunctive datalog engine can use the optimization techniques known from deductive databases, such as (disjunctive) magic sets or join-order optimizations.
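As a deliberately tiny, hypothetical illustration (not an example from the KAON2 papers): the TBox axiom JournalArticle ⊑ Publication, stating that every journal article is a publication, corresponds to the first-order clause ¬JournalArticle(x) ∨ Publication(x), which can be read directly as the datalog rule Publication(x) ← JournalArticle(x). Given the ABox assertion JournalArticle(a1), a datalog engine evaluating the query Publication(x) then returns a1, with no description logic reasoning needed at query time. Real ontologies contain many axioms whose translation is far less direct; handling these correctly is precisely what the resolution-based reduction is for.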

This approach to query answering has shown itself to be practical and effective in cases where the TBox is rather simple but the ABox contains large amounts of data. On such ontologies, KAON2 has shown performance improvements over the state of the art of one or more orders of magnitude. KAON2 has thus become the platform of choice for numerous research projects.



These include FIT (EU IST 27090), OntoGov (EU IST 507237), NeOn (EU IST-2005-027595), X-Media (EU FP6-26978) and KnowledgeWeb (EU FP6-507482). Furthermore, ontoprise GmbH, the vendor of ontology-based software infrastructure based in Karlsruhe, Germany, is integrating KAON2 into its product suite and is using the tool in a commercial setting.

KAON2 is written in Java and can be used free of charge for non-commercial purposes. The tool has emerged as a result of the author's PhD work at the University of Karlsruhe, Germany. Its development was continued at the University of Manchester, UK, and is currently taking place at the University of Oxford, UK.

Boris Motik has been awarded the 2007 Cor Baayen Award for a most promising young researcher in computer science and applied mathematics by ERCIM.

Links:

http://kaon2.semanticweb.org/
http://www.ontoprise.de/

Please contact:

Boris Motik
Computing Laboratory, Oxford University, UK
E-mail: [email protected]


Figure 1: The KAON2 architecture: ontology, reasoning and DIG APIs on top of a reasoning engine that clausifies TBox axioms and reduces them into a disjunctive datalog program, which a disjunctive datalog engine evaluates together with the ABox assertions stored in files or an RDBMS.

The WIKINGER Project – Knowledge-Capturing Tools for Domain Experts
by Lars Bröcker

The semantic Web offers exciting opportunities for scientific communities: knowledge bases with underlying ontologies promote the process of collaborative knowledge creation and facilitate research based on the published body of work. However, since most scientific communities do not possess the knowledge required to build an ontology, these opportunities tend not to be taken up. The WIKINGER project aims to provide tools that largely automate the creation of an ontology from a domain-specific document collection.

There is a multitude of interesting content on the Web that would greatly benefit from inclusion into the semantic Web, eg cultural heritage collections, educational resources and scientific digital libraries. These collections offer a plethora of possibilities for knowledge applications if only they could be made accessible through ontologies. However, as long as domain experts require the assistance of ontology engineers to even begin the process of building a shallow ontology for their own domains, this is unlikely to happen.

Although there are tools available that guide the creation of ontologies, they are pitched to the level of an experienced ontology engineer. Domain experts wanting to create an ontology must therefore become ontology-creation experts before they can make use of these tools. A more sensible situation would be to provide tools to guide the ontological layperson through the process of creating an ontology for their domain, by automating the creation as much as possible.

WIKINGER is a joint project involving the department for Computer Linguistics at the University of Duisburg-Essen, the Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS), and the Commission for Contemporary History (KfZG). The goal is to create ontologies from collections of scientific documents; these are accessed using a Wiki containing the entities and concepts relevant to the domain, as well as qualified associations connecting them.

Named entity recognition (NER) is employed to automatically extract from the text collection entities of different concept classes. Domain experts provide examples for these concept classes using WALU, the WIKINGER Annotation and Learning Environment. Figure 1 shows a typical screenshot of the tool in action. Entities are marked up in different colours in the text, and are thus easily discerned at a glance. They are then assigned to a concept class by pressing the corresponding button in the lower part of the editor. WALU contains helper functions that mark up entities automatically, eg date/time information, previous annotations or entities provided in list form. The results of this step are sent back to the WIKINGER server, which then trains the extraction algorithms to extract entities on the larger scale of the document collection.



Whereas this step provides the nodes of the semantic network, the following step is concerned with the provision of the edges connecting these nodes. These edges represent the relations between the different concept classes. They are determined in a relation discovery module that first discerns all statistically relevant relations between classes in a training part of the corpus. Association rule mining and clustering algorithms are applied to this task. The association rules highlight interesting concept combinations in the text collection. The relations present in the occurrences of different rules are separated using clustering algorithms.

The relation discovery process can be governed by domain experts using a Java application. It generates the association rules, performs the clustering and allows the selection and labelling of relations that are to be included in the ontology. The different clusters can be reviewed on screen, meaning they can be merged with other similar clusters, discarded in the event of mistakes, and finally labelled if they fit into the ontology that is to be created. Results are sent back to the server, which applies them to the automatically extracted entities in order to populate the ontology. The server then provides access to the ontology for Web applications or via the SPARQL Protocol and RDF Query Language.

The pilot application for our system focuses on the domain of contemporary history, in particular the social and political history of German Catholicism. Historians from the KfZG are already using the tools described above for the creation of annotation examples and the subsequent selection of appropriate relations for the ontology describing their domain. They are reporting no difficulties in adapting to these new tools. The metaphors used in WALU imitate known behaviours used in marking up text passages, and thus integrate easily into a familiar pattern of workflow. Slightly more effort is required to become acquainted with the relation discovery application, since the actions performed here are new to the experts.

The prototype of the system is in its final development stages, and its release to the scientific community is planned for early 2008. Work conducted on different data sets shows the transferability of the approach: WALU, for example, achieved second place in the EVALITA shared task on NER in Italian newspapers in 2007.

The project is funded by the German Federal Ministry of Research and Education in the programme 'e-Science und Wissensvernetzung' (e-Science and Knowledge Networking). Work on the project began in October 2005, and will be completed in September 2008.

Link:

http://www.wikinger-escience.de

Please contact:

Lars Broecker
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme, Germany
Tel: +49 2241 14 1993
E-mail: [email protected]

Figure 1: Screenshot of WALU.

Tagging and the Semantic Web in Cultural Heritage
by Kees van der Sluijs and Geert-Jan Houben

While it is generally desirable that huge cultural collections should be opened up to the public, the paucity of available metadata makes this a difficult task. Researchers from the Eindhoven University of Technology in the Netherlands have built a Web application framework for opening up digital versions of these multimedia documents with the help of the public. This leads to a win-win situation for users and content providers.

The Regional Historic Centre Eindhoven (RHCe) governs all historical information related to the cities in the region around Eindhoven in the Netherlands. The information is gathered from local government agencies or private persons and groups. This includes not only enormous collections of birth, marriage and death certificates, but also posters, drawings, pictures, videos and city council minutes. One of its goals is to make these collections available to the general public. However, for the videos and pictures very little metadata is available, which makes indexing this data for navigation or searching very hard. Moreover, most of the fragile material is stored in vaults and is thus physically inaccessible to the public.

Through the use of simple tagging techniques, users could collaboratively help to provide metadata for previously uncharted collections of multimedia documents. By applying semantic and linguistic techniques these user tags can be enriched with well-structured ontological information provided by professionals. This gives content providers access to high-quality metadata with little effort, eg for archival purposes. In return, the rich metadata in combination with ontological sources offers users more powerful navigation and search facilities, allowing them to get a grip on the sometimes very large multimedia collections and find what they were really looking for.

Web engineering researchers from the Eindhoven University of Technology took on the challenge of building a Web application framework that would open up digital versions of these multimedia documents to the public in a meaningful way. A prototype framework, called Chi, provides a faceted browser view over the different data sets. This means that the data can be navigated or searched via a number of different dimensions.

Photo and video collections share three dimensions within scenes: time, location and a set of keywords that are associated with a scene. In Chi these dimensions are described by a specialist ontology. Professionals adapt the metadata schema of every collection by aligning notions of these dimensions with those in the ontologies. Because of these aligned ontologies users can homogeneously navigate heterogeneous collections of multimedia.

For every dimension Chi has a separate visualization in the user interface. For time there is a timeline, for location Google Maps is used, and for the keywords we present a graph which represents the relatedness of terms. In this way, results sharing a characteristic in one dimension can be grouped in these interfaces. This representation means that for a given picture or video, other pictures and videos that share a characteristic can be found.

The RHCe institute is aiming for good metadata descriptions of all its archived data, but its few domain specialists have only limited time and the collections are huge. For them, the biggest benefit from Chi is getting metadata from the users. However, users do not want to fill in large forms to provide well-structured data about the scenes: many will find this too complicated or too time-consuming. Therefore, Chi uses several tricks to keep things simple while still obtaining this information. First it uses an ordinary tagging mechanism by which users can enter keywords or small sentence fragments, called tags, to describe a scene. Then, via several matching mechanisms that take syntax, semantics and user reinforcement into account, users are presented with alternatives to their input tags.



Figure 1: Visualization of location using Google Maps (in Dutch).


These can be clicked on; while users are not obliged to click on an alternative, if they do, this is registered. If users often select a specific alternative for some tag, we assume there is some kind of relationship between the original and the selected tags, and we use this relationship to improve tagging suggestions and user queries.

Users can see the tags of others and can agree or disagree with a tag by clicking a thumbs-up or a thumbs-down icon. If many people agree on a certain tag for a certain scene, we assume the probability is high that this tag is a correct description for the scene. Finally, these tags are presented to the RHCe domain specialist, who provides a final judgment. In effect, this information is reused to identify those users who often present good tags and those who don't. The good taggers are given more weight when calculating whether a tag is useful.

Chi is currently being finalized in a second version and prepared for more extensive user testing. The first version of Chi has already been tested by end-users, and their responses and feedback were very positive and encouraging. This has motivated RHCe to continue with the researchers along this path of applying semantic technology.

Links:

http://wwwis.win.tue.nl/~ksluijs/
http://wwwis.win.tue.nl/~hera/
http://www.icwe2007.org

Please contact:

Kees van der Sluijs and Geert-Jan Houben
Eindhoven University of Technology, The Netherlands
Tel: +31 40 247 5154
E-mail: [email protected], [email protected]

Infrastructure for Semantic Applications - NeOn Toolkit Goes Open Source
by Peter Haase, Enrico Motta and Rudi Studer

The NeOn project is investigating the entire life cycle of networked ontologies that enable complex, semantic applications. As the amount of semantic information available online grows, semantic applications are becoming increasingly ubiquitous, Web-centric and complex. The NeOn Toolkit and the NeOn methodology lie at the core of the NeOn vision, defining the standard reference infrastructure and the standard development process for creating and maintaining large-scale semantic applications. The first version of the NeOn Toolkit for ontology engineering has just been released in open source by the NeOn Consortium.

NeOn is a 14.7 million euro project involving fourteen European partners. It started in March 2006 and will have a duration of four years. NeOn addresses the complete R&D cycle of the emerging generation of semantically enriched applications, which exist and operate in an open environment of highly contextualized, evolving and networked ontologies. It aims to achieve and facilitate the move from feasibility in principle, to concrete cost-effective solutions that can support the design, development and maintenance of large-scale, semantic-based applications. In particular, the project is investigating methods and tools for managing the evolution of networked ontologies, for supporting the collaborative development of ontologies, and for the contextual adaptation of semantic resources.

NeOn has developed an open service-centred reference architecture for managing the complete life cycle of networked ontologies and metadata. This architecture is realized through the NeOn Toolkit and complemented by the NeOn methodology for system development using networked ontologies.

NeOn Toolkit Launched
The NeOn Toolkit provides a next-generation ontology engineering environment for semantic applications. It is designed around an open and modular architecture, which includes infrastructure services such as registry and repository, and supports distributed components for ontology management, reasoning and collaboration in networked ontologies. Building on the Eclipse platform, the Toolkit provides an open framework for plug-in developers. The NeOn Toolkit is freely available in open source as the reference implementation of the NeOn architecture. The first open-source release of the Toolkit was just recently launched at the latest NeOn Glowfest at ISWC07 – the International Semantic Web Conference – in Busan, Korea, on 13th November. NeOn Glowfest is a series of seminars and discussions intended to gather together the members – both users and developers – of the NeOn community.

NeOn in Practice
NeOn pays special attention to integrating research with work practices. The methodology, toolkit and infrastructure are therefore intertwined with their deployment and testing from the early phases of the project. NeOn uses a case-centred methodology, which means that our research results are applied to real-world case studies involving partners from industry and public bodies. Three use cases provide testbeds for NeOn technology that show how the research innovations can be applied in practice and potentially exploited. We apply NeOn technology in two sectors with early-adopter businesses, where we can achieve rapid and significant impact. The two case studies are carried out as follows:
• Ontology-driven stock over-fishing alert system is a study led by the Food and Agriculture Organisation of the UN, which focuses on the agricultural sector and information management for hunger prevention. As the title suggests, it involves the management of alerts to avoid over-fishing in already stretched global waters.




• Supporting collaboration in the pharmaceutical industry is a case study concerned with infrastructure and APIs to bridge the currently used proprietary systems for managing financial and product knowledge interoperability in the networks/clusters of pharmaceutical labs, companies and distributors in Spain.

The NeOn project is co-funded by the European Commission's Sixth Framework Programme under grant number IST-2005-027595. The consortium includes European universities, businesses and user organisations. Among the research institutions are world-leading groups in the fields of ontology, collaborative technology, context management and human-computer interaction, such as the Open University and the University of Sheffield from the UK, the Universities of Karlsruhe and Koblenz-Landau from Germany, the Universidad Politécnica de Madrid (UPM) from Spain, INRIA, the Jožef Stefan Institute from Slovenia and the National Research Council of Italy. The participating companies are Software AG, iSOCO, Ontoprise and Atos Origin.

Links:

http://www.neon-project.org/
http://www.neon-toolkit.org/

Please contact:

Peter Haase
Institute AIFB, University of Karlsruhe, Germany
Tel: +49 721 608 3705
E-mail: [email protected]


Figure 1: NeOn helps ensure sustainable fisheries with semantic technologies.

Bridging the Clickable and Semantic Webs with RDFa
by Ben Adida

RDFa, a W3C proposal about to enter Last Call, will help bridge the clickable and semantic Webs. Publishers of current HTML Web pages can augment their output with interoperable, structured data. Web users can extract and process this data straight from its pleasant visual rendering within their browsers. RDFa will enable semantic Web data to progressively emerge from the existing, visual Web.

Semantic Web technologies, and in general the components of the 'data Web', are maturing at a rapid pace. A number of commercial products support the design of RDF (Resource Description Framework) schemas and the publication of RDF datasets. Key semantic Web specifications, SPARQL (SPARQL Protocol and RDF Query Language) and GRDDL (Gleaning Resource Descriptions from Dialects of Languages), have recently reached Recommendation status with the W3C. One of the important questions remaining is how to connect the semantic Web to the existing 'clickable Web', the Web just about everyone knows and loves. RDFa, a W3C proposal on the verge of entering Last Call, is one promising answer, a potential bridge between the clickable and semantic Webs. HTML already contains some constructs to provide the beginnings of data structure, for example with the rel attribute. Publishers are anxious to achieve this kind of functionality: Google recommends the use of this rel attribute with the keyword 'nofollow' to indicate that search engines should 'not follow' this link in their search relevance algorithms, eg when a publisher attempts to increase their URL's standing by posting it in multiple blog comments.

With RDFa, an HTML author can augment presentational markup with RDF structure. This begins very simply, by providing extra information about clickable links using the existing HTML rel attribute. In the following example, we declare a copyright license for the current document:

This document is licensed under a
<a rel="license"
   href="http://creativecommons.org/licenses/by/3.0/">
Creative Commons license</a>

Note how the clickable link's href is used both for rendering purposes and for machine readability. This idea, often called DRY (Don't Repeat Yourself), is a key principle of the RDFa design.

A Web publisher may want to make statements about more than the current page, of course, and RDFa fully supports this.



An image may be Creative-Commons licensed as follows:

<div about="photo1.jpg">
  This photo is licensed under a
  <a rel="license"
     href="http://creativecommons.org/licenses/by/3.0/">
  Creative Commons license
  </a>
</div>

With RDFa, an author is not limited to the existing reserved keywords in the baseline HTML vocabulary (next, prev, license, etc.). It is trivial to import the Dublin Core vocabulary, for instance, using the xmlns declaration mechanism. Once this is done, an author can use the vocabulary in her document. The following example does just that, using, this time, the property RDFa attribute to indicate a literal field, rather than a URL:

This document was created by:
<span property="dc:creator">
  Ben Adida
</span>.
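For illustration (assuming dc has been bound to the Dublin Core namespace http://purl.org/dc/elements/1.1/ via xmlns:dc), the single triple an RDFa processor would extract from the snippet above could be serialized in RDF/XML roughly as follows:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- rdf:about="" denotes the current document, which is the subject of the statement -->
  <rdf:Description rdf:about="">
    <dc:creator>Ben Adida</dc:creator>
  </rdf:Description>
</rdf:RDF>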

RDFa supports most of the power of RDF, including the ability to create more complex graphs of data. For example, if we want to provide more information about the creator of this page, we can introduce a blank node to which we attach a number of additional data fields. The rel attribute is used without a corresponding href, which indicates the creation of a blank node, and the contained RDFa statements are automatically applied to this blank node.

This document was created by:
<div rel="dc:creator">
  <span property="foaf:name">Ben Adida</span>,
  [<a rel="foaf:mbox"
      href="mailto:[email protected]">email</a>]
</div>.

In the above example, the creator of the current document is an entity named Ben Adida, with email [email protected]. Additional fields, including a phone number, a physical address, an organizational affiliation, etc. can be added just as easily, and further data depth, such as the physical address of the author's organizational affiliation, can be provided.
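Again for illustration, and keeping the obfuscated e-mail address exactly as it appears in the markup above, the graph this describes could be written in RDF/XML along these lines (the blank node label 'creator' is arbitrary):

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <!-- the page points to a blank node representing its creator -->
  <rdf:Description rdf:about="">
    <dc:creator rdf:nodeID="creator"/>
  </rdf:Description>
  <!-- the blank node carries the name and mailbox fields -->
  <rdf:Description rdf:nodeID="creator">
    <foaf:name>Ben Adida</foaf:name>
    <foaf:mbox rdf:resource="mailto:[email protected]"/>
  </rdf:Description>
</rdf:RDF>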

A number of RDFa tools are emerging, with parsers in a number of programming languages including Python, Java, PHP, Ruby, and JavaScript [1]. Within the browser, the RDFa Buttons [2] can be used in any Web browser to highlight RDFa structured data and convert it to RDF/XML or RDF/N3. Developers can easily extend these code snippets to provide specific actions based on this data, eg preparing a blog post about a Creative-Commons resource using appropriate attribution name and URL. Even more interesting is the Operator Firefox Add-On [3], which notifies users of the presence of RDFa as they browse, with content-specific action buttons automatically enabled based on the data found on the page. For example, Operator might allow one-click adding of an event to a calendar, of a phone number to an address book, or of a song to a playlist.

More information about RDFa markup can be found in the RDFa Primer [4], and the RDFa community and ecosystem is reachable on the RDFa Information Web site [5].

Links:

[1] http://rdfa.info/rdfa-implementations/
[2] http://www.w3.org/2006/07/SWD/RDFa/impl/js/
[3] https://addons.mozilla.org/firefox/addon/4106
[4] http://www.w3.org/TR/xhtml-rdfa-primer/
[5] http://rdfa.info/

Please contact:

Ben Adida
Creative Commons
E-mail: [email protected]

A Framework for Video Interaction with Web Browsers
by Pablo Cesar, Dick Bulterman and Jack Jansen

In order to make multimedia a first-class citizen on the Web, there is a need for major efforts across the community. European projects such as Passepartout (ITEA) and SPICE (IST IP) show that there is a need for a standardized mechanism to provide rich interaction for continuous media content. CWI is helping to build a framework that adds a temporal dimension to existing a-temporal Web browsers.

Web 2.0 is not so much a technological revolution as an evolution in the attitude of end-users towards the Web. What started as a global library is becoming a social meeting place in which users can share views and content. Where the initial focus was on a repository of static documents, the future focus will be on the provision of dynamic document services, such as the one shown in Figure 1.

Some of the future scenarios that motivate our work are:
• E-tourism: an online guide to a city that includes videos of the places of interest. The presentation can dynamically export the coordinates of the locations presented in the videos, which can be used to represent the guided tour in an external engine such as Google Maps, as shown in Figure 1. In addition, a GPS-enabled phone can request the current coordinates and import them into a presentation that shows the location of the user.
• E-learning: an e-learning portal that includes synchronized videos and slides. An interactive test controlled by an external calculation engine can provide results to the media player, allowing the learning material to be adapted to the knowledge of the student.



• E-commercials: media-based commercials can be customized to a specific user by, for example, displaying the name of the user and adapting the media presentation based on user preferences.
• Persistent segmentation: for example, by allowing the user to explicitly pause a presentation and then restart it at some later point – possibly days or weeks later.

In order to realize the services-oriented vision with video, an interaction model needs to be defined that transcends the traditional control set of start, stop and pause. The content within the video element will need to be triggered from external, peer-level content, as in the e-commercial scenario. That content in turn needs to trigger related content within the context of a higher-level embedding, as in the e-tourism scenario.

The scenarios show that there is a clear need for richer temporal semantics when integrating a conventional (X)HTML browser interface with multimedia documents. To this end, we wrap videos with an external data model, to extend content-related (not content-based) interaction. The data model – rather than the video encoding – is the focal point for sharing, mashing and reusing individual objects.

Following the lead of XForms, our data model is defined as a small XML document. This data model is language-independent and can be shared between different XML-based documents such as (X)HTML, SMIL, or SVG. In addition, the framework provides support for defining and manipulating the value of variables in the data model. Moreover, the framework provides the mechanism by which variables can be evaluated at runtime and the state variable values saved for the next time the media document is played.
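As a purely illustrative sketch (the element names below are invented for this article's scenarios and are not the actual syntax of the framework), such a data model could be a handful of elements capturing the state shared between the video and its host page:

<data>
  <currentStop>3</currentStop>         <!-- index of the tour location currently shown -->
  <latitude>51.4416</latitude>         <!-- coordinates exported to an external map engine -->
  <longitude>5.4697</longitude>
  <resumeAt>00:12:30</resumeAt>        <!-- playback position saved for a later session -->
</data>

A host page or an external engine can read and write these values, while the media player updates them as playback proceeds; saving them between sessions is what enables the persistent-segmentation scenario above.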

By exporting the data model to the outside world, it becomes possible for the media document to affect other contexts, eg the (X)HTML presentation. At the same time, external engines can affect the media presentation. So, unlike embedded video players, in our scenarios the video plays an active role in the Web page.

At the moment, the framework is implemented in the Ambulant open-source SMIL player. The work sketched in this article has been submitted to the W3C's SYMM working group under the name of smilState. It is expected to be integrated in the SMIL 3.0 release in early 2008. We are also actively participating in the W3C Backplane work to use the results from this and other Web groups to integrate a broadly consistent framework for sharing the data model across XML-based languages.

This work has been funded by the Dutch Bsik BRICKS project, the ITEA Project Passepartout and the FP6 IST project SPICE. Development of the open-source Ambulant Player and CWI's participation in the SMIL standardization effort have been funded by the NLnet foundation.

Links:

http://www.cwi.nl/projects/Ambulant/distPlayer.html
http://www.w3.org/TR/NOTE-HTMLplusTIME
http://www.w3.org/TR/SMIL/

Please contact:

Pablo Cesar
CWI, The Netherlands
Tel: +31 20 592 4332
E-mail: [email protected]


Figure 1: Screenshot of the e-tourism scenario.


Creating, Organising and Publishing Media on the Web
by Raphaël Troncy

Creating, organising and publishing findable media on the Web raises outstanding problems. While most applications that process multimedia assets make use of some form of metadata to describe the multimedia content, they are often based on diverse metadata standards. In the W3C Multimedia Semantics Incubator Group (XG), we have demonstrated, for various use cases, the added value of combining several of these into a single application. We have also shown how semantic Web technology can help in making these standards interoperable.

The W3C Multimedia Semantics XG has been chartered to show how metadata interoperability can be achieved by using semantic Web technology to integrate existing multimedia metadata standards. Between May 2006 and August 2007, 34 participants from fifteen different W3C organisations and three invited experts have produced several documents: these describe relevant multimedia vocabularies for developers of semantic Web applications, and the use of RDF and OWL to create, store, exchange and process information about image, video and audio content. The Semantic Media Interfaces group at CWI has sponsored and co-chaired this W3C activity.

The main result is a Multimedia Annotation Interoperability Framework that presents several key use cases and applications showing the need for combining multiple multimedia metadata formats, often XML-based. We use semantic Web languages to explicitly represent the semantics of these standards and show that they can be better integrated on the Web. We describe two of the use cases here.

Managing Personal Digital Photos
The advent of digital cameras and camera phones has made it easier than ever before to capture photos, and many people have collections of thousands of digital photos from friends and family, or taken during vacations, while travelling or at parties. Typically, these photos are stored on personal computer hard drives in a simple directory structure without any metadata. The number of tools, either for desktop machines or Web-based, that perform automatic and/or manual annotation of content is increasing. For example, a large number of personal photo management tools extract information from the so-called EXIF header and add this information to the photo description. Popular Web 2.0 applications, such as Flickr, allow users to upload, tag and share their photos within a Web community, while Riya provides specific services such as face detection and face recognition of personal photo collections.

What remains difficult, however, is finding, sharing, and reusing photo collections across the borders of tools and Web applications that come with different capabilities. The metadata can be represented in many different standards ranging from EXIF (for the camera description), MPEG-7 (for the low-level characteristics of the photo) to XMP and DIG35 (for the semantics of what is depicted). To address this problem, we have designed OWL (Web Ontology Language) ontologies to formalize the semantics of these standards, and we provide converters from these formats to RDF in order to integrate the metadata. A typical use case is a Web application that can generate digital photo albums on the fly based on both contextual and semantic descriptions of the photos, and aesthetic and visual information tailored to the user.
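As a small, hypothetical illustration of what such a converter can produce (the file name and values are invented, and the property names assume the W3C EXIF vocabulary at http://www.w3.org/2003/12/exif/ns#), an EXIF header expressed in RDF/XML might look like this:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:exif="http://www.w3.org/2003/12/exif/ns#">
  <!-- camera and capture-time metadata lifted from the photo's EXIF header -->
  <rdf:Description rdf:about="holiday-042.jpg">
    <exif:model>Example Camera 100</exif:model>
    <exif:dateTime>2007-08-14T16:03:11</exif:dateTime>
  </rdf:Description>
</rdf:RDF>

Once in RDF, such camera-level facts can be merged with XMP- or DIG35-derived statements about what is depicted, which is exactly the kind of integration the use case relies on.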

Faceting Music
Many users own digital music files that they have listened to only once, if ever. Artist, title, album and genre metadata represented by ID3 tags stored within the audio files are generally used by applications to help music consumers manage and find the music they like. Recommendations from social networks, such as lastFM, combined with individual FOAF (Friend of a Friend) profiles comprise additional metadata that can help users to purchase missing items from their collections. Finally, acoustic metadata, a broad term which encompasses low-level features such as tempo, rhythm and tonality (key), are MPEG-7 descriptors that can be automatically extracted from audio files, and which can then be used to generate personalized playlists.

Using the Music Ontology, a converter generates RDF descriptions of music tracks from the ID3 tags. The descriptions are enhanced with low-level descriptors extracted from the signal and can be linked to recommendations from social community Web sites. The user can then navigate a personal music collection using an iTunes-like faceted interface that aggregates all this metadata (see Figure 1).


Figure 1: Mazzle provides a faceted browser interface on your personal music collection and can automatically build personalized playlists.



COMM: From MPEG-7 to the Semantic Web
MPEG-7 is the de facto standard for multimedia annotation that appears in every use case developed by the Multimedia Semantics XG. Media content descriptions often explain how a media object is composed of its parts and what the various parts represent. Media context descriptions need to link to some background knowledge and to the creation and use of the media object. To make MPEG-7 interoperable with semantic Web technology, we have designed COMM, a core ontology for multimedia that has been built by reengineering the ISO standard, and using DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering) as its underlying foundational ontology to support conceptual clarity and soundness as well as extensibility towards new annotation requirements.

Current semantic Web technology is sufficiently generic to support annotation of a wide variety of Web resources, including image, audio and video resources. The Multimedia Semantics XG has developed tools and provided use cases showing how the interoperability of heterogeneous metadata can be achieved. Still, many things need to be improved! The results from the Multimedia Semantics XG and the report from the W3C 'Video on the Web' workshop will provide valuable input to identifying the next steps towards making multimedia content a truly integrated part of the Web.

Links:

http://www.w3.org/2005/Incubator/mmsem/
http://multimedia.semanticweb.org/COMM/
http://www.w3.org/2007/08/video/

Please contact:

Raphaël Troncy
CWI, The Netherlands
Tel: +31 20 592 4093
E-mail: [email protected]

Empowering Online Communities to Manage Change - How to Build Viable Organisations Online
by David Lewis and Kevin Feeney

The World Wide Web is witnessing an explosion in new forms of online community. These are built on advances in Web content posting, social networks and the wisdom of crowds. However, we are still largely ignorant about the factors that enable online communities to react successfully to change. Change can involve a number of things: swings in membership and levels of engagement; the emergence of internal conflicts; changing relationships with other online communities and offline organisations; and increasingly, changes caused by new computer-mediated communication technology.

This is significant because there are real structural problems with how online communities manage change. Though communication channels in online communities are inherently open, the channels through which management decisions are made and enforced are not. The management technology available for online communities reflects the centralized, specialist mindset of professional administrators of information and communication technology.

Typically in online communities, the authority to manage communication channels also imparts authority over the social behaviour of the community. Management decision-making in online communities is therefore highly centralized, which restricts the participation of the broader community in reacting to change. The viability of a community is therefore precariously dependent on the competence and good grace of the individuals concerned.

Researchers at the Knowledge and Data Engineering Group at Trinity College Dublin have recently completed the development and evaluation of a ground-breaking mechanism for collective management decision-making in online communities. Their approach uses progressive grounding of a rule-based model. This advances a collective approach to managing change as a means to building viable online communities. The system developed is a novel specialisation of policy-based management, termed Community-Based Policy Management (CBPM).

With its novel use of self-defininggroups as the fundamental structuralabstraction, CBPM differs from previouspolicy-based management approaches

that use the centrally defined roles typi-cal of existing access control systems. Agroup is simply established by a set ofpeople engaged in a shared activity. Bydefining subgroups and federated groupsthrough explicit mandates for exercisingdecision-making authority, an organisa-tion of self-managing groups can beformed around the evolving needs andexperiences of an online community.These mandates can be progressivelygrounded as patterns of authoritychange, or as new models of resources orcontext emerge, and their impact on thedistribution of authority is captured. Bycontrolling the interconnection of differ-ent social and IT management rulesbetween groups, the portion of an organ-isation's current rule set that collaborat-ing decision-makers must understand isrestricted, thereby making collectivedecision-making more scaleable. Thisalso delivers fast runtime policy rule

Empowering Online Communities to Manage Change - How to Build Viable Organisations Onlineby David Lewis and Kevin Feeney

The World Wide Web is witnessing an explosion in new forms of online community. These are built onadvances in Web content posting, social networks and the wisdom of crowds. However, we are stilllargely ignorant about the factors that enable online communities to react successfully to change.Change can involve a number of things: swings in membership and levels of engagement; theemergence of internal conflicts; changing relationships with other online communities and offlineorganisations; and increasingly, changes caused by new computer-mediated communication technology.


Clear provenance of rules ensures that when policies authored in one part of a community clash with policies or goals from another, the causes of the conflict are immediately identified, thereby quickening their resolution. Group structure and policy rules are explicitly modelled, and as a result, policy conflicts and the parties necessarily involved in their resolution can be identified. This enables online communities to actively reflect on their management processes.
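The interplay of group-scoped rules and conflict detection can be pictured with a short sketch. The class and rule names below are hypothetical simplifications rather than CBPM's actual implementation; the point is simply that when every rule carries the group that authored it, a clash immediately names the parties who must resolve it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    group: str      # the (sub)group that authored the rule, ie its provenance
    action: str     # eg "publish_article"
    effect: str     # "permit" or "deny"

class Community:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def decide(self, action):
        """Return the decision for an action, or the groups that must resolve a conflict."""
        applicable = [r for r in self.rules if r.action == action]
        effects = {r.effect for r in applicable}
        if len(effects) > 1:
            # Conflict: rule provenance identifies exactly who needs to negotiate.
            return ("conflict", sorted({r.group for r in applicable}))
        return (effects.pop() if effects else "no rule", [])

c = Community()
c.add_rule(Rule("editors", "publish_article", "permit"))
c.add_rule(Rule("moderators", "publish_article", "deny"))
print(c.decide("publish_article"))   # ('conflict', ['editors', 'moderators'])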

The Indymedia network (an online network for global news reporting that is independent of large news corporations) was selected as a case study for CBPM. The Oscailt Content Management System, which is the Indymedia community server, was modified to enforce all access control decisions according to policy rules. The organisational form of the community was tracked over a year, during which time CBPM proved able to accurately model structural changes. This illustrated that CBPM possesses the power to manage changes in the distribution of authority and to resolve problems in organisational structure through detection and analysis of policy conflicts. The explicit, declarative nature of the policy rules, when aligned using CBPM to the structure of the community, was therefore shown to support reflection upon that structure by its members.

Future work will develop further visual, interactive Web-based tools for reflecting on organisational structure, resource semantics and the existing policy-rule set. This will empower online communities to move from mere discussion fora to viable, value-generating organisations. Such empowerment has the potential to radically enhance the role of online voluntary associations: in non-profit activities, in public policy deliberation, and in broadening the inclusive management of public bodies. Similarly, the approach can support devolved decision-making and agile team working in corporations or commercial virtual organisations.

Such use of CBPM will bring into sharp focus the dynamics and distribution of power in online organisations, and will highlight how strategies can evolve for balancing control and trust in achieving shared and competing goals. Ultimately, the explicit modelling of successful strategies will facilitate the definition and reuse of online community management patterns.

Further CBPM applications are being developed using a Web service implementation. It is ideal for applications where resource ownership, and thus decision-making authority, are necessarily diffuse. Integrated applications include Ubiquitous Computing under the M-Zones project (www.m-zones.org), and now the NEMBES project (www.nembes.org) and Dynamic Spectrum Management under the Irish Centre for Telecommunication Value Chain Research (www.ctvr.ie).

Link:
http://kdeg.cs.tcd.ie/cbpm

Please contact:
David Lewis
School of Computer Science and Statistics, Trinity College Dublin
Tel: +353 1 896 2158
E-mail: [email protected]

Figure 1: Policy-Based Management (PBM) using a group-oriented organisational model.


Owela: Open Web Laboratory for Innovation and Design
by Pirjo Näkki

For most users, the Web is about communication rather than technology. The rise of so-called social media shows that when people are provided with simple tools and easy access to online content, they find new ways to utilize the Internet. The VTT Open Web Lab (Owela) is studying how social media tools can also be utilized in innovation and product design processes.

As increasing numbers of people spend their time in online communities, companies are also trying to find their way into these networks. One of the new ways of utilizing the Internet is the open innovation platform, where companies develop new technology, products and services openly with customers and end users. For companies, the Web serves as a direct channel for communication, feedback and sharing ideas with users.

Owela is an online laboratory that utilizes social media features for participatory design and open innovation. It has been developed in the VTT research project 'Social media in the crossroads of physical, digital and virtual worlds' (SOMED). This project started in 2006 and will continue until August 2008. It is developing technology and applications that support the creation of user-friendly and value-adding applications with socially created input for everyday use.

Owela serves as a virtual working environment, in which VTT is studying social media as a phenomenon of digital culture, and is testing and developing new social software for computer-mediated communication. In Owela we can improve our understanding of future user needs as well as gather feedback from a wider audience. Furthermore, Owela is used to study how new social media tools can be utilized to enrich interactive Web-based research methods. Owela was launched in April 2007 and is under continuous development based on experience and user feedback.

Traditional user-centred design begins with user-need acquisition, use-context analysis, and an examination of users' social and physical environments. In different phases of the product design, users participate in interviews, observations and testing. The user research process requires a lot of time and resources from the researchers.

In the new era of the Internet, when applications and services are produced with more agility, the traditional thorough user research process seems too time-consuming. New, easier and faster ways to perform user-centred design are therefore needed. As both a means of communication and a meeting place, the Web serves as a good medium for participatory design, since that is where the users are already. Our goal is to define and implement the processes and tools for user-driven product development so that the whole user-centric research process can be carried out reliably and efficiently.

At the moment, Owela is centred around a blog-based tool called IdeaTube, with which users may browse, comment on, and rate ideas, concepts and scenarios of new products and services. Other tools for collaboration between users, developers and researchers are chat and a test lab, in which users may test new prototypes and give feedback. In addition to these qualitative research methods, quantitative online questionnaires can be used.

Owela is not only a collection of different tools but an active community of users interested in new product and service development. In this kind of design community, optimum results are achieved if users' motivation comes from the community and participation, rather than from external rewards. Direct contact with users is needed, along with communication via Web site and emails. Furthermore, the design process must be clear to the users: they must see how their participation affects the design.

Owela can be combined with other user-centred design methods and utilized as a communication channel between face-to-face studies. User research can be done either publicly or, for confidential user studies, in restricted environments. It is also possible to link Owela tools to other existing Web communities. In the future, mobile features will also be developed.

As a follow-up project, we have been planning the project 'Information Technologies supporting the Execution of Innovation Projects' (ITEI) in cooperation with seven European universities and 22 companies. This will involve further development of methods for user-driven innovation and the online tools that support it.

Links:
Owela: http://owela.vtt.fi
SOMED: http://www.vtt.fi/proj/somed/

Please contact:
Pirjo Näkki, VTT Technical Research Centre of Finland
Tel: +358 20 7225897
E-mail: [email protected]

Figure 1: Owela supports all phases of the user-centred design process.


The Principle of Identity Cultivation on the Web
by Pär J. Ågerfalk and Jonas Sjöström

Web 2.0 and the commercial interest in open-source software both reflect a current trend towards increased user involvement in product and service development. To stay competitive in this era of open innovation, companies must learn to trust users as codevelopers and to make use of the Web as an instrument for identity cultivation.

A fundamental question in relation to the success of these emerging gift cultures is what motivates people to contribute time and knowledge without any apparent return, at least not in the immediate monetary sense. According to economists Josh Lerner and Jean Tirole, the two major motivations are career concerns and ego gratification, which they collectively refer to as the signalling incentive. By contributing to a Web community people gain reputation and status within that community. Creating and maintaining one's identity and social status thus appears to be the main driving force of these emerging phenomena. A successful company must show commitment and build a strong corporate identity to attract people (ie visitors), and an individual must commit fully to a community in order to build a strong personal identity: both are motivated by the signalling incentive. Hence, while personal identity is important to oneself, it is also important to others in order for them to recognize one's contributions. In a similar vein, individuals' personal identity is important to corporations in order for them to recognize the characteristics and needs of their customers, and to tailor their own Web presence, thus building their corporate identity.

However, while identity and recognition are important on the Web, the flipside of the identity coin is that of personal integrity. Consider the following story, for example, which was reported in The New York Times. Last year, a team within AOL publicly released search data of more than 650,000 users. Although actual user names were replaced with random numbers, it was possible to track all the search terms used by a given individual, and thus to physically locate that individual. Apparently, No. 4417749 conducted hundreds of searches over a three-month period and eventually the data trail led to Thelma Arnold, a 62-year-old widow in Lilburn, GA, who confirmed the searches were indeed hers. Shortly after this report, AOL removed the search data from its site and apologized for its release, but the detailed records continue to circulate online. The story does not reveal whether or not Ms. Arnold benefited from her strengthened identity in this particular community. However, the example clearly illustrates that some of the traces we leave on the Web are less intentional and probably less ego gratifying than we would wish.

The aim of this research is to increase our understanding of how the Web is used as an identity creation system, both for individuals and corporations. We have analysed the use of Amazon.com from the perspective of identity creation based on how and with whom we communicate when using such a system. This includes the importance of considering both the intentional actions of users and the traces left behind by those actions. The traces we leave behind, both knowingly (such as when reviewing a book) and unknowingly (by merely using the underlying technology of the Web infrastructure), are the foundation for our Web identity. In order to take advantage of the signalling incentive, a Web site therefore needs to provide users with instruments that allow them to develop a proper understanding of their 'online conversations' and their contribution to the development of their own identity. We refer to this as the 'principle of identity cultivation'.

Striving for recognition in some community thus motivates users, seemingly altruistically, to contribute to the development of the value of Internet services. In the quest for strong corporate identity, Web 2.0-aware companies, such as Amazon.com, are building their services around user engagement. This attracts people to their Web site and engages them in the codevelopment of the site, which in turn gives Amazon.com a strong corporate identity.

Figure 1: Main concepts and relationships relevant to identity cultivation on the Web.


This strong identity and the large number of committed visitors also make Amazon.com an attractive platform on which third parties can build their identity through personalized advertisements. This has an impact on our view of the Web from a design perspective: we need to acknowledge multiple interests and the implications for gathering, storing and utilizing information about both intentional user actions and the possibly unintended traces left behind.

For more details on this study, see the paper 'Sowing the Seeds of Self: A Socio-Pragmatic Penetration of the Web Artefact', which recently won the Best Paper Award at the 2007 International Conference on the Pragmatic Web (http://www.pragmaticweb.info). The paper is available from the authors on request and from the ACM digital library (http://www.acm.org/dl/).

Please contact:
Pär J. Ågerfalk
Lero – The Irish Software Engineering Research Centre, University of Limerick and Uppsala University
Tel: +353 61 213543
E-mail: [email protected]

Jonas Sjöström
Jönköping International Business School and Linköping University
Tel: +46 36 101784
E-mail: [email protected]


Understanding the Hidden Web
by Pierre Senellart, Serge Abiteboul and Rémi Gilleron

A large part of the Web is hidden to present-day search engines, because it lies behind forms. Here we present current research (centred around the PhD thesis of the first author) on the fully automatic understanding and use of the services of the so-called hidden Web.

Access to Web information relies primarily on keyword-based search engines. These search engines deal with the 'surface Web', the set of Web pages directly accessible through hyperlinks, and mostly ignore the vast amount of highly structured information hidden behind forms that composes the 'hidden Web' (also known as the 'deep Web' or 'invisible Web'). This includes, for instance, Yellow Pages directories, research publication databases and weather information services. The purpose of the work presented here, a collaboration between researchers from INRIA project-teams Gemo and Mostrare and the University of Oxford (Georg Gottlob), is the automated exploitation of hidden-Web resources, and more precisely the discovery, analysis, understanding and querying of such resources.

An original aspect of this approach is the avoidance of any kind of human supervision, which makes the problem quite broad and difficult. To cope with this difficulty, we make the assumption that we are working with services relevant to a specific domain of interest (eg research publications) that is described by some domain knowledge base in a predefined format (an ontology). The approach is content-centric in the sense that the core of the system consists in a content warehouse of hidden-Web services, with independent modules enriching the knowledge of these services so they can be better exploited. For instance, one module may be responsible for discovering relevant new services (eg URLs of forms or Web services), another for analysing the structure of forms, and so forth.

Typically the data to be managed is rather irregular and often tree-structured, which suggests the use of a semi-structured data model. We use the eXtensible Markup Language (XML) since this is a standard for the Web. Furthermore, the different agents that cooperate to build the content warehouse are inherently imprecise, since they typically involve either machine-learning techniques or heuristics, both of which are prone to imprecision. We have thus developed a probabilistic tree (or XML) model that is appropriate to this context. Conceptually, probabilistic trees are regular trees annotated with conjunctions of independent random variables (and their negations). They enjoy nice theoretical properties that allow queries and updates to be efficiently evaluated.
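As a rough illustration of this idea, the sketch below annotates tree nodes with conjunctions of independent Boolean random variables and computes the probability that a node is present, assuming each variable is true independently with a given probability. The class name and the simple product of probabilities are illustrative assumptions, not the actual model developed in this work.

from dataclasses import dataclass, field

@dataclass
class PNode:
    label: str
    # Condition: conjunction of (variable, polarity) literals, eg [("x", True), ("y", False)].
    condition: list = field(default_factory=list)
    children: list = field(default_factory=list)

def presence_probability(node, var_probs):
    """Probability that this node exists, given P(var = True) for each independent variable."""
    p = 1.0
    for var, positive in node.condition:
        pv = var_probs[var]
        p *= pv if positive else (1.0 - pv)
    return p

# A tiny probabilistic tree: the 'year' of a publication was extracted by an uncertain
# wrapper (variable "x"), giving two mutually exclusive candidate values.
root = PNode("publication", children=[
    PNode("title"),
    PNode("year=2007", condition=[("x", True)]),
    PNode("year=2006", condition=[("x", False)]),
])

probs = {"x": 0.8}   # the wrapper believes "2007" with probability 0.8
for child in root.children:
    print(child.label, presence_probability(child, probs))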

Consider now a service of the hidden Web, say an HTML form, that is relevant to the particular application domain. In order that it can be automatically reused, an understanding of various aspects of the service is necessary; that is, the structure of its input and its output, and the semantics of the function that it supports.

Figure 1: Probabilistic content warehouse, updated and queried by various modules that analyse the hidden Web.

The system first tries to understand the structure of the form and relate its fields to concepts from the domain of interest. It then attempts to understand where and how the resulting records are represented in an HTML result page. For the former problem (the service input), we use a combination of heuristics (to associate domain concepts to form fields) and probing of the fields with domain instances (to confirm or invalidate these guesses). For the latter (the service output), we use a supervised machine-learning technique adapted to tree-like information (namely, conditional random fields for XML) to correct and generalize an automatic, imperfect and imprecise annotation using the domain knowledge. As a consequence of these two steps, the structure of the inputs and outputs of the form are understood (with some degree of imprecision, of course), and thus a signature for the service is obtained. It is then easy to wrap the form as a standard Web service described in Web Service Definition Language (WSDL).
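The probing step can be pictured with a small sketch: guess a domain concept for each form field from its label, then submit known instances of that concept and keep the guess only if the service returns results. The helper names, the keyword heuristics and the submit_form callback are all hypothetical; real probing must also parse the result pages.

LABEL_HINTS = {            # naive heuristic: map label keywords to domain concepts
    "author": "Person",
    "title": "PublicationTitle",
    "year": "Year",
}

SAMPLE_INSTANCES = {       # known instances taken from the domain knowledge base
    "Person": ["Serge Abiteboul", "Georg Gottlob"],
    "PublicationTitle": ["Foundations of Databases"],
    "Year": ["2007"],
}

def guess_concept(field_label):
    """Heuristic guess of the domain concept behind a form field."""
    for keyword, concept in LABEL_HINTS.items():
        if keyword in field_label.lower():
            return concept
    return None

def probe(form_url, field_name, concept, submit_form):
    """Confirm a guessed concept: sample instances should yield non-empty result pages."""
    hits = sum(
        1 for instance in SAMPLE_INSTANCES[concept]
        if submit_form(form_url, {field_name: instance}) > 0
    )
    return hits / len(SAMPLE_INSTANCES[concept]) >= 0.5

# Example with a fake submission function that 'finds' one of the probed authors.
fake_submit = lambda url, values: 1 if "Abiteboul" in str(values) else 0
print(guess_concept("Author name"))                                          # Person
print(probe("http://example.org/search", "author", "Person", fake_submit))   # True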

Finally, it is necessary to understand the semantic relations that exist between the inputs and outputs of a service. We have elaborated a theoretical framework for discovering relationships between two database instances over distinct and unknown schemata. The problem of understanding the relationship between two instances is formalized as that of obtaining a schema mapping (a set of sentences in some logical language) so that a minimum repair of this mapping provides a perfect description of the target instance given the source instance. We are currently working on the practical application of this theoretical framework to the discovery of such mappings between data found in different sources of the hidden Web.

At the end of the analysis phase we have obtained a number of Web services, the semantics for which is explained in terms of a global schema specific to the application. These services are described in a logical Datalog-like notation that takes into account the types of input and output, and the semantic relations between them. The services can now be seen as views over the domain data, and the system can use these services to answer user queries with almost standard database techniques.

Links:
http://pierre.senellart.com/phdthesis/
http://treecrf.gforge.inria.fr/
http://r2s2.futurs.inria.fr/

Please contact:
Pierre Senellart
INRIA Futurs, France
Tel: +33 1 74 85 42 25
E-mail: [email protected]

Static Analysis of XML Programs
by Pierre Genevès and Nabil Layaïda

Static analysers for programs that manipulate Extensible Markup Language (XML) data have been successfully designed and implemented, based on a new tree logic, by the WAM (Web, Adaptation and Multimedia) research team, a joint lab of INRIA and Laboratoire d'Informatique de Grenoble. The logic is capable of handling the XML Path Language (XPath) and XML types such as Document Type Definitions (DTDs) and XML Schemas.

Since its introduction a decade ago, Extensible Markup Language (XML) has gained considerable interest from industry and now plays a central role in modern information system infrastructures. In particular, XML is the key technology for describing and exchanging a wide variety of data on the Web. The essence of XML consists in organising information in tree-tagged structures conforming to some constraints, which are expressed using standard type languages such as DTDs, XML Schemas and Relax NG. XML processing can be seen as transforming these structures using tree-oriented query languages such as XPath expressions and XQuery within full-blown transformation languages such as XSLT.

With the ever-increasing information flow in the current Web infrastructure, XML programming is becoming a key factor in realizing the trend in Web services aimed at enhancing machine-to-machine communication. There still exist important obstacles along this path: performance and reliability. Programmers are given two options: domain-specific languages such as XSLT, or general-purpose languages augmented with XML application programming interfaces such as the Document Object Model (DOM). Neither of these alternatives is a satisfactory answer to performance and reliability, nor is there even a trade-off between the two. As a consequence, new paradigms are being proposed, all with the aim of incorporating XML data as first-class constructs in programming languages. The hope is to build a new generation of tools that are capable of taking reliability and performance into account at compile time.

One of the biggest challenges in this line of research is to develop automated and tractable techniques for ensuring static type safety and optimization of programs. To this end, there is a need to solve some basic reasoning tasks that involve very complex constructions such as XML types (regular tree types) and powerful navigational primitives (XPath queries). In particular, every future compiler of XML programs will have to routinely solve problems such as:
• XPath query emptiness in the presence of a schema: if one can decide at compile time that a query is not satisfiable, then subsequent bound computations can be avoided
• query equivalence, which is important for query reformulation and optimization
• path type-checking, for ensuring at compile time that invalid documents can never arise as the output of XML processing code.

All of these problems are known to be computationally heavy (when decidable), and the related algorithms are often tricky.

In the WAM research team we have developed an XML/XPath static analyser based on a new logic of finite trees. This analyser consists in compilers that allow XML types and XPath queries to be translated into this logic, and an optimized logical solver for testing satisfiability of a formula of this logic.

The benefit of these compilers is that they allow one to reduce all the problems listed above, and many others, to logical satisfiability. This approach has a couple of important practical advantages. First of all, one can use the satisfiability algorithm to solve all of these problems. More importantly, one could easily explore new variants of these problems, generated for example by the presence of different kinds of type or schema information, with no need to devise a new algorithm for each variant.
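The reduction itself is easy to picture: to check that query p is contained in query q under a schema S, one asks whether the formula S ∧ p ∧ ¬q is satisfiable; an unsatisfiable formula means containment holds, and a satisfying tree would be a counter-example document. The sketch below shows this pattern in Python against purely hypothetical compile_schema, compile_xpath and is_satisfiable functions standing in for the team's compilers and solver.

def Not(f):
    return ("not", f)

def And(*fs):
    return ("and",) + fs

def query_emptiness(schema, query, compile_schema, compile_xpath, is_satisfiable):
    """Is `query` unsatisfiable on every document valid against `schema`?"""
    formula = And(compile_schema(schema), compile_xpath(query))
    return not is_satisfiable(formula)

def query_containment(schema, p, q, compile_schema, compile_xpath, is_satisfiable):
    """Does every node selected by p also satisfy q, for all valid documents?
    A satisfying tree of the formula below would be a counter-example."""
    formula = And(compile_schema(schema), compile_xpath(p), Not(compile_xpath(q)))
    return not is_satisfiable(formula)

def query_equivalence(schema, p, q, **tools):
    """Equivalence is containment in both directions."""
    return query_containment(schema, p, q, **tools) and query_containment(schema, q, p, **tools)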

The system has been implemented in Java and uses symbolic techniques (binary decision diagrams) in order to enhance its performance. It is capable of comparing path expressions in the presence of real-world DTDs (such as the W3C SMIL and XHTML language recommendations). The cost ranges from several milliseconds for comparison of XPath queries without tree types, to several seconds for queries under very large, heavily recursive type constraints, such as the XHTML DTD. These preliminary measurements shed light for the first time on the cost of solving static analysis problems in practice. Furthermore, the analyser generates XML counter-examples that allow program defects to be reproduced independently from the analyser.

The development of these analysers was initiated by the Web Adaptation and Multimedia research team at the INRIA Grenoble - Rhône-Alpes Research Centre, France. The project commenced in October 2005 and the full system will soon be released publicly.

Links:
Project home page: http://wam.inrialpes.fr/xml
WAM team: http://wam.inrialpes.fr

Please contact:
Pierre Genevès
WAM Team, CNRS, France
Tel: +33 4 76 61 53 84
E-mail: [email protected]

Nabil Layaïda
WAM Team, INRIA, France
Tel: +33 4 76 61 52 81
E-mail: [email protected]

Seaside – Advanced Composition and Control Flow for Dynamic Web Applications
by Alexandre Bergel, Stéphane Ducasse and Lukas Renggli

Page-centric Web application frameworks fail to offer adequate solutions to model composition and control flow. Seaside allows Web applications to be developed in the same way as desktop applications. Control flow is modelled as a continuous piece of code, and components may be composed, configured and nested as one would expect from traditional user interface frameworks.

Seaside is a framework for building dynamic Web applications, uniquely combining object-oriented and continuation-based approaches. Seaside applications are built by composing stateful components, each encapsulating a small portion of the page. Programmers are freed from the concern of providing unique names, since Seaside automates this by associating callback functions with links and form fields. Control flow is expressed as a continuous piece of code and, in addition, Seaside offers a rich application programming interface (API) to integrate with the latest Web 2.0 technology, such as AJAX (Asynchronous JavaScript) and Comet (Server Push).

Seaside is implemented in Smalltalk, a dynamically typed programming language, and inherits powerful reflective capabilities from the underlying language. Web applications may be debugged while the application is running. Inspection and modification may occur on objects on the fly. Source code is changeable and recompilable without interrupting the running application, and there is no need to restart a session.

Whereas most other Web application frameworks work in a page-centric fashion, Seaside makes use of stateful components that encapsulate a small portion of a page. Developers can compose the user interface as a tree of individual components, and often these components are reused over and over again, within and between applications. A basic set of ready-made widgets is also provided to handle user interactions.

Seaside offers a mechanism by which objects may be registered to be backtracked. With every response sent to the client, Seaside takes a snapshot of and caches registered objects. This allows previous application states to be restored in a controlled fashion, for example when the user is using the 'back' button in the Web browser.

Seaside uses programmatic XHTML generation. Instead of repeatedly pasting the same sequence of tags into templates, it provides a rich API to generate XHTML. This approach not only avoids common problems with invalid markup, but also allows markup patterns to be easily abstracted into convenient reusable methods. CSS (Cascading Style Sheets) is used to give the application a professional look.

Seaside also provides callback-based request handling. This allows developers to associate a piece of code with anchors and form fields, which is then automatically performed when the link is clicked or the form is submitted. This feature makes it almost trivial to connect the view with its model, as Seaside abstracts away all serialization and parsing of query parameters.


Callbacks are used in a natural way to define a control flow, for example to temporarily delegate control to a sequence of other components. The flow is defined by writing plain source code. Control statements, loops and method calls are mixed with messages to display components. Whenever a new view is generated, the control flow is suspended and the response is sent back to the client. Upon a new user interaction the flow is resumed.

In order that the definition of control flows as part of a Web application be as seamless as possible, Seaside internally stores a 'continuation' whenever a new component is displayed. This suspends the current control flow and allows one to resume it later on. Since continuations may be resumed multiple times, Seaside supports the use of Web browsers' 'back' and 'forward' buttons at any time. The execution state is automatically restored to the requested point and everything behaves as the developer expects.
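Seaside itself is Smalltalk, but the suspend-and-resume idea can be loosely approximated in Python with a generator: each yield plays the role of rendering a component and suspending the flow until the next request arrives. This is only an analogy (real continuations can be resumed several times, which plain generators cannot), yet it conveys how a multi-step dialogue reads as one straight piece of code.

def checkout_flow():
    """A multi-page dialogue written as a single linear flow.
    Each yield 'renders a page' and suspends until the user answers."""
    name = yield "Page 1: please enter your name"
    quantity = int((yield f"Page 2: hello {name}, how many items?"))
    confirmed = (yield f"Page 3: order {quantity} item(s)? (yes/no)") == "yes"
    yield "Thank you!" if confirmed else "Order cancelled."

# Simulate three request/response cycles against the suspended flow.
flow = checkout_flow()
print(next(flow))            # Page 1
print(flow.send("Ada"))      # Page 2
print(flow.send("3"))        # Page 3
print(flow.send("yes"))      # final page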

To complement the expressiveness of the Smalltalk programming language, a set of tools including a memory analyser, a speed profiler, a code editor and an object inspector is included. The debugger supports incremental code recompilation, and enables highly interactive Web applications to be built quickly and in a reusable and maintainable fashion.

Seaside is open-source software distributed under the MIT licence. It is under constant development by an international community using Squeak Smalltalk. Due to the platform independence of Smalltalk, Seaside can be run on almost any platform. The Seaside application server can be bridged with industrial-scale servers, such as Apache. Seaside is supported by major commercial Smalltalk vendors, GemStone Smalltalk and Cincom VisualWorks, as part of their product strategy, and is widely used in an industrial context.

The official Seaside Web site uses Pier, an open-source content management system written on top of Seaside.

Link:
http://www.seaside.st

Please contact:
Alexandre Bergel
INRIA Futurs, Lille, France
E-mail: [email protected]

Figure 1: A Web application built with Seaside.

EAGER: A Novel Development Toolkit for Universally Accessible Web-Based User Interfaces
by Constantina Doulgeraki, Alexandros Mourouzis and Constantine Stephanidis

EAGER is an advanced toolkit that helps Web developers to embed in their artefacts accessibility and usability for all. Web applications developed by means of EAGER have the ability to adapt to the interaction modalities, metaphors and user interface elements most appropriate to each individual user and context of use.

The constantly evolving Web is an unprecedented and continuously growing source of knowledge, information and services, potentially accessible by anyone, at any time and from anywhere. Despite the universality of the Web and the predominant role of Web-based user interfaces in the evolving Information Society, current approaches to Web design do not embrace the notion of adaptation and the principles of 'design for all'. Consequently, they fail to satisfy the individual interaction needs of target users with different characteristics. A common practice in contemporary Web development is to deliver a single user-interface design that meets the requirements of an 'average' user. However, this 'average' user is in fact an imaginary entity, and differs radically from the profiles of a large portion of the population. This is particularly the case for people with a disability, elderly people, novice IT users and users on the move.


The development of the EAGER toolkit was motivated by the emerging need to support individuals in achieving full participation in the knowledge society (especially individuals and groups at risk of exclusion), and to offer the means to address challenges associated with population aging and high disability rates all over the world. In these terms, the main goal of this work is to contribute to the collective vision of mainstreaming and radically improving the accessibility, usability and general user experience and acceptance of computer-based products and services. Thus, it is perceived as a contribution to reducing the 30% of the European population currently not using ICT.

EAGER supports, and provides the means for, the development of inclusive Web-based interfaces that are capable of adapting to multiple and significantly different user profiles and contexts of use (see Figure 1). It is an advanced toolkit that aids Web developers in following the Unified Web Interfaces (UWI) method, which builds on well-established 'design for all' principles. In particular, EAGER allows Microsoft .NET developers to create or revise existing interfaces such that they are able to adapt to the interaction modalities, metaphors and user interface elements most appropriate to each individual user, according to profile information based on user and context-specific parameters.

In this context, a number of user interface elements were designed in various forms (polymorphic task hierarchies) according to specific user and context-related parameter values (see Figure 2). These include user expertise and characteristics (eg disability), and interaction devices and platforms. This phase provided feedback during the development of EAGER, which involved the design of alternative interaction elements, and mechanisms for facilitating the dynamic activation and deactivation (decision making) of the interaction elements and modalities based on individual user interaction and accessibility preferences.
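A toy illustration of such decision making is given below: each abstract element has several alternative renderings, and a profile of user- and context-related parameters activates one of them. The profile keys and rendering names are invented for the example; EAGER's own decision-making mechanisms are of course richer.

# Hypothetical illustration of polymorphic UI elements selected from a user/context profile.
ALTERNATIVES = {
    "navigation": [
        # (predicate over the profile, rendering to activate)
        (lambda p: p.get("blind"),                  "audio_menu_with_keyboard_shortcuts"),
        (lambda p: p.get("device") == "pda",        "compact_link_list"),
        (lambda p: p.get("expertise") == "novice",  "verbose_menu_with_help_text"),
        (lambda p: True,                            "standard_menu_bar"),   # default
    ],
}

def render(element, profile):
    """Pick the first alternative whose activation condition matches the profile."""
    for condition, rendering in ALTERNATIVES[element]:
        if condition(profile):
            return rendering
    raise ValueError(f"no rendering defined for {element}")

print(render("navigation", {"device": "pda", "expertise": "novice"}))  # compact_link_list
print(render("navigation", {"blind": True}))                           # audio_menu_with_keyboard_shortcuts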

In brief, EAGER is an advanced library of: (a) the core UWI architectural components; (b) primitive UI elements with enriched attributes, eg buttons, links and radio buttons; (c) structural page elements, eg page templates, headers, footers and containers; and (d) fundamental abstract interaction dialogues in multiple alternative styles, eg navigation, file uploaders, paging styles and text entry.

As a means of validating the proposed toolkit, a prototypical, fully functional Web portal, namely the new portal for the European Design for All and e-Accessibility Network (EDeAN), was implemented with the EAGER toolkit and evaluated in terms of its accessibility and usability. The development of the new EDeAN portal is carried out in the framework of the EC-funded Coordination Action DfA@eInclusion (Design for All for eInclusion, contract no. 0033838). Through this process, the produced interfaces were also assessed against the W3C accessibility guidelines and thereby improved. This ensures that interfaces developed by means of EAGER conform to the Web Content Accessibility Guidelines. In summary, this development confirmed that EAGER offers significant benefits to developers and ensures the delivery of accessible, usable and satisfying Web-based interfaces. In fact, the process of employing EAGER supports the development of elegant interfaces and is significantly less demanding in terms of the time, experience and skills required from the developer than the typical process of developing Web interfaces for the 'average' user.

Overall, EAGER constitutes a significant contribution towards embedding accessibility, graceful transformation and ease of use for all in future and existing Web-based applications. Ultimately, this will help individuals – particularly people at risk of exclusion – to fully participate in the knowledge society.

Link:
Towards Unified Web-based User Interfaces, Technical Report 394, ICS-FORTH:
http://www.ics.forth.gr/ftp/tech-reports/2007/2007.TR394_Towards_Unified_Web-based_UI.pdf

Please contact:
Constantine Stephanidis
ICS-FORTH, Greece
Tel: +30 2810 391 741
E-mail: [email protected]


Figure 1: Dimensions of diversity - some examples.

Figure 2: Examples of alternative representations for images.


Migratory Web User Interfaces
by Fabio Paternò, Carmen Santoro and Antonio Scorcia

An environment developed at the Human Interfaces in Information Systems (HIIS) Laboratory of ISTI-CNR supports Web user-interface migration through different devices. The goal is to provide user interfaces that are able to move across different devices, even offering different interaction modalities, in such a way as to support task continuity for the mobile user. This is obtained through a number of transformations that exploit logical descriptions of the relevant user interfaces.

One important aspect of pervasive environments is giving users the ability to freely move about while continuing to interact with services available through a variety of interactive devices (cell phones, PDAs, desktop computers, digital television sets, intelligent watches and so on). Many interesting issues arise from such new environments, even for Web applications. Indeed, one significant potential source of frustration is the necessity of continuously restarting a session after each interaction device change. Migratory Web interfaces can overcome this limitation and support continuous task performance. This requires that the interactive part of Web applications be able to follow users and adapt to the changing context of use while preserving their state.

We have developed a new solution for supporting migration of Web application interfaces between different types of device. Our migration environment is based on a service-oriented architecture involving multiple clients and servers, and supports Web interfaces with different platforms (fixed and mobile) and modalities (graphical, vocal, and their combination). Our solution aims to be very general. We assume that a desktop version of the relevant applications already exists in the application servers, and that it can be developed with any authoring environment. This is reasonable given that the desktop version of a Web application is usually the only version existing. In addition, the user interface and the content for the desktop version are usually the most extended, thus providing a good basis for the transformations that aim to obtain versions adapted for different types of platform. Our migration platform is composed of a proxy service and a number of specific services, and can be hosted by any system. In order to subscribe to our migration environment, a device must run a client. The purpose of this client is to notify the server about the availability of the device to the migration services, and to provide the server with information regarding the device. The migration trigger can be activated only from the source device, either by a user action or a system event. In addition, the source interface is closed after migration; this prevents someone else picking up the source device and corrupting the input.

Our solution is able to detect any user interaction performed at the client level. The state resulting from the different user interactions can then be retrieved and associated with a new user interface version that is activated in the migration target device. Users can therefore conduct their regular access to the Web application and then ask for a migration to any device that has already been discovered by the migration server. Two factors make possible the support of migration across devices that enable various interaction modalities: first, the use of a logical language for user interface descriptions that is independent of the modalities involved (TERESA XML), and second, a number of associated transformations that incorporate design rules and take into account the specific aspects of the target platforms.

When a migration has to be processed, and regardless of which agent (the user or the system) starts the process, the Migration Manager, acting as the main server module, retrieves the desktop version of the original Web page(s) to be migrated. Once it is retrieved, the Migration Manager builds the corresponding logical descriptions at a different abstraction level. The purpose of the logical description is to identify the basic tasks supported by the original version of the application (eg editing values, selecting elements, activating functionalities etc). Once the logical description is obtained, it is used as input for the Semantic Redesigner service, which performs a redesign of the user interface for the target platform. This involves adapting the logical description to the interaction resources of the target device.

Figure 1: An Example of Web User Interface Migrating across Multiple Interactive Platforms.

When the target user interfaces have been generated, a specific service is used to identify the page that should first be uploaded to the target device (which is the page supporting the last basic task performed on the source device). Indeed, it can happen that one desktop page corresponds to multiple pages (for example when considering a mobile user interface). In order to support task continuity throughout the migration process, the state resulting from the user interactions with the source device (filled data fields, selected items, cookies etc) is gathered through dynamic access to the Document Object Model (DOM) of the pages in the source device. In a completely transparent fashion, this information is then associated with the corresponding elements of the newly generated pages and adapted to the interaction features of the target device by one module of our architecture. We have developed generators for various Web implementation languages (XHTML, XHTML MP, VoiceXML, X+V) and even for non-Web languages (such as Java for digital TV).
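A rough sketch of the state-transfer step is given below: capture the values of the interactive elements on the source page and re-apply them to the logically corresponding elements of the freshly generated target page. Representing pages as dictionaries keyed by a logical task identifier is an assumption made for the example; the real environment works on TERESA XML descriptions and the live DOM.

# Hypothetical sketch of migrating interaction state between two generated pages.
# Each page is modelled as {logical_task_id: {"type": ..., "value": ...}}.

def capture_state(source_page):
    """Collect the user-entered state of the source page (as a DOM walk would)."""
    return {task_id: element["value"]
            for task_id, element in source_page.items()
            if element.get("value") not in (None, "")}

def apply_state(target_page, state):
    """Re-inject captured values into the corresponding elements of the target page."""
    for task_id, value in state.items():
        if task_id in target_page:          # the task may live on a later mobile page
            target_page[task_id]["value"] = value
    return target_page

desktop = {"search_term": {"type": "text", "value": "hotels in Pisa"},
           "sort_order": {"type": "select", "value": "price"}}
mobile = {"search_term": {"type": "text", "value": ""}}

print(apply_state(mobile, capture_state(desktop)))
# {'search_term': {'type': 'text', 'value': 'hotels in Pisa'}}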

Links:
HIIS Lab: http://giove.isti.cnr.it
http://www.isti.cnr.it/ResearchUnits/Labs/hiis-lab/
Migratory Interfaces: http://giove.isti.cnr.it/MIgration/
TERESA XML: http://giove.isti.cnr.it/teresa/teresa_xml.html

Please contact:
Fabio Paternò
ISTI-CNR, Italy
E-mail: [email protected]


Paving the Way of Designers - Towards the Development of Multimodal Web User Interfaces
by Adrian Stanciulescu, Jean Vanderdonckt, Benoit Macq

Multimodal Web applications provide end-users with a flexible user interface (UI) that allows graphical, vocal and tactile interaction. As experience in developing such multimodal applications grows, the need arises to identify and define the major design options of these applications, in order to pave the way of designers to a structured development life cycle.

At the Belgian Laboratory of Computer-Human Interaction (BCHI), Université catholique de Louvain, we argue that developing consistent and usable multimodal UIs (involving a combination of graphics, audio and haptics) is an activity that would benefit from the application of a methodology, which is typically composed of: (1) a set of models gathered in an ontology, (2) a method which manipulates the involved models, and (3) tools that implement the defined method.

The Models
Our methodology considers a model-based approach where the specification of the UI is shared between a set of implementation-independent models, each model representing a facet of the interface characteristics. In order to encourage user-centred design, we take the task and domain models into consideration right from the initial design stage. The approach involves three steps towards a final UI: deriving the abstract UI from the task and domain models, deriving the concrete UI from the abstract one, and producing the code of the corresponding final UI. All the information pertaining to the models describing the future UI is specified in the same UI Description Language: UsiXML.

The Method
Our method considers all stages of the software development life cycle to be covered according to design principles, from early requirements through to prototyping and coding. This approach benefits from a design space that explicitly guides the designer to choose values of design options appropriate to the multimodal UIs, depending on parameters. In order to support these aspects, our approach is based on a catalogue of transformation rules implemented as graphs.

The Tool
We developed MultiXML, an assembly of five software modules, to explicitly support the design space. In order to reconcile computer support and human control, we adopt a semi-automatic approach in which certain repetitive tasks are executed, partially or totally, while still offering some level of control to the designer. This involves:
• manual selection of design option values by the designer, with the possibility of modifying this configuration at any time
• automatic picking of transformation rules from the transformation catalogue corresponding to the selected options, followed by their application by the TransformiXML module (see Figure 1) to reduce the design effort.

Case Studies
The described methodology was tested on a couple of case studies using the Opera browser embedded with the IBM Multimodal Runtime Environment. One of these applications considers a large-scale image application (see Figure 2) that enables users to browse a 3x3 grid map (ie translate, zoom in or zoom out) using an instruction pattern composed of three elements: Action, Parameter Y and Parameter X. The X and Y parameters identify the coordinates of the grid unit over which the action is applied (eg Action = <<translate>>, Parameter Y = <<top>>, Parameter X = <<left>>). The application enables users to specify each element of the pattern by employing one of the graphical, vocal and tactile modalities, or a combination of them.
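A small sketch of how such an instruction pattern can be assembled from whichever modality supplies each slot is shown below; the event format and function names are invented for illustration and are not taken from the MultiXML tool.

# Hypothetical fusion of multimodal events into one <Action, Parameter Y, Parameter X> instruction.
SLOTS = ("action", "param_y", "param_x")

def fuse(events):
    """Fill the instruction slots from graphical, vocal or tactile events, in arrival order."""
    instruction = {}
    for event in events:      # eg {"modality": "voice", "slot": "action", "value": "translate"}
        slot = event["slot"]
        if slot in SLOTS and slot not in instruction:
            instruction[slot] = event["value"]
    return instruction if len(instruction) == len(SLOTS) else None   # None: keep waiting

events = [
    {"modality": "voice", "slot": "action",  "value": "translate"},
    {"modality": "touch", "slot": "param_y", "value": "top"},
    {"modality": "touch", "slot": "param_x", "value": "left"},
]
print(fuse(events))   # {'action': 'translate', 'param_y': 'top', 'param_x': 'left'}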



Motivations
One of the main advantages of our design space is its independence of any existing method or tool, making it useful for any developer of multimodal UIs. The design options clarify the development process by providing options in a structured way. This method strives for consistent results if similar values are assigned to design options in similar circumstances, and also means that less design effort is required. Every piece of development is reflected in a concept or notion that represents some abstraction with respect to the code level, as in a design option. Conversely, each design option is defined clearly enough to drive the implementation without requiring any further interpretation effort. Moreover, the adoption of a design space supports the tractability of more complex design problems or a class of related problems.

Figure 1: The TransformiXML module.

The Project
The current work was conducted in the context of the Special Interest Group on Context-Aware Adaptation of the SIMILAR project. The group began in December 2003 and was coordinated by BCHI. Five other partners were involved: ICI, ISTI-CNR, University Joseph Fourier, University of Castilla-La Mancha and University of Odense. The goal of the group was to define, explore, implement and assess the use and the switching of modalities to produce interactive applications that are aware of the context of use in which they are executed. They can then be adapted according to this context so as to create UIs that remain the most usable. In the future, BCHI will continue working on the design, implementation and evaluation of Ambient Intelligence systems as a member of the ERCIM Working Group Smart Environments and Systems for Ambient Intelligence (SESAMI).

Links:
http://www.isys.ucl.ac.be/bchi
http://www.usixml.org
http://www.similar.cc
http://www.ics.forth.gr/sesami/index.html

Please contact:
Adrian Stanciulescu, Jean Vanderdonckt and Benoit Macq
Université catholique de Louvain, Belgium
Tel: +32 10 47 83 49, +32 10 47 85 25, +32 10 47 22 71
E-mail: [email protected], [email protected], [email protected]

Figure 2: Multimodal navigation in large-scale images.


Distributed Personalization: Bridging Digital Islands in Museum and Interactive TV
by Lora Aroyo

Personalization and user experience are key challenges for effectively using current consumer electronics. The VU University Amsterdam demonstrated in two projects the use of Semantic Web technology for personalization: CHIP – combining the experience in a physical museum with mobile devices and the Web, and iFanzy – a personalized selection of the digital TV content in a cross-media environment.

With the miniaturizing of the computer and its massive deployment in end-user electronics, the 'disappearing computer' becomes an inseparable part of everyday life. The ICT landscape is developing into a highly interactive distributed environment where people more than ever become early adopters of technology. This rapid progression of 'technology embraced by the public' creates a risk of information overload if the content is not adapted to the preferences and current user context. The experiences become more personal and engaging across multiple media.

However, the integration of the different user profiles is limited, lacking transparency in the use of personal data between applications. For example, usage data collected by Amazon.com is only valid within the Amazon domain; it cannot be used for other information services, nor can Amazon cater for different 'modes' of a user. Such personalization is still local and the user experiences are still 'digital islands' – within the boundaries of single applications, not interconnected to allow users to capitalize on the full potential of distributed multi-device environments. Methods and techniques that embrace the complexity of the distributed digital space of devices, information and users are needed to collect the existing fragments of personalization components and provide the essential glue to support personalization across the boundaries of particular applications and devices.

CHIP (Figure 1), an interactive application of semantic Web technology, illustrates how the physical museum, mobile devices and the Web can be combined in order to maximize the museum visitors' experience anytime and anywhere. Users' interests and art preferences, expressed as semantic metadata of cultural heritage, offer a personalized way to explore digital and physical museum collections through multiple devices (eg PC, mobile phone, PDA) and sensor technology, eg RFID. The CHIP (Cultural Heritage Information Personalization) project is guided by a use case provided by the Rijksmuseum Amsterdam, and is funded by the NWO CATCH program.

In iFanzy, a personalized cross-media TV guide, background knowledge is used for personalized access to TV content in a cross-media environment with TV set-top box, PC and mobile. The continuous observation of the user in all spaces (eg online and with various sensors at home) allows for context-sensitive adaptation.

Figure 1: CHIP application.
Figure: (a) iFanzy set-top box interface; (b) iFanzy Web interface; (c) iFanzy space architecture.


This work has been realized as a collaborative effort between Stoneroos Interactive Television, Eindhoven University of Technology and VU University Amsterdam, with Eureka ITEA funding.

To realize distributed personalization in these interactive environments, three aspects have been identified:
• interaction spaces, ie virtual (Web) space, physical space and the mobile space
• information challenges, ie data integration, modeling and presentation
• space-integration principles, ie user experience optimization strategies.

The physical interaction space, eg at home or in the museum, encompasses task-specific devices such as TV and information displays supporting the user's prime goal of consuming content. In the virtual interaction space, referring to the Web, users typically work with multiple applications and perform information-intensive tasks. The mobile interaction space regards portable and handheld devices, each containing a fragmented portion of the user's data. Further, space-integration principles for utilizing the three spaces for distributed personalization are postulated:
• Complementarity ('use the right space'): Use the applications and devices in the three spaces in a complementary way, so that the overall user experience is improved. For example, the iFanzy Web application is handy for searching, browsing and text-typing tasks, whereas a mobile device is used for user identification and one-click-away actions.
• Minimization ('do not bother the user'): Minimize the explicit user input, while maximizing the background collection of user and context data and its integration from different applications. This allows CHIP to start the personalization process instantly, and prevents a 'cold start' in the TV recommendation in iFanzy.
• Context-awareness ('know where you are serving'): Consider the specific context for presenting information to the user in order to achieve good confluence of the content and the context. Support user awareness about the current context both while navigating in the museum and while interacting with the iFanzy recommender.
• Vocabulary bridging ('talk the user's language'): Provide ways of bridging the terminology gap between expert-created applications and the end-user ways of naming items, searching and browsing.

The information integration, modeling and presentation have been realized with Semantic Web technology. Museum and TV metadata have been provided in RDF/OWL format, as well as relevant vocabularies, eg TV-Anytime, XMLTV Genres, IMDB Genres and Locations, WordNet, IconClass, Getty AAT, TGN and ULAN.

The RDF/OWL graphs resulting from the aligning and enriching of these vocabularies and metadata allowed a combined (virtual and physical) user model to be built and further applied (1) for (semi-)automatic generation of virtual and physical museum tours, and (2) for semantic recommendations of TV programs. In CHIP, AAT Style metadata had been added and further mapped to the artists in the Rijksmuseum collection, to allow for serendipitous recommendations based on relationships between artists and artworks in the same style, or relationships between artists, eg collaborator and student_of. The use of RFID technologies allows for instant interaction with artworks in the museum and for its logging in the user profile.
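A minimal sketch of how such an aligned graph might be queried for style-based recommendations is shown below. The Jena calls themselves are standard, but the data file, namespace and the ex:hasStyle property are hypothetical placeholders, since the CHIP schema is not published in this article.

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class StyleRecommender {
    public static void main(String[] args) {
        // Load a (hypothetical) aligned museum metadata graph.
        Model model = RDFDataMgr.loadModel("chip-collection.rdf");

        // Hypothetical vocabulary: ex:hasStyle links artworks to AAT style terms.
        String sparql =
            "PREFIX ex: <http://example.org/chip#> " +
            "SELECT DISTINCT ?other WHERE { " +
            "  VALUES ?liked { ex:NightWatch } " +        // a work the visitor rated highly
            "  ?liked ex:hasStyle ?style . " +
            "  ?other ex:hasStyle ?style . " +
            "  FILTER (?other != ?liked) " +
            "}";

        // Artworks sharing a style term become candidate recommendations.
        try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("other"));
            }
        }
    }
}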

Links:

http://chip-project.org
http://chip-project.org/demo
http://wwwis.win.tue.nl/SenSee
http://ifanzy.nl
http://www.cs.vu.nl/~laroyo

Please contact:

Lora Aroyo, VU University Amsterdam, The Netherlands
Tel: +31 20 598 2868/7718
E-mail: [email protected]


Discovery and Selection of Services on the Semantic Web
by Dimitrios Skoutas, Alkis Simitsis and Timos Sellis

Scientists from the National Technical University of Athens and the IBM Almaden Research Centre are proposing a novel infrastructure for ranking and selecting Web services using semantic Web technology. This approach uses the measures of recall and precision to evaluate the similarity between requested and provided services and expresses that similarity as a continuous value in the range of 0 to 1.

Service-oriented architectures constitute a key technology for providing interoperability among heterogeneous systems, and integrating inter-organisation applications in a loosely coupled fashion. Due to their increasing popularity and adoption and the widening availability of Web services, the problem of effectively and efficiently discovering and selecting appropriate services to meet specific user or application requirements, or to compose complex workflow processes, becomes a critical issue. On the other hand, the semantic Web is now enhancing the current Web with machine-processable metadata, giving formal and explicit meaning to information and thus making it processable not only by humans but also by software agents.

Our approach aims at the discovery and selection of appropriate advertised services that match a requested service by combining techniques from both the aforementioned areas. Figure 1 depicts a high-level overview of the individual steps – sequentially numbered in the figure – that are required in such a process. In the remainder of the article, we elaborate on the semantic matching.


Service Descriptions
UDDI provides syntactic interoperability and a classification scheme to describe service functionality. However, its support for semantics is limited, affecting the recall and precision of the obtained results. Several initiatives exist for the purpose of semantically enriching the descriptions of Web services (OWL-S, WSDL-S, WSMO), so that a high degree of automation can be achieved in performing fundamental tasks such as service discovery and composition. Hence, services are described in terms of their input and output parameters, preconditions and effects. Each parameter is semantically annotated using terms in a corresponding OWL (Web Ontology Language) domain ontology. A service description may also contain non-functional parameters regarding quality-of-service (QoS) aspects.

Semantic Matchmaking
The process of matching semantic Web services is essentially based on the use of logic inference to check for equivalence or subsumption relationships between the ontology classes that correspond to the parameters in the service descriptions. Typically, five types of match are identified: exact, if the request is equivalent to the advertisement; plug-in, if the request is subsumed by the advertisement; subsume, if the request subsumes the advertisement; intersection, if the intersection of the request and the advertisement is satisfiable; and disjoint, otherwise.
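The five-level scale can be read as a simple decision procedure over reasoner queries, as the sketch below illustrates. The Reasoner interface is a hypothetical wrapper around a DL reasoner such as Pellet, not an existing API; only the classification logic is shown.

// Minimal sketch of the five-level matchmaking scale described above.
enum MatchDegree { EXACT, PLUG_IN, SUBSUME, INTERSECTION, DISJOINT }

interface Reasoner {
    boolean isEquivalent(String classA, String classB);
    boolean isSubClassOf(String classA, String classB);   // classA subsumed by classB
    boolean isSatisfiable(String classExpression);        // eg "A and B"
}

class SemanticMatcher {
    static MatchDegree match(Reasoner r, String request, String advertisement) {
        if (r.isEquivalent(request, advertisement)) return MatchDegree.EXACT;
        if (r.isSubClassOf(request, advertisement)) return MatchDegree.PLUG_IN;   // request subsumed by advertisement
        if (r.isSubClassOf(advertisement, request)) return MatchDegree.SUBSUME;   // request subsumes advertisement
        if (r.isSatisfiable(request + " and " + advertisement)) return MatchDegree.INTERSECTION;
        return MatchDegree.DISJOINT;
    }
}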

Ranking Discovered Services
The ability of a search engine to rank the returned results is essential for two reasons. First, given that exact matches are often rare, and there is potentially a huge number of partial matches, ranking facilitates the location of the most suitable candidates. Second, users often do not have sufficiently clear information needs. Thus, consequent searches may be performed, and the ranking of intermediate results provides useful feedback for refining the search criteria until converging to a search that retrieves the final item(s).

We consider the following desiderata for a ranking mechanism for Web services:
• the notion of similarity should be based on assessing the semantic similarity between the request and service parameters, including inputs, outputs, preconditions, and effects
• the similarity function should be asymmetric, so as to distinguish whether the service capabilities are a superset or a subset of those requested
• the similarity function should consider the semantic information encoded in the domain ontology, including both the class hierarchy and the properties of classes, as well as restrictions specified on these properties, so that the higher expressiveness of OWL ontologies can be exploited
• the ranking mechanism should be flexible and customizable, and be able to adapt to specific application needs or user requirements and preferences.

We use the notions of recall and precision to assess the similarity between requested and provided services. Similar to existing works, the proposed discovery mechanism uses the available semantic information provided by the domain ontology, and performs logic inference to estimate the degree of match between the requested and offered capabilities. However, the degree of match is not expressed in a discrete scale, such as the five types of match discussed above, but as a continuous value in the range [0..1]. This allows the handling of cases where a large number of candidate services provide the same type of match, ie partial matches can be appropriately ranked. Moreover, using these two measures provides an intuitive way of capturing asymmetry in the matching. Essentially, this encourages advertisers to be honest with their descriptions: the service provider is obliged to strike a balance between these two factors in order to achieve a high rank. Ranking exploits the semantic information encoded both in the class hierarchy and in the properties of the classes, including the hierarchy of properties and value or cardinality restrictions. It is thus useful both for applications relying on taxonomies and for those employing more expressive ontologies.
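The article does not spell out the exact formulas, so the following is only one plausible reading of the recall/precision idea: recall averages how well each requested parameter is covered by the advertisement, precision averages how much of the advertisement is actually requested, and a user-controlled weight combines the two. The names and the averaging choice are illustrative assumptions.

import java.util.List;

// Hedged sketch of a recall/precision-style score over parameter matches.
// 'covered' holds, for each requested parameter, its best degree of match
// (in [0,1]) against the advertisement; 'used' holds, for each advertised
// parameter, its best degree of match against the request.
class RecallPrecisionRank {
    static double recall(List<Double> covered) {       // how well the request is covered
        return covered.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }
    static double precision(List<Double> used) {       // how much of the offer is actually requested
        return used.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }
    static double rank(List<Double> covered, List<Double> used, double weight) {
        // 'weight' lets the user favour recall over precision, or vice versa
        return weight * recall(covered) + (1.0 - weight) * precision(used);
    }
}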

Finally, the proposed ranking mechanism is flexible and customizable, allowing the consideration of user preferences. The user may determine the relative importance of each search parameter, and apart from presenting a single rank for each candidate service, more detailed results may also be provided (eg separate values for recall and precision, or the degree of match for specific parameters). This aids the user in identifying the most suitable service or refining the search criteria.

Prototype Implementation and Future Work
Our approach is complemented by a prototype implementation based on Jena, a Java framework for building semantic Web applications, and Pellet, an open-source OWL-DL reasoner. While currently only functional service parameters are considered, in the near future the ranking mechanism will also support QoS parameters. Another scheduled extension involves the facilitation of service composition and the ranking of composite services.

Links:
http://www.dblab.ntua.gr/~asimi
http://doi.ieeecomputersociety.org/10.1109/SERVICES.2007.8

Please contact:
Dimitrios Skoutas
National Technical University of Athens, Greece
Tel: +30 210 7721436
E-mail: [email protected]

Figure 1: System overview (the user's annotated request and the provider's annotated service descriptions are passed to a semantic matcher, which consults the domain ontology(ies) and the service registry and returns ranked services).


QoS-Based Web Service Description and Discovery
by Kyriakos Kritikos and Dimitris Plexousakis

The success of the Web service paradigm has led to a proliferation of available services. While sophisticated semantic (functional) discovery mechanisms have been invented to overcome UDDI's syntactic solution, the number of functionally equivalent Web services returned is still large. The solution to this problem is the description of the non-functional aspect of Web services, in particular quality of service (QoS), which is directly related to their performance. We are currently developing a semantic framework, which includes ontologies and matchmaking algorithms, for the semantic QoS-based description and discovery of Web services.

The aim of this work is to develop a complete framework for the semantic QoS-based description and discovery of Web services. This framework will consist of ontological specifications for the semantic description of the QoS of Web services and of QoS metrics, specification matchmaking algorithms for QoS-based Web service discovery, and a prototype system implementation for managing these specifications and algorithms.

Associated with each Web service is a set of non-functional attributes/characteristics that may impact its quality of service. Each QoS attribute is measured by one or more QoS metrics, which specify the measurement method, schedule, unit, value range, authority and other measurement details. A QoS specification (offer or demand) of a Web service is materialized as a set of constraints on a certain set of QoS metrics. These constraints restrict the metrics to have values in a certain range or in a certain enumeration of values, or to have just one value. Current modelling efforts of QoS specifications in fact differ only in the expressiveness of these constraints. However, when it comes to the modelling of QoS metrics, these efforts fail. The main reason is that their QoS metric model is syntactic, poor (ie not capturing all aspects of QoS metric definition) and not extensible. Based on the above deficiencies, we have conducted extensive research that resulted in the identification of a set of requirements for a complete QoS-based Web service description. By taking into account these requirements, we have developed an upper ontology of QoS-based Web service description called OWL-Q. This is comprised of many sub-ontologies (facets), each of which can be extended independently of the others. This ontology also complements OWL-S, the W3C submission for the functional specification of Web services. The sub-ontology specifying a QoS metric is depicted in Figure 1.

The most prominent QoS-based Web service discovery algorithms fail to produce accurate results because they rely either on syntactic descriptions of QoS metrics or on semantically poor QoS metric descriptions. Hence, in both cases, they cannot infer the equivalence of two QoS metrics based on descriptions provided by different parties (eg Web service provider and requester). Different specifications can occur for two reasons: either because of a different perception of the same concept, or because of a different type of system reading or measurement (low-level or high-level) for the same metric. Provided that two QoS metric descriptions are expressed in OWL-Q, we have developed a rule-based QoS metric-matching algorithm that infers the equivalence of the two metrics. This algorithm is composed of three main rules, each corresponding to a different case in a comparison of two metrics. The last rule is recursive and reaches the final point of checking the equivalence of two mathematical formulae in order to infer the equivalence of the metrics. Equivalence of mathematical expressions is generally undecidable, but powerful mathematical engines like Mathematica and Maple can successfully deal with many hard cases.

One of the most prominent approaches for QoS-based Web service discovery transforms QoS-based Web service specifications into Constraint Satisfaction Problems (CSPs). Each CSP is checked for consistency (ie whether it admits any solution at all). Matchmaking is then performed according to the concept of conformance (ie whether every solution of the offer is a solution of the demand). We have extended this approach in three ways: a) using our QoS metric-matching algorithm, we assign the same CSP variable to equivalent metrics; b) if equivalent metrics use different units, then a unit transformation procedure is carried out on the appropriate parts of the provider's CSP; and c) the types of discovery results were expanded.
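For the simplest case, in which both specifications reduce to interval constraints over already-unified metric variables, the conformance test amounts to interval containment, as the hedged sketch below illustrates. The real system delegates arbitrary constraints to the ECLiPSe CLP solver; the class names here are invented for the example.

import java.util.Map;

// Illustrative conformance check for interval constraints only: every value
// allowed by the offer must also be allowed by the demand.
class Interval {
    final double min, max;
    Interval(double min, double max) { this.min = min; this.max = max; }
    boolean contains(Interval other) { return min <= other.min && other.max <= max; }
}

class ConformanceCheck {
    static boolean conforms(Map<String, Interval> demand, Map<String, Interval> offer) {
        for (Map.Entry<String, Interval> d : demand.entrySet()) {
            Interval o = offer.get(d.getKey());          // same CSP variable after metric matching
            if (o == null || !d.getValue().contains(o)) return false;
        }
        return true;
    }
}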

Figure 1: The QoS metric sub-ontology of OWL-Q.


We are currently in the development phase of our QoS-based Web service discovery process, using the Pellet reasoner for inferencing and the ECLiPSe Constraint Logic Programming (CLP) system for solving constraints. The Java programming language is used as a bridge between them. The architecture of the system under development is shown in Figure 2. Upon completion, the QoS metric-matching algorithm will be evaluated using synthetic data and simulation methods. In addition, we are currently investigating the relaxation of hard constraints into soft ones for solving the CSPs in cases where the QoS-based Web service matchmaking procedure fails to produce any useful result. Regarding QoS modelling, we plan to devise mid-level ontologies describing domain-independent QoS metrics to help Web service providers and requesters express QoS-based Web service specifications. Moreover, we intend to extend OWL-Q by modelling other types of non-functional attributes such as context (location, time, user preferences, service execution environment etc). Last but not least, we envisage QoS-aware, dynamic Web service composition: this will be done by enforcing global QoS constraints and, for every metric, using metric evaluation functions imposed on any possible workflow (Web service) composition construct to produce global QoS metric values through construct reduction.

Links:

http://www.ics.forth.gr/isl/index.html
http://www.w3.org/Submission/OWL-S/
http://pellet.owldl.com/
http://eclipse.crosscoreop.com/

Please contact:

Kyriakos Kritikos
ICS-FORTH, Greece
Tel: +30-2810-391637
E-mail: [email protected]


Figure 2: The architecture of the QoS-based Web service discovery process.

Automating the Creation of Compound Web Applications
by Walter Binder, Ion Constantinescu, Boi Faltings and Radu Jurca

The creation of compound, service-oriented Web applications is a tedious, manual task. The software designer must search for relevant services, study their application programming interfaces (APIs) and integrate them into the desired application, while also taking into account non-functional aspects such as service cost or reliability. We have been investigating models and algorithms to automate the service integration process, resulting in novel service composition algorithms that combine artificial intelligence (AI) planning techniques with advanced dynamic matchmaking in large-scale Web service directories.

Building compound Web applications by integrating standardized Web services promises many benefits, such as reduced development effort and cost, ease of maintenance, extensibility and reuse of services. Service-oriented computing promotes the construction of applications by composing services. Service-oriented architectures maximize decoupling between services and create well-defined interoperation semantics based on standard protocols.

Service-oriented Web applications integrate the distributed Web services (WS) offered by various service providers. Web services are advertised in directories, which publish their programming interfaces, grounding details (protocol, address etc) and possibly service-level agreements (SLAs) specifying the conditions and cost of service usage. Current technology includes Universal Description Discovery and Integration (UDDI) for Web service directories, Web Service Definition Language (WSDL) for Web service advertisements, WS-Agreement for SLAs, and Business Process Execution Language for Web Services (BPEL4WS) for representing compound services. In order to leverage available Web services, software designers must be aware of the service interfaces and structure their applications so that they can be integrated. Software designers searching for suitable services (also using traditional search engines) are confronted with a large and dynamically changing search space, which in general must be explored manually. Such a manual approach to creating service-oriented systems is tedious and can hardly take advantage of the rapidly growing market of available Web services.

As a joint project between the University of Lugano and the Artificial Intelligence Lab of the Ecole Polytechnique Fédérale de Lausanne (EPFL), we have been exploring new techniques for automating the creation of compound service-oriented Web applications based on user requirements, while taking into account large-scale directories of Web service advertisements. As an essential tool for the design of service-oriented Web applications, we promote the ServiceComposer. This takes a specification of user requirements as input and interacts with Web service directories, as well as with the software designer, in order to create a compound service that meets those requirements. Our approach supports traditional settings in which Web service advertisements merely convey interface information at the level of method signatures, as well as advanced settings in which semantically enriched service advertisements provide specifications of service behaviour. In the latter case, languages such as OWL-S or Web Service Modelling Ontology (WSMO) may be used to specify service semantics; the ServiceComposer uses the additional information to improve composition quality.

At the core of the ServiceComposer are planning and optimization algorithms that combine available services. On the one hand, the generated compound service must fulfil the functional requirements specified by the user. On the other hand, the ServiceComposer should choose individual Web services so as to minimize the cost of service usage and maximize the reliability and availability of the compound service. Ideally, the ServiceComposer discovers a set of available Web services that can be assembled to meet all requirements. However, often there are specific aspects of the desired application that cannot be covered by off-the-shelf services, or the software designer may want to avoid using certain Web services, eg because of high cost or because of lack of confidence in the service reliability. Thus, the ServiceComposer cannot be a fully automated tool, since it needs to interact with the software designer. Rather, it offers design alternatives, suggests Web services to be integrated, and helps to minimize development effort by making use of available Web services wherever possible.

So far, as original scientific contributions, we have developed novel algorithms for service matchmaking and automated service composition, which make it possible to efficiently assemble services advertised in large-scale directories in order to meet given user requirements. We have succeeded in interleaving and integrating AI planning algorithms with remote directory access, for which we designed a dedicated query language and new matchmaking algorithms. Moreover, our work on automated Web service composition takes partial matches into account, increasing the success rate of our algorithms. In contrast, prior work on automated composition required a fixed and limited set of service advertisements (in the order of a few hundred services) that was hard-coded in a reasoning engine, and did not support partial matches.
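To make the role of planning concrete, the following deliberately simplified sketch shows forward-chaining composition over exact type matches: starting from the types the user can provide, it keeps adding any advertised service whose inputs are all available until the requested output types are covered. The actual algorithms additionally interleave this search with remote directory queries and score partial, subsumption-based matches; all class and field names here are illustrative.

import java.util.*;

class Service {
    final String name; final Set<String> inputs; final Set<String> outputs;
    Service(String name, Set<String> in, Set<String> out) { this.name = name; inputs = in; outputs = out; }
}

class Composer {
    static List<Service> compose(Set<String> provided, Set<String> goal, List<Service> directory) {
        Set<String> known = new HashSet<>(provided);   // types available so far
        List<Service> plan = new ArrayList<>();
        boolean progress = true;
        while (!known.containsAll(goal) && progress) {
            progress = false;
            for (Service s : directory) {
                // add a service once all its inputs are available and it contributes something new
                if (!plan.contains(s) && known.containsAll(s.inputs) && !known.containsAll(s.outputs)) {
                    plan.add(s);
                    known.addAll(s.outputs);
                    progress = true;
                }
            }
        }
        return known.containsAll(goal) ? plan : Collections.emptyList();
    }
}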

Our ongoing research concentrates on optimizing compound services with respect to non-functional properties such as service cost and service reputation. Moreover, we are developing a new middleware for dependable service-oriented applications. This will support failure recovery through dynamic replanning of service compositions, as well as proactive improvement of compound services, taking service reliability and performance into account. The latter features require a sophisticated monitoring infrastructure to be integrated into our middleware. Regarding concrete applications, we have demonstrated the feasibility of our approach with an evening planner that automatically integrates Web services for route planning, public transport schedules, restaurant recommendations, movie schedules etc. Currently, we are exploring ways to partly automate the creation of upcoming, social-based Web structures such as Web mashups.

Links:

A Flexible Directory Query Language for the Efficient Processing of Service Composition Queries:
http://www.igi-pub.com/articles/details.asp?id=6573

A Multi-agent System for the Reliable Execution of Automatically Composed Ad-hoc Processes:
http://dx.doi.org/10.1007/s10458-006-5836-0

Flexible and Efficient Matchmaking and Ranking in Service Directories:
http://doi.ieeecomputersociety.org/10.1109/ICWS.2005.62

Large Scale, Type-Compatible Service Composition:
http://doi.ieeecomputersociety.org/10.1109/ICWS.2004.1314776

Please contact:

Walter Binder
University of Lugano, Switzerland
E-mail: [email protected]

Ion Constantinescu
Digital Optim LLC
E-mail: [email protected]

Boi Faltings, Radu Jurca
Artificial Intelligence Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
E-mail: {boi.faltings,radu.jurca}@epfl.ch

Figure 1: The ServiceComposer helps the software designer in selecting and integrating available Web services that are advertised in distributed directories. The ServiceComposer uses AI planning and optimization algorithms and dynamically interacts with remote Web service directories that provide advanced matchmaking functionality.


Epigenetic Modelling
by Dimitri Perrin, Heather J. Ruskin, Martin Crane and Ray Walshe

The field of epigenetics looks at changes in the chromosomal structure that affect gene expression without altering the DNA sequence. A large-scale modelling project to better understand these mechanisms is gaining momentum.

Early advances in genetics led to the all-genetic paradigm: phenotype (an organism's characteristics/behaviour) is determined by genotype (its genetic make-up). This was later amended and expressed by the well-known formula P = G + E, encompassing the notion that the visible characteristics of a living organism (the phenotype, P) are a combination of hereditary genetic factors (the genotype, G) and environmental factors (E). However, this formulation fails to explain why in diseases such as schizophrenia we still observe differences between identical twins. Furthermore, the identification of environmental factors (such as smoking and air quality for lung cancer) is relatively rare. The formula also fails to explain cell differentiation from a single fertilized cell.

In the wake of early work by Waddington, more recent results have emphasized that the expression of the genotype can be altered without any change in the DNA sequence. This phenomenon has been tagged as epigenetics. To form the chromosome, DNA strands roll over nucleosomes, which are clusters of nine proteins (histones), as detailed in Figure 1. Epigenetic mechanisms involve inherited alterations in these two structures, eg through the attachment of a functional group to the amino acids (methyl, acetyl and phosphate). These 'stable alterations' arise during development and cell proliferation and persist through cell division. While information within the genetic material is not changed, instructions for its assembly and interpretation may be. Modelling this new paradigm, P = G + E + EpiG, is the object of our study.

To our knowledge, no previous efforts have sought to model directly the mechanisms that affect epigenetic changes. Biological research on epigenetic phenomena is ongoing, but while some very promising articles are being published, most still contain only qualitative descriptions of epigenetic changes. This is not ideal when trying to develop computer-based models, but it is also not unusual. Over a decade ago the basics of HIV infection were understood, but quantitative data were sparse. Yet as early as 1992, differential equation models were proposed, while cell-mediated micro-models date from the 1990s. As more data have become available, these models have improved in sophistication, incorporating features such as shape-space formalism and massively multi-agent, parallel systems.

As a first step, we propose a microscopic model for chromatin structures. From the current biological results, it clearly appears that each unit (eg histone, DNA strand or amino acid) has a distinct role in epigenetic changes, and this role can alter depending on the type or location of the unit (eg which particular amino acid, what part of the DNA strand etc). For efficiency, this is best modelled using an object-oriented approach and a C++ implementation. The main objective of this early model is to provide a description and hierarchy for epigenetic changes at the cell level, as well as an investigation into the dynamics and time scales of the changes. These results will then be used to 'feed into' other models. Already in development are approaches such as agent-based modelling of cell differentiation and complex recurrent networks of cancer initiation by epigenetic changes.
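Purely as an illustration of such a unit hierarchy (the group's actual model is written in C++ and its structure is not published here, so every name below is invented), a toy object model might look as follows.

import java.util.ArrayList;
import java.util.List;

// Invented, illustrative-only object model for chromatin units.
abstract class ChromatinUnit {
    abstract boolean affectsExpression();   // does this unit's current state alter gene expression?
}

class AminoAcid extends ChromatinUnit {
    String functionalGroup = "none";        // eg "methyl", "acetyl", "phosphate"
    boolean affectsExpression() { return !functionalGroup.equals("none"); }
}

class Histone extends ChromatinUnit {
    List<AminoAcid> tail = new ArrayList<>();
    boolean affectsExpression() { return tail.stream().anyMatch(AminoAcid::affectsExpression); }
}

class Nucleosome {
    List<Histone> histones = new ArrayList<>();   // the cluster of histone proteins
    String wrappedDnaSegment;                     // the DNA strand rolled over the cluster
}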

Another early model uses Probabilistic Bayesian Networks. These represent a set of variables and their probabilistic dependencies and are constructed as directed acyclic graphs, in which nodes represent variables and arcs encode conditional dependencies between the variables. The variables can be of any type, ie a measured parameter, a latent variable or even a hypothesis. These networks can be used for inference, parameter estimation and refinement, and structure learning. This approach has been successfully used in medicine (eg breast cancer diagnosis) and biology (eg protein structure prediction), and epigenetic mechanisms appear amenable to such techniques.

Though still in its infancy, the project is gaining momentum and early work on the different approaches looks very promising. Active involvement from biologists and medical researchers is currently being sought in order to secure access to data and guarantee model realism (as highlighted by a presentation at the International Agency for Research on Cancer in early December 2007). Previous modelling experience from the group promises sensible integration of the various approaches and efficient implementations. Several publications and presentations are expected in the coming year, all of which will appear on the group's Web site (link below).

Links:

http://www.computing.dcu.ie/~dperrin/
http://www.computing.dcu.ie/~msc/publications.shtml

Please contact:

Dimitri Perrin
School of Computing, Dublin City University, Ireland
Tel: +353 1 700 8449
E-mail: [email protected]

Figure 1: A nucleosome, the fundamental subunit of the chromosome (adapted from C. Brenner, PhD thesis, Université Libre de Bruxelles, 2005).


Contract-Oriented Software Development for Internet Services
by Pablo Giambiagi, Olaf Owe, Anders P. Ravn and Gerardo Schneider

The 'COSoDIS' project - Contract-Oriented Software Development for Internet Services - is developing novel approaches to implementing and reasoning about contracts in service-oriented architectures (SOA). The rationale is that system developers benefit from abstraction mechanisms in working with these architectures. Therefore the goal is to design and test system modelling and programming language tools to empower SOA developers to deploy highly dynamic, negotiable and monitorable Internet services.

Figure 1: A contract (template) is generated in an electronic version (1); the contract is checked to be free of contradictions (2); a negotiation starts (3); different versions of the contract are checked (4); a final contract is signed (5); a runtime monitor guarantees the contract fulfilment (6).

As recently as several years ago, technology gurus predicted that the next big trend in software system development would be service-oriented architecture: that is, a successful integration of loosely coupled services belonging to different organisations, sometimes competing but collaborating on specific tasks, would storm the world. This would create a myriad of new business opportunities, enabling the formation of virtual organisations in which small and medium-sized enterprises would join forces to thrive in an increasingly competitive global market. The dream lives on, and the industry is pouring resources into developing and deploying Web services. Yet the degree of integration achieved between different organisations remains low. Collaboration presumes mutual trust, and wherever trust is not considered sufficient, business people turn to contracts as a mechanism to reduce risk. In other words, for the SOA to deliver its promised advantages, developers need cost-effective contract management solutions.

The main problems and open issues identified for supporting Web services development are the following:
1. Formal definition of generic contracts: currently, no unified system of syntax and semantics exists for contracts (in particular for quality of service (QoS) and security).
2. Negotiable and monitorable contracts: a contract must be negotiated until both parties agree on its final form, and it must be monitorable in the sense that there must be a way to detect violations. Current programming languages provide little support for negotiable and monitorable contracts.
3. Combination of object-orientation and concurrency models based on asynchronous message-passing. The shared-state concurrency model is not suitable for Web service development.
4. Integration of XML into a host language, reducing the distance between XML and object data models.
5. Harmonious coexistence at the language level of real-time and inheritance mechanisms.
6. Verification of contract properties: the integration of contracts in a programming language should be accompanied by the ability to guarantee essential properties. Guaranteeing the non-violation of contracts can be done in (at least) four different ways: (a) with runtime enforcement (eg through monitors); (b) by construction (eg through low-level language mechanisms); (c) with static program analysis techniques; or (d) through model checking.

None of the above can be used as a universal tool; they must be combined.

In addressing these issues and problems, we need to develop a model of contracts in an SOA that is broad enough to cater for at least QoS and security contracts. A basic requirement is the ability to seamlessly combine real-time models (for QoS specifications) and behavioural models (essential to constrain protocol implementation and enforce security). Contract models should also address discovery and negotiation. However, the formal definition of contracts should only be a first step towards the more ambitious task of providing language-based support for the programming and effective usage of such contracts.

Some contracts may be seen as wrappers that 'envelop' the code/object under the scope of the contract. Firewalls, for instance, may be seen as a kind of monitor of the contract between a machine and the external applications wanting to run on that machine. It would be interesting to investigate a language primitive to create wrapped objects that are correct by construction.

Contracts for QoS and security could also be modelled as first-class entities using a 'behavioural' approach, through interfaces. In order to tackle time constraints (related to QoS), such interfaces also need to incorporate time.

Finding languages or notations for describing timing behaviour and requirements is easy: the real challenges are in analysis. Besides syntactic extensions, the language needs to have time semantic extensions in order to allow the extraction of a timed model, eg a timed automaton. This model may be checked with existing tools such as Kronos or Uppaal, while other properties may instead be proven correct by construction (eg wrappers).

In practice, many properties can only be validated through runtime approaches. A promising direction is to develop techniques for constructing a runtime monitor from a contract, which will be used to enforce its non-violation (cf ongoing work with the Java Modeling Language and Spec#).
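One standard way to realize such a monitor in Java is a dynamic proxy that checks contract clauses around every call on the wrapped object, echoing the wrapper view of contracts described above. The sketch below is illustrative only and is not part of the COSoDIS design; the Clause interface is invented for the example.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Clause {
    void checkBefore(Method m, Object[] args);   // throws if a precondition is violated
    void checkAfter(Method m, Object result);    // throws if a postcondition is violated
}

class ContractMonitor implements InvocationHandler {
    private final Object target;
    private final Clause contract;
    ContractMonitor(Object target, Clause contract) { this.target = target; this.contract = contract; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        contract.checkBefore(m, args);            // enforce obligations before the call
        Object result = m.invoke(target, args);
        contract.checkAfter(m, result);           // and after it
        return result;
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface, Clause contract) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new ContractMonitor(target, contract));
    }
}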

In summary, our aim is to develop language-based solutions to address the above problems through the formalization of contracts as enriched behavioural interfaces, and the design of appropriate abstraction mechanisms that guide the developer in the production of contract-aware applications.

The COSoDIS project is a Nordic project funded by the Nordunet3 committee. The partners involved are the University of Oslo (Norway), Aalborg University (Denmark) and SICS (Sweden).

Link:

http://www.ifi.uio.no/cosodis/

Please contact:

Gerardo Schneider
University of Oslo, Norway
Tel: +47 22 85 29 71
E-mail: [email protected]
http://folk.uio.no/gerardo/

gCube: A Service-Oriented Application Framework on the Grid
by Leonardo Candela, Donatella Castelli and Pasquale Pagano

gCube is a new service-oriented application framework that supports the on-demand sharing of resources for computation, content and application services. gCube enables the realization of e-infrastructures that support the notion of Virtual Research Environments (VREs), ie collaborative digital environments through which scientists, addressing common research challenges, exchange information and produce new knowledge. gCube is currently used to govern the e-infrastructure set up by the European integrated project DILIGENT (A Digital Library Infrastructure on Grid-Enabled Technology).

The long-term journey towards the e-science vision demands e-infrastructures that allow scientists to collaborate on common research challenges. Such infrastructures provide seamless access to necessary resources regardless of their physical location. These shared resources can be of very different natures and vary across application domains. Usually they include content resources, application services that manipulate these content resources to produce new knowledge, and computational resources, which physically store the content and support the processing of the services. Many e-infrastructures already exist, at different levels of maturity, and support the sharing of and transparent access to resources of a single type, eg SURFNet (information), GriPhyN (services), EGEE (computing and storage). However, they are still too primitive to support a feasible realization of VREs. The DILIGENT infrastructure, released recently by the homonymous project, will overcome this limitation by supporting in a single common framework the sharing of all three types of resource.

The core technology supporting such an e-infrastructure is a service-oriented application framework named gCube. gCube enables scientists to declaratively and dynamically build transient VREs by aggregating and deploying on demand content resources, application services and computing resources. It also monitors the shared resources during the lifetime of the VRE, guaranteeing their optimal allocation and exploitation. Finally, it provides mechanisms to easily create dedicated Web portals through which scientists can access their content and services.

From the technological point of view, gCube provides: (i) runtime and design frameworks for the development of services that can be outsourced to a Grid-enabled infrastructure; (ii) a service-oriented Grid middleware for exploiting the Grid and hosting Web Services on it; and (iii) a set of application services for distributed information management and retrieval of structured and unstructured data.

Runtime frameworks are distinguished workflows that are partially pre-defined within the system; they include Grid-enabled services and application services, where the former coordinate in a purely distributed way the action of the latter, while relying on a high-level characterization of their semantics. Design frameworks consist of patterned blueprints, software libraries and partial implementations of state-of-the-art application functionality, which can be configured, extended and instantiated into bespoke application Grid services.

The service-oriented Grid middleware provides all the capabilities required to manage Grid infrastructures. It eliminates manual service deployment overheads, guarantees optimal placement of services within the infrastructure and opens unique opportunities for outsourcing state-of-the-art implementations. Rather than interfacing with the infrastructure, the software which implements the application services is literally handed over to it, so as to be transparently deployed across its constituent nodes according to functional constraints and quality-of-service (QoS) requirements. By integrating the gLite system released by the Enabling Grids for E-sciencE (EGEE) project for batch processing and management of unstructured data, gCube also allows the large computing and storage facilities provided by the EGEE infrastructure to be properly exploited. With over 20,000 CPUs and 5 million Gigabytes of storage, EGEE is the largest operational Grid infrastructure ever built.

gCube application services offer a full platform for distributed hosting, management and retrieval of data and information, and a framework for extending state-of-the-art indexing, selection, fusion, extraction, description, annotation, transformation and presentation of content. In particular, gCube is equipped with services for manipulating information objects, importing external objects, managing their metadata in multiple formats, securing the information objects to prevent unauthorized access, and transparently managing replication and partition on the Grid.

In order to promote interoperability, gCube services are implemented in accordance with second-generation Web service standards, most notably SOAP, BPEL, WSRF, WS-Notification, WS-Security, WS-Addressing, and the JSR 168 Portal and Portlet specifications.

gCube is the result of the collaborative efforts of more than 48 researchers and developers in twelve different academic and industrial research centres: Institute of Information Science and Technologies ISTI-CNR (IT), University of Athens (GR), University of Basel (CH), Engineering Ingegneria Informatica SpA (IT), University of Strathclyde (UK), FAST Search & Transfer (NO), CERN European Organisation for Nuclear Research (CH), 4D SOFT Software Development Ltd (HU), European Space Agency ESA (FR), Scuola Normale Superiore (IT), RAI Radio Televisione Italiana (IT), and ERCIM.

Information about the project and detailed information about the software, currently available in its beta version, can be found at the gCube Web site.

Links:

gCube: http://www.gcube-system.org/
DILIGENT: http://www.diligentproject.org/
EGEE: http://www.eu-egee.org/

Please contact:

Donatella Castelli, Pasquale Pagano
ISTI-CNR, Italy
E-mail: [email protected], [email protected]

Figure 1: gCube in action.

The Decision-Deck Project – Developing a Multiple Criteria Decision Analysis Software Platform
by Raymond Bisdorff and Patrick Meyer

The Decision-Deck project is developing an open-source generic Multiple Criteria Decision Analysis (MCDA) software platform composed of modular components. Its purpose is to provide effective tools for decision-aid consultants, for researchers in the field of MCDA, and for operations research teachers.

The Decision-Deck project's objective is to provide open-source software composed of various modular components, which pertains to the field of Multiple Criteria Decision Analysis (MCDA). It should give a user the ability to add, modify or simply use existing plugged-in functionalities ('plugins'). These constituents can either be complete MCDA methods, or elements common to a large range of procedures. The typical end-user of the Decision-Deck platform is an MCDA researcher, an MCDA consultant or a teacher in an academic institution. The following MCDA methods have already been implemented: sorting of alternatives into ordered classes based on an outranking relation (IRIS), a progressive best-choice method based on an outranking relation (Rubis), a best-choice method based on an additive aggregation model accepting imprecise information on the scaling coefficients (VIP), and ranking of alternatives with a set of value functions (UTA-GMS/GRIP).

In order to make these various software plugins reusable by the whole community, their implementation must be compliant with a set of predefined conventions and formats. Quite obviously, the success of the Decision-Deck project relies on a collaborative development effort from the MCDA community.

The Decision-Deck software is written in the Java programming language and is therefore platform-independent. Its latest version can be downloaded from the collaborative software development management system Sourceforge. Two kinds of implementation designs are available: a Java client which locally implements the MCDA methods (D2), and a distributed Web service and AJAX-based architecture, serving the MCDA methods from a distributed Web server (D3). For example, the Rubis choice method is implemented as such a Web service on a server at the University of Luxembourg.

One of the most valuable features of the Decision-Deck software is the effective consideration of specific roles such as decision maker, evaluator, coordinator or facilitator in a given decision analysis project. For instance, evaluators from a variety of distant locations may communicate their evaluations via their local D2 clients to the common decision analysis project under the supervision of the project coordinator, whereas the decision maker may input personal preferences via method-specific criteria tuning facilities offered in his/her local client (see Figure 1).

One of the tasks of the Decision-Deck project is to develop and maintain the Universal MCDA Modelling Language (UMCDA-ML), an XML standard which describes in a generic way the inputs and outputs of MCDA methods, as well as the different steps of a decision analysis workflow. The purpose of UMCDA-ML (see Figure 2) is to allow easy integration of MCDA Web services such as the Rubis server above, while also facilitating communication and data exchange between core components of the software platform.

The founding partners of the Decision-Deck project, which started in early 2006, are the MathRO laboratory of the Faculty of Engineering of Mons, the Lamsade laboratory of the University Paris-Dauphine, the ILIAS laboratory of the University of Luxembourg and Karmic Software Research.

Links:

http://www.decision-deck.org/
http://decision-deck.sourceforge.net/
http://sourceforge.net/projects/decision-deck/
http://ernst-schroeder.uni.lu/rubisServer
http://charles-sanders-peirce.uni.lu/bisdorff/

Please contact:

Raymond Bisdorff
University of Luxembourg
E-mail: [email protected]

Figure 1: One of the interesting features offered by the Decision-Deck software is the common availability of visualization resources, as illustrated in the picture above. The snapshot, taken from a D3 Rubis plugin session, shows the performances of the alternatives on a subset of criteria in a column chart style.

Figure 2: Summary view of part of the UMCDA-ML specifications.


Lightspeed Communications in Supercomputers
by Cyriel Minkenberg, Ronald Luijten and Francois Abel

Optics offer significant potential as the communication infrastructure of choice for future supercomputers. The exploratory OSMOSIS project has resulted in a prototype of an electronically controlled optical packet switch with an aggregate capacity of 2.5 Tb/s. This device is designed specifically to meet high-performance computing (HPC) requirements.

Recently, the rate of progress in CPU performance has slowed down considerably, mainly because of power constraints. Nowadays, the preferred approach to increasing computer performance is to use many processors in parallel, each relatively simple and power-efficient, thus giving rise to massively parallel computer architectures with thousands or even tens of thousands of processors making up a single machine. To collectively solve a problem, these processors need to communicate among each other with low latency and high bandwidth. In such a machine, the network used to interconnect the processors is a crucial factor in determining the system's overall performance.

Optical technology holds substantial promise for interconnection networks in HPC systems for a number of reasons. First, an optical switch typically consumes significantly less power than a comparable electronic switch, because its power consumption is proportional to the packet rate rather than to the bit rate; second, fibre is much better suited than copper to the transmission of high data rates over long distances; and third, optics neither generate nor are sensitive to electromagnetic interference.

In the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) project, Corning Inc and IBM Research have jointly explored this promise by developing an optical network switch prototype. The project was sponsored by the US Department of Energy. The result (see Figure 1) of this four-year endeavour is the highest-capacity optical packet switch in the world, with 64 ports running at 40 Gb/s, switching fixed-size packets (cells) of 256 bytes at an aggregate rate of 1.25 billion packets per second. A state-of-the-art electronic controller, which computes an optimal switch configuration once in every packet slot of 51.2 ns, ensures that throughput and reliability are maximized and latency is minimized.
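The headline figures follow directly from the port speed and cell size; the back-of-the-envelope sketch below reproduces them (framing overheads are ignored here and discussed further down).

// Back-of-the-envelope check of the OSMOSIS figures quoted above.
public class OsmosisTiming {
    public static void main(String[] args) {
        double lineRate = 40e9;              // 40 Gb/s per port
        int cellBits = 256 * 8;              // 256-byte cells
        double slot = cellBits / lineRate;   // 2048 bits / 40 Gb/s = 51.2 ns packet slot
        int ports = 64;
        double aggregateRate = ports / slot; // about 1.25e9 cells per second
        double capacity = ports * lineRate;  // about 2.56e12 b/s, ie roughly 2.5 Tb/s
        System.out.printf("slot = %.1f ns, cells/s = %.3g, capacity = %.2f Tb/s%n",
                slot * 1e9, aggregateRate, capacity / 1e12);
    }
}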

Optical switching has three main drawbacks, namely the absence of a practical, dense, fast-access optical memory technology, the complexity of its optical control and arbitration, and the cost of fast optical switching components. The OSMOSIS project addressed these drawbacks by adopting a hybrid electro-optical approach, using electronics to implement buffering and scheduling, and optics for transmission and switching.

HPC switches must deliver very low latency, an extremely low bit-error rate, a high data rate, and extreme scalability. Moreover, they must be efficient for bursty traffic, with demands ranging from very small messages (eg collective operations and syncs) to bulk dataset transfers (eg memory pages). The main challenge for OSMOSIS was the short packet duration, which resulted in a small overhead budget of 12.8 ns for switching time, synchronization time, line coding, forward error correction and header information. This was achieved by using Corning's high-speed semiconductor optical amplifier (SOA) technology, which provided sufficiently fast switching times (< 3 ns).

Moreover, unlike most other optical switches, which are designed to be reconfigured at a relatively slow rate using a circuit-switching or container-switching approach, OSMOSIS was designed to perform just-in-time packet-by-packet switching, reconfiguring the data path every 51.2 ns. This put a heavy burden on the controller, which was addressed by IBM's novel highly distributed controller architecture. The challenge was magnified by being limited to using only field-programmable gate arrays (FPGAs), for reasons of cost and flexibility. The controller's final implementation uses a 36-layer midplane (see Figure 2) holding 40 daughter cards and a total of 48 high-end FPGAs.

Figure 1: The OSMOSIS optical switch prototype, partially populated.
Figure 2: The midplane of the OSMOSIS controller (57×43 cm2).

The key requirement in computing systems is low latency, as latency typically translates directly into idle processor time. To meet the requirement of less than 1 μs latency measured application-to-application, we designed novel control algorithms to reduce latency by up to 50%. Further requirements for our prototype system included a bandwidth efficiency of 75% user payload, a maximum link utilization of at least 95%, a bit-error ratio of less than 10⁻²¹, and scaling capability to support more nodes (≥ 2048) and higher line rates (≥ 160 Gb/s).

The OSMOSIS project has demonstrated the viability of SOA-based optical packet switching. It is the first system-level optical packet switch with more than two ports operating at 40 Gb/s that does not resort to container switching.

OSMOSIS also sports a number of architectural and algorithmic innovations. For instance, it uses a combination of a speculative transmission scheme with dual receivers per output to significantly reduce latency. Its deeply pipelined controller architecture uses multiple independent scheduling engines in parallel to achieve high utilization. Its highly distributed nature allows physical distribution across multiple FPGA devices to manage complexity. At the heart of the controller is one of the world's most complex board designs (36 layers, 13,000 wires, 45,000 connections), which was awarded a Mentor Graphics PCB Technology Leadership Award. It also features one of the highest concentrations of high-end FPGAs in a single system.

To make the OSMOSIS switch architecture commercially viable and fit for practical deployment, drastic cost reductions through dense integration of optical components and ASIC (application-specific integrated circuit) integration of controller logic will be needed. The critical technology to make this level of integration cost-effective is already under development.

This research is supported in part by theUniversity of California.

Link:

http://www.zurich.ibm.com/~fab/Osmosis/

Please contact:

Cyriel Minkenberg
IBM Research, Switzerland
E-mail: [email protected]

Mobile-Based Wireless Sensor-Actuator Distributed Platform in Pervasive Gaming
by Pär Hansson and Olov Ståhl

Wide-area wireless sensor/actuator networks can be used across a broad range of application domains. Within the Integrated Project on Pervasive Gaming (IPerG), we use these types of systems to establish a computer-supported citywide role-playing game. SICS develops one such platform, with research spanning such areas as wireless sensor/actuator hardware platforms, distribution software platforms, configuration software, and tools for user-created content.

IPerG is an EU-FP6 project on Pervasive Gaming - games that integrate the technical approaches of computer gaming with emerging interface, wireless and positioning technology to create game experiences that combine both virtual and physical game elements. IPerG researches technical, design and business aspects of pervasive games. Whereas the player of a 'regular game' can participate in a game only at certain places and times, pervasive games expand the boundaries spatially (eg playing on the streets, across the globe, use of positioning), temporally (eg interlacing gaming with everyday life), and socially (eg non-players can be invited, players can produce content).

The project's infrastructure research theme is focused on identifying the requirements that pervasive gaming puts on supporting technology such as devices, software platforms and networks. One aspect that makes pervasive games different from traditional PC- or console-based games is that the players are often required to move around in the physical environment. This has several implications for the design of software platforms that support such games, and often means that techniques and principles used to achieve today's computer-based multiplayer games need to be modified or cannot be used at all. PC- and console-based multiplayer games typically assume a networking environment where connections are stable and latencies are low or moderate. This is different from many pervasive games, in which players are mobile and the use of radio-based communication means that the risk of high latencies and frequent disconnections cannot be neglected. One of the aims of the IPerG project is to develop platforms that are adapted to the specific characteristics of pervasive games, including:
• frequent disconnection: devices will often move in and out of connection
• mobile players: players move about, forming ad-hoc arrangements of devices
• lightweight: must be able to run on low-powered devices, especially phones
• heterogeneous: needs to run over multiple networks including WLAN, GSM/GPRS/3G and Bluetooth
• sensors/actuators: use of sensor or actuator hardware, for instance as part of purpose-built game props.

One of the platforms developed by SICS within IPerG is called PART (Pervasive Applications RunTime). A typical PART game session consists of a number of handheld or embedded devices, eg mobile phones or microcontrollers, which are connected to some sensor or actuator hardware. On each such device runs a PART process, which is an application that has been built using the PART middleware. These processes are in turn connected to each other, network-wise, via point-to-point connections, which allow them to communicate and exchange data using the PART protocol. Such network connections can be set up in an ad-hoc fashion, since PART does not dictate any particular connection topology (eg a star-shaped client-server topology). Furthermore, it supports several communication protocols such as tcp, http, and btspp. In addition, PART also provides support for distributed objects, which can be used to represent a game state that needs to be shared and synchronized among a set of game processes.

A PART process that connects to some sensor hardware will typically create a distributed object representing this sensor. The process will also dynamically add properties to this object, representing different types of sensor readings (eg temperature and humidity). In a similar way, hardware that controls actuators such as, for example, a light switch, can be represented by distributed objects and properties. One of the key features of the PART object model is that objects created by one process can be discovered and accessed by other processes. This allows remote processes to read or set the values of properties of such objects, providing access to the hardware they represent.
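PART's programming interface is not shown in this article, so the short Java sketch below is purely illustrative: the class and method names are invented, and a real middleware would replicate property changes to remote peer processes rather than keep them in a local map. It is only meant to make the idea of a shared sensor object with dynamically added properties concrete.

// Illustrative sketch only: not PART code. Every name below is invented, and
// a real middleware would propagate property changes to connected peers.
import java.util.HashMap;
import java.util.Map;

public class SharedSensorObject {
    private final String id;
    private final Map<String, Object> properties = new HashMap<String, Object>();

    public SharedSensorObject(String id) {
        this.id = id;
    }

    /** Called by the process that owns the sensor hardware. */
    public synchronized void setProperty(String name, Object value) {
        properties.put(name, value);
        // Here the middleware would send the new value to remote processes.
    }

    /** Called by (remote) processes that have discovered this object. */
    public synchronized Object getProperty(String name) {
        return properties.get(name);
    }

    public static void main(String[] args) {
        SharedSensorObject sensor = new SharedSensorObject("env-sensor-1");
        sensor.setProperty("temperature", 21.5); // degrees Celsius
        sensor.setProperty("humidity", 48);      // per cent
        System.out.println(sensor.id + ": temperature = "
                + sensor.getProperty("temperature"));
    }
}

A game process on another device would then discover the object and call getProperty("temperature") to drive its game logic, which is roughly the discovery-and-access pattern described above.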

IPerG has implemented and demonstrated its results in a number of showcase games. These include Live-Action Role Playing games and Location-based games, both of which have been focused around ubiquitous computing concepts for interacting with the physical world. Game technology setup is centred around players roaming around a city, carrying a mobile phone and any number of extra devices, all highly connected to players of the same game.

Typically these 'ubicomp' games are not played on the mobile screen; rather, they emphasize real-world interaction through other means and employ the phone as a communication node running the PART process. Extra devices, carried or placed in the environment, are automatically discovered using Bluetooth and then become part of the gaming experience. Support for devices is easily extensible and ranges from custom-built modular hardware that can be fitted with arbitrary sensors/actuators (eg touch, light, temperature, vibration, audio, RFID etc) to off-the-shelf units such as GPS.

The partners in the IPerG project are SICS, the Interactive Institute and Gotland University in Sweden, Nokia Research and the University of Tampere in Finland, the University of Nottingham and Blast Theory in Great Britain, and the Fraunhofer Institute and Sony Europe in Germany. The project has been running since September 2004 and will continue until February 2008.

Links:
http://part.sourceforge.net/
http://pimp.sourceforge.net/
http://www.pervasive-gaming.org/
http://prosopopeia.se/

Please contact:
Pär Hansson
SICS, Sweden
Tel: +46 8 633 1575
E-mail: [email protected]

Figure 1: A small PART session consisting of processes running on a PC, a phone and a microcontroller.

Figure 2: Typical mobile technology setup for the IPerG ubicomp demo game, with GPS, RFID reader and graphics tablet connected to players.


R&D and Technology Transfer

An Informatics Research Contribution to the Domotic Take-Off
by Vittorio Miori, Dario Russo and Massimo Aliberti

While many household appliances and hi-tech devices populate our lives, each one is independent of the others. Informatics research on domotics has the target of achieving interoperability between all the devices available in a home, automating some particular sequences of tasks and integrating the home with the outside world and especially with the Internet. The lack of an open and universal domotic standard is the principal reason for the poor use of domotics in real environments, so a new software architecture based on the SOA (service-oriented architecture) model and a domotic XML language has been created in order to realize a homogenous and independent vision of the home environment.

Domotics is the discipline that investigates how to realize an intelligent home environment. Benefits of the use of domotic technology include improved energy efficiency, on-command entertainment, increased safety, greater independence and access to external services. This is particularly important in addressing the needs of elderly and disabled people. These services are generally based on hardware and software infrastructures commonly called domotic systems (X10, Konnex, LonWorks, UPnP, HAVi, Jini etc), which support the main available communication standards (Ethernet, FireWire, Bluetooth, ZigBee, IrDA, proprietary buses etc). However, they are often developed by private companies and therefore distributed as proprietary and closed technology.

At present, too many domotic systems are crowding the market, and these systems are rarely interoperable (Figure 1). This represents a serious obstacle to solid market growth. Consumers should be able to choose devices according to their requirements and to criteria such as cost, performance, trends and confidence, without having to worry about issues of compatibility with their existing systems. Unfortunately, common market logic currently binds consumers, forcing them to purchase only devices that conform to a single system in order to obtain a suitable level of interoperability. Further, in our opinion none of the domotic protocols that are currently available will ever be able to achieve a truly leading role.

We thus propose the introduction of a new communication infrastructure and a new XML language to enable the exchange of information between nodes belonging to distinct domotic systems. This new infrastructure, called Domotics Network (DomoNet), is based on a service-oriented architecture (SOA) model, in which services coincide with the functionalities offered by the devices. DomoNet uses Web services technology, a standard communication paradigm in distributed SOA contexts, and uses XML and XML schemas to define a language that describes datatypes, messages, devices and their services. This universal language, called DomoML, allows standard communication between the devices inside DomoNet.

As shown in Figure 2, DomoNet provides a network infrastructure for ambient devices in which each domotic system represents a sub-network connected to it through a dedicated gateway.

Figure 1: At home, various devices belonging to different brands and distinct domotic systems don't speak to each other.

Figure 2: DomoNet architecture.

From a functional point of view, DomoNet is constituted by a Web service, called domoNetWS, and by a set of modules that work as gateways, called techManagers, to deal with specific domotic systems. When an event generated by a device is captured by its techManager (eg a button is pressed), domoNetWS identifies it and finds the configured device and service (eg the switching on of a light) in order to execute the action.

The main task of domoNetWS is to correctly route messages between techManagers using a table containing pairs of addresses: a DomoNETAddress, assigned to every device that is connected to DomoNet, and the IP address (and a TCP port) of the techManager which handles the specific domotic system.
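The article does not give domoNetWS's actual data structures, so the following Java fragment is a minimal sketch under the assumption that a simple in-memory map suffices; the class names, addresses and port numbers are invented. It only illustrates the pairing of a DomoNet address with the IP address and TCP port of the responsible techManager.

// Illustrative sketch only: not domoNetWS code. All names, addresses and
// ports below are made up for the example.
import java.util.HashMap;
import java.util.Map;

public class DomoRoutingTable {

    /** Network endpoint of a techManager gateway. */
    public static class Gateway {
        final String ip;
        final int port;
        Gateway(String ip, int port) { this.ip = ip; this.port = port; }
        @Override public String toString() { return ip + ":" + port; }
    }

    private final Map<String, Gateway> table = new HashMap<String, Gateway>();

    /** Registers the gateway responsible for a device address. */
    public void register(String domoNetAddress, String ip, int port) {
        table.put(domoNetAddress, new Gateway(ip, port));
    }

    /** Returns where a message for the given device must be forwarded, or null. */
    public Gateway route(String domoNetAddress) {
        return table.get(domoNetAddress);
    }

    public static void main(String[] args) {
        DomoRoutingTable routes = new DomoRoutingTable();
        routes.register("house1/konnex/light-42", "192.168.1.10", 5050); // made-up entries
        routes.register("house1/upnp/tv-1", "192.168.1.11", 5051);
        System.out.println(routes.route("house1/konnex/light-42")); // prints 192.168.1.10:5050
    }
}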

DomoNetWS considers the house to be an Internet node that can share environments and device functionalities with any other DomoNetWS placed on the Internet. For example, functionalities and states can be shared between a user's main residence and the holiday home.

The system prototype developed at the ISTI-CNR Domotics Laboratory in Pisa is now open-source software released under GPL licence. It was written using Java and open-source libraries and tools. The prototype has been tested and is currently under further development. In its present state, it can correctly import and interact with all available devices such as televisions, washing machines and lights, regardless of the specific domotic system to which they belong.

The software has been tested using two domotic systems with different functionalities: Konnex and UPnP. These systems have two fundamental differences: first, in the ease with which devices can be configured (in Konnex this must be done by specialized staff, systems and software, whereas UPnP has an automatic plug-and-play autoconfiguration system); and second, in the functions provided (Konnex is oriented towards lighting, energy efficiency, safety and comfort, while UPnP focuses on multimedia applications).

Many activities based on the results of this project have been or will soon be launched. Currently, the most important effort concerns the definition of an XML language that is able to add semantics to domoNet in order to increase its usability. An initial part of this study is the development of an autoconfigurable universal home device controller, designed to be user-friendly for everyone, especially for elderly and disabled people.

Links:
DomoNet home page: http://sourceforge.net/projects/domonet
ISTI-CNR Domotic Laboratory: http://www.domoticslab.it

Please contact:
Vittorio Miori
ISTI-CNR, Italy
E-mail: [email protected]

Natural Interaction - A Remedy for the Technology Paradox
by Lea Landucci, Stefano Baraldi and Nicola Torpei

The advent of new technology means we are continuously offered high-tech devices that should make our life more agreeable, safe and pleasant. Do they succeed? In his book "The Psychology of Everyday Things", Donald Norman, a leading cognitive psychologist, warns us about the technology paradox: innovative technology risks making our life more complex every day. The intention of this warning is to make us conscious of the importance of 'human-centred design', particularly when we talk about human-computer interaction.

The aim of Natural Human-Computer Interaction (NHCI) research is to create new interactive frameworks that integrate human language and behaviour into tech applications, focusing on the way we live, work, play and interact with each other. Such frameworks must be easy to use, intuitive, entertaining and non-intrusive. They must support interaction with computerized systems without the need for special external input-output equipment like mice, keyboards, remote controls or data gloves. Instead, these will be replaced by hand gestures, speech, context awareness and body movement.

An interesting challenge for NHCI is to make such systems self-explanatory by working on their 'affordance' and introducing simple and intuitive interaction languages. Ultimately, according to M. Cohen, "Instead of making computer interfaces for people, it is of more fundamental value to make people interfaces for computers".

Our research team at the Media Integration and Communication Center (MICC) is working on natural interactive systems that exploit Computer Vision (CV) techniques. The main advantage of using visual input in this context is that it allows users to communicate with computerized equipment at a distance, without the need for physical contact with the equipment to be controlled. Compared to speech commands, hand gestures are advantageous in noisy environments, in situations where speech commands would be disturbed, and for communicating quantitative information and spatial relationships.

We have developed a framework called VIDIFACE to obtain hand gestures and perform analysis of smart objects (detection and tracking) in order to understand their position and movement in the interactive area. VIDIFACE exploits a monochrome camera equipped with a near-infrared band-pass filter that captures the interaction scene at thirty frames per second. By using infrared light, the computer vision is less sensitive to visible light and is therefore robust enough to be used in public spaces. A chain of image-processing operations is applied in order to remove noise, adapt to the background and find feature dimensions, optimized to obtain a real-time flow of execution.
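The individual operations in this chain are not spelled out in the article, so the Java fragment below is only a rough illustration of one plausible per-frame step, assuming an adaptive background model and a fixed difference threshold; the class name, constants and structure are invented for the sketch and are not VIDIFACE code.

// Illustrative sketch only: not VIDIFACE code. A grayscale frame is a 2D
// array of 0-255 pixel values; a slowly adapting background model absorbs
// lighting drift, and thresholding the difference yields a foreground mask.
public class FrameChain {

    private double[][] background;             // running background model
    private static final double ALPHA = 0.05;  // background adaptation rate
    private static final int THRESHOLD = 40;   // foreground difference threshold

    /** Returns a mask that is true where the pixel differs from the background. */
    public boolean[][] process(int[][] frame) {
        int h = frame.length, w = frame[0].length;
        if (background == null) {
            background = new double[h][w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    background[y][x] = frame[y][x];
        }
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                mask[y][x] = Math.abs(frame[y][x] - background[y][x]) > THRESHOLD;
                // Adapt the background slowly so gradual changes are absorbed.
                background[y][x] = (1 - ALPHA) * background[y][x] + ALPHA * frame[y][x];
            }
        }
        return mask;
    }
}

Blob detection and feature measurement would then run on the returned mask; in a real-time system the loop would also be optimized, for example by operating on subsampled frames, to keep up with the thirty-frames-per-second input.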

What Kind of Interfaces?
Our early results concerned vertical screens showing multimedia contents. Users point at a content display in order to access information: one significant installation is the PointAt system at the Palazzo Medici Riccardi museum in Florence. It is considered to be a good vanguard experiment in museum didactics, and has been functioning successfully since 2004. With the PointAt system, users can view a replica of the Benozzo Gozzoli fresco 'Cavalcata dei Magi' in the palace chapel, and request information on the displayed figures simply by pointing at them. Information is provided to the user in audio.

More recently, we have focused our research on TableTop multi-user frameworks that exploit table-like horizontal screens. This decision was driven by the concept of 'affordance'. According to D. Norman, this is the ability of an object to suggest the correct way in which it should be used. A table is a natural place to share with other people and can also be used to hold other objects.

Multi-user systems often involve interfaces that are rich in interactive actions and multimedia content. Our early solution used richer interaction languages to develop CV algorithms able to recognize different gestures and hand postures. However, we observed that such a solution increases the cognitive load for users, who were forced to learn different gestures before using the system. This went against the concept of a self-explanatory system.

During the last year we have worked on different solutions for developing new natural interaction frameworks that exploit a minimal set of natural object-related operations (eg zoom-in for 2D images, zoom and rotate for 3D objects, open and turn page for books, unroll for foulards and so on). An interesting result is our Interactive Bookshop, which was recently prototyped and will soon be installed in the bookshop of Palazzo Medici Riccardi in Florence to display artistic objects as digital replicas.

Another solution exploits Tangible User Interfaces (TUIs) to enrich interaction in the case of hinged interfaces. In our implementation of TUIs, users employ 'smart' physical objects in order to interact with tabletop interfaces. Obviously the employment of such objects must be as intuitive as possible. The TANGerINE project, developed in collaboration with the Micrel Lab of the University of Bologna, exploits a 'smart cube': a wireless Bluetooth wooden object equipped with a sensor node (tri-axial accelerometer), a vibra motor, an infrared LED matrix on every face and a microcontroller. We chose a cube once again because of its clear affordance: users intuitively consider the uppermost face 'active' (as if reading the face of a dice), thus conceiving the object as able to embody six different actions or roles.
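The TANGerINE firmware is not shown in the article, so the following Java fragment is a hypothetical sketch of how a tri-axial accelerometer reading could be mapped to the uppermost (active) face when the cube is at rest: the axis most aligned with gravity, together with its sign, selects one of the six faces. The face numbering and the method name are assumptions of the sketch.

// Illustrative sketch only: not TANGerINE code. Readings are in g units,
// and the face numbering is an arbitrary assumption of this example.
public class CubeFace {

    /** Returns a face index 0-5 for a cube at rest, given accelerations in g. */
    public static int uppermostFace(double ax, double ay, double az) {
        double absX = Math.abs(ax), absY = Math.abs(ay), absZ = Math.abs(az);
        if (absZ >= absX && absZ >= absY) return az > 0 ? 0 : 1; // z axis up or down
        if (absY >= absX)                 return ay > 0 ? 2 : 3; // y axis up or down
        return ax > 0 ? 4 : 5;                                   // x axis up or down
    }

    public static void main(String[] args) {
        // Cube lying flat with its z axis pointing up: face 0 is the active one.
        System.out.println(uppermostFace(0.02, -0.01, 0.98)); // prints 0
    }
}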

Links:
http://www.micc.unifi.it
http://www.micc.unifi.it/vidiface
http://www.palazzo-medici.it/ita/sperimenta.htm
http://www.micc.unifi.it/interactive-bookshop
http://www.tangerineproject.org

Please contact:
Lea Landucci, Stefano Baraldi and Nicola Torpei
MICC Media Integration and Communication Center, University of Florence, Italy
E-mail: [email protected], [email protected], [email protected]

Figure 1: VIDIFACE interactive bookshop, a tabletop multi-user natural interaction system.

PointAt, an interactive replica of the Benozzo Gozzoli fresco 'Cavalcata dei Magi', installed in the Palazzo Medici Riccardi Museum in Florence, Italy.


SPONSORED BY ERCIM

ISWCS'07 - IEEE International Symposium on Wireless Communication Systems
by Yuming Jiang

The IEEE International Symposium on Wireless Communication Systems 2007 (ISWCS'07) was held in Trondheim, Norway, 16-19 October 2007. It was the fourth conference in the series and provided a platform for wireless communication researchers and technologists to identify and discuss technical challenges and business opportunities in wireless communication systems.

In 2007, ISWCS became an IEEE-sponsored conference. Specifically, ISWCS'07 was sponsored by the IEEE and the IEEE Vehicular Technology Society. In addition, it was technically co-sponsored by the IEEE Communications Society and the Nordic Radio Association. ISWCS'07 also received strong institutional support from the Norwegian University of Science and Technology (NTNU), the University of Agder, and the Simula Research Laboratory. The conference further received sponsorship from the City of Trondheim, the Research Council of Norway, Telenor Research and Innovation, The MathWorks and ERCIM.

The technical program of ISWCS'07 was very strong; in addition to paper sessions, it included keynote speeches, a panel discussion and tutorials. There were four keynote speeches given by internationally recognized experts from both academia and industry. Professor David Gesbert (Eurecom Institute) gave a talk on adaptation, coordination and distributed resource allocation in wireless networks. Dr. Hans Martin Ritt (The MathWorks) presented his view on the role of new development technologies in developing complex wireless systems. Professor Andreas F. Molisch (Mitsubishi Electric Research Labs and Lund University) highlighted how MIMO antenna and channel properties impact wireless systems. Professor Mario Gerla (UCLA) talked about vehicle-to-vehicle communications and networking.

One panel discussion was organised. The panel members were Richard Savage (Qualcomm Europe), Stein Hansen (Telenor), Boon Sain Yeo (SensiMesh), Ralf Müller (NTNU), Jarle Boe (Texas Instruments), and Victor Bahl (Microsoft Research). There were lively discussions during the panel session, and the panel provided a good forum for both the panelists and the audience to debate the future of wireless communications.

Six tutorials were provided on 16 October 2007. These tutorials were carefully chosen from the proposals received by the conference, and their topics covered a wide range of active areas in wireless communications, networking and systems. They included "Multipath Characterization of Antennas and Mobile Terminals for Diversity and MIMO Systems" by Per-Simon Kildal and Jan Carlsson (Chalmers University of Technology), "Ultra-Wideband Technology and its Standardization for Wireless Personal Area Networks" by Huan-Bang Li (NICT), "From Cognitive Radios to Cognitive Wireless Networks" by Petri Mahonen and Marina Petrova (RWTH Aachen University), "Multiuser Communications" by Ralf Müller (NTNU), "Introduction to ITS Communication Solutions" by Runnar Søråsen and Per Jarle Furnes (Q-free), and "RFID Systems" by Boon Sain Yeo (SensiMesh). The conference gave each registered participant free access to one tutorial, and all six tutorials were well attended.

The conference received 369 submissions, about two thirds of which were full papers. This figure exceeded the number of submissions to previous ISWCS conferences and showed that ISWCS is on the right path to greater recognition and international popularity. The review process involved all members of the Technical Program Committee and many other reviewers, all of whom were recognized experts in their respective fields. Care was taken that the topic of each submitted paper fell within the reviewers' area of expertise. The review process followed standard single-blind peer-reviewing practice, ie the reviewers know who the authors of the manuscript are, but the authors do not know who the reviewers are. For most papers, at least three independent reviews were received. In a very few cases, we were satisfied with two reviews, provided the two reviews were consistent in their conclusions. As a result of the review process, 170 papers were accepted for presentation at the conference and inclusion in the conference proceedings, of which 107 were oral presentations and 63 were poster presentations.

The conference received more than 210 registrations, and approximately 180 people attended the conference each day. A conference dinner was arranged; about 120 people registered for and attended the dinner, where the best student paper awards were given. ISWCS'07 awarded two Best Student Papers, chosen by a committee consisting of the General Chair, TPC Chair, Publication Chair and Speaker Chair. The decision was based on each paper's originality, depth and potential impact, as well as the author's presentation at the conference.

I thank all who contributed to the conference and made it a great success in Trondheim. I am particularly grateful to the conference organising committee, the technical programme committee, the reviewers, authors, presenters and sponsors, and the technical societies. My special thanks go to the students of NTNU and the University of Agder for their great effort with the on-site arrangements.

ISWCS'08 will be held from 21-24 October 2008 in Reykjavik, Iceland. I wish them all the best for another successful conference.

Link:

http://www.iswcs.org/iswcs2007/

Please contact:

Yuming Jiang, ISWCS General Chair
NTNU, Norway
E-mail: [email protected]

Events


INRIA Celebrated 40 Years of Activity

INRIA celebrated its 40th anniversary with a large forum on information and communication science and technologies, and their past, present and future socio-economic impact, at the Grand Palais in Lille on 10-11 December 2007. The aim was to analyse and present the 'information revolution' at the turn of the millennium, the genuine upheaval in economic and research activities, in employment organisation and social practices.

Over the past 40 years, information and communication science and technology (ICST) has entered every area of socio-economic life. It has revolutionised social practices, changed our working lives, sped up economic cycles and imposed new cultural habits and leisure activities. Information technology and software, the Internet and Web, plus a multitude of spin-offs, have developed on a scale never before witnessed in human history. And yet, the public at large has little awareness of the science behind these innovations.

The objective of the forum was to launch a debate on society, measuring how far we have come, analysing the changes made and attempting to explain how and why some types of technology have been so successful. Once we have that knowledge, we can look ahead to the near future, anticipate forthcoming changes, define good practices and set into motion new innovations to guarantee economic development in the years to come. The forum brought together scientists from all disciplines and key guest speakers, and was composed of four plenary sessions and three theme sessions.

The plenary sessions featured the topics "Computer Science, history in the making", "Digital networks and the development of a new industrial model", "Information and Communication Technologies: change of scale and new challenges", and "New technologies: a cultural and cognitive revolution". Chaired by experts, historians, sociologists and philosophers (Michel Serres closed the forum speaking about the cultural and cognitive revolution), the theme sessions focused on various aspects of ICST: historical perspectives, economic issues, societal impact and scientific challenges. The scientific theme session discussed the questions "Is computer science really a science?" with Robin Milner, Cambridge University, and Gilles Dowek, INRIA and Ecole Polytechnique; "Computer Science and Biology: a creative convergence?" with Luca Cardelli, Microsoft Research, Cambridge, Jean-Frédéric Gerbeau, INRIA, and Jacques Haiech, University of Strasbourg 2; and "What scientific challenges does the future hold in store?" with Olivier Faugeras, INRIA, Bruno Sportisse, Cerea-ENPC/INRIA, and Vincent Quint, INRIA and W3C.

The economic sessions focused on the three topics "Has computing transformed engineering?", "Labels and consumers: the stakes involved in the shopping of the future?" and "From research to business: process dynamics". Speakers included Jean-François Abramatic, Ilog; Patrick Johnson, Dassault Systèmes; Armand Hatchuel, CSG, Ecole des Mines de Paris; Olivier Iteanu, specialist lawyer in information technology; Henri de Maublanc, Aquarelle.com, Acsel; Arnaud Mulliez, Auchan France; David Simplot-Ryl, INRIA; Jérôme Chailloux, ERCIM; Jean-Marie Hullot; and Thierry Pollet, Alcatel-Lucent.

The third session discussed society-based topics such as "Surveillance society – the reality of tomorrow?" (Barbara Cassin, CNRS, University Paris 4, author of 'Google-moi'; Sébastien Canevet, specialist in Internet law; Alex Türk, French Senator, CNIL); "The individual in network: new cultural practices?" (Eric Bruillard, University Paris 12, ENS Cachan; Jean-Pierre Dalbera, French Ministry of Culture; Patrice Flichy, University Marne la Vallée; Jean-Michel Salaün, University of Montréal, EBSI); and "Will the future of art go digital?" (Norbert Corsino, choreographer, n+n corsino; Philippe Morel, architect, EZCT; Steve Sullivan, Industrial Light & Magic).

As part of the celebrations, INRIA also organised an exhibition devoted to the history of information and communication science and technology. More than thirty demonstrations provided an overview of results today. The exhibition centred on an 'innovation village' offering scientific demonstrations, films and activities on a wide variety of subjects: health and biology, art and science, the virtual world, sustainable development, robotics, search engines, networks and sensors, languages, human-machine interaction, etc.

More information: http://www.inria.fr/40ans/

Michel Cosnard, CEO and chairman of INRIA, addresses the audience. Exhibitions.

© INRIA / photos: Studio 9


ECSA 2007 - First European Conference on Software Architecture
by Carlos E. Cuesta and Paloma Cáceres

The First European Conference on Software Architecture (ECSA 2007) was held in Aranjuez, near Madrid, on 24-26 September 2007. Conceived as the premier meeting place for European researchers in the area of software architecture, it has now become firmly established as one of the most prominent worldwide events in the field.

Software architecture has just begun its second decade as a discipline within software engineering. During this period, its role in the engineering and design of complex software-intensive applications has become more and more important and widespread. The field is now experiencing the coexistence of component-based and service-oriented architectures, and because of that, this is a time of changes, which could even lead to a paradigm shift. From a general perspective, architecture presents itself as a key aspect in the design, development and evolution of software systems; the latter is perhaps the most challenging issue in the field today.

Due to the relevance of these issues, the existing successful series of European workshops on software architecture has evolved into a number of fully-fledged European conferences. Thus, the follow-up to the meetings held in the United Kingdom in 2004 (LNCS 3047), Italy in 2005 (LNCS 3527), and France in 2006 (LNCS 4344) has been the first edition of the European Conference on Software Architecture (ECSA 2007). This took place in Aranjuez (Madrid), Spain, on 24-26 September 2007 (LNCS 4758).

The conference attracted the interest of researchers in the field, increasing the number of submissions with respect to previous workshops. An initial profile of 89 abstracts provided a total of 62 contributions from 25 countries. From these, eighteen were selected for presentation, and only five were accepted as full papers. Moreover, sixteen additional submissions were selected as posters; these were presented in a very intense poster session, held as part of the main program. The selected papers covered many areas and promoted fruitful discussions among the participants. The structure of the program grouped them as follows:
• architecture description, with an emphasis on evolution
• architecture analysis
• architecture-based approaches and applications
• challenges in software architecture
• service-oriented architectures.

Furthermore, there were three keynote talks during the conference, given by the following outstanding researchers:
• Professor David Garlan from Carnegie Mellon University (USA), who presented the invited lecture "Software Architectures for Task-Oriented Computing"
• Professor Ron Morrison from the University of St Andrews (UK), who in his lecture described "An Active Architecture Approach to Dynamic Systems Co-Evolution"
• Professor Michael Papazoglou from Tilburg University (The Netherlands), who asked "What's in a service?" and discussed in detail the consequences and ideas around this concept.

Due to the success of ECSA 2007 and with the collaboration of all the participants, ECSA has already become one of the worldwide reference forums for software architecture. In addition, plans for future editions already include a joint event with WICSA, the premier meeting in the field, to be held in 2009.

The ECSA promoters would like to thank the Spanish Ministry of Education and Science, the General Directorate of Universities and Research (DGUI) of the autonomous Government of Madrid, and Rey Juan Carlos University for their financial support; Kotasoft SI Ltd, Kybele Consulting, Open Canarias and InterSystems for their sponsorship; and the Spanish Office of the W3C for their kind collaboration.

Link:

http://www.kybele.es/ecsa/

Please contact:

Esperanza Marcos, SpaRCIM, Spain
E-mail: [email protected]

Follow your calling

To find out about our opportunities, it's here and now: http://www.inria.fr/travailler/index.en.html

INRIA in France is at the heart of the information society and is offering an outstanding environment for research in exciting scientific fields: networks, software security, complex systems, simulation, virtual reality, modelling living systems.

INRIA is the French National Institute for Research in Computer Science and Control. 3,800 people, including 2,900 scientists (INRIA and partners), work in our research centres, located in 7 different regions of France.

In 2008, we offer 150 post-doctoral positions within cutting-edge international research teams.

Advertisement


Announcements


SPONSORED BY ERCIM

WWIC 2008, the 6th International Conference on Wired/Wireless Internet Communications
Tampere, Finland, 28-30 May 2008

Next generation mobile networks will be based on Internet core networks and wireless access networks. The need for efficient merging of the wired and wireless infrastructure, as well as the new multimedia services and applications of next generation networks, calls for novel network architectures, protocols and traffic-related mechanisms.

WWIC addresses research topics such as the design and evaluation of protocols, the dynamics of the integration, the performance tradeoffs, the need for new performance metrics, and cross-layer interactions. The goal of the conference is to present high-quality results in the field, and to provide a framework for research collaboration through focused discussions that will designate future research efforts and directions. In this context, the program committee will accept only a limited number of papers that meet the criteria of originality, presentation quality and topic relevance.

WWIC is a single-track conference which has reached, within five years, the highest level of quality, reflected both in the level of participation and in the acceptance ratio and the amount and quality of submitted papers.

Conference Topics
The conference objectives will be pursued through highly technical sessions organised thematically and keynote talks offered by recognized experts. Topics of interest to WWIC 2008 include (but are not limited to) the following:

AAA in mobile environments, ambient networks, ad-hoc mobile networks, blended network configurations, beyond-3G network technologies, cross-layer interactions, economic issues of wireless networks, end-to-end Quality of Service support, handover techniques, heterogeneous wireless access networks, hybrid wired/wireless environments, integration of wired and wireless networks, mobile service level agreements/specification, mobility management, network design and network planning, network mobility, network coding in mobile networks, network security in mobile environments, performance evaluation of wireless systems, pricing, charging and accounting in wireless networks, QoS routing in mobile networks, QoS signalling in mobile environments, resource management and admission control, service creation and management for wireless, simulation for next generation mobile networks, traffic characterisation and modelling, traffic engineering, transport protocols and congestion control, wireless mesh networks, wireless multi-hop networks, wireless multimedia systems, wireless network monitoring, wireless sensor networks.

More information:

http://wwic2008.cs.tut.fi/

CALL FOR PARTICIPATION

5ECM, the Fifth European Congress of Mathematics
Amsterdam, 14-18 July 2008

5ECM is the fifth in a series of highly successful congresses that cover the broad range of the mathematical sciences, from very applied to very pure, and which are organised under the auspices of the European Mathematical Society (EMS). This year's congress will be organised in Amsterdam, under the special patronage of the Koninklijk Wiskundig Genootschap (Royal Dutch Mathematical Society – the KWG), by the mathematical departments of the VU University, the University of Amsterdam, and CWI. The preceding ECMs were held in Paris (1992), Budapest (1996), Barcelona (2000) and Stockholm (2004).

The scientific committee of 5ECM, headed by Lex Schrijver (CWI and University of Amsterdam), has composed a wonderful programme with ten plenary talks by outstanding mathematicians, three science lectures by top scientists and thirty-two invited talks by top specialists. A prize committee chaired by Rob Tijdeman (Leiden University) is busy selecting ten of the most promising young European mathematicians, who will each be awarded a prize of €5000 and who will be invited to present a talk. In addition, there are 22 mini-symposia to learn about the newest developments in the various areas of specialization of mathematics.

As a special activity we mention the presentation of the Brouwer medal, which is awarded once every three years by the KWG. The Brouwer medal ceremony consists of a laudatio for the golden Brouwer medal winner and a lecture by the medal winner, followed by a welcome reception for the participants and accompanying persons of 5ECM. This ceremony will be organised on the evening of the first day of the congress.

The ECM congresses do not have submitted talks or papers; however, registered participants have the opportunity to present their mathematical work in the form of a poster.

The local organisation is in the hands of André Ran (VU University), Herman te Riele (CWI Amsterdam), and Jan Wiegerinck (University of Amsterdam).

Important Dates
31 January 2008: poster submissions due
15 April 2008: early registration deadline

More information:

http://www.5ecm.nl/


Call for Application to the INRIA-Schneider Endowed Chair

INRIA is a French national institute dedicated to fundamental and applied research in information and communication science and technology (ICST). The Institute plays a leading role in the international scientific community in the field of ICST. Together with several academic partners, INRIA has about 150 research groups, called project-teams, that host over 2500 researchers, faculty members, postdocs and PhD students. It is a major participant in technology development and transfer in France and Europe.

The Schneider Electric industrial group is one of the world's leading companies in electrical distribution, ultra terminal and secured power. It employs over 100,000 people in 190 countries. Schneider endows, for three years, a research chair at one of INRIA's research centers, preferably in Grenoble or Nice - Sophia Antipolis.

The call is open in all research areas pursued at INRIA, in particular in:
• Modeling, Simulating & Optimizing Complex Systems
• Programming, Guaranteed & Secure Computing
• Communicating, distributed computing and ubiquitous systems
• Interacting with Real & Virtual Worlds
• Computational engineering and embedded systems

The chair holder is expected to lead a research program in order to strengthen INRIA activities in the chosen area, either by:
• putting together an integrative project that builds on existing project-teams with a multidisciplinary approach towards an ambitious common objective, or
• creating a new project-team on an excellent, highly innovative research objective.

This call is for a position at the level of a senior researcher or full professor, with a distinguished scientific record. The initial contract is a full-time appointment for up to three years. It may lead to a tenured research or faculty position by applying for a permanent position within INRIA project-teams.

This endowed chair covers a competitive salary and full social benefits for the chair holder, as well as funding for PhD students, postdoctoral researchers and/or development engineers, together with support for infrastructure, research equipment, and general expenses related to the research program. The call is open without any restriction on the current professional or personal situation of the candidate. Specific leave arrangements from current permanent positions elsewhere within universities or research institutes can be studied on request.

Application procedure: Applicants are invited to submit a full research project (up to 20 pages in English) and their detailed CV by the 29th of February 2008. They are strongly encouraged to contact and/or visit INRIA before submitting a proposal. A hiring committee of renowned scientific personalities, including representatives from INRIA and Schneider, will reach a decision by mid-April 2008. The position starting date is negotiable around summer 2008.

Calendar:
• Application submission deadline: February 29th, 2008
• Decision notification: April 17th, 2008
• Position starting date: July 1st, 2008 (negotiable)

Contact:
• Gérard Giraudon, Director of the Sophia Antipolis research center, [email protected]
• Claude Puech, INRIA Scientific Director, [email protected]
• François Sillion, Director of the Grenoble research center, [email protected]

Advertisement


In Brief

Dublin City University to Lead Research in High-Tech Automatic Language Translation

Dublin City University (DCU) is to lead a multi-million euro research partnership funded by Science Foundation Ireland (SFI) that will develop the next generation of high-tech automatic language translation. This five-year research programme will transform an important sector of Ireland's global software business – localisation – as well as a key driver of the global content distribution industry. DCU is collaborating in the project with academic partners University College Dublin, the University of Limerick and Trinity College Dublin, and with the companies IBM, Microsoft, Symantec, Dai Nippon Printing and Idiom Technologies, as well as the Irish SMEs Alchemy, VistaTech, SpeechStorm and Traslan.

The project is awarded €16.8m by SFI, and the industry partners are contributing €13.6m in materials, research services and additional funding. Ireland already has a substantial global footprint in the localisation industry – the process of adapting digital content, download manuals, software and other materials to different languages and cultures.

The Irish project will tackle three critical problems for the localisation industry:
• Volume: The amount of content to be translated and localised to the destination culture and environment is growing rapidly and massively outstrips the supply of human translators.
• Access: Powerful, small devices such as mobile phones and PDAs require novel technologies integrating speech and text to support "on the move" delivery of, and access to, multilingual information.
• Personalisation: A new demand has rapidly emerged for the adaptation of the huge amount of multilingual content now available on the Web to individual needs. It needs "instant" localisation and personalisation to meet the demands of the users.

Professor Joseph van Genabith, Director of the new Centre, said: "Localisation as an industrial process was developed in Ireland. We have a unique concentration of university- and industry-based research and development expertise in language technologies, machine translation, speech processing, digital content management and localisation. The research centre is going to pool that expertise and develop the next generation of language and content management technologies to support and develop the localisation industry."

Please contact:
Prof. Josef van Genabith, Director
National Centre for Language Technology (NCLT)
School of Computing, Dublin City University
Tel: +353 1 7005074
E-mail: [email protected]

Alcatel-Lucent Bell Labs and INRIA Establish a Joint Research Lab

Alcatel-Lucent and INRIA have signed a joint research agreement that commits the two organisations to partner in developing advanced technologies for the next-generation Internet. As announced by Valérie Pécresse, French Minister for Higher Education and Research, in her opening speech on the occasion of INRIA's 40th anniversary, Bell Labs and INRIA have committed researchers and resources to jointly research and develop critical technologies to realize the challenge of self-organising networks – considered by many to be key for the future of networking.

This joint research will cover four critical areas – semantic networking, high manageability, self-optimizing mobile cellular networks, and ad-hoc extended mobile networks – and will focus on generating results through new technologies, intellectual property, and milestone publications that pave the way for telecommunications.

"The number of sensors and communication devices is bringing Internet complexity to a new dimension," said Jeong Kim, president of Alcatel-Lucent's Bell Labs. "This Joint Research Lab is a natural next step from our collaboration with INRIA, which started 10 years ago." "We have developed a theory based on this approach, giving rise to a genuine paradigm of the virtuous circle of research and innovation," said Michel Cosnard, chairman and CEO of INRIA. "It is the people in industry who raise the questions that lead to new and exciting challenges for the scientists, who dream the innovations that the people in industry go on to develop." This partnership will bring together a team of about 50 researchers from both organisations dedicated to this task.

Successful Start for CWI's New Building

The building activities for the new wing of CWI in Amsterdam formally started on November 20, 2007. Peter Nijkamp, chairman of the Governing Board of the Netherlands Organisation for Scientific Research (NWO), Cees de Visser, general director of NWO, Pieter Adriaans, chairman of the Governing Board of CWI, and Jan Karel Lenstra, general director of CWI, personally hammered the 'first pole' of the new wing into the ground, encouraged by more than 120 cheering visitors and a brass band. This building event completed "CWI in Bedrijf", CWI's annual day for relations (http://www.cwi.nl/cib/). The new wing and the renovation of the old building will be ready in 2009.

Photo: CWI.


Nokia Foundation Award to Pekka Abrahamsson

Nokia Foundation has granted its 2007 award to Pekka Abrahamsson for his merits in researching and developing global software development and processes. The €10,000 award was presented at the foundation's scholarship awards ceremony on 28 November 2007, when the foundation celebrated its 13th year of operation.

Pekka Abrahamsson is a research professor at VTT Technical Research Centre of Finland and currently head of the FLEXI-ITEA2 project, which aims to improve the competitiveness of Europe's software development by accelerating and adapting global product development and innovations. There are 37 organisations from eight European countries involved in the project. Abrahamsson is a pioneer of agile software development in Finland, and his team designed Mobile-D, the agile approach for mobile application development.

"Software is one of the key elements driving the Information, Communication and Telecommunications' role in the economy. At Nokia, the company is branching out into software and services in a unique way, compared with the mobility industry at large. Nokia is renewing and transforming business models, taking different approaches than in the past, as software increases in value," Jorma Ollila, Chairman of Nokia's Board of Directors, noted in his address at the Foundation's ceremony.

The Nokia Foundation was formed in 1995. The Foundation supports the scientific development of information and telecommunications technologies and promotes education in the sector.

Leibniz Prize for Holger Boche

Holger Boche, director of the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut HHI, has been awarded the prestigious €2.5 million Leibniz Prize 2008 for his work on the mobile radio networks of the future.

Several key developments in the expansion of the mobile phone networks in recent years have been due to Holger Boche. On the basis of his theoretical work, Boche has advanced the understanding of complex mobile communication systems and at the same time has put his insights into practice in the technology used to standardise new mobile telephone systems. His research is of particular relevance to cross-layer optimisation, which increases the effectiveness and reliability of mobile phone networks. Boche has thus contributed significantly towards making it possible to provide full coverage over the existing mobile network frequencies with as few permanently installed transmitters and receivers as possible, a task which is not only a considerable scientific challenge but also has great economic potential.

Holger Boche studied information technology at the Technical University Dresden, supplementing his studies with mathematics from 1990 to 1992. Having obtained the respective diplomas, Boche went on to earn doctorates in both subjects, graduating with "summa cum laude". In 2002 – at the age of 32 – Boche accepted a professorship in mobile communications at the Technical University of Berlin. He has headed the Fraunhofer German-Sino Lab for Mobile Communications since 2003, and took on the joint directorship of the Fraunhofer Heinrich Hertz Institute together with Hans-Joachim Grallert in 2005.

The "Gottfried Wilhelm Leibniz Prize" is awarded by theDeutsche Forschungsgemeinschaft (German Research Foun-

Professor Holger Boche, director of the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut HHI.

dation) every year since 1985 to scientists working in Ger-many. The DFG named eleven researchers, eight men andthree women, as this year's winners of Germany's most pres-tigious research prize from amongst 158 candidates. Leibnizprize winners receive a grant of up to €2.5 million, whichthey can use flexibly according to the requirements of theirresearch projects, over a period of up to seven years. Otherwinners include: Prof. Dr. Susanne Albers, Theoretical Com-puter Science, University of Freiburg; Prof. Dr. MartinBeneke, Theoretical Particle Physics, RWTH Aachen Uni-versity; Dr. Elena Conti, Structural Biology, Max PlanckInstitute of Biochemistry, Martinsried, with Dr. Elisa Izaur-ralde, Cell Biology, Max Planck Institute for DevelopmentalBiology, Tübingen; Prof. Dr. Stefan W. Hell, Biophysics,Max Planck Institute for Biophysical Chemistry, Göttingen;Prof. Dr. Klaus Kern, Physical Chemistry of the Materials,Max Planck Institute for Solid State Research, Stuttgart;Prof. Dr. Wolfgang Lück, Algebraic Topology, University ofMünster; Prof. Dr. Jochen Mannhart, Experimental SolidState Physics, University of Augsburg.

More information:

http://www.dfg.de/en/news/press_releases/2007/press_release_2007_81.html

© Fraunhofer HHI.


ERCIM is the European Host of the World Wide Web Consortium.

Institut National de Recherche en Informatique et en Automatique
B.P. 105, F-78153 Le Chesnay, France
http://www.inria.fr/

Technical Research Centre of Finland
PO Box 1000, FIN-02044 VTT, Finland
http://www.vtt.fi/

Irish Universities Association
c/o School of Computing, Dublin City University
Glasnevin, Dublin 9, Ireland
http://ercim.computing.dcu.ie/

Austrian Association for Research in IT
c/o Österreichische Computer Gesellschaft
Wollzeile 1-3, A-1010 Wien, Austria
http://www.aarit.at/

Norwegian University of Science and Technology
Faculty of Information Technology, Mathematics and Electrical Engineering
N 7491 Trondheim, Norway
http://www.ntnu.no/

Polish Research Consortium for Informatics and Mathematics
Wydział Matematyki, Informatyki i Mechaniki Uniwersytetu Warszawskiego
ul. Banacha 2, 02-097 Warszawa, Poland
http://www.plercim.pl/


Consiglio Nazionale delle Ricerche, ISTI-CNR
Area della Ricerca CNR di Pisa, Via G. Moruzzi 1, 56124 Pisa, Italy
http://www.isti.cnr.it/

Centrum voor Wiskunde en Informatica
Kruislaan 413, NL-1098 SJ Amsterdam, The Netherlands
http://www.cwi.nl/

Foundation for Research and Technology – Hellas
Institute of Computer Science
P.O. Box 1385, GR-71110 Heraklion, Crete, Greece
http://www.ics.forth.gr/

FORTH

Fonds National de la Recherche
6, rue Antoine de Saint-Exupéry, B.P. 1777
L-1017 Luxembourg-Kirchberg
http://www.fnr.lu/

FWO
Egmontstraat 5
B-1000 Brussels, Belgium
http://www.fwo.be/

FNRS
rue d'Egmont 5
B-1000 Brussels, Belgium
http://www.fnrs.be/

Fraunhofer ICT Group
Friedrichstr. 60
10117 Berlin, Germany
http://www.iuk.fraunhofer.de/

Swedish Institute of Computer Science
Box 1263, SE-164 29 Kista, Sweden
http://www.sics.se/

Swiss Association for Research in Information Technology
c/o Professor Daniel Thalmann, EPFL-VRlab, CH-1015 Lausanne, Switzerland
http://www.sarit.ch/

Magyar Tudományos Akadémia
Számítástechnikai és Automatizálási Kutató Intézet
P.O. Box 63, H-1518 Budapest, Hungary
http://www.sztaki.hu/

Spanish Research Consortium for Informatics and Mathematics
c/o Esperanza Marcos, Rey Juan Carlos University
C/ Tulipan s/n, 28933-Móstoles, Madrid, Spain
http://www.sparcim.org/

Science and Technology Facilities Council, Rutherford Appleton Laboratory
Harwell Science and Innovation Campus
Chilton, Didcot, Oxfordshire OX11 0QX, United Kingdom
http://www.scitech.ac.uk/

Czech Research Consortium for Informatics and Mathematics
FI MU, Botanicka 68a, CZ-602 00 Brno, Czech Republic
http://www.utia.cas.cz/CRCIM/home.html

Order Form

If you wish to subscribe to ERCIM News free of charge, or if you know of a colleague who would like to receive regular copies of ERCIM News, please fill in this form and we will add you/them to the mailing list.

Send, fax or email this form to:
ERCIM NEWS
2004 route des Lucioles
BP 93
F-06902 Sophia Antipolis Cedex
Fax: +33 4 9238 5011
E-mail: [email protected]

Data from this form will be held on a computer database. By giving your email address, you allow ERCIM to send you email.

I wish to subscribe to the
[ ] printed edition    [ ] online edition (email required)

Name:

Organisation/Company:

Address:

Postal Code:

City:

Country

E-mail:

You can also subscribe to ERCIM News and order back copies by filling out the form at the ERCIM Web site at http://ercim-news.ercim.org/

Danish Research Association for Informatics and Mathematics
c/o Aalborg University
Selma Lagerlöfs Vej 300, 9220 Aalborg East, Denmark
http://www.danaim.dk/

ERCIM – the European Research Consortium for Informatics and Mathematics is an organisation dedicated to the advancement of European research and development in information technology and applied mathematics. Its national member institutions aim to foster collaborative work within the European research community and to increase co-operation with European industry.