
Master of engineering in industrial systems and projects Option: Dynamic systems and signals

Thesis for Master SDS

Study and realization of a prototype to discover and manage a real world object through the web

Alcatel-Lucent, Bell Labs – Villarceaux

01/04/2010 – 30/09/2010

Written by: BEN FREDJ Sameh

Promotion 2010

Project supervisor: Mr. LE BERRE Olivier


Acknowledgements

At the end of my final-year internship, I would like to express my gratitude to the Hybrid Communication department team at Alcatel-Lucent Bell Labs France, which contributed to the smooth progress of my internship. I would like to thank my tutor Mr. Le Berre Olivier, software engineer in the Hybrid Communication department, and Mr. Boussard Mathieu, team leader for the Web as a Platform project within the department, for their attention, availability, advice and support during my internship. I also would like to thank Mr. Labrogere Paul, head of the department, for his welcome and his accessibility. Finally, I would like to thank all the members of the Hybrid Communication department for their availability, advice and sympathy, which facilitated my rapid integration into the team.


Contents

ACKNOWLEDGEMENTS
LIST OF FIGURES
ABSTRACT
INTRODUCTION
I CONTEXT PRESENTATION
  1. BELL LABS PRESENTATION
  2. APPLICATIONS DOMAIN AND HYBRID COMMUNICATION DEPARTMENT
    2.1 APPLICATION DOMAIN
    2.2 HYBRID COMMUNICATION DEPARTMENT
II PRESENTATION OF THE INTERNSHIP SUBJECT AND ISSUES
  1. INTERNSHIP SUBJECT AND ANALYSIS OF THE PROBLEM
  2. ISSUES FOR BELL LABS AND ALCATEL-LUCENT
    2.1 TECHNICAL AND INDUSTRIAL ISSUES
    2.2 SECURITY AND PRIVACY ISSUES
III PRESENTATION OF THE TECHNICAL WORK
  1. CONNECTING A COMMON LAMP TO THE WEB AND REMOTE CONTROL
    1.1 INTRODUCTION
    1.2 STATE OF THE ART
    1.3 GENERAL PRESENTATION OF THE DEMO
    1.4 PRESENTATION OF THE CONNECTED LAMP SYSTEM
      1.4.1 THE SCENARIO OF THE LAMP
      1.4.2 DESCRIPTION OF THE ELECTRONIC/ELECTRICAL PART
        1.4.2.A LAMP CHOICE AND LIGHTING
        1.4.2.B ARDUINO ELECTRONIC BOARDS
        1.4.2.C ELECTRONIC CIRCUIT & GENERAL ARCHITECTURE
      1.4.3 DESCRIPTION OF THE SOFTWARE PART
        1.4.3.A LAMP VIRTUAL OBJECT
        1.4.3.B SOFTWARE EMBEDDED ON THE ARDUINO
      1.4.4 TESTS AND RESULTS
    1.5 CONCLUSION
  2. DISCOVERY OF THINGS USING COMPUTER VISION
    2.1 INTRODUCTION
    2.2 STATE OF THE ART
      2.2.1 OBJECT DISCOVERY IN UBIQUITOUS COMPUTING ENVIRONMENT
      2.2.2 COMPUTER VISION
      2.2.3 COMPUTER VISION WITH MOBILE DEVICES
      2.2.4 THE CHOSEN APPROACH
    2.3 GENERAL PRESENTATION OF THE COMPUTER VISION DISCOVERY SYSTEM
      2.3.1 USER SCENARIO
      2.3.2 SYSTEM ARCHITECTURE
    2.4 OBJECT RECOGNITION ALGORITHM
      2.4.1 PRE-PROCESSING
      2.4.2 INTEREST POINT DETECTOR
      2.4.3 INTEREST POINT DESCRIPTOR
      2.4.4 MATCHING PROCESS
    2.5 IMPLEMENTATION
      2.5.1 HARDWARE IMPLEMENTATION
      2.5.2 SOFTWARE IMPLEMENTATION
    2.6 TESTS AND RESULTS
    2.7 CONCLUSION
CONCLUSION
  1. RESULTS
  2. ENCOUNTERED DIFFICULTIES
  3. PROJECT PROGRESS
  4. ADDED VALUE AND PERSONAL IMPRESSION
APPENDICES
  APPENDIX 1: DETAILED SCHEMATICS OF THE ARDUINO DUEMILANOVE
  APPENDIX 2: DETAILED SCHEMATICS OF THE ARDUINO ETHERNET SHIELD
  APPENDIX 3: SOURCE CODE OF THE LAMPVO JAVA CLASS
  APPENDIX 4: STRUCTURES IN THE ARDUINO CODE
  APPENDIX 5: CODE EMBEDDED INTO THE ARDUINO BOARD
  APPENDIX 6: CODE OF TAKEPICTUREACTIVITY.JAVA CLASS
  APPENDIX 7: CODE OF CVRESULTACTIVITY.JAVA CLASS
  APPENDIX 8: UPLOADING THE PICTURE & LAUNCHING THE OBJECT RECOGNITION PROCESS
  APPENDIX 9: COMPUTER VISION DISCOVERY SYSTEM APPLICATION (C++)
BIBLIOGRAPHY


List of Figures

Figure 1: Bell Labs Research Domains
Figure 2: Demo Set-Up
Figure 3: Class Diagram of the Virtual Object Framework
Figure 4: Lamp Interaction Scenario
Figure 5: Fado Lamp from Ikea
Figure 6: High-Power RGB LED
Figure 7: Arduino Duemilanove Board
Figure 8: Arduino Ethernet Shield
Figure 9: Arduino and Arduino Ethernet Shield Boards Plugged Together
Figure 10: Electronic Circuit with the Lamp
Figure 11: General Architecture of the Connected Lamp System
Figure 12: LampVO Class
Figure 13: Development Environment of the Arduino
Figure 14: Final Solution
Figure 15: Lamp Discovery Using the Camera Phone
Figure 16: Overview of the CV Discovery System Concept
Figure 17: Presentation of the Different Steps of the Algorithm
Figure 18: Box Filters
Figure 19: Orientation Assignment
Figure 20: Descriptor Components
Figure 21: Left: Detected Interest Points; Middle: Haar Wavelet Filters; Right: Descriptor Windows
Figure 22: Matching Features
Figure 23: Sequence Diagram of the CV Discovery System


Abstract

My graduation internship took place in the Hybrid Communication department at Alcatel-Lucent Bell Labs France. I was involved in the Web as a Platform project, working on the concept of smart environments and user interaction using Web of Things principles. The purpose of my internship was first to study the different solutions for connecting a common object to the Web and exposing its services. The user should be able to interact with the physical object through its virtual representation on the web. The second part of my internship aimed to study the discovery of objects using computer vision. When entering a smart environment, a user needs to discover the connected objects around him to be able to interact with their services. Many discovery technologies are available and have been studied, such as RFID, Bluetooth, etc. In this project, I present a system which allows the user to discover connected physical objects and exposes their services by taking pictures of them with a camera-equipped mobile phone. This report presents the work, results and prototypes produced during my internship.

Keywords: Web of Things, embedded software, Arduino boards, Web, HTTP, REST, computer vision, object recognition.


Introduction

With the decrease of hardware production costs, it has become cheap enough in the last few years to embed networking capabilities into almost any object, starting with powerful mobile computing devices but extending to everyday objects with limited computation or interface capabilities like printers, cameras, TVs, etc. In parallel, substantial research was conducted under the concept of the “Web of Things1” on how to network these objects based on a common networking technology: IP2. Moreover, the World Wide Web has imposed itself as the main platform for delivering services to end-users, thanks to its simple and open foundations that fostered service creation. As a result, real-world objects like consumer devices are mapped as resources offered through the web, forming the so-called “Web of Things” [1]. In this prospect, the world will be more tightly connected than ever: the proliferation of network-capable devices will enable billions of objects to be linked together into novel types of applications. However, while great opportunities are offered by integrating such smart objects within Web applications, problems remain regarding the embodiment of this vision. Interesting questions arise: “Can we connect any object to the web?”, “How can the user interact with a smart environment3?”, “How can he discover the presence of smart objects4?” The purpose of my internship at Bell Labs within the Hybrid Communication department is to give some answers to these questions. In the first part of this report, I present my research and the solutions I developed to handle the object connection issue. The second part is about the discovery of objects in smart environments.

1 Web of Things is a vision where everyday devices and objects, i.e. objects that contain an embedded device or computer, are connected by fully integrating them into the Web. Examples of smart devices and objects are wireless sensor networks, ambient devices, household appliances, etc. 2 Internet Protocol (IP) is a protocol used for communicating data across a packet-switched internetwork using the Internet Protocol Suite. 3 A smart environment is a technological concept where the physical world is richly and invisibly interwoven with sensors, actuators, displays, and computational elements, embedded seamlessly in the everyday objects of our lives, and connected through a continuous network. 4 Smart objects are objects with networking capabilities.


I Context Presentation

1. Bell Labs Presentation

Bell Labs was founded in 1925. It has its headquarters at Murray Hill, New Jersey, and field research and development facilities throughout the world, mainly in the field of telecommunications. Since 2009, it has been part of the research and development organization of Alcatel-Lucent.

Bell Labs is a community of about 1,500 researchers located in 10 countries.

Research at Bell Labs aims to create new growth opportunities and to provide a competitive market advantage for Alcatel-Lucent through disruptive innovations. It is organized into eight strategic domains; each domain covers the continuum of an entire research lifecycle, from idea generation to transfer, and is involved in mentoring active research projects and in developing and transferring technologies to the business.

FIGURE 1: BELL LABS RESEARCH DOMAINS

My internship is taking place within the Applications domain.


2. Applications Domain and Hybrid Communication Department

2.1 Application Domain

The mission of the Applications Domain is to develop technologies, intellectual properties, paradigms and product concepts that directly serve users' needs for information, communication and entertainment.

As users get involved in service creation, application providers need to get closer to end-users. That is why experiments in the Applications Domain are conducted to obtain feedback from users at the different stages of an application in order to improve it.

The Applications Domain is organized into six departments. A department handles a mid/long-term research challenge related to a foreseen user need. It is accountable for results, and its work spans one to three years. It has multiple cooperation and partnership aspects (both industrial and academic).

Teams of 10-20 researchers located in a single site and reporting to a director are assigned to every department.

A department delivers not only knowledge and expertise in the related scientific domain, but also publications and experiments.

My internship project is part of the Hybrid Communication department projects.

2.2 Hybrid Communication department

The Hybrid Com department aims to offer a vision that changes the communication model in order to define a new type of hybrid applications exposed to the web. In this context, two research projects are held. One of these projects is Web as a Platform. Its vision is that of the extension of the Web to the physical environment of the user using Web of Things principles. This extension can take two non-contradicting forms: the extension of the “window on the Web” to surrounding objects, and the availability of real-world objects as resources on the Web, exposing information about them, the data they generate and their services. Thus, a user can for example control the objects around him from his computer, his mobile phone or any device connected to the Web. My internship subject is part of this project.


II Presentation of the internship subject and issues

1. Internship subject and analysis of the problem

More and more electronic devices (IP cameras, video projectors, printers, etc.) are becoming easily connectable, and some even expose their services through HTTP APIs. However, everyday objects such as a lamp or a clock radio are still not easily accessible from the web, which makes the creation of prototypes in this area difficult. How can these everyday objects be connected to the Web so that their services become accessible? In this context, a first step of the internship is to implement the most appropriate mechanism for an object of everyday life, making it accessible through the web and exposing its services through an API based on the principles of REST. At the same time, with the emergence of ubiquitous computing, users increasingly encounter smart environments where many physical objects are connected to the web and communicate with each other. The user in this context needs a way to discover these connected objects so that he can interact with them and use their services. For example, how can he discover that the lamp in the room is connected to the web? Several technologies can be used for discovery purposes: RFID, GPS, QR codes, Bluetooth, etc. However, some technological and aesthetic constraints may exist: objects can be too small to put markers on, too far away, or unable to integrate an RFID tag. How can objects be recognized without making any changes to them? How can GPS be replaced in indoor environments? What is the most suitable means for exploring the surrounding objects? The second part of my internship is to conduct a prospective study of techniques and mechanisms for discovering surrounding objects (GPS, NFC, QR codes, Bluetooth, computer vision, etc.) and to select the best means of discovery. A prototype of the final solution will be developed.

2. Issues for Bell Labs and Alcatel-Lucent

2.1 Technical and industrial issues

The development of Web of Things technologies can offer communication services to different industrial fields. For instance, in the automotive, aerospace and aviation fields, the Web of Things can help to improve the safety and security of products and services. The nodes in such a network will be used for detecting various conditions such as pressure, vibration, temperature, etc. The collected data give access to customized usage trends, facilitate maintenance planning, reduce maintenance and waste, and can be used as inputs for the user to evaluate and reduce energy consumption during vehicle and aircraft operations. Besides, building and home automation technologies are being deployed to create smart homes and environments. For example, smart metering is becoming more popular for measuring energy consumption and transmitting this information electronically to the energy provider. In conjunction with modern home entertainment systems, which are based on general-purpose computing platforms, such meters could easily be combined with other sensors and actuators within a building, thus forming a fully interconnected, smart environment.


Sensors for temperature and humidity provide the necessary data to automatically adjust the comfort level and to optimize the use of energy for heating or cooling. Additional value is provided by monitoring and reacting to human activities. For instance, exceptional situations can be detected and people can be assisted in everyday activities, like supporting the elderly in an aging society. This is just a small sample of the huge number of services that the Web of Things can offer to industry.

2.2 Security and privacy issues

A major issue of the Web of Things is related to trust, privacy and security, not only concerning the technological aspects, but also concerning the education of people at large. The growing number of objects remotely controlled through the Web of Things will require new security models, which in turn will help citizens to build trust and confidence in these novel technologies rather than increasing fears of total-surveillance scenarios. Communicating the benefits that these technologies can bring to the public will also be essential for the success of this technology on the market.


III Presentation of the technical work

1. Connecting a common lamp to the web and remote control

1.1 Introduction

Thanks to the stunning progress in the field of embedded devices, physical objects such as home appliances, industrial machines and wireless sensor and actuator networks can now embed powerful computers that can connect to the Internet from anywhere. The Chumby, Sun SPOT, Plogg, Nabaztag, etc., are only a few examples of these tiny computers. Meanwhile, cheap broadband Internet connectivity has become a commodity accessible from anywhere. According to the IP for Smart Objects (IPSO) Alliance, an increasing number of embedded devices will support the IP protocol, so that many physical objects will soon possess direct connectivity to the Internet. This convergence of physical computing devices (wireless sensor networks, mobile phones, embedded computers, etc.) and the Internet provides new design opportunities for interactive applications. We talk about extending the Web beyond the computer and bringing it into the real world! In this context, the Web as a Platform team at Bell Labs made a demonstration (“Demo”) of the possibility to connect everyday objects to the web and enable the user to interact with them through different types of devices. This demo was presented during the OPEN DAYS5 and LIFT6 events. In this part, I first present a state of the art of connecting objects to the web. Then I present the scenario of the Web of Things demo that was shown during the OPEN DAYS and LIFT. Finally, I concentrate on the part that I was responsible for: connecting a lamp to the web and interacting with its different services.

1.2 State of the Art

Linking the Web and physical objects is not a new idea. Early approaches started by attaching physical tokens (such as barcodes) to objects to direct the user to Web pages containing information about those objects [2]. These pages were first served by static Web servers on mainframes, then by early gateway systems that enabled low-power devices to be part of wider networks [3]. The key idea of these works was to provide a virtual counterpart of the physical objects on the Web: URLs of web pages were scanned by users (e.g. using mobile devices) and directed them to online representations of real things. With advances in computing technology, tiny web servers can now be embedded in many devices [4]. The Cooltown project pioneered this area of the physical Web by associating pages and URLs with people, places and things.

5 OPEN DAYS: The Bell Labs Open Days are the opportunity for employees, partners, customers, reporters and students to visit the labs, talk to the researchers and see lots of innovative demos, from optical networking to applications, security and wireless. It is a demonstration and presentation of some of the latest and most compelling research and development work in telecommunication technologies done by Bell Labs. It took place at the end of May in Villarceaux, France. 6 LIFT: Lift is a series of events built around a community of pioneers who get together in Europe and Asia to explore the social implications of new technologies. Each conference is a chance to turn innovation into opportunities by anticipating the major shifts ahead, and to meet the people who drive them. It took place in July in Marseille, France.


The project presented in [5] uses the Sun SPOT platform to create RESTful smart things that are able to connect to the Internet via Wi-Fi. The purpose of the following work is to build upon these approaches and propose a new way of connecting physical objects using Arduino boards. These boards are smaller than the Sun SPOT ones and thus easier to integrate into systems. Moreover, they have many digital/analog inputs/outputs that facilitate their connection to other electronic boards. They are also able to host web servers and accept simultaneous connections. Finally, they are cheaper than the Sun SPOT platform. For the common object to connect, I chose a lamp that can be bought in any store. A lamp is an everyday object used by everyone and is thus a good candidate for this experiment.

1.3 General presentation of the Demo

The Web of Things demo presented during the OPEN DAYS is more a vision and an overall concept than an actual technology, aimed more at giving a glimpse of the opportunities that lie ahead for the concept of the Web of Things than at showing current technology or product evolution perspectives. The purpose of the demo is to enable the user to interact with different real-world objects situated in different geographical locations. The user manipulates different interfaces to interact with the physical objects through a Virtual Object Framework hosted on a remote PC, as shown in Figure 2.


FIGURE 2: DEMO SET UP

Interfaces of interaction

Different interfaces were chosen to explore the user experience. The first one is a web browser interface, which is quite usual for web users. The web browser represents the user environment and the different objects that exist in it. This interface shows that everything is based on regular web technology, which is easy to manipulate. However, entering an environment with a laptop under the arm is not exactly a mobile-friendly situation, so it is reasonable to think that mobile devices are going to be prime candidates to embody Web of Things browsers. For this reason, a touch tablet and a smartphone are also used to interact with objects. The interfaces show the objects in my environment and objects that have already been bookmarked (in other environments).

[Figure 2 depicts the demo set-up: users and their interaction interfaces connected to the Virtual Objects Framework (virtual objects, eventing, HTTP APIs, access control, provisioning), which links to the physical objects in my home and in my grandma's home.]


User's environment

To show the concept of the Web everywhere, two different environments were chosen. My home environment represents the connected objects in my home: a camera and a phone. My grandma's home environment represents the connected objects in my grandma's home: a lamp, a TV and a phone. The user can interact with objects both in my home and in my grandma's home through the interaction interfaces.

Virtual object framework and architecture

The Virtual Object Framework is also called the Virtual Object Gateway. This framework hosts the virtual descriptions of the different connected objects (name, icon, address, parameters, services, etc.). It is based on an OSGi framework7 hosting different bundles that represent the physical objects and define the services they offer. The main architecture of the Virtual Object Framework is built around abstract classes implementing the two interfaces Virtual Object and Gateway Service; in particular, CoreVirtualObject is a parent class from which child virtual object classes (e.g. WebCamVO, ScreenVO, LampVO) are derived. The created virtual objects, which represent the physical objects in the environment, inherit methods from the CoreVirtualObject class. GatewayComponent is the main class handling the interactions and requests from the different clients. It is also responsible for adding or removing virtual object instances via a provisioner class. The figure below shows the general software architecture of the Virtual Object Framework.

7 OSGi Framework is a module system and service platform for the Java programming language that implements a complete and dynamic component model. Applications or components can be remotely installed, started, stopped, updated and uninstalled without requiring a reboot.
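To make this architecture concrete before the class diagram, here is a minimal, hedged Java sketch of the hierarchy described above. The method names and fields beyond CoreVirtualObject, GatewayComponent and the two interfaces are illustrative assumptions, not the project's internal code.

    // Hedged sketch of the Virtual Object Framework hierarchy; fields and
    // method names are assumptions for illustration, not the project's code.
    import java.util.HashMap;
    import java.util.Map;

    interface VirtualObject {
        void init(Map<String, Object> properties); // name, icon, address, services...
    }

    interface GatewayService {
        void handleRequest(String objectId, String command); // client request entry point
    }

    // Parent class from which the concrete virtual objects derive.
    abstract class CoreVirtualObject implements VirtualObject {
        protected final Map<String, Object> attributes = new HashMap<String, Object>();
        public void init(Map<String, Object> properties) {
            attributes.putAll(properties);
        }
    }

    // Main class handling client interactions and provisioning of virtual objects.
    class GatewayComponent implements GatewayService {
        private final Map<String, CoreVirtualObject> objects =
            new HashMap<String, CoreVirtualObject>();
        void provision(String id, CoreVirtualObject vo) { objects.put(id, vo); } // add
        void remove(String id) { objects.remove(id); }                           // remove
        public void handleRequest(String objectId, String command) {
            // dispatch the command to the matching virtual object...
        }
    }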


FIGURE 3: CLASS DIAGRAM OF THE VIRTUAL OBJECT FRAMEWORK

Currently, this Virtual Object Framework is hosted on a laptop; however, it could be hosted by a set-top box in the future. It acts as the central client/server of the Demo: the user sends commands from the interfaces to the Virtual Object Framework, and the gateway sends answers to the interfaces and to the real objects. Moreover, if the user acts directly on the physical objects, the changes are reflected on the virtual objects. HTTP (Hypertext Transfer Protocol) requests are exchanged between the physical and the virtual objects. HTTP is the main protocol for interacting with the resources and the physical objects, and HTML (Hypertext Markup Language) is used to represent the interface of the virtual objects. This web architecture makes it easy to expose objects on the Internet, creating the Web of Things.
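For illustration, such an exchange might look like the following sketch; the resource paths, host names and parameter names are assumptions, not the Demo's actual API.

    -- a user switches the lamp on from an interface (hypothetical path):
    POST /objects/lamp?state=on HTTP/1.1
    Host: gateway.local

    -- the framework relays the command to the Arduino web server:
    GET /?state=on HTTP/1.1
    Host: 192.168.1.50

    -- the board answers, and the virtual object state is updated:
    HTTP/1.1 200 OK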


Different types of interactions and scenarios

During the Demo, different interactions between the physical objects and the gateway were experienced. For instance, the user can control the lamp in grandma's home through the different interfaces: he can switch the lamp on and off. The details are explained in the next part. It is also possible to send a video stream to the TV in grandma's home so that she can watch the children playing in "my home". Moreover, the user can redirect calls from his home phone to the one in grandma's home when he visits his grandmother. Different mashups are also created between objects. For example, grandma, who is a bit deaf, can launch an application from her mobile phone that blinks the lamp while her phone is ringing. This is a very simple application, but we can imagine the hundreds of applications of all levels of complexity that could be applied to the objects of an environment.

1.4 Presentation of the connected lamp system

For the Demo, I was responsible for connecting a common lamp to the web, exposing its services on the web and interacting with its virtual representation on the Virtual Object Framework. This work aims to show the possibility of connecting the physical objects that we use every day to the web, and the different possibilities of interaction that we can experience.

1.4.1 The scenario of the lamp

The lamp should be able to connect to the Virtual Object Framework and communicate with it following RESTful principles. The virtual object exposes its services to the user's different interaction interfaces (web browser, phone, touch tablet). Through an interface, the user is able to:

- Switch the lamp on/off
- Change its intensity from bright to faded and vice versa
- Change its color

The communication between the physical lamp and the Virtual Object Framework (the physical world and the virtual world) is bidirectional: the user can change the physical status of the lamp through the interface, and, in the other direction, the interface should show exactly the real physical state of the lamp. If the user chooses to physically switch the lamp on or off, the interfaces receive a notification of this change and reflect the actual state of the lamp. In this way, we have a real communication between the virtual world and the physical world.

FIGURE 4: LAMP INTERACTION SCENARIO


1.4.2 Description of the electronic/electrical part

This part deals with the different electronic/electrical modifications that were made to the lamp to connect it to the web. During my internship I was responsible for listing and ordering the electronic components required for the Demo.

1.4.2.a Lamp choice and Lighting

The selected lamp should be a common lamp that we use every day. Great importance was given to its shape and design, since I was looking for an easy integration of the electronic part. The FADO lamp from IKEA8 (see Figure 5) was a good candidate. For the OPEN DAYS and LIFT demos, we wanted to add the possibility of changing the colors at the user's request. This was not possible with the mono-color standard bulb that is usually used in lamps. The idea was to replace it with a light-emitting diode (LED) lamp. This kind of lighting is more and more common now: it is more efficient, has a longer life span and consumes less than incandescent or fluorescent bulbs. Moreover, it is possible to find LEDs in different colors (red, green, blue...) or RGB LEDs, and make multi-color lamps. For the purpose of the Demo, I chose to replace the bulb with a 3 W high-power RGB Luxeon LED with a large light angle for good lighting.

FIGURE 5: FADO LAMP FROM IKEA

FIGURE 6: HIGH POWER RGB LED

8 IKEA is a privately held, international home-products retailer.


1.4.2.b Arduino electronic boards

The Arduino boards are used as web enablers. They are the interfaces between the physical object and the virtual object gateway.

• Arduino Duemilanove (ATmega328)

This is the latest version of the USB Arduino boards. It is a microcontroller board based on the ATmega328. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It can be powered via a USB cable, an AC-to-DC adapter or a battery.

FIGURE 7: ARDUINO DUEMILANOVE BOARD

Here are the main technical characteristics of the Arduino Duemilanove board:

Operating Voltage: 5 V
Input Voltage (recommended): 7-12 V
Input Voltage (limits): 6-20 V
Digital I/O Pins: 14 (of which 6 provide PWM output)
Analog Input Pins: 6
DC Current per I/O Pin: 40 mA
DC Current for 3.3 V Pin: 50 mA
Flash Memory: 32 KB (ATmega328)
SRAM: 2 KB (ATmega328)
EEPROM: 1 KB (ATmega328)
Clock Speed: 16 MHz

Appendix n°1 shows the detailed schematics of the Arduino Duemilanove. The Arduino Duemilanove can be programmed with the Arduino software using the Wiring language, which is based on C/C++.

• Arduino Ethernet Shield

The Arduino Ethernet Shield allows an Arduino board to connect to the Internet. It is based on the Wiznet W5100 Ethernet chip, which provides a network (IP) stack capable of both TCP and UDP and supports up to four simultaneous socket connections. The shield provides a standard RJ45 Ethernet jack.


Appendix n°2 shows the detailed schematics of the Arduino Ethernet Shield.

FIGURE 8: ARDUINO ETHERNET SHIELD

The Ethernet Shield connects to an Arduino board using long wire-wrap headers that extend through the shield. This keeps the pin layout intact and allows another shield to be stacked on top. The Arduino Duemilanove board and the Arduino Ethernet Shield are plugged together as shown below.

FIGURE 9: ARDUINO AND ARDUINO ETHERNET SHIELD BOARDS PLUGGED

1.4.2.c Electronic circuit & General architecture

• Electronic circuit

An electronic circuit is needed as an interface between the Arduino boards and the physical object (the lamp). It is used to power the lamp and to connect the push button (Figure 10).

FIGURE 10: ELECTRONIC CIRCUIT WITH THE LAMP


• General architecture of the connected lamp system

The lamp is connected to the network via an Ethernet cable. We can access its services using a web browser or a mobile phone. For the tests, a local network was created using a wireless broadband router.

FIGURE 11: GENERAL ARCHITECTURE OF THE CONNECTED LAMP SYSTEM

The electronic part is integrated into a box (Figure 11) and the physical button is placed on the front side of the box.

1.4.3 Description of the software part

1.4.3.a Lamp Virtual object

In this part I describe the main class that I developed to ensure the connection between the virtual object on the Virtual Object Framework and the real physical object.

• LampVO class

This class inherits from the superclass CoreVirtualObject. It instantiates the virtual object of the lamp.


FIGURE 12: LAMPVO CLASS

• Lamp attributes

The main attributes are declared private and relate to the characteristics of the lamp and to its services:

NAME, POWER: name of the lamp and the power it consumes.
ATTR_STATE: characterizes the on/off status of the lamp.
ATTR_INTENSITY: used to modify the intensity of the lamp.
COLOR: characterizes the color changes of the lamp.
URL_ARDUINO: defines the IP address and the port used to connect to the Arduino device.

• Lamp main functions

void init(Map<String, Object> properties)
This function initializes the different attributes of the lamp. When the gateway is launched, an XML file containing the different initial values is read.

boolean sendToArduino(String param)
This function sends requests and data (param) to the Arduino board server. HTTP requests are sent to switch the lamp on/off and to change the intensity or the color.

void updateAttr()
This function receives the HTTP requests sent by the Arduino client to the Virtual Object Framework and updates the status of the lamp.


void attributeValueChange(VODataAttributeEvent event)
This function receives event notifications from the different interfaces and sends the corresponding HTTP requests to the Arduino board.

Appendix n°3 shows the source code of the LampVO Java class.
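Putting the pieces above together, a condensed sketch of LampVO could look as follows. The complete, real implementation is in Appendix 3; the request format and the event accessor names below are assumptions.

    // Condensed, hedged sketch of LampVO; the real class is in Appendix 3.
    // The Arduino request format and the event accessors are assumptions.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Map;

    public class LampVO extends CoreVirtualObject {
        private String urlArduino; // IP address and port of the Arduino web server

        public void init(Map<String, Object> properties) {
            super.init(properties); // initial values read from an XML file at gateway launch
            urlArduino = (String) properties.get("URL_ARDUINO");
        }

        // Send a command (on/off, intensity, colour) to the Arduino board server.
        public boolean sendToArduino(String param) {
            try {
                HttpURLConnection c =
                    (HttpURLConnection) new URL(urlArduino + "?" + param).openConnection();
                return c.getResponseCode() == HttpURLConnection.HTTP_OK;
            } catch (Exception e) {
                return false;
            }
        }

        // Called when the Arduino (acting as HTTP client) notifies a physical change.
        public void updateAttr() {
            // parse the incoming request and update ATTR_STATE / ATTR_INTENSITY / COLOR
        }

        // Called when a user acts on an interface; forward the change to the lamp.
        public void attributeValueChange(VODataAttributeEvent event) {
            sendToArduino(event.getName() + "=" + event.getValue()); // accessor names assumed
        }
    }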

1.4.3.b Software embedded on the Arduino

• The development environment for the Arduino board

The Arduino environment is open source and makes it easy to write code and upload it to the I/O board. The Arduino language, called Wiring, is based on C/C++.

Appendix n°4 shows the different structures in the Arduino code.

To summarize, an Arduino program (called a sketch) is composed of two main structures: setup(), which initializes variables, and loop(), which runs the program body repeatedly. Some functions are defined in the Arduino language to manage digital and analog reads and writes on the I/O pins (e.g. digitalWrite(), analogWrite()). Moreover, some constants define the nature of the digital pins (INPUT, OUTPUT) and their level (HIGH, LOW). A minimal sketch illustrating this structure is shown after the figure below.

FIGURE 13: DEVELOPMENT ENVIRONMENT OF THE ARDUINO
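As the concrete illustration announced above, here is a minimal sketch in the Wiring language showing this structure together with the Ethernet server role described in the next section. Pin numbers, the IP address and the request handling are assumptions; the actual Demo code is given in Appendix 5.

    // Minimal illustrative Arduino sketch (Arduino 0018-era Ethernet API);
    // not the Demo code (see Appendix 5). Pins, addresses and the command
    // format are assumptions.
    #include <SPI.h>
    #include <Ethernet.h>

    byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
    byte ip[]  = { 192, 168, 1, 50 };
    Server server(80);           // web server listening for gateway requests

    const int RED_PIN    = 3;    // PWM pin driving the red channel of the RGB LED
    const int BUTTON_PIN = 7;    // physical on/off button

    void setup() {               // runs once: initialize pins and networking
      pinMode(BUTTON_PIN, INPUT);
      Ethernet.begin(mac, ip);
      server.begin();
    }

    void loop() {                // runs repeatedly: serve requests, watch the button
      Client client = server.available();
      if (client) {
        // a real sketch would parse the HTTP request here; as a placeholder,
        // set the red channel to full intensity and acknowledge the request
        analogWrite(RED_PIN, 255);
        client.println("HTTP/1.1 200 OK");
        client.stop();
      }
      if (digitalRead(BUTTON_PIN) == HIGH) {
        // here the board would act as an HTTP client and notify the
        // Virtual Object Framework of the physical state change
      }
    }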


• Arduino software for lamp connection and interaction with the gateway

With the Arduino Ethernet Shield, the Ethernet library allows the Arduino board to connect to the Internet. The board can act either as a server, accepting incoming connections from the gateway, or as a client, making outgoing connections to the Virtual Object Framework when it receives electrical signals from the lamp.

The Arduino Ethernet library supports up to four concurrent connections (incoming, outgoing, or a combination).

Appendix n°5 shows the code embedded into the Arduino board.

1.4.4 Tests and Results

• Test setup

For the final solution, I performed some integration tasks to hide the electronic devices, as shown in the figure below.

FIGURE 14: FINAL SOLUTION

The tests were performed on a local network linking the connected lamp, the Virtual Object Framework and the interaction interfaces, using both the web browser interface and the mobile phone interface.

• Performed tests

Test 1: Switch the lamp on/off


Test 2: Change lamp intensity

The slider at the bottom of the phone interface is used to modify the intensity value of the lamp.

Test 3: Change the lamp colors

Test 4: Switch the button on/off and check the reflection on the interfaces

The icon of the lamp on the interface reflects the real state of the physical lamp.


• Results

The connected lamp system works well and the tests were performed successfully. However, I noticed that acting on the physical lamp is quite slow when using the mobile phone; this is due to the Wi-Fi connection. When testing with multiple users, I noticed that the connection to the lamp is lost when there are more than four simultaneous connections. This matches the Arduino specification, which states that the board cannot support more than four simultaneous connections.

1.5 Conclusion

In this part I developed a connected lamp system. The smart lamp is connected to the web, and the user can access its services via a web interface, from a web browser or from his mobile phone. The user can therefore check the status of the lamps in his house at any time (e.g. to check whether he forgot to switch off the light before leaving the house). The Arduino board was a good solution for creating a prototype of a connected object. However, because of its limitation in the number of simultaneous connections, it is better to choose a more powerful board for a real product.


2. Discovery of Things using computer vision

2.1 Introduction

The Web of Things extends the Web to physical objects and thus enables humans to live in smart environments. A user needs an input device to discover connected physical objects and interact with the services they offer; for example, a user in a smart room needs to know which lamp is connected and which services it exposes. This input device should be easy to carry everywhere, and in this context a mobile phone is a good candidate. Mobile phones have become sophisticated computers and can be used as interaction devices with the user's environment. They offer integrated cameras, a wide range of communication channels like Bluetooth, WLAN, GPRS and Wi-Fi, and sensors such as barcode readers or RFID readers. The purpose of the discovery system developed in this part is not to rely on markers to recognize objects, but rather to identify them by their looks, using visual object recognition on the image from a mobile device's camera. With this system, snapping a picture of an object is sufficient to request all the desired information about it; in other terms, it "hyperlinks" physical objects to digital information. Using object recognition methods to hyperlink physical objects with the digital world brings several advantages. For instance, certain types of objects cannot support markers because they are very small, or because they are very large and would need many markers visible from different sights (e.g. buildings). A barcode or RFID tag attached to an object would be difficult to reach if the room where the object is located is very crowded. Furthermore, installing a marker, an RFID tag or a Bluetooth beacon on each connected object can be costly in installation and maintenance. Taking a picture for object recognition is practical because it can be done from any position where the object is visible, and it needs only a database of images. In this part, I first present a state of the art of the different technologies for object discovery. Then I present the Computer Vision Discovery System that I developed.

2.2 State of the Art

2.2.1 Object discovery in ubiquitous computing environment

Efforts in context-aware and ubiquitous computing have focused on making knowledge about the physical world available to mobile computer systems. The purpose is to make users of mobile computer systems aware of the different connected objects and of the services that they expose in the ubiquitous environment. Many systems have been developed using different technologies. One of these systems is RELATE [6], which is used to support collaboration between co-located mobile users and also the discovery of objects. For this purpose, dongles are plugged into the mobile systems to discover each other and form a wireless sensor network for collaborative measurement of their spatial arrangement; the user can then have, on his mobile system, a map of the connected objects next to him. Passive radio frequency identification (RFID) tags are activated when a mobile device carrying an RFID reader is in range; with the provided ID number, information about the corresponding object can be displayed [7]. Bluetooth is becoming more and more popular for connecting objects and exchanging data [8]. These kinds of technologies are widely used in guidance systems (e.g. in museums [9]), but they are limited by the distance that separates them from the object. Jini [10] and UPnP provide methods to discover networked appliance services. However, they do not return enough information about the services.


Users need to know not only the virtual and networked information (e.g. host names, URLs...) but also the link between the services' physical entities and the virtual information. For example, how can visitors tell the network name of a specific printer? The use of digital images may be the solution. U-Photo [11] is a system in which the user can easily find information about an object by taking its picture; a visual tag is attached to the object so that it can be recognized through image processing. There are many solutions using computer vision for object recognition through markers, QR codes or 1D barcodes on objects [12][13]. However, it is not always possible to put a marker, an RFID tag or a Bluetooth beacon on the object: the object can be small or very far away. Besides, this involves multiple changes on objects and can be costly. My approach is to avoid making any changes to objects and to rely on direct object recognition using computer vision.

2.2.2 Computer vision

In the field of computer vision, we focus on object recognition techniques. Object recognition is based on extracting global or local features from the image before it is recognized. In [14] there is a comparison between global and local features and the limitations of each. Object recognition techniques that use global features are based on image properties such as colour, texture or gradient images, and describe the image as a whole. Swain et al. [15] presented a recognition system based on colour histograms. Global feature extraction for object recognition is applied to context-based image retrieval [16]. Lehmann et al. [17] use global features to categorize medical images; their image retrieval is based on global histograms, which limits the recognition of graphical elements in an image. Besides, this technique turns out not to be very robust to lighting changes or to scale and rotation variance. Local features are computed at multiple points in the image and are consequently more robust to occlusion and clutter. In fact, today's recognition systems mostly use local features (e.g. local corner points or image fragments), which can be scale- and rotation-invariant and support finding correspondences between images with different viewing conditions. One of the first recognition systems using local features was proposed by Schmid and Mohr [18], who used local gray-value feature points extracted with the Harris corner detector [19] for image retrieval. These features are rotationally invariant and the system provides robust recognition. Lowe [20] presented an algorithm to detect local scale-invariant features based on local extrema found in difference-of-Gaussian filtered images. Later, Lowe [21] demonstrated the possibility of extracting highly distinctive features (SIFT, Scale Invariant Feature Transform) that could be matched in a large database with a high success rate. SURF (Speeded-Up Robust Features), proposed in [22], is a local image feature descriptor inspired by SIFT; however, it is faster, more compact and more robust against different image transformations than SIFT. SURF and SIFT are able to find correspondences between two images under scaling, rotation and viewpoint changes, whereas in such circumstances global features will generally fail.

2.2.3 Computer Vision with mobile Devices

Object recognition performed directly on mobile devices such as PDAs or mobile phones is rare due to the hardware limitations of such devices. Most approaches use the mobile device for image capturing, simple pre-computation and streaming tasks only, while a powerful remote server performs the intensive computation needed for object recognition. Fritz et al. [23] proposed such a system for recognizing outdoor objects like buildings and statues using a PDA and a wireless connection to a server; the server recognizes the objects and sends the result back to the PDA.


Various ongoing initiatives follow the same principle but use mobile phones instead of PDAs. [24] describes a system which uses a mobile phone for capturing pictures, with the SIFT algorithm running on a server for the object recognition. Some computer vision methods can be performed locally on the mobile device itself. The Semacode corporation [25] developed a system for mobile phones that recognizes a URL encoded in a printed barcode. Several groups perform marker detection and tracking to support augmented reality applications on mobile phones [26]. [27] reports experiments with two systems: the first uses SURF features for image recognition on a remote PC, while the second runs SURF on the phone itself. The results show an average 22x slowdown compared to the PC.

2.2.4 The chosen approach

Based on the state of the art, most discovery systems based on computer vision are used as museum guides: after object recognition, the user can access textual and image information about the pointed-at monument. In my approach, I am trying to extend the scope of such systems to any smart environment. The user will not only have access to textual and image information about the object, but will also be able to interact with it via its services. For better performance in object recognition and speed, I use SURF features and perform the image processing on a remote server instead of on the mobile phone.

2.3 General Presentation of the Computer Vision Discovery System

2.3.1 User Scenario

This section describes a use case of the system when a user enters a smart room. The user enters an office at his workplace. There are many objects around him: a lamp, a phone, a PC, etc. Many of these objects are connected and can offer services to the user: switch the lamp on/off, redirect phone calls, etc. However, the user cannot tell at first sight which objects are connected and how to use their services. By using the object recognition application on his mobile phone, the user becomes free to interact with the objects around him. He just needs to point the phone toward any object and take a picture of it with his camera phone (Figure 15). As a result, he can get two types of answers. If the object is a connected object, what we call a smart object, then its virtual object description is sent to the user's phone interface; by clicking on the virtual object icon, the user has access to its services and can thus interact with the physical object. However, if the object is not connected, the user gets a message telling him that the pointed object is not a smart one.

FIGURE 15: LAMP DISCOVERY USING THE CAMERA PHONE


2.3.2 System architecture

This section describes how a common smartphone with a built-in digital camera can be used in an image-based object recognition system. The overall concept consists of three main phases.

In the first phase, the user activates a software client on his personal smartphone. The software client offers functionality to capture an image of the object the user wants to identify. The image is sent, over a common wireless Internet connection such as Wi-Fi, to an image recognition application running on a remote dedicated server. The software client then waits for the server's answer.

In the second phase, the remote server (the Virtual Object Framework) reads the request from the client (the smartphone) and uploads the image into a folder in its local storage. The server then launches the object recognition application with the uploaded image and a text file as parameters. The text file contains the list of objects in the database together with the URLs of their virtual descriptions. The image is analysed by a dedicated recognition algorithm to obtain representative image features, i.e. local descriptors that characterize a specific object. Next, these features are compared with the reference features of every object image stored in the database and listed in the text file. The database contains images of the different objects. The object is identified by matching the features of the user's picture with the features of each image in the database. Once the object is identified, the corresponding VO's URL is sent back to the smartphone client as the response.

In the third phase, the web-service response is presented to the user on the smartphone. The response contains a virtual representation of the object and the list of services that can be used. Figure 16 illustrates the complete technical concept and its three phases in the CV (Computer Vision) Discovery system.

FIGURE 16: OVERVIEW OF THE CV DISCOVERY SYSTEM CONCEPT

(Diagram labels: 1- Take picture; 2- Recognize object (Virtual Object Framework, object database, object recognition process); 3- Display services; connectivity over Wi-Fi / Internet; example object: lamp.)


2.4 Object Recognition Algorithm

The object recognition application developed in this part is based on interest point correspondences between individual image pairs. The input image, taken by the user, is compared to all object images in the database by matching their respective interest points. The object image with the highest number of matches with respect to the input image is chosen as the one representing the object the user is looking for. As mentioned in the state of the art, local features are used for object detection; these features allow a better detection of geometric details and thus a good distinction between different objects. SURF features were chosen as interest points for object recognition because they are robust to scale, rotation, lighting and perspective distortion. The algorithm developed on top of SURF features can be divided into three main steps. First, interest points are selected at distinctive locations in the image, such as corners or blobs. Next, a feature vector represents the neighbourhood of every interest point; this descriptor has to be distinctive. Finally, the descriptor vectors are matched between different images, based on a distance between the vectors. The figure below shows the different steps of the algorithm when comparing the input image to an image from the database.

FIGURE 17: PRESENTATION OF THE DIFFERENT STEPS OF THE ALGORITHM

The following parts explain the different steps of the algorithm in detail:

2.4.1 Pre-processing

The pre-processing part consists of converting the RGB image to an intensity (greyscale) image to speed up the runtime of the algorithm. Colour information is discarded and all subsequent tasks are conducted on the intensity image only.
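As an illustration, this step takes only a few lines with the OpenCV C++ API used later in this part (a minimal sketch: the file name is illustrative, and the exact constant names vary between OpenCV versions):

#include <opencv2/opencv.hpp>

int main()
{
    // Load the uploaded picture (the file name is illustrative only).
    cv::Mat rgb = cv::imread("query.jpg");
    if (rgb.empty())
        return 1;

    // Discard the colour information: detection, description and
    // matching are all performed on the intensity image alone.
    cv::Mat grey;
    cv::cvtColor(rgb, grey, cv::COLOR_BGR2GRAY);
    return 0;
}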

2.4.2 Interest Point detector

Interest points are often corners or edges in the image, but they should be invariant to scale and stable, meaning that they should be consistently detected even in a different setting. The SURF interest point extraction step is based on the determinant of the Hessian matrix. Given a point X = (x, y) in an image I, the Hessian matrix H(X, σ) in X at scale σ is defined as follows:

$$H(X, \sigma) = \begin{pmatrix} L_{xx}(X, \sigma) & L_{xy}(X, \sigma) \\ L_{xy}(X, \sigma) & L_{yy}(X, \sigma) \end{pmatrix}$$


where $L_{xx}(X, \sigma)$ refers to the convolution of the second-order Gaussian derivative $\frac{\partial^2}{\partial x^2} g(\sigma)$ with the image $I$ at point $X$, and similarly for $L_{xy}(X, \sigma)$ and $L_{yy}(X, \sigma)$. These derivatives are known as Laplacians of Gaussians. The scale space is constructed using box filters, as shown below.

FIGURE 18: BOX FILTERS

The box filters can be evaluated very fast using integral images, as defined in [28], and independently of their size. Therefore, the scale space is analysed by up-scaling the filter size rather than by iteratively reducing the image size, which greatly reduces the computing time. To localise interest points in the image and over scales, a non-maximum suppression in a 3×3×3 neighbourhood is applied. The nearby data are then interpolated to find the location, in both space and scale, to sub-pixel accuracy. To do this, the determinant of the Hessian, $H(x, y, \sigma)$, is expressed as a Taylor expansion up to quadratic terms centred at the detected location:

$$H(X) = H + \frac{\partial H}{\partial X}^{T} X + \frac{1}{2} X^{T} \frac{\partial^{2} H}{\partial X^{2}} X$$

The interpolated location of the extremum, $\hat{X} = (x, y, \sigma)$, is found by taking the derivative of this function and setting it to zero:

$$\hat{X} = -\left( \frac{\partial^{2} H}{\partial X^{2}} \right)^{-1} \frac{\partial H}{\partial X}$$
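To make the role of the integral image concrete, the sketch below shows the constant-time box sum on which the filter evaluation described above relies. This is a minimal sketch, not the code of the recognition application, and the names are mine:

#include <vector>

// Integral image: ii(y, x) holds the sum of all pixels above and to
// the left. Once built, the sum over any axis-aligned rectangle costs
// four look-ups, independently of the rectangle (i.e. filter) size --
// the property SURF exploits to up-scale box filters instead of
// down-scaling the image.
struct IntegralImage {
    int w, h;
    std::vector<long long> ii; // (h+1) x (w+1), first row/column zero

    IntegralImage(const unsigned char* img, int width, int height)
        : w(width), h(height), ii((width + 1) * (height + 1), 0)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                ii[(y + 1) * (w + 1) + (x + 1)] =
                      img[y * w + x]
                    + ii[y * (w + 1) + (x + 1)]
                    + ii[(y + 1) * (w + 1) + x]
                    - ii[y * (w + 1) + x];
    }

    // Sum of the pixels in the rectangle [x0, x1) x [y0, y1).
    long long boxSum(int x0, int y0, int x1, int y1) const
    {
        return ii[y1 * (w + 1) + x1] - ii[y0 * (w + 1) + x1]
             - ii[y1 * (w + 1) + x0] + ii[y0 * (w + 1) + x0];
    }
};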

2.4.3 Interest Point descriptor

Once the interest points are identified, each of them must have a distinct quantitative representation for matching purposes. These quantitative representations are called feature descriptors. The SURF descriptor describes how the pixel intensities are distributed within a scale-dependent neighbourhood of each interest point detected by the Hessian detector. The extraction of the descriptor can be divided into two distinct tasks.


The first step consists of fixing a reproducible orientation based on information from a circular region around the interest point. First, the Haar wavelet⁹ responses in the x and y directions are calculated in a circular neighbourhood of radius 6σ around the interest point (σ refers to the scale at which the point was detected). The weighted responses are then represented as points in a vector space, with the x-responses along the abscissa and the y-responses along the ordinate. The dominant orientation is selected by rotating a circle segment covering an angle of π/3 around the origin. At each position, the x- and y-responses within the segment are summed to form a new vector. The longest vector lends its orientation to the interest point, as the figure below shows.

FIGURE 19: ORIENTATION ASSIGNMENT
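A minimal sketch of this sliding-window search, assuming the weighted wavelet responses have already been computed (the 0.1 rad step and the omission of the Gaussian weighting details are simplifications of mine):

#include <cmath>
#include <cstddef>
#include <vector>

struct Response { double dx, dy; }; // weighted Haar responses of one sample

// Slide a pi/3-wide angular window around the origin, sum the (dx, dy)
// responses falling inside it, and return the orientation of the
// longest resulting vector.
double dominantOrientation(const std::vector<Response>& responses)
{
    const double PI = 3.14159265358979;
    double bestAngle = 0.0, bestNorm = -1.0;
    for (double start = -PI; start < PI; start += 0.1) {
        double sx = 0.0, sy = 0.0;
        for (std::size_t i = 0; i < responses.size(); ++i) {
            double angle = std::atan2(responses[i].dy, responses[i].dx);
            double delta = angle - start;
            while (delta < 0.0) delta += 2.0 * PI; // wrap into [0, 2*pi)
            if (delta < PI / 3.0) {                // inside the segment
                sx += responses[i].dx;
                sy += responses[i].dy;
            }
        }
        double norm = sx * sx + sy * sy;
        if (norm > bestNorm) {
            bestNorm = norm;
            bestAngle = std::atan2(sy, sx);
        }
    }
    return bestAngle;
}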

The second step is to construct a square region aligned with the selected orientation and to extract the SURF descriptor from it. First, a square region is constructed, centred on the interest point and oriented along the orientation selected in the previous step. The region is split up regularly into 4×4 smaller square sub-regions. Within each of these sub-regions, Haar wavelets of size 2σ are calculated for 25 regularly distributed sample points. Denoting the x and y wavelet responses by dx and dy respectively, for these 25 sample points:

$$V_{\text{sub-region}} = \left[ \sum d_x,\; \sum d_y,\; \sum \lvert d_x \rvert,\; \sum \lvert d_y \rvert \right]$$

Therefore each sub-region contributes four values to the descriptor vector, leading to an overall vector of length 4×4×4 = 64, as the figure below shows. The resulting SURF descriptor is invariant to rotation, scale and brightness and, after reduction to unit length, to contrast.

⁹ Haar wavelets are simple filters which can be used to find gradients in the x and y directions. Haar wavelets are used in order to increase robustness and decrease computation time. The left Haar wavelet filter computes the response in the x-direction and the right one in the y-direction. Weights are 1 for the black regions and -1 for the white ones.
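The construction of the 64-dimensional vector can be sketched as follows, assuming the 16 sub-regions and their 25 weighted responses are already available (the helper names are mine, not those of the thesis code):

#include <cmath>
#include <utility>
#include <vector>

// Assemble the SURF descriptor from the Haar wavelet responses
// (dx, dy) of the 25 sample points of each of the 4x4 sub-regions,
// then normalise to unit length for contrast invariance.
std::vector<float> buildDescriptor(
    const std::vector<std::vector<std::pair<float, float> > >& subRegions)
{
    std::vector<float> v;
    v.reserve(64);
    for (std::size_t s = 0; s < subRegions.size(); ++s) { // 16 sub-regions
        float sdx = 0, sdy = 0, sadx = 0, sady = 0;
        for (std::size_t i = 0; i < subRegions[s].size(); ++i) { // 25 samples
            float dx = subRegions[s][i].first;
            float dy = subRegions[s][i].second;
            sdx += dx;  sadx += std::fabs(dx);
            sdy += dy;  sady += std::fabs(dy);
        }
        v.push_back(sdx);  v.push_back(sdy);
        v.push_back(sadx); v.push_back(sady);
    }
    float norm = 0;
    for (std::size_t i = 0; i < v.size(); ++i) norm += v[i] * v[i];
    norm = std::sqrt(norm);
    if (norm > 0)
        for (std::size_t i = 0; i < v.size(); ++i) v[i] /= norm;
    return v;
}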


FIGURE 20: DESCRIPTOR COMPONENTS

FIGURE 21: LEFT: DETECTED INTEREST POINT, MIDDLE: HAAR WAVELET FILTERS, RIGHT: DESCRIPTOR WINDOWS

2.4.4 Matching process

In order to recognise objects from the database, the images in the test set are compared to all object images in the database by matching their respective interest points. The object shown on the reference image with the highest number of matches with respect to the test image is chosen as the recognised object. The matching is carried out using the nearest neighbour search strategy [29]. An interest point in the test image is compared to an interest point in the reference image by calculating the Euclidean distance between their descriptor vectors. A matching pair is detected if its distance is closer than 0.6 times the distance of the second nearest neighbour.
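The following sketch illustrates this nearest neighbour search with the 0.6 ratio criterion (distances are compared squared, hence the 0.36 factor; this is an illustration of the strategy, not the exact code of Appendix 9):

#include <cfloat>
#include <cstddef>
#include <vector>

typedef std::vector<float> Descriptor; // 64-dimensional SURF vector

// Squared Euclidean distance between two descriptor vectors.
static double dist2(const Descriptor& a, const Descriptor& b)
{
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double t = a[i] - b[i];
        d += t * t;
    }
    return d;
}

// Count the matching pairs between the test image and one reference
// image: a pair matches if its distance is below 0.6 times the
// distance to the second nearest neighbour (0.36 on squared values).
int countMatches(const std::vector<Descriptor>& test,
                 const std::vector<Descriptor>& ref)
{
    int matches = 0;
    for (std::size_t i = 0; i < test.size(); ++i) {
        double best = DBL_MAX, second = DBL_MAX;
        for (std::size_t j = 0; j < ref.size(); ++j) {
            double d = dist2(test[i], ref[j]);
            if (d < best)        { second = best; best = d; }
            else if (d < second) { second = d; }
        }
        if (second < DBL_MAX && best < 0.36 * second)
            ++matches;
    }
    return matches;
}

The recognised object is then the database image maximising countMatches() over all reference images.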


FIGURE 22: MATCHING FEATURES: AT THE BOTTOM, THE TAKEN PICTURE; ON TOP, THE OBJECT IMAGE IN THE DATABASE.

2.5 Implementation

2.5.1 Hardware implementation

All images were captured with an HTC Desire Android™ 2.1 handheld device and have an original resolution of 1024 × 768. The experiments were conducted on a virtual server equipped with an Intel® Core™ 2 Duo CPU running at 2.26 GHz, with 1.94 GB of RAM and Windows XP.

2.5.2 Software implementation

This part presents the software architecture of the Computer Vision Discovery system. The sequence diagram below shows the exchanges between the different actors of the system.


FIGURE 23: SEQUENCE DIAGRAM OF THE CV DISCOVERY SYSTEM

• Mobile Client

This part is developed on Android OS using the Android Development Tools (ADT) plug-in for the Eclipse IDE. The ADT plug-in adds extensions to the Eclipse integrated environment to develop and debug Android applications in Java. The mobile client of the discovery system is composed of two main classes:

TakePictureActivity.java: this class implements a client that connects to the server and sends the taken picture for processing on the server side.

CVResultActivity.java: this class displays on the phone screen the result sent by the server after image processing.

Appendices 6 and 7 show the code of the TakePictureActivity and CVResultActivity classes respectively.


• Virtual Object Framework (the remote server)

This part is developed on a Windows XP PC in Java using the Eclipse IDE. The remote server is the same as the one hosting the virtual object, presented in part (III/1.3). The purpose is to have a centralized architecture of the Web of Things: the Virtual Object Framework handles all the requests from the connected objects. When it receives the POST request from the mobile client with the taken picture, the remote server uploads the picture into a folder in local storage and creates a new process to launch the object recognition application. Appendix 8 shows the code responsible for uploading the picture and launching the recognition_object process.

• Object Recognition Process

This application is implemented in C++ for the following reasons:

• Speed: low-level image processing needs to be fast, and C++ facilitates the implementation of highly efficient functions.

• Image processing libraries: OpenCV is an open source computer vision library which provides an API for working with images and video in C++. It provides functions for reading data from image and video files, extracting features, segmentation, recognition, motion tracking, etc.

The development environment for the implementation is the Microsoft Visual C++ IDE. The object recognition process interacts with a database hosting all the object images of the virtual objects in the Virtual Object Framework. Appendix 9 shows the implementation of the object recognition algorithm in C++ using the OpenCV library.

2.6 Tests and Results

Test of the object recognition algorithm

In the pictures below, the picture of the object saved in the database is on top and the taken picture is at the bottom. While testing my object recognition algorithm, I could clearly see that in some cases the matching is performed well (e.g. bottle recognition) and the object is perfectly recognized. In other cases (e.g. lamp or stapler recognition) the object is recognized, but the matching is not very precise. More tests will be performed before my oral presentation in order to make sure that the algorithm is robust enough under different experimental conditions.


Lamp recognition / Bottle recognition / Stapler recognition

Test of the CV Discovery System

In the pictures below, I test the CV Discovery System in order to discover the lamp (i.e. to find out whether the lamp is connected or not). As shown, after taking the picture and running the object recognition process on the remote server, I received an answer from the Virtual Object Framework, and the virtual object description of the recognized object (the lamp) is displayed on the mobile phone interface. It is then possible to interact with the lamp.


The time for the matching process and for receiving the answer was quite short. This is because of the small number of object images in the database (about 6 objects). More tests will be performed on the system with more object images in the database; the results will be presented during my oral presentation.

2.7 Conclusion

In this part, I developed a Computer Vision Discovery system (CV Discovery system) which enables the user to discover and interact with connected objects in a smart environment. The system is based on object recognition using SURF features. The image processing is done in the remote Virtual Object Framework; the mobile client is responsible for taking a picture of the pointed object and sending it to the remote framework. The first tests performed on this system were satisfying: in most cases, the system is able to recognize the object in a short time. More tests will be performed in the near future.

(Screenshot captions: 1- Take picture; 2- Perform matching; 3- Wait for the remote server answer; 4- Object recognition: display of the virtual object description.)


Conclusion

1. Results

During my internship, I developed a prototype of a system which enables a user to interact with smart objects in a smart environment. This system can be divided into two subsystems:

• The first subsystem is a smart lamp. The lamp is connected to the Internet through Arduino boards and has its own virtual object hosted in a remote Virtual Object Framework. The user can use different interfaces (web browser, mobile phone) to change the status of the physical lamp remotely (switch it on/off, change its intensity, change its colour). The changes are reflected on the physical object and on its virtual representation. Moreover, if the user interacts directly with the physical object (e.g. pushes the on/off button), these changes are reflected on the virtual object, so that the virtual and the real world remain identical.

• The second subsystem is a computer vision based discovery system. This system allows the user to know which objects are smart (connected) objects. The system is based on computer vision object recognition. With his mobile phone camera, the user points at an object and takes a picture of it. The picture is sent to a remote server where the object recognition process is performed: the taken picture is compared to reference objects stored in a database. If the object in the picture is recognized, the remote server sends the URL of the virtual object (VO) to the mobile client. The user can thus see the virtual representation of the object and access its services.

2. Encountered difficulties

In the beginning, it was a bit hard to understand the concept of the Web of Things; I needed to read many papers and do research on the subject to get used to it. I also needed time to understand the architecture of the Virtual Object Framework, in order to know exactly how to integrate the functions related to the lamp virtual object and to the object recognition process. I spent part of my time discovering development on Android OS using the dedicated Android Development Tools (ADT) plug-in for the Eclipse IDE. Finally, I read a lot about the Arduino embedded boards and tested small programs on them before implementing my final solution.

3. Project progress

My internship lasts 6 months and ends on September 30; therefore, I still have a month and a half to work on the subject. By now, I have almost reached the goal of the internship. However, I have not fully tested my object recognition application. The next steps will be to test it with many objects in the database and to perform the recognition of multiple objects in the same picture.


4. Added value and personal impression

This project gave me the chance to discover the concept of the Web of Things. It was really interesting to understand how to connect common objects to the web and to control them remotely using a mobile phone or a web browser. It is always impressive to know that I can control the lights of my house from my mobile phone and check that I switched off all the lamps while I am working at my office. This is just a small example of the huge number of possible useful applications. Technically, I learnt a lot about how to manipulate embedded Arduino boards, how to transform a common object into a smart one by adding technologies to it, how to develop mobile phone applications on Android OS and how to work with large server software in order to integrate new functions into it. Moreover, it was very interesting to work in a team and to contribute to the preparation of the OPEN DAYS demo while respecting the schedule and deadlines. The most interesting parts were the brainstorming meetings, where we discussed new ideas and different approaches, as well as the integration and test of the whole demonstration and dealing with the various technical problems that can occur. Finally, I really liked the work atmosphere in my team and in Bell Labs in general. I could see the real difference between working in research centres and in business centres. In Bell Labs, the teams are focused on developing new concepts and technologies, creating new things that can be turned into products in the future. Employees are also encouraged to write patents and to publish papers at conferences.


Appendices


Appendix 1: Detailed schematics of the Arduino Duemilanove


Appendix 2: Detailed schematics of the Arduino Ethernet Shield


Appendix 3: Source code of the LampVO Java class

package alcatellucent.belllabs.applications.wot.lamp; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.net.HttpURLConnection; import java.net.URL; import java.util.ArrayList; import java.util.Hashtable; import java.util.Map; import javax.ws.rs.Consumes; import javax.ws.rs.GET; import javax.ws.rs.PUT; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Response; import alcatellucent.belllabs.applications.wot.vosgateway.core.vo.CoreVirtualObject; import alcatellucent.belllabs.applications.wot.vosgateway.core.vo.VODataAttribute; import alcatellucent.belllabs.applications.wot.vosgateway.core.vo.VODataAttributeEvent; import alcatellucent.belllabs.applications.wot.vosgateway.core.vodescription.VODataAttributeDescription; import alcatellucent.belllabs.applications.wot.vosgateway.core.vodescription.VODataAttributeType; import alcatellucent.belllabs.applications.wot.vosgateway.core.vodescription.VODataDescription; @Path("/") public class LampVO extends CoreVirtualObject { private static final String URL_ARDUINO = "UrlArduino"; private static final String BLINK = "blink"; private static final String COLOR = "color"; private static final String POWER = "power"; private static final String NAME = "name"; private static final int maxIntensityValue = 100; private static final int minIntensityValue =0; private Blink _blinkThread = null; //private String _voURL = null; private final static String ATTR_STATE = "state"; private final static String ATTR_INTENSITY = "intensity"; // Attribute to select the html template to be used public static final String ATTR_HTML_TEMPLATE = "htmltemplate"; public void init(Map<String, Object> properties) { System.out.println("Start up Lamp VO"); // Set the king attribute


getVOData().setKind("lamp"); if (!properties.containsKey(NAME)) setAttributeValue(NAME, "Grandma's lamp"); // Add other attributes if not already present if (!properties.containsKey(ATTR_STATE)) setAttributeValue(ATTR_STATE, "off"); if (!properties.containsKey(POWER)) setAttributeValue(POWER, "60"); // Add intensity if (!properties.containsKey(ATTR_INTENSITY)) setAttributeValue(ATTR_INTENSITY, "0"); // Add blink service if (!properties.containsKey(BLINK)) setAttributeValue(BLINK, "false"); // Add color if (!properties.containsKey(COLOR)) setAttributeValue(COLOR, "#ffffff"); if (!properties.containsKey(URL_ARDUINO)) setAttributeValue(URL_ARDUINO, "192.168.1.82:80"); // Send Init values to Arduino voChange(); } /** * Indicates whether the lamp is on or not * * @return */ boolean isOn() { VODataAttribute attr = getAttr(ATTR_STATE); if (attr == null) { // Should not happen, leave a message anyway in case System.err.println("Lamp VO does not have a state attribute !"); return false; } String value = attr.getValue(); if (value == null || value.equalsIgnoreCase("off")) { return false; } else { return true; } } @Override public String getCurrentIcon() { if (isOn()) { return "lamp_on.png"; } else { return "lamp_off.png"; } }


/** * Allow the gateway to connect to the Arduino Server and send data * * @param parameter */ public boolean sendToArduino(String param) { try{ String urlStr = "Http://"+ getAttributeValue(URL_ARDUINO)+ "/"; System.out.println("urlString: "+urlStr); if (param != null && param.length () > 0) { urlStr += param; } URL Url = new URL(urlStr); System.out.println(Url);

HttpURLConnection connection = (HttpURLConnection) Url.openConnection();

connection.setDoOutput(true); connection.setRequestMethod("GET"); connection.connect(); InputStream in = connection.getInputStream(); BufferedReader reader = new BufferedReader(new InputStreamReader(in)); String text = reader.readLine(); System.out.println(text); in.close(); connection.disconnect(); return true; } catch(IOException ex) { ex.printStackTrace(); return false; } } @GET @Produces(MediaType.TEXT_HTML) public Response getHtml() { // This resource is normally located into the GWT Library bundle String htmlTemplate = getAttributeValue(ATTR_HTML_TEMPLATE); if (htmlTemplate == null || htmlTemplate.length() == 0) { htmlTemplate = "template/lamp.html"; } URL url = getBundlesResource(htmlTemplate); if (url == null) { throw new WebApplicationException(Response.Status.NOT_ACCEPTABLE); } // Now, replace some values in the gadget file


// WARNING, because of java String, to replace ${KEY} in the file, // you must use \\$\\{KEY\\} as key Hashtable<String, String> toReplace = new Hashtable<String, String>(); // Replace ${HLID} by the default HLID value toReplace.put("\\$\\{TITLE\\}", "Virtual Object " + getID()); toReplace.put("\\$\\{ID\\}", getID()); toReplace.put("\\$\\{KIND\\}", getKind()); toReplace.put("\\$\\{URL\\}", getStrippedVOURL()); toReplace.put("\\$\\{TIMEOUT\\}", "timeout=\"2\""); toReplace.put("\\$\\{NAME\\}", getAttributeValue(NAME)); toReplace.put("\\$\\{WIDTH\\}", ""); toReplace.put("\\$\\{HEIGHT\\}", ""); // Compute the Gateway URL ! String gatewayurl = buildGatewayURL(); toReplace.put("\\$\\{GATEWAY_URL\\}", gatewayurl); try { String html = replaceInTextFile(url, toReplace); if (html == null) { throw new WebApplicationException( Response.Status.INTERNAL_SERVER_ERROR); } return Response.ok(html,headers.getAcceptableMediaTypes().get(0)).build(); } catch (Exception e) { System.out.println("Error while generating XML"); throw new WebApplicationException( Response.Status.INTERNAL_SERVER_ERROR); } } @GET @Path("api") @Produces("text/javascript") public Response getAPI() { URL url_js = getBundlesResource("javascript/lamp.api.js"); if (url_js == null) { throw new WebApplicationException(Response.Status.NOT_ACCEPTABLE); } // Now, replace some values in the gadget file // WARNING, because of java String, to replace ${KEY} in the file, // you must use \\$\\{KEY\\} as key Hashtable<String, String> toReplace = new Hashtable<String, String>(); // Replace ${HLID} by the default HLID value toReplace.put("\\$\\{URL\\}", getStrippedVOURL()); try { String js = replaceInTextFile(url_js, toReplace); if (js == null) { throw new WebApplicationException( Response.Status.INTERNAL_SERVER_ERROR);


} return Response.ok().entity(js).build(); } catch (Exception e) { System.out.println("Error while generating Javascript"); throw new WebApplicationException( Response.Status.INTERNAL_SERVER_ERROR); } } /** * Update The status of the lamp (On/Off) * * @param data * @return */ @PUT @Path("/Arduino") @Consumes() @Produces() public void updateAttr() { String statusLamp = ""; String attributeValue = ""; MultivaluedMap<String, String> params = uriInfo.getQueryParameters(); if (params == null) { throw new WebApplicationException(Response.Status.BAD_REQUEST); } //else statusLamp = params.getFirst(ATTR_STATE); System.out.println("statusLamp:" + statusLamp); if(statusLamp == null){ throw new WebApplicationException(Response.Status.BAD_REQUEST); } attributeValue = getAttributeValue(ATTR_STATE); if (statusLamp.equalsIgnoreCase(attributeValue)) { return; } setAttributeValue(ATTR_STATE, statusLamp); // Notify the change in different thread in case the treatment takes // some times notifyVOChange(); } @Override


public VODataDescription getDescription() { VODataDescription desc = new VODataDescription("lamp"); VODataAttributeDescription attrdesc = new VODataAttributeDescription( ATTR_STATE, VODataAttributeType.ENUM); attrdesc.setDisplayName("State"); ArrayList<String> values = new ArrayList<String>(); values.add("on"); values.add("off"); attrdesc.setPossibleValues(values); desc.addAttributeDescription(attrdesc); attrdesc = new VODataAttributeDescription(POWER, VODataAttributeType.INT); attrdesc.setDisplayName("Power"); desc.addAttributeDescription(attrdesc); return desc; } /** * To be notified of attribute changes */ @Override public void attributeValueChange(VODataAttributeEvent event) { boolean sendToArduinoResult = false; String previousAttributeValue =""; String eventName = event.name(); String eventValue = event.value(); System.out.println("eventValueIs:"); System.out.println(eventValue); // If we clik on the button if (eventName.equals(ATTR_STATE)) { previousAttributeValue = getAttr(ATTR_STATE).getValue(); if (eventValue == null || eventValue.isEmpty()) { System.out.println("error: No status value"); return; } // Start Client to send Http requet to the Arduino Server. if (eventValue.equals("on")) { System.out.println("putLampON"); sendToArduinoResult = sendToArduino("ON"); } // If the lamp is Off else if (eventValue.equals("off")) { System.out.println("Lamp is OFF"); System.out.println("putLampOFF"); sendToArduinoResult = sendToArduino("OFF"); } else


{ //error case, the value provided does not exist

System.out.println("ERROR : Invalid value " + event.value() + "for parameter " + ATTR_STATE);

sendToArduinoResult = false; } if (sendToArduinoResult == false){ getAttr(ATTR_STATE).setValue(previousAttributeValue); } } // If we move the Slider else if (eventName.equals(ATTR_INTENSITY)) { previousAttributeValue = getAttr(ATTR_INTENSITY).getValue(); // if (isOn()){ if (eventValue.isEmpty()) { System.out.println("error: No intensity value"); sendToArduinoResult = false; } else { int intensityValue = Integer.parseInt(eventValue); String intensityValueString = eventValue;

if ((intensityValue < minIntensityValue) || (intensityValue > maxIntensityValue)) {

System.out.println("errorIntensityValue"); sendToArduinoResult = false; } else { System.out.println("changing Intensity"); sendToArduinoResult = sendToArduino(intensityValueString); } } if (sendToArduinoResult == false){ getAttr(ATTR_INTENSITY).setValue(previousAttributeValue); } System.out.println(event.toString()); } else if ( eventName.equals(COLOR)) { previousAttributeValue = getAttr(COLOR).getValue(); String colorAsString = eventValue; if (eventValue.isEmpty()) {


System.out.println("error: No color value"); sendToArduinoResult = false; } else { while (colorAsString.startsWith("#")) { colorAsString = colorAsString.substring(1); } System.out.println("changing color"); System.out.println("COLOR"+colorAsString); sendToArduinoResult = sendToArduino("COLOR"+colorAsString); } if (sendToArduinoResult == false){ getAttr(COLOR).setValue(previousAttributeValue); } } else if (event != null && eventName.equals(BLINK)) { if (eventValue.equalsIgnoreCase("true")) { if (_blinkThread == null) { // this.setAttributeValue(event.name(), // event.previousValue()); _blinkThread = new Blink(this); _blinkThread.start(); } } else { if (_blinkThread != null) { _blinkThread.stopBlink(); } _blinkThread = null; } } } @Override /** * olb - Handle the case when all the VO is changed. * Current implementation juste take into account intensity and state * simulating Attribute Value Changes * TODO Sameh ?: Write dedicated method to set intensity and state to be able * to reuse them here and maybe in init method */ public void voChange() { System.out.println("Lamp VO changed ! "); sendToArduino(getAttr(ATTR_STATE).getValue()); sendToArduino(getAttr(ATTR_INTENSITY).getValue()); String colorValue = getAttr(COLOR).getValue(); while (colorValue.startsWith("#")) { colorValue = colorValue.substring(1);


} sendToArduino("COLOR"+colorValue); } }


Appendix 4: Structures in the Arduino Code

• Sketches

A sketch is the name that Arduino uses for a program. It is the unit of code that is uploaded to and run on an Arduino board. Arduino programs can be divided into three main parts: structure, values (variables and constants), and functions.

• Structures

The two main structures in an Arduino sketch are:

setup()

The setup() function is called when a sketch starts. It is used to initialize variables, pin modes, start using libraries, etc. The setup function runs only once, after each power-up or reset of the Arduino board.

loop()

After creating a setup() function, which initializes and sets the initial values, the loop() function does precisely what its name suggests and loops consecutively, allowing the program to change and respond. It is used to actively control the Arduino board (a minimal example sketch is given at the end of this appendix).

• Values

There are two types of values: constants and variables.

Constants are predefined variables in the Arduino language. They are used to make the programs easier to read. We classify constants into groups:

Defining logical levels: true and false (Boolean constants)
Defining pin levels: HIGH and LOW
Defining digital pins: INPUT and OUTPUT

Variables are the same as in other programming languages. They have different types (int, boolean, char, array, etc.).

• Functions

Some functions are defined in the Arduino language to manage the digital and analog reads and writes of the I/O (for example digitalWrite(), analogWrite(), ...), others to manage time and mathematical operations.

• Libraries

Libraries provide extra functionality for use in sketches, e.g. working with hardware or manipulating data.
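As an illustration of these structures, here is a minimal sketch that blinks the LED on pin 9 (the pin used for the lamp in Appendix 5; the delay values are illustrative):

// Minimal Arduino sketch: blink the LED wired to pin 9.
int ledPin = 9;

void setup() {
  // Runs once after each power-up or reset: configure the LED pin.
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // Runs forever: drive the pin with the predefined HIGH/LOW levels.
  digitalWrite(ledPin, HIGH);
  delay(1000);
  digitalWrite(ledPin, LOW);
  delay(1000);
}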


Appendix 5: Code embedded into the Arduino Board.

#include <WString.h> #include <Ethernet.h> /* Demonstartion how to control led state (on /off)via a browser Control led intensity via a browser Control the Led state via a button The circuit: * Arduino Duemilanove * Arduino Ethernet shield * Basic FTDI breakout 5V * LED connected to GND and digital pin 9 via resistor * Button connected to pin 8 By Sameh BEN FREDJ */ byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; //physical mac address byte ip[] = { 192, 168, 1, 82 }; // IP adress of Arduino byte server[] = { 192, 168, 1, 100 }; // IP adresse of the gateway of VO byte gateway[] = { 192, 168, 95, 254 }; byte subnet[] = { 255, 255, 255, 0 }; Client client(server, 8888); // Arduino Client connecting to the gateway server //Client client(server, 8891); // Arduino Client connecting to the lift server Server serverArduino(80); // Gateway Client connecting to the Arduino server int gradation = 255; int ledPinRed = 3; // LED pin int ledPinGreen = 5;// green int ledPinBleu= 6; // bleu int ledPin= 9; int RedValue = 255; int GreenValue = 255; int BleuValue= 0; int Red ; int Green ; int Bleu; String Sameh = String(30);; int buttonPin = 8; // Pushbutton pin int IntensityValue =0; // Value of the intensity of the Led int ledState =0; // Led ON or OFF


int buttonState = 0; // variable for reading the pushbutton status : HIGH or LOW int lastButtonState =0; String readString = String(30); //string for fetching data from address boolean LEDON = false; //LED status flag int mode = 0; // chose if Arduino in mode Server (0) or in mode Client (1) void setup() { //start Ethernet Ethernet.begin(mac, ip, gateway, subnet); Serial.print("Etehernet Begin"); //Set pin 9 to output pinMode(ledPin, OUTPUT); //Set pin 8 to input pinMode(buttonPin, INPUT); Serial.begin(9600); } // Send "on" to the gateway server void doRequest_on() { Serial.println("doRequest Function on"); while(!client.connect()) { Serial.println("Can't get a connection to server!"); } Serial.println("(Finally) got a connection, so send Put Request on....."); //client.print("PUT /gateway/mylamp/Arduino?state=on"); // localhost lamp home client.print("PUT /local/locallamp/Arduino?state=on"); // lamp is at grandma home client.println(" HTTP/1.0"); client.println(); Serial.println("Reading incoming data"); incoming(); client.stop(); Serial.println('socket stop'); } //send an off to Gateway server void doRequest_off() { Serial.println("doRequest_off Function"); while(!client.connect()) { Serial.println("Can't get a connection to server!"); } Serial.println("(Finally) got a connection, so send put request off.....");


//client.print("PUT /gateway/mylamp/Arduino?state=off"); client.print("PUT /local/locallamp/Arduino?state=off"); // lamp is at grandma home client.println(" HTTP/1.0"); client.println(); Serial.println("Reading incoming data"); incoming(); client.stop(); Serial.println('socket stop'); } //Gateway server is responding to the Arduino Client void incoming() { while (client.available()) { char c = client.read(); Serial.print(c); } if (!client.connected()) { Serial.println("Disconnected."); } } void loop() { buttonState = digitalRead(buttonPin); if ( buttonState != lastButtonState) { mode= 1; // mode client } else { mode= 0; // mode server } switch (mode) { case 0 : //MODE SERVER { // Create a client connection Client clientArduino = serverArduino.available(); if (clientArduino) { while (clientArduino.connected()) {


if (clientArduino.available()) { char c = clientArduino.read(); //read char by char HTTP request if (readString.length() < 30) { //store characters to string readString.append(c); } //output chars to serial port // Serial.println(c); //if HTTP request has ended if (c == '\n') { //lets check if LED should be lighted if(readString.contains("ON")) { //led has to be turned ON serverArduino.write("led ON ok"); Serial.print("Setting LED ON"); // set the LED on with intensity value Serial.println("Les valeurs avant le On:"); Serial.println(Red); Serial.println(Green); Serial.println(Bleu); analogWrite(ledPinRed,Red); analogWrite(ledPinGreen,Green); analogWrite (ledPinBleu,Bleu); LEDON = true; } else if (readString.contains("OFF")) { //led has to be turned OFF serverArduino.write("led OFF ok"); Serial.print("Setting LED OFF"); digitalWrite(ledPinRed,LOW); digitalWrite(ledPinGreen,LOW); digitalWrite (ledPinBleu,LOW); LEDON = false; } else if (readString.contains("COLOR")) { // Serial.println(readString); //Serial.print(readString.length()); // Length =9 because #FFFFFF\r\n readString = readString.substring(10,16); // we extract only FFFFFFF // Serial.println(readString); //Read value : first FF converted to decimal (0 to 255) RedValue = strtoul(readString.substring(0,2),0,16); // Red = RedValue*((float)IntensityValue/(float)gradation); Serial.println(RedValue);


GreenValue = strtoul(readString.substring(2,4),0,16); // Green in decimal Green = GreenValue*((float)IntensityValue/(float)gradation); Serial.println(GreenValue); BleuValue= strtoul(readString.substring(4,6),0,16); // bleu from hexa to decimal Bleu = BleuValue*((float)IntensityValue/(float)gradation); Serial.println(BleuValue); if (LEDON == true){ analogWrite(ledPinRed,Red); analogWrite(ledPinGreen,Green); analogWrite (ledPinBleu,Bleu); serverArduino.write("Color Ok");} //LEDON = true; } else { Serial.println(readString);

//WARNING the number send is from 1 to 100. We read 3 characters in all //cases. Atoi function seems to work correctly anyway but it should be safer to read the number until the next space

readString = readString.substring(5,8); Serial.println(readString); IntensityValue = atoi(readString); Serial.println(IntensityValue); IntensityValue = map(IntensityValue,1,100,1,255); serverArduino.write("intensity ok"); Red = RedValue*((float)IntensityValue/(float)gradation); Green = GreenValue*((float)IntensityValue/(float)gradation); Bleu = BleuValue*((float)IntensityValue/(float)gradation); if( LEDON == true) { //Change Led intensity only if led is On analogWrite(ledPinRed,Red); analogWrite(ledPinGreen,Green); analogWrite (ledPinBleu,Bleu); } } //clearing string for next read readString=""; //stopping client clientArduino.stop(); }// end if (c == '\n') }//end if (clientArduino.available()) }//end while (clientArduino.connected()) }// end if (clientArduino) }//end case 0


break; // break case 0 case 1: //MODE client { if (LEDON == false ) { analogWrite(ledPinRed,Red); analogWrite(ledPinGreen,Green); analogWrite (ledPinBleu,Bleu); doRequest_on(); LEDON = true; lastButtonState = buttonState; break; } if (LEDON == true ) { analogWrite(ledPinRed,0); analogWrite(ledPinGreen,0); analogWrite (ledPinBleu,0); doRequest_off(); LEDON = false; lastButtonState = buttonState; break; } } // break; // break case 1 default: { } break; // break default }// end switch }// end void loop()


Appendix 6: Code of TakePictureActivity.java class

package alcatellucent.belllabs.applications.wot.android.application; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.util.Timer; import java.util.TimerTask; import java.util.UUID; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.HttpStatus; import org.apache.http.client.HttpClient; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.entity.ByteArrayEntity; import org.apache.http.impl.client.DefaultHttpClient; import org.apache.http.util.EntityUtils; import alcatellucent.belllabs.applications.wot.android.application.ConfigActivity.PrefKey; import android.app.Activity; import android.app.ProgressDialog; import android.content.Context; import android.content.Intent; import android.content.pm.ApplicationInfo; import android.hardware.Camera; import android.hardware.Camera.AutoFocusCallback; import android.os.AsyncTask; import android.os.Bundle; import android.os.Environment; import android.util.Log; import android.view.KeyEvent; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.View; import android.view.Window; // ---------------------------------------------------------------------- public class TakePictureActivity extends Activity { private Preview mPreview; public static String STORAGE_DIRECTORY = null; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Hide the window title. requestWindowFeature(Window.FEATURE_NO_TITLE); // Create our Preview view and set it as the content of our activity. mPreview = new Preview(this);


setContentView(mPreview); //Check if a dedicated directory exist for this application //if not, create it if (STORAGE_DIRECTORY == null) { try { ApplicationInfo info = getApplication().getApplicationInfo(); String label = getResources().getText(info.labelRes).toString(); File cvDirectory = new File(Environment.getExternalStorageDirectory() + "/" + label); if (!cvDirectory.exists()) { cvDirectory.mkdirs(); } STORAGE_DIRECTORY = cvDirectory.getPath();

Log.i(getClass().getSimpleName(), "onCreate | Store pictures for computer vision in directory " + STORAGE_DIRECTORY);

} catch (Exception e) {

Log.e(getClass().getSimpleName(), "onCreate | Could not create dedicated directory to store pictures for computer vision !",e);

STORAGE_DIRECTORY = null; } } } @Override public boolean dispatchKeyEvent(KeyEvent event) { int action = event.getAction(); int keyCode = event.getKeyCode(); Log.i(getClass().getSimpleName(), "dispatchKeyEvent | action:" + action + " keycode:" + keyCode); if (action == KeyEvent.ACTION_DOWN && ( keyCode==KeyEvent.KEYCODE_CAMERA || keyCode==KeyEvent.KEYCODE_SEARCH || keyCode==KeyEvent.KEYCODE_DPAD_CENTER || keyCode==KeyEvent.KEYCODE_ENTER)) { //Force auto focus before taking the picture mPreview.autoFocus(); //mPreview.takePicture(); return(true); } return super.dispatchKeyEvent(event); } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { if (keyCode == KeyEvent.KEYCODE_BACK) { finish(); return true; } else


return super.onKeyDown(keyCode, event); } } // ---------------------------------------------------------------------- class Preview extends SurfaceView implements SurfaceHolder.Callback,Camera.PictureCallback, AutoFocusCallback { // set the image dimensions final public static int MAX_IMAGE_WIDTH = 1024; final public static int MAX_IMAGE_HEIGHT = 768; SurfaceHolder mHolder; Camera mCamera; public ProgressDialog progressDialog = null; Preview(Context context) { super(context); // Install a SurfaceHolder.Callback so we get notified when the // underlying surface is created and destroyed. mHolder = getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceCreated(SurfaceHolder holder) { // The Surface has been created, acquire the camera and tell it where // to draw. mCamera = Camera.open(); try { mCamera.setPreviewDisplay(holder); } catch (IOException exception) { mCamera.release(); mCamera = null; // TODO: add more exception handling logic here } } public void surfaceDestroyed(SurfaceHolder holder) { // Surface will be destroyed when we return, so stop the preview. // Because the CameraDevice object is not a shared resource, it's very // important to release it when the activity is paused. mCamera.stopPreview(); mCamera.release(); mCamera = null; } public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { // Now that the size is known, set up the camera parameters and begin // the preview. Log.i(getClass().getSimpleName(), "surfaceChanged | w:" + w + " h:" + h); Camera.Parameters parameters = mCamera.getParameters();


Log.i(getClass().getSimpleName(), "surfaceChanged | flatten(" + parameters.flatten()+")"); //We could play around with the preview size to choose the best one suitable //but might be a little difficult String previewSizeValueString = parameters.get("preview-size-values"); // saw this on Xperia if (previewSizeValueString == null) { previewSizeValueString = parameters.get("preview-size-value"); } Log.i(getClass().getSimpleName(), "surfaceChanged | preview-size-value(" + previewSizeValueString+")"); try { //Get the parameters really set parameters = mCamera.getParameters(); Log.i(getClass().getSimpleName(), "surfaceChanged | preview w:" + parameters.getPreviewSize().width + " h:" + parameters.getPreviewSize().height); //1024x768 works for both G2 and desire. //To be safer, the size should be choosen according to the one provided in picture-size-values //parameters //It makes a picture of about 300Ko

//We could play around also with lowest quality of jpeg using jpeg-quality parameter parameters.setPictureSize(MAX_IMAGE_WIDTH, MAX_IMAGE_HEIGHT); mCamera.setParameters(parameters);

Log.i(getClass().getSimpleName(), "surfaceChanged | picture w:" + mCamera.getParameters().getPictureSize().width + " h:" + mCamera.getParameters().getPictureSize().height);

} catch (Exception e) { //Unfortunately, this seems to happen on HTC desire !

Log.i(getClass().getSimpleName(), "surfaceChanged | ERROR while setting camera picture size parameters " + e.toString());

} mCamera.startPreview(); } protected void takePicture() { mCamera.stopPreview(); mCamera.takePicture(null, null, this); } protected void autoFocus() { mCamera.autoFocus(this); } @Override public void onPictureTaken(byte[] data, Camera camera)


{ //We send it to the server and we can close this view progressDialog = ProgressDialog.show(getContext(), "Computer Vision Processing", "Please wait...", true, true); new SendPhotoTask().execute(data); } @Override public void onAutoFocus(boolean success, Camera camera) { if (success) { //Now we can take the picture takePicture(); } } class SendPhotoTask extends AsyncTask<byte[], String, String> { String jpegFileName = null; int nb_poll = 5; // nb of pollings before stop String timeout_result = "processing timeout, no result from the server"; //String timeout_result = ConfigActivity.getString(PrefKey.VO_GATEWAY_IP_LOCAL) + "localmobilephone"; @Override protected String doInBackground(byte[]... jpeg) { System.out.println("doInBackground"); try { jpegFileName = "cv"+ UUID.randomUUID().toString() + ".jpg"; HttpClient httpclient = new DefaultHttpClient();

HttpPost httppost = new HttpPost(ConfigActivity.getString(PrefKey.VO_GATEWAY_IP_LOCAL) + "cv/" + jpegFileName);

httppost.setHeader("Content-Type","image/jpg"); ByteArrayEntity bae = new ByteArrayEntity(jpeg[0]); httppost.setEntity(bae); HttpResponse response = httpclient.execute(httppost); if (response.getStatusLine().getStatusCode() == HttpStatus.SC_OK) { HttpEntity entity = response.getEntity();

Log.i(getClass().getSimpleName(), "Request successfull :" + response.getStatusLine().toString());

} else {


Log.i(getClass().getSimpleName(), "Request failed :" + response.getStatusLine().toString());

return null; } } catch(Exception e) { Log.i(getClass().getSimpleName(), "Exception catched:" + e.getMessage()); return null; } return "completed"; } // -- called as soon as doInBackground method completes // -- notice that the third param gets passed to this method @Override protected void onPostExecute( String result ) { super.onPostExecute(result); Log.i( getClass().getSimpleName(), "onPostExecute(): " + result ); // check if the processing is terminated // setTimer to trigger a HttpGet request in order to get back VOURLs of objects setTimer(2000); // set a timer of 2 seconds } private void setTimer(int milliseconds){ final Timer myTimer = new Timer(); myTimer.schedule(new TimerTask() { @Override public void run() { String result = retrieveCVResult(); if (result != null){ myTimer.cancel(); // start CVResultActivity

Intent cvresultIntent = new Intent().setClass(getContext(), CVResultActivity.class);

Bundle bundle = new Bundle(); bundle.putString("cvresult", result); cvresultIntent.putExtras(bundle); Context context = getContext(); context.startActivity(cvresultIntent); //Stop this activity which will release the camera resource progressDialog.dismiss(); ((Activity)context).finish(); // stop takePictureActivity } else { nb_poll--; if (nb_poll == 0){ // stop polling myTimer.cancel(); // start CVResultActivity

Intent cvresultIntent = new Intent().setClass(getContext(), CVResultActivity.class); Bundle bundle = new Bundle(); bundle.putString("cvresult", timeout_result); cvresultIntent.putExtras(bundle); Context context = getContext(); context.startActivity(cvresultIntent);


                        // Stop this activity, which releases the camera resource
                        progressDialog.dismiss();
                        ((Activity) context).finish(); // stop TakePictureActivity
                    }
                }
            }
        }, 0, milliseconds);
    }

    private String retrieveCVResult() {
        Log.i(getClass().getSimpleName(), "Polling the server for the CV result.");
        // HTTP GET to fetch the VO URLs
        HttpClient httpclient = null;
        try {
            // The computer vision application is assumed to store its result in a
            // file named after the uploaded picture
            String jpegFileName_result = jpegFileName.replaceAll(".jpg", ".result");
            httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(ConfigActivity.getString(PrefKey.VO_GATEWAY_IP_LOCAL)
                    + "cv/" + jpegFileName_result);
            httpget.setHeader("Content-Type", "text/*");
            HttpResponse response = httpclient.execute(httpget);
            if (response.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {
                Log.i(getClass().getSimpleName(),
                        "Get CV result request successful: " + response.getStatusLine().toString());
                HttpEntity respEntity = response.getEntity();
                if (respEntity != null) {
                    String responseBody = EntityUtils.toString(respEntity);
                    Log.i(getClass().getSimpleName(), "CV result: " + responseBody);
                    return responseBody;
                } else {
                    Log.i(getClass().getSimpleName(), "Get CV result request: response entity is null");
                    return null;
                }
            } else {
                Log.i(getClass().getSimpleName(),
                        "Get CV result request failed: " + response.getStatusLine().toString());
                return null;
            }
        }
        catch (IOException e) {
            Log.i(getClass().getSimpleName(), "Get CV result request exception: " + e.getMessage());
            return null;
        } finally {
            // When the HttpClient instance is no longer needed, shut down the
            // connection manager to ensure immediate deallocation of all system resources
            if (httpclient != null)
                httpclient.getConnectionManager().shutdown();
        }
    }
}

class SavePhotoTask extends AsyncTask<byte[], String, String> {
    @Override
    protected String doInBackground(byte[]... jpeg) {
        if (TakePictureActivity.STORAGE_DIRECTORY != null) {
            String jpegFileName = "cv" + UUID.randomUUID().toString() + ".jpg";
            File photo = new File(TakePictureActivity.STORAGE_DIRECTORY, jpegFileName);
            if (photo.exists()) {
                photo.delete();
            }
            try {
                FileOutputStream fos = new FileOutputStream(photo.getPath());
                fos.write(jpeg[0]);
                fos.close();
                Log.i(getClass().getSimpleName(), "Wrote file " + jpegFileName + " successfully");
            } catch (java.io.IOException e) {
                Log.e(getClass().getSimpleName(),
                        "Exception in photoCallback when writing file " + jpegFileName, e);
            }
        }
        return null;
    }
}
}
Appendix 7: Code of the CVResultActivity.java class

package alcatellucent.belllabs.applications.wot.android.application;

import java.util.StringTokenizer;
import java.util.Vector;

import alcatellucent.belllabs.applications.wot.android.R;
import alcatellucent.belllabs.applications.wot.android.tools.ImageManager;
import alcatellucent.belllabs.applications.wot.android.virtualobject.VirtualObjectManager;
import alcatellucent.belllabs.applications.wot.android.virtualobject.WebObject;
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.ImageButton;
import android.widget.TextView;

public class CVResultActivity extends Activity {

    Vector<String> vourls = new Vector<String>();
    String msg = null;
    Activity currentActivity;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        currentActivity = this;
        Bundle extras = getIntent().getExtras();
        if (extras != null) {
            String result = extras.getString("cvresult");
            if (result != null) {
                // Analyze the result
                result = result.trim();
                if (result.length() == 0) {
                    msg = "no result found.";
                } else if (result.startsWith("http://")) {
                    // Found one or more VO URLs, separated by ';'
                    StringTokenizer tokens = new StringTokenizer(result, ";");
                    while (tokens.hasMoreElements())
                        vourls.add((String) tokens.nextElement());
                } else {
                    msg = result;
                }
            }
        } else {
            msg = "no result found.";
        }

        // Fill the CVResult view with the first two VOs
        if (vourls.size() == 0) {
            // Display the error message
            setContentView(R.layout.cvresult_error);
            TextView error_view = (TextView) this.findViewById(R.id.cv_error);
            error_view.setText(msg);
        } else {
            setContentView(R.layout.cvresult_ok);
            // Get the VOs from the VirtualObjectManager
            VirtualObjectManager voManager = MainActivity.getVOManager();
            WebObject wo;
            int id_image = 0;
            int id_title = 0;
            TextView textView;
            ImageButton imageBtn;
            // Initialize the view: both result slots invisible by default
            View rview1 = this.findViewById(R.id.result1);
            rview1.setVisibility(View.GONE);
            View rview2 = this.findViewById(R.id.result2);
            rview2.setVisibility(View.GONE);
            // This first version displays at most two VOs; later, a ListView should
            // be used to handle any number of VOs (see the sketch after this class)
            for (int i = 0; i < vourls.size() && i < 2; i++) {
                wo = voManager.get(vourls.elementAt(i));
                if (wo == null) {
                    Log.e(getClass().getSimpleName(),
                            "cannot retrieve VO from URL: " + vourls.elementAt(i));
                    // Skip to the next one
                    continue;
                }
                if (i == 0) {
                    id_image = R.id.result1_image;
                    id_title = R.id.result1_title;
                    rview1.setVisibility(View.VISIBLE);
                } else if (i == 1) {
                    id_image = R.id.result2_image;
                    id_title = R.id.result2_title;
                    rview2.setVisibility(View.VISIBLE);
                }
                textView = (TextView) this.findViewById(id_title);
                textView.setText(wo.getName());
                imageBtn = (ImageButton) this.findViewById(id_image);
                ImageManager.setImageViewWithWO(wo, imageBtn);
                imageBtn.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // Action 1: add the object to the bookmark list
                        // Action 2: display the object
                        if (v.getId() == R.id.result1_image)
                            MainActivity.bookmark_vourl = vourls.elementAt(0);
                        else if (v.getId() == R.id.result2_image)
                            MainActivity.bookmark_vourl = vourls.elementAt(1);
                        MainActivity.tabHost.setCurrentTab(1);
                        currentActivity.finish();
                    }
                });
            }
        }
    }

}
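
The loop above caps the display at two VOs, as its comment notes. The following is a minimal sketch of the ListView extension that comment calls for; the CVResultListActivity class name, the cvresult_list layout, and its cv_list ListView id are illustrative assumptions rather than parts of the prototype, while WebObject, VirtualObjectManager, and MainActivity.getVOManager() are reused from the code above.

package alcatellucent.belllabs.applications.wot.android.application;

import java.util.ArrayList;
import java.util.List;

import alcatellucent.belllabs.applications.wot.android.R;
import alcatellucent.belllabs.applications.wot.android.virtualobject.VirtualObjectManager;
import alcatellucent.belllabs.applications.wot.android.virtualobject.WebObject;
import android.app.Activity;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.ListView;

// Hypothetical extension (not part of the prototype): display any number of VOs
public class CVResultListActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.cvresult_list); // assumed layout containing a ListView "cv_list"

        // Resolve every VO URL passed in the intent, skipping unknown ones
        List<String> names = new ArrayList<String>();
        VirtualObjectManager voManager = MainActivity.getVOManager();
        String result = getIntent().getStringExtra("cvresult");
        if (result != null) {
            for (String vourl : result.split(";")) {
                WebObject wo = voManager.get(vourl.trim());
                if (wo != null)
                    names.add(wo.getName());
            }
        }

        // Built-in one-line rows keep the sketch short; a custom adapter could
        // call ImageManager.setImageViewWithWO() to restore the thumbnails
        ListView list = (ListView) findViewById(R.id.cv_list);
        list.setAdapter(new ArrayAdapter<String>(this,
                android.R.layout.simple_list_item_1, names));
    }
}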

Appendix 8: Uploading a picture & launching the object recognition process

/**
 * POST request that allows a mobile client to upload a picture to the server.
 * This picture is used to identify the object against the list of known objects
 * in the gateway.
 */
@POST
@Path(COMPUTER_VISION_PATH + "/{name : .+}")
@Consumes({"image/jpg", "image/png", "image/*"})
@Produces()
public Response uploadImage(@PathParam("name") String name, InputStream is) {
    BufferedInputStream bis = null;
    BufferedOutputStream bos = null;
    try {
        bis = new BufferedInputStream(is);
        System.out.println("outputStream:" + getCVImagesPath() + "/" + name);
        FileOutputStream fos = new FileOutputStream(getCVImagesPath() + "/" + name);
        bos = new BufferedOutputStream(fos);
        // Copy the uploaded image byte by byte into the CV images directory
        int c = -1;
        while ((c = bis.read()) != -1) {
            bos.write(c);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (bis != null) {
                bis.close();
                bis = null;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        try
        {
            if (bos != null) {
                bos.close();
                bos = null;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    ResponseBuilder builder = Response.ok();
    // Launch the recognition executable on the uploaded image against the known-object base
    String[] cmd = new String[]{"object_recognition.exe", getCVImagesPath() + "/" + name, "base.txt"};
    System.out.println(getCVImagesPath() + "/" + name);
    StartCommand(cmd);
    return builder.build();
}

// Launch the object recognition executable
public void StartCommand(String[] command) {
    try {
        // Create the process
        Process p = Runtime.getRuntime().exec(command);
        // Read the process output
        InputStream in = p.getInputStream();
        StringBuilder build = new StringBuilder();
        char c = (char) in.read();
        while (c != (char) -1) {
            build.append(c);
            c = (char) in.read();
        }
        String response = build.toString();
        System.out.println(response);
    } catch (Exception e) {
        System.out.println("\n" + command + ": unknown command");
    }
}
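
To exercise this endpoint without the Android client, the upload can be reproduced with a few lines of plain Java. This is a minimal sketch: the gateway address and the image file name are placeholders, not values from the prototype.

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UploadImageClient {
    public static void main(String[] args) throws Exception {
        // POST the picture to <gateway>/cv/<name>, as the mobile client does
        URL url = new URL("http://192.168.0.1:8080/cv/test.jpg"); // placeholder gateway address
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpg");

        // Stream the local file into the request body
        InputStream in = new FileInputStream("test.jpg"); // placeholder file name
        OutputStream out = conn.getOutputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        out.close();
        in.close();

        // 200 OK only means the image was stored and recognition was launched;
        // the result is fetched later from cv/<name>.result, as in the mobile
        // client's polling code shown earlier
        System.out.println("Response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}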

Appendix 9: Computer Vision Discovery system application (C++)

/*
 * Object recognition program using SURF features with OpenCV
 */
#include <cv.h>
#include <highgui.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <algorithm>

using namespace std;

// Define whether to use approximate nearest-neighbor search (FLANN)
#define USE_FLANN

IplImage *image = 0;

// Squared Euclidean distance between two SURF descriptors, with early
// termination once the running cost exceeds "best"
double compareSURFDescriptors( const float* d1, const float* d2, double best, int length )
{
    double total_cost = 0;
    assert( length % 4 == 0 );
    for( int i = 0; i < length; i += 4 )
    {
        double t0 = d1[i]   - d2[i];
        double t1 = d1[i+1] - d2[i+1];
        double t2 = d1[i+2] - d2[i+2];
        double t3 = d1[i+3] - d2[i+3];
        total_cost += t0*t0 + t1*t1 + t2*t2 + t3*t3;
        if( total_cost > best )
            break;
    }
    return total_cost;
}

// Brute-force nearest-neighbor search over the model descriptors
int naiveNearestNeighbor( const float* vec, int laplacian,
                          const CvSeq* model_keypoints,
                          const CvSeq* model_descriptors )
{
    int length = (int)(model_descriptors->elem_size/sizeof(float));
    int i, neighbor = -1;
    double d, dist1 = 1e6, dist2 = 1e6;
    CvSeqReader reader, kreader;
    cvStartReadSeq( model_keypoints, &kreader, 0 );
    cvStartReadSeq( model_descriptors, &reader, 0 );

    for( i = 0; i < model_descriptors->total; i++ )
    {
        const CvSURFPoint* kp = (const CvSURFPoint*)kreader.ptr;
        const float* mvec = (const float*)reader.ptr;
        CV_NEXT_SEQ_ELEM( kreader.seq->elem_size, kreader );
        CV_NEXT_SEQ_ELEM( reader.seq->elem_size, reader );
        if( laplacian != kp->laplacian )
            continue;
        d = compareSURFDescriptors( vec, mvec, dist2, length );
        if( d < dist1 )
        {
            dist2 = dist1;
            dist1 = d;
            neighbor = i;
        }
        else if ( d < dist2 )
            dist2 = d;
    }
    // Ratio test: accept the match only if the best distance is
    // clearly smaller than the second-best one
    if ( dist1 < 0.5*dist2 )
        return neighbor;
    return -1;
}

void findPairs( const CvSeq* objectKeypoints, const CvSeq* objectDescriptors,
                const CvSeq* imageKeypoints, const CvSeq* imageDescriptors,
                vector<int>& ptpairs )
{
    int i;
    CvSeqReader reader, kreader;
    cvStartReadSeq( objectKeypoints, &kreader );
    cvStartReadSeq( objectDescriptors, &reader );
    ptpairs.clear();

    for( i = 0; i < objectDescriptors->total; i++ )
    {
        const CvSURFPoint* kp = (const CvSURFPoint*)kreader.ptr;
        const float* descriptor = (const float*)reader.ptr;
        CV_NEXT_SEQ_ELEM( kreader.seq->elem_size, kreader );
        CV_NEXT_SEQ_ELEM( reader.seq->elem_size, reader );
        int nearest_neighbor = naiveNearestNeighbor( descriptor,
            kp->laplacian, imageKeypoints, imageDescriptors );
        if( nearest_neighbor >= 0 )
        {
            ptpairs.push_back(i);
            ptpairs.push_back(nearest_neighbor);
        }
    }
}
void flannFindPairs( const CvSeq*, const CvSeq* objectDescriptors,
                     const CvSeq*, const CvSeq* imageDescriptors,
                     vector<int>& ptpairs )
{
    int length = (int)(objectDescriptors->elem_size/sizeof(float));

    cv::Mat m_object(objectDescriptors->total, length, CV_32F);
    cv::Mat m_image(imageDescriptors->total, length, CV_32F);

    // Copy the descriptors into the matrices
    CvSeqReader obj_reader;
    float* obj_ptr = m_object.ptr<float>(0);
    cvStartReadSeq( objectDescriptors, &obj_reader );
    for(int i = 0; i < objectDescriptors->total; i++ )
    {
        const float* descriptor = (const float*)obj_reader.ptr;
        CV_NEXT_SEQ_ELEM( obj_reader.seq->elem_size, obj_reader );
        memcpy(obj_ptr, descriptor, length*sizeof(float));
        obj_ptr += length;
    }
    CvSeqReader img_reader;
    float* img_ptr = m_image.ptr<float>(0);
    cvStartReadSeq( imageDescriptors, &img_reader );
    for(int i = 0; i < imageDescriptors->total; i++ )
    {
        const float* descriptor = (const float*)img_reader.ptr;
        CV_NEXT_SEQ_ELEM( img_reader.seq->elem_size, img_reader );
        memcpy(img_ptr, descriptor, length*sizeof(float));
        img_ptr += length;
    }

    // Find nearest neighbors using FLANN (Fast Library for Approximate Nearest Neighbors)
    cv::Mat m_indices(objectDescriptors->total, 2, CV_32S);
    cv::Mat m_dists(objectDescriptors->total, 2, CV_32F);
    cv::flann::Index flann_index(m_image, cv::flann::KDTreeIndexParams(4)); // using 4 randomized kd-trees
    // knnSearch returns, for each object descriptor, its two nearest neighbors in the image
    flann_index.knnSearch(m_object, m_indices, m_dists, 2,
                          cv::flann::SearchParams(64)); // 64 = maximum number of leaves checked

    int* indices_ptr = m_indices.ptr<int>(0);
    float* dists_ptr = m_dists.ptr<float>(0);
    for (int i = 0; i < m_indices.rows; ++i)
    {
        // Ratio test: keep the pair only if the nearest neighbor is much closer than the second one
        if (dists_ptr[2*i] < 0.4*dists_ptr[2*i+1])
        {
            ptpairs.push_back(i);
            ptpairs.push_back(indices_ptr[2*i]);
        }
    }
}
// Estimate the homography between the object and the scene image and
// project the object corners into the scene
int locatePlanarObject( const CvSeq* objectKeypoints, const CvSeq* objectDescriptors,
                        const CvSeq* imageKeypoints, const CvSeq* imageDescriptors,
                        const CvPoint src_corners[4], CvPoint dst_corners[4] )
{
    double h[9];
    CvMat _h = cvMat(3, 3, CV_64F, h);
    vector<int> ptpairs;
    vector<CvPoint2D32f> pt1, pt2;
    CvMat _pt1, _pt2;
    int i, n;

#ifdef USE_FLANN
    flannFindPairs( objectKeypoints, objectDescriptors, imageKeypoints, imageDescriptors, ptpairs );
#else
    findPairs( objectKeypoints, objectDescriptors, imageKeypoints, imageDescriptors, ptpairs );
#endif

    n = ptpairs.size()/2;
    if( n < 4 )
        return 0;

    pt1.resize(n);
    pt2.resize(n);
    for( i = 0; i < n; i++ )
    {
        pt1[i] = ((CvSURFPoint*)cvGetSeqElem(objectKeypoints,ptpairs[i*2]))->pt;
        pt2[i] = ((CvSURFPoint*)cvGetSeqElem(imageKeypoints,ptpairs[i*2+1]))->pt;
    }

    _pt1 = cvMat(1, n, CV_32FC2, &pt1[0] );
    _pt2 = cvMat(1, n, CV_32FC2, &pt2[0] );
    // cvFindHomography: finds the perspective transformation between two planes
    if( !cvFindHomography( &_pt1, &_pt2, &_h, CV_RANSAC, 5 ))
    {
        printf("no homography\n");
        return 0;
    }

    // Project the object's corners into the scene using the estimated homography
    for( i = 0; i < 4; i++ )
    {
        double x = src_corners[i].x, y = src_corners[i].y;
        double Z = 1./(h[6]*x + h[7]*y + h[8]);
        double X = (h[0]*x + h[1]*y + h[2])*Z;
        double Y = (h[3]*x + h[4]*y + h[5])*Z;
        dst_corners[i] = cvPoint(cvRound(X), cvRound(Y));
    }

    return 1;
}

int main(int argc, char** argv)
{
    // A vector of 2D objects
    //std::vector< std::vector< string > > objet_base;
    vector<string> object_Id;
    vector<string> object_Url;
    vector<int> matching;
    int h = 0;
    int i;

    printf("\n\nWelcome to the Object_Recognition program using OpenCV version %s\n\n", CV_VERSION);

    if ( argc != 3 )
    {
        printf("Error: 2 parameters are needed\n");
        return 0;
    }
    if (argv[1] == NULL)
    {
        printf("Error: Argument 1 is null\n");
        return 0;
    }
    const char* image_filename = argv[1];
    if (argv[2] == NULL)
    {
        printf("Error: Argument 2 is null\n");
        return 0;
    }

    // Open the base file listing the known objects
    ifstream fichier(argv[2], ios::in);

    // Clear the object vectors
    object_Id.clear();
    object_Url.clear();

    if(fichier) // if the file was opened successfully
    {
        string ligne;
        while(getline(fichier, ligne)) // as long as a line can be read
        {
            // Fill the two vectors with data from the file: each line is "id#url"
            object_Id.push_back(ligne.substr(0, ligne.find('#')));
            object_Url.push_back(ligne.substr(ligne.find('#')+1, ligne.size()));
            cout << object_Id[h] << endl; // print it
            cout << object_Url[h] << endl;
            h++;
        }
        fichier.close(); // close the file
    }
    else // otherwise
    {
        cerr << "Unable to open the file!" << endl;
    }

    // Check that reading the base file went OK
    if ( object_Id.size() == object_Url.size())
    {
        cout << "Reading base.txt file is OK" << endl;
    }
    else
    {
        cerr << "An error occurred while reading base.txt file!" << endl;
    }

    CvMemStorage* storage = cvCreateMemStorage(0);

    static CvScalar colors[] =
    {
        {{0,0,255}},
        {{0,128,255}},
        {{0,255,255}},
        {{0,255,0}},
        {{255,128,0}},
        {{255,255,0}},
        {{255,0,0}},
        {{255,0,255}},
        {{255,255,255}}
    };

    // Load the scene image in grayscale
    IplImage* image = cvLoadImage( image_filename, CV_LOAD_IMAGE_GRAYSCALE );
    // Error if the image is not found
    if( !image )
    {
        fprintf( stderr, "Cannot load %s\n"
                 "Usage: object_recognition <image_filename> <base_file>\n", image_filename );
        exit(-1);
    }
    // Create a color image the size of the scene image
    IplImage* image_color = cvCreateImage(cvGetSize(image), 8, 3);
    //cvCvtColor( image, image_color, CV_GRAY2BGR );

    CvSeq *imageKeypoints = 0, *imageDescriptors = 0;

    // Create a CvSURFParams with specific values: only features whose
    // keypoint.hessian is larger than the threshold are extracted;
    // a good default value is ~300-500
    CvSURFParams params = cvSURFParams(500, 0);

    // Extract SURF features from the scene image
    cvExtractSURF( image, 0, &imageKeypoints, &imageDescriptors, storage, params );
    printf("Image Descriptors: %d\n", imageDescriptors->total);

    // Processing of the object images
    for( int j = 0; j < (int)object_Url.size()-1; j++ )
    {
        const char* object_filename = object_Url[j].c_str();

        // Load the object image in grayscale
        IplImage* object = cvLoadImage( object_filename, CV_LOAD_IMAGE_GRAYSCALE );
        if( !object )
        {
            fprintf( stderr, "Cannot load object image %s\n", object_filename );
            exit(-1);
        }

        // Keypoints = detector output, descriptors = descriptor output of the SURF algorithm
        CvSeq *objectKeypoints = 0, *objectDescriptors = 0;

        double tt = (double)cvGetTickCount(); // measure the execution time of the extraction

        // Find robust features in the object image. For each feature, SURF returns its
        // location, size, orientation, and optionally the descriptor (basic or extended).
        cvExtractSURF( object, 0, &objectKeypoints, &objectDescriptors, storage, params );
        printf("Object Descriptors: %d\n", objectDescriptors->total);
        tt = (double)cvGetTickCount() - tt;
        printf( "Extraction time = %gms\n", tt/(cvGetTickFrequency()*1000.) );

        CvPoint src_corners[4] = {{0,0}, {object->width,0},
                                  {object->width, object->height}, {0, object->height}};
        CvPoint dst_corners[4];

#ifdef USE_FLANN // FLANN (Fast Library for Approximate Nearest Neighbors)
        printf("Using approximate nearest neighbor search\n");
#endif

        vector<int> ptpairs;
#ifdef USE_FLANN
        flannFindPairs( objectKeypoints, objectDescriptors,
                        imageKeypoints, imageDescriptors, ptpairs );
#else
        findPairs( objectKeypoints, objectDescriptors,
                   imageKeypoints, imageDescriptors, ptpairs );
#endif
        printf("Number of similar features: %d\n", (int)(ptpairs.size()/2));
        // Record the number of matched feature pairs for this object
        matching.push_back(ptpairs.size()/2);
    }

    // Find the object with the maximum number of matching points
    int Max;
    int index = 0;
    Max = matching[0];
    for( int i = 0; i < (int)matching.size(); i++ )
    {
        if (matching[i] > Max)
        {
            Max = matching[i];
            index = i;
        }
    }
    printf("index_object : %d\n", index);
    string object_known = object_Id[index];
    cout << object_Id[index] << endl;
    printf("image name: %s\n", image_filename);
    printf("I recognize: %s\n", object_known.c_str());

    const char* object_recognition = object_Url[index].c_str();
    // Load the recognized object's image in grayscale
    IplImage* object_ok = cvLoadImage( object_recognition, CV_LOAD_IMAGE_GRAYSCALE );
//cvShowImage( "Object Correspond", correspond ); cvShowImage( "Object", object_ok ); cvWaitKey(0); cvDestroyWindow("Image"); cvDestroyWindow("Object"); //cvDestroyWindow("Object SURF"); //cvDestroyWindow("Object Correspond"); return 0;

}
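
A remark on the matching criterion used throughout this appendix: both findPairs and flannFindPairs accept a correspondence only when the nearest neighbor is markedly closer than the second-nearest one, which is Lowe's ratio test. Writing d_1^2 and d_2^2 for the squared Euclidean distances from an object descriptor to its two nearest image descriptors, a pair is kept only if

d_1^2 < r \cdot d_2^2, \qquad r = 0.5 \text{ (brute-force matcher)}, \quad r = 0.4 \text{ (FLANN matcher)}

This rejects ambiguous correspondences, so the per-object match counts compared in main() only reflect distinctive features.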

Bibliography

[1] D. Guinard and V. Trifa, “Towards the Web of Things: Web Mashups for Embedded Devices,” Jun. 2010.
[2] R. Want, K.P. Fishkin, A. Gujar, and B.L. Harrison, “Bridging physical and virtual worlds with electronic tags,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99), Pittsburgh, Pennsylvania, United States: ACM, 1999, pp. 370-377.
[3] P. Schramm, E. Naroska, P. Resch, J. Platte, and H. Linde, “Integration of Limited Servers into Pervasive Computing Environments Using Dynamic Gateway Services.”
[4] S. Duquennoy, G. Grimaud, and J. Vandewalle, “The Web of Things: Interconnecting Devices with High Usability and Performance,” Second International Conference on Embedded Software and Systems, Los Alamitos, CA, USA: IEEE Computer Society, 2009, pp. 323-330.
[5] “Using REST on SunSPOTs | Web of Things.”
[6] H. Gellersen, C. Fischer, D. Guinard, R. Gostner, G. Kortuem, C. Kray, E. Rukzio, and S. Streng, “Supporting device discovery and spontaneous interaction with spatial references,” Personal and Ubiquitous Computing, vol. 13, 2009, pp. 255-264.
[7] J. Rekimoto, Y. Ayatsuka, M. Kohno, and H. Oba, “Proximal interactions: A direct manipulation technique for wireless networking,” Proceedings of INTERACT 2003, Sep.-Oct. 2003, pp. 511-518.
[8] T. Fuhrmann and T. Harbaum, “Using Bluetooth for Informationally Enhanced Environments.”
[9] “Museum Puts Tags on Stuffed Birds - RFID Journal.”
[10] W.K. Edwards, “Discovery Systems in Ubiquitous Computing,” IEEE Pervasive Computing, vol. 5, 2006, pp. 70-77.
[11] “u-Photo: Interacting with pervasive services using digital still images.”
[12] R. Ballagas and M. Rohs, “Mobile Phones as Pointing Devices,” Pervasive Mobile Interaction Devices (PERMID 2005), workshop at Pervasive 2005, 2005, pp. 27-30.
[13] M. Rohs and B. Gfeller, “Using Camera-Equipped Mobile Phones for Interacting with Real-World Objects,” Advances in Pervasive Computing, 2004, pp. 265-271.
[14] D.A. Lisin, M.A. Mattar, M.B. Blaschko, M.C. Benfield, and E.G. Learned-Miller, “Combining local and global image features for object class recognition,” Proceedings of the IEEE CVPR Workshop on Learning in Computer Vision and Pattern Recognition, 2005, pp. 47-55.
[15] M.J. Swain and D.H. Ballard, “Color indexing,” International Journal of Computer Vision, vol. 7, 1991, pp. 11-32.
[16] Q. Iqbal and J.K. Aggarwal, “CIRES: A System for Content-Based Retrieval in Digital Image Libraries,” Invited Session on Content-Based Image Retrieval: Techniques and Applications, 7th International Conference on Control, Automation, Robotics and Vision (ICARCV), vol. 2, 2002, pp. 205-210.
[17] M. Güld, C. Thies, and T.M. Lehmann, “Content-Based Image Retrieval in Medical Applications,” Proceedings of the International Society for Optical Engineering (SPIE), vol. 3972, 2000, pp. 312-320.
[18] C. Schmid and R. Mohr, “Local grayvalue invariants for image retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, 1997, pp. 530-535.
[19] K.G. Derpanis, “The Harris Corner Detector,” 2004.
[20] D. Lowe, “Object Recognition from Local Scale-Invariant Features,” 1999, pp. 1150-1157.
[21] D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” 2003.
[22] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, vol. 110, 2008, pp. 346-359.
[23] P. Luley, A. Almer, C. Seifert, G. Fritz, and L. Paletta, “A Multi-Sensor System for Mobile Services with Vision Enhanced Object and Location Awareness,” The Second IEEE International Workshop on Mobile Commerce and Services (WMCS ’05), 2005, pp. 52-59.
[24] P.M. Luley, L. Paletta, and A. Almer, “Visual object detection from mobile phone imagery for context awareness,” Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, Salzburg, Austria: ACM, 2005, pp. 385-386.
[25] “Semacode: Weblog.”
[26] M. Mohring, C. Lessig, and O. Bimber, “Video See-Through AR on Consumer Cell-Phones,” Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality, IEEE Computer Society, 2004, pp. 252-253.
[27] W. Chen, Y. Xiong, J. Gao, N. Gelfand, and R. Grzeszczuk, “Efficient Extraction of Robust Image Features on Mobile Devices,” Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, IEEE Computer Society, 2007, pp. 1-2.
[28] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA: IEEE Computer Society, 2001, pp. I-511-I-518.
[29] A. Baumberg, “Reliable Feature Matching across Widely Separated Views,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Los Alamitos, CA, USA: IEEE Computer Society, 2000, p. 1774.