
ÇANKAYA UNIVERSITY

FACULTY OF ENGINEERING

COMPUTER ENGINEERING DEPARTMENT

CENG 407

Visual Assistant System

KÜBRA AKGÜN

SENA AKTAŞ

BÜŞRA BETÜL BAYRAM

HATİCE YALINIZ

26.05.2017


Table of Contents

Abstract

1. Introduction
1.1. Problem Statement
1.2. Solution Statement

2. Literature Search
2.1. Smart Security for Sensitive Areas
2.2. Real-Time Crowd Density Estimation
2.3. Temperature Measurement Camera System
2.4. Real Time Algorithm for Human Body Tracking with Kinect Device
2.5. Conclusion

3. Summary
3.1. Summary of Conceptual Solution
3.2. Technology Used

4. Software Requirements Specification
4.1. Introduction
4.1.1. Purpose
4.1.2. Scope
4.1.3. Glossary
4.1.4. References
4.1.5. Overview of the Document
4.2. Overall Description
4.2.1. Product Perspective
4.2.2. Product Functions
4.2.2.1. Use Case 1: Choose Mode
4.2.3. Use Case 2: User Control Operations
4.2.3.1. User Control Operation about Crowd Density
4.2.3.2. User Control Operation about Smart Security System
4.2.3.3. User Control Operation about Intensity of Sunlight
4.2.3.4. User Control Operation about Movement of Person
4.3. Requirements Specification
4.3.1. External Interface Requirements
4.3.1.1. Hardware Interface
4.3.1.2. Software Interface
4.3.1.3. Communication Interface
4.3.2. Software System Attributes
4.3.2.1. Performance
4.3.2.2. Usability
4.3.2.3. Availability
4.3.2.4. Maintainability
4.3.2.5. Scalability
4.3.3. Assumptions & Dependencies

5. Software Design Description
5.1. Introduction
5.1.1. Purpose
5.1.2. Scope
5.1.3. Glossary
5.1.4. References
5.1.5. Overview of the Document
5.2. Architecture Design
5.2.1. Software Methodology
5.2.2. System Design
5.2.3. Data Flow Diagram
5.2.3.1. Setting File

6. Test Plan
6.1. Introduction
6.1.1. Overview
6.2. Features to Be Tested


List of Figures

Figure 1 - Overall Visual Assistant System
Figure 2 - Choose Mode Use Case Diagram
Figure 3 - User Control Operation for Crowd Density Mode Use Case Diagram
Figure 4 - User Control Operation for Smart Security System Mode Use Case Diagram
Figure 5 - User Control Operation for Intensity of Sunlight Mode Use Case Diagram
Figure 6 - User Control Operation for Movement of Person Mode Use Case Diagram
Figure 7 - Scrum Gantt Chart for Visual Assistant System
Figure 8 - Hierarchical Structure of the System
Figure 9 - Flowchart Diagram of Data Flow
Figure 10 - Setting File Diagram


Abstract

Image processing is the extraction of data by analyzing images captured from a device; this data is then used for specific purposes. Image processing is applied in the military industry, medicine, robotics, traffic, agriculture, the photographic industry, biomedical fields, remote sensing applications, and weather observation and forecasting on satellite images. In this project, a visual assistant system is designed using image processing. The system we will design consists of four modes: smart security, crowd density estimation, light intensity measurement, and human body movement tracking.

Keywords: Image Processing, Camera System, Crowd Density Estimation, Security, Human Movement Tracking, Light Intensity Measurement


1. Introduction

1.1. Problem Statement

Cameras and image recording are needed in many areas of life. For example, security cameras that protect a workplace or a home can apply image processing techniques and report an abnormal movement to the person responsible. Likewise, a patient who must be constantly observed can be monitored with a camera that sends information to the doctor the moment movement occurs. Beyond observing dangerous or abnormal conditions, image recording also serves other purposes: the crowd in a shopping mall or at a concert can be measured, and sunlight measurements bring many benefits, such as helping to grow fruits and vegetables. Systems with these functions exist on the market, but each product offers only a single mode and its price is very high. A purchased product loses multifunctionality by being only a security camera or only a health-monitoring device. The high cost of a product that serves a single area lowers its utilization rate and puts it out of reach for home users.


1.2. Solution Statement

In this project, a new product is not manufactured; instead, this camera system is planned to be usable in many areas, and this feature separates it from others. The system is formed by combining several modes: crowd density estimation, smart security, sunlight intensity measurement, and movement of a person. Through these modes, parameters are added and a multipurpose system is obtained. Camera systems that implement these modes separately exist on the market, but no system offers multipurpose tracking in this way. This project plans a low-cost product design that is not restricted to a single aim; the intended modes are realized by modifying the triggers.

2. Literature Search

2.1. Smart Security for Sensitive Areas

Smart Security System for Sensitive Areas by Using Image Processing [1] describes a security system designed for a bank. Many security systems record video constantly and waste memory unnecessarily; their disadvantages are memory wastage, usefulness only as evidence, and no design for preventing an attack. This system was developed to remove these problems altogether, so a motion sensor is used: video recording starts only when motion is detected. The system can store the mobile phone numbers of the administrators or owners to be contacted in an emergency, and it can keep a log, so sent and received messages are stored individually. The administrator can watch the stream live and take appropriate action as a result. The project can also record images from more than one camera: when an event occurs, the image is not merely captured; the system is activated at the same time. The system uses SMS (Short Message Service), GPRS (General Packet Radio Service), and J2EE (the Java 2 Platform), and the paper covers its design and implementation. Almost any communication tool can be used for remote access: an authenticated user can log in to the home server over an internet connection and monitor it, check the system with SMS messages, or control it by e-mail. Any attack or danger is transmitted to the user online using the camera and a motion detection algorithm; the user can also send commands (which are executed automatically), and live broadcasts can be viewed from a mobile phone. The system first checks whether the web cameras are connected; if not, an error message is displayed. If they are connected, a reference image is compared with the captured images to check for any attack, and an SMS is sent to the administrator or user so that the appropriate action can be taken. Performance requirements: the product is planned to be developed in Java, no memory requirement is specified, and any file system software can be used for the database. Security requirements: image processing, secure user access, legal compliance, data security, data storage and backup, support, and reporting. The paper analyzes the algorithm used: the Euclidean distance algorithm compares two points for motion detection. Software requirements: Java, Apache Tomcat server, MySQL, Eclipse. Hardware requirements: a server-side system (supporting Windows XP) with a minimum server configuration of a 2.4 GHz CPU, 80 GB HDD, and 512 MB RAM, plus a PC, network cards, an Android phone, and two cameras. Legal requirements: a static IP (2000-2500 p.a. required) and an internet connection (any internet service provider (ISP) with a minimum speed of 2-4 Mbps). Module operation: in the first stage, the system authenticates the user. Both cameras are started and the template is kept in a file. The system module starts running, and the CCTV camera continuously sends images to the warning module; if any danger appears relative to the template image, a warning message is sent to the manager, and live streaming is displayed on the Android system. Finally, the advantages and disadvantages of the project are observed. Advantages: less memory consumption, no unnecessary video recording, activation during an attack, and the manager can watch the live broadcast and act. Disadvantages: interruptions can affect the system, the sensitive alarm system requires constant observation, and live streaming always requires Wi-Fi.

2.2. Real-Time Crowd Density Estimation

The aim of the Real-Time Crowd Density Estimation Using Images project [2] is to measure human density in malls, stadiums, streets, and train and metro stations by applying image processing methods to images taken from these places at different time intervals. The process could be done with infrared sensors; however, this project uses a video camera for the measurements. When image processing is used to measure crowd density, usually the background of the image is erased and the area of the foreground pixels is measured. In this project, methods for real-time automatic crowd density estimation are instead presented based on texture descriptors of a sequence of crowd images. The method adopts the idea that images with different human densities tend to present different texture patterns: images with high human density usually tend to present high-frequency texture patterns, while images with low human density tend to present low-frequency texture patterns. This technique reached 73.89% correct classification over a sequence of 9892 images. The image sequence was taken from video camera recordings in an airport area. From these, a subset of 990 images (1 per 10 photos) was prepared homogeneously, and the human density of each image was classified manually; this classification was used to measure the accuracy of the automatic technique. First, each pixel of the input image is classified into one of the identified texture classes. The algorithm is presented in the following steps:

“The master processor divides the input image in n fragments (n is the number of slave nodes in the cluster), Each image fragment is sent to a slave processor, each slave processor performs the texture classification of its image fragment pixels using a sequential algorithm, the slave processors send their classified fragments to the master, the master processor assembles all fragments into a final texture segmented image.” ([2], Ch.3)

As a result of these steps, the technique obtained 73.89% correct classification.

2.3. Temperature Measurement Camera System

The aim of the Temperature Measurement Camera System project [3] is to provide important services in the food, agriculture, and health sectors. This camera system measures heat and cold, meaning that warmth and coldness can be made visible. An imaging system determines the colors and shapes that form at different temperatures. Heat dissipation is attributed to different temperature ranges, and this dissipation has a different wavelength for each range; because of this, certain temperature ranges can be displayed. These sensitive devices can measure very small temperature differences (such as 0.01 degrees), so the same scene can yield different images and different measured temperature values. For example, such devices produce images of fruits and vegetables and are also used in medicine for diagnosing diseases. Briefly, the paper aims to explain how diagnosis is performed on fruits and vegetables.

First, information about an apple is recorded. Then a captured image is compared with this reference, and the work proceeds by analyzing the comparison result. Apples can be distinguished by color, size, and similar properties. The actual size of an apple can be determined from the difference between its size in the picture and its real size: a ruler is placed in front of the camera together with the apple, and by measuring the apple's size, a coefficient relating the pixel value to the actual size is determined. Apples can also be distinguished by color: the apple is separated from the background, the density in RGB (red, green, blue) format is calculated, and the color is classified according to the most appropriate value range. This experiment was performed as a temperature measurement.

Information about some heat-measurement techniques is also given. In this camera, CCD cameras are used for shape and size recognition; these cameras can capture color and gray images, and many images can be captured per second. To make the image look good on the camera, a lens that controls light intensity and focus is used, and different lenses can be applied to the images according to their characteristics.

The lighting position is important when temperature measurement is performed. The environment must be dark when some measurements are made, because a measurement requires the least amount of light to be processed most easily during image processing. Dark environments are later illuminated with flashing LED lights; in this case, different results can be achieved on the images.

2.4. Real Time Algorithm for Human Body Tracking with Kinect Device

Real Time Algorithm for Human Body Tracking with Kinect Device [4] argues for a technique that processes only ten frames at a time. Additional information about the environment is neglected to simplify the computation, so only hand information is framed. A calibration step identifies the hand position in order to recognize actions and then examines the changes in the resulting frames. The calibration procedure defines the hand and head, and the phases of the algorithm establish the body position. A boundary is excluded in the head area because, when the subject turns to the left, the distances from the head to the left and right hands differ. The Kinect device consists of a depth sensor, an IR laser transmitter, and an IR camera. “Paper [4] proposes the Kinect device as a measurement instrument for people suffering from Parkinson’s disease.” The systems commonly used for movement symptoms are very expensive and require specialization; this article shows that it is possible to follow Parkinsonian signs with the Kinect sensor. In this work, a new algorithm for a real-time remote control application is analyzed. The application is entirely focused on human movement: the algorithm tracks the left- and right-hand positions relative to the head position and detects when a hand crosses to the opposite side, finding coordinates using the conventions of the Kinect device. The algorithm considers three things: the head area, transitions from the right-hand side to the left-hand side and vice versa, and the prevention of more than one transition per gesture. In addition, the calibration procedure is used for the head area, and flag variables are used to limit multiple transitions.

A skeleton tracking class of the Microsoft Kinect SDK is used in the algorithm; through this class, only the position of the human body is followed. Here, left_pos and right_pos are defined, fed by the hand_left and hand_right variables. The head position marks the balance point between the hands, and its value is set to zero. The variables left_flag and right_flag record which side the user's hand has passed to, so that more than one photo transition does not occur; each flag starts as false. The current_photo variable keeps the index of the current photo in the directory. When the Kinect sensor is engaged, the device continuously produces frames and new positions for each hand. A transition to a new photo happens only if the hand crosses over exactly once; the condition for the next transition is that the hand returns to its own area and then crosses to the opposite area again. If the left hand is in the right zone, left_pos > 0. If the hand has moved to this area for the first time, left_flag == false; left_flag is then set to true to prevent another pass, and it is tested whether the current photo is the first one. If the first photo of the series is already shown, there is no switch to a previous photo; otherwise, the previous photo in the list is displayed. The hand is also checked when it is on the opposite side, because it can move right and left multiple times before returning to the relevant area. The flag variable thus provides two things: multiple passes are prevented, and locking is prevented when there is no transition. The algorithm clears the flag to allow another photo pass: left_flag is reset when the left hand returns to its own field (left_pos < 0). The same scheme guarantees the availability of the next picture: when the right hand is first detected in the opposite area and right_flag == false, the algorithm advances to the next photo, until the last photo in the directory is reached, and right_flag becomes true. When the right hand returns to its own field, right_flag is set back to false, so the algorithm does not advance again until a genuine new pass occurs.

2.5. Conclusion

In conclusion, image processing is a sequence of operations: an image is captured at the beginning of this sequence and then processed with different algorithms according to the target, as in the projects exemplified here. As seen, the areas in which image processing is used are multifaceted, and it covers a wide range of applications.

The subject of our project is image processing. We aim to build a camera system that can be useful in many areas through image processing. An imaging system with many features is both expensive and difficult to obtain. In this project, a camera warning and measurement system that is cheap and easily accessible compared to the market average will be built. As hardware, an embedded camera system will be created from a Raspberry Pi, a camera, and modules that can be added for whatever measurements may be needed. The system can measure the density of light, the size of a given area, the change of motion, and the temporal change in motion between images. When the desired conditions, specified for certain time intervals, are met, the measurements taken can be used to send SMS, MMS, and e-mail to the user. Also, by making some changes to the triggers, this system will be multifunctional and will not be limited to a single field. Some examples:

- If the specified movement size reaches a very high level, this is perceived as an abnormal condition in the continuous monitoring of a patient in a hospital, and photos are sent to the doctor via e-mail. Similarly, it can be used for the safety of babies at home: if the motion is perceived as abnormal, the information is sent to the parent together with the photo.

- In areas where safety is required, the danger is reported to the relevant person according to its size.

- This system can be used to measure crowding somewhere such as a mall, a sidewalk, or a road.

The amount of sunlight is important in many cases, such as rheumatism, migraine, plants, and babies. Therefore, we aim to design a system that can measure sunlight as well.

References

[1] Pathikar, M.P., Bholase, S.J., Patil, D., Deshpand, G., Smart Security System for Sensitive Area by Using Image Processing. Pune University, 2014.

[2] A. N.; Cavenaghi, M.A.; Ulson, R.S.; Drumond, F.L., Real-Time Crowd Density Estimation Using Images. UNESP (Sao Paulo State University).

[3] Cetişli, B. (2013), “Machine Vision Technology for Agricultural Applications”, Computers and Electronics in Agriculture, 173-191, Issue 36.

[4] Roşca, C.M., Voicilă, E.B. (2016), “Real Time Algorithm for Human Body Tracking with Kinect Device”, Petroleum-Gas University of Ploiesti Bulletin, Technical Series, Vol. 68, Issue 3.

3. Summary

3.1. Summary of Conceptual Solution

Different modes are designed so that the visual assistant system can perform many different tasks. The smart security system mode aims to inform the user of sudden changes that may occur at home or at the workplace. The crowd density estimation mode aims to measure the density of the crowd in areas where active crowds are located, such as streets, shopping centers, and concert venues. The human motion measurement mode aims to inform the doctor, nurse, or caregiver by measuring sudden or continuous movement of people in a stationary position, such as intensive care or coma patients. The light measurement mode aims to measure the intensity of light and to inform the user by calculating the optimal light level for the desired conditions, which is beneficial for agricultural areas or for people with diseases such as rheumatism and migraine.

3.2. Technology Used

● Raspberry Pi

The Raspberry Pi is a credit-card-sized computer originally designed for education, but in this project we will use it as the system's main engine. It is also well suited to embedded designs such as the Visual Assistant System, since it comes in different models with different modules for different tasks. It also has a camera module that can take full HD 1080p photos and videos and can be controlled programmatically.

● Microsoft Visual Studio 2015

Microsoft Visual Studio is an integrated development environment (IDE) developed by

Microsoft. Visual Studio supports different programming languages.

● C

C is a high-level and general-purpose programming language.

● OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library used for image processing. It has C++, C, Python, and Java interfaces.

4. Software Requirements Specification

4.1. Introduction

4.1.1. Purpose

This document provides information about the measured and recorded image data and describes it in general terms. It explains in detail the remote-sensing applications of the camera system: the Smart Security System, Real-Time Crowd Density Estimation, the estimation of measured light, and the observation of human movement through the remotely controlled camera. This SRS document gives information about the measurements made by the camera system. The project we are designing is a multi-purpose camera system that captures images, interprets them, and transfers the results to the user.

4.1.2. Scope

The Visual Assistant System is designed to serve many purposes. Its most prominent feature is the versatility of the camera system, which can be used in many areas. At the same time, it stands out for being affordable compared with market prices. The goal of the project is to make this already existing kind of system more versatile, affordable for everyone, and easier to access. With the Visual Assistant System, images covering different areas are captured and analyzed using image processing. Based on these images, the user is warned at some point or informed about certain measurements.

There is one actor in this system: the User. The User can select a mode, and the system has four modes: crowd density, security in required areas, human movement, and sunlight intensity measurement. After the mode is selected for the required field, the user sets the time interval. The system activates at the set time interval, records images for the intended purpose, and then sends the information to the user via e-mail or a web service.

4.1.3. Glossary

Term: Definition

User: The person who uses the Visual Assistant System.

Smart Security System Mode: Camera system mode used to ensure environmental safety.

Crowd Density Estimation Mode: Camera system mode that measures the density of the crowd at streets, malls, etc.

Movement of Person Mode: Camera system mode that informs the user when a person moves.

Intensity of Sunlight Measurement Mode: Camera system mode that shows whether the environment is dark or bright.

Raspberry Pi Zero: Credit-card-sized minicomputer consisting of a single board.

4.1.4. References

https://www.raspberrypi.org/products/pi-zero/

4.1.5. Overview of the Document

In the rest of the document, the Visual Assistant System's details and requirements are discussed. The "Overall Description" section contains use cases covering the camera's different features and its multi-purpose use, together with the user's control operations in the different modes. The requirements specification section covers the hardware specification of the Raspberry Pi, the device to be used, and examines the software system requirements in terms of usability, availability, maintainability, and scalability. The last part is "Assumptions and Dependencies".


4.2. Overall Description

4.2.1. Product Perspective

The Visual Assistant project is based on a multi-tasking embedded camera system that serves the user's different needs. As an initial step, Figure 1 shows a generalization of the system's execution. The camera system works in the mode the user has preferred, so the system starts with the mode selection. After the selection, the system starts to take images at different time intervals and performs the job the user asked for.


Figure 1- Overall Visual Assistant System


4.2.2. Product Functions

4.2.2.1. Use Case 1: Choose Mode

Figure 2- Choose Mode Use Case Diagram

Brief Description:

User chooses the mode for the needed job.

Initial Step by Step Description:

1. The user opens the Modes menu from the user interface.

2. The user chooses the mode for the needed job.

4.2.3. Use Case 2: User Control Operations


4.2.3.1. User Control Operation about Crowd Density

Figure 3- Control Operation for Crowd Density Mode Use Case Diagram

Brief Description:

The user can select the Crowd Density mode.

The user can set the time interval.

Initial Step by Step Description:

1. User selects the Crowd Density Estimation mode.

1.1 User sets the desired time interval.

4.2.3.2. User Control Operation about Smart Security System

Figure 4- User Control Operation for Smart Security System Mode Use Case Diagram


Brief Description:

The user can select the Smart Security System mode.

The user can set the time interval.

Initial Step by Step Description:

1. User selects the Smart Security System mode.

1.1 User sets the desired time interval.

4.2.3.3. User Control Operation about Intensity of Sunlight

Figure 5- User Control Operation for Intensity of Sunlight Mode Use Case Diagram

Brief Description:

The user can select the Intensity of Sunlight Measurement mode.

The user can set the time interval.

Initial Step by Step Description:

1. User selects the Intensity of Sunlight Measurement mode.

1.1 User sets the desired time interval.

4.2.3.4. User Control Operation about Movement of Person


Figure 6- User Control Operation for Movement of Person Mode Use Case Diagram

Brief Description:

The user can select the Movement of Person mode.

The user can set the time interval.

Initial Step by Step Description:

1. User selects the Movement of Person mode.

1.1 User sets the desired time interval.
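The four use cases above share the same control pattern: the user selects one of the four modes and then sets the desired time interval. This shared pattern can be sketched as follows (a minimal illustration in Python; the mode names and the `Session` class are invented for this sketch, not part of the actual design):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Mode(Enum):
    CROWD_DENSITY = "crowd density"
    SMART_SECURITY = "smart security"
    SUNLIGHT_INTENSITY = "intensity of sunlight"
    MOVEMENT_OF_PERSON = "movement of person"

@dataclass
class Session:
    mode: Optional[Mode] = None
    interval_seconds: int = 60  # illustrative default capture interval

    def select_mode(self, mode: Mode) -> None:
        """Step 1 of every use case: the user selects a mode."""
        self.mode = mode

    def set_interval(self, seconds: int) -> None:
        """Step 1.1 of every use case: the user sets the time interval."""
        if seconds <= 0:
            raise ValueError("time interval must be positive")
        self.interval_seconds = seconds

session = Session()
session.select_mode(Mode.CROWD_DENSITY)
session.set_interval(30)
```

Modeling the modes as a single enumeration keeps the control operations identical across all four use cases, which matches the symmetry of the diagrams above.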

4.3. Requirements Specification

4.3.1. External Interface Requirements

4.3.1.1. Hardware Interface

The Raspberry Pi is a credit-card-sized computer from which images can be taken. With the Raspberry Pi, a capable little computer, we can do much of the work we do on desktop computers; for example, it can run word processors and spreadsheet programs (such as Word and Excel) as well as various games. It can be thought of as an affordable computer that can be used for simple programming or even for experiments. The Raspberry Pi is needed to run the modes of the Visual Assistant System.

4.3.1.2. Software Interface

There are no external software interface requirements.

4.3.1.3. Communication Interface

There are no external communication interface requirements.

4.3.2. Software System attributes

4.3.2.1. Performance


● The system must be active within the set time.

● The system should record images in less than 1 second when motion, instantaneous temperature changes, or large changes occur in the monitored area.

● The recorded image must be reported to the user within 2 seconds.

4.3.2.2. Usability

● The user interface is available in Turkish and English.

● The user can set the time interval.

4.3.2.3. Availability

● In the event of a power failure, the system should resume when the interruption is over.

4.3.2.4. Maintainability

● The system must be easy to maintain and user-friendly.

4.3.2.5. Scalability

● The system has only one user.

4.3.3. Assumption & Dependencies

● The user should have an e-mail address for the system's notifications.

5. Software Design Description

5.1. Introduction

5.1.1. Purpose

This SDD document explains how the Visual Assistant System will be implemented. The architecture of the system is expressed through its hierarchical structure and described in detail. This document describes the design of the system: crowd density, security, human movement, and light measurement are each integrated into the Visual Assistant System as a mode. According to the selected mode, the necessary observations will be made; as a result, important images will be obtained and reported to the relevant person.

5.1.2. Scope

The Visual Assistant System is designed to serve many purposes. Its most prominent feature is the versatility of the camera system, which can be used in many areas. At the same time, it stands out for being affordable compared with market prices. The goal of the project is to make this already existing kind of system more versatile, affordable for everyone, and easier to access. With the Visual Assistant System, images covering different areas are captured and analyzed using image processing. Based on these images, the user is warned at some point or informed about certain measurements.

The Visual Assistant System will use a Raspberry Pi to implement the system design. In addition, a settings file will be created, and the mode will be selected from this settings file. The camera follows the relevant field, and image processing is performed according to the specified mode. The software will be written in C/C++.

5.1.3. Glossary

Term: Definition

Image Processing: The processing of a two-dimensional image by a computer.

Scrum: An Agile method used to manage the software development process.

Sprint: A fixed time period in which a defined set of Scrum work is completed.

Pixel: The smallest unit of a digital image.

Differences: The image obtained by subtracting the background from the recorded image.

Standard Deviation: The spread of the image values around their mean.

Mean: The sum of the image values divided by their number.

Histogram Equalization: A contrast-adjustment method based on the histogram of the color values in an image.

ROI (Region of Interest): The area to which the camera is restricted according to need.

RGB (Red, Green, Blue): A color model used by electronic devices such as monitors to create colors using light.

HSI (Hue, Saturation, Intensity): A method of representing colors by their hue, saturation, and intensity.

Trigger: A change in an attribute, condition, factor, parameter, or value that represents crossing a threshold and initiates a mechanism or reaction that may lead to a radically different state of affairs.

5.1.4. References

● http://www.atasoyweb.net/

● http://www.printcenter.com.tr/


5.1.5. Overview of the Document

In the architecture design part of this document, the software methodology is described first and shown in a Gantt chart. Secondly, the hierarchical structure of the system is presented, and how the data flow is carried out is shown in a flowchart diagram. Finally, the settings file, which contains the parameters used by the modes, is explained in detail.


5.2. Architecture Design

5.2.1. Software Methodology

Figure 7- Scrum Gantt chart for Visual Assistant System


In this project, the Scrum method was deemed appropriate because the system operates on a mode basis. Scrum is used to manage software projects whose requirements are not fully known in advance and are open to change. This method suits our project because it will serve many different areas, and minor changes to the modes' triggers will be easier to accommodate with it. Scrum also reveals faults early in the process, so continuous improvement is achieved. In Scrum, the project is divided into sprints; each sprint lasts one month (30 days). At the end of each sprint, the work done in that sprint can be reviewed and its faults can be seen.

Four sprints are planned for our project; the Scrum timeline is shown in Figure 7.

In Sprint 1, OpenCV will first be set up on the Raspberry Pi. After the OpenCV setup, screenshot capture and camera access will be implemented. As a result of this work, each mode will be able to access the camera, capture video from it, and save images as JPG files. After this, video recording will be implemented. Both image capture and video recording must be done with OpenCV.

In Sprint 2, the computations that the modes will perform are defined, and the required settings for each mode are implemented. One week (7 days) is reserved for each mode.

In Sprint 3, the difference calculations for the crowd density mode and the security mode will be implemented.

In Sprint 4, the focus settings for the modes will be made, and finally the GUI settings will be completed.

5.2.2. System Design

Figure 8-Hierarchical structure of the system


The hierarchical structure of the system is illustrated in Figure 8. The camera is connected to the engine. In the technical architecture there is one main engine, so one main piece of software runs. The settings file communicates with the main engine, and the modes run on top of it: crowd density, security, human movement, and light measurement. The engine applies image processing according to the modes, so it communicates with them, and it calculates basic quantities such as the mean, differences, and the standard deviation.

The settings file tells the engine whether a trigger has been reached; using this information, the engine produces its outputs. The engine takes the image and the settings-file parameters as input and returns a scaled result as output. The camera provides the images, and the engine processes them and reports how strongly each trigger fired, returning the trigger as a percentage; the importance of the trigger varies with this percentage.
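The engine's basic quantities (mean, frame differences, standard deviation) and the percentage-based trigger described above could be computed roughly as follows. This is a sketch with made-up names and thresholds, not the project's actual engine, and it assumes 8-bit grayscale frames:

```python
import numpy as np

def frame_difference(prev, curr):
    """Absolute per-pixel difference between two 8-bit grayscale frames."""
    return np.abs(curr.astype(np.int16) - prev.astype(np.int16))

def changed_percentage(prev, curr, pixel_threshold=30):
    """Percentage of pixels whose change exceeds pixel_threshold."""
    return 100.0 * float((frame_difference(prev, curr) > pixel_threshold).mean())

def trigger_fired(prev, curr, trigger_percent=10.0):
    """The trigger fires when the changed percentage crosses trigger_percent."""
    return changed_percentage(prev, curr) >= trigger_percent

prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[:5, :] = 255                # the top half of the frame changes
mean_val = float(curr.mean())    # basic statistics the engine computes
std_val = float(curr.std())
```

Returning the change as a percentage, as the text describes, makes the trigger independent of the frame resolution, so the same trigger value works for any ROI size.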

In addition, the ROI (Region of Interest) is an important issue. The ROI varies according to the mode: the security mode monitors a whole area, the human movement mode monitors the human body, and the light mode monitors a certain sunlit area. In these modes, the ROI changes according to user needs.

An RGB image is formed when the R (Red), G (Green), and B (Blue) channels are combined; these three color components are superimposed to produce a color image.

Also, by removing the background of the image with background subtraction, the areas where movement occurs are brought to the foreground.
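Background subtraction, as described above, amounts to thresholding the difference between the current frame and a stored background frame. A minimal sketch (with invented names and an arbitrary threshold; a real system would also update the background over time, e.g. with a running average):

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Boolean mask of pixels that differ from the background by more than thresh."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

background = np.full((6, 6), 50, dtype=np.uint8)  # the static scene
frame = background.copy()
frame[2:4, 2:4] = 200                             # a moving object appears here
mask = foreground_mask(frame, background)
# Only the 2x2 region where the object appeared is marked as foreground.
```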

The number of occurrences of each color value in an image is indicated by its histogram. Histogram equalization is a method used to remove color imbalances in the image. If the image consists only of gray tones, a single channel is equalized; if a color image is being processed and the RGB values are to be equalized, the red, green, and blue channels are separated and histogram equalization is applied to each one. Histogram equalization is effective when the pixels' color values are concentrated in certain intervals; otherwise, it does not improve every image.
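The standard equalization procedure maps each intensity through the image's cumulative histogram. A single-channel sketch (illustrative only; OpenCV provides this as a built-in, and the sketch assumes the image is not perfectly uniform):

```python
import numpy as np

def equalize(gray):
    """Histogram equalization for a single 8-bit channel.

    For an RGB image, the same function would be applied to the red,
    green, and blue channels separately, as described above.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero value of the CDF
    # Standard mapping: stretch the CDF over the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A low-contrast image that only uses the values 100..103:
low = np.repeat(np.arange(100, 104, dtype=np.uint8), 4).reshape(4, 4)
eq = equalize(low)  # the output now spans the full 0..255 range
```

This illustrates the point made above: equalization helps when values are concentrated in a narrow interval, because the mapping spreads exactly those values across the whole range.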

The work done in the engine includes the mean of an image, the average of the pixels, the average over images, the difference between images, the standard deviation of the differences, histogram equalization, and the ROI.

The GUI will take its input from the settings file; the settings file is then converted into the GUI.

5.2.3. Data Flow Diagram

In Figure 9, the data flow is shown as a flowchart. The data flow starts when the camera captures an image; the captured image is defined as the data. The mode is selected via the settings file, and the parameters contained in the selected mode are sent to the engine, the main software that performs the calculations. The engine makes the necessary calculations on the parameters and returns the result to the mode, and the result is sent to the user from there. Finally, a conditional check is performed: if the time interval given by the user has finished, the system terminates; if not, an image is captured again.


Figure 9- Flowchart Diagram of Data Flow
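The capture-process-notify-check loop described above can be sketched with stubbed components. Everything here is illustrative: `capture`, `engine`, and `notify` stand in for the camera, the calculation engine, and the e-mail/web-service step, and the iteration count stands in for the user's time-interval check:

```python
def run(capture, engine, notify, mode_params, max_iterations):
    """Simulated data flow: capture -> engine -> notify, repeated until done."""
    results = []
    for _ in range(max_iterations):
        image = capture()                    # the camera captures an image (the data)
        result = engine(image, mode_params)  # the engine performs the mode's calculation
        notify(result)                       # the result is sent to the user
        results.append(result)
    return results

# Stubbed components for illustration only:
frames = iter([10, 20, 30])
sent = []
out = run(
    capture=lambda: next(frames),
    engine=lambda img, p: img * p["scale"],
    notify=sent.append,
    mode_params={"scale": 2},
    max_iterations=3,
)
# out == [20, 40, 60], and the same values were "sent" to the user.
```

Keeping the engine a pure function of the image and the mode parameters, as in this sketch, is what lets one main engine serve all four modes.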

5.2.3.1 Setting File


The settings file contains the parameters required by the Visual Assistant System modes. The modes can read these parameters and send them to the engine for the needed calculations. The time-interval parameter sets the times at which images are captured. The threshold value is the minimum or maximum value that serves as a benchmark for comparison and may require a full review of any violation or a redesign of the system. The trigger value represents crossing a threshold and initiates the web-service message sent to the user. A region of interest (ROI) is the selected subset of a set of images defined for the selected mode. Histogram equalization is a contrast-adjustment method that uses the histogram of the image; if the histogram-equalization value is true, the mode needs contrast adjustment, but in the Intensity of Light mode histogram equalization works against the mode's aim, so it should be FALSE. The Intensity of Light mode may ignore the trigger value and depend on the light intensity instead, so the "Dependencies of which HSI values active" parameter determines which HSI values are active in the mode.
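A settings file with these parameters could take a simple INI-like form. The section and key names below are invented for illustration; the document does not specify the file format:

```python
import configparser

# Hypothetical settings file contents, mirroring the parameters described above.
SETTINGS = """
[smart_security]
time_interval = 60
threshold = 25
trigger = 10.0
roi = 0,0,640,480
histogram_equalization = true

[intensity_of_light]
time_interval = 300
threshold = 40
histogram_equalization = false
hsi_active = intensity
"""

config = configparser.ConfigParser()
config.read_string(SETTINGS)

security = config["smart_security"]
interval = security.getint("time_interval")                # 60
use_equalization = security.getboolean("histogram_equalization")
roi = tuple(int(v) for v in security["roi"].split(","))    # (0, 0, 640, 480)
```

Note how the Intensity of Light section sets `histogram_equalization = false` and names the active HSI component, matching the exceptions described in the text.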

6. Test Plan

6.1. Introduction

Some tests are applied to bring the product to the desired level. As a result of these tests, errors are minimized and the service offered to the customer is the best possible. Test cases are documents prepared according to the requirements, together with their expected results. The test cases verify the requirements and reveal problems or deficiencies in the design. Although it varies according to the requirements, a basic test case consists of an input together with the expected and actual output. Each test script should have an aim; in this way, it is possible to follow which scenarios need to be executed in which modes.

6.1.1. Overview

The use cases of Visual Assistant System which had been determined in SRS document and

Setting File Values which had been determined in SDD document will be tested.

6.2. Features to Be Tested

Choose Mode

Crowd Density Time Interval

Smart Security System Time Interval

Intensity of Sunlight Time Interval

Movement of Person Time Interval

Crowd Density Threshold Value

Smart Security System Threshold Value

Intensity of Sunlight Threshold Value

Movement of Person Threshold Value

Figure 10- Setting File Diagram


Crowd Density Trigger Value

Smart Security System Trigger Value

Intensity of Sunlight Trigger Value

Movement of Person Trigger Value

Smart Security System Region of Interest

Intensity of Sunlight Region of Interest

Movement of Person Region of Interest

Intensity of Sunlight Dependencies of Which HSI Values Active