
Visualize Streaming Big Data Through Augmented Reality

Sean Saffan, Victor Orellana, Matthew Alyward, Ron Brooks, Sarah North

Department of Computer Science, College of Computing and Software Engineering

Kennesaw State University

ABSTRACT
The purpose of this project is to create a prototype augmented reality (AR) application for the visualization of streamed data from networked devices. This visualization is formed using information such as the IP and MAC addresses of machines connected to the network, their location, and the potential security risks they pose to the network. To implement this, a program scans the network at regular intervals and collects information about the machines connected to it. The information is synthesized into an AR visualization that can be displayed on the HoloLens or an Android device. Our methods for developing this prototype include research in augmented and mixed reality technology in tandem with cybersecurity and big data visualization. Result: We will determine the extent to which these multidisciplinary fields have been brought together and successfully utilized, where they have fallen short, and where they could be more fully developed.
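The scanning component described above could take several forms; the following is a minimal Python sketch, assuming the scapy library and a hypothetical subnet and interval (neither is specified in the paper), of the kind of periodic ARP sweep that collects IP and MAC addresses:

    # Minimal sketch of a periodic network scan (requires scapy and root
    # privileges; the subnet and interval are hypothetical, not the
    # project's actual values).
    import time
    from scapy.all import ARP, Ether, srp

    SUBNET = "192.168.1.0/24"   # hypothetical local subnet
    INTERVAL_SECONDS = 60       # hypothetical scan interval

    def scan_network(subnet):
        # Broadcast an ARP request and collect (IP, MAC) pairs of responders.
        packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
        answered, _ = srp(packet, timeout=2, verbose=False)
        return [(recv.psrc, recv.hwsrc) for _, recv in answered]

    while True:
        for ip, mac in scan_network(SUBNET):
            print(ip, mac)  # in the prototype, this would feed the AR view
        time.sleep(INTERVAL_SECONDS)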

Keywords: Data Visualization, Streaming Big Data, Augmented Reality (AR), GUI (Graphical User Interface), Holographic, IP and MAC Addresses, Network.

1. INTRODUCTION
Brain- or mind-controlled technology offers a new and dynamic type of human-computer interaction. This technology can make the user feel more immersed than other devices can, such as a keyboard, mouse, or controller. During the past decade, drone usage has grown significantly in the military and other federal agencies; this growth has also extended to some local law-enforcement agencies. Growth in drone usage is not limited to government agencies; it can also be seen in the commercial, private, and recreational sectors. Due to the substantial growth in popularity and the large number of drones being acquired, the Federal Aviation Administration (FAA) and many state governments have taken steps to increase safety where drones are concerned. The purpose of this study was to investigate innovative usages for a brain-controlled drone and how effectively these drones might operate. This study also tested the bounds of human connectivity with a mind-controlled drone.

1.1 Concise Literature Review
Extensive research on the topic reveals two prominent articles that report results from research conducted on online (Internet) and offline (non-Internet) interactions, the relationship of emotions to them, and the role emotions play in human interactions. Anderson & McOwan's [2] summary article reviews extensive research on the Emotiv EPOC+ headset and its correlation with emotions, revealing that emotions play an important role in the interactions between humans. Human emotion is fundamental to human experience, influencing cognition, perception, and even rational decision-making. Therefore, the study of emotion with the Emotiv neuroheadset is indispensable [2]. The aim of the article "Emotion Recognition Using Emotiv EPOC+ Device" was to find the relationship between electroencephalogram (EEG) signals and human emotions. The study's primary focus was on emotion recognition experiments conducted using the commercial Emotiv EPOC+ headset to record EEG signals while participants watched a variety of emotional movies [1, 2, 4, 5]. The need for and importance of automatic emotion recognition from EEG signals has grown with the increasing role of brain-computer interface applications and the development of new forms of human-centric and human-driven interaction with digital media.

Research Questions:
1. Is there any significant difference in efficiency in using AR to visualize data streaming through wireless mobile devices compared to traditional 2D displays?
2. Does augmented reality visualization help the naive user better understand the activity of networked devices compared to regular 2D visualization?

Null Hypothesis: There is no significant difference in difficulty between the user's ability to control the drone using mental commands and facial commands, utilizing a brain-control interface in conjunction with an aerial drone.

2. METHODOLOGY

2.1 Participants


Twenty participants (n=20), ten (10) male and ten (10) female, with ages equal to or greater than eighteen (18), participated in the experiments. Participants were randomly selected from a User Interface Engineering class as well as other computer science courses. The initial phase was conducted by having the user control a virtual cube on an interactive graphical user interface to assess their abilities and to calibrate the systems.

2.2 Apparatus
The experiments used an array of laptops and other electronic devices, such as a Samsung S2 9.7", 32 GB Android tablet; however, the main assessments were completed with the Microsoft HoloLens, using Mobile Device Management (MDM) for HoloLens. The software included selected Universal Windows Platform (UWP) apps used to install and set security configurations; Visual Studio and Android Studio 2017; and Unity, a cross-platform game engine developed by Unity Technologies, used to develop the 3-dimensional modeling interface within the HoloLens environment. The hardware centered on the Microsoft HoloLens (see Figure 1) and its accompanying software [1].

HoloLens (Highlight Specifications): Holograms are the next evolution in computing; with this vision in mind, the hardware, software, and design can create a fully immersive and self-contained holographic environment.

Sensor fusion is the advanced capture of information through which the HoloLens can see, map, and understand the physical places, spaces, and GUI elements around the AR environment.

HoloLens System Requirements (Specifications):
System: 1 gigahertz (GHz) or faster x86 or x64 processor with SSE2 instruction set; HD: 3.0 gigabytes (GB).
RAM: 1 gigabyte (GB) (32-bit); 2 gigabytes (GB) (64-bit).
Operating system: minimum Windows 7 or 10, 64-bit, and Microsoft SharePoint 2013.
Multi-touch: a touch-enabled device is required to use any multi-touch functionality.

Figure 1. The saturated EPOC+ nodes placed on a test subject's head.

2.3 Instruments for Experiments

The experimental instruments included a pre-experiment survey and a post-experiment survey. The specific questions administered to the participants for each experiment are listed below:

Pre-Experiment Survey:
Specify your age bracket.
Gender.
How do you feel about the usage of drone technology?
How would you rate your knowledge of drones?
Have you ever flown a drone controlled by a handheld device?
Have you ever flown a drone controlled through mental commands?
How much control do you think you will have of the drone while using the EPOC+ (mental-command) device?
How easy do you think it will be to control the drone overall using the EPOC+ device (headset)?
What is your current stress level?
Do you believe flying the drone will increase your current stress level?
How much difficulty do you believe you will have flying the drone using mental commands?
How much difficulty do you believe you will have flying the drone using facial commands?

Post-Experiment Survey:
How would you rate the control level you had of the drone while using the EPOC+ device?
How would you rate your stress level after flying the drone?
Do you believe flying the drone changed your stress level?
How difficult was flying the drone using the mental commands?
How much difficulty did you have flying the drone using the facial commands?

2.4 Procedure
The first step was to soak the nodes that allow connection to the EPOC+ device in multipurpose solution and attach them to the device. The next step was to make certain each node was receiving a good connection. This was determined by viewing the user interface and making sure the nodes displayed a green signal on the screen. A black, yellow, or red node meant the connection needed to be reevaluated. Once a proper connection was established, the EPOC+ was calibrated. Calibration occurred by connecting the EPOC+ to the Xavier Composer interface (Figure 2 and Figure 3). The user was then subjected to the "virtual cube experiment," the first stage of testing, during which the user followed a set of commands and imagined moving the virtual cube with their mind in a specific order. For example, the user was prompted to "push, pull, lift, shift to the right, shift to the left, and lower" the virtual cube. During the next part of the experiment, the EPOC+ was connected to the Parrot AR-Drone 2.0. The user then attempted to replicate the results from the virtual cube experiment, only this time with an actual tangible object: the Parrot AR-Drone 2.0. The following stage of the experiment was completed in the same manner as the preceding one, but with facial commands: the participant used facial movements such as blink, smile, and clench to make the drone perform certain movements.

With the EPOC+ and Parrot AR-Drone 2.0 set up in an isolated room, individuals were randomly selected and asked if they would be willing to participate in the research. A pre-experiment survey was administered to all participants. The participants were given thirty (30) minutes to complete all experimental phases of the project. Once the participants completed the experiments, they were asked to complete a post-experiment survey questionnaire. The data were then collected and run through a variety of statistical analysis tests.

The entire experimental phase was monitored, and the participants' different reactions, expressions, and emotions were captured.

Figure 2. Emotiv Xavier Control Panel connecting to individual nodes to establish connection.

Figure 3. Test subject's emotion graph being recorded in the Emotiv Xavier Control Panel.

3. RESULTS
Before analyzing the collected data, it was critical that the participants complete a pre-experiment survey, so that the data analysis could account for the participants' general knowledge of drones and any previous exposure to them. The last two questions on the pre-experiment survey were used for comparison with the final two questions on the post-experiment survey.

Out of twenty participants, 65% reported that they had never flown a drone before. In addition, 100% of the participants reported that they had no prior experience using the EPOC+ device to control a drone. These conditions allowed for the collection of more precise raw data. Although many participants originally stated that they believed the drone would be relatively easy to control using mental commands, comprehensive surveys and observations showed that it was actually more difficult for the participants to control the drone. As shown in Figures 4a and 4b, participants on average reported a lower score in flying the drone using mental commands than they originally expected. As shown in Figures 5a and 5b, participants on average reported a lower score in flying the drone using facial commands than they originally expected. When comparing Figures 4a and 5a to Figures 4b and 5b, it can be seen that participants tended to report an increase in the level of difficulty once they had flown the drone. There was a 20% increase in participants rating the difficulty as "High" and an increase of 5% for "Very High."

Figure 4a. Chart shows the overall average user response in difficulty before flying the drone using the EPOC+ device.

Figure 4b. Chart shows the overall average user response in difficulty after flying the drone using the EPOC+ device.

Figure 5a. Chart shows the overall average user response in difficulty before flying the drone using facial commands.


Figure 5b. Chart shows the overall average user response in difficulty after flying the drone using facial commands.

Overall, Figure 6 depicts all participants' responses to questions five and six of the post-experiment survey, while Table 1 compares what the users believed the level of difficulty would be with what they reported afterward. ANOVA statistical analysis was applied to the collected data for all survey questions. The computation resulted in an f-ratio of 7.5147 with a p-value of 0.009, supporting that the difference in rated difficulty between mental and facial commands is significant at p < 0.05; Table 2 illustrates that the facial commands received the higher rated difficulty. To further analyze the differences between pre- and post-test data, a series of t-tests was conducted for each set of data. The t-test between pre-test mental and post-test mental commands was 0.4266 with a p-value of 0.2133, showing no statistically significant difference at p < 0.05 (Table 3). The t-test on the pre-test facial commands vs. the post-test facial commands was 0.0095 with a p-value of 0.0047, showing a statistically significant difference at p < 0.05 (Table 3). The third t-test, conducted on the mental commands pre-test and the facial commands pre-test, was 0.5217 with a p-value of 0.2608, showing no statistically significant difference at p < 0.05 (Table 3). The final t-test, conducted on the mental commands post-test and the facial commands post-test, was 0.0075 with a p-value of 0.0037, showing a statistically significant difference at p < 0.05 (Table 3). This exploratory compound analysis showed that facial commands were perceived to be more difficult than mental commands, with facial commands showing higher mean difficulty ratings.

Questions   f-ratio   p-value   Difference Analysis
I           1.131     0.301     Insignificant
II          9.255     0.070     Significant
III         0.841     0.371     Insignificant
IV          5.106     0.036     Significant
V           0.548     0.468     Insignificant

Table 1. Single-factor ANOVA analysis between the correlated pre-survey and post-survey questions.

ANOVA: Single Factor Analysis

SUMMARY
Groups            Sum   Average   Variance
Mental Commands   48    2.40      1.30526
Facial Commands   69    3.45      1.62894

ANOVA
Source of Variation   SS       MS      F
Between Groups        11.023   7.621   7.5147
Within Groups         55.750   1.467
Total                 66.773

Table 2. Exploratory ANOVA statistical analysis of post-test data on the difficulty of mental and facial commands.
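The F statistic and p-value reported above can be reproduced from Table 2's summary statistics alone; the following is a short Python sketch, assuming twenty ratings per group (the paper's participant count):

    # Reproduce the single-factor ANOVA F from Table 2's summary statistics.
    # Assumes n = 20 ratings per group, matching the participant count.
    from scipy.stats import f as f_dist

    n = 20
    groups = {"mental": (48, 1.30526), "facial": (69, 1.62894)}  # (sum, variance)

    means = {g: s / n for g, (s, _) in groups.items()}
    grand_mean = sum(s for s, _ in groups.values()) / (n * len(groups))

    ss_between = sum(n * (m - grand_mean) ** 2 for m in means.values())  # ~11.02
    ss_within = sum((n - 1) * v for _, v in groups.values())             # ~55.75
    df_between, df_within = len(groups) - 1, len(groups) * (n - 1)

    F = (ss_between / df_between) / (ss_within / df_within)             # ~7.51
    p = f_dist.sf(F, df_between, df_within)                             # ~0.009
    print(F, p)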

Comparisons                             t-test   Significant Difference
Pre-test Mental vs. Post-test Mental    0.4266   Negative
Pre-test Facial vs. Post-test Facial    0.0095   Affirmative
Pre-test Mental vs. Pre-test Facial     0.5217   Negative
Post-test Mental vs. Post-test Facial   0.0075   Affirmative

Table 3. Compound t-test analysis for each set of drone commands.
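Because each participant provides both a pre- and a post-rating, the comparisons in Table 3 are naturally run as paired t-tests; the sketch below shows one such test in Python, with the rating lists as hypothetical placeholders since the raw survey data are not reproduced in the paper:

    # Sketch of one paired t-test from Table 3 (pre- vs. post-test facial).
    # The rating lists are hypothetical placeholders, not the study's raw data.
    from scipy.stats import ttest_rel

    pre_facial  = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3]
    post_facial = [4, 4, 3, 5, 4, 3, 4, 4, 3, 5, 4, 3, 4, 5, 3, 4, 4, 3, 5, 4]

    t_stat, p_value = ttest_rel(pre_facial, post_facial)
    # p_value < 0.05 would correspond to an "Affirmative" row in Table 3
    print(t_stat, p_value)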

Figure 6. Graph showing the participants' overall response in difficulty after flying the drone using both mental and facial commands. (Difficulty ratings for mental commands are on the left bars, in green; difficulty ratings for facial commands are on the right bars, in blue.)

4. CONCLUSIONS
Aside from some trouble with the hardware, the toughest part of running the experiment was maintaining a consistent connection between the computer and the drone. Despite some connectivity issues, a good number of participants managed to fly the drone using the facial commands, while a slightly smaller number of subjects could manipulate the drone with mental commands once it was in the air. Females tended to have better control over both the virtual cube experiment in the MindDrone App and controlling the drone using the EPOC+ headset. The male subjects consistently succeeded in passing the "virtual cube experiment" but could not translate the thought control into manipulating the drone.

Anyone can learn to train his or her brain. As the pre- and post-experiment survey data showed, training one's brain is most definitely a learned skill. Most of the participants expressed surprise at the ease with which they were able to control the virtual cube in the MindDrone App, having never tried any sort of brain-control activity previously or had any experience with drones whatsoever.


While currently limited, the results of the experiment and research reveal an important aspect of the difficulty of mental versus facial commands. From the observation and analysis of the collected data, it can be tentatively concluded that there are some differences in difficulty between mental and facial commands. This research shows that more individuals in the participant group had greater difficulty controlling the mental and facial commands than they originally expected. At this time we can state that "there is no significant difference between the rating of difficulty before and after between the mental and facial commands." It is entirely likely that additional future research will provide more in-depth details on what impacts the level of difficulty of mental and facial commands.

5. ACKNOWLEDGMENT
Special acknowledgment and thanks to the CS 4712 class for their contribution to several phases of this research. In addition, the authors express their appreciation for the time and efforts of all participants who graciously took part in all the phases of the experiments. Congratulations to the team, who won 1st place at the College of Computing and Software Engineering (CCSE)/Computing Showcase event (C-Day) on November 30, 2017 (http://kennesaw.meritpages.com/). Major funding for this project was provided by the Computer Science Department for the advancement of technology and research in undergraduate senior project activities. Furthermore, many thanks to Ahamd Alissa and Josh Cooper, who developed the manual for this project (http://whateven.com/ardrone/tutorials/).

6. REFERENCES
[1] Microsoft HoloLens (2017). Development Edition Virtual Reality Headset. https://www.microsoft.com/en-us/hololens.

Alissa, A., Cooper, J., & Rashied, A. (2017, November 6). Epoc Emotiv Manual. Retrieved on November 28, 2017 from http://whateven.com/ardrone/tutorials/index.php?page=1

[2] Anderson, K., & McOwan, P. W. (2006). A real-time automated system for the recognition of human facial expressions. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36(1), 96-105.

[3] AR.Drone 2.0, Tutorial video #1: Setup [Video file]. Retrieved on July 27, 2017 from https://www.parrot.com/us/drones/parrot-ardrone-20-power-edition#ar-drone-20-power-edition.

[4] EMOTIV. MindDrone [Software]. Retrieved on August 5, 2017 from https://www.emotiv.com/product/minddrone/.

[5] EMOTIV. Xavier: EMOTIV Control Panel [Software]. Retrieved on September 29, 2017 from https://www.emotiv.com/product/xavier-control-panel/.

[6] Nakisa, B., Rastgoo, M. N., Tjondronegoro, D., & Chandran, V. (2018). Review: Evolutionary computation algorithms for feature selection of EEG-based emotion recognition using mobile sensors. Expert Systems with Applications, 93, 143-155. doi:10.1016/j.eswa.2017.09.062

[7] Parrot S.A. (2016). AR.FreeFlight (Version 2.4.15) [Software]. Retrieved on September 11, 2017 from https://play.google.com/store/apps/details?id=com.parrot.freeflight.

[8] Parrot S.A. (2016). Parrot AR.Drone 2.0 User Guide. Retrieved on August 3, 2017 from https://static.bhphotovideo.com/lit_files/121124.pdf.

[9] Lin, Y. P., Wang, C. H., Wu, T. L., Jeng, S. K., & Chen, J. H. (2009). "EEG-based emotion recognition in music listening: A comparison of schemes for multiclass support vector machine." ICASSP IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, pp. 489-492.