SAFETY ENHANCEMENT THROUGH SITUATION-AWARE USER INTERFACES (IET System Safety 2012)


Description

Due to their privileged position halfway between the physical and the cyber universes, user interfaces may play an important role in learning about, preventing, and tolerating scenarios affecting the safety of the mission and the user's quality of experience. This vision is embodied here in the main ideas and a proof-of-concept implementation of user interfaces combining dynamic profiling with context- and situation-awareness and autonomic software adaptation. The corresponding paper may be downloaded from https://dl.dropboxusercontent.com/u/67040428/Articles/gg.pdf

Transcript of SAFETY ENHANCEMENT THROUGH SITUATION-AWARE USER INTERFACES (IET System Safety 2012)

vincenzo.deflorio@ua.ac.be

© PATS Research Group

SAFETY ENHANCEMENT THROUGH SITUATION-AWARE USER INTERFACES

Vincenzo De Florio

Introduction

• GUI: contact point between two worlds

U: User (Physical) world

C: Cyber world

• These two worlds are very different

Different notion of time

Different notions of behavior, actions, evolution…

• The GUI is the "medium" between these two distant realities


How are GUIs now? (1/2)

• Adaptive, anticipative, personalized, "intelligent"...

• ...but mostly focusing on functional aspects

GUI is a way to "send commands to the other side"

→ No interpretation of the user's behavior


How are GUIs now? (2/2)

• User activity is unquestioned

“Does it make sense?”

“Is it ‘logical’ / ‘meaningful’ / SAFE?”

“Is it ‘normal’?” (w.r.t. the given user, given specs, given environment...)

• No assessment of QoE

• No assessment of the current situations


Idea

• GUI as a usability sensor

• All U-activity reported dynamically to C

Both functional and non-functional

Actions, mistakes, timing b/w actions... (big data!)

• C then uses U-activity to build/refine a model of U

Stereotypes, rules, hidden Markov models, Bayesian intelligence...

(Currently, simple rules; a minimal sketch follows)
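As a concrete illustration of the "simple rules" case, here is a minimal Tcl sketch: the user model is just a running mean of inter-action times, and one rule labels each new gap as typical or atypical. The procedure names and thresholds are invented for this sketch, not taken from the prototype.

    # Stereotype kept as running statistics of the inter-action time (ms).
    set stereotype [dict create n 0 mean 0.0]

    proc update-stereotype {gapMs} {
        global stereotype
        set n    [dict get $stereotype n]
        set mean [dict get $stereotype mean]
        incr n
        set mean [expr {$mean + ($gapMs - $mean) / double($n)}]   ;# incremental mean
        dict set stereotype n $n
        dict set stereotype mean $mean
    }

    # Simple rule: gaps far from the learned mean count as atypical.
    proc classify-gap {gapMs} {
        global stereotype
        if {[dict get $stereotype n] < 10} { return "learning" }
        set mean [dict get $stereotype mean]
        if {$gapMs < 0.2 * $mean || $gapMs > 5.0 * $mean} { return "atypical" }
        return "typical"
    }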

Application (1/5)

• C makes use of the model to…

1. tell whether U's behavior is "in order" rather than "abnormal"

• Safety, confidentiality...

• E.g. switching off features when misbehaviours or erroneous “human-machine conversations” are detected.

2. tell whether the user has changed

• GUI detects an unusual stereotype → device in different hands?

• GUI as biometric sensor

3. Detect / react to lack of reactivity


Application (2/5)

• C makes use of the model to

4. reshape the GUI

• WYSIWYU: What-you-see-is-what-you-understand and expect

• I don't need it? It's not there

E.g.: Better management of screen space

E.g.: eInclusion


Application (3/5)

• C makes use of the model to

5. reshape itself

Unneeded functionality is "unloaded" → reduced complexity → fewer faults, lower resource consumption...

à la Transformer [GD12a,GD12b]


Application (4/5)

• C makes use of the model to

6. "...tell Computers and Humans Apart“

as in CAPTCHA: "Complete Automatic Public Turing test ..."

Does the client interact as a human user? Gestalt psychology, morphisms,...

Enhanced robustness against attacks → safety in eBusiness


Application (5/5)

• C makes use of the model to

7. provide the business end with usability intelligence

Big data about usability of devices / services

Data analysis may help tell what feature is most wanted / most hated in products

Etc.


Approach: Design of Autonomic GUIs

• GUI publishes widget actions and times

Simple Tcl/Tk toy GUI

• Interaction logger receives actions/times stream and creates a compact representation (iCode)

• iCode is sent to an Interaction analyser

Context is gathered and situations analysed

• Actions are then planned and executed

The GUI is adapted (a minimal sketch of this pipeline follows)
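A minimal Tcl/Tk sketch of this pipeline. It assumes only what the slide states (widgets publish actions and times, a logger condenses them into a compact representation, an analyser consumes it); the iCode encoding and all procedure names below are illustrative, not the prototype's actual ones.

    package require Tk

    set iCode ""

    # GUI side: every widget action is published together with its time.
    proc publish {widget event} {
        log-interaction $widget $event [clock milliseconds]
    }

    # Interaction logger: condenses the action/time stream into a compact
    # string (one token per action); the real iCode format is not shown in
    # the slides, so this encoding is purely illustrative.
    proc log-interaction {widget event ms} {
        global iCode
        append iCode "$widget/$event@$ms "
        analyse $iCode
    }

    # Interaction analyser placeholder: context gathering, situation
    # analysis, planning and execution of adaptations would happen here.
    proc analyse {code} {
        puts "analysing [llength $code] interaction tokens"
    }

    button .b1 -text "Save"  -command {publish .b1 press}
    button .b2 -text "Print" -command {publish .b2 press}
    pack .b1 .b2 -side left -padx 4 -pady 4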


Simple analysis: QoE


Visualization


Detection of discomfort


Discomfort detected: Burst (iRate = 212.7134279)
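The transcript does not give the formula behind iRate, so the Tcl sketch below assumes one plausible reading: iRate as the reciprocal of the mean gap between the last few interactions, expressed in interactions per minute, with a burst reported when it exceeds a threshold. The window size and threshold are made up for illustration.

    # Assumed definition of iRate (the actual formula is not in the slides):
    # interactions per minute, computed from the mean gap between the most
    # recent timestamps; a burst is reported above a threshold.
    set recent    {}     ;# timestamps (ms) of the most recent interactions
    set keep      8      ;# how many timestamps to retain
    set threshold 200.0  ;# iRate above which discomfort is reported

    proc observe {ms} {
        global recent keep threshold
        lappend recent $ms
        if {[llength $recent] > $keep} {
            set recent [lrange $recent end-[expr {$keep - 1}] end]
        }
        if {[llength $recent] < 2} { return }
        set span [expr {[lindex $recent end] - [lindex $recent 0]}]
        if {$span <= 0} { return }
        set meanGap [expr {double($span) / ([llength $recent] - 1)}]
        set iRate   [expr {60000.0 / $meanGap}]
        if {$iRate > $threshold} {
            puts "Discomfort detected: burst (iRate = $iRate)"
        }
    }

Calling observe with the timestamp of every published widget action is enough to drive it.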

Dynamic adaptation


• Current (trivial) strategy: the GUI itself runs the interaction analyzer on its own current interaction (see the sketch below)
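A minimal sketch of such an embedded strategy, assuming the adaptation simply hides a rarely used group of controls while discomfort persists; the widget path .advanced is hypothetical.

    package require Tk

    # Trivial, embedded adaptation: the GUI reacts to its own analyser's
    # verdict by simplifying its layout. The .advanced frame is a
    # hypothetical container for rarely used controls.
    proc adapt-gui {discomfort} {
        if {![winfo exists .advanced]} { return }
        if {$discomfort} {
            pack forget .advanced          ;# unload unneeded functionality
        } else {
            pack .advanced -side bottom    ;# restore it once things calm down
        }
    }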

Dynamic adaptation

• No separation of design concerns

• Embedded in one GUI

• Solution: aspect orientation

Future work


Multiple detections unravel situations

• After six discomfort detections, we assume we are in an unsafe situation

• The identity of the user must then be reconfirmed (a minimal sketch of this rule follows)
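A minimal Tcl sketch of this escalation rule, keeping only what the slide states (six detections mark an unsafe situation); the tk_messageBox dialog is merely a placeholder for whatever identity-reconfirmation step the real system would use.

    package require Tk

    # Escalation rule from the slide: after six discomfort detections we
    # assume an unsafe situation and ask the user to reconfirm their
    # identity. The message box is only a placeholder for a real check.
    set discomfortCount 0

    proc on-discomfort {} {
        global discomfortCount
        incr discomfortCount
        if {$discomfortCount >= 6} {
            set discomfortCount 0
            tk_messageBox -icon warning -title "Unsafe situation" \
                -message "Unusual interaction pattern detected.\nPlease confirm your identity."
            # a real GUI would now lock itself until identity is reconfirmed
        }
    }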

Conclusions (1/3)

• Analyses of user interaction can tell us much about the user

Is s/he in command?

Is s/he still the same person who logged in?

Etc (re: crash of Air France 447)

• They can tell us much about the interface

Did the interface behave as expected?

(re: Therac-25 accidents)


Conclusions (2/3)

• This knowledge may (should!) be used to understand what went wrong / react before things go wrong

• “Going wrong” ranges from usability to safety issues


Conclusions (3/3)

• Here, a simple proof of concept, with many potential applications

• Embedded in one simple GUI

• No actual experimentation

• Many unanswered questions:

Analyzers may be too simple or simply wrong

Components may fail – what then?

Etc

• Complex problems that call for multidisciplinary solutions


Please let us know your questions / ideas for collaboration! vincenzo.deflorio@ua.ac.be jonas.buys@ua.ac.be


References

• [GD12a] Gui, N. and De Florio, V.: "Transformer: an adaptation framework with contextual adaptation behavior composition support," Software: Practice & Experience, Wiley, 2012 (accepted for publication).

• [GD12b] Gui, N. and De Florio, V.: "Towards Meta-Adaptation Support with Reusable and Composable Adaptation Components," Sixth IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2012), Lyon, France, 10-14 September 2012.
