Perception
![Page 1: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/1.jpg)
Perception
CS533C Presentation
by Alex Gukov
![Page 2: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/2.jpg)
Papers Covered
- Current Approaches to Change Blindness. Daniel J. Simons. Visual Cognition 7, 1/2/3 (2000)
- Internal vs. External Information in Visual Perception. Ronald A. Rensink. Proc. 2nd Int. Symposium on Smart Graphics, pp. 63-70, 2002
- Visualizing Data with Motion. Daniel E. Huber and Christopher G. Healey. Proc. IEEE Visualization 2005, pp. 527-534
- Stevens Dot Patterns for 2D Flow Visualization. Laura G. Tateosian, Brent M. Dennis, and Christopher G. Healey. Proc. Applied Perception in Graphics and Visualization (APGV) 2006
![Page 3: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/3.jpg)
Change Blindness
Failure to detect scene changes
![Page 4: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/4.jpg)
Change Blindness
- Large and small scene changes
  - Peripheral objects
  - Low-interest objects
- Attentional blink
  - Head or eye movement (saccade)
  - Image flicker
  - Obstruction
  - Movie cut
- Inattentional blindness
  - Object fade-in / fade-out
![Page 5: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/5.jpg)
Mental Scene Representation
How do we store scene details?
- Visual buffer
  - Store the entire image
  - Limited space
  - Refresh process unclear
- Virtual model + external lookup
  - Store a semantic representation
  - Access the scene for details
  - Details may change
Both models support change blindness
![Page 6: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/6.jpg)
Overwriting
- Single visual buffer, continuously updated
- Comparisons limited to semantic information
- Widely accepted
![Page 7: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/7.jpg)
First Impression
- Create an initial model of the scene
- No need to update until the gist changes
- Evidence
  - Test subjects often describe the initial scene
  - Actor substitution experiment
![Page 8: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/8.jpg)
Nothing Is Stored (Just-in-Time)
- Scene indexed for later access
- Maintain only high-level information (gist)
- Use vision to re-acquire details
- Evidence
  - Most tasks operate on a single object
  - Attention is constantly switched
![Page 9: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/9.jpg)
Nothing Is Compared
- Store all details
- Multiple views of the same scene are possible
- Need a 'reminder' to check for contradictions
- Evidence
  - Subjects recalled change details after being notified of the change
  - Basketball experiment
![Page 10: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/10.jpg)
Feature Combination
- Continuously update the visual representation
- Both views contribute details
- Evidence
  - Eyewitnesses add details after being informed of them
![Page 11: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/11.jpg)
Coherence Theory
- Extends the 'just-in-time' model
- Balances external and internal scene representations
- Targets parallelism and low storage
![Page 12: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/12.jpg)
Pre-processing
- Process image data: edges, directions, shapes
- Generate proto-objects
  - Fast parallel processing
  - Detailed entities
  - Linked to visual position
  - No temporal reference
  - Constantly updating
![Page 13: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/13.jpg)
Upper-level Subsystems
- Setting (pre-attentive)
  - Non-volatile scene layout, gist
  - Assists coordination
  - Directs attention
- Coherent objects (attentional)
  - Create a persistent representation when attention is focused on an object
  - Link to multiple proto-objects
  - Maintain task-specific details
  - Small number reduces cognitive load
![Page 14: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/14.jpg)
Subsystem Interaction
Need to construct coherent objects on demand Use non-volatile layout to direct attention
![Page 15: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/15.jpg)
Coherence Theory and Change Blindness
- Changes in currently coherent objects: detectable without rebuilding
- Attentional blink: the representation is lost and rebuilt
- Gradual change: an initial representation never existed
![Page 16: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/16.jpg)
Implications for Interfaces
- Object representations are limited to the current task (focused activity)
- Increase LOD at points of attention
  - Predict or influence the attention target: flicker, pointers, highlights
  - Predict the required LOD from the expected mental model
- Visual transitions
  - Avoid sharp transitions due to rebuild costs
  - Mindsight (pre-attentive change detection)
![Page 17: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/17.jpg)
Critique
- An extremely important phenomenon that will help us understand fundamental perception mechanisms
- The theories lack convincing evidence
  - Experiments do not address a specific goal
  - Experimental results can be interpreted in favour of a specific theory (the basketball case)
![Page 18: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/18.jpg)
Visualizing Data with Motion
- Multidimensional data sets are increasingly common
- Common visualization cues: color, texture, position, shape
- Cues available from motion: flicker, direction, speed
![Page 19: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/19.jpg)
Previous Work
- Detection
  - 2-5% flicker-frequency difference from the background
  - 1°/s speed difference from the background
  - 20° direction difference from the background
  - Peripheral objects need greater separation
- Grouping
  - Oscillation patterns must be in phase
- Notification
  - Motion encoding is superior to color or shape change
![Page 20: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/20.jpg)
Flicker Experiment
Test detection against background flicker:
- Coherency: in phase / out of phase with the background
- Cycle difference
- Cycle length
![Page 21: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/21.jpg)
Flicker Experiment - Results
- Coherency
  - Out-of-phase trials: ~50% detection error
  - Exception for short cycles (120 ms), which appeared in phase
- Cycle difference and cycle length (coherent trials)
  - High detection rates for all values
![Page 22: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/22.jpg)
Direction Experiment
Test detection against background motion:
- Absolute direction
- Direction difference
![Page 23: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/23.jpg)
Direction Experiment - Results
- Absolute direction: does not affect detection
- Direction difference: a 15° minimum gives a low error rate and fast detection; further separation has little effect
![Page 24: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/24.jpg)
Speed Experiment
Test detection against background motion:
- Absolute speed
- Speed difference
![Page 25: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/25.jpg)
Speed Experiment - Results
- Absolute speed: does not affect detection
- Speed difference: a 0.42°/s minimum gives a low error rate and fast detection; further separation has little effect
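Taken together, the direction and speed results suggest simple guardrails when assigning motion encodings to data classes. A minimal sketch, assuming we represent each encoding as a direction (degrees) and speed (degrees/s); the helper and its names are illustrative, not from the paper:

```python
# Minimum separations for reliable pre-attentive detection, taken from
# the direction and speed experiment results on the slides above.
MIN_DIRECTION_DEG = 15.0
MIN_SPEED_DEG_PER_S = 0.42

def distinguishable(a, b):
    """Return True if two motion encodings differ enough in direction
    or speed to be reliably told apart (hypothetical helper)."""
    d = abs(a["direction"] - b["direction"]) % 360.0
    d = min(d, 360.0 - d)  # directions wrap around at 360 degrees
    return (d >= MIN_DIRECTION_DEG or
            abs(a["speed"] - b["speed"]) >= MIN_SPEED_DEG_PER_S)
```

A designer could run this check over every pair of classes before committing to a motion-based encoding.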
![Page 26: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/26.jpg)
Applications
Can be used to visualize flow fields:
- Original data: 2D slices of 3D particle positions over time (x, y, t)
- Animate keyframes
![Page 27: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/27.jpg)
Applications
![Page 28: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/28.jpg)
Critique
- Study
  - Grid density may affect results
  - Multiple target directions
- Technique
  - Temporal change increases cognitive load
  - Color may be hard to track over time
  - Difficult to focus on details
![Page 29: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/29.jpg)
Stevens Model for 2D Flow Visualization
![Page 30: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/30.jpg)
Idea
Initial setup:
- Start with a regular dot pattern
- Apply a global transformation
- Superimpose the two patterns
Glass: the resulting pattern identifies the global transform
Stevens: individual dot pairs create a perception of local direction; multiple transforms can be detected
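The setup above can be sketched in a few lines. A minimal example, assuming a rotation as the global transform and a random rather than strictly regular initial dot field:

```python
import numpy as np

def glass_pattern(n_dots=500, angle_deg=5.0, seed=0):
    """Superimpose a dot field on a globally rotated copy of itself,
    producing a rotational Glass pattern."""
    rng = np.random.default_rng(seed)
    base = rng.uniform(-1.0, 1.0, size=(n_dots, 2))   # initial dot pattern
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # global transform
    return np.vstack([base, base @ rot.T])             # both patterns together
```

Plotting the returned dots shows the characteristic swirl: each dot pair is a short segment tangent to the global rotation.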
![Page 31: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/31.jpg)
Stevens Model
Predicts the perceived direction for a neighbourhood of dots:
- Enumerate line segments in a small neighbourhood
- Calculate segment directions
- Penalize long segments
- Select the most common direction
- Repeat for all neighbourhoods
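The steps above can be sketched as follows. This is an illustrative reading of the model, not the paper's implementation: the 1/length weight stands in for the paper's actual segment-weight function, and the bin count is arbitrary:

```python
import numpy as np

def perceived_direction(dots, center, radius=16.19, n_bins=18):
    """Estimate the perceived local direction around `center`:
    enumerate dot-pair segments in a small neighbourhood, penalize
    long segments, and pick the dominant orientation."""
    near = dots[np.linalg.norm(dots - center, axis=1) <= radius]
    votes = np.zeros(n_bins)
    for i in range(len(near)):
        for j in range(i + 1, len(near)):
            seg = near[j] - near[i]
            length = np.hypot(seg[0], seg[1])
            if length == 0:
                continue
            # orientation in [0, pi): segments are undirected
            ang = np.arctan2(seg[1], seg[0]) % np.pi
            votes[int(ang / np.pi * n_bins) % n_bins] += 1.0 / length
    return (np.argmax(votes) + 0.5) * np.pi / n_bins  # bin centre, radians
```

Running this over a grid of centres ("repeat for all neighbourhoods") yields a direction estimate per neighbourhood.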
![Page 32: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/32.jpg)
Stevens Model
Segment weight
![Page 33: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/33.jpg)
Stevens Model
Ideal neighbourhood (empirical results):
- 6-7 dots per neighbourhood
- Density: 0.0085 dots/pixel
- Neighbourhood radius: 16.19 pixels
Implication for the visualization algorithm: multiple zoom levels are required
![Page 34: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/34.jpg)
2D Flow Visualization
The Stevens model estimates perceived direction. How can we use it to visualize flow fields?
Construct dot neighbourhoods such that the desired direction matches what is perceived.
![Page 35: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/35.jpg)
Algorithm
Data: 2D slices of 3D particle positions over a period of time
Algorithm:
- Start with a regular grid
- Calculate the direction error around a single point
  - Desired direction: keyframe data
  - Perceived direction: Stevens model
- Move one of the neighbourhood points to decrease the error
- Repeat for all neighbourhoods
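A minimal sketch of one such error-reduction step, under stated assumptions: `perceived_dir` is a simple inverse-length-weighted orientation vote standing in for the Stevens-model prediction, and the greedy random nudge is an illustrative optimizer, not the paper's procedure:

```python
import numpy as np

def perceived_dir(near):
    """Dominant pairwise orientation, weighted by 1/length
    (a stand-in for the Stevens-model prediction)."""
    votes = {}
    for i in range(len(near)):
        for j in range(i + 1, len(near)):
            dx, dy = near[j] - near[i]
            length = np.hypot(dx, dy)
            if length == 0:
                continue
            b = int(round((np.arctan2(dy, dx) % np.pi) / (np.pi / 18))) % 18
            votes[b] = votes.get(b, 0.0) + 1.0 / length
    return max(votes, key=votes.get) * np.pi / 18 if votes else 0.0

def adjust(dots, idx, target, step=1.0, tries=8, rng=None):
    """One greedy step of the slide's algorithm: nudge point `idx`
    so the perceived direction moves toward `target` (radians)."""
    if rng is None:
        rng = np.random.default_rng(0)
    def err(pts):
        d = abs(perceived_dir(pts) - target) % np.pi
        return min(d, np.pi - d)  # orientations wrap at 180 degrees
    best = err(dots)
    for _ in range(tries):
        cand = dots.copy()
        cand[idx] += rng.uniform(-step, step, size=2)  # random nudge
        if err(cand) < best:                           # keep only improvements
            dots, best = cand, err(cand)
    return dots
```

Sweeping this step over all neighbourhoods, with the target taken from the keyframe data, gradually pulls the dot field toward the desired perceived flow.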
![Page 36: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/36.jpg)
Results
![Page 37: Perception](https://reader036.fdocuments.us/reader036/viewer/2022062807/56815141550346895dbf5ef9/html5/thumbnails/37.jpg)
Critique
- Model
  - Shouldn't segments that are too short also be penalized?
- Algorithm
  - Encodes the time dimension without involving cognitive processing
  - Unexplained data clustering appears as a visual artifact
  - More severe when starting with a random field