Transcript of AUGMENTED REALITY VIDEO PLAYLIST (cs-courses.mines.edu/csci507/projects/2015/Chandra.pdf)
AUGMENTED REALITY
VIDEO PLAYLIST
Surya Chandra
EENG512
Augmented Reality
Augmented reality is the integration of digital information with live video or the user's environment in real time.
It is an enhanced view of reality, possibly involving human interaction.
Goal
[ Using either the live feed of a webcam or a video recording as reference ]
1) Playlist: play a set of videos on top of everyday objects (photos, books, etc.) lying around the room.
2) Interaction: allow the user to point at and select a particular video and view it on the user's palm.
1a. Selecting markers
[Figure: candidate reference images, labeled NO/YES for unsuitable vs. suitable markers]
1b. Video feed: webcam or recording
2a. Extract features: SURF (Speeded-Up Robust Features)
MATLAB: detectSURFFeatures, extractFeatures
[Figure: SURF features detected in webcam frame 1]
2b. Match features
MATLAB: matchFeatures
[Figure: feature matches between the reference image/marker and webcam frame 1]
2c. Inlier matches
MATLAB: estimateGeometricTransform (returns the inlier points and the transformation)
[Figure: inlier matches between the reference image/marker and webcam frame 1]
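Steps 2a-2c can be sketched in MATLAB with the Computer Vision Toolbox functions named above; the image file name and variable names here are illustrative, not from the slides:

```matlab
% Load the marker and grab one webcam frame (names are illustrative)
refImg = rgb2gray(imread('marker.jpg'));
frame  = rgb2gray(webcamFrame);

% 2a. Detect and describe SURF features in both images
refPts   = detectSURFFeatures(refImg);
framePts = detectSURFFeatures(frame);
[refFeat,   refValidPts]   = extractFeatures(refImg, refPts);
[frameFeat, frameValidPts] = extractFeatures(frame, framePts);

% 2b. Match descriptors between the marker and the frame
idxPairs     = matchFeatures(refFeat, frameFeat);
matchedRef   = refValidPts(idxPairs(:, 1));
matchedFrame = frameValidPts(idxPairs(:, 2));

% 2c. RANSAC-fit a projective transform mapping marker -> frame;
%     outlier matches are discarded in the process
[tform, inlierRef, inlierFrame] = estimateGeometricTransform( ...
    matchedRef, matchedFrame, 'projective');
```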
3a. Rescaling video
Resize the video frame to match the dimensions of the reference image.
MATLAB: imresize
[Figure: video frame 1 (rescaled) beside the reference image/marker]
3b. Transforming video
Apply the transformation obtained from the reference image to video frame 1.
MATLAB: imwarp
vision.AlphaBlender: given a mask, blends two images
[Figure: video frame 1 (transformed) over webcam frame 1]
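Steps 3a-3c might look like the following sketch, assuming `tform` is the marker-to-frame transform from estimateGeometricTransform, `refImg` the reference image, `frame` the webcam frame, and `vidFrame` the current playlist-video frame:

```matlab
% 3a. Resize the video frame to the marker's dimensions
vidFrame = imresize(vidFrame, [size(refImg, 1), size(refImg, 2)]);

% 3b. Warp the video frame into webcam-frame coordinates using the
%     marker-to-frame transform estimated earlier
outView = imref2d([size(frame, 1), size(frame, 2)]);
warped  = imwarp(vidFrame, tform, 'OutputView', outView);

% 3c. Warp an all-true mask the same way, then blend the warped video
%     over the webcam frame wherever the mask is set
mask    = imwarp(true(size(vidFrame, 1), size(vidFrame, 2)), tform, ...
                 'OutputView', outView);
blender = vision.AlphaBlender('Operation', 'Binary mask', ...
                              'MaskSource', 'Input port');
result  = step(blender, frame, warped, mask);
```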
3c. Projected result
[Figure: output video frame 1]
Problem:
This method takes about 1 to 1.5 s per frame, so running it on every frame would be very expensive.
Solution:
POINT TRACKER (vision.PointTracker)
Tracks a given set of points using the KLT (Kanade-Lucas-Tomasi) feature-tracking algorithm.
Works well for tracking objects that do not change shape.
Used in video stabilization, object tracking, and camera motion estimation.
4a. PointTracker
Initialize a point tracker with the inlier points previously obtained.
[Figure: initialized point tracker]
4b. PointTracker
Go to the next webcam frame and match the tracked points.
[Figure: tracked points in webcam video frames 1 and 2]
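Steps 4a-4b as a sketch, assuming `inlierFrame` holds the inlier points returned by estimateGeometricTransform for webcam frame 1 (`frame1`); the `MaxBidirectionalError` setting is an illustrative choice, not from the slides:

```matlab
% 4a. Initialize a KLT point tracker with the inlier locations from frame 1
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, inlierFrame.Location, frame1);

% 4b. Step to the next webcam frame; the validity flags mark points
%     that were found again
[points, validity] = step(tracker, frame2);
validNew = points(validity, :);
```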
Repeat for each new frame:
Rescale the next video frame.
Transform the video frame.
Project and blend.
Reset the PointTracker with the new inliers.
The PointTracker method executes at around 8-10 frames per second.
The transformation needs to be accumulated up to the current frame:
trackingTransform.T = refTransform.T * trackingTransform.T;
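One way to realize the accumulation: re-estimate the frame-to-frame motion from the tracked point pairs, then compose it with the running marker-to-frame transform. The variable names `oldPoints`/`validity`/`validNew` are illustrative tracker outputs; the composition order is kept exactly as on the slide:

```matlab
% Frame-to-frame motion from point pairs that survived tracking
validOld = oldPoints(validity, :);     % positions in the previous frame
[refTransform, ~, ~] = estimateGeometricTransform( ...
    validOld, validNew, 'projective'); % validNew: positions in this frame

% Fold it into the accumulated marker-to-current-frame transform
trackingTransform.T = refTransform.T * trackingTransform.T;
```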
Problem:
The point tracker works only for short-term tracking. Over time, points are lost due to lighting variations and out-of-plane rotation, so points have to be reacquired periodically to keep tracking for longer.
Solution:
Break the loop when the number of tracked points drops below 6 and restart from step 1, extracting SURF features again. In practice this happens once every 70-100 frames.
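The restart condition could be sketched as follows; the threshold of 6 points comes from the slide, while the surrounding loop scaffolding is assumed:

```matlab
MIN_POINTS = 6;   % below this, the pose estimate becomes unreliable
if nnz(validity) < MIN_POINTS
    release(tracker);   % allow the tracker to be re-initialized
    % ... go back to step 1: re-detect SURF features, re-estimate the
    % transform, and initialize the tracker with fresh inlier points
end
```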
Preliminary Testing: result video
https://www.youtube.com/watch?v=qCWVcxSxAo4
Preliminary Testing 2: two videos
https://www.youtube.com/watch?v=5XZ1_utCYIQ
Problem:
The transformation matrix was sometimes badly conditioned.
Easy fix: when rcond < 10^-6, break and restart.
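The conditioning check can be sketched in one line (the 10^-6 threshold is from the slide; releasing the tracker as the restart mechanism is an assumption):

```matlab
% Restart when the accumulated homography becomes badly conditioned
if rcond(trackingTransform.T) < 1e-6
    release(tracker);   % break and restart from SURF detection (step 1)
end
```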
5a. Interaction: red tape is used as the marker
Take the red component of the RGB image minus the grayscale (channel average), filter the result to remove noise, and apply a threshold.
[Figure: RGB image, red-minus-grayscale map, filtered map, thresholded mask]
5b. Interaction
Used BlobAnalysis to extract red regions within a certain area range and find their centroids.
https://www.youtube.com/watch?v=xjguVXAZdnk
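Steps 5a-5b might be sketched like this; the filter size, threshold value, and blob-area limits are illustrative assumptions, and `rgbFrame` stands for the current color webcam frame:

```matlab
% 5a. Emphasize red: red channel minus grayscale (average of channels)
rgbF   = double(rgbFrame);
gray   = (rgbF(:,:,1) + rgbF(:,:,2) + rgbF(:,:,3)) / 3;
redMap = rgbF(:,:,1) - gray;        % large where the pixel is strongly red
redMap = medfilt2(redMap, [5 5]);   % filter out noise
bw     = redMap > 40;               % apply threshold

% 5b. Keep red blobs within an area range and take their centroids
blobs = vision.BlobAnalysis('MinimumBlobArea', 50, ...
                            'MaximumBlobArea', 2000);
[~, centroids] = step(blobs, bw);   % centroids of the red-tape markers
```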
5c. Interaction
Scaled the video to the distance between these centroids.
Found the 2D transformation matrix using the angle between them.
Used the same process as in part 1 to project the video.
https://www.youtube.com/watch?v=ePx_H3LTvRo
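Step 5c, scale from the centroid distance and rotation from their angle, could be realized as a similarity transform; anchoring the video at the first centroid is an assumption of this sketch:

```matlab
% 5c. Similarity transform from the two marker centroids
v     = double(centroids(2,:) - centroids(1,:));
d     = norm(v);                       % distance sets the scale
theta = atan2(v(2), v(1));             % angle sets the rotation
s     = d / size(vidFrame, 2);         % scale the video width to the gap

% affine2d uses the row-vector convention [x y 1] * T
T = [ s*cos(theta)    s*sin(theta)    0
     -s*sin(theta)    s*cos(theta)    0
      centroids(1,1)  centroids(1,2)  1];
tformPalm = affine2d(T);

% Project the video frame as in part 1
warped = imwarp(vidFrame, tformPalm, ...
                'OutputView', imref2d([size(frame,1) size(frame,2)]));
```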
6. Adding Interaction (Results)
https://www.youtube.com/watch?v=G69nCvYhJGA
7. Multiple Videos (Results): selecting the video closest to the markers
https://www.youtube.com/watch?v=zTttISVHhV8
8. Future Work
Fix some coding issues with handling the remaining point trackers while a video is being viewed by the user (as seen in the previous video).
Try to implement marker-less detection of fingertips and hand-pose estimation.
9. Takeaways
The frame rate of the webcam video and of the playlist videos should be the same.
The PointTracker loses accuracy over time and has to be re-initialized.
Cases where the reference image is completely out of frame have to be handled.
… and Computer Vision works!
10. REFERENCES
[1] Bay, Herbert, et al. "Speeded-Up Robust Features (SURF)." Computer Vision and Image Understanding 110.3 (2008): 346-359.
[2] Lee, Taehee, and Tobias Höllerer. "Handy AR: Markerless inspection of augmented reality objects using fingertip tracking." 11th IEEE International Symposium on Wearable Computers, 2007.
[3] Ta, Duy-Nguyen, et al. "SURFTrac: Efficient tracking and continuous object recognition using local feature descriptors." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.