REU Week 1
Presented by Christina Peterson
Transcript of REU Week 1
Edge Detection: Sobel
◦Convolve image with derivative masks:
  x:  1  0 -1        y:  1  2  1
      2  0 -2            0  0  0
      1  0 -1           -1 -2 -1
◦Calculate gradient magnitude
◦Apply threshold
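The Sobel steps can be sketched in Python with numpy; the convolution helper and the threshold value are illustrative choices, not from the slides:

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    kf = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kf)
    return out

def sobel_edges(img, threshold=1.0):
    # Sobel derivative masks from the slide
    maskx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    masky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
    gx = convolve2d(img, maskx)
    gy = convolve2d(img, masky)
    mag = np.hypot(gx, gy)   # gradient magnitude
    return mag > threshold   # apply threshold

# A vertical step edge: detected along the step, not in the flat regions
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img, threshold=1.0)
```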
Edge Detection: Marr-Hildreth
◦Apply Laplacian of Gaussian (LoG) to an image:
  ∇²G(x, y) = ((x² + y² − 2σ²) / σ⁴) · e^(−(x² + y²)/(2σ²))
◦Find zero crossings: {+, −}, {+, 0, −}, {−, +}, {−, 0, +}
◦Mark edges: apply threshold to slope of zero crossings
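A minimal numpy sketch of these steps, assuming the standard LoG form above; for brevity it only checks adjacent-sign flips ({+,−}, {−,+}), skipping the {+,0,−} patterns:

```python
import numpy as np

def log_kernel(size, sigma):
    """Sample the Laplacian of Gaussian:
    ((x^2 + y^2 - 2*sigma^2) / sigma^4) * exp(-(x^2 + y^2) / (2*sigma^2))."""
    r = size // 2
    y, x = np.mgrid[-r:r+1, -r:r+1]
    s2 = sigma * sigma
    return ((x*x + y*y - 2*s2) / (s2*s2)) * np.exp(-(x*x + y*y) / (2*s2))

def zero_crossings(resp, slope_thresh=0.0):
    """Mark pixels where the LoG response changes sign between neighbours
    and the slope (difference across the crossing) exceeds a threshold."""
    edges = np.zeros(resp.shape, dtype=bool)
    for i in range(resp.shape[0] - 1):
        for j in range(resp.shape[1] - 1):
            for di, dj in ((0, 1), (1, 0)):  # check right and down neighbours
                a, b = resp[i, j], resp[i + di, j + dj]
                if a * b < 0 and abs(a - b) > slope_thresh:
                    edges[i, j] = True
    return edges

# Synthetic LoG response with a sign change between columns 1 and 2
resp = np.array([[1., 1., -1., -1.],
                 [1., 1., -1., -1.]])
edges = zero_crossings(resp)
```

The LoG integrates to zero over the plane, so a well-sampled kernel should sum to approximately zero.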
Edge Detection: Canny
◦Convolve image with first derivative of Gaussian
◦Find magnitude of gradient and orientation
◦Apply non-maximum suppression: for each pixel, check if it is a local maximum by comparing it to neighboring pixels along the direction normal to the edge
◦Apply hysteresis thresholding
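The non-maximum suppression step can be sketched as follows; quantizing the gradient direction to 0/45/90/135 degrees is a common simplification, not prescribed by the slides:

```python
import numpy as np

def nonmax_suppress(mag, angle):
    """Keep a pixel only if it is a local maximum along its gradient
    direction (the normal to the edge). angle is in degrees."""
    out = np.zeros_like(mag)
    H, W = mag.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j] % 180
            if a < 22.5 or a >= 157.5:   # gradient ~horizontal: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:               # ~45 degrees
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:              # ~vertical: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                        # ~135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

# A vertical ridge of magnitude 5 with horizontal gradient (angle 0):
# only the ridge itself survives suppression
mag = np.ones((5, 5))
mag[:, 2] = 5.0
angle = np.zeros((5, 5))
out = nonmax_suppress(mag, angle)
```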
Canny Example
(figure: original image and Canny output)
Harris Corner Detector
Implemented Harris Corner Detector:
◦1. x and y derivatives:
  Ix = conv2(double(I), maskx, 'same');
  Iy = conv2(double(I), masky, 'same');
◦2. Products of derivatives:
  Ix2 = Ix.*Ix;  Iy2 = Iy.*Iy;  Ixy = Ix.*Iy;
◦3. Sums of products of derivatives:
  Sx2 = gauss_filter(Ix2, sigma, kernel_size);
  Sy2 = gauss_filter(Iy2, sigma, kernel_size);
  Sxy = gauss_filter(Ixy, sigma, kernel_size);
Harris Corner Detector
◦4. Define matrix H(x, y) at each pixel:
  for j = 1:columns
    for i = 1:rows
      H{i, j} = [Sx2(i, j) Sxy(i, j); Sxy(i, j) Sy2(i, j)];
    end
  end
◦5. Response detector:
  for j = 1:columns
    for i = 1:rows
      R(i, j) = det(H{i, j}) - k*(trace(H{i, j}))^2;
    end
  end
◦6. Apply threshold to R:
  Edge: R < -10000   Corner: R > 10000
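The same pipeline can be sketched in Python/numpy. Two simplifications relative to the MATLAB steps: np.gradient replaces conv2 with derivative masks, and a plain box window replaces the Gaussian weighting:

```python
import numpy as np

def harris_response(I, k=0.04, win=3):
    """Harris response R = det(H) - k * trace(H)^2 at each pixel,
    with H built from windowed sums of gradient products."""
    Iy, Ix = np.gradient(I.astype(float))   # step 1: derivatives
    Ix2, Iy2, Ixy = Ix*Ix, Iy*Iy, Ix*Iy     # step 2: products
    r = win // 2
    H, W = I.shape
    R = np.zeros((H, W))
    for i in range(r, H - r):
        for j in range(r, W - r):
            # step 3: windowed sums of products (box window here)
            sx2 = Ix2[i-r:i+r+1, j-r:j+r+1].sum()
            sy2 = Iy2[i-r:i+r+1, j-r:j+r+1].sum()
            sxy = Ixy[i-r:i+r+1, j-r:j+r+1].sum()
            # steps 4-5: response from the 2x2 structure matrix
            R[i, j] = (sx2*sy2 - sxy*sxy) - k*(sx2 + sy2)**2
    return R

# Bright square on dark background; on this small float image the
# responses are small, so we look at signs rather than the slide's
# +/-10000 thresholds (which assume larger-valued images)
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0
R = harris_response(img)
```

R is positive at the square's corners, negative along its edges, and zero in flat regions, matching the sign-based thresholding in step 6.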
Harris Corner Detector (figure)
SIFT: Purpose
◦To identify features of an image regardless of scale and rotation
SIFT: Scale Space
◦Resize image to half size (octave)
◦Blur image by adjusting sigma
◦4 octaves and 5 blur levels are recommended
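The scale-space construction (4 octaves, 5 blur levels) can be sketched as follows; the base sigma and the per-level multiplier are illustrative choices, and the kernel radius is clamped so the blur stays valid on the smallest octave:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolution along each axis.
    The kernel radius is clamped so it never exceeds the image."""
    r = max(1, min(int(3 * sigma), min(img.shape) // 2 - 1))
    x = np.arange(-r, r + 1)
    g = np.exp(-x * x / (2 * sigma * sigma))
    g /= g.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, out)

def build_scale_space(img, octaves=4, levels=5, sigma0=1.6, k=2 ** 0.5):
    """Per the slide: blur at `levels` sigmas per octave, then halve the
    image for the next octave."""
    space = []
    current = img.astype(float)
    for _ in range(octaves):
        octave = [gaussian_blur(current, sigma0 * k ** i) for i in range(levels)]
        space.append(octave)
        current = current[::2, ::2]  # downsample by 2 for the next octave
    return space

img = np.random.rand(64, 64)
space = build_scale_space(img)  # 4 octaves x 5 blur levels
```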
SIFT: SIFT Features
◦Divide image into 4 x 4 windows
◦Divide each window into 4 x 4 subwindows
  •Calculate gradient magnitude and orientation for each subwindow
◦Generate a histogram of 8 bins for each 4 x 4 window
  •Each bin represents a gradient orientation
  •4 x 4 x 8 = 128 dimensions
SIFT using VLFeat
Match candidates by finding patches that have the most similar SIFT descriptor
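The matching step can be sketched with a nearest-neighbour search; the ratio test (accepting a match only when the best neighbour is clearly closer than the second best) is Lowe's standard criterion, and the toy 4-D descriptors stand in for real 128-D SIFT descriptors:

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Match each descriptor in d1 to its nearest neighbour in d2,
    keeping the match only if it passes Lowe's ratio test."""
    matches = []
    for i, d in enumerate(d1):
        dists = np.linalg.norm(d2 - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:  # clearly better than 2nd best
            matches.append((i, int(best)))
    return matches

# Toy 4-D "descriptors" (real SIFT descriptors are 128-D)
d1 = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.]])
d2 = np.array([[0., 1., 0.1, 0.],
               [5., 5., 5., 5.],
               [1., 0.1, 0., 0.]])
matches = match_descriptors(d1, d2)
```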
Optical Flow: Lucas-Kanade Optical Flow
◦Brightness constancy constraint: f_x u + f_y v + f_t = 0
◦Least-squares solution over a window: [u; v] = -(A^T A)^(-1) A^T f_t, where A stacks [f_x f_y]
◦Does not work for areas of large motion
  ◦Resolved by pyramids
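A minimal numpy sketch of the Lucas-Kanade solve, using np.gradient for the spatial derivatives and a simple frame difference for f_t (window size and test pattern are illustrative):

```python
import numpy as np

def lucas_kanade(f1, f2, i, j, r=2):
    """Solve [u; v] = -(A^T A)^-1 A^T f_t in least squares over a
    (2r+1)^2 window centred at (i, j); A stacks [f_x f_y]."""
    fy, fx = np.gradient(f1)   # spatial derivatives
    ft = f2 - f1               # temporal derivative
    win = np.s_[i - r:i + r + 1, j - r:j + r + 1]
    A = np.stack([fx[win].ravel(), fy[win].ravel()], axis=1)
    b = -ft[win].ravel()
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solve
    return uv  # (u, v)

# A smooth ramp shifted one pixel to the right between frames
x = np.arange(20, dtype=float)
f1 = np.tile(x, (20, 1))         # intensity increases with column index
f2 = np.tile(x - 1.0, (20, 1))   # same pattern moved +1 pixel in x
u, v = lucas_kanade(f1, f2, 10, 10)
```

On this ramp the recovered flow is u = 1, v = 0, i.e. one pixel of rightward motion.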
Optical Flow (figure)
Bag of Features
Implemented a Bag of Words classification:
◦Divided image into frames
◦Concatenated SIFT descriptors for each frame
◦Used kmeans2 to cluster features
◦Represented each image as a histogram of cluster assignments
◦Used histograms as training data for SVM
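The histogram-representation step can be sketched as follows. The slides use kmeans2 to learn the cluster centres ("visual words"); here the vocabulary is assumed given, and the toy 2-D descriptors stand in for SIFT descriptors:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize each descriptor to its nearest cluster centre and
    represent the image as a normalized histogram of word counts."""
    k = len(vocabulary)
    # pairwise distances: (n_descriptors, n_words)
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)          # nearest visual word per descriptor
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()                  # normalize to a distribution

# Toy example: 2 visual words in 2-D descriptor space,
# two descriptors near each word -> histogram [0.5, 0.5]
vocab = np.array([[0., 0.], [10., 10.]])
desc = np.array([[0.1, 0.2], [9.8, 10.1], [0.0, 0.3], [10.2, 9.9]])
hist = bow_histogram(desc, vocab)
```

These histograms are what would then be fed to the SVM as training data.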
Bag of Features
Results for 8 frames and 20 clusters:
◦9.5% accuracy on test data
Conclusions:
◦Increase frames and clusters to improve accuracy
Research Topics
1. Survey on Multiple Human Tracking by Detection Methods • Afshin Dehghan
2. Data Driven Attributes for Action Detection • Rui Hou
3. Subspace Clustering via Graph Regularized Sparse Coding • Nasim Souly