|
10
|
- Find features that are invariant to transformations
  - Geometric invariance: translation, rotation, scale
  - Photometric invariance: brightness, exposure, …
|
|
11
|
- Locality
  - features are local, so robust to occlusion and clutter
- Distinctiveness
  - can distinguish individual objects in a large database
- Quantity
  - hundreds or thousands in a single image
- Efficiency
  - real-time performance is achievable
- Generality
  - exploit different types of features in different situations
|
|
12
|
- Feature points are used for:
- Image alignment (e.g., mosaics)
- 3D reconstruction
- Motion tracking
- Object recognition
- Indexing and database retrieval
- Robot navigation
- … other
|
|
34
|
- Suppose you rotate the image by some angle
- Will you still pick up the same features?
- What if you change the brightness?
- Scale?
|
|
35
|
- Suppose you’re looking for corners
- Key idea: find the scale that gives a local maximum of f
  - f is a local maximum in both position and scale
- Common definition of f: the Laplacian (or the difference between two Gaussian-filtered images with different sigmas)
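This scale-selection recipe can be sketched in code (a minimal illustration assuming NumPy/SciPy; `dog_stack` and `is_local_max` are our own names, with f taken as a difference of Gaussians):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img, sigmas):
    """f at several scales: difference of two Gaussian-filtered images."""
    blurred = [gaussian_filter(img, s) for s in sigmas]
    # Sign chosen so bright blobs give maxima (extrema of either sign work).
    return np.stack([blurred[i] - blurred[i + 1] for i in range(len(sigmas) - 1)])

def is_local_max(stack, s, y, x):
    """True if f is a strict local maximum in both position and scale."""
    patch = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    return stack[s, y, x] == patch.max() and np.count_nonzero(patch == patch.max()) == 1
```

For a bright blob, the response peaks at the blob center and at a scale comparable to the blob's own sigma, which is exactly the "maximum in both position and scale" idea above.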
|
|
|
45
|
- We know how to detect good points
- Next question: how to match them?
- Lots of possibilities (this is a popular research area)
  - Simple option: match square windows around the point
  - State-of-the-art approach: SIFT (David Lowe, UBC, http://www.cs.ubc.ca/~lowe/keypoints/)
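The simple window-matching option can be sketched as follows (an illustrative sketch assuming NumPy; `window`, `ncc`, and `best_match` are our own names). Normalized cross-correlation is used rather than raw SSD so the score is invariant to affine brightness changes:

```python
import numpy as np

def window(img, y, x, half=4):
    """Extract a (2*half+1) x (2*half+1) window centered at (y, x)."""
    return img[y - half:y + half + 1, x - half:x + half + 1]

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized windows, in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def best_match(img1, p, img2, candidates, half=4):
    """Return the candidate point in img2 whose window best matches p's window."""
    w1 = window(img1, *p, half)
    scores = [ncc(w1, window(img2, y, x, half)) for y, x in candidates]
    return candidates[int(np.argmax(scores))]
```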
|
|
49
|
- Find the dominant orientation of the image window
  - This is given by x₊, the eigenvector of H corresponding to λ₊ (the larger eigenvalue)
- Rotate the window according to this angle
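A hedged sketch of this step, assuming H is the second-moment (gradient covariance) matrix of the window and using NumPy (`dominant_orientation` is our own name):

```python
import numpy as np

def dominant_orientation(win):
    """Angle of x+, the eigenvector of H for the larger eigenvalue lambda+."""
    gy, gx = np.gradient(win.astype(float))   # gradients along y (rows) and x (cols)
    H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    evals, evecs = np.linalg.eigh(H)          # eigh: eigenvalues in ascending order
    x_plus = evecs[:, np.argmax(evals)]       # eigenvector for the larger eigenvalue
    return np.arctan2(x_plus[1], x_plus[0])   # components are (x, y)
```

For a window of horizontal stripes (intensity varying only with y), the gradient energy is entirely vertical, so the returned angle is ±π/2.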
|
|
50
|
- Take a 40x40 square window around the detected feature
- Scale it to 1/5 size (using prefiltering)
- Rotate it to horizontal
- Sample an 8x8 square window centered at the feature
- Intensity-normalize the window by subtracting the mean and dividing by the standard deviation of the window
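These descriptor steps can be sketched roughly as follows (rotation to horizontal is omitted for brevity; SciPy's `gaussian_filter` stands in for the prefilter, and `mops_descriptor` is our own name):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mops_descriptor(img, y, x):
    """8x8 intensity-normalized descriptor from a 40x40 window at (y, x)."""
    win = img[y - 20:y + 20, x - 20:x + 20].astype(float)  # 40x40 window
    win = gaussian_filter(win, sigma=2.0)                  # prefilter before downsampling
    small = win[2::5, 2::5]                                # every 5th pixel -> 8x8
    return (small - small.mean()) / (small.std() + 1e-8)   # intensity normalize
```

The normalization in the last line is what makes the descriptor insensitive to brightness and contrast changes.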
|
|
55
|
- Maximally Stable Extremal Regions
- Threshold the image intensities: I > thresh, for several increasing values of thresh
- Extract connected components (“extremal regions”)
- Find the threshold at which a region is “maximally stable”, i.e. a local minimum of its relative growth
- Approximate each region with an ellipse
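A toy sketch of the stability search (not the real component-tree algorithm): track the area of the connected component containing a seed pixel as the threshold rises, and pick the threshold where the relative growth is smallest. Assumes SciPy; `component_areas` and `most_stable` are our own names.

```python
import numpy as np
from scipy.ndimage import label

def component_areas(img, seed, thresholds):
    """Area of the connected component of (I > t) containing `seed`, per t."""
    areas = []
    for t in thresholds:
        labels, _ = label(img > t)
        lab = labels[seed]
        areas.append(int((labels == lab).sum()) if lab != 0 else 0)
    return areas

def most_stable(areas):
    """Index where the relative growth |A[i+1] - A[i-1]| / A[i] is smallest."""
    growth = [abs(areas[i + 1] - areas[i - 1]) / max(areas[i], 1)
              for i in range(1, len(areas) - 1)]
    return 1 + int(np.argmin(growth))
```

For a bright square nested inside a dimmer one, the component area stays flat over the range of thresholds that isolates the bright square, so that range is the "maximally stable" one.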
|
|
59
|
- Throw out features with distance > threshold
- How to choose the threshold?
|
|
60
|
- The distance threshold affects performance
  - True positives = # of detected matches that are correct
    - Suppose we want to maximize these; how should we choose the threshold?
  - False positives = # of detected matches that are incorrect
    - Suppose we want to minimize these; how should we choose the threshold?
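The trade-off can be made concrete with a toy threshold sweep (illustrative only; `tp_fp_counts` and the ground-truth labels are invented for the example):

```python
import numpy as np

def tp_fp_counts(distances, is_correct, thresh):
    """Keep matches with distance < thresh; count correct (TP) and incorrect (FP)."""
    kept = distances < thresh
    tp = int(np.sum(kept & is_correct))
    fp = int(np.sum(kept & ~is_correct))
    return tp, fp
```

Raising the threshold admits more true matches but also more false ones, which is why choosing it is a genuine design decision rather than a fixed constant.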
|
|
61
|
- How can we measure the performance of a feature matcher?
|
|
64
|
- Features are used for:
- Image alignment (e.g., mosaics)
- 3D reconstruction
- Motion tracking
- Object recognition
- Indexing and database retrieval
- Robot navigation
- … other