A day of a Software Engineer
CSE 455: Computer Vision
Project 2 Artifact
Pingyang He and Atanas Kirilov
Test sequence
Feature detection:
worked well: We detect essentially all of the features that the solution finds, and the orientations of our features match the solution's.
not well: The threshold we picked differs from the solution's, so we detect slightly more features than the solution does.
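The thresholding step above can be sketched as follows. This is a minimal illustration, assuming a Harris-style corner-response map; the response values, window size, and threshold are hypothetical, not the ones from our implementation:

```python
# Sketch: thresholding a corner-response map (hypothetical values).
# The response map and threshold below are illustrative only.

def detect_features(response, threshold):
    """Return (row, col) of every pixel whose corner response exceeds
    the threshold and is a strict local maximum in its 3x3 window."""
    h, w = len(response), len(response[0])
    features = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = response[r][c]
            if v <= threshold:
                continue
            neighbors = [response[r + dr][c + dc]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)]
            if all(v > n for n in neighbors):
                features.append((r, c))
    return features

# A tiny synthetic response map with two peaks.
resp = [
    [0.0, 0.1, 0.0, 0.0, 0.0],
    [0.1, 0.9, 0.1, 0.0, 0.0],
    [0.0, 0.1, 0.0, 0.2, 0.0],
    [0.0, 0.0, 0.2, 0.8, 0.2],
    [0.0, 0.0, 0.0, 0.2, 0.0],
]

print(detect_features(resp, 0.5))  # → [(1, 1), (3, 3)]
```

Lowering the threshold admits weaker responses, which is why a different threshold choice changes the feature count relative to the solution.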
Panorama:
worked well: The stitching matches the solution image pixel for pixel, so the alignment and blending worked well.
not well: Working out the image translations was very difficult; many of our formulas reversed the order of the images, left gaps, or even crashed the program.
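The translation bookkeeping described above can be sketched like this. The helper names and numbers are hypothetical, and the real code must also blend pixels in the overlap regions; this only shows how per-pair offsets accumulate into absolute positions and a canvas size, which is where ordering mistakes cause gaps:

```python
# Sketch: placing images on a canvas from pairwise translations
# (hypothetical helpers; blending in overlaps is omitted).

def accumulate_offsets(pairwise):
    """Given pairwise (dx, dy) translations between consecutive
    images, return each image's absolute offset from the first."""
    offsets = [(0, 0)]
    for dx, dy in pairwise:
        px, py = offsets[-1]
        offsets.append((px + dx, py + dy))
    return offsets

def canvas_size(offsets, w, h):
    """Bounding box of all images of size (w, h) placed at offsets."""
    xs = [x for x, _ in offsets]
    ys = [y for _, y in offsets]
    return (max(xs) - min(xs) + w, max(ys) - min(ys) + h)

offsets = accumulate_offsets([(300, 5), (280, -3)])
print(offsets)                         # → [(0, 0), (300, 5), (580, 2)]
print(canvas_size(offsets, 400, 300))  # → (980, 305)
```

Reversing a pair's sign (or the order of images) shifts every later offset, which explains the gaps and reversed orderings we fought with.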
Sequence with Kaiden panorama head
Feature detection:
worked well: Our own code detects and matches enough features to generate the panorama.
not well: Since the two recycle bins in the scene are almost identical, the panorama we generated has some artifacts around that area.
Panorama:
worked well: The stitching is nearly identical to the solution and almost seamless.
not well: Two of the images overlapped in a very plain region, so the algorithm matched only 2 features there, and as a result there is a bit of ghosting.
Sequence taken by hand
The result is rather poor. Each image pair yields only about 10-12 inliers, and some yield as few as 3, whereas the sequences taken with a tripod usually yield more than 70 inliers per pair.
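The inlier counts above come from the RANSAC scoring step, which can be sketched as follows. This is a hypothetical simplification assuming a pure-translation motion model and an illustrative pixel tolerance; the actual model and threshold may differ:

```python
# Sketch: counting inliers for one candidate translation
# (hypothetical simplification of the RANSAC scoring step).

def count_inliers(matches, dx, dy, tol=3.0):
    """matches: list of ((x1, y1), (x2, y2)) feature pairs.
    A match is an inlier if translating (x1, y1) by (dx, dy)
    lands within tol pixels of (x2, y2)."""
    inliers = 0
    for (x1, y1), (x2, y2) in matches:
        err = ((x1 + dx - x2) ** 2 + (y1 + dy - y2) ** 2) ** 0.5
        if err <= tol:
            inliers += 1
    return inliers

matches = [((10, 10), (310, 12)), ((50, 40), (350, 41)),
           ((80, 20), (250, 90))]     # last pair is a mismatch
print(count_inliers(matches, 300, 1))  # → 2
```

Handheld shots rotate and tilt between frames, so fewer matches fit any single translation, which is why the hand-held inlier counts drop so sharply.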
ROC comparison of our code vs. SIFT
The graph plots the false positive rate (x-axis) against the true positive rate (y-axis). False positive rate = (# matches the matcher wrongly reported) / (# features that truly have no match). True positive rate = (# matches the matcher correctly found) / (# features that truly have a match). The four curves show the results of the different approaches.
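The two rates defined above can be computed as sketched below. The scores and labels are made-up data; in practice each threshold on the match distance yields one (FPR, TPR) point, and sweeping the threshold traces out the ROC curve:

```python
# Sketch: one ROC point from match scores (hypothetical data; lower
# score = better match, a match is reported when score < threshold).

def roc_point(scores, labels, threshold):
    """scores: match distances; labels: True if the feature
    truly has a match. Returns (false positive rate,
    true positive rate) at the given threshold."""
    tp = sum(1 for s, l in zip(scores, labels) if s < threshold and l)
    fp = sum(1 for s, l in zip(scores, labels) if s < threshold and not l)
    pos = sum(labels)
    neg = len(labels) - pos
    tpr = tp / pos if pos else 0.0
    fpr = fp / neg if neg else 0.0
    return fpr, tpr

scores = [0.2, 0.4, 0.6, 0.8]
labels = [True, True, False, False]
print(roc_point(scores, labels, 0.5))  # → (0.0, 1.0)
```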
Extra Credit
We tried...