Due on April 25 (extended from April 16)

  • In this HW, you will leverage the HOG function you created in HW4 to build a simple feature descriptor. You will then align two images using your feature descriptor.

    • You can simply use the Harris corner detector (the corner function in Matlab) to locate feature positions.

    • Please review the local feature slides to generate descriptors for all feature points.

    • After generating descriptors for both images, match the descriptors from one image to those in the other, and discard the poor matches. Again, please refer to the local feature slides.

    • If you are stuck generating descriptors, you may use existing descriptors (SIFT, SURF, etc.) from OpenCV or Matlab, but you will then only receive 80% of the points.
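
The match-and-filter step above can be sketched in numpy as nearest-neighbour matching with Lowe's ratio test. The function name and the 0.8 ratio threshold are illustrative assumptions, not requirements of the assignment:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 (N1, D) to its nearest neighbour in
    desc2 (N2, D), keeping only matches that pass the ratio test.
    The 0.8 threshold is an assumed, commonly used value."""
    matches = []
    for i, d in enumerate(desc1):
        # Euclidean distance from d to every descriptor in the second image
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # Reject ambiguous matches: the best distance must be clearly
        # smaller than the second-best distance
        if best < ratio * second:
            matches.append((i, order[0]))
    return matches
```

A match survives only when its nearest neighbour is much closer than the runner-up, which is one simple way to "get rid of the poor matches."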

  • You are free to use Matlab or opencv-python, but you should at least take a look at this Matlab package, as it gives more hints about what you are supposed to do. The package will also display your match locations and fit an affine transform to your matching result. You will also get a fused image similar to the following.

[Figure: aligned result with affine transform fitting]
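
For reference, a least-squares affine fit like the one the package performs can be sketched in numpy. The function names and array shapes here are assumptions for illustration:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N, 2) points to dst (N, 2).
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1]^T.  Needs N >= 3
    non-collinear correspondences."""
    n = src.shape[0]
    # Homogeneous coordinates: each row is [x, y, 1]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3)
    # Solve X @ C ~= dst in the least-squares sense
    C, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    return C.T                                   # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix A to pts (N, 2)."""
    return pts @ A[:, :2].T + A[:, 2]
```

With more than three matches the system is overdetermined, so the least-squares solution averages out small localization errors in the matched points.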
  • Please submit screenshots of your match results and your source code as usual. If you prefer not to use the Matlab package, please make sure you take a screenshot with the matched points as shown below.

[Figure: matched points]
  • Extra credit (20%). Test your code on your own input images. Submit a screenshot of the matching points and the blended image for extra credit.

  • Extra credit (50%). Refine your alignment result using RANSAC (refer to the alignment slides).

  • Extra credit (50%). Fit your matching result with a projective transform rather than an affine transform (see the alignment slides).
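
A projective (homography) fit can be sketched with the Direct Linear Transform in numpy. The function names are illustrative assumptions:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 projective transform (homography)
    mapping src (N, 2) to dst (N, 2).  Needs N >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows to the linear system
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # The solution is the right singular vector for the smallest
    # singular value, i.e. the (approximate) null space of A
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

def apply_homography(H, pts):
    """Apply a 3x3 homography H to pts (N, 2), dividing out the scale."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Unlike the affine case, a homography has 8 degrees of freedom and can model perspective distortion, which is why combining it with RANSAC gives the cleaner alignment described below.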

If you implement both projective transform fitting and RANSAC, your alignment should improve significantly. You will get a result similar to the following.

[Figure: aligned result with both RANSAC and projective transform fitting]