
I have 2 binary images (black/white). They have the same content but might differ slightly in rotation, translation, and scale of the content (text).

How do I get a simple measure for the similarity of 2 images in OpenCV?

The operation needs to be as fast as possible (live).

Examples:

A:

[example image A]

B:

[example image B]

  • A few example images might help. Commented Oct 13, 2016 at 9:18
  • Try using SIFT or SURF to find the same points in both images. Then compute an affine transform that adjusts the images to one another and compute the sum of the difference magnitudes. Commented Oct 13, 2016 at 9:19
  • @meetaig sounds anything but simple to me ;) Commented Oct 13, 2016 at 9:27
  • Yes, but most of the functionality is already implemented in OpenCV. If you have knowledge about the displacement of the images relative to each other you can use that. And once you have a set of corresponding coordinates you can use getAffineTransform to directly obtain the transform you need ;) Commented Oct 13, 2016 at 9:30
  • Get the minAreaRect of both texts, warp one onto the other, compute the absdiff, and sum up the differences. That would be a measure of dissimilarity, and it would be fairly fast (see the sketch below). Commented Oct 13, 2016 at 9:33
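
A minimal Python/OpenCV sketch of this last suggestion (my own illustration, assuming white text on a black background; the correspondence of box corners between the two rectangles may need extra care in practice):

    import cv2
    import numpy as np

    def dissimilarity(img_a, img_b):
        """Align B's text box onto A's, then sum the absolute differences."""
        # Minimum-area boxes around the foreground (assumes white text on black)
        rect_a = cv2.minAreaRect(cv2.findNonZero(img_a))
        rect_b = cv2.minAreaRect(cv2.findNonZero(img_b))
        box_a = cv2.boxPoints(rect_a).astype(np.float32)
        box_b = cv2.boxPoints(rect_b).astype(np.float32)

        # Affine transform mapping three of B's corners onto A's
        m = cv2.getAffineTransform(box_b[:3], box_a[:3])
        h, w = img_a.shape[:2]
        warped_b = cv2.warpAffine(img_b, m, (w, h))

        # Sum of absolute differences: 0 means identical, larger means less similar
        return int(cv2.absdiff(img_a, warped_b).sum())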

4 Answers


You can use the LogPolarFFT registration algorithm to register the images, then compare them with a similarity check such as PSNR or SSIM.
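
The similarity-check part is simple once the images are registered. A minimal sketch of PSNR computed from the MSE (my own illustration; recent OpenCV builds also expose cv2.PSNR, and SSIM is available as skimage.metrics.structural_similarity):

    import numpy as np

    def psnr(img_a, img_b, max_val=255.0):
        """Peak signal-to-noise ratio between two already-registered images."""
        a = img_a.astype(np.float64)
        b = img_b.astype(np.float64)
        mse = np.mean((a - b) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(max_val ** 2 / mse)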




You need to remove the scale and rotation.

To remove the rotation, run PCA on the foreground pixels, which gives you the primary axis, and rotate both images so that the primary axis lies along x (a shear-based rotation works). To remove the scale, either take a bounding box or, if there is a bit of noise, compute the foreground area and scale one image until the areas match. Just sample pixel centres when scaling (a bit icky, but it is hard to scale a binary image nicely).
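
A rough sketch of the PCA step in Python/NumPy (my own illustration; it rotates about the centroid with cv2.warpAffine rather than a shear rotate):

    import cv2
    import numpy as np

    def deskew_by_pca(img):
        """Rotate a binary image so its principal axis lies along x."""
        ys, xs = np.nonzero(img)                    # foreground pixel coordinates
        pts = np.column_stack((xs, ys)).astype(np.float64)
        mean = pts.mean(axis=0)
        cov = np.cov((pts - mean).T)                # 2x2 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        principal = eigvecs[:, np.argmax(eigvals)]  # axis of largest variance
        angle = np.degrees(np.arctan2(principal[1], principal[0]))

        # Rotate about the centroid so the principal axis becomes horizontal
        center = (float(mean[0]), float(mean[1]))
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        h, w = img.shape[:2]
        return cv2.warpAffine(img, m, (w, h), flags=cv2.INTER_NEAREST)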

I should add support for this to the binary image library. You might find the material there helpful:

http://malcolmmclean.github.io/binaryimagelibrary/



One way I can think of is to find keypoints using SIFT/SURF, calculate the homography between the two images, and warp one onto the other according to that homography (so as to fix rotation and translation). Then you can simply calculate the similarity in terms of SAD (sum of absolute differences).
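
A sketch of that pipeline in Python, using ORB instead of SIFT/SURF so it works with a stock OpenCV build (my own illustration; keypoint detection on clean binary text can be sparse, so it may need tuning):

    import cv2
    import numpy as np

    def sad_after_registration(img_a, img_b):
        """Warp img_b onto img_a via a feature-based homography, then return the SAD."""
        orb = cv2.ORB_create(1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:50]

        src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        h, w = img_a.shape[:2]
        warped_b = cv2.warpPerspective(img_b, homography, (w, h))

        # Sum of absolute differences: lower means more similar
        return int(cv2.absdiff(img_a, warped_b).sum())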



You need a rotation- and scale-invariant approach, and a threshold-based segmentation of the text area before feature extraction. I suggest the following steps:

1/ Use a binary threshold and a scan-line algorithm to segment the specific text-line area.

2/ After segmentation, correct the rotation using a warpAffine transformation. See this example.

3/ On the adjusted image, extract SIFT, BRISK, or SURF features.

4/ Use a template-matching approach to match the images and produce a similarity or distance score (a sketch follows the link below).

See the following link for more detail:

scale and rotation Template matching
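
A minimal sketch of the scoring in step 4 with cv2.matchTemplate (my own illustration; it assumes the images have already been deskewed and scaled in steps 1-3, and that the template is no larger than the image):

    import cv2

    def template_similarity(img, template):
        """Normalized correlation score in [-1, 1]; closer to 1 means more similar."""
        result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_val, max_loc  # best score and where the template matched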
