
I am new to OpenCV. I want to align two images, src and dst. I am using cv::estimateRigidTransform() to calculate the transformation matrix, and then cv::warpAffine() to transform dst to src. When I compare the new transformed image with the src image it looks almost the same (transformed), but when I take the absolute difference of the transformed image and the src image, there is a lot of difference. What should I do, given that my dst image also has some rotation and translation? Here is my code:

cv::Mat transformMat = cv::estimateRigidTransform(src, dst, true);
cv::Mat output;
cv::Size dsize = leftImageMat.size(); // This specifies the output image size -- change needed
cv::warpAffine(src, output, transformMat, dsize);

Src image: [screenshot]

Destination image: [screenshot]

Output image: [screenshot]

Absolute difference image: [screenshot]

Thanks

  • First of all, what exactly do you want to achieve? Of course, when doing some general transform, the abs-diff will be non-zero. Even a rotation change of 1 degree will cause big changes because of pixel interpolation. Commented May 31, 2013 at 9:37
  • Hello jnovacho, I want to rectify the images using OpenCV. Commented May 31, 2013 at 10:55
  • I still don't see the problem. Your code seems fine to me. Can you provide some screenshots: source and destination image, and the output image too? Commented May 31, 2013 at 11:31
  • Added. Another question: how can I set the rotation center before applying warpAffine? Commented May 31, 2013 at 11:47

1 Answer


You have some misconceptions about the process.

The method cv::estimateRigidTransform takes as input two sets of corresponding points. It then solves a set of equations to find the transformation matrix. The resulting transformation maps the src points onto the dst points (exactly, or as closely as possible when an exact match does not exist, for example with float coordinates).
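To make the "set of equations" concrete, here is a minimal NumPy sketch (not OpenCV's actual implementation, and using hand-made point pairs) that recovers a rigid transform from four correspondences by least squares:

```python
import numpy as np

# Corresponding point pairs: dst is src rotated by 10 degrees and shifted by (15, -5).
src = np.array([[10., 10.], [200., 20.], [50., 150.], [180., 170.]])
th = np.deg2rad(10)
R = np.array([[np.cos(th), np.sin(th)],
              [-np.sin(th), np.cos(th)]])
t = np.array([15., -5.])
dst = src @ R.T + t

# Each pair (x, y) -> (x', y') gives two linear equations in the unknowns (a, b, tx, ty):
#   x' =  a*x + b*y + tx
#   y' = -b*x + a*y + ty
n = len(src)
A = np.zeros((2 * n, 4))
A[0::2] = np.column_stack([src[:, 0], src[:, 1], np.ones(n), np.zeros(n)])
A[1::2] = np.column_stack([src[:, 1], -src[:, 0], np.zeros(n), np.ones(n)])
a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]

# Assemble the 2x3 matrix in the same layout warpAffine expects.
M = np.array([[a, b, tx], [-b, a, ty]])
print(M)
```

Since the points here were generated from a known rotation and translation, the recovered M matches them exactly; with noisy, automatically matched points the solution is only a least-squares fit.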

If you apply estimateRigidTransform to two images, OpenCV first finds matching pairs of points using some internal method (see the OpenCV docs).

cv::warpAffine then transforms the src image according to the given transformation matrix. But (almost) any transformation is a lossy operation: the algorithm has to estimate some data, because the exact values are not available on the destination pixel grid. This process is called interpolation: using known information, you calculate the unknown value. Some info regarding image scaling can be found on wiki. The same rules apply to other transformations: rotation, skew, perspective, and so on. Obviously this doesn't apply to whole-pixel translation.

Given your test images, I would guess that OpenCV takes the lampshade as reference. From the difference it is clear that the lampshade is transformed best. By default, OpenCV uses linear interpolation for warping, as it is the fastest method. But you can set a more advanced method for better results; again, consult the OpenCV docs.

Conclusion: the result you got is pretty good, if you bear in mind that it is the result of an automated process. If you want better results, you will have to find another method for selecting corresponding points, or use a better interpolation method. Either way, after the transform the diff will not be 0. It is virtually impossible to achieve that, because a bitmap is a discrete grid of pixels, so there will always be some gaps that need to be estimated.


4 Comments

Hello jnovacho, can you tell me how to set the rotation center before applying warpAffine? It is actually rotating from the top-left corner, while in stereo images most of the time the rotation is about the center. Thanks
Hi, there's a catch. All transformations are with respect to the origin [0,0]. So the trick here is to translate the desired center to the origin, do the rotation, and translate back. Some info about matrix transformations is here willamette.edu/~gorr/classes/GeneralGraphics/Transforms/… But I don't think this would work in OpenCV, as you would lose the data in negative coordinates. Plus I don't think you really need this. The functions deal with this problem internally; the image is transformed so that corresponding points match.
Hello, can you tell me what technique is implemented behind the cv::estimateRigidTransform() function?
Sorry, I have no idea. :( But you can always dig into the source code and find out. :) I would guess that they use cv::goodFeaturesToTrack on both images, and then run some calculations to get the transformation matrix.
