Hi,

 

Thanks for your feedback. I've read (in "A Structure and Motion Toolkit in Matlab - Interactive Adventures in S. and M." by P. H. S. Torr) that whether w is set to 1 or to any other value only matters if the algorithm used to compute F is not invariant to the value of w.

 

I have one more question, though, about the same example (compute_FMatrix_example.cxx from oxl\mvl), specifically the code that computes the error for F:

 

    double d = 0;
    // accumulate, for each match, the squared distance in image 1 from
    // points1[i] to the epipolar line induced by points2[i]
    for (unsigned int i = 0; i < points1.size(); ++i)
      d += f.image1_epipolar_distance_squared(points1[i], points2[i]);
    vcl_cout << "Error = " << d / points1.size() << vcl_endl;

 

Isn't that considering only one side of the error? Shouldn't we combine the errors from both f.image1_epipolar_distance_squared() and f.image2_epipolar_distance_squared()? As it stands, for each match we take point2, find its epipolar line in image 1, and measure how far point1 (from the same matching pair) is from that line. What about the reverse: given point1 in image 1, compute the epipolar line in image 2 and measure the distance of point2 (from the same pair) to that line? I tried that and got almost the same final error with RANSAC and MLESAC, but it differed a lot with the linear method. Thanks in advance.
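
Concretely, I mean something like this (just a sketch, reusing f, points1 and points2 from the example above, and assuming image2_epipolar_distance_squared takes its arguments in the same (point1, point2) order):

    double d = 0;
    for (unsigned int i = 0; i < points1.size(); ++i)
    {
      // squared distance in image 1: points1[i] vs. the epipolar line of points2[i]
      d += f.image1_epipolar_distance_squared(points1[i], points2[i]);
      // squared distance in image 2: points2[i] vs. the epipolar line of points1[i]
      d += f.image2_epipolar_distance_squared(points1[i], points2[i]);
    }
    vcl_cout << "Symmetric error = " << d / (2.0 * points1.size()) << vcl_endl;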

 

 

Regards,

Angel

 


From: Oli Cooper [mailto:Oli.Cooper@bristol.ac.uk]
Sent: Wednesday, May 11, 2005 1:14 PM
To: 'Angel Todorov'; vxl-users@lists.sourceforge.net
Subject: RE: [Vxl-users] Fundamental matrix & normalized coordinates

 

You seem to be a little confused as to the use of homogeneous coordinates and the difference between the Fundamental and Essential matrices.  The Hartley and Zisserman book "Multiple View Geometry" should help.  It is fine to compute F by setting w=1.0.
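
In other words, an image point (x, y) just becomes (x, y, 1). With mvl's HomgPoint2D that is something like this (from memory, so do check the constructor):

    HomgPoint2D p(x, y, 1.0);  // homogeneous image point with w = 1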

 

"Normalised coordinates" in the sense of the Essential Matrix means coordinates expressed in the camera coordinate system, as opposed to image coordinates.

 

Just to confuse things further, you should ALWAYS normalise the coordinates when computing F, by translating them so the centroid of the points is at the origin and scaling them so their mean distance from the origin is sqrt(2).
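
Something along these lines, as a rough plain-C++ sketch (a hypothetical helper, not the actual mvl routine):

    #include <cmath>
    #include <vector>

    struct Point2D { double x, y; };

    // Hartley's isotropic normalisation: translate so the centroid is at
    // the origin, then scale so the mean distance from the origin is sqrt(2).
    void normalise_points(std::vector<Point2D>& pts)
    {
      if (pts.empty()) return;

      double cx = 0, cy = 0;
      for (unsigned int i = 0; i < pts.size(); ++i) { cx += pts[i].x; cy += pts[i].y; }
      cx /= pts.size(); cy /= pts.size();

      double mean_dist = 0;
      for (unsigned int i = 0; i < pts.size(); ++i)
        mean_dist += std::sqrt((pts[i].x - cx)*(pts[i].x - cx) + (pts[i].y - cy)*(pts[i].y - cy));
      mean_dist /= pts.size();

      double s = std::sqrt(2.0) / mean_dist;  // scale so the mean distance becomes sqrt(2)
      for (unsigned int i = 0; i < pts.size(); ++i)
      {
        pts[i].x = s * (pts[i].x - cx);
        pts[i].y = s * (pts[i].y - cy);
      }
    }

The same translation and scale have to be undone afterwards (or folded back into F), of course.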

 

There are methods in the mvl library for normalising coordinates and extracting camera matrices from a computed F.

 

I'm sure the MVG book will explain better than me...

 

Oli.

 


From: vxl-users-admin@lists.sourceforge.net [mailto:vxl-users-admin@lists.sourceforge.net] On Behalf Of Angel Todorov
Sent: 10 May 2005 08:11
To: vxl-users@lists.sourceforge.net
Subject: [Vxl-users] Fundamental matrix & normalized coordinates

Hi,

 

Suppose we have two uncalibrated cameras and we are processing some image sequence. In the example contrib\oxl\mvl\examples\compute_FMatrix_example.cxx, the point coordinates are normalized with w=1.0, i.e. we are basically computing the essential matrix. My question is: given that we have no knowledge of the homogeneous coordinates, is it still possible to obtain a meaningful error by computing F (with w=1.0 set), and then to find the camera matrices by some other method (such as bundle adjustment)?

The purpose is to refine matches by first fitting them to epipolar lines, and I was wondering whether this is a sensible approach, given that the matched points' coordinates are normalized and that we know nothing about the camera calibration matrices either (we don't do explicit calibration). Some publications also assume unknown cameras, but it is not clear to me whether the coordinates are normalized when F or T is first computed. (Otherwise, as far as I know, if the calibration matrices are known we can obtain the essential matrix as E = K'^T * F * K.) Thank you very much for your help.
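
With vnl that relation would look something like this (a sketch; K and Kprime here are just assumed to hold the two 3x3 calibration matrices):

    #include <vnl/vnl_matrix.h>

    // Essential matrix from the fundamental matrix and the two
    // calibration matrices: E = K'^T * F * K.
    vnl_matrix<double> essential_from_fundamental(vnl_matrix<double> const& F,
                                                  vnl_matrix<double> const& K,       // camera 1 intrinsics
                                                  vnl_matrix<double> const& Kprime)  // camera 2 intrinsics
    {
      return Kprime.transpose() * F * K;
    }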

 

 

Regards,

Angel