Hi,

Suppose we have two uncalibrated cameras and we are processing some image sequence. In the example contrib\oxl\mvl\examples\compute_FMatrix_example.cxx the point coordinates are normalized with w = 1.0, i.e. it seems to me that we are essentially computing the essential matrix. My question is: given that we have no knowledge of the true homogeneous coordinates, is it still possible to obtain a meaningful error by computing F (with w = 1.0 set), and then to find the camera matrices by some other method (such as bundle adjustment)?
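
To make the setup concrete, here is roughly the computation I have in mind, written as a self-contained sketch of the standard normalized 8-point algorithm. I am using Eigen only to keep the example short; this is my own sketch, not the mvl code, and the function names are mine.

// Normalized 8-point estimate of F (my own sketch, not the mvl code).
#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Hartley normalization: translate the points to their centroid and scale
// them so the mean distance from the origin is sqrt(2). The points stay in
// homogeneous form with w = 1, as in the example. Returns the 3x3 transform.
static Eigen::Matrix3d normalize(const std::vector<Eigen::Vector2d>& pts,
                                 std::vector<Eigen::Vector3d>& out)
{
  Eigen::Vector2d c = Eigen::Vector2d::Zero();
  for (const auto& p : pts) c += p;
  c /= double(pts.size());
  double d = 0.0;
  for (const auto& p : pts) d += (p - c).norm();
  d /= double(pts.size());
  const double s = std::sqrt(2.0) / d;
  Eigen::Matrix3d T;
  T << s, 0, -s * c.x(),
       0, s, -s * c.y(),
       0, 0, 1;
  out.clear();
  for (const auto& p : pts)
    out.push_back(T * Eigen::Vector3d(p.x(), p.y(), 1.0));  // w = 1
  return T;
}

// Linear estimate of F from n >= 8 correspondences x1[i] <-> x2[i].
Eigen::Matrix3d compute_F(const std::vector<Eigen::Vector2d>& x1,
                          const std::vector<Eigen::Vector2d>& x2)
{
  std::vector<Eigen::Vector3d> p1, p2;
  const Eigen::Matrix3d T1 = normalize(x1, p1);
  const Eigen::Matrix3d T2 = normalize(x2, p2);

  // Each correspondence x2^T F x1 = 0 gives one row of A in A f = 0,
  // where f = vec(F) in row-major order.
  Eigen::MatrixXd A(p1.size(), 9);
  for (std::size_t i = 0; i < p1.size(); ++i)
    A.row(i) << p2[i].x() * p1[i].transpose(),
                p2[i].y() * p1[i].transpose(),
                p1[i].transpose();

  // f is the right singular vector for the smallest singular value of A.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
  const Eigen::VectorXd f = svd.matrixV().col(8);
  Eigen::Matrix3d F;
  F << f(0), f(1), f(2),
       f(3), f(4), f(5),
       f(6), f(7), f(8);

  // Enforce rank 2 by zeroing the smallest singular value of F.
  Eigen::JacobiSVD<Eigen::Matrix3d> svdF(F, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Vector3d s = svdF.singularValues();
  s(2) = 0.0;
  F = svdF.matrixU() * s.asDiagonal() * svdF.matrixV().transpose();

  return T2.transpose() * F * T1;  // undo the normalization
}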

The purpose is to refine matches by first fitting them to epipolar lines, and I was wondering whether this is a sensible approach given that the matched points' coordinates are normalized and that we know nothing about the camera calibration matrices either (we do not perform explicit calibration). In some publications unknown cameras are also assumed, but it is not clear to me whether the coordinates are normalized when F or T is first computed. (Otherwise, as far as I know, we can obtain the essential matrix when the calibration matrices are known: E = K'^T * F * K.) Thank you very much for your help.
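
For what it is worth, this is the residual I would use for the refinement step, together with the calibrated relation above (again my own Eigen sketch; K1 and K2 stand for the calibration matrices, which in my case are unknown):

#include <Eigen/Dense>
#include <cmath>

// Distance of the matched point x2 to the epipolar line l = F * x1.
// I would threshold or minimize this residual to refine the matches.
double epipolar_distance(const Eigen::Matrix3d& F,
                         const Eigen::Vector2d& x1,
                         const Eigen::Vector2d& x2)
{
  const Eigen::Vector3d l = F * Eigen::Vector3d(x1.x(), x1.y(), 1.0);  // line a*x + b*y + c = 0
  return std::abs(l.dot(Eigen::Vector3d(x2.x(), x2.y(), 1.0)))
         / std::hypot(l(0), l(1));
}

// If the calibration matrices were known, the essential matrix would follow
// as E = K2^T * F * K1 (hypothetical here, since our cameras are uncalibrated).
Eigen::Matrix3d essential_from_F(const Eigen::Matrix3d& F,
                                 const Eigen::Matrix3d& K1,
                                 const Eigen::Matrix3d& K2)
{
  return K2.transpose() * F * K1;
}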

Regards,

Angel