cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
    cv::Mat m33(3, 3, CV_32FC1);
    cv::Rodrigues(Rvec, m33);
    cv::Mat m44 = cv::Mat::eye(4, 4, CV_32FC1);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            m44.at<float>(i, j) = m33.at<float>(i, j);
    // now, add translation information
    for (int i = 0; i < 3; i++)
        m44.at<float>(i, 3) = Tvec.at<float>(0, i);
    // invert the matrix
    m44.inv();
    return cv::Point3f(m44.at<float>(0, 0), m44.at<float>(0, 1), m44.at<float>(0, 2));
}
The input parameters Rvec and Tvec come from recognised markers. Marker::Tvec is a one-column matrix, but the line m44.at<float>(i, 3) = Tvec.at<float>(0, i); indexes it as a one-row matrix.
Another problem: even after correcting the indexing above, the return value is still wrong and I can't get the correct camera position. I don't understand what algorithm this function is meant to implement, so I modified it in my own way:
cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
    cv::Mat m33(3, 3, CV_32FC1);
    cv::Rodrigues(Rvec, m33);
    m33 = -m33 * Tvec;
    return cv::Point3f(m33.at<float>(0, 0), m33.at<float>(1, 0), m33.at<float>(2, 0));
}
That works fine for me.
// Detection of markers in the image passed
TheMarkers = MDetector.detect(TheInputImage, TheCameraParameters, TheMarkerSize);
if (TheMarkers.size() > 0) {
    Marker marker = TheMarkers[0];
    Point3f cameraLocation = TheCameraParameters.getCameraLocation(marker.Rvec, marker.Tvec);
}
Hi,
I think the correct code should be

cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec)
{
    cv::Mat m33(3, 3, CV_32FC1);
    cv::Rodrigues(Rvec, m33);
    cv::Mat m44 = cv::Mat::eye(4, 4, CV_32FC1);
    m33.copyTo(m44(cv::Rect(0, 0, 3, 3)));          // rotation block
    for (int i = 0; i < 3; i++)
        m44.at<float>(i, 3) = Tvec.at<float>(i, 0); // Tvec is a 3x1 column
    m44 = m44.inv();                                // inv() returns the result
    return cv::Point3f(m44.at<float>(0, 3), m44.at<float>(1, 3), m44.at<float>(2, 3));
}
On 12/03/17 02:27, Wang Zhanglong wrote:
Related
Bugs: #25
Hello All,
First of all, thanks for sharing this amazing library; I'm finding it very useful for my research. I think both versions of the code are buggy. The code proposed by Wang is almost correct (the rotation matrix should be transposed before multiplying by the translation vector). I'm posting the correct code (please notice the .t() call when calculating the camera position), so could you please integrate it into the stable version?
I tested the code with a camera and a metric reference and made sure that the reported position is correct. You can double-check it if you would like to. Thanks.
Sources of the algorithm:
https://math.stackexchange.com/questions/82602/how-to-find-camera-position-and-rotation-from-a-4x4-matrix
Last edit: Redouane Kachach 2018-10-08
Hi all,
I think the code in the library is correct; I've checked it and it seems fine. Please consider that the pose is given wrt the marker center.
Hello again,
The position reported by the current code doesn't make any sense (even wrt the marker center). As I said, I checked it using a physical metric reference. I fixed the code proposed by Wang based on the following answer:
https://math.stackexchange.com/questions/82602/how-to-find-camera-position-and-rotation-from-a-4x4-matrix
So basically: C = -R_t * T
where C is the camera position, R_t is the transpose of the rotation matrix, and T is the translation vector.
With that change I started getting correct values. So I'm not sure whether the algorithm itself is wrong or the implementation has a bug, but the values returned by the current code are definitely not correct (try it with a calibrated camera and a physical metric reference, and print out the camera position wrt the marker frame).
Last edit: Redouane Kachach 2018-10-09