
#25 CameraParameters::getCameraLocation is wrong

Group: v1.0 (example)
Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2018-10-09
Created: 2017-03-12 by Wang Zhanglong
Private: No
URL: https://sourceforge.net/p/aruco/bugs/25/
cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
    cv::Mat m33(3, 3, CV_32FC1);
    cv::Rodrigues(Rvec, m33);

    cv::Mat m44 = cv::Mat::eye(4, 4, CV_32FC1);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            m44.at< float >(i, j) = m33.at< float >(i, j);

    // now, add translation information
    for (int i = 0; i < 3; i++)
        m44.at< float >(i, 3) = Tvec.at< float >(0, i);
    // invert the matrix
    m44.inv();
    return cv::Point3f(m44.at< float >(0, 0), m44.at< float >(0, 1), m44.at< float >(0, 2));
}

The input parameters Rvec and Tvec come from recognised markers. Marker::Tvec is a one-column (3x1) matrix, but m44.at< float >(i, 3) = Tvec.at< float >(0, i); indexes it as if it were a one-row matrix.

Another problem: even after I corrected the issue above, the return value is still wrong and I can't get the correct position of the camera. I do not understand what algorithm this function uses, so I modified it in my own way:

cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
    cv::Mat m33(3, 3, CV_32FC1);
    cv::Rodrigues(Rvec, m33);
    m33=-m33*Tvec;
    return cv::Point3f(m33.at< float >(0, 0), m33.at< float >(1, 0), m33.at< float >(2, 0));
}

That works fine for me.

// Detection of markers in the image passed
TheMarkers = MDetector.detect(TheInputImage, TheCameraParameters, TheMarkerSize);
if (TheMarkers.size() > 0) {
    Marker marker = TheMarkers[0];
    Point3f cameraLocation = TheCameraParameters.getCameraLocation(marker.Rvec, marker.Tvec);
}

Related

Bugs: #25

Discussion


  • Rafael Munoz-Salinas

    Hi,

I think the correct code should be:

    cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
        cv::Mat m33(3, 3, CV_32FC1);
        cv::Rodrigues(Rvec, m33);

        cv::Mat m44 = cv::Mat::eye(4, 4, CV_32FC1);
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                m44.at<float>(i, j) = m33.at<float>(i, j);

        // now, add translation information
        for (int i = 0; i < 3; i++)
            m44.at<float>(i, 3) = Tvec.ptr<float>(0)[i];
        // invert the matrix
        m44.inv();
        return cv::Point3f(m44.at<float>(0, 3), m44.at<float>(1, 3), m44.at<float>(2, 3));
    }


  • Redouane Kachach

    Hello All,

First of all, thanks for sharing this amazing library; I'm finding it very useful for my research. I think both versions of the code are buggy. The code proposed by Wang is almost correct: the rotation matrix should be transposed before multiplying by the translation vector. I'm posting the corrected code below (please notice the .t() call when calculating the camera position), so could you integrate it into the stable version?

I tested the code with a camera and a metric reference and made sure that the position reported is correct. You can double-check if you would like to. Thanks.

    Sources of the algorithm:

    https://math.stackexchange.com/questions/82602/how-to-find-camera-position-and-rotation-from-a-4x4-matrix

    cv::Point3f myGetCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
      cv::Mat m33(3, 3, CV_32FC1);
      cv::Rodrigues(Rvec, m33);
      m33 = -m33.t() * Tvec;
      return cv::Point3f(m33.at< float >(0, 0), m33.at< float >(1, 0), m33.at< float >(2, 0));
    }
    

    Last edit: Redouane Kachach 2018-10-08
  • Rafael Munoz-Salinas

    Hi all,
I think the code in the library is correct; I've checked it and it seems fine. Please consider that the pose is given w.r.t. the marker center.

  • Redouane Kachach

    Hello again,

The position reported by the current code doesn't make any sense (even w.r.t. the marker center). As I said, I checked it using a physical metric reference. I fixed the code proposed by Wang based on the following answer:

    https://math.stackexchange.com/questions/82602/how-to-find-camera-position-and-rotation-from-a-4x4-matrix

So basically: C = -R_t * T

where C is the camera position, R_t is the transpose of the rotation matrix, and T is the translation vector.

With that I started getting correct values. So I'm not sure whether the algorithm is incorrect or the implementation has a bug, but the values returned by the current code are definitely not correct (try testing it with a calibrated camera and a physical metric reference, and print out the camera position w.r.t. the marker).


    Last edit: Redouane Kachach 2018-10-09

