Hello VXL users,

I am looking to construct a perspective camera from GPS/IMU data associated with a collection of images. What is the best way to go about this so that the resulting cameras can be used with the BOXM2 library to generate a 3D model? It appears to me that the coordinate systems differ, i.e. the roll-pitch-yaw XYZ convention of the GPS/IMU data versus the orientation of the XYZ axes used in the computer vision community. One thought I had was to construct a 4-parameter quaternion from the three angles and use that as input for creating a VPGL perspective camera, since that can already be done from the output of the structure-from-motion package VisualSFM. Would this be the best avenue to take, or is there something else in the VPGL codebase better suited to constructing a perspective camera from GPS/IMU information?
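
To make the question concrete, here is a rough, untested sketch of the avenue I am describing. The vpgl_lvcs usage, the Euler-angle convention of vnl_quaternion, and especially the identity placeholder for the body-to-camera axis change are my assumptions rather than working code:

// Rough sketch of the GPS/IMU -> vpgl_perspective_camera idea -- untested.
#include <vnl/vnl_quaternion.h>
#include <vnl/vnl_matrix_fixed.h>
#include <vgl/vgl_point_2d.h>
#include <vgl/vgl_point_3d.h>
#include <vgl/algo/vgl_rotation_3d.h>
#include <vpgl/vpgl_lvcs.h>
#include <vpgl/vpgl_calibration_matrix.h>
#include <vpgl/vpgl_perspective_camera.h>

vpgl_perspective_camera<double>
camera_from_gps_imu(double lat_deg, double lon_deg, double elev_m,   // GPS
                    double roll, double pitch, double yaw,           // IMU, radians
                    double focal_length_pix,
                    vgl_point_2d<double> const& principal_point,
                    vpgl_lvcs& lvcs)  // shared local origin for the whole collection
{
  // GPS position -> local metric coordinates (so all cameras share one scene frame).
  double lx, ly, lz;
  lvcs.global_to_local(lon_deg, lat_deg, elev_m, vpgl_lvcs::wgs84, lx, ly, lz);
  vgl_point_3d<double> center(lx, ly, lz);

  // roll-pitch-yaw -> quaternion; I am assuming vnl_quaternion's
  // (theta_x, theta_y, theta_z) Euler-angle constructor matches the IMU convention.
  vnl_quaternion<double> q(roll, pitch, yaw);
  vgl_rotation_3d<double> body_rot(q);

  // Placeholder for the axis-convention change between the IMU body frame
  // (e.g. NED) and the vision camera frame (X right, Y down, Z along the
  // optical axis).  Identity is NOT correct in general -- this is the part
  // I am unsure about.
  vnl_matrix_fixed<double,3,3> body_to_cam;
  body_to_cam.set_identity();
  vgl_rotation_3d<double> axis_fix(body_to_cam);

  // vpgl expects the world-to-camera rotation; depending on whether the IMU
  // reports body-to-world or world-to-body, .inverse() may be needed here.
  vgl_rotation_3d<double> R = axis_fix * body_rot;

  vpgl_calibration_matrix<double> K(focal_length_pix, principal_point);
  return vpgl_perspective_camera<double>(K, center, R);
}

The single vpgl_lvcs passed in is just so every camera in the collection lands in one local metric frame for BOXM2. If there is an existing VPGL utility that already handles the body-to-camera axis change, that is really what I am after.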

Cheers,

Joe McGlinchy