I have some questions:
I imagine the transformation process is:
suppose we want to register two images, Img1 and Img2, which share some
overlapping region. T(params, Xk) is the function that transforms the
image position Xk in Img1 or Img2 to its corresponding position in the
panorama, using parameters such as roll/pitch/yaw and fov.
Then the transformation function can be decomposed into several
concatenated functions (two functions, for example):
T(params, Xk) = T2(T1(params, Xk))
where T1(params, Xk) transforms the image position Xk into a 3D ray
direction (theta, phi), and T2() converts the 3D ray direction into
the panorama coordinate space.
Thus, maybe every stack frame in the transformation stack is a
transformation function like T1 or T2. Is that right?
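To make my understanding concrete, here is a sketch of that decomposition (this assumes a rectilinear source image and an equirectangular panorama; the image size, rotation order roll-then-pitch-then-yaw, and axis conventions are my own assumptions, not necessarily what the library actually uses):

```python
import numpy as np

def t1(params, xk, width, height):
    """T1: map a pixel position Xk in the source image to a 3D ray direction.
    Assumes a pinhole lens; fov is the horizontal field of view in radians."""
    yaw, pitch, roll, fov = params
    # focal length in pixels derived from the horizontal field of view
    f = (width / 2.0) / np.tan(fov / 2.0)
    x = xk[0] - width / 2.0
    y = xk[1] - height / 2.0
    ray = np.array([x, y, f], dtype=float)
    ray /= np.linalg.norm(ray)
    # rotate the ray by the camera orientation (roll about z, pitch about x, yaw about y)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Ry @ Rx @ Rz @ ray

def t2(ray):
    """T2: convert a 3D ray direction to (theta, phi) panorama angles,
    i.e. longitude/latitude of an equirectangular panorama."""
    theta = np.arctan2(ray[0], ray[2])            # longitude
    phi = np.arcsin(np.clip(ray[1], -1.0, 1.0))   # latitude
    return np.array([theta, phi])

def transform(params, xk, width, height):
    # T(params, Xk) = T2(T1(params, Xk))
    return t2(t1(params, xk, width, height))
```

With zero roll/pitch/yaw, the image center maps to (theta, phi) = (0, 0), as expected.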
Given the initial roll/pitch/yaw and fov parameters for the two images,
we have 8 parameters for Img1 and Img2. We can then select 4 pairs of
control points and use a non-linear optimization method like
Levenberg-Marquardt to solve for the parameters.
Using T(params, Xk), we can convert control point c1 in Img1 and
c2 in Img2 to panorama coordinates p1 and p2. The objective of the
optimization is then to minimize the distance between p1 and p2, and
this is an iterative process.
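The optimization step I have in mind could be sketched with scipy's `least_squares` (the image size, the control-point coordinates, and the use of scipy are my own illustrative assumptions; I believe real stitchers also anchor one image, since a global rotation of both images leaves the residuals unchanged):

```python
import numpy as np
from scipy.optimize import least_squares

W, H = 640.0, 480.0  # assumed image size

def to_pano(p, pts):
    """Map pixel positions of one image to (theta, phi) panorama angles.
    p = (roll, pitch, yaw, fov); pts is an (N, 2) array of pixel coords."""
    roll, pitch, yaw, fov = p
    f = (W / 2.0) / np.tan(fov / 2.0)
    x = pts[:, 0] - W / 2.0
    y = pts[:, 1] - H / 2.0
    rays = np.stack([x, y, np.full_like(x, f)], axis=1)
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rays = rays @ (Ry @ Rx @ Rz).T
    theta = np.arctan2(rays[:, 0], rays[:, 2])
    phi = np.arcsin(np.clip(rays[:, 1], -1.0, 1.0))
    return np.stack([theta, phi], axis=1)

def residuals(params, c1, c2):
    """Distance between the panorama positions of matched control points.
    params = 8 values: (roll, pitch, yaw, fov) for Img1 followed by Img2."""
    p1 = to_pano(params[:4], c1)
    p2 = to_pano(params[4:], c2)
    return (p1 - p2).ravel()

# 4 control-point pairs (made-up coordinates for illustration):
# features near Img1's right edge reappear near Img2's left edge
c1 = np.array([[500.0, 100.0], [550.0, 200.0], [520.0, 300.0], [560.0, 380.0]])
c2 = c1 - np.array([400.0, 0.0])

x0 = np.array([0.0, 0.0, 0.0, np.pi / 2,    # Img1: roll, pitch, yaw, fov
               0.0, 0.0, 0.5, np.pi / 2])   # Img2: rough initial yaw guess
res = least_squares(residuals, x0, args=(c1, c2), method="lm")
```

With 4 control-point pairs there are 8 residuals for the 8 parameters, which is the minimum `method="lm"` accepts; each L-M iteration re-evaluates the residuals with the updated parameters, which is the iterative process described above.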
However, what is the function of Lens distortion parameters in the