I'm wondering what could happen in these situations:
1) The two sets of points are feature points extracted from images taken by two cameras with somewhat different scale factors.
2) One set of points consists of feature points extracted from a camera image, while the other set consists of actual manual measurements taken on the object under observation.
Is the ICP algorithm scale-factor independent? To what extent?
The ICP class is not yet fully implemented. The static method ICP::CalculateOptimalTransformation is the core function of ICP: it computes the optimal rigid transformation given a set of 3D-3D point correspondences (Horn, 1987). It cannot handle differing scale factors.
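For reference, the closed-form alignment step can be extended to estimate a uniform scale factor as well (Umeyama's 1991 variant of Horn's method). Below is a minimal numpy sketch of that idea, not the library's actual implementation: it fits Q ≈ s·R·P + t, and with `with_scale=False` it reduces to the rigid case, which leaves a large residual when the two point sets really do differ in scale.

```python
import numpy as np

def optimal_transformation(P, Q, with_scale=False):
    """Least-squares fit of the similarity Q ~ s * R * P + t.

    P, Q -- (N, 3) arrays of corresponding 3D points.
    Returns (s, R, t); s is fixed to 1.0 in the rigid case.
    """
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q          # centered point sets
    H = Pc.T @ Qc                        # 3x3 cross-covariance (times N)
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    if with_scale:
        # Closed-form scale: trace of corrected singular values over
        # the total squared spread of the source points.
        s = (S[0] + S[1] + d * S[2]) / (Pc ** 2).sum()
    else:
        s = 1.0
    t = mu_q - s * R @ mu_p
    return s, R, t
```

A quick way to see the limitation mentioned above: generate a point set, apply a known similarity with s = 2, and fit once with `with_scale=True` (recovers the transform exactly) and once with `with_scale=False` (the rigid fit cannot absorb the scale and leaves a clear residual).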