About the licensing problem, I'm working with Noah Snavely, the original author of bundler. One of the reasons Noah released bundler under the GPL was its dependency on a bundle adjustment package that is itself GPL-licensed. Since we plan on using either the Levenberg-Marquardt algorithm in VNL or the existing bundle adjustment code in vpgl_algo, we won't have that restriction. Given this, I'm fairly sure we'll be able to release the code under the same license VXL currently uses (which to me looks like a version of the BSD license, but I'm not a lawyer either).

In a bit more detail, we have two big project goals, which are more or less requirements for Noah's research. The first is to keep bundler's ability to handle very large data sets (tens of thousands of images), and the second is to introduce some modularity into the pipeline. To explain the second requirement a bit further: bundler, and SfM in general, is a pipeline (first detect features, then match features, add two images to the reconstruction, bundle adjust, add the next image, and so on). Bundler generally assumes the images are all uncalibrated, so adding images requires something like the six-point or eight-point algorithm, but if the cameras are calibrated we could use a much faster algorithm. We'd like to provide the generic pipeline, but also let users implement any stage themselves and swap it in for the default.
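To make the modularity idea a bit more concrete, here is a rough sketch of the kind of stage interface I have in mind. All of the class and function names below are hypothetical, just to illustrate the idea; they aren't existing VXL or bundler code:

  // Minimal sketch of a swappable SfM pipeline stage (hypothetical names).
  #include <cstddef>
  #include <memory>
  #include <vector>

  // Placeholders standing in for whatever vil/vpgl types we end up using.
  struct image {};
  struct camera {};
  struct point_3d {};

  struct reconstruction
  {
    std::vector<camera>   cameras;
    std::vector<point_3d> points;
  };

  // Each pipeline stage (feature detection, matching, adding cameras,
  // bundle adjustment, ...) implements this common interface, so any one
  // stage can be replaced without touching the rest of the pipeline.
  class sfm_stage
  {
   public:
    virtual ~sfm_stage() {}
    virtual void run(const std::vector<image>& images,
                     reconstruction& recon) = 0;
  };

  // Default camera-addition stage: assumes uncalibrated images, so it
  // would resection new cameras with a six/eight-point style method.
  class uncalibrated_add_stage : public sfm_stage
  {
   public:
    void run(const std::vector<image>& images, reconstruction& recon)
    { /* body elided in this sketch */ }
  };

  // Someone with calibrated cameras could plug in a faster stage instead.
  class calibrated_add_stage : public sfm_stage
  {
   public:
    void run(const std::vector<image>& images, reconstruction& recon)
    { /* body elided in this sketch */ }
  };

  // The driver only knows about the abstract stage and runs them in order.
  void run_pipeline(const std::vector<image>& images,
                    std::vector<std::unique_ptr<sfm_stage> >& stages,
                    reconstruction& recon)
  {
    for (std::size_t i = 0; i < stages.size(); ++i)
      stages[i]->run(images, recon);
  }

The point is just that the driver never depends on a concrete stage, so swapping the uncalibrated default for a calibrated (or otherwise specialized) implementation doesn't require changing anything else in the pipeline.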

Also, I would be happy to try to migrate vpgl to core.

Does this all still sound good? A couple of people have said they'd be interested in helping, so if they email me I can write up a bit on the proposed architecture and send it to them personally, unless people think such a spec would be useful for the entire list. Also, suggestions on next steps would be appreciated.

 -Andrew