## [PanoTools-devel] breakthrough in lens modeling

[PanoTools-devel] breakthrough in lens modeling
From: Thomas Sharpless - 2009-08-13 04:35:10

```
Hi all,

The big problem with lens calibration has always been instability of the
radius correction polynomial under optimization. I believe I have licked
that -- by getting rid of the polynomial. Instead, the latest cut of
lensFunc uses a 7-point spline curve to represent the relationship of the
real to the ideal radius. It is parameterized in such a way that it is
guaranteed to be monotonic no matter what parameter values the optimizer
chooses to assign, so unlike the polynomial it can't "go wild". Also unlike
the polynomial, adding more parameters improves the level of detail without
increasing the chances of blowing up.

With this change, for the very first time, I can see a consistent
relationship between the quality of the fit and the appropriateness of the
ideal model function. Here are first results, for a set of 199 straight
lines from a Nikon 10.5 fisheye; optimizing 7 radius spline points and
nothing else, varying only the assumed ideal function:

model fn      start error   end error   iterations   stop reason
rectilinear   3299          2709        1000         iteration limit
eq.angle      2159          306         436          parameter delta
eq.area       1989          3e-15       176          error gradient
stereogr.     2455          ---         --           (model error)

Notice that all these statistics reflect the relative appropriateness of
the chosen model, but final error is by far the most sensitive criterion.
With polynomial radius corrections (2, 3 or 4 terms) I have never seen
anything I could make sense of. I believe that is because the polynomial
can take on so many forms that it can mask the inappropriateness of the
model function, and can produce wildly different results under small
changes in the data. Also, the smallest final errors for this data set with
polynomial radius correction have never been less than about 0.02. So I
think this new radius function is a winner, and I'm quite pumped about it!

To go with it, the current line finder can extract hundreds of good line
segments from a "liney" image in just a little more time than it takes to
run the Canny edge detector (roughly 7 sec for a typical DSLR snap). So we
are very close to being able to start some serious research using hundreds
of photos.

Of course the world will want to see polynomial coefficients for these
lenses. It should be routine to fit a polynomial to the spline curve (just
by sampling its values, I mean, nothing analytic). Does anyone know of
readymade code for that?

Regards, Tom
```
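The two ideas in the message above can be sketched in a few lines of NumPy. This is not Tom's actual lensFunc code: the curve below is a simple piecewise-linear stand-in for his 7-point spline, and the cumulative-sum-of-exponentials parameterization is just one standard way to guarantee monotonicity for any parameter values the optimizer picks. The "readymade code" for fitting a polynomial to sampled values is `numpy.polyfit`.

```python
# Sketch only (not lensFunc itself): a radius curve that is monotonic by
# construction for ANY parameter vector, plus a least-squares polynomial
# fitted to samples of it, as suggested in the message above.
import numpy as np

def monotone_curve(params, r):
    """Map unconstrained parameters to a monotonically increasing curve.

    Knot heights are cumulative sums of strictly positive increments
    (exp of the raw parameters), so the optimizer can roam freely in
    parameter space without ever producing a non-monotonic curve.
    Between knots we use linear interpolation; a real implementation
    would use a monotone spline, but the monotonicity idea is the same.
    """
    increments = np.exp(params)                 # > 0 for any real params
    knots_y = np.concatenate([[0.0], np.cumsum(increments)])
    knots_y /= knots_y[-1]                      # normalize so curve(1) == 1
    knots_x = np.linspace(0.0, 1.0, len(knots_y))
    return np.interp(r, knots_x, knots_y)

# 7 free parameters, matching the 7-point spline described above.
params = np.array([0.1, -0.3, 0.2, 0.0, -0.1, 0.4, -0.2])

# "Fit a polynomial to the spline curve just by sampling its values":
r = np.linspace(0.0, 1.0, 200)
y = monotone_curve(params, r)
coeffs = np.polyfit(r, y, deg=4)                # readymade least-squares fit

# Monotonic by construction, whatever the optimizer chose:
assert np.all(np.diff(y) > 0)
```

Note that `numpy.polyfit` is exactly a sampled least-squares fit, nothing analytic; the newer `numpy.polynomial.Polynomial.fit` interface does the same job with better numerical conditioning.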

Re: [PanoTools-devel] breakthrough in lens modeling
From: D M German - 2009-08-13 08:01:57

```
Hi Tom,

Tom> So I think this new radius function is a winner, and I'm quite
Tom> pumped about it!

This is great news.

Tom> To go with it, the current line finder can extract hundreds of
Tom> good line segments from a "liney" image in just a little more time
Tom> than it takes to run the Canny edge detector (roughly 7 sec for a
Tom> typical DSLR snap).  So we are very close to being able to start
Tom> some serious research using hundreds of photos.

Tom> Of course the world will want to see polynomial coefficients for
Tom> these lenses.  It should be routine to fit a polynomial to the
Tom> spline curve (just by sampling its values, I mean, nothing
Tom> analytic).  Does anyone know of readymade code for that?

Why don't we just skip the polynomial coefficients and allow the
specification of lenses in two models, polynomial and spline-based? Of
course this implies more modifications to libpano than if we used their
polynomial approximation.

--daniel

--
Daniel M. German
http://turingmachine.org/  http://silvernegative.com/
dmg (at) uvic (dot) ca  (replace (at) with @ and (dot) with .)
```
Re: [PanoTools-devel] breakthrough in lens modeling
From: Thomas Sharpless - 2009-08-13 17:09:51

```
Hi Daniel,

On Thu, Aug 13, 2009 at 4:01 AM, D M German wrote:
> Why don't we just skip the polynomial coefficients and allow the
> specification of lenses in two models, polynomial and spline-based?
> Of course this implies more modifications to libpano than if we used
> their polynomial approximation.

The basic mode of use for lensFunc is that a client C++ program, such as
Hugin, will create a lensFunc object, loaded with the appropriate
calibration data, and pass a handle to that into libpano, along with a
source image type that means "use lensFunc". The C interface seen by
libpano is quite thin, and integrating it won't be hard at all. It is
neither appropriate nor necessary to make libpano deal with persistent
calibration data.

However, to support the use of lensFunc by command line tools, it may
eventually be good to define one or two new PT script line types to hold
lensFunc calibration data. Even then, the current command line clients
would not be able to use lensFunc unless someone edited the scripts for
them. I think that's OK, because they aren't set up to handle persistent
calibration data. It would not be too hard to enhance some of them to do
that, however.

It would also be possible to give lensFunc traditional polynomial
coefficients and lens types via the libpano interface, which it could use
to set up its internal parameters, but I would prefer to do that sort of
thing at the level of the client program, where there is a better chance
of getting the data right. Still, this may be a good interim measure, as
(did I mention?) lensFunc is quite a bit faster than the traditional
remapping functions it replaces.

I would be glad to see the radial polynomial off to Hell. However, there
is a tremendous tradition and literature around the use of polynomial
coefficients to characterize lens distortions, and I fear lens geeks might
be unhappy if they could not have them. Of course the Dersch coefficients
are not what the lens geeks want either, as they go backward (from
"correct" to "incorrect" radius) and contain that mysterious cubic term
(something purist lens calibrators shun). I imagine the average lens geek
would want a simple utility to fit his favorite polynomial to the lensFunc
spline.

Going the other way, it is easy to set the spline from a polynomial --
provided, and this is actually a big proviso, that the polynomial is
monotonic all the way to the edge of the image. You might be surprised how
many of them aren't; and when that is so, I wonder how PanoTools can
compute the inverse function? I have to solve the problem of monotonizing
the overall mapping anyhow, to enable my dream of a single universal lens
model, so eventually it should be possible to "copy" any PT calibration
that works.

Regards, Tom
```
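Tom's "big proviso" is easy to test numerically. The sketch below (illustrative only; the coefficient values are made up, not taken from any real lens) checks whether a cubic radius correction stays monotonic out to the image edge by sampling its derivative, and, when it does, tabulates the inverse mapping by sampling and interpolating -- which is only well defined in the monotonic case.

```python
# Hypothetical check of the "big proviso" above: is a cubic radius
# correction r_src = p(r_dst) monotonic all the way to the edge of the
# image? If so, its inverse can be tabulated by sampling; if not, the
# inverse is not well defined there.
import numpy as np

def is_monotonic(coeffs, r_max, n=1000):
    """True if the polynomial (highest power first) increases on [0, r_max]."""
    deriv = np.polyder(np.poly1d(coeffs))
    r = np.linspace(0.0, r_max, n)
    return bool(np.all(deriv(r) > 0))

def invert_by_sampling(coeffs, r_max, n=1000):
    """Tabulate the inverse of a monotonic polynomial mapping.

    Samples r_dst -> r_src, then swaps the roles of input and output in
    linear interpolation to get a function r_src -> r_dst.
    """
    r_dst = np.linspace(0.0, r_max, n)
    r_src = np.polyval(coeffs, r_dst)
    return lambda r: np.interp(r, r_src, r_dst)

# Cubic corrections a*r^3 + b*r^2 + c*r (coefficients invented for
# illustration, with radius normalized so the image edge is r = 1):
good = [0.05, -0.10, 1.05, 0.0]   # derivative stays positive on [0, 1]
bad  = [0.00, -0.80, 1.00, 0.0]   # derivative goes negative near the edge

assert is_monotonic(good, 1.0)
assert not is_monotonic(bad, 1.0)

# Round trip through the tabulated inverse recovers the radius:
inv = invert_by_sampling(good, 1.0)
assert abs(inv(np.polyval(good, 0.5)) - 0.5) < 1e-3
```

With the `bad` coefficients, two different destination radii can map to the same source radius near the edge, which is exactly why a non-monotonic correction has no usable inverse there.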
Re: [PanoTools-devel] breakthrough in lens modeling
From: D M German - 2009-08-13 19:22:41

```
Hi Tom,

Thomas> The basic mode of use for lensFunc is that a client C++
Thomas> program, such as Hugin, will create a lensFunc object, loaded
Thomas> with the appropriate calibration data, and pass a handle to
Thomas> that into libpano, along with a source image type that means
Thomas> "use lensFunc".  The C interface seen by libpano is quite thin
Thomas> and integrating it won't be hard at all.

Why don't we create a simple function (in libpano) that can be called
from any program to transform one image, instead of the current model
that requires a script? This way lensFunc can be the "controller" and
libpano simply executes on the given parameters. The parameters would be
passed as a struct, which could even include a function that does the
correction of the lens as one of the parameters.

Also (I mentioned this to you in an earlier message): will one tool
depend on the other 100% of the time? Which will depend on which?

--daniel

--
Daniel M. German
http://turingmachine.org/  http://silvernegative.com/
dmg (at) uvic (dot) ca  (replace (at) with @ and (dot) with .)
```
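Daniel's scheme -- a parameter struct that can carry the lens correction as a callback, handed to a single scriptless entry point -- can be sketched as follows. All names here are invented for illustration; this is not libpano's actual API, and a real version would be a C struct with a function pointer rather than a Python dataclass.

```python
# Hypothetical sketch of the struct-with-callback design suggested above
# (names invented; not libpano's real interface). The caller fills a
# parameter struct, which may carry a radial correction callback, and a
# single entry point remaps a point -- no script involved.
import math
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TransformParams:
    width: int
    height: int
    # Radial correction supplied by the "controller" (lensFunc, in
    # Daniel's scheme): maps a destination radius to a source radius.
    radius_correction: Optional[Callable[[float], float]] = None

def transform_point(x, y, params):
    """Remap one point through the optional radial correction.

    Coordinates are taken relative to the image center; only the radius
    changes, the angle is preserved (a pure radial model).
    """
    cx, cy = params.width / 2.0, params.height / 2.0
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if params.radius_correction is None or r == 0.0:
        return x, y
    scale = params.radius_correction(r) / r
    return cx + dx * scale, cy + dy * scale

# The controller supplies the correction; here a trivial 2% stretch:
params = TransformParams(width=100, height=100,
                         radius_correction=lambda r: 1.02 * r)
sx, sy = transform_point(75.0, 50.0, params)   # -> (75.5, 50.0)
```

Because the correction travels inside the struct as a plain callable, the same entry point serves a polynomial model, a spline model, or anything else the controller chooses, which is the flexibility Daniel is asking for.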