> On Wed, 07 Feb 2007 15:25:25 +0100, Pablo d'Angelo wrote:
> > Hi Yuv,
> >> [quoted text muted]
> > I agree that the time spent by the human in front of the computer needs
> > to be minimized. However, have you measured the time the optimizer takes
> > in your workflow? I estimate the time spent in the optimizer is on the
> > order of 1-2 seconds, at least for the typical 360 deg panorama with
> > fisheye images. IMHO, saving 50% of that time is a marginal improvement
> > that is not really worth many hours of work. I guess it takes longer
> > to read the optimization result and press the OK button.
> > For a 200-image panorama this is different, and to achieve dramatic
> > improvements, an optimization algorithm designed for problems with
> > many variables is required.
> For me autopano-sift is the bottleneck: you can't start working until it
> completes, and it's *slow*.
Some unknown person has ported autopano-sift to C and placed it on the patches
page of the hugin project (look there for more info). Unfortunately the port
contains some subtle errors somewhere, which I haven't been able to find (the
SIFT algorithm is quite complex...). On my Linux machine it is at least 4-5
times faster than the original C# version. I have moved it into a separate
module of the hugin CVS repository, so if anybody wants to help with finding
the differences, please feel free to take a look at it.
> It occurs to me that Hugin could include some statistical methods to
> suppress the impact of bad control points and greatly speed up
> "wall-clock time" optimization, which for me often involves more time
> trimming crummy CPs than actual calculation time (which, as Pablo
> mentioned, takes only 1-2 seconds).
I have added a robust estimator to panotools (a non-squared cost function
for large errors), but I didn't have the time to evaluate it properly. Maybe
this also helps here. See the PTOptimizer documentation for more information.
To use it, you currently have to enable the "edit optimizer script" option
in hugin.
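
For illustration, the idea is something along the lines of a Huber cost
(just a sketch, not the exact function used in panotools; the threshold k
is a tuning parameter):

#include <cmath>

// Sketch of a Huber-style robust cost (an assumption -- the actual
// panotools code may use a different non-squared function).  Residuals
// below the threshold k are penalized quadratically, as in ordinary
// least squares; larger residuals grow only linearly, so a few grossly
// wrong control points cannot dominate the optimization.
double robustCost(double residual, double k)
{
    double r = std::fabs(residual);
    if (r <= k)
        return 0.5 * r * r;      // quadratic region: behaves like least squares
    return k * (r - 0.5 * k);    // linear region: bounded influence of outliers
}

Below k this is the usual squared error, so good points are weighted exactly
as before; above k the cost grows only linearly, which bounds the influence
any single bad control point can have on the fit.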
> These CPs wouldn't be deleted, just disabled, and their offsets would
> still be computed, so that a new round of rejection could commence.
> Some might "come back to life" after a given round. An iterative
> optimizer that repeatedly ran in this way until the average control
> point distance converges would be very useful. I.e. press "Robust
> Optimize" and the optimizer:
> 1. Runs once, with all CPs active.
> 2. Evaluates and disables `bad' CPs.
> 3. Runs again.
> 4. Repeats step 2 + 3 until convergence.
Hmm, interesting idea. However, I'm not sure whether it would converge, or
whether it might end up in a solution where too many points are marked as
outliers.
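
Just to make the scheme concrete, the loop could look roughly like this
(a sketch only: runOptimizer stands in for a call into the real optimizer,
and the rejection threshold of 3x the mean distance is an arbitrary choice):

#include <cmath>
#include <functional>
#include <vector>

struct ControlPoint {
    double error;   // reprojection distance after the last optimizer run
    bool   enabled; // disabled points are ignored by the fit, error still updated
};

// Sketch of the proposed "Robust Optimize" loop.  runOptimizer must fit
// the panorama using only the enabled control points, but update the
// error of every point, so that disabled points can "come back to life"
// in a later round.
double robustOptimize(std::vector<ControlPoint>& cps,
                      const std::function<void(std::vector<ControlPoint>&)>& runOptimizer,
                      double rejectFactor = 3.0, // disable CPs worse than this multiple of the mean
                      int maxRounds = 20,
                      double tol = 1e-3)
{
    double prevMean = 1e30;
    for (int round = 0; round < maxRounds; ++round) {
        runOptimizer(cps);                       // steps 1 and 3: (re)optimize

        double sum = 0.0;                        // mean distance over all CPs
        for (const auto& cp : cps) sum += cp.error;
        const double mean = sum / cps.size();

        for (auto& cp : cps)                     // step 2: toggle, never delete
            cp.enabled = (cp.error <= rejectFactor * mean);

        if (std::fabs(prevMean - mean) < tol)    // step 4: repeat until convergence
            return mean;
        prevMean = mean;
    }
    return prevMean;
}

To guard against throwing away too many points, one could additionally put
a floor on the rejection threshold, or refuse to disable more than a fixed
fraction of the control points in any single round.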