Hello Tom

This looks like a very interesting project.  Assuming you have the necessary engineering skills and tools for FPGA programming, you should be able to make this device stitch video in real time.

You could certainly do worse than use the PanoTools code as a model for the necessary processing.  Most of us who write panoramic photo software learned how to do it right by studying Helmut's code.  Unfortunately, at its present state of development libpano is large, complex, and poorly documented, so that learning process can take some time.  Start with math.c, which has most of the core routines; adjust.c, which sets up the mapping-function stacks that use them; and filter.c, which has the executive code that runs the stacks.

The most important thing to understand about stitching is that the images are fitted together on a spherical surface.  This is not obvious in the PT code, as the spherical images exist only implicitly in the middle of the transformation stack, at the point where 2D equirectangular coordinates are transformed to 3D Cartesian coordinates, which are then rotated in 3D to align the image on the sphere.  The result is immediately transformed back to 2D equirectangular.  All the rest of PanoTools is preparation for this step.
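To make that concrete, here is a rough sketch of that central step in C.  This is not libpano's actual code; the function names and conventions are mine, with longitude and latitude in radians and the rotation given as a precomputed 3x3 matrix built from the image's yaw, pitch and roll:

#include <math.h>

/* 2D equirectangular (lon, lat, in radians) -> 3D unit vector */
void erect_to_cart(double lon, double lat, double v[3])
{
    v[0] = cos(lat) * sin(lon);
    v[1] = sin(lat);
    v[2] = cos(lat) * cos(lon);
}

/* 3D unit vector -> 2D equirectangular */
void cart_to_erect(const double v[3], double *lon, double *lat)
{
    *lat = asin(v[1]);
    *lon = atan2(v[0], v[2]);
}

/* rotate a vector by a 3x3 matrix R (the image's yaw/pitch/roll) */
void rotate3(const double R[3][3], const double in[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = R[i][0] * in[0] + R[i][1] * in[1] + R[i][2] * in[2];
}

The full output-to-input mapping just composes this with each lens's projection (fisheye, in your case) on either side.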

The second most critical thing to understand is how the output image is constructed from the input image: input data are interpolated at points computed by mapping output pixel centers onto the input grid, using the above transformations.  So the main transformation is the 'inverse' mapping from output to input coordinates; the 'forward' mapping is only used for things like computing fields of view and need not be optimized for speed.
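In pseudo-C the warp loop looks roughly like this.  Again these are assumed names rather than PT's API; map_out_to_in() stands for the composed inverse transform described above, and the image is single-channel 8-bit for brevity:

/* the composed inverse transform (hypothetical name) */
void map_out_to_in(double ox, double oy, double *ix, double *iy);

void warp_image(const unsigned char *in, int in_w, int in_h,
                unsigned char *out, int out_w, int out_h)
{
    for (int y = 0; y < out_h; y++) {
        for (int x = 0; x < out_w; x++) {
            double sx, sy;
            /* map this output pixel center back into input coordinates */
            map_out_to_in(x + 0.5, y + 0.5, &sx, &sy);
            if (sx < 0.0 || sy < 0.0 || sx >= in_w - 1.0 || sy >= in_h - 1.0)
                continue;              /* falls outside the source image */
            int ix = (int)sx, iy = (int)sy;
            double fx = sx - ix, fy = sy - iy;
            const unsigned char *p = in + iy * in_w + ix;
            /* bilinear blend of the four surrounding input pixels;
               PT offers better kernels (bicubic, spline, sinc) */
            double v = (1 - fx) * (1 - fy) * p[0]  + fx * (1 - fy) * p[1]
                     + (1 - fx) * fy * p[in_w]     + fx * fy * p[in_w + 1];
            out[y * out_w + x] = (unsigned char)(v + 0.5);
        }
    }
}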

Thirdly, coordinate mapping and image interpolation are separable.  They are in fact done separately in PT when the "fast transform" method is invoked (as it always is for high-resolution images).  Rather than compute the full transformation for every output pixel center, that method computes a coarser grid of exact output centers and finds the others by interpolation during the image-warping phase.  In your application, where I assume the optics and alignment will be fixed, you could precompute all the output centers and just look them up when needed.
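For fixed optics, building that lookup table might be as simple as this (illustrative only; 16.16 fixed point keeps the stored coordinates friendly to FPGA/DSP arithmetic):

#include <stdint.h>
#include <stdlib.h>

/* the composed inverse transform from the previous sketch */
void map_out_to_in(double ox, double oy, double *ix, double *iy);

typedef struct { int32_t sx, sy; } Coord;   /* input coords, 16.16 fixed point */

/* run the full (slow) transform once at startup; per-frame warping then
   reduces to a table lookup plus interpolation */
Coord *build_coord_table(int out_w, int out_h)
{
    Coord *map = malloc((size_t)out_w * out_h * sizeof *map);
    if (!map)
        return NULL;
    for (int y = 0; y < out_h; y++) {
        for (int x = 0; x < out_w; x++) {
            double sx, sy;
            map_out_to_in(x + 0.5, y + 0.5, &sx, &sy);
            map[y * out_w + x].sx = (int32_t)(sx * 65536.0 + 0.5);
            map[y * out_w + x].sy = (int32_t)(sy * 65536.0 + 0.5);
        }
    }
    return map;
}

On the Elphel board I would guess you would store such a table in RAM and stream it alongside the sensor data, so nothing trigonometric ever runs per pixel.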

I hope this helps you get started.  Please keep us posted on your progress.

Kind regards, Tom

On Sun, Sep 5, 2010 at 8:18 AM, Tom Sparks <tom_a_sparks@yahoo.com.au> wrote:
I am about to buy a custom-built sphere camera

I would like to port some of the PanoTools code to the sphere camera (dual-fisheye to equirectangular)

specs on the camera's main processor board: http://wiki.elphel.com/index.php?title=10353

I am hoping to do something like these two papers:
http://www.altera.com/literature/wp/wp-01107-stitch-fisheye-images.pdf
http://www.altera.com/literature/wp/wp-01073-flexible-architecture-fisheye-correction-automotive-rear-view-cameras.pdf

so where do I begin?

tom_a_sparks
Light travels faster than sound, which is why some people appear bright, until you hear them speak