As far as I can see from the sources (resample.c, I guess), libpano builds the resulting image in the following way:
take a pixel of the resulting image and apply the backward transformation to see where it falls in the original image (say, at a point with float coordinates (u, v)). Then we take the colours of the original pixel near (u, v) and of some of its neighbours, and interpolate between them to make the result smooth.
Do I understand it correctly?
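If I've read it right, the core of that scheme would look roughly like the sketch below. This is only my own minimal illustration, not libpano's code: inverse_transform is a hypothetical identity stub standing in for the real back transformation, and I use plain bilinear interpolation of the 2x2 neighbourhood, while resample.c may use larger kernels.

#include <math.h>

typedef struct { unsigned char r, g, b; } Pixel;

/* Hypothetical stand-in for the real back transformation: maps
 * destination coords (x, y) to source coords (u, v). Identity here. */
static void inverse_transform(double x, double y, double *u, double *v)
{
    *u = x;
    *v = y;
}

/* Bilinear interpolation of the 2x2 source neighbourhood around (u, v).
 * Assumes w >= 2 and h >= 2; coordinates are clamped to the image. */
static Pixel sample_bilinear(const Pixel *src, int w, int h, double u, double v)
{
    int    x0 = (int)floor(u), y0 = (int)floor(v);
    double fx = u - x0,        fy = v - y0;

    if (x0 < 0)     { x0 = 0;     fx = 0.0; }
    if (x0 > w - 2) { x0 = w - 2; fx = 1.0; }
    if (y0 < 0)     { y0 = 0;     fy = 0.0; }
    if (y0 > h - 2) { y0 = h - 2; fy = 1.0; }

    const Pixel *p00 = &src[y0 * w + x0],       *p10 = p00 + 1;
    const Pixel *p01 = &src[(y0 + 1) * w + x0], *p11 = p01 + 1;
    double w00 = (1 - fx) * (1 - fy), w10 = fx * (1 - fy);
    double w01 = (1 - fx) * fy,       w11 = fx * fy;

    Pixel out;
    out.r = (unsigned char)(w00 * p00->r + w10 * p10->r + w01 * p01->r + w11 * p11->r + 0.5);
    out.g = (unsigned char)(w00 * p00->g + w10 * p10->g + w01 * p01->g + w11 * p11->g + 0.5);
    out.b = (unsigned char)(w00 * p00->b + w10 * p10->b + w01 * p01->b + w11 * p11->b + 0.5);
    return out;
}

/* Backward mapping: one inverse transform + one lookup per destination pixel. */
void resample(const Pixel *src, int sw, int sh, Pixel *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++) {
            double u, v;
            inverse_transform(x, y, &u, &v);
            dst[y * dw + x] = sample_bilinear(src, sw, sh, u, v);
        }
}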

What do you think of the following algorithm?
Consider a source pixel as a unit square and apply the transformation to its four corners. Then look at which pixels of the resulting image are covered by this transformed quadrilateral. Treating a pixel of the resulting image as a unit square as well, we take its colour as a weighted average of the colours of the transformed quadrilaterals covering it, where the weight of each quadrilateral is the area it has in common with the pixel's unit square.
I've implemented this method for my needs, and it works quite nicely, but perhaps too slowly. The speed depends mainly on how you calculate the common area of a square and a quadrilateral; one possible way is sketched below.
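For concreteness, that area can be computed by clipping the quadrilateral against the four edges of the pixel square (Sutherland-Hodgman) and taking the area of the clipped polygon with the shoelace formula. The sketch below is a simplified illustration along those lines, not my exact code, and it assumes the transformed quad stays convex:

#include <math.h>

typedef struct { double x, y; } Pt;

/* Clip polygon `in` (n vertices) against the half-plane a*x + b*y <= c
 * (one Sutherland-Hodgman pass); writes the result to `out` and returns
 * the new vertex count. A convex quad clipped by four edges has at most
 * 8 vertices, so buffers of 32 are generous. */
static int clip_halfplane(const Pt *in, int n, double a, double b, double c, Pt *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Pt p = in[i], q = in[(i + 1) % n];
        double dp = a * p.x + b * p.y - c;
        double dq = a * q.x + b * q.y - c;
        if (dp <= 0)
            out[m++] = p;                   /* p is inside: keep it */
        if ((dp < 0) != (dq < 0)) {         /* edge crosses the border */
            double t = dp / (dp - dq);
            out[m].x = p.x + t * (q.x - p.x);
            out[m].y = p.y + t * (q.y - p.y);
            m++;
        }
    }
    return m;
}

/* Area of the intersection of quad `q` with the unit square whose
 * lower-left corner is (px, py). */
double quad_pixel_overlap(const Pt q[4], double px, double py)
{
    Pt a[32], b[32];
    int n = 4;
    for (int i = 0; i < 4; i++) a[i] = q[i];

    n = clip_halfplane(a, n, -1.0,  0.0, -px,      b);   /* x >= px     */
    n = clip_halfplane(b, n,  1.0,  0.0, px + 1.0, a);   /* x <= px + 1 */
    n = clip_halfplane(a, n,  0.0, -1.0, -py,      b);   /* y >= py     */
    n = clip_halfplane(b, n,  0.0,  1.0, py + 1.0, a);   /* y <= py + 1 */

    double twice_area = 0.0;                /* shoelace formula */
    for (int i = 0; i < n; i++) {
        Pt p = a[i], r = a[(i + 1) % n];
        twice_area += p.x * r.y - p.y * r.x;
    }
    return fabs(twice_area) * 0.5;
}

Each destination pixel touched by a quad costs four clipping passes over a handful of vertices, which is why this forward method ends up noticeably slower than a single backward lookup per pixel.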

Is anyone interested in an implementation of this method?