The code for three resize.c filters is changed in the attached version.
For a discussion of what this is about, see
Just in case the attachment did not upload quite right:
It can be found here:
Department of Mathematics and Computer Science
Note that Chantal Racette and I can pretty much apply this kind of treatment throughout, one "expensive" filter at a time. In addition, I have optimally refactored BC-cubic code which I'm planning to put in there.
I have attached a simple resize benchmark script which may be used to evaluate the performance of image resizing with the various resize filters. Unfortunately, resize performance does not appear to depend primarily on the filter subroutines themselves, but rather on other parts of the resize pipeline; the bottleneck is not currently in the filter code this patch speeds up. This is something I already knew from prior code profiling, but it becomes clearer when comparing "before" and "after" benchmark runs.
Thank you Bob. I understand now that adding "cheaper filters" is not something which will make much of a difference (except perhaps in exceptional situations, e.g. a very narrow image, where the coefficient/pixel computation ratio is skewed, as pointed out by Anthony Thyssen).
This being said, with very little code change you can halve the computation time of the Blackman and Lanczos 3 coefficients. Basically, one can use trigonometric identities to reduce the number of trig calls from two down to one. These simplifications, although elementary, do not appear to have been exploited before.
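Concretely, the identities being used are the cosine double-angle and sine triple-angle formulas:

```latex
\cos(2\pi x) = 2\cos^2(\pi x) - 1,
\qquad
\sin(\pi x) = 3\sin\!\left(\tfrac{\pi x}{3}\right) - 4\sin^3\!\left(\tfrac{\pi x}{3}\right).
```

The first lets Blackman's cos(2 pi x) term be rewritten in terms of the already-computed cos(pi x); the second lets the sin(pi x) factor in Lanczos 3 be rewritten in terms of sin(pi x/3), so each kernel needs only one trig evaluation.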
Here they are (I realize my coding style is somewhat different from whoever programmed resize.c for GM):
static double Blackman(const double x,const double ARGUNUSED(support))
{
  /* 0.42 + 0.5*cos(pi*x) + 0.08*cos(2*pi*x) refactored by Chantal
     Racette and Nicolas Robidoux down to 1 trig call, 2 multiplies
     and 2 adds. */
  const double cospix = cos(MagickPI*x);
  return(0.34+cospix*(0.5+cospix*0.16));
}
static double Lanczos(const double x,const double support)
{
  /* Sinc(x)*Sinc(x/3) refactored by Nicolas Robidoux and Chantal Racette
     down to two conditionals, 1 trig call and 7 flops. */
  const double xx = x*x;
  if (xx == 0.0)
    return(1.0);
  if (xx <= 9.0)
    {
      const double s = sin((MagickPI/3.0)*x);
      const double ss = s*s;
      return(ss*(3.0-4.0*ss)/xx);
    }
  return(0.0);
}
While I was at it, I rewrote Gaussian with one less flop:
static double Gaussian(const double x,const double ARGUNUSED(support))
I am updating the attached resize.c this minute so it reflects these changes (without the "Poor Man's Sinc" code).
(forgot how to add files) Let me see...
I just fixed the typo in the Blackman comments.
Your work is certainly valuable. Perhaps when other algorithmic issues have been addressed, the faster filter functions will provide more impact. At the moment, the random-access nature of the pixel access seems to cause the biggest run-time impact.
Just realized I had programmed Lanczos 3 in non-normalized form. It makes no difference when it is used as a (discrete) filter (with weights normalized at the end), but I'll update resize.c in case anybody cares.
resize.c with efficient lanczos normalized "correctly"
Note that I'm OK with the code not being used. "No obligation to buy!"
ImageMagick resize.c now uses several of our refactored filter kernels BY DEFAULT.
Let me know if you want me to put GraphicsMagick's resize.c through the same speed-up treatment (a minor one, I understand: with caching, weight computation is a minor component of resampling).
(The GM version I put on the web is seriously "outdated.")