From: Alan W. I. <ir...@be...> - 2001-12-01 03:31:15
On Sat, 1 Dec 2001, João Cardoso wrote:

> [snip] You have done good research, and everything you said implies
> that we must change plimage internals now. Also, because plimage()
> does not yet support any of the difilt() standard transformations, it
> is better to change now (or never!). In the next week I have no time
> to make the changes, so please go ahead.

I am still somewhat concerned there may be image libraries other than
libp[bgp]m and libgd which do not follow the "column-index changes
fastest" "standard". So everybody here, please check your favorite
image libraries now, and let us know the results, especially if you
find a counter-example. I will also do a lot more checking with my
astronomer friends on this issue for other image libraries, now that
there is some tentative approval for the idea.

> Alessandro, are you hearing? What is your opinion?

I would also like to hear from Alessandro.

> Have you tried a very large image to notice the page faults? Or are
> you speaking just theoretically?

Practical experience long ago with benchmarks for a different project.
Recall that memory access is hierarchical, so there is not only the
potential page-fault problem I mentioned, but also a potential
cache-fault problem. I think reorganizing our indices will make a
large difference on ix86 PCs, for example.

The benchmark I did long ago on a Pentium-133 was to determine the
Mflop rate for FFT calculations, where you know how many flops there
are as a function of the size of the vector you are trying to
transform. There was a sharp factor-of-three (IIRC) drop in Mflops for
larger N, and the N where that drop occurred corresponded closely to
the size of my L2 cache. Recall that FFT calculations necessarily make
very scattered use of memory, so once that overflowed the cache my
memory access times went from cache speeds to memory speeds, which of
course were 4 times slower on a Pentium-133 (133 MHz versus 33 MHz).
For fast PCs of today the same problem should exist, because the ratio
of cache speed to memory speed has stayed roughly the same.

I don't have any practical experience with page-fault benchmarks, but
the ratio of memory speed to disk access speed is much higher than the
ratio of cache speed to memory speed, so the penalty is much worse.
Also, some of the images that scientists will want to process with
plimage are quite large. I did a bit of image processing in the early
90s, and some of the astronomical images then exceeded 20 MB; I
presume they are even bigger now.

Like you, and for the same reasons, I am not keen at all on
optimization, but I think this time it might be worth what I hope will
be the small amount of trouble involved. Anyhow, I am willing to do
the work for the conversion (so long as nobody finds a substantial
number of image library counter-examples).

Alan
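
P.S. For anyone checking their favorite library, here is a minimal
sketch of the two conventions so we are all comparing the same thing.
The names (w, h, pixel_rowmajor, pixel_colmajor) are mine for
illustration, not plimage's actual API:

    /* "Column-index changes fastest": walking linearly through the
       buffer, x varies fastest, so pixel (x, y) of a w x h image
       lives at buf[y*w + x].  This is row-major order, the layout
       used by libp[bgp]m and libgd as far as I have checked. */
    unsigned char pixel_rowmajor(const unsigned char *buf,
                                 int w, int x, int y)
    {
        return buf[y * w + x];
    }

    /* The opposite convention: y varies fastest, so pixel (x, y)
       lives at buf[x*h + y] (column-major, Fortran-style). */
    unsigned char pixel_colmajor(const unsigned char *buf,
                                 int h, int x, int y)
    {
        return buf[x * h + y];
    }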
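
P.P.S. To see the cache effect I am describing without setting up an
FFT, it is enough to time the two loop orders over one large row-major
buffer. This toy is mine, not a proposed plimage change:

    /* Sum a w x h row-major image both ways.  The first loop touches
       memory with unit stride and stays cache-friendly; the second
       jumps w bytes per access, so for large w nearly every access
       can miss the cache (and, for very large images, fault in a new
       page).  Time them separately to measure the gap. */
    unsigned long sum_image(const unsigned char *buf, int w, int h)
    {
        unsigned long fast = 0, slow = 0;
        int x, y;

        for (y = 0; y < h; y++)         /* row by row: unit stride */
            for (x = 0; x < w; x++)
                fast += buf[y * w + x];

        for (x = 0; x < w; x++)         /* column by column: stride w */
            for (y = 0; y < h; y++)
                slow += buf[y * w + x];

        return fast + slow;   /* identical sums, very different times */
    }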