Speed problems with ZoomifyImage

  • Steve

    Steve - 2005-09-12

    When I first stumbled upon ZoomifyImage, I was elated to find a good way to Zoomify from the command line.  This is very useful for me, as I need to zoomify thousands of images every day, and with the standalone program it becomes quite cumbersome to constantly check whether it needs to produce a log file, or whether too many corrupted images went through and it just closed; the list goes on.

    However, there is a significant problem.  This Python script is considerably slower than the standalone program; in fact, it's about 10x slower.  That's probably not a big deal when you're only doing a few pictures, but now I'll probably be here until 8 or 9 getting these ready to ship out.  I fear it may have something to do with my machine's dual processors, but I don't know enough about Python to judge whether that's the problem.  In any event, is there any way to speed up the Python scripts and get the best of both worlds?

    This is running on a dual 2 GHz G5 with 1.5 GB DDR SDRAM, Panther (10.3.9), and Python 2.3.


    Stephen Weiss
    Image Master

    • adam smith

      adam smith - 2005-09-13

      Hi Stephen,

      Unfortunately, your speed issues are not a surprise. If I remember correctly, the official Zoomify converter is written in C++, and you can expect software written in a higher-level language like Python to be significantly slower. This may be especially true of this type of program, which is very memory- and processor-intensive. (You can also expect the official Zoomify converter to be capable of processing larger images than my software.)

      My software is also pretty new, and by necessity at this stage I have focused more on correctness than speed. Up to this point there has been more pressure to process ever larger images than to do this processing quickly, so I have done things such as writing out many temporary files during processing, which has significantly slowed the program.
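      For readers curious what the tiling step looks like, here is a rough sketch (assuming PIL; `cut_tiles` is a hypothetical helper, not part of ZoomifyImage). Cutting fixed-size tiles with `Image.crop` works entirely in memory; writing intermediate reductions out to temporary files, as described above, adds a disk round-trip per level. The 256-pixel tile size is the one Zoomify viewers expect.

```python
# Sketch of Zoomify-style tile cutting with PIL, entirely in memory.
from PIL import Image

TILE = 256  # Zoomify viewers expect 256x256 tiles

def cut_tiles(img):
    # Walk the image in 256-pixel steps; edge tiles may be smaller.
    tiles = []
    width, height = img.size
    for top in range(0, height, TILE):
        for left in range(0, width, TILE):
            box = (left, top,
                   min(left + TILE, width),
                   min(top + TILE, height))
            tiles.append(img.crop(box))
    return tiles
```

In the real converter each tile would then be saved as a JPEG, and the process repeated on successively halved copies of the image to build the zoom levels.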

      I am currently working on improving the memory management of the software so that it loads less of each image's data into memory as it processes it, but I can't guarantee that this will give you a completely satisfactory experience.

      In the meantime, there are things you can do at the architectural level that might help. Try upgrading to Python 2.4 and make sure you are using the newest version of the Python Imaging Library. My general experience has been that each new version of Python makes dramatic performance improvements. Your dual processors would only help, I would think.
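      One concrete way to act on the dual-processor point: a single Python interpreter will mostly keep one CPU busy, so the usual workaround is to run one conversion per image from a small pool of worker processes. This is only a sketch; `zoomify_one` is a hypothetical stand-in, and in practice it would shell out to whichever converter you use (shown as a comment).

```python
# Sketch: fan work out across both CPUs by processing one image
# per task in a pool of worker processes.
from multiprocessing import Pool

def zoomify_one(path):
    # Hypothetical stand-in for the real work. In practice this
    # would invoke the converter, e.g. something like:
    #   subprocess.call(["python", "your_converter.py", path])
    return path

def zoomify_batch(paths, workers=2):
    # One worker per CPU keeps a dual-processor machine busy.
    pool = Pool(processes=workers)
    try:
        return pool.map(zoomify_one, paths)  # preserves input order
    finally:
        pool.close()
        pool.join()
```

Since each image is independent, this scales with the number of files, which sounds like exactly your situation.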

      Also, can you let me know typical sizes of the images you are processing? Naturally, I originally wrote this software for my own needs, and those are the results I know the best. With your information, I can give potential users of the software a better picture of its limitations before they decide to use it.


    • Steve

      Steve - 2005-09-13

      See, I thought that Python was just a wrapper for the C++ (I should really learn to look at the code instead of just making assumptions).  Well, it's too bad - I liked the setup of your software, but I also like sleep! :-)

      Our pictures range anywhere from 700 KB to 1.4 MB.  I recorded the CPU time for each process, and in general it's about 25:00.0 for this script and 3:00.0 for the C++ version - a huge difference when you're talking thousands of files.  I think in my case it's just the number of files, not really the size.

      Incidentally, the correctness of the images was perfect - can't see a single difference.

      I'm just gonna stick to the C++ version for now, maybe play around with getting it to work with open -a or something - I don't need to piss off my boss again!

      • Gawain Lavers

        Gawain Lavers - 2005-09-13

        I have no real evidence to support this, but I'm wondering if the speed issue isn't something to do with OS X (perhaps PIL is poorly ported to that platform).  Speed divergence between my G4 laptop and a Pentium 4 Linux box is rather large for me.

    • adam smith

      adam smith - 2005-09-14

      I wonder about the Mac version also. I have a P4 with 1 GB RAM, dual 2 GHz processors, etc., which I would think has performance comparable to the G5 you mentioned, and I seem to recall processing a test image that was roughly 1.2 GB in size in under eight minutes back when I was making the initial release of the software. It was still *much* slower than the real Zoomify product, but all in all I felt pretty good about it at the time--I was expecting worse. I did see 20-30 minute times when I tested the same image within the Zope environment, but that's another story...
