Oddegamra - 2013-05-11
  1. I can see where you are coming from. Generally, MComix does not deal with large images very well. There are way too many image operations that create a new copy of the image in memory. There's also the pixbuf cache, which takes a respectable amount of memory by itself even at moderate settings (e.g. 5 images). All of this combined actually prevents me from opening an archive with a few high-res scans, as the Python Imaging Library constantly runs out of memory. I'm not sure what the problem is here, since the library does not seem to use a very large amount of memory even when plenty should still be available, but maybe that is a limitation of the Python C extension API? In any case, the lens and the main screen aren't that similar. For one, the lens is a completely independent image that is simply drawn on the screen using a scaled pixbuf taken from the main image (see the first sketch after this list). Pretty easy stuff, essentially. The main image, however, is a pixbuf assigned to a gtk.Image, so GTK handles all the nasty code. And by nasty, I'm mainly talking about computing the size required by the window, setting the viewport, repainting parts of the image that have been damaged (mouse-over, window overlapping and so on) and handling scrolling. The latter is probably the most problematic part. If you wanted to display only a part of the image (and keep only that part in memory), you'd have to tell GTK how large the full image is (for correct scrollbar rendering), dynamically compute the position within the image (i.e. set the correct scrolling position), decide when a scrolling operation hits the end of the image, and so forth. And I'm not even getting into actually drawing the image. In short, what I consider the "core" of the program would need to be rewritten entirely. I'm not too keen on tackling that part, naturally.
  2. That's Pixbuf.scale_simple from my profiling stats. When I went from 2% to 0.8%, the execution time climbed from 0.2s to 5s and eventually 15s. Apparently, there is a major performance problem somewhere in there when downscaling to very small sizes. I'm not sure if there is anything that could be done about this problem. Concerning redrawing, the zoom code originally triggered a method zoom_changed, which then executed draw_image in main.py via callback. This callback was not triggered when the previous and current zoom levels were identical. Since this piece of code was changed during refactoring, it probably has to be restored in main.py around manual_zoom_in, manual_zoom_out and manual_zoom_original (see the second sketch after this list).
  3. Good call. We could move the execution of ImageEnhancer upward in _draw_image to a point where no scaling has been performed yet, i.e. right after get_pixbuf (see the third sketch after this list). Of course, this also has the downside that large images, which would normally be scaled down to a reasonable size before being enhanced, will now be a lot more costly to enhance.
  4. I'm not 100% sure, but I think this comment refers to calls to draw_image being made while processing is still somewhere in _draw_image. For example, someone hitting ZoomOut while the image is still in the process of being zoomed. However, I'm not sure this can still happen, as UI events are no longer processed while _draw_image is running (they used to be). In any case, all of this happens in the main thread. Worker threads do not call UI operations directly (or they delegate the call to the main thread), so an atomic operation should not be needed (see the last sketch after this list).
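
Regarding point 1: a minimal sketch of the kind of lens drawing described above, assuming PyGTK 2. The function name, lens size and magnification factor are made up for illustration, and pointer coordinates are assumed to already be in image coordinates; this is not MComix's actual lens code.

```python
import gtk

LENS_SIZE = 200    # on-screen size of the square lens (made-up value)
MAGNIFICATION = 2  # scale factor applied inside the lens

def draw_lens(widget, source_pixbuf, pointer_x, pointer_y):
    """Paint a magnified view of the area under the pointer onto the widget.

    The lens is just an independent image: a sub-region of the main pixbuf
    is cut out, scaled up, and drawn on top of whatever is already shown.
    """
    # Size of the source region that ends up filling the lens.
    src_size = LENS_SIZE // MAGNIFICATION
    src_x = max(0, min(pointer_x - src_size // 2,
                       source_pixbuf.get_width() - src_size))
    src_y = max(0, min(pointer_y - src_size // 2,
                       source_pixbuf.get_height() - src_size))

    # subpixbuf gives a view on the source pixels; scale_simple returns a
    # new, scaled pixbuf. The main image itself is never modified.
    region = source_pixbuf.subpixbuf(src_x, src_y, src_size, src_size)
    lens = region.scale_simple(LENS_SIZE, LENS_SIZE, gtk.gdk.INTERP_BILINEAR)

    # Draw directly onto the widget's GDK window, over the main image.
    gc = widget.window.new_gc()
    widget.window.draw_pixbuf(gc, lens, 0, 0,
                              pointer_x - LENS_SIZE // 2,
                              pointer_y - LENS_SIZE // 2)
```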
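Regarding point 2: a rough sketch of the guard that would have to be restored, i.e. only invoking the draw_image callback when the zoom level actually changed. The class and attribute names here are illustrative, not the real main.py structure.

```python
class ZoomSketch(object):
    """Illustrative only: fire the redraw callback just when zoom changes."""

    def __init__(self, redraw_callback):
        self._zoom_level = 1.0
        self._redraw_callback = redraw_callback  # e.g. main.py's draw_image

    def _set_zoom_level(self, new_level):
        old_level = self._zoom_level
        self._zoom_level = new_level
        # Skip the expensive redraw when nothing actually changed, e.g.
        # hitting ZoomOut while already at the minimum zoom level.
        if new_level != old_level:
            self._zoom_changed()

    def _zoom_changed(self):
        self._redraw_callback()
```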
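Regarding point 3: the proposed reordering in _draw_image, reduced to its bare bones. The helper names are placeholders; the real method does considerably more work.

```python
def _draw_image_order_sketch(get_pixbuf, enhance, scale_to_fit):
    """Proposed order only; the real _draw_image does far more.

    Today:    get_pixbuf -> scale_to_fit -> enhance (enhancer sees a reduced image).
    Proposed: get_pixbuf -> enhance -> scale_to_fit (enhancer sees the full
    image, which is more faithful but much more costly for large scans).
    """
    pixbuf = get_pixbuf()
    pixbuf = enhance(pixbuf)       # stands in for ImageEnhancer
    return scale_to_fit(pixbuf)    # scaling happens after enhancement
```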
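Regarding point 4: a small sketch of the delegation pattern mentioned above, assuming PyGTK 2 with gobject.idle_add. The function names are illustrative; the point is that the worker thread never touches widgets itself, so no atomic operation is required.

```python
import gobject
import gtk

gobject.threads_init()

def _set_main_image(image_widget, pixbuf):
    # Runs in the GTK main loop, so touching the widget is safe here.
    image_widget.set_from_pixbuf(pixbuf)
    return False  # one-shot idle handler, do not reschedule

def _worker(image_widget, path):
    # The heavy lifting happens in the worker thread...
    pixbuf = gtk.gdk.pixbuf_new_from_file(path)
    # ...but the UI update is delegated to the main thread, so no locking
    # or atomic operation is needed around the widget itself.
    gobject.idle_add(_set_main_image, image_widget, pixbuf)

# Usage (hypothetical): start the worker from a thread, e.g.
# threading.Thread(target=_worker, args=(image, 'page.png')).start()
```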