#60 Several issues concerning image drawing

Milestone: SVN
Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2013-05-11
Created: 2013-05-09
Creator: Ark
Private: No

There are several issues concerning the way MComix draws images (most important first):

  1. Currently, a (scaled) version of the entire image (depending on the zoom factor) is stored in main memory. This has at least two disadvantages: First, it requires large amounts of RAM, especially if you zoom in. (I already ran into problems because of that.) Second, it takes a long time until you can see the image for the first time. It would be better to draw only the visible part of the image on the fly, without any additional memory used for offscreen buffering; see the sketch after this list. That is, the viewport of the main window should work just the same way the magnifying lens already does. (If you use the lens, it does not buffer the scaled version of the entire image but scales up on the fly, which is way faster.)
  2. When I open a quite large image (around 2300×7600) using "Best fit" mode at the smallest possible zoom factor, it takes around 15 seconds to render the image on the screen, even though it only occupies about 10×30 pixels at that zoom factor. Re-rendering occurs every time I attempt to zoom out, even if the zoom factor does not change anymore. (Scaling mode is set to "Normal (fast)".)
  3. "Enhance image" currently applies sharpening to the scaled version of the image, not the original image. Thus, if you zoom to 50%, the kernel used for sharpening appears to be twice as big in every dimension. On the other hand, if you zoom to 800%, sharpening is almost useless since the actual kernel becomes quite small (relative to the original image). Side effect: The more you zoom in, the more time sharpening will consume because of that.
  4. In draw_image there is a piece of code with a comment that says "Don't stack up redraws." I'm not sure, but it looks like it needs a mutex to emulate an atomic compare-and-set operation on self._waiting_for_redraw.
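
To make point 1 above more concrete, here is a minimal PyGTK sketch of an expose-driven view that scales only the damaged rectangle of the unscaled source pixbuf, much like the lens already does, instead of keeping a full scaled copy in memory. The class and attribute names are made up for illustration and are not MComix code:

```
import gtk

class OnTheFlyView(gtk.DrawingArea):
    """Scales only the exposed part of the source pixbuf, like the lens."""

    def __init__(self, src_pixbuf, zoom):
        gtk.DrawingArea.__init__(self)
        self._src = src_pixbuf      # unscaled source image
        self._zoom = zoom           # current zoom factor
        # Report the full scaled size so layout and scrollbars stay correct.
        self.set_size_request(int(src_pixbuf.get_width() * zoom),
                              int(src_pixbuf.get_height() * zoom))
        self.connect('expose-event', self._on_expose)

    def _on_expose(self, widget, event):
        area = event.area  # only the damaged rectangle is rendered
        dest = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,
                              self._src.get_has_alpha(), 8,
                              area.width, area.height)
        # Negative offsets shift the scaled source so that the part under
        # 'area' lands at (0, 0) of the small destination buffer.
        self._src.scale(dest, 0, 0, area.width, area.height,
                        -area.x, -area.y, self._zoom, self._zoom,
                        gtk.gdk.INTERP_TILES)
        widget.window.draw_pixbuf(widget.get_style().fg_gc[gtk.STATE_NORMAL],
                                  dest, 0, 0, area.x, area.y,
                                  area.width, area.height)
        return True
```

Memory use is then bounded by the size of the visible region rather than by the zoom factor, at the price of rescaling on every expose.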

Related

Bugs: #63

Discussion

  • Oddegamra
    2013-05-11

    1. I can see where you are coming from. Generally, MComix does not deal with large images very well. There are way too many image operations that create a new copy of the image in memory. There's also the pixbuf cache, which takes up a respectable amount of memory itself even at moderate settings (e.g. 5 images). All of this combined actually prevents me from opening an archive with a few high-res scans, as the Python Imaging Library constantly runs out of memory. I'm not sure what the problem is here, since the library does not seem to use a very large amount of memory even when more should be available in practice; maybe that is a limitation of the Python C plugin API? In any case, the lens and the main screen aren't that similar. For one, the lens is a completely independent image that is simply drawn on the screen with a scaled pixbuf taken from the main image. Pretty easy stuff, essentially. The main image, however, is a pixbuf assigned to a gtk.Image, so GTK handles all the nasty code. And by nasty, I'm mainly talking about computing the size required by the window, setting the viewport, repainting parts of the image that have been damaged (mouse-over, window overlapping and so on) and handling scrolling. The latter is probably the most problematic part. If you wanted to display only a part of the image (and keep only that part in memory), you'd have to tell GTK how large the image is (for correct scrollbar rendering), dynamically compute the position within the image (e.g. set the correct scrolling position), decide when a scrolling operation hits the end of the image, and so forth (see the ScrolledWindow sketch after this discussion). And I'm not even getting into actually drawing the image. In short, what I consider the "core" of the program would need to be rewritten entirely. I'm not too keen on tackling that part, naturally.
    2. That's Pixbuf.scale_simple, according to my profiling stats. When I went from 2% down to 0.8% zoom, the execution time went up from 0.2 s over 5 s to 15 s. Apparently, there is a major performance problem somewhere in there when downscaling to very small sizes. I'm not sure if there is anything that can be done about that. Concerning redrawing, the zoom code originally triggered a method zoom_changed, which then executed draw_image in main.py via a callback. This callback was not triggered when the previous and current zoom levels were identical. Since this piece of code was changed during refactoring, it probably has to be restored in main.py around manual_zoom_in, manual_zoom_out and manual_zoom_original (see the zoom-guard sketch after this discussion).
    3. Good call. We could move the execution of ImageEnhancer upward in _draw_image to a point where no scaling has been performed yet (i.e. right after get_pixbuf). Of course, this also has the downside that large images, which would normally be scaled down to a reasonable size before being enhanced, will now be a lot more costly to enhance (see the enhance-before-scale sketch after this discussion).
    4. I'm not 100% sure, but I think this comment refers to calls to draw_image being done while processing is still somewhere in _draw_image. For example, someone hitting ZoomOut while the image is still in the process of being zoomed. However, I'm not sure this can still happen, as UI events are no longer processed while _draw_image is being performed (they used to be). In any case, all of this happens in the main thread. Worker threads do not call UI operations directly (or delegate the call to the main thread), so an atomic operation should not be needed.
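
Regarding the scrolling bookkeeping mentioned in point 1: a hedged sketch (reusing the hypothetical OnTheFlyView from the sketch after the original list) of how much of that GTK could still handle on its own. Because the view requests the full scaled size, a gtk.ScrolledWindow provides the scrollbars and stops at the image borders, while expose events drive the actual drawing:

```
import gtk

def show_in_window(src_pixbuf, zoom):
    view = OnTheFlyView(src_pixbuf, zoom)   # hypothetical widget from above
    scrolled = gtk.ScrolledWindow()
    scrolled.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
    scrolled.add_with_viewport(view)        # scrollbars follow the size request

    window = gtk.Window()
    window.set_default_size(800, 600)
    window.add(scrolled)
    window.connect('destroy', gtk.main_quit)
    window.show_all()
    gtk.main()
```

Whether this holds up for fit-to-window modes, double-page layout and the damaged-region repainting mentioned above is exactly the open question; the snippet only shows the general wiring.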
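
Zoom-guard sketch for point 2 (illustrative names; the real code would live around manual_zoom_in, manual_zoom_out and manual_zoom_original in main.py): the redraw callback is only invoked when the clamped zoom level actually changes, so hitting "zoom out" repeatedly at the minimum level no longer triggers a re-render.

```
class ZoomHandler(object):
    """Only redraw when the effective zoom level really changed."""

    def __init__(self, redraw_callback, min_level=0.01, max_level=10.0):
        self._level = 1.0
        self._redraw = redraw_callback
        self._min = min_level
        self._max = max_level

    def set_level(self, new_level):
        new_level = max(self._min, min(new_level, self._max))
        if new_level != self._level:   # skip no-op zoom requests
            self._level = new_level
            self._redraw()
```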
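
Enhance-before-scale sketch for point 3, shown with plain PIL calls rather than MComix's ImageEnhancer: sharpen at the native resolution first and scale afterwards, so the effective kernel size no longer depends on the zoom factor.

```
from PIL import Image, ImageEnhance

def enhance_then_scale(path, zoom, sharpness=1.5):
    im = Image.open(path)
    # Sharpen at native resolution so the kernel size is zoom-independent...
    im = ImageEnhance.Sharpness(im).enhance(sharpness)
    # ...then scale for display.  Note the cost mentioned above: enhancement
    # now always runs over the full-resolution image, even when zooming out.
    size = (max(1, int(im.size[0] * zoom)), max(1, int(im.size[1] * zoom)))
    return im.resize(size, Image.BILINEAR)
```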
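
Redraw-guard sketch for point 4, under the assumption stated above that everything runs on the GTK main thread (the names are illustrative, not MComix's actual code):

```
import gobject

class Redrawer(object):
    """'Don't stack up redraws': at most one redraw is queued at a time.

    As long as request_redraw() is only called from the GTK main thread,
    the plain flag is race-free and no mutex/CAS is needed; a lock would
    only matter if worker threads called this method directly instead of
    delegating via gobject.idle_add.
    """

    def __init__(self):
        self._waiting_for_redraw = False

    def request_redraw(self):
        if self._waiting_for_redraw:
            return                      # a redraw is already queued
        self._waiting_for_redraw = True
        gobject.idle_add(self._draw_image)

    def _draw_image(self):
        try:
            pass                        # actual drawing would happen here
        finally:
            self._waiting_for_redraw = False
        return False                    # don't run the idle callback again
```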