From: Eric E. <ems...@ob...> - 2004-12-20 09:01:24
Hi, after some successful attempts at migrating routines from several plotting and/or analysis packages (pgplot, Midas, Iraf, etc.) to matplotlib, I have now hit a big wall: the speed at which it can display a reasonably large image.

I have a ''local'' library to read FITS files [2D images] in python (after SWIG wrapping). It converts these files into float numerix arrays. I was then testing the ability of matplotlib to display these 2D arrays. Until now all was fine, since I tested matplotlib on very small images (50x50 pixels). Yesterday I tried with a 1600x1600 pixel image. NOTE: this is a very reasonable (and typical) size in my work, and I expect much, much bigger ones in the near future (up to 20 000 x 20 000).

==> Matplotlib takes ~20 seconds to display it !!!
(after 12 seconds the window opens, and then it takes another 8 seconds for the image to be displayed)

(as compared to less than 0.2 sec for Midas, Iraf and others, so a factor of 100 at least!!! and less than a second using the ppgplot routines)

Some info: running on a 1.6 GHz / 512 MB RAM Centrino, linux, backend TkAgg, numarray, float array of 1600x1600 pixels. Using either imshow or figimage in ''ipython -pylab'' (tried different interpolation schemes, the one I want being ''nearest'' in general).

To be frank, this is a killer for me, since I need to work on such images (signal processing, analysing) and display them each time, changing the levels, centring, etc. There is no way I will wait 1 min for 3 successive displays...

So the questions now:
- am I doing something wrong (backend, way to load the array, ...) or is it intrinsic to matplotlib?
- is there a way out? (basically the same question...)

If there is no way out, I may have to just abandon (sigh) matplotlib for displaying images. But if there is one, please let me know!!

Thanks
Eric

--
===============================================================
Observatoire de Lyon                 ems...@ob...
9 av. Charles-Andre                  tel: +33 4 78 86 83 84
69561 Saint-Genis Laval Cedex        fax: +33 4 78 86 83 86
France         http://www-obs.univ-lyon1.fr/eric.emsellem
===============================================================
From: Arnd B. <arn...@we...> - 2004-12-20 09:21:02
Hi Eric,

I am certain that John will be very delighted to have another of the "matplotlib is slow" e-mails ;-). Let me therefore add my 2 cents on this before he wakes up (hope I have the timezones right) and gives a qualified comment ...

On Mon, 20 Dec 2004, Eric Emsellem wrote:

[... timings etc. snipped ...]

> Some info:
>
> running on a 1.6 Ghz/512 RAM centrino, linux, backend TkAgg, numarray,
> float array of 1600x1600 pixels.
> Using either imshow or figimage in ''ipython -pylab'' (tried different
> interpolation schemes, the one I want being ''nearest'' in general)

Did you also try a Numeric array?

Another point: the imshow routine has quite a bit of functionality (color maps, interpolation, even alpha!!!) which might cost some time (for example, I don't know whether all of this is done in python). You can also pass a PIL (Python Image Library) image to imshow. So I would suggest:

a) try a Numeric array
b) try to convert your matrix to a PIL image and determine the time it takes to display that. (I would hope that this is much faster)

> To be frank, this is a killer for me, since I need to work on such images
> (signal processing, analysing) and display them each time changing the
> levels, centring etc etc. There is no way I will wait for 1 mn for 3
> successive displays...
>
> So the question now:
> - am I doing something wrong (backend, way to load the array, ...) or is it
> intrinsic to matplotlib ?

To really answer this question it would be useful if you post your code (presumably simplified by creating some mock data rather than reading from an external file). That way one could also try this on other platforms ...

[...]

Best, Arnd
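Arnd's suggestion (b) amounts to collapsing the float data to 8 bits before handing it to the display layer. In modern numpy terms (numpy postdates this thread; the function name and the linear rescale below are illustrative, not anything from the thread), the conversion looks roughly like this:

```python
import numpy as np

def to_uint8(img):
    """Linearly rescale a float image to 0-255 as uint8.

    This is the kind of one-off conversion you would do before
    building a PIL image or blitting raw pixels to a canvas.
    """
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# A 4x4 ramp maps its minimum to 0 and its maximum to 255
ramp = np.arange(16, dtype=float).reshape(4, 4)
out = to_uint8(ramp)
```

The point of the exercise is that an 8-bit buffer is what the display ultimately wants, so any per-draw work done in float after this conversion is wasted.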
From: John H. <jdh...@ac...> - 2004-12-20 16:01:15
>>>>> "Eric" == Eric Emsellem <ems...@ob...> writes:

    Eric> If there is no way out, I may have to just abandon (sigh)
    Eric> matplotlib for displaying images. But if there is one,
    Eric> please let me know!!

What version of matplotlib are you using?

    >>> import matplotlib
    >>> matplotlib.__version__

JDH
From: John H. <jdh...@ac...> - 2004-12-20 16:50:23
>>>>> "Eric" == Eric Emsellem <ems...@ob...> writes:

    Eric> Hi, I am using:
    Eric> IPython 0.6.6 with Python 2.3.3 and matplotlib-0.65

OK, too bad -- one of the most common causes of slow behavior on earlier versions of matplotlib was a Numeric/numarray incompatibility setting, as discussed here: http://matplotlib.sourceforge.net/faq.html#SLOW. But with 0.65, this shouldn't be a problem. Just to be certain:

- do you have Numeric and numarray on your system?
- what is the output of any test script when run with --verbose-helpful?

It is important that you are not mixing Numeric and numarray in your code, because then matplotlib falls back on the python sequence protocol. So I just want to ensure that you are using numarray consistently and that matplotlib knows it.

    Eric> (checked that this is indeed the case)

    Eric> P.S.: by the way, just to let you know (and I will pass
    Eric> the message on the forum) I am sincerely very impressed by
    Eric> matplotlib in general (in fact 5 people just switched to it
    Eric> in the last 2 weeks and our group only amounts to 10 people,
    Eric> so 1/2 still to convince!). So this kind of ''negative''
    Eric> feedback/question should not undermine the rest of the soft
    Eric> for sure!!!

I passed it on already. I suspect there are a number of others who are interested in and can contribute to this discussion, so I'll keep this on list for a bit. Thanks for the moral support :-)

A bit of background: as Arnd pointed out, imshow is probably doing a lot more under the hood than you need, so I suggest we stick with figimage for a while. I assume you're an astronomer, since you mentioned IRAF, no? Perry Greenfield, also an astronomer, suggested figimage because he mainly wanted a pixel dump of his data, without the interpolation that imshow provides.

Do you need color mapping, or mainly grayscale? The reason I ask is that for simplicity of coding I convert everything to RGBA under the hood, which is obviously inefficient in memory and CPU time. At some point (maybe now), we'll have to special-case grayscale for people who can't pay those extra costs. I don't know if this is where your bottleneck is.

So assuming figimage for now, I wrote a test script to get some numbers for comparison. Then I noticed I had introduced a bug into figimage in 0.65, when I wrongly added "hold" control to it, where it doesn't belong. So replace figimage in pylab.py with the function I'm including below before running any tests.

Since most people can't fit a 1600x1600 image on screen (I can't), we'll need a big figure window at least to get most of it -- that's what the figsize setting is for. FYI, my suspicion is we should be able to get acceptable performance for 1600x1600 or 2000x2000 and images of this magnitude, with some basic refactoring and optimization. I doubt we'll get 20k x 20k in the foreseeable future, in large part because the underlying image library from antigrain only does 4096x4096.

With this script, I'm running in interactive mode ("ion") so the figure will be created when the figimage call is made; the script will then immediately terminate rather than go into the tk mainloop, since show is not called.

    from pylab import *
    ion()
    rc('figure', figsize=(13,12))
    X = rand(1600,1600)
    figimage(X, cmap=cm.hot)
    #show()

Here are my numbers. Note I can get almost a 2x performance boost switching to GTKAgg -- there are known performance issues blitting a large image to tkinter. Is GTKAgg a possibility for you?

    peds-pc311:~> time python test.py --Numeric -dTkAgg
    5.750u 1.730s 0:08.20 91.2% 0+0k 0+0io 3286pf+0w
    peds-pc311:~> time python test.py --Numeric -dGTKAgg
    3.280u 0.840s 0:04.16 99.0% 0+0k 0+0io 4579pf+0w
    peds-pc311:~> time python test.py --numarray -dTkAgg
    8.830u 1.100s 0:09.96 99.6% 0+0k 0+0io 3455pf+0w
    peds-pc311:~> time python test.py --numarray -dGTKAgg
    4.730u 0.560s 0:05.36 98.6% 0+0k 0+0io 4747pf+0w

This is on a 3GHz P4 with 1GB of RAM. How do the numbers for your system compare?

I'll spend some time with the profiler looking for some low hanging fruit.

JDH

    # the modified figimage function, with the erroneous "hold" control
    # (which referenced an undefined variable) removed
    def figimage(*args, **kwargs):
        try:
            ret = gcf().figimage(*args, **kwargs)
        except ValueError, msg:
            msg = raise_msg_to_str(msg)
            error_msg(msg)
            raise RuntimeError(msg)
        except RuntimeError, msg:
            msg = raise_msg_to_str(msg)
            error_msg(msg)
            raise RuntimeError(msg)
        draw_if_interactive()
        gci._current = ret
        return ret
    figimage.__doc__ = Figure.figimage.__doc__
From: John H. <jdh...@ac...> - 2004-12-20 17:07:29
>>>>> "John" == John Hunter <jdh...@ac...> writes:

    John> I'll spend some time with the profiler looking for some low
    John> hanging fruit.

God bless the profiler. It turns out over half of the time to display this image is spent in the normalizer, which takes image data in an arbitrary scale and maps it into the unit interval:

    http://matplotlib.sourceforge.net/matplotlib.colors.html#normalize

The normalizer handles a lot of special cases that you may not need. In fact, your data may already be normalized. So you can write a custom normalizer:

    from pylab import *

    def mynorm(X):  # do nothing, it's already normalized
        return X

    ion()
    rc('figure', figsize=(13,12))
    X = rand(1600,1600)
    figimage(X, cmap=cm.hot, norm=mynorm)

This change alone gives me more than a 2x speedup. So with GTKAgg + a custom normalizer (in this case a do-nothing normalizer) you'll be running 4-5 times faster than you were before, methinks.

    peds-pc311:~/python/projects/matplotlib> time python ~/test.py --numarray
    1.650u 0.450s 0:02.13 98.5% 0+0k 0+0io 4746pf+0w

I'll keep digging through the profiler...

JDH
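What the default normalizer does on every draw is essentially an affine rescale of the data into [0,1]. A rough numpy sketch of that mapping (illustrative only -- matplotlib's actual normalize class also handles user-set limits, masked values, and other special cases, which is exactly the overhead the do-nothing norm above skips):

```python
import numpy as np

def normalize(X, vmin=None, vmax=None):
    """Map X linearly into the unit interval: vmin -> 0.0, vmax -> 1.0."""
    X = np.asarray(X, dtype=float)
    if vmin is None:
        vmin = X.min()
    if vmax is None:
        vmax = X.max()
    if vmax == vmin:  # constant image: nothing to scale
        return np.zeros_like(X)
    return (X - vmin) / (vmax - vmin)

# 0 -> 0.0, 5 -> 0.5, 10 -> 1.0
u = normalize(np.array([0.0, 5.0, 10.0]))
```

If the data already lives in [0,1], as in Eric's case after his own preprocessing, this whole pass is redundant work, which is why bypassing it wins so much time.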
From: Eric E. <ems...@ob...> - 2004-12-20 17:21:24
Test done, and this is correct. Here are the sets of timings: the first line of each pair is without ''mynorm'', the second is with ''mynorm''.

    time python test.py --Numeric -dTkAgg
    10.432u 1.663s 0:12.37 97.7% 0+0k 0+0io 0pf+0w
    7.258u 1.302s 0:08.64 98.9% 0+0k 0+0io 0pf+0w

    time python test.py --Numeric -dGTKAgg
    5.209u 0.845s 0:06.10 99.0% 0+0k 0+0io 0pf+0w
    4.226u 0.700s 0:04.98 98.7% 0+0k 0+0io 0pf+0w

    time python test.py --numarray -dTkAgg
    16.391u 1.036s 0:17.96 96.9% 0+0k 0+0io 0pf+0w
    5.690u 0.829s 0:06.85 95.0% 0+0k 0+0io 0pf+0w

    time python test.py --numarray -dGTKAgg
    8.225u 0.546s 0:08.96 97.7% 0+0k 0+0io 0pf+0w
    3.363u 0.445s 0:03.86 98.4% 0+0k 0+0io 0pf+0w

Another factor of 10 and you are faster than Midas.. :-)

Eric

John Hunter wrote:
[... John's previous message quoted in full; snipped ...]
From: John H. <jdh...@ac...> - 2004-12-20 19:09:57
>>>>> "John" == John Hunter <jdh...@ac...> writes:

    John> This change alone gives me more than a 2x speedup. So with
    John> GTKAgg + a custom normalizer (in this case a do nothing
    John> normalizer) you'll be running 4-5 times faster than you were
    John> before, me thinks.

Of the remaining time, about half of it, on my system, is spent in the colormapping. Almost all of this is in LinearSegmentedColormap.__call__, and is split between these numerix methods:

    0.12s  zeros : create the RGBA colormapped output array
    0.23s  where : make sure the data are in the [0,1] interval
    0.49s  take + makeMappingArray : actually doing the mapping

You may not want or need some of the overhead and extra checking that matplotlib does in the colormapping. Do you want colormapping at all, by the way? If not, you can special-case colormapping by simply converting to RGB in the 0-255 interval. Note, as I said, it is a bit inelegant that everything has to go through RGB even if you don't need it. I have some ideas here, but that will have to wait a bit.

    from pylab import *

    def mynorm(X):
        return X

    class mycmap:
        name = "my gray"
        def __call__(self, X, alpha=None):
            # what is the fastest way to make an MxNx3 array simply
            # duplicating MxN on the last dimension?
            m,n = X.shape
            Z = zeros((m,n,3), typecode=X.typecode())
            Z[...,0] = X
            Z[...,1] = X
            Z[...,2] = X
            return Z

    #norm = None  # default
    norm = mynorm
    #cmap = None  # default
    cmap = mycmap()

    ion()
    rc('figure', figsize=(13,12))
    X = rand(1600,1600)
    figimage(X, cmap=cmap, norm=norm)

With the following numbers:

    # default cmap
    peds-pc311:~/python/projects/matplotlib> time python ~/test.py --numarray -dGTKAgg
    1.630u 0.430s 0:02.12 97.1% 0+0k 0+0io 4746pf+0w

    # custom cmap
    peds-pc311:~/python/projects/matplotlib> time python ~/test.py --numarray -dGTKAgg
    1.080u 0.290s 0:01.42 96.4% 0+0k 0+0io 4745pf+0w

Of this 1.42 seconds, the big chunks are:

    0.660  the time it takes to do "from pylab import *"; half of this is
           initializing the colormaps and dicts in cm. We should do some
           work to get this number down, but it is a fixed cost that
           doesn't scale with array size
    0.320  RandomArray2.py:97(random) : creating the dummy data; we can
           ignore this
    0.540  backend_gtkagg.py:33(draw) : this creates the entire figure;
           of this, 0.420s is in the function FigureImage.make_image,
           most of which is in the extension code _image.fromarray

So ignoring for now the fixed cost of loading modules and creating your arrays (e.g. by running in pylab with the "run" function you only pay the module loading costs once), the next place where a big win is going to come from is in optimizing fromarray.

Note, the matplotlib figure canvas is a drawable widget. If you want bare-metal speed and don't want any of the features matplotlib offers (interpolation, normalization, colormapping, etc.), you can always dump your image directly to the gtk canvas, especially if you compile gtk with image support. This may be useful enough for us to consider a backend figimage pipeline which would bypass a number of extra copies. Currently for gtkagg we have

    numarray -> agg image buffer -> figure canvas agg buffer -> gtk canvas

With some care, we might be able to simplify this, ideally doing

    numarray -> gtk canvas   # for screen
    numarray -> agg buffer   # for dumping to PNG

There are some subtleties to think through as to how this would work. It would probably need to be a canvas method and not a renderer method, for one thing, which would take a bit of redesign. And then Midas would have to watch its back!

JDH
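The question buried in mycmap's comment -- the fastest way to duplicate an MxN array along a new length-3 last axis -- has several standard answers in modern numpy (numpy postdates this thread; the three spellings below are illustrative and all produce the same MxNx3 gray image):

```python
import numpy as np

X = np.random.rand(4, 4)

# Copying variants: allocate an MxNx3 array and fill it.
Z1 = np.dstack((X, X, X))                       # stack along a new last axis
Z2 = np.repeat(X[:, :, np.newaxis], 3, axis=2)  # repeat a length-1 axis

# Zero-copy variant: a read-only broadcast view, cheapest when the
# consumer only reads the buffer.
Z3 = np.broadcast_to(X[:, :, np.newaxis], X.shape + (3,))
```

The broadcast view costs no memory traffic at all, so it is the natural choice when the downstream code (like an image renderer) only needs to read the channels.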
From: Eric E. <ems...@ob...> - 2004-12-21 10:27:01
Hi again,

thanks a lot for the fixes and comments on figimage. This looks much, much better now. Indeed, a local matplotlib user had also pointed out to me the time it spent on cmap.

By the way, the problem I have now is that I would like a quick way of looking at my images (the 1600x1600 ones) while being able to size them INTO the actual opened window (so basically this calls for using imshow and not figimage). Just to be clear, here is what I usually do (e.g. with Midas):

1/ I create a display window with the size (and location on my desktop) I wish.
2/ I ''load'' my big image in there. Depending on the size it will take the full window or not.
3/ Then, depending on the region I wish to look at, I scale and recenter the image to e.g. have it all in the window or just zoom in on some regions.
4/ Then I can interact with the cursor to e.g. make a cut of the image (and display it in another window) or get some coordinates.

So this would look like this in Midas:

    1/ create/disp 1 600,600   # create display window number 1 with 600x600 px
    2/ load image.fits         # load the image from a fits file
    3/ load image.fits scale=-2 center=0,0
                               # center the image at coord 0,0 and scale it
                               # down by a factor of 2 (- = down, + = up)
    4/ get/curs                # activate the cursor on the image so that a
                               # click prints out the corresponding coordinates

By using imshow in matplotlib I think I can solve all these things (and as soon as the .key fields are activated I can use the keyboard to interact with the display), but then it will again show its very slow behaviour (interpolation, etc...)? Would there be a way to have such a command but with much quicker loading / interaction?

(Note that the pan and zoom tools would probably do it -- although they would need to keep the coordinates right so there is no confusion there -- but at the moment this is not usable in imshow with large images, as it is much too slow.)

Cheers, Eric
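Eric's step 4 -- a cursor click that reports data coordinates -- reduces to mapping a pixel index through the extent the image is displayed with. A small pure-python sketch of that mapping (the function name and the pixel-center convention are illustrative assumptions, not Midas or matplotlib behaviour):

```python
def pixel_to_data(col, row, shape, extent):
    """Map an image pixel (col, row) to data coordinates, given the
    (xmin, xmax, ymin, ymax) extent the image is displayed with.
    Pixel centers are used, so pixel 0 maps half a pixel inside xmin.
    """
    nrows, ncols = shape
    xmin, xmax, ymin, ymax = extent
    x = xmin + (col + 0.5) * (xmax - xmin) / ncols
    y = ymin + (row + 0.5) * (ymax - ymin) / nrows
    return x, y

# The center pixel of a 100x100 image spanning [0,10]x[0,10]
x, y = pixel_to_data(49, 49, (100, 100), (0.0, 10.0, 0.0, 10.0))
```

Keeping this mapping consistent while scaling and recentering is exactly the "coordinates right" bookkeeping Eric is worried about.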
From: Arnd B. <arn...@we...> - 2004-12-21 10:55:29
Hi Eric,

On Tue, 21 Dec 2004, Eric Emsellem wrote:

> Hi again,
> thanks a lot for the fixing and comments on figimage. This looks much
> much better now. Indeed a local matplotlib user had also pointed
> out to me the time it spent on cmap.
>
> By the way, the problem I have now is that I would like to have a quick
> way of looking at my images (the 1600x1600) but being able to
> size it INTO the actual opened window (so basically this calls for using
> imshow and not figimage).

Just a few thoughts:

- you could gain a speed-up by displaying a reduced image, e.g. by passing image_mat[::2,::2], which would correspond to an 800x800 image (but at the latest when zooming one would need the full image ;-(.
- maybe you have to code the zooming yourself in terms of figimage (in particular if you are still aiming at the 20000x20000 case).
- is it right that from imshow you would only need the
      """extent is a data xmin, xmax, ymin, ymax for making image plots
      registered with data plots. Default is the image dimensions in pixels"""?
  Somehow I would guess that this should not be a part which is too slow, so maybe one could do something like this (without any interpolation) also for figimage.

Not-more-than-0.5-cents-this-time-ly yours, Arnd
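Arnd's first point -- displaying a strided copy -- is one line of slicing; the sketch below (modern numpy, illustrative names) also returns the extent in original-pixel units, so the reduced image stays registered with the full image's coordinates and zoom readouts remain honest:

```python
import numpy as np

def decimate_for_display(img, step):
    """Return every step-th pixel of img, plus the extent (in pixel
    units of the ORIGINAL image) that the reduced array should be
    displayed with so data coordinates remain correct."""
    small = img[::step, ::step]
    nrows, ncols = img.shape
    extent = (0, ncols, 0, nrows)  # same data window as the full image
    return small, extent

big = np.arange(16.0).reshape(4, 4)
small, extent = decimate_for_display(big, 2)
```

Because slicing with a stride returns a view, the reduction itself is essentially free; the cost only comes when the display pipeline copies the reduced array, which is step**2 times smaller.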
From: Eric E. <ems...@ob...> - 2004-12-20 17:17:42
Hi,

thanks for the feedback. To answer your questions:

- I have both Numeric and numarray, but I am using numarray in principle.
- Running in verbose mode, here is the output:

    matplotlib data path /usr/share/matplotlib
    loaded rc file /home/emsellem/.matplotlibrc
    matplotlib version 0.65
    verbose.level helpful
    interactive is False
    platform is linux2
    numerix numarray 1.0
    font search path ['/usr/share/matplotlib']
    loaded ttfcache file /home/emsellem/.ttffont.cache
    Could not load matplotlib icon: 'module' object has no attribute
    'window_set_default_icon_from_file'
    backend GTKAgg version 2.0.0

- Then a VERY important note: yes, indeed, I am an astronomer, and I am used to having a fixed window (I first define its size or use some default) and THEN ONLY load the image itself. This is how I use ppgplot in python, or Iraf, or Midas. I indeed do not use any interpolation scheme there.
- ALSO: there seems to be a bug in the figimage routine, as it shows the image OUTSIDE an axis filled with white (this may be because I am not doing the right thing, though...).
- And finally, here are the timings you asked for (they seem reasonable considering the difference in CPU/RAM):

Hope this helps
Eric

==================================================
Before correcting figimage
=========================

    time python test.py --Numeric -dTkAgg
    14.146u 2.363s 0:17.79 92.7% 0+0k 0+0io 10pf+0w
    time python test.py --Numeric -dGTKAgg
    9.795u 1.697s 0:13.63 84.2% 0+0k 0+0io 12pf+0w
    time python test.py --numarray -dTkAgg
    22.640u 1.443s 0:25.31 95.1% 0+0k 0+0io 13pf+0w
    time python test.py --numarray -dGTKAgg
    15.125u 0.925s 0:16.26 98.6% 0+0k 0+0io 0pf+0w

After correcting figimage
=========================

    time python test.py --Numeric -dTkAgg
    10.432u 1.663s 0:12.37 97.7% 0+0k 0+0io 0pf+0w
    time python test.py --Numeric -dGTKAgg
    5.209u 0.845s 0:06.10 99.0% 0+0k 0+0io 0pf+0w
    time python test.py --numarray -dTkAgg
    16.391u 1.036s 0:17.96 96.9% 0+0k 0+0io 0pf+0w
    time python test.py --numarray -dGTKAgg
    8.225u 0.546s 0:08.96 97.7% 0+0k 0+0io 0pf+0w
From: Perry G. <pe...@st...> - 2004-12-21 17:26:12
On Dec 20, 2004, at 3:55 AM, Eric Emsellem wrote:

Sorry to get into this discussion so late.

> Yesterday I tried with a 1600x1600 pixels image. NOTE: this is a very
> reasonable size (and typical) in my work and I expect much much bigger
> ones in a near future (up to 20 000 x 20 000).

Wow, I want your 20Kx20K image display! Seriously, it sounds like you do not intend to display the whole thing at once. In that case I would consider doing the slicing and reduction outside of matplotlib and then displaying the subsampled or sliced region using the size of the window. Applying color transformations to the whole image is going to result in a lot of wasted cpu time. I also recognize that 1.6K^2 displays are not that unreasonable, so reasonable performance at this size is something one wants to achieve.

> ==> Matplotlib takes ~20 seconds to display it !!!
> (after 12 seconds the window opens, and then it takes
> another 8 seconds to be displayed)

As John later alluded to, the time for the window to come up is a one-time cost if you are running from an interactive prompt. It shouldn't be paid for subsequent display updates.

> (as compared to less than .2 sec for Midas, Iraf and others so a
> factor of 100 at least!!! and less than a second using the ppgplot
> routines).

To better understand the comparison: when you use Midas, what are you displaying to? It's been a long, long time since I've used Midas, so I forget (or, more likely, it's changed) how image display is done. Are you using DS9, ximtool or SAOIMAGE? Or a different display mechanism?

By the way, we do have a module that allows displaying numarray images in DS9 and related image display programs (google numdisplay). But if you are hoping to combine matplotlib features with image display, this isn't going to help much. If you are looking mainly to use DS9 or ximtool features, though, you can just use them directly and save yourself the trouble of trying to emulate them (not that they wouldn't be a nice thing to have in matplotlib).

Perry
From: John H. <jdh...@ac...> - 2004-12-21 18:08:20
>>>>> "Perry" == Perry Greenfield <pe...@st...> writes:

    Perry> As John later alluded to, the time for the window to come
    Perry> up is a one time cost if you are running from an
    Perry> interactive prompt. It shouldn't be paid for subsequent
    Perry> display updates.

I made some small changes which helped here -- e.g., deferring the initialization of the LUTs until they are actually requested. This shaved 0.3s off startup time on my system. With Todd's help, I also made some changes in the core "fromarray" extension code which delivered some speedups, and removed some extra checks in the colormapping code which are not needed for data that are properly normalized. I also think I found and fixed redundant calls to draw in some backends, due to improper event handling and hold handling that crept into 0.65.

Here are my current numbers for a 1600x1600 image:

    # GTKAgg, default normalization and colormapping
    matplotlib 0.65 figimage : 9.97s
    matplotlib 0.65 imshow   : 9.91s
    matplotlib CVS  figimage : 5.23s
    matplotlib CVS  imshow   : 5.18s

    # GTKAgg, prenormalized data and default ("hot") colormapping
    matplotlib 0.65 figimage : 3.46s
    matplotlib 0.65 imshow   : 3.37s
    matplotlib CVS  figimage : 1.95s
    matplotlib CVS  imshow   : 2.01s

    # GTKAgg, prenormalized data and custom grayscale colormapping
    matplotlib 0.65 figimage : 2.05s
    matplotlib 0.65 imshow   : 1.95s
    matplotlib CVS  figimage : 1.15s
    matplotlib CVS  imshow   : 1.21s

So the situation is improving. As I noted before, interaction with plots via the toolbar should also be notably faster.

This would make a good FAQ....

JDH
From: John H. <jdh...@ac...> - 2004-12-21 18:34:08
>>>>> "John" == John Hunter <jdh...@ac...> writes:

    John> I made some small changes which helped here - eg, deferring
    John> the initialization of the LUTs until they are actually
    John> requested. This shaved 0.3 s off startup time on my system.
    John> With Todd's help, I also made some changes in the core
    John> "fromarray" in extension code which delivered some speedups,
    John> and removed some extra checks in the colormapping code which
    John> are not needed for data that are properly normalized. I
    John> also think I found and fixed redundant calls to draw in some
    John> backends due to improper event handling and hold handling
    John> that crept into 0.65.

Well, Xavier Gnata just pointed out to me off list that almost half the cost of the default image handling was in the normalization calls to min and max. After a little poking around, I discovered we were using python's builtin min and max here, which means the sequence API. Ouch! So we get another 2x speedup, on top of the numbers I just posted, using default normalization and colormapping.

    # GTKAgg, default normalization and colormapping

    # 0.65
    matplotlib 0.65 figimage : 9.97s
    matplotlib 0.65 imshow   : 9.91s

    # optimization numbers from my last post
    matplotlib figimage : 5.23s
    matplotlib imshow   : 5.18s

    # as above, but using nxmin and nxmax
    matplotlib figimage : 2.21s
    matplotlib imshow   : 2.24s

So out of the box the next matplotlib will be more than 4x faster than the last release for images. A long way from MIDAS and IRAF, but still satisfying for a day's work.

JDH
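The trap John describes -- the builtin min and max walking a large array element by element through the sequence protocol -- is easy to reproduce in modern numpy (illustrative sketch; note the builtin spelling needs a flat iterator of scalars, since iterating a 2-D array yields whole rows):

```python
import numpy as np

X = np.random.rand(200, 200)

slow_min = min(X.flat)  # sequence protocol: one Python-level comparison per element
fast_min = X.min()      # a single C-level reduction over the buffer

# Both spellings agree on the value; only the cost differs, by orders
# of magnitude on large arrays -- which is where the extra 2x
# whole-pipeline speedup reported above came from.
assert slow_min == fast_min
```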