From: Tony Yu <ts...@gm...> - 2012-05-24 14:00:14
On Thu, May 24, 2012 at 9:14 AM, Sergi Pons Freixes
<spo...@gm...>wrote:
> On Wed, May 23, 2012 at 6:27 PM, Tony Yu <ts...@gm...> wrote:
> >
> > I'm not sure what you mean by "normalize the values to an appropriate
> > number of bits", but I don't think setting `vmin` or `vmax` will change
> > the data type of the image. So if you have 64-bit floating point images
> > (100+ Mb per image), then that's what you're going to be moving/scaling
> > when you pan and zoom.
>
> I was just guessing that it is part of the process of converting
> actual data (32 bit floats) to images on the screen (24 bit for RGB
> (32 with transparency) or 8 bit for grayscale).
>
> I tried converting the data to 8 bit with .astype('uint8'), and it
> is still poorly responsive when zooming and panning.
>
>
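One thing to watch out for: a bare `astype('uint8')` truncates float values toward zero and wraps negatives, rather than rescaling them. A quick sketch (with made-up stand-in data) of rescaling to the full 0-255 range before casting:

```python
import numpy as np

# Hypothetical 32-bit float data standing in for the real images.
data = np.linspace(-1.0, 1.0, 16, dtype=np.float32).reshape(4, 4)

# Rescale to [0, 255] before casting; a bare astype('uint8') would
# truncate and wrap the values instead of spreading them over the range.
lo, hi = data.min(), data.max()
img8 = ((data - lo) / (hi - lo) * 255).astype(np.uint8)

print(img8.dtype, img8.min(), img8.max())  # uint8 0 255
```

That said, the dtype conversion probably isn't the bottleneck here anyway.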
It seems that setting `interpolation='none'` is significantly slower than
setting it to 'nearest' (or even 'bilinear'). On supported backends (e.g.
any Agg backend) the code paths for 'none' and 'nearest' are different:
'nearest' gets passed to Agg's interpolation routine, whereas 'none' does
an unsampled rescale of the image (I'm just reading the code comments
here). Could you check whether changing to `interpolation='nearest'` fixes
this issue?
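A quick way to compare the two code paths is to time a full canvas draw for each setting. This is just my sketch (the image size and backend choice are assumptions, not your setup):

```python
import time

import matplotlib
matplotlib.use('Agg')  # headless Agg backend so timings are repeatable
import matplotlib.pyplot as plt
import numpy as np

img = np.random.uniform(0, 255, size=(2000, 2000)).astype(np.uint8)

times = {}
for interp in ('none', 'nearest'):
    fig, ax = plt.subplots()
    ax.imshow(img, interpolation=interp)
    start = time.time()
    fig.canvas.draw()  # force a full render, like a pan/zoom redraw does
    times[interp] = time.time() - start
    plt.close(fig)

print(times)
```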
-Tony
(Note: copied to stackoverflow)
PS: These different approaches *do* give different qualitative results; for
example, the code snippet below gives a slight moiré pattern, which doesn't
appear when `interpolation='none'`. I *think* that 'none' is roughly the
same as 'nearest' when zooming in (image pixels are larger than screen
pixels) but gives a higher-order interpolation result when zooming out
(image pixels smaller than screen pixels). I think the delay comes from
some extra Matplotlib/Python calculations needed for the rescaling.
#~~~
import matplotlib.pyplot as plt
import numpy as np

# Large random 8-bit image; with 'nearest', zooming out can show a
# slight moire pattern that doesn't appear with 'none'.
img = np.random.uniform(0, 255, size=(2000, 2000)).astype(np.uint8)
plt.imshow(img, interpolation='nearest')
plt.show()
#~~~