From: Christoph Gohlke <cgohlke@uc...>  2011-09-18 19:30:17

Hello,

matplotlib uses int(x*255) or np.array(x*255, np.uint8) to quantize
normalized floating point numbers x in the range [0.0, 1.0] to integers
in the range [0, 255]. This way only 1.0 is mapped to 255; 0.999, for
example, is not. Is this really intended, or would the largest floating
point number below 256.0 be a better scale factor than 255? The exact
factor depends on the floating point precision (~255.999992 for
np.float32, ~255.93 for np.float16).

Christoph
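The effect Christoph describes can be checked directly. A minimal sketch, assuming NumPy; the scale factor shown is for float64:

```python
import numpy as np

# With a scale factor of 255 and truncation, only x == 1.0 exactly
# reaches the top value 255; everything below 1.0 maps to 254 or less.
assert int(0.999 * 255) == 254
assert int(1.0 * 255) == 255

# With the largest float64 below 256.0 as the scale factor, the 256
# output values correspond to equal-width input bins of 1/256 each,
# and the top value becomes reachable by a whole range of inputs.
scale = float(np.nextafter(256.0, 0.0))
assert int(0.999 * scale) == 255
assert int(1.0 * scale) == 255
```

With the 255 factor, the input interval that maps to 255 has zero width (the single point 1.0), while every other output value gets an interval of width 1/255; the near-256 factor makes all 256 intervals equally wide.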
From: Eric Firing <efiring@ha...>  2011-09-18 21:30:11

On 09/18/2011 09:30 AM, Christoph Gohlke wrote:
> Hello,
>
> matplotlib uses int(x*255) or np.array(x*255, np.uint8) to quantize
> normalized floating point numbers x in the range [0.0, 1.0] to integers
> in the range [0, 255]. This way only 1.0 is mapped to 255; 0.999, for
> example, is not. Is this really intended, or would the largest floating
> point number below 256.0 be a better scale factor than 255? The exact
> factor depends on the floating point precision (~255.999992 for
> np.float32, ~255.93 for np.float16).
>
> Christoph

Christoph,

It's a reasonable question; but do you have use cases in mind where it
actually makes a difference?

The simple scaling with truncation is used in many places, both in the
Python and the C++ code.

Eric
From: Christoph Gohlke <cgohlke@uc...>  2011-09-19 09:23:29

On 9/18/2011 2:30 PM, Eric Firing wrote:
> On 09/18/2011 09:30 AM, Christoph Gohlke wrote:
>> Hello,
>>
>> matplotlib uses int(x*255) or np.array(x*255, np.uint8) to quantize
>> normalized floating point numbers x in the range [0.0, 1.0] to integers
>> in the range [0, 255]. This way only 1.0 is mapped to 255; 0.999, for
>> example, is not. Is this really intended, or would the largest floating
>> point number below 256.0 be a better scale factor than 255? The exact
>> factor depends on the floating point precision (~255.999992 for
>> np.float32, ~255.93 for np.float16).
>>
>> Christoph
>
> Christoph,
>
> It's a reasonable question; but do you have use cases in mind where it
> actually makes a difference?
>
> The simple scaling with truncation is used in many places, both in the
> Python and the C++ code.
>
> Eric

Hi Eric,

visually it will hardly be noticeable in most cases. However, I'd expect
the histogram of normalized intensity data to be the same as the
histogram of a linear grayscale image of that data (neglecting gamma
correction and image scaling/interpolation for now). Consider this code,
for example:

import numpy as np
a = np.random.rand(1024*1024)
a[0], a[1] = 0.0, 1.0
h0 = np.histogram(a, bins=256, range=(0, 1))[0]
h1 = np.bincount(np.uint8(a * 255))
h2 = np.bincount(np.uint8(a * 255.9999999999999))
print(h0 - h1)
print(h0 - h2)

Christoph
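A deterministic variant of the experiment above (a sketch assuming NumPy, using an evenly spaced ramp instead of random data) makes the discrepancy easy to see in the top bin alone:

```python
import numpy as np

# Quantize an evenly spaced ramp over [0, 1] and compare the occupancy
# of the top bin (value 255) under the two scale factors.
a = np.linspace(0.0, 1.0, 1 << 20)
h0 = np.histogram(a, bins=256, range=(0, 1))[0]           # 256 equal bins
h1 = np.bincount(np.uint8(a * 255), minlength=256)        # scale by 255
h2 = np.bincount(np.uint8(a * np.nextafter(256.0, 0.0)),  # scale just below 256
                 minlength=256)

# With the 255 factor, only the single sample that is exactly 1.0 lands
# in the top bin; with the near-256 factor it is filled like the others
# (roughly 2**20 / 256 = 4096 samples).
assert h1[255] == 1
assert h2[255] > 4000
```

This matches the histogram argument: with the 255 factor the top bin of the quantized image is nearly empty for any data that does not contain exact 1.0 values, so its histogram cannot agree with a 256-bin histogram of the original floats.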
From: Christoph Gohlke <cgohlke@uc...>  2011-09-21 02:34:11

On 9/19/2011 2:23 AM, Christoph Gohlke wrote:
> On 9/18/2011 2:30 PM, Eric Firing wrote:
>> On 09/18/2011 09:30 AM, Christoph Gohlke wrote:
>>> Hello,
>>>
>>> matplotlib uses int(x*255) or np.array(x*255, np.uint8) to quantize
>>> normalized floating point numbers x in the range [0.0, 1.0] to integers
>>> in the range [0, 255]. This way only 1.0 is mapped to 255; 0.999, for
>>> example, is not. Is this really intended, or would the largest floating
>>> point number below 256.0 be a better scale factor than 255? The exact
>>> factor depends on the floating point precision (~255.999992 for
>>> np.float32, ~255.93 for np.float16).
>>>
>>> Christoph
>>
>> Christoph,
>>
>> It's a reasonable question; but do you have use cases in mind where it
>> actually makes a difference?
>>
>> The simple scaling with truncation is used in many places, both in the
>> Python and the C++ code.
>>
>> Eric
>
> Hi Eric,
>
> visually it will hardly be noticeable in most cases. However, I'd expect
> the histogram of normalized intensity data to be the same as the
> histogram of a linear grayscale image of that data (neglecting gamma
> correction and image scaling/interpolation for now).
>
> Christoph

To make this work with any float type, one could use:

np.uint8(a * np.nextafter(a.dtype.type(256), a.dtype.type(0)))

Christoph
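Wrapped up as a small helper, the dtype-aware scale factor looks like the sketch below (assuming NumPy; the function name quantize_u8 is made up for illustration):

```python
import numpy as np

def quantize_u8(a):
    # Largest representable number below 256 in a's own dtype, via the
    # np.nextafter expression from the thread above.
    scale = np.nextafter(a.dtype.type(256), a.dtype.type(0))
    # Casting to uint8 truncates toward zero, so each of the 256 output
    # values gets an input bin of width 1/256.
    return (a * scale).astype(np.uint8)

# The top value is reachable for every common float precision, not only
# by inputs that are exactly 1.0.
for dtype in (np.float16, np.float32, np.float64):
    a = np.array([0.0, 0.999, 1.0], dtype=dtype)
    q = quantize_u8(a)
    assert q[0] == 0 and q[1] == 255 and q[2] == 255
```

Computing the factor with np.nextafter in the array's own dtype avoids hard-coding a per-precision constant such as 255.9999999999999, which is only correct for float64.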