From: John H. <jdh...@ac...> - 2004-03-09 16:10:57
|
I'm starting to think about adding image support and wanted to get some input
about what it should include and how it should be designed. The ideas are
still pretty nascent, but here is where I was planning to start.

Create an image extension built around agg (not backend_agg). This would be a
free-standing extension, not tied to any of the backends, with the
responsibility of loading image data from a variety of sources into a pixel
buffer and resizing it to a desired pixel size (dependent on the axes window)
with customizable interpolation.

Inputs: what file formats should be supported?

 * I can do PNG rather easily since I already had to interface agg with png
   for save capabilities in backend_agg.

 * As for raw pixel data, should we try to support grayscale/luminance, rgb
   and rgba with the platform-dependent byte-ordering problems, or leave it
   to the user to load these into a numeric/numarray array and init the image
   with that? Should we follow PIL's lead here and just provide a fromstring
   method with format strings?

 * What raw types should be supported: 8-bit luminance, 16-bit luminance,
   8-bit rgb, 8-bit rgba, 16-bit rgb or rgba?

Resizing: generally the axes viewport and the image dimensions will not
agree. Several possible solutions - perhaps all need to be supported:

 * a custom axes creation func that fits the image when you just want to view
   and draw onto a single image (ie no multiple subplots).

 * resize to fit, resize with constrained aspect ratio, or plot in the
   current axes and clip the image outside the axes viewlim.

 * with resizing, what pixel interpolation schemes are critical? agg supports
   several: nearest neighbor, bilinear, bicubic, spline, sinc.

Backends: I am thinking about using the same approach as in ft2font. Have the
image backend provide the load/resize/interpolate methods and fill a pixel
buffer of appropriate size, letting the backends do whatever they want with
it. Agg can blend the image with the drawing buffer, gtk can draw from an
rgba buffer. Not sure about PS yet. paint and wx can use their respective
APIs to copy the pixel buffer.

Any other thoughts welcome...

JDH |
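A minimal sketch of the resize/interpolation step described above, using
numpy as a present-day stand-in for Numeric/numarray; the function name and
the nearest-neighbour scheme are illustrative only, not part of the proposal:

    import numpy as np  # stand-in for Numeric/numarray in this sketch

    def resize_nearest(pixels, out_h, out_w):
        # Resample an (H, W) or (H, W, depth) pixel buffer to (out_h, out_w)
        # with nearest-neighbour interpolation, the simplest of the schemes
        # (nearest, bilinear, bicubic, spline, sinc) mentioned above.
        in_h, in_w = pixels.shape[:2]
        rows = np.arange(out_h) * in_h // out_h
        cols = np.arange(out_w) * in_w // out_w
        return pixels[rows][:, cols]

    # e.g. fit a 300x400 rgba buffer to a 240x320 axes viewport
    buf = np.zeros((300, 400, 4), dtype=np.uint8)
    fitted = resize_nearest(buf, 240, 320)   # shape (240, 320, 4)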
From: John N S G. <jn...@eu...> - 2004-03-09 16:40:15
|
> Any other thoughts welcome...

Not really related to images but... I've been thinking a bit about mapplotlib
(no, that is not a typo).

Quite often I find myself with numbers for different parts of the world that
I want to map, shading regions different colours according to the numbers.

In fact, one of my early experiments with python and wxPython was to produce
such a beast, but I'm not terribly happy with what I produced.

matplotlib has lots of the goodies that mapplotlib would require: it has axes
you can zoom and scroll, is great at drawing coloured polygons and can do
legends.

The problem I've tended to run into with mapping projects has been getting
shape files that aren't distributed under a restrictive licence.

Anyway, is there any interest out there in a mapplotlib?

John |
From: John H. <jdh...@ac...> - 2004-03-09 17:54:44
|
>>>>> "John" == John N S Gill <jn...@eu...> writes:

    John> Quite often I find myself with numbers for different parts
    John> of the world that I want to map, shading regions different
    John> colours according to the numbers.

    John> The problem I've tended to run into with mapping projects
    John> has been getting shape files that aren't distributed under a
    John> restrictive licence.

I've experimented with this a bit using shapefiles from the national atlas (I
think they are distributed under a permissive license). thuban has a nice
python interface for reading shape files and their associated db files.

I've held off on pursuing this functionality until we get an efficient
path/polygon extension, which has been discussed a number of times but is
still on the TODO list. The example below renders the US map from 2000
polygons, and is slow for interactive use with that many polygons.

  http://nitace.bsd.uchicago.edu:8080/files/share/map.png

For nice map navigation, you would probably want a better navigation toolbar
(hand pan, zoom to region), which was discussed many moons ago but has also
languished due to lack of time -
http://sourceforge.net/mailarchive/message.php?msg_id=6542965

Here's some example code:

# shapefile from
# http://edcftp.cr.usgs.gov/pub/data/nationalatlas/statesp020.tar.gz
# shapefile lib from http://thuban.intevation.org
import shapelib, dbflib, shptree

from matplotlib.patches import Polygon
from matplotlib.matlab import *

filename = 'statesp020.shp'
dbfile = 'statesp020.dbf'

shp = shapelib.ShapeFile(filename)
numShapes, type, smin, smax = shp.info()

ax = gca()
dpi = ax.dpi
bbox = ax.bbox
transx = ax.xaxis.transData
transy = ax.yaxis.transData

left=[]; right=[]; bottom=[]; top=[]

db = dbflib.open(dbfile)

# just get the main polys for each state
seen = {}
for i in range(numShapes):
    rec = db.read_record(i)
    state = rec['STATE']
    area = rec['AREA']
    obj = shp.read_object(i)
    verts = obj.vertices()[0]
    if seen.has_key(state):
        have, tmp = seen[state]
        if area>have:
            seen[state] = area, verts
    else:
        seen[state] = area, verts

for state, tup in seen.items():
    area, verts = tup
    poly = Polygon(dpi, bbox, verts, fill=False,
                   transx=transx, transy=transy)
    x = [tx for tx, ty in verts]
    y = [ty for tx, ty in verts]
    ax.xaxis.datalim.update(x)
    ax.yaxis.datalim.update(y)
    ax.add_patch(poly)

set(gca(), 'xlim', ax.xaxis.datalim.bounds())
set(gca(), 'ylim', ax.yaxis.datalim.bounds())

savefig('map', dpi=150)
axis('Off')
show()
 |
From: Perry G. <pe...@st...> - 2004-03-10 16:09:20
|
John Gill writes:

> Any other thoughts welcome...
>
> Not really related to images but...
>
> I've been thinking a bit about mapplotlib (no, that is not a typo).
>
> Quite often I find myself with numbers for different parts of the world
> that I want to map, shading regions different colours according to the
> numbers.
>
> In fact, one of my early experiments with python and wxPython was to
> produce such a beast, but I'm not terribly happy with what I produced.
>
> matplotlib has lots of the goodies that mapplotlib would require: it has
> axes you can zoom and scroll, is great at drawing coloured polygons and can
> do legends.
>
> The problem I've tended to run into with mapping projects has been getting
> shape files that aren't distributed under a restrictive licence.
>
> Anyway, is there any interest out there in a mapplotlib?
>
> John

Well, sort of, though on our part the interest is more in plotting things on
a map of the sky than the Earth (though occasionally we need to do that
also). For us the biggest issue is handling all the possible map coordinate
projections. I would assume that is also something you would have to worry
about. We've given some thought to how to do that sort of thing (as well as
do things like polar plots). This would be a generalization of the matplotlib
transform mechanism. It isn't a real high priority for us yet. The image
stuff that John is talking about is much higher priority. But if you have any
thoughts of expanding on matplotlib for this and are planning to use
something other than simple rectangular coordinates, I'd be interested in
understanding how you will handle map projections.

Perry |
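As a concrete illustration of treating a map projection as just another
coordinate transform applied before plotting, here is about the simplest
equal-area projection, sketched with numpy (the function is hypothetical, not
part of any matplotlib transform code):

    import numpy as np

    def sinusoidal(lon_deg, lat_deg):
        # Sinusoidal (equal-area) projection: lon/lat in degrees -> x/y.
        # A generalized transform mechanism would apply a mapping like this
        # to the data before handing coordinates to the backend.
        lon = np.radians(np.asarray(lon_deg, dtype=float))
        lat = np.radians(np.asarray(lat_deg, dtype=float))
        return lon * np.cos(lat), lat

    x, y = sinusoidal([-90.0, 0.0, 90.0], [45.0, 0.0, -45.0])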
From: Perry G. <pe...@st...> - 2004-03-09 23:15:38
|
John Hunter writes:

> I'm starting to think about adding image support and wanted to get
> some input about what it should include and how it should be designed.
> The ideas are still pretty nascent but here is where I was planning to
> start.
>
> Create an image extension built around agg (not backend_agg). This
> would be a free-standing extension not tied to any of the backends
> with the responsibility of loading image data from a variety of
> sources into a pixel buffer, and resizing it to a desired pixel size
> (dependent on the axes window) with customizable interpolation.
>

I guess I'm confused by terminology. What do you intend "backend" to mean for
images: a common interface for reading different image formats? Speaking of
which...

> Inputs: what file formats should be supported?
>
> * I can do PNG rather easily since I already had to interface agg
>   with png for save capabilities in backend_agg.
>

I guess I would argue for what you refer to below, that the functionality to
read image formats should be decoupled, at least initially, from the plotting
(display?) package. In fact, initially it may make sense just to use PIL for
that functionality alone until we understand better what really needs to be
integrated into the display package. (The main drawback of PIL is that it
doesn't support either Numeric or numarray, and Lundt isn't inclined to
support either unless either is part of the Python Standard Library. It may
turn out that we could add it to PIL, or extract from PIL the image file
support component for our purposes. I suspect that that stuff is pretty
stable.) But initially, reading images into arrays seems like the most
flexible and straightforward thing to do.

> * As for raw pixel data, should we try to support
>   grayscale/luminance, rgb and rgba with the platform dependent byte
>   ordering problems, or leave it to the user to load these into a
>   numeric/numarray and init the image with that? Should we follow
>   PIL's lead here and just provide a fromstring method with format
>   strings?
>

I haven't given this a great deal of thought, but again, arguing for
simplicity, the array representations should be simple. For example, an nxm
array implies luminance, nxmx3 implies rgb, nxmx4 implies rgba. The I/O
module always fills the arrays in native byte order. I suppose that some
thought should be given to the default array type. One possibility is to use
Float32 with normalized values (1.0=max), but it is probably important to
keep integer values from some image formats (like png). Floats give the
greatest flexibility and independence from the display hardware, if sometimes
wasteful of memory. The second type to support would be UInt8 (I admit I
could be stupidly overlooking something).

These arrays are passed to matplotlib rendering methods or functions and the
dimensionality will tell the rendering engine how to interpret it. The
question is how much the engine needs to know about the depth and
representation of the display buffer and how much of these details are
handled by agg (or other backends).

> * What raw types should be supported: 8 bit luminance, 16 bit
>   luminance, 8 bit rgb, 8 bit rgba, 16 bit rgb or rgba?
>
> Resizing: Generally the axes viewport and the image dimensions will
> not agree. Several possible solutions - perhaps all need to be
> supported:
>
> * a custom axes creation func that fits the image when you just want
>   to view and draw onto a single image (ie no multiple subplots).
>
> * resize to fit, resize constrained aspect ratio, plot in current
>   axes and clip image outside axes viewlim
>
> * with resizing, what pixel interpolation schemes are critical? agg
>   supports several: nearest neighbor, bilinear, bicubic, spline,
>   sinc.
>

Here again I would argue that the resizing functions could be separated into
a separate module until we understand better how they should be integrated
into the interface. So for now, require a user to apply a resampling function
to an image. Something like this might be a good initial means of handling
images:

    im = readpng("mypicture.png")  # returns an rgb array (nxmx3) unless alpha
                                   # is part of png files (I'm that ignorant).
    rebinned_im = bilinear(im, axisinfo...)

Then use rebinned_im for a pixel-to-pixel display in the plot canvas (with
appropriate offset and clipping). This isn't quite as convenient as one step
from file to display, but it should get us some flexible functionality faster
and doesn't restrict more integrated means of displaying images. There are
other approaches to decoupling that are probably more object oriented.

I'll think more about this (and you can clarify more what you mean as well if
I'm confused about what you are saying).

Perry |
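A small sketch of the shape-based convention suggested above, where the
array's dimensionality alone tells the rendering code how to interpret it
(numpy standing in for Numeric/numarray; the helper name is purely
illustrative):

    import numpy as np  # stand-in for Numeric/numarray

    def image_mode(a):
        # (n, m) -> luminance, (n, m, 3) -> rgb, (n, m, 4) -> rgba
        if a.ndim == 2:
            return "luminance"
        if a.ndim == 3 and a.shape[2] == 3:
            return "rgb"
        if a.ndim == 3 and a.shape[2] == 4:
            return "rgba"
        raise ValueError("unrecognized image array shape %s" % (a.shape,))

    print(image_mode(np.zeros((480, 640))))      # luminance
    print(image_mode(np.zeros((480, 640, 4))))   # rgba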
From: Andrew S. <str...@as...> - 2004-03-10 00:33:30
|
<snipped lots of interesting discussion...>

> I guess I would argue for what you refer to below, that the
> functionality to read image formats should be decoupled, at least
> initially, from the plotting (display?) package. In fact, initially
> it may make sense just to use PIL for that functionality alone until
> we understand better what really needs to be integrated into the
> display package. (The main drawback of PIL is that it doesn't support
> either Numeric or numarray, and Lundt isn't inclined to support
> either unless either is part of the Python Standard Library. It
> may turn out that we could add it to PIL, or extract from PIL
> the image file support component for our purposes. I suspect that
> that stuff is pretty stable.) But initially, reading images into
> arrays seems like the most flexible and straightforward thing to
> do.

Disclaimer: I surely don't understand all of the requirements for matplotlib
imaging.

I would also suggest PIL, sticking as much as possible to the pure-Python
interface. I suggest this because some OSs (well, Mac OS X is what I'm
thinking of) are supposed to have native image-handling routines which are
fine-tuned to their hardware and unlikely to be beat by some multi-platform
project. (For example, I've heard that Apple's PNG handling is faster and
better than libpng.) If someone modified PIL to take advantage of these
platform-specific features, we would reap the benefits. Then again, this
argument assumes someone will someday do that.

>> * As for raw pixel data, should we try to support
>> grayscale/luminance, rgb and rgba with the platform dependent byte
>> ordering problems, or leave it to the user to load these into a
>> numeric/numarray and init the image with that? Should we follow
>> PIL's lead here and just provide a fromstring method with format
>> strings?
>>
> I haven't given this a great deal of thought, but again, arguing
> for simplicity, the array representations should be simple.
> For example, an nxm array implies luminance, nxmx3 implies
> rgb, nxmx4 implies rgba. The I/O module always fills the arrays
> in native byte order. I suppose that some thought should be given
> to the default array type. One possibility is to use Float32 with
> normalized values (1.0=max), but it is probably important to keep
> integer values from some image formats (like png). Floats give
> the greatest flexibility and independence from the display hardware,
> if sometimes wasteful of memory. The second type to support would be
> UInt8 (I admit I could be stupidly overlooking something).

I think the format string idea is a good one. An nxm array could be
luminance, alpha, or possibly colormapped -- it's not necessarily known in
advance what data type it is, although a guess would probably be right 99% of
the time.

I also vote to use a floating-point representation whenever possible. It's
clear that 8 bits per color aren't enough for many purposes.

<snipped more interesting discussion...> |
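To make the 8-bits-per-colour point concrete, a tiny sketch (numpy as a
stand-in; the 12-bit input is just an example) showing how many distinct
levels survive a round trip through UInt8 versus a normalized float
representation:

    import numpy as np

    raw = np.arange(0, 4096, dtype=np.uint16)        # e.g. 12-bit sensor data
    as_uint8 = (raw >> 4).astype(np.uint8)           # squeezed into 8 bits
    as_float = raw.astype(np.float32) / 4095.0       # normalized, 1.0 = max

    print(len(np.unique(as_uint8)))   # 256 levels left
    print(len(np.unique(as_float)))   # all 4096 levels preserved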
From: John H. <jdh...@ac...> - 2004-03-10 04:32:11
|
>>>>> "Perry" == Perry Greenfield <pe...@st...> writes:

    >> Create an image extension built around agg (not backend_agg).
    >> This would be a free-standing extension not tied to any of the
    >> backends with the responsibility of loading image data from a
    >> variety of sources into a pixel buffer, and resizing it to a
    >> desired pixel size (dependent on the axes window) with
    >> customizable interpolation.

    Perry> I guess I'm confused by terminology. What do you intend
    Perry> "backend" to mean for images. A common interface for
    Perry> reading different image formats? Speaking of which...

Sorry for the confusion. I meant that I was considering using antigrain to
load/store/scale/convert/process the pixel buffers in an image module in the
same way that PIL could be used, and that this has nothing per se to do with
backend_agg. By "backends", I meant the same old backends we already have,
backend_gd, backend_wx, backend_ps, etc..., each of which would implement a
new method draw_image to render the image object to its respective
canvases/displays. Whether this image object is PIL based or Agg based under
the hood is an open question.

I hope this clarifies rather than muddies the waters....

    Perry> But initially, reading images into arrays seems like the
    Perry> most flexible and straightforward thing to do.

Agreed - I like the idea of making the user deal with endianness, etc, in
loading the image data into arrays, and passing those to the image module.
Todd, is it reasonably straightforward to write extension code that is
Numeric/numarray neutral for rank 2 and rank 3 arrays?

    Perry> These arrays are passed to matplotlib rendering methods or
    Perry> functions and the dimensionality will tell the rendering
    Perry> engine how to interpret it. The question is how much the
    Perry> engine needs to know about the depth and representation of
    Perry> the display buffer and how much of these details are
    Perry> handled by agg (or other backends)

One thing that the rest of matplotlib tries to do is insulate as much
complexity from the backends as possible. For example, the backends only know
one coordinate system (display), and the various figure objects transform
themselves before requesting, for example, draw_lines or draw_text. Likewise,
the backends don't know about Line2D objects or Rectangle objects; they only
know how to draw a line from x1, y1 to x2, y2, etc...

This suggests doing as much work as possible in the image module itself. For
example, if the image module converted all image data to an RGBA array of
floats, this would be totally general and the backends would only have to
worry about one thing vis-a-vis images: dumping rgba floats to the canvas.
Nothing about byte order, RGB versus BGR, grayscale, colormaps and so on.
Most or all of these things could be supported in the image module: the image
module scales the image, handles the interpolation, and converts all pixel
formats to an array of rgba floats. Then the backend takes over and renders.
An array of RGBA UInt16s would probably suffice for just about everything,
however.

The obvious potential downside here is performance and memory. You might be
in a situation where the user passes in UInt8, the image module converts to
floats, and the backend converts back to UInt8 to pass to the display. Those
of you who deal with image data a lot: to what extent is this a big concern?
My first reaction is that on a reasonably modern PC, even largish images
could be handled with reasonable speed.

On the subject of PIL versus agg for the workhorse under the image module
hood: I agree with the positive points about PIL that you and Andrew brought
up (stability, portability, wide user base, relevant functionality already
implemented). In favor of agg I would add:

 * we're already distributing agg, so negligible code bloat; PIL is largish
   and not the easiest install. Distributing fonttools has been a mess that I
   don't want to repeat.

 * impressive suite of pixel conversion funcs, interpolators, filters and
   transforms built in.

 * easy and efficient integration with backend_agg, which the GUIs are
   converging around.

 * numeric/numarray support from the ground up.

Downsides:

 * less portable - requires a modern c++ compiler.

 * more up-front work (but most is already done in C++ and just needs
   exposing).

JDH |
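A rough sketch of the "convert everything to rgba floats in the image module"
idea above, with numpy as a stand-in; the helper name and exact conversion
rules are hypothetical, not a proposed API:

    import numpy as np  # stand-in for Numeric/numarray

    def to_rgba_float(a):
        # Normalize uint8/uint16 input to floats in [0, 1], then pad
        # luminance and rgb arrays out to (n, m, 4) rgba so a backend only
        # ever has to deal with one pixel format.
        a = np.asarray(a)
        if a.dtype == np.uint8:
            a = a.astype(np.float32) / 255.0
        elif a.dtype == np.uint16:
            a = a.astype(np.float32) / 65535.0
        else:
            a = a.astype(np.float32)
        if a.ndim == 2:                    # luminance -> grey rgb
            a = np.dstack([a, a, a])
        if a.shape[2] == 3:                # rgb -> rgba with opaque alpha
            alpha = np.ones(a.shape[:2] + (1,), dtype=np.float32)
            a = np.concatenate([a, alpha], axis=2)
        return a

    rgba = to_rgba_float(np.zeros((100, 200), dtype=np.uint8))
    print(rgba.shape, rgba.dtype)          # (100, 200, 4) float32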
From: Perry G. <pe...@st...> - 2004-03-10 15:48:11
|
John Hunter writes:

> Sorry for the confusion. I meant that I was considering using
> antigrain to load/store/scale/convert/process the pixel buffers in an
> image module in the same way that PIL could be used, and that this has
> nothing per se to do with backend_agg. By "backends", I meant the
> same old backends we already have, backend_gd, backend_wx, backend_ps,
> etc..., each of which would implement a new method draw_image to
> render the image object to its respective canvases/displays. Whether
> this image object is PIL based or Agg based under the hood is an open
> question.
>
> I hope this clarifies rather than muddies the waters....
>

OK, I understand what you meant.

>     Perry> But initially, reading images into arrays seems like the
>     Perry> most flexible and straightforward thing to do.
>
> Agreed - I like the idea of making the user deal with endianness, etc,
> in loading the image data into arrays, and passing those to the image
> module. Todd, is it reasonably straightforward to write extension
> code that is Numeric/numarray neutral for rank 2 and rank 3 arrays?
>

Todd and I just talked about this. There are two possible approaches, one of
which should work and the other which we would have to think about.

The simpler approach is to write a C extension using the Numeric API. As long
as a small, rarely-used subset of the Numeric API is not used (the UFunc API
and some type conversion stuff in the type descriptor), then numarray can use
the same code with only a change to the include file (which could be handled
by an #ifdef). Then the same C extension code could be used with either
Numeric or numarray. The catch is that it must be compiled to be used with
one or the other.

I was thinking that the way around that would be to do 2 things: have
setup.py build for Numeric, numarray, *or* both depending on what it found
installed on the system. The respective C extension modules would have
different names (e.g., _image_Numeric.so/dll or _image_numarray.so/dll).
There would also be a wrapper module that uses numerix to determine which of
these C extensions to import. This is a bit clumsy (having to layer a module
over it a la numerix), but it means only having one C source file to handle
both.

It might be possible for us to fiddle with numarray's structure definition so
that the same compiled C code works with Numeric and numarray arrays, but
given that the API functions will generate one or the other for creation, I'm
not sure this is workable. We will give it some thought. My inclination is to
use the first approach as a conservative but workable solution.

>     Perry> These arrays are passed to matplotlib rendering methods or
>     Perry> functions and the dimensionality will tell the rendering
>     Perry> engine how to interpret it. The question is how much the
>     Perry> engine needs to know about the depth and representation of
>     Perry> the display buffer and how much of these details are
>     Perry> handled by agg (or other backends)
>
> One thing that the rest of matplotlib tries to do is insulate as much
> complexity from the backends as possible. For example, the backends
> only know one coordinate system (display), and the various figure
> objects transform themselves before requesting, for example,
> draw_lines or draw_text. Likewise, the backends don't know about
> Line2D objects or Rectangle objects; they only know how to draw a line
> from x1, y1 to x2, y2, etc...
>
> This suggests doing as much work as possible in the image module
> itself. For example, if the image module converted all image data to
> an RGBA array of floats, this would be totally general and the
> backends would only have to worry about one thing vis-a-vis images:
> dumping rgba floats to the canvas. Nothing about byte order, RGB
> versus BGR, grayscale, colormaps and so on. Most or all of these
> things could be supported in the image module: the image module scales
> the image, handles the interpolation, and converts all pixel formats
> to an array of rgba floats. Then the backend takes over and renders.
> An array of RGBA UInt16s would probably suffice for just about
> everything, however.
>
> The obvious potential downside here is performance and memory. You
> might be in a situation where the user passes in UInt8, the image
> module converts to floats, and the backend converts back to UInt8 to
> pass to the display. Those of you who deal with image data a lot: to
> what extent is this a big concern? My first reaction is that on a
> reasonably modern PC, even largish images could be handled with
> reasonable speed.
>

I agree that this is the drawback. On the other hand, processor memory has
grown much faster than image display memory. With a full-screen image of
1024x1280 pixels, we are only talking about 5MB or so for a 32-bit deep
display. Converting rgba all to floats means 20MB, which, while large, is not
overwhelming these days, and that is the worst case for interactive use. I
suppose it would be a bigger concern for non-interactive situations (e.g.
PDF). But it would seem that in doing this simpler approach, we would have
something that works sooner, and then we could think about how to handle
cases where someone wants to generate a 4Kx4K PDF image (which is going to be
one big PDF file!). Let's not let extreme cases like this drive the first
implementation.

> On the subject of PIL versus agg for the workhorse under the image
> module hood: I agree with the positive points about PIL that
> you and Andrew brought up (stability, portability, wide user base,
> relevant functionality already implemented). In favor of agg I would
> add
>

Well, I was just referring to the image file support that PIL supplies; the
rest of PIL seems mostly redundant with matplotlib and array capabilities (if
I remember right, I haven't really used PIL much). How much image file format
support is built into agg?

Perry |
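A sketch of the wrapper-module idea described above; every name here
(_image_Numeric, _image_numarray, the NUMERIX environment variable) is
illustrative rather than an actual matplotlib module or setting:

    # _image.py -- thin dispatch layer over two builds of the same C source
    import os

    if os.environ.get("NUMERIX", "Numeric") == "numarray":
        from _image_numarray import *   # extension compiled against numarray
    else:
        from _image_Numeric import *    # extension compiled against Numeric

Callers would import only the wrapper, so the choice of array package stays
in one place.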
From: John H. <jdh...@ac...> - 2004-03-10 20:00:06
|
>>>>> "Perry" == Perry Greenfield <pe...@st...> writes:

    Perry> Todd and I just talked about this. There are two possible
    Perry> approaches, one of which should work and the other which we
    Perry> would have to think about. The simpler approach is to write
    Perry> a C extension using the Numeric API. As long as a small,
    Perry> rarely-used subset of the Numeric API is not used (the
    Perry> UFunc API and some type conversion stuff in the type
    Perry> descriptor)

OK, I'll start with plain vanilla Numeric along the lines Todd sent me
earlier, and we can do the (apparently straightforward) numarray/numeric port
when we have a prototype.

    Perry> I agree that this is the drawback. On the other hand,
    Perry> processor memory has grown much faster than image display
    Perry> memory. With a full-screen image of 1024x1280 pixels, we
    Perry> are only talking about 5MB or so for a 32-bit deep
    Perry> display. Converting rgba all to floats means 20MB, which,
    Perry> while large, is not overwhelming these days, and that is
    Perry> the worst case for interactive use. I suppose it would be
    Perry> a bigger concern for non-interactive situations (e.g. PDF).
    Perry> But it would seem that in doing this simpler approach, we
    Perry> would have something that works sooner, and then we could
    Perry> think about how to handle cases where someone wants to
    Perry> generate a 4Kx4K PDF image (which is going to be one big
    Perry> PDF file!). Let's not let extreme cases like this drive the
    Perry> first implementation.

Agreed. If no one has any objections, I think I'll start with rgba32 as the
common rendering format for the image module, and we can special-case the
smaller (grayscale UInt8 or UInt16) and larger (rgba floats) cases after a
prototype implementation.

I'm imagining an interface like

Frontend:

    im = image.fromarray(A)            # NxM
    im = image.fromarray(A, colormap)  # NxM colormapped images
    im = image.fromarray(A)            # NxMx3, no format needed
    im = image.fromarray(A)            # NxMx4, no format needed
    im = image.fromfile('somefile.png')
    im.set_interpolate(image.BILINEAR)
    im.set_aspect_ratio(image.CONSTRAINED)

Backend:

    def draw_image(im):
        # Ascaled is width x height x 4, resampled using the interpolation
        # and aspect constraints set above
        Ascaled = im.resize(viewport.width, viewport.height)
        renderer.draw_from_rgba(Ascaled, etc1, etc2)

    Perry> Well, I was just referring to the image file support that
    Perry> PIL supplies; the rest of PIL seems mostly redundant with
    Perry> matplotlib and array capabilities (if I remember right, I
    Perry> haven't really used PIL much). How much image file format
    Perry> support is built into agg?

Almost none, except for the raw (or almost raw) image types (ppm, bmp). PNG I
have more or less figured out; borrowing the read/write capabilities for
other image formats from PIL is a good idea.

JDH |
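For the image.CONSTRAINED case above, the target size that im.resize would
aim for might be computed along these lines (hypothetical helper, plain
Python, not part of the proposed interface):

    def constrained_size(im_h, im_w, vp_h, vp_w):
        # Largest output size that fits inside the viewport while keeping
        # the image's aspect ratio.
        scale = min(vp_h / float(im_h), vp_w / float(im_w))
        return int(round(im_h * scale)), int(round(im_w * scale))

    print(constrained_size(300, 400, 240, 320))   # (240, 320) -- exact fit
    print(constrained_size(300, 300, 240, 320))   # (240, 240) -- letterboxed

A "resize to fit" mode would just use the viewport size directly, and the
clip-to-viewlim mode would skip rescaling altogether.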
From: gary r. <gr...@bi...> - 2004-03-10 06:55:05
|
Hi Perry,

You may not be aware that, although not officially supported, Lundh provided
the following sample code (which I have used successfully to read bmp files
and do fft's on them) for converting PIL to/from Numeric arrays via strings.
It is quite fast. I think I probably found it in an email archive.

Gary Ruben

-PilConvert.py-

#
# convert between numarray arrays and PIL image memories
#
# fredrik lundh, october 1998
#
# fr...@py...
# http://www.pythonware.com
#

import numarray
import Image

def image2array(im):
    if im.mode not in ("L", "F"):
        raise ValueError, "can only convert single-layer images"
    if im.mode == "L":
        a = numarray.fromstring(im.tostring(), numarray.UInt8)
    else:
        a = numarray.fromstring(im.tostring(), numarray.Float32)
    a.shape = im.size[1], im.size[0]
    return a

def array2image(a):
    if a.typecode() == numarray.UInt8:
        mode = "L"
    elif a.typecode() == numarray.Float32:
        mode = "F"
    else:
        raise ValueError, "unsupported image mode"
    return Image.fromstring(mode, (a.shape[1], a.shape[0]), a.tostring())

-ffts.py-

import PilConvert
import Image
from numarray import *
from numarray.fft import *

def doFft(im):
    nim = PilConvert.image2array(im)
    im_fft = abs(real_fft2d(nim))
    im_fft = log(im_fft + 1)
    # convert levels for display
    scale = 255. / max(im_fft.flat)
    im_fft = (im_fft * scale).astype(UInt8)
    imo = PilConvert.array2image(im_fft)
    return imo

im = Image.open('rm.bmp')
imo = doFft(im)
imo.save('rm_fourier.bmp', 'BMP')
------------------------------------
Gary Ruben
gr...@bi...
<http://users.bigpond.net.au/gazzar> |
From: Perry G. <pe...@st...> - 2004-03-10 15:57:20
|
> Hi Perry,
>
> You may not be aware that, although not officially supported, Lundh
> provided the following sample code (which I have used successfully to read
> bmp files and do fft's on them) for converting PIL to/from Numeric arrays
> via strings. It is quite fast. I think I probably found it in an email
> archive.
>
> Gary Ruben

Thanks for showing the code. I guess what I was referring to was that native
support by PIL would eliminate the unnecessary memory copies which occur when
fromstring and tostring are used. But as I argued in a previous message, I'm
not currently worried that much about that; it just seems that it would be
nice if it weren't necessary to go through that copying (from PIL image to
string to array rather than directly from PIL image to array).

[And I *meant* to check the spelling of Fredrik's name; I have a hard time
remembering the correct spelling :-) ]

Perry |
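For what it's worth, with present-day Pillow and numpy (long after this
thread) the string round trip is no longer needed, since PIL images expose
the array interface; a short sketch, shown only to illustrate the direct
image-to-array path being discussed:

    import numpy as np
    from PIL import Image

    im = Image.open("rm.bmp")      # any image file
    a = np.asarray(im)             # PIL image -> array, no tostring() step
    im2 = Image.fromarray(a)       # and back again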