From: Joel K. <jj...@ya...> - 2004-09-07 21:44:47
Attachments:
Image_To_Objects.py
By using the attached script, which combines a module from the Python Imaging Library with VPython, you can display a bitmap image on a rectangular grid of VPython objects. Once the basic image is in place, you can of course use VPython's strong animation capabilities to produce a wide variety of visual effects. As the script's comment lines indicate, I suggest starting out with rather small bitmaps until you see what kind of results you get on your particular hardware.

If anyone else has been experimenting with similar algorithms, I would be interested in seeing their work. This class of programs seems to offer a broad range of potential benefits for education and many other areas. Email if you have questions.

Joel
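[Editorial sketch: the attached script is not reproduced in the archive, so the following is a hypothetical reconstruction of the mapping it presumably performs — each pixel becomes the position and color of one object in a rectangular grid. In a real VPython program the pixel rows would come from PIL (e.g. Image.open) and each (position, color) pair would become a visual.box; the function below keeps just the geometry/color logic, and its name is illustrative.]

```python
def image_to_grid(pixels, spacing=1.0):
    """pixels: rows of (r, g, b) byte triples, top row first.
    Returns one (position, color) pair per pixel, with the image
    centered on the origin and colors normalized to 0.0-1.0 as
    VPython expects."""
    rows = len(pixels)
    cols = len(pixels[0])
    objects = []
    for i, row in enumerate(pixels):
        for j, (r, g, b) in enumerate(row):
            # Center the grid on the origin and flip vertically so
            # image row 0 ends up at the top of the scene.
            x = (j - (cols - 1) / 2.0) * spacing
            y = ((rows - 1) / 2.0 - i) * spacing
            objects.append(((x, y, 0.0),
                            (r / 255.0, g / 255.0, b / 255.0)))
    return objects

# A 2x2 "image": red, green / blue, white.
grid = image_to_grid([[(255, 0, 0), (0, 255, 0)],
                      [(0, 0, 255), (255, 255, 255)]])
```

Note that this creates one object per pixel, which is exactly why small bitmaps are advisable: a 256x256 image would mean 65536 scene objects.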
From: Jonathan B. <jbr...@ea...> - 2004-09-08 00:38:58
On Tue, 2004-09-07 at 17:44, Joel Kahn wrote:
> By using the attached script, in which a module from the Python
> Imaging Library is combined with VPython, you can display a bitmap
> image on a rectangular grid of VPython objects. [...]
> If anyone else has been experimenting with similar algorithms, I
> would be interested in seeing their work.

This sounds like it could benefit from some work I have been doing on the next-generation rendering core for VPython. One of the new features I intend to have working is called texture mapping in OpenGL terminology; it is basically the use of a bitmap image to color the geometry of the objects on-screen.

Here is an example of a texture-mapped box that applies the same image to each face of the box:

http://www4.ncsu.edu/~jdbrandm/box_test.png

And here is an example that wraps an image onto a sphere using a kind of inverse Mercator projection (the source image is from the NASA "Blue Marble" press release):

http://www4.ncsu.edu/~jdbrandm/sphere_texture_test.png

You can also specify a per-pixel alpha value (in an RGBA or grayscale+alpha image), which gives you variable translucency across the body being rendered.

The hard part about exposing this functionality to client programs is: what should the external interface be? How much complexity, and how many options, should be visible to client programs? Since you are the first person to ask for something like this (albeit indirectly), now seems like as good a time to talk about it as any.
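[Editorial sketch: the per-pixel alpha idea can be made concrete with a tiny example that builds an 8-bit RGBA pixel buffer whose alpha channel fades from opaque on the left to fully transparent on the right. The row-major, 4-bytes-per-pixel layout and the function name are assumptions for illustration, not the renderer's actual internal format.]

```python
def rgba_fade(width, height, rgb=(255, 255, 255)):
    """Build a row-major RGBA byte buffer (4 bytes per pixel) whose
    alpha falls linearly from 255 at the left edge to 0 at the right,
    giving variable translucency across the textured body."""
    buf = bytearray()
    for y in range(height):
        for x in range(width):
            alpha = 255 - (255 * x) // max(width - 1, 1)
            buf.extend(rgb)   # R, G, B
            buf.append(alpha) # per-pixel A
    return bytes(buf)

# A 4x2 white texture that fades out left to right.
tex = rgba_fade(4, 2)
```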
The potential complexity is very high, and I want the final result to appear easy from the Python side of the house, just like the rest of VPython. Please feel free to ask about anything that isn't clear below. Some constraints are:

- Any image buffer must be N[xM[xO]], where N, M, and O are powers of 2.

- Textures can be one, two, or three dimensional, and each pixel can be a grayscale, grayscale+alpha, RGB, or RGBA value in several precisions (I'm leaning towards forcing 8 bits per channel for simplicity's sake). Clearly, high-res 3-D textures are _very_ memory intensive.

- The image must be transferred into the memory of the graphics card before use and whenever it changes, which is fairly slow.

- Use of the image to draw objects later is very fast on new machines and painfully slow with a software-based renderer.

Note the "cycle time" text at the bottom of those two screenshots. It is the time in seconds to render the entire scene on my home machine, an 800 MHz PIII with an NVIDIA GeForce 4000 graphics card (related to the GeForce 4), so it is relatively slow. A newer machine (3 GHz P4 + Radeon 9800) generally keeps it at about 1 ms per cycle, even for much more complex scenes, and most of that time is spent waiting for the VSYNC.

The internal texture-mapping model is this: for each triangle vertex, you specify a coordinate on the texture object, ranging from 0 to 1 in each dimension of the texture. The triangle that is snipped from the image is then scaled, rotated, and filtered as needed to match the geometry of the triangle on the screen.

In some cases, OpenGL can compute texture coordinates for you, performing a projection of the image onto the object being rendered based on the positions of the vertices in object space. For each dimension of the texture object, you specify a vector of coefficients whose dot product with each incoming vertex yields that vertex's texture coordinate (computed_coord = some_vector.dot(vertex_pos)).
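[Editorial sketch: the object-linear coordinate generation just described — computed_coord = some_vector.dot(vertex_pos), one coefficient vector per texture dimension — can be written out in a few lines. The function names and the particular plane vectors are illustrative, not part of any existing Visual API.]

```python
def dot(a, b):
    # Plain 3-component dot product.
    return sum(x * y for x, y in zip(a, b))

def generate_texcoords(vertices, s_vec, t_vec):
    """For a 2D texture: project each (x, y, z) vertex position onto
    the s and t coefficient vectors to get that vertex's (s, t)
    texture coordinate, as OpenGL's object-linear texgen does."""
    return [(dot(v, s_vec), dot(v, t_vec)) for v in vertices]

# Unit square in the z=0 plane; s follows +x and t follows +y, so the
# whole (0..1, 0..1) texture maps one-to-one onto the square.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
coords = generate_texcoords(quad, (1, 0, 0), (0, 1, 0))
```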
For a 2D texture, this is analogous to specifying a scaled plane in world space from which the GL projects the image onto the body being rendered.

Also, there are a few types of filtering that can be applied when the image is transformed onto the target triangle. One is "nearest", and the others are forms of linear interpolation. This comes into effect when the image is zoomed in or out: with "nearest" filtering a zoomed-in image looks grainy, while with one of the interpolation modes it looks smooth (fuzzy at high magnifications).

Now, with that in mind, here are the external interfaces that I have considered but haven't yet set in stone:

- For some objects (only box and sphere at present), provide a default mapping of the texture to the body. This is available when some sane default is obviously reasonable; there probably isn't one for the arrow, for example.

- For all objects, provide a means of specifying the planar position and scaling in world space (probably relative rather than absolute) from which the texture should be projected onto the body. A combination of origin, s-axis, and t-axis (which correspond to the lower-left corner, horizontal direction, and vertical direction along the texture object, respectively) would be sufficient, I think.

- Punt to the user and allow you to manually specify a texture coordinate with each vertex in the faces object.

Texture data can be specified to Visual either with a built-in function that decodes an image file from disk into an opaque internal buffer, or by passing an appropriately shaped (N[xM[xO]]x{1,2,3,4}, of type Int8) Numeric array containing in-memory pixel data. In either case, the resulting texture object can be applied to any Visual object in a one-to-many relationship.

Open issues:

- Does this allow for the kind of usage model that you want?
I think the program you attached could be implemented under this model as (in pseudo-code):

    allocate_a_faces_object_with_two_triangles_and_tex_coordinates()
    create_an_NxMx3_Numeric_array()
    populate_the_array()
    convert_array_to_texture()
    object.texture = my_new_texture_object
    while (some_condition):
        change_array()
        convert_to_texture()
        object.texture = my_modified_texture_object

- Just how many of the options should be exposed? Alternatively (and probably the real question): which options would mere mortals find useful without being too confusing?

- Since OpenGL requires that each dimension of the texture be a power of 2, any source data that isn't must be scaled to fit. Should it be scaled automatically? If so, up or down? Scaling up will generate a grainy texture, scaling down will lose data, and neither is really what the user program requested. Both seem equally right and wrong to me.

Some people have requested procedural texture generation functions as well, akin to what Povray provides. I will admit that this kind of thing is somewhat over my head right now, but I don't want to make any design decision now that would preclude a clean implementation of such a feature in the future.

This is one direction Visual is going, and I would appreciate any feedback on it.

Thanks,
-Jonathan
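[Editorial sketch: two of the points above — the N[xM[xO]]x{1,2,3,4} shape rule for in-memory texture data, and the up-versus-down power-of-2 scaling question — can be made concrete. The function names are illustrative, not part of any existing Visual API.]

```python
def is_power_of_two(n):
    # True for 1, 2, 4, 8, ... (a power of 2 has exactly one set bit).
    return n > 0 and n & (n - 1) == 0

def valid_texture_shape(shape):
    """Check an array shape tuple against the N[xM[xO]]x{1,2,3,4}
    rule: one to three spatial dimensions, each a power of 2, plus a
    trailing channel count of 1 (gray), 2 (gray+alpha), 3 (RGB) or
    4 (RGBA)."""
    if len(shape) not in (2, 3, 4):
        return False
    *spatial, channels = shape
    return (channels in (1, 2, 3, 4)
            and all(is_power_of_two(d) for d in spatial))

def pow2_candidates(n):
    """For a source dimension n, return the two targets a scaling
    policy could choose between: the largest power of 2 <= n and the
    smallest power of 2 >= n (equal when n already qualifies)."""
    up = 1
    while up < n:
        up *= 2
    return (up if up == n else up // 2), up

# A 256x128 RGB image is a legal texture as-is; a 300-pixel-wide
# source would have to be resampled to either 256 (losing data) or
# 512 (grainy) texels wide.
```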