From: Jonathan B. <jbr...@ea...> - 2004-09-08 20:54:46
On Wed, 2004-09-08 at 16:27, Joel Kahn wrote:

> First: a member of the Python Edu-Sig group sent me a
> bug fix for my image-to-object script. The way I
> originally wrote it, bitmaps with less than 24 bits
> per pixel may not work. If you insert this line . . .
>
> Picture = Picture.convert("RGB")
>
> . . . after the "Image.open" command, your bitmap data
> will be in the right format to be handled properly by
> both PIL and VPython.
>
> Second: Jonathan's summary of the texture-mapping
> prospects was educational, even though much of it was
> beyond my level of expertise. One point that did come
> through clearly was the need for powerful hardware to
> handle (let alone animate) any significant number of
> detailed texture-mapped objects. I think that this
> would be a good justification for a VPython-oriented
> cluster experiment. Does anybody have the time and the
> interest to join me in putting together some kind of
> grant proposal for a Beowulf project? I'm open to any
> relevant suggestions. . . .
>
> Joel

Cluster? No, I think that would be a bad idea for VPython. Any recent
video card can handle pretty much anything you throw at it from
VPython. A scene with several translucent and textured objects, some
using a source image of 1024x2048 pixels, can be rendered in only a
few milliseconds on common PC hardware. You really can't notice that
it takes any time at all, and the UI remains buttery-smooth. Speed is
not the problem.

The hard part is picking which parts of a complex environment to
expose to client programs in Python (or completely reformulating how
appearance is specified), and that is what I would appreciate some
feedback on.

-Jonathan
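
For reference, here is a minimal sketch of the fix described in the
quoted message above. The file name and the final three lines are
illustrative assumptions, not part of Joel's actual script; only the
convert("RGB") call comes from his description.

    import Image  # classic PIL import; newer installs use "from PIL import Image"

    # Hypothetical file name -- any 1-, 4-, or 8-bit bitmap will do.
    Picture = Image.open("heightmap.bmp")

    # Palette and grayscale bitmaps (fewer than 24 bits per pixel) are
    # expanded here into 3-byte RGB pixels, the layout the rest of an
    # image-to-object script can rely on for both PIL and VPython.
    Picture = Picture.convert("RGB")

    width, height = Picture.size
    pixels = Picture.getdata()   # sequence of (R, G, B) tuples, 0-255 each
    r, g, b = pixels[0]          # channels of the first pixel

With the data normalized to RGB, downstream code can index color
channels the same way no matter how the source bitmap was stored.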