From: Jonathan B. <jbr...@ea...> - 2004-09-09 14:20:08

On Thu, 2004-09-09 at 08:53, Jon Schull wrote:
> > Speed is not the problem. Picking the right parts of a complex
> > environment (or completely reformulating the appearance of what you
> > are doing) to expose to client programs in Python is the hard part,
> > and that is what I would appreciate some feedback on.
>
> I'm excited to hear about these developments, but got lost in the
> trees of your description of the issues.
>
> On the off chance that you too are lost in the trees of coding, I want
> to make sure that the basic vpython aesthetic is not being overlooked.
> Pixels, for example, are several levels more molecular than I would
> likely want to work.
>
> I'd want to be able to assign texture and transparency to an
> *object*'s surface. The positioning of the texture on the object might
> be controlled by rotating the texture around the center of the object.
> The texture would automagically slide around on the surface of the
> object.

That is the idea. There are two means of automatic texture-coordinate
generation (the texture coordinates specify how to wrap the texture to
the object). The first is a kind of planar projection, where you specify
the position of the texture in world space and it is linearly projected
onto the surface of the body. The second is similar, except that the
position of the texture is specified in screen space, meaning that it is
projected from somewhere relative to your monitor.

For those bodies where an obvious algorithm exists for mapping the
texture, we can provide a default mapping. The box object simply maps
the texture in its entirety to each plane. The sphere (and, by
extension, the ellipsoid) wraps the texture around itself as though the
texture were laid out like
http://earthobservatory.nasa.gov/Newsroom/BlueMarble/Images/land_ocean_ice_cloud_2048.jpg

I haven't decided on default mappings for the other objects. The
cylinder could support three separate textures (the side and each end).
The cone can do something similar, and the ring has a reasonable one
too. I can't think of a sane default for the arrow object yet.

> Since transparency is now an option (and this is very good news!) the
> surfaces of objects might also have thicknesses (with defaults of
> course). And the interiors might have a density/opacity parameter as
> well.

Well, that is the catch with transparency. You can only really set
transparency for an individual zero-thickness planar triangle in OpenGL.
So, on the outside, we expose a model where the entire body is
controlled by a common opacity value, tentatively named 'alpha', but it
should probably be named 'opacity'.

> The default condition I think should be thickness=.00001 (just enough
> to put a texture on) and opacity=1.
>
> And then vpython would do all of the per-pixel calculation (and
> shadowing and light scattering) under the hood, as is its "true
> nature".

The only thing that you would have to do per-pixel is generate the
texture itself, and only then if you are not loading the image from a
file. Shadowing and light scattering are things that OpenGL does not
support directly. They can be emulated indirectly, but not to the extent
that you would expect from, say, a raytracing package.

The code for the translucent boxes demo looks like this:

    from visual import *

    # Verify the dimensions of the bodies, with a known unit-length
    # object.
    yardstick = arrow()
    box( pos=(2, 0, 0), alpha=0.4)
    box( pos=(-2, 0, 0))

    # file_texture objects inherit from 'texture', which most Visual
    # objects can use
    panel = file_texture( "crate.bmp")
    box( pos=(0, 2, 0), texture=panel)
    box( pos=(0, -2, 0), texture=panel, color=(1, 1, 1, 0.5))
    sphere( pos=(0, -2, 0), radius=0.4)
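The sphere's default mapping described above (wrapping a flat image like the Blue Marble one around the body) can be sketched numerically. This is a hypothetical helper for illustration, not Visual's actual implementation: it maps a point on a unit sphere to (s, t) texture coordinates on an equirectangular image.

```python
from math import atan2, asin, pi

def sphere_tex_coords(x, y, z):
    """Map a point on the unit sphere to (s, t) in [0, 1] x [0, 1].

    s sweeps with longitude and t with latitude, so a flat
    equirectangular image wraps once around the equator horizontally
    and pole-to-pole vertically.
    """
    longitude = atan2(y, x)                   # range -pi .. pi
    latitude = asin(max(-1.0, min(1.0, z)))   # range -pi/2 .. pi/2
    s = (longitude + pi) / (2 * pi)
    t = (latitude + pi / 2) / pi
    return s, t

# A point on the equator lands in the middle row of the image:
print(sphere_tex_coords(1, 0, 0))   # (0.5, 0.5)
# The north pole maps to the top edge of the image:
print(sphere_tex_coords(0, 0, 1))   # t == 1.0
```

A renderer would evaluate this once per sphere vertex; the graphics card then interpolates between vertices when drawing each triangle.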
From: Jon S. <js...@so...> - 2004-09-09 12:53:28

> Speed is not the problem. Picking the right parts of a complex
> environment (or completely reformulating the appearance of what you
> are doing) to expose to client programs in Python is the hard part,
> and that is what I would appreciate some feedback on.

I'm excited to hear about these developments, but got lost in the trees
of your description of the issues.

On the off chance that you too are lost in the trees of coding, I want
to make sure that the basic vpython aesthetic is not being overlooked.
Pixels, for example, are several levels more molecular than I would
likely want to work.

I'd want to be able to assign texture and transparency to an *object*'s
surface. The positioning of the texture on the object might be
controlled by rotating the texture around the center of the object. The
texture would automagically slide around on the surface of the object.

Since transparency is now an option (and this is very good news!) the
surfaces of objects might also have thicknesses (with defaults of
course). And the interiors might have a density/opacity parameter as
well.

The default condition I think should be thickness=.00001 (just enough to
put a texture on) and opacity=1.

And then vpython would do all of the per-pixel calculation (and
shadowing and light scattering) under the hood, as is its "true nature".

I hope this is helpful or suggestive.

Regards,

Jon Schull, Ph.D.
Associate Professor, Information Technology
Rochester Institute of Technology
102 Lomb Memorial Drive
Rochester, New York 14623
sc...@di...
585-738-6696
From: Kuzminski, S. R <SKu...@fa...> - 2004-09-09 12:12:07

I have also built up scenes that render quite slowly. My first thought
is that the constructing Python code (the code that runs the algorithms
to build the shapes) may be taking more time than you realize,
especially if you are doing something recursive. You may wish to
organize your code so that you calculate everything you need, create a
series of scenes or shapes, and then line them up in a queue to be
animated (rather than calculating the next scene on the fly).

Of course, if a simple rotate is slow, that is harder. Perhaps there is
some way of optimizing the calls to VPython so that it creates a tighter
GLCallList (the precompiled OpenGL structure that you describe); I don't
know the VPython internals that well.

If you are operating on matrices or arrays (for example, calculating
cellular automata), use Numeric (or numarray) arrays. Iterating over and
operating on large arrays of regular Python objects is slow; Numeric
arrays can be used in expressions that are calculated efficiently
(in C).

S

-----Original Message-----
From: Joel Kahn
Sent: Thursday, September 09, 2004 3:40 AM
To: vis...@li...
Subject: [Visualpython-users] Speed Issues

Jonathan Brandmeyer wrote:
> Cluster? No, I think that would be a bad idea for VPython.

Bad as in "not cost-effective" (sort of your next point), or bad as in
"no significant speed increase at all in a cluster"? I'd be interested
in finding out more about whether VPython can benefit from any kind of
parallel processing.

> Any recent video card will be able to handle pretty much anything you
> can throw at it from VPython. A scene with several translucent and
> textured objects, some using a source image of 1024x2048 pixels, can
> be rendered in only a few ms on common PC hardware. You really can't
> notice that it takes any time at all, and the UI remains
> buttery-smooth.

I can speak only from my own experience with my 2GHz Athlon processor,
256MB system RAM, and S3 Graphics ProSavageDDR card (32MB graphics RAM),
usually kept in 1024x768 true-color mode, all operating under Windows
XP. In this environment, I have been able to put together objects that
caused unacceptably slow renderings, "low virtual memory" warnings, and
even the odd system crash. When I animate some complex scenes (usually
using "rotate" commands in loops), this can produce more issues. Mostly
the problems have resulted when I was working with either curves or
faces, running up into thousands of points, *and* doing fancy
vector-based rotations as well.

I suspect that this is just what happens when a nut like me starts
trying to turn out artworks with a language designed for doing physics.
:-) Seriously, though: VPython is a great system, and I fully intend to
keep exploring it; but everything has its limits, and more powerful
hardware *can* help.

Joel
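The Numeric advice above still applies today; Numeric and numarray were later merged into NumPy, but the principle is the same. A minimal sketch of the difference between iterating over elements in Python and letting the array library evaluate a whole expression in C (the particular computation here is illustrative, not from the thread):

```python
import numpy as np  # NumPy is the modern successor of Numeric/numarray

n = 100_000
angles = np.linspace(0.0, 2 * np.pi, n)

# Slow: a Python-level loop touches each element one at a time,
# paying interpreter overhead per element.
slow = [a * 0.5 + 1.0 for a in angles]

# Fast: one array expression, evaluated element-wise in compiled code.
fast = angles * 0.5 + 1.0

# Both produce the same numbers; only the per-element overhead differs.
assert np.allclose(slow, fast)
```

For frame-by-frame animation, the same idea means updating a whole array of positions with one expression rather than looping over every point in Python.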
From: Joel K. <jj...@ya...> - 2004-09-09 10:39:49

Jonathan Brandmeyer wrote:
> Cluster? No, I think that would be a bad idea for VPython.

Bad as in "not cost-effective" (sort of your next point), or bad as in
"no significant speed increase at all in a cluster"? I'd be interested
in finding out more about whether VPython can benefit from any kind of
parallel processing.

> Any recent video card will be able to handle pretty much anything you
> can throw at it from VPython. A scene with several translucent and
> textured objects, some using a source image of 1024x2048 pixels, can
> be rendered in only a few ms on common PC hardware. You really can't
> notice that it takes any time at all, and the UI remains
> buttery-smooth.

I can speak only from my own experience with my 2GHz Athlon processor,
256MB system RAM, and S3 Graphics ProSavageDDR card (32MB graphics RAM),
usually kept in 1024x768 true-color mode, all operating under Windows
XP. In this environment, I have been able to put together objects that
caused unacceptably slow renderings, "low virtual memory" warnings, and
even the odd system crash. When I animate some complex scenes (usually
using "rotate" commands in loops), this can produce more issues. Mostly
the problems have resulted when I was working with either curves or
faces, running up into thousands of points, *and* doing fancy
vector-based rotations as well.

I suspect that this is just what happens when a nut like me starts
trying to turn out artworks with a language designed for doing physics.
:-) Seriously, though: VPython is a great system, and I fully intend to
keep exploring it; but everything has its limits, and more powerful
hardware *can* help.

Joel
From: Jonathan B. <jbr...@ea...> - 2004-09-08 20:54:46

On Wed, 2004-09-08 at 16:27, Joel Kahn wrote:
> First: a member of the Python Edu-Sig group sent me a bug fix for my
> image-to-object script. The way I originally wrote it, bitmaps with
> less than 24 bits per pixel may not work. If you insert this line . . .
>
>     Picture = Picture.convert("RGB")
>
> . . . after the "Image.open" command, your bitmap data will be in the
> right format to be handled properly by both PIL and VPython.
>
> Second: Jonathan's summary of the texture-mapping prospects was
> educational, even though much of it was beyond my level of expertise.
> One point that did come through clearly was the need for powerful
> hardware to handle (let alone animate) any significant number of
> detailed texture-matched objects. I think that this would be a good
> justification for a VPython-oriented cluster experiment. Does anybody
> have the time and the interest to join me in putting together some
> kind of grant proposal for a Beowulf project? I'm open to any relevant
> suggestions. . . .
>
> Joel

Cluster? No, I think that would be a bad idea for VPython. Any recent
video card will be able to handle pretty much anything you can throw at
it from VPython. A scene with several translucent and textured objects,
some using a source image of 1024x2048 pixels, can be rendered in only a
few ms on common PC hardware. You really can't notice that it takes any
time at all, and the UI remains buttery-smooth.

Speed is not the problem. Picking the right parts of a complex
environment (or completely reformulating the appearance of what you are
doing) to expose to client programs in Python is the hard part, and that
is what I would appreciate some feedback on.

-Jonathan
From: Joel K. <jj...@ya...> - 2004-09-08 20:27:54

First: a member of the Python Edu-Sig group sent me a bug fix for my
image-to-object script. The way I originally wrote it, bitmaps with less
than 24 bits per pixel may not work. If you insert this line . . .

    Picture = Picture.convert("RGB")

. . . after the "Image.open" command, your bitmap data will be in the
right format to be handled properly by both PIL and VPython.

Second: Jonathan's summary of the texture-mapping prospects was
educational, even though much of it was beyond my level of expertise.
One point that did come through clearly was the need for powerful
hardware to handle (let alone animate) any significant number of
detailed texture-matched objects. I think that this would be a good
justification for a VPython-oriented cluster experiment. Does anybody
have the time and the interest to join me in putting together some kind
of grant proposal for a Beowulf project? I'm open to any relevant
suggestions. . . .

Joel
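The bug fix above can be demonstrated without any image file on disk. This sketch builds an indexed-color ("P" mode, 8 bits per pixel) image in memory, the case that trips up code expecting 24-bit data, and shows that convert("RGB") yields per-pixel (R, G, B) triples. PIL survives today as the Pillow package; the variable name follows the script quoted above.

```python
from PIL import Image  # PIL lives on today as the Pillow package

# Simulate an indexed-color bitmap (fewer than 24 bits per pixel).
# A real script would get this from Image.open("some.bmp") instead.
palette_img = Image.new("P", (4, 4))

# Without the conversion, each pixel is a single palette index, which
# confuses code that expects (R, G, B) triples.  convert("RGB") fixes it:
Picture = palette_img.convert("RGB")

assert Picture.mode == "RGB"
assert isinstance(Picture.getpixel((0, 0)), tuple)  # now an (R, G, B) triple
```

The conversion is cheap relative to rendering, so doing it unconditionally after Image.open is a safe habit.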
From: Jonathan B. <jbr...@ea...> - 2004-09-08 00:38:58

On Tue, 2004-09-07 at 17:44, Joel Kahn wrote:
> By using the attached script, in which a module from the Python
> Imaging Library is combined with VPython, you can display a bitmap
> image on a rectangular grid of VPython objects. Once the basic image
> is in place, of course, you can use VPython's strong capabilities for
> animation &c to produce a wide variety of visual effects. As I
> attempted to indicate in the script's comment lines, I suggest
> starting out with rather small bitmaps until you see what kind of
> results you get on your particular hardware.
>
> If anyone else has been experimenting with similar algorithms, I would
> be interested in seeing their work. This class of programs would seem
> to offer a broad range of potential benefits for education and lots of
> other areas. Email if you have questions.
>
> Joel

This sounds like it could benefit from some work I have been doing on
the next-generation rendering core for VPython. One of the new features
that I intend to have working is called texture mapping in OpenGL
terminology, but it is basically the use of a bitmap image to color the
geometry of the objects on-screen.

Here is an example of a texture-mapped box that applies the same image
to each face of the box:
http://www4.ncsu.edu/~jdbrandm/box_test.png

And here is an example that wraps an image onto a sphere using a kind of
inverse Mercator projection (the source image is from the NASA "Blue
Marble" press release):
http://www4.ncsu.edu/~jdbrandm/sphere_texture_test.png

You can also specify a per-pixel alpha value (in an RGBA or
grayscale+alpha image) that gives you variable translucency across the
body being rendered.

The hard part with exposing this functionality to client programs is:
what should the external interface be? How much complexity, and how many
options, should be visible to client programs? Since you are the first
person to ask for something like this (albeit indirectly), now seems
like as good a time to talk about it as any. The potential complexity is
very high, and I want the final result to appear easy from the Python
side of the house, just like the rest of VPython. Please feel free to
ask about anything that isn't clear below.

Some constraints are:

- Any image buffer must be N[xM[xO]], where N, M, and O are powers
  of 2.
- Textures can be either one, two, or three dimensional, and each pixel
  can be a grayscale, grayscale+alpha, RGB, or RGBA value in several
  precisions (I'm leaning towards forcing 8 bits per channel for
  simplicity's sake). Clearly, high-res 3-D textures are _very_ memory
  intensive.
- The image must be transferred into the memory of the graphics card
  before use and whenever changed, which is fairly slow.
- Use of the image to draw objects later is very fast on new machines
  and painfully slow with a software-based renderer.

Note the "cycle time" text at the bottom of those two screenshots. It
represents the time to render the entire scene in seconds on my home
machine, an 800 MHz PIII with an NVIDIA GeForce 4000 graphics card
(related to the GeForce 4), so it is relatively slow. A newer machine
(3GHz P4 + Radeon 9800) generally keeps it at about 1 ms per cycle, even
for much more complex scenes, and most of that time is waiting for the
VSYNC.

The internal texture mapping model is this: for each triangle's vertex,
you specify a coordinate on the texture object (ranging from 0 to 1 for
each dimension of the texture). The triangle that is snipped from the
image is then scaled, rotated, and filtered as needed to match the
geometry of the triangle on the screen.

In some cases, OpenGL can compute texture coordinates for you, to
perform a projection of the image onto the object being rendered based
on the positions of the vertexes in object space. You specify a vector
coefficient of a dot product which is applied to each incoming vertex to
compute the texture coordinate for that vertex (computed_coord =
some_vector.dot( vertex_pos)), for each dimension of the texture object.
For a 2D texture, this is analogous to specifying a scaled plane in
world space from which the GL projects the image onto the body being
rendered.

Also, there are a few types of filtering that can be done on the image
when it is transformed onto the target triangle. One is "nearest", and
the others are forms of linear interpolation. This comes into effect
when the image is zoomed in or out. When using "nearest" filtering, a
zoomed-in image looks grainy; with one of the interpolation modes, it
will look smooth (fuzzy at high magnifications).

Now, with that in mind, here are the external interfaces that I have
considered, but haven't yet nailed down in stone:

- For some objects (only box and sphere at present), provide a default
  mapping of the texture to the body. This is available when some form
  of sane default is obviously going to be reasonable. There probably
  isn't one for the arrow, for example.
- For all objects, provide a means of specifying the planar position and
  scaling in world space (probably relative rather than absolute) from
  which the texture should be projected onto the body. A combination of
  origin, s-axis, and t-axis (which correspond to the lower-left corner,
  horizontal direction, and vertical direction along the texture object,
  respectively) would be sufficient, I think.
- Punt to the user and allow you to manually specify texture coordinates
  with each vertex in the faces object.

Texture data can be specified to Visual either using a built-in function
to decode an image file from disk to an opaque internal buffer, or by
passing an appropriately shaped (N[xM[xO]]x{1,2,3,4}, of type Int8)
Numeric array containing in-memory pixel data. In either case, the
resulting texture object can be applied to any Visual object in a
one-to-many relationship.

Open issues:

- Does this allow for the kind of usage model that you want? I think
  that the program you attached could be implemented under this model as
  (in pseudocode):

      allocate_a_faces_object_with_two_triangles_and_tex_coordinates()
      create_an_NxMx3_Numeric_array()
      populate_the_array()
      convert_array_to_texture()
      object.texture = my_new_texture_object
      while (some_condition):
          change_array()
          convert_to_texture()
          object.texture = my_modified_texture_object

- Just how many of the options should be exposed? Alternatively (and
  probably the answer): which options would mere mortals find useful
  without being too confusing?
- Since OpenGL requires that each dimension of the texture be a power
  of 2, any source data that isn't must be scaled to fit. Should it be
  automatically scaled? If so, up or down in precision? Scaling up will
  generate a grainy texture, scaling down will lose data, and neither is
  really what the user program requested. Both seem equally right and
  wrong to me.

Some people have requested the use of procedural texture generation
functions as well, akin to what is provided with POV-Ray. I will admit
that this kind of thing is somewhat over my head right now, but I don't
want to make any kind of design decision now that would preclude a clean
implementation of this kind of feature in the future.

This is one direction that Visual is going, and I would appreciate any
feedback on it.

Thanks,
-Jonathan
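The dot-product rule quoted above (computed_coord = some_vector.dot( vertex_pos)) is how OpenGL's linear texture-coordinate generation works. A small self-contained sketch, with illustrative plane vectors that are not taken from Visual's source:

```python
# Sketch of OpenGL-style linear texture-coordinate generation
# (as with glTexGen's GL_OBJECT_LINEAR mode): each texture coordinate
# is the dot product of a fixed plane vector with the homogeneous
# vertex position.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tex_coords(vertex, s_plane, t_plane):
    """computed_coord = some_vector.dot(vertex_pos), once per dimension."""
    return dot(s_plane, vertex), dot(t_plane, vertex)

# Project the texture along the z axis: s follows x, t follows y, and
# the fourth (homogeneous) component of 0.5 recenters the image so the
# object's origin lands in the middle of the texture.
s_plane = (1.0, 0.0, 0.0, 0.5)
t_plane = (0.0, 1.0, 0.0, 0.5)

vertex = (0.25, -0.25, 3.0, 1.0)   # homogeneous object-space position
print(tex_coords(vertex, s_plane, t_plane))   # (0.75, 0.25)
```

Note that z never enters the result with these plane vectors, which is exactly what "projecting from a plane in world space" means: every vertex along the projection axis samples the same texel.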
From: Joel K. <jj...@ya...> - 2004-09-07 21:44:47

By using the attached script, in which a module from the Python Imaging
Library is combined with VPython, you can display a bitmap image on a
rectangular grid of VPython objects. Once the basic image is in place,
of course, you can use VPython's strong capabilities for animation &c to
produce a wide variety of visual effects. As I attempted to indicate in
the script's comment lines, I suggest starting out with rather small
bitmaps until you see what kind of results you get on your particular
hardware.

If anyone else has been experimenting with similar algorithms, I would
be interested in seeing their work. This class of programs would seem to
offer a broad range of potential benefits for education and lots of
other areas. Email if you have questions.

Joel
From: Aaron T. <ti...@ma...> - 2004-08-29 02:49:42

I've uploaded Tom Foster's M&I pretest for Mechanics, and I've uploaded
the Lawson Test of Scientific Reasoning.

AT
From: Bruce S. <Bru...@nc...> - 2004-08-28 14:33:31

You presumably noticed that we got a spam message sent ostensibly by
Gabriela Jordan <vis...@li...>. I've now blocked messages sent from our
own list, and this message is a test to see whether legitimate messages
still go through.

Bruce Sherwood
From: Gabriela J. <vis...@li...> - 2004-08-28 02:49:30

You must enable HTML to view this message.
From: Jonathan B. <jbr...@ea...> - 2004-08-26 23:24:57

On Thu, 2004-08-26 at 11:48, Jonathan Brandmeyer wrote:
> Did you see the message that preceded it, which said:
> "This is a quiet Makefile. If make exits with an error, check
> cvisual/build.log to see the complete error message(s). In the event
> of an error that you cannot debug, please send a message to
> vis...@li..., including the files config.log and build.log, requesting
> assistance."?

Sorry, that was rude of me. Clearly from your earlier posts you knew
about that log file.

-Jonathan
From: Jonathan B. <jbr...@ea...> - 2004-08-26 15:48:42

On Thu, 2004-08-26 at 02:45, Nils Wagner wrote:
> Dear experts,
>
> Unfortunately a make in visual-3.0 failed with the following message:
>
>     Compiling arrow.cpp ...
>     make[1]: *** [arrow.lo] Error 1
>     make[1]: Leaving directory `/var/tmp/visual-3.0/cvisual'
>     make: *** [all-recursive] Error 1
>
> Any pointer would be appreciated.
>
> Nils

Did you see the message that preceded it, which said: "This is a quiet
Makefile. If make exits with an error, check cvisual/build.log to see
the complete error message(s). In the event of an error that you cannot
debug, please send a message to vis...@li..., including the files
config.log and build.log, requesting assistance."?

The error message(s) will be in cvisual/build.log. I redirected the
output to that file because, when there is an error (even just a small
one), it tends to cascade into many screenfuls of errors. The most
relevant message will be near the top. It will probably be something
along the lines of "could not find (file) included from
(some_other_file): File not found".

HTH,
Jonathan
From: Jonathan B. <jbr...@ea...> - 2004-08-26 11:24:22

On Thu, 2004-08-26 at 03:34, Nils Wagner wrote:
> Dear experts,
>
> A new problem appeared during the compilation. At the end of build.log
> in cvisual I found
>
>     g++ -I/usr/include/python2.3 -DHAVE_CONFIG_H -I. -I.
>     -I/opt/gnome/include/gtk-1.2 -I/usr/X11R6/include
>     -I/opt/gnome/include/glib-1.2 -I/opt/gnome/lib/glib/include
>     -D_REENTRANT -I/opt/gnome/include/glib-1.2
>     -I/opt/gnome/lib/glib/include -I/opt/gnome/include -fpic -DPIC
>     -g -O2 -ftemplate-depth-120 -g0 -c cvisualmodule.cpp -fPIC -DPIC
>     -o .libs/cvisualmodule.o
>     /usr/include/boost/mpl/if.hpp:75: internal compiler error:
>     Segmentation fault
>     Please submit a full bug report, with preprocessed source if
>     appropriate.
>     See <URL:http://www.suse.de/feedback> for instructions.
>     Preprocessed source stored into /tmp/ccrXaXyw.out file, please
>     attach this to your bugreport.
>
> Any pointer would be appreciated.
>
> Thanks in advance.
> Nils

http://www.vpython.org/Building%20VPython%20on%20SuSE.html

HTH,
-Jonathan
From: Nils W. <nw...@me...> - 2004-08-26 07:34:28

Dear experts,

A new problem appeared during the compilation. At the end of build.log
in cvisual I found

    g++ -I/usr/include/python2.3 -DHAVE_CONFIG_H -I. -I.
    -I/opt/gnome/include/gtk-1.2 -I/usr/X11R6/include
    -I/opt/gnome/include/glib-1.2 -I/opt/gnome/lib/glib/include
    -D_REENTRANT -I/opt/gnome/include/glib-1.2
    -I/opt/gnome/lib/glib/include -I/opt/gnome/include -fpic -DPIC -g
    -O2 -ftemplate-depth-120 -g0 -c cvisualmodule.cpp -fPIC -DPIC -o
    .libs/cvisualmodule.o
    /usr/include/boost/mpl/if.hpp:75: internal compiler error:
    Segmentation fault
    Please submit a full bug report, with preprocessed source if
    appropriate.
    See <URL:http://www.suse.de/feedback> for instructions.
    Preprocessed source stored into /tmp/ccrXaXyw.out file, please
    attach this to your bugreport.

Any pointer would be appreciated.

Thanks in advance.
Nils
From: Arnd B. <arn...@we...> - 2004-08-26 07:26:20

On Thu, 26 Aug 2004, Nils Wagner wrote:
> Hi all,
>
> I have used scipy (www.scipy.org) to integrate the equations of motion
> of a double pendulum. I am using the angles \varphi_1, \varphi_2 as
> generalized coordinates. Is it possible to visualize the planar motion
> of the double pendulum with this information? Has someone written a
> small vpython program for this (or a similar) example?
>
> Any pointer would be appreciated.

Obviously you missed the demos ;-) Have a look at
site-packages/visual/demos/doublependulum.py (or get that via
http://vpython.org/download/Demos-2004-07-22.zip ).

Best, Arnd

P.S.: BTW, I could not find any pointer to the demos on the web page, so
maybe a link to the demos, together with screenshots and links to the
code, would be nice. (It could attract some new users, I'd guess. ;-)
From: Nils W. <nw...@me...> - 2004-08-26 07:09:44

Hi all,

I have used scipy (www.scipy.org) to integrate the equations of motion
of a double pendulum. I am using the angles \varphi_1, \varphi_2 as
generalized coordinates. Is it possible to visualize the planar motion
of the double pendulum with this information? Has someone written a
small vpython program for this (or a similar) example?

Any pointer would be appreciated.

Nils
From: Nils W. <nw...@me...> - 2004-08-26 06:45:57

Dear experts,

Unfortunately a make in visual-3.0 failed with the following message:

    Compiling arrow.cpp ...
    make[1]: *** [arrow.lo] Error 1
    make[1]: Leaving directory `/var/tmp/visual-3.0/cvisual'
    make: *** [all-recursive] Error 1

Any pointer would be appreciated.

Nils
From: Isaac W H. <isa...@om...> - 2004-08-25 15:59:56

    export CPPFLAGS=-I/opt/gnome/include/gtkgl

- Isaac

On Wed, 2004-08-25 at 10:51, Nils Wagner wrote:
> Dear experts,
>
> I am going to install vpython on SuSE9.1
>
> gtkglarea.h is located in /opt/gnome/include/gtkgl/
>
> How do I execute
>
>     CPPFLAGS=-I/opt/gnome/include/gtkgl ?
>
> Nils

--
Isaac W Hanson
Lead Software Developer; Omnidox, LC; http://www.omnidox.com
From: Nils W. <nw...@me...> - 2004-08-25 15:51:54

Dear experts,

I am going to install vpython on SuSE9.1.

gtkglarea.h is located in /opt/gnome/include/gtkgl/

How do I execute

    CPPFLAGS=-I/opt/gnome/include/gtkgl ?

Nils
From: rene h. <re...@we...> - 2004-08-23 11:32:44

hi,

when running configure, it stops with the following error message:

    checking for gthread >= 1.0... Package gthread was not found in the
    pkg-config search path. Perhaps you should add the directory
    containing `gthread.pc' to the PKG_CONFIG_PATH environment variable
    No package 'gthread' found
    configure: error: GThread is required on Unix-like systems

although PKG_CONFIG_PATH contains the directory that contains
gthread.pc.

grateful for advice,
rene
From: Bruce S. <Bru...@nc...> - 2004-08-23 00:33:33
|
I think I see. Thanks for the further analysis. So two wrongs don't make
(or shouldn't have made) a right.

Bruce Sherwood

Jonathan Brandmeyer wrote:
> On Sun, 2004-08-22 at 16:38, Bruce Sherwood wrote:
>
>> I ran Jonathan's test routine in pre-Boost and Boost versions of
>> VPython. In the pre-Boost version the behavior is NOT what Gary and
>> Jonathan describe. In the pre-Boost version the two forms of the update
>> statement work exactly the same in Jonathan's test routine. The original
>> Visual deliberately did something like this:
>>
>>     sphere(pos=a) was treated as though it were written as
>>     sphere(pos=vector(a))
>
> Current VPython does this too for all of its built-in attributes. The
> problem stems from the use of user-defined attributes, for which there
> is no way to force a copy that will work for all Python data types
> (copy.copy() is close, but doesn't work for everything).
>
> Old visual.vector did not provide the in-place operators (+=, *=, and
> friends), which forced the interpreter to emulate them incorrectly.
>
> Python programmers must be aware that all Python variables are
> references. Consider this program, which also incorrectly assumes that
> '=' makes a copy:
>
>     from visual import *
>     spheres = []
>     velocity = vector(0,0,0)
>     spheres.append( sphere( v=velocity))
>     spheres.append( sphere( v=velocity))
>
>     twice = 2
>     while twice:
>         print "before iteration:"
>         for i in spheres:
>             print i.v
>
>         ctr = 0
>         for i in spheres:
>             print ctr
>             ctr += 1
>             print "before:", i.v
>             i.v.x += 1
>             print "after", i.v
>
>         twice -= 1
>
> -Jonathan |
From: Jonathan B. <jbr...@ea...> - 2004-08-22 21:06:43
|
On Sun, 2004-08-22 at 16:38, Bruce Sherwood wrote:
> I ran Jonathan's test routine in pre-Boost and Boost versions of
> VPython. In the pre-Boost version the behavior is NOT what Gary and
> Jonathan describe. In the pre-Boost version the two forms of the update
> statement work exactly the same in Jonathan's test routine. The original
> Visual deliberately did something like this:
>
>     sphere(pos=a) was treated as though it were written as
>     sphere(pos=vector(a))

Current VPython does this too for all of its built-in attributes. The
problem stems from the use of user-defined attributes, for which there
is no way to force a copy that will work for all Python data types
(copy.copy() is close, but doesn't work for everything).

Old visual.vector did not provide the in-place operators (+=, *=, and
friends), which forced the interpreter to emulate them incorrectly.

Python programmers must be aware that all Python variables are
references. Consider this program, which also incorrectly assumes that
'=' makes a copy:

    from visual import *
    spheres = []
    velocity = vector(0,0,0)
    spheres.append( sphere( v=velocity))
    spheres.append( sphere( v=velocity))

    twice = 2
    while twice:
        print "before iteration:"
        for i in spheres:
            print i.v

        ctr = 0
        for i in spheres:
            print ctr
            ctr += 1
            print "before:", i.v
            i.v.x += 1
            print "after", i.v

        twice -= 1

-Jonathan |
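[Archive editor's note: the reference-vs-rebinding behavior Jonathan describes can be checked without Visual installed. This is a minimal sketch using a plain list as a stand-in for visual.vector — any mutable type with in-place operators shows the same thing: `+=` mutates the one shared object, while `x = x + y` binds the name to a new object.]

```python
# A plain list stands in for visual.vector here (an assumption for
# illustration; vector itself is mutable and defines += the same way).
shared = [0.0, 0.0, 0.0]
a = shared
b = shared          # 'a' and 'b' are two names for ONE object

a += [1.0]          # in-place: mutates the single shared list
assert a is b       # still the same object...
assert b == [0.0, 0.0, 0.0, 1.0]   # ...so 'b' sees the change

b = b + [2.0]       # rebinding: builds a brand-new list for 'b'
assert a is not b                  # names now refer to different objects
assert a == [0.0, 0.0, 0.0, 1.0]   # 'a' is untouched by the rebinding
```

This is exactly why the two update lines in Gary's program behave differently: with `+=`, every sphere's `v` mutates the one shared vector; with `v = v + …`, each sphere gets its own fresh vector on the first iteration.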
From: Bruce S. <Bru...@nc...> - 2004-08-22 20:38:48
|
I ran Jonathan's test routine in pre-Boost and Boost versions of
VPython. In the pre-Boost version the behavior is NOT what Gary and
Jonathan describe. In the pre-Boost version the two forms of the update
statement work exactly the same in Jonathan's test routine. The original
Visual deliberately did something like this:

    sphere(pos=a) was treated as though it were written as
    sphere(pos=vector(a))

The intent was to avoid many of the problems that caught Gary, using the
trick that Jonathan describes. I can't pretend to be able to see through
all aspects of this issue, but my first instinct is to make the latest
Visual do what the original one did, both because it avoids some subtle
problems for users and because the change has presumably broken some
working programs somewhere. Comments?

Bruce Sherwood

gp...@ri... wrote:
> All,
>
> I have a visual python program whose behavior changes when I
> alternately comment and uncomment these two lines:
>
>     s.v += (force / s.mass) * dt
>     s.v = s.v + (force / s.mass) * dt
>
> where s is a sphere instance and... well, see below.
>
> I thought this was odd, so I asked Jonathan to take a look. He did,
> and: Gotme! (as in Gotcha!)
>
> Jonathan's reply to me is reproduced below. There are some good
> lessons here. I think there are three: 1.) the reference vs. value
> feature, 2.) a += b is not entirely equivalent to a = a + b, 3.)
> default arguments in function defs are built into the function object
> when the function is defined, not when it is called.
>
> The reference to MakeMass() is a reference to my program. I *think*
> Jonathan's explanation is clear enough on its own without having to
> know exactly what's inside MakeMass.
>
> regards, with thanks to J.B.,
> -gary
>
> ---------------------------- Original Message ----------------------------
> Subject: Re: incrementing vectors "problem"
> From: "Jonathan Brandmeyer" <jbr...@ea...>
> Date: Wed, August 18, 2004 5:09 pm
> To: gp...@ri...
> --------------------------------------------------------------------------
>
>> On Wed, 2004-08-18 at 09:28, gp...@ri... wrote:
>> This is a work in progress. There must be a coding problem somewhere
>> (there are a number of changes in the works) but I'm stuck at this spot.
>> If you can just run it and let me know if they are the same or different
>> for you, I'll have some idea about what I should do next. (I'm not
>> asking you to debug my code!)
>
> I was sufficiently perplexed that I did debug your code. You have been
> burned by reference vs. value semantics.
>
> Consider the following code:
>
>     from visual import *
>     spheres = []
>     velocity = vector(0,0,0)
>     spheres.append( sphere( v=velocity))
>     spheres.append( sphere( v=velocity))
>     spheres.append( sphere( v=velocity))
>     spheres.append( sphere( v=velocity))
>
>     twice = 2
>     while twice:
>         print "before iteration:"
>         for i in spheres:
>             print i.v
>
>         ctr = 0
>         for i in spheres:
>             print ctr
>             ctr += 1
>             print "before:", i.v
>             i.v += vector(.01, .01, .01)
>             # i.v = i.v + vector(.01, .01, .01)
>             print "after", i.v
>
>         twice -= 1
>
> Run it, and be surprised. The problem is that when each sphere is
> created, its 'v' attribute is a reference to the single vector pointed
> to by 'velocity'. So, when the loop runs with += expressions, each of
> them is changing the single global vector 'velocity', but when it is
> run with 'x = x + y' expressions, on the first iteration, each 'v'
> attribute is reassigned to a new, unique vector: the returned result of
> the addition.
>
> What is the fix? Whenever you want a true copy, you can invoke the
> "copy constructor" for a vector to break the reference cycle:
>
>     velocity = vector(0,0,0)
>     v = vector(velocity)
>
> Note that all of the visual objects' vector attributes underlying "set
> functions" do essentially the same thing.
>
> Actually, there is one additional piece of information that is specific
> to your code. In your case, the common vector was the default value for
> the 'velocity' argument in the MakeMass() function. When the
> interpreter passes the closing line of the function definition, it
> creates a callable object, named "MakeMass", and that object has a
> single copy of any default arguments within it. Every time MakeMass's
> __call__() member function is invoked without one of the arguments for
> which there is a default, that parameter is replaced with a reference
> to the single instance of the default value that was created when
> MakeMass was created.
>
> I know it's a damned subtle problem, but those are Python's semantics.
>
> HTH,
> -Jonathan |
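[Archive editor's note: the shared-default-argument behavior Jonathan describes above is easy to reproduce in isolation. The `make_mass` function below is hypothetical — Gary's actual MakeMass is not shown in the thread — but the mechanism is identical: the default value is built once, when `def` executes, and every default call shares that single object.]

```python
# Hypothetical stand-in for MakeMass: a mutable default is created ONCE,
# at function-definition time, and shared by every default call.
def make_mass(velocity=[0.0, 0.0, 0.0]):
    return {"v": velocity}

m1 = make_mass()
m2 = make_mass()
assert m1["v"] is m2["v"]    # both hold the one default list
m1["v"][0] += 1.0            # in-place update through m1...
assert m2["v"][0] == 1.0     # ...is visible through m2

# The conventional fix: use None as a sentinel, build a fresh value per call.
def make_mass_fixed(velocity=None):
    if velocity is None:
        velocity = [0.0, 0.0, 0.0]
    return {"v": velocity}

m3 = make_mass_fixed()
m4 = make_mass_fixed()
m3["v"][0] += 1.0
assert m4["v"][0] == 0.0     # now each call gets its own object
```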
From: <gp...@ri...> - 2004-08-22 19:59:46
|
All,

I have a visual python program whose behavior changes when I alternately
comment and uncomment these two lines:

    s.v += (force / s.mass) * dt
    s.v = s.v + (force / s.mass) * dt

where s is a sphere instance and... well, see below.

I thought this was odd, so I asked Jonathan to take a look. He did, and:
Gotme! (as in Gotcha!)

Jonathan's reply to me is reproduced below. There are some good lessons
here. I think there are three: 1.) the reference vs. value feature,
2.) a += b is not entirely equivalent to a = a + b, 3.) default
arguments in function defs are built into the function object when the
function is defined, not when it is called.

The reference to MakeMass() is a reference to my program. I *think*
Jonathan's explanation is clear enough on its own without having to know
exactly what's inside MakeMass.

regards, with thanks to J.B.,
-gary

---------------------------- Original Message ----------------------------
Subject: Re: incrementing vectors "problem"
From: "Jonathan Brandmeyer" <jbr...@ea...>
Date: Wed, August 18, 2004 5:09 pm
To: gp...@ri...
--------------------------------------------------------------------------

> On Wed, 2004-08-18 at 09:28, gp...@ri... wrote:
> This is a work in progress. There must be a coding problem somewhere
> (there are a number of changes in the works) but I'm stuck at this spot.
> If you can just run it and let me know if they are the same or different
> for you, I'll have some idea about what I should do next. (I'm not
> asking you to debug my code!)

I was sufficiently perplexed that I did debug your code. You have been
burned by reference vs. value semantics.

Consider the following code:

    from visual import *
    spheres = []
    velocity = vector(0,0,0)
    spheres.append( sphere( v=velocity))
    spheres.append( sphere( v=velocity))
    spheres.append( sphere( v=velocity))
    spheres.append( sphere( v=velocity))

    twice = 2
    while twice:
        print "before iteration:"
        for i in spheres:
            print i.v

        ctr = 0
        for i in spheres:
            print ctr
            ctr += 1
            print "before:", i.v
            i.v += vector(.01, .01, .01)
            # i.v = i.v + vector(.01, .01, .01)
            print "after", i.v

        twice -= 1

Run it, and be surprised. The problem is that when each sphere is
created, its 'v' attribute is a reference to the single vector pointed
to by 'velocity'. So, when the loop runs with += expressions, each of
them is changing the single global vector 'velocity', but when it is run
with 'x = x + y' expressions, on the first iteration, each 'v' attribute
is reassigned to a new, unique vector: the returned result of the
addition.

What is the fix? Whenever you want a true copy, you can invoke the "copy
constructor" for a vector to break the reference cycle:

    velocity = vector(0,0,0)
    v = vector(velocity)

Note that all of the visual objects' vector attributes underlying "set
functions" do essentially the same thing.

Actually, there is one additional piece of information that is specific
to your code. In your case, the common vector was the default value for
the 'velocity' argument in the MakeMass() function. When the interpreter
passes the closing line of the function definition, it creates a
callable object, named "MakeMass", and that object has a single copy of
any default arguments within it. Every time MakeMass's __call__() member
function is invoked without one of the arguments for which there is a
default, that parameter is replaced with a reference to the single
instance of the default value that was created when MakeMass was
created.

I know it's a damned subtle problem, but those are Python's semantics.

HTH,
-Jonathan |
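[Archive editor's note: Jonathan's "copy constructor" fix can be sketched without Visual installed. The tiny `Vector` class below is a stand-in for visual.vector — an assumption for illustration, not Visual's actual implementation — showing how constructing from an existing instance yields an independent copy while plain assignment only creates an alias.]

```python
# Minimal stand-in for visual.vector, supporting the copy-constructor
# form vector(other_vector) that Jonathan recommends.
class Vector:
    def __init__(self, x, y=None, z=None):
        if isinstance(x, Vector):              # "copy constructor" form
            self.x, self.y, self.z = x.x, x.y, x.z
        else:
            self.x, self.y, self.z = x, y, z

velocity = Vector(0.0, 0.0, 0.0)
v1 = Vector(velocity)   # true copy: independent object
v2 = velocity           # plain assignment: just another name

velocity.x += 1.0       # mutate the original in place
assert v2 is velocity
assert v2.x == 1.0      # the alias sees the mutation...
assert v1.x == 0.0      # ...the copy does not
```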