From: <Do...@ao...> - 2001-10-22 01:29:19
|
from visual import *

scene.width = 320
scene.height = 320
#scene.ambient = 0.05
#scene.range = (15,15,15)
my_fov = 3.1415926/4.0
scene.fov = my_fov
scene.x = 0
scene.y = 0
scene.lights = [vector(0,0,1.0)]
#scene.ambient = 0

art = frame()
topball = sphere(pos=(0,6,0), radius=2, color=(1,0.7,0.2))
botball = sphere(pos=(0,-2,0), radius=2, color=(1,0.7,0.2))
triangle1 = convex(frame=art, pos=[(2,2,2),(0,0,0),(-2,2,2)], color=color.red)
triangle2 = convex(frame=art, pos=[(2,2,2),(0,4,0),(-2,2,2)], color=color.blue)
triangle3 = convex(frame=art, pos=[(2,2,-2),(0,0,0),(-2,2,-2)], color=color.blue)
triangle4 = convex(frame=art, pos=[(2,2,-2),(0,4,0),(-2,2,-2)], color=color.red)
center = sphere(frame=art, pos=(0,2,0), radius=0.5, color=color.yellow)
# Note: box takes lowercase length/width/height keywords.
box1 = box(frame=art, pos=(5,2,0), axis=(0,1,0), length=2, width=2, height=6, color=color.yellow)
box2 = box(frame=art, pos=(-5,2,0), axis=(0,1,0), length=2, width=2, height=6, color=color.yellow)
box3 = box(pos=(7,2,0), axis=(0,1,0), length=2, width=2, height=9, color=(1,0,1))
box4 = box(pos=(-7,2,0), axis=(0,1,0), length=2, width=2, height=6, color=(0,1,1))
box5 = box(pos=(9,2,0), axis=(0,1,0), length=2, width=2, height=6, color=color.red)
box6 = box(pos=(-9,2,0), axis=(0,1,0), length=2, width=2, height=6, color=color.red)

for i in xrange(1, 2000):
    art.rotate(angle=0.1, axis=(0,1,0), origin=(0,0,0))
    topball.rotate(angle=-0.1, axis=(0,1,0), origin=(0,0,0))
    botball.rotate(angle=-0.1, axis=(0,1,0), origin=(0,0,0))
    box3.rotate(angle=-0.1, axis=(0,1,0), origin=(0,0,0))
    box3.rotate(angle=-0.3, axis=box3.axis, origin=box3.pos)
    box4.rotate(angle=-0.1, axis=(0,1,0), origin=(0,0,0))
    box5.rotate(angle=-0.1, axis=(0,0,1), origin=(0,0,0))
    box6.rotate(angle=-0.1, axis=(0,0,1), origin=(0,0,0))
    rate(30)
|
From: Bruce S. <ba...@an...> - 2001-10-22 14:32:32
|
Ah. Now I see what you're referring to. When you explicitly set the range, that turns off autoscaling, and the camera stays in a fixed location. Otherwise, with autoscaling in effect, you're seeing the camera move in and out to try to make sure that the entire scene is visible at all times. The autoscaling machinery isn't perfect and sometimes makes mistakes about whether the camera needs to move back or not. A useful scheme is to establish the scene with its objects, then execute "scene.autoscale = 0" to turn off further autoscaling.

Bruce Sherwood

--On Monday, October 22, 2001 10:11 +0000 Do...@ao... wrote:

> Pentium 2 PC, 400 MHz, sort of an old machine, ATI video card...
>
> When I run things with scene.fov, the complex appears to move back
> and forth in space; when I use scene.range, the motion is not present.
>
> Wayne |
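The back-and-forth motion Wayne describes follows from a little trigonometry: with a fixed fov, the distance an autoscaling camera needs in order to keep the whole scene visible grows and shrinks with the scene's bounding size, and Wayne's 9-unit box sweeps out a changing bounding box as it tumbles. A rough sketch of the geometry (plain Python; this is an illustration of the principle, not Visual's actual autoscaling code, and only the fov value is taken from Wayne's program):

```python
import math

def camera_distance(half_width, fov):
    # Distance at which a scene of the given half-width just fills
    # a camera whose field of view is `fov` radians.
    return half_width / math.tan(fov / 2.0)

fov = math.pi / 4.0  # the scene.fov set in Wayne's program

# As the rotating objects sweep out a changing bounding box, an
# autoscaling camera keeps adjusting its distance to match:
for half_width in (10.0, 12.0, 10.0):
    print("half-width %.1f -> camera distance %.2f"
          % (half_width, camera_distance(half_width, fov)))
```

With scene.range fixed (or scene.autoscale = 0), that distance never gets recomputed, which is why the motion disappears.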
From: Bruce S. <ba...@an...> - 2001-10-22 01:49:00
|
--On Sunday, October 21, 2001 21:29 +0000 Do...@ao... wrote:

> Background on the first question. I am looking into doing some
> experiments with VPython using digital video output which could have some
> applicability to driving a scene projector. I am trying to understand
> what data is being generated, how many bits etc., so I can decide how to
> encode and grab off from digital video the scene intensity data.

Surely this has much more to do with the color depth of your video card than with Visual.

> On another related question. How does VPython handle rendering object
> data which is sub-pixel? If I for example have a scene.range of, say, 1000
> meters, and a 200x200 scene.x and scene.y, then a screen pixel is on
> the order of 5 meters. (Math kept simple, in deference to me.) If I have
> an object that is, say, a 1 meter sphere, will it not appear, or only
> appear if it happens to be hit by some sampling process?

Try it! You'll see that very small (or, equivalently, very distant) objects don't display. There might be some question as to whether they should display one hardware pixel, but currently they don't. (I'm assuming you mean "scene.width" and "scene.height", not "scene.x" and "scene.y".)

> Oh, before I forget. I have a simple program that looks at a
> number of objects that are rotating. When I use scene.range
> to specify the effective field of view, it behaves nicely. If I use
> scene.fov, there is a weird motion of the scene. I will include a copy
> for your amusement, probably at my stupidity...

I don't understand the issue. When I run your program using either range or fov, the behavior looks the same to me. Say more? What platform?

Bruce Sherwood |
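Wayne's pixel arithmetic can be spelled out directly (plain Python; following his deliberately simplified numbers, the 1000 m range is treated here as the full visible width of a 200-pixel window):

```python
# How many metres one screen pixel covers, and roughly how many
# pixels an object spans -- anything well under one pixel is at the
# mercy of the rasterizer's sampling, which is why tiny objects can
# vanish entirely.

def metres_per_pixel(visible_width_m, window_pixels):
    return visible_width_m / float(window_pixels)

def pixels_covered(object_size_m, visible_width_m, window_pixels):
    return object_size_m / metres_per_pixel(visible_width_m, window_pixels)

mpp = metres_per_pixel(1000.0, 200)      # 5.0 metres per pixel
span = pixels_covered(1.0, 1000.0, 200)  # 0.2 pixels: sub-pixel
```

At 0.2 pixels the 1 m sphere falls below the sampling grid, matching Bruce's observation that very small or very distant objects simply don't display.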
From: David S. <dsc...@vy...> - 2001-10-22 15:30:53
|
> > Background on the first question. I am looking into doing some
> > experiments with VPython using digital video output which could have
> > some applicability to driving a scene projector. I am trying to
> > understand what data is being generated, how many bits etc., so I can
> > decide how to encode and grab off from digital video the scene
> > intensity data.
>
> Surely this has much more to do with the color depth of your video card
> than with Visual.

Yes. Visual uses OpenGL for low-level rendering, and exactly how rasterization takes place is determined by the OpenGL drivers for your video card. Usually, the framebuffer will store 5-8 bits for each of red, green, and blue. I have never heard of a framebuffer with a separate intensity channel.

> > On another related question. How does VPython handle rendering object
> > data which is sub-pixel? If I for example have a scene.range of, say,
> > 1000 meters, and a 200x200 scene.x and scene.y, then a screen pixel
> > is on the order of 5 meters. (Math kept simple, in deference to me.)
> > If I have an object that is, say, a 1 meter sphere, will it not appear,
> > or only appear if it happens to be hit by some sampling process?

Again, this might conceivably depend on your OpenGL implementation. Visual itself makes no attempt to do antialiasing (though it certainly preserves coordinate information to subpixel accuracy). However, some OpenGL implementations can do screen-space antialiasing by supersampling.

If you want to create very high-quality video from Visual, the best approach would be to render individual frames at very high resolution and scale them down (this is essentially a very slow supersampling implementation). You will need to extend Visual somehow to get access to the image in the framebuffer.
Alternatively, you might explore using the POVray export to generate a POV scene for each frame, raytrace it (with as much antialiasing as you like), and then composite the raytraced images into a video.

Dave |
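The render-big-then-shrink approach Dave describes is, at heart, a box filter: average each block of high-resolution pixels down to one output pixel. A toy version (plain Python, with a list-of-lists of intensities standing in for however you end up grabbing the framebuffer):

```python
def downsample(frame, factor):
    # Box-filter an image (rows of pixel intensities) by averaging
    # each factor x factor block -- the "very slow supersampling"
    # described above, in miniature.
    h = len(frame) // factor
    w = len(frame[0]) // factor
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            block = [frame[y * factor + dy][x * factor + dx]
                     for dy in range(factor)
                     for dx in range(factor)]
            row.append(sum(block) / float(len(block)))
        out.append(row)
    return out

# A 4x4 frame with one bright 2x2 feature becomes a 2x2 frame in which
# the feature contributes fractional coverage instead of hard aliasing.
frame = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
small = downsample(frame, 2)  # each output pixel is 0.25
```

A real pipeline would do the same averaging per color channel at, say, 4x the target resolution; the principle is unchanged.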