From: John G. <je...@vi...> - 2000-10-27 20:39:45
Hello, I originally sent this to the opengl.org forums but didn't get much help, aside from someone suggesting that it is a hardware-acceleration problem. That's not the case: my Voodoo2 runs all the latest Linux OpenGL games perfectly. (I actually bought the card specifically for Tuxedo T Penguin: A Quest for Herring, before the large array of commercial game ports came in.) So I am sure that it is hardware-accelerating correctly. I'll just paste my post from opengl.org to save having to retype it all.

Since posting it I have removed the glFrustum and glViewport calls from the main loop, with no speed increase. Also, I forgot to expand the i3dTranslate and i3dRotate functions into the code that they call so you could see what is going on (I did that for the rest of the functions I wrote), but I guess you can figure out what goes on behind the scenes of those two. It is purely OpenGL code, so the bottleneck wouldn't be there.

<begin repost>

Hi, I'm having a bit of a problem finding out what is causing my code to be so slow. I have a model rotating with vertex shading and am only getting about 11 FPS. (The model has 7121 faces, so that is about 79,000 polygons per second.) The thing is, I have tried it at resolutions from 320x240 up to 800x600 and the framerate is always the same, so my rendering code shouldn't be the problem, correct? Further evidence that it is not the rendering code causing the problem is that I switched from a for loop that calls glColor, glNormal, and glVertex once for each face to using glDrawArrays, and didn't notice any improvement whatsoever in framerate. The problem is that there is very little in the way of non-GL code in the main loop.
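As a quick sanity check on the throughput figure quoted above, the arithmetic is just faces per frame times frames per second; a minimal sketch, with the numbers taken straight from the post:

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the post: 7121 triangular faces drawn per frame,
         * at roughly 11 frames per second. */
        const int    faces_per_frame = 7121;
        const double fps             = 11.0;

        /* 7121 * 11 = 78331, i.e. "about 79,000 polygons per second". */
        double polys_per_sec = faces_per_frame * fps;
        printf("%.0f polygons/sec\n", polys_per_sec);
        return 0;
    }

Since this number stays flat across window sizes, the per-polygon (transform/lighting) path, not fill rate, is the suspect.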
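For reference, glDrawArrays as used below walks flat per-vertex buffers, three vertices per triangular face; a minimal sketch of that layout, where the Object struct fields match the names in the post but the allocation helper is hypothetical:

    #include <assert.h>
    #include <stdlib.h>

    /* Stand-in for the poster's object: one flat float buffer per
     * attribute, three vertices per triangular face. */
    typedef struct {
        int    facecount;
        float *vertexarray;   /* 3 floats per vertex (x, y, z)    */
        float *vnormalarray;  /* 3 floats per vertex (nx, ny, nz) */
        float *colorarray;    /* 4 floats per vertex (r, g, b, a) */
    } Object;

    static Object *object_alloc(int facecount)
    {
        Object *o = malloc(sizeof *o);
        int verts = facecount * 3;  /* same count later passed to glDrawArrays */
        o->facecount    = facecount;
        o->vertexarray  = calloc((size_t)verts * 3, sizeof(float));
        o->vnormalarray = calloc((size_t)verts * 3, sizeof(float));
        o->colorarray   = calloc((size_t)verts * 4, sizeof(float));
        return o;
    }

    int main(void)
    {
        Object *nimitz = object_alloc(7121);
        /* glDrawArrays(..., 0, facecount * 3) would touch 21363 vertices,
         * so each attribute array must hold at least that many entries. */
        assert(nimitz->facecount * 3 == 21363);
        return 0;
    }
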
The main loop looks like this:

    for (i = 0; i < 100; i++) {
        glViewport(0, 0, (GLsizei) W, (GLsizei) H);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-1.0, 1.0, -1.0, 1.0, MinDepth, MaxDepth);
        glMatrixMode(GL_MODELVIEW);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        i3dTranslate(-Camera->position[VALUE_X], -Camera->position[VALUE_Y],
                     -Camera->position[VALUE_Z]);
        i3dRotate(-Camera->angle[VALUE_X], -Camera->angle[VALUE_Y],
                  -Camera->angle[VALUE_Z]);
        /* I'll explain this next line later */
        i3dDrawObject(Nimitz);
        /* I'm using SDL, not GLUT */
        SDL_GL_SwapBuffers();
        Nimitz->angle[VALUE_Y] += 2.0;
    }

i3dDrawObject is of course a function I wrote; in actuality the main loop has more of my functions than that, but I expanded them here so you'd know exactly what is going on. i3dDrawObject has an if statement that checks for things like texturing, lighting, etc., so that I can be sure to pass the right set of normals and such. But the main rendering part is here:

    glColorPointer(4, GL_FLOAT, 0, Object->colorarray);
    glNormalPointer(GL_FLOAT, 0, Object->vnormalarray);
    glVertexPointer(3, GL_FLOAT, 0, Object->vertexarray);
    glDrawArrays(I3DRenderType, 0, Object->facecount * 3);

I've actually commented out everything but that, to be sure it wasn't the if statements giving me trouble, and still got the same framerate. Any ideas on what it could be? I've even commented out the code that adds to the angle of the model; the only thing left is required GL code. I'd appreciate any help.

<end repost>

Is there anything specific to Mesa (I'm using 3.2, I believe) that would cause this to happen? Surely if it were a hardware-acceleration problem, reducing the resolution would have fixed it. And all I am passing it is basically pure OpenGL code. I'm rather puzzled. I hope I am not stuck at 79,000 polys/sec.

-- 
"I'm willing to J.O.B., just not on no jagg-off shoe-shine tip"
"Jagg-off shoe-shine tip?"
"No *background-checking* jagg-off shoe-shine tip"