Re: [PyOpenGL-Users] PyOpenGL Deprecation & New Methods
From: Mike C. F. <mcf...@vr...> - 2009-12-11 16:00:18
Ian Mallett wrote:
> I was thinking of just computing the matrices in the main and then
> passing those as uniforms, similarly to the present setup, instead of
> trying to define a series of transformations and pass those in. Not
> sure if that's what you mean.
>
> Simply add a part in each of the drawing functions (e.g., I have stuff
> like object.draw_vbo(shader)) that passes in the current matrix, and
> then redefine glScalef, glRotatef, etc. to change a hardcoded matrix.
> Then you can code in legacy style.
>
> Still not sure why they decided to change it. IMO, the matrix setup
> works beautifully. There can only be negative performance
> consequences from not implementing it in hardware.

The basic idea is that since most (professional) game engines wind up
computing the final transformation matrix anyway, why have the GL *also*
compute it? That allows them (in theory) to reduce the complexity of the
GL (though since the deprecations were reversed, it didn't work :) ).

You can see how this works in OpenGLContext's "flat" render pass.
Calculating the matrix in full on the CPU lets it do frustum culling,
depth-sorting, etcetera, then pass the same matrix into the GL with a
couple of calls.

Thing is, there hasn't been a hardware implementation of
glRotate/glScale/glTranslate in a long time, from what I'm told. The GL
has just been implementing all of the "legacy" transformation code in
simple (read: not-very-optimized) C code in the driver, which then passed
the results into the shader it was building under the covers to render
the legacy shading model. Your "workstation" graphics cards were
basically just better-optimized drivers for approximately the same
hardware. The performance is thus ~the same whether you do the
calculations in C (or numpy) or "in the GL". The hardware doesn't do the
work, incidentally, because it costs silicon and, as mentioned, "real
world" engines don't need it.
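To make the "redefine glScalef, glRotatef, etc. to change a hardcoded
matrix" idea concrete, here is a minimal numpy sketch of a CPU-side
matrix stack. The class and method names (MatrixStack, translate, etc.)
are my own invention, not PyOpenGL API; the matrices follow the layouts
documented for glTranslatef/glScalef/glRotatef:

```python
import math
import numpy as np

class MatrixStack:
    """Hypothetical CPU-side stand-in for the legacy GL matrix stack."""

    def __init__(self):
        self.current = np.identity(4, dtype=np.float32)
        self._stack = []

    def push(self):                      # like glPushMatrix
        self._stack.append(self.current.copy())

    def pop(self):                       # like glPopMatrix
        self.current = self._stack.pop()

    def translate(self, x, y, z):        # like glTranslatef
        m = np.identity(4, dtype=np.float32)
        m[:3, 3] = (x, y, z)
        self.current = self.current @ m

    def scale(self, x, y, z):            # like glScalef
        m = np.diag(np.array([x, y, z, 1.0], dtype=np.float32))
        self.current = self.current @ m

    def rotate(self, degrees, x, y, z):  # like glRotatef (axis-angle)
        a = math.radians(degrees)
        c, s = math.cos(a), math.sin(a)
        n = np.array([x, y, z], dtype=np.float32)
        n /= np.linalg.norm(n)           # glRotatef normalizes the axis too
        x, y, z = n
        m = np.identity(4, dtype=np.float32)
        m[:3, :3] = [
            [c + x*x*(1-c),   x*y*(1-c) - z*s, x*z*(1-c) + y*s],
            [y*x*(1-c) + z*s, c + y*y*(1-c),   y*z*(1-c) - x*s],
            [z*x*(1-c) - y*s, z*y*(1-c) + x*s, c + z*z*(1-c)],
        ]
        self.current = self.current @ m
```

At draw time you would hand stack.current to the shader with a single
call, e.g. glUniformMatrix4fv(loc, 1, GL_TRUE, stack.current) -- the
GL_TRUE transpose flag because numpy arrays are row-major while GL
expects column-major.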
That said, I don't *agree* with deprecating the functionality myself; I
still think the transformation matrix stack is a useful first step for
new OpenGL programmers. I'm just repeating the arguments I've received
from others as to why it was done.

HTH,
Mike

-- 
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com