Mike C. Fletcher wrote:
> Andrew Straw wrote:
>> I'm trying to understand what/how/if PyOpenGL deals with multiple
>> versions of OpenGL. For example, if I run PyOpenGL on a machine with
>> OpenGL 1.3 libraries, I should have multitexturing abilities built-in
>> (e.g. glActiveTexture() function). This is specified in the OpenGL
>> 1.3 spec:
> PyOpenGL is currently limited to OpenGL 1.1. At the moment, I don't
> have drivers which support features from OpenGL 1.2 or above.
You can always run Mesa to get OpenGL 1.4. (It's not HW accelerated,
but it'll at least let you test.)
> There are
> a few extensions that, if I get the time, I might code up wrappers for,
> but they aren't the fun/sexy stuff for new hardware, I only have a
> Radeon 7500.
OK, but my question really is: how does/would PyOpenGL deal with the
situation where, at build time, I don't have access to OpenGL > 1.1
libraries, but later want to run (without recompiling) on a system with
newer ones? It seems that as long as we have header files that cover
OpenGL > 1.1 (easy to get), we should be able to provide a correct and
complete interface to the dynamic library, regardless of which versions
are available at build and run time.
Perhaps at load time, PyOpenGL could call
glGetString(GL_VERSION) to figure out which names to load into its
namespace. Does this seem like a reasonable idea? Do any of the
supported platforms statically link OpenGL libs? (That sounds really
unlikely, but it would complicate this scheme.)
From what I've learned about the PyOpenGL system, many of the tricky
wrapping issues have already been taken care of, so it would "simply" be
a matter of creating this load-time system and adding the interface code
for the new calls.
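To make the load-time idea concrete, here's a minimal sketch. It only
parses a version string of the kind glGetString(GL_VERSION) returns
(passed in directly, since a real query needs a current GL context) and
decides which names to expose; the function names "gl_version_tuple" and
"names_to_load" are my own invention, not anything in PyOpenGL.

```python
def gl_version_tuple(version_string):
    """Turn e.g. '1.3.1 NVIDIA 44.96' into the tuple (1, 3).

    The GL spec guarantees the string starts with
    '<major>.<minor>' optionally followed by '.<release>' and
    vendor-specific text."""
    major, minor = version_string.split()[0].split('.')[:2]
    return int(major), int(minor)

def names_to_load(version_string):
    """Pick which extra entry points the wrapper should bind."""
    names = []
    if gl_version_tuple(version_string) >= (1, 3):
        # multitexturing entered the core in OpenGL 1.3
        names.append('glActiveTexture')
    return names
```

The same tuple comparison extends naturally to gating any other
version-dependent group of entry points.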
The reason for this email is that I see a few version-related bits in
the PyOpenGL source whose purpose I'm not entirely sure of, and I wonder
whether this road has been gone down before, or at least started. (For
example, interface_util.c has heaps of "#ifndef GL_VERSION_1_2"-type
conditionals, though those clearly point to build-time rather than
run-time interface selection, which is non-optimal.)
>> Another workaround is that some OpenGL 1.3 drivers (nVidia) still
>> support the multitexturing ARB extension, but others (ATI) don't.
> Hmm, strange. They really should be providing OpenGL 1.1 drivers with
> the extensions alongside the 1.3 drivers... sigh. Not that it would
> help much, as we don't have the extensions in many (most) cases.
Luckily we do have the ARB multitexturing extension, which is pretty
darn useful! (It is a pain that ATI doesn't seem to support it with
their Windows OpenGL 1.3 drivers.)
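The core-versus-ARB fallback could be handled with a tiny selection
helper along these lines. This is only a sketch: the version and
extension strings are passed in rather than queried (glGetString needs a
live context), and while glActiveTexture/glActiveTextureARB and
GL_ARB_multitexture are the real GL names, the selection function itself
is hypothetical.

```python
def pick_multitexture_name(version_string, extensions_string):
    """Choose which multitexturing entry point to bind, if any.

    Prefers the core 1.3 name; falls back to the ARB extension
    (which, as noted above, some drivers drop even at GL 1.3)."""
    major, minor = [int(x) for x in
                    version_string.split()[0].split('.')[:2]]
    if (major, minor) >= (1, 3):
        return 'glActiveTexture'      # core since OpenGL 1.3
    if 'GL_ARB_multitexture' in extensions_string.split():
        return 'glActiveTextureARB'   # ARB extension fallback
    return None                       # no multitexturing available
```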
>> This success with ctypes and the continued good things I hear about
>> Pyrex make me wonder if maybe the PyOpenGL build system could be
>> simplified by getting away from SWIG and using these two technologies?
> ctypes is likely to be too slow for more than a few functions, as
> there's a lot of manipulation required to make the Python objects OpenGL
> compatible. Pyrex is possible, but in the end, it's almost the same
> problem as with the SWIG system, you wind up needing to extend Pyrex to
> support polymorphic array parameters, to provide the bridge code for
> various Python-ised calls, etceteras. Code complexity doesn't go down
> much AFAICS... that said, if someone wants to prove me wrong, I'd be
> delighted :) . I'd be more interested in seeing someone model the whole
> problem in Python with a code-generator to create SWIG, Pyrex or Ctypes
> code for any given function so that we could try them all out alongside
> one another (and maybe add a straight C code generator to see if that
> makes things easier...). Oh well.
Sounds like fun for a rainy month... But I guess there's no point
worrying about this until someone has time to dive in and tackle it,
which may never happen, given that it might not yield a speed boost over
the current SWIG system. (It's good to hear you think SWIG will produce
the fastest interface, since with OpenGL that's probably what people
care about most.)
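For what it's worth, the "model the whole problem in Python" idea could
start as small as this: describe each GL function once, and generate the
per-backend glue from that model. Only a ctypes back-end is sketched
here, and the data layout and function names are my own invention, not
anything from PyOpenGL; a SWIG, Pyrex, or straight-C emitter would just
be another function over the same table.

```python
# One row per GL entry point: (name, ctypes restype, ctypes argtypes).
GL_FUNCTIONS = [
    ('glActiveTexture', 'None', ['c_uint']),
    ('glGetString', 'c_char_p', ['c_uint']),
]

def emit_ctypes_binding(name, restype, argtypes):
    """Generate the two lines of ctypes glue for one function,
    assuming the GL shared library is already loaded as `_gl`."""
    args = ', '.join('ctypes.%s' % a for a in argtypes)
    restype_expr = ('ctypes.%s' % restype) if restype != 'None' else 'None'
    return ('%s = _gl.%s\n' % (name, name) +
            '%s.restype, %s.argtypes = %s, [%s]\n'
            % (name, name, restype_expr, args))
```

Running the emitter over GL_FUNCTIONS would produce a plain Python
module of bindings, which makes it easy to benchmark one back-end
against another.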