From: Marko Friedemann <mfr@bm...> - 2001-10-22 12:48:02
While adding threads to my OpenGL project, I've found some
interesting differences between the DRI and the "binary-only" drivers.
My main developing platform is
AMD K7 500
XFree86-4.1.0 with DRI (somewhat outdated, though - 20010405)
The reason for using threads is the following:
I need to load data that is not designed for use with OpenGL (the
textures are of arbitrary size, not at 2^n boundaries), so I have to
scale them. I use gluScaleImage for that purpose, and it consumes
a lot of time. That's why I intend to scale in the background while
still performing screen updates in the main thread (note that I do
_not_ _draw_ anything in the other thread).
Of course I locked all the relevant gl* operations with mutexes.
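Roughly, the worker thread does something like the sketch below
(simplified for illustration only; scale_job, gl_mutex and scale_worker
are invented names, the real code differs):

#include <GL/glu.h>
#include <pthread.h>

/* Sketch only: scale_job, gl_mutex and scale_worker are made-up names. */
struct scale_job {
    GLsizei src_w, src_h;   /* original (non power-of-two) size        */
    GLsizei dst_w, dst_h;   /* rounded up to the next power of two     */
    void   *src, *dst;      /* pixel buffers, both GL_RGBA / ubyte     */
};

static pthread_mutex_t gl_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *scale_worker(void *arg)
{
    struct scale_job *job = arg;

    /* serialize against the gl* and glu* calls in the main thread */
    pthread_mutex_lock(&gl_mutex);
    gluScaleImage(GL_RGBA,
                  job->src_w, job->src_h, GL_UNSIGNED_BYTE, job->src,
                  job->dst_w, job->dst_h, GL_UNSIGNED_BYTE, job->dst);
    pthread_mutex_unlock(&gl_mutex);

    return NULL;
}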
The version where I finally got the locking right works flawlessly
on the above machine. My colleague, however, complained about segfaults
every time he started the application (using an NVidia card).
I ran into problems as well when I ported the code to Windows,
and it doesn't run on my second box either, which is:
AMD Athlon 600
XFree86-4.1.0 with NV-GLX 1.0-1542
Now I have finally come to recognize the following:
Windows (and the NV-GLX stuff?) apparently wants a unique OpenGL
rendering context for each thread that performs OpenGL calls. Since
I use GLUT, this is not easy to achieve.
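With plain GLX (i.e. outside of GLUT) I imagine a per-thread context
would look roughly like this; dpy, vis, win and main_ctx stand for
whatever display/visual/window/context the main thread already set up,
and ctx_args/worker are again invented names:

#include <GL/glx.h>

/* Sketch only: ctx_args and worker are made-up names. */
struct ctx_args {
    Display     *dpy;       /* from XOpenDisplay()           */
    XVisualInfo *vis;       /* from glXChooseVisual()        */
    Window       win;       /* the window the app renders to */
    GLXContext   main_ctx;  /* the main thread's context     */
};

static void *worker(void *p)
{
    struct ctx_args *a = p;

    /* one context per thread, sharing textures/lists with the main one */
    GLXContext ctx = glXCreateContext(a->dpy, a->vis, a->main_ctx, True);
    glXMakeCurrent(a->dpy, a->win, ctx);

    /* ... gl* and glu* calls (gluScaleImage etc.) are now legal here ... */

    glXMakeCurrent(a->dpy, None, NULL);
    glXDestroyContext(a->dpy, ctx);
    return NULL;
}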
Now I wonder why the DRI does not care about this.
I searched the net for further info, but didn't find any
comprehensive site covering this issue. Perhaps one of you could
lend a hand and point me to some useful resource?
PS: Please cc me as I'm not subscribed to the list.