Thread: [PyOpenGL-Users] glGetTexImage weirdness
From: Gijs <in...@bs...> - 2009-02-19 09:27:32
Hello list,

I'm trying to understand why glGetTexImage works the way it does. With the following piece of code, I expect both prints to be the same:

    pixels = zeros(1*4, 'i') + 97
    img = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, int(img))
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, pixels)
    data = glGetTexImageub(GL_TEXTURE_2D, 0, GL_RGBA)
    print data
    data = glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE)
    print data

I expected PyOpenGL to pass GL_UNSIGNED_BYTE to the underlying function when I use glGetTexImageub, so that passing the type myself directly would give the same result. But in the first case I get a proper response, containing [[[97 97 97 97]]], while in the second I get a rather weird response: the string "aaaa". It's easy enough to work around, since you can just use the glGetTexImageub function, but when I stumbled upon it, it took me quite some time to track down because I assumed both would behave the same.

Regards,
Gijs
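The two results actually contain the same four bytes; the second call just hands them back as a raw string (the PyOpenGL 2.x convention) rather than as an array. A minimal, GL-free illustration in modern Python, using a bytes object to stand in for the returned string:

```python
# Four texel bytes with value 97, as uploaded in the snippet above.
pixels = [97, 97, 97, 97]

# glGetTexImageub hands these back as numeric array elements...
as_array = pixels

# ...while glGetTexImage(..., GL_UNSIGNED_BYTE) hands back the same
# bytes as a string: 97 is the ASCII code of the letter "a".
as_string = bytes(pixels)

print(as_array)         # [97, 97, 97, 97]
print(as_string)        # b'aaaa'
print(list(as_string))  # back to [97, 97, 97, 97]
```

So "aaaa" is not corrupted data, merely the same pixel values under a different Python type.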
From: Mike C. F. <mcf...@vr...> - 2009-02-19 16:43:05
Gijs wrote:
...
> data = glGetTexImageub(GL_TEXTURE_2D, 0, GL_RGBA)
> print data
> data = glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE)
> print data
>
> As I expect that PyOpenGL would pass GL_UNSIGNED_BYTE to the underlying
> function when I use glGetTexImageub, and that if I pass the type myself
> directly, the result would be the same. But in the first case I get a
> proper response, containing [[[97 97 97 97]]], and the second I get a
> rather weird response "aaaa" (a string).

It's a legacy compatibility feature, re-instated at the request of the Pygame folks. See the flag:

    UNSIGNED_BYTE_IMAGES_AS_STRING

in the OpenGL/__init__.py module for how to go back to always getting your "normal" array types. IIRC, some common function in the released versions of Pygame was calling this expecting a string back (the PyOpenGL 2.x behaviour) and broke when 3.x returned the registered preferred array handler's type.

HTH,
Mike

--
________________________________________________
  Mike C. Fletcher
  Designer, VR Plumber, Coder
  http://www.vrplumber.com
  http://blog.vrplumber.com
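[Based on the flag Mike names, disabling the legacy behaviour would look roughly like this — a sketch, assuming a working PyOpenGL install and a current GL context; the flag must be set on the top-level OpenGL package before the GL entry points are imported:]

```python
# Sketch: turn off the legacy Pygame-compatibility behaviour so that
# glGetTexImage(..., GL_UNSIGNED_BYTE) returns the preferred array
# type instead of a string. Must run before importing OpenGL.GL.
import OpenGL
OpenGL.UNSIGNED_BYTE_IMAGES_AS_STRING = False

from OpenGL.GL import *  # unsigned-byte image reads now return arrays
```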
From: Gijs <in...@bs...> - 2009-02-19 16:50:35
On 2/19/09 5:42 PM, Mike C. Fletcher wrote:
> It's a legacy compatibility feature requested to be re-instated by the
> Pygame folks. See the flag:
>
> UNSIGNED_BYTE_IMAGES_AS_STRING
>
> in the OpenGL/__init__.py module for how to go back to always getting
> back your "normal" array types. IIRC some common function in the
> released versions of Pygame was using the function expecting a string
> back (PyOpenGL 2.x behaviour) and broke with 3.x returning the
> registered preferred array handler type.

Hmm, OK, I guess I'll use the array functions then.

By the way, when I use glGetTexImageub and set the fragment shader to output 1/256 for every pixel, I get a 0 in every pixel instead of a 1. And every subsequent value comes back as the number minus one, so 34/256 returns 33. Does this also have something to do with backwards compatibility?
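[The off-by-one pattern is consistent with the readback converting the shader's float to a byte as f * 255 and then truncating — an assumption about this particular driver, since rounding behaviour varies across implementations. A small arithmetic sketch:]

```python
# Hypothetical model of the readback conversion: a float in [0, 1]
# is scaled by 255 and truncated to an unsigned byte.
def float_to_ubyte(f):
    return int(f * 255)  # int() truncates toward zero

# Dividing by 256 in the shader loses one each time:
# 34/256 is exactly 0.1328125, and 0.1328125 * 255 = 33.8671875,
# which truncates to 33.
print(float_to_ubyte(1 / 256))   # 0
print(float_to_ubyte(34 / 256))  # 33

# Dividing by 255 instead round-trips the value.
print(round((34 / 255) * 255))   # 34
```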
From: Gijs <in...@bs...> - 2009-02-20 10:02:29
On 2/19/09 6:02 PM, Mike C. Fletcher wrote:
> Nope, that sounds like a rounding or similar error to me. PyOpenGL
> doesn't really touch the data involved, just shuffles it from OpenGL
> over to numpy/ctypes/strings/etc. It might be possible that it's doing
> that translation wrong, but an off-by-one for all numbers isn't a
> likely failure mode for the data translation process.
>
> If I understand what you are doing correctly, you wouldn't expect
> 256/256 to produce a "256" in the result (max value is 255), so I'm
> thinking you wanted your divisor one less? What happens if you set
> the operation to 1/255 or 34/255? Even there you could see an
> off-by-one with rounding errors and a subsequent "floor" on the number,
> but save for those you should be much closer to the number you expect.

Well, the problem I had was that I wanted to count all pixels of a specific colour in the shader with the reduction technique. Normally you'd just count the pixels and add them together for your answer. But with 8-bit colour channels you cannot output any number higher than 1.0 (which you probably already knew), since it just gets clamped to 1.0. So I set every pixel to 1/256 in the red channel if it's the correct colour, and to 0 if it's not. By using the other colour channels as a 2nd, 3rd and 4th "byte", I could output a number (up to 2^(4*8)) and translate it back to its real value. Yesterday I found out I could have just used 32-bit colour channels; too bad I didn't know that beforehand.

You were right that it was a rounding error: if I divide by 255, I get back exactly the number I set in the shader. I could have used 255 and adjusted the calculations, but in the end I solved it by reading floats and multiplying them by 256, which gives the correct answer. So I guess that was a bug on my end.
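[The "four bytes across RGBA" idea described above can be sketched in plain Python. The encode/decode helpers are hypothetical names for illustration, not PyOpenGL API; on the GPU side each channel value would additionally be divided by 255 before being written out:]

```python
def encode_rgba(count):
    """Split a pixel count into four 8-bit channel values (R, G, B, A),
    least-significant byte in R -- what the reduction shader would emit."""
    assert 0 <= count < 2 ** 32
    return [(count >> (8 * i)) & 0xFF for i in range(4)]

def decode_rgba(rgba):
    """Reassemble the count from the four bytes read back from the
    texture (e.g. via glGetTexImageub)."""
    return sum(byte << (8 * i) for i, byte in enumerate(rgba))

n = 1_000_000                       # 0x000F4240
channels = encode_rgba(n)
print(channels)                     # [64, 66, 15, 0]
print(decode_rgba(channels) == n)   # True
```

This makes clear why the off-by-one was fatal to the scheme: a single wrong byte in any channel corrupts the reassembled count, which is why switching to a 255 divisor (or to 32-bit float channels, as mentioned above) fixes it.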