
#191 Segfault with glReadPixelsf

Milestone: v3.0.0
Status: closed-fixed
Labels: GL
Priority: 5
Updated: 2008-06-06
Created: 2008-05-30
Creator: Anonymous
Private: No

PyOpenGL version: 3.0.0b2
Numpy version: 1.0.4
Platform: Windows XP SP 2
GPU: NVidia 8800GTS, 169.21 driver version

Attempting to perform a glReadPixelsf on a 512x512 frame buffer causes a crash; in fact, every typed glReadPixels variant I tried besides glReadPixelsub crashes.

Thanks,

Eddy Talvala, talvala@stanford.edu

Minimum testcase that shows crash on my system attached.
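
For reference, a sketch along the lines of the attached testcase (the attachment itself is not reproduced here; the quad geometry and GLUT setup details are approximate):

    # Sketch of the crash scenario: draw one quad under GLUT, then read the
    # framebuffer back with glReadPixelsub (works) and glReadPixelsf (crashes).
    from OpenGL.GL import *
    from OpenGL.GLUT import *

    WIDTH, HEIGHT = 512, 512

    def display():
        glClear(GL_COLOR_BUFFER_BIT)
        glBegin(GL_QUADS)                 # a single quad
        glColor3f(1.0, 0.0, 0.0)
        glVertex2f(-0.5, -0.5)
        glVertex2f( 0.5, -0.5)
        glVertex2f( 0.5,  0.5)
        glVertex2f(-0.5,  0.5)
        glEnd()
        glFinish()
        # Unsigned-byte readback: returns a correctly sized (512, 512, 3) array.
        readback_image1 = glReadPixelsub(0, 0, WIDTH, HEIGHT, GL_RGB)
        # Float readback: the undersized target array leads to the crash.
        readback_image2 = glReadPixelsf(0, 0, WIDTH, HEIGHT, GL_RGB)
        glutSwapBuffers()

    glutInit()
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
    glutInitWindowSize(WIDTH, HEIGHT)
    glutCreateWindow(b'glReadPixelsf testcase')
    glutDisplayFunc(display)
    glutMainLoop()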

Discussion

  • Nobody/Anonymous

    Minimum testcase using GLUT, a single quad, and a pair of glReadPixels calls

     
  • Nobody/Anonymous


    Further experimentation appears to show that the bug is in OpenGL.images.createTargetArray.

    createTargetArray attempts to allocate, based on the pixel format and the color channel type, a zero-filled array large enough to hold the readback. However, the size calculation appears to be incorrect, at least for the RGB float case.

    After adding some prints inside createTargetArray, and calling glReadPixels with:

    readback_image1 = glReadPixelsub(0,0,512,512,GL_RGB)

    I get the following values for the variables inside createTargetArray:
    formatBits: 24
    typeBits: 8
    lastDim, remainder: 3, 0
    dims: (512, 512, 3)
    arrayType: <class 'OpenGL.arrays.arraydatatype.GLubyteArray'>

    This is correct and works fine. However, the same variables for the call

    readback_image2 = glReadPixelsf(0,0,width,height,GL_RGB)

    aren't correct:

    formatBits: 24
    typeBits: 32
    dims: (512, 512)
    arrayType: <class 'OpenGL.arrays.arraydatatype.GLfloatArray'>

    Most importantly, the dims tuple is wrong: the data has three color channels, each of which is a float, so a 512x512 float array is three times too small to contain it. That explains why the code then crashes when simple.glReadPixels is called with the array returned by createTargetArray.
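
    For concreteness, a quick numpy-only check of the sizes involved (illustrative, not PyOpenGL code):

    import numpy
    wrong = numpy.zeros((512, 512), dtype=numpy.float32)     # what createTargetArray builds
    right = numpy.zeros((512, 512, 3), dtype=numpy.float32)  # what GL_RGB / GL_FLOAT needs
    print(wrong.nbytes)   # 1048576 bytes (1 MiB)
    print(right.nbytes)   # 3145728 bytes (3 MiB)
    # glReadPixels writes 3 MiB of pixel data into a 1 MiB buffer, hence the crash.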

    Also, formatBits=24 makes little sense for an RGB float image (where a pixel takes up 96 bits) unless formatBits=24 simply means 'three components'.

    It would seem at first glance that lastDim should simply be set based on the format, not on comparisons between formatBits and typeBits. Perhaps this code is a holdover from an earlier version in which multiple color channels were packed into a single array entry, which no longer appears to be the case.
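
    As a sketch of what I mean (the function name and component table below are made up for illustration; this is not the actual OpenGL.GL.images code):

    import numpy

    COMPONENT_COUNTS = {
        'GL_LUMINANCE': 1,
        'GL_LUMINANCE_ALPHA': 2,
        'GL_RGB': 3,
        'GL_RGBA': 4,
    }

    def create_target_array(width, height, format, dtype):
        # Size the last dimension from the format's component count, independent
        # of how wide each channel's type is.
        components = COMPONENT_COUNTS[format]
        dims = (width, height) if components == 1 else (width, height, components)
        return numpy.zeros(dims, dtype=dtype)

    # create_target_array(512, 512, 'GL_RGB', numpy.float32).shape == (512, 512, 3)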

    Hope this helps,

    Eddy Talvala, <lastname> at stanford.edu

     
  • Nobody/Anonymous


    One additional note: shouldn't formatBits for GL_LUMINANCE_ALPHA be 16, not 8 (as it is currently defined in OpenGL.GL.images)? Under the current convention it returns two components, not one.
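
    Under that convention (formatBits = number of components * 8) the expected values would look like this (illustrative only, not the shipped table):

    FORMAT_BITS = {
        'GL_LUMINANCE': 8,         # 1 component
        'GL_LUMINANCE_ALPHA': 16,  # 2 components (currently defined as 8)
        'GL_RGB': 24,              # 3 components
        'GL_RGBA': 32,             # 4 components
    }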

     
  • Mike C. Fletcher

    • assigned_to: nobody --> mcfletch
    • status: open --> closed-fixed
     
  • Mike C. Fletcher


    You are correct; the size of the element should be handled solely via the component count. I've updated the code in CVS; the fix should show up in the next release.
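
    With that change, glReadPixelsf should return a (512, 512, 3) float array for the case above; a quick sanity check (assuming a current GL context and a 512x512 framebuffer) would be:

    readback_image2 = glReadPixelsf(0, 0, 512, 512, GL_RGB)
    assert readback_image2.shape == (512, 512, 3)   # three float channels per pixel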

     
