Thread: [PyOpenGL-Users] Tracking down an invalid operation
From: Derakon <de...@gm...> - 2011-06-02 20:35:55
I have an OpenGL-related crash that's giving me fits trying to trace it. This is gonna take a bit to describe, unfortunately.

We have a computerized microscope with four cameras, each of which images a different wavelength band (color) of the sample as a 512x512 pixel array. These are displayed as OpenGL textures by some C++ code. I've made an overlaid view as a separate window, which runs in Python -- our C++ code is old and crufty and every time I touch it I'm worried it'll collapse into dust, but the Python "half" (more like 80%) of the program is more up-to-date. Basically, each time a camera receives a new image, an event is generated on the C++ side, which is picked up by the Python side. The Python side makes a request to the C++ side for the image data, converts that into its own texture, and displays it. Ideally I'd just re-use the same textures the normal camera displays use, but they're in separate OpenGL contexts so, as far as I'm aware, that's not possible.

So far, so good...except that when I ramp up the rate at which the cameras collect images, I get irregular crashes. OpenGL reports an invalid operation in glDeleteTextures, though I've no idea why; I'm certainly not double-deleting textures, and I've put locks around everything remotely sensitive, so it shouldn't be, e.g., trying to create a texture in one thread while rendering in another. In fact, I probably have too many locks, but there are no deadlocks, so oh well.

I made a standalone application. It doesn't crash. I copied the standalone app's code back into the main program and hooked it back in. It crashes. I severed the code that retrieves the image array from the C++ side in favor of displaying a randomly-generated pixel array. It still crashes. At this point the only connection this system has to the rest of the program is the event notification.

Here's the standalone, non-crashy version of the program: http://pastebin.com/p2QtTSZf (requires PyOpenGL, wxPython, numpy). The only difference between this version and the version running in the actual app is the source of the events that trigger onNewImageReady (and that the in-app version doesn't make its own wxApp, of course).

Any ideas? I'm completely stumped.

Also, I've noticed a secondary issue: when this version is running in the main app and hasn't crashed yet, other OpenGL canvases that use FTGL to draw text will sometimes instead draw solid blocks of color (or once, I saw black with random green lines across it) where the text should be. I don't know if this is related. How is display in one window affecting display in another?

-Chris
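For concreteness, a minimal sketch of the upload path described above, assuming a wxGLCanvas plus numpy setup; OverlayCanvas, the body of onNewImageReady, and the 512x512 16-bit format are illustrative stand-ins, not the actual pastebin code:

import threading
import numpy as np
import wx
from wx import glcanvas
from OpenGL.GL import (
    glBindTexture, glGenTextures, glTexImage2D, glTexParameteri,
    GL_TEXTURE_2D, GL_LUMINANCE, GL_UNSIGNED_SHORT,
    GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER, GL_LINEAR,
)

class OverlayCanvas(glcanvas.GLCanvas):
    def __init__(self, parent):
        glcanvas.GLCanvas.__init__(self, parent, -1)
        self.context = glcanvas.GLContext(self)
        self.lock = threading.Lock()
        self.texture = None

    def onNewImageReady(self, cameraIndex):
        # Invoked (via the wx event system, on the GUI thread) when the
        # C++ side signals a new image; here the brightness array is faked
        # with random data instead of being fetched from the C++ side.
        data = np.random.randint(0, 2 ** 16, (512, 512)).astype(np.uint16)
        with self.lock:
            self.SetCurrent(self.context)
            if self.texture is None:
                self.texture = glGenTextures(1)
            glBindTexture(GL_TEXTURE_2D, self.texture)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0,
                         GL_LUMINANCE, GL_UNSIGNED_SHORT, data)
        wx.CallAfter(self.Refresh)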
From: Ian M. <geo...@gm...> - 2011-06-02 21:24:19
On Thu, Jun 2, 2011 at 1:35 PM, Derakon <de...@gm...> wrote:
> I have an OpenGL-related crash that's giving me fits trying to trace
> it. This is gonna take a bit to describe, unfortunately.
>
> We have a computerized microscope with four cameras which each image
> different wavelength bands (colors) of the sample as 512x512 pixel
> arrays. These are displayed as OpenGL textures by some C++ code. I've
> made an overlaid view as a separate window, which runs in Python --
> our C++ code is old and crufty and every time I touch it I'm worried
> it'll collapse into dust, but the Python "half" (more like 80%) of the
> program is more up-to-date. Basically, each time a camera receives a
> new image, an event is generated on the C++ side, which is picked up
> by the Python side. The Python side makes a request to the C++ side
> for the image data, converts that into its own texture, and displays
> it. Ideally I'd just re-use the same textures the normal camera
> displays use, but they're in separate OpenGL contexts so, as far as
> I'm aware, that's not possible.

Hi,

Generally, having multiple contexts in OpenGL is a very very very bad
idea. If you're doing a readback of the texture data from one context,
and then trying to use that data in another, it may not work properly,
because OpenGL works asynchronously. In a single context, OpenGL calls
will execute in order, but some won't block while they do the underlying
work. If the contexts aren't managed carefully, they will get confused:
if the work issued by one context (e.g., "copy this texture") isn't done
yet and another context gets a function call in edgewise, you'll have
problems.

Ian
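A minimal sketch of the ordering hazard described here, assuming two wxGLCanvas-style objects with their own contexts; canvasA, canvasB, and the upload/draw callables are hypothetical placeholders. glFinish() forces the first context's queued work to complete before the second context touches the result:

from OpenGL.GL import glFinish

def handOffBetweenContexts(canvasA, canvasB, upload, draw):
    canvasA.SetCurrent(canvasA.context)
    upload()            # e.g. a glTexImage2D(...) call issued in context A
    glFinish()          # block until context A's queued commands complete
    canvasB.SetCurrent(canvasB.context)
    draw()              # context B can now rely on the finished work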
From: Derakon <de...@gm...> - 2011-06-02 21:35:42
On Thu, Jun 2, 2011 at 2:23 PM, Ian Mallett <geo...@gm...> wrote:
> On Thu, Jun 2, 2011 at 1:35 PM, Derakon <de...@gm...> wrote:
>> I have an OpenGL-related crash that's giving me fits trying to trace
>> it. This is gonna take a bit to describe, unfortunately.
>> ...
>
> Hi,
>
> Generally, having multiple contexts in OpenGL is a very very very bad
> idea. If you're doing a readback of the texture data from one context,
> and then trying to use that data in another, it may not work properly,
> because OpenGL works asynchronously.

I should have clarified: I'm not reading the texture data out; I'm
reading the array of brightness values that was used to generate the
texture. Basically the camera sends us a bunch of bytes, then one of our
viewers reads from those bytes to generate a texture. Now I'm adding a
second viewer to read from the same set of bytes.

As for multiple contexts, I may have mis-stated, or maybe I am in fact
using multiple contexts; I'm not that boned up on OpenGL. I'd assumed
each non-overlaid camera display was its own context, since each one has
the same texture ID of 1, and I'd assume that in a single context no two
textures could have the same texture ID. Is this inaccurate?

This app has many wxGLCanvases all over the place and I've never run
into trouble before. This window should work just like the other
canvases in the app. According to this:

http://wiki.wxwidgets.org/WxGLCanvas#Sharing_wxGLCanvas_context

it should be possible to share the OpenGL context across canvases, but
we aren't doing that right now. Of course, none of the canvases are
trying to share resources either.

It sounds like you're suggesting that in the middle of my viewer's paint
logic, another context is barging in and confusing OpenGL. I take that
to mean that when I make an OpenGL API call, my context isn't implicitly
passed along? But then why wouldn't my other canvases ever get screwed
up? Assuming that is the problem, theoretically I could fix it by
creating one canvas, getting its context, and then storing that globally
and using it whenever I create any other canvases. Does that sound about
right?

-Chris
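A minimal sketch of that "one shared context" idea from the wiki link above, assuming wxPython's wx.glcanvas; SharedContextCanvas and the module-level _shared_context are placeholder names, not code from the app:

import wx
from wx import glcanvas

_shared_context = None

class SharedContextCanvas(glcanvas.GLCanvas):
    def __init__(self, parent):
        glcanvas.GLCanvas.__init__(self, parent, -1)
        global _shared_context
        if _shared_context is None:
            # The first canvas created owns the context; later canvases
            # reuse it instead of creating their own.
            _shared_context = glcanvas.GLContext(self)
        self.context = _shared_context
        self.Bind(wx.EVT_PAINT, self.onPaint)

    def onPaint(self, event):
        dc = wx.PaintDC(self)
        self.SetCurrent(self.context)   # same context for every canvas
        # ... draw the scene here ...
        self.SwapBuffers()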
From: Ian M. <geo...@gm...> - 2011-06-03 01:40:23
On Thu, Jun 2, 2011 at 2:35 PM, Derakon <de...@gm...> wrote:
> I should have clarified: I'm not reading the texture data out; I'm
> reading the array of brightness values that was used to generate the
> texture. Basically the camera sends us a bunch of bytes, then one of
> our viewers reads from those bytes to generate a texture. Now I'm
> adding a second viewer to read from the same set of bytes.
>
> As for multiple contexts, I may have mis-stated, or maybe I am in fact
> using multiple contexts; I'm not that boned up on OpenGL. I'd assumed
> each non-overlaid camera display was its own context, since each one
> has the same texture ID of 1, and I'd assume that in a single context
> no two textures could have the same texture ID. Is this inaccurate?
>
> This app has many wxGLCanvases all over the place and I've never run
> into trouble before. This window should work just like the other
> canvases in the app. According to this:
>
> http://wiki.wxwidgets.org/WxGLCanvas#Sharing_wxGLCanvas_context
>
> it should be possible to share the OpenGL context across canvases, but
> we aren't doing that right now. Of course, none of the canvases are
> trying to share resources either.
>
> It sounds like you're suggesting that in the middle of my viewer's
> paint logic, another context is barging in and confusing OpenGL. I
> take that to mean that when I make an OpenGL API call, my context
> isn't implicitly passed along? But then why wouldn't my other canvases
> ever get screwed up? Assuming that is the problem, theoretically I
> could fix it by creating one canvas, getting its context, and then
> storing that globally and using it whenever I create any other
> canvases. Does that sound about right?

Not sure exactly. I just know that having one application and several
contexts gets dicey quickly. Personally, I'd go with the single context,
as that should fix any problems therein.

It also seems to me that you can minimize any problems by making one
texture for each context and then just updating that texture with the
given bytes (glTexSubImage2D(...)), though you might already be doing
that.

Ian
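A minimal sketch of that suggestion, assuming a 512x512 16-bit single-channel image as described earlier; uploadFrame and the module-level texture handle are hypothetical names, not the app's real code:

import numpy as np
from OpenGL.GL import (
    glGenTextures, glBindTexture, glTexImage2D, glTexSubImage2D,
    GL_TEXTURE_2D, GL_LUMINANCE, GL_UNSIGNED_SHORT,
)

texture = None

def uploadFrame(data):
    """Upload a 512x512 uint16 numpy array into one persistent texture."""
    global texture
    if texture is None:
        texture = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, texture)
        # Allocate the texture storage once, with no initial data.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, None)
    glBindTexture(GL_TEXTURE_2D, texture)
    # Overwrite the existing storage; no glDeleteTextures needed per frame.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                    GL_LUMINANCE, GL_UNSIGNED_SHORT, data)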
From: Mike C. F. <mcf...@vr...> - 2011-06-03 02:24:54
On 11-06-02 04:35 PM, Derakon wrote:
> I have an OpenGL-related crash that's giving me fits trying to trace
> it. This is gonna take a bit to describe, unfortunately.
...
> So far, so good...except that when I ramp up the rate at which the
> cameras collect images, I get irregular crashes. OpenGL reports an
> invalid operation in glDeleteTextures, though I've no idea why; I'm
> certainly not double-deleting textures, and I've put locks around
> everything remotely sensitive so it shouldn't be e.g. trying to create
> a texture in one thread while rendering in another. In fact, I
> probably have too many locks, but there's no deadlocks, so oh well.

The docs for glDeleteTextures say that the point at which it will
generate an invalid-operation error is when it is called between glBegin
and glEnd, so you may wish to guard that segment of render() with your
self.lock. However, given that there should only be a single wxPython
thread, you should only have one thread rendering at a time. As far as I
recall, wx.PostEvent should do the "correct" thing and schedule the call
in the GUI thread. Note, however, that your C++ code is likely not
GIL-locked, so unlike Python code it can be running between op-codes and
the like. I don't know that that is the problem, but it is a likely
difference between the broken and working versions.

Afraid I don't have much else to suggest off the top of my head. I
expect you are somehow seeing interference between the C++ rendering
loop and the wxPython rendering loop, but without knowing what that C++
code is doing (i.e., is it in a thread, is it posting events to wx to do
its own rendering, etc.), I can't really guess what in particular is
going wrong.

Good luck,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
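A minimal sketch of the two points above (keeping glDeleteTextures outside any glBegin/glEnd pair, and routing GL work from the C++ callback thread onto the wx GUI thread); the Viewer class and its method names are placeholders, not the actual app code:

import wx
from OpenGL.GL import glBegin, glEnd, glDeleteTextures, GL_QUADS

class Viewer:
    def __init__(self, canvas, lock):
        self.canvas = canvas
        self.lock = lock
        self.texture = None

    def onNewImageFromCppThread(self):
        # Called from the (non-GIL-friendly) C++ callback thread: do no
        # GL work here, just schedule it on the GUI thread.
        wx.CallAfter(self.refreshTexture)

    def refreshTexture(self):
        # Runs on the wx GUI thread.
        with self.lock:
            self.canvas.SetCurrent(self.canvas.context)
            if self.texture is not None:
                # Safe: we are not inside a glBegin/glEnd pair here.
                glDeleteTextures([self.texture])
                self.texture = None
            # ... create and upload the replacement texture ...

    def render(self):
        with self.lock:  # the same lock guards the glBegin/glEnd pair
            glBegin(GL_QUADS)
            # ... emit textured quad vertices ...
            glEnd()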