On Tue, Nov 29, 2011 at 11:41 AM, Derakon <derakon@...> wrote:
> I have a program that's intended to display the output of a scientific
> camera -- a ~2500x~2000 greyscale image. This exceeds the maximum
> texture size on the target computer, so I split the image into smaller
> tiles which are displayed side-by-side. I wrote up a test application
> which works fine on my OSX laptop, but not on the (Windows 7) target
> computer. The problem I'm seeing is that every tile is showing what
> the last tile is supposed to show. So for example if I render tiles at
> (0, 0), (0, 1), (1, 0), and (1, 1), then all of the tiles will display
> the content from the (1, 1) tile.
> Here's the test app: http://pastebin.com/yEaeLUNv
> It depends on wxPython, pyOpenGL, and numpy. Everything up to line 121
> is WX glue code and can be ignored (the sample image is generated at
> line 10, if you care). The OnPaint function there calls the render()
> method for each Image class instance; those Images then render
> individual tiles. I apologize for the inconsistent code style and use
> of deprecated OpenGL features; this is a mishmash of old and new-ish
> code from multiple authors.
> Any ideas what I'm doing wrong? Or how I'd go about figuring this out?
> I've yet to figure out how to effectively debug errors in my usage of
> OpenGL.
You're right; the code definitely needs cleaning up. A method named
.bindTexture that actually creates the OpenGL texture object? What?
As to the problem, I'm not seeing anything obviously wrong in the code.
However, if it works on one piece of hardware and not on another, the
hardware is the variable. The first rule of debugging is to look at what
differs.
Even my aging laptop (which was mid-range when new) supports 4096x4096
textures. If your Windows box doesn't, I'm guessing it has other
limitations too, like not supporting non-power-of-two (non-POT) textures,
or simply not having enough video memory to store the thing. Either of
these might lead to the problem you're seeing.
So, in conclusion, compare GPUs.