I want to save a very large pyopengl scene to a snapshot file. The
resolution I am trying for is 3200x2400.
The following code fragment, using PIL, works up to 1600x1200:
ox, oy, width, height = glGetIntegerv(GL_VIEWPORT)
print width, height, ox, oy
data = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
image = Image.fromstring("RGB", (width, height), data)
image = image.transpose(Image.FLIP_TOP_BOTTOM)
image.save("snap.png")
However, if I try to create my initial window at 3200x2400, two things
happen. First, the actual window created onscreen is only about
1600x1200 (minus a bit for the window borders). When I move around
my 3D scene, it looks like OpenGL thinks that the scene really is larger
than what the window is showing me (which is great) but I am only seeing
part of it. When I try to do the screen capture with the code above, the
width and height reported by glGetIntegerv(GL_VIEWPORT) are 3200 and 2400.
That looks promising. However, when I examine the resulting snap.png
image, it only contains what appears to be the lower quadrant of my
scene--the portion that was actually visible on screen.
Is there any simple way to capture an image larger than 1600x1200? What
is the resolution-limiting factor in my current approach?
Would I have to render four 1600x1200 tiles, each one showing part of
the scene (with a different projection matrix but the same modelview
transform), save each as a separate image, and then paste them together
myself?
Does anyone have any suggestions for doing this seamlessly?
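In case it helps frame the question: the four-tile idea above can be sketched roughly as follows. This is only a hypothetical outline, not tested code. The frustum-splitting arithmetic is the real content; the GL and PIL calls are shown as comments, and `draw_scene()`, `near`, and `far` are placeholder names for whatever the actual program uses. Splitting the full frustum rectangle at the near plane into a grid and rendering each cell with glFrustum should, as far as I understand, produce seamless tiles, since every tile shares the same eye point and near/far planes.

```python
def tile_frustum(left, right, bottom, top, cols, rows, col, row):
    """Return the (left, right, bottom, top) sub-frustum for tile
    (col, row) in a cols x rows grid covering the full frustum.
    Row 0 is the bottom row, matching OpenGL's coordinate convention."""
    tile_w = (right - left) / float(cols)
    tile_h = (top - bottom) / float(rows)
    l = left + col * tile_w
    b = bottom + row * tile_h
    return (l, l + tile_w, b, b + tile_h)

# Hypothetical rendering loop, assuming a 1600x1200 window and a full
# frustum of (-1.0, 1.0, -0.75, 0.75) at the near plane:
#
# full = Image.new("RGB", (3200, 2400))
# for row in range(2):
#     for col in range(2):
#         l, r, b, t = tile_frustum(-1.0, 1.0, -0.75, 0.75, 2, 2, col, row)
#         glMatrixMode(GL_PROJECTION)
#         glLoadIdentity()
#         glFrustum(l, r, b, t, near, far)   # same near/far for every tile
#         glMatrixMode(GL_MODELVIEW)
#         draw_scene()                       # same modelview each time
#         data = glReadPixels(0, 0, 1600, 1200, GL_RGB, GL_UNSIGNED_BYTE)
#         tile = Image.fromstring("RGB", (1600, 1200), data)
#         tile = tile.transpose(Image.FLIP_TOP_BOTTOM)
#         # row 0 is the bottom of the scene, so after the vertical
#         # flip it belongs at the bottom of the pasted image:
#         full.paste(tile, (col * 1600, (1 - row) * 1200))
# full.save("snap.png")
```

Whether this comes out seamless in practice, or whether there is a simpler route, is exactly what I am asking.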