pyopengl-users Mailing List for PyOpenGL (Page 9)
From: Adam S. <avs...@gm...> - 2014-03-07 03:29:14
Hello, I'm having what I can only imagine are installation issues. I'm wondering if I'm using a version of Python that isn't supported? The website says Python 3.2+ is (experimental), and I am running Python 3.3, but the problems I'm having are at a very basic level, so I was wondering if there's something else that may be wrong. I tried installing using the Windows executables listed below into my x64 3.3 install. I also have a 32-bit install of 3.3, and got the same results. I did not install any of the other optional or recommended packages listed.

PyOpenGL-3.0.2.win-amd64.exe: on a "from OpenGL.GL import *" this gives me errors that look like some conversion from Python 2.x didn't take (some exceptions are of the form "Exception Blah, err:").

PyOpenGL-3.1.0b1.win-amd64.exe: this one installs and lets me do an import, but a simple command like "glGetString(GL_VERSION)" returns "None".

Any guidance would be appreciated. I know it isn't some general OpenGL problem, because I tried pyglet and it returns something sensible for the glGetString command.

Thanks for any help!
Adam
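The second symptom is usually not an installation problem at all: glGetString() returns None whenever no GL context is current, so querying GL_VERSION before creating a window reports None on a perfectly good install. A minimal sketch of the check (using GLUT here is an assumption; any context-creation route works, and the window title is arbitrary):

    from OpenGL.GLUT import glutInit, glutCreateWindow
    from OpenGL.GL import glGetString, GL_VERSION

    glutInit()
    glutCreateWindow(b"version check")  # a GL context is current from here on
    print(glGetString(GL_VERSION))      # should now return a version string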
From: Ákos T. <dxm...@gm...> - 2014-03-05 16:24:01
The Optimus driver is basically a secondary X server with the Nvidia proprietary drivers loaded instead of Mesa (which houses anything the integrated Intel card is capable of running), so I assume the issue might be present when running with the Nvidia drivers regardless of Optimus. Later this week I could do a test where I disable bumblebee altogether and enable Nvidia on the primary X, thus getting around the whole Optimus deal, and see if it still happens.

On 5 March 2014 17:11, Mike C. Fletcher <mcf...@vr...> wrote:
> [...]
From: Mike C. F. <mcf...@vr...> - 2014-03-05 16:11:31
On 03/05/2014 05:49 AM, Ákos Tóth wrote:
> The problem appears to have the same symptoms with 2.7 + PyOpenGL 3.1.0b1 - works without Optimus, crashes with Optimus.
> [...]

Okay, that's more than a little weird. Optimus seems to mean that we're not getting core functions in the GL dll under Linux... the weird thing is that such a condition *should* just raise an Exception on access; it shouldn't cause a crash. There's nothing accelerate-related at that point, so it makes sense that the behaviour is the same with and without it. The particular entry point it's looking up is likely the first one we're seeing which is core, but not supported on the Optimus driver. There were modifications made in the 3.1 loader that mean it will attempt to resolve *all* core entry points by DLL lookup, where 3.0.2 was doing a glGet() for the core entry points over 1.1 to see if we had that VERSION supported in a particular context.

So it seems we need to revisit how to resolve the entry points without blowing up on Optimus, yet also without requiring a glGet() during import. It seems likely we need to go back to the lazy-resolution code that waits until the last possible moment to determine whether the entry point is there (i.e. first call or __bool__ check) and only then does the resolution under a GL context.

Thanks,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
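The lazy-resolution approach Mike describes can be pictured as a small wrapper per entry point; this is an illustrative sketch only, not PyOpenGL's actual loader code:

    class LazyEntryPoint:
        """Defer the DLL lookup until first call or truth-test, so a missing
        core entry point raises instead of crashing at import time."""
        def __init__(self, name, resolver):
            self.name = name
            self._resolver = resolver   # platform-specific lookup callable
            self._resolved = False
            self._func = None

        def _resolve(self):
            if not self._resolved:
                self._func = self._resolver(self.name)  # None if unavailable
                self._resolved = True
            return self._func

        def __bool__(self):             # supports bool(glSomeFunction)
            return self._resolve() is not None
        __nonzero__ = __bool__          # Python 2 spelling of the same hook

        def __call__(self, *args):
            func = self._resolve()
            if func is None:
                raise RuntimeError('%s unavailable in this context' % self.name)
            return func(*args)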
From: Ákos T. <dxm...@gm...> - 2014-03-05 10:49:37
The problem appears to have the same symptoms with 2.7 + PyOpenGL 3.1.0b1 - works without Optimus, crashes with Optimus.

I also ran the trace (Optimus, Python 2.7, PyOpenGL 3.1.0b1, file with the single line "from OpenGL.GL import *"), the last couple of screens of which are here:
http://pastebin.com/6gicFwsn
Let me know if you need more context than that.

Another thing to note is that the crash reporter seems to think the following libraries are involved in the crash:

/usr/local/lib/python2.7/dist-packages/OpenGL_accelerate/latebind.so
/usr/local/lib/python2.7/dist-packages/OpenGL_accelerate/errorchecker.so
/usr/local/lib/python2.7/dist-packages/OpenGL_accelerate/wrapper.so
/usr/local/lib/python2.7/dist-packages/OpenGL_accelerate/arraydatatype.so
/usr/local/lib/python2.7/dist-packages/OpenGL_accelerate/formathandler.so

I am unsure how accurate it is, as accelerate appears to be uninvolved in the issue - I ran the following file through five v's and trace:

import OpenGL
OpenGL.USE_ACCELERATE = False
from OpenGL.GL import *

and got the same result as with accelerate.

Regards,
Ákos Tóth

On 5 March 2014 06:10, Mike C. Fletcher <mcf...@vr...> wrote:
> [...]
From: Mike C. F. <mcf...@vr...> - 2014-03-05 05:10:43
On 14-03-03 05:09 PM, Ákos Tóth wrote:
> [...]

Could you also test 2.7 + PyOpenGL 3.1.0b1, so we can see whether we have a problem with PyOpenGL 3.1 + Optimus or just with PyOpenGL 3.1 + Optimus + Python 3.x. Testing PyOpenGL 3.1.0b1 without the Accelerate module on Python 2.7 would also help to narrow down failure cases.

If you could do a trace of the import to see what line fails, that would help. To do that, create a file with just the import statement, then run:

python -m trace --trace -f yourfile.py

It should print out each line as it executes, and the last line printed should be the one that causes the crash.

The most likely cause of a crash is that we have something doing GL calls before there's a GL context.

Thanks,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
From: Ákos T. <dxm...@gm...> - 2014-03-03 22:10:09
Hi,

I recently started using PyOpenGL with Python 3.3 on Ubuntu 13.10. I am in the unfortunate situation of owning a laptop with a switchable (Optimus technology) dedicated video card. With the most recent version of PyOpenGL (3.1.0b1) installed, the program crashes quite seriously when running the line "from OpenGL.GL import *" - to the point where Ubuntu actually opens an error dialog for it. This problem is specific to Python 3 with optirun - in C++, OpenGL works on the Optimus card without any hiccups, and version 3.0.2 with Python 2.7 also works without a problem.

Tests concluded so far:
- Python 2.7, PyOpenGL and accelerate version 3.0.2, with or without optirun: no problems.
- Python 3.3, PyOpenGL and accelerate version 3.1.0b1 without optirun: no problems, but the Intel card only supports OpenGL contexts up to version 3.1, which is far behind what I require.
- Python 3.3, PyOpenGL and accelerate version 3.1.0b1 with optirun: crash on import. The pastebin link below is the output of python3 -vvvvvc "from OpenGL.GL import *".

Run log: http://pastebin.com/yZbxWJeJ

Thank you in advance for your help,
Ákos Tóth
From: Nicolas R. <Nic...@in...> - 2014-02-25 12:12:23
I found the bug. I was not re-setting attributes between calls to the two different programs, which led to the weird behaviour. It also fixed my framebuffer problem. Now I understand the utility of VAOs... Sorry for the noise. Here is the corrected version.

Nicolas
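The fix Nicolas describes generalizes to one VAO per program: a VAO captures the buffer bindings and attribute pointers, so switching objects becomes a single bind instead of re-setting attributes by hand. A sketch under that assumption (function and parameter names are hypothetical, not vispy's code):

    from OpenGL.GL import *

    def make_vao(vertex_buffer, index_buffer, attrib_location, components):
        vao = glGenVertexArrays(1)
        glBindVertexArray(vao)
        glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer)
        glEnableVertexAttribArray(attrib_location)
        glVertexAttribPointer(attrib_location, components,
                              GL_FLOAT, GL_FALSE, 0, None)
        if index_buffer is not None:
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer)  # captured by the VAO
        glBindVertexArray(0)   # unbind so later calls don't mutate this VAO
        return vao

    # at draw time: glBindVertexArray(cube_vao) ... glBindVertexArray(quad_vao)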
From: Nicolas R. <Nic...@in...> - 2014-02-25 11:20:02
I investigated further, and my problem may come from a misunderstanding on my part about vertex/index buffers. I attach a simple example where 2 programs are built (cube and quad):

- cube is a rotating cube using vcube and icube (vertex and index buffer) and TRIANGLES.
- quad is a simple quad using vquad (vertex buffer) and TRIANGLE_STRIP.

In the "display" method, one can choose to display the rotating cube (if 1) or the quad (if 0). The rotating cube runs fine on my machine, but the quad display does not. It is as if the quad is using the cube's vertex buffer, and the display appears broken (cube vertices have been divided by 2, and it impacts the quad). It's very similar to the bug I first reported here, and this makes me think there's no connection with the framebuffer. Also, the quad seems to be able to use the cube's texture information, while I did not set the "u_texture" information in that program. Obviously I'm doing something wrong, but I can't see it. Finally, binding the texture (see the line marked #DEBUG) blacks out the texture. A lot of weird things are happening indeed. Any help appreciated.

Nicolas
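The symptom above is consistent with global attribute state: without VAOs, whatever buffers and attribute pointers were set last apply to the next draw, whichever program is current. A hedged sketch of the per-draw re-binding that avoids it (quad_program, vquad and position_loc are hypothetical stand-ins for the names in the attached example):

    from OpenGL.GL import *

    glUseProgram(quad_program)
    glBindBuffer(GL_ARRAY_BUFFER, vquad)            # the quad's own vertices
    glEnableVertexAttribArray(position_loc)
    glVertexAttribPointer(position_loc, 2, GL_FLOAT, GL_FALSE, 0, None)
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)           # no index buffer needed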
From: Nicolas R. <nic...@in...> - 2014-02-25 09:51:54
While trying to write a stripped-down example, I accidentally reproduced the same kind of output using GL_TRIANGLE_STRIP instead of GL_TRIANGLES. Here is the source (which doesn't work as expected using the indices buffer: no output). That made me think I may have introduced the same kind of bug in the other source. I'm still investigating why the index buffer doesn't work (and the texture, actually). I didn't thoroughly check for errors yet; pardon me if it is very obvious... (The crate.npy is a 256x256 RGB texture stored as a numpy array. You'll need to replace it or discard it.)

Nicolas
From: rndblnch <rnd...@gm...> - 2014-02-25 09:02:06
Nicolas Rougier <Nicolas.Rougier <at> inria.fr> writes:
> First image is direct rendering (no framebuffer) and is ok. Second image is indirect rendering (rendering to texture then displaying the texture; the left washed-out part is ok, it comes from the fragment shader for testing purposes).
>
> I suspect the depth buffer is not really attached but I did not get any error along the way.

did you clear the depth buffer between the binding of the FBO and the rendering? the code (even with dependencies) would help for analysis.

renaud
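renaud's question points at a common slip: clearing before the FBO is bound clears the window's depth buffer, not the off-screen one. A minimal sketch of the intended ordering (fbo is a hypothetical framebuffer handle):

    from OpenGL.GL import *

    glBindFramebuffer(GL_FRAMEBUFFER, fbo)              # target the FBO first
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)  # clears the FBO's buffers
    # ... render the scene to the texture here ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0)                # back to the window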
From: Nicolas R. <Nic...@in...> - 2014-02-24 21:23:39
Thanks, that will be really useful. I'll try to turn your example into a rotating cube and see what happens. And if it works, it'll make a nice addition to PyOpenGL.

Nicolas

On 24 Feb 2014, at 21:52, Mike C. Fletcher <mcf...@vr...> wrote:
> [...]
From: Nicolas R. <Nic...@in...> - 2014-02-24 21:21:52
Thanks a lot. I will start with the example by Mike and your class as well. If I do not find the bug, I will try to write the simplest example I can and come back here.

Nicolas

On 24 Feb 2014, at 21:58, Ian Mallett <ia...@ge...> wrote:
> [...]
From: Mike C. F. <mcf...@vr...> - 2014-02-24 21:13:56
On 14-02-24 10:36 AM, Antoine Martin wrote:
> Hi,
>
> Can you expand on this new buffer protocol? Is it worth using yet?

At the moment it's only registered for __builtin__.memoryview and __builtin__.bytearray objects. Internally it uses memoryview on the buffer-api-supporting objects, and the <buffer> object doesn't *seem* to be compatible (?? weird). If you can get your buffer object into a buffer-api-supporting object (something you can call memoryview() on) you should be able to pass it into the APIs.

As for testing for support: if OpenGL.version.__version__.split('.')[:2] >= ['3','1'], then you should have the buffer-api handler.

Hope that helps,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
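Putting Mike's answer next to Antoine's use case, a sketch might look as follows; pixel_buffer, w and h are hypothetical, and whether the bytearray conversion actually avoids a copy for a <type 'buffer'> object is exactly the compatibility question Mike flags above:

    from OpenGL import version
    from OpenGL.GL import glTexSubImage2D, GL_TEXTURE_2D, GL_RGB, GL_UNSIGNED_BYTE

    if version.__version__.split('.')[:2] >= ['3', '1']:
        pixels = bytearray(pixel_buffer)   # a registered buffer-api type
    else:
        pixels = str(pixel_buffer)         # old copying path
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels)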
From: Ian M. <ia...@ge...> - 2014-02-24 20:58:31
On Mon, Feb 24, 2014 at 1:40 PM, Nicolas Rougier <Nic...@in...> wrote:
> Direct rendering (without framebuffer):
> http://www.loria.fr/~rougier/tmp/without-fbo.png
> Indirect rendering (with framebuffer):
> http://www.loria.fr/~rougier/tmp/with-fbo.png

This doesn't look FBO-related; it looks like a difference in rendering. FBO errors tend to result in nothing being rendered at all, not in rendering that looks incorrect.

> I cannot provide a simple example since the example comes from a project (vispy). If anyone has a simple glut-based example showing framebuffer rendering (with depth buffer), such as a cube rendered on a cube, that could help.

I have a simple FBO class implemented. It is used in several of my projects (e.g. my GLSL game of life implementation: http://geometrian.com/programming/projects/index.php?project=Game%20of%20Life). You can use it as a reference.

As before though, it looks like some broken matrices or shading - not an FBO issue.

Ian
From: Mike C. F. <mcf...@vr...> - 2014-02-24 20:52:34
On 14-02-24 03:40 PM, Nicolas Rougier wrote:
> ...
> I cannot provide a simple example since the example comes from a project (vispy). If anyone has a simple glut-based example showing framebuffer rendering (with depth buffer), such as a cube rendered on a cube, that could help.

I can check out code for vispy (I already have it checked out on multiple machines).

I don't have a simple "render a cube" sample, but this code here:

http://bazaar.launchpad.net/~mcfletch/openglcontext/trunk/view/head:/tests/shadow_2.py

renders a shadow map (that is, a depth-texture) using FBOs and might help with what needs to happen to render a scene into an FBO.

HTH,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
From: Nicolas R. <Nic...@in...> - 2014-02-24 20:41:07
Hi,

I'm trying to render a simple rotating textured cube to a texture using a framebuffer. I attached an RGB texture for the color buffer and a depth renderbuffer for the depth buffer to the framebuffer (the same 512x512 resolution for each of them). I checked the status of the framebuffer, which reports itself as "complete". Nonetheless I get a weird rendering, as shown in the second image:

Direct rendering (without framebuffer): http://www.loria.fr/~rougier/tmp/without-fbo.png
Indirect rendering (with framebuffer): http://www.loria.fr/~rougier/tmp/with-fbo.png

The first image is direct rendering (no framebuffer) and is ok. The second image is indirect rendering (rendering to texture, then displaying the texture; the left washed-out part is ok, it comes from the fragment shader for testing purposes).

I suspect the depth buffer is not really attached, but I did not get any error along the way. I'm using Python (2.7.6) and PyOpenGL (3.0.2) on OS X, and I'm using framebuffer operations from the main PyOpenGL namespace (OpenGL.GL), but I know OpenGL.GL.ext.framebuffer_object is also available. Could this be the reason?

I cannot provide a simple example since the example comes from a project (vispy). If anyone has a simple glut-based example showing framebuffer rendering (with depth buffer), such as a cube rendered on a cube, that could help.

Nicolas
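For reference, a minimal color-texture-plus-depth-renderbuffer setup of the shape described above, sketched with the core OpenGL.GL entry points (512x512 as in the report; parameters are illustrative, not vispy's code):

    from OpenGL.GL import *

    def make_fbo(size=512):
        color = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, color)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, size, size, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, None)       # uninitialized storage
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        depth = glGenRenderbuffers(1)
        glBindRenderbuffer(GL_RENDERBUFFER, depth)
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, size, size)
        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color, 0)
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth)
        assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
        glBindFramebuffer(GL_FRAMEBUFFER, 0)               # back to the window
        return fbo, color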
From: Antoine M. <an...@na...> - 2014-02-24 15:55:27
Hi,

Can you expand on this new buffer protocol? Is it worth using yet?

I have some pixels stored in <type 'buffer'>, and at the moment I end up making a copy of the pixels into a plain Python string before uploading them with glTexSubImage2D. Wouldn't it be better to use this new buffer protocol when available? (Or maybe even something else?) If so, how can I easily test for the availability of this array handler? (I've had a brief look at arraydatatype.py and quickly got utterly lost...)

Thanks,
Antoine
From: rndblnch <rnd...@gm...> - 2014-02-24 09:49:14
Matthew Keeter <matt.j.keeter <at> gmail.com> writes:
> OpenGL.error.NullFunctionError: Attempt to call an undefined function glGenVertexArrays, check for bool(glGenVertexArrays) before calling
>
> Can anyone familiar with function construction help me track this down?

you should open a context that supports the core profile:

glutInitDisplayMode(GLUT_RGBA|GLUT_PROFILE_3_2_CORE)

renaud
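renaud's one-liner in context, as a sketch: the flag name is taken from his message and is specific to Apple's GLUT, so whether your GLUT binding exports it under exactly this name is an assumption worth checking:

    from OpenGL.GLUT import *
    from OpenGL.GL import glGenVertexArrays, glGetString, GL_VERSION

    glutInit()
    glutInitDisplayMode(GLUT_RGBA | GLUT_PROFILE_3_2_CORE)  # before the window
    glutCreateWindow(b"GL test")
    print(glGetString(GL_VERSION))   # should report a 3.2+ core context
    print(glGenVertexArrays(1))      # resolves once a core context exists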
From: Matthew K. <mat...@gm...> - 2014-02-22 23:19:33
I'm trying to create a vertex array object on Mac OS 10.9 (using PyOpenGL 3.0.2) and running into a NullFunctionError. They're definitely supported: OpenGL Extension Viewer says core features up to 4.1 are implemented, and VAOs were introduced in 3.0.

Here's a 6-line example that reproduces the issue:

from OpenGL.GL import *
from OpenGL.GLUT import *
glutInit()
glutCreateWindow("GL test")
print glGenBuffers(1)
print glGenVertexArrays(1)

which fails with the exception:

OpenGL.error.NullFunctionError: Attempt to call an undefined function glGenVertexArrays, check for bool(glGenVertexArrays) before calling

Can anyone familiar with function construction help me track this down?

Thanks,
Matt
From: Mike C. F. <mcf...@vr...> - 2014-02-21 15:37:24
On 02/21/2014 10:23 AM, rndblnch wrote:
> === modified file 'OpenGL/extensions.py'
> --- OpenGL/extensions.py 2014-01-30 20:32:21 +0000
> +++ OpenGL/extensions.py 2014-02-21 15:21:36 +0000
> @@ -156,11 +156,13 @@
>      if not platform.PLATFORM.CurrentContextIsValid():
>          return False
>      from OpenGL.raw.GL._types import GLint
> -    from OpenGL.raw.GL.VERSION.GL_1_1 import glGetString
> +    from OpenGL.raw.GL.VERSION.GL_1_1 import glGetString, glGetError
>      from OpenGL.raw.GL.VERSION.GL_1_1 import GL_EXTENSIONS
>      from OpenGL import error
>      try:
>          extensions = glGetString( GL_EXTENSIONS )
> +        if glGetError():
> +            raise error.GLError()
>          if extensions:
>              extensions = extensions.split()
>          else:

The patch is applied to bzr head and should appear in beta-2.

Thanks,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
From: rndblnch <rnd...@gm...> - 2014-02-21 15:23:57
rndblnch <rndblnch <at> gmail.com> writes:
> [...]
> maybe a fix would be to explicitly check for error after the glGetString call?

looks like this was it; the following patch fixes the problem for me:

=== modified file 'OpenGL/extensions.py'
--- OpenGL/extensions.py 2014-01-30 20:32:21 +0000
+++ OpenGL/extensions.py 2014-02-21 15:21:36 +0000
@@ -156,11 +156,13 @@
     if not platform.PLATFORM.CurrentContextIsValid():
         return False
     from OpenGL.raw.GL._types import GLint
-    from OpenGL.raw.GL.VERSION.GL_1_1 import glGetString
+    from OpenGL.raw.GL.VERSION.GL_1_1 import glGetString, glGetError
     from OpenGL.raw.GL.VERSION.GL_1_1 import GL_EXTENSIONS
     from OpenGL import error
     try:
         extensions = glGetString( GL_EXTENSIONS )
+        if glGetError():
+            raise error.GLError()
         if extensions:
             extensions = extensions.split()
         else:

renaud
From: rndblnch <rnd...@gm...> - 2014-02-21 15:14:46
rndblnch <rndblnch <at> gmail.com> writes:
> what i do not understand, however, is how the error checking interacts with those imports (i.e. why setting error checking makes the ARB import find a valid glGenVertexArrays function, and why it becomes invalid when error checking is disabled).

ok, i think i have made progress in my understanding of the problem. regarding extension loading, there is one place where the OpenGL.ERROR_CHECKING flag changes the code path taken: OpenGL/extensions.py, line 163. the glGetString call will fail silently on OpenGL 3+ if error checking is not enabled, while it will raise an exception if error checking is enabled. in this latter case, the alternate code path provided in the except clause will correctly fetch the extensions.

maybe a fix would be to explicitly check for error after the glGetString call?

renaud
From: rndblnch <rnd...@gm...> - 2014-02-21 10:05:21
Mike C. Fletcher <mcfletch <at> vrplumber.com> writes:
> On 02/18/2014 03:44 PM, rndblnch wrote:
> > hello again,
> >
> > while moving some code to core profile, i found a strange interaction between the OpenGL.ERROR_CHECKING flag and the function loading mechanism. the following program (MacOSX specific for the glut core profile flag) prints True if error checking is enabled and prints False otherwise.
>
> I haven't been able to duplicate it here (i.e. it always returns true on a Linux amd box). My guess is that we are seeing an error from earlier in the process startup being interpreted as "function doesn't exist" when loading that function, but I can't see *how* that would occur, as the actual code to get the entry point shouldn't care about OpenGL errors.

yes, it looks like it is Mac OS X specific. i think i begin to understand where it comes from. in OpenGL.GL.VERSION.GL_3_0, the versions of the functions imported by:

from OpenGL.raw.GL.VERSION.GL_3_0 import *  (line 15)

can be replaced by later imports from ARB, e.g.:

from OpenGL.GL.ARB.vertex_array_object import *  (line 28)

but on the mac, this specific extension is not present; apple provides the GL_APPLE_vertex_array_object extension instead.

what i do not understand, however, is how the error checking interacts with those imports (i.e. why setting error checking makes the ARB import find a valid glGenVertexArrays function, and why it becomes invalid when error checking is disabled).

renaud

> Sorry to not be much help here,
> Mike
From: rndblnch <rnd...@gm...> - 2014-02-21 09:48:29
Mike C. Fletcher <mcfletch <at> vrplumber.com> writes:
> On 02/18/2014 04:46 PM, rndblnch wrote:
> > hello again, last report for tonight...
> >
> > i am still testing core profile, and yet another corner case: OpenGL.GL.shaders.glCompileShader is imported from the ARB.shader_objects extension, and its error handling relies on glGetObjectParameterivARB... which is not present in core profile. this leads to trouble when trying to report compilation errors.
>
> This was due to being a bit lazy in wrapping the 2.0 version: I used precisely the same code as in the ARB extension, and in doing so made the core code depend on the ARB version of the functionality. bzr head *should* now fix this, though as the code actually worked on my machine I can't be sure the problem is actually fixed on true core-only profiles.

it works for me, thanks.

> BTW, your script there works fine on GLSL 1.3, which is what Intel integrated graphics provides these days (at least on Linux).

on Mac OS X 10.9, when you request a core profile context, you get OpenGL 4.1 and the shader compiler only accepts glsl 150... i will have to special-case the shader version at runtime to be portable.

renaud
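The run-time special-casing renaud mentions can be sketched by deriving the #version directive from the context itself; this assumes the driver reports GLSL versions in the usual "major.minor ..." form (e.g. "4.10", "1.30"):

    from OpenGL.GL import glGetString, GL_SHADING_LANGUAGE_VERSION

    def version_directive():
        raw = glGetString(GL_SHADING_LANGUAGE_VERSION)      # e.g. b'4.10'
        number = b''.join(raw.split()[0].split(b'.')[:2])   # -> b'410'
        return b'#version ' + number + b'\n'                # prepend to shaders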
From: Mike C. F. <mcf...@vr...> - 2014-02-19 17:30:25
Hi all,

I've just checked in a change that allows passing in output arrays for each wrapper that uses setOutput(). The upshot is that most functions that return arrays can now be passed an array object to be filled (as you would do in C), allowing you to explicitly specify what object should be used for the output. There is a minor cost to the change (a length check and an "X in (a,b)" check), but it should allow for easier translation between C and Python OpenGL code.

The change will require re-compiling OpenGL_accelerate for bzr head (and will necessitate a new OpenGL_accelerate release for beta 2).

Enjoy,
Mike

--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com
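Based on Mike's description, usage would look roughly like this sketch: allocate the output yourself and hand it in, instead of taking the wrapper-allocated return value (glGetFloatv is one plausible example of a setOutput()-based wrapper, not a confirmed list):

    import numpy
    from OpenGL.GL import glGetFloatv, GL_MODELVIEW_MATRIX

    out = numpy.zeros((4, 4), dtype=numpy.float32)  # caller-owned storage
    glGetFloatv(GL_MODELVIEW_MATRIX, out)           # filled in place, C-style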