From: Ryan M. <rm...@gm...> - 2015-03-13 19:21:13
On Fri, Mar 13, 2015 at 12:53 PM, Chris Barker <chr...@no...> wrote:

> On Fri, Mar 13, 2015 at 10:21 AM, Benjamin Root <ben...@ou...> wrote:
>
>> Probably what I am most interested in from OpenGL is its transforms
>> stack.
>
> OpenGL can't do anything with transforms that you couldn't do in Python
> (or C, or Cython). What it can do is push the transform computations to
> the GPU(s) -- making for monstrously faster performance.
>
> This is the "problem" with the current MPL architecture. It does all the
> transforming outside of the back-ends, and assumes that the backends can
> only render in 2-d pixel coordinates.
>
> If we can re-factor to push the transforms to the back-ends, most of them
> could use the same generic code, but you'd have the option of the
> back-end providing the transforms, which would buy you a LOT with OpenGL,
> and could maybe buy you some with, say, wxAgg, as you could put the
> transforms in C/C++ perhaps more efficiently.
>
> Note that with OpenGL in general, it's the transforming that buys you
> performance -- when you push brand-new data to be rendered, it takes a
> lot of time to push that data to the video card, so drawing the first
> time doesn't buy you much. But if you need to re-render that same data
> in a different view, say zooming in or out, then GL can fly -- if that
> transformation can be done on the GPU.

Don't overestimate the cost of the data transfer, at least for anything
short of millions of points. Even years ago, just doing basic OpenGL
plotting (with no sophisticated use of on-GPU memory) was a big win. The
other win for mpl + OpenGL is getting a real rasterizer and depth buffer
for the mplot3d package, which is hampered by its per-artist z-ordering.

Ryan

--
Ryan May
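To make the "upload the data once, re-transform on the GPU" point concrete,
here is a minimal sketch using PyOpenGL. It assumes an already-current
GL 3.x context (e.g. created by GLFW or a Qt widget); the upload_line() and
draw_line() names are illustrative helpers, not matplotlib or backend API.

    # Upload-once / re-transform pattern: the vertex data goes to the GPU a
    # single time, and each zoom/pan only updates a 4x4 uniform matrix.
    import numpy as np
    from OpenGL.GL import *
    from OpenGL.GL.shaders import compileProgram, compileShader

    VERTEX_SRC = """
    #version 330
    uniform mat4 view;     // zoom/pan transform, updated on every draw
    in vec2 position;      // data coordinates, uploaded once
    void main() {
        gl_Position = view * vec4(position, 0.0, 1.0);
    }
    """

    FRAGMENT_SRC = """
    #version 330
    out vec4 color;
    void main() { color = vec4(0.0, 0.0, 0.0, 1.0); }
    """

    def upload_line(xy):
        """Push an (n, 2) float array to the GPU; the expensive step."""
        program = compileProgram(
            compileShader(VERTEX_SRC, GL_VERTEX_SHADER),
            compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER))
        vao = glGenVertexArrays(1)
        vbo = glGenBuffers(1)
        glBindVertexArray(vao)
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBufferData(GL_ARRAY_BUFFER, xy.astype(np.float32), GL_STATIC_DRAW)
        loc = glGetAttribLocation(program, "position")
        glEnableVertexAttribArray(loc)
        glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, None)
        return program, vao, len(xy)

    def draw_line(program, vao, n, view_matrix):
        """Redraw under a new view transform; no vertex data is re-sent."""
        glUseProgram(program)
        glUniformMatrix4fv(glGetUniformLocation(program, "view"),
                           1, GL_TRUE, view_matrix.astype(np.float32))
        glBindVertexArray(vao)
        glDrawArrays(GL_LINE_STRIP, 0, n)

With a pattern like this, a zoom or pan only changes the view matrix, which
is why re-rendering the same data stays cheap even when the first upload of
the vertex data is not.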