From: Wladimir v. d. L. <la...@gm...> - 2007-12-14 16:45:39
Hello,

I have created a GPU-accelerated plugin to play back Dirac video streams. Like the software implementation it is based on (Schrödinger), it currently uses the GStreamer framework. To do this, it downloads the rendered frame back from GPU to CPU memory, after which it passes the raw YUV output to GStreamer, which uploads the frame to the screen again.

This generates a lot of superfluous traffic on the bus. Is there infrastructure in place to pass the output of a plugin as a GL texture, for direct rendering? Or some other recommended way to do this?

I realize I've probably written the first video rendering plugin for Linux that is accelerated on graphics hardware, so this might get interesting.

Greetings,
Wladimir
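To put a rough number on that superfluous bus traffic: with planar YUV 4:2:0 (1.5 bytes per pixel), each decoded frame crosses the bus twice — once down to CPU memory and once back up for display. A quick back-of-the-envelope sketch (the resolution and frame rate are illustrative assumptions, not figures from the thread):

```python
# Estimate the redundant GPU<->CPU bus traffic of the download/re-upload path.
# Assumed example stream parameters (not from the thread): 720p at 25 fps.
width, height, fps = 1280, 720, 25

bytes_per_pixel = 1.5          # planar YUV 4:2:0: Y + U/4 + V/4
frame_bytes = int(width * height * bytes_per_pixel)

# Each frame crosses the bus twice: GPU -> CPU (download), CPU -> GPU (upload).
redundant_bytes_per_sec = 2 * frame_bytes * fps

print(frame_bytes)                    # 1382400 bytes per frame
print(redundant_bytes_per_sec / 1e6)  # ~69 MB/s of avoidable transfers
```

All of that traffic disappears if the decoded frame stays on the GPU as a texture.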
From: Simon H. <od...@cs...> - 2007-12-15 15:13:13
On Fri, 14 Dec 2007 at 17:45 +0100, Wladimir van der Laan wrote:

> Hello,
>
> I have created a GPU-accelerated plugin to playback Dirac video
> streams. This currently, like the software implementation it is based
> on (Schrodinger) uses the gst framework. To do this, it downloads back
> the rendered frame from GPU to CPU memory, after which it passes the
> raw YUV output to gst, which uploads the frame to the screen again.
>
> This generates a lot of superfluous traffic on the bus. Is there
> infrastructure in place to pass the output of a plugin as a GL
> texture, for direct rendering? Or some other recommended way to do
> this?

I think you could augment the capabilities for your plugin with a property that says this is an OpenGL handle for a texture, and write your own sink that accepts corresponding capabilities and actually shows the texture somehow. To still allow downloading the rendered frame from GPU to CPU memory, you could write a plugin that does exactly that conversion. I could also be just completely rambling though :)

> I realize I've probably written the first video rendering plugin for
> Linux that is accelerated on graphics hardware, so this might get
> interesting.

It is indeed interesting and you should post your source code somewhere :)

Simon Holm Thøgersen
From: Wladimir v. d. L. <la...@gm...> - 2007-12-15 17:00:30
> I think you could augment the capabilities for your plugin with a
> property that says this is an opengl handle for a texture, and write
> your own sink that accepts capabilities correspondingly and actually
> shows the texture somehow.

This was close to what I had in mind, thanks.

> To allow for still downloading the rendered frame from GPU to CPU
> memory you could write a plugin that does exactly that conversion. I
> could also be just completely rambling though :)

Can a plugin somehow detect what its output expects? Could I do something like 'if attached to a normal plugin, download the video data to the CPU; if attached to a GPU plugin, pass a pointer on the GPU'?

Now that I think of it, I'd need something analogous to 'pads', but then in GPU memory, to store the intermediate result, and still have it pipeline nicely.

> It is indeed interesting and you should post your source code
> somewhere :)

I certainly will, probably after I solve these loose ends. At least I have everything working now, so I can think about details like this :) If it turns out too complicated I'll first publish the version that downloads back; optimizations can always be added in the future.

Wladimir
From: Simon H. <od...@cs...> - 2007-12-15 19:56:24
On Sat, 15 Dec 2007 at 18:00 +0100, Wladimir van der Laan wrote:

> > I think you could augment the capabilities for your plugin with a
> > property that says this is an opengl handle for a texture, and write
> > your own sink that accepts capabilities correspondingly and actually
> > shows the texture somehow.
>
> This was close to what I had in mind, thanks.
>
> > To allow for still downloading the rendered frame from GPU to CPU
> > memory you could write a plugin that does exactly that conversion. I
> > could also be just completely rambling though :)
>
> Can a plugin detect somehow what its output expects? Could I do
> something like 'if attached to a normal plugin, download the video
> data to the CPU, if attached to a GPU plugin, pass pointer on GPU'.

The capability matching system will take care of that, given the scheme I outlined. If your decoder element's src caps always have a special "GPU" property, it will require that the receiving element's sink caps have that special property too, in order to match. To allow the decoded image to be used in a "CPU" pipeline, just make an element with the "GPU" property on the sink pad and without the "GPU" property on the source pad.

> Now that I think of it, I'd need something analogous to 'pads' but
> then in GPU memory to store the intermediate result, and still have it
> pipeline nicely.

Just use pads. What you really want, and what is a bit special, is your own buffer type that extends GstBuffer and can keep track of the allocation of OpenGL resources.

> > It is indeed interesting and you should post your source code
> > somewhere :)
>
> I certainly will, probably after I solve these loose ends. At least I
> have everything working now, so I can think about details like this :)

This would already be interesting to see :)

> If it turns out too complicated I'll first publish the version that
> downloads back, optimizations can always be added in the future.

Yeah. Happy coding.

Oh, btw, what are the performance numbers of the GPU vs. CPU version? And please supply hardware details.

Simon Holm Thøgersen
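Simon's caps-matching scheme can be sketched with a toy model: treat caps as dictionaries and a link as valid only when the pads agree. This is plain Python standing in for GStreamer's real GstCaps negotiation; the element and field names are illustrative assumptions, not actual GStreamer API:

```python
# Toy model of caps negotiation for the special-"GPU"-property scheme.
# Real GStreamer intersects GstCaps structures; here a link is valid when
# the "gpu" marker matches on both sides and all shared fields agree.

def caps_compatible(src_caps, sink_caps):
    """True if a src pad with src_caps can link to a sink pad with sink_caps."""
    # The special marker must be present (or absent) on BOTH sides.
    if ("gpu" in src_caps) != ("gpu" in sink_caps):
        return False
    shared = set(src_caps) & set(sink_caps)
    return all(src_caps[k] == sink_caps[k] for k in shared)

# Hypothetical elements in the scheme Simon describes:
gpu_decoder_src = {"format": "YUV", "gpu": True}   # decoder keeps frames on GPU
gl_sink_sink    = {"format": "YUV", "gpu": True}   # GL sink accepts GPU frames
cpu_sink_sink   = {"format": "YUV"}                # ordinary CPU video sink
downloader_sink = {"format": "YUV", "gpu": True}   # download element: GPU in...
downloader_src  = {"format": "YUV"}                # ...plain CPU buffers out

print(caps_compatible(gpu_decoder_src, gl_sink_sink))     # True: stays on GPU
print(caps_compatible(gpu_decoder_src, cpu_sink_sink))    # False: needs download
print(caps_compatible(gpu_decoder_src, downloader_sink))  # True
print(caps_compatible(downloader_src, cpu_sink_sink))     # True: back on the CPU
```

The point of the model: the decoder can never be linked directly to a CPU-only sink, so the download element is pulled in exactly when (and only when) the pipeline leaves the GPU.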
From: <mr...@gm...> - 2007-12-15 17:23:11
Hello, Wladimir.

On Dec 14, 2007 10:45 AM, Wladimir van der Laan <la...@gm...> wrote:
[...]
> I have created a GPU-accelerated plugin to playback Dirac video streams.
[...]
> This generates a lot of superfluous traffic on the bus. Is there
> infrastructure in place to pass the output of a plugin as a GL texture,
> for direct rendering? Or some other recommended way to do this?

I would be interested to hear the opinions of the GStreamer developers on this. It has been discussed in the past, but I don't think it has come up on the mailing list.

One thing you can do is create your own hardware-accelerated sink and connect both your plug-ins, thus avoiding data passing over the general-purpose processor. If you go that route, you could emit fake buffers from the filter to the sink so the GStreamer pipeline keeps going. This, of course, breaks GStreamer's purpose of controlling the data flows.

> I realize I've probably written the first video rendering plugin for Linux
> that is accelerated on graphics hardware, so this might get interesting.

Indeed this is interesting. You might want to consider OpenMAX IL (look at the diagram here):

http://www.khronos.org/openmax/

What you want to achieve sounds to me like an OpenMAX tunnel between components (Dirac Decoder and GL Render). On the GStreamer side, this might be of interest to you:

http://www.freedesktop.org/wiki/GstOpenMAX

Greetings!

Daniel Díaz
yo...@da...

> Greetings,
> Wladimir
From: <ma...@re...> - 2007-12-16 09:26:55
Wladimir van der Laan wrote:

> Hello,
>
> I have created a GPU-accelerated plugin to playback Dirac video streams.
> This currently, like the software implementation it is based on
> (Schrodinger) uses the gst framework. To do this, it downloads back the
> rendered frame from GPU to CPU memory, after which it passes the raw YUV
> output to gst, which uploads the frame to the screen again.
>
> This generates a lot of superfluous traffic on the bus. Is there
> infrastructure in place to pass the output of a plugin as a GL texture,
> for direct rendering? Or some other recommended way to do this?
>
> I realize I've probably written the first video rendering plugin for
> Linux that is accelerated on graphics hardware, so this might get
> interesting.
>
> Greetings,
> Wladimir

You are not alone; bug #431252 has some discussion you might find interesting:

http://bugzilla.gnome.org/show_bug.cgi?id=431252

--
Regards,
René Stadler
From: Andy W. <wi...@po...> - 2007-12-27 19:15:47
Hi Wladimir,

On Fri 14 Dec 2007 11:45, "Wladimir van der Laan" <la...@gm...> writes:

> Hello,
>
> I have created a GPU-accelerated plugin to playback Dirac video
> streams.

Rocking!

> Is there infrastructure in place to pass the output of a plugin as a
> GL texture, for direct rendering? Or some other recommended way to do
> this?

Since no one else mentioned this in this thread IIRC, David Schleef has been doing things like this; check his weblog:

http://www.schleef.org/blog/2007/12/25/opengl-in-gstreamer/

Eventually, when glimagesink gets updated, there will be no need for gldownload; the texture can be rendered directly.

> I realize I've probably written the first video rendering plugin for
> Linux that is accelerated on graphics hardware, so this might get
> interesting.

There have been GPU-based filters before, and direct-path hardware decoders, but not decoders on the GPU, I don't think. You rock :-)

Cheers,

Andy
--
http://wingolog.org/
From: Simon H. <od...@cs...> - 2007-12-27 19:31:51
On Thu, 27 Dec 2007 at 14:14 -0500, Andy Wingo wrote:

> Hi Wladimir,
>
> On Fri 14 Dec 2007 11:45, "Wladimir van der Laan" <la...@gm...> writes:
>
> > Hello,
> >
> > I have created a GPU-accelerated plugin to playback Dirac video
> > streams.
>
> Rocking!
>
> > Is there infrastructure in place to pass the output of a plugin as a
> > GL texture, for direct rendering? Or some other recommended way to do
> > this?
>
> Since no one else mentioned this in this thread IIRC, David Schleef has
> been doing things like this, check his weblog:
>
> http://www.schleef.org/blog/2007/12/25/opengl-in-gstreamer/
>
> Eventually when glimagesink gets updated there will be no need for
> gldownload, the texture can be rendered directly.

gldownload will still be useful for doing further processing on the CPU, and for debugging, so I wouldn't say it's something temporary. Or maybe you were just referring to the pipeline David Schleef posted on the blog, and the specific case of Wladimir's decoder?

> > I realize I've probably written the first video rendering plugin for
> > Linux that is accelerated on graphics hardware, so this might get
> > interesting.
>
> There have been GPU-based filters before, and direct-path hardware
> decoders, but not decoders on the GPU I don't think. You rock :-)

Simon Holm Thøgersen
From: Vinay R. <vin...@gm...> - 2007-12-28 03:58:08
On Dec 27, 2007 11:14 AM, Andy Wingo <wi...@po...> wrote:

> Since no one else mentioned this in this thread IIRC, David Schleef has
> been doing things like this, check his weblog:
>
> http://www.schleef.org/blog/2007/12/25/opengl-in-gstreamer/
>
> Eventually when glimagesink gets updated there will be no need for
> gldownload, the texture can be rendered directly.
>
> There have been GPU-based filters before, and direct-path hardware
> decoders, but not decoders on the GPU I don't think. You rock :-)

In the case of direct rendering (decoding and display both being done on the GPU), how does video clipping work? That is, how does the GPU know what portion of the video to display on-screen? Is the overlay color used as a mask, or does someone maintain a list of clipping rectangles?

Thanks,
Vinay
From: David S. <ds...@sc...> - 2007-12-28 04:06:31
On Thu, Dec 27, 2007 at 07:58:08PM -0800, Vinay Reddy wrote:

> > There have been GPU-based filters before, and direct-path hardware
> > decoders, but not decoders on the GPU I don't think. You rock :-)
>
> In case of direct rendering (decoding and display, both being done
> from the GPU), how does video clipping work? As in, how does the GPU
> know what portion of the video to display on-screen? Is the overlay
> color used as a mask or does someone maintain a list of clipping
> rectangles?

It's no different than any other OpenGL application.

dave...
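In other words, GL-rendered video uses no hardware overlay and no color key: the window system clips the GL drawable, and the application can further restrict drawing with the scissor test (glEnable(GL_SCISSOR_TEST) plus glScissor). Conceptually, the scissor test is just a rectangle intersection; a minimal sketch of that idea in plain Python (not actual GL calls):

```python
# Conceptual model of OpenGL scissor-test clipping: drawing is restricted
# to the intersection of the draw rectangle and the scissor rectangle.
# Rectangles are (x, y, width, height), origin at the lower-left as in GL.

def clip(draw, scissor):
    """Return the visible part of `draw` after scissoring, or None."""
    dx, dy, dw, dh = draw
    sx, sy, sw, sh = scissor
    x0, y0 = max(dx, sx), max(dy, sy)
    x1, y1 = min(dx + dw, sx + sw), min(dy + dh, sy + sh)
    if x0 >= x1 or y0 >= y1:
        return None                      # fully clipped away
    return (x0, y0, x1 - x0, y1 - y0)

# A 640x480 video quad, scissored to the right half of an 800x600 window:
print(clip((0, 0, 640, 480), (400, 0, 400, 600)))  # (400, 0, 240, 480)
```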
From: Vinay R. <vin...@gm...> - 2007-12-28 05:57:46
> It's no different than any other OpenGL application.

Okay, cool. But existing applications would have to do some additional work to get subtitles and other on-screen display stuff to work right, wouldn't they? (as in, rendering all text, animations, etc. to the OpenGL context)

Vinay
From: Florent <ft...@gm...> - 2008-01-18 16:51:22
Hi,

To the writer of the accelerated decoder plugin:
* did you use GL shaders for implementing the decoding algorithm?
* will you commit the code so that I can peek at the implementation?

I'm wondering if (M)JPEG decoding could be done on the GPU as well; my project involves decoding a lot of parallel MJPEG streams, and it is heavy in the end (even running on a quad-core machine); I can handle about 6 streams at best. The JPEG algorithm is quite simple, so hopefully it may be quick to do.

Thanks for any info/code about how to develop a GPU-accelerated decoder.

Flo
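JPEG is in fact a good fit for the GPU: after entropy decoding, every 8x8 block goes through dequantization and an inverse DCT independently of all the others, so thousands of blocks can be processed in parallel, one shader invocation per block. As a reference point, this is the 8x8 inverse DCT each block needs, in plain Python (a naive O(N^4) formulation for clarity; a real shader would use a separable or fast variant):

```python
import math

N = 8  # JPEG operates on 8x8 blocks

def idct_8x8(F):
    """Naive 2-D inverse DCT of one 8x8 coefficient block (JPEG-style).

    Each block is independent of its neighbors, which is what makes this
    stage of JPEG decoding easy to parallelize on a GPU.
    """
    def c(u):
        return 1 / math.sqrt(2) if u == 0 else 1.0

    out = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    s += (c(u) * c(v) * F[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[x][y] = s / 4
    return out

# A block with only the DC coefficient set decodes to a flat 8x8 patch:
F = [[80.0 if (u, v) == (0, 0) else 0.0 for v in range(N)] for u in range(N)]
flat = idct_8x8(F)
print(round(flat[0][0], 3))  # 10.0 everywhere: 80 * (1/2) / 4
```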
From: David S. <ds...@sc...> - 2008-01-18 23:52:36
On Fri, Jan 18, 2008 at 05:51:25PM +0100, Florent wrote:

> To the writer of the accelerated decoder plugin:
> * did you use GL shaders for implementing the decoding algorithm ?
> * will you commit the code so that i can peek at the implementation ?

Perhaps you're talking about schroedinger? The code is at

git://diracvideo.schleef.org/git/schroedinger.git

in the cuda branch. It uses CUDA, not OpenGL.

dave...