On Tuesday 01 February 2005 18:28, Matthias Hopf wrote:
> Ok, this will be a bit lengthy:
> As far as I understand your approach you want to fetch video data in rgb
> format (on a per-frame basis) from the xine render thread into the OSG
> thread, do a double buffering there and download it to a texture as soon
> as you need it for rendering there.
Yes, this is generally what I'm looking at, although I don't want to fetch
any data save for pointers. I'd like a xine thread (be that in a plugin or
part of core xine-lib) that can write to one RGB image buffer while the
rendering thread reads from a second RGB image buffer. Once the copy to one
and the copy from the other are complete, the pointers can be swapped.
There will have to be some kind of state specifying when the image has been
resized, the video stopped, etc.
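I picture the exchange roughly like this - a minimal sketch with hypothetical
names (std::mutex for brevity; the real thing would more likely use pthreads,
as xine-lib does):

    // Sketch of the pointer-swap double buffer: the xine thread fills
    // 'write' while the OSG render thread reads 'read'; once a complete
    // frame sits in 'write' the pointers swap under a lock, so no pixel
    // data is ever copied between the threads.
    #include <mutex>

    struct VideoDoubleBuffer
    {
        unsigned char* write;      // filled by the xine thread
        unsigned char* read;       // consumed by the OSG render thread
        bool           frameReady; // a complete frame sits in 'write'
        std::mutex     mutex;

        // Called by the render thread before each texture update;
        // returns the freshest complete frame.
        unsigned char* acquireFrame()
        {
            std::lock_guard<std::mutex> lock(mutex);
            if (frameReady)
            {
                unsigned char* tmp = write;
                write = read;
                read  = tmp;
                frameReady = false;
            }
            return read;
        }
    };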
> I would propose a different framework:
> Let xine render its video using a backend based on the OpenGL one - but
> one that does not have its own render thread but rather an external
> caller interface for an OSG thread.
> Now the OSG render thread has to call the xine backend plugin in its
> mainloop to check for pending synchronous events:
> - new drawable needed - unlikely at other times than at the beginning
> in your case, as you will use your own frontend
It depends what you mean by drawable: an X drawable, or a separate xine-lib
concept of a drawable. Just to complicate things, there is also an
osg::Drawable, which isn't an X drawable but an object that can be drawn :-)
In the context of an OSG app, the app will have created all its windows and
associated graphics contexts, and may well be fully up and running before any
video is ever started; alternatively, one could load a whole scene graph,
including the video texture, before opening windows.
I'm expecting requests for a drawable from xine-lib to largely be a no-op,
unless it's this call that creates the links between the xine-lib/plugin
structures and the equivalent OSG objects.
> - questioning the necessary visual
Again, no X-style visuals, but the type of texture format would be of
interest, such as RGB, RGBA or YUV.
> - resize
This is a fun one... it requires a resize of the texture. That should
probably work out of the box on the OSG side of things, but I'll need to
check to be sure.
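My expectation is that on the OSG side it reduces to reallocating the
osg::Image that backs the texture (a sketch, from memory of the osg::Image
API):

    // Sketch: on a video size change, reallocate the osg::Image backing
    // the texture and dirty it; OSG should then recreate the GL texture
    // at the new size on the next draw.
    #include <osg/Image>

    void resizeVideoImage(osg::Image* image, int width, int height)
    {
        image->allocateImage(width, height, 1, GL_RGB, GL_UNSIGNED_BYTE);
        image->dirty(); // flag the change for re-download
    }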
> All in all these events occur rarely, I guess you can make sure (with
> the appropriate frontend that is embedded in the OSG application) that
> they are only called once at startup.
Indeed, and I'm happy for other events to come through too; most I should be
able to handle with small updates to the OSG objects.
> The video frames that are created during playback are stored
> asynchronously - that is, if nobody checks for the events, no harm is
> done and xine will continue playing the video in the background, without
> freezing. If synchronous events are buffered too long, xine will freeze
> until they are caught.
If it's possible to query state, then I could probably arrange things so that
unhandled events just get harmlessly discarded.
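If so, the frontend could drain the event queue once per frame and act only
on the event types it knows about - a rough sketch against xine's public
event API, as I understand it:

    // Sketch: drain xine's event queue, handling the few events we
    // care about and harmlessly discarding the rest.
    #include <xine.h>

    void drainXineEvents(xine_event_queue_t* queue)
    {
        xine_event_t* event;
        while ((event = xine_event_get(queue)) != 0)
        {
            switch (event->type)
            {
                case XINE_EVENT_FRAME_FORMAT_CHANGE:
                    // trigger a resize of the OSG image/texture here
                    break;
                default:
                    break; // unhandled events are simply dropped
            }
            xine_event_free(event);
        }
    }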
> Now in the tree traversal routine (assuming a render tree action) you
> should call the xine backend plugin in order to setup the currently
> active texture with the current frame. The routine will just do a
> glTexSubImage2D() and return. On rare occasions (video change) the
> routine may have to allocate a larger texture.
I'm expecting the OSG to do the glTexSubImage2D(), but we could possibly
defer this to a xine-lib plugin. It probably doesn't make much odds either
way.
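Something like OSG's texture subload-callback mechanism is what I have in
mind (a sketch from memory of the osg::Texture2D API; getCurrentFrame() is a
hypothetical accessor into the xine side):

    // Sketch: a subload callback so OSG allocates the texture once via
    // load(), then copies each new video frame in with glTexSubImage2D().
    #include <osg/Texture2D>

    extern const unsigned char* getCurrentFrame(); // hypothetical

    class VideoSubloadCallback : public osg::Texture2D::SubloadCallback
    {
    public:
        VideoSubloadCallback(int texW, int texH, int vidW, int vidH)
            : _texW(texW), _texH(texH), _vidW(vidW), _vidH(vidH) {}

        virtual void load(const osg::Texture2D&, osg::State&) const
        {
            // allocate (power-of-two) texture storage once
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _texW, _texH,
                         0, GL_RGB, GL_UNSIGNED_BYTE, 0);
        }

        virtual void subload(const osg::Texture2D&, osg::State&) const
        {
            // upload just the video-sized sub-rectangle each frame
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _vidW, _vidH,
                            GL_RGB, GL_UNSIGNED_BYTE, getCurrentFrame());
        }

    private:
        int _texW, _texH, _vidW, _vidH;
    };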
> There is no need to double buffer anything here, as video buffers are
> only recycled after 4 frames have been delivered, and as soon as you
> have fetched the current buffer number its content won't change during
> upload. This is already part of the current code.
Those are 4 frames in YUV format, though, rather than 4 frames in RGB format,
so we'd need to do the conversion just before the texture subloading, and
this would mean the graphics thread stalls while the YUV-to-RGB conversion
runs. Putting the conversion into the xine-lib thread would avoid this, as
would deferring the colour space conversion to the GPU itself.
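The GPU route is attractive because the per-pixel maths is tiny. To
illustrate, here's my own sketch of the BT.601 conversion as a GLSL fragment
shader (not the fragment program Matthias mentions later in this mail), which
assumes Y, U and V packed into the R, G and B channels of the texture:

    // Sketch: YUV (BT.601) to RGB conversion on the GPU, so the CPU
    // never has to touch the pixel data at all.
    static const char* yuvToRgbFrag =
        "uniform sampler2D yuvTex;                                \n"
        "void main()                                              \n"
        "{                                                        \n"
        "    vec3 yuv = texture2D(yuvTex, gl_TexCoord[0].st).rgb; \n"
        "    float y = yuv.r;                                     \n"
        "    float u = yuv.g - 0.5;                               \n"
        "    float v = yuv.b - 0.5;                               \n"
        "    gl_FragColor = vec4(y + 1.402 * v,                   \n"
        "                        y - 0.344 * u - 0.714 * v,       \n"
        "                        y + 1.772 * u,                   \n"
        "                        1.0);                            \n"
        "}                                                        \n";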
> > > > I've looked at the existing OpenGL plugin, but this is way over
> > > > complicated for what I'm expecting to require - the OSG has lots of
> > > > OpenGL functionality all
> > >
> > > Over complicated with respect to what?
> > The OpenGL plugin does a lot of OpenGL setup and calls that are simply
> > not required for an equivalent OSG plugin, as it's the OSG that can do the
> > vast majority of this work.
> All of these are encapsulated and can be thrown away. You don't need
> different render types, you only need texture upload.
> Heck, you don't even need the drawing routines, that will be done outside
> xine in OSG.
I'd expect a custom xine-lib plugin to not have any OpenGL in it, just code to
link up the OSG objects to what is being generated by xine-lib.
> But rendering takes place in only a single thread, right? Except for
> multiple contexts, that is. I haven't taken a look into OSG for a long
Each graphics context has its own thread, so if there are multiple graphics
contexts there will be multiple threads.
> BTW - Which OSG do you mean, OpenSG or OpenSceneGraph? Or is there a
> third I'm not aware of? A former colleague of mine, Manfred Weiler,
> worked on OpenSG, another, Stefan Roettger, a bit on OpenSceneGraph.
OSG as in OpenSceneGraph. I've met Stefan; he invited me across to Stuttgart
uni to talk about the OSG.
> > > You have to make sure that the same thread does OSG rendering and
> > > texture upload.
> > The OSG will have its own threads; for the xine plugin I'll just need to
> > have a thread which manages a set of 2D RGB images that the OSG
> > rendering thread can read on demand. I expect to at least double
> > buffer the 2D RGB images so that one can be written to while the OSG's
> > thread can read from the other safely without tearing.
> As said before, I would try to avoid this as data is copied at least
> twice with this approach. xine has been designed from the ground up so that
> no data copy is needed, and so that CPU caches are utilized best.
> If implemented correctly it shouldn't be more complicated either.
I was actually hoping to avoid copying data; as mentioned above, the double
buffering would be in terms of pointer swaps rather than data copies.
> > I've implemented something similar with integrating with libmpeg3, but
> > libmpeg3 is quite a bit simpler in terms of usage, since you explicitly
> > call it to get the image for each frame and it doesn't have a plugin
> > architecture which xine-lib has.
> Sure. It's a little bit less feature laden as well ;) SCNR
> It does only video, not audio, it does only MPEG-2, it does no timing,
> etc. pp.
Yep, it's about as lean as one could get...
It's certainly not a functional enough solution for our needs; xine-lib meets
them far better.
> > I am curious about using YUV OpenGL texture formats under OSX and
> > fragment programs for other platforms for doing the colour space
> > conversion, so I may well be able to take unscaled YUV data.
> Maybe you might want to wait a bit. If I'm lucky I'll commit YUV
> conversion in fragment programs this week. It works with both ATI and
> NVidia (first time that something like this works out-of-the-box on both
> hardware types), but needs polishing due to extension usage.
Coooool, gotta love open source ;-)
> > In some ways the OpenGL plugin is a good match, but I believe it's way
> > over complicated relative to my own needs.
> If done in the right way, this could easily be a backend that may be
> usable for other projects as well (e.g. a frontend that uses OpenGL
> heftily for its user interface and wants to do fancy stuff with the
> video image). If we come to a conclusion we might even end up in
> changing the OpenGL plugin a bit so that, by signaling the plugin that it
> will be called externally, it changes its behavior. So we could stay with a
> single code base. But this integration might better be a second step.
> I thought of this some time ago, for a *really* fancy video player with
> on-screen UI display, but I didn't find any time for that ;)
I'm happy just to get a first proof of concept working right now...
> What I would suggest:
lots of useful comments pruned...
> Would be scary if you're *that* fast :-]
Cue evil laugh....
Thanks very much for your time and useful comments.