Re: [Redbutton-devel] Small changes to rb for Fedora 9
From: Simon K. <s.k...@er...> - 2008-06-13 16:20:03
Andrea wrote:
> Simon Kilvington wrote:
>> xvideo output - this has been on my todo list for a long time - if you
>> want to implement it I would be very pleased ;-) you should be able to
>
> I started, but could not find any documentation about xvideo... just 2
> examples with little comment.
> Could use SDL though... for which there is plenty of documentation.
>
>> base the code on videoout_xshm - I've been thinking it may be better to
>> also have a function in the videoout_* codes to overlay the MHEG scene
>> on the video - if we can feed YUV frames to xvideo rather than having to
>> convert them to RGB first, then it may turn out better to convert the
>> MHEG scene to YUV and composite that on the YUV video then feed the
>> result to xvideo - the scene will change a lot less often than the video
>> so there should be a lot less RGB->YUV conversion going on
>>
>> so we would basically have a function in videoout_* called something
>> like "set_overlay" that gets called each time the MHEG scene changes -
>> then each time we show a video frame we composite the current overlay
>> on to it
>
> I have not understood one thing: who displays the video and who displays
> the MHEG?
>
> MHEGDisplay.c seems to display the MHEG
> videoout_* displays the video stream.
>
> How do they interact? I mean, do they paint on the same canvas? Is the
> MHEG permanent till it is changed?
> In order to implement "set_overlay" I guess MHEG should create a bitmap
> which should be added to the video? Every frame?
> Basically: I have no idea how the overlay works... :-) But I am keen to
> learn.
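The idea in the quoted proposal, converting the MHEG scene to YUV once per scene change and then compositing in YUV space on every frame, can be sketched roughly as below. Everything here is hypothetical: the names (`yuv_pixel`, `rgb_to_yuv`, `composite_yuv`) are not from the redbutton source, and the integer BT.601-style conversion is just one common approximation. The point is only that the RGB->YUV cost is paid when the scene changes, while the per-frame work is a cheap alpha blend.

```c
#include <stdint.h>

/* Illustrative sketch only: none of these names exist in the
 * redbutton source. Convert the MHEG scene to YUV once, when the
 * scene changes, then composite it onto each YUV video frame
 * without any per-frame RGB<->YUV conversion. */

typedef struct {
    uint8_t y, u, v;
    uint8_t a;          /* overlay alpha, 0 = fully transparent */
} yuv_pixel;

/* RGB -> YUV (integer BT.601 approximation); this is the expensive
 * step, done once per scene change rather than once per video frame */
static yuv_pixel rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    yuv_pixel p;
    p.y = (uint8_t)((( 66 * r + 129 * g +  25 * b + 128) / 256) + 16);
    p.u = (uint8_t)(((-38 * r -  74 * g + 112 * b + 128) / 256) + 128);
    p.v = (uint8_t)(((112 * r -  94 * g -  18 * b + 128) / 256) + 128);
    p.a = a;
    return p;
}

/* per-frame work: blend one overlay pixel onto one video pixel,
 * entirely in YUV space */
static void composite_yuv(uint8_t *vy, uint8_t *vu, uint8_t *vv,
                          const yuv_pixel *ov)
{
    unsigned a = ov->a;
    *vy = (uint8_t)((ov->y * a + *vy * (255u - a)) / 255u);
    *vu = (uint8_t)((ov->u * a + *vu * (255u - a)) / 255u);
    *vv = (uint8_t)((ov->v * a + *vv * (255u - a)) / 255u);
}
```

A real `set_overlay` in videoout_* would do this conversion for the whole scene bitmap and keep the result; the frame loop would only call the blend.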
> Andrea

How it works is like this:

Any MHEG object that wants to draw something on the screen calls the drawing routines in MHEGDisplay.c. Once it has finished a block of drawing, it calls MHEGDisplay_useOverlay, which copies the new overlay into the used_overlay variable.

When the screen needs to be refreshed, e.g. you drag something over the window or a new video frame has been drawn, MHEGDisplay_refresh is called. This takes the video frame as a background (the image is stored in the contents variable) and uses XRender to composite the used_overlay data onto the video frame. This updated video frame (now including the overlay) is then copied onto the screen using XCopyArea.

To display video, MHEGStreamPlayer has a thread that decodes the MPEG stream and adds YUV format video frames to a queue. Another thread in MHEGStreamPlayer takes the YUV frames off the queue, scales them, converts them to RGB, then waits until it is time to display the next frame. At that point it calls MHEGDisplay_refresh, which combines the frame with the overlay and puts it on the screen.

The video thread in MHEGStreamPlayer uses functions in videoout_* to scale the frame and convert it to RGB; the actual method used depends on which video output method is chosen.

It would be better to encapsulate the compositing in videoout_*, and also the actual putting of the data on the screen, i.e. everything that MHEGDisplay_refresh does. The reason for this is that if the video output method can cope with YUV frames, then we can avoid having to convert every frame to RGB.

Hope this helps!

--
Simon Kilvington
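The draw / useOverlay / refresh flow described above can be modelled in a few lines of C. This is a deliberately reduced sketch: each "buffer" is a single uint32_t so the control flow is easy to see, a bitwise OR stands in for the XRender compositing, and a plain assignment stands in for XCopyArea. The function and variable names follow the mail, but the bodies are illustrative assumptions, not the real MHEGDisplay.c code.

```c
#include <stdint.h>

/* Single-pixel model of the refresh path; the real code works on
 * full XImage/Picture buffers and uses XRender + XCopyArea. */

static uint32_t pending_overlay;   /* what MHEG drawing routines write into */
static uint32_t used_overlay;      /* latched snapshot shown on screen      */
static uint32_t contents;          /* last decoded video frame              */
static uint32_t screen;            /* what XCopyArea would put on screen    */

/* MHEG objects call drawing routines that update the pending overlay */
void MHEGDisplay_draw(uint32_t pixels)
{
    pending_overlay = pixels;
}

/* called once a block of drawing is finished: latch the new overlay */
void MHEGDisplay_useOverlay(void)
{
    used_overlay = pending_overlay;
}

/* called for each new video frame (or expose event): composite the
 * current overlay onto the frame, then copy the result to the screen */
void MHEGDisplay_refresh(uint32_t video_frame)
{
    contents = video_frame;
    uint32_t composited = contents | used_overlay;  /* stand-in for XRender  */
    screen = composited;                            /* stand-in for XCopyArea */
}
```

Note that drawing alone never changes what is on screen: only after MHEGDisplay_useOverlay latches the scene does the next refresh pick it up, which is why the overlay stays put across video frames until the MHEG scene actually changes.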