From: Jason T. <ta...@sa...> - 2005-07-26 14:48:46
I have written some filters for MPlayer that we plan on using for the Freevo project. One filter is called vf_osd, which provides a BGRA buffer that is composited over a running movie at an independent framerate (i.e. the OSD can be updated at a rate independent of the movie framerate, and while the movie is paused). The other filter is called vf_outbuf, which passes the video data into a shared memory buffer. The latter filter will be used for our canvas system so that, for example, we can display a thumbnail view of the currently playing video while navigating the menu system.

vf_outbuf can stop the flow of video so that it doesn't reach the vo device (as when in the menus), pass the video on without writing to the shared memory buffer (as when the user is watching the video fullscreen), or do both at once (as during the brief transition period between menu and fullscreen, to avoid noticeable lags). This is, I admit, somewhat kludgey, but in practice it works remarkably well.

I am presently trying to work the above functionality into an application using xine-lib. My first attempt was to implement vf_outbuf's functionality as a post plugin in xine. However, because xine seems to preprocess a number of frames before sending them to the video device, there is an A/V sync problem: the frame being sent to the external buffer must be in sync with the audio. So the only way I can see to make this work is to create a new video out plugin. Having done this, the video data written to the external buffer is indeed in sync with the audio.

The second requirement is the ability to pass the video through to a "real" video device, like Xv, for when the user views the video fullscreen. My "buffer" video out plugin therefore takes a passthrough parameter, which is another video out device. If a particular flag is set, it copies the frame and sends it to that other video device.
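For clarity, the three vf_outbuf modes described above boil down to two independent switches. This is just a sketch of that state machine with names of my own invention; it is not code from the actual filter or from xine-lib:

```c
#include <assert.h>
#include <stdbool.h>

/* Two independent switches cover all three modes described above. */
typedef struct {
    bool write_shmem;  /* copy frames into the shared memory buffer */
    bool pass_to_vo;   /* forward frames to the real vo device      */
} outbuf_mode_t;

/* In the menus: thumbnail canvas only, video never reaches the vo.   */
static const outbuf_mode_t MODE_MENU       = { true,  false };
/* Fullscreen playback: vo device only, no shared-memory copies.      */
static const outbuf_mode_t MODE_FULLSCREEN = { false, true  };
/* Menu<->fullscreen transition: both at once, to avoid visible lag.  */
static const outbuf_mode_t MODE_TRANSITION = { true,  true  };
```

The transition mode is the "kludgey" part: it pays for a brief double write so that neither consumer ever sees a gap in the frame flow.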
The display_frame function looks like this:

static void buffer_display_frame (vo_driver_t *this_gen, vo_frame_t *frame_gen)
{
    buffer_driver_t *this = (buffer_driver_t *)this_gen;
    buffer_frame_t *frame = (buffer_frame_t *)frame_gen;
    vo_frame_t *passthrough_frame;
    int do_passthrough;

    pthread_mutex_lock(&this->lock);

    /* [snip] ... copy video frame to external buffer ... */

    do_passthrough = check_if_passthrough_needed();

    if (this->passthrough && do_passthrough) {
        passthrough_frame = this->passthrough->get_frame(this->passthrough,
                                                         frame->width,
                                                         frame->height,
                                                         frame->ratio,
                                                         frame->format,
                                                         frame->flags);
        if (frame->format == XINE_IMGFMT_YV12) {
            xine_fast_memcpy(passthrough_frame->base[0], frame_gen->base[0],
                             frame_gen->pitches[0] * frame->height);
            xine_fast_memcpy(passthrough_frame->base[1], frame_gen->base[1],
                             frame_gen->pitches[1] * (frame->height>>1));
            xine_fast_memcpy(passthrough_frame->base[2], frame_gen->base[2],
                             frame_gen->pitches[2] * (frame->height>>1));
        } else if (frame->format == XINE_IMGFMT_YUY2) {
            xine_fast_memcpy(passthrough_frame->base[0], frame_gen->base[0],
                             frame_gen->pitches[0] * frame->height);
        }

        _x_post_frame_copy_down(frame, passthrough_frame);
        passthrough_frame->draw(passthrough_frame, frame_gen->stream);
        _x_post_frame_copy_up(passthrough_frame, frame);
        passthrough_frame->free(passthrough_frame);
    }

    frame->vo_frame.free(&frame->vo_frame);
    pthread_mutex_unlock(&this->lock);
}

So I create an Xv video device, and a buffer device with the Xv device set as the passthrough. This actually works, but only when the stream is wired directly to the buffer video port. When I insert a tvtime post plugin between the stream and the buffer video device AND tvtime kicks into film mode, the video becomes jittery. My sense is that this is quite telling as to the nature of the problem, but I'm simply too much of a xine-lib newbie to know what it's telling me.
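One thing worth noting about the copies above: `xine_fast_memcpy(dst, src, pitches[0] * height)` implicitly assumes the passthrough frame came back from get_frame() with exactly the same pitches as the source frame. That usually holds, but it isn't guaranteed by the vo API. A more defensive version copies plane data row by row; this is a sketch of the idea using plain buffers, not code from xine-lib:

```c
#include <stdint.h>
#include <string.h>

/* Copy one image plane, tolerating different source and destination
 * pitches. width_bytes is the number of meaningful bytes per row
 * (for YUY2 that is 2 * width; for each YV12 chroma plane, width / 2). */
static void copy_plane(uint8_t *dst, int dst_pitch,
                       const uint8_t *src, int src_pitch,
                       int width_bytes, int height)
{
    int y;
    for (y = 0; y < height; y++) {
        memcpy(dst, src, (size_t)width_bytes);
        dst += dst_pitch;
        src += src_pitch;
    }
}
```

Each of the three YV12 memcpy calls (or the single YUY2 one) would become one copy_plane call with the source and destination pitches passed separately.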
Alright, so, while composing this email and staring at the above code, it occurred to me to try swapping the arguments in _x_post_frame_copy_down(). It seemed intuitive to me that I am copying _from_ the original frame _to_ the new passthrough frame, but in a random, infinite-monkeys attempt I swapped these parameters and, lo and behold, it works. The reason I'm not erasing any of the above is that I'm still hoping somebody can explain why it should work this way.

Furthermore, as I have a strong feeling that what I'm doing above is architecturally repulsive, I'm wondering if somebody might have a better suggestion for accomplishing my goals. At the very least, the memcpy calls seem somewhat extraneous, and it would be nice to reuse the image buffers of the original frame.

Lastly, there is the question of how I am going to reproduce my vf_osd work with xine. There are some critical features of vf_osd that we need:

- it provides a writable BGRA buffer;
- per-pixel and global alpha blending;
- it is not fixed to the movie framerate;
- it is operational while paused.

I don't think it would do at all to have this code in the video device, because it would be fixed to the movie framerate and, worse, would not work at all while paused. Now, xine does have an overlay manager, but having read the hacker's docs, I'm left with the impression that it's not suitable for 8-bit per-pixel alpha combined with global alpha. Is it possible to implement an OSD renderer that has direct access to the YV12/YUY2 data, rather than first having to convert everything to an RLE image?

Regards,
Jason.
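P.S. For concreteness, the "per-pixel and global alpha" requirement above reduces to the following per-channel math. This is only a sketch of the blend I have in mind, written against plain bytes; it is not tied to xine's overlay manager or to any particular colorspace path:

```c
#include <stdint.h>

/* Blend one 8-bit OSD channel over one 8-bit video channel, combining
 * the BGRA pixel's own alpha with a global (whole-OSD) alpha.
 * effective alpha = pixel_alpha * global_alpha / 255, with rounding. */
static uint8_t blend_channel(uint8_t osd, uint8_t video,
                             uint8_t pixel_alpha, uint8_t global_alpha)
{
    unsigned a = ((unsigned)pixel_alpha * global_alpha + 127) / 255;
    return (uint8_t)(((unsigned)osd * a
                      + (unsigned)video * (255 - a) + 127) / 255);
}
```

The point of the question above is exactly this: xine's RLE-based overlay path does not obviously let me feed it an independent 8-bit alpha per pixel plus a global fade factor, which is why direct access to the YV12/YUY2 planes would be preferable.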