Thread: [Redbutton-devel] Small changes to rb for Fedora 9
|
From: Andrea <mar...@go...> - 2008-06-09 18:22:14
|
Hi,

I've recently moved to Fedora 9 and I've just tried to run redbutton-browser. I had a few small issues:

1) the alsa output device "plughw" is not good when using pulseaudio, and in general I would use "default". Or one could allow for a command line option.

2) redbutton still uses img_convert and other deprecated functions from ffmpeg. The recommended replacement is swscale. I managed to convert videoout_xshm.c to the new framework. There are still 2 more occurrences which I have not converted.

3) avcodec_decode_audio should be replaced by avcodec_decode_audio2, which requires af->item.size to be initialized.

4) aspect ratio: BBC broadcasts in 16:9, while redbutton shows everything 4:3. I have not yet found how to change it.

5) audio in Multiscreen: no audio is played while in News Multiscreen. I get these warnings:
Unable to open MPEG stream (16768, 53, 0)

6) xvideo: I am trying to change to xvideo. Any suggestions?

Andrea
|
|
From: Andrea <mar...@go...> - 2008-06-09 18:36:36
Attachments:
rb.diff
|
Andrea wrote:
> Hi,
>
> I've recently moved to Fedora 9 and I've just tried to run
> redbutton-browser.
> [...issue list snipped...]

I forgot the patch.

Andrea
|
|
From: Simon K. <s.k...@er...> - 2008-06-11 13:40:48
|
thanks for the patch - I've applied the default alsa device and the
decode_audio2 parts - at the moment I'm changing the remaining
img_convert (in MHEGDisplay.c) to use swscale instead, so I'll commit
your fix for videoout_xshm at the same time - unless you beat me to it!
regarding the other questions:
aspect ratio - I have some hooks in the code to deal with it, but have
not implemented anything yet - at the moment it either uses exactly
720x576 pixels for its display, or the full size of your screen - but
it does not take the aspect ratio into account
not sure why the audio does not play in News Multiscreen - it works for
me - the error you get seems to suggest it is trying to open an audio
stream that does not exist - if you run rb-browser with the -v flag and
send me a log of the output, I'll see if I can work out what is going
wrong
xvideo output - this has been on my todo list for a long time - if you
want to implement it I would be very pleased ;-) you should be able to
base the code on videoout_xshm - I've been thinking it may be better to
also have a function in the videoout_* codes to overlay the MHEG scene
on the video - if we can feed YUV frames to xvideo rather than having to
convert them to RGB first, then it may turn out better to convert the
MHEG scene to YUV and composite that on the YUV video then feed the
result to xvideo - the scene will change a lot less often than the video
so there should be a lot less RGB->YUV conversion going on
so we would basically have a function in videoout_* called something
like "set_overlay" that gets called each time the MHEG scene changes -
then each time we show a video frame we composite the current overlay
on to it
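[Editor's note: a rough sketch of what the proposed set_overlay pair could look like. All names here (Overlay, vo_set_overlay, vo_composite_overlay) are invented for illustration and do not exist in the redbutton source; a plain software alpha blend stands in for the XRender/YUV compositing discussed above.]

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* the MHEG scene, stored once per scene change as an ARGB buffer */
typedef struct
{
	unsigned int width;
	unsigned int height;
	uint8_t *argb;		/* 4 bytes per pixel: A, R, G, B */
} Overlay;

static Overlay current_overlay;

/* called each time the MHEG scene changes */
void
vo_set_overlay(const uint8_t *argb, unsigned int width, unsigned int height)
{
	size_t nbytes = (size_t) width * height * 4;

	current_overlay.width = width;
	current_overlay.height = height;
	current_overlay.argb = realloc(current_overlay.argb, nbytes);
	memcpy(current_overlay.argb, argb, nbytes);
}

/* called for every video frame: alpha-blend the stored overlay onto a
 * packed RGB frame with the same dimensions as the overlay */
void
vo_composite_overlay(uint8_t *rgb_frame)
{
	size_t npixels = (size_t) current_overlay.width * current_overlay.height;
	size_t i;

	for(i = 0; i < npixels; i++)
	{
		const uint8_t *src = &current_overlay.argb[i * 4];
		uint8_t *dst = &rgb_frame[i * 3];
		unsigned int a = src[0];

		dst[0] = (src[1] * a + dst[0] * (255 - a)) / 255;
		dst[1] = (src[2] * a + dst[1] * (255 - a)) / 255;
		dst[2] = (src[3] * a + dst[2] * (255 - a)) / 255;
	}
}
```

In the arrangement described above, vo_composite_overlay would run in the video thread once per displayed frame, while vo_set_overlay runs only when the scene changes.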
Andrea wrote:
> [...quoted issue list and "I forgot the patch" snipped...]
> ------------------------------------------------------------------------
>
> Index: videoout_xshm.h
> ===================================================================
> --- videoout_xshm.h (revision 485)
> +++ videoout_xshm.h (working copy)
> @@ -9,6 +9,7 @@
> #include <X11/Xlib.h>
> #include <X11/extensions/XShm.h>
> #include <ffmpeg/avcodec.h>
> +#include <ffmpeg/swscale.h>
>
> typedef struct
> {
> @@ -22,11 +23,9 @@
> XShmSegmentInfo shm; /* shared memory used by current_frame */
> AVPicture rgb_frame; /* ffmpeg wrapper for current_frame SHM data */
> enum PixelFormat out_format; /* rgb_frame ffmpeg pixel format */
> - ImgReSampleContext *resize_ctx; /* NULL if we do not need to resize the frame */
> + struct SwsContext *sws_ctx;
> FrameSize resize_in; /* resize_ctx input dimensions */
> FrameSize resize_out; /* resize_ctx output dimensions */
> - AVPicture resized_frame; /* resized output frame */
> - uint8_t *resized_data; /* resized_frame data buffer */
> } vo_xshm_ctx;
>
> extern MHEGVideoOutputMethod vo_xshm_fns;
> Index: MHEGStreamPlayer.c
> ===================================================================
> --- MHEGStreamPlayer.c (revision 485)
> +++ MHEGStreamPlayer.c (working copy)
> @@ -102,7 +102,7 @@
>
> af->item.pts = AV_NOPTS_VALUE;
>
> - af->item.size = 0;
> + af->item.size = AVCODEC_MAX_AUDIO_FRAME_SIZE;
>
> return af;
> }
> @@ -433,7 +433,8 @@
> {
> audio_frame = new_AudioFrameListItem();
> af = &audio_frame->item;
> - used = avcodec_decode_audio(audio_codec_ctx, af->data, &af->size, data, size);
> +
> + used = avcodec_decode_audio2(audio_codec_ctx, af->data, &af->size, data, size);
> data += used;
> size -= used;
> if(af->size > 0)
> Index: MHEGAudioOutput.h
> ===================================================================
> --- MHEGAudioOutput.h (revision 485)
> +++ MHEGAudioOutput.h (working copy)
> @@ -15,7 +15,7 @@
> } MHEGAudioOutput;
>
> /* default ALSA device */
> -#define ALSA_AUDIO_DEVICE "plughw"
> +#define ALSA_AUDIO_DEVICE "default"
>
> bool MHEGAudioOutput_init(MHEGAudioOutput *);
> void MHEGAudioOutput_fini(MHEGAudioOutput *);
> Index: videoout_xshm.c
> ===================================================================
> --- videoout_xshm.c (revision 485)
> +++ videoout_xshm.c (working copy)
> @@ -38,8 +38,7 @@
>
> v->current_frame = NULL;
>
> - v->resize_ctx = NULL;
> - v->resized_data = NULL;
> + v->sws_ctx = NULL;
>
> return v;
> }
> @@ -49,10 +48,9 @@
> {
> vo_xshm_ctx *v = (vo_xshm_ctx *) ctx;
>
> - if(v->resize_ctx != NULL)
> + if(v->sws_ctx != NULL)
> {
> - img_resample_close(v->resize_ctx);
> - safe_free(v->resized_data);
> + sws_freeContext(v->sws_ctx);
> }
>
> if(v->current_frame != NULL)
> @@ -67,8 +65,6 @@
> vo_xshm_prepareFrame(void *ctx, VideoFrame *f, unsigned int out_width, unsigned int out_height)
> {
> vo_xshm_ctx *v = (vo_xshm_ctx *) ctx;
> - AVPicture *yuv_frame;
> - int resized_size;
>
> /* have we created the output frame yet */
> if(v->current_frame == NULL)
> @@ -79,41 +75,34 @@
> vo_xshm_resize_frame(v, out_width, out_height);
>
> /* see if the input size is different than the output size */
> - if(f->width != out_width || f->height != out_height)
> + // if(f->width != out_width || f->height != out_height)
> {
> /* have the resize input or output dimensions changed */
> - if(v->resize_ctx == NULL
> + if(v->sws_ctx == NULL
> || v->resize_in.width != f->width || v->resize_in.height != f->height
> || v->resize_out.width != out_width || v->resize_out.height != out_height)
> {
> /* get rid of any existing resize context */
> - if(v->resize_ctx != NULL)
> - img_resample_close(v->resize_ctx);
> - if((v->resize_ctx = img_resample_init(out_width, out_height, f->width, f->height)) == NULL)
> + if(v->sws_ctx != NULL)
> + sws_freeContext(v->sws_ctx);
> + // if((v->resize_ctx = img_resample_init(out_width, out_height, f->width, f->height)) == NULL)
> + // fatal("Out of memory");
> + if((v->sws_ctx = sws_getContext(f->width, f->height, f->pix_fmt, out_width, out_height, v->out_format, SWS_FAST_BILINEAR, NULL, NULL, NULL)) == NULL)
> fatal("Out of memory");
> +
> + printf("%d, %d, %d, %d, %d %d\n", f->width, f->height, f->pix_fmt, out_width, out_height, v->out_format );
> +
> /* remember the resize input and output dimensions */
> v->resize_in.width = f->width;
> v->resize_in.height = f->height;
> v->resize_out.width = out_width;
> v->resize_out.height = out_height;
> - /* somewhere to store the resized frame */
> - if((resized_size = avpicture_get_size(f->pix_fmt, out_width, out_height)) < 0)
> - fatal("vo_xshm_prepareFrame: invalid frame size");
> - v->resized_data = safe_realloc(v->resized_data, resized_size);
> - avpicture_fill(&v->resized_frame, v->resized_data, f->pix_fmt, out_width, out_height);
> }
> /* resize it */
> - img_resample(v->resize_ctx, &v->resized_frame, &f->frame);
> - yuv_frame = &v->resized_frame;
> + sws_scale(v->sws_ctx, &f->frame.data, f->frame.linesize, 0, f->height, &v->rgb_frame.data, v->rgb_frame.linesize);
> +
> }
> - else
> - {
> - yuv_frame = &f->frame;
> - }
>
> - /* convert the frame to RGB */
> - img_convert(&v->rgb_frame, v->out_format, yuv_frame, f->pix_fmt, out_width, out_height);
> -
> return;
> }
>
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Redbutton-devel mailing list
> Red...@li...
> https://lists.sourceforge.net/lists/listinfo/redbutton-devel
--
Simon Kilvington
|
|
From: Andrea <mar...@go...> - 2008-06-11 19:57:54
|
Simon Kilvington wrote:
> not sure why the audio does not play in News Multiscreen [...] if you
> run rb-browser with the -v flag and send me a log of the output, I'll
> see if I can work out what is going wrong

will try that.

> xvideo output - this has been on my todo list for a long time [...]
> the scene will change a lot less often than the video so there should
> be a lot less RGB->YUV conversion going on

I have understood a little how videoout_xshm works, comparing it to _null. As you mention below, I still have to understand how the overlay of the MHEG scene over the video works.

> so we would basically have a function in videoout_* called something
> like "set_overlay" that gets called each time the MHEG scene changes -
> then each time we show a video frame we composite the current overlay
> on to it

I need to think more about that, but thanks for the hints.

A couple of other points:

1) sometimes rb-download is not fast enough to read from the dvr and it gets an overflow. You might try DMX_SET_BUFFER_SIZE, which (from 2.6.26) works for the dvr too. One could always open the dvr and set a bigger buffer (the default is just less than 2 MB).

2) why not use a video engine like xine? It should be possible to add the overlay more or less in the same way as subtitles are added to the video. What do you think?

Thanks,
Andrea
|
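[Editor's note: the DMX_SET_BUFFER_SIZE call mentioned above could be used roughly as follows. This is an untested sketch; the helper names are invented, and only the DMX_SET_BUFFER_SIZE ioctl (from the Linux DVB demux API) and the 188-byte TS packet size are established facts.]

```c
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/dvb/dmx.h>

/* round down to a whole number of 188-byte TS packets - not required
 * by the API, it just keeps reads packet-aligned */
size_t
dvr_buffer_bytes(size_t megabytes)
{
	return ((megabytes * 1024 * 1024) / 188) * 188;
}

/* enlarge the ring buffer on an already-open dvr fd (the ioctl works
 * on the dvr device from kernel 2.6.26 onwards, as noted above);
 * returns the ioctl result: 0 on success, -1 on error */
int
set_dvr_buffer_size(int dvr_fd, size_t megabytes)
{
	return ioctl(dvr_fd, DMX_SET_BUFFER_SIZE, dvr_buffer_bytes(megabytes));
}
```

The buffer size belongs to the open file descriptor, so it would need to be set on the same fd that rb-download reads from.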
|
From: Andrea <mar...@go...> - 2008-06-12 20:52:11
|
Simon Kilvington wrote:
> not sure why the audio does not play in News Multiscreen [...] if you
> run rb-browser with the -v flag and send me a log of the output, I'll
> see if I can work out what is going wrong

I have run it with -v and I have uploaded the file here:

http://xoomer.alice.it/enodetti/log/rb-br.log

The interesting part is the following:

MHEGStreamPlayer_setAudioStream: tag=53
MHEGStreamPlayer_play: service_id=16768 audio_tag=53 video_tag=0
Unable to open MPEG stream (16768, 53, 0)
ElementaryAction_set_variable
OctetStringVariableClass: ~//s/newsloops.mhg 93; SetVariable
SetVariable: OctetString '~/c/vidloop4'
Processing next asynchronous event
Generated event: ~//s/newsloops.mhg 87; content_available
Processing next asynchronous event
Generated event: ~//a 134; stream_stopped
Processing next asynchronous event
Generated event: ~//a 136; stream_playing
MPEG TS demux: ignoring unexpected PID 404
MPEG TS demux: ignoring unexpected PID 403
MPEG TS demux: ignoring unexpected PID 406
MPEG TS demux: ignoring unexpected PID 405

The News Multiscreen channel broadcasts with the following PIDs:

Video: 202
Audios: 403, 404, 405, 406

Could it be that rb is expecting 53 instead of one of the audios?

Andrea
|
|
From: Simon K. <s.k...@er...> - 2008-06-13 09:33:20
|
Andrea wrote:
> [...log excerpt snipped...]
>
> The News Multiscreen channel broadcasts with the following PIDs:
>
> Video: 202
> Audios: 403, 404, 405, 406
>
> Could it be that rb is expecting 53 instead of one of the audios?

MHEG uses "association tags" which are mapped onto PIDs - can you do:

rb-download 16768

then in another terminal use netcat to connect to port 10101 (this is the port rb-download listens on) and type "assoc", ie:

nc 127.0.0.1 10101
assoc

this should tell us what tags map to what PIDs - here's what I got:

srk@earwig ~ $ nc 127.0.0.1 10101
assoc
200 OK
    Tag  PID  Type
    ===  ===  ====
(audio)    0     0
(video)    0     0
      1  202     6
    101  301    11
    102  302    11
    103  303    11
    104  304    11
    105  305    11
    110  312    11
    115  313    13
      2  403     3
     51  404     6
     52  405     6
     53  406     6
.
it's also a bit odd that you get those messages saying "ignoring unexpected PID 404" etc - that seems to imply that it has opened the MPEG stream, but it contains PIDs that we are not expecting - the PIDs we want are supposed to be filtered out by the hardware.

to help debug where the open stream is failing, you could look at the open_stream function in MHEGBackend.c - everywhere you see a return NULL you could put a printf so we can see what is causing the failure

--
Simon Kilvington
|
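[Editor's note: one low-effort way to instrument those failure paths is a macro that tags each return NULL with its file and line. The macro and the example function below are invented for illustration; nothing like them exists in MHEGBackend.c.]

```c
#include <stdio.h>
#include <stddef.h>

/* drop-in replacement for the bare "return NULL" statements in
 * open_stream, so each failure path identifies itself on stderr -
 * purely a debugging aid */
#define RETURN_NULL(why)						\
	do								\
	{								\
		fprintf(stderr, "open_stream: %s (%s:%d)\n",		\
			(why), __FILE__, __LINE__);			\
		return NULL;						\
	} while(0)

/* illustration of how a failure path would then read */
static void *
open_stream_example(int service_ok)
{
	if(!service_ok)
		RETURN_NULL("service not found");
	return (void *) 1;	/* stand-in for a real stream handle */
}
```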
|
From: Andrea <mar...@go...> - 2008-06-13 15:48:05
|
Simon Kilvington wrote:
> this should tell us what tags map to what PIDs - here's what I got:
> [...assoc table snipped...]

I get the same.

I've tried to debug open_stream in MHEGBackend.c and I've noticed the following:

Usually audio & video go together and open_stream is called to open the dvr for audio and video. In the News Multiscreen page, first the dvr is opened for video (no audio) and afterwards it is opened for audio.

This is *not* possible: the dvr can be opened only once.

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5e85bd057f0cb29881e3d55d29f48bb55bd2f450

The problem is that I cannot figure out how it can work for you. I've noticed that the code follows a different path if it runs on the same machine as the server. Are you running over the network?

There are 2 possibilities in my opinion:

1) treat the dvr as a singleton and share it
2) use the new feature DMX_OUT_TSDEMUX_TAP

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=b01cd937895ad4114a07114cfb6b8b5d2be52241

where the demux outputs TS data (the same as the dvr) and not PES (as it used to do).

Andrea
|
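[Editor's note: option 1 could be as simple as a refcounted wrapper around the dvr file descriptor, so every stream player shares one open. This is a sketch with invented names; a real version would also need a mutex, since the stream players run in separate threads.]

```c
#include <fcntl.h>
#include <unistd.h>

static int dvr_fd = -1;
static unsigned int dvr_refs = 0;

/* open the dvr on first use, otherwise hand back the shared fd;
 * returns -1 if the open fails */
int
dvr_acquire(const char *dvr_dev)
{
	if(dvr_refs == 0 && (dvr_fd = open(dvr_dev, O_RDONLY)) < 0)
		return -1;
	dvr_refs++;
	return dvr_fd;
}

/* really close the device only when the last user releases it */
void
dvr_release(void)
{
	if(dvr_refs > 0 && --dvr_refs == 0)
	{
		close(dvr_fd);
		dvr_fd = -1;
	}
}
```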
|
From: Simon K. <s.k...@er...> - 2008-06-16 09:04:40
|
Andrea wrote:
> [...quote snipped...]
>
> The problem is that I cannot figure out how it can work for you. I've
> noticed that the code follows a different path if it runs on the same
> machine as the server. Are you running over the network?

err, I just tried it again and it doesn't work for me either now!

I can assure you it used to work for me :-) - either the BBC have changed the way their app works, or one of the kernel/lib/redbutton/etc updates I have done has broken it at some point in the past.

anyway, your singleton solution sounds like the simplest - I will make MHEGStreamPlayer a singleton. The UK MHEG spec says you only need to support a single video/audio decoder. At the moment each separate MHEG Video and Audio object gets its own stream player, and they all open the dvr device - which works fine when there is only one, but not when the audio is a separate object to the video object.

another issue I need to resolve: since I upgraded ffmpeg and started using avcodec_decode_audio2, my sound is stuttering and I get errors about "incorrect frame size" on the console - this comes from inside the avcodec_decode_audio2 call, so I need to sort that out too.

> There are 2 possibilities in my opinion:
>
> 1) treat the dvr as a singleton and share it
> 2) use the new feature DMX_OUT_TSDEMUX_TAP
>
> where the demux outputs TS data (the same as the dvr) and not PES (as
> it used to do).

thanks for your help and comments

--
Simon Kilvington
|
|
From: Andrea <mar...@go...> - 2008-06-16 12:03:54
|
Simon Kilvington wrote:
> another issue I need to resolve is since I upgraded ffmpeg and started
> using avcodec_decode_audio2 - my sound is stuttering and I get errors
> about "incorrect frame size" on the console - this comes from inside
> the avcodec_decode_audio2 call, so I need to sort that out too
>
About that, I've *always* had it, with or without avcodec_decode_audio2.
I've tried to understand what it means and this is what I think.
From MHEGStreamPlayer.c:433:
while(size > 0)
{
audio_frame = new_AudioFrameListItem();
af = &audio_frame->item;
used = avcodec_decode_audio2(audio_codec_ctx, af->data, &af->size, data, size);
data += used;
size -= used;
if(af->size > 0)
{
....
}
else
{
....
}
}
I think "data" contains more than the XXX bytes the decoder expects, so it prints a log message
about the incorrect frame size. I think "data" contains data for more than one frame, and the while
loop decodes one frame at a time, until they are exhausted.
This comes from mpegaudiodec.c:2393 in ffmpeg.
if(s->frame_size<=0 || s->frame_size > buf_size){
av_log(avctx, AV_LOG_ERROR, "incomplete frame\n");
return -1;
}else if(s->frame_size < buf_size){
av_log(avctx, AV_LOG_ERROR, "incorrect frame size\n");
buf_size= s->frame_size;
}
In the debugger I've found that ffmpeg expects 768 bytes in a single frame, but "size" here can be
up to ~5000.
I think "incorrect frame size" is more of a warning, while "incomplete frame" would be an error (
:-) maybe).
Reading the documentation of ffmpeg, I cannot find how to query the decoder for the size it expects.
What I have done is to silence it in ffmpeg:
av_log_set_level(AV_LOG_QUIET);
at the end of MHEGDisplay_init(MHEGDisplay *d, bool fullscreen, char *keymap) in MHEGDisplay.c.
But that could hide real errors...
Hope it helps.
Andrea
|
|
From: Andrea <mar...@go...> - 2008-06-13 15:55:03
|
Simon Kilvington wrote:
> xvideo output - this has been on my todo list for a long time - if you
> want to implement it I would be very pleased ;-) you should be able to
> base the code on videoout_xshm [...]
> so we would basically have a function in videoout_* called something
> like "set_overlay" that gets called each time the MHEG scene changes -
> then each time we show a video frame we composite the current overlay
> on to it

I started, but could not find any documentation about xvideo... just 2 examples with little comment. We could use SDL instead, for which there is plenty of documentation.

I have not understood one thing: who displays the video and who displays the MHEG?

MHEGDisplay.c seems to display the MHEG; videoout_* displays the video stream. How do they interact? I mean, do they paint on the same canvas? Is the MHEG scene permanent until it is changed? To implement "set_overlay" I guess the MHEG side should create a bitmap which is then added to the video? Every frame?

Basically: I have no idea how the overlay works... :-) But I am keen to learn.

Andrea
|
|
From: Simon K. <s.k...@er...> - 2008-06-13 16:20:03
|
Andrea wrote:
> [...questions snipped...]
>
> Basically: I have no idea how the overlay works... :-) But I am keen
> to learn.

how it works is like this:

any MHEG object that wants to draw something on the screen calls the drawing routines in MHEGDisplay.c - once they have finished a block of drawing they call MHEGDisplay_useOverlay - this copies the new overlay into the used_overlay variable

when the screen needs to be refreshed, eg you drag something over the window or a new video frame has been drawn, then MHEGDisplay_refresh is called. This takes the video frame as a background - the image is stored in the contents variable - and uses XRender to composite the used_overlay data onto the video frame. This updated video frame (now including the overlay) is then copied onto the screen using XCopyArea.

to display video, MHEGStreamPlayer has a thread that decodes the MPEG stream; this adds YUV format video frames to a queue. Another thread in MHEGStreamPlayer takes the YUV frames off the queue, scales them, converts them to RGB, then waits until it is time to display the next frame - at this point it calls MHEGDisplay_refresh, which combines the frame with the overlay and puts it on the screen.

the video thread in MHEGStreamPlayer uses functions in videoout_* to scale the frame and convert it to RGB - the actual method used depends on which video output method is chosen.

it would be better to encapsulate the compositing in videoout_* and also the actual putting of the data on the screen - ie everything that MHEGDisplay_refresh does. The reason for this is that if the video output method can cope with YUV frames, then we can avoid having to convert every frame to RGB.

hope this helps!

--
Simon Kilvington
|
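[Editor's note: if the compositing does move into videoout_* so frames can stay YUV, the RGB overlay would need converting to YUV once per scene change rather than converting every video frame to RGB. A common integer approximation of the limited-range BT.601 conversion is sketched below; this is illustrative only and not code from redbutton. Note that the right-shift of a possibly negative intermediate relies on arithmetic shift, which common compilers provide.]

```c
#include <stdint.h>

/* limited-range BT.601 RGB -> YUV conversion (integer approximation):
 * Y in [16,235], U and V centred on 128 - the sort of one-off
 * conversion the MHEG overlay would need each time the scene changes */
void
rgb_to_yuv(int r, int g, int b, uint8_t *y, uint8_t *u, uint8_t *v)
{
	*y = (uint8_t) ((( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16);
	*u = (uint8_t) (((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128);
	*v = (uint8_t) (((112 * r -  94 * g -  18 * b + 128) >> 8) + 128);
}
```

Run once over the overlay bitmap on each scene change, this keeps the per-frame path free of colour-space conversion.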
> > Andrea > > how it works is like this - any MHEG object that wants to draw something on the screen calls the drawing routines in MHEGDisplay.c - once they have finished a block of drawing they call MHEGDisplay_useOverlay - this copies the new overlay into the used_overlay variable when the screen needs to be refreshed, eg you drag something over the window or a new video frame has been drawn, then MHEGDisplay_refresh is called. This takes the video frame as a background - the image is stored in the contents variable - and use XRender to composite the used_overlay data onto the video frame. This updated video frame (now including the overlay) is then copied onto the screen using XCopyArea so to display video, MHEGStreamPlayer has a thread that decodes the MPEG stream, this adds YUV format video frames to a queue - another thread in MHEGStreamPlayer takes the YUV frames off the queue, scales them, converts them to RGB, then waits until it is time to display the next frame - at this point it calls MHEGDisplay_refresh which combines the frame with the overlay and puts it on the screen the video thread in MHEGStreamPlayer uses functions in videoout_* to scale the frame and convert it to RGB - the actual method used depends on which video output method is chosen it would be better to encapsulate the compositing in videoout_* and also the actual putting the data on the screen - ie everything that MHEGDisplay_refresh does - the reason for this is that if the video output method can cope with YUV frames, then we can avoid having to convert every frame to RGB hope this helps! - -- Simon Kilvington -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.4 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFIUp4fmt9ZifioJSwRAnLqAJwIQLSM7zyprEM+gb6xd+2eoYb6DwCffbMU eYWAYNJqsFepZbTVSeDKPSc= =/pMD -----END PGP SIGNATURE----- |