From: Martin S. <so...@in...> - 2004-01-29 17:24:54
|
Hi all! In the last days, I'm once again getting up to speed in DVD development for GStreamer. Let me try to summarize the current state of things: The (Recent) Past ================= I have a GStreamer based DVD player at home, with menu support and even remote control. I use it regularly now to play movies in my home theater, through a DXR3 card. It is pretty robust, and is capable of good audio/video sync without using a lot of processor power, so it kind of works for my purposes. The whole program, however, is an ugly hack. I'm using a heavily hacked version of dvdnavsrc which uses signals for reporting such things as channel and subpicture palette changes. This doesn't really work, in fact. The signals are fired before the corresponding buffers have had time to reach the MPEG demuxer, and the application ends connecting to a new pad in mpegdemux before all buffers are through. The result is menus that randomly display wrongly and all sorts of other weird stuff. An additional problem is that, depending on the type of audio (AC3, PCM, DTS, mpeg) that is currently playing, you need to activate different decoder elements. As far as I've seen, the current schedulers don't like having around unlinked or partially linked elements, so you have to do all sort of funny tricks to keep the pipeline working every time audio changes. Due to the general messiness, I'm not that motivated to make a serious release of this player. The DXR3 elements are already in GStreamer, and the rest has to be rewritten. Anyhow, if anyone has interest, I'll be glad to share the code. Just let me know and I'll send you the files. The (Near) Future ================= In the last weeks, I've been working on a new approach, that, first, should do the job, and, second, seems to be compatible with what people have been discussing here recently. Basically, the idea is to have a pipeline that doesn't need reconnecting pads at all. That means that you have to modify some of the existing elements, and add a few new ones that act as intelligent "pipe joints". The pipeline would look as follows: 1. A dvdnavsrc element that writes everything to its single source pad. Menu information, like color palettes and button highlight tables would have to be sent synchronously with the MPEG data. There are two options here: - Use events. I know many people don't like the idea of taking the event concept to this point, but it would work, and would use the current infrastructure. - Implement Benjamin's idea of having many "subchannels" in a stream (i. e. buffers have an integer whose value tells the type of content in them). This looks cleaner, but I don't know how big the implementation effort is. 2. A demuxer. Either we change the existing one, or make a new "dvddemux" or whatever. It would need (at least) three output pads: video, audio and subpicture, and would automatically switch streams based on the incoming events/packets. Changing between different types of audio, would imply caps renegotiation on the audio pad. 3. Two separate threads with appropriately long queues would connect to each one of the three demuxer pads. One thread handles audio, the other handles video and subpicture. 4. The audio thread needs to select different decoders based on the capabilities currently set for the demuxer audio pad. An element is necessary, with one sink and many sources, that routes material to the appropriate source based on the caps set for the sink. I already have a working implementation of such a thing. I'd gladly upload it to GStreamer's CVS if I were given a change ;-) 5. The video/subpicture thread needs to handle an important portion of the user interaction. A good software subpicture decoder is necessary, but I don't really know the status of the existing one. Personally, I would rather concentrate on the DXR3 stuff, and collaborate with others (Jan?) to improve the software implementation in a compatible way, so that we can simply switch components to go from software to hardware decoding. My plan now is to start working on the demuxer. I know the code relatively well, so it shouldn't take that long. Please tell me if someone else is working on that part, or if there are considerations that I must take into account. Of course, comments, suggestions and constructive criticism will be greatly appreciated. Cheers, M. S. -- Martin Soto <so...@in...> |
From: <in...@pu...> - 2004-01-31 00:11:49
|
random comments... Quoting Martin Soto <so...@in...>: > I have a GStreamer based DVD player at home, with menu support and even > remote control. David and me came up half-jokingly with the idea to write a "lircnavfilter" that could be inserted into pipelines and would be identity except for sending nav events downstream. You could even put that into non-remote enabled apps and have it work out of the box. Your idea on that? > Due to the general messiness, I'm not that motivated to make a serious > release of this player. The DXR3 elements are already in GStreamer, and > the rest has to be rewritten. Anyhow, if anyone has interest, I'll be > glad to share the code. Just let me know and I'll send you the files. > Stuff like this belongs into gst-sandbox. Judging by some of the code that is already in there, yours can't be worse. ;) > 1. A dvdnavsrc element that writes everything to its single source pad. > Menu information, like color palettes and button highlight tables would > have to be sent synchronously with the MPEG data. There are two options > here: > > - Use events. I know many people don't like the idea of taking the > event concept to this point, but it would work, and would use the > current infrastructure. > > - Implement Benjamin's idea of having many "subchannels" in a stream > (i. e. buffers have an integer whose value tells the type of content in > them). This looks cleaner, but I don't know how big the implementation > effort is. > You are going to use a custom caps for this, like application/x-dvd or application/x-gst-dvd? All that stuff is definitely not mpeg, so I'd dislike it being labeled as such. I'm not sure what the right way to differentiate buffers would be. Suggestions like using the first byte in a buffer to describe the type of buffer (which would work with current infrastructure) seem hacky to me, too. > 2. A demuxer. Either we change the existing one, or make a new > "dvddemux" or whatever. It would need (at least) three output pads: > video, audio and subpicture, and would automatically switch streams > based on the incoming events/packets. Changing between different types > of audio, would imply caps renegotiation on the audio pad. > If that works, I'd suggest subclassing mpegdemux. But I have no clue how that element works. > 4. The audio thread needs to select different decoders based on the > capabilities currently set for the demuxer audio pad. An element is > necessary, with one sink and many sources, that routes material to the > appropriate source based on the caps set for the sink. I already have a > working implementation of such a thing. I'd gladly upload it to > GStreamer's CVS if I were given a change ;-) > I'd prefer someone fixing the schedulers as the new autoplugger will definitely have to support this. I guess I know who'll do that in the end. (and yes, I'll write a new one...) Oh, and bug Thomas to create you a CVS account if you don't have one. I know that he knows how to create them. (It worked for Scott ;)) Benjamin |
From: Martin S. <so...@in...> - 2004-01-30 13:18:14
|
Hi Benjamin! On Thu, 2004-01-29 at 19:10, in...@pu... wrote: > Quoting Martin Soto <so...@in...>: > > I have a GStreamer based DVD player at home, with menu support and even > > remote control. > David and me came up half-jokingly with the idea to write a "lircnavfilter" > that could be inserted into pipelines and would be identity except for sending > nav events downstream. > You could even put that into non-remote enabled apps and have it work out of > the box. > Your idea on that? Sounds great. Since my hacked dvdnavsrc doesn't do nav events, I'm using the application itself for the remote stuff. But a lirc nav element would allow for nicer integration. I may try my hand at that, provided no one does it before I get there (I'm giving priority to the other components of course). > > Due to the general messiness, I'm not that motivated to make a serious > > release of this player. The DXR3 elements are already in GStreamer, and > > the rest has to be rewritten. Anyhow, if anyone has interest, I'll be > > glad to share the code. Just let me know and I'll send you the files. > > > Stuff like this belongs into gst-sandbox. Judging by some of the code that is > already in there, yours can't be worse. ;) Hehe. I'll think of uploading portions of that as soon as I (if I ever ;-) get CVS access. > You are going to use a custom caps for this, like application/x-dvd or > application/x-gst-dvd? All that stuff is definitely not mpeg, so I'd dislike > it being labeled as such. Definitely. I don't either like calling that stuff MPEG. > > 2. A demuxer. Either we change the existing one, or make a new ... > If that works, I'd suggest subclassing mpegdemux. But I have no clue how that > element works. Yeah. I was checking the code once again yesterday, and my idea is to clean up the mpegdemux somewhat, and then derive a dvddemux from it. I plan to try and remove all DVD specific bits from mpegdemux and move them to dvddemux. For example, mpegdemux would deliver raw packets for the private streams, whereas dvddemux would parse them and deliver the right sound, subpicture and control data channels. Opinions on that? > > 4. The audio thread needs to select different decoders based on the > > capabilities currently set for the demuxer audio pad. An element is > > necessary, with one sink and many sources, that routes material to the > > appropriate source based on the caps set for the sink. I already have a > > working implementation of such a thing. I'd gladly upload it to > > GStreamer's CVS if I were given a change ;-) > > > I'd prefer someone fixing the schedulers as the new autoplugger will > definitely have to support this. > I guess I know who'll do that in the end. (and yes, I'll write a new one...) Well, the schedulers have to be fixed. By the way, it would be nice to discuss the scheduling algorithms here "in the open", before implementing them. A big problem I see is that it is not really clear what basic behavior should be expected from a scheduler. IMO, schedulers could differ in things like performance for different applications, but their behavior in terms of how buffers are routed and processed should always be predictable. This doesn't seem to be the case now. On the other hand, the problem I'm tackling with my new element is not just the deficiencies in the current schedulers. The fact is that you have pads that change their capabilities on the fly (for example from 48Khz PCM sound to AC3 when going from a menu to the main film) and someone has to react and route the buffers to the corresponding decoders. That's what my new element does. There's probably a way (I don't have the GStreamer sources available here) to do that from the application side, but the element that selects its output pad based on the caps of the attached elements looks like a cleaner solution to me. Am I missing something? > Oh, and bug Thomas to create you a CVS account if you don't have one. I know > that he knows how to create them. (It worked for Scott ;)) Thomas, are you there? Actually I have some patches for the DXR3 elements (real sync based on the new clock system) that I'd like to commit. Of course I can create a patch and put in bugzilla, but I'm lazy... > Benjamin Thanks for your comments, M. S. -- Martin Soto <so...@in...> Universität Kaiserslautern |
From: Benjamin O. <in...@pu...> - 2004-01-30 13:41:27
|
On Fri, 30 Jan 2004, Martin Soto wrote: > Well, the schedulers have to be fixed. By the way, it would be nice to > discuss the scheduling algorithms here "in the open", before > implementing them. A big problem I see is that it is not really clear > what basic behavior should be expected from a scheduler. IMO, > schedulers could differ in things like performance for different > applications, but their behavior in terms of how buffers are routed and > processed should always be predictable. This doesn't seem to be the > case now. > Apart from the current schedulers being very buggy wrt tearing down pipelines or parts thereof, there's also the problem of cothread switching. To illustrate the problem: Imagine loopsrc ! loopsink, both loopbased. This means both require their own cothread. Opt however doesn't use cothreads. It runs the loopsink. If the loopsink requests a buffer, it runs loopsrc and puts all the buffers loopsrc throws out into some sort of queue (opt names it "bufpen"). After that it continues with loopsink. When loopsink is done, the iteration is finished. (This is a simplified view of course, but it illustrates the problems I'm coming to now.) The problems with this are: 1) You cannot assume that a buffer is processed after you've send it out before you send the next one. Some plugins rely on that. 2) There might be unhandled buffers lying around in bufpens after the iteration returns. THis is bad when you start unlinking. 3) If plugins have long loopfunctions, bufpens get huge. I believe asfdemux is an offender there (I think it never end the loopfunction until EOS). This is exceptionally bad when you do clocking and easily fill up GstQueues with that crap. This is easy to see with our example if you assume loopsrc always pushes two buffers and loopsink always requests three during one run of the loopfunction.Thern it works like this: 1) loopsink loopfunction starts and needs buffers. 2) loopsrc puts two buffers in bufpen. (problem 1 for second buffer) 3) loopsink requests more buffers 4) loopsrc puts two buffers in bufpen. (problem 1 for second buffer) 5) loopsink grabs one buffer out of bufpen and finishes loop. 6) iteration done (problem 2) The only way I know to get around those limitations (and I'd like to get around them) is to use cothreads again. That would require someone with enough knowledge about makecontext and friends to write a portable cothreads implementation though and I believe that's hard. > On the other hand, the problem I'm tackling with my new element is not > just the deficiencies in the current schedulers. The fact is that you > have pads that change their capabilities on the fly (for example from > 48Khz PCM sound to AC3 when going from a menu to the main film) and > someone has to react and route the buffers to the corresponding > decoders. That's what my new element does. There's probably a way (I > don't have the GStreamer sources available here) to do that from the > application side, but the element that selects its output pad based on > the caps of the attached elements looks like a cleaner solution to me. > Am I missing something? > In theory this is a job an autoplugger should do. And my intention is to makie the new one do that. Well, what you did is kind of a specialized autoplugger already. :) Benjamin |
From: Martin S. <so...@in...> - 2004-01-31 18:45:08
|
On Fri, 2004-01-30 at 14:40, Benjamin Otte wrote: > On Fri, 30 Jan 2004, Martin Soto wrote: > > Well, the schedulers have to be fixed. By the way, it would be nice to > > discuss the scheduling algorithms here "in the open", before > > implementing them. ... > Apart from the current schedulers being very buggy wrt tearing down > pipelines or parts thereof, there's also the problem of cothread > switching. To illustrate the problem: ... > The only way I know to get around those limitations (and I'd like to get > around them) is to use cothreads again. That would require someone with > enough knowledge about makecontext and friends to write a portable > cothreads implementation though and I believe that's hard. I agree. I cannot really see how it would be possible to have loop based elements without (co)threads. Since loop functions can keep "rolling" so long as they need before returning, there's no way you can guarantee a fair behavior of the pipeline without having many threads of control. You could, of course, require loop functions to do only small work batches before returning, but this would defeat the purpose of having them at all. Instead of living with such a restriction, you'd rather get rid of all loops and implement everything chain based, which is always possible, but often very hard. Now, I'm not really aware of the portability problems with the cothreads implementation. Isn't it possible to use Pth, for example? > > On the other hand, the problem I'm tackling with my new element is not > > just the deficiencies in the current schedulers. The fact is that you > > have pads that change their capabilities on the fly (for example from > > 48Khz PCM sound to AC3 when going from a menu to the main film) and > > someone has to react and route the buffers to the corresponding > > decoders. That's what my new element does. There's probably a way (I > > don't have the GStreamer sources available here) to do that from the > > application side, but the element that selects its output pad based on > > the caps of the attached elements looks like a cleaner solution to me. > > Am I missing something? > > > In theory this is a job an autoplugger should do. And my intention is > to makie the new one do that. Well, what you did is kind of a specialized > autoplugger already. :) Well, my element is not really an autoplugger since it doesn't automatically find the decoders. The element has one input pad, and many possible outputs, that are actually request pads. When caps are negotiated for the input pad, it just goes through all connected output pads and tries to negotiate based on the caps for the input. The first output pad that accepts the input caps, will be taken as active output. Thereafter, all input buffers will be forwarded to that pad. The element is (or should be ;-) intelligent enough to handle caps being renegotiated on any pad, and having pads connected and disconnected on the fly. Intended usage is that you connect the input to the audio pad of the DVD demux, and connect decoders for the different types of DVD sound to the outputs. If the DVD sound type changes, the demux will renegotiate the caps on the sound pad, and my element will automagically select the right decoder. I suppose this is useful functionality for an autoplugger, but the logic necessary to identify the right decoder elements, instantiate and plug them, is still beyond me ;-) Cheers, M. S. -- Martin Soto <so...@in...> |
From: Ronald S. B. <rb...@ro...> - 2004-01-31 00:29:00
|
Hi Martin, Cool to see you'll be back actively working on all this! On Wed, 2004-01-28 at 22:58, Martin Soto wrote: > 1. A dvdnavsrc element that writes everything to its single source pad. > Menu information, like color palettes and button highlight tables would > have to be sent synchronously with the MPEG data. There are two options > here: > > - Use events. I know many people don't like the idea of taking the > event concept to this point, but it would work, and would use the > current infrastructure. > > - Implement Benjamin's idea of having many "subchannels" in a stream > (i. e. buffers have an integer whose value tells the type of content in > them). This looks cleaner, but I don't know how big the implementation > effort is. I like the events. dvdnavsrc and mpegdemux (or an element derived from mpegdemux: dvddemux) could share a private event that's sent between them. dvddemux/mpegdemux would then use that to set a new caps with corresponding colour tables on whatever element needs those, or forward the event to other elements needing it (then, you'd want to make it a public event...). Or decoder, mpeg2subt, I don't really know that part of "where are colourtables needed and where not"... I don't really like the subchannels idea, I feel it gets far too complicated. I'd prefer to do this simple. > 2. A demuxer. Either we change the existing one, or make a new > "dvddemux" or whatever. It would need (at least) three output pads: > video, audio and subpicture, and would automatically switch streams > based on the incoming events/packets. Changing between different types > of audio, would imply caps renegotiation on the audio pad. Please, use mpegdemux. If you need specific DVD hacks, go ahead and make a dvddemux, but make sure it is properly derived from mpegdemux so you don't do any code duplication. > My plan now is to start working on the demuxer. I know the code > relatively well, so it shouldn't take that long. Please tell me if > someone else is working on that part, or if there are considerations > that I must take into account. I'm working on MPEG a lot, too. There's one seeking patch from me in CVS, it need some cleaning up before applying. I have some random small bits of uncommitted code here at work. Ronald -- Ronald Bultje <rb...@ro...> Linux Video/Multimedia developer |
From: Martin S. <so...@in...> - 2004-01-30 13:35:00
|
Hello Ronald! On Fri, 2004-01-30 at 10:00, Ronald S. Bultje wrote: > Cool to see you'll be back actively working on all this! Yeah, I'm glad to have some time for this again. > On Wed, 2004-01-28 at 22:58, Martin Soto wrote: > > 1. A dvdnavsrc element that writes everything to its single source pad. ... > > - Use events. I know many people don't like the idea of taking the ... > > - Implement Benjamin's idea of having many "subchannels" in a stream ... > I like the events. dvdnavsrc and mpegdemux (or an element derived from > mpegdemux: dvddemux) could share a private event that's sent between > them. dvddemux/mpegdemux would then use that to set a new caps with > corresponding colour tables on whatever element needs those, or forward > the event to other elements needing it (then, you'd want to make it a > public event...). Or decoder, mpeg2subt, I don't really know that part > of "where are colourtables needed and where not"... > > I don't really like the subchannels idea, I feel it gets far too > complicated. I'd prefer to do this simple. The channels wouldn't necessary be that complicated, but I agree that the events are enough in this case, since we are sending small bits of information. The question is, how do I go implementing this stuff? Is there any example of private events being used in gst-plugins? Additionally, in order to communicate with the actual subpicture decoder, I'd rather forward the events, since the capabilities don't actually fit the bill (color tables keep changing in DVDs, specially when moving out from and into menus). Should I simply add some new event types to GstEvent, or define a new class (GstDVDEvent) somewhere, for interested elements to share? It is important that we define and document this protocol, because many elements are going to share it. By the way, Jan, it'd be great to hear from you regarding this ideas. > > 2. A demuxer. Either we change the existing one, or make a new ... > Please, use mpegdemux. If you need specific DVD hacks, go ahead and make > a dvddemux, but make sure it is properly derived from mpegdemux so you > don't do any code duplication. Agreed. As already stated in my answer to Benjamin, my plan is to clean up mpegdemux somewhat, and derive a dvddemux from it. > Ronald Thanks for the feedback. Cheers, M. S. -- Martin Soto <so...@in...> Universität Kaiserslautern |
From: Benjamin O. <in...@pu...> - 2004-01-30 18:23:09
|
On Fri, 30 Jan 2004, Martin Soto wrote: > The question is, how do I go implementing this stuff? Is there any > example of private events being used in gst-plugins? Additionally, in > order to communicate with the actual subpicture decoder, I'd rather > forward the events, since the capabilities don't actually fit the bill > (color tables keep changing in DVDs, specially when moving out from and > into menus). Should I simply add some new event types to GstEvent, or > define a new class (GstDVDEvent) somewhere, for interested elements to > share? > GStreamer 0.10 will put all events into a GstStructure. Every new event we define will already use this format. The event will be identified by the name of the structure. So you're already free to define new events by inventing a name for the structure (probably "dvd-colortable") and the values and their names used in that structure. It's kinda like GstCaps, except that you only use one structure and aren't limited to specific GTypes in it. (Though I just noticed some nice API for that is still missing.) Benjamin |
From: Martin S. <so...@in...> - 2004-01-31 17:06:21
|
On Fri, 2004-01-30 at 14:49, Benjamin Otte wrote: > On Fri, 30 Jan 2004, Martin Soto wrote: > > The question is, how do I go implementing this stuff? Is there any ... > GStreamer 0.10 will put all events into a GstStructure. Every new event we > define will already use this format. The event will be identified by the > name of the structure. So you're already free to define new events by > inventing a name for the structure (probably "dvd-colortable") and the > values and their names used in that structure. > It's kinda like GstCaps, except that you only use one structure and aren't > limited to specific GTypes in it. > > (Though I just noticed some nice API for that is still missing.) Sounds good, when can I start using this? ;-) M. S. -- Martin Soto <so...@in...> |
From: Jan S. <ja...@sl...> - 2004-01-31 02:18:00
|
On Sat, 2004-01-31 at 00:38, Martin Soto wrote: Hi Martin, > The channels wouldn't necessary be that complicated, but I agree that > the events are enough in this case, since we are sending small bits of > information. I agree here - I think it would make things too complicated to define a new stream 'video/mpeg-with-dvd-info' without having to more or less reinvent mpegdemux to cope. I prefer the events. I think adding a new enum 'GST_EVENT_PRIVATE' to gstevent.h is sufficient. The only drawback I can see is not very important to me: I won't be able to "tee ! filesink" the information coming out of dvdnavsrc and play it back with the same subtitle display later because it won't preserve colour tables and such. > The question is, how do I go implementing this stuff? Is there any > example of private events being used in gst-plugins? Additionally, in > order to communicate with the actual subpicture decoder, I'd rather > forward the events, since the capabilities don't actually fit the bill > (color tables keep changing in DVDs, specially when moving out from and > into menus). Should I simply add some new event types to GstEvent, or > define a new class (GstDVDEvent) somewhere, for interested elements to > share? This is how I've implemented the software DVD player that I have so far - you can check out the patch I attached to my original mail a week or so ago. > It is important that we define and document this protocol, because many > elements are going to share it. By the way, Jan, it'd be great to hear > from you regarding this ideas. Agreed - I haven't written down the events yet. I've been creating events as I needed them, because I'm learning the details of DVD playback as I go. What I've done so far (a little hacky, wanted to get it work, needs revising, etc) but not committed is: * dvdnavsrc generates events of type GST_EVENT_NAVIGATION, which have an attached GstStructure and sends them downstream. * mpegdemux forwards those out the video and subtitle pads. Thus far mpegdemux has needed very little changing to support the playback. * My rewritten mpeg2subt takes video frames and applies the menu buttons/subtitles to them according to the events it has received. The events I'm using so far: still_frame - contains a duration (in seconds) that the frame just sent needs to be displayed for spu-highlight - contains the palette and crop area for the current button. spu-clut - contains the 16 uint32 variables defining the new subtitle colour table. > Agreed. As already stated in my answer to Benjamin, my plan is to clean > up mpegdemux somewhat, and derive a dvddemux from it. This sounds good. I'd like to take a look at the changes that you've made to dvdnavsrc and compare with what I've been doing. I'll write more later - I have to run out the door to catch a train. Cheers, Jan. -- Jan Schmidt th...@ma... Have you been half-asleep? Have you heard voices? I've heard them calling my name... -Kermit the Frog (Rainbow Connection) |
From: Ronald S. B. <rb...@ro...> - 2004-02-02 20:30:35
|
On Sat, 2004-01-31 at 03:17, Jan Schmidt wrote: > The events I'm using so far: > still_frame - contains a duration (in seconds) that the frame just sent > needs to be displayed for Why don't you use GST_BUFFER_DURATION (buf) for that? Ronald -- Ronald Bultje <rb...@ro...> Linux Video/Multimedia developer |
From: Jan S. <th...@ma...> - 2004-02-02 23:23:22
|
<quote who="Ronald S. Bultje"> > On Sat, 2004-01-31 at 03:17, Jan Schmidt wrote: > > The events I'm using so far: > > still_frame - contains a duration (in seconds) that the frame just sent > > needs to be displayed for > > Why don't you use GST_BUFFER_DURATION (buf) for that? > The pipeline is dvdnavsrc -> mpegdemux -> mpeg2dec dvdnavsrc knows that the data is just sent out represents a still_frame, but it is mpeg2dec that sets the frame DURATION when it decodes the frame, so we still need to send the info down to there somehow. J. -- Jan Schmidt th...@ma... <stibbons> Yeah. The whole climax thing would make much more sense if I'd paid attention. |