From: Benjamin O. <in...@pu...> - 2004-01-13 06:04:48
Attachments:
gstreamer.clocking.diff.bz2
gst-plugins.clocking.diff.bz2
|
As everyone doing video should have noticed, clocking was severely in need of fixage. With the attached patches (consider them experimental please) I get perfect A/V sync for Quicktime and AVI when using alsasink and xvimagesink in gst-launch. It deprecates a lot of old API and introduces the concept of element time. Even though it's conceptually different, it is API and ABI compatible with old code. Is that change ok or not? If it is I'll polish it up and commit. Benjamin |
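"Element time" here is the idea that each element maps a monotonically running clock onto its own stream position, instead of the clock itself being reset; GStreamer later formalized this as running time = clock time minus base time. A minimal Python model of that idea (illustrative names, not the 0.8 API):

```python
# Minimal model of "element time": element_time = clock_time - base_time.
# Pausing freezes element time; resuming shifts base_time so element time
# continues where it stopped, while the clock itself never resets.

class Element:
    def __init__(self):
        self.base_time = 0.0    # clock time at which element time 0 started
        self.paused_at = None   # element time frozen while paused

    def set_playing(self, clock_time):
        # Resume: shift base_time so element time continues where it stopped.
        elapsed = self.paused_at if self.paused_at is not None else 0.0
        self.base_time = clock_time - elapsed
        self.paused_at = None

    def set_paused(self, clock_time):
        self.paused_at = clock_time - self.base_time

    def element_time(self, clock_time):
        if self.paused_at is not None:
            return self.paused_at
        return clock_time - self.base_time

e = Element()
e.set_playing(clock_time=100.0)  # clock is already at 100; element time starts at 0
assert e.element_time(103.0) == 3.0
e.set_paused(103.0)              # pause for 5 seconds of clock time
assert e.element_time(108.0) == 3.0
e.set_playing(108.0)             # element time resumes at 3, not 8
assert e.element_time(110.0) == 5.0
```

The point of the model: two pipelines (or elements) can share one clock safely, because pausing one only changes its own base, never the clock.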
From: Andy W. <wi...@po...> - 2004-02-05 13:19:15
|
Hi, It seems gst_clock_id_wait_async was deprecated somewhere along the line. I used it in soundscrape to tell me when a certain time had passed, so if for instance I was rendering a stream to disk I could know when to stop. But looking closer, it seems we're going into 0.8 with a really confused clocking situation. gst_element_wait uses gst_clock_id_*, but for some reason has to put its own prototypes into gstelement.h. This new function uses deprecated old ones. Useful functions have been deprecated with no replacement. The only docs we have are a tentative document in random/ with lots of FIXMEs and unresolved questions. I don't do video/audio sync or anything, but I'm sure that situation is fine as otherwise the whole world would be bitching. But I guess my question is, what are we doing headed towards 0.8 with deprecated API? BTW: I consider either the deprecation of id_wait_async or the lack of gst_clock_wait_async to be a 0.8 blocker. Regards, -- Andy Wingo <wi...@po...> |
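The pattern Andy describes with gst_clock_id_wait_async (ask the clock to call you back once a target time has passed, e.g. to know when to stop rendering a stream to disk) can be modeled with a plain priority queue. This is a toy sketch in Python, not GStreamer code; the names only echo the API:

```python
# Toy model of asynchronous clock notification: register a callback for a
# target time, and the clock delivers it once that time has been passed.
import heapq
import itertools

class Clock:
    def __init__(self):
        self.now = 0.0
        self._pending = []            # heap of (target, seq, callback)
        self._seq = itertools.count() # tie-breaker so callbacks never compare

    def id_wait_async(self, target, callback):
        heapq.heappush(self._pending, (target, next(self._seq), callback))

    def advance(self, dt):
        self.now += dt
        while self._pending and self._pending[0][0] <= self.now:
            target, _, cb = heapq.heappop(self._pending)
            cb(target)

fired = []
clock = Clock()
# "Tell me when 10 units of stream time have passed, so I can stop."
clock.id_wait_async(10.0, lambda t: fired.append(t))
clock.advance(6.0)
assert fired == []        # not yet
clock.advance(5.0)
assert fired == [10.0]    # delivered once the clock passed the target
```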
From: Thomas V. S. <th...@ap...> - 2004-02-05 15:24:24
|
On Mon, 2004-02-02 at 22:37, Andy Wingo wrote: > Hi, > It seems gst_clock_id_wait_async was deprecated somewhere along the > line. I used it in soundscrape to tell me when a certain time had > passed, so if for instance I was rendering a stream to disk I could know > when to stop. > But looking closer, it seems we're going into 0.8 with a really confused > clocking situation. gst_element_wait uses gst_clock_id_*, but for some > reason has to put its own prototypes into gstelement.h. This new > function uses deprecated old ones. Useful functions have been deprecated > with no replacement. The only docs we have are a tentative document in > random/ with lots of FIXMEs and unresolved questions. > I don't do video/audio sync or anything, but I'm sure that situation is > fine as otherwise the whole world would be bitching. But I guess my > question is, what are we doing headed towards 0.8 with deprecated API? I have the same mixed feelings. I hope that in the future we can stop deprecating/tearing apart old stuff without a good explanation of what was wrong/unfixable with the old design, and a good design for the future so everyone can help in validating and fixing. Some important things that could be clocking-related (but nobody seems to be able to tell) are still not fixed (mpeg playback, next/next in the player, ...). We should decide to either revert to the old situation (the enemy we know) or make sure these bugs get fixed ASAP and we have a decent design for it that more than one person can understand. > BTW: I consider either the deprecation of id_wait_async or the lack of > gst_clock_wait_async to be a 0.8 blocker. Agreed. For the record, 0.8 is slated for beginning of March. Thomas Dave/Dina : future TV today ! - http://www.davedina.org/ <-*- thomas (dot) apestaart (dot) org -*-> I'm so boring my clothes wanna keep somebody else warm someone cooler <-*- thomas (at) apestaart (dot) org -*-> URGent, best radio on the net - 24/7 ! 
- http://urgent.fm/ |
From: <in...@pu...> - 2004-02-05 18:05:42
|
The reason why everything clocking related is in such a bad state is because clocking was extremely broken even in 0.6 and I only found out at the beginning of January when we were supposed to be API stable. Getting this fixed correctly, with thinking about how it should be done, implementing and testing it, would have taken until February and would have involved API changes. So from my point of view clocking is as broken as threading: It works most of the time, but don't dig too deep or it will blow up. Quoting Thomas Vander Stichele <th...@ap...>: > I have the same mixed feelings. I hope that in the future we can stop > deprecating/tearing apart old stuff without a good explanation of what > was wrong/unfixable with the old design, and a good design for the > future so everyone can help in validating and fixing. > You are all free to dig through the code and figure out how it works. It's open source for a reason. After that we can discuss design decisions. I'm not gonna babysit developers through our code though. > Some important things that could be clocking-related (but nobody seems > to be able to tell) are still not fixed (mpeg playback, next/next in the > player, ...). > next/next should be fixed. > We should decide to either revert to the old situation (the enemy we > know) or make sure these bugs get fixed ASAP and we have a decent design > for it that more than one person can understand. > Everybody can understand it. It's just that no one ever takes the time to read the code. And you don't want to revert to the old system, trust me. You'd have to fix pipeline parsing in that case... > > BTW: I consider either the deprecation of id_wait_async or the lack of > > gst_clock_wait_async to be a 0.8 blocker. > > Agreed. > You're free to undeprecate it. It should still work. I deprecated it because I don't want people tinkering around with a broken clocking system. FWIW, async waiting was always optional for clocks to implement. Just pretend no clock implements it. 
Benjamin |
From: Christian S. <ur...@li...> - 2004-02-05 19:00:17
|
On Thu, 2004-02-05 at 19:05 +0100, in...@pu... wrote: > The reason why everything clocking related is in such a bad state is because > clocking was extremely broken even in 0.6 and I only found out at the beginning > of January when we were supposed to be API stable. > Getting this fixed correctly with thinking about how it should be done, > implementing and testing it would have taken until February and would have > involved API changes. The point of an API freeze is to get the system stable and make things work instead of adding new features. It should never be used as an excuse to leave things broken. And by broken I mean really broken, not 'not perfect' broken. > So from my point of view clocking is as broken as threading: It works most of > the time, but don't dig too deep or it will blow up. Well the basic broken vs not-broken test here is this: Is it possible for us to have a perfectly working video player? If the answer is yes, then it is something we can live with for 0.8.x. If the answer is no, then we have a bug that needs to be fixed, API freeze or not. The point of API freezes is not to have something to hit each other over the head with, but to be a tool to help us make sure that we have a working system ready in time. This includes a system which has a stable enough API for our application developers to test with before 0.8.x is declared. So the question is, does the current clocking system pass the basic test I outlined and if not, how/who will try and tackle it. Christian |
From: Benjamin O. <in...@pu...> - 2004-02-06 11:42:54
|
On Thu, 5 Feb 2004, Christian Schaller wrote: > On Thu, 2004-02-05 at 19:05 +0100, in...@pu... wrote: > > The reason why everything clocking related is in such a bad state is because > > clocking was extremely broken even in 0.6 and I only found out at the beginning > > of January when we were supposed to be API stable. > > Getting this fixed correctly with thinking about how it should be done, > > implementing and testing it would have taken until February and would have > > involved API changes. > > The point of an API freeze is to get the system stable and make things > work instead of adding new features. It should never be used as an > excuse to leave things broken. And by broken I mean really broken, not > 'not perfect' broken. > > > So from my point of view clocking is as broken as threading: It works most of > > the time, but don't dig too deep or it will blow up. > > Well the basic broken vs not-broken test here is this: Is it possible > for us to have a perfectly working video player? If the answer is yes, > then it is something we can live with for 0.8.x. If the answer is no, > then we have a bug that needs to be fixed, API freeze or not. > > The point of API freezes is not to have something to hit each other over > the head with, but to be a tool to help us make sure that we have a working > system ready in time. This includes a system which has a stable enough > API for our application developers to test with before 0.8.x is > declared. > > So the question is, does the current clocking system pass the basic test > I outlined and if not, how/who will try and tackle it. > Yes, it does exactly that. But don't expect much more. Benjamin |
From: Thomas V. S. <th...@ap...> - 2004-02-11 11:06:05
|
On Thu, 2004-02-05 at 19:05, in...@pu... wrote: > The reason why everything clocking related is in such a bad state is because > clocking was extremely broken even in 0.6 and I only found out at the beginning > of January when we were supposed to be API stable. After going over the old and new clocking system, and trying to figure out what you think was broken, I'm pretty much lost. Basically, if you're going to claim it was "extremely broken" I think you should back that up with an explanation :) FWIW you seem to call things either perfect or broken. The clocking system already passed the basic test Christian laid out in 0.6, and it is really hard to evaluate why your start of a rewrite is better at all. After discussing it a little with Wim yesterday I really don't see what problem was unsolvable in the 0.6 clocking system, so I have the feeling doing such fundamental changes so late in the cycle was wrong and we should revert and figure out how to properly do clocking. > Getting this fixed correctly with thinking about how it should be done, > implementing and testing it would have taken until February and would have > involved API changes. So, could you please outline what was broken in your view, with simple explanations, and why it is now better ? As Andy said, there were people who knew and used the old system, and it seemed to work fine for them. Also, conceptually, the old system makes a lot of sense. I'm not saying the new one does not, I'm saying I have no way of telling if it makes more sense and if it was right to make the changes. Thanks Thomas |
From: <in...@pu...> - 2004-02-12 15:32:07
|
Quoting Thomas Vander Stichele <th...@ap...>: > After going over the old and new clocking system, and trying to figure > out what you think was broken, I'm pretty much lost. Basically, if > you're going to claim it was "extremely broken" I think you should back > that up with an explanation :) FWIW you seem to call things either > perfect or broken. > 1) The old system reset the time of the clock when a toplevel pipeline went from PAUSED to PLAYING. If you have multiple toplevel pipelines using the same clock, you have a problem. (It should be possible to reproduce by using esdsink and trying to play a file while loading your library - I haven't tried that). 2) The old system reset the time of the clock when a toplevel pipeline went from PAUSED to PLAYING. This means that when the clocking elements go from PAUSED to PLAYING (which can be quite a bit later when you use gst-launch with SOMETIMES pads) they might discard the first part of your data as being late. 3) The old system could not handle multiple discontinuity events. You had gst_clock_handle_discont and that set the time according to the discont event and after that allowed no more discontinuities to happen, so it wasn't bad if two sinks called handle_discont. One perversity with this is that to get around that limitation, some elements deactivated and then activated the clock before making it handle disconts, so the discont worked. These are three things I remember off the top of my head. > The clocking system already passed the basic test Christian laid out in > 0.6, and it is really hard to evaluate why your start of a rewrite is > better at all. After discussing it a little with Wim yesterday I really > don't see what problem was unsolvable in the 0.6 clocking system, so I > have the feeling doing such fundamental changes so late in the cycle was > wrong and we should revert and figure out how to properly do clocking. > If you come up with a better solution, go ahead. 
:) The problem with the 0.6 clocking system is its failure to separate system time, stream time and synchronization, trying to do it all at once. > As Andy said, there were people who knew and used the old system, and it > seemed to work fine for them. Also, conceptually, the old system makes > a lot of sense. I'm not saying the new one does not, I'm saying I have > no way of telling if it makes more sense and if it was right to make the > changes. > The old way doesn't make a lot of sense. ;) Just the fact that there's only one time inside a pipeline and the fact that time may be adjusted is bad enough for me. Benjamin |
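Bug 1 in Benjamin's list is easy to model: if a shared clock's time is reset when a second pipeline starts up, the first pipeline's sinks suddenly see their buffer timestamps far in the future (stalls) or in the past (drops). An illustrative Python model, not actual GStreamer code:

```python
# Toy reproduction of the shared-clock reset problem: resetting the clock
# for one pipeline corrupts the scheduling decisions of another pipeline
# sharing that same clock.

class SharedClock:
    def __init__(self):
        self.time = 0.0
    def reset(self):          # old behaviour when a pipeline (re)started
        self.time = 0.0
    def advance(self, dt):
        self.time += dt

def sink_decision(clock, buffer_timestamp):
    # A sink drops buffers whose timestamp is already in the past.
    return "play" if buffer_timestamp >= clock.time else "drop"

clock = SharedClock()
clock.advance(5.0)                        # pipeline A has been playing 5 s
assert sink_decision(clock, 5.0) == "play"
clock.reset()                             # pipeline B starts: clock reset!
# Pipeline A's next buffer is stamped ~5 s, but the clock says 0 again,
# so A now stalls for 5 seconds before rendering it...
assert sink_decision(clock, 5.0) == "play"
clock.advance(10.0)
# ...and buffers stamped earlier than the new clock time get dropped as late.
assert sink_decision(clock, 5.0) == "drop"
```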
From: Thomas V. S. <th...@ap...> - 2004-02-19 14:23:50
|
On Thu, 2004-02-12 at 16:30, in...@pu... wrote: > Quoting Thomas Vander Stichele <th...@ap...>: > > > After going over the old and new clocking system, and trying to figure > > out what you think was broken, I'm pretty much lost. Basically, if > > you're going to claim it was "extremely broken" I think you should back > > that up with an explanation :) FWIW you seem to call things either > > perfect or broken. > > > 1) The old system reset the time of the clock when a toplevel pipeline went > from PAUSED to PLAYING. (I'm assuming you meant PAUSED to READY here - at least that's how I remember and how it looks to be from looking at the code). To me this seems correct behaviour. > If you have multiple toplevel pipelines using the same > clock, you have a problem. (It should be possible to reproduce by using > esdsink and trying to play a file while loading your library - I haven't tried > that). Why would you need multiple toplevel pipelines *with the same clock* ? A clock is connected to a pipeline. It is possible to set the same clock as for one pipeline on a different pipeline, but this should only be done when there is good reason to do so - for example, to sync two output devices to the same clock even though they're playing something different. In general, I don't see the need for any current application to have this, can you give an example ? > 2) The old system reset the time of the clock when a toplevel pipeline went > from PAUSED to PLAYING. (I'm assuming here you mean "the old system unpauses the clock when going from PAUSED TO PLAYING). > This means that when the clocking elements go from > PAUSED to PLAYING (which can be quite a bit later when you use gst-launch with > SOMETIMES pads) they might discard the first part of your data as being late. Yes, and this is indeed a bug. However, IMO it can be fixed in a better way. In the long run, I really think we should re-evaluate states and state changes. 
After discussing with people and thinking about it a little more, I think in general that - a change between PLAYING -> PAUSED -> PLAYING should be inexpensive - READY -> PAUSED should prepare EVERYTHING for data flow, up to the moment where it hands the first buffer to rendering elements (output sinks), so that they are ready for instantaneous data flow. This is up to the scheduler to ensure. - data is allowed to flow in the PAUSED state to handle negotiation and get the first buffer of data to the output sinks. - this would mean going from PAUSED to PLAYING really is just a matter of setting the clock to play and getting data to flow, and having the clock start running at the same time when data is being rendered. Of course, this is not a change to make right now. However, for the clocking stuff, the solution to me seems rather simple - the clock should only start running when all the elements in the pipeline that need to go to PLAYING are in the PLAYING state. To do this the scheduler can keep track of a count of elements that are in PLAYING, and when all elements needing to go to PLAYING are in fact PLAYING, it can start the clock. For going from PLAYING to PAUSED, the reverse could be done; the clock can be stopped immediately. This would already make sure we have "correct clock behaviour" before we can re-evaluate how it ideally should work. How does that sound ? > and after that allowed no more discontinuity to happen, so it wasn't bad if two > sinks called handle_discont. One perversity with this is that to get around > that limitation, some elements deactivated and then activated the clock before > making it handle disconts, so the discont worked. I don't think the semantics for discont were clearly defined. It is something that would need to be reviewed anyway. But your explanation of it is a bit vague, could you elaborate on it so I can follow ? 
The real problem, I think, is 2) - the clock would already be running before data was actually flowing, and thus the sinks drop data because it is too late. This is fixable in the old system. I'm a bit worried because we changed behaviour, introducing new bugs (like, pause playback after x secs, then press play, and have to wait for x secs for playback to resume), and deprecating API that is good API to have, and that some people were relying on. I'm pretty sure we can fix the bugs of the old system without needing to switch over the internals completely. If the switchover was to internals that are more mature and developed, I'd be fine with it. But right now the switchover is such that the only thing we can do with some of the more useful clocking API is deprecate it, and that doesn't really seem like a good change to make, if we're going to have to redo it in 0.9 anyway. > The old way doesn't make a lot of sense. ;) > Just the fact that there's only one time inside a pipeline and the fact that > time may be adjusted is bad enough for me. It makes sense to me to have only one clock for each pipeline - the "time" is basically "the playing/process time (since the last reset of the pipeline) of a rendering sink inside the pipeline". It means exactly that, and, it's clearly defined. In cases where you only have one rendering sink, this makes it easy. In cases where you have more than one, one rendering sink uses the clock of the other through the pipeline, so they end up synchronized. As for time being adjustable - like I said, I'm not sure DISCONT was clearly defined. To me it seems like seeking shouldn't be causing a clock discont, for example. How about this idea - we start discussing actual pipelines and scenarios, so we get an idea of what we want to achieve with clocking and how it should work ? It would make it possible for us to discuss this using the same ideas/concepts. I'll reread your clocking doc again. 
As for the actual clocking stuff, what precisely do you think is unsolvable in the old system ? And what are your plans on the deprecated API that was useful to others, and the bugs that have been introduced because of the changed clocking ? Thomas |
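The fix Thomas proposes above (only start the clock once every non-locked element has reached PLAYING, and stop it immediately on PLAYING to PAUSED) can be sketched as a simple counter in the scheduler. Hypothetical names, Python model only:

```python
# Sketch of clock gating by element state: the scheduler tracks which
# non-locked elements still have to reach PLAYING and only starts the
# clock once that set is empty.

class Scheduler:
    def __init__(self, elements, locked=()):
        self.pending = {e for e in elements if e not in locked}
        self.clock_running = False

    def element_reached_playing(self, element):
        self.pending.discard(element)
        if not self.pending:
            self.clock_running = True   # all relevant elements are PLAYING

    def element_left_playing(self, element):
        self.pending.add(element)
        self.clock_running = False      # PLAYING -> PAUSED stops the clock at once

s = Scheduler(["src", "demux", "vsink", "asink"], locked=["vsink"])
s.element_reached_playing("src")
s.element_reached_playing("demux")
assert not s.clock_running              # asink not PLAYING yet
s.element_reached_playing("asink")
assert s.clock_running                  # locked vsink is not waited for
s.element_left_playing("asink")
assert not s.clock_running
```

Benjamin's objection in the next message (how do you know which elements "need" to go to PLAYING?) is exactly about how the `pending` set gets populated.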
From: <in...@pu...> - 2004-02-19 15:29:55
|
Quoting Thomas Vander Stichele <th...@ap...>: > > 1) The old system reset the time of the clock when a toplevel pipeline went > > > from PAUSED to PLAYING. > (I'm assuming you meant PAUSED to READY here - at least that's how I > remember and how it looks to be from looking at the code). To me this > seems correct behaviour. > It was in fact READY=>PAUSED. And resetting a clock based on some arbitrary element (the first non-container element to change state to PAUSED) seems wrong anyway. > Why would you need multiple toplevel pipelines *with the same clock* ? A > clock is connected to a pipeline. It is possible to set the same clock > as for one pipeline on a different pipeline, but this should only be > done when there is good reason to do so - for example, to sync two > output devices to the same clock even though they're playing something > different. > In general, I don't see the need for any current application to have > this, can you give an example ? > The old system relied on the fact that there is only one system clock. Don't ask me about the reasons though. > - READY -> PAUSED should prepare EVERYTHING for data flow, up to the > moment where it hands the first buffer to rendering elements (output > sinks), so that they are ready for instantaneous data flow. This is up > to the scheduler to ensure. > - data is allowed to flow in the PAUSED state to handle negotiation and > get the first buffer of data to the output sinks. > You view this from a much too simplistic point of view, namely the video playback view. Are you starting to send data through a gst-recorder pipeline recording from a webcam when it's set to PAUSED, but not immediately to PLAYING? That way you'll end up with a first picture that is completely wrong. What about streams from the web? And what should a scheduler do about gst-launch src ! switch_output_on_eos ! ximagesink switch_output_on_eos0. ! 
ximagesink With this approach the difference between PAUSED and PLAYING is mainly just the question whether the sinks throw incoming data away or not. > Of course, this is not a change to make right now. However, for the > clocking stuff, the solution to me seems rather simple - the clock > should only start running when all the elements in the pipeline that > need to go to PLAYING are in the PLAYING state. To do this the > scheduler can keep track of a count of elements that are in PLAYING, and > when all elements needing to go to PLAYING are in fact PLAYING, it can > start the clock. For going from PLAYING to PAUSED, the reverse could be > done; the clock can be stopped immediately. > How do you determine "elements that need to go to PLAYING" ? In gst-launch filesrc location=file.mp3 ! spider ! videosink spider0. ! audiosink the videosink never goes to PLAYING for example. How do you propose to get { audiosrc ! queue } ! audiosink working where the sink is set to PLAYING half a second later than the source _on_purpose_? > I don't think the semantics for discont were clearly defined. It is > something that would need to be reviewed anyway. But your explanation > of it is a bit vague, could you elaborate on it so I can follow ? > Imagine you have a muxed file where each audio chunk contains 5 seconds of audio and each video chunk contains 7 seconds of video (bear with me, the values in this example are a bit constructed to show the problem, but believe me it's a real problem). You seek to second 15 on the audio sink. The demuxer finds the next audio to start at second 15 and pushes out a DISCONT to second 15. After that it finds the next video at second 21 and sends out a DISCONT on the video to second 21. This needs to be synchronized correctly. Keep in mind that the video data might reach the videosink before the audio data reaches the audio sink. 
> It makes sense to me to have only one clock for each pipeline - the > "time" is basically "the playing/process time (since the last reset of > the pipeline) of a rendering sink inside the pipeline". It means > exactly that, and, it's clearly defined. In cases where you only have > one rendering sink, this makes it easy. In cases where you have more > than one, one rendering sink uses the clock of the other through the > pipeline, so they end up synchronized. > Again, you're only seeing this from the video playback view. Time in GStreamer is not something exclusive to sinks. When recording v4l and audio you need time to synchronize those two elements for example even though there's not a single sink involved. > As for time being adjustable - like I said, I'm not sure DISCONT was > clearly defined. To me it seems like seeking shouldn't be causing a > clock discont, for example. > That depends on the definition of "clock" and "time of a clock". The old definition was "time == timestamp of data to display now". I used this definition for element time in my current design and I'm using the common definition for the time of clocks (see point 1 in http://dictionary.reference.com/search?q=time if you need one, it's hard to describe) > As for the actual clocking stuff, what precisely do you think is > unsolvable in the old system ? And what are your plans on the deprecated > API that was useful to others, and the bugs that have been introduced > because of the changed clocking ? > The old system provided features that were more or less just there in the API. Like for example asynchronous notification, which does not work reliably at all across clocks or different states. That and the fact that the whole system needs a serious rework made me deprecate that stuff so everyone knows it will probably go away during 0.9. It still works. In fact the current code uses it. So I wouldn't protest if someone un-deprecated it, but keep in mind that it's likely to change in 0.9. Benjamin |
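Benjamin's seek example can be made concrete: the two DISCONT events (audio to second 15, video to second 21) must resolve against one shared mapping from stream time to clock time, otherwise the video plays six seconds out of sync. A toy model, assuming the first discont anchors the timeline and later ones reuse it:

```python
# Model of the seek example: both disconts map onto ONE shared timeline.
# The first discont anchors stream time to clock time; the second must
# reuse that anchor or A/V sync is lost.

class Pipeline:
    def __init__(self):
        self.anchor = None          # (stream_time, clock_time) of first discont

    def handle_discont(self, stream_time, clock_time):
        if self.anchor is None:     # only the first discont sets the anchor
            self.anchor = (stream_time, clock_time)

    def render_deadline(self, buffer_stream_time):
        s0, c0 = self.anchor
        return c0 + (buffer_stream_time - s0)

p = Pipeline()
p.handle_discont(stream_time=15.0, clock_time=100.0)  # audio discont
p.handle_discont(stream_time=21.0, clock_time=100.2)  # video discont, later
assert p.render_deadline(15.0) == 100.0   # audio resumes immediately
assert p.render_deadline(21.0) == 106.0   # video waits 6 s, staying in sync
```

The old gst_clock_handle_discont behaved roughly like the anchor here (ignore everything after the first discont), but offered no defined way to issue a legitimate second discont later, hence the activate/deactivate workaround Benjamin describes.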
From: Thomas V. S. <th...@ap...> - 2004-02-19 17:32:18
|
On Thu, 2004-02-19 at 16:24, in...@pu... wrote: > Quoting Thomas Vander Stichele <th...@ap...>: > > > > 1) The old system reset the time of the clock when a toplevel pipeline went > > > > > from PAUSED to PLAYING. > > (I'm assuming you meant PAUSED to READY here - at least that's how I > > remember and how it looks to be from looking at the code). To me this > > seems correct behaviour. > > > It was in fact READY=>PAUSED. Yep. So this part was ok for you ? > And resetting a clock based on some arbitrary > element (The first non-container element to change state to PAUSED) seems > wrong anyway. gst_clock_reset was only called when the scheduler's parent element (ie, the pipeline/thread connected to the scheduler) was doing READY->PAUSED (under the assumption that this scheduler was toplevel). This almost boils down to the same, I guess. Anyway, if this is a problem, it could be changed to "clock gets reset when all non-locked elements have made the jump to PAUSED". > > Why would you need multiple toplevel pipelines *with the same clock* ? A > > clock is connected to a pipeline. It is possible to set the same clock > > as for one pipeline on a different pipeline, but this should only be > > done when there is good reason to do so - for example, to sync two > > output devices to the same clock even though they're playing something > > different. > > In general, I don't see the need for any current application to have > > this, can you give an example ? > > > The old system relied on the fact that there is only one system clock. > Don't ask me about the reasons though. That wasn't really an answer :) Unless you meant by that answer I should understand "since there's only one system clock, and I want to run two pipelines, I need to use the same system clock for both pipelines and that's a problem." Anyway, it looks to me like the object that is called SystemClock is just one possible clock implementation, using g_get_current_time as its mechanism to keep internal time. 
So, I don't see why you can't just create two different instances of this, one for each pipeline. In the general case there's no reason to force these clocks to be the same. Am I missing something ? > > - READY -> PAUSED should prepare EVERYTHING for data flow, up to the > > moment where it hands the first buffer to rendering elements (output > > sinks), so that they are ready for instantaneous data flow. This is up > > to the scheduler to see this. > > - data is allowed to flow in the PAUSED state to handle negotation and > > get the first buffer of data to the output sinks. > > > You view this from a much too simplistic point of view, namely the video > playback view. > Are you starting to send data through a gst-recorder pipeline recorded from a > webcam when it's set to PAUSED, but not immediately to PLAYING? That way > you'll end up with a first picture that is completely wrong. It is up to the source element to decide this. The job for the element is "get everything ready so that as soon as you get set to play you can process data". So for a webcam, I would say the best behaviour would be to open the device and get ready for passing on data", and nothing more. > What about streams from the web? I'd say it should start reading from the web, and prebuffering, unless you'd prefer the prebuffering to be handled in a gstreamer way (ie, with queues and so on). In the second case, it should probably be done with a thread, the queue would act like the actual provider, and the queue should be the one to make sure that it has enough data to be ready to play whenever. Ie, it would best fill up its queue first getting data from the other thread containing the webreading element. Maybe this would need a special kind of queue. > And what should a scheduler do about gst-launch src ! switch_output_on_eos ! > ximagesink switch_output_on_eos0. ! ximagesink Well, tell me what this pipeline intends to do :) I can't make much sense of what it would achieve. 
> > Of course, this is not a change to make right now. However, for the > > clocking stuff, the solution to me seems rather simple - the clock > > should only start running when all the elements in the pipeline that > > need to go to PLAYING are in the PLAYING state. To do this the > > scheduler can keep track of a count of elements that are in PLAYING, and > > when all elements needing to go to PLAYING are in fact PLAYING, it can > > start the clock. For going from PLAYING to PAUSED, the reverse could be > > done; the clock can be stopped immediately. > > > How do you determine "elements that need to go to PLAYING" ? all elements that are not locked and not yet in playing but still in paused > In gst-launch filesrc location=file.mp3 ! spider ! videosink spider0. ! > audiosink the videosink never goes to PLAYING for example. Because spider doesn't autoplug visualisations ? Then is this not just a broken pipeline ? In my book this pipeline should just fail because it's wrong. > How do you propose to get { audiosrc ! queue } ! audiosink working where the > sink is set to PLAYING half a second later than the source _on_purpose_? (I'm assuming the goal of this pipeline is to "play back on some card everything that's coming from some card with a delay of half a second"). Two ways: a) have the thread and the pipeline have different clocks. The thread uses the audiosrc-provided clock, and the main pipeline uses the audiosink-provided clock. When audiosrc is set to playing, its clock is running and setting correct timestamps on the buffer. The first buffer going out will be marked with timestamp 0, which means "play this buffer as soon as the clock of the pipeline is playing". After .5 seconds, audiosink is set to play, which means its clock (the main pipeline one) is now running, and it can immediately output the first buffer with timestamp 0 that was already waiting in the queue for .5 seconds. 
Yes, in this case, the two elements have a different concept of the current time, *on purpose*. In the thread, the clock marks the recording time. In the main pipeline, the clock marks the playing time. They're different clocks, and if the synchronization between audiosrc and audiosink is perfect (ie, the same device, for example, or externally clock-linked using smpte), this will Just Work, and the two, different, clocks will always be 0.5 seconds off from each other. If the actual hardware devices aren't synced, then they will slowly drift away from each other, or the src will catch up with the sink. Those are problems to be solved at the application level. But the good thing is, it's easy to monitor the clock drift. b) If you use the same clock for both threads (which wouldn't be the default, IMO), then I would say the correct way to do this is to - lock the output sink before setting the thread to play - the toplevel pipeline sees that all the nonlocked elements are playing when you set the thread to play, so it sets the clock to playing - the clock here could be provided by either audiosrc or audiosink, and in the old system audiosrc gets preference. - in front of the audiosink, there would be a "delay" element that does nothing else than offset the timestamps on buffers coming in. So: 0.0 sec -> pipeline set to play, first buffer recorded, with timestamp 0, and more buffers filling up the queue (queue has to be big enough to cross 0.5 secs of course) 0.5 sec -> sink unlocked and set to play, sink pulls a buffer, delay pulls a buffer from queue, and gets first buffer with timestamp 0. delay gives this buffer to sink with timestamp 0.5, audiosink queries the clock, sees that it's at 0.5 sec, so it plays the buffer, and everything is ok. Anything wrong with either scenario ? > > I don't think the semantics for discont were clearly defined. It is > > something that would need to be reviewed anyway. 
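Scenario (b) above reduces to a timestamp offset: the hypothetical "delay" element adds 0.5 s to each buffer's timestamp, and the sink renders a buffer once the clock reaches that offset timestamp. A minimal sketch with made-up names, not real GStreamer API:

```python
# Sketch of the delay element from scenario (b): offset buffer timestamps
# by 0.5 s so a sink unlocked half a second later still renders the first
# recorded buffer exactly on time.

def delay(buffers, offset):
    # buffers are (timestamp, data) pairs coming out of the queue
    return [(ts + offset, data) for ts, data in buffers]

def sink_render(buffers, clock_time):
    # Render every buffer whose (offset) timestamp has been reached.
    return [data for ts, data in buffers if ts <= clock_time]

recorded = [(0.0, "chunk0"), (0.25, "chunk1"), (0.5, "chunk2")]
delayed = delay(recorded, 0.5)
assert delayed[0] == (0.5, "chunk0")
# At clock time 0.5 the sink is unlocked and immediately plays chunk0,
# which was captured at clock time 0.0: exactly the intended 0.5 s delay.
assert sink_render(delayed, 0.5) == ["chunk0"]
assert sink_render(delayed, 0.75) == ["chunk0", "chunk1"]
```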
> > But your explanation of it is a bit vague, could you elaborate on it so I can follow ?
>
> Imagine you have a muxed file where each audio chunk contains 5 seconds of audio and each video chunk contains 7 seconds of video (bear with me, the values in this example are a bit constructed to show the problem, but believe me it's a real problem).
> You seek to second 15 on the audio sink. The demuxer finds the next audio to start at second 15 and pushes out a DISCONT to second 15. After that it finds the next video at second 21 and sends out a DISCONT on the video to second 21. This needs to be synchronized correctly. Keep in mind that the video data might reach the videosink before the audio data reaches the audio sink.

Ok, so I can't say anything about this since, as I said, the semantics for discont are not clearly defined. What is discont ? Is it a discontinuity in the stream, or a discontinuity in the clock ? If it is the first, should seeking be a discontinuity ? It looks to me like these were mixed where they shouldn't be.

Personally, I don't think a seek on an input stream should necessarily trigger a discont. It should trigger a flush, then proceed by sending data from the new point, and the clock should just go on as if nothing happened. Ie, it's up to the decoder/demuxer (or, rather, the feeding pipeline) to make this look seamless.

To me it seems like DISCONT was really supposed to mean "make the clock jump non-linearly" - though I'm still not sure what situations need that. I might be misunderstanding DISCONT, so please explain what you intend it to do, so I can follow what your example wants to do.

> > It makes sense to me to have only one clock for each pipeline - the "time" is basically "the playing/process time (since the last reset of the pipeline) of a rendering sink inside the pipeline". It means exactly that, and it's clearly defined.
> > In cases where you only have one rendering sink, this makes it easy. In cases where you have more than one, one rendering sink uses the clock of the other through the pipeline, so they end up synchronized.
>
> Again, you're only seeing this from the video playback view. Time in GStreamer is not something exclusive to sinks. When recording v4l and audio you need time to synchronize those two elements, for example, though there's not a single sink involved.

I never said only sinks matter. In fact, the clocking documentation from 0.6 clearly states "if there are src clocks, use those. Else if there are sink clocks, use those. Else use a system clock." In the case of v4l and audiosrc, the audiosrc would be the clock provider, and v4l would be using it to sync.

> > As for time being adjustable - like I said, I'm not sure DISCONT was clearly defined. To me it seems like seeking shouldn't be causing a clock discont, for example.
>
> That depends on the definition of "clock" and "time of a clock". The old definition was "time == timestamp of data to display now". I used this definition for element time in my current design, and I'm using the common definition for the time of clocks (see point 1 in http://dictionary.reference.com/search?q=time if you need one, it's hard to describe).

Yeah, I know what you mean. I'm just not sure it's all that important to use/know the actual "absolute" time or some approximation of it. Basically, the concept of time inside the pipeline and synchronization is only necessary when there is something to perceive. (I'm finding it hard to explain my thoughts accurately on this matter :))

So all we need is some sort of time abstraction that makes time advance together with data. Ie, no data flow, no need for the time to increase.
The only time where "absolute" (outside-of-the-box) real life time matters is on the borders between "real life" and "the GStreamer system"; ie precisely on input and output sinks from actual devices.

In that respect, I think Wim's ideas for clocking were perfect:
- when playing, make sure that your pipeline time is advancing just like "real life time"
- when paused, your pipeline time is paused too, even though real life time is advancing.

The pipeline's time is a virtualization of the pipeline's lifecycle, backed up/implemented by some way of measuring "real life time" when it needs it (ie in playing).

> > As for the actual clocking stuff, what precisely do you think is unsolvable in the old system ? And what are your plans on the deprecated API that was useful to others, and the bugs that have been introduced because of the changed clocking ?
>
> The old system provided features that were more or less just there by API. Like for example asynchronous notification, which does not work reliably at all across clocks or different states.
> That and the fact that the whole system needs a serious rework made me deprecate that stuff so everyone knows it will probably go away during 0.9.

Are you saying that all these functions are functions we wouldn't want to support in some way or another ? Because getting one-off or repeated notifications from the clock seems like a very basic thing we would want to have, no ? Anyway, why deprecate functions that we're not sure yet we'll throw away or not ? We shouldn't be replacing stuff with something we don't know what we'll replace it with yet.

Thomas

Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
I'm so boring my clothes wanna keep somebody else warm someone cooler
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/
|
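The "delay" element proposed in scenario (b) above - an element that does nothing but add a fixed offset to the timestamps of buffers passing through - can be sketched as a small model in plain Python (illustrative only, not the GStreamer API; `Buffer` and `delay_element` are made-up names, and timestamps are in nanoseconds as GStreamer clocks count them):

```python
from collections import namedtuple

# Minimal stand-in for a GStreamer buffer: payload plus a timestamp
# in nanoseconds (GStreamer clocks also count in nanoseconds).
Buffer = namedtuple("Buffer", ["data", "timestamp"])

def delay_element(buffers, offset_ns):
    """Pass buffers through untouched except for shifting each
    timestamp forward by offset_ns."""
    for buf in buffers:
        yield Buffer(buf.data, buf.timestamp + offset_ns)

# Source records buffers at t = 0, 0.1 s, 0.2 s; the sink starts half a
# second late, so the delay element re-stamps everything +0.5 s.
HALF_SECOND = 500_000_000
recorded = [Buffer(b"chunk", i * 100_000_000) for i in range(3)]
delayed = list(delay_element(recorded, HALF_SECOND))
print([b.timestamp for b in delayed])  # [500000000, 600000000, 700000000]
```

The point of the model is only that synchronization falls out of timestamps plus a shared clock: the sink compares each (shifted) timestamp against the pipeline clock, so no element needs to sleep for the delay itself.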
From: <in...@pu...> - 2004-02-20 13:56:34
|
Quoting Thomas Vander Stichele <th...@ap...>:

> > And what should a scheduler do about gst-launch src ! switch_output_on_eos ! ximagesink switch_output_on_eos0. ! ximagesink
>
> Well, tell me what this pipeline intends to do :) I can't make much sense of what it would achieve.

I have no idea why anyone would use such a pipeline, and it would just switch the output to another window upon receiving an EOS. But there are lots of people that have lots of interesting ideas with gst (starting with Stefan Kost, passing totem and gstsci and certainly not ending with Gnonlin). I'm pretty sure something like this will come up somewhere. It's certainly not wrong in any way I could imagine, and this is what counts here.

> all elements that are not locked and not yet in playing but still in paused

In gst-launch, when connecting SOMETIMES pads, the sink part is locked until the connection can be made.

> > In gst-launch filesrc location=file.mp3 ! spider ! videosink spider0. ! audiosink the videosink never goes to PLAYING for example.
>
> Because spider doesn't autoplug visualisations ? Then is this not just a broken pipeline ? In my book this pipeline should just fail because it's wrong.

That pipeline has always just worked and just not output any video.

> > How do you propose to get { audiosrc ! queue } ! audiosink working where the sink is set to PLAYING half a second later than the source _on_purpose_?
>
> (I'm assuming the goal of this pipeline is to "play back on some card everything that's coming from some card with a delay of half a second".)
>
> Two ways:
>
> a) have the thread and the pipeline have different clocks. The thread uses the audiosrc-provided clock, and the main pipeline uses the audiosink-provided clock. When audiosrc is set to playing, its clock is running and setting correct timestamps on the buffer.
> The first buffer going out will be marked with timestamp 0, which means "play this buffer as soon as the clock of the pipeline is playing". After .5 seconds, audiosink is set to play, which means its clock (the main pipeline one) is now running, and it can immediately output the first buffer with timestamp 0 that was already waiting in the queue for .5 seconds. Yes, in this case, the two elements have a different concept of the current time, *on purpose*. In the thread, the clock marks the recording time. In the main pipeline, the clock marks the playing time. They're different clocks, and if the synchronization between audiosrc and audiosink is perfect (ie, the same device, for example, or externally clock-linked using smpte), this will Just Work, and the two different clocks will always be 0.5 seconds off from each other. If the actual hardware devices aren't synced, then they will slowly drift away from each other, or the src will catch up with the sink. Those are problems to be solved at the application level. But the good thing is, it's easy to monitor the clock drift.
>
> b) If you use the same clock for both threads (which wouldn't be the default, IMO), then I would say the correct way to do this is to
> - lock the output sink before setting the thread to play
> - the toplevel pipeline sees that all the nonlocked elements are playing when you set the thread to play, so it sets the clock to playing
> - the clock here could be provided by either audiosrc or audiosink, and in the old system audiosrc gets preference.
> - in front of the audiosink, there would be a "delay" element that does nothing else than offset the timestamps on buffers coming in.
>
> So:
> 0.0 sec -> pipeline set to play, first buffer recorded, with timestamp 0, and more buffers filling up the queue (queue has to be big enough to cross 0.5 secs of course)
> 0.5 sec -> sink unlocked and set to play, sink pulls a buffer, delay pulls a buffer from queue, and gets first buffer with timestamp 0. delay gives this buffer to sink with timestamp 0.5, audiosink queries the clock, sees that it's at 0.5 sec, so it plays the buffer, and everything is ok.
>
> Anything wrong with either scenario ?

- I'm pretty sure I don't want to use the locked state for deciding if a clock should start running.
- I meant to use the same clock for both elements.
- By default all elements in a pipeline have the same clock. Everything else is pretty much impossible.
- The timestamp modifier element is a good idea and certainly the correct way to do it in this case.

> Ok, so I can't say anything about this since, as I said the semantics for discont are not clearly defined. What is discont ? is it a discontinuity in the stream, or a discontinuity in the clock ? If it is the first, should seeking be a discontinuity ? It looks to me like these were mixed where they shouldn't be.

A discont describes that the data of the next buffer does not continue the bytestream where the data of the previous buffer ended. The stream is discontinued. Exactly what "continuing the bytestream" means is up to the actual bytestream to define. For audio/raw it means that if sample 357 is not followed by sample 358, then send a discont.

> So all we need is some sort of time abstraction that makes time advance together with data. Ie, no data flow, no need for the time to increase.
>
> The only time where "absolute" (outside-of-the-box) real life time matters is on the borders between "real life" and "the GStreamer system"; ie precisely on input and output sinks from actual devices.
There is the question what you want to make of async notifications, and what happens when the time-giving element stops while other parts of the pipeline still run (audio clock hits EOS while video still continues).

> In that respect, I think Wim's ideas for clocking were perfect:
> - when playing, make sure that your pipeline time is advancing just like "real life time"
> - when paused, your pipeline time is paused too, even though real life time is advancing.
> The pipeline's time is a virtualization of the pipeline's lifecycle, backed up/implemented by some way of measuring "real life time" when it needs it (ie in playing)

Unfortunately this breaks because the PLAYING/PAUSED distinction cannot be made pipeline-wide but only per element.

> Are you saying that all these functions are functions we wouldn't want to support in some way or another ? Because getting one-off or repeated notifications from the clock seems like a very basic thing we would want to have, no ? Anyway, why deprecate functions that we're not sure yet we'll throw away or not ? We shouldn't be replacing stuff with something we don't know what we'll replace it with yet.

I deprecated those functions because they didn't work, didn't have clear semantics, or didn't seem like something that was good from a GStreamer point of view. The reason was (and still is) that I don't want anyone telling me during 0.9 "this is a regression, we've always had that, so it must continue to work even if it's fundamentally flawed" for this stuff. And since 0.9 will probably have big changes in the scheduling department (including the question "what to do next?", which is quite fundamental for async notifications) I don't know what will happen there. Rest assured that I'm aware of the requirement.
|
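Benjamin's definition of discont above - the next buffer does not continue the bytestream where the previous one ended - can be made concrete with a short sketch for raw audio (plain Python with hypothetical names; real GStreamer signals this with a DISCONT event rather than a boolean):

```python
def mark_disconts(chunks):
    """Given (first_sample, num_samples) chunks, flag each chunk whose
    first sample does not follow the previous chunk's last sample."""
    flagged = []
    expected = None  # sample offset the next chunk should start at
    for start, length in chunks:
        discont = expected is not None and start != expected
        flagged.append((start, discont))
        expected = start + length
    return flagged

# Samples 0..356 arrive contiguously; then the stream jumps to sample 500
# (e.g. after a seek), so that chunk must be marked as a discontinuity.
chunks = [(0, 100), (100, 257), (500, 50)]
print(mark_disconts(chunks))  # [(0, False), (100, False), (500, True)]
```

What counts as "continuing" is format-specific, as the thread notes: for raw audio it is sample offsets, for other bytestreams it could be byte offsets or frame numbers.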
From: David S. <ds...@sc...> - 2004-02-19 20:25:12
|
On Thu, Feb 19, 2004 at 03:15:20PM +0100, Thomas Vander Stichele wrote:
> - a change between PLAYING -> PAUSED -> PLAYING should be inexpensive

For a number of ideas that I've been thinking about recently, I've felt it would be useful to essentially merge PLAYING and PAUSED, and define that an element is "PLAYING" when data is moving through any of a group of associated peers (in a scheduling sense), and "PAUSED" when data is not actually flowing. In addition to other things, it defines clear times at which a group of elements can be renegotiated. Right now, renegotiation can happen in PAUSED, and in PLAYING when "it happens to work".

dave...
|
From: Ronald B. <rb...@ro...> - 2004-02-19 21:00:43
|
Hi,

On Thu, 19 Feb 2004, David Schleef wrote:
> On Thu, Feb 19, 2004 at 03:15:20PM +0100, Thomas Vander Stichele wrote:
> > - a change between PLAYING -> PAUSED -> PLAYING should be inexpensive
> For a number of ideas that I've been thinking about recently, I've felt it would be useful to essentially merge PLAYING and PAUSED, and

I'm not sure about this... Some hardware elements (particularly v4l and v4l2) need this for internal buffer handling.

Ronald
|
From: <in...@pu...> - 2004-02-20 11:30:38
|
Quoting Ronald Bultje <rb...@ro...>:
> On Thu, 19 Feb 2004, David Schleef wrote:
> > On Thu, Feb 19, 2004 at 03:15:20PM +0100, Thomas Vander Stichele wrote:
> > > - a change between PLAYING -> PAUSED -> PLAYING should be inexpensive
> > For a number of ideas that I've been thinking about recently, I've felt it would be useful to essentially merge PLAYING and PAUSED, and
> I'm not sure about this.. Some hardware elements (particularly v4l and v4l2) need this for internal buffer handling.

The only real difference between PAUSED and PLAYING is that time is only progressing in the PLAYING state.
Another difference - but this one is artificial - is that buffers may only be passed when the pipeline is in PLAYING.
The problem to solve when merging/changing behaviour in these states is what elements that handle data based on time are supposed to do. If someone figures that out, it's fine with me :)

Benjamin
|
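Benjamin's point that "time is only progressing in the PLAYING state" amounts to a pausable clock. Here is a rough model (plain Python, not the GstClock API; the time source is injectable so the behaviour can be shown deterministically):

```python
import time

class PausableClock:
    """Reports elapsed 'stream time': advances while playing,
    stands still while paused."""

    def __init__(self, now=time.monotonic):
        self._now = now        # real-life time source
        self._elapsed = 0.0    # playing time accumulated so far
        self._started = None   # real time of the last switch to playing

    def play(self):
        if self._started is None:
            self._started = self._now()

    def pause(self):
        if self._started is not None:
            self._elapsed += self._now() - self._started
            self._started = None

    def get_time(self):
        if self._started is None:
            return self._elapsed
        return self._elapsed + (self._now() - self._started)

# Drive it with a scripted time source to show paused time standing still:
ticks = iter([10.0, 12.0, 12.0, 20.0, 21.5])
clock = PausableClock(now=lambda: next(ticks))
clock.play()                # enters PLAYING at real time 10.0
print(clock.get_time())     # 2.0 (12.0 - 10.0)
clock.pause()               # pauses at real time 12.0
print(clock.get_time())     # 2.0 (no advance while paused)
clock.play()                # resumes at real time 20.0
print(clock.get_time())     # 3.5 (2.0 + 21.5 - 20.0)
```

This is essentially the model Wim's ideas describe earlier in the thread; a real implementation would also have to answer Benjamin's objection that PLAYING/PAUSED can only be decided per element, not per pipeline.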
From: Ronald B. <rb...@ro...> - 2004-02-20 22:07:42
|
Hi,

On Fri, 20 Feb 2004 in...@pu... wrote:
> Quoting Ronald Bultje <rb...@ro...>:
> > On Thu, 19 Feb 2004, David Schleef wrote:
> > > On Thu, Feb 19, 2004 at 03:15:20PM +0100, Thomas Vander Stichele wrote:
> > > > - a change between PLAYING -> PAUSED -> PLAYING should be inexpensive
> > > For a number of ideas that I've been thinking about recently, I've felt it would be useful to essentially merge PLAYING and PAUSED, and
> > I'm not sure about this.. Some hardware elements (particularly v4l and v4l2) need this for internal buffer handling.
> The only real difference between PAUSED and PLAYING is that time is only progressing in the PLAYING state.
> Another difference - but this one is artificial - is that buffers may only be passed when the pipeline is in PLAYING.

I'm afraid that's only true in "simple" elements... As a paradigm ("definition"), it's fine. But I don't think we should programmatically merge the two... To be honest, I don't even see what it'd gain us at all...

Ronald
|
From: Andy W. <wi...@po...> - 2004-02-20 12:15:20
|
Hey Benjamin,

On Thu, 05 Feb 2004, in...@pu... wrote:
> > > BTW: I consider either the deprecation of id_wait_async or the lack of gst_clock_wait_async to be a 0.8 blocker.
> >
> > Agreed.
>
> You're free to undeprecate it.

I think I'm going to do this, if there are no outcries, just because the API coverage is more complete.

> FWIW, async waiting was always optional for clocks to implement. Just pretend no clock implements it.

Indeed it's optional. It is necessary, however, for the case where the app runs in the same thread as the pipeline and it does things while it's waiting for the time. That's why GstAudioClock implements it (for oss plugins, sndfile plugins, and esd at the moment). I hope this convinces you of its usefulness :)

Wingo, whose emails are always asynchronous.
|
From: Andy W. <wi...@po...> - 2004-02-25 12:36:03
|
On Wed, 2004-02-11 at 20:21, Andy Wingo wrote:
> > > > BTW: I consider either the deprecation of id_wait_async or the lack of gst_clock_wait_async to be a 0.8 blocker.
> > >
> > > Agreed.
> >
> > You're free to undeprecate it.
>
> I think I'm going to do this, if there are no outcries, just because the API coverage is more complete.

Done.

I'd like to add another use case that needs to be supported for clocking. Eventually, I'd like to implement a clock for jack. The jack GStreamer element can be a master or a slave to that clock. In both cases, the clock needs to support scrubbing. As master, it needs to support setting the jack system time based on time within GStreamer, and explicitly setting the clock time.

So, just throw that in with the design goals.

Regards,
--
Andy Wingo <wi...@po...>
|
From: Benjamin O. <ot...@gn...> - 2004-02-25 13:56:06
|
Quoting Andy Wingo <wi...@po...>:

> the clock needs to support scrubbing.

Can you explain a bit more (or link to an explanation) what scrubbing is?

Benjamin
|
From: Jan S. <th...@ma...> - 2004-02-25 23:06:20
|
<quote who="Benjamin Otte">
> Quoting Andy Wingo <wi...@po...>:
>
> > the clock needs to support scrubbing.
>
> Can you explain a bit more (or link to an explanation) what scrubbing is?

http://www.metadecks.org/software/sweep/scrub.html

In short: Scrubbing is the ability to drag the mouse back and forth across a graphical representation of the audio, and hear real time feedback of the sound at each point... like spinning a record back and forth on a turntable.

J.
--
Jan Schmidt  th...@ma...

ENOSIG
|
From: Benjamin O. <in...@pu...> - 2004-02-26 15:11:32
|
Quoting Jan Schmidt <th...@ma...>:
> <quote who="Benjamin Otte">
> > Quoting Andy Wingo <wi...@po...>:
> >
> > > the clock needs to support scrubbing.
> >
> > Can you explain a bit more (or link to an explanation) what scrubbing is?
>
> http://www.metadecks.org/software/sweep/scrub.html
>
> In short: Scrubbing is the ability to drag the mouse back and forth across a graphical representation of the audio, and hear real time feedback of the sound at each point... like spinning a record back and forth on a turntable

That sounds a lot like it should be done via seeking instead of via adjusting the clock of the computer.

Benjamin
|
From: Thomas V. S. <th...@ap...> - 2004-02-26 15:33:35
|
> > In short: Scrubbing is the ability to drag the mouse back and forth across a graphical representation of the audio, and hear real time feedback of the sound at each point... like spinning a record back and forth on a turntable
>
> That sounds a lot like it should be done via seeking instead of via adjusting the clock of the computer.

Our seeking is way too slow by design for that to work. Run sweep and try it, you'll notice. What might be possible would be to use a sweep element that transforms the sound this way, though.

Thomas

Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
Couldn't a tiny little bit of war sometimes be better ?
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/
|
From: Ronald S. B. <rb...@ro...> - 2004-02-26 21:14:22
|
Hi,

On Thu, 2004-02-26 at 16:28, Thomas Vander Stichele wrote:
> Our seeking is way too slow by design for that to work. Run sweep and try it, you'll notice. What might be possible would be to use a sweep element that transforms the sound this way though.

I'm not so sure about that. Our seeking is a *protocol* rather than a lot of executable code. The code that is executed is - in the end - very little. I've said this to Iain too, some time ago...

Surely, I'd be convinced by and interested in someone timing all this.

Ronald
--
Ronald Bultje <rb...@ro...>
Linux Video/Multimedia developer
|
From: Thomas V. S. <th...@ap...> - 2004-02-26 22:31:23
|
On Thu, 2004-02-26 at 16:12, Ronald S. Bultje wrote:
> Hi,
>
> On Thu, 2004-02-26 at 16:28, Thomas Vander Stichele wrote:
> > Our seeking is way too slow by design for that to work. Run sweep and try it, you'll notice. What might be possible would be to use a sweep element that transforms the sound this way though.
>
> I'm not so sure about that. Our seeking is a *protocol* rather than a lot of executable code. The code that is executed is - in the end - very little. I've said this to Iain too, some time ago...
>
> Surely, I'd be convinced by and interested in someone timing all this.

If you haven't heard scrubbing, there's no point in speculating whether it could work. Listen to what it is.

Thomas

Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
I should dress you up in pearl
Finest silk to touch your skin
Don't know how to write a love song
But don't leave
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/
|
From: Ronald S. B. <rb...@ro...> - 2004-02-26 22:56:37
|
On Thu, 2004-02-26 at 23:26, Thomas Vander Stichele wrote:
> If you haven't heard scrubbing, there's no point in speculating if it could be. Listen to what it is.

I know what scrubbing is. And I'm convinced that our seeking *protocol* is well able to handle it. I've done MBs of byte-per-byte write-seek-write header writing (filesink) using this protocol, and I've done the same using buffered writes and no seeks. There is *no* slowdown at all, even with our protocol. So until someone really *proves* that our protocol indeed slows down seeking so much that scrubbing (or other such operations) becomes impossible, my point shall stand...

Surely, I know that filesrc ! my_cute_filter ! { queue ! { queue ! my_cute_filter_again ! queue } ! { { queue ! ... actual_demuxer [..] will not be very fast. But that's not a realistic approach. I'm assuming (series of) filesrc ! actual_demuxer. And then, seeking is a protocol, nothing more. It looks complicated, but the actual code being executed is nothing.

Ronald
--
Ronald Bultje <rb...@ro...>
Linux Video/Multimedia developer
|