From: Ling S. <sh...@gm...> - 2008-07-30 06:59:52
|
Hi, all

I'm in a research project to port GStreamer to an embedded system. We have now run into the question of how to integrate hardware accelerators (DSP/GPU) into gst. After evaluating different solutions, we think the GstOpenMAX project may be the best one for us, because:

1) OpenMAX is an industry standard
2) more and more DSP/GPU vendors support OpenMAX

But I still have several questions about this project.

1. Does the Nokia N8xx series use GstOpenMAX?
I know Nokia engineers lead this project. I also know the N8xx series uses GStreamer as its default playback engine, and it uses the TI OMAP 2420, which has a DSP. Can anyone tell me whether the N8xx uses GstOpenMAX? If not, does the N8xx plan to use it in the future?

2. How does GstOpenMAX plan to support DSP/GPU in the future?
I have reviewed several plugins in GstOpenMAX and found that the current design can only support non-tunneled communication. That is not the best solution on hardware, because of the poor performance. So we plan to improve it by adding tunneled or proprietary communication. Do you have such a plan? If yes, can we get involved in the design?

In addition, most accelerators work as a combined decoder and renderer: the encoded data sent to them is decoded and rendered directly and is never retrieved back. How should gst or omx organize its pipeline in this situation? We are evaluating two solutions.

===Solution 1===
We can design a super omx sink component that covers both the decoder and the renderer. This is the solution used by the N8xx.

src ! demux ! sink
        |
  super omx sink
        |
+--------------------------------------------+
|            hardware accelerator            |
+--------------------------------------------+

===Solution 2===
We can keep separate omx decoder, omx post-processor, and omx sink elements, and enhance the decoder, post-processor, and sink plugins in GstOpenMAX. If a GstOpenMAX plugin finds that its neighbors are also GstOpenMAX plugins, it will first try to establish tunneled or proprietary communication. This means that although we have 3 OMX plugins in the gst pipeline, no data flows through the gst pads and omx ports. The last two gst/omx plugins only provide control functions and do not process the data flow. Of course, if the connection fails, it falls back to non-tunneled communication.

src ! demux ! decoder ! post processor ! sink
                 |            |            |
              omx dec      omx pp      omx sink
                 |            |            |
+--------------------------------------------+
|            hardware accelerator            |
+--------------------------------------------+

It seems solution 2 is more flexible. What is your suggestion on the two solutions? Which one is feasible? Do you have other solutions?

Thank you very much. |
From: Felipe C. <fel...@no...> - 2008-07-30 08:40:13
|
Hi,

On Wed, 2008-07-30 at 15:00 +0800, ext Ling Shi wrote:
> Hi, all
> I'm in a research project to port GStreamer to an embedded system. We
> have now run into the question of how to integrate hardware
> accelerators (DSP/GPU) into gst. After evaluating different solutions,
> we think the GstOpenMAX project may be the best one for us, because:
>
> 1) OpenMAX is an industry standard
> 2) more and more DSP/GPU vendors support OpenMAX
>
> But I still have several questions about this project.
> 1. Does the Nokia N8xx series use GstOpenMAX?
> I know Nokia engineers lead this project. I also know the N8xx series
> uses GStreamer as its default playback engine, and it uses the TI OMAP
> 2420, which has a DSP. Can anyone tell me whether the N8xx uses
> GstOpenMAX? If not, does the N8xx plan to use it in the future?

The current Maemo products don't use OpenMAX IL; they use TI's DSP directly through the open-source version of the DSP bridge (DSP gateway).

The plan is to use OpenMAX IL so we can choose between different implementations without much effort.

TI has started to provide their OpenMAX IL source code:
http://omapzoom.org/gf/project/openmax/wiki/

> 2. How does GstOpenMAX plan to support DSP/GPU in the future?
> I have reviewed several plugins in GstOpenMAX and found that the
> current design can only support non-tunneled communication. That is
> not the best solution on hardware, because of the poor performance. So
> we plan to improve it by adding tunneled or proprietary communication.
> Do you have such a plan? If yes, can we get involved in the design?

Indeed, as Bruno mentions, NXP has contributed code that adds support for tunneled communication. It is maintained in a separate branch and will soon be merged into the master branch.

> In addition, most accelerators work as a combined decoder and
> renderer: the encoded data sent to them is decoded and rendered
> directly and is never retrieved back. How should gst or omx organize
> its pipeline in this situation? We are evaluating two solutions.
>
> ===Solution 1===
> We can design a super omx sink component that covers both the decoder
> and the renderer. This is the solution used by the N8xx.
>
> src ! demux ! sink
>         |
>   super omx sink
>         |
> +--------------------------------------------+
> |            hardware accelerator            |
> +--------------------------------------------+

The disadvantage of this solution is that it requires creating many elements to cover all the possible combinations of elements. This becomes especially a problem when you add, for example, some filtering, like a volume control, etc.

> ===Solution 2===
> We can keep separate omx decoder, omx post-processor, and omx sink
> elements, and enhance the decoder, post-processor, and sink plugins in
> GstOpenMAX. If a GstOpenMAX plugin finds that its neighbors are also
> GstOpenMAX plugins, it will first try to establish tunneled or
> proprietary communication. This means that although we have 3 OMX
> plugins in the gst pipeline, no data flows through the gst pads and
> omx ports. The last two gst/omx plugins only provide control functions
> and do not process the data flow. Of course, if the connection fails,
> it falls back to non-tunneled communication.
>
> src ! demux ! decoder ! post processor ! sink
>                  |            |            |
>               omx dec      omx pp      omx sink
>                  |            |            |
> +--------------------------------------------+
> |            hardware accelerator            |
> +--------------------------------------------+
>
> It seems solution 2 is more flexible. What is your suggestion on the
> two solutions? Which one is feasible? Do you have other solutions?

This is exactly how NXP implemented it, and it seems to be working fine.

The problem I see with both solutions is that there will be A/V sync issues when using OMX sinks in tunneling mode. The idea is to solve these issues by mapping the OMX clock to the GST clock. However, this hasn't been implemented yet.

Best regards.

--
Felipe Contreras |
From: Ling S. <sh...@gm...> - 2008-07-30 13:15:26
|
Felipe and Bruno,

Thanks for your reply. We have a very similar idea of how to use OMX in gst. Now I have more confidence to use, get involved in, and contribute to this project.

Bruno, could you tell me where I can find your changes? Is it possible for me to study your code before the release?

Felipe, the TI OMAP 3xxx is just one of our target platforms. I just got a TI OMAP 3xxx board for testing. I will investigate the OMX IL code from omapzoom. Please check my other comments inline.

On Wed, Jul 30, 2008 at 4:39 PM, Felipe Contreras <fel...@no...> wrote:
> Hi,
>
> On Wed, 2008-07-30 at 15:00 +0800, ext Ling Shi wrote:
> > Hi, all
> > I'm in a research project to port GStreamer to an embedded system. We
> > have now run into the question of how to integrate hardware
> > accelerators (DSP/GPU) into gst. After evaluating different
> > solutions, we think the GstOpenMAX project may be the best one for
> > us, because:
> >
> > 1) OpenMAX is an industry standard
> > 2) more and more DSP/GPU vendors support OpenMAX
> >
> > But I still have several questions about this project.
> > 1. Does the Nokia N8xx series use GstOpenMAX?
> > I know Nokia engineers lead this project. I also know the N8xx series
> > uses GStreamer as its default playback engine, and it uses the TI
> > OMAP 2420, which has a DSP. Can anyone tell me whether the N8xx uses
> > GstOpenMAX? If not, does the N8xx plan to use it in the future?
>
> The current Maemo products don't use OpenMAX IL; they use TI's DSP
> directly through the open-source version of the DSP bridge (DSP
> gateway).
>
> The plan is to use OpenMAX IL so we can choose between different
> implementations without much effort.
>
> TI has started to provide their OpenMAX IL source code:
> http://omapzoom.org/gf/project/openmax/wiki/
>
> > 2. How does GstOpenMAX plan to support DSP/GPU in the future?
> > I have reviewed several plugins in GstOpenMAX and found that the
> > current design can only support non-tunneled communication. That is
> > not the best solution on hardware, because of the poor performance.
> > So we plan to improve it by adding tunneled or proprietary
> > communication. Do you have such a plan? If yes, can we get involved
> > in the design?
>
> Indeed, as Bruno mentions, NXP has contributed code that adds support
> for tunneled communication. It is maintained in a separate branch and
> will soon be merged into the master branch.
>
> > In addition, most accelerators work as a combined decoder and
> > renderer: the encoded data sent to them is decoded and rendered
> > directly and is never retrieved back. How should gst or omx organize
> > its pipeline in this situation? We are evaluating two solutions.
> >
> > ===Solution 1===
> > We can design a super omx sink component that covers both the decoder
> > and the renderer. This is the solution used by the N8xx.
> >
> > src ! demux ! sink
> >         |
> >   super omx sink
> >         |
> > +--------------------------------------------+
> > |            hardware accelerator            |
> > +--------------------------------------------+
>
> The disadvantage of this solution is that it requires creating many
> elements to cover all the possible combinations of elements. This
> becomes especially a problem when you add, for example, some filtering,
> like a volume control, etc.

[Shi Ling] Your idea is exactly the same as mine. I also think it's not a good solution.

> > ===Solution 2===
> > We can keep separate omx decoder, omx post-processor, and omx sink
> > elements, and enhance the decoder, post-processor, and sink plugins
> > in GstOpenMAX. If a GstOpenMAX plugin finds that its neighbors are
> > also GstOpenMAX plugins, it will first try to establish tunneled or
> > proprietary communication. This means that although we have 3 OMX
> > plugins in the gst pipeline, no data flows through the gst pads and
> > omx ports. The last two gst/omx plugins only provide control
> > functions and do not process the data flow. Of course, if the
> > connection fails, it falls back to non-tunneled communication.
> >
> > src ! demux ! decoder ! post processor ! sink
> >                  |            |            |
> >               omx dec      omx pp      omx sink
> >                  |            |            |
> > +--------------------------------------------+
> > |            hardware accelerator            |
> > +--------------------------------------------+
> >
> > It seems solution 2 is more flexible. What is your suggestion on the
> > two solutions? Which one is feasible? Do you have other solutions?
>
> This is exactly how NXP implemented it, and it seems to be working
> fine.
>
> The problem I see with both solutions is that there will be A/V sync
> issues when using OMX sinks in tunneling mode. The idea is to solve
> these issues by mapping the OMX clock to the GST clock. However, this
> hasn't been implemented yet.

[Shi Ling] Yes, so many things need to be improved in the future.

> Best regards.
>
> --
> Felipe Contreras |
From: Felipe C. <fel...@gm...> - 2008-07-31 20:03:08
|
Hi,

On Thu, Jul 31, 2008 at 1:22 PM, Bruno Smets <bru...@nx...> wrote:
> Hi,
>
> Felipe is integrating the changes ... you need to set up GIT and clone
>
> git://github.com/felipec/gst-openmax.git

I've finally managed to clean this up.

The branch is tunneling-v3:
http://github.com/felipec/gst-openmax/commits/tuneling-v3

This is not the final version, but it's near it.

Best regards.

--
Felipe Contreras |
From: Ling S. <sh...@gm...> - 2008-08-03 08:05:37
|
Felipe & Bruno,

Thanks. BTW, which OMX did you use in testing, Bellagio or your companies' own OMX?

On Fri, Aug 1, 2008 at 4:03 AM, Felipe Contreras <fel...@gm...> wrote:
> Hi,
>
> On Thu, Jul 31, 2008 at 1:22 PM, Bruno Smets <bru...@nx...> wrote:
> > Hi,
> >
> > Felipe is integrating the changes ... you need to set up GIT and
> > clone
> >
> > git://github.com/felipec/gst-openmax.git
>
> I've finally managed to clean this up.
>
> The branch is tunneling-v3:
> http://github.com/felipec/gst-openmax/commits/tuneling-v3
>
> This is not the final version, but it's near it.
>
> Best regards.
>
> --
> Felipe Contreras |
From: Felipe C. <fel...@gm...> - 2008-08-03 09:31:46
|
On Sun, Aug 3, 2008 at 11:05 AM, Ling Shi <sh...@gm...> wrote:
> Felipe & Bruno,
> Thanks,
>
> BTW, which OMX did you use in testing, Bellagio or your companies' own
> OMX?

In Nokia we have been testing with TI components, Bellagio components, and some experimental components developed with the Bellagio base classes.

--
Felipe Contreras |
From: V. M. J. L. <ce...@gm...> - 2008-07-31 22:38:17
|
Even though I haven't worked on these things for a while, I used to, and I have some thoughts about them.

I haven't read the NXP tunneling implementation in gst-openmax, but we once implemented something related: when an omx gst element is linked with another omx gst element, a tunnel is set up, but since no buffers traverse the pipeline (the buffer communication happens beneath the omx layer), we had to push ghost buffers (empty buffers with calculated metadata). Those ghost buffers simulated the gst a/v sync; nevertheless, the real a/v sync was done by omx.

That solution is not sound: it might work in some cases, but we couldn't assure it for every case.

We dismissed the super sinks at the beginning of the development because, as you mentioned, they are not a flexible solution.

But we have a trade-off: the first solution is not concordant with the GStreamer philosophy, and the second is not concordant with the OMX philosophy, because of a semantic overlap, as in the buffer communication assumptions, the state management among the components, etc.

Nowadays I'm more convinced that a supersink could be the best solution to integrate gst and omx in the A/V playback use case:

1. it's easy to build and set up the omx pipelines given the caps
2. it's easy to control the sync
3. it's easy to add gst interfaces such as volume, contrast, et al.
4. it's easy to manage the state among the omx components
5. no dirty hacks such as ghost buffers
6. afaik the supersink elements can be autoplugged by playbin2

I admit the supersinks break the flexibility offered by gst, but as far as I can foresee, they are the straight strategy to obtain the performance promised by omx in its interop profile.

Maybe one day the interop profile won't be necessary, when the pBuffer in the buffer header loses its read-only property...

Víctor Manuel Jáquez Leal |
From: Felipe C. <fel...@gm...> - 2008-08-03 09:56:22
|
Hi,

On Fri, Aug 1, 2008 at 1:38 AM, Victor Manuel Jáquez Leal <ce...@gm...> wrote:
> Even though I haven't worked on these things for a while, I used to,
> and I have some thoughts about them.
>
> I haven't read the NXP tunneling implementation in gst-openmax, but we
> once implemented something related: when an omx gst element is linked
> with another omx gst element, a tunnel is set up, but since no buffers
> traverse the pipeline (the buffer communication happens beneath the
> omx layer), we had to push ghost buffers (empty buffers with
> calculated metadata). Those ghost buffers simulated the gst a/v sync;
> nevertheless, the real a/v sync was done by omx.

If the A/V sync was done in the omx layer, then why are the ghost buffers needed?

The gst base sink requires buffers in order to do A/V sync. If the sink doesn't receive the buffers, then it doesn't do the sync, but it still works.

> That solution is not sound: it might work in some cases, but we
> couldn't assure it for every case.
>
> We dismissed the super sinks at the beginning of the development
> because, as you mentioned, they are not a flexible solution.
>
> But we have a trade-off: the first solution is not concordant with the
> GStreamer philosophy, and the second is not concordant with the OMX
> philosophy, because of a semantic overlap, as in the buffer
> communication assumptions, the state management among the components,
> etc.
>
> Nowadays I'm more convinced that a supersink could be the best
> solution to integrate gst and omx in the A/V playback use case:
>
> 1. it's easy to build and set up the omx pipelines given the caps

What if you want a post-processing element in the middle? Or you want an encoder+decoder (transcoder)?

I don't think all the omx pipelines can be built based on the caps.

> 2. it's easy to control the sync

Not quite; I'll explain at the end.

> 3. it's easy to add gst interfaces such as volume, contrast, et al.
> 4. it's easy to manage the state among the omx components

I'm not so sure about that. If a gst element is mapped to a single omx component, it's easier to see what's happening.

Actually, I think that was the difficult part in the tunneling branch: aligning the gst and omx components.

> 5. no dirty hacks such as ghost buffers
> 6. afaik the supersink elements can be autoplugged by playbin2

Yes.

> I admit the supersinks break the flexibility offered by gst, but as
> far as I can foresee, they are the straight strategy to obtain the
> performance promised by omx in its interop profile.

Or as we are doing in the tunneling branch.

> Maybe one day the interop profile won't be necessary, when the pBuffer
> in the buffer header loses its read-only property...

Even in that case tunneling might help; there would be fewer memory allocations for gst buffers.

I think you are basing your ideas on the assumption that in order to support A/V sync properly, gst should do it by receiving real buffers on the element. Even if you have a video decoder sink that receives real gst buffers in order to do A/V sync, the sync will be done _before_ the decoding, so by the time the buffers reach the renderer some time will have been spent, and the sync will be lost.

In discussions with different parties, I believe the consensus has been that mapping the OMX clock to a GST clock is the right way to go. That way you have all the flexibility of gst pipelines, omx efficiency, and proper A/V sync.

I'm a pragmatist, so I'm not saying that approach will work, but I don't see any reason why it shouldn't, so I think we should try it first.

Best regards.

--
Felipe Contreras |
From: V. M. J. L. <ce...@gm...> - 2008-08-04 11:34:59
|
Hi all,

> > I haven't read the NXP tunneling implementation in gst-openmax, but
> > we once implemented something related: when an omx gst element is
> > linked with another omx gst element, a tunnel is set up, but since no
> > buffers traverse the pipeline (the buffer communication happens
> > beneath the omx layer), we had to push ghost buffers (empty buffers
> > with calculated metadata). Those ghost buffers simulated the gst a/v
> > sync; nevertheless, the real a/v sync was done by omx.
>
> If the A/V sync was done in the omx layer, then why are the ghost
> buffers needed?

Last Friday I glanced at the tunneling-v3 branch on github, and there are some details I could not grasp, which worry me:

1) AFAIK, for a pipeline to change from prerolling to playing, the sink must receive at least one buffer. If you have an OMX sink in a tunnel with the previous element, the pipeline will never leave the prerolling state.

2) When the first gst buffer traverses its part of the pipeline, the stream negotiation is done among the linked elements. So if no buffers traverse some portion of the pipeline because those elements are tunneled, the elements in the tunnel will never inform the GStreamer client application about their real configured caps.

> The gst base sink requires buffers in order to do A/V sync. If the
> sink doesn't receive the buffers, then it doesn't do the sync, but it
> still works.

But, as I said, the pipeline won't leave the prerolling state in the case of a tunneled omx sink.

> > 1. it's easy to build and set up the omx pipelines given the caps
>
> What if you want a post-processing element in the middle? Or you want
> an encoder+decoder (transcoder)?

Yes, the supersink solutions are not flexible, but they might provide an effective solution to a common use case.

> I don't think all the omx pipelines can be built based on the caps.

Not all of them, but if you have fixed hardware (as happens in the embedded world) you'll only need the input stream caps to build up the OMX pipeline to render the stream.

Anyway, this is not a real argument; it just works around a problem in the supersink concept.

> > 4. it's easy to manage the state among the omx components
>
> I'm not so sure about that. If a gst element is mapped to a single omx
> component, it's easier to see what's happening.

That's another issue I couldn't figure out in the tunneling-v3 branch.

According to the spec (page 122, Figure 3-10, "State Transition to Idle in the Case of Tunneled Components"), if you change a component that is not a "buffer supplier" to the Idle state, the CommandStateSet callback won't be triggered until the other component in the tunnel has changed to Idle as well.

AFAIK, each state change in gst-openmax is done sequentially: change_state, then wait_for_state. That can cause problems when the component that is the buffer supplier in the tunnel is the later one in the tunnel: you'll get a deadlock waiting for the first component to change its state. And that could be the case, for example, for a tunneled video sink.

In the case of a supersink, that situation is easy to overcome.

> Actually, I think that was the difficult part in the tunneling branch:
> aligning the gst and omx components.

Yes, it is.

> > Maybe one day the interop profile won't be necessary, when the
> > pBuffer in the buffer header loses its read-only property...
>
> Even in that case tunneling might help; there would be fewer memory
> allocations for gst buffers.

Yes, that might be true. And there could also be situations where the tunneled components never use general-purpose memory ;)

> I think you are basing your ideas on the assumption that in order to
> support A/V sync properly, gst should do it by receiving real buffers
> on the element. Even if you have a video decoder sink that receives
> real gst buffers in order to do A/V sync, the sync will be done
> _before_ the decoding, so by the time the buffers reach the renderer
> some time will have been spent, and the sync will be lost.
>
> In discussions with different parties, I believe the consensus has
> been that mapping the OMX clock to a GST clock is the right way to go.
> That way you have all the flexibility of gst pipelines, omx
> efficiency, and proper A/V sync.

You're right. When I tried to find a way to map those clocks I ran into some creepy problems in the OMX clock implementation, so I dropped it, but yes, I remember it was possible... in theory at least.

Nevertheless, in the supersink solution the mapping won't be necessary, because the supersink will only attend to the (not-exposed) OMX clock :)

> I'm a pragmatist, so I'm not saying that approach will work, but I
> don't see any reason why it shouldn't, so I think we should try it
> first.

I agree.

Cheers!

vmjl |
From: Felipe C. <fel...@no...> - 2008-08-05 12:02:23
|
Hi Victor,

On Mon, 2008-08-04 at 13:35 +0200, ext Victor Manuel Jáquez Leal wrote:
> Hi all,
>
> Last Friday I glanced at the tunneling-v3 branch on github, and there
> are some details I could not grasp, which worry me:
>
> 1) AFAIK, for a pipeline to change from prerolling to playing, the
> sink must receive at least one buffer. If you have an OMX sink in a
> tunnel with the previous element, the pipeline will never leave the
> prerolling state.

Well, it _does_ leave the prerolling state; I don't know why. I'll have to investigate.

> 2) When the first gst buffer traverses its part of the pipeline, the
> stream negotiation is done among the linked elements. So if no buffers
> traverse some portion of the pipeline because those elements are
> tunneled, the elements in the tunnel will never inform the GStreamer
> client application about their real configured caps.

When the omx component issues a settings-changed event, the caps are properly updated. If it doesn't, then yeah, that might be an issue, although I don't think applications really make use of such data.

> > The gst base sink requires buffers in order to do A/V sync. If the
> > sink doesn't receive the buffers, then it doesn't do the sync, but
> > it still works.
>
> But, as I said, the pipeline won't leave the prerolling state in the
> case of a tunneled omx sink.

But it does! (Not sure why.)

> Yes, the supersink solutions are not flexible, but they might provide
> an effective solution to a common use case.
>
> > I don't think all the omx pipelines can be built based on the caps.
>
> Not all of them, but if you have fixed hardware (as happens in the
> embedded world) you'll only need the input stream caps to build up the
> OMX pipeline to render the stream.

It's not that fixed... new DSP tasks can be loaded on the TI DSP, for example.

> Anyway, this is not a real argument; it just works around a problem in
> the supersink concept.
>
> That's another issue I couldn't figure out in the tunneling-v3 branch.
>
> According to the spec (page 122, Figure 3-10, "State Transition to
> Idle in the Case of Tunneled Components"), if you change a component
> that is not a "buffer supplier" to the Idle state, the CommandStateSet
> callback won't be triggered until the other component in the tunnel
> has changed to Idle as well.

Unless the ports have been disabled.

> AFAIK, each state change in gst-openmax is done sequentially:
> change_state, then wait_for_state. That can cause problems when the
> component that is the buffer supplier in the tunnel is the later one
> in the tunnel: you'll get a deadlock waiting for the first component
> to change its state. And that could be the case, for example, for a
> tunneled video sink.

Again, only when the ports are enabled.

In any case, after a discussion with Frederik from NXP I decided to try something different: now, for the special case of the Idle state, gst-openmax does not wait for the state change until the next state:

http://github.com/felipec/gst-openmax/commit/3e4fc57893a876206d381d444c97f193614e7d51

> In the case of a supersink, that situation is easy to overcome.
>
> > Actually, I think that was the difficult part in the tunneling
> > branch: aligning the gst and omx components.
>
> Yes, it is.
>
> Yes, that might be true. And there could also be situations where the
> tunneled components never use general-purpose memory ;)

True.

> You're right. When I tried to find a way to map those clocks I ran
> into some creepy problems in the OMX clock implementation, so I
> dropped it, but yes, I remember it was possible... in theory at least.
>
> Nevertheless, in the supersink solution the mapping won't be
> necessary, because the supersink will only attend to the (not-exposed)
> OMX clock :)

But again, if you have a video decoder + video sink in omx, and an audio sink in GStreamer, then the A/V sync will be done by GStreamer at the supersink level. That means if the video decoding takes 1 second, you would have 1 second of video delay.

Best regards.

--
Felipe Contreras |
From: Felipe C. <fel...@no...> - 2008-08-05 12:26:01
|
On Tue, 2008-08-05 at 15:01 +0300, Felipe Contreras wrote:
> In any case, after a discussion with Frederik from NXP I decided to
> try something different: now, for the special case of the Idle state,
> gst-openmax does not wait for the state change until the next state:
>
> http://github.com/felipec/gst-openmax/commit/3e4fc57893a876206d381d444c97f193614e7d51

Er, updated:

http://github.com/felipec/gst-openmax/commit/d28cb3ca98ed8f0e6226dc21eefac56d8fe49aa4

--
Felipe Contreras |
From: Frederik V. <fre...@gm...> - 2008-08-05 14:00:16
|
> > 1) AFAIK, for a pipeline to change from prerolling to playing, the
> > sink must receive at least one buffer. If you have an OMX sink in a
> > tunnel with the previous element, the pipeline will never leave the
> > prerolling state.
>
> Well, it _does_ leave the prerolling state; I don't know why. I'll
> have to investigate.

At the moment it does preroll because we return state-change success when the sink is requested to move to the paused state (instead of returning the required "state change in progress"). This is not correct, though (preroll completion is reported too soon); the correct idea was that the first buffer arriving in the omx sink would be a marked buffer that triggers an event letting GST know that prerolling has completed. Unfortunately, that's not in the code yet. But it's similar to how we deal with the EOS event with a tunneled OMX sink. |
From: V. M. J. L. <ce...@gm...> - 2008-08-05 14:13:54
|
Wow! Quite clever. I'm starting to like the implementation... :D

vmjl

On Tue, Aug 5, 2008 at 4:00 PM, Frederik Vernelen <fre...@gm...> wrote:
> > > 1) AFAIK, for a pipeline to change from prerolling to playing, the
> > > sink must receive at least one buffer. If you have an OMX sink in
> > > a tunnel with the previous element, the pipeline will never leave
> > > the prerolling state.
> >
> > Well, it _does_ leave the prerolling state; I don't know why. I'll
> > have to investigate.
>
> At the moment it does preroll because we return state-change success
> when the sink is requested to move to the paused state (instead of
> returning the required "state change in progress"). This is not
> correct, though (preroll completion is reported too soon); the correct
> idea was that the first buffer arriving in the omx sink would be a
> marked buffer that triggers an event letting GST know that prerolling
> has completed. Unfortunately, that's not in the code yet. But it's
> similar to how we deal with the EOS event with a tunneled OMX sink. |