Is it possible to tell the Dirac encoder to generate an I frame?
When a decoding client connects to an encoding server to play live video,
it is useful to tell the encoder to generate an I frame, so that the client
can display the video as quickly as possible (instead of waiting for an I frame
that the encoder would generate sometime in the future on its own schedule).
I assume that the Dirac decoder can only start decoding from an I frame.
Is my assumption wrong?
Thanks
Rafael
No, you can't do that, though I agree it could be a useful feature for the encoder API to have. I'll add it to the list. Do you know what the format of the request would be - do streaming protocols support this?
The decoder doesn't yet support random access at an I frame, although the bitstream does. It's not a difficult thing to do but it's not reached the top of the priority list yet :-) We're in the process of refactoring the decoder to have two layers, a parsing layer to handle things like this and a decoding layer, so it'll fall out as a consequence of a much bigger task.
cheers
Thomas
In my experience with streaming protocols (RTP), when a player connects to a UDP stream
it takes some time until video is rendered on the screen. Sometimes the player shows
random blocks of video, but the whole picture is rendered only upon receipt of an I frame.
In applications where 'live video' means 'close to real time' this behaviour is unacceptable,
so I am not using standard streaming protocols.
In my implementation, when a client connects to an encoding server and requests a video stream,
the encoder is asked to generate an I frame, which it does as soon as possible.
The client still waits for an I frame before it starts rendering,
but the overall time from the request to the first frame seen on the monitor is greatly
reduced.
Of course, the encoder API must support such a request.
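To make the idea concrete, here is a minimal sketch of what such a request could look like on the encoder side. This is not the Dirac API (the thread establishes it has no such call); the class and method names are hypothetical, and the point is only that a one-shot "force I frame" flag folds cheaply into an existing encode loop alongside the regular GOP schedule.

```cpp
#include <cassert>

// Hypothetical API sketch - not actual Dirac code.
enum class FrameType { Intra, Inter };

class Encoder {
public:
    // Called by the server when a new client connects and needs
    // a decodable starting point as soon as possible.
    void requestIFrame() { i_frame_requested_ = true; }

    // Called once per input frame; decides the coded frame type.
    FrameType encodeNext(int frame_number) {
        bool make_intra = i_frame_requested_ ||
                          (frame_number % gop_length_ == 0);
        i_frame_requested_ = false;  // honour the request exactly once
        return make_intra ? FrameType::Intra : FrameType::Inter;
    }

private:
    bool i_frame_requested_ = false;
    int gop_length_ = 12;  // regular I-frame interval (assumed value)
};
```

The encoder simply promotes the next frame to intra when the flag is set, so the client's worst-case wait drops from a full GOP length to one frame interval, without disturbing the regular keyframe cadence.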
I wanted to add support for more video codecs in my system, but neither Theora nor Dirac
supports asking the encoder to generate an I frame.
I will wait until the API supports it.
Thanks for your reply.
Best Regards
Rafael