"Hardware Accelerated Video Decoding" is the when a video-playback software-application offload portions of the video decoding process to the GPU (Graphic) hardware, it does this by executing specific code algorithms on the GPU. (In theory this process should also reduce bus bandwidth requirements).
FFmpeg (and MPlayer) should probably be the reference and test platform for all hardware-accelerated video decoding development under Linux. FFmpeg is used as the codec suite (video and audio decoders) inside most open-source video players, including MPlayer.
Video decoders that I would like to see accelerated via CTM (under Linux) are MPEG-2, MPEG-4 SP/ASP, MPEG-4 AVC (H.264), and VC-1 (a standardized form of WMV3, also known as WMV9). FFmpeg supports all of these.
Video decoding stages that could be accelerated:
* Motion compensation (mo comp)
* Inverse Discrete Cosine Transform (iDCT)
* Inverse Telecine 3:2 and 2:2 pull-down correction
* Bitstream processing (CAVLC/CABAC)
* In-loop deblocking
* Inverse quantization (IQ)
* Variable-Length Decoding (VLD), more commonly known as slice-level acceleration
* Spatial-temporal de-interlacing (plus automatic interlaced/progressive source detection)
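To make one of these stages concrete, here is a minimal sketch of the 8x8 inverse DCT used by MPEG-2 and MPEG-4 decoders, written as two passes of a naive 1-D iDCT. This is a reference-style illustration, not the fast factorization a real decoder (or a GPU kernel) would actually use:

```python
import math

def idct_1d(coeffs):
    """Naive 1-D inverse DCT (DCT-III with orthonormal scaling)."""
    n = len(coeffs)
    return [
        sum(
            (math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
            * coeffs[k]
            * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
            for k in range(n)
        )
        for i in range(n)
    ]

def idct_2d(block):
    """2-D iDCT on an 8x8 block: 1-D iDCT over rows, then over columns."""
    rows = [idct_1d(row) for row in block]
    cols = [idct_1d(col) for col in zip(*rows)]
    # Transpose back so the result has the same row/column layout as the input.
    return [list(row) for row in zip(*cols)]
```

The appeal for GPU offload is that this per-block arithmetic is identical and independent for every 8x8 block in a frame, so the thousands of blocks per frame can be transformed in parallel.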
PS! Best would be if CTM could be integrated as a plug-in behind the XvMC API, as that is the de facto hardware-accelerated video decoding API for Linux. XvMC is the Linux/UNIX equivalent of the Microsoft Windows DirectX Video Acceleration (DxVA) API.