In the HTK or CMU Sphinx toolkit:

Given a speech signal, the decoder returns the corresponding text transcription. This is the decoding problem.

Given a speech signal and its corresponding text transcription, the aligner returns the onset and offset time of each word. This is the alignment problem.
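The difference between the two problems above can be sketched with a toy example. This is not the HTK or Sphinx implementation; it is a made-up illustration in which per-frame word scores stand in for acoustic HMM likelihoods. The key point it shows: in forced alignment the word sequence is fixed in advance, and the only thing searched for is where each word starts and ends.

```python
def forced_align(frame_scores, word_seq):
    """Toy forced alignment by dynamic programming.

    frame_scores[t][w] = log-score of word w at frame t (made-up data,
    standing in for acoustic likelihoods). word_seq is the known
    transcription. Returns ([(word, onset_frame, offset_frame), ...],
    total_score). Each word must occupy at least one frame.
    """
    T, W = len(frame_scores), len(word_seq)
    NEG = float("-inf")
    # dp[t][i] = best score of any alignment of frames 0..t that is
    # currently inside word i; back[t][i] = word index at frame t-1.
    dp = [[NEG] * W for _ in range(T)]
    back = [[0] * W for _ in range(T)]
    dp[0][0] = frame_scores[0][word_seq[0]]
    for t in range(1, T):
        for i in range(W):
            stay = dp[t - 1][i]                       # remain in word i
            enter = dp[t - 1][i - 1] if i > 0 else NEG  # advance from word i-1
            if stay >= enter:
                dp[t][i], back[t][i] = stay, i
            else:
                dp[t][i], back[t][i] = enter, i - 1
            dp[t][i] += frame_scores[t][word_seq[i]]
    # Backtrace from the last word at the last frame to recover
    # the onset/offset frame of each word.
    segs, i, end, t = [], W - 1, T - 1, T - 1
    while t > 0:
        prev = back[t][i]
        if prev != i:                     # word boundary at frame t
            segs.append((word_seq[i], t, end))
            end, i = t - 1, prev
        t -= 1
    segs.append((word_seq[i], 0, end))
    return list(reversed(segs)), dp[T - 1][W - 1]


# Hypothetical data: frames 0-2 match "a" well, frames 3-5 match "b".
scores = [{"a": 0.0, "b": -5.0}] * 3 + [{"a": -5.0, "b": 0.0}] * 3
print(forced_align(scores, ["a", "b"]))
# → ([('a', 0, 2), ('b', 3, 5)], 0.0)
```

A decoder would additionally have to search over all possible word sequences (constrained by a language model); here that outer search is gone because the transcription is given.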
Is my understanding correct?

Are "forced alignment" and the alignment problem the same thing, or not?

Alignment is independent of decoding; the accuracy of the trained models is tested using the decoder. Please help me.
In the configuration file there is an option, Cfg_forcedalign, whose default value is set to "no". In which case is this used?

Thanks in advance.