For example:
What are the -4.565931 and -0.3361348 for?
And in this other case:
What is the purpose of the <s>?
http://cmusphinx.sourceforge.net/wiki/arpaformat
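In short, every n-gram line in an ARPA file is a base-10 log probability, then the n-gram itself, then optionally a base-10 log backoff weight. So in a line such as "-4.565931 someword -0.3361348" (someword is just a placeholder here), the first number is log10 of the word's probability and the second is its backoff weight, which is applied when a longer n-gram extending it is not found in the model. <s> and </s> mark the start and end of a sentence. An illustrative snippet with made-up values, not the file from the question:

```
\data\
ngram 1=4
ngram 2=2

\1-grams:
-0.7782  </s>
-99.0000 <s>    -0.3010
-1.0792  hello  -0.3010
-1.0792  world  -0.3010

\2-grams:
-0.3010 <s> hello
-0.3010 hello world

\end\
```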
Okay, now I see it is a probability.
In the case of a language model with one word and a dictionary with one word, what is the difference between giving that only word a probability of 0.1, 0.5, or 1.0? Is there any difference at all?
There is no difference; that single word will always be recognized.
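A tiny illustration with made-up numbers (not real decoder code): with a single word in the dictionary there is only one hypothesis, so changing its language-model probability shifts its score but can never change which word is returned.

```python
import math

acoustic_score = -120.0          # made-up acoustic log-score for the utterance
for lm_prob in (0.1, 0.5, 1.0):  # the three values from the question
    # only one candidate word exists, so it wins regardless of lm_prob
    total = acoustic_score + math.log10(lm_prob)
    print(f"P(word)={lm_prob}: total score {total:.2f} -> recognized: 'word'")
```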
So, if the program hears a sound and considers that 4 words may match that sound, will it return the word with the highest probability?
And for the word with the lowest probability in the language model, is the only way for the program to return it to pronounce it so clearly that it cannot be confused with any other word in the .lm?
Yes
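As a sketch of the idea with made-up scores (not pocketsphinx internals or its real score scale): the decoder combines each candidate's acoustic score with a weighted language-model log-probability and returns the best total. Note that the hypothetical word "fohr" below has the best acoustic score yet still loses because of its low LM probability, which is why a low-probability word has to be pronounced very clearly to be recognized.

```python
import math

LM_WEIGHT = 10.0  # hypothetical language-model scale factor

# hypothetical candidates: (acoustic log-score, LM probability)
candidates = {
    "four": (-100.0, 0.40),
    "for":  (-101.0, 0.35),
    "fore": (-102.0, 0.20),
    "fohr": ( -99.0, 0.05),
}

def total_score(acoustic, lm_prob):
    """Combine acoustic evidence with the weighted LM log10-probability."""
    return acoustic + LM_WEIGHT * math.log10(lm_prob)

best = max(candidates, key=lambda w: total_score(*candidates[w]))
print("recognized:", best)  # prints "four" with these made-up numbers
```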