I want to have my LM learn as I use it. For example, if the model can't understand what I'm saying, I can type the correct word and have the model update with the new audio. Is that possible?
You can improve the accuracy of recognition with a custom language model:
https://cmusphinx.github.io/wiki/tutoriallm/
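Once you have built a language model following that tutorial, you load it into the decoder together with a matching pronunciation dictionary. A rough sketch with the pocketsphinx Python bindings follows; the acoustic model, LM, and dictionary paths are placeholders for your own files:

```python
# Sketch with the pocketsphinx Python bindings (pip install pocketsphinx).
# The model, LM, and dictionary paths are placeholders for your own files.
from pocketsphinx import Decoder

config = Decoder.default_config()
config.set_string('-hmm', 'en-us')            # acoustic model directory
config.set_string('-lm', 'my_domain.lm')      # custom language model
config.set_string('-dict', 'my_domain.dict')  # matching pronunciation dictionary
decoder = Decoder(config)

decoder.start_utt()
with open('utterance.raw', 'rb') as f:        # 16 kHz, 16-bit mono raw PCM
    while True:
        buf = f.read(1024)
        if not buf:
            break
        decoder.process_raw(buf, False, False)
decoder.end_utt()

if decoder.hyp() is not None:
    print(decoder.hyp().hypstr)
```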
You can also teach the model with your own audio through acoustic model adaptation:
https://cmusphinx.github.io/wiki/tutorialadapt/
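The adaptation tutorial walks you through producing either an MLLR transform or a MAP-adapted copy of the acoustic model from your own recordings and transcripts. As a sketch of the MLLR route only: the transform file the tutorial produces (it calls it mllr_matrix) can be loaded at decode time with the -mllr option, with the other paths being placeholders as above:

```python
# Sketch only: load an MLLR transform produced by the adaptation tutorial.
# 'mllr_matrix' is the output file name used in that tutorial; adjust paths to yours.
from pocketsphinx import Decoder

config = Decoder.default_config()
config.set_string('-hmm', 'en-us')
config.set_string('-lm', 'my_domain.lm')
config.set_string('-dict', 'my_domain.dict')
config.set_string('-mllr', 'mllr_matrix')     # speaker adaptation transform
decoder = Decoder(config)
```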
In general, you can get much better results, even without adaptation, from more advanced and modern toolkits than pocketsphinx, such as Vosk.
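For comparison, a minimal Vosk sketch looks like this, assuming a 16 kHz mono WAV file and a model directory downloaded from the Vosk site; the file and model paths are placeholders:

```python
# Minimal Vosk example (pip install vosk); paths are placeholders.
import json
import wave

from vosk import Model, KaldiRecognizer

wf = wave.open('utterance.wav', 'rb')         # 16 kHz, 16-bit mono PCM WAV
model = Model('vosk-model-small-en-us')       # unpacked model directory
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)

print(json.loads(rec.FinalResult())['text'])
```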
That at least points me in a good direction for further exploration. And thanks for the suggestion re Vosk. I'm at the "Don't know what I don't know" stage and this also helps greatly. Thank you for taking the time.