I made a decent grammar for recognizing numbers in pocketsphinx.
Unfortunately, it's intolerably slow. Recognition is slow, and the application
(PocketSphinxAndroidDemo) takes a very long time to start up. To get around
that for now, I used the JSGF functions in Sphinx 4 to generate 200,000
sentences and then used cmuclmtk to make a language model. PocketSphinx
performs recognition very quickly with a language model.
Are there any command-line flags or other options that I can use to make
pocketsphinx perform recognition quickly with a grammar? I am already using
-backtrace false and -bestpath false.
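For reference, the workaround described above (generating sentences from the JSGF grammar, then building a language model with cmuclmtk) roughly corresponds to a pipeline like the following. The file names here are placeholders, not from the original post:

```shell
# Build an ARPA language model from the generated sentences with cmuclmtk
# (sentences.txt is a hypothetical file of the 200,000 generated sentences)
text2wfreq < sentences.txt | wfreq2vocab > numbers.vocab
text2idngram -vocab numbers.vocab -idngram numbers.idngram < sentences.txt
idngram2lm -vocab_type 0 -idngram numbers.idngram \
           -vocab numbers.vocab -arpa numbers.lm
```

The resulting numbers.lm can then be passed to pocketsphinx via the -lm flag instead of the grammar.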
Sphinx JSGF support is severely broken.
If your grammar is a simple network of numbers, draw it as a network, number
the nodes, and hand-edit an FSG file. It will run fast and accurately.
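For concreteness, a hand-written FSG along those lines might look like this minimal sketch: a two-state grammar accepting the word "one" or "two", where the grammar name, state numbering, and transition probabilities are made up for illustration:

```
FSG_BEGIN digits
NUM_STATES 2
START_STATE 0
FINAL_STATE 1
# One transition per word; each consumes the word and moves to the final state.
TRANSITION 0 1 0.5 one
TRANSITION 0 1 0.5 two
FSG_END
```

A file like this can be loaded with pocketsphinx's -fsg flag in place of a JSGF grammar.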
Please report the issues you have found. So far we are not aware of any
"broken" JSGF support.
Hmm... I don't think that hand editing will be an option. I made the numbers
grammar to test whether PocketSphinx would be slow with a small grammar, which
it was. For now I am using a generated LM, but any ideas on this issue would be
much appreciated.