When doing a test speech recognition task with an FSG grammar (actually JSGF, but converted to FSG; JSGF is just as slow), the FSG grammar parsing takes a really long time and a lot of CPU power "Computing transitive closure for null transitions". The full log is
Since it does this every time, is there a way I can convert the grammar to a format where this information is pre-computed? I noticed that DMP language models start up very quickly, but I can't seem to find any way to convert an FSG to a DMP. With my FSG, this step takes 30 seconds, and then the actual recognition takes about half a second.
(My FSG grammar is not a "toy" grammar, but it is pretty simple: it's basically just combinations of a few verbs with a few nouns, with a few filler words mixed in. I know it is reasonable to expect it to start up quickly, because it is a direct translation of an HTK grammar which started up in less than half a second.)
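For illustration, a grammar of the shape described above (a few verbs combined with a few nouns, with optional filler words) might look like the following in JSGF. The grammar name, rule names, and vocabulary here are made up for the example; they are not the actual grammar from this thread:

```jsgf
#JSGF V1.0;
grammar commands;

<filler>  = please | now;
<verb>    = open | close | start | stop;
<noun>    = door | window | engine | light;

public <command> = [<filler>] <verb> [the] <noun> [<filler>];
```

Optional elements like `[<filler>]` and `[the]` are what introduce null transitions into the converted FSG, which is where the closure computation mentioned in the log comes in.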
Please share the grammar.
JSGF v1.0;
Hello
Thanks for the information. I've just committed a fix to speed up the closure computation. It should help; please update sphinxbase.
AFAIK HTK doesn't do the closure computation, which explains why it starts faster but works slower.
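For readers wondering what the "transitive closure for null transitions" step actually computes: for every state of the FSG, it finds the set of states reachable through null (epsilon) transitions alone, so the decoder never has to chase epsilon chains at search time. This is a minimal Python sketch of that idea, not the actual sphinxbase implementation (which is in C and, per the fix above, uses a faster formulation):

```python
from collections import defaultdict, deque

def null_closure(num_states, null_transitions):
    """For each state, compute the set of states reachable via
    null (epsilon) transitions only, using a BFS from each state.
    `null_transitions` is a list of (src, dst) state pairs."""
    adj = defaultdict(set)
    for src, dst in null_transitions:
        adj[src].add(dst)

    closure = {}
    for s in range(num_states):
        seen = {s}          # every state trivially reaches itself
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        closure[s] = seen
    return closure

# Example: 0 -> 1 -> 2 and 3 -> 0 are null transitions.
c = null_closure(4, [(0, 1), (1, 2), (3, 0)])
print(c[3])  # state 3 reaches 0, 1 and 2 through epsilon chains
```

Doing one BFS per state is O(states x transitions) in the worst case, which hints at why a large grammar with many optional words can spend noticeable time in this step and why precomputing or caching the result pays off.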