Hi,
I'm finally getting PocketSphinx integrated, but it's taking 5 to 10 seconds to return recognized words on a desktop computer, and the words are very wrong.
It's currently using a large dictionary; ultimately I plan to use a compact dictionary that changes according to context.
My dev suggests that that is normal with a large dictionary. What is your take? Is 10 seconds to recognize "hello" (wrongly) normal?
The current version runs 5 times faster than real time on a normal CPU, so you did something wrong.
Hi Nickolay,
I was able to get the PocketSphinx test app set up in a repository. Would you mind taking a look to see what is causing the 10-second recognition?
https://github.com/jamiebullock/sphinx-max-test
Hello
You have two issues
1) You inserted a nanosleep, so your decoder is mostly sleeping, as you told it to
2) You use a continuous model instead of the default PTM model. The continuous model is significantly slower.