edu.cmu.sphinx.api.Microphone doesn't have a way to clear its buffered audio data (which would simply call `line.flush()` on the underlying line).
It really needs a method that does this, so that edu.cmu.sphinx.api.Microphone is on par with edu.cmu.sphinx.frontend.util.Microphone (which does have a clear() method).
The lack of a clear/flush method means that when an application has multiple recognition contexts (such as a dictation context and a grammar context) and switches between them, the two contexts receive each other's buffered line data after the switch, which makes switching between contexts effectively unusable.
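The fix described above would essentially delegate to `TargetDataLine.flush()`. A minimal, self-contained sketch of such a wrapper using only `javax.sound.sampled` (the class and method names here are illustrative, not the actual sphinx4 API):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

/**
 * Illustrative microphone wrapper with the missing clear() method.
 * edu.cmu.sphinx.api.Microphone wraps a TargetDataLine internally in a
 * similar way; this sketch only shows what clear() would have to do.
 */
public class FlushableMicrophone {
    private final TargetDataLine line;

    public FlushableMicrophone(float sampleRate, int sampleSize,
                               boolean signed, boolean bigEndian)
            throws LineUnavailableException {
        // Mono capture format, matching the parameters sphinx4 typically uses.
        AudioFormat format =
                new AudioFormat(sampleRate, sampleSize, 1, signed, bigEndian);
        line = AudioSystem.getTargetDataLine(format);
        line.open(format);
    }

    public void startRecording() { line.start(); }

    public void stopRecording() { line.stop(); }

    /** Discards audio buffered on the line since the last read. */
    public void clear() { line.flush(); }

    public AudioInputStream getStream() {
        return new AudioInputStream(line);
    }

    public void close() { line.close(); }
}
```

With a method like `clear()`, the application could flush the stale audio each time it switches recognition contexts, so the next context starts from a clean buffer.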
Thank you for the report; indeed the microphone code needs love, and we definitely need this clear method (a patch is welcome). We also need to rework the decoder itself so that switching models doesn't require switching decoders; instead, as in pocketsphinx, it should just switch search modes.
+1. I'm unable to run the dialog demo because of this issue; see http://stackoverflow.com/questions/29121188/cant-access-microphone-while-running-dialog-demo-in-sphinx4-5prealpha#comment46488376_29121188
Is there any update on this? I ran into the same problem when playing around today.
Me too.
The microphone is not released as a resource by sphinx after recognition. Has the above bug been fixed? Please give me a solution, as I have to use the microphone and recognize speech many times, in multiple places in my application.
I'm getting the same error using sphinx4-5prealpha. This seems like a fundamental issue that needs resolving, yet it persists more than five years after being reported. Is this project dormant? Is there a workaround that does not involve building my own version of sphinx? I'm interested in speech-to-text for a desktop application I'm building. What are most people doing to resolve this?
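As a stopgap until a clear() method lands, one possible workaround is to reach the private line inside edu.cmu.sphinx.api.Microphone via reflection and flush it directly. The field name "line" is an assumption about sphinx4's internals and may differ between versions, so treat this as a sketch, not a supported API:

```java
import java.lang.reflect.Field;
import javax.sound.sampled.TargetDataLine;

/**
 * Hypothetical workaround: flush the private TargetDataLine inside a
 * sphinx4 Microphone instance via reflection. The field name "line" is
 * an assumption about the library's internals and may vary by version.
 */
public final class MicrophoneFlusher {
    private MicrophoneFlusher() {}

    public static void clear(Object microphone) throws Exception {
        // Look up the (assumed) private TargetDataLine field and flush it.
        Field f = microphone.getClass().getDeclaredField("line");
        f.setAccessible(true);
        ((TargetDataLine) f.get(microphone)).flush();
    }
}
```

An application could call `MicrophoneFlusher.clear(microphone)` right before switching recognition contexts; if the field name doesn't match the installed sphinx4 version, the call fails with a NoSuchFieldException rather than silently doing nothing.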