Is it possible to feed a live stream of spoken word to pocketsphinx? As far as I can see, pocketsphinx either takes a microphone (-inmic) or a file (-infile) as input.
This question might be a bit premature in the sense that I currently do not know exactly what the stream looks like, apart from the fact that it is sent to me over Ethernet. However, maybe someone can tell me whether this is possible in principle, how it could be realized, what requirements the live stream would have to meet, etc.
Thanks,
sebbesen
You have to run a server; you can write it in your preferred language - Python, Ruby, C++, Java.
You wait for a connection, accept the data, feed it into the decoder, and then send the result back.
Here is an example in Ruby using GStreamer technology:
https://github.com/alumae/ruby-pocketsphinx-server
Python should also be easy to start with.
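To make the "run a server" idea concrete, here is a minimal sketch in Python (not the code from the Ruby project above). It assumes the pocketsphinx Python bindings with their bundled default model, and that the client streams raw 16 kHz, 16-bit mono PCM over a plain TCP socket and closes the connection when the utterance is finished; the host, port, and framing are placeholder choices you would adapt to your actual stream.

# Minimal sketch: accept raw PCM audio over TCP, feed it to pocketsphinx,
# and send the recognized text back to the client.
# Assumptions: 16 kHz 16-bit mono PCM, client closes the socket at end of
# utterance, default pocketsphinx model (pass your own model paths otherwise).
import socket
from pocketsphinx import Decoder

HOST, PORT = "0.0.0.0", 9000   # placeholder address and port

decoder = Decoder()            # default acoustic/language model

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    while True:
        conn, addr = srv.accept()                 # wait for a connection
        with conn:
            decoder.start_utt()
            while True:
                chunk = conn.recv(4096)           # accept data from the stream
                if not chunk:
                    break                         # client finished sending
                decoder.process_raw(chunk, False, False)  # feed it into the decoder
            decoder.end_utt()
            hyp = decoder.hyp()
            result = hyp.hypstr if hyp else ""
            conn.sendall(result.encode("utf-8"))  # send the result back

A real setup would add its own framing or use a GStreamer pipeline like the linked project does, but the loop is the same: wait for a connection, accept data, feed it to the decoder, reply with the hypothesis.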
I am also interested in this; it is my ultimate goal. Could you please explain what you mean by "run a server"?