I realise that Simon has a background in medical deployments, and that the funding it receives comes from organisations that want non-standard features.
However, whilst new speech recognition technology for the disabled and recovering is an amazing and very important aspect in itself, what will have the most impact for most users worldwide is Simon having full dictation functionality: the ability to turn endless complicated sentences into text as they're being spoken.
As I understand it, Simon doesn't currently have any features like this; it can only handle single words at a time.
So my question is whether the Simon project ever intends to implement full dictation capabilities, akin to (and hopefully better than) Dragon NaturallySpeaking. If the intention is to implement this, then what kind of priority is it? Will it only be addressed once the app fulfils all the needs of certain medical deployments? Is there any kind of timeline for implementing it, or for starting to focus on it (months, years, several years, etc.)?
Simon seems like such a good foundation for "the FOSS speech recognition app", which will no doubt be a killer app when it comes, that I'm keen to know what Simon plans in this direction.
If the project needs more resources to focus on this aspect of the app (I've read that the dev team is very small) then this is something that the community needs to address.
Basically, if dictation functionality is not already planned and accommodated within existing timelines, what does the project need to change this? A list of what you need to accomplish it could help community members provide what's missing, whether it be funds, samples, developer time, testing, etc.
I am trying to set up a speech recognition system for my virtual world that uses FreeSwitch as the voice server. Would it be possible to harvest language samples from my system and automatically feed them to VoxForge? We will also be using Open Babble Fish to provide language translation, so samples in multiple languages could also be donated.
Well, yes, that would theoretically be possible, although there would be a couple of issues:
* The quality would probably not be that high (just a guess)
* You would have to get permission from all users to do this
However, as discussed above, Simon cannot recognize free sentences (dictation). The current versions are limited to command and control only.
That being said, I am sure that VoxForge would be very grateful for this contribution, as your use case would provide spontaneous speech that is otherwise quite hard to come by. Keep in mind, though, that you would need to transcribe those samples, which would probably make the process quite time-consuming.
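To give a rough idea of what the transcription step involves: submissions to corpora like VoxForge pair each audio clip with a line of prompt text. The sketch below is only an illustration of that pairing, not an official VoxForge tool; the clip names, the `build_prompts` helper, and the exact prompt-file format are assumptions for the example.

```python
# Minimal sketch: pair recorded clips with their transcriptions in a
# simple "prompts"-style listing (one "<clip-name> <TEXT>" line per
# recording). The names and format here are illustrative assumptions.

def build_prompts(transcripts: dict) -> str:
    """transcripts maps a clip base name (no extension) to its spoken text."""
    lines = []
    for clip, text in sorted(transcripts.items()):
        # Prompt text is commonly written in upper case in such listings.
        lines.append("{} {}".format(clip, text.upper()))
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical clips harvested from a FreeSwitch session
    demo = {"call-0001": "hello world", "call-0002": "open the door"}
    print(build_prompts(demo))
```

The hard part, of course, is producing the transcription text itself, which for spontaneous speech has to be done by a human listener.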
Feel free to contact me if you are interested…
Speechtotext: I never thanked you for your extensive reply to my questions.
Thank you very much for taking the time! I check the simon-listens website several times a week, and introduce it to other developers as the most exciting FOSS project that I'm aware of at the moment.