I am pretty new to the whole voice/speech recognition concept. I have gone through a couple of threads here on the forum saying that MARF (unlike CMUSphinx) doesn't recognise text from speech. From what I have read about speech recognition, it's all about computing the most "likely" utterance using an n-gram model (the bigram apparently being the most popular). So how does MARF recognise speech without a text dictionary?
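Just to check my understanding of the n-gram idea: here is a minimal sketch (toy corpus and maximum-likelihood estimates, no smoothing; everything here is illustrative, not MARF's or Sphinx's actual implementation) of how a bigram language model scores candidate transcriptions:

```python
from collections import Counter

# Toy corpus; a real recogniser trains on a large text corpus.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count unigrams and bigrams (plain maximum-likelihood estimates).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1, w2):
    """P(w2 | w1) = count(w1 w2) / count(w1)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

def sentence_prob(words):
    """Score a word sequence as the product of its bigram probabilities."""
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

# A decoder would prefer the candidate with the higher score.
print(sentence_prob("the cat sat".split()))  # 0.25
print(sentence_prob("the rat sat".split()))  # 0.0 ("rat sat" never occurs)
```

My understanding is that this is the part that needs a text dictionary/corpus, which is why I'm confused about how MARF gets by without one.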
MARF also seems to have speaker-independent recognition applications. Is this similar to Sound2Sound Technology?
If so, can someone point me to a place where I can read more about the procedures employed in achieving this?
Secondly, what is MARF's current recognition accuracy?
Thanks in advance,