Is it possible to build an offline speech recognition application using the ReSpeaker Core module?
I am a student and new to this area. If it is possible, would anyone mind providing a tutorial or some sample steps to get started on a simple offline voice recognition Windows application?
Also, is it possible to program it in C?
Start with our tutorial http://cmusphinx.github.io/wiki/tutorial
Hi, I have tried the tutorial, but it is still not working from the command prompt. How do I connect the ReSpeaker device to pocketsphinx, and how do I configure it so I can start programming offline speech recognition?
You read the audio data from the device through the driver or through the userspace library, and then submit the data chunks to the decoder as described in the tutorial.
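For illustration, here is a minimal sketch of that loop in C against the pocketsphinx 5prealpha API, using the ad_* capture layer from sphinxbase to read raw PCM from the sound card and feeding it to the decoder chunk by chunk. The model paths and the ALSA device name for the ReSpeaker are assumptions; substitute the paths from your installation and the capture device reported by arecord -l.

#include <stdio.h>
#include <pocketsphinx.h>
#include <sphinxbase/ad.h>

int main(void)
{
    cmd_ln_t *config;
    ps_decoder_t *ps;
    ad_rec_t *ad;
    int16 buf[2048];
    int32 k;
    uint8 utt_started, in_speech;
    char const *hyp;

    /* Point the decoder at an acoustic model, language model and dictionary.
       These paths are placeholders for wherever your models are installed. */
    config = cmd_ln_init(NULL, ps_args(), TRUE,
                         "-hmm", "/usr/local/share/pocketsphinx/model/en-us/en-us",
                         "-lm", "/usr/local/share/pocketsphinx/model/en-us/en-us.lm.bin",
                         "-dict", "/usr/local/share/pocketsphinx/model/en-us/cmudict-en-us.dict",
                         NULL);
    ps = ps_init(config);

    /* Open the capture device at 16 kHz. The ReSpeaker shows up as a normal
       ALSA capture device; "plughw:1,0" is only an example name. */
    ad = ad_open_dev("plughw:1,0", 16000);
    ad_start_rec(ad);

    ps_start_utt(ps);
    utt_started = FALSE;
    for (;;) {
        /* Read a chunk of raw 16-bit PCM from the device... */
        if ((k = ad_read(ad, buf, 2048)) < 0)
            break;
        /* ...and hand it to the decoder. */
        ps_process_raw(ps, buf, k, FALSE, FALSE);

        /* The built-in voice activity detection marks utterance boundaries. */
        in_speech = ps_get_in_speech(ps);
        if (in_speech && !utt_started)
            utt_started = TRUE;
        if (!in_speech && utt_started) {
            ps_end_utt(ps);
            hyp = ps_get_hyp(ps, NULL);
            if (hyp != NULL)
                printf("Recognized: %s\n", hyp);
            ps_start_utt(ps);
            utt_started = FALSE;
        }
    }

    ad_close(ad);
    ps_free(ps);
    cmd_ln_free_r(config);
    return 0;
}

The decoder does not care whether the samples come from the ALSA driver, another sound backend, or your own userspace library; it only sees the 16 kHz, 16-bit mono chunks passed to ps_process_raw().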
@NicKolay,
Hi Nickolay, I am trying to adapt the model with the Russian language model, but I am getting this error:
Read ru_ru/mixture_weights [5147x1x32 array]
FATAL: "mod_inv.c", line 358: Number of feature streams in mixture_weights file 1 differs from the configured value 3, check the command line options
Do you have any idea, please?
You need to ask separate questions in separate threads.