Hi, I downloaded the zh_broadcastnews_16k_ptm256_8000 acoustic model and the Mandarin language model from
https://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/
Can these files be used with PocketSphinx on Android?
I modified the demo code, set the acoustic model path, dictionary path, and language model path to these files, and started recognition, but got no result.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
recognizer = defaultSetup()
        .setAcousticModel(new File(modelsDir, "madan"))
        .setDictionary(new File(modelsDir, "madan" + "/madan.dic"))
        .setSampleRate(16000)
        .setBoolean("-remove_noise", true)
        .setKeywordThreshold(1e-5f)
        .getRecognizer();
File languageModel = new File(modelsDir, "madan" + "/madan.lm");
recognizer.addNgramSearch("forecast", languageModel);
recognizer.addGrammarSearch("forecast", TestGrammar);
recognizer.addListener(this);

@Override
public void onBeginningOfSpeech() {
    // "识别未成功" means "recognition not successful"
    rs = new Results("识别未成功", rsFuzzy);
}

@Override
public void onPartialResult(Hypothesis hypothesis) {
    text = hypothesis.getHypstr();
}

public void onEndOfSpeech(Hypothesis hypothesis) {
    text = hypothesis.getHypstr();
}
~~~~~~~~~~~~~~~~~
Is that correct?
Also, is it OK if I use only the acoustic model, together with my own Mandarin language model and grammar trained with cmuclmtk?
Thanks!! :)
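For reference, a typical cmuclmtk language-model build looks roughly like the pipeline below. The file names (corpus.txt, madan.lm, etc.) are placeholders, not taken from this thread:

~~~~~~~~~~~~~~~~~
# Count word frequencies in the training corpus (one sentence per line)
text2wfreq < corpus.txt > corpus.wfreq
# Build a vocabulary from the frequency list
wfreq2vocab < corpus.wfreq > corpus.vocab
# Convert the corpus to word-id n-grams using that vocabulary
text2idngram -vocab corpus.vocab -idngram corpus.idngram < corpus.txt
# Estimate the ARPA-format language model
idngram2lm -vocab_type 0 -idngram corpus.idngram -vocab corpus.vocab -arpa madan.lm
~~~~~~~~~~~~~~~~~

The resulting ARPA file is what addNgramSearch expects; make sure every word in it also appears in the dictionary passed to setDictionary.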
Last edit: Nickolay V. Shmyrev 2014-05-19
You also need to start listening with

~~~~~~~~~~~~~~~~~
recognizer.startListening(searchName);
~~~~~~~~~~~~~~~~~

just like in the Android demo.

Yes, I did startListening.
~~~~~~~~~~~~~~~~~
recognizer.addNgramSearch("forecast", languageModel);
recognizer.addNgramSearch("forecast", languageModel);
~~~~~~~~~~~~~~~~~

OR

~~~~~~~~~~~~~~~~~
recognizer.addGrammarSearch("forecast", TestGrammar);
recognizer.addGrammarSearch("forecast", TestGrammar);
~~~~~~~~~~~~~~~~~
But in language-model mode hypothesis.getHypstr() is always null,
and in grammar mode I always get "Final result does not match the grammar in frame ...".
Is there something I did wrong?
Last edit: TZDWSY 2014-05-19
I'm not sure why you posted the same line twice here. startListening starts with a startListening call, not with an addGrammarSearch call.
Also, the grammar search and the ngram search must have different names, not the same name "forecast".
To get a more precise answer it's better to share full and complete code, not a random chunk. You can pack your code into an archive, upload it to a file-sharing site, and post the link here.
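Putting those two points together, a minimal sketch of the corrected wiring could look like this (the search names "lm_search" and "grammar_search" are illustrative, and the file names follow the code posted above):

~~~~~~~~~~~~~~~~~
// Sketch only, assuming the pocketsphinx-android demo's setup code.
File languageModel = new File(modelsDir, "madan/madan.lm");

// Register each search ONCE, each under a distinct name.
recognizer.addNgramSearch("lm_search", languageModel);       // free-form dictation
recognizer.addGrammarSearch("grammar_search", TestGrammar);  // constrained grammar

recognizer.addListener(this);

// Nothing is recognized until a search is started by name:
recognizer.startListening("lm_search");

// To switch modes later, stop the current search and start the other one:
recognizer.stop();
recognizer.startListening("grammar_search");
~~~~~~~~~~~~~~~~~

Also note that in the listener callbacks the hypothesis argument can be null when nothing has been recognized yet, so guard with `if (hypothesis != null)` before calling getHypstr().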