Hello,

I am using one of Kaldi's recipes for training and decoding. I would like to know how I can do the decoding based only on the acoustic models, with no effect from the language model. Is there any parameter by which I can change (or discard) the effect of the language model?
Thanks,
Reza
You could set a large acoustic scale in decoding, e.g. --acoustic-scale=100, but you'd have to increase the beam accordingly (make it 10 times larger than the default). But this setting doesn't really make sense.
Dan
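For concreteness, a rough sketch of what that might look like with the standard GMM lattice decoder; the model, graph, and feature paths are only placeholders for whatever your recipe uses, and typical recipe defaults are roughly 0.083-0.1 for the acoustic scale and 13 for the beam:

gmm-latgen-faster --acoustic-scale=100 --beam=130 \
  exp/tri3/final.mdl exp/tri3/graph/HCLG.fst ark:feats.ark ark:lat.1.ark

In practice it is usually easier to pass these through the decode script (e.g. steps/decode.sh --acwt 100 --beam 130), if the script in your recipe exposes those options.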
If you don't need the transition costs, you can remove all the costs from the search graph and use the unweighted machine during decoding:
fstmap --map_type=rmweight HCLG.fst > HCLG.no.weights.fst
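One way to use the unweighted graph with an unmodified recipe, sketched here with illustrative directory names, is to copy the graph directory so the decode script still finds words.txt and the other files next to the new HCLG:

cp -r exp/tri3/graph exp/tri3/graph_noweights
fstmap --map_type=rmweight exp/tri3/graph/HCLG.fst > exp/tri3/graph_noweights/HCLG.fst
steps/decode.sh --nj 8 exp/tri3/graph_noweights data/test exp/tri3/decode_test_noweights

Note that rmweight strips every weight in HCLG (LM scores, transition log-probabilities, and any silence penalties), not just the grammar costs.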
You can build a 0-gram LM - basically a word list - as G (the grammar) part of the HCLG decoding graph.
Probably the easiest way is to create an ARPA model and convert it to G.fst.
See https://sourceforge.net/p/kaldi/code/HEAD/tree/trunk/egs/vystadial_en/s5/local/create_LMs.sh
line 27
for the 0-gram language model creation.
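If your recipe does not include such a script, a minimal sketch of building a uniform unigram ("0-gram") ARPA file from the lang directory's word list could look like the following; the paths, the handling of the special symbols, and the probability mass are my assumptions, not the vystadial script itself:

# drop the special symbols; every remaining word gets equal probability
grep -v -w -E '<eps>|<s>|</s>|#0' data/lang/words.txt | awk '{print $1}' > wordlist
n=$(wc -l < wordlist)
# uniform log10 probability shared by </s> and the real words
logp=$(awk -v n=$n 'BEGIN{printf "%.6f", log(1/(n+1))/log(10)}')
{
  printf '\\data\\\nngram 1=%d\n\n\\1-grams:\n' $((n+2))
  printf '%s\n' '-99 <s>'
  printf '%s </s>\n' "$logp"
  awk -v p="$logp" '{print p, $1}' wordlist
  printf '\n\\end\\\n'
} > lm_0gram.arpa

The resulting ARPA file can then be turned into G.fst with whatever lang-formatting step your recipe already uses (for example an arpa2fst pipeline or utils/format_lm.sh), and the decoding graph rebuilt with utils/mkgraph.sh as usual.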