NeoGermi - 2005-09-08

Hello,

Thanks for your help, the training is running perfectly for 50% of my data :-) (That means I have two corpora, and for one of them training runs very well and produces no (or nearly no) errors.)

But for the other one, only the first Baum-Welch (BW) iteration runs perfectly (without any errors); after that, every utterance is ignored because it does not reach the final state :-(
While trying to fix that, I found that after the first normalization the likelihood explodes to values smaller than -1000000.00 ?!?

I used the 8 kHz RAW files, produced (as you proposed) with the sox command: sox <infile> -t raw -s -w -r 8000 <outfile> [resample -ql]
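
For example, a concrete invocation of that template (the file names utt001.wav and utt001.raw are just placeholders for illustration) looks like this:

sox utt001.wav -t raw -s -w -r 8000 utt001.raw resample -ql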

My command for getting the dictionary is:
.../train_error_small/bin/make_dict -out_dir .../train_error_small/etc -noise_dict .../train_error_small/etc/train_error_small.filler .../train_error_small/etc/train_error_small.transcription

My wave2feat command is:

.../train_error_small/bin/wave2feat -c .../train_error_small/etc/train_error_small.fileids -verbose yes -ei raw -raw yes -eo mfc -nfilt 31 -di wav -do feat -srate 8000 -upperf 3500

I have uploaded all the other files to https://xantippe.cs.uni-sb.de/~germi/cmu/

Do you have any idea why this value explodes to such large negative numbers?
I have tried so much, but after 1.5 weeks I'm really depressed... especially since the same settings work for the other corpus...

I uploaded only a small example (using 50 utterances), but it's the same problem with a huge example (using > 6000)...

Do you have an idea?

Thanks a lot in advance,

Sebastian