From: Raymond W. M. N. <wm...@sh...> - 2015-02-28 19:31:14
Hi Kaldi,

I am training a DNN with the Karel setup on a 160-hour data set. When I get to the sMBR sequence-discriminative training (steps/nnet/train_mpe.sh), the memory usage explodes: the program only manages to process around 2/7 of the training files before it crashes. There is no easy accumulation function for the DNN, but I assume I can just feed different splits of the training files in consecutive iterations (roughly as in the sketch below)? I'd like to know if there is a resource out there for this already. I was referring to the egs/tedlium recipe.

Thanks,
raymond
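
P.S. Roughly what I had in mind, as a sketch only. The directory names and the train_mpe.sh arguments below are just my guesses patterned after the egs/tedlium recipe; I have not verified them:

  # split the 160h training set into 7 roughly equal parts
  utils/split_data.sh data/train 7

  srcdir=exp/dnn4_pretrain-dbn_dnn     # frame-CE trained DNN (name is a guess)
  alidir=${srcdir}_ali                 # alignments from the CE model
  denlatdir=${srcdir}_denlats          # denominator lattices from the CE model

  # one sMBR pass per part, each pass warm-starting from the model
  # produced by the previous pass
  mdl=$srcdir
  for n in 1 2 3 4 5 6 7; do
    steps/nnet/train_mpe.sh --do-smbr true \
      data/train/split7/$n data/lang $mdl $alidir $denlatdir \
      exp/dnn4_smbr_part$n || exit 1
    mdl=exp/dnn4_smbr_part$n           # next part continues from this model
  done

Would that be a sensible way around the memory problem, or is there a supported way to do it within one call?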