From: Kaldi-users mailing list <kal...@li...> - 2013-09-25 21:37:42
> I'm trying to train a neural net with nnet-cpu and I wonder how to
> measure progress of training?
> Should I assume that the natural logarithm of frame accuracy on cv and
> train is equal to the corresponding numbers in the compute_prob log files?

No, you have to compare the cross-entropy (xent) figure.

> If so, then I probably receive strange results, because after 20 epochs
> accuracy on train is only about 45% while the WER is not bad (7% relative
> worse than the SGMM result).

The per-frame accuracies can't really be compared with WERs, because no
context is taken into account in computing the frame accuracies; the frames
are scored independently.

> What hyper-parameters would you recommend for a 10-hour training set
> with 1 speaker in it for nnet-cpu?

Start with the setup in RM and make the number of parameters and layers a
bit larger.

> And should I expect that the nnet will be better than SGMM for any LVCSR
> test, as it is better for Switchboard and WSJ?

Not always; sometimes it's a bit worse. Sometimes you need to increase the
decoding beam for the nnet-cpu setup.

Dan

> Regards,
> Valentin

_______________________________________________
Kaldi-users mailing list
Kal...@li...
https://lists.sourceforge.net/lists/listinfo/kaldi-users
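To make the distinction above concrete, here is a minimal sketch (not from the original thread; the posterior values and labels are made up for illustration) of why per-frame accuracy and cross-entropy are different measurements over the same posteriors: accuracy only asks whether the argmax matches the label, while cross-entropy averages the log-probability assigned to the correct class, which is the kind of figure reported in the training/validation probability logs.

```python
import numpy as np

# Hypothetical per-frame posteriors from a 3-class net over 4 frames
# (rows = frames, columns = classes); labels are the reference classes.
post = np.array([
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],   # correct class 0 has prob 0.4, but argmax is 1
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
])
labels = np.array([0, 0, 2, 0])

# Frame accuracy: fraction of frames whose argmax equals the label.
frame_acc = np.mean(post.argmax(axis=1) == labels)

# Average log-probability of the correct class per frame
# (the negative of this is the cross-entropy).
avg_logprob = np.mean(np.log(post[np.arange(len(labels)), labels]))

print(frame_acc)    # 0.75: frame 1 is counted wrong despite prob 0.4
print(avg_logprob)  # about -0.675: frame 1 still contributes log(0.4)
```

Note that frame 1 hurts accuracy fully but hurts the cross-entropy only mildly, which is why the two curves need not track each other, and why neither one maps directly onto WER.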