From: Xingyu Na <asr...@gm...> - 2015-07-22 06:59:29
I updated the Maxout implementation on GPU in case anyone wants to try it in Kaldi... Please check out: https://github.com/naxingyu/kaldi-nn

On 04/02/2015 01:08 AM, Daniel Povey wrote:
> It looks to me like the MaxoutComponent has not been properly set up
> for efficient operation on GPU: there is a loop in the Propagate
> function. We didn't do this because it wasn't giving great results.
> Incidentally, for the multi-splice networks (which we're now calling
> TDNN) we may end up moving back from p-norm to ReLU, as ReLU now seems
> to be giving better WERs.
> Dan
>
> On Tue, Mar 31, 2015 at 10:30 PM, Xingyu Na <asr...@gm...> wrote:
>
>> Hi,
>>
>> I tried maxout training by changing the train_pnorm_fast recipe into
>> a train_maxout recipe, simply replacing the PnormComponent with a
>> MaxoutComponent, forming a run_4e procedure.
>> The training runs extremely slowly: a pnorm iteration per job takes
>> 113 seconds, while a maxout iteration per job takes 3396 seconds.
>> Did you ever try this? Any suggestions?
>>
>> Thank you and best regards,
>> Xingyu
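For anyone following along, here is a minimal CPU sketch of the maxout forward pass being discussed. It is standalone, hypothetical code (the function name, shapes, and layout are illustrative, not Kaldi's actual MaxoutComponent); the point is in the comments: a naive GPU port issues one matrix operation, and hence one kernel launch, per group, whereas an efficient version reduces all groups in a single fused pass (the approach taken in the repository linked above).

    // maxout_forward.cc -- illustrative CPU sketch of the maxout forward pass.
    // Hypothetical standalone code, not Kaldi's MaxoutComponent.
    // Compile: g++ -std=c++11 maxout_forward.cc -o maxout_forward
    #include <cstdio>
    #include <vector>
    #include <algorithm>

    // Slow GPU pattern: loop over output units, launching one matrix op
    // (one kernel) per group. With hundreds of groups per layer, launch
    // overhead dominates, which matches the ~30x slowdown reported below.
    // Fast pattern: one fused "group max" kernel that reduces every group
    // in a single launch; on CPU both look alike, so this sketch just
    // shows the arithmetic being fused.
    void MaxoutForward(const std::vector<float> &in, int rows, int in_dim,
                       int group_size, std::vector<float> *out) {
      const int out_dim = in_dim / group_size;
      out->assign(rows * out_dim, 0.0f);
      for (int r = 0; r < rows; ++r) {          // frames in the minibatch
        for (int j = 0; j < out_dim; ++j) {     // output units
          float m = in[r * in_dim + j * group_size];
          for (int k = 1; k < group_size; ++k)  // reduce within one group
            m = std::max(m, in[r * in_dim + j * group_size + k]);
          (*out)[r * out_dim + j] = m;
        }
      }
    }

    int main() {
      // Two frames, input dim 6, group size 3 -> output dim 2.
      std::vector<float> in = {1, 5, 2,  0, -1, 3,
                               4, 4, 7,  2,  9, 6};
      std::vector<float> out;
      MaxoutForward(in, 2, 6, 3, &out);
      for (float v : out) std::printf("%g ", v);  // expected: 5 3 7 9
      std::printf("\n");
      return 0;
    }

The design point is that the cost of the fused version is independent of the number of groups, while the per-group loop Dan mentions pays a fixed launch overhead for each of them; that overhead, not the max itself, is what makes the naive Propagate so slow on GPU.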