From: Xingyu Na <asr...@gm...> - 2015-04-01 02:31:09
|
Hi,

I tried maxout training by changing the train_pnorm_fast recipe into train_maxout_recipe, simply replacing the PnormComponent with a MaxoutComponent, forming a run_4e procedure. The training runs extremely slowly; for example, a pnorm iteration per job takes 113 seconds, while a maxout iteration per job takes 3396 seconds. Did you ever try this? Any suggestions?

Thank you and best regards,
Xingyu
|
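For readers unfamiliar with the two nonlinearities being swapped here, a minimal CPU sketch of what each component computes per output dimension. This is not Kaldi code; the group size g, the flat-vector layout, and the function names are illustrative assumptions:

    #include <algorithm>
    #include <cassert>
    #include <cmath>
    #include <vector>

    // Both nonlinearities reduce each group of g consecutive activations to
    // one output.  Maxout: y_j = max over the group.  P-norm: y_j is the
    // L_p norm of the group (p = 2 is the usual choice in Kaldi recipes).
    std::vector<float> Maxout(const std::vector<float>& x, int g) {
      assert(x.size() % g == 0);
      std::vector<float> y(x.size() / g);
      for (size_t j = 0; j < y.size(); ++j)
        y[j] = *std::max_element(&x[j * g], &x[j * g] + g);
      return y;
    }

    std::vector<float> Pnorm(const std::vector<float>& x, int g,
                             float p = 2.0f) {
      assert(x.size() % g == 0);
      std::vector<float> y(x.size() / g);
      for (size_t j = 0; j < y.size(); ++j) {
        float s = 0.0f;
        for (int i = 0; i < g; ++i)
          s += std::pow(std::fabs(x[j * g + i]), p);
        y[j] = std::pow(s, 1.0f / p);
      }
      return y;
    }

Both reductions do the same amount of arithmetic per output, so a 30x slowdown points at the implementation, not the operation itself.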
From: Daniel P. <dp...@gm...> - 2015-04-01 17:08:07
|
It looks to me like the MaxoutComponent has not been properly set up for efficient operation on the GPU: there is a loop in the Propagate function. We never did this ourselves because maxout wasn't giving great results.

Incidentally, for the multi-splice networks (which we're now calling TDNN) we may end up moving back from p-norm to ReLU, as ReLU now seems to be giving better WERs.

Dan
|
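The slowdown is consistent with issuing per-group work from a host-side loop (on a GPU, one small operation or kernel launch per group member) instead of one fused pass over the whole matrix. A CPU-side sketch of the two structures, under the same illustrative layout as above (rows x cols*g input, groups contiguous; none of these names are Kaldi's):

    #include <algorithm>
    #include <vector>

    // Slow pattern: one full-matrix pass per group member.  On a GPU, each
    // pass is a separate launch, and launch overhead dominates.
    void MaxoutLooped(const std::vector<float>& in, std::vector<float>& out,
                      int rows, int cols, int g) {
      std::fill(out.begin(), out.end(), -1e30f);
      for (int i = 0; i < g; ++i)  // g separate passes over the output
        for (int r = 0; r < rows; ++r)
          for (int c = 0; c < cols; ++c)
            out[r * cols + c] = std::max(out[r * cols + c],
                                         in[r * cols * g + c * g + i]);
    }

    // Fast pattern: one fused pass; each output element reads its g inputs
    // once.  On a GPU this is a single kernel, one thread per output.
    void MaxoutFused(const std::vector<float>& in, std::vector<float>& out,
                     int rows, int cols, int g) {
      for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
          const float* grp = &in[r * cols * g + c * g];
          out[r * cols + c] = *std::max_element(grp, grp + g);
        }
    }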
From: Xingyu Na <asr...@gm...> - 2015-07-22 06:59:29
|
I've updated the Maxout implementation to run efficiently on the GPU, in case anyone wants to try it in Kaldi. Please check out: https://github.com/naxingyu/kaldi-nn
|
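For completeness, the backward pass of maxout just routes each output's gradient to the group member that attained the max. A sketch under the same illustrative layout as above, not code from the linked repository:

    #include <algorithm>
    #include <vector>

    // d(in) receives out's gradient at the argmax slot of each group,
    // and zero everywhere else (ties go to the first maximum).
    void MaxoutBackprop(const std::vector<float>& in,
                        const std::vector<float>& out_grad,
                        std::vector<float>& in_grad,
                        int rows, int cols, int g) {
      std::fill(in_grad.begin(), in_grad.end(), 0.0f);
      for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
          const float* grp = &in[r * cols * g + c * g];
          int best = 0;
          for (int i = 1; i < g; ++i)
            if (grp[i] > grp[best]) best = i;
          in_grad[r * cols * g + c * g + best] = out_grad[r * cols + c];
        }
    }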
From: Jan T. <jt...@gm...> - 2015-07-22 15:59:40
|
Xingyu, could you please prepare a patch against master of kaldi-asr/kaldi on GitHub so that we can include your change?

y.
|
From: Xingyu Na <asr...@gm...> - 2015-07-23 01:10:37
|
Sure. I'll check it out.
|