From: Cemil D. <cem...@gm...> - 2015-07-15 09:10:49
Hi, I have a workstation with 4 NVIDIA TITAN X GPU cards. I want to train a DNN model using the egs/babel/s5c recipe, and I am using the "run-2a-nnet-ensemble-gpu.sh" script. When I use the "run-2a-nnet-gpu.sh" script instead, the accuracy is lower than in the sgmm-mmi case, so I want to use the "ensemble" version. The total amount of training data is 13 hours, but training takes about 35 hours, which I think is too slow. Could you give me any suggestions to speed up training? It is a little bit urgent for me. Thank you.

--
Cemil Demir
cem...@gm...
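
P.S. To make the question concrete, the kind of change I am wondering about is roughly the following. This is only a sketch: the --num-jobs-nnet option is my reading of the steps/nnet2/ training scripts, and the script name, remaining arguments and directory paths below are placeholders rather than my real command.

    # Sketch: run one parallel SGD job per TITAN X (cards in compute-exclusive mode)
    # instead of the default number of jobs. Paths are illustrative only.
    steps/nnet2/train_pnorm_ensemble.sh \
        --cmd "run.pl" \
        --num-jobs-nnet 4 \
        data/train data/lang exp/tri5_ali exp/tri6_nnet

Would something along these lines be the right way to use all four cards, or is there a better place to cut the training time?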