kaldipdnn-users Mailing List for Kaldi+PDNN
Fully fledged DNN Speech Recognition based on PDNN and Kaldi
Brought to you by: yajiemiao
Archive by month: Nov 2014 (4 messages), Feb 2016 (2 messages)
From: ana <amo...@ce...> - 2016-02-29 20:54:52
I forgot to say I have cuda 6.5 and the last Theano from GitHub (Theano 0.7.0) ;)

On 29/02/16 15:43, ana wrote:
> Hi all!
> I am trying to run "run-bnf-tandem.sh" from the TIMIT example of kaldipdnn.
> The pretraining phase went OK, although I got a warning saying my GPU was too old for Theano.
> The finetuning failed with "RuntimeError: Cuda error: k_copy_1d: invalid device function . (n_blocks=256, n_threads_per_block=1)".
> I have no experience using GPUs; I have a GeForce 9800 GT. Find below the dnn.fine.log.
> Any ideas of what is causing this failure?
>
> Thanks in advance,
> ana
> [...]
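For context on the error itself: "invalid device function" is the CUDA runtime's way of saying that a kernel binary was not built for the card's architecture, and the GeForce 9800 GT is a compute-capability 1.1 (G92) part, which the CUDA 6.5 toolkit only supported in deprecated form. A quick way to separate a card/toolkit problem from the kaldipdnn recipe is to compile and run one trivial Theano GPU function outside the recipe. The sketch below is only illustrative (the file name gpu_check.py is made up) and assumes Theano 0.7.x with the old theano.sandbox.cuda backend, launched with the same kind of flags as in dnn.fine.log:

    # gpu_check.py -- minimal Theano GPU smoke test (a sketch, not part of the kaldipdnn recipe).
    # Run it the way dnn.fine.log runs run_DNN.py, e.g.:
    #   THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python gpu_check.py
    # If this tiny elemwise kernel already fails with "invalid device function",
    # the problem is the GPU/toolkit combination rather than the PDNN scripts.
    import numpy
    import theano
    import theano.tensor as T

    x = T.vector('x')                              # float32 vector under floatX=float32
    f = theano.function([x], T.exp(x))             # compiles one elemwise kernel; runs on the GPU when device=gpu
    print(f(numpy.ones(10, dtype='float32')))      # expect ten values of e (~2.71828)
    print('Theano device: ' + theano.config.device)  # confirms which device Theano actually selected

If even this trivial function fails on the 9800 GT, the realistic options are forcing Theano onto the CPU (device=cpu) or using a compute-capability 2.0 or newer card; neither requires changes to the kaldipdnn scripts themselves beyond how THEANO_FLAGS is set.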
From: ana <amo...@ce...> - 2016-02-29 20:48:01
Hi all!

I am trying to run "run-bnf-tandem.sh" from the TIMIT example of kaldipdnn. The pretraining phase went OK, although I got a warning saying my GPU was too old for Theano.

The finetuning failed with "RuntimeError: Cuda error: k_copy_1d: invalid device function . (n_blocks=256, n_threads_per_block=1)".

I have no experience using GPUs; I have a GeForce 9800 GT. Find below the dnn.fine.log. Any ideas of what is causing this failure?

Thanks in advance,
ana

============== dnn.fine.log =======================

# export PYTHONPATH=:/media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/kaldipdnn/run_timit/pdnn/ ;
#   export THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,optimizer=None,exception_verbosity=high ;
#   python /media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/pdnn/cmds/run_DNN.py
#     --train-data /media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/Working_dir/train.pfile.gz,partition=128m,random=true,stream=false
#     --valid-data /media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/Working_dir/valid.pfile.gz,partition=128m,random=true,stream=false
#     --nnet-spec 360:1024:1024:1024:42:1024:1943
#     --ptr-file /media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/Working_dir/dnn.ptr
#     --ptr-layer-number 5
#     --lrate D:0.08:0.5:0.2,0.2:8
#     --wdir /media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/Working_dir
#     --kaldi-output-file /media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/Working_dir/dnn.nnet
# Started at lun feb 29 14:04:53 CST 2016
#
Using gpu device 0: GeForce 9800 GT
WARNING (theano.sandbox.cuda): You are probably using an old GPU, that Theano does not support. This means GPU code will most likely be slow AND may crash when we try to use features that your GPU does not support.
[2016-02-29 14:06:26.552377] > ... building the model
[2016-02-29 14:06:50.413948] > ... getting the finetuning functions
WARNING (theano.sandbox.cuda.opt): Optimization Warning: Got the following error, but you can ignore it. This could cause less GpuElemwise fused together. CudaNdarray_ptr_int_size: error when calling the gpu code. (invalid device function )
[2016-02-29 14:07:40.917351] > ... finetuning the model
Traceback (most recent call last):
  File "/media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/pdnn/cmds/run_DNN.py", line 101, in <module>
    train_error = train_sgd(train_fn, cfg)
  File "/media/ana/a80d8314-fa32-4f6c-8ab8-c03bc4281e2f/Proyecto_Teorico/LanguageRecognition/Fribourg2015/PDNN/pdnn/learning/sgd.py", line 72, in train_sgd
    train_error.append(train_fn(index=batch_index, learning_rate = learning_rate, momentum = momentum))
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 606, in __call__
    storage_map=self.fn.storage_map)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 595, in __call__
    outputs = self.fn()
RuntimeError: Cuda error: k_copy_1d: invalid device function .
(n_blocks=256, n_threads_per_block=1)

Apply node that caused the error: GpuAlloc(GpuElemwise{Inv}[(0, 0)].0, Elemwise{Composite{(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3) - Switch(LT(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i5, i6, i2), i3), i2), i3), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3)), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i5, i6, i2), i3), i2), i3), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3)))}}[(0, 1)].0)
Inputs types: [CudaNdarrayType(float32, (True,)), TensorType(int64, scalar)]
Inputs shapes: [(1,), ()]
Inputs strides: [(0,), ()]
Inputs values: [<CudaNdarray object at 0x7f1674bdee70>, array(256)]

Debugprint of the apply node:
GpuAlloc [@A] <CudaNdarrayType(float32, vector)> ''
 |GpuElemwise{Inv}[(0, 0)] [@B] <CudaNdarrayType(float32, (True,))> ''
 | |GpuFromHost [@C] <CudaNdarrayType(float32, (True,))> ''
 | |Elemwise{Cast{float32}} [@D] <TensorType(float32, (True,))> ''
 | |InplaceDimShuffle{x} [@E] <TensorType(int64, (True,))> ''
 | |Elemwise{Composite{(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3) - Switch(LT(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i5, i6, i2), i3), i2), i3), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3)), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i5, i6, i2), i3), i2), i3), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3)))}}[(0, 1)] [@F] <TensorType(int64, scalar)> ''
 | |Elemwise{lt,no_inplace} [@G] <TensorType(int8, scalar)> ''
 | | |Elemwise{Composite{(i0 * (i1 + i2))}} [@H] <TensorType(int64, scalar)> ''
 | | | |TensorConstant{256} [@I] <TensorType(int64, scalar)>
 | | | |TensorConstant{1} [@J] <TensorType(int64, scalar)>
 | | | |index [@K] <TensorType(int64, scalar)>
 | | |TensorConstant{0} [@L] <TensorType(int8, scalar)>
 | |Elemwise{Composite{(i0 * (i1 + i2))}} [@H] <TensorType(int64, scalar)> ''
 | |Shape_i{0} [@M] <TensorType(int64, scalar)> ''
 | | |y [@N] <CudaNdarrayType(float32, vector)>
 | |TensorConstant{0} [@L] <TensorType(int8, scalar)>
 | |TensorConstant{-1} [@O] <TensorType(int8, scalar)>
 | |Elemwise{lt,no_inplace} [@P] <TensorType(int8, scalar)> ''
 | | |Elemwise{mul,no_inplace} [@Q] <TensorType(int64, scalar)> ''
 | | | |TensorConstant{256} [@I] <TensorType(int64, scalar)>
 | | | |index [@K] <TensorType(int64, scalar)>
 | | |TensorConstant{0} [@L] <TensorType(int8, scalar)>
 | |Elemwise{mul,no_inplace} [@Q] <TensorType(int64, scalar)> ''
 |Elemwise{Composite{(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3) - Switch(LT(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i5, i6, i2), i3), i2), i3), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3)), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i5, i6, i2), i3), i2), i3), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i0, i1, i2), i3, i4), i2), i3)))}}[(0, 1)] [@F] <TensorType(int64, scalar)> ''

Storage map footprint:
- GpuFromHost.0, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
- TensorConstant{1}, Shape: (1,), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
- GpuDimShuffle{x}.0, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4 Byte(s)
- GpuElemwise{Inv}[(0, 0)].0, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4 Byte(s)
- Elemwise{Composite{(i0 / Cast{float64}((Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i1, i2, i3), i4, i5), i3), i4) - Switch(LT(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i6, i7, i3), i4), i3), i4), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i1, i2, i3), i4, i5), i3), i4)), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i6, i7, i3), i4), i3), i4), Composite{Switch(LT(i0, i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(Composite{Switch(LT(i0, i1), i2, i0)}(Composite{Switch(i0, (i1 + i2), i1)}(i1, i2, i3), i4, i5), i3), i4)))))}}.0, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
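One way to check that the recipe itself is sound independently of the GPU is to replay the exact finetuning command from dnn.fine.log with Theano forced onto the CPU. The sketch below is not part of kaldipdnn; it simply re-runs the run_DNN.py invocation logged above with device=cpu, with the long /media/ana/... prefix collapsed into a WORK placeholder (every other argument is copied from the log). CPU finetuning of a network this size will be slow, so this is only a sanity check.

    # rerun_finetune_cpu.py -- a sketch (not part of kaldipdnn) that replays the finetuning
    # command from dnn.fine.log with Theano on the CPU instead of the GPU.
    # WORK is a placeholder for the long /media/ana/... prefix used in the log.
    import os
    import subprocess

    WORK = '/path/to/Fribourg2015/PDNN'  # placeholder; substitute the real prefix from the log

    env = dict(os.environ)
    env['PYTHONPATH'] = os.path.join(WORK, 'kaldipdnn/run_timit/pdnn/')
    # Same flags as in dnn.fine.log, with device=cpu as the only change:
    env['THEANO_FLAGS'] = 'mode=FAST_RUN,device=cpu,floatX=float32,optimizer=None,exception_verbosity=high'

    subprocess.call([
        'python', os.path.join(WORK, 'pdnn/cmds/run_DNN.py'),
        '--train-data', os.path.join(WORK, 'Working_dir/train.pfile.gz') + ',partition=128m,random=true,stream=false',
        '--valid-data', os.path.join(WORK, 'Working_dir/valid.pfile.gz') + ',partition=128m,random=true,stream=false',
        '--nnet-spec', '360:1024:1024:1024:42:1024:1943',
        '--ptr-file', os.path.join(WORK, 'Working_dir/dnn.ptr'),
        '--ptr-layer-number', '5',
        '--lrate', 'D:0.08:0.5:0.2,0.2:8',
        '--wdir', os.path.join(WORK, 'Working_dir'),
        '--kaldi-output-file', os.path.join(WORK, 'Working_dir/dnn.nnet'),
    ], env=env)

If this CPU run trains without error, the failure is confined to the CUDA kernel compilation for the old card rather than to the pretrained model, the pfiles, or the run_DNN.py options.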
From: <ha...@an...> - 2014-11-09 20:02:08
Mailing list test
From: Yajie M. <yaj...@gm...> - 2014-11-06 16:32:58
hello this is a response to the test mail

On Thu, Nov 6, 2014 at 11:04 AM, Yajie Miao <ym...@cs...> wrote:
> hello this is a testing mail
From: Yajie M. <ym...@cs...> - 2014-11-06 16:08:44
hello this is a testing mail
From: Yajie M. <yaj...@gm...> - 2014-11-05 22:32:13
hello, this is a testing email