From: Sunit S. <sun...@in...> - 2015-07-06 11:55:19
Hi all,

I am getting a buffer overflow error while running the RNNLM scripts of WSJ. Any idea as to what could have gone wrong? I trained the model using a subset of WSJ utterances and, from the logs, the training seemed alright. Below are the rnnlm rescore logs followed by the RNN training logs.

steps/rnnlmrescore.sh --rnnlm_ver rnnlm-hs-0.1b --N 100 0.5 data/lang_test_tgpr_5k data/lang_rnnlm_h30_me5-1000 data/dt05_multi_r_mc exp/tri4a/decode_tgpr_5k exp/tri4a/decode_tgpr_5k_rnnlm_h30_me5-1000_L0.5

steps/rnnlmrescore.sh: converting lattices to N-best.
steps/rnnlmrescore.sh: removing old LM scores.
steps/rnnlmrescore.sh: creating separate-archive form of N-best lists.
steps/rnnlmrescore.sh: doing the same with old LM scores.
steps/rnnlmrescore.sh: Creating archives with text-form of words, and LM scores without graph scores.
steps/rnnlmrescore.sh: invoking rnnlm_compute_scores.sh which calls rnnlm, to get RNN LM scores.
*** buffer overflow detected ***: ../../../tools/rnnlm-hs-0.1b/rnnlm terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7338f)[0x7fda06f6b38f]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x5c)[0x7fda07002c9c]
/lib/x86_64-linux-gnu/libc.so.6(+0x109b60)[0x7fda07001b60]
../../../tools/rnnlm-hs-0.1b/rnnlm[0x4011ea]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fda06f19ec5]
../../../tools/rnnlm-hs-0.1b/rnnlm[0x4018ac]

Training logs:

../../../tools/rnnlm-hs-0.1b/rnnlm -threads 1 -independent -train /tmp/tmp.6aF3RDTFnf -valid /tmp/tmp.aTiUNgZnWT -rnnlm data/lang_rnnlm_h30_me5-1000/rnnlm -hidden 30 -rand-seed 1 -debug 2 -class 200 -bptt 2 -bptt-block 20 -direct-order 4 -direct 1000 -binary
#
Vocab size: 9066
Words in train file: 164907
Starting training using file /tmp/tmp.6aF3RDTFnf
Iteration 0 Valid Entropy 9.595403
Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 117.21k
Iteration 1 Valid Entropy 8.564008
Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 123.54k
Iteration 2 Valid Entropy 8.297136
Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 122.70k
Iteration 3 Valid Entropy 8.175531
Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 108.22k
Iteration 4 Valid Entropy 8.107678
Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 121.89k
Iteration 5 Valid Entropy 8.069274
Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 124.64k
Iteration 6 Valid Entropy 8.049375 Decay started
Alpha: 0.050000 ME-alpha: 0.050000 Progress: 97.12% Words/thread/sec: 111.30k
Iteration 7 Valid Entropy 8.009795
Alpha: 0.025000 ME-alpha: 0.025000 Progress: 97.12% Words/thread/sec: 124.70k
Iteration 8 Valid Entropy 7.989441 Retry 1/2
Alpha: 0.012500 ME-alpha: 0.012500 Progress: 97.12% Words/thread/sec: 113.82k
Iteration 9 Valid Entropy 7.982499 Retry 2/2
# Accounting: time=439 threads=1
# Ended (code 0) at Fri Jun 26 16:22:52 CEST 2015, elapsed time 439 seconds

Regards,
Sunit
From: Jan T. <jt...@gm...> - 2015-07-06 13:46:53
I believe there was some issue with long paths (I think there was a fixed-size buffer for the filename, or something like that). I have no idea if it was fixed or not.

y.
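For reference, the failure mode Jan describes — an unchecked copy into a fixed-size filename buffer — is exactly what glibc's _FORTIFY_SOURCE reports as "*** buffer overflow detected ***" (note the __fortify_fail frame in the backtrace above). Below is a minimal C sketch of the pattern, purely illustrative and not the actual rnnlm-hs-0.1b source; MAX_STRING, model_file, and the compile flags are assumptions:

    /* Hypothetical sketch, not the rnnlm-hs-0.1b code.  Compile with
     * hardening (gcc -O2 -D_FORTIFY_SOURCE=2 overflow.c -o overflow)
     * and pass a path longer than MAX_STRING to reproduce the
     * "*** buffer overflow detected ***" abort. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_STRING 100            /* fixed-size filename buffer */

    static char model_file[MAX_STRING];

    int main(int argc, char **argv) {
      if (argc < 2) return 1;
      strcpy(model_file, argv[1]);    /* BUG: no length check; a path
                                         longer than MAX_STRING makes the
                                         fortified strcpy abort at runtime */
      /* Bounded copy that truncates instead of overflowing:
       *   snprintf(model_file, sizeof(model_file), "%s", argv[1]); */
      printf("model: %s\n", model_file);
      return 0;
    }

This would also be consistent with training succeeding while rescoring crashed: the training invocation above uses short /tmp and data/ paths, whereas rescoring passes longer paths under exp/.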
From: Jan T. <jt...@gm...> - 2015-07-06 19:52:17
I just committed a patch that hopefully resolves this. Sunit, please update Kaldi, recompile, and run the rescoring again. Let us know if it was fixed.

y.
From: Daniel P. <dp...@gm...> - 2015-07-06 20:03:19
BTW, it's the recompilation in tools/ that is important. You may have to remove the rnnlm subdirectory, whatever it is called, to force the update.

Dan
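Concretely, forcing the rebuild Dan describes would look something like the following — a sketch only; the checkout directory name, the update command (svn vs. git), and the exact make target depend on how Kaldi was obtained and may differ on your system:

    cd kaldi-trunk/tools          # adjust to wherever your checkout lives
    rm -rf rnnlm-hs-0.1b          # remove the old copy so no stale files survive
    svn up                        # or 'git pull', depending on your checkout
    make                          # rebuild in tools/; check that
                                  # rnnlm-hs-0.1b/rnnlm was regenerated

If the directory is under version control rather than downloaded by the tools build, restore it with your VCS (e.g. 'git checkout -- rnnlm-hs-0.1b' after pulling) before running make.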
From: Sunit S. <sun...@in...> - 2015-07-07 08:30:35
It is working fine now. Thanks a lot.

Regards,
Sunit