From: Jan T. <jt...@gm...> - 2015-07-31 12:59:29
|
Hi, I'm not gonna answer your question here -- the sourceforge lists are closed and not active anymore. Visit http://kaldi-asr.org/forums.html for additional info.

Yenda

On Fri, Jul 31, 2015 at 4:58 AM, Naresh kumar <ell...@gm...> wrote:
> Dear all,
> I am trying to install latest ATLAS package (3.10.0) in cygwin 32-bit
> system. [...]
 |
From: Naresh k. <ell...@gm...> - 2015-07-31 08:58:48
|
Dear all,
I am trying to install the latest ATLAS package (3.10.0) on a 32-bit cygwin system.
I found the following errors when I ran install_atlas.sh. My gcc version is 4.9.3.

Vector ISA Extension configured as SSE3 (6,448)
ERROR: enum fam=3, chip=2, mach=0
make[3]: *** [atlas_run] Error 44
make[2]: *** [IRunArchInfo_x86] Error 2

Architecture configured as UNKNOWNx86 (35)
ERROR: enum fam=3, chip=2, mach=0
make[3]: *** [atlas_run] Error 44
make[2]: *** [IRunArchInfo_x86] Error 2

Clock rate configured as 3092Mhz
ERROR: enum fam=3, chip=2, mach=0
make[3]: *** [atlas_run] Error 44
make[2]: *** [IRunArchInfo_x86] Error 2

Maximum number of threads configured as 8
Parallel make command configured as '$(MAKE)'
ERROR: enum fam=3, chip=2, mach=0
make[3]: *** [atlas_run] Error 44
make[2]: *** [IRunArchInfo_x86] Error 2
Cannot detect CPU throttling.

gcc -I/cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/..//CONFIG/include -g -w -o xisgcc /cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/..//CONFIG/src/IsGcc.c atlconf_misc.o
gcc -I/cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/..//CONFIG/include -g -w -c /cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/..//CONFIG/src/probe_comp.c
gcc -I/cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/..//CONFIG/include -g -w -o xprobe_comp probe_comp.o atlconf_misc.o
rm -f config1.out
make atlas_run atldir=/cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build exe=xprobe_comp redir=config1.out \
  args="-v 0 -o atlconf.txt -O 8 -A 35 -Si nof77 0 -V 448 -b 32 -d b /cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build"
make[1]: Entering directory '/home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build'
cd /cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build ; ./xprobe_comp -v 0 -o atlconf.txt -O 8 -A 35 -Si nof77 0 -V 448 -b 32 -d b /cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build > config1.out
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `find /usr/local/bin /usr/bin /cygdrive/c/ProgramData/Oracle/Java/javapath /cygdrive/c/Program\ Files/SlickEditV18.0.1\ x64/win /cygdrive/c/Program\ Files/Common\ Files/Microsoft\ Shared/Windows\ Live /cygdrive/c/Program\ Files\ (x86)/Common\ Files/Microsoft\ Shared/Windows\ Live /cygdrive/c/Program\ Files\ (x86)/Intel/iCLS\ Client /cygdrive/c/Program\ Files/Intel/iCLS\ Client /cygdrive/c/Windows/system32 /cygdrive/c/Windows /cygdrive/c/Windows/System32/Wbem /cygdrive/c/Windows/System32/WindowsPowerShell/v1.0 /cygdrive/c/Program\ Files/Intel/WiFi/bin /cygdrive/c/Program\ Files/Common\ Files/Intel/WirelessCommon /cygdrive/c/Program\ Files\ (x86)/Windows\ Live/Shared /cygdrive/c/Program\ Files\ (x86)/Microsoft\ SQL\ Server/100/Tools/Binn /cygdrive/c/Program\ Files/Microsoft\ SQL\ Server/100/Tools/Binn /cygdrive/c/Program\ Files/Microsoft\ SQL\ Server/100/DTS/Binn /cygdrive/c/Program\ Files/Perforce /cygdrive/c/Program\ Files\ (x86)/Interactive\ Intelligence/ININ\ Trace\ Initialization /cygdrive/c/Program\ Files/Intel/Intel(R)\ Management\ Engine\ Components/DAL /cygdrive/c/Program\ Files\ (x86)/Intel/Intel(R)\ Management\ Engine\ Components/DAL /cygdrive/c/Program\ Files/Intel/Intel(R)\ Management\ Engine\ Components/IPT /cygdrive/c/Program\ Files\ (x86)/Intel/Intel(R)\ Management\ Engine\ Components/IPT /cygdrive/c/xampp/php /cygdrive/c/ProgramData/ComposerSetup/bin /cygdrive/c/builds/tools_main_systest /cygdrive/c/builds/tools_main_systest/ruby/bin /cygdrive/d/builds/tools_main_systest /cygdrive/d/builds/tools_main_systest/ruby/bin /usr/lib/lapack -maxdepth 1 -name '*gcc*' -exec ./xisgcc '{}' \; > /tmp/t6dc.0 2>&1'

/tmp/cczwkvVD.s: Assembler messages:
/tmp/cczwkvVD.s:11: Error: bad register name `%rsp'
/tmp/cczwkvVD.s:12: Warning: .seh_stackalloc ignored for this target
/tmp/cczwkvVD.s:15: Error: bad register name `%rip)'
/tmp/cczwkvVD.s:18: Error: bad register name `%rsp'
/tmp/cczwkvVD.s:20: Warning: zero assumed for missing expression
/tmp/cczwkvVD.s:20: Warning: zero assumed for missing expression
make[2]: *** [IRunCComp] Error 1

/tmp/ccXjwsY6.s: Assembler messages:
/tmp/ccXjwsY6.s:11: Error: bad register name `%rsp'
/tmp/ccXjwsY6.s:12: Warning: .seh_stackalloc ignored for this target
/tmp/ccXjwsY6.s:15: Error: bad register name `%rip)'
/tmp/ccXjwsY6.s:18: Error: bad register name `%rsp'
/tmp/ccXjwsY6.s:20: Warning: zero assumed for missing expression
/tmp/ccXjwsY6.s:20: Warning: zero assumed for missing expression
make[2]: *** [IRunCComp] Error 1

/tmp/cchHKpJ0.s: Assembler messages:
/tmp/cchHKpJ0.s:11: Error: bad register name `%rsp'
/tmp/cchHKpJ0.s:12: Warning: .seh_stackalloc ignored for this target
/tmp/cchHKpJ0.s:15: Error: bad register name `%rip)'
/tmp/cchHKpJ0.s:18: Error: bad register name `%rsp'
/tmp/cchHKpJ0.s:20: Warning: zero assumed for missing expression
/tmp/cchHKpJ0.s:20: Warning: zero assumed for missing expression
make[2]: *** [IRunCComp] Error 1

/tmp/ccFNx4Va.s: Assembler messages:
/tmp/ccFNx4Va.s:11: Error: bad register name `%rsp'
/tmp/ccFNx4Va.s:12: Warning: .seh_stackalloc ignored for this target
/tmp/ccFNx4Va.s:15: Error: bad register name `%rip)'
/tmp/ccFNx4Va.s:18: Error: bad register name `%rsp'
/tmp/ccFNx4Va.s:20: Warning: zero assumed for missing expression
/tmp/ccFNx4Va.s:20: Warning: zero assumed for missing expression
make[2]: *** [IRunCComp] Error 1

make[3]: *** [atlas_run] Error 127
make[2]: *** [IRunCComp] Error 2
make[3]: *** [atlas_run] Error 127
make[2]: *** [IRunCComp] Error 2

/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmingw32.a when searching for -lmingw32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmingw32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-w64-mingw32/4.9.2/libgcc.a when searching for -lgcc
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lgcc
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-w64-mingw32/4.9.2/libgcc_eh.a when searching for -lgcc_eh
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lgcc_eh
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmoldname.a when searching for -lmoldname
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmoldname
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmingwex.a when searching for -lmingwex
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmingwex
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmsvcrt.a when searching for -lmsvcrt
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmsvcrt
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libadvapi32.a when searching for -ladvapi32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -ladvapi32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libshell32.a when searching for -lshell32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lshell32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libuser32.a when searching for -luser32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -luser32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libkernel32.a when searching for -lkernel32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lkernel32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmingw32.a when searching for -lmingw32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmingw32
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-w64-mingw32/4.9.2/libgcc.a when searching for -lgcc
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lgcc
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-w64-mingw32/4.9.2/libgcc_eh.a when searching for -lgcc_eh
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lgcc_eh
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmoldname.a when searching for -lmoldname
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmoldname
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmingwex.a when searching for -lmingwex
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmingwex
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: skipping incompatible /usr/x86_64-w64-mingw32/sys-root/mingw/lib/libmsvcrt.a when searching for -lmsvcrt
/usr/lib/gcc/x86_64-w64-mingw32/4.9.2/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lmsvcrt
collect2: error: ld returned 1 exit status
make[2]: *** [IRunCComp] Error 1
/bin/sh: cl: command not found
make[2]: *** [/cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/ATLwin_cl.exe] Error 127
Unable to to build ATLwin_cl, quitting
cmnd='make /cygdrive/c/cygwin//home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build/ATLwin_cl.exe'
Makefile:106: recipe for target 'atlas_run' failed
make[1]: *** [atlas_run] Error 255
make[1]: Leaving directory '/home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build'
Makefile:117: recipe for target 'IRun_comp' failed
make: *** [IRun_comp] Error 2

ERROR 512 IN SYSCMND: 'make IRun_comp args="-v 0 -o atlconf.txt -O 8 -A 35 -Si nof77 0 -V 448 -b 32"'
mkdir src bin tune interfaces
cd src ; mkdir testing auxil blas lapack pthreads threads
cd src/blas ; \
  mkdir f77reference reference gemv ger gemm kbmm \
  level1 level2 level3 pklevel3
cd src/blas/reference ; mkdir level1 level2 level3
cd src/blas/level2 ; mkdir kernel
cd src/blas/pklevel3 ; mkdir gpmm sprk
cd src/blas/level3 ; mkdir rblas kernel
cd src/pthreads ; mkdir blas misc
cd src/pthreads/blas ; mkdir level1 level2 level3
cd src/threads ; mkdir blas lapack
cd src/threads/blas ; mkdir level3 level2
cd tune ; mkdir blas sysinfo lapack threads
cd tune/blas ; mkdir gemm gemv ger level1 level3
cd interfaces ; mkdir blas lapack
cd interfaces/lapack ; mkdir C F77
cd interfaces/lapack/C ; mkdir src testing
cd interfaces/lapack/F77 ; mkdir src testing
cd interfaces/blas ; mkdir C F77
cd interfaces/blas/C ; mkdir src testing
cd interfaces/blas/F77 ; mkdir src testing
cd interfaces/lapack ; mkdir C2F
cd interfaces/lapack/C2F ; mkdir src
mkdir ARCHS
make -f Make.top startup
make[1]: Entering directory '/home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build'
Make.top:1: Make.inc: No such file or directory
make[1]: *** No rule to make target 'Make.inc'. Stop.
make[1]: Leaving directory '/home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build'
Makefile:493: recipe for target 'startup' failed
make: *** [startup] Error 2

mv: cannot stat ‘lib/Makefile’: No such file or directory
../configure: line 450: lib/Makefile: No such file or directory
../configure: line 451: lib/Makefile: No such file or directory
../configure: line 452: lib/Makefile: No such file or directory
../configure: line 453: lib/Makefile: No such file or directory
../configure: line 509: lib/Makefile: No such file or directory
DONE configure
make -f Make.top build
make[1]: Entering directory '/home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build'
Make.top:1: Make.inc: No such file or directory
make[1]: *** No rule to make target 'Make.inc'. Stop.
make[1]: Leaving directory '/home/Naresh.Kumar/work/kaldi-trunk/tools/ATLAS/build'
Makefile:488: recipe for target 'build' failed
make: *** [build] Error 2

Please let me know how to solve this problem.

--
Regards
Naresh Kumar
 |
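[The log above suggests two things worth checking. The assembler rejecting 64-bit registers ("bad register name `%rsp'") while the x86_64-w64-mingw32 ld skips every library it finds points to a 64-bit MinGW cross-compiler (gcc 4.9.2, note the version mismatch with the stated 4.9.3) being picked up for what should be a 32-bit Cygwin build, and the "syntax error near unexpected token `('" comes from Windows PATH entries such as "Program Files (x86)" reaching ATLAS's unquoted find probe. A minimal diagnostic sketch, assuming a stock Cygwin shell; these commands are illustrative, not a confirmed fix:

    # Which gcc does the build see, and what target does it generate code for?
    # A 32-bit Cygwin build should report something like i686-pc-cygwin,
    # not x86_64-w64-mingw32 (the 64-bit MinGW cross-compiler).
    which gcc
    gcc -dumpmachine

    # Re-run the installer with a minimal PATH so that Windows directories
    # containing spaces and parentheses never reach ATLAS's find command.
    PATH=/usr/local/bin:/usr/bin:/bin ./install_atlas.sh
]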
From: Jan T. <jt...@gm...> - 2015-07-23 01:42:24
|
If the web interface does not work (for whatever reason), you can also subscribe using an e-mail. Just send an empty e-mail to the following addresses:

kal...@go...
kal...@go...

and you should receive a welcome e-mail shortly afterwards.

NB: I tried signing up using several e-mails and not all were willing to accept the aforementioned addresses as valid. Gmail should be ok. In case of problems, feel free to e-mail me directly (jt...@gm...) and I will subscribe you manually.

y.

On Wed, Jul 22, 2015 at 12:04 PM, Jan Trmal <jt...@gm...> wrote:
> All,
> we are phasing out using the sf.net mailing lists and moving to
> googlegroups.com.
> You are welcome to join us.
> Further info here: http://kaldi-asr.org/forums.html
>
> yenda
 |
From: Xingyu Na <asr...@gm...> - 2015-07-23 01:10:37
|
Sure. I'll check it out.

On 07/22/2015 11:59 PM, Jan Trmal wrote:
> Xingyu, could you please prepare a patch against master of
> kaldi-asr/kaldi on github so that we can include your change?
> y.
> [...]
 |
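[For anyone unfamiliar with what Jan is asking for, the usual GitHub patch flow looks roughly like this; the fork URL and branch name below are placeholders, not anything from this thread:

    # Work on a branch of your own fork that tracks kaldi-asr/kaldi master.
    git clone https://github.com/<your-username>/kaldi.git
    cd kaldi
    git remote add upstream https://github.com/kaldi-asr/kaldi.git
    git fetch upstream
    git checkout -b maxout-gpu upstream/master   # branch name is illustrative
    # ...port and commit the changes, then publish the branch:
    git push origin maxout-gpu
    # and open a pull request against kaldi-asr/kaldi master on GitHub.
]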
From: Xingyu Na <asr...@gm...> - 2015-07-23 01:08:19
|
Hi Jan,

Would you please post the instructions for subscribing to the new list using email? Googlegroups is banned in mainland China...

Thanks and best,
Xingyu

On 07/23/2015 12:04 AM, Jan Trmal wrote:
> All,
> we are phasing out using the sf.net mailing lists and moving to
> googlegroups.com.
> You are welcome to join us.
> Further info here: http://kaldi-asr.org/forums.html
>
> yenda
 |
From: Jan T. <jt...@gm...> - 2015-07-22 16:04:25
|
All,
we are phasing out using the sf.net mailing lists and moving to googlegroups.com.
You are welcome to join us.
Further info here: http://kaldi-asr.org/forums.html

yenda
 |
From: Jan T. <jt...@gm...> - 2015-07-22 15:59:40
|
Xingyu, could you please prepare a patch against master of kaldi-asr/kaldi on github so that we can include your change?
y.

On Wed, Jul 22, 2015 at 2:58 AM, Xingyu Na <asr...@gm...> wrote:
> I update the Maxout implementation on GPU in case anyone wants to try it
> in Kaldi...
> Please check out: https://github.com/naxingyu/kaldi-nn
>
> On 04/02/2015 01:08 AM, Daniel Povey wrote:
> It looks to me like the MaxoutComponent has not been properly set up for
> efficient operation on GPU: there is a loop in the Propagate function. We
> didn't do this because it wasn't giving great results.
> Incidentally, for the multi-splice networks (which we're now calling TDNN)
> we may end up moving back from p-norm to ReLU, as ReLU now seems to be
> giving better WERs.
> Dan
>
> On Tue, Mar 31, 2015 at 10:30 PM, Xingyu Na <asr...@gm...> wrote:
>> Hi,
>>
>> I tried maxout training by changing the train_pnorm_fast recipe into
>> train_maxout_recipe, simply replacing the PnormComponent using
>> MaxoutComponent, forming a run_4e procedure.
>> The training runs extremely slow... e.g. a pnorm iteration per job takes
>> 113 seconds, while a maxout iteration per job takes 3396 seconds.
>> Did you ever try this? Any suggestions?
>>
>> Thank you and best regards,
>> Xingyu
 |
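[For context, the maxout nonlinearity under discussion is, per output unit, simply a maximum over a fixed-size group of inputs. With group size G and input vector x, output unit j is (the standard textbook definition; Kaldi's MaxoutComponent may index its inputs differently):

    y_j = \max_{0 \le k < G} x_{jG + k}

Evaluating this group-by-group in a loop, as Dan describes for Propagate, issues one small GPU operation per group; an efficient implementation instead takes the max over all groups of all frames in one batched kernel, presumably the kind of change the kaldi-nn code above makes.]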
From: Amit B. <ami...@gm...> - 2015-07-22 14:58:37
|
Interestingly, disabling the i-vector eliminates the problem of short transcriptions (these 'yea', 'a' etc.) and produces better ones. I will investigate it further. Thanks for your help, Nagendra!

On Wed, Jul 22, 2015 at 3:32 PM, Nagendra Goel <nag...@go...> wrote:
> I can only tell from experience that iVector adaptation affects a word or
> two (significantly) at the most, and adapts reasonably by that time. So if
> 5-6 utterances are affected, the problem may be somewhere else. Try
> shuffling the order of decode (offline of course) and see if you find a
> pattern.
>
> Nagendra
> [...]
 |
From: Nagendra G. <nag...@go...> - 2015-07-22 13:06:10
|
From your description this does not sound like a faulty ivector. Ivector might have a small role, but you should first look for problems elsewhere. Maybe the recording itself goes bad?

Nagendra Kumar Goel

On Jul 22, 2015 6:00 AM, "Amit Beka" <ami...@gm...> wrote:
> Hi,
>
> I've been using online_nnet2_decoder for quite some time now for ASR in a
> dialogue system, where some users are returning users. Naturally, we use
> online i-vector extraction to better recognize each user's speech.
> [...]
 |
From: Nagendra G. <nag...@go...> - 2015-07-22 12:32:18
|
I can only tell from experience that iVector adaptation affects a word or two (significantly) at the most, and adapts reasonably by that time. So if 5-6 utterances are affected, the problem may be somewhere else. Try shuffling the order of decode (offline of course) and see if you find a pattern.

Nagendra

On Wed, Jul 22, 2015 at 8:26 AM, Amit Beka <ami...@gm...> wrote:
> I have listened to the recordings themselves (after VAD), and they all
> sound good, and were recorded with the same background noise (almost none)
> with the same speaker and in the same volume.
> [...]
 |
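[A minimal sketch of that shuffling experiment, assuming a standard Kaldi data directory (paths are illustrative): randomize the per-speaker utterance order in spk2utt and decode again. If I read the online2 decoding binaries right, they process a speaker's utterances in spk2utt order and carry the adaptation state across them, so a degradation that follows the position in the session implicates the adaptation state, while one that follows particular recordings implicates the audio.

    # Shuffle the utterance ids within each speaker's spk2utt line
    # (field 1 is the speaker id, the remaining fields are utterance ids).
    awk 'BEGIN { srand() }
         { printf "%s", $1
           n = NF - 1
           for (i = 2; i <= NF; i++) u[i-1] = $i
           for (i = n; i > 1; i--) {          # Fisher-Yates shuffle
             j = int(rand() * i) + 1
             t = u[i]; u[i] = u[j]; u[j] = t
           }
           for (i = 1; i <= n; i++) printf " %s", u[i]
           print "" }' data/test/spk2utt > data/test/spk2utt_shuf
]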
From: Amit B. <ami...@gm...> - 2015-07-22 12:27:31
|
I have listened to the recordings themselves (after VAD), and they all sound good; they were recorded with the same speaker, at the same volume, and with the same (almost nonexistent) background noise. I use the nnet2-online-latgen-faster decoder, and although my LM doesn't really suit the input, I expect it to give me at least *some* words as output. On Wed, Jul 22, 2015 at 1:20 PM, Nagendra Goel <nag...@go...> wrote: > From your description this does not sound like a faulty i-vector. The i-vector might play a small role, but you should first look for problems elsewhere. Maybe the recording itself goes bad? > Nagendra Kumar Goel > [...] |
From: Amit B. <ami...@gm...> - 2015-07-22 09:59:13
|
Hi, I've been using online_nnet2_decoder for quite some time now for ASR in a dialogue system, where some users are returning users. Naturally, we use online i-vector extraction to better recognize each user's speech. Unfortunately, we have found some cases where the extracted i-vector decreases the performance of the decoder, usually by identifying 0 or 1 words (something like 'a', 'i' or 'yea') instead of recognizing the whole utterance. Usually, the degraded performance lasts for 5-6 utterances (each is 1-3 seconds) until a good i-vector is "recovered". I would be grateful if anyone on the list could help with some of the following questions: 1. Is it a bug, or may i-vectors behave this way (for no apparent reason, when listening to the audio)? 2. Is there a reliable way of telling when the i-vector is problematic (other than checking the lengths of the utterance and the transcription)? What would be a good policy for updating the adaptation state (based on confidence, length of utterance)? 3. Is it possible to separate the i-vector into features which are user-specific (like tone) and features that are environment-specific (like noise)? If so, I would probably want to "forget" the environment-specific features and keep only the user-specific ones when the utterances are not consecutive. I was wondering if there is a way to "understand" the changes in the adaptation state, for a non-expert in signal processing like me :) Thanks, Beka |
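One low-tech way to investigate questions 1 and 2 is to dump the online i-vectors offline and watch how they move between a speaker's consecutive utterances. A hedged sketch using the ivector-extract-online2 binary, assuming the usual online-nnet2 model layout; the config path and data directory are illustrative:

    # writes one matrix per utterance; each row is the i-vector estimate at
    # successive points within the utterance, so the last row is the final one
    ivector-extract-online2 \
        --config=exp/nnet_a_online/conf/ivector_extractor.conf \
        ark:data/session/spk2utt scp:data/session/feats.scp \
        ark,t:ivectors.txt

There is no single supported "is this i-vector bad" test, so any threshold on, say, the distance between consecutive final rows should be treated as a heuristic.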
From: Xingyu Na <asr...@gm...> - 2015-07-22 06:59:29
|
I updated the Maxout implementation on GPU in case anyone wants to try it in Kaldi... Please check out: https://github.com/naxingyu/kaldi-nn On 04/02/2015 01:08 AM, Daniel Povey wrote: > It looks to me like the MaxoutComponent has not been properly set up for efficient operation on GPU: there is a loop in the Propagate function. We didn't do this because it wasn't giving great results. Incidentally, for the multi-splice networks (which we're now calling TDNN) we may end up moving back from p-norm to ReLU, as ReLU now seems to be giving better WERs. > Dan > On Tue, Mar 31, 2015 at 10:30 PM, Xingyu Na <asr...@gm...> wrote: >> Hi, I tried maxout training by changing the train_pnorm_fast recipe into a train_maxout recipe, simply replacing the PnormComponent with a MaxoutComponent, forming a run_4e procedure. The training runs extremely slowly... e.g. a pnorm iteration per job takes 113 seconds, while a maxout iteration per job takes 3396 seconds. Did you ever try this? Any suggestions? >> Thank you and best regards, Xingyu |
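If anyone wants to try the updated code against mainline Kaldi, a sketch -- the repository layout and file names below are assumptions, so check its README for the real build steps:

    git clone https://github.com/naxingyu/kaldi-nn
    # e.g. drop the updated component sources into your tree and rebuild
    # (hypothetical file names):
    # cp kaldi-nn/src/nnet2/nnet-component.{h,cc} $KALDI_ROOT/src/nnet2/
    # make -C $KALDI_ROOT/src/nnet2 && make -C $KALDI_ROOT/src/nnet2bin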
From: Mate A. <ele...@gm...> - 2015-07-22 01:29:36
|
I am looking to compare the forced alignments I generated for the TIMIT dataset with the ground truth provided in the corpus. However, the 'text' file from the data preparation step and the PHN files provided in the corpus (which hold the ground truth) give differing phoneme sequences for each utterance. For instance, take the sample utterance FAEM0_SX42: phoneme sequence in the *text* file: sil b ih vcl b l ih cl k el s cl k aa l er z sil aa r vcl g y uw hh ih s cl t r iy sil phoneme sequence in the *PHN* file (ground truth): h# b ih bcl b l ih kcl k el s kcl k aa l er z pau q aa r gcl g y ux hv ih s tcl t r iy h# As you can see, the phoneme sequences in the two files differ by several phonemes, disregarding the h#/sil phonemes. 1. Is this normal? How can I accurately test the validity of my alignments when the ground truth specifies different phoneme sequences than my generated alignments? 2. Is there a script that would provide the phoneme error rate for the generated alignments? 3. What kind of metric can I use to compare my forced alignments to the ground truth? |
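The mismatch above is the standard TIMIT phone-set reduction rather than an alignment bug: the corpus is labelled with 61 phones, while the Kaldi recipe trains on a folded set (kcl/bcl become cl/vcl, ux becomes uw, hv becomes hh, q is dropped, h#/pau become sil). So both sequences should be folded to a common set before scoring, using the table the recipe ships, conf/phones.60-48-39.map. A hedged sketch, assuming ref.txt and hyp.txt hold lines of the form "utt-id phone phone ..." and folding everything to the 39-phone column:

    # ground truth: 61-phone set -> 39-phone set (column 1 -> column 3);
    # phones with no 39-phone mapping, like q, are simply dropped
    awk 'NR == FNR { map[$1] = $3; next }
         { out = $1
           for (i = 2; i <= NF; i++) if (map[$i] != "") out = out " " map[$i]
           print out }' conf/phones.60-48-39.map ref.txt > ref39.txt
    # generated alignments: 48-phone set -> 39-phone set (column 2 -> column 3)
    awk 'NR == FNR { map[$2] = $3; next }
         { out = $1
           for (i = 2; i <= NF; i++) if (map[$i] != "") out = out " " map[$i]
           print out }' conf/phones.60-48-39.map hyp.txt > hyp39.txt
    # string-level phone error rate:
    compute-wer --text --mode=present ark:ref39.txt ark:hyp39.txt

That answers questions 1 and 2 at the string level; for question 3, a frame- or boundary-level comparison would use the time marks from ali-to-phones --ctm-output against the PHN start/end samples.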
From: Jan T. <jt...@gm...> - 2015-07-08 16:33:30
|
If I understand what you are asking, you can do it even now if you have a GPU that supports it. Just remove the compute-exclusive flag from the GPU config. For training it doesn't make much sense. For decoding, I'm not sure -- but using the GPU for decoding does not lead to significant gains. Let us know about your experience. y. On Wed, Jul 8, 2015 at 12:27 PM, blake rasmussen <bla...@gm...> wrote: > [...] |
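For reference, the "compute-exclusive flag" is the card's compute mode, which can be inspected and (as root) changed with nvidia-smi:

    nvidia-smi --query-gpu=compute_mode --format=csv
    sudo nvidia-smi -c 0    # 0 = DEFAULT, lets several processes share one GPU
                            # (3 = EXCLUSIVE_PROCESS is the usual Kaldi setting)

Note this only removes the OS-level restriction; whether sharing a card is a good idea depends on the memory and load of each process.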
From: blake r. <bla...@gm...> - 2015-07-08 16:27:47
|
Hi all, is there any plan to support running different processes on a single GPU? I am more interested in the decoding part rather than in the training. Thanks in advance, Blake. |
From: Daniel P. <dp...@gm...> - 2015-07-07 19:31:28
|
> Does that mean that train_more2.sh trains on 9733 - 300 = 9433 utterances, where the 300 excluded utterances are those from the valid set? Yes. > Further, the decode.sh script accepts the graph directory for the trained model, and not the model itself. However, this graph directory hasn't been modified since the tri6b model. Even the nnet_a_online model and my new model which was fine-tuned on Blizzard all use the same graph directory. Does that mean that the decoding will be identical on all of these models? Is the graph directory the only deciding factor for the decoding's output? The graph does not contain the acoustic model; the decoding output also depends on the .mdl file. Please read http://mi.eng.cam.ac.uk/~mjfg/mjfg_NOW.pdf to understand the basics of speech recognition. Dan > [...] |
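To make the quoted answer concrete: the graph directory supplies HCLG.fst and words.txt, while steps/nnet2/decode.sh takes the acoustic model from final.mdl in the parent of the decode directory, so two models built on the same tree can share one graph and still decode differently. A minimal sketch with illustrative paths:

    # uses exp/nnet_a_blizzard/final.mdl; only the graph comes from exp/nnet_a/graph
    steps/nnet2/decode.sh --nj 4 exp/nnet_a/graph data/test \
        exp/nnet_a_blizzard/decode_test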
From: Mate A. <ele...@gm...> - 2015-07-07 14:50:57
|
Does that mean that *train_more2.sh* trains on 9733 - 300 = 9433 utterances, where the 300 excluded utterances are those from the valid set? Further, the *decode.sh* script accepts the graph directory for the trained model, and not the model itself. However, this graph directory hasn't been modified since the tri6b model. Even the nnet_a_online model and my new model which was fine-tuned on Blizzard all use the same graph directory. Does that mean that the decoding will be identical on all of these models? Is the graph directory the only deciding factor for the decoding's output? On Mon, Jul 6, 2015 at 2:27 PM, Daniel Povey <dp...@gm...> wrote: > It ignores the valid data but the train_subset is a subset of the data trained on. You could reduce the number a bit, e.g. to 150, if you are concerned about losing data. > Dan > [...] |
From: Sunit S. <sun...@in...> - 2015-07-07 08:30:35
|
It is working fine now. Thanks a lot. Regards, Sunit On Monday 06 July 2015 10:03 PM, Daniel Povey wrote: > BTW, it's the recompilation in tools/ that is important. You may have to remove the rnnlm subdirectory, whatever it is called, to force an update. > Dan > [...] |
From: Kirill K. <kir...@sm...> - 2015-07-07 04:58:47
|
To make sure it all makes sense: bitrate is not a property of an *audio* stream; it is rather a property of an encoded, *network* stream. It is about how much bandwidth one needs to transmit the audio, and/or how much space it will take at rest. You should not worry about the bitrate at all. Audio is determined by its sample rate, sample bit width and sample format. The latter is loosely a way of interpreting the N bits of the sample: is it a signed or unsigned int? A float, maybe? The bitrate is something one worries about when transmitting (compressed) audio over the network. The same 16-bit, 44.1 kHz audio track may be compressed into a 128 kbps MP3 stream, or a 64 kbps stream, at the expense of decoding quality. For uncompressed PCM audio the "bit rate" is simply the sample rate multiplied by the number of bits in a sample; it is a derived quantity that you are unlikely to need to specify. soxi is arguably the easiest tool to show audio format data: $ soxi /data/LibriSpeech/train-clean-100/103/1241/103-1241-0001.flac Input File : '103-1241-0001.flac' Channels : 1 Sample Rate : 16000 Precision : 16-bit Duration : 00:00:15.55 = 248800 samples ~ 1166.25 CDDA sectors File Size : 272k Bit Rate : 140k Sample Encoding: 16-bit FLAC What you really want to know is the sample rate (16000) and sample bit width (16). You hardly care about the bit rate (140k) at all; it is only an indication of how tightly the file is compressed: 16000*16 = 256000 bits/s are packaged by flac into 140000 bits/s for storage and transmission, ~2× compression, so what? We process only uncompressed audio data anyway. MP3 compresses much more tightly, but, unlike flac, it is lossy. $ soxi ~/08-01.mp3 Input File : '/home/kkm/08-01.mp3' Channels : 2 Sample Rate : 44100 Precision : 16-bit Duration : 00:08:13.73 = 21773273 samples = 37029.4 CDDA sectors File Size : 19.8M Bit Rate : 320k Sample Encoding: MPEG audio (layer I, II or III) -kkm On Fri, Jul 3, 2015, Jonathan L <jon...@gm...> wrote (Re: [Kaldi-users] LibriSpeech nnet2 model: training more on a new dataset): > The data I want to train on is in MP3 format at a 128kbps bitrate and a 44.1kHz sample rate. The LibriSpeech data has a 16kHz sample rate, but doesn't seem to have a specified bitrate. When I convert the MP3 files into 16kHz sample-rate WAV files, what bitrate should I convert them to? Is there anything else I should consider when converting the speech files? > On Mon, Jun 29, 2015 at 12:24 PM, Vijayaditya Peddinti <p.v...@gm...> wrote: >> You need to provide the egs directory, not the exp directory. You can check stage -3 of steps/nnet2/train_multisplice_accel2.sh to see how the egs directory can be created from the alignment and data directories. The context variables necessary for creating these examples can be found in the nnet_ms_a_online/conf/splice.conf file. >> Vijay >> On Mon, Jun 29, 2015 at 9:14 AM, Jonathan L <jon...@gm...> wrote: >>> The train_more*.sh scripts accept an 'exp' directory instead of a 'data/train' directory. Is there another script that would accept the 'data/train' directory as input instead? >>> On Mon, Jun 29, 2015 at 12:08 PM, Vijayaditya Peddinti wrote: >>>> See the scripts steps/nnet2/train_more*.sh >>>> Vijay >>>> On Mon, Jun 29, 2015 at 9:02 AM, Jonathan L wrote: >>>>> I'm looking to further train an existing LibriSpeech nnet2_a_online model on a new dataset. I have prepared the files for this new dataset inside a data/train directory, as described in the Data Preparation tutorial. I want to keep the nnet2_a_online model initialized to the parameters it learned from training on LibriSpeech, but continue its training on this new dataset. Is there a script that would allow me to specify the nnet2_a_online model and the dataset's data/train directory as input in order to output a model that has been trained more on this new dataset? |
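Following on from the above, a minimal conversion sketch with sox (it must be compiled with MP3 support; file names are illustrative):

    # decode the MP3 and write 16 kHz, 16-bit, mono PCM WAV; the source MP3's
    # bitrate is irrelevant once it has been decoded
    sox input.mp3 -b 16 output.wav channels 1 rate 16000

Downsampling from 44.1 kHz to 16 kHz discards the upper frequencies, but that is unavoidable here: the LibriSpeech models were trained on 16 kHz audio.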
From: Daniel P. <dp...@gm...> - 2015-07-06 20:03:19
|
BTW, it's the recompilation in tools/ that is important. You may have to remove the rnnlm subdirectory, whatever it is called, to force an update. Dan On Mon, Jul 6, 2015 at 12:52 PM, Jan Trmal <jt...@gm...> wrote: > I just committed a patch that hopefully resolves this. Sunit, please update Kaldi, recompile and run the rescoring again. Let us know if it fixed it. > y. > [...] |
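A sketch of the update-and-rebuild steps being described; the tools Makefile target and whether your checkout is SVN or git are assumptions, so adapt to your setup:

    svn update                 # or 'git pull', depending on the checkout
    cd tools
    rm -rf rnnlm-hs-0.1b       # force the patched rnnlm sources to be rebuilt
    make                       # check tools/Makefile for a narrower target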
From: Jan T. <jt...@gm...> - 2015-07-06 19:52:17
|
I just committed a patch that hopefully resolves this. Sunit, please update Kaldi, recompile and run the rescoring again. Let us know if it fixed it. y. On Mon, Jul 6, 2015 at 7:55 AM, Sunit Sivasankaran <sun...@in...> wrote: > [...] |
From: Daniel P. <dp...@gm...> - 2015-07-06 18:27:17
|
It ignores the valid data but the train_subset is a subset of the data trained on. You could reduce the number a bit, e.g. to 150, if you are concerned about losing data. Dan On Mon, Jul 6, 2015 at 8:58 AM, Mate Andre <ele...@gm...> wrote: > [...] |
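If losing those 300 utterances is a concern, the subset size is fixed when the examples are dumped, so the reduction to 150 would be applied at that stage. A hedged sketch -- the flag name is derived from the num_utts_subset variable in get_egs2.sh, and the paths are illustrative:

    steps/nnet2/get_egs2.sh --num-utts-subset 150 \
        --left-context 7 --right-context 7 \
        data/blizzard exp/blizzard_ali exp/blizzard_egs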
From: Mate A. <ele...@gm...> - 2015-07-06 15:58:43
|
I noticed that the egs dumped for the Blizzard corpus contain 300 utterances for each of the *valid* and *train* subsets, out of a total of 9733 utterances. Does the *train_more2.sh* script train on all 9733 utterances of the dataset, or does it ignore the utterances included in the *valid* and *train* subsets when training? On Fri, Jul 3, 2015 at 2:59 PM, Daniel Povey <dp...@gm...> wrote: > It definitely supports that -- if you set num-threads to 1 it will train with GPU, but read this page: http://kaldi.sourceforge.net/dnn2.html > Dan > On Fri, Jul 3, 2015 at 6:48 AM, Mate Andre <ele...@gm...> wrote: >> train_more2.sh has been running for 19 hours and is currently at pass 42/60. Since the script is training on the 19-hour subset of Blizzard, I imagine it'll take quite a while longer to train on the full 300 hours. Is there an option to run the train_more2.sh script on GPU? >> On Thu, Jul 2, 2015 at 2:25 PM, Daniel Povey <dp...@gm...> wrote: >>> > Back to training on the Blizzard dataset, I was able to dump the iVectors for Blizzard's 19-hour subset. Where are they needed, though? Neither train_more2.sh nor get_egs2.sh seem to accept dumped iVectors as input. >>> It's the --online-ivector-dir option. >>> > Regardless, I ran the train_more2.sh script on Blizzard's data/ and egs/ folder (generated with get_egs2.sh), and I get the following errors in train.*.*.log: KALDI_ASSERT: at nnet-train-parallel:FormatNnetInput:nnet-update.cc:212, failed: data[0].input_frames.NumRows() >= num_splice [...] LOG (nnet-train-parallel:DoBackprop():nnet-update.cc:275) Error doing backprop, nnet info is: num-components 17 num-updatable-components 5 left-context 7 right-context 7 input-dim 140 output-dim 5816 parameter-dim 10351000 [...] The logs tell me that the left and right contexts were set to 7. However, I specified them both as 3 when running get_egs2.sh. The egs/info/{left,right}_context files even confirm that they are set to 3. Is it possible that train_more2.sh is using the contexts from another directory? >>> The problem is that 3 < 7. The neural net requires a certain amount of temporal context (7 left and right, here) and if you dump less than that in the egs it will crash. So you need to set them to 7 when dumping egs. >>> Dan >>> > On Tue, Jun 30, 2015 at 2:07 PM, Daniel Povey wrote: >>> >> Check the script that generated it; probably the graph directory was in a different location, e.g. in tri6 or something like that. Hopefully we would have uploaded that too. We only need to regenerate the graph when the tree changes. >>> >> Dan >>> >> On Tue, Jun 30, 2015 at 2:05 PM, Mate Andre wrote: >>> >>> To ensure that the nnet_a_online model is performing well on the 19-hour Blizzard dataset and that it is producing correct alignments, I want to run the decoding script on the Blizzard data. However, the nnet_a_online model on kaldi-asr.org doesn't seem to have a graph directory needed for decoding. Is there any way I can get a hold of this directory without training the entire model? |
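A sketch of the fix Dan describes above -- re-dump the examples with enough context for this model and point the script at the i-vectors; the directory names are illustrative, not from the recipe:

    steps/nnet2/get_egs2.sh --left-context 7 --right-context 7 \
        --online-ivector-dir exp/nnet2_online/ivectors_blizzard \
        data/blizzard exp/blizzard_ali exp/blizzard_egs

The required context can be read off the model itself, e.g. nnet-am-info exp/nnet_a_online/final.mdl prints the left-context and right-context lines quoted in the error log.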
From: Jan T. <jt...@gm...> - 2015-07-06 13:46:53
|
I believe there was some issue with long paths (I think there was a fixed-size buffer for the filename, or something like that). I have no idea if it was fixed or not. y. On Mon, Jul 6, 2015 at 7:55 AM, Sunit Sivasankaran <sun...@in...> wrote: > Hi all, I am getting a buffer overflow error while running the RNNLM scripts of WSJ. Any idea as to what could have gone wrong? I trained the model using a subset of WSJ utterances and, from the logs, the training seemed alright. Below are the rnnlm rescore logs, followed by the RNN training logs. > steps/rnnlmrescore.sh --rnnlm_ver rnnlm-hs-0.1b --N 100 0.5 data/lang_test_tgpr_5k data/lang_rnnlm_h30_me5-1000 data/dt05_multi_r_mc exp/tri4a/decode_tgpr_5k exp/tri4a/decode_tgpr_5k_rnnlm_h30_me5-1000_L0.5 > steps/rnnlmrescore.sh: converting lattices to N-best. > steps/rnnlmrescore.sh: removing old LM scores. > steps/rnnlmrescore.sh: creating separate-archive form of N-best lists. > steps/rnnlmrescore.sh: doing the same with old LM scores. > steps/rnnlmrescore.sh: Creating archives with text-form of words, and LM scores without graph scores. > steps/rnnlmrescore.sh: invoking rnnlm_compute_scores.sh which calls rnnlm, to get RNN LM scores. > *** buffer overflow detected ***: ../../../tools/rnnlm-hs-0.1b/rnnlm terminated > ======= Backtrace: ========= > /lib/x86_64-linux-gnu/libc.so.6(+0x7338f)[0x7fda06f6b38f] > /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x5c)[0x7fda07002c9c] > /lib/x86_64-linux-gnu/libc.so.6(+0x109b60)[0x7fda07001b60] > ../../../tools/rnnlm-hs-0.1b/rnnlm[0x4011ea] > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fda06f19ec5] > ../../../tools/rnnlm-hs-0.1b/rnnlm[0x4018ac] > Training logs: > ../../../tools/rnnlm-hs-0.1b/rnnlm -threads 1 -independent -train /tmp/tmp.6aF3RDTFnf -valid /tmp/tmp.aTiUNgZnWT -rnnlm data/lang_rnnlm_h30_me5-1000/rnnlm -hidden 30 -rand-seed 1 -debug 2 -class 200 -bptt 2 -bptt-block 20 -direct-order 4 -direct 1000 -binary > Vocab size: 9066 > Words in train file: 164907 > Starting training using file /tmp/tmp.6aF3RDTFnf > Iteration 0 Valid Entropy 9.595403 > Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 117.21k Iteration 1 Valid Entropy 8.564008 > Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 123.54k Iteration 2 Valid Entropy 8.297136 > Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 122.70k Iteration 3 Valid Entropy 8.175531 > Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 108.22k Iteration 4 Valid Entropy 8.107678 > Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 121.89k Iteration 5 Valid Entropy 8.069274 > Alpha: 0.100000 ME-alpha: 0.100000 Progress: 97.12% Words/thread/sec: 124.64k Iteration 6 Valid Entropy 8.049375 Decay started > Alpha: 0.050000 ME-alpha: 0.050000 Progress: 97.12% Words/thread/sec: 111.30k Iteration 7 Valid Entropy 8.009795 > Alpha: 0.025000 ME-alpha: 0.025000 Progress: 97.12% Words/thread/sec: 124.70k Iteration 8 Valid Entropy 7.989441 Retry 1/2 > Alpha: 0.012500 ME-alpha: 0.012500 Progress: 97.12% Words/thread/sec: 113.82k Iteration 9 Valid Entropy 7.982499 Retry 2/2 > # Accounting: time=439 threads=1 > # Ended (code 0) at Fri Jun 26 16:22:52 CEST 2015, elapsed time 439 seconds > Regards, Sunit |
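Until the fix is confirmed, a possible workaround sketch is to shorten the paths the rnnlm binary sees, for example by running the recipe from a symlinked location (paths illustrative):

    ln -s /very/long/path/to/kaldi-trunk/egs/wsj/s5 ~/s5
    cd ~/s5    # then re-run the steps/rnnlmrescore.sh command as before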