Dear Sir,
I am trying to run LDA/MLLT training and have got the following error:
"mllt.py failed to create MLLT transform with status 0".
I am using the default dimension values only:
# Calculate an LDA/MLLT transform?
$CFG_LDA_MLLT = 'yes';
# Dimensionality of LDA/MLLT output
$CFG_LDA_DIMENSION = 29;
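For anyone unsure what `$CFG_LDA_DIMENSION` controls: LDA projects the original features down to that many directions, chosen to maximize between-class relative to within-class scatter. A toy NumPy sketch of the idea (the data, sizes, and two-class setup are made up purely for illustration, not taken from SphinxTrain):

```python
import numpy as np

# Hypothetical toy data: two Gaussian classes in D=4 dimensions,
# reduced to target_dim=2 LDA directions.
rng = np.random.default_rng(0)
D, target_dim, n = 4, 2, 100
X1 = rng.normal(0.0, 1.0, size=(n, D))   # class 1 samples
X2 = rng.normal(3.0, 1.0, size=(n, D))   # class 2 samples

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
mu = np.vstack([X1, X2]).mean(axis=0)

# Within-class and between-class scatter matrices
Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)
Sb = n * np.outer(mu1 - mu, mu1 - mu) + n * np.outer(mu2 - mu, mu2 - mu)

# Solve Sw^-1 Sb v = lambda v; keep the top target_dim eigendirections
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(vals.real)[::-1]
W = vecs[:, order[:target_dim]].real     # D x target_dim LDA transform

X_proj = np.vstack([X1, X2]) @ W         # reduced-dimension features
print(X_proj.shape)                      # (200, 2)
```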
Phase 4: LDA transform estimation
Baum welch starting for LDA, iteration: N (1 of 1)
0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
This step had 50 ERROR messages and 0 WARNING messages. Please check the log
file for details.
LDA Training completed
MODULE: 06 Train MLLT transformation
Phase 1: Cleaning up directories:
accumulator...logs...qmanager...
Phase 2: Flat initialize
Phase 3: Forward-Backward
LDA training completes, but something goes wrong when it tries to create the
MLLT transform.
The last few lines of the log file look like this:
gradient L2: 24480866593407272689493724513120478270342243870937261685361086618
574251334224349981661966139951557131991509563827334874937182786334614280661635
66221197312.000000
likelihood: 1413579874.459867
gradient L2: 39838480563851398187715471605087740555929064055132708439808230619
507186700877400335781835995286306817103477191447911311693023427678371824111210
09698537472.000000
likelihood: 1401368524.530368
gradient L2: 88724084115061340753212736576854558112794744842911667295902240000
466439694025685169246194555555545592989106009033436724536017138586720665988216
0809312256.000000
likelihood: 1413441897.808376
gradient L2: 53990418893570954439897574318123363367378089634178278056402379074
155809980142527984458284828997305166747664872134965009174575233711184312308581
68750833664.000000
likelihood: 1414674290.991693
gradient L2: 55849494718929241150961626702025885046964166269244166613734401091
790295429799413469747251880343654833170403636716179171317324526417056008773836
10808664064.000000
likelihood: 1407100552.696218
gradient L2: 17091045104761428218571603520864329472726832524738597365696385410
635491224830358000905707569207160239503315558857592318683251353501538861641666
93470863360.000000
likelihood: 1415586198.331440
gradient L2: 65592510340811019542071688287598025964172993695250590809367295498
610010542774396346614268645500250110314122658724163597585278057647727330832615
42768574464.000000
likelihood: 1410740002.193484
gradient L2: 33991264013457004428023385100981599345560032254808764142426704640
024406133942872223574020930139994822951806263817030306370497525258401860901499
24583768064.000000
likelihood: 1416786711.473604
gradient L2: 81096600279110704083907309728642849568445125067151064033631297755
6191223940835511581606192711343562506733653534504691603324868702336766
Warning: divide by zero encountered in log
5198774539459231744.000000
likelihood: 1401052508.807922
gradient L2: 75743759085989013715517585325660999964753833082982449266710901502
573208959959360265173598262327023248923005976741039136405177307540562080503054
4993681408.000000
likelihood: 1413629802.726191
gradient L2: 41996145025059630582095604700810621661113638694629881478329708011
832211161849335177958360284199644611111925490623319485675285130596306261159213
89796982784.000000
likelihood: 1418446524.020463
gradient L2: 10124847208876831261015066515538656506358456028220866185663458480
645205070205593149823802229696156176232360478114339740973590219899724435625940
874073997312.000000
likelihood: 1406625897.912199
gradient L2: 21515710634539703414682707752550963593059497848815969143883780045
661662666136734037036125770689866588460910673887602698381952121535831277969511
18440824832.000000
likelihood: 1418722734.555799
gradient L2: 11295202417865168849209702294698601805226502454142760005185821370
776175170874057492976564982508784115752099281648395352734769036603654201288582
570897833984.000000
likelihood: 1419714528.443789
gradient L2: 12381010190558624183499680132211584649367679351363373516786629875
770340704420055145523991983991247211205389157478914257903405973382298888546596
354086928384.000000
Traceback (most recent call last):
  File "/home/lahari/Speech/tellda/python/cmusphinx/mllt.py", line 139, in <module>
    mllt = m.train()
  File "/home/lahari/Speech/tellda/python/cmusphinx/mllt.py", line 110, in train
    AA, f, d = fmin_l_bfgs_b(self.objective, A.ravel(), args=A.shape, factr=10)
  File "/usr/lib/python2.7/site-packages/scipy/optimize/lbfgsb.py", line 196, in fmin_l_bfgs_b
    f, g = func_and_grad(x)
  File "/usr/lib/python2.7/site-packages/scipy/optimize/lbfgsb.py", line 147, in func_and_grad
    f, g = func(x, *args)
  File "/home/lahari/Speech/tellda/python/cmusphinx/mllt.py", line 78, in objective
    lg = self.totalcount * inv(A.T)
  File "/usr/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 445, in inv
    return wrap(solve(a, identity(a.shape, dtype=a.dtype)))
  File "/usr/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 328, in solve
    raise LinAlgError, 'Singular matrix'
numpy.linalg.linalg.LinAlgError: Singular matrix
Is there any problem with my model?
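For context: the traceback says `inv(A.T)` failed because the MLLT transform matrix A became singular (non-invertible), and the earlier "divide by zero encountered in log" warning is the same symptom, since the objective takes a log-determinant that goes to minus infinity as det(A) approaches zero. A minimal NumPy sketch reproducing the exception (the 2x2 matrix here is a made-up illustration, not the actual transform):

```python
import numpy as np

# A singular (rank-deficient) matrix: the second row is twice the first,
# so the determinant is zero and no inverse exists -- the same condition
# that makes inv(A.T) blow up inside mllt.py's objective function.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))  # 0.0 (up to rounding)

try:
    np.linalg.inv(A.T)
except np.linalg.LinAlgError as e:
    print("LinAlgError:", e)
```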
Update to the latest sphinxbase and sphinxtrain snapshots.
Hi Nikolay,
I am facing the same issue. What do you mean by sphinxbase and sphinxtrain snapshots?