I did speaker adaptation by invoking the following Perl program in SphinxTrain:
scripts_pl/6.sa_train/slave.pl
(single-class MLLR)
When I adapted the speaker with the same amount of data as in training,
there was an improvement in accuracy.
But when I adapt with a small amount of data (say 100 words), the accuracy falls down...
What would be the reason?
Does anyone have an idea about multi-class MLLR?
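For reference, the invocation itself was nothing more than running that stage of the trainer, something like this (I am assuming it picks up its settings from etc/sphinx_train.cfg like the other stages, so treat it as a sketch):

    perl scripts_pl/6.sa_train/slave.pl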
> But when I adapt with a small amount of data (say 100 words), the accuracy falls down...
Please follow http://www.speech.cs.cmu.edu/cmusphinx/moinmoin/AcousticModelAdaptation
> Does anyone have an idea about multi-class MLLR?
Sure, there is an idea; please be more specific when asking your questions. What exactly do you need to know about MLLR?
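The page walks through running the adaptation tools by hand. Very roughly, the first two steps look like the sketch below; the model paths, feature settings and exact flags depend on your model type and SphinxTrain version, so check the page rather than copying this literally:

    # collect adaptation statistics over the adaptation utterances
    ./bw -hmmdir model_dir -moddeffn model_dir/mdef -ts2cbfn .cont. \
         -dictfn adapt.dic -ctlfn adapt.fileids -lsnfn adapt.transcription \
         -cepdir feat -accumdir bwaccumdir

    # estimate a (single-class) MLLR transform from those statistics
    ./mllr_solve -meanfn model_dir/means -varfn model_dir/variances \
                 -outmllrfn mllr_matrix -accumdir bwaccumdir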
Sir,
Thanks a lot for replying...
>Please follow http://www.speech.cs.cmu.edu/cmusphinx/moinmoin/AcousticModelAdaptation
I just used slave.pl, which comes under scripts_pl/6.0sa_train in SphinxTrain.
It does the same work as that page describes: it runs BW, mllr_solve and mllr_transform,
which creates speaker-adapted model parameters named applicationname_sat under ModelParameters.
It contains the adapted means, variances, the transformation matrix and speaker.mllr.
I decoded with the adapted means instead of applying the MLLR transform at run time.
Sir, is the procedure I followed right?
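In other words, I applied the transform offline, roughly like this (the file names are just what my run produced, and the exact flags may differ between SphinxTrain versions, so this is only a sketch of what I did):

    # bake the MLLR transform into the model means instead of applying it at decode time
    ./mllr_transform -inmeanfn model_dir/means \
                     -outmeanfn applicationname_sat/means \
                     -mllrmat speaker.mllr

As far as I understand, the alternative would be to keep the original means and hand the transform file to the decoder at run time instead (pocketsphinx, for example, has a -mllr option for that, if I remember right).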
> Sure, there is an idea; please be more specific when asking your questions. What exactly do you need to know about MLLR?
Since I got a negative result when adapting with a small number of utterances,
the reason may be that the single-class MLLR transformation is global; hence I want to divide the transformation into multiple classes.
Grey areas (a rough sketch of what I mean follows this list):
1. How to create regression classes automatically (my model is triphone-based)?
2. How to use these classes while adapting?
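What I have in mind, if I understand the tools correctly, is roughly the sketch below. mllr_solve appears to take a -cb2mllrfn argument mapping codebooks/senones to MLLR classes (with .1cls. meaning the single global class), and SphinxTrain seems to include an mk_mllr_class program for building such a mapping; the class count and the mk_mllr_class options here are my guesses, not a verified recipe:

    # build a codebook-to-MLLR-class mapping (option names below are my assumption)
    ./mk_mllr_class -nclass 4 -cb2mllrfn cb2mllr.4cls

    # estimate one transform per regression class instead of one global transform
    ./mllr_solve -meanfn model_dir/means -varfn model_dir/variances \
                 -cb2mllrfn cb2mllr.4cls \
                 -outmllrfn mllr_matrix.4cls -accumdir bwaccumdir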