There is no support for the theory of Universal Temporal Patterns in CMUSphinx; you can't do that out of the box. You will have to modify feature extraction and other processing stages in order to recognize the patterns you need.
A music analysis toolkit might be more suitable for your research.
Last edit: Nickolay V. Shmyrev 2012-10-18
How can feature vectors with a variable timestep be implemented in Sphinx? I've tried to do it directly, by just replacing the feature-generating utility, but sphinxtrain raises an overflow error.
Maybe it's strictly assumed that the timestep of the feature vectors must be constant?
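Yes, conventional feature extraction assumes a fixed frame rate: frames are written back-to-back with no per-frame timestamps, so a consumer recovers the time of frame i as i * hop / sample_rate. The sketch below is plain Python (it does not use any CMUSphinx API; the frame/hop values are illustrative defaults, not Sphinx settings) and shows where that constant-timestep assumption lives:

```python
# Minimal sketch (plain Python, no CMUSphinx APIs) of fixed-hop framing.
# Because every frame advances by the same hop, a trainer never stores
# timestamps: frame i implicitly starts at i * hop / sample_rate.

def frame_signal(samples, frame_len=400, hop=160):
    """Split a sample sequence into overlapping fixed-length frames.

    frame_len=400 and hop=160 correspond to the common 25 ms window /
    10 ms shift at 16 kHz; these are illustrative, not Sphinx defaults.
    """
    frames = []
    start = 0
    while start + frame_len <= len(samples):
        frames.append(samples[start:start + frame_len])
        start += hop  # constant timestep: every frame advances by the same hop
    return frames

signal = [0.0] * 16000             # one second of silence at 16 kHz
frames = frame_signal(signal)
print(len(frames))                 # floor((16000 - 400) / 160) + 1 = 98 frames
```

With a variable hop, the frame index no longer determines the frame's time, so every downstream component that maps frames back to time would need explicit per-frame timestamps, which the standard feature-file layout has no place for.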
Hi,
I want to use the library for Android, but we have to recognise USTs from the speech.
Here is the link for the same:
http://www.labo-mim.org/site/index.php?2008/09/11/81-ust-et-graphisme
Please let me know whether this is possible using CMU Sphinx.
Thu, 18 Oct 2012 18:07:38 +0000 from "Nickolay V. Shmyrev" nshmyrev@users.sf.net:
Any pointers to move ahead would be really helpful.
Do you have any idea how this could be done?