In my application, I want to create multiple Pocketsphinx decoder instances (Pocketsphinx wrappers with SWIG for Python) with various hmm (different languages), dict, lm and altogether different config parameters.
I have a few queries:
Is there any limitation on the number of pocketsphinx decoder instances created in a Python application? Or is it limited only by the memory available on the machine?
If a decoder is created in a thread, will the resources associated with the decoder be released automatically when the thread is done processing, or do we need to call ps_free() explicitly?
Can we reuse a decoder after ps_free() has already been called on it?
Which model is better for a Python application: creating multiple decoder instances at application startup, or creating them dynamically on the fly?
Is any memory usage analysis available for the native pocketsphinx decoder?
Thanks
Is there any limitation on the number of pocketsphinx decoder instances created in a Python application?
No
Or is it limited by the memory available on the machine?
Yes
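For example, loading two independent decoders for different languages looks roughly like this (a sketch; the model paths are placeholders and the Decoder.default_config() style assumes the SWIG-based Python wrapper):

```python
# Sketch: two independent decoders, each with its own acoustic model,
# language model and dictionary. The paths are placeholders.
from pocketsphinx.pocketsphinx import Decoder

def make_decoder(hmm_dir, lm_file, dict_file):
    config = Decoder.default_config()
    config.set_string('-hmm', hmm_dir)     # acoustic model directory
    config.set_string('-lm', lm_file)      # language model
    config.set_string('-dict', dict_file)  # pronunciation dictionary
    return Decoder(config)

en_decoder = make_decoder('model/en-us', 'model/en-us.lm.bin', 'model/en-us.dict')
de_decoder = make_decoder('model/de-de', 'model/de-de.lm.bin', 'model/de-de.dict')
# Each instance loads its own models into memory, so the practical limit is RAM.
```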
If a decoder is created in a thread, will the resources associated with the decoder be released automatically when the thread is done processing, or do we need to call ps_free() explicitly?
You need to call ps_free() in C code.
The Python object will release the decoder automatically.
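In other words, from Python you just let the decoder object go out of scope. A minimal sketch, assuming the wrapper's start_utt/process_raw/end_utt/hyp calls and a config built as above:

```python
# Sketch: no explicit ps_free() from Python; the wrapper frees the native
# decoder when the Python object is garbage-collected.
def decode_once(config, raw_audio):
    decoder = Decoder(config)
    decoder.start_utt()
    decoder.process_raw(raw_audio, False, True)  # no_search=False, full_utt=True
    decoder.end_utt()
    hyp = decoder.hyp()
    return hyp.hypstr if hyp else None
    # 'decoder' goes out of scope when the function returns; the native
    # resources are released automatically (use 'del decoder' to force it earlier).
```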
Can we reuse a decoder after ps_free() has already been called on it?
No
Which model is better for a Python application: creating multiple decoder instances at application startup, or creating them dynamically on the fly?
It depends on the purpose of your application; usually a worker pattern is better.
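A rough sketch of that worker pattern, with decoders created once at startup and jobs fed through a queue (the decoder and config setup from the earlier sketch is assumed):

```python
# Sketch: each worker thread owns one decoder created at startup and keeps
# reusing it for audio chunks taken from a shared queue.
import threading
import queue
from pocketsphinx.pocketsphinx import Decoder

jobs = queue.Queue()

def worker(config):
    decoder = Decoder(config)        # built once, reused for many utterances
    while True:
        raw_audio = jobs.get()
        if raw_audio is None:        # sentinel value shuts the worker down
            break
        decoder.start_utt()
        decoder.process_raw(raw_audio, False, True)
        decoder.end_utt()
        hyp = decoder.hyp()
        print(hyp.hypstr if hyp else '')

# 'config' is assumed to be built as in the earlier sketch.
workers = [threading.Thread(target=worker, args=(config,)) for _ in range(4)]
for t in workers:
    t.start()
```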
Is any memory usage analysis available for the native pocketsphinx decoder?
You have to do that yourself; it mostly depends on the model size.
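If you just want a quick number, you can measure it crudely from Python (a sketch, POSIX-only; ru_maxrss is reported in kilobytes on Linux, and make_decoder is the helper from the sketch above):

```python
# Sketch: crude estimate of how much resident memory one decoder adds.
# This uses peak RSS, so it is only a rough upper bound.
import resource

before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
decoder = make_decoder('model/en-us', 'model/en-us.lm.bin', 'model/en-us.dict')
after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('decoder added roughly %d kB' % (after - before))
```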
Is it okay to use a single decoder from multiple threads, as long as I take care of synchronization? That is, can I create a decoder on thread A, then do some decoding with it on thread B, then do some more decoding on thread C?
Thank you, Nickolay! Got it.
Is it okay to use a single decoder from multiple threads, as long as I take care of synchronization? That is, can I create a decoder on thread A, then do some decoding with it on thread B, then do some more decoding on thread C?
It is OK. You can protect the decoder with a mutex.
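Something along these lines (a minimal sketch; the decoder setup is assumed to be as above):

```python
# Sketch: one shared decoder, serialized with a lock so only one thread
# uses it at a time. Any thread may call decode(); calls never overlap.
import threading

decoder_lock = threading.Lock()

def decode(decoder, raw_audio):
    with decoder_lock:
        decoder.start_utt()
        decoder.process_raw(raw_audio, False, True)
        decoder.end_utt()
        hyp = decoder.hyp()
        return hyp.hypstr if hyp else None
```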
Thanks, that's what I was hoping for!