Hi all,
Greetings! I am using Karel's DNN2 setup for my application. I wanted to know whether there is a way to measure CPU and memory usage during decoding, i.e. while running the nnet-latgen-faster binary.
FYI, I am running the binary on an ARM Cortex-A15 platform.
Looking at the decoding graph size and the model size might give you a
rough estimate -- I am not sure there is a way to get a more precise
one.
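For an actual measurement rather than an estimate, one option on a Linux-based board is to poll /proc/&lt;pid&gt;/status while the decoder runs (GNU time's `/usr/bin/time -v` also reports "Maximum resident set size"). The script below is only a sketch under that assumption; the `sleep` command is a placeholder that you would replace with your real nnet-latgen-faster invocation:

```python
#!/usr/bin/env python3
"""Rough peak-memory monitor for a child process via /proc (Linux only)."""
import subprocess
import time

def peak_rss_kb(cmd):
    """Run cmd, polling /proc/<pid>/status for VmRSS; return the peak in kB."""
    proc = subprocess.Popen(cmd)
    peak = 0
    while proc.poll() is None:
        try:
            with open(f"/proc/{proc.pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        # Line looks like "VmRSS:     1234 kB"
                        peak = max(peak, int(line.split()[1]))
        except FileNotFoundError:
            break  # process exited between poll() and open()
        time.sleep(0.05)
    return peak

if __name__ == "__main__":
    # Placeholder command -- substitute the real decoder invocation,
    # e.g. ["nnet-latgen-faster", <model/graph/feature args>].
    kb = peak_rss_kb(["sleep", "0.5"])
    print(f"approx. peak RSS: {kb} kB")
```

Polling only gives a lower bound on the true peak (allocations between samples can be missed), but it is usually close enough to tell whether the graph and model fit comfortably in the board's RAM.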
y.
ARM is usually for small embedded platforms. The decoding method used
in Kaldi is based on compiling a large decoding graph, and it's not
well optimized for platforms where memory consumption is an issue. It
is possible in principle to create a more memory-efficient decoding
pipeline, but it would need attention from someone very experienced
with decoding.
Dan
Thanks for your input @Dan and @Jan :)
Abhijit