I've done some experiments with the CURRENNT library and it works like a charm. However, I've run into problems when trying to use the trained network in my application, where the network provides output to a controller online. I had hoped to run the console tool externally with the trained network file and a one-element input sequence, but I realized that the LSTM cell states are not stored in the network file, so I lose the context between calls.
I found one workaround for this problem: store the input history as the task develops in time, and at every time step run the feed-forward phase from the beginning with a sequence that is one element longer. This gives the right numbers, but it wastes a lot of time (the total work grows quadratically with the sequence length).
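For concreteness, here is a minimal sketch of that growing-history workaround. The `run_forward` function is hypothetical: it stands in for however you invoke the trained network on a full input sequence (for example, a subprocess call to the currennt console tool); only the buffering logic around it is the point.

```python
def run_forward(sequence):
    # Hypothetical placeholder for running the network's feed-forward
    # phase over the whole sequence, returning one output per timestep.
    # Replaced by a running sum here so the sketch is self-contained.
    return [sum(sequence[: t + 1]) for t in range(len(sequence))]

class GrowingHistoryRunner:
    """Re-runs the forward pass from t=0 at every step, so the LSTM
    cell states are rebuilt implicitly each time. Correct, but O(T^2)
    total work over T steps."""

    def __init__(self):
        self.history = []

    def step(self, x):
        self.history.append(x)          # grow the input sequence by one
        outputs = run_forward(self.history)
        return outputs[-1]              # only the newest output is needed
```

Storing and restoring the cell states directly would reduce each step to a single-timestep forward pass, which is exactly what the question below asks about.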
So, is it somehow possible to store/restore cell states with the current currennt implementation? In any case, thanks a lot for the library.
This would be very interesting for me as well.
Did you find a solution to it?