Hi developers of Mocapy,
I am currently using Kevin Murphy's BNT toolbox to do inference in a dynamic Bayesian network modelling chord, key, and other musical features from beat-segmented audio. BNT has worked really nicely for me, but now I'm wondering if your Python toolbox could be a nice alternative.
The question is: will Mocapy perform as efficiently as, or more efficiently than, BNT? I would really appreciate your help with this decision.
My model looks as follows:
I have a six-layered DBN with four discrete hidden layers of sizes 4, 6, 30, and 13, as well as two Gaussian layers of dimension 13 and 12. (Ideally the discrete node sizes would be 4, 24, 109, and 13, but that was too memory-intensive in BNT.) The node with the largest CPT is the third: it depends on its own predecessor and on nodes 1 and 2, so its CPT has size 30 x 4 x 6 x 30 (ideally: 109 x 4 x 24 x 109).
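To put rough numbers on the memory question, here is a quick back-of-the-envelope in plain Python (assuming dense tables of 8-byte doubles; this covers only CPT storage, not the clique tables that inference builds on top, which are what actually blew up in BNT):

```python
# Back-of-the-envelope memory estimate for the largest CPT in the model.
# Assumes dense storage with 8-byte double-precision entries.

def cpt_bytes(*dims):
    """Bytes needed for a dense CPT with the given dimensions."""
    n = 1
    for d in dims:
        n *= d
    return 8 * n  # 8 bytes per double

# Current (reduced) sizes: node 3's CPT is 30 x 4 x 6 x 30
current = cpt_bytes(30, 4, 6, 30)    # 172,800 bytes (< 1 MB)

# Ideal sizes: 109 x 4 x 24 x 109
ideal = cpt_bytes(109, 4, 24, 109)   # 9,124,608 bytes (~9 MB)

print(current, ideal, ideal // current)  # the ideal CPT is ~52x larger
```

So the CPT itself is modest even at the ideal sizes; the real cost is the joint hidden state per slice (4 x 6 x 30 x 13 = 9,360 states now, 4 x 24 x 109 x 13 = 136,032 ideally), which is what exact inference has to enumerate.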
I will typically have 400-800 slices to model.
So far I don't do any learning in the model; all parameters are set according to expert functions. However, in a project later this year we plan to do parameter learning in the model.
Do you think it's feasible to implement the model in Mocapy to get the posterior distribution as well as the Viterbi path?
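For concreteness, by "Viterbi path" I mean the standard max-product recursion over the (flattened) joint hidden state, one decision per slice. A minimal library-agnostic sketch with toy sizes and placeholder parameters (nothing here is the real model, and I don't know what Mocapy's API for this looks like):

```python
import numpy as np

# Toy Viterbi decoding over a flattened hidden state.  Sizes and
# parameters are placeholders; in the real model the joint hidden
# state has 4*6*30*13 = 9,360 values and there are 400-800 slices.

rng = np.random.default_rng(0)

S, T = 5, 8                          # joint hidden states, slices
trans = rng.random((S, S))
trans /= trans.sum(axis=1, keepdims=True)   # row-stochastic transitions
init = np.full(S, 1.0 / S)                  # uniform initial distribution
log_obs = rng.random((T, S))                # stand-in for Gaussian log-likelihoods

def viterbi(log_init, log_trans, log_obs):
    """Most probable state sequence (max-product in log space)."""
    T, S = log_obs.shape
    delta = log_init + log_obs[0]           # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):           # trace backpointers
        path[t - 1] = back[t, path[t]]
    return path

path = viterbi(np.log(init), np.log(trans), log_obs)
print(path)  # one state index per slice
```

The point of the sketch is the cost profile: each slice is O(S^2) in the joint state size, which is why the ideal node sizes were prohibitive in BNT.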