Hey guys.
I had originally posted this question in the Audacity general forum, but was
told to try here, so here goes.
I'm developing a sound format that achieves a 32:9 compression ratio using
linear prediction. Specifically:
y_n = y_(n-1)*c_1 + y_(n-2)*c_2 + e_n*S
... where y is the decoded data stream, c_1 and c_2 are the prediction
coefficients, e_n is the error correction data [-8..+7], and S is the error
scale [in other words, how much to scale the correction data by].
The general specs (in C/C++) are here: http://pastebin.com/Kpr0sCa0 (the
page also shows a sample decoder).
Basically, I can calculate the prediction coefficients for a given set of
data just fine. The problem is that my format allows at most 16 coefficient
pairs per stream, so I need a way to find the optimal set of prediction
coefficients from all the 'real' coefficients calculated for each sample
frame.
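One approach I've been considering (names and structure here are my own sketch, not from the spec): once you have a candidate table of coefficient pairs, you can score each frame against every candidate by summed squared prediction error and keep the pair that fits best. Something like:

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical sketch: given a table of candidate (c1, c2) pairs
   (e.g. the 16 the format allows), pick the index of the pair that
   minimizes the squared prediction error over one frame. */
typedef struct { double c1, c2; } coeff_pair;

static size_t best_pair(const coeff_pair *cands, size_t ncand,
                        const double *y, size_t n)
{
    size_t best = 0;
    double best_err = INFINITY;
    for (size_t k = 0; k < ncand; k++) {
        double err = 0.0;
        /* start at i = 2 so both predecessors exist */
        for (size_t i = 2; i < n; i++) {
            double pred = y[i - 1] * cands[k].c1 + y[i - 2] * cands[k].c2;
            double d = y[i] - pred;
            err += d * d;
        }
        if (err < best_err) { best_err = err; best = k; }
    }
    return best;
}
```

The harder part, which is really what I'm asking about, is how to build that 16-entry table in the first place from the full set of per-frame coefficients; clustering them (vector quantization over the (c_1, c_2) pairs) is the obvious candidate, but I don't know the standard way to do it for LPC coefficients.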
Any help is appreciated.

View this message in context: http://audacity.238276.n2.nabble.com/Linear-predictive-coding-help-needed-tp7412469p7412469.html
Sent from the audacity-devel mailing list archive at Nabble.com.
