From: Steve H. <S.W...@ec...> - 2002-11-07 23:09:51
|
On Thu, Nov 07, 2002 at 07:32:16 +0100, Benno Senoner wrote: > The jitter correction just ensures that the delay will be constant, > exactly one fragment. OK, I'm not sure that is always what you want, but it is a minor issue, and easy to experiment with. > Plus when driven from a sequencer, the sampler can provide sample > accurate audio rendering. (but in that case a time stamped protocol is > probably needed, time stamped MIDI anyone ? ) Yes, I agree here, but time stamped MIDI sounds nasty :) > > Yes, though generally the CV signals run at a lower rate than the audio > > signals (eg. 1/4tr or 1/16th). Like Krate signals in Music N. Providing > > pitch data at 1/4 audio rate is more than enough and will save cycles. > > Yes this is a good idea ... perhaps allowing variable CV granularity > or better to run at fixed 1/4 audio rate ? Better to have it variable, or at least in a macro I think. In csound it is selectable per orc file IIRC. > > As long as the compiler issues the branch prediction instruction correctly > > (to hint that the condition will be false), it will be fine. You can check > > this by looking at the .s output. > > How do you check this ? In PIII+ there is an instruction that gets issued, I think. Its one of the things they improved in gcc3.2. The compilers defualt should go the right way anyway. > > If you are refering to phase pointers, then its not an efficientcy issue, > > if you use floating point numbers then the sample will play out of key, > > only slightly, but enough that you can tell. > > Out of key because using 32bit floats does provide only a 24bit mantissa > ? > In my proof of concept code I use 64bit float for the playback pointers > and it works flawlessly even with extreme pitches. OK, doubles should be OK, but it seems a bit wasteful to use doubles for this. Maybe not. [in process] > Ok, at least the engine is designed to work that way (so I guess for > maximum performance some extensions for JACK will be required but I > assume that will not be a big problem) No, it shouldn't be too bad. > > Obviously things like OSC have greater time resolution, but it shows > >that sample accurate note triggering isn't essential. It may something > > worth dropping for efficiency. > > Steve, using your own words, for the efficiency "nazis" we could always > tell > the signal recompiler to #undef the event handling code and compile an > event less version. ;) Yes, absolutly, this is what I meant. The system should know whether its expecting timestamps or not. - Steve |