From: Benno S. <be...@ga...> - 2003-08-09 14:14:51
[ CCing David Olofson. David, if you can, join the LS list, we need hints for an optimized design ;-) ]

Hi,

to continue the blockless vs block-based, CV-based etc. audio rendering discussion: Steve H. agrees with me that block mode eases the processing of the events that drive the modules.

Basically, one philosophy is to adopt "everything is a CV" (control value), where control ports are treated as if they were audio streams and you run those streams at a fraction of the sampling rate (usually up to samplerate/4).

The other philosophy is not to adopt "everything is a CV", but to use typed ports and timestamped, scheduled events. Events are scheduled in the future, but we only queue up events that belong to the next audio block to be rendered (e.g. 64 samples). Real-time manipulation is still possible, since real-time events belong to the next audio block too (the current one is already in the DMA buffer of the sound card and cannot be manipulated anymore). That way, with very little effort and overhead, you get both sample-accurate event scheduling and good handling of real-time events.

Assume a MIDI input event occurs while the current block is being processed. We read the MIDI event in a higher-priority SCHED_FIFO thread and record the current sample pointer at the moment the event occurred. We can then insert a scheduled MIDI event into the next audio block that fires exactly N samples (64 in our example) after the event was registered. That way we get close to zero jitter on real-time events, even with bigger audio fragment sizes. (See the first sketch below for how the renderer could split a block on such timestamps.)

With the streamed approach we would need some scheduling of MIDI events too, so we would probably have to create a module that waits N (control) samples and then emits the event. So we basically end up in a timestamped-event scenario anyway.

Now assume we do all processing blockless, i.e. the DSP compiler generates one giant equation for each DSP network (instrument): output = func(input1, input2, ...). I'm not sure we gain performance compared to block-based processing, where we apply all operations sequentially on a buffer (filters, envelopes etc.) as if they were LADSPA modules, but without calling external modules; instead we "paste" their sources in sequence, without function calls etc. I remember someone talked a long time ago about the better cache locality of this approach (was it you, David? ;-) ), but after discussing blockless vs block-based on IRC with Steve and Simon I'm now confused. I guess we should try both methods and benchmark them. (The second sketch below shows the two styles side by side.)

As said, I dislike "everything is a CV" a bit, because you cannot do what I proposed: e.g. you have a MIDI keymap module that takes real-time MIDI events (note on/off) and spits out events that drive the RAM sampler module (which knows nothing about MIDI). In an event-based system you can send a pointer to the sample data in RAM, the length of the sample, the looping points, the envelope curves (organized as sequences of linear segments) etc. (The third sketch below shows what such a typed event could look like.)

Basically, in my model you cannot connect everything with everything (Steve says that is bad, but I don't think so), but you can connect everything with "everything that makes sense to connect to". Plus, if you have special needs, you can always implement your own converter module (converting a MIDI velocity into a filter frequency etc.). (But I think such a module will be part of the standard set anyway, since we need MIDI pitch to filter frequency conversion too if we want filters that support frequency tracking.)
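To make the block-splitting idea concrete, here is a minimal C++ sketch. All names (Event, RenderBlock, RenderVoices, ApplyEvent, OnMidiNoteOn) are made up for illustration, not an actual LinuxSampler API. The render loop processes audio up to each event's frame offset, applies the event, and continues; the MIDI thread just stamps the incoming event with the current sample offset and pushes it onto the next block's queue:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr uint32_t BLOCK_SIZE = 64;

enum EventType { NOTE_ON, NOTE_OFF };

struct Event {
    uint32_t  frame;  // offset within the block: 0 .. BLOCK_SIZE-1
    EventType type;
    int       key;
    int       velocity;
};

// Placeholder DSP hooks; a real engine would update voice state here.
static void RenderVoices(float* out, uint32_t frames)
{
    for (uint32_t i = 0; i < frames; ++i) out[i] = 0.0f; // silence stub
}
static void ApplyEvent(const Event&) { /* start/stop a voice, etc. */ }

// Render one block, applying each queued event exactly on its frame.
void RenderBlock(std::vector<Event>& events, float* out)
{
    std::sort(events.begin(), events.end(),
              [](const Event& a, const Event& b) { return a.frame < b.frame; });
    uint32_t pos = 0;
    for (const Event& e : events) {
        RenderVoices(out + pos, e.frame - pos); // render up to the event...
        ApplyEvent(e);                          // ...then switch state sample-accurately
        pos = e.frame;
    }
    RenderVoices(out + pos, BLOCK_SIZE - pos);  // render the tail of the block
    events.clear();
}

// Called from the SCHED_FIFO MIDI thread. 'frames_done' is how far the audio
// thread had advanced through the current block when the MIDI event arrived.
// Scheduling at the same offset in the *next* block delays every event by
// exactly BLOCK_SIZE samples, so the jitter is gone. (In practice the queue
// hand-off would be lock-free; a plain vector keeps the sketch short.)
void OnMidiNoteOn(uint32_t frames_done, int key, int vel,
                  std::vector<Event>& next_block_events)
{
    next_block_events.push_back(
        Event{ frames_done % BLOCK_SIZE, NOTE_ON, key, vel });
}
```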
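And on block-based vs blockless, a toy illustration of what I mean, using a two-stage chain (one-pole lowpass into a gain). The function and parameter names are made up; the point is only that the block-based version makes two passes over the buffer (the intermediate result travels through memory), while the fused version keeps it in a register:

```cpp
#include <cstddef>

// Block-based: each "module" is a separate pass over the whole buffer,
// as if LADSPA-style run() bodies were pasted one after another.
void render_block_based(const float* in, float* out, std::size_t n,
                        float& lp_state, float coeff, float gain)
{
    for (std::size_t i = 0; i < n; ++i) {       // pass 1: one-pole lowpass
        lp_state += coeff * (in[i] - lp_state);
        out[i] = lp_state;
    }
    for (std::size_t i = 0; i < n; ++i)         // pass 2: gain
        out[i] *= gain;
}

// "Blockless": the DSP compiler fuses the network into one per-sample
// equation, output = func(input, ...), evaluated once per sample.
void render_blockless(const float* in, float* out, std::size_t n,
                      float& lp_state, float coeff, float gain)
{
    for (std::size_t i = 0; i < n; ++i) {
        lp_state += coeff * (in[i] - lp_state);
        out[i] = lp_state * gain;               // fused: intermediate stays in a register
    }
}
```

Which one wins likely depends on whether the buffers fit in cache and how well the compiler can vectorize each loop, which is exactly why benchmarking both seems the only way to settle it.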
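Finally, a sketch of the kind of typed event the keymap module could send to the (MIDI-agnostic) RAM sampler. All field names are assumptions for illustration; the point is that such a payload is something a CV stream could never carry:

```cpp
#include <cstdint>
#include <vector>

// One linear envelope segment: ramp to 'target' over 'duration' frames.
struct EnvSegment {
    float    target;
    uint32_t duration;
};

// Event the MIDI keymap module emits towards the RAM sampler module.
// The sampler needs no MIDI knowledge: everything is already resolved.
struct PlaySampleEvent {
    const int16_t*          data;       // pointer to the sample data in RAM
    uint32_t                length;     // sample length in frames
    uint32_t                loop_start; // looping points
    uint32_t                loop_end;
    std::vector<EnvSegment> envelope;   // envelope as linear segments
    double                  rate;       // playback rate derived from MIDI note
};

// A converter module in miniature: MIDI velocity (0..127) mapped to a
// filter cutoff in Hz (the mapping itself is purely illustrative).
float VelocityToCutoffHz(int velocity)
{
    return 200.0f + (velocity / 127.0f) * 8000.0f;
}
```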
As said, I will come up with a running proof of concept. If we all end up dissatisfied with the event-based model we can always switch to other models, but I'm pretty confident the system will be both performant and flexible (it just takes time to code).

Thoughts?

Benno
http://linuxsampler.sourceforge.net