From: David O. <da...@ol...> - 2003-08-09 15:29:22

On Saturday 09 August 2003 16.14, Benno Senoner wrote:
> [ CCing David Olofson, David if you can join the LS list, we need
> hints for an optimized design ;-) ]

I've been on the list for a good while, but only in Half Asleep Lurk
Mode. :-)

> Hi,
> to continue the blockless vs block based, CV based etc. audio
> rendering discussion:
>
> Steve H. agrees with me that block mode eases the processing of
> events that can drive the modules.
>
> Basically, one philosophy is to adopt "everything is a CV (control
> value)", where control ports are treated as if they were audio
> streams and you run these streams at a fraction of the sampling
> rate (usually up to samplerate/4).
>
> The other philosophy is not to adopt "everything is a CV", but to
> use typed ports and time stamped, scheduled events.

Another way of thinking about it is to view streams of control ramp
events as "structured audio data". It allows for various
optimizations for low bandwidth data, but still doesn't have any
absolute bandwidth limit apart from the audio sample rate. (And not
even that, if you use a different unit for event timestamps - but
that's probably not quite as easy as it may sound.)

> Those events are scheduled in the future, but we queue up only
> events that belong to the next audio block to be rendered (e.g. 64
> samples). That way real time manipulation is still possible, since
> the real time events belong to the next audio block too (the
> current one is already in the DMA buffer of the sound card and
> cannot be manipulated anymore).

Or: Buffering/timing is exactly the same for events as for audio
streams. There is no reason to treat them differently, unless you
want a high level interface to the sequencer - and that's a different
thing, IMHO.

> That way, with very little effort and overhead, you achieve both
> sample accurate event scheduling and good scheduling of real time
> events too. Assume a MIDI input event occurs while processing the
> current block. We read the MIDI event using a higher priority
> SCHED_FIFO thread and read out the current sample pointer when the
> event occurred. We can then simply insert a scheduled MIDI event
> during the next audio block that occurs exactly N samples (64 in
> our example) after the event was registered.
> That way we get close to zero jitter of real time events even when
> we use bigger audio fragment sizes.

Exactly.

Now, that is apparently impossible to implement on some platforms
(poor RT scheduling), but some people using broken OSes is no
argument for broken API designs, IMNSHO...

And of course, it's completely optional to implement it in the host.
In Audiality, I'm using macros/inlines for sending events, and the
only difference is a timestamp argument - and you can just set that
to 0 if you don't care. (0 means "start of current block", as there's
a global "timer" variable used by those macros/inlines.)
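Just to make the scheduling part concrete, here's roughly what it
boils down to. This is only a sketch with made-up types and names -
not the actual Audiality or LinuxSampler code:

/* Hypothetical event/queue types; just enough to show the
 * timestamp arithmetic. */
#include <stdint.h>

#define BLOCK_SIZE	64	/* Audio fragment size in frames */
#define QUEUE_SIZE	256

typedef struct
{
	uint32_t	timestamp;	/* Absolute time; sample frames */
	int		type;		/* NOTE_ON, NOTE_OFF, ... */
	float		value;
} Event;

typedef struct
{
	Event		ev[QUEUE_SIZE];
	unsigned	count;
} EventQueue;

/*
 * Called from the SCHED_FIFO MIDI input thread. 'now' is the
 * engine's sample position, read when the MIDI byte arrived.
 * The block being rendered is already on its way to the DMA
 * buffer, so the event is pushed one full block into the future;
 * it fires exactly BLOCK_SIZE frames after it was registered,
 * no matter where inside the block it arrived.
 */
static void schedule_midi_event(EventQueue *q, uint32_t now,
		int type, float value)
{
	if(q->count >= QUEUE_SIZE)
		return;		/* Queue full; just drop it here. */
	q->ev[q->count].timestamp = now + BLOCK_SIZE;
	q->ev[q->count].type = type;
	q->ev[q->count].value = value;
	++q->count;
}

A real implementation would of course use a lock-free FIFO between
the MIDI thread and the audio thread rather than a plain array, but
the timestamp arithmetic is the interesting part.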
> With the streamed approach we would need some scheduling of MIDI
> events too, thus we would probably need to create a module that
> waits N samples (control samples) and then emits the event.
> So basically we end up in a timestamped event scenario too.

Or the usual approach; MIDI is processed once per block and quantized
to block boundaries...

> Now, if we assume we do all blockless processing, e.g. the DSP
> compiler generates one giant equation for each DSP network
> (instrument): output = func(input1, input2, ...)
>
> Not sure we gain in performance compared to the block based
> processing, where we apply all operations sequentially on a buffer
> (filters, envelopes etc.) as if they were LADSPA modules, but
> without calling external modules - instead "pasting" their sources
> in sequence, without function calls etc.

I suspect it is *always* a performance loss, except in a few special
cases and/or with very small nets and a good optimizing "compiler".

Some kind of hybrid approach (i.e. "build your own plugins") would be
very interesting, as it could offer the best of both worlds. I think
that's pretty much beyond the scope of "high level" plugin APIs (such
as VST, DX, XAP, GMPI and even LADSPA).

> I remember someone a long time ago talked about better cache
> locality of this approach (was it you, David? ;-) ), but after
> discussing blockless vs block based on IRC with Steve and Simon
> I'm now confused.

I don't think there is a simple answer. Both approaches have their
advantages in some situations, even WRT performance, although I think
for the stuff most people do on DAWs these days, blockless processing
will be significantly slower.

That said, something that generates C code that's passed to a good
optimizing compiler might shift things around a bit, especially now
that there are compilers that automatically generate SIMD code and
stuff like that.

The day you can compile a DSP net into native code in a fraction of a
second, I think traditional plugin APIs will soon be obsolete, at
least in the Free/Open Source world. (Byte code + JIT will probably
do the trick for the closed source people, though.)

> I guess we should try both methods and benchmark them.

Yes.

However, keep in mind that what we design now will run on hardware
that's at least twice as fast as what we have now. It's likely that
the MIPS/memory bandwidth ratio will be worse, but you never know...

What I'm saying is basically that benchmarking for future hardware is
pretty much gambling, and results on current hardware may not give us
the right answer.

> As said, I dislike "everything is a CV" a bit because you cannot do
> what I proposed:
> e.g. you have a MIDI keymap module that takes real time MIDI events
> (note on/off) and spits out events that drive the RAM sampler
> module (which knows nothing about MIDI). In an event based system
> you can send a pointer to sample data in RAM, the length of the
> sample, looping points, envelope curves (organized as sequences of
> linear segments) etc.

I disagree to some extent - but this is a very complex subject. Have
you followed the XAP discussions? I think we pretty much concluded
that you can get away with "everything is a control", only one event
type (RAMP, where duration == 0 means SET) and a few data types.
That's what I'm using internally in Audiality, and I'm not seeing any
problems with it.

> Basically, in my model you cannot connect everything with
> everything (Steve says it is bad, but I don't think so), but you
> can connect everything with "everything that makes sense to
> connect to".

Well, you *can* convert back and forth, but it ain't free... You
can't have everything.

Anyway, I see timestamped events mostly as a performance hack. More
accurate than control rate streams (lower rate than audio rate), less
expensive than audio rate controls in normal cases, but still capable
of carrying audio rate data when necessary.
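For reference, that single event type can look something like this.
The field names are made up for illustration; this is not the actual
XAP or Audiality event layout:

/* "Everything is a control", with one event type: RAMP.
 * duration == 0 degenerates into a plain SET. */
#include <stdint.h>

typedef struct
{
	uint32_t	timestamp;	/* Ramp start; sample frames */
	uint32_t	duration;	/* Frames to reach target; 0 == SET */
	uint32_t	control;	/* Which control/port is addressed */
	float		target;		/* Value to ramp (or jump) to */
} RampEvent;

/* Receiver side: turn the event into a per-frame increment. */
static void apply_ramp(const RampEvent *e, float *value, float *delta)
{
	if(!e->duration)
	{
		*value = e->target;	/* SET: jump immediately */
		*delta = 0.0f;
	}
	else
		*delta = (e->target - *value) / (float)e->duration;
}

A dense stream of short ramps approaches audio rate control, while a
sparse stream costs next to nothing - which is exactly the
"performance hack" aspect.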
Audio rate controls *are* the real answer (except for some special
cases, perhaps; audio rate text messages, anyone? ;-), but it's still
a bit on the expensive side on current hardware. (Filters have to
recalculate coefficients, or at least check the input, every sample
frame, for example.) In modular synths, it probably is the right
answer already, but I don't think it fits the bill well enough for
"normal" plugins, like the standard VST/DX/TDM/... stuff.

> Plus, if you have special needs you can always implement your own
> converter module (converting a MIDI velocity into a filter
> frequency etc.). (But I think such a module will be part of the
> standard set anyway, since we need MIDI pitch to filter frequency
> conversion too, if we want filters that support frequency
> tracking.)

Yes... In XAP, we tried to forget about the "argument bundling" of
MIDI, and just have plain controls. We came up with a nice and clean
design that can do everything that MIDI can, and then some, still
without any multiple argument events. (Well, events *have* multiple
arguments, but only one value argument - the others are the
timestamp and various addressing info.)

> As said, I will come up with a running proof of concept; if we end
> up all dissatisfied with the event based model we can always switch
> to other models, but I'm pretty confident that the system will be
> both performant and flexible (but it takes time to code).

In my limited hands-on experience, the event system actually makes
some things *simpler* for plugins. They just do what they're told,
when they're told, and there's no need to check when to do things or
scan control input streams: Just process audio as usual until you hit
the next event. Things like envelope generators, that have to
generate their own timing internally, look pretty much the same
whether they deal with events or audio rate streams. The only major
difference is that the rendering of the output is done by whatever
receives the generated events, rather than by the EG itself.

Either way, the real heavy stuff is always the DSP code. In cases
where it isn't, the whole plugin is usually so simple that it doesn't
really matter what kind of control interface you're using; the DSP
code fits right into the basic "standard model" anyway. In such
cases, an API like XAP or Audiality's internal "plugin API" could
provide some macros that make it all insanely simple - maybe simpler
than LADSPA.

Anyway, need to get back to work now... :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---