From: Benno S. <be...@ga...> - 2003-08-09 17:01:34
David Olofson <da...@ol...> writes:
>
> I've been on the list for a good while, but only in Half Asleep Lurk
> Mode. :-)
Nice to have you on board; I didn't know you were on the list ;-)
> >
> > The other philosophy is not to adopt the "everything is a CV" approach,
> > but to use typed ports and time stamped, scheduled events.
>
> Another way of thinking about it is to view streams of control ramp
> events as "structured audio data". It allows for various
> optimizations for low bandwidth data, but still doesn't have any
> absolute bandwidth limit apart from the audio sample rate. (And not
> even that, if you use a different unit for event timestamps - but
> that's probably not quite as easy as it may sound.)
So, at this point, for linuxsampler would you advocate an event
based approach, or a continuous control stream (running at a fraction
of the sample rate)?
As far as I understood from reading your mail, you agree that on
current machines (think of filters that need to recalculate
coefficients, etc.) it makes sense to use an event based system.
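(A rough illustration of that filter argument, with invented names and a
one-pole lowpass as the guinea pig: with a control stream the coefficient
math runs once per control sample, with events it only runs when the
cutoff actually changes.)

#include <math.h>

/* Cutoff -> coefficient involves transcendental math, so it is the
   expensive part compared to the filter's one multiply-add per sample.
   (Placeholder formula; cutoff in cycles/sample.) */
static float coeff(float cutoff)
{
    return 1.0f - expf(-6.28f * cutoff);
}

/* "Everything is a CV": cutoff arrives as a stream, so the coefficient
   calculation is paid on every (control) sample. */
void run_cv(float *out, const float *in, const float *cutoff_cv,
            float *z, int n)
{
    for (int i = 0; i < n; i++) {
        float a = coeff(cutoff_cv[i]);  /* expensive, n times per block */
        *z += a * (in[i] - *z);
        out[i] = *z;
    }
}

/* Event based: the caller recalculated 'a' when a cutoff event
   arrived; the inner loop is pure DSP. */
void run_block(float *out, const float *in, float a, float *z, int n)
{
    for (int i = 0; i < n; i++) {
        *z += a * (in[i] - *z);
        out[i] = *z;
    }
}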
>
>
> > Those events are scheduled in the future but we queue up only
> > events that belong to the next to be rendered audio block (eg 64
> > samples). That way real time manipulation is still possible since
> > the real time events belong to the next audio block too (the
> > current one is already in the dma buffer of the sound card and
> > cannot be manipulated anymore).
>
> Or: Buffering/timing is exactly the same for events as for audio
> streams. There is no reason to treat them differently, unless you want
> a high level interface to the sequencer - and that's a different
> thing, IMHO.
Yes, the timebase is the sampling rate, which keeps audio, MIDI and
other general events nicely in sync.
>
> Exactly.
>
> Now, that is apparently impossible to implement on some platforms
> (poor RT scheduling), but some people using broken OSes is no
> argument for broken API designs, IMNSHO...
OK, but even if there is a jitter of a few samples, it is much better than
having an event jitter equivalent to the audio fragment size.
It will be impossible for the user to notice that the MIDI pitch bend
event was scheduled a few usecs later than the ideal time.
Plus, as said, it will work with relatively large audio fragment sizes too.
>
> > With the streamed approach we would need some scheduling of MIDI
> > events too thus we would probably need to create a module that
> > waits N samples (control samples) and then emits the event.
> > So basically we end up in a timestamped event scenario too.
>
> Or the usual approach; MIDI is processed once per block and quantized
> to block boundaries...
I don't like that; it might work OK for very small fragment sizes, e.g.
32-64 samples per block, but if you go up to, say, 512-1024, the
timing of MIDI events will suck badly.
> > Not sure we gain in performance compared to the block based
> > processing where we apply all operations sequentially on a buffer
> > (filters, envelopes etc.) as if they were LADSPA modules, but without
> > calling external modules - instead "pasting" their sources in
> > sequence, without function calls etc.
>
> I suspect it is *always* a performance loss, except in a few special
> cases and/or with very small nets and a good optimizing "compiler".
So it seems that the best compromise is to process audio in blocks,
but to perform all DSP operations relative to an output sample
(relative to a voice) in one rush.
Assume a sample that is processed first by an amplitude modulator
and then by an LP filter.
Basically, instead of doing (like LADSPA would do)

am_sample = amplitude_modulator(input_sample, 256 /* samples */);
output_sample = LP_filter(am_sample, 256 /* samples */);

(where output_sample is a pointer to 256 samples - the current audio
block), it would be faster to do

for (i = 0; i < 256; i++) {
    output_sample[i] = do_filter(do_amplitude_modulation(input_sample[i]));
}

right?
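(Spelled out as compilable C, with dummy one-line DSP functions of my
own invention just to make the comparison concrete - note how the block
based version needs a temporary buffer, while the fused version keeps
the intermediate value in a register:)

#define BLOCK 256

/* dummy DSP placeholders, only here so this compiles */
static float do_amplitude_modulation(float x) { return x * 0.5f; }
static float do_filter(float x) { static float z; z += 0.1f * (x - z); return z; }

/* LADSPA style: each "module" runs over the whole block;
   results travel through an intermediate buffer */
void run_block_based(float *out, const float *in)
{
    float am[BLOCK];                    /* temporary buffer */
    for (int i = 0; i < BLOCK; i++)
        am[i] = do_amplitude_modulation(in[i]);
    for (int i = 0; i < BLOCK; i++)
        out[i] = do_filter(am[i]);
}

/* "one rush": the whole per-voice chain fused into one loop;
   the intermediate value never touches memory */
void run_fused(float *out, const float *in)
{
    for (int i = 0; i < BLOCK; i++)
        out[i] = do_filter(do_amplitude_modulation(in[i]));
}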
While for pure audio processing the approach is quite straightforward,
when we take envelope generators etc. into account we must inject
code that checks the current timestamp (an if() statement) and then
modifies the right values according to the events pending in the queue
(or to autogenerated events).
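(Roughly what I mean, with an invented event_t and a volume control as
the example: rather than an if() on every sample, you can split the
block at the event timestamps, so the stretches in between stay pure
DSP:)

typedef struct {
    int timestamp;      /* in samples, relative to the block start */
    float value;
} event_t;

/* Process one block of a voice, applying pending volume events at
   their timestamps. Events are assumed sorted by timestamp. */
void run_voice(float *out, const float *in, float *volume,
               const event_t *ev, int nev, int block)
{
    int i = 0;
    for (int e = 0; e <= nev; e++) {
        int until = (e < nev) ? ev[e].timestamp : block;
        for (; i < until; i++)          /* event-free inner loop */
            out[i] = in[i] * *volume;
        if (e < nev)
            *volume = ev[e].value;      /* apply event, keep going */
    }
}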
For example, I was envisioning a simple RAM sampler module that knows
nothing about MIDI etc. but is flexible enough to offer the full
functionality that hardcoded sampler designs have.
E.g. the RAM sampler module has the following inputs:

start trigger: starts the sample
release trigger: the sample goes into release mode
stop trigger: sample output stops completely
rate: a float where 1.0 means output at the original rate,
2.0 shifts one octave up, etc.
volume: output volume
(rate and volume would receive RAMP events, so that you can modulate
these two values in arbitrary ways)
sample_ptr_and_len: pointer to a sample stored in RAM, with associated
length
attack looping: a list of looping points:
(position_to_jump, loop_len, number_of_repeats)
release looping: same structure as above, but used when
the sampler module goes into the release phase.

Basically, when you release a key and the sample has loops, you switch
to the release_looping list once the current loop reaches its end
position. (A struct sketch follows below.)
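(As a rough C struct - all names are just my shorthand for the
description above, and I'm assuming 16-bit sample data:)

typedef struct {
    int pos;            /* position_to_jump */
    int len;            /* loop_len */
    int repeats;        /* number_of_repeats */
} loop_point_t;

typedef struct {
    /* start / release / stop arrive as trigger events, not as fields */

    /* continuously modulatable via RAMP events */
    float rate;         /* 1.0 = original rate, 2.0 = one octave up */
    float volume;

    /* "structured" inputs that don't reduce to a single float */
    const short *sample;            /* sample_ptr_and_len: the data... */
    int sample_len;                 /* ...and its length */
    loop_point_t *attack_loops;  int n_attack_loops;
    loop_point_t *release_loops; int n_release_loops;
} ram_sampler_t;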
But as you see, this RAM sampler module does not fit well into a
single RAMP event.
OK, you could for example separate sample_ptr_and_len into two
variables, but that seems a bit inefficient to me.
The same could be said of the looping structure: you could make more
input ports, e.g.
attack_looping_position
attack_looping_loop_len
etc.
but that seems a waste of time to me, since you end up managing
multiple lists of events even when they are mutually linked
(a loop_position does not make sense without the loop_len etc.).
So I'd be interested in how the RAM sampler module described above
could be made to work with only the RAMP event.
BTW: you said a RAMP with a value of 0 means setting a value.
But what exactly do you set to 0 - the timestamp?
That would not be ideal, since 0 is a legitimate value;
it would be better to use -1 or something alike.
OTOH, this would require an additional if() statement
(to check whether it is a regular ramp or a set statement), and that
could possibly slow things down a bit.
My proposed ramping approach, which consists of
value_to_be_set, delta
does not require an if: if you simply want to set a value,
you set delta = 0.
But my approach has the disadvantage that if you want to do mostly ramping,
you always have to calculate value_to_be_set for each event, and that
could become non-trivial if you do not track the values within the modules.
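(The two encodings side by side, as I understand them, with invented
names; the extra if() of the set-flavoured form and the branch-free but
state-tracking delta form should be visible here:)

/* Style A (XAP/Audiality-like, as David describes it): target value +
   duration, where duration == 0 means SET. Applying it needs a branch. */
typedef struct { int when; float target; int duration; } ramp_ev_a;

void apply_a(float *value, float *delta, const ramp_ev_a *e)
{
    if (e->duration == 0) {             /* the extra if() */
        *value = e->target;
        *delta = 0.0f;
    } else
        *delta = (e->target - *value) / e->duration;
}

/* Style B (my proposal): explicit start value + per-sample delta,
   where delta == 0 means SET. Branch-free to apply, but the sender
   must track the current value to fill in 'value' for every event. */
typedef struct { int when; float value; float delta; } ramp_ev_b;

void apply_b(float *value, float *delta, const ramp_ev_b *e)
{
    *value = e->value;
    *delta = e->delta;
}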
Comments on that issue ?
>
> > I remember someone a long time ago talked about better cache locality
> > of this approach (was it you, David? ;-) ) but after discussing
> > blockless vs block based on IRC with Steve and Simon I'm now
> > confused.
>
> I don't think there is a simple answer. Both approaches have their
> advantages in some situations, even WRT performance, although I think
> for the stuff most people do on DAWs these days, blockless processing
> will be significantly slower.
Blockless as referred to above by me (blockless = one single equation,
but processed in blocks), or blockless using another kind of approach?
(Please elaborate.)
>
> That said, something that generates C code that's passed to a good
> optimizing compiler might shift things around a bit, especially now
> that there are compilers that automatically generate SIMD code and
> stuff like that.
The question is indeed whether, if you do LADSPA style processing
(applying all DSP operations in sequence), the compiler can use SIMD
and optimize the processing loops, and is therefore faster than
calculating the result as one single big equation at a time, which
could possibly not take advantage of SIMD etc.
But OTOH, blockless processing has the advantage that things are not
moved around much in the cache: the output value of the first module
is directly available as the input value of the next module, without
needing to move it to a temporary buffer or variable.
>
> The day you can compile a DSP net into native code in a fraction of a
> second, I think traditional plugin APIs will soon be obsolete, at
> least in the Free/Open Source world. (Byte code + JIT will probably
> do the trick for the closed source people, though.)
Linuxsampler is an attempt to prove that this works, but as said,
I prefer very careful design in advance over quick'n'dirty results.
Even if some people like to joke about linuxsampler remaining
vaporware forever, I have to admit that after the long discussions on
the mailing list we learned quite some stuff that will be very handy
for building a powerful engine.
>
> > As said I dislike "everything is a CV" a bit because you cannot do
> > what I proposed:
> > eg. you have a MIDI keymap modules that takes real time midi events
> > (note on / off) and spits out events that drive the RAM sampler
> > module (that knows nothing about MIDI). In an event based system
> > you can send Pointer to Sample data in RAM, length of sample,
> > looping points, envelope curves (organized as sequences of linear
> > segments) etc.
>
> I disagree to some extent - but this is a very complex subject. Have
> you followed the XAP discussions? I think we pretty much concluded
> that you can get away with "everything is a control", only one event
> type (RAMP, where duration == 0 means SET) and a few data types.
> That's what I'm using internally in Audiality, and I'm not seeing any
> problems with it.
Ah, you are using the concept of duration.
Isn't it a bit redundant? Instead of using a duration, one can use
duration-less RAMP events and just generate an event that sets
the ramp delta value to zero when you want the ramp to stop.
>
>
> > Basically in my model you cannot connect everything with everything
> > (Steve says it is bad but I don't think so) but you can connect
> > everything with "everything that makes sense to connect to".
>
> Well, you *can* convert back and forth, but it ain't free... You can't
> have everything.
OK, but converters will be the exception and not the rule:
for example, the MIDI mapper module
(see the GUI screenshot message here:
http://sourceforge.net/mailarchive/forum.php?thread_id=2841483&forum_id=12792 )
acts as a proxy between the MIDI input and the RAM sampler module,
so it makes the right port types available.
No converters are needed: it's all done internally in the best possible way,
without needless float to int conversions, interpreting pointers as floats,
and other "everything is a CV" oddities ;-)
>
> Anyway, I see timestamped events mostly as a performance hack. More
> accurate than control rate streams (lower rate than audio rate), less
> expensive than audio rate controls in normal cases, but still capable
> of carrying audio rate data when necessary.
Yes, they are a bit of a performance hack, but on current hardware, as
you pointed out, audio rate controls would be a waste of resources, and
since every musician's goal is to get the largest possible number of
voices / effects / tracks etc. out of the hardware, I think it pays off
quite a lot to use an event based system.
> Yes... In XAP, we tried to forget about the "argument bundling" of
> MIDI, and just have plain controls. We came up with a nice and clean
> design that can do everything that MIDI can, and then some, still
> without any multiple argument events. (Well, events *have* multiple
> arguments, but only one value argument - the others are the timestamp
> and various addressing info.)
Hmm, I did not follow the XAP discussions (I was overloaded during that
time, as usual ;-) ), but can you briefly explain how this XAP model would
fit the model where the MIDI IN module talks to the MIDI mapper, which
in turn talks to the RAM sampler?
>
> In my limited hands-on experience, the event system actually makes
> some things *simpler* for plugins. They just do what they're told
> when they're told, and there's no need to check when to do things or
> scan control input streams: Just process audio as usual until you hit
> the next event. Things like envelope generators, that have to
> generate their own timing internally, look pretty much the same
> whether they deal with events or audio rate streams. The only major
> difference is that the rendering of the output is done by whatever
> receives the generated events, rather than by the EG itself.
>
> Either way, the real heavy stuff is always the DSP code. In cases
> where it isn't, the whole plugin is usually so simple that it doesn't
> really matter what kind of control interface you're using; the DSP
> code fits right into the basic "standard model" anyway. In such
> cases, an API like XAP or Audiality's internal "plugin API" could
> provide some macros that make it all insanely simple - maybe simpler
> than LADSPA.
So you are saying that in practical terms (processing performance) it does
not matter whether you use events or streamed control values?
I still prefer the event based system because it allows you to deal more
easily with real time events (with sub audio block precision), and if you
need to, you can run at full sample rate.
>
>
> Anyway, need to get back to work now... :-)
yeah, we unleashed those km-long mails again ... just like in the old times ;-)
can you say infinite recursion ;-)))
cheers,
Benno.