|
From: Mark K. <mar...@co...> - 2003-12-29 02:00:08
|
Hi,

I sat for a while this afternoon and investigated the clicks and pops that we are all getting out of LS. As far as I can tell they are (in my case) coming from the lack of release samples and not from the attack envelope, as I had hypothesized earlier.

To look at this I first set up Pro Tools and recorded audio coming from both samplers. After that I measured the latency between the MIDI event in Pro Tools and when audio was recorded from that event in LS. This is much easier to find visually when the audio goes from quiet to loud.

Then, knowing that latency number, I used the Pro Tools scrubber and searched for clicks and pops in the LS audio stream. When I found them, I looked for note-off events in the MIDI stream and found that they were always the same latency value ahead of the click and pop. I think this pretty clearly demonstrates that the problem comes from the end of the sample and not the beginning.

One other thing I noted was that for an arbitrarily complex MIDI stream, LS is not actually being asked to work very hard without release samples. In the case of the bounced jazz piano file I have on my server, GSt hits a maximum of 96 notes playing at the same time (the limit of my GSt license), while LS never goes above 14 notes, due completely to the lack of the release samples.

With this in mind, I think we will likely have another round of optimization to look at when we do get release samples in the tool, as this would represent nearly 7 times as much streaming bandwidth needed to keep up with GSt. I'm not saying there will be problems, but rather just trying to show the importance of release samples in stressing the overall design. Keep in mind also that we need to ensure there are no clicks and pops when we use a gig file with no release samples.

I intend to look this over later this evening or by tomorrow evening. I hope folks find this interesting. If you have any questions or want me to look at something specific, please let me know.

Cheers,
Mark
|
From: <be...@ga...> - 2003-12-30 09:26:19
|
Yes, the clicks are coming from the missing release envelopes. Adding release envelopes will make them go away, but to implement envelopes we need the event system implemented first. The event system will not only make envelopes possible, but also release-triggered samples, layering, pitch modulation and all the other stuff you need for perfect GIG playback.

The problem with the event system is that we need to figure out a good tradeoff between flexibility and speed. David (Olofson), what do you suggest? Using an event system similar to the one you use in audiality?

For performance reasons I'd opt for a system that uses only linear segments for both volume and pitch enveloping. Exp curves and other kinds of envelopes can easily be simulated by using a series of linear segments.

Keep in mind it must for example handle the following task: assume there is a pitch envelope running (composed of linear segments). Now the user operates the pitchbender. The real time event is timestamped and delayed till the next audio fragment gets processed. If there is already an envelope running, the pitchbender needs to "add" its own pitch to the current one, possibly using an event.

For example, if we want two pitch envelopes modulating the same sample, what's the best way to do it? Since pitch envelopes are made of "deltas", in theory one could simply add up the deltas when events come in. E.g. if there is only one envelope then the events should overwrite the current pitch delta, but if you want to mix two or more envelopes then the deltas should be added up (basically you could calculate the delta of the delta between events and simply add that to the current delta). That way, AFAIK, it should work with one single envelope as well as with multiple active envelopes. Am I missing something? :-)

cheers,
Benno
http://www.linuxsampler.org

Mark Knecht <mar...@co...> wrote:
> As far as I can tell they are (in my case) coming from the lack of release
> samples and not from the attack envelope as I had hypothesized earlier.
> [...]
> I looked for note off events in the MIDI stream and found that they were
> always the same latency value ahead of the click and pop. I think this
> pretty clearly demonstrates the problem comes from the end of the sample
> and not the beginning.
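A rough sketch of the "add up the deltas" idea above (the names here are made up for illustration, this is not existing LS code): each modulation source remembers the slope it last contributed, and when an event changes that slope, the voice's combined delta is adjusted by the difference, so one envelope and several envelopes go through the same code path.

typedef struct {
    float current_delta;      /* slope this source contributed last time */
} PitchSource;

typedef struct {
    float pitch_delta;        /* combined slope, added to curr_pitch once per sample */
    PitchSource env;          /* pitch envelope */
    PitchSource bender;       /* pitchbender */
} Voice;

/* called at an event's timestamp when one source changes its slope */
void update_pitch_slope(Voice *v, PitchSource *src, float new_delta)
{
    v->pitch_delta += new_delta - src->current_delta;   /* the "delta of the delta" */
    src->current_delta = new_delta;
}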
|
From: ian e. <de...@cu...> - 2003-12-30 14:14:00
|
On Tue, 2003-12-30 at 04:26, be...@ga... wrote:
> The problem with the event system is that we need to figure out a
> good tradeoff between flexibility and speed.
>
> David (Olofson), what do you suggest ? Using an event system similar to the
> one you use in audiality ?
> For performance reasons I'd opt for a system that uses only linear
> segments for both volume and pitch enveloping.
> exp curves and other kinds of envelopes can be easily simulated by
> using a series of linear segments.

i think you would probably find this to be more expensive than having log/exp curves. to create the traditional log style envelopes is actually very easy, and can be done with a modified one pole (no zero) lowpass filter. that just requires one coefficient calculation for the whole curve, rather than doing a whole load of calculations for each linear segment.

> Keep in mind it must for example handle following tasks:
> assume there is a pitch envelope running (composed of linear segments).
> Now the user operates the pitchbender.
> The real time event is timestamped and delayed till the next audio fragment
> gets processed. If there is already an envelope running the pitchbender
> needs to "add" his own pitch to the current one possibly using an event.
> for example if we want two pitch envelopes modulating the same sample
> what's the best way to do it ?

just add the two (or more) modulating sources scaled by the mod depths to get the final mod amount.

ian
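The "add the modulating sources scaled by the mod depths" step, as a minimal sketch (illustrative names only, not actual LS code):

/* final modulation amount for one destination (e.g. pitch), per control update */
float mod_amount = 0.0f;
for (int s = 0; s < num_sources; s++)
    mod_amount += source_value[s] * mod_depth[s];   /* each source scaled by its own depth */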
|
From: <be...@ga...> - 2003-12-30 15:36:22
|
ian esten <de...@cu...> wrote:
> > For performance reasons I'd opt for a system that uses only linear
> > segments for both volume and pitch enveloping.
> > exp curves and other kind of envelopes can be easily simulated by
> > using a serie of linear segments.
>
> i think you would probably find this to be more expensive than having
> log/exp curves. to create the traditional log style envelopes is
> actually very easy, and can be done with a modified one pole (no zero)
> lowpass filter. that just requires one coefficient calculation for the
> whole curve, rather than doing a whole load of calculations for each
> linear segment.
>
Expensive ? I don't think so:
let's make this example:
with linear envelopes (let's assume we allow volume and pitch envelopes)
for(i=0; i < numoutput_samples; i++) {
    output_sample[i] = curr_volume * sample[(int)curr_pitch]; /* truncated index; real code would interpolate */
    curr_volume += volume_delta;
    curr_pitch += pitch_delta;
}
As you see with only two additions:
curr_volume += volume_delta;
curr_pitch += pitch_delta;
you get arbitrary volume and pitch envelopes.
if you keep the deltas at zero then there is no envelope.
Otherwise just update
volume_delta and/or pitch_delta from time to time
(event based).
The system is so fast that you could even update the delta values
eg every 4 samples allowing for arbitrary curves without significant overhead.
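In sketch form, that event based updating of the deltas could look like this (hypothetical names, not actual LS code); straight runs are rendered between events, so the per-sample cost stays at the two additions above:

i = 0;
while (i < numoutput_samples) {
    /* render a straight run up to the next event or the end of the fragment */
    int run = (samples_to_next_event < numoutput_samples - i)
                  ? samples_to_next_event : (numoutput_samples - i);
    for (int j = 0; j < run; j++, i++) {
        output_sample[i] = curr_volume * sample[(int)curr_pitch];
        curr_volume += volume_delta;
        curr_pitch += pitch_delta;
    }
    samples_to_next_event -= run;
    if (samples_to_next_event == 0)
        process_next_event();  /* updates the deltas and samples_to_next_event */
}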
But in practice even exponential curves can be emulated using relatively
few linear segments (in the fast-changing sections you can use more points).
Am I missing something ?
Ian, can you post a small code example (like I did) to show me how
many instructions the main sample rendering loop contains, and whether we
can achieve the same as in my case above (arbitrary modulation)?
cheers,
Benno
http://www.linuxsampler.org
|
|
From: ian e. <de...@cu...> - 2003-12-30 16:09:49
|
On Tue, 2003-12-30 at 10:36, be...@ga... wrote:
> [...]
> Expensive ? I don't think so:
> [...]
> for(i=0; i < numoutput_samples; i++) {
>     output_sample[i] = curr_volume * sample[(int)curr_pitch];
>     curr_volume += volume_delta;
>     curr_pitch += pitch_delta;
> }
> [...]
> Am I missing something ?
> Ian, can you post a small code example (like I did) to show me how
> many instructions the main sample rendering loop contains, and whether we
> can achieve the same as in my case above (arbitrary modulation)?
a lowpass filter with just a pole:

y[i] = (x[i]-y[i])*c + y[i]; // x[i] is just a gate signal: 1 to turn the env on, 0 to turn it off

a linear segment:

y[i] = y[i] + k;
the lowpass filter costs you an extra add and multiply compared to a
linear segment, not much really. the trouble with the linear segment
method is that to approximate a log segment you have to do a fair number
of linear segments. it's the computation of a segment's 'k' and how long
each segment must last to approximate the log shape that i believe will
make it more expensive when compared to the lowpass approach which does
the right shape in one segment with one coefficient. the only problem
with the lowpass approach is that it needs a tiny little bit of work to
guarantee that it is at maximum in a certain time, and to guarantee it
is off after a certain time. they are pretty trivial modifications
though.
another issue to consider is that the slower your attack time is, the
more segments you will need to use to approximate it without it sounding
bad.
so, i would say if you want a log envelope, use a real log envelope
generator, as it will make life much much easier.
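as a sketch, such a one-pole envelope stage might look like the following (the names and the 0.001 headroom constant are only illustrative, not LS code): the coefficient is computed once per stage from the desired time, and aiming slightly past the target with a clamp is the "tiny bit of work" that makes the stage finish in a known time.

/* one coefficient calculation per stage */
float c = 1.0f - expf(-1.0f / (attack_time_sec * sample_rate));
float target = 1.0f + 0.001f;            /* aim slightly above the real target... */

for (i = 0; i < n; i++) {
    level += (target - level) * c;
    if (level > 1.0f) level = 1.0f;      /* ...and clamp, so the attack ends in finite time */
    out[i] = level * in[i];
}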
ian
|
|
From: Juan L. <co...@re...> - 2003-12-30 17:13:49
|
On Tuesday 30 December 2003 13:09, ian esten wrote:
> the lowpass filter costs you an extra add and multiply compared to a
> linear segment, not much really. the trouble with the linear segment
> method is that to approximate a log segment you have to do a fair number
> of linear segments.
> [...]
> another issue to consider is that the slower your attack time is, the
> more segments you will need to use to approximate it without it sounding
> bad.

I have two things to add to this.

First, considering that a segment size is between 50 and 100 samples, updating the values between segments and ramping linearly in between (whether the envelope is curved or not) will make absolutely NO audible difference compared to using a lowpass (I've implemented this several times, and I composed a few hundred songs with the code :). Considering that this is the most critical part of the whole app, I have to say that I'm against adding more code in there. The segments are too small already, much smaller than the delta times between envelope points.

I also read somewhere in the thread that updating every few (4) samples would be a good idea, but I have to remind you that adding any kind of conditional inside the critical loop reduces performance enormously on modern processors; I believe this is because it stalls the instruction pipeline.

Cheers!

Juan Linietsky
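As a sketch of what "no conditionals inside the critical loop" means in practice (illustrative code only, not taken from LS or from Juan's own code): all control decisions happen at segment boundaries, and the per-sample loop stays branch-free. SEGMENT_SIZE and update_envelopes() are made-up names, and fragment_size is assumed to be a multiple of SEGMENT_SIZE.

for (int seg = 0; seg < fragment_size; seg += SEGMENT_SIZE) {
    update_envelopes();     /* conditionals and new ramp targets live here, once per segment */
    for (int i = seg; i < seg + SEGMENT_SIZE; i++) {
        output_sample[i] = curr_volume * sample[(int)curr_pitch];
        curr_volume += volume_delta;
        curr_pitch += pitch_delta;
    }
}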
|
From: Steve H. <S.W...@ec...> - 2004-01-01 15:28:15
|
On Tue, Dec 30, 2003 at 02:19:10PM -0300, Juan Linietsky wrote:
> First, considering that a segment size is between 50 and 100 samples,
> updating the values between segments and ramping linearly in between (whether
> the envelope is curved or not) will make absolutely NO audible difference
> compared to using a lowpass (I've implemented this several times, and I
> composed a few hundred songs with the code :). Considering that this is the
> most critical part of the whole app, I have to say that I'm against adding
> more code in there. The segments are too small already, much smaller than
> the delta times between envelope points.

I think 50-100 samples is too coarse - for fast attacks it definitely won't be enough. I think 32 is quite a common value.

> I also read somewhere in the thread that updating every few (4) samples
> would be a good idea, but I have to remind you that adding any
> kind of conditional inside the critical loop reduces performance
> enormously on modern processors; I believe this is because it stalls
> the instruction pipeline.

You can roll it in with the loop termination condition, but I'd agree that 4 is too often. Someone could benchmark it to find where there's a sweet spot.

This would be a good candidate for SIMD instruction optimisation, to run 4 envelopes at once.

- Steve
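A sketch of what "run 4 envelopes at once" with SIMD could look like, using SSE intrinsics (illustrative only; env_value, env_delta and env_out are made-up names and are assumed to be 16-byte aligned float arrays):

#include <xmmintrin.h>

__m128 v = _mm_load_ps(env_value);        /* current levels of envelopes 0..3 */
__m128 d = _mm_load_ps(env_delta);        /* per-sample increments of envelopes 0..3 */
for (int i = 0; i < n; i++) {
    _mm_store_ps(&env_out[4 * i], v);     /* four envelope values for sample i */
    v = _mm_add_ps(v, d);                 /* advance all four linear ramps at once */
}
_mm_store_ps(env_value, v);               /* save the levels for the next fragment */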
|
From: <be...@ga...> - 2003-12-30 17:54:17
|
> a lowpass filter with just a pole:
>
> y[i] = (x[i]-y[i])*c + y[i]; // x[i] is just a gate signal: 1 to turn the env on, 0 to turn it off
>
> a linear segment:
>
> y[i] = y[i] + k;
>
> the lowpass filter costs you an extra add and multiply compared to a
> linear segment, not much really. the trouble with the linear segment
> method is that to approximate a log segment you have to do a fair number
> of linear segments. it's the computation of a segment's 'k' and how long
> each segment must last to approximate the log shape that i believe will
> make it more expensive when compared to the lowpass approach which does
> the right shape in one segment with one coefficient.

It depends what the user wants. For example, do we want to allow changing the enveloping information in real time? Do other samplers permit this? We could precompute dozens (or hundreds) of linear envelope segments resembling arbitrary curves and switch the tables in real time (at note-on time) without any overhead.

> the only problem with the lowpass approach is that it needs a tiny little
> bit of work to guarantee that it is at maximum in a certain time, and to
> guarantee it is off after a certain time. they are pretty trivial
> modifications though.
> another issue to consider is that the slower your attack time is, the
> more segments you will need to use to approximate it without it sounding
> bad.
> so, i would say if you want a log envelope, use a real log envelope
> generator, as it will make life much much easier.

performance = 90% of life :-)

I'm still not convinced we should go the log route. Let's see what others say.

cheers,
Benno
http://www.linuxsampler.org
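The precomputed-table idea, sketched with made-up names (not a proposal for actual data structures): each curve shape is baked into a small table of (length, delta) segments when the instrument is loaded, and note-on only selects a pointer, so the shape itself adds no per-sample cost.

typedef struct {
    int   length;       /* segment length in samples */
    float delta;        /* per-sample increment during this segment */
} EnvSegment;

typedef struct {
    int               num_segments;
    const EnvSegment *segments;    /* precomputed at instrument load time */
} EnvTable;

/* at note-on: just pick the table for the wanted shape */
voice->env_table   = &env_tables[shape_index];
voice->env_segment = 0;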
|
From: Steve H. <S.W...@ec...> - 2004-01-01 15:29:21
|
On Tue, Dec 30, 2003 at 06:54:25PM +0100, be...@ga... wrote:
> It depends what the user wants.
> For example, do we want to allow changing the enveloping information in real time?
> Do other samplers permit this?
> We could precompute dozens (or hundreds) of linear envelope segments resembling
> arbitrary curves and switch the tables in real time (at note-on time) without
> any overhead.

It must be faster to compute the linear segments than to read them from a table.

> performance = 90% of life :-)
> I'm still not convinced we should go the log route.
> Let's see what others say.

Well, the envelope needs to be log/exp shaped, but I think linear segments is OK.

- Steve
|
From: Christian S. <chr...@ep...> - 2003-12-30 16:10:02
|
On Tuesday, 30 December 2003 15:13, ian esten wrote:
> > Keep in mind it must for example handle following tasks:
> > assume there is a pitch envelope running (composed of linear segments).
> > Now the user operates the pitchbender.
> > The real time event is timestamped and delayed till the next audio
> > fragment gets processed. If there is already an envelope running the
> > pitchbender needs to "add" his own pitch to the current one possibly
> > using an event. for example if we want two pitch envelopes modulating the
> > same sample what's the best way to do it ?
>
> just add the two (or more) modulating sources scaled by the mod depths
> to get the final mod amount.

My idea is the following: use an array with array size = audio fragment size, so that each element of the array refers to exactly one sample point of the current audio fragment. Each element of the array holds a linked list with events for that sample point if there is at least one event to be processed for this sample point; if there are none, the array element is NULL.

My idea is not to hold all the raw events there as they came in, but the final sum for the respective modulation destination. So if, for example, an envelope generator wants to alter the pitch, it looks whether there's already a 'pitch' event in the array element for the respective sample point; if not, it creates one with its pitch delta, and if there is already a 'pitch' event, it adds its pitch delta to the delta of that event. Each modulation source does that, and that way the audio thread doesn't have to make these calculations; it already has the final delta for the modulation destination (e.g. pitch, VCA, VCF cutoff, VCF resonance, ...).

The advantage of this array approach is that we won't have to sort events, and if we only have a small number of modulation destinations we could do even better and create a two dimensional array (instead of linked lists), where the first index of the array is the sample point in the current audio fragment and the second index refers to the modulation destination. So if an envelope generator wants to alter the pitch it would just do this:

fragment_events[sample_point][MOD_DST_PITCH] += egspitchdelta;

I would prefer the two dimensional approach, as we don't have that many mod destinations, do we?

What do you guys think?

CU
Christian
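A sketch of the two dimensional variant (everything except the accumulation line above is invented here for illustration, including MAX_FRAGMENT_SIZE): the array is cleared at the start of each fragment, every modulation source accumulates into its destination column, and the audio thread just reads the final sums while rendering.

#define MOD_DST_PITCH   0
#define MOD_DST_VCA     1
#define MOD_DST_CUTOFF  2
#define MOD_DST_COUNT   3

float fragment_events[MAX_FRAGMENT_SIZE][MOD_DST_COUNT];

/* start of fragment: forget last fragment's deltas */
memset(fragment_events, 0, sizeof(fragment_events));

/* any modulation source, at its event's sample point */
fragment_events[sample_point][MOD_DST_PITCH] += egspitchdelta;

/* audio thread: the stored value is the final, summed delta for that sample point */
for (int i = 0; i < fragment_size; i++) {
    curr_pitch_delta += fragment_events[i][MOD_DST_PITCH];   /* 0.0 for most sample points */
    /* ...render sample i with the updated parameters... */
}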
|
From: David O. <da...@ol...> - 2003-12-30 19:42:05
|
On Tuesday 30 December 2003 10.26, be...@ga... wrote:
[...]
> David (Olofson), what do you suggest ? Using an event system
> similar to the one you use in audiality ?

Well, that's really more about scalability and accuracy; with a sample accurate event system, you can put ramps (of whatever kind you may have) exactly where you want them, and the "new ramp" calculations hit you only when you actually start a new ramp. If you're using linear ramps only, the code in the inner DSP loop is very simple and fast, but depending on what you're playing, you *may* get extra overhead due to the greater number of ramp events needed at times.

Basically, an event system has a low, fixed cost in the DSP units that use it. It does not have the accuracy limitations of systems with a control rate lower than the audio sample rate, nor does it have the higher fixed cost of converting control values to internal audio rate "control change coefficients" every N audio frames.

So, in short, I'd recommend an event system regardless of what kind of ramping is used, for any serious system that doesn't use audio rate control streams. Any fixed lower control rate will cause trouble in some situation, forcing sound programmers to use prerendered samples for trivial things like short attack noises and the like.

OTOH, this is probably not a major issue in a sampler. A virtual analog synth *must* have very accurate envelope timing for serious percussive sounds, but on a sampler, you tend to use envelopes and effects to tweak the timbre of complex sounds, rather than to create new sounds from very simple waveforms. A fixed control rate might work, but it would make LinuxSampler more specifically a sampler, without much scalability into "real" synthesis.

> For performance reasons I'd opt for a system that uses only linear
> segments for both volume and pitch enveloping.
> exp curves and other kinds of envelopes can be easily simulated by
> using a series of linear segments.

Note that, as Ian points out, non-linear segments aren't all that expensive. However, reverse calculations, splitting, combining and otherwise manipulating streams of "ramp" events get more complicated with non-linear functions, so in some cases, the reduction of event density at the targets may not be worth the increased complexity in other places.

If only performance matters, I guess non-linear ramps could be faster than linear only, but I suspect the code would be a lot more complicated, if we are to do things like combining and manipulating event streams. Not sure, though. It's a balance thing. If there isn't too much code that's affected, a more complex data format might pay off.

> Keep in mind it must for example handle following tasks:
> assume there is a pitch envelope running (composed of linear
> segments). Now the user operates the pitchbender.
> The real time event is timestamped and delayed till the next audio
> fragment gets processed. If there is already an envelope running
> the pitchbender needs to "add" his own pitch to the current one
> possibly using an event. for example if we want two pitch envelopes
> modulating the same sample what's the best way to do it ?

Right; this is where linear segments become a bit easier.

However, note that not even linear segments are trivial in a system where the ramps of the streams to combine can start and end at any time. There's also a risk of event density exploding if you combine too many streams, as the normal solution would be to just split every time a new ramp event comes in from either source. If there is a risk that you'll frequently be combining streams with high density, you'll need to take care of this one way or another. Meanwhile, this problem does not exist in a fixed control rate system.

Hybrid solution: Dynamic control rate. Use timestamped events, but try to keep events locked to a common heartbeat as far as possible, so you get events from multiple inputs simultaneously most of the time. You could even require that all plugins in a sub-net use the same heartbeat at all times, so your event processors will only ever see simultaneous events.

I guess one could come up with hybrids from all over the scale between fixed control rate and timestamped events.

> Since pitch envelopes are made of "deltas" in theory one could
> simply add up the deltas when events come in.

Yes - if the adding is done where events turn into parameters for some DSP code. However, if you have dedicated event processing plugins, the normal action would be to update the internal deltas and generate new events. Thus, every input event results in one output event. Even eliminating doubles with the same timestamp requires extra work. (Not much, though. Just keep a flag to trig the generation of the output event before checking the # of frames to the next input event.)

> eg ... if there is only one envelope then the events should
> overwrite the current pitch delta but if you want to mix two or
> more envelopes then the deltas should be added up (basically you could
> calculate the delta of the delta between events and simply add
> that to the current delta). That way AFAIK it should work with one
> single and multiple active envelopes.
> Am I missing something ? :-)

Well, if you're going to do this in a seriously useful way, I don't think you can do it at the delta level near the DSP code. Combining controls usually includes some scaling as well as adding. More importantly, unless you want to hardwire the control routing, you'll need proper nets of event processor plugins, rather than support for multiple outputs to one input.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
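As a concrete illustration of the kind of timestamped ramp event being discussed (this struct is invented for the sketch; it is not Audiality's actual event format): a receiver sorts nothing, it just walks the queue in timestamp order and, at each event, installs the new per-sample slope for the given destination.

typedef struct {
    unsigned timestamp;    /* frame offset within the current audio fragment */
    int      destination;  /* what it modulates, e.g. pitch or volume */
    float    delta;        /* new per-sample slope to apply from this frame on */
    unsigned duration;     /* frames until the ramp reaches its end point */
} RampEvent;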
|
From: <be...@ga...> - 2003-12-31 12:20:14
|
David Olofson <da...@ol...> wrote:
> Well, that's really more about scalability and accuracy; with a sample
> accurate event system, you can put ramps (of whatever kind you may
> have) exactly where you want them, and the "new ramp" calculations
> hit you only when you actually start a new ramp. If you're using
> linear ramps only, the code in the inner DSP loop is very simple and
> fast, but depending on what you're playing, you *may* get extra
> overhead due to the greater number of ramp events needed at times.

Agreed, but I think that the density of linear ramp segments is relatively low even in the most complex cases. I mean: I guess it's impossible to hear the difference between an exponential envelope that is rendered sample for sample and one that is made up of linear segments that are, let's say, 8 samples long. As said, our event system is not limited; we can even change values EVERY sample if we need to, but I guess it would be totally overkill because you are entering the AM/FM modulation domain then :-).

My only concern is whether we want envelopes whose shape and characteristics can be changed by realtime user input (e.g. a MIDI controller). To keep overhead low we would probably need to precompute such envelopes, and the limitation could be that if there are too many variables the number of tables gets too big. But these are extreme cases. Perhaps you could even compute them in real time, since MIDI CC events occur relatively seldom compared to the samplerate.

> Basically, an event system has a low, fixed cost in the DSP units that
> use it. It does not have the accuracy limitations of systems with a
> control rate lower than the audio sample rate, nor does it have the
> higher fixed cost of converting control values to internal audio rate
> "control change coefficients" every N audio frames.

Yes, I'm against such systems too. We want sample accurate events because we want future interoperability with sequencers that can send sample accurate events, and we want LS to provide decent synth-like modulation stuff. Think about the dynamic DSP recompiler. I guess users will come up with very interesting "synths" made out of LS modules. Perhaps in future the "Sampler" word in LinuxSampler will not be appropriate anymore. Who knows, we will see .... :-)

> So, in short, I'd recommend an event system regardless of what kind of
> ramping is used, for any serious system that doesn't use audio rate
> control streams. Any fixed lower control rate will cause trouble in
> some situation, forcing sound programmers to use prerendered samples
> for trivial things like short attack noises and the like.

As said, pure sampling is nice and you can reproduce natural instruments faithfully by using very large multisamples (many velocity layers plus sampling each note/key), but for electronic instrument stuff etc. it's much better if samples can be modified and shaped by a good modulation engine. You know .... total world domination in the sampler domain requires high quality engines :-)

> OTOH, this is probably not a major issue in a sampler. A virtual
> analog synth *must* have very accurate envelope timing for serious
> percussive sounds, but on a sampler, you tend to use envelopes and
> effects to tweak the timbre of complex sounds, rather than to create
> new sounds from very simple waveforms. A fixed control rate might
> work, but it would make LinuxSampler more specifically a sampler,
> without much scalability into "real" synthesis.

That is exactly my point. I'm against fixed control rate and other quick-n-dirty tradeoffs, and I guess you guys are too.

> Note that, as Ian points out, non-linear segments aren't all that
> expensive. However, reverse calculations, splitting, combining and
> otherwise manipulating streams of "ramp" events get more
> complicated with non-linear functions, so in some cases, the
> reduction of event density at the targets may not be worth the
> increased complexity in other places.

Exactly.

> If only performance matters, I guess non-linear ramps could be faster
> than linear only, but I suspect the code would be a lot more
> complicated, if we are to do things like combining and manipulating
> event streams. Not sure, though. It's a balance thing. If there isn't
> too much code that's affected, a more complex data format might pay
> off.

As said, as soon as you have less than one ramping event every 8-10 samples (assuming there is no audible difference between that and envelopes rendered for each sample), it pays off to use linear ramps, because the difference between linear and higher order ones is so small. Plus, if you have an event every 8 samples, the overhead compared to having an event, let's say, every 20-30 samples (because using higher order ramps requires a lower event density) is low, since most of the time there will not be an event that needs to be handled (at most 1 every 8 samples).

> However, note that not even linear segments are trivial in a system
> where the ramps of the streams to combine can start and end at any time.
> There's also a risk of event density exploding if you combine too
> many streams, as the normal solution would be to just split every
> time a new ramp event comes in from either source. If there is a risk
> that you'll frequently be combining streams with high density, you'll
> need to take care of this one way or another. Meanwhile, this problem
> does not exist in a fixed control rate system.

Yes, this is one of the disadvantages of event based systems. Perhaps Christian's proposed bidimensional event array is a good solution? The only thing I worry about is the time it takes to figure out whether there are events pending. With the event list you know the samples_to_next_event value and can simply skip over it in the innermost audio loop, so when no events occur there is no CPU overhead. Christian, any idea if samples_to_next_event can be applied to your bidimensional event array too?

> Hybrid solution: Dynamic control rate. Use timestamped events, but try
> to keep events locked to a common heartbeat as far as possible, so
> you get events from multiple inputs simultaneously most of the time.
> You could even require that all plugins in a sub-net use the same
> heartbeat at all times, so your event processors will only ever see
> simultaneous events.

Interesting, but I guess it makes the system even more complex, perhaps without buying us much in terms of performance. I don't know ... we would need some realworld benchmarks to see what's better for us.

> Well, if you're going to do this in a seriously useful way, I don't
> think you can do it at the delta level near the DSP code. Combining
> controls usually includes some scaling as well as adding. More
> importantly, unless you want to hardwire the control routing, you'll
> need proper nets of event processor plugins, rather than support for
> multiple outputs to one input.

We will see. I think that after some discussion we can come up with a good tradeoff that is performant and flexible and provides high audio quality.

cheers,
Benno
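One possible way to get a samples_to_next_event style skip out of the array approach, purely as a sketch of an idea (the helper names are invented, and this is not something the thread has settled on): next to the 2D array, keep a sorted list of the sample points that actually received events; the render loop then runs straight between consecutive entries, exactly as with an event list, and touches the array only at those points.

int  event_points[MAX_FRAGMENT_SIZE];  /* sample points that got at least one event, kept sorted */
int  num_event_points = 0;
char row_used[MAX_FRAGMENT_SIZE];      /* cleared together with fragment_events */

/* a modulation source writing to a previously empty row also records the point */
if (!row_used[sample_point]) {
    row_used[sample_point] = 1;
    insert_sorted(event_points, &num_event_points, sample_point);   /* hypothetical helper */
}
fragment_events[sample_point][MOD_DST_PITCH] += delta;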