|
From: Paul K. <pau...@ma...> - 2002-11-14 11:30:32
|
Benno Senoner <be...@ga...> wrote:
>
> I have an important question regarding the effect sends: (since I am not
> an expert here) Are FXes in soft samplers/synths usually stereo or mono ?
>
> The CPU power for two mono sends is about the same for one single stereo
> send so I was just wondering which way we should go initially. (mono I
> guess ?).
Mono FX sends / stereo FX returns is the most common configuration - on
hardware samplers and synths, and on all but the biggest analogue mixers.
But hardware synths and samplers will usually have less FX sends than they
have FX busses, so you set the send level and destination for each source.
If each effect can be chained into the input of the next effect, one
routable FX send may be enough.
But if you want to support "insert" effects like compression, these should
be on stereo busses. So the question turns into: do we have one sort of
effect and use stereo FX busses, or do we have separate send and insert
FX (or just have FX sends, and insert effects can be applied to outputs).
Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
|
|
From: Benno S. <be...@ga...> - 2002-11-14 13:41:21
|
Paul Kellet wrote:
> Mono FX sends / stereo FX returns is the most common configuration -
> on hardware samplers and synths, and on all but the biggest analogue
> mixers.
Ok, but what does this mean for the audible result of a panned mono
signal processed by an FX with mono sends? E.g. only the dry part gets
panned while the wet part stays centered in the middle? Is the audible
result OK, or does it sound bad at extreme panning positions?
I'd prefer to use mono sends as the default (stereo will be possible
too) if that is the standard, since it will save us some CPU cycles,
which helps to increase polyphony.
> But hardware synths and samplers will usually have less FX sends than
> they have FX busses, so you set the send level and destination for
> each source.
Yes, of course. Since we use the recompiler we can create as many FX
sends per voice as we wish. AFAIK the usual GM MIDI standard has two
sends for each MIDI channel (reverb and chorus).
The flexible nature of linuxsampler will allow arbitrary per-voice
dynamically routable FX sends, but in most cases this will not be
needed: when implementing a MIDI sampler we can simply mix all voices
on the same channel and then send the result to the FX, since all
voices belonging to the same channel share the same FX send level.
This saves a lot of CPU.
This means the benchmarks I posted in my previous mail (284 voices on
a P4 1.8GHz, 331 voices on an Athlon 1.4GHz) are meant with 2
different FX sends on a per-voice basis. With per-MIDI-channel sends
the performance is probably around 500-600 voices on the same boxes.
This means that there will be plenty of room for running the actual
FXes and providing additional insert FXes like LP filters etc.
> If each effect can be chained into the input of the next effect, one
> routable FX send may be enough.
Sorry, I am no expert here, but taking the usual reverb, chorus case I
don't think they can be chained, can they?
> But if you want to support "insert" effects like compression, these
> should be on stereo busses. So the question turns into: do we have
> one sort of effect and use stereo FX busses, or do we have separate
> send and insert FX (or just have FX sends, and insert effects can be
> applied to outputs).
I think separate send and insert FXes should be supported. They can be
either stereo or mono (for insert FXes it probably makes sense to keep
them mono in the cases where the sample sources are mono).
Can the per-channel FX concept for MIDI devices be applied to inserts
too? I guess yes. E.g. say on MIDI channel 1 we have a polyphonic
synth sound and we want to use an insert FX (a lowpass) to process the
sound. In that case we can simply apply the FX to the channel mix
buffer, right?
Benno
--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
|
From: Steve H. <S.W...@ec...> - 2002-11-14 13:54:18
|
On Thu, Nov 14, 2002 at 03:52:15 +0100, Benno Senoner wrote:
> This means the benchmarks I posted in my previous mail
> (284 voices on P4 1.8GHz , 331 voices on Athlon 1.4GHz) are meant with 2
> different FX sends on a per-voice basis. With per-MIDI-channel the
> performance is probably around 500-600 voices on the same boxes. This
> means that there will be plenty room for running the actual FXes and
> providing additional insert FXes like LP filters etc.
I think these benchmarks are optimistic, but for very simple voices
they may be approachable.
> > If each effect can be chained into the input of the next effect, one
> > routable FX send may be enough.
>
> Sorry I am no expert here, but taking the usual reverb, chorus case I
> don't think they can be chained , can they ?
For reverb and chorus I think you want them to be serial; parallel:

      .----> chorus ---.
      |                |
 -----+                +------>
      |                |
      '----> reverb ---'

will sound odd I think.
- Steve
|
|
From: Steve H. <S.W...@ec...> - 2002-11-14 14:44:49
|
I'm assuming you meant to reply to the list...

On Thu, Nov 14, 2002 at 02:37:54 +0000, Nathaniel Virgo wrote:
> > For reverb and chorus I think you want them to be serial; parallel:
> >
> >       .----> chorus ---.
> >       |                |
> >  -----+                +------>
> >       |                |
> >       '----> reverb ---'
> >
> > will sound odd I think.
>
> On my aging Yamaha CS1x keyboard (an XG thing with lots of
> "analogue-style" sounds and built in fx) they are in parallel on most
> of the presets. You can put them in series (kind of) but it makes the
> reverb take up more of the mix.
OK, well that answers that then. Sure, this can be configurable, we
have to support both anyway. And the code will be built dynamically,
so it won't really hurt speed.
> Why not allow the user to set up the routing however they want it?
> Perhaps you could simplify things a lot by letting the user send each
> voice to either a mono JACK output or a stereo pair, and do effects
> routing in something like Ardour. Or would that be
> inefficient/impractical/at odds with the aims of this project?
One of the (eventual) aims is to build optimal code paths for the
effects routing, to get the voice count as high / cpu load as low as
possible. That kind of rules out external processing, and it would be
problematic anyway, as the number of active voices varies from block
to block.
PS I was vaguely worried about the overhead from having to use
position independent code in the recompiled voices, but it turns out
to only be a few percent overhead on my PIIIM (which has terrible
rspeed4 benchmark performance BTW, >100 cycles for the last test).
Benno, you could add -fPIC -DPIC to the CFLAGS if you want to account
for this in your benchmarks.
- Steve
|
From: Paul K. <pau...@ma...> - 2002-11-15 14:15:44
|
Benno Senoner <be...@ga...> wrote:
>
> > If each effect can be chained into the input of the next effect, one
> > routable FX send may be enough.
>
> Sorry I am no expert here, but taking the usual reverb, chorus case I
> don't think they can be chained , can they ?
I think even some GM modules (Roland?) let you send some of the chorus
output into the reverb. This is what I meant... instead of the effects
being in series (which can also be useful, depending on the effects)
the first effect is sent to the output *and* has a send level into the
next effect.
One example of where series effects (with no way of routing the signal
from one effect to another) is not good enough is delay and reverb: You
hear the signal with reverb, but then the delay repeats are dry - this
sounds silly!
> I think separate send and insert FXes should be supported.
> They can be either stereo or mono (probably for insert FXes it makes
> sense to keep them mono in the cases the sample sources are mono).
...except you might have a mono sample, but the pan of each voice might
be random or track the keyboard, so unless you are applying effects to
each voice individually (something the Native Instruments Kontakt
sampler allows) inserts need stereo busses.
Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
|
|
From: Paul K. <pau...@ma...> - 2002-11-15 14:15:44
|
Steve Harris <S.W...@ec...> wrote:
>
> Benno and I were discussing envelope generation last night. I think
> that the right way to generate an exponential envelope (I checked
> some synths' outputs too and it looks like this is the way it's done)
> is to feed a constant value into a LP filter with different
> parameters for each stage.
Yes, so the envelope level tends exponentially to a target level.
Where this gets complicated is the attack, which should have a target
level maybe 1.5 times its end level, otherwise you spend a long time
at nearly full volume waiting for the decay to start. DLS specifies
the attack should be linear, not exponential, and I tend to agree with
that - for short attacks it doesn't sound any different, but for long
attacks an exponential curve gets too loud too soon.
Some softsynths now have much more complicated envelopes, with a time,
target level and curve (variable between exp/lin/log) for each stage, but
it's important to let the user set up a simple ADSR if that is all that's
needed.
> I suspect that, in general you need to calculate the amplitude of the
> envelope for each sample, to avoid stairstepping (zipper noise).
Yes, or use short linear segments, and update the envelope every 32
samples for example (64 samples is too long, and people will complain
about the resolution).
> Example code:
>
> env = env_input * ai + env * a;
May be faster with one multiplication: env += ai * (env_input - env);
If you allow real-time control of envelope times/rates, counting down the
time to the next stage can get complicated, so it might be better to
trigger the next stage when you reach a certain level. Here is some
nasty code that does it that way, so env_rate could be modulated in
real-time (but to be able to modulate the sustain_level, you would have
to make env_target a pointer).
//initialize:
env = 0.0f;
env_rate = ATTACK_RATE;
env_thresh = 1.0f;
env_target = 1.5f; //else we will never reach threshold

//per sample:
env += env_rate * (env_target - env);
if (env > env_thresh) //end of attack
{
    env_rate = DECAY_RATE;
    env_thresh = env; //move threshold out of the way
    env_target = SUSTAIN_LEVEL;
    //could set a flag so this block is skipped in future
}

//note off:
env_rate = RELEASE_RATE;
env_target = 0.0f; //kill the voice before this de-normals!
Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
|