|
From: Benno S. <be...@ga...> - 2002-10-29 14:16:13
|
Hi,
I thought about using "interpreted" LADSPA for previewing, with subsequent
compilation of the code.
I do not see any major hurdles there, but even if inlining LADSPA code
results in faster code than calling a chain of LADSPA plugins, it is
still suboptimal, especially with very small buffer sizes.
The process() code of a LADSPA plugin usually looks like:

for (i = 0; i < numsamples; i++) {
    sample[i] = .....
}

This means that chaining two LADSPA plugins at source level results in:

for (i = 0; i < numsamples; i++) {
    sample[i] = ....  (DSP algorithm 1)
}
for (i = 0; i < numsamples; i++) {
    sample[i] = ....  (DSP algorithm 2)
}

In linuxsampler it would be better to process everything in a single
pass, e.g.:

for (i = 0; i < numsamples; i++) {
    sample[i] = ....  (DSP algorithm 1)
    sample[i] = ....  (DSP algorithm 2)
}
So while being able to chain LADSPA plugins at source level is still very
useful, I would opt to also provide "native" signal processing units
(adders, multipliers, filters, etc.), because they can be inlined without
requiring another loop over the whole buffer.
I think sample-by-sample inlining increases the performance quite a bit
in the case of very small buffer sizes (my usual 32 sample buffer test
case).
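As a concrete sketch of the single-pass idea, here is what two fused DSP
stages might look like in C. The gain stage and one-pole lowpass are just
illustrative stand-ins for "DSP algorithm 1" and "DSP algorithm 2"; none of
these names are existing LinuxSampler code:

```c
#include <stddef.h>

/* Two hypothetical DSP stages, a gain and a one-pole lowpass, fused
 * into a single pass over the buffer instead of two separate loops. */
static void process_fused(float *sample, size_t numsamples,
                          float gain, float coeff, float *state)
{
    for (size_t i = 0; i < numsamples; i++) {
        float s = sample[i] * gain;      /* DSP algorithm 1: gain   */
        *state += coeff * (s - *state);  /* DSP algorithm 2: filter */
        sample[i] = *state;
    }
}
```

With a 32-sample buffer the fused version touches each sample once while it
is still in cache, instead of walking the buffer once per stage.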
What do you think?
How about caching issues? From the cache's point of view, is
sample-by-sample inlining preferable to LADSPA-style chaining?
An additional issue with LADSPA is that it is only suitable for audio
processing, not for MIDI (or event)-triggered audio generation.
This means we need internal audio unit generators anyway.
It would be cool to modularize everything and let almost all the audio
rendering code stay in the audio unit source files.
This means that as an advanced audio hacker you can easily fire up your
favorite editor and tweak the modules (or copy and modify them) to suit
your needs.
The only thing that is IMHO tied to the disk streaming engine is the
sample looping part since both the audio and the disk thread need to be
aware of looping information.
I'm not a big hardware sampler expert but since we want flexibility I
thought about using a linked list of loop points in order to allow
arbitrary looping.
e.g. a list of (startpoint, endpoint) pairs, where the individual looped
parts of the sample do not need to be adjacent to each other.
(with this kind of looping you could even load one single sample that
contains the sounds of a drumset and trigger them in such a way to form
a drumloop :-) )
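A minimal sketch of such a loop-point list in C. All names here are
illustrative assumptions, not existing LinuxSampler structures:

```c
#include <stddef.h>

/* One entry in the proposed list of loop points: each segment has its
 * own start and end frame, and segments need not be contiguous. */
struct loop_segment {
    size_t start;               /* first frame of the looped region     */
    size_t end;                 /* one past the last frame              */
    struct loop_segment *next;  /* next segment; NULL = no more looping */
};

/* Advance a play position by one frame, jumping to the start of the
 * next segment when the current one ends. */
static size_t advance(const struct loop_segment **seg, size_t pos)
{
    pos++;
    if (*seg && pos >= (*seg)->end) {
        *seg = (*seg)->next;
        if (*seg)
            pos = (*seg)->start;
    }
    return pos;
}
```

Both the audio thread and the disk thread could walk the same list, which
is the shared looping information mentioned above.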
Regarding the enveloping, I made some tests and I think I'd opt for the
same principle as the looping stuff: a list of linear or second-order
segments, each with a starting point and a dx value (or, in the
second-order case, an additional ddx value).
This would allow applying arbitrary envelope curves to parameters
(volume, filter frequencies) with low CPU and memory usage.
Of course if you want to provide only simple ADSR support you can easily
generate an appropriate envelope table for it.
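The segment scheme above can be sketched in a few lines of C. The struct
and function names are my own illustration of the dx/ddx idea, not an
agreed design:

```c
#include <stddef.h>

/* One envelope segment: a starting value, a per-sample increment dx,
 * and an optional second-order term ddx applied to dx every sample. */
struct env_segment {
    float start;   /* envelope value at segment start         */
    float dx;      /* first-order increment per sample        */
    float ddx;     /* second-order increment (0.0f = linear)  */
    size_t len;    /* segment length in samples               */
};

/* Render one segment into out[]. Only additions per sample, which is
 * what keeps the CPU cost low. */
static void render_segment(const struct env_segment *s, float *out)
{
    float v = s->start, dx = s->dx;
    for (size_t i = 0; i < s->len; i++) {
        out[i] = v;
        v += dx;
        dx += s->ddx;   /* the second-order term bends the curve */
    }
}
```

A simple ADSR would then just be four such segments chained together, and
switching envelope tables per phase means pointing at a different segment
list.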
Since a sample comprises an attack phase, a looped phase (in our case
possibly many looping segments) and a release phase, we could have
different envelope curves for each phase.
The envelope tables would then be switched when we go from one phase to
the next.
This could be useful for things like vibrato effects, where the vibrato is
applied only during the loop and release phases but not in the attack phase.
(same applies to filter freqs, pitch etc ... I think one could create
pretty cool sounds with these things alone).
Regarding GUIs: I agree with Juan that we need to decouple the sampler
and the GUI completely (he even proposed using a TCP socket so that you
can remote-control the sampler from another machine).
I'd go with shared mem/local IPC, but if you say TCP has advantages,
let's go for it.
I think over the long term the GUI is important, especially once "what
you hear is what you get" support is implemented, i.e. where you can edit
samples and sounds without resorting to an external editor that saves
the sample to a file which the sampler is then forced to reload into
memory.
I guess Josh Green (the author of smurf/swami) would be very helpful in
that area.
cheers,
Benno
|
|
From: Phil K. <phi...@el...> - 2002-10-29 14:38:09
|
Hi,
UDP would probably be better than TCP for GUI control; it's easier and
faster to implement. In the remote control work I've done I've tried
to keep the mappings as close to MIDI as possible, as this allows remote
MIDI hardware to interface more easily, but it does mean deciding whether
to use MIDI's 7-bit resolution or to move up to a higher resolution. A
lot of performers like to 'twiddle knobs', so sticking with MIDI is a
plus.
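A MIDI-style control datagram along these lines could be as simple as the
familiar 3-byte controller-change layout, so hardware controllers map onto
it directly. This sketch is an assumption about what such a payload might
look like, not an agreed protocol:

```c
#include <stdint.h>

/* Pack a MIDI-style control-change message into a UDP payload buffer.
 * Keeps MIDI's 7-bit data range; returns payload length or -1 on
 * out-of-range arguments. */
static int pack_cc(uint8_t *buf, unsigned channel,
                   unsigned controller, unsigned value)
{
    if (channel > 15 || controller > 127 || value > 127)
        return -1;                      /* stay within MIDI ranges      */
    buf[0] = (uint8_t)(0xB0 | channel); /* status byte: control change  */
    buf[1] = (uint8_t)controller;
    buf[2] = (uint8_t)value;
    return 3;                           /* bytes to hand to sendto()    */
}
```

The resulting 3-byte buffer would simply be sent as one datagram; moving to
higher resolution would mean widening the value field at the cost of the
direct MIDI mapping.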
Using a networked interface also allows you to mix and match toolkits for
the GUI. Qt is nice and fast and its signal and slot mechanism is really
easy, even for a C developer. Qt Designer can rapidly put together a
basic interface. GTK is another good choice.
Phil
On Tue, 2002-10-29 at 15:25, Benno Senoner wrote:
|
|
From: Steve H. <S.W...@ec...> - 2002-10-29 15:19:29
|
On Tue, Oct 29, 2002 at 04:25:41 +0100, Benno Senoner wrote:
> The process() code of LADSPA usually does
>
> for(i=0;i< numsamples; i++) {
> sample[i]=.....
> }
...
> in linuxsampler it would be better to process all in a single pass
> eg:
Not necessarily; once the inner loop goes over a certain size it becomes
/very/ inefficient. In fact, one of the common ways of optimising biggish
plugins is to break them into smaller passes.
Obviously you won't win every time, but I think inlined function calls to
LADSPA run() routines will be faster on average.
Obviously, things like mixers should be native, but not e.g. ringmods
(multipliers); a decent ringmod has anti-aliasing code in it (though mine
doesn't, yet).
> An additional issue about LADSPA is that it is only suitable for audio
> processing but not for midi (or event)-triggered audio generation.
No, it's not suitable for MIDI, but it can certainly do unit generation;
in fact, AFAIK the only bandlimited oscillators for Linux are LADSPA
plugins. You just use the CV model for triggering.
- Steve
|