From: Josh G. <jg...@us...> - 2003-11-26 02:44:10
On Tue, 2003-11-25 at 18:36, Marek Peteraj wrote:
<cut>
> So what i'm suggesting in general is - if .gig offers a certain model or
> system, can we extend upon that? how far can we go? is the model
> limited? is an unlimited model possible?
> If yes, let's implement that instead of the limited gig one, because the
> unlimited one will provide the same functionality.
>
The GigaSampler format seems like it's very specific in its parameters,
and the ranges of those parameters aren't that extensive compared to
other formats. SoundFont uses 16 bit integers to describe all parameters
and DLS uses 32 bit. Don't get me wrong, GigaSampler does seem to offer
some nice features, but I also agree that basing a synth on just that
format is probably short-sighted, and it's also a proprietary format. I
don't really see any problem with starting a synth with that format, but
it may be good to start adding some generic functionality from the
beginning.
GigaSampler does have some nice features like the dimension abstraction,
velocity cross fading and disk streaming. Disk streaming could be done
with DLS2 or SoundFont as well; nothing prevents that. The dimension
abstraction can be represented linearly by a list of regions that
trigger on certain criteria (velocity level, MIDI note, etc). The
velocity cross fading can also be implemented in SoundFont and DLS2
using modulators/connection blocks respectively (although it's not quite
as clean). Of note is that SoundFont/DLS2 can also have overlapping
regions (I haven't seen a way to do that in GigaSampler, but it wouldn't
surprise me if I'm wrong).
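To make that concrete, such a linear region list could look something
like this in C (just a sketch, the names are made up):

/* one entry per sample/articulation zone */
typedef struct
{
  int note_lo, note_hi;     /* MIDI note range that triggers this region */
  int vel_lo, vel_hi;       /* velocity range that triggers this region */
  const char *sample;       /* reference to the sample data */
} Region;

/* Select every region matching a noteon event.  Overlapping regions
 * simply produce multiple matches (and therefore multiple voices). */
int
select_regions (const Region *regions, int count, int note, int vel,
                const Region **matches, int max_matches)
{
  int i, n = 0;

  for (i = 0; i < count && n < max_matches; i++)
    if (note >= regions[i].note_lo && note <= regions[i].note_hi
        && vel >= regions[i].vel_lo && vel <= regions[i].vel_hi)
      matches[n++] = &regions[i];

  return n;   /* number of regions (voices) to start */
}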
Providing a generic interface to GigaSampler for interfacing other
instrument formats would be really nice. FluidSynth has something like
this in place right now, although the parameters are SoundFont oriented
(not really that big of a problem). Perhaps using DLS2 parameters
would be even more powerful in describing different formats.
Borrowing from the FluidSynth voice creation API:
LinuxSampler provides a noteon callback which passes a LinuxSampler
"voice" instance, the MIDI channel, note number and velocity to the
handler (or alternatively stores these as fields in the LinuxSampler
instance). The handler is then responsible for creating voices in the
manner it sees fit (most of the voice data could be "cached", for speed,
from a "program" change handler that gets called when the MIDI
bank/program changes; a sketch of that follows the noteon example below).
The handler routine would do something like this (very rough pseudo code):
int
noteon_handler (LinuxSampler *ls)
{
  foreach custom_voice   /* one iteration per voice this note should start */
  {
    voice = ls->new_voice (<voice template ID>);

    /* base DLS2 style synthesis parameters */
    voice->set_param (DST_LFO_FREQ, custom_voice_val);
    voice->set_param (DST_FILTER_CUTOFF, custom_voice_val);

    /* DLS2 connection block: pitch wheel modulates filter cutoff */
    voice->set_modulator (CONN_SRC_PITCHWHEEL, CONN_SRC_NONE,
                          CONN_DST_FILTER_CUTOFF, CONN_TRN_CONCAVE,
                          2400);

    /* callback that supplies sample data (disk streaming, etc.) */
    voice->set_sample_read_func (custom_sample_stream_callback);
  }

  ls->commit ();   /* activate the new voices */

  return 0;
}
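And the program change side, where the caching mentioned above would
happen, might look something like this (again very rough pseudo code;
new_voice_template() and cache_template() are made up names just to
illustrate the idea):

int
program_handler (LinuxSampler *ls, int bank, int program)
{
  /* convert the loaded instrument (gig, SoundFont, DLS2, whatever) into
   * voice templates once, so noteon_handler() only has to copy them */
  foreach zone in instrument[bank][program]
  {
    template = ls->new_voice_template ();

    /* base DLS2 style parameters derived from the source format */
    template->set_param (DST_FILTER_CUTOFF, zone.cutoff);
    template->set_param (DST_LFO_FREQ, zone.lfo_freq);

    /* store it so noteon_handler() can look it up by note/velocity */
    cache_template (zone.note_range, zone.vel_range, template);
  }

  return 0;
}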
Anyway, much is missing from that, but I hope someone gets the idea. By
using DLS2 parameters to form the base set of synthesis modules, and then
allowing additional modules to be added, other formats could be described
(perhaps have "voice templates" that would allow one to define a default
voice state which a voice gets initialized to with new_voice(template_id)).
I'm sure there are a lot of tricks that would need to be done in order to
not allocate any memory or lock anything, etc. A lot of this type of work
could be done when the MIDI program changes, since whatever custom format
is being synthesized might have some calculation overhead to convert to
this format. The addition of modulators (connection blocks in DLS terms)
would provide the modulation capabilities. I hope that meant something to
someone. Cheers.
Josh Green
P.S. If you have read this far, perhaps you might be interested in a new
format I'm creating called FlacPak for losslessly compressing many
instrument formats (including GigaSampler). The web site for it is at:
http://swami.sourceforge.net/flacpak.php