From: Christian S. <sch...@li...> - 2014-01-07 11:37:22
On Tuesday 07 January 2014 04:56:01 David Jaffe wrote:
> Second, I have a number of naive questions; please indulge me…
Don't worry about that. The code base has become a bit complex over the years, so
it always takes a while to get used to it. It is perfectly normal to have plenty of
questions about it.
> I gather there is, for each engine, an audio thread, a disk thread, and an
> instrument loader thread. Is that right? I'm wondering if it's been
> considered to allow multiple parallel voices to be computed in separate
> threads to take advantage of machines with many cores? Or is the
> disk-reading speed the limiting factor? (probably depends somewhat on the
> disk?)
Yes, this feature was planned years ago, but then there were other
priorities which we thought were more important, especially since there is a
workaround for users to gain SMP support: creating one audio device per CPU
core. So you could e.g. create 4 JACK audio device instances on a system with
4 CPU cores. Or if you have multiple disks attached to your system and want to
leverage parallel reading, you could use the same trick.
It works, but it is of course a bit inconvenient for the user, since he carries
the burden of controlling this. In particular he has to decide which sampler part
("sampler channel") shall be connected to which audio device instance, so he
has to think about how to distribute the load among the parts to achieve the
best result.
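To make that a bit more concrete, here is roughly what such a setup looks like
on the LSCP level (written down from memory, so better double check it against
the LSCP reference; the numbers are just example device and channel IDs, and the
remarks on the right are annotations for this mail, not LSCP syntax):

    CREATE AUDIO_OUTPUT_DEVICE JACK        <- creates audio device 0
    CREATE AUDIO_OUTPUT_DEVICE JACK        <- creates audio device 1
    ADD CHANNEL                            <- creates sampler channel 0
    ADD CHANNEL                            <- creates sampler channel 1
    SET CHANNEL AUDIO_OUTPUT_DEVICE 0 0    <- channel 0 plays through device 0
    SET CHANNEL AUDIO_OUTPUT_DEVICE 1 1    <- channel 1 plays through device 1

The idea being that each audio device does its rendering independently, so the
work can be spread across the cores.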
> Is there one engine per audio device? Or can multiple engines talk to the
> same device? (Can more than one engine be active at a time?)
There is always exactly one sampler engine instance per audio device instance.
And each sampler engine instance in turn has its own disk thread instance.
This happens transparently for the user. For example: the user creates 10
sampler parts ("sampler channels") but has not attached an audio device
to any of these sampler channels yet. In this situation there is no
sampler engine instance yet.
When the user then creates an audio device and connects it to all 10 sampler
channels, the sampler will automatically create exactly one sampler engine,
which is now shared by the 10 sampler channels.
If the user then creates another audio device and attaches it to the first
sampler channel, a new sampler engine instance is created for the first
sampler channel, whereas the other 9 sampler channels still share the previous
sampler engine instance. And so on.
There is just one addition: an engine instance obviously provides support
for exactly one sampler format. So if you have e.g. 4 sampler channels set to
the "gig" format, 3 sampler channels set to the "SFZ" format and yet another
3 sampler channels set to the "SoundFont2" format, and all 10 sampler channels
are connected to one audio device, then there will be 3 engine instances, since
you use 3 different formats. So obviously the 10 sampler channels cannot share
one engine instance in that scenario.
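On the LSCP level the format of a sampler channel is simply determined by which
engine you load onto it, e.g. (command syntax and engine names from memory):

    LOAD ENGINE GIG 0      <- likewise for channels 1..3
    LOAD ENGINE SFZ 4      <- likewise for channels 5..6
    LOAD ENGINE SF2 7      <- likewise for channels 8..9

As long as all those channels are connected to the same audio device, the
channels with the same engine name end up sharing one engine instance, which
gives you the 3 engine instances mentioned above.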
> It looks like the code defines gigasampler format in terms of DLS...true?
> If so, was this originally a DLS sampler? Is giga format a superset of
> DLS?
The GigaSampler/GigaStudio format was designed based on DLS. There were things
added, but also information from DLS removed / ignored for the gig format. So
you can't truly say .gig is a real superset of DLS; it just comes from there,
kind of. However, since there are similarities between them, it made sense to
split the libgig code into 3 parts (a small usage sketch follows the list):
* RIFF classes: RIFF is used on the lowest level of DLS files and gig files,
  but also of many other popular file formats like .wav, .avi and more. You
  could say RIFF is something like XML, however in a compact and
  efficient binary format.
* DLS classes: Implement support for sounds in the DLS format. Theoretically
  these could be used directly, e.g. for yet another sampler engine in
  LinuxSampler, however since this format never really became popular, it
  would not make sense.
* gig classes: Mostly derived from the DLS classes and extended for the
Gigasampler/GigaStudio specific stuff.
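Just to illustrate how those layers stack on top of each other, here is a tiny
C++ sketch along the lines of how libgig is typically used (written from memory
against libgig's public classes, so the header path and details may differ
slightly between libgig versions; the file name is of course made up):

    #include <gig.h>      // might be <libgig/gig.h> depending on installation
    #include <iostream>

    int main() {
        try {
            // lowest layer: generic RIFF chunk access
            RIFF::File riff("/path/to/some_sound.gig");
            // gig layer on top, which builds on the DLS classes internally
            gig::File file(&riff);
            // walk over all instruments in the file and print their names
            for (gig::Instrument* instr = file.GetFirstInstrument();
                 instr; instr = file.GetNextInstrument())
            {
                std::cout << instr->pInfo->Name << std::endl;
            }
        } catch (RIFF::Exception e) {
            e.PrintMessage();
        }
        return 0;
    }

Compiled with something like: g++ example.cpp $(pkg-config --cflags --libs gig)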
And to answer your other question about DLS: the sampler never supported DLS,
simply because there were always just a handful of DLS files out there.
There were "so many" of them that one day Josh Green and I realized that
we were using the same DLS file for testing our DLS parsing code (Josh Green
also wrote a sampler file format library called libinstpatch - not used in
this project though).
> I didn't see the SFZ source, is that available separately? Also, the web
All sources regarding SFZ are in the LinuxSampler code base. You might be
confused by the fact that SFZ is a text based, human readable format, so
there is no complex format parsing library (like e.g. libgig) required for
reading SFZ files.
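For example, a minimal SFZ file is just a few lines of opcodes like the
following (the sample file names and key ranges here are of course made up):

    <group> ampeg_release=0.4
    <region> sample=piano_C4.wav lokey=58 hikey=63 pitch_keycenter=60
    <region> sample=piano_F4.wav lokey=64 hikey=69 pitch_keycenter=65

The engine merely has to tokenize those opcodes; the actual audio data lives in
the referenced .wav files.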
> page lists the Akai engine as "not started yet" in the "LS development
> road map." Does that mean it's not available at all at this point?
There is no support for Akai sounds right now in the sampler.
When the project kicked off at the end of 2002, we were discussing which
sampler format to support first. During that discussion I ported libakai to
Linux. That library does the parsing job, allowing access to Akai sound images
(similar to what libgig does for the Giga and SF2 formats). I think we should
still have those modified sources somewhere. There is also the original project
on SourceForge, however I don't know if my Linux/POSIX patches were ever applied
there.
Since the Giga format was much more attractive in 2002, we decided to go that
route instead and left Akai on the agenda for some point in the future. However
nowadays the Akai format is so outdated that it is probably not worth
spending time on implementing support for it in LinuxSampler. There are
nowadays other sampler formats which would be more interesting to add.
But hey, there is also still a lot to do regarding SFZ support ...
CU
Christian