Mark Knecht <mknecht@...> writes:
> I THINK THERE IS A HUGE OPPORTUNITY HERE TO BRING PEOPLE TO LINUX IF
> THIS TOOL JUST REALLY EXISTED IN A FORMAT THAT USERS COULD USE!
> Too bad we haven't gotten that far.
OK, LS development is a bit slow, but our advantage is that "we will never go out of
business", regardless of the speed of development.
But once LS starts offering the basic features that
users need to make real music, I think it could over
time become "the OpenOffice of samplers". We will see.
As you know, sample library producers are interested in LS because
it offers great scalability (clustering) and can be used to build low-cost
sound modules, etc.
Not to mention a Windows VSTi or Mac OS X port (and thanks to Stephane
this will soon be a reality).
The GIG engine is not that far from completion. OK, tuning and fixing
EG/keymap/velocity curves will take some time, but it's not rocket science.
And once GIG (v2) files play back well, I think users will embrace
it quickly (but only if we produce an easy-to-use GUI).
BTW, did I tell you that a friend of mine, a professional graphic artist
expert in both 3D and 2D, who does animated trailers for Italian state TV,
has designed GUIs for multimedia CD-ROMs etc., wants to collaborate on the
LS project? :)
He said cheesy GUI elements like faders/knobs/LEDs etc. are no problem for him.
Let's have Rui produce a preliminary version of the GUI, so that the graphic artist
can look at the screenshots and provide us with pixmaps to implement these
eye-candy GUI elements.
He likes the fact that we support OS X too, which motivates him
further to give us a hand (as a graphic artist he is an OS X fan,
of course :) )
> My machines are:
> 1) 3GHz P4 laptop (512MB) - disk gives about 26MB/S + 1394 or USB 2.0
> external drives. This machine will nominally be running Pro Tools, but
> it runs Linux most of the day and is what I'm doing quick LS tests on
> right now.
> 2) Athlon XP 2500+ (512MB) with fast drive
> 3) Athlon XP 1600+ (512MB) with fast drive. (Used to run Pro Tools)
> 4) P3 500MHz (768MB) running GSt 96. No problems at all hitting maximum
Quite a nice set of machines.
Anyway, I remember hitting peaks of 60-80 stereo voices on my PII 400 with
an IBM 16 GB IDE drive.
> OK, but here's what I'm more concerned about. If I load multiple gig
> files, then I need to preload more samples into memory than when I load
> a single gig file. How will LS use memory in this case?
As any softsampler would:
the amount of RAM used by each GIG file depends on the number of
samples it contains.
Currently we preload about 64 KB per mono sample (double that value
if the instrument is stereo).
It's about the same amount GSt uses.
We could even go lower (I successfully played the Chopin classical MIDI
file using only a 48 KB preload). But 48 KB really pushes the disk
to its limits, since the time left to react becomes very small.
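To make the RAM figures above concrete, here is a tiny back-of-the-envelope sketch (my own illustration, not actual LS code) of what the fixed per-sample preload costs for a whole instrument:

```python
# Hypothetical sketch: estimate the RAM needed to preload one instrument,
# assuming the fixed per-sample preload described above (64 KB per mono
# sample, doubled for stereo).

def preload_ram_bytes(num_samples, stereo=False, preload_kb=64):
    """Rough RAM footprint of preloading one instrument's samples."""
    per_sample = preload_kb * 1024 * (2 if stereo else 1)
    return num_samples * per_sample

# Example: an 88-key piano with 16 samples per note, stereo:
piano = preload_ram_bytes(88 * 16, stereo=True)
print(piano // (1024 * 1024), "MB")  # 176 MB
```

As you can see, a heavily multisampled stereo piano alone already eats well over a hundred MB of preload RAM, which is why the per-sample value matters so much.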
> 1) A piano - 88 notes, 16 samples per note
> 2) Drums
> 3) Strings
> 4) Brass 1
> 5) Brass 2
> 6) Brass 3
> 7) Woodwinds
> 8) Drones
> 9) Loops
> 10) Etc.
> 11) Etc.
> 12) Etc.
> I'm most curious (and concerned really) in how we decide to partition
> memory between all the different gig files I want to load and how many
> samples we need to preload.
Currently the preload amount per sample is fixed, but we could optimize further,
e.g. by allowing the user to specify how much preload each instrument should
use. But I think the defaults are quite reasonable and allow for high polyphony.
> How does this trade off against disk speed? If the piano and the loops
> are at very different locations on the disk, and they are played at the
> same time, then even a very fast disk will have trouble keeping up.
There is no guarantee that samples are close together on disk, not even within the
same instrument. This is why the softsampler must read the data in not
too large chunks and frequently seek back and forth to keep all streaming
buffers (one per voice) filled.
Any disk-based sampler needs to operate that way;
there is simply no alternative.
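The refill scheme described above can be sketched roughly like this (a simplified hypothetical illustration, not LS's actual streaming code; buffer and chunk sizes are made-up values):

```python
import io

CHUNK = 32 * 1024          # read the data in not-too-large chunks (bytes)
BUFFER_SIZE = 256 * 1024   # per-voice streaming buffer capacity (bytes)

class VoiceStream:
    """One disk stream feeding one active voice."""
    def __init__(self, sample_file, offset):
        self.file = sample_file   # file-like object holding the sample data
        self.pos = offset         # next byte of the sample to read
        self.buffered = 0         # bytes currently queued for playback

def refill_all(voices):
    """One pass of the disk streaming thread over all active voices."""
    for v in voices:
        while v.buffered + CHUNK <= BUFFER_SIZE:
            v.file.seek(v.pos)        # samples may be far apart on disk
            data = v.file.read(CHUNK)
            if not data:              # end of sample reached
                break
            v.pos += len(data)
            v.buffered += len(data)

# Two voices streaming from different offsets of a fake 1 MB sample file:
disk = io.BytesIO(b"\x00" * (1024 * 1024))
voices = [VoiceStream(disk, 0), VoiceStream(disk, 512 * 1024)]
refill_all(voices)
```

The point of the small chunk size is that one voice never hogs the disk: every voice gets topped up on every pass, at the cost of extra seeks.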
If a HD has high throughput (MB/sec) but not-so-good seek times, you can
overcome that limitation by preloading more into RAM.
But if the HD throughput is low, then the number of sustained notes you
will be able to play is limited.
On the other hand, the goal is to preload as little as possible into RAM,
to allow for heavily multisampled instruments and permit loading many of them
at the same time, so that you can have a full arrangement of instruments ready.
This is why a fast disk (both in terms of bandwidth and seek time) is important.
Hardware RAID arrays can speed things up considerably because they
increase throughput (but not seek time).
This means that, given slightly bigger RAM preload sizes, by using
a RAID array you can achieve hundreds of voices on a very fast machine.
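The tradeoff can be put into numbers with a back-of-the-envelope sketch (my own illustrative figures, not LS benchmarks): the preload buys each voice a time budget before its stream must be refilled, and within that window the disk has to service every active stream once (one seek plus one chunk read each).

```python
# Hypothetical estimate of sustainable voice count from preload size,
# disk seek time and disk throughput. Chunk size and playback rate are
# assumed values for illustration.

def max_sustained_voices(preload_bytes, seek_s, throughput_Bps,
                         chunk_bytes=32 * 1024, playback_Bps=176400):
    # playback_Bps: 44.1 kHz, 16-bit stereo = 44100 * 2 * 2 bytes/sec
    time_budget = preload_bytes / playback_Bps            # seconds of audio preloaded
    service_time = seek_s + chunk_bytes / throughput_Bps  # per-stream disk cost
    return int(time_budget / service_time)

# A typical IDE drive: ~9 ms seek, ~40 MB/s, 128 KB stereo preload:
print(max_sustained_voices(128 * 1024, 0.009, 40e6))  # roughly 75 voices
```

Note how the seek time dominates the per-stream cost here, which is exactly why RAID (more throughput, same seeks) only helps once you also enlarge the preload.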
GSt3 claims 600 voices on a high end machine with 10k rpm disk RAID array.
In that benchmark they probably turn off some of the EGs/filters and/or
interpolation to achieve these high numbers.
(if you use chromatically sampled instruments no interpolation
is needed, just copy the data directly from disk to the mix buffer).
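A tiny illustration of that point (hypothetical, not LS's actual resampler): at a pitch ratio of exactly 1.0 the sample data can be copied straight into the mix buffer, while any other ratio needs interpolation (linear here, for simplicity) between neighbouring frames.

```python
# Sketch: chromatic sampling is cheap because the unity-pitch path is a
# plain copy; only re-pitched voices pay for interpolation.

def render(sample, num_frames, pitch_ratio):
    if pitch_ratio == 1.0:
        return sample[:num_frames]          # direct copy, no interpolation
    out = []
    pos = 0.0
    for _ in range(num_frames):
        i = int(pos)
        frac = pos - i
        a = sample[i]
        b = sample[i + 1] if i + 1 < len(sample) else a
        out.append(a * (1.0 - frac) + b * frac)  # linear interpolation
        pos += pitch_ratio
    return out

print(render([0.0, 1.0, 2.0, 3.0], 3, 1.0))   # [0.0, 1.0, 2.0]
print(render([0.0, 1.0, 2.0, 3.0], 3, 1.5))   # [0.0, 1.5, 3.0]
```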
Don't worry, the LS code is quite optimized and I don't think it can
be sped up by a large amount (perhaps 20% max).
On northernsounds.com the folks all seem quite impressed by
the polyphony counts in my benchmarks, so I think we are on the right track.
And we still have several cards left to play, like SIMD (SSE/AltiVec).
> Clearly spreading the samples across multiple disks can help some.
Yes, spreading the samples across multiple disks helps a lot, because
you increase not only the bandwidth but the effective seek performance too.
To achieve maximum benefit, LS should probably use one disk streaming thread
per disk (currently it uses only one, but it's easy to extend it to
multiple streaming threads).
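The multi-thread extension mentioned above could look roughly like this (a hypothetical sketch, not LS code): group the active streams by the disk their sample file lives on, and run one worker per disk, so seeks on one disk never stall refills on another.

```python
import threading
from collections import defaultdict

def group_by_disk(streams):
    """streams: list of (disk_id, stream) pairs -> {disk_id: [streams]}."""
    groups = defaultdict(list)
    for disk_id, stream in streams:
        groups[disk_id].append(stream)
    return groups

def refill_worker(streams, refill):
    for s in streams:
        refill(s)   # seek + chunked read, as in the single-thread case

def refill_all(streams, refill):
    """One refill pass, with one streaming thread per physical disk."""
    threads = [threading.Thread(target=refill_worker, args=(group, refill))
               for group in group_by_disk(streams).values()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Demo with a stand-in refill function that just records what it serviced:
done = []
streams = [("sda", 1), ("sdb", 2), ("sda", 3)]   # (disk id, stream) pairs
refill_all(streams, done.append)
```

Keeping the per-disk ordering inside each worker also preserves whatever seek locality a single disk offers.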
> I'm just concerned that we're not yet doing enough work with multiple
> gig files to learn how to do this right.
It does not matter whether you benchmark using one single GIG or many of them;
what matters is the number of active disk streams, and that is a number
limited by the speed of the disk.
> (And I'm just a worrier by nature!) ;-)
That's a good thing: the more we worry about performance, the better
LS will become.
I'm usually very sceptical when analyzing the innermost loops and
functions in LS and I often look at the assembler code that gcc generates
and run synthetic timing benchmarks.
For example, the filter unrolling code I sent to Christian increases
the performance of the filter by a factor of at least 2-3 :)
> > It's up to the user to find out the limits of the machine, although it's hard
> > to define what reliable diskstreaming means in the softsampler context.
> In the end, yes, BUT it's up to us to be able to optimize LS to do a
> very good job wit the hardware available, isn't it? At least as good as
> GSt would do on the same machine?
I think it's a reasonable target, not so hard to achieve.
And even if we delivered 20% less performance, the value of LS would still be high,
because it's free/open source, will run on many platforms, etc.
> > It's not like a harddisk recording app where you have the data streaming
> > all the time, so a benchmark can give you a good estimate of how many tracks
> > you can reliably
> Hard disk recording isn't a good measure anyway since it's a very linear
> and predictable thing that happens with the data on disk. Audio playback
> by a sampler is more random in terms of any key on the controller can be
> played and the sampler must respond without fail. The sampler has no
> idea where it's going in 3 seconds. The hard disk recorder does.
> > In the case of a softsampler the stream count is very dynamic with very high
> > peaks. Just watch the voice count in a classical piano piece.
> > Lots of high polyphony peaks but usually the average is much lower.
> > Eg if the peak is 100 then the average will be 30-40.
> Yes, I agree. (Please explain what you mean by 'stream count' though...)
Stream count = the number of active disk streams, the equivalent of an
audio track in a HD recording app.
In essence, each active voice needs an associated disk stream.
> <LOL!> Pretty much ALL of my MIDI files make GSt fail at times. That's a
> part of why I eventually have to record audio in multiple passes at the
> end. I live with clicks and pops while I'm writing.
But above you said you can achieve full polyphony without problems?
Anyway, LS will be able to provide you with diagrams where you can
see the buffer fill states over time, so you can spot where the bottlenecks are
(e.g. whether you need to increase preload sizes, etc.).
> I don't think any of that will be that useful. More useful in my mind is
> to set the voice count high enough to be better than GSt3 (when it comes
> out) on the same machine. GSt3 allows unlimited notes, but obviously it
> will fail at some point. I just want LS to be that good or better. Set
> the voice count at unlimted and then prove we can do better on the same
As said, perhaps we can beat the others, perhaps we will be at par with them,
perhaps we will be 20% slower.
In any case, LS is certainly not going to be an app that wastes
hardware resources. And as you know, open source developers do not sleep well
until they have squeezed the last bit of performance out of the hardware :)
For Peter Roos,
about the 2 GB RAM limitation and how Linux overcomes it:
in short, while 64 GB RAM Intel-compatible machines are extremely rare
and expensive (server stuff, like IBM servers),
3 GB is achievable on many mainboards, and Linux can make it available
to regular user applications.
For more than 3 GB I suggest going with AMD 64-bit CPUs/mainboards, which
are becoming quite cheap.