From: <le...@rd...> - 2004-05-27 21:37:07

On 27 May 04, at 23:08, be...@ga... wrote:

> The offending code (probably due to a cut'n'paste error) is in line 199 of Voice.h:
>
>     pOutputLeft[i++] += this->FilterRight.Apply(&bq_base, &bq_main,
>         effective_volume * ((((a * pos_fract) + b) * pos_fract + c) * pos_fract + x0));
>
> It should be
>
>     pOutputRight[i++] += ....
>
> Stephane spotted this problem and told me about it on IRC. He said he tried a patch that speeds things up a bit (reading pSrc[0]..pSrc[7] in one rush into variables and then using these variables for the stereo cubic interpolator). He said on PPC it's faster.
>
> Perhaps it makes sense to commit it? Christian? (I sent a copy of the diff to Christian.)

What really speeds things up is the use of float instead of double (3.0f instead of 3.0 and so on), *if* computing in double is not required, of course. But this must be tested on x86 before committing. I'm not sure the other things (reading into local variables, ...) really matter.

Stephane

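For readers following along, here is a rough sketch of the shape of the fix being discussed. The identifiers (pOutputLeft/pOutputRight, pos_fract, effective_volume) come from the quoted snippet, but everything else, including the function signature, the standard 4-point cubic coefficients, and the omission of the FilterLeft/FilterRight stage, is an assumption for illustration and not the actual Voice.h code. The float literals (3.0f, 0.5f, ...) illustrate Stephane's point about staying in single precision throughout.

```cpp
#include <cstddef>

// Hypothetical sketch of a stereo cubic-interpolation render step, loosely
// following the identifiers quoted above. NOT the real Voice.h code: the
// coefficient math is the standard 4-point cubic form and the biquad filter
// stage is left out. Assumes srcPos >= 1 and enough guard samples at the end.
static void renderStereoCubic(const float* pSrcL, const float* pSrcR,
                              float* pOutputLeft, float* pOutputRight,
                              std::size_t frames, double srcPos, double pitch,
                              float effective_volume)
{
    for (std::size_t i = 0; i < frames; ++i) {
        const std::size_t pos_int   = static_cast<std::size_t>(srcPos);
        const float       pos_fract = static_cast<float>(srcPos - pos_int);

        // Float literals throughout: the point made above is that keeping the
        // whole computation in float (instead of double) is what speeds it up,
        // provided double precision is not actually needed.
        auto cubic = [pos_fract](float xm1, float x0, float x1, float x2) {
            const float a = (3.0f * (x0 - x1) - xm1 + x2) * 0.5f;
            const float b = 2.0f * x1 + xm1 - (5.0f * x0 + x2) * 0.5f;
            const float c = (x1 - xm1) * 0.5f;
            return ((a * pos_fract + b) * pos_fract + c) * pos_fract + x0;
        };

        // The fix under discussion: left samples go to pOutputLeft,
        // right samples to pOutputRight.
        pOutputLeft[i]  += effective_volume *
            cubic(pSrcL[pos_int - 1], pSrcL[pos_int], pSrcL[pos_int + 1], pSrcL[pos_int + 2]);
        pOutputRight[i] += effective_volume *
            cubic(pSrcR[pos_int - 1], pSrcR[pos_int], pSrcR[pos_int + 1], pSrcR[pos_int + 2]);

        srcPos += pitch;
    }
}
```
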
From: <be...@ga...> - 2004-05-27 21:08:34

The offending code (probably due to a cut'n'paste error) is in line 199 of Voice.h:

    pOutputLeft[i++] += this->FilterRight.Apply(&bq_base, &bq_main,
        effective_volume * ((((a * pos_fract) + b) * pos_fract + c) * pos_fract + x0));

It should be

    pOutputRight[i++] += ....

Stephane spotted this problem and told me about it on IRC. He said he tried a patch that speeds things up a bit (reading pSrc[0]..pSrc[7] in one rush into variables and then using these variables for the stereo cubic interpolator). He said on PPC it's faster. Perhaps it makes sense to commit it? Christian? (I sent a copy of the diff to Christian.)

cheers,
Benno
http://www.linuxsampler.org

Vladimir Senkov <ha...@so...> wrote:

> Hi guys,
>
> I've checked out the latest from CVS last night and I can only hear the left channel. It seems to sound OK, but nothing in the right ear at all. I have an older version and it still sounds OK, so this is not my hardware/drivers/etc. Am I doing anything wrong? I couldn't finish troubleshooting as it was getting late and I was going to get in trouble :) Perhaps something in voice.h?
>
> Regards,
> Vladimir.

From: Christian S. <chr...@ep...> - 2004-05-27 15:41:29

On Thursday, 27 May 2004 at 14:21, Vladimir Senkov wrote:

> Hi guys,
>
> I've checked out the latest from CVS last night and I can only hear the left channel.

Confirmed, same here. Not sure yet what the problem is. If your time allows it, maybe you can try older versions to check when this problem was introduced.

CU
Christian

From: Mark K. <mk...@co...> - 2004-05-27 13:31:05

be...@ga... wrote:

> Mark Knecht <mk...@co...> wrote:
>
>> I THINK THERE IS A HUGE OPPORTUNITY HERE TO BRING PEOPLE TO LINUX IF THIS TOOL JUST REALLY EXISTED IN A FORMAT THAT USERS COULD USE!
>>
>> Too bad we haven't gotten that far.
>
> OK, LS development is a bit slow, but our advantage is "we will never go out of business" regardless of the speed of development.

OK, that's very true! It's not clear today whether the company that makes GSt is really in business on this product. Month after month of slips reduces their credibility with me.

<SNIP>

>> I'm most curious (and concerned, really) about how we decide to partition memory between all the different gig files I want to load and how many samples we need to preload.
>
> Currently the preload amount per sample is fixed, but we could do further optimization like allowing you to specify how much preload each instrument should use. But I think the defaults are quite reasonable and allow for high-polyphony setups.

OK, works for me.

<SNIP>

> Don't worry <be happy>, LS code is quite optimal and I think it cannot be sped up by a large amount (perhaps 20% max). On northernsounds.com the folks all seem to be quite impressed by the polyphony count in my benchmarks, so I think we are on the right track. And we have several cards left to play, like SIMD (SSE/Altivec).

Certainly. And the ability to use more memory under Linux will someday play out in our favor.

<SNIP>

>> Yes, I agree. (Please explain what you mean by 'stream count' though...)
>
> stream count = number of active disk streams, the equivalent of an audio track in a HD recording app. In substance, each active voice needs an associated disk stream.

OK - what's a 'disk stream'? Is that an open file? Maybe this doesn't matter for a user like me.

<SNIP>

> But above you said you can achieve full polyphony without problem? :)

Meaning I have no problem getting to maximum polyphony in my compositions - not that GSt has no problems when it's reaching maximum polyphony. It does, at least on my 500 MHz machine with an RME card and low latency. Set the latency higher (for final audio recording) and I do better.

Thanks!
- Mark

From: Vladimir S. <ha...@so...> - 2004-05-27 12:21:51

Hi guys,

I've checked out the latest from CVS last night and I can only hear the left channel. It seems to sound OK, but nothing in the right ear at all. I have an older version and it still sounds OK, so this is not my hardware/drivers/etc. Am I doing anything wrong? I couldn't finish troubleshooting as it was getting late and I was going to get in trouble :) Perhaps something in voice.h?

Regards,
Vladimir.

From: Rui N. C. <rn...@rn...> - 2004-05-26 22:04:40

benno wrote:

> BTW, did I tell you that a friend of mine, a professional graphic artist, expert in both 3D/2D, who does animated trailers for Italian state TV and has designed GUIs for multimedia CD-ROMs etc., wants to collaborate on the LS project? :) He said these cheesy GUI elements like faders/knobs/LEDs etc. are not a problem.
>
> Let's have Rui produce a preliminary version of the GUI so that the graphic artist can see the screenshots and provide us with pixmaps to implement these eye-candy GUI elements.

Hi, I'm Rui :)

As you might know, Qsampler is the intended GUI for linuxsampler and it's evolving, slowly. It already starts and connects to a linuxsampler server, although very crudely. It's still in alpha stage, but it's already something you can sneak-preview. Nothing is better than the real thing, better than any screenshot ;)

OK, I'll just put here the very same clues I posted a week or so ago. Qsampler is hosted on sourceforge.net (http://sourceforge.net/projects/qsampler) and its preliminary CVS repository can be checked out through anonymous (pserver) CVS with the following instructions:

    cvs -d:pserver:ano...@cv...:/cvsroot/qsampler login

When prompted for a password, you'll know what to do: just hit enter.

First, you'll need to install the liblscp package. This is the LinuxSampler Control Protocol support library, on which qsampler is based:

    cvs -z3 -d:pserver:ano...@cv...:/cvsroot/qsampler co liblscp
    cd liblscp
    make -f Makefile.cvs
    ./configure
    make
    make install
    cd ..

This will install liblscp.so under /usr/local/lib, so be sure to have it registered on your shared library path (either in the LD_LIBRARY_PATH environment variable or in /etc/ld.so.conf). Maybe it's already up there.

Then you may try qsampler itself:

    cvs -z3 -d:pserver:ano...@cv...:/cvsroot/qsampler co qsampler
    cd qsampler
    make -f Makefile.cvs
    ./configure
    make

Then run ./qsampler and you'll see what I'm cookin' :)

These instructions are for Linux, of course, but win32 support is also in the box if someone wants to try.

Hope you enjoy,
--
rncbc aka Rui Nuno Capela
rn...@rn...

From: <be...@ga...> - 2004-05-26 20:29:22

Mark Knecht <mk...@co...> wrote:

> I THINK THERE IS A HUGE OPPORTUNITY HERE TO BRING PEOPLE TO LINUX IF THIS TOOL JUST REALLY EXISTED IN A FORMAT THAT USERS COULD USE!
>
> Too bad we haven't gotten that far.

OK, LS development is a bit slow, but our advantage is "we will never go out of business" regardless of the speed of development. And I think once LS starts offering some basic features that users can use to make real music, it could over time become "the OpenOffice of samplers". We will see. As you know, sample library producers are interested in LS because it can offer big scalability (clustering) or be used to make low-cost sound modules etc. Not to mention a Windows VSTi or Mac OS X port (and thanks to Stephane this will soon be a reality).

The GIG engine is not that far from completion. OK, tuning and fixing EG/keymap/velocity curves will take some time, but it's not rocket science. And once GIG (v2) files play back well, I think users will start to embrace it fast (but only if we produce an easy-to-use GUI).

BTW, did I tell you that a friend of mine, a professional graphic artist, expert in both 3D/2D, who does animated trailers for Italian state TV and has designed GUIs for multimedia CD-ROMs etc., wants to collaborate on the LS project? :) He said these cheesy GUI elements like faders/knobs/LEDs etc. are not a problem. Let's have Rui produce a preliminary version of the GUI so that the graphic artist can see the screenshots and provide us with pixmaps to implement these eye-candy GUI elements. He likes the fact that we support OS X too, and that motivates him further to give us a hand (as a graphic artist he is an OS X fan, of course :) ).

> My machines are:
>
> 1) 3GHz P4 laptop (512MB) - disk gives about 26MB/s + 1394 or USB 2.0 external drives. This machine will nominally be running Pro Tools, but it runs Linux most of the day and is what I'm doing quick LS tests on right now.
> 2) Athlon XP 2500+ (512MB) with fast drive
> 3) Athlon XP 1600+ (512MB) with fast drive. (Used to run Pro Tools)
> 4) P3 500MHz (768MB) running GSt 96. No problems at all hitting maximum polyphony.

Quite a nice set of machines. Anyway, I remember that I hit peaks of 60-80 stereo voices on my PII 400 with an IBM 16GB IDE drive.

> OK, but here's what I'm more concerned about. If I load multiple gig files, then I need to preload more samples into memory than when I load a single gig file. How will LS use memory in this case?

As any softsampler would: the amount of RAM used by each GIG file depends on the number of samples contained in it. Currently we preload about 64 KByte per mono sample (double that value if the instrument is stereo). It's about the same amount GSt uses. We could even go lower (I had success playing the Chopin classical MIDI file using only 48 KB preload), but 48 KB is really pushing the disk to the limits since the times to react are really small.

> 1) A piano - 88 notes, 16 samples per note
> 2) Drums
> 3) Strings
> 4) Brass 1
> 5) Brass 2
> 6) Brass 3
> 7) Woodwinds
> 8) Drones
> 9) Loops
> 10) Etc.
> 11) Etc.
> 12) Etc.
>
> I'm most curious (and concerned, really) about how we decide to partition memory between all the different gig files I want to load and how many samples we need to preload.

Currently the preload amount per sample is fixed, but we could do further optimization like allowing you to specify how much preload each instrument should use. But I think the defaults are quite reasonable and allow for high-polyphony setups.

> How does this trade off against disk speed? If the piano and the loops are at very different locations on the disk, and they are played at the same time, then even a very fast disk will have trouble keeping up.

Not true. There is no guarantee that samples are close together, not even in the same instrument; this is why the softsampler must read the data in not-too-large chunks and frequently seek back and forth to keep all streaming buffers (one per voice) filled. Any disk-based sampler needs to operate that way; there is simply no alternative. If a HD has high throughput (MByte/sec) but not so good seek times, you can overcome this limitation by preloading more into RAM, but if the HD throughput is low then the number of sustained notes you will be able to play is limited. On the other hand, the goal is to preload as little as possible into RAM, to allow for heavily multisampled instruments and to permit loading many of them at the same time so that you can have a full arrangement of instruments ready to go. This is why a fast disk (both in terms of bandwidth and seek time) is absolutely mandatory. Hardware RAID arrays can speed things up considerably because they increase throughput (but not seek time). This means that, given somewhat bigger RAM preload sizes, by using a RAID array you can achieve hundreds of voices on a very fast machine. GSt3 claims 600 voices on a high-end machine with a 10k rpm disk RAID array. In that benchmark they probably turn off some of the EGs/filters and/or interpolation to achieve these high numbers (if you use chromatically sampled instruments no interpolation is needed, just copy the data directly from disk to the mix buffer).

Don't worry, LS code is quite optimal and I think it cannot be sped up by a large amount (perhaps 20% max). On northernsounds.com the folks all seem to be quite impressed by the polyphony count in my benchmarks, so I think we are on the right track. And we have several cards left to play, like SIMD (SSE/Altivec).

> Clearly spreading the samples across multiple disks can help some.

Yes, spreading the samples across multiple disks helps a lot, because you increase not only the bandwidth but the disk seek performance too. Probably, to achieve maximum benefit, LS should use one disk streaming thread per disk (currently it uses only one, but it's easy to extend it to multiple streaming threads).

> I'm just concerned that we're not yet doing enough work with multiple gig files to learn how to do this right.

It does not matter if you benchmark using one single GIG or many of them. What matters is the number of active disk streams, and this is a number limited by the speed of the disk.

> (And I'm just a worrier by nature!) ;-)

It's a good thing; the more we worry about performance, the better LS will become. I'm usually very sceptical when analyzing the innermost loops and functions in LS, and I often look at the assembler code that gcc generates and run synthetic timing benchmarks. For example, the filter unrolling code I sent to Christian increases the performance of the filter by at least a factor of 2-3 :)

> <SNIP>
>
>> It's up to the user to find out the limits of the machine, although it's hard to define what reliable disk streaming means in the softsampler context.
>
> In the end, yes, BUT it's up to us to be able to optimize LS to do a very good job with the hardware available, isn't it? At least as good as GSt would do on the same machine?

I think it's a reasonable target, not so hard to achieve. And even if we delivered 20% less performance, the value of LS is still high because it's free/open source and will run on many platforms etc.

>> It's not like a harddisk recording app where you have the data streaming all the time, so a benchmark can give you a good estimate of how many tracks you can reliably handle.
>
> Hard disk recording isn't a good measure anyway, since it's a very linear and predictable thing that happens with the data on disk. Audio playback by a sampler is more random in that any key on the controller can be played and the sampler must respond without fail. The sampler has no idea where it's going in 3 seconds. The hard disk recorder does.

Exactly.

>> In the case of a softsampler the stream count is very dynamic, with very high peaks. Just watch the voice count in a classical piano piece. Lots of high polyphony peaks, but usually the average is much lower. E.g. if the peak is 100 then the average will be 30-40.
>
> Yes, I agree. (Please explain what you mean by 'stream count' though...)

Stream count = number of active disk streams, the equivalent of an audio track in a HD recording app. In substance, each active voice needs an associated disk stream.

> <LOL!> Pretty much ALL of my MIDI files make GSt fail at times. That's a part of why I eventually have to record audio in multiple passes at the end. I live with clicks and pops while I'm writing.

But above you said you can achieve full polyphony without problem? :) Anyway, LS will be able to provide you with diagrams where you see the buffer fill states over time, so you can see where the bottlenecks are (e.g. if you need to increase preload sizes etc.).

> I don't think any of that will be that useful. More useful in my mind is to set the voice count high enough to be better than GSt3 (when it comes out) on the same machine. GSt3 allows unlimited notes, but obviously it will fail at some point. I just want LS to be that good or better. Set the voice count at unlimited and then prove we can do better on the same hardware.

As said, perhaps we can beat the others, perhaps we will be at par with them, perhaps we will be 20% slower. In any case, LS is certainly not going to be an app that wastes hardware resources. And as you know, open source developers do not sleep well until they have squeezed the last bit of performance out of the hardware :)

For Peter Roos: about the 2GB RAM limitation, and how Linux overcomes it:
http://strasbourg.linuxfr.org/jl3/features-2.3-2.html
In short, while 64GB RAM Intel-compatible machines are extremely rare and expensive (server stuff, like IBM servers), 3GB is achievable on many mainboards and Linux can make it available to regular user applications. For over 3GB I suggest going with AMD 64-bit CPUs/mainboards, which are becoming quite cheap.

cheers,
Benno

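As a back-of-the-envelope illustration of the preload figures Benno quotes (roughly 64 KB per mono sample, doubled for stereo), here is what Mark's example piano would cost at load time. The stereo assumption for the piano and the 300-sample drum kit are made up for comparison, not measurements:

```cpp
#include <cstdio>

// Rough preload estimate for the kind of instrument list Mark describes,
// using the figure quoted above (~64 KB preloaded per mono sample, doubled
// for stereo). The instrument sizes are only illustrative.
int main() {
    const double preload_mono   = 64.0 * 1024.0;        // bytes per mono sample (from the thread)
    const double preload_stereo = 2.0 * preload_mono;   // stereo samples preload twice as much

    // Mark's piano example: 88 notes x 16 velocity layers, assumed stereo.
    const int piano_samples = 88 * 16;
    std::printf("piano preload: ~%.0f MB\n",
                piano_samples * preload_stereo / (1024.0 * 1024.0));

    // A hypothetical 300-sample mono drum kit for comparison.
    const int drum_samples = 300;
    std::printf("drum kit preload: ~%.0f MB\n",
                drum_samples * preload_mono / (1024.0 * 1024.0));
    return 0;
}
```

This kind of arithmetic is why the preload size per sample matters so much when many gig files are loaded at once: the RAM cost scales with the total number of samples, not with how many of them are ever played.
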
From: Mark K. <mk...@co...> - 2004-05-26 16:23:26

Peter Roos wrote:

> Hi all,
>
> Quick question from one of the "lurkers" on this list ;-)
>
> What are the current memory addressing limits under Linux? A lot of "heavy" Gigastudio users, like myself, are running into the 2 GB address space limit of the current Win32 OSes. Just curious. If Linux does not have this limit, then that will be a super advantage for LinuxSampler.
>
> Take care,
> Peter Roos
> www.PeterRoos.com

Peter,
I'm told that the 2.6 series kernels, as well as some 2.4 series kernels, support up to 64GB of DRAM. That should be a metric the Windows platform running GSt would have a hard time getting to. ;-)
- Mark

From: Mark K. <mk...@co...> - 2004-05-26 16:19:32

Robert Jonsson wrote:

> Hi Mark,
>
>> More useful in my mind is to set the voice count high enough to be better than GSt3 (when it comes out) on the same machine. GSt3 allows unlimited notes, but obviously it will fail at some point. I just want LS to be that good or better. Set the voice count at unlimited and then prove we can do better on the same hardware.
>
> Not to dampen your enthusiasm too much, and... aiming high is good! But to beat GS on every base on the first release is a little steep ;-).

Who said every base? There are probably 20 or more features in GSt3 that the developers are not even trying to implement at this point. (Not that they're needed just yet.) Voice count is the most basic way that Tascam markets the different revisions of GSt2 and GSt3. It's always been that way. To ignore it would be silly.

And, actually, I agree with you. It's unlikely that LS will actually beat GSt3 on voice count in the first release. However, it's good to have appropriate goals. This is just my wish - not a stated goal of the developers.

Just my two cents,
Mark

From: Robert J. <rj...@sp...> - 2004-05-26 15:27:08

Hi Mark,

> More useful in my mind is to set the voice count high enough to be better than GSt3 (when it comes out) on the same machine. GSt3 allows unlimited notes, but obviously it will fail at some point. I just want LS to be that good or better. Set the voice count at unlimited and then prove we can do better on the same hardware.

Not to dampen your enthusiasm too much, and... aiming high is good! But to beat GS on every base on the first release is a little steep ;-).

Regards,
Robert

ps. new mail address!

--
http://spamatica.se/music/

From: Mark K. <mk...@co...> - 2004-05-26 14:53:04

Peter Roos wrote:

<SNIP>

> I'll keep lurking on this list.
>
> Peter

Peter,
Please do!
Mark

From: Peter R. <pet...@de...> - 2004-05-26 14:46:00

Hi Mark,

Thanks for the offer, but it was just out of curiosity ;-) I think LS will be a very interesting alternative to GST when it is available.

I'll keep lurking on this list.

Peter

From: Mark K. <mk...@co...> - 2004-05-26 14:43:06

Peter Roos wrote:

> Hi all,
>
> Quick question from one of the "lurkers" on this list ;-)
>
> What are the current memory addressing limits under Linux? A lot of "heavy" Gigastudio users, like myself, are running into the 2 GB address space limit of the current Win32 OSes. Just curious. If Linux does not have this limit, then that will be a super advantage for LinuxSampler.
>
> Take care,
> Peter Roos
> www.PeterRoos.com

Hi Peter,

There is a 'BIGMEM' option when you build the Linux kernel. It can take you beyond the 2GB limit pretty easily. Five years ago folks were doing 3-4GB in Linux boxes; I don't know where they are today. However, any machine's ability to use this is (I think) somewhat dependent on the motherboard chipset the machine is built around and the type of memory technology it uses. Still, most economical machines only have 3 memory slots, so 3GB might be the practical limit right now, but if you bought a server-type machine you could almost certainly go higher as some of them offer more memory slots.

I can do some research on what the limits are these days if the info is of a time-sensitive nature to you (i.e. you're buying a new machine).

- Mark

From: Mark K. <mk...@co...> - 2004-05-26 14:35:05

be...@ga... wrote:

> Mark Knecht <mk...@co...> wrote:
>
>> Benno,
>> Will we have any imposed limitations on voice counts? Or will the only limitation be the capability of the hardware? How will we assess what number of voices a given machine can handle?
>
> The limitation is given only by the speed of the hardware (CPU, disk).
>
> Keep in mind the LS voices I gave are stereo and GSt voices are mono, so GSt's 160 voices equal max 80 stereo voices. The current define in the latest LS CVS is 128 stereo voices, which equals 256 GSt mono voices... which GSt2 cannot deliver :)

True - GSt3 was supposed to do unlimited in May 2004. I see this morning it's been pushed back to the end of June 2004. That might mean Sept 2005.

I THINK THERE IS A HUGE OPPORTUNITY HERE TO BRING PEOPLE TO LINUX IF THIS TOOL JUST REALLY EXISTED IN A FORMAT THAT USERS COULD USE!

Too bad we haven't gotten that far.

>> I typically run 8-10 MIDI channels. The piano channel easily uses 40, but can peak at 80-100. (Divide by two in LS from GSt???) The other channels are more modest, using between 10-20 voices. Still that's 100 + 9*20 = 280 voices. Of course I can never get this from GSt because of imposed limitations, so I have to record audio in multiple passes.
>
> With LS it would be possible given a fast machine (Athlon 2400 or P4 3GHz) and a 7200rpm drive.

My machines are:

1) 3GHz P4 laptop (512MB) - disk gives about 26MB/s + 1394 or USB 2.0 external drives. This machine will nominally be running Pro Tools, but it runs Linux most of the day and is what I'm doing quick LS tests on right now.
2) Athlon XP 2500+ (512MB) with fast drive
3) Athlon XP 1600+ (512MB) with fast drive. (Used to run Pro Tools)
4) P3 500MHz (768MB) running GSt 96. No problems at all hitting maximum polyphony.

GSt doesn't really require a high-powered machine.

<SNIP>

> Yes, 64 was really low and you can hit the polyphony limits really quickly, especially because the pedal sustain algorithm is not optimized yet and simply keeps adding voices if you repeatedly hit the same keys. The optimized version should hold only the last 2 voices on the key or so. That way you never go beyond a certain polyphony.

Sounds pretty reasonable to me...

> On the other hand, each voice needs a streaming buffer which is currently set to 256KByte. So 128 voices use 32MByte of RAM, 256 voices 64MByte of RAM. I think for now 128 is a good value since it does not use that much memory and stresses the average machine's CPU/disk quite a bit.

OK, but here's what I'm more concerned about. If I load multiple gig files, then I need to preload more samples into memory than when I load a single gig file. How will LS use memory in this case?

1) A piano - 88 notes, 16 samples per note
2) Drums
3) Strings
4) Brass 1
5) Brass 2
6) Brass 3
7) Woodwinds
8) Drones
9) Loops
10) Etc.
11) Etc.
12) Etc.

I'm most curious (and concerned, really) about how we decide to partition memory between all the different gig files I want to load and how many samples we need to preload.

How does this trade off against disk speed? If the piano and the loops are at very different locations on the disk, and they are played at the same time, then even a very fast disk will have trouble keeping up. Clearly spreading the samples across multiple disks can help some. I'm just concerned that we're not yet doing enough work with multiple gig files to learn how to do this right.

(And I'm just a worrier by nature!) ;-)

<SNIP>

> It's up to the user to find out the limits of the machine, although it's hard to define what reliable disk streaming means in the softsampler context.

In the end, yes, BUT it's up to us to be able to optimize LS to do a very good job with the hardware available, isn't it? At least as good as GSt would do on the same machine?

> It's not like a harddisk recording app where you have the data streaming all the time, so a benchmark can give you a good estimate of how many tracks you can reliably handle.

Hard disk recording isn't a good measure anyway, since it's a very linear and predictable thing that happens with the data on disk. Audio playback by a sampler is more random in that any key on the controller can be played and the sampler must respond without fail. The sampler has no idea where it's going in 3 seconds. The hard disk recorder does.

> In the case of a softsampler the stream count is very dynamic, with very high peaks. Just watch the voice count in a classical piano piece. Lots of high polyphony peaks, but usually the average is much lower. E.g. if the peak is 100 then the average will be 30-40.

Yes, I agree. (Please explain what you mean by 'stream count' though...) Thanks.

> The worst case for a disk-based softsampler is starting lots of notes at the same time and then holding them for a long time. This puts extreme stress on the engine and the disk becomes the bottleneck: let's say you have a 32k-sample preload, which means 0.7 sec. If you fire up 50 notes at the same time, the softsampler will have only 0.7 secs available to fill up 50 buffers with data. That's a very small time and the risk is high that the data cannot be filled in time. It's easy to construct a MIDI file that makes any softsampler fail (alias causing voice dropouts).

<LOL!> Pretty much ALL of my MIDI files make GSt fail at times. That's a part of why I eventually have to record audio in multiple passes at the end. I live with clicks and pops while I'm writing.

<SNIP>

> In conclusion: some automatic benchmark estimate could be incorporated into LS, but it would never give you a 100% guarantee that you don't overload the streaming engine. We could do a very conservative benchmark too (one that fires up all the voices at the same time), but then the numbers would be way below the voice count achievable in practice (e.g. the benchmark would say 60 voices while in practice you would achieve 100-120 voices or more).

I don't think any of that will be that useful. More useful in my mind is to set the voice count high enough to be better than GSt3 (when it comes out) on the same machine. GSt3 allows unlimited notes, but obviously it will fail at some point. I just want LS to be that good or better. Set the voice count at unlimited and then prove we can do better on the same hardware.

From: Peter R. <pet...@de...> - 2004-05-26 10:59:17

Hi all,

Quick question from one of the "lurkers" on this list ;-)

What are the current memory addressing limits under Linux? A lot of "heavy" Gigastudio users, like myself, are running into the 2 GB address space limit of the current Win32 OSes.

Just curious. If Linux does not have this limit, then that will be a super advantage for LinuxSampler.

Take care,

Peter Roos
www.PeterRoos.com

From: Robert J. <rob...@da...> - 2004-05-26 10:53:58

Hi,

> On the other hand, each voice needs a streaming buffer which is currently set to 256KByte. So 128 voices use 32MByte of RAM, 256 voices 64MByte of RAM. I think for now 128 is a good value since it does not use that much memory and stresses the average machine's CPU/disk quite a bit.

I don't know if it's a big deal, but 32MB of RAM is nothing by today's standards; I don't run any machines with less than 512MB (oh, I do have a file server with 256MB, come to think of it). If it helps, I would think it quite natural to increase the memory consumption.

/Robert

From: <be...@ga...> - 2004-05-26 07:59:02

Mark Knecht <mk...@co...> wrote:

> Benno,
> Will we have any imposed limitations on voice counts? Or will the only limitation be the capability of the hardware? How will we assess what number of voices a given machine can handle?

The limitation is given only by the speed of the hardware (CPU, disk).

Keep in mind the LS voices I gave are stereo and GSt voices are mono, so GSt's 160 voices equal max 80 stereo voices. The current define in the latest LS CVS is 128 stereo voices, which equals 256 GSt mono voices... which GSt2 cannot deliver :)

> I typically run 8-10 MIDI channels. The piano channel easily uses 40, but can peak at 80-100. (Divide by two in LS from GSt???) The other channels are more modest, using between 10-20 voices. Still that's 100 + 9*20 = 280 voices. Of course I can never get this from GSt because of imposed limitations, so I have to record audio in multiple passes.

With LS it would be possible given a fast machine (Athlon 2400 or P4 3GHz) and a 7200rpm drive.

> OK, so this only goes to ask why is max voices set so low during development? Wouldn't we be better off setting it very high in development and then lowering it later when the code goes 1.0?

Yes, 64 was really low and you can hit the polyphony limits really quickly, especially because the pedal sustain algorithm is not optimized yet and simply keeps adding voices if you repeatedly hit the same keys. The optimized version should hold only the last 2 voices on the key or so. That way you never go beyond a certain polyphony.

On the other hand, each voice needs a streaming buffer, which is currently set to 256 KByte. So 128 voices use 32 MByte of RAM, 256 voices 64 MByte of RAM. I think for now 128 is a good value since it does not use that much memory and stresses the average machine's CPU/disk quite a bit. The #define solution is only temporary; at a later stage it will be possible to specify the voice count via LSCP.

Regarding the max number of voices that a machine can achieve: it's hard to say. If you push the number up too much you can choke even the fastest machine on the planet. Often the disk is the limitation, but the CPU is stressed heavily too, especially if you use cubic interpolation and filters. The Windows/Mac softsamplers that can stream from disk all have this problem: you have to deal with a shared, non-real-time device, the hard disk.

It's up to the user to find out the limits of the machine, although it's hard to define what reliable disk streaming means in the softsampler context. It's not like a harddisk recording app where you have the data streaming all the time, so a benchmark can give you a good estimate of how many tracks you can reliably handle. In the case of a softsampler the stream count is very dynamic, with very high peaks. Just watch the voice count in a classical piano piece: lots of high polyphony peaks, but usually the average is much lower. E.g. if the peak is 100 then the average will be 30-40.

The worst case for a disk-based softsampler is starting lots of notes at the same time and then holding them for a long time. This puts extreme stress on the engine and the disk becomes the bottleneck: let's say you have a 32k-sample preload, which means 0.7 sec. If you fire up 50 notes at the same time, the softsampler will have only 0.7 secs available to fill up 50 buffers with data. That's a very small time and the risk is high that the data cannot be filled in time. It's easy to construct a MIDI file that makes any softsampler fail (alias causing voice dropouts). But normal music pieces don't stress the disk that much, because we usually have lots of notes but they are usually relatively short, with some exceptions. The first part of a note is always read from RAM, and the first part of the notes streamed from disk is often cached in the disk cache and thus does not put real stress on the disk (as long as you don't sustain them for more than a few secs).

In conclusion: some automatic benchmark estimate could be incorporated into LS, but it would never give you a 100% guarantee that you don't overload the streaming engine. We could do a very conservative benchmark too (one that fires up all the voices at the same time), but then the numbers would be way below the voice count achievable in practice (e.g. the benchmark would say 60 voices while in practice you would achieve 100-120 voices or more).

cheers,
Benno
http://www.linuxsampler.org

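The numbers in this message can be sanity-checked quickly. The sketch below assumes 44.1 kHz, 16-bit mono streams (the message does not state the sample format); the 256 KB stream buffer, the 32k-sample preload, and the 50 simultaneous note-ons are taken from the text:

```cpp
#include <cstdio>

// Back-of-the-envelope check of the figures above. The 44.1 kHz / 16-bit mono
// assumption is mine; the 256 KB stream buffer, 32k-sample preload and the
// 50-simultaneous-note worst case come from the thread.
int main() {
    const double sample_rate     = 44100.0;            // assumed
    const double bytes_per_frame = 2.0;                // 16-bit mono, assumed
    const int    voices          = 128;
    const double stream_buf      = 256.0 * 1024.0;     // 256 KB per voice
    const double preload_samples = 32768.0;            // "32k samples preload"

    std::printf("stream buffer RAM for %d voices: %.0f MB\n",
                voices, voices * stream_buf / (1024.0 * 1024.0));
    std::printf("preload headroom per voice: %.2f s\n",
                preload_samples / sample_rate);

    // If 50 notes start at once, every stream must be refilled before its
    // preload runs out, so the disk must sustain roughly:
    const int simultaneous = 50;
    std::printf("required sustained throughput: %.1f MB/s (plus seek overhead)\n",
                simultaneous * sample_rate * bytes_per_frame / (1024.0 * 1024.0));
    return 0;
}
```

With those assumptions the output matches the thread's figures: about 32 MB of stream buffers for 128 voices, roughly 0.74 s of preload headroom, and only a few MB/s of raw throughput for 50 voices; the real pressure comes from the seeks needed to hit every buffer within that window.
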
From: <le...@gr...> - 2004-05-26 06:46:17

On 25 May 04, at 21:22, Christian Schoenebeck wrote:

> On Tuesday, 25 May 2004 at 18:30, Stéphane Letz wrote:
>
>> Is there any "block-diagram vision of the LS audio loop" available somewhere? Or could one be specified?
>
> You are only speaking about the most inner interpolate loop, right?

Yes, I mean the interpolate loop first.

> We don't have a block diagram yet, but creating one won't be hard for the interpolate loop. EGs and LFOs won't be factors, because the synthesis parameters they're throwing are calculated outside the interpolate loop.

This could be expressed too, because Faust has the notion of control signals that are evaluated once, outside of the loop, for each audio cycle.

> But I won't have the time for this at least until the weekend. Maybe you can do it meanwhile.

The 2 levels of description will be interesting.

Stephane

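A minimal sketch of the control-rate/audio-rate split being described here: synthesis parameters (EG/LFO outputs, and so on) are computed once per audio cycle, and the inner loop only touches per-block constants. The structure and names are illustrative assumptions, not LinuxSampler code, and linear interpolation with no bounds checking is used just to keep it short:

```cpp
#include <cstddef>

// "Control signals" are evaluated once per audio cycle by the caller and
// handed to the render loop as plain constants for the whole block.
struct BlockParams {
    float volume;   // e.g. the EG/LFO result for this cycle
    float pitch;    // playback increment for this cycle
};

static void renderBlock(const float* src, float* out, std::size_t frames,
                        double& srcPos, const BlockParams& p)
{
    for (std::size_t i = 0; i < frames; ++i) {
        // audio-rate work only: read, interpolate, scale
        const std::size_t k    = static_cast<std::size_t>(srcPos);
        const float       frac = static_cast<float>(srcPos - k);
        const float sample     = src[k] + frac * (src[k + 1] - src[k]);
        out[i] += p.volume * sample;
        srcPos += p.pitch;
    }
}
```
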
From: Christian S. <chr...@ep...> - 2004-05-25 19:22:35

On Tuesday, 25 May 2004 at 18:30, Stéphane Letz wrote:

> Is there any "block-diagram vision of the LS audio loop" available somewhere? Or could one be specified?

You are only speaking about the most inner interpolate loop, right? We don't have a block diagram yet, but creating one won't be hard for the interpolate loop. EGs and LFOs won't be factors, because the synthesis parameters they're throwing are calculated outside the interpolate loop.

But I won't have the time for this at least until the weekend. Maybe you can do it meanwhile.

CU
Christian

From: Christian S. <chr...@ep...> - 2004-05-25 19:07:09

On Tuesday, 25 May 2004 at 15:32, Mark Knecht wrote:

> Benno,
> Will we have any imposed limitations on voice counts? Or will the only limitation be the capability of the hardware? How will we assess what number of voices a given machine can handle?

The max values for voice and stream count are meant to be fixed values, but of course customizable. Most probably we will change these compile-time parameters into runtime parameters, but only in the sense that these values will be changed by an explicit command from the frontend / script, not automatically changed by the engine itself while realtime operation is running, because changing that limit implies reallocation, and that of course disturbs realtime operation.

Planned for a later stage are also benchmarks and stress tests within LS, conveniently controllable via frontend / LSCP script, to aid the user in making such decisions.

CU
Christian

From: Christian S. <chr...@ep...> - 2004-05-25 18:50:50

On Tuesday, 25 May 2004 at 12:59, be...@ga... wrote:

> Christian: did you already open the CVS write account for Stephane so that he can commit his stuff?

*ehem* -> http://www.linuxsampler.org

CU
Christian

From: <le...@gr...> - 2004-05-25 16:32:20

Hi,

I would like to try to express the LS internal audio loop in the Faust language. Faust (Functional AUdio STream) is a functional programming language for developing realtime audio plugins and applications. The Faust compiler translates signal processing specifications into C++ code.

http://sourceforge.net/projects/faudiostream/

Faust can generate scalar or vectorized code for x86 and Altivec architectures. The Faust language is a translation of signal processing block diagrams into textual expressions that are later compiled (with several optimisations) into a loop expressed in C++.

Basically the idea would be to start from a block-diagram vision of the LS audio loop (including envelope, filter, ...), express this scheme in the Faust language, get the result, and see what kind of efficiency we can get compared with the current hand-written code.

Is there any "block-diagram vision of the LS audio loop" available somewhere? Or could one be specified?

Thanks,
Stephane

From: Mark K. <mk...@co...> - 2004-05-25 13:33:02

be...@ga... wrote:

> hi Stephane,
> ah, so basically the PPC asm code was wrong?
>
> Regarding the "no free voice" messages, do you get them only at the end of the song? (AFAIK at the end it peaks to over 128 voices.) Try increasing max streams and max voices to 256 and rerun the Chopin MIDI.
>
> Please watch CPU load (using top or whatever) and report peaks here. I'm very curious. On an Athlon 2000 I get peaks of about 50-60% CPU.

Benno,
Will we have any imposed limitations on voice counts? Or will the only limitation be the capability of the hardware? How will we assess what number of voices a given machine can handle?

I typically run 8-10 MIDI channels. The piano channel easily uses 40, but can peak at 80-100. (Divide by two in LS from GSt???) The other channels are more modest, using between 10-20 voices. Still that's 100 + 9*20 = 280 voices. Of course I can never get this from GSt because of imposed limitations, so I have to record audio in multiple passes.

OK, so this only goes to ask why is max voices set so low during development? Wouldn't we be better off setting it very high in development and then lowering it later when the code goes 1.0?

- Mark

From: <le...@gr...> - 2004-05-25 12:42:13

On 25 May 04, at 14:09, Jan Weil wrote:

> Hi,
>
> On Tuesday, 25.05.2004 at 13:23, Stéphane Letz wrote:
>
>> Now most of the time is spent in int to float conversion:
>>
>>     float xm1_r = pSrc[pos_int+1];
>
> Well, probably you already know this, and actually it's float to int and it's on i386, but anyway, maybe this helps:
>
> http://www.eca.cx/lad/2001/Nov/0000.html
> http://mega-nerd.com/FPcast/

I know this code.

> The thread on LAD also mentions PPC. As far as I remember there have been more threads about this issue on LAD.
>
> Cheers,

I think a lot of the int to float conversion could be avoided in the code: the cubic interpolation uses 4 samples that are converted from int to float, and the *same* sample is converted 4 times as the computation window moves. This could be improved to make only one int to float conversion per sample.

Stephane

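A sketch of the optimization Stephane describes: convert each 16-bit source sample to float once, when it first enters the 4-point window, instead of re-converting pSrc[pos_int-1..pos_int+2] for every output sample. Mono only, no bounds checks, and not the actual patch, just the technique:

```cpp
#include <cstdint>
#include <cstddef>

// Keep the four points of the cubic window as floats and convert each int16
// source sample only once, when it enters the window. Illustration only.
static void resampleMonoCubic(const std::int16_t* src, float* out,
                              std::size_t outFrames, float pitch, float gain)
{
    float w[4] = { float(src[0]), float(src[1]), float(src[2]), float(src[3]) };
    std::size_t winStart = 0;   // source index of w[0]
    double pos = 1.0;           // interpolation is centred between w[1] and w[2]

    for (std::size_t i = 0; i < outFrames; ++i) {
        // slide the window until pos falls between w[1] and w[2];
        // each shift performs exactly one new int->float conversion
        while (pos - winStart >= 2.0) {
            w[0] = w[1]; w[1] = w[2]; w[2] = w[3];
            w[3] = float(src[winStart + 4]);
            ++winStart;
        }
        const float f = float(pos - winStart - 1);   // fraction within [w[1], w[2])
        const float a = (3.0f * (w[1] - w[2]) - w[0] + w[3]) * 0.5f;
        const float b = 2.0f * w[2] + w[0] - (5.0f * w[1] + w[3]) * 0.5f;
        const float c = (w[2] - w[0]) * 0.5f;
        out[i] = gain * (((a * f + b) * f + c) * f + w[1]);
        pos += pitch;
    }
}
```
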
From: Jan W. <Jan...@we...> - 2004-05-25 12:09:30

Hi,

On Tuesday, 25.05.2004 at 13:23, Stéphane Letz wrote:

> Now most of the time is spent in int to float conversion:
>
>     float xm1_r = pSrc[pos_int+1];

Well, probably you already know this, and actually it's float to int and it's on i386, but anyway, maybe this helps:

http://www.eca.cx/lad/2001/Nov/0000.html
http://mega-nerd.com/FPcast/

The thread on LAD also mentions PPC. As far as I remember there have been more threads about this issue on LAD.

Cheers,
Jan

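For context on the two links: the issue they discuss is that a plain C cast from float to int truncates, which on the x87 FPU forces a rounding-mode switch per conversion, while the C99/C++11 lrintf() function rounds using the current FPU mode and is usually much cheaper. A minimal example, not LinuxSampler code:

```cpp
#include <cmath>
#include <cstdio>

// Convert a float sample to a 16-bit integer using lrintf() instead of a
// plain (int) cast; the clamp range and the test values are illustrative.
static inline int toInt16Sample(float x)
{
    if (x >  32767.0f) x =  32767.0f;
    if (x < -32768.0f) x = -32768.0f;
    return static_cast<int>(std::lrintf(x));   // rounds in the current FPU mode
}

int main()
{
    const float samples[] = { 0.4f, -1.6f, 32767.9f, -40000.0f };
    for (float s : samples)
        std::printf("%9.1f -> %6d\n", s, toInt16Sample(s));
    return 0;
}
```
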