|
From: Robert J. <rob...@da...> - 2003-10-28 10:19:57
|
Hi guys, I tried linuxsampler yesterday with the aid of Benno, and it looks great! CPU load was low and it seemed pretty stable. I encountered a few quirks that according to Benno are known issues; I'll list them just for the record:
- clicks when doing note_off
- with the original GigaPiano it sounded very strange when playing hard. To me it sounded as if the samples were played from the wrong key (one octave up); Benno told me there were issues with the MIDI mapping which may explain this.
- it segfaults when ctrl-c:ing out of the app.
- on my system I get mlockall failures all the time, regardless of the size of the gig file and such... how much memory does LS allocate? (I've got 512 MB, so it should work rather well I think. I can run a debug-compiled MusE together with JACK with no problems, and both of those do mlockall and occupy lots of memory...)

All in all, this looks set to be a real killer app! :) The things I miss are (probably mostly on the todo already):
- gui (not strictly necessary)
- jack output
- soundfont support. For now I'm perfectly happy with gig support; I wonder how hard it would be to add soundfont support though..?
- hmmm, I think I ran it as an ordinary user (with pretty good results), and it occurs to me that SCHED_FIFO should not be available then. LS uses SCHED_FIFO, right? Should it complain if it can't set it? (This may be a misconception of mine, though.)

----
And finally, a wish. I think this mailing list is too quiet; it's hard to know if there is any activity at all. May I propose that you guys start announcing CVS checkins on this list?

Regards, Robert |
|
From: <be...@ga...> - 2003-10-28 13:29:27
|
Scrive Robert Jonsson <rob...@da...>:

> Hi guys, I tried linuxsampler yesterday with the aid of Benno, and it looks great! CPU load was low and it seemed pretty stable. I encountered a few quirks that according to Benno are known issues; I'll list them just for the record:
> - clicks when doing note_off

This is because the sample is cut off hard, without any release envelope; the enveloping stuff is still missing. So it is not an error.

> - with the original GigaPiano it sounded very strange when playing hard. To me it sounded as if the samples were played from the wrong key (one octave up); Benno told me there were issues with the MIDI mapping which may explain this.

Yes, exactly. Christian will add the complete articulation implementation soon, so these strange effects will go away. (Actually libgig supports full articulation; it's linuxsampler that does not yet honor all the values.)

> - it segfaults when ctrl-c:ing out of the app.

Probably due to the incorrect stopping of threads when the CTRL-C handler is called; not a serious issue and probably very easy to fix. As said, it would be more serious if linuxsampler segfaulted during playing. :-)

> - on my system I get mlockall failures all the time, regardless of the size of the gig file and such... how much memory does LS allocate? (I've got 512 MB, so it should work rather well I think. I can run a debug-compiled MusE together with JACK with no problems, and both of those do mlockall and occupy lots of memory...)

See diskthread.h:

#define STREAM_BUFFER_SIZE 131072
#define MAX_INPUT_STREAMS 64

This means we can have at most 64 disk streams (each voice, if streamed from disk, needs exactly one stream). We are talking about 16-bit samples (2 bytes), thus the streams occupy MAX_INPUT_STREAMS * STREAM_BUFFER_SIZE * 2. This is not an extremely high amount, in the above case 16 MB.

Most of the memory is consumed by the pre-cached initial parts of the samples. See audiothread:

#define NUM_RAM_PRELOAD_SAMPLES

Normally we set this value to 65536, which means for mono 16-bit samples we preload 65536*2 = 128 KB of memory for each sample. If the .GIG contains 1000 samples we use 128 KB * 1000 = about 128 MB of memory. For stereo samples you must multiply the number by 2 (in the above case 256 KB of memory per stereo sample). Preloading 65536 sample frames means 1.48 secs of audio at 44.1 kHz. This is needed to overcome the delay until the disk thread fills up the ring buffers from which the remaining part of each sample will be streamed.

The most critical case occurs when you start lots of notes at the same time. E.g. assume we have 50 samples (all different) and start them all at the same time. This means that within 1.48 secs (the RAM part of the sample), the disk thread has to put at least some data into ALL 50 ring buffers associated with the voices. In that case the disk thread might fail to meet its deadlines, and you would risk the audio thread wanting to fetch audio data from a disk-stream ring buffer before the data has been loaded, leading to an error condition (we opted for cutting off the note and reporting the error). The solution to this is tuning the disk streaming (there is room for improvement by cutting read() sizes in critical situations, but I will add this stuff at a later stage), installing faster disks, or increasing the size of the part of each sample we preload in RAM. 
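To make the arithmetic above concrete, here is a small stand-alone sketch that reproduces the calculation. The constants mirror the #defines quoted from diskthread.h and audiothread.h; the 1000-sample library size and the 50-voice worst case are the illustrative numbers used above, not measurements.

// Reproduces the memory math described above. Constants mirror the quoted
// #defines; the library size and voice count are illustrative assumptions.
#include <cstdio>

int main() {
    const long STREAM_BUFFER_SIZE      = 131072;  // frames per disk stream ring buffer
    const long MAX_INPUT_STREAMS       = 64;      // one stream per disk-streamed voice
    const long NUM_RAM_PRELOAD_SAMPLES = 65536;   // frames preloaded per sample
    const long BYTES_PER_FRAME         = 2;       // 16-bit mono
    const long SAMPLES_IN_GIG          = 1000;    // assumed number of samples in the .GIG
    const double SAMPLE_RATE           = 44100.0;

    long stream_bytes  = MAX_INPUT_STREAMS * STREAM_BUFFER_SIZE * BYTES_PER_FRAME;
    long preload_bytes = SAMPLES_IN_GIG * NUM_RAM_PRELOAD_SAMPLES * BYTES_PER_FRAME;
    double preload_secs = NUM_RAM_PRELOAD_SAMPLES / SAMPLE_RATE;

    printf("disk stream ring buffers : %ld MB\n", stream_bytes  / (1024 * 1024)); // 16 MB
    printf("RAM preload, mono 16-bit : %ld MB\n", preload_bytes / (1024 * 1024)); // ~125 MB
    printf("preload covers           : %.2f s at 44.1 kHz\n", preload_secs);      // ~1.48 s

    // Worst case sketched above: 50 different samples triggered at once.
    // Upper bound: completely refilling all 50 ring buffers before the
    // preloaded part of each sample runs out.
    const int voices = 50;
    double refill_mb = voices * double(STREAM_BUFFER_SIZE) * BYTES_PER_FRAME / (1024 * 1024);
    printf("refill for %d voices     : %.1f MB within %.2f s\n", voices, refill_mb, preload_secs);
    return 0;
}
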
All disk-based samplers like Giga, Kontakt and Halion have this problem when many voices are triggered simultaneously, and often the only solution is to increase the RAM preload size, which has the disadvantage that you can load fewer sample libraries at the same time. People who make extensive use of disk-based samplers often have machines with 1-2 GB RAM in order to load as many sample libraries as possible. There are sample libraries that require a minimum of 0.5-1 GB to be loaded (the size of these libs is often 2-4 GB). For example NI Kontakt, while having more features and offering better sample manipulation tools than Giga, does not stream as well as Giga does (Giga uses kernel-based streaming, special low-latency GSIF drivers etc.). This means that in Kontakt you often have to increase preload sizes to up to 600 KB, which greatly reduces the size and number of sample libraries you can load at the same time.

You know ... 32-bit x86 boxes hit the wall at 1-2 GB RAM because of the limited addressing space, but many users of disk-based samplers would like to install more RAM, perhaps 8, 16 GB and more. Even though there are already relatively cheap 64-bit x86 boxes around, most notably the AMD Opteron / Athlon 64, Windows cannot use their full capabilities because the 64-bit support is not there yet. Read this and you will see that Windows XP on AMD 64-bit CPUs will take a long, long time to become a viable option (a very bumpy road, I would say):
http://www.pcworld.com/news/article/0,aid,112749,pg,6,00.asp

On the other hand Linux already runs on AMD 64-bit CPUs, and linuxsampler is 64-bit compatible too. This means we can beat the Windows samplers at their own game: linuxsampler already runs on performant 64-bit boxes and, what's more important, we can handle much bigger amounts of RAM, which is crucial for those wanting to load many samples at the same time. I guess once linuxsampler's engine is production ready and can play preexisting libraries correctly it will turn up on the Windows people's radar screens, I assume even for those using the Windows samplers professionally, because those are the people who need these monster installations with dozens, even hundreds, of GBs worth of samples. And I haven't even touched on the fact that linuxsampler could be clustered to provide sampler farms that render thousands of voices and pipe them back in real time over Ethernet, without requiring expensive audio cards, MIDI interfaces etc. for each cluster slave. Only time will tell whether we remain a niche product or get high visibility. For now let's make the basic engine work perfectly.

Returning to your mlockall problem: try lowering NUM_RAM_PRELOAD_SAMPLES, e.g. to 16384 (do a make clean, because there are some dependencies missing in the Makefile). This could help, but keep in mind that if you preload fewer sample frames from disk you risk running into the problem described above, where the disk thread is unable to fill the ring buffers in time, leading to note cut-offs.

> All in all, this looks set to be a real killer app! :)

We're all working hard to make it happen. As said, every contribution counts, not only code contributions. Ideas, testing, debugging, and advice from musicians and sampler/sample-library experts all count. 
> The things I miss are (probably mostly on the todo already)
> - gui (not strictly necessary)

I have a simple load&play GUI half finished (load samples, assign them to MIDI channels and start playing; plus the GUI lets you set basic settings like the number of voices, RAM preload sizes etc., and shows you activity, a real-time voice count and so on). As I stated before, linuxsampler is remotely controllable via TCP/IP, which means the GUI can run on a different box and even on a different platform. What I am working on now is the remote control protocol, which is a simple API that can be used by anyone, so anyone can build their favorite kind of remote control application. It can be a GUI, a text-based application, a script, hardware buttons etc.

The default GUI I will provide is written using the Qt library because of its portability. This will allow the GUI frontend to run on Windows and Mac too. This is because I'm assuming sooner or later we will see those linuxsampler clusters remote controlled by Windows PCs or Macs :-). Plus, do not forget that separating the frontend from the engine forces you to follow certain rules during coding, which leads to better code quality and helps to isolate errors and performance problems.

> - jack output

Of course. We will go as far as having linuxsampler export not only the main output (where all voices are mixed to) but also single MIDI channel mix buffers, which you can send to JACK unprocessed and then process with your own FXes, or record the tracks into e.g. ardour for further editing. But for now we only support direct ALSA out, because we must first iron out potential latency problems. (If you put JACK into the game, you cannot easily figure out whether it's linuxsampler's or JACK's fault.) Divide et impera is the keyword here :-)

> - soundfont support. For now I'm perfectly happy with gig support; I wonder how hard it would be to add soundfont support though..?

Well, SoundFont offers quite some modulation possibilities (Peter Hanappe and Josh Green know this better than anyone). I guess for now, if you want SF2 it is better to use FluidSynth. SF2 is IMHO a bit limited, and linuxsampler aims at the professional sampling domain, so our priority is to make .GIG work (at a later stage we will try to add 24-bit samples in the Halion / Kontakt formats too). Or alternatively, when our GIG engine is complete you can try to convert your SF2 files using a conversion program like cdxtract, http://www.cdxtract.com (but relatively expensive, $150). Or better, integrate the fluid engine into linuxsampler :-)

> - hmmm, I think I ran it as an ordinary user (with pretty good results), and it occurs to me that SCHED_FIFO should not be available then. LS uses SCHED_FIFO, right? Should it complain if it can't set it? (This may be a misconception of mine, though.)

In theory it should complain:

if (sched_setscheduler(0, SCHED_FIFO, &schp) != 0) {
    perror("sched_setscheduler");
    return -1;
}

Anyway, sched_setscheduler() is deprecated because for threads one should use pthread_attr_init() and friends to set priorities (a sketch of that approach follows at the end of this message). I heard rumors that sched_setscheduler() does not work correctly on recent threading libraries, but I cannot confirm it.

> ----
> And finally, a wish. I think this mailing list is too quiet; it's hard to know if there is any activity at all. May I propose that you guys start announcing CVS checkins on this list?

We will certainly announce new (relevant) CVS checkins on the list (manually). 
As for myself, I'm working on the GUI / remote control protocol, and it will take some time to make it work well, document it etc., so I guess I will not check in new stuff for 1-2 weeks. Perhaps Christian will add some stuff meanwhile, and I'm sure he will announce it on this mailing list.

BTW: for those who wanted to check out the CVS source and did not find the exact location, here is how to get the sources:

cvs -z3 -d:pserver:ano...@cv...:/home/schropp/linuxsampler co linuxsampler

cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |
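For reference, a minimal sketch of the pthread-based alternative mentioned above, i.e. requesting SCHED_FIFO through thread attributes instead of sched_setscheduler(). The priority value and the function/variable names are illustrative only and are not taken from the linuxsampler sources:

// Sketch: create a thread with SCHED_FIFO via pthread attributes.
// Priority 50 is an arbitrary example; error handling is minimal.
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static void* audio_thread(void*) {
    // real-time work would go here
    return 0;
}

int start_rt_thread(pthread_t* tid) {
    pthread_attr_t attr;
    sched_param prio;
    prio.sched_priority = 50;

    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &prio);
    // without this, the scheduling settings in the attribute are ignored
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    int err = pthread_create(tid, &attr, audio_thread, 0);
    if (err != 0)
        fprintf(stderr, "pthread_create(SCHED_FIFO) failed: %d"
                        " (insufficient privileges?)\n", err);
    pthread_attr_destroy(&attr);
    return err;
}
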
|
From: Robert J. <rob...@da...> - 2003-10-28 13:57:59
|
Tuesday 28 October 2003 14.22 skrev be...@ga...: ...

> > - it segfaults when ctrl-c:ing out of the app.
> Probably due to the incorrect stopping of threads when the CTRL-C handler is called; not a serious issue and probably very easy to fix. As said, it would be more serious if linuxsampler segfaulted during playing. :-)

yeah, it's no big deal ;)

> > - on my system I get mlockall failures all the time, regardless of the size of the gig file and such... how much memory does LS allocate? (I've got 512 MB, so it should work rather well I think. I can run a debug-compiled MusE together with JACK with no problems, and both of those do mlockall and occupy lots of memory...)
> See diskthread.h:
> #define STREAM_BUFFER_SIZE 131072
> #define MAX_INPUT_STREAMS 64
> This means we can have at most 64 disk streams (each voice, if streamed from disk, needs exactly one stream). We are talking about 16-bit samples (2 bytes), thus the streams occupy MAX_INPUT_STREAMS * STREAM_BUFFER_SIZE * 2. This is not an extremely high amount, in the above case 16 MB.

Definitely not a lot.... it may be that my machine has severely fragmented memory, I'll try rebooting it tonight..

<lots of interesting stuff shamelessly deleted>

> Even though there are already relatively cheap 64-bit x86 boxes around, most notably the AMD Opteron / Athlon 64, Windows cannot use their full capabilities because the 64-bit support is not there yet. Read this and you will see that Windows XP on AMD 64-bit CPUs will take a long, long time to become a viable option (a very bumpy road, I would say): http://www.pcworld.com/news/article/0,aid,112749,pg,6,00.asp
> On the other hand Linux already runs on AMD 64-bit CPUs, and linuxsampler is 64-bit compatible too. This means we can beat the Windows samplers at their own game: linuxsampler already runs on performant 64-bit boxes and, what's more important, we can handle much bigger amounts of RAM, which is crucial for those wanting to load many samples at the same time. I guess once linuxsampler's engine is production ready and can play preexisting libraries correctly it will turn up on the Windows people's radar screens, I assume even for those using the Windows samplers professionally, because those are the people who need these monster installations with dozens, even hundreds, of GBs worth of samples.

This could become a real killer feature! :)

> And I haven't even touched on the fact that linuxsampler could be clustered to provide sampler farms that render thousands of voices and pipe them back in real time over Ethernet, without requiring expensive audio cards, MIDI interfaces etc. for each cluster slave.

Hehe, okay, but I think you should save that for version 2 ;).

> > The things I miss are (probably mostly on the todo already)
> > - gui (not strictly necessary)
> I have a simple load&play GUI half finished (load samples, assign them to MIDI channels and start playing; plus the GUI lets you set basic settings like the number of voices, RAM preload sizes etc., and shows you activity, a real-time voice count and so on). As I stated before, linuxsampler is remotely controllable via TCP/IP, which means the GUI can run on a different box and even on a different platform. What I am working on now is the remote control protocol, which is a simple API that can be used by anyone, so anyone can build their favorite kind of remote control application. It can be a GUI, a text-based application, a script, hardware buttons etc. 
> The default GUI I will provide is written using the Qt library because of its portability. This will allow the GUI frontend to run on Windows and Mac too. This is because I'm assuming sooner or later we will see those linuxsampler clusters remote controlled by Windows PCs or Macs :-). Plus, do not forget that separating the frontend from the engine forces you to follow certain rules during coding, which leads to better code quality and helps to isolate errors and performance problems.

sounds great!

> > - jack output
> Of course. We will go as far as having linuxsampler export not only the main output (where all voices are mixed to) but also single MIDI channel mix buffers, which you can send to JACK unprocessed and then process with your own FXes, or record the tracks into e.g. ardour for further editing. But for now we only support direct ALSA out, because we must first iron out potential latency problems. (If you put JACK into the game, you cannot easily figure out whether it's linuxsampler's or JACK's fault.) Divide et impera is the keyword here :-)
> > - soundfont support. For now I'm perfectly happy with gig support; I wonder how hard it would be to add soundfont support though..?
> Well, SoundFont offers quite some modulation possibilities (Peter Hanappe and Josh Green know this better than anyone). I guess for now, if you want SF2 it is better to use FluidSynth. SF2 is IMHO a bit limited, and linuxsampler aims at the professional sampling domain, so our priority is to make .GIG work (at a later stage we will try to add 24-bit samples in the Halion / Kontakt formats too).

yeah, soundfonts may not have the same professional feel; there are lots of them though, and some of the new ones allocate quite a lot of memory.

> Or alternatively, when our GIG engine is complete you can try to convert your SF2 files using a conversion program like cdxtract, http://www.cdxtract.com (but relatively expensive, $150). Or better, integrate the fluid engine into linuxsampler :-)

yeah, I was thinking along those lines :) but I have no idea if it would be feasible. Josh, are you here somewhere? Enlighten me :)

> > - hmmm, I think I ran it as an ordinary user (with pretty good results), and it occurs to me that SCHED_FIFO should not be available then. LS uses SCHED_FIFO, right? Should it complain if it can't set it? (This may be a misconception of mine, though.)
> In theory it should complain:
> if (sched_setscheduler(0, SCHED_FIFO, &schp) != 0) {
>     perror("sched_setscheduler");
>     return -1;
> }
> Anyway, sched_setscheduler() is deprecated because for threads one should use pthread_attr_init() and friends to set priorities. I heard rumors that sched_setscheduler() does not work correctly on recent threading libraries, but I cannot confirm it.

It may be all my fault, I may just as well have run as root out of habit; I run most audio stuff as root...

> > ----
> > And finally, a wish. I think this mailing list is too quiet; it's hard to know if there is any activity at all. May I propose that you guys start announcing CVS checkins on this list?
> We will certainly announce new (relevant) CVS checkins on the list (manually). As for myself, I'm working on the GUI / remote control protocol, and it will take some time to make it work well, document it etc., so I guess I will not check in new stuff for 1-2 weeks. 
> Perhaps Christian will add some stuff meanwhile, and I'm sure he will announce it on this mailing list.

Release early, release often :) Repeat the mantra. Though I'm happy if it at least compiles ;)

/Robert |
|
From: Josh G. <jg...@us...> - 2003-10-28 20:05:32
|
On Tue, 2003-10-28 at 05:48, Robert Jonsson wrote:

> Tuesday 28 October 2003 14.22 skrev be...@ga...:
> > > - soundfont support. For now I'm perfectly happy with gig support; I wonder how hard it would be to add soundfont support though..?
> > Well, SoundFont offers quite some modulation possibilities (Peter Hanappe and Josh Green know this better than anyone). I guess for now, if you want SF2 it is better to use FluidSynth. SF2 is IMHO a bit limited, and linuxsampler aims at the professional sampling domain, so our priority is to make .GIG work (at a later stage we will try to add 24-bit samples in the Halion / Kontakt formats too).
>
> yeah, soundfonts may not have the same professional feel; there are lots of them though, and some of the new ones allocate quite a lot of memory.
>
> > Or alternatively, when our GIG engine is complete you can try to convert your SF2 files using a conversion program like cdxtract, http://www.cdxtract.com (but relatively expensive, $150). Or better, integrate the fluid engine into linuxsampler :-)
>
> yeah, I was thinking along those lines :) but I have no idea if it would be feasible. Josh, are you here somewhere? Enlighten me :)

Yep, I'm here :) When comparing GigaSampler versus SoundFont versus DLS2, I don't think there is anything in particular that would make GigaSampler more professional than these other formats, except for a few key points: streaming of samples and > 16 bit audio. There is no reason why streaming of samples couldn't be implemented with these other formats, though. In the case of DLS2, it's a really flexible format (heck, GigaSampler used it, or rather trashed it in my opinion; it would have been nice if they had some sort of signature in the header). I think DLS2/SF2 has more flexibility in some areas than GigaSampler (particularly in the area of modulation and the extent of control values; GigaSampler has rather restricted parameter ranges, at least from what I have seen). DLS2 could theoretically contain any audio format that WAV files can store, since the audio is stored as embedded WAV files. It's just that the spec says that only 8-bit or 16-bit audio is defined as being part of the standard.

Thinking of converting SoundFont or DLS2 files to GigaSampler gives me the creeps. Doing it for your own purposes might make sense, but distributing those GigaSampler files means that fewer people can use that instrument patch, since it's not an open standard. Maybe this will change in the future, but I haven't quite come to like the GigaSampler format yet. Maybe I just haven't worked with it enough. It would be nice to see LinuxSampler built in a way that makes it modular enough to use other formats easily.

I still envision my own project libInstPatch being usable for supporting other formats for GigaSampler. It supports DLS2 and SF2 at this point and GigaSampler to some extent, and has a Python binding which I just added. I've been working a lot lately though, so I haven't had a lot of time to put toward these projects. But I hope that changes in the near future. It would be nice to collaborate more on some of this stuff. The C/C++ issue is probably going to come up again, I'm sure; I'll look into how hard it would be to add a C++ wrapper to libInstPatch, if that would make it more appealing.

Nice to see some activity on this list. Cheers.
	Josh Green |
|
From: <be...@ga...> - 2003-10-29 11:19:46
|
Scrive Josh Green <jg...@us...>:

> Yep, I'm here :)

Hi Josh! Pressure is mounting on you: we will soon have a fully working, streaming-capable sampler, and I guess it will not take much time before users begin asking for an editor that can edit and create GIG files and other streamable sample libraries. Be prepared :-)

> When comparing GigaSampler versus SoundFont versus DLS2, I don't think there is anything in particular that would make GigaSampler more professional than these other formats, except for a few key points: streaming of samples and > 16 bit audio.

I would say only streaming, plus the articulation via MIDI controllers that permits expressive playing of these instruments. Perhaps DLS2 supports >16 bit, but currently the .GIG format does not, or at least GigaStudio does not support it. This is one of the reasons why both sample library producers and users are looking at the competition (Steinberg Halion and NI Kontakt): they support 24 bit. Those samplers AFAIK use a sample format that is easier to deal with (especially when editing the single waves): AFAIK it's basically an XML file that contains the program/patch data plus a bunch of .WAV files. Such a format is certainly easier to handle in Swami too, because you do not have to disassemble and reassemble those giant monolithic files.

At a later stage we will try to add support for those sample formats in LS too. While not yet fully modular, LS is modular enough to let one single engine deal with multiple formats, thanks to C++, which permits you to derive subclasses from Voice::, e.g. Voice::gig or Voice::akai. When loading foreign sample formats we do not "convert" them to a new, one-size-fits-all format, but load and play them using native loading and playback routines. This ensures faithful playback of those formats.

> There is no reason why streaming of samples couldn't be implemented with these other formats, though.

Indeed, SF2 could be streamed too, but keep in mind that some weird stuff like backwards looping makes this task very hard or unviable. The loader decides whether or not to stream a sample on a case-by-case basis (e.g. if no problematic SF2 stuff like backwards looping is used, then stream; otherwise keep it in RAM). I've heard that FluidSynth has a good SF2 engine, so if you folks want future SF2 support in LS, try to persuade Peter Hanappe to port the SF2 engine to LS (I would say it would be too early now, since the basic architecture of LS has not stabilized yet; perhaps in a few months).

> In the case of DLS2, it's a really flexible format (heck, GigaSampler used it, or rather trashed it in my opinion; it would have been nice if they had some sort of signature in the header). I think DLS2/SF2 has more flexibility in some areas than GigaSampler (particularly in the area of modulation and the extent of control values; GigaSampler has rather restricted parameter ranges, at least from what I have seen).

Yes, the Giga people did not want to complicate their lives too much, so they left out much of the stuff originally contained in DLS2. But even with these limitations, GIG libraries sound great thanks to the streaming, which allows very large sample capacity and expressive playing through MIDI controller articulation.

> DLS2 could theoretically contain any audio format that WAV files can store, since the audio is stored as embedded WAV files. It's just that the spec says that only 8-bit or 16-bit audio is defined as being part of the standard. 
Yes, but I think the future lies in formats like the ones Halion and Kontakt use: XML for patch/program data plus a bunch of WAV files. Monolithic files are a mess to deal with, while single WAV files allow for much more fine-grained editing. OK ... for relatively small files like SoundFonts a single file makes sense, but not for multi-gigabyte sample libraries. From a performance point of view, whether you stream from a single file or from a bunch of WAV files makes virtually no difference (even if you open and close the file at each triggering of the sample, it's very fast; I did some benchmarks and my conclusion is that the performance is exactly the same).

> Thinking of converting SoundFont or DLS2 files to GigaSampler gives me the creeps. Doing it for your own purposes might make sense, but distributing those GigaSampler files means that fewer people can use that instrument patch, since it's not an open standard.

Fully agreed, but being able to read GIG files is very important if you want your sampler to make use of a vast pool of professional, high-quality sample libraries. GIG is only one among many formats that LS will support. We can use our own format (probably XML + WAV files) if we want an open, extensible sample library format. But for LS, being able to read GIG files is as important as reading .DOC files is for OpenOffice. The keyword is interoperability. In that sense Christian's libgig is really well done, and once he gets his articulation stuff in place, GIG playback will be excellent.

> Maybe this will change in the future, but I haven't quite come to like the GigaSampler format yet. Maybe I just haven't worked with it enough. It would be nice to see LinuxSampler built in a way that makes it modular enough to use other formats easily.

It already is. OK, it is still monolithic (the recompiler stuff will come later; for now let's focus on good playback of existing sample formats), but we will have no problem accommodating new loaders and engines, because the Voice:: classes can be subclassed to implement the characteristics of the engine associated with a certain sample format.

> I still envision my own project libInstPatch being usable for supporting other formats for GigaSampler. It supports DLS2 and SF2 at this point and GigaSampler to some extent, and has a Python binding which I just added. I've been working a lot lately though, so I haven't had a lot of time to put toward these projects. But I hope that changes in the near future. It would be nice to collaborate more on some of this stuff. The C/C++ issue is probably going to come up again, I'm sure; I'll look into how hard it would be to add a C++ wrapper to libInstPatch, if that would make it more appealing.

Well, certainly you and Christian should collaborate more in order to avoid reinventing the wheel. Christian's libgig is made for loading GIG files and exports methods (functions) for reading patch/program data, articulation layers etc., plus it has functions to cache and read data from disk into memory buffers. The question is: if Christian has already done this excellent lib, wouldn't it be beneficial to use that lib in Swami too? Yes, I know there are the C/C++ issues, but I'm more and more convinced that C++ is the ideal language for handling sample-related data, because samples are made up of many objects and sub-objects. Josh, you could probably argue that LS should use libInstPatch. Well, I haven't looked at your stuff, so I cannot judge what can be done, what would be beneficial etc. 
But what's for sure is that libgig is 100% optimized for playback: it uses no mutexes, all routines that are called from within the audio thread are carefully optimized and have guaranteed execution times, and since the lib ships directly with LS it will ease the dependency problem too (at least during the beginning phase; perhaps later it will be better to make separate libs for each sample format). I'm not the author of libgig, so I cannot say which kind of collaboration would be most beneficial between Swami and LS. But even if you, Josh, make a totally separate GIG editor in Swami, it would still be very useful, since you can edit GIGs and then load them into LS for playing. Since only the sample heads are loaded, loading is pretty fast even for very large sample libs.

> Nice to see some activity on this list. Cheers.

I'm looking forward to the day when Swami goes hand in hand with LS and can edit many sample formats that can then be played using LS.

cheers, Benno http://www.linuxsampler.org ------------------------------------------------- This mail sent through http://www.gardena.net |
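As a rough illustration of the Voice:: subclassing described above, one engine interface with one subclass per sample format might look like the sketch below. The class and method names are invented for the example and do not correspond to the actual linuxsampler sources:

// Hypothetical sketch: a common voice interface, specialized per format.
class Voice {
public:
    virtual ~Voice() {}
    // render one audio fragment into the engine's mix buffer
    virtual void Render(float* out, int frames) = 0;
    // apply a MIDI controller according to the format's articulation rules
    virtual void Controller(unsigned char cc, unsigned char value) = 0;
};

class GigVoice : public Voice {      // backed by libgig data
public:
    virtual void Render(float* out, int frames) { /* stream, interpolate, envelope */ }
    virtual void Controller(unsigned char cc, unsigned char value) { /* gig articulation */ }
};

class AkaiVoice : public Voice {     // another format, same engine interface
public:
    virtual void Render(float* out, int frames) { /* ... */ }
    virtual void Controller(unsigned char cc, unsigned char value) { /* ... */ }
};
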
|
From: Steve H. <S.W...@ec...> - 2003-10-29 11:37:41
|
On Wed, Oct 29, 2003 at 12:19:49 +0100, be...@ga... wrote:

> Ok, it is still monolithic (the recompiler stuff will come later; for now let's focus on good playback of existing sample formats), but we will have no problem accommodating new loaders and engines, because the Voice:: classes can be subclassed to implement the characteristics of the engine associated with a certain sample format.

I'm starting to doubt that the recompiling stuff is necessary; the Gig/DLS2/SF2 formats seem to be quite expressive yet efficiently implementable, not even counting the XML+WAV formats used by newer things. I don't think there would be much or any performance gain in using dynamic compilation, and how many people would want to design their own instruments when they could use standard formats and have greater compatibility?

It's still a nice idea though, but maybe more suited to a softsynth.

- Steve |
|
From: Mark K. <mar...@co...> - 2003-10-29 12:50:36
|
On Wed, 2003-10-29 at 03:37, Steve Harris wrote:

> On Wed, Oct 29, 2003 at 12:19:49 +0100, be...@ga... wrote:
> > Ok, it is still monolithic (the recompiler stuff will come later; for now let's focus on good playback of existing sample formats), but we will have no problem accommodating new loaders and engines, because the Voice:: classes can be subclassed to implement the characteristics of the engine associated with a certain sample format.
>
> I'm starting to doubt that the recompiling stuff is necessary; the Gig/DLS2/SF2 formats seem to be quite expressive yet efficiently implementable, not even counting the XML+WAV formats used by newer things. I don't think there would be much or any performance gain in using dynamic compilation, and how many people would want to design their own instruments when they could use standard formats and have greater compatibility?
>
> It's still a nice idea though, but maybe more suited to a softsynth.
>
> - Steve

Hi,
In a private email to Benno earlier, I had asked the question "What sort of sampler do you want LS to be?" (I have no opinion formed today.)

I see GSt and Kontakt as the two leaders today, and they really are different sorts of animals. GSt is pretty much a playback engine. You do all the editing in GigaEdit to set up samples, filters, envelopes, etc., to do what you want, but primarily the wave files just get played in digital bit order. Nothing much happens in GSt. The *current* version of GSt doesn't do a lot to modify sounds.

Kontakt seems more oriented toward allowing you to mangle the sounds in strange and interesting ways. It will chop samples, play parts out of order, play them in reverse, etc., so you can get more wild stuff.

What does this team want LS to be? I expect that LS will *need* to have some wild capabilities or users of GSt, Kontakt and others will find it sort of boring. GSt 3.0 will be out some day. It will likely go beyond where Kontakt is today. By that measure, LS would seem boring to many without some interesting ways to twist the sounds around.

Personally, I agree with you. I'll probably just use the standard model where LS looks like GSt most of the time. However, like the difference between Reaktor and Reaktor Session, should some interesting person take LS and mangle it into an interesting new configuration, I'll probably use the compiled version of that also. I somehow doubt I will ever wire together pieces and compile them myself, but it would be fun to see what others dream up.

Mark |
|
From: <be...@ga...> - 2003-10-29 14:06:32
|
Scrive Mark Knecht <mar...@co...>:

> Hi, In a private email to Benno earlier, I had asked the question "What sort of sampler do you want LS to be?" (I have no opinion formed today.)
>
> I see GSt and Kontakt as the two leaders today, and they really are different sorts of animals. GSt is pretty much a playback engine. You do all the editing in GigaEdit to set up samples, filters, envelopes, etc., to do what you want, but primarily the wave files just get played in digital bit order. Nothing much happens in GSt. The *current* version of GSt doesn't do a lot to modify sounds.

This is true about GSt, but it's one of its major strengths: most natural instruments do not need any processing at all. Pianos, guitars, orchestral stuff, it is all played as recorded, and as you see it sounds so great that film composers can use it in film scores, often completely replacing real orchestras without the listeners noticing that the piece is played by a sampler and not by humans. OK, nothing is perfect, the pros will always notice a difference, but since in many cases the monetary budget is limited, one has to make some trade-offs, and streaming samplers seem to do quite well in certain areas.

> Kontakt seems more oriented toward allowing you to mangle the sounds in strange and interesting ways. It will chop samples, play parts out of order, play them in reverse, etc., so you can get more wild stuff.

Yes, this is certainly a strength of Kontakt, and we would like LS to be able to mangle the samples too. But for now let's focus on standard high-performance streaming playback.

> What does this team want LS to be? I expect that LS will *need* to have some wild capabilities or users of GSt, Kontakt and others will find it sort of boring.

It could be that it will be boring, but I guess even if it provides only decent GIG playback it will be appealing to some users, because it is stable, can be tweaked, and can be made to run on 64-bit CPUs, allowing dozens of gigabytes of samples to be precached in RAM (possibly saving quite a few bucks over those multi-machine installations). Streaming over Ethernet can be added too, so LS clusters would provide unlimited scalability using off-the-shelf hardware, without the need to equip each machine with expensive 24-bit digital audio cards, MIDI interfaces and the like. I think the community feedback loop will help us to identify quickly what 90% of users want. If we can achieve that, the ball will start rolling and will become unstoppable :-) (We will put a discussion forum on the LS site to ease communication with non-technical users.)

> GSt 3.0 will be out some day. It will likely go beyond where Kontakt is today.

I'll believe that when I see it. :-) I think GSt's biggest strength (low-latency streaming thanks to kernel-level programming) is one of its biggest disadvantages too. They have big problems when interoperating with other applications, because, frankly, putting a sampler in a kernel module is a bad hack, though on Windows it is probably the only way to get the maximum out of the iron (the hardware, disks, RAM, latencies etc.). People want VST integration, and I find it hard to see GSt achieving perfect VST integration without giving up its performance lead. If GSt 3.0's performance level drops to Kontakt's, then users will probably have no reason to choose GSt over Kontakt, even if it provides some advanced features. 
NI has built up nice know-how in the DSP area, and I think it is hard for competitors to bring out softsamplers that offer considerably more DSP features without investing quite some resources in R&D.

> By that measure, LS would seem boring to many without some interesting ways to twist the sounds around.

It could be, but for orchestral and natural instrument stuff it would already be very appealing because of its ability to adapt to hardware, CPUs, platforms, etc.

> Personally, I agree with you. I'll probably just use the standard model where LS looks like GSt most of the time. However, like the difference between Reaktor and Reaktor Session, should some interesting person take LS and mangle it into an interesting new configuration, I'll probably use the compiled version of that also.

You already know that I fully agree on this. I originally wanted to go the "Reaktor" -> "Reaktor Session" route, but it turned out to be too big a task for now. People want to make music with softsamplers on Linux today, not in several months or years, and probably 90% of users will be quite happy with the features that the first production release of LS will have. This is why we reversed course and will go the "Reaktor Session" -> "Reaktor" way. :-)

> I somehow doubt I will ever wire together pieces and compile them myself, but it would be fun to see what others dream up.

Yes, this is true; never underestimate the creativity of people. Sometimes people use a tool and do things that go beyond the wildest dreams of the tool's original creators. Let's work hard building such a tool and sit back watching what those crazy users will do with it :-)

Benno http://www.linuxsampler.org ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: <be...@ga...> - 2003-10-29 13:04:04
|
Scrive Steve Harris <S.W...@ec...>:

> On Wed, Oct 29, 2003 at 12:19:49 +0100, be...@ga... wrote:
> > Ok, it is still monolithic (the recompiler stuff will come later; for now let's focus on good playback of existing sample formats), but we will have no problem accommodating new loaders and engines, because the Voice:: classes can be subclassed to implement the characteristics of the engine associated with a certain sample format.
>
> I'm starting to doubt that the recompiling stuff is necessary; the Gig/DLS2/SF2 formats seem to be quite expressive yet efficiently implementable, not even counting the XML+WAV formats used by newer things. I don't think there would be much or any performance gain in using dynamic compilation, and how many people would want to design their own instruments when they could use standard formats and have greater compatibility?

Yes, you are right, and that's why Christian and others convinced me to leave the recompiler stuff out for now. But since, to quote Linus, our goal is total world domination :-) , we should never stop innovating or thinking about new ways to generate and manipulate audio. Let's produce a rock-solid standalone version of LS and wait for the users' reactions. As we know from other open source projects, the feedback from the community will help us improve LS into a truly professional product that can play in the same league as the proprietary samplers.

> Its still a nice idea though, but maybe more suited to a softsynth.

Perhaps in the future LS will evolve into something where the S stands for both Sampler and Synth? Who knows. Let's first work on building strong fundamentals.

PS: let us know when you make progress with the filters.

cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: Steve H. <S.W...@ec...> - 2003-10-29 13:56:58
|
On Wed, Oct 29, 2003 at 02:04:09 +0100, be...@ga... wrote:

> leave the recompiler stuff out for now. But since, to quote Linus, our goal is total world domination :-) , we should never stop innovating or thinking about new ways to generate and manipulate audio.

Sure.

> Let's produce a rock-solid standalone version of LS and wait for the users' reactions. As we know from other open source projects, the feedback from the community will help us improve LS into a truly professional product that can play in the same league as the proprietary samplers.

Absolutely.

> PS: let us know when you make progress with the filters.

I'm going to try to have a crack at it this evening (GMT), but my girlfriend is ill and I might have to go home early.

- Steve |
|
From: Mark K. <mar...@co...> - 2003-10-29 12:56:37
|
On Wed, 2003-10-29 at 03:19, be...@ga... wrote:

> Yes, but I think the future lies in formats like the ones Halion and Kontakt use: XML for patch/program data plus a bunch of WAV files. Monolithic files are a mess to deal with, while single WAV files allow for much more fine-grained editing. OK ... for relatively small files like SoundFonts a single file makes sense, but not for multi-gigabyte sample libraries.
> From a performance point of view, whether you stream from a single file or from a bunch of WAV files makes virtually no difference (even if you open and close the file at each triggering of the sample, it's very fast; I did some benchmarks and my conclusion is that the performance is exactly the same).

I know nothing, well, less than nothing if possible, about disk fragmentation on Linux, but wouldn't a collection of files be less likely to be guaranteed to be collected together on the drive in a group?

If this is true, then wouldn't individual wave files be more likely to have playback problems due to disk seek latencies for a single library?

I completely get that if the drive is slow, *some* library will be placed in the slow position, and that library would have problems, but for debugging hardware problems and making your system work well, after adding and deleting libraries for a year, wouldn't it be better to somehow guarantee that all samples are collected together in a contiguous disk area?

- Mark |
|
From: <be...@ga...> - 2003-10-29 13:24:39
|
Scrive Mark Knecht <mar...@co...>:

> > From a performance point of view, whether you stream from a single file or from a bunch of WAV files makes virtually no difference (even if you open and close the file at each triggering of the sample, it's very fast; I did some benchmarks and my conclusion is that the performance is exactly the same).
>
> I know nothing, well, less than nothing if possible, about disk fragmentation on Linux, but wouldn't a collection of files be less likely to be guaranteed to be collected together on the drive in a group?
>
> If this is true, then wouldn't individual wave files be more likely to have playback problems due to disk seek latencies for a single library?
>
> I completely get that if the drive is slow, *some* library will be placed in the slow position, and that library would have problems, but for debugging hardware problems and making your system work well, after adding and deleting libraries for a year, wouldn't it be better to somehow guarantee that all samples are collected together in a contiguous disk area?

Well, the fact is that most of the WAV files will reside in contiguous blocks. Of course the single WAV files could lie far apart, but if you take a 2 GB file, the first block and the last block are quite distant from each other even if the whole file resides in contiguous blocks. And what happens if you trigger two notes that cause the reading of one sample in the first part of the 2 GB file and one in the last part? The disk will have to seek back and forth like mad. I see no reason to organize files in a certain layout, because they are usually so large that the contiguous-blocks argument does not hold water. OK, of course, if each 4 KB block is spread randomly over the disk then performance will suck. But Linux filesystems usually try to avoid this by putting large parts of a file in contiguous blocks. This ensures that read performance is still good even on a full disk, because the number of disk seeks per unit of time is low compared to the amount of data read. I would not worry about this in any way. Of course there are some idiotic cases where fragmentation can slow down your disk performance, see here:
http://lists.debian.org/debian-user/2003/debian-user-200302/msg02363.html

I don't know if a "defrag-like" utility for Linux exists yet, but as stated in the link above, if you really have concerns you can back up all your data to another medium (HD, CD-ROM, tape etc.), reformat your data disk and copy the data back again. Not sure it is worth the trouble.

Anyway, think about how many variables come into play when you let LS play a complex MIDI file that streams many sample libs from disk. The exact disk seek sequence cannot be figured out in advance, because it depends on the task scheduling, the sample pitches (higher-pitched samples must be streamed faster and thus need faster buffer refills), background tasks, the characteristics and geometry of the disk, etc. But as long as the filesystem keeps large files in chunks of contiguous blocks of 128 KB and more, LS performance will be perfectly fine and, I guess, virtually equal to that of a totally unfragmented disk.

cheers, Benno http://www.linuxsampler.org ------------------------------------------------- This mail sent through http://www.gardena.net |
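A quick way to reproduce the kind of open/read/close measurement referred to above. This is only a sketch: the file path, iteration count and the 128 KB chunk size are placeholders, not values taken from linuxsampler; comparing the result against a variant that keeps the file descriptor open shows how small the per-trigger open/close overhead is.

// Time N cycles of open + read(128 KB) + close on a large file.
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <cstdio>

int main(int argc, char** argv) {
    const char* path = (argc > 1) ? argv[1] : "big_sample.wav";  // placeholder
    const int N = 200;
    static char buf[131072];                  // 128 KB, as in STREAM_BUFFER_SIZE

    timeval t0, t1;
    gettimeofday(&t0, 0);
    for (int i = 0; i < N; i++) {
        int fd = open(path, O_RDONLY);        // reopen on every "trigger"
        if (fd < 0) { perror("open"); return 1; }
        lseek(fd, (off_t)i * sizeof(buf), SEEK_SET);
        if (read(fd, buf, sizeof(buf)) < 0) perror("read");
        close(fd);
    }
    gettimeofday(&t1, 0);

    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 + (t1.tv_usec - t0.tv_usec) / 1000.0;
    printf("%d open+read+close cycles: %.1f ms total, %.3f ms each\n", N, ms, ms / N);
    return 0;
}
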
|
From: Josh G. <jg...@us...> - 2003-10-31 10:23:41
|
On Wed, 2003-10-29 at 03:19, be...@ga... wrote:

> Hi Josh! Pressure is mounting on you: we will soon have a fully working, streaming-capable sampler, and I guess it will not take much time before users begin asking for an editor that can edit and create GIG files and other streamable sample libraries. Be prepared :-)

Yeah, I'm hoping for the best, but things always take longer than I plan, and I've only got ten fingers to push those buttons and my mind is kind of slow :) One of these days I'll unveil Swami 1.0; that will be the day, since it consists of over a year's worth of un-praised work. It would be nice to feel like I'm back in the Linux Audio loop again.

> > When comparing GigaSampler versus SoundFont versus DLS2, I don't think there is anything in particular that would make GigaSampler more professional than these other formats, except for a few key points: streaming of samples and > 16 bit audio.
>
> I would say only streaming, plus the articulation via MIDI controllers that permits expressive playing of these instruments.

I often wonder if DLS2/SF2 is more flexible in that area, since both of them use the idea of connection blocks, which can have two sources with a polarity and mapping function; these inputs can be MIDI controllers or even things like envelopes in the case of DLS2. There is a bit of a discrepancy between what's "supported" by the DLS2 spec and what's allowed to be put in a DLS2 file, though.

> Perhaps DLS2 supports >16 bit, but currently the .GIG format does not, or at least GigaStudio does not support it.

The file format itself is capable, but it is not part of the spec.

> This is one of the reasons why both sample library producers and users are looking at the competition (Steinberg Halion and NI Kontakt): they support 24 bit. Those samplers AFAIK use a sample format that is easier to deal with (especially when editing the single waves): AFAIK it's basically an XML file that contains the program/patch data plus a bunch of .WAV files. Such a format is certainly easier to handle in Swami too, because you do not have to disassemble and reassemble those giant monolithic files.

Those formats sound really cool, I'd like to look into them. Using an XML-like format would make them much easier to decipher too. That does sound like the way of future patches.

> At a later stage we will try to add support for those sample formats in LS too. While not yet fully modular, LS is modular enough to let one single engine deal with multiple formats, thanks to C++, which permits you to derive subclasses from Voice::, e.g. Voice::gig or Voice::akai.

Right, OO programming :) C does that as well (with the help of GObject), but it's not nearly as clean as C++. Someday I may regret not just using C++, but I still like the thought of using lowest-common-denominator code, and the possibility of becoming part of the GStreamer project at some point (it's based on the same archaic technology :)

> > There is no reason why streaming of samples couldn't be implemented with these other formats, though.
>
> Indeed, SF2 could be streamed too, but keep in mind that some weird stuff like backwards looping makes this task very hard or unviable.

No backwards looping in SF2. Only loop and "loop till release, then play the rest of the sample". 
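For readers unfamiliar with the connection blocks mentioned above, a DLS2 articulation connection is roughly the following record. The field names follow the DLS Level 2 articulation chunk, but this is a simplified sketch from memory, not a verbatim reproduction of the spec:

// Simplified sketch of a DLS2 articulation connection block: two inputs
// (source and control), a transform, and a scaled destination parameter.
#include <stdint.h>

struct Dls2Connection {
    uint16_t usSource;       // e.g. a MIDI CC, LFO, EG2, key number, velocity ...
    uint16_t usControl;      // secondary input that modulates the source
    uint16_t usDestination;  // e.g. pitch, gain, filter cutoff, pan ...
    uint16_t usTransform;    // mapping: linear/concave/convex/switch, plus
                             // invert and bipolar flags for each input
    int32_t  lScale;         // amount, in 16.16 fixed-point destination units
};
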
> The loader decides whether or not to stream a sample on a case-by-case basis (e.g. if no problematic SF2 stuff like backwards looping is used, then stream; otherwise keep it in RAM).
> I've heard that FluidSynth has a good SF2 engine, so if you folks want future SF2 support in LS, try to persuade Peter Hanappe to port the SF2 engine to LS (I would say it would be too early now, since the basic architecture of LS has not stabilized yet; perhaps in a few months).

It is a nice SoundFont engine, and I use it regularly for my small amount of music experiments. It's still lacking a few polishing touches, and unfortunately all of the developers are rather busy right now.

> > Thinking of converting SoundFont or DLS2 files to GigaSampler gives me the creeps. Doing it for your own purposes might make sense, but distributing those GigaSampler files means that fewer people can use that instrument patch, since it's not an open standard.
> Fully agreed, but being able to read GIG files is very important if you want your sampler to make use of a vast pool of professional, high-quality sample libraries.

Yeah, I wasn't saying you shouldn't support them :)

> Well, certainly you and Christian should collaborate more in order to avoid reinventing the wheel. Christian's libgig is made for loading GIG files and exports methods (functions) for reading patch/program data, articulation layers etc., plus it has functions to cache and read data from disk into memory buffers. The question is: if Christian has already done this excellent lib, wouldn't it be beneficial to use that lib in Swami too? Yes, I know there are the C/C++ issues, but I'm more and more convinced that C++ is the ideal language for handling sample-related data, because samples are made up of many objects and sub-objects.

The more I think about it, the more I realize I would just like an interface to LinuxSampler. I don't really want to try and force anything on anyone else, and I have gotten very little feedback on my stuff as it is. I would much rather just write my own voice handling for LinuxSampler (maybe something along the lines of the API that FluidSynth provides, creating voice instances, etc.). I've already got GigaSampler support, but I just need to get a better understanding of the math behind the parameters, for doing conversions and the like. I'm sure much of that could be had from the work Christian and others are doing.

> Josh, you could probably argue that LS should use libInstPatch. Well, I haven't looked at your stuff, so I cannot judge what can be done, what would be beneficial etc.

Nahh.. I don't think it should rely on libInstPatch. You guys seem pretty hard-core C++, and while I may end up putting together a C++ binding for libInstPatch, I don't want to pretend I have the time for such things at this point.

> But what's for sure is that libgig is 100% optimized for playback: it uses no mutexes, all routines that are called from within the audio thread are carefully optimized and have guaranteed execution times, and since the lib ships directly with LS it will ease the dependency problem too (at least during the beginning phase; perhaps later it will be better to make separate libs for each sample format).

The way I see it, there are trade-offs. You either have fast lock-free access to patch information or full multi-thread-enabled editing of patch objects. I kind of feel both are necessary for a professional soft synth editor environment. 
libInstPatch is becoming analogous to a multi-thread capable instrument patch database. I see the synth process as having its own set of parameters that are synchronized to the patch database on noteon or patch selection. Real time control is then routed directly to the synth, with perhaps a cue for updating the patch database (when doing real time editing). When thinking about an interface to LinuxSampler that would provide a way to use arbitrary formats, I see an API for instantiating voices. Most patch formats I have seen can be collapsed into a simple list of voices with synthesis parameters. I'm currently working on creating a voice cache system, that takes patch formats in Swami and converts them to a list of voices with SoundFont oriented parameters (for the sake of using FluidSynth to play other formats). This is actually already in place but needs to be overhauled for greater performance. The caching would occur at program selection or when a program is edited. A similar system could probably be extended to LinuxSampler, but perhaps with more generic parameters and not so patch specific as SoundFont units (although they are fairly generic). > I'm noth the author of libgig so I cannot say anything about > which kind of collaboration could be beneficial between swami and LS. > But even if you Josh make a totally separate GIG editor in swami it would > still be very useful since you can edit GIGs and then load them into LS > for playing. Since only the sample heads are loaded, the loading is > pretty fast even for very large sample libs. > I agree, there are lots of ways for LinuxSampler and Swami to interface. > > > > Nice to see some activity on this list. Cheers. > > I'm looking forward for the day that swami will go hand in hand with > LS and will be able to edit many sample formats that can be played using LS. > That goes for me too :) > cheers, > Benno > http://www.linuxsampler.org > Sorry for the big email! This list has really been getting a work out lately. Josh |
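A hypothetical sketch of the voice-cache idea described above, i.e. a patch collapsed at program-selection time into a flat list of voices with generic synthesis parameters. All names are invented for illustration; this is not Swami's or FluidSynth's actual API:

// Hypothetical voice-cache entry and lookup, built when a program is
// selected (or edited) and then read by the synthesis thread on note-on.
#include <string>
#include <vector>

struct CachedVoice {
    int         key_lo,  key_hi;     // MIDI note range this voice responds to
    int         vel_lo,  vel_hi;     // velocity range
    std::string sample;              // sample identifier (file or RAM handle)
    double      root_pitch_cents;    // tuning of the sample's root key
    double      attenuation_db;      // initial gain
    double      filter_cutoff_hz;    // static filter setting
    double      attack_s, release_s; // simplified volume envelope
};

struct VoiceCache {
    std::vector<CachedVoice> voices;

    // collect every cached voice whose ranges match the incoming note
    std::vector<const CachedVoice*> lookup(int note, int velocity) const {
        std::vector<const CachedVoice*> hits;
        for (size_t i = 0; i < voices.size(); i++) {
            const CachedVoice& v = voices[i];
            if (note >= v.key_lo && note <= v.key_hi &&
                velocity >= v.vel_lo && velocity <= v.vel_hi)
                hits.push_back(&v);
        }
        return hits;
    }
};
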
|
From: Steve H. <S.W...@ec...> - 2003-10-31 10:44:22
|
On Fri, Oct 31, 2003 at 02:21:08 -0800, Josh Green wrote: > On Wed, 2003-10-29 at 03:19, be...@ga... wrote: > Right, OO programming :) C does that as well (with the help of GObject), > but its not nearly as clean as C++. Someday I may regret not just using ... Pfft! GObject is a little messy around the edges, but /nothing/ could compare to the nightmare mess that is C++. <dons flame proof suit> - Steve |
|
From: Josh G. <jg...@us...> - 2003-11-02 19:32:18
|
On Fri, 2003-10-31 at 02:44, Steve Harris wrote:

> On Fri, Oct 31, 2003 at 02:21:08 -0800, Josh Green wrote:
> > On Wed, 2003-10-29 at 03:19, be...@ga... wrote:
> > Right, OO programming :) C does that as well (with the help of GObject), but it's not nearly as clean as C++. Someday I may regret not just using ...
>
> Pfft! GObject is a little messy around the edges, but /nothing/ could compare to the nightmare mess that is C++.
>
> <dons flame proof suit>
>
> - Steve

Heh, that's interesting to hear, actually. I haven't used C++ enough to know its pitfalls, but I have used GObject enough to know some of the rough edges :) Maybe one of these days I'll actually look into learning some of the ++ stuff, or perhaps I'll just use Python instead. Josh |
|
From: Steve H. <S.W...@ec...> - 2003-11-03 09:43:37
|
On Sun, Nov 02, 2003 at 11:29:33 -0800, Josh Green wrote:

> > Pfft! GObject is a little messy around the edges, but /nothing/ could compare to the nightmare mess that is C++.
> >
> > <dons flame proof suit>
> >
> > - Steve
>
> Heh, that's interesting to hear, actually. I haven't used C++ enough to know its pitfalls, but I have used GObject enough to know some of the rough edges :) Maybe one of these days I'll actually look into learning some of the ++ stuff, or perhaps I'll just use Python instead.

Objective-C is a fine language, and well supported by gcc now (OS X uses it extensively), but less efficient than GObject or C++. ObjC has OO messages à la Smalltalk, rather than just 'magic function' methods.

- Steve |