|
From: Christian S. <chr...@ep...> - 2003-09-06 20:02:11
|
Hi Guys! I just finished a preliminary version of libgig: http://stud.fh-heilbronn.de/~cschoene/projects/libgig/

The library consists of three parts:

- RIFF classes to parse and access arbitrary RIFF files
- DLS classes using the RIFF classes to provide abstract access to DLS Level 1 and 2 files
- gig classes based on the DLS classes to access Gigasampler .gig files

It already comes with a couple of console tools / demo apps (e.g. to extract samples from a .gig file, or to print out the RIFF tree of an arbitrary RIFF file). I still have to decode some Gigasampler chunks and clean up everything, but the basics are finished and working. So far I couldn't confirm everything that Ruben has already decoded, so I had to recheck all Gigasampler chunks, which is really time consuming. I will probably create a UML diagram next week and add support for compressed files.

CU
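For readers unfamiliar with the container format all three layers build on: a RIFF file is a tree of tagged chunks, each with a 4-character ID and a 32-bit little-endian size. A minimal sketch of the header layout such a parser has to handle (the helper names are invented for illustration, this is not libgig's actual API):

```cpp
#include <cstdint>
#include <string>

// Every RIFF chunk is: a 4-character ASCII ID, a 32-bit little-endian
// payload size, then `size` bytes of data, padded to an even boundary.
struct ChunkHeader {
    std::string id;    // e.g. "RIFF", "LIST", "wsmp"
    uint32_t    size;  // payload size in bytes (excludes the 8-byte header)
};

// Parse one chunk header from a raw byte buffer (hypothetical helper).
ChunkHeader ParseChunkHeader(const uint8_t* buf) {
    ChunkHeader h;
    h.id.assign(reinterpret_cast<const char*>(buf), 4);
    // RIFF sizes are little-endian regardless of host byte order.
    h.size = static_cast<uint32_t>(buf[4])
           | (static_cast<uint32_t>(buf[5]) << 8)
           | (static_cast<uint32_t>(buf[6]) << 16)
           | (static_cast<uint32_t>(buf[7]) << 24);
    return h;
}

// Chunks are word-aligned: an odd payload size is followed by one pad byte.
uint32_t PaddedSize(uint32_t size) { return size + (size & 1); }
```

Recursing into "RIFF" and "LIST" chunks, whose payloads begin with another 4-character type ID followed by subchunks, yields the RIFF tree the demo tool prints.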
|
From: Christian S. <chr...@ep...> - 2003-09-12 22:50:49
|
Hi everybody! I now uploaded a UML diagram for the Gigasampler library. For detailed descriptions of the class attributes, have a look at the header files (gig.h, dls.h) or the API documentation.

http://stud.fh-heilbronn.de/~cschoene/projects/libgig/

Josh, I used Ruben's header files as a notepad as I decoded new Gigasampler-specific fields and corrected old ones. Let me know if you're interested in them!

CU
Christian
|
From: Josh G. <jg...@us...> - 2003-09-12 23:49:22
|
On Fri, 2003-09-12 at 15:49, Christian Schoenebeck wrote:
> I now uploaded a UML diagram for the Gigasampler library. For detailed
> descriptions about the class attributes have a look at the header files
> (gig.h, dls.h) or the API documentation.
>
> http://stud.fh-heilbronn.de/~cschoene/projects/libgig/
>
> Josh, I used Ruben's header files as a notepad as I decoded new Gigasampler
> specific fields and corrected old ones. Let me know if you're interested in
> them!

Yeah, I would be interested in them actually. I just recently split libInstPatch off from the Swami branch and I'm looking to write some command line conversion utilities and the like. The DLS/Gig writer is coming along, although I imagine there are going to be a lot of things to test.

I haven't been doing a whole lot of development recently, since I'm now working at a "real" job, and just felt like giving things a rest for a while. It's been a lot of work, with very little reward, recently. Too much API architecture work, not enough flashy things to see or hear :)

Cheers.
Josh Green
|
From: <be...@ga...> - 2003-09-13 09:06:52
|
Christian Schoenebeck <chr...@ep...> writes:
> Hi everybody!
>
> I now uploaded a UML diagram for the Gigasampler library. For detailed
> descriptions about the class attributes have a look at the header files
> (gig.h, dls.h) or the API documentation.
>
> http://stud.fh-heilbronn.de/~cschoene/projects/libgig/
>
> Josh, I used Ruben's header files as a notepad as I decoded new Gigasampler
> specific fields and corrected old ones. Let me know if you're interested in
> them!
>
> CU
> Christian
> _______________________________________________
> Linuxsampler-devel mailing list
> Lin...@li...
> https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel

This mail sent through http://www.gardena.net
|
From: <be...@ga...> - 2003-09-13 09:20:14
|
Hi Christian, I did look at the UML, really nice work. BTW: does your libgig recognize the release-triggered samples (the ones played at note-offs, e.g. the piano hammers that get back up)? We want to hear those multi-GByte pianos on Linux soon!

For those that do not know it yet: since there is big pressure from many users for a decent GIG playback engine for Linux, we are now taking a bit of a different route for LinuxSampler. At first we wanted to use the synth compiler to let the user design arbitrary engines, including a GIG engine. But for now it simply takes too much time, and since I've bet a beer with Steve H. that at ZKM 2004 we will show a working GIG player ;-) me, Christian and others decided it is faster (for now; no worry, the synth compiler will be done too) to resurrect the old evo (LinuxSampler proof-of-concept code), because it has all the infrastructure in place needed for streaming large samples from disk. The only missing things were a good DLS/GIG importing library (almost finished thanks to our hard-working Christian) and some enveloping stuff, looping and filtering and tuning of the parameters, which is not that hard.

As for the GUI, evo is currently GUI-less, but we will probably add a minimalistic load&play GUI (no sample editing), which permits you to load samples, assign them to various MIDI channels, and perhaps store performance data (you load a set of samples, assign them to MIDI channels, set volumes and some other basic parameters, and save the setup, so that next time you can load the whole performance with a single command).

Another (I hope not too big) problem is the tweaking of the envelopes, filters etc. so that the samples sound as they are supposed to sound. In this case I hope we get some feedback from the users, which will help to speed up the tuning process.

cheers,
Benno
|
From: Josh G. <jg...@us...> - 2003-09-13 19:50:30
|
On Sat, 2003-09-13 at 02:20, be...@ga... wrote:
> As for the GUI, evo is currently GUI-less but we will probably
> add a minimalistic load&play GUI (no sample editing), which
> permits you to load samples, assign them to various MIDI channels,
> perhaps storing performance data (you load a set of samples,
> assign them to midi channels, set volumes and some other basic
> parameters and save the setup, so that next time you can load
> the whole performance with a single command).

Any interest in making an API that would allow this to be used with other GUIs, such as Swami? Still trying to see if these projects can work together on something. It would be cool to add LinuxSampler as a wavetable soft synth to Swami.

I hope I can make it to the LAD ZKM 2004 :)

Cheers.
Josh Green
|
From: Christian S. <chr...@ep...> - 2003-09-13 18:00:19
|
On Saturday, 13 September 2003 at 11:20, be...@ga... wrote:
> BTW: does your libgig recognize the release triggered samples
> (the ones played at note-offs, e.g. the piano hammers that
> get back up).

Yes, that's the release trigger dimension.

The library now provides all articulation information the Gigasampler format has, I think. The only things I left out are some chunks with names, descriptions and stuff like that, which would only be interesting (if at all) once we have a GUI or something. And there is a chunk (<3ewg>) which Ruben assumed to be global articulation information for an instrument, but it's only a 12-byte chunk and, as far as I can see at the moment, I don't expect anything interesting in there.

CU
Christian
|
From: Josh G. <jg...@us...> - 2003-09-13 19:46:50
|
On Sat, 2003-09-13 at 10:58, Christian Schoenebeck wrote:
> There is a chunk (<3ewg>) which Ruben assumed to be global articulation
> information for an instrument, but it's only a 12-byte chunk and, as far
> as I can see at the moment, I don't expect anything interesting in there.

Isn't the <3ewg> chunk like so? (From Ruben's findings.)

guint32 attenuate;
guint16 effect_send;
guint16 fine_tune;
guint16 pitch_bend_range;
guint8  dim_key_start;
guint8  dim_key_end;

(Note that these six fields add up to exactly 12 bytes.) I don't understand yet what many of the fields do. But I can guess the attenuate field is probably global instrument attenuation; pitch bend range looks valid (it's 0x0002 in most cases, which would seem to make sense: 2 semitones). Fine tune I can also guess is a global fine tune.

Cheers.
Josh Green
|
From: Christian S. <chr...@ep...> - 2003-09-15 01:00:06
|
On Saturday, 13 September 2003 at 21:43, Josh Green wrote:
> Isn't the <3ewg> chunk like so? (from Ruben's findings)
Man, I'm blind. I hadn't seen that, but you're right.
>
> guint32 attenuate;
signed (in dB)
> guint16 effect_send;
only interesting for the GigaStudio routing, not for us
> guint16 fine_tune;
signed, in cents
> guint16 pitch_bend_range;
unsigned, the number of semitones the pitch bend controller should be able to pitch
> guint8 dim_key_start;
the lowest bit here is the PianoReleaseMode flag (whatever that means);
the bits above (dim_key_start >> 1) are the key position of dim start (0-127,
or C1 - G9)
> guint8 dim_key_end;
that one doesn't have to be bit-shifted, it's already the key number of dim
end (0-127, C1 - G9).
But I'm not really sure about the sense of the last two parameters yet.
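Christian's bit layout for the two key bytes could be decoded roughly like this (a minimal sketch; the struct and function names are invented, and the layout itself is still provisional per the discussion above):

```cpp
#include <cstdint>

// Decoded view of the last two bytes of the <3ewg> chunk, following the
// bit layout described above: dim_key_start carries a flag in bit 0 plus
// a shifted key number, dim_key_end is a plain key number.
struct DimKeyRange {
    bool    piano_release_mode;  // lowest bit of dim_key_start
    uint8_t key_start;           // 0-127 (C1 - G9)
    uint8_t key_end;             // 0-127, stored unshifted
};

DimKeyRange DecodeDimKeys(uint8_t dim_key_start, uint8_t dim_key_end) {
    DimKeyRange r;
    r.piano_release_mode = (dim_key_start & 0x01) != 0;
    r.key_start          = dim_key_start >> 1;  // strip the flag bit
    r.key_end            = dim_key_end;         // already a key number
    return r;
}
```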
And Josh, btw. I found out that the options field in
LIST(3prg)->LIST(3ewl)-><wsmp> is abused for the crossfading points, so
instead of
DWORD options;
there is
BYTE crossfade_in_start; // Start position of fade in
BYTE crossfade_in_end; // End position of fade in
BYTE crossfade_out_start; // Start position of fade out
BYTE crossfade_out_end;   // End position of fade out
CU
Christian
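The reinterpretation Christian describes (one DWORD reused as four bytes) could be sketched like this. The byte order within the DWORD is an assumption here: this takes the four bytes in file (little-endian) order, and the names are invented for illustration:

```cpp
#include <cstdint>

// The four crossfade points packed into the 32-bit `options` field of
// the <wsmp> chunk inside LIST(3prg)->LIST(3ewl), per the layout above.
struct Crossfade {
    uint8_t in_start;   // Start position of fade in
    uint8_t in_end;     // End position of fade in
    uint8_t out_start;  // Start position of fade out
    uint8_t out_end;    // End position of fade out
};

// Decode assuming the DWORD was read little-endian from the file, so the
// first byte on disk ends up in the low-order bits.
Crossfade DecodeCrossfade(uint32_t options) {
    Crossfade c;
    c.in_start  = static_cast<uint8_t>( options        & 0xFF);
    c.in_end    = static_cast<uint8_t>((options >>  8) & 0xFF);
    c.out_start = static_cast<uint8_t>((options >> 16) & 0xFF);
    c.out_end   = static_cast<uint8_t>((options >> 24) & 0xFF);
    return c;
}
```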
|
|
From: Josh G. <jg...@us...> - 2003-09-15 04:38:18
|
On Sun, 2003-09-14 at 17:58, Christian Schoenebeck wrote:
> And Josh, btw. I found out that the options field in
> LIST(3prg)->LIST(3ewl)-><wsmp> is abused for the crossfading points, so
> instead of
>
> DWORD options;
>
> there is
>
> BYTE crossfade_in_start;  // Start position of fade in
> BYTE crossfade_in_end;    // End position of fade in
> BYTE crossfade_out_start; // Start position of fade out
> BYTE crossfade_out_end;   // End position of fade out

I was wondering where the cross fading stuff was being put. Good to know :) I've been looking into how to emulate cross fading with SoundFont modulators or DLS2 connection blocks, if one were to convert to those formats. As you probably know, the DLS2 parameter model is essentially SoundFont modulators (a fixed parameter has constant inputs, rather than a controller), but I'm still a little unsure what's allowed. There is a defined list of "connection blocks" (modulators), but I'm curious if it breaks the format to put one's own custom ones in there. It would be sad if custom modulators weren't allowed, because that would mean that DLS2 would be weaker than SF2 in that regard.

Cheers.
Josh Green
|
From: Benno S. <be...@ga...> - 2003-09-14 01:32:53
|
Josh Green <jg...@us...> writes:
> Any interest in making an API that would allow this to be used with
> other GUIs, such as Swami?

Of course! Since lately I gathered useful experience about the decoupling of GUI and engine, I'll add the same concepts to evo; thus it will have a simple-to-use API (probably through a socket or some form of IPC), and driving it from Swami will become a breeze. Reusability of components, anyone? ;-)

> Still trying to see if these projects can
> work together on something. It would be cool to add LinuxSampler as a
> wavetable soft synth to Swami.

Yes, this would be ideal, because you are probably becoming the reference sample library editor in the Linux field. Josh, Christian, about libgig and Swami: I know Swami is in C while libgig is in C++, but in order to avoid code duplication, would it make sense for Swami to use libgig via a C wrapper, or do you think it would be easier to read and write GIG/DLS2 using your infrastructure that is already in place? As said, I'd like Swami to become a fully fledged GIG/DLS2 editor that permits creating, accessing and editing that kind of sample library, without trying to fit them into an SF2 model.

> I hope I can make it to the LAD ZKM 2004

Yes, that will be cool. I'll be there too if LinuxSampler is ready by that time (probably yes).

Benno
|
From: Josh G. <jg...@us...> - 2003-09-15 04:50:59
|
On Sat, 2003-09-13 at 18:33, Benno Senoner wrote:
> Of course !
> Since lately I gathered useful experience about the decoupling
> of GUI and engine, I'll add the same concepts to evo thus
> it will have a simple to use API (probably through a socket
> or form of IPC) and driving it from swami will become a breeze.
> Reusability of components anyone ? ;-)

Cool! Looking forward to working on integrating LinuxSampler with Swami :)

> Josh, Christian: about libgig and swami: I know swami is in C while
> libgig is in C++, but in order to avoid code duplication, would
> it make sense for swami to use libgig via C wrapper or do you think
> it would be easier to read and write GIG/DLS2 using your infrastructure
> that is already in place ?

It doesn't make sense for me to use libgig, because libInstPatch provides a standard object API to multiple formats and will provide conversion, multi-thread locking and parameter change tracking, and it fits quite well with the rest of Swami.

It would be interesting to see what kind of API could be created for LinuxSampler that would be generic enough to allow different formats to be synthesized into voices. This is currently how FluidSynth does things (with the limitation that it's SoundFont-centric). So basically, when a note-on event occurs, my note_on handler gets called, which creates each voice containing the sample and SoundFont parameters. This makes it easy to synthesize other formats, like my GigaSampler tests. If a similar voice-based API could be realized with GigaSampler, it would mean that libInstPatch/Swami could then easily synthesize different formats via LinuxSampler. So the issue of having different instrument libraries wouldn't be such a big deal, except that it is a duplication of effort, of course.

> As said I'd like swami to become a fully fledged GIG/DLS2 editor
> that permit the creation of, access and edit that kind of sample
> libraries, without trying to fit them in a SF2 model.

Right, Swami 1.0 (development branch) is already non-SF2-centric. Just got to get the damn thing working :)

> Yes that will be cool, I'll be there too if LinuxSampler gets ready for
> that time (probably yes).

I'd really like to make it myself, even if I don't have my own projects in a stable state :) A lot of it comes down to my financial situation, though.

Cheers.
Josh Green
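A purely hypothetical sketch of the format-neutral, voice-based note-on callback Josh describes: the engine asks the instrument library for fully resolved voices, so it never needs to know whether the source was SF2, DLS2 or GIG. Every name below is invented for illustration; none of this is FluidSynth's or LinuxSampler's real API:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// A fully resolved voice: the engine only sees sample data plus
// synthesis parameters, never the source file format.
struct Voice {
    const int16_t* sample_data;    // points into the loaded sample pool
    uint32_t       sample_frames;
    double         pitch_hz;
    float          attenuation_db;
};

// Implemented once per format (SF2, DLS2, GIG, ...).
class InstrumentProvider {
public:
    virtual ~InstrumentProvider() {}
    // Fill `voices` with every voice this note-on event should start.
    virtual void NoteOn(int chan, int key, int vel,
                        std::vector<Voice>& voices) = 0;
};

// Trivial stand-in provider, just to show the call shape.
class NullProvider : public InstrumentProvider {
public:
    void NoteOn(int, int key, int vel, std::vector<Voice>& voices) override {
        Voice v;
        v.sample_data    = nullptr;
        v.sample_frames  = 0;
        v.pitch_hz       = 440.0 * std::pow(2.0, (key - 69) / 12.0);
        v.attenuation_db = (vel < 64) ? 6.0f : 0.0f;
        voices.push_back(v);
    }
};
```

The design point is that one note-on may fan out into several voices (stereo pairs, crossfaded layers), which matches both the SoundFont and the Gigasampler dimension models.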
|
From: <be...@ga...> - 2003-09-15 12:57:29
|
Hi, I was just wondering if Christian, Josh or others could enlighten me how stereo samples are stored in DLS2 / GIG files. Are they stored as a single interleaved stereo sample, or as two separate mono samples where the first is mapped to the left output channel and the second to the right channel? Treating stereo samples as two mono samples has the advantage that you can set different loop points, modulation etc.

Regarding the sample sizes: I've seen only GIG files with 16-bit samples in them so far. Is 24-bit possible? If yes, how are they stored? 24-bit packed? (Which means three 32-bit words store four 24-bit samples.)

In order to read these 24-bit samples from disk efficiently (in large chunks) we would probably need a temporary buffer, because the streaming ringbuffers (which in the case of 16-bit samples are arrays of 16-bit quantities, i.e. the short datatype) cannot easily cope with 3-byte-long (24-bit) quantities. OK, they can (the ringbuffer.h template class can handle any kind of datatype, so just define sample_24bit_t as char[3]). But I guess, in order not to incur CPU read/write penalties (usually x86 needs 32-bit-aligned reads/writes to work at maximum efficiency), we should just fit the 24-bit samples into 32-bit words. Or am I wrong about the read/write alignment issues?

thoughts?

Benno
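The widening step Benno describes (packed 3-byte samples copied through a temporary buffer into aligned 32-bit words) could look roughly like this. This is a sketch, not evo's actual code, and it assumes the packed data is little-endian, as wave-style sample data usually is:

```cpp
#include <cstddef>
#include <cstdint>

// Widen packed 24-bit little-endian samples (3 bytes each) to 32-bit
// ints, so the streaming ring buffer only ever deals with aligned,
// native-width quantities.
void Unpack24To32(const uint8_t* packed, int32_t* out, size_t frames) {
    for (size_t i = 0; i < frames; ++i) {
        const uint8_t* p = packed + 3 * i;
        // Assemble the 3 bytes into the top of a 32-bit word, then
        // shift back down so the sign bit is extended correctly.
        int32_t v = static_cast<int32_t>(
            (static_cast<uint32_t>(p[0]) << 8)  |
            (static_cast<uint32_t>(p[1]) << 16) |
            (static_cast<uint32_t>(p[2]) << 24));
        out[i] = v >> 8;  // arithmetic shift preserves the sign
    }
}
```

The cost is a 33% larger ring buffer per sample, in exchange for aligned loads in the mixing loop.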
|
From: Josh G. <jg...@us...> - 2003-09-15 17:17:00
|
On Mon, 2003-09-15 at 05:57, be...@ga... wrote:
> Are they stored as a single interleaved stereo sample or
> as two separate mono samples where the first is mapped to the
> left output channel while the second to the right channel ?

I'm going to answer for DLS2, since I'm not real sure what GigaSampler files allow. Either way is possible, actually. A sample can be stored as stereo interleaved or as individual synchronized mono samples (or even multiple synchronized surround sound samples). I think the first sample in an instrument determines the tuning and loop stuff, though, so the samples have to be synchronized.

> Regarding the sample sizes: I've seen only GIG files with 16bit
> samples in them so far. Is 24bit possible ? If yes, how
> are they stored ?

The DLS2 specification defines that only 8-bit or 16-bit data should be found in the sample pool. The format is very flexible, though, since it is essentially just embedded wave files. So in theory, if one were to use conditional chunks (a feature of DLS2 files), you could create a new 24-bit DLSID signature (globally unique ID) and then use that in your program without breaking things too badly. This actually might not even be necessary, though: if one were to just write 24-bit wave samples, I'm sure loading programs would detect this and balk at it; it just wouldn't be within the DLS2 specs (not breaking the file format, though).

> But I guess in order to not incur into CPU read/write penalties
> because usually the x86 needs 32bit-aligned read/writes to be able
> to work with the maximum efficiency,
> we should just fit the 24bit samples into 32bit words.

Well, I guess as long as you are outside the DLS2 specs you could probably store any format that wave files support. Looks like 32-bit integer, 32-bit float and 64-bit float are all formats you can store in a wave file. These wouldn't suffer from the alignment issues, but would cause the file to be larger, of course. (Hmm, now that I think about it, RIFF files are only 2-byte aligned; I wonder if there is a way to pad 2 bytes when writing 4-byte samples, if someone were to, say, mmap the whole mess.) It seems DLS2/GigaSampler files are probably limited to 4 GB, since that's the limit of RIFF files.

I really like the DLS2 specification when it comes to the file format. It seems to be very flexible and is fairly extendable (through the use of conditional chunks and the like). Unfortunately there is a lot of stuff that seems to be somewhat undefined in the area of modulator connection blocks and additional audio data formats. As there really aren't a lot of DLS2 files out on the net, it's hard to get an idea of what's okay and what's not okay. It would be nice to take advantage of >16-bit-width audio; it's just a matter of deciding how this should be done, and checking if maybe there is already a program out there that has done it, so we could keep compatibility with other implementations.

It's kind of funny too, since DLS2 is being implemented in rather interesting stuff (MPEG4, MS Media Player and QuickTime have soft synth support for playing back MIDI files with DLS2 instruments), but I cannot find one DLS2 file on the net! Time for an open source editor :)

Cheers.
Josh Green