From: Benno S. <be...@ga...> - 2002-11-02 16:34:43
Hi, yesterday I had a discussion on IRC with Steve H. and Juan L. about some issues regarding LinuxSampler. Since our goal is to provide a sampler that can work with a large number of sample library formats, we need to implement engines that reproduce the samples so that they sound as if they were played on the original hardware/software sampler (or at least come very close).

The question is: do we build a single "one-size-fits-all" engine and write loaders for various sample formats, trying to fit the original sample parameters (filters, envelopes etc.) in such a way that they sound as close as possible to the original, or is it better to implement separate engines for each type of sample library (e.g. Akai S1000, SF2, GIGA, etc.), each associated with its related sample loader?

Since the plan is to use compilation techniques in order to allow very flexible signal flow diagrams while providing speed on par with hardcoded designs, my question was whether it would be better to implement these commercial sampler designs (Akai, SF2, GIG etc.) using our signal network builder (it will probably become a graphical editor) without writing any (or almost no) C/C++ code. Of course we need an associated loader, and it is probably hard to avoid handcoding the individual loaders, but for the engine I think one could skip the implementation step once a powerful signal network builder exists.

Juan says that for now we should use the signal network compiler only for future designs and experimental stuff, and start by providing hardcoded versions of the sampler engines mentioned above, converting them to a signal network at a later stage (when the network compiler is sufficiently evolved). While this could provide some short term advantages (faster results, perhaps more developers jumping on the bandwagon), it is IMHO a bit of a waste of time.

I don't have that much experience writing very large audio applications, but my proof-of-concept LinuxSampler code (see home page), while still small and despite being organized in C++ classes, started to look unclean and hard to maintain, since every design decision is embedded deep in the code. Generating notes from MIDI events requires performing several tasks that depend on each other. I'm thinking for example about:

handling the keymap:
- which notes are on/off
- handling multiple note-ons on the same note on a certain channel (e.g. does a note-off mute the first note or the last one? we could make this configurable by using linked lists assigned to each key on the MIDI scale)
- sustain pedal
- different key/velocity/controller zones trigger different samples with different parameters

voice generation stuff:
- sample playback (from RAM/disk)
- looping (needs to work in synergy with the sample playback)
- modulation, enveloping, filters and FXes that become active based on the instrument's preset
etc.

Within the code I'd like to keep these things separate in a clean way, but I think that is not so simple with hardcoded designs, since you (or at least I) tend to optimize things and perform several tasks within the same routine, thus effectively merging things that belong to different layers. This is why I'm asking you folks what the right way to do it looks like in your opinion. Waiting for opinions from everyone!

Josh: regarding SF2/DLS importing, your help is very welcome; perhaps you could comment on how to best solve the multiple sample format importing problem.

PS: regarding the name change of the project from EVO to LinuxSampler, I did it for several reasons: first, the name LinuxSampler makes it clear that it is a sampler for Linux; second, it will make it easier for users and developers to find us on search engines, since the term "evo" brings up lots of unrelated results. Plus I think the "Linux" in LinuxSampler should advertise Linux as a viable audio platform. But nothing stops us from porting it to, let's say, MacOS X too, since it is a Unix derivative ... perhaps generating interest in Linux among pro-audio guys, since they are almost all Mac users.

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
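[A minimal C++ sketch of the per-key linked lists and configurable note-off behaviour Benno describes above. Every name in it (Voice, KeyInfo, MidiChannelState, NoteOffPolicy) is invented for this illustration and is not existing LinuxSampler code.]

    #include <cstdint>
    #include <list>

    struct Voice {
        void release() { /* enter the envelope's release stage; the engine
                            frees the voice once it has faded out */ }
    };

    enum class NoteOffPolicy { OldestFirst, NewestFirst };  // which stacked note a note-off mutes

    struct KeyInfo {
        std::list<Voice*> active;   // every voice currently sounding on this key
        bool held = false;          // key physically down
        bool sustained = false;     // kept alive only by the sustain pedal
    };

    struct MidiChannelState {
        KeyInfo keys[128];
        bool sustainPedal = false;
        NoteOffPolicy policy = NoteOffPolicy::OldestFirst;

        void noteOn(uint8_t key, Voice* v) {
            keys[key].active.push_back(v);   // stacking allows several voices per key
            keys[key].held = true;
        }

        void noteOff(uint8_t key) {
            KeyInfo& k = keys[key];
            k.held = false;
            if (sustainPedal) { k.sustained = true; return; }  // defer until pedal up
            releaseOne(k);
        }

        void pedalUp() {
            sustainPedal = false;
            for (KeyInfo& k : keys)
                if (k.sustained && !k.held) {
                    k.sustained = false;
                    while (!k.active.empty()) releaseOne(k);
                }
        }

    private:
        void releaseOne(KeyInfo& k) {
            if (k.active.empty()) return;
            Voice* v;
            if (policy == NoteOffPolicy::OldestFirst) { v = k.active.front(); k.active.pop_front(); }
            else                                      { v = k.active.back();  k.active.pop_back();  }
            v->release();
        }
    };

The point of the sketch is only that the keymap bookkeeping (note stacking, sustain) can live in one small object, separate from the voice generation code.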
From: Steve H. <S.W...@ec...> - 2002-11-02 18:43:51
On Sat, Nov 02, 2002 at 07:37:36 +0100, Benno Senoner wrote:
> The question is: do we build a single "one-size-fits-all" engine and
> write loaders for various sample formats, trying to fit the original
> sample parameters (filters, envelopes etc.) in such a way that they
> sound as close as possible to the original, or is it better to implement
> separate engines for each type of sample library (e.g. Akai S1000, SF2,
> GIGA, etc.), each associated with its related sample loader?

I think that the best approach is to make the sample loaders mini engines; all the things like how the sampler handles note-off etc. will vary a lot from sampler to sampler. If we just make the engine provide the MIDI routing and parsing, and deal with the JACK I/O stuff, then the individual sub-engines can do whatever they like*. It also means we can get up and running with a single sampler format without compromising the design, as long as the interface between the main engine and the sub-engines is general enough. If the sub-engines want to use recompilation techniques then the main engine can just export an API to handle that.

* ...although this makes me think, playing devil's advocate: maybe we should not be aiming for one giant engine that will handle every sample format known to man; maybe we should make a "sampler construction kit" that allows people to bolt on their sample loading code and sampler emulating code and build a sampler out of that. It would encourage lots of simple, special purpose tools and avoid toolkit issues.

- Steve
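[A rough C++ sketch of the split Steve proposes: the main engine owns MIDI parsing and JACK I/O, and each format-specific sub-engine sits behind a small interface. All names are assumed for illustration; this is not an existing LinuxSampler API.]

    #include <cstddef>
    #include <cstdint>

    struct MidiEvent {            // already parsed and routed by the main engine
        uint32_t frameOffset;     // position inside the current audio period
        uint8_t  type, channel, data1, data2;
    };

    class SubEngine {             // one implementation per format (Akai, SF2, GIG, ...)
    public:
        virtual ~SubEngine() {}
        virtual bool loadInstrument(const char* path) = 0;
        virtual int  outputCount() const = 0;                          // "I want 8 outputs"
        virtual void processEvents(const MidiEvent* ev, std::size_t count) = 0;
        virtual void render(float** outputs, std::size_t frames) = 0;  // fill one period of audio
    };

Once per audio period the main engine would hand every registered SubEngine its events and then its output buffers; how the sub-engine maps events to voices is entirely its own business.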
From: Steve H. <S.W...@ec...> - 2002-11-04 12:16:32
[Peter, I'm assuming you meant to mail this to the list, I'm replying to the list anyway ;)]

As discussed on IRC last night, the problem is that some sample formats have features that can't easily be implemented with a disk based generic engine; for example the Akai sample format allows you to vary the start point with note-on velocity (though I don't know by how much). I think that some hardware samplers allow you to modulate the loop points in realtime, though the 3000 series Akais apparently cannot.

So, I think it is better to have separate sub-engines that communicate with the main engine at a high level (e.g. to the sub-engine: "Here is a bunch of event data ...", from the sub-engine: "I want 8 outputs", "here is a lump of audio data ..."). Though obviously data transfer would be callback based or something.

The alternative would be to normalise all the sample formats into one grand unified sample format and just handle that (I believe that is how GigaSampler works?). I suspect that is less efficient though, and it doesn't allow for specific support for styles of sample playback.

I think it would make sense to preparse the event data, rather than trying to handle raw MIDI. Maybe using something like the OSC event stream? Anyone know of other preparsed event formats?

- Steve

On Mon, Nov 04, 2002 at 09:06:08 +1000, Peter wrote:
> I personally like the idea of a sampler construction kit... or at least a
> modularised sample engine. My agenda is more towards loop sampling /
> re-sequencing; normal event handling in samplers (especially the Akais)
> doesn't lend itself to that kind of stuff, so I'll probably be more
> inclined to work towards the Yamaha style of things (ish)...
>
> I've been playing around with some ideas over the past few months.
>
> I'd like the sampler disk streaming, audio I/O and MIDI channel routing
> (e.g. note on/off, pitch, mod, but NOT CC or RPN/NRPN data) to be handled
> by the base engine, aka the I/O engine.
>
> Then, when a file is loaded onto a layer (MIDI channel), the base class
> calls the respective sampler extension, which handles everything on the
> channel, from sample loading to note on/off handling to audio and even
> MIDI outputs... depending on the type.
>
> That way you could have, say, an instrument extension which could load
> DLSs or SoundFonts, an Akai extension that loads Akai files, etc. etc.
>
> I guess that's enough for the time being.
> cheers
> [3]
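[A hypothetical sketch of the "preparsed event" idea discussed above: the base I/O engine turns raw MIDI (or, later, OSC) into timestamped, typed events before handing them to a layer's extension. Names and fields are illustrative only.]

    #include <cstdint>
    #include <vector>

    enum class EventType { NoteOn, NoteOff, PitchBend, Modulation };

    struct Event {
        EventType type;
        uint32_t  frame;     // sample-accurate offset inside the current period
        int       layer;     // the MIDI channel / layer the event was routed to
        float     a, b;      // e.g. key and velocity, or bend amount
    };

    using EventList = std::vector<Event>;   // one list per period and per extension

    // The base engine fills one EventList per layer and passes it to that
    // layer's sampler extension, so extensions never parse raw MIDI bytes.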
From: [3] <ma...@ve...> - 2002-11-04 14:03:44
heh. thanks

Steve Harris wrote:
> [Peter, I'm assuming you meant to mail this to the list, I'm replying to
> the list anyway ;)]
>
> As discussed on IRC last night, the problem is that some sample formats
> have features that can't easily be implemented with a disk based generic
> engine; for example the Akai sample format allows you to vary the start
> point with note-on velocity (though I don't know by how much). I think
> that some hardware samplers allow you to modulate the loop points in
> realtime, though the 3000 series Akais apparently cannot.

wouldn't you be better off loading those samples straight into memory?

> So, I think it is better to have separate sub-engines that communicate
> with the main engine at a high level (e.g. to the sub-engine: "Here is a
> bunch of event data ...", from the sub-engine: "I want 8 outputs", "here
> is a lump of audio data ...").

mmm...

> Though obviously data transfer would be callback based or something.
>
> The alternative would be to normalise all the sample formats into one
> grand unified sample format and just handle that (I believe that is how
> GigaSampler works?).
>
> I suspect that is less efficient though, and it doesn't allow for
> specific support for styles of sample playback.

amen brother...

> I think it would make sense to preparse the event data, rather than
> trying to handle raw MIDI. Maybe using something like the OSC event
> stream?
>
> Anyone know of other preparsed event formats?

..snip

cheers
[3]
ma...@ve...
From: Christian S. <chr...@ep...> - 2002-11-04 21:22:40
On Monday, 4 November 2002 13:16, Steve Harris wrote:
> As discussed on IRC last night, the problem is that some sample formats
> have features that can't easily be implemented with a disk based generic
> engine; for example the Akai sample format allows you to vary the start
> point with note-on velocity (though I don't know by how much). I think
> that some hardware samplers allow you to modulate the loop points in
> realtime, though the 3000 series Akais apparently cannot.

It's been a while since I created my last Akai programs, but AFAIK the S3000 series (only regarding this start point) just distinguishes between four velocities. I think they called them zones, and for each of these 4 zones you are able to assign an individual 'sample' (each already containing its fixed loop points) and additional parameters. So it's not that these loop points are chosen almost at random, if that's the problem. But correct me if memory lies.

I hope there is a way without loading those libraries completely into memory. Although they're limited to 'just' 32MB, it doesn't take a big arrangement to fill 256MB of RAM or more. But what about those crossfade loops? These are essential for small sound libraries to sound natural and smooth. How many loop points can there be?

BTW the limit for S5000/S6000 is 256MB.
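[A small sketch of the four-zone velocity switching Christian describes for the S3000. Type and field names are invented here and are not taken from the Akai format documentation.]

    struct Sample { /* audio data, loop start/end, ... */ };

    struct VelocityZone {
        int     loVel, hiVel;   // e.g. 0-31, 32-63, 64-95, 96-127
        Sample* sample;         // each zone carries its own sample and loop points
    };

    struct KeyGroup {
        VelocityZone zones[4];

        Sample* select(int velocity) const {
            for (const VelocityZone& z : zones)
                if (velocity >= z.loVel && velocity <= z.hiVel)
                    return z.sample;
            return zones[3].sample;   // fall back to the top zone
        }
    };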
From: Steve H. <S.W...@ec...> - 2002-11-04 22:24:08
On Mon, Nov 04, 2002 at 10:23:13 +0100, Christian Schoenebeck wrote:
> It's been a while since I created my last Akai programs, but AFAIK the
> S3000 series (only regarding this start point) just distinguishes between
> four velocities. I think they called them zones, and for each of these 4
> zones you are able to assign an individual 'sample'

There is also start point variation, keyed off note velocity; the range is -9999 to +9999 or something like that, but there is no indication what the units are (as usual). I don't know how much it was used; I used it once or twice I think. It was good for percussion.

- Steve
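[One possible reading of that velocity-keyed start point modulation, as a C++ sketch. The real Akai units are unknown, so the scaling is a guess; the function name and the framesPerUnit constant are made up for illustration.]

    long playbackStartFrame(long sampleStart, int velocity /* 0..127 */,
                            int startMod /* roughly -9999..+9999 */)
    {
        const double framesPerUnit = 1.0;   // arbitrary assumption about the units
        long offset = (long)(startMod * (velocity / 127.0) * framesPerUnit);
        long start  = sampleStart + offset;
        return start < 0 ? 0 : start;       // never seek before the first frame
    }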
From: Steve H. <S.W...@ec...> - 2002-11-04 14:13:21
On Tue, Nov 05, 2002 at 12:03:36 +1000, [3] wrote:
> > So, I think it is better to have separate sub-engines that communicate
> > with the main engine at a high level (e.g. to the sub-engine: "Here is a
> > bunch of event data ...", from the sub-engine: "I want 8 outputs", "here
> > is a lump of audio data ...").
> > The alternative would be to normalise all the sample formats into one
> > grand unified sample format and just handle that (I believe that is how
> > GigaSampler works?).

Of course, the counter argument to all this is that writing a full sampler engine for every format we want to support fully sucks, no-one probably needs all that functionality anyway, and we should just write translators onto a common, comprehensive format and live with the slight conversion loss. <shrug>

- Steve
From: Phil K. <phi...@el...> - 2002-11-04 14:56:04
Quoting Steve Harris <S.W...@ec...>:
> Of course, the counter argument to all this is that writing a full
> sampler engine for every format we want to support fully sucks, no-one
> probably needs all that functionality anyway, and we should just write
> translators onto a common, comprehensive format and live with the slight
> conversion loss. <shrug>

Sounds like a job for DLS at the core, and then have import/export modules support Akai and other native sampler formats. We should be able to stream DLS in and out of the engine as well.

-P

--
Phil Kerr
Centre for Music Technology Researcher, Glasgow University
phi...@el...  T (+44) 141 330 5740
Without music, life would be a mistake. - Friedrich Nietzsche
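[If the "DLS at the core" route were taken, the import side could be a small translator interface like this sketch. Every name here is hypothetical, and Instrument stands in for whatever DLS-like internal model the engine would use.]

    class Instrument;                // the common internal (DLS-like) model

    class Importer {
    public:
        virtual ~Importer() {}
        virtual bool probe(const char* path) const = 0;   // "does this file belong to me?"
        virtual Instrument* load(const char* path) = 0;   // translate into the core model
    };

    class AkaiImporter : public Importer { /* ... */ };   // one translator per native format
    class SF2Importer  : public Importer { /* ... */ };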
From: Steve H. <S.W...@ec...> - 2002-11-04 15:29:13
On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote:
> Sounds like a job for DLS at the core, and then have import/export modules
> support Akai and other native sampler formats.
>
> We should be able to stream DLS in and out of the engine as well.

I don't get the impression that DLS is anywhere near rich enough to do this job; it would need to be something pretty expressive.

Gigasampler uses DLS 2 + proprietary extensions, doesn't it?

- Steve
From: Phil K. <phi...@el...> - 2002-11-04 15:44:59
DLS 2 is a better option than DLS 1, although the specs for DLS 1 are downloadable free from the MMA.

It's a question of balance between using a widely used standard with some limitations, versus a custom format which may not fully interoperate.

-P

Quoting Steve Harris <S.W...@ec...>:
> I don't get the impression that DLS is anywhere near rich enough to do
> this job; it would need to be something pretty expressive.
>
> Gigasampler uses DLS 2 + proprietary extensions, doesn't it?
>
> - Steve
From: Matthias W. <mat...@in...> - 2002-11-04 21:31:06
On Mon, Nov 04, 2002 at 02:13:16PM +0000, Steve Harris wrote:
> Of course, the counter argument to all this is that writing a full
> sampler engine for every format we want to support fully sucks, no-one
> probably needs all that functionality anyway, and we should just write
> translators onto a common, comprehensive format and live with the slight
> conversion loss. <shrug>

In order to provide all the features that a sample format offers, we have to represent its parameters in LinuxSampler. But that means we already have a "grand unified sample" system. We could write a set of specialized functions that handle the special features of a sample format. When a sample set of a certain format is used, the right set of functions is put together while loading the samples (via function pointers, process lists, ...).

matthias
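[A minimal C++ sketch of the "function pointers / process lists" idea Matthias suggests: the loader assembles, per sample set, only the processing steps its format needs. All names are invented for illustration.]

    #include <vector>

    struct VoiceState;   // playback position, envelope, filter state, ... (opaque here)

    typedef void (*ProcessFunc)(VoiceState&, float* buf, int frames);

    void processCrossfadeLoop(VoiceState&, float*, int)       { /* e.g. Akai crossfade loops */ }
    void processVelocityStartOffset(VoiceState&, float*, int) { /* velocity-dependent start point */ }
    void processLowpassFilter(VoiceState&, float*, int)       { /* per-voice filter */ }

    struct VoiceProcessChain {
        std::vector<ProcessFunc> steps;   // filled by the loader for this sample format

        void run(VoiceState& v, float* buf, int frames) {
            for (ProcessFunc f : steps)   // only the features this format actually uses
                f(v, buf, frames);
        }
    };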
From: Juan L. <co...@re...> - 2002-11-05 03:35:22
On Mon, 4 Nov 2002 14:13:16 +0000 Steve Harris <S.W...@ec...> wrote:
> Of course, the counter argument to all this is that writing a full
> sampler engine for every format we want to support fully sucks, no-one
> probably needs all that functionality anyway, and we should just write
> translators onto a common, comprehensive format and live with the slight
> conversion loss. <shrug>
>
> - Steve

I think I said this over IRC, but I'd like to say it again. The central part of the issue is the "Voice". The abstraction to me should look like this (hope you have a fixed font; otherwise, the leftmost item connects to the last one):

#Sample Library reading engine -> *Disk Streamer -> #Voice <- *Engine Manager <- #Engine
|________________________________________________________________________________________^

* means a common-to-all object
# means a specific implementation inheriting from a common base (through a polymorphic interface)

So, I think the engine / voice processing / library file reading should be implementation specific (giga/akai/etc), but it should communicate with the existing framework through common objects to make our life easier while programming. Remember, not everything is just reading and streaming: all the MIDI event handling, voice mixing/allocation, effect processing and buffer exporting must be common to all interfaces. This would end up as a framework for emulating existing samplers.

I used this approach in LegaSynth (http://reduz.com.ar/legasynth) with a lot of success already, and it should make writing specific implementations of sampling engines a _lot_ easier.

Juan Linietsky
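[A rough translation of Juan's diagram into C++ interfaces: the '#' items become abstract base classes with one implementation per sampler format, while the '*' items are shared concrete objects. Names are hypothetical, not actual LegaSynth or LinuxSampler code.]

    class DiskStreamer;     // * common: refills voice buffers from disk
    class EngineManager;    // * common: MIDI handling, voice allocation, mixing, output

    class LibraryReader {   // # format specific: parses the library file
    public:
        virtual ~LibraryReader() {}
        virtual bool open(const char* path) = 0;
    };

    class Voice {           // # format specific: renders one playing note
    public:
        virtual ~Voice() {}
        virtual void render(float* out, int frames) = 0;
    };

    class Engine {          // # format specific: creates its own Voice objects
    public:
        virtual ~Engine() {}
        virtual Voice* allocateVoice(int key, int velocity) = 0;
    };

    // The EngineManager only ever talks to Engine and Voice through these base
    // classes, so event handling, mixing and buffer export stay common to all formats.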