From: Steve H. <S.W...@ec...> - 2002-11-05 18:15:59
On Tue, Nov 05, 2002 at 07:20:25 +0100, Cadaceus wrote:
> P.S.: Anyone thought of something like VST support for linuxsampler (the
> diagram didn't look like it)? I don't know if Rosegarden or anything else
> supports it (or a Linux "version" of VST), but it would be nice to control
> linuxsampler from your favorite sequencer (that's something most GigaStudio
> users are still pissed off about, because VST support is still missing).
> Just an idea for the future (certainly not for the beginning)...

That would be LADSPA.

- Steve
From: Cadaceus <cad...@Po...> - 2002-11-05 18:06:38
Hi everyone, thought I'd quickly introduce myself. My name is Alex Klein, I'm
studying computer science (besides being a wannabe composer), and since I'm
going to have lots of spare time next semester (at least I hope so :) I
thought I'd try to help a bit with linuxsampler.

Quite new to Linux, I'm currently in the process of "audio programming
research", or, bluntly put, I don't have a clue about it now and hope it's
different in a month or two... :) So I probably won't be of much help in the
planning phase, but I can offer some C++ experience and "a
musician's/GigaStudio user's view" (so at least you've got a beta tester...
:)...

Fields of interest? Well, pretty much the whole thing (most of all the audio
thread -- envelopes, looping, ... -- but I don't have much experience in that
field) except sample importing... so if you've got something to do, just try
me, but keep in mind I'm a musician and a programmer, unfortunately -- up to
now -- not a music-app programmer... Well, going over to reading up on ALSA,
LADSPA and all that stuff... :)

Cheers
Alex

P.S.: Anyone thought of something like VST support for linuxsampler (the
diagram didn't look like it)? I don't know if Rosegarden or anything else
supports it (or a Linux "version" of VST), but it would be nice to control
linuxsampler from your favorite sequencer (that's something most GigaStudio
users are still pissed off about, because VST support is still missing). Just
an idea for the future (certainly not for the beginning)...
From: Steve H. <S.W...@ec...> - 2002-11-05 08:32:00
On Tue, Nov 05, 2002 at 02:42:25 -0300, Juan Linietsky wrote:
> > The problem with out-of-context is that the cache has to be cleared and
> > refilled (well, the part touched, and I imagine a big sample set would
> > use a lot of cache) and the context switch time is small, but non-zero.
> > As you pointed out, we don't have very long to write out 64 samples for
> > every active sample of every active note.
>
> Sorry, I didn't get the parent mail to this, could you please explain where
> this out-of-context issue comes from?

It was a braino, I meant out-of-process. The previous poster was discussing
why it was necessary to go in-process for linuxsampler. I think the short
answer is that it isn't necessary, but it's damn hard as it is and we don't
need anything to make it any harder.

- Steve
From: Juan L. <co...@re...> - 2002-11-05 08:16:24
OK, just did more work on the codebase (basic code skeleton). Just a bit is
missing to get it to work, but I'd rather wait until tomorrow so I can get it
well checked with gcc 3.2, a newer version of JACK, etc. Let's hope tomorrow
I have something working!

cheers

Juan Linietsky
From: Juan L. <co...@re...> - 2002-11-05 05:40:23
On Mon, 4 Nov 2002 22:10:21 +0000 Steve Harris <S.W...@ec...> wrote:
> On Mon, Nov 04, 2002 at 10:15:56 +0100, Matthias Weiss wrote:
> > > Regarding JACK we will probably need to use the in-process model (which
> > > is actually not used much AFAIK) in order to achieve latencies on par
> > > with direct output, so this needs further research.
> >
> > Well, this means we have to provide GUI implementations for every
> > graphics toolkit that is used by the available sequencers.
> > If it's right that processes and threads are handled very similarly in
> > the Linux kernel, there shouldn't be a lot of performance difference
> > between the in-process and out-of-process model; anyone know more about
> > that?
>
> One idea was that linuxsampler UIs would communicate with the main engine
> over a (non-X) socket of some kind.
>
> The problem with out-of-context is that the cache has to be cleared and
> refilled (well, the part touched, and I imagine a big sample set would use
> a lot of cache) and the context switch time is small, but non-zero. As you
> pointed out, we don't have very long to write out 64 samples for every
> active sample of every active note.

Sorry, I didn't get the parent mail to this, could you please explain where
this out-of-context issue comes from?

thanks!

Juan Linietsky
From: Juan L. <co...@re...> - 2002-11-05 03:35:22
On Mon, 4 Nov 2002 14:13:16 +0000 Steve Harris <S.W...@ec...> wrote:
> On Tue, Nov 05, 2002 at 12:03:36 +1000, [3] wrote:
> > > So, I think it is better to have separate sub-engines that communicate
> > > with the main engine at a high level (eg. to the sub-engine: "Here is a
> > > bunch of event data ...", from the sub-engine: "I want 8 outputs",
> > > "here is a lump of audio data ...").
> > >
> > > The alternative would be to normalise all the sample formats into one
> > > grand unified sample format and just handle that (I believe that is how
> > > gigasampler works?).
>
> Of course, the counter argument to all this is that writing a full sampler
> engine for every format we want to support fully sucks, no-one probably
> needs all that functionality anyway, and we should just write translators
> onto a common, comprehensive format and live with the slight conversion
> loss. <shrug>
>
> - Steve

I think I said this over IRC, but I'd like to say it again, the central part
of the issue being the "Voice". The abstraction to me should be like this
(hope you have a fixed font; if not, the leftmost item connects to the last):

#Sample Library reading engine -> *Disk Streamer -> #Voice <- *Engine Manager <- #Engine
|________________________________________________________________________________^

* means common-to-all object
# means specific implementation inheriting from a common base (through a
polymorphic interface).

So, I think the Engine/Voice processing/library file reading should be
implementation specific (giga/akai/etc.), but it should communicate with the
existing framework through common objects to make our life easier while
programming. Remember, not everything is just reading and streaming: all the
MIDI event handling, voice mixing/allocation, effect processing and buffer
exporting must be common to all interfaces. This would end up as a framework
for emulating existing samplers. I used this approach in LegaSynth
(http://reduz.com.ar/legasynth) with a lot of success already, and it should
make writing specific implementations of sampling engines a _lot_ easier.

Juan Linietsky
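To make the abstraction above concrete, here is a minimal C++ sketch of the
common vs. format-specific split. All class and method names are hypothetical
illustrations, not the actual linuxsampler (or LegaSynth) API.

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

class DiskStreamer {};    // * common-to-all: fills ring buffers from disk
class EngineManager {};   // * common-to-all: MIDI dispatch, voice allocation, mixing

// # format-specific, behind a polymorphic interface:
class Voice {
public:
    virtual ~Voice() = default;
    virtual void NoteOn(std::uint8_t key, std::uint8_t velocity) = 0;
    virtual void NoteOff() = 0;
    // Render 'frames' samples into 'out'; return false once the voice has died.
    virtual bool Render(float* out, std::size_t frames) = 0;
};

class Engine {
public:
    virtual ~Engine() = default;
    virtual const char* FormatName() const = 0;                     // "gig", "akai", ...
    virtual std::unique_ptr<Voice> CreateVoice(DiskStreamer&) = 0;  // called by the manager
};

// One concrete implementation pair, stubbed out for illustration.
class GigVoice : public Voice {
public:
    void NoteOn(std::uint8_t, std::uint8_t) override {}
    void NoteOff() override {}
    bool Render(float* out, std::size_t frames) override {
        for (std::size_t i = 0; i < frames; ++i) out[i] = 0.0f;     // real DSP goes here
        return true;
    }
};

class GigEngine : public Engine {
public:
    const char* FormatName() const override { return "gig"; }
    std::unique_ptr<Voice> CreateVoice(DiskStreamer&) override {
        return std::make_unique<GigVoice>();
    }
};

int main() {
    DiskStreamer streamer;
    GigEngine engine;
    auto voice = engine.CreateVoice(streamer);  // framework code only sees Voice/Engine
    float buf[64];
    voice->NoteOn(60, 100);
    voice->Render(buf, 64);
}
```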
From: Benno S. <be...@ga...> - 2002-11-04 23:34:42
> > In order to provide the whole feature set that a sample format provides,
> > we have to represent the parameters in linuxsampler. But that means we
> > already have a "grand unified sample" system.
>
> We don't have to do that, we can have format-specific engines; the question
> is whether it's a good idea or not.
>
> Benno's plan to use dynamic compilation units would make the engines quick
> to construct; they won't be as fast as tightly hand-coded engines, but it
> may be worth it for the RAD features.
>
> - Steve

I think the speed penalty is low, since you are probably using only
pre-optimized macros, plus you can always take the generated source and
optimize it manually. Not a big deal.

Steve, as suspected, there are people who agree with me that when loading
AKAI samples into RAM you can easily end up burning 256MB of RAM, which is a
lot for non-high-end PCs. Let's see how the discussion evolves... AKAI
experts, what do you say?

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
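As an illustration of the "dynamic compilation unit" idea mentioned above
(generated source compiled and loaded at runtime), here is a minimal sketch.
The file paths, the gcc invocation and the generated function are
placeholders, not the project's actual recompiler.

```cpp
// Emit specialised C source for one engine configuration, compile it as a
// shared object, and load the resulting render function at runtime.
// Link with -ldl on older glibc.
#include <cstdio>
#include <cstdlib>
#include <dlfcn.h>

typedef void (*render_fn)(float* out, int frames);

int main() {
    // 1. Write the generated source (in reality produced from the signal graph).
    FILE* src = std::fopen("/tmp/ls_engine.c", "w");
    if (!src) return 1;
    std::fprintf(src,
        "void render(float* out, int frames) {\n"
        "    for (int i = 0; i < frames; ++i) out[i] = 0.0f; /* generated DSP */\n"
        "}\n");
    std::fclose(src);

    // 2. Compile it into a shared object with the system compiler.
    if (std::system("gcc -O2 -shared -fPIC /tmp/ls_engine.c -o /tmp/ls_engine.so") != 0)
        return 1;

    // 3. Load it and resolve the entry point.
    void* handle = dlopen("/tmp/ls_engine.so", RTLD_NOW);
    if (!handle) return 1;
    render_fn render = (render_fn)dlsym(handle, "render");
    if (!render) return 1;

    float buf[64];
    render(buf, 64);          // call the freshly compiled engine
    dlclose(handle);
    return 0;
}
```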
From: Steve H. <S.W...@ec...> - 2002-11-04 22:26:37
On Mon, Nov 04, 2002 at 10:27:49 +0100, Matthias Weiss wrote:
> > Of course, the counter argument to all this is that writing a full
> > sampler engine for every format we want to support fully sucks, no-one
> > probably needs all that functionality anyway, and we should just write
> > translators onto a common, comprehensive format and live with the slight
> > conversion loss. <shrug>
>
> In order to provide the whole feature set that a sample format provides,
> we have to represent the parameters in linuxsampler. But that means we
> already have a "grand unified sample" system.

We don't have to do that, we can have format-specific engines; the question
is whether it's a good idea or not.

Benno's plan to use dynamic compilation units would make the engines quick
to construct; they won't be as fast as tightly hand-coded engines, but it may
be worth it for the RAD features.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-04 22:24:08
On Mon, Nov 04, 2002 at 10:23:13 +0100, Christian Schoenebeck wrote:
> It's been a while since I created my last Akai programs, but AFAIK the
> S3000 series (only regarding this start point) just differs between four
> velocities. I think they called them zones and for each of these 4 zones
> you

There is also start point variation, keyed off note velocity; the range is
-9999 to +9999 or something like that, but there is no indication what the
units are (as usual). I don't know how much it was used, I used it once or
twice I think. It was good for percussion.

- Steve
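The exact Akai semantics are not documented here, so the following is only a
hypothetical illustration of how a velocity-keyed start offset could be
mapped, assuming the -9999..+9999 depth scales linearly with velocity and
that the units are sample frames (both assumptions).

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical mapping of an Akai-style start point modulation depth
// (-9999..+9999) and MIDI velocity (0..127) to a start offset in frames.
// Units and scaling are assumptions, not the real hardware behaviour.
std::int64_t StartOffsetFrames(int modDepth, std::uint8_t velocity,
                               std::int64_t sampleLength) {
    double v = velocity / 127.0;                          // 0.0 .. 1.0
    std::int64_t offset =
        static_cast<std::int64_t>(modDepth * v);          // scaled by velocity
    // Clamp so playback always starts inside the sample.
    return std::clamp<std::int64_t>(offset, 0, sampleLength - 1);
}
```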
From: Steve H. <S.W...@ec...> - 2002-11-04 22:20:13
On Mon, Nov 04, 2002 at 07:55:50 -0000, Paul Kellett wrote:
> > regarding the AKAI samples: Steve says akai samplers were quite limited
> > in terms of RAM availability (32-64MB) and since akai samplers allow
> > some funny stuff like modulating the loop points I was wondering what
> > you think about not using disk streaming for this format.
> > How about the S5000/6000 series? What is the maximum RAM configuration?
> > Do they allow nasty loop point modulation too?
>
> I don't think Akai ever had loop point modulation? Except while editing
> samples, anyway.

No, old Akais only have start point modulation (based on note velocity). I
think EMUs might have loop point modulation, I've used them much less though.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-04 22:10:27
On Mon, Nov 04, 2002 at 10:15:56 +0100, Matthias Weiss wrote:
> > Regarding JACK we will probably need to use the in-process model (which
> > is actually not used much AFAIK) in order to achieve latencies on par
> > with direct output, so this needs further research.
>
> Well, this means we have to provide GUI implementations for every graphics
> toolkit that is used by the available sequencers.
> If it's right that processes and threads are handled very similarly in the
> Linux kernel, there shouldn't be a lot of performance difference between
> the in-process and out-of-process model; anyone know more about that?

One idea was that linuxsampler UIs would communicate with the main engine
over a (non-X) socket of some kind.

The problem with out-of-context is that the cache has to be cleared and
refilled (well, the part touched, and I imagine a big sample set would use a
lot of cache) and the context switch time is small, but non-zero. As you
pointed out, we don't have very long to write out 64 samples for every
active sample of every active note.

> > regarding the AKAI samples: Steve says akai samplers were quite limited
> > in terms of RAM availability (32-64MB) and since akai samplers allow
> > some funny stuff like modulating the loop points I was wondering what
> > you think about not using disk streaming for this format.
>
> Or caching enough audio data to cover the modulation range, which might
> impact RAM usage.

The 3000 series were limited to 32 meg; generally the samples were small,
but in either case the point is that the optimal implementation isn't disk
streamed. It's just an example though, don't get hung up on AKAIs.

- Steve
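A minimal sketch of the "UI talks to the engine over a non-X socket" idea,
using a Unix domain socket and a made-up one-line text protocol. The socket
path and the LOAD command are illustrative assumptions, not an agreed
protocol.

```cpp
#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_un addr {};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/linuxsampler.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("connect");   // engine not running, or a different socket path
        return 1;
    }

    // Ask the engine to load an instrument on MIDI channel 1 (hypothetical command).
    const char* cmd = "LOAD /samples/piano.gig 1\n";
    write(fd, cmd, std::strlen(cmd));

    char reply[256];
    ssize_t n = read(fd, reply, sizeof(reply) - 1);   // e.g. "OK\n"
    if (n > 0) { reply[n] = '\0'; std::printf("engine replied: %s", reply); }

    close(fd);
    return 0;
}
```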
From: Matthias W. <mat...@in...> - 2002-11-04 21:31:06
On Mon, Nov 04, 2002 at 02:13:16PM +0000, Steve Harris wrote:
> On Tue, Nov 05, 2002 at 12:03:36 +1000, [3] wrote:
> > > So, I think it is better to have separate sub-engines that communicate
> > > with the main engine at a high level (eg. to the sub-engine: "Here is a
> > > bunch of event data ...", from the sub-engine: "I want 8 outputs",
> > > "here is a lump of audio data ...").
> > >
> > > The alternative would be to normalise all the sample formats into one
> > > grand unified sample format and just handle that (I believe that is how
> > > gigasampler works?).
>
> Of course, the counter argument to all this is that writing a full sampler
> engine for every format we want to support fully sucks, no-one probably
> needs all that functionality anyway, and we should just write translators
> onto a common, comprehensive format and live with the slight conversion
> loss. <shrug>

In order to provide the whole feature set that a sample format provides, we
have to represent the parameters in linuxsampler. But that means we already
have a "grand unified sample" system.

We could write a set of specialized functions that handle special features
of a sample format. When a sample set of a certain sample format is used,
the right set of functions is put together while loading the samples (via
function pointers, process lists, ...).

matthias
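A sketch of the "process list" idea above: per-format processing stages
assembled at load time as an array of function pointers and run per audio
block. The stage names and the VoiceState layout are illustrative
assumptions.

```cpp
#include <cstddef>
#include <vector>

struct VoiceState {
    float cutoff = 1.0f;
    float gain   = 1.0f;
};

using ProcessFn = void (*)(VoiceState&, float* buf, std::size_t frames);

void ApplyGain(VoiceState& v, float* buf, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) buf[i] *= v.gain;
}

void ApplyLowpass(VoiceState& v, float* buf, std::size_t n) {
    (void)v; (void)buf; (void)n;   // a format-specific filter would go here
}

// Chosen once, when the sample set is loaded.
std::vector<ProcessFn> BuildProcessList(bool formatHasFilter) {
    std::vector<ProcessFn> list = { &ApplyGain };
    if (formatHasFilter) list.push_back(&ApplyLowpass);
    return list;
}

void RenderBlock(const std::vector<ProcessFn>& list, VoiceState& v,
                 float* buf, std::size_t frames) {
    for (ProcessFn fn : list) fn(v, buf, frames);   // no per-format branching here
}
```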
From: Christian S. <chr...@ep...> - 2002-11-04 21:22:40
On Monday, 4 November 2002 13:16, Steve Harris wrote:
> As discussed on IRC last night, the problem is that some sample formats
> have features that can't easily be implemented with a disk-based generic
> engine, for example the AKAI sample format allows you to vary the start
> point with note-on velocity (though I don't know by how much). I think
> that some hardware samplers allow you to modulate the loop points in
> realtime, though the 3000 series AKAIs cannot, apparently.

It's been a while since I created my last Akai programs, but AFAIK the S3000
series (only regarding this start point) just distinguishes between four
velocities. I think they called them zones, and for each of these 4 zones
you are able to assign an individual 'sample' (each already containing its
fixed loop points) and additional parameters. So it's not that these loop
points are almost random, if that's the problem. But correct me if memory
lies.

I hope there is a way without loading those libraries completely into
memory. Although they're limited to 'just' 32MB, it doesn't take a big
arrangement to fill 256MB of RAM or more. But what about those crossfade
loops? These are essential for small sound libraries to sound natural and
smooth. How many loop points can there be?

BTW, the limit for the S5000/S6000 is 256MB.
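Since crossfade loops come up here, a minimal sketch of the basic technique:
over a crossfade region at the end of the loop, blend the outgoing audio with
the material it will wrap back into, so the seam at the loop point is smooth.
Function and parameter names are illustrative, not any particular sampler's
implementation.

```cpp
#include <cstddef>
#include <vector>

// Linear crossfade loop: assumes at least 'fadeLen' frames of audio exist
// before 'loopStart' (the material the loop end is blended with).
float CrossfadedSample(const std::vector<float>& s, std::size_t pos,
                       std::size_t loopStart, std::size_t loopEnd,
                       std::size_t fadeLen) {
    std::size_t loopLen   = loopEnd - loopStart;
    std::size_t fadeBegin = loopEnd - fadeLen;
    if (pos < fadeBegin || pos >= loopEnd) return s[pos];

    float t = float(pos - fadeBegin) / float(fadeLen);   // 0 -> 1 across the fade
    // At t == 1 this equals the audio just before loopStart, so wrapping back
    // to loopStart on the next frame is continuous.
    return (1.0f - t) * s[pos] + t * s[pos - loopLen];
}
```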
From: Matthias W. <mat...@in...> - 2002-11-04 21:19:14
On Mon, Nov 04, 2002 at 09:30:07PM +0100, Benno Senoner wrote:
> > If you set a reasonable target for a first release (but still with a
> > good, extensible API), you will get there quicker, you will get users
> > for testing earlier, and the developers will be more motivated.
>
> not sure if this will bring us more long-term advantages ... what do
> others say?
> As said, for the beginning the recompiler can be very simple, since we can
> extend it later without needing to introduce radical changes.

Though the idea of the recompiler seems appealing to me, I'd like to see
some results against a "standard", hardcoded solution that uses techniques
like inlining, function pointers, ... The disadvantages of compilation time
and the need for a working compilation environment shouldn't outweigh the
benefit of a few CPU cycles less.

> Regarding JACK we will probably need to use the in-process model (which is
> actually not used much AFAIK) in order to achieve latencies on par with
> direct output, so this needs further research.

Well, this means we have to provide GUI implementations for every graphics
toolkit that is used by the available sequencers.
If it's right that processes and threads are handled very similarly in the
Linux kernel, there shouldn't be a lot of performance difference between the
in-process and out-of-process model; anyone know more about that?

> My tests with direct OSS output show that it is possible to achieve 3 msec
> latency on a PII+ with the sampler, so we want to get those numbers out of
> JACK too, so we need to test it first in a direct-output environment and
> then in conjunction with JACK (or better, implement both backends from the
> beginning and allow switching via the command line).

I think the target should be to achieve a latency of one JACK cycle. That
means in one cycle the MIDI events get read and the audio data is prepared
in a buffer. In the next JACK cycle the processed audio data is copied into
the shared memory segment provided by JACK. In the case of 48kHz and 64
samples/cycle this means 1.3 msec to finish. Well, in fact it is less,
because there should be room for other JACK clients ...

> regarding the AKAI samples: Steve says akai samplers were quite limited in
> terms of RAM availability (32-64MB) and since akai samplers allow some
> funny stuff like modulating the loop points I was wondering what you think
> about not using disk streaming for this format.

Or caching enough audio data to cover the modulation range, which might
impact RAM usage.

matthias
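The 1.3 msec figure above is just frames divided by sample rate. A quick
sketch of the per-cycle budget at a few buffer sizes (the list of sizes is
only an example):

```cpp
#include <cstdio>

int main() {
    const double rate = 48000.0;                 // sample rate in Hz
    const int sizes[] = { 32, 64, 128, 256 };    // frames per JACK cycle
    for (int frames : sizes)
        std::printf("%3d frames -> %.2f ms per cycle\n",
                    frames, 1000.0 * frames / rate);
    // 64 frames -> 1.33 ms: the whole render must fit well inside this,
    // leaving headroom for the other JACK clients in the same graph.
    return 0;
}
```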
From: Paul K. <pau...@ma...> - 2002-11-04 19:57:29
I'm going through today's digest, so this includes several people's
replies...

Steve Harris <S.W...@ec...> wrote:
> > > We should be able to stream DLS in and out of the engine as well.
>
> I don't get the impression that DLS is anywhere near rich enough to do
> this job, it would need to be something pretty expressive.
>
> Gigasampler uses DLS 2 + proprietary extensions, doesn't it?

Phil Kerr <phi...@el...> wrote:
> It's a question of balance between using a widely used standard with some
> limitations over a custom format which may not fully interoperate.

The nice thing about DLS 2 is it's *designed* to have proprietary extensions
added, but other applications should be able to ignore the parts they don't
understand and still get at the keymapping and maybe the envelopes and other
simple stuff.

Benno Senoner <be...@ga...> wrote:
> regarding the AKAI samples: Steve says akai samplers were quite limited in
> terms of RAM availability (32-64MB) and since akai samplers allow some
> funny stuff like modulating the loop points I was wondering what you think
> about not using disk streaming for this format.
> How about the S5000/6000 series? What is the maximum RAM configuration?
> Do they allow nasty loop point modulation too?

I don't think Akai ever had loop point modulation? Except while editing
samples, anyway. The S5000/S6000 is I think 128 or 256MB max RAM, but they
do have disk streaming. Don't think they have loop point modulation but I've
not used one.

> And since we speak about looping I was wondering how looping is handled in
> most hardware and software samplers:
>
> do most of them use loop-until-release (eg the looping part is looped and
> after release the sample gets played as if it were not looped)

Most samplers give an option of loop until release or infinite looping. Akai
have (depending on model) up to 8 loops per sample, each with a different
number of repeats, but nobody ever used more than 2 loops...

> When implementing these kinds of looping techniques in a disk streaming
> sampler, the first looping technique requires caching a region (let's say
> 64k samples) past the final loop point in order to give the engine time to
> refill the ring buffers from disk. This means the memory consumption
> almost doubles for looped samples over one-shot samples. With very large
> sample libraries this could mean that RAM becomes scarce. (But I am not
> sure if large libraries of looped samples exist.)

I expect there are some libraries that are big and looped (e.g. sustained
string sections) as the sound designers like to push the boundaries of what
can be achieved.

Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
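To put rough numbers on the "memory consumption almost doubles" point quoted
above: a back-of-the-envelope sketch, using the 64k-frame cache region from
Benno's mail. The library size of 1000 looped samples and the 16-bit stereo
format are assumptions for illustration only.

```cpp
#include <cstdio>

int main() {
    const long loopedSamples  = 1000;      // looped samples in a library (assumed)
    const long framesPerCache = 64 * 1024; // extra frames cached past the loop end
    const long bytesPerFrame  = 2 * 2;     // 16-bit stereo (assumed)

    long perSample = framesPerCache * bytesPerFrame;          // ~256 KB each
    std::printf("extra cache per looped sample: %ld KB\n", perSample / 1024);
    std::printf("for %ld looped samples: %ld MB\n",
                loopedSamples, loopedSamples * perSample / (1024 * 1024));
    return 0;
}
```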
From: Richard A. S. <rs...@bi...> - 2002-11-04 18:49:39
On Mon, 4 Nov 2002 17:15:48 +0000, Steve Harris wrote:
> > Why am I in favour of a modular design (graphical signal editor etc.)
> > instead of hardcoding (as Juan L. proposed) popular engines?
> > Well, assume you write a GIG loader and engine. But now you discover the
> > giga engine is too limited. Fire up the signal editor, enhance the
> > signal routing/processing features of the engine, compile and play your
> > .GIG files with the new enhanced engine.
>
> That's very compelling, but my feeling is that it's better to support
> features like that in principle, but target a more reasonable feature set
> for an initial release. My experience of large projects with big ambitions
> is that people lose interest and they never get finished.

It will follow the old 80-20 rule. Probably only 80% of the feature set will
see any widespread use, but implementing that final 20% will take another
80% effort and usually complicates things quite a bit.

> If you set a reasonable target for a first release (but still with a good,
> extensible API), you will get there quicker, you will get users for
> testing earlier, and the developers will be more motivated.
>
> I would like to see a first milestone of a realtime, JACKed sampler that
> can receive MIDI and play a subset of GIG samples from disk, with a clean
> and extensible design.

I agree. Many a time I have designed what I thought to be the perfect system
for the task at hand, only to be blindsided by either some nasty real-world
effect or some other part of the system that can't really live up to its
original design specs. Most systems are just too complicated to be able to
get a good handle on all the possible input values. I would fully expect
that several modules will need to be overhauled somewhere after the first
stable releases are available.

Now we seem to have a wealth of experienced developers on this list, and
Benno has already flushed out most of the issues with streaming from the
disk, so I think things will be fairly well designed from the start, but
there's always something lurking in a dark corner. Small, simple,
incremental goals seem to be a very good choice to me. And if we try to
minimize the inter-module dependence, what does need to be overhauled
shouldn't be too painful.

Also, the earlier some sort of engine exists, the sooner UI development can
progress. The UI for this thing is one area where we can really come up with
some innovative stuff. I personally don't have a whole lot of experience
with samplers (disk or hardware), but my studio friend Mike Bailey has
bunches, and he's got what I think are some really good UI ideas for
managing large sample sets and studio-type use. Or at least I've listened to
him pick apart the UI of most of the commercial products. I'd like to try
and build him a box that can take the place of his current GigaStudio setup.

--
Richard A. Smith        Bitworks, Inc.
rs...@bi...             479.846.5777 x104
Sr. Design Engineer     http://www.bitworks.com
From: Benno S. <be...@ga...> - 2002-11-04 18:27:11
Steve wrote:
> > Well, assume you write a GIG loader and engine. But now you discover the
> > giga engine is too limited. Fire up the signal editor, enhance the
> > signal routing/processing features of the engine, compile and play your
> > .GIG files with the new enhanced engine.
>
> That's very compelling, but my feeling is that it's better to support
> features like that in principle, but target a more reasonable feature set
> for an initial release. My experience of large projects with big ambitions
> is that people lose interest and they never get finished.

I agree about the unfinished projects, I tend to lose interest too, but this
time I will concentrate only on this project, since it would be very nice to
be able to deliver something that is usable in the real world.

Regarding the recompiler being too big a task: I would not say that. The
streaming engine is more or less ready; just take the routines and
algorithms and encapsulate them in a cleaner framework. For the signal
processing part we have most of the stuff ready to use (reverb, filters,
LADSPA plugins) and I don't think that writing a framework that is able to
build simple signal networks is such a big task.

Speaking of GIG, I think it will take a bit of time to figure out all the
params embedded in the file (although Ruben van Royen and Paul K. have
already done some work in this field) and map them correctly to the engine
(correct emulation of envelopes, filters, etc.).

> If you set a reasonable target for a first release (but still with a good,
> extensible API), you will get there quicker, you will get users for
> testing earlier, and the developers will be more motivated.

Not sure if this will bring us more long-term advantages ... what do others
say? As said, for the beginning the recompiler can be very simple, since we
can extend it later without needing to introduce radical changes.

> OK, you will have to throw away some code when you want to generalise the
> engine, but I think this is very much worth it. Especially as you will
> learn things from the first (or in Benno's case second :) implementation.
>
> I would like to see a first milestone of a realtime, JACKed sampler that
> can receive MIDI and play a subset of GIG samples from disk, with a clean
> and extensible design.

The problem is this clean and extensible design ... personally I think it is
represented by the recompiler. Perhaps I am wrong, that's why we are
discussing these issues on the list here.

> PS I'm not sure that I agree with supporting OSS, it imposes design
> decisions that don't make much sense in the long term.
> - Steve

Juan says that when using JACK, instead of exporting only the stereo output
we should export each MIDI channel (so that we can route instruments to
arbitrary destinations). This is OK for JACK and introduces flexibility, but
I do not see supporting OSS as a problem: just export the stereo output and
use a simple OSS backend that simulates the callback (a fleshed-out sketch
follows below this mail):

    while(1) {
        data = audio_call_back();
        output_oss(data);
    }

The preferred method will of course be JACK, but I think for testing,
debugging and tuning purposes OSS and ALSA are fine (plus, in case you want
to dedicate a machine only to sampling, you can work without JACK).

Regarding JACK we will probably need to use the in-process model (which is
actually not used much AFAIK) in order to achieve latencies on par with
direct output, so this needs further research.

My tests with direct OSS output show that it is possible to achieve 3 msec
latency on a PII+ with the sampler, so we want to get these numbers out of
JACK too, so we need to test it first in a direct-output environment and
then in conjunction with JACK (or better, implement both backends from the
beginning and allow switching via the command line).

Regarding the AKAI samples: Steve says akai samplers were quite limited in
terms of RAM availability (32-64MB), and since akai samplers allow some
funny stuff like modulating the loop points, I was wondering what you think
about not using disk streaming for this format. How about the S5000/6000
series? What is the maximum RAM configuration? Do they allow nasty loop
point modulation too?

And since we speak about looping, I was wondering how looping is handled in
most hardware and software samplers:

do most of them use loop-until-release (i.e. the looping part is looped and
after release the sample gets played as if it were not looped)

or infinite looping: when you release, the loop is still played but the
volume fades to zero within a certain time (the release part is not used)?

When implementing these kinds of looping techniques in a disk streaming
sampler, the first looping technique requires caching a region (let's say
64k samples) past the final loop point in order to give the engine time to
refill the ring buffers from disk. This means the memory consumption almost
doubles for looped samples over one-shot samples. With very large sample
libraries this could mean that RAM can become scarce (but I am not sure if
large libraries of looped samples exist).

The second looping method is easy to implement, since you can let the disk
thread refill the ring buffers with the looped areas, and then when you
release the key the volume simply fades to zero within a certain amount of
time.

Anyway, it is not hard to implement loop-until-release, and the engine can
easily be designed to support both, in order to give the user the best of
both worlds.

If you have experience with hardware or software samplers, please share your
thoughts on the issues I mentioned on the list.

thanks,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
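A slightly fleshed-out version of the OSS loop above, as a compilable sketch:
open /dev/dsp, then repeatedly pull one rendered block from the engine and
write it out. RenderBlock() is a stand-in for the engine's real callback and
the 64-frame fragment size is just an example.

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>
#include <cstdint>

// Stub standing in for the engine's render callback: fills the block with
// silence; a real engine would mix all active voices here.
static void RenderBlock(std::int16_t* out, int frames) {
    for (int i = 0; i < frames * 2; ++i) out[i] = 0;
}

int main() {
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0) return 1;

    int fmt = AFMT_S16_LE, channels = 2, rate = 48000;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    const int frames = 64;            // small block for low latency
    std::int16_t buf[frames * 2];     // interleaved stereo

    for (;;) {                        // the "simulated callback" loop
        RenderBlock(buf, frames);     // data = audio_call_back();
        write(fd, buf, sizeof(buf));  // output_oss(data); blocks until consumed
    }
}
```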
From: Steve H. <S.W...@ec...> - 2002-11-04 17:15:55
On Mon, Nov 04, 2002 at 06:42:38 +0100, Benno Senoner wrote:
> > I don't get the impression that DLS is anywhere near rich enough to do
> > this job, it would need to be something pretty expressive.
>
> That's why I am against this one-size-fits-all sample format.
> At least if we keep the engines separate, we do not risk making mistakes
> in designing a format that later turns out to be a PITA because of design
> errors.

I agree, I just didn't want to be accused of proposing an enormous task ;)

> Why am I in favour of a modular design (graphical signal editor etc.)
> instead of hardcoding (as Juan L. proposed) popular engines?
> Well, assume you write a GIG loader and engine. But now you discover the
> giga engine is too limited. Fire up the signal editor, enhance the signal
> routing/processing features of the engine, compile and play your .GIG
> files with the new enhanced engine.

That's very compelling, but my feeling is that it's better to support
features like that in principle, but target a more reasonable feature set
for an initial release. My experience of large projects with big ambitions
is that people lose interest and they never get finished.

If you set a reasonable target for a first release (but still with a good,
extensible API), you will get there quicker, you will get users for testing
earlier, and the developers will be more motivated.

OK, you will have to throw away some code when you want to generalise the
engine, but I think this is very much worth it. Especially as you will learn
things from the first (or in Benno's case second :) implementation.

I would like to see a first milestone of a realtime, JACKed sampler that can
receive MIDI and play a subset of GIG samples from disk, with a clean and
extensible design.

PS I'm not sure that I agree with supporting OSS, it imposes design
decisions that don't make much sense in the long term.

- Steve
From: Phil K. <phi...@el...> - 2002-11-04 15:44:59
DLS 2 is a better option than DLS 1, although the specs for DLS 1 are
downloadable for free from the MMA.

It's a question of balance between using a widely used standard with some
limitations over a custom format which may not fully interoperate.

-P

Quoting Steve Harris <S.W...@ec...>:
> On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote:
> > Quoting Steve Harris <S.W...@ec...>:
> > > Of course, the counter argument to all this is that writing a full
> > > sampler engine for every format we want to support fully sucks, no-one
> > > probably needs all that functionality anyway, and we should just write
> > > translators onto a common, comprehensive format and live with the
> > > slight conversion loss. <shrug>
> >
> > Sounds like a job for DLS at the core, and then have import/export
> > modules support Akai and other native sampler formats.
> >
> > We should be able to stream DLS in and out of the engine as well.
>
> I don't get the impression that DLS is anywhere near rich enough to do
> this job, it would need to be something pretty expressive.
>
> Gigasampler uses DLS 2 + proprietary extensions, doesn't it?
>
> - Steve

--
Phil Kerr          Centre for Music Technology
Researcher         Glasgow University
phi...@el...       T (+44) 141 330 5740

Without music, life would be a mistake.  Friedrich Nietzsche
From: Benno S. <be...@ga...> - 2002-11-04 15:39:41
> On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote:
> > Quoting Steve Harris <S.W...@ec...>:
>
> I don't get the impression that DLS is anywhere near rich enough to do
> this job, it would need to be something pretty expressive.

That's why I am against this one-size-fits-all sample format.
At least if we keep the engines separate, we do not risk making mistakes in
designing a format that later turns out to be a PITA because of design
errors.

Why am I in favour of a modular design (graphical signal editor etc.)
instead of hardcoding (as Juan L. proposed) popular engines?
Well, assume you write a GIG loader and engine. But now you discover the
giga engine is too limited. Fire up the signal editor, enhance the signal
routing/processing features of the engine, compile and play your .GIG files
with the new enhanced engine.

It does not sound THAT bad to me. Comments?

> Gigasampler uses DLS 2 + proprietary extensions, doesn't it?

Yes, I think so (since the simple DLS2 parsing code that Paul Kellett posted
has no problems extracting samples and keyzones).

> - Steve

Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
From: Steve H. <S.W...@ec...> - 2002-11-04 15:29:13
On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote:
> Quoting Steve Harris <S.W...@ec...>:
> > Of course, the counter argument to all this is that writing a full
> > sampler engine for every format we want to support fully sucks, no-one
> > probably needs all that functionality anyway, and we should just write
> > translators onto a common, comprehensive format and live with the slight
> > conversion loss. <shrug>
>
> Sounds like a job for DLS at the core, and then have import/export modules
> support Akai and other native sampler formats.
>
> We should be able to stream DLS in and out of the engine as well.

I don't get the impression that DLS is anywhere near rich enough to do this
job, it would need to be something pretty expressive.

Gigasampler uses DLS 2 + proprietary extensions, doesn't it?

- Steve
From: Phil K. <phi...@el...> - 2002-11-04 14:56:04
Quoting Steve Harris <S.W...@ec...>:
> Of course, the counter argument to all this is that writing a full sampler
> engine for every format we want to support fully sucks, no-one probably
> needs all that functionality anyway, and we should just write translators
> onto a common, comprehensive format and live with the slight conversion
> loss. <shrug>
>
> - Steve

Sounds like a job for DLS at the core, and then have import/export modules
support Akai and other native sampler formats.

We should be able to stream DLS in and out of the engine as well.

-P

--
Phil Kerr          Centre for Music Technology
Researcher         Glasgow University
phi...@el...       T (+44) 141 330 5740

Without music, life would be a mistake.  Friedrich Nietzsche
From: Benno S. <be...@ga...> - 2002-11-04 14:55:15
Hi Xavier!

We plan to use multiple audio backends, thus OSS, ALSA and JACK will be
supported. It is actually relatively easy, since you just add an audio
output module for your desired audio API.

We basically need help in every area, but for the beginning we would like to
concentrate on building a good sample engine that is modular and can take
advantage of compilation techniques. After the code comes into shape we can
think about adding GUIs, new DSP algorithms, etc.

What is your area of expertise or interest? (Probably you told this to me a
long time ago, but unfortunately I forgot it.)

Xavier, be sure to join our mailing list! (I CCed my response to your mail.)

cheers,
Benno

> Hi, I've just seen your project on SourceForge, I have a few questions:
>
> - do you plan to support OSS, ALSA, LADSPA?
> - what is needed right now: MIDI I/O, DSP, GUI?
>
> Xavier.

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
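The "just add an audio output module per API" idea could look something like
the minimal C++ sketch below. The interface, the NullOutput stub and the
factory are hypothetical; real OSS/ALSA/JACK backends would each implement
the same three methods.

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>
#include <string>

// Engine-facing interface: one subclass per audio API (OSS, ALSA, JACK, ...).
class AudioOutput {
public:
    virtual ~AudioOutput() = default;
    virtual bool Open(int sampleRate, int channels, int fragFrames) = 0;
    virtual void Write(const float* interleaved, std::size_t frames) = 0;
    virtual void Close() = 0;
};

// Stub backend standing in for a real OSS/ALSA/JACK module.
class NullOutput : public AudioOutput {
public:
    bool Open(int rate, int ch, int frag) override {
        std::printf("open: %d Hz, %d ch, %d frames/fragment\n", rate, ch, frag);
        return true;
    }
    void Write(const float*, std::size_t frames) override {
        std::printf("would write %zu frames\n", frames);
    }
    void Close() override {}
};

// Chosen once at startup, e.g. from a --output=oss|alsa|jack option.
std::unique_ptr<AudioOutput> CreateOutput(const std::string& name) {
    (void)name;                          // real code would switch on the name
    return std::make_unique<NullOutput>();
}

int main() {
    auto out = CreateOutput("oss");
    float silence[64 * 2] = {};
    if (out->Open(48000, 2, 64)) out->Write(silence, 64);
    out->Close();
}
```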
From: Steve H. <S.W...@ec...> - 2002-11-04 14:13:21
On Tue, Nov 05, 2002 at 12:03:36 +1000, [3] wrote:
> > So, I think it is better to have separate sub-engines that communicate
> > with the main engine at a high level (eg. to the sub-engine: "Here is a
> > bunch of event data ...", from the sub-engine: "I want 8 outputs", "here
> > is a lump of audio data ...").
> >
> > The alternative would be to normalise all the sample formats into one
> > grand unified sample format and just handle that (I believe that is how
> > gigasampler works?).

Of course, the counter argument to all this is that writing a full sampler
engine for every format we want to support fully sucks, no-one probably
needs all that functionality anyway, and we should just write translators
onto a common, comprehensive format and live with the slight conversion
loss. <shrug>

- Steve
From: [3] <ma...@ve...> - 2002-11-04 14:03:44
heh. thanks

Steve Harris wrote:
> [Peter, I'm assuming you meant to mail this to the list, I'm replying to
> the list anyway ;)]
>
> As discussed on IRC last night, the problem is that some sample formats
> have features that can't easily be implemented with a disk-based generic
> engine, for example the AKAI sample format allows you to vary the start
> point with note-on velocity (though I don't know by how much). I think
> that some hardware samplers allow you to modulate the loop points in
> realtime, though the 3000 series AKAIs cannot, apparently.

wouldn't you be better off loading those samples straight into memory?

> So, I think it is better to have separate sub-engines that communicate
> with the main engine at a high level (eg. to the sub-engine: "Here is a
> bunch of event data ...", from the sub-engine: "I want 8 outputs", "here
> is a lump of audio data ...").

mmm...

> Though obviously data transfer would be callback based or something.
>
> The alternative would be to normalise all the sample formats into one,
> grand unified sample format and just handle that (I believe that is how
> gigasampler works?).
>
> I suspect that is less efficient though, and it doesn't allow for specific
> support for styles of sample playback.

amen brother...

> I think it would make sense to preparse the event data, rather than trying
> to handle raw midi. Maybe using something like the OSC event stream?
>
> anyone know of other preparsed event formats?

..snip

cheers

[3]
ma...@ve...
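One way to "preparse the event data" rather than pass raw MIDI bytes around:
a small tagged event struct, timestamped in frames within the current audio
cycle. The layout below is purely illustrative, not a proposed linuxsampler
format.

```cpp
#include <cstdint>

// Raw MIDI is decoded once at the input; the engine only ever sees typed,
// frame-timestamped events like these.
enum class EventType : std::uint8_t { NoteOn, NoteOff, ControlChange, PitchBend };

struct Event {
    EventType     type;
    std::uint32_t frame;     // offset within the current audio cycle
    std::uint8_t  channel;   // 0..15
    std::uint8_t  a;         // key / controller number
    std::uint16_t b;         // velocity / controller value / bend amount
};

// Decode one 3-byte MIDI note-on message into the preparsed form.
Event DecodeNoteOn(const std::uint8_t midi[3], std::uint32_t frame) {
    return Event{ EventType::NoteOn, frame,
                  static_cast<std::uint8_t>(midi[0] & 0x0F),   // status low nibble = channel
                  midi[1],                                     // key
                  midi[2] };                                   // velocity
}
```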