From: Matthias W. <mat...@in...> - 2002-11-04 21:19:14
|
On Mon, Nov 04, 2002 at 09:30:07PM +0100, Benno Senoner wrote: > > > > If you set a reasonable target for a first release (but still with a > > good, extensible API), you will get there quicker, you will get users > for > > testing earlier, and the developers will be more motivated. > > not sure if this will bring us more long-term advantages ... what do > others say ? > As said for the beginning the recompiler can be very simple since we can > extend it later > without needing to introduce radical changes. > Though the idea of the recompiler seems appealing to me, I'd like to see some results against a "standard - hardcoded" solution that uses techniques like inlining, function pointers ... The disadvantages of compilation time and the need for a working compilation environment must not outweigh the benefit of saving a few CPU cycles. > Regarding JACK we will probably need to use the in-process model (which > is actually not used much AFAIK) > in order to achieve latencies at par with direct output so this needs > further research. Well, this means we have to provide GUI implementations for every graphics toolkit that is used by the available sequencers. If it's right that processes and threads are handled very similarly in the Linux kernel, there should not be much of a performance difference between the in-process and out-of-process models; does anyone know more about that? > My tests with direct OSS output show that it is possible to achieve > 3msec latency on a PII+ with the sampler so > we want to get out these numbers from jack too so we need to test it > first in a direct output environment and then > in conjunction with jack (or better implement both backends from the > beginning and allow you to switch it via cmdline). I think the target should be to achieve a latency of one JACK cycle. That means in one cycle the MIDI events get read and the audio data is prepared in a buffer. 
In the next JACK cycle the processed audio data is copied into the shared memory segment provided by JACK. In case of 48kHz and 64 samples/cycle this means 1.3 msec time to finish. Well, in fact it is less, because there should be room for other JACK clients ... > > regarding the AKAI samples: Steve says akai samplers were quite limited > in terms of RAM availability (32-64MB) > and since akai samplers allow some funny stuff like modulating the loop > points I was wondering what you think about not > using disk streaming for this format. Or cache enough audio data to cover the modulation range, which might impact RAM usage. matthias |
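The 1.3 msec figure above is just the period length at those settings; a quick sanity check of the arithmetic (a minimal sketch — the constants are the values quoted in the mail, not project defaults):

```python
# Period (cycle) latency at the settings discussed above: 48 kHz, 64 frames.
SAMPLE_RATE = 48000
FRAMES_PER_CYCLE = 64

def cycle_latency_ms(frames, rate):
    """Time budget to produce one period of audio, in milliseconds."""
    return frames / rate * 1000.0

print(round(cycle_latency_ms(FRAMES_PER_CYCLE, SAMPLE_RATE), 2))      # → 1.33 (one cycle)
print(round(2 * cycle_latency_ms(FRAMES_PER_CYCLE, SAMPLE_RATE), 2))  # → 2.67 (render + copy-out, as described)
```

This is why the mail calls one cycle the target: rendering in one cycle and handing the buffer to JACK in the next gives a worst case of two periods end to end.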
From: Paul K. <pau...@ma...> - 2002-11-04 19:57:29
|
I'm going through today's digest, so this includes several people's replies... Steve Harris <S.W...@ec...> wrote: > > > We should be able to stream DLS in and out of the engine as well. > > I don't get the impression that DLS is anywhere near rich enough to > do this job, it would need to be something pretty expressive. > > Gigasampler uses DLS 2 + proprietary extensions, doesn't it? Phil Kerr <phi...@el...> wrote: > > It's a question of balance between using a widely used standard with > some limitations over a custom format which may not fully interoperate. The nice thing about DLS 2 is it's *designed* to have proprietary extensions added, but other applications should be able to ignore the parts they don't understand and still get at the keymapping and maybe the envelopes and other simple stuff. Benno Senoner <be...@ga...> wrote: > > regarding the AKAI samples: Steve says akai samplers were quite limited > in terms of RAM availability (32-64MB) and since akai samplers allow some > funny stuff like modulating the loop points I was wondering what you > think about not using disk streaming for this format. > How about the s5000/6000 series ? what is the maximum RAM configuration > ? Do they allow nasty loop point modulation too ? I don't think Akai ever had loop point modulation? Except while editing samples, anyway. S5000/S6000 is I think 128 or 256MB max RAM, but they do have disk streaming. Don't think they have loop point modulation but I've not used one. > And since we speak about looping I was wondering how looping is handled > in most hardware and software samplers: > > do most of them use loop-until-release (eg looping part is looped and > after release the sample gets played as if it was not looped) Most samplers give an option of loop until release or infinite looping. Akai have (depending on model) up to 8 loops per sample, each with a different number of repeats, but nobody ever used more than 2 loops... 
> When implementing these kinds of looping techniques in a disk streaming > sampler, the first looping technique requires caching a region (let's > say 64k samples) > past the final loop point in order to give the engine the time to refill > the ring buffers from disk. This means the memory consumption almost > doubles > for looped samples over oneshot samples. With very large sample > libraries this could mean that RAM can become scarce. (but I am not sure > if large > libraries of looped samples exist). I expect there are some libraries that are big and looped (e.g. sustained string sections) as the sound designers like to push the boundaries of what can be achieved. Paul. _____________________________ m a x i m | digital audio http://mda-vst.com _____________________________ |
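The memory-doubling concern quoted above can be made concrete with a little arithmetic. This is an illustrative sketch; the helper name is hypothetical, and the 64k-frame tail is the figure from the mail:

```python
# RAM cost per looped voice under loop-until-release disk streaming.
# voice_cache_bytes is a hypothetical helper, not project code.
def voice_cache_bytes(preload_frames, release_tail_frames, bytes_per_frame=2):
    # The preload region covers the attack while the disk thread spins up;
    # loop-until-release additionally caches a tail past the loop end so the
    # ring buffer can still be refilled after note-off.
    return (preload_frames + release_tail_frames) * bytes_per_frame

one_shot = voice_cache_bytes(64 * 1024, 0)          # no tail needed
looped = voice_cache_bytes(64 * 1024, 64 * 1024)    # tail as large as the preload
print(looped // one_shot)  # → 2, i.e. memory roughly doubles, as the mail notes
```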
From: Richard A. S. <rs...@bi...> - 2002-11-04 18:49:39
|
On Mon, 4 Nov 2002 17:15:48 +0000, Steve Harris wrote: > > Why am I in favour of a modular design (graphical signal editor etc) > > instead of > > hardcoding (as Juan L. proposed) popular engines ? > > Well assume you write a GIG loader and engine. But now you discover the > > giga engine is > > too limited. Fire up the signal editor, enhance the signal > > routing/processing features of the engine, > > compile and play your .GIG files with the new enhanced engine. > > That's very compelling, but my feeling is that it's better to support > features like that in principle, but target a more reasonable feature set > for an initial release. My experience of large projects with big ambitions > is that people lose interest and they never get finished. It will follow the old 80-20 rule. Probably only 80% of the feature set will see any wide-spread use but implementing that final 20% will take another 80% effort and usually complicates things quite a bit. > If you set a reasonable target for a first release (but still with a > good, extensible API), you will get there quicker, you will get users for > testing earlier, and the developers will be more motivated. > > I would like to see a first milestone of a realtime, jacked sampler that > can receive midi and play a subset of GIG samples from disk, with a clean > and extensible design. I agree. Many a time I have designed what I thought to be the perfect system for the task at hand, only to be blindsided by either some nasty real-world effect or some other part of the system that can't really live up to its original design specs. Most systems are just too complicated to be able to get a good handle on all the possible input values. I would fully expect that several modules will need to be overhauled somewhere after the first stable releases are available. 
Now we seem to have a wealth of experienced developers on this list and Benno has already flushed out most of the issues with streaming from the disk, so I think things will be fairly well designed from the start, but there's always something lurking in a dark corner. Small, simple, incremental goals seem to be a very good choice to me. And if we try to minimize the inter-module dependence, what does need to be overhauled shouldn't be too painful. Also, the earlier some sort of engine exists the sooner UI development can progress. The UI for this thing is one area where we can really come up with some innovative stuff. I personally don't have a whole lot of experience with samplers (disk or hardware) but my studio friend Mike Bailey has bunches, and he's got what I think are some really good UI ideas about managing large sample sets and studio type use. Or at least I've listened to him pick apart the UI for most of the commercial products. I'd like to try and build him a box that can take the place of his current GigaStudio setup. -- Richard A. Smith Bitworks, Inc. rs...@bi... 479.846.5777 x104 Sr. Design Engineer http://www.bitworks.com |
From: Benno S. <be...@ga...> - 2002-11-04 18:27:11
|
Steve wrote: > > Well assume you write a GIG loader and engine. But now you discover the > > giga engine is > > too limited. Fire up the signal editor, enhance the signal > > routing/processing features of the engine, > > compile and play your .GIG files with the new enhanced engine. > > That's very compelling, but my feeling is that it's better to support > features like that in principle, but target a more reasonable feature set > for an initial release. My experience of large projects with big ambitions > is that people lose interest and they never get finished. I agree about the unfinished projects; I tend to lose interest too, but this time I will concentrate only on this project since it would be very nice to be able to deliver something that is usable in the real world. Regarding the recompiler being too big a task: I would not say that. The streaming engine is more or less ready; just take the routines and algorithms and encapsulate them into a cleaner framework. For the signal processing part we have most stuff ready to use (reverb, filters, ladspa plugs) and I don't think that writing a framework that is able to build simple signal networks is such a big task. Speaking of GIG, I think it will take a bit of time to figure out all params embedded in the file (although Ruben van Royen and Paul K. have already done some work in this field) and map them correctly to the engine (correct emulation of envelopes, filters etc). > > If you set a reasonable target for a first release (but still with a > good, extensible API), you will get there quicker, you will get users for > testing earlier, and the developers will be more motivated. not sure if this will bring us more long-term advantages ... what do others say ? As said, for the beginning the recompiler can be very simple since we can extend it later without needing to introduce radical changes. 
> > OK, you will have to throw away some code when you want to generalise the > engine, but I think this is very much worth it. Especially as you will learn > things from the first (or in Benno's case second :) implementation. > > I would like to see a first milestone of a realtime, jacked sampler that > can receive midi and play a subset of GIG samples from disk, with a clean > and extensible design. The problem is this clean and extensible design ... personally I think it is represented by the recompiler. Perhaps I am wrong; that's why we are discussing these issues on the list here. > PS I'm not sure that I agree with supporting OSS, it imposes design > decisions that don't make much sense in the long term. > - Steve Juan says when using JACK, instead of exporting only the stereo output we should export each MIDI channel (so that we can route instruments to arbitrary destinations). This is ok for JACK and introduces flexibility, but I do not see supporting OSS as a problem: just export the stereo output and use a simple OSS backend that simulates callback: while(1) { data=audio_call_back(); output_oss(data); } The preferred method will of course be JACK, but I think for testing, debugging and tuning purposes OSS and ALSA are ok. (plus in case you want to dedicate a machine only for sampling you can work without jack). Regarding JACK we will probably need to use the in-process model (which is actually not used much AFAIK) in order to achieve latencies at par with direct output so this needs further research. My tests with direct OSS output show that it is possible to achieve 3msec latency on a PII+ with the sampler so we want to get out these numbers from jack too so we need to test it first in a direct output environment and then in conjunction with jack (or better implement both backends from the beginning and allow you to switch it via cmdline). 
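Benno's one-line pseudocode can be fleshed out a little. This is a minimal sketch of the pull-style loop he describes; `audio_callback`, `oss_write` and `running` are hypothetical stand-ins, not real OSS or engine calls:

```python
# Sketch of the "simulated callback" OSS backend from the mail above.
# In a real backend, oss_write would be a blocking write() to /dev/dsp;
# the blocking behaviour of the device write is what paces the engine.
def run_oss_backend(audio_callback, oss_write, running):
    while running():
        buf = audio_callback()  # pull one period of audio from the engine
        oss_write(buf)          # blocking device write sets the tempo
```

With JACK the same `audio_callback` would instead be registered as the process callback, which is why keeping the engine behind a pull interface makes both backends cheap to support.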
regarding the AKAI samples: Steve says akai samplers were quite limited in terms of RAM availability (32-64MB) and since akai samplers allow some funny stuff like modulating the loop points I was wondering what you think about not using disk streaming for this format. How about the s5000/6000 series ? what is the maximum RAM configuration ? Do they allow nasty loop point modulation too ? And since we speak about looping I was wondering how looping is handled in most hardware and software samplers: do most of them use loop-until-release (eg the looping part is looped and after release the sample gets played as if it was not looped) or infinite looping: when you release the key the loop is still played but the volume fades to zero within a certain time. (release part not used) When implementing these kinds of looping techniques in a disk streaming sampler, the first looping technique requires caching a region (let's say 64k samples) past the final loop point in order to give the engine the time to refill the ring buffers from disk. This means the memory consumption almost doubles for looped samples over one-shot samples. With very large sample libraries this could mean that RAM can become scarce. (but I am not sure if large libraries of looped samples exist). The second looping method is easy to implement since you can let the disk thread refill the ring buffers with the looped areas and then, when you release the key, the volume simply fades to zero within a certain amount of time. Anyway, it is not hard to implement loop-until-release, and the engine can easily be designed to support both in order to give the user the best of both worlds. If you have experience with hardware or software samplers please share your thoughts on the issues I mentioned on the list. thanks, Benno -- http://linuxsampler.sourceforge.net Building a professional grade software sampler for Linux. Please help us designing and developing it. |
From: Steve H. <S.W...@ec...> - 2002-11-04 17:15:55
|
On Mon, Nov 04, 2002 at 06:42:38 +0100, Benno Senoner wrote: > > I don't get the impression that DLS is anywhere near rich enough to do > this > > job, it would need to be something pretty expressive. > > That's why I am against this one-size-fits-all sample format. > At least if we keep the engines separate we do not risk making mistakes > in designing > a format that later turns out to be a PITA because of design errors. I agree, I just didn't want to be accused of proposing an enormous task ;) > Why am I in favour of a modular design (graphical signal editor etc) > instead of > hardcoding (as Juan L. proposed) popular engines ? > Well assume you write a GIG loader and engine. But now you discover the > giga engine is > too limited. Fire up the signal editor, enhance the signal > routing/processing features of the engine, > compile and play your .GIG files with the new enhanced engine. That's very compelling, but my feeling is that it's better to support features like that in principle, but target a more reasonable feature set for an initial release. My experience of large projects with big ambitions is that people lose interest and they never get finished. If you set a reasonable target for a first release (but still with a good, extensible API), you will get there quicker, you will get users for testing earlier, and the developers will be more motivated. OK, you will have to throw away some code when you want to generalise the engine, but I think this is very much worth it. Especially as you will learn things from the first (or in Benno's case second :) implementation. I would like to see a first milestone of a realtime, jacked sampler that can receive midi and play a subset of GIG samples from disk, with a clean and extensible design. PS I'm not sure that I agree with supporting OSS, it imposes design decisions that don't make much sense in the long term. - Steve |
From: Phil K. <phi...@el...> - 2002-11-04 15:44:59
|
DLS 2 is a better option than DLS 1, although the specs for DLS 1 are downloadable free from the MMA. It's a question of balance between using a widely used standard with some limitations over a custom format which may not fully interoperate. -P Quoting Steve Harris <S.W...@ec...>: > On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote: > > Quoting Steve Harris <S.W...@ec...>: > > > > > Of course, the counter argument to all this is that writing a full > > > sampler engine for every format we want to support fully sucks, > no-one > > > probably needs all that functionality anyway, and we should just > write > > > translators onto a common, comprehensive format and live with the > slight > > > conversion loss. <shrug> > > > > Sounds like a job for DLS at the core and then have import/export > modules > > support Akai and other native sampler formats. > > > > We should be able to stream DLS in and out of the engine as well. > > I don't get the impression that DLS is anywhere near rich enough to do > this > job, it would need to be something pretty expressive. > > Gigasampler uses DLS 2 + proprietary extensions, doesn't it? > > - Steve > > > ------------------------------------------------------- > This SF.net email is sponsored by: ApacheCon, November 18-21 in > Las Vegas (supported by COMDEX), the only Apache event to be > fully supported by the ASF. http://www.apachecon.com > _______________________________________________ > Linuxsampler-devel mailing list > Lin...@li... > https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel > -- Phil Kerr Centre for Music Technology Researcher Glasgow University phi...@el... T (+44) 141 330 5740 Without music, life would be a mistake. Friedrich Nietzsche |
From: Benno S. <be...@ga...> - 2002-11-04 15:39:41
|
> > On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote: > > Quoting Steve Harris <S.W...@ec...>: > I don't get the impression that DLS is anywhere near rich enough to do this > job, it would need to be something pretty expressive. That's why I am against this one-size-fits-all sample format. At least if we keep the engines separate we do not risk making mistakes in designing a format that later turns out to be a PITA because of design errors. Why am I in favour of a modular design (graphical signal editor etc) instead of hardcoding (as Juan L. proposed) popular engines ? Well, assume you write a GIG loader and engine. But now you discover the giga engine is too limited. Fire up the signal editor, enhance the signal routing/processing features of the engine, compile and play your .GIG files with the new enhanced engine. It does not sound THAT bad to me. Comments ? > > Gigasampler uses DLS 2 + proprietary extensions, doesn't it? Yes, I think so (since the simple DLS2 parsing code that Paul Kellett posted has no problems extracting samples and keyzones). > > - Steve Benno -- http://linuxsampler.sourceforge.net Building a professional grade software sampler for Linux. Please help us designing and developing it. |
From: Steve H. <S.W...@ec...> - 2002-11-04 15:29:13
|
On Mon, Nov 04, 2002 at 04:13:17 +0000, Phil Kerr wrote: > Quoting Steve Harris <S.W...@ec...>: > > > Of course, the counter argument to all this is that writing a full > > sampler engine for every format we want to support fully sucks, no-one > > probably needs all that functionality anyway, and we should just write > > translators onto a common, comprehensive format and live with the slight > > conversion loss. <shrug> > > Sounds like a job for DLS at the core and then have import/export modules > support Akai and other native sampler formats. > > We should be able to stream DLS in and out of the engine as well. I don't get the impression that DLS is anywhere near rich enough to do this job, it would need to be something pretty expressive. Gigasampler uses DLS 2 + proprietary extensions, doesn't it? - Steve |
From: Phil K. <phi...@el...> - 2002-11-04 14:56:04
|
Quoting Steve Harris <S.W...@ec...>: > Of course, the counter argument to all this is that writing a full > sampler engine for every format we want to support fully sucks, no-one > probably needs all that functionality anyway, and we should just write > translators onto a common, comprehensive format and live with the slight > conversion loss. <shrug> > > - Steve Sounds like a job for DLS at the core and then have import/export modules support Akai and other native sampler formats. We should be able to stream DLS in and out of the engine as well. -P -- Phil Kerr Centre for Music Technology Researcher Glasgow University phi...@el... T (+44) 141 330 5740 Without music, life would be a mistake. Friedrich Nietzsche |
From: Benno S. <be...@ga...> - 2002-11-04 14:55:15
|
Hi Xavier! We plan to use multiple audio backends; thus OSS, ALSA and JACK will be supported. It is actually relatively easy since you just add an audio output module for your desired audio API. We basically need help in every area, but for the beginning we would like to concentrate on building a good sample engine that is modular and can take advantage of compilation techniques. After the code comes into shape we can think about adding GUIs, new DSP algorithms etc. What is your area of expertise or interest ? (Probably you told this to me a long time ago but unfortunately I forgot it). Xavier, be sure to join our mailing list ! (I CCed my response to your mail) cheers, Benno > > Hi, i've just seen your project on sourceforge, i have a few questions: > > -do you plan to support OSS, Alsa, ladspa ? > -what is needed right now: MIDI IO, DSP, GUI ?? > > > > > Xavier. -- http://linuxsampler.sourceforge.net Building a professional grade software sampler for Linux. Please help us designing and developing it. |
From: Steve H. <S.W...@ec...> - 2002-11-04 14:13:21
|
On Tue, Nov 05, 2002 at 12:03:36 +1000, [3] wrote: > >So, I think it is better to have separate sub-engines that communicate > >with the main engine at a high level (eg. to the sub-engine: "Here is a > >bunch of event data ...", from the sub-engine: "I want 8 outputs", "here > >is a lump of audio data ..."). > >The alternative would be to normalise all the sample formats into one, > >grand unified sample format and just handle that (I believe that is how > >gigasampler works?). Of course, the counter argument to all this is that writing a full sampler engine for every format we want to support fully sucks, no-one probably needs all that functionality anyway, and we should just write translators onto a common, comprehensive format and live with the slight conversion loss. <shrug> - Steve |
From: [3] <ma...@ve...> - 2002-11-04 14:03:44
|
heh. thanks Steve Harris wrote: >[Peter, I'm assuming you meant to mail this to the list, I'm replying to > the list anyway ;)] > >As discussed on IRC last night, the problem is that some sample formats >have features that can't easily be implemented with a disk based generic >engine, for example the AKAI sample format allows you to vary the start >point with note on velocity (though I don't know by how much). I think that >some hardware samplers allow you to modulate the loop points in realtime, >though the 3000 series AKAIs cannot apparently. > wouldn't you be better off loading those samples straight into memory? > >So, I think it is better to have separate sub-engines that communicate >with the main engine at a high level (eg. to the sub-engine: "Here is a >bunch of event data ...", from the sub-engine: "I want 8 outputs", "here >is a lump of audio data ..."). > mmm... > >Though obviously data transfer would be callback based or something. > >The alternative would be to normalise all the sample formats into one, >grand unified sample format and just handle that (I believe that is how >gigasampler works?). > > >I suspect that is less efficient though, and it doesn't allow for >specific support for styles of sample playback. > amen brother... > >I think it would make sense to preparse the event data, rather than trying >to handle raw midi. Maybe using something like the OSC event stream? > >anyone know of other preparsed event formats? > ..snip cheers [3] ma...@ve... |
From: Peter <ma...@ve...> - 2002-11-04 13:48:46
|
From: Steve H. <S.W...@ec...> - 2002-11-04 12:16:32
|
[Peter, I'm assuming you meant to mail this to the list, I'm replying to the list anyway ;)] As discussed on IRC last night, the problem is that some sample formats have features that can't easily be implemented with a disk based generic engine, for example the AKAI sample format allows you to vary the start point with note on velocity (though I don't know by how much). I think that some hardware samplers allow you to modulate the loop points in realtime, though the 3000 series AKAIs cannot apparently. So, I think it is better to have separate sub-engines that communicate with the main engine at a high level (eg. to the sub-engine: "Here is a bunch of event data ...", from the sub-engine: "I want 8 outputs", "here is a lump of audio data ..."). Though obviously data transfer would be callback based or something. The alternative would be to normalise all the sample formats into one, grand unified sample format and just handle that (I believe that is how gigasampler works?). I suspect that is less efficient though, and it doesn't allow for specific support for styles of sample playback. I think it would make sense to preparse the event data, rather than trying to handle raw midi. Maybe using something like the OSC event stream? Anyone know of other preparsed event formats? - Steve On Mon, Nov 04, 2002 at 09:06:08 +1000, Peter wrote: > i personally like the idea of a sampler construction kit... > or at least a modularised sample engine.. > > my agenda is more towards loop sampling/re-sequencing... > normal event handling in samplers (especially the Akais) doesn't lend > itself to that kind of stuff > i'll probably be more inclined to work towards the yamaha style of > things (ish)... > > i've been playing around with some ideas over the past few months > > i'd like for the sampler disk streaming, audio i/o and midi channel > routing (eg. noteon/off, pitch, mod NOT cc or rpn/nrpn data) to be handled > by the base engine > aka i/o engine > > then, when a file is loaded onto a layer (midi-channel) > the base class calls the respective sampler extension.. > which handles everything on the channel, from sample-loading to > note-on-off handling to audio and even midi outputs.. depending on > the type.. > > that way you could have, say, an instrument extension, which could load > dls's or soundfonts > an akai extension that loads akai files > etc.etc. > umm.. > i guess that's enough for the time being > cheers > [3] |
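The engine/sub-engine boundary Steve sketches can be written down as an interface. This is a hypothetical shape in Python (the project itself targets C/C++; all class and method names here are invented for illustration):

```python
# Hypothetical sketch of the engine/sub-engine split described above: the
# main engine owns MIDI routing and audio I/O, while each format-specific
# sub-engine answers only high-level requests.
class SubEngine:
    def output_count(self):
        """'I want 8 outputs' -- how many audio ports to allocate."""
        raise NotImplementedError

    def process_events(self, events):
        """'Here is a bunch of event data ...' -- preparsed, not raw MIDI."""
        raise NotImplementedError

    def render(self, nframes):
        """'Here is a lump of audio data ...' -- one buffer per output."""
        raise NotImplementedError

class AkaiSubEngine(SubEngine):
    """Toy example of one format-specific sub-engine."""
    def __init__(self):
        self.pending = []

    def output_count(self):
        return 8

    def process_events(self, events):
        self.pending.extend(events)

    def render(self, nframes):
        # A real sub-engine would mix its active voices here; we return silence.
        return [[0.0] * nframes for _ in range(self.output_count())]
```

The point of the split is that loaders and playback quirks (start-point-by-velocity, loop modulation) live entirely behind this boundary, so the main engine never needs format-specific knowledge.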
From: Benno S. <be...@ga...> - 2002-11-03 15:09:25
|
Hi, I'm forwarding an excerpt of a mail Paul Kellett sent me. He said he will help us out with sample library importing (he is the guy that wrote the AKAI sample format docs you find on the web) and other DSP issues. Yay ! cheers, Benno ---- > I discussed some issues on IRC with Steve H. and Juan L. about supporting > multiple sample formats and the debate is if it is more convenient to > implement standard engines (AKAI s1000 , gigasamp. etc) in a static way > or use the audio unit compiler we have in mind and design these "standard" > engines using a graphical editor. Not sure I understand this... But there are 2 stages which can be separate: - loading the program(patch) and sample information - accessing the sample data, which could just be a list of where to find the sample data and what format the data is in, and not care if it is an individual Akai sample, or part of a .GIG file. Maybe there could be an intermediate program format, which any foreign formats need to be converted to, but the sample data stays in the original file(s). I'm not sure how important export to foreign formats is? It looks like lots of sample libraries for Giga are using all its triggering options, to have different "pages" of samples available by MIDI control, so this would be important to people wanting an alternative to Gigasampler. Paul ----- -- http://linuxsampler.sourceforge.net Building a professional grade software sampler for Linux. Please help us designing and developing it. |
From: Steve H. <S.W...@ec...> - 2002-11-02 18:43:51
|
On Sat, Nov 02, 2002 at 07:37:36 +0100, Benno Senoner wrote: > The question is: do we build a single "one-size-fits-all" engine and > write loaders for various sample formats > trying to fit the original sample parameters (filters,envelopes etc) in > such a way that they sound as close as on the > original or is it better to implement separate engines for each type of > sample library (eg akai s1000, SF2, GIGA, etc). > associated with the related sample loader. I think that the best approach is to make the sample loaders mini engines; all the things like how the sampler handles note off etc. will vary a lot from sampler to sampler. If we just make the engine provide the MIDI routing and parsing, and deal with jack i/o stuff, then the individual sub engines can do whatever they like*. It also means we can get up and running with a single sampler format without compromising the design, as long as the interface between the main engine and the sub engines is general enough. If the sub engines want to use recompilation techniques then the main engine can just export an API to handle that. * ...although this makes me think, playing devil's advocate, maybe we should not be aiming for one giant engine that will handle every sample format known to man, maybe we should make a "sampler construction kit", that allows people to bolt on their sample loading code and sampler emulating code and build a sampler out of that. It would encourage lots of simple, special purpose tools and avoid toolkit issues - Steve |
From: Benno S. <be...@ga...> - 2002-11-02 16:34:43
Hi, yesterday I had a discussion on IRC with Steve H. and Juan L. about some issues regarding LinuxSampler.

Since our goal is to provide a sampler that can work with a large number of sample library formats, we need to implement engines that can reproduce the samples so that they sound as if they were played on the original hardware/software sampler (or at least come very close).

The question is: do we build a single "one-size-fits-all" engine and write loaders for various sample formats, trying to fit the original sample parameters (filters, envelopes etc.) in such a way that they sound as close as possible to the original, or is it better to implement separate engines for each type of sample library (e.g. Akai S1000, SF2, GIGA, etc.), each associated with the related sample loader?

Since the plan is to use compilation techniques in order to allow very flexible signal flow diagrams while providing speed that is on par with hardcoded designs, my question was whether it would be better to implement these commercial sampler designs (AKAI, SF2, GIG etc.) using our signal network builder (it will probably become a graphical editor) without writing any (or almost zero) C/C++ code. Of course we need an associated loader, and it is probably hard to avoid handcoding the individual loaders, but for the engine I think one could skip the implementation step once a powerful signal network builder is in place.

Juan says that we should use the signal network compiler only for future designs and experimental stuff for now, and start by providing hardcoded versions of the sampler engines mentioned above, converting them to a signal network at a later stage (when the network compiler is sufficiently evolved). While this could provide some short-term advantages (faster results, perhaps more developers jumping on the bandwagon), it is IMHO a bit of a waste of time.
I do not have that much experience writing very large audio applications, but my proof-of-concept linuxsampler code (see home page), while still small and despite being organized in C++ classes, started to look unclean and hard to maintain, since every design decision is embedded deep in the code. Generating notes from MIDI events requires performing several tasks that are dependent on each other. I'm thinking for example about:

handling the keymap:
- which notes are on/off
- handling multiple note-ons for the same key on a certain channel (e.g. does a note-off mute the first note or the last one? We could make this configurable by using linked lists assigned to each key on the MIDI scale)
- sustain pedal
- different key/velocity/controller zones triggering different samples with different parameters

voice generation stuff:
- sample playback (from RAM/disk)
- looping (needs to work in synergy with the sample playback)
- modulation, enveloping, filters and FXes that become active based on the instrument's preset
- etc.

Within the code I'd like to keep these things separate in a clean way, but I think that is not so simple with hardcoded designs, since you (or at least I) tend to optimize things and perform several tasks within the same routine, thus effectively merging things that belong to different layers. This is why I'm asking you folks what the right way to do this looks like in your opinion. Waiting for opinions from everyone!

Josh: regarding SF2 and DLS importing, your help is very welcome; perhaps you could comment on how to best solve the multiple-sample-format importing problem.

PS: regarding the name change of the project from EVO to LinuxSampler, I did it for several reasons: first, the name LinuxSampler makes it clear that it is a sampler for Linux; second, it will make it easier for users and developers to find us on search engines, since the term "evo" brings up lots of unrelated results. Plus I think the "Linux" in LinuxSampler should advertise Linux as a viable audio platform,
but nothing stops us from porting it to, let's say, MacOS X too, since it is a Unix derivative... perhaps generating interest in Linux among the pro-audio guys, since they are almost all Mac users.

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
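[Editorial note] The configurable note-off behaviour Benno proposes (a linked list assigned to each key on the MIDI scale, releasing either the first or the last sounding note) might be sketched like this; names and structure are invented for illustration:

```cpp
#include <array>
#include <list>

// Which voice a note-off should release when several note-ons have
// stacked up on the same key.
enum class NoteOffMode { ReleaseOldest, ReleaseNewest };

class KeyMap {
public:
    explicit KeyMap(NoteOffMode m) : mode(m) {}

    // Register a new voice sounding on this key.
    void noteOn(int key, int voiceId) { keys[key].push_back(voiceId); }

    // Returns the ID of the voice to release, or -1 if none is active.
    int noteOff(int key) {
        auto& l = keys[key];
        if (l.empty()) return -1;
        int id;
        if (mode == NoteOffMode::ReleaseOldest) { id = l.front(); l.pop_front(); }
        else                                    { id = l.back();  l.pop_back();  }
        return id;
    }

    bool anyActive(int key) const { return !keys[key].empty(); }

private:
    NoteOffMode mode;
    std::array<std::list<int>, 128> keys; // one list per MIDI key
};
```

Sustain-pedal and zone logic would sit on top of this: a note-off with the pedal held would simply defer the `noteOff()` call until the pedal is released.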
From: Josh G. <jg...@us...> - 2002-11-01 20:21:51
So you are resuming development of evo? Cool. I wanted to contact Benno again, in particular since we spoke a year ago about having a SoundFont loading library. I have gone through several revisions of this idea and currently have a rather untested code base (but a good API, in my opinion) of a library called libInstPatch. It's based on GObject/glib 2.0 and has a fully multi-threaded patch object system. The low-level load/save routines still need an API rework, but you can check out the API at the Swami developers site: http://swami.sourceforge.net/devel.php

libInstPatch is rather full-featured because I'm also using it as the basis of Swami (the new name for Smurf, if you didn't know already). My plans are to add other wavetable patch file formats like DLS2, etc. (right now it supports SoundFont 2.01). It might be nice to use this as a basis for linuxsampler as well, and perhaps have a plugin to use linuxsampler with Swami (is that the name, or is it evo?). So what do you think? Cheers!

Josh Green
From: Steve H. <S.W...@ec...> - 2002-11-01 19:39:17
On Fri, Nov 01, 2002 at 08:16:02 +0100, Christian Schoenebeck wrote:
> But this kind of static routing is not very user friendly, is it? It's not
> very convenient having to recompile every time a small piece of the routing
> is changed. I would definitely sacrifice some ms of latency for the sake of
> easy and intuitive usability.

My idea was that the static graphs would be built from LADSPA sources, so the user can still edit graphs in realtime, but when they are happy with it they can hit "compile" and get some cycles back.

- Steve
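[Editorial note] The "hit compile" step Steve describes could, in a toy form, be a code generator that flattens the edited graph into one C function, which the compiler can then optimise as a single unit instead of the engine interpreting the graph node by node each cycle. This sketch models only a chain of gain stages, not real LADSPA plugins, and every name is invented:

```cpp
#include <sstream>
#include <string>
#include <vector>

// One node of a (drastically simplified) signal graph.
struct GainNode { float gain; };

// Flatten a chain of gain nodes into the source of a single C function.
std::string emitChainAsC(const std::vector<GainNode>& chain) {
    std::ostringstream out;
    out << "void process(float* buf, unsigned n) {\n"
        << "    for (unsigned i = 0; i < n; ++i) {\n"
        << "        float s = buf[i];\n";
    for (const auto& node : chain)
        out << "        s *= (float)" << node.gain << ";\n";
    out << "        buf[i] = s;\n"
        << "    }\n"
        << "}\n";
    return out.str();
}
```

The point of the flattening is that constants are baked in and the per-node dispatch overhead disappears, which is where the "cycles back" come from.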
From: Christian S. <chr...@ep...> - 2002-11-01 19:15:47
On Monday, 28 October 2002 at 20:54, Benno Senoner wrote:
> I was toying with the idea of using some sort of recompilation
> techniques where the user can graphically design the sampler's signal
> flow (routing, modulation, FXes etc.), which in turn gets translated into
> C code that gets loaded as a .so file and executed within the sampler's
> main app. This would make for a very flexible engine while retaining
> most of the speed of hardcoded ones.

But this kind of static routing is not very user friendly, is it? It's not very convenient having to recompile every time a small piece of the routing is changed. I would definitely sacrifice some ms of latency for the sake of easy and intuitive usability.
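[Editorial note] The recompilation pipeline Benno outlines (generated C source, compiled to a shared object, loaded back into the running sampler) can be sketched with dlopen()/dlsym(). The paths, the `cc` invocation and the function name are illustrative assumptions, and this requires a Unix system with a C compiler installed:

```cpp
#include <cstdlib>
#include <dlfcn.h>
#include <fstream>
#include <string>

// Signature of the generated entry point.
typedef void (*ProcessFn)(float* buf, unsigned n);

// Dump generated C to disk, compile it to a .so, and resolve the
// process() function from it. Returns nullptr on any failure.
ProcessFn compileAndLoad(const std::string& cSource) {
    const char* src = "/tmp/ls_gen.c";
    const char* lib = "/tmp/ls_gen.so";

    std::ofstream(src) << cSource;                         // 1. dump source
    std::string cmd = std::string("cc -O2 -shared -fPIC -o ") + lib + " " + src;
    if (std::system(cmd.c_str()) != 0) return nullptr;     // 2. compile

    void* h = dlopen(lib, RTLD_NOW);                       // 3. load
    if (!h) return nullptr;
    return (ProcessFn)dlsym(h, "process");                 // 4. resolve
}
```

A real engine would of course compile in the background and swap the new function in between audio cycles, so Christian's latency concern only applies to the moment the user explicitly asks for a recompile.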
From: Antti B. <ant...@mi...> - 2002-10-31 12:24:05
Benno Senoner wrote:
> Hi,
> thanks for offering to help us.
> We plan to decouple the sampler engine and GUI completely, so you can treat
> the sampler engine as an application able to run on an embedded device
> :-)

I forgot to mention I've been planning an audio + small LCD display (you know, those text-only displays) superserver, with a GUI system of its own. This way it would be a lot easier to drop LinuxSampler in. More about it later.

-agb
From: Benno S. <be...@ga...> - 2002-10-31 11:50:39
Hi, thanks for offering to help us. We plan to decouple the sampler engine and GUI completely, so you can treat the sampler engine as an application able to run on an embedded device :-)

Regarding the development stage: almost 2 years ago I wrote some proof-of-concept code that can stream 60 voices from disk in real time on PII hardware at sub-5 ms latencies, and this is a good starting base for the new engine, which will use recompilation techniques for maximum flexibility and speed.

The sampler we have in mind is quite an evil beast, since it is basically a combination of real-time execution cores, disk streaming, efficient DSP algorithms, networking layers for remote control, possibly handling of clustered environments (you know, musicians are CPU-hungry people :-) ), and GUI stuff. This means we need many experts in many areas, so your help is very welcome.

cheers,
Benno

On Thu, 2002-10-31 at 09:44, Stoll, Jake wrote:
> Hi all,
>
> I'm interested in helping out with the LinuxSampler development. I've just
> subscribed, so I'm not sure at what stage you're at with the development
> (very early, I'm guessing).
>
> I've got general C/C++ programming ability (mostly C experience with control
> systems and embedded devices), but haven't had much Linux GUI or audio
> programming experience. The amount of time I can commit varies due to work.
>
> Regards,
> Jake.
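[Editorial note] Disk-streaming samplers of the kind Benno describes commonly rest on a lock-free single-producer/single-consumer ring buffer between the disk thread and the realtime audio thread. A minimal sketch of that building block (illustrative only, not LinuxSampler's actual code):

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <vector>

// SPSC ring buffer of sample frames: the disk thread writes decoded
// frames in, the audio thread reads them out, and neither ever blocks.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t size) : buf(size), head(0), tail(0) {}

    std::size_t readAvail() const {
        return (head.load() - tail.load() + buf.size()) % buf.size();
    }
    std::size_t writeAvail() const {
        return buf.size() - 1 - readAvail(); // one slot kept empty
    }

    // Disk thread: push up to n frames, returns how many fit.
    std::size_t write(const float* src, std::size_t n) {
        std::size_t todo = std::min(n, writeAvail());
        std::size_t h = head.load();
        for (std::size_t i = 0; i < todo; ++i)
            buf[(h + i) % buf.size()] = src[i];
        head.store((h + todo) % buf.size());
        return todo;
    }

    // Audio thread: pop up to n frames, returns how many were read.
    std::size_t read(float* dst, std::size_t n) {
        std::size_t todo = std::min(n, readAvail());
        std::size_t t = tail.load();
        for (std::size_t i = 0; i < todo; ++i)
            dst[i] = buf[(t + i) % buf.size()];
        tail.store((t + todo) % buf.size());
        return todo;
    }

private:
    std::vector<float> buf;
    std::atomic<std::size_t> head, tail; // write and read positions
};
```

To get note-on latency below the disk seek time, such engines typically also keep the first chunk of every sample preloaded in RAM, so playback starts instantly while the disk thread refills the ring buffer behind it.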
From: Stoll, J. <St...@lo...> - 2002-10-31 08:45:10
Hi all,

I'm interested in helping out with the LinuxSampler development. I've just subscribed, so I'm not sure at what stage you're at with the development (very early, I'm guessing).

I've got general C/C++ programming ability (mostly C experience with control systems and embedded devices), but haven't had much Linux GUI or audio programming experience. The amount of time I can commit varies due to work.

Regards,
Jake.
From: Juan L. <co...@re...> - 2002-10-30 01:54:17
Here's a control flow proposal for linuxsampler. It's based on my previous experiences with multitimbral/polyphonic sound synthesis applications... (some background at http://freshmeat.net/~reduz)

The control flow is basically how "objects control the state of other objects" (data flow usually goes in the opposite direction). Moving this diagram to a lower level (C++), it can be taken as "which class knows/includes/uses which class" :)

The * in EDITOR means that I'd like to develop that area further in depth.

Attached is the diagram in 3 formats: DIA, EPS and PNG (only 100k overall anyway), so you can read it with whatever app fits best for you.

Cheers!

Juan Linietsky
From: Frank N. <bea...@we...> - 2002-10-30 00:16:07
confirm 172218