From: Mitchell <mgjohn@u.washington.edu> - 2003-03-13 18:57:55

Actually, I have been experiencing similar frustrations. I don't really like the framework of the current distribution of LinuxSampler, since I would rather work in C (just a personal preference), and I don't understand where they want to go with the code, so I haven't contributed anything and don't really know how. I am on the very edge of writing my own sampler framework, so maybe we could start our own homebrew project and later see what could be incorporated back into LinuxSampler. Otherwise, if someone would explain where and how I could contribute, I could work on LinuxSampler and contribute some code. However, from reading the source that's there so far, it seems like most of the organization is still in people's heads :)

-Mitchell

You can talk to me on IRC too... I'm mitchell on #lad on freenode.net and on #foo on irc.silverninja.net.

On Thu, Mar 13, 2003 at 01:56:29PM +0100, Christian Schoenebeck wrote:
> Sorry guys for being impatient, but is there any progress at all, or are
> you all too busy to work on this project?
>
> As nobody answered if and how I could participate in this project, I
> can just make a suggestion to e.g. write an Akai and/or GigaSampler
> import module, as I haven't seen that anyone's working on it yet.
>
> But due to my impatience and demand to be able to use my Gig and Akai
> library again (unfortunately I already sold my Akai sampler *sigh*), I
> have already considered starting to write a usable sample engine on my
> own in case this project can be considered dead.
>
> How about you, Mitchell?
>
> Regards,
> Christian
From: Josh G. <jg...@us...> - 2003-03-13 18:54:21

On Thu, 2003-03-13 at 04:56, Christian Schoenebeck wrote:
> Sorry guys for being impatient, but is there any progress at all, or are
> you all too busy to work on this project?
>
> As nobody answered if and how I could participate in this project, I
> can just make a suggestion to e.g. write an Akai and/or GigaSampler
> import module, as I haven't seen that anyone's working on it yet.

I'm working on it right now, actually, with libinstpatch. I've already written the DLS runtime objects, the RIFF parser, and the DLS loader. All of it is untested, and it will be a little while before these files are actually editable in Swami, as there are many other things that need to be done with the Swami 1.0 branch before it is usable (lots and lots of new features being added). If you would like to see this code for some reason, it's in the swami-1-0 branch in Swami CVS (http://swami.sourceforge.net); expect brokenness.

> But due to my impatience and demand to be able to use my Gig and Akai
> library again (unfortunately I already sold my Akai sampler *sigh*), I
> have already considered starting to write a usable sample engine on my
> own in case this project can be considered dead.
>
> How about you, Mitchell?
>
> Regards,
> Christian

Cheers.
Josh Green
From: Mark K. <mk...@co...> - 2003-03-13 14:45:18

> Sorry guys for being impatient, but is there any progress at all, or are
> you all too busy to work on this project?
>
> As nobody answered if and how I could participate in this project, I
> can just make a suggestion to e.g. write an Akai and/or GigaSampler
> import module, as I haven't seen that anyone's working on it yet.
>
> But due to my impatience and demand to be able to use my Gig and Akai
> library again (unfortunately I already sold my Akai sampler *sigh*), I
> have already considered starting to write a usable sample engine on my
> own in case this project can be considered dead.
>
> How about you, Mitchell?
>
> Regards,
> Christian

Christian,

Impatience is cool. It makes things happen. There hasn't been much going on here. I'm just a user, not a developer, but GigaStudio is my main sample player, and I'd love to move my libraries over to Linux one day. So...

1) If someone wrote a gig file reader, I'd test my 100+ libraries and make sure they worked.

2) There is some very limited code out there for this project. I do not know what it does, but if you accomplished #1, I'd help see if we could load them into #2.

Anyway, it's a quiet reflector right now, but I'd love to see it wake up.

Cheers,
Mark
From: Christian S. <chr...@ep...> - 2003-03-13 12:56:29

Sorry guys for being impatient, but is there any progress at all, or are you all too busy to work on this project?

As nobody answered if and how I could participate in this project, I can just make a suggestion to e.g. write an Akai and/or GigaSampler import module, as I haven't seen that anyone's working on it yet.

But due to my impatience and demand to be able to use my Gig and Akai library again (unfortunately I already sold my Akai sampler *sigh*), I have already considered starting to write a usable sample engine on my own in case this project can be considered dead.

How about you, Mitchell?

Regards,
Christian
From: Mitchell <mgjohn@u.washington.edu> - 2003-03-09 18:26:05

I'm still pretty free too; next quarter I'll be taking some time off from school, so I'll have a lot more time if someone wanted to work on something.

On Sun, Mar 09, 2003 at 06:58:37PM +0100, Christian Schoenebeck wrote:
> Hi everybody!
>
> I'm wondering how work is going on and if there's any task I can
> participate in, as I'm expecting to have a lot of spare time in the
> next months.
>
> Among other things, I'm experienced in C++ and of course in making music
> with hardware equipment as well as software-based sequencers and tools
> (mostly Windows in the past). Unfortunately I have no experience in DSP
> programming yet, although I'm very, very interested in this particular
> subject and therefore would appreciate the opportunity to learn and
> gain experience.
>
> So if there's something I can do, drop me a note!
>
> Christian
From: Christian S. <chr...@ep...> - 2003-03-09 17:57:36

Hi everybody!

I'm wondering how work is going on and if there's any task I can participate in, as I'm expecting to have a lot of spare time in the next months.

Among other things, I'm experienced in C++ and of course in making music with hardware equipment as well as software-based sequencers and tools (mostly Windows in the past). Unfortunately I have no experience in DSP programming yet, although I'm very, very interested in this particular subject and therefore would appreciate the opportunity to learn and gain experience.

So if there's something I can do, drop me a note!

Christian
From: Steve H. <S.W...@ec...> - 2003-02-01 21:16:15

On Sat, Feb 01, 2003 at 12:46:25 -0800, M. Johnson wrote:
> I read your page and I'm sold. I've been trying to write a Linux sampler
> for a while, and have learned extensively about OSS (which turned out
> not to help that much).
>
> I'd love to help with your project, and I think I have a thing or two I
> could contribute. Ideally I'd like to spend about 9 hours a week working
> on this, since I have a contract with my university for a certain amount
> of time dedicated to working on open source audio software.

Great :) Things are a little stalled around here at the moment, as the main coders are all swamped by other work, but things should pick up in a few weeks.

> By the way, do you guys have some kind of IRC presence? If not, I think
> that would really help me; mailing lists are nice, but for me IRC is
> better ;)

Yes, irc.freenode.net #lad. It's a bit quiet sometimes, though.

- Steve
From: M. J. <mgjohn@u.washington.edu> - 2003-02-01 20:46:28

I read your page and I'm sold. I've been trying to write a Linux sampler for a while, and have learned extensively about OSS (which turned out not to help that much).

I'd love to help with your project, and I think I have a thing or two I could contribute. Ideally I'd like to spend about 9 hours a week working on this, since I have a contract with my university for a certain amount of time dedicated to working on open source audio software.

One thing I can offer in particular: I came up with a great method of coding really fast bandpass, lowpass, and highpass filters by simulating the electronics that do the same things. I have written a proof-of-concept oscillator which generates exponentially enveloped sine waves in two multiplies, two additions, and two assignments per sample per wave. This is available on my personal website: http://www.silverninja.net/~mitchell/ The sample output mp3 demonstrates this by playing the binary of emacs through oscillators at the partials of a bell, which acts like a parallel array of bandpass filters. I hope I can help out.

By the way, do you guys have some kind of IRC presence? If not, I think that would really help me; mailing lists are nice, but for me IRC is better ;)

-Mitchell
From: Juan L. <co...@re...> - 2003-01-23 00:30:50

On 22 Jan 2003 20:08:13 +0000 Bob Ham <no...@us...> wrote:
> Hi all,
>
> Some fixes attached for the CVS code. This is as far as I got before
> the build tried to compile code that mentioned non-existent functions:
>
> sound_driver_jack.cpp: In member function `virtual int
> Sound_Driver_JACK::init()':
> sound_driver_jack.cpp:172: no matching function for call to
> `Mixer_Base::set_mix_stereo(bool)'
>
> I'll just say one thing: use std:: !!!!!!!!!!!
>
> Bob

I guess you are right; I'm kind of lazy with that. I just do

#include <something>
using std::something;

everywhere, hehe. However, I am unable to test with Debian (well, I guess I am, but it will break everything if I update) because it uses the old gcc. Anyway, is the latest code on the CVS? I was unable to upload (SourceForge hates me), and my provider was inactive for a long time. Benno and Steve are quite busy; I guess it's up to someone to start writing an engine using the existing code. I won't have much time to work on it again for a month or two :(

Cheers!

Juan Linietsky
From: Bob H. <no...@us...> - 2003-01-22 20:11:03

Hi all,

Some fixes attached for the CVS code. This is as far as I got before the build tried to compile code that mentioned non-existent functions:

sound_driver_jack.cpp: In member function `virtual int Sound_Driver_JACK::init()':
sound_driver_jack.cpp:172: no matching function for call to `Mixer_Base::set_mix_stereo(bool)'

I'll just say one thing: use std:: !!!!!!!!!!!

Bob
From: Josh G. <jg...@us...> - 2003-01-22 02:26:32

On Tue, 2003-01-21 at 13:14, David Olofson wrote:
> Well, MIDI may not suffer as much from unbounded latency as audio,
> but I'm not willing to take chances. We're talking about *unbounded*
> worst case latency here, and it's really as bad as it sounds. If you
> *can* have memory management stall MIDI processing for half a second
> in the middle of a live performance, it *will* happen sooner or
> later. (You know Murphy...)

Half a second? I'm sure that rarely occurs. I can't speak for Python's memory management, but much of the critical stuff in Swami uses glib memory chunks. These allow for an initial allocation block and then only allocate more if and when needed (as long as you pre-allocate enough, it shouldn't happen). If I ever do get around to creating a sequencing subsystem, using Python functions will be completely optional. Users who use this feature will probably understand the potential for problems. When just playing around with composing music, I don't think it's much of an issue. When one wants to do real-time stuff, all the Python functions can be rendered to a MIDI buffer with explicit time stamps (those that don't take real-time input, of course). Currently, I'm more interested in nice functionality than sub-millisecond latency. This can always be optimized at a later date.

> Either way, Audiality runs all event processing in the same context
> as the audio processing, so I can't realistically use anything that
> isn't RT safe anyway. Even the slightest deadline misses would cause
> audible drop-outs.
>
> > I'm not yet fully familiar with
> > using Python embedded in a program, but I'm sure there is probably
> > a way to compile script source into object code. Anyways..
>
> That might work, but I suspect it will only improve throughput
> without making worst case latencies bounded. If the compiled code
> still uses malloc(), garbage collection and other non-deterministic
> stuff, you have gained next to nothing WRT RT reliability.
>
> //David Olofson - Programmer, Composer, Open Source Advocate

Cheers.
Josh Green
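[Editor's note: the pre-allocation strategy Josh attributes to glib's memory chunks can be illustrated without glib. Below is a hypothetical fixed-size free-list pool, not glib's actual API: one malloc() happens up front at (non-RT) setup time, after which allocation and release on the audio/MIDI path touch only a pointer chain, never the system allocator. It is a single-threaded sketch; a real RT design would add the appropriate synchronization.]

```c
#include <stdlib.h>
#include <stddef.h>

/* Each free cell doubles as a linked-list node. */
typedef struct pool_cell { struct pool_cell *next; } pool_cell;

typedef struct {
    void      *block;      /* the single up-front allocation */
    pool_cell *free_list;  /* LIFO chain of free cells */
} fixed_pool;

/* Setup time (non-RT): allocate everything once.  Returns 0 on success. */
int pool_init(fixed_pool *p, size_t cell_size, size_t n_cells)
{
    if (cell_size < sizeof(pool_cell)) cell_size = sizeof(pool_cell);
    p->block = malloc(cell_size * n_cells);
    if (!p->block) return -1;
    p->free_list = NULL;
    for (size_t i = 0; i < n_cells; i++) {
        pool_cell *c = (pool_cell *)((char *)p->block + i * cell_size);
        c->next = p->free_list;
        p->free_list = c;
    }
    return 0;
}

/* O(1), no syscalls.  Returns NULL when the pre-allocated cells run
 * out instead of growing: bounded worst case, unlike plain malloc(). */
void *pool_alloc(fixed_pool *p)
{
    pool_cell *c = p->free_list;
    if (c) p->free_list = c->next;
    return c;
}

void pool_free(fixed_pool *p, void *mem)
{
    pool_cell *c = mem;
    c->next = p->free_list;
    p->free_list = c;
}

void pool_destroy(fixed_pool *p)
{
    free(p->block);
    p->block = NULL;
    p->free_list = NULL;
}
```

"Pre-allocate enough" then translates to choosing n_cells for the worst case (e.g. maximum polyphony times events per voice), which is exactly what makes the latency bounded.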
From: Josh G. <jg...@us...> - 2003-01-22 02:15:51

On Tue, 2003-01-21 at 15:35, Mark Knecht wrote:
<cut>
> Couple these two features with the velocity sensitive samples that we
> were discussing yesterday, and FluidSynth then approaches a lot of the
> functionality of something like GSt. (Missing ADSRs, filters, etc. on
> each voice, but practically speaking I don't have a lot of time to use
> those much anyway.)

Umm, you should probably read up on SoundFont files, because they actually contain two DAHDSR (Delay/Attack/Hold/Decay/Sustain/Release) envelopes: one controls volume, and the other controls the lowpass filter or pitch. There are also two LFOs: one controls pitch, filter cutoff, and volume; the other can control only pitch. There are reverb, chorus, and tuning parameters as well. All of this is layered in a Preset/Instrument/Sample tree fashion to allow for overriding and offsetting parameters.

Also, Swami and FluidSynth have support for modulators (most credit goes to FluidSynth). Modulators allow one to connect arbitrary MIDI controllers (and other things like note-on velocity, MIDI note number, pitch bender, etc.) to almost any SoundFont effect to control it in real time. Anything else missing?

If you would like to read an intro I wrote on SoundFont files, you can find it in the Docs section of the Swami web site: http://swami.sourceforge.net/docs.php

> Thanks for all the help!
>
> Cheers,
> Mark

Sure thing. I hope you get a chance to see what these two projects can do for you :)

Cheers.
Josh Green
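[Editor's note: to make the DAHDSR stages mentioned above concrete, here is an illustrative piecewise-linear envelope curve. Real SoundFont rendering specifies the stage durations in timecents and uses exponential decay segments, so treat this only as a sketch of the stage ordering; the release stage, which starts at note-off rather than at a fixed time, is omitted.]

```c
/* Illustrative DAHDSR shape: level in [0,1] as a function of time t
 * (seconds) since note-on, for a note held past all timed stages.
 * Delay -> Attack -> Hold -> Decay -> Sustain; Release is note-off
 * driven and not modeled here. */
typedef struct {
    double delay;    /* seconds at level 0 before the attack begins */
    double attack;   /* seconds to ramp 0 -> 1 */
    double hold;     /* seconds held at 1 */
    double decay;    /* seconds to ramp 1 -> sustain */
    double sustain;  /* level (0..1), not a duration */
    double release;  /* seconds, applied at note-off (unused here) */
} dahdsr;

double dahdsr_level(const dahdsr *e, double t)
{
    if (t < e->delay) return 0.0;
    t -= e->delay;
    if (t < e->attack) return t / e->attack;            /* 0 -> 1 */
    t -= e->attack;
    if (t < e->hold) return 1.0;
    t -= e->hold;
    if (t < e->decay)                                    /* 1 -> sustain */
        return 1.0 + (e->sustain - 1.0) * (t / e->decay);
    return e->sustain;                                   /* until note-off */
}
```

One such envelope would drive the voice volume and a second, independently parameterized one the filter cutoff or pitch, matching the two-envelope layout described in the message above.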
From: David O. <da...@ol...> - 2003-01-21 21:15:02

On Tuesday 21 January 2003 21.11, Josh Green wrote:
> On Tue, 2003-01-21 at 11:42, David Olofson wrote:
> > On Tuesday 21 January 2003 20.32, Josh Green wrote:
> > [...]
> > > velocity. Once the Python binding is completed in Swami (not
> > > really that much to do I think) writing scripts to do these
> > > types of things should be fairly easy :) Cheers.
> >
> > Speaking of scripting, are you planning on actually running
> > Python in RT context, or just using it for "rendering" maps?
>
> For just editing operations, the idea of real time is not of
> importance. For doing real time control of effects and MIDI, it
> might be. It really remains to be seen in practice what kind of
> latency is induced by calling Python code in real time. In the MIDI
> realm it might not matter too much.

Well, MIDI may not suffer as much from unbounded latency as audio, but I'm not willing to take chances. We're talking about *unbounded* worst case latency here, and it's really as bad as it sounds. If you *can* have memory management stall MIDI processing for half a second in the middle of a live performance, it *will* happen sooner or later. (You know Murphy...)

Either way, Audiality runs all event processing in the same context as the audio processing, so I can't realistically use anything that isn't RT safe anyway. Even the slightest deadline misses would cause audible drop-outs.

> I'm not yet fully familiar with
> using Python embedded in a program, but I'm sure there is probably
> a way to compile script source into object code. Anyways..

That might work, but I suspect it will only improve throughput without making worst case latencies bounded. If the compiled code still uses malloc(), garbage collection and other non-deterministic stuff, you have gained next to nothing WRT RT reliability.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
From: Josh G. <jg...@us...> - 2003-01-21 20:11:50

On Tue, 2003-01-21 at 11:42, David Olofson wrote:
> On Tuesday 21 January 2003 20.32, Josh Green wrote:
> [...]
> > velocity. Once the Python binding is completed in Swami (not really
> > that much to do I think) writing scripts to do these types of
> > things should be fairly easy :) Cheers.
>
> Speaking of scripting, are you planning on actually running Python in
> RT context, or just using it for "rendering" maps?

For just editing operations, the idea of real time is not of importance. For doing real time control of effects and MIDI, it might be. It really remains to be seen in practice what kind of latency is induced by calling Python code in real time. In the MIDI realm it might not matter too much. I'm not yet fully familiar with using Python embedded in a program, but I'm sure there is probably a way to compile script source into object code. Anyways..

Josh Green
From: Mark K. <mar...@at...> - 2003-01-21 20:10:56

On Tue, 2003-01-21 at 20:02, Josh Green wrote:
> I will start looking into adding DLS2, .gig, Akai, Gus and
> perhaps GSt (I don't know anything about it yet, so not sure how hard
> that would be).

.gig is GSt; they are one and the same (GSt is GigaStudio, .gig is its file format).
From: Josh G. <jg...@us...> - 2003-01-21 20:02:47

On Tue, 2003-01-21 at 09:51, Mark Knecht wrote:
> Markus,
> As I said earlier, I haven't, in the past, been a big fan of sound
> fonts, not because I know anything about them technically, just because
> the Windows-based SF players haven't sounded as good as GSt. When this
> conversation started yesterday, I was (and still am) a proponent of
> doing this app using the LinuxSampler engine. I'm not asking or even
> suggesting that anyone change anything that exists in the Linux SF app
> space to do what I want to do. On the other hand, I think people are
> asking me to use SFs, and I'm not sure they'll work.
>
> Nor do I know how to map my GSt libraries to one. If Josh wants to
> tackle that problem in Swami, then I would certainly be happy to help
> out and do a bit of testing.

I'm still doing API work in the area of the GUI to make it easy to plug in new patch formats. Once this work is done (and a few other things), I will start looking into adding DLS2, .gig, Akai, Gus and perhaps GSt (I don't know anything about it yet, so I'm not sure how hard that would be).

> Also, please understand I'm not trying to give a complete use of this
> feature in GSt and do not suggest that my list is complete. I know it
> isn't. I'm just hoping that I'm getting the key switch idea across so
> that you developers can make it real. If there is continued interest in
> the LinuxSampler community to support GSt libraries (a stated goal)
> then this threshold will eventually have to be crossed.
>
> In GSt all of the stuff you mention, and more, is supported. I believe
> that in GigaSampler it is not all supported; however, GS is pretty much
> gone now except as a CD in sound card distributions.

I think a lot of this could be tackled with some of the "session state" saving/restoring ideas that have been discussed on LAD before. If you had a MIDI processor with scripting support, etc., you could create little filters and actuators and save them along with a project, which could then be loaded at a later date. I don't think we need to worry about supporting every single feature in a patch format (at least when it comes to MIDI processing). Having a Linux Audio/Music session format/standard would be cool.

Cheers.
Josh Green
From: David O. <da...@ol...> - 2003-01-21 20:02:03

On Tuesday 21 January 2003 20.37, Mark Knecht wrote:
> Hi,
> Reading through the thread from the last day or two, I did end
> up with a question about how LinuxSampler might be impacted by
> being a stand-alone app.
>
> To start off, I'm only considering a Jack issue here. If the
> sampler engine was used in a plugin-type app, more or less a soft
> synth, then I presume that this app would have to run with the same
> Jack requirements as my audio recording system. It either meets
> normal Jack latency requirements or we get xruns in the Jack audio
> stream, just like any other Jack app.

Correct.

> If I want to push buffer numbers and sizes down to reduce real
> time audio latency, does this have any impact on the soft sampler's
> ability to stream from disk? Does this mean that it has to buffer
> more data, or do things differently, based on how this is set?

No, all it means is that the RT part (the "actual sampler") reads smaller chunks of data more frequently from its end of the lock-free FIFOs. This may cause some extra memory stress, but it should have little or no effect on the operation of the disk butler.

> I presume that if we were able to stream sample libraries from
> disk, then we have to buffer enough data in DRAM to ensure the
> sample stream continues until the disk catches up. Is this driven
> only by the drive's response time? Or would there be advantages in
> running the sampler engine at higher Jack latencies to reduce
> buffering?

Disk buffering is basically the responsibility of the butler, and the RT part doesn't even have to know much about this. It just reads the number of samples it needs from the FIFO, and if they're there, it Just Works. (As if playing were entirely from RAM.)

As to total disk latency and catch-up time, that's a function of seek latency, transfer rate, read block size, physical location of data, number of files streamed at once, etc. Either way, just pre-cache enough data and don't read too small or too big blocks at a time from the disk, and the data will be there in time.

> Seems to me that if I run this on a second machine (or have a
> second jack daemon on the same machine...) I get an extra degree of
> freedom. Does it help me?

Well, you get all the CPU power, memory and disk bandwidth for yourself, but whether it actually *helps* probably depends more on the requirements of the other software you need to use.

An OS with proper real time scheduling doesn't have a problem with dispatching CPU time. OTOH, hard drives are still mechanical devices that suffer severely from multiple simultaneous streams.

Also, plugins and RT applications with drastically varying CPU utilization mean trouble. The closer to 100% CPU usage you get, the greater the risk of too many plugins deciding to burn CPU at the same time, causing a drop-out. Though, this kind of "nervous" plugin can usually be considered more or less broken, because this is *really* not the way to go if there's any way to avoid it. (And there usually are ways. Large window FFTs and the like can be split and scheduled, for example.)

[...]

//David Olofson - Programmer, Composer, Open Source Advocate
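[Editor's note: the lock-free FIFO between the disk "butler" thread and the RT sampler thread can be sketched as a single-producer/single-consumer ring buffer. This is a hypothetical illustration, not code from any of the projects discussed; a production version would use C11 atomics or memory barriers on the two indices, which the plain loads and stores here only approximate.]

```c
#include <stddef.h>

#define FIFO_SIZE 1024   /* must be a power of two for the index masks */

/* Each index is advanced by exactly one thread: write_pos only by the
 * butler (producer), read_pos only by the RT thread (consumer).  One
 * slot is kept empty to distinguish full from empty. */
typedef struct {
    float  buf[FIFO_SIZE];
    size_t write_pos;
    size_t read_pos;
} sample_fifo;

size_t fifo_readable(const sample_fifo *f)
{
    return (f->write_pos - f->read_pos) & (FIFO_SIZE - 1);
}

/* Butler side: queue up to n samples read from disk; returns how many
 * actually fit. */
size_t fifo_write(sample_fifo *f, const float *src, size_t n)
{
    size_t space = FIFO_SIZE - 1 - fifo_readable(f);
    if (n > space) n = space;
    for (size_t i = 0; i < n; i++)
        f->buf[(f->write_pos + i) & (FIFO_SIZE - 1)] = src[i];
    f->write_pos = (f->write_pos + n) & (FIFO_SIZE - 1);
    return n;
}

/* RT side: never blocks.  A short read means the butler fell behind;
 * the caller decides whether to pad with silence. */
size_t fifo_read(sample_fifo *f, float *dst, size_t n)
{
    size_t avail = fifo_readable(f);
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = f->buf[(f->read_pos + i) & (FIFO_SIZE - 1)];
    f->read_pos = (f->read_pos + n) & (FIFO_SIZE - 1);
    return n;
}
```

Smaller JACK buffers then just mean the RT side calls fifo_read() more often with a smaller n; the butler's refill policy is unchanged, which is exactly the decoupling described above.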
From: Josh G. <jg...@us...> - 2003-01-21 19:48:48

On Tue, 2003-01-21 at 07:45, David Olofson wrote:
> On Tuesday 21 January 2003 13.41, M. Nentwig wrote:
> > Moi,
> >
> > I don't think that there are any restrictions concerning velocity
> > mapping with Swami. You can assign samples to arbitrary note and /
> > or velocity 'windows'. In theory it's possible to have a different
> > sample for each key / velocity combination (but I'd bet nobody has
> > tried that yet :)
>
> <plug qualifier="shameless">
> How about being able to write C-like code that calculates or
> otherwise determines mapping when a note is started?
>
> Well, whether or not it's really useful, this is where Audiality is
> going. Processing timestamped events in C is a bit hairy, so I'd
> prefer using a custom higher level language for that. Another point
> is that strapping on a scripting engine eliminates lots of hardcoded
> logic, and the restrictions that come with it.
> </plug>

<plug qualifier="also shameless"> Yes, I can envision Python being a nice language for this type of thing; a project for the future of Swami as well. As things stand now, modulators can be used with MIDI velocity source controls to do weird mappings with velocity (they can even control other effects, say filter cutoff, for instance :) </plug>
From: David O. <da...@ol...> - 2003-01-21 19:42:45

On Tuesday 21 January 2003 20.32, Josh Green wrote:
[...]
> velocity. Once the Python binding is completed in Swami (not really
> that much to do I think) writing scripts to do these types of
> things should be fairly easy :) Cheers.

Speaking of scripting, are you planning on actually running Python in RT context, or just using it for "rendering" maps?

//David Olofson - Programmer, Composer, Open Source Advocate
From: David O. <da...@ol...> - 2003-01-21 19:39:28

On Tuesday 21 January 2003 19.03, Mark Knecht wrote:
[...]
> Dave,
> I hope I wasn't misunderstood. RG, Pro Tools, Cubase SX: they
> all work with key switches. I can put key switch events on the same
> track. They are sent and cause the sample sets to switch. That does
> not cause a problem.

Yes, that's what's so great about them; they're based on a part of the protocol that you basically *have* to support to control an instrument anyway.

> The _only_ problem I've run into is that while you and I
> understand that a key switch at C-3 isn't a musical note, a
> notation program does not, and will paint a quarter note there when
> one does not need one musically.

Yeah, I see what you mean, although I'm still not sure whether you're thinking about printing or editing in "staff view". Either way, both have the same problem, although it's probably even more annoying to get it on paper...! ;-)

//David Olofson - Programmer, Composer, Open Source Advocate
From: Mark K. <mk...@co...> - 2003-01-21 19:38:22

Hi,

Reading through the thread from the last day or two, I did end up with a question about how LinuxSampler might be impacted by being a stand-alone app.

To start off, I'm only considering a Jack issue here. If the sampler engine was used in a plugin-type app, more or less a soft synth, then I presume that this app would have to run with the same Jack requirements as my audio recording system. It either meets normal Jack latency requirements or we get xruns in the Jack audio stream, just like any other Jack app.

If I want to push buffer numbers and sizes down to reduce real-time audio latency, does this have any impact on the soft sampler's ability to stream from disk? Does this mean that it has to buffer more data, or do things differently, based on how this is set?

I presume that if we were able to stream sample libraries from disk, then we have to buffer enough data in DRAM to ensure the sample stream continues until the disk catches up. Is this driven only by the drive's response time? Or would there be advantages in running the sampler engine at higher Jack latencies to reduce buffering?

Seems to me that if I run this on a second machine (or have a second jack daemon on the same machine...) I get an extra degree of freedom. Does it help me?

I'm just asking questions for the sake of learning. I do understand that there are perceived patent issues, so let's leave that for a separate discussion.

Thanks,
Mark
From: Josh G. <jg...@us...> - 2003-01-21 19:33:00
|
On Tue, 2003-01-21 at 04:41, M. Nentwig wrote:
> Moi,
>
> I don't think that there are any restrictions concerning velocity
> mapping with Swami. You can assign samples to arbitrary note and/or
> velocity 'windows'. In theory it's possible to have a different sample
> for each key/velocity combination (but I'd bet nobody has tried that
> yet :)
>
> -Markus

It would be interesting to create layered velocity sounds as well, where samples could be blended over the velocity range in conjunction with an inverted velocity modulator (to cause a sample to fade out towards the top of its velocity range). You could get a morphing effect as one plays notes with increasing or decreasing velocity.

Once the Python binding is completed in Swami (not really that much to do, I think), writing scripts to do these types of things should be fairly easy :)

Cheers,
Josh Green
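[Editor's note: the velocity crossfade described above can be sketched in a few lines. This is an illustration of the idea, not Swami's API; the split point and overlap width are arbitrary assumptions.]

```python
def crossfade_gains(velocity, split=64, overlap=32):
    """Gains for two overlapping velocity layers (MIDI velocity 0-127).
    The soft layer gets an 'inverted velocity' curve, fading out as
    velocity rises through the overlap region; the loud layer fades in."""
    lo = split - overlap // 2
    hi = split + overlap // 2
    if velocity <= lo:
        t = 0.0
    elif velocity >= hi:
        t = 1.0
    else:
        t = (velocity - lo) / (hi - lo)
    return (1.0 - t, t)  # (soft_layer_gain, loud_layer_gain)

# Sweeping velocity morphs smoothly between the two samples:
for v in (40, 56, 64, 72, 100):
    soft, loud = crossfade_gains(v)
    print(v, soft, loud)
```

A real SoundFont implementation would express the fade-out as a negative (inverted) velocity-to-attenuation modulator on the soft layer, but the resulting gain curve is the same shape.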
|
From: David O. <da...@ol...> - 2003-01-21 19:31:43
|
On Tuesday 21 January 2003 18.51, Mark Knecht wrote:
[...]
> > In case somebody is interested in the solution I'm using to get a
> > similar result (with a control program 'wrapped' around
> > iiwusynth):
> <SNIP>
>
> This looks like an interesting way to possibly take a drum track
> and then split off individual drums mapped to certain notes and
> send them to different synths? Interesting idea.
>
> What sort of latency does this incur? I would assume it's pretty
> high if you have to receive a MIDI event, process it, and then
> retransmit. Can that be used live and get a good, tight feel?

I don't know how it's implemented here, but technically, as long as it's done somewhere in between the sequencer and the hardware MIDI output, there shouldn't be any significant latency. It's only when you're running physical 31250 bps wire between units that chaining devices is a problem.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
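[Editor's note: the 31250 bps point is easy to quantify. A MIDI byte on the wire is framed as 10 bits (1 start + 8 data + 1 stop), so serialization time per message is fixed. The four-unit chain below is an illustrative assumption about a setup where each hop re-serializes the message, not a claim about any particular hardware.]

```python
def midi_wire_delay_ms(n_bytes, baud=31250):
    """Serialization time for a MIDI message on the physical wire.
    Each byte is framed as 10 bits: 1 start + 8 data + 1 stop."""
    bits = n_bytes * 10
    return bits / baud * 1000.0

note_on = midi_wire_delay_ms(3)  # a 3-byte Note On message
print(note_on)  # 0.96 ms per re-serializing hop

# Chaining four units where each re-transmits the message adds roughly:
print(4 * note_on)  # 3.84 ms before the last unit hears the note
```

In software, by contrast, an event can be filtered and re-routed between the sequencer and the output driver with no extra wire time at all, which is David's point.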
|
From: Mark K. <mk...@co...> - 2003-01-21 18:04:37
|
> -----Original Message-----
> From: lin...@li...
> [mailto:lin...@li...] On Behalf Of David Olofson
> Sent: Tuesday, January 21, 2003 9:38 AM
> To: lin...@li...
> Subject: Re: [Linuxsampler-devel] RE: Hi - Very quiet list - my first post
>
> On Tuesday 21 January 2003 18.12, Mark Knecht wrote:
> [...]
> > It's a bit cranky when you consider notation capabilities in
> > programs like Rosegarden. None of them know that these notes are
> > key switches. I've taken recently to moving key switch notes to a
> > separate track that transmits on the same channel.
>
> Well, I'm using piano roll for the few edits I make (I just record
> and maybe quantize, scale velocities, scale note lengths etc.), and
> that would work perfectly with key switches, I think.
>
> One thing I just *have* to try is hooking some keys up to f and q of
> some resonant filters... :-)

Dave,

I hope I wasn't misunderstood. RG, Pro Tools, Cubase SX, they all work with key switches. I can put key switch events on the same track. They are sent and cause the sample sets to switch. That does not cause a problem.

The _only_ problem I've run into is that while you and I understand that a key switch at C-3 isn't a musical note, a notation program does not, and will paint a quarter note there when one does not need one musically.

Cheers,
Mark
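[Editor's note: the "separate track, same channel" workaround Mark describes can be automated by post-processing note events. This sketch is dependency-free and hypothetical; the cutoff note number for key switches varies per sample library and is an assumption here.]

```python
KEY_SWITCH_MAX = 35  # assumption: this library keeps key switches at B1 and below

def split_key_switches(events):
    """Split note events into musical notes and key-switch events, so the
    switches can live on a separate track (transmitting on the same channel)
    and stay out of the notation view. Each event is (time, note, velocity)."""
    notes = [e for e in events if e[1] > KEY_SWITCH_MAX]
    switches = [e for e in events if e[1] <= KEY_SWITCH_MAX]
    return notes, switches

track = [(0, 24, 100), (0, 60, 90), (480, 64, 88), (960, 26, 100), (960, 67, 92)]
notes, switches = split_key_switches(track)
print(notes)     # [(0, 60, 90), (480, 64, 88), (960, 67, 92)]
print(switches)  # [(0, 24, 100), (960, 26, 100)]
```

Running something like this before opening a file in a notation program keeps the staff view free of phantom quarter notes while the playback result is unchanged.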
|
From: David O. <da...@ol...> - 2003-01-21 17:38:41
|
On Tuesday 21 January 2003 18.12, Mark Knecht wrote:
[...]
> It's a bit cranky when you consider notation capabilities in
> programs like Rosegarden. None of them know that these notes are
> key switches. I've taken recently to moving key switch notes to a
> separate track that transmits on the same channel.

Well, I'm using piano roll for the few edits I make (I just record and maybe quantize, scale velocities, scale note lengths etc.), and that would work perfectly with key switches, I think.

One thing I just *have* to try is hooking some keys up to f and q of some resonant filters... :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---