From: Bob H. <no...@us...> - 2003-01-22 20:11:03
Hi all,

Some fixes attached for the CVS code. This is as far as I got before the build tried to compile code that mentioned non-existent functions:

    sound_driver_jack.cpp: In member function `virtual int Sound_Driver_JACK::init()':
    sound_driver_jack.cpp:172: no matching function for call to `Mixer_Base::set_mix_stereo(bool)'

I'll just say one thing: use std:: !!!!!!!!!!!

Bob
From: Josh G. <jg...@us...> - 2003-01-22 02:26:32
On Tue, 2003-01-21 at 13:14, David Olofson wrote:
>
> Well, MIDI may not suffer as much from unbounded latency as audio,
> but I'm not willing to take chances. We're talking about *unbounded*
> worst case latency here, and it's really as bad as it sounds. If you
> *can* have memory management stall MIDI processing for half a second
> in the middle of a live performance, it *will* happen sooner or
> later. (You know Murphy...)

Half a second? I'm sure that rarely occurs. I can't speak for Python's memory management, but much of the critical stuff in Swami uses glib memory chunks. These allow for an initial allocation block and then only allocate more if and when needed (as long as you pre-allocate enough, it shouldn't happen).

If I do ever get around to creating a sequencing subsystem, using Python functions will be completely optional. Users who use this feature will probably understand the potential for problems. When just playing around with composing music, I don't think it's much of an issue. When one wants to do real-time stuff, all the Python functions can be rendered to a MIDI buffer with explicit time stamps (those that don't take real-time input, of course). Currently, I'm more interested in nice functionality than sub-ms latency. This can always be optimized at a later date.

> Either way, Audiality runs all event processing in the same context
> as the audio processing, so I can't realistically use anything that
> isn't RT safe anyway. Even the slightest deadline misses would cause
> audible drop-outs.
>
> > I'm not yet fully familiar with
> > using Python embedded in a program, but I'm sure there is probably
> > a way to compile script source into object code. Anyways..
>
> That might work, but I suspect it will only improve throughput
> without making worst case latencies bounded. If the compiled code
> still uses malloc(), garbage collection and other non-deterministic
> stuff, you have gained next to nothing WRT RT reliability.
>
> //David Olofson - Programmer, Composer, Open Source Advocate

Cheers.
	Josh Green
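The pre-allocation pattern Josh describes comes down to an up-front block plus a free list, so the hot path never calls malloc(). A minimal sketch in plain C, assuming an invented timestamped-event layout; this is generic illustration, not the glib chunk API or Swami source:

```c
#include <stddef.h>

/* A timestamped event, as in "rendered to a MIDI buffer with explicit
 * time stamps" above; the field layout is invented for illustration. */
typedef struct event {
    unsigned when;            /* timestamp, in frames */
    unsigned char msg[3];     /* raw MIDI bytes       */
    struct event *next;       /* free-list link       */
} event;

/* Pre-allocate the whole pool up front, glib-memory-chunk style: the
 * hot path only pops/pushes a free list and never hits the system
 * allocator -- as long as the pool was sized generously enough. */
#define POOL_SIZE 4096
static event pool[POOL_SIZE];
static event *free_list;

static void pool_init(void)
{
    for (size_t i = 0; i + 1 < POOL_SIZE; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_SIZE - 1].next = NULL;
    free_list = pool;
}

static event *event_alloc(void)   /* O(1), allocator-free */
{
    event *e = free_list;
    if (e)
        free_list = e->next;
    return e;                     /* NULL when the pool runs dry */
}

static void event_free(event *e)
{
    e->next = free_list;
    free_list = e;
}
```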
From: Josh G. <jg...@us...> - 2003-01-22 02:15:51
On Tue, 2003-01-21 at 15:35, Mark Knecht wrote:
<cut>
> Couple these two features with the velocity sensitive samples that we
> were discussing yesterday and fluid-synth then approaches a lot of
> the functionality of something like GSt. (Missing ADSR's, filters,
> etc. on each voice, but practically speaking I don't have a lot of
> time to use those much anyway.)

Ummm.. you should probably read up on SoundFont files, because they actually contain two DAHDSR (Delay/Attack/Hold/Decay/Sustain/Release) envelopes: one controls volume, the other the lowpass filter or pitch. There are also two LFOs: one controls pitch, filter cutoff, and volume; the other can control only pitch. There are reverb, chorus and tuning parameters. All this is layered in a Preset/Instrument/Sample tree fashion to allow for overriding and offsetting parameters.

Also, Swami and FluidSynth have support for modulators (most credit goes to FluidSynth). Modulators allow one to connect arbitrary MIDI controllers (and other things like note-on velocity, MIDI note number, pitch bender, etc.) to almost any SoundFont effect to control it in real time. Anything else missing?

If you would like to read an intro I wrote on SoundFont files, you can find it in the Docs section of the Swami web site, which is here:
http://swami.sourceforge.net/docs.php

> Thanks for all the help!
>
> Cheers,
> Mark

Sure thing. I hope you get a chance to see what these 2 projects can do for you :)

Cheers.
	Josh Green
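Schematically, the modulators Josh mentions amount to a source routed onto a destination generator with a scaling amount. A sketch of that shape in C; the enum and field names are illustrative, not the SF2 binary layout or the FluidSynth API:

```c
/* The shape of an SF2 modulator, schematically: a source (CC, velocity,
 * pitch wheel, ...) mapped onto a destination generator with an amount.
 * All names below are invented for illustration. */
enum mod_source { SRC_NOTE_VELOCITY, SRC_NOTE_NUMBER, SRC_PITCH_WHEEL, SRC_CC };
enum gen_dest   { GEN_ATTENUATION, GEN_FILTER_CUTOFF, GEN_VIBRATO_DEPTH };

typedef struct {
    enum mod_source source;
    int cc_number;        /* only used when source == SRC_CC        */
    enum gen_dest dest;
    float amount;         /* full-scale contribution, in dest units */
} modulator;

/* "Mod wheel (CC 1) opens the filter by up to 2400 cents": */
static const modulator mod_wheel_to_cutoff =
    { SRC_CC, 1, GEN_FILTER_CUTOFF, 2400.0f };
```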
From: David O. <da...@ol...> - 2003-01-21 21:15:02
On Tuesday 21 January 2003 21.11, Josh Green wrote:
> On Tue, 2003-01-21 at 11:42, David Olofson wrote:
> > On Tuesday 21 January 2003 20.32, Josh Green wrote:
> > [...]
> >
> > > velocity. Once the Python binding is completed in Swami (not
> > > really that much to do I think) writing scripts to do these
> > > types of things should be fairly easy :) Cheers.
> >
> > Speaking of scripting, are you planning on actually running
> > Python in RT context, or just use it for "rendering" maps?
>
> For just editing operations, the idea of real time is not of
> importance. For doing real time control of effects and MIDI, it
> might be. It really remains to be seen in practice what kind of
> latency is induced by calling Python code in real time. In the MIDI
> realm it might not matter too much.

Well, MIDI may not suffer as much from unbounded latency as audio, but I'm not willing to take chances. We're talking about *unbounded* worst case latency here, and it's really as bad as it sounds. If you *can* have memory management stall MIDI processing for half a second in the middle of a live performance, it *will* happen sooner or later. (You know Murphy...)

Either way, Audiality runs all event processing in the same context as the audio processing, so I can't realistically use anything that isn't RT safe anyway. Even the slightest deadline misses would cause audible drop-outs.

> I'm not yet fully familiar with
> using Python embedded in a program, but I'm sure there is probably
> a way to compile script source into object code. Anyways..

That might work, but I suspect it will only improve throughput without making worst case latencies bounded. If the compiled code still uses malloc(), garbage collection and other non-deterministic stuff, you have gained next to nothing WRT RT reliability.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Josh G. <jg...@us...> - 2003-01-21 20:11:50
On Tue, 2003-01-21 at 11:42, David Olofson wrote:
> On Tuesday 21 January 2003 20.32, Josh Green wrote:
> [...]
>
> > velocity. Once the Python binding is completed in Swami (not really
> > that much to do I think) writing scripts to do these types of
> > things should be fairly easy :) Cheers.
>
> Speaking of scripting, are you planning on actually running Python in
> RT context, or just use it for "rendering" maps?

For just editing operations, the idea of real time is not of importance. For doing real time control of effects and MIDI, it might be. It really remains to be seen in practice what kind of latency is induced by calling Python code in real time. In the MIDI realm it might not matter too much. I'm not yet fully familiar with using Python embedded in a program, but I'm sure there is probably a way to compile script source into object code. Anyways..
	Josh Green
From: Mark K. <mar...@at...> - 2003-01-21 20:10:56
On Tue, 2003-01-21 at 20:02, Josh Green wrote:
> I will start looking into adding DLS2, .gig, Akai, Gus and
> perhaps GSt (I don't know anything about it yet, so not sure how hard
> that would be).

.gig is GSt, one and the same (GSt is GigaStudio; .gig is its file format).
From: Josh G. <jg...@us...> - 2003-01-21 20:02:47
On Tue, 2003-01-21 at 09:51, Mark Knecht wrote:
> Markus,
>    As I said earlier, I haven't, in the past, been a big fan of sound
> fonts, not because I know anything about them technically, just
> because the Windows-based SF players haven't sounded as good as GSt.
> When this conversation started yesterday, I was (and still am) a
> proponent of doing this app using the LinuxSampler engine. I'm not
> asking or even suggesting that anyone change anything that exists in
> the Linux SF app space to do what I want to do. On the other hand, I
> think people are asking me to use SFs, and I'm not sure they'll work.
>
>    Nor do I know how to map my GSt libraries to one. If Josh wants to
> tackle that problem in Swami, then I would certainly be happy to help
> out and do a bit of testing.

I'm still doing API work in the area of the GUI to make it easy to plug in new patch formats. Once this work is done (and a few other things) I will start looking into adding DLS2, .gig, Akai, Gus and perhaps GSt (I don't know anything about it yet, so not sure how hard that would be).

>    Also, please understand I'm not trying to give a complete use of
> this feature in GSt and do not suggest that my list is complete. I
> know it isn't. I'm just hoping that I'm getting the key switch idea
> across so that you developers can make it real. If there is continued
> interest in the Linux-Sampler community to support GSt libraries (a
> stated goal) then this threshold will eventually have to be crossed.
>
>    In GSt all of the stuff you mention, and more, is supported. I
> believe that in GigaSampler it is not all supported; however, GS is
> pretty much gone now except as a CD in sound card distributions.

I think a lot of this could be tackled with some of the "session state" saving/restoring ideas that have been discussed on LAD before. If you had a MIDI processor with scripting support, etc., you could create little filters and actuators and save them along with a project which could then be loaded at a later date. I don't think we need to worry about supporting every single feature in a patch format (at least when it comes to MIDI processing). Having a Linux Audio/Music session format/standard would be cool. Cheers.
	Josh Green
From: David O. <da...@ol...> - 2003-01-21 20:02:03
On Tuesday 21 January 2003 20.37, Mark Knecht wrote:
> Hi,
>    Reading through the thread from the last day or two, I did end
> up with a question about how LinuxSampler might be impacted by
> being a stand-alone app.
>
>    To start off, I'm only considering a Jack issue here. If the
> sampler engine was used in a plugin type app, more or less a soft
> synth, then I presume that this app would have to run with the same
> Jack requirements as my audio recording system. It either meets
> normal Jack latency requirements or we get xruns in the Jack audio
> stream, just like any other Jack app.

Correct.

>    If I want to push buffer numbers and sizes down to reduce real
> time audio latency, does this have any impact on the Soft Sampler's
> ability to stream from disk? Does this mean that it has to buffer
> more data, or do things differently, based on how this is set?

No, all it means is that the RT part (the "actual sampler") reads smaller chunks of data more frequently from its end of the lock-free FIFOs. This may cause some extra memory stress, but it should have little or no effect on the operation of the disk butler.

>    I presume that if we were able to stream sample libraries from
> disk, then we have to buffer enough data in DRAM to ensure the
> sample stream continues until the disk catches up. Is this driven
> only by the drive's response time? Or would there be advantages in
> running the sampler engine at higher Jack latencies to reduce
> buffering?

Disk buffering is basically the responsibility of the butler, and the RT part doesn't even have to know much about this. It just reads the # of samples it needs from the FIFO, and if they're there, it Just Works. (As if playing was entirely from RAM.)

As to total disk latency and catch-up time, that's a function of seek latency, transfer rate, read block size, physical location of data, number of files streamed at once, etc etc. Either way, just pre-cache enough data and don't read too small or too big blocks at a time from the disk, and the data will be there in time.

>    Seems to me that if I run this on a second machine (or have a
> second jack daemon on the same machine...) I get an extra degree of
> freedom. Does it help me?

Well, you get all CPU power, memory and disk bandwidth for yourself, but whether it actually *helps* probably depends more on the requirements of the other software you need to use.

An OS with proper real time scheduling doesn't have a problem with dispatching CPU time. OTOH, hard drives are still mechanical devices that suffer severely from multiple simultaneous streams.

Also, plugins and RT applications with drastically varying CPU utilization mean trouble. The closer to 100% CPU usage you get, the greater the risk of too many plugins deciding to burn CPU at the same time, causing a drop-out. Though, this kind of "nervous" plugin can usually be considered more or less broken, because this is *really* not the way to go if there's any way to avoid it. (And there usually are ways. Large window FFTs and the like can be split and scheduled, for example.)

[...]

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
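The butler/RT hand-off David describes is a single-producer, single-consumer ring buffer. A minimal sketch, assuming C11 atomics and an arbitrary power-of-two size; this is illustrative code, not LinuxSampler or Audiality source:

```c
#include <stdatomic.h>

#define FIFO_FRAMES 65536   /* power of two: index wrap is a cheap mask */

typedef struct {
    float buf[FIFO_FRAMES];
    _Atomic unsigned read_pos;    /* advanced only by the RT reader   */
    _Atomic unsigned write_pos;   /* advanced only by the disk butler */
} spsc_fifo;

/* Butler side (non-RT): write up to n frames, return frames accepted. */
static unsigned fifo_write(spsc_fifo *f, const float *src, unsigned n)
{
    unsigned r = atomic_load_explicit(&f->read_pos, memory_order_acquire);
    unsigned w = atomic_load_explicit(&f->write_pos, memory_order_relaxed);
    unsigned space = FIFO_FRAMES - (w - r);
    if (n > space)
        n = space;
    for (unsigned i = 0; i < n; i++)
        f->buf[(w + i) & (FIFO_FRAMES - 1)] = src[i];
    atomic_store_explicit(&f->write_pos, w + n, memory_order_release);
    return n;
}

/* RT side: never blocks; returns the frames actually available. */
static unsigned fifo_read(spsc_fifo *f, float *dst, unsigned n)
{
    unsigned w = atomic_load_explicit(&f->write_pos, memory_order_acquire);
    unsigned r = atomic_load_explicit(&f->read_pos, memory_order_relaxed);
    unsigned avail = w - r;
    if (n > avail)
        n = avail;   /* short read = the butler fell behind (underrun) */
    for (unsigned i = 0; i < n; i++)
        dst[i] = f->buf[(r + i) & (FIFO_FRAMES - 1)];
    atomic_store_explicit(&f->read_pos, r + n, memory_order_release);
    return n;
}
```

Smaller JACK periods only change how often and how much the RT side calls fifo_read(); the butler keeps refilling at its own pace, which is why the period size barely affects the disk side.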
From: Josh G. <jg...@us...> - 2003-01-21 19:48:48
On Tue, 2003-01-21 at 07:45, David Olofson wrote:
> On Tuesday 21 January 2003 13.41, M. Nentwig wrote:
> > Moi,
> >
> > I don't think that there are any restrictions concerning velocity
> > mapping with Swami. You can assign samples to arbitrary note and /
> > or velocity 'windows'. In theory it's possible to have a different
> > sample for each key / velocity combination (but I'd bet nobody has
> > tried that yet :)
>
> <plug qualifier="shameless">
> How about being able to write C-like code that calculates or
> otherwise determines mapping when a note is started?
>
> Well, whether or not it's really useful, this is where Audiality is
> going. Processing timestamped events in C is a bit hairy, so I'd
> prefer using a custom higher level language for that. Another point
> is that strapping on a scripting engine eliminates lots of hardcoded
> logic, and the restrictions that come with it.
> </plug>

<plug qualifier="also shameless">
Yes, I can envision Python being a nice language for this type of thing. A project for the future of Swami as well. As things stand now, modulators can be used with MIDI velocity source controls to do weird mappings with velocity (they can even control other effects, say filter cutoff for instance :)
</plug>
From: David O. <da...@ol...> - 2003-01-21 19:42:45
On Tuesday 21 January 2003 20.32, Josh Green wrote:
[...]
> velocity. Once the Python binding is completed in Swami (not really
> that much to do I think) writing scripts to do these types of
> things should be fairly easy :) Cheers.

Speaking of scripting, are you planning on actually running Python in RT context, or just use it for "rendering" maps?

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-01-21 19:39:28
On Tuesday 21 January 2003 19.03, Mark Knecht wrote:
[...]
> Dave,
>    I hope I wasn't misunderstood. RG, Pro Tools, Cubase SX, they
> all work with key switches. I can put key switch events on the same
> track. They are sent and cause the sample sets to switch. That does
> not cause a problem.

Yes, that's what's so great about them; they're based on a part of the protocol that you basically *have* to support to control an instrument anyway.

>    The _only_ problem I've run into is that while you and I
> understand that a key switch at C-3 isn't a musical note, a
> notation program does not, and will paint a quarter note there when
> one does not need one musically.

Yeah, I see what you mean, although I'm still not sure whether you're thinking about printing or editing in "staff view". Either way, both have the same problem, although it's probably even more annoying to get it on paper...! ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Mark K. <mk...@co...> - 2003-01-21 19:38:22
Hi,
   Reading through the thread from the last day or two, I did end up with a question about how LinuxSampler might be impacted by being a stand-alone app.

   To start off, I'm only considering a Jack issue here. If the sampler engine was used in a plugin type app, more or less a soft synth, then I presume that this app would have to run with the same Jack requirements as my audio recording system. It either meets normal Jack latency requirements or we get xruns in the Jack audio stream, just like any other Jack app.

   If I want to push buffer numbers and sizes down to reduce real time audio latency, does this have any impact on the Soft Sampler's ability to stream from disk? Does this mean that it has to buffer more data, or do things differently, based on how this is set?

   I presume that if we were able to stream sample libraries from disk, then we have to buffer enough data in DRAM to ensure the sample stream continues until the disk catches up. Is this driven only by the drive's response time? Or would there be advantages in running the sampler engine at higher Jack latencies to reduce buffering?

   Seems to me that if I run this on a second machine (or have a second jack daemon on the same machine...) I get an extra degree of freedom. Does it help me?

   I'm just asking questions for the sake of learning. I do understand that there are perceived patent issues, so let's leave that for a separate discussion.

Thanks,
Mark
From: Josh G. <jg...@us...> - 2003-01-21 19:33:00
On Tue, 2003-01-21 at 04:41, M. Nentwig wrote:
> Moi,
>
> I don't think that there are any restrictions concerning velocity
> mapping with Swami. You can assign samples to arbitrary note and / or
> velocity 'windows'. In theory it's possible to have a different
> sample for each key / velocity combination (but I'd bet nobody has
> tried that yet :)
>
> -Markus

It would be interesting to create layered velocity sounds as well, where samples could be blended over the velocity range in conjunction with an inverted velocity modulator (to cause a sample to fade out towards the top of its velocity range). You could have a morphing effect as one plays notes with increasing or decreasing velocity. Once the Python binding is completed in Swami (not really that much to do I think), writing scripts to do these types of things should be fairly easy :) Cheers.
	Josh Green
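The "inverted velocity modulator" blend Josh describes boils down to complementary gains over an overlap region. A sketch, using linear fades purely for illustration (an equal-power curve would work the same way):

```c
/* Gains for two overlapping velocity layers crossfaded over
 * [xf_lo, xf_hi]: the lower layer fades out as velocity rises
 * ("inverted velocity"), the upper layer gets the mirror image. */
static void crossfade_gains(int vel, int xf_lo, int xf_hi,
                            float *lower_gain, float *upper_gain)
{
    float t;
    if (vel <= xf_lo)
        t = 0.0f;
    else if (vel >= xf_hi)
        t = 1.0f;
    else
        t = (float)(vel - xf_lo) / (float)(xf_hi - xf_lo);
    *lower_gain = 1.0f - t;
    *upper_gain = t;
}
```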
From: David O. <da...@ol...> - 2003-01-21 19:31:43
On Tuesday 21 January 2003 18.51, Mark Knecht wrote:
[...]
> > In case somebody is interested in the solution I'm using to get a
> > similar result (with a control program 'wrapped' around
> > iiwusynth):
>
> <SNIP>
>
> This looks like an interesting way to possibly take a drum track
> and then split off individual drums mapped to certain notes and
> send them to different synths? Interesting idea.
>
> What sort of latency does this incur? I would assume it's pretty
> high if you have to receive a MIDI event, process it, and then
> retransmit. Can that be used live and get a good, tight feel?

I don't know how it's implemented here, but technically, as long as it's done somewhere in between the sequencer and the hardware MIDI output, there shouldn't be any significant latency. It's only when you're running physical 31250 bps wire between units that chaining devices is a problem.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Mark K. <mk...@co...> - 2003-01-21 18:04:37
> -----Original Message-----
> From: lin...@li...
> [mailto:lin...@li...] On Behalf Of David Olofson
> Sent: Tuesday, January 21, 2003 9:38 AM
> To: lin...@li...
> Subject: Re: [Linuxsampler-devel] RE: Hi - Very quiet list - my first
> post
>
> On Tuesday 21 January 2003 18.12, Mark Knecht wrote:
> [...]
> > It's a bit cranky when you consider notation capabilities in
> > programs like Rosegarden. None of them know that these notes are
> > key switches. I've taken recently to moving key switch notes to a
> > separate track that transmits on the same channel.
>
> Well, I'm using piano roll for the few edits I make (I just record
> and maybe quantize, scale velocities, scale note lengths etc), and
> that would work perfectly with key switches, I think.
>
> One thing I just *have* to try is hooking some keys up to f and q of
> some resonant filters... :-)

Dave,
   I hope I wasn't misunderstood. RG, Pro Tools, Cubase SX, they all work with key switches. I can put key switch events on the same track. They are sent and cause the sample sets to switch. That does not cause a problem.

   The _only_ problem I've run into is that while you and I understand that a key switch at C-3 isn't a musical note, a notation program does not, and will paint a quarter note there when one does not need one musically.

Cheers,
Mark
From: David O. <da...@ol...> - 2003-01-21 17:38:41
On Tuesday 21 January 2003 18.12, Mark Knecht wrote:
[...]
> It's a bit cranky when you consider notation capabilities in
> programs like Rosegarden. None of them know that these notes are
> key switches. I've taken recently to moving key switch notes to a
> separate track that transmits on the same channel.

Well, I'm using piano roll for the few edits I make (I just record and maybe quantize, scale velocities, scale note lengths etc), and that would work perfectly with key switches, I think.

One thing I just *have* to try is hooking some keys up to f and q of some resonant filters... :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-01-21 17:31:49
On Tuesday 21 January 2003 18.08, Steve Harris wrote:
> On Tue, Jan 21, 2003 at 04:30:18 +0100, David Olofson wrote:
> > > A JACK app can connect directly to physical i/o, and can
> > > connect to any ported part of another application.
> >
> > So can a XAP plugin - only it's actually the host that decides
> > and makes the connection. After all, what you get is still an
> > audio buffer that you're supposed to read or write once per block
> > cycle. I still don't see why it would matter what API(s) are used
> > to get access to the buffers.
>
> So, in other words, it can't ;)

Right; it can't *make* the connection, but it can handle the connection, once it's made.

> The idea is that the sampler should
> be able to connect to hardware inputs from its UI in order to
> sample. From within XAP you can't do that. I hope. It's not really a
> plugin UI feature and if it was present it would be bloat.

Agreed.

That said, the sampler being a XAP plugin doesn't prevent its GUI from being aware of the difference between running as a XAP plugin and running as a JACK client. When doing the latter, just make the I/O selection features available. Nothing says that the GUI must talk *only* to the sampler plugin when running as a JACK client.

> > > The behaviour of a hardware sampler leads me to think of it
> > > more like a jack application than a plugin. That's not to say
> > > that I don't think a sampler plugin is useful, obviously it is,
> > > but I think a JACK sampler is more useful.
> >
> > What behavior are you referring to? Seriously, I want XAP plugins
> > to behave as much like real hardware as is possible and
> > desirable. I think there is a design issue with XAP if it can't
> > host a sampler properly.
>
> XAP can host a sampler properly, it's just not /ideal/. If it was
> ideal it would be JACK.
>
> There are two use cases (it seems, from the windows world),
> samplers as applications (gigasampler, i.e. linuxsampler under
> JACK), and as plugins (Halion and friends, i.e. linuxsampler under
> XAP).
>
> I think that the application model gives you more power and
> control, but both are useful.

Yes, I see what you mean now. However, I don't think the API used for the RT part of the sampler matters. When you want to record, just grab audio from one or more inputs. If the GUI knows you're running as a JACK client, it can let the user connect the inputs. If not, the user will have to do that with the plugin host. Where's the problem?

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: M. N. <ne...@us...> - 2003-01-21 17:20:15
Moi,

> I think that sound fonts can map out a
> portion of the keyboard, from looking at Swami, so would it be
> possible to have a bass below middle C and a piano above middle C?

Yes, that's possible. Take two instruments and assign the zones on preset level.

> The technical issue with these libraries is that sometimes a velocity
> of 63 and 64 do not compare well with each other, so there are
> adjustments made in each sample set to get it all to work. You are
> going to find these adjustments in a .gig file I'm sure.

That's no problem. The SF2 standard allows completely independent parameters for each individual velocity / key zone. One could even use a modulator to make the transition gradual throughout the vel / key range of a sample (for example velocity to filter cutoff and amplitude).

> The other one we need is 'key switching', where a range of keys on
> the keyboard are reserved as switches, not notes. When one of these
> keys is pressed, the complete sample set for all MIDI velocities
> changes. I think this one is easier to implement though. (Famous last
> words...) You'll find this in some of the .gig libraries, but
> possibly not on Worra's site.

Famous last words indeed... That would mean adding new features to the SF2 format, to synth and editor. And then, why switch samples only? If I change to another sample, I'll probably also want to change filter, envelopes and so on.

In case somebody is interested in the solution I'm using to get a similar result (with a control program 'wrapped' around iiwusynth):

In iiwusynth there is a quite new feature, the so-called MIDI router. It can change (for example) the MIDI channel of received data, as in 'all data received on channels 4..7 goes to the synth on channel 0'. When I want to switch on-the-fly between different sounds, I assign them to different synth channels. To change sounds, I just upload a new router configuration (the router is smart enough to get pending 'noteoff' events right). For example: in state 1, all data goes to channel 0; in state 2, all data to channel 1. I use program change messages to switch between 'states' (instead of reserved keys).

Together with the Ladspa Fx unit this also allows changing the effects setup. For example: Rhodes EP on synth channels 0 and 1, and a phaser inserted at the audio output of channel 0 only. This effectively switches the phaser on and off (what's best: held notes are unaffected, so you can hold a chord, switch the router setup, and continue playing with a different sound).

If there is interest in the control program I'm using, let me know. But it's meant for live playing, not for sequencing, and far from ready-for-the-masses.

Cheers

Markus
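A schematic of Markus's router idea in C: one table per "state" mapping input channels to synth channels, swapped on a program change. synth_send() is a hypothetical stand-in, not the iiwusynth API, and the real router's pending-note-off handling is omitted here:

```c
/* One routing "state" per sound: where each incoming channel goes. */
typedef struct { int dest_chan[16]; } router_state;

static router_state states[2];
static router_state *active = &states[0];

static void synth_send(int chan, int status, int d1, int d2)
{
    /* stub: hand the event to the synth on channel `chan` */
    (void)chan; (void)status; (void)d1; (void)d2;
}

static void router_init(void)
{
    for (int c = 0; c < 16; c++) {
        states[0].dest_chan[c] = 0;   /* state 1: everything -> synth ch 0 */
        states[1].dest_chan[c] = 1;   /* state 2: everything -> synth ch 1 */
    }
}

static void route_event(int in_chan, int status, int d1, int d2)
{
    if ((status & 0xF0) == 0xC0) {    /* program change selects a state */
        active = &states[d1 & 1];
        return;
    }
    synth_send(active->dest_chan[in_chan & 15], status, d1, d2);
}
```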
From: Mark K. <mk...@co...> - 2003-01-21 17:13:28
> -----Original Message-----
> From: lin...@li...
> [mailto:lin...@li...] On Behalf Of David Olofson
> Sent: Tuesday, January 21, 2003 7:50 AM
> To: lin...@li...
> Subject: Re: [Linuxsampler-devel] RE: Hi - Very quiet list - my first
> post
>
> On Tuesday 21 January 2003 15.51, Mark Knecht wrote:
> [...]
> > The other one we need is 'key switching', where a range of keys
> > on the keyboard are reserved as switches, not notes. When one of
> > these keys is pressed, the complete sample set for all MIDI
> > velocities changes. I think this one is easier to implement though.
> > (Famous last words...) You'll find this in some of the .gig
> > libraries, but possibly not on Worra's site.
>
> That's a very interesting idea... (Especially if you have 88 keys.
> ;-)
>
> I have NRPN programmable CC->mixer control mapping (so you can hook
> the mod wheel up to the auto-wah base cutoff or something), but I
> never thought about mapping *keys*... Or poly pressure. :-)

Key switching is used very nicely in most GSt horn libraries today, as well as in the Scarbee Bass libraries. Here's an idea of what you get (key switch map from memory - definitely not correct):

  Key   Sample
  C-3   Standard notes, long sustain
  D-3   Standard notes, staccato
  E-3   Slide up to note
  F-3   Slide down to note
  G-3   Trills

It's very powerful and allows a library developer to map lots of useful stuff into the library without taking up normal note space, while also not confusing beginning users.

It's a bit cranky when you consider notation capabilities in programs like Rosegarden. None of them know that these notes are key switches. I've taken recently to moving key switch notes to a separate track that transmits on the same channel.

Cheers,
Mark
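Mechanically, key switching is just a branch in the note-on path: notes in the reserved range update an articulation state instead of sounding. A sketch assuming C-3 = MIDI note 48 (octave numbering varies by program) and a hypothetical trigger_voice() standing in for the voice allocator:

```c
#include <stdbool.h>

enum articulation { ART_SUSTAIN, ART_STACCATO, ART_SLIDE_UP,
                    ART_SLIDE_DOWN, ART_TRILL };

static enum articulation current_art = ART_SUSTAIN;

static void trigger_voice(int note, int velocity, enum articulation art)
{
    /* stub: start a voice from the sample set for `art` */
    (void)note; (void)velocity; (void)art;
}

/* Returns true if the note-on was consumed as a switch, not a note. */
static bool handle_note_on(int note, int velocity)
{
    switch (note) {
    case 48: current_art = ART_SUSTAIN;    return true;  /* C-3 */
    case 50: current_art = ART_STACCATO;   return true;  /* D-3 */
    case 52: current_art = ART_SLIDE_UP;   return true;  /* E-3 */
    case 53: current_art = ART_SLIDE_DOWN; return true;  /* F-3 */
    case 55: current_art = ART_TRILL;      return true;  /* G-3 */
    default:
        trigger_voice(note, velocity, current_art);  /* normal range */
        return false;
    }
}
```

Since switch keys never reach the voice allocator, they cost nothing at play time; the notation-program problem Mark raises exists precisely because sequencers record them as ordinary notes before this branch ever runs.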
From: Steve H. <S.W...@ec...> - 2003-01-21 17:09:31
On Tue, Jan 21, 2003 at 04:30:18 +0100, David Olofson wrote:
> > A JACK app can connect directly to physical i/o, and can connect to
> > any ported part of another application.
>
> So can a XAP plugin - only it's actually the host that decides and
> makes the connection. After all, what you get is still an audio
> buffer that you're supposed to read or write once per block cycle. I
> still don't see why it would matter what API(s) are used to get
> access to the buffers.

So, in other words, it can't ;) The idea is that the sampler should be able to connect to hardware inputs from its UI in order to sample. From within XAP you can't do that. I hope. It's not really a plugin UI feature and if it was present it would be bloat.

> > The behaviour of a hardware sampler leads me to think of it more
> > like a jack application than a plugin. That's not to say that I
> > don't think a sampler plugin is useful, obviously it is, but I
> > think a JACK sampler is more useful.
>
> What behavior are you referring to? Seriously, I want XAP plugins to
> behave as much like real hardware as is possible and desirable. I
> think there is a design issue with XAP if it can't host a sampler
> properly.

XAP can host a sampler properly, it's just not /ideal/. If it was ideal it would be JACK.

There are two use cases (it seems, from the windows world): samplers as applications (gigasampler, i.e. linuxsampler under JACK), and as plugins (Halion and friends, i.e. linuxsampler under XAP).

I think that the application model gives you more power and control, but both are useful.

- Steve
From: David O. <da...@ol...> - 2003-01-21 15:50:03
On Tuesday 21 January 2003 15.51, Mark Knecht wrote:
[...]
> The other one we need is 'key switching', where a range of keys
> on the keyboard are reserved as switches, not notes. When one of
> these keys is pressed, the complete sample set for all MIDI
> velocities changes. I think this one is easier to implement though.
> (Famous last words...) You'll find this in some of the .gig
> libraries, but possibly not on Worra's site.

That's a very interesting idea... (Especially if you have 88 keys. ;-)

I have NRPN programmable CC->mixer control mapping (so you can hook the mod wheel up to the auto-wah base cutoff or something), but I never thought about mapping *keys*... Or poly pressure. :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-01-21 15:45:57
On Tuesday 21 January 2003 13.41, M. Nentwig wrote:
> Moi,
>
> I don't think that there are any restrictions concerning velocity
> mapping with Swami. You can assign samples to arbitrary note and /
> or velocity 'windows'. In theory it's possible to have a different
> sample for each key / velocity combination (but I'd bet nobody has
> tried that yet :)

<plug qualifier="shameless">
How about being able to write C-like code that calculates or otherwise determines mapping when a note is started?

Well, whether or not it's really useful, this is where Audiality is going. Processing timestamped events in C is a bit hairy, so I'd prefer using a custom higher level language for that. Another point is that strapping on a scripting engine eliminates lots of hardcoded logic, and the restrictions that come with it.
</plug>

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-01-21 15:30:35
On Tuesday 21 January 2003 10.31, Steve Harris wrote:
> On Tue, Jan 21, 2003 at 01:19:42 +0100, David Olofson wrote:
> > > Oh, I see, that's not specific to JACK, it affects all
> > > SCHED_FIFO programs.
> >
> > Yes, it's a general RT issue that just happens to impact JACK
> > more than single RT thread apps. I wasn't very clear.
>
> No, it just shows up in jack because that's the only useful way to
> run multiple SCHED_FIFO applications!

Well, most people aren't using SCHED_FIFO for RT prototyping of RTAI or RTL applications, so that's a point... :-)

[...]

> > > Nothing concrete, but I can imagine how hard it would be to get
> > > it working reliably in LADSPA and this isn't really any
> > > different.
> >
> > Well, I don't really see any problems beyond getting the
> > sampler/butler interaction to work right. You still have an RT
> > part (the "official" plugin) and one or more lower priority
> > worker threads.
>
> Well, the butler /is/ the problem.

Of course - it's a worker thread, and the plugin needs to communicate with it without screwing other stuff up. Apart from obvious resource sharing conflicts (signals, maybe), it seems to me that it would be little more than a matter of picking a suitable priority for the butler thread. What am I missing?

> > > The JACK audio routing system is just more powerful than plugin
> > > hosting.
> >
> > In what way? Isn't the only significant difference that the
> > connections are made by clients rather than a host?
>
> A JACK app can connect directly to physical i/o, and can connect to
> any ported part of another application.

So can a XAP plugin - only it's actually the host that decides and makes the connection. After all, what you get is still an audio buffer that you're supposed to read or write once per block cycle. I still don't see why it would matter what API(s) are used to get access to the buffers.

> A XAP instance is limited
> to the connections that can be provided by the host application.

Yes, and the way I see it, that's *intended*. When you're using a host app, the host app is responsible for connections with the outside world. (The only exception would be "driver plugins", which interface the RT net with JACK, ALSA and other APIs.) If the host doesn't allow the user to make the desired connections, the host is broken and/or not the right tool for the job.

From the user POV, the only difference is that the JACK version would integrate I/O selection in the LinuxSampler UI, while the XAP version would rely on the host UI for that. Is that a problem?

> The behaviour of a hardware sampler leads me to think of it more
> like a jack application than a plugin. That's not to say that I
> don't think a sampler plugin is useful, obviously it is, but I think
> a JACK sampler is more useful.

What behavior are you referring to? Seriously, I want XAP plugins to behave as much like real hardware as is possible and desirable. I think there is a design issue with XAP if it can't host a sampler properly.

> > > Obviously you need that too, but for something as
> > > potentially sophisticated as a sampler I'd really want it
> > > available directly to JACK. More layers of overhead and shims
> > > would kinda defeat the point.
> >
> > Well, I have problems seeing where the overhead is, when you're
> > still dealing with buffers of float32 samples all over the place,
> > but I'm probably missing something. You'll need wrappers or other
> > solutions in one direction or another, no matter what the lowest
> > level API is - unless you support only one API, of course.
>
> Well, as XAP and JACK are fundamentally callback based, you can
> provide common source with just the control handling that isn't
> shared.

Exactly.

> It isn't necessary to have all the XAP host cruft (VVIDs,
> events, blah, blah) between jack and the sampler code.

Right, but you still need a control interface. And it should probably be sample accurate and support ramping, even if the first priority is driving it from MIDI.

If you use the ALSA sequencer API directly, you'll have to provide an alternative interface anyway, since ALSA sequencer doesn't make much sense for the XAP plugin. If you use your own custom control interface, you need to wrap it with custom code for *both* JACK and XAP.

Using XAP, no extra work is needed for the XAP version (obviously), and you could just use a standard or custom MIDI->XAP driver/converter (which is one of the first things I'll implement, as I still need MIDI to do anything useful) together with LinuxSampler when running it as a JACK client.

Using XAP as your "custom" API makes a lot of sense to me. If it's too complex to be a viable solution for that, I'm suspecting that we need to do some cleaning up. It's really supposed to be about as clean and simple as possible, while providing what you need for instrument control. If it's not suitable for a sampler, I think we might be on the wrong track.

What I'm saying is that XAP is *intended* for this kind of stuff. If it doesn't fit, we'll have to make it fit, or there's just no point in having it.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
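Steve's "common source" point, sketched: the DSP core is one function, and each environment adds a thin wrapper. The JACK side below uses the client API of the day (jack_client_new() and friends, since deprecated); the XAP side is left as a comment, since that API was never finalized, and the render stub stands in for the real voice mixer:

```c
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *port_l, *port_r;

/* The shared DSP core: everything but control/port plumbing lives
 * here. A XAP/plugin build would call the same function from its own
 * run() entry point, reusing the core unchanged. */
static void sampler_render(float *out_l, float *out_r, jack_nframes_t n)
{
    for (jack_nframes_t i = 0; i < n; i++)
        out_l[i] = out_r[i] = 0.0f;   /* stub: voice mixing goes here */
}

/* JACK wrapper: one thin callback around the common core. */
static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    float *l = jack_port_get_buffer(port_l, nframes);
    float *r = jack_port_get_buffer(port_r, nframes);
    sampler_render(l, r, nframes);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("sampler");
    if (!client)
        return 1;
    port_l = jack_port_register(client, "out_l", JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
    port_r = jack_port_register(client, "out_r", JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)
        sleep(1);   /* GUI/control thread would live here */
}
```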
From: Mark K. <mk...@co...> - 2003-01-21 14:52:23
> -----Original Message-----
> From: lin...@li...
> [mailto:lin...@li...] On Behalf Of Josh Green
> Sent: Tuesday, January 21, 2003 4:25 AM
>
> What kind of velocity functionality are you looking for?

Josh,
   I think Markus had it basically right in his follow-up email. We need to be able to map multiple sample sets against a single note, or range of notes, but choose them based on what MIDI velocity is received. This is the way all of the good GSt libraries work.

   As an example only, for the piano libraries they may sample the piano at 4, 8 or even 16 different playing key pressures. Then the softest sample is mapped from a MIDI velocity of 0-31, the second from 32-63, the third from 64-95, and the fourth from 96-127. Within each range the same sample is played, but the sample's audio volume is adjusted based on the velocity, so that a MIDI velocity of 93 plays louder than a velocity of 68, but they both play the same sample.

   The technical issue with these libraries is that sometimes a velocity of 63 and 64 do not compare well with each other, so there are adjustments made in each sample set to get it all to work. You are going to find these adjustments in a .gig file I'm sure.

   The other one we need is 'key switching', where a range of keys on the keyboard are reserved as switches, not notes. When one of these keys is pressed, the complete sample set for all MIDI velocities changes. I think this one is easier to implement though. (Famous last words...) You'll find this in some of the .gig libraries, but possibly not on Worra's site.

Cheers,
Mark
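Mark's example mapping, expressed as the lookup a sampler would do on note-on. The trim field is an invented placeholder for the per-sample-set adjustments he mentions around the 63/64 seam:

```c
#include <stddef.h>

typedef struct {
    int lo_vel, hi_vel;   /* inclusive MIDI velocity range */
    int sample_id;        /* which recording to trigger    */
    float trim_db;        /* per-layer level adjustment    */
} vel_layer;

/* Four sampled key pressures, split exactly as in the example above;
 * trim values are illustrative, not from any real library. */
static const vel_layer piano_layers[] = {
    {  0,  31, 0, -1.5f },
    { 32,  63, 1,  0.0f },
    { 64,  95, 2, -0.5f },
    { 96, 127, 3,  0.0f },
};

/* Note-on lookup. Velocity still scales volume *within* the layer, so
 * 93 plays louder than 68 while both trigger sample_id 2. */
static const vel_layer *layer_for_velocity(int vel)
{
    for (size_t i = 0; i < sizeof piano_layers / sizeof *piano_layers; i++)
        if (vel >= piano_layers[i].lo_vel && vel <= piano_layers[i].hi_vel)
            return &piano_layers[i];
    return NULL;   /* velocity outside 0..127 */
}
```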
From: M. N. <ne...@us...> - 2003-01-21 12:41:48
Moi,

I don't think that there are any restrictions concerning velocity mapping with Swami. You can assign samples to arbitrary note and / or velocity 'windows'. In theory it's possible to have a different sample for each key / velocity combination (but I'd bet nobody has tried that yet :)

-Markus