From: M. N. <ne...@us...> - 2003-01-21 12:41:48
Moi,

I don't think that there are any restrictions concerning velocity
mapping with Swami. You can assign samples to arbitrary note and/or
velocity 'windows'. In theory it's possible to have a different sample
for each key/velocity combination (but I'd bet nobody has tried that
yet :)

-Markus
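As a rough illustration of the key/velocity 'windows' described above, a zone lookup could be sketched like this. This is a hypothetical sketch, not Swami's actual API; the zone format and sample names are invented for the example.

```python
# Hypothetical sketch of key/velocity zone mapping (not Swami's real API).
# Each zone maps a (key range, velocity range) window to a sample name.

def find_sample(zones, key, velocity):
    """Return the first sample whose key/velocity window contains the note."""
    for (key_lo, key_hi), (vel_lo, vel_hi), sample in zones:
        if key_lo <= key <= key_hi and vel_lo <= velocity <= vel_hi:
            return sample
    return None

# Three example zones: two velocity splits on the low keys, one zone on top.
zones = [
    ((0, 60), (0, 63), "piano_soft_low.wav"),
    ((0, 60), (64, 127), "piano_hard_low.wav"),
    ((61, 127), (0, 127), "piano_high.wav"),
]

print(find_sample(zones, 45, 100))  # piano_hard_low.wav
```

In the extreme case Markus mentions, the zone list would simply contain one entry per key/velocity pair.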
From: David O. <da...@ol...> - 2003-01-21 15:45:57
On Tuesday 21 January 2003 13.41, M. Nentwig wrote:
> Moi,
>
> I don't think that there are any restrictions concerning velocity
> mapping with Swami. You can assign samples to arbitrary note and /
> or velocity 'windows'. In theory it's possible to have a different
> sample for each key / velocity combination (but I'd bet nobody has
> tried that yet :)

<plug qualifier="shameless">
How about being able to write C-like code that calculates or
otherwise determines mapping when a note is started?

Well, whether or not it's really useful, this is where Audiality is
going. Processing timestamped events in C is a bit hairy, so I'd
prefer using a custom higher level language for that. Another point
is that strapping on a scripting engine eliminates lots of hardcoded
logic, and the restrictions that come with it.
</plug>

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Josh G. <jg...@us...> - 2003-01-21 19:48:48
On Tue, 2003-01-21 at 07:45, David Olofson wrote:
> On Tuesday 21 January 2003 13.41, M. Nentwig wrote:
> > Moi,
> >
> > I don't think that there are any restrictions concerning velocity
> > mapping with Swami. You can assign samples to arbitrary note and /
> > or velocity 'windows'. In theory it's possible to have a different
> > sample for each key / velocity combination (but I'd bet nobody has
> > tried that yet :)
>
> <plug qualifier="shameless">
> How about being able to write C-like code that calculates or
> otherwise determines mapping when a note is started?
>
> Well, whether or not it's really useful, this is where Audiality is
> going. Processing timestamped events in C is a bit hairy, so I'd
> prefer using a custom higher level language for that. Another point
> is that strapping on a scripting engine eliminates lots of hardcoded
> logic, and the restrictions that come with it.
> </plug>

<plug qualifier="also shameless">
Yes, I can envision Python being a nice language for this type of
thing. A project for the future of Swami as well. As things stand
now, modulators can be used with MIDI velocity source controls to do
weird mappings with velocity (they can even control other effects,
say filter cutoff, for instance :)
</plug>
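The velocity-to-cutoff routing mentioned above could be sketched as a simple mapping function. This is only an illustration of the idea: the function name and the linear curve are assumptions, whereas real SoundFont modulators work with normalized linear, concave, or convex transfer curves.

```python
def velocity_to_cutoff(velocity, min_hz=200.0, max_hz=8000.0):
    """Map MIDI velocity (0-127) onto a filter cutoff range in Hz.

    A linear curve is used here for clarity; SoundFont modulators
    actually apply normalized curve shapes (linear/concave/convex).
    """
    amount = velocity / 127.0
    return min_hz + amount * (max_hz - min_hz)

print(velocity_to_cutoff(127))  # 8000.0
```

Harder playing then opens the filter further, which is the usual "brighter when louder" effect.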
From: Josh G. <jg...@us...> - 2003-01-21 19:33:00
On Tue, 2003-01-21 at 04:41, M. Nentwig wrote:
> Moi,
>
> I don't think that there are any restrictions concerning velocity
> mapping with Swami. You can assign samples to arbitrary note and / or
> velocity 'windows'. In theory it's possible to have a different sample
> for each key / velocity combination (but I'd bet nobody has tried that
> yet :)
>
> -Markus

It would be interesting to create layered velocity sounds as well,
where samples could be blended over the velocity range in conjunction
with an inverted velocity modulator (to cause a sample to fade out
towards the top of its velocity range). You could have a morphing
effect as one plays notes with increasing or decreasing velocity.
Once the Python binding is completed in Swami (not really that much
to do, I think), writing scripts to do these types of things should
be fairly easy :)

Cheers.
	Josh Green
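The crossfade described here, with one sample fading out via an inverted velocity modulator while the next fades in, could be sketched like this. The helper name, split point, and fade width are hypothetical; this is not Swami code.

```python
def crossfade_gains(velocity, split=64, width=32):
    """Gains for two layered samples around a velocity split point.

    The lower sample uses an inverted velocity ramp (fades out as
    velocity rises); the upper sample fades in over the same window,
    so the two gains always sum to 1.0.
    """
    lo = (split + width / 2 - velocity) / width  # inverted ramp
    lo = max(0.0, min(1.0, lo))                  # clamp outside the window
    return lo, 1.0 - lo

print(crossfade_gains(64))  # (0.5, 0.5)
```

Sweeping the velocity from soft to hard then morphs smoothly from the first sample to the second instead of switching abruptly at the split.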
From: David O. <da...@ol...> - 2003-01-21 19:42:45
On Tuesday 21 January 2003 20.32, Josh Green wrote:
[...]
> velocity. Once the Python binding is completed in Swami (not really
> that much to do I think) writing scripts to do these types of
> things should be fairly easy :) Cheers.

Speaking of scripting, are you planning on actually running Python in
RT context, or just use it for "rendering" maps?

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Josh G. <jg...@us...> - 2003-01-21 20:11:50
On Tue, 2003-01-21 at 11:42, David Olofson wrote:
> On Tuesday 21 January 2003 20.32, Josh Green wrote:
> [...]
> > velocity. Once the Python binding is completed in Swami (not really
> > that much to do I think) writing scripts to do these types of
> > things should be fairly easy :) Cheers.
>
> Speaking of scripting, are you planning on actually running Python in
> RT context, or just use it for "rendering" maps?

For just editing operations, real time is not important. For doing
real time control of effects and MIDI, it might be. It really remains
to be seen in practice what kind of latency is induced by calling
Python code in real time. In the MIDI realm it might not matter too
much. I'm not yet fully familiar with using Python embedded in a
program, but I'm sure there is probably a way to compile script
source into object code. Anyways..

Josh Green
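Pre-compiling script source is indeed possible with Python: `compile()` turns source into a code object once, so repeated calls skip parsing (the same idea is available to embedding C code via `Py_CompileString()` and `PyEval_EvalCode()`). A minimal sketch, with a hypothetical mapping script as the example:

```python
# Sketch: pre-compile an embedded script so it is parsed only once.
# The script text and variable names here are hypothetical examples.

source = "result = velocity * 2"

code = compile(source, "<mapping-script>", "exec")  # parse/compile once

def run_mapping(velocity):
    env = {"velocity": velocity}
    exec(code, env)          # execute the pre-compiled byte code
    return env["result"]

print(run_mapping(60))  # 120
```

Note that this only removes the parse/compile overhead per call; the byte code still allocates memory and is subject to garbage collection while it runs.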
From: David O. <da...@ol...> - 2003-01-21 21:15:02
|
On Tuesday 21 January 2003 21.11, Josh Green wrote:
> On Tue, 2003-01-21 at 11:42, David Olofson wrote:
> > On Tuesday 21 January 2003 20.32, Josh Green wrote:
> > [...]
> > > velocity. Once the Python binding is completed in Swami (not
> > > really that much to do I think) writing scripts to do these
> > > types of things should be fairly easy :) Cheers.
> >
> > Speaking of scripting, are you planning on actually running
> > Python in RT context, or just use it for "rendering" maps?
>
> For just editing operations, the idea of real time is not of
> importance. For doing real time control of effects and MIDI, it
> might be. It really remains to be seen in practice what kind of
> latency is induced by calling Python code in real time. In the MIDI
> realm it might not matter too much.

Well, MIDI may not suffer as much from unbounded latency as audio,
but I'm not willing to take chances. We're talking about *unbounded*
worst case latency here, and it's really as bad as it sounds. If you
*can* have memory management stall MIDI processing for half a second
in the middle of a live performance, it *will* happen sooner or
later. (You know Murphy...)

Either way, Audiality runs all event processing in the same context
as the audio processing, so I can't realistically use anything that
isn't RT safe anyway. Even the slightest deadline misses would cause
audible drop-outs.

> I'm not yet fully familiar with
> using Python embedded in a program, but I'm sure there is probably
> a way to compile script source into object code. Anyways..

That might work, but I suspect it will only improve throughput
without making worst case latencies bounded. If the compiled code
still uses malloc(), garbage collection and other non-deterministic
stuff, you have gained next to nothing WRT RT reliability.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Josh G. <jg...@us...> - 2003-01-22 02:26:32
|
On Tue, 2003-01-21 at 13:14, David Olofson wrote:
> Well, MIDI may not suffer as much from unbounded latency as audio,
> but I'm not willing to take chances. We're talking about *unbounded*
> worst case latency here, and it's really as bad as it sounds. If you
> *can* have memory management stall MIDI processing for half a second
> in the middle of a live performance, it *will* happen sooner or
> later. (You know Murphy...)

Half a second? I'm sure that rarely occurs. I can't speak for
Python's memory management, but much of the critical stuff in Swami
uses glib memory chunks. These allow for an initial allocation block
and then only allocate more if and when needed (as long as you
pre-allocate enough, it shouldn't happen).

If I do ever get around to creating a sequencing subsystem, using
Python functions will be completely optional. Users who use this
feature will probably understand the potential for problems. When
just playing around with composing music, I don't think it's much of
an issue. When one wants to do real time stuff, all the Python
functions can be rendered to a MIDI buffer with explicit time stamps
(those that don't take real time input, of course). Currently, I'm
more interested in nice functionality than sub-ms latency. This can
always be optimized at a later date.

> Either way, Audiality runs all event processing in the same context
> as the audio processing, so I can't realistically use anything that
> isn't RT safe anyway. Even the slightest deadline misses would cause
> audible drop-outs.
>
> > I'm not yet fully familiar with
> > using Python embedded in a program, but I'm sure there is probably
> > a way to compile script source into object code. Anyways..
>
> That might work, but I suspect it will only improve throughput
> without making worst case latencies bounded. If the compiled code
> could still uses malloc(), garbage collection and other
> non-deterministic stuff, you have gained next to nothing WRT RT
> reliability.

Cheers.
	Josh Green
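The "render to a MIDI buffer with explicit time stamps" idea above could be sketched like this: run the (non-real-time-safe) Python code ahead of time and collect its output into a time-ordered buffer that the real-time thread only replays. The event format and the arpeggio generator are hypothetical examples, not Swami or Audiality code.

```python
# Hypothetical sketch: render a Python event generator offline into a
# flat buffer of (time, note, velocity) tuples, so the RT side never
# calls into the interpreter.

def render_events(generator, duration_s):
    events = list(generator(duration_s))
    events.sort(key=lambda e: e[0])  # the RT player expects time order
    return events

def arpeggio(duration_s, interval_s=0.25, base_note=60):
    """Example generator: a four-note minor-third arpeggio."""
    t = 0.0
    step = 0
    while t < duration_s:
        yield (t, base_note + (step % 4) * 3, 100)
        t += interval_s
        step += 1

buf = render_events(arpeggio, 1.0)
print(buf[0])  # (0.0, 60, 100)
```

Since all the Python runs before playback starts, any malloc or garbage-collection stalls happen during rendering, where they are harmless.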