|
From: Christian S. <sch...@li...> - 2014-05-21 12:27:09
|
Hi list!

I am currently thinking about whether it would make sense to add real-time scripting support to LS, something like Kontakt already provides: a set of event handlers you can write and commands you can trigger by function calls. I would like to use it as an extension to the GIG engine, but it could theoretically also be used for the other engines (SFZ and SF2).

The gig format has so-called "MIDI rules", but as far as I can see, they are very limited.

Example: I currently have a string ensemble instrument where I would like staccato sounds to be triggered automatically once aftertouch passes a certain level. That way I can e.g. play and hold string chords with both hands, and by increasing pressure on the keys once in a while I can trigger other sounds again and again without having to release any key.

That sounds like a job for the so-called "control trigger MIDI rule". But the way it was implemented in GigaStudio 4, there are four restrictions that prevent me from doing so:

1. Those "control trigger MIDI rules" only support CCs, not aftertouch.
2. As far as I can see, there is a maximum of 32 triggers, so I could use those staccato sounds only on a key range of a bit more than two octaves.
3. Those trigger rules have no concept of distinguishing whether a sound was triggered by a MIDI rule or normally/directly by the musician. So all I could do is trigger another string ensemble note, but not a staccato note. There is the so-called "smart MIDI" dimension, but as far as I can see, it is restricted to other MIDI rules like legato and is not supposed to be used with the trigger MIDI rule.
4. As far as I can see, you can only use one MIDI rule type per instrument. So if you are using the "control trigger MIDI rule", you are not supposed to also use e.g. the "legato MIDI rule".

Obviously those restrictions could be hacked around by adding LinuxSampler's own minor custom extensions on top of GigaStudio's original MIDI rules concept. But does it make sense? One would probably soon hit yet another restriction of those MIDI rules, and various use cases they simply do not cover, and we would be back at square one.

Thoughts?

CU
Christian
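For illustration, here is a rough C++ sketch of the behavior such an event handler would have to implement for this use case: trigger staccato samples for all held keys whenever channel pressure crosses a threshold. All names (StaccatoTrigger, triggerStaccatoNote()) are hypothetical placeholders, not existing LinuxSampler API.

    // Hypothetical sketch, not actual LinuxSampler code: fire staccato samples
    // for all currently held keys once channel pressure exceeds a threshold,
    // then re-arm when the pressure drops below it again.
    #include <cstdint>
    #include <cstdio>

    static void triggerStaccatoNote(uint8_t channel, uint8_t key) {
        std::printf("staccato: channel %u, key %u\n", channel, key);   // stub
    }

    class StaccatoTrigger {
    public:
        void noteOn(uint8_t key)  { held_[key & 0x7F] = true;  }
        void noteOff(uint8_t key) { held_[key & 0x7F] = false; }

        void aftertouch(uint8_t channel, uint8_t pressure) {
            const uint8_t threshold = 96;          // trigger level (arbitrary)
            if (pressure >= threshold && !armed_) {
                armed_ = true;                     // fire only once per excursion
                for (int key = 0; key < 128; ++key)
                    if (held_[key]) triggerStaccatoNote(channel, (uint8_t)key);
            } else if (pressure < threshold) {
                armed_ = false;                    // pressure released: re-arm
            }
        }
    private:
        bool held_[128] = {};
        bool armed_ = false;
    };

    int main() {                                   // tiny usage example
        StaccatoTrigger t;
        t.noteOn(60); t.noteOn(64); t.noteOn(67);  // hold a C major chord
        t.aftertouch(0, 50);                       // below threshold: nothing
        t.aftertouch(0, 110);                      // crosses threshold: 3 staccato notes
        t.aftertouch(0, 120);                      // still above: no re-trigger
        t.aftertouch(0, 30);                       // released: re-armed
    }

The point of a scripting layer is that instrument designers could express exactly this kind of logic themselves, without any engine changes.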
|
From: David O. <da...@ol...> - 2014-05-21 13:40:13
|
On Wed, May 21, 2014 at 1:16 PM, Christian Schoenebeck <sch...@li...> wrote:
[...]
> Obviously those restrictions could be hacked by adding LinuxSampler's own
> minor custom extensions on GigaStudios original MIDI rules concept. But does
> it make sense? I mean one would probably reach another restriction with those
> MIDI rules soon and various use cases that would not be covered by them, and
> we would be back at point one.
>
> Thoughts?

Well, having written two scripting engines for realtime applications, one of which evolved as part of an audio engine:

http://eelang.org
http://audiality.org

I would say it's an incredibly flexible and powerful solution, generally speaking.

Actually, the main reason why I went down that route in the first place with Audiality 2 was that I wanted a *really* small and simple engine that could still do something useful. (At 2000 LOC, it was already capable of all-synthetic sound effects and music. It's a bit larger now, but the scripting language is friendlier, and there is modular synthesis, filters, effects etc.)

A traditional modular, or worse, hardwired design tends to call for countless features - and you still keep running into things that are awkward or impossible to do without adding even more features. I think that's a dead end with the kind of complexity and dynamics we want from synths and audio engines these days. IMHO, hardwired features are for performance or convenience - not core functionality.

The downside? Well, a scripting language isn't the best user interface for non-programmers. One way to deal with that might be to think of the scripting as an intermediate level, where scripted instruments and the like come with GUIs, so ordinary users can just load them and use them as you would with a more traditional solution.

--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|  http://consulting.olofson.net          http://olofsonarcade.com    |
'---------------------------------------------------------------------'
|
From: Christian S. <sch...@li...> - 2014-05-21 15:38:09
|
On Wednesday 21 May 2014 15:05:53 David Olofson wrote:
> Well, having written two scripting engines for realtime applications,
> one of which evolved as part of an audio engine:
>
> http://eelang.org
> http://audiality.org

Yes, we talked about this issue years ago, since you already had that in place. Using your EEL as a basis for a script language might be an interesting option.

So far I have been considering taking Kontakt's script language as the basis for the actual language (with minor adjustments/extensions here and there that might be necessary for the GIG format, for example). That would bring the advantage that users could recycle their scripts from Kontakt instruments. Kontakt's script language also seems to have an overall reasonable set of functionality that could be sufficient even for very sophisticated purposes. It would mean, however, that a lot of stuff would need to be adjusted in EEL.

Or does anybody have another good suggestion for an existing script language, other than Kontakt's, that might be used as the basis for such a language for the sampler?

So before you wrote your email today, I was considering using 1) Bison as compiler-compiler (for automatically translating the script language's grammar definition into parser tables, which is done only when the sampler source code is compiled) and 2) a custom parser skeleton in C++ (instead of Bison's default skeleton parser) which processes the Bison-generated parser tables and would take care of real-time safe memory management etc. That might be faster to achieve than adjusting EEL, no? Do you see some drawback to this approach?

If I got it correctly, in EEL you are using bytecode and an intermediate translation layer. Do you think that is necessary for the use case in the sampler? That would be an important issue to decide. The use cases that come to my mind right now are just rather simple scripts that are executed when a MIDI event arrives. I obviously haven't tested nor benchmarked it, but it could be that parsing a script on demand (without an intermediate layer), right on such events, would be sufficient without an overall performance impact. And in the instrument files it would probably even make sense to just store the scripts in the script language's source code form (instead of bytecode), in order to avoid problems with script engine backend changes as far as possible.

> A traditional modular, or worse, hardwired design tends to call for
> countless features - and you still keep running into things that are
> awkward or impossible to do without adding even more features. I think
> that's a dead end with the kind of complexity and dynamics we want
> from synths and audio engines these days. IMHO, hardwired features are
> for performance or convenience - not core functionality.

Exactly.

> The downside? Well, a scripting language isn't the best user interface
> for non-programmers. One way to deal with that might be to think of
> the scripting as an intermediate level, where scripted instruments and
> the like come with GUIs, so ordinary users can just load them and use
> them as you would with a more traditional solution.

Right, but on the other hand, scripting only comes into play where rather complex features need to be created for the respective instrument. Trying to implement such complex tasks with a very ergonomic UI is extremely hard to achieve, or almost impossible. As you said, trying to hardwire that (like Tascam tried in GSt4 to avoid scripting) quickly runs into restrictions. So I think that if an already existing script language is taken as the basis (e.g. Kontakt's), plus a decent script editor in the instrument editor with syntax highlighting, auto-completion suggestions and built-in documentation, that might be a good basis for sophisticated instrument designers.

CU
Christian
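To make the trade-off concrete, here is a small, self-contained C++ sketch of the "parse once at load time, execute per event" variant (toy instruction set and hypothetical names, not an actual design proposal): the per-event part only walks a precompiled instruction list with a fixed-size stack and never allocates.

    // Toy sketch of "compile at load time, execute at event time". The handler
    // below corresponds to something like: transpose the note up an octave and
    // play it.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum class Op : uint8_t { Push, AddToNote, PlayNote, End };
    struct Instr { Op op; int32_t arg; };
    struct NoteEvent { int32_t note; int32_t velocity; };

    // Real-time part: interprets precompiled instructions for one event,
    // using only a fixed-size stack - no heap, no locks.
    static void runHandler(const std::vector<Instr>& code, NoteEvent& ev) {
        int32_t stack[16]; int sp = 0;
        for (const Instr& i : code) {
            switch (i.op) {
                case Op::Push:      stack[sp++] = i.arg;    break;
                case Op::AddToNote: ev.note += stack[--sp]; break;
                case Op::PlayNote:  std::printf("play note %d, velocity %d\n",
                                                ev.note, ev.velocity); break;
                case Op::End:       return;
            }
        }
    }

    int main() {
        // Load time (non-real-time): this list would normally be produced by
        // the parser (e.g. built on Bison-generated tables); written by hand here.
        std::vector<Instr> onNote = {
            { Op::Push, 12 }, { Op::AddToNote, 0 }, { Op::PlayNote, 0 }, { Op::End, 0 }
        };
        NoteEvent ev{ 60, 100 };
        runHandler(onNote, ev);    // event time: no allocation happens here
    }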
|
From: Raphaël M. <rmo...@gm...> - 2014-05-21 21:10:29
|
Hello,

I would love to have a scripting engine in LS! Recently I have been working hard to create an SFZ drum preset for live use, and the hi-hat especially cannot be done the way I want without scripting. So I need to use either mididings or Pure Data to trigger special events like "pedal is going down" or "pedal is going up", and do tricky SFZ layouts to achieve a realistic player's sensation. Having embedded scripting in LS would be of great help here, and would also help with real-time treatment of these events.

I have no preference for a language, but definitely interested!

Raphaël

On 21 May 2014 14:27, "Christian Schoenebeck" <sch...@li...> wrote:
> Hi list!
>
> I am currently thinking about whether it would make sense to add real-time
> scripting support to LS. Something like Kontakt already provides. A set of
> event handlers you can write and commands you can trigger by function
> calls. [...]
>
> Thoughts?
>
> CU
> Christian
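For illustration, the "pedal is going down" / "pedal is going up" detection is a small piece of state tracking of the kind such a scripting engine could take over from mididings or Pure Data. A hedged C++ sketch, assuming the hi-hat pedal arrives as a CC (commonly CC#4) and that higher values mean the pedal is pressed further down - all names are hypothetical:

    // Sketch only: derive pedal movement events from a hi-hat pedal CC stream.
    // A small hysteresis window filters out controller jitter.
    #include <cstdint>
    #include <cstdio>

    class HiHatPedalTracker {
    public:
        // Feed every incoming pedal CC value (commonly CC#4).
        void onPedalCC(uint8_t value) {
            if (value > last_ + hysteresis_)      { std::printf("pedal going down (%u)\n", value); last_ = value; }
            else if (value + hysteresis_ < last_) { std::printf("pedal going up (%u)\n",   value); last_ = value; }
            // movements inside the hysteresis window are ignored
        }
    private:
        uint8_t last_ = 0;
        const uint8_t hysteresis_ = 3;   // debounce controller jitter
    };

    int main() {
        HiHatPedalTracker t;
        const uint8_t stream[] = { 10, 40, 90, 127, 120, 60, 20 };  // pressing, then releasing
        for (uint8_t v : stream) t.onPedalCC(v);
    }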
|
From: David O. <da...@ol...> - 2014-05-22 14:52:16
|
On Wed, May 21, 2014 at 4:43 PM, Christian Schoenebeck <sch...@li...> wrote:
> On Wednesday 21 May 2014 15:05:53 David Olofson wrote:
>> Well, having written two scripting engines for realtime applications,
>> one of which evolved as part of an audio engine:
>>
>> http://eelang.org
>> http://audiality.org
[...]
>
> Yes, we talked about this issue years ago, since you already had that in
> place. Using your EEL as basis for a script language might be an interesting
> option.
>
> So far I considered taking Kontakt's script language as basis for the actual
> language (with minor adjustments/extensions here and there that might be
> necessary for the GIG format for example). Which would bring the advantage
> that users could recycle their scripts from Kontakt instruments. Kontakt's
> script language also seems to have an overall reasonable set of
> functionalities that could be sufficient even for very sophisticated purposes.
> That would mean however that a lot of stuff would need to be adjusted in EEL.

Compatibility would of course be very nice in this case. Not really familiar with the Kontakt scripting language, so I can't tell how much work it would be to write a parser for it - but that's basically how to go about it; write an alternative compiler that issues bytecode for an existing VM. A VM like this is basically just a high level virtual CPU, and not really tied to any specific language.

> Or does anybody have another good suggestion for an existing script language
> that might be used as basis for such a language for the sampler except of
> Kontakt's one?

EEL exists mostly because I couldn't find anything like it. I looked into subverting Lua to suit my needs (replacing the GC, most critically), but the Lua community showed virtually no interest in it (not really needed, even for <100 Hz game scripting), so I would have been completely on my own with it - and I'd rather be in that situation with code that I know inside out.

For something really simple, you could look at the Audiality 2 scripting engine (not physically related to EEL), but that's a small, domain specific language that's somewhat tied to the design of the audio engine. Apart from being massively microthreaded with message passing, it's a really small and simple language.

> [...Bison and stuff...]

Not sure about parser and lexer generators, really... These tools only solve a small, simple part of the problem - and they're not even particularly good at dealing with some types of languages. I prefer just coding it all in plain C. Fewer tools to depend on, which is particularly nice when porting and cross-compiling! :-)

> If I got it correctly, in EEL you are using bytecode and an intermediate
> translation layer.

Sort of... EEL compiles to bytecode, which runs on a custom VM - just like Lua. The compiler is included with the VM. The EEL VM can be compiled with a traditional switch() dispatcher, or computed gotos (switch() results in ~99% mispredictions on most CPUs), but there's no JIT or native compiler - yet. (I'm looking into using LLVM to generate native code, asm.js and whatnot, both "live" and at build time.)

> Do you think that is necessary for the use case in the sampler?

Not strictly, but even disregarding raw speed, interpreting a proper scripting language from source in a realtime safe manner is going to be hairy. The normal, easy way of coding a parser involves deeply recursive code, associative arrays and other nasty things that are hard to do right in a realtime system.

--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|  http://consulting.olofson.net          http://olofsonarcade.com    |
'---------------------------------------------------------------------'
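For reference, this is roughly what the computed-goto dispatch mentioned above looks like (a GCC/Clang extension; toy instruction set, not EEL's actual bytecode): every opcode handler ends with its own indirect jump instead of branching back to a single switch(), which gives the branch predictor per-opcode history to work with.

    // Toy bytecode interpreter using "labels as values" (GCC/Clang extension).
    #include <cstdint>
    #include <cstdio>

    enum : uint8_t { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const uint8_t* code) {
        static const void* dispatch[] = { &&op_push, &&op_add, &&op_print, &&op_halt };
        int32_t stack[16]; int sp = 0;
        const uint8_t* ip = code;

        goto *dispatch[*ip++];                          // enter the first handler

    op_push:  stack[sp++] = *ip++;                      goto *dispatch[*ip++];
    op_add:   --sp; stack[sp - 1] += stack[sp];         goto *dispatch[*ip++];
    op_print: std::printf("%d\n", stack[sp - 1]);       goto *dispatch[*ip++];
    op_halt:  return;
    }

    int main() {
        // push 40, push 2, add, print, halt  ->  prints "42"
        const uint8_t program[] = { OP_PUSH, 40, OP_PUSH, 2, OP_ADD, OP_PRINT, OP_HALT };
        run(program);
    }

With a plain switch() in a loop, the single indirect branch at the top of the loop has to predict every opcode transition; with one dispatch jump per handler, each jump only has to predict what typically follows that particular opcode.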
|
From: Christian S. <sch...@li...> - 2014-05-22 19:16:19
|
On Thursday 22 May 2014 16:52:03 David Olofson wrote:
> Compatibility would of course be very nice in this case. Not really
> familiar with the Kontakt scripting language, so I can't tell how much
> work it would be to write a parser for it -

There are various kick-start introductions to the "KSP" (Kontakt Script Processor) script language, e.g.:

http://www.askaudiomag.com/articles/introduction-to-scripting-in-kontakt-part-1

There are also a lot of videos out there. You basically just use the predefined event callback functions for two main purposes: 1) reacting to and processing MIDI events, and 2) adding custom GUI controls (dials, input fields, labels, buttons) and writing event handlers for their UI widget events. For now I am mostly interested in implementing aspect 1) (MIDI processing).

One thing I find suboptimal with KSP is that it is very focused on using function calls all over the place. For example, if you want to trigger some note when a key is released, you would write something like:

  on release
    declare $n := 42
    play_note($n, 127, 0, 0)
  end

So as a user you need to remember the sequence of function arguments for play_note() (or you need to look them up in the manual each time). Using a rather more object oriented approach like:

  on release
    Note $n
    $n.note := 42
    $n.velocity := 127
    //optional:
    //$n.offset := 0
    //$n.duration := 0
    $n.trigger()
  end

would probably be more intuitive and easier to remember. Modifying incoming MIDI events is also a bit odd, with KSP it is like this:

  on note
    if ($EVENT_VELOCITY > 100)
      ignore_event($EVENT)
    else
      change_note($EVENT, $EVENT_NOTE + 12)
      change_velo($EVENT, $EVENT_VELOCITY / 2)
    end if
  end

which I would find more intuitive, like:

  on note
    if ($NOTE.velocity > 100)
      $NOTE.ignore()
    else
      $NOTE.note += 12
      $NOTE.velocity /= 2
    end if
  end

There are 3 different scopes for variables in KSP, by the way. You declare global variables in the "init" callback:

  on init
    declare $myGlobalVariable := 10
    declare %someGlobalArray[5] := ( 2, 3, 5, 7, 11 )
  end

You can then access those global variables from any other callback function. Whereas if you declare a variable in some other callback, like:

  on note
    declare $i := 0
  end

then this variable is by default rather something like a static variable, bound to the namespace of that "on note" callback, *but* shared by all instances of that callback being processed. So you might interfere with other instances running this callback "at the same time". Even though KSP only has one thread, there is a wait() function which pushes the respective function execution onto a wait queue, thus creating a situation that is causally comparable to multiple instances of the same callback executing in parallel. That's why KSP has a third variable scope as well:

  on note
    declare polyphonic $i := 0
  end

In this case such a "polyphonic" variable is not shared, it is bound to the event which triggered the callback function. So no other instance of the same callback can modify variable $i in between, preventing you from any trouble. However from memory management point of view this case is a bit problematic. Because you have no information at parse time how many instances of the callback might be triggered in parallel. I am not sure what the best solution would be to implement this case (from memory management POV). Do you have a solid idea?

> but that's basically how
> to go about it; write an alternative compiler that issues bytecode for
> an existing VM. A VM like this is basically just a high level virtual
> CPU, and not really tied to any specific language.

Probably, but in fact I would like to avoid opening the door to numerous flavors of high level script languages in the sampler right now, because it might create confusion among users when learning scripting with the sampler and/or reading other people's scripts, if everybody is using a different language on top. So I would prefer to pick one single language flavor that should remain the only one (for quite some time at least).

> EEL exists mostly because I couldn't find anything like it. I looked
> into subverting Lua to suit my needs (replacing the GC, most
> critically), but the Lua community showed virtually no interest in it
> (not really needed, even for <100 Hz game scripting), so I would have
> been completely on my own with it - and I'd rather be in that
> situation with code that I know inside out.

Yes, I saw you went a bit deeper with EEL than what we probably need to achieve right now. From what you saw above, the typical use case for the script language in a sampler is event handling once in a while, not at high frequencies. You are even defining tight DSP stuff with EEL, if I saw it correctly. That's already beyond the scope of what we need right now.

> For something really simple, you could look at the Audiality 2
> scripting engine (not physically related to EEL), but that's a small,
> domain specific language that's somewhat tied to the design of the
> audio engine. Apart from being massively microthreaded with message
> passing, it's a really small and simple language.

Why is it called "Extensible", by the way? What is the particular extensible aspect of EEL?

> > [...Bison and stuff...]
>
> Not sure about parser and lexer generators, really... These tools only
> solve a small, simple part of the problem - and they're not even
> particularly good at dealing with some types of languages. I prefer
> just coding it all in plain C. Fewer tools to depend on, which is
> particularly nice when porting and cross-compiling! :-)

Well, I do have a preference for compiler-compilers. You work with them at a more intuitive and compact level, they keep you safe from typical manual parser programming mistakes, and they can save you a lot of time and stress with languages that grow over time.

> > Do you think that is necessary for the use case in the sampler?
>
> Not strictly, but even disregarding raw speed, interpreting a proper
> scripting language from source in a realtime safe manner is going to
> be hairy. The normal, easy way of coding a parser involves deeply
> recursive code, associative arrays and other nasty things that are
> hard to do right in a realtime system.

Yeah, I have already dropped the idea of interpreting script source in real-time. We need some intermediate layer anyway, so we would not save any effort by parsing on each event in real-time.

CU
Christian
|
From: David O. <da...@ol...> - 2014-05-23 08:55:02
|
On Thu, May 22, 2014 at 8:21 PM, Christian Schoenebeck
<sch...@li...> wrote:
[...]
> One thing I find suboptimal with KSP is that it is very focused on using
> function calls all over the place.
[...]
Yeah, functions with umpteen arguments get old... Then again, I'm not
generally a big fan of overly verbose code either. Named arguments? An
editor that automatically shows the argument names of the function at
hand in the status bar or similar?
> would probably be more intuitive and easier to remember. Modifying incoming
> MIDI events is also a bit odd, with KSP it is like this:
>
> on note
> if ($EVENT_VELOCITY > 100)
> ignore_event($EVENT)
> else
> change_note($EVENT, $EVENT_NOTE + 12)
> change_velo($EVENT, $EVENT_VELOCITY / 2)
> end if
> end
>
> which I would find more intuitive, like:
>
> on note
> if ($NOTE.velocity > 100)
> $NOTE.ignore()
> else
> $NOTE.note += 12
> $NOTE.velocity /= 2
> end if
> end
[...]
The latter needs either a proper type system with structs, classes or
similar, or run-time indexing by "name" (typically immutable strings),
both of which are quite complicated to do well with good performance.
This suggests that they took the easy route with KSP and just made
"everything a variable," resolved at compile time (performance is not
critical) by means of a one-dimensional non-nested symbol table.
So, the latter is much nicer, but TANSTAAFL. :-)
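For illustration, a rough sketch of that "everything is a variable" approach: names are resolved to slot indices by a single flat symbol table at compile time, so run-time access is a plain array lookup. Hypothetical code, not KSP's actual internals.

    // Compile time (non-real-time): names are resolved to slots in one flat,
    // non-nested table. Run time: variables live in a plain array indexed by
    // those precomputed slots - no structs, no string lookups per event.
    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    class SymbolTable {
    public:
        int resolve(const std::string& name) {        // create slot on first use
            auto it = slots_.find(name);
            if (it != slots_.end()) return it->second;
            int slot = (int)slots_.size();
            slots_.emplace(name, slot);
            return slot;
        }
        std::size_t size() const { return slots_.size(); }
    private:
        std::unordered_map<std::string, int> slots_;  // single global scope
    };

    int main() {
        SymbolTable syms;                              // used only while compiling
        int velocitySlot = syms.resolve("$EVENT_VELOCITY");
        int noteSlot     = syms.resolve("$EVENT_NOTE");

        std::vector<int32_t> vars(syms.size(), 0);     // run-time storage
        vars[noteSlot]     = 60;
        vars[velocitySlot] = 101;
        return vars[velocitySlot] > 100 ? 0 : 1;       // e.g. the velocity test above
    }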
[...]
> In this case such a "polyphonic" variable is not shared, it is bound to the
> event which triggered the callback function. So no other instance of the same
> callback can modify variable $i in between, preventing you from any trouble.
> However from memory management point of view this case is a bit problematic.
> Because you have no information at parse time how many instances of the
> callback might be triggered in parallel. I am not sure what the best solution
> would be to implement this case (from memory management POV). Do you have a
> solid idea?
This is where realtime programming gets real hairy... Again, it would
seem like the "easy" route was taken with KSP, and it's not truly
dynamic (that is, all structures, variables etc are allocated and
initialized as an instrument is loaded), except that the 'polyphonic'
keyword somehow sets up pools or arrays of "sufficient" size for
polyphonic action.
In hardware synths, there's usually a fixed number of voices,
traditionally because the voices were physically built into the
hardware, and in more recent designs, because of (relatively speaking)
very tight CPU power and memory budgets. In some cases (like the old
JV-1080), you can manually split the voice pool across parts to manage
voice stealing issues, but with a soft synth/sampler these days, you
can probably get away with just pre-allocating a few hundred voices,
with polyphonic variables and whatnot that goes with them, and be done
with it. I've noticed proprietary software synths and samplers still
tend to have hard limits on channels, voices etc, and this might be
the main reason for that...
With EEL, I decided to go fully dynamic, so there are no special cases
for things like this. There are static variables (allocated per
module, and shared within the module instance) and local (VM stack)
variables that work like stack variables in C. If you want per-channel
or per-note data, you just design your objects like that, OOP style.
(EEL is really a generic programming language, not designed for any
specific type of applications, apart from the requirement of being
"realtime safe".)
So, there is full dynamic memory management in the core of the
language (call stack, arrays, tables, custom classes etc), and if you
want it all hard realtime, you should plug in a suitable memory
manager instead of malloc()/realloc()/free(), like TLSF:
http://www.gii.upv.es/tlsf/
It has crossed my mind to hardwire TLSF or similar into EEL, but there
are downsides to a realtime memory manager when you don't really need
one, and existing realtime OSes and application often have their own
custom memory managers already, so it might be better to plug into
those.
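As a rough illustration (hypothetical hook names, not EEL's actual API), such a pluggable allocator boils down to routing every allocation through a small set of function pointers that a host can point at TLSF, its own pool, or plain libc:

    // All engine/script allocations go through one small hook, so a host can
    // back it with TLSF or its own pool instead of plain malloc()/free().
    #include <cstddef>
    #include <cstdlib>

    struct ScriptAllocator {
        void* (*alloc)(void* ctx, std::size_t size);
        void* (*realloc)(void* ctx, void* ptr, std::size_t size);
        void  (*free)(void* ctx, void* ptr);
        void* ctx;                     // e.g. a TLSF pool handle created up front
    };

    // Default backend: plain libc - fine outside of real-time contexts.
    static void* defaultAlloc(void*, std::size_t n)            { return std::malloc(n); }
    static void* defaultRealloc(void*, void* p, std::size_t n) { return std::realloc(p, n); }
    static void  defaultFree(void*, void* p)                   { std::free(p); }

    static ScriptAllocator gAllocator = { defaultAlloc, defaultRealloc, defaultFree, nullptr };

    // The host installs its allocator once, before any scripts are loaded.
    void setScriptAllocator(const ScriptAllocator& a) { gAllocator = a; }

    int main() {
        void* p = gAllocator.alloc(gAllocator.ctx, 256);   // engine-internal allocation
        gAllocator.free(gAllocator.ctx, p);
    }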
>> but that's basically how
>> to go about it; write an alternative compiler that issues bytecode for
>> an existing VM. A VM like this is basically just a high level virtual
>> CPU, and not really tied to any specific language.
>
> Probably, but in fact I would like to avoid opening the door for numerous
> flavors of high level script languages used in the sampler right now. Because
> it might create confusion among users when learning scripting with the sampler
> and/or by reading scripts of other people, if everybody is using a different
> language on top that is. So I would prefer to pick one single language flavor
> that should remain alone in this flavor (for quite some time at least).
Yes... Choice is good - but most people don't know what to choose, or why. :-D
>> EEL exists mostly because I couldn't find anything like it. I looked
>> into subverting Lua to suit my needs (replacing the GC, most
>> critically), but the Lua community showed virtually no interest in it
>> (not really needed, even for <100 Hz game scripting), so I would have
>> been completely on my own with it - and I'd rather be in that
>> situation with code that I know inside out.
>
> Yes, I saw you went a bit deeper with EEL than what we probably need to
> achieve right now. From what you saw above, the typical use case for the
> script language in a sampler is event handling once in a while. Not at high
> frequencies. You are even defining tight DSP stuff with EEL if I saw it
> correctly. That's already beyond the scope what we need right now.
Well, I've tried to make EEL about as fast as possible without (so
far) compiling to native code - but that's more or less unrelated to
the realtime aspect. As long as you run scripts in the audio thread, a
scripting engine that occasionally does a GC cycle, rebuilds large
structures or hits unsafe system calls is going to cause trouble. Even
if you only use it once during a whole gig, that's one chance of it
causing an audio drop-out.
>> For something really simple, you could look at the Audiality 2
>> scripting engine (not physically related to EEL), but that's a small,
>> domain specific language that's somewhat tied to the design of the
>> audio engine. Apart from being massively microthreaded with message
>> passing, it's a really small and simple language.
>
> Why is it called "Extensible" by the way? What is the particular extensible
> aspect of EEL?
Host applications and native modules can inject custom classes that
work just like the built-in high level types; string, array, table
etc. Other than that, it's just the usual modular stuff that any real
programming language has these days. :-)
It's crossed my mind to support "plugins" of some sort in the
compiler, to support custom syntax extensions, but I'm not sure that's
a great idea... I'd rather just add nice, generally useful features to
the official language as needed instead - though I'm trying to keep
that to a minimum as well. Inspired by the Lua philosophy, that is,
but EEL is slightly less minimalistic.
--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net http://olofsonarcade.com |
'---------------------------------------------------------------------'
|
|
From: Christian S. <sch...@li...> - 2014-05-23 19:06:07
|
On Friday 23 May 2014 10:54:50 David Olofson wrote:
> > One thing I find suboptimal with KSP is that it is very focused on using
> > function calls all over the place.
>
> Yeah, functions with umpteen arguments get old... Then again, I'm not
> generally a big fan of overly verbose code either. Named arguments? An
> editor that automatically shows the argument names of the function at
> hand in the status bar or similar?

Yeah, I also thought about named arguments for function calls. But I have decided to completely stick with Kontakt's KSP language now (plus some additional built-in functions we need), since I also haven't heard any complaints here about my suggestion to base it on Kontakt's KSP language. I find the compatibility aspect much more important than personal opinions about the precise "look" of the script language. So I will start by implementing the MIDI processing part of the KSP language. The UI parts of the language are not so important for me ATM, so I will skip them for now.

> > In this case such a "polyphonic" variable is not shared, it is bound to
> > the event which triggered the callback function. So no other instance of
> > the same callback can modify variable $i in between, preventing you from
> > any trouble. However from memory management point of view this case is a
> > bit problematic. Because you have no information at parse time how many
> > instances of the callback might be triggered in parallel. I am not sure
> > what the best solution would be to implement this case (from memory
> > management POV). Do you have a solid idea?
>
> This is where realtime programming gets real hairy... Again, it would
> seem like the "easy" route was taken with KSP, and it's not truly
> dynamic (that is, all structures, variables etc are allocated and
> initialized as an instrument is loaded), except that the 'polyphonic'
> keyword somehow sets up pools or arrays of "sufficient" size for
> polyphonic action.

Regarding memory management, I planned to do the same, that is, allocating all data when the script is loaded. For the "polyphonic" variables I will probably allocate the memory and attach it to the respective LS-internal Event object - but also done at load time. For RT safety reasons I simply want to avoid any memory allocation while actually playing the instrument.

> In hardware synths, there's usually a fixed number of voices,
> traditionally because the voices were physically built into the
> hardware, and in more recent designs, because of (relatively speaking)
> very tight CPU power and memory budgets. In some cases (like the old
> JV-1080), you can manually split the voice pool across parts to manage
> voice stealing issues, but with a soft synth/sampler these days, you
> can probably get away with just pre-allocating a few hundred voices,
> with polyphonic variables and whatnot that goes with them, and be done
> with it. I've noticed proprietary software synths and samplers still
> tend to have hard limits on channels, voices etc, and this might be
> the main reason for that...

Yes. Memory allocation at runtime is always a threat to RT stability. LS also allocates a fixed amount of voices. You can adjust that amount at any time, but unless the user changes it explicitly, it stays constant at a certain value. So memory allocation and its danger to RT stability is one reason for that, and the other reason is of course that you have a limited set of resources on your system (CPU power, HD speed, ...). Once that user-defined voice limit is reached, the voice stealing algorithm kicks in to keep the side effects of the limited voice count to a minimum. You could let the sampler auto-detect some kind of maximum amount of voices, however sometimes you also want to leave some headroom for other applications, which the sampler of course cannot know about on its own.

> It has crossed my mind to hardwire TLSF or similar into EEL, but there
> are downsides to a realtime memory manager when you don't really need
> one, and existing realtime OSes and applications often have their own
> custom memory managers already, so it might be better to plug into
> those.
>
> >> but that's basically how
> >> to go about it; write an alternative compiler that issues bytecode for
> >> an existing VM. A VM like this is basically just a high level virtual
> >> CPU, and not really tied to any specific language.
> >
> > Probably, but in fact I would like to avoid opening the door for numerous
> > flavors of high level script languages used in the sampler right now.
> > Because it might create confusion among users when learning scripting
> > with the sampler and/or by reading scripts of other people, if everybody
> > is using a different language on top that is. So I would prefer to pick
> > one single language flavor that should remain alone in this flavor (for
> > quite some time at least).
>
> Yes... Choice is good - but most people don't know what to choose, or why.
> :-D

Well, the main users of this script language will not be die-hard coders, but rather sound designers. Many if not most of them are not too keen to deal with scripts at all, and expecting them to learn yet another language is probably too much to ask. At least Kontakt's script language is well known to anybody who works with samplers, and there are already plenty of easy-to-understand video tutorials, written howtos and free scripts on the net (plus commercial script packages to be purchased) ready to be used. So we had better stick with that now.

So if anybody has doubts, complaints or other suggestions, better speak up now, before it is too late! ;-)

CU
Christian
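A minimal sketch of that pre-allocation scheme (hypothetical names, not actual LinuxSampler internals): the "polyphonic" variable slots for every event object in the engine's event pool are reserved once when the script is loaded, so nothing is allocated while playing.

    // Load time (non-real-time): reserve one block of 'polyphonic' variable
    // slots per event object in the engine's event pool, sized by how many
    // polyphonic variables the parser found in the script.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct PolyphonicState {
        std::vector<int32_t> slots;    // one value per 'polyphonic' variable
    };

    class ScriptInstance {
    public:
        ScriptInstance(std::size_t maxEvents, std::size_t polyphonicVarCount)
            : perEvent_(maxEvents) {
            for (auto& st : perEvent_)
                st.slots.assign(polyphonicVarCount, 0);
        }

        // Play time (real-time): the handler instance for event #id just
        // indexes into the preallocated block - no allocation, no sharing
        // between concurrently "waiting" handler instances.
        PolyphonicState& stateFor(std::size_t eventId) { return perEvent_[eventId]; }

    private:
        std::vector<PolyphonicState> perEvent_;
    };

    int main() {
        ScriptInstance script(/*maxEvents=*/1024, /*polyphonicVarCount=*/4);
        script.stateFor(17).slots[0] = 42;   // e.g. the script's 'declare polyphonic $i'
    }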