From: David O. <da...@ol...> - 2003-11-07 21:26:07
Speaking of timestamped event systems (in another thread); I have some code lying around: one working implementation inside Audiality, and one very similar prototype for XAP, based on the Audiality code. I never got around to making the latter available to the public, but OTOH, it doesn't really differ from the Audiality event system in any interesting way, if you just want to have a look at a working design.

Anyway, there's one important design decision I'm thinking about:

	To queue or not to queue... :-)

In Audiality, I'm never queueing events beyond the end of the current audio buffer. The obvious disadvantage is that envelope generators and the like cannot be "smart" and prequeue events and then go to sleep - but OTOH, with prequeueing, input events may force prequeued events to be taken back or modified. Not doing any prequeueing keeps things simple, since event queues *are* actually simple queues, as opposed to random access sequencers.

(In Audiality, I do have a sorting "insert" operation, though it's not used, and probably never will be. XAP wasn't even meant to have such a thing. Events are always required to be sent in timestamp order. The addressing and routing system ensures that multiple event streams to the same input get sorted/merged properly, and without cost for 1->1 connections.)

Another advantage of avoiding prequeueing is that "now" is the time span of the current audio buffer, and that's all we worry about for the current buffer cycle. Event processors (such as envelope generators) only need to worry about one buffer period at a time, and never need to take back or modify their output.

Unfortunately, there's no way of avoiding the fact that *something*, somewhere, has to schedule things in event processors that deal with timing. An envelope generator has to keep track of when to switch to the next section, even if it's just generating long linear ramps every now and then.
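For illustration, the buffer-scoped queue idea could look something like this. This is a hypothetical C sketch, not actual Audiality or XAP code, and all names are made up: timestamps are frame offsets within the current buffer, and since senders are required to deliver events in timestamp order, the queue is a plain FIFO rather than a random access sequencer.

```c
/* Hypothetical sketch of a buffer-scoped event queue: timestamps are
 * frame offsets within the current buffer, senders must enqueue in
 * timestamp order, so the queue is a plain FIFO. All names are
 * illustrative, not Audiality's actual API.
 */
#include <assert.h>

#define MAX_EVENTS 64

typedef struct {
	unsigned timestamp;	/* frame offset within current buffer */
	int      type;		/* e.g. note-on, ramp, ... */
	float    value;
} Event;

typedef struct {
	Event    ev[MAX_EVENTS];
	unsigned head, tail;
} EventQueue;

/* Enqueue; the caller must respect timestamp order.
 * Returns 0 on success, -1 if the queue is full.
 */
static int eq_send(EventQueue *q, unsigned t, int type, float value)
{
	if (q->tail >= MAX_EVENTS)
		return -1;
	/* Events are always sent in timestamp order! */
	assert(q->tail == q->head || t >= q->ev[q->tail - 1].timestamp);
	q->ev[q->tail].timestamp = t;
	q->ev[q->tail].type = type;
	q->ev[q->tail].value = value;
	++q->tail;
	return 0;
}

/* Pop the next event if it is due at or before frame 'now';
 * return NULL if the queue is empty or the next event is later.
 */
static const Event *eq_next(EventQueue *q, unsigned now)
{
	if (q->head == q->tail || q->ev[q->head].timestamp > now)
		return 0;
	return &q->ev[q->head++];
}
```

An event processor would drain this with eq_next() as it walks through the frames of the current buffer, and the whole queue is dead and gone by the end of the buffer cycle - no taking events back, no reordering.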
One might think that an event system that supports prequeueing would help here, but the more I think about it, the more convinced I get that that's the wrong tool. It complicates things (the event system as well as most of the code that uses it) and just moves the problem to another place in the system.

The current Audiality DAHDSR EG generates only one ramp event for each envelope stage. Each EG is a state machine that is called once per buffer cycle, like a plugin in your average VST/LADSPA style system. First, input events (such as "note-off") are evaluated, since they may affect the state of the EG. Then a downcounting timer keeps track of when the next state change is to occur. No state change means the EG just returns. So, the EG never really goes to sleep - but it cannot anyway, since it *has* to poll for input events once per buffer cycle.

Although this probably isn't much of a performance issue at this point (there's just one EG per voice ATM! *hehe*), I'm considering some ways of optimizing it, mostly because I want more EGs, LFOs and other stuff, and I don't want all those objects to burn my DSP cycles even when not doing anything. Scalability, that is. (Audiality is meant to run on Pentium class hardware and up.)

Considering the above, my conclusion has to be that the answer must be on the other side of the event processor code, so to speak: a scheduler that allows objects to go to sleep, blocking on event input ports and/or timers, making event processors a bit more like processes in an RTOS with message passing.

My Audiality EGs, for example, would block, waiting for input events *or* timer events. That is, instead of making a guess about the future and sending an event that might have to be taken back, or polling all the time, the EG just says "if no events come in, wake me up in N frames".
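The EG state machine plus the "wake me up in N frames" idea could be sketched roughly like this. Again a hypothetical C fragment, not Audiality's actual code: the stage names follow DAHDSR, everything else (types, the per-stage duration table, the return-value convention) is made up for illustration. Each call handles a possible note-off event, advances to the next stage, and returns how many frames the scheduler may let the EG sleep if no events come in.

```c
/* Hypothetical sketch of an EG that tells its scheduler how long to
 * sleep. Stage names follow DAHDSR; all other names and conventions
 * are illustrative. Returning 0 would mean "run me every buffer
 * cycle"; EG_FOREVER means "only wake me for input events".
 */
#include <assert.h>

typedef enum {
	EG_DELAY, EG_ATTACK, EG_HOLD, EG_DECAY,
	EG_SUSTAIN, EG_RELEASE, EG_DEAD
} EGState;

#define EG_FOREVER 0xffffffffu

typedef struct {
	EGState  state;
	unsigned duration[7];	/* frames per stage, indexed by EGState;
				 * sustain and dead are EG_FOREVER */
} EG;

/* Called when the EG wakes up - either because an input event arrived
 * (note_off != 0) or because its timer ran out. Advances the state
 * machine and returns the sleep time (in frames) for the new stage.
 */
static unsigned eg_wake(EG *eg, int note_off)
{
	if (note_off && eg->state < EG_RELEASE)
		eg->state = EG_RELEASE;	/* input event overrides the timer */
	else if (eg->state < EG_DEAD)
		eg->state = (EGState)(eg->state + 1);	/* timer expired */
	/* (A real EG would emit a ramp event for the new stage here.) */
	return eg->duration[eg->state];
}
```

The point is that the EG makes no guesses about the future that might have to be taken back: it only ever states "if nothing happens, this is when I next need to run", and an incoming event simply preempts that.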
If no events come in, the block will time out, and the EG will be called again, switch to the next envelope stage, and generate the corresponding event(s). Before it returns, it'll decide how long to sleep until the next state change, in case no events come in.

(One might use something as simple as the return code from the event processor to pass 'N' to the scheduler. Just say 0 all the time if you want to run every buffer cycle.)

Indeed, this is yet another way of moving a problem somewhere else. However, if the audio thread has a single scheduler that keeps track of all objects, various optimizations become possible. If objects don't have to check input and timers for themselves while "sleeping", some function call overhead (and some event processor complexity) can be eliminated. If that doesn't cut it, arrange all sleeping objects in a priority queue, and only keep track of when to wake up the next object.

Note that I was mostly thinking about Audiality's low end scaling when thinking this up, so it might not be viable for LinuxSampler. OTOH, it could probably make event processing slightly simpler, whether you care to implement an optimized scheduler or not. Either way, a good discussion might benefit both projects, even if we end up using different approaches.

Now, those of you who actually read all the way here are probably at least as obsessed with event systems as I am. We should all consider getting a life! ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---