|
From: David O. <da...@ol...> - 2003-11-07 21:26:07
|
Speaking of timestamped event systems (in another thread); I have some
code lying around: one working implementation inside Audiality, and
one very similar prototype for XAP, based on the Audiality code. I
never got around to making the latter available to the public, but
OTOH, it doesn't really differ from the Audiality event system in any
interesting way, if you just want to have a look at a working design.

Anyway, there's one important design decision I'm thinking about:

	To queue or not to queue... :-)

In Audiality, I'm never queueing events beyond the end of the current
audio buffer. The obvious disadvantage is that envelope generators
and the like cannot be "smart" and prequeue events and then go to
sleep - but OTOH, with prequeueing, input events may force prequeued
events to be taken back or modified. Not doing any prequeueing keeps
things simple, since event queues *are* actually simple queues, as
opposed to random access sequencers.

(In Audiality, I do have a sorting "insert" operation, though it's not
used, and probably never will be. XAP wasn't even meant to have such
a thing. Events are always required to be sent in timestamp order.
The addressing and routing system ensures that multiple event streams
to the same input get sort/merged properly, and without cost for 1->1
connections.)

Another advantage of avoiding prequeueing is that "now" is the time
span of the current audio buffer, and that's all we worry about for
the current buffer cycle. Event processors (such as envelope
generators) only need to worry about one buffer period at a time, and
never need to take back or modify their output.

Unfortunately, there's no way of avoiding the fact that *something*,
somewhere, has to schedule things in event processors that deal with
timing. An envelope generator has to keep track of when to switch to
the next section, even if it's just generating long linear ramps
every now and then.
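To make the "simple queue" idea concrete, here's a minimal sketch (not Audiality's actual code; the `ev_queue` names are made up for illustration) of a plain timestamp-ordered FIFO. Since senders are required to push events in timestamp order, a ring buffer is all that's needed; the assert enforces the ordering rule using wrapping 16-bit comparison:

```c
#include <assert.h>
#include <stdint.h>

#define EV_QUEUE_SIZE 256   /* must be a power of two */

typedef struct {
	uint16_t timestamp;     /* frames, wrapping */
	int      type;
	int      arg;
} event_t;

/* A plain FIFO: senders must push events in timestamp order,
 * so no sorting "insert" is needed for 1->1 connections. */
typedef struct {
	event_t  ev[EV_QUEUE_SIZE];
	unsigned read, write;
} ev_queue;

static int ev_push(ev_queue *q, event_t e)
{
	if(q->write - q->read >= EV_QUEUE_SIZE)
		return -1;	/* queue full */
	if(q->write != q->read)
	{
		/* Enforce the "timestamp order" rule. */
		event_t *last = &q->ev[(q->write - 1) & (EV_QUEUE_SIZE - 1)];
		assert((uint16_t)(e.timestamp - last->timestamp) < 0x8000);
	}
	q->ev[q->write++ & (EV_QUEUE_SIZE - 1)] = e;
	return 0;
}

static const event_t *ev_peek(ev_queue *q)
{
	if(q->read == q->write)
		return 0;	/* empty */
	return &q->ev[q->read & (EV_QUEUE_SIZE - 1)];
}

static void ev_pop(ev_queue *q)
{
	++q->read;
}
```

Because events never outlive the current buffer cycle, the queue can stay this small and dumb; a random access sequencer structure is never needed.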
One might think that an event system that supports prequeueing would
help here, but the more I think about it, the more convinced I get
that it's the wrong tool. It complicates things (the event system as
well as most of the code that uses it) and just moves the problem to
another place in the system.

The current Audiality DAHDSR EG generates only one ramp event for each
envelope stage. Each EG is a state machine that is called once per
buffer cycle, like a plugin in your average VST/LADSPA style system.
First, input events (such as "note-off") are evaluated, since they
may affect the state of the EG. Then a downcounting timer keeps track
of when the next state change is to occur. No state change means the
EG just returns. So, the EG never really goes to sleep - but it
cannot anyway, since it *has* to poll for input events once per
buffer cycle.

Although this probably isn't much of a performance issue at this point
(there's just one EG per voice ATM! *hehe*), I'm considering some
ways of optimizing it, mostly because I want more EGs, LFOs and other
stuff, and I don't want all those objects to burn my DSP cycles even
when not doing anything. Scalability, that is. (Audiality is meant to
run on Pentium class hardware and up.)

Considering the above, my conclusion has to be that the answer must be
on the other side of the event processor code, so to speak: a
scheduler that allows objects to go to sleep, blocking on event input
ports and/or timers, making event processors a bit more like
processes in an RTOS with message passing.

My Audiality EGs, for example, would block, waiting for input events
*or* timer events. That is, instead of making a guess about the
future and sending an event that might have to be taken back, or
polling all the time, the EG just says "if no events come in, wake me
up in N frames".
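A blocking EG of that kind could be sketched roughly like this (hypothetical API, not actual Audiality code; `note_off` stands in for real input event decoding). The EG is only called when an input event arrives or its timeout expires, and its return value tells the scheduler how long to sleep:

```c
typedef enum { EG_ATTACK, EG_DECAY, EG_SUSTAIN, EG_RELEASE, EG_DEAD } eg_state;

typedef struct {
	eg_state state;
	unsigned attack_frames, decay_frames, release_frames;
} eg_t;

#define EG_FOREVER 0xffffffffu	/* "sleep until an event arrives" */

/* Called by the (hypothetical) scheduler on input event or timeout.
 * Emits one ramp event per stage (elided here), then returns how
 * many frames to sleep if no further events come in; 0 would mean
 * "call me every buffer cycle". */
static unsigned eg_run(eg_t *eg, int note_off)
{
	if(note_off && eg->state < EG_RELEASE)
		eg->state = EG_RELEASE;
	switch(eg->state)
	{
	  case EG_ATTACK:
		/* ...send attack ramp event here... */
		eg->state = EG_DECAY;
		return eg->attack_frames;	/* wake me when stage ends */
	  case EG_DECAY:
		/* ...send decay ramp event here... */
		eg->state = EG_SUSTAIN;
		return eg->decay_frames;
	  case EG_SUSTAIN:
		return EG_FOREVER;		/* only input can wake us */
	  case EG_RELEASE:
		/* ...send release ramp event here... */
		eg->state = EG_DEAD;
		return eg->release_frames;
	  default:
		return EG_FOREVER;
	}
}
```

Note that the EG never has to take anything back: each call emits only events for a stage that is definitely starting now.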
If no events come in, the block will time out, and the EG will be
called again, switch to the next envelope stage and generate the
corresponding event(s). Before it returns, it'll decide how long to
sleep until the next state change, in case no events come in.

(One might use something as simple as the return code from the event
processor to pass 'N' to the scheduler. Just say 0 all the time if
you want to run every buffer cycle.)

Indeed, this is yet another way of moving a problem somewhere else.
However, if the audio thread has a single scheduler that keeps track
of all objects, various optimizations become possible. If objects
don't have to check input and timers for themselves while "sleeping",
some function call overhead (and some event processor complexity) can
be eliminated. If that doesn't cut it, arrange all sleeping objects
in a priority queue, and only keep track of when to wake up the next
object.

Note that I was mostly thinking about Audiality's low end scaling when
thinking this up, so it might not be viable for LinuxSampler. OTOH,
it could probably make event processing slightly simpler, whether you
care to implement an optimized scheduler or not. Either way, a good
discussion might benefit both projects, even if we end up using
different approaches.

Now, those of you who actually read all the way here are probably at
least as obsessed with event systems as I am. We should all consider
getting a life! ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
|
From: Christian S. <chr...@ep...> - 2003-11-09 00:31:37
|
On Friday, 7 November 2003 22:26, David Olofson wrote:
> Anyway, there's one important design decision I'm thinking about:
>
> 	To queue or not to queue... :-)
>
> In Audiality, I'm never queueing events beyond the end of the current
> audio buffer. The obvious disadvantage is that envelope generators
> and the like cannot be "smart" and prequeue events and then go to
> sleep - but OTOH, with prequeueing, input events may force prequeued
> events to be taken back or modified. Not doing any prequeueing keeps
> things simple, since event queues *are* actually simple queues, as
> opposed to random access sequencers.

OK, for things like EGs where nondeterministic factors are involved I
share your opinion that it's better not to let them prequeue their
events, but I'm thinking about the sequencer scenario, where the
sequencer might want to prequeue events somewhere past the scope of
the current frame. I'm not sure if there's already something like
that (sequencer / protocol). Don't you think that would be a pro for
not-frame-relative events?

> (In Audiality, I do have a sorting "insert" operation, though it's not
> used, and probably never will be. XAP wasn't even meant to have such
> a thing. Events are always required to be sent in timestamp order.

But what if you're using a UDP based protocol? That might mix up the
events.

> The addressing and routing system ensures that multiple event streams
> to the same input get sort/merged properly, and without cost for 1->1
> connections.)

You mean there's no need for sorting events that are dedicated to
different purposes anyway, right? Would it make sense to have
individual queues for some special purposes (to avoid mixing things
and thus reduce time complexity)?

> Now, those of you who actually read all the way here are probably at
> least as obsessed with event systems as I am. We should all consider
> getting a life! ;-)

Just a matter of multi tasking...
[switch]

> `-----------------------------------> http://audiality.org -'
> --- http://olofson.net --- http://www.reologica.se ---

I like the Requirements for Audiality ("Reasonably new C compiler",
"An operating system", "Sound card with drivers"). :)

CU
Christian
|
From: David O. <da...@ol...> - 2003-11-09 17:52:16
|
On Sunday 09 November 2003 01.31, Christian Schoenebeck wrote:
[...]
> Ok, for things like EGs where undeterministic factors are involved
> I share your opinion better not to let them prequeue their events,
> but I'm thinking about the sequencer scenario where the sequencer
> might want to prequeue events somewhere past the scrope of the
> current frame. I'm not sure if there's already something like that
> (sequencer / protocol). Don't you think that would be a pro for
> not-frame-relative events?
Well, in the case of Audiality, normal plugins just won't care about
input events past the end of the buffer cycle, so you *could*
prequeue without causing any trouble, that far.
(Actually, there is a minor issue with big gaps and wrapping
timestamps, but that's just because I use 16 bit wrapping timestamps.
No need for more, as long as the "one buffer at a time" rule is
obeyed.)
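The wrapping-timestamp trick relies on modular arithmetic: as long as two timestamps are less than half the wrap period (0x8000 frames) apart, an unsigned subtraction followed by a signed reinterpretation tells you their distance and order. A sketch of the idea (assuming the usual two's complement conversion):

```c
#include <stdint.h>

/* Signed distance in frames from 'now' to 't'. Valid as long as
 * 't' is within 0x7fff frames of 'now' in either direction, which
 * the "one buffer at a time" rule guarantees. */
static int16_t frames_until(uint16_t t, uint16_t now)
{
	return (int16_t)(uint16_t)(t - now);
}
```

With this, a timestamp just before the 16 bit wrap still compares as "earlier" than one just after it, so 16 bits is plenty when no event ever sits in a queue for more than one buffer cycle.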
However, if you really want the prequeueing to be of any real use, you
have to allow some plugins to look ahead on the input as well - or
you're still limited by the same issue as always: You do not know
about the future (which is defined by future input events), and thus,
you cannot prequeue output events.
For example, if you let the sequencer prequeue, you've effectively
forwarded the timing part of sequencing to whatever receives the
events, instead of just processing one buffer cycle at a time. (Note
that this can potentially mean that the sequencer causes CPU load
peaks and potential event pool drain, when it occasionally prequeues
new events!) That *could* perhaps allow EGs to save some cycles by
not running every buffer cycle - but then the EGs have to *know*
whether or not they're allowed to work outside the current buffer
cycle, or they will not function properly with "live" input.
> > (In Audiality, I do have a sorting "insert" operation, though
> > it's not used, and probably never will be. XAP wasn't even meant
> > to have such a thing. Events are always required to be sent in
> > timestamp order.
>
> But what if you're using an UDP based protocol? That might mix the
> events.
That's something the "gateway" to the outside world will have to deal
with. This kind of event system is based on the idea that events are
just a form of structured audio rate data, which differs quite a bit
from QNX style message passing, UDP, GUI toolkit event systems and
the like.
Basically, if you can maintain a fixed rate audio stream, you can also
transmit an Audiality/XAP style event stream. If you have drop-outs
and similar issues, both audio and event streams will need to be
"repaired" one way or another before going into a plugin net.
This may seem restrictive if you consider only plain events, but if
you consider ramp events, you'll realize that anything else would
make life quite a bit harder for event receivers. Overlapping ramp
sections, gaps etc. would force you to take special measures to
avoid clicks that are not necessary otherwise - an extra layer of
zipper noise protection that belongs in the soft/hard RT gateway,
not inside every plugin.
> > The addressing and routing system ensures that multiple event
> > streams to the same input get sort/merged properly, and without
> > cost for 1->1 connections.)
>
> You mean no need for sorting events that are dedicated for
> different purposes anyway, right?
Actually, it doesn't matter what the events are for, since they still
have to be in timestamp order if they go to the same physical input
queue. Decoding is normally driven by the event stream, like this:
	while(samples_left_to_process())
	{
		while(time_to_next_event() == 0)
		{
			process_event();
		}
		samples = time_to_next_event();
		if(samples > samples_left_to_process())
			samples = samples_left_to_process();
		process(samples);
	}
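Fleshed out into a self-contained toy version (my own illustration, with a small event array standing in for the input queue and an output array standing in for rendered audio), the same loop looks like this:

```c
typedef struct { unsigned timestamp; int value; } event_t;

/* Toy state: one control event queue, one output "buffer". */
static const event_t events[] = { {0, 10}, {3, 20}, {3, 30}, {7, 40} };
static unsigned ev_pos, now, buffer_end;
static int current, out[16];

static unsigned time_to_next_event(void)
{
	if(ev_pos >= sizeof events / sizeof events[0])
		return buffer_end - now;	/* nothing more this buffer */
	return events[ev_pos].timestamp - now;
}

static void process_event(void)
{
	current = events[ev_pos++].value;
}

static void process(unsigned frames)
{
	while(frames--)
		out[now++] = current;	/* "render" at the current value */
}

static void run_buffer(unsigned frames)
{
	buffer_end = now + frames;
	while(buffer_end - now)		/* samples_left_to_process() */
	{
		unsigned samples;
		while(time_to_next_event() == 0)
			process_event();
		samples = time_to_next_event();
		if(samples > buffer_end - now)
			samples = buffer_end - now;
		process(samples);
	}
}
```

Note how the two events at timestamp 3 are both handled before any further audio is processed, in the order they sit in the queue - which is exactly why the queue has to be in timestamp order, but never needs to be randomly accessible.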
> Would it make sense to have
> individual queues for some special purposes (to avoid mixing things
> and thus reduce time complexity)?
Definitely, and that was designed into the XAP event system, by means
of "context IDs" on control ins and outs.

A plugin with multiple "inner" loops (like a mixer processing one
channel at a time) would have separate event queues (thus controls
with different context IDs), and the only ordering that matters is
that within a queue. No sort/merging is needed if there's one sender
(ie one loop generating ordered events) for each queue, or if one
sender sends to multiple queues.
A host detects the need for a sort/merge object by checking for
multiple output contexts sending to a single input context. That is,
sort/merge is *only* done if two "inner loops" are sending events to
the same physical event queue.
A plugin developer can organize things any way he/she sees fit, and
then just slap context IDs onto controls according to their relations
to the internal loops of the plugin.
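A sort/merge object of that kind boils down to a two-way merge of already-ordered streams. A sketch (hypothetical names, using the wrapping 16 bit timestamp comparison; real queues would be ring buffers rather than arrays):

```c
#include <stdint.h>

typedef struct { uint16_t timestamp; int value; } event_t;

/* Merge two timestamp-ordered event arrays into one ordered stream.
 * Ties go to stream 'a', so the merge is stable: each sender's own
 * event order is preserved. Returns the number of events written. */
static unsigned ev_merge(const event_t *a, unsigned na,
                         const event_t *b, unsigned nb,
                         event_t *out)
{
	unsigned i = 0, j = 0, n = 0;
	while(i < na && j < nb)
	{
		/* Wrapping-safe "a at or before b" test */
		if((int16_t)(uint16_t)(a[i].timestamp -
				b[j].timestamp) <= 0)
			out[n++] = a[i++];
		else
			out[n++] = b[j++];
	}
	while(i < na)
		out[n++] = a[i++];
	while(j < nb)
		out[n++] = b[j++];
	return n;
}
```

Since each input is already ordered, the merge is O(na + nb) - and in the common 1->1 case the host skips it entirely, which is where the "without cost" part comes from.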
> > Now, those of you who actually read all the way here are probably
> > at least as obsessed with event systems as I am. We should all
> > consider getting a life! ;-)
>
> Just a matter of multi tasking... [switch]
Still, there are only so many cycles in a day. :-)
> > `-----------------------------------> http://audiality.org -'
> > --- http://olofson.net --- http://www.reologica.se ---
>
> I like the Requirements for Audiality ("Reasonably new C compiler",
> "An operating system", "Sound card with drivers"). :)
Keeps the volume of email from people trying to compile Kobo Deluxe
(which uses Audiality) really rather low. There was some gcc 3.x type
casting issue somewhere, and I discovered that you need a macro to
copy a va_list on PPC, but that's about it. :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|