From: Benno S. <be...@ga...> - 2003-08-06 07:08:13
Interesting thoughts Simon, but I am still unsure which approach wins in terms of speed.

Since the audio processing is block based (we process N samples at a time, where N is preferably the audio fragment size of the sound card), and since there can be hundreds of active voices, each voice can have its own modulator. Assume we use blocks of 256 samples and we have 200 voices active, each with an envelope modulation attached to it. This means that during one DSP cycle (which generates the final 256 output samples), 200 envelope generators write 200 * 256 = 51200 samples (assuming we want very precise envelope curves and therefore allow one new volume value per sample). Since the DSP engine is all float based (4 bytes), we end up touching 51200 * 4 = 204800 bytes of data. This puts (IMHO) quite some stress on the cache, perhaps slowing things down because audio mixing requires lots of cache too (see the first sketch below).

OTOH you say linear events are a form of "compression". Yes they are, but I do not see it as an evil kind of compression: compared to the streamed approach (where the envelope generator "streams" the volume data to the audio sampler module), it requires only one more addition per sample, which is a very fast operation whose execution time probably disappears into the noise compared to the whole DSP network (see the second sketch below).

Perhaps for single-sample processing the streamed approach is the way to go, since the data gets consumed immediately. But AFAIK on today's CPUs, even if you could run an exclusive DSP thread with single-sample latency (assume there is no OS in the way to complicate things and you are the only process running), performance would be worse than with block-based processing, due to the worse locality of the referenced data compared to the block model.

If I said nonsense or if my approach is flawed performance-wise, let me know.

cheers,
Benno
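To make the numbers above concrete, here is a minimal sketch (in C++) of the streamed-envelope variant. All names (Voice, renderEnvelopeBlock, mixVoice, the constants) are invented purely for illustration, not taken from any existing engine code.

#include <cstddef>
#include <vector>

constexpr std::size_t kBlockSize = 256;   // N samples per DSP cycle
constexpr std::size_t kVoices    = 200;   // active voices in the example

struct Voice {
    float envLevel = 0.0f;                // current envelope value
    float envSlope = 0.0001f;             // per-sample increment (attack ramp)
    float envBuf[kBlockSize];             // streamed envelope data, 1 KB per voice
    float audioBuf[kBlockSize];           // the voice's audio for this cycle
};

// The envelope generator "streams" one gain value per sample into envBuf.
void renderEnvelopeBlock(Voice& v) {
    for (std::size_t i = 0; i < kBlockSize; ++i) {
        v.envBuf[i] = v.envLevel;
        v.envLevel += v.envSlope;
    }
}

// The mixer reads the streamed envelope back and accumulates into the master bus.
void mixVoice(const Voice& v, float* master) {
    for (std::size_t i = 0; i < kBlockSize; ++i)
        master[i] += v.audioBuf[i] * v.envBuf[i];
}

int main() {
    std::vector<Voice> voices(kVoices);
    std::vector<float> master(kBlockSize, 0.0f);

    for (auto& v : voices) {
        renderEnvelopeBlock(v);           // writes 256 floats = 1024 bytes
        mixVoice(v, master.data());       // immediately reads them back
    }
    // Envelope data written per cycle: 200 * 256 * 4 = 204800 bytes,
    // exactly the figure computed above.
}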
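The linear-segment variant, sketched with the same invented names, decompresses the segment on the fly inside the mix loop, so the only extra per-sample cost is the addition mentioned above and no envelope buffer is ever written:

#include <cstddef>

constexpr std::size_t kBlockSize = 256;

struct EnvSegment {      // the whole modulation "stream" for one block
    float level;         // envelope value at the start of the block
    float increment;     // linear slope, per sample
};

// Mix one voice for one block, interpolating along the segment as we go.
void mixVoiceSegment(const float* audio, EnvSegment seg, float* master) {
    float level = seg.level;
    for (std::size_t i = 0; i < kBlockSize; ++i) {
        master[i] += audio[i] * level;   // same multiply-accumulate as before
        level     += seg.increment;      // the "one more addition" per sample
    }
}

Per voice and per block the modulation data shrinks from 1024 bytes to the 8 bytes of the segment itself; whether that pays off in practice is exactly the cache question raised above.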
Simon Jenkins <sje...@bl...> writes:

> IMHO events are best used for units of "musical meaning", eg the sorts of
> things that MIDI encodes moderately well (provided you are a pianist).
> That sort of stuff enters a synthesis engine's inputs, may get moved
> around and processed a bit, but sooner or later it's got to be turned
> into something the audio end of the engine can actually *work* with...
> a continuous stream of data either at sample-rate or some low-fi
> subdivision of it. Why delay the inevitable? It's an envelope generator's
> *job* to turn some events into a data stream according to some parameters.
>
> >Of course nothing forbids us to implement that approach too.
> >But I think modulation by linear segments is flexible enough
>
> Linear segments aren't so much "events" as they are data-compressed
> versions of continuous streams. The recipient has to decompress them
> back into a stream (probably "on the fly", by interpolating along the
> segments as it goes) before modulating any audio with them.
>
> It's a performance overhead, not a saving, to do...
>
> events+params -> envelope encoding -> envelope data stream
>
> rather than directly
>
> events+params -> envelope data stream
>
> unless...
>
> >and is IMHO
> >one of the fastest approaches since the amount of data moved between modules
> >is small.
>
> ...you are planning to win back the time by moving less data around.
>
> However:
>
> If the synth engines are going to be compiled, then data streams
> don't have to be moved anywhere. Nothing except the final audio
> outputs ever needs to leave the engine, and the compiler can
> generate code which *takes each value from wherever it already is*.
>
> (If the generated code were internally blockless and reasonably
> optimised then the data for a lot of streams would never even
> make it to RAM: it would appear in an FPU register as a result
> of one FP operation and be almost immediately consumed
> from there by a subsequent FP operation.)
>
> Simon Jenkins
> (Bristol, UK)
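As a rough illustration of the quoted point about blockless, compiled engines (not Simon's actual proposal, and again with invented names), a code generator could fuse oscillator, envelope and mix into one per-sample expression, so the intermediate "streams" live only in FPU registers and never reach RAM:

#include <cmath>
#include <cstddef>

void renderFused(float* master, std::size_t frames,
                 float phase, float phaseInc,
                 float env, float envInc) {
    for (std::size_t i = 0; i < frames; ++i) {
        float sample = std::sin(phase);  // oscillator output stays in a register
        master[i] += sample * env;       // ...and is consumed immediately by the mix
        phase += phaseInc;               // per-sample state updates
        env   += envInc;
    }
}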