|
From: Benno S. <be...@ga...> - 2003-07-20 16:29:15
|
Hello, sorry for being so quiet during the last weeks, work overload you know... Anyway, during the last days a few of us (me, Marek, Juan L. and others) discussed the possible roadmaps for LinuxSampler. While we haven't produced useful code lately, some of us came up with nice ideas. Some say LinuxSampler will never happen, or that some of us "voice" too much, but hey, I think passionate discussion sparks interest and creativity. Plus good things take time to develop, and unfortunately we are not in the fortunate situation of Paul D., who can work full time on his favorite OSS project.

That said, personally I'm still convinced that it should become a "modular sampler", because I do not like hardcoded engines that much since they limit the creativity of the user. Obviously it should allow importing multiple sample formats, but in this regard there is an opinion split: some (like me) say we should write separate sample loading modules for each format (GIG, AKAI etc.) that store the samples in data structures using the native representation, while others say that it would be better to create some powerful (and if possible extensible) internal format that can accommodate most common formats in order to simplify things. I'm not sure about that, and if we really write a modular sampler (where you can assemble the various engines using a GUI editor along the lines of NI Reaktor) I'm not sure that, in the case of "native sample format loaders", the code duplication would really be as high as some (Marek) state.

Of course it would be cool to use the GIG engine to play AKAI samples and vice versa. But when using native loaders you could always write a GIG-to-AKAI converter module that reads the GIG metadata that the GIG loader put into memory and converts it to the AKAI data structures. Of course the "universal internal format" has the advantage that you can convert from any input format to any output format by writing a converter module that converts from/to the universal format to/from the specific format. That way, if you have N formats you only need to write N converter modules instead of the N*(N-1) you would need if you were using only native formats.

I back the native format loader solution, since you can tune the engine to work exactly hand in hand with that format instead of going through the intermediate universal format. This means you can tune the GIG engine to work exactly with GIG-specific enveloping data, filters, looping etc. The same applies to all other formats. Of course one might want to use the GIG engine to play AKAI samples, but we can write a converter module for that (probably quite simple, since AKAI's sample engine is simpler than GIG's). But I think the N*(N-1) converter module count does not apply here, since it is unlikely that anyone will want to play GIGs using the limited AKAI engine: since we include both engines, it makes sense to use the native one. So the number of format converter modules is more likely determined by (N of simple formats) * (N of complex formats). Since the number of complex formats (like GIG) is not that high, I guess the number of converter modules would still stay quite low, but with the benefit of having native loaders and native engines that go hand in hand. Marek would object here, but I think using an internal universal format makes it hard to avoid conversion losses, which in turn makes it hard to achieve faithful playback.
Of course many sample formats share many properties like keymaps, envelopes, filters, looping information etc., but unfortunately each format often uses its own representation, where a bijective A<->B converter function does not exist. For example, when speaking about envelopes: some formats use simple ADSR models while others allow arbitrary linear or exponential envelope segments.

How to avoid the code duplication in the loaders and engines? Use the modular approach! Since the final engines will be compiled using the following approach: module editor -> C source code -> compiled .so file that gets loaded by the sampler, the speed will be OK even if we assemble the engines using a Reaktor-like GUI (and it will be very user-friendly and handy for adding new custom engines). So basically, once we have implemented enough basic building blocks (filters, envelope generators, etc.), designing new engines becomes a breeze. I'm not sure about the loaders (whether we could use a GUI editor to compose loaders too). Most sampling formats are based on the RIFF format (GIG, DLS, SF2, WAV etc.), so a RIFF parser is needed anyway, but the individual modules that interpret the data contained in the various RIFF chunks still need to be written by hand. But to achieve maximum flexibility, instead of writing a monolithic format importer we could write, let's say, a general RIFF parser and then specialized modules that interpret the various RIFF chunks. That way, if we happen to support formats that share some characteristics (eg GIG is nothing more than DLS with some proprietary additions) we can fully reuse the modules. I think using the approaches described above the "code duplication" can be kept really low while the flexibility increases quite a bit.

Regarding the GUIs: as you know, I support the idea of totally decoupling the engine from the GUI. It improves code quality because you do not mix GUI stuff with engine stuff. Plus, if you use a TCP socket you can even run the GUI on a remote machine that can run even another OS than Linux (eg. assume a Mac OS X frontend box networked with a headless LinuxSampler box). As for the module composer, from the discussion we had on IRC we seem to agree that the best thing to do would be making a "stupid" GUI which only forwards commands to the engine. That way, when you wire together the basic building blocks in the GUI editor, apart from displaying them wired together it does nothing more than issue a connect(module1.out, module2.in) command to the engine, which does the appropriate stuff. That way you can even use scripts, command lines or textual interfaces to manage and construct audio rendering engines.

Regarding the sample importing modules: I think it would be wise to use a bottom-up approach, thus IMHO it would be better not to use and extend complex monolithic applications that can read/edit samples. Josh is writing libInstPatch and I think it is a nice lib, but I'm unsure if it is the right way to go. I think the only way we can solve the problems that LinuxSampler poses is a divide et impera approach: write many "micro modules" and "wire" them together using a GUI. That way you can, as Steve Harris said some time ago, make a good "sampler construction kit".

Regarding the code and what to implement first: I admit we haven't done much lately, but meanwhile the ideas kept flowing and I think it helps to avoid design errors. Thanks to Sebastien and Christian we now have libakai running on Linux. (libakai is a small lib that reads and parses AKAI sample images.)
I think libakai is the right kind of lib for LinuxSampler since it is lean, very low level, does not contain a GUI and can easily be embedded into LinuxSampler importing modules. So basically we need to figure out which kind of modules we should start implementing. It is not easy to choose where to start, since design errors in one module can later affect the design of other modules. So I think it would probably be best to start with the GUI: a module editor similar to Reaktor where you can assemble basic building blocks. The real DSP network building process will occur within the sampler engine, so the GUI will remain quite simple and stupid. I think once the GUI runs we can start to implement the necessary backend: the logic that manages DSP module networks, and then at a later stage the code that turns the DSP network into a C file that can be compiled into a .so file that runs the actual DSP stuff. I'm using Qt and will post a few screenshots for commenting in a few weeks. At least we will have some code and stuff to play with. The Qt Designer has the nice capability of allowing the integration of custom-made widgets, which can be used to build more complex ones. That way you can for example make audio in, audio out, MIDI in, MIDI out, CV in, CV out widgets which you can use to make new basic building blocks.

Some of the developers (Marek? ;-) ) do not like Qt that much (because it is C++ based, or because Qt is GPL and not LGPL like GTK), but that is not a problem. GUIs can be implemented using any toolkit (even curses etc.) since the engine is totally decoupled from the GUI. Personally I like Qt very much because I think GUIs can be implemented more easily in an object-oriented language than in C, plus Qt classes are IMHO much cleaner (and easier to use) than Gtk/gtkmm ones. It is just personal taste, so if at a later stage someone wants to implement a GTK-based GUI, feel free to do so. The engine just does not care what kind of GUI drives it.

Returning to the sample importer modules: as said before, it would be nice to support the AKAI (via libakai) and GIG formats. GIG is a bit more tricky since we need to decode the GIG-specific chunks. Ruben Van Royen did some good stuff some time ago (Paul K. and Marek have Ruben's header files), but it is still not complete. So one task that is open to those of you developers who have GigaSampler is to continue that work (contact Paul K. or Marek for more info). It would be cool if you could produce a small and lean libgig that does something like libakai does: parse and read GIG files, decode the various chunks (envelope data, filters, looping data, MIDI keymaps) etc. While it is not a really hard task, it is not very easy either. You need a running GigaSampler and some sample libraries: decode data, change parameters, see what changes in the data chunks of the GIG file, etc. Not to mention that you need a copy of the DLS2 specs (since GIG is based on DLS2), which are not publicly available on the net (but some on the list, like Paul K., seem to have a copy, so just ask him how to obtain one). While I would technically be able to do such stuff, I'd prefer to focus on building an efficient low-level engine, so it would be handy if some of you could implement the GIG importing lib. Any volunteers? Such a lib would not only benefit LinuxSampler but other OSS projects too (sample editors, softsamplers etc.).
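To make the "general RIFF parser plus specialized chunk modules" idea above a bit more concrete, here is a minimal C sketch of a generic RIFF chunk walker. It is only an illustration under assumptions (flat chunk list, no nested LIST handling, invented handler names); it is not code from libakai or from any existing loader:

/* A sketch of a generic RIFF chunk walker; handler names are invented. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef void (*chunk_handler)(FILE *f, uint32_t size);

/* format-specific modules would register real handlers here */
static void handle_fmt (FILE *f, uint32_t size) { fseek(f, size, SEEK_CUR); }
static void handle_data(FILE *f, uint32_t size) { fseek(f, size, SEEK_CUR); }

static const struct { char id[5]; chunk_handler fn; } handlers[] = {
    { "fmt ", handle_fmt  },
    { "data", handle_data },
};

/* walk all chunks between the current file position and 'end' */
static void walk_riff(FILE *f, long end)
{
    unsigned char hdr[8];
    while (ftell(f) + 8 <= end && fread(hdr, 1, 8, f) == 8) {
        uint32_t size = hdr[4] | hdr[5] << 8 | hdr[6] << 16 | (uint32_t)hdr[7] << 24;
        size_t i;
        int handled = 0;
        for (i = 0; i < sizeof handlers / sizeof handlers[0]; i++) {
            if (memcmp(hdr, handlers[i].id, 4) == 0) {
                handlers[i].fn(f, size);   /* handler consumes the chunk body */
                handled = 1;
                break;
            }
        }
        if (!handled)
            fseek(f, size, SEEK_CUR);      /* unknown chunk: just skip it */
        if (size & 1)
            fseek(f, 1, SEEK_CUR);         /* RIFF chunks are word aligned */
    }
}

A format-specific module (GIG, DLS, SF2, WAV...) would then only register handlers for the chunk IDs it understands and reuse the walker itself, which is where the code sharing between related formats would come from.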
Ok, shutting up now ;-) I'd like you guys to comment on the various issues (whether my reasoning is flawed, what you would do differently, etc).

cheers,
Benno
http://linuxsampler.sourceforge.net

-------------------------------------------------
This mail sent through http://www.gardena.net
|
From: Josh G. <jg...@us...> - 2003-07-20 21:21:32
|
I've also been thinking more about Swami/libInstPatch and Linuxsampler.
I realized that there really is no reason for me to push
Swami/libInstPatch for the Linuxsampler project; in fact, when I started
to think of the modular nature that Linuxsampler aims for, it occurred
to me that it shouldn't matter.
If Linuxsampler can be thought of as a network of synthesis modules then
couldn't it be possible to have an API for creating a synthesis model
for a particular instrument format? So say for a SoundFont one would do
something like (pseudocode follows):
/* streams and interpolates audio data, loop and tuning parameters */
add_wave_table ("Wavetable")
add_lowpass_filter ("Filter")
add_envelope ("VolumeEnv")
add_envelope ("ModEnv")
add_lfo ("ModLFO")
add_lfo ("VibratoLFO")
add_pan ("Panning")
etc.. I am still of the opinion that the instrument patch objects should
be decoupled from the synthesis engine, due to its real time nature. So
a handler for a particular instrument format would then set up the
synthesis model (perhaps the network would be compiled or optimized in
some way). A note-on handler would then be written, which might look
like:
voice = new_synthesis_voice ("SoundFont")
voice.Wavetable.set_sample_callback (my_sample_callback)
voice.Wavetable.set_pitch (midi_note)
voice.Filter.set_Q (soundfont Q)
..
I'm not sure how efficient this would be in practice but it does fit the
modular goal. The nice thing about this is that other projects could
take direct advantage of Linuxsampler (such as Swami).
If anyone cares to check out what's currently happening with the
development version of Swami I put up a screenshot.
http://swami.sourceforge.net/images/swami_devel_screenshot.jpg
You'll notice a few gigasampler files open as well as one DLS file and a
SoundFont. Only sample info and splits are viewable with DLS2 files at
the moment, and there is still quite a lot of work to do before I would
consider it a working instrument editor, but things are progressing
quite nicely now :) Of note is that the piano, splits and sample view
are now implemented with the GnomeCanvas (of additional note is that
GnomeCanvas 2.x is not dependent on Gnome). This means piano and splits
can be scaled and controls can be overlaid onto the same canvas. If you
would like to try out the development version, let me know. This stuff
is so new it hasn't been checked into CVS yet. Cheers.
Josh Green
|
|
From: Benno S. <be...@ga...> - 2003-08-04 23:40:36
|
I started to write the GUI for the module editor for LinuxSampler. See this screenshot for now (code will follow later): http://www.linuxdj.com/benno/lsgui4.gif About 1.5 days of coding (Qt lib) ;) You can create modules with an arbitrary number of input and output ports, which can be of several types (MIDI, audio, control ports etc). You can connect ports that are of the same type and move the modules around the screen. As you can see in the screenshot, the idea is to make a very general purpose system which is not tied to MIDI. The screenshot shows that the MIDI keymap module acts as a proxy between the MIDI data and the sampler module itself, which knows nothing about MIDI etc. More details about the GUI within the next days... meanwhile if you have suggestions to make (GUI-wise), speak out loudly ;-)

Here follows a response to Josh's mail.

Josh Green <jg...@us...> writes:
> If Linuxsampler can be thought of as a network of synthesis modules then couldn't it be possible to have an API for creating a synthesis model for a particular instrument format? So say for a SoundFont one would do something like (pseudocode follows):
> (....)
> etc.. I am still of the opinion that the instrument patch objects should be decoupled from the synthesis engine, due to its real time nature. So a handler for a particular instrument format would then set up the synthesis model (perhaps the network would be compiled or optimized in some way). (....)
> I'm not sure how efficient this would be in practice but it does fit the modular goal. The nice thing about this is that other projects could take direct advantage of Linuxsampler (such as Swami).

Yes, the goal is to design it that way, thus it will be easy to build a powerful SF2 engine too. Regarding Swami / libInstPatch etc: please work together with Christian S. on the DLS / GIG loading stuff, since he is interested too. As said, LinuxSampler needs some lib that decodes the chunks and presents them in a way that is easy for the engine to handle. The fewer dependencies it has, the easier it is to use the lib within LinuxSampler.

cheers,
Benno

-------------------------------------------------
This mail sent through http://www.gardena.net
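As an illustration of how little the engine-side work behind such a connect(module1.out, module2.in) command might be, here is a hypothetical C sketch of a typed-port check; every name in it is made up for illustration and none of it is actual LinuxSampler code:

/* Sketch of the engine side of a connect command, assuming typed ports. */
typedef enum { PORT_AUDIO, PORT_MIDI, PORT_CONTROL, PORT_EVENT } port_type;

typedef struct port {
    const char  *name;          /* e.g. "out" or "in" */
    port_type    type;
    int          is_output;
    struct port *connected_to;
} port;

/* The GUI only sends the textual command over the socket; the engine
 * decides whether the connection makes sense and wires the DSP network. */
static int engine_connect(port *src, port *dst)
{
    if (!src->is_output || dst->is_output)
        return -1;              /* must go from an output into an input */
    if (src->type != dst->type)
        return -1;              /* only ports of the same type can be wired */
    dst->connected_to = src;
    return 0;
}

The point of this shape is that the GUI never needs to know what a port type means: it only draws wires and forwards commands, and the engine accepts or rejects them.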
|
From: Steve H. <S.W...@ec...> - 2003-08-05 06:17:43
|
On Tue, Aug 05, 2003 at 01:40:47 +0200, Benno Senoner wrote:
> I started to write the GUI for the module editor for Linuxsampler.
> See this screenshot for now (code will follow later).
> http://www.linuxdj.com/benno/lsgui4.gif

Nice, you can never have too many modular synth style interfaces ;) Are we still planning to use downcompiling to make an efficient runtime sampler? Some questions: what does SamplePtr do? And is pitch really pitch, or is it rate? I'm afraid I've been out of the loop a bit recently with other commitments, but I'll do what I can.

- Steve
|
From: Benno S. <be...@ga...> - 2003-08-05 15:10:12
|
I wrote a response but I think the mail got lost :-( so I'll try again.

Steve Harris <S.W...@ec...> writes:
> Are we still planning to use downcompiling to make an efficient runtime sampler?

Yes. It is not a trivial task but should be manageable. (Some modules require special care, like filters that maintain state information etc.)

> Some questions: what does SamplePtr do?

Basically, when a MIDI event occurs the MIDI Keymap module picks the right sample (key and velocity mapping) and sends a pointer to the sample data and the sample length to the sample module. The same must be done for looping etc. As for modulators, I was thinking of using events that send (base, increment) values. That way you can approximate complex curves via small linear segments while still having a fast engine.

> And is pitch really pitch, or is it rate.

Sorry, yes, I think it is rate. Basically rate = 1.0 plays the sample at normal speed, rate = 2.0 double speed (one octave up etc).

> I'm afraid I've been out of the loop a bit recently with other commitments, but I'll do what I can.

No problem, I'm in the same situation so I'm the first that has to shut up! ;-)

Benno.

-------------------------------------------------
This mail sent through http://www.gardena.net
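A rough C sketch of the keymap lookup described above; the structures, the four velocity zones and the callback are invented for illustration and are not part of any existing LinuxSampler code:

/* Sketch of a MIDI keymap module handing a sample pointer to the sampler. */
#include <stddef.h>
#include <math.h>

typedef struct {
    float *data;       /* pointer to the sample data in RAM */
    size_t length;     /* length in frames */
    int    root_key;   /* key at which the sample plays at rate 1.0 */
} sample_region;

/* filled in by the format loader (GIG, AKAI, ...) */
extern sample_region *keymap[128][4];

/* On note-on the keymap module resolves key/velocity to a region and tells
 * the sampler module what to play and at which rate. */
void on_note_on(int key, int velocity,
                void (*start_sample)(float *data, size_t len, float rate))
{
    int zone = velocity / 32;                 /* four velocity zones, 0..3 */
    sample_region *r = keymap[key][zone];
    if (!r)
        return;
    /* rate 1.0 = original speed, 2.0 = one octave up, as described above */
    float rate = powf(2.0f, (key - r->root_key) / 12.0f);
    start_sample(r->data, r->length, rate);
}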
|
From: Steve H. <S.W...@ec...> - 2003-08-05 16:22:24
|
On Tue, Aug 05, 2003 at 05:10:17 +0200, Benno Senoner wrote:
> > Are we still planning to use downcompiling to make an efficient runtime sampler?
>
> Yes. It is not a trivial task but should be manageable.
> (Some modules require special care, like filters that maintain state information etc.)

Yes, but if we use LADSPA's API (as was suggested a while back) this should be a non-problem, cos LADSPA defines how this is handled.

> > Some questions: what does SamplePtr do?
>
> Basically, when a MIDI event occurs the MIDI Keymap module picks the right sample (key and velocity mapping) and sends a pointer to the sample data and the sample length to the sample module. The same must be done for looping etc.
> As for modulators, I was thinking of using events that send (base, increment) values. That way you can approximate complex curves via small linear segments while still having a fast engine.

Wouldn't a streaming approach be more appropriate? So the sample source module streams PCM data to the audio output module.

> > And is pitch really pitch, or is it rate.
>
> Sorry, yes, I think it is rate. Basically rate = 1.0 plays the sample at normal speed, rate = 2.0 double speed (one octave up etc).

Yup. NP, I just wanted to be clear.

- Steve
|
From: Benno S. <be...@ga...> - 2003-08-05 21:13:56
|
Steve Harris <S.W...@ec...> writes:
> > Yes. It is not a trivial task but should be manageable.
> > (Some modules require special care, like filters that maintain state information etc.)
>
> Yes, but if we use LADSPA's API (as was suggested a while back) this should be a non-problem, cos LADSPA defines how this is handled.

Ok, but keep in mind that the synthesis network is not composed only of audio modules like LADSPA's, but also of modulators, event generators etc., thus ideas can perhaps be taken from LADSPA but will probably need to be extended. We will see as we progress.

> > > Some questions: what does SamplePtr do?
> >
> > Basically, when a MIDI event occurs the MIDI Keymap module picks the right sample (key and velocity mapping) and sends a pointer to the sample data and the sample length to the sample module. The same must be done for looping etc.
> > As for modulators, I was thinking of using events that send (base, increment) values. That way you can approximate complex curves via small linear segments while still having a fast engine.
>
> Wouldn't a streaming approach be more appropriate? So the sample source module streams PCM data to the audio output module.

You mean using the streaming approach for audio or for modulation data? For audio it's obvious that it is a continuous stream of data, but for modulation data I was thinking about using events as described above. That way you avoid wasting memory, bandwidth and CPU cycles like other modular synths do, where you send modulation data as if it were an audio stream (at k-rate like in Csound). Of course nothing forbids us to implement that approach too. But I think modulation by linear segments is flexible enough, and is IMHO one of the fastest approaches, since the amount of data moved between modules is small. If you meant a different thing please let me know (or if your method is more efficient than mine).

Benno

-------------------------------------------------
This mail sent through http://www.gardena.net
|
From: Simon J. <sje...@bl...> - 2003-08-05 23:59:04
|
Benno Senoner wrote:
> For audio it's obvious that it is a continuous stream of data, but for modulation data I was thinking about using events as described above. That way you avoid wasting memory, bandwidth and CPU cycles like other modular synths do, where you send modulation data as if it were an audio stream (at k-rate like in Csound).

IMHO events are best used for units of "musical meaning", eg the sorts of things that MIDI encodes moderately well (provided you are a pianist). That sort of stuff enters a synthesis engine's inputs, may get moved around and processed a bit, but sooner or later it's got to be turned into something the audio end of the engine can actually *work* with... a continuous stream of data, either at sample rate or some low-fi subdivision of it. Why delay the inevitable? It's an envelope generator's *job* to turn some events into a data stream according to some parameters.

> Of course nothing forbids us to implement that approach too. But I think modulation by linear segments is flexible enough

Linear segments aren't so much "events" as they are data-compressed versions of continuous streams. The recipient has to decompress them back into a stream (probably "on the fly", by interpolating along the segments as it goes) before modulating any audio with them. It's a performance overhead, not a saving, to do...

events+params -> envelope encoding -> envelope data stream

rather than directly

events+params -> envelope data stream

unless...

> and is IMHO one of the fastest approaches, since the amount of data moved between modules is small.

...you are planning to win back the time by moving less data around. However: if the synth engines are going to be compiled, then data streams don't have to be moved anywhere. Nothing except the final audio outputs ever needs to leave the engine, and the compiler can generate code which *takes each value from wherever it already is*. (If the generated code were internally blockless and reasonably optimised, then the data for a lot of streams would never even make it to RAM: it would appear in an FPU register as the result of one FP operation and be almost immediately consumed from there by a subsequent FP operation.)

Simon Jenkins
(Bristol, UK)
|
From: Benno S. <be...@ga...> - 2003-08-06 07:08:13
|
Interesting thoughts Simon, but I am still unsure which approach wins in terms of speed. The audio processing is block based (we process N samples at a time, where N is preferably the audio fragment size of the sound card), there can be hundreds of active voices, and each voice can have its own modulator. Assume we use blocks of 256 samples and we have 200 voices active, each with an envelope modulation attached to it. This means that during the DSP cycle (that generates the final 256 output samples), 200 envelope generators write 200 * 256 = 51200 samples (assuming we want very precise envelope curves and thus allow one new volume value per sample). Since the DSP engine is all float based (4 bytes), we end up touching 51200 * 4 = 204800 bytes of data. This puts (IMHO) quite some stress on the cache, perhaps slowing things down because audio mixing requires lots of cache too.

OTOH you say linear events are some form of "compression". Yes they are, but I do not see it as an evil kind of compression, since compared to the streamed approach (where the envelope generator "streams" the volume data to the audio sampler module) it requires only one more addition, which is a very fast operation and whose execution time probably disappears into the noise when compared to the whole DSP network. Perhaps for single-sample-based processing the streamed approach is the way to go, since the data gets consumed immediately, but AFAIK on today's CPUs, even if you could run an exclusive DSP thread with single-sample latency (assume there is no OS in the way that complicates things and you are the only process running), performance would be worse than with block based processing due to worse locality of the referenced data compared to the block model.

If I said nonsense or if my approach is flawed performance-wise, let me know.

cheers,
Benno

-------------------------------------------------
This mail sent through http://www.gardena.net
|
From: Simon J. <sje...@bl...> - 2003-08-06 23:49:31
|
Benno Senoner wrote:
> Interesting thoughts Simon, but I am still unsure which approach wins in terms of speed.

I think you misunderstood what I was suggesting. I'm well aware that you mustn't generate one giant loop with a cache footprint bigger than the actual cache! But it's also very inefficient to generate lots of tiny little loops and move data continually from one to the next via buffers when a single loop could have achieved the same computation and still fit in the cache. (It's not the slight overhead of the extra looping that matters. It's that any intermediate values which must cross loop boundaries are forced out of registers and into memory buffers.)

The trick is to generate loops which are just about the right size: definitely not too big for the cache but, at the same time, not needlessly fragmented into sub-loops that are too small. IMO the code engine for a single voice is probably just about the right size for a loop, and things would run a lot faster if the code was generated blocklessly *within that loop* than if it was generated as a lot of tiny sub-loops leaving buffers of data in RAM for each other. The fact that a voice is designed by connecting little modules with wires doesn't mean that the compiled code must connect little loops with buffers! Giving each envelope generator, each filter, each LFO its own individual loop won't speed things up... it will slow them down. (At higher levels of granularity than a single voice, eg your "200 voices" example, everything should of course be processed in blocks.)

BTW I'm not making this stuff up out of thin air... there's a fairly thorough proof of concept demo at http://www.sbibble.pwp.blueyonder.co.uk/amble/amble-0.1.1.tar.gz It's not a true compiler unfortunately: it generates C source code by pasting together code fragments, each representing a module, into a single internally blockless (but externally block-processing) function. I/O is transferred via buffers, but internal connections between modules are modelled by local variables, and many of these get optimised away by the C compiler, becoming temps in registers as I have been describing. Not only does this work, but it delivers the performance advantages I've been talking about. It's not as good as a true compiler could be, but with a bit of work it could actually be hacked into a quick and dirty code generator for LinuxSampler while we wait for the real compiler to arrive. It really could.

Simon Jenkins
(Bristol, UK)
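A small C sketch of the two code shapes being compared here; the DSP math (a gain stage feeding a one-pole lowpass) is only a placeholder, chosen to show where the intermediate values live:

/* (a) one loop per module: the intermediate value travels through a RAM buffer */
void voice_blocked(const float *in, float *tmp, float *out, int n,
                   float gain, float *lp_state, float a)
{
    int i;
    for (i = 0; i < n; i++)                  /* "amp" module */
        tmp[i] = in[i] * gain;
    for (i = 0; i < n; i++) {                /* "filter" module */
        *lp_state += a * (tmp[i] - *lp_state);
        out[i] = *lp_state;
    }
}

/* (b) one fused loop per voice: the intermediate value never leaves a local,
 * so the compiler can keep it in a register instead of a buffer */
void voice_fused(const float *in, float *out, int n,
                 float gain, float *lp_state, float a)
{
    int i;
    float s = *lp_state;
    for (i = 0; i < n; i++) {
        float amp = in[i] * gain;    /* "amp" module, value stays in a register */
        s += a * (amp - s);          /* "filter" module consumes it directly   */
        out[i] = s;
    }
    *lp_state = s;
}

In (a) the amp output makes a round trip through the tmp buffer in RAM; in (b) it stays in a local and will normally end up in a register, which is the saving being described.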
|
From: Steve H. <S.W...@ec...> - 2003-08-07 07:28:06
|
On Thu, Aug 07, 2003 at 01:50:25 +0100, Simon Jenkins wrote:
> BTW I'm not making this stuff up out of thin air... there's a fairly
> thorough proof of concept demo at
>
> http://www.sbibble.pwp.blueyonder.co.uk/amble/amble-0.1.1.tar.gz

I did a similar thing also, in perl (yuk). I'm not convinced it's actually faster than blocked processing in the general case, but this may be one case where it is. NB SAOL/sfront uses the same technique, so we could possibly use that, and its language is more appropriate to blockless signal processing than C is.

> Its not a true compiler unfortunately: It generates C source code by
> pasting together code fragments, each representing a module, into
> a single internally blockless (but externally block-processing)
> function.

Mine also, but the blocks that were real C code were very tiny (like + and *); everything else was built out of subgraphs. OTOH the other advantage of blockless processing (very low latency in feedback loops) is pretty irrelevant for samplers.

- Steve
|
From: Simon J. <sje...@bl...> - 2003-08-07 08:28:17
|
Steve Harris wrote:
> On Thu, Aug 07, 2003 at 01:50:25 +0100, Simon Jenkins wrote:
> > BTW I'm not making this stuff up out of thin air... there's a fairly
> > thorough proof of concept demo at
> >
> > http://www.sbibble.pwp.blueyonder.co.uk/amble/amble-0.1.1.tar.gz
>
> I did a similar thing also, in perl (yuk).

I remember your perl thing! What happened was: I mentioned a pasting-C-together idea in LAD, then found myself actually implementing it (which took a couple of months) to see if it would work. Meanwhile, you did the same thing in a couple of days in perl. However: you+perl aren't *really* 30 times faster than me+C :) A lot of the extra time I'd taken was to make sure that the potential optimisations were actually available in the generated code and visible to the C compiler. I then reproduced your perl demos in amble and found that, indeed, the compiler was able to optimise the amble-generated C code in ways that it could not do with the equivalent perl-generated C code.

> I'm not convinced it's actually faster than blocked processing in the general case, but this may be one case where it is.

The biggest hole in my argument (that I can see :)) is that module internal state would often end up in RAM instead of registers in blockless code. If a group of modules were heavy on internal state but light on interconnections, then the blockless code could lose out.

Simon Jenkins
(Bristol, UK)
|
From: Steve H. <S.W...@ec...> - 2003-08-07 08:42:21
|
On Thu, Aug 07, 2003 at 10:29:12 +0100, Simon Jenkins wrote:
> The biggest hole in my argument (that I can see :)) is that module
> internal state would often end up in RAM instead of registers in
> blockless code. If a group of modules were heavy on internal state
> but light on interconnections, then the blockless code could lose out.

Not really on ia32; there are so few registers that it's not likely to make much difference. On PPC it may matter.

- Steve
|
From: ian e. <de...@cu...> - 2003-08-05 21:44:55
|
the downcompiling idea is definitely the way to go, but i think it would be much better if it was one of the things that was left until later to develop. i think people would much rather have a working sampler that used more cpu than have to wait until the downcompiler was ready to have anything they could use at all. also, the non-downcompiled network is going to be necessary for synthesis network design, so it won't be wasted effort. just my opinion, but the quicker a usable sampler appears, the better!

ian

--
ian esten <de...@cu...>
|
From: Christian S. <chr...@ep...> - 2003-07-26 23:00:43
|
On Sunday, 20 July 2003 at 18:29, Benno Senoner wrote:
> While I would technically be able to do such stuff, I'd prefer to focus
> on building an efficient low-level engine, so it would be handy if some of
> you could implement the GIG importing lib.
> Any volunteers?

oI
IO Yep, count me in!

I hope to see one of you guys on IRC (#lad) today (Sunday), so you (perhaps Marek?) can tell me what's already done and what we'll still have to do. I have e.g. no DLS specs yet, so it would be nice if somebody could send them to me.

CU
Christian
|
From: Josh G. <jg...@us...> - 2003-07-28 13:57:06
|
On Sun, 2003-07-27 at 00:59, Christian Schoenebeck wrote:
> oI
> IO Yep, count me in!
>
> I hope to see one of you guys on IRC (#lad) today (Sunday), so you (perhaps
> Marek?) can tell me what's already done and what we'll still have to do. I
> have e.g. no DLS specs yet, so it would be nice if somebody could send them to me.
>
> CU
> Christian

I'm on #lad right now, since I'm also interested in doing GIG support with Swami. The DLS loader is already working in libInstPatch, but I'd like to add the gig extensions. It's somewhat unfortunate that one cannot tell the difference between a .gig and a .dls file until running into one of the .gig specific chunks. They kind of polluted the magic file name space :) I have the DLS specs, unfortunately they are a printed copy. If someone has an electronic form, I'd also be interested. Cheers.

Josh Green
|
From: Alexandre P. <av...@al...> - 2003-07-28 14:32:28
|
Josh Green wrote:
> I have the DLS specs, unfortunately they are a printed copy.
> If someone has an electronic form, I'd also be interested.

Ahem, doesn't Google rule? :-)

http://www.midi.org/about-midi/dls/dls2spec.shtml

--
Alexandre Prokoudine
ALT Linux Documentation Team
JabberID: av...@al...
|
From: Josh G. <jg...@us...> - 2003-07-28 14:45:40
|
On Mon, 2003-07-28 at 16:32, Alexandre Prokoudine wrote:
> Ahem, doesn't Google rule? :-)
>
> http://www.midi.org/about-midi/dls/dls2spec.shtml

Yes it does, almost as much as the MMA sucks. You can happily download DLS1 in electronic form; unfortunately DLS2 is by order only, and they give you a nice printed copy, which is much harder to transport on a laptop (my copy is in California, while I'm currently in Germany). DLS1 has much in common with DLS2 though. Cheers.

Josh Green
|
From: Josh G. <jg...@us...> - 2003-07-28 18:03:49
|
On Sun, 2003-07-20 at 18:29, Benno Senoner wrote:
> Josh is writing libInstPatch and I think it is a nice lib, but I'm unsure if
> it is the right way to go.

Just wanted to clear up one point that Benno mentioned. I'm curious if anyone has actually looked at libInstPatch (not the one with swami-0.9.x, that's totally outdated, but the one in the swami-1-0 development branch; use "co -r swami-1-0 swami" when checking out CVS, explained more on the Swami download page). I really think it is a nice library architecture and it is constantly improving. It already does quite a bit of what you have been talking about:

IpatchRiffParser - RIFF parser object
Patch formats are loaded into multi-threaded safe object trees
"named" object properties for easy setting of object values
All file IO done via IpatchFile which is a virtual file object
Abstract sample storage (RAM, file, swap, libsndfile, etc)
Supports DLS and SoundFont files currently, rather trivial to add additional formats
Has a nice GUI already, although incomplete (Swami)

Anyways, I'm not sure what the goal of the LinuxSampler project is, but I know I'll be continuing work on my own project. If there is some way in which my work could help the goals of LinuxSampler, I think it would be cool. I don't really feel like I have gotten any feedback on whether libInstPatch makes sense or not for this project, so until I do, I will keep pushing it :) Cheers.

Josh Green
|
From: Marek P. <ma...@na...> - 2003-08-01 20:24:13
|
> Some of the developers (Marek ? ;-) ) do not like Qt that much (because it is
> C++ based or because Qt is GPL and not LGPL like gtk)

Personally i have nothing against Qt being GPL or C++ or whatever, in fact Qt is cool, but i simply like GTK+ better, that's all. :)

Marek
|
From: Simon J. <sje...@bl...> - 2003-08-07 21:34:02
|
ian esten wrote:
> the downcompiling idea is definitely the way to go, but i think it would
> be much better if it was one of the things that was left until later to
> develop. i think people would much rather have a working sampler that
> used more cpu than have to wait until the downcompiler was ready to have
> anything they could use at all.

IMO if linuxsampler is going to downcompile at all then it can't just be bolted on for version 2.0. The feature needs to be present, in some form, in something more like version 0.2.

> also, the non-downcompiled network is
> going to be necessary for synthesis network design, so it won't be
> wasted effort.

It's true that some sort of design-time engine will be required for use when interactively designing a voice. *But the downcompiler should be capable of generating that design-time engine automatically*: typing

#downcompiler --generate-design-time-engine

would be considerably less effort than coding one by hand. And how much extra effort would it be to make the downcompiler capable of this feat? Almost none! (If you consider how very, very, very close you could get to the objective simply by compiling a voice which consisted of loads of modules, all connected to a patchbay module...)

Simon Jenkins
(Bristol, UK)
|
From: Benno S. <be...@ga...> - 2003-08-09 14:14:51
|
[ CCing David Olofson. David, if you can join the LS list, we need hints for an optimized design ;-) ]

Hi, to continue the blockless vs block based, CV based etc. audio rendering discussion:

Steve H. agrees with me that block mode eases the processing of events that can drive the modules. Basically, one philosophy is to adopt "everything is a CV" (control value), where control ports are treated as if they were audio streams and you run these streams at a fraction of the sampling rate (usually up to samplerate/4). The other philosophy is not to adopt "everything is a CV", but to use typed ports and time-stamped scheduled events. Those events are scheduled in the future, but we queue up only events that belong to the next audio block to be rendered (eg 64 samples). That way real time manipulation is still possible, since the real time events belong to the next audio block too (the current one is already in the DMA buffer of the sound card and cannot be manipulated anymore). That way, with very little effort and overhead, you achieve both sample accurate event scheduling and good scheduling of real time events.

Assume a MIDI input event occurs while we are processing the current block. We read the MIDI event using a higher priority SCHED_FIFO thread and read out the current sample pointer when the event occurred. We can then simply insert a scheduled MIDI event during the next audio block that occurs exactly N samples (64 in our example) after the event was registered. That way we get close to zero jitter of real time events even when we use bigger audio fragment sizes. With the streamed approach we would need some scheduling of MIDI events too, thus we would probably need to create a module that waits N samples (control samples) and then emits the event. So basically we end up in a timestamped event scenario too.

Now assume we do all blockless processing, eg the DSP compiler generates one giant equation for each DSP network (instrument): output = func(input1,input2,....). I'm not sure we gain in performance compared to the block based processing where we apply all operations sequentially on a buffer (filters, envelopes etc) as if they were LADSPA modules, but without calling external modules, instead "pasting" their sources in sequence without function calls etc. I remember someone a long time ago talked about better cache locality of this approach (was it you, David? ;-) ), but after discussing blockless vs block based on IRC with Steve and Simon I'm now confused. I guess we should try both methods and benchmark them.

As said, I dislike "everything is a CV" a bit because you cannot do what I proposed: eg. you have a MIDI keymap module that takes real time MIDI events (note on / off) and spits out events that drive the RAM sampler module (which knows nothing about MIDI). In an event based system you can send a pointer to sample data in RAM, the length of the sample, looping points, envelope curves (organized as sequences of linear segments) etc. Basically, in my model you cannot connect everything with everything (Steve says this is bad but I don't think so), but you can connect everything with "everything that makes sense to connect to". Plus, if you have special needs you can always implement your own converter module (converting a MIDI velocity into a filter frequency etc). (But I think such a module will be part of the standard set anyway, since we need MIDI pitch to filter frequency conversion too if we want filters that support frequency tracking.)

As said, I will come up with a running proof of concept; if we all end up dissatisfied with the event based model we can always switch to other models, but I'm pretty confident that the system will be both performant and flexible (it just takes time to code). thoughts ?

Benno
http://linuxsampler.sourceforge.net

-------------------------------------------------
This mail sent through http://www.gardena.net
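A hypothetical C sketch of the timestamping scheme described above; the block size, structure layout and function names are all assumptions, and a real implementation would use a lock-free FIFO between the SCHED_FIFO MIDI thread and the audio thread rather than a plain array:

#include <stdint.h>

#define BLOCK_SIZE 64
#define MAX_EVENTS 256

typedef struct {
    uint32_t frame;    /* offset in samples into the block being scheduled */
    int      type;     /* e.g. NOTE_ON, NOTE_OFF, RAMP, ...                */
    float    value;
} ls_event;

/* Called from the MIDI thread: 'now_in_block' is the engine's current sample
 * position inside the block currently being rendered. The event is queued
 * for the *next* block at the same offset, so it takes effect exactly
 * BLOCK_SIZE samples after it was received, with close to zero jitter
 * regardless of the audio fragment size. */
void schedule_midi_event(ls_event *next_block_queue, int *count,
                         uint32_t now_in_block, int type, float value)
{
    if (*count >= MAX_EVENTS)
        return;                               /* queue full: drop the event */
    ls_event ev;
    ev.frame = now_in_block % BLOCK_SIZE;
    ev.type  = type;
    ev.value = value;
    next_block_queue[(*count)++] = ev;
}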
|
From: David O. <da...@ol...> - 2003-08-09 15:29:22
|
On Saturday 09 August 2003 16.14, Benno Senoner wrote:
> [ CCing David Olofson. David, if you can join the LS list, we need hints for an optimized design ;-) ]

I've been on the list for a good while, but only in Half Asleep Lurk Mode. :-)

> Hi, to continue the blockless vs block based, CV based etc. audio rendering discussion:
>
> Steve H. agrees with me that block mode eases the processing of events that can drive the modules.
>
> Basically, one philosophy is to adopt "everything is a CV" (control value), where control ports are treated as if they were audio streams and you run these streams at a fraction of the sampling rate (usually up to samplerate/4).
>
> The other philosophy is not to adopt "everything is a CV", but to use typed ports and time-stamped scheduled events.

Another way of thinking about it is to view streams of control ramp events as "structured audio data". It allows for various optimizations for low bandwidth data, but still doesn't have any absolute bandwidth limit apart from the audio sample rate. (And not even that, if you use a different unit for event timestamps - but that's probably not quite as easy as it may sound.)

> Those events are scheduled in the future, but we queue up only events that belong to the next audio block to be rendered (eg 64 samples). That way real time manipulation is still possible, since the real time events belong to the next audio block too (the current one is already in the DMA buffer of the sound card and cannot be manipulated anymore).

Or: buffering/timing is exactly the same for events as for audio streams. There is no reason to treat them differently, unless you want a high level interface to the sequencer - and that's a different thing, IMHO.

> That way, with very little effort and overhead, you achieve both sample accurate event scheduling and good scheduling of real time events. Assume a MIDI input event occurs while we are processing the current block. We read the MIDI event using a higher priority SCHED_FIFO thread and read out the current sample pointer when the event occurred. We can then simply insert a scheduled MIDI event during the next audio block that occurs exactly N samples (64 in our example) after the event was registered.
> That way we get close to zero jitter of real time events even when we use bigger audio fragment sizes.

Exactly. Now, that is apparently impossible to implement on some platforms (poor RT scheduling), but some people using broken OSes is no argument for broken API designs, IMNSHO... And of course, it's completely optional to implement it in the host. In Audiality, I'm using macros/inlines for sending events, and the only difference is a timestamp argument - and you can just set that to 0 if you don't care. (0 means "start of current block", as there's a global "timer" variable used by those macros/inlines.)

> With the streamed approach we would need some scheduling of MIDI events too, thus we would probably need to create a module that waits N samples (control samples) and then emits the event. So basically we end up in a timestamped event scenario too.

Or the usual approach; MIDI is processed once per block and quantized to block boundaries...

> Now assume we do all blockless processing, eg the DSP compiler generates one giant equation for each DSP network (instrument): output = func(input1,input2,....).
> I'm not sure we gain in performance compared to the block based processing where we apply all operations sequentially on a buffer (filters, envelopes etc) as if they were LADSPA modules, but without calling external modules, instead "pasting" their sources in sequence without function calls etc.

I suspect it is *always* a performance loss, except in a few special cases and/or with very small nets and a good optimizing "compiler". Some kind of hybrid approach (ie "build your own plugins") would be very interesting, as it could offer the best of both worlds. I think that's pretty much beyond the scope of "high level" plugin APIs (such as VST, DX, XAP, GMPI and even LADSPA).

> I remember someone a long time ago talked about better cache locality of this approach (was it you, David? ;-) ), but after discussing blockless vs block based on IRC with Steve and Simon I'm now confused.

I don't think there is a simple answer. Both approaches have their advantages in some situations, even WRT performance, although I think for the stuff most people do on DAWs these days, blockless processing will be significantly slower. That said, something that generates C code that's passed to a good optimizing compiler might shift things around a bit, especially now that there are compilers that automatically generate SIMD code and stuff like that. The day you can compile a DSP net into native code in a fraction of a second, I think traditional plugin APIs will soon be obsolete, at least in the Free/Open Source world. (Byte code + JIT will probably do the trick for the closed source people, though.)

> I guess we should try both methods and benchmark them.

Yes. However, keep in mind that what we design now will run on hardware that's at least twice as fast as what we have now. It's likely that the MIPS/memory bandwidth ratio will be worse, but you never know... What I'm saying is basically that benchmarking for future hardware is pretty much gambling, and results on current hardware may not give us the right answer.

> As said, I dislike "everything is a CV" a bit because you cannot do what I proposed: eg. you have a MIDI keymap module that takes real time MIDI events (note on / off) and spits out events that drive the RAM sampler module (which knows nothing about MIDI). In an event based system you can send a pointer to sample data in RAM, the length of the sample, looping points, envelope curves (organized as sequences of linear segments) etc.

I disagree to some extent - but this is a very complex subject. Have you followed the XAP discussions? I think we pretty much concluded that you can get away with "everything is a control", only one event type (RAMP, where duration == 0 means SET) and a few data types. That's what I'm using internally in Audiality, and I'm not seeing any problems with it.

> Basically, in my model you cannot connect everything with everything (Steve says this is bad but I don't think so), but you can connect everything with "everything that makes sense to connect to".

Well, you *can* convert back and forth, but it ain't free... You can't have everything. Anyway, I see timestamped events mostly as a performance hack. More accurate than control rate streams (lower rate than audio rate), less expensive than audio rate controls in normal cases, but still capable of carrying audio rate data when necessary.

Audio rate controls *are* the real answer (except for some special cases, perhaps; audio rate text messages, anyone? ;-), but it's still a bit on the expensive side on current hardware. (Filters have to recalculate coefficients, or at least check the input, every sample frame, for example.) In modular synths, it probably is the right answer already, but I don't think it fits the bill well enough for "normal" plugins, like the standard VST/DX/TDM/... stuff.

> Plus, if you have special needs you can always implement your own converter module (converting a MIDI velocity into a filter frequency etc). (But I think such a module will be part of the standard set anyway, since we need MIDI pitch to filter frequency conversion too if we want filters that support frequency tracking.)

Yes... In XAP, we tried to forget about the "argument bundling" of MIDI, and just have plain controls. We came up with a nice and clean design that can do everything that MIDI can, and then some, still without any multiple argument events. (Well, events *have* multiple arguments, but only one value argument - the others are the timestamp and various addressing info.)

> As said, I will come up with a running proof of concept; if we all end up dissatisfied with the event based model we can always switch to other models, but I'm pretty confident that the system will be both performant and flexible (it just takes time to code).

In my limited hands-on experience, the event system actually makes some things *simpler* for plugins. They just do what they're told when they're told, and there's no need to check when to do things or scan control input streams: just process audio as usual until you hit the next event. Things like envelope generators, that have to generate their own timing internally, look pretty much the same whether they deal with events or audio rate streams. The only major difference is that the rendering of the output is done by whatever receives the generated events, rather than by the EG itself.

Either way, the real heavy stuff is always the DSP code. In cases where it isn't, the whole plugin is usually so simple that it doesn't really matter what kind of control interface you're using; the DSP code fits right into the basic "standard model" anyway. In such cases, an API like XAP or Audiality's internal "plugin API" could provide some macros that make it all insanely simple - maybe simpler than LADSPA.

Anyway, need to get back to work now... :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|
From: Benno S. <be...@ga...> - 2003-08-09 17:01:34
|
David Olofson <da...@ol...> writes:
>
> I've been on the list for a good while, but only in Half Asleep Lurk
> Mode. :-)
Nice to have you on board, I did not know that you were on the list ;-)
> >
> > The other philosopy is not to adopt the "everything is a CV", use
> > typed ports and use time stamped scheduled events.
>
> Another way of thinking about it is to view streams of control ramp
> events as "structured audio data". It allows for various
> optimizations for low bandwidth data, but still doesn't have any
> absolute bandwidth limit apart from the audio sample rate. (And not
> even that, if you use a different unit for event timestamps - but
> that's probably not quite as easy as it may sound.)
So at this time for linuxsampler would you advocate an event
based approach or a continuous control stream (that runs at a fraction
of the samplerate) ?
As far as I understood it from reading your mail it seems that you
agree that on current machines (see filters that need to recalculate
coefficients etc) it makes sense to use an event based system.
>
>
> > Those events are scheduled in the future but we queue up only
> > events that belong to the next to be rendered audio block (eg 64
> > samples). That way real time manipulation is still possible since
> > the real time events belong to the next audio block too (the
> > current one is already in the dma buffer of the sound card and
> > cannot be manipulated anymore).
>
> Or: Buffering/timing is exactly the same for events as for audio
> streams. There is no reason to treat the differently, unless you want
> a high level interface to the sequencer - and that's a different
> thing, IMHO.
Yes, the timebase is the sampling rate, which keeps audio, MIDI and
other general events nicely in sync.
>
> Exactly.
>
> Now, that is apparently impossible to implement on some platforms
> (poor RT scheduling), but some people using broken OSes is no
> argument for broken API designs, IMNSHO...
Ok, but even if there is a jitter of a few samples it is much better than
having an event jitter equivalent to the audio fragment size.
It will be impossible for the user to notice that the MIDI pitchbend
event was scheduled a few usecs too late compared to the ideal time.
Plus, as said, it will work with relatively large audio fragsizes too.
>
> > With the streamed approach we would need some scheduling of MIDI
> > events too thus we would probably need to create a module that
> > waits N samples (control samples) and then emits the event.
> > So basically we end up in a timestamped event scenario too.
>
> Or the usual approach; MIDI is processed once per block and quantized
> to block boundaries...
I don't like that; it might work OK for very small fragsizes, e.g.
32-64 samples / block, but if you go up to, let's say, 512 - 1024,
timing of MIDI events will suck badly.
> > Not sure we gain in performance compared to the block based
> > processing where we apply all operations sequentially on a buffer
> > (filters, envelopes etc) like they were ladspa modules but without
> > calling external modules but instead "pasting" their sources in
> > sequence without function calls etc.
>
> I suspect it is *always* a performance loss, except in a few special
> cases and/or with very small nets and a good optimizing "compiler".
So it seems that the best compromise is to process audio in blocks,
but to perform all DSP operations relative to an output sample
(relative to a voice) in one rush.
Assume a sample that is processed
first by an amplitude modulator and then by an LP filter.
Basically, instead of doing (like LADSPA would do)
am_sample=amplitude_modulator(input_sample,256samples)
output_sample=LP_filter(am_sample, 256samples)
(output_sample is a pointer to 256 samples, the current audio block)
it would be faster to do
for(i=0;i<256;i++) {
output_sample[i]=do_filter(do_amplitude_modulation(input_sample[i]));
}
right ?
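To make the comparison concrete, here is a rough, self-contained C
sketch of the two strategies (all function and variable names are made
up for illustration, this is not existing linuxsampler code): variant
(a) runs each DSP stage over the whole block through an intermediate
buffer, variant (b) fuses both stages into one per-sample loop so the
intermediate value never has to go through a temporary buffer.

/* Illustrative sketch only: amplitude modulation followed by a
   one-pole lowpass, processed (a) block-wise with a temp buffer
   and (b) fused into a single per-sample loop. */
#include <stddef.h>

#define BLOCK 256

static float lp_state = 0.0f;         /* filter memory                 */
static const float lp_coeff = 0.1f;   /* fixed coefficient, demo only  */

static float do_amplitude_modulation(float in, float gain)
{
    return in * gain;
}

static float do_filter(float in)
{
    lp_state += lp_coeff * (in - lp_state);   /* simple one-pole LP    */
    return lp_state;
}

/* (a) LADSPA-style: each stage walks the whole block */
void process_blockwise(const float *in, float *out, float gain)
{
    float tmp[BLOCK];                 /* intermediate buffer           */
    size_t i;
    for (i = 0; i < BLOCK; i++)
        tmp[i] = do_amplitude_modulation(in[i], gain);
    for (i = 0; i < BLOCK; i++)
        out[i] = do_filter(tmp[i]);
}

/* (b) fused: one pass over the block, no intermediate buffer */
void process_fused(const float *in, float *out, float gain)
{
    size_t i;
    for (i = 0; i < BLOCK; i++)
        out[i] = do_filter(do_amplitude_modulation(in[i], gain));
}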
While for pure audio processing the approach is quite straightforward,
when we take envelope generators etc. into account we must inject
code that checks the current timestamp (an if() statement) and then
modifies the right values according to the events pending in the queue
(or autogenerated events).
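Something like the following rough, self-contained C sketch is what I
have in mind (all types and helper bodies are made up just to show the
shape of the loop): events due at the current frame get applied, then
plain audio is rendered up to the next event's timestamp, so there is
no per-sample polling of controls.

/* Hypothetical sketch of the "render until the next event" inner loop.
   Events are assumed sorted by timestamp and to lie within the block. */
#include <stddef.h>

typedef struct {
    unsigned timestamp;       /* frame offset within the current block  */
    int      type;            /* e.g. set rate, set volume, ...         */
    float    value;
} event_t;

typedef struct {
    event_t *events;          /* events queued for this block           */
    unsigned n_events, next;  /* next = index of first unhandled event  */
    /* ... per-voice DSP state would live here ...                      */
} voice_t;

static event_t *queue_peek(voice_t *v)
{
    return (v->next < v->n_events) ? &v->events[v->next] : NULL;
}

static void apply_event(voice_t *v, const event_t *e)
{
    (void)v; (void)e;         /* would change rate, volume, loop, ...   */
}

static float run_voice_dsp(voice_t *v)
{
    (void)v;                  /* would resample, filter, apply volume   */
    return 0.0f;
}

void render_block(voice_t *v, float *out, unsigned nframes)
{
    unsigned pos = 0;
    while (pos < nframes) {
        event_t *e;
        unsigned end;
        /* apply every event scheduled exactly at the current frame */
        while ((e = queue_peek(v)) != NULL && e->timestamp == pos) {
            apply_event(v, e);
            v->next++;
        }
        /* render plain audio up to the next event or the block end */
        end = (e != NULL && e->timestamp < nframes) ? e->timestamp : nframes;
        while (pos < end)
            out[pos++] = run_voice_dsp(v);
    }
}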
For example, I was envisioning a simple RAM sampler module that knows
nothing about MIDI etc. but is flexible enough to offer
the full functionality that hardcoded sampler designs have.
E.g. the RAM sampler module has the following inputs:
start trigger: starts the sample
release trigger: sample goes into release mode
stop trigger: sample output stops completely
rate: a float where 1.0 means output the sample at the original rate,
2.0 shifts one octave up, etc.
volume: output volume
(rate and volume would receive RAMP events so that you can modulate
these two values in arbitrary ways)
sample_ptr_and_len: pointer to a sample stored in RAM, with associated
length
attack looping: a list of looping points:
(position_to_jump, loop_len, number_of_repeats)
release looping: same structure as above, but it is used when
the sampler module goes into the release phase.
Basically, when you release a key, if the sample has loops, you switch to
the release_looping list after the current loop comes to its end position
(a rough struct sketch of these inputs follows below).
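Just to make the port list above more concrete, here is a rough C
sketch of what the module's input data could look like (all names are
invented, purely illustrative):

/* Hypothetical port/parameter layout for the RAM sampler module. */
typedef struct {
    unsigned pos;             /* position to jump to (in frames)        */
    unsigned len;             /* loop length (in frames)                */
    int      repeats;         /* number of repeats, -1 = infinite       */
} loop_point;

typedef struct {
    /* trigger inputs */
    int   start_trigger;      /* starts the sample                      */
    int   release_trigger;    /* sample goes into release mode          */
    int   stop_trigger;       /* sample output stops completely         */

    /* controls that would receive RAMP events */
    float rate;               /* 1.0 = original rate, 2.0 = +1 octave   */
    float volume;             /* output volume                          */

    /* structured data that does not fit a scalar RAMP event */
    const float *sample_ptr;  /* sample data in RAM                     */
    unsigned     sample_len;  /* length in frames                       */
    loop_point  *attack_loops;
    unsigned     n_attack_loops;
    loop_point  *release_loops;
    unsigned     n_release_loops;
} ram_sampler_inputs;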
But as you see, this RAM sampler module does not fit well into a
single RAMP event.
OK, you could for example separate
sample_ptr_and_len into two variables, but it seems a bit inefficient
to me.
The same could be said of the looping structure.
You could make more input ports, e.g.
attack_looping_position
attack_looping_loop_len
etc.,
but it seems a waste of time to me since you end up managing
multiple lists of events even when they are mutually linked
(a loop_position does not make sense without the loop_len etc.).
So I'd be interested in how the RAM sampler module described above
could be made to work with only the RAMP event.
BTW: you said a RAMP with value = 0 means setting a value.
But what do you set exactly to 0, the timestamp?
That would not be ideal since 0 is a legitimate value.
It would be better to use -1 or something like that.
OTOH this would require an additional if() statement
(to check if it is a regular ramp or a set statement) and it could
possibly slow things down a bit.
My proposed ramping approach, which consists of
value_to_be_set, delta
does not require an if, and if you simply want to set a value
you set delta = 0.
But my approach has the disadvantage that if you want to do mostly ramping
you always have to calculate value_to_be_set at each event, and this
could become non-trivial if you do not track the values within the modules.
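To compare the two alternatives more concretely, here is a rough
sketch (the field names are invented, not taken from XAP or Audiality)
of a duration-based RAMP event, where duration == 0 degenerates into a
plain SET, next to my value+delta form, where delta == 0 plays the
same role:

/* Hypothetical event layouts, for comparison only. */

typedef struct {            /* duration-based RAMP                       */
    unsigned timestamp;     /* in sample frames, relative to the block   */
    unsigned port;          /* target control port                       */
    float    target;        /* value to reach                            */
    unsigned duration;      /* frames to get there; 0 = set immediately  */
} ramp_event;

typedef struct {            /* value + per-sample delta form             */
    unsigned timestamp;
    unsigned port;
    float    value;         /* value set at 'timestamp'                  */
    float    delta;         /* added every frame; 0 = plain set          */
} set_delta_event;

/* Receiver side, duration-based form: the if() runs once per event,
   not once per sample. */
static void apply_ramp(float *current, float *per_frame_step,
                       const ramp_event *e)
{
    if (e->duration == 0) {                 /* SET: jump to the target   */
        *current = e->target;
        *per_frame_step = 0.0f;
    } else {                                /* RAMP: linear slope        */
        *per_frame_step = (e->target - *current) / (float)e->duration;
    }
}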
Comments on that issue ?
>
> > I remember someone long time ago talked about better cache locality
> > of this approach (was it you David ? ;-) ) but after discussing
> > about blockless vs block based on irc with steve and simon I'm now
> > confused.
>
> I don't think there is a simple answer. Both approaches have their
> advantages in some situations, even WRT performance, although I think
> for the stuff most people do on DAWs these days, blockless processing
> will be significantly slower.
Blockless as referred to above by me (blockless = one single equation,
but processed in blocks), or blockless using another kind of approach?
(Please elaborate.)
>
> That said, something that generates C code that's passed to a good
> optimizing compiler might shift things around a bit, especially now
> that there are compilers that automatically generate SIMD code and
> stuff like that.
The question is indeed whether, if you do LADSPA-style processing
(applying all DSP processing in sequence), the compiler uses SIMD
and optimizes the processing loops and is therefore faster
than calculating the result as one single big equation at a time, which
could possibly not take advantage of SIMD etc.
But OTOH the blockless processing has the advantage that
things are not moved around much in the cache.
The output value of the first module is directly available as the
input value of the next module without needing to move it to
a temporary buffer or variable.
>
> The day you can compile a DSP net into native code in a fraction of a
> second, I think traditional plugin APIs will soon be obsolete, at
> least in the Free/Open Source world. (Byte code + JIT will probably
> do the trick for the closed source people, though.)
Linuxsampler is an attempt to prove that this works, but as said I prefer
very careful design in advance rather than quick'n'dirty results.
Even if some people like to joke about linuxsampler remaining
vaporware forever, I have to admit that after long discussions on
the mailing list we learned quite some stuff that will be very handy
for making a powerful engine.
>
> > As said I dislike "everything is a CV" a bit because you cannot do
> > what I proposed:
> > eg. you have a MIDI keymap modules that takes real time midi events
> > (note on / off) and spits out events that drive the RAM sampler
> > module (that knows nothing about MIDI). In an event based system
> > you can send Pointer to Sample data in RAM, length of sample,
> > looping points, envelope curves (organized as sequences of linear
> > segments) etc.
>
> I disagree to some extent - but this is a very complex subject. Have
> you followed the XAP discussions? I think we pretty much concluded
> that you can get away with "everything is a control", only one event
> type (RAMP, where duration == 0 means SET) and a few data types.
> That's what I'm using internally in Audiality, and I'm not seeing any
> problems with it.
Ah, you are using the concept of duration.
Isn't it a bit redundant? Instead of using duration one can use
duration-less RAMP events and just generate an event that sets
the delta ramp value to zero when you want the ramp to stop.
>
>
> > Basically in my model you cannot connect everything with everything
> > (Steve says it is bad but I don't think so) but you can connect
> > everything with "everything that makes sense to connect to".
>
> Well, you *can* convert back and forth, but it ain't free... You can't
> have everything.
Ok, but converters will be the exception and not the rule:
for example the MIDI mapper module
(see the GUI screenshot message here:
http://sourceforge.net/mailarchive/forum.php?thread_id=2841483&forum_id=12792 )
acts as a proxy between the MIDI Input and the RAM sampler module.
So it makes the right port types available.
No converters are needed. It's all done internally in the best possible way,
without needless float to int conversions, interpreting pointers as floats
and other "everything is a CV" oddities ;-)
>
> Anyway, I see timestamped events mostly as a performance hack. More
> accurate than control rate streams (lower rate than audio rate), less
> expensive than audio rate controls in normal cases, but still capable
> of carrying audio rate data when necessary.
Yes, they are a bit of a performance hack, but as you pointed out, on current
hardware audio rate controls would be a waste of resources, and since every
musician's goal is to get the maximum number of voices / effects / tracks etc.
out of the hardware, I think it pays off quite a lot to use an event-based system.
> Yes... In XAP, we tried to forget about the "argument bundling" of
> MIDI, and just have plain controls. We came up with a nice and clean
> design that can do everything that MIDI can, and then some, still
> without any multiple argument events. (Well, events *have* multiple
> arguments, but only one value argument - the others are the timestamp
> and various addressing info.)
Hmm, I did not follow the XAP discussions (I was overloaded during that
time, as usual ;-) ), but can you briefly explain how this XAP model would
fit the model where the MIDI IN module talks to the MIDI mapper, which
in turn talks to the RAM sampler?
>
> In my limited hands-on experience, the event system actually makes
> some things *simpler* for plugins. They just do what they're told
> when they're told, and there's no need to check when to do things or
> scan control input streams: Just process audio as usual until you hit
> the next event. Things like envelope generators, that have to
> generate their own timing internally, look pretty much the same
> whether they deal with events or audio rate streams. The only major
> difference is that the rendering of the output is done by whatever
> receives the generated events, rather than by the EG itself.
>
> Either way, the real heavy stuff is always the DSP code. In cases
> where it isn't, the whole plugin is usually so simple that it doesn't
> really matter what kind of control interface you're using; the DSP
> code fits right into the basic "standard model" anyway. In such
> cases, an API like XAP or Audiality's internal "plugin API" could
> provide some macros that make it all insanely simple - maybe simpler
> than LADSPA.
So you are saying that in practical terms (processing performance) it does not
matter whether you use events or streamed control values?
I still prefer the event-based system because it allows you to deal more
easily with real time events (with sub audio block precision), and if you
need it you can run at full sample rate.
>
>
> Anyway, need to get back to work now... :-)
yeah, we unleashed those km-long mails again ... just like in the old times ;-)
can you say infinite recursion ;-)))
cheers,
Benno.
-------------------------------------------------
This mail sent through http://www.gardena.net
|
|
From: Steve H. <S.W...@ec...> - 2003-08-09 19:02:04
|
On Sat, Aug 09, 2003 at 07:01:40 +0200, Benno Senoner wrote:
> basically instead of doing (like LADSPA would do)
>
> am_sample=amplitude_modulator(input_sample,256samples)
> output_sample=LP_filter(am_sample, 256samples)
>
> (output_sample is a pointer to 256 samples, the current audio block)
>
> it would be faster to do
>
> for(i=0;i<256;i++) {
> output_sample[i]=do_filter(do_amplitude_modulation(input_sample[i]));
> }
That's not what I did; I did something like:
for(i=0;i<256;i++) {
    do_amplitude_modulation(input_sample+i, &tmp);
    do_filter(&tmp, output_sample+i);
}
Otherwise you're limited in the number of outputs you can have, and this way
makes it more obvious what the execution order is (imagine branches in the
graph), and it's no slower.
> The question is indeed if you do LADSPA style processing
> (applying all DSP processing in sequence) the compiler uses SIMD
> and optimization of the processing loops and is therefore faster
I doubt it; I've not come across a compiler that can generate halfway decent
SIMD instructions - including the Intel one.
NB you can still use SIMD instructions in blockless code, you just do it
across the input channels rather than along them.
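For example (rough SSE sketch, names made up, purely illustrative):
put one voice in each vector lane, so four voices advance by one
sample per iteration:

/* Illustrative only: amplitude modulation on four voices at once,
   one voice per SSE lane. */
#include <xmmintrin.h>

void amp_mod_4voices(const float *in,   /* one sample from each of 4 voices */
                     const float *gain, /* per-voice gain                    */
                     float *out)
{
    __m128 x = _mm_loadu_ps(in);
    __m128 g = _mm_loadu_ps(gain);
    _mm_storeu_ps(out, _mm_mul_ps(x, g));
}

The per-voice state (filter memory etc.) just has to be kept in the
same lane layout.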
> But OTOH the blockless processing has the advantage that
> things are not moved around much in the cache.
> The output value of the first module is directly available as the
> input value of the next module without needing to move it to
> a temporary buffer or variable.
Yes, the only thing we can do is benchmark it.
> Ah you are using the concept of duration.
> Isn't it a bit redundant? Instead of using duration one can use
> duration-less RAMP events and just generate an event that sets
> the delta ramp value to zero when you want the ramp to stop.
If you just specify the delta, then the receiver is limited to linear
segments, as it can't second-guess the duration. This won't work well, e.g., for
envelopes. Some of the commercial guys (e.g. Cakewalk) are now evaluating
their envelope curves every sample anyway, so if we go for ramp events for
envelope curves we will be behind the state of the art. FWIW, Adobe use
1/4 audio rate controls, as it's convenient for SIMD processing (this came
out in some GMPI discussions).
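For example (rough sketch, my own code, assuming duration > 0): with
target and duration known, the receiver can render the segment with
whatever shape it likes, say an exponential approach, whereas a bare
per-sample delta pins it to a straight line:

/* Illustrative only: render one control segment exponentially, which
   is possible when target and duration are known up front.
   Assumes duration > 0. */
#include <math.h>

void render_exp_segment(float *out, unsigned duration,
                        float current, float target)
{
    /* smoothing factor chosen so we cover ~99% of the distance
       within 'duration' frames */
    float a = 1.0f - expf(-5.0f / (float)duration);
    unsigned i;
    for (i = 0; i < duration; i++) {
        current += a * (target - current);
        out[i] = current;
    }
}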
- Steve
|