From: Christian S. <chr...@ep...> - 2003-09-12 22:50:49
|
Hi everybody! I now uploaded an UML diagram for the Gigasampler library. For detailed descriptions about the class attributes have a look at the header files (gig.h, dls.h) or the API documentation. http://stud.fh-heilbronn.de/~cschoene/projects/libgig/ Josh, I used Ruben's header files as a notepad as I decoded new Gigasampler specific fields and corrected old ones. Let me know if you're interested in them! CU Christian |
|
From: Ismael V. T. <is...@sa...> - 2003-09-12 20:24:51
|
On Friday, 12 September 2003, at 18:27, "Mr.Freeze" wrote: > Would you then be kind enough to provide some resource links so I can > educate myself a bit about music software development, with an eye to > getting involved? You have some tutorials at www.alsa-project.org but, talking about Linux, the point is having access to the source code for all the already existing applications. Regards, Ismael -- "Tout fourmille de commentaries; d'auteurs il en est grande cherté" |
|
From: \\\Mr.Freeze\\\ <the...@fr...> - 2003-09-12 16:30:28
|
Heya, I'm afraid I may not be very helpful for the moment, as I've only been learning C/C++ for a year now and I'm only starting to get my hands on Linux-assisted music... Would you then be kind enough to provide some resource links so I can educate myself a bit about music software development, with an eye to getting involved? I was wondering about one point: LinuxSampler is bound to import lots of existing proprietary sample formats, but will it get its own? I've not yet seen any leading Linux multisample format, so which one should I use to keep mine? If one were planned, what kind of specifications would it feature? - packed like the *.gig format, or extracted like Kontakt's wiser format? - an editable or encrypted map file? I'm eager to help develop or test it! ++ Mr.Freeze |
|
From: Christian S. <chr...@ep...> - 2003-09-06 20:02:11
|
Hi Guys! I just finished a preliminary version of libgig: http://stud.fh-heilbronn.de/~cschoene/projects/libgig/ The library consists of three parts: - RIFF classes to parse and access arbitrary RIFF files - DLS classes using the RIFF classes to provide abstract access to DLS Level 1 and 2 files - gig classes based on the DLS classes to access Gigasampler .gig files It already comes with a couple of console tools / demo apps (e.g. to extract samples from a .gig file or to print out the RIFF tree of an arbitrary RIFF file). I still have to decode some Gigasampler chunks and clean up everything, but the basics are finished and working. So far I couldn't confirm everything that Ruben has already decoded, so I had to recheck all Gigasampler chunks, which is really time consuming. I will create a UML diagram probably next week and add support for compressed files. CU |
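As a rough illustration of how the three layers described above might stack up in client code, here is a hypothetical C++ sketch; the class and member names (RIFF::File, gig::File, GetFirstSample() and so on) are assumptions made for illustration and may not match the preliminary library's actual interface:

#include <iostream>
#include "gig.h"   // the gig classes sit on top of the DLS and RIFF layers

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: giginfo <file.gig>" << std::endl;
        return 1;
    }
    RIFF::File riff(argv[1]);      // layer 1: generic RIFF parsing
    gig::File  file(&riff);        // layer 3: Gigasampler view on top of DLS
    // Walk the samples and print a few basic properties.
    for (gig::Sample* s = file.GetFirstSample(); s; s = file.GetNextSample()) {
        std::cout << s->pInfo->Name << ": "
                  << s->SamplesPerSecond << " Hz, "
                  << s->Channels << " channel(s)" << std::endl;
    }
    return 0;
}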
|
From: <be...@ga...> - 2003-08-26 16:42:31
|
Christian Schoenebeck <chr...@ep...> writes: > Hi! > > Made some updates to libakai: > > - added SetPos() and Read() methods to AkaiSample class for disk streaming Excellent! (The first RAM sampler module will load everything into RAM for simplicity reasons, but nothing speaks against streaming AKAI samples too ;-) ) > - added WriteImage() method to DiskImage class to extract Akai track to HD This will be very handy for sample editors. > - DiskImage class now capable of reading images from regular files > - and last but not least a bugfix in the main Read method Very useful too, because you can just cat /dev/cdrom >file.img on your HD and forget about the CD. > > http://stud.fh-heilbronn.de/~cschoene/projects/libakai/ > > Benno, the Read() method is possibly still suboptimal, but only a bit. The > > only thing left for me is to clean up the code, but I won't spend time on > that now as I think it's more important for me to finish the Gigasampler > library. Yep! > > So, now it's your turn... :) Yes ... die letzten sind immer die besten ("the last are always the best"), oder wie sagt man das ;-) (use Babelfish for non-Germans ;-) ) Anyway, the goal is to use this lib for implementing the first RAM sampler module that is generated by the synth compiler. Thanks again for those precious libs! cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: Christian S. <chr...@ep...> - 2003-08-25 19:13:26
|
Hi! Made some updates to libakai: - added SetPos() and Read() methods to AkaiSample class for disk streaming - added WriteImage() method to DiskImage class to extract Akai track to HD - DiskImage class now capable of reading images from regular files - and last but not least a bugfix in the main Read method http://stud.fh-heilbronn.de/~cschoene/projects/libakai/ Benno, the Read() method is possibly still suboptimal, but only a bit. The only thing left for me is to clean up the code, but I won't spend time on that now as I think it's more important for me to finish the Gigasampler library. So, now it's your turn... :) CU Christian |
|
From: Ismael V. T. <is...@sa...> - 2003-08-23 10:36:34
|
I subscribed to this mailing list after an IRC talk with benno and others (my nick there is `hugobass'). My regards to all the people involved in the linuxsampler project. Don't hesitate to ask me for help if it fits my capabilities. Regards, Ismael -- "Tout fourmille de commentaries; d'auteurs il en est grande cherté" |
|
From: Benno S. <be...@ga...> - 2003-08-14 17:00:33
|
Christian Henz <ch...@gm...> writes: > > Ok I agree with you when we consider decompression of compressed formats. > > But (being an efficiency obsessed person by definition) I just wanted to > make > sure that the disk reading operations would be as fast as possible. > > For example in evo I avoid any copies and temporary buffers: > > the data from disk is read directly into a large ring buffer where > > the audio thread directly resamples the data before putting it into > > the output buffer (which is sent to the sound card). > > > > But for the real thing you'll need one buffer per voice anyway, right? Not necessarily: for example, for a MIDI device you usually have an instrument assigned to each channel, thus in the presence of polyphonic instruments many voices get downmixed to the same channel. This means that if you do it right you can just mix the current sample output of each voice to the channel buffer. So yes, my former statement was a bit incorrect: you have more than one output buffer (mixdown buffers), which at the last stage get mixed down to the final soundcard audio out buffer. > > > So as long as your library gives us an efficient > > read(buffer, numsamples) function > > that in case of uncompressed samples just calls POSIX read() then I have > > no objection to letting your library handle the I/O. > > Well, at some point the decoding/de-interleaving/resampling has to happen, so > why not in the respective loaders? Then you could have a unified concept of a > sample. Instead of reading raw data, the voices would call something like > sample->read(int channel, sample_t *voice_buffer, int num_samples) OK, but the library needs to supply uncompressed samples (linear blocks of samples) because the engine has to handle complex ring buffer issues, code that cannot be put within loaders. But the abstraction read(buffer, numsamples) (which internally does decompression, deinterleaving etc.) is fine, since it allows you to read any sample format you like. > > I've also been thinking about the case where you've got several voices reading > from one (streamed) sample (like when you map a sample across several notes > or for polyphony on the same note). The sample start would of course be > buffered globally for all 'instances' of the streamed sample, but you'd also > need a mechanism of reading/buffering for each instance individually. I thought about the same, but I came to the conclusion that if it is a large sample (eg let's say a sample that is 10-20MB long) you need to stream from different parts of the file anyway (you cannot keep MBs worth of cached sample just in the hope that it will be accessed at different positions simultaneously), so your method does not pay off. If the samples are short or the notes are short, then most of the time only the RAM-cached part of the sample will be accessed, requiring no disk accesses anyway, or if there are any disk accesses the Linux filesystem cache will cache the data for you, thus disk accesses will be mostly avoided. (Just play around with evo by loading large samples and hitting keys on your MIDI keyboard to convince yourself that this approach works.) cheers, Benno. ------------------------------------------------- This mail sent through http://www.gardena.net |
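The read(buffer, numsamples) abstraction being agreed on here could be expressed roughly as the following C++ sketch; the interface and names (SampleSource, SetPos, Read) are illustrative assumptions, not code from libgig or the engine, and are only meant to show how decompression and deinterleaving can hide behind one uniform call:

#include <cstddef>
#include <cstdint>

typedef int16_t sample_t;

// Hypothetical uniform front end for any sample format.
class SampleSource {
public:
    virtual ~SampleSource() {}
    // Seek to an absolute frame position within the sample data.
    virtual void SetPos(size_t frame) = 0;
    // Fill 'buffer' with up to 'frames' frames of uncompressed, deinterleaved
    // audio and return the number of frames actually delivered. A plain PCM
    // implementation can forward to POSIX read()/pread(); a compressed .gig
    // stream would decode into 'buffer' internally.
    virtual size_t Read(sample_t* buffer, size_t frames) = 0;
};

The engine then refills its ring buffers through this one call and never needs to know whether the underlying data was compressed.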
|
From: Benno S. <be...@ga...> - 2003-08-11 21:19:43
|
Christian Schoenebeck <chr...@ep...> writes: > > Hmmm, not sure if that's a good idea either :) > Because you may have to e.g. uncompress the wave stream (not with Akai, but > at > least with gig files). Of course you can also do that within the engine, but > > IMO that wouldn't be that clean and I just thought it would be some kind of > code overhead to let the engine maintain sample specific infos. Ok I agree with you when we consider decompression of compressed formats. But (being an efficiency obsessed person by definition) I just wanted to make sure that the disk reading operations would be as fast as possible. For example in evo I avoid any copies and temporary buffers: the data from disk is read directly into a large ring buffer where the audio thread directly resamples the data before putting it into the output buffer (which is sent to the sound card). So as long as your library gives us an efficient read(buffer, numsamples) function that in case of uncompressed samples just calls POSIX read() then I have no objection to letting your library handle the I/O. Keep in mind we need lseek() too, since the engine fills all the ringbuffers in chunks (eg max 128k samples read per read() call), thus we need to lseek() after each ringbuffer gets refilled. (I use sorting to ensure that you refill the "least filled" buffer first; this is needed because the sample rate of each voice can vary dynamically, and it actually works quite well.) For compressed data (eg GIG compressed WAVs or MP3) I think we have no choice other than to read a chunk of data into a temporary buffer and then decompress directly into the buffer supplied by the engine to the library's read() call. But keep in mind that if you need to manage lots of disk streams (100 and more) you have to read at least 64k-128k samples at a time in order to maximize disk bandwidth (otherwise you end up seeking around too much). So when you design those decompression routines you need to take this into account: give the engine the possibility to tune these values (max read size), since we will probably need to change these values dynamically in certain situations (eg if all stream ringbuffers are only very little filled we risk an audio dropout, thus it is better to fill them up quickly with a bit of data rather than filling one single buffer with a large amount of data while the other buffers underrun). Anyway, triggering a large library of MP3s with sub-5msec latency and streaming them from disk will be a bonus we will get for free, since we abstract the decompression within the sample loading lib (as you said, the engine will know nothing about the compression of the sample). > > Regarding the amount of cached samples: It doesn't make sense to me not to > cache all sample heads, at least all those samples which belong to a program Ok, but probably almost all samples belong to a program, so this kind of optimization would not buy us too much. Anyway, if you implement it, even better! BTW: make a function where the engine can specify the amount of samples that are cached in RAM. That way the user (or the engine itself) can tune it to best suit the hardware characteristics / performance needs. (eg if your box is low on RAM make the cache value smaller; if you have plenty, increase the value to give better stability and to alleviate the stress put on the disk). cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |
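A minimal sketch of the "refill the least-filled ring buffer first" policy described above; the types and names here are illustrative assumptions rather than actual LinuxSampler code:

#include <algorithm>
#include <vector>
#include <cstddef>

struct DiskStream {
    size_t framesBuffered;   // frames currently in this voice's ring buffer
    size_t ringSize;         // total capacity of the ring buffer
    // ... file handle, read position, etc.
};

// Refill the emptiest streams first, so no voice underruns while another
// voice hogs the disk with one huge read. maxChunkFrames is the tunable
// per-read() limit mentioned above (e.g. 64k-128k samples).
void RefillStreams(std::vector<DiskStream*>& streams, size_t maxChunkFrames) {
    std::sort(streams.begin(), streams.end(),
              [](const DiskStream* a, const DiskStream* b) {
                  return a->framesBuffered < b->framesBuffered;
              });
    for (DiskStream* s : streams) {
        size_t want  = s->ringSize - s->framesBuffered;
        size_t chunk = std::min(want, maxChunkFrames);
        if (chunk == 0) continue;
        // An lseek()/read() (or a SampleSource::Read()) would happen here,
        // appending 'chunk' frames to the ring buffer.
        s->framesBuffered += chunk;
    }
}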
|
From: Christian S. <chr...@ep...> - 2003-08-11 15:38:09
|
On Monday, 11 August 2003 at 15:44, Benno Senoner wrote: > Christian Schoenebeck <chr...@ep...> writes: > > Hi! > > > > I thought about adding mmap() support as a compile time option to the > > Akai and > > Giga libs. I would also recommend to place the sample head cache (for the > > first couple of sample chunks to allow instant triggering of the samples) > > directly into those libs, so the engine doesn't have to care about > > maintaining such things. What do you think? > > Hmm not sure if this would be a good idea (that the lib handles the caching > of the first samples). > The fact is that the playback engine will need to know about the > cached size, offset where to switch from ram to disk based playback etc > anyway, plus the engine could decide to cache different amounts of samples > depending on some factors thus I think it is better if your lib just > interprets all chunks and returns data structures describing the data > contained in them. > As for the chunks containing the actual samples it is sufficient to > export the offset (relative to the file) where they begin and the total > length. Hmmm, not sure if that's a good idea either :) Because you may have to e.g. uncompress the wave stream (not with Akai, but at least with gig files). Of course you can also do that within the engine, but IMO that wouldn't be that clean and I just thought it would be some kind of code overhead to let the engine maintain sample specific infos. Regarding the amount of cached samples: It doesn't make sense to me not to cache all sample heads, at least all those samples which belong to a program (/instrument) that's currently in use have to be cached. Or maybe I just don't know what you're getting at. Christian |
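As a rough picture of the sample-head caching being debated here (keep the first chunk of every sample in RAM so a note can start instantly while the disk stream catches up), the following sketch may help; the struct, function and size choices are assumptions, not taken from either library:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

typedef int16_t sample_t;

// First N frames of a sample held permanently in RAM (the "sample head"),
// plus the position where disk streaming has to take over.
struct SampleHead {
    std::vector<sample_t> frames;
    size_t switchFrame;   // frame index at which streaming continues
};

// Build the head cache for one sample whose raw 16-bit mono data starts at
// 'dataOffset' bytes into the file. Compressed formats would decode here
// instead of reading raw bytes.
SampleHead CacheHead(FILE* f, long dataOffset, size_t headFrames) {
    SampleHead head;
    head.frames.resize(headFrames);
    fseek(f, dataOffset, SEEK_SET);
    size_t got = fread(head.frames.data(), sizeof(sample_t), headFrames, f);
    head.frames.resize(got);
    head.switchFrame = got;
    return head;
}

Whether this cache lives inside the loader library or inside the engine is exactly the question under discussion.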
|
From: Benno S. <be...@ga...> - 2003-08-11 13:44:08
|
Christian Schoenebeck <chr...@ep...> writes:
> Hi!
>
> I thought about adding mmap() support as a compile time option to the Akai
> and
> Giga libs. I would also recommend to place the sample head cache (for the
> first couple of sample chunks to allow instant triggering of the samples)
> directly into those libs, so the engine doesn't have to care about
> maintaining such things. What do you think?
>
Hmm not sure if this would be a good idea (that the lib handles the caching
of the first samples).
The fact is that the playback engine will need to know about the
cached size, offset where to switch from ram to disk based playback etc
anyway, plus the engine could decide to cache different amounts of samples
depending on some factors thus I think it is better if your lib just
interprets all chunks and returns data structures describing the data
contained in them.
As for the chunks containing the actual samples it is sufficient to
export the offset (relative to the file) where they begin and the total
length.
Basically linuxsampler should do the following
handle=libgig->parsefile("file.gig");
now the engine reads the structs exported by libgig
traverse_datastructs_of_parsed_chunks(handle);
// file can be closed since the engine will do it's own filehandling
// (could be via read() or mmap() thus IMHO it is better to let
// libgig close its file handles
libgig->closefile(handle);
fd=engine->open("file.gig"); (regular posix open() call)
// the engine now has all data it needs to play the sample
// sample attributes data structs and sample offsets within files stored
// in GIG chunks accessible via libgig->handle...
sampler_playback_main_loop();
Is this OK for you, Christian?
cheers,
Benno
http://linuxsampler.sourceforge.net
-------------------------------------------------
This mail sent through http://www.gardena.net
|
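Benno's parse-then-reopen pseudocode above could translate into C++ roughly as follows; everything here (the parse function, the SampleInfo struct) is an illustrative assumption about an API that did not exist yet at the time, not actual libgig code:

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>
#include <string>
#include <vector>

// What the loader library hands back after parsing: no audio data, just
// descriptions of where the audio lives inside the file.
struct SampleInfo {
    std::string name;
    off_t  dataOffset;   // byte offset of the sample data within the file
    size_t dataLength;   // length of the sample data in bytes
    // ... loop points, root key, envelope parameters, etc.
};

// Stand-in for the loader library's parse step; a real implementation would
// walk the RIFF/DLS chunks and fill in the offsets and lengths.
static std::vector<SampleInfo> ParseGigFile(const std::string& /*path*/) {
    return std::vector<SampleInfo>();
}

int main() {
    // 1) Let the loader library parse the chunks and build its structures.
    std::vector<SampleInfo> samples = ParseGigFile("file.gig");
    // 2) The library's own file handle can now be closed; from here on the
    //    engine does its own file handling (read() or mmap()).
    int fd = open("file.gig", O_RDONLY);   // 3) plain POSIX open()
    if (fd < 0) return 1;
    // 4) The engine streams audio using the offsets/lengths from step 1,
    //    e.g. sampler_playback_main_loop(fd, samples);
    close(fd);
    return 0;
}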
|
From: Josh G. <jo...@re...> - 2003-08-11 13:00:57
|
I sent this a few days ago but it never made it to some lists; some mail problems maybe? Looks like the linuxsampler mailing list archives are broken too, so if you receive a duplicate of this email, please excuse it. ---- Well, after a few days of hacking, libInstPatch now loads GigaSampler files and will synthesize to some extent using FluidSynth. Still many things missing from the synthesis conversion to SoundFont of course, but it's nice to at least hear something :) I now have a better idea of what GigaSampler files are like, and I don't like them. Seems like they should have just used DLS 2, which is what they walked all over anyways with their file format. It's pretty simple to convert DLS dimensions to regular DLS regions though. If anyone is interested in getting the development branch, it is now somewhat functional. Here are some tips: - Drag and drop icons to the different panes to set what interface is there (and restart Swami when you take out the tree view, since it doesn't have an icon yet) - Ooo, pretty sample viewer (compared to the old one); use the mouse wheel to zoom in/out, no other method of zoom at the moment, now that I think about it. Also a vertical (amplitude) zoom. - Keyboard scales too. - Load up some GigaSampler files if you have them and play the keyboard with the mouse (no virtual computer key mapping yet) - Have fun and let me know if you actually try it. Remember to check out the swami-1-0 branch of CVS, not the head branch! cvs -z3 -d:pserver:ano...@cv...:/cvsroot/swami co -r swami-1-0 swami Have a look at the Swami download page for more details. Cheers. Josh Green |
|
From: Christian S. <chr...@ep...> - 2003-08-11 12:21:51
|
Hi! I thought about adding mmap() support as a compile time option to the Akai and Giga libs. I would also recommend to place the sample head cache (for the first couple of sample chunks to allow instant triggering of the samples) directly into those libs, so the engine doesn't have to care about maintaining such things. What do you think? Regarding the Gigasampler lib: I have only finished the RIFF classes so far, as I'm waiting for the DLS2 specs. I always put the current version here: http://stud.fh-heilbronn.de/~cschoene/projects/libgig CU Christian |
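The compile-time mmap() option proposed here might look something like the following sketch; the HAVE_MMAP macro and the wrapper function are assumptions for illustration only:

#include <cstddef>
#include <cstdlib>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#ifdef HAVE_MMAP
#include <sys/mman.h>
#endif

// Map (or load) a whole file and return a read-only pointer to its bytes.
// Unmapping/freeing is omitted for brevity in this sketch.
const void* MapFile(const char* path, size_t* lengthOut) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
    *lengthOut = static_cast<size_t>(st.st_size);
#ifdef HAVE_MMAP
    void* p = mmap(nullptr, *lengthOut, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);   // the mapping stays valid after closing the descriptor
    return (p == MAP_FAILED) ? nullptr : p;
#else
    void* p = malloc(*lengthOut);
    ssize_t got = p ? read(fd, p, *lengthOut) : -1;
    close(fd);
    if (got != static_cast<ssize_t>(*lengthOut)) { free(p); return nullptr; }
    return p;
#endif
}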
|
From: Benno S. <be...@ga...> - 2003-08-10 22:44:30
|
Hi Florian, I fully agree with what you said.
Know what? I too do not love modular synths too much
since often (if you are not a good user of modular synths)
you end up wasting your time trying to compose cool sounds instead of
making real music.
BUT....
This modular GUI is only to compose the "load&play" engines.
Yes, it seems a bit of a strange approach, and many suggested "why not
write a hardcoded load&play sampler engine instead of going through this
painful modular route" ?
Well, the goal is to have a sampler that eats any format and implements
as many engines as possible.
But hardcoding multiple engines like AKAI, GIG, SF2 is a bit of a PITA
and you end up writing duplicate code, algorithms etc not to mention
that once the engine is coded it is painful to add new functionality.
The first milestone is to implement enough basic building blocks
(modules) so that you can build an AKAI S1000/2000 load and play engine.
Once the dsp network is assembled you can just forget about the modules
etc. The downcompiler will produce an akaiengine.so DLL.
Now to complete your load&play sampler you just
need to write an associated GUI that exports only the stuff you like.
Eg you can load an arbitrary number of instruments (eg via a file dialog)
assign them to the desired MIDI port/channel, tweak the instrument's parameters
(filter values, envelope data etc).
At this point the load&play sampler is done.
The difference between a hardcoded load&play sampler and that one is almost
zero.
But we have the advantage that we can add new file formats and new engines
(good .GIG playback is one of the most important goals after AKAI support
is complete).
As said the modular GUI will be here only for experts or for those that
like to experiment with building new sounds, but the majority of the userbase
will probably just use the load&play sampler.
I think this will satisfy everyone, hopefully you too ;-)
Why did I start with the coding of the modular GUI?
Simple: it is needed for building the load&play engines.
basically the roadmap is the following:
finishing the modular GUI
implementing the downcompiler (takes the module description and generates
an executable sampler engine as a .so (DLL)).
implementing sample library loader modules (akai format, gig format etc).
GUIs for the various engines.
To produce quicker results I'd suggest to leave out the more complex stuff
for now like disk based sampling (thus the gig format etc) while trying
to get a good working AKAI engine.
For Steve H.: I thought about your method of pasting sources:
for(i=0;i<samples_per_block;i++) {
do_amplitude_modulator(input, tmp);
do_filter(tmp, output);
... etc ...
}
That way we could even create low latency (1 sample) feedback loops
without much effort (since we operate "blockless" within the loop).
I fear LS will get many modular synth capabilities for free ;-)
(not a bad thing for those that want such stuff, but the goal
will always be sampler engines)
cheers,
Benno
Florian Berger <fl...@ar...> writes:
> Hi everybody,
>
>
> First I should tell some things about me and my motivation: I´m a
> musician, not a programmer, and I want to use Free Software for my
> music. I was pointed to this project while looking for a Linux Sampler
> at the LAU list. I need a (virtual) sampler which is almost "invisible"
> in the process of creating music, e.g. which is easy and
> straightforward to use, relieable and fast.
>
> Well, Benno, thanks for your work, but in my mind this starts to look
> like just _another_ virtual modular thing. There is already some stuff
> like this walking around in the Linux Audio world, as far as I know.
>
> I used to hope (and I still do :-) ) that Linuxsampler would more look
> like that:
>
> http://www.creamware.de/de/products/Volkszampler/defaultpics.asp
>
> http://www.creamware.de/de/products/Volkszampler/default.asp
> (this one in German, sorry)
>
> I think in the "where should it go"-discussion you just missed the
> point which role Linuxsampler should play in the creative action.
> A modular system where the artist has to build the instruments/patches
> "from the bone" naturally consumes more time and power but leaves room
> for very individual sounds and experimenting.
> A sampler more dedicated to a "load&play"-philosophy lacks this kind of
> freedom, but the artist can start composing immediately (which I
> prefer).
>
> Well. It´s up to you programmers, and this is just my humble opinion
> :-). Maybe both ways will be possible in the final GUI. Thanks for your
> work so far and keep it up!
>
> Regards,
>
> Florian Berger, Leipzig, Germany
>
>
-------------------------------------------------
This mail sent through http://www.gardena.net
|
|
From: Simon J. <sje...@bl...> - 2003-08-10 04:22:52
|
David Olofson wrote: >>[Benno:]Now if we assume we do all blockless processing eg the >>dsp compiler generates one giant equation for each dsp network >>(instrument). output = func(input1,input2,....) >> >>Not sure we gain in performance compared to the block based >>processing where we apply all operations sequentially on a buffer >>(filters, envelopes etc) like they were ladspa modules but without >>calling external modules but instead "pasting" their sources in >>sequence without function calls etc. >> >> > >I suspect it is *always* a performance loss, except in a few special >cases and/or with very small nets and a good optimizing "compiler". > You certainly need an optimising compiler because the gains, if present, would be achieved by register optimisations. I was getting better performance from the paste-together-one-big-function approach and had thought that it "just was" better (provided the function fit in the cache, of course), but there's been some disagreement on the matter so I'm going over it again. >Audio rate controls *are* the real answer (except for some special >cases, perhaps; audio rate text messages, anyone? ;-), but it's still >a bit on the expensive side on current hardware. (Filters have to >recalculate coefficients, or at least check the input, every sample >frame, for example.) > Agreed, it may be very expensive to send a-rate control to certain inputs of certain modules. OTOH that's a limitation of those particular inputs, on those particular modules. Given that we're going to downcompile, it would maybe be possible to deal with such inputs "surgically" by (for example) generating code which uses k-rate connections for just those inputs. Simon Jenkins (Bristol, UK) |
|
From: David O. <da...@ol...> - 2003-08-09 22:31:08
|
On Saturday 09 August 2003 19.01, Benno Senoner wrote:
[...]
> > > The other philosophy is not to adopt the "everything is a CV",
> > > use typed ports and use time stamped scheduled events.
> >
> > Another way of thinking about it is to view streams of control
> > ramp events as "structured audio data". It allows for various
> > optimizations for low bandwidth data, but still doesn't have any
> > absolute bandwidth limit apart from the audio sample rate. (And
> > not even that, if you use a different unit for event timestamps -
> > but that's probably not quite as easy as it may sound.)
>
> So at this time for linuxsampler would you advocate an event
> based approach or continuous control stream (that runs at a
> fraction of the samplerate) ?
I don't like "control rate streams" and the like very much at all.=20
They're only slightly easier to deal with than timestamped events,=20
while they restrict timing accuracy in a way that's not musically=20
logical. They get easier to deal with if you fix the C-rate at=20
compile time, but then you have a quantization that may differ=20
between installed systems - annoying if you want to move=20
sounds/projects/whatever around.
I've had enough trouble with h/w synths that have these kind of=20
restrictions (MCUs and/or IRQ driven EGs + LFOs and whatnot) that I=20
don't want to see anything like it in any serious synth or sampler=20
again. Envelopes and stuff *have* to be sample accurate for serious=20
sound programming. (That is, anything beyond applying slow filter=20
sweeps to samples. Worse than sample accurate timing essentially=20
rules out virtual analog synthesis, at least when it comes to=20
percussive sounds.)
> As far as I understood it from reading your mail it seems that you
> agree that on current machines (see filters that need to
> recalculate coefficients etc) it makes sense to use an event based
> system.
Yes, that seems to be the best compromise at the moment. Maybe that
will change some time, but I'm not sure. *If* audio rate controls and=20
blockless processing become state-of-the-art for native processing,=20
it's only because CPUs are so fast that the cost is acceptable in=20
relation to the gains. (Audio rate control data everywhere and=20
slightly simpler DSP code, I guess...)
[...]
> > Now, that is apparently impossible to implement that on some
> > platforms (poor RT scheduling), but some people using broken OSes
> > is no argument for broken API designs, IMNSHO...
>
> Ok but even if there is a jitter of a few samples it is much better
> than having an event jitter equivalent to the audio fragment size.
Sure - but it seems that on Windoze, the higher priority MIDI thread=20
will pretty much be quantized to audio block granularity by the=20
scheduler, so the results are no better than polling MIDI once per=20
audio block.
Anyway, that's not our problem - and it's not an issue with h/w=20
timestamping MIDI interfaces, since they do the job properly=20
regardless of OS RT performance.
> It will be impossible for the user to notice that the midi pitchbend
> event was scheduled a few usecs too late compared to the ideal time.
> Plus as said it will work with relatively large audio fragsizes
> too.
Yes. Or on Win32; maybe. ;-) I don't have any first hand experience=20
with pure Win32. The last time I messed with it, it seemed to "sort=20
of" work as intended, but that was on Win95 with Win16 code running=20
in IRQ context, and that's a different story entirely. You can't do=20
that on Win32 without some serious Deep Hack Mode sessions, AFAIK.
Anyway, who cares!? ;-) It works on Linux, QNX, BeOS, Mac OS X, Irix=20
and probably some other platforms.
The only reason I brought it up is that Win32 failing to support this=20
kind of timestamped real time input was seriously suggested as an=20
argument to drop timestamped events in GMPI. Of course, that was=20
overruled, as the kind of plugins GMPI is meant for are usually=20
driven from the host's integrated sequencer anyway, and there's no=20
point in having two event systems; one with timestamps and one=20
without.
> > > With the streamed approach we would need some scheduling of
> > > MIDI events too thus we would probably need to create a module
> > > that waits N samples (control samples) and then emits the
> > > event. So basically we end up in a timestamped event scenario
> > > too.
> >
> > Or the usual approach; MIDI is processed once per block and
> > quantized to block boundaries...
>
> I don't like that , it might work ok for very small fragsizes eg
> 32-64 samples / block but if you go up to let's say 512 - 1024
> timing of MIDI events will suck badly.
Right. I don't like it either, but suggested it only for completeness.=20
(See above, on Win32 etc; they do this for RT MIDI input, because it=20
hardly gets any better anyway.)
> > > Not sure we gain in performance compared to the block based
> > > processing where we apply all operations sequentially on a
> > > buffer (filters, envelopes etc) like they were ladspa modules
> > > but without calling external modules but instead "pasting"
> > > their sources in sequence without function calls etc.
> >
> > I suspect it is *always* a performance loss, except in a few
> > special cases and/or with very small nets and a good optimizing
> > "compiler".
>
> So it seems that the best compromise is to process audio in blocks
> but to perform all dsp operations relative to an output sample
> (relative to a voice) in one rush.
Sometimes... I think this depends on the architecture (# of registers,=20
cache behavior etc) and on the algorithm. For most simple algorithms,=20
it's the memory access pattern that determines how things should be=20
done.
[...]
> it would be faster to do
>
> for(i=0;i<256;i++) {
> output_sample[i]=do_filter(do_amplitude_modulation(input_sample[i]));
> }
>
> right ?
Maybe - if you still get a properly optimized inner loop that way. Two=20
optimized inner loops with a hot buffer in between is probably faster=20
than one that isn't properly optimized.
> While for pure audio processing the approach is quite
> straightforward, when we take envelope generators etc into account
> we must inject code that checks the current timestamp (an if()
> statement and then modifies the right values accordingly to the
> events pending in the queue (or autogenerated events).
It shouldn't make much of a difference if the "integration" is done=20
properly. In Audiality, I just wrap the DSP loop inside the event=20
decoding loop (the normal, efficient and obvious way of handling=20
timestamped events), so there's only event checking overhead per=20
event queue (normally one per "unit") and per block - not per sample.=20
Code compiled from multiple plugins should be structured the same=20
way, and then it doesn't matter if some of these plugin do various=20
forms of processing that doesn't fit the "once-per-sample" model.
[...]
> sample_ptr_and_len: pointer to a sample stored in RAM with
> associated len
>
>
> attack looping: a list of looping points:
> (position_to_jump, loop_len, number_of_repeats)
>
> release looping: same structure as above but it is used when
> the sampler module goes into release phase.
> Basically when you release a key if the sample has loops after
> the current loop comes to the end pos you switch to the
> release_looping list.
This is where it goes wrong, I think. These things are not musical=20
events, but rather "self triggered" chains of events, closely related=20
to the audio data and the inner workings of the sampler. You can't=20
really control this from a sequencer in a useful way, because it=20
requires sub-sample accurate timing as well as intimate knowledge of=20
how the sampler handles pitch bend, and how accurate it's=20
pitch/position tracking is. You'll most likely get small glitches=20
everywhere for no obvious reason if you try this approach. Lots of=20
fun! ;-)
Of course, you'll need some nice way of passing this stuff as=20
*parameters" to the sampler - but that goes for the audio data as=20
well. If it can't be represented as plain values and text controls in=20
any useful way, it's better handled as "raw data" controls; ie just=20
blocks of private data. (Somewhat like MIDI SysEx dumps.)
[...]
> So I'd be interested how the RAM sampler module described above
> could be made to work with only the RAMP event.
You just handle the private stuff as "raw data", and there's no=20
problem. :-) Controls are meant for things that can actually be=20
driven by a sequencer, or some other event generator or processor.
Also note that a control event is just a command that tells the plugin=20
to change a control to a specified value at a specified time. It's=20
not to be viewed as an object passed to the plugin, to be added to=20
some list or anything like that.
> BTW: you said RAMP with value = 0 means set a value.
No, RAMP (<value>, <duration>), where <duration> is 0. The value and
timestamp fields are always valid.
> But what do you set exactly to 0 , the timestamp ?
The 'duration' field. (Which BTW, is completely ignored by controls=20
that support only SET operations.)
> this would not be ideal since 0 is a legitimate value.
> It would be better to use -1 or something alike.
No, -1 (or rather, whatever it is in unsigned format) is a perfectly=20
valid timestamp as well, at least in Audiality. It uses 16 bit=20
timestamps that wrap pretty frequently, which is just fine, since=20
events are only for the current buffer anyway.
> OTOH this would require an additional if() statement
> (to check it it is a regular ramp or a set statement) and it could
> possibly slow down things a bit.
This cannot be avoided anyway. You'll have to put that if() somewhere=20
anyway, to avoid div-by-zero and other nasty issues. Consider this=20
code from Audiality:
(in the envelope generator:)
    case PES_DELAY:
        duration = S2S(p->param[APP_ENV_T1]);
        target = p->param[APP_ENV_L1];
        clos->env_state = PES_ATTACK;
        break;
This is the switch from DELAY to ATTACK. 'duration' goes into the=20
event's duration field, and may well be 0, either because the=20
APP_ENV_T1 parameter is very small, or because the sample rate is=20
low. (S2S() is a macro that converts seconds to samples, based on the=20
current system sample rate.) Since RAMP(target, 0) does exactly what=20
we want in that case, that's just fine. We leave the special case=20
handling to the receiver:
(in the voice mixer:)
    case VE_IRAMP:
        if(ev->arg2)
        {
            v->ic[ev->index].dv = ev->arg1 << RAMP_BITS;
            v->ic[ev->index].dv -= v->ic[ev->index].v;
            v->ic[ev->index].dv /= ev->arg2 + 1;
        }
        else
            v->ic[ev->index].v = ev->arg1 << RAMP_BITS;
        v->ic[ev->index].aimv = ev->arg1;
        v->ic[ev->index].aimt = ev->arg2 + s;
        break;
Here we calculate our internal ramping increments - or, if the=20
duration argument (ev->arg2) is 0, we just grab the target value=20
right away.
One would think that it would be appropriate to set .dv to 0 in the=20
SET case, but it's irrelevant, since what happens after the RAMP=20
duration is undefined, and it's illegal to leave a connected control=20
without input.
That is, the receiver will never end up in the "where do I go now?"=20
situation. In the case of Audiality, the instance is destroyed at the=20
very moment the RAMP event stream ends. In the case of XAP, there is=20
always the option of disconnecting the control, and in that case, the=20
disconnect() call would clear .dv as required to lock the control at=20
a fixed value. (Or the plugin will reconfigure itself internally to=20
remove the control entirely, or whatever...)
> My proposed ramping approach that consists of
> value_to_be_set, delta
> does not require an if and if you simply want to set a value
> you set delta = 0
Delta based ramping has the error build-up issue - but <value, delta>
tuples should be ok... The <target, duration> approach has the nice
side effects of
    1) telling the plugin for how long the (possibly
       approximated) ramp must behave nicely, and
    2) eliminating the requirement for the sender to
       know the exact current value of controls.
(The latter can make life easier for senders in some cases, but in the
case of Audiality - which uses fixed point controls - it's mostly
about avoiding error build-up drift without introducing clicks.)
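A minimal C++ sketch of the <target, duration> ramp semantics being discussed, where duration == 0 degenerates into a plain SET; this is an assumption-level illustration, not Audiality's or XAP's actual code:

#include <cstdint>

// One control on one voice, updated once per sample frame.
struct RampedControl {
    float value = 0.0f;   // current value
    float delta = 0.0f;   // per-frame increment
    uint32_t left = 0;    // frames remaining in the current ramp
};

// RAMP(target, duration): duration == 0 means "set immediately".
void HandleRampEvent(RampedControl& c, float target, uint32_t duration) {
    if (duration == 0) {
        c.value = target;
        c.delta = 0.0f;
        c.left = 0;
    } else {
        c.delta = (target - c.value) / static_cast<float>(duration);
        c.left = duration;
    }
}

// Called once per sample frame by the DSP loop.
inline float TickControl(RampedControl& c) {
    if (c.left) {
        c.value += c.delta;
        --c.left;
    }
    return c.value;
}

Because the sender names the target rather than only a delta, rounding errors cannot accumulate from one ramp to the next, which is the error build-up point made above.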
> But my approach has the disadvantage that if you want do to mosly
> ramping you have always to calculate value_to_be_set at each event
> and this could become not trivial if you do not track the values
> within the modules.
Actually, I think you'll have to keep track of what you're doing most=20
of the time anyway, so that's not a major one. The <target, duration>=20
approach doesn't have that requirement, though...
[...]
> blockless as referred to above by me (blockless = one single equation
> but processed in blocks), or blockless using another kind of
> approach ? (elaborate please)
By "blockless", I mean "no blocks", or rather "blocks of exactly one=20
sample frame". That is, no granularity below the audio rate, since=20
plugins are executed once per sample frame. C-rate =3D=3D A-rate, and=20
there's no need for timestamped events or anything like that, since=20
everything is sample accurate anyway.
> > That said, something that generates C code that's passed to a
> > good optimizing compiler might shift things around a bit,
> > especially now that there are compilers that automatically
> > generate SIMD code and stuff like that.
>
> The question is indeed if yo do LADSPA style processing
> (applying all DSP processing in sequence) the compiler uses SIMD
> and optimization of the processing loops and is therefore faster
> than calculating the result one single big equation at time which
> could possibly not take advantage of SIMD etc.
> But OTOH the blockless processing has the advantage that
> things are not moved around much in the cache.
> The output value of the first module is directly available as the
> input value of the next module without needing to move it to
> a temporary buffer or variable.
It's just that the cost of temporary buffers is very low, as long as=20
they all fit in the cache. Meanwhile, if you throw too many cache=20
abusing plugins into the same inner loop, you may end up with=20
something that thrashes the cache on a once-per-sample-frame basis...
[...]
> > > As said I dislike "everyting is a CV" a bit because you cannot
> > > do what I proposed:
[...]
> > I disagree to some extent - but this is a very complex subject.
> > Have you followed the XAP discussions? I think we pretty much
> > concluded that you can get away with "everything is a control",
> > only one event type (RAMP, where duration == 0 means SET) and a
> > few data types. That's what I'm using internally in Audiality,
> > and I'm not seing any problems with it.
>
> Ah you are using the concept of duration.
> Isn't it a bit redundant?
No - especially not if you consider that some plugins will only=20
*approximate* the linear ramps. When you do that, it becomes useful=20
to know where the sender intends the ramp to end, to come up with a=20
nice approximation that hits the start and end points dead on.
> Instead of using duration one can use
> duration-less RAMP events and just generate an event that sets
> the delta ramp value to zero when you want the ramp to stop.
Yes, but since it's not useful to "just let go" of a control anyway,=20
this doesn't make any difference. Each control receives a continous=20
stream of structured audio rate data, and that stream must be=20
maintained by the sender until the control is disconnected.
Of course, there are other ways of looking at it, but I don't really=20
see the point. Either you're connected, or you aren't. Analog CV=20
outputs don't have a high impedance "free" mode, AFAIK. ;-) (Or maybe=20
some weird variants do - but that would still be an extra feature, ie=20
"soft disconnect", rather than a property of CV control signals.)
> > > Basically in my model you cannot connect everything with
> > > everything (Steve says it it bad but I don't think so) but you
> > > can connect everything with "everything that makes sense to
> > > connect to".
> >
> > Well, you *can* convert back and forth, but it ain't free... You
> > can't have everything.
>
> Ok but converters will be the exception and not the rule:
One would hope so... That should be carefully considered when deciding=20
which controls use what format.
> for example the MIDI mapper module
>
> see the GUI screenshot message here:
> http://sourceforge.net/mailarchive/forum.php?thread_id=2841483&forum_id=12792
>
> acts as a proxy between the MIDI Input and the RAM sampler module.
> So it makes the right port types available.
> No converters are needed. It's all done internally in the best
> possible way without needless float to int conversions,
> interpreting pointers as floats and other "everything is a CV"
> oddities ;-)
*hehe*
Well, it gets harder when those modular synth dudes show up and want=20
to play around with everything. ;-)
[...]
> > Yes... In XAP, we tried to forget about the "argument bundling"
> > of MIDI, and just have plain controls. We came up with a nice and
> > clean design that can do everything that MIDI can, and then some,
> > still without any multiple argument events. (Well, events *have*
> > multiple arguments, but only one value argument - the others are
> > the timestamp and various addressing info.)
>
> Hmm I did not follow the XAP discussions (I was overloaded during
> that time as usual ;-) ) but can you briefly explain how this XAP
> model would fit the model where the MIDI IN module talks to the
> MIDI mapper which in turns talks to the RAM sampler.
Well, the MIDI IN module would map everything to XAP Instrument
Control events, for starters.
    NoteOn(Ch, Pitch, Vel);
would become something like
    vid = Pitch;                //Use Pitch for voice ID, like MIDI...
    ALLOC_VOICE(vid);           //Tell the receiver we need a voice for 'vid'
    VCTRL(vid, VELOCITY, Vel);  //Set velocity
    VCTRL(vid, PITCH, Pitch);   //Set pitch
    VCTRL(vid, VOICE, 1);       //Voice on
and
    NoteOff(Ch, Pitch);
becomes
    vid = Pitch;
    VCTRL(vid, VOICE, 0)        //Voice off
    RELEASE_VOICE(vid)          //We won't talk about 'vid' no more.
Note that RELEASE_VOICE does *not* mean the actual voice dies=20
instantly. That's entirely up to the receiver. What it *does* mean is=20
that you give up the voice ID, so you can't control the voice from=20
now on. If you ALLOC_VOICE(that_same_vid), it will be assigned to a
new, independent voice.
Anyway, VELOCITY, PITCH (and any others you like) are just plain
controls. They may be continuous, "voice on latched" and/or "voice off
latched", so you can emulate the same (rather restrictive) logic that
applies to MIDI NoteOn/Off parameters. That is, we use controls that
are latched at state transitions instead of explicit arguments to
some VOICE ON/OFF event.
[...]
> > Either way, the real heavy stuff is always the DSP code. In cases
> > where it isn't, the whole plugin is usually so simple that it
> > doesn't really matter what kind of control interface you're
> > using; the DSP code fits right into the basic "standard model"
> > anyway. In such cases, an API like XAP or Audiality's internal
> > "plugin API" could provide some macros that make it all insanely
> > simple - maybe simpler than LADSPA.
>
> So you are saying that in pratical terms (processing performance)
> it does not matter whether you use events or streamed control
> values ?
I'm saying it doesn't matter much in terms of code complexity. (Which=20
is rather important - someone has to code all those kewl plugins! :-)
It *does* matter a great deal in terms of performance, though, which=20
is why we have to go with the slightly more (and sometimes very much=20
more) complicated solutions.
> I still prefer the event based system because it allows you to deal
> more easily with real time events (with sub audio block precision)
> and if you need it you can run at full sample rate.
Sure, some things actually get *simpler* with timestamped events, and=20
other things don't get all that complicated. After all, you can just=20
do the "VST style quick hack" and process all events first and then=20
process the audio for the full block, if you can't be arsed to do it=20
properly. (They actually do that in some examples that come with the=20
VST SDK, at least in the older versions... *heh*)
> > Anyway, need to get back to work now... :-)
>
> yeah, we unleashed those km-long mails again ... just like in the
> old times ;-) can you say infinite recursion ;-)))
*hehe* :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|
|
From: Florian B. <fl...@ar...> - 2003-08-09 21:36:39
|
Hi everybody, On Tue, 5 Aug 2003 01:40:47, Benno Senoner <be...@ga...> wrote: > > I started to write the GUI for the module editor for Linuxsampler. > See this screenshot for now (code will follow later). > http://www.linuxdj.com/benno/lsgui4.gif > [...] > More details about the GUI within the next days ... meanwhile if you > have suggestions to make (GUI wise) speak out loudly ;-) Here are some thoughts about the GUI screenshot. First I should tell some things about me and my motivation: I´m a musician, not a programmer, and I want to use Free Software for my music. I was pointed to this project while looking for a Linux Sampler at the LAU list. I need a (virtual) sampler which is almost "invisible" in the process of creating music, e.g. which is easy and straightforward to use, reliable and fast. Well, Benno, thanks for your work, but in my mind this starts to look like just _another_ virtual modular thing. There is already some stuff like this walking around in the Linux Audio world, as far as I know. I used to hope (and I still do :-) ) that Linuxsampler would more look like that: http://www.creamware.de/de/products/Volkszampler/defaultpics.asp http://www.creamware.de/de/products/Volkszampler/default.asp (this one in German, sorry) I think in the "where should it go"-discussion you just missed the point which role Linuxsampler should play in the creative action. A modular system where the artist has to build the instruments/patches "from the bone" naturally consumes more time and power but leaves room for very individual sounds and experimenting. A sampler more dedicated to a "load&play"-philosophy lacks this kind of freedom, but the artist can start composing immediately (which I prefer). Well. It´s up to you programmers, and this is just my humble opinion :-). Maybe both ways will be possible in the final GUI. Thanks for your work so far and keep it up! Regards, Florian Berger, Leipzig, Germany |
|
From: Benno S. <be...@ga...> - 2003-08-09 20:07:37
|
Steve Harris <S.W...@ec...> writes:
> On Sat, Aug 09, 2003 at 07:01:40 +0200, Benno Senoner wrote:
> > basically instead of doing (like LADSPA would do)
> >
> > am_sample=amplitude_modulator(input_sample,256samples)
> > output_sample=LP_filter(am_sample, 256samples)
> >
> > (output_sample is a pointer to 256 samples) the current audio block.
> >
> > it would be faster to do
> >
> > for(i=0;i<256;i++) {
> > output_sample[i]=do_filter(do_amplitude_modulation(input_sample[i]));
> > }
>
> That's not what I did; I did something like:
>
> for(i=0;i<256;i++) {
> do_amplitude_modulation(input_sample+i, &tmp);
> do_filter(&tmp, output_sample[i]);
> }
>
> Otherwise you're limited in the number of outputs you can have, and this way
> makes it more obvious what the execution order is (imagine branches in the
> graph) and it's no slower.
You mean that your method is almost as fast as my equation since the compiler
optimizes away these (needless) temporary variables ?
And yes, you are right: with your method you can tap any output you like
(and probably often you need to do so).
>
> > The question is indeed if you do LADSPA style processing
> > (applying all DSP processing in sequence) the compiler uses SIMD
> > and optimization of the processing loops and is therefore faster
>
> I doubt it, I've not come across a compiler that can generate halfway decent
> SIMD instructions - including the intel one.
Yes
>
> NB you can still use SIMD instructions in blockless code, you just do it
> across the input channels, rather than along them.
But since the channels are dynamic and often each new voice points to
different sample data (with different pitches etc) I see it hard to
get any speed up from SIMD.
>
> > Ah you are using the concept of duration.
> > Isn't it a bit redundant? Instead of using duration one can use
> > duration-less RAMP events and just generate an event that sets
> > the delta ramp value to zero when you want the ramp to stop.
>
> If you just specify the delta 'cos then the receiver is limited to linear
> segments, as it can't second-guess the duration. This won't work well e.g. for
> envelopes. Some of the commercial guys (e.g. Cakewalk) are now evaluating
> their envelope curves every sample anyway, so if we go for ramp events for
> envelope curves we will be behind the state of the art. FWIW Adobe use
> 1/4 audio rate controls, as it's convenient for SIMD processing (this came
> out in some GMPI discussions).
But since you can set the ramping value at each sample you can
perform any kind of modulation not only at 1/4 sample rate but
at samplerate too. (but it will be a bit more expensive than
the pure streamed approach, but OTOH such an accuracy is seldom needed,
see exponential curves where you can approximate a large part of the curve
using only few linear segments while the first part needs a more dense
event stream, but on the average you win over the streamed approach).
PS: I attached a small C file to test the performance of code using
temporary variables and code that inlines all in one equation.
Steve, you are right: the speed difference is almost nil.
(amplitude modulator -> amplitude modulator -> filter).
On my box 40k iterations of func1() take 18.1sec while the optimized case
(func2) takes 18.0 sec, a 0.5% speed difference.
Of course we cannot generalize but I guess that the speed difference
between a hand-optimized function and the code generated using
tmp variables is in the low single digit percentage.
Benno.
#include <stdio.h>
#include <stdlib.h>
#define BLOCKSIZE 256
int func1();
int func2();
static double oldfiltervalue;
static double sample[BLOCKSIZE];
static double output[BLOCKSIZE];
int main(void) {
int i;
int u;
int res;
//init the sample array
for(i=0;i<BLOCKSIZE;i++) {
sample[i]=i;
}
oldfiltervalue=0.0;
for(u=0;u<40000; u++) {
res=func1();
// res=func2();
}
return 0;
}
int func1(void) {
int i;
double tmp;
double tmp2;
double tmp3;
double v1,v2;
v1=1.0;
v2=1.1;
for(i=0;i < BLOCKSIZE; i++) {
tmp=sample[i]*v1;
v1+=0.00001;
tmp2=tmp*v2;
v2+=0.00001;
tmp3=(tmp2 + oldfiltervalue)/2.0;
oldfiltervalue=tmp2;
output[i]=tmp3;
}
return 0;
}
int func2(void) {
int i;
double tmp;
double v1,v2;
v1=1.0;
v2=1.1;
for(i=0;i < BLOCKSIZE; i++) {
tmp=sample[i]*v1*v2;
v1+=0.00001;
v2+=0.00001;
output[i]=(tmp + oldfiltervalue)/2.0;
oldfiltervalue=tmp;
}
return 0;
}
-------------------------------------------------
This mail sent through http://www.gardena.net
|
|
From: Steve H. <S.W...@ec...> - 2003-08-09 19:18:56
|
I'm not thinking about a completely CV system (for once ;), I suspect that
the right approach is something like a CV system (with or without a
control rate, and blocked or blockless), but with discrete events added
in, so that note down is an event with some parameters.
A disk streamed block diagram might look something like:
note events -----> voice   ------- request events ---> disk
                   control                              stream
                   module  <------ audio data --------- module
                   //////  <------ audio data --------- /////
                     |  |
                    audio
                    data
                     |  |
                     V  V
                   envelope
                  generators
                     |  |
                     V  V
                   whatever
                     |  |
                     V  V
                   outputs
So everything is generic, apart from the voice control module, which
defines the functionality and loads/controls the other modules.
- Steve
|
|
From: Steve H. <S.W...@ec...> - 2003-08-09 19:02:04
|
On Sat, Aug 09, 2003 at 07:01:40 +0200, Benno Senoner wrote:
> basically instead of doing (like LADSPA would do)
>
> am_sample=amplitude_modulator(input_sample,256samples)
> output_sample=LP_filter(am_sample, 256samples)
>
> (output_sample is a pointer to 256 samples) the current audio block.
>
> it would be faster to do
>
> for(i=0;i<256;i++) {
> output_sample[i]=do_filter(do_amplitude_modulation(input_sample[i]));
> }
That's not what I did; I did something like:
for(i=0;i<256;i++) {
do_amplitude_modulation(input_sample+i, &tmp);
do_filter(&tmp, output_sample[i]);
}
Otherwise you're limited in the number of outputs you can have, and this way
makes it more obvious what the execution order is (imagine branches in the
graph) and it's no slower.
> The question is indeed whether, if you do LADSPA style processing
> (applying all DSP processing in sequence), the compiler uses SIMD
> and optimizes the processing loops and is therefore faster
I doubt it, I've not come across a compiler that can generate halfway decent
SIMD instructions - including the Intel one.
NB you can still use SIMD instructions in blockless code, you just do it
across the input channels, rather than along them.
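(As a small illustration of "SIMD across the channels" - just a sketch assuming SSE and four voices processed in lockstep; the buffers and names are made up:)

#include <xmmintrin.h>

/* Apply a per-voice gain to one sample frame of four voices at once.
   in, gain and out each hold one float per voice, 16-byte aligned. */
static void amp4(const float *in, const float *gain, float *out)
{
    __m128 s = _mm_load_ps(in);
    __m128 g = _mm_load_ps(gain);
    _mm_store_ps(out, _mm_mul_ps(s, g));
}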
> But OTOH the blockless processing has the advantage that
> things are not moved around much in the cache.
> The output value of the first module is directly available as the
> input value of the next module without needing to move it to
> a temporary buffer or variable.
Yes, the only thing we can do is benchmark it.
> Ah you are using the concept of duration.
> Isn't it a bit redundant ? Instead of using duration one can use
> duration-less RAMP events and just generate an event that sets
> the delta ramp value to zero when you want the ramp to stop.
If you just specify the delta then the receiver is limited to linear
segments, because it can't second-guess the duration. This won't work well
e.g. for envelopes. Some of the commercial guys (e.g. Cakewalk) are now
evaluating their envelope curves every sample anyway, so if we go for ramp
events for envelope curves we will be behind the state of the art. FWIW
Adobe use 1/4 audio rate controls, as it's convenient for SIMD processing
(this came out in some GMPI discussions).
- Steve
|
|
From: Steve H. <S.W...@ec...> - 2003-08-09 18:31:08
|
On Sat, Aug 09, 2003 at 05:29:05PM +0200, David Olofson wrote:
> However, keep in mind that what we design now will run on hardware
> that's at least twice as fast as what we have now. It's likely that
> the MIPS/memory bandwidth ratio will be worse, but you never know...

I'm always nervous about this suggestion, I will most likely still be using
the machine I'm using now, so Moore's law won't have affected me personally.
We should support machines that are current now and not try to second-guess.

> What I'm saying is basically that benchmarking for future hardware is
> pretty much gambling, and results on current hardware may not give us
> the right answer.

But at least we know they are accurate for /some/ hardware.

> Audio rate controls *are* the real answer (except for some special
> cases, perhaps; audio rate text messages, anyone? ;-), but it's still
> a bit on the expensive side on current hardware. (Filters have to
> recalculate coefficients, or at least check the input, every sample
> frame, for example.) In modular synths, it probably is the right

Not really, they /can/ recalculate every sample, they don't have to.

- Steve
|
|
From: Benno S. <be...@ga...> - 2003-08-09 17:01:34
|
David Olofson <da...@ol...> writes:
>
> I've been on the list for a good while, but only in Half Asleep Lurk
> Mode. :-)
Nice to have you onboard, I did not know that you were on the list ;-)
> >
> > The other philosopy is not to adopt the "everything is a CV", use
> > typed ports and use time stamped scheduled events.
>
> Another way of thinking about it is to view streams of control ramp
> events as "structured audio data". It allows for various
> optimizations for low bandwidth data, but still doesn't have any
> absolute bandwidth limit apart from the audio sample rate. (And not
> even that, if you use a different unit for event timestamps - but
> that's probably not quite as easy as it may sound.)
So at this time, for linuxsampler, would you advocate an event
based approach or a continuous control stream (that runs at a fraction
of the sample rate)?
As far as I understood from reading your mail, it seems that you
agree that on current machines (see filters that need to recalculate
coefficients etc) it makes sense to use an event based system.
>
>
> > Those events are scheduled in the future but we queue up only
> > events that belong to the next to be rendered audio block (eg 64
> > samples). That way real time manipulation is still possible since
> > the real time events belong to the next audio block too (the
> > current one is already in the dma buffer of the sound card and
> > cannot be manipulated anymore).
>
> Or: Buffering/timing is exactly the same for events as for audio
> streams. There is no reason to treat them differently, unless you want
> a high level interface to the sequencer - and that's a different
> thing, IMHO.
Yes, the timebase is the sampling rate, which keeps audio, MIDI and
other general events nicely in sync.
>
> Exactly.
>
> Now, it is apparently impossible to implement that on some platforms
> (poor RT scheduling), but some people using broken OSes is no
> argument for broken API designs, IMNSHO...
OK, but even if there is a jitter of a few samples it is much better than
having an event jitter equivalent to the audio fragment size.
It will be impossible for the user to notice that the MIDI pitchbend
event was scheduled a few usecs too late compared to the ideal time.
Plus, as said, it will work with relatively large audio fragment sizes too.
>
> > With the streamed approach we would need some scheduling of MIDI
> > events too thus we would probably need to create a module that
> > waits N samples (control samples) and then emits the event.
> > So basically we end up in a timestamped event scenario too.
>
> Or the usual approach; MIDI is processed once per block and quantized
> to block boundaries...
I don't like that; it might work OK for very small fragment sizes, e.g.
32-64 samples per block, but if you go up to, let's say, 512-1024, the
timing of MIDI events will suck badly.
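For what it's worth, the timestamping trick mentioned earlier could look roughly like this (a sketch; frames_done() and queue_for_next_block() are assumed helpers, not existing code):

typedef struct MidiEvent {
    unsigned      timestamp;   /* sample offset into the next audio block */
    unsigned char data[3];     /* raw MIDI bytes                          */
} MidiEvent;

/* Called from the high priority (SCHED_FIFO) MIDI thread. frames_done()
   returns how many frames of the current block have been rendered so far;
   queue_for_next_block() pushes into a lock-free FIFO read by the audio
   thread at the start of the next block. */
void on_midi_input(unsigned char status, unsigned char d1, unsigned char d2)
{
    MidiEvent ev;
    ev.timestamp = frames_done();
    ev.data[0] = status;
    ev.data[1] = d1;
    ev.data[2] = d2;
    queue_for_next_block(&ev);
}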
> > Not sure we gain in performance compared to the block based
> > processing where we apply all operations sequentially on a buffer
> > (filters, envelopes etc) like they were ladspa modules but without
> > calling external modules but instead "pasting" their sources in
> > sequence without function calls etc.
>
> I suspect it is *always* a performance loss, except in a few special
> cases and/or with very small nets and a good optimizing "compiler".
So it seems that the best compromise is to process audio in blocks
but to perform all DSP operations relative to an output sample
(relative to a voice) in one rush.
Assume a sample that is processed
first by an amplitude modulator and then by an LP filter.
basically instead of doing (like LADSPA would do)
am_sample=amplitude_modulator(input_sample,256samples)
output_sample=LP_filter(am_sample, 256samples)
(output_sample is a pointer to 256 samples, the current audio block).
it would be faster to do
for(i=0;i<256;i++) {
output_sample[i]=do_filter(do_amplitude_modulation(input_sample[i]));
}
right ?
While for pure audio processing the approach is quite straightforward,
when we take envelope generators etc. into account we must inject
code that checks the current timestamp (an if() statement) and then
modifies the right values according to the events pending in the queue
(or autogenerated events).
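A minimal sketch of what that injected check could look like for one voice (Voice, Event, next_event(), apply_event() and the inlined DSP helpers are assumptions of this sketch, not real code):

void render_voice_block(Voice *v, float *out, unsigned nframes)
{
    const Event *ev = next_event(v);          /* NULL if the queue is empty    */
    unsigned i;
    for (i = 0; i < nframes; i++) {
        while (ev && ev->timestamp == i) {    /* the injected check            */
            apply_event(v, ev);               /* e.g. set/ramp rate, volume    */
            ev = next_event(v);
        }
        /* the "pasted together" per-sample DSP chain */
        out[i] = do_filter(v, do_amplitude_modulation(v, next_sample(v)));
        v->volume += v->volume_delta;         /* linear ramping between events */
    }
}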
For example I was envisioning a simple RAM sampler module that knows
nothing about MIDI etc. but is flexible enough to offer
the full functionality that hardcoded sampler designs have.
E.g. the RAM sampler module has the following inputs (a rough struct
sketch follows the list):
start trigger: starts the sample
release trigger: sample goes in release mode
stop trigger: sample output stops completely
rate: a float where 1.0 means output the sample at the original rate,
2.0 shifts one octave up, etc.
volume: output volume
rate and volume would receive RAMP events so that you can modulate
these two values in arbitrary ways.
sample_ptr_and_len: pointer to a sample stored in RAM with associated
len
attack looping: a list of looping points:
(position_to_jump, loop_len, number_of_repeats)
release looping: same structure as above but it is used when
the sampler module goes into release phase.
Basically, when you release a key, if the sample has loops, you switch
to the release_looping list after the current loop reaches its end
position.
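Roughly, as the struct sketch promised above (names and types are guesses for illustration, not a fixed design):

typedef struct LoopPoint {
    unsigned position_to_jump;
    unsigned loop_len;
    int      number_of_repeats;   /* e.g. -1 could mean "until release"      */
} LoopPoint;

typedef struct RamSamplerInputs {
    /* trigger ports, driven by events from e.g. the MIDI mapper             */
    int        start_trigger;
    int        release_trigger;
    int        stop_trigger;
    /* continuously modulated ports, driven by RAMP events                   */
    float      rate;              /* 1.0 = original rate, 2.0 = +1 octave    */
    float      volume;
    /* "structured" ports that do not map naturally onto single float events */
    float     *sample_ptr;
    unsigned   sample_len;
    LoopPoint *attack_looping;    /* used while the key is held              */
    unsigned   num_attack_loops;
    LoopPoint *release_looping;   /* used after the release trigger          */
    unsigned   num_release_loops;
} RamSamplerInputs;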
But as you see, this RAM sampler module does not fit well into a
single RAMP event.
OK, you could for example separate
sample_ptr_and_len into two variables, but it seems a bit inefficient
to me.
The same could be said of the looping structure.
You could make more input ports, e.g.
attack_looping_position
attack_looping_loop_len
etc.
but it seems a waste of time to me, since you end up managing
multiple lists of events even when they are mutually linked
(a loop_position does not make sense without the loop_len etc.).
So I'd be interested how the RAM sampler module described above
could be made to work with only the RAMP event.
BTW: you said RAMP with value = 0 means set a value.
But what exactly do you set to 0, the timestamp?
That would not be ideal since 0 is a legitimate value;
it would be better to use -1 or something like that.
OTOH this would require an additional if() statement
(to check whether it is a regular ramp or a set statement) and it could
possibly slow things down a bit.
My proposed ramping approach, which consists of
(value_to_be_set, delta),
does not require an if: if you simply want to set a value,
you set delta = 0.
But my approach has the disadvantage that if you mostly want to do ramping,
you always have to calculate value_to_be_set at each event, and this
can become non-trivial if you do not track the values within the modules.
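To make the two alternatives concrete, a minimal sketch (neither struct is taken from an existing API):

/* Alternative A: (value_to_be_set, delta), as proposed above.
   SET is just delta == 0, so the receiver needs no branch:                 */
typedef struct RampA { unsigned timestamp; float value; float delta; } RampA;
/*   on event:    current = ev.value;                                        */
/*   per sample:  current += ev.delta;                                       */

/* Alternative B: (target, duration), where duration == 0 means SET.
   The sender never needs to know the current value, but the receiver
   branches once per event and divides to get the slope:                     */
typedef struct RampB { unsigned timestamp; unsigned duration; float target; } RampB;
/*   if (ev.duration == 0) current = ev.target;                              */
/*   else                  delta   = (ev.target - current) / ev.duration;    */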
Comments on that issue ?
>
> > I remember someone long time ago talked about better cache locality
> > of this approach (was it you David ? ;-) ) but after discussing
> > about blockless vs block based on irc with steve and simon I'm now
> > confused.
>
> I don't think there is a simple answer. Both approaches have their
> advantages in some situations, even WRT performance, although I think
> for the stuff most people do on DAWs these days, blockless processing
> will be significantly slower.
Blockless as referred to above by me (blockless = one single equation,
but processed in blocks), or blockless using another kind of approach?
(Elaborate please.)
>
> That said, something that generates C code that's passed to a good
> optimizing compiler might shift things around a bit, especially now
> that there are compilers that automatically generate SIMD code and
> stuff like that.
The question is indeed whether, if you do LADSPA style processing
(applying all DSP processing in sequence), the compiler uses SIMD
and optimizes the processing loops and is therefore faster
than calculating the result as one single big equation at a time, which
could possibly not take advantage of SIMD etc.
But OTOH the blockless processing has the advantage that
things are not moved around much in the cache.
The output value of the first module is directly available as the
input value of the next module without needing to move it to
a temporary buffer or variable.
>
> The day you can compile a DSP net into native code in a fraction of a
> second, I think traditional plugin APIs will soon be obsolete, at
> least in the Free/Open Source world. (Byte code + JIT will probably
> do the trick for the closed source people, though.)
Linuxsampler is an attempt to prove that this works, but as said I prefer
very careful design in advance rather than quick'n'dirty results.
Even if some people like to joke about linuxsampler remaining
vaporware forever, I have to admit that after long discussions on the
mailing list we learned quite a lot of stuff that will be very handy
for making a powerful engine.
>
> > As said I dislike "everything is a CV" a bit because you cannot do
> > what I proposed:
> > eg. you have a MIDI keymap modules that takes real time midi events
> > (note on / off) and spits out events that drive the RAM sampler
> > module (that knows nothing about MIDI). In an event based system
> > you can send Pointer to Sample data in RAM, length of sample,
> > looping points, envelope curves (organized as sequences of linear
> > segments) etc.
>
> I disagree to some extent - but this is a very complex subject. Have
> you followed the XAP discussions? I think we pretty much concluded
> that you can get away with "everything is a control", only one event
> type (RAMP, where duration == 0 means SET) and a few data types.
> That's what I'm using internally in Audiality, and I'm not seeing any
> problems with it.
Ah you are using the concept of duration.
Isn't it a bit redundant ? Instead of using duration one can use
duration-less RAMP events and just generate an event that sets
the delta ramp value to zero when you want the ramp to stop.
>
>
> > Basically in my model you cannot connect everything with everything
> > (Steve says it it bad but I don't think so) but you can connect
> > everything with "everything that makes sense to connect to".
>
> Well, you *can* convert back and forth, but it ain't free... You can't
> have everything.
OK, but converters will be the exception and not the rule:
for example the MIDI mapper module
(see the GUI screenshot message here:
http://sourceforge.net/mailarchive/forum.php?thread_id=2841483&forum_id=12792 )
acts as a proxy between the MIDI input and the RAM sampler module.
So it makes the right port types available.
No converters are needed. It's all done internally in the best possible way
without needless float to int conversions, interpreting pointers as floats
and other "everything is a CV" oddities ;-)
>
> Anyway, I see timestamped events mostly as a performance hack. More
> accurate than control rate streams (lower rate than audio rate), less
> expensive than audio rate controls in normal cases, but still capable
> of carrying audio rate data when necessary.
Yes, they are a bit of a performance hack, but on current hardware, as you
pointed out, audio rate controls would be a waste of resources, and since
every musician's goal is to get the maximum number of voices / effects /
tracks etc. out of the hardware, I think it pays off quite a lot to use an
event based system.
> Yes... In XAP, we tried to forget about the "argument bundling" of
> MIDI, and just have plain controls. We came up with a nice and clean
> design that can do everything that MIDI can, and then some, still
> without any multiple argument events. (Well, events *have* multiple
> arguments, but only one value argument - the others are the timestamp
> and various addressing info.)
Hmm, I did not follow the XAP discussions (I was overloaded during that
time, as usual ;-) ), but can you briefly explain how this XAP model would
fit the model where the MIDI IN module talks to the MIDI mapper, which
in turn talks to the RAM sampler?
>
> In my limited hands-on experience, the event system actually makes
> some things *simpler* for plugins. They just do what they're told
> when they're told, and there's no need to check when to do things or
> scan control input streams: Just process audio as usual until you hit
> the next event. Things like envelope generators, that have to
> generate their own timing internally, look pretty much the same
> whether they deal with events or audio rate streams. The only major
> difference is that the rendering of the output is done by whatever
> receives the generated events, rather than by the EG itself.
>
> Either way, the real heavy stuff is always the DSP code. In cases
> where it isn't, the whole plugin is usually so simple that it doesn't
> really matter what kind of control interface you're using; the DSP
> code fits right into the basic "standard model" anyway. In such
> cases, an API like XAP or Audiality's internal "plugin API" could
> provide some macros that make it all insanely simple - maybe simpler
> than LADSPA.
So you are saying that in practical terms (processing performance) it does not
matter whether you use events or streamed control values?
I still prefer the event based system because it allows you to deal more
easily with real time events (with sub audio block precision), and if you
need it you can run at full sample rate.
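The "process audio until the next event" pattern described above could look roughly like this per block (a sketch; Plugin, Event, peek_event(), pop_event(), apply_event() and run_dsp() are assumed names):

void process_block(Plugin *p, unsigned nframes)
{
    unsigned pos = 0;
    while (pos < nframes) {
        const Event *ev = peek_event(p);   /* next queued event, or NULL      */
        unsigned until  = nframes;         /* default: run to end of block    */
        if (ev && ev->timestamp < nframes)
            until = ev->timestamp;
        run_dsp(p, pos, until - pos);      /* plain DSP, no per-sample checks */
        if (until < nframes) {             /* an event starts exactly here    */
            apply_event(p, ev);            /* e.g. update a ramp target       */
            pop_event(p);
        }
        pos = until;
    }
}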
>
>
> Anyway, need to get back to work now... :-)
yeah, we unleashed those km-long mails again ... just like in the old times ;-)
can you say infinite recursion ;-)))
cheers,
Benno.
|
|
From: David O. <da...@ol...> - 2003-08-09 15:29:22
|
On Saturday 09 August 2003 16.14, Benno Senoner wrote:
> [ CCing David Olofson, David if you can join the LS list, we need
> hints for an optimized design ;-) ]

I've been on the list for a good while, but only in Half Asleep Lurk
Mode. :-)

> Hi,
> to continue the blockless vs block based, cv based etc audio
> rendering discussion:
>
> Steve H. agrees with me that block mode eases the processing of
> events that can drive the modules.
>
> Basically one philosophy is to adopt the everything is a CV (control
> value) where control ports are treated like they were audio streams
> and you run these streams at a fraction of the sampling rate
> (usually up to samplerate/4).
>
> The other philosophy is not to adopt the "everything is a CV", use
> typed ports and use time stamped scheduled events.

Another way of thinking about it is to view streams of control ramp
events as "structured audio data". It allows for various
optimizations for low bandwidth data, but still doesn't have any
absolute bandwidth limit apart from the audio sample rate. (And not
even that, if you use a different unit for event timestamps - but
that's probably not quite as easy as it may sound.)

> Those events are scheduled in the future but we queue up only
> events that belong to the next to be rendered audio block (eg 64
> samples). That way real time manipulation is still possible since
> the real time events belong to the next audio block too (the
> current one is already in the dma buffer of the sound card and
> cannot be manipulated anymore).

Or: Buffering/timing is exactly the same for events as for audio
streams. There is no reason to treat them differently, unless you want
a high level interface to the sequencer - and that's a different
thing, IMHO.

> That way with very little effort and overhead you achieve both
> sample accurate event scheduling and good scheduling of real time
> events too. Assume a midi input event occurs during processing of the
> current block. We read the midi event using a higher priority
> SCHED_FIFO thread and read out the current sample pointer when the
> event occurred. We can then simply insert a scheduled MIDI event
> during the next audio block that occurs exactly N samples (64 in
> our examples) after the event was registered.
> That way we get close to zero jitter of real time events even when
> we use bigger audio fragment sizes.

Exactly.

Now, it is apparently impossible to implement that on some platforms
(poor RT scheduling), but some people using broken OSes is no
argument for broken API designs, IMNSHO...

And of course, it's completely optional to implement it in the host.
In Audiality, I'm using macros/inlines for sending events, and the
only difference is a timestamp argument - and you can just set that
to 0 if you don't care. (0 means "start of current block", as there's
a global "timer" variable used by those macros/inlines.)

> With the streamed approach we would need some scheduling of MIDI
> events too thus we would probably need to create a module that
> waits N samples (control samples) and then emits the event.
> So basically we end up in a timestamped event scenario too.

Or the usual approach; MIDI is processed once per block and quantized
to block boundaries...

> Now if we assume we do all blockless processing eg the
> dsp compiler generates one giant equation for each dsp network
> (instrument). output = func(input1,input2,....)
>
> Not sure we gain in performance compared to the block based
> processing where we apply all operations sequentially on a buffer
> (filters, envelopes etc) like they were ladspa modules but without
> calling external modules but instead "pasting" their sources in
> sequence without function calls etc.

I suspect it is *always* a performance loss, except in a few special
cases and/or with very small nets and a good optimizing "compiler".

Some kind of hybrid approach (ie "build your own plugins") would be
very interesting, as it could offer the best of both worlds. I think
that's pretty much beyond the scope of "high level" plugin APIs (such
as VST, DX, XAP, GMPI and even LADSPA).

> I remember someone long time ago talked about better cache locality
> of this approach (was it you David ? ;-) ) but after discussing
> about blockless vs block based on irc with steve and simon I'm now
> confused.

I don't think there is a simple answer. Both approaches have their
advantages in some situations, even WRT performance, although I think
for the stuff most people do on DAWs these days, blockless processing
will be significantly slower.

That said, something that generates C code that's passed to a good
optimizing compiler might shift things around a bit, especially now
that there are compilers that automatically generate SIMD code and
stuff like that.

The day you can compile a DSP net into native code in a fraction of a
second, I think traditional plugin APIs will soon be obsolete, at
least in the Free/Open Source world. (Byte code + JIT will probably
do the trick for the closed source people, though.)

> I guess we should try both methods and benchmark them.

Yes.

However, keep in mind that what we design now will run on hardware
that's at least twice as fast as what we have now. It's likely that
the MIPS/memory bandwidth ratio will be worse, but you never know...

What I'm saying is basically that benchmarking for future hardware is
pretty much gambling, and results on current hardware may not give us
the right answer.

> As said I dislike "everything is a CV" a bit because you cannot do
> what I proposed:
> eg. you have a MIDI keymap module that takes real time midi events
> (note on / off) and spits out events that drive the RAM sampler
> module (that knows nothing about MIDI). In an event based system
> you can send Pointer to Sample data in RAM, length of sample,
> looping points, envelope curves (organized as sequences of linear
> segments) etc.

I disagree to some extent - but this is a very complex subject. Have
you followed the XAP discussions? I think we pretty much concluded
that you can get away with "everything is a control", only one event
type (RAMP, where duration == 0 means SET) and a few data types.
That's what I'm using internally in Audiality, and I'm not seeing any
problems with it.

> Basically in my model you cannot connect everything with everything
> (Steve says it is bad but I don't think so) but you can connect
> everything with "everything that makes sense to connect to".

Well, you *can* convert back and forth, but it ain't free... You can't
have everything.

Anyway, I see timestamped events mostly as a performance hack. More
accurate than control rate streams (lower rate than audio rate), less
expensive than audio rate controls in normal cases, but still capable
of carrying audio rate data when necessary.

Audio rate controls *are* the real answer (except for some special
cases, perhaps; audio rate text messages, anyone? ;-), but it's still
a bit on the expensive side on current hardware. (Filters have to
recalculate coefficients, or at least check the input, every sample
frame, for example.) In modular synths, it probably is the right
answer already, but I don't think it fits the bill well enough for
"normal" plugins, like the standard VST/DX/TDM/... stuff.

> Plus if you have special needs you can always implement your
> converter module (converting a midi velocity in a filter frequency
> etc). (but I think such a module will be part of the standard set
> anyway since we need midi pitch to filter frequency conversion too
> if we want filters that support frequency tracking).

Yes... In XAP, we tried to forget about the "argument bundling" of
MIDI, and just have plain controls. We came up with a nice and clean
design that can do everything that MIDI can, and then some, still
without any multiple argument events. (Well, events *have* multiple
arguments, but only one value argument - the others are the timestamp
and various addressing info.)

> As said I will come up with a running proof of concept, if we end
> up all dissatisfied with the event based model we can always switch
> to other models but I'm pretty confident that the system will be
> both performant and flexible (but it takes time to code).

In my limited hands-on experience, the event system actually makes
some things *simpler* for plugins. They just do what they're told
when they're told, and there's no need to check when to do things or
scan control input streams: Just process audio as usual until you hit
the next event. Things like envelope generators, that have to
generate their own timing internally, look pretty much the same
whether they deal with events or audio rate streams. The only major
difference is that the rendering of the output is done by whatever
receives the generated events, rather than by the EG itself.

Either way, the real heavy stuff is always the DSP code. In cases
where it isn't, the whole plugin is usually so simple that it doesn't
really matter what kind of control interface you're using; the DSP
code fits right into the basic "standard model" anyway. In such
cases, an API like XAP or Audiality's internal "plugin API" could
provide some macros that make it all insanely simple - maybe simpler
than LADSPA.

Anyway, need to get back to work now... :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
|
|
From: Benno S. <be...@ga...> - 2003-08-09 14:14:51
|
[ CCing David Olofson, David if you can join the LS list, we need hints
for an optimized design ;-) ]

Hi,
to continue the blockless vs block based, cv based etc audio rendering
discussion:

Steve H. agrees with me that block mode eases the processing of events
that can drive the modules.

Basically one philosophy is to adopt the everything is a CV (control
value) approach, where control ports are treated like they were audio
streams and you run these streams at a fraction of the sampling rate
(usually up to samplerate/4).

The other philosophy is not to adopt the "everything is a CV", use typed
ports and use time stamped scheduled events. Those events are scheduled
in the future, but we queue up only events that belong to the next to be
rendered audio block (eg 64 samples). That way real time manipulation is
still possible, since the real time events belong to the next audio block
too (the current one is already in the dma buffer of the sound card and
cannot be manipulated anymore).

That way with very little effort and overhead you achieve both sample
accurate event scheduling and good scheduling of real time events too.
Assume a midi input event occurs during processing of the current block.
We read the midi event using a higher priority SCHED_FIFO thread and
read out the current sample pointer when the event occurred. We can then
simply insert a scheduled MIDI event during the next audio block that
occurs exactly N samples (64 in our examples) after the event was
registered. That way we get close to zero jitter of real time events
even when we use bigger audio fragment sizes.

With the streamed approach we would need some scheduling of MIDI events
too, thus we would probably need to create a module that waits N samples
(control samples) and then emits the event. So basically we end up in a
timestamped event scenario too.

Now assume we do all blockless processing, eg the dsp compiler generates
one giant equation for each dsp network (instrument):
output = func(input1,input2,....)
I am not sure we gain in performance compared to the block based
processing where we apply all operations sequentially on a buffer
(filters, envelopes etc) like they were ladspa modules, but without
calling external modules and instead "pasting" their sources in sequence
without function calls etc.

I remember someone long time ago talked about better cache locality of
this approach (was it you David ? ;-) ) but after discussing blockless
vs block based on irc with steve and simon I'm now confused. I guess we
should try both methods and benchmark them.

As said I dislike "everything is a CV" a bit because you cannot do what
I proposed: eg. you have a MIDI keymap module that takes real time midi
events (note on / off) and spits out events that drive the RAM sampler
module (that knows nothing about MIDI). In an event based system you can
send a pointer to sample data in RAM, length of sample, looping points,
envelope curves (organized as sequences of linear segments) etc.

Basically in my model you cannot connect everything with everything
(Steve says it is bad but I don't think so) but you can connect
everything with "everything that makes sense to connect to". Plus if you
have special needs you can always implement your converter module
(converting a midi velocity into a filter frequency etc). (But I think
such a module will be part of the standard set anyway, since we need
midi pitch to filter frequency conversion too if we want filters that
support frequency tracking.)

As said I will come up with a running proof of concept; if we end up all
dissatisfied with the event based model we can always switch to other
models, but I'm pretty confident that the system will be both performant
and flexible (but it takes time to code).

thoughts ?

Benno
http://linuxsampler.sourceforge.net
|
|
From: Simon J. <sje...@bl...> - 2003-08-07 21:34:02
|
ian esten wrote:
> the downcompiling idea is definitely the way to go, but i think it would
> be much better if it was one of the things that was left until later to
> develop. i think people would much rather have a working sampler that
> used more cpu than have to wait until the downcompiler was ready to have
> anything they could use at all.

IMO if linuxsampler is going to downcompile at all then it can't just be
bolted on for version 2.0. The feature needs to be present, in some form,
in something more like version 0.2.

> also, the non-downcompiled network is
> going to be necessary for synthesis network design, so it won't be
> wasted effort.

It's true that some sort of design-time engine will be required for use
when interactively designing a voice. *But the downcompiler should be
capable of generating that design-time engine automatically*:

Typing:
#downcompiler --generate-design-time-engine
would be considerably less effort than coding one by hand.

And how much extra effort would it be to make the downcompiler capable of
this feat? Almost none! (If you consider how very, very, very close you
could get to the objective simply by compiling a voice which consisted of
loads of modules, all connected to a patchbay module...)

Simon Jenkins
(Bristol, UK)
|